

Color plays a big role in the indie thriller Rust Creek

In the edge-of-your-seat thriller Rust Creek, confident college student Sawyer (Hermione Corfield) loses her way while driving through very rural Appalachia and quickly finds herself in a life-or-death struggle with some very dangerous men. The modestly budgeted feature from Lunacy Productions — a company that encourages female filmmakers in top roles — packs a lot of power with virtually no pyrotechnics, relying on well-thought-out filmmaking techniques, including a carefully planned and executed approach to the use of color throughout the film.

Director Jen McGowan and DP Michelle Lawler

Director Jen McGowan, cinematographer Michelle Lawler and colorist Jill Bogdanowicz of Company 3 collaborated to help express Sawyer’s character arc through the use of color. For McGowan, successful filmmaking requires thorough prep. “That’s where we work out, ‘What are we trying to say and how do we illustrate that visually?’” she explains. “Film is such a visual medium,” she adds, “but it’s very different from something like painting because of the element of time. Change over time is how we communicate story, emotion and theme as filmmakers.”

McGowan and Lawler developed the idea that Sawyer is lost, confused and overwhelmed as her dire situation becomes clear. Lawler shot most of Rust Creek handholding an ARRI Alexa Mini (with Cooke S4s) following Sawyer as she makes her way through the late autumn forest. “We wanted her to become part of the environment,” Lawler says. “We shot in winter and everything is dead, so there was a lot of brown and orange everywhere with zero color separation.”

Production designer Candi Guterres pushed that look further, rather than fighting it, with choices about costumes and some of the interiors.

“They had given a great deal of thought to how color affects the story,” recalls colorist Bogdanowicz, who sat with both women during the grading sessions (using Blackmagic’s DaVinci Resolve) at Company 3 in Santa Monica. “I loved the way color was so much a part of the process, even subtly, of the story arc. We did a lot in the color sessions to develop this concept where Sawyer almost blends into the environment at first and then, as the plot develops and she finds inner strength, we used tonality and color to help make her stand out more in the frame.”

Lawler explains that the majority of the film was shot on private property deep in the Kentucky woods, without the use of any artificial light. “I prefer natural light where possible,” she says. “I’d add some contrast to faces with some negative fill and maybe use little reflectors to grab a rake of sunlight on a rock, but that was it. We had to hike to the locations and we couldn’t carry big lights and generators anyway. And I think any light I might have run off batteries would have felt fake. We only had sun about three days of the 22-day shoot, so generally I made use of the big ‘silk’ in the sky and we positioned actors in ways that made the best use of the natural light.”

In fact, the weather was beyond bad; it was punishing. “It would go from rain to snow to tornado conditions,” McGowan recalls. “It dropped to seven degrees and the camera batteries stopped working.”

“The weather issues can’t be overstated,” Lawler adds, describing conditions on the property they used for much of the exterior location. “Our base camp was in a giant field. The ground would be frozen in the morning and by afternoon there would be four feet of mud. We dug trenches to keep craft services from flooding.”

The budget obviously didn’t provide for waiting around for the elements to change, David Lean-style. “Michelle and I were always mindful when shooting that we would need to be flexible when we got to the color grading in order to tie the look together,” McGowan explains. “I hate the term ‘fix it in post.’ It wasn’t about fixing something, it was about using post to execute what was intended.”

Jill Bogdanowicz

“We were able to work with my color grading toolset to fine tune everything shot by shot,” says Bogdanowicz. “It was lovely working with the two of them. They were very collaborative but were very clear on what they wanted.”

Bogdanowicz also adapted a film emulation LUT, based on the characteristics of a Fujifilm print stock, and added a subtle hint of digital grain, via a Boris FX Sapphire plug-in, to give the imagery a unifying look and filmic feel. At the very start of the process, the colorist recalls, “I showed Jen and Michelle a number of ‘recipes’ for looks and they fell in love with this one. It’s somewhat subtle and elegant, and it made ‘electric’ colors not feel so electric, but it has a film-style curve with strong contrast in the mids and shadows you can still see into.”
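
Her exact LUT recipe stays with Company 3, but the general shape she describes, a print-style contrast curve plus a restrained layer of grain, is easy to sketch. The following Python/NumPy fragment is purely illustrative; the curve parameters and grain strength are invented for the example, not taken from the film:

```python
import numpy as np

def film_style_curve(x, contrast=6.0, pivot=0.5):
    """Illustrative S-curve: stronger contrast through the mids,
    normalized so 0 maps to 0 and 1 maps to 1 (shadows stay readable)."""
    y = 1.0 / (1.0 + np.exp(-contrast * (x - pivot)))
    y0 = 1.0 / (1.0 + np.exp(contrast * pivot))            # curve value at x = 0
    y1 = 1.0 / (1.0 + np.exp(-contrast * (1.0 - pivot)))   # curve value at x = 1
    return (y - y0) / (y1 - y0)

def add_grain(img, strength=0.02, seed=0):
    """Overlay subtle Gaussian noise to mimic a hint of film grain."""
    noise = np.random.default_rng(seed).normal(0.0, strength, img.shape)
    return np.clip(img + noise, 0.0, 1.0)

# frame: float RGB array normalized to 0-1
# graded = add_grain(film_style_curve(frame))
```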

McGowan says she was quite pleased with the work that came out of the color theater. “Color is not one of the things audiences usually pick up on, but a lot of people do when they see Rust Creek. It’s not highly stylized, and it certainly isn’t a distracting element, but I’ve found a lot of people have picked up on what we were doing with color and I think it definitely helped make the story that much stronger.”

Rust Creek is currently streaming on Amazon Prime and Google.

Helicopter Film Services intros Titan ultra-heavy lifting drone

Helicopter Film Services (HFS) has launched an ultra-heavy lift drone that pairs a large, capable airframe with the ARRI SRH-3 stabilized head. Known as the Titan, the drone enables easy integration of existing ARRI lens motors and other functionality directly with the ARRI Alexa 65 and LF cameras.

HFS developed the large drone in response to requests from some legendary DPs and VFX supervisors to enable filmmakers to fly large-format digital or 35mm film packages.

“We have trialed other heavy-lift machines, but all of them have been marginal in terms of performance when carrying the larger cameras and lenses that we’re asked to fly,” says Alan Perrin, chief UAV pilot at HFS. “What we needed, and what we’ve designed, is a system that will capably and safely operate with the large-format cameras and lenses that top productions demand.”

The Titan combines triple redundancy on flight controls, double redundancy on power supply and a ballistic recovery system into an aircraft that can deploy and operate easily on any production requiring substantial flight duration. The drone can easily fly a 35mm film package such as an ARRI 435 with a 400-foot magazine.

Here are some specs:
• Optimized for large-format digital and 35mm film cameras
• Max payload: up to 30 kilograms
• Max take-off mass: 80 kilograms
• Redundant flight control systems
• Ballistic recovery system (parachute)
• Class-leading stability
• Flight duration: up to 15 minutes (subject to payload weight and configuration)
• HD video downlink
• Gimbal: ARRI SRH-3 or Movi XL

Final payload-proving flights are taking place now, and the company is in the process of planning first use on major productions. HFS is also exploring the ability to fly a new 65mm film camera on the Titan.


SciTech Medallion Recipient: A conversation with Curtis Clark, ASC

By Barry Goch

The Academy of Motion Picture Arts and Sciences has awarded Curtis Clark, ASC, the John A. Bonner Medallion “in appreciation for outstanding service and dedication in upholding the high standards of the Academy.” The presentation took place in early February, and just prior to the event I spoke to Clark and asked him to reflect on the transition from film to digital cinema and his contributions to the industry.

Clark’s career as a cinematographer includes features, TV and commercials. He is also the chair of the ASC Motion Imaging Technology Council, which developed the ASC CDL.

Can you reflect on the changes you’ve seen over your career and how you see things moving ahead in the future?
Once upon a time, life was an awful lot simpler. I look back nostalgically on when it was all film-based, when the cinematographer’s purview included following up on the look of dailies and following through with any photographic testing that helped home in on the desired look. It had its photochemical limitations; its analog image structure was not as malleable or tonally expansive as the digital canvas we have now.

Do you agree that Kodak’s Cineon helped usher in this digital revolution — the hybrid film/digital imaging system where you would shoot on film, scan it and then digitally manipulate it before going back out to film via a film recorder?
That’s where the term digital intermediate came into being, and it was an eye-opener. I think at the time not everyone fully understood the ramifications of the impact it was making. Kodak created something very potent and led the way in terms of methodologies — how to integrate digital into what was then called a hybrid imaging system, combining digital and film together.

The DCI (Digital Cinema Initiatives) was created to establish digital projection standards. Without a standard, we’d potentially be creating chaos in terms of how to move forward. For the studios, distributors and exhibitors, it would be a nightmare. Can you talk about that?
In 2002, I had been asked to form a technology committee at the ASC to explore these issues: how the new emerging digital technologies were impacting the creative art form of cinematography and of filmmaking, and also to help influence the development of these technologies so they best serve the creative intent of the filmmaker.

DCI proposed that for digital projection to be considered ready for primetime, its image quality needed to be at least as good as, if not better than, a print from the original negative. I thought this was a great commitment that the studios were making. For them to say digital projection was going to be judged against a film print projection from the original camera negative of the exact same content was a fantastic decision. Here was a major promise of a solution that would give digital cinema image projection an advantage since most people saw release prints from a dupe negative.

Digital cinema had just reached the threshold of being able to do 2K digital cinema projection. At that time, 4K digital projection was emerging, but it was a bit premature in terms of settling on that as a standard. So you had digital cinema projection and the emergence of a sophisticated digital intermediate process that could create the image quality you wanted from the original negative, but projected on a digital projection.

In 2004, the Michael Mann film Collateral was shot with the Grass Valley Viper FilmStream and the Sony F900 and F950 cameras — the latest generation of digital motion picture cameras, basically video cameras that were becoming increasingly sophisticated, with better dynamic range and tonal contrast and multiple frame rates. But 24p was the key.
These cameras were used in the most innovative and interesting manner, because Mann combined film with digital, using digital for the low-light night scenes and film for the higher-light day exteriors and day interiors, where there was no problem with exposure.

Because of the challenge of shooting the night scenes, they wanted to shoot at such low light levels that film would potentially be a bit degraded in terms of grain and fog levels. If you had to overrate the negative, you needed to underexpose and overdevelop it, which was not desirable, whereas the digital cameras thrived in lower light levels. Also, you could shoot at a stop that gave you better depth of field. At the time, it was a very bold decision. But looking back on it historically, I think it was the inflection point that brought the digital motion picture camera into the limelight as a possible alternative to shooting on film.

That’s when they decided to do the Camera Assessment Series tests, which evaluated all the different digital cinema cameras available at the time?
Yeah, with the idea being that we’d never compare two digital cameras together; we’d always compare the digital camera against a film reference. We did that first Camera Assessment Series, which was the first step toward validating the digital motion picture camera as viable for shooting motion pictures compared with shooting on film. And we got part way there. A couple of the cameras were very impressive: the Sony F35, the Panavision Genesis, the Arri D21 and the Grass Valley Viper were pretty reasonable, but this was all still mainly within a 2K (1920×1080) realm. We had not yet broached the 4K area.

A couple of years later, we decided to do this again. It was called the Image Control Assessment Series, or ICAS, and it was shot at Warner Bros. We shot scenes in a café: a daylight interior and then a nighttime exterior. Both scenes had a dramatically large range of contrast and different colors in the image. It was the big milestone. The new Arri Alexa was used, along with the Sony F65 and the then-latest versions of the Red cameras.

So we had 4K projection and 4K cameras, and we introduced the use of ACES (Academy Color Encoding System) color management. We were really at the point where all the key components we needed were beginning to come together. This was the first instance where these digital workflow components were all used in a single significant test project, using film as our common benchmark reference: how do these cameras compare with film? That was the key thing. In other words, could we consider them to be ready for prime time? The answer was yes. We did that project in conjunction with the PGA and a company called Revelations Entertainment, which is Morgan Freeman’s company. Lori McCreary, his partner, was one of the producers who worked with us on this.

So filmmakers started using digital motion picture cameras instead of film. And with digital cinema having replaced film print as a distribution medium, these new-generation digital cameras started to replace film as an image capture medium. Then the question was whether we would have an end-to-end digital system that could become a viable alternative to shooting on film.

L to R: Josh Pines, Steve MacMillan, Curtis Clark and Dhanendra Patel.

Part of the reason you are getting this acknowledgement from the Academy is your dedication to the highest image quality and your respect for the artistry, from capture through delivery. Can you talk about your role in look management from on-set through delivery?
I think we all need to be on the same page; it’s one production team whose objective is maintaining the original creative intent of the filmmakers. That includes the director and cinematographer working with an editor and a production designer. Making a film is a collective team effort, but the overall vision is typically established by the director in collaboration with the cinematographer and a production designer. The cinematographer is tasked with capturing that with lighting, with camera composition, movement, lens choices — all those elements that are part of the process of creative filmmaking. The extremely sophisticated cameras you shoot with now, like the Sony F65 or Venice, the Panavision Millennium DXL, an Arri or the latest versions of the Red camera, all have the ability to reproduce high dynamic range, wide color gamut and high resolution. All that raw image data is inherently there, and the creative canvas has certainly been expanded.

So if you’re using these creative tools to tell your story, to advance your narrative, then you’re doing it with imagery defined by the potential of what these technologies are able to do. In the modern era, people aren’t seeing dailies at the same time, and they’re not seeing them together under controlled circumstances. The viewing process has become fragmented. When everyone had to come together to view projected dailies, there was a certain camaraderie, and constructive contributions made the filmmaking process more effective. If something wasn’t what it should be, everyone could see exactly what it was and make a correction if needed.

But now we have a more dispersed production team at every stage of the production process, from the initial image capture through to dailies, editorial, visual effects and final color grading. We have so many different people in disparate locations working on the production who don’t seem to be as unified, sometimes, as we were when it was all film-based analog shooting. But it’s now far easier and simpler to integrate visual effects into your workflow. As Cineon showed when it first emerged, you could do digital effects as opposed to optical effects, and that was a big deal.

So coming back to the current situation, particularly now with the most advanced forms of imaging, which include high dynamic range and color gamuts wider than even P3 or Rec 2020, the key is having a color management system like ACES that actually has enough gamut to contain any color space you capture and want to manipulate.

Can you talk about the challenges you overcame, and how that fits into the history of cinema as it relates to the Academy recognition you received?
As a cinematographer working on feature films or commercials, I kept thinking: if I’m fortunate enough to be able to manage the dailies, and certainly the final color grading, there are these tools called lift, gain and gamma, which are common to all the different color correctors. But they’re all implemented differently. They’re not cross-platform-compatible, so the numbers from a lift-gain-gamma correction — which is the primary RGB grading — on one color corrector will not translate automatically to another color corrector. So I thought, we should have a cross-platform version of that, because it is usually seen as the first step in grading.

That’s about as basic as you can get, and it was designed to be a cross-platform implementation, so that everybody who applies the ASC CDL in a compatible color grading system, whether on a DaVinci, Baselight, Lustre or whatever you were using, would get the same, transferable results.
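
For readers who haven’t worked with it, the published ASC CDL math is deliberately simple: per channel, the output is (input × slope + offset) raised to a power, followed by a single saturation control computed around Rec. 709 luma. A minimal NumPy sketch of that transfer function, assuming video-range values normalized to 0-1:

```python
import numpy as np

REC709_LUMA = np.array([0.2126, 0.7152, 0.0722])

def asc_cdl(rgb, slope, offset, power, saturation=1.0):
    """Apply an ASC CDL correction to an (..., 3) float RGB array in [0, 1].

    Per channel: out = clamp(in * slope + offset) ** power,
    then saturation is applied around Rec. 709 luma.
    """
    out = np.clip(rgb * np.asarray(slope) + np.asarray(offset), 0.0, 1.0)
    out = out ** np.asarray(power)
    luma = np.sum(out * REC709_LUMA, axis=-1, keepdims=True)
    return np.clip(luma + saturation * (out - luma), 0.0, 1.0)
```

Because the whole correction is just ten numbers (three slopes, three offsets, three powers and one saturation), it travels easily in ALE, FLEx, CMX EDL or XML files from the set to dailies, VFX and the finishing suite.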

You could transport those numbers from a set-up on set using a dailies creation tool, ColorFront for example. You could then use the ASC CDL with the DIT to establish a chosen look during the shoot, not while you’re actually shooting, that could then be applied to dailies and used for VFX.

Then when you make your way into the final color grading session with the final cut — or whenever you start doing master color grading going back to the original camera source — you would have these initial grading corrections as a starting point as references. This now gives you the possibility of continuing on that color grading process using all the sophistication of a full color corrector, whether it’s power windows or secondary color correction. Whatever you felt you needed to finalize the look.

I was advocating this in the ASC Technology Committee, as it was called then; it has since been renamed the Motion Imaging Technology Council (MITC). We needed a solution like this, and a group of us got together and decided that we would do it. There were plenty of people who were skeptical: “Why would you do something like that when we already have lift, gain and gamma? Why would any of the manufacturers of the different color grading systems integrate this into their system? Would it not impinge upon their competitive advantage? If they had a system that people were used to using, and their own lift, gain and gamma worked perfectly well for them, why would they want to use the ASC CDL?”

We live in a much more fragmented post world, and I saw that becoming even more so with the advances of digital. The ASC CDL would be a “look unifier” that would establish initial look parameters. You would be able to have control over the look at every stage of the way.

I’m assuming that the cinematographer would work with the director and editor, and they would assess certain changes that probably should be made, because we’re now looking at cut sequences; what we had thought would be most appropriate when we were shooting is now in the context of an edit, and there may need to be some changes and adjustments.

Were you involved in ACES? Was it a similar impetus for ACES coming about? Or was it just spawned because visual effects movies became so big and important with the advent of digital filmmaking?
It was a bit of both, including productions without VFX. So I would say that initially it was driven by the fact that there really should be a standardized color management system. Let me give you an example of what I’m talking about. When we were all photochemical and basically shooting with Kodak stock, we were working with film-based Kodak color science.

It’s a color science that everybody knew and understood; even if they didn’t understand it from an engineering photochemical point of view, they understood its effects. It’s what helped enable the look and the images that we wanted to create.

That was a color management system that was built into film. That color science could have been adapted into the digital world, but Kodak resisted that because of the threat to negative sales. If you apply film color science to digital cameras, you make digital cameras look more like film, and that could pose a threat to the sale of color negative film.

So that’s really where the birth of ACES came about — to create a universal, unified color management system that would be appropriate anywhere you shot and would offer the widest possible color gamut. It supports any camera or display technology because it always has a more expanded (future-proofing) capability within which digital camera and display technologies can work effectively and efficiently, but also accurately, reliably and predictably.

Very early on, my ASC Technology Committee (now called the Motion Imaging Technology Council) got involved with ACES development and became very excited about it. It was the missing ingredient needed to make the end-to-end digital workflow the success we thought it could become. Because we could no longer rely on film-based color science, we had to either replicate or emulate it with a color management system that could accommodate everything we wanted to do creatively. ACES became that color management system.

So, in addition to becoming the first cross-platform primary color grading tool, the ASC CDL became the first official ACES look modification transform. Because ACES is not a color grading tool but a color management system, you have to pair color grading tools with the color management. So you have the color management with ACES, you have the color grading with the ASC CDL, and the combination of the two is the look management system; it takes both to make that work. It’s not that the ASC CDL is the only tool you use for color grading, but it has the portable, cross-platform ability to carry the color grade from dailies through visual effects up to the final grade, when you’re working with a sophisticated color corrector.

What do you see for the future of cinematography and the merging of post and on-set work? And what do you see as the challenges of future integration between maintaining the creative intent and the metadata?
We’re very involved in metadata at the moment. Metadata is a crucial part of making all this work, as you well know. In fact, we worked with the Academy on the common 3D LUT format, which again would have cross-platform consistency and predictability. Its functionality and scope of use would be better understood if everyone were using it. It’s a work in progress. Metadata is critical.
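
Part of what a common 3D LUT format has to pin down is not just the table of values but exactly how applications sample and interpolate it. As a simplified illustration of what any conforming implementation does (a generic sketch, not the published Common LUT Format spec itself), here is trilinear interpolation through an N×N×N cube:

```python
import numpy as np

def apply_3d_lut(rgb, lut):
    """Trilinearly interpolate an (N, N, N, 3) LUT over (..., 3) RGB in [0, 1]."""
    n = lut.shape[0]
    pos = np.clip(rgb, 0.0, 1.0) * (n - 1)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    f = pos - lo  # fractional position within the lattice cell

    def corner(a, b, c):  # fetch one lattice corner per pixel
        return lut[a[..., 0], b[..., 1], c[..., 2]]

    # Blend the eight surrounding lattice points: blue, then green, then red.
    fb = f[..., 2:3]
    c00 = corner(lo, lo, lo) * (1 - fb) + corner(lo, lo, hi) * fb
    c01 = corner(lo, hi, lo) * (1 - fb) + corner(lo, hi, hi) * fb
    c10 = corner(hi, lo, lo) * (1 - fb) + corner(hi, lo, hi) * fb
    c11 = corner(hi, hi, lo) * (1 - fb) + corner(hi, hi, hi) * fb
    fg = f[..., 1:2]
    c0 = c00 * (1 - fg) + c01 * fg
    c1 = c10 * (1 - fg) + c11 * fg
    fr = f[..., 0:1]
    return c0 * (1 - fr) + c1 * fr

# Sanity check: a 33-point identity LUT should leave colors untouched.
g = np.linspace(0.0, 1.0, 33)
identity = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
colors = np.random.default_rng(1).random((8, 3))
assert np.allclose(apply_3d_lut(colors, identity), colors)
```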

I think as we expand the canvas and the palette of the possibility of image making, you have to understand what these technologies are capable of doing, so that you can incorporate them into your vision. So if you’re saying my creative vision includes doing certain things, then you would have to understand the potential of what they can do to support that vision. A very good example in the current climate is HDR.

That’s very controversial in a lot of ways, because the set manufacturers really would love to have everything just jump off the screen to make it vibrant and exciting. However, from a storytelling point of view, it may not be appropriate to push HDR imagery where it distracts from the story.
Well, it depends on how it’s done and how you are able to use that extended dynamic range when you have your bright highlights. You can use foreground-background relationships with bigger depth of field to tremendous effect. The images have a visceral presence, because they have a dimensionality when, for example, you see the bright images outside a window.

When you have an extended dynamic range of scene tones that could add dimensional depth to the image, you can choreograph and stage the blocking for your narrative storytelling with the kind of images that take advantage of those possibilities.

So HDR needs to be thought of as something that’s integral to your storytelling, not just something that’s there because you can do it. That’s when it can become a distraction. When you’re on set, you need a reference monitor that is able to show and convey all the different tonal and color elements that you’re working with to create your look, from HDR to wider color gamut, whatever that may be, so that you feel comfortable that you’ve made the correct creative decision.

With virtual production techniques, you can incorporate some of that into your live-action shooting on set with that kind of compositing, just like James Cameron started with Avatar. If you want to do that with HDR, you can. The sky is the limit in terms of what you can do with today’s technology.

So these things are there, but you need to be able to pull them all together into your production workflow to make sure that you can comfortably integrate in the appropriate way at the appropriate time. And that it conforms to what the creative vision for the final result needs to be and then, remarkable things can happen. The aesthetic poetry of the image can visually drive the narrative and you can say things with these images without having to be expositional in your dialogue. You can make it more of an experientially immersive involvement with the story. I think that’s something that we’re headed toward, that’s going to make the narrative storytelling very interesting and much more dynamic.

Certainly. With the advancements of consumer technology, better panels, the high dynamic range developments, and Dolby Vision and Atmos audio coming into the home, it’s really an amazing time to be involved in the industry; it’s so fun and challenging.

It’s a very interesting time, and a learning curve needs to happen. That’s what’s driven me from the very beginning, and it’s why I think our ASC Motion Imaging Technology Council has been so successful in its 16 years of continuous operation, influencing the development of some of these technologies in very meaningful ways. But always with the intent that these new imaging technologies are there to better serve the creative intent of the filmmaker. The technology serves the art. It’s not about the technology per se, it’s about the technology as the enabling component of the art. It enables the art to happen, and it expands its scope and possibility to broader canvases with wider color gamuts, in ways that have never been experienced or possible before.


Barry Goch is a Finishing Artist at The Foundation and a Post Production Instructor at UCLA Extension. You can follow him on Twitter at @gochya.


Red intros LCD touch monitor for DSMC2 cameras

Red Digital Cinema has introduced the DSMC2 Touch 7-inch Ultra-Brite LCD monitor to its line of camera accessories. It offers an optically bonded touchscreen with Gorilla Glass that allows for what the company calls “intuitive ways to navigate menus, adjust camera parameters and review .R3D clips directly out of the camera.”

The monitor offers a brighter high-definition viewing experience for recording and viewing footage on DSMC2 camera systems, even in direct sunlight. The 1920×1200 display panel provides 2,200 nits of brightness to overcome viewing difficulties in bright outdoor environments, as well as high pixel density (323ppi) and a 1200:1 contrast ratio.
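
Those numbers are internally consistent: pixel density is just the diagonal pixel count divided by the diagonal size in inches, so, assuming the 7-inch figure is the panel diagonal:

```python
import math

w_px, h_px, diag_in = 1920, 1200, 7.0
ppi = math.hypot(w_px, h_px) / diag_in  # diagonal pixels / diagonal inches
print(round(ppi))  # 323, matching Red's quoted pixel density
```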

The Ultra-Brite display mounts to Red’s DSMC2 Brain or other 1/4-20 mounting surfaces, and provides a LEMO connection to the camera, making it an ideal monitoring option for gimbals, cranes, and cabled remote viewing. Shooters can use a DSMC2 LEMO Adaptor A in conjunction with the Ultra-Brite display for convenient mounting options away from the DSMC2 camera Brain.

A demo of the new monitor, priced at $3,750, is available online.


Lucid and Eys3D partner on VR180 depth camera module

EYS3D Microelectronics Technology, the company behind embedded camera modules in some top-tier AR/VR headsets, has partnered with AI startup Lucid, which will power its next-generation depth-sensing camera module, Axis. This means that a single, small handheld device can capture accurate 3D depth maps with up to a 180-degree field of view at high resolution, allowing content creators to scan, reconstruct and output precise 3D point clouds.

This new camera module, which was demoed for the first time at CES, will give developers, animators and game designers a way to transform the physical world into a virtual one, ramping up content for 3D, VR and AR, all with superior performance in resolution and field of view at a lower cost than some technologies currently available.

A device capturing the environment exactly as you perceive it, but enhanced with capabilities of precise depth, distance and understanding, could help eliminate the boundaries between what you see in the real world and what you can create in the VR and AR world. This is what the Lucid-powered EYS3D Axis camera module aims to bring to content creators, as they gain the “super power” of transforming anything in their vision into a 3D object or scene that others can experience, interact with and walk in.

What was previously possible only with eight to 16 high-end DSLR cameras and expensive software or depth sensors is now combined into one tiny camera module with stereo lenses paired with IR sensors. Axis will cover up to a 180-degree field of view while providing millimeter-accurate 3D in point cloud or depth map format. The device provides a simple plug-and-play experience through USB 3.1 Gen1/2 and supported Windows and Linux software suites, allowing users to develop their own depth applications, such as 3D reconstructing an entire scene, scanning faces into 3D models or just determining how far away an object is.
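
The depth-map-to-point-cloud step described here is standard pinhole-camera back-projection: each pixel is pushed out along its viewing ray by the measured depth. A hedged sketch follows; the intrinsics are placeholders for illustration, not published Axis specifications:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map into an (H*W, 3) point cloud.

    fx, fy: focal lengths in pixels; (cx, cy): principal point.
    Depth is the Z distance per pixel; output units match the depth units.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx  # horizontal offset scales with depth
    y = (v - cy) * depth / fy  # vertical offset scales with depth
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Placeholder intrinsics for a 640x480 sensor, illustration only:
# cloud = depth_to_point_cloud(depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```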

Lucid’s AI-enhanced 3D/depth solution, known as 3D Fusion Technology, is currently deployed in many devices, such as 3D cameras, robots and mobile phones, including the Red Hydrogen One, which just launched through AT&T and Verizon nationwide.

EYS3D’s new depth camera module powered by Lucid will be available in Q3 2019.


Review: iOgrapher Multi Case for mobile filmmaking

By Brady Betzel

Thanks to the amazing iPhone X, Google Pixel and Samsung Galaxy, almost everyone has a high-end video camera on their person at all times, and this is helping spur on mobile filmmaking and vlogging.

From YouTube to Instagram to movies like Unsane (Steven Soderbergh) or Tangerine (Sean Baker) — and regardless of whether you think a $35,000 camera setup tells a story better than a $1,000 cell phone (looking at you, Apple iPhone XS Max) — mobile filmmaking is here to stay and will only get better.

iOgrapher’s latest release is the iOgrapher Multi Case, a compact mobile filmmaking mounting solution that works with today’s most popular phones. iOgrapher has typically created solutions that were tied to the mobile device being used for filmmaking, such as an iPhone, the latest Samsung Galaxy phones, iPads or even action cameras like a GoPro Hero 7 Black.

With the new iOgrapher Multi Case you can fit any mobile device that measures more than 5 ½” x 2 ¼” and less than 6 ½” x 3 ⅜”. Unfortunately, you won’t be fitting an iPad or a GoPro in the iOgrapher Multi Case, but don’t fret! iOgrapher makes rigs for those as well. On the top of the Multi Case are two cold shoe mounts for lights, microphones or any other device, like a GoPro. To mount gear with ¼”-20 screws in the cold shoes, you will need a cold shoe to ¼”-20 adapter, which is available on iOgrapher’s accessory page. You can also find these at Monoprice or Amazon for cheap.

And if you are looking to add more gear, you may want to order some extra cold shoe adapters for the additional ¼”-20 screw mounts on the handles of the iOgrapher Multi Case. The mounts on the handles are great for adding lighting or microphones. I’ve found that if you are doing some behind-the-scenes filming or need another angle, a small camera like a GoPro can be easily mounted and angled. With all this mounting, you should assume you’ll be using the iOgrapher on a sturdy tripod. Just for fun, I mounted the iOgrapher Multi Case onto a GoPro 3-Way Grip, which can also be used as a light tripod. It wasn’t exactly stable, but it worked. I wouldn’t suggest using it for more than an emergency shooting situation, though.

On the flip side (pun intended), the iOgrapher can be solidly mounted vertically with the ¼”-20 screw mounts on the handles. With Instagram making headway with vertical video in its Instagram Stories, iOgrapher built that idea into the Multi Case, despite grumbling from the old folks who just don’t get vertical video.

Testing
I tried out both a Samsung Galaxy S8+ and an iPhone 7+, with their cases on, inside the iOgrapher Multi Case. Both fit. The iPhone 7+ stretched the boundaries of the Multi Case, but it did fit and worked well. Phones are inserted into the Multi Case via a spring-loaded bottom piece. From the left side (or the top, if you are shooting vertically), you push the bottom of the device into the covered corner slots of the Multi Case until the top or left side can be secured under the edge of the case. It’s really easy.

I was initially concerned about the spring loading of the case; I wasn’t sure the springs would be resilient enough to handle phones constantly being pulled in and out, but they are high quality and held up beautifully. I tried inserting my phones tons of times and didn’t notice any issues with the springs or my phones.

Take care when inserting your phone into the Multi Case if you have a protective shield on your device’s screen. If you aren’t extra careful, it can pull or snag on the cover, especially with the tight fit of a case. Just pay attention and there will be nothing to worry about. The simple beauty of the iOgrapher is that with a wider grip on your filmmaking device, you have a larger area over which to distribute any shaking from your hands, essentially stabilizing your filmmaking without the need for a full-fledged gimbal.

If you accidentally drop your iOgrapher you may get a scratch, but for the most part they are built sturdy and can withstand punishment, whether it’s from your four-year-old or from the weather. If you want to get a little fancy, you can buy affordable lights like the Litra Torch (check out my review) to attach to the cold shoe mounts, or even a Rode microphone (don’t forget the TRS-to-TRRS adapter if you are plugging into an iPhone), and you are off and running.

Summing Up
I have been really intrigued with iOgrapher’s products since day one. They are an affordable and sturdy way to jump into filmmaking using cameras everyone carries with them every day: their phones.

Whether you are a high school student looking to get steady and professional mobile video, or a journalist looking for a quick way to make the most of your shots with just a phone, light, mic and tripod mount, the iOgrapher Multi Case will unlock your mobile filmmaking potential.

The iOgrapher Multi Case is a very durable protective case for your mobile filmmaking devices, and a steal at $79.99. If you are a parent looking for an inexpensive way to spark your child’s interest in video, take a look at www.iographer.com and grab a few accessories, like a Manfrotto light and a Rode VideoMicro, to add some subtle lighting and pick up the best quality audio.

Make sure to check out iOgrapher creator Dave Basulto’s demo of the Multi Case, including trying out the fit of different phones.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.


Catching up with Aquaman director James Wan

By Iain Blair

Director James Wan has become one of the biggest names in Hollywood thanks to the $1.5 billion-grossing Fast & Furious 7, as well as the Saw, Conjuring and Insidious films — three of the most successful horror franchises of the last decade.

Now the Malaysian-born, Australian-raised Wan, who also writes and produces, has taken on the challenge of bringing Aquaman and Atlantis to life. The origin story of half-surface dweller, half-Atlantean Arthur Curry stars Jason Momoa in the title role. Amber Heard plays Mera, a fierce warrior and Aquaman’s ally throughout his journey.

James Wan and Iain Blair

Additional cast includes Willem Dafoe as Vulko, counselor to the Atlantean throne; Patrick Wilson as Orm, the present King of Atlantis; Dolph Lundgren as Nereus, King of the Atlantean tribe Xebel; Yahya Abdul-Mateen II as the revenge-seeking Manta; and Nicole Kidman as Arthur’s mom, Atlanna.

Wan’s team behind the scenes included such collaborators as Oscar-nominated director of photography Don Burgess (Forrest Gump), his five-time editor Kirk Morri (The Conjuring), production designer Bill Brzeski (Iron Man 3), visual effects supervisor Kelvin McIlwain (Furious 7) and composer Rupert Gregson-Williams (Wonder Woman).

I spoke with the director about making the film, dealing with all the effects, and his workflow.

Aquaman is definitely not your usual superhero. What was the appeal of doing it? 
I didn’t grow up with Aquaman, but I grew up with other comic books, and I always was well aware of him as he’s iconic. A big part of the appeal for me was he’d never really been done before — not on the big screen and not really on TV. He’s never had the spotlight before. The other big clincher was this gave me the opportunity to do a world-creation film, to build a unique world we’ve never seen before. I loved the idea of creating this big fantasy world underwater.

What sort of film did you set out to make?
Something that was really faithful and respectful to the source material, as I loved the world of the comic book once I dove in. I realized how amazing this world is and how interesting Aquaman is. He’s bi-racial, half-Atlantean, half-human, and he feels he doesn’t really fit in anywhere at the start of the film. But by the end, he realizes he’s the best of both worlds and he embraces that. I loved that. I also loved the fact it takes place in the ocean so I could bring in issues like the environment and how we treat the sea, so I felt it had a lot of very cool things going for it — quite apart from all the great visuals I could picture.

Obviously, you never got the Jim Cameron post-Titanic memo — never, ever shoot in water.
(Laughs) I know, but to do this we unfortunately had to get really wet, as over two-thirds of the film is set underwater. The crazy irony of all this is that when people are underwater they don’t look wet. It’s only when you come out of the sea or pool that you’re glossy and dripping.

We did a lot of R&D early on, and decided that shooting underwater looking wet wasn’t the right look anyway, plus they’re superhuman and are able to move in water really fast, like fish, so we adopted the dry-for-wet technique. We used a lot of special rigs for the actors, along with bluescreen, and then combined all that with a ton of VFX for the hair and costumes. Hair is always a big problem underwater, as like clothing it behaves very differently, so we had to do a huge amount of work in post in those areas.

How early on did you start integrating post and all the VFX?
It’s that kind of movie where you have to start post and all the VFX almost before you start production. We did so much prep, just designing all the worlds and figuring out how they’d look, and how the actors would interact with them. We hired an army of very talented concept artists, and I worked very closely with my production designer Bill Brzeski, my DP Don Burgess and my visual effects supervisor Kelvin McIlwain. We went to work on creating the whole look and trying to figure out what we could shoot practically with the actors and stunt guys and what had to be done with VFX. And the VFX were crucial in dealing with the actors, too. If a body didn’t quite look right, they’d just replace them completely, and the only thing we’d keep was the face.

It almost sounds like making an animated film.
You’re right, as over 90% of it was VFX. I joke about it being an animated movie, but it’s not really a joke. It’s no different from, say, a Pixar movie.

Did you do a lot of previs?
A lot, with people like Third Floor, Day For Nite, Halon, Proof and others. We did a lot of storyboards too, as they are quicker if you want to change a camera angle, or whatever, on the fly. Then I’d hand them off to the previs guys and they’d build on those.

What were the main technical challenges in pulling it all together on the shoot?
We shot most of it Down Under, near Brisbane. We used all nine of Village Roadshow Studios’ soundstages, including the new Stage 9, as we had over 50 sets, including the Atlantis Throne Room and Coliseum. The hardest thing in terms of shooting it was just putting all the actors in the rigs for the dry-for-wet sequences; they’re very cumbersome and awkward, and the actors are also in these really outrageous costumes, and it can be quite painful at times for them. So you can’t have them up there too long. That was hard. Then we used a lot of newish technology, like virtual production, for scenes where the actors are, say, riding creatures underwater.

We’d have it hooked up to the cameras so you could frame a shot and actually see the whole environment and the creature the actor is supposed to be on — even though it’s just the actors and bluescreen and the creature is not there. And I could show the actors — look, you’re actually riding a giant shark — and also tell the camera operator to pan left or right. So it was invaluable in letting me adjust performance and camera setups as we shot, and all the actors got an idea of what they were doing and how the VFX would be added later in post. Designing the film was so much fun, but executing it was a pain.

The film was edited by Kirk Morri, who cut Furious 7, and worked with you on the Insidious and The Conjuring films. How did that work?
He wasn’t on set but he’d visit now and again, especially when we were shooting something crazy and it would be cool to actually see it. Then we’d send dailies and he’d start assembling, as we had so much bluescreen and VFX stuff to deal with. I’d hop in for an hour or so at the end of each day’s shoot to go over things as I’m very hands on — so much so that I can drive editors crazy, but Kirk puts up with all that.

I like to get a pretty solid cut from the start. I don’t do rough assemblies. I like to jump straight into the real cut, and that was so important on this because every shot is a VFX shot. So the sooner you can lock the shot, the better, and then the VFX teams can start their work. If you keep changing the cut, then you’ll never get your VFX shots done in time. So we’d put the scene together, then pass it to previs, so you don’t just have actors floating in a bluescreen, but they’re in Atlantis or wherever.

Where did you do the post?
We did most of it back in LA on the Warner lot.

Do you like the post process?
I absolutely love it, and it’s very important to my filmmaking style. For a start, I can never give up editing and tweaking all the VFX shots. They have to pull it away from me, and I’d say that my love of all the elements of the post process — editing, sound design, VFX, music — comes from my career in suspense movies. Getting all the pieces of post right is so crucial to the end result and success of any film. This post was creatively so much fun, but it was long and hard and exhausting.

James Wan

All the VFX must have been a huge challenge.
(Laughs) Yes, as there are over 2,500 VFX shots and we had everyone working on it — ILM, Scanline, Base, Method, MPC, Weta, Rodeo, Digital Domain, Luma — anyone who had a computer! Every shot had some VFX, even the bar scene where Arthur’s with his dad. That was a set, but the environment outside the window was all VFX.

What was the hardest VFX sequence to do?
The answer is, the whole movie. The trench sequence was hard, but Scanline did a great job. Anything underwater was tough, and then the big final battle was super-difficult, and ILM did all that.

Did the film turn out the way you hoped?
For the most part, but like most directors, I’m never fully satisfied.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.


Inside the mind and workflow of a 14-year-old filmmaker

By Brady Betzel

From editing to directing, I have always loved how mentoring and teaching is a tradition that lives on in this industry. When I was an assistant editor, my hope was that the editors would let me watch them work, or give me a chance to edit. And a lot of the time I got that opportunity.

Years ago I worked with an editor named Robb McPeters, who edited The Real Housewives of New York City. I helped cut a few scenes, and Robb was kind enough to give me constructive feedback. This was the first time I edited a scene that ran on TV. I was very excited, and very appreciative of his feedback. Taking the time to show younger assistant editors who have their eye on advancement makes you feel good — something I’ve learned firsthand.

As I’ve become a “professional” editor I have been lucky enough to mentor assistant editors, machine room operators, production assistants and anyone else that was interested in learning post. I have found mentoring to be very satisfying, but also integral to the way post functions. Passing on our knowledge helps the community move forward.

Even with a couple of little scenes to cut for Robb, the direction I received helped make me the kind of editor I am today. Throughout the years I was lucky enough to encounter more editors like Robb and took all of the advice I could.

Last year, I heard that Robb’s son, Griffin, had made his first film at 13 years old, Calling The Shots. Then a few months ago I read an article about Griffin making a second film, at 14 years old, The Adventure of T.P. Man and Flusher. Griffin turns 15 in February and hopes to make a film a year until he turns 18.

It makes sense that someone who has been such a good mentor has produced a son with such a passion for filmmaking. I can see the connection between fatherhood and mentorship, especially between an editor and an assistant. And seeing Robb foster his son’s love for filmmaking, I realized I wanted to be able to do that with my sons. That’s when I decided to reach out to find out more.

CAN YOU TALK ABOUT YOUR MOST RECENT FILM?
The Adventure of T.P. Man and Flusher is really a story of adventure, friendship and finding love. After learning that his best friend Jim (Sam Grossinger) has attempted suicide, Tom (Adam Simpson) enlists the help of the neighborhood kingpin, Granddaddy (Blake Borders). Their plan is to sneak Jim out of the hospital for one last adventure before his disconnected parents move him off to Memphis. On the way, they encounter a washed-up ’90s boy-band star and try to win the hearts of their dream girls.

Tom realizes that this adventure will not fix his friend, but their last night together does evolve into the most defining experience of their lives.

HOW DID YOU COME UP WITH THE IDEA FOR THIS FILM?
The Adventure of T.P. Man and Flusher is a feature film that I wrote while in 8th grade. I saved every penny I could earn and then begged my parents to let me use money from my college savings. They knew how important this film was to me so they agreed. This is my second feature and I wanted to do everything better, starting with the script to casting. I was able to cast professional actors and some of my schoolmates.

I shot in 4K UHD using my Sony A7R III. I then brought the footage into the iMac and transcoded it into CineForm 720p files, which allowed me to edit natively in Adobe Premiere on the family iMac. We have a cabin in Humboldt County, which is where I assemble my rough cuts.
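
Griffin doesn’t spell out his transcode settings, but the offline pattern he describes, 4K originals down-converted to CineForm 720p proxies that Premiere can cut natively, is easy to script. Here is a sketch of one way to batch it with FFmpeg, assuming a build that includes the CineForm (cfhd) encoder; the folder names are hypothetical:

```python
import pathlib
import subprocess

SOURCE = pathlib.Path("camera_originals")  # 4K UHD clips off the card
PROXIES = pathlib.Path("proxies_720p")
PROXIES.mkdir(exist_ok=True)

for clip in sorted(SOURCE.glob("*.mp4")):
    proxy = PROXIES / (clip.stem + ".mov")
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-vf", "scale=-2:720",   # down-res to 720p, preserving aspect ratio
        "-c:v", "cfhd",          # CineForm HD: light to decode while editing
        "-c:a", "pcm_s16le",     # uncompressed audio keeps sync simple
        str(proxy),
    ], check=True)
```

Keeping the proxy filenames identical to the camera originals means the edit can later be relinked to the 4K media for the finishing pass.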

I spent hours and hours this summer in my grandfather’s workshop editing the footage. Day after day, my mom and sister would go swimming at the river and pick berries, all the lazy summer day stuff, and I would walk down to the shop to cut so that I could finish a version of my scene.

Once I finished my director’s cut, I would show the assembly to my parents, and they would start giving me ideas on what was working and what wasn’t. I am currently polishing the movie, adding visual effects (in After Effects) and sound design, and doing a color grade in Adobe SpeedGrade. I’ll also do the final 5.1 surround sound mix in Adobe Audition to deliver for distribution.

WHERE DID YOU GET THE IDEA FOR THE FILM?
In 8th grade, a classmate attempted suicide and it affected me very deeply. I wondered if other kids were experiencing this type of depression. After doing some research, I realized that many kids suffer from deep depression. In fact, in 2016, adolescents and young adults aged 15 to 24 had a suicide rate of 13.15 per 100,000. That amazed and saddened me. I felt that I had to do something about it. I took my ideas and headed to our cabin in the woods to write the script over my winter break.

I was so obsessed with this story that I wrote a 120-page script.

CAN YOU TALK ABOUT PRODUCING?
It was a lot of scheduling, scheduling and scheduling. Locking locations, permits, insurance, and did I mention scheduling?

I think there was some begging in there too. “Please let us use. Please can we…” My school, SCVi, was extremely helpful with getting me insurance. It was heartwarming to see how many people wanted to help. There was even support from companies, including Wooden Nickel, which donated an entire lighting package.

WHAT ABOUT AS A DIRECTOR?
As the director, I really wanted to push the fantastical and sometimes dark and lonely world these characters were living in. Of course, because I wrote the script I already had an idea of what I wanted to capture in each scene, but I put it to paper with shot lists and overhead camera placements. That way I had a visual reference for how I wanted to film, from day one to the end.

Rehearsals with the actors were key with such a tight shooting schedule. Right from the start the cast responded to me as their director, which surprised me because I had just turned 14. Every question came to me for approval to represent my vision.

My dad was on set as my cinematographer, supporting me every step of the way. We have a great way of communicating. Most of the time we were on the same page, but if we were not, he deferred to me. I took my hits when I was wrong and then learned from them.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT MAKING THIS FILM?
This was a true, small-budget, independent film that I made at 14 years old. Our production office was my mom and dad and myself. Three people usually don’t make films. Even though I am young, my parents trusted the weight of the film to me. It is my film. This means I did a little of everything all of the time, from pulling costumes to stocking the make-up kit to building my own 4K editing system.

We had no grips, no electric, no PAs. If we needed water or craft service, it was me, my dad and my mom. If a scene needed to be lit, my dad and I lit everything ourselves, and we were the last ones loading costumes, extension cords and equipment. Post was the same ordeal.

WHAT WAS YOUR FAVORITE PART?
I really love everything about filmmaking. I love crafting a story, having to plan and think of how to capture a scene, how to show something that isn’t necessarily in front of your eyes. I love talking out my ideas. My mom teases me that I’m moviemaking even in my sleep, because she saw me in the hall going to the bathroom the other night and I mumbled, “Slow pan on Griffin going to bathroom.”

But post is really where the movie comes together. I like seeing what works for a scene. Which reaction is better? What music or sound effects help tell the story? Music design is also very personal to me. I listen to songs for hours to find the perfect one for a scene.

WHAT’S YOUR LEAST FAVORITE?
Having to cut some really great scenes that I know an actor is looking forward to seeing in that first screening. It is a really hard decision to remove good work. I even cut my grandmother from my first film. Now that’s hard!

WHAT CAMERAS AND PRODUCTION EQUIPMENT DO YOU USE?
For recording I use the Sony A7R III with various lenses, recording to a Ninja Flame at 10-bit 4K. For sound I use a Rode NTG2 boom mic and three lav mics. For lighting we used a few Aputure LED lights and a Mole-Richardson 2K Baby Junior.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
I am not much of a night person. I get really tired around 9:30pm; in fact, I still have a bedtime of 10:00pm. I would say my best work is done in the time I have between school and my bedtime. I edit every chance I get. I do have to break for dinner and might watch half an episode of The Office. Other than that, I am in the bay from 3:30 to 10:00pm every day.

CAN YOU THINK OF ANOTHER JOB YOU MIGHT WANT SOMEDAY?
No, not really. I enjoy taking people on emotional rides, creating a presentation that evokes personal feelings and using visuals to take my audience somewhere else. With all that said, if I couldn’t do this I would probably build professional haunted houses. Is that a real job?

IT’S STILL VERY EARLY, BUT HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
My parents have this video of me reaching for the camera on the way to my first day of pre-school saying, “I want the camera, I want to shoot.”

When I was younger, silent films mesmerized me. I grew up wanting to be Buster Keaton. The defining moment was seeing Jaws. I watched it at five and then realized what being a filmmaker was: making a mosaic of images (as Hitchcock said of editing). I began trying to create. At 11 and 12 I made shorts; at 13 I made my first full-length feature film. The stress and hard work did not even faze me; I was excited by it.

CAN YOU TALK ABOUT YOUR FIRST FILM?
Calling the Shots, which is now available on Amazon Prime, was an experiment to see if I could make a full-length film. A test flight, if you will. With T.P. Man I really got to step behind the camera and experience an entirely different side of directing that I didn’t get with my first film, since I was the lead actor in that one.

I also love the fact that all the music, sound design and graphics were done with my own hands, alone most of the time, in my editing suite. My dad designed it for me. I have two editing systems that I bounce back and forth between. I can set the lighting in the room, watch on a big 4K monitor and mix in 5.1 surround. Some kids have tree forts; I have my editing bay.

FINALLY, DO YOU GET STRESSED OUT FROM THE PROCESS?
I don’t allow myself to stress out about any of these things. The way I look at it is that I have a very fun and hard job. I try to keep things in perspective: there are no lives in danger here. I do my best work when I am relaxed, but if stress does creep in, I walk away, take a bike ride or watch a movie. Watching others work inspires me to make my movies better.

Most importantly, I brainstorm about my next project. This helps me keep a perspective that this project will soon be over and I should enjoy it while I can and make it the best I possibly can.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.


DP Chat: No Activity cinematographer Judd Overton

By Randi Altman

Judd Overton, who grew up in the Australian Outback, knew he wanted to be a DP before he even knew exactly what that was, spending a lot of his time watching and re-watching movies on VHS tapes. When he was young, a documentary film crew came to his town. “I watched as the camera operator was hanging off the side of my motorbike filming as we charged over sand dunes. I thought that was a pretty cool job!”

No Activity

The rest, as they say, is history. Overton’s recent work includes the Netflix comedy series The Letdown and No Activity, which is a remake of the Australian comedy series of the same name. It stars Patrick Brammall and Tim Meadows and is produced by CBS Television Studios in association with Funny or Die, Jungle and Gary Sanchez Productions. It streams on CBS All Access.

We recently reached out to Overton, who also just completed the documentary Lessons from Joan, about one of the first female British theater directors, Joan Littlewood.

What inspires you artistically? And how do you simultaneously stay on top of advancing technology that serves your vision?
What I love about what I do is being able to see things, and show the world to audiences in a way people haven’t seen before. I always keep abreast of technology, but for me the technology really needs to serve the story. I choose particular equipment in order to capture the emotion of the piece.

What new technology has changed the way you work (looking back over the past few years)?
The greatest change in my world is the high-quality, high-ISO cameras now on the market. This has meant being able to shoot in a much less obtrusive way, shooting and lighting to create footage that is far closer to reality.

The use of great-quality LED lighting is something I’m really enjoying. The ability to create and capture any color and control it from your iPhone opens the floodgates for some really creative lighting.


Judd Overton

Can you describe your ideal collaboration with the director when setting the look of a project?
Every director is different; it’s a role and relationship I fill as required. Some directors like to operate the camera themselves. In that case, I oversee the lighting. Some directors just want to work with the actors, so my job then involves more responsibilities for coverage, camera movement and selecting locations.

I try to be open to each new experience and make creative guidelines for a project in collaboration with the director and producers, trying to preempt obstacles before they strike.

Tell us about the CBS All Access show No Activity. Can you describe the overall look of the show and what you and the director/producers wanted to achieve?
I shot the pilot for the original No Activity five years ago. Trent O’Donnell (writer/director, co-creator) wanted to make a series out of simple two-hander (two-actor) scenes.

We decided to use the police procedural drama genre because we knew the audience would fill in gaps with their own knowledge. In a show where very little happens, the mood and style become far more important.

How early did you get involved in the production?
I’ve been involved since the show was conceptualized. We shot the pilot in a parking lot in one of Sydney’s seedier areas. We fought off a lot of rats.

No Activity

How did you go about choosing the right camera and lenses for this project?
I had to shoot three cameras, as the show is heavily improvised. Other than my main cameras with zoom lenses, I chose the best camera for each sequence. We used Blackmagic Ursa Mini Pro and Micro cameras for a lot of our rigged positions. I also used Panasonic cameras for our available-light work, and even an Arri Alexa 65 for some projection plates.

Were there any scenes that you are particularly proud of?
The scene I had the most fun with was the siege, which plays over the last two episodes of Season 2. We dusted off and fired up two 1930s arc lights. Carbon arc lights are what all the old Hollywood films used before HMIs. They are a true 5,600-Kelvin daylight source.

My gaffer’s father actually made these units, and they were refurbished for Quentin Tarantino’s film Once Upon a Time in Hollywood. We used them as searchlights for our nighttime siege, and the bright beams and plumes of smoke rising really gave the scene an epic scale.

What’s your go-to gear — things you can’t live without?
Communication is everything, and the latest toy in my toy box is HME headsets. They allow me to have constant communications with my camera operators, grips and electrics, essential when you’re running five cameras across multiple units.

Director Barry Jenkins on latest, If Beale Street Could Talk

By Iain Blair

If they handed out Oscars for shots of curling cigarette smoke, Barry Jenkins’ follow-up to his Oscar-winning Moonlight would win hands down. If Beale Street Could Talk looks certain to be an awards show darling, already picking up three Golden Globe nods — Best Drama Motion Picture, Best Screenplay for Jenkins and Best Supporting Actress for Regina King.

Based on the 1974 novel by writer and civil rights activist James Baldwin, it tells the story of a young black couple — Clementine “Tish” Rivers (KiKi Layne) and Alonzo “Fonny” Hunt (Stephan James) — who grow up together in Harlem and get engaged. But their romantic dreams soon begin to dissolve under the harsh glare of white authority and racism when Fonny is falsely accused of rape and thrown in jail, just as Tish realizes she is pregnant with their child.

While the couple is the focus of the film, the family drama also features a large ensemble cast that includes King as Tish’s mother and Colman Domingo as her father, along with Michael Beach, Brian Tyree Henry, Diego Luna, Pedro Pascal and Dave Franco.

Behind the camera, Jenkins reteamed with Moonlight cinematographer James Laxton, editors Nat Sanders and Joi McMillion, and composer Nick Britell.

I spoke with Jenkins about making the film and workflow.

Our writer Iain Blair with Barry Jenkins

It’s always a challenge to adapt an acclaimed novel for the screen. How tough was this one?
It was extremely tough, especially since I love James Baldwin so much. Every step of the way you’re deciding at which point you have to be completely faithful to the material and then where it’s OK to break away from the text and make it your own for the movie version.

I first read the novel around 2010, and in 2013 I went to Europe to get away and write the screenplay. I also wrote one for Moonlight, which then ended up happening first. This was a harder project to get made. Moonlight was smaller and more controllable. And this is told from a female’s perspective, so there were a lot of challenges.

What sort of film did you set out to make?
I wanted to take the energy of the novel and its lush romantic sensuality, and then pair it with the more biting, bitter social commentary of Baldwin’s non-fiction work. I see film as a very malleable art form, and I felt I could build it. So at times it could be extremely lush and beautiful — even distractingly so — but then it could turn very dark and angry, and contain all of that.

The film was shot by your go-to cinematographer James Laxton. Talk about the look you wanted and how you got it.
There are a lot of cinema references in Moonlight, but we couldn’t find many for this period set in this sort of neighborhood. There are nods to great directors and stylists, like Douglas Sirk and Hou Hsiao-hsien, but we ended up paying more attention to stills. We studied the work of the great photographers Roy DeCarava and Gordon Parks. I wanted it to look lush and beautiful.

You shot on location, and it’s a period piece. How hard was that?
It was pretty challenging because I’m the kind of guy — and James is too — where we like to have the freedom to point the camera anywhere and just shoot. But when you’re making a period film in New York, which is changing so fast every damn day, you just don’t have that freedom. So it was very constricting, and our production designer Mark Friedberg had to be very inventive and diligent about all the design.

Where did you post?
We split it between LA and New York. We cut the whole film here in LA at this little place in Silver Lake called Fancy Post, and did all the sound mix at Formosa. Then we moved to New York, since the composer lives there, and we did the DI at Technicolor PostWorks in New York with colorist Alex Bickel, who did Moonlight. We spent a lot of time getting the look just right — all the soft colors. We chose to shoot on the Alexa 65, which is unusual for a small drama, but we loved the intimacy it gave us.

You reteamed with your go-to editors Nat Sanders, who’s cut all three of your films, and Joi McMillion, who cut Moonlight with Nat. Tell us how it worked this time.
Fancy Post is essentially a house, so they each had their own bedroom, and I’d come in each day and check on their progress. Both of them were at film school with me, and we all work really well together, and I love the editing process.

Can you talk about the importance of music and sound in the film?
Sound has always been so important to me, ever since film school. One of my professors there was Richard Portman, who really developed the overlapping, multi-track technique with Robert Altman. I’ll always remember one of the first things he said to us about the importance of sound: a movie is 50 percent image and 50 percent sound, not 95 percent image and 5 percent sound. So that’s how I approach it.

We had a fantastic sound team: supervising sound editor Onnalee Blank and re-recording mixer Matt Waters. They usually do these huge projects with dragons and so on, like Game of Thrones, but they also do small dramas like this. They came on very late, but did incredible, really detailed work with all the dialogue. And there’s a lot of dialogue and conversation, most of it in interiors, and then there’s the whole soundscape that they built up layer by layer, which takes us back in time to the 1970s. They mixed all the dialogue so it comes from the front of the room, but we also created what we called “the voice of God” for all of Tish’s voiceovers.


In this story she really functions as the voice of James Baldwin, and while the voiceovers are in her head, we surround the audience with them. That was the approach. Just as with Moonlight, I feel that a film’s soundscape is beholden to the mental states and consciousness of the main characters, and not necessarily to a genre or story form. So in this, composer Nick Britell and I both felt that the sound of the film is paced by how Tish and Fonny are feeling. That opened it up in so many ways. Initially, we thought we’d have a pure jazz score, since it suited the era and location, but as we watched the actors working it evolved into this jazz chamber orchestra kind of thing.

This is obviously not a VFX-driven piece, but the VFX must have played a role in the final look. What was involved?
Crafty Apes in LA, along with Phosphene and Significant Others in New York, did it all. We had some period stuff, cleanup and some augmentation, but we didn’t use any greenscreens on set. The big thing was that New York in the ‘70s was much grittier and dirtier, so all the graffiti on the subway cars was VFX. I hadn’t really worked much with visual effects before, but I loved it.

There’s been so much talk in Hollywood about the lack of diversity — in front of and behind the camera. Do you see much improvement since we last spoke?
Well, look at all the diverse films out last year and now this year — Green Book, The Hate U Give, Black Panther, Widows, BlacKkKlansman — with black directors and casts. So there has been change, and I think Moonlight was part of a wave, increasing visibility around this issue. There’s more accountability now, and we’re in the middle of a cycle that is continuing. Change is a direction, not a destination.

Barry Jenkins on set.

We’re heading into awards season. How important are they for a film like this?
Super important. Look, Moonlight would not have had the commercial success it had if it hadn’t been for all the awards attention and talk.

Where do you keep your Oscar?
I used to keep it on the floor behind my couch, but I got so much shit about keeping it hidden that now it sits up high on a speaker. I’m very proud of it.

What’s next?
I’m getting into TV. I’m doing a limited series for Amazon called The Underground Railroad, and we’re in pre-production. I’ve got a movie thing on the horizon, but my focus is on this right now.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Review: GoPro Hero 7 Black action camera

By Brady Betzel

Every year GoPro offers a new iteration of its camera. One of the biggest past upgrades was from the Hero 4 to the Hero 5, with an updated body style, waterproofing without the need for external housing and basic image stabilization. That was one of the biggest… until now.

The Hero 7 Black is by far the best upgrade GoPro users have seen, especially if you are sitting on a Hero 5 or earlier. I’ll tell you up front that the built-in stabilization (called Hypersmooth) alone is worth the Hero 7 Black’s $399 price tag, but there are a ton of other features that have been upgraded and improved.

There are three versions of the Hero 7: Black for $399, Silver for $299 and White for $199. The White is the lowest-priced Hero 7 and includes features like 1080p video recording at 60fps, a built-in battery, waterproofing to 33 feet without extra housing, standard video stabilization, 2x slow-mo (1440p/1080p @ 60fps), video recording up to 40Mb/s (1440p), two-mic audio recording, 10MP photos and 15/1 burst photos. After reading that, you can surmise that the Hero 7 White is as basic as it gets; GoPro even skipped 24fps video recording, ProTune and a front LCD display. But that doesn’t mean the Hero 7 White is a throwaway. What I love about the latest update to the Hero line is the simplicity of the menus. In previous generations, the GoPro Hero menus were difficult to use and would often cause me to fumble shots. The Hero 7 menu has been streamlined for a much simpler mode-selection process, making the Hero 7 White a basic and relatively affordable waterproof GoPro.

The Hero 7 Silver can be purchased for $299 and has everything the Hero 7 White has, plus some extras, including 4K video recording at 30fps up to 60Mb/s, 10MP photos with wide dynamic range to bring out details in the highlights and shadows, and GPS location tagging to show you where your videos and photos were taken.

The Hero 7 Black
The Hero 7 Black is the big gun in the GoPro Hero 7 lineup. For anyone who wants to shoot multiple frame rates; harness ProTune’s flat picture profile for extended range when color correcting; record ultra-smooth video without an external gimbal or any post processing; or shoot RAW photos, the Hero 7 Black is for you.

The Hero 7 Black has all of the features of the White and Silver plus a bunch more, including the front-facing LCD display. One of the biggest still-photo upgrades is the ability to shoot 12MP photos with SuperPhoto. SuperPhoto is essentially a “make my image look like the GoPro photos on Instagram” setting: an auto-image processor that will turn good photos into awesome photos. Under the hood it’s an HDR mode that pulls as much latitude as possible out of the shadows and highlights while also applying noise reduction.
Beyond SuperPhoto, the Hero 7 has burst rates from 3/1 up to 30/1; a timelapse photo function with intervals ranging from 0.5 seconds to 60 seconds; the ability to shoot RAW photos in GPR format alongside JPG; the ability to shoot video in 4K at 60fps, 30fps and 24fps in wide mode, as well as 30fps and 24fps in SuperView mode (essentially ultra-wide angle); 2.7K wide video up to 120fps and down to 24fps in linear view (no wide-angle warping); and all the way down to 720p in wide at 240fps.

The Hero 7 records in both MP4 H.264/AVC and H.265/HEVC formats at up to 78Mb/s (4K). The Hero 7 Black has a bunch of additional modes, including Night Photo, Looping, Timelapse Photo, Timelapse Video, Night Lapse Photo, 8x Slow Mo and Hypersmooth stabilization. It has Wake on Voice commands, as well as live streaming to Facebook Live, Twitch, Vimeo and YouTube. It also features TimeWarp video (which I’ll talk more about later); a GP1 processor created by GoPro; advanced metadata that the GoPro app uses to create videos of just the good parts (like smiling photos); ProTune; Karma compatibility; dive-housing compatibility; three-mic stereo audio; RAW audio captured in WAV format; the ability to plug in an external mic with the optional 3.5mm mic cable; and HDMI video output with a micro HDMI cable.
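Since card space comes down to bitrate, here is a quick back-of-the-napkin sketch, in Python, of what that top 4K bitrate means in practice. The arithmetic is mine, not GoPro’s documentation, and real file sizes will vary with codec and scene complexity:

    # Rough storage math at the Hero 7 Black's top 4K bitrate (78Mb/s).
    # My own sketch; actual sizes vary with codec and scene complexity.

    BITS_PER_BYTE = 8

    def megabytes_per_minute(bitrate_mbps: float) -> float:
        """Approximate recording size in megabytes per minute."""
        return bitrate_mbps / BITS_PER_BYTE * 60

    print(megabytes_per_minute(78))           # ~585 MB per minute of 4K
    print(64_000 / megabytes_per_minute(78))  # ~109 minutes on a 64GB card

In other words, a 64GB card holds a bit under two hours at the camera’s highest quality.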

I really love the GoPro Hero 7 and consider it a must-buy if you are on the edge about upgrading an older GoPro camera.

Out of the Box
When I opened the GoPro Hero 7 Black I was immediately relieved that it was the same dimensions as the Hero 5 and 6, since I have access to the GoPro Karma drone, Karma gimbal and various accessories. (As a side note, the Hero 7 White and Silver are not compatible with the Karma drone or gimbal.) I quickly plugged in the Hero 7 Black to charge, and topping it off took only half an hour; from fully drained, the Hero 7 takes a little under two hours to charge.

I was excited to try the new built-in stabilization feature Hypersmooth, as well as the new stabilized in-camera timelapse creator, TimeWarp. I received the Hero 7 Black around Halloween so I took it to an event called “Nights of the Jack” at King Gillette Ranch in Calabasas, California, near Malibu. It took place after dark and featured lit-up jack-o-lanterns, so I figured I could test out the TimeWarp, Hypersmooth and low-light capabilities in one fell swoop.

It was really incredible. I used a clamp mount to hold it onto the kids’ wagon and just hit record. When I stopped recording, the GoPro finished processing the TimeWarp video and I was ready to view it or share it. Overall, the quality of video and the low-light recording were pretty good — not great but good. You can check out the video on YouTube.

The stabilization was mind blowing, especially considering it is electronic image stabilization (EIS), which is software-based, not optical, which is hardware-based. Hardware-based stabilization is typically preferred to software-based stabilization, but GoPro’s EIS is incredible. For most shooting scenarios, the built-in stabilization will be amazing — everyone who watches your clips will think that you are using a hardware gimbal. It’s that good.

The Hero 7 Black has a few options for TimeWarp mode to keep the video length down — you can choose from different speeds: 2x, 5x, 10x, 15x and 30x. For example, 2x will take one minute of footage and turn it into 30 seconds, while 30x will take five minutes of footage and turn it into 10 seconds. Think of TimeWarp as a stabilized timelapse. In terms of resolution, you can choose a 16:9 or 4:3 aspect ratio at 4K, 1440p or 1080p. I always default to 1080p if posting to Instagram or Twitter, since you can’t really see the 4K difference there, and it saves my bits and bytes for better image fidelity.
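If you want to know how long a finished clip will be before you hit record, the math is just recorded time divided by the speed factor. A minimal sketch in Python (the function name is mine, not GoPro’s):

    # TimeWarp output length = recorded length / speed factor (my sketch).

    def timewarp_output_seconds(recorded_seconds: float, speed: float) -> float:
        """Length of the finished TimeWarp clip in seconds."""
        return recorded_seconds / speed

    print(timewarp_output_seconds(60, 2))    # 1 min at 2x  -> 30.0 seconds
    print(timewarp_output_seconds(300, 30))  # 5 min at 30x -> 10.0 seconds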

If you’re wondering why you would use TimeWarp over Timelapse, there are a couple of differences. TimeWarp will create a smooth video when walking, riding a bike or generally moving around because of the Hypersmooth stabilization. Timelapse acts more like a camera taking pictures at a set interval to show a passage of time (say, from day to night) and will play back a little choppier. Check out a sample day-to-night timelapse I filmed using the Hero 7 Black set to Timelapse on YouTube.
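The speed-up is also determined differently: with Timelapse you pick a photo interval rather than a speed factor, so the effective acceleration depends on the interval and the playback frame rate. A rough sketch, assuming 30fps playback (my assumption; GoPro doesn’t frame it this way):

    # Effective timelapse speed-up = capture interval x playback fps (my sketch).

    def timelapse_speedup(interval_seconds: float, playback_fps: float = 30.0) -> float:
        """How many real-world seconds each second of playback represents."""
        return interval_seconds * playback_fps

    print(timelapse_speedup(0.5))  # 15x   -- good for clouds or crowds
    print(timelapse_speedup(60))   # 1800x -- compresses day-to-night into seconds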

So beyond TimeWarp, what else is different? Well, just plain shooting 4K at 60fps — you can now enable EIS stabilization there, which you couldn’t do on the GoPro Hero 6 Black. It’s a giant benefit for anyone shooting 4K handheld who wants to slow that 4K down by 50% and retain smooth motion, with stabilization already done in-camera. This is a huge perk in my mind. The image processing is very close to what the Hero 6 produces and quite a bit better than what the Hero 5 produces.
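The reason 60fps matters for slow motion is simple frame accounting: conforming 60fps footage to a slower timeline stretches time without ever repeating a source frame. A quick sketch of that arithmetic (mine, not GoPro’s):

    # Conformed playback speed = timeline fps / source fps (my sketch).

    def conformed_speed(source_fps: float, timeline_fps: float) -> float:
        """Playback speed, as a fraction of real time, when footage is laid
        onto a slower timeline one source frame per timeline frame."""
        return timeline_fps / source_fps

    print(conformed_speed(60, 30))  # 0.5 -> smooth 50% slow motion
    print(conformed_speed(60, 24))  # 0.4 -> slower still, no repeated frames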

When taking still images, the low-light ability is pretty incredible. With the new SuperPhoto setting you can get that signature high saturation and contrast with noise reduction. It’s a great setting, although I noticed the subject in focus can’t be moving too fast or you will get some purple fringing. Used under the right circumstances, SuperPhoto is the next iteration of HDR.

I was surprised how much I used the GoPro Hero 7 Black’s auto-rotating menu feature when the camera was held vertically. The Hero 6 could shoot vertically, but with the addition of the auto-rotating menu, the Hero 7 Black encourages more vertical photos and videos. I found myself taking more vertical photos, especially outdoors — getting a lot more sky in the shots, which adds an interesting perspective.

Summing Up
In the end, the GoPro Hero 7 Black is a must-buy if you are looking for the latest and greatest action-cam or are on the fence about upgrading from the Hero 5 or 6. The Hypersmooth video stabilization is incredible. If you want to take it a step further, combining it with a Karma gimbal will give you a silky smooth shot.

I really fell in love with the TimeWarp function. Whether you are a prosumer filming your family at Disneyland or shooting a show in the forest, a quick TimeWarp is a great way to film some dynamic b-roll without any post processing.

Don’t forget the Hero 7 Black has voice control for hands-free operation. On the outside, the Hero 7 Black is actually black in color, unlike the Hero 6 (which is gray), and it has the number “7” labeled on it for easy finding in your case.

I would really love for GoPro to make these cameras charge wirelessly on a mat like my Galaxy phone does. GoPro action cameras would be great to just throw on a wireless charger that doubles as a file-transfer station. It gets cumbersome to remove a bunch of tiny memory cards or use a bunch of cables to connect your cameras, so why not make it wireless?! I’m sure they are thinking about things like that; in the meantime, focusing on stabilization was the right move in my opinion.

If GoPro can continue to make focused and powerful updates to their cameras, they will be here for a long time — and the Hero 7 is the right way to start.

Check out GoPro’s website for more info, including accessories like the Travel Kit, which features a little mini tripod/handle (called “Shorty”), a rubberized cover with a lanyard and a case for $59.99.

If you need the ultimate protection for your GoPro Hero 7 Black, look into GoPro Plus, which, for $4.99 a month, gives you VIP support, automatic cloud backup, access for editing on your phone from anywhere and no-questions-asked replacement of up to two cameras per year of the same model when something goes wrong. Compare all the new GoPro Hero 7 models on the company’s website.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Director Peter Farrelly gets serious with Green Book

By Iain Blair

Director, producer and writer Peter Farrelly is best known for the classic comedies he made with his brother Bob: Dumb and Dumber; There’s Something About Mary; Shallow Hal; Me, Myself & Irene; The Three Stooges; and Fever Pitch. But for all their over-the-top, raunchy and boundary-pushing comedy, those movies were always sweet-natured at heart.

Peter Farrelly

Now Farrelly has taken his gift for heartfelt comedy and put his stamp on a very different kind of film, Green Book, a racially charged feel-good drama inspired by a true friendship that transcended race, class and the 1962 Mason-Dixon line.

Starring Oscar-nominee Viggo Mortensen and Oscar-winner Mahershala Ali, it tells the fact-based story of the ultimate odd couple: Tony Lip, a bouncer from The Bronx, and Dr. Don Shirley, a world-class black pianist. Lip is hired to drive and protect the worldly and sophisticated Shirley during a 1962 concert tour from Manhattan to the Deep South, where they must rely on the titular “Green Book” — a travel guide to safe lodging, dining and business options for African-Americans during the segregation era.

Set against the backdrop of a country grappling with the valor and volatility of the civil rights movement, the two men are confronted with racism and danger as they challenge long-held assumptions, push past their seemingly insurmountable differences and embrace their shared humanity.

The film also features Linda Cardellini as Tony Vallelonga’s wife, Dolores, along with Dimiter D. Marinov and Mike Hatton as two-thirds of The Don Shirley Trio. The film was co-written by Farrelly, Nick Vallelonga and Brian Currie, and it reunites Farrelly with editor Patrick J. Don Vito, with whom he worked on the Movie 43 segment “The Pitch.” Farrelly also collaborated for the first time with cinematographer Sean Porter (read our interview with him), production designer Tim Galvin and composer Kris Bowers.

I spoke with Farrelly about making the film, his workflow and the upcoming awards season. After its Toronto People’s Choice win and Golden Globe nominations (Best Director, Best Musical or Comedy Motion Picture, Best Screenplay, Best Actor for Mortensen, Best Supporting Actor for Ali), Green Book looks like a very strong Oscar contender.

You told me years ago that you’d love to do a more dramatic film at some point. Was this a big stretch for you?
Not so much, to be honest. People have said to me, “It must have been hard,” but the hardest film I ever made was The Three Stooges… for a bunch of reasons. True, this was a bit of a departure for me in terms of tone, and I definitely didn’t want it to get too jokey — I tend to get jokey, so it could easily have gone that way. But right from the start we were very clear that the comedy would come naturally from the characters and how they interacted and spoke and moved, and so on, not from jokes.

So a lot of the comedy is quite nuanced, and in the scene where Tony starts talking about “the orphans” and Don explains that it’s actually about the opera Orpheus, Viggo has this great reaction and look that wasn’t in the script, and it’s much funnier than any joke we could have made there.

What sort of film did you set out to make?
A drama about race and race relations set in a time when it was very fraught, with light moments and a hopeful, uplifting ending.

It has some very timely themes. Was that part of the appeal?
Absolutely. I knew that it would resonate today, although I wish it didn’t. What really hooked me was their common ground. They really are this odd couple who couldn’t be more different — an uneducated, somewhat racist Italian bouncer, and this refined, highly educated, highly cultured doctor and classically trained pianist. They end up spending all this time together in a car on tour, and teach each other so much along the way. And at the end, you know they’ll be friends for life.

Obviously, casting the right lead actors was crucial. What did Viggo and Mahershala bring to the roles?
Well, for a start they’re two of the greatest actors in the world, and when we were shooting this I felt like an observer. Usually, I can see a lot of the actor in the role, but they both disappeared totally into these characters — but not in some method-y way where they were staying in character all the time, on and off the set. They just became these people, and Viggo couldn’t be less like Tony Lip in real life, and the same with Mahershala and Don. They both worked so hard behind the scenes, and I got a call from Steven Spielberg when he first saw it, and he told me, “This is the best buddy movie since Butch Cassidy and the Sundance Kid,” and he’s right.

It’s a road picture, but didn’t you end up shooting it all in and around New Orleans?
Yes, we did everything there apart from one day in northern New Jersey to get the fall foliage, and a day of exteriors in New York City with Viggo for all the street scenes. Louisiana has everything, from rolling hills to flats. We also found all the venues and clubs they play in, along with mansions and different looks that could double for places like Pennsylvania, Ohio, Indiana, Iowa, Missouri, Kentucky and Tennessee, as well as the Carolinas and the Deep South.

We shot for just 35 days, and Louisiana has great and very experienced crews, so we were able to work pretty fast. Then for scenes like Carnegie Hall, we used CGI in post, done by Pixel Magic, and we were also amazingly lucky when it came to the snow scenes set in Maryland at the end. We were all ready to use fake snow when it actually started snowing and sticking. We got a good three, four inches, which they told us hadn’t happened in a decade or two down there.

Where did you post?
We did most of the editing at my home in Ojai, and the sound at FotoKem, where we also did the DI with colorist Walter Volpatto.

Do you like the post process?
I love it. My favorite part of filmmaking is the editing. Writing is the hardest part, pulling the script together. And I always have fun on the shoot, but you’re always having to make sure you don’t screw up the script. So when you get to the edit and post, all the hard work is done in that sense, and you have the joy of watching the movie find its shape as you cut and add in the sound and music.

What were the big editing challenges, given there’s such a mix of comedy and drama?
Finding that balance was the key, but this film actually came together so easily in the edit compared with some of the movies I’ve done. I’ll never forget seeing the first assembly of There’s Something About Mary, which I thought was so bad it made me want to vomit! But this just flowed, and Patrick did a beautiful job.

Can you talk about the importance of music and sound in the film.
It was a huge part of the film and we had a really amazing pianist and composer in Kris Bowers, who worked a lot with Mahershala to make his performance as a musician as authentic as possible. And it wasn’t just the piano playing — Mahershala told me right at the start, “I want to know just how a pianist sits at the piano, how he moves.” So he was totally committed to all the details of the role. Then there’s all the radio music, and I didn’t want to use all the obvious, usual stuff for the period, so we searched out other great, but lesser-known songs. We had great music supervisors, Tom Wolfe and Manish Raval, and a great sound team.

We’re already heading into the awards season. How important are awards to you and this film?
Very important. I love the buzz about it because that gets people out to see it. When we first tested it, we got 100%, and the studio didn’t quite believe it. So we tested again, with “a tougher” audience, and got 98%. But it’s a small film. Everyone took pay cuts to make it, as the budget was so low, but I’m very proud of the way it turned out.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Red upgrades line with DSMC2 Dragon-X 5K S35 camera

Red Digital Cinema has further simplified its product line with the DSMC2 Dragon-X 5K S35 camera. Red also announced the DSMC2 Production Module and DSMC2 Production Kit, which are coming in early 2019. More on that in a bit.

The DSMC2 Dragon-X camera uses the Dragon sensor technology found in many of Red’s legacy cameras with an evolved sensor board to enable Red’s enhanced image processing pipeline (IPP2) in camera.

In addition to IPP2, the Dragon-X provides 16.5 stops of dynamic range, as well as 5K resolution up to 96fps in full format and 120fps at 5K 2.4:1. Consistent with the rest of Red’s DSMC2 line-up, Dragon-X offers 300MB/s data transfer speeds and simultaneous recording of Redcode RAW and Apple ProRes or Avid DNxHD/HR.

The new DSMC2 Dragon-X is priced at $14,950 and is also available as a fully-configured kit priced at $19,950. The kit includes: 480GB Red Mini-Mag; Canon lens mount; Red DSMC2 Touch LCD 4.7-inch monitor; Red DSMC2 outrigger handle; Red V-Lock I/O expander; two IDX DUO-C98 batteries with VL-2X charger; G-Technology ev Series Red Mini-Mag reader; Sigma 18-35mm F1.8 DC HSM art lens; Nanuk heavy-duty camera case.

Both the camera and kit are available now at red.com or through Red’s authorized dealers.

Red also announced the new DSMC2 Production Module. Designed for pro shooting configurations, this accessory mounts directly to the DSMC2 camera body and incorporates an industry standard V-Lock mount with integrated battery mount and P-Tap for 12V accessories. The module delivers a comprehensive array of video, XLR audio, power and communication connections, including support for 3-pin 24V accessories. It has a smaller form factor and is more lightweight than Red’s RedVolt Expander with a battery module.

The DSMC2 Production Module is available to order for $4,750 and is expected to ship in early 2019. It will also be available as a DSMC2 Production Kit that includes the DSMC2 Production Module and DSMC2 production top plate. The DSMC2 Production Kit is also available to order, for $6,500, and is expected to ship in early 2019.

Scarlet-W owners can upgrade to DSMC2 Dragon-X for $4,950 through Red authorized dealers or directly from Red.

DP Chat: Green Book’s Sean Porter

Sean Porter has worked as a cinematographer on features, documentaries, short films and commercials. He was nominated for a Film Independent Spirit Award for Best Cinematography for his work on It Felt Like Love, and his credits include 20th Century Women, Green Room, Rough Night and Kumiko, the Treasure Hunter.

His most recent collaboration was with director Peter Farrelly on Green Book, which is currently in theaters. Set in 1962, the film follows Italian-American bouncer/bodyguard Tony Lip (Academy Award-nominee Viggo Mortensen) and world-class black pianist Dr. Don Shirley (Academy Award-winner Mahershala Ali) on a concert tour from Manhattan to the Deep South. They must rely on “The Green Book” to guide them to the few establishments that were then safe for African-Americans. Confronted with racism and danger — as well as unexpected humanity and humor — they are forced to set aside differences to survive and thrive on the journey of a lifetime.

Green Book director Peter Farrelly (blue windbreaker) with DP Sean Porter (right, brown jacket).

Porter chose the Alexa Mini mounted with Leica Summilux-C lenses to devise the look for Green Book. End-to-end post services were provided by FotoKem, from dailies at its New Orleans site to final color and deliverables in Burbank.

We spoke to him recently about his rise to director of photography and his work on Green Book:

How did you become interested in cinematography?
My relationship with cinematography, and really filmmaking, developed over many years during my childhood. I didn’t study fine art or photography in school, but discovered it later as many others do. I went in through the front door when I was probably 12 or so, and it’s been a long road.

I’m the oldest of four — two brothers and a sister. We grew up in a small town about an hour outside of Seattle, and we had a modest yard that butted up to the “back woods.” It was an event when the neighborhood kids got on bikes and rode a half mile or so to the only small convenience store around. There wasn’t much to do there, so we naturally had to be pretty inventive in our play. We’d come home from school, put on the TV, and at the time Movie Magic was airing on The Discovery Channel. That show honestly was a huge inspiration, not only to me but to my brothers as well, who are also visual artists. It was right before Jurassic Park changed the SFX landscape — a time when everything was still done photographically, by hand. There were episodes showing how these films achieved all sorts of amazing images using rather practical tools and old-school artistry.

My dad was always keen on technology and he had various camcorders throughout the years, beginning with the VHS back when the recorder had to be carried separately. As the cameras became more compact and easier to use, my brothers and I would make all kinds of films, trying to emulate what we had seen on the show. We were experimenting with high-level concepts at a very young age, like forced perspective, matte paintings, miniatures (with our “giant” cat as the monster) and stop motion.

I picked up the technology bug and by the time I was in middle school I was using our family’s first PC to render chromakeys — well before I had access to NLEs. I was conning my teachers into letting me produce “video” essays instead of writing them. Later we moved closer to Seattle and I was able to take vocational programs in media production and went on to do film theory and experimental video at the University of Washington, where I think I started distilling my focus as a cinematographer.

What inspires you artistically? And how do you simultaneously stay on top of advancing technology that serves your vision?
As I mentioned earlier, I didn’t discover film via fine art or photography, so I didn’t have that foundation of image making and color theory. I learned it all just by doing and paying attention to what I responded to. I didn’t have famous artists to lean on. You could say it was much more grassroots. My family loved popular films, especially American comedies and action-adventure. We watched things like Spies Like Us, Star Wars, Indiana Jones and The Princess Bride. It was all pure entertainment, of course. I wasn’t introduced to Bergman or Fellini until much, much later. As we got older, my film language expanded and I started watching films by Lynch and Fincher. I will say that those popular ‘90s films had a great combination of efficient storytelling and technical craft that still resonates with me to this day. It’s very much a sort of “blue-collar” film language.

Staying on top of the technology oscillates between an uncontrollable obsession and an unbearable chore. I’ve noticed over the years that I’m becoming less and less invigorated by the tech — many of the new tools are invaluable, but I love relying on my team to filter out the good from the hype so I can focus on how best to tell the story. Some developments you simply can’t ignore; I remember the day I saw footage in class from a Panasonic DVX100. It changed everything!

What new technology has changed the way you work (looking back over the past few years)?
I feel like the digital cameras, while continuing to get better, have slowed down a bit. There was such a huge jump between the early 2000s and the late 2000s. There’s no question digital acquisition has changed the way we make images — and it’s up for debate if it’s been a wholly positive shift. But generally, it’s been very empowering for filmmakers, especially on smaller budgets. It’s given me and my peers the chance to create cinema-quality images on projects that couldn’t afford to shoot on 16mm or 35mm. And over the last five years, the gap between digital and film has diminished, even vanished for many of us.

But if I had to single out one development it’s probably been LEDs over the last two or three years. Literally, five years ago it was all HMI and Kino Flos, and now I don’t remember the last time I touched a Kino. Sometimes we go entire jobs without firing up an HMI. The LEDs have gotten much better recently, and the control we have on set is unprecedented. It makes you wonder how we did it before!

What are some of your best practices or rules you try to follow on each job?
Every time I start a new project, I say to myself, “This time I’m going to get my shit together.” I think I’m going to get organized, develop systems, databases, FileMaker apps, whatever, and streamline the process so I can be more efficient. I’ll have a method for combining scouting photos with storyboards and my script notes so everything is in one place and I can disseminate information to relevant departments. Then I show up at prep and realize the same thing I realize on every movie: They are all so, so different.

It’s an effort in futility to think you can adopt a “one-size-fits-all” mentality to preproduction. It just doesn’t work. Some directors storyboard every shot. Some don’t even make shot lists. Some want to previs every scene during the scouting process using stand-ins, others won’t even consider blocking until the actors are there, on the day. So I’ve learned that the efficiency is found in adaptation. My job is to figure out how to get inside my director’s head, see things the way they are seeing them and help them get those ideas into actions and decisions. There’s no app for that, unfortunately! I suppose I try to really listen, and not just to the words my director uses to describe things, but to the subtext and what is between the lines. I try to understand what’s really important to them so I can protect those things and fight for them when the pressure to compromise starts mounting.

Linda Cardellini as Dolores Vallelonga and Viggo Mortensen as Tony Vallelonga in “Green Book,” directed by Peter Farrelly.

On a more practical note, I read many years ago about a DP who would stand on the actor’s mark and look back toward the camera — just to be aware of what sort of environment they were putting the talent in. Addressing a stray glare or a distracting stand might make a big difference to the actor’s experience. I try to do that as often as I can.

Explain your ideal collaboration with the director when setting the look of a project.
It’s hard to reduce such an array of possible experiences down to an “ideal,” as an ideal situation for one film might not be ideal for another depending on the experience the director wants to create on set. I’ve had many different, even conflicting, “processes” with my directors because it suited that specific collaboration. Again, it’s about adapting, being a chameleon to their process. It’s not about coming in and saying, “This is the best way to do this.”

I remember with one director we basically locked ourselves in her apartment for three days and just watched films. We’d pause them now and then and discuss a shot or a scene, but a lot of the time it was just about being together experiencing this curated body of work and creating a visual foundation for us to work from. With another director, we didn’t really watch any films at all, but we did lots and lots of testing. Camera tests, lens tests, lighting tests, filter tests, makeup and SFX tests. And we’d go into a DI suite and look at everything and talk about what was working and what wasn’t. He was also a DP, so I think that technical, hands-on approach made sense to him. I think I tested every commercially available fluorescent tube to find the right color for that film. I’ll admit, as convenient as it would be to have a core strategy to work from, I think I would tire of it. I love walking onto a film and saying, “OK, how are we going to do this?”

Tell us about Green Book. How would you describe the overarching look of the film that you and Peter Farrelly wanted to achieve?
I think, maybe more than I want to admit, the look of my films is a culmination of the constraints imposed either by myself or by production. You’re only going to have a certain amount of time and money for each scene, so calculations and compromises must be made there. You have to work with the given location, the time of day and how it’s going to be art directed, so that adds a critical layer. Peter wanted to work a certain way with his actors and have lots of flexibility, so you adapt your process to make that work. Then you give yourself certain creative constraints, and somewhere in between all those things pushing on each other, the look of the film emerges.

That sounds a little arbitrary, and Pete and I did have some discussions about how it should look, but they were broad conversations. Honesty and authenticity were very important to Pete. He didn’t want things to ever look or feel disingenuous. My very first conversation with him after I was hired was about the car work. He was getting pressure to shoot it all on stage with LED screens. I was honest with him. I told him he’d probably get more time with his actors and more predictable results on stage, but he’d get more realism in the look and in the performances by dragging the entire company out onto the open road and battling the elements.

So we shot all the car work practically, save for a few specific night scenes. I took his words to heart and tried to shape the look out of what was authentic to the time. My gaffer and I researched what lighting fixtures were used then — it wasn’t like it is now, with hundreds of different light sources. Back then it was basically tungsten, fluorescent, neon, mercury and sodium. We limited our palette to those colors and tuned all our fixtures accordingly. I also avoided stylistic choices that would have made the film feel dated or “affected” — the production design, wardrobe and HMU departments did all of that. Pete and I wanted the story to feel just as relevant now as it did then, so I kept the images clean and largely unadulterated.

How early did you get involved in the production?
I came on about five weeks before shooting. I prepped for one week and then we were all sent home! Some negotiations had stalled production and for several weeks I didn’t know if we would start up again. I’m very grateful everyone made it work so we could make the film.

How did you go about choosing the right camera and lenses for Green Book?
While 35mm would have been a great choice aesthetically for the film, there were some real production advantages to shooting digitally. As we were shooting all the car work practically, it was my prerogative to get as much of the coverage inside the car accomplished at a go. Changing lighting conditions, road conditions and tight schedules prohibited me from shooting an angle, then pulling over and re-rigging the camera. We had up to three Alexa Mini cameras inside the car at once, and many times that was all the coverage planned for the scene, save for a couple cutaways. This allowed us to get multi-page scenes done very efficiently while maintaining light continuity, keeping the realism of the landscapes and capturing those happy (and sometimes sad) accidents.

I chose some very clean, very fast, and very portable lenses: the Leica Summilux-Cs. I used to shoot stills with various Leica film cameras and developed an affinity for the way the lenses rendered. They are always sharp, but there’s some character to the fall off and the micro-contrast that always make faces look great. I had shot many of my previous films with vintage lenses with lots of character and could have easily gone that route, but as I mentioned, I was more interested in removing abstractions — finding something more modern yet still classic and utilitarian.

Any challenging scenes that you are particularly proud of?
Not so much a particular scene, but a spanning visual idea. Many times when you start a film, you’ll have some cool visual arc you want to employ, and along the way various time, location or schedule constraints eventually break it all down. Then you’re left with a few disparate elements that don’t connect the way you wanted them to. Knowing I would face those same challenges, but having a bit more resources than on some of my other films, I aimed low but held my ground: I wanted the color of the streetlights to work on a spectrum, shifting between safety and danger depending on the scene or where things were heading in the story.

I broke the film down by location and worked with my gaffer to decide where the environment would be majority sodium (safe/familiar/hopeful) and where it would be mercury (danger/fear/despair). It sounds very rudimentary but when you try to actually pull it off with so many different locations, it can get out of hand pretty quickly. And, of course, many scenes had varying ratios of those colors. I was pleased that I was able to hold onto the idea and not have it totally disintegrate during the shoot.

What’s your go-to gear (camera, lens, mount/accessories) — things you can’t live without?
Go-to tools change from job to job, but the one I rely on more than any other is my crew. Their ideas, support and positive energy keep me going in the darkest of hours! As for the nuts and bolts — lately I rarely do a job without SkyPanels and LiteMats. For my process on set, I’ve managed to get rid of just about everything except my light meter and my digital still camera. The still camera is a very fast way to line up shots, and I can send images to my iPad and immediately communicate framing ideas to all departments. It saves a lot of time and guesswork!

Main Image: Sean Porter (checkered shirt) on set of Green Book, pictured with director Peter Farrelly.

Steve McQueen on directing Widows

By Iain Blair

British director/writer/producer Steve McQueen burst onto the international scene in 2013 when his harrowing 12 Years a Slave dominated awards season, winning an Academy Award, a Golden Globe, a BAFTA and a host of others. His directing was also recognized with many nominations and awards.

Now McQueen, who also helmed the 2011 feature Shame (Michael Fassbender, Carey Mulligan) is back with the film Widows.

A taut thriller, 20th Century Fox’s Widows is set in contemporary Chicago in a time of political and societal turmoil. When four armed robbers are killed in a botched heist, their widows — with nothing in common except a debt left behind by their dead husbands’ criminal activities — take fate into their own hands to forge a future on their own terms.

With a screenplay by Gillian Flynn and McQueen himself — and based on the old UK television miniseries of the same name — the film stars, among others, Viola Davis, Michelle Rodriguez, Colin Farrell, Brian Tyree Henry, Daniel Kaluuya, Carrie Coon, Jon Bernthal, Robert Duvall and Liam Neeson.

The production team includes Academy Award-nominated editor Joe Walker (12 Years a Slave), Academy Award-winning production designer Adam Stockhausen (The Grand Budapest Hotel) and director of photography Sean Bobbitt (12 Years a Slave).

I spoke with McQueen, whose credits also include 2008’s Hunger, about making the film and his love of post.

This isn’t just a simple heist movie, is it?
No, it isn’t. I wanted to make an all-encompassing movie, an epic in a way, about how we live our daily lives and how they’re affected by politics, race, gender, religion and corruption, and do it through this story. I remember watching the TV series as a kid and how it affected me — how strong all these women were — and I decided to change the location from London to Chicago, which is really an under-used city in movies, and make it a more contemporary view of all these issues.

You assembled a great cast, led by Oscar-winner Viola Davis. What did she bring to the table?
So much weight and gravitas. She’s like an iceberg. There’s so much hidden depth in everything she does, and there’s this well of meaning and emotion she brings to the role, and then everyone has to step up to that.

What were the main technical challenges in pulling it all together?
The big one was logistics and dealing with all the Chicago locations. We had over 60 locations, all over the city, and 81 speaking parts. So there was a lot of planning, and if one thing got stuck it threw off the whole schedule. It would have been almost impossible to reschedule some of the scenes.

How tough was the shoot?
Pretty tough. They’re always grueling, and when you’re writing a script you don’t always think about how many night shoots you’re going to face, and you forget about this big machine you have to bring with you to all the locations. Trying to make any quick change or adjustment is like trying to turn the Titanic. It takes a while.

How early on did you start integrating post and all the VFX?
From day one. You have to when you have a big production with a set release date, so we began cutting and assembling while I shot.

Where did you post?
In Amsterdam, where I live, and then we finished it off in London.

Do you like the post process?
I love it. It’s my favorite part, as you have civilized hours — 9 till 5 or whatever — and you’re in total control. You’re not having to deal with 40 or 50 people. It’s just you and the editor in a dark room, actually making the film.

Joe Walker has cut all of your films, including Hunger and Shame, as well as Blade Runner 2049, Arrival and Sicario. Can you talk about working with him?
He wasn’t on set, and we had someone else assembling stuff as Joe was still finishing up Blade Runner. He came in when I got back to Amsterdam. Joe and I go way back to 2007, when we did Hunger, and we always work very closely together. I sit right next to him, and I’m there for every single cut, dissolve, whatever. I’m very present. I’m not one of those directors who comes in, gives some notes and then disappears. I don’t know how you do that. I love editing and finding the pace and rhythm. What makes Joe such a great editor is that he started off in music, so he has a great sense of how to work with sound.

What were the big editing challenges?
There are all these intertwined stories and characters, so it’s about finding the right balance and tone and rhythm. The whole opening sequence is all about pulling the audience in and then grabbing them with a caress and then a slap — and another caress and slap — as we set up the story and the main characters. Then there are so many parts to the story that it’s like this big Swiss watch: all these moving parts and different functions. But you always go back to the widows. A script isn’t a film, it’s a guide, so you’re feeling your way in the edit, and seeing what works and what doesn’t. The whole thing has to be cohesive, one thing. That’s your goal.

What about the visual effects?
They were all done by One Of Us and Outpost VFX (both in the UK), but the VFX were all about enhancing stuff, not dazzling the audience. The aim was always for realism, not fantasy.

Talk about the importance of sound and music.
They’re huge for me, and it’s interesting, as a lot of the movie has no sound or music. At the beginning, there’s just this one chord on a violin when we get to the title card, and that’s it. There’s no score for two-thirds of the movie, and then we only have some ambient music and Procol Harum’s “A Whiter Shade of Pale” and a Van Morrison song. That’s why all the sound design is so important. When the women lose their husbands, I didn’t want it to be hammy and tug at your heartstrings. I wanted you to feel that pain and that grief and that journey. When they start to act and take control of their lives, that’s when the music and sound kick in, almost like this muscular drive. Our supervising sound editor James Harrison did a great job with all that. We did all the mixing in Atmos at De Lane Lea in London.

Where did you do the DI and how important is it to you?
We did it at Company 3 London with colorist Tom Poole, and it’s very important. We shot on film, and our DP Sean and I spent a lot of time just talking about the palette and the look. When you’re shooting in over 60 locations, it’s not so much about putting your own stamp and look on them, but about embracing what they offer you visually and then tweaking it.

For the warehouse scenes, there was a certain mood and it had crappy tungsten lighting, so we changed it a bit to feel more tactile, and it was the same with most of the locations. We’d play with the palette and the visual mood, which the DI allows you to do so well.

Did the film turn out the way you hoped?
(Laughs) I always hope it turns out better than I hoped or imagined, as your imagination can only take you so far. What’s great is when you go beyond that and come up with something cooler than you could have imagined. That’s what I always want.

What’s next?
I’ve got a few things cooking on the stove, and I should finish writing something in the next few months and then start it next year.

All Images Courtesy of 20th Century Fox/Merrick Morton


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and The Boston Globe.

DP Chat: Polly Morgan, ASC, BSC

Cinematographer Polly Morgan, who became an active member of the ASC in July, had always been fascinated with films, but she got the bug for filmmaking as a teenager growing up in Great Britain, when a film crew shot at her family’s farmhouse.

“I was fixated by the camera and cranes that were being used, and my journey toward becoming a cinematographer began.”

We reached out to Morgan recently to talk about her process and about working on the FX show Legion.

What inspires you artistically? And how do you simultaneously stay on top of advancing technology that serves your vision?
I am inspired by the world around me. As a cinematographer you learn to look at life in a unique way, noticing elements that you might not have been aware of before: reflections, bouncing light, colors, atmosphere and so much more. When I have time off, I love to travel and experience different cultures and environments.

I spend my free time reading various periodicals to stay on top of the latest developments in technology. Publications such as the ASC’s magazine help to not only highlight new tools but also people’s experiences with them. The filmmaking community is united by this exploration, and there are many events where we are able to get together and share our thoughts on a new piece of equipment. I also try to visit different vendors to see demos of new advances in technology.

Has any recent or new technology changed the way you work?
Live on-set grading has given me more control over the final image when I am not available for the final DI. Over the last two years, I have worked more on episodic television, and I am often unable to go and sit with the colorist to do the final grade, as I am working on another project. Live grading enables me to get specific with adjustments on the set, and I feel confident that with good communication, these adjustments will be part of the final look of the project.
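For readers curious what those live-grade adjustments actually are: they typically travel as ASC CDL values (a slope, offset and power per color channel, plus an overall saturation). Here is a minimal Python sketch of the standard CDL transform; the numbers are illustrative, and real pipelines apply it in whatever color space the show has standardized on.

    # Minimal ASC CDL sketch: out = clamp(in * slope + offset) ** power,
    # followed by the CDL saturation operator (Rec. 709 luma weights).
    def asc_cdl(rgb, slope, offset, power, saturation=1.0):
        graded = []
        for v, s, o, p in zip(rgb, slope, offset, power):
            x = max(v * s + o, 0.0)   # clamp before the power function
            graded.append(x ** p)
        r, g, b = graded
        luma = 0.2126 * r + 0.7152 * g + 0.0722 * b
        return tuple(luma + saturation * (c - luma) for c in (r, g, b))

    # Example: warm the image slightly and lift the blacks a touch.
    print(asc_cdl((0.18, 0.18, 0.18),
                  slope=(1.05, 1.0, 0.95),
                  offset=(0.01, 0.01, 0.01),
                  power=(1.0, 1.0, 1.0)))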

How do you go about choosing the right camera and lenses to achieve the right look for a story?
I like to vary my choice of camera and lenses depending on what story I am telling.
When it comes to cameras, resolution is an important factor, depending on how the project is going to be broadcast, whether there are specific requirements to be met by the distributor, or whether we are planning any unique framing that might require a crop into the sensor.

Also, ergonomics play a part. Am I doing a handheld show, or mainly one in studio mode? Or are there any specifications that make the camera unique that will be useful for that particular project? For example, I used the Panasonic VariCam when I needed an extremely sensitive sensor for night driving around downtown Los Angeles. Lenses are chosen for contrast and resolution and speed. Also, sometimes size and weight play a part, especially if we are working in tight locations or doing lots of handheld.

What are some best practices, or rules, you try to follow on each job?
Every job is different, but I always try to root my work in naturalism to keep it grounded. I feel like a relatable story can have the most impact on its viewer, so I want to make images that the audience can connect with and be drawn into emotionally. As cinematographers, we want our work to be invisible, yet always supporting and enhancing the narrative.

On set, I always ensure a calm and pleasant working environment. We work long and bizarre hours, and the work is demanding, so I always strive to make it an enjoyable and safe experience for everyone.

Explain your ideal collaboration with the director when setting the look of a project.
It is always my aim to get a clear idea of what the director is imagining when they describe a certain approach. As we are all so different, it is really about establishing a language that can be a shorthand on set and help me to deliver exactly what they want. It is invaluable to look at references together, whether that is art, movies, photography or whatever.

As well as the “look,” I feel it is important to talk about pace and rhythm and how we will choose to represent that visually. The ebb and flow of the narrative needs to be photographed, and sometimes directors want to do that in the edit, or sometimes we express it through camera movement and length of shots. Ideally, I will always aim to have a strong collaboration with a director during prep and build a solid relationship before production begins.

How do you typically work with a colorist?
This really varies from project to project, depending if I am available to sit in during the final DI. Ideally, I would work with the colorist from pre-production to establish and build the look of the show. I would take my camera tests to the post house and work on building a LUT together that would be the base look that we work off while shooting.

I like to have an open dialogue with them during the production stage so they are aware and involved in the evolution of the images.

During post, this dialogue continues as VFX work starts to come in and we start to bounce the work between the colorist and the VFX house. Then in the final grade, I would ideally be in the room with both the colorist and the director so we can implement and adjust the look we have established from the start of the show.

Tell us about FX’s Legion. How would you describe the general look of the show?
Legion is a love letter to art. It is inspired by anything from modernist pop art to old Renaissance masters. The material is very cerebral, and there are many mental planes or periods of time to express visually, so it is a very imaginative show. It is a true exploration of color and light and is a very exciting show to be a part of.

How early did you get involved in the production?
I got involved with Legion starting in Season 2. I work alongside Dana Gonzales, ASC, who established the look of the show in Season 1 with creator Noah Hawley. My work began during the production stage, working with various directors as they prepped and shot their individual episodes.

Any challenging scenes that you are particularly proud of?
Most of the scenes in Legion take a lot of thought to figure out… contextually as well as practically. In Season 2, Episode 2, a lot of the action takes place out in the desert. After a full day, we still had a night shoot to complete with very little time. Instead of taking time to try to light the whole desert, I used one big soft overhead and then lit the scene with flashlights on the characters’ guns and the headlights of the trucks. I added blue streak filters to create multiple horizontal blue flares from each on-camera source (headlights and flashlights), which provided a very striking lighting approach.

FX’s Legion, Season 2, Episode 2

With the limited hours available, we didn’t have enough time to complete all the coverage we had planned, so instead we created one very dynamic camera move that started overhead looking down at the trucks and then swooped down as the characters ran out to approach the mysterious object in the scene. We followed the characters in the one move, ending in a wide group shot. With this one master, we only needed a quick reverse POV to complete the scene. The finished scene was inventive and exciting, a product of limitations.

What’s your go-to gear (camera, lens, mount/accessories you can’t live without)?
I don’t really have any go-to gear except a light meter. I vary the equipment I use depending on what story I am telling. LED lights are becoming more and more useful, especially when they are color- and intensity-controllable and battery-operated. When you need just a little more light, these lights are quick to throw in and often save the day!

iOgrapher now offering Multi Case for Android and iOS phones

iOgrapher has debuted the iOgrapher Multi Case rig for mobile filmmaking. It’s the company’s first non-iOS offering. An early pioneer of mobile filmmaking cases for iOS devices, iOgrapher is now targeting mobile filmmakers with a flexible design that supports recent-model iOS and Android phones of all sizes.

The iOgrapher Multi Case features:

• Slide-in function for a strong and secure fit
• The ability to attach lighting and mics for higher quality mobile video production
• Flexible mount options for any standard tripod in landscape or portrait mode
• ¼-20 screw mounts on handles to attach accessories
• Standard protective cases for your phone can be used — filmmakers no longer need to remove protective cases to use the iOgrapher Multi Case
• It works with Moment Lenses. Users do not need to remove Moment Lens cases or lenses to use the iOgrapher Multi Case
• The Multi Case is designed to work with iPhone 6 and later models, and has been tested to work with popular Samsung, Google Pixel, LG and Motorola phones.

With the launch of the Multi Case, iOgrapher is introducing a new design. The capabilities and mounting options have evolved as a result of customer reviews and feedback, as well as real-world use cases from professional broadcasters, filmmakers, pro-sport coaches and training facilities.

The iOgrapher Multi Case is available for pre-order and is priced at $79. It will ship at the end of November.

Timecode Systems’ timecode-over-bluetooth solution

Timecode Systems has introduced UltraSync Blue, which uses the company’s new patented timecode sync and control protocol. UltraSync Blue transmits timecode over Bluetooth with sub-frame accuracy, embedding it wirelessly and directly into the media file of a connected recording device.
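To see why a shared timecode stamp is enough to line clips up automatically in an NLE, here is an illustrative Python sketch; it assumes non-drop-frame timecode and a common frame rate, and the values are made up.

    # Align two clips by converting their start timecode (non-drop-frame)
    # to an absolute frame count at a shared frame rate.
    def tc_to_frames(tc, fps):
        hh, mm, ss, ff = (int(x) for x in tc.split(":"))
        return ((hh * 60 + mm) * 60 + ss) * fps + ff

    camera_start = tc_to_frames("14:32:10:05", fps=25)
    audio_start = tc_to_frames("14:31:58:13", fps=25)
    print(camera_start - audio_start)  # 292: frames to slip the audio track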

“The beauty of this solution is that the timecode is embedded directly into a timecode track, so there is no need for any additional conversion software; the metadata is in the right format to be automatically recognized by professional NLEs,” reports Paul Scurrell, CEO of Timecode Systems. “This launches a whole new era for multicamera video production in which content from prosumer and consumer audio and video has the potential to be combined, aligned and edited together with ease and efficiency, and with the same high level of accuracy as footage from top-end, professional recording devices.”

The device itself measures just 55mm x 43mm x 17mm, weighs only 36g and costs $179 (US), making it small enough to fit neatly into a pocket during filming and affordable enough to be used on any type of production, from documentaries, news gathering and reality shows to wedding videos and independent films.

By removing the restrictions of a wired connection, crews not only benefit from the convenience of being cable-free, but also from even more versatility in how they can sync content. One feature of UltraSync Blue is the ability to use a single unit to sync up to four recording devices shooting in close range over Bluetooth — a great option for small shoots and interviews, and also for content captured for vlogs and social media.

However, as filming is not always this simple, especially in the professional world, UltraSync Blue is also designed to work seamlessly with the rest of the Timecode Systems product range. For more complicated shoots, sprawling filming locations and recording using a variety of professional equipment, UltraSync Blue can be connected to devices over Bluetooth and then synced over robust, long-range RF to other camera and audio recorders using Timecode Systems units. This also includes any equipment containing a Timecode Systems OEM sync module, such as the AtomX Sync module that was recently launched by Atomos for the new Ninja V.

“With more and more prosumer and consumer cameras and sound recorders coming with built-in Bluetooth technology, we saw an opportunity to use this wireless connectivity to exchange timecode metadata,” Scurrell adds. “By integrating a robust Bluetooth Low Energy chip into UltraSync Blue, we’ve been able to create a simple, low-cost timecode sync product that has the potential to work with any camera or sound recording device with Bluetooth connectivity.”

Timecode Systems is now working with manufacturers and app developers to adopt its new super-accurate timing protocol into their Bluetooth-enabled products. At launch, both the MAVIS professional camera app and Apogee MetaRecorder app (both for iPhone) are already fully compatible, allowing — for the first time — sound and video recorded on iPhone devices to be synchronized over the Timecode Systems network.

“It’s been an exciting time for sync technology. In the past couple of years, we’ve seen some massive advancements not only in terms of reducing the size and cost of timecode solutions, but also with solutions becoming more widely compatible with more consumer-level devices such as GoPro and DSLR cameras,” Scurrell explains. “But there was still no way to embed frame-accurate timecode into sound and video recordings captured on an iPhone; this was the biggest thing missing from the market. UltraSync Blue, in combination with the MAVIS and MetaRecorder apps, fills this gap.”

Zoom Corporation is working on new releases of its H3-VR Handy Recorder and F8n MultiTrack Field Recorder. When released later this year, both of these Zoom sound recorders will be able to receive timecode over Bluetooth from the UltraSync Blue.

Timecode Systems is now taking orders for UltraSync Blue and will be shipping in October 2018.

New CFast 2.0 card for ARRI Alexa Mini and Amira cameras

ARRI has introduced the ARRI Edition AV Pro AR 256 CFast 2.0 card by Angelbird, which has been designed and certified for use in the ARRI Alexa Mini and Amira camera systems and can be used for ProRes and MXF/ARRIRAW recording. (Support for new CFast 2.0 cards is currently not planned for ALEXA XT, SXT(W) and LF cameras.)

ARRI has worked closely with Angelbird Technologies, based in Vorarlberg, Austria. Angelbird is no stranger to film production, and some of their gear can be found at ARRI Rental European locations.

For the ARRI Edition CFast card, the Angelbird team developed an ARRI-specific card that uses a combination of thermally conductive material and so-called underfill to provide superior heat dissipation from the chips and to secure the electronic components against mechanical damage.

The result, according to ARRI, is a rock-solid 256GB CFast 2.0 card with stable recording performance across the entire storage space. The ARRI Edition AV Pro AR 256 memory card is available from ARRI and other sales channels offering ARRI products.

GoPro introduces new Hero7 camera lineup

GoPro’s new Hero7 lineup includes the company’s flagship Hero7 Black, which comes with a timelapse video mode, live streaming and improved video stabilization. The new video stabilization, HyperSmooth, allows users to capture professional-looking, gimbal-like stabilized video without a motorized gimbal. HyperSmooth also works underwater and in high-shock and wind situations where gimbals fail.

With Hero7 Black, GoPro is also introducing a new form of video called TimeWarp. TimeWarp Video applies a high-speed, “magic-carpet-ride” effect, transforming longer experiences into short, flowing videos. Hero7 Black is the first GoPro to live stream, enabling users to automatically share in realtime to Facebook, Twitch, YouTube, Vimeo and other platforms internationally.

Other Hero7 Black features:

  • SuperPhoto – Intelligent scene analysis for professional-looking photos via automatically applied HDR, Local Tone Mapping and Multi-Frame Noise Reduction
  • Portrait Mode – Native vertical-capture for easy sharing to Instagram Stories, Snapchat and others
  • Enhanced Audio – Re-engineered audio captures increased dynamic range, new microphone membrane reduces unwanted vibrations during mounted situations
  • Intuitive Touch Interface – 2-inch touch display with simplified user interface enables native vertical (portrait) use of camera
  • Face, Smile + Scene Detection – Hero7 Black recognizes faces, expressions and scene-types to enhance automatic QuikStory edits on the GoPro app
  • Short Clips – Restricts video recording to 15- or 30-second clips for faster transfer to phone, editing and sharing.
  • High Image Quality – 4K/60 video and 12MP photos
  • Ultra Slo-Mo – 8x slow motion in 1080p240
  • Waterproof – Waterproof without a housing to 33ft (10m)
  • Voice Control – Verbal commands are hands-free in 14 languages
  • Auto Transfer to Phone – Photos and videos move automatically from camera to phone when connected to the GoPro app for on-the-go sharing
  • GPS Performance Stickers – Users can track speed, distance and elevation, then highlight them by adding stickers to videos in the GoPro app

The Hero7 Black is available now for pre-order, priced at $399.

Panavision, Sim, Saban Capital agree to merge

Saban Capital Acquisition Corp., a publicly traded special purpose acquisition company, Panavision and Sim Video International have agreed to combine their businesses to create a premier global provider of end-to-end production and post production services to the entertainment industry. Under the terms of the business combination agreement, Panavision and Sim will become wholly owned subsidiaries of Saban Capital Acquisition Corp. Upon completion, Saban Capital Acquisition Corp. will change its name to Panavision Holdings Inc. and is expected to continue to trade on the Nasdaq stock exchange. Kim Snyder, president and chief executive officer of Panavision, will serve as chairman and chief executive officer. Bill Roberts, chief financial officer of Panavision, will serve in that role for the combined company.

Panavision designs, manufactures and provides high-precision optics and camera technology for the entertainment industry and is a leading global provider of production equipment and services. Sim is a leading provider of production and post production solutions with facilities in Los Angeles, Vancouver, Atlanta, New York and Toronto.

“This acquisition will leverage the best of Panavision’s and Sim’s resources by providing comprehensive products and services to best address the ever-adapting needs of content creators globally,” says Snyder.

“We’re combining the talent and integrated services of Sim with two of the biggest names in the business, Panavision and Saban,” adds James Haggarty, president and CEO of Sim. “The resulting scale of the new combined enterprise will better serve our clients and help shape the content-creation landscape.”

The respective boards of directors of Saban Capital Acquisition Corp., Panavision and Sim have unanimously approved the merger with completion subject to Saban Capital Acquisition Corp. stockholder approval, certain regulatory approvals and other customary closing conditions. The parties expect that the process will be completed in the first quarter of 2019.

Our Virtual Production Roundtable

By Randi Altman

Evolve or die. That old adage, while very dramatic, fits well with the state of our current production workflows. While most productions are now shot digitally, the warmth of film is still in the back of pros’ minds, and camera makers and directors of photography often look for ways to retain that warmth in digital, whether it’s through lighting, vintage lenses, color grading, newer technology or all of the above.

There is also the question of setting looks on-set and how 8K and HDR are affecting the picture and workflows. And let’s not forget shooting for OTT series. There is a lot to cover!

In an effort to get a variety of perspectives, we reached out to a few cinematographers and some camera manufacturers to talk trends and technology. Enjoy!

Claudio Miranda, ASC

Claudio Miranda is a Chilean cinematographer who won an Oscar for his work on Life of Pi. He also worked on The Curious Case of Benjamin Button, the first movie nominated for a cinematography Oscar that was shot entirely on digital. Other films include Oblivion, Tomorrowland and the upcoming Top Gun: Maverick.

Can you talk about some camera trends you’ve been seeing? Such as large format? The use of old/vintage lenses?
Seems like everyone is shooting large format. Chris Nolan and Quentin Tarantino shot 65mm film for their last projects. New digital cameras such as the Alexa LF and Sony Venice cater to this demand. People seem to like the shallow depth of field of these larger format lenses.
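That shallow-depth-of-field point follows from basic optics: matching the field of view on a bigger sensor means a longer focal length, and at the same f-stop depth of field shrinks. A rough Python sketch using the standard hyperfocal approximation; the sensor widths and circle-of-confusion values here are illustrative assumptions, not any camera’s spec.

    # Rough depth-of-field comparison at matched field of view
    # (same f-stop, same subject distance; all distances in mm).
    def total_dof(f, N, c, s):
        H = f * f / (N * c) + f                       # hyperfocal distance
        near = H * s / (H + (s - f))
        far = H * s / (H - (s - f)) if H > s - f else float("inf")
        return far - near

    s = 3000.0  # subject at 3m
    # A 35mm lens on Super 35 frames roughly like a 52mm lens on large format.
    print(total_dof(f=35.0, N=2.8, c=0.025, s=s))  # ~1040mm of total focus depth
    print(total_dof(f=52.0, N=2.8, c=0.037, s=s))  # ~690mm: noticeably shallower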

How is HDR affecting the way things are being shot these days? Are productions shooting/monitoring HDR on-set?
For me, too much grain in HDR can be distracting. This must be moderated in the camera acquisition format choice and DI. Panning in a high-contrast environment can cause painful strobing. This can be helped in the DI and set design. HDR done well is more important than 8K or even 3D.

Can you address 8K? What are the positives and the negatives? Do we just have too many Ks these days? Or are latitude and framerate more important currently?
8K can be important for VFX plates. For me, creatively it is not important, 4K is enough. The positive of 8K is just more K. The downside is that I would rather the camera companies focus on dynamic range, color latitude, sensitivity and the look and feel of the captured image instead of trying to hit a high K number. Also, there are storage and processing issues.
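To put that storage point in numbers, here is a quick back-of-envelope sketch in Python, assuming an 8192 x 4320 sensor at 16 bits per photosite before any compression; real cameras use RAW compression to bring this down substantially.

    # Uncompressed 8K data-rate arithmetic, purely illustrative.
    width, height = 8192, 4320
    bytes_per_sample = 2            # 16-bit samples
    fps = 24
    frame_bytes = width * height * bytes_per_sample
    rate = frame_bytes * fps / 1e9  # GB per second
    print(round(frame_bytes / 1e6, 1))  # ~70.8 MB per frame
    print(round(rate, 2))               # ~1.7 GB/s, roughly 6.1 TB per hour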

Can you talk about how shooting streaming content, for OTTs like Netflix/Amazon, has changed production practices, and workflows, if at all?
I have not shot for a streaming service. I do think we need to pay attention to all deliverables and make adjustments accordingly. In the DI, I am there for the standard cinema pass, HDR pass, IMAX pass, home video pass and other formats that arise.

Is the availability of all those camera resolutions a help or a hindrance?
I choose the camera that will fit the job. It is my job in prep to test and pick the camera that best serves the movie.

How do you ensure correct color management from the set into dailies and post production, the DI and final delivery?
On set, I am able to view HDR or 709. I test the pipeline and make sure the LUT is correct and make modifications if needed. I do not play with many LUTs on set, I normally just have one. I treat the camera like a film stock. I know I will be there in the DI to finalize the look. On set is not the place for futzing with LUTs on the camera. My plate is full enough as it is.

If not already covered, how has production changed in the last two years?
I am not sure production has changed, but there are many new tools to use to help make work more efficient and economical. I feel that I have always had to be mindful of the budget, no matter how large the show is. I am always looking for new solutions.

Daryn Okada, ASC
Daryn Okada is known for his work on films such as Mean Girls, Anna Karenina and Just Like Heaven. He has also worked on many TV series, such as Scandal, Grey’s Anatomy and Castle. He served as president of the ASC from 2006 to 2009.

Can you talk about some camera trends you’ve been seeing? Such as large format? The use of old/vintage lenses? 

Modern digital cinema cameras, with the proper workflows and techniques, can achieve a level of quality that lets a story’s visual identity evolve in ways that parallel explorations shooting on film. Larger image sensors, state-of-the-art lenses and mining historic optics enable cinematographers to use their experience and knowledge of the past to paint rich visual experiences for today’s audience.

How is HDR affecting the way things are being shot these days? Do you find HDR more important than 8K at the moment in terms of look? Are productions shooting/monitoring HDR on-set?
HDR is a creative and technical medium just as shooting and projecting 65mm film would be. It’s up to the director and the cinematographer to decide how to orchestrate the use of HDR for their particular story.

Can you address 8K? What are the positives, and the negatives? Do we just have too many Ks these days? Or are latitude and framerate more important currently?
8K is working its way into production much as 65mm and 35mm VistaVision did, by providing more technical resolution for use in VFX or special-venue exhibition. The enormous amount of data, and the cost of handling it, must be justified by its financial return and by whether it benefits a particular story. Latitude and color depth are paramount to creating a motion picture’s palette and texture. Trying to use a format just because it’s technically possible may be distracting to an audience’s acceptance of a story or creative concept.

Can you talk about how shooting streaming content, for OTTs like Netflix/Amazon, has changed production practices, and workflows, if at all?

I think the delivery specifications of OTT have generally raised the bar, making 4K and wide color gamut the norm. Cinematographers who have spent years photographing features are accustomed to creating images with detail for a big screen and a wide color palette. It’s a natural creative process to shoot for 4K and HDR in that respect.

Is the availability of all those camera resolutions a help or a hindrance?
Having the best imaging available is always welcome. Even if a camera’s capabilities are never fully exploited technically, subtler images are possible thanks to the smoother transitions and blending of color, contrast and detail that come from originating at higher resolutions and with a wider color range.

Can you talk about color management from the sensor/film to the screen? How do you ensure correct color management from the set into dailies and post, the DI and final delivery?
As cinematographers, we are still involved in workflows for dailies and post production to ensure that everyone’s creative contributions to the final production are maintained for today’s viewers and preserved for future audiences.

How has production changed over the last two years?
There are more opportunities to produce content with creative high-quality cinematography thanks to advancements in cameras and cost-effective computing speed combined with demands of high quality displays and projection.

Vanja Černjul, ASC
This New York-based DP recently worked on the huge hit Crazy Rich Asians. In addition to feature film work, Černjul has shot TV shows (The Deuce’s Season 1 finale and two seasons of Marco Polo) as well as commercials for Panasonic and others.

Can you talk about some camera trends you’ve been seeing? Such as large format? The use of old/vintage lenses?
One interesting trend I noticed is the comeback of image texture. In the past, cinematographers used to expose film stock differently according to the grain texture they desired. Different exposure zones within the same frame had different grain character, which gave the image additional depth. We lost that once we switched to digital. Crude simulations of film grain, such as overall filters, couldn’t produce the dimensionality we had with film.

Today, I am noticing new ways of bringing the texture back as a means of creative expression. The first one comes in the form of new, sophisticated post production tools designed to replicate the three-dimensional texturing that occurs naturally when shooting film, such as the realtime texturing tool LiveGrain. Monitoring the image on the set with a LiveGrain texture applied can impact lighting, filtration or lens choices. There are also new ways to manipulate texture in-camera. With the rise of super-sensitive, dual-native ISO sensors we can now shoot at very low-light levels and incorporate so-called photon shot noise into the image. Shot noise has organic character, very much like film grain.
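Photon shot noise, incidentally, is easy to model: photon arrivals follow a Poisson distribution, so the noise scales with the square root of the signal, which is why it reads as finer in highlights and coarser in shadows, much like film grain. A toy numpy sketch:

    import numpy as np

    rng = np.random.default_rng(1)
    # Simulate shot noise at two exposure levels (mean photons per pixel).
    for mean in (25.0, 2500.0):
        photons = rng.poisson(mean, size=100000)
        snr = photons.mean() / photons.std()
        print(round(snr, 1))  # ~5 for the dim patch, ~50 for the bright one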

How is HDR affecting the way things are being shot these days? Do you find HDR more important than 8K at the moment in terms of look? Are productions shooting/monitoring HDR on-set?

The creative potential of HDR technology is far greater than that of added resolution. Unfortunately, it is hard for cinematographers to take full advantage of HDR because it is still far from being the standard way the audience sees our images. We can’t have two completely different looks for a single project, and we have to make sure the images are working on SDR screens. In addition, it is still impractical to monitor in HDR on the set, which makes it difficult to adjust lighting and lens choices to expanded dynamic range. Once HDR screens become a standard, we will be able to really start creatively exploring this new territory.

Crazy Rich Asians

Can you address 8K? What are the positives and the negatives? Do we just have too many Ks these days? Or are latitude and framerate more important currently?
Additional resolution adds more available choices regarding relationship of optical systems and aspect ratios. I am now able to choose lenses for their artifacts and character regardless of the desired aspect ratio. I can decide to shoot one part of the film in spherical and the other part in anamorphic and crop the image to the project’s predetermined aspect ratio without fear of throwing away too much information. I love that freedom.

Can you talk about how shooting streaming content, for OTTs like Netflix/Amazon, has changed production practices and workflows, if at all?
For me, the only practical difference between shooting high-quality content for cable or streaming is the fact that Netflix demands that their projects be captured in true 4K RAW. I like the commitment to higher technical standards, even though this may be an unwelcome restriction for some projects.

Is the availability of all those camera resolutions a help or a hindrance?
I like choices. As large format lenses become more available, shooting across formats and resolutions will become easier and simpler.

How do you ensure correct color management from the set into dailies and post production, the DI and final delivery?
The key for correct color management from the set to final color grading is in preproduction. It is important to take the time to do proper tests and establish the communication between DIT, the colorist and all other people involved as early as possible. This ensures that original ideas aren’t lost in the process.

Adjusting and fine-tuning the LUT to the lenses, lighting gels and set design and then testing it with the colorist is very important. Once I have a bulletproof LUT, I light and expose all the material for it specifically. If this part of the process is done correctly, the time in final color grading can be spent on creative work rather than on fixing inconsistencies.

I am very grateful for ACES workflow, which offers long-overdue standardization. It is definitely a move in the right direction.

How has production changed over the last two years?
With all the amazing post tools that are becoming more available and affordable, I am seeing negative trends of further cutting of preproduction time, and lack of creative discipline on the set. I sincerely hope this is just a temporary confusion due to recalibration of the process.

Kate Reid, DP
Kate Reid is a UK-based DP working in TV and film. Her recent work includes the TV series Hanna (Amazon), Marcella 2 (Netflix) and additional photography on the final season of Game of Thrones for HBO. She is currently working on Press for the BBC.

Can you talk about some camera trends you’ve been seeing? Such as large format? The use of old/vintage lenses?
Large format cameras are being used increasingly on drama productions to satisfy the requirement for additional resolution by certain distribution platforms. And, of course, the choice to use large format cameras in drama brings with it another aesthetic that DPs now have as another tool: Choosing if increased depth-of-field fall off, clarity in the image etc., enhances the particular story they wish to portray on screen.

Like many other DPs, I have always enjoyed using older lenses to help make the digital image softer, more organic and less predictable. But much of this older glass, designed for 35mm-size sensors, may not cover the increased sensor size, so newer lenses designed for the larger format cameras may become popular by necessity, alongside older large format glass that is enjoying a renaissance.

How is HDR affecting the way things are being shot these days? Do you find HDR more important than 8K at the moment in terms of look? Are productions shooting/monitoring HDR on-set?
I have yet to shoot a show that requires HDR delivery. It hasn’t yet become the default in drama production in the UK.

Can you address 8K? What are the positives and the negatives? Do we just have too many Ks these days? Or are latitude and frame rate more important currently?
I don’t inherently find an ultra sharp image attractive. Through older glass and diffusion filters on the lens, I am usually looking to soften and break down my image, so I personally am not all about the extra Ks. How the camera’s sensor reproduces color and handles highlights and shadows is of more interest to me, and I believe has more impact on the picture.

Of primary importance is how practical a camera is to work with — size and how comfortable the camera is to handle would supersede excessive resolution — as the first requirement of any camera has got to be whether it allows you to achieve the shots you have in mind, because a story isn’t told through its resolution.

Can you talk about how shooting streaming content for OTTs, like Netflix/Amazon, has changed production practices, and workflows, if at all?
The major change is the requirement by Netflix for true 4K resolution, which determines the cameras cinematographers are allowed to shoot on. For many cinematographers the ARRI Alexa was their digital camera of choice, which was excluded by this rule, and therefore we have had to look to other cameras for such productions. Learning a new camera (its sensor, how it handles highlights, how it produces color) and ensuring the workflow through to the post facility requires time and testing, which has certainly added to a DP’s workload.

From a creative perspective, however, I found shooting for OTTs (I shot two episodes of the TV series Hanna made by Working Title TV and NBC Universal for Amazon) has been more liberating than making a series for broadcast television as there is a different idea and expectation around what the audience wants to watch and enjoy in terms of storytelling. This allowed for a more creative way of filming.

Is the availability of all those camera resolutions a help or a hindrance?
Where work is seen now can vary from a mobile phone screen to a digital billboard in Times Square, so it is good for DPs to have a choice of cameras and their respective resolutions so we can use the best tool of each job. It only becomes a hindrance if you let the technology lead your creative process rather than assist it.

How do you ensure correct color management from the set into dailies and post production, the DI and final delivery?
Ideally, I will have had the time and opportunity to shoot tests during prep and then spend half a day with the show’s colorist to create a basic LUT I can work with on set. In practice, I have always found that I tweak this LUT during the first days of production with the DIT, and this is what serves me throughout the rest of the show.

I usually work with just one LUT, some version of a modified Rec. 709 (unless the look of the show drastically requires something else). It should then be straightforward: the DIT can attach the LUT to the dailies, and the same LUT is applied by editorial, so that exactly what you see on set is what is being viewed in the edit.

However, where this fails is that the dailies uploaded to FTP sites — for viewing by the execs, producers and other people who have access to the work — are usually very compressed and low resolution, so they bear little resemblance to how the work looked on set or looks in the edit. This is really unsatisfying, as for months key members of production are not seeing an accurate reflection of the picture. Of course, when you get into the grade this can be restored, but it’s dangerous if those viewing the dailies this way have grown accustomed to a pale comparison of what was shot on set.

How has production changed over the last two years?
There is less differentiation between film and television in how productions are being made and, critically, where they are being seen by audiences, especially with online platforms now making award-winning feature films. The high production values of Netflix and Amazon’s biggest shows have pushed UK television dramas to up their game, which puts pressure on productions, shooting schedules and HODs, as the budgets to help achieve this aren’t there yet.

So, from a ground-level perspective, for DPs working in drama this looks like more pressure to produce work of the highest standard in less time. However, it’s also a more exciting place to be working as the ideas about how you film something for television versus cinema no longer need apply. The perceived ideas of what an audience is interested in, or expects, are being blown out of the water by the success of new original online content, which flies in the face of more traditional storytelling. Broadcasters are noticing this and, hopefully, this will lead to more exciting and cinematic mainstream television in the future.

Blackmagic’s Bob Caniglia
In addition to its post and broadcast tools, Blackmagic offers many different cameras, including the Pocket Cinema Camera, Pocket Cinema Camera 4K, Micro Studio Camera 4K, Micro Cinema Camera, Studio Camera, Studio Camera 4K, Ursa Mini Pro, Ursa Mini 4.6K and Ursa Broadcast.

Can you talk about some camera trends you’ve been seeing? Such as large format? The use of old/vintage lenses?
Lens freedom is on everyone’s mind right now… having the freedom to shoot in any style. This is bringing about things like seeing projects shot on 50-year-old glass because the DP liked the feel of a commercial back in the ‘60s.

We recently had a customer test out lenses that were actually used on The Godfather, The Shining and Casablanca, and it was amazing to see them mixed with a new digital cinema camera. And so many people are asking for a camera that works with anamorphic lenses. The trend is really that people expect their camera to be able to handle whatever look they want.

For large format use, I would say that both Hollywood and indie filmmakers are using it more often. Or at least they’re trying to get the general large format look by using anamorphic lenses to get a shallow depth of field.

How is HDR affecting the way things are being shot these days? Do you find HDR more important than 8K at the moment in terms of look? Are productions shooting/monitoring HDR on-set?
Right now, HDR is definitely more of a concern for DPs in Hollywood, but also with indie filmmakers and streaming service content creators. Netflix and Hulu have some amazing HDR shows right now. And there is plenty of choice when it comes to the different HDR formats and shooting and monitoring on set. All of that is happening every day, while 8K still needs the industry to catch up with the various production tools.

As for impacting shooting, HDR is about more immersive colors, and a DP needs to plan for it. It gives viewers a whole new level of image detail. DPs have to be much more aware of every surface and how it is lit so that the viewer doesn’t get distracted. Attention to detail gets even higher in HDR, and DPs and colorists will need to keep a close eye on every shot, including when the reflection in a sideview mirror is just a little too sharp and needs a tweak.

Can you address 8K? What are the positives and the negatives? Do we just have too many Ks these days? Or are latitude and framerate more important currently?
You can never have enough Ks! Seriously. It is not just about getting a beautiful 8K TV, it is about giving the production and post pros on a project as much data as possible. More data means more room to be creative, and is great for things like keying.

Latitude and framerate are important as well, and I don’t think any one is more important than another. For viewers, the beauty will be in large displays. You’re already seeing 8K displays in Times Square, and though you may not need 8K on your phone, 8K on the side of a building or highway will be very impactful.

I do think one of the ways 8K is changing production practices is that people are going to be much more storage conscious. Camera manufacturers will need to continue to improve workflows as the images get larger in an effort to maximize storage efficiencies.

Can you talk about how shooting streaming content, for OTTs like Netflix/Amazon, has changed production practices, and workflows, if at all?
For streaming content providers, shoots have definitely been impacted and are forcing productions to plan for shooting in a wider number of formats. Luckily, companies like Netflix have been very good about specifying up front the cameras they approve and which formats are needed.

Is the availability of all those camera resolutions a help or a hindrance?
While it can be a bit overwhelming, it does give creatives some options, especially if they have a smaller delivery size than the acquisition format. For instance, if you’re shooting in 4K but delivering in HD, you can do dynamic zooms from the 4K image that look like an optical zoom, or you can get a tight shot and wide shot from the same camera. That’s a real help on a limited budget of time and/or money.

How do you ensure correct color management from the set into dailies and post production, the DI and final delivery?
Have the production and post people plan together from the start, and create the look everyone should be working toward right up front.

Set the LUTs you want before a single shot is done and manage the workflow from camera to final post. Also, choose post software that can bring color correction on-set, near-set and off-set. That lets you collaborate remotely. Definitely choose a camera that works directly with any post software, and avoid transcoding.
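To make “manage the workflow from camera to final post” concrete: a show look often travels as a 3D LUT, commonly a .cube file. Below is a minimal Python sketch of applying such a LUT with trilinear interpolation, assuming the table has already been parsed into an N x N x N x 3 numpy array indexed [r][g][b]; it is an illustration, not a production implementation.

    import numpy as np

    def apply_lut3d(lut, rgb):
        n = lut.shape[0]
        pos = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0) * (n - 1)
        lo = np.floor(pos).astype(int)
        hi = np.minimum(lo + 1, n - 1)
        frac = pos - lo
        out = np.zeros(3)
        # Blend the 8 lattice points surrounding the input color.
        for dr in (0, 1):
            for dg in (0, 1):
                for db in (0, 1):
                    idx = (hi[0] if dr else lo[0],
                           hi[1] if dg else lo[1],
                           hi[2] if db else lo[2])
                    w = ((frac[0] if dr else 1 - frac[0]) *
                         (frac[1] if dg else 1 - frac[1]) *
                         (frac[2] if db else 1 - frac[2]))
                    out += w * lut[idx]
        return out

    # Identity LUT as a smoke test: output should equal the input.
    n = 17  # .cube tables are commonly 17, 33 or 65 points per axis
    axis = np.linspace(0.0, 1.0, n)
    lut = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
    print(apply_lut3d(lut, (0.25, 0.5, 0.75)))  # -> [0.25 0.5 0.75]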

How has production changed in the last two years?
Beyond the rise of HDR, one of the other big changes is that more productions are thinking live and streaming more than ever before. CNN’s Anderson Cooper now does a daily Facebook Live show. AMC has the live Talking Dead-type formats for many of their shows. That trend is going to keep happening, so cinematographers and camera people need to be thinking about being able to jump from scripted to live shooting.

Red Digital Cinema’s Graeme Nattress
Red Digital Cinema manufactures professional digital cameras and accessories. Red’s DSMC2 camera offers three sensor options — Gemini 5K S35, Helium 8K S35 and Monstro 8K VV.

Can you talk about some camera trends you’ve been seeing?
Industry camera trends continue to push image quality in all directions. Sensors are getting bigger, with higher resolutions and more dynamic range. Filmmakers continue to innovate, making new and amazing images all the time, which drives our fascination for advancing technology in service to the creative.

How is HDR affecting the way things are being shot these days?
One of the benefits of a primary workflow based on RAW recording is that HDR is not an added extra, but a core part of the system. Filmmakers do consider HDR important, but there’s some concern that HDR doesn’t always look appealing, and that it’s not always an image quality improvement. Cinematography has always been about light and shade and how they are controlled to shape the image’s emotional or storytelling intent. HDR can be a very important tool in that it greatly expands the display canvas to work on, but a larger canvas doesn’t mean a better picture. The increased display contrast of HDR can make details more visible, and it can also make motion judder more apparent. Thus, more isn’t always better; it’s about how you use what you have.

Can you address 8K? What are the positives and the negatives? Do we just have too many Ks these days? What’s more important, resolution or dynamic range?
Without resolution, we don’t have an image. Resolution is always going to be an important image parameter. What we must keep in mind is that camera resolution is based on input resolution to the system, and that can — and often will — be different to the output resolution on the display. Traditionally, in video the input and output resolutions were one and the same, but when film was used — which had a much higher resolution than a TV could display — we were taking a high-resolution input and downsampling it to the display, the TV screen.

As with any sampled system, in a digital cinema camera there are some properties we seek to protect and others to diminish. We want a high level of detail, but we don’t want sharpening artifacts and we don’t want aliasing. The only way to achieve that is with a high-resolution sensor, properly filtered (optical low-pass), that can see a large amount of real, un-enhanced detail. So yes, 8K can give you lots of fine detail should you want it, but the imaging benefits extend beyond that: downsampled to 4K or 2K, 8K makes for an incredibly robust image, noise is reduced, and what noise remains takes on more of a texture, which is much more aesthetically pleasing.
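That noise behavior is easy to sanity-check numerically: averaging each 2x2 block of pixels when downsampling (8K to 4K in each dimension) halves the standard deviation of uncorrelated noise. A toy numpy sketch:

    import numpy as np

    rng = np.random.default_rng(0)
    noise_8k = rng.normal(0.0, 1.0, size=(4320, 8192))
    # Downsample by averaging 2x2 blocks, as an idealized 8K -> 4K resize.
    noise_4k = noise_8k.reshape(2160, 2, 4096, 2).mean(axis=(1, 3))
    print(round(noise_8k.std(), 2))  # ~1.0
    print(round(noise_4k.std(), 2))  # ~0.5: averaging four samples halves sigma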

One challenge of 8K is an increase in the amount of sensor data to be recorded, but that can be addressed through quality compression systems like RedCode.

Addressing dynamic range is very important because dynamic range and resolution work together to produce the image. It’s easy to think that high resolutions have a negative impact upon dynamic range, but improved pixel design means you can have dynamic range and resolution.

How do you ensure correct color management from the set into dailies and post production, the DI and final delivery?
Color management is vitally important and so much more than just keeping color control from on-set through to delivery. Now with the move to HDR and an increasing amount of mobile viewing, we have a wide variety of displays, all with their own characteristics and color gamuts. Color management allows content creators to display their work at maximum quality without compromise. Red cameras help in multiple ways. On camera, one can monitor in both SDR and HDR simultaneously with the new IPP2 image processing pipeline’s output independence, which also allows you to color via CDL and creative 3D LUT in such a way as to have those decisions represented correctly on different monitor types.

In post and grading, the benefits of output independence continue, but now it’s critical that scene colors, which can so easily go out of gamut, are dealt with tastefully. Through the metadata support in the RedCode format, all the creative decisions taken on set follow through to dailies and post, but never get in the way of producing the correct image output, be it for VFX, editorial or grading.

Panavision’s Michael Cioni 
Panavision designs and manufactures high-precision camera systems, including both film and digital cameras, as well as lenses and accessories for the motion picture and television industries.

Can you talk about some camera trends you’ve been seeing?
With the evolution of digital capture, one of the most interesting things I’ve noticed in the market are new trends emerging from the optics side of cinematography. At a glance, it can appear as if there is a desire for older or vintage lenses based on the increasing resolution of large format digital cameras. While resolution is certainly a factor, I’ve noticed the larger contributor to vintage glass is driven by the quality of sensors, not the resolution itself. As sensors increase in resolution, they simultaneously show improvements in clarity, low-light capability, color science and signal-to-noise ratio.

The compounding effect of all these elements is improving images far beyond what was possible with analog film technology, which explains why the same lens behaves differently on film, S35 digital capture and large format digital capture. As these looks continue to become popular, Panavision is responding through our investments both in restoring classic lenses and in designing new lenses with classic characteristics and textures that are optimized for large format photography on super sensors.

How is HDR affecting the way things are being shot these days? Do you find HDR more important than 8K at the moment in terms of look?
Creating images is not always about what component is better, but rather how they elevate images by working in concert. HDR images are a tool that increases creative control alongside high resolution and 16-bit color. These components work really well together because a compelling image can make use of more dynamic range, more color and more clarity. Its importance is only amplified by the amalgamation of high-fidelity characteristics working together to increase overall image flexibility.

Today, the studios are still settling into an HDR world because only a few groups, led by OTT, are able to distribute in HDR to wide audiences. On-set tools capable of HDR, 4K and 16-bit color are still in their infancy and currently cost-prohibitive. 4K/HDR on the set is going to become a standard practice by 2021. 4K wireless transmitters are the first step — they are going to start coming online in 2019. Smaller OLED displays capable of 750 nits+ will follow in 2020, creating an excellent way to monitor higher quality images right on set. In 2021, editorial will start to explore HDR and 4K during the offline process. By 2024, all productions will be HDR from set to editorial to post to mobile devices. Early adopters that work out the details today will find themselves ahead of the competition and having more control as these trends evolve. I recommend cinematographers embrace the fundamentals of HDR, because understanding the tools and trends will help prevent images from appearing artificial or overdone.
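For context on those nit figures: HDR displays in this class are typically driven with the SMPTE ST 2084 (PQ) transfer function, which maps absolute luminance up to 10,000 nits onto a normalized code value. A small Python sketch of the published PQ inverse EOTF:

    # SMPTE ST 2084 (PQ) inverse EOTF: luminance in nits -> code value (0-1).
    M1, M2 = 1305 / 8192, 2523 / 32
    C1, C2, C3 = 107 / 128, 2413 / 128, 2392 / 128

    def pq_encode(nits):
        y = min(max(nits / 10000.0, 0.0), 1.0)  # PQ tops out at 10,000 nits
        yp = y ** M1
        return ((C1 + C2 * yp) / (1 + C3 * yp)) ** M2

    print(pq_encode(100))    # ~0.51, roughly SDR reference white
    print(pq_encode(750))    # ~0.72, the OLED peak mentioned above
    print(pq_encode(10000))  # 1.0, full-scale PQ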

Can you address 8K? What are the positives and the negatives? Do we just have too many Ks these days? What’s more important, resolution or dynamic range?
One of the reasons we partnered with Red is because the Monstro 8K VV sensor makes no sacrifice in dynamic range while still maintaining ultra high smoothness at 16 bits. The beauty of technology like this is that we can finally start to have the best from all worlds — dynamic range, resolution, bit depth, magnification, speed and workflow — without having to make quality sacrifices. When cinematographers have all these elements together, they can create images previously never seen before, and 8K is as much part of that story as any other element.

One important way to view 8K is not solely as a thermometer for high-resolution sharpness. A sensor with 35 million pixels is necessary in order to increase the image size, similar to trends in professional photography. 8K large format creates a larger, more magnified image with a wider field of view and less distortion, like the difference in images captured by 70mm film. The biggest positive I’ve noticed is that DXL2’s 8K large-format Red Monstro sensor is so good in terms of quality that it isn’t impacting images themselves. Lower quality sensors can add a “fingerprint” to the image, which can distort the original intention or texture of a particular lens.

With sensors like Monstro capable of such high precision, the lenses behave exactly as the lens maker intended. The same Panavision lenses on a lower grade sensor, or even 35mm film, are exhibiting characteristics that we weren’t able to see before. This is literally breathing new life into lenses that previously didn’t perform the same way until Monstro and large format.

Is the availability of so many camera formats a help or a hindrance?
You don’t have to look far to identify individuals who are easily fatigued by having too many choices. Some of these individuals cope with choices by finding ways to regulate them, and they feel fewer choices means more stability and perhaps more control (creative and economic). As an entrepreneur, I find the opposite to be true: I believe regulating our world, especially with regards to the arts and sciences, is a recipe for protecting the status quo. I fully admit there are situations in which people are fatigued by too many complex choices.

I find that failure is not of the technology itself; rather, it’s the fault of the manufacturers who have not provided the options in easy-to-consume ways. Having options is exactly what creatives need in order to explore something new and improved. But it’s also up to manufacturers to deliver the message in ways everyone can understand. We’re still learning how to do that, and with each generation the process changes a bit. And while I am not always certain which are the best ways to help people understand all the options, I am certain that the pursuit of new art will motivate us to go out of our comfort zones and try something previously thought not possible.

Have you encountered any examples of productions that have shot streaming content (i.e. for Netflix/Amazon) and had to change production practices and workflows for this format/deliverable?
Netflix and Amazon are exceptional examples of calculated risk takers. While most headlines discuss their investment in the quantity of content, I find the most interesting investment they make is in relationships. Netflix and Amazon are heavily invested in standards groups, committees, outreach, panels and constant communication. The model of the past and present (incumbent studios) are content creators with technology divisions. The model of the future (Netflix, Amazon, Hulu, Apple, Google and YouTube) are all the technology companies with the ability to create content. And technology companies approach problems from a completely different angle by not only embracing the technology, they help invent it. In this new technological age, those who lead and those who follow will likely be determined by the tools and techniques used to deliver. What I call “The Netflix Effect” is the impact Netflix has on traditional groups and how they have all had to strategically pivot based on Netflix’s impact.

How do you ensure correct color management from the set into dailies and post production, the DI and final delivery?
The DXL2 has an advanced color workflow. In collaboration with LiveGrade by Pomfort, DXL2 can capture looks wirelessly from DITs in the form of CDLs and LUTs, which are not only saved into the metadata of the camera, but also baked into in-camera proxy files in the form of Apple ProRes or Avid DNx. These files now contain visual references of the exact looks viewed on monitors and can be delivered directly to post houses, or even editors. This improves creative control because it eliminates the guesswork in the application of external color decisions and streamlines it back to the camera, where the core database is kept with all the other camera information. This metadata can be traced throughout the post pipeline, which also streamlines the process for all entities that come in contact with camera footage.
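For reference, CDL decisions like these are commonly interchanged as small XML documents (.cdl/.ccc files) built from the ASC’s published schema. A Python sketch of that portable layout follows; the id and values are invented for illustration, and how any given camera embeds the same data in its own metadata is vendor-specific.

    import xml.etree.ElementTree as ET

    # Serialize one ASC CDL correction in its standard XML form.
    cc = ET.Element("ColorCorrection", id="scene12_take3")
    sop = ET.SubElement(cc, "SOPNode")
    ET.SubElement(sop, "Slope").text = "1.05 1.00 0.95"
    ET.SubElement(sop, "Offset").text = "0.01 0.01 0.01"
    ET.SubElement(sop, "Power").text = "1.00 1.00 1.00"
    sat = ET.SubElement(cc, "SatNode")
    ET.SubElement(sat, "Saturation").text = "0.90"
    print(ET.tostring(cc, encoding="unicode"))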

How has production changed over the last two years?
Sheesh. A lot!

ARRI‘s Stephan Ukas-Bradley
The ARRI Group manufactures and distributes motion picture cameras, digital intermediate systems and lighting equipment. Their camera offerings include the Alexa LF, Alexa Mini, Alexa 65, Alexa SXT W and the Amira.

Can you talk about some camera trends you’ve been seeing? Such as large format? The use of old/vintage lenses?
Large format opens some new creative possibilities, using a shallow depth of field to guide the audience’s view and provide a wonderful bokeh. It also conveys a perspective truer to the human eye, resulting in a seemingly increased dimensional depth. The additional resolution combined with our specially designed large format Signature Primes result in beautiful and emotional images.

Old and vintage lenses can enhance a story. For instance, Gabriel Beristain, ASC, used Bausch & Lomb Super Baltars on the Starz show Magic City, and Bradford Young used detuned DNA lenses in conjunction with the Alexa 65 on Solo: A Star Wars Story. Characteristics like flares, reflections, distortions and focus fall-off are very difficult to recreate organically in post, so vintage lenses provide an easy way to create a unique look for a specific story and a way for the director of photography to maintain creative control.

How is HDR affecting the way things are being shot these days? Do you find HDR more important than 8K at the moment in terms of look? Are productions shooting/monitoring HDR on-set?
Currently, things are not done much differently on set when shooting HDR versus SDR. While it would be very helpful to monitor in both modes on set, HDR reference monitors are still very expensive and very few productions have that luxury. One has to be aware of certain challenges when shooting for an HDR finish: high-contrast edges can result in a more pronounced stutter/strobing effect when panning the camera, and windows that are blown out in SDR might retain detail in the HDR pass, and all of a sudden a ladder or grip stand is visible.

In my opinion, HDR is more important than higher resolution. HDR is resolution-independent with regard to viewing devices like phones and tablets, gives the viewer a perceived increase in sharpness and is more immersive than increased resolution. Also, let’s not forget that we are working in the motion picture industry and are either capturing moving objects or moving the camera, and with that introducing motion blur. Higher resolution only makes sense to me in combination with higher frame rates, and that in turn will start a discussion about aesthetics, as it may look hyper-real compared to traditional 24fps capture. Resolution is one aspect of overall image quality, but in my opinion extended dynamic range, signal-to-noise performance, sensitivity, color separation and color reproduction are more important.

Can you talk about how shooting streaming content for OTTs, like Netflix/Amazon, has changed production practices and workflows, if at all?
Shooting streaming content has really not changed production practices or workflows. At ARRI, we offer very flexible and efficient workflows, and we are very transparent in documenting our ARRIRAW file format in SMPTE RDD 30 (format) and RDD 31 (processing), working with many industry partners to provide native file support in their products.

Is the availability of all those camera resolutions a help or a hindrance?
I would look at all those different camera types and resolutions as different film stocks and recommend to creatives to shoot their own test and select the camera systems based on what suits their project best.

We offer the ARRI Look Library for the Amira, Alexa Mini and Alexa SXT (SUP 3.0), a collection of 87 looks, each available in three different intensities and provided in Rec. 709 color space. Those looks can either be recorded or used only for monitoring. The looks travel with the picture, embedded in the metadata of the ARRIRAW file, QuickTime Atom or HD-SDI stream in the form of the actual LUT and ASC CDL. One can also create a look dynamically on set, feed it back to the camera and have the ASC CDL values embedded in the same way.

More commonly, one would record in either ARRIRAW or ProRes LogC while applying a standard Rec. 709 look for monitoring. The “C” in LogC stands for Cineon, a film-like response very much like that of a scanned film image. Colorists and post pros are very familiar with film, so color grading LogC images is quick and easy.
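To make the Cineon comparison concrete, the sketch below decodes LogC-encoded values back to linear scene exposure. The constants are ARRI’s published LogC (v3) parameters for EI 800 as best I can reproduce them; treat them as illustrative and check the current ARRI white paper before relying on them, since every exposure index has its own parameter set.

```python
import numpy as np

# ARRI LogC (v3) decode for EI 800 -- constants as published in ARRI's white
# paper; verify before production use, as each EI carries different values.
CUT, A, B, C, D, E, F = 0.010591, 5.555556, 0.052272, 0.247190, 0.385537, 5.367655, 0.092809

def logc_to_linear(t):
    """Map LogC code values (0-1) to relative linear scene exposure."""
    t = np.asarray(t, dtype=np.float64)
    return np.where(t > E * CUT + F,
                    (10.0 ** ((t - D) / C) - B) / A,
                    (t - F) / E)

# 18% gray encodes to roughly 0.391 in LogC at EI 800:
print(logc_to_linear(0.391))  # ~0.18
```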

How has production changed over the last two years?
I don’t have the feeling that production has changed a lot in the past two years, but with the growing demand from OTTs and increased production volume, it is even more important to have a reliable and proven system with flexible workflow options.

Main Image: DP Kate Reid.

DITs: Maintaining Order on Set

By Karen Moltenbrey

The DIT, or digital imaging technician, can best be described as that important link between on-set photography and post production. Part of the camera crew, the DIT works with the cinematographer and post production on the workflow, camera settings, signal integrity and image acquisition. Much more than a data wrangler, a DIT ensures the technical quality control, devises creative solutions involving photography technology and sees that the original camera data and metadata are backed up regularly.

Years ago, the DIT’s job was to solve issues as the industry transitioned from film to digital. But today, with digital being so complex and involving many different formats, this job is more vital than ever, sweating the technical stuff so that the DP and others can focus on their work for a successful production. In fact, one DIT interviewed for this piece notes that the job today focuses less on fine-tuning the live look than it did in the past. One reason for that is the many available tools that enable the files to be shaped more carefully in post.

The DITs interviewed here note that the workflow usually changes from production to production. “If you ask 10 different DITs what they do, they would probably give you 10 different answers,” says one. Still, the focus remains the same: to assist the DP and others, ensuring that everyone and everything is working in concert.

And while some may question whether a production needs the added expense of a DIT, perhaps a better question would be whether they can afford not to have one.

Here, two DITs discuss their ever-changing workflows for this important job.

Michele deLorimier 
Veteran DIT Michele deLorimier describes the role of a digital imaging technician as a problem solver. “It’s like doing puzzles — multiple, different-size puzzles that have to be sorted out,” she says. “It always involves problem solving, from trying to fix the director’s iPhone to the tech parameter settings in the cameras to the whole computer having to be torn apart and put back together. All the while, shooting has not stopped and footage is accumulating.”

There are often multiple cameras, and the footage needs to be downloaded and QC’d, and cards erased and sent back into rotation in order to continue shooting. “So, I guess the greatest tool on the cart is the complete computer workstation, and if it is having a problem, it requires high-gear, intense problem solving,” she adds.
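At its core, that download-and-QC loop is a verify-before-erase discipline: never clear a card until every copy has been checksummed against the source. A bare-bones sketch of the idea follows; it is not any particular DIT tool (commercial offload software adds MHL reports, session logs and much more), and the volume paths are hypothetical.

```python
import hashlib
import shutil
from pathlib import Path

def file_hash(path, algo="md5", chunk=8 * 1024 * 1024):
    """Checksum a file in streaming chunks so large camera files fit in memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def offload(card, destinations):
    """Copy every file on a card to each destination and verify the copies."""
    for src in Path(card).rglob("*"):
        if not src.is_file():
            continue
        src_sum = file_hash(src)
        for dest_root in destinations:
            dst = Path(dest_root) / src.relative_to(card)
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
            # Only a matching checksum on every copy clears the card for reuse.
            assert file_hash(dst) == src_sum, f"checksum mismatch: {dst}"

offload("/Volumes/A001R3AB", ["/Volumes/RAID_A", "/Volumes/SHUTTLE_1"])
```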

And through it all, deLorimier and her fellow DITs must keep their cool and come up with a solution — and fast.

deLorimier has been working as a DIT for many years now. She honed her problem-solving skills working at live concerts, where she had to be fast on her feet while working with live control of multiple cameras through remote control units and paint boxes. “I’d sit at a switcher, with a stack of monitors and one big monitor, and keep the look consistent — black levels, paint controls — on all cameras, live.”

Later, this segued into setting up and controlling on- and off-board videotape and data-recorder digital cinema cameras on set for commercial film production.

“I just kind of fell into [DIT work] because of what I had done, and then it just continued to evolve,” says deLorimier. With the introduction of digital cinema cameras, DITs with a film and video background were needed during the transition period — spawning the term “digital imaging technician.”

“It went from being tape-based, where you’re creating and baking in a look while you’re shooting, to tape-based where you’re shooting sort of a flat pass and creating a timeline of looks you’re delivering alongside the videotape. And then to data recording, delivering files and additionally honing the look after the footage is ingested,” she says.

Among the equipment deLorimier uses is a reference-grade monitor “that must be calibrated properly,” she says, along with a way to objectively assess exposure, such as a waveform monitor, and a method of objectively assessing color, such as a vectorscope. That is the base-level equipment. For commercials, efficient hardware and software are also needed for downloading, manipulating and QC’ing the footage, color correcting it and creating deliverables for post.
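For anyone unsure what those scopes measure: a waveform plots brightness for each column of the image, and a vectorscope plots chroma as an angle (hue) and a distance from center (saturation). The toy sketch below computes both from a random stand-in frame using Rec. 709 weights.

```python
import numpy as np

frame = np.random.rand(1080, 1920, 3)              # hypothetical RGB frame, 0-1
r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]

luma = 0.2126 * r + 0.7152 * g + 0.0722 * b        # Rec. 709 luma
cb = (b - luma) * 0.5389                           # Rec. 709 chroma scaling
cr = (r - luma) * 0.6350

# Waveform: for every screen column, the distribution of luma values.
waveform, _, _ = np.histogram2d(
    np.tile(np.arange(1920), 1080), luma.ravel(), bins=[480, 256])

# Vectorscope: saturation is the distance from center, hue is the angle.
saturation = np.hypot(cb, cr)
hue_angle = np.degrees(np.arctan2(cr, cb))
```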

deLorimier prefers Flanders Scientific monitors — she has six for various tasks: a pair of 25-inch, a 24-inch, a pair of 21-inch and a 17-inch — as well as a Leader waveform monitor/vectorscope.

“We’re using wireless video a lot these days so we can move around freely and the cables aren’t all over the ground to trip on,” she says. “Any part of that chain can have an incorrect setting, so it’s important to ensure that everything is [set at] baseline and that what you are adding to it — usually some form of a LUT to the livestream — is baseline too.” This starts with settings in the camera and extends to anything the video signal chain might touch.

Then there are the various software packages, drivers, readers, cables and power-management tools, which change and get updated regularly. Thus, deLorimier stresses that any software change should be tested during prep to ensure compatibility. “There are unexpected things that you can’t prep for. There are times when you show up at a shoot and are told, ‘We shot some drone footage yesterday,’ and it’s with a camera whose settings you had no control over,” she says. “So, the more you can prep for, the higher the rate of success you will have.”

Over the years, deLorimier has worked on a variety of productions, from features to TV commercials, with each type of project requiring a different setup. Preparing for a commercial usually entails physically prepping equipment and putting pieces together, as well as checking its end-to-end signal chain, from camera settings, through distribution of the video signal, to the final destination for monitoring and data delivery.

A day before this interview, deLorimier finished a Subaru commercial, shooting in Sequoia National Forest for the first few days, then Griffith Park and some areas around LA. Before that was a multi-unit job for a Nike spot that was filmed in numerous cities over the course of five days. For that project, each of the DITs for the A, B and C units had to coordinate with one another for consistency, ensuring that the cameras would be set up the same way, that they had the same specs and were delivering a similar look. “We were shooting with big projectors onto buildings and screens, and the cameras needed to sync to the projectors in some instances,” deLorimier explains.

According to deLorimier, it is unusual for the work of a DIT not to be physical. “We’re on the move a lot,” she says, crediting her past concert experience for her ability to adjust to adverse and unexpected conditions. “And we are not working in a controlled environment, but we do our best under the constraints we have and always try to keep post in mind.”

She recalls one physically demanding job that required three consecutive nights of shooting in the rain near Santa Barbara, to film a train coming down the tracks. Part of the crew was on one side of the tracks, and part on the other. And deLorimier was in a cornfield with her carts, computer system and monitors, inside a tent to keep dry. “They kept calling me to come to B camera. But I was also remotely setting up live looks inside my tent.

“I had a headlamp on because I had to deal with cables and stuff in my tent, and at one point illuminated by my headlamp, I could see that there were at least 45 snails crawling up the inside of my tent and cart. I was getting mud on my glasses and in my eyes. Then my whole cart, which was pretty heavy, started tipping and tilting, and I was bracing myself and my feet were starting to get sucked into the mud in the mole holes that were filling with rainwater. I couldn’t even call for help because it took both of my hands to hold up the cart, and the snails were everywhere! And, through it all, they kept calling on the walkie-talkie, ‘Michele, B camera needs you. The train’s coming.’”

Insofar as acquisition formats are concerned, deLorimier says commercials these days are almost always higher-resolution raw files. “A minimum of 4K is almost mandatory across the board,” she notes. If the project is shooting with Red Digital Cinema cameras, it is between 6K and 8K; the team she works with mostly uses Red Monstros or shoots ARRIRAW. She also works with Phantom Cine raw files.

“The higher data rates have definitely given me more gray hairs,” says deLorimier with a smile. “There’s no downtime. There’s always six or seven balls in the air, and there’s very little room for error or any fixing on set. This is also why the prep day is vital; so much can be worked out and pre-run during the prep, and this pays off for production during the shoot.”

Francesco Luigi Giardiello
Francesco Luigi Giardiello defines his role as that of an on-set workflow supervisor, as opposed to a traditional DIT. “Over the last five to 10 years, I have been designing a workflow that basically extends from set to post production, focusing on the whole pipeline so we don’t have to throw away what has been done on set,” he says.

Giardiello has been designing a pipeline based on a white balance match, which he says is quite unusual in a business where balancing normally gets done through simplified, more standardized color grading. “We designed something that goes a bit deeper into the color science and works with the Academy’s ACES workflow, trying to establish a common working colorspace, a common color pipeline and a common method to control and manipulate colors. This — across any possible camera or source media used in production — is to provide balanced and consistent footage to the DI and visual effects teams. This allows the CG to be applied without having to spend time on balancing and tweaking the color of the shots.”
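Reduced to a sketch, the shape of such a pipeline is: every camera passes through an input transform into one shared scene-linear working space, the balance is applied there, and the creative look is layered on afterward. The matrices below are identity placeholders standing in for real camera transforms, not Giardiello’s actual system.

```python
import numpy as np

# Identity stand-ins for per-camera input transforms (a real pipeline would use
# the manufacturers' published matrices or ACES input transforms).
IDT = {
    "alexa65": np.eye(3),
    "red_helium": np.eye(3),
}

def to_working_space(rgb_linear, camera):
    """Move camera-native linear RGB into the common working space."""
    return rgb_linear @ IDT[camera].T

def balance(rgb, gains):
    # The neutral/white-balance trim happens *before* the look, so the
    # cinematographer's look sees consistent footage from every camera.
    return rgb * gains

shot = np.array([[0.18, 0.17, 0.20]])
matched = balance(to_working_space(shot, "alexa65"),
                  gains=np.array([1.02, 1.00, 0.94]))
```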

The Thor team (L-R): Francesco Giardiello, Kramer Morgenthau ASC (DP), Fabio Ferrantini (data manager).

This is important, especially today, where people are shooting with different digital systems. Or maybe even film and digital cameras, plus different lenses, so the shots look very different, even with the same lighting conditions. To this end, Giardiello’s role as DIT would be to grade or match everything so it all looks the same.

“Normally this gets done by using color tools, some of which are more sophisticated than others. When the tools are too sophisticated, they are intractable in the workflow and, therefore, become useless after leaving the set. When they are too ‘simple,’ like CDLs, often they are insufficient in correctly balancing the shots. And, because they are applied during a stage of the pipeline where the cinematographer’s look is introduced, they end up lost or often convolute the pipeline,” he notes. “We designed a system where the color balance occurs before any other color grading, and then the color grading is applied just as a look.”

Giardiello is currently in production on Marvel Studios’ Spider-Man: Far from Home, scheduled for release July 5, 2019. Not his first trip into the Marvel universe, he has worked on Thor: The Dark World, in addition to a number of episodic TV series and other big VFX productions, including Jurassic World and Aladdin. “You are the ambassador of the post production and VFX work,” he explains. “You have to foresee any technical issue and establish a workflow that will facilitate them. So, doing my job without being on set would be a complete waste of time. Sure, I can work in the studios and post production facilities to design workflows that will work without a DIT, but the problem is that things happen on set because that’s where decisions get made.”

As Giardiello points out, the other departments, such as camera and VFX, even the cinematographers, have different priorities and different jobs to fulfill. So, they’re not necessarily spending the time to ensure that every camera, every lens and every setting is in line with a consistent workflow to match the others. “They tend to shoot with whatever camera or medium they think is best and then expect that VFX or post will be able to fit that into an existing workflow.”

On average, Giardiello spends a few weeks of prep to design a project’s workflow, probably longer than producers and production companies would like. But, he believes that the more you plan, the less you have to deal with on set and in post production. When a shoot is finished, he will spend a week or two with the post facility, more to facilitate the handoff than to fix major issues.

Jurassic World was shot with 6K ARRI Alexa 65s and the 8K Red Digital Cinema Helium camera, but the issue with high-resolution cameras is the amount of data they generate. “When you start shooting 4, 5, 6 or 8 terabytes a day, you have to make sure you are on set as a data point and that post production is capable of handling all this incoming data,” Giardiello advises. To this end, he worked with Pinewood Digital to streamline a workflow for moving the data from set to post: rather than sending the original mags to post, his group packaged the data onto very fast, very secure Codex Digital SLEDs.
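Some back-of-the-envelope arithmetic (the figures are illustrative assumptions, not production numbers) shows why the transport question matters at those volumes.

```python
# How long does one copy of a heavy shoot day take at realistic sustained rates?
shoot_day_tb = 6.0
bits_total = shoot_day_tb * 1e12 * 8

for label, gbit_per_s in [("USB 3.0 (~4 Gb/s sustained)", 4),
                          ("10GbE (~8 Gb/s sustained)", 8),
                          ("SAS RAID (~16 Gb/s sustained)", 16)]:
    hours = bits_total / (gbit_per_s * 1e9) / 3600
    print(f"{label}: {hours:.1f} h per copy")
# A verified offload reads the source, writes the copy, then reads the copy
# back for the checksum, so real wall-clock time is roughly double.
```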

The most important challenge on a VFX-oriented film, Giardiello says, is the color pipeline, as large studios like Marvel, Disney, Warner Bros. and Universal are focused on making sure that the so-called “digital negatives,” or raw footage, that arrive in post and VFX are well balanced and don’t require a lot of fixing before those departments can begin their work. “So, having balanced footage has been, and still is, one of the biggest concerns for any major studio when it comes to managing color from set to post production,” he notes.

So, for the last few years, this issue has been handled through the in-camera white balance with a system developed by Giardiello. “We changed the white balance on every single camera, using that to match every single shot before it gets to post production. So when it arrives in front of a VFX compositor and the DI suite, the average color and density of every single shot is consistent,” he adds.

Francesco Giardiello’s rig on Jurassic World.

Giardiello’s workflow is one that he has designed and developed over a five-year period and shines particularly when it comes to facilitating VFX interaction with action footage. “If you have to spend weeks fixing things in VFX on a big job like Jurassic World, Aladdin or Spider-Man, we’re talking about losing thousands of dollars every day,” he points out.

The work entails using a range of tools, some of which are designed for each new job. One tool used on Giardiello’s last few films modifies the metadata of Red cameras to match that of the Alexa. Meanwhile, on set he uses Filmlight’s Prelight for light grading and to design CDLs. Probably the most important tool for dealing with raw footage, he maintains, is Codex Digital’s Codex Production Suite. “It allows us to streamline the cloning and backup processes, to perform a visual QC near set and to access the metadata of raw footage and change it (when it is not changed in-camera).

“When those files get to post production in [Filmlight’s] Daylight, which is mostly used these days to process rushes, Daylight doesn’t recognize that change as an actual change, but as something that the DIT does on set in-camera,” Giardiello says.

In addition, he also uses the new SSD SLED designed by Codex, which offers encryption — an important feature for studios like Marvel or Sony. Then, on set, he uses BoxIOs, a LUT box from Flanders Scientific, as well as Flanders monitors, either DM240s (LCDs) or DM250s (OLEDs), depending on the type of project.

Over the years, Giardiello has often worked with the same DPs, but in the past three years, his major clients instead have been studios: Universal, Marvel and Warner Bros. “But my boss is still the DP,” he adds.

During the past 12 years, Giardiello has witnessed an evolution in the role of DIT and expects this to continue, particularly as media continues to converge and merge — from cinema or television to mobile devices. “So yeah, I would say our job has changed and is going to change, but I think it’s more important now than it was 10 years ago, and obviously it’s going to be even more important in the next 10 years.”


Karen Moltenbrey is a longtime writer and editor in the CG and post industries.

Behind the Camera: Television DPs

By Karen Moltenbrey

Directors of photography on television series have their work cut out for them. Most collaborate early on with the director on a signature “look.” Then they have to make sure that aesthetic is maintained with each episode and through each season, should they continue on the series past the pilot. Like film cinematographers, their job entails a wide range of responsibilities aside from the camera work. Once shooting is done, they are often found collaborating with the colorists to ensure that the chosen look is maintained throughout the post process.

Here we focus on two DPs working on two popular television series — one drama, one sitcom — both facing unique challenges inherent in their current projects as they detail their workflows and equipment choices.

Ben Kutchins: Ozark
Lighting is a vital aspect of the look of the Netflix family crime drama Ozark. Or, perhaps more accurately, the lack of lighting.

Ben Kutchins (left) on set with actor/director Jason Bateman.

“I’m going for a really naturalistic feel,” says DP Ben Kutchins. “My hope is that it never feels like there’s a light or any kind of artificial lighting on the actors or lighting the space. Rather, it’s something that feels more organic, like sunlight or a lamp that’s on in the room, but still offers a level of being stylized and really leans into the darkness… mining the shadows for the terror that goes along with Ozark.”

Ozark, which just kicked off its second season, focuses on financial planner Marty Byrde, who relocates his family from the Chicago suburbs to a summer resort area in the Missouri Ozarks. After a money laundering scheme goes awry, he must pay off a debt to a Mexican drug lord by moving millions of the cartel’s money from this seemingly quiet place, or die. But, trouble is waiting for them in the Ozarks, as Marty is not the only criminal operating there, and he soon finds himself in much deeper than he ever imagined.

“It’s a story about a family up against impossible odds, who constantly fear for their safety. There is always this feeling of imminent threat. We’re trying to invoke a heightened sense of terror and fear in the audience, similar to what the characters might be feeling,” explains Kutchins. “That’s why a look that creates a vibe of fear and danger is so important. We want it to feel like there is danger lurking around every corner — in the shadows, in the trees behind the characters, in the dark corners of the room.”

In summary, the look of the show is dark — literally and figuratively.

“It is pretty extreme by typical television standards,” Kutchins concedes. “We’ve embraced an aesthetic and are having fun pushing its boundaries, and we’re thrilled that it stands out from a pretty crowded market.”

According to Kutchins, there are numerous examples where an actor disappears into the shadows and then reappears moments later in a pool of light, falling in and out of shadow. For instance, a character may turn off a light and plunge the room into complete darkness, and you do not see that character again until they reappear, lit by moonlight coming through a window or silhouetted against it.

“We’re not spending a lot of time trying to fill in the shadows. In fact, we spend most of our time creating more shadows than exist naturally,” he points out.

Jason Bateman, who plays Marty, is also an executive producer and directed the first two and last two episodes of Season 1. Early on, he, along with Kutchins and Pepe Avila del Pino, who shot the pilot, hashed out the desired look for the show, leaning into a very cyan and dark color palette — and leaning in pretty strongly. “Most people think of [this area as] the South, where it’s warm and bright, sweaty and hot. We just wanted to lean into something more nuanced, like a storm was constantly brewing,” Kutchins explains. “Jason really pushed that aesthetic hard across every department.”

Alas, that was made even more difficult since the show was mainly shot outdoors in the Atlanta area, and a good deal of work went into reacting to Mother Nature and transforming the locations to reflect the show’s Ozark mountain setting. “I spent an immense amount of time and effort killing direct sunlight, using a lot of negative fill and huge overheads, and trying to get rid of that direct, harsh sun,” says Kutchins. “Also, there are so many windows inside the Byrde house that it’s essentially like shooting an exterior location; there’s not a lot of controlled light, so you again are reacting and adapting.”

Kutchins shoots the series on a Panasonic VariCam, which he typically underexposes by a stop or two, mining the darker part of the sensor, “the toe of the exposure curve.” And by doing so, he is able to bring out the dirtier, more naturalistic, grimy parts of the image, rather than something that looks clean and polished. “Something that has a little bit of texture to it, some grit and grain, something that’s evocative of a memory, rather than something that looks like an advertisement,” he says.

To further achieve the look, Kutchins uses an in-camera LUT that mimics old Fuji film stock. “Then we take that into post,” he says, giving kudos to his colorist, Company 3’s Tim Stipan, who he says has been invaluable in helping to develop the “vibe” of the show. “As we moved along through Season 1 and into Season 2, he’s been instrumental in enhancing the footage.”
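Mechanically, a monitoring LUT like that is a sampled transfer curve applied to the camera signal. The sketch below applies a one-dimensional curve by interpolation; the curve itself is invented for illustration (real film emulations, including whatever Fuji look the show uses, are typically 3D LUTs that also shift color).

```python
import numpy as np

lut_in = np.linspace(0.0, 1.0, 17)          # 17-point 1D LUT input positions
lut_out = lut_in ** 0.9 * 0.95 + 0.02       # hypothetical lifted-toe tone curve

def apply_1d_lut(img, lut_in, lut_out):
    """Apply a 1D tone curve to every channel by linear interpolation."""
    return np.interp(img, lut_in, lut_out)

footage = np.random.rand(4, 4, 3)           # stand-in for camera frames
monitored = apply_1d_lut(footage, lut_in, lut_out)
```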

A lot of Kutchins’ work occurs in post, as the raw images captured on set are so different from the finals. Insofar as the digital intermediate is concerned, significant time is spent darkening parts of the frame, brightening small sections of the frame and working to draw the viewer into the frame. “I want people to be leaning on the edge of their seat, kind of wanting to look inside of the screen and poke their head in for a look around,” Kutchins says. “So I do a lot of vignetting and darkening of the edges, and darkening specific things that I think are distracting.”

Nevertheless, there is a delicate balance he must maintain. “I talk about the darkness of Ozark, but I am trying to ride that fine line of how dark it can be but still be something that’s pleasant to watch. You know, where you’re not straining to see the actor’s face, where there’s just enough information there and the frame is just balanced enough so your eyes feel comfortable looking at it,” he explains. “I spend a lot of time creating a focal point in the frame for your eyes to settle on — highlighting certain areas and letting some areas go black, leaving room for mystery in every frame.”

When filming, Kutchins and his crew use Steadicams, cranes, dollies and handheld. He also uses Cooke Optics’ S4 lenses, which he tends to shoot wide open, “to let the flaws and character of the lenses shine through.”

Before selecting the Panasonic VariCam, Kutchins and his group tested other cameras. Netflix’s 4K requirement immediately ruled out the ARRI Alexa, which is Kutchins’ preferred camera. “But the Panasonic ended up shining,” he adds.

In Ozark, the urban family is pitted against nature, and thus, the natural elements around them need to feel dangerous, Kutchins points out. “There’s a line in the first season about how people drown in the lake all the time. The audience should always feel that; when we are at the water’s edge, that someone could just slip in and disappear forever,” he says. “So, the natural elements play a huge role in the inspiration for the lighting and the feel of the show.”

Jason Blount: The Goldbergs
A polar opposite to Ozark in almost every way, The Goldbergs is a single-camera comedy sitcom set in the ’80s about a caring but grumpy dad, an overbearing mother and three teens — the oldest, a popular girl; the middle one, who fancies himself a gifted athlete and strives to be popular; and the youngest, a geek who is obsessed with filmmaking, as he chronicles his life and that of his family on film. The series is created and executive-produced by Adam F. Goldberg and is based on his own life and childhood, which he indeed captured on film while growing up.

The series is filmed mostly on stage, with the action taking place within the family home or at the kids’ schools. For the most part, The Goldbergs is an up-lit, broad comedy. The colors are rich, with a definite nod to the vibrant palette of the ’80s. “Our colorist, Scott Ostrowsky [from Level 3], has been grading the show from day one. He knows the look of the show so well that by the time I sit with him, there are very few changes that have to be made,” says DP Jason Blount.

The Goldbergs began airing in 2013 and is now entering its sixth season. Blount has been involved since the start, first serving as the A camera/Steadicam operator before assuming the role of DP with the Season 1 finale — 92 episodes and counting.

As this was a Sony show for ABC, the plan was to shoot with the Sony PMW-F55 CineAlta 4K digital camera, but at the time it did not record at a fast enough frame rate for some of the high-speed work the production wanted, so the show used the ARRI Alexa for Season 1. Blount took over as DP full time from Season 2 onward, and with the frame rate issue resolved, the production made the switch to the F55.

“The look of the show had already been established, and I wanted to make sure that the transition between cameras was seamless,” says Blount. “Our show is all about faces and seeing the comedy. From the onset, I was very happy with the Sony F55. The way the camera renders skin tone, the lack of noise in the deep shadows and the overall user-friendly nature of the camera impressed me from the beginning.”

Blount points to one particular episode where the F55 really shined. “The main character was filming a black-and-white noir-style home movie. The F55 handled the contrast beautifully. The blacks were rich and the highlights held onto detail very well,” he says. “We had a lot of smoke, hard light directly into the lens, and really pushed the limits of the sensor. I couldn’t have been happier with the results.”

In fact, the camera has proved its mettle winter, spring, summer and fall. “We’ve used it in the dead of winter, at night in the rain and during day exterior [shots] at the height of summer when it’s been over 100 degrees. It’s never skipped a beat.”

Blount also commends Keslow Camera in Los Angeles, which services The Goldbergs’ cameras. In addition, the rental house has accessorized the F55 camera body with extra bracketry and integrated power ports for more ease of use.

Due to the fast pace at which the show is filmed — often covering 10-plus pages of script a day — Blount uses Angenieux Optimo zoom lenses. “The A camera has a full set of lightweight zooms covering 15mm to 120mm, and the B camera always has the [Optimo] 24-290,” he says. “The Optimo lenses and F55 are a great combination, making it easy to move fast and capture beautiful images.”

Blount points out that he also does all the Steadicam work on the show, and with the F55 being so lightweight, compact and versatile, it makes for a “very comfortable camera in Steadicam mode. It’s perfect to use in all shooting modes.”

The Goldbergs’ DP always shoots with two cameras, sometimes three depending on the scene or action. And there is never an issue with the cameras not matching, according to Blount. “I’m not a big fan of the GoPro image in the narrative world, and I own a Sony a7S. It’s become my go-to camera for mounts or tight-space work on the show, and it works perfectly with the F55.”

And, there is something to say for consistency, too. “Having used the same camera and lens package for the past five seasons has made it easy to keep the look consistent for The Goldbergs,” says Blount. “At the beginning of this season, I looked at shooting with the new Sony Venice. It’s a fantastic-looking camera, and I love the options, like the variable ND filters, more color temperature options and the dual ISO, but the limit of 60fps at this stage was a deal-breaker for me; we do a fair amount of 72fps and 120fps.”

“If only the F55 had image stabilization to take out the camera shake when the camera operators are laughing so hard at the actors’ performances during some scenes. Then it would be the perfect camera!” he says with a laugh himself.


Karen Moltenbrey is a longtime writer and editor in the CG and post industries.

Q&A: Camera Operators

By Randi Altman

Camera operators might not always get the glory, but they certainly do get the job done. Working hand in hand with DPs and directors, these artists make sure the camera is in the right place for the right shot, and so much more. As one of the ops we spoke to says, the camera operator is the “protector of the frame.”

We reached out to three different camera operators, all of whom are members of the Society of Camera Operators (SOC), to find out more about their craft and how their job differs from some of the others on set.

Lisa Stacilauskas

Lisa Stacilauskas, SOC
What is the role of the camera operator? What is the camera operator accountable for on set?
The role of the camera operator varies quite a bit depending on the format. I work primarily in scripted television on “single camera” comedies. Don’t let the name “single camera” fool you. It’s meant to differentiate the shooting format from multicam, but these days most single camera shows shoot with two or three cameras. The show I work on, American Housewife, uses three cameras. I am the C camera operator.

In the most basic sense, the camera operator is responsible for the movement of the camera and the inclusion or exclusion of what is in frame. It takes a team of craftspeople to accomplish this. My immediate team includes a 1st and 2nd camera assistant and a dolly grip. Together we get the camera where it needs to be to get the shots and to tell the story as efficiently as possible.

In a larger sense, the camera operator is a storyteller. It is my responsibility to know the story we are trying to tell and assist the director in attaining their vision of that story. As C camera operator, I think about how the scene will come together in editing so I know which pieces of coverage to get.

Another big part of my job is keeping lighting equipment and abandoned water bottles out of my shot. The camera operator is the “protector of the frame”!

How do you typically work with the DP?
The DP is the head of the camera department. Each DP has nuances in the way they work with their operators. Some DPs tell you exactly where to put your camera and what focal length your lenses should be. Others give you an approximate position and an indication of the size (wide, medium, close-up) and let you work it out with the actors or stand-ins.

American Housewife

How does the role of the camera operator and a DP differ?
The DP is in charge of camera and lighting. Officially, I have no responsibility for lighting. However, it’s very important for a camera operator to think like a DP and to know and pay attention to the lighting. Additionally, especially when shooting digitally, once the blocking is determined, camera operators stay on set, working with all the other departments to prepare a shot while the DP is at the monitors evaluating the lighting and/or discussing setups with the director.

What is the relationship between the operator and the director?
The relationship between the operator and director can vary depending on the director and the DP. Some directors funnel all instructions through the DP and only come to you with minor requests once the shots have already been determined.

If the director comes to me directly without going through the DP, it is my responsibility to let the DP know of the requested shot, especially if she/he hasn’t lit for it! Sometimes you are a mediator between the two and hopefully steer them closer to being on the same page. It can be a tough spot to be in if the DP and director have different visions.

Can you talk about recent projects you’ve worked on?
I’m currently working on Season 3 of American Housewife. The C camera position was a day-playing position at the beginning of Season 1, but DP Andrew Rawson loves to work with three cameras, and really knows how to use all three efficiently. Once production saw how much time we saved them, they brought us on full time. Shooting quickly and efficiently is especially important on American Housewife because three of our five principal actors are minors, whose hours on set are restricted by law.

During a recent hiatus, I operated B camera on a commercial with the DP operating A camera. It seemed like the DP appreciated the “extra set of eyes.”

Prior to American Housewife, I worked on several comedies operating the B camera, including Crazy Ex-Girlfriend (Season 1), Teachers (Season 1) and Playing House (Season 2).

Stephen Campanelli, SOC
What is the role of the camera operator? What is the camera operator accountable for on set?
The role of the camera operator on the set is to physically move the camera around to tell the story. The camera operator is accountable for what the director and the director of photography interpret to be the story that needs to be told visually by the camera.

Stephen Campanelli (center) on the set of American Sniper.

As a camera operator, you listen to their input, and sometimes have your own input and opinion to make the shot better or to convey the story point from a different view. It is the best job on set in my opinion, as you get to physically move the camera to tell great stories and work with amazing actors who give you their heart and soul right in front of you!

How do you typically work with the DP?
I have been very fortunate in my career to have worked with some very collaborative DPs. After 24 years of working with Clint Eastwood, I have absorbed so much of his directing style and visual nature that, working closely with the DP, we have created the Eastwood style of filmmaking. When I do films with other DPs, we always talk about conveying the story in the truest, most visual way without letting the camera get in the way of a good story. That is one of the most important things to remember: A camera operator is not there to bring attention to the camera, but to bring attention to the story!

How does the role of the camera operator and a DP differ?
A DP usually is in charge of the lighting and the look of the entire motion picture. Some DPs also operate the camera, but that is a lot of work on both sides. A camera operator is essential, as he or she can rehearse with the actors or stand-ins while the director of photography concentrates solely on the lighting.

What is the relationship between the operator and the director?
As I mentioned earlier, my relationship with Clint Eastwood has been a very close one, as he works closely with the camera operator rather than the director of photography. We have an incredible bond where very few words are spoken, but we each know how to tell the story once we read the script. On some films, the director and the DP are the ones that work together closely to cohesively set the tone for the movie and to tell the story, and the camera operator interprets that and physically moves the camera with the collaboration of both the director and director of photography.

Can you talk about recent projects you’ve worked on?
I just wrapped my 22nd movie as Clint Eastwood’s camera operator; it is called The Mule. We filmed in Atlanta, New Mexico and Colorado. It is a very good script, and Clint is back in front of the camera again, acting in it. It also stars Bradley Cooper, Laurence Fishburne and Michael Pena. [Editor’s note: He recently worked on A Star is Born, also with Bradley Cooper.]

Recently, I moved up to directing. In 2015, I directed a movie called Momentum, and this year I directed an award-winning film called Indian Horse that was a big hit in Canada and will soon be released in the United States.

Jamie Hitchcock

Jamie Hitchcock, SOC
What is the role of the camera operator? What is the camera operator accountable for on set?
The role of a camera operator is to compose an assigned shot and physically move the camera if necessary to perform that shot or series of shots as many times as needed to achieve the final take. The camera operator is responsible for maintaining the composition of the shot while also scanning the frame for anything that shouldn’t be there. The camera operator is responsible for communicating to all departments about elements that should or should not be in the frame.

How do you typically work with the DP?
The way a camera operator works with a director of photography varies depending on the type of project they are working on. On a feature film, episodic or commercial, the director of photography is very involved. The DP will set each shot and the operator then repeats it as many times as necessary. On a variety show, live show or soap opera, the DP is usually a lighting designer/director, and the director works with the camera operators to set the shots. On multi-camera sitcoms, the shots are usually set by the director… with the camera operator. When the production requires a complicated scene or location, the DP will become more actively involved with the selection of the shots.

How does the role of the camera operator and a DP differ?
The roles of the DP and operator are quite different, yet the goal is the same. The DP is involved with the pre-production process, lighting, running the set, managing the crew and the post process. The operator is involved on set, working shot by shot. Ultimately, both the DP and operator are responsible for the final image the viewing audience will see.

What is the relationship between the operator and the director?
The relationship between the operator and the director, like that of the operator and DP, varies depending on the type of project. On a feature-type project, the director may be only using one camera. On a sports or variety program the director might be looking at 15 or more cameras. In all cases, the director is counting on the operators to perform their assigned shots each time. Ultimately, when a shot or take is complete, the director is the person who decides to move on or do it again, and they trust the operator to tell them if the shot was good or not.

CBS’s Mom

Can you talk about recent projects you’ve worked on?
I am currently working on The Big Bang Theory and Mom for CBS. Both are produced by Chuck Lorre Productions and Warner Bros. Television. Steven V. Silver, ASC, is the DP for both shows. Both use four cameras and are taped in front of a studio audience. We work in what I like to call the “Desilu-type” format because all four cameras are on J.L. Fisher dollies, with a 1st assistant working the lens and a dolly grip physically moving the camera. This format was perfected by Desi Arnaz for I Love Lucy and still works well today.

The working relationship with the DP and director on our shows falls somewhere in the middle. Mark Cendrowski and Jamie Widdoes direct almost all of our shows, and they work directly with the four operators to set the required shots. They have a lot of trust in us to know what elements are needed in the frame, and sometimes the only direction we receive is the type of shot they want. I work on a center camera, commonly referred to as a “master” camera; however, it’s not uncommon to have a master, a close-up and two or three shots all in the same scene. Each scene is shot beginning to end with all the coverage set, and a final edit is done in post. We do have someone cutting a live edit that feeds to the audience so they can follow along.

Our process is very fast, and our DP usually only sees the lighting when we start blocking shots with stand-ins. Steve spends a lot of time at the monitor and constantly switches between all four cameras — he’s looking at composition and lighting, and setting the final look with our video controller. Generally, when the actors are on set we roll the cameras. It’s a pretty high-pressure way to work for a product that will potentially be seen by millions of people around the world, but I love it and can’t imagine doing anything else.

Main Image Caption: Stephen Campanelli


The SOC, which celebrates 40 years in 2019, is an international organization that aims to bring together camera operators and crew. The Society also hosts an annual Lifetime Achievement Awards show, publishes the magazine Camera Operator and has a charitable commitment to The Vision Center at Children’s Hospital Los Angeles.

The ASC: Mentoring and nurturing diversity

Cynthia Pusheck, ASC, co-chairs the ASC Vision Committee along with John Simmons, ASC. Working together, they focus on encouraging and supporting the advancement of underrepresented cinematographers, their crews and other filmmakers. They hope their efforts inspire others in the industry to drive positive change by hiring talent that better reflects society.

In addition to her role on the ASC Vision Committee, Pusheck is a VP of the ASC board. She became a member in 2013. Her credits include Sacred Lies, Good Girls Revolt, Revenge and Brothers & Sisters. She is currently shooting Limetown for Facebook Watch.

To find out more about their work, we reached out to Pusheck.

Can you talk about what the ASC Vision Committee has done since its inception? What it hopes to accomplish?
The ASC Vision Committee was formed in January 2016 as a way for the ASC to actively support those who face unique hurdles as they build their cinematography careers. We’ve held three full-day diversity events, and some individual panel discussions.

We’ve also awarded a number of scholarships to the ASC Master Class and will continue awarding a handful each year. Our mentorship program is getting off the ground now with many ASC members offering to give time to young DPs from underrepresented groups. There’s a lot more that John Simmons (my co-chair) and our committee members want to accomplish, and with the support of the ASC staff, board members and president, we will continue to push things forward.

(L-R) Diversity Day panel: Rebecca Rhine, Dr. Stacy Smith, Alan Caso, Natasha Foster-Owens, Xiomara Comrie, Tema Staig, Sarah Caplan.

The word “progress” has always been part of the ASC mission statement. So, with the goal of progress in mind, we redesigned an ASC red lapel pin and handed it out at the ASC Awards earlier this year (#ASCVision). We wanted to use it to call attention to the work of our committee and to encourage our own community of cinematographers and camera people to do their part. If directors of photography and their department heads (camera, grip and set lighting) hire with inclusivity in mind, then we can change the face of the industry.

What do you think is contributing to more females becoming interested in camera crew careers? What are you seeing in terms of tangible developments?
Gender inequality in this industry has certainly gotten a lot of attention over the last few years, which is fantastic. But despite all that attention, the actual facts and figures don’t show as much change as you’d think.

The percentage of women or people of color shooting movies and TV shows hasn’t really changed much. There certainly is a lot more “content” getting produced for TV, and that has been great for many of us, and it’s a very exciting time. But, we have a long way to go still.

What’s very hopeful, though, is that more producers and studios are really pushing for inclusivity. That means hiring more women and people of color in positions of leadership, and encouraging their crews to bring more underrepresented crew members onto the production.

Currently we’re also seeing more young female DPs getting some really good shooting opportunities very early in their careers. That didn’t happen so much in the past, and I think that continues to motivate more young women to consider the camera department, or cinematography, as a viable career path.

We also have to remember that it’s not just about getting more women on set, it’s about having our sets look like society at large. The ultimate goal should be that everyone has a fair chance to succeed in this industry.

How can women looking to get into this part of the industry find mentors?
The union (Local 600) and now the ASC both have mentorship programs. The union’s program is great for those coming up the ranks looking for help or advice as they build their careers.

For example, an assistant can find another assistant, or an operator, to help them navigate the next phase of their career and give them advice. The ASC mentorship program is aimed more for young cinematographers or operators from underrepresented groups who may benefit from the support of an experienced DP.

Another way to find a mentor is by directly contacting someone you admire. Many women would be surprised to find that if they reach out and request a coffee or a phone call, that person will often try to find time for them.

My advice would be to do your homework about the person you’re contacting and be specific in your questions and your goals. Asking broad questions like “How do I get a job” or “Will you hire me?” won’t get you very far.

What do you think will create the most change? What are the hurdles that still must be overcome?
Bias and discrimination, whether conscious or unconscious, are still a problem on our sets. That may have lessened in the last 25 years, but we all continue to hear stories about crew members (at all levels) who behave badly, make inappropriate comments or just have trouble working for women or people of color. These are all unnecessary stresses for those trying to get hired and build their careers.

Behind the Camera: Feature Film DPs

By Karen Moltenbrey

The responsibilities of a director of photography (DP) span far more than cinematography. Perhaps they are best known for their work behind the camera capturing the action on set, but that is just one part of their multi-faceted job. Well before they step onto the set, they meet with the director, at times working hand-in-hand to determine the overall look of the project. They also make a host of technical selections, such as the type of camera and lenses they will use as well as the film stock if applicable – crucial decisions that will support the director’s vision and make it a reality.

Here we focus on two DPs for a pair of recent films with specialized demands and varying aesthetics, as they discuss their workflows on these projects as well as the technical choices they made concerning equipment and the challenges each project presented.

Hagen Bogdanski: Papillon
The 2018 film Papillon, directed by Michael Noer, is a remake of the 1973 classic. Set in the 1930s, it follows two inmates who must serve, and survive, time in a French Guiana penal colony. The safecracker nicknamed Papillon (Charlie Hunnam) is serving a life sentence and offers protection to wealthy inmate Louis Dega (Rami Malek) in exchange for financing Papillon’s escape.

“We wanted to modernize the script, the whole story. It is a great story but it feels aged. To bring it to a new, younger audience, it had to be modernized in a more radical way, even though it is a classic,” says Hagen Bogdanski, the film’s DP, whose credits include the film The Beaver and the TV series Berlin Station, among others. To that end, he notes, “we were not interested in mimicking the original.”

This was done in a number of ways. First, through the camera work, using a semi-documentary style. The director has a history of shooting documentaries and, therefore, the crew shot with two cameras at all times. “We also shot the rehearsals,” notes Bogdanski, who was brought onto the project and given nearly five weeks of prep before shooting began. Although this presented a lot of potential risk for Bogdanski, the film “came out great in the end. I think it’s one of the reasons the look feels so modern, so spontaneous.”

In the film, the main characters face off against the harsh environment of their prison island. But to film such a landscape required the cinematographer and crew to also contend with these trying conditions. They shot on location outdoors for the majority of the feature, using just one physical structure: the prison. Also helping to define the film’s aesthetic was the lighting, which, as is typical with Bogdanski’s films, is as natural as possible without large artificial sources.

Most of the movie was shot in Montenegro, near sun-drenched Greece and Albania. Bogdanski does not mince words: “The locations were difficult.”

Weather seemed to impact Bogdanski the most. “It was very remote, and if it’s raining, it’s really raining. If it’s getting dark, it’s dark, and if it’s foggy, there is fog. You have to deal with a lot of circumstances you cannot control, and that’s always a bit of a nightmare for any cinematographer,” he says. “But, what is good about it is that you get the real thing, and you get texture, layers, and sometimes it’s better when it rains than when the sun is shining. Most of the time we were lucky with the weather and circumstances. The reality of location shooting adds quite heavily to the look and to the whole texture of the movie.”

The location shooting also affected this DP’s choice of cameras. “The footprint [I used] was as small as possible because we basically visited abandoned locales. Therefore, I chose as small a kit — lenses, cameras and lights — as possible,” Bogdanski points out. “Because [the camera] was handheld, every pound counted.” In this regard, he used ARRI Alexa Mini cameras and one Alexa SXT, and he shot only with Zeiss Ultra Prime lenses — “no big zooms, no big filters, nothing,” he adds.

The prison build was on a remote mountain. On the upside, Bogdanski could shoot 360 degrees there without requiring the addition of CGI later. On the downside, the crew had to get up the mountain. A road was constructed to transport the gear and for the set construction, but even so, the trek was not easy. “It took two hours or longer each day from our hotel. It was quite an adventure,” he says.

As for the lighting, Bogdanski tried to shoot when the light was good, taking advantage of the location’s natural light as much as possible — within his documentary style. When this was not enough, LEDs were used. “Again, small footprint, smaller lens, smaller electrical power, smaller generators….” The night scenes were especially challenging because the nights were very short, no longer than five to six hours. When artificial rain had to be used, shooting was “a little painful” due to the size of the set, requiring the use of more traditional lighting sources, such as large Tungsten light units.

According to Bogdanski, filming Papillon followed what he calls an “eclectic” workflow, akin to the European method of filming whereby rehearsal occurred in the morning and was quite long, as the director rehearsed with the actors. Then, scenes were shot in script order, on the first take without technical rehearsals. “From there, we tried to cover the scene in handheld mode with two cameras in a kind of mash-up. We did pick up the close-ups and all that, but always in a very spontaneous and quick way,” says Bogdanski.

Looking back, Bogdanski describes Papillon as a “modern-period film”: a period look, without looking “period.” “It sounds a bit Catch-22, which it is, in my opinion, but that’s what we aimed for, a film that plays basically in the ’40s and ’50s, and later in the ’60s,” he says.

During the time since the original film was made in 1973, the industry has witnessed quite a technical revolution in terms of film equipment, providing the director and DP on the remake with even more tools and techniques at their disposal to leave their own mark on this classic for a new generation.

Nancy Schreiber: Mapplethorpe
Award-winning cinematographer Nancy Schreiber, ASC, has a resume spanning episodic television (The Comeback), documentaries (Eva Hesse) and features (The Nines). Her latest film, Mapplethorpe, paints an unflinching portrait of controversial-yet-revered photographer Robert Mapplethorpe, who died at the age of 42 from AIDS-related complications in 1989. Mapplethorpe, whose daring work influenced popular culture, rose to fame in the 1970s with his black-and-white photography.

In the early stages of planning the film, Schreiber worked with director Ondi Timoner and production designer Jonah Markowitz while they were still in California prior to the shoot in New York, where Mapplethorpe (played by The Crown’s Matt Smith) lived and worked at the height of his popularity.

“We looked at a lot of reference materials — books and photographs — as Ondi and I exchanged look books. Then we homed in on the palette, the color of my lights, the set dressing and wardrobe, and we were off to the races,” says Schreiber. Shooting began mid-July 2017.

Mapplethorpe is a period piece that spans three decades, all of which have a slightly different feel. “We kept the ’60s and into the ’70s quite warm in tone,” as this is the period when he first meets Patti Smith, his girlfriend at the time, and picks up a camera, explains Schreiber. “It becomes desaturated but still warm tonally when he and Patti visit his parents back home in Queens while the two are living at the Chelsea Hotel. The look progresses until it’s very much on the cool blue/gray side, almost black and white, in the later ’70s and ’80s.” During that time period, Mapplethorpe is successful, with an enormous studio, photographically exploring the male body like no other person had done, while continuing to shoot portraits of the rich and famous.

Schreiber opted to use film, Super 16, rather than digital to capture the life of this famed photographer. “He shot in film, and we felt that format was true to his photography,” she notes. Despite Mapplethorpe’s penchant for mostly shooting in black and white, neither Timoner nor Schreiber considered using that format for the feature, mostly because the ’60s through ’80s in New York had very distinctive color palettes. They felt, however, that film in and of itself was very “textural and beautiful,” whereas you have to work a little harder with digital to make it look like film — even though new ways of adding grain to digital have become quite sophisticated. “Yet, the grain of Super 16 is so distinctive,” she says.

In addition, Kodak had just opened a lab in New York in the spring of 2017, facilitating their ability to shoot film by having it processed quickly nearby.

Schreiber used an ARRI Arriflex 416 camera for the project; when possible, she used two. She also had a set of Zeiss 35mm Super Speed lenses, along with two zoom lenses she used only occasionally for outdoor shots. “The Super Speeds were terrific. They’re vintage and were organic to the look of this period.”

She also used a light meter faithfully. Although Schreiber occasionally uses light meters when shooting digital, it was not optional for shooting film. “I had to use it for every shot, although after a couple of days, I was pretty good at guessing [by eyeing it],” Schreiber points out, “as I used to do when we only shot film.”

Soon after ARRI introduced the Arriflex 416 — which is small and lightweight — the industry started moving to digital, prompting ARRI to roll out the now-popular Alexa. “But the [Arriflex 416] camera really caught on for those still shooting Super 16, as they do for the series The Walking Dead,” Schreiber says, adding that she was able to get her pair from the TCS Technological Cinevideo Services rental house in New York.

“I had owned an Aaton, a French camera that was very popular in the 1980s and ’90s. But today, the 416 is very much in demand, resembling the shape of my Aaton, both of which are ergonomic, fitting nicely on your shoulder. There were numerous scenes in the car, and I could just jump in the car with this very small camera, much smaller than the digital cameras we use on movies; it was so flexible and easy to work with,” recalls Schreiber.

As for the lenses, “again, I chose the Super Speed Primes not only because they were vintage, but because I needed the speed of the 1.3 lens since film requires more light.” She tested other lenses at TCS, but those were her favorites.

While Schreiber has used film on some commercials and music videos, it had been some time since she had used it for an entire movie. “I had forgotten how freeing it is, how you can really move. There are no cables to worry about, although we did transmit to a tiny video village,” she says. “We didn’t always have two cameras [due to cost], so I needed to move fast and get all the coverage the editor needed. We had 19 days, and we were limited in how long we could shoot each day; our budget was small and we couldn’t afford overtime.” At times, though, she was able to hire a Steadicam or B operator who really helped move them along, keeping the camera fluid and getting extra coverage. Timoner also shot a bit of Super 8 along the way.

There was just one disadvantage to using film: The stocks are slow. As Schreiber explains, she used a 500 ASA stock; therefore, she needed very fast lenses and a fair amount of light to compensate. “That worked OK for me on Mapplethorpe because there was a different sense of lighting in the 1970s, and films seemed more ‘lit.’ For example, I might use backlight or hair light, which I never would do for [a film set in] present day,” she says. “I rated that stock at 400 to get rich blacks, which also slightly minimized the grain; the day-interior stock was 250, which I rated at 200. We are so used to shooting at 800 or 1280 ISO these days. It was an adjustment.”
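
For readers keeping score, the exposure math behind those ratings is simple stop arithmetic; here is a quick sketch in Python (the ISO figures are from Schreiber’s quote, the helper function is mine):

import math

def stops_of_overexposure(native_iso: float, rated_iso: float) -> float:
    """Rating a stock below its native ISO overexposes the negative;
    the difference in stops is log2(native / rated)."""
    return math.log2(native_iso / rated_iso)

# Schreiber's ratings from the interview:
print(f"500 stock rated at 400: +{stops_of_overexposure(500, 400):.2f} stops")  # ~ +0.32
print(f"250 stock rated at 200: +{stops_of_overexposure(250, 200):.2f} stops")  # ~ +0.32

Both ratings overexpose by about a third of a stop, which is what builds density for richer blacks and tightens the apparent grain.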

Schreiber on set with “Mapplethorpe” director Ondi Timoner.

Shooting with film was also more efficient for Schreiber. “We had monitors for the video village, but we were standard def, old-school, which is not an exact representation. So, I could move quickly to get enough coverage, and I never looked at a monitor except when we had Steadicam. What you see is not what you get with an SD tap. I was trusted to create the imagery as I saw fit. I think many people today are used to seeing the digital image on the monitor as what the final film will look like and may be nervous about waiting for the processing and transfer, not trusting the mystery or mystique of how celluloid will look.”

To top things off, Schreiber was backed by an all-female A camera team. “I know how hard it is for women to get work,” she adds. “There are so many competent women working behind the camera these days, and I was happy to hire them. I remember how challenging it was when I was a gaffer or started to shoot.”

As for costs, digital camera equipment is more expensive than Super 16 film equipment, yet there were processing and transfer costs associated with getting the film into the edit suite. So, when all was said and done, film was indeed more expensive to use, but not by much.

“I am really proud that we were able to do the movie in 19 days with a very limited budget, in New York, covering many periods,” concludes Schreiber. “We had a great time, and I am happy I was able to hire so many women in my departments. Women are still really under-represented, and we must demonstrate that there is not a scarcity of talent, just a lack of exposure and opportunity.”

Mapplethorpe is expected in theaters this October.


Karen Moltenbrey is a longtime writer and editor in the CG and post industries.

DP Rick Ray: Traveling the world capturing stock images

By Randi Altman

It takes a special kind of human to travel the world, putting himself in harm’s way to collect hard-to-find stock imagery, but Rick Ray thrives on this way of life. This Adobe Stock contributor has a long history as a documentary filmmaker and a resume that includes 10 Questions for the Dalai Lama (2006), Letters Home from the South China Seas: Adventures in Singapore & Borneo (1989) and Letters Home from Iceland (1990).

Let’s find out more about what makes Ray tick.

As a DP, are you just collecting footage to sell or are you working on films, docs and series as well?
I used to be a documentary filmmaker and have about 24 published titles in travel and biography, including 10 Questions for the Dalai Lama and the TV series Raising the Bamboo Curtain with Martin Sheen. However, I found that unless you are Ken Burns or Michael Moore, making a living in the world of documentary films can be very difficult. It wasn’t until I realized that individual shots taken from my films and used in other productions were earning me more income than the whole film itself that I understood how lucrative and valuable your footage can be when it is repurposed as stock.

That said, I still hire myself out as a DP on many Hollywood and independent films whenever possible. I also try to retain the stock rights for these assignments whenever possible.

A Bedouin man in Jordan.

How often are you on the road, and how do you pick your next place to shoot?
I travel for about three to four months each year now. Lately, I travel to places that interest me from a beauty or cultural perspective, whether or not they offer maximum commercial potential. The stock footage world is inundated with great shots of Paris, London or Tokyo. It’s very hard for your footage to be noticed in such a crowded field of content. For that reason, lesser-known locations of the world are attractive to me because there is less good footage of those places.

I also enjoy the challenges of traveling and filming in less comfortable places in the world, something I suppose I inherited from my days as a 25-year-old backpacking and hitchhiking around the world.

Are you typically given topics to capture — filling a need — or just shooting what interests you?
Mostly what interests me, but I also see a need for many topics of political relevance, which informs my shooting itinerary.

For example, immigration is in the news intensively these days, so I recently drove the border wall from Tijuana to the New Mexico border capturing imagery of it. It’s not a place I’d normally go for a shoot, but it proved to be very interesting, and that footage licenses all the time.

Rick Ray

Do you shoot alone?
Yes, normally. Sometimes I go with one other person, but that’s it. To be an efficient and effective stock shooter, you are not a “film crew” per se. You are not hauling huge amounts of gear around. There are no “grips,” and no “craft services.” In stock shooting around the world, as I define it, I am a low-key casual observer making beautiful images with low-key gear and minimal disruption to life in the countries I visit. If you are a crew of three or more, you become a group unto yourself, and it’s much more difficult to interact and experience the places you are visiting.

What do you typically capture with, camera-wise? What format? Do you convert footage or let Adobe Stock do that?
I travel with two small (but excellent) Sony 4K Handycams (FDR-AX100), two drones, a DJI Osmo handheld steady-grip, an Edelkrone slider kit and two lightweight tripods. Believe it or not, these can all fit into one standard large suitcase. I shoot in XDCAM 4K and then convert it to Apple ProRes in post. Adobe Stock does not convert my clips for me. I deliver them ready to be ordered.
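
Ray doesn’t say which tool he uses for the ProRes conversion, but for anyone wanting to replicate that step, here is a minimal sketch using ffmpeg’s prores_ks encoder driven from Python; the folder names and file pattern are placeholders, not his actual setup:

import subprocess
from pathlib import Path

SRC = Path("card_offload")      # hypothetical folder of camera originals
DST = Path("prores_masters")
DST.mkdir(exist_ok=True)

for clip in sorted(SRC.glob("*.mp4")):  # adjust the pattern to your camera's container
    out = DST / (clip.stem + ".mov")
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-c:v", "prores_ks", "-profile:v", "3",   # ProRes 422 HQ
        "-pix_fmt", "yuv422p10le",                # 10-bit 4:2:2
        "-c:a", "pcm_s16le",                      # uncompressed audio
        str(out),
    ], check=True)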

You edit on Adobe Premiere. Why is that the right system for you, and do you edit your footage before submitting? How does that Adobe Stock process work?
I used to work in Final Cut Pro 7 and Final Cut Pro X, but I switched to Adobe Premiere Pro after struggling with FCPX. As for “editing,” it doesn’t really play a part in stock footage submission. There is no editing, as we are almost always dealing with single clips. I do grade, color correct, stabilize and de-noise many clips before I export them. I believe in having the clips look great before they are submitted. They have to compete with thousands of other clips on the site, and mine need to jump out at you and make you want to use them. Adobe allows users to submit content directly from Premiere to Adobe Stock, but since I deal in large volumes of clips, I don’t generally use this approach. I send a drive in with a spreadsheet of data when a batch of clips is done.
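
The drive-plus-spreadsheet delivery lends itself to a trivial script. A sketch of a manifest generator follows; the column names here are illustrative only, so check Adobe Stock’s current template for the exact headers it requires:

import csv
from pathlib import Path

CLIPS = Path("prores_masters")  # the graded, ready-to-order clips

with open("submission_manifest.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["filename", "title", "keywords"])  # illustrative headers
    for clip in sorted(CLIPS.glob("*.mov")):
        writer.writerow([clip.name, "", ""])  # titles and keywords filled in by hand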

A firefighter looks back as a building collapses during the Thomas Fire in Ventura, California.

What are the challenges of this type of shooting?
Well, you are 100% responsible for the success or failure of the mission. There is no one to blame but yourself. Since you are mostly traveling low-key and without a lot of protection, it’s very important to have a “fixer” or driver in difficult countries. You might get arrested or have all of your equipment stolen by corrupt customs authorities in a country like Macedonia, as happened to me. It happens! You have to roll with the good and the bad, ask forgiveness rather than permission and be happy for the amazing footage you do manage to get.

You left a pretty traditional job to travel the world. What spurred that decision, and do you ever see yourself back at a more 9-to-5 type of existence?
Never! I have figured out the perfect retirement plan for myself. Every day I can check my sales from anywhere in the world, and on most days the revenue more than justifies the cost of the travel! And it’s all a tax write-off. Who has benefits like that?

A word of warning, though — this is not for everyone. You have to be ok with the idea of spending money to build a portfolio before you see significant revenue in return. It can take time and you may not be as lucky as I have been. But for those who are self-motivated and have a knack for cinematography and travel, this is a perfect career.

Can you name some projects that feature your work?
Very often this takes me by surprise since I often don’t know exactly how my footage is used. More often than not, I’m watching CNN, a TV show or a movie and I see my footage. It’s always a surprise and makes me laugh. I’ve seen my work on the Daily Show, Colbert, CNN, in commercials for everything from pharmaceuticals to Viking Cruises, in political campaign ads for people I agree and disagree with, and in music videos for Neil Young, Bruce Springsteen, Coldplay and Roger Waters.

Fire burns along the road near a village in the Palestinian territories.

Shooting on the road must be interesting. Can you share a story with us?
There have been quite a few. I have had my gear stolen in Israel (twice), and, as I mentioned earlier, my gear was confiscated by corrupt customs authorities in Macedonia. I have been jailed by Ethiopian police for not having a valid filming permit, which, it turned out, was not actually required. Once a proper bribe was arranged, they changed out of their police uniforms into native costume and performed as tour guides and cultural emissaries for me.

In India, I was on a train to the Kumbh Mela, which was stopped by a riot and burned. I escaped with minor injuries. I was also accosted by communist revolutionaries in Bihar, India. Rather than be a victim, I got out of the car and filmed it, and the leader and his generals then reviewed the footage and decided to do it over. After five takes of them running down the road and past the camera, the leader finally approved the take, and I was left unharmed.

I’ve been in Syria and Lebanon and felt truly threatened by violence. I’ve been chased by Somali bandits at night in a van in Northern Kenya. Buy me a beer sometime, I’ll tell you more.

Behind the Title: WIG director/DP Daniel Hall

NAME: Daniel Hall

COMPANY: LA-based Where It’s Greater (@whereitsgreater)

Dan on set for Flyknit Hyperdunk project.

CAN YOU DESCRIBE YOUR COMPANY?
Where It’s Greater is a nimble creative studio. We sit somewhere between the traditional production company and the age-old advertising agency, meaning we are a small team of creatives who are able to work with brands and other agencies alike. It doesn’t matter where they are in the spectrum of their campaign; we help bring their projects to life from concept to camera to final delivery. We like getting our hands dirty. We have a physical studio space with various production capabilities and in-house equipment that affords us some unique opportunities from both an efficiency and a creative standpoint.

WHAT’S YOUR JOB TITLE?
Along with being the founder, I am director and lead cinematographer.

WHAT DOES THAT ENTAIL?
That entails pretty much everything and then some. Where It’s Greater is my baby, so everything from physically lighting and capturing the photos on shoots to making sure we’re headed in the right direction as a company to securing new clients and jobs on a consistent basis. I take out the trash sometimes, too.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I think what may surprise people the most is that we work mostly client-direct. A lot of agencies or cinematographers have agents or reps who go out and get them work, but I’ve been fortunate enough to personally establish long-lasting, fruitful relationships with clients like Nike, Beats By Dre and MeUndies.

WHAT’S YOUR FAVORITE PART OF THE JOB?
By far, my favorite part is creating beautiful advertising work for great brands. It’s really special when you get to connect with clients who not only share the same values as you, but also align and speak the same language in terms of taste and preferences. Those projects always come out memorable.

WHAT’S YOUR LEAST FAVORITE?
All the other mundane tasks I handle on a day-to-day basis solely so that I can create some truly great work every now and then. But it’s a part of the process; you can’t have one without the other.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Anytime a client calls me with an exciting new opportunity (smiles).

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I could see myself doing a few different things, but they are all in the creative/production field. So I would most likely be doing what I’m doing, but just not for myself.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I always remember having a creative eye from a young age. I get that naturally from my dad, who was a camera operator, but it wasn’t until my cousin put a camera in my hand around 18 or 19 that I really fell in love with photography. But even then I didn’t exactly know what to do with it. I just followed the flow of life. I took advantage of the opportunities in front of me and worked my ass off to maximize them and, in turn, set myself up for the next opportunity.

After 10 years, I have a 4,000-square-foot studio space in Los Angeles with a bunch of toys and equipment that I love to use on projects with some of the top brands in the world. I’m very grateful and fortunate in that way. I’m excited to look up again in the next 10 years.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We most recently worked with Beats By Dre on their global ‘Made Defiant’ campaign. We were commissioned to direct and produce a series of product films and still-life imagery showcasing their line of headphones and earbuds in new colors that resemble the original headphone, paying homage to and celebrating the brand’s 10-year anniversary. We took advantage of this opportunity to use our six-axis robotic arm, which we own and operate in-house. The arm gave us the ability to capture a series of beauty shots in motion that wouldn’t be possible with any other tech on the market. I think that is what made this job special.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
I really loved what we did last summer for Nike Basketball and Dick’s Sporting Goods. We directed and produced a 30-second live-action spot centered around one of the most popular basketball shoes of the summer, the Flyknit Hyperdunk. Again, we were able to produce this completely in-house, building out a stylized basketball court in our studio space and harnessing our six-axis robot yet again to make a simple yet compelling advert for the sportswear giant.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Chemex — I’m by no means a coffee snob, but I definitely have to have a cup to start my day. There is something therapeutic about it.

Color meter — I can live without my light meter. I rarely, if ever, shoot film for commercial jobs, at least at this phase in my career, but I love my Sekonic C-700R color meter. It allows me to balance all my images and films to taste.

Hyperice foam roller — In the last year I’ve been a lot more active and more into health and fitness. It’s really changed my life in a lot of ways for the better. This vibrating foam roller is a major key to keeping my muscles loose and stretched so I can recover a lot faster.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Of course. I got my start growing up in Atlanta directing music videos for some pretty noteworthy artists, so there is frequently some form of southern hip-hop playing throughout the studio. From the iconic duo of Outkast to the newer generation of artists like Future and 2 Chainz, who I’ve had the pleasure of working with, I always have something playing in the background.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I do some of the typical things on a regular basis: exercise, massage therapy, vacation time. Nothing special really as of yet, but if I crack the code and find a new technique I’ll be sure to share!

Atomos Ninja V records 4K 10-bit from new Nikon mirrorless cameras  

The new Nikon Z6 and Z7 mirrorless cameras output a full-frame 10-bit 4K N-Log signal, which the new Atomos Ninja V 4K HDR monitor/recorder can record and display in HDR.

The Nikon Z6 and Z7 have sensors that output 4K images over HDMI, ready for conversion to HDR by Atomos. The Atomos Ninja V records the output to production-ready 10-bit Apple ProRes or Avid DNx formats.

Atomos supports Nikon Log, Apple ProRes recording and HDR monitoring from the Z series cameras. The tiny 5-inch Ninja V makes a nice go-to monitor/recorder for these full-frame mirrorless cameras, and the setup is well suited to corporate, news, documentary and nature work, or b-roll for Hollywood productions.

The Z6 and Z7 offer the Nikon N-Log gamma, a brand-new log gamma designed by Nikon to get the most out of the cameras’ sensors and wide dynamic range. Atomos worked to resolve N-Log to HDR on its devices, and its engineers have developed specific presets for it. Setup is automatic — just plug the Ninja V into the camera. The Ninja V can show 10+ stops of dynamic range on-screen to allow users to make accurate exposure and color decisions. The recorder can receive timecode and be triggered directly from the cameras.

The Ninja V costs $695, excluding SSD and batteries.

“It’s fantastic to push technology barriers with our friends at Nikon,” says Atomos CEO Jeromy Young. “Combining the new Nikon and our Ninja V HDR monitor/recorder gives filmmakers exactly what they have been asking for — a compact full-frame 4K 10-bit recording system at [this] price point.”

Review: OConnor camera assistant bag

By Brady Betzel

After years and years of gear acquisition, I often forget to secure proper bags and protection for my equipment. I’ve used everything from Pelican cases to the cheapest camera bags, and a truly high-quality bag will extend the life of your equipment.

In this review I am going to go over a super-heavy-duty assistant camera bag by OConnor, which is part of the Vitec Group. While the Vitec Group provides many different products — from LED lighting to robotic camera systems — OConnor is typically known for its professional fluid heads and tripods. This camera bag is made to fit not only their products but also other gear, such as pan bars and ARRI plates. The OConnor AC bag is a no-nonsense camera and accessory bag with Velcro-secured, repositionable inserts that will accommodate most cameras and accessories you have.

As soon as I opened the box and touched the AC bag, I could tell it was high quality. The bag exterior is waterproof and easily wipeable. More importantly, there is an internal water- and dust-proof liner that lets the lid hinge back, keeping equipment close at hand, while the liner stays fully zipped. This internal waterproofing is rated to a 1.2m/4ft column of water. Once I got past the quality of materials, my second inspection focused on the zippers. If I get a camera bag with bad zippers or snaps, it is usually given away or tossed, but the AC bag has strong, easy-gliding zippers.

On the lid and inside of the front pockets are extremely tough, see-through mesh pockets for everything from batteries to memory cards. On the front is a business card/label holder. Around the outside are multiple pockets with fixing points for carabiner hooks. In addition, there are D-rings for the included leather strap if you want to carry the bag over your shoulder instead of using the handles. The bag comes with five dividers to be Velcroed on the inside, including two right-angle dividers. The dividers are made to securely tie down all OConnor heads and accessories. Finally, the AC bag comes with a separate pouch for quick access on set.

Summing Up
In the end, the OConnor AC bag is a well-made and roomy bag that will protect your camera gear and accessories from dust as well as water for $375. The inside measures 18×12×10.5 inches, the outside measures 22×14.5×10.5 inches, and the bag has been designed to fit inside a Pelicase 1620. You can check out the OConnor AC bag on the company’s website and find a dealer in your area.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Franz Kraus to advisory role at ARRI, Michael Neuhaeuser takes tech lead

The ARRI Group has named Dr. Michael Neuhaeuser as the new executive board member responsible for technology. He succeeds Professor Franz Kraus, who after more than 30 years at ARRI, joins the Supervisory Board and will continue to be closely associated with the company. Neuhaeuser starts September 1.

Kraus, who has led technology development at ARRI for the last few decades, played an essential role in the development of the Alexa digital camera system and in building the company’s early competence in multi-channel LED technology for ARRI lighting. During Kraus’ tenure, while he was responsible for research and development, the company was presented with nine Scientific and Technical Awards by the Academy of Motion Picture Arts and Sciences for its outstanding technical achievements.

In 2011, along with two colleagues, Kraus was honored with an Academy Award of Merit, an Oscar statuette, for the design and development of the digital film recorder, the ARRILASER.

Neuhaeuser, who is now responsible for technology at the ARRI Group, previously served as VP of automotive microcontroller development at Infineon Technologies in Munich. He studied electrical engineering at the Ruhr-University Bochum, Germany, and subsequently completed his doctorate in semiconductor devices. He brings with him 30 years of experience in the electronics industry.

Neuhaeuser started his industrial career at Siemens Semiconductor in Villach, Austria, and also took over development leadership at Micram Microelectronic in Bochum. He joined Infineon Technologies in 1998, where he held various management functions in Germany and abroad. From 2005 he was responsible for the digital cordless business, where he and his team developed the world’s first fully integrated DECT chip. In 2009, he was appointed VP/GM at Infineon Technologies Romania in Bucharest where, as country manager, he built up various local activities with more than 300 engineers. In 2012, he was asked to head the automotive microcontroller development division, for which he and his team developed the highly successful Aurix product family, now found in one of every two cars worldwide.

Main Image: L-R: Franz Kraus and Michael Neuhaeuser.

Roundtable: Director Autumn McAlpin and her Miss Arizona post team

By Randi Altman

The independent feature film Miss Arizona is a sort of fish-out-of-water tale that focuses on Rose Raynes, a former beauty queen, now a bored wife and mother, who accepts an invitation to teach a life skills class at a women’s shelter. As you might imagine, the four women she meets there don’t feel they have much in common with her. While Rose is “teaching,” the women are told that one of their abusers is on his way to the shelter. The women escape and set out on an all-night adventure through LA and, ultimately, to a club where they enter Rose into a drag queen beauty pageant — and, of course, along the way they form a bond that changes them all.

L-R: Camera operator Eitan Almagor, DP Jordan McKittrick and Autumn McAlpin.

Autumn McAlpin wrote and directed the film, which has been making its way through the film festival circuit. She hired a crew made up of 70 percent women to help tell this tale of female empowerment. We reached out to her, her colorist Mars Williamson and her visual effects/finishing artist John Davidson to find out more.

Why did you choose the Alexa Mini? And why did you shoot mostly handheld?
Autumn McAlpin: The Alexa Mini was the first choice of our DP Jordan McKittrick, with whom I frequently collaborate. We were lucky enough to be able to score two Alexa Mini cameras on this shoot, which really helped us achieve the coverage needed for an ensemble piece in which five-plus key actors were in almost every shot. We love the image quality and dynamic range of the Alexas, and the compact and lightweight nature of the Mini helped us achieve an aggressive shooting schedule in just 14 days.

We felt handheld would achieve the intimate yet at times erratic look we were going for following an ensemble of five women from very different backgrounds who were learning to get along while trying to survive. We wanted the audience to feel as if they were going on the journey along with the women, and thus felt handheld would be a wise approach to accomplish this goal.

How early did post — edit, color — get involved?
McAlpin: We met with our editor Carmen Morrow before the shoot, and she and her assistant editor Dustin Fleischmann were integral in delivering a completed rough cut just five weeks after we wrapped. We needed to make key festival deadlines. Each day Dustin would drive footage from set over to Carmen’s bay, where she could assemble while we were shooting so we could make sure we weren’t missing anything crucial. This was amazing, as we’d often be able to see a rough assembly of a scene we had shot in the morning by the end of day. They cut on Avid Media Composer.

My DP Jordan and I agreed on the overall look of the film and how we wanted the color to feel rich and saturated. We didn’t meet our colorist, Mars Williamson, until after we had wrapped production, but we were really excited about what we saw in her reel. Mars had moved from LA to Melbourne, so we knew we wouldn’t be able to work in close quarters, but we were confident we’d be able to accomplish the desired workflow in the time needed. Mars was extremely flexible to work with.

Can you talk more about the look of the film?
McAlpin: Due to the nature of our film, we sought to create a rich, saturated look color wise. Our film follows a former pageant queen on an all-night adventure through LA with four unlikely friends she meets at a women’s shelter. In a way, we tried to channel an Oz-like world as our ensemble embarks into the unknown. We deliberately used color to represent the various realities the women inhabit. In the film’s open, our production design (by Gabriel Gonzales) and wardrobe (by Cat Velosa) helped achieve a stark, cold world — filled with blues and whites — to represent our protagonist Rose’s loneliness.

As Rose moves into the shelter, we went with warmer tones and a more eclectic production design. A good portion of Act II takes place in a drag club, which we asked Gabe to design to be rich and vibrant, using reds and purples. Toward the end of the film as Rose finds resolution, we went with more naturalistic lighting, primarily outdoor shots and golden hues. Before production, Jordan and I pulled stills from films such as Nick & Norah’s Infinite Playlist, Black Swan and Short Term 12, which provided strong templates for the looks we were trying to achieve.

Is there a particular scene or look that stands out for you?
McAlpin: There is a scene when our lead Rose (Johanna Braddy) performs a ventriloquist act onstage with a puppet and they sing Shania Twain’s “Man, I Feel Like a Woman.”  Both Rose and the puppet wore matching cowgirl wardrobe and braids, and this scene was lit to be particularly vibrant with hot pinks and purples. I remember watching the monitors on set and feeling like we had really nailed the rich, saturated look we were going for in this offbeat pageant world we had created.

L-R: Dana Wheeler-Nicholson, Shoniqua Shandai, producer DeAnna Cooper, Johanna Braddy, Autumn McAlpin, Otmara Marrero and Robyn Lively.

Can you talk about the workflow from set to post?
McAlpin: As a low-budget indie, many of our team work from home offices, which made collaboration friendly and flexible. For the four months following production, I floated between the workspaces of our talented and efficient editor Carmen Morrow, brilliant composer Nami Melumad, dedicated sound designer Yu-Ting Su, VFX and online extraordinaire John Davidson, and we used Frame.io to work with our amazing colorist Mars Williamson. Everyone worked so hard to help achieve our vision in our timeframe. Using Frame.io and Box helped immensely with file delivery, and I remember many careful drives around LA, toting our two RAID drives between departments. Postmates food delivery service helped us power through! Everyone worked hard together to deliver the final product, and for that I’m so grateful.

Can you talk about the type of film you were trying to make, and did it turn out as you hoped?
McAlpin: I volunteered in a women’s shelter for several years teaching a life skills class, an experience that introduced me to strong, vibrant women whose stories I longed to tell. I wrote this script very quickly, in just three weeks, though really, the story seemed to write itself. It was the fall of 2016, at a time when I was agitated by the way women were being portrayed in the media. This was shortly before the #metoo movement, and during the election and women’s march. The time felt right to tell a story about women and other marginalized groups coming together to help each other find their voices and a safe community in a rapidly divisive world.

I’m not going to lie, with our budget, all facets of production and post were quite challenging, but I was so overwhelmed by the fastidious efforts of everyone on our team to create something powerful. I feel we were all aligned in vision, which kept everyone fueled to create a finished product I am very proud of. The crowning moment of the experience was after our world premiere at Geena Davis’ Bentonville Film Fest, when a few women from the audience approached and confided that they, too, had lived in shelters and felt our film spoke to the truths they had experienced. This certainly made the whole process worthwhile.

Autumn, you wrote as well as directed. Did the story change or evolve once you started shooting or did you stick to the original script?
McAlpin: As a director who is very open to improv and creative play on set, I was quite surprised by how little we deviated from the script. Conceptually, we stuck to the story as written. We did have a few actors who definitely punched up scenes by making certain lines more their own (and much more humorous, i.e. the drag queens). And there were moments when location challenges forced last-minute rewrites, but hey, I guess that’s one advantage to having the writer in the director’s chair! This story seemed to flow from the moment it first arrived in my head, telling me what it wanted to be, so we kind of just trusted that, and I think we achieved our narrative goals.

You used a 70 percent female crew. Can you talk about why that was important to you?
McAlpin: For this film, our producer DeAnna Cooper and I wanted to flip the traditional gender ratios found on sets, as ours was indeed a story rooted in female empowerment. We wanted our set to feel like a compatible, safe environment for characters seeking safety and trusted female friendships. So many of the cast and crew who joined our team expressed delight in joining a largely female team, and I think/hope we created a safe space for all to create!

Also, as women, we tend to get each other — and there were times when those on our production team (all mothers) were able to support each other’s familial needs when emergencies at home arose. We also want to give a shout-out to the numerous woman-supporting men we had on our team, who were equally wonderful to work with!

What was everyone’s favorite scene and why?
McAlpin: There’s a moment when Rose has a candid conversation with a drag queen performer named Luscious (played by Johnathan Wallace) in a green room during which each opens up about who they are and how they got there. Ours is a fish out of water story as Rose tries to achieve her goal in a world quite new to her, but in this scene, two very different people bond in a sincere and heartfelt way. The performances in this scene were just dynamite, thanks to the talents of Johanna and Johnathan. We are frequently told this scene really affects viewers and changes perspectives.

I also have a personal favorite moment toward the end of the film in which a circle of women from very different backgrounds come together to help out a character named Leslie, played by the dynamic Robyn Lively, who is searching for her kids. One of the women helping Leslie says, “I’m a mama, too,” and I love the strength felt in this group hug moment as the village comes together to defend each other.

If you all had to do it again, what would you do differently?
McAlpin: This was one fast-moving train, and I know, as is the case in every film, there are little shots or scenes we’d all love to tweak just a little if given the chance to start over from scratch. But at this point, we are focusing on the positives and what lies in store for Miss Arizona. Since our Bentonville premiere and LA premiere at Dances With Films, we have been thrilled to receive numerous distribution offers, and it’s looking like a fall worldwide release may be in store. We look forward to connecting with audiences everywhere as we share the message of this film.

Mars Williamson

Mars, can you talk about your process and how you worked with the team? 
Williamson: Autumn put us in touch, and John and I touched base a little bit before I started color. We all had a pretty good idea of where we were taking it from the offline and discussed little tweaks here and there, so it was fairly straightforward. There were a couple of things, like changing a wall color and the last scene needing more sunset than was shot. Autumn and John are super easy and great to work with. We found out pretty early that we’d be able to collaborate easily, since John has DaVinci Resolve on his end in the states as well. I moved to Melbourne permanently right before I really got into the grade.

Unbeknownst to me, Melbourne was (and still is) in the process of upgrading its Internet, which is currently painfully slow. We did a couple of reviews via Frame.io and eventually moved to me just emailing John my project. He could relink to the media on his end, and all of my color grading would come across for sessions in LA with Autumn. It was the best solution to contend with the snail’s-pace uploads of large files. From there it was just going through it reel by reel and getting notes from the stateside team. I couldn’t have worked on this with a better group of people.
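
Resolve’s built-in Python scripting API can automate the receive-and-relink step Williamson describes. A minimal sketch follows, assuming Resolve is running with scripting enabled; the project path and name are placeholders, not the production’s actual files:

# DaVinciResolveScript ships with the Resolve install; its Modules
# folder must be on PYTHONPATH for this import to work.
import DaVinciResolveScript as dvr

resolve = dvr.scriptapp("Resolve")
pm = resolve.GetProjectManager()

# Import the emailed .drp project file, then open it. Resolve relinks
# the media by searching its Media Storage locations for matching
# files, which is why the grades "come across" once the sources are
# on the local RAID.
pm.ImportProject("/Volumes/RAID/miss_arizona_grade_v3.drp")  # placeholder path
project = pm.LoadProject("miss_arizona_grade_v3")            # placeholder name
print("Loaded:", project.GetName())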

What types of projects do you work on most often?
Williamson: My bread and butter has always been TV commercials, but I’ve worked hard to make sure I work on all sorts of formats across different genres. I like to make sure I’ve got a wide range of stuff under my belt. The pool is smaller here in Australia than in LA (where I moved from), so TV commercials are still the bill payers, but I’m also dipping into the indie scene here and trying to diversify what I work on. I’m still working on a lot of indie projects and music videos from the states as well, so thank you, stateside clients! Thankfully the time difference hasn’t hindered most of them (smiles). It has led to an all-nighter here and there for me, but I’m happy to lose sleep for the right projects.

John, can you talk about your company and how you came onboard the film?
John Davidson: Magic Feather is a production company and creative agency that I started back in 2004. We provide theatrical marketing and creative services for a wide variety of productions. From the 3D atomic transitions in Big Bang Theory to the recent Jurassic World Fallen Kingdom week-long event on Discovery, we have a pretty great body of work. I came onboard Miss Arizona very much by accident. Last year, after working with Weta in New Zealand, we moved to Laguna Niguel and connected with Autumn and her husband Michael via some mutual friends. I was intrigued that they had just finished shooting this movie on their own and offered to replace a few license plates and a billboard. Somehow I turned that into coordinating the post-Avid workflow across the planet and creating 100-plus visual effects shots. It was a fantastic opportunity to use every tool in our arsenal to help a film with a nice message and a family we have come to adore.

John Davidson

Working with Jordan and Autumn on VFX and final mastering was educational for all of us, but especially for me. As I mentioned to Jordan after the screening in Hollywood, if I did my job right, you would never know. There were quite a few late nights, but I think they are both very happy with the results.

John, I understand there were some challenges in the edit? Relinking the camera source footage? Can you talk about that and how you worked around it?
Davidson: The original Avid cut was edited off the dailies at 1080p with embedded audio. The masters were 3.2K ARRI Alexa Mini log clips with no sync sound. There were timecode issues the first few days on set, and because Mars was using DaVinci Resolve to color, we knew we had to get the footage from Avid to Resolve somehow. Once we got the footage into DaVinci via AAF, I realized it was going to be a challenge relinking sources from the dailies. Resolve was quite the utility knife, and after a bit of tweaking we were able to get the silent master video clips linked up. Because 12TB drives are expensive, we thought it best to trim media to 48-frame handles and ship a smaller drive to Australia for Mars to work with. With Mars’s direction we were able to get that handled and shipped.
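
The 48-frame-handle trim Davidson mentions is plain frame arithmetic: extend each used range by the handle length, clamped to the real bounds of the source clip. A sketch, with names of my own invention rather than any conform tool’s API:

def trimmed_range(used_in: int, used_out: int,
                  src_start: int, src_end: int,
                  handles: int = 48) -> tuple[int, int]:
    """Pad a used clip range by `handles` frames per side, clamped
    to the source clip's actual first and last frames."""
    return (max(src_start, used_in - handles),
            min(src_end, used_out + handles))

# A cut uses frames 1000-1150 of a source spanning frames 0-5000:
print(trimmed_range(1000, 1150, 0, 5000))  # (952, 1198)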

While Mars was coloring in Australia, I went back into the sources and began the process of relinking the original separate audio to the video sources because I needed to be able to adjust/re-edit a few scenes that had technical issues we couldn’t fix with VFX. Resolve was fantastic here again. Any clip that couldn’t be automatically linked via timecode was connected with clap marks using the waveform. For safety, I batch-exported all of the footage out with embedded audio and then relinked the film to that. This was important for archival purposes as well as any potential fixes we might have to do before the film delivered.
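
Matching clap transients on a waveform is, at bottom, a cross-correlation problem, and the automated version is short. A toy sketch of the underlying idea (not Resolve’s internal method):

import numpy as np

def audio_offset(camera_track: np.ndarray, field_audio: np.ndarray) -> int:
    """Estimate the sample offset that best aligns two mono tracks,
    the same job as lining up clap marks by eye on the waveform."""
    corr = np.correlate(camera_track, field_audio, mode="full")
    return int(np.argmax(corr)) - (len(field_audio) - 1)

# Toy check: the same noise burst delayed by 480 samples (10 ms at 48kHz).
sig = np.random.default_rng(0).standard_normal(8_000)
delayed = np.concatenate([np.zeros(480), sig])[:8_000]
print(audio_offset(delayed, sig))  # ~480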

At this point Mars was sharing her cuts on Frame.io with Jordan and Autumn. I felt like a little green shift was being introduced over H.264, so we would occasionally meet here to review a relinked XML that Mars would send for a full-quality inspection. For VFX we used Adobe After Effects and worked in flat color. We would then upload shots to Box.com for Mars to incorporate into her edit. Two re-cut scenes were done this way as well, which was a challenge because any changes had to be shared with the audio teams, who were actively scoring and mixing.

Once Mars was done we put the final movie together here, and I spent about two weeks working on it. At this point I took the film from Resolve to FCP X. Because we were mastering at 1080p, we had the full 3.2K frame for flexibility. Using a 1080p timeline in FCP X, the first order of business was making final on-site color adjustments with Autumn.

Can you talk about the visual effects provided?
Davidson: For VFX, we focused on things like the license plates and billboards, but also took a bit of initiative and reviewed the whole movie for areas we could help. Like everyone else, I loved the look of the stage and club scenes, but wanted to add just a little flare to the backlights so the LED grids would be less visible. This was done in Final Cut Pro X using the MotionVFX plugin mFlare2. It made very quick work of using its internal Mocha engine to track the light sources and obscure them as needed when a light went behind a person’s head, for example. It would have been agony tracking so many lights in all those shots using anything else. We had struggled for a while getting replacement license plates to track using After Effects and Mocha. However, the six shots that gave us the most headaches were done as a test in FCP X in less than a day using CoreMelt’s TrackX. We also used Final Cut Pro X’s stabilization to smooth out any jagged camera shakes as well as added some shake using FCP X’s handheld effect on a few shots that needed it for consistency.

Another area we had to get creative with was with night driving shots that were just too bright even after color. By layering a few different Rampant Design overlays set to multiply, we were able to simulate lights in motion around the car at night with areas randomly increasing and decreasing in brightness. That had a big impact on smoothing out those scenes, and I think everyone was pretty happy with the result. For fun, Autumn also let me add in a few mostly digital shots, like the private jet. This was done in After Effects using Trapcode Particular for the contrails, and a combination of Maxon Cinema 4D and Element 3D for the jet.
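
A multiply blend is one line of arithmetic: normalize both images to 0-1, multiply, and rescale, so dark overlay pixels darken the plate while pure white leaves it untouched. A sketch in numpy (the Rampant overlays themselves are stock elements, not something this reproduces):

import numpy as np

def multiply_blend(base: np.ndarray, overlay: np.ndarray) -> np.ndarray:
    """Classic 'multiply' blend mode for 8-bit RGB frames."""
    out = (base.astype(np.float32) / 255.0) * (overlay.astype(np.float32) / 255.0)
    return (out * 255.0).astype(np.uint8)

night_plate = np.full((4, 4, 3), 180, dtype=np.uint8)  # too-bright night frame
light_pass  = np.full((4, 4, 3), 128, dtype=np.uint8)  # 50% gray overlay
print(multiply_blend(night_plate, light_pass)[0, 0])   # [90 90 90], darkened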

Resolve’s face refinement and eye brightening were used in many scenes to give a little extra eye light. We also used Resolve for sky replacement on the final shot of the film. Resolve’s tracker is also pretty incredible, and was used to hide little things that needed to be masked or de-emphasized.

What about finishing?
Davidson: We finalized everything in FCP X and exported a full, clean ProRes cut of the film. We then re-imported that and added grain, unsharp masks and a light vignette for a touch of cinematic texture. The credits were an evolving process, so we created an Apple Numbers document that was shared with my internal Magic Feather team, as well as Autumn and the producers. As the final document was adjusted and tweaked we would edit an Affinity Photo file that my editor AJ Paschall and I shared. We would then export a huge PNG file of the credits into FCP X and set position keyframes to animate the scroll. Any time a change was made we would just relink to the new PNG export and FCP X would automatically update the credits. Luckily, that was easy because we did that probably 50 times.

Lastly, our final delivery to the DCP company was an HEVC 10-bit 2K encode. I am a huge fan of HEVC. It’s a fantastic codec, but it does have a few caveats in that it takes forever to encode. Using Apple Compressor and a 10-core iMac Pro, it took approximately 13 hours. That said, it was worth it because the colors were accurately represented and it gave us a file that was 5.52GB versus 18GB or 20GB. That’s a hefty savings in size while also being an improvement in quality over H.264.
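
Davidson encoded in Apple Compressor; for reference, a roughly equivalent open-source route runs through ffmpeg’s libx265 encoder. A sketch, where the file names and the CRF quality target are guesses, not his settings:

import subprocess

subprocess.run([
    "ffmpeg", "-i", "miss_arizona_master.mov",  # placeholder master file
    "-c:v", "libx265",
    "-preset", "slow",
    "-crf", "18",                 # quality target; tune to taste
    "-pix_fmt", "yuv420p10le",    # 10-bit, matching the delivery spec
    "-tag:v", "hvc1",             # tag the stream so Apple players accept it
    "-c:a", "copy",
    "miss_arizona_hevc10.mp4",
], check=True)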

Photo Credit: Rich Marchewka

 

DP Patrick Stewart’s path and workflow on Netflix’s Arrested Development

With its handheld doc-style camerawork, voiceover narration and quirky humor, Arrested Development helped revolutionize the look of TV sitcoms. Created by Mitchell Hurwitz, with Ron Howard serving as one of its executive producers, the half-hour comedy series follows the once-rich Bluth family, which continues to live beyond its means in Southern California. At the center of the family is the mostly sane Michael Bluth (Jason Bateman), who does his best to keep his dysfunctional family intact.

Patrick Stewart

The series first aired for three seasons on the Fox TV network (2003-2006) but was canceled due to low ratings. Because the series was so beloved, in 2013 Netflix brought it back to life with its original cast in place. In May 2018, the fifth season began streaming, shot by cinematographer Patrick Stewart (Curb Your Enthusiasm, The League, Flight of the Conchords), who called on Panasonic VariCam LT cinema cameras.

Stewart’s path to becoming a cinematographer wasn’t traditional. Growing up in Los Angeles and graduating with a degree in finance from the University of Santa Clara, he got his start in the industry when a friend called him up and asked if he’d work on a commercial as a dolly grip. “I did it well enough that they called me for more and more jobs,” explains Stewart. “I started as a dolly grip, but then I did sound, worked as a tape op and then started in the camera department. I also worked with the best gaffers in San Francisco, who showed me how to look at the light, understand it and either augment it or recreate it. It was the best practical film school I could have ever attended.”

Not wanting to stay “in a small pond with big fish,” Stewart decided to move back to LA and started working for MTV, which brought him into the low-budget handheld world. It also introduced him to “interview lighting,” where he lit celebrities like Barbra Streisand, Mick Jagger and Paul McCartney. “At that point I got to light every single amazing musician, actor, famous person you could imagine,” he says. “This practice afforded me the opportunity to understand how to light people who were getting older, and how to make them look their best on camera.”

In 1999, Stewart received an offer to shoot Mike Figgis’ film Time Code (2000), which was one of the landmark films of the DV/film revolution. “It was groundbreaking not only in the digital realm but also in the fact that Time Code was shot with four cameras from beginning to end, 93 minutes, without stopping, shown in a quad split with no edits — all handheld,” explains Stewart. “It was an amazingly difficult project, because having no edits meant you couldn’t make mistakes. I was very fortunate to work with a brilliant renegade director like Mike Figgis.”

Triple Coverage
When hired for Arrested Development, the first request Stewart approached Hurwitz with was to add a third camera. Shooting with three cameras with multiple characters can be a logistical challenge, but Stewart felt he could get through scenes more quickly and effectively, in order to get the actors out on time. “I call the C camera the center camera and the A and the B are screen left and screen right,” Stewart explains. “C covers the center POV, while A and B cover the scene from their left and right side POV, which usually starts with overs. As we continue to shoot the scene, each camera will get tighter and tighter. If there are three or more actors in the scene, C will get tighter on whoever is in the center. After that, C camera might cover the scene following the dialogue with ‘swinging’ singles. If no swinging singles are appropriate, then the center camera can move over and help out coverage on the right or left side.

“I’m on a walkie — either adjusting the shots during a scene for either of their framing or exposure, or I’m planning ahead,” he continues. “You give me three cameras and I’ll shoot a show really well for you and get it done efficiently, and with cinematic style.”

Because it is primarily a handheld show, Stewart needed lenses that would not weigh down his operators during long takes. He employed Fujinon Cabrio zooms (15-35mm, 19-90mm, and 85-300mm), which are all f/2.8 lenses.

For camera settings, Stewart captures 10-bit 4:2:2 UHD (3840×2160) AVC-Intra files at 23.98fps. He also captures in V-Log but uses the V-709 LUT. “To me, you can create all the LUTs you want,” he says, “but more than likely you get to color correction and end up changing things. I think the basic 709 LUT is really nice and gentle on all the colors.”
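
For anyone wanting to preview V-Log dailies the same way, a viewing LUT can be baked in with ffmpeg’s lut3d filter. A sketch; the clip name is a placeholder, and while Panasonic does publish an official V-Log-to-V-709 conversion LUT for the VariCam line, the .cube file name below is assumed:

import subprocess

subprocess.run([
    "ffmpeg", "-i", "A001C003_vlog.mov",        # placeholder camera clip
    "-vf", "lut3d=VLog_to_V709.cube",           # assumed LUT file name
    "-c:v", "prores_ks", "-profile:v", "0",     # ProRes 422 Proxy for dailies
    "A001C003_dailies.mov",
], check=True)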

Light from Above
Much of Arrested Development is shot on a stage, so lighting can get complicated, especially when there are multiple characters in a scene. To make things simpler, Stewart provided a gentle soft light from softboxes covering the top of each stage set, using 4-by-8 wooden frames with tungsten-balanced Quasar tubes dimmed down to 50%. The motivation, he explains, is that the unseen source could plausibly be a skylight. If characters are close to windows, he uses HMIs punching through to create “natural sunlight” in the scene. “The nice thing about the VariCam is that you don’t need as many photons, and I did pretty extensive tests during pre-production on how to do it.”

On stage, Stewart sets his ISO to the 5000 base and dials down to 2500, generally shooting at f/2.8 and a half. He even uses one level of ND on top of that. “You can imagine 27 foot-candles at one level of ND at a 2.8 and 1/2 — that’s a pretty sensitive camera, and I noticed very little noise. My biggest concern was mid-tones, so I did a lot of testing — shooting at 5000, shooting at 2500, 800, 800 pushed up to 1600 and 2500.

“Sometimes with certain cameras, you can develop this mid-tone noise that you don’t really notice until you’re in post. I felt like shooting at 5000 knocked down to 2500 was giving me the benefit of lighting the stage at these beautifully low-lit levels where we would never be hot. I could also easily put 5Ks outside the windows to have enough sunlight to make it look like it’s overexposed a bit. I felt that the 5000 base knocked down to 2500, the noise level was negligible. At native 5000 ISO, there was a little bit more mid-tone noise, even though it was still acceptable. For daytime exteriors, we usually shot at ISO 800, dialing down to 500 or below.”
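
The stop math behind that setup is straightforward, reading “one level of ND” as an ND 0.3 (an assumption on my part):

import math

def iso_stops(from_iso: float, to_iso: float) -> float:
    """Stops of exposure gained back by dialing ISO down."""
    return math.log2(from_iso / to_iso)

def nd_stops(optical_density: float) -> float:
    """Every 0.3 of neutral-density value cuts one stop."""
    return optical_density / 0.3

# Stewart's stage setup: 5000 base dialed down to 2500, plus one ND.
print(iso_stops(5000, 2500))  # 1.0 stop from the ISO dial
print(nd_stops(0.3))          # 1.0 more stop from the filter

Together that is two stops in hand, which is how the stage can sit at such low light levels without ever running hot.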

Stewart and Arrested Development director Troy Miller have known each other for many years, since working together on HBO’s Flight of the Conchords. “There was a shorthand between director and DP that really came in handy,” says Stewart. “Troy knows that I know what I’m doing, and I know on his end that he’s trying to figure out this really complicated script and have us shoot it. Hand in hand, we were really able to support Mitch.”

Zoe Iltsopoulos Borys joins Panavision Atlanta as VP/GM

Panavision has hired Zoe Iltsopoulos Borys to lead the company’s Atlanta office as vice president and general manager. Borys will oversee day-to-day operations in the region.

Borys’ 25 years of experience in the motion picture industry includes business development for Production Resources Group (PRG), and GM for Fletcher Camera and Lenses (now VER). This is her second turn at Panavision, having served in a marketing role at the company from 1998-2006. She is also an associate member of the American Society of Cinematographers.

Panavision’s Atlanta facilities, located in West Midtown and at Pinewood Studios, supply camera rental equipment across the southern US, with a full staff of prep technicians and camera service experts. The Atlanta team has provided equipment and services to productions including Avengers: Infinity War, Black Panther, Guardians of the Galaxy Vol. 2, The Immortal Life of Henrietta Lacks, Baby Driver and Pitch Perfect 3.

Kees van Oostrum weighs in on return as ASC president

The American Society of Cinematographers (ASC) has re-elected Kees van Oostrum as president. He will serve his third consecutive term at the organization.

The ASC board also re-upped its roster of officers for 2018-2019, including Bill Bennett, John Simmons and Cynthia Pusheck as vice presidents; Levie Isaacks as treasurer; David Darby as secretary; and Isidore Mankofsky as sergeant-at-arms.

Van Oostrum initiated and chairs the ASC Master Class program, which has expanded to locations worldwide under his presidency. The Master Classes take place several times a year and are taught by ASC members. The classes are designed for cinematographers with an intermediate-to-advanced skill set and incorporate practical, hands-on demonstrations of lighting and camera techniques with essential instruction in current workflow practices.

The ASC Vision Committee, founded during van Oostrum’s first term, continues to organize successful symposiums that encourage diversity and inclusion on camera crews, and also offers networking opportunities. The most recent was a standing-room-only event that explored practical and progressive ideas for changing the face of the industry. The ASC will continue to host more of these activities during the coming years.

Van Oostrum has earned two Primetime Emmy nominations for his work on the telefilms Miss Rose White and Return to Lonesome Dove. His peers chose the latter for a 1994 ASC Outstanding Achievement Award. Additional ASC Award nominations for his television credits came for The Burden of Proof, Medusa’s Child and Spartacus. He also shot the Emmy-winning documentary The Last Chance.

A native of Amsterdam, van Oostrum studied at the Dutch Film Academy with an emphasis on both cinematography and directing. He went on to earn a scholarship sponsored by the Dutch government, which enabled him to enroll in the American Film Institute (AFI). Van Oostrum broke into the industry shooting television documentaries for several years. He has subsequently compiled a wide range of some 80-plus credits, including movies for television and the cinema, such as Gettysburg, Gods and Generals and occasional documentaries. He recently wrapped the final season of TV series The Fosters.

The 2018-2019 board members who voted in this election include John Bailey, Paul Cameron, Russell Carpenter, Curtis Clark, Dean Cundey, George Spiro Dibie, Stephen Lighthill, Lowell Peterson, Roberto Schaefer, John Toll and Amelia Vincent. Alternate board members are Karl-Walter Lindenlaub, Stephen Burum, David Darby, Charlie Lieberman and Eric Steelberg.

The ASC has over 20 committees driving the organization’s initiatives, such as the award-winning Motion Imaging Technology Council (MITC), and the Educational and Outreach committee.

We reached out to Van Oostrum to find out more:

How fulfilling has being ASC President been — either personally or professionally (or both)?
My presidency has been a tremendously fulfilling experience. The ASC grew its educational programs. The Master Class expanded from domestic to international locations, and currently eight to 10 classes a year are being held based on demand (up from the four to five held in the program’s inaugural year). Our public outreach activities have brought in over 7,000 students in the last two years, giving them a chance to meet ASC members and ask questions about cinematography and filmmaking.

Our digital presence has also grown, and the ASC and American Cinematographer websites are some of the most visited sites in our industry. Interest from the vendor community has expanded as well, introducing a broader range of companies who are involved in the image pipeline to our members. Then, our efforts to support ASC’s heritage, research and museum acquisitions have taken huge steps forward. I believe the ASC has grown into a relevant organization for people to watch.

What do you hope to accomplish in the coming year?
We will complete our Educational Center, a new building behind the historic ASC clubhouse in Hollywood; produce several online master classes about cinematography; and produce two major documentaries about cinematography, while continuing to strengthen our role as a technology partner through the efforts of our Motion Imaging Technology Council (formerly the ASC Technology Committee).

What are your proudest achievements from previous years?
I’m most proud of the success of the Master Classes, as well as the support and growth in the number of activities by the Vision Committee. I’m also pleased with the Chinese-language edition of our magazine, and with having cinematography stories shared in a global way. We’ve also beefed up our overall internal communications so members feel more connected.

Testing large format camera workflows

By Mike McCarthy

In the last few months, we have seen the release of the Red Monstro, Sony Venice, Arri Alexa LF and Canon C700 FF, all of which have larger or full-frame sensors. “Full frame” comes from DSLR terminology, where it means a sensor equivalent to the entire 35mm film area — the way film ran horizontally through still cameras. All SLRs used to be full frame with 35mm film, so there was no need for the term until manufacturers started saving money on digital image sensors by making them smaller than 35mm film exposures. Super35mm motion picture cameras, on the other hand, ran the film vertically, resulting in a smaller exposure area per frame, but this was still much larger than most video imagers until the last decade, when 2/3-inch chips were still considered premium imagers. The options have grown a lot since then.
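
For a concrete sense of the size difference, crop factor is just the ratio of sensor diagonals. Here is a quick sketch using commonly cited dimensions (36x24mm for full frame; roughly 24.9x18.7mm for a 4-perf Super35 camera aperture — exact figures vary by camera, so treat these as approximations):

```python
import math

def diagonal(width_mm: float, height_mm: float) -> float:
    """Sensor diagonal from its width and height."""
    return math.hypot(width_mm, height_mm)

full_frame = diagonal(36.0, 24.0)   # 35mm still frame, run horizontally
super35 = diagonal(24.89, 18.66)    # approximate 4-perf S35 camera aperture

print(f"S35-to-full-frame crop factor: {full_frame / super35:.2f}")  # ~1.39
```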

L-R: 1st AC Ben Brady, DP Michael Svitak and Mike McCarthy on the monitor.

Most of the top-end cinema cameras released over the last few years have advertised their Super35mm sensors as a huge selling point, as that allows use of any existing S35 lens on the camera. These S35 cameras include the Epic, Helium and Gemini from Red; Sony’s F5 and F55; Panasonic’s Varicam LT; Arri’s Alexa; and Canon’s C100-500. On the top end, 65mm cameras like the Alexa 65 have sensors twice as wide as Super35 cameras, but very limited lens options to cover a sensor that large. Full frame falls somewhere in between and allows, among other things, use of any 35mm still film lenses. In the world of film, this size was referred to as VistaVision, but the first widely used full-frame digital video camera was Canon’s 5D MkII, the first serious HDSLR. That format has surged in popularity recently, and thanks to this I had the opportunity to be involved in a test shoot with a number of these new cameras.

Keslow Camera was generous enough to give DP Michael Svitak and me access to pretty much all of their full-frame cameras and lenses for the day so we could test the cameras, workflows and lens options for this new format. We also had the assistance of first AC Ben Brady to help us put all that gear to use, and Mike’s daughter Florendia as our model.

First off was the Red Monstro, which, while technically not the full 24mm height of true full frame, uses the same size lenses due to the width of its 17×9 sensor. It offers the highest resolution of the group at 8K. It records compressed RAW to R3D files, with options for ProRes and DNxHR up to 4K, all saved to Red mags. Like the rest of the group, smaller portions of the sensor can be used at lower resolutions to pair with smaller lenses. The Red Helium sensor has the same resolution but in a much smaller Super35 size, allowing a wider selection of lenses to be used. But larger pixels allow more light sensitivity, with individual pixels up to 5 microns wide on the Monstro and Dragon, compared to Helium’s 3.65-micron pixels.
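
Those pixel-pitch figures fall straight out of the sensor dimensions: pitch is just active width divided by horizontal photosite count. A quick check, assuming Red’s published specs (40.96mm across 8192 photosites for Monstro, 29.90mm across 8192 for Helium):

```python
def pitch_microns(active_width_mm: float, photosites: int) -> float:
    """Pixel pitch: active sensor width divided by horizontal photosite count."""
    return active_width_mm / photosites * 1000

print(pitch_microns(40.96, 8192))  # Monstro: 5.0 microns
print(pitch_microns(29.90, 8192))  # Helium: ~3.65 microns
```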

Next up was Sony’s new Venice camera with a 6K full-frame sensor, which allows 4K S35 recording as well. It records XAVC to SxS cards or compressed RAW in the X-OCN format with the optional AXS-R7 external recorder, which we used. It is worth noting that both full-frame recording and integrated anamorphic support require additional special licenses from Sony, but Keslow provided us with a camera that had all of that functionality enabled. With a 36x24mm 6K sensor, the pixels are 5.9 microns, and footage shot at 4K in the S35 mode should be similar to shooting with the F55.

We unexpectedly had the opportunity to shoot on Arri’s new Alexa LF (Large Format) camera. At 4.5K, it has the lowest resolution of the group, but that also means the largest sensor pixels, at 8.25 microns, which can increase sensitivity. It records ArriRaw or ProRes to Codex XR capture drives with its integrated recorder.

Another new option is the Canon C700 FF, with a 5.9K full-frame sensor recording RAW, ProRes or XAVC to CFast cards or Codex drives. That gives it 6-micron pixels, similar to the Sony Venice. But we did not have the opportunity to test that camera this time around, maybe in the future.

One more factor in all of this is the rising popularity of anamorphic lenses. All of these cameras support modes that use the part of the sensor covered by anamorphic lenses and can desqueeze the image for live monitoring and preview. In the digital world, anamorphic essentially cuts your overall resolution in half, until the unlikely event that we start seeing anamorphic projectors or cameras with rectangular sensor pixels. But the prevailing attitude appears to be, “We have lots of extra resolution available so it doesn’t really matter if we lose some to anamorphic conversion.”
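
To see why the squeeze halves effective resolution, consider a toy example (made-up numbers, not from any of these cameras): a 2x anamorphic image recorded on a 4:3 sensor region desqueezes to twice its recorded width, so each column of display pixels carries only half a captured sample.

```python
def desqueeze(recorded_w: int, recorded_h: int, squeeze: float):
    """Display size after desqueeze; horizontal detail stays at recorded_w."""
    return round(recorded_w * squeeze), recorded_h

rec_w, rec_h = 4096, 3456                      # hypothetical 4:3 recording
disp_w, disp_h = desqueeze(rec_w, rec_h, 2.0)
print(disp_w, disp_h)                          # 8192 x 3456 on screen
print(rec_w / disp_w)                          # 0.5 captured samples per display column
```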

Post Production
So what does this mean for post? In theory, sensor size has no direct effect on the recorded files (besides their content), but resolution does. We also have a number of new formats to deal with, and anamorphic images add another complication during finishing.

Ever since I got my hands on one of Dell’s new UP3218K monitors with an 8K screen, I have been collecting 8K assets to display on it. When I first started discussing this shoot with DP Michael Svitak, I was primarily interested in getting more 8K footage for testing new 8K monitors, editing systems and software as they are released. I was anticipating getting Red footage, which I knew I could play back and process using my existing software and hardware.

The other cameras and lens options were added as the plan expanded, and by the time we got to Keslow Camera, they had filled a room with lenses and gear for us to test with. I also had a Dell 8K display connected to my ingest system, and the new 4K DreamColor monitor as well. This allowed me to view the recorded footage in the highest resolution possible.

Most editing programs, including Premiere Pro and Resolve, can handle anamorphic footage without issue, but new camera formats can be a bigger challenge. Any RAW file requires info about the sensor pattern in order to debayer it properly, and new compression formats are even more work. Sony’s new compressed RAW format for Venice, called X-OCN, is supported in the newest 12.1 release of Premiere Pro, so I didn’t expect that to be a problem. Its other recording option is XAVC, which should work as well. The Alexa on the other hand uses ArriRaw files, which have been supported in Premiere for years, but each new camera shoots a slightly different “flavor” of the file based on the unique properties of that sensor. Shooting ProRes instead would virtually guarantee compatibility but at the expense of the RAW properties. (Maybe someday ProResRAW will offer the best of both worlds.) The Alexa also has the challenge of recording to Codex drives that can only be offloaded in OS X or Linux.
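
To make the “flavor” problem concrete: a RAW decoder has to know exactly how a sensor’s color filter array is laid out before it can reconstruct RGB, which is why each new camera needs explicit support. Here is a toy half-resolution demosaic of a hypothetical RGGB Bayer mosaic (real debayering is far more sophisticated than this sketch):

```python
import numpy as np

def demosaic_rggb_halfres(mosaic: np.ndarray) -> np.ndarray:
    """Half-resolution demosaic of an RGGB mosaic (even-sized H x W array)."""
    r = mosaic[0::2, 0::2]                               # red photosites
    g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2.0  # average the two greens
    b = mosaic[1::2, 1::2]                               # blue photosites
    return np.dstack([r, g, b])

# A GRBG or BGGR sensor would shift every index above, which is why a
# decoder cannot open a new RAW variant it has never been taught.
print(demosaic_rggb_halfres(np.random.rand(2160, 4096)).shape)  # (1080, 2048, 3)
```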

Once I had all of the files on my system, after using a MacBook Pro to offload the media cards, I tried to bring them into Premiere. The Red files came in just fine but didn’t play back smoothly above 1/4 resolution. They played smoothly in RedCineX with my Red Rocket-X enabled, and they export respectably fast in AME (a five-minute 8K anamorphic sequence to UHD H.265 in 10 minutes), but for some reason Premiere Pro isn’t able to get smooth playback when using the Red Rocket-X. Next I tried the X-OCN files from the Venice camera, which imported without issue. They played smoothly on my machine but looked like they were locked to half or quarter res, regardless of what settings I used, even in the exports. I am currently working with Adobe to get to the bottom of that, because they are able to play back my files at full quality while all my systems have the same issue. Lastly, I tried to import the Arri files from the Alexa LF, but Adobe doesn’t support that new variation of ArriRaw yet. I would anticipate that will happen soon, since it shouldn’t be too difficult to add the new version to the existing support.

I ended up converting the files I needed to DNxHR in DaVinci Resolve so I could edit them in Premiere, and I put together a short video showing off the various lenses we tested. Eventually, I need to learn how to use Resolve more efficiently, but the type of work I usually do lends itself to the way Premiere is designed — inter-cutting and nesting sequences with many different resolutions and aspect ratios.
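
For anyone without Resolve handy, the same kind of mezzanine conversion can be scripted. A minimal sketch using ffmpeg (my own illustration, not the workflow above; ffmpeg cannot decode R3D or ArriRaw natively, so this assumes a source format it understands, and the filenames are hypothetical):

```python
import subprocess

def to_dnxhr(src: str, dst: str) -> None:
    """Transcode a source clip to a DNxHR HQ .mov for smooth editing."""
    subprocess.run([
        "ffmpeg", "-i", src,
        "-c:v", "dnxhd", "-profile:v", "dnxhr_hq",  # DNxHR HQ via the dnxhd encoder
        "-pix_fmt", "yuv422p",
        "-c:a", "pcm_s16le",                        # uncompressed audio
        dst,
    ], check=True)

to_dnxhr("venice_clip.mov", "venice_clip_dnxhr.mov")  # hypothetical filenames
```

Here is a short clip demonstrating some of the lenses we tested: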

This is a web video, so even at UHD it is not meant to be an analysis of RAW image quality, but rather a demonstration of the field of view and overall feel of various lenses and camera settings. The combination of larger sensors and anamorphic lenses leads to an extremely wide field of view. The table was only about 10 feet from the camera, yet we can usually see all the way around it. We also discovered that when recording anamorphic on the Alexa LF, we were capturing a wider image than was displayed on the monitor output. You can see in the frame grab below that the live display visible on the right side of the image isn’t showing the full content that got recorded, which is why we didn’t notice that we were recording with the wrong settings, with heavy vignetting from the lens.

We only discovered this after the fact, from this shot, so we didn’t get the opportunity to track down the issue to see if it was the result of a setting in the camera or in the monitor. This is why we test things before a shoot, but we didn’t “test” before our camera test, so these things happen.

We learned a lot from the process, and hopefully some of those lessons are conveyed here. A big thanks to Brad Wilson and the rest of the guys at Keslow Camera for their gear and support of this adventure and, hopefully, it will help people better prepare to shoot and post with this new generation of cameras.

Main Image: DP Michael Svitak


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Panavision Millennium DXL2’s ecosystem grows with color science, lenses, more

Panavision’s Millennium DXL2 8K camera was on display at Cine Gear last week, featuring a new post-centric firmware upgrade, along with four new large-format lens sets, a DXL-inspired accessories kit for Red DSMC2 cameras and a preview of custom advancements in filter technology.

DXL2 incorporates technology advancements based on input from cinematographers, camera assistants and post production groups. The camera offers 16 stops of dynamic range with improved shadow detail, a native ISO setting of 1600 and 12-bit ProRes XQ up to 120fps. New to the DXL2 is version 1.0 of a directly editable (D2E) workflow. D2E gives DITs wireless LUT and CDL look control and records all color metadata into camera-generated proxy files for instant and render-free dailies.
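
Render-free dailies of this kind are possible because a CDL is a handful of metadata numbers rather than baked-in pixels. As a rough illustration, here is the standard ASC CDL math (not Panavision’s implementation, and the grade values below are made up): each channel gets a slope, offset and power, followed by a saturation adjustment.

```python
def apply_cdl(rgb, slope, offset, power, saturation=1.0):
    """Apply ASC CDL values to a normalized [0, 1] RGB triple."""
    out = []
    for v, s, o, p in zip(rgb, slope, offset, power):
        v = max(v * s + o, 0.0)   # slope, then offset, clamped at black
        out.append(v ** p)        # then power
    # Saturation pivots around Rec.709 luma, per the ASC CDL spec.
    luma = 0.2126 * out[0] + 0.7152 * out[1] + 0.0722 * out[2]
    return [luma + saturation * (c - luma) for c in out]

# Warm up 18% gray slightly and desaturate a touch:
print(apply_cdl((0.18, 0.18, 0.18),
                slope=(1.1, 1.0, 0.95),
                offset=(0.01, 0.0, -0.01),
                power=(0.95, 1.0, 1.05),
                saturation=0.9))
```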

DXL2, which is available to rent worldwide, also incorporates an updated color profile: Light Iron Color 2 (LiColor2). This latest color science provides cinematographers and DITs with a film-inspired tonal look that makes the DXL2 feel more cinematic and less digital.

Panavision also showcased their large-format spherical and anamorphic lenses. Four new large-format lens sets were on display:
• Primo X is a cinema lens designed for use on drones and gimbals. It’s fully sealed, weatherproof and counterbalanced to be aerodynamic and it’s able to easily maintain a proper center of gravity. Primo X lenses come in two primes – 14mm (T3.1) and 24mm (T1.6) – and one 24-70mm zoom (T2.8) and will be available in 2019.
• H Series is a traditionally designed spherical lens set with a rounded, soft roll-off, giving what the company calls a “pleasing tonal quality to the skin.” Created with vintage glass and coating, these lenses offer slightly elevated blacks for softer contrast. High speeds separate subject and background with a smooth edge transition, allowing the subject to appear naturally placed within the depth of the image. These lenses are available now.
• Ultra Vista is a series of large-format anamorphic optics. Using a custom 1.6x squeeze, Ultra Vista covers the full height of the 8K sensor in the DXL and presents an ultra-widescreen 2.76:1 aspect ratio along with a classic elliptical bokeh and Panavision horizontal flare. Ultra Vista lenses will be available in 2019.
• PanaSpeed is a large-format update of the classic Primo look. At T1.4, PanaSpeed is a fast large-format lens. It will be available in Q3 of 2018.

Panavision also showed an adjustable liquid crystal neutral density (LCND) filter. LCND adjusts up to six individual stops with a single click or ramp — a departure from traditional approaches to front-of-lens filters, which require carrying a set and manually swapping individual NDs as the light changes. LCND starts at 0.3 and steps through 0.6, 0.9, 1.2 and 1.5 to 1.8. It will be available in 2019.
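
Those density numbers map directly onto the six stops: a stop is a factor of two, and ND density is log10, so each 0.3 of density is one stop of attenuation. A one-line check:

```python
import math

for density in (0.3, 0.6, 0.9, 1.2, 1.5, 1.8):
    print(f"ND {density}: {density / math.log10(2):.1f} stops")  # 1.0 through 6.0
```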

Following up on the DXL1 and DXL2, Panavision launched the latest in its cinema line-up with the newly created DXL-M accessory kit. Designed to work with Red DSMC2 cameras, DXL-M marries the quality and performance of DXL with the smaller size and weight of the DSMC2. DXL-M brings popular features of DXL to Red Monstro, Gemini and Helium sensors, such as the DXL menu system (via an app for the iPhone), LiColor2, motorized lenses, wireless timecode (ACN) and the Primo HDR viewfinder. It will be available in Q4 of 2018.

Sony updates Venice to V2 firmware, will add HFR support

At CineGear, Sony introduced new updates and developments for its Venice CineAlta camera system including Version 2 firmware, which will now be available in early July.

Sony also showed the new Venice Extension System, which features expanded flexibility and enhanced ergonomics. Also announced was Sony’s plan for high frame rate support for the Venice system.

Version 2 adds new features and capabilities specifically requested by production pros to deliver more recording capability, customizable looks, exposure tools and greater lens freedom. Highlights include:

• High base ISO of 2500: With 15+ stops of exposure latitude, Venice will support a high base ISO of 2500 in addition to the existing ISO of 500, taking full advantage of Sony’s sensor for superb low-light performance with dynamic range from +6 stops to -9 stops as measured at 18% middle gray. This increases exposure indexes at higher ISOs for night exteriors, dark interiors, working with slower lenses or where content needs to be graded in high dynamic range while maintaining maximum shadow detail.
• Select FPS (off-speed) shooting in individual frame increments, from 1 to 60.
• Several new imager modes, including 25p in 6K full-frame, 25p in 4K 4:3 anamorphic, 6K 17:9, 1.85:1 and 4K 6:5 anamorphic.
• User-uploadable 3D LUTs, which allow users to customize their own looks and save them directly into the camera.
• Wired LAN remote control, which allows users to remotely control and change key functions, including camera settings, fps, shutter, EI, iris (Sony E-mount lens), record start/stop and built-in optical ND filters.
• E-mount support, which allows users to remove the PL mount and use a wide assortment of native E-mount lenses.

The Venice Extension System is a full-frame tethered extension system that allows the camera body to detach from the image sensor block, with no degradation in image quality, at distances of up to 20 feet. The system is the result of Sony’s long-standing collaboration with James Cameron’s Lightstorm Entertainment.

“This new tethering system is a perfect example of listening to our customers, gathering strong and consistent feedback, and then building that input into our product development,” said Peter Crithary, marketing manager for motion picture cameras, Sony. “The Avatar sequels will be among the first feature films to use the new Venice Extension System, but it also has tremendous potential for wider use with handheld stabilizers, drones, gimbals and remote mounting in confined places.”

Also at CineGear, Sony shared the details of a planned optional upgrade to support high frame rate — targeting speeds up to 60fps in 6K, up to 90fps in 4K and up to 120fps in 2K. It will be released in North America in the spring of 2019.

Red simplifies camera lineup with one DSMC2 brain

Red Digital Cinema modified its camera lineup to include one DSMC2 camera Brain with three sensor options — Monstro 8K VV, Helium 8K S35 and Gemini 5K S35. The single DSMC2 camera Brain includes high-end frame rates and data rates regardless of the sensor chosen. In addition, this streamlined approach will result in a price reduction compared to Red’s previous camera line-up.

“We have been working to become more efficient, as well as align with strategic manufacturing partners to optimize our supply chain,” says Jarred Land, president of Red Digital Cinema. “As a result, I am happy to announce a simplification of our lineup with a single DSMC2 brain with multiple sensor options, as well as an overall reduction on our pricing.”

Red’s DSMC2 camera Brain is a modular system that allows users to configure a fully operational camera setup to meet their individual needs. Red offers a range of accessories, including display and control functionality, input/output modules, mounting equipment, and methods of powering the camera. The camera Brain is capable of up to 60fps at 8K, offers 300MB/s data transfer speeds and simultaneous recording of RedCode RAW and Apple ProRes or Avid DNxHD/HR.

The Red DSMC2 camera Brain and sensor options:
– DSMC2 with Monstro 8K VV offers cinematic full-frame lens coverage, produces ultra-detailed 35.4-megapixel stills and delivers 17+ stops of dynamic range. It is priced at $54,500.
– DSMC2 with Helium 8K S35 offers 16.5+ stops of dynamic range in a Super 35 frame and is available now for $24,500.
– DSMC2 with Gemini 5K S35 uses dual sensitivity modes to give creators greater flexibility: standard mode for well-lit conditions or low-light mode for darker environments. It is priced at $19,500.

Red will begin to phase out new sales of its Epic-W and Weapon camera Brains starting immediately. In addition to the changes to the camera line-up, Red will also begin offering new upgrade paths for customers looking to move from older Red camera systems or from one sensor to another. The full range of upgrade options can be found here.


The Duffer Brothers: Showrunners on Netflix’s Stranger Things

By Iain Blair

Kids in jeopardy! The Demogorgon! The Hawkins Lab! The Upside Down! Thrills and chills! Since they first pitched their idea for Stranger Things, a love letter to 1980s genre films set in 1983 Indiana, twin brothers Matt and Ross Duffer have quickly established themselves as masters of suspense in the science-fiction and horror genres.

The series was picked up by Netflix, premiered in the summer of 2016, and went on to become a global phenomenon, with the brothers at the helm as writers, directors and executive producers.

The Duffer Brothers

The atmospheric drama, about a group of nerdy misfits and strange events in an outwardly average small town, nailed its early ’80s vibe and overt homages to that decade’s master pop storytellers: Steven Spielberg and Stephen King. It quickly made stars out of its young ensemble cast — Millie Bobby Brown, Natalia Dyer, Charlie Heaton, Joe Keery, Gaten Matarazzo, Caleb McLaughlin, Noah Schnapp, Sadie Sink and Finn Wolfhard.

It also quickly attracted a huge, dedicated fan base, critical plaudits and has won a ton of awards, including Emmys, a SAG Award for Best Ensemble in a Drama Series and two Critics Choice Awards for Best Drama Series and Best Supporting Actor in a Drama Series. The show has also been nominated for a number of Golden Globes.

I recently talked with the Duffers, who are already hard at work on the highly anticipated third season (which will premiere on Netflix in 2019), about making the ambitious hit series, their love of post and editing, and VFX.

How’s the new season going?
Matt Duffer: We’re two weeks into shooting, and it’s going great. We’re very excited about it as there are some new tones and it’s good to be back on the ground with everyone. We know all the actors better and better, the kids are getting older and are becoming these amazing performers — and they were great before. So we’re having a lot of fun.

Are you shooting in Atlanta again?
Ross Duffer: We are, and we love it there. It’s really our home base now, and we love all these pockets of neighborhoods that have not changed at all since the ‘80s, and there is an incredible variety of locations. We’re also spreading out a lot more this season and not spending so much time on stages. We have more locations to play with.

Will all the episodes be released together next year, like last time? That would make binge-watchers very happy.
Matt: Yes, but we like to think of it more as like a big movie release. To release one episode per week feels so antiquated now.

The show has a very cinematic look and feel, so how do you balance that with the demands of TV?
Ross: It’s interesting, because we started out wanting to make movies and we love genre, but with a horror film they want big scares every few minutes. That leaves less room for character development. But with TV, it’s always more about character, as you just can’t sustain hours and hours of a show if you don’t care about the people. So ‘Stranger Things’ was a world where we could tell a genre story, complete with the monster, but also explore character in far more depth than we could in a movie.

Matt: Movies and TV are almost opposites in that way. In movies, it’s all plot and no character, and in TV it’s about character and you have to fight for plot. We wanted this to have pace and feel more like a movie, but still have all the character arcs. So it’s a constant balancing act, and we always try and favor character.

Where do you post the show?
Matt: All in Hollywood, and the editors start working while we’re shooting. After we shoot in Atlanta, we come back to our offices and do all the post and VFX work right there. We do all the sound mix and all the color timing at Technicolor down the road. We love post. You never have enough time on the set, and there’s all this pressure if you want to redo a shot or scene, but in post if a scene isn’t working we can take time to figure it out.

Tell us about the editing. I assume you’re very involved?
Ross: Very. We have two editors this season. We brought back one of our original editors, Dean Zimmerman, from season one. We are also using Nat Fuller, who was on season two. He was Dean’s assistant originally and then moved up, so they’ve been with us since the start. Editing’s our favorite part of the whole process, and we’re right there with them because we love editing. We’re very hands on and don’t just give notes and walk away. We’re there the whole time.

Aren’t you self-taught in terms of editing?
Matt: (Laughs) I suppose. We were taught the fundamentals of Avid at film school, but you’re right. We basically taught ourselves to edit as kids, and we started off just editing in-camera, stopping and starting, and playing the music from a tape recorder. The results weren’t very good, but we got better.

When iMovie came out we learned how to put scenes together, so in college the transition to Avid wasn’t that hard. We fell in love with editing and just how much you can elevate your material in post. It’s magical what you can do with the pace, performances, music and sound design, and then you add all the visual effects and see it all come together in post. We love seeing the power of post as you work to make your story better and better.

How early on do you integrate post and VFX with the production?
Ross: On day one now. The biggest change from season one to two was that we integrated post far earlier in the second season — even in the writing stage. We had concept artists and the VFX guys with us the whole time on set, and they were all super-involved. So now it all kind of happens together.

All the VFX are a much bigger deal now. Last season we had a lot more VFX than the first year — about 1,400 shots, which is a huge amount, like a big movie. In the first season, it wasn’t a big focus; we took a very old-school approach, with mainly practical effects, and then in the middle we realized we were being a bit naïve, so we brought in Paul Graff as our VFX supervisor on season two, and he’s very experienced. He’s worked on big movies like The Wolf of Wall Street as well as Game of Thrones and Boardwalk Empire, and he’s doing this season too. He’s in Atlanta with us on the shoot.

We have two main VFX houses on the show — Atomic Fiction and Rodeo — they’re both incredible, and I think all the VFX are really cinematic now.

But isn’t it a big challenge in terms of a TV show’s schedule?
Ross: You’re right, and it’s always a big time crunch. Last year we had to meet that Halloween worldwide release date and we were cutting it so close trying to finish all the shots in time.

Matt: Everyone expects movie-quality VFX — just in a quarter of the time, or less. So it’s all accelerated.

The show has a very distinct, eerie, synth-heavy score by Kyle Dixon and Michael Stein, the Grammy-nominated duo. How important are the music and sound, which won several Emmys last year?
Ross: It’s huge. We use it so much for transitions, and we have great sound designers — including Brad North and Craig Henighan — and great mixers, and we pay a lot of attention to all of it. I think TV has always put less emphasis on great sound compared to film, and again, you’re always up against the scheduling, so it’s always this balancing act.

You can’t mix it for a movie theater, as very few people have that setup at home. You have to design it for most people, who are watching on iPhones, iPads and so on, and optimize for that, so we mostly mix in stereo. We want the big movie sound, but it’s a compromise.

The DI must be vital?
Matt: Yes, and we work very closely with colorist Skip Kimball (who recently joined Efilm), who’s been with us since the start. He was very influential in terms of how the show ended up looking. We’d discussed the kind of aesthetic we wanted, and things we wanted to reference and then he played around with the look and palette. We’ve developed a look we’re all really happy with. We have three different LUTs on set designed by Skip and the DP Tim Ives will choose the best one for each location.

Everyone’s calling this the golden age of TV. Do you like being showrunners?
Ross: We do, and I feel we’re very lucky to have the chance to do this show — it feels like a big family. Yes, we originally wanted to be movie directors, but we didn’t come into this industry at the right time, and Netflix has been so great and given us so much creative freedom. I think we’ll do a few more seasons of this, and then maybe wrap it up. We don’t want to repeat ourselves.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

postPerspective names NAB Impact Award MVPs and winners

NAB is a bear. Anyone who has attended this show can attest to that. But through all the clutter, postPerspective sought out the best of the best for our Impact Awards. So we turned to a panel of esteemed industry pros (to whom we are very grateful!) to cast their votes on what they thought would be most impactful to their day-to-day workflows and those of their colleagues.

In addition to our Impact Award winners, this year we are also celebrating two pieces of technology that not only caused a big buzz around the show, but are also bringing things a step further in terms of technology and workflow: Blackmagic’s DaVinci Resolve 15 and Apple’s ProRes RAW.

With ProRes RAW, Apple has introduced a new, high-quality video recording codec that has already been adopted by three competing camera vendors — Sony, Canon and Panasonic. According to Mike McCarthy, one of our NAB bloggers and regular contributors, “ProRes RAW has the potential to dramatically change future workflows if it becomes even more widely supported. The applications of RAW imaging in producing HDR content make the timing of this release optimal to encourage vendors to support it, as they know their customers are struggling to figure out simpler solutions to HDR production issues.”

Fairlight’s audio tools are now embedded in the new Resolve 15.

With Resolve 15, Blackmagic has launched the product further into a wide range of post workflows, and they haven’t raised the price. This standalone app — which comes in a free version — provides color grading, editing, compositing and even audio post, thanks to the DAW Fairlight, which is now built into the product.

These two technologies are Impact Award winners, but our judges felt they stood out enough to be called postPerspective Impact Award MVPs.

Our other Impact Award winners are:

• Adobe for Creative Cloud

• Arri for the Alexa LF

• Codex for Codex One Workflow and ColorSynth

• FilmLight for Baselight 5

• Flanders Scientific for the XM650U monitor

• Frame.io for the All New Frame.io

• Shift for their new Shift Platform

• Sony for their 8K CLED display

In a sea of awards surrounding NAB, the postPerspective Impact Awards stand out, and are worth waiting for, because they are voted on by working post professionals.

Flanders Scientific’s XM650U monitor.

“All of these technologies from NAB are very worthy recipients of our postPerspective Impact Awards,” says Randi Altman, postPerspective’s founder and editor-in-chief. “These awards celebrate companies that push the boundaries of technology to produce tools that actually have an impact on workflows as well as the ability to make users’ working lives easier and their projects better. This year we have honored 10 different products that span the production and post pipeline.

“We’re very proud of the fact that companies don’t ‘submit’ for our awards,” continues Altman. “We’ve tapped real-world users to vote for the Impact Awards, and they have determined what could be most impactful to their day-to-day work. We feel it makes our awards quite special.”

With our Impact Awards, postPerspective is also hoping to help those who weren’t at the show, or who were unable to see it all, with a starting point for their research into new gear that might be right for their workflows.

postPerspective Impact Awards are next scheduled to celebrate innovative product and technology launches at SIGGRAPH 2018.

Atomos at NAB offering ProRes RAW recorders

Atomos is at this year’s NAB showing support for ProRes RAW, a new format from Apple that combines the performance of ProRes with the flexibility of RAW video. The ProRes RAW update will be available free for the Atomos Shogun Inferno and Sumo 19 devices.

Atomos devices are currently the only monitor recorders to offer ProRes RAW, with realtime recording from the sensor output of Panasonic, Sony and Canon cameras.

The new upgrade brings ProRes RAW and ProRes RAW HQ recording, monitoring, playback and tag editing to all owners of an Atomos Shogun Inferno or Sumo 19 device. Once installed, it will allow the capture of RAW images in up to 12-bit RGB — direct from many of our industry’s most advanced cameras onto affordable SSD media. ProRes RAW files can be imported directly into Final Cut Pro 10.4.1 for high-performance editing, color grading and finishing on Mac laptop and desktop systems.

Eight popular cine cameras with a RAW output — including the Panasonic AU-EVA1, Varicam LT, Sony FS5/FS7 and Canon C300mkII/C500 — will be supported, with more to follow.

With this ProRes RAW support, filmmakers can work easily with RAW – whether they are shooting episodic TV, commercials, documentaries, indie films or social events.

Shooting ProRes RAW preserves maximum dynamic range, with a 12-bit depth and wide color gamut — essential for HDR finishing. The new format, which is available in two compression levels — ProRes RAW and ProRes RAW HQ — preserves image quality with low data rates and file sizes much smaller than uncompressed RAW.
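
Rough arithmetic shows why even light RAW compression matters. An uncompressed, single-sample-per-photosite RAW stream from a hypothetical 4K sensor (a made-up example, not Atomos’ or Apple’s published figures) already exceeds 300MB/s:

```python
def raw_rate_mb_per_sec(width: int, height: int, bit_depth: int, fps: float) -> float:
    """Uncompressed Bayer RAW data rate: one sample per photosite."""
    return width * height * bit_depth * fps / 8 / 1e6

print(raw_rate_mb_per_sec(4096, 2160, 12, 24))  # ~318 MB/s before any compression
```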

Through ProRes RAW, Atomos recorders allow for increased flexibility in captured frame rates and resolutions. Atomos can record ProRes RAW at up to 2K at 240 frames per second, or 4K at up to 120 frames per second. Higher resolutions, such as 5.7K from the Panasonic AU-EVA1, are also supported.

Atomos’ OS, AtomOS 9, gives users filming tools that let them work efficiently and creatively with ProRes RAW on portable devices. Fast connections in and out and advanced HDR screen processing mean every pixel is accurately and instantly available for on-set creative playback and review. Pull the SSD out and dock it to your Mac over Thunderbolt 3 or USB-C 3.1 for immediate, super-fast post production.

Download the AtomOS 9 update for Shogun Inferno and Sumo 19 at www.atomos.com/firmware.

B&H expands its NAB footprint to target multiple workflows

By Randi Altman

In a short time, many in our industry will be making the pilgrimage to Las Vegas for NAB. They will come (if they are smart) with their comfy shoes, Chapstick and the NAB Show app and plot a course for the most efficient way to see all they need to see.

NAB is a big show that spans a large footprint, and typically companies showing their wares need to pick a hall — Central, South Lower, South Upper or North. This year, however, The Studio-B&H made some pros’ lives a bit easier by adding a booth in South Lower in addition to their usual presence in Central Hall.

B&H’s business and services have grown, so it made perfect sense to Michel Suissa, managing director at The Studio-B&H, to grow their NAB presence to include many of the digital workflows the company has been servicing.

We reached out to Suissa to find out more.

This year B&H and its Studio division are in the South Lower. Why was it important for you guys to have a presence in both the Central and South Halls this year?
The Central Hall has been our home for a long time and it remains our home with our largest footprint, but we felt we needed to have a presence in South Hall as well.

Production and post workflows merge and converge constantly, and we need to be knowledgeable in both. The simple fact is that we serve all segments of our industry, not just image acquisition and camera equipment. Our presence in image- and data-centric workflows has grown by leaps and bounds.

This world is a familiar one for you personally.
That’s true. The post and VFX worlds are very dear to me. I was an editor, Flame artist and colorist for 25 years. This background certainly plays a role in expanding our reach and services to these communities. The Studio-B&H team is part of a company-wide effort to grow our presence in these markets. From a business standpoint, the South Hall attendees are also our customers, and we needed to show we are here to assist and support them.

What kind of workflows should people expect to see at both your NAB locations?
At the South Hall, we will show a whole range of solutions to show the breadth and diversity of what we have to offer. That includes VR post workflow, color grading, animation and VFX, editing and high-performance Flash storage.

In addition to the new booth in South Hall, we have two in Central. One is for B&H’s main product offerings, including our camera shootout, which is a pillar of our NAB presence.

This Studio-B&H booth features a digital cinema and broadcast acquisition technology showcase, including hybrid SDI/IP switching, 4K studio cameras, a gyro-stabilized camera car, the most recent full-frame cinema cameras, and our lightweight cable cam, the DynamiCam.

Our other Central Hall location is where our corporate team can discuss all business opportunities with new and existing B2B customers.

How has The Studio-B&H changed along with the industry over the past year or two?
We have changed quite a bit. With our services and tools, we have re-invented our image from equipment providers to solution providers.

Our services now range from system design to installation and deployment. One of the more notable examples is our recent collaboration with HBO Sports on World Championship Boxing. The Studio-B&H team was instrumental in deploying our DynamiCam system to cover several live fights in different venues and integrating with NEP’s mobile production team. This is part of an entirely new type of service — something the company had never offered its customers before. It is a true game-changer for our presence in the media and entertainment industry.

What do you expect the “big thing” to be at NAB this year?
That’s hard to say. Markets are in transition with a number of new technology advancements: machine learning and AI, cloud-based environments, momentum for the IP transition, AR/VR, etc.

On the acquisition side, full frame/large sensor cameras have captured a lot of attention. And, of course, HDR will be everywhere. It’s almost not a novelty anymore. If you’re not taking advantage of HDR, you are living in the past.

Red’s new Gemini 5K S35 sensor offers low-light and standard mode

Red Digital Cinema’s new Gemini 5K S35 sensor for its Red Epic-W camera leverages dual-sensitivity modes, allowing shooters to use standard mode for well-lit conditions or low-light mode for darker environments.

In low-light conditions, the Gemini 5K S35 sensor allows for cleaner imagery with less noise and better shadow detail. Camera operators can easily switch between modes through the camera’s on-screen menu with no down time.

The Gemini 5K S35 sensor offers an increased field of view at 2K and 4K resolutions compared to the higher-resolution Red Helium sensor. In addition, the sensor’s 30.72mm x 18mm dimensions allow for greater anamorphic lens coverage than the Helium or Red Dragon sensors.

“While the Gemini sensor was developed for low-light conditions in outer space, we quickly saw there was so much more to this sensor,” explains Jarred Land, president of Red Digital Cinema. “In fact, we loved the potential of this sensor so much, we wanted to evolve it for broader appeal. As a result, the Epic-W Gemini now sports dual-sensitivity modes. It still has the low-light performance mode, but also has a default, standard mode that allows you to shoot in brighter conditions.”

Built on the compact DSMC2 form factor, this new camera and sensor combination captures 5K full-format motion at up to 96fps, along with data speeds of up to 275MB per second. Additionally, it supports Red’s IPP2 enhanced image processing pipeline in-camera. Like all of Red’s DSMC2 cameras, the Epic-W can record Redcode RAW and Apple ProRes or Avid DNxHD/HR simultaneously, and it adheres to Red’s “Obsolescence Obsolete” program, which allows current Red owners to upgrade their technology as innovations are unveiled. It also lets them move between camera systems without having to purchase all new gear.

Starting at $24,500, the new Red Epic-W with Gemini 5K S35 sensor is available for purchase now. Alternatively, Weapon Carbon Fiber and Red Epic-W 8K customers will have the option to upgrade to the Gemini sensor at a later date.

Sony to ship Venice camera this month, adds capabilities

Sony’s next-gen CineAlta motion picture camera Venice, which won a postPerspective Impact Award for IBC2017, will start shipping this month. As previously announced, V.1.0 features support for full-frame 24x36mm recording. In addition, and as a result of customer feedback, Sony has added several new capabilities, including a Dual Base ISO mode. With 15+ stops of exposure latitude, Venice will support an additional High Base ISO of 2500 using the sensor’s physical attributes. This takes advantage of Sony’s sensor for low-light performance with high dynamic range — from 6 stops over to 9 stops under 18% middle gray.

This new capability increases exposure indexes at higher ISOs for night exteriors, dark interiors, working with slower lenses or where content needs to be graded in HDR, while maintaining the maximum shadow details. An added benefit within Venice is its built-in 8-step optical ND filter servo mechanism. This can emulate different ISO operating points when in High Base ISO 2500 and also maintains the extremely low levels of noise characteristics of the Venice sensor.

Venice also features new color science designed to offer a soft tonal film look, with shadows and mid-tones having a natural response and the highlights preserving the dynamic range.

Sony has also developed the Venice camera menu simulator. This tool is designed to give camera operators an opportunity to familiarize themselves with the camera’s operational workflow before using Venice in production.

Features and capabilities planned to be available later this year as free firmware upgrades in Version 2 include:
• 25p in 6K full-frame mode
• False Color (moved from Version 3 to Version 2)

Venice has an established workflow with support from Sony’s RAW Viewer 3 and third-party vendors, including FilmLight Baselight 5, DaVinci Resolve 14.3 and Assimilate Scratch 8.6, among others. Sony continues to work closely with all relevant third parties on workflows including editing, grading, color management and dailies.

Another often requested feature is support for high frame rates, which Sony is working to implement and make available at a later date.

Venice features include:
• True 36x24mm full frame imaging based on the photography standard that goes back 100 years
• Built-in 8-step optical ND filter servo mechanism
• Dual Base ISO mode, with High Base ISO 2500
• New color science for appealing skin tones and graceful highlights – out of the box
• Aspect ratio freedom: Full frame 3:2 (1.5:1), 4K 4:3 full height anamorphic, spherical 17:9, 16:9.
• Lens mount with 18mm flange depth opens up tremendous lens options (PL lens mount included)
• 15+ stops of exposure latitude
• User-interchangeable sensor that requires removal of just six screws
• 6K resolution (6048 x 4032) in full frame mode

Seasoned pros and young talent team on short films

By James Hughes

In Los Angeles on a Saturday morning, a crew of 10 students from Hollywood High School — helmed by 17-year-old director Celine Gimpirea — was transforming a corner of Calvary Cemetery into a movie set. In The Box, a boy slips inside a cardboard box and finds himself transported to other realms. On this well-manicured lawn, among rows of flat, black granite grave markers, are rows of flat, black camera cases holding Red cameras, DIT stations, iPads and MacBook Pros.

Gimpirea’s is one of three teams of filmmakers involved in a month-long filmmaking workshop connecting creative pros with emerging talent. The teams worked with tools from Apple, including the MacBook Pro, iMac and Final Cut Pro X, as well as the Red Raven camera for shooting. LA-based independent filmmaking collective We Make Movies provided post supervision. They used a workflow very similar to that of the feature film Whiskey Tango Foxtrot, which was shot on Red and edited in FCP X.

In the documentary La Buena Muerte, produced by instructors from the Mobile Film Classroom (a non-profit that provides digital media workshops to youth in under-resourced communities), the filmmakers examine mortality and family bonds surrounding the Day of the Dead, the Mexican holiday honoring lost loved ones. And in The Dancer, director Krista Amigone channels her background in theater to tell a personal story about a dancer confronting the afterlife.

Krista Amigone

During a two-week post period, teams received feedback from a rotating cast of surprise guests and mentors from across the industry, each a professional working in the field of film and television production.

Among the first mentors to view The Dancer was Sean Baker, director of 2017’s critically acclaimed The Florida Project and the 2015 feature Tangerine, shot entirely on iPhone 5S. Baker, who edits his own films, surveyed clips from Amigone’s shoot. Each take had been marked with the Movie Slate app on an iPad, which automatically stores and logs the timecode data. Together, they discussed Amigone’s backstory as well. A stay-at-home mother of a three-year-old daughter, she is no stranger to maximizing time and resources. She not only served as writer and director, but also star and choreographer.

Meanwhile, the La Buena Muerte crew, headed by executive producer Manon Banta, were editing their piece. Reviewing the volume of interviews and B-roll, all captured by cinematographer Elle Schneider on the 4.5K Red Raven camera, initially felt like a daunting task. Fortunately, their metadata was automatically organized after being imported straight into Final Cut Pro X from Shot Notes X and Lumberjack, along with the secondary source audio via Sync-N-Link X, which spared days of hand syncing.

Perhaps the most constructive feedback about story structure came from TJ Martin, director of LA92 and Undefeated, the Oscar-winner for Best Documentary Feature in 2012, which director Jean Balest has used as teaching material in the Mobile Film Classroom. Midway through the cut, Martin was struck by a plot point he felt required precision placement up front: A daughter is introduced while presiding over a conceptual art altar alongside her mother, who reveals she’s coping with her own pending death after a stage four cancer diagnosis.

Reshoots were vital to The Box. The dream world Gimpirea created — she cites Christopher Nolan’s Inception as an influence — required some clarification. During a visit from Valerie Faris, the Oscar-nominated co-director of Little Miss Sunshine and Battle of the Sexes, Gimpirea listened intently as she offered advice for pickup shots. Faris urged Gimpirea to keep the story focused on the point of view of her young lead during his travels. “There’s a lot told in his body and seeing him from behind,” Faris said. “In some ways, I’m more with him when I’m traveling behind him and seeing what he’s seeing.”

Celine Gimpirea

Gimpirea’s collaborative nature was evident throughout post. She was helped out by Antonio Manriquez, a video production teacher at Hollywood High, as well as her crew. Kais Karram was the film’s assistant director, and twin brother Zane was cinematographer. The brothers’ athleticism was an asset on-set, particularly during a day-long shoot in Griffith Park where they executed numerous tracking shots behind the film’s fleet-footed star as he navigated a walkway they had cleared of park visitors.

The selection of music was crucial, particularly for Amigone. For her main theme, she wanted a sound reminiscent of John Coltrane’s “After The Rain” and Claude Debussy’s “Clair De Lune.” She chose an original nocturne by John Mickevich, a composer and fellow member of the collective We Make Movies, whose founder/CEO Sam Mestman is also the CEO of LumaForge, developer of the Jellyfish Mobile — a “portable cloud,” as he put it — which, along with two MacBook Pros, was storing and syncing Amigone’s footage on location. Mestman believes “post should live on set.” As proof, a half-day of work for the editing team was done before the dance studio shoot had even wrapped.

During his mentor visit, Aaron Kaufman, director and longtime producing partner of filmmaker Robert Rodriguez, encouraged the teams to not be precious about losing shots in service of story. The documentary team certainly heeded this advice, as did Gimpirea, who cut a whole scene from Calvary Cemetery from her film.

As the project was winding down, Gimpirea reflected on her experience. “Knowing all the possibilities that I have in post now, it allows me to look completely differently at production and pre-production, and to pick out, more precisely, what I want,” she said.

Main Image: Shooting with the Red Raven at the Calvary Cemetery.


James Hughes is a writer and editor based in Chicago.

Panavision Hollywood names Dan Hammond VP/GM

Panavision has named Dan Hammond, a longtime industry creative solutions technologist, as vice president and general manager of Panavision Hollywood. He will be responsible for overseeing daily operations at the facility and working with the Hollywood team on camera systems, optics, service and support.

Hammond is a Panavision veteran, who worked at the company between 1989 and 2008 in various departments, including training, technical marketing and sales. Most recently he was at Production Resource Group (PRG), expanding his technical services skills. He is active with industry organizations, and is an associate member of the American Society of Cinematographers (ASC), as well as a member of the Academy of Television Arts and Sciences (ATAS) and Association of Independent Commercial Producers (AICP).

Review: GoPro Fusion 360 camera

By Mike McCarthy

I finally got the opportunity to try out the GoPro Fusion camera I have had my eye on since the company first revealed it in April. The $700 camera uses two offset fish-eye lenses to shoot 360 video and stills, while recording ambisonic audio from four microphones in the waterproof unit. It can shoot a 5K video sphere at 30fps, or a 3K sphere at 60fps for higher motion content at reduced resolution. It records dual 190-degree fish-eye perspectives encoded in H.264 to separate MicroSD cards, with four tracks of audio. The rest of the magic comes in the form of GoPro’s newest application Fusion Studio.

Internally, the unit is recording dual 45Mb H.264 files to two separate MicroSD cards, with accompanying audio and metadata assets. This would be a logistical challenge to deal with manually, copying the cards into folders, sorting and syncing them, stitching them together and dealing with the audio. But with GoPro’s new Fusion Studio app, most of this is taken care of for you. Simply plug in the camera and it will automatically access the footage, letting you preview and select which parts of which clips you want processed into stitched 360 footage or flattened video files.

It also processes the multi-channel audio into ambisonic B-Format tracks, or standard stereo if desired. The app is a bit limited in user-control functionality, but what it does do it does very well. My main complaint is that I can’t find a way to manually set the output filename, but I can rename the exports in Windows once they have been rendered. Trying to process the same source file into multiple outputs is challenging for the same reason.

Setting   Recorded Resolution (Per Lens)   Processed Resolution (Equirectangular)
5Kp30     2704×2624                        4992×2496
3Kp60     1568×1504                        2880×1440
Stills    3104×3000                        5760×2880
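
The processed files are equirectangular, meaning the sphere is unrolled so that yaw and pitch map linearly to x and y (which is also why all the outputs above are exactly 2:1). A quick sketch of that mapping, using the 5Kp30 output size:

```python
def pixel_to_angles(px: int, py: int, width: int, height: int):
    """Map an equirectangular pixel to (yaw, pitch) in degrees.
    Yaw spans -180..180 across the width; pitch spans 90..-90 down the height."""
    yaw = (px / width - 0.5) * 360.0
    pitch = (0.5 - py / height) * 180.0
    return yaw, pitch

w, h = 4992, 2496                             # 5Kp30 processed output
print(pixel_to_angles(w // 2, h // 2, w, h))  # (0.0, 0.0): dead ahead on the horizon
print(pixel_to_angles(0, h // 2, w, h))       # (-180.0, 0.0): directly behind
```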

With the Samsung Gear 360, I researched five different ways to stitch the footage, because I wasn’t satisfied with the included app. Most of those will also work with Fusion footage, and you can read about those options here, but they aren’t really necessary when you have Fusion Studio.

You can choose between H.264, Cineform or ProRes, your equirectangular output resolution, and ambisonic or stereo audio. That gives you pretty much every option you should need to process your footage. There is also a “Beta” option to stabilize your footage which, once I got used to it, I really liked. It should be thought of more as a “remove rotation” option, since it’s not for stabilizing out sharp motions — which still leave motion blur — but for maintaining the viewer’s perspective even if the camera rotates in unexpected ways. Processing was about 6x run-time on my Lenovo ThinkPad P71 laptop, so a 10-minute clip would take an hour to stitch to 360.

The footage itself looks good, higher quality than my Gear 360, and the 60p stuff is much smoother, which is to be expected. While good VR experiences require 90fps to be rendered to the display to avoid motion sickness, that does not necessarily mean 30fps content is a problem. When rendering the viewer’s perspective, the same frame can be sampled three times, shifting the image as they move their head, even from a single source frame. That said, 60p source content does give smoother results than the 30p footage I am used to watching in VR, but 60p did give me more issues during editorial: I had to disable CUDA acceleration in Adobe Premiere Pro to get Transmit to work with the WMR headset.

Once you have your footage processed in Fusion Studio, it can be edited in Premiere Pro — like any other 360 footage — but the audio can be handled a bit differently. Exporting as stereo will follow the usual workflow, but selecting ambisonic will give you a special spatially aware audio file. Premiere can use this in a 4-track multi-channel sequence to line up the spatial audio with the direction you are looking in VR, and if exported correctly, YouTube can do the same thing for your viewers.

In the Trees
Most GoPro products are intended for use capturing action moments and unusual situations in extreme environments (which is why they are waterproof and fairly resilient), so I wanted to study the camera in its “native habitat.” The most extreme thing I do these days is work on ropes courses, high up in trees or telephone poles. So I took the camera out to a ropes course that I help out with, curious to see how the recording at height would translate into the 360 video experience.

Ropes courses are usually challenging to photograph because of the scale involved. When you are zoomed out far enough to see an entire element, you can’t see any detail, and when you are zoomed in close enough to see faces, you get no sense of how high up the climbers are. 360 photography helps because it is designed to be panned through when viewed flat, which gives the viewer a better sense of the scale while still showing the details of the individual elements and the people climbing them. And in VR, you get a better feel for the height involved.

I had the Fusion camera and the Fusion Grip extendable tripod handle, as well as my Hero6 kit, which included an adhesive helmet mount. Since I was going to be working at heights and didn’t want to drop the camera, the first thing I did was rig up a tether system. A short piece of 2mm cord fit through a slot in the bottom of the center post, and a triple fisherman’s knot made a secure loop. The cord exited the bottom of the tripod when it was closed, allowing me to connect it to a shock-absorbing lanyard clipped to my harness. This also allowed me to dangle the camera from the cord for a free-floating perspective. I stuck the quick-release base to my climbing helmet, and I was ready to go.

I shot segments in both 30p and 60p depending on how I had the camera mounted, using the higher frame rate for the more dynamic shots. I was worried that the helmet mount would be too close, since GoPro recommends keeping the Fusion at least 20cm away from what it is filming, but the helmet wasn’t too bad. Another inch or two of distance would have shrunk it significantly in the camera’s view, similar to the tripod issue I had with the Gear 360.

I always climbed up with the camera mounted on my helmet, then switched it to the Fusion Grip to record the guy climbing up behind me and my rappel. Hanging the camera from a cord, even 30 feet below me, worked much better than I expected. It put GoPro’s stabilization feature to the test, and it worked fantastically: with the camera rotating freely, the perspective stays static, although you can see the seam lines constantly rotating around you. When I was holding the Fusion Grip, the extended pole was completely invisible to the camera, giving you what GoPro has dubbed “Angel View.” It is as if the viewer is floating freely next to the subject, especially when viewed in VR.

Because I have ways to view 360 video in VR, and because I don’t mind panning around a flat-screen view, I am personally less excited about GoPro’s OverCapture functionality, but I recognize it is a useful feature that will greatly extend the use cases for this 360 camera. It is designed for people using the Fusion as a more flexible camera to produce flat content rather than VR content. I edited together a couple of OverCapture shots intercut with footage from my regular Hero6 to demonstrate how that would work.

Ambisonic Audio
The other new option that Fusion brings to the table is ambisonic audio. Editing ambisonics works in Premiere Pro using a 4-track multi-channel sequence. The main workflow kink is that every time you import a new clip with ambisonic audio, you have to manually override the audio settings to set the channels to Adaptive with a single timeline clip. Turn on Monitor Ambisonics by right-clicking in the monitor panel, then match the Pan, Tilt and Roll values in the Panner-Ambisonics effect to the values in your VR Rotate Sphere effect (note that they are listed in a different order), and your audio should match the video perspective.
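Under the hood, matching the audio to a rotated view is just a rotation of the first-order B-format channels. Here is a minimal sketch for the yaw case; channel ordering and sign conventions (FuMa vs. AmbiX) vary between tools, so treat it as an illustration rather than a drop-in DSP block.

```python
import numpy as np

def rotate_bformat_yaw(w, x, y, z, theta):
    """Rotate first-order B-format audio (W, X, Y, Z channels, FuMa-style
    ordering assumed) about the vertical axis by theta radians. Verify the
    sign convention against your own pipeline."""
    c, s = np.cos(theta), np.sin(theta)
    return (w,              # omni component is unchanged by rotation
            c * x - s * y,  # rotated front-back dipole
            s * x + c * y,  # rotated left-right dipole
            z)              # vertical dipole is unchanged by yaw
```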

When exporting an MP4, set Channels to 4.0 in the audio panel and check the “Audio is Ambisonics” box. From what I can see, the Fusion Studio conversion process compensates for changes in perspective, including “stabilization,” when processing the raw recorded audio for ambisonic exports, so you only have to match the changes you make in your Premiere sequence.
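For YouTube to treat an upload as 360 video with spatial audio, the file needs spherical and spatial-audio metadata. If your export path doesn’t inject it, Google’s open-source Spatial Media Metadata Injector (github.com/google/spatial-media) can add it after the fact; the sketch below simply shells out to it, and the filenames and flags are assumptions to check against the tool’s README.

```python
import subprocess

# Hypothetical filenames; "spatialmedia" is Google's Spatial Media Metadata
# Injector. Flag names can change between versions, so confirm them against
# the tool's README before relying on this.
subprocess.run(
    ["python", "spatialmedia", "-i", "--spatial-audio",
     "ropes_course_360.mp4", "ropes_course_360_injected.mp4"],
    check=True,  # raise if the injector reports an error
)
```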

While I could have intercut the footage at both settings into a single 5Kp60 timeline, I ended up creating two separate 360 videos. This also makes it clear to the viewer which shots were recorded at 5Kp30 and which at 3Kp60. They are both available on YouTube, and I recommend watching them in VR for the full effect. But be warned that they were recorded at heights of up to 80 feet, so they may be uncomfortable for some people to watch.

Summing Up
GoPro’s Fusion camera is not the first 360 camera on the market, but it brings more pixels and higher frame rates than most of its direct competitors and, more importantly, it has the software package to assist users in the transition to processing 360 video footage. It also supports ambisonic audio and offers the OverCapture functionality for generating more traditional flat GoPro content.

I found it to be easier to mount and shoot with than my earlier 360 camera experiences, and it is far easier to get the footage ready to edit and view using GoPro’s Fusion Studio program. The Stabilize feature totally changes how I shoot 360 videos, giving me much more flexibility in rotating the camera during movements. And most importantly, I am much happier with the resulting footage that I get when shooting with it.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Mercy Christmas director offers advice for indie filmmakers

By Ryan Nelson

After graduating from film school at The University of North Carolina School of the Arts, I was punched in the gut. I had driven into Los Angeles mere hours after the last day of school ready to set Hollywood on fire with my thesis film. But Hollywood didn’t seem to know I’d arrived. A few months later, Hollywood still wasn’t knocking on my door. Desperate to work on film sets and learn the tools of the trade, I took a job as a grip. In hindsight, it was a lucky accident. I spent the next few years watching some of the industry’s most successful filmmakers from just a few feet away.

Like a sponge, I soaked in every aspect of filmmaking that I could from my time on the sets of Avengers, Real Steel, Spider-Man 3, Bad Boys 2, Seven Psychopaths, Smokin’ Aces and a slew of Adam Sandler comedies. I spent hours working, watching, learning and judging. How are they blocking the actors in this scene? What sort of cameras are they using? Why did they use that light? When do you move the camera? When is it static? When I saw the finished films in theaters, I ultimately asked myself: did it all work?

During that same time, I wrote and directed a slew of my own short films. I tried many of the same techniques I’d seen on set. Some of those attempts succeeded and some failed.

Recently, the stars finally aligned and I directed my first feature-length film, Mercy Christmas, from a script I co-wrote with my wife, Beth Levy Nelson. After five years of writing, fundraising, production and post production, the movie is finished. We made the movie outside the Hollywood system, using crowdfunding, generous friends and loving family members to compile enough cash to make the ultra-low-budget version of the Mercy Christmas screenplay.

I say low budget because, financially, it was. But thanks to my time on set, years of practice and much trial and error, the finished film looks and feels like it cost much more than it did.

Mercy Christmas, by the way, features Michael Briskett, who meets the perfect woman; his ideal Christmas dream comes true when she invites him to her family’s holiday celebration. Michael’s dream shatters, however, when he realizes that he will be the Christmas dinner. The film is currently on iTunes.

My experience working professionally in the film business while I struggled to get my shot at directing taught me many things. I learned over those years that a mastery of the techniques and equipment used to tell stories for film was imperative.

The stories I gravitate toward tend to have higher-concept set pieces. I really enjoy combining action and character. At this point in my career, budgets are more limited. However, I can’t allow financial restrictions to hold me back from the stories I want to tell. I must always find a way to use the available tools to their best effect.

Ryan Nelson with camera on set.

Two Cameras
I remember an early meeting with a possible producer for Mercy Christmas. I told him I was planning to shoot two cameras. The producer chided me, saying it would be a waste of money. Right then, I knew I didn’t want to work with that producer, and I didn’t.

Every project I do now and in the future will be shot with two cameras. The reason is simple: it would be a waste of money not to use two cameras. On a limited budget, two cameras offer twice the coverage. Yes, understanding how to shoot with two cameras is key, but it’s also simple to master. Cross coverage is not conducive to lower-budget lighting, so stacking the cameras on a single piece of coverage gives you a medium shot and a close shot at the same time. Or, when shooting the wide master, you can also grab a medium master to give the editor another option to break away to while building a scene.

In Mercy Christmas, we have a fight scene that consists of seven minutes of screen time. It’s a raucous fight that covers three individual fights happening simultaneously. We scheduled three days to shoot the fight. Without two cameras it would have taken more days to shoot, and we definitely didn’t have more days in the budget.

Of course, two camera rentals and two camera crews are budget concerns, so the key is to find a lower-budget but high-quality camera. For Mercy Christmas, we chose the Canon C300 Mark II, and we found the image to be fantastic. I was very happy with the final result. You can also save money by renting only one lens package to share between both cameras.

Editing
Good camera coverage doesn’t mean much without an excellent editor. Our editor for Mercy Christmas, Matt Evans, is a very good friend and also very experienced in post. Like me, Matt started at the bottom and worked his way up. Along the way, he worked on many studio films as apprentice editor, first assistant editor and finally editor. Matt’s preferred tool is Avid Media Composer. He’s incredibly fast and understands every aspect of the system.

Matt’s technical grasp is superb, but his story sense is the real key. Matt’s technique is a fun thing to witness. He approaches a scene by letting the footage tell him what to do on a first pass. Soaking in the performances with each take, Matt finds the story that the images want to tell. It’s almost as if he’s reading a new script based on the images. I am delighted each time I can watch Matt’s first pass on a scene. I always expect to see something I hadn’t anticipated. And it’s a thrill.

Color Grading
Another aspect that should be budgeted into an independent film is professional color grading. No, your editor doing color does not count. A professional post house with a professional color grader is what you need. I know this seems exorbitant for a small-budget indie film, but I’d highly recommend planning for it from the beginning. We budgeted color grading for Mercy Christmas because we knew it would take the look to professional levels.

Color grading is not only a tool for the cinematographer; it’s a godsend for the director as well. First and foremost, it can save a shot, turning a preferred take with an inferior look into a usable one. Second, I believe strongly that color is another tool for storytelling. An audience can be as moved by color as by music. Every detail that reaches the audience is information they’ll process to understand the story. I learned very early in my career how shots I saw created on set were accentuated in post by color grading. We used the Framework post house in Los Angeles on Mercy Christmas. The colorist was David Sims, who did the color and conform in DaVinci Resolve 12.

In the end, my struggle over the years did gain me one of my best tools: experience. I’ve taken the time to absorb all the filmmaking I’ve been surrounded by. Watching movies. Working on sets. Making my own.

After all that time chasing my dream, I kept learning, refining my skills and honing my technique. For me, filmmaking is a passion, a dream and a job. All of those elements made me the storyteller I am today and I wouldn’t change a thing.