Category Archives: Cinematography

Paris Can Wait director Eleanor Coppola

By Iain Blair

There are famous Hollywood dynasties, and then there’s the Coppolas, with such giant talents as Francis, Sofia, Roman, Nicolas Cage and the late Carmine.

While Eleanor, the matriarch of the clan and Francis’ wife, has long been recognized as a multi-talented artist in her own right, thanks to her acclaimed documentaries and books (Hearts of Darkness: A Filmmaker’s Apocalypse, Notes on the Making of Apocalypse Now, Notes on a Life), it’s only recently — at the grand age of 81 — that she’s written, produced and directed her feature film debut, Paris Can Wait.

Eleanor Coppola on set in France.

It stars Oscar nominee Diane Lane as a woman who unexpectedly takes a trip through France, which reawakens her sense of self and her joie de vivre. At a crossroads in her life, and long married to an inattentive movie producer (Alec Baldwin), she finds herself taking a car trip from Cannes to Paris with a garrulous business associate of her husband. What should be a seven-hour drive turns into a journey of discovery involving mouthwatering meals, spectacular wines and picturesque sights.

Maybe it’s something in the water — or the famed Coppola wine, or her genes — but like her many family members, Eleanor Coppola seems to have a natural gift for capturing visual magic, and the French road trip unfolds like a sun-drenched adventure that makes you want to pack your bags and join the couple immediately.

I recently spoke with Coppola about making the film.

You began directing feature films at an age when most directors have long since retired. What took you so long?
I made documentaries, and my nature is to be an observer, so I never thought about doing a fiction film. But I had this true story, this trip I took with a Frenchman, and it felt like a really good basis for a road movie — and I love road movies — so I began writing it and included all these wonderful, picturesque places we stopped at, and someone pointed out where the script broke down. Then my son said, “You should fix it,” so I gradually added all these textures and colors and flavors that would make it as rich as possible.

I heard it took a long time to write?
I began writing, and once I had the script together I began looking for a director, but I couldn’t quite find the right person. Then one morning at breakfast (my husband) Francis said, “You should direct it.” I’d never thought of directing it myself, so I took classes in directing and acting to prepare, but it ended up taking six years to bring all the elements together.

I assume getting financing was hard?
It was, especially as I’m not only a first-time feature director, but my movie has no aliens, explosions, kidnappings, guns, train wrecks — and nobody dies. It doesn’t have any of the usual elements that bankers want to invest in, so it took a long time to patch together the money — a bit here, a bit there. That was probably the hardest part of the whole thing. You can’t get the actors until you have the financing, and you can’t get the financing until you have the actors. It’s like Catch-22, and you’re caught in this limbo between the two while you try and get it all lined up.

After Francis persuaded you to direct it, did he give you a lot of encouragement and advice?
I asked him a lot about working with actors. I’ve been on so many sets with him and watched him directing, and he was very helpful and supportive, especially when we ran into the usual problems every film has.

I heard that just two weeks into shooting, the actor originally set to play Michael was unable to get out of another project?
Yes, and I was desperate to find a replacement, and it was such short notice. But by some miracle, Alec Baldwin called Francis about something, and he was able to fly over to France at the last moment and fill in. And other things happened. We were going to shoot the opening at the Hotel Majestic in Cannes, but a Saudi Arabian prince arrived and took over the entire hotel, so we had to scramble to find another location.

How long was the shoot?
Just 28 days, so it was a mad dash all over France, especially as we had so many locations I wanted to fit in. Pretty much every day, the AD and the production manager would come over to me after lunch and say, “Okay, you had 20 shots scheduled for today, but we’re going to have to lose four or five of them. Which ones would you like to cut?” So you’re in a constant state of anxiety, wondering if the shots you are getting will even cut together. Since we had so little time and money, we knew that we could never come back to a location if we missed something, and that we’d have to cut some stuff out altogether. And then there’s the daily race to finish before you lose light, so it was very difficult at times.

Where did you do the post?
All back at our home in Napa Valley, where we have editing and post production facilities all set up at the winery.

You worked with editor Glen Scantlebury, whose credits include The Godfather Part III and Bram Stoker’s Dracula for Francis, Michael Bay’s The Rock, Armageddon and Transformers, as well as Con Air, The General’s Daughter and Tomb Raider. What did he bring to the project?
What happened was, I had a French editor who assembled the film while we were there, but it didn’t make financial sense to then bring her back to Napa, so Francis put me together with Glen and we worked really well together. He’s so experienced, but not just cutting these huge films. He’s also cut a lot of indies and smaller films and documentaries, and he did Palo Alto for (my granddaughter) Gia, so he was perfect for this. He didn’t come to France.

What were the main editing challenges?
As they say, there are three films you make: the one you wrote, the one you shot and the one you then edit and get onto the screen. It’s always the same challenge of finding the best way of telling the story, and then we screened versions for people to see where any weaknesses were, and then we would go back and try to correct them. Glen is very creative, and he’d come up with fresh ways of dealing with any problems. We ended up spending a couple of months working on it, after he spent an initial month at home doing his own assembly.

I must say, I really enjoyed the editing process more than anything, because you get to relax more and shape the material like clay and mold it in a way you just can’t see when you’re in the middle of shooting it. I love the way you can move scenes around and juxtapose things that suddenly work in a whole new way.

Can you talk about the importance of sound and music?
They’re so important, and can radically alter a scene and the emotions an audience feels. I had the great pleasure of working with sound designer Richard Beggs, who won the Oscar for Apocalypse Now, and who’s done the sound for so many great films, including Rain Man and Harry Potter, and he’s worked with (my daughter) Sofia on some of her films like Lost in Translation and Marie Antoinette.

He’s a master of his craft and helped bring the film alive. Also, he recommended the composer Laura Karpman, who’s won several Emmys and worked with Spielberg and John Legend and all sorts of people. Music is really the weakest part for me, because I just don’t know what to do, and like Glen, Laura was just a perfect match for me. The first things she wrote were a little too dark, I felt, as I wanted this to be fun and light, and she totally got it, and also used all these great finger-snaps, and the score just really captures the feeling I wanted. We mixed everything up in Napa as well.

Eleanor Coppola and writer Iain Blair.

Do you want to direct another feature now, or was once enough?
I don’t have anything cooking that I want to make, but I’ve recently made two short story films, and I really enjoyed doing that since I didn’t have to wait for years to get the financing. I shot them in Northern California, and they were a joy to do.

There’s been a lot of talk about the lack of opportunity for women directors. What’s your advice to a woman who wants to direct?
Well, first off, it’s never too late! (Laughs) Look at me. I’m 81, and this is my first narrative film. Making any film is hard, and finding the financing is even harder. Yes, it is a boys’ club, but if you have a story to tell, never give up. Women should have a voice.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Canon targets HDR with EOS C200, C200B cinema cameras

Canon has grown its Cinema EOS line of pro cinema cameras with the EOS C200 and EOS C200B. These new offerings target filmmakers and TV productions. They offer two 4K video formats — Canon’s new Cinema RAW Light and MP4 — and are optimized for those interested in shooting HDR video.

Alongside a newly developed dual Digic DV6 image processing system, Canon’s Dual Pixel CMOS AF system and improved operability for pros, these new cameras are built for capturing 4K video across a variety of production applications.

Based on feedback from Cinema EOS users, these new offerings will be available in two configurations, while retaining the same core technologies within. The Canon EOS C200 is a production-ready solution that can be used right out of the box, accompanied by an LCD monitor, LCD attachment, camera grip and handle unit. The camera also features a 1.77 million-dot OLED electronic view finder (EVF). For users who need more versatility and the ability to craft custom setups tailored to their subject or environment, the C200B offers cinematographers the same camera without these accessories and the EVF to optimize shooting using a gimbal, drone or a variety of other configurations.

Canon’s Peter Marr was at Cine Gear demo-ing the new cameras.

New Features
Both cameras feature the same 8.85MP CMOS sensor that combines with a newly developed dual Digic DV6 image processing system to help process high-resolution image data and record video from full HD (1920×1080) and 2K (2048×1080) to 4K UHD (3840×2160) and 4K DCI (4096×2160). A core staple of the third-generation Cinema EOS system, this new processing platform offers wide-ranging expressive capabilities and improved operation when capturing high-quality HDR video.

The combination of the sensor and the newly developed processing system also enables support for two new 4K file formats designed to optimize workflow and make 4K and HDR recording more accessible to filmmakers. Cinema RAW Light, available in 4K 60p/50p at 10-bit and 30p/25p/24p at 12-bit, allows users to record data internally to a CFast card by cutting data size to about one-third to one-fifth of a Cinema RAW file, without losing grading flexibility. Thanks to the reduced file size, users get rich dynamic range and easier post processing without sacrificing true 4K quality. Alongside recording to a CFast card, proxy data (MP4) can also be simultaneously recorded to an SD card for use in offline editing.
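That one-third to one-fifth claim is easy to sanity-check. A back-of-the-envelope sketch, where the 100GB clip size is a hypothetical illustration, not a Canon spec:

```python
# Rough sketch of Cinema RAW Light's size savings. The 100 GB clip
# is a made-up illustrative figure; the article only states the
# format runs about one-third to one-fifth the size of Cinema RAW.

def raw_light_size_gb(raw_size_gb: float, reduction: int) -> float:
    """Estimated Cinema RAW Light size given the full Cinema RAW
    size and a reduction factor (3 to 5, per the article)."""
    return raw_size_gb / reduction

raw_clip_gb = 100  # hypothetical full Cinema RAW clip
best_case = raw_light_size_gb(raw_clip_gb, 5)   # one-fifth
worst_case = raw_light_size_gb(raw_clip_gb, 3)  # one-third
print(f"{best_case:.1f} GB to {worst_case:.1f} GB")
```

In other words, a clip that would fill a 100GB card as Cinema RAW lands somewhere around 20GB to 33GB as Cinema RAW Light, which is what makes internal CFast recording practical.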

Additionally, filmmakers will be able to export 4K in MP4 format to SD cards at 60p/50p/30p/25p/24p at 8-bit. Support for UHD recording allows for use in cinema and broadcasting applications, or in scenarios where long recording times are needed while still maintaining top image quality. The cameras also offer slow-motion full-HD recording at up to 120fps.

The Canon EOS C200 and Canon EOS C200B feature Innovative Focus Control that helps assist with 4K shooting that demands precise focusing, whether from single or remote operation. According to Canon, its Dual Pixel CMOS AF technology helps expand the subject area over which focus can be achieved, enabling faster focus during 4K video recording. It also allows for highly accurate continuous AF and face detection AF when using EF lenses. For 4K shoots that call for focus accuracy too precise to check on an HD monitor, users can take advantage of the LCD Monitor LM-V1 (supplied with the EOS C200), which provides intuitive touch-focusing support to help filmmakers achieve sophisticated focusing even as a single operator.

In addition to these features, the cameras offer:
• Oversampling HD processing: enhances sensitivity and helps minimize noise
• Wide DR Gamma: helps reduce overexposure by retaining continuity with a gamma curve
• ISO 100-102,400 and 54dB gain: high quality in both low-sensitivity and low-light environments
• In-camera ND filter: internal ND unit allows cleaning of glass for easier maintenance
• ACESproxy support: delivers standardized color space in images, helping to improve efficiency
• Two SD card and one CFast card slots for internal recording
• Improved grip and Cinema-EOS-system-compatible attachment method
• Support for Canon Cine-Servo and EF cinema lenses

Editing and grading of the Cinema RAW Light video format will be supported in Blackmagic DaVinci Resolve. Editing will also be possible in Avid Media Composer, using a Canon RAW plugin for Avid Media Access. The format can also be processed using Canon’s own application, Cinema RAW Development.

Adobe’s Premiere Pro CC will also support the format by the end of 2017, and editing will be possible in Apple’s Final Cut Pro X, using the Canon RAW Plugin for Final Cut Pro X, in the second half of this year.

The Canon EOS C200 and EOS C200B are scheduled to be available in August for estimated retail prices of $7,499 and $5,999, respectively. The EOS C200 comes equipped with additional accessories including the LM-V1 LCD monitor, LA-V1 LCD attachment, GR-V1 camera grip and HDU-2 handle unit. Available in September, these accessories will also be sold separately.


Review: Zylight’s IS3 LED lights

By Brady Betzel

I see a lot of footage from all over the world captured on all sorts of cameras and shot in good and bad lighting conditions. Besides camera types and lenses, proper lighting is consistently an area that needs the most attention.

If you trawl around YouTube, you will see all sorts of lighting tutorials (some awful, but some outstanding). Some offer rundowns on what lighting you can get for your budget, from clamp-style garage lights with LED bulbs that can be purchased at your local Lowe’s, to a standard three-piece lighting kit, to the ever-trendy Kino Flo lights. There are so many choices that it’s hard to know what you should be looking at, or even why you would choose LED over tungsten or fluorescent.

In this review, I am going to go over the Zylight IS3/c LED light. The “c” in IS3/c stands for the Chimera softbox, which can be purchased with the light.

Recently, I have really been interested in lighting, and a few months back Zylight sent me the IS3/c to try out. Admittedly, I am not a world-famous DP or photographer with extensive lighting experience. I know my way around a mid-level lighting setup and can get through a decent-looking three-light setup, so my apologies if I don’t touch on the difference between daylight and tungsten footcandle output. Not that footcandles aren’t an interesting subject, but they can take a while to figure out and are probably best left to a good Lynda.com tutorial, or better yet a physics class on optics and lighting like the one I took in college.

Diving In
The Zylight IS3/c comes with the light head itself, a yoke bar with a 5/8-inch baby pin adapter, some knobs and washers, an AC adapter and hanging pouch, a safety cable, a guide and the Chimera softbox (if you purchased the IS3/c package). Before reading the manual, which would have been the proper thing to do, I immediately opened the box and plugged in the light. It lit up the whole interior of my house at night — think Christmas Vacation when Clark plugged in the Christmas lights (good movie). In one second, I saw how I could paint a wall (or all of my walls) with the IS3.

The beauty of LED lights is that they are typically lightweight and some can reproduce any color you can dream of while staying cool to the touch. So I wanted to see if I could paint a 15-foot wall chromakey green. With little effort I switched into color mode by flipping the rocker switch on the back of the light, turned the Hue knob until I hit green, and adjusted the saturation to 100% to try and literally paint my wall green with light. It was pretty incredible and dead simple.

The IS3 has a 90-degree beam angle on center and a 120-degree total beam angle (I found multiple specs on this, such as 95/115 degrees, so this is approximate), draws a maximum of 220 watts, can be purchased in black or white, and is made in the USA. The IS3 has two presets for white light and two presets for color. In white mode, the IS3 can output any color temperature between 2,500K and 10,000K, adjustable in 50K steps. Because LEDs are known for giving off a green tint, there is a knob to lower or raise the green adjustment. There is also a dimmer knob that allows dimming with little color shift. In color mode, there are three adjustments: hue, saturation and dimming.

One of the big features of the IS3, and of Zylight lights in general, is the built-in wireless transmitter that can talk to the Zylink bridge and Zylink iOS app. You can link multiple lights together and control them simultaneously. With the iOS app you can set hue values and even color presets like crossfade, strobe, police and flame. You can run the Zylight from either the AC adapter or a rechargeable battery. The light is sturdily built, with a rubberized front and a metal back that doubles as a heat sink. In addition to the Zylink wireless connection, you can use the DMX connection to connect to and control the Zylight.

In the end, the Zylight IS3/c is the soft light as well as wall wash light that I’ve been dreaming of. I was even thinking I could use the IS3 as Christmas lights. I could get a couple IS3s to paint the house red and green.

The Zylight is as easy to configure as any light I have ever used; unfortunately the price doesn’t match its ease of use. It’s pricey. The IS3/c is currently listed on Adorama.com for $2,699, and just the IS3 is $2,389. But you get what you pay for — it’s a professional light that will run 50,000 hours without needing calibration, it weighs 11 pounds and measures 18.5” x 10.75” x 1.9” — and you will most likely not need to replace this light.

If you run a stage show and need to control multiple lights with multiple color combinations quickly, the Zylink wireless bridge and iOS app may be just for you.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.


At Cine Gear, Panasonic shows 5.7K Super 35mm cinema camera

During Cine Gear this past weekend, Panasonic previewed the AU-EVA1, a new 5.7K cinema camera positioned between the Panasonic Lumix GH5 4K mirrorless camera and the VariCam LT 4K cinema camera. Compact and lightweight, the AU-EVA1 is made for handheld shooting, but is also suited for documentaries, commercials and music videos.

“For cinema-style acquisition, we realized there was a space between the GH5 and the VariCam LT,” said Panasonic cinema product manager Mitch Gross. “With its compact size and new 5.7K sensor, the EVA1 fills that gap for a variety of filmmaking applications.”

The EVA1 contains a newly designed 5.7K Super 35mm-sized sensor for capturing true cinematic images. By starting at a higher native resolution, the 5.7K sensor yields a higher-resolving image when downsampled to 4K, UHD, 2K and even 720p. The increased color information results in a finer, more accurate finished image.

One of the key features of the VariCam 35, VariCam LT and VariCam Pure is dual native ISO. Using a process that allows the sensor to be read in a fundamentally different way, Dual Native ISO extracts more information from the sensor without degrading the image. This results in a camera that can switch from a standard sensitivity to a high sensitivity without an increase in noise or other artifacts.

On the VariCams, dual native ISO has allowed cinematographers to use less light on set, saving time and money, as well as allowing for a great variety of artistic choices. The EVA1 will include dual native ISO, but the camera is currently being tested to determine final ISO specifications.

The ability to capture accurate colors and rich skin tones is a must for any filmmaker. Like the VariCam lineup of cinema cameras, the EVA1 contains V-Log/V-Gamut capture to deliver high dynamic range and broad colors. V-Log has log curve characteristics that are reminiscent of negative film and V-Gamut delivers a color space even larger than film. The EVA1 will also import the colorimetry of the VariCam line.

Weighing only 2.65 pounds (body only) with a compact form factor (6.69” x 5.31” x 5.23”) and a removable hand-grip, the EVA1 can be used for efficient handheld shooting and can also be mounted on a drone, gimbal rig or jib arm for complex yet smooth camera moves. There will also be numerous mounting points and Panasonic is currently working with top accessory makers to allow further customization with the EVA1.

Also suited for indie filmmakers, the EVA1 records to lower-cost SD cards. The camera can record in several formats and compression rates and offers up to 10-bit 4:2:2, even in 4K. For high-speed capture, the EVA1 offers 2K up to 240fps. In terms of bitrates, you can record up to 400Mbps for robust recording. A complete breakdown of recording formats will be available at the time of the EVA1’s release this fall.

In terms of lenses, the camera uses a native EF-mount, allowing shooters access to the broad EF lens ecosystem, including dozens of cinema-style prime and zoom lenses from numerous manufacturers. Electronic Image Stabilization (EIS) is employed to compensate for camera shake and blurring, which will help smooth out handheld or shoulder-mount shots on documentary or run-and-gun projects. Behind the lens mount, an integrated ND filter wheel in 2, 4 and 6 stops allows for precise exposure control. The EVA1 also allows the IR Cut filter to be swung out of the path to the sensor at the push of a button. Photographic effects and night vision imagery are possible with this control over infrared.
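As an aside on those ND ratings: each neutral-density stop halves the light reaching the sensor, so an n-stop ND passes 1/2^n of the incoming light. A quick sketch of what the EVA1's 2-, 4- and 6-stop positions imply:

```python
# Each ND "stop" halves the light reaching the sensor, so an
# n-stop ND transmits 1/2**n of the incoming light.

def nd_transmission(stops: int) -> float:
    """Fraction of light an ND filter of the given stop count passes."""
    return 1.0 / (2 ** stops)

for stops in (2, 4, 6):  # the EVA1's internal ND wheel positions
    print(f"{stops}-stop ND passes 1/{2 ** stops} of the light")
```

So the wheel's three positions cut the light to 1/4, 1/16 and 1/64 respectively, which is what lets a shooter hold a wide aperture in bright exteriors without touching shutter angle or ISO.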

The EVA1 offers dual balanced XLR audio inputs and 4K-capable video outputs in both HDMI and SDI. In a future firmware upgrade, the EVA1 will offer 5.7K RAW output to third-party recorders.

The EVA1 will ship for just under $8,000 (body only).

 


Sony’s offerings at NAB

By Daniel Rodriguez

Sony has always been a company that prioritizes and implements the requests of its customers, constantly innovating across all aspects of production — from initial capture to display. At NAB 2017, Sony’s goal was to build on benchmarks the company has set in the past few months.

To reflect its focus as a company, Sony’s NAB booth concentrated on four areas: image capture, media solutions, IP Live and HDR (high dynamic range). The aim was to demonstrate Sony’s ability to anticipate future demands in capture and distribution while introducing firmware updates to many of its existing products to complement those demands.

Cameras
Since Sony provides customers and clients with a path from capture to delivery, it’s natural to start with what’s new for imaging. Having already tackled the prosumer market with its introduction of the a7sii, a7rii, FS5 and FS7ii, and firmly established its presence in the cinema camera line with the Sony F5, F55 and F65, it’s natural that Sony’s immediate steps weren’t to follow up on these models so soon, but rather introduce models that fit more specific needs and situations.

The newest Sony camera introduced at NAB was the UMC-S3CA. Sporting the extremely popular sensor from the a7sii, the UMC-S3CA is a 4K interchangeable-lens E-mount camera that is much smaller than its sensor sibling. Its genlock ability allows users to monitor, operate and sync many cameras at a time, something extremely promising for emerging media like VR and 360 video. It boasts an incredible ISO range of 100-409,600 and internal 4K UHD recording at 23.98p, 25p and 29.97p in 100Mbps and 60Mbps modes. The small size of this camera is promising for those who love the a7sii but want to employ it in more specific cases, such as crash cams, drones, cranes and sliders.

To complement its current camera line, Sony has released an updated version of its DVF-EL100 electronic viewfinder — the DVF-EL200 (pictured) — which boasts a full 1920x1080 resolution image and is about twice as bright as the previous model. Much like updated versions of Sony’s cameras, this viewfinder’s ergonomics reflect the vast input from users of the previous model, something the company prides itself on. (Our main image shows the F55 with the DVF-EL200 viewfinder.)

Just because Sony is introducing new products doesn’t mean it has forgotten about older ones, especially in its camera lines. From prosumer models like the Sony PXW-Z150 and PXW-FS5 to professional cinema cameras such as the PMW-F5 and PMW-F55, all are receiving firmware updates in July 2017.

The most notable firmware update for the Z150 will be its ability to capture images in HLG (Hybrid Log Gamma) to support an easier HDR capture workflow. The FS5 will also gain HLG capture, in addition to the ability to change the native ISO from 2000 to 3200 when shooting in SLog2 or SLog3, plus 120fps capture at 1080p full HD. While many consider the F65 to be Sony’s flagship camera, some consider the F55 to be the more industry-friendly of Sony’s cinema cameras, and Sony backs that up by increasing its high-frame-rate capture in a new firmware update, which will allow the F55 to record at 72, 75, 90, 96 and 100fps in 4K RAW and in the company’s new compressed Extended Original Camera Negative (X-OCN) format.

X-OCN
Sony’s new X-OCN codec continues to be a highlight of the company’s developments, boasting an incredible 16-bit depth despite being compressed, while remaining virtually indistinguishable from Sony’s own RAW format. Thanks to its compression, file sizes are roughly 50 percent smaller than 2K 4:3 ARRIRAW and 4K ProRes 4444 XQ, and 30 percent smaller than F55 RAW. It’s considered an optimal format for capturing HDR content. With cameras like the F5 and F55, and smaller alternatives like the FS7 and FS7II, allowing RAW recording, Sony is offering a nearly indistinguishable alternative that cuts down on storage space and allows more recording time on set.

Speed and Storage
As Sony continues to increase its support for HDR and larger resolutions like 8K, it’s easy to consider the emergence of X-OCN as an introduction of what to expect from Sony in the future.

Despite the introduction of X-OCN being the company’s answer to the large file sizes of RAW shooting, Sony still maintains a firm understanding of the need for storage and the read/write speeds that come with such innovations. To that end, Sony has introduced the AXS-AR1 AXS memory and SxS Thunderbolt card reader. Using a Thunderbolt 2 connector, which can be daisy-chained since the reader has two ports, it offers a theoretical transfer speed of approximately 9.6Gbps, or 1,200MBps. Supporting SxS cards and Sony’s new AXS cards, it could download an hour’s worth of true 4K footage at 24fps, shot in X-OCN, in only about 2.5 minutes.
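Those reader figures are internally consistent, as a quick check shows. Note that the 180GB clip size below is inferred from the article's own numbers, not a published Sony spec:

```python
# Sanity check of the AXS-AR1 figures: 9.6 Gbps works out to
# 1,200 MB/s (9.6e9 bits / 8 bits-per-byte / 1e6 bytes-per-MB).
# The ~180 GB hour of X-OCN footage is inferred from the quoted
# "about 2.5 minutes" transfer time, not a Sony spec.

reader_gbps = 9.6
mb_per_s = reader_gbps * 1000 / 8          # bits -> bytes, Gb -> MB
footage_gb = 180                           # assumed hour of X-OCN 4K 24fps
transfer_min = footage_gb * 1000 / mb_per_s / 60
print(mb_per_s, transfer_min)
```

At 1,200MB/s, 180GB moves in 150 seconds, which matches the quoted 2.5-minute figure.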

To complement these leaps in storage space and read/write speeds, Sony’s Optical Disc Archive Generation 2 is designed as an optic disc-based storage media with expandable robotic libraries called PetaSites, which through the use of 3.3TB Optical Disc Archive Cartridges guarantee a staggering 100-year shelf life. Unlike LTOs, which are generally only used a handful of times for storing and retrieving, Sony’s optical discs can be quickly and randomly accessed as needed.

HDR
HDR continues to gain traction in the world of broadcast and cinema. From capture to monitoring, the introduction of HDR has spurred many companies to implement new ways to create, monitor, display and distribute HDR content. As mentioned earlier, Sony is implementing firmware updates in many of its cameras to allow internal HLG, or Instant HDR, capture without the need for color grading, as well as compressed X-OCN RAW recording to allow more complex HDR grading to be possible without the massive amounts of data that uncompressed RAW takes up.

HDR gamma displays can now be monitored on screens like the Sony FS5’s, as well as on higher-end displays such as Sony’s BVM E171, BVM X300/2 and PVM X550.

IP Live
What stood out about Sony’s mission with HDR is the push to further implement its use in realtime, non-fiction content and broadcasts, like sporting events, through IP Live. The goal is to offer instantaneous conversions that output media not only in 4K HDR and SDR but also in full HD HDR and SDR at the same time. With its SR Live System, Sony hopes to update its camera lines with HLG to provide instant HDR, which can be processed through its HDRC-4000 converters. As the company’s business model states, Sony’s goal is to offer full support throughout the production process, which has led to the introduction of XDCAM Air, an ENG-based cloud service that addresses the growing need for speed to air. XDCAM Air will launch in June 2017.

Managing Files
To round out its production through delivery goals, Sony continues with Media Backbone Navigator X, which is designed to be an online content storage and management solution to ease the work between capture and delivery. It accepts nearly any file type and allows multiple users to easily search for keywords and even phrases spoken in videos while being able to stream in realtime speeds.

Media Backbone Navigator X is designed for productions that create an environment of constant back and forth and will eliminate any excessive deliberation when figuring out storage and distribution of materials.

Sony’s goal at NAB wasn’t to shock or awe but rather to build on an established foundation for current and new clients and customers who are readying for an ever-changing production environment. For Sony, this year’s NAB could be considered preparation for the “upcoming storm” as firmware updates roll out more support for promising formats like HDR.


Daniel Rodriguez is a New York-based cinematographer, photographer and director. Follow him on Instagram: https://www.instagram.com/realdanrodriguez.


DP John Kelleran shoots Hotel Impossible

Director of photography John Kelleran shot season eight of the Travel Channel’s Hotel Impossible, a reality show in which struggling hotels receive an extensive makeover by veteran hotel operator and hospitality expert Anthony Melchiorri and team.

Kelleran, who has more than two decades of experience shooting reality/documentary projects, called on Panasonic VariCam LT 4K cinema camcorders for this series.

Working for New York production company Atlas Media, Kelleran shot a dozen hour-long Hotel Impossible episodes in locations that include Palm Springs, Fire Island, Cape May, Cape Hatteras, Sandusky, Ohio, and Albany, New York. The production, which began last April and wrapped in December 2016, spent five days in each location.

Kelleran liked the VariCam LT’s dual native ISOs of 800/5000. “I tested ISO5000 by shooting in my own basement at night, and had my son illuminated only by a lighter and whatever light was coming through the small basement window, one foot candle at best. The footage showed spectacular light on the boy.”

Kelleran regularly deployed ISO5000 on each episode. “The crux of the show is chasing out problems in dark corners and corridors, which we were able to do like never before. The LT’s extreme low light handling allowed us to work in dark rooms with only motivated light sources like lamps and windows, and absolutely keep the honesty of the narrative.”

Atlas Media is handling the edit, using Avid Media Composer. “We gave post such a solid image that they had to spend very little time or money on color correction, but could rather devote resources to graphics, sound design and more,” concludes Kelleran.


Building a workflow for The Great Wall

Bling Digital, which is part of the SIM Group, was called on to help establish the workflow on Legendary/Universal’s The Great Wall, starring Matt Damon as a European mercenary imprisoned within the wall. While being held he sees exactly why the Chinese built this massive barrier in the first place — and it’s otherworldly. This VFX-heavy mystery/fantasy was directed by Yimou Zhang.

We reached out to Bling’s director of workflow services, Jesse Korosi, to talk us through the process on the film, including working with data from the Arri 65, which at that point hadn’t yet been used on a full-length feature film. Bling Digital is a post technology and services provider that specializes in on-set data management, digital dailies, editorial system rentals and data archiving.

Jesse Korosi

When did you first get involved on The Great Wall and in what capacity?
In December 2014, Bling received its first call from unit production manager Kwame Parker about providing on-set data management, dailies, VFX and stereo pulls, Avid rentals and a customized digital workflow for The Great Wall.

At this time the information was pretty vague, but outlined some of the bigger challenges, like the film being shot in multiple locations within China, and that the Arri 65 camera may be used, which had not yet been used on a full-length feature. From this point on I worked with our internal team to figure out exactly how we would tackle such a challenge. This also required a lot of communication with the software developers to ensure that they would be ready to provide updated builds that could support this new camera.

We then had talks with DP Stuart Dryburgh, the studio and a few other members of production. A big part of my job, and that of anyone on my workflow team, is to get involved as early as possible; our role doesn’t necessarily start on day one of principal photography. We want to get in and start testing and communicating with the rest of the crew well ahead of time so that by the first day the process runs like a well-oiled machine and the client never has to be concerned with “week-one kinks.”

Why did they opt for the Arri 65 camera and what were some of the challenges you encountered?
Many people we work with love Arri. The cameras are known for recording beautiful images. Anyone who isn’t a huge Arri fan might dislike the lower resolution of some of the cameras, but it is very uncommon for someone not to like the final look of the recorded files. Enter the Arri 65, a new camera that records 6.5K files (6560×3100) at a whopping 2.8TB per hour.

When dealing with this kind of data consumption, you really need to re-evaluate your pipeline. The cards can’t be downloaded with traditional card readers — you need to use vaults. Let’s say someone records three hours of footage in a day — that equals 8.7TB of data. If you’re sending that info to another facility, even using a 500Mb/s Internet line, it would take 38 hours to send! LTO-ing this kind of media is also dreadfully slow. For The Great Wall we ended up setting up a dedicated LTO area that had eight decks running at any given time.
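Korosi’s transfer-time figure is easy to sanity-check. A quick back-of-envelope sketch, assuming decimal storage units (as camera and drive vendors rate them):

```python
# Rough check on the transfer time quoted above:
# ~8.7TB of Arri 65 footage sent over a 500Mb/s line.
TB = 1e12                       # decimal terabyte, in bytes
footage_bytes = 8.7 * TB        # three hours of Arri 65 footage
line_bps = 500e6                # 500Mb/s in bits per second

seconds = footage_bytes * 8 / line_bps   # bytes -> bits, then divide by line rate
hours = seconds / 3600
print(round(hours, 1))          # about 38.7 — the "38 hours" cited above
```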

Aside from data consumption, we faced the challenge of having no dailies software that could even read the files. We worked with Colorfront to get a new build-out that could work, and luckily, after having been through this same ordeal recording Arri Open Gate on Warcraft, we knew how to make this happen and set the client at ease.

Were you on set? Near set? Remote?
Our lab was located in the production office, which also housed editorial. Considering all of the traveling this job entailed, from Beijing and Qingdao to Gansu, we were mostly working remotely. We wanted to be as close to production as possible, but still within a controlled environment.

The dailies set-up was right beside editor Craig Wood’s suite, making for a close-knit workflow with editorial, which was great. Craig would often pull our dailies team into his suite to view how the edit was coming along, which really helped when assessing how the dailies color was working and referencing scenes in the cut when timing pickup shots.

How did you work with the director and DP?
At the start of the show we established some looks with the DP Stuart Dryburgh, ASC. The idea was that we would handle all of the dailies color in the lab. The DIT/DMT would note as much valuable information on set about the conditions that day and we would use our best judgment to fulfill the intended look. During pre-production we used a theatre at the China Film Group studio to screen and review all the test materials and dial in this look.

With our team involved from the very beginning of these color talks, we were able to ensure that decisions made on color and data flow would track through each department, all the way to the end of the job. It’s very common for color decisions made at the start of a job to get lost in the shuffle once production has wrapped. Plus, sometimes there isn’t anyone available in the post stage who recognizes why certain decisions were made up front.

Can you talk us through the workflow? 
In terms of workflow, the Arri 65 recorded media onto Codex cards, which were backed up on set with a Vault S. After a card was backed up, it was forwarded on to the lab, where we had a Vault XL that backed the card up to its internal drive. Unfortunately, you can’t go directly from the card to your working drive; you need to do two separate passes on the card, a “Process” and a “Transfer.”

The Transfer moves the media off the card and onto an internal drive on the Vault. The Process then converts all the native camera files into .ARI files. Once this media was processed and on the internal drive, we were able to move it onto our SAN. From there we could run the footage through OSD and make LTO back-ups. We also made additional back-ups to G-Tech G-Speed Studio drives that would be sent back to LA. However, for security purposes as well as efficiency, we encrypted and shipped the bare drives rather than the entire chassis. This meant that when the drives were received in LA, we were able to mount them into our dock and work directly off of them, i.e., no need to wait on any copies.

Another thing that required a lot of back and forth with the DI facility was ensuring that our color pipeline followed the same path they would take once they hit final color. We ended up having input LUTs for any camera that recorded a non-LogC color space. As for my involvement, during production in China I had a few members of my team on the ground and oversaw things remotely. Once things came back to LA and we were working out of Legendary, I became much more hands-on.

What kind of challenges did providing offline editorial services in China bring, and how did that transition back to LA?
We sent a tech to China to handle the set-up of the offline editorial suites and also had local contacts to assist during the run of the project. Our dailies technicians also helped with certain questions or concerns that came up.

Shipping gear for the Avids is one thing, but shipping consoles (desks) for the editors would have been far too heavy. This was probably one of the bigger challenges: ensuring the editors were working with the same caliber of workspace they were used to in Los Angeles.

The transition of editorial from China to LA required Dave French, director of post engineering, and his team to mirror the China set-up in LA and have both up and running at the same time to streamline the process. Essentially, the editors needed to stop cutting in China and have the ability to jump on a plane and resume cutting in LA immediately.

Once back in LA, you continued to support VFX, stereo and editorial, correct?
Within the Legendary office we played a major role in building out the technology and workflow behind what was referred to as the Post Hub. This Post Hub was made up of a few different systems all KVM’d into one desk that acted as the control center for VFX and stereo reviews, VFX and stereo pulls and final stereo tweaks. All of this work was controlled by Rachel McIntire, our dailies, VFX and stereo management tech. She was a jack-of-all-trades who played a huge role in making the post workflow so successful.

For the VFX reviews, Rachel and I worked closely with ILM to develop a workflow to ensure that all of the original on-set/dailies color metadata would carry into the offline edit from the VFX vendors. It was imperative that during this editing session we could add or remove the color, make adjustments and match exactly what they saw on set, in dailies and in the offline edit. Automating this process through values from the VFX editor’s EDL was key.

Looking back on the work provided, what would you have done differently knowing what you know now?
I think the area I would focus on next time around would be upgrading the jobs database. With any job we manage at Bling, we always keep a log of every file recorded and any metadata we track. At the time, this was a little weak. Since then, I have been working on overhauling this database so that creatives can access all camera metadata, script metadata, location data, lens data, etc. in one centralized location. We have just used this on our first job in a client-facing capacity, and I think it would have done wonders for our VFX and stereo crews on The Great Wall. All too often, people are digging around for information already captured by someone else. I want to make sure there is a central repository for that data.


Creating the color of Hacksaw Ridge

Australian colorist Trish Cahill first got involved in the DI on Mel Gibson’s Hacksaw Ridge when cinematographer Simon Duggan enquired about her interest and availability for the film. She didn’t have to consider the idea long before saying yes.

Hacksaw Ridge, which earned Oscar nominations for Best Picture, Director, Lead Actor, Film Editing (won), Sound Editing and Sound Mixing (won), is about real-life World War II conscientious objector Desmond Doss, who refused to pick up a gun but instead used his bravery to save lives on the battlefield.

Trish Cahill

Let’s find out more about Cahill’s work and workflow on Hacksaw Ridge.

What was the collaboration like between you and director Mel Gibson and cinematographer Simon Duggan?
I first met Mel and the editor John Gilbert when I visited them in the cutting room halfway through the edit. We looked through the various scenes, the different battle sequences in particular, and discussed the different tone that was needed for each.

Simon had already talked through the Kodachrome idea with a gradual and subtle desaturation as the film progressed and it was very helpful to be spinning through the actual images and listening to Mel and John talk through their thoughts. We then chose a collection of shots that were representative of the different looks and turning points in the film to use in a look development session.

Simon was overseas at the time, but we had a few phone conversations and he sent through some reference stills prior to the session. The look development session not only gave us our look template for the film, it also gave us a better idea of how smoke continuity was shaping up and what could be done in the grade to help.

During the DI, Mel, John and producer Bill Mechanic came in to see my work every couple of days for a few hours to review spool-downs. Once the film was in good shape, Simon flew in with a nice fresh eye to help tighten it further.

What was the workflow for this project?
Being a war film, there are quite a few bullet hits, blood splatter, smoke elements and various other VFX to be completed across a large number of shots. One of the main concerns was the consistency of smoke levels, so it was important that the VFX team had a balanced set of shots put into sequence reflecting how they would appear in the film.

While the edit was still evolving, the film was conformed and assistant colorist Justin Tran started a balance grade of the war sequences on FilmLight Baselight at Definition Films. This gave VFX supervisor Chris Godfrey and the rest of the team a better idea of how each shot should be treated in relation to the shots around it, and whether additional treatment was required for shots not earmarked for VFX. The balance grading work was carried across to the DI grade in the form of BLGs and applied to the final edit using Baselight’s multi-paste, so I had full control and nothing was baked in.

Was there a particular inspiration or reference that you used for the look of this film?
Simon sent through a collection of vintage photographic references from the era to get me started. There were shots of old ox-blood-red barns, mechanics and machinery, train yards and soldiers in uniform — a visual board of everyday pictures of real scenes from the 1930s and 1940s, which was an excellent starting point to spring from. Key words were “desaturated” and “Kodachrome,” and the phrase “twist the primaries a touch” was used a bit!

The film starts when our hero, Desmond Doss, is a boy in the 1930s. These scenes have a slight chocolaty sepia tone, which lessens when Doss becomes a young man and enters the military training camp. Colors become more desaturated again when he arrives in Okinawa and then climbs the ridge. We wanted the ridge to be a world unto itself — the desolate battlefield. Each battle from there occurs at different times of day in different environmental conditions, so each has been given its own color variation.

What were the main challenges in grading such a film?
Hacksaw Ridge is a war film. A big percentage of screen time is action-packed and fast-paced with a high cut ratio. So there are many more shots to grade, varied cameras to balance and fluctuating smoke levels to figure out. It’s more challenging to keep consistency in this type of film than in the average drama.

The initial attack on top of the ridge happens just after an aerial bombing raid, and it was important to the story for the grade to help the smoke enhance a sense of vulnerability and danger. We needed to keep visibility as low as possible, but at the same time we wanted it still to be interesting and foreboding. It needed analysis at an individual shot level: what can be done on this particular image to keep it interesting and tonal but still have the audience feel a sense of “I can’t see anything.”

Then on a global level — after making each shot as tonal and interesting as possible — do we still have the murkiness we need to sell the vulnerability and danger? If not, where is the balance to still provide enough visual interest and definition to keep the audience in the moment?

What part of the grading process do you spend most of your time on?
I would say I spend more time on the balancing and initial grade. I like to keep my look in a layer at the end of the stack that stays constant for every shot in the scene. If you have done a good job matching up, you have the opportunity of being able to continue to craft the look as well as add secondaries and global improvements with confidence that you’re not upsetting the apple cart. It gives you better flexibility to change your mind or keep improving as the film evolves and as your instincts sharpen on where the color mood needs to sit. I believe tightening the match and improving each shot on the primary level is time very well spent.

What was the film shot on, and did this bring any challenges or opportunities to you during the grade?
The majority of Hacksaw Ridge was shot with an Arri Alexa. Red Dragon and Blackmagic pocket cameras were also used in the battle sequences. Whenever possible I worked with the original camera raw. I worked in LogC and used Baselight’s generalized color space to normalize the Red and Blackmagic cameras to match this.

Matching the flames between Blackmagic and Alexa footage was a little tricky. The color hues and dynamic range captured by each camera are quite different, so I used the hue shift controls often to twist the reds and yellows of each closer together. Also, on some shots I had several highlight keys in place to create as much dynamic range as possible.

Could you say more about how you dealt with delivering for multiple formats?
The main deliverables required for Hacksaw Ridge were an XYZ and a Rec709 version. Baselight’s generalized color space was used to do the conversions from P3 to XYZ and Rec709. I then made minimal tweaks for the Rec709 version.

Was there a specific scene or sequence you found particularly enjoyable or challenging?
I enjoyed working with the opening scene of the film, enhancing the golden warmth as the boys are walking through the forest in Virginia. The scenes within the Doss house were also a favorite. The art direction and lighting had a beautiful warmth, and I really enjoyed bringing out the chocolaty 1930s and 1940s tones.

On the flip side, I also loved working with the cooler, crisper dawn tones that we achieved in the second battle sequence. I find that when you minimize the color palette and let the contrast and light do the tonal work, it can take you to a unique and emotionally amplified place.

One of the greater challenges of grading the film was eliminating any hint of green plant life throughout the Okinawa scenes. With lush, green plants happily existing in the background, we were in danger of losing the audience’s belief that this was a bleak place. Unfortunately, the WW II US military uniforms were the same shade of green found in many parts of the surrounding landscape of the location, making it impossible to get a clean key. There is one scene in particular where a convoy of military trucks rolls through a column of soldiers adding clouds of dust to an already challenging situation.


25 Million Reasons to Smile: When a short film is more than a short

By Randi Altman

For UK-based father and son Paul and Josh Butterworth, working together on the short film 25 Million Reasons to Smile was a chance for both of them to show off their respective talents — Paul as an actor/producer and Josh as an aspiring filmmaker.

The film features two old friends, and literal partners in crime, who get together to enjoy the spoils of their labors after serving time in prison. After so many years apart, they are now able to explore a different and more intimate side of their relationship.

In addition to writing the piece, Josh served as DP and director, calling on his Canon 700D for the shoot. “I bought him that camera when he started film school in Manchester,” says Paul.

Josh and Paul Butterworth

The film stars Paul Butterworth (The Full Monty) and actor/dialect/voice coach Jon Sperry as the thieves who are filled with regret and hope. 25 Million Reasons to Smile was shot in Southern California, over the course of one day.

We reached out to the filmmakers to find out why they shot the short film, what they learned and how it was received.

With tools becoming more affordable these days, making a short is now an attainable goal. What are the benefits of creating something like 25 Million Reasons to Smile?
Josh: It’s wonderful. Young and old aspiring filmmakers alike are so lucky to have the ability to make short films. This can lead to issues, however, because people can lose sight of what is important: character and story. What was so good about making 25 Million was the simplicity. One room, two brilliant actors, a cracking story and a camera is all you really need.

What about the edit?
Paul: We had one hour and six minutes (a full day’s filming) to edit down to about six minutes, which we were told was a day’s work. An experienced editor starts at £500 a day, which would have been half our total budget in one bite! I budgeted £200 for edit, £100 for color grade and £100 for workflow.

At £200 a day, you’re looking at editors with very little experience, usually no professional broadcast work, often no show reel… so I took a risk and went with Harry Baker, who had a couple of shorts in good festivals. Josh provided a lot of notes on the story, and Harry went from there. And crucial cuts, like staying off the painting as long as possible and cutting to the outside of the cabin for the final lines — those ideas came from our executive producer Ivana Massetti, who was brilliant.

How did you work with the colorist on the look of the film?
Josh: I had a certain image in my head of getting as much light as possible into the room to show the beautiful painting in all its glory. When the colorist, Abhishek Hans, took the film, I gave him the freedom to do what he thought was best, and I was extremely happy with the results. He used Adobe Premiere Pro for the grade.

Paul: Josh was DP and director, so on the day he just shot the best shots he could using natural light — we didn’t have lights or a crew, not even a reflector. He just moved the actors round in the available light. Luckily, we had a brilliant white wall just a few feet away from the window and a great big Venice Beach sun, which flooded the room with light. The white walls bounced light everywhere.

The colorist gave Josh a page of notes on how he envisioned the color grade — different palettes for each character, how he’d favor the dominant character in a two-shot, and changing the color mood from beginning to end as the character arc resolved and the film went from heist to relationship movie.

What about the audio?
Paul: I insisted Josh hire a professional Røde microphone and a Tascam recorder from his university. This actually saved the shoot: we didn’t have a sound person on the boom, so the recorder wasn’t turned up… and we also swiveled the microphone rather than moving it between actors, so one voice had reverb while the other didn’t.

The sound was unusable (too low), but since the gear was so good, sound designer Matt Snowden was able to boost it in post to broadcast standard without distortion. Sadly, he couldn’t do anything about the reverb.

Can you comment on the score?
Paul: A BAFTA mate of mine, composer David Poore, offered to do the music for free. It was wonderful, and he was so professional. Dave already had a really good hold on the project, as we’d had long chats, but he took Josh’s notes and we ended up with a truly beautiful score.

Was the script followed to the letter? Any improvisations?
Josh: No, not quite. Paul and Jon were great, and certainly added a lot to the dialogue through conversations before and during the shoot. Jon, especially, was very helpful in Americanizing his character, Jackson’s, dialogue.

Paul: Josh spent a long time on the script and worked on every word. We had script meetings at various LA cafes and table reads with me and Jon. On the shoot day, it was as written.

Josh ended up cutting one of my lines in the edit as it wasn’t entirely necessary, and the reverb was bad. It tightened it up. And our original ending had our hands touching on the bottle, but it didn’t look right so Josh went with the executive producer’s idea of going to the cabin.

What are the benefits of creating something like 25 Million Reasons to Smile?
Paul: Wow! The benefits are amazing… as an actor, I never realized the full scope of the process. The filming is actually a tiny proportion of it. It gave me the whole picture (I’m now in awe of how hard producers work, and that’s only after playing at it!) and showed how much of a team effort it is — how the direction, edit, sound design and color grade can rewrite the film. I can now appreciate how the actor doesn’t see the bigger picture and has no control over any of those elements. They are (rightly) fully immersed in their character, which is exactly what the actor’s role is: to turn up and do the lines.

I got a beautiful paid short film out of it, current footage for my show reel and a fantastic TV job — I was cast by Charles Sturridge in the new J.K. Rowling BBC1/HBO series Cormoran Strike as the dad of the female lead, Robin (Holliday Grainger). I’d had a few years out bringing Josh up and getting him into film school. I relaunched when he went to university, but my agent said I needed a current credit, as the career gap was causing casting directors problems. So I decided to take control and make my own footage — but it had to stand up on my show reel against clips like The Full Monty. If it wasn’t going to be broadcast-standard technically, then it had to have something in the script, and my acting (and my fellow actor) had to show that I could still do the job.

Josh met a producer in LA who’s given him runner work over here in England, and a senior producer with an international film company saw this and has given him an introduction to their people in Manchester. He also got a chance to write and direct a non-student short using industry professionals, which in the “real” world he might not get for years. And it came with real money and real consequences.

Josh, what did you learn from this experience from a filmmaker’s point of view?
Josh: More hands on deck is never a bad thing! It’s great having a tight-knit cast and crew, but the shoot would definitely have benefited from more people to help with lighting and sound, and the process would have run smoother overall.

Any surprises pop up? Any challenges?
Josh: The shoot actually ran very smoothly. The one challenge we had to face was time. Every shot took longer than expected, and we nearly ran out of time but got everything we needed in the end. It helped having such professional and patient actors.

Paul: I was surprised how well Josh (at 20 years old and at the start of film school) directed two professional middle-aged actors. Especially as one was his dad… and I was surprised by how filmic his script was.

Any tips for those looking to do something similar?
Josh: Once you have a story, find some good actors and just do it. As I said before, keep it simple and try to use character not plot to create drama.

Paul: Yes, my big tip would be to get the script right. Spend time and money on that and don’t film it till it’s ready. Get professional help/mentoring if you can. Secondly, use professional actors — just ask! You’d be surprised how many actors will take a project if the script and director are good. Of course, you need to pay them (not the full rate, but something).

Finally, don’t worry too much about the capture — as a producer said to me, “If I like a project I can buy in talent behind the camera. In a short I’m looking for a director’s voice and talent.”

Dog in the Night director/DP Fletcher Wolfe

By Cory Choy

Silver Sound Showdown Music + Video Festival is unique in two ways. First, it is both a music video festival and battle of the bands at the same time. Second, every year we pair up the Grand Prize-winners, director and band, and produce a music video with them. The budget is determined by the festival’s ticket sales.

I conceived of the festival, which is held each year at Brooklyn Bowl, as a way to both celebrate and promote artistic collaboration between the film and music communities — two crowds that just don’t seem to intersect often enough. One of the most exciting things for me is then working with extremely talented filmmakers and musicians who have more often than not met for the first time at our festival.

Dog in the Night (song written by winning band Side Saddle) was one of our most ambitious videos to date — using a combination of practical and post effects. It was meticulously planned and executed by director/cinematographer Fletcher Wolfe, who was not only a pleasure to work with, but was gracious enough to sit down with me for a discussion about her process and the experience of collaborating.

What was your favorite part of making Dog in the Night?
As a music video director I consider it my first responsibility to get to know the song and its meaning very intimately. This was a great opportunity to stretch that muscle, as it was the first time I was collaborating with musicians who weren’t already close friends. In fact, I hadn’t even met them before the Showdown. I found it to be a very rewarding experience.

What is Dog in the Night about?
The song Dog in the Night is, quite simply, about a time when the singer Ian (a.k.a. Angler Boy) is enamored with a good friend, but that friend doesn’t share his romantic feelings. Of course, anyone who has been in that position (all of us?) knows that it’s never that simple. You can hear him holding out hope, choosing to float between friendship and possibly dating, and torturing himself in the process.

I decided to use dusk in the city to convey that liminal space between relationship labels. I also wanted to play on the nervous and lonely tenor of the track with images of Angler Boy surrounded by darkness, isolated in the pool of light coming from the lure on his head. I had the notion of an anglerfish roaming aimlessly in an abyss, hoping that another angler would find his light and end his loneliness. The ghastly head also shows that he doesn’t feel like he has anything in common with anybody around him except the girl he’s pining after, who he envisions having the same unusual head.

What did you shoot on?
I am a DP by trade, and always shoot the music videos I direct. It’s all one visual storytelling job to me. I shot on my Alexa Mini with a set of Zeiss Standard Speed lenses. We used the 16mm lens on the Snorricam in order to see the darkness around him and to distort him to accentuate his frantic wanderings. Every lens in the set weighed in at just 1.25lbs, which is amazing.

The camera and lenses were an ideal pairing, as I love the look of both, and their light weight allowed me to get the rig down to 11lbs in order to get the Snorricam shots. We didn’t have time to build our own custom Snorricam vest, so I found one that was ready to rent at Du-All Camera. The only caveats were that it could only handle up to 11lbs, and the vest was quite large, meaning we needed to find a way to hide the shoulders of the vest under Ian’s wardrobe. So, I took a cue from Requiem for a Dream and used winter clothing to hide the bulky vest. We chose a green and brown puffy vest that held its own shape over the rig-vest, and also suited the character.

I chose a non-standard 1.5:1 aspect ratio because I felt it suited framing for the anglerfish head. To maximize resolution and minimize data, I shot 3.2K at a 1.78:1 aspect ratio and cropped the sides. It’s easy to build custom framelines in the Alexa Mini for accurate framing on set. On the Mini, you can also dial in any frame rate between 0.75fps and 60fps (at 3.2K). Thanks to digital cinema cameras, it’s standard these days to overcrank and have the ability to ramp to slow motion in post. We did do some of that; each time Angler Boy sees Angler Girl, his world turns into slow motion.
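The crop Wolfe describes works out neatly. A small sketch, assuming the Mini’s 3.2K mode records a 3200×1800 frame (the nominal 1.78:1):

```python
# Cropping a 1.78:1 3.2K frame down to a 1.5:1 delivery ratio.
rec_w, rec_h = 3200, 1800          # assumed 3.2K recording dimensions
target_ar = 1.5                    # the non-standard 1.5:1 aspect ratio

crop_w = int(rec_h * target_ar)    # full height is kept, only the sides go
side_trim = (rec_w - crop_w) // 2  # pixels discarded from each side
print(f"{crop_w}x{rec_h}, {side_trim}px trimmed per side")  # 2700x1800, 250px
```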

In contrast, I wanted his walking around alone to be more frantic, so I did something much less common and undercranked to get a jittery effect. The opening shot was shot at 6fps with a 45-degree shutter, and Ian walked in slow motion to a recording of the track slowed down to quarter-time, so his steps are on the beat. There are some Snorricam shots that were shot at 6fps with a standard 180-degree shutter. I then had Ian spin around to get long motion blur trails of lights around him. I knew exactly what frame rate I wanted for each shot, and we wound up shooting at 6fps, 12fps, 24fps, 48fps and 60fps, each for a different emotion that Angler Boy is having.
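The undercranking math is worth spelling out. A sketch, assuming 24fps playback: at 6fps the action plays back four times faster, which is why Ian walking at quarter-speed to a quarter-time playback of the track lands his steps back on the beat; and a 45-degree shutter at 6fps happens to give the same 1/48s per-frame exposure as the standard 180 degrees at 24fps.

```python
# Undercrank arithmetic for the shots described above (24fps playback assumed).
def speedup(capture_fps: float, playback_fps: float = 24.0) -> float:
    """How many times faster the action appears when played back."""
    return playback_fps / capture_fps

def exposure_time(fps: float, shutter_degrees: float) -> float:
    """Per-frame exposure in seconds for a given rotary-shutter angle."""
    return (shutter_degrees / 360.0) / fps

print(speedup(6))              # 4.0 — quarter-speed action plays at normal speed
print(exposure_time(6, 45))    # 1/48s at 6fps with a 45-degree shutter
print(exposure_time(24, 180))  # 1/48s — the standard 180-degree, 24fps exposure
```

Matching the normal 1/48s exposure means each undercranked frame carries ordinary motion blur, while the 180-degree 6fps shots quoted above hold the shutter open eight times longer, producing the long light trails Wolfe mentions.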

Why practical vs. CG for the head?
Even though the fish head is a metaphor for Angler Boy’s emotional state, and is not supposed to be real, I wanted it to absolutely feel real to both the actor and the audience. A practical, and slightly unwieldy, helmet/mask helped Ian find his character. His isolation needed to feel tangible, and his attraction to Angler Girl as a kindred spirit needed to be moving. It’s a very endearing and relatable song, and there’s something about homemade, practical effects that checks both those boxes. The lonely pool of light coming from the lure was also an important part of the visuals, and it needed to play naturally on their faces and the fish mask. I wired LiteGear LEDs into the head, which was the easy part. Our incredibly talented fabricator, Lauren Genutis, had the tough job — fabricating the mask from scratch!

The remaining VFX hurdle was duplicating the head. We only had the time and money to make one, so we fit it to both actors with foam inserts. I planned the shots so that you almost never see both actors in the same shot at the same time, which kept the number of composited shots to a minimum. It also served to maintain the emotional disconnect between his reality and hers. When you do see them in the same shot, it’s to punctuate when he almost tells her how he feels. To achieve this, I did simple split screens, using the Pen tool in Premiere to cut the mask around their actions, including when she touches his knee. To be safe, I shot takes where she doesn’t touch his knee, but none of them conveyed what she was trying to tell him. So, I did a little smooshing around of the two shots and some patching of the background to make it so the characters could connect.

Where did you do post?
We were on a very tight budget, so I edited at home, and I always use Adobe Premiere. I went to my usual colorist, Vladimir Kucherov, for the grade. He used Blackmagic Resolve, and I love working with him. He can always see how a frame could be strengthened by a little shaping with vignettes. Often, by the time I finally figure out what nuance is missing and tell him, he’s already started working on that exact thing. That kind of shaping was especially helpful on the day exteriors, since I had hoped for a strong sunset, but instead got two flat, overcast days.

The only place we didn’t see eye to eye on this project was saturation — I asked him to push saturation farther than he normally would advise. I wanted a cartoon-like heightening of Angler Boy’s world and emotions. He’s going through a period in which he’s feeling very deeply, but by the time of writing the song he is able to look back on it and see the humor in how dramatic he was being. I think we’ve all been there.

What did you use VFX for?
Besides the composites of the two actors together, there were just a few other VFX shots, including dolly moves that I stabilized with the Warp Stabilizer plug-in within Premiere. We couldn’t afford a real dolly, so we put a two-foot riser on a Dana Dolly to achieve wide push-ins on Ian singing. We were rushing to catch dusk between rainstorms, and it was tough to level the track on grass.

The final shot is a cartoon night sky composited with a live shot. My very good friend, Julie Gratz of Kaleida Vision, made the sky and animated it. She worked in Adobe After Effects, which communicates seamlessly with Premiere. Julie and I share similar tastes for how unrealistic elements can coexist with a realistic world. She also helped me in prep, giving feedback on storyboards.

Do you like the post process?
I never used to like post. I’ve always loved being on set, in a new place every day, moving physical objects with my hands. But with each video I direct and edit, I get faster and improve my post workflow. Now I can say that I really do enjoy spending time alone with my footage, finding all the ways it can convey my ideas. I have fun combining real people and practical effects with the powerful post tools we can access even at home these days. It’s wonderful when people connect with the story, and then ask where I got two anglerfish heads. That makes me feel like a wizard, and who doesn’t like that?! A love of movie magic is why we choose this medium to tell our tales.


Cory Choy, Silver Sound Showdown festival director and co-founder of Silver Sound Studios, produced the video.