Category Archives: Cameras

Franz Kraus moves to advisory role at ARRI; Michael Neuhaeuser takes tech lead

The ARRI Group has named Dr. Michael Neuhaeuser as the new executive board member responsible for technology. He succeeds Professor Franz Kraus, who, after more than 30 years at ARRI, joins the Supervisory Board and will continue to be closely associated with the company. Neuhaeuser starts September 1.

Kraus, who has led technology development at ARRI for the last few decades, played an essential role in the development of the Alexa digital camera system and in building the company’s early competence in multi-channel LED technology for ARRI lighting. During Kraus’ tenure, while he was responsible for research and development, the company was presented with nine Scientific and Technical Awards by the Academy of Motion Picture Arts and Sciences for its outstanding technical achievements.

In 2011, along with two colleagues, Kraus was honored with an Academy Award of Merit, an Oscar statuette, for the design and development of the digital film recorder, the ARRILASER.

Neuhaeuser, who is now responsible for technology at the ARRI Group, previously served as VP of automotive microcontroller development at Infineon Technologies in Munich. He studied electrical engineering at the Ruhr-University Bochum, Germany, and subsequently completed his doctorate in semiconductor devices. He brings with him 30 years of experience in the electronics industry.

Neuhaeuser started his industrial career at Siemens Semiconductor in Villach, Austria, and later took over development leadership at Micram Microelectronic in Bochum. He joined Infineon Technologies in 1998, where he held various management roles in Germany and abroad. Notably, he was responsible for the digital cordless business from 2005 and, together with his team, developed the world’s first fully integrated DECT chip. In 2009, he was appointed VP/GM of Infineon Technologies Romania in Bucharest where, as country manager, he built up various local activities with more than 300 engineers. In 2012, he was asked to head the automotive microcontroller development division, for which he and his team developed the highly successful Aurix product family, now used in one of every two cars worldwide.

Main Image: L-R: Franz Kraus and Michael Neuhaeuser.

Roundtable: Director Autumn McAlpin and her Miss Arizona post team

By Randi Altman

The independent feature film Miss Arizona is a fish-out-of-water tale focusing on Rose Raynes, a former beauty queen and current bored wife and mother who accepts an invitation to teach a life skills class at a women’s shelter. As you might imagine, the four women she meets there don’t feel they have much in common with her. While Rose is “teaching,” the women are told that one of their abusers is on his way to the shelter. They escape and set out on an all-night adventure through LA and, ultimately, to a club where they enter Rose into a drag queen beauty pageant — and, of course, along the way they form a bond that changes them all.

L-R: Camera operator Eitan Almagor, DP Jordan McKittrick and Autumn McAlpin.

Autumn McAlpin wrote and directed the film, which has been making its way through the film festival circuit. She hired a crew made up of 70 percent women to help tell this tale of female empowerment. We reached out to her, her colorist Mars Williamson and her visual effects/finishing artist John Davidson to find out more.

Why did you choose the Alexa Mini? And why did you shoot mostly handheld?
Autumn McAlpin: The Alexa Mini was the first choice of our DP Jordan McKittrick, with whom I frequently collaborate. We were lucky enough to be able to score two Alexa Mini cameras on this shoot, which really helped us achieve the coverage needed for an ensemble piece in which five-plus key actors were in almost every shot. We love the image quality and dynamic range of the Alexas, and the compact and lightweight nature of the Mini helped us achieve an aggressive shooting schedule in just 14 days.

We felt handheld would achieve the intimate yet at times erratic look we were going for following an ensemble of five women from very different backgrounds who were learning to get along while trying to survive. We wanted the audience to feel as if they were going on the journey along with the women, and thus felt handheld would be a wise approach to accomplish this goal.

How early did post — edit, color — get involved?
McAlpin: We met with our editor Carmen Morrow before the shoot, and she and her assistant editor Dustin Fleischmann were integral in delivering a completed rough cut just five weeks after we wrapped. We needed to make key festival deadlines. Each day Dustin would drive footage from set over to Carmen’s bay, where she could assemble while we were shooting so we could make sure we weren’t missing anything crucial. This was amazing, as we’d often be able to see a rough assembly of a scene we had shot in the morning by the end of day. They cut on Avid Media Composer.

My DP Jordan and I agreed on the overall look of the film and how we wanted the color to feel rich and saturated. We were really excited about what we saw in our colorist’s reel. We didn’t meet our colorist Mars Williamson until after we had wrapped production. Mars had moved from LA to Melbourne, so we knew we wouldn’t be able to work in close quarters, but we were confident we’d be able to accomplish the desired workflow in the time needed. Mars was extremely flexible to work with.

Can you talk more about the look of the film?
McAlpin: Due to the nature of our film, we sought to create a rich, saturated look. Our film follows a former pageant queen on an all-night adventure through LA with four unlikely friends she meets at a women’s shelter. In a way, we tried to channel an Oz-like world as our ensemble embarks into the unknown. We deliberately used color to represent the various realities the women inhabit. In the film’s opening, our production design (by Gabriel Gonzales) and wardrobe (by Cat Velosa) helped achieve a stark, cold world — filled with blues and whites — to represent our protagonist Rose’s loneliness.

As Rose moves into the shelter, we went with warmer tones and a more eclectic production design. A good portion of Act II takes place in a drag club, which we asked Gabe to design to be rich and vibrant, using reds and purples. Toward the end of the film as Rose finds resolution, we went with more naturalistic lighting, primarily outdoor shots and golden hues. Before production, Jordan and I pulled stills from films such as Nick & Norah’s Infinite Playlist, Black Swan and Short Term 12, which provided strong templates for the looks we were trying to achieve.

Is there a particular scene or look that stands out for you?
McAlpin: There is a scene when our lead Rose (Johanna Braddy) performs a ventriloquist act onstage with a puppet and they sing Shania Twain’s “Man, I Feel Like a Woman.”  Both Rose and the puppet wore matching cowgirl wardrobe and braids, and this scene was lit to be particularly vibrant with hot pinks and purples. I remember watching the monitors on set and feeling like we had really nailed the rich, saturated look we were going for in this offbeat pageant world we had created.

L-R: Dana Wheeler-Nicholson, Shoniqua Shandai, producer De Cooper, Johanna Braddy, Autumn McAlpin, Otmara Marrero and Robyn Lively.

Can you talk about the workflow from set to post?
McAlpin: As a low-budget indie, many of our team work from home offices, which made collaboration friendly and flexible. For the four months following production, I floated between the workspaces of our talented and efficient editor Carmen Morrow, brilliant composer Nami Melumad, dedicated sound designer Yu-Ting Su, VFX and online extraordinaire John Davidson, and we used Frame.io to work with our amazing colorist Mars Williamson. Everyone worked so hard to help achieve our vision in our timeframe. Using Frame.io and Box helped immensely with file delivery, and I remember many careful drives around LA, toting our two RAID drives between departments. Postmates food delivery service helped us power through! Everyone worked hard together to deliver the final product, and for that I’m so grateful.

Can you talk about the type of film you were trying to make, and did it turn out as you hoped?
McAlpin: I volunteered in a women’s shelter for several years teaching a life skills class, and this was an experience that introduced me to strong, vibrant women whose stories I longed to tell. I wrote this script very quickly, in just three weeks, though really, the story seemed to write itself. It was the fall of 2016, at a time when I was agitated by the way women were being portrayed in the media. This was shortly before the #MeToo movement, and during the election and the Women’s March. The time felt right to tell a story about women and other marginalized groups coming together to help each other find their voices and a safe community in an increasingly divided world.

I’m not going to lie, with our budget, all facets of production and post were quite challenging, but I was so overwhelmed by the fastidious efforts of everyone on our team to create something powerful. I feel we were all aligned in vision, which kept everyone fueled to create a finished product I am very proud of. The crowning moment of the experience was after our world premiere at Geena Davis’ Bentonville Film Fest, when a few women from the audience approached and confided that they, too, had lived in shelters and felt our film spoke to the truths they had experienced. This certainly made the whole process worthwhile.

Autumn, you wrote as well as directed. Did the story change or evolve once you started shooting or did you stick to the original script?
McAlpin: As a director who is very open to improv and creative play on set, I was quite surprised by how little we deviated from the script. Conceptually, we stuck to the story as written. We did have a few actors who definitely punched up scenes by making certain lines more their own (and much more humorous, i.e. the drag queens). And there were moments when location challenges forced last-minute rewrites, but hey, I guess that’s one advantage to having the writer in the director’s chair! This story seemed to flow from the moment it first arrived in my head, telling me what it wanted to be, so we kind of just trusted that, and I think we achieved our narrative goals.

You used a 70 percent female crew. Can you talk about why that was important to you?
McAlpin: For this film, our producer DeAnna Cooper and I wanted to flip the traditional gender ratios found on sets, as ours was indeed a story rooted in female empowerment. We wanted our set to feel like a compatible, safe environment for characters seeking safety and trusted female friendships. So many of the cast and crew who joined our team expressed delight in joining a largely female team, and I think/hope we created a safe space for all to create!

Also, as women, we tend to get each other — and there were times when those on our production team (all mothers) were able to support each other’s familial needs when emergencies at home arose. We also want to give a shout-out to the numerous woman-supporting men we had on our team, who were equally wonderful to work with!

What was everyone’s favorite scene and why?
McAlpin: There’s a moment when Rose has a candid conversation with a drag queen performer named Luscious (played by Johnathan Wallace) in a green room during which each opens up about who they are and how they got there. Ours is a fish out of water story as Rose tries to achieve her goal in a world quite new to her, but in this scene, two very different people bond in a sincere and heartfelt way. The performances in this scene were just dynamite, thanks to the talents of Johanna and Johnathan. We are frequently told this scene really affects viewers and changes perspectives.

I also have a personal favorite moment toward the end of the film in which a circle of women from very different backgrounds come together to help out a character named Leslie, played by the dynamic Robyn Lively, who is searching for her kids. One of the women helping Leslie says, “I’m a mama, too,” and I love the strength felt in this group hug moment as the village comes together to defend each other.

If you all had to do it again, what would you do differently?
McAlpin: This was one fast-moving train, and I know, as is the case in every film, there are little shots or scenes we’d all love to tweak just a little if given the chance to start over from scratch. But at this point, we are focusing on the positives and what lies in store for Miss Arizona. Since our Bentonville premiere and LA premiere at Dances With Films, we have been thrilled to receive numerous distribution offers, and it’s looking like a fall worldwide release may be in store. We look forward to connecting with audiences everywhere as we share the message of this film.

Mars Williamson

Mars, can you talk about your process and how you worked with the team? 
Williamson: Autumn put us in touch, and John and I touched base a bit before I was set to start color. We all had a pretty good idea of where we were taking it from the offline and discussed little tweaks here and there, so it was fairly straightforward. There were a couple of things like changing a wall color and the last scene needing more sunset than was shot. Autumn and John are super easy and great to work with. We found out pretty early that we’d be able to collaborate easily since John has DaVinci Resolve on his end in the States as well. I moved to Melbourne permanently right before I really got into the grade.

Unbeknownst to me, Melbourne was/is in the process of upgrading its Internet, which is currently painfully slow. We did a couple of reviews via Frame.io and eventually moved to me just emailing John my project. He could relink to the media on his end, and all of my color grading would come across for sessions in LA with Autumn. It was the best solution to contend with the snail-pace uploads of large files. From there it was just going through it reel by reel and getting notes from the stateside team. I couldn’t have worked on this with a better group of people.

What types of projects do you work on most often?
Williamson: My bread and butter has always been TV commercials, but I’ve worked hard to make sure I work on all sorts of formats across different genres. I like to have a wide range of stuff under my belt. The pool is smaller here in Australia than in LA (where I moved from), so TV commercials are still the bill payers, but I’m also dipping into the indie scene here and trying to diversify what I work on. I’m still working on a lot of indie projects and music videos from the States as well, so thank you, stateside clients! Thankfully the time difference hasn’t hindered most of them (smiles). It has led to an all-nighter here and there for me, but I’m happy to lose sleep for the right projects.

How did you work with the DP and director on the look of the film? What look did you want and how did you work to achieve that look or looks?
John Davidson: Magic Feather is a production company and creative agency that I started back in 2004. We provide theatrical marketing and creative services for a wide variety of productions. From the 3D atomic transitions in The Big Bang Theory to the recent Jurassic World: Fallen Kingdom week-long event on Discovery, we have a pretty great body of work. I came onboard Miss Arizona very much by accident. Last year, after working with Weta in New Zealand, we moved to Laguna Niguel and connected with Autumn and her husband Michael via some mutual friends. I was intrigued that they had just finished shooting this movie on their own and offered to replace a few license plates and a billboard. Somehow I turned that into coordinating the post-Avid workflow across the planet and creating 100-plus visual effects shots. It was a fantastic opportunity to use every tool in our arsenal to help a film with a nice message and a family we have come to adore.

John Davidson

Working with Jordan and Autumn on VFX and final mastering was educational for all of us, but especially for me. As I mentioned to Jordan after the screening in Hollywood, if I did my job right, you would never know. There were quite a few late nights, but I think they are both very happy with the results.

John, I understand there were some challenges in the edit? Relinking the camera source footage? Can you talk about that and how you worked around it?
Davidson: The original Avid cut was edited off of the dailies at 1080p with embedded audio. The masters were 3.2K ARRI Alexa Mini Log clips with no sync sound. There were timecode issues the first few days on set, and because Mars was using DaVinci Resolve to color, we knew we had to get the footage from Avid to Resolve somehow. Once we got the edit into Resolve via AAF, I realized it was going to be a challenge relinking sources from the dailies. Resolve was quite the utility knife, and after a bit of tweaking we were able to get the silent master video clips linked up. Because 12TB drives are expensive, we thought it best to trim media to 48-frame handles and ship a smaller drive to Australia for Mars to work with. With Mars’ direction we were able to get that handled and shipped.
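The handle trim Davidson describes comes down to simple frame arithmetic: keep only the portion of each source clip that the cut actually uses, padded by 48 frames on each side. A minimal sketch (the function name and frame numbers are hypothetical illustration values, not taken from the production):

```python
def trimmed_range(used_in, used_out, src_in, src_out, handles=48):
    """Return the frame range to keep when trimming media with handles.

    used_in/used_out: first/last source frame actually used in the cut.
    src_in/src_out:   physical bounds of the source clip.
    handles:          extra frames kept on each side (48 in this workflow).
    """
    keep_in = max(src_in, used_in - handles)
    keep_out = min(src_out, used_out + handles)
    return keep_in, keep_out

# A shot used from source frame 1000 to 1239, inside a clip that
# runs from frame 0 to 5000:
print(trimmed_range(1000, 1239, 0, 5000))  # (952, 1287)
```

The clamping to the source bounds matters near the head or tail of a clip, where fewer than 48 spare frames may exist.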

While Mars was coloring in Australia, I went back into the sources and began relinking the original separate audio to the video sources, because I needed to be able to adjust and re-edit a few scenes with technical issues we couldn’t fix with VFX. Resolve was fantastic here again. Any clip that couldn’t be automatically linked via timecode was connected by matching clap marks on the waveform. For safety, I batch-exported all of the footage with embedded audio and then relinked the film to that. This was important for archival purposes, as well as for any potential fixes we might have to do before the film delivered.
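The timecode-based linking described above amounts to converting each clip’s start timecode into a frame count and finding the audio recording whose range covers it. A minimal sketch, assuming non-drop timecode at a flat 24fps and using hypothetical clip names (real dailies workflows also deal with 23.98 pulldown and drop-frame, which are omitted here):

```python
def tc_to_frames(tc, fps=24):
    """Convert a non-drop SMPTE timecode 'HH:MM:SS:FF' to a frame count."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def match_audio(video_clips, audio_clips, fps=24):
    """Pair each video clip with the audio clip whose timecode range
    covers the video clip's start. Clips are (name, start_tc, end_tc)."""
    pairs = {}
    for vname, vstart, _ in video_clips:
        vf = tc_to_frames(vstart, fps)
        for aname, astart, aend in audio_clips:
            if tc_to_frames(astart, fps) <= vf <= tc_to_frames(aend, fps):
                pairs[vname] = aname
                break
    return pairs

video = [("A001C003", "01:00:10:00", "01:00:30:00")]
audio = [("MZ001", "01:00:00:00", "01:02:00:00")]
print(match_audio(video, audio))  # {'A001C003': 'MZ001'}
```

Clips that fall through this matching (the on-set timecode issues Davidson mentions) are exactly the ones that need the manual clap-mark/waveform pass.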

At this point Mars was sharing her cuts on Frame.io with Jordan and Autumn. I felt like a little green shift was being introduced over H.264, so we would occasionally meet here to review a relinked XML that Mars would send for a full-quality inspection. For VFX we used Adobe After Effects and worked in flat color. We would then upload shots to Box.com for Mars to incorporate into her edit. Two re-cut scenes were done this way as well, which was a challenge because any changes had to be shared with the audio teams, who were actively scoring and mixing.

Once Mars was done we put the final movie together here, and I spent about two weeks working on it. At this point I took the film from Resolve to FCP X. Because we were mastering at 1080p, we had the full 3.2K frame for flexibility. Using a 1080p timeline in FCP X, the first order of business was making final on-site color adjustments with Autumn.

Can you talk about the visual effects provided?
Davidson: For VFX, we focused on things like the license plates and billboards, but we also took a bit of initiative and reviewed the whole movie for areas where we could help. Like everyone else, I loved the look of the stage and club scenes, but I wanted to add just a little flare to the backlights so the LED grids would be less visible. This was done in Final Cut Pro X using the MotionVFX plugin mFlare2. It made very quick work of using its internal Mocha engine to track the light sources and obscure them as needed, when a light went behind a person’s head, for example. It would have been agony tracking so many lights in all those shots using anything else. We had struggled for a while getting replacement license plates to track using After Effects and Mocha; however, the six shots that gave us the most headaches were done as a test in FCP X in less than a day using CoreMelt’s TrackX. We also used Final Cut Pro X’s stabilization to smooth out jagged camera shakes, and added some shake using FCP X’s handheld effect on a few shots that needed it for consistency.

Another area we had to get creative with was with night driving shots that were just too bright even after color. By layering a few different Rampant Design overlays set to multiply, we were able to simulate lights in motion around the car at night with areas randomly increasing and decreasing in brightness. That had a big impact on smoothing out those scenes, and I think everyone was pretty happy with the result. For fun, Autumn also let me add in a few mostly digital shots, like the private jet. This was done in After Effects using Trapcode Particular for the contrails, and a combination of Maxon Cinema 4D and Element 3D for the jet.

Resolve’s face refinement and eye brightening were used in many scenes to give a little extra eye light. We also used Resolve for sky replacement on the final shot of the film. Resolve’s tracker is also pretty incredible, and was used to hide little things that needed to be masked or de-emphasized.

What about finishing?
Davidson: We finalized everything in FCP X and exported a full, clean ProRes cut of the film. We then re-imported that and added grain, unsharp masks and a light vignette for a touch of cinematic texture. The credits were an evolving process, so we created an Apple Numbers document that was shared with my internal Magic Feather team, as well as Autumn and the producers. As that document was adjusted and tweaked, we would edit an Affinity Photo file that my editor AJ Paschall and I shared. We would then export a huge PNG file of the credits into FCP X and set position keyframes to animate the scroll. Any time a change was made, we would just relink to the new PNG export and FCP X would automatically update the credits. Luckily that was easy, because we probably did it 50 times.
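The credit roll described above, a tall PNG animated between two position keyframes, boils down to simple math: the image travels its own height plus the frame height over the duration of the scroll. A rough sketch with made-up illustration values (the actual image height and scroll duration are not stated in the article):

```python
def credit_scroll(image_h, frame_h=1080, duration_s=60.0, fps=24.0):
    """Y keyframe positions for a bottom-to-top credit roll.

    The image starts fully below the frame and ends fully above it;
    returns (start_y, end_y, pixels moved per frame).
    """
    start_y = frame_h          # top edge of image at bottom of frame
    end_y = -image_h           # bottom edge of image at top of frame
    frames = round(duration_s * fps)
    step = (start_y - end_y) / frames
    return start_y, end_y, step

print(credit_scroll(8000))  # (1080, -8000, ~6.31 px per frame)
```

Because the two keyframes scale with the image height, relinking to a taller PNG export only requires updating the end position, which is why the iterate-and-relink loop stayed cheap.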

Lastly, our final delivery to the DCP company was an HEVC 10-bit 2K encode. I am a huge fan of HEVC. It’s a fantastic codec, but it does have one caveat: it takes forever to encode. Using Apple Compressor and a 10-core iMac Pro, it took approximately 13 hours. That said, it was worth it because the colors were accurately represented, and it gave us a file that was 5.52GB versus 18GB or 20GB. That’s a hefty savings on size while also being an improvement in quality over H.264.
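For a sense of scale, the size savings quoted above work out as follows (using the article’s own figures, with 18GB as the H.264 comparison point):

```python
# Size comparison from the delivery described above.
hevc_gb = 5.52   # HEVC 10-bit 2K delivery
h264_gb = 18.0   # low end of the quoted H.264 range
savings = 1 - hevc_gb / h264_gb
print(f"HEVC file is {savings:.0%} smaller")  # HEVC file is 69% smaller
```

Roughly a two-thirds reduction, which is consistent with the general expectation that HEVC delivers similar or better quality than H.264 at substantially lower bitrates.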

Photo Credit: Rich Marchewka

 


DP Patrick Stewart’s path and workflow on Netflix’s Arrested Development

With its handheld doc-style camerawork, voiceover narration and quirky humor, Arrested Development helped revolutionize the look of TV sitcoms. Created by Mitchell Hurwitz, with Ron Howard serving as one of its executive producers, the half-hour comedy series follows the once-rich Bluth family, which continues to live beyond its means in Southern California. At the center of the family is the mostly sane Michael Bluth (Jason Bateman), who does his best to keep his dysfunctional family intact.

Patrick Stewart

The series first aired for three seasons on the Fox TV network (2003-2006) but was canceled due to low ratings. Because the series was so beloved, Netflix brought it back to life in 2013 with its original cast in place. In May 2018, the fifth season began streaming, shot by cinematographer Patrick Stewart (Curb Your Enthusiasm, The League, Flight of the Conchords). He called on Panasonic VariCam LT cinema cameras.

Stewart’s path to becoming a cinematographer wasn’t traditional. Growing up in Los Angeles and graduating with a degree in finance from the University of Santa Clara, he got his start in the industry when a friend called him up and asked if he’d work on a commercial as a dolly grip. “I did it well enough where they called me for more and more jobs,” explains Stewart. “I started as a dolly grip but then I did sound, worked as a tape op and then started in the camera department. I also worked with the best gaffers in San Francisco, who showed me how to look at the light, understand it and either augment it or recreate it. It was the best practical film school I could have ever attended.”

Not wanting to stay “in a small pond with big fish,” Stewart decided to move back to LA and started working for MTV, which brought him into the low-budget handheld world. It also introduced him to “interview lighting,” where he lit celebrities like Barbra Streisand, Mick Jagger and Paul McCartney. “At that point I got to light every single amazing musician, actor, famous person you could imagine,” he says. “This practice afforded me the opportunity to understand how to light people who were getting older, and how to make them look their best on camera.”

In 1999, Stewart received an offer to shoot Mike Figgis’ film Time Code (2000), which was one of the landmark films of the DV/film revolution. “It was groundbreaking not only in the digital realm but the fact that Time Code was shot with four cameras from beginning to end, 93 minutes, without stopping, shown in a quad split with no edits — all handheld,” explains Stewart. “It was an amazingly difficult project, because having no edits meant you couldn’t make mistakes. I was very fortunate to work with a brilliant renegade director like Mike Figgis.”

Triple Coverage
When hired for Arrested Development, the first request Stewart approached Hurwitz with was to add a third camera. Shooting with three cameras with multiple characters can be a logistical challenge, but Stewart felt he could get through scenes more quickly and effectively, in order to get the actors out on time. “I call the C camera the center camera and the A and the B are screen left and screen right,” Stewart explains. “C covers the center POV, while A and B cover the scene from their left and right side POV, which usually starts with overs. As we continue to shoot the scene, each camera will get tighter and tighter. If there are three or more actors in the scene, C will get tighter on whoever is in the center. After that, C camera might cover the scene following the dialogue with ‘swinging’ singles. If no swinging singles are appropriate, then the center camera can move over and help out coverage on the right or left side.

“I’m on a walkie, either adjusting the shots during a scene for framing or exposure, or planning ahead,” he continues. “You give me three cameras and I’ll shoot a show really well for you and get it done efficiently, and with cinematic style.”

Because it is primarily a handheld show, Stewart needed lenses that would not weigh down his operators during long takes. He employed Fujinon Cabrio zooms (15-35mm, 19-90mm, and 85-300mm), which are all f/2.8 lenses.

For camera settings, Stewart captures 10-bit 4:2:2 UHD (3840×2160) AVC-Intra files at 23.98 fps. He also captures in V-Log but uses the V-709 LUT. “To me, you can create all the LUTs you want,” he says, “but more than likely you get to color correction and end up changing things. I think the basic 709 LUT is really nice and gentle on all the colors.”

Light from Above
Much of Arrested Development is shot on a stage, so lighting can get complicated, especially when there are multiple characters in a scene. To make things less complicated, Stewart provided a gentle soft light from softboxes covering the top of each stage set, using 4-by-8 wooden frames with tungsten-balanced Quasar tubes dimmed down to 50%. His motivated-lighting explanation is that the unseen source could basically be a skylight. If characters are close to windows, he uses HMIs creating “natural sunlight” punching through to light the scene. “The nice thing about the VariCam is that you don’t need as many photons, and I did pretty extensive tests during pre-production on how to do it.”

On stage, Stewart sets his ISO to the 5000 base and dials down to 2500, generally shooting at f/2.8 and a half. He even uses one level of ND on top of that. “You can imagine 27-foot candles at one level of ND at a 2.8 and 1/2 — that’s a pretty sensitive camera, and I noticed very little noise. My biggest concern was mid-tones, so I did a lot of testing — shooting at 5000, shooting at 2500, 800, 800 pushed up to 1600 and 2500.

“Sometimes with certain cameras, you can develop this mid-tone noise that you don’t really notice until you’re in post. I felt like shooting at 5000 knocked down to 2500 was giving me the benefit of lighting the stage at these beautifully low-lit levels where we would never be hot. I could also easily put 5Ks outside the windows to have enough sunlight to make it look like it’s overexposed a bit. I felt that with the 5000 base knocked down to 2500, the noise level was negligible. At native 5000 ISO, there was a little bit more mid-tone noise, even though it was still acceptable. For daytime exteriors, we usually shot at ISO 800, dialing down to 500 or below.”

Stewart and Arrested Development director Troy Miller have known each other for many years, since working together on HBO’s Flight of the Conchords. “There was a shorthand between director and DP that really came in handy,” says Stewart. “Troy knows that I know what I’m doing, and I know on his end that he’s trying to figure out this really complicated script and have us shoot it. Hand in hand, we were really able to support Mitch.”


Zoe Iltsopoulos Borys joins Panavision Atlanta as VP/GM

Panavision has hired Zoe Iltsopoulos Borys to lead the company’s Atlanta office as vice president and general manager. Borys will oversee day-to-day operations in the region.

Borys’ 25 years of experience in the motion picture industry includes business development for Production Resources Group (PRG) and a GM role at Fletcher Camera and Lenses (now VER). This is her second stint at Panavision, having served in a marketing role at the company from 1998 to 2006. She is also an associate member of the American Society of Cinematographers.

Panavision’s Atlanta facilities, located in West Midtown and at Pinewood Studios, supply camera rental equipment in the southern US, with a full staff of prep technicians and camera service experts. The Atlanta team has provided equipment and services to productions including Avengers: Infinity War, Black Panther, Guardians of the Galaxy Vol. 2, The Immortal Life of Henrietta Lacks, Baby Driver and Pitch Perfect 3.


Kees van Oostrum weighs in on return as ASC president

The American Society of Cinematographers (ASC) has re-elected Kees van Oostrum as president. He will serve his third consecutive term at the organization.

The ASC board also re-upped its roster of officers for 2018-2019, including Bill Bennett, John Simmons and Cynthia Pusheck as vice presidents; Levie Isaacks as treasurer; David Darby as secretary; and Isidore Mankofsky as sergeant-at-arms.

Van Oostrum initiated and chairs the ASC Master Class program, which has expanded to locations worldwide under his presidency. The Master Classes take place several times a year and are taught by ASC members. The classes are designed for cinematographers with an intermediate-to-advanced skill set and incorporate practical, hands-on demonstrations of lighting and camera techniques with essential instruction in current workflow practices.

The ASC Vision Committee, founded during van Oostrum’s first term, continues to organize successful symposiums that encourage diversity and inclusion on camera crews, and also offers networking opportunities. The most recent was a standing-room-only event that explored practical and progressive ideas for changing the face of the industry. The ASC will continue to host more of these activities during the coming years.

Van Oostrum has earned two Primetime Emmy nominations for his work on the telefilms Miss Rose White and Return to Lonesome Dove. His peers chose the latter for a 1994 ASC Outstanding Achievement Award. Additional ASC Award nominations for his television credits came for The Burden of Proof, Medusa’s Child and Spartacus. He also shot the Emmy-winning documentary The Last Chance.

A native of Amsterdam, van Oostrum studied at the Dutch Film Academy with an emphasis on both cinematography and directing. He went on to earn a scholarship sponsored by the Dutch government, which enabled him to enroll in the American Film Institute (AFI). Van Oostrum broke into the industry shooting television documentaries for several years. He has subsequently compiled a wide range of some 80-plus credits, including movies for television and the cinema, such as Gettysburg, Gods and Generals and occasional documentaries. He recently wrapped the final season of TV series The Fosters.

The 2018-2019 board members who voted in this election include John Bailey, Paul Cameron, Russell Carpenter, Curtis Clark, Dean Cundey, George Spiro Dibie, Stephen Lighthill, Lowell Peterson, Roberto Schaefer, John Toll and Amelia Vincent. Alternate board members are Karl-Walter Lindenlaub, Stephen Burum, David Darby, Charlie Lieberman and Eric Steelberg.

The ASC has over 20 committees driving the organization’s initiatives, such as the award-winning Motion Imaging Technology Council (MITC), and the Educational and Outreach committee.

We reached out to van Oostrum to find out more:

How fulfilling has being ASC president been — either personally or professionally (or both)?
My presidency has been a tremendously fulfilling experience. The ASC grew its educational programs. The Master Class expanded from domestic to international locations, and currently eight to 10 classes a year are held based on demand (up from four to five in the program's inaugural year). Our public outreach activities have brought in over 7,000 students in the last two years, giving them a chance to meet ASC members and ask questions about cinematography and filmmaking.

Our digital presence has also grown, and the ASC and American Cinematographer websites are some of the most visited sites in our industry. Interest from the vendor community has expanded as well, introducing a broader range of companies who are involved in the image pipeline to our members. Then, our efforts to support ASC’s heritage, research and museum acquisitions have taken huge steps forward. I believe the ASC has grown into a relevant organization for people to watch.

What do you hope to accomplish in the coming year?
We will complete our Educational Center, a new building behind the historic ASC clubhouse in Hollywood; produce several online master classes about cinematography; and we also are set to produce two major documentaries about cinematography and will continue to strengthen our role as a technology partner through the efforts of our Motion Imaging Technology Council (formerly the ASC Technology Committee).

What are your proudest achievements from previous years?
I’m most proud of the success of the Master Classes, as well as the support and growth in the number of activities by the Vision Committee. I’m also pleased with the Chinese language edition of our magazine, and having cinematography stories shared in a global way. We’ve also beefed up our overall internal communications so members feel more connected.


Testing large format camera workflows

By Mike McCarthy

In the last few months, we have seen the release of the Red Monstro, Sony Venice, Arri Alexa LF and Canon C700 FF, all of which have larger or full-frame sensors. Full frame comes from DSLR terminology, where it refers to a sensor equivalent to the entire 35mm film exposure area — the way film ran horizontally through still cameras. All SLRs used to be full frame with 35mm film, so there was no need for the term until manufacturers started saving money on digital image sensors by making them smaller than 35mm film exposures. Super35mm motion picture cameras, on the other hand, ran the film vertically, resulting in a smaller exposure area per frame, but that was still much larger than most video imagers until the last decade, when 2/3-inch chips were considered premium imagers. The options have grown a lot since then.
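To put rough numbers on how these formats compare, the crop factor between two sensors can be computed from their diagonals. A minimal Python sketch, using approximate active-area dimensions (assumed values for illustration, not manufacturer specs):

```python
import math

def diagonal(w, h):
    """Sensor diagonal in mm."""
    return math.hypot(w, h)

# Approximate active-area dimensions in mm (illustrative, assumed values)
formats = {
    "full frame":       (36.0, 24.0),
    "super35 (4-perf)": (24.89, 18.66),
    "2/3-inch (16:9)":  (9.6, 5.4),
}

ff_diag = diagonal(*formats["full frame"])
for name, (w, h) in formats.items():
    crop = ff_diag / diagonal(w, h)
    print(f"{name}: {w}x{h} mm, crop factor {crop:.2f}x")
```

With these assumed dimensions, Super35 works out to roughly a 1.4x crop relative to full frame, which matches the practical experience that full-frame lenses look noticeably wider on the same focal length.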

L-R: 1st AC Ben Brady, DP Michael Svitak and Mike McCarthy on the monitor.

Most of the top-end cinema cameras released over the last few years have advertised their Super35mm sensors as a huge selling point, as that allows use of any existing S35 lens on the camera. These S35 cameras include the Epic, Helium and Gemini from Red, Sony’s F5 and F55, Panasonic’s VariCam LT, Arri’s Alexa and Canon’s C100-500. On the top end, 65mm cameras like the Alexa65 have sensors twice as wide as Super35 cameras, but very limited lens options that cover a sensor that large. Full frame falls somewhere in between and allows, among other things, use of any 35mm still film lenses. In the world of film, this format was referred to as VistaVision, but the first widely used full-frame digital video camera was Canon’s 5D MkII, the first serious HDSLR. The format has suddenly surged in popularity recently, and thanks to this I recently had the opportunity to be involved in a test shoot with a number of these new cameras.

Keslow Camera was generous enough to give DP Michael Svitak and myself access to pretty much all their full-frame cameras and lenses for the day in order to test the cameras, workflows and lens options for this new format. We also had the assistance of first AC Ben Brady to help us put all that gear to use, and Mike’s daughter Florendia as our model.

First off was the Red Monstro, which while technically not the full 24mm height of true full frame, uses the same size lenses due to the width of its 17×9 sensor. It offers the highest resolution of the group at 8K. It records compressed RAW to R3D files, as well as options for ProRes and DNxHR up to 4K, all saved to Red mags. Like the rest of the group, smaller portions of the sensor can be used at lower resolution to pair with smaller lenses. The Red Helium sensor has the same resolution but in a much smaller Super35 size, allowing a wider selection of lenses to be used. But larger pixels allow more light sensitivity, with individual pixels up to 5 microns wide on the Monstro and Dragon, compared to Helium’s 3.65-micron pixels.
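Those pitch figures fall straight out of sensor width divided by horizontal pixel count. A quick sketch (the sensor widths are approximate published values, assumed here for illustration):

```python
def pixel_pitch_um(sensor_width_mm, horizontal_pixels):
    """Approximate pixel pitch in microns: active width / horizontal pixel count."""
    return sensor_width_mm * 1000 / horizontal_pixels

# Assumed approximate active widths: Monstro 8K VV ~40.96mm, Helium 8K S35 ~29.90mm
print(f"Monstro: {pixel_pitch_um(40.96, 8192):.2f} um")
print(f"Helium:  {pixel_pitch_um(29.90, 8192):.2f} um")
```

Same 8K horizontal count, different physical widths — which is exactly why the larger sensor ends up with the larger, more light-sensitive pixels.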

Next up was Sony’s new Venice camera with a 6K full-frame sensor, which allows 4K S35 recording as well. It records XAVC to SxS cards or compressed RAW in the X-OCN format with the optional AXS-R7 external recorder, which we used. It is worth noting that both full-frame recording and integrated anamorphic support require additional special licenses from Sony, but Keslow provided us with a camera that had all of that functionality enabled. With a 36x24mm 6K sensor, the pixels are 5.9 microns, and footage shot at 4K in the S35 mode should be similar to shooting with the F55.

We unexpectedly had the opportunity to shoot on Arri’s new AlexaLF (Large Format) camera. At 4.5K, this had the lowest resolution, but that also means the largest sensor pixels, at 8.25 microns, which can increase sensitivity. It records ArriRaw or ProRes to Codex XR capture drives with its integrated recorder.

Another new option is the Canon C700 FF with a 5.9K full-frame sensor recording RAW, ProRes, or XAVC to CFast cards or Codex drives. That gives it 6-micron pixels, similar to the Sony Venice. But we did not have the opportunity to test that camera this time around; maybe in the future.

One more factor in all of this is the rising popularity of anamorphic lenses. All of these cameras support modes that use the part of the sensor covered by anamorphic lenses and can desqueeze the image for live monitoring and preview. In the digital world, anamorphic essentially cuts your overall resolution in half, until the unlikely event that we start seeing anamorphic projectors or cameras with rectangular sensor pixels. But the prevailing attitude appears to be, “We have lots of extra resolution available so it doesn’t really matter if we lose some to anamorphic conversion.”
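The geometry behind that trade-off is simple: desqueezing stretches the recorded frame horizontally by the lens squeeze factor without adding any new pixels, so the displayed aspect ratio widens while effective horizontal resolution per unit of screen width drops. A small sketch of the arithmetic:

```python
def desqueezed_aspect(rec_w, rec_h, squeeze):
    """Display aspect ratio after horizontally stretching by the lens squeeze factor.

    The stretch adds no new pixels, so horizontal detail per unit of
    displayed width falls by the same factor.
    """
    return (rec_w / rec_h) * squeeze

# A 2x anamorphic over a 4:3 recording area yields the classic ~2.66:1 frame
print(round(desqueezed_aspect(4, 3, 2.0), 2))
```

A 16:9 recording with no squeeze stays at 1.78:1; put a 2x anamorphic on a 4:3 crop and the same pixel budget has to cover twice the displayed width.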

Post Production
So what does this mean for post? In theory, sensor size has no direct effect on the recorded files (beyond their content), but resolution does. We also have a number of new formats to deal with, and then anamorphic images to handle during finishing.

Ever since I got my hands on one of Dell’s new UP3218K monitors with an 8K screen, I have been collecting 8K assets to display on it. When I first started discussing this shoot with DP Michael Svitak, I was primarily interested in getting some more 8K footage to use to test new 8K monitors, editing systems and software as they were released. I was anticipating getting Red footage, which I knew I could play back and process using my existing software and hardware.

The other cameras and lens options were added as the plan expanded, and by the time we got to Keslow Camera, they had filled a room with lenses and gear for us to test with. I also had a Dell 8K display connected to my ingest system, and the new 4K DreamColor monitor as well. This allowed me to view the recorded footage in the highest resolution possible.

Most editing programs, including Premiere Pro and Resolve, can handle anamorphic footage without issue, but new camera formats can be a bigger challenge. Any RAW file requires info about the sensor pattern in order to debayer it properly, and new compression formats are even more work. Sony’s new compressed RAW format for Venice, called X-OCN, is supported in the newest 12.1 release of Premiere Pro, so I didn’t expect that to be a problem. Its other recording option is XAVC, which should work as well. The Alexa on the other hand uses ArriRaw files, which have been supported in Premiere for years, but each new camera shoots a slightly different “flavor” of the file based on the unique properties of that sensor. Shooting ProRes instead would virtually guarantee compatibility but at the expense of the RAW properties. (Maybe someday ProResRAW will offer the best of both worlds.) The Alexa also has the challenge of recording to Codex drives that can only be offloaded in OS X or Linux.

Once I had all of the files on my system, after using a MacBook Pro to offload the media cards, I tried to bring them into Premiere. The Red files came in just fine but didn’t play back smoothly above 1/4 resolution. They played smoothly in RedCineX with my Red Rocket-X enabled, and they export respectably fast in AME (a five-minute 8K anamorphic sequence to UHD H.265 in 10 minutes), but for some reason Premiere Pro isn’t able to get smooth playback when using the Red Rocket-X. Next I tried the X-OCN files from the Venice camera, which imported without issue. They played smoothly on my machine but looked like they were locked to half or quarter res, regardless of what settings I used, even in the exports. I am currently working with Adobe to get to the bottom of that because they are able to play back my files at full quality, while all my systems have the same issue. Lastly, I tried to import the Arri files from the AlexaLF, but Adobe doesn’t support that new variation of ArriRaw yet. I would anticipate that will happen soon, since it shouldn’t be too difficult to add that new version to the existing support.

I ended up converting the files I needed to DNxHR in DaVinci Resolve so I could edit them in Premiere, and I put together a short video showing off the various lenses we tested with. Eventually, I need to learn how to use Resolve more efficiently, but the type of work I usually do lends itself to the way Premiere is designed — inter-cutting and nesting sequences with many different resolutions and aspect ratios. Here is a short clip demonstrating some of the lenses we tested with:

This is a web video, so even at UHD it is not meant to be an analysis of the RAW image quality, but instead a demonstration of the field of view and overall feel with various lenses and camera settings. The combination of the larger sensors and the anamorphic lenses leads to an extremely wide field of view. The table was only about 10 feet from the camera, and we can usually see all the way around it. We also discovered that when recording anamorphic on the Alexa LF, we were recording a wider image than was displaying on the monitor output. You can see in the frame grab below that the live display visible on the right side of the image isn’t displaying the full content that got recorded, which is why we didn’t notice that we were recording with the wrong settings with so much vignetting from the lens.

We only discovered this after the fact, from this shot, so we didn’t get the opportunity to track down the issue to see if it was the result of a setting in the camera or in the monitor. This is why we test things before a shoot, but we didn’t “test” before our camera test, so these things happen.

We learned a lot from the process, and hopefully some of those lessons are conveyed here. A big thanks to Brad Wilson and the rest of the guys at Keslow Camera for their gear and support of this adventure and, hopefully, it will help people better prepare to shoot and post with this new generation of cameras.

Main Image: DP Michael Svitak


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.


Panavision Millennium DXL2’s ecosystem grows with color science, lenses, more

Panavision’s Millennium DXL2 8K camera was on display at Cine Gear last week featuring a new post-centric firmware upgrade, along with four new large-format lens sets, a DXL-inspired accessories kit for Red DSMC2 cameras and a preview of custom advancements in filter technology.

DXL2 incorporates technology advancements based on input from cinematographers, camera assistants and post production groups. The camera offers 16 stops of dynamic range with improved shadow detail, a native ISO setting of 1600 and 12-bit ProRes XQ up to 120fps. New to the DXL2 is version 1.0 of a directly editable (D2E) workflow. D2E gives DITs wireless LUT and CDL look control and records all color metadata into camera-generated proxy files for instant and render-free dailies.

DXL2, which is available to rent worldwide, also incorporates an updated color profile: Light Iron Color 2 (LiColor2). This latest color science provides cinematographers and DITs with a film-inspired tonal look that makes the DXL2 feel more cinematic and less digital.

Panavision also showcased their large-format spherical and anamorphic lenses. Four new large-format lens sets were on display:
• Primo X is a cinema lens designed for use on drones and gimbals. It’s fully sealed, weatherproof and counterbalanced to be aerodynamic and it’s able to easily maintain a proper center of gravity. Primo X lenses come in two primes – 14mm (T3.1) and 24mm (T1.6) – and one 24-70mm zoom (T2.8) and will be available in 2019.

• H Series is a traditionally designed spherical lens set with a rounded, soft roll-off, giving what the company calls a “pleasing tonal quality to the skin.” Created with vintage glass and coating, these lenses offer slightly elevated blacks for softer contrast. High speeds separate subject and background with a smooth edge transition, allowing the subject to appear naturally placed within the depth of the image. These lenses are available now.
• Ultra Vista is a series of large-format anamorphic optics. Using a custom 1.6x squeeze, Ultra Vista covers the full height of the 8K sensor in the DXL and presents an ultra-widescreen 2.76:1 aspect ratio along with a classic elliptical bokeh and Panavision horizontal flare. Ultra Vista lenses will be available in 2019.
• PanaSpeed is a large-format update of the classic Primo look. At T1.4, PanaSpeed is a fast large-format lens. It will be available in Q3 of 2018.
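The Ultra Vista numbers check out with simple arithmetic: the final aspect ratio is the recorded-area aspect multiplied by the squeeze factor. A minimal sketch (the ~1.72:1 recorded-area aspect is an assumed value for illustration):

```python
squeeze = 1.6
rec_aspect = 1.725  # assumed ~1.72:1 active recording area on the DXL's 8K sensor

# Desqueezing stretches the frame horizontally by the squeeze factor
final_aspect = rec_aspect * squeeze
print(f"{final_aspect:.2f}:1")
```

A 1.6x squeeze over that recording area lands right on the ultra-widescreen 2.76:1 frame Panavision quotes.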

Panavision also showed an adjustable liquid crystal neutral density (LCND) filter. LCND adjusts up to six individual stops with a single click or ramp — a departure from traditional approaches to front-of-lens filters, which require carrying a set and manually swapping individual NDs based on changing light. LCND densities run from 0.3 through 0.6, 0.9, 1.2 and 1.5 to 1.8. It will be available in 2019.
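Those density values map onto whole stops because each 0.3 of optical density cuts transmission roughly in half (one stop). A quick Python check of the conversion:

```python
import math

def nd_stops(density):
    """Stops of light reduction for an ND filter of the given optical density.

    Transmission is 10^-density, so stops = density * log2(10) ~= density / 0.301.
    """
    return density * math.log2(10)

for d in (0.3, 0.6, 0.9, 1.2, 1.5, 1.8):
    print(f"ND {d}: {nd_stops(d):.1f} stops, transmits {10 ** -d:.1%}")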

Following up on the DXL1 and DXL2, Panavision launched the latest in its cinema line-up with the newly created DXL-M accessory kit. Designed to work with Red DSMC2 cameras, DXL-M marries the quality and performance of DXL with the smaller size and weight of the DSMC2. DXL-M brings popular features of DXL to Red Monstro, Gemini and Helium sensors, such as the DXL menu system (via an app for the iPhone), LiColor2, motorized lenses, wireless timecode (ACN) and the Primo HDR viewfinder. It will be available in Q4 of 2018.


Sony updates Venice to V2 firmware, will add HFR support

At CineGear, Sony introduced new updates and developments for its Venice CineAlta camera system including Version 2 firmware, which will now be available in early July.

Sony also showed the new Venice Extension System, which features expanded flexibility and enhanced ergonomics. Also announced was Sony’s plan for high frame rate support for the Venice system.

Version 2 adds new features and capabilities specifically requested by production pros to deliver more recording capability, customizable looks, exposure tools and greater lens freedom. Highlights include:

• With 15+ stops of exposure latitude, Venice will support a high base ISO of 2500 in addition to the existing ISO 500, taking full advantage of Sony’s sensor for superb low-light performance with dynamic range from +6 stops to -9 stops as measured at 18% middle gray. This increases exposure indexes at higher ISOs for night exteriors, dark interiors, working with slower lenses, or content that needs to be graded in high dynamic range while maintaining maximum shadow detail.
• Select FPS (off speed) in individual frame increments, from 1 to 60.
• V2.0 adds several imager modes, including 25p in 6K full-frame, 25p in 4K 4:3 anamorphic, 6K 17:9, 1.85:1 and 4K 6:5 anamorphic.
• User-uploadable 3D LUTs allow users to customize their own looks and save them directly into the camera.
• Wired LAN remote control allows users to remotely control and change key functions, including camera settings, fps, shutter, EI, iris (Sony E-mount lens), record start/stop and built-in optical ND filters.
• E-mount allows users to remove the PL mount and use a wide assortment of native E-mount lenses.
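The gap between the two base sensitivities can be expressed in stops, since each doubling of ISO is one stop. A minimal sketch of that arithmetic:

```python
import math

def ei_offset_stops(ei, base_iso):
    """Exposure offset in stops between a chosen EI and a base ISO (log base 2)."""
    return math.log2(ei / base_iso)

# Venice's two bases, ISO 500 and ISO 2500, sit about 2.3 stops apart
print(round(ei_offset_stops(2500, 500), 2))

# The quoted 15+ stop latitude splits as +6 over / -9 under middle gray
print(6 + 9)
```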

The Venice Extension System is a full-frame tethered extension system that allows the camera body to detach from the image sensor block, with no degradation in image quality, at distances of up to 20 feet. The system is the result of Sony’s long-standing collaboration with James Cameron’s Lightstorm Entertainment.

“This new tethering system is a perfect example of listening to our customers, gathering strong and consistent feedback, and then building that input into our product development,” said Peter Crithary, marketing manager for motion picture cameras, Sony. “The Avatar sequels will be among the first feature films to use the new Venice Extension System, but it also has tremendous potential for wider use with handheld stabilizers, drones, gimbals and remote mounting in confined places.”

Also at CineGear, Sony shared the details of a planned optional upgrade to support high frame rate — targeting speeds up to 60fps in 6K, up to 90fps in 4K and up to 120fps in 2K. It will be released in North America in the spring of 2019.


Red simplifies camera lineup with one DSMC2 brain

Red Digital Cinema modified its camera lineup to include one DSMC2 camera Brain with three sensor options — Monstro 8K VV, Helium 8K S35 and Gemini 5K S35. The single DSMC2 camera Brain includes high-end frame rates and data rates regardless of the sensor chosen. In addition, this streamlined approach will result in a price reduction compared to Red’s previous camera line-up.

“We have been working to become more efficient, as well as align with strategic manufacturing partners to optimize our supply chain,” says Jarred Land, president of Red Digital Cinema. “As a result, I am happy to announce a simplification of our lineup with a single DSMC2 brain with multiple sensor options, as well as an overall reduction on our pricing.”

Red’s DSMC2 camera Brain is a modular system that allows users to configure a fully operational camera setup to meet their individual needs. Red offers a range of accessories, including display and control functionality, input/output modules, mounting equipment, and methods of powering the camera. The camera Brain is capable of up to 60fps at 8K, offers 300MB/s data transfer speeds and simultaneous recording of RedCode RAW and Apple ProRes or Avid DNxHD/HR.

The Red DSMC2 camera Brain and sensor options:
– DSMC2 with Monstro 8K VV offers cinematic full frame lens coverage, produces ultra-detailed 35.4 megapixel stills and offers 17+ stops of dynamic range for $54,500.
– DSMC2 with Helium 8K S35 offers 16.5+ stops of dynamic range in a Super 35 frame, and is available now for $24,500.
– DSMC2 with Gemini 5K S35 uses dual sensitivity modes to provide creators with greater flexibility, using standard mode for well-lit conditions or low-light mode for darker environments, and is priced at $19,500.

Red will begin to phase out new sales of its Epic-W and Weapon camera Brains starting immediately. In addition to the changes to the camera line-up, Red will also begin offering new upgrade paths for customers looking to move from older Red camera systems or from one sensor to another. The full range of upgrade options can be found here.


The Duffer Brothers: Showrunners on Netflix’s Stranger Things

By Iain Blair

Kids in jeopardy! The Demogorgon! The Hawkins Lab! The Upside Down! Thrills and chills! Since they first pitched their idea for Stranger Things, a love letter to 1980s genre films set in 1983 Indiana, twin brothers Matt and Ross Duffer have quickly established themselves as masters of suspense in the science-fiction and horror genres.

The series was picked up by Netflix, premiered in the summer of 2016, and went on to become a global phenomenon, with the brothers at the helm as writers, directors and executive producers.

The Duffer Brothers

The atmospheric drama, about a group of nerdy misfits and strange events in an outwardly average small town, nailed its early ’80s vibe and overt homages to that decade’s master pop storytellers: Steven Spielberg and Stephen King. It quickly made stars out of its young ensemble cast — Millie Bobby Brown, Natalia Dyer, Charlie Heaton, Joe Keery, Gaten Matarazzo, Caleb McLaughlin, Noah Schnapp, Sadie Sink and Finn Wolfhard.

It also quickly attracted a huge, dedicated fan base, critical plaudits and has won a ton of awards, including Emmys, a SAG Award for Best Ensemble in a Drama Series and two Critics Choice Awards for Best Drama Series and Best Supporting Actor in a Drama Series. The show has also been nominated for a number of Golden Globes.

I recently talked with the Duffers, who are already hard at work on the highly anticipated third season (which will premiere on Netflix in 2019), about making the ambitious hit series, their love of post and editing, and VFX.

How’s the new season going?
Matt Duffer: We’re two weeks into shooting, and it’s going great. We’re very excited about it as there are some new tones and it’s good to be back on the ground with everyone. We know all the actors better and better, the kids are getting older and are becoming these amazing performers — and they were great before. So we’re having a lot of fun.

Are you shooting in Atlanta again?
Ross Duffer: We are, and we love it there. It’s really our home base now, and we love all these pockets of neighborhoods that have not changed at all since the ‘80s, and there is an incredible variety of locations. We’re also spreading out a lot more this season and not spending so much time on stages. We have more locations to play with.

Will all the episodes be released together next year, like last time? That would make binge-watchers very happy.
Matt: Yes, but we like to think of it more as like a big movie release. To release one episode per week feels so antiquated now.

The show has a very cinematic look and feel, so how do you balance that with the demands of TV?
Ross: It’s interesting, because we started out wanting to make movies and we love genre, but with a horror film they want big scares every few minutes. That leaves less room for character development. But with TV, it’s always more about character, as you just can’t sustain hours and hours of a show if you don’t care about the people. So ‘Stranger Things’ was a world where we could tell a genre story, complete with the monster, but also explore character in far more depth than we could in a movie.

Matt: Movies and TV are almost opposites in that way. In movies, it’s all plot and no character, and in TV it’s about character and you have to fight for plot. We wanted this to have pace and feel more like a movie, but still have all the character arcs. So it’s a constant balancing act, and we always try and favor character.

Where do you post the show?
Matt: All in Hollywood, and the editors start working while we’re shooting. After we shoot in Atlanta, we come back to our offices and do all the post and VFX work right there. We do all the sound mix and all the color timing at Technicolor down the road. We love post. You never have enough time on the set, and there’s all this pressure if you want to redo a shot or scene, but in post if a scene isn’t working we can take time to figure it out.

Tell us about the editing. I assume you’re very involved?
Ross: Very. We have two editors this season. We brought back one of our original editors, Dean Zimmerman, from season one. We are also using Nat Fuller, who was on season two. He was Dean’s assistant originally and then moved up, so they’ve been with us since the start. Editing’s our favorite part of the whole process, and we’re right there with them because we love editing. We’re very hands on and don’t just give notes and walk away. We’re there the whole time.

Aren’t you self-taught in terms of editing?
Matt: (Laughs) I suppose. We were taught the fundamentals of Avid at film school, but you’re right. We basically taught ourselves to edit as kids, and we started off just editing in-camera, stopping and starting, and playing the music from a tape recorder. They weren’t very good, but we got better.

When iMovie came out we learned how to put scenes together, so in college the transition to Avid wasn’t that hard. We fell in love with editing and just how much you can elevate your material in post. It’s magical what you can do with the pace, performances, music and sound design, and then you add all the visual effects and see it all come together in post. We love seeing the power of post as you work to make your story better and better.

How early on do you integrate post and VFX with the production?
Ross: On day one now. The biggest change from season one to two was that we integrated post far earlier in the second season — even in the writing stage. We had concept artists and the VFX guys with us the whole time on set, and they were all super-involved. So now it all kind of happens together.

All the VFX are a much bigger deal. For last season we had a lot more VFX than the first year — about 1,400 shots, which is a huge amount, like a big movie. The first season it wasn’t a big deal. It was a very old-school approach, with mainly practical effects, and then in the middle we realized we were being a bit naïve, so we brought in Paul Graff as our VFX supervisor on season two, and he’s very experienced. He’s worked on big movies like The Wolf of Wall Street as well as Game of Thrones and Boardwalk Empire, and he’s doing this season too. He’s in Atlanta with us on the shoot.

We have two main VFX houses on the show — Atomic Fiction and Rodeo — they’re both incredible, and I think all the VFX are really cinematic now.

But isn’t it a big challenge in terms of a TV show’s schedule?
Ross: You’re right, and it’s always a big time crunch. Last year we had to meet that Halloween worldwide release date and we were cutting it so close trying to finish all the shots in time.

Matt: Everyone expects movie-quality VFX — just in a quarter of the time, or less. So it’s all accelerated.

The show has a very distinct, eerie, synth-heavy score by Kyle Dixon and Michael Stein, the Grammy-nominated duo. How important is the music and sound, which won several Emmys last year?
Ross: It’s huge. We use it so much for transitions, and we have great sound designers — including Brad North and Craig Henighan — and great mixers, and we pay a lot of attention to all of it. I think TV has always put less emphasis on great sound compared to film, and again, you’re always up against the scheduling, so it’s always this balancing act.

You can’t mix it for a movie theater as very few people have that set up at home, so you have to design it for most people who’re watching on iPhones, iPads and so on, and optimize it for that, so we mostly mix in stereo. We want the big movie sound, but it’s a compromise.

The DI must be vital?
Matt: Yes, and we work very closely with colorist Skip Kimball (who recently joined Efilm), who’s been with us since the start. He was very influential in terms of how the show ended up looking. We’d discussed the kind of aesthetic we wanted, and things we wanted to reference and then he played around with the look and palette. We’ve developed a look we’re all really happy with. We have three different LUTs on set designed by Skip and the DP Tim Ives will choose the best one for each location.

Everyone’s calling this the golden age of TV. Do you like being showrunners?
Ross: We do, and I feel we’re very lucky to have the chance to do this show — it feels like a big family. Yes, we originally wanted to be movie directors, but we didn’t come into this industry at the right time, and Netflix has been so great and given us so much creative freedom. I think we’ll do a few more seasons of this, and then maybe wrap it up. We don’t want to repeat ourselves.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.