Category Archives: Cameras

Kees van Oostrum weighs in on return as ASC president

The American Society of Cinematographers (ASC) has re-elected Kees van Oostrum as president. He will serve his third consecutive term as president of the organization.

The ASC board also re-upped its roster of officers for 2018-2019, including Bill Bennett, John Simmons and Cynthia Pusheck as vice presidents; Levie Isaacks as treasurer; David Darby as secretary; and Isidore Mankofsky as sergeant-at-arms.

Van Oostrum initiated and chairs the ASC Master Class program, which has expanded to locations worldwide under his presidency. The Master Classes take place several times a year and are taught by ASC members. The classes are designed for cinematographers with an intermediate-to-advanced skill set and incorporate practical, hands-on demonstrations of lighting and camera techniques with essential instruction in current workflow practices.

The ASC Vision Committee, founded during van Oostrum’s first term, continues to organize successful symposiums that encourage diversity and inclusion on camera crews, and also offers networking opportunities. The most recent was a standing-room-only event that explored practical and progressive ideas for changing the face of the industry. The ASC will continue to host more of these activities during the coming years.

Van Oostrum has earned two Primetime Emmy nominations for his work on the telefilms Miss Rose White and Return to Lonesome Dove. His peers chose the latter for a 1994 ASC Outstanding Achievement Award. Additional ASC Award nominations for his television credits came for The Burden of Proof, Medusa’s Child and Spartacus. He also shot the Emmy-winning documentary The Last Chance.

A native of Amsterdam, van Oostrum studied at the Dutch Film Academy with an emphasis on both cinematography and directing. He went on to earn a scholarship sponsored by the Dutch government, which enabled him to enroll in the American Film Institute (AFI). Van Oostrum broke into the industry shooting television documentaries for several years. He has subsequently compiled a wide range of some 80-plus credits, including movies for television and the cinema, such as Gettysburg, Gods and Generals and occasional documentaries. He recently wrapped the final season of TV series The Fosters.

The 2018-2019 board that voted in this election includes John Bailey, Paul Cameron, Russell Carpenter, Curtis Clark, Dean Cundey, George Spiro Dibie, Stephen Lighthill, Lowell Peterson, Roberto Schaefer, John Toll and Amelia Vincent. Alternate board members are Karl-Walter Lindenlaub, Stephen Burum, David Darby, Charlie Lieberman and Eric Steelberg.

The ASC has over 20 committees driving the organization’s initiatives, such as the award-winning Motion Imaging Technology Council (MITC), and the Educational and Outreach committee.

We reached out to Van Oostrum to find out more:

How fulfilling has being ASC president been, either personally or professionally (or both)?
My presidency has been a tremendously fulfilling experience. The ASC grew its educational programs. The Master Class expanded from domestic to international locations, and eight to 10 classes a year are now being held based on demand (up from the four to five held in the program's inaugural year). Our public outreach activities have brought in over 7,000 students in the last two years, giving them a chance to meet ASC members and ask questions about cinematography and filmmaking.

Our digital presence has also grown, and the ASC and American Cinematographer websites are some of the most visited sites in our industry. Interest from the vendor community has expanded as well, introducing a broader range of companies who are involved in the image pipeline to our members. Then, our efforts to support ASC’s heritage, research and museum acquisitions have taken huge steps forward. I believe the ASC has grown into a relevant organization for people to watch.

What do you hope to accomplish in the coming year?
We will complete our Educational Center, a new building behind the historic ASC clubhouse in Hollywood; produce several online master classes about cinematography; produce two major documentaries about cinematography; and continue to strengthen our role as a technology partner through the efforts of our Motion Imaging Technology Council (formerly the ASC Technology Committee).

What are your proudest achievements from previous years?
I’m most proud of the success of the Master Classes, as well as the support and growth in the number of activities by the Vision Committee. I’m also pleased with the Chinese language edition of our magazine, and having cinematography stories shared in a global way. We’ve also beefed up our overall internal communications so members feel more connected.

Testing large format camera workflows

By Mike McCarthy

In the last few months, we have seen the release of the Red Monstro, Sony Venice, Arri Alexa LF and Canon C700 FF, all of which have larger or full-frame sensors. "Full frame" borrows DSLR terminology: a sensor equivalent to the entire 35mm film area, the way film ran horizontally through still cameras. All SLRs were full frame in the 35mm film era, so there was no need for the term until manufacturers started saving money on digital image sensors by making them smaller than a 35mm film exposure. Super35mm motion picture cameras, on the other hand, ran the film vertically, resulting in a smaller exposure area per frame, but that was still much larger than most video imagers until the last decade, when 2/3-inch chips were considered premium imagers. The options have grown a lot since then.
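As a rough sketch of how these formats compare, the crop factor of each relative to full frame follows from the sensor diagonals. The dimensions below are commonly cited approximations, not exact specs for any particular camera:

```python
import math

# Approximate active-area dimensions in mm (illustrative figures;
# exact dimensions vary by camera and film gate).
sensors = {
    "Full frame (still 35mm)": (36.0, 24.0),
    "Super35 (4-perf motion)": (24.9, 18.7),
    "2/3-inch video chip": (9.6, 5.4),
}

ff_diag = math.hypot(36.0, 24.0)  # full-frame diagonal, ~43.3mm
for name, (w, h) in sensors.items():
    diag = math.hypot(w, h)
    print(f"{name}: {w}x{h}mm, diagonal {diag:.1f}mm, "
          f"crop factor {ff_diag / diag:.2f}x")
```

The same arithmetic explains why a given focal length looks much wider on full frame than on a 2/3-inch chip.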

L-R: 1st AC Ben Brady, DP Michael Svitak and Mike McCarthy on the monitor.

Most of the top-end cinema cameras released over the last few years have advertised their Super35mm sensors as a huge selling point, since that allows use of any existing S35 lens on the camera. These S35 cameras include the Epic, Helium and Gemini from Red, Sony's F5 and F55, Panasonic's Varicam LT, Arri's Alexa and Canon's C100-500. On the top end, 65mm cameras like the Alexa 65 have sensors twice as wide as Super35, but very limited lens options to cover a sensor that large. Full frame falls somewhere in between and allows, among other things, use of any 35mm still-film lens. In the film world, this format was referred to as VistaVision, but the first widely used full-frame digital video camera was Canon's 5D MkII, the first serious HDSLR. The format has surged in popularity recently, and thanks to this I recently had the opportunity to be involved in a test shoot with a number of these new cameras.

Keslow Camera was generous enough to give DP Michael Svitak and myself access to pretty much all their full-frame cameras and lenses for the day in order to test the cameras, workflows and lens options for this new format. We also had the assistance of first AC Ben Brady to help us put all that gear to use, and Mike’s daughter Florendia as our model.

First off was the Red Monstro, which, while technically not the full 24mm height of true full frame, uses the same size lenses due to the width of its 17×9 sensor. It offers the highest resolution of the group at 8K. It records compressed RAW to R3D files, with options for ProRes and DNxHR up to 4K, all saved to Red mags. Like the rest of the group, smaller portions of the sensor can be used at lower resolutions to pair with smaller lenses. The Red Helium sensor has the same resolution in a much smaller Super35 size, allowing a wider selection of lenses to be used. But larger pixels allow more light sensitivity: individual pixels are up to 5 microns wide on the Monstro and Dragon, compared to Helium's 3.65-micron pixels.

Next up was Sony's new Venice camera with a 6K full-frame sensor, which allows 4K S35 recording as well. It records XAVC to SxS cards, or compressed RAW in the X-OCN format with the optional AXS-R7 external recorder, which we used. It is worth noting that both full-frame recording and integrated anamorphic support require additional licenses from Sony, but Keslow provided us with a camera that had all of that functionality enabled. With a 36x24mm 6K sensor, the pixels are 5.9 microns, and footage shot at 4K in S35 mode should be similar to shooting with the F55.

We unexpectedly had the opportunity to shoot on Arri's new Alexa LF (Large Format) camera. At 4.5K, it has the lowest resolution of the group, but that also means the largest sensor pixels, at 8.25 microns, which can increase sensitivity. It records ArriRaw or ProRes to Codex XR capture drives with its integrated recorder.

Another new option is the Canon C700 FF, with a 5.9K full-frame sensor recording RAW, ProRes or XAVC to CFast cards or Codex drives. That gives it 6-micron pixels, similar to the Sony Venice. We did not have the opportunity to test that camera this time around; maybe in the future.
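The pixel sizes quoted for these cameras are simply sensor width divided by horizontal photosite count. A quick sketch, using approximate published sensor widths (assumed figures, not official vendor specs):

```python
# Pixel pitch = sensor width / horizontal pixel count.
# Widths below are approximate published figures.
cameras = {
    "Red Monstro 8K VV":  (40.96, 8192),
    "Red Helium 8K S35":  (29.90, 8192),
    "Sony Venice 6K FF":  (36.00, 6048),
    "Arri Alexa LF 4.5K": (36.70, 4448),
    "Canon C700 FF 5.9K": (35.60, 5952),
}
for name, (width_mm, h_pixels) in cameras.items():
    pitch_um = width_mm / h_pixels * 1000  # mm -> microns
    print(f"{name}: {pitch_um:.2f}-micron pixels")
```

This makes the trade-off explicit: at the same resolution, a physically larger sensor means larger photosites and more light gathered per pixel.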

One more factor in all of this is the rising popularity of anamorphic lenses. All of these cameras support modes that use the part of the sensor covered by anamorphic lenses and can desqueeze the image for live monitoring and preview. In the digital world, anamorphic essentially cuts your overall resolution in half, until the unlikely event that we start seeing anamorphic projectors or cameras with rectangular sensor pixels. But the prevailing attitude appears to be, “We have lots of extra resolution available so it doesn’t really matter if we lose some to anamorphic conversion.”
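The arithmetic behind the anamorphic trade-off is straightforward: the lens squeezes the scene horizontally, so desqueezing multiplies the captured width by the squeeze factor, while the horizontal detail stays at whatever the sensor actually recorded. A sketch with illustrative numbers (not tied to any one camera here):

```python
def desqueeze_aspect(width, height, squeeze):
    """Delivered aspect ratio after horizontally desqueezing
    an anamorphic capture area of width x height."""
    return (width * squeeze) / height

# Classic 2x anamorphic on a 4:3 capture area -> ~2.67:1 'scope frame
print(f"{desqueeze_aspect(4, 3, 2.0):.2f}:1")

# A milder 1.6x squeeze on a ~1.725:1 capture area -> ~2.76:1
print(f"{desqueeze_aspect(1.725, 1, 1.6):.2f}:1")
```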

Post Production
So what does this mean for post? In theory, sensor size has no direct effect on the recorded files (besides their content), but resolution does. We also have a number of new formats to deal with, and then we have to handle anamorphic images during finishing.

Ever since I got my hands on one of Dell's new UP3218K monitors with an 8K screen, I have been collecting 8K assets to display on it. When I first started discussing this shoot with DP Michael Svitak, I was primarily interested in getting more 8K footage to test new 8K monitors, editing systems and software as they got released. I was anticipating getting Red footage, which I knew I could play back and process using my existing software and hardware.

The other cameras and lens options were added as the plan expanded, and by the time we got to Keslow Camera, they had filled a room with lenses and gear for us to test with. I also had a Dell 8K display connected to my ingest system, and the new 4K DreamColor monitor as well. This allowed me to view the recorded footage in the highest resolution possible.

Most editing programs, including Premiere Pro and Resolve, can handle anamorphic footage without issue, but new camera formats can be a bigger challenge. Any RAW file requires info about the sensor pattern in order to debayer it properly, and new compression formats are even more work to support. Sony's new compressed RAW format for Venice, called X-OCN, is supported in the 12.1 release of Premiere Pro, so I didn't expect that to be a problem. Its other recording option is XAVC, which should work as well. The Alexa, on the other hand, uses ArriRaw files, which have been supported in Premiere for years, but each new camera shoots a slightly different “flavor” of the file based on the unique properties of that sensor. Shooting ProRes instead would virtually guarantee compatibility, but at the expense of the RAW properties. (Maybe someday ProRes RAW will offer the best of both worlds.) The Alexa also has the challenge of recording to Codex drives that can only be offloaded in OS X or Linux.

Once I had all of the files on my system, after using a MacBook Pro to offload the media cards, I tried to bring them into Premiere. The Red files came in just fine but didn't play back smoothly above 1/4 resolution. They played smoothly in RedCineX with my Red Rocket-X enabled, and they export respectably fast in AME (a five-minute 8K anamorphic sequence to UHD H.265 in 10 minutes), but for some reason Premiere Pro isn't able to get smooth playback when using the Red Rocket-X. Next I tried the X-OCN files from the Venice camera, which imported without issue. They played smoothly on my machine but looked like they were locked to half or quarter res, regardless of what settings I used, even in the exports. I am currently working with Adobe to get to the bottom of that, because they are able to play back my files at full quality while all my systems have the same issue. Lastly, I tried to import the Arri files from the Alexa LF, but Adobe doesn't support that new variation of ArriRaw yet. I would anticipate that will happen soon, since it shouldn't be too difficult to add the new version to the existing support.

I ended up converting the files I needed to DNxHR in DaVinci Resolve so I could edit them in Premiere, and I put together a short video showing off the various lenses we tested with. Eventually, I need to learn how to use Resolve more efficiently, but the type of work I usually do lends itself to the way Premiere is designed — inter-cutting and nesting sequences with many different resolutions and aspect ratios. Here is a short clip demonstrating some of the lenses we tested with:

This is a web video, so even at UHD it is not meant as an analysis of the RAW image quality, but rather a demonstration of the field of view and overall feel of various lenses and camera settings. The combination of the larger sensors and the anamorphic lenses leads to an extremely wide field of view. The table was only about 10 feet from the camera, yet we can usually see all the way around it. We also discovered that when recording anamorphic on the Alexa LF, we were capturing a wider image than was displayed on the monitor output. You can see in the frame grab below that the live display visible on the right side of the image isn't showing the full content that got recorded, which is why we didn't notice we were recording with the wrong settings and so much vignetting from the lens.

We only discovered this after the fact, from this shot, so we didn’t get the opportunity to track down the issue to see if it was the result of a setting in the camera or in the monitor. This is why we test things before a shoot, but we didn’t “test” before our camera test, so these things happen.

We learned a lot from the process, and hopefully some of those lessons are conveyed here. A big thanks to Brad Wilson and the rest of the team at Keslow Camera for their gear and support of this adventure; hopefully, it will help people better prepare to shoot and post with this new generation of cameras.

Main Image: DP Michael Svitak


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.


Panavision Millennium DXL2’s ecosystem grows with color science, lenses, more

Panavision’s Millennium DXL2 8K camera was on display at Cine Gear last week featuring a new post-centric firmware upgrade, along with four new large-format lens sets, a DXL-inspired accessories kit for Red DSMC2 cameras and a preview of custom advancements in filter technology.

DXL2 incorporates technology advancements based on input from cinematographers, camera assistants and post production groups. The camera offers 16 stops of dynamic range with improved shadow detail, a native ISO setting of 1600 and 12-bit ProRes XQ up to 120fps. New to the DXL2 is version 1.0 of a directly editable (D2E) workflow. D2E gives DITs wireless LUT and CDL look control and records all color metadata into camera-generated proxy files for instant and render-free dailies.

DXL2, which is available to rent worldwide, also incorporates an updated color profile: Light Iron Color 2 (LiColor2). This latest color science provides cinematographers and DITs with a film-inspired tonal look that makes the DXL2 feel more cinematic and less digital.

Panavision also showcased their large-format spherical and anamorphic lenses. Four new large-format lens sets were on display:
• Primo X is a cinema lens designed for use on drones and gimbals. It's fully sealed, weatherproof and counterbalanced to be aerodynamic while easily maintaining a proper center of gravity. Primo X lenses come in two primes, 14mm (T3.1) and 24mm (T1.6), and one 24-70mm zoom (T2.8), and will be available in 2019.
• H Series is a traditionally designed spherical lens set with a rounded, soft roll-off, giving what the company calls a “pleasing tonal quality to the skin.” Created with vintage glass and coating, these lenses offer slightly elevated blacks for softer contrast. High speeds separate subject and background with a smooth edge transition, allowing the subject to appear naturally placed within the depth of the image. These lenses are available now.
• Ultra Vista is a series of large-format anamorphic optics. Using a custom 1.6x squeeze, Ultra Vista covers the full height of the 8K sensor in the DXL and presents an ultra-widescreen 2.76:1 aspect ratio along with a classic elliptical bokeh and Panavision horizontal flare. Ultra Vista lenses will be available in 2019.
• PanaSpeed is a large-format update of the classic Primo look. At T1.4, PanaSpeed is a fast large-format lens. It will be available in Q3 of 2018.

Panavision also showed an adjustable liquid crystal neutral density (LCND) filter. LCND adjusts up to six individual stops with a single click or ramp — a departure from traditional approaches to front-of-lens filters, which require carrying a set and manually swapping individual NDs based on changing light. LCND starts at 0.3 and goes through 0.6, 0.9, 1.2, 1.5, to 1.8. It will be available in 2019.
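ND strength is conventionally quoted as optical density, where each 0.3 of density cuts the light by one stop and transmission equals 10 to the minus density. A quick check of the range quoted above (standard densitometry math, not Panavision-specific data):

```python
import math

# Each 0.3 of neutral density ~= one stop; transmission = 10^-density.
for density in (0.3, 0.6, 0.9, 1.2, 1.5, 1.8):
    stops = math.log2(10 ** density)
    transmission = 10 ** -density
    print(f"ND {density}: {stops:.1f} stops, "
          f"{transmission:.1%} of light transmitted")
```

Running the numbers confirms why a 0.3-to-1.8 range corresponds to the six stops of adjustment described.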

Following up on the DXL1 and DXL2, Panavision launched the latest in its cinema line-up with the newly created DXL-M accessory kit. Designed to work with Red DSMC2 cameras, DXL-M marries the quality and performance of DXL with the smaller size and weight of the DSMC2. DXL-M brings popular features of DXL to Red Monstro, Gemini and Helium sensors, such as the DXL menu system (via an app for the iPhone), LiColor2, motorized lenses, wireless timecode (ACN) and the Primo HDR viewfinder. It will be available in Q4 of 2018.


Sony updates Venice to V2 firmware, will add HFR support

At CineGear, Sony introduced new updates and developments for its Venice CineAlta camera system including Version 2 firmware, which will now be available in early July.

Sony also showed the new Venice Extension System, which features expanded flexibility and enhanced ergonomics. Also announced was Sony’s plan for high frame rate support for the Venice system.

Version 2 adds new features and capabilities specifically requested by production pros to deliver more recording capability, customizable looks, exposure tools and greater lens freedom. Highlights include:

• High base ISO: with 15+ stops of exposure latitude, Venice will support a high base ISO of 2500 in addition to the existing ISO 500, taking full advantage of Sony's sensor for superb low-light performance with dynamic range from +6 stops to -9 stops as measured at 18% middle gray. This increases exposure indexes at higher ISOs for night exteriors, dark interiors, working with slower lenses, or content that needs to be graded in high dynamic range while maintaining maximum shadow detail.
• Select FPS (off-speed) in individual frame increments, from 1 to 60.
• Several new imager modes, including 25p in 6K full-frame, 25p in 4K 4:3 anamorphic, 6K 17:9, 1.85:1 and 4K 6:5 anamorphic.
• User-uploadable 3D LUTs, allowing users to customize their own looks and save them directly into the camera.
• Wired LAN remote control, letting users remotely change key functions, including camera settings, fps, shutter, EI, iris (Sony E-mount lenses), record start/stop and built-in optical ND filters.
• E-mount option: users can remove the PL mount and use a wide assortment of native E-mount lenses.

The Venice Extension System is a full-frame tethered extension system that allows the camera body to detach from the image sensor block, with no degradation in image quality, at up to 20 feet apart. It is the result of Sony's long-standing collaboration with James Cameron's Lightstorm Entertainment.

“This new tethering system is a perfect example of listening to our customers, gathering strong and consistent feedback, and then building that input into our product development,” said Peter Crithary, marketing manager for motion picture cameras, Sony. “The Avatar sequels will be among the first feature films to use the new Venice Extension System, but it also has tremendous potential for wider use with handheld stabilizers, drones, gimbals and remote mounting in confined places.”

Also at CineGear, Sony shared the details of a planned optional upgrade to support high frame rate — targeting speeds up to 60fps in 6K, up to 90fps in 4K and up to 120fps in 2K. It will be released in North America in the spring of 2019.


Red simplifies camera lineup with one DSMC2 brain

Red Digital Cinema modified its camera lineup to include one DSMC2 camera Brain with three sensor options — Monstro 8K VV, Helium 8K S35 and Gemini 5K S35. The single DSMC2 camera Brain includes high-end frame rates and data rates regardless of the sensor chosen. In addition, this streamlined approach will result in a price reduction compared to Red's previous camera lineup.

“We have been working to become more efficient, as well as align with strategic manufacturing partners to optimize our supply chain,” says Jarred Land, president of Red Digital Cinema. “As a result, I am happy to announce a simplification of our lineup with a single DSMC2 brain with multiple sensor options, as well as an overall reduction on our pricing.”

Red’s DSMC2 camera Brain is a modular system that allows users to configure a fully operational camera setup to meet their individual needs. Red offers a range of accessories, including display and control functionality, input/output modules, mounting equipment, and methods of powering the camera. The camera Brain is capable of up to 60fps at 8K, offers 300MB/s data transfer speeds and simultaneous recording of RedCode RAW and Apple ProRes or Avid DNxHD/HR.
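To put the quoted 300MB/s transfer rate in storage terms, here is a back-of-envelope sketch (sustained record rates vary with resolution, frame rate and RedCode compression, so treat these as rough assumptions):

```python
# Rough storage math at an assumed sustained 300 MB/s.
rate_mb_s = 300

gb_per_minute = rate_mb_s * 60 / 1000        # decimal GB
tb_per_hour = gb_per_minute * 60 / 1000      # decimal TB
print(f"{gb_per_minute:.0f} GB per minute of footage")   # 18 GB
print(f"{tb_per_hour:.2f} TB per hour of footage")       # 1.08 TB
```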

The Red DSMC2 camera Brain and sensor options:
– DSMC2 with Monstro 8K VV offers cinematic full frame lens coverage, produces ultra-detailed 35.4 megapixel stills and offers 17+ stops of dynamic range for $54,500.
– DSMC2 with Helium 8K S35 offers 16.5+ stops of dynamic range in a Super 35 frame, and is available now for $24,500.
– DSMC2 with Gemini 5K S35 uses dual sensitivity modes to provide creators with greater flexibility, using standard mode for well-lit conditions or low-light mode for darker environments. It is priced at $19,500.

Red will begin to phase out new sales of its Epic-W and Weapon camera Brains starting immediately. In addition to the changes to the camera line-up, Red will also begin offering new upgrade paths for customers looking to move from older Red camera systems or from one sensor to another. The full range of upgrade options can be found here.



The Duffer Brothers: Showrunners on Netflix’s Stranger Things

By Iain Blair

Kids in jeopardy! The Demogorgon! The Hawkins Lab! The Upside Down! Thrills and chills! Since they first pitched their idea for Stranger Things, a love letter to 1980s genre films set in 1983 Indiana, twin brothers Matt and Ross Duffer have quickly established themselves as masters of suspense in the science-fiction and horror genres.

The series was picked up by Netflix, premiered in the summer of 2016, and went on to become a global phenomenon, with the brothers at the helm as writers, directors and executive producers.

The Duffer Brothers

The atmospheric drama, about a group of nerdy misfits and strange events in an outwardly average small town, nailed its early ’80s vibe and overt homages to that decade’s master pop storytellers: Steven Spielberg and Stephen King. It quickly made stars out of its young ensemble cast — Millie Bobby Brown, Natalia Dyer, Charlie Heaton, Joe Keery, Gaten Matarazzo, Caleb McLaughlin, Noah Schnapp, Sadie Sink and Finn Wolfhard.

It also quickly attracted a huge, dedicated fan base and critical plaudits, and it has won a ton of awards, including Emmys, a SAG Award for Best Ensemble in a Drama Series and two Critics Choice Awards (Best Drama Series and Best Supporting Actor in a Drama Series). The show has also been nominated for a number of Golden Globes.

I recently talked with the Duffers, who are already hard at work on the highly anticipated third season (which will premiere on Netflix in 2019), about making the ambitious hit series, their love of post and editing, and VFX.

How’s the new season going?
Matt Duffer: We’re two weeks into shooting, and it’s going great. We’re very excited about it as there are some new tones and it’s good to be back on the ground with everyone. We know all the actors better and better, the kids are getting older and are becoming these amazing performers — and they were great before. So we’re having a lot of fun.

Are you shooting in Atlanta again?
Ross Duffer: We are, and we love it there. It’s really our home base now, and we love all these pockets of neighborhoods that have not changed at all since the ‘80s, and there is an incredible variety of locations. We’re also spreading out a lot more this season and not spending so much time on stages. We have more locations to play with.

Will all the episodes be released together next year, like last time? That would make binge-watchers very happy.
Matt: Yes, but we like to think of it more as like a big movie release. To release one episode per week feels so antiquated now.

The show has a very cinematic look and feel, so how do you balance that with the demands of TV?
Ross: It’s interesting, because we started out wanting to make movies and we love genre, but with a horror film they want big scares every few minutes. That leaves less room for character development. But with TV, it’s always more about character, as you just can’t sustain hours and hours of a show if you don’t care about the people. So ‘Stranger Things’ was a world where we could tell a genre story, complete with the monster, but also explore character in far more depth than we could in a movie.

Matt: Movies and TV are almost opposites in that way. In movies, it’s all plot and no character, and in TV it’s about character and you have to fight for plot. We wanted this to have pace and feel more like a movie, but still have all the character arcs. So it’s a constant balancing act, and we always try and favor character.

Where do you post the show?
Matt: All in Hollywood, and the editors start working while we’re shooting. After we shoot in Atlanta, we come back to our offices and do all the post and VFX work right there. We do all the sound mix and all the color timing at Technicolor down the road. We love post. You never have enough time on the set, and there’s all this pressure if you want to redo a shot or scene, but in post if a scene isn’t working we can take time to figure it out.

Tell us about the editing. I assume you’re very involved?
Ross: Very. We have two editors this season. We brought back one of our original editors, Dean Zimmerman, from season one. We are also using Nat Fuller, who was on season two. He was Dean’s assistant originally and then moved up, so they’ve been with us since the start. Editing’s our favorite part of the whole process, and we’re right there with them because we love editing. We’re very hands on and don’t just give notes and walk away. We’re there the whole time.

Aren’t you self-taught in terms of editing?
Matt: (Laughs) I suppose. We were taught the fundamentals of Avid at film school, but you’re right. We basically taught ourselves to edit as kids, and we started off just editing in-camera, stopping and starting, and playing the music from a tape recorder. They weren’t very good, but we got better.

When iMovie came out we learned how to put scenes together, so in college the transition to Avid wasn’t that hard. We fell in love with editing and just how much you can elevate your material in post. It’s magical what you can do with the pace, performances, music and sound design, and then you add all the visual effects and see it all come together in post. We love seeing the power of post as you work to make your story better and better.

How early on do you integrate post and VFX with the production?
Ross: On day one now. The biggest change from season one to two was that we integrated post far earlier in the second season — even in the writing stage. We had concept artists and the VFX guys with us the whole time on set, and they were all super-involved. So now it all kind of happens together.

All the VFX are a much bigger deal. For last season we had a lot more VFX than the first year — about 1,400 shots, which is a huge amount, like a big movie. The first season it wasn’t a big deal. It was a very old-school approach, with mainly practical effects, and then in the middle we realized we were being a bit naïve, so we brought in Paul Graff as our VFX supervisor on season two, and he’s very experienced. He’s worked on big movies like The Wolf of Wall Street as well as Game of Thrones and Boardwalk Empire, and he’s doing this season too. He’s in Atlanta with us on the shoot.

We have two main VFX houses on the show — Atomic Fiction and Rodeo — they’re both incredible, and I think all the VFX are really cinematic now.

But isn’t it a big challenge in terms of a TV show’s schedule?
Ross: You’re right, and it’s always a big time crunch. Last year we had to meet that Halloween worldwide release date and we were cutting it so close trying to finish all the shots in time.

Matt: Everyone expects movie-quality VFX — just in a quarter of the time, or less. So it’s all accelerated.

The show has a very distinct, eerie, synth-heavy score by Kyle Dixon and Michael Stein, the Grammy-nominated duo. How important are the music and sound, which won several Emmys last year?
Ross: It’s huge. We use it so much for transitions, and we have great sound designers — including Brad North and Craig Henighan — and great mixers, and we pay a lot of attention to all of it. I think TV has always put less emphasis on great sound compared to film, and again, you’re always up against the scheduling, so it’s always this balancing act.

You can’t mix it for a movie theater as very few people have that set up at home, so you have to design it for most people who’re watching on iPhones, iPads and so on, and optimize it for that, so we mostly mix in stereo. We want the big movie sound, but it’s a compromise.

The DI must be vital?
Matt: Yes, and we work very closely with colorist Skip Kimball (who recently joined Efilm), who’s been with us since the start. He was very influential in terms of how the show ended up looking. We’d discussed the kind of aesthetic we wanted, and things we wanted to reference and then he played around with the look and palette. We’ve developed a look we’re all really happy with. We have three different LUTs on set designed by Skip and the DP Tim Ives will choose the best one for each location.

Everyone’s calling this the golden age of TV. Do you like being showrunners?
Ross: We do, and I feel we’re very lucky to have the chance to do this show — it feels like a big family. Yes, we originally wanted to be movie directors, but we didn’t come into this industry at the right time, and Netflix has been so great and given us so much creative freedom. I think we’ll do a few more seasons of this, and then maybe wrap it up. We don’t want to repeat ourselves.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.


postPerspective names NAB Impact Award MVPs and winners

NAB is a bear. Anyone who has attended this show can attest to that. But through all the clutter, postPerspective sought out the best of the best for our Impact Awards. We turned to a panel of esteemed industry pros (to whom we are very grateful!) to cast their votes on what they thought would be most impactful to their day-to-day workflows, and those of their colleagues.

In addition to our Impact Award winners, this year we are also celebrating two pieces of technology that not only caused a big buzz around the show, but are also bringing things a step further in terms of technology and workflow: Blackmagic’s DaVinci Resolve 15 and Apple’s ProRes RAW.

With ProRes RAW, Apple has introduced a new, high-quality video recording codec that has already been adopted by three competing camera vendors — Sony, Canon and Panasonic. According to Mike McCarthy, one of our NAB bloggers and regular contributors, “ProRes RAW has the potential to dramatically change future workflows if it becomes even more widely supported. The applications of RAW imaging in producing HDR content make the timing of this release optimal to encourage vendors to support it, as they know their customers are struggling to figure out simpler solutions to HDR production issues.”

Fairlight’s audio tools are now embedded in the new Resolve 15.

With Resolve 15, Blackmagic has launched the product further into a wide range of post workflows, and they haven’t raised the price. This standalone app — which comes in a free version — provides color grading, editing, compositing and even audio post, thanks to the DAW Fairlight, which is now built into the product.

These two technologies are Impact Award winners, but our judges felt they stood out enough to be called postPerspective Impact Award MVPs.

Our other Impact Award winners are:

• Adobe for Creative Cloud

• Arri for the Alexa LF

• Codex for Codex One Workflow and ColorSynth

• FilmLight for Baselight 5

• Flanders Scientific for the XM650U monitor

• Frame.io for the All New Frame.io

• Shift for their new Shift Platform

• Sony for their 8K CLED display

In a sea of awards surrounding NAB, the postPerspective Impact Awards stand out, and are worth waiting for, because they are voted on by working post professionals.

Flanders Scientific’s XM650U monitor.

“All of these technologies from NAB are very worthy recipients of our postPerspective Impact Awards,” says Randi Altman, postPerspective’s founder and editor-in-chief. “These awards celebrate companies that push the boundaries of technology to produce tools that actually have an impact on workflows as well as the ability to make users’ working lives easier and their projects better. This year we have honored 10 different products that span the production and post pipeline.

“We’re very proud of the fact that companies don’t ‘submit’ for our awards,” continues Altman. “We’ve tapped real-world users to vote for the Impact Awards, and they have determined what could be most impactful to their day-to-day work. We feel it makes our awards quite special.”

With our Impact Awards, postPerspective also hopes to give those who weren’t at the show, or who were unable to see it all, a starting point for their research into new gear that might be right for their workflows.

postPerspective Impact Awards are next scheduled to celebrate innovative product and technology launches at SIGGRAPH 2018.


Atomos at NAB offering ProRes RAW recorders

Atomos is at this year’s NAB showing support for ProRes RAW, a new format from Apple that combines the performance of ProRes with the flexibility of RAW video. The ProRes RAW update will be available free for the Atomos Shogun Inferno and Sumo 19 devices.

Atomos devices are currently the only monitor recorders to offer ProRes RAW, with realtime recording from the sensor output of Panasonic, Sony and Canon cameras.

The new upgrade brings ProRes RAW and ProRes RAW HQ recording, monitoring, playback and tag editing to all owners of an Atomos Shogun Inferno or Sumo 19 device. Once installed, it will allow the capture of RAW images in up to 12-bit RGB — direct from many of our industry’s most advanced cameras onto affordable SSD media. ProRes RAW files can be imported directly into Final Cut Pro 10.4.1 for high-performance editing, color grading and finishing on Mac laptop and desktop systems.
Eight popular cine cameras with a RAW output — including the Panasonic AU-EVA1, Varicam LT, Sony FS5/FS7 and Canon C300mkII/C500 — will be supported with more to follow.

With this ProRes RAW support, filmmakers can work easily with RAW – whether they are shooting episodic TV, commercials, documentaries, indie films or social events.

Shooting ProRes RAW preserves maximum dynamic range, with a 12-bit depth and wide color gamut — essential for HDR finishing. The new format, which is available in two compression levels — ProRes RAW and ProRes RAW HQ — preserves image quality with low data rates and file sizes much smaller than uncompressed RAW.

Through ProRes RAW, Atomos recorders offer increased flexibility in captured frame rates and resolutions. Atomos devices can record ProRes RAW at up to 2K at 240 frames per second, or 4K at up to 120 frames per second. Higher resolutions, such as 5.7K from the Panasonic AU-EVA1, are also supported.

Atomos’ OS, AtomOS 9, gives users filming tools that allow them to work efficiently and creatively with ProRes RAW on portable devices. Fast connections in and out and advanced HDR screen processing mean every pixel is accurately and instantly available for on-set creative playback and review. Pull the SSD out and dock it to your Mac over Thunderbolt 3 or USB-C 3.1 for immediate, fast post production.

Download the AtomOS 9 update for Shogun Inferno and Sumo 19 at www.atomos.com/firmware.


B&H expands its NAB footprint to target multiple workflows

By Randi Altman

In a short time, many in our industry will be making the pilgrimage to Las Vegas for NAB. They will come (if they are smart) with their comfy shoes, Chapstick and the NAB Show app, and plot a course for the most efficient way to see all they need to see.

NAB is a big show that spans a large footprint, and typically companies showing their wares need to pick a hall — Central, South Lower, South Upper or North. This year, however, The Studio-B&H made some pros’ lives a bit easier by adding a booth in South Lower in addition to their usual presence in Central Hall.

B&H’s business and services have grown, so it made perfect sense to Michel Suissa, managing director at The Studio-B&H, to grow their NAB presence to include many of the digital workflows the company has been servicing.

We reached out to Suissa to find out more.

This year B&H and its Studio division are in the South Lower. Why was it important for you guys to have a presence in both the Central and South Halls this year?
The Central Hall has been our home for a long time and it remains our home with our largest footprint, but we felt we needed to have a presence in South Hall as well.

Production and post workflows merge and converge constantly, and we need to be knowledgeable in both. The simple fact is that we serve all segments of our industry, not just image acquisition and camera equipment. Our presence in image- and data-centric workflows has grown by leaps and bounds.

This world is a familiar one for you personally.
That’s true. The post and VFX worlds are very dear to me. I was an editor, Flame artist and colorist for 25 years. This background certainly plays a role in expanding our reach and services to these communities. The Studio-B&H team is part of a company-wide effort to grow our presence in these markets. From a business standpoint, the South Hall attendees are also our customers, and we needed to show we are here to assist and support them.

What kind of workflows should people expect to see at both your NAB locations?
At the South Hall, we will present a whole range of solutions to show the breadth and diversity of what we have to offer. That includes VR post workflow, color grading, animation and VFX, editing and high-performance Flash storage.

In addition to the new booth in South Hall, we have two in Central. One is for B&H’s main product offerings, including our camera shootout, which is a pillar of our NAB presence.

This Studio-B&H booth features a digital cinema and broadcast acquisition technology showcase, including hybrid SDI/IP switching, 4K studio cameras, a gyro-stabilized camera car, the most recent full-frame cinema cameras, and our lightweight cable cam, the DynamiCam.

Our other Central Hall location is where our corporate team can discuss all business opportunities with new and existing B2B customers.

How has The Studio-B&H changed along with the industry over the past year or two?
We have changed quite a bit. With our services and tools, we have reinvented our image, from equipment provider to solution provider.

Our services now range from system design to installation and deployment. One of the more notable recent examples is our collaboration with HBO Sports on World Championship Boxing. The Studio-B&H team was instrumental in deploying our DynamiCam system to cover several live fights in different venues and integrating with NEP’s mobile production team. This is part of an entirely new type of service — something the company had never offered its customers before. It is a true game-changer for our presence in the media and entertainment industry.

What do you expect the “big thing” to be at NAB this year?
That’s hard to say. Markets are in transition with a number of new technology advancements: machine learning and AI, cloud-based environments, momentum for the IP transition, AR/VR, etc.

On the acquisition side, full frame/large sensor cameras have captured a lot of attention. And, of course, HDR will be everywhere. It’s almost not a novelty anymore. If you’re not taking advantage of HDR, you are living in the past.

Red’s new Gemini 5K S35 sensor offers low-light and standard modes

Red Digital Cinema’s new Gemini 5K S35 sensor for its Red Epic-W camera leverages dual-sensitivity modes, allowing shooters to use standard mode for well-lit conditions or low-light mode for darker environments.

In low-light conditions, the Gemini 5K S35 sensor allows for cleaner imagery with less noise and better shadow detail. Camera operators can easily switch between modes through the camera’s on-screen menu with no downtime.

The Gemini 5K S35 sensor offers an increased field of view at 2K and 4K resolutions compared to the higher-resolution Red Helium sensor. In addition, the sensor’s 30.72mm x 18mm dimensions allow for greater anamorphic lens coverage than with the Helium or Red Dragon sensors.

“While the Gemini sensor was developed for low-light conditions in outer space, we quickly saw there was so much more to this sensor,” explains Jarred Land, president of Red Digital Cinema. “In fact, we loved the potential of this sensor so much, we wanted to evolve it for broader appeal. As a result, the Epic-W Gemini now sports dual-sensitivity modes. It still has the low-light performance mode, but also has a default, standard mode that allows you to shoot in brighter conditions.”

Built on the compact DSMC2 form factor, this new camera and sensor combination captures 5K full-format motion at up to 96fps, along with data speeds of up to 275MB per second. Additionally, it supports Red’s IPP2 enhanced image processing pipeline in-camera. Like all of Red’s DSMC2 cameras, the Epic-W can record Redcode RAW and Apple ProRes or Avid DNxHD/HR simultaneously, and it adheres to Red’s “Obsolescence Obsolete” program, which allows current Red owners to upgrade their technology as innovations are unveiled. It also lets them move between camera systems without having to purchase all new gear.

Starting at $24,500, the new Red Epic-W with Gemini 5K S35 sensor is available for purchase now. Alternatively, Weapon Carbon Fiber and Red Epic-W 8K customers will have the option to upgrade to the Gemini sensor at a later date.