Author Archives: Randi Altman

Veteran colorist Walter Volpatto joins Efilm

Walter Volpatto, a colorist with 15 years under his belt, has joined LA’s Efilm. His long list of credits includes Dunkirk, Star Wars: The Last Jedi and, most recently, Amazon Studios’ series Homecoming.

As a colorist, Volpatto’s style gravitates toward an aesthetic of realism, though his projects span genres from drama and action to comedy and documentary. Recent examples include the just-released Green Book, directed by Peter Farrelly; Quentin Tarantino’s The Hateful Eight; Independence Day: Resurgence, directed by Roland Emmerich; and Bad Moms, directed by Jon Lucas and Scott Moore.

He joins Efilm from Fotokem, where he started in digital intermediate before progressively shifting toward fully digital workflows while navigating emerging technologies such as HDR. (Watch our interview with him about his work on The Last Jedi.)

Volpatto found his way into color finishing by way of visual effects, a career he initially pursued as an outlet for his passion for photography. He began working as a digital intermediate artist at Cinecitta in Rome in 2002, before relocating to Los Angeles the following year. Since then, he’s continued honing his skillset for both film and digital, while also expanding his knowledge of color science.

While known for his feature film work, Volpatto periodically works in episodic television. Based at Efilm’s Hollywood facility, he will work in many of Deluxe’s color grading suites, including the newly opened Stage One, grading on Blackmagic’s DaVinci Resolve.

Technicolor welcomes colorists Trent Johnson and Andrew Francis

Technicolor in Los Angeles will be beefing up its color department in January with the addition of colorists Andrew Francis and Trent Johnson.

Francis joins Technicolor after spending the last three years building the digital intermediate department at Sixteen19 in New York. His recent credits include Second Act, Night School, Hereditary and Girls Trip. Francis is a trained fine artist who has established a strong reputation for integrating bleeding-edge technology in support of the craft of color.

Johnson, a Technicolor alumnus, returns after stints as a digital colorist at MTI, Deluxe and Sony Colorworks. His recent credits include horror hits Slender Man and The Possession of Hannah Grace, as well as comedies Overboard and Ted 2.

Johnson will be using FilmLight’s Baselight and Resolve for his work, while Francis will toggle between Resolve, Baselight and Lustre, depending on the project.

Francis and Johnson join Technicolor LA’s roster, which includes Pankaj Bajpai, Tony Dustin, Doug Delaney, Jason Fabbro, recent HPA award-winner Maxine Gervais, Michael Hatzer, Roy Vasich, Tim Vincent, Sparkle and others.

Main Image: Trent Johnson and Andrew Francis

Making an animated series with Adobe Character Animator

By Mike McCarthy

In a departure from my normal film production technology focus, I have also been working on an animated web series called Grounds of Freedom. Over the past year I have been directing the effort and working with a team of people across the country who are helping in various ways. After a year of meetings, experimentation and work, we finally started releasing finished episodes on YouTube.

The show takes place in Grounds of Freedom, a coffee shop where a variety of animated mini-figures gather to discuss freedom and its application to present-day cultural issues and events. The show is created with a workflow that weaves through a variety of Adobe Creative Cloud apps. Back in October I presented our workflow during Adobe Max in LA, and I wanted to share it with postPerspective’s readers as well.

When we first started planning for the series, we considered using live action. Ultimately, after being inspired by the preview releases of Adobe Character Animator, I decided to pursue a new digital approach to brick filming (making films with Legos), which is traditionally accomplished through stop-motion animation. Once everyone else realized the simpler workflow possibilities and increased level of creative control offered by that new animation process, they were excited to pioneer this new approach. Animation gives us more control and flexibility over the message and dialog, lowers production costs and eases collaboration over long distances, as there is no “source footage” to share.

Creating the Characters
The biggest challenge to using Character Animator is creating digital puppets, which are deeply layered Photoshop PSDs with very precise layer naming and stacking. There are ways to generate the underlying source imagery in 3D animation programs, but I wanted the realism and authenticity of sourcing from actual photographs of the models and figures. So we took lots of 5K macro shots of our sets and characters in various positions with our Canon 60D and 70D DSLRs and cut out hundreds of layers of content in Photoshop to create our characters and all of their various possible body positions. The only thing that was synthetically generated was the various facial expressions digitally painted onto their clean yellow heads, usually to match an existing physical reference character face.

Mike McCarthy shooting stills.

Once we had our source imagery organized into huge PSDs, we rigged those puppets in Character Animator with various triggers, behaviors and controls. The walking was accomplished by cycling through various layers, instead of the default bending of the leg elements. We created arm movement by mapping each arm position to a MIDI key. We controlled facial expressions and head movement via webcam, and the mouth positions were calculated by the program based on the accompanying audio dialog.
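To make that trigger mapping concrete, here is a minimal Python sketch of the idea of binding MIDI notes to arm-position layers. The note numbers and layer names are hypothetical stand-ins, not values from the actual puppets; Character Animator handles this mapping internally.

```python
# Sketch of the trigger idea: each MIDI note toggles one artwork layer,
# the way arm positions were mapped to keys in Character Animator.
# Note numbers and layer names here are illustrative only.

ARM_TRIGGERS = {
    60: "Arm_Down",     # C4
    62: "Arm_Forward",  # D4
    64: "Arm_Raised",   # E4
}

def active_arm_layer(midi_note, current="Arm_Down"):
    """Return the arm layer to display for an incoming MIDI note."""
    return ARM_TRIGGERS.get(midi_note, current)  # unknown notes keep the current pose

print(active_arm_layer(62))  # -> "Arm_Forward"
```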

Animating Digital Puppets
The puppets had to be finished and fully functional before we could start animating on the digital stages we had created. We had been writing the scripts during that time, parallel to generating the puppet art, so we were ready to record the dialog by the time the puppets were finished. We initially attempted to record live in Character Animator while capturing the animation motions as well, but Character Animator didn’t offer the level of audio editing functionality we needed. So during that first session we switched over to Adobe Audition and planned to animate as a separate process once the audio was edited.

That whole idea of capturing audio and facial animation data live is laughable now, looking back, since we usually spend a week editing the dialog before we do any animating. We edited each character’s audio on a separate track and exported those separate tracks to Character Animator. We computed lipsync for each puppet based on its dedicated dialog track and usually exported immediately. This provided a draft visual that allowed us to continue editing the dialog within Premiere Pro. Having a visual reference makes a big difference when trying to determine how a conversation will feel, so that was an important step — even though we had to throw away our previous work in Character Animator once we made significant edit changes that altered sync.

We repeated the process once we had a more final edit. We carried on from there in Character Animator, recording arm and leg motions with the MIDI keyboard in realtime for each character. Once those trigger layers had been cleaned up and refined, we recorded the facial expressions, head positions and eye gaze with a single pass on the webcam. Every re-record to alter a particular section adds a layer to the already complicated timeline, so we limited that as much as possible, usually re-recording instead of making quick fixes unless we were nearly finished.

Compositing the Characters Together
Once we had fully animated scenes in Character Animator, we would turn off the background elements and isolate each character layer to be exported in Media Encoder via Dynamic Link. I did a lot of testing before settling on JPEG2000 MXF as the format of choice. I wanted a highly compressed file but needed alpha channel support, and that was the best option available. Each of those renders became a character layer, which was composited into our stage layers in After Effects. We could have dynamically linked the characters directly into AE, but with that many layers it would have decreased performance for the interactive part of the compositing work. We added shadows and reflections in AE, as well as various other effects.

Walking was one of the most challenging effects to properly recreate digitally. Our layer cycling in Character Animator resulted in a static figure swinging its legs, but people (and mini figures) have a bounce to their step, and move forward at an uneven rate as they take steps. With some pixel measurement and analysis, I was able to use anchor point keyframes in After Effects to get a repeating movement cycle that made the character appear to be walking on a treadmill.

I then used carefully calculated position keyframes to add the appropriate amount of travel per frame for the feet to stick to the ground, which varies based on the scale as the character moves toward the camera. (In my case the velocity was half the scale value, in pixels per second.) We then duplicated that layer to create the reflection and shadow of the character as well. That result can then be composited onto various digital stages. In our case, the first two shots of the intro were designed to use the same walk animation with different background images.
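As a rough illustration of that position math, the sketch below derives per-frame travel from the stated rule that velocity is half the layer’s scale value in pixels per second. The frame rate, scale ramp and loop are assumptions for illustration; the actual shots used hand-calculated keyframes in After Effects.

```python
# Minimal sketch of the walk-travel math, assuming a 24fps comp.
FPS = 24.0

def travel_per_frame(scale_percent):
    """Pixels of horizontal travel per frame at a given layer scale."""
    velocity_px_per_sec = 0.5 * scale_percent  # the "half the scale value" rule
    return velocity_px_per_sec / FPS

# Walk a character in from 50% to 60% scale over 48 frames (two seconds):
x = 0.0
for frame in range(48):
    scale = 50 + 10 * frame / 47   # linear scale ramp toward the camera
    x += travel_per_frame(scale)
print(round(x, 1))  # total horizontal travel in pixels (~55.0)
```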

All of the character layers were pre-comped, so we only needed to update a single location when a new version of a character was rendered out of Media Encoder, or when we brought in a dynamically linked layer. The update would propagate through all the necessary comp layers to generate updated reflections and shadows. Once the main compositing work was finished, we usually only needed to make slight changes in each scene between episodes. These scenes were composited at 5K, based on the resolution of the DSLR photos of the sets we had built. These 5K plates could be dynamically linked directly into Premiere Pro and occasionally used later in the process to ripple slight changes through the workflow. For the interactive work, we got far better editing performance by rendering out flattened files. We started with DNxHR 5K assets but eventually switched to HEVC files, since they were 30x smaller and imperceptibly different in quality with our relatively static animated content.

Editing the Animated Scenes
In Premiere Pro, we had the original audio edit, and usually a draft render of the characters with just their mouths moving. Once we had the plate renders, we placed them each in their own 5K scene sub-sequence and used those sequences as source on our master timeline. This allowed us to easily update the content when new renders were available, or source from dynamically linked layers instead if needed. Our master timeline was 1080p, so with 5K source content we could push in two and a half times the frame size without losing resolution. This allowed us to digitally frame every shot, usually based on one of two rendered angles, and gave us lots of flexibility all the way to the end of the editing process.
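A quick back-of-the-envelope check of that punch-in headroom, assuming a nominal 5K plate width of 5120 pixels (the exact width depends on the DSLR crop):

```python
# How far you can punch in before stretching source pixels past 1:1.
source_width = 5120    # assumed nominal 5K plate width
timeline_width = 1920  # 1080p master timeline
print(source_width / timeline_width)  # ~2.67x, roughly the 2.5x cited above
```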

Collaborative Benefits of Dynamic Link
While Dynamic Link doesn’t offer the best playback performance without making temp renders, it does have two major benefits in this workflow. It ripples changes to the source PSD all the way to the final edit in Premiere just by bringing each app into focus once. (I added a name tag to one character’s PSD during my presentation, and 10 seconds later it was visible throughout my final edit.) Even more importantly, it allows us to collaborate online without having to share any exported video assets. As long as each member of the team has the source PSD artwork and audio files, all we have to exchange online are the Character Animator project (which is small once the temp files are removed), the .AEP file and the .PrProj file.

This gives any of us the option to render full-quality visual assets anytime we need them, but the work we do on those assets is all contained within the project files that we sync to each other. The coffee shop was built and shot in Idaho, our voice artist was in Florida, and our puppets’ faces were created in LA. I animate and edit in Northern California, the AE compositing was done in LA, and the audio is mixed in New Jersey. We did all of that with nothing but a Dropbox account, using the workflow I have just outlined.

Past that point, it was a fairly traditional finish: we edited in music and sound effects and sent an OMF to Steve, our sound guy at DAWPro Studios (http://dawpro.com/photo_gallery.html), for the final mix. During that time we added other b-roll visuals and other effects, and once we had the final audio back, we rendered the final result to H.264 at 1080p and uploaded it to YouTube.


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Rohde & Schwarz’s storage system R&S SpycerNode shipping

First shown at IBC 2018, Rohde & Schwarz’s new media storage system, R&S SpycerNode, is now available for purchase. The system uses a High Performance Computing (HPC) design, a combination of hardware, file system and RAID approach aimed at performance, scalability and redundancy. For redundancy, HPC employs software RAID technology known as erasure coding, in combination with declustering, to increase performance and reduce rebuild times. System scalability is almost unlimited, and the system can be expanded during operation.

According to Rohde & Schwarz, in creating this new storage system its engineers looked at many of the key issues that impact media storage systems within high-performance video editing environments, from annoying maintenance requirements, such as defragging, to much more serious system failures, including dying disk drives.

R&S SpycerNode features Rohde & Schwarz’s device manager web application, which makes it much easier to set up and use Rohde & Schwarz solutions in an integrated fashion. Device manager helps reduce setup times and simplifies maintenance and service thanks to its intuitive web-based UI, operated through a single client.

To ensure data security, Rohde & Schwarz has introduced data protection based on erasure coding and declustering within the R&S SpycerNode. Erasure coding means that every data block is written together with parity information, so lost blocks can be reconstructed.

Declustering is part of the data protection approach of HPC setups, taking the place of traditional RAID. It is software-based, and in comparison to a traditional RAID setup, the spare capacity is spread across all the disks rather than sitting on a dedicated spare disk. This decreases rebuild times and reduces the performance impact of a rebuild. There are also no RAID-controller limitations, which results in much higher IOPS (input/output operations per second). Importantly, declustering means there is no impact on system performance over time.
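As a toy illustration of the erasure-coding principle (not the actual R&S implementation, which uses more sophisticated codes spread across declustered disks), here is a simple XOR-parity sketch in Python showing how a lost block can be rebuilt from the survivors:

```python
# Toy erasure coding with XOR parity (RAID-5 style), for illustration only.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data_blocks = [b"AAAA", b"BBBB", b"CCCC"]   # a stripe of data blocks
parity = xor_blocks(data_blocks)            # parity written alongside the data

# Simulate losing one block (a failed disk) and rebuilding it:
lost_index = 1
survivors = [blk for i, blk in enumerate(data_blocks) if i != lost_index]
rebuilt = xor_blocks(survivors + [parity])
assert rebuilt == data_blocks[lost_index]   # the lost block is recovered
```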

R&S SpycerNode comes in multiple 2U and 5U chassis designs, which are available with NL-SAS HDDs and SAS SSDs in different capacities. An additional 2U24 chassis design is a pure flash system with main processor units and JBOD units. A main unit is always redundant, equipped with two appliance controllers (APs). Each AP features two 100Gb interfaces, resulting in four 100Gb interfaces per main unit.

The combination of different chassis systems makes R&S SpycerNode applicable to a very broad range of applications. The 2U system is a compact, lightweight unit that works well in mobile productions while also serving as a very dense, high-speed storage device for on-premise applications. The larger 5U system offers sophisticated large-scale storage on-premise within broadcast production centers and post facilities.

VFX supervisor Simon Carr joins London’s Territory

Simon Carr has joined visual effects house Territory, bringing with him 20 years of experience as a VFX supervisor. He most recently served that role at London’s Halo, where he built the VFX department from scratch. He has also supervised at Realise Studio, Method Studios, Pixomondo, Digital Domain and others. While Carr will be based in London, he will also support the studio’s San Francisco offices as needed.

The studio has invested in a Shotgun pipeline, with a bespoke toolkit that integrates Territory’s design-led approach with VFX delivery, and Carr’s appointment, according to the studio, signals a strategic approach to expanding the team’s capabilities. “Simon’s experience of all stages of the VFX process, from pre-production to final delivery, means that our clients and partners can be confident of seamless high-end VFX delivery at every stage of a project,” says David Sheldon-Hicks, Territory’s founder and executive creative director.

At Territory, Carr will use his experience building and leading teams of artists, from compositing through to complex environment builds. The studio will also benefit from his experience building a facility from scratch: establishing pipelines and workflows; recruiting and retaining artists; developing and maintaining relationships with clients; and being involved in the pitching and bidding process.

The studio has worked on high-profile film projects, including Blade Runner 2049, Ready Player One, Pacific Rim: Uprising, Ghost in the Shell, The Martian and Guardians of the Galaxy. On the broadcast front, they have worked on the new series based on George R.R. Martin’s novella, Nightflyers, Amazon Prime/Channel 4’s Electric Dreams and National Geographic’s Year Million.


Review: GoPro Hero 7 Black action camera

By Brady Betzel

Every year GoPro offers a new iteration of its camera. One of the biggest past upgrades was from the Hero 4 to the Hero 5, with an updated body style, waterproofing without needing external housing and minimal stabilization. That was one of the biggest… until now.

The Hero 7 Black is by far the best upgrade GoPro users have seen, especially if you are sitting on a Hero 5 or earlier. I’ll tell you up front that the built-in stabilization (called Hypersmooth) alone is worth the Hero 7 Black’s $399 price tag, but there are a ton of other features that have been upgraded and improved.

There are three versions of the Hero 7: Black for $399, Silver for $299 and White for $199. The White is the lowest-priced Hero 7 and includes features like 1080p video recording at 60fps, a built-in battery, waterproofing to 33 feet without extra housing, standard video stabilization, 2x slow-mo (1440p/1080p @ 60fps), video recording up to 40Mb/s (1440p), two-mic audio recording, 10MP photos and 15/1 burst photos. After reading that, you can surmise that the Hero 7 White is as basic as it gets; GoPro even skipped 24fps video recording, ProTune and a front LCD display. But that doesn’t mean the Hero 7 White is a throwaway. What I love about the latest update to the Hero line is the simplicity of the menus. In previous generations, the GoPro Hero menus were difficult to use and would often cause me to fumble shots. The Hero 7 menu has been streamlined for a much simpler mode selection process, making the Hero 7 White a basic and relatively affordable waterproof GoPro.

The Hero 7 Silver can be purchased for $299 and has everything the Hero 7 White has, plus some extras, including 4K video recording at 30fps up to 60Mb/s, 10MP photos with wide dynamic range to bring out details in the highlights and shadows, and GPS location tagging to show you where your videos and photos were taken.

The Hero 7 Black
The Hero 7 Black is the big gun in the GoPro Hero 7 lineup. For anyone who wants to shoot multiple frame rates; harness a flat picture profile using ProTune to have extended range when color correcting; record ultra-smooth video without an external gimbal and no post processing; or shoot RAW photos, the Hero 7 Black is for you.

The Hero 7 Black has all of the features of the White and Silver plus a bunch more, including the front-facing LCD display. One of the biggest still-photo upgrades is the ability to shoot 12MP photos with SuperPhoto. SuperPhoto is essentially a “make my image look like the GoPro photos on Instagram” setting: an auto-image processor that will turn good photos into awesome photos. In effect, it’s an HDR mode that recovers latitude in the shadows and highlights while also applying noise reduction.
Beyond SuperPhoto, the Hero 7 has burst rates from 3/1 up to 30/1; a timelapse photo function with intervals ranging from 0.5 seconds to 60 seconds; the ability to shoot RAW photos in GPR format alongside JPG; the ability to shoot video in 4K at 60fps, 30fps and 24fps in wide mode, as well as 30fps and 24fps in SuperView mode (essentially ultra-wide angle); 2.7K wide video up to 120fps and down to 24fps in linear view (no wide-angle warping); all the way down to 720p in wide at 240fps.

The Hero 7 records in both MP4 H.264/AVC and H.265/HEVC formats at up to 78Mb/s (4K). The Hero 7 Black has a bunch of additional modes, including Night Photo, Looping, Timelapse Photo, Timelapse Video, Night Lapse Photo, 8x Slow Mo and Hypersmooth stabilization. It has Wake on Voice commands, as well as live streaming to Facebook Live, Twitch, Vimeo and YouTube. It also features TimeWarp video (which I will talk about more later); a GP1 processor created by GoPro; advanced metadata that the GoPro app uses to create videos of just the good parts (like smiling photos); ProTune; Karma compatibility; dive-housing compatibility; three-mic stereo audio; RAW audio captured in WAV format; the ability to plug in an external mic with the optional 3.5mm mic-in cable; and HDMI video output with a micro HDMI cable.

I really love the GoPro Hero 7 and consider it a must-buy if you are on the edge about upgrading an older GoPro camera.

Out of the Box
When I opened the GoPro Hero 7 Black, I was immediately relieved that it has the same dimensions as the Hero 5 and 6, since I have access to the GoPro Karma drone, Karma gimbal and various accessories. (As a side note, the Hero 7 White and Silver are not compatible with the Karma drone or gimbal.) I quickly plugged in the Hero 7 Black to charge it, which took only half an hour. When fully drained, the Hero 7 takes a little under two hours to charge.

I was excited to try the new built-in stabilization feature Hypersmooth, as well as the new stabilized in-camera timelapse creator, TimeWarp. I received the Hero 7 Black around Halloween so I took it to an event called “Nights of the Jack” at King Gillette Ranch in Calabasas, California, near Malibu. It took place after dark and featured lit-up jack-o-lanterns, so I figured I could test out the TimeWarp, Hypersmooth and low-light capabilities in one fell swoop.

It was really incredible. I used a clamp mount to hold it onto the kids’ wagon and just hit record. When I stopped recording, the GoPro finished processing the TimeWarp video and I was ready to view it or share it. Overall, the quality of video and the low-light recording were pretty good — not great but good. You can check out the video on YouTube.

The stabilization was mind blowing, especially considering it is electronic image stabilization (EIS), which is software-based, not optical, which is hardware-based. Hardware-based stabilization is typically preferred to software-based stabilization, but GoPro’s EIS is incredible. For most shooting scenarios, the built-in stabilization will be amazing — everyone who watches your clips will think that you are using a hardware gimbal. It’s that good.

The Hero 7 Black has a few options for TimeWarp mode to keep the video length down — you can choose different speeds: 2x, 5x, 10x, 15x and 30x. For example, 2x will take one minute of footage and turn it into 30 seconds, and 30x will take five minutes of footage and turn it into 10 seconds. Think of TimeWarp as a stabilized timelapse. In terms of framing and resolution, you can choose a 16:9 or 4:3 aspect ratio at 4K, 1440p or 1080p. I always default to 1080 if posting on Instagram or Twitter, since you can’t really see the 4K difference there, and it saves all my data bits and bytes for better image fidelity.
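The speed-to-duration math is simple division; here is a two-line sketch of the examples above:

```python
def timewarp_output_seconds(input_seconds, speed):
    # TimeWarp plays the recording back speed-times faster
    return input_seconds / speed

print(timewarp_output_seconds(60, 2))    # 1 min at 2x  -> 30.0 seconds
print(timewarp_output_seconds(300, 30))  # 5 min at 30x -> 10.0 seconds
```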

If you’re wondering why you would use TimeWarp over Timelapse, there are a couple of differences. TimeWarp will create a smooth video when walking, riding a bike or generally moving around because of the Hypersmooth stabilization. Timelapse acts more like a camera taking pictures at a set interval to show a passage of time (say, from day to night) and will play back a little choppier. Check out a sample day-to-night timelapse on YouTube, which I filmed using the Hero 7 Black set to Timelapse.

So beyond TimeWarp, what else is different? Well, just plain shooting 4K at 60fps — you now have the ability to enable EIS stabilization, which you couldn’t on the GoPro Hero 6 Black. It’s a giant benefit for anyone shooting 4K in the palm of their hand who wants to slow their 4K down by 50% and retain smooth motion, with stabilization already done in-camera. This is a huge perk in my mind. The image processing is very close to what the Hero 6 produces and quite a bit better than what the Hero 5 produces.

When taking still images, the low-light ability is pretty incredible. With the new SuperPhoto setting you can get that signature high saturation and contrast with noise reduction. It’s a great setting, although I noticed the subject in focus can’t be moving too fast or you will get some purple fringing. When used under the correct circumstances, SuperPhoto is the next iteration of HDR.

I was surprised how much I used the GoPro Hero 7 Black’s auto-rotating menu feature when the camera was held vertically. The Hero 6 could shoot vertically, but with the addition of the auto-rotating menu, the Hero 7 Black encourages more vertical photos and videos. I found myself taking more vertical photos, especially outdoors — getting a lot more sky in the shots, which adds an interesting perspective.

Summing Up
In the end, the GoPro Hero 7 Black is a must-buy if you are looking for the latest and greatest action-cam or are on the fence about upgrading from the Hero 5 or 6. The Hypersmooth video stabilization is incredible. If you want to take it a step further, combining it with a Karma gimbal will give you a silky smooth shot.

I really fell in love with the TimeWarp function. Whether you are a prosumer filming your family at Disneyland or shooting a show in the forest, a quick TimeWarp is a great way to film some dynamic b-roll without any post processing.

Don’t forget the Hero 7 Black has voice control for hands-free operation. On the outside, the Hero 7 Black is actually black in color, unlike the Hero 6 (which is gray), and it has the number “7” labeled on it so it’s easy to find in your case.

I would really love for GoPro to make these cameras charge wirelessly on a mat like my Galaxy phone. It seems like the GoPro action-cameras would be great to just throw on a wireless charger and also use the charger as a file-transfer station. It gets cumbersome to remove a bunch of tiny memory cards or use a bunch of cables to connect your cameras, so why not make it wireless?! I’m sure they are thinking of things like that, because focusing on stabilization was the right move in my opinion.

If GoPro can continue to make focused and powerful updates to their cameras, they will be here for a long time — and the Hero 7 is the right way to start.

Check out GoPro’s website for more info, including accessories like the Travel Kit, which features a little mini tripod/handle (called “Shorty”), a rubberized cover with a lanyard and a case for $59.99.

If you need the ultimate protection for your GoPro Hero 7 Black, look into GoPro Plus, which, for $4.99 a month, gives you VIP support, automatic cloud backup, access for editing on your phone from anywhere and camera replacement for up to two cameras per year of the same model, no questions asked, when something goes wrong. Compare all the new GoPro Hero 7 models on their website.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Director Peter Farrelly gets serious with Green Book

By Iain Blair

Director, producer and writer Peter Farrelly is best known for the classic comedies he made with his brother Bob: Dumb and Dumber; There’s Something About Mary; Shallow Hal; Me, Myself & Irene; The Three Stooges; and Fever Pitch. But for all their over-the-top, raunchy and boundary-pushing comedy, those movies were always sweet-natured at heart.

Peter Farrelly

Now Farrelly has taken his gift for heartfelt comedy and put his stamp on a very different kind of film, Green Book, a racially charged feel-good drama inspired by a true friendship that transcended race, class and the 1962 Mason-Dixon line.

Starring Oscar-nominee Viggo Mortensen and Oscar-winner Mahershala Ali, it tells the fact-based story of the ultimate odd couple: Tony Lip, a bouncer from The Bronx, and Dr. Don Shirley, a world-class black pianist. Lip is hired to drive and protect the worldly and sophisticated Shirley during a 1962 concert tour from Manhattan to the Deep South, where they must rely on the titular “Green Book” — a travel guide to safe lodging, dining and business options for African-Americans during the segregation era.

Set against the backdrop of a country grappling with the valor and volatility of the civil rights movement, the two men are confronted with racism and danger as they challenge long-held assumptions, push past their seemingly insurmountable differences and embrace their shared humanity.

The film also features Linda Cardellini as Tony Vallelonga’s wife, Dolores, along with Dimiter D. Marinov and Mike Hatton as two-thirds of The Don Shirley Trio. The film was co-written by Farrelly, Nick Vallelonga and Brian Currie and reunites Farrelly with editor Patrick J. Don Vito, with whom he worked on the Movie 43 segment “The Pitch.” Farrelly also collaborated for the first time with cinematographer Sean Porter (read our interview with him), production designer Tim Galvin and composer Kris Bowers.

I spoke with Farrelly about making the film, his workflow and the upcoming awards season. After its Toronto People’s Choice win and Golden Globe nominations (Best Director, Best Musical or Comedy Motion Picture, Best Screenplay, Best Actor for Mortensen, Best Supporting Actor for Ali), Green Book looks like a very strong Oscar contender.

You told me years ago that you’d love to do a more dramatic film at some point. Was this a big stretch for you?
Not so much, to be honest. People have said to me, “It must have been hard,” but the hardest film I ever made was The Three Stooges… for a bunch of reasons. True, this was a bit of a departure for me in terms of tone, and I definitely didn’t want it to get too jokey — I tend to get jokey, so it could easily have gone like that. But right from the start we were very clear that the comedy would come naturally from the characters and how they interacted and spoke and moved, and so on, not from jokes.

So a lot of the comedy is quite nuanced, and in the scene where Tony starts talking about “the orphans” and Don explains that it’s actually about the opera Orpheus, Viggo has this great reaction and look that wasn’t in the script, and it’s much funnier than any joke we could have made there.

What sort of film did you set out to make?
A drama about race and race relations set in a time when it was very fraught, with light moments and a hopeful, uplifting ending.

It has some very timely themes. Was that part of the appeal?
Absolutely. I knew that it would resonate today, although I wish it didn’t. What really hooked me was their common ground. They really are this odd couple who couldn’t be more different — an uneducated, somewhat racist Italian bouncer, and this refined, highly educated, highly cultured doctor and classically trained pianist. They end up spending all this time together in a car on tour, and teach each other so much along the way. And at the end, you know they’ll be friends for life.

Obviously, casting the right lead actors was crucial. What did Viggo and Mahershala bring to the roles?
Well, for a start they’re two of the greatest actors in the world, and when we were shooting this I felt like an observer. Usually, I can see a lot of the actor in the role, but they both disappeared totally into these characters — but not in some method-y way where they were staying in character all the time, on and off the set. They just became these people, and Viggo couldn’t be less like Tony Lip in real life, and the same with Mahershala and Don. They both worked so hard behind the scenes, and I got a call from Steven Spielberg when he first saw it, and he told me, “This is the best buddy movie since Butch Cassidy and the Sundance Kid,” and he’s right.

It’s a road picture, but didn’t you end up shooting it all in and around New Orleans?
Yes, we did everything there apart from one day in northern New Jersey to get the fall foliage, and a day of exteriors in New York City with Viggo for all the street scenes. Louisiana has everything, from rolling hills to flats. We also found all the venues and clubs they play in, along with mansions and different looks that could double for places like Pennsylvania, Ohio, Indiana, Iowa, Missouri, Kentucky and Tennessee, as well as the Carolinas and the Deep South.

We shot for just 35 days, and Louisiana has great and very experienced crews, so we were able to work pretty fast. Then for scenes like Carnegie Hall, we used CGI in post, done by Pixel Magic, and we were also amazingly lucky when it came to the snow scenes set in Maryland at the end. We were all ready to use fake snow when it actually started snowing and sticking. We got a good three, four inches, which they told us hadn’t happened in a decade or two down there.

Where did you post?
We did most of the editing at my home in Ojai, and the sound at Fotokem, where we also did the DI with colorist Walter Volpatto.

Do you like the post process?
I love it. My favorite part of filmmaking is the editing. Writing is the hardest part, pulling the script together. And I always have fun on the shoot, but you’re always having to make sure you don’t screw up the script. So when you get to the edit and post, all the hard work is done in that sense, and you have the joy of watching the movie find its shape as you cut and add in the sound and music.

What were the big editing challenges, given there’s such a mix of comedy and drama?
Finding that balance was the key, but this film actually came together so easily in the edit compared with some of the movies I’ve done. I’ll never forget seeing the first assembly of There’s Something About Mary, which I thought was so bad it made me want to vomit! But this just flowed, and Patrick did a beautiful job.

Can you talk about the importance of music and sound in the film?
It was a huge part of the film and we had a really amazing pianist and composer in Kris Bowers, who worked a lot with Mahershala to make his performance as a musician as authentic as possible. And it wasn’t just the piano playing — Mahershala told me right at the start, “I want to know just how a pianist sits at the piano, how he moves.” So he was totally committed to all the details of the role. Then there’s all the radio music, and I didn’t want to use all the obvious, usual stuff for the period, so we searched out other great, but lesser-known songs. We had great music supervisors, Tom Wolfe and Manish Raval, and a great sound team.

We’re already heading into the awards season. How important are awards to you and this film?
Very important. I love the buzz about it because that gets people out to see it. When we first tested it, we got 100%, and the studio didn’t quite believe it. So we tested again, with “a tougher” audience, and got 98%. But it’s a small film. Everyone took pay cuts to make it, as the budget was so low, but I’m very proud of the way it turned out.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Red upgrades line with DSMC2 Dragon-X 5K S35 camera

Red Digital Cinema has further simplified its product line with the DSMC2 Dragon-X 5K S35 camera. Red also announced the DSMC2 Production Module and DSMC2 Production Kit, which are coming in early 2019. More on that in a bit.

The DSMC2 Dragon-X camera uses the Dragon sensor technology found in many of Red’s legacy cameras with an evolved sensor board to enable Red’s enhanced image processing pipeline (IPP2) in camera.

In addition to IPP2, the Dragon-X provides 16.5 stops of dynamic range, as well as 5K resolution up to 96fps in full format and 120fps at 5K 2.4:1. Consistent with the rest of Red’s DSMC2 line-up, Dragon-X offers 300MB/s data transfer speeds and simultaneous recording of Redcode RAW and Apple ProRes or Avid DNxHD/HR.

The new DSMC2 Dragon-X is priced at $14,950 and is also available as a fully-configured kit priced at $19,950. The kit includes: 480GB Red Mini-Mag; Canon lens mount; Red DSMC2 Touch LCD 4.7-inch monitor; Red DSMC2 outrigger handle; Red V-Lock I/O expander; two IDX DUO-C98 batteries with VL-2X charger; G-Technology ev Series Red Mini-Mag reader; Sigma 18-35mm F1.8 DC HSM art lens; Nanuk heavy-duty camera case.

Both the camera and kit are available now at red.com or through Red’s authorized dealers.

Red also announced the new DSMC2 Production Module. Designed for pro shooting configurations, this accessory mounts directly to the DSMC2 camera body and incorporates an industry standard V-Lock mount with integrated battery mount and P-Tap for 12V accessories. The module delivers a comprehensive array of video, XLR audio, power and communication connections, including support for 3-pin 24V accessories. It has a smaller form factor and is more lightweight than Red’s RedVolt Expander with a battery module.

The DSMC2 Production Module is available to order for $4,750 and is expected to ship in early 2019. It will also be available as part of a DSMC2 Production Kit that includes the DSMC2 Production Module and DSMC2 production top plate. The DSMC2 Production Kit is available for order for $6,500 and is also expected to ship in early 2019.

Scarlet-W owners can upgrade to DSMC2 Dragon-X for $4,950 through Red authorized dealers or directly from Red.

Logan uses CG to showcase the luxury of the Lexus ES series

Logan, a creative studio with offices in Los Angeles and New York, worked on the new Lexus ES series “A Product of Mastery” campaign with agency Team One. The goal was to showcase the interior craftsmanship and amenities of this luxury sedan with detailed animations. Viewers are at first given just a glimpse of these features as the spot builds toward a reveal of the sedan’s design.

The campaign was created entirely in CG. “When we first saw Team One’s creative brief, we realized we would be able to control the environments, lighting and the overall mood better by using CG, which allowed us to make the campaign stand apart aesthetically and dramatically compared to shooting the products practically. From day one, our team and Team One were aligned on everything and they were an incredible partner throughout the entire process,” says Logan executive producer Paul Abatemarco.

The three spots in the campaign totaled 23 shots, highlighting things like the car’s high-end Mark Levinson sound system. One spot reveals the craftsmanship of the driver seat’s reverse ventilation as infinite bars of light, while in another, the sedan’s wide-view high-definition monitor is unveiled through a vivid use of color and shape.

Autodesk Maya was Logan’s main CG tool, but for the speaker spot they also called on Side Effects Houdini and Cinema 4D. All previs was done in Maya.

Editing was done in Adobe Premiere, and the spots were graded in Resolve in the studio’s Dolby-certified color suite.

According to Waka Ichinose and Sakona Kong, co-creative leads on the project, “We had a lot of visual ideas, and there was a lot of exploration on the design side of things. But finding the balance between the beautiful, abstract imagery and then clearly conveying the meaning of each product so that the viewers were intrigued and ultimately excited was a challenge. But it was also really fun and ultimately very satisfying to solve.”

Storage for VFX Studios

By Karen Moltenbrey

Visual effects are dazzling — inviting eye candy, if you will. But mention the term “storage,” and those wide eyes may turn into a stifled yawn among viewers of the amazing content. Not so for the makers of that content.

They know that the key to a successful project rests within the reliability of their storage solutions. Here, we look at two visual effects studios — both top players in television and feature film effects — as they discuss how data storage enables them to excel at their craft.

Zoic Studios
A Culver City-based visual effects facility with shops in Vancouver and New York, Zoic Studios has been crafting visual effects for a host of television series since its founding in 2002, starting with Firefly. In addition to a full plate of episodics, Zoic also counts numerous feature films and spots among its credits.

Saker Klippsten

According to Saker Klippsten, CTO, the facility has used a range of storage solutions over the past 16 years from BlueArc (before it was acquired by Hitachi), DataDirect Networks and others, but now uses Dell EMC’s Isilon cluster file storage system for its current needs. “We’ve been a fan of theirs for quite a long time now. I think we were customer number two,” he says, “back when they were trying to break into the media and entertainment sector.”

Locally, the studio uses Intel NVMe drives for its workstations. NVMe, or non-volatile memory express, is an open logical device interface specification for accessing all-flash storage media attached via the PCI Express (PCIe) bus. Previously, Zoic had been using Samsung SSDs, with Samsung 1TB and 2TB EVO drives, but in the past year and a half it began migrating to NVMe on the local workstations.

Zoic transitioned to the Isilon system in 2004-2005 because of the heavy usage its renderfarm was getting. “Renderfarms work 24/7 and don’t take breaks. Our storage was getting really beat up, and people were starting to complain that it was slow accessing the file system and affecting playback of their footage and media,” explains Klippsten. “We needed to find something that could scale out horizontally.”

At the time, however, file-level storage was pretty much all that was available — “you were limited to this sort of vertical pool of storage,” says Klippsten. “You might have a lot of storage behind it, but you were still limited at the spigot, at the top end. You couldn’t get the data out fast enough.” But Isilon broke through that barrier by creating a cluster storage system that scales horizontally, “so we could balance our load, our render nodes and our artists across a number of machines, and access and update in parallel at the same time,” he adds.
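Conceptually, the win of scale-out is that clients spread their reads and writes across every node instead of queuing on a single head. Here is a minimal Python sketch of that load-spreading idea using hashing; it illustrates the principle only and is not how Isilon actually places data.

```python
# Conceptual sketch: hash each file path to one of several storage nodes,
# so many render nodes and artists hit different machines in parallel.

import hashlib

NODES = ["node1", "node2", "node3", "node4"]

def node_for(path):
    """Deterministically pick a node for a given file path."""
    digest = hashlib.md5(path.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

for f in ["shotA/comp.exr", "shotB/plate.dpx", "shotC/cache.vdb"]:
    print(f, "->", node_for(f))
```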

Klippsten believes that solution was a big breakthrough for a lot of users; nevertheless, it took some time for others to get onboard. “In the media and entertainment industry, everyone seemed to be locked into BlueArc or NetApp,” he notes. Not so with Zoic.

Fairly recently, some new players have come onto the market, including Qumulo, touted as a “next-generation NAS company” built around advanced, distributed software running on commodity hardware. “That’s another storage platform that we have looked at and tested,” says Klippsten, adding that Zoic even has a number of nodes from the vendor.

There are other open-source options out there as well. Recently, Red Hat began offering Gluster Storage, an open, software-defined storage platform for physical, virtual and cloud environments. “And now with NVMe, it’s eliminating a lot of these problems as well,” Klippsten says.

Back when Zoic selected Isilon, there were a number of major issues that affected the studio’s decision making. As Klippsten notes, they had just opened the Vancouver office and were transferring data back and forth. “How do we back up that data? How do we protect it? Storage snapshot technology didn’t really exist at the time,” he says. But, Isilon had a number of features that the studio liked, including SyncIQ, software for asynchronous replication of data. “It could push data between different Isilon clusters from a block level, in a more automated fashion. It was very convenient. It offered a lot of parameters, such as moving data by time of day and access frequency.”

SyncIQ enabled the studio to archive the data. And for dealing with interim changes, such as a mistakenly deleted file, Zoic found Isilon’s SnapshotIQ ideal for fast data recovery. Moreover, Isilon was one of the first to support Aspera, right on the Isilon cluster. “You didn’t have to run it on a separate machine. It was a huge benefit because we transfer a lot of secure, encrypted data between us and a lot of our clients,” notes Klippsten.

Netflix’s The Chilling Adventures of Sabrina

Within the pipeline, Zoic’s storage system sits at the core. It is used immediately as the studio ingests the media, whether it is downloaded or transferred from hard drives – terabytes upon terabytes of data. The data is then cleaned up and distributed to project folders for tasks assigned to the various artists. In essence, it acts as a holding tank for the main production storage as an artist begins working on those specific shots, Klippsten explains.

Aside from using the storage at the floor level, the studio also employs it at the archive level, for data recovery as well as material that might not be accessed for weeks. “We have sort of a tiered level of storage — high-performance and deep-archival storage,” he says.

And the system is invaluable, as Zoic is handling 400 to 500 shots a week. If you multiply that by the number of revisions and versions that take place during that time frame, it adds up to hundreds of terabytes weekly. “Per day, we transfer between LA, Vancouver and New York somewhere around 20TB to 30TB,” he estimates. “That number increases quite a bit because we do a lot of cloud rendering. So, we’re pushing a lot of data up to Google and back for cloud rendering, and all of that hits our Isilon storage.”

When Zoic was founded, it originally saw itself as a visual effects company, but at the end of the day, Klippsten says they’re really a technology company that makes pretty pictures. “We push data and move it around to its limits. We’re constantly coming up with new, creative ideas, trying to find partners that can help provide solutions collaboratively if we cannot create them ourselves. The shot cost is constantly being squeezed by studios, which want these shots done faster and cheaper. So, we have to make sure our artists are working faster, too.”

The Chilling Adventures of Sabrina

Recently, Zoic has been working on a TV project involving a good deal of water simulations and other sims in general — which rapidly generate a tremendous amount of data. Then the data is transferred between the LA and Vancouver facilities. Having storage capable of handling that was unheard of three years ago, Klippsten says. However, Zoic has managed to do so using Isilon along with some off-the-shelf Supermicro storage with NVMe drives, enabling its dynamics department to tackle this and other projects. “When doing full simulation, you need to get that sim in front of the clients as soon as possible so they can comment on it. Simulations take a long time — we’re doing 26GB/sec, which is crazy. It’s close to something in the high-performance computing realm.”

With all that considered, it is hardly surprising to hear Klippsten say that Zoic could not function without a solid storage solution. “It’s funny. When people talk about storage, they are always saying they don’t have enough of it. Even when you have a lot of storage, it’s always running at 99 percent full, and they wonder why you can’t just go out to Best Buy and purchase another hard drive. It doesn’t work that way!”

Milk VFX
Founded just five years ago, Milk VFX is an independent visual effects facility in the UK with locations in London and Cardiff, Wales. While Milk VFX may be young, it was founded by experienced and award-winning VFX supervisors and producers. And the awards have continued, including an Oscar (Ex Machina), an Emmy (Sherlock) and three BAFTAs, as the studio creates innovative and complex work for high-end television and feature films.

Benoit Leveau

With so much precious data, and a lot of it, the studio has to ensure that its work is secure and the storage system is keeping pace with the staff using it. When the studio was set up, it installed Pixit Media’s PixStor, a parallel file system with limitless storage, for its central storage solution. And, it has been growing with the company ever since. (Milk uses almost no local storage, except for media playback.)

“It was a carefully chosen solution due to its enterprise-level performance,” says Benoit Leveau, head of pipeline at Milk, about the decision to select PixStor. “It allowed us to expand when setting up our second studio in Cardiff and our rendering solutions in the cloud.”

When Milk was shopping for a storage offering while opening the studio, four things were at the forefront of its thinking: speed, scalability, performance and reliability. Those were the qualities the group wanted from its storage system, exactly the same four demands that the projects at the studios make of it.

“A final image requires gigabytes, sometimes terabytes, of data in the form of detailed models, high-resolution textures, animation files, particles and effects caches and so forth,” says Leveau. “We need to be able to review 4K image sequences in real time, so it’s really essential for daily operation.”

This year alone, Milk has completed a number of high-end visual effects sequences for feature films such as Adrift, serving as the principal vendor on this true story about a young couple lost at sea during one of the most catastrophic hurricanes in recorded history. The Milk team created all the major water and storm sequences, including bespoke 100-foot waves, all of which were rendered entirely in the cloud.

As Leveau points out, one of the shots in the film was more than 60TB, as it required complex ocean simulations. “We computed the ocean simulations on our local renderfarm, but the rendering was done in the cloud, and with this setup, we were able to access the data from everywhere almost transparently for the artists,” he explains.

Adrift

The studio also recently completed work on the blockbuster Fantastic Beasts sequel, The Crimes of Grindelwald.

For television, the studio created visual effects for an episode of the Netflix Altered Carbon sci-fi series, where people can live forever, as they digitally store their consciousness (stacks) and then download themselves into new bodies (sleeves). For the episode, the Milk crew created forest fires and the aftermath, as well as an alien planet and escape ship. For Origin, an action-thriller, the team generated 926 VFX shots in 4K for the 10-part series, spanning a wide range of work. Milk is also serving as the VFX vendor for Good Omens, a six-part horror/fantasy/drama series.

“For Origin, all the data had to be online for the duration of the four-month project. At the same time, we commenced work as the sole VFX vendor on the BBC/Amazon Good Omens series, which is now rapidly filling up our PixStor, hence the importance of scalability!” says Leveau.

Main Image: Origin via Milk VFX


Karen Moltenbrey is a veteran VFX and post writer.