
Quick Chat: Element’s Matthew O’Rourke on Vivian partnership

Recently, Boston-based production and post company Element Productions launched Element Austin, a partnership with production studio Vivian. Element is now representing a select directorial roster out of Austin.

We recently reached out to Element executive producer Matthew O’Rourke, who led the charge to get this partnership off the ground.

Can you talk a bit about your partnership with Vivian? How did that come about and why was this important for Element to do?
I’ve had a relationship with Vivian’s co-owner, Buttons Pham, for almost 10 years. She was my go-to Texas-based resource while I was an executive producer at MMB working on Toyota. She is incredibly resourceful and a great human being. When I joined Element she became a valued production service partner for our projects in the south (mostly based out of Texas and Atlanta). Our relationship with Vivian was always important to Element since it expands the production support we can offer for our directors and our clients.

Blue Cross Blue Shield

Expanding on that thought, what does Vivian offer that you guys don’t?
They let us have boots on the ground in Austin. They have a strong reputation there and deep resources to handle all levels of work.

How will this partnership work?
Buttons and her business partner Tim Hoppock have become additional executive producers for Element and lead the Element Austin office.

How does the Boston market differ from Austin?
Austin is a growing, vibrant market with tons of amazingly creative people and companies. Lots of production resources are coming in from Los Angeles, but are also developing locally.

Can you point to any recent jobs that resulted from this partnership?
Vivian has been a production services partner for several years, helping us with campaigns for Blue Cross Blue Shield, Subway and more. Since our launch a few weeks ago, we have entered into discussions with several agencies on upcoming work out of the Austin market.

What trends are you seeing overall for this part of the market?
Creative agencies are looking for reliable resources. Having a physical presence in Austin allows us to better support local clients, but also bring in projects from outside that market and produce efficient, quality work.

De-aging John Goodman 30 years for HBO’s The Righteous Gemstones

For HBO’s original series The Righteous Gemstones, VFX house Gradient Effects de-aged John Goodman using Shapeshifter, its proprietary AI-assisted tool that can turn back time on video footage. With Shapeshifter, Gradient sidestepped the uncanny valley to shave decades off Goodman for an entire episode, delivering nearly 30 minutes of film-quality VFX in six weeks.

In the show’s fifth episode, “Interlude,” viewers journey back to 1989, a time when the Gemstone empire was still growing and Eli’s wife, Aimee-Leigh, was still alive. But going back also meant de-aging Goodman for an entire episode, something never attempted before on television. Gradient accomplished it using Shapeshifter, which allows artists to “reshape” an individual frame and the performers in it and then extend those results across the rest of a shot.

Shapeshifter worked by first analyzing the underlying shape of Goodman’s face. It then extracted important anatomical characteristics, like skin details, stretching and muscle movements. With the extracted elements saved as layers to be reapplied at the end of the process, artists could start reshaping his face without breaking the original performance or footage. Artists could tweak additional frames in 3D down the line as needed, but they often didn’t need to, making the de-aging process nearly automated.

“Shapeshifter is an entirely new way to de-age people,” says Olcun Tan, owner and visual effects supervisor at Gradient Effects. “While most productions are limited by time or money, we can turn around award-quality VFX on a TV schedule, opening up new possibilities for shows and films.”

Traditionally, de-aging work for film and television has been done in one of two ways: through filtering (saves time, but hard to scale) or CG replacements (better quality, higher cost), which can take six months to a year. Shapeshifter introduces a new method that not only preserves the actor’s original performance, but also interacts naturally with other objects in the scene.

“One of the first shots of ‘Interlude’ shows stage crew walking in front of John Goodman,” describes Tan. “In the past, a studio would have recommended a full CGI replacement for Goodman’s character because it would be too hard or take too much time to maintain consistency across the shot. With Shapeshifter, we can just reshape one frame and the work is done.”

This is possible because Shapeshifter continuously captures the face, including all of its essential details, using the source footage as its guide. With the data being constantly logged, artists can extract movement information from anywhere on the face whenever they want, replacing expensive motion-capture stages, equipment and makeup teams.

Jacki Sextro opens commercial production company Kin

Founder and executive producer Jacki Sextro has launched LA-based production company Kin. For over a decade, Sextro has been part of award-winning teams at production companies including Hungry Man, Biscuit and The Directors Bureau. In making the leap from executive producer to business owner, Sextro says, “I had a list of goals — a commitment to diversity, green practices and creating memorable original content. I looked around for a company that shared these goals but didn’t see it. I knew that if I felt that way, directors must be searching for it too.”

Kin’s directorial lineup includes Ric Cantor, a D&AD, British Arrow and Cannes Lions-winning director who is known industry wide for elevating lifestyle, auto and comedy campaigns with a cinematic eye; Jeff Baena, a Lions-winning comedy filmmaker whose features have starred Alison Brie, Thomas Middleditch and Aubrey Plaza; Minhal Baig, a writer (BoJack Horseman, Dune: The Sisterhood) and director whose feature, Hala, about a Muslim teenager coping with the unraveling of her family as she comes into her own, will be released by Apple TV+; JD Dillard, who weaves genre with emotional, character-driven stories as showcased in his Sundance-premiering features Sleight and Sweetheart; Liza Mandelup, an award-winning documentary filmmaker whose work has explored what it means to be a mom, athlete, coder and fangirl; and Ryan Reichenfeld, who connects viewers with dynamic subjects, from skaters to footballers, by creating vivid scenes in everyday moments.

As for the directors she’s drawn to, Sextro says they are makers “whose ideas and work surprise me. The biggest error you can make is to be forgettable or average.

“I love that we’re in an era where there is a spectrum of tone in advertising,” she continues. “Ultimately, my role is to help the directors shape ideas in a way where creative teams walk away with an elevated finished product.”

Main Image: (top L to R) Minhal Baig, Ric Cantor, JD Dillard
(Bottom L to R) Ryan Reichenfeld, Jeff Baena, Liza Mandelup

Alibi targets trailer editors with Sorcery music collection

Alibi Music Library has released Sorcery, the newest collection in its recently launched ATX catalog for high-end theatrical trailers and TV series. From epic magical quests and enchanted journeys to fantastic family adventures and whimsical mysteries, Sorcery is a collection of orchestral trailer cues that embody the sensibilities of those films and are designed to create an instant emotional connection with viewers.

René Osmanczyk

ATX’s Sorcery, which features 10 tracks along with numerous stems and alternative mixes, was composed by long-time Alibi partner René Osmanczyk of DosComp, whose goal was to write a family-friendly, adventure-steeped album inspired by the soundtracks to Avatar and the Harry Potter franchise. Each of the 10 tracks has five different mix versions as well as stems for every instrument group so that clients can create their own custom mix versions.

“I wanted melodies that take you on a musical journey through each track, plus the classical trailer build that every cue has,” Osmanczyk explains. “I started composing this album in June, a process that took a bit longer since it was written for full orchestra.”

In terms of tools, he used the Steinberg Cubase 10 DAW and Native Instruments Kontakt 6 as the main sampler. “When it comes to libraries, I used things like Cinematic Studio Strings, Cinematic Studio Solo Strings, Cinematic Studio Brass, Berlin Strings, Orchestral Tools harps, various Hans Zimmer Percussion, Trailer Percussion and also self-crafted hits, etc. … Oceania choir, Berlin Woodwinds and many other things.”

Alibi VP/creative production Sam Wale adds, “What René ultimately delivered will provide trailer editors with some pretty amazing options. I would describe Sorcery as majestic and magical, epic and emotional, haunting and heartwarming.”

Alibi’s music and sound design has been used to promote projects such as the film Once Upon A Time… In Hollywood and the TV series American Horror Story.

Bernie Su: Creator of Twitch’s live and interactive show, Artificial

By Randi Altman

Thanks to today’s available technology, more and more artists are embracing experimental storytelling. One of those filmmakers is Bernie Su, creator, executive producer and director on the Twitch series, Artificial.

Bernie Su

Artificial, which won Twitch its first Primetime Emmy for “Outstanding Innovation in Interactive Media,” features a doctor and his “daughter” — a human-looking artificial intelligence creation named Sophie. Episodes air live, with actors reacting to audience input in realtime. This is later edited into clips that live on Twitch.

The unique live broadcast and “choose your own adventure” factors created a need for a very specific workflow. We reached out to Su to talk about the show, his workflow and his collaboration with show editor Melanie Escano.

Where did the idea for Artificial come from, and what was its path to its production?
The original story came from my co-creator Evan Mandery. When we partnered, we looked at how we could present it in an innovative way. We identified Twitch pretty early in the process as a place to really push some groundbreaking storytelling methods. What would an original series on Twitch look like? What makes it Twitch and not Amazon video (Amazon owns Twitch by the way)? Once we pitched it to Twitch it was all systems go.

Can you talk about what it’s shot on and how you work with your DP to get the look you were after?
We shot on Panasonic Lumix GH4 cameras. We kept it pretty simple. DP Allen Ho and I have worked together a lot, and we tried to make Artificial feel real-yet-polished. Because we were on Twitch and that’s a platform where everything is livestreamed, we wanted it to feel like a real livestream yet have touches of a cinematic look. For every scene we shot, we would always discuss why the camera was even there in the first place and how the characters would react to it. A show called Artificial had to feel immersive.

Were you at all intimidated by the interactive aspect of the show?
Nervous but not intimidated. I’ve done several interactive shows, but the live element is a different animal. A live interactive series is built around chaos, and if you aren’t embracing that chaos that the audience is going to throw at you, then you shouldn’t be making a live interactive series.

What went into making this interactive? Can you talk about the challenges and how you overcame those?
Well, the first step is figuring out how you’re building the audience into the story while still maintaining an arc. The second is how you make that audience consequential. You can always let the audience choose something non-consequential, like should someone drink tea or coffee.

Yes, the audience made a choice, but it’s not consequential to the story. Now when we have the audience choose what a character’s relationship to another will be, or even whether a relationship will end, you’re letting the audience play with fire, and once they do, it’s our responsibility to honor the consequences of that. The simplest way I can describe our solution is that we as the storytellers accepted every result we presented. We dared the audience to play with fire, and if they burned a character, then those are the consequences.

Your team used Adobe Creative Cloud to make this a reality. Can you talk about that and how you worked with your editor and post team? How involved were you?
Oh yeah, Premiere, Photoshop, After Effects and Audition were all in play for us. We don’t have a big team, but we have an incredibly versatile team. Any of us could comfortably jump into several of those tools and be able to knock something out quickly. We were all about speed and efficiency.

Once we got the systems in place, I wanted to stay at a very high level and let my team play. I trust them; if I didn’t, they wouldn’t be on my team. Co-producer Jen Enfield-Kane worked closely with our editor Melanie Escano and our writer/sound editor Micah McFarland. They would go back and forth with cuts and mixes. Then, upon approval, it would go to creative producer Rachel Williams, who could implement final effects for broadcast. If everything is going smoothly, then we’re good to go.

But because of the speed of weekly broadcasts consisting of 30 to 45 minutes of edited content a week, and the fact that the post-team is literally four people, there were many times that someone would have to assist, and that’s fine. That’s what a great team does.

How did you work with the editor, specifically? What was your workflow?
Pretty straightforward. When you’re innovating in the live format, you purposely make the post system as simple and easy as possible. If the show is chaotic, you don’t want your post to be that. Melanie would do a rough pass using the script supervisor’s notes, Jen would give her notes on that, and she would come back with another cut. After that, they might discuss color correction on a particular scene and get that done, but that was rare. We always kept it simple.

Where can people go to experience Artificial?
Please visit https://www.twitch.tv/artificialnext.

Nice Shoes Toronto adds colorist Yulia Bulashenko

Creative studio Nice Shoes has added colorist Yulia Bulashenko to its Toronto location. She brings over seven years of experience as a freelance colorist, working worldwide on projects for top global clients such as Nike, Volkswagen, MTV, Toyota, Diesel, Uniqlo, Uber, Adidas and Zara, among numerous others.

Bulashenko’s resume includes work across commercials, music videos, fashion and feature films. Notable projects include Sia and Diplo’s (LSD) music video for “Audio”; “Sound and Vision,” a tribute to the late singer David Bowie directed by Canada, for whom she has been a colorist of choice for the past five years; and the feature films The Girl From The Song and Gold.

Toronto-based Bulashenko is available immediately and can also work remotely via Nice Shoes’ New York, Boston, Chicago and Minneapolis spaces.

Bulashenko began her career as a fashion photographer before transitioning into creating fashion films. Through handling all of the post on her own film projects, she discovered a love for color grading. After building relationships with a number of collaborators, she began taking on projects as a freelancer, working with clients in Spain and the UK on a wide range of projects throughout Europe, Mexico, Qatar and India.

Managing director Justin Pandolfino notes, “We’re excited to announce Yulia as the first of a number of new signings as we enter our fourth year in the Toronto market. Bringing her onboard is part of our ongoing efforts to unite the best talent from around the world to deliver stunning design, animation, VFX, VR/AR, editorial, color grading and finishing for our clients.”

Colorist Chat: Scott Ostrowsky on Amazon’s Sneaky Pete

By Randi Altman

Scott Ostrowsky, senior colorist at Deluxe’s Level 3 in Los Angeles, has worked on all three seasons of Amazon’s Sneaky Pete, produced by Bryan Cranston and David Shore and starring Giovanni Ribisi. Season 3 is the show’s last.

For those of you unfamiliar with the series, it follows a con man named Marius (Ribisi), who takes the place of his former cellmate Pete and endears himself to Pete’s seemingly idyllic family while continuing to con his way through life. Over time he comes to love the family, which is nowhere near as innocent as it seems.

Scott Ostrowsky

We reached out to this veteran colorist to learn more about how the look of the series developed over the seasons and how he worked with the showrunners and DPs.

You’ve been on Sneaky Pete since the start. Can you describe how the look has changed over the years?
I worked on Seasons 1 through 3. The DP for Season 1 was Rene Ohashi, and it had somewhat of a softer feel. It was shot on a Sony F55. It mostly centered around the relationship of Bryan Cranston’s character and Giovanni Ribisi’s newly adopted fake family and his brother.

Season 2 was shot by DPs Frank DeMarco and William Rexer on a Red Dragon, and it was a more stylized and harsher look in some ways. The looks were different because the storylines and the locations had changed. So, even though we had some beautiful, resplendent looks in Season 2, we also created some harsher environments, and we did that through color correction. Going into Season 2, the storyline changed, and it became more defined in the sense that we used the environments to create an atmosphere that matched the storyline and the performances.

An example of this would be the warehouse where they all came together to create the scam/heist that they were going to pull off. Another example of this would be the beautiful environment in the casino that was filled with rich lighting and ornate colors. But there are many examples of this through the show — both DPs used shadow and light to create a very emotional mood or a very stark mood and everything in between.

Season 3 was shot by Arthur Albert and his son, Nick Albert, on a Red Gemini, and it had a beautiful, resplendent, rich look that matched the different environments as it moved from the cooler look of New York to the warmer, more colorful look in California.

So you gave different looks based on locale? 
Yes, we did. Many times, the looks would depend on the time of day and the environment that they were in. An example of this might be the harsh fluorescent green in the gas station bathroom where Giovanni’s character is trying to figure out a way to help his brother and avoid his captors.

How did you work with the Alberts on the most recent season?
I work at Level 3 Post, which is a Deluxe company. I did Seasons 1 and 2 at the facility on the Sony lot. Season 3 was posted at Level 3. Arthur and Nick Albert came into my color suite with the camera tests shot on the Red Gemini and also the Helium. We set up a workflow based on the Red cameras and proceeded to grade the various setups.

Once Arthur and Nick decided to use the Gemini, we set up our game plan for the season. When I received my first conform, I proceeded to grade it based on our conversations. I was very sensitive to the way they used their setups, lighting and exposures. Once I finished my first primary grade, Arthur would come in and sit with me to watch the show and make any changes. After Arthur approved the grade, the producers and showrunner would come in for their viewing. They could make any additional changes at that time. (Read our interview with Arthur Albert here.)

How do you prefer to work with directors/DPs?
The first thing is to have a conversation with them about their approach and how they view color as being part of the story they want to tell. I always like to get a feel for how the cinematographer will shoot the show and what LUTs, if any, they’re using so I can emulate that look as a starting point for my color grading.

It is really important to me to find out how a director envisions the image he or she would like to portray on the screen. An example of this would be facial expressions: Do we want to see everything, or do they mind if the shadow side remains dark and the light falls off?

A lot of times, it’s about how the actors emote and how they work in tandem with each other to create tension, comedy or other emotions — and what the director is looking for in these scenes.

Any tips for getting the most out of a project from a color perspective?
Communication. Communication. Communication. Having an open dialogue with the cinematographer, showrunners and directors is extremely important. If the colorist is able to get the first pass very close, you spend more time on the nuances rather than balancing or trying to find a look. That is why it is so important to have an understanding of the essence of what a director, cinematographer and showrunner is looking for.

How do you prefer the DP or director to describe their desired look?
However they’re comfortable in enlightening me to their styles or needs for the show is fine. Usually, we can discuss this when we have a camera test before principal photography starts. There’s no one way that you can work with everybody — you just adapt to how they work. And as a colorist, it’s your job to make that image sing or shine the way that they intended it to.

You used Resolve on this. Is there a particular tool that came in handy for this show?
All the tools in Resolve are useful for a drama series. You would not buy the large crayon box and throw out colors you didn’t like because, at some point, you might need them. I use all the tools — keys, windows, log corrections and custom curves — to create the looks that are needed.

You have been working in TV for many years. How has color grading changed during that time?
Color correction has become way more sophisticated over the years, and is continually growing and expanding into a blend of not only color grading but helping to create environments that are needed to express the look of a show. We no longer just have simple color correctors with simple secondaries; the toolbox continues to grow with added filters, added grain and sometimes even helping to create visual effects, which most color correctors are able to do today.

Where do you find inspiration? Art? Photography?
I’ve always loved photography and B&W movies. There’s a certain charm or subtlety that you find in B&W, whether it’s a film noir, the harshness of film grain, or just the use of shadow and light. I’ve always enjoyed going to museums and looking at different artists and how they view the world and what inspires them.

To me, it’s trying to portray an image and have that image make a statement. In daily life, you can see multiple examples as you go through your day, and I try and keep the most interesting ones that I can remember in my lexicon of images.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Good Company adds director Daniel Iglesias Jr.

Filmmaker Daniel Iglesias Jr., whose reel spans narrative storytelling to avant-garde fashion films with creativity and an eccentric visual style, has signed with full-service creative studio Good Company.

Iglesias’ career started while attending Chapman University’s renowned film school, where he earned a BFA in screen acting. At the same time, Iglesias and his friend Zack Sekuler began crafting images for his friends in the alt-rock band The Neighbourhood. Iglesias’ career took off after directing his first music video for the band’s breakout hit “Sweater Weather,” which reached over 310 million views. He continues working behind the camera for The Neighbourhood and other artists like X Ambassadors and AlunaGeorge.

Iglesias uses elements of surrealism and a blend of avant-garde and commercial compositions, often stemming from innovative camera techniques. His work includes projects for clients like Ralph Lauren, Steve Madden, Skyy Vodka and Chrysler and the Vogue film Death Head Sphinx.

One of his most celebrated projects was a two-minute promo for Margaux the Agency. Designed as a “living magazine,” Margaux Vol 1 merges creative blocking, camera movement and effects to create a kinetic visual catalog that is both classic and contemporary. The piece took home Best Picture at the London Fashion Film Festival, along with awards from the Los Angeles Film Festival, the International Fashion Film Awards and Promofest in Spain.

Iglesias’ first project since joining Good Company was Ikea’s Kama Sutra commercial for Ogilvy NY, a tongue-in-cheek exploration of the boudoir. Now he is working on a project for Paper Magazine and Tiffany.

“We all see the world through our own lens; through film, I can unscrew my lens and pop it onto other people and, by effect, change their point of view or even the depth of culture,” he says. “That’s why the medium excites me — I want to show people my lens.”

We reached out to Iglesias to learn a bit more about how he works.

How do you go about picking the people you work with?
I do have a couple DPs and PDs I like to work with on the regular, depending on the job, and sometimes it makes sense to work with someone new. If it’s someone new that I haven’t worked with before, I typically look at three things to get a sense of how right they are for the project: image quality, taste and versatility. Then it’s a phone call or meeting to discuss the project in person so we can feel out chemistry and execution strategy.

Do you trust your people completely in terms of what to shoot on, or do you like to get involved in that process as well?
I’m a pretty hands-on and involved director, but I think it’s important to know what you don’t know and delegate/trust accordingly. I think it’s my job as a director to communicate, as detailed and effectively as possible, an accurate explanation of the vision (because nobody sees the vision of the project better than I do). Then I must understand that the DPs/PDs/etc. have a greater knowledge of their field than I do, so I must trust them to execute (because nobody understands how to execute in their fields better than they do).

Since Good Company also provides post, how involved do you get in that process?
I would say I edit 90% of my work. If I’m not editing it myself, then I still oversee the creative in post. It’s great to have such a strong post workflow with Good Company.

Review: PixelTools V.1 PowerGrade presets for Resolve

By Brady Betzel

Color correction and color grading can be tricky (especially for those of us who don’t work as dedicated colorists). And being good at one doesn’t necessarily mean you will be good at the other. After watching hundreds of hours of tutorials, the only answer to getting better at color correction and color grading is to practice. As trite and cliché as it sounds, it’s the truth. There is also the problem of creative block. I can sometimes get around a creative block when color correcting or editing by trying out-of-the-box ideas, like adding a solid color on top of footage and changing blend modes to spark some ideas.

An easier way to get a bunch of quick looks on your footage is with LUTs (lookup tables) and preset color grades. LUTs can sometimes work at getting your footage into an acceptable spot color correction-wise or, technically, into the correct color space (the old technical vs. creative LUTs discussion). They often need to (or at least should) be tweaked to fit the footage you are using.
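To make the technical side of that concrete: applying a 3D LUT is just a per-pixel lookup into a color lattice, with interpolation between the lattice points. Below is a minimal NumPy/SciPy sketch of that operation — purely illustrative and not tied to any specific LUT product. It assumes the LUT has already been loaded into an N x N x N x 3 array indexed as lut[r, g, b] (parsing a .cube file is left out).

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def apply_3d_lut(rgb, lut):
    """Apply an N x N x N x 3 LUT (indexed lut[r, g, b]) to an (H, W, 3)
    image with values in [0, 1], using trilinear interpolation."""
    n = lut.shape[0]
    grid = np.linspace(0.0, 1.0, n)
    interp = RegularGridInterpolator((grid, grid, grid), lut)
    flat = np.clip(rgb, 0.0, 1.0).reshape(-1, 3)
    return interp(flat).reshape(rgb.shape)

# Sanity check with an identity LUT: the output should match the input.
n = 33
r, g, b = np.meshgrid(*[np.linspace(0.0, 1.0, n)] * 3, indexing="ij")
identity_lut = np.stack([r, g, b], axis=-1)
frame = np.random.rand(4, 4, 3)
assert np.allclose(apply_3d_lut(frame, identity_lut), frame, atol=1e-6)
```

The "tweak them to fit your footage" advice above amounts to adjusting the image before or after that fixed lookup, which is exactly what stacking nodes around a LUT does.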

Dawn

This is where PixelTools’ PowerGrade presets for Blackmagic’s DaVinci Resolve come into play. PixelTools’ presets give you that instant wow of a color grade, sharpening and even grain, but with the flexibility to tweak and adjust to your own taste.

PixelTools’ PowerGrade V.1 presets are a set of Blackmagic DaVinci Resolve PowerGrades (essentially pre-built color grades, sometimes containing noise reduction, glows or film grain) that retails for $79.99. Once purchased, the PowerGrade presets can be downloaded immediately. If you aren’t sure about the full commitment to purchase for $79.99, you can download eight sample PowerGrade presets to play with by signing up for PixelTools’ newsletter.

While it doesn’t typically matter which version of Resolve you are using with the PixelTools PowerGrades, you will probably want to make sure you are using Resolve Studio 15 (or higher), or you may miss out on some of the noise reduction or film grain. I’m running Resolve 16 Studio.

What are PowerGrades? In Resolve, you can save and access pre-built color correction node trees across all projects in a single database. This way, if you have an amazing orange-and-teal, bleach bypass or desaturated look with a vignette and noise reduction that you don’t want to rebuild inside every project, you can save them in the PowerGrades album in the color correction tab. Easy! Just go into the color correction tab > Gallery (in the upper left corner) > click the little split-window icon > right-click and choose “Add PowerGrade Album.”
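If you prefer to verify things from a script rather than clicking through the gallery, Resolve’s built-in Python scripting API can list what’s saved in your database. Here’s a minimal sketch, assuming Resolve Studio is running and the DaVinciResolveScript module is on your Python path; whether PowerGrade albums are returned by GetGalleryStillAlbums() alongside regular still albums is an assumption to verify in your version of Resolve, and the album itself still has to be created with the right-click step above.

```python
# Minimal sketch: list gallery albums and the stills/grades saved in each.
# Assumes DaVinci Resolve Studio is running and DaVinciResolveScript is on
# the Python path; PowerGrade albums appearing in this list is an assumption
# to confirm in your version of Resolve.
import DaVinciResolveScript as dvr

resolve = dvr.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()
gallery = project.GetGallery()

for album in gallery.GetGalleryStillAlbums():
    stills = album.GetStills()
    print(f"{gallery.GetAlbumName(album)}: {len(stills)} saved grade(s)")
    for still in stills:
        print("  -", album.GetLabel(still))
```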

Golden

Installing the PixelTools presets is pretty easy, but there are a few steps you are going to want to follow if you’ve never made a PowerGrades album before. Luckily, there is a video just for that. Once you’ve added the presets into your database, you can access over 110 grades in both Log and Rec 709 color spaces. In addition, there is a folder of “Utilities,” which offers some helpful tools like Scanlines (Mild-Intense), various Vignettes, Sky Debanding, preset Noise Reductions, two- and three-way Grain Nodes and much more. Some of the color grading presets fit on one node, but some have five or six nodes, like the “2-Strip Holiday.” They will sometimes be applied as a Compound Node for organization’s sake but can be decomposed to see all the goodness inside.

The best part of PixelTools, other than the great looks, is the ability to decompose or view the Compound Node structure and see what’s under the hood. Not only does it make you appreciate all of the painstaking work that has already been done for you, but you can study it, tweak it and learn from it. I know a lot of companies don’t like to reveal how things are done, but with PixelTools you can break down the grades. It follows my favorite motto: “A rising tide lifts all boats.”

From the understated “2-Strip Holiday” look to the crunchy “Bleach Duotone 2” with the handy “Saturation Adjust” node on the end of the tree, PixelTools is a prime example of pre-built looks that can be as easy as dragging and dropping onto a clip or as intricate as adjusting each node the way you like it. One of my favorite looks is a good old bleach bypass — use two layer nodes (one desaturated and one colored), layer mix with a composite mode set to Overlay and adjust saturation to taste. The bleach bypass setup is not a tightly guarded secret, but PixelTools gets you right to the look with Bleach Duotone 2 and also adds a nice orange-and-teal treatment on top.
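For the curious, that two-layer recipe translates into a few lines of image math. Here’s a minimal NumPy sketch of the same idea — a desaturated copy composited over the original in Overlay mode, with saturation then pulled back to taste — assuming normalized RGB in [0, 1]. It illustrates the recipe described above, not PixelTools’ actual node tree.

```python
import numpy as np

REC709_LUMA = np.array([0.2126, 0.7152, 0.0722])

def bleach_bypass(rgb, saturation=0.5):
    """Approximate a bleach-bypass look on an (H, W, 3) float image in [0, 1]."""
    # Desaturated layer: luma repeated across all three channels.
    gray = np.repeat((rgb @ REC709_LUMA)[..., None], 3, axis=-1)

    # Overlay composite of the gray layer over the colored base,
    # mirroring the two-layer-node / Overlay blend described above.
    overlay = np.where(rgb < 0.5,
                       2.0 * rgb * gray,
                       1.0 - 2.0 * (1.0 - rgb) * (1.0 - gray))

    # "Adjust saturation to taste": blend the result back toward its own luma.
    out_luma = (overlay @ REC709_LUMA)[..., None]
    return np.clip(out_luma + saturation * (overlay - out_luma), 0.0, 1.0)

# Example: grade a random test frame at 35% saturation.
frame = np.random.rand(1080, 1920, 3)
graded = bleach_bypass(frame, saturation=0.35)
```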

2-Strip Holiday

Now I know what you are thinking — “Orange and teal! Come on, what are we, Michael Bay making Transformers 30?!” Well, the answer is, obviously, yes. But to really dial the look to taste on my test footage, I brought down the Saturation node at the end of the node tree to around 13%, and it looks fantastic! The moral of the story: always dial in your looks, especially with presets. Just a little customization can quickly take a preset look to a personalized look. Plus, you won’t be the person who just throws on a preset and walks away.

Will these looks work with my footage? If you shot in a Log-ish style like SLog, BMD Film, Red Log Film or even GoPro Flat, you can use the Log presets and dial them to taste. If you shot footage in Rec 709 with your Canon 5D Mark II, you can just use the standard looks. And if you want to create your own base grade on Log footage, just add the PixelTools PowerGrade nodes after!

Much like my favorite drag-and-drop tools from Rampant Design, PixelTools will give you a jump on your color grading quickly and, if nothing else, can maybe shake loose some of that colorist creative block that creeps in. Throw on that “Fuji 1” or “Fuji 2” look, add a serial node at the beginning and crank up the red highlights… who knows, it may give you the creative jumpstart you are looking for. Know the rules to break the rules, but also break the rules to get those creative juices flowing.

Saturate-Glow-Shadows

Summing Up
In the end, PixelTools is not just a set of PowerGrades for DaVinci Resolve; they can also be creative jumpstarts. If you think your footage is mediocre, you will be surprised at what a good color grade can do. It can save your shoot. But don’t forget about rendering when you are finished — rendering speed will still depend on your CPU and GPU setup. Using an Asus ConceptD 7 laptop with an Nvidia RTX 2080 GPU, I exported a one-minute Blackmagic Raw sequence (containing six clips) with only color correction to 10-bit DPX files in 46 seconds; with a random PixelTools PowerGrade applied to each clip, it took 40 seconds! In this case the Nvidia RTX 2080 really aided in the fast export, but your mileage may vary.

Check out pixeltoolspost.com and make sure to at least download their sample pack. From one of the five Kodak looks and two Fuji looks to Tobacco Newspaper and Old Worn VHS 2 (with a hint of chromatic aberration), you are sure to find something that fits your footage.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and The Shop. He is also a member of the Producers Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Director Ang Lee: Gemini Man and a digital clone

By Iain Blair

Filmmaker Ang Lee has always pushed the boundaries in cinema, both technically and creatively. His film Life of Pi, which he directed and produced, won four Academy Awards — for Best Direction, Best Cinematography, Best Visual Effects and Best Original Score.

Lee’s Brokeback Mountain won three Academy Awards, including Best Direction, Best Adapted Screenplay and Best Original Score. Crouching Tiger, Hidden Dragon was nominated for 10 Academy Awards and won four, including Best Foreign Language Film for Lee, Best Cinematography, Best Original Score and Best Art Direction/Set Decoration.

His latest, Paramount’s Gemini Man, is another innovative film, this time disguised as an action-thriller. It stars Will Smith in two roles — first, as Henry Brogan, a former Special Forces sniper-turned-assassin for a clandestine government organization; and second (with the assistance of ground-breaking visual effects) as “Junior,” a cloned younger version of himself with peerless fighting skills who is suddenly targeting him in a global chase. The chase takes them from the estuaries of Georgia to the streets of Cartagena and Budapest.

Rounding out the cast are Mary Elizabeth Winstead as Danny Zakarweski, a DIA agent sent to surveil Henry; Golden Globe Award-winner Clive Owen as Clay Verris, a former Marine officer now seeking to create his own personal military organization of elite soldiers; and Benedict Wong as Henry’s longtime friend, Baron.

Lee’s creative team included director of photography Dion Beebe (Memoirs of a Geisha, Chicago), production designer Guy Hendrix Dyas (Inception, Indiana Jones and the Kingdom of the Crystal Skull), longtime editor Tim Squyres (Life of Pi and Crouching Tiger, Hidden Dragon) and composer Lorne Balfe (Mission: Impossible — Fallout, Terminator Genisys).

The groundbreaking visual effects were supervised by Bill Westenhofer, Academy Award-winner for Life of Pi as well as The Golden Compass, and Weta Digital’s Guy Williams, an Oscar-nominee for The Avengers, Iron Man 3 and Guardians of the Galaxy Vol. 2.

Will Smith and Ang Lee on set

I recently talked to Lee — whose directing credits include Taking Woodstock, Hulk, Ride With the Devil, The Ice Storm and Billy Lynn’s Long Halftime Walk — about making the film, which has already generated a lot of awards talk about its cutting-edge technology, the workflow and his love of editing and post.

Hollywood’s been trying to make this for over two decades now, but the technology just wasn’t there before. Now it’s finally here!
It was such a great idea, if you can visualize it. When I was first approached about it by Jerry Bruckheimer and David Ellison, they said, “We need a movie star who’s been around a long time to play Henry, and it’s an action-thriller and he’s being chased by a clone of himself,” and I thought the whole clone idea was so fascinating. I think if you saw a young clone version of yourself, you wouldn’t see yourself as special anymore. It would be, “What am I?” That also brought up themes like nature versus nurture and how different two people with the same genes can be. Then the whole idea of what makes us human? So there was a lot going on, a lot of great ideas that intrigued me. How does aging work and affect you? How would you feel meeting a younger version of yourself? I knew right away it had to be a digital clone.

You certainly didn’t make it easy for yourself as you also decided to shoot it in 120fps at 4K and in 3D.
(Laughs) You’re right, but I’ve been experimenting with new technology for the past decade, and it all started with Life of Pi. That was my first taste of 3D, and for 3D you really need to shoot digitally because of the need for absolute precision and accuracy in synchronizing the two cameras and your eyes. And you need a higher frame rate to get rid of the strobing effect and any strangeness. Then when you go to 120 frames per second, the image becomes so clear and far smoother. It’s like a whole new kind of moviemaking, and that’s fascinating to me.

Did you shoot native 3D?
Yes, even though it’s still so clumsy, and not easy, but for me it’s also a learning process on the set which I enjoy.

Junior

There’s been a lot of talk about digital de-aging use, especially in Scorsese’s The Irishman. But you didn’t use that technique for Will’s younger self, right?
Right. I haven’t seen The Irishman so I don’t know exactly what they did, but this was a total CGI creation, and it’s a lead character where you need all the details and performance. Maybe the de-aging is fine for a quick flashback, but it’s very expensive to do, and it’s all done manually. This was also quite hard to do, and there are two parts to it: Scientifically, it’s quite mind-boggling, and our VFX supervisor Bill Westenhofer and his team worked so hard at it, along with the Weta team headed by VFX supervisor Guy Williams. So did Will. But then the hardest part is dealing with audiences’ impressions of Junior, as you know in the back of your mind that a young Will Smith doesn’t really exist. Creating a fully digital believable human being has been one of the hardest things to do in movies, but now we can.

How early on did you start integrating post and all the VFX?
Before we even started anything, as we didn’t have unlimited money, a big part of the budget went to doing a lot of tests, new equipment, R&D and so on, so we had to be very careful about planning everything. That’s the only way you can reduce costs in VFX. You have to be a good citizen and very disciplined. It was a two-year process, and you plan and shoot layer by layer, and you have to be very patient… then you start making the film in post.

I assume you did a lot of previz?
(Laughs) A whole lot, and not only for all the obvious action scenes. Even for the non-action stuff, we designed and made the cartoons and did previz and had endless meetings and scouted and measured and so on. It was a lot of effort.

How tough was the shoot?
It was very tough and very slow. My last three movies have been like this since the technology’s all so new, so it’s a learning process as you’re figuring it all out as you go. No matter how much you plan, new stuff comes up all the time and equipment fails. It feels very fragile and very vulnerable sometimes. And we only had a budget for a regular movie, so we could only shoot for 80 days, and we were on three continents and places like Budapest and Cartagena as well as around Savannah in the US. Then I insist on doing all the second unit stuff as well, apart from a few establishing shots and sunsets. I have to shoot everything, so we had to plan very carefully with the sound team as every shot is a big deal.

Where did you post?
All in New York. We rented space at Final Frame, and then later we were at Harbor. The thing is, no lab could process our data since it was so huge, so when we were based in Savannah we just built our own technology base and lab so we could process all our dailies and so on — and we bought all our servers, computers and all the equipment needed. It was all in-house, and our technical supervisor Ben Gervais oversaw it all. It was too difficult to take all that to Cartagena, but we took it all to Budapest and then set it all up later in New York for post.

Do you like the post process?
I like the first half, but then it’s all about previews, getting notes, changing things. That part is excruciating. Although I have to give a lot of credit to Paramount as they totally committed to all the VFX quite early and put the big money there before they even saw a cut so we had time to do them properly.

Junior

Talk about editing with Tim Squyres. How did that work?
We sent him dailies. When I’m shooting, I just want to live in my dreams, unless something alarms me, and he’ll let me know. Otherwise, I prefer to work separately. But on this one, since we had to turn over some shots while we were shooting, he came to the set in Budapest, and we’d start post already, which was new to me. Before, I always liked to cut separately.

What were the big editing challenges?
Trying to put all the complex parts together, dealing with the rhythm and pace, going from quiet moments to things like the motorcycle chase scenes and telling the story as effectively as we could — all the usual things. In this medium, everything is more critical visually.

All the VFX play a big role. How many were there?
Over 1,000, but then Junior alone is a huge visual effect in every scene he’s in. Weta did all of him and complained that they got the hardest and most expensive part. (Laughs) The other, easier stuff was spread out to several companies, including Scanline and Clear Angle.

Ang Lee and Iain Blair

Talk about the importance of sound and music.
We did the mix at Harbor on its new stage, and it’s always so important. This time we did something new. Typically, you do Atmos at the final mix and mix the music along with all the rest, but our music editor did an Atmos mix on all the music first and then brought it to us for the final mix. That was very special.

Where did you do the DI and how important is it to you?
It’s huge on a movie like this. We set up our own DI suite in-house at Final Frame with the latest FilmLight Baselight, which is amazing. Our colorist Marcy Robinson had trained on it, and it was a lot easier than on the last film. Dion came in a lot and they worked together, and then I’d come in. We did a lot of work, especially on all the night scenes, enhancing moonlight and various elements.

I think the film turned out really well and looks great. When you have the combination of these elements like 3D, digital cinematography, high frame rate and high resolution, you really get “new immersive cinema.” So for me, it’s a new and different way of telling stories and processing them in your head. The funny thing is, personally I’m a very low-tech person, but I’ve been really pursuing this for the last few years.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Colorist Joanne Rourke grades Netflix horror film In the Tall Grass

Colorists are often called on to help enhance a particular mood or item for a film, show or spot. For Netflix’s In the Tall Grass — based on a story from horror writers Stephen King and Joe Hill — director Vincenzo Natali and DP Craig Wrobleski called on Deluxe Toronto’s Joanne Rourke to finesse the film’s final look using color to give the grass, which plays such a large part in the film, personality.

In fact, most of the film takes place in a dense Kansas field. It all begins when a brother and his pregnant sister hear a boy’s cries coming from a field of tall grass and go to find him. Soon they realize they can’t escape.

Joanne Rourke

“I worked with Vincenzo more than 20 years ago when I did the video mastering for his film Cube, so it was wonderful to reconnect with him and a privilege to work with Craig. The color process on this project was highly collaborative and we experimented a lot. It was decided to keep the day exteriors natural and sunny with subtle chromatic variations between. While this approach is atypical for horror flicks, it really lends itself to a more unsettling and ominous feeling when things begin to go awry,” explains Rourke.

In the Tall Grass was principally shot using the ARRI Alexa LF camera system, which helped give the footage a more immersive feeling when the characters are trapped in the grass. The grass itself comprised a mix of practical and CG grass that Rourke adjusted the color of depending on the time of day and where the story was taking place in the field. For the night scenes, she focused on giving the footage a silvery look while keeping the overall look as dark as possible with enough details visible. She was also mindful to keep the mysterious rock dark and shadowed.

Rourke completed the film’s first color pass in HDR, then used that version to create an SDR trim pass. She found the biggest challenge of working in HDR on this film to be reining in unwanted specular highlights in night scenes. To adjust for this, she would often window specific areas of the shot, an approach that leveraged the benefits of HDR without pushing the look to the extreme. She used Blackmagic Resolve 15 along with the occasional Boris FX Sapphire plugins.

“Everyone involved on this project had a keen attention to detail and was so invested in the final look of the project, which made for such a great experience,” says Rourke. “I have many favorite shots, but I love how the visual of the dead crow on the ground perfectly captures the silver feel. Craig and Vincenzo created such stunning imagery, and I was just happy to be along for the ride. Also, I had no idea that head squishing could be so gleeful and fun.”

In the Tall Grass is now streaming on Netflix.

Harbor adds talent to its London, LA studios

Harbor has added to its London- and LA-based studios. Marcus Alexander joins as VP of picture post, West Coast, and Darren Rae joins as senior colorist; Rae will be supervising all dailies in the UK.

Marcus Alexander started his film career in London almost 20 years ago as an assistant editor before joining Framestore as a VFX editor. He helped Framestore launch its digital intermediate division, producing multiple finishes on a host of tent-pole and independent titles, before joining Deluxe to set up its London DI facility. Alexander then relocated to New York to head up Deluxe New York DI. With the growth in 3D movies, he returned to the UK to supervise stereo post conversions for multiple studios before his segue into VFX supervising.

“I remember watching It Came from Outer Space at a very young age and deciding there and then to work in movies,” says Alexander. “Having always been fascinated with photography and moving images, I take great pride in thorough involvement in my capacity from either a production or creative standpoint. Joining Harbor allows me to use my skills from a post-finishing background along with my production experience in creating both 2D and 3D images to work alongside the best talent in the industry and deliver content we can be extremely proud of.”

Rae began his film career in the UK in 1995 as a sound sync operator at Mike Fraser Neg Cutters. He moved into the telecine department in 1997 as a trainee, and by 1998 he was a dailies colorist working with 16mm and 35mm film. From 2001, Rae spent three years with The Machine Room in London as a telecine operator before joining Todd AO’s London lab in 2004 as a colorist, working on drama and commercials on 35mm and 16mm film as well as 8mm projects for music videos. In 2006, Rae moved into grading dailies at Todd AO parent company Deluxe in Soho, London, moving to Company 3 London in 2007 as senior dailies colorist. In 2009, he was promoted to supervising colorist.

Prior to joining Harbor, Rae was senior colorist for Pinewood Digital, supervising multiple shows and overseeing a team of four, eventually becoming head of grading. Projects include Pokemon Detective Pikachu, Dumbo, Solo: A Star Wars Story, The Mummy, Rogue One, Doctor Strange and Star Wars Episode VII — The Force Awakens.

“My main goal is to make the director of photography feel comfortable. I can work on a big feature film from three months to a year, and the trust the DP has in you is paramount. They need to know that wherever they are shooting in the world, I’m supporting them. I like to get under the skin of the DP right from the start to get a feel for their wants and needs and to provide my own input throughout the entire creative process. You need to interpret their instructions and really understand their vision. As a company, Harbor understands and respects the filmmaker’s process and vision, so it’s the ideal new home for me,” says Rae.

Harbor has also announced that colorists Elodie Ichter and Katie Jordan are now available to work with clients on both the East and West Coasts in North America as well as the UK. Some of the team’s work includes Once Upon a Time in Hollywood, The Irishman, The Hunger Games, The Maze Runner, Maleficent, The Wolf of Wall Street, Anna, Snow White and the Huntsman and Rise of the Planet of the Apes.

The editors of Ad Astra: John Axelrad and Lee Haugen

By Amy Leland

The new Brad Pitt film Ad Astra follows astronaut Roy McBride (Pitt) as he journeys deep into space in search of his father, astronaut Clifford McBride (Tommy Lee Jones). The elder McBride disappeared years before, and his experiments in space might now be endangering all life on Earth. Much of the film features Pitt’s character alone in space with his thoughts, creating a happy challenge for the film’s editing team, who have a long history of collaboration with each other and the film’s director James Gray.

L-R: Lee Haugen and John Axelrad

Co-editors John Axelrad, ACE, and Lee Haugen share credits on three previous films — Haugen served as Axelrad’s apprentice editor on Two Lovers, and the two co-edited The Lost City of Z and Papillon. Ad Astra’s director, James Gray, was also at the helm of Two Lovers and The Lost City of Z. A lot can be said for long-time collaborations.

When I had the opportunity to speak with Axelrad and Haugen, I was eager to find out more about how this shared history influenced their editing process and the creation of this fascinating story.

What led you both to film editing?
John Axelrad: I went to film school at USC and graduated in 1990. Like everyone else, I wanted to be a director. Everyone that goes to film school wants that. Then I focused on studying cinematography, but then I realized several years into film school that I don’t like being on the set.

Not long ago, I spoke to Fred Raskin about editing Once Upon a Time… in Hollywood. He originally thought he was going to be a director, but then he figured out he could tell stories in an air-conditioned room.
Axelrad: That’s exactly it. Air conditioning plays a big role in my life; I can tell you that much. I get a lot of enjoyment out of putting a movie together and of being in my own head creatively and really working with the elements that make the magic. In some ways, there are a lot of parallels with the writer when you’re an editor; the difference is I’m not dealing with a blank page and words — I’m dealing with images, sound and music, and how it all comes together. A lot of people say the first draft is the script, the second draft is the shoot, and the third draft is the edit.

L-R: John and Lee at the Papillon premiere.

I started off as an assistant editor, working for some top editors for about 10 years in the ’90s, including Anne V. Coates. I was an assistant on Out of Sight when Anne Coates was nominated for the Oscar. Those 10 years of experience really prepped me for dealing with what it’s like to be the lead editor in charge of a department — dealing with the politics, the personalities and the creative content and learning how to solve problems. I started cutting on my own in the late ‘90s, and in the early 2000s, I started editing feature films.

When did you meet your frequent collaborator James Gray?
Axelrad: I had done a few horror features, and then I hooked up with James on We Own the Night, and that went very well. Then we did Two Lovers after that. That’s where Lee Haugen came in — and I’ll let him tell his side of the story — but suffice it to say that I’ve done five films for James Gray, and Lee Haugen rose up through the ranks and became my co-editor on the Lost City of Z. Then we edited the movie Papillon together, so it was just natural that we would do Ad Astra together as a team.

What about you, Lee? How did you wind your way to where we are now?
Lee Haugen: Growing up in Wisconsin, any time I had a school project, like writing a story or an article, I would turn it into a short video or short film instead. Back then I had to shoot on VHS tape and edit tape to tape by pushing play, hitting record and timing it. It took forever, but that was when I really found out that I loved editing.

So I went to school with a focus on wanting to be an editor. After graduating from Wisconsin, I moved to California and found my way into reality television. That was the mid-2000s and it was the boom of reality television; there were a lot of jobs that offered me the chance to get in the hours needed for becoming a member of the Editors Guild as well as more experience on Avid Media Composer.

After about a year of that, I realized working the night shift as an assistant editor on reality television shows was not my real passion. I really wanted to move toward features. I was listening to a podcast by Patrick Don Vito (editor of Green Book, among other things), and he mentioned John Axelrad. I met John on an interview for We Own the Night when I first moved out here, but I didn’t get the job. But a year or two later, I called him, and he said, “You know what? We’re starting another James Gray movie next week. Why don’t you come in for an interview?” I started working with John the day I came in. I could not have been more fortunate to find this group of people that gave me my first experience in feature films.

Then I had the opportunity to work on a lower-budget feature called Dope, and that was my first feature editing job by myself. The success of the film at Sundance really helped launch my career. Then things came back around. John was finishing up Krampus, and he needed somebody to go out to Northern Ireland to edit the assembly of The Lost City of Z with James Gray. So, it worked out perfectly, and from there, we’ve been collaborating.

Axelrad: Ad Astra is my third time co-editing with Lee, and I find our working as a team to be a naturally fluid and creative process. It’s a collaboration entailing many months of sharing perspectives, ideas and insights on how best to approach the material, and one that ultimately benefits the final edit. Lee wouldn’t be where he is if he weren’t a talent in his own right. He proved himself, and here we are together.

How has your collaborative process changed and grown from when you were first working together (John, Lee and James) to now, on Ad Astra?
Axelrad: This is my fifth film with James. He’s a marvelous filmmaker, and one of the reasons he’s so good is that he really understands the subtlety and power of editing. He’s very neoclassical in his approach, and he challenges the viewer since we’re all accustomed to faster cutting and faster pacing. But with James, it’s so much more of a methodical approach. James is very performance-driven. It’s all about the character, it’s all about the narrative and the story, and we really understand his instincts. Additionally, you need to develop a shorthand and truly understand what the director wants.

Working with Lee, it was just a natural process to have the two of us cutting. I would work on a scene, and then I could say, “Hey Lee, why don’t you take a stab at it?” Or vice versa. When James was in the editing room working with us, he would often work intensely with one of us and then switch rooms and work with the other. I think we each really touched almost everything in the film.

Haugen: I agree with John. Our way of working is very collaborative — that includes John and me, but also our assistant editors and additional editors. It’s a process that we feel benefits the film as a whole; when we have different perspectives, it can help us explore options that raise the film to another level. And when James comes in, he’s extremely meticulous. As John said, he and I both touched every single scene, and I think we’ve even touched every frame of the film.

Axelrad: To add to what Lee said about involving our whole editing team: I love mentoring, and I love having my crew feel very involved, not just with technical stuff but creatively. We worked with a terrific guy, Scott Morris, who was our first assistant editor. Ultimately, he got bumped up during the course of the film and received an additional editor credit on Ad Astra.

We involve everyone, even down to the post assistant. We want to hear their ideas and make them feel like a welcome part of a collaborative environment. They obviously have to focus on their primary tasks, but I think it just makes for a much happier editing room when everyone feels part of a team.

How did you manage an edit that was so collaborative? Did you have screenings of dailies or screenings of cuts?
Axelrad: During dailies it was just James, and we would send edits for him to look at. But James doesn’t really start until he’s in the room. He really wants to explore every frame of film and try all the infinite combinations, especially when you’re dealing with drama and dealing with nuance and subtlety and subtext. Those are the scenes that take the longest. When I put together the lunar rover chase, it was almost easier in some ways than some of the intense drama scenes in the film.

Haugen: As the dailies came in, John and I would each take a scene and do a first cut. And then, once we had something to present, we would call everybody in to watch the scene. We would get everybody’s feedback and see what was working, what wasn’t working. If there were any problems that we could address before moving to the next scene, we would. We liked to get the outside point of view, because once you get further and deeper into the process of editing a film, you do start to lose perspective. To be able to bring somebody else in to watch a scene and to give you feedback is extremely helpful.

One thing that John established with me on Two Lovers — my first editing job on a feature — was allowing me to come and sit in the room during the editing. After my work was done, I was welcome to sit in the back of the room and just observe the interaction between John and James. We continued that process on this film, to give our team that same chance to learn and observe how an edit room works. That helped me become an editor.

John, you talked about how the action scenes are often easier to cut than the dramatic scenes. It seems like that would be even more true with Ad Astra, because so much of this film is about isolation. How does that complicate the process of structuring a scene when it’s so much about a person alone with his own thoughts?
Axelrad: That was the biggest challenge, but one we were prepared for. To James’ credit, he’s not precious about his written words; he’s not precious about the script. Some directors might say, “Oh no, we need to mold it to fit the script,” but he allows the actors to work within a space. The script is a guide for them, and they bring so much to it that it changes the story. That’s why I always say that we serve the ego of the movie. The movie, in a way, informs us what it wants to be, and what it needs to be. And in the case of this, Brad gave us such amazing nuanced performances. I believe you can sometimes shape the best performance around what is not said through the more nuanced cues of facial expressions and gestures.

So, as an editor, when you can craft something that transcends what is written and what is photographed and achieve a compelling synergy of sound, music and performance — to create heightened emotions in a film — that’s what we’re aiming for. In the case of his isolation, we discovered early on that having voiceover and really getting more interior was important. That wasn’t initially part of the cut, but James had written voiceover, and we began to incorporate that, and it really helped make this film into more of an existential journey.

The further he goes out into space, the deeper we go into his soul, and it’s really a dive into the subconscious. That sequence where he dives underwater in the cooling liquid of the rocket, he emerges and climbs up the rocket, and it’s almost like a dream. Like how in our dreams we have superhuman strength as a way to conquer our demons and our fears. The intent really was to make the film very hypnotic. Some people get it and appreciate it.

As an editor, sound often determines the rhythm of the edit, but one of the things that was fascinating with this film is how deafeningly quiet space likely is. How do you work with the material when it’s mostly silent?
Haugen: Early on, James established that he wanted to make the film as realistic as possible. Sound, or lack of sound, is a huge part of space travel. So the hard part is when you have, for example, the lunar rover chase on the moon, and you play it completely silent; it’s disarming and different and eerie, which was very interesting at first.

But then we started to explore how we could make the sound more realistic or find ways to amplify the action beats through sound. One way was that when things hit him or vibrated off of his suit, he could feel the impacts and hear those vibrations.

Axelrad: It was very much part of our rhythm, of how we cut it together, because we knew James wanted to be as realistic as possible. We did what we could with the soundscapes that were allowable for a big studio film like this. And, as Lee mentioned, playing it from Roy’s perspective — being in the space suit with him. It was really just to get into his head and hear things how he would hear things.

Thanks to Max Richter’s beautiful score, we were able to hone the rhythms to induce a transcendental state. We had Gary Rydstrom and Tom Johnson mix the movie for us at Skywalker, and they were the ultimate creators of the balance of the rhythms of the sounds.

Did you work with music in the cut?
Axelrad: James loves to temp with classical music. In previous films, we used a lot of Puccini. In this film, there was a lot of Wagner. But Max Richter came in fairly early in the process and developed such beautiful themes, and we began to incorporate his themes. That really set the mood.

When you’re working with your composer and sound designer, you feed off each other. So things that they would do would inspire us, and we would change the edits. I always tell the composers when I work with them, “Hey, if you come up with something, and you think musically it’s very powerful, let me know, and I am more than willing to pitch changing the edit to accommodate.” Max’s music editor, Katrina Schiller, worked in-house with us and was hugely helpful, since Max worked out of London.

We tend not to cut with music at first, because you don’t want music to become a Band-Aid that covers up a problem in the edit. But once we feel the picture is working and the rhythm is going, sometimes the music will just fit perfectly, even as temp music. And if the rhythms match up to what we’re doing, then we know we’ve done it right.

What is next for the two of you?
Axelrad: I’m working on a lower-budget movie right now, a Lionsgate feature film. The title is under wraps, but it stars Janelle Monáe, and it’s kind of a socio-political thriller.

What about you Lee?
Haugen: I jumped onto another film as well. It’s an independent film starring Zoe Saldana. It’s called Keyhole Garden, and it’s this very intimate drama that takes place on the border between Mexico and America. So it’s a very timely story to tell.


Amy Leland is a film director and editor. Her short film, Echoes, is now available on Amazon Video. She also has a feature documentary in post, a feature screenplay in development and a new doc in pre-production. She is an editor for CBS Sports Network and recently edited the feature “Sundown.” You can follow Amy on Twitter at @amy-leland and on Instagram at @la_directora.

Quick Chat: Frame.io’s new global SVP of innovation, Michael Cioni

By Randi Altman

Production and post specialist Michael Cioni, whom many of you might know from his years at Light Iron and Panavision, has joined Frame.io as global SVP of innovation. He will lead a new LA-based division of Frame.io that is focused on continued investment into cloud-enabled workflows for films and episodics — specifically, automated camera-to-cutting room technology.

Frame.io has been 100 percent cloud-based since the company was formed, according to founder Emery Wells. “We started seeding new workflows around dailies, collaborative review and realtime integration with NLEs for parallel work and approvals. Now, with Michael, we’re building Frame.io for the new frontier of cloud-enabled professional workflows. Frame.io will leverage machine learning and a combination of software and hardware in a way that will truly revolutionize collaboration.”

Quoted in a Frame.io release that went out today, Cioni says, “A robust camera-to-cloud approach means filmmakers will have greater access to their work, greater control of their content and greater speed with which to make key decisions. Our new roadmap will dramatically reduce the time it takes to get original camera negative into the hands of editors. Directors, cinematographers, post houses, DITs and editors will all be able to work with recorded images in real time, regardless of location.”

We reached out to Cioni with some questions about Frame.io and the cloud.

Why was now the right time for you to move on from Light Iron — which you helped to establish — and Panavision to join Frame.io?
After 10 years at Light Iron and over four at Panavision, I have been very fortunate to spend large portions of my career focused on both post and production. Being at both companies gave me access to the unique challenges our industry collaborators face, especially with more productions operating on global schedules. Light Iron and Panavision equipped me with the ideal training to explore something that couples production and post together in an entirely new way. Frame.io is the right foundation for that change.

What will your day-to-day look like at the company?
I will be based in LA and helping build out Frame.io’s newest division in Los Angeles. I will also be traveling regularly to New York to work directly with the engineers and security teams on our roadmap development. This is great for me because I loved living in New York when we opened up Light Iron NY, but I also love working in LA, where so many post and production infrastructures call home.

Frame.io was founded by post pros. Why is it so important for the company to continue that tradition with your hire?
I find that the key to success in any industry is largely dependent on how deep your knowledge well goes. Even though we in media and entertainment serve the world through creative means, the filmmaking process is inherently complex and inherently technical. It always has been.

The best technologies are the ones that are invisible; they let the creative process flow without making you think about the technology behind what is happening. Frame.io CEO Emery Wells and I have a profound respect for post production because we were both entrepreneurs and experts in the post space. Anyone who has built or operated a post facility (big or small) knows that post is a hub linking together nearly all workflow components for both creative and technical team members.

Because post lives at the core of Emery and myself, Frame.io will always be grounded in the professional workflow space, which enables us to better evolve our technology into markets of every type and scale.

Your roadmap seems in line with the MovieLabs white paper on the future of production, which is cloud-based. Can you address that?
MovieLabs is arguably the best representation of a technological roadmap for the media and entertainment industry. I was thrilled to see an early copy because it parallels a similar vision I have been exploring since 2013. I believe MovieLabs paints an accurate picture of the great things we are going to be able to do using cloud and machine learning technology, but it also demonstrates how many challenges there are before we can enjoy all the benefits. Frame.io not only supports the conclusions of the MovieLabs white paper, we have already begun deploying solutions to bring a new virtual creative world to reality.

Main Image: (L-R) Michael Cioni and Emery Wells

Review: Boxx’s Apexx A3 AMD Ryzen workstation

By Mike McCarthy

Boxx’s Apexx A3 is based on AMD’s newest Ryzen CPUs and the X570 chipset. Boxx has taken these elements and added liquid CPU cooling, professional GPUs and a compact, solid case to create an optimal third-generation Ryzen system configured for pros. It can support dual GPUs and two 3.5-inch hard drives, as well as the three M.2 slots on the board and anything that can fit into its five PCIe slots. The system I am reviewing came with AMD’s top CPU, the 12-core 3900X running at 3.8GHz, as well as 64GB of DDR4-2666 RAM and a Quadro RTX 4000 GPU. I also tested it with a 40GbE network card and a variety of other GPUs.

I have been curious about AMD’s CPU reboot with Ryzen architecture, but I haven’t used an AMD-based system since the 64-bit Opterons in the HP xw9300s that I had in 2006. That was also around the same time that I last used a system from Boxx, in the form of its HD Pro RT editing systems, based on those same AMD Opteron CPUs. At the time, Boxx systems were relatively unique in that they had large internal storage arrays with eight or 10 separate disks, and those arrays came in a variety of forms.

The three different locations where I worked during that period had Boxx workstations with IDE-, SATA- and SCSI-based storage arrays. All three types of storage experienced various issues at the locations where I worked with them, but that was probably more a result of the unreliable hard drives and relatively new PCI RAID controllers available at the time than a reflection on Boxx.

Regardless, and for whatever reason, Boxx focused more on processing performance than storage over the next decade, marketing more toward 3D animation and VFX artists (among other users) who do lots of processing on small amounts of data, instead of video editors who do small amounts of processing on large amounts of data. At this point, most large data sets are stored on network appliances or external arrays, although my projects have recently been leaning the other way, using older server chassis with lots of internal drive slots.

Out of the Box
The Apexx system shipped from Boxx in a reasonably sized carton with good foam protection. Compared to the servers I have been using recently, it is tiny and feather-light at 25 pounds. The compact case is basically designed upside down from conventional layouts, with the power supply at the bottom and the card slots at the top. To save space, it fits the 750W power supply directly over the CPU, which is liquid-cooled with a radiator at the front of the case. There are two SATA hard drive bays at the top of the case. The system is based on the X570 Aorus Ultra motherboard, which has three full-length and two x1 PCIe slots, as well as three M.2 slots.

The system has no shortage of USB ports, with four USB 3.0 ports up front next to the headphone and mic connectors, and 10 on the back panel. Of those, three are USB 3.1 Gen2, including one Type-C port. The rest are Type-A: three more USB 3.0 ports and four USB 2.0 ports. The white USB 3.0 port allows you to update the BIOS from a USB stick if desired, which might come in handy when AMD’s fix for the Zen2 boost frequency issue becomes available. There are also 5.1 analog audio and SPDIF connectors on the board, as well as HDMI out and Wi-Fi antenna ports.

I hooked up my 8K monitor and connected the system to my network for initial configuration and setup. The simplest test I run is Maxon’s Cinebench R15, which returned a GPU score of 207 and a multi-core CPU score of 3169. Both of those values are the highest results I have ever gotten with that tool, including from dual-socket workstations, although I have not tested the newest generation of Intel Xeons. AMD’s CPUs are well-suited to that particular test, and this is the first true Nvidia Quadro card I have tested from the Turing-based RTX generation.

As this is an AMD X570 board, it supports PCIe 4.0, but that is of little benefit to current GPUs. The one case where the extra bandwidth could currently make a difference is NVMe SSDs playing back high-resolution frames. This system only came with a PCIe 3.0 SSD, but I am hoping to get a newer PCIe 4.0 one to run benchmarks on for a future article. In the meantime, this one is doing just fine for most uses, with over 3GB/sec of read and over 2GB/sec of write bandwidth. This is more than fast enough for uncompressed 4K work.
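
To put that “fast enough for uncompressed 4K” claim in perspective, here is a rough back-of-the-envelope check. This is a minimal Python sketch; the 10-bit RGB frame format and the 24fps/60fps frame rates are my own assumptions, not figures from the review.

# Rough bandwidth check: can ~3GB/s read and ~2GB/s write keep up with uncompressed 4K?
# Assumed format (not from the review): 4K DCI frames, 10-bit RGB.
WIDTH, HEIGHT = 4096, 2160
BITS_PER_PIXEL = 3 * 10                        # RGB at 10 bits per channel
BYTES_PER_FRAME = WIDTH * HEIGHT * BITS_PER_PIXEL / 8

for fps in (24, 60):
    gb_per_sec = BYTES_PER_FRAME * fps / 1e9
    print(f"Uncompressed 4K 10-bit RGB at {fps} fps needs about {gb_per_sec:.2f} GB/s")

# Prints roughly 0.80 GB/s at 24 fps and 1.99 GB/s at 60 fps, well within the
# drive's measured read speed and right at its write speed at 60 fps.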

Using Adobe Tools
Next I installed both the 2018 and 2019 versions of Adobe Premiere Pro and Media Encoder so I could run tests with the same applications I had used for previous benchmarks on other systems, for more accurate comparisons. I have a standard set of sequences I export in AME, which are based on raw camera footage from Red Monstro, Sony Venice and ARRI Alexa LF cameras, exported to HEVC at 8K and 4K, testing both 8-bit and deep color render paths. Most of these renders were also completed faster than on any other system I have tested, and this is “only” a single-socket consumer-level architecture (compared to Threadripper and Epyc).

I did further tests after adding a Mellanox 40GbE network card and swapping out the Quadro RTX 4000 for more powerful GPUs. I tested a GeForce RTX 2080 Ti, a Quadro RTX 6000, an older Quadro P6000 and an AMD Radeon Pro WX 8200. The 2080 Ti and RTX 6000 did allow 8K playback in realtime from RedCine-X, but the max-resolution, full-frame 8K files were right at the edge of smooth (around 23fps). Any smaller frame sizes were fine at 24p. The more powerful GeForce card didn’t improve my AME export times much, if at all, and got a 25% lower OpenGL score in Cinebench, revealing that Quadro drivers still make a difference for some 3D applications and that Adobe users don’t benefit much from investing in a GPU beyond a GeForce RTX 2070. The AMD card did much better than in my earlier tests, showing that AMD drivers and software support have improved significantly since then.

Real-World Use
Where the system really stood out is when I started to do some real work with it. The 40GbE connection to my main workstation allowed me to seamlessly open projects that are stored on my internal 40TB array. I am working on a large feature film at the moment, so I used it to export a number of reels and guide tracks. These are 4K sequences of 7K anamorphic Red footage with layers of GPU effects, titles, labels and notes, with over 20 layers of audio as well. Rendering out a 4K DNxHR file of a 20-minute reel takes 140 minutes on my 16-core dual-socket workstation, but this “consumer-level” AMD system kicks them out in under 90 minutes. My watermarked DNxHD guides render out 20% faster than before as well, even over the network. This is probably due to the higher overall CPU frequency, as I have discovered that Premiere doesn’t multi-thread very well.
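
As a quick sanity check on those numbers, the relative speedup works out as follows (a minimal sketch; the 90-minute figure is treated as the upper bound quoted above).

# Relative speedup of the Ryzen system on the 20-minute reel export,
# using the times quoted above (140 minutes vs. "under 90 minutes").
dual_socket_minutes = 140
ryzen_minutes = 90        # quoted as "under 90", so treat this as an upper bound

speedup = dual_socket_minutes / ryzen_minutes
saved = dual_socket_minutes - ryzen_minutes
print(f"At least {speedup:.2f}x faster, saving {saved}+ minutes per reel")
# -> At least 1.56x faster, saving 50+ minutes per reel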

For AME render times, lower is better; for Cinebench scores, higher is better.
Comparison system details:
Dell Precision 7910 with GeForce RTX 2080 Ti
Supermicro X9DRi with Quadro P6000
HP Z4 10-core workstation with GeForce RTX 2080 Ti
Razer Blade 15 with GeForce RTX 2080 Ti Max-Q

I also did some test exports in Blackmagic DaVinci Resolve. I am less familiar with that program, so my testing was much more limited, but it exported nearly as fast as Premiere, and the Nvidia cards were only slightly faster than the AMD GPUs in that app. (But I have few previous Resolve tests to use as a point of comparison to other systems.)

As an AMD system, there are a few limitations as compared to a similar Intel model. First of all, there is no support for the hardware encoding available in Intel’s Quick Sync integrated graphics hardware. This lack of support only matters if you have software that uses that particular functionality, such as my Adobe apps. But the system seems fast enough to accomplish those encode and decode tasks on its own. It also lacks a Thunderbolt port, as until recently that was an exclusively Intel technology. Now that Thunderbolt 3 is being incorporated into USB 4.0, it will be more important to have, but it will become available in a wider variety of products. It might be possible to add a USB 4.0 card to this system when the time comes, which would alleviate this issue.

When I first received the system, it reported the CPU as an 800MHz chip, which was the result of a BIOS configuration issue. After fixing that, the only other problem I had was a conflict between my P6000 GPU and my 8K display, which usually work great together; the system wouldn’t boot with that combination, which is a pretty obscure corner case. All other GPU and monitor combinations worked fine, and I tested a bunch. I worked with Boxx technical support on that and a few other minor issues, and they were very helpful, sending me spare parts to confirm that the issues weren’t caused by my own added hardware.

In the End
The system performed very well for me, and the configuration I received would meet the needs of most users. Even editing 8K footage no longer requires stepping up to a dual-socket system. The biggest variation will come with matching a GPU to your needs, as Boxx offers GeForce, Quadro and AMD options. Editors will probably be able to save some money, while those doing true 3D rendering might want to invest in an even more powerful GPU than the Quadro RTX 4000 that this system came with.

All of those options are available on the Boxx website, with the online configuration tool. The test model Boxx sent me retails for about $4,500. There are cheaper solutions available if you are a DIY person, but Boxx has assembled a well-balanced solution in a solid package, built and supported for you. They also sell much higher-end systems if you are in the market for that, but with recent advances, these mid-level systems probably meet the needs of most users. If you are interested in purchasing a system from them, using the code MIKEPOST at checkout will give you a discount.


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

NAB NY Panel: Working in 4K HDR for Netflix’s Russian Doll

Goldcrest Post senior colorist Nat Jencks will take part in a discussion about the technology and creativity behind the Netflix series Russian Doll at NAB Show New York. Joining Jencks will be post supervisor Lisa Melodia in a session moderated by our own postPerspective editor-in-chief Randi Altman.

Nat Jencks

The session will take place on Thursday, October 17 at 3:30pm at the Javits Convention Center. Those wishing to attend this event may do so for free by entering the code EP06 when registering for NAB Show New York.

Nominated for 13 Emmy Awards, including Outstanding Comedy Series, Outstanding Cinematography for a Single-Camera Series and Outstanding Single-Camera Picture Editing for a Comedy Series, Russian Doll has won critical and popular acclaim for its story of a young New York City woman, Nadia Vulvokov (Natasha Lyonne), who, after being killed in a traffic accident, finds herself continuously reliving the night of a birthday party held in her honor. Think Groundhog Day, but darker.

In this session, Jencks and Melodia will discuss how they balance art and tech, taking advantage of the latest technologies in depicting a highly cinematic version of New York’s East Village while still prioritizing creativity in storytelling. They will also discuss the intricacies of working in 4K HDR.

Goldcrest’s Jencks collaborated once again with cinematographer Chris Teague to finalize the look of Russian Doll. A colorist for 10 years, Jencks has worked on everything from studio features to indies, as well as episodic series, commercials and music videos. He has worked in post for two decades in total, including in VFX, title design and editorial.

Melodia is a post supervisor working in New York City. Prior to Russian Doll, she worked on comedies such as The Jim Gaffigan Show for TV Land and The Detour for TBS, as well as movies for HBO. Currently, she is the post supervisor on Darren Star’s new show, Emily in Paris.

 

Charlieuniformtango names company vets as new partners

Charlieuniformtango principal/CEO Lola Lott has named three of the full-service studio’s most veteran artists as new partners — editors Deedle LaCour and James Rayburn, and Flame artist Joey Waldrip. This is the first time in the company’s almost 25-year history that the partnership has expanded. All three will continue with their current jobs but have received the expanded titles of senior editor/partner and senior Flame artist/partner, respectively. Lott, who retains majority ownership of Charlieuniformtango, will remain principal/CEO, and Jack Waldrip will remain senior editor/co-owner.

“Deedle, Joey and James came to me and Jack with a solid business plan about buying into the company with their futures in mind,” explains Lott. “All have been with Charlieuniformtango almost from the beginning: Deedle for 20 years, Joey for 19 years and James for 18. Jack and I were very impressed and touched that they were interested and willing to come to us with funding and plans for continuing and growing their futures with us.”

So why now, after all these years? “Now is the right time because, while Jack and I still have a passion for this business, we also have employees/talent — people who have been with us for over 18 years — who have a passion to be partners in this company,” says Lott. “While still young, they have invested in and built their careers within the Tango culture, and they have the client bonds, maturity and understanding of the business to take Tango to a greater level for the next 20 years. That was Jack’s dream and mine, and they came to us at the perfect time.”

Charlieuniformtango is a full-service creative studio that produces, directs, shoots, edits, mixes, animates and provides motion graphics, color grading, visual effects and finishing for commercials, short films, full-length feature films, documentaries, music videos and digital content.

Main Image: (L-R) Joey Waldrip, James Rayburn, Jack Waldrip, Lola Lott and Deedle LaCour

Colorist Chat: Lucky Post’s Neil Anderson

After joining Lucky Post in Dallas in 2013 right out of film school, Neil Anderson was officially promoted to colorist in 2017. He has worked on a variety of projects during his time at the studio, including projects for Canada Dry, Costa, TGI Fridays, The Salvation Army and YETI. He also contributed to Augustine Frizzell’s feature comedy, Never Goin’ Back, which premiered at Sundance and was distributed by A24.

YETI

We checked in with Anderson to find out how he works, some favorite projects and what inspires him.

What do you enjoy most about your work?
That’s a really hard question because there are a lot of things I really enjoy about color grading. If I had to choose, I think it comes back to the fact that it’s rewarding to both left- and right-brained people. It truly is both an art and a science.

The satisfaction I get when I first watch a newly graded spot is also very special. A cohesive and mindful color grade absolutely transforms the piece into something greater, and it’s a great feeling to be able to make such a powerful impact.

What’s the most misunderstood aspect of color artistry?
I’m not sure many people stop and think about how amazing it is that we can fine-tune our engineering to something as wild as human eyesight. Our vision is fluid and organic, constantly changing under different constraints and environments, filled with optical illusions and imperfect guesses. There are immensely strange phenomena that drastically change our perception of what we see. Yet we need to make camera systems and displays work with this deeply non-uniform perception. It’s a massive area of study that we take for granted; I’m thankful for the color scientists out there.

Where do you find your creative inspiration?
I definitely like to glean new ideas and ways of approaching new projects from seeing other great colorists. Sometimes certain commercials come on TV that catch my eye and I’ll excitedly say to my partner Odelie, “That is damn good color!” Depending on the situation, I might get an eye-roll or two from her.

Tell us about some recent projects, and what made them stand out to you creatively?
Baylor Scott & White Health: I just loved how moody we made these spots in the end. They are very inspiring stories that we wanted to make feel even more impactful. I think the contrast and color really turned out beautifully.

Is This All There Is?

Is This All There Is? by Welcome Center: This is a recent music video that we filmed in a stunningly dilapidated house. The grit and grain we added in color really brings out the “worst” of it.

Hurdle: This was a documentary feature I worked on that I really enjoyed. The film was shot over a six-month window in the West Bank, so wrangling it while also giving it a distinct look was both difficult and fun.

Light From Light: Also a feature film that I finished a few months ago. I really enjoyed the process of developing the look with its wonderful DP Greta Zozula. We specifically wanted to capture the feeling of paintings by Andrew Wyeth, Thomas Eakins and Johannes Vermeer.

Current bingeable episodics and must see films?
Exhibit A, Mindhunter, Midsommar and The Cold Blue.

When you are not at Lucky Post, where do you like to spend time?
I’m an avid moviegoer so definitely a lot of my time (and money) is spent at the theater. I’m also a huge sports fan; you’ll find me anywhere that carries my team’s games! (Go Pack Go)

Favorite podcast?
The Daily (“The New York Times”)

Current Book?
“Parting the Waters: America in the King Years 1954-1963”

Dumbest thing you laughed at today?
https://bit.ly/2MYs0V1

Song you can’t stop listening to?
John Frusciante — 909 Day

Updated Apple Final Cut Pro features new Metal engine

Apple has updated Final Cut Pro X with a new Metal engine designed to provide performance gains across a wide range of Mac systems. It takes advantage of the new Mac Pro and the high-resolution, high-dynamic-range viewing experience of Apple Pro Display XDR. The company also optimized Motion and Compressor with Metal as well.

The Metal-based engine improves playback and accelerates graphics tasks in FCP X, including rendering, realtime effects and exporting on compatible Mac computers. According to Apple, video editors with a 15-inch MacBook Pro will benefit from performance that’s up to 20 percent faster, while editors using an iMac Pro will see gains up to 35 percent.

Final Cut Pro also works with the new Sidecar feature of macOS Catalina, which allows users to extend their Mac workspace by using an iPad as a second display to show the browser or viewer. Video editors can use Sidecar with a cable or they can connect wirelessly.

Final Cut Pro will now support multiple GPUs and up to 28 CPU cores. This means that rendering is up to 2.9 times faster and transcoding is up to 3.2 times faster than on the previous-generation 12-core Mac Pro. And Final Cut Pro uses the new Afterburner card when working with ProRes and ProRes Raw. This allows editors to simultaneously play up to 16 streams of 4K ProRes 422 video or work in 8K resolution with support for up to three streams of 8K ProRes Raw video.

Pro Display XDR
The Pro Display XDR features a 32-inch Retina 6K display, P3 wide color and extreme dynamic range. Final Cut Pro users can view, edit, grade and deliver HDR video with 1,000 nits of full screen sustained brightness, 1,600 nits peak brightness and a 1,000,000:1 contrast ratio. Pro Display XDR connects to the Mac through a single Thunderbolt cable, and pros using Final Cut Pro on Mac Pro can simultaneously use up to three Pro Display XDR units — two for the Final Cut Pro interface and one as a dedicated professional reference monitor.

Final Cut Pro 10.4.7 is available now as a free update for existing users and for $299.99 for new users on the Mac App Store. Motion 5.4.4 and Compressor 4.4.5 are also available today as free updates for existing users and for $49.99 each for new users on the Mac App Store.

Review: Samsung’s 970 EVO Plus 500GB NVMe M.2 SSD

By Brady Betzel

It seems that SSD prices are dropping by the hour. (This might be a slight exaggeration, but you understand what I mean.) Over the last year or so prices have fallen dramatically, including on high-speed NVMe SSDs. One of those is Samsung’s highly touted 970 EVO Plus NVMe line.

In this review, I am going to go over the 500GB version of Samsung’s 970 EVO Plus NVMe M.2 SSD. The drive comes in four sizes — 250GB, 500GB, 1TB and 2TB — which retail (according to www.samsung.com) for $74.99, $119.99, $229.99 and $479.99, respectively. For what it’s worth, I really didn’t see much of a price difference on the other sites I visited, namely Amazon and Best Buy.

On paper, the EVO Plus line of drives can achieve speeds of up to 3,500MB/s read and 3,300MB/s write. Keep in mind that the smaller the capacity, the lower the write speeds will be. For instance, the 250GB EVO Plus can still reach up to 3,500MB/s in sequential reads, while its sequential writes top out at 2,300MB/s. Comparatively, the “standard” 970 EVO line gets 3,400MB/s to 3,500MB/s sequential reads and 1,500MB/s sequential writes on the 250GB model. The standard EVO’s 500GB version costs just $89.99, but if you need more capacity, you will have to pay more.

There is another SSD to compare the 970 EVO Plus to: the 970 Pro, which only comes in 512GB and 1TB sizes, costing around $169.99 and $349.99, respectively. While the Pro version has similar read speeds to the Plus (up to 3,500MB/s) and actually slower write speeds (up to 2,700MB/s), the real selling point of the 970 Pro is its endurance rating. Samsung warranties the 970 line of drives for five years or a set number of terabytes written (TBW), whichever comes first. Among the 500GB-class 970 drives, the “standard” and Plus models are rated for 300TBW, while the Pro covers a whopping 600TBW.
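
As a rough illustration of what those endurance figures mean day to day, the short Python sketch below spreads each TBW rating evenly across the five-year warranty period; the assumption of steady daily writing is mine, not Samsung’s.

# What the 300 TBW / 600 TBW ratings work out to per day over a five-year warranty.
WARRANTY_DAYS = 5 * 365

for label, tbw in (("970 EVO / EVO Plus 500GB", 300), ("970 Pro 512GB", 600)):
    gb_per_day = tbw * 1000 / WARRANTY_DAYS   # decimal terabytes to gigabytes
    print(f"{label}: {tbw} TBW is roughly {gb_per_day:.0f} GB of writes per day")

# -> roughly 164 GB/day for the 300 TBW drives and 329 GB/day for the Pro.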

Samsung says its use of the latest V-NAND technology, in addition to its Phoenix controller, provides the highest speeds and power efficiency of the EVO NVMe drives. Essentially, V-NAND is a way to vertically stack memory instead of the previous method of stacking memory in a planar way. Stacking vertically allows for more memory in the same space in addition to longer life spans. You can read more about the Phoenix controller here.

If you are like me and want both a good warranty (or, really, faith in the product) and blazing speeds, check out the Samsung 970 EVO Plus line of drives: a great price point with almost all of the features of the Pro line. The 970 line of NVMe M.2 SSDs uses the 2280 form factor (meaning 22mm x 80mm) with an M-key interface. It’s important to understand which interface your SSD is compatible with (M key or B key); the Samsung 970 EVO drives are all M key. Most newer motherboards will have at least one, if not two, M.2 slots to plug drives into. You can also find PCIe adapters for under $20 or $30 on Amazon that will give you essentially the same read/write speeds, as well as external USB 3.1 Gen 2 (USB-C) enclosures that make it easier to swap drives without opening your case.

One really useful way to put these newly affordable drives to work: when color correcting, editing and/or performing VFX miracles in apps like Adobe Premiere Pro or Blackmagic DaVinci Resolve, use NVMe drives just for cache, still stores, renders and/or optimized media. With the low cost of these NVMe M.2 drives, you might be able to include the price of one when charging a client and throw it on the shelf when done, complete with the project and media. Not only will you have a super-fast way to access the media, but with an external enclosure you can easily swap another drive into the system.

Summing Up
In the end, the price points of the Samsung 970 EVO Plus NVMe M.2 drives are right in the sweet spot. There are, of course, competing drives that run a little bit cheaper, like the Western Digital Black SN750 NVMe SSDs (at around $99 for the 500GB model), but they come with a slightly slower read/write speed. So for my money, the Samsung 970 line of NVMe drives is a great combination of speed and value that can take your computer to the next level.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and The Shop. He is also a member of the Producers Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

GoPro intros Hero8 Black and Max cameras, plus accessories

GoPro has added two new cameras to its lineup — Hero8 Black and GoPro Max — as well as modular accessories called Mods.

The Hero8 Black ($399) features HyperSmooth 2.0 video stabilization with improved pitch-axis stabilization, and stabilization is now supported at all frame rates and resolutions. TimeWarp 2.0 auto-adjusts to the operator’s speed and can be slowed to realtime with a tap. The revamped SuperPhoto feature offers ghost-free HDR action photos, and the new LiveBurst mode captures 1.5 seconds of 12MP (4K 4:3) footage before and after the shutter. Hero8 Black also has a new wind-optimized front-facing mic and high-fidelity audio improvements.

The camera has four new digital lenses — ranging from GoPro’s patented SuperView to zero-distortion linear — and customizable capture presets for quick access to settings for any activity. It’s all housed in a frameless design with folding mounting fingers.

Hero8 Black is available for preorder now, with shipments beginning October 15.

Accessories
Hero8 Black can be turned into a vlogging or production camera with Mods, GoPro’s new modular accessory ecosystem. The Media Mod, Display Mod and Light Mod ($49.99) equip Hero8 Black with professional-grade audio, a front-facing display and enhanced lighting. The Mods enable on-demand expansion of Hero8 Black’s capabilities without losing the compact ruggedness of the Hero camera design.

The Media Mod ($79.99) features shotgun-mic directional audio and has two cold shoe mounts for additional accessories along with Type-C, HDMI and 3.5mm external mic adapter ports.

The Display Mod ($79.99) is a folding front- or rear-facing 1.9-inch display that attaches to the top of the Media Mod. It’s the perfect size for both framing up vlogging shots and folding down and out of the way when not in use.

The Light Mod ($49.99) is waterproof to 33 feet (10 meters), wearable and gear-mountable. The Light Mod is ready to brighten any scene, whether mounted to the Media Mod or attached to a GoPro mount. It’s rechargeable and comes complete with a diffuser to soften lighting when filming with Hero8 Black. Mods will be available for preorder in December.

GoPro Max
GoPro Max ($499) is a dual-lens GoPro camera. Waterproof to 16 feet (five meters), Max can be used as a single-lens, max-stabilized Hero camera, a dual-lens 360 camera or a vlogging camera — all in one. Max HyperSmooth, with its unbreakable stabilization and in-camera horizon leveling, eliminates the need for a gimbal. Max TimeWarp bends time and space with expanded control and performance over traditional TimeWarp. And Max SuperView delivers GoPro’s widest, most immersive field of view yet.

When creating 360 edits, Max users now have Reframe, the GoPro app’s new keyframe-based editing experience. Now it’s easy to quickly “reframe” 360 footage into a traditional video with super-smooth pans and transitions. Reframe matches the power of desktop 360 editing solutions, but with the convenience and usability of the GoPro app.

For vlogging, Max has four digital lenses for the ideal “look,” a front-facing touch screen for easy framing and six mics that enable shotgun-mic audio performance.

Max can be preordered now, with shipments beginning in late October.

Color grading IT Chapter Two’s terrifying return

In IT Chapter Two, the kids of the Losers’ Club are all grown up and find themselves lured back to their hometown of Derry. Still haunted both by the trauma that monstrous clown Pennywise let loose on the community and by each one’s own unique insecurities, the group (James McAvoy, Jessica Chastain, Bill Hader) find themselves up against even more terrifying forces than they faced in the first film, IT.

Stephen Nakamura

IT Chapter Two director Andy Muschietti called on cinematographer Checco Varese and colorist Stephen Nakamura of Company 3. Nakamura returned to the franchise, performing the final color grade at Efilm in Hollywood. “I felt the first one was going to be a big hit when we were working on it, because these kids’ stories were so compelling and the performances were so strong,” says Nakamura. “It was more than just a regular horror movie. This second one, in my opinion, is just as powerful in terms of telling these characters’ stories. And, not surprisingly, it also takes the scary parts even further.”

According to Nakamura, Muschietti “is a very visually oriented director. When we were coloring both of the films, he was very aware of the kinds of things we can do in the DI to enhance the imagery and make things even more scary. He pushed me to take some scenes in Chapter Two in directions I’ve never gone with color. I think it’s always important, whether you’re a colorist or a chef or a doctor, to always push yourself and explore new aspects of your work. Andy’s enthusiasm encouraged me to try new approaches to working in DaVinci Resolve. I think the results are very effective.”

For one thing, the technique he used to bring up just the light level in the eyes of the shapeshifting clown Pennywise got even more use here because there were more frightening characters to use it on. In many cases, the companies that created the visual effects also provided mattes that let Nakamura easily isolate and adjust the luminance of each individual eye in Resolve. When such mattes weren’t available, he used Resolve to track each eyeball a frame at a time.

“Resolve has excellent tracking capabilities, but we were looking to isolate just the tiny whites of the characters’ eyes,” Nakamura explains, “and there just wasn’t enough information to track.” It was meticulous work, he recalls, “but it’s very effective. The audience doesn’t consciously know we’re doing anything, but it makes the eyes brighter in a very strange way, kind of like a cat’s eyes when they catch the light. It really enhances the eerie feeling.”
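
The general idea of lifting luminance only inside a matte is easy to illustrate outside of Resolve. The NumPy sketch below is just that, an illustration of the underlying math rather than Nakamura’s actual node setup; the gain value, the 0-1 float range and the example matte are all my own assumptions.

import numpy as np

def lift_inside_matte(image, matte, gain=0.15):
    # image: float array, shape (H, W, 3), values in 0-1 (assumed)
    # matte: float array, shape (H, W), 1.0 inside the eye whites, 0.0 elsewhere
    # gain:  how much extra brightness to add inside the matte
    m = matte[..., np.newaxis]                      # broadcast the matte across RGB
    return np.clip(image * (1.0 + gain * m), 0.0, 1.0)

# Tiny usage example with random data standing in for a frame and a tracked matte.
frame = np.random.rand(1080, 1920, 3).astype(np.float32)
eye_matte = np.zeros((1080, 1920), dtype=np.float32)
eye_matte[500:520, 900:930] = 1.0                   # pretend this is a tracked eye shape
brighter = lift_inside_matte(frame, eye_matte)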

In addition, Nakamura and the filmmakers made use of Resolve’s Flicker tool in the OpenFX panel to enhance the flickering effect in a scene involving flashing lights, taking the throbbing light effects further than they did on set. Not long ago, this type of enhancement would have been a more involved process in which the shots would likely be sent to a visual effects house. “We were able to do it as part of the grading, and we all thought it looked completely realistic. They definitely appreciated the ability to make little enhancements like that in the final grade, when everyone can see the scenes with the grade in context and on a big screen.”

Portions of the film involve scenes of the Losers’ Club as children, which consisted of newly shot material (not footage cut in from the production of the first IT). Nakamura applied a very subtle amount of Resolve’s mid-tone detail tool to those scenes, primarily to help immediately and subliminally orient the audience in time.

But the most elaborate use of the color corrector involved one short sequence in which Hader’s character, walking in a local park on a pleasant, sunny day, has a sudden, terrifying interaction with a very frightening character. The shots involved a significant amount of CGI and compositing work, which was completed at several effects houses. Muschietti was pleased with the effects work, but he wanted Nakamura to bring in an overall quality to the look of the scene that made it feel a bit more otherworldly.

Says Nakamura, “Andy described something that reminded me of the old-school, two-strip color process, where essentially anything red would get pushed into being a kind of magenta, and something blue or green would become a kind of cyan.”

Nakamura, who colored Martin Scorsese’s The Aviator (shot by Robert Richardson, ASC), had designed something at that point to create more of a three-strip look, but this process was more challenging, as it involved constraining the color palette to an even greater degree — without, of course, losing definition in the imagery.

With a bit of trial and error, Nakamura hit on the idea of using the splitter/combiner node, forcing the information from the green channel into the red and blue channels before recombining them in the output. He then used a second splitter/combiner node to control the output. “It’s almost like painting a scene with just two colors,” he explains. “Green grass and blue sky both become shades of cyan, while skin and anything with red in it goes into the magenta area.”
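
Purely as an illustration of that idea (and emphatically not the node graph Nakamura built in Resolve), a two-color-style transform can be sketched in a few lines of NumPy: keep red as the warm record and drive both the green and blue outputs largely from the green channel, which collapses grass and sky toward a shared cyan. The mix weights below are my own guesses, not values from the film.

import numpy as np

def two_strip_style(rgb):
    # rgb: float array, shape (H, W, 3), values in 0-1 (assumed)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    out_r = np.clip(0.95 * r + 0.15 * g, 0.0, 1.0)   # reds and skin stay warm
    out_g = np.clip(0.80 * g + 0.15 * b, 0.0, 1.0)   # green output driven mostly by green
    out_b = np.clip(0.75 * g + 0.20 * r, 0.0, 1.0)   # green pushed into blue; a touch of
                                                     # red nudges pure reds toward magenta
    return np.stack([out_r, out_g, out_b], axis=-1)

frame = np.random.rand(720, 1280, 3).astype(np.float32)   # stand-in for a plate
graded = two_strip_style(frame)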

The work became even more complex because the red-haired Pennywise also makes an appearance; it was important for him to retain his color, despite the rest of the scene going two-tone. Nakamura treated this element as a complex chroma key, using a second splitter/combiner node and significantly boosting the saturation just to isolate Pennywise while preventing the two-tone correction from affecting him.

When it came time to complete the pass for HDR Dolby Cinema — designed for specialty projectors essentially capable of displaying brighter whites and darker blacks than normal cinema projectors — Muschietti was particularly interested in the format’s treatment of dark areas of the frame.

“Just like in the first one,” Nakamura explains, “we were able to make use of Dolby Cinema to enhance suspense. People usually talk about how bright the highlights can be in HDR. But, when you push more light through the picture than you do for the P3 version, we also have the ability to make shadowy areas of the image appear even darker while keeping the details in those really dark areas very clear. This can be very effective in a movie like this, where you have scary characters lurking in the shadows.

“The color grade always plays some kind of role in a movie’s storytelling,” Nakamura sums up, “but this was a fun example of how work we did in the color grade really helped scare the audience.”

You can check out our Q&A with Nakamura about his work on the original IT.

DP Chat: Late Night cinematographer Matthew Clark

Directed by Nisha Ganatra, Amazon Studios’ comedy Late Night stars Emma Thompson as Katherine, a famous talk show host who hires Molly, her first-ever female writer (played by Mindy Kaling, who also wrote the screenplay).

Ganatra — whose rich directing background includes Transparent, Brooklyn Nine-Nine, Fresh Off the Boat and Chutney Popcorn — worked closely with her DP, Matthew Clark. The two were students together at NYU’s Tisch School of the Arts. Clark’s credits include Pitch Perfect 3, Up All Night and 30 Rock, among many others.

Matthew Clark

Clark has said that one of the toughest tasks in shooting comedy is to make it look and feel natural for the audience while allowing them the space to laugh. “The visuals must have some depth to them, but you need to let the actors work things out on screen,” he explains. “That was a big part of this film, knowing the way Nisha likes to work. She’s visual, but she’s also very actor-oriented, so one of the things I wanted to do was make our technical footprint as small as possible to give the actors room to work and to find those comic moments.”

For Late Night, Clark describes the look as “heightened naturalism.” He created a look book of images from still photographers, including Gregory Crewdson (artificial reality) and Robert Frank (super naturalism). He also worked with Light Iron colorists Corinne Bogdanowicz in Los Angeles and Sean Dunckley in New York to develop the look during prep. “There were three distinct kinds of looks we wanted,” says Clark. “One was for Katherine’s home, which was more elegant with warm tones. The television studio needed to be crisp and clean with more neutral tones. And for the writers’ room office, the look was more chaotic and business-like, with blue or cooler tones.”

We recently reached out to Clark with a few questions designed to learn more about his work on the film and his most recent collaboration with Ganatra.

How would you describe the overarching look of the film? What did you and the director want to achieve? You’ve described it as heightened naturalism. Can you expand on that?
Nisha and I wanted a sophisticated look without being too glamorous. We started off looking at the story, the locations and the ideas that go along with placing our characters in those spaces — both physically and emotionally. Comedy is not easy in that regard. It can be easy to go from joke to joke, but if you want something layered and something that lasts in the audience’s mind, you have to ground the film.

So we worked very hard to give Nisha and the actors space to find those moments. It meant less lighting and a more natural approach. We didn’t back away completely though. We still used camera and light to elevate the scenes and accentuate the mood; for example, huge backlight on the stage, massive negative space when we find out about Katherine’s betrayal or a smoke-filled room as Katherine gives up. That’s what I mean by “heightened naturalism.”

How did Ganatra describe the look she wanted?
Nisha and I started going over looks well before prep began. We talked photos and films. Two of our favorite photographers are William Eggleston and Philip-Lorca diCorcia. So I was ahead of the game when the official prep started. There was a definite shorthand. Because of that, I was able to go to Light Iron in LA and work out some basic looks for the film — overall color, highlights, shadow detail/color, grain, etc. We wanted three distinct looks. The rest would fall into place.

Katherine’s home was elegant and warm. The writers’ office was cool and corporate. The talk show’s studio was crisper and more neutral. As you know, even at that point, it’s just an idea unless you have your camera, lenses, etc.

Can you talk about the tools you chose?
Once prep started, I realized that we would need to shed some weight to accomplish our days, given very few extra days for rigging and the number of daily company moves. So we went without a generator and took advantage of the 5,000 ISO Panasonic VariCam 35 in conjunction with some old, beautiful Panavision UltraSpeeds and Super Speeds.

That lens choice came after I sat with Dan Sasaki and told him what I was going for. He knew I was a fan of older lenses, having used an old set of Baltars and similar Ultras on my last movie. I think they take the digital edge off the sensor and can provide beautiful anomalies and flares when used to achieve your look. Anyway, I think he emptied out the closets at the Woodland Hills location and let us test everything. It was very exciting for a DP.

What makes the process a smooth one for you?
I think what got me started, my artistic inspiration and my rules/process all stem from the same thing: the story, the telling, the showing and the emotion. The refined and the raw. It sounds simple, but for me, it is true.

Always try to serve the story; don’t get tied to the fancy new thing or the splashy piece of equipment. Just tell the story. Sometimes, those things coincide. But, always tell a story.

Where do you find inspiration for your work?
I think inspiration for each project comes from many different sources — music, painting, photography, a walk in the afternoon, a sound. That’s very vague, I know, but we have to be open to the world and draw from that. Obviously, it is crucial to spend time with the director — to breathe the same air, so to speak. That’s what puts me on the path and allows me to use the inspirations that fit the film.

Main Image: Matthew Clark and director Nisha Ganatra.

Foundry updates Nuke to version 12.0

Foundry has released Nuke 12.0, which introduces the next cycle of releases for the Nuke family. The Nuke 12.0 release brings improved interactivity and performance across the Nuke family, from additional GPU-enabled nodes for cleanup to a rebuilt playback engine in Nuke Studio and Hiero. Nuke 12.0 also integrates GPU-accelerated tools from Cara VR for camera solving, stitching and corrections, and updates to the latest industry standards.

OpenEXR

New features of Nuke 12.0 include:
• UI interactivity and script loading – This release includes a variety of optimizations throughout the software to improve performance, especially when working at scale. One key improvement delivers a much smoother experience, with noticeably better UI interactivity and reduced loading times when working in large scripts.
• Read and write performance – Nuke 12.0 includes focused improvements to OpenEXR read and write performance, including optimizations for several popular compression types (Zip1, Zip16, PIZ, DWAA, DWAB), improving render times and interactivity in scripts (see the sketch after this list). Red and Sony camera formats also see additional GPU support.
• Inpaint and EdgeExtend – These GPU-accelerated nodes provide faster and more intuitive workflows for common tasks, with fine detail controls and contextual paint strokes.
• Grid Warp Tracker – Extending the Smart Vector toolset in NukeX, this node uses Smart Vectors to drive grids for match moving, warping and morphing images.
• Cara VR node integration – The majority of Cara VR’s nodes are now integrated into NukeX, including a suite of GPU-enabled tools for VR and stereo workflows and tools that enhance traditional camera solving and cleanup workflows.
• Nuke Studio, Hiero and HieroPlayer Playback – The timeline-based tools in the Nuke family see dramatic improvements in playback stability and performance as a result of a rebuilt playback engine optimized for the heavy I/O demands of color-managed workflows with multichannel EXRs.
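
For teams that want to see which of those optimized EXR compression types their scripts already use, the small Nuke Python sketch below audits the Write nodes in an open script. It assumes the standard file_type and compression knobs on Write nodes; the exact compression label strings vary between Nuke versions, so treat the printed values as whatever your build reports.

# Minimal sketch: list every EXR Write node in the current script and its
# compression setting, so you can see which of the optimized types
# (Zip1, Zip16, PIZ, DWAA, DWAB) your renders already use.
# Run from Nuke's Script Editor.
import nuke

for node in nuke.allNodes('Write'):
    if node['file_type'].value() == 'exr':
        print('{0}: compression = {1}'.format(node.name(), node['compression'].value()))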

HPA Awards name 2019 creative nominees

The HPA Awards Committee has announced the nominees for the creative categories for the 2019 HPA Awards. The HPA Awards honor outstanding achievement and artistic excellence by the individuals and teams who help bring stories to life. Launched in 2006, the HPA Awards recognize outstanding achievement in color grading, editing, sound and visual effects for work in episodic, spots and feature films.

The winners of the 14th Annual HPA Awards will be announced at a gala ceremony on November 21 at the Skirball Cultural Center in Los Angeles.

The 2019 HPA Awards Creative Category nominees are:

Outstanding Color Grading – Theatrical Feature

-“First Man”

Natasha Leonnet // Efilm

-“Roma”

Steven J. Scott // Technicolor

-“Green Book”

Walter Volpatto // FotoKem

-“The Nutcracker and the Four Realms”

Tom Poole // Company 3

-“Us”

Michael Hatzer // Technicolor

-“Spider-Man: Into the Spider-Verse”

Natasha Leonnet // Efilm

 

Outstanding Color Grading – Episodic or Non-theatrical Feature

-“The Handmaid’s Tale – Liars”

Bill Ferwerda // Deluxe Toronto

-“The Marvelous Mrs. Maisel – Vote for Kennedy, Vote for Kennedy”

Steven Bodner // Light Iron

-“Game of Thrones – Winterfell”

Joe Finley // Sim, Los Angeles

-“I am the Night – Pilot”

Stefan Sonnenfeld // Company 3

-“Gotham – Legend of the Dark Knight: The Trial of Jim Gordon”

Paul Westerbeck // Picture Shop

-“The Man in the High Castle – Jahr Null”

Roy Vasich // Technicolor

 

Outstanding Color Grading – Commercial  

-Zara – “Woman Campaign Spring Summer 2019”

Tim Masick // Company 3

-Tiffany & Co. – “Believe in Dreams: A Tiffany Holiday”

James Tillett // Moving Picture Company

-Hennessy X.O. – “The Seven Worlds”

Stephen Nakamura // Company 3

-Palms Casino – “Unstatus Quo”

Ricky Gausis // Moving Picture Company

-Audi – “Cashew”

Tom Poole // Company 3

 

Outstanding Editing – Theatrical Feature

-“Once Upon a Time… in Hollywood”

Fred Raskin, ACE

-“Green Book”

Patrick J. Don Vito, ACE

-“Rolling Thunder Revue: A Bob Dylan Story by Martin Scorsese”

David Tedeschi, Damian Rodriguez

-“The Other Side of the Wind”

Orson Welles, Bob Murawski, ACE

-“A Star Is Born”

Jay Cassidy, ACE

 

Outstanding Editing – Episodic or Non-theatrical Feature (30 Minutes and Under)

-“Russian Doll – The Way Out”

Todd Downing

-“Homecoming – Redwood”

Rosanne Tan, ACE

-“Veep – Pledge”

Roger Nygard, ACE

-“Withorwithout”

Jake Shaver, Shannon Albrink // Therapy Studios

-“Russian Doll – Ariadne”

Laura Weinberg

 

Outstanding Editing – Episodic or Non-theatrical Feature (Over 30 Minutes)

-“Stranger Things – Chapter Eight: The Battle of Starcourt”

Dean Zimmerman, ACE, Katheryn Naranjo

-“Chernobyl – Vichnaya Pamyat”

Simon Smith, Jinx Godfrey // Sister Pictures

-“Game of Thrones – The Iron Throne”

Katie Weiland, ACE

-“Game of Thrones – The Long Night”

Tim Porter, ACE

-“The Bodyguard – Episode One”

Steve Singleton

 

Outstanding Sound – Theatrical Feature

-“Godzilla: King of the Monsters”

Tim LeBlanc, Tom Ozanich, MPSE // Warner Bros.

Erik Aadahl, MPSE, Nancy Nugent, MPSE, Jason W. Jennings // E Squared

-“Shazam!”

Michael Keller, Kevin O’Connell // Warner Bros.

Bill R. Dean, MPSE, Erick Ocampo, Kelly Oxford, MPSE // Technicolor

-“Smallfoot”

Michael Babcock, David E. Fluhr, CAS, Jeff Sawyer, Chris Diebold, Harrison Meyle // Warner Bros.

-“Roma”

Skip Lievsay, Sergio Diaz, Craig Henighan, Carlos Honc, Ruy Garcia, MPSE, Caleb Townsend

-“Aquaman”

Tim LeBlanc // Warner Bros.

Peter Brown, Joe Dzuban, Stephen P. Robinson, MPSE, Eliot Connors, MPSE // Formosa Group

 

Outstanding Sound – Episodic or Non-theatrical Feature

-“Chernobyl – 1:23:45”

Stefan Henrix, Stuart Hilliker, Joe Beal, Michael Maroussas, Harry Barnes // Boom Post

-“Deadwood: The Movie”

John W. Cook II, Bill Freesh, Mandell Winter, MPSE, Daniel Coleman, MPSE, Ben Cook, MPSE, Micha Liberman // NBC Universal

-“Game of Thrones – The Bells”

Tim Kimmel, MPSE, Onnalee Blank, CAS, Mathew Waters, CAS, Paula Fairfield, David Klotz

-“The Haunting of Hill House – Two Storms”

Trevor Gates, MPSE, Jason Dotts, Jonathan Wales, Paul Knox, Walter Spencer // Formosa Group

-“Homecoming – Protocol”

John W. Cook II, Bill Freesh, Kevin Buchholz, Jeff A. Pitts, Ben Zales, Polly McKinnon // NBC Universal

 

Outstanding Sound – Commercial 

-John Lewis & Partners – “Bohemian Rhapsody”

Mark Hills, Anthony Moore // Factory

-Audi – “Life”

Doobie White // Therapy Studios

-Leonard Cheshire Disability – “Together Unstoppable”

Mark Hills // Factory

-New York Times – “The Truth Is Worth It: Fearlessness”

Aaron Reynolds // Wave Studios NY

-John Lewis & Partners – “The Boy and the Piano”

Anthony Moore // Factory

 

Outstanding Visual Effects – Theatrical Feature

-“Avengers: Endgame”

Matt Aitken, Marvyn Young, Sidney Kombo-Kintombo, Sean Walker, David Conley // Weta Digital

-“Spider-Man: Far From Home”

Alexis Wajsbrot, Sylvain Degrotte, Nathan McConnel, Stephen Kennedy, Jonathan Opgenhaffen // Framestore

-“The Lion King”

Robert Legato

Andrew R. Jones

Adam Valdez, Elliot Newman, Audrey Ferrara // MPC Film

Tom Peitzman // T&C Productions

-“Alita: Battle Angel”

Eric Saindon, Michael Cozens, Dejan Momcilovic, Mark Haenga, Kevin Sherwood // Weta Digital

-“Pokémon Detective Pikachu”

Jonathan Fawkner, Carlos Monzon, Gavin Mckenzie, Fabio Zangla, Dale Newton // Framestore

 

Outstanding Visual Effects – Episodic (Under 13 Episodes) or Non-theatrical Feature

-“Game of Thrones – The Long Night”

Martin Hill, Nicky Muir, Mike Perry, Mark Richardson, Darren Christie // Weta Digital

-“The Umbrella Academy – The White Violin”

Everett Burrell, Misato Shinohara, Chris White, Jeff Campbell, Sebastien Bergeron

-“The Man in the High Castle – Jahr Null”

Lawson Deming, Cory Jamieson, Casi Blume, Nick Chamberlain, William Parker, Saber Jlassi, Chris Parks // Barnstorm VFX

-“Chernobyl – 1:23:45”

Lindsay McFarlane

Max Dennison, Clare Cheetham, Steven Godfrey, Luke Letkey // DNEG

-“Game of Thrones – The Bells”

Steve Kullback, Joe Bauer, Ted Rae

Mohsen Mousavi // Scanline

Thomas Schelesny // Image Engine

 

Outstanding Visual Effects – Episodic (Over 13 Episodes)

-“Hawaii Five-O – Ke iho mai nei ko luna”

Thomas Connors, Anthony Davis, Chad Schott, Gary Lopez, Adam Avitabile // Picture Shop

-“9-1-1 – 7.1”

Jon Massey, Tony Pizadeh, Brigitte Bourque, Gavin Whelan, Kwon Choi // FuseFX

-“Star Trek: Discovery – Such Sweet Sorrow Part 2”

Jason Zimmerman, Ante Dekovic, Aleksandra Kochoska, Charles Collyer, Alexander Wood // CBS Television Studios

-“The Flash – King Shark vs. Gorilla Grodd”

Armen V. Kevorkian, Joshua Spivack, Andranik Taranyan, Shirak Agresta, Jason Shulman // Encore VFX

-“The Orville – Identity: Part II”

Tommy Tran, Kevin Lingenfelser, Joseph Vincent Pike // FuseFX

Brandon Fayette, Brooke Noska // Twentieth Century Fox TV

 

In addition to the nominations announced today, the HPA Awards will present a small number of special awards. Visual effects supervisor and creative Robert Legato (The Lion King, The Aviator, Hugo, Harry Potter and the Sorcerer’s Stone, Titanic, Avatar) will receive the HPA Award for Lifetime Achievement.

Winners of the Engineering Excellence Award include Adobe, Epic Games, Pixelworks, Portrait Displays Inc. and LG Electronics. The recipient of the Judges Award for Creativity and Engineering, a juried honor, will be announced in the coming weeks. All awards will be bestowed at the HPA Awards gala.

For more information or to buy tickets to the 2019 HPA Awards, click here.

 

 

Uppercut ups Tyler Horton to editor

After spending two years as an assistant at New York-based editorial house Uppercut, Tyler Horton has been promoted to editor. This is the first internal talent promotion for Uppercut.

Horton first joined Uppercut in 2017 after a stint as an assistant editor at Whitehouse Post. Stepping up as editor, he has cut notable projects such as the recent Nike campaign “Letters to Heroes,” a series launched in conjunction with the US Open that highlights young athletes meeting their role models, including Serena Williams and Naomi Osaka. He has also cut campaigns for brands such as Asics, Hypebeast, Volvo and MoMA.

“From the beginning, Uppercut was always intentionally a boutique studio that embraced a collaborative of visions and styles — never just a one-person shop,” says Uppercut EP Julia Williams. “Tyler took initiative from day one to be as hands-on as possible with every project and we’ve been proud to see him really grow and refine his own voice.”

Horton’s love of film was sparked by watching sports reels and highlight videos. He went on to study film editing, then hit the road to tour with his band for four years before returning to his passion for film.

Behind the Title: Title Designer Nina Saxon

For 40 years, Nina Saxon has been a pioneer in the area of designing movie titles. She is still one of the few women working in this part of the industry.

NAME: Nina Saxon

COMPANY: Nina Saxon Design

CAN YOU DESCRIBE YOUR COMPANY?
We design main and end titles for film and television as well as branding for still and moving images.

WHAT’S YOUR JOB TITLE?
Title Designer

WHAT DOES THAT ENTAIL?
Making a moving introduction — like a book cover — that sets up a film. Or it might be simple type over picture. It also means watching a film and showing the director samples or storyboards of what I think should be used.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
That I’m one of only a few women in this field and have worked for 40 years, hiring others to help me only if necessary.

WHAT’S YOUR FAVORITE PART OF THE JOB?
When my project is done and I get to see my finished work up on the screen.

WHAT’S YOUR LEAST FAVORITE?
Waiting to be paid.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
Morning

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I’d probably be a psychologist.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
In 1975, I was in the film department at UCLA and became determined to work in the film business.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
The upcoming documentary on Paul McCartney called Here, There and Everywhere, and upcoming entertainment industry corporate logos that will be revealed in October. In the past, I did the movie Salt with Angelina Jolie and the movie Flight with Denzel Washington.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Working on the main title open for Forrest Gump.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My iPad, iPhone and computer

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I exercise a lot, five to six days a week; drink a nice glass of wine; try to get enough sleep; listen to music while meditating before sleep; and make sure I know what I need to do the next day before I go to bed.

Cinelab London adds sound mastering supervisor and colorist

Cinelab London, which provides a wide range of film and digital restoration services, has added two new creatives to its staff — sound mastering supervisor Jason Stevens and senior colorist Mike Davis.

Stevens brings with him over 20 years of experience in sound and film archive restoration. Prior to his new role, he spent his entire career on the archive and restoration team at Pinewood Studios, where he worked on many big films, including the recent Yesterday, Rocketman and Judy. His clients have included the BFI, Arrow Films, Studio Canal and Fabulous Films.

During his career, Stevens has also been involved in short films, commercials and broadcast documentaries, recently completing a three-year project for Adam Matthew, the award-winning digital publisher of unique primary source collections from archives around the world.

“We have seen Jason’s enviable skills and talents put to their best use over the six years we have worked together,” says Adrian Bull, co-founder and CEO of Cinelab London. “Now we’re thrilled to have him join our growing in-house team. Talents like Jason’s are rare. He brings a wealth of creative and technical knowledge, so we feel lucky to be able to welcome him to our film family.”

Colorist Mike Davis also joins from Pinewood Studios (following its recent closure) where he spent five years grading feature films and episodic TV productions and specializing in archive and restoration. He has graded over 100 restoration titles for clients such as BFI, Studio Canal and Arrow Films on projects such as A Fish Called Wanda, Rita, Sue & Bob Too and Waterworld.

Davis has worked with the world’s leading DPs, handling dailies and grading major feature films, including Mission: Impossible, Star Wars: Rogue One and Annihilation. He enjoys working on a variety of content, including short films, commercials, broadcast documentaries and independent DI projects. He recently worked on Adewale Akinnuoye-Agbaje’s Farming, which won Best British Film at the Edinburgh Film Festival in June.

Davis started his career at Ascent Media, assisting on film rushes, learning how to grade and operate equipment. By 2010, he segued into production, spending time on set and on location working on stereoscopic 3D projects and operating 3D rigs. Returning to grading film and TV at Company 3, Davis then strengthened his talents working in long format film at Pinewood Studios.

Main Image: (L-R) Stevens and Davis

Ziva VFX 1.7 helps simplify CG character creation


Ziva Dynamics has introduced Ziva VFX 1.7, designed to make CG character creation easier thanks to the introduction of Art Directable Rest Shapes (ADRS). The tool allows artists to make characters conform to any shape without losing their dynamic properties, opening up a faster path to cartoons and digi-doubles.

Users can now adjust a character’s silhouette with simple sculpting tools. Once the goal shape is established, Ziva VFX morphs the character to match it, maintaining all of the dynamics embedded before the change. ADRS works with any shape, however unnatural or precise, removing the difficulty of both complex setups and time-intensive corrective work.

The Art Directable Rest Shapes feature has been in development for over a year and was created in collaboration with several major VFX and feature animation studios. According to Ziva, while outputs and art styles differed, each group essentially requested the same thing: extreme accuracy and more control without compromising the dynamics that sell a final shot.

For feature animation characters not based on humans or nature, ADRS can rapidly alter and exaggerate key characteristics, allowing artists to be expressive and creative without losing the power of secondary physics. For live-action films, where the use of digi-doubles and other photorealistic characters is growing, ADRS can minimize the setup process when teams want to quickly tweak a silhouette or make muscles fire in multiple ways during a shot.

According to Josh diCarlo, head of rigging at Sony Pictures Imageworks, “Our creature team is really looking forward to the potential of Art Directable Rest Shapes to augment our facial and shot-work pipelines by adding quality while reducing effort. Ziva VFX 1.7 holds the potential to shave weeks of work off of both processes while simultaneously increasing the quality of the end results.”

To use Art Directable Rest Shapes, artists must duplicate a tissue mesh, sculpt their new shape onto the duplicate and add the new geometry as a Rest Shape over select frames. This process will intuitively morph the character, creating a smooth, novel deformation that adheres to any artistic direction a creative team can think up. On top of ADRS, Ziva VFX 1.7 will also include a new zRBFWarp feature, which can warp NURBS surfaces, curves and meshes.
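
The description above maps to a short Maya Python snippet. This is only a rough sketch: the standard maya.cmds calls are real, but the Ziva-specific command, flags and node/attribute names shown here are assumptions drawn from the text, not confirmed API, so check the Ziva VFX 1.7 documentation for the actual calls.

```python
# Rough sketch of the ADRS workflow. maya.cmds is standard Maya; the Ziva
# command ("zRestShape"), its flag and the node/attribute names are assumed.
import maya.cmds as cmds

tissue = "bicep_tissue"                                  # hypothetical Ziva tissue mesh
target = cmds.duplicate(tissue, name=tissue + "_target")[0]

# ...sculpt `target` with Maya's sculpting tools...

# Register the sculpt as an art-directable rest shape for the tissue
cmds.select(tissue, target)
cmds.zRestShape(add=True)                                # assumed command and flag

# Key the rest shape on over the frames where the new silhouette should apply
cmds.setKeyframe("zRestShape1.weights[0]", time=1001, value=0.0)  # hypothetical attr
cmds.setKeyframe("zRestShape1.weights[0]", time=1005, value=1.0)
```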

For a free 60-day trial, click here. Ziva VFX 1.7 is available now as an Autodesk Maya plugin for Windows and Linux users. Ziva VFX 1.7 can be purchased in monthly or yearly installments, depending on user type.

According to Michael Smit, chief commercial officer at Ziva Dynamics, “Ziva is working towards a new platform that will more easily allow us to deploy the software into other software packages, operating systems, and different network architectures. As an example we are currently working on our integrations into iOS and Unreal, both of which have already been used in limited release for production settings. We’re hopeful that once we launch the new platform commercially there will be an opportunity to deploy tools for macOS users.”

Using VFX to turn back time for Downton Abbey film

The feature film Downton Abbey is a continuation of the popular TV series, which followed the lives of the aristocratic Crawley family and their domestic help. Created by Julian Fellowes, the film is set in 1927, one year after the show’s final episode, and brings with it the exciting announcement of a royal visit to Downton by King George V and Queen Mary.

Framestore supported the film’s shoot and post, with VFX supervisor Kyle McCulloch and senior producer Ken Dailey leading the team. Following Framestore’s work creating wartime Britain for the BAFTA-nominated Darkest Hour, the VFX studio was approached to work directly with the film’s director, Michael Engler, to help ground the historical accuracy of the film.

Much of the original cast and crew returned, with a screenplay that required the new addition of a VFX department, “although it was important that we had a light footprint,” explains McCulloch. “I want people to see the credits and be surprised that there are visual effects in it.” The VFX work, spanning more than 170 shots, ranged from cleanups and seamless set transitions to extensive environment builds and augmentation.

Transporting the audience to an idealized interpretation of 1920s Britain required careful work on the structures of buildings, including the Abbey (Highclere Castle), Buckingham Palace and Lacock, a National Trust village in the Cotswolds that was used as the location for Downton’s village. Using the available photogrammetry and captured footage, the artists set to work restoring the period, adding layers of dirt and removing contemporary details from existing historical buildings.

Having changed so much since the early 20th century, King’s Cross Station needed a complete rebuild in CG, with digital train carriages, atmospheric smoke and large interior and exterior environment builds.

The team also helped with landscaping the idyllic grounds of the Abbey, replacing the lawn, trees and grass and removing power lines, cars and modern roads. Research was key, with the team collaborating with production designer Donal Woods and historical advisor Alastair Bruce, who came equipped with look books and photographs from the era. “A huge amount of the work was in the detail,” explains McCulloch. “We questioned everything; looking at the street surfaces, the type of asphalt used, down to how the gutters were built. All these tiny elements create the texture of the entire film. Everyone went through it with a very fine-tooth comb — every single frame.”

 

In addition, a long shot that followed the letter from the Royal Household from the exterior of the abbey, through the corridors of the domestic “downstairs” to the aristocratic “upstairs,” was a particular challenge. The scenes based downstairs — including in the kitchen — were shot at Shepperton Studios on a set, with the upstairs being captured on location at Highclere Castle. It was important to keep the illusion of the action all being within one large household, requiring Framestore to stitch the two shots together.

Says McCulloch, “It was brute force, it was months of work and I challenge anyone to spot where the seam is.”

Wildlife DP Steve Lumpkin on the road and looking for speed

For more than a decade, Steve Lumpkin has been traveling to the Republic of Botswana to capture and celebrate the country’s diverse and protected wildlife population. As a cinematographer and still photographer, Under Prairies Skies Photography‘s Lumpkin will spend a total of 65 days this year filming in the bush for his current project, Endless Treasures of Botswana.

Steve Lumpkin

It’s a labor of love that comes through in his stunning photographs, whether they depict a proud and healthy lioness washed with early-morning sunlight, an indolent leopard draped over a tree branch or a herd of elephants traversing a brilliant green meadow. The big cats hold a special place in Lumpkin’s heart, and documenting Botswana’s largest pride of lions is central to the project’s mission.

“Our team stands witness to the greatest conservation of the natural world on the planet. Botswana has the will and the courage to protect all things wild,” he explains. “I wanted to fund a not-for-profit effort to create both still images and films that would showcase The Republic of Botswana’s success in protecting these vulnerable species. In return, the government granted me a two-year filming permit to bring back emotional, true tales from the bush.”

Lumpkin recently graduated to shooting 4K video in the bush in Apple ProRes RAW, using a Sony FS5 camera and an Atomos Inferno recorder. He brings the raw footage back to his US studio for post, working in Apple Final Cut Pro on an iMac 5K and employing a variety of tools, including Color Grading Central and Neat Video.

Leopard

Until recently, Lumpkin was hitting a performance snag when transferring files from his QNAP TBS 882T NAS storage system to his iMac Pro. “I was only getting read speeds of about 100MB/sec over Thunderbolt, so editing 4K footage was painful,” he says. “At the time, I was transitioning to ProRes RAW, and I knew I needed a big performance kick.”

On the recommendation of Bob Zelin, video engineering consultant and owner of Rescue 1, Lumpkin installed Sonnet’s Solo10G Thunderbolt 3 adapter. The Solo10G uses the 10GbE standard to connect computers via Ethernet cables to high-speed infrastructure and storage systems. “Instantly, I jumped to a transfer rate of more than 880MB per second, a nearly tenfold throughput increase,” he says. “The system just screams now – the Solo10G has accelerated every piece of my workflow, from ingest to 4K editing to rendering and output.”

“So many colleagues I know are struggling with this exact problem — they need to work with huge files and they’ve got these big storage arrays, but their Thunderbolt 2 or 3 connections alone just aren’t cutting it.”

With Lumpkin, everything comes down to the wildlife. He appreciates any tools that help streamline his ability to tell the story of the country and its tremendous success in protecting threatened species. “The work we’re doing on behalf of Botswana is really what it’s all about — in 10 or 15 years, that country might be the only place on the planet where some of these animals still exist.

“Botswana has the largest herd of elephants in Africa and the largest group of wild dogs, of which there are only about 6,000 left,” says Lumpkin. “Products like Sonnet’s Solo10G, Final Cut, the Sony FS5 camera and Atomos Inferno, among others, help our team celebrate Botswana’s recognition as the conservation leader of Africa.”

Flavor adds Joshua Studebaker as CG supervisor

Creative production house Flavor has added CG supervisor Joshua Studebaker to its Los Angeles studio. For more than eight years, Studebaker has been a freelance CG artist in LA, specializing in design, animation, dynamics, lighting/shading and compositing via Maya, Cinema 4D, Vray/Octane, Nuke and After Effects.

A frequent collaborator with Flavor and its brand and agency partners, Studebaker has also worked with Alma Mater, Arsenal FX, Brand New School, Buck, Greenhaus GFX, Imaginary Forces and We Are Royale in the past five years alone. In his new role with Flavor, Studebaker oversees visual effects and 3D services across the company’s global operations. Flavor’s Chicago, Los Angeles and Detroit studios offer color grading, VFX and picture finishing using tools like Autodesk Lustre and Flame Premium.

Flavor creative director Jason Cook also has a long history of working with Studebaker and deep respect for his talent. “What I love most about Josh is that he is both technical and a really amazing artist and designer. Adding him is a huge boon to the Flavor family, instantly elevating our production capabilities tenfold.”

Flavor has always emphasized creativity as a key ingredient, and according to Studebaker, that’s what attracted him. “I see Flavor as a place to grow my creative and design skills, as well as help bring more standardization to our process in house,” he explained. “My vision is to help Flavor become more agile and more efficient and to do our best work together.”

Pace Pictures and ShockBox VFX formalize partnership

Hollywood post house Pace Pictures and bicoastal visual effects, animation and motion graphics specialist ShockBox VFX have formed a strategic alliance for film and television projects. The two specialist companies provide studios and producers with integrated services encompassing all aspects of post in order to finish any project efficiently, cost-effectively and with greater creative control.

The agreement formalizes a successful collaborative partnership that has been evolving over many years. Pace Pictures and ShockBox collaborated informally in 2015 on the independent feature November Rule. Since then, they have teamed up on numerous projects, including, most recently, the Hulu series Veronica Mars, Lionsgate’s 3 From Hell and Universal Pictures’ Grand-Daddy Day Care and Undercover Brother 2. Pace provided services including creative editorial, color grading, editorial finishing and sound mixing. ShockBox contributed visual effects, animation and main title design.

“We offer complementary services, and our staff have developed a close working rapport,” says Pace Pictures president Heath Ryan. “We want to keep building on that. A formal alliance benefits both companies and our clients.”

“In today’s world of shrinking budgets and delivery schedules, the time for creativity in the post process can often suffer,” adds ShockBox founder and director Steven Addair. “Through our partnership with Pace, producers and studios of all sizes will be able to maximize our integrated VFX pipeline for both quality and volume.”

As part of the agreement, ShockBox will move its West Coast operations to a new facility that Pace plans to open later this fall. The two companies have also set up an encrypted, high-speed data connection between Pace Pictures Hollywood and ShockBox New York, allowing them to exchange project data quickly and securely.

Michael Engler on directing Downton Abbey movie

By Iain Blair

If, like millions of other fans around the world, you still miss watching the Downton Abbey series, don’t despair. The acclaimed show is back as a new feature film, still showcasing plenty of drama, nostalgia, glamour and good British values with every frame.

So sit back in a comfy armchair, grab a cup of tea (assuming you don’t have servants to fetch it for you) and forget about the stresses of modern life. Just let Downton Abbey take you back to a simpler time of relative innocence and understated elegance.

Director Michael Engler

The film reunites the series’ cast (including Hugh Bonneville, Jim Carter, Michelle Dockery, Elizabeth McGovern, Maggie Smith) and also adds some new members. The film starts with a simple but effective plot device, a visit to the Great House from the most illustrious guests the Crawley family could ever hope to entertain — their Majesties King George V and Queen Mary. With a dazzling parade and lavish dinner to orchestrate, Mary (Dockery), now firmly at the reins of the estate, faces the greatest challenge to her tenure as head of Downton.

At the film’s helm was TV and theater director Michael Engler, whose diverse credits include 30 Rock, Empire, Deadwood, Nashville, Unbreakable Kimmy Schmidt and several episodes of the series Downton Abbey.

I recently talked to him about making the film, its durable appeal and the workflow.

You directed one episode in the fifth season of the TV show and then a few in the final season. How daunting was it making a film of such a beloved show?
It was very daunting, especially as people have such high expectations. They love it so much, so you feel you really have to deliver. You can’t disappoint them. But basically, you’re pretty lucky in life and in your career when those are your big problems. Then you also have the advantage of this amazing cast, who know their characters so well, and Julian (Fellowes, the series creator), who loves writing these characters. We’ve all developed such a good working rhythm together, and all that really helped so much. Because of the huge fan base, it’s not like so many projects where you’re trying to get audiences to pay attention. They’re already very invested in it, and I’d far rather have that than the worry of directing an unknown project.

What were the big differences between shooting the series and the movie?
The big one was the need to ramp it up, even though the TV series was always ambitious cinematically, and we knew that the template would be a good one to build on. The DNA of the show was a good foundation. For instance, one of the things we discovered very quickly was that even intimate scenes of a few people in a bedroom or a drawing room could play at full scale. We could hold the shots longer and see everyone’s reactions in a big wide shot. We didn’t have to emphasize plot points with a lot of cutting as you’d do in TV. We could let the rooms play in full size for a while, and that automatically made it all feel bigger and richer. It almost feels like you’re in those rooms, and you get the whole visual sweep of their grandeur.

Then the royal visit gave us some tremendous opportunities with all the lavish set pieces — the arrival, the banquet, the parade, the ball — to really show them fully and showcase the huge scale of them. In the series, more often than not, you’d imply the sheer scale of such events and focus more on details and pieces of them. I think the series was more realistic and objective in many ways, more “on the ground” and real and undecorated. It is more understated. The film is far more sweeping, with more camera movement. It’s elevated for the big screen.

Was it a plus being an American? Did it give you a fresh perspective?
I was already such a big fan when I began working on the series, and I’d seen many of the episodes several times, so I did feel I knew it and understood it well. But then there was a lot of the protocol and etiquette that I didn’t know, so I studied and learned as much as I could and consulted with a historical advisor. After that, I quickly felt very much at home in this world.

How tough was it juggling so many familiar characters — along with some new ones?
That was difficult, but mainly because of all the filming logistics and schedules. We had people flying in from all over — India, New York, California — maybe just for a day or two, so it was a big logistical puzzle to make it work out.

The film looks gorgeous. You used DP Ben Smithard, who shot Blinded by the Light and Goodbye Christopher Robin. Can you talk about how you collaborated with him on the look?
We wanted it to have a big, rich film feel and look, so we shot it in 6K. And Ben does such beautiful work with the lighting, which really helped take the edge off the digital look. He’s just so good at capturing the romance of all those great sweeping period films and the very different look between upstairs — which is all elegant, sparkly and light-filled — and downstairs, which is rougher, less refined and darker. There are a lot of tonal shifts, so we worked on all those visual contrasts, both in camera and in post and the DI.

L-R: Cinematographer Ben Smithard, director Michael Engler and producer Gareth Neame.

Where did you post?
We did all the editing at Hireworks in London with editor Mark Day and his team, and sound at Hackenbacker Studios and Abbey Road Studios, where we recorded with an orchestra twice as big as any we had on the series, which also elevated all the sound and music. Framestore did all the VFX.

Do you like the post process?
I absolutely love it. I like shooting, but it’s so stressful because of the ticking clock and a huge crew waiting while we fix something and the light is going down. Then you get into post, and it’s stress-free in that sense, and you can look at what you have and start playing with it and really be creative. You can leave for a few days and have a fresh perspective on it. You can’t do that on the set.

Talk about editing with Mark Day. How did that work?
We didn’t start cutting until after we wrapped, and we experimented quite a lot, trying to find the best way to tell all the stories. For instance, we took one scene that was originally early on, and moved it five scenes later, and it changed the entire meaning of it. So we tried a lot of that sort of thing. Then there are all the other post elements that work on a subconscious level, especially once you cut in all the tiny background sounds — voices in the distance, footsteps and so on, that help create and add to the reality of the visuals.

What were the big editing challenges?
The big one was taking the rhythms of the series and adjusting them for the film. In the series, it was far more broken up because all the different stories didn’t have to be finished by the end of an episode. There would be some cliffhangers while some would be resolved, so we could hop around a lot and break up scenes. But on this we found it was far more effective to stay with a storyline and let longer arcs play out and finish. That way the audiences would know exactly where they were if we left one story, went to another and then came back. Mark was very clear about that, keeping the main story moving forward all the time, while juggling all the side stories.

What was involved in all the visual effects?
More than you’d think. We had a big set piece at King’s Cross train station, which we actually shot at a tiny two-track station in the north of England. Framestore then created everything around it and built the whole world, and they did an amazing job. Then we had the big military parade, and they did a lot of work on the surroundings and the pub overlooking it. And, of course, we had a ton of cleanup and replacement background work, as it’s a period piece.

Talk about the importance of sound in this film.
As they say, it’s half the movie, and our supervising sound editor Nigel Heath was so thorough and detailed in his work. He also really understands how sound can help storytelling. In the scene where Molesley embarrasses himself, we played around with it a lot, thinking maybe it needed some music and so on. But when Nigel started on it, he kept it totally silent except for the sound of a ticking clock — and it was so perfect. It made the moment and silence that much more vivid, along with underscoring how time was dragging on. It heightened the whole thing. Sound is also so important downstairs in the house, where you feel this constant activity and work going on in every room, and all the small sounds and noises add so much weight and reality.

Where did you do the DI and how important is it to you?
We did the digital intermediate at Molinare with Gareth Spensley, and it’s hugely important to me, though the DP’s more involved. I let them do their work and then went through it with them and gave my notes, and we got quite detailed.

Did the film turn out the way you hoped?
Much better! I was worried it might feel too disjointed and not unified enough since there were so many plotlines and characters and tones to deal with. But in the end it all flowed together so well.

How do you explain the huge global appeal of Downton Abbey?
I think that, apart from the great acting and fascinating characters, the themes are so universal. It’s like a workplace drama and a family drama with all the complex relationships, and you get romance, emotion, suspense, comedy and then all the great costumes and beautiful locations. The nostalgia appeals to so many people, and the Brits do these period dramas just better than anyone else.

What’s next? Would you do another Downton movie?
I’d love to, if it happens. They’re all such lovely people to work with. Making movies is hard, but this was just such a wonderful experience.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

FotoKem expands post services to Santa Monica

FotoKem is now offering its video post services in Santa Monica. This provides an accessible location for those working on the west side of LA, as well as access to the talent from its Burbank and Hollywood studios.

Designed to support an entire pipeline of services, the FotoKem Santa Monica facility is housed just off the 10 freeway, above FotoKem’s mixing and recording studio Margarita Mix. For many projects, color grading, sound mixing and visual effects reviews often take place in multiple locations around town. This facility offers showrunners and filmmakers a new west side post production option. Additionally, the secure fiber network connecting all FotoKem-owned locations ensures feature film and episodic finishing work can take place in realtime among sites.

FotoKem Santa Monica features a DI color grading theater, an episodic and commercial color suite, an editorial conform bay and a visual effects team — all tied to the comprehensive offerings at FotoKem’s main Burbank campus, Keep Me Posted’s episodic finishing facility and Margarita Mix Hollywood’s episodic grading suites. FotoKem’s entire roster of colorists is available to collaborate with filmmakers to ensure their vision is supported throughout the process. Recent projects include Shazam!, Vice, Aquaman, The Dirt, Little and Good Trouble.

Review: Accusonus Era 4 Pro audio repair plugins

By Brady Betzel

With each passing year it seems that the job title of “editor” changes. It’s not just someone responsible for shaping the story of the show but also for certain aspects of finishing, including color correction and audio mixing.

In the past, when I was offline editing more often, I learned just how important sending a properly mixed and leveled offline cut was. Whether it was a rough cut, fine cut or locked cut — the mantra to always put my best foot forward was constantly repeating in my head. I am definitely a “video” editor but, as I said, with editors becoming responsible for so many aspects of finishing, you have to know everything. For me this means finding ways to take my cuts from the middle of the road to polished with just a few clicks.

On the audio side, that means using tools like the Accusonus Era 4 Pro audio repair plugins. Accusonus advertises the Era 4 plugins as one-button solutions — and they really are that easy — but you can also fine-tune the audio if you like. The Era 4 Pro plugins work not only with a typical DAW, like Pro Tools 12.x and higher, but also within nonlinear editors like Adobe Premiere Pro CC 2017 or higher, FCP X 10.4 or higher and Avid Media Composer 2018.12.

Digging In
Accusonus’ Era 4 Pro Bundle will cost you $499 for the eight plugins included in its audio repair offering. This includes De-Esser Pro, De-Esser, Era-D, Noise Remover, Reverb Remover, Voice Leveler, Plosive Remover and De-Clipper. There is also an Era 4 (non-pro) bundle for $149 that includes everything mentioned previously except for De-Esser Pro and Era-D. I will go over a few of the plugins in this review and why the Pro bundle might warrant the additional $350.

I installed the Era 4 Pro Bundle on a Wacom MobileStudio Pro tablet that is a few years old but can still run Premiere. I did this intentionally to see just how light the plugins would run. To my surprise my system was able to toggle each plug-in off and on without any issue. Playback was seamless when all plugins were applied. Now I wasn’t playing anything but video, but sometimes when I do an audio pass I turn off video monitoring to be extra sure I am concentrating on the audio only.

De-Esser
First up is the De-Esser, which tackles harsh sounds resulting from “s,” “z,” “ch,” “j” and “sh.” So if you run into someone with some ear-piercing “s” pronunciations, apply the De-Esser plugin and choose from narrow, normal or broad. Once you find which mode helps remove the harsh sounds (otherwise known as sibilance), you can enable “intense” to add more processing power (though doing this can potentially require rendering). In addition, there is an output gain setting and a “Diff” mode that plays only the parts De-Esser is affecting. If you want to just try the “one button” approach, the Processing dial is really all you need to touch. In realtime, you can hear the sibilance diminish. I personally like a little reality in my work, so I might dial the processing to the “perfect” amount and then back it off 5% or 10%.

De-Esser Pro
Next up is De-Esser Pro. This one is for the editor who wants one-touch processing but also the ability to dive into the specific audio spectrum being affected and see how the falloff is being performed. In addition, there are presets such as male vocals, female speech, etc., to jump immediately to where you need help. I personally find the De-Esser Pro more useful than the De-Esser because I can really shape the plugin. However, if you don’t want to be bothered with the more intricate settings, the De-Esser is still a great solution. Is it worth the extra $350? I’m not sure, but combining it with Era-D might make you want to shell out the cash for the Era 4 Pro bundle.

Era-D
Speaking of Era-D, it’s the only plugin not described by its own title, funnily enough; it is a joint de-noise and de-reverberation plugin. However, Era-D goes way beyond simple hum or hiss removal. With Era-D, you get “regions” (I love saying that because of the audio mixers who constantly talk in regions and not timecode) that can not only be split at certain frequencies — with a different percentage of the plugin applied to each region — but also have individual frequency cutoff levels.

Something I had never heard of before is the ability to use two mics to fix a suboptimal recording on one of the two mics, which can be done in the Era-D plugin. There is a signal path window that you can use to mix the amount of de-noise and de-reverb. It’s possible to only use one or the other, and you can even run the plugin in parallel or cascade. If that isn’t enough, there is an advanced window with artifact control and more. Era-D is really the reason for that extra $350 between the standard Era 4 bundle and the Era 4 Bundle Pro — and it is definitely worth it if you find yourself removing tons of noise and reverb.

Noise Remover
My second favorite plugin in the Era 4 Bundle Pro is the Noise Remover. Not only is the noise removal pretty high-quality (again, I dial it back to avoid robot sounds), but it is painless. Dial in the amount of processing and you are 80% done. If you need to go further, then there are five buttons that let you focus where the processing occurs: all-frequencies (flat), high frequencies, low frequencies, high and low frequencies and mid frequencies. I love clicking the power button to hear the differences — with and without the noise removal — but also dialing the knob around to really get the noise removed without going overboard. Whether removing noise in video or audio, there is a fine art in noise reduction, and the Era 4 Noise Removal makes it easy … even for an online editor.

Reverb Remover
The Reverb Remover operates very much like the Noise Remover, but instead of noise, it removes echo. Have you ever gotten a line of ADR clearly recorded on an iPhone in a bathtub? I’ve worked on my fair share of reality, documentary, stage and scripted shows, and at some point, someone will send you this — and then the producers will wonder why it doesn’t match the professionally recorded interviews. With Era 4 Noise Remover, Reverb Remover and Era-D, you will get much closer to matching the audio between different recording devices than without plugins. Dial that Reverb Remover processing knob to taste and then level out your audio, and you will be surprised at how much better it will sound.

Voice Leveler
To level out your audio, Accusonus has also included the Voice Leveler, which does just what it says: It levels your audio so you won’t get one line blasting in your ears while the next one drops off because the speaker backed away from the mic. Much like the De-Esser, you get a waveform visual of what is being affected in your audio. In addition, there are two modes, tight and normal, to help normalize your dialog — Accusonus describes tight as a more focused “radio” sound, much more distinctive than a normal interview conversation. The Emphasis button helps address issues when the speaker turns away from the microphone and introduces tonal problems, and there is also a simple breath control option.

De-Clipper and Plosive Remover
The final two plugins in the Era 4 Bundle Pro are the Plosive Remover and De-Clipper. De-Clipper is an interesting little plugin that tries to restore lost audio due to clipping. If you recorded audio at high gain and it came out horribly, then it’s probably been clipped. De-Clipper tries to salvage this clipped audio by recreating overly saturated audio segments. While it’s always better to monitor your audio recording on set and re-record if possible, sometimes it is just too late. That’s when you should try De-Clipper. There are two modes: normal/standard use and one for trickier cases that take a little more processing power.

The final plugin, Plosive Remover, focuses on artifacts typically caused by “p” and “b” sounds. This can happen if no pop screen is used and/or if the person being recorded is too close to the microphone. There are two modes: normal and extreme. Subtle pops are easily repaired in normal mode, but extreme pops will definitely need the extreme mode. Much like De-Esser, Plosive Remover has an audio waveform display to show what is being affected, while the “Diff” mode plays back only those affected parts. However, if you just want to stick to that “one button” mantra, the Processing dial is really all you need to mess with. The Plosive Remover is another amazing plugin that, when you need it, does a great job quickly and easily.

Summing Up
In the end, I downloaded all of the Accusonus audio demos found on the Era 4 website, which is also where you can grab the installers if you want to take part in the 14-day trial. I purposely limited my audio editing time to under one minute per clip and plugin to see what I could do. Check out my work with the Accusonus Era 4 Pro audio repair plugins on YouTube and see if anything jumps out at you. In my opinion, the Noise Remover, Reverb Remover and Era-D are worth the price of admission, but each plugin from Accusonus does great work.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and The Shop. He is also a member of the Producers Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Colorist Jimmy Hsu joins Encore Vancouver

Seasoned colorist Jimmy Hsu has joined Encore Vancouver, bringing with him experience in content creation and color science. He comes to Encore Vancouver from Side Street Post Production, where he began as an online editor in 2012 before focusing on color grading.

Hsu’s work spans live action and animated projects across genres, including features, video game cinematics and commercials for clients such as Universal Studios, Disney and Lifetime.

Upon graduating from British Columbia’s Simon Fraser University with a bachelor’s in interactive arts and film production, Hsu held various roles in production and post production, including as a creative editor and motion graphics artist. Having edited more than a hundred movie trailers, Hsu is well-versed in project deliverables and specs, which helps inform his color process. He also draws from his artistic background, leveraging the latest capabilities of Blackmagic DaVinci Resolve to incorporate significant compositing and visual effects work into his projects.


Senior colorist Maria Carretero joins Nice Shoes

NYC-based post studio Nice Shoes has hired senior colorist Maria Carretero, who brings nearly two decades of global color grading experience. Her portfolio includes a wide range of feature films, short films, music videos and commercials for brands like Apple, Jeep, Porsche, Michael Kors, Disney and Marriott, among many others. She will be based at Nice Shoes’ NYC studio, also working across the company’s Boston, Chicago, Toronto and Minneapolis spaces and through its network of remote partnerships globally.

She comes to Nice Shoes from Framestore in Chicago, where she spent nearly two years establishing relationships with agencies such as BBDO, FCB, DDB, Leo Burnett Chicago and Media Arts Lab LA.

Carretero is originally from Spain, where she received an education in fine arts. She soon discovered the creative possibilities in digital color grading, quickly establishing a career as an international artist. Her background in painting and her natural eye for nuanced visuals are the tools that help her maximize her clients’ creative visions. Carretero’s ability to convey a brand story through her work has earned her a long list of awards, including Cannes Lions and a Clio.

Carretero’s recent work includes Jeep’s Recalculating, Disney’s You Can Fly and Bella Notte, Porsche’s The Fix and Avocados From Mexico’s Top Dog spot for Super Bowl 2019.

“Nice Shoes brings together the expertise backed by 20 years of experience with a personal approach that really celebrates female talent and collaboration,” adds Carretero. “I’m thrilled to be joining a team that truly supports the creative exploration process that color takes in storytelling. I’ve always wanted to live in New York. Throughout my whole life, I visited this city again and again and was fascinated by the diversity, the culture, and incredible energy that you breathe in as you walk the city’s streets.”

Behind the Title: Landia EP/MD Juan Taylor

“My role’s very behind-the-scenes… almost invisible. If everything goes well, which is my job, it’s like I was never there.”

Name: Juan Taylor

Company: Landia

Can you describe your company?
Landia is a production house, but we prefer to think of ourselves as a boutique network. We have offices in Los Angeles, Buenos Aires, Madrid, Barcelona, São Paulo and Mexico. We shoot and collaborate with talented teams around the world.

Devour spot

What’s your job title?
Executive Producer/Managing Partner

What does that entail?
It’s everything and nothing. My role’s very behind-the-scenes… almost invisible. If everything goes well, which is my job, it’s like I was never there. If something doesn’t go well, that’s when the phone starts ringing. It’s like being a conductor at the symphony.

What would surprise people about what falls under that title?
No matter how established you are, a producer must never stop learning. Whenever I have the time, I try to immerse myself in the latest industry news and trends. How can you expect to innovate if you don’t know what’s already out there?

What have you learned over the years about running a business?
The importance of balance and preparation.

A lot of it must be about trying to keep employees and clients happy. How do you balance that?
The key is to surround yourself with the right people on both ends. As an executive producer, there will be times when there are too many cooks in the kitchen, and your best assets will always be the team members you hand-picked for the job. This little support system will help you handle situations when egos clash during business or artistic decisions.

What’s your favorite part of the job?
Ironically, the most difficult one, which is working with different types of personalities. I like discovering and developing talent. For example, we run this platform called The Movement that guides young, emerging visual storytellers under Landia’s banner, not only giving them an injection of industry life but also room to spread their creative wings. The more tedious parts of the job — such as putting together timelines, treatments and logistics — also give me joy because I’m converting broad ideas into something tangible and profound.

What’s your least favorite?
The misconception that producers aren’t creative. I’m very much involved with brainstorming, shooting and final cut discussions. And guess what? Most of the problems that come to me need creative solutions.

Canon spot

If you didn’t have this job, what would you be doing instead?
I have a wide range of interests and like to think that if I didn’t work in this field I could’ve been an architect, musician, restaurant owner, real estate agent or even urban developer. There are too many to count.

Can you name some recent clients?
We’re really proud of the work we did with Canon this year, as well as the Devour spot we did for Super Bowl LIII.

Name three pieces of technology you can’t live without.
Although I love my phone, I’m a bit old-school. I don’t have Facebook, Twitter or Instagram, and I’m consciously trying to avoid becoming too tech-dependent.

AJA adds HDR Image Analyzer 12G and more at IBC

AJA will soon offer the new HDR Image Analyzer 12G, bringing 12G-SDI connectivity to its realtime HDR monitoring and analysis platform developed in partnership with Colorfront. The new product streamlines 4K/Ultra HD HDR monitoring and analysis workflows by supporting the latest high-bandwidth 12G-SDI connectivity. The HDR Image Analyzer 12G will be available this fall for $19,995.

HDR Image Analyzer 12G offers waveform, histogram and vectorscope monitoring and analysis of 4K/Ultra HD/2K/HD, HDR and WCG content for broadcast and OTT production, post, QC and mastering. It also features HDR-capable monitor outputs that go beyond HD resolutions, offer color accuracy and make it possible to configure layouts to place the preferred tool where needed.

“Since its release, HDR Image Analyzer has powered HDR monitoring and analysis for a number of feature and episodic projects around the world. In listening to our customers and the industry, it became clear that a 12G version would streamline that work, so we developed the HDR Image Analyzer 12G,” says Nick Rashby, president of AJA.

AJA’s video I/O technology integrates with HDR analysis tools from Colorfront in a compact 1-RU chassis to bring HDR Image Analyzer 12G users a comprehensive toolset to monitor and analyze HDR formats, including PQ (Perceptual Quantizer) and hybrid log gamma (HLG). Additional feature highlights include:

● Up to 4K/Ultra HD 60p over 12G-SDI inputs, with loop-through outputs
● Ultra HD UI for native resolution picture display over DisplayPort
● Remote configuration, updates, logging and screenshot transfers via an integrated web UI
● Remote Desktop support
● Support for display referred SDR (Rec.709), HDR ST 2084/PQ and HLG analysis
● Support for scene referred ARRI, Canon, Panasonic, Red and Sony camera color spaces
● Display and color processing lookup table (LUT) support
● Nit levels and phase metering
● False color mode to easily spot pixels out of gamut or brightness
● Advanced out-of-gamut and out-of-brightness detection with error intolerance
● Data analyzer with pixel picker
● Line mode to focus a region of interest onto a single horizontal or vertical line
● File-based error logging with timecode
● Reference still store

At IBC 2019, AJA also showed new products and updates designed to advance broadcast, production, post and pro AV workflows. On the stand were the Kumo 6464-12G for routing and the newly shipping Corvid 44 12G developer I/O models. AJA has also introduced the FS-Mini utility frame sync Mini-Converter and three new OpenGear-compatible cards: OG-FS-Mini, OG-ROI-DVI and OG-ROI-HDMI. Additionally, the company previewed Desktop Software updates for Kona, Io and T-Tap; Ultra HD support for IPR Mini-Converter receivers; and FS4 frame synchronizer enhancements.

IBC 2019 in Amsterdam: Big heads in the cloud

By David Cox

IBC 2019 kicked off with an intriguing announcement from Avid. The company entered into a strategic alliance with Microsoft and Disney’s Studio Lab to enable remote editorial workflows in the cloud.

The interesting part for me is how this affects the perception of post producing in the cloud, rather than the actual technology of it. It has been technically possible to edit remotely in the cloud for some time — either by navigating the Wild West interfaces of the principal cloud providers to “spin up” a remote computer, connect some storage and content, and then run an edit app, or by using a product such as Blackbird that takes care of all that for you. No doubt the collaboration with Disney will produce products and services within an ecosystem that makes the technical use of the cloud invisible.
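
For readers who haven’t tried the DIY route, the “Wild West” version amounts to a few cloud API calls. The sketch below uses AWS via boto3 purely as an example provider; the image ID, instance type and sizes are placeholders, and a real remote-edit setup also needs GPU drivers, a remote display protocol and sensible security groups.

```python
# Illustrative only: the "spin up a machine, attach storage, run an edit app"
# path described above, using AWS (boto3) as one example provider.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# 1. Spin up a remote workstation (image ID and instance type are placeholders)
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="g4dn.4xlarge",
    MinCount=1,
    MaxCount=1,
    Placement={"AvailabilityZone": "eu-west-1a"},
)
instance_id = resp["Instances"][0]["InstanceId"]
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

# 2. Attach block storage for media and project files
vol = ec2.create_volume(AvailabilityZone="eu-west-1a", Size=2048, VolumeType="gp2")
ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
ec2.attach_volume(VolumeId=vol["VolumeId"], InstanceId=instance_id, Device="/dev/sdf")

# 3. The edit application then runs on that instance and is viewed over a
#    remote display protocol -- the part that turnkey products hide from you.
```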

Avid press conference

However, what interests me is that arguably, the perception of post producing in the cloud is instantly changed. The greatest fear of post providers relates to the security of their clients’ intellectual property. Should a leak ever occur, to retain the client (or indeed avoid a catastrophic lawsuit), the post facility would have to make a convincing argument that security protocols were appropriate. Prior to the Disney/Avid/Microsoft Azure announcement, the part of that argument where the post houses say “…then we sent your valuable intellectual property to the cloud” caused a sticky moment. However, following this announcement, there has been an inherent endorsement by the owner of one of the most valuable IP catalogs (Disney) that post producing in the cloud is safe — or at least will be.

Cloudy Horizons
At the press conference where Avid made its Disney announcement, I asked whether the proposed cloud service would be a closed, Avid-only environment or an open platform that includes other vendors. I pointed out that many post producers also use non-Avid products for various aspects of their work, from color grading to visual effects. Despite my impertinence in mentioning competitors (even though Avid had kindly provided lunch), CEO Jeff Rosica provided a well-reasoned and practical response. To paraphrase, while he did not explicitly say the proposed ecosystem would be closed, he suggested that, from a commercial viewpoint, other vendors would more likely want to make their own cloud offerings.

Rosica’s comments suggest that post houses can expect many clouds on their horizons from various application developers. The issue will then be how these connect to make coherent and streamlined workflows. This is not a new puzzle for post people to solve — we have been trying to make local systems from different manufacturers talk to each other for years, with varying degrees of success. Making manufacturers’ various clouds work together would be an extension of that endeavor. Hopefully, manufacturers will use their own migrations to the cloud to further open their systems, rather than see it as an opportunity to play defense, locking down bespoke file systems and making cross-platform collaboration unnecessarily awkward. Too optimistic, perhaps!

Or One Big Cloud?
Separately, just prior to IBC, MovieLabs introduced a white paper discussing a direction of travel for movie production toward the year 2030. IBC hosted a MovieLabs panel on the Sunday of the show, moderated by postPerspective’s own Randi Altman and featuring tech chiefs from the major studios. The paper deserves proper consideration, given that it’s backed by Disney, Sony, Paramount, Warner Bros. and Universal.

MovieLabs panel

To summarize, the proposition is that the digital assets that will be manipulated to make content stay in one centralized cloud. Apps that manipulate those assets, such as editorial and visual effects apps, delivery processes and so on, will operate in the same cloud space. The talent that drives those apps will do so via the cloud. Or to put it slightly differently, the content assets don’t move — rather, the production apps and talent move to the assets. Currently, we do the opposite: the assets are transferred to where the post services are provided.

There are many advantages to this idea. Multiple transfers of digital assets to many post facilities would end. Files would be secured on a policy basis, enabling only the relevant operators to have access for the appropriate duration. Centralized content libraries would be produced, helping to enable on-the-fly localization, instant distribution and multi-use derivatives, such as marketing materials and games.

Of course, there are many questions. How do the various post application manufacturers maintain their product values if they all work as in-cloud applications on someone else’s hardware? What happens to traditional post production facilities if they don’t need any equipment and their artists log in from wherever? How would a facility protect itself from payment disputes if it does not have control over the assets it produces?

Personally, I have moved on from the idea of brick-and-mortar facilities. Cloud post permits nearly unlimited resources and access to a global pool of talent, not just those who reside within a commutable distance from the office. I say, bring it on… within reason. Of course, this initiative relates only to the production of content for those key studios. There’s a whole world of content production beyond that scope.

Blackmagic

Knowing Your Customer
Another area of interest for me at IBC 2019 was how offerings to colorists have become quite polarized. On one hand there is the seemingly all-conquering Resolve from Blackmagic Design. Inexpensive, easy to access and ubiquitous. On the other hand there is Baselight from FilmLight — a premium brand with a price tag and associated entry barrier to match. The fact that these two products are both successful in the same market but with very different strategies is testament to a fundamental business rule: “Know your customer.” If you know who your customer is going to be, you can design and communicate the ideal product for them and sell it at the right price.

A chat with FilmLight’s joint founder, Wolfgang Lempp, and development director Martin Tlaskal was very informative. Lempp explained that the demands placed on FilmLight’s customers are similarly polarized. On one hand, clients — including major studios and Netflix — mandate fastidious adherence to advanced and ever-improving technical standards, as well as image pipelines that are certified at every step. On the other hand, different clients place deadline or budget as the prevalent concern. Tlaskal said FilmLight sets out to support those color specialists who aim for top-of-the-industry excellence. Having that template for the target customer defines and drives what features FilmLight will develop for its Baselight product.

FilmLight

At IBC 2019, FilmLight hosted guest speaker-led demonstrations (“Colour on Stage”) to inspire creative grading and to present its latest features and improvements including better hue-angle keying, tracking and dealing with lens distortions.

Blackmagic is no less focused on knowing its customer, which explains its success in recent years. DaVinci Resolve once shared the “premium” space occupied by FilmLight but went through a transition to aim itself squarely at a democratized post production landscape. This shift meant a recognition that there would be millions of content producers and thousands of small post houses rather than a handful of large post facilities. That transition required a great deal more than merely slashing the price. The software product would have to work on myriad hardware combinations, not just the turnkey approved setup, and would need to have features and documentation aimed at those who hadn’t spent the past three years training in a post facility. By knowing exactly who the customer would be, Blackmagic built Resolve into an extremely successful, cross-discipline, post production powerhouse. Blackmagic was demonstrating the latest Resolve at IBC 2019, although all new features had been previously announced because, as director of software engineering Rohit Gupta explained, Blackmagic does not time its feature releases to IBC.

SGO

Aiming between the extremes established by FilmLight and Blackmagic Design, SGO promoted a new positioning of its flagship product, Mistika, via the Boutique subproduct. This is essentially a software-only Mistika that runs on PC or Mac. Subscription prices range from 99 euros per month to 299 euros per month, depending on features, although there have been several discounted promotions. The more expensive options include SGO’s highly regarded stereo 3D tools and camera stitching features for producing wraparound movies.

Another IBC — done!


David Cox is a VFX compositor and colorist with more than 20 years of experience. He started his career with MPC and The Mill before forming his own London-based post facility. Cox specializes in unusual projects, such as those using very high resolutions and interactive immersive experiences featuring realtime render engines and augmented reality.

Behind the Title: Chapeau CD Lauren Mayer-Beug

This creative director loves the ideation process at the start of a project when anything is possible, and saving some of those ideas for future use.

COMPANY: LA’s Chapeau Studios

CAN YOU DESCRIBE YOUR COMPANY?
Chapeau fluidly provides visual effects, editorial, design, photography and story development, with additional experience in web development and software and app engineering.

WHAT’S YOUR JOB TITLE?
Creative Director

WHAT DOES THAT ENTAIL?
It often entails seeing a job through from start to finish. I look at it like making a painting or a sculpture.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Perhaps just how hands-on the process actually is. And how analog I am, considering we work in such a tech-driven environment.

Beats

WHAT’S YOUR FAVORITE PART OF THE JOB?
Thinking. I’m always thinking big picture to small details. I love the ideation process at the start of a project when anything is possible. Saving some of those ideas for future use, learning about what you want to do through that process. I always learn more about myself through every ideation session.

WHAT’S YOUR LEAST FAVORITE?
Letting go of the details that didn’t get addressed. Not everything is going to be perfect, and since it’s a learning process, there is inevitably something that will catch your eye.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
My mind goes to so many buckets. A published children’s book author with a kick-ass coffee shop. A coffee bean buyer so I could travel the world.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I always skewed in this direction. My thinking has always been in the mindset of idea coaxer and gatherer. I was put in that position in my mid-20s and realized I liked it (with lots to learn, of course), and I’ve run with it ever since.

IS THERE A PROJECT YOU ARE MOST PROUD OF?
That’s hard to say. Every project is really so different. A lot of what I’m most proud of is behind the scenes… the process that will go into what I see as bigger things. With Chapeau, I will always love the Facebook projects, all the pieces that came together — both on the engineering side and the fun creative elements.

Facebook

What I’m most excited about is our future stuff. There’s a ton on the sticky board that we aim to accomplish in the very near future. Thinking about how much is actually being set in motion is mind-blowing, humbling and — dare I say — makes me outright giddy. That is why I’m here, to tell these new stories — stories that take part in forming the new landscape of narrative.

WHAT TOOLS DO YOU USE DAY TO DAY?
Anything Adobe. My most effective tool is the good-old pen to paper. That works clearly in conveying ideas and working out the knots.

WHERE DO YOU FIND INSPIRATION?
I’m always looking for inspiration and find it everywhere, as many other creatives do. However, nature is where I’ve always found my greatest inspiration. I’m constantly taking photos of interesting moments to save for later. Oftentimes I will refer back to those moments in my work. When I need a reset I hike, run or bike. Movement helps.

I’m always going outside to look at how the light interacts with the environment. Something I’ve become known for at work is going out of my way to see a sunset (or sunrise). They know me to be the first one on the roof for a particularly enchanting magic hour. I’m always staring at the clouds — the subtle color combinations and my fascination with how colors look the way they do only by context. All that said, I often have my nose in a graphic design book.

The overall mood realized from gathering and creating the ever-popular Pinterest board is so helpful. Seeing the mood color-wise and texturally never gets old. Suddenly, you have a fully formed example of where your mind is at. Something you could never have talked your way through.

Then, of course, there are people. People/peers and what they are capable of will always amaze me.

Martin Scorsese to receive VES Lifetime Achievement Award  

The Visual Effects Society (VES) has named Martin Scorsese as the forthcoming recipient of the VES Lifetime Achievement Award in recognition of his valuable contributions to filmed entertainment. The award will be presented next year at the 18th Annual VES Awards at the Beverly Hilton Hotel.

The VES Lifetime Achievement Award, voted on by the VES Board of Directors, recognizes an outstanding body of work that has significantly contributed to the art and/or science of the visual effects industry.  The VES will honor Scorsese for “his artistry, expansive storytelling and gift for blending iconic imagery and unforgettable narrative.”

“Martin Scorsese is one of the most influential filmmakers in modern history and has made an indelible mark on filmed entertainment,” says Mike Chambers, VES board chair. “His work is a master class in storytelling, which has brought us some of the most memorable films of all time. His intuitive vision and fiercely innovative direction have given rise to a new era of storytelling and made a profound impact on future generations of filmmakers. Martin has given us a rich body of groundbreaking work to aspire to, and for this, we are honored to award him with the Visual Effects Society Lifetime Achievement Award.”

Martin Scorsese has directed critically acclaimed, award-winning films including Mean Streets, Taxi Driver, Raging Bull, The Last Temptation of Christ, Goodfellas, Gangs of New York, The Aviator, The Departed (Academy Award for Best Director and Best Picture), Shutter Island and Hugo (Golden Globe for Best Director).

Scorsese has also directed numerous documentaries, including Rolling Thunder Revue: A Bob Dylan Story by Martin Scorsese, Elia Kazan: A Letter to Elia and the classic The Last Waltz about The Band’s final concert. His George Harrison: Living in the Material World received Emmy Awards for Outstanding Directing for Nonfiction Programming and Outstanding Nonfiction Special.

In 2010, Scorsese executive produced the HBO series Boardwalk Empire, winning Emmy and DGA Awards for directing the pilot episode. In 2014, he co-directed The 50 Year Argument with his long-time documentary editor David Tedeschi.

This September, Scorsese’s film, The Irishman, starring Robert De Niro, Al Pacino and Joe Pesci, will make its world premiere at the New York Film Festival and will have a theatrical release starting November 1 in New York and Los Angeles before arriving on Netflix on November 27.

Scorsese is the founder and chair of The Film Foundation, a non-profit organization dedicated to the preservation and protection of motion picture history.

Previous winners of the VES Lifetime Achievement Award have included George Lucas; Robert Zemeckis; Dennis Muren, VES; Steven Spielberg; Kathleen Kennedy and Frank Marshall; James Cameron; Ray Harryhausen; Stan Lee; Richard Edlund, VES; John Dykstra; Sir Ridley Scott; Ken Ralston; Jon Favreau and Chris Meledandri.

Visual Effects in Commercials: Chantix, Verizon

By Karen Moltenbrey

Once too expensive to consider for use in television commercials, visual effects soon found their way into this realm, enlivening and enhancing the spots. Today, countless commercials use increasingly complex VFX to entertain, to explain and to elevate a message. Here, we examine two very different approaches to using effects in this way. In the Verizon commercial Helping Doctors Fight Cancer, augmented reality is transferred from a holographic medical application and fused into a heartwarming piece thanks to an extremely delicate production process. For the Chantix Turkey campaign, digital artists took a completely different approach, incorporating a stylized digital spokes-character (with feathers, no less) into various scenes.

Verizon Helping Doctors Fight Cancer

The main goal of television advertisements — whether they are 15, 30 or 60 seconds in length — is to sell a product. Some do it through a direct sales approach. Some by “selling” a lifestyle or brand. And some opt to tell a story. Verizon took the latter approach for a campaign promoting its 5G Ultra Wideband.

Vico Sharabani

For the spot Helping Doctors Fight Cancer, directed by Christian Weber, Verizon adds a human touch to its technology through a compelling story illustrating how its 5G network is being used within a mixed-reality environment so doctors can better treat cancer patients. The 30-second commercial features surgeons and radiologists using high-fidelity holographic 3D anatomical renderings that can be viewed from every angle and even projected onto a person’s body for a more comprehensive examination, while the imagery can potentially be shared remotely in near real time. The augmented-reality application is from Medivis, a medical visualization startup that is using Verizon’s next-generation 5G network to deliver the high speeds and low latencies necessary for the application’s large datasets and interactive frame rates.

The spot opens with video footage of patients undergoing MRIs and a discussion by Medivis cofounder Dr. Osamah Choudhry about how treatment could be radically changed using the technology. Holographic medical imagery is then displayed, showing the Medivis AR application being used on a patient.

“McGarryBowen New York, Verizon’s advertising agency, wanted to show the technology in the most accurate and the most realistic way possible. So, we studied the technology,” says Vico Sharabani, founder/COO of The-Artery, which was tasked with the VFX work in the spot. To this end, The-Artery team opted to use as much of the actual holographic content as possible, pulling assets from the Medivis software and fusing them with other broadcast-quality content.

The-Artery is no stranger to augmented reality, virtual reality and mixed reality. Highly experienced in visual effects, Sharabani founded the company to solve business problems within the visual space across all platforms, from films to commercials to branding, and as such, alternate reality and story have been integral elements to achieving that goal. Nevertheless, the work required for this spot was difficult and challenging.

“It’s not just acquiring and melding together 3D assets,” says Sharabani. “The process is complex, and there are different ways to do it — some better than others. And the agency wanted it to be true to the real-life application. This was not something we could just illustrate in a beautiful way; it had to be very technically accurate.”

To this end, much of the holographic imagery consisted of actual 3D assets from the Medivis holographic AR system, captured live. At times, though, The-Artery had to rework the imagery using multiple assets from the Medivis application, and other times the artists re-created the medical imagery in CG.

Initially, the ad agency expected that The-Artery would recreate all the digital assets in CG. But after learning as much as they could about the Medivis system, Sharabani and the team were confident they could export actual data for the spot. “There was much greater value to using actual data when possible, actual CT data,” says Sharabani. “Then you have the most true-to-life representation, which makes the story even more heartfelt. And because we were telling a true story about the capabilities of the network around a real application being used by doctors, any misrepresentation of the human anatomy or scans would hurt the message and intention of the campaign.”

The-Artery began developing a solution with technicians at Medivis to export actual imagery, via the HoloLens headset that the medical staff uses to view and manipulate the holograms, in a way that suited the needs of the commercial. Sometimes this involved merely capturing the screen performance as the HoloLens was being used. Other times the assets from the Medivis system were rendered over a greenscreen without a background and later composited into a scene.

“We have the ability to shoot through the HoloLens, which was our base; we used that as our virtual camera whereby the output of the system is driven by the HoloLens. Every time we would go back to do a capture (if the edit changed or the camera position changed), we had to use the HoloLens as our virtual camera in order to get the proper camera angle,” notes Sharabani. Because the HoloLens is a stereoscopic device, The-Artery always used the right-eye view for the representations, as it most closely reflected the experience of the user wearing the device.

Since the Medivis system is driven by the HoloLens, there is some shakiness present — an artifact the group retained in some of the shots to make it truer to life. “It’s a constant balance of how far we go with realism and at what point it is too distracting for the broadcast,” says Sharabani.

For imagery like the CT scans, the point cloud data was imported directly into Autodesk’s Maya, where it was turned into a 3D model. Other times the images were rendered out at 4K directly from the system. The Medivis imagery was later composited into the scenes using Autodesk’s Flame.
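
For a sense of what that point-cloud-to-Maya step might look like in practice, here is a hedged Maya Python sketch — a generic workflow guess, not The-Artery’s actual pipeline — that reads a plain-text XYZ export and brings the points into the scene as a particle object an artist could model against. The file path and column layout are assumptions.

```python
# Hypothetical Maya Python sketch: bring an ASCII XYZ point cloud into the
# scene as particles for use as a modeling reference. Not The-Artery's actual
# pipeline; the file path and column layout are assumptions.
import maya.cmds as cmds


def import_point_cloud(path, name="ct_scan_points", max_points=200000):
    """Read x y z lines from a text file and create a Maya particle object."""
    points = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 3:
                continue  # skip headers or malformed lines
            points.append((float(parts[0]), float(parts[1]), float(parts[2])))
            if len(points) >= max_points:
                break  # keep the viewport responsive

    transform, shape = cmds.particle(p=points, name=name)
    return transform


# Example usage (path is a placeholder):
# import_point_cloud("/jobs/verizon/medivis/ct_scan.xyz")
```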

However, not every bit of imagery was extracted from the system. Some had to be re-created using a standard 3D pipeline. For instance, the “scan” of the actor’s skull was replicated by the artists so that the skull model matched perfectly with the holographic imagery that was overlaid in post production (since everyone’s skull proportions are different). The group began by creating the models in Maya and then composited the imagery within Autodesk’s Flame, along with a 3D bounding box of the creative implant.

The artists also replicated the Medivis UI in 3D to re-create and match the performance of the three-dimensional UI to the hand gestures of the person “using” the Medivis system in the spot — both of which were filmed separately. For the CG interface, the group used Autodesk’s Maya and Flame, as well as Adobe’s After Effects.

“The process was so integrated to the edit, we needed the proper 3D tracking and some of the assets to be built as a 3D screen element,” explains Sharabani. “It gave us more flexibility to build the 3D UI inside of Flame, enabling us to control it more quickly and easily when we changed a hand gesture or expanded the shots.”

Given The-Artery’s experience with virtual technologies, the team was quick to understand the limitations of the project using this particular equipment. Once those were established, however, they began to push the boundaries with small hacks that enabled them to achieve their goal of using actual holographic data to tell an amazing story.

Chantix “Turkey” Campaign

Chantix is medication to help smokers kick the habit. To get its message across in a series of television commercials, the drug maker decided to talk turkey, focusing the campaign on a CG turkey that, well, goes “cold turkey” with the assistance of Chantix.

A series of four spots — Slow Turkey, Camping, AC and Beach Day — prominently feature the turkey, created at The Mill. The spots were directed and produced in-house by Mill+, The Mill’s end-to-end production arm, with Jeffrey Dates directing.


L-R: John Montefusco, Dave Barosin and Scott Denton

“Each one had its own challenges,” says CG lead John Montefusco. Nevertheless, the initial commercial, Slow Turkey, presented the biggest obstacle: the build of the character from the ground up. “It was not only a performance feat, but a technical one as well,” he adds.

Effects artist Dave Barosin echoed Montefusco’s assessment of Slow Turkey, which, in addition to building the main asset from scratch, required the development of a feather system. Meanwhile, Camping and AC added clothing, and Beach Day presented the challenge of wind, water and simulation in a moving vehicle.

According to senior modeler Scott Denton, the team was given a good deal of creative freedom when crafting the turkey. The artists were presented with some initial sketches, he adds, but more or less had free rein in the creation of the look and feel of the model. “We were looking to tread the line between cartoony and realistic,” he says. The first iterations became very cartoony, but the team subsequently worked backward to where the character was more of a mix between the two styles.

The crew modeled the turkey using Autodesk’s Maya and Pixologic’s ZBrush. It was then textured within Adobe’s Substance and Foundry’s Mari. All the details of the model were hand-sculpted. “Nailing the look and feel was the toughest challenge. We went through a hundred iterations before getting to the final character you see in the commercial,” Denton says.

The turkey contains 6,427 body feathers, 94 flight feathers and eight scalp feathers. They were simulated using a custom feather setup built by the lead VFX artist within SideFX Houdini, which made the process more efficient. Proprietary tools also were used to groom the character.

The artists initially developed a concept sculpt in ZBrush of just the turkey’s head, which underwent numerous changes and versions before they added it to the body of the model. Denton then sculpted a posed version with sculpted feathers to show what the model might look like when posed, giving the client a better feel for the character. The artists later animated the turkey using Maya. Rendering was performed in Autodesk’s Arnold, while compositing was done within Foundry’s Nuke.

“Developing animation that holds good character and personality is a real challenge,” says Montefusco. “There’s a huge amount of evolution in the subtleties that ultimately make our turkey ‘the turkey.’”

For the most part, the same turkey model was used for all four spots, although the artists did adapt and change certain aspects — such as the skeleton and simulation meshes — for each as needed in the various scenarios.

For the turkey’s clothing (sweater, knitted vest, scarf, down vest, knitted cap, life vest), the group used Marvelous Designer 3D software for virtual clothes and fabrics, along with Maya and ZBrush. However, as Montefusco explains, tailoring for a turkey is far different than developing CG clothing for human characters. “Seeing as a lot of the clothes that were selected were knit, we really wanted to push the envelope and build the knit with geometry. Even though this made things a bit slower for our effects and lighting team, in the end, the finished clothing really spoke for itself.”

The four commercials also feature unique environments ranging from the interior and exterior of a home to a wooded area and beach. The artists used mostly plates for the environments, except for an occasional tent flap and chair replacement. The most challenging of these settings, says Montefusco, was the beach scene, which required full water replacement for the shot of the turkey on the paddle board.


Karen Moltenbrey is a veteran writer, covering visual effects and post production.

VFX in Features: Hobbs & Shaw, Sextuplets

By Karen Moltenbrey

What a difference a year makes. Then again, what a difference 30 years make. That’s roughly how long it has been since the feature film The Abyss integrated photoreal CGI with live action, setting a trend that continues to this day. Since that milestone many years ago, VFX wizards have tackled a plethora of complicated problems, including realistic hair and skin for believable digital humans, as well as convincing water, fire and other elements. With each new blockbuster VFX film, digital artists continually raise the bar, challenging the status quo and themselves to elevate the art even further.

The visual effects in today’s feature films run the gamut from in-your-face imagery that can put you on the edge of your seat through heightened action to the kind that can make you laugh by amping up the comedic action. As detailed here, Fast & Furious Presents: Hobbs & Shaw takes the former approach, helping to carry out amazing stunts that are bigger and “badder” than ever. Opposite that is Sextuplets, which uses VFX to carry out a gag central to the film in a way that also pushes the envelope.

Fast & Furious Presents: Hobbs & Shaw

The Fast and the Furious film franchise, which has included eight features that collectively have amassed more than $5 billion worldwide since first hitting the road in 2001, is known for its high-octane action and visual effects. The latest installment, Fast & Furious Presents: Hobbs & Shaw, continues that tradition.

At the core of the franchise are next-level underground street racers who become reluctant fugitives pulling off big heists. Hobbs & Shaw, the first stand-alone vehicle, has Dwayne Johnson and Jason Statham reprising their roles as loyal Diplomatic Security Service lawman Luke Hobbs and lawless former British operative Deckard Shaw, respectively. This comes after facing off in Furious 7 (2015) and then playing cat and mouse as Shaw tries to escape from prison and Hobbs tries to stop him in 2017’s The Fate of the Furious. (Hobbs first appeared in 2011’s Fast Five and became an ally to the gang. Shaw’s first foray was in 2013’s Fast & Furious 6.)

Now, in the latest installment, the pair are forced to join forces to hunt down anarchist Brixton Lorr (Idris Elba), who has control of a bio weapon. The trackers are hired separately to find Hattie, a rogue MI6 agent (who is also Shaw’s sister, a fact that initially eludes Hobbs) after she injects herself with the bio agent and is on the run, searching for a cure.

The Universal Pictures film is directed by David Leitch (Deadpool 2, Atomic Blonde). Jonathan Sela (Deadpool 2, John Wick) is DP, and visual effects supervisor is Dan Glass (Deadpool 2, Jupiter Ascending). A number of VFX facilities worked on the film, including key vendor DNeg along with other contributors such as Framestore.

DNeg delivered 1,000-plus shots for the film, including a range of vehicle-based action sequences set in different global locations. The work involved the creation of full digi-doubles and digi-vehicle duplicates for the death-defying stunts, jumps and crashes, as well as complex effects simulations and extensive digital environments. Naturally, all the work had to fit seamlessly alongside live-action stunts and photography from a director with a stunt coordinator pedigree and a keen eye for authentic action sequences. In all, the studio worked on 26 sequences divided among the Vancouver, London and Mumbai locations. Vancouver handled mostly the Chernobyl break-in and escape sequences, as well as the Samoa chase. London did the McLaren chase and the cave fight, as well as London chase sequences. The Mumbai team assisted its colleagues in Vancouver and London.

When you think of Fast & Furious, the first thing that comes to mind is intense car chases, and according to Chris Downs, CG supervisor at DNeg Vancouver, the Chernobyl beat is essentially one long, giant car-and-motorcycle pursuit; he describes it as “a pretty epic car chase.”

“We essentially have Brixton chasing Shaw and Hattie, and then Shaw and Hattie are trying to catch up to a truck that’s being driven by Hobbs, and they end up on these utility ramps and pipes, using them almost as a roadway to get up and into the turbine rooms, onto the rooftops and then jump between buildings,” he says. “All the while, everyone is getting chased by these drones that Brixton is controlling.”

The Chernobyl sequences — the break-in and the escape — were the most challenging work on the film for DNeg Vancouver. The villain, Brixton, is using the Chernobyl nuclear power plant as the site of his hideaway, leading Hobbs and Shaw to covertly break into his secret lab underneath Chernobyl to locate a device Brixton has there — and then not-so-secretly break out.

The break-in was filmed at a location outside of London, the decommissioned Eggborough coal-fired power plant, which served as a backdrop. To transform the locale into Chernobyl, DNeg augmented the site with cooling towers and other digital structures. The artists also built an entire CG version of the site for the more extreme action, using photos of the actual Chernobyl as reference for their work. “It was a very intense build. We had artistic liberty, but it was based off of Chernobyl, and a lot of the buildings match the reference photography. It definitely maintained the feeling of a nuclear power plant,” says Downs.

Not only did the construction involve all the exteriors of the industrial complex around Chernobyl, but also an interior build of an “insanely complicated” turbine hall that the characters race through at one point.

The sequence required other environment work, too, as well as effects, digi-doubles and cloth sims for the characters’ flight suits and parachutes as they drop into the setting.

Following the break-in, Hobbs and Shaw are captured and tortured and then manage to escape from the lab just in time as the site begins to explode. For this escape sequence, the crew built a CG Chernobyl reactor and power station, automated drones, a digital chimney, an epic collapse of buildings, complex pyrotechnic clouds and burning material.

“The scope of the work, the amount of buildings and pipes, and the number of shots made this sequence our most difficult,” says Downs. “We were blowing it up, so all the buildings had to be effects-friendly as we’re crashing things through them.” Hobbs and Shaw commandeer vehicles as they try to outrun Brixton and the explosion, but Brixton and his henchmen give chase in a range of vehicles, including trucks, Range Rovers, motorcycles and more — a mix of CGI and practical with expert stunt drivers behind the wheel.

As expected for a Fast & Furious film, there’s a big variety of custom-built vehicles. Yet, for this scene and especially in Samoa, DNeg Vancouver crafted a range of CG vehicles, including motorcycles, SUVs, transport trucks, a flatbed truck, drones and a helicopter — 10 in all.

According to Downs, maintaining the appropriate wear and tear on the vehicles as the sequences progressed was not always easy. “Some are getting shot up, or something is blown up next to them, and you want to maintain the dirt and grime on an appropriate level,” he says. “And, we had to think of that wear and tear in advance because you need to build it into the model and the texture as you progress.”

The CG vehicles are mostly used for complex stunts, “which are definitely an 11 on the scale,” says Downs. Along with the CG vehicles, digi-doubles of the actors were also used for the various stunt work. “They are fairly straightforward, though we had a couple shots where we got close to the digi-doubles, so they needed to be at a high level of quality,” he adds. The Hattie digi-double proved the most difficult due to the hair simulation, which had to match the action on set, and the cloth simulation, which had to replicate the flow of her clothing.

“She has a loose sweater on during the Chernobyl sequence, which required some simulation to match the plate,” Downs adds, noting that the artists built the digi-doubles from scratch, using scans of the actors provided by production for quality checks.

The final beat of the Chernobyl escape comes with the chimney collapse. As the chase through Chernobyl progresses, Shaw tries to get Hattie to Hobbs, and Brixton tries to grab Hattie from Shaw. In the process, charges are detonated around the site, leading to the collapse of the main chimney, which just misses obliterating the vehicle they are all in as it travels down a narrow alleyway.

DNeg did a full environment build of the area for this scene, which included the entire alleyway and the chimney, and simulated the destruction of the chimney along with an explosive concussive force from the detonation. “There’s a large fireball at the beginning of the explosion that turns into a large volumetric cloud of dust that’s getting kicked up as the chimney is collapsing, and all that had to interact with itself,” Downs says of the scene. “Then, as the chimney is collapsing toward the end of the sequence, we had the huge chunks ripping through the volumetrics and kicking up more pyrotechnic-style explosions. As it is collapsing, it is taking out buildings along the way, so we had those blowing up and collapsing and interacting with our dust cloud, as well. It’s quite a VFX extravaganza.”

Adding to the chaos: The sequence was reshot. “We got new plates for the end of that escape sequence that we had to turn around in a month, so that was definitely a white-knuckle ride,” says Downs. “Thankfully we had already been working on a lot of the chimney collapse and had the Chernobyl build mostly filled in when word came in about the reshoot. But, just the amount of effects that went into it — the volumetrics, the debris and then the full CG environment in the background — was a staggering amount of very complex work.”

The action later moves from London at the start of the film to Chernobyl, and then, in the third act, to Samoa, home of the Hobbs family, as the main characters seek refuge on the island while trying to escape from Brixton. But Brixton soon catches up to them, and the last showdown begins amid the island’s tranquil setting with a shimmering blue ocean and lush green mountains. Some of the landscape is natural, some is man-made (sets) and some is CGI. To aid in the digital build of the Samoan environment, Glass traveled to the Hawaiian island of Kauai, where the filming took place, and took a good amount of reference footage.

For a daring chase in Samoa, the artists built out the cliff’s edge and sent a CG helicopter tumbling down the steep incline in the final battle with Brixton. In addition to creating the fully digital Samoan roadside, CG cliff and 3D Black Hawk, the artists completed complex VFX simulations and destruction and crafted high-tech combat drones and more for the sequence.

The helicopter proved to be the most challenging of all the vehicles, as it had a couple of hero moments when certain sections were fairly close to the camera. “We had to have a lot of model and texture detail,” Downs notes. “And then with it falling down the cliff and crash-landing onto the beach area, the destruction was quite tricky. We had to plan out which parts would be damaged the most and keep that consistent across the shots, and then go back in and do another pass of textures to support the scratches, dents and so forth.”

Meanwhile, DNeg London and Mumbai handled a number of sequences, among them the compelling McLaren chase, the CIA building descent and the final cave fight in Samoa. There were also a number of smaller sequences, for a total of approximately 750 shots.

One of the scenes in the film’s trailer that immediately caught fans’ attention was the McLaren escape/motorcycle transformation sequence, during which Hobbs, Shaw and Hattie are being chased by Brixton baddies on motorcycles through the streets of London. Shaw, behind the wheel of a McLaren 720S, tries to evade the motorbikes by maneuvering the prized vehicle underneath two crossing tractor trailer rigs, squeezing through with barely an inch to spare. The bad news for the trio: Brixton pulls an even more daring move, hopping off the bike while grabbing onto the back of it and sliding parallel to the pavement, just inches above it, as the bike zips under the road hazard practically on its side; once clear, he pulls himself back onto the motorbike (in a memorable slow-motion stunt) and continues the pursuit thanks to his cybernetically altered body.

Chris Downs

According to Stuart Lashley, DNeg VFX supervisor, this sequence contained a lot of bluescreen car comps in which the actors were shot on stage in a McLaren rigged on a mechanical turntable. The backgrounds were shot alongside the stunt work in Glasgow (playing as London). In addition, there were a number of CG cars added throughout the sequence. “The main VFX set pieces were Hobbs grabbing the biker off his bike, the McLaren and Brixton’s transforming bike sliding under the semis, and Brixton flying through the double-decker bus,” he says. “These beats contained full-CG vehicles and characters for the most part. There was some background DMP [digital matte-painting] work to help the location look more like London. There were also a few shots of motion graphics where we see Brixton’s digital HUD through his helmet visor.”

As Lashley notes, it was important for the CG work to blend in with the surrounding practical stunt photography. “The McLaren itself had to hold up very close to the camera; it has a very distinctive look to its coating, which had to match perfectly,” he adds. “The bike transformation was a welcome challenge. There was a period of experimentation to figure out the mechanics of all the small moving parts while achieving something that looked cool at the same time.”

As exciting and complex as the McLaren scene is, Lashley believes the cave fight sequence following the helicopter/tractor trailer crash was perhaps even more of a difficult undertaking, as it had a particular VFX challenge in terms of the super slow-motion punches. The action takes place at a rock-filled waterfall location — a multi-story set on a 30,000-square-foot soundstage — where the three main characters battle it out. The film’s final sequence is a seamless blend of CG and live footage.

Stuart Lashley

“David [Leitch] had the idea that this epic final fight should be underscored by these very stylized, powerful impact moments, where you see all this water explode in very graphic ways,” explains Lashley. “The challenge came in finding the right balance between physics-based water simulation and creative stylization. We went through a lot of iterations of different looks before landing on something David and Dan [Glass] felt struck the right balance.”

The DNeg teams used a unified pipeline for their work, which includes Autodesk’s Maya for modeling, animation and the majority of cloth and hair sims; Foundry’s Mari for texturing; Isotropix’s Clarisse for lighting and rendering; Foundry’s Nuke for compositing; and SideFX’s Houdini for effects work, such as explosions, dust clouds, particulates and fire.

With expectations running high for Hobbs & Shaw, filmmakers and VFX artists once more delivered, putting audiences on the edge of their seats with jaw-dropping VFX work that shifted the franchise’s action into overdrive yet again. “We hope people have as much fun watching the result as we had making it. This was really an exercise in pushing everything to the max,” says Lashley, “often putting the physics book to one side for a bit and picking up the Fast & Furious manual instead.”

Sextuplets

When actor/comedian/screenwriter/film producer Marlon Wayans signed on to play the lead in the Netflix original movie Sextuplets, he was committing to a role requiring an extensive acting range. That’s because he was filling not one but seven different lead roles in the same film.

In Sextuplets, directed by Michael Tiddes, Wayans plays soon-to-be father Alan, who hopes to uncover information about his family history before his child’s arrival and sets out to locate his birth mother. Imagine Alan’s surprise when he finds out that he is part of “identical” sextuplets! Nevertheless, his siblings are about as unique as they come.

There’s Russell, the nerdy, overweight introvert and the only sibling not given up by their mother, with whom he lived until her recent passing. Ethan, meanwhile, is the embodiment of a 1970s pimp. Dawn is an exotic dancer who is in jail. Baby Pete is on his deathbed and needs a kidney. Jaspar is a villain reminiscent of Austin Powers’ Dr. Evil. Okay, that is six characters, all played by Wayans. Who is the seventh? (Spoiler alert: Wayans also plays their mother, who was simply on vacation and not actually dead as Russell had claimed.)

There are over 1,100 VFX shots in the movie. None, really, involved the transformation of the actor into the various characters — that was done using prosthetics, makeup, wigs and so forth, with slight digital touch-ups as needed. Instead, the majority of the effects work resulted from shooting with a motion-controlled camera and then compositing two (or more) of the siblings together in a shot. For Baby Pete, the artists also had to do a head replacement, comp’ing Wayans onto the body of a much smaller actor.

“We used quite a few visual effects techniques to pull off the movie. At the heart was motion control [which enables precise control and repetition of camera movement], which allowed us to put multiple characters played by Marlon together in the scenes,” says Tiddes, who has worked with Wayans on multiple projects in the past, including A Haunted House.

The majority of shots involving the siblings were done on stage, filmed on bluescreen with a TechnoDolly for the motion control, as it is too impractical to fit the large rig inside an actual house for filming. “The goal was to find locations that had the exterior I liked [for those scenes] and then build the interior on set,” says Tiddes. “This gave me the versatility to move walls and use the TechnoDolly to create multiple layers so we could then add multiple characters into the same scene and interact together.”

According to Tiddes, the team approached exterior shots similarly to interior ones, with the added challenge of shooting the duplicate moments at the same time each day to get consistent lighting. “Don Burgess, the DP, was amazing in that sense. He was able to create almost exactly the same lighting elements from day to day,” he notes.

Michael Tiddes

So, whenever there was a scene with multiple Wayans characters, it would be filmed on back-to-back days with each of the characters. Tiddes usually started off with Alan, the straight man, to set the pace for the scene, using body doubles for the other characters. Next, the director would work out the shot with the motion control until the timing, composition and so forth were perfected. Then he would hit the Record button on the motion-control device, and the camera would repeat the same exact move over and over as many times as needed. The next day, the shot was replicated with the next character: the camera would move automatically, and Wayans would have to hit the same marks at the same moments established on the first day.

“Then we’d do it again on the third day with another character. It’s kind of like building layers in Photoshop, and in the end, we would composite all those layers on top of each other for the final version,” explains Tiddes.
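
To picture that layering step, here is a minimal Nuke Python sketch — an illustration of the general idea, not Image Engine’s actual comp script — that reads a clean background plate plus one pre-keyed pass per character and stacks them with over merges. The file names and the premultiplied-plate assumption are placeholders.

```python
# Hypothetical Nuke Python sketch of stacking motion-control passes like
# layers: one pre-keyed/roto'd plate per character over a clean background.
# File names are placeholders, not production paths.
import nuke

background = nuke.nodes.Read(file="clean_plate.####.exr")

character_passes = [
    "alan_pass.####.exr",      # day 1: Alan sets the pace
    "russell_pass.####.exr",   # day 2: same camera move, next character
    "dawn_pass.####.exr",      # day 3
]

comp = background
for plate in character_passes:
    fg = nuke.nodes.Read(file=plate)
    # Each pass is assumed to be already keyed/rotoscoped and premultiplied;
    # Merge2 input 0 is the background (B), input 1 the foreground (A).
    comp = nuke.nodes.Merge2(operation="over", inputs=[comp, fg])

nuke.nodes.Write(file="sextuplets_final_comp.####.exr", inputs=[comp])
```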

When one character would pass in front of another, it became a roto’d shot. Oftentimes a small bluescreen was set up on stage to allow for easier rotoscoping.

Image Engine was the main visual effects vendor on the film, with Bryan Jones serving as visual effects supervisor. The rotoscoping was done using a mix of SilhouetteFX’s Silhouette and Foundry’s Nuke, while compositing was mainly done using Nuke and Autodesk’s Flame.

Make no mistake … using the motion-controlled camera was not without challenges. “When you attack a scene, traditionally you can come in and figure out the blocking on the day [of the shoot],” says Tiddes. “With this movie, I had to previsualize all the blocking because once I put the TechnoDolly in a spot on the set, it could not move for the duration of time we shot in that location. It’s a large 13-foot crane with pieces of track that are 10 feet long and 4 feet wide.”

In fact, one of the main reasons Tiddes wanted to do the film was because of the visual effects challenges it presented. In past films where an actor played multiple characters in a scene, usually one character is on one side of the screen and the other character is on the other side, and a basic split-screen technique would have been used. “For me to do this film, I wanted to visually do it like no one else has ever done it, and that was accomplished by creating camera movement,” he explains. “I didn’t want to be constrained to only split-screen lock-off camera shots that would lack energy and movement. I wanted the freedom to block scenes organically, allowing the characters the flexibility to move through the room, with the opportunity to cross each other and interact together physically. By using motion control, by being able to re-create the same camera movement and then composite the characters into the scene, I was able to develop a different visual style than previous films and create a heightened sense of interactivity and interaction between two or multiple characters on the screen while simultaneously creating dynamic movement with the camera and invoking energy into the scene.”

At times, Gregg Wayans, Marlon’s nephew, served as his body double. He even appears in a very wide shot as one of the siblings, although that occurred only once. “At the end of the day, when the concept of the movie is about Marlon playing multiple characters, the perfectionist in me wanted Marlon to portray every single moment of these characters on screen, even when the character is in the background and out of focus,” says Tiddes. “Because there is only one Marlon Wayans, and no one can replicate what he does physically and comedically in the moment.”

Tiddes knew he would be challenged going into the project, but the process was definitely more complicated than he had initially expected — even with his VFX editorial background. “I had a really good starting point as far as conceptually knowing how to execute motion control. But, it’s not until you get into the moment and start working with the actors that you really understand and digest exactly how to pull off the comedic timing needed for the jokes with the visual effects,” he says. “That is very difficult, and every situation is unique. There was a learning curve, but we picked it up quickly, and I had a great team.”

A system was established that worked for Tiddes and Burgess, as well as Wayans, who had to execute and hit certain marks and look at proper eyelines with precise timing. “He has an earwig, and I am talking to him, letting him know where to look, when to look,” says Tiddes. “At the same time, he’s also hearing dialogue that he’s done the day before in his ear, and he’s reacting to that dialogue while giving his current character’s lines in the moment. So, there’s quite a bit going on, and it all becomes more complex when you add the character and camera moving through the scene. After weeks of practice, in one of the final scenes with Jaspar, we were able to do 16 motion-controlled moments in that scene alone, which was a lot!”

At the very end of the film, the group tested its limits and had all six characters (mom and all the siblings, with the exception of Alan) gathered around a table. That scene was shot over a span of five days. “The camera booms down from a sign and pans across the party, landing on all six characters around a table. Getting that motion and allowing the camera to flow through the party onto all six of them seamlessly interacting around the table was a goal of mine throughout the project,” Tiddes says.

Other shots that proved especially difficult were those of Baby Pete in the hospital room, since the entire scene involved Wayans playing three additional characters who are also present: Alan, Russell and Dawn. And then they amped things up with the head replacement on Baby Pete. “I had to shoot the scene and then, on the same day, select the take I would use in the final cut of the movie, rather than select it in post, where traditionally I could pick another take if that one was not working,” Tiddes adds. “I had to set the pace on the first day and work things out with Marlon ahead of time and plan for the subsequent days — What’s Dawn going to say? How is Russell going to react to what Dawn says? You have to really visualize and previsualize all the ad-libbing that was going on and work it out right there in the moment and discuss it, to have kind of a loose plan, then move forward and be confident that you have enough time between lines to allow room for growth when a joke just comes out of nowhere. You don’t want to stifle that joke.”

While the majority of effects involved motion control, there is a scene that contains a good amount of traditional effects work. In it, Alan and Russell park their car in a field to rest for the night, only to awake the next morning to find they have inadvertently provoked a bull, which sees red, literally — both from Alan’s jacket and his shiny car. Artists built the bull in CG. (They used Maya and Side Effects Houdini to build the 3D elements and rendered them in Autodesk’s Arnold.) Physical effects were then used to lift the actual car to simulate the digital bull slamming into the vehicle. In some shots of the bull crashing into the car doors, a 3D car was used to show the doors being damaged.

In another scene, Russell and Alan catch a serious amount of air when they crash through a barn, desperately trying to escape the bull. “I thought it would be hilarious if, in that moment, cereal exploded and individual pieces flew wildly through the car, while [the cereal-obsessed] Russell scooped up one of the cereal pieces mid-air with his tongue for a quick snack,” says Tiddes. To do this, “I wanted to create a zero-gravity slow-motion moment. We shot the scene using a [Vision Research] high-speed Phantom camera at 480fps. Then in post, we created the cereal as a CG element so I could control how every piece moved in the scene. It’s one of my favorite VFX/comedy moments in the movie.”

As Tiddes points out, Sextuplets was the first project on which he used motion control, which let him create motion with the camera and still have the characters interact, giving the subconscious feeling they were actually in the room with one another. “That’s what made the comedy shine,” he says.


Karen Moltenbrey is a veteran writer/editor covering VFX and post production.

Mavericks VFX provides effects for Hulu’s The Handmaid’s Tale

By Randi Altman

Season 3 episodes of Hulu’s The Handmaid’s Tale are available for streaming, and if you had any illusions that things would lighten up a bit for June (Elizabeth Moss) and the ladies of Gilead, I’m sorry to say you will be disappointed. What’s not disappointing is that, in addition to the amazing acting and storylines, the show’s visual effects once again play a heavy role.

Brendan Taylor

Toronto’s Mavericks VFX has created visual effects for all three seasons of the show, based on Margaret Atwood’s dystopian view of the not-too-distant future. Its work has earned two Emmy nominations.

We recently reached out to Mavericks’ founder and visual effects supervisor, Brendan Taylor, to talk about the new season and his workflow.

How early did you get involved in each season? What sort of input did you have regarding the shots?
The Handmaid’s Tale production is great because they involve us as early as possible. Back in Season 2, when we had to do the Fenway Park scene, for example, we were in talks in August but didn’t shoot until November. For this season, they called us in August for the big fire sequence in Episode 1, and the scene was shot in December.

There’s a lot of nice leadup and planning that goes into it. Our opinions are sought, and we’re able to provide input on the best methodology to achieve a shot. Showrunner Bruce Miller, along with the directors, has a vision of how they’d like to see it, and they’re great at taking in our recommendations. It was very collaborative, and we all approach the process with “what’s best for the show” in mind.

What are some things that the showrunners asked of you in terms of VFX? How did they describe what they wanted?
Each person has a different approach. Bruce speaks in story terms, providing a broader sense of what he’s looking for. He gave us the overarching direction of where he wants to go with the season. Mike Barker, who directed a lot of the big episodes, speaks in more specific terms. He really gets into the details, determining the moods of the scene and communicating how each part should feel.

What types of effects did you provide? Can you give examples?
Some standout effects were the CG smoke in the burning fire sequence and the aftermath of the house being burned down. For the smoke, we had to make it snake around corners in a believable yet magical way. We had a lot of fire going on set, and we couldn’t have any actors or stunt people near it due to the size, so we had to line up multiple shots and composite them together to make everything look realistic. We then had to recreate the whole house in 3D in order to create the aftermath of the fire, with the house completely burned down.

We also went to Washington, and since we obviously couldn’t destroy the Lincoln Memorial, we recreated it all in 3D. That was a lot of back and forth between Bruce, the director and our team. Different parts of Lincoln being chipped away mean different things, and Bruce definitely wanted the head to be off. It was really fun because we got to provide a lot of suggestions. On top of that, we also had to create CGI handmaids and all the details that came with it. We had to get the robes right and did cloth simulation to match what was shot on set. There were about a hundred handmaids on set, but we had to make it look like there were thousands.

Were you able to reuse assets from last season for this one?
We were able to use a handmaid asset from last season, but it needed a lot of upgrades for this season. Because there were closer shots of the handmaids, we had to tweak it and make sure little things like the texture, shaders and different cloth simulations were right for this season.

Were you on set? How did that help?
Yes, I was on set, especially for the fire sequences. We spent a lot of time talking about what’s possible and testing different ways to make it happen. We want it to be as perfect as possible, so I had to make sure it was all done properly from the start. We sent another visual effects supervisor, Leo Bovell, down to Washington to supervise out there as well.

Can you talk about a scene or scenes where being on set played a part in doing something either practical or knowing you could do it in CG?
The fire sequence with the smoke going around the corner took a lot of on-set collaboration. We had tried doing it practically, but the smoke was moving too fast for what we wanted, and there was no way we could physically slow it down.

Having the special effects coordinator, John MacGillivray, there to give us real smoke that we could then match to was invaluable. In most cases on this show, very few audibles were called. They want to go into the show knowing exactly what to expect, so we were prepared and ready.

Can you talk about turnaround time? Typically, series have short ones. How did that affect how you worked?
The average turnaround time was eight weeks. We began discussions in August, before shooting, and had to deliver by January. We worked with Mike to simplify things without diminishing the impact. We just wanted to make sure we had the chance to do it well given the time we had. Mike was very receptive in asking what we needed to make it the best it could be in the timeframe that we had. Take the fire sequence, for example. We could have done full-CGI fire, but that would have taken six months. So we did our research and testing to find the most efficient way to merge practical effects with CGI and presented the best version in a shorter period of time.

What tools were used?
We used Foundry Nuke for compositing. We used Autodesk Maya to build all the 3D houses, including the burned-down house, and to destroy the Lincoln Memorial. Then we used Side Effects Houdini to do all the simulations, which can range from the smoke and fire to crowd and cloth.
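
To make that division of labor a little more concrete, here is a minimal, hypothetical sketch of how a layered comp like the fire sequence might be wired together with Nuke’s Python API. The file paths, node choices and grade value are assumptions for illustration only, not Mavericks’ actual script, and the snippet is meant to run inside Nuke’s own Python interpreter.

```python
# Minimal, hypothetical Nuke script sketch (runs inside Nuke's Python interpreter):
# layering CG renders over a practical plate. Paths and values are illustrative only.
import nuke

plate = nuke.nodes.Read(file="plates/fire_seq/plate.####.exr")          # practical plate
cg_house = nuke.nodes.Read(file="renders/maya/burned_house.####.exr")   # CG aftermath build
cg_smoke = nuke.nodes.Read(file="renders/houdini/smoke.####.exr")       # Houdini smoke sim

# Nudge the CG smoke toward the plate's exposure before merging.
smoke_grade = nuke.nodes.Grade(white=0.9)
smoke_grade.setInput(0, cg_smoke)

merge_house = nuke.nodes.Merge2(operation="over")
merge_house.setInput(0, plate)      # input 0 = B (background)
merge_house.setInput(1, cg_house)   # input 1 = A (foreground element)

merge_smoke = nuke.nodes.Merge2(operation="over")
merge_smoke.setInput(0, merge_house)
merge_smoke.setInput(1, smoke_grade)

out = nuke.nodes.Write(file="comp/fire_seq_comp.####.exr")
out.setInput(0, merge_smoke)
```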

Is there a shot that you are most proud of or that was very challenging?
The shot where we reveal the crowd over June when we’re in Washington was incredibly challenging. The actual Lincoln Memorial, where we shot, is an active public park, so we couldn’t prevent people from visiting the site. The most we could do was hold them off for a few minutes. We ended up having to clean out all of the tourists, which is difficult with a moving camera and moving people. We had to reconstruct about 50% of the plate. Then, in order to get the CG people to be standing there, we had to create a replica of the ground they’re standing on in CG. There were some models we got from the US Geological Survey, but they didn’t completely line up, so we had to make a lot of decisions on the fly.

The cloth simulation in that scene was perfect. We had to match the dampening and the movement of all the robes. Stephen Wagner, who is our effects lead on it, nailed it. It looked perfect, and it was really exciting to see it all come together. It looked seamless, and when you saw it in the show, nobody believed that the foreground handmaids were all CG. We’re very proud.

What other projects are you working on?
We’re working on a movie called Queen & Slim by Melina Matsoukas with Universal. It’s really great. We’re also doing YouTube Premium’s Impulse and Netflix’s series Madam C.J. Walker.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

VFX in Series: The Man in the High Castle, Westworld

By Karen Moltenbrey

The look of television changed forever starting in the 1990s as computer graphics technology began to mature to the point where it could be incorporated within television productions. Indeed, the applications initially were minor, but soon audiences were witnessing very complicated work on the small screen. Today, we see a wide range of visual effects being used in television series, from minor wire and sign removal to all-CG characters and complete CG environments — pretty much anything and everything to augment the action and story, or to turn a soundstage or location into a specific locale that could be miles away or even non-existent.

Here, we examine two prime examples where a wide range of visual effects are used to set the stage and propel the action for a pair of series with very distinctive settings. For instance, The Man in the High Castle uses effects to turn back the clock to the 1960s, but also to create an alternate reality for the period, turning the familiar on its head. In Westworld, effects create a unique Wild West of the future. In both shows, VFX also help turn up the volume on very creative storylines.

The Man in the High Castle

What would life in the US be like if the Axis powers had defeated the Allied forces during World War II? The Amazon TV series The Man in the High Castle explores that alternate history scenario. Created by Frank Spotnitz and produced by Amazon Studios, Scott Free Productions, Headline Pictures, Electric Shepherd Productions and Big Light Productions, the series is scheduled to start its fourth and final season in mid-November. The story is based on the book by Philip K. Dick.

High Castle begins in the early 1960s in a dystopian America. Nazi Germany and the Empire of Japan have divvied up the US as their spoils of war. Germany rules the East, known as the Greater Nazi Reich (with New York City as the regional capital), while Japan controls the West, known as the Japanese Pacific States (whose capital is now San Francisco). The Rocky Mountains serve as the Neutral Zone. The American Resistance works to thwart the occupiers, spurred on by the discovery of materials depicting an alternate reality in which the Allies were victorious.

With this unique storyline, visual effects artists were tasked with turning back the clock on present-day locations to the ’60s and then turning them into German- and Japanese-dominated and inspired environments. Starting with Season 2, the main studio filling this role has been Barnstorm Visual Effects (Los Angeles, Vancouver). Barnstorm operated as one of the vendors for Season 1, but has since ramped up its crew from a dozen to around 70 to take on the additional work. (Barnstorm also works on CBS All Access shows such as The Good Fight and Strange Angel, in addition to Get Shorty, Outlander and the HBO series Room 104 and Silicon Valley.)

According to Barnstorm co-owner and VFX supervisor Lawson Deming, the studio is responsible for all types of effects for the series — ranging from simple cleanup and fixes, such as removing modern objects from shots, to more extensive period work through the addition of period set pieces and set extensions. In addition, there are flashback scenes that call for the artists to digitally de-age the actors, lots of military vehicles to add and science-fiction objects to create. The majority of the overall work entails CG set extensions and world creation, Deming explains. “That involves matte paintings and CG vehicles and buildings.”

The number of visual effects shots per episode also varies greatly depending on the storyline, but episodes average about 60 VFX shots, with each season encompassing 10 episodes. Currently the team is working on Season 4. A core group of eight to 10 CG artists and 12 to 18 compositors works on the show at any given time.

For Season 3, released last October, there are a number of scenes that take place in Reich-occupied New York City. Although it was possible to go to NYC and photograph buildings for reference, the city has changed significantly since the 1960s, “even notwithstanding the fact that this is an alternate history 1960s,” says Deming. “There would have been a lot of work required to remove modern-day elements from shots, particularly at the street level of buildings where modern-day shops are located, even if it was a building from the 1940s, ’50s or ’60s. The whole main floor would have needed to be replaced.”

So, in many cases, the team found it more prudent to create set extensions for NYC from scratch. The artists created sections of Fifth and Sixth avenues, both for the area where American-born Reichmarshall and Resistance investigator John Smith has his apartment and also for a parade sequence that occurs in the middle of Season 3. They also constructed a digital version of Central Park for that sequence, which involved crafting a lot of modular buildings with mix-and-match pieces and stories to make what looked like a wide variety of different period-accurate buildings, with matte paintings for the backgrounds. Elements such as fire escapes and various types of windows (some with curtains open, some closed) helped randomize the structures. Shaders for brick, stucco, wood and so forth further enabled the artists to get a lot of usage from relatively few assets.
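
The mix-and-match approach Deming describes can be illustrated with a short, generic Python sketch. It is not tied to any particular DCC or to Barnstorm’s pipeline, and the piece names and counts are invented, but it shows how a handful of modular elements and shaders can yield a large number of distinct period facades.

```python
# Generic sketch of the mix-and-match idea: a small library of modular pieces and
# shaders can yield many distinct period buildings. Names and counts are invented.
import random

FACADE_SHADERS = ["brick_a", "brick_b", "stucco", "wood"]
WINDOW_PIECES = ["sash_curtains_open", "sash_curtains_closed", "bay_window"]
EXTRAS = ["fire_escape", "awning", "none"]

def build_facade(stories, seed=None):
    """Return a simple description of one randomized period building facade."""
    rng = random.Random(seed)
    return {
        "shader": rng.choice(FACADE_SHADERS),
        "stories": [
            {"window": rng.choice(WINDOW_PIECES), "extra": rng.choice(EXTRAS)}
            for _ in range(stories)
        ],
    }

# A block of twelve varied five-to-eight-story buildings built from the same few assets.
city_block = [build_facade(stories=random.randint(5, 8), seed=i) for i in range(12)]
```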

“That was a large undertaking, particularly because in a lot of those scenes, we also had crowd duplication, crowd systems, tiling and so on to create everything that was there,” Deming explains. “So even though it’s just a city and there’s nothing necessarily fantastical about it, it was almost fully created digitally.”

The styles of NYC and San Francisco are very different in the series narrative. The Nazis are rebuilding NYC in their own image, so there is a lot of influence from brutalist architecture, and cranes often dot the skyline to emphasize all the construction taking place. Meanwhile, San Francisco has more of a 1940s look, as the Japanese are less interested in architectural change than in occupation.

“We weren’t trying to create a science-fiction world because we wanted to be sure that what was there would be believable and sell the realistic feel of the story. So, we didn’t want to go too far in what we created. We wanted it to feel familiar enough, though, that you could believe this was really happening,” says Deming.

One of the standout episodes for visual effects is “Jahr Null” (Season 3, Episode 10), which has been nominated for a 2019 Emmy in the Outstanding Special Visual Effects category. It entails the destruction of the Statue of Liberty, which crashes into the water, requiring just about every tool available at Barnstorm. “Prior to [the upcoming] Season 4, our biggest technical challenge was the Statue of Liberty destruction. There were just so many moving parts, literally and figuratively,” says Deming. “So many things had to occur in the narrative – the Nazis had this sense of showmanship, so they filmed their events and there was this constant stream of propaganda and publicity they had created.”

There are ferries with people on them to watch the event, spotlights are on the statue and an air show with music prior to the destruction as planes with trails of colored smoke fly toward the statue. When the planes fire their missiles at the base of the statue, it’s for show, as there are a number of explosives planted in the base of the statue that go off in a ring formation to force the collapse. Deming explains the logistics challenge: “We wanted the statue’s torch arm to break off and sink in the water, but the statue sits too far back. We had to manufacture a way for the statue to not just tip over, but to sort of slide down the rubble of the base so it would be close enough to the edge and the arm would snap off against the side of the island.”

The destruction simulation, including the explosions, fire, water and so forth, was handled primarily in Side Effects Houdini. Because there was so much sim work, a good deal of the effects work for the entire sequence was done in Houdini as well. Lighting and rendering for the scene were done in Autodesk’s Arnold.

Barnstorm also used Blender, an open-source 3D program for modeling and asset creation, for a small portion of the assets in this sequence. In addition, the artists used Houdini Mantra for the water rendering, while textures and shaders were built in Adobe’s Substance Painter; later the team used Foundry’s Nuke to composite the imagery. “There was a lot of deep compositing involved in that scene because we had to have the lighting interact in three dimensions with things like the smoke simulation,” says Deming. “We had a bunch of simulations stacked on top of one another that created a lot of data to work with.”
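
For readers unfamiliar with deep compositing, here is a hypothetical sketch of how stacked simulation renders might be combined as deep images in Nuke’s Python API. The node graph and file paths are illustrative assumptions rather than Barnstorm’s actual setup; the point is that depth ordering is resolved per sample before the result is flattened for the rest of the comp.

```python
# Hypothetical Nuke deep-comp sketch: stacked simulation renders merged as deep
# images so holdouts and lighting resolve correctly in depth. Paths are illustrative.
import nuke

deep_smoke = nuke.nodes.DeepRead(file="renders/deep/smoke.####.exr")
deep_fire = nuke.nodes.DeepRead(file="renders/deep/fire.####.exr")
deep_debris = nuke.nodes.DeepRead(file="renders/deep/statue_debris.####.exr")

# Combine the deep samples; ordering is resolved per sample, not per layer.
merge_a = nuke.nodes.DeepMerge()
merge_a.setInput(0, deep_smoke)
merge_a.setInput(1, deep_fire)

merge_b = nuke.nodes.DeepMerge()
merge_b.setInput(0, merge_a)
merge_b.setInput(1, deep_debris)

# Flatten to a regular 2D image for the rest of the comp.
flat = nuke.nodes.DeepToImage()
flat.setInput(0, merge_b)
```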

The artists referenced historical photographs as they designed and built the statue with a period-accurate torch. In the wide aerial shots, the team used some stock footage of the statue with New York City in the background, but had to replace pretty much everything in the shot, shortening the city buildings and replacing Liberty Island, the water surrounding it and the vessels in the water. “So yeah, it ended up being a fully digital model throughout the sequence,” says Deming.

Deming cannot discuss the effects work coming up in Season 4, but he does note that Season 3 contained a lot of digital NYC. This included a sequence wherein John Smith was installed as the Reichmarshall near Central Park, a scene that comprised a digital NYC and digital crowd duplication. On the other side of the country, the team built digital versions of all the ships in San Francisco harbor, including CG builds of period Japanese battleships retrofitted with more modern equipment. Water simulations rounded out the scene.

In another sequence, the Japanese performed nuclear testing in Monument Valley, blowing the caps off the mesas. For that, the artists used reference photos to build the landscape and then created a digital simulation of a nuclear blast.

In addition, there were a multitude of banners on the various buildings. Because of the provocative nature of some of the Nazi flags and Fascist propaganda, solid-color banners were often hung on location, with artists adding the offensive imagery in post so as not to upset locals where the series was filmed. Other times, the VFX artists added all-digital signage to the scenes.

As Deming points out, there is only so much that can be created through production design and costumes. Some of the big things have to be done with visual effects. “There are large world events in the show that happen and large settings that we’re not able to re-create any other way. So, the visual effects are integral to the process of creating the aesthetic world of the show,” he adds. “We’re creating things that, while visually impressive, also feel authentic, like a world that could really exist. That’s where the power and the horror of this world come from.”

High Castle is up for a total of three Emmy awards later this month. It was nominated for three Emmys in 2017 for Season 2 and four in 2016 for Season 1, taking home two Emmys that year: one for Outstanding Cinematography for a Single-Camera Series and another for Outstanding Title Design.

Westworld

What happens when high tech meets the Wild West, and wealthy patrons can indulge their fantasies with no limits? That is the premise of the Emmy-winning HBO series Westworld from creators Jonathan Nolan and Lisa Joy, who executive produce along with J.J. Abrams, Athena Wickham, Richard J. Lewis, Ben Stephenson and Denise Thé.

Westworld is set in the fictitious western theme park called Westworld, one of multiple parks where advanced technology enables the use of lifelike android hosts to cater to the whims of guests who are able to pay for such services — all without repercussions, as the hosts are programmed not to retaliate or harm the guests. After each role-play cycle, the host’s memory is erased, and then the cycle begins anew until eventually the host is either decommissioned or used in a different narrative. Staffers are situated out of sight while overseeing park operations and performing repairs on the hosts as necessary. As you can imagine, guests often play out the darkest of desires. So, what happens if some of the hosts retain their memories and begin to develop emotions? What if some escape from the park? What occurs in the other themed parks?

The series debuted in October 2016, with Season 2 running from April through June of 2018. Production for Season 3 began this past spring, and the season is planned for release in 2020.

The first two seasons were shot in various locations in California, as well as in Castle Valley near Moab, Utah. Multiple vendors provide the visual effects, including the team at CoSA VFX (North Hollywood, Vancouver and Atlanta), which has been with the show since the pilot, working closely with Westworld VFX supervisor Jay Worth. CoSA worked with Worth in the past on other series, including Fringe, Undercovers and Person of Interest.

The number of VFX shots per episode varies, depending on the storyline, and that means the number of shots CoSA is responsible for varies widely as well. For instance, the facility did approximately 360 shots for Season 1 and more than 200 for Season 2. The studio is unable to discuss its work at this time on the upcoming Season 3.

The type of effects work CoSA has done on Westworld varies as well, ranging from concept art through the concept department and extension work through the studio’s environments department. “Our CG team is quite large, so we handle every task from modeling and texturing to rigging, animation and effects,” says Laura Barbera, head of 3D at CoSA. “We’ve created some seamless digital doubles for the show that even I forget are CG! We’ve done crowd duplication, for which we did a fun shoot where we dressed up in period costumes. Our 2D department is also sizable, and they do everything from roto, to comp and creative 2D solutions, to difficult greenscreen elements. We even have a graphics department that did some wonderful shots for Season 2, including holograms and custom interfaces.”

On the 3D side, the studio’s pipeline is built mainly around Autodesk’s Maya and Side Effects Houdini, along with Adobe’s Substance, Foundry’s Mari and Pixologic’s ZBrush. Maxon’s Cinema 4D and Interactive Data Visualization’s SpeedTree vegetation modeler are also used. On the 2D side, the artists employ Foundry’s Nuke and the Adobe suite, including After Effects and Photoshop; rendering is done in Chaos Group’s V-Ray and in Redshift.

Of course, there have been some recurring effects each season, such as the host “twitches and glitches.” And while some of the same locations have been revisited, the CoSA artists have had to modify the environments to fit with the changing timeline of the story.

“Every season sees us getting more and more into the characters and their stories, so it’s been important for us to develop along with it. We’ve had to make our worlds more immersive so that we are feeling out the new and changing surroundings just like the characters are,” Barbera explains. “So the set work gets more complex and the realism gets even more heightened, ensuring that our VFX become even more seamless.”

At center stage have been the park locations, which are rooted in existing terrain, as there is a good deal of location shooting for the series. The challenge for CoSA then becomes how to enhance it and make nature seem even more full and impressive, while still subtly hinting toward the changes in the story, says Barbera. For instance, the studio did a significant amount of work to the Skirball Cultural Center locale in LA for the outdoor environment of Delos, which owns and operates the parks. “It’s now sitting atop a tall mesa instead of overlooking the 405!” she notes. The team also added elements to the abandoned Hawthorne Plaza mall to depict the sublevels of the Delos complex. They’re constantly creating and extending the environments in locations inside and out of the park, including the town of Pariah, a particularly lawless area.

“We’ve created beautiful additions to the outdoor sets. I feel sometimes like we’re looking at a John Ford film, where you don’t realize how important the world around you is to the feel of the story,” Barbera says.

CoSA has done significant interior work too, creating spaces that did not exist on set “but that you’d never know weren’t there unless you’d seen the before and afters,” Barbera says. “It’s really very visually impressive — from futuristic set extensions and cars to [Westworld park co-creator] Arnold’s house in Season 2, it’s amazing how much we’ve done to extend the environments to make the world seem even bigger than it is on location.”

One of the larger challenges of the first two seasons came in Season 2: creating the Delos complex, and the final episodes where the studio had to build a world inside of a world – the Sublime – as well as the gateway to get there. “Creating the Sublime was a challenge because we had to reuse and yet completely change existing footage to design a new environment,” explains Barbera. “We had to find out what kind of trees and foliage would live in that environment, and then figure out how to populate it with hosts that were never in the original footage. This was another sequence where we had to get particularly creative about how to put all the elements together to make it believable.”

In the final episode of the second season, the group created environment work on the hills, pinnacles and quarry where the door to the Sublime appears. They also did an extensive rebuild of the Sublime environment, where the hosts emerge after crossing over. “In the first season, we did a great deal of work on the plateau side of Delos, as well as adding mesas into the background of other shots — where [hosts] Dolores and Teddy are — to make the multiple environments feel connected,” adds Barbera.

Aside from the environments, CoSA also did some subtle work on the robots, especially in Season 2, to make them appear as if they were becoming unhinged, hinting at a malfunction. The comp department also added eye twitches, subtle facial tics and even rapid blinks to provide a sense of uneasiness.

While Westworld’s blending of the Old West’s past and the robotic future initially may seem at thematic odds, the balance of that duality is cleverly accomplished in the filming of the series and the way it is performed, Barbera points out. “Jay Worth has a great vision for the integrated feel of the show. He established the looks for everything,” she adds.

The balance of the visual effects is equally important because it enhances the viewer experience. “There are things happening that can be so subtle but have so much impact. Much of our work on the second season was making sure that the world stayed grounded, so that the strangeness that happened with the characters and story line read as realistic,” Barbera explains. “Our job as visual effects artists is to help our professional storytelling partners tell their tales by adding details and elements that are too difficult or fantastic to accomplish live on set in the midst of production. If we’re doing our job right, you shouldn’t feel suddenly taken out of the moment because of a splashy effect. The visuals are there to supplement the story.”


Karen Moltenbrey is a veteran writer/editor covering VFX and post production.

Visual Effects Roundtable

By Randi Altman

With Siggraph 2019 in our not-too-distant rearview mirror, we thought it was a good time to reach out to visual effects experts to talk about trends. Everyone has had a bit of time to digest what they saw: users are thinking about what new tools and technologies might help their current and future workflows, and manufacturers are thinking about how their products will incorporate these new technologies.

We provided these experts with questions relating to realtime raytracing, the use of game engines in visual effects workflows, easier ways to share files and more.

Ben Looram, partner/owner, Chapeau Studios
Chapeau Studios provides production, VFX/animation, design and creative IP development (both for digital content and technology) for all screens.

What film inspired you to work in VFX?
There was Ray Harryhausen’s film Jason and the Argonauts, which I watched on TV when I was seven. The skeleton-fighting scene has been visually burned into my memory ever since. Later in life I watched an artist compositing some tough bluescreen shots on a Quantel Henry in 1997, and I instantly knew that that was going to be in my future.

What trends have you been seeing? USD? Rendering in the cloud? What do you feel is important?
Double the content for half the cost seems to be the industry’s direction lately. This is coming from new in-house/client-direct agencies that sometimes don’t know what they don’t know … so we help guide/teach them where it’s OK to trim budgets or dedicate more funds for creative.

Are game engines affecting how you work, or how you will work in the future?
Yes. On-device rendering and all the subtle shifts in video fidelity turned our attention toward game engine technology a couple of years ago. As soon as game engines start to look less canned and have accurate depth of field and parallax, we’ll start to integrate more of those tools into our workflow.

Right now we have a handful of projects in the forecast where we will be using realtime game engine outputs as backgrounds on set instead of shooting greenscreen.

What about realtime raytracing? How will that affect VFX and the way you work?
We just finished an R&D project with Intel’s new raytracing engine OSPRay for Siggraph. The ability to work on a massive scale with last-minute creative flexibility was my main takeaway. This will allow our team to support our clients’ swift changes in direction with ease on global launches. I see this ingredient as really exciting for our creative tech devs moving into 2020. Proof of concept iterations will become finaled faster, and we’ve seen efficiencies in lighting, render and compositing effort.

How have ML/AI affected your workflows, if at all?
None to date, but we’ve been making suggestions for new tools that will make our compositing and color correction process more efficient.

The Uncanny Valley. Where are we now?
Still uncanny. Even with well-done virtual avatar influencers on Instagram like Lil Miquela, we’re still caught with that eerie feeling of close-to-visually-correct with a “meh” filter.

Apple

Can you name some recent projects?
The Rookie’s Guide to the NFL. This was a fun hybrid project where we mixed CG character design with realtime rendering and voice activation. We created an avatar named Matthew for the NFL’s Amazon Alexa Skills store that answers your football questions in real time.

Microsoft AI: Carlsberg and Snow Leopard. We designed Microsoft’s visual language of AI on multiple campaigns.

Apple Trade In campaign: Our team concepted, shot and created an in-store video wall activation and on-all-device screen saver for Apple’s iPhone Trade In Program.

 

Mac Moore, CEO, Conductor
Conductor is a secure cloud-based platform that enables VFX, VR/AR and animation studios to seamlessly offload rendering and simulation workloads to the public cloud.

What are some of today’s VFX trends? Is cloud playing an even larger role?
Cloud is absolutely a growing trend. I think for many years the inherent complexity and perceived cost of cloud has limited adoption in VFX, but there’s been a marked acceleration in the past 12 months.

Two years ago at Siggraph, I was explaining the value of elastic compute and how it perfectly aligns with the elastic requirements that define our project-based industry; this year there was a much more pragmatic approach to cloud, and many of the people I spoke with are either using the cloud or planning to use it in the near future. Studios have seen referenceable success, both technically and financially, with cloud adoption and are now defining cloud’s role in their pipeline for fear of being left behind. Having a cloud-enabled pipeline is really a game changer; it is leveling the field and allowing artistic talent to be the differentiation, rather than the size of the studio’s wallet (and its ability to purchase a massive render farm).

How are game engines changing how VFX are done? Is this for everyone or just a select few?
Game engines for VFX have definitely attracted interest lately and show a lot of promise in certain verticals like virtual production. There’s more work to be done in terms of out-of-the-box usability, but great strides have been made in the past couple years. I also think various open source initiatives and the inherent collaboration those initiatives foster will help move VFX workflows forward.

Will realtime raytracing play a role in how your tool works?
There’s a need for managing the “last mile,” even in realtime raytracing, which is where Conductor would come in. We’ve been discussing realtime assist scenarios with a number of studios, such as pre-baking light maps and similar applications, where we’d perform some of the heavy lifting before assets are integrated in the realtime environment. There are certainly benefits on both sides, so we’ll likely land in some hybrid best practice using realtime and traditional rendering in the near future.

How do ML/AI and AR/VR play a role in your tool? Are you supporting OpenXR 1.0? What about Pixar’s USD?
Machine learning and artificial intelligence are critical for our next evolutionary phase at Conductor. To date we’ve run over 250 million core-hours on the platform, and for each of those hours, we have a wealth of anonymous metadata about render behavior, such as the software run, duration, type of machine, etc.

Conductor

For our next phase, we’re focused on delivering intelligent rendering akin to ride-share app pricing; the goal is to provide producers with an upfront cost estimate before they submit the job, so they have a fixed price that they can leverage for their bids. There is also a rich set of analytics that we can mine, and those analytics are proving invaluable for studios in the planning phase of a project. We’re working with data science experts now to help us deliver this insight to our broader customer base.
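
As a rough illustration of the kind of upfront quote Moore describes, the sketch below estimates a job’s cost from historical core-hour averages. The class, function and rates are hypothetical and are not Conductor’s actual API or pricing.

```python
# Hypothetical cost-estimate sketch: total core-hours times price, the basis of a
# fixed upfront quote. Class names and rates are invented, not Conductor's API.
from dataclasses import dataclass

@dataclass
class JobSpec:
    frame_count: int
    avg_core_hours_per_frame: float  # e.g. learned from past jobs with similar software/scenes
    price_per_core_hour: float       # depends on the instance type chosen

def estimate_cost(job: JobSpec) -> float:
    """Estimated spend for the whole job, quoted before submission."""
    core_hours = job.frame_count * job.avg_core_hours_per_frame
    return core_hours * job.price_per_core_hour

# Example: a 240-frame shot that historically averages 6 core-hours per frame.
quote = estimate_cost(JobSpec(frame_count=240,
                              avg_core_hours_per_frame=6.0,
                              price_per_core_hour=0.05))
print(quote)  # -> 72.0
```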

The AR/VR front presents a unique challenge for cloud due to the large size and variety of the datasets involved. The rendering of these workloads is less about compute cycles and more about scene assembly, so we’re determining how we can deliver more of a whole product for this market in particular.

OpenXR and USD are certainly helping with industry best practices and compatibility, which build recipes for repeatable success, and Conductor is collaborating on creating those guidelines for success when it comes to cloud computing with those standards.

What is next on the horizon for VFX?
Cloud, open source and realtime technologies are all disrupting VFX norms and are converging in a way that’s driving an overall democratization of the industry. Gone are the days when you need a pile of cash and a big brick-and-mortar building to house all of your tech and talent.

Streaming services and new mediums, along with a sky-high quality bar, have increased the pool of available VFX work, which is attracting new talent. Many of these new entrants are bootstrapping their businesses with cloud, standards-based approaches and geographically dispersed artistic talent.

Conductor recently became a fully virtual company for this reason. I hire based on expertise, not location, and today’s technology allows us to collaborate as if we are in the same building.

 

Aruna Inversin, creative director/VFX supervisor, Digital Domain 
Digital Domain has provided visual effects and technology for hundreds of motion pictures, commercials, video games, music videos and virtual reality experiences. It also livestreams events in 360-degree virtual reality, creates “virtual humans” for use in films and live events, and develops interactive content, among other things.

What film inspired you to work in VFX?
RoboCop in 1987. The combination of practical effects, miniatures and visual effects inspired me to start learning about what some call “The Invisible Art.”

What trends have you been seeing? What do you feel is important?
There has been a large focus on realtime rendering and virtual production, and on using them to help increase the throughput and workflow of visual effects. While realtime rendering does increase throughput, there is now a greater onus on filmmakers to plan their creative ideas and assets before they can be rendered. It is no longer truly post production; we are back in the realm of preproduction, using post tools and realtime tools to help define how a story is created and eventually filmed.

USD and cloud rendering are also important components, which give many different VFX facilities the ability to manage their resources effectively. Another trend that has since come to pass and gained more traction is the availability of ACES and a more unified color space from the Academy. This allows quicker throughput between all facilities.

Are game engines affecting how you work or how you will work in the future?
As my primary focus is in new media and experiential entertainment at Digital Domain, I already use game engines (cinematic engines, realtime engines) for the majority of my deliverables. I also use our traditional visual effects pipeline; we have created a pipeline that flows from our traditional cinematic workflow directly into our realtime workflow, speeding up the development process of asset creation and shot creation.

What about realtime raytracing? How will that affect VFX and the way you work?
The ability to use Nvidia’s RTX and raytracing increases the physicality and realistic approximations of virtual worlds, which is really exciting for the future of cinematic storytelling in realtime narratives. I think we are just seeing the beginnings of how RTX can help VFX.

How have AR/VR and AI/ML affected your workflows, if at all?
Augmented reality has occasionally been a client deliverable for us, but we are not using it heavily in our VFX pipeline. Machine learning, on the other hand, allows us to continually improve our digital humans projects, providing quicker turnaround with higher fidelity than competitors.

The Uncanny Valley. Where are we now?
There is no more uncanny valley. We have the ability to create a digital human with the nuance expected! The only limitation is time and resources.

Can you name some recent projects?
I am currently working on a Time project but I cannot speak too much about it just yet. I am also heavily involved in creating digital humans for realtime projects for a number of game companies that wish to push the boundaries of storytelling in realtime. All these projects have a release date of 2020 or 2021.

 

Matt Allard, strategic alliances lead, M&E, Dell Precision Workstations
Dell Precision workstations feature the latest processors and graphics technology and target those working in the editing studio or at a drafting table, at the office or on location.

What are some of today’s VFX trends?
We’re seeing a number of trends in VFX at the moment — from 4K mastering from even higher-resolution acquisition formats and an increase in HDR content to game engines taking a larger role on set in VFX-heavy productions. Of course, we are also seeing rising expectations for more visual sophistication, complexity and film-level VFX, even in TV post (for example, Game of Thrones).

Will realtime raytracing play a role in how your tools work?
We expect that Dell customers will embrace realtime and hardware-accelerated raytracing as creative, cost-saving and time-saving tools. With the availability of Nvidia Quadro RTX across the Dell Precision portfolio, including on our 7000 series mobile workstations, customers can realize these benefits now to deliver better content wherever a production takes them in the world.

Large-scale studio users will not only benefit from the freedom to create the highest-quality content faster, but they’ll likely see an overall impact on their energy consumption as they assess the move away from CPU rendering, which dominates studio data centers today. Moving toward GPU and hybrid CPU/GPU rendering approaches can offer equal or better rendering output with less energy consumption.

How are game engines changing how VFX are done? Is this for everyone or just a select few?
Game engines have made their way into VFX-intensive productions to deliver in-context views of the VFX during the practical shoot. With increasing quality driven by realtime raytracing, game engines have the potential to drive a master-quality VFX shot on set, helping to minimize the need to “fix it in post.”

What is next on the horizon for VFX?
The industry is at the beginning of a new era as artificial intelligence and machine learning techniques are brought to bear on VFX workflows. Analytical and repetitive tasks are already being targeted by major software applications to accelerate or eliminate cumbersome elements in the workflow. And as with most new technologies, it can result in improved creative output and/or cost savings. It really is an exciting time for VFX workflows!

Ongoing performance improvements to the computing infrastructure will continue to accelerate and democratize the highest-resolution workflows. Now more than ever, small shops and independents can access the computing power, tools and techniques that were previously available only to top-end studios. Additionally, virtualization techniques will allow flexible means to maximize the utilization and proliferation of workstation technology.

 

Carl Flygare, manager, Quadro Marketing, PNY
PNY provides tools for realtime raytracing, augmented reality and virtual reality with the goal of advancing VFX workflow creativity and productivity. PNY is Nvidia’s Quadro channel partner throughout North America, Latin America, Europe and India.

How will realtime raytracing play a role in workflows?
Budgets are getting tighter, timelines are contracting, and audience expectations are increasing. This sounds like a perfect storm, in the bad sense of the term, but with the right tools, it is actually an opportunity.

Realtime raytracing, based on Nvidia’s RTX technology and support from leading ISVs, enables VFX shops to fit into these new realities while delivering brilliant work. Whiteboarding a VFX workflow is a complex task, so let’s break it down by categories. In preproduction, specifically previz, realtime raytracing will let VFX artists present far more realistic and compelling concepts much earlier in the creative process than ever before.

This extends to the next phase, asset creation and character animation, in which models can incorporate essentially lifelike nuance, including fur, cloth, hair or feathers – or something else altogether! Shot layout, blocking, animation, simulation, lighting and, of course, rendering all benefit from additional iterations, nuanced design and the creative possibilities that realtime raytracing can express and realize. Even finishing, particularly compositing, can benefit. Given the applicable scope of realtime raytracing, it will essentially remake VFX workflows and overall film pipelines, and Quadro RTX series products are the go-to tools enabling this revolution.

How are game engines changing how VFX is done? Is this for everyone or just a select few?
Variety had a great article on this last May. ILM substituted realtime rendering and five 4K laser projectors for a greenscreen shot during a sequence from Solo: A Star Wars Story. This allowed the actors to perform in context — in this case, a hyperspace jump — but also allowed cinematographers to capture arresting reflections of the jump effect in the actors’ eyes. Think of it as “practical digital effects” created during shots, not added later in post. The benefits are significant enough that the entire VFX ecosystem, from high-end shops and major studios to independent producers, is using realtime production tools to rethink how movies and TV shows happen while extending their vision to realize previously unrealizable concepts or projects.

Project Sol

How do ML and AR play a role in your tool? And are you supporting OpenXR 1.0? What about Pixar’s USD?
Those are three separate but somewhat interrelated questions! ML (machine learning) and AI (artificial intelligence) can contribute by rapidly denoising raytraced images in far less time than would be required by letting a given raytracing algorithm run to conclusion. Nvidia enables AI denoising in OptiX 5.0 and is working with a broad array of leading ISVs to bring ML/AI-enhanced realtime raytracing techniques into the mainstream.

OpenXR 1.0 was released at Siggraph 2019. Nvidia (among others) is supporting this open, royalty-free and cross-platform standard for VR/AR. Nvidia is now providing VR enhancing technologies, such as variable rate shading, content adaptive shading and foveated rendering (among others), with the launch of Quadro RTX. This provides access to the best of both worlds — open standards and the most advanced GPU platform on which to build actual implementations.

Pixar and Nvidia have collaborated to make Pixar’s USD (Universal Scene Description) and Nvidia’s complementary MDL (Materials Definition Language) software open source in an effort to catalyze the rapid development of cinematic quality realtime raytracing for M&E applications.

Project Sol

What is next on the horizon for VFX?
The insatiable desire on the part of VFX professionals, and audiences, to explore edge-of-the-envelope VFX will increasingly turn to realtime raytracing, based on the actual behavior of light and real materials, increasingly sophisticated shader technology and new mediums like VR and AR to explore new creative possibilities and entertainment experiences.

AI, specifically DNNs (deep neural networks) of various types, will automate many repetitive VFX workflow tasks, allowing creative visionaries and artists to focus on realizing formerly impossible digital storytelling techniques.

One obvious need is increasing the resolution at which VFX shots are rendered. We’re in a 4K world, but many films are finished at 2K, primarily because of VFX. 8K is unleashing the abilities (and changing the economics) of cinematography, so expect increasingly powerful realtime rendering solutions, such as Quadro RTX (and successor products when they come to market), along with amazing advances in AI, to allow the VFX community to innovate in tandem.

 

Chris Healer, CEO/CTO/VFX supervisor, The Molecule 
Founded in 2005, The Molecule creates bespoke VFX imagery for clients worldwide. Over 80 artists, producers, technicians and administrative support staff collaborate at its New York City and Los Angeles studios.

What film or show inspired you to work in VFX?
I have to admit, The Matrix was a big one for me.

Are game engines affecting how you work or how you will work?
Game engines are coming, but the talent pool is difficult and the bridge is hard to cross … a realtime artist doesn’t have the same mindset as a traditional VFX artist. And the last small percentage of completion on a shot can invalidate any gains made by working in a game engine.

What about realtime raytracing?
I am amazed at this technology, and as a result bought stock in Nvidia, but the software has to get there. It’s a long game, for sure!

How have AR/VR and ML/AI affected your workflows?
I think artists are thinking more about how images work and how to generate them. There is still value in a plain-old four-cornered 16:9 rectangle that you can make the most beautiful image inside of.

AR, VR, ML, etc., are not that, to be sure. I think VR got skipped over in all the hype. There’s way more to explore in VR, and that will inform AR tremendously. It is going to take a few more turns to find a real home for all this.

What trends have you been seeing? Cloud workflows? What else?
Everyone is rendering in the cloud. The biggest problem I see now is the lack of a UBL (usage-based licensing) model that is global enough to democratize it. I would love to be able to render while paying by the second or minute at large or small scales. I would love for Houdini or Arnold to be rentable on a Satoshi level … that would be awesome! Unfortunately, each software vendor needs to provide this, which is a lot to organize.
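
As a back-of-envelope illustration of the usage-based licensing idea Healer is describing, the sketch below meters how long a render task holds a license and bills by the second. The package names, rates and functions are invented purely for illustration; no vendor interface is implied.

```python
# Invented illustration of usage-based licensing (UBL): meter how long a render
# task holds a license and bill by the second. Rates and names are made up.
import time

RATE_PER_SECOND = {"houdini": 0.004, "arnold": 0.002}  # hypothetical $/second

def metered_render(package, render_fn, *args, **kwargs):
    """Run a render task and return (result, license cost for the seconds used)."""
    start = time.monotonic()
    result = render_fn(*args, **kwargs)
    seconds = time.monotonic() - start
    return result, seconds * RATE_PER_SECOND[package]

# Usage (render_frame and scene are placeholders for whatever task is being metered):
# result, cost = metered_render("arnold", render_frame, scene, frame=101)
```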

The Uncanny Valley. Where are we now?
We saw in the recent Avengers film that Mark Ruffalo was in it. Or was he? I totally respect the Uncanny Valley, but within the complexity and context of VFX, this is not my battle. Others have to sort this one out, and I commend the artists who are working on it. Deepfake and Deeptake are amazing.

Can you name some recent projects?
We worked on Fosse/Verdon, but more recent stuff, I can’t … sorry. Let’s just say I have a lot of processors running right now.

 

Matt Bach and William George, lab technicians, Puget Systems 
Puget Systems specializes in high-performance custom-built computers — emphasizing each customer’s specific workflow.

Matt Bach

William George

What are some of today’s VFX trends?
Matt Bach: There are so many advances going on right now that it is really hard to identify specific trends. However, one of the most interesting to us is the back and forth between local and cloud rendering.

Cloud rendering has been progressing for quite a few years and is a great way to get a nice burst of rendering performance when you are in a crunch. However, there have been huge improvements in GPU-based rendering with technology like Nvidia OptiX. Because of these, you no longer have to spend a fortune to have a local render farm, and even a relatively small investment in hardware can often move the production bottleneck away from rendering to other parts of the workflow. Of course, this technology should make its way to the cloud at some point, but as long as these types of advances keep happening, the cloud is going to continue playing catch-up.

A few other trends we are keeping our eyes on are the growing use of game engines, motion capture suits and realtime markerless facial tracking in VFX pipelines.

Realtime raytracing is becoming more prevalent in VFX. What impact does realtime raytracing have on system hardware, and what do VFX artists need to be thinking about when optimizing their systems?
William George: Most realtime raytracing requires specialized computer hardware, specifically video cards with dedicated raytracing functionality. Raytracing can be done on the CPU and/or normal video cards as well, which is what render engines have done for years, but not quickly enough for realtime applications. Nvidia is the only game in town at the moment for hardware raytracing on video cards with its RTX series.

Nvidia’s raytracing technology is available on its consumer (GeForce) and professional (Quadro) RTX lines, but which one to use depends on your specific needs. Quadro cards are specifically made for this kind of work, with higher reliability and more VRAM, which allows for the rendering of more complex scenes … but they also cost a lot more. GeForce, on the other hand, is more geared toward consumer markets, but the “bang for your buck” is incredibly high, allowing you to get several times the performance for the same cost.

In between these two is the Titan RTX, which offers very good performance and VRAM for its price, but due to its fan layout, it should only be used as a single card (or at most in pairs, if used in a computer chassis with lots of airflow).

Another thing to consider is that if you plan on using multiple GPUs (which is often the case for rendering), the size of the computer chassis itself has to be fairly large in order to fit all the cards, power supply, and additional cooling needed to keep everything going.

How are game engines changing or impacting VFX workflows?
Bach: Game engines have been used for previsualization for a while, but we are starting to see them being used further and further down the VFX pipeline. In fact, there are already several instances where renders directly captured from game engines, like Unity or Unreal, are being used in the final film or animation.

This is getting into speculation, but I believe that as the quality of what game engines can produce continues to improve, it is going to drastically shake up VFX workflows. The fact that you can make changes in real time, as well as use motion capture and facial tracking, is going to dramatically reduce the amount of time necessary to produce a highly polished final product. Game engines likely won’t completely replace more traditional rendering for quite a while (if ever), but it is going to be significant enough that I would encourage VFX artists to at least familiarize themselves with the popular engines like Unity or Unreal.

What impact do you see ML/AI and AR/VR playing for your customers?
We are seeing a lot of work being done for machine learning and AI, but a lot of it is still on the development side of things. We are starting to get a taste of what is possible with things like Deepfakes, but there is still so much that could be done. I think it is too early to really tell how this will affect VFX in the long term, but it is going to be exciting to see.

AR and VR are cool technologies, but it seems like they have yet to really take off, in part because designing for them takes a different way of thinking than traditional media, but also in part because there isn’t one major platform that’s an overwhelming standard. Hopefully, that is something that gets addressed over time, because once creative folks really get a handle on how to use the unique capabilities of AR/VR to their fullest, I think a lot of neat stories will be told.

What is next on the horizon for VFX?
Bach: The sky is really the limit due to how fast technology and techniques are changing, but I think there are two things in particular that are going to be very interesting to see how they play out.

First, we are hitting a point where ethics (“With great power comes great responsibility” and all that) is a serious concern. With how easy it is to create highly convincing Deepfakes of celebrities or other individuals, even for someone who has never used machine learning before, I believe that there is the potential of backlash from the general public. At the moment, every use of this type of technology has been for entertainment or otherwise rightful purposes, but the potential to use it for harm is too significant to ignore.

Something else I believe we will start to see is “VFX for the masses,” similar to how video editing used to be a purely specialized skill, but now anyone with a camera can create and produce content on social platforms like YouTube. Advances in game engines, facial/body tracking for animated characters and other technologies that remove a number of skills and hardware barriers for relatively simple content are going to mean that more and more people with no formal training will take on simple VFX work. This isn’t going to impact the professional VFX industry by a significant degree, but I think it might spawn a number of interesting techniques or styles that might make their way up to the professional level.

 

Paul Ghezzo, creative director, Technicolor Visual Effects
Technicolor and its family of VFX brands provide visual effects services tailored to each project’s needs.

What film inspired you to work in VFX?
At a pretty young age, I fell in love with Star Wars: Episode IV – A New Hope and learned about the movie magic that was developed to make those incredible visuals come to life.

What trends have you been seeing? USD? Rendering in the cloud? What do you feel is important?
USD will help structure some of what we currently do, and cloud rendering is an incredible source to use when needed. I see both of them maturing and being around for years to come.

As for other trends, I see new methods of photogrammetry and HDRI photography/videography providing datasets for digital environment creation and capturing lighting content; performance capture (smart 2D tracking and manipulation or 3D volumetric capture) for ease of performance manipulation or layout; and even post camera work. New simulation engines are creating incredible and dynamic sims in a fraction of the time, and all of this coming together through video cards streamlining the creation of the end product. In many ways it might reinvent what can be done, but it might take a few cutting-edge shows to embrace and perfect the recipe and show its true value.

Production cameras tethered to digital environments for live set extensions are also coming of age, and with realtime rendering becoming a viable option, I can imagine that it will only be a matter of time for LED walls to become the new greenscreen. Can you imagine a live-action set extension that parallaxes, distorts and is exposed in the same way as its real-life foreground? How about adding explosions, bullet hits or even an armada of spaceships landing in the BG, all on cue. I imagine this will happen in short order. Exciting times.

Are game engines affecting how you work or how you will work in the future?
Game engines have affected how we work. The speed and quality they offer are undoubtedly game-changing, but they don’t always create the desired elements and AOVs that are typically needed in TV/film production.

They are also creating a level of competition that is spurring other render engines to be competitive and provide a similar or better solution. I can imagine that our future will use Unreal/Unity engines for fast turnaround productions like previz and stylized content, as well as for visualizing virtual environments and digital sets as realtime set extensions and a lot more.

Snowfall

What about realtime raytracing? How will that affect VFX and the way you work?
GPU rendering has single-handedly changed how we render and what we render with. A handful of GPUs and a GPU-accelerated render engine can equal or surpass a CPU farm that’s several times larger and much more expensive. In VFX, iterations equal quality, and if multiple iterations can be completed in a fraction of the time — and with production time usually being finite — then GPU-accelerated rendering equates to higher quality in the time given.

There are a lot of hidden variables to that equation (change of direction, level of talent provided, work ethics, hardware/software limitations, etc.), but simply said, if you can hit the notes as fast as they are given, and not have to wait hours for a render farm to churn out a product, then clearly the faster an iteration can be provided the more iterations can be produced, allowing for a higher-quality product in the time given.
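
Ghezzo’s “iterations equal quality” point boils down to simple arithmetic: in a fixed review window, shorter render times translate directly into more creative passes. A tiny worked example follows, with purely illustrative timings that are not benchmarks of any particular farm or GPU.

```python
# Purely illustrative timings: more iterations fit into the same review window
# when each render pass is faster.

def iterations(window_hours, hours_per_render):
    """How many full passes fit before the next review."""
    return int(window_hours // hours_per_render)

window = 10.0  # hours until the next client review
print(iterations(window, hours_per_render=2.5))  # slower CPU-farm pass  -> 4
print(iterations(window, hours_per_render=0.5))  # GPU-accelerated pass  -> 20
```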

How have AR or ML affected your workflows, if at all?
ML and AR haven’t significantly affected our current workflows yet … but I believe they will very soon.

One aspect of AR/VR/MR that we occasionally use in TV/film production is to previz environments, props and vehicles, which allows everyone in production and on set/location to see what the greenscreen will be replaced with, which allows for greater communication and understanding with the directors, DPs, gaffers, stunt teams, SFX and talent. I can imagine that AR/VR/MR will only become more popular as a preproduction tool, allowing productions to front load and approve all aspects of production way before the camera is loaded and the clock is running on cast and crew.

Machine learning is on the cusp of general usage, but it currently seems to be used by productions with lengthy schedules that can benefit from development teams building those toolsets. There are tasks that ML will undoubtedly revolutionize, but it hasn’t affected our workflows yet.

The Uncanny Valley. Where are we now?
Making the impossible possible … That *is* what we do in VFX. Looking at everything from Digital Emily in 2008 to Thanos and Hulk in Avengers: Endgame, we’ve seen what can be done. The Uncanny Valley will likely remain, but only on productions that can’t afford the time or cost of flawless execution.

Can you name some recent projects?
Big Little Lies, Dead to Me, NOS4A2, True Detective, Veep, This Is Us, Snowfall, The Loudest Voice, and Avengers: Endgame.

 

James Knight, virtual production director, AMD 
AMD is a semiconductor company that develops computer processors and related technologies for M&E as well as other markets. Its tools include Ryzen and Threadripper.

What are some of today’s VFX trends?
Well, certainly the exploration for "better, faster, cheaper" keeps going. Faster rendering, so our community can accomplish more iterations in a much shorter amount of time, seems to be something I've heard the whole time I've been in the business.

I'd surely say the virtual production movement (or on-set visualization) is finally gaining steam. I work with almost all the major studios in my role, and all of them have, at a minimum, speeding up post and blending it with production on their radar; many have virtual production departments.

How are game engines changing how VFX are done? Is this for everyone or just a select few?
I would say game engines are where most of the innovation comes from these days. Think about Unreal, for example. Epic created Fortnite, and the revenue from that must be astonishing, and they're not going to sit on their hands. The feature film and TV post/VFX business benefits from gaming consumers' demand for higher-resolution, more photorealistic images in real time. That gets passed on to our community by eliminating guesswork on set when framing partial or completely CG shots.

It should be for everyone, or most, because the realtime and post production time savings are rather large. I think many still have a personal preference for what they're used to, and that's not wrong; if it works for them, that's fine. I just think that even in 2019, use of game engines is still new to some, which is why it's not completely ubiquitous.

How do ML or AR play a role in your tool? Are you supporting OpenXR 1.0? What about Pixar’s USD?
Well, it’s more the reverse. With our new Rome and Threadripper CPUs, we’re powering AR. Yes, we are supporting OpenXR 1.0.

What is next on the horizon for VFX?
Well, the demand for VFX is increasing, not the opposite, so the pursuit of faster photographic reality is perpetually in play. That’s good job security for me at a CPU/GPU company, as we have a way to go to properly bridge the Uncanny Valley completely, for example.

I'd love to say lower-cost CG is part of the future, but then look at the budgets of major features — they're not exactly falling. The dance of Moore's law will more than likely be in effect forever, with momentary huge leaps in compute power — like with Rome and Threadripper — drawing amazement for a period. Then, when someone sees the new, expanded size of their sandpit, they fill it and say, "I now know what I'd do if it was just a bit bigger."

I am invested in and fascinated by the future of VFX, but I think it goes hand in hand with great storytelling. If we don't have great stories, then directing and artistry innovations don't properly get noticed. Look at the top 20 highest-grossing films in history … they're all fantasy. We all want to be taken away from our daily lives and immersed in a beautiful, realistic, VFX-intense fictional world for 90 minutes, so we'll be forever pushing the boundaries of rigging, texturing, shading, simulations, etc. To put my finger on exactly what's next, I'd say I happen to know of a few amazing things that are coming, but sadly, I'm not at liberty to say right now.

 

Michel Suissa, managing director of pro solutions, The Studio-B&H 
The Studio-B&H provides hands-on experience to high-end professionals. Its Technology Center is a fully operational studio with an extensive display of high-end products and state-of-the-art workflows.

What are some of today’s VFX trends?
AI, ML, neural networks (GANs) and realtime environments.

Will realtime raytracing play a role in how the tools you provide work?
It already does with most relevant applications in the market.

How are game engines changing how VFX are done? Is this for everyone or just a select few?
Realtime game engines are becoming more mainstream with every passing year. They are becoming fairly accessible to a number of disciplines across different target markets.

What is next on the horizon for VFX?
New pipeline architectures that will rely on different implementations (traditional and AI/ML/NN) and mixed infrastructures (local and cloud-based).

What trends have you been seeing? USD? Rendering in the cloud? What do you feel is important?
AI, ML and realtime environments. New cloud toolsets. Prominence of neural networks and GANs. Proliferation of convincing “deepfakes” as a proof of concept for the use of generative networks as resources for VFX creation.

What about realtime raytracing? How will that affect VFX workflows?
RTX is changing how most people see their work being done. It is also changing expectations about what it takes to create and render CG images.



The Uncanny Valley. Where are we now?
AI and machine learning will help us get there. Perfection still remains too costly. The amount of time and resources required to create something convincing is prohibitive for the large majority of budgets.

 

Marc Côté, CEO, Real by Fake 
Real by Fake services include preproduction planning, visual effects, post production and tax-incentive financing.

What film or show inspired you to work in VFX?
George Lucas' Star Wars and Indiana Jones (Raiders of the Lost Ark). For Star Wars, I was a kid when I saw it, and it brought me to another universe. It was so inspiring, even though I was too young to understand what the movie was about: the robots in the desert and the spaceships flying around. It looked real; it looked great. I was like, "Wow, this is amazing."

Indiana Jones because it was a great adventure; we really got to visit those worlds. I was super-impressed by the action and by the way it was done. It was mostly practical effects, not really visual effects. Later on, I realized that in Star Wars they were using robots (motion control systems) to shoot the spaceships, and as a kid I was very interested in robots. I said, "Wow, this is great!" So I thought maybe I could combine my skills and what I love with film. That's the way it started.

What trends have you been seeing? What do you feel is important?
The trend right now is using realtime rendering engines. It’s coming on pretty strong. The game companies who build engines like Unity or Unreal are offering a good product.

It's a bit of a hack to use these tools for rendering or in production at this point. They're great for previz, and they're great for generating realtime environments and realtime playback, but having the capacity to change or modify imagery with the director during finishing is still not easy. It's a very promising trend, though.

Rendering in the cloud gives you a very rapid capacity, but I think it’s very expensive. You also have to download and upload 4K images, so you need a very big internet pipe. So I still believe in local rendering — either with CPUs or GPUs. But cloud rendering can be useful for very tight deadlines or for small companies that want to achieve something that’s impossible to do with the infrastructure they have.

My hope is that AI will minimize repetition in visual effects. In keying, for example, we key multiple sections of the body, but we get keying errors in plotting, transparency or the edges, and they are all a bit different, so you have to use multiple keys. AI would be useful for deciding which key to use for each section and doing it automatically and in parallel. AI could also be an amazing tool for making objects disappear just by selecting them.
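
To make that "multiple keys" idea concrete, here is a minimal, hypothetical sketch in Python with OpenCV (not Real by Fake's pipeline): each region of the frame gets its own chroma-key thresholds and the resulting mattes are combined by hand, which is exactly the kind of repetition the speaker hopes AI can automate. The file names, region boxes and thresholds are invented.

# A toy per-region chroma key with OpenCV. File names, region boxes and HSV
# thresholds are invented for illustration; this is not Real by Fake's pipeline.
import cv2
import numpy as np

frame = cv2.imread("greenscreen_plate.png")            # hypothetical plate
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Each region of the body tends to need its own tolerances (hair vs. cloth vs. edges).
regions = {
    "hair": ((0, 400, 0, 1920), (35, 60, 60), (85, 255, 255)),
    "body": ((400, 1080, 0, 1920), (40, 80, 80), (80, 255, 255)),
}

matte = np.zeros(frame.shape[:2], dtype=np.uint8)
for name, ((y0, y1, x0, x1), lo, hi) in regions.items():
    crop = hsv[y0:y1, x0:x1]
    key = cv2.inRange(crop, np.array(lo, np.uint8), np.array(hi, np.uint8))
    matte[y0:y1, x0:x1] = np.maximum(matte[y0:y1, x0:x1], key)

alpha = 255 - matte                                    # foreground alpha
cv2.imwrite("alpha.png", alpha)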

Pixar's USD is interesting. The question is: will the industry take it as a standard? It's like anything else. Kodak's Cineon work gave us DPX, and it became the standard over time; now we are using EXR. We have different software packages, and having an exchange format between them would be great. We have FBX, which is a really good standard right now; it came out of Filmbox, software from a Montreal company (Kaydara) that eventually ended up at Autodesk. So we'll see. The demand, and the companies who build the software, will determine whether it's taken up or not. A big company like Pixar backing it is an advantage in getting other companies to use it.
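
For readers who haven't touched USD, the sketch below shows the kind of interchange it enables, using Pixar's official pxr Python bindings; the asset, file name and prim paths are invented for illustration, and this is a minimal sketch rather than a production setup.

# Minimal USD authoring sketch using Pixar's pxr Python bindings. The file
# name and prim paths are placeholders, not from any real pipeline.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("asset_interchange.usda")

root = UsdGeom.Xform.Define(stage, "/Asset")           # root transform
proxy = UsdGeom.Sphere.Define(stage, "/Asset/Proxy")   # stand-in geometry
proxy.GetRadiusAttr().Set(2.0)

stage.SetDefaultPrim(root.GetPrim())                   # tell other apps where to start
stage.GetRootLayer().Save()                            # write asset_interchange.usda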

The last trend is remote access. The internet is now allowing us to connect cross-country, like from LA to Montreal or Atlanta. We have a sophisticated remote infrastructure, and we do very high-quality remote sessions with artists who work from disparate locations. It’s very secure and very seamless.

What about realtime raytracing? How will that affect VFX and the way you work?
I think we have pretty good raytracing compared to what we had two years ago. Now it's a question of performance and of making it user-friendly in the application, so it's easy to light with natural lighting and you don't have to fake the bounces to get two or three of them. I think it's coming along very well and very quickly.

Sharp Objects

So what about things like AI/ML or AR/VR? Have those things changed anything in the way movies and TV shows are being made?
My feeling right now is that we are getting into an era where I don’t think you’ll have enough visual effects companies to cover the demand.

Every show has visual effects. It can be a complete character, like a Transformer, or a movie from the Marvel Universe where the entire film is CG. Or it can be the huge number of invisible effects that are starting to appear in virtually every show. You need capacity to get all this done.

AI can help minimize repetition so artists can work more on the art and what is being created. This will accelerate our work and give us the capacity to respond to what's being demanded of us. They want a faster, cheaper product, and they want the quality to be as high as a movie's.

The only scenario where we are looking at using AR is when we are filming. For example, you need a good camera track in real time, and then you want to be able to quickly add a CGI environment around the actors so the director can make the right decision about the background or the interactive characters in the scene. The actors will not see it unless they have a monitor or a pair of glasses or something that can show them the result.

So AR is a tool to be able to make faster decisions when you’re on set shooting. This is what we’ve been working on for a long time: bringing post production and preproduction together. To have an engineering department who designs and conceptualizes and creates everything that needs to be done before shooting.

The Uncanny Valley. Where are we now?
In terms of the environment, I think we’re pretty much there. We can create an environment that nobody will know is fake. Respectfully, I think our company Real by Fake is pretty good at doing it.

In terms of characters, I think we’re still not there. I think the game industry is helping a lot to push this. I think we’re on the verge of having characters look as close as possible to live actors, but if you’re in a closeup, it still feels fake. For mid-ground and long shots, it’s fine. You can make sure nobody will know. But I don’t think we’ve crossed the valley just yet.

Can you name some recent projects?
Big Little Lies and Sharp Objects for HBO, Black Summer for Netflix
and Brian Banks, an indie feature.

 

Jeremy Smith, CTO, Jellyfish Pictures
Jellyfish Pictures provides a range of services including VFX for feature film, high-end TV and episodic animated kids’ TV series and visual development for projects spanning multiple genres.

What film or show inspired you to work in VFX?
Forrest Gump really opened my eyes to how VFX could support filmmaking. Seeing Tom Hanks interact with historic footage (e.g., John F. Kennedy) was something that really grabbed my attention, and I remember thinking, “Wow … that is really cool.”

What trends have you been seeing? What do you feel is important?
The use of cloud technology is really empowering “digital transformation” within the animation and VFX industry. The result of this is that there are new opportunities that simply wouldn’t have been possible otherwise.

Jellyfish Pictures uses burst rendering into the cloud, extending our capacity and enabling us to take on more work. In addition to cloud rendering, Jellyfish Pictures was an early adopter of virtual workstations, and, especially after Siggraph this year, it is apparent that this is the future for VFX and animation.

Virtual workstations promote a flexible and scalable way of working, with global reach for talent. This is incredibly important for studios to remain competitive in today's market. As well as the cloud, formats such as USD are making it easier to exchange data with others, which allows us to work in a more collaborative environment.

It’s important for the industry to pay attention to these, and similar, trends, as they will have a massive impact on how productions are carried out going forward.

Are game engines affecting how you work, or how you will work in the future?
Game engines are offering ways to enhance certain parts of the workflow. We see a lot of value in the previz stage of the production. This allows artists to iterate very quickly and helps move shots onto the next stage of production.

What about realtime raytracing? How will that affect VFX and the way you work?
The realtime raytracing from Nvidia (as well as GPU compute in general) offers artists a new way to iterate and help create content. However, with recent advancements in CPU compute, we can see that “traditional” workloads aren’t going to be displaced. The RTX solution is another tool that can be used to assist in the creation of content.

How have AR/VR and ML/AI affected your workflows, if at all?
Machine learning has the power to really assist certain workloads. For example, it’s possible to use machine learning to assist a video editor by cataloging speech in a certain clip. When a director says, “find the spot where the actor says ‘X,’” we can go directly to that point in time on the timeline.
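
As a hypothetical illustration of that kind of speech cataloging (not necessarily the tooling Jellyfish Pictures uses), the sketch below runs the open-source Whisper model over a clip and reports where a phrase is spoken; the file name and search phrase are placeholders.

# Sketch: transcribe a clip with the open-source Whisper model and report where
# a phrase is spoken. The file name and phrase are placeholders.
import whisper

model = whisper.load_model("base")
result = model.transcribe("interview_clip.wav")

phrase = "built america"
for seg in result["segments"]:
    if phrase in seg["text"].lower():
        # Whisper returns segment boundaries in seconds from the head of the clip.
        print(f"'{phrase}' spoken at {seg['start']:.1f}s - {seg['end']:.1f}s")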

In addition, ML can be used to mine existing file servers that contain vast amounts of unstructured data. When mining this "dark data," an organization may find a lot of great additional value in the existing content, which machine learning can uncover.

The Uncanny Valley. Where are we now?
With recent advancements in technology, the Uncanny Valley is closing, but it is still there. We see more digital humans in cinema than ever before (a digital Peter Cushing played a main character in Rogue One: A Star Wars Story), and I fully expect to see more advances as time goes on.

Can you name some recent projects?
Our latest credits include Solo: A Star Wars Story, Captive State, The Innocents, Black Mirror, Dennis & Gnasher: Unleashed! and Floogals Seasons 1 through 3.

 

Andy Brown, creative director, Jogger 
Jogger Studios is a boutique visual effects studio with offices in London, New York and LA. With capabilities in color grading, compositing and animation, Jogger works on a variety of projects, from TV commercials and music videos to projections for live concerts.

What inspired you to work in VFX?
First of all, my sixth form English project was writing treatments for music videos to songs that I really liked. You could do anything you wanted to for this project, and I wanted to create pictures using words. I never actually made any of them, but it planted the seed of working with visual images. Soon after that I went to university in Birmingham in the UK. I studied communications and cultural studies there, and as part of the course, we visited the BBC Studios at Pebble Mill. We visited one of the new edit suites, where they were putting together a story on the inquiry into the Handsworth riots in Birmingham. It struck me how these two people, the journalist and the editor, could shape the story and tell it however they saw fit. That’s what got me interested on a critical level in the editorial process. The practical interest in putting pictures together developed from that experience and all the opportunities that opened up when I started work at MPC after leaving university.

What trends have you been seeing? What do you feel is important?
Remote workstations and cloud rendering are both really interesting. They're giving us more opportunities to work with clients across the world using our resources in LA, SF, Austin, NYC and London. I love the concept of a centralized remote machine room that runs all of your software for all of your offices and allows you to scale rendering in an efficient and seamless manner. The key part of that sentence is seamless. We're doing remote grading and editing across our offices so we can share resources and personnel, giving clients the best experience we can without the carbon footprint.

Are game engines affecting how you work or how you will work in the future?
Game engines are having a tremendous effect on the entire media and entertainment industry, from conception to delivery. Walking around Siggraph last month, seeing what was not only possible but practical and available today using gaming engines, was fascinating. It’s hard to predict industry trends, but the technology felt like it will change everything. The possibilities on set look great, too, so I’m sure it will mean a merging of production and post production in many instances.

What about realtime raytracing? How will that affect VFX and the way you work?
Faster workflows and less time waiting for something to render have got to be good news. It gives you more time to experiment and refine things.

Chico for Wendy’s

How have AR/VR or ML/AI affected your workflows, if at all?
Machine learning is making its way into new software releases, and the tools are useful. Anything that makes it easier to get where you need to go on a shot is welcome. AR, not so much. I viewed the new Mac Pro sitting on my kitchen work surface through my phone the other day, but it didn’t make me want to buy it any more or less. It feels more like something that we can take technology from rather than something that I want to see in my work.

I'd like 3D camera tracking and facial tracking to be realtime on my box, for example. That would be a huge time-saver in set extensions and beauty work. Anything that makes getting a perfect key easier would also be great.

The Uncanny Valley. Where are we now?
It always used to be “Don’t believe anything you read.” Now it’s, “Don’t believe anything you see.” I used to struggle to see the point of an artificial human, except for resurrecting dead actors, but now I realize the ultimate aim is suppression of the human race and the destruction of democracy by multimillionaire despots and their robot underlings.

Can you name some recent projects?
I’ve started prepping for the apocalypse, so it’s hard to remember individual jobs, but there’s been the usual kind of stuff — beauty, set extensions, fast food, Muppets, greenscreen, squirrels, adding logos, removing logos, titles, grading, finishing, versioning, removing rigs, Frankensteining, animating, removing weeds, cleaning runways, making tenders into wings, split screens, roto, grading, polishing cars, removing camera reflections, stabilizing, tracking, adding seatbelts, moving seatbelts, adding photos, removing pictures and building petrol stations. You know, the usual.

 

James David Hattin, founder/creative director, VFX Legion 
Based in Burbank and British Columbia, VFX Legion specializes in providing episodic shows and feature films with an efficient approach to creating high-quality visual effects.

What film or show inspired you to work in VFX?
Star Wars was my ultimate source of inspiration for doing visual effects. Many of the effects in the movies didn't make sense to me as a six-year-old, but I knew this was the next best thing to magic. Visual effects create a wondrous world where everyday people can become superheroes, leaders of a resistance or the ruler of a 5th-century dynasty. Watching X-wings fly over the surface of a space station the size of a small moon was exquisite. I also learned, much later on, that the visual effects we couldn't see were as important as what we could see.

I had already been steeped in visual effects with Star Trek — phasers, spaceships and futuristic transporters. Models held up by wires on a moon base convinced me that we could survive on the moon as it broke free from orbit. All of this fueled my budding imagination. Exploring computer technology and creating alternate realities, CGI and digitally enhanced solutions have been my passion for over a quarter of a century.

What trends have you been seeing? What do you feel is important?
More and more of the work is going to happen inside a cloud structure. That is definitely something being pushed very heavily by the tech giants, like Google and Amazon, that rule our world. There is no Moore's law for computers anymore; the price and power we see out of computers are almost plateauing. The technology is now in the world of optimizing algorithms or rendering with video cards. It's about getting bigger, better effects out more efficiently. Some companies are opting to run their entire operations in the cloud or in co-located server locations. This can theoretically free workers to be in different locations around the world, provided they have solid, low-latency, high-speed internet.

When Legion was founded in 2013, the best way around cloud costs was to have on-premises servers and workstations that supported global connectivity. It was a cost control issue that has benefitted the company to this day, enabling us to bring a global collective of artists and clients into our fold in a controlled and secure way. Legion works in what we consider a “private cloud,” eschewing the costs of egress from large providers and working directly with on-premises solutions.

Are game engines affecting how you work or how you will work in the future?
Game engines are perfect for previsualization in large, involved scenes. We create a lot of environments and invisible effects. For the larger bluescreen shoots, we can build out our sets in Unreal Engine, previsualizing how the scene will play for the director or DP. This helps get everyone on the same page when it comes to how a particular sequence is going to be filmed. It's a technique that also helps the CG team focus on adding detail to the areas of a set that we know will be seen. When the schedule is tight, the assets are camera-ready by the time the cut comes to us.

What about realtime raytracing via Nvidia’s RTX? How will that affect VFX and the way you work?
The type of visual effects we create for feature films and television shows involves a lot of layers and technology that provides efficient, comprehensive compositing solutions. Many of the GPU render engines, like OctaneRender, Redshift and V-Ray RT, are limited when it comes to what they can create with layers. They often have issues with getting what is called "back to beauty," in which the sum of the render passes equals the final render. However, the workarounds we've developed enable us to achieve the quality we need. Realtime raytracing is a fantastic technology that will someday be an ideal fit for our needs. We're keeping an eye out for it as it evolves and becomes more robust.
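
To spell out what "back to beauty" means, here is a tiny numpy sketch of the check, assuming a made-up set of light AOVs; a real pipeline would read the passes from the renderer's EXR output, and the exact AOV list depends on the renderer.

# "Back to beauty": the light AOVs, plussed together in comp, should match the
# beauty render. Pass names and values are invented for illustration.
import numpy as np

h, w = 4, 4                                            # tiny frame for illustration
diffuse  = np.full((h, w, 3), 0.30, dtype=np.float32)
specular = np.full((h, w, 3), 0.10, dtype=np.float32)
sss      = np.full((h, w, 3), 0.05, dtype=np.float32)
emission = np.full((h, w, 3), 0.02, dtype=np.float32)
beauty   = np.full((h, w, 3), 0.47, dtype=np.float32)  # the renderer's beauty pass

reconstructed = diffuse + specular + sss + emission
print(np.allclose(beauty, reconstructed, atol=1e-4))   # True when the passes sum back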

How have AR/VR or ML/AI affected your workflows, if at all?
AR has been in the wings of the industry for a while, but there's nothing specific we would take advantage of. Machine learning has been introduced a number of times to solve various problems, and it's a pretty exciting time for these things. One of our partner contacts, who left to join Facebook, was keen to try a number of machine learning tricks on a couple of projects that might have come through, but we didn't get to put them to the test. There's an enormous amount of power to be had in machine learning, and I think we are going to see big changes in that field over the next five years, and in how it affects all of post production.

The Uncanny Valley. Where are we now?
Climbing up the other side, not quite at the summit for daily use. As long as the character isn’t a full normal human, it’s almost indistinguishable from reality.

Can you name some recent projects?
We create visual effects on an ongoing basis for a variety of television shows that include How to Get Away with Murder, DC’s Legends of Tomorrow, Madam Secretary and The Food That Built America. Our team is also called upon to craft VFX for a mix of movies, from the groundbreaking feature film Hardcore Henry to recently released films such as Ma, SuperFly and After.

MAIN IMAGE: Good Morning Football via Chapeau Studios.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Behind the Title: Bindery editor Matt Dunne

Name: Matt Dunne

Company: Bindery

Can you describe your company?
Bindery is an indie film and content studio based in NYC. We model ourselves after independent film studios, tackling every phase of a project from concept all the way through finishing. Our work varies from branded web content and national broadcast commercials to shorts and feature films.

What’s your job title?
Senior Editor

What does that entail?
I'm part of all things post at Bindery. I get involved early on in projects to help ensure we have a workflow set up, and if I'm the editor I'll often get a chance to work with the director on conceptualizing the piece. When I get to go on set, I'm able to become the hub of the production side. I'll work with the director and DP to make sure the image is what they want, and I'll start assembling the edit as they are shooting.

Most of my time is spent in an edit suite with a director and clients, working through their concept and really bringing their story to life. An advantage of working with Bindery is that I'm able to sit and work with directors before they shoot and sometimes even before a concept is locked. There's a level of trust that develops, and we get to work through ideas and plan for anything that may come up later during the post process. Even though post is the last stage of a film project, it needs to be involved at the beginning. I'm a big believer in that. From the early stages to the very end, I get to touch a lot of projects.

What would surprise people the most about what falls under that title?
I'm a huge tech nerd and gear head, so, with the help of two other colleagues, I help maintain the post infrastructure at Bindery. When we expanded the office, we had to rewire everything, and I recently helped put a new server together. That's something I never imagined myself doing.

Editors also become a sounding board for creatives. I think it’s partially because we are good listeners and partially because we have couches in our suites. People like to come in and riff an idea or work through something out loud, even if you aren’t the editor on that project. I think half of being a good editor is just being able to listen.

What’s your favorite part of the job?
Working in an open environment that nurtures ideas and creativity. I love working with people that want to push their product and encourage one another to do the same. It’s really special getting to play a role in it all.

What’s your least favorite?
I think anything that takes me away from the editing process. Any sort of hardware or software issue will completely kill your momentum and at times it can be difficult to get that back.

What’s your most productive time of the day?
Early in the morning. I'm usually walking around the post department checking the stations, double-checking processes that ran overnight or maintaining the server. On the opposite end, I've always felt very productive late at night. If I'm not actively editing in the office, then I'm usually rolling the footage I screened during the day back in my head, trying to piece it together away from the computer.

If you didn't have this job, what would you be doing instead?
I would be running a dog sanctuary for senior and abused dogs.

How early on did you know this would be your path?
I first fell in love with post production when I was a kid. It was when Jurassic Park was in theaters, and Fox would run these amazing behind-the-scenes specials. There was this incredible in-depth coverage of how things in the film industry are done. I was too young to see the movie, but I remember just devouring the content. That's when I knew I wanted to be part of that scene.

Neurotica

Can you name some recent projects you have worked on?
I recently got to help finish a pilot for a series we released called Neurotica. We were lucky enough to premiere it at Tribeca this past season, and getting to see that on the big screen with the people who helped make it was a real thrill for me.

I also just finished cutting a JBL spot where we built soundscapes for Yankees player Aaron Judge and captured him as he listened and was taken on a journey through his career, past and present. The original concept was a bit different than the final deliverable, but because of the way it was shot we were able to re-conceptualize the piece in the edit. There was a lot of room to play and experiment with that one.

Do you put on a different hat when cutting for a specific genre? Can you elaborate?
Absolutely. With every job there comes a different approach and tools you need to use. If I’m cutting something more narrative focused I’ll make sure I have the script notes up, break my project out by scene and spend a lot of time auditioning different takes to make a scene work. Docu-style is a different approach entirely.

I’ll spend more time prepping that by location or subject and then break that down further. There’s even more back and forth when cutting doc. On a scripted project you have an idea of what the story flow is, but when you’re tasked with finding the edit you’re very much jumping around the story as it evolves. Whether it’s comedy, music or any type of genre, I’m always getting a chance to flex a different editing muscle.

1800 Tequila

What is the project you are most proud of?
There are a few, but one of my favorite collaborative experiences was when we worked with Billboard and 1800 Tequila to create a branded documentary series following Christian Scott aTunde Adjuah. It was five episodes shot in New York, Philadelphia and New Orleans, and the edit was happening simultaneously with production.

As the crew traveled and mapped out their days, I was able to screen footage, assemble and collaborate with the director on ideas that we thought could really enhance the piece. I was on the phone with him when they went back to NOLA for the last shoot, and we were writing story beats that we needed to gather to make Episodes 1 and 2 work more seamlessly now that the story had evolved. Being able to rework sections of earlier episodes before we wrapped production was an amazing opportunity.

What do you use to edit?
Software-wise I’m all in on the Adobe Creative Suite. I’ve been meaning to learn Resolve a bit more since I’ve been spending more and more time with it as a powerful tool in our workflow.

What is your favorite plugin?
Neat Video is a denoiser that’s really incredible. I’ve been able to work with low-light footage that would otherwise be unusable.

Are you often asked to do more than edit? If so, what else are you asked to do?
Since Bindery is involved in every stage of the process, I get this great opportunity to work with audio designers and colorists to see the project all the way through. I love learning by watching other people work.

Name three pieces of technology you can’t live without.
My phone. I think that’s a given at this point. A great pair of headphones, and a really comfortable chair that lets me recline as far back as possible for those really demanding edits.

What do you do to de-stress from it all?
I met my wife back in college and we've been best friends ever since, so spending any amount of time with her helps wash away the stress. We also just bought our first house in February, so there are plenty of projects for me to focus all of my stress into.

True Detective’s quiet, tense Emmy-nominated sound

By Jennifer Walden

When there’s nothing around, there’s no place to hide. That’s why quiet soundtracks can be the most challenging to create. Every flaw in the dialogue — every hiss, every off-mic head turn, every cloth rustle against the body mic — stands out. Every incidental ambient sound — bugs, birds, cars, airplanes — stands out. Even the noise-reduction processing to remove those flaws can stand out, particularly when there’s a minimalist approach to sound effects and score.

That is the reason why the sound editing and mixing on Season 3 of HBO’s True Detective has been recognized with Emmy nominations. The sound team put together a quiet, tense soundtrack that perfectly matched the tone of the show.

L to R: Micah Loken, Tateum Kohut, Mandell Winter, David Esparza and Greg Orloff.

We reached out to the team at Sony Pictures Post Production Services to talk about the work — supervising sound editor Mandell Winter; sound designer David Esparza, MPSE; dialogue editor Micah Loken; as well as re-recording mixers Tateum Kohut and Greg Orloff (who mixed the show in 5.1 surround on an Avid S6 console at Deluxe Hollywood Stage 5).

Of all the episodes in Season 3 of True Detective, why did you choose “The Great War and Modern Memory” for award consideration for sound editing?
Mandell Winter: This episode had a little bit of everything. We felt it represented the season pretty well.

David Esparza: It also sets the overall tone of the season.

Why this episode for sound mixing?
Tateum Kohut: The episode had very creative transitions, and it set up the emotion of our main characters. It establishes the three timelines that the season takes place in. Even though it didn’t have the most sound or the most dynamic sound, we chose it because, overall, we were pleased with the soundtrack, as was HBO. We were all pleased with the outcome.

Greg Orloff: We looked at Episode 5 too, “If You Have Ghosts,” which had a great seven-minute set piece with great action and cool transitions. But overall, Episode 1 was more interesting sonically. As an episode, it had great transitions and tension all throughout, right from the beginning.

Let’s talk about the amazing dialogue on this show. How did you get it so clean while still retaining all the quality and character?
Winter: Geoffrey Patterson was our production sound mixer, and he did a great job capturing the tracks. We didn’t do a ton of ADR because our dialogue editor, Micah Loken, was able to do quite a bit with the dialogue edit.

Micah Loken: Both the recordings and acting were great. That’s one of the most crucial steps to a good dialogue edit. The lead actors — Mahershala Ali and Stephen Dorff — had beautiful and engaging performances and excellent resonance to their voices. Even at a low-level whisper, the character and quality of the voice was always there; it was never too thin. By using the boom, the lav, or a special combination of both, I was able to dig out the timbre while minimizing noise in the recordings.

What helped me most was that Mandell and I had the opportunity to watch the first two episodes before we started really digging in, which provided a macro view of the content. Immediately, some things stood out, like the fact that it was wall-to-wall dialogue in each episode, and that became our focus. I noticed that on set it was hot; the exterior shots were full of bugs and the actors would get dry mouths, which caused them to smack their lips — something that is commonly over-accentuated in recordings. It was important to minimize anything that wasn't dialogue while being mindful to maintain the quality and level of the voice. Plus, the story was so well written that it became a personal endeavor to bring my A game to the team. After completion, I would hand off the episode to Mandell and our dialogue mixer, Tateum.

Kohut: I agree. Geoffrey Patterson did an amazing job. I know he was faced with some challenges and environmental issues there in northwest Arkansas, especially on the exteriors, but his tracks were superbly recorded.

Mandell and Micah did an awesome job with the prep, so it made my job very pleasurable. Like Micah said, the deep, booming voices of our two main actors were just amazing. We didn't want to go too far with noise reduction in order to preserve that quality, and it did stand out. I did do more de-essing and de-ticking, using iZotope RX 7 and FabFilter Pro-Q 2 to knock down some syllables and consonants that were too sharp, just because we had so much close-up, full-frame face dialogue that we didn't want anything distracting from the story and the great performances they were giving. But very little noise reduction was needed, due to the well-recorded tracks. So my job was an absolute pleasure on the dialogue side.

Their editing work gave me more time to focus on the creative mixing, like weaving in the music just the way that series creator Nic Pizzolatto and composer T Bone Burnett wanted, and working with Greg Orloff on all these cool transitions.

We’re all very happy with the dialogue on the show and very proud of our work on it.

Loken: One thing that I wanted to remain cognizant of throughout the dialogue edit was making sure that Tateum had a smooth transition from line to line on each of the tracks in Pro Tools. Some lines might have had more intrinsic bug sounds or unwanted ambience but, in general, during the moments of pause, I knew the background ambience of the show was probably going to be fairly mild and sparse.

Mandell, how does your approach to the dialogue on True Detective compare to Deadwood: The Movie, which also earned Emmy nominations this year for sound editing and mixing?
Winter: Amazingly enough, we had the same production sound mixer on both — Geoffrey Patterson. That helps a lot.

We had more time on True Detective than on Deadwood. Deadwood was just “go.” We did the whole film in about five or six weeks. For True Detective, we had 10 days of prep time before we hit a five-day mix. We also had less material to get through on an episode of True Detective within that time frame.

Going back to the mix on the dialogue, how did you get the whispering to sound so clear?
Kohut: It all boils down to how well the dialogue was recorded. We were able to preserve that whispering and get a great balance around it. We didn’t have to force anything through. So, it was well-recorded, well-prepped and it just fit right in.

Let’s talk about the space around the dialogue. What was your approach to world building for “The Great War And Modern Memory?” You’re dealing with three different timelines from three different eras: 1980, 1990, and 2015. What went into the sound of each timeline?
Orloff: It was tough in a way because the different timelines overlapped sometimes. We’d have a transition happening, but with the same dialogue. So the challenge became how to change the environments on each of those cuts. One thing that we did was to make the show as sparse as possible, particularly after the discovery of the body of the young boy Will Purcell (Phoenix Elkin). After that, everything in the town becomes quiet. We tried to take out as many birds and bugs as possible, as though the town had died along with the boy. From that point on, anytime we were in that town in the original timeline, it was dead-quiet. As we went on later, we were able to play different sounds for that location, as though the town is recovering.

The use of sound on True Detective is very restrained. Were the decisions on where to have sound and how much sound happening during editorial? Or were those decisions mostly made on the dub stage when all the elements were together? What were some factors that helped you determine what should play?
Esparza: Editorially, the material was definitely prepared with a minimalistic aesthetic in mind. I'm sure it got pared down even more once it got to the mix stage. The aesthetic of the True Detective series in general tends to be fairly minimalistic and atmospheric, and we continued with that in this third season.

Orloff: That’s purposeful, from the filmmakers on down. It’s all about creating tension. Sometimes the silence helps more to create tension than having a sound would. Between music and sound effects, this show is all about tension. From the very beginning, from the first frame, it starts and it never really lets up. That was our mission all along, to keep that tension. I hope that we achieved that.

That first episode — “The Great War And Modern Memory” — was intense even the first time we played it back, and I’ve seen it numerous times since, and it still elicits the same feeling. That’s the mark of great filmmaking and storytelling and hopefully we helped to support that. The tension starts there and stays throughout the season.

What was the most challenging scene for sound editorial in “The Great War And Modern Memory?” Why?
Winter: I would say it was the opening sequence with the kids riding the bikes.

Esparza: It was a challenge to get the bike spokes ticking and deciding what was going to play and what wasn’t going to play and how it was going to be presented. That scene went through a lot of work on the mix stage, but editorially, that scene took the most time to get right.

What was the most challenging scene to mix in that episode? Why?
Orloff: For the effects side of the mix, the most challenging part was the opening scene. We worked on that longer than any other scene in that episode. That first scene is really setting the tone for the whole season. It was about getting that right.

We had brilliant sound design for the bike spokes ticking that transitions into a watch ticking that transitions into a clock ticking. Even though there’s dialogue that breaks it up, you’re continuing with different transitions of the ticking. We worked on that both editorially and on the mix stage for a long time. And it’s a scene I’m proud of.

Kohut: That first scene sets up the whole season — the flashback, the memories. It was important to the filmmakers that we got that right. It turned out great, and I think it really sets up the rest of the season and the intensity that our actors have.

What are you most proud of in terms of sound this season on True Detective?
Winter: I’m most proud of the team. The entire team elevated each other and brought their A-game all the way around. It all came together this season.

Orloff: I agree. I think this season was something we could all be proud of. I can’t be complimentary enough about the work of Mandell, David and their whole crew. Everyone on the crew was fantastic and we had a great time. It couldn’t have been a better experience.

Esparza: I agree. And I’m very thankful to HBO for giving us the time to do it right and spend the time, like Mandell said. It really was an intense emotional project, and I think that extra time really paid off. We’re all very happy.

Winter: One thing we haven’t talked about was T Bone and his music. It really brought a whole other level to this show. It brought a haunting mood, and he always brings such unique tracks to the stage. When Tateum would mix them in, the whole scene would take on a different mood. The music at times danced that thin line, where you weren’t sure if it was sound design or music. It was very cool.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.