Behind the Title: BlueBolt VFX supervisor Richard Frazer

“If we have done our job well, the viewers should never notice the work and instead just be enjoying the storytelling.”

Name: Richard Frazer

Company: London’s BlueBolt

Can you describe your company?
For the last four years, I’ve worked at BlueBolt, a Soho-based visual effects company in London. We work on high-end TV and feature films, and our main area of specialty is creating CG environments and populating them. BlueBolt is a privately-owned company run by two women, which is pretty rare. They believe in nurturing good talent and training them up to help break through the glass ceiling, if an artist is up for it.

What’s your job title?
I joined as a lead compositor with a view to becoming a 2D supervisor, and now I am one of their main core VFX supervisors.

What does that entail?
It means I oversee all of the visual effects work for a specific TV show or movie — from script stage to final delivery. That includes working with the director and DP in pre-production to determine what they would like to depict on the screen. We then work out what is possible to shoot practically, or if we need to use visual effects to help out.

I’ll then often be on the set during the shoot to make sure we correctly capture everything we need for post work. I’ll work with the VFX producer to calculate the costs and time scales of the VFX work. Finally, I will creatively lead our team of talented artists to create those rendered images, and make sure it all fits in with the show in a visually seamless way.

What would surprise people the most about what falls under that title?
The staggering amount of time and effort put in by many talented people to create something that an audience should be totally unaware even exists. If we have done our job well, the viewers should never notice the work and instead just be enjoying the storytelling.

How long have you been working in VFX?
For around a decade. I started out as a rotoscope artist in 2008, then became a compositor. I did my first supervisor job back in 2012.

How has the VFX industry changed in the time you’ve been working?
A big shift has been just how much more visual effects work there is on TV shows and how much the standard of that work has improved. It used to be that TV work was looked down on as the poor cousin of feature film work. But shows like Game of Thrones have set audience expectations so much higher now. I worked on nothing but movies for the first part of my career, but the majority of my work now is on TV shows.

Did a particular film inspire you along this path in entertainment?
I grew up on ‘80s sci-fi and horror, so movies like Aliens and The Thing were definitely inspirations. This was back when effects were almost all done practically, so I wanted to get into model-making or prosthetics. The first time I remember being blown away by digital VFX work was seeing Terminator 2 at the cinema. I’ve ended up doing the type of work I dreamed of as a kid, just in a digital form.

Did you go to film school?
No, I actually studied graphic design. I worked for some time doing animation, video editing and motion graphics. I taught myself compositing for commercials using After Effects. But I always had a love of cinema and decided to try and specialize in this area. Almost all of what I’ve learned has been on the job — I think there’s no better training than just throwing yourself at the work, absorbing everything you can from the people around you and just being passionate about what you do.

What’s your favorite part of the job?
Each project has its own unique set of challenges, and every day involves creative problem solving. I love the journey of taking something that exists only in someone’s imagination and creating those images in a way that looks entirely real.

I also love the mix of being at the offices one day creating things that only exist in a virtual world, while the next day I might be on a film set shooting things in the real world. I get to travel to all kinds of random places and get paid to do so!

What’s your least favorite?
There are so many moving parts involved in creating a TV show or movie — so many departments all working together to try and complete the task at hand, as well as factors that are utterly out of your control. You have to have a perfectly clear idea of what needs to be done, but also be able to completely scrap that and come up with another idea at a moment’s notice.

If you didn’t have this job, what would you be doing instead?
Something where I can be creative and make things that physically exist. I’m always in awe of people who build and craft things with their hands.

Can you name some recent projects you have worked on?
Recent work has included Peaky Blinders, The Last Kingdom and Jamestown, as well as a movie called The Rhythm Section.

What is the project that you are most proud of?
I worked on a movie called Under The Skin a few years ago, which was a very technically and creatively challenging project. It was a very interesting piece of sci-fi that people seem to either love or hate, and everyone I ask seems to have a slightly different interpretation of what it was actually about.

What tools do you use day to day?
Almost exclusively Foundry Nuke. I use it for everything from drawing up concepts to reviewing artists’ work. If there’s functionality that I need from it that doesn’t exist, I’ll just write Python code to add features.
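For anyone curious what that kind of extension looks like, Nuke exposes a Python API, and a short script dropped into a user’s menu.py file is enough to register a custom command. The snippet below is a hypothetical illustration rather than one of BlueBolt’s tools; it adds a toolbar command that stamps each selected Read node’s frame range into its label.

# menu.py -- Nuke runs this at startup from the user's .nuke directory.
import nuke

def label_read_frame_ranges():
    """Append each selected Read node's frame range to its label."""
    for node in nuke.selectedNodes('Read'):
        first = int(node['first'].value())
        last = int(node['last'].value())
        node['label'].setValue('frames {0}-{1}'.format(first, last))

# Register the command in a custom toolbar menu, with a keyboard shortcut.
toolbar = nuke.menu('Nodes')
custom_menu = toolbar.addMenu('Pipeline')
custom_menu.addCommand('Label Read frame ranges', label_read_frame_ranges, 'ctrl+shift+l')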

Where do you find inspiration now?
In the real world, if you just spend the time observing it in the right way. I often find myself distracted by how things look in certain light. And Instagram — it’s the perfect social media for me, as it’s just beautiful images, artwork and photography.

What do you do to de-stress from it all?
The job can be quite mentally and creatively draining and you spend a lot of time in dark rooms staring at screens, so I try to do the opposite of that. Anything that involves being outdoors or doing something physical — I find cycling and boxing are good ways to unwind.

I recently went on a paragliding trip in the French Alps, which was great, but I found myself looking at all these beautiful views of sunsets over mountains and just analyzing how the sunlight was interacting with the fog and the atmospheric haze. Apparently, I can never entirely turn off that part of my brain.

ILM’s virtual production platform used on The Mandalorian

To bring The Mandalorian to life, Industrial Light & Magic (ILM) and Epic Games — along with production technology partners Fuse, Lux Machina, Profile Studios, Nvidia and ARRI — have introduced a new way to shoot VFX-heavy projects in collaboration with Jon Favreau’s Golem Creations.

The new virtual production workflow allows filmmakers to capture a significant amount of complex visual effects shots in-camera using realtime game engine technology (Epic’s Unreal Engine) and LED screens to represent dynamic photoreal digital landscapes and sets with creative flexibility previously unimaginable.

Also part of the news, ILM has made its new end-to-end virtual production solution, ILM StageCraft, available for use by filmmakers, agencies and showrunners worldwide.

Over 50 percent of The Mandalorian Season 1 was filmed using this new methodology, eliminating the need for location shoots entirely. Instead, actors in The Mandalorian performed in an immersive and massive 20-foot-high by 270-degree semicircular LED video wall and ceiling with a 75-foot-diameter performance space, where the practical set pieces were combined with digital extensions on the screens.

Digital 3D environments created by ILM played back interactively on the LED walls, edited in realtime during the shoot, which allowed for pixel-accurate tracking and perspective-correct 3D imagery rendered at high resolution via systems powered by Nvidia GPUs.

L-R: Jon Favreau and Richard Bluff

The environments were lit and rendered from the perspective of the camera to provide parallax in real time, as if the camera were really capturing the physical environment with accurate interactive light on the actors and practical sets, giving showrunner Favreau; executive producer/director Dave Filoni; visual effects supervisor Richard Bluff; cinematographers Greig Fraser and Barry “Baz” Idoine and the episodic directors the ability to make concrete creative choices for visual effects-driven work during photography and achieve realtime in-camera composites on set.
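Rendering the wall “from the perspective of the camera” boils down to computing an off-axis (asymmetric) view frustum for the patch of LED screen the lens sees, refreshed every frame from the tracked camera position. The sketch below illustrates that geometry for a single flat panel using the standard generalized perspective projection construction in NumPy; it is an illustration of the idea only, not ILM StageCraft code, and the panel corners and camera position are made-up values.

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def offaxis_projection(pa, pb, pc, pe, near, far):
    """Combined projection/view matrix for a planar screen seen from point pe.

    pa, pb, pc: screen corners (lower-left, lower-right, upper-left) in world space.
    pe: tracked camera/eye position in world space.
    """
    vr = normalize(pb - pa)             # screen right axis
    vu = normalize(pc - pa)             # screen up axis
    vn = normalize(np.cross(vr, vu))    # screen normal, pointing toward the viewer

    va, vb, vc = pa - pe, pb - pe, pc - pe
    d = -np.dot(va, vn)                 # distance from the camera to the screen plane

    # Frustum extents on the near plane, scaled by near / d.
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d

    P = np.array([                      # standard asymmetric frustum matrix
        [2 * near / (r - l), 0, (r + l) / (r - l), 0],
        [0, 2 * near / (t - b), (t + b) / (t - b), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ])
    M = np.eye(4)                       # rotate world into the screen-aligned basis
    M[0, :3], M[1, :3], M[2, :3] = vr, vu, vn
    T = np.eye(4)                       # then translate by the camera position
    T[:3, 3] = -pe
    return P @ M @ T

# Example: a 6m x 3m wall section with the camera 2m back and slightly off-center.
pa = np.array([-3.0, 0.0, 0.0])         # lower-left corner
pb = np.array([3.0, 0.0, 0.0])          # lower-right corner
pc = np.array([-3.0, 3.0, 0.0])         # upper-left corner
pe = np.array([0.5, 1.5, 2.0])          # tracked camera position
print(offaxis_projection(pa, pb, pc, pe, near=0.1, far=100.0))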

The technology and workflow required to make in-camera compositing and effects practical for on-set use combined the ingenuity of all the partners involved.

“We’ve been experimenting with these technologies on my past projects and were finally able to bring a group together with different perspectives to synergize film and gaming advances and test the limits of realtime, in-camera rendering,” explains Favreau, adding, “We are proud of what was achieved and feel that the system we built was the most efficient way to bring The Mandalorian to life.”

“Merging our efforts in the space with what Jon Favreau has been working toward using virtual reality and game engine technology in his filmmaking finally gave us the chance to execute the vision,” says Rob Bredow, executive creative director and head of ILM. “StageCraft has grown out of the culmination of over a decade of innovation in the virtual production space at ILM. Seeing our digital sets fully integrated, in real time on stage, providing the kind of in-camera shots we’ve always dreamed of while also providing the majority of the lighting was really a dream come true.”

Bluff adds, “Working with Kim Libreri and his Unreal team, Golem Creations, and the ILM StageCraft team has opened new avenues to both the filmmakers and my fellow key creatives on The Mandalorian, allowing us to shoot principal photography on photoreal, virtual sets that are indistinguishable from their physical counterparts while incorporating physical set pieces and props as needed for interaction. It’s truly a game-changer.”

ILM StageCraft’s production tools combine traditional filmmaking equipment and methodologies with all of the advantages of a fully digital workflow. With ILM StageCraft, a production can acquire many in-camera finals, giving filmmakers immediate and complete creative control of work typically handed off and reinterpreted in post. That improves the quality of visual effects shots with perfectly integrated elements and reduces visual effects requirements in post, a major benefit considering today’s compressed schedules.

Destin Daniel Cretton talks directing Warner’s Just Mercy

By Iain Blair

An emotionally powerful and thought-provoking true story, Just Mercy is the latest film from award-winning filmmaker Destin Daniel Cretton (The Glass Castle, Short Term 12), who directed the film from a screenplay he co-wrote. Based on famed lawyer and activist Bryan Stevenson’s memoir, “Just Mercy: A Story of Justice and Redemption,” which details his crusade to defend, among others, wrongly accused prisoners on death row, it stars Michael B. Jordan and Oscar winners Jamie Foxx and Brie Larson.

The story starts when, after graduating from Harvard, Stevenson (Jordan) — who had his pick of lucrative jobs — instead heads to Alabama to defend those wrongly condemned or who were not afforded proper representation, with the support of local advocate Eva Ansley (Larson).

One of his first cases is that of Walter McMillian (Foxx), who, in 1987, was sentenced to die for the murder of an 18-year-old girl, despite evidence proving his innocence. In the years that follow, Stevenson becomes embroiled in a labyrinth of legal and political maneuverings as well as overt racism as he fights for Walter, and others like him, with the odds — and the system — stacked against them.

This case becomes the main focus of the film, whose cast also includes Rob Morgan as Herbert Richardson, a fellow prisoner who also sits on death row; Tim Blake Nelson as Ralph Myers, whose pivotal testimony against Walter McMillian is called into question; Rafe Spall as Tommy Chapman, the DA who is fighting to uphold Walter’s conviction and sentence; O’Shea Jackson Jr. as Anthony Ray Hinton, another wrongly convicted death row inmate whose cause is taken up by Stevenson; and Karan Kendrick as Walter’s wife, Minnie McMillian.

Cretton’s behind-the-scenes creative team included DP Brett Pawlak, co-writer Andrew Lanham, production designer Sharon Seymour, editor Nat Sanders and composer Joel P. West, all of whom previously collaborated with the director on The Glass Castle.

Destin Daniel Cretton

I spoke with the director about making the film, his workflow and his love of post.

When you read Bryan’s book, did you feel compelled to take this on?
I did. It was his voice and the way he tells the story about these characters, who seem so easy to judge at first. Then he starts peeling back all the layers, using humor in certain areas and devastation in others. Somehow it still makes you feel hopeful and inspired to do something about all the injustice. All of it just hit me so hard, and I felt I had to be involved in it in some way.

Did you work very closely with him on the film?
I did. Before we even began writing a word, we went to meet him in Montgomery, and he introduced us to the real Anthony Ray Hinton and a bunch of lawyers working on cases. Bryan was with us through the whole writing process, filling in the blanks and helping us piece the story together. We did a lot of research, and we had the book, but it obviously couldn’t include everything. Bryan gave us all the transcripts of all the hearings, and a lot of the lines were taken directly from those.

This is different from most other courtroom dramas, as the trial’s already happened when the movie begins. What sort of film did you set out to make?
We set out to adapt the book in as compelling a way as possible. And it’s a story about this young lawyer who’s trying to convince the system and the state that they made a terrible mistake, with all the ups and downs, and just how long it takes him to succeed. That’s the drama.

What were the main challenges in pulling it all together?
Telling a very intense, true story about people, many of whom are still alive and still doing the work they were doing then. So accuracy was a huge thing, and we all really felt the burden and responsibility to get it right. I felt it more so than on any film I’ve ever done because I respect Bryan’s work so much. We’re also telling stories about people who were very vulnerable.

Trying to figure out how to tell a narrative that still moved at the right pace and gave you an emotional ride, but which stayed completely accurate to the facts and to a legal process that moves incredibly slowly was very challenging. A big moment for me was when Bryan first saw the film and gave me a big hug and a thank you; he told me it was not for how he was portrayed, but for how we took care of his clients. That was his big concern.

What did Jamie and Michael bring to their roles?
They’ve been friends for a long time, so they already had this great natural chemistry, and they were able to play through scenes like two jazz musicians and bring a lot of stuff that wasn’t there on the page.

I heard you actually shot in the South. How tough was the shoot?
Filming in some of the real locations really helped. We were able to shoot in Montgomery — the scenes where Bryan’s doing his morning jogs, the Baptist church where MLK Jr. was the pastor, and then the cotton fields and places where Walter and his family actually lived. Being there and feeling the weight of history was very important to the whole experience. Then we shot the rest of the film in Atlanta.

Where did you post?
All in LA on the Warner lot.

Do you like the post process?
I love post and I hate it (laughs). And it depends on whether you’re finding a solution to a problem or you’re realizing you have a big problem. Post, of course, is where you make the film and where all the problems are exposed… the problems with all the choices I made on set. Sometimes things are working great, but usually it’s the problems you’re having to face. But working with a good post team is so fulfilling, and you’re doing the final rewrite, and we solved so many things in post on this.

Talk about editing with your go-to Nat Sanders, who got an Oscar nom for his work (with co-editor Joi McMillon) on Moonlight and also cut If Beale Street Could Talk.
Nat wasn’t on set. He began cutting material here in LA while we shot on location in Atlanta and Alabama, and we talked a lot on the phone. He did the first assembly, which was just over three hours long. All the elements were there, but shaping all the material and fine-tuning it all took nearly a year as we went through every scene, talking them out.

Finding the correct emotional ride and balance was a big challenge, as this has so many emotional highs and lows and you can easily tire an audience out. We had to cut some storylines that were working because they were sending people on another emotional down when they needed something lighter. The other part of it was performance, and you can craft so much of that in the edit; our leads gave us so many takes and options to play with. Dealing with that is one of Nat’s big strengths. Both of us are meticulous, and we did a lot of test screenings and kept making adjustments.

Writer Iain Blair (left) and director Destin Daniel Cretton.

Nat and I both felt the hardest scene to cut and get right was Herb’s execution scene, because of the specific tone needed. If you went too far in one direction, it felt too much, but if you went too far the other way, it didn’t quite hit the emotional beat it needed. So that took a lot of time, playing around with all the cross-cutting and the music and sound to create the right balance.

All period films need VFX. What did that entail?
Crafty Apes did them, and we did a lot of fixes, added period stuff and did a lot of wig fixes — more than you’d think (laughs). We weren’t allowed to shoot at the real prison, so we had to create all the backdrops and set extensions for the death row sequences.

Can you talk about the importance of sound and music?
It’s always huge for me, and I’ve worked with my composer, Joel, and supervising sound editor/re-recording mixer Onnalee Blank, who was half of the sound team, since the start. For both of them, it was all about finding the right tone to create just the right amount of emotion that doesn’t overdo it, and Joel wrote the score in a very stripped-down way and then got all these jazz musicians to improvise along with the score.

Where did you do the DI and how important is it to you?
That’s huge too, and we did it at Light Iron with colorist Ian Vertovec. He’s worked with my DP on almost every project I’ve done, and he’s so good at grading and giving you a very subtle palette.

What’s next?
We’re currently in preproduction on Shang-Chi and the Legend of the Ten Rings, featuring Marvel’s first Asian superhero. It’s definitely a change of pace after this.

Kevin Lau heads up advertising, immersive at Digital Domain

Visual effects studio Digital Domain has brought on Kevin Lau as executive creative director of advertising, games and new media. In this newly created position, Lau will oversee all short-form projects and act as a creative partner for agencies and brands.

Lau brings over 18 years of ad-based visual effects and commercial production experience, working on campaigns for brands such as Target, Visa and Sprint.

Most recently, he was the executive creative director and founding partner at Timber, an LA-based studio focused on ads (GMC, Winter Olympics) and music videos (Kendrick Lamar’s Humble). Prior to that, he held creative director positions at Mirada, Brand New School and Superfad. Throughout his career, his work has been honored with multiple awards including Clios, AICP Awards, MTV VMAs and a Cannes Gold Lion for Sprint’s “Now Network” campaign via Goodby.

Lau, who joins Digital Domain EPs Nicole Fina and John Canning as they continue to build the studio’s short-form business, will help unify the vision for the advertising, games and new media/experiential groups, promoting a consistent voice across campaigns.

Lau joins the team as the new media group prepares to unveil its biggest project to date: Time’s The March, a virtual reality recreation of the 1963 March on Washington for Jobs and Freedom. Digital Domain’s experience with digital humans will play a major role in the future of both groups as they continue to build on the photoreal cinematics and in-game characters previously created for Activision, Electronic Arts and Ubisoft.

VFX-heavy Skyworth OLED TV spot via The-Artery

The-Artery created a spot for Skyworth’s latest version of its W81|W81 Pro Wallpaper OLED TV, which debuted last month at the “See the Wonder” event at CES 2020.

Created using The-Artery‘s newly opened Resolve-based color room and expanded design capabilities — spearheaded by colorist Stephen Picano and design director Lauren Indovina — the commercial features a couple swimming through space-like waters, children battling origami dragons while floating in a paper boat and a traveler trekking through snowy tundras while glowing jellyfish float overhead. Publicis, Skyworth’s agency, wanted the ad to reflect “the wonder” of the company’s newest television model.

“The campaign, helmed by director Eli Sverdlov, was very director-led in a way that I’ve never seen before,” explains The-Artery’s EP/MD, Deborah Sullivan. “Of course, there was still ongoing dialogue with the client and agency, but the level of creative control that was entrusted is almost unheard of. Everything was open from start to finish, including the ideation phase, color grading and design — to name a few. Our team had a lot of fun jumping straight into the edit to develop and launch what we consider as a high-end conceptual throwback to the nineties.”

Sverdlov agrees: “Our flexible creative process was in a condensed schedule and required a very unique collaboration. We were practically creating the ideas and visuals while editing and sourcing footage.”

Due to the production’s long shooting schedule and tight deadlines, the visual effects were designed via Autodesk Flame in realtime, all under one roof, while filming took place in Serbia. Additional footage was carefully curated as well as color graded and cut to fit the tone and flow of the rest of the piece. The creature imagery, such as the jellyfish, was done in CG.

In addition to Flame and Resolve, The-Artery called on SideFX Houdini, Autodesk Maya, Maxon’s Redshift, Otoy’s Octane, Autodesk’s Arnold, Adobe After Effects and Maxon’s Cinema 4D.

Framestore launches FPS preproduction services

VFX studio Framestore has launched FPS (Framestore Pre-production Services) for the global film and content production industries. An expansion of Framestore’s existing capability, FPS is available to clients in need of standalone preproduction support or an end-to-end production solution.

The move builds out and aligns the company’s previz, virtual production, techviz and postviz services with Framestore’s art department (which operates either as part of the Framestore workflow or as a stand-alone creative service), virtual production team and R&D unit, and integrates with the company’s VFX and animation teams. It also builds on work on films such as Gravity and on the knowledge gained during the company’s eight-year London joint venture with visualization company The Third Floor. FPS is working on feature film projects as part of an integrated offering and as a standalone visualization partner, with more projects slated in the coming months.

The new team is led by Alex Webster, who joins as FPS managing director after running The Third Floor London. He will report directly to Fiona Walkinshaw, Framestore’s global managing director, film.

“This work aligns Framestore’s singular VFX and animation craft with a granular understanding of the visualization industry,” says Webster. “It marries the company’s extraordinary legacy in VFX with established visualization and emergent virtual production processes, supported by bleeding-edge technology and dedicated R&D resource to inform the nimble approach which our clients need. Consolidating our preproduction services represents a significant creative step forward.”

“Preproduction is a crucial stage for filmmakers,” says chief creative officer Tim Webber. “From mapping out environments to developing creatures and characters to helping plot action sequences, it provides unparalleled freedom in terms of seeing how a story unfolds or how characters interact with the worlds we create. Bringing together our technical innovation with an understanding of filmmaking, we want to offer a bespoke service for each film and each individual to help tell compelling, carefully crafted stories.”

“Our clients’ needs are as varied as the projects they bring to us, with some needing a start-to-finish service that begins with concept art and ends in post, while others want a bespoke, standalone solution to specific creative challenges, be that in early-stage concepting, through layout and visualization or in final animation and VFX,” says Framestore CEO William Sargent. “It makes sense to bring all these services in-house — even more so when you consider how our work in adjacent fields like AR, VR and MR has helped the likes of HBO, Marvel and Warner Bros. bring their IP to new, immersive platforms. What we’ll ultimately deliver goes well beyond previz and beyond visualization.”

Main Image: (L-R) Tim Webber, Fiona Walkinshaw and Alex Webster.

FXhome’s HitFilm Express 14, ‘Pay What You Want’ option

FXhome has launched a new “Pay What You Want” goodwill program, inspired by requests from the HitFilm Express community to be able to help pay for development of the historically free video editing and VFX software. Pay What You Want gives users the option to contribute financially, ensuring that those funds will be allocated for future development and improvements to HitFilm.

Additionally, FXhome will contribute a percentage of the proceeds of Pay What You Want to organizations dedicated to global causes important to the company and its community. At its launch, the FXhome Pay What You Want initiative will donate a portion of its proceeds to the WWF and the Australia Emergency Bushfire Fund. The larger the contribution from customers, the more FXhome will donate.

HitFilm Express remains a free download; however, first-time customers will now have the option to “Pay What You Want” for the software. They’ll also receive some exclusive discounts on HitFilm add-on packs and effects.

Coinciding with the release of Pay What You Want, FXhome is releasing HitFilm Express 14, the first version of HitFilm Express to be eligible for the Pay What You Want initiative. HitFilm Express 14 features a new and simplified export process, new text controls, a streamlined UI and a host of new features.

For new customers who would like to download HitFilm Express 14 and also contribute to the Pay What You Want program, there are three options available:

• Starter Pack Level: With a contribution of as little as $9, new HitFilm Express 14 customers will also receive a free Starter Pack of software and effects that includes:
o Professional dark mode interface
o Edit tools including Text, Split Screen Masking, PiP, Vertical Video, Action Cam Crop
o Color tools including Exposure, Vibrance, Shadows and Highlights, Custom Gray, Color Phase, Channel Mixer and 16-bit color
o Additional VFX packs including Shatter, 3D Extrusion, Fire, Blood Spray and Animated Lasers
• Content Creator Level: With contributions of $19 or more, users will receive everything included in the Starter Pack, as well as:
o Edit: Repair Pack with Denoise, Grain Removal and Rolling Shutter
o Color: LUT Pack with LUTs and Grading Transfer
o Edit: Beautify Pack with Bilateral Blur and Pro Skin Retouch
• VFX Artist Level: Users who contribute from $39 to $99 get everything in the Starter Pack and Content Creator levels plus:
o Composite Toolkit Pack with Wire Removal, Projector, Clone and Channel Swapper
o Composite Pro-Keying Pack for Chroma Keying
o Motion Audio Visual Pack with Atomic Particles, Audio Spectrum and Audio Waveform
o VFX Neon Lights Pack with Lightsword Ultra (2-Point Auto), Lightsword Ultra (4-Point Manual), Lightsword Ultra (Glow Only) and Neon Path
o VFX Lighting Pack with Anamorphic Lens Flares, Gleam, Flicker and Auto Volumetrics

What’s new in HitFilm Express 14
HitFilm Express 14 adds a number of VFX workflow enhancements to enable even more sophisticated effects for content creators, including a simplified export workflow that allows users to export content directly from the timeline and comps, new text controls, a streamlined UI and a host of new features. Updates include:

• Video Textures for 3D Models: Creators who already have the 3D: Model Render Pack can now use a video layer as a texture on a 3D model to add animated bullet holes, cracked glass or changing textures.
• Improvements to the Export Process: In HitFilm Express 14, the Export Queue is now an Export Panel and is much easier to use. Exporting can also now be done from the timeline and from comps. These “in-context” exports will export the content between the In and Out points, if set, or the entire timeline, using the current default preset (which can be changed from the menu).
• Additional Text Controls: Customizing text in HitFilm Express 14 is now even simpler, with Text panel options for All Caps, Small Caps, Subscript and Superscript. Users can also change the character spacing, horizontal or vertical scale, as well as baseline shift (for that Stranger-Things-style titling).
• Usability and Workflow Enhancements: In addition to the new and improved export process, FXhome has also implemented new changes to the interface to further simplify the entire post production process, including a new “composite button” in the media panel, double-click and keyboard shortcuts. A new Masking feature adds new automation to the workflow; when users double-click the Rectangle or Ellipse tools, a centered mask is automatically placed to fill the center of the screen. Masks are also automatically assigned colors, which can be changed to more easily identify different masks.
• Effects: Users can now double-click the effects panel to apply to the selected layer and drop 2D effects directly onto layers in the viewer. Some effects — such as the Chroma Key and Light Flares — can be dropped on a specific point, or users can select a specific color to key by. Users can also now favorite “effects” for quick and easy access to their five most recently used effects from the ‘Effects’ menu in the toolbar.
• Additional Improvements: Users can now use Behavior effects from the editor timeline, click-drag across multiple layers to toggle “solo,” “locked” or “visibility” settings in one action, and access templates directly from the media panel with the new Templates button. Menus have also been added to the tab of each panel to make customization of the interface easier.
• Open Imerge Pro files in HitFilm: Imerge Pro files can now be opened directly from HitFilm as image assets. Any changes made in the Imerge Pro project will be automatically updated with any save, making it easier to change image assets in real time.
• Introducing Light Mode: The HitFilm Express interface is now available in Light Mode and will open in Light Mode the first time you open the software. Users with a pre-existing HitFilm Express license can easily change back to the dark theme if desired.

HitFilm Express 14 is available immediately and is a free download. Customers downloading HitFilm Express 14 for the first time are eligible to participate in the new Pay What You Want initiative. Free effects and software packs offered in conjunction with Pay What You Want are only available at initial download of HitFilm Express 14.

Rob Legato talks The Lion King‘s Oscar-nominated visual effects

By Karen Moltenbrey

There was a lot of buzz before — and after — this summer’s release of Disney’s remake of the animated classic The Lion King. And what’s not to love? From the animals to the African savannas, Disney brought the fabled world of Simba to life in what is essentially a “live-action” version of the beloved 1994 2D feature of the same name. Indeed, the filmmakers used tenets of live-action filmmaking to create The Lion King, and themselves call it a visual effects film. However, there are those who consider this remake, like the original, an animated movie, as 2019’s The Lion King used cutting-edge CGI for the photoreal beasts and environments.

Rob Legato

Whether you call it “live action” or “animation,” one thing’s for sure. This is no ordinary film. And, it was made using no ordinary production process. Rather, it was filmed entirely in virtual reality. And it’s been nominated for a Best Visual Effects Oscar this year.

“Everything in it is a visual effect, created in the same way that we would make a visual effects-oriented film, where we augment or create the backgrounds or create computer-generated characters for a scene or sequence. But in this case, that spanned the entire movie,” says VFX supervisor Rob Legato. “We used a traditional visual effects pipeline and hired MPC, which is a visual effects studio, not an animation house.”

MPC, which created the animals and environments, crafted all the elements, which were CG, and handled the virtual production, working with Magnopus to develop the necessary tools that would take the filmmakers from previz through shooting and, eventually, into post production. Even the location scouting occurred within VR, with Legato, director Jon Favreau and others, including cinematographer Caleb Deschanel, simultaneously walking through the sets and action by using HTC Vive headsets.

Caleb Deschanel (headset) and Rob Legato. Credit: Michael Legato

The Animations and Environments
MPC, known for its photorealistic animals and more, had worked with Disney and Favreau on the 2016 remake of The Jungle Book, which was shot within a total greenscreen environment and used realistic CG characters and sets with the exception of the boy Mowgli. (It also used VR, albeit for previsualization only.) The group’s innovative effort for that work won an Oscar for visual effects. Apparently that was just the tip of the spear, so to speak, as the team upped its game with The Lion King, making the whole production entirely CG and taking the total filmmaking process into virtual reality.

“It had to look as believable as possible. We didn’t want to exaggerate the performances or the facial features, which would make them less realistic,” says Legato of the animal characters in The Lion King.

The CG skeletons were built practically bone for bone to match their real-life counterparts, and the digital fur matched the hair variations of the various species found in nature. The animators, meanwhile, studied the motion of the real-life animals and moved the digital muscles accordingly.

“Your eye picks up when [the animal] is doing something that it can’t really do, like if it stretches its leg too far or doesn’t have the correct weight distribution that’s affecting the other muscles when it puts a paw down,” says Legato, contending that it is almost impossible to tell the CG version of the characters from the real thing in a non-talking shot or a still frame.

To craft the animals and environments, the MPC artists used Autodesk’s Maya as the main animation program, along with SideFX Houdini for water and fire simulations and Pixar’s RenderMan for rendering. MPC also used custom shaders and tools, particularly for the fur, mimicking that of the actual animal. “A lion has so many different types of hair — short hair around the body, the bushy mane, thick eyebrow hairs and whiskers. And every little nuance was recreated and faithfully reproduced,” Legato adds.

MPC artists brought to life dozens and dozens of animals for the film and then generated many more unique variations — from lions to mandrills to hyenas to zebras and more, even birds and bugs. And then the main cast and background animals were placed within a photoreal environment, where they were shot with virtual cameras that mimicked real cameras.

The world comprises expansive, open landscapes. “There were many, many miles of landscapes that were constructed,” says Legato. The filmmakers would film within pockets that were dressed and populated for different scenes, from Pride Rock to the interior of a cave to the savanna to the elephant graveyard — all built in CGI.

“Everything was simulated to be the real thing, so the sum total of the illusion is that it’s all real. And everything supports each other — the grounds, the characters, what they are physically doing. The sum total of that adds up to where your brain just says, ‘OK, this must be real. I’ll stop looking for flaws and will now just watch the story,’” says Legato. “That was the creative intent behind it.”

Virtual Production
All the virtual camera work was accomplished within Unity’s engine, so all the assets were ported in and out of that game engine. “Everyone would then know where our cameras were, what our camera moves were, how we were following the action, our lens choices, where the lights were placed … all those things,” says Legato.

Magnopus created the VR tools specific for the film, which ran on top of Unity to get the various work accomplished, such as the operation of the cameras. “We had a crane, a dolly and other types of cameras encoded so that each basically drove its mate in the computer. For instance, we created a dolly and then had a physical dolly with encoders on it, so everything was hand operated, and we had a dolly grip and a camera assistant pulling focus. There was someone operating the cameras, and sometimes there was a crane operator. We did SteadiCam as well through an actual SteadiCam with a sensor on it to work with OptiTrack [motion capture that was used to track the camera],” explains Legato. “We built a little rig for the SteadiCam as well as one for a drone we’d fly around the stage, and we’d create the illusion that it was a helicopter shot while flying around Africa.”
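To make the idea of an encoded rig “driving its mate” concrete, the toy sketch below converts raw encoder ticks from a physical dolly into the transform of a virtual camera once per frame. It is purely illustrative, with made-up axis names and scaling; the production’s actual tooling was built by Magnopus on top of Unity, not in Python.

from dataclasses import dataclass, field

@dataclass
class EncoderAxis:
    """One encoded axis on a physical rig (dolly track, boom, focus ring, ...)."""
    counts_per_unit: float      # encoder ticks per meter (or per degree)
    zero_offset: int = 0        # tick count captured when the rig was homed

    def to_units(self, ticks: int) -> float:
        return (ticks - self.zero_offset) / self.counts_per_unit

@dataclass
class VirtualCamera:
    """Pose of the virtual camera that mirrors the physical rig."""
    position: list = field(default_factory=lambda: [0.0, 1.5, 0.0])  # x, y, z in meters
    focus_distance: float = 3.0                                      # meters

def update_from_rig(cam, track, boom, focus, raw):
    """Apply one frame of raw encoder readings to the virtual camera."""
    cam.position[0] = track.to_units(raw["track_ticks"])         # slide along the track
    cam.position[1] = 1.5 + boom.to_units(raw["boom_ticks"])      # boom raises or lowers the head
    cam.focus_distance = focus.to_units(raw["focus_ticks"])       # focus pull follows the rig

# One simulated frame of readings from the encoded hardware.
track = EncoderAxis(counts_per_unit=4000)    # 4000 ticks per meter of track
boom = EncoderAxis(counts_per_unit=4000)
focus = EncoderAxis(counts_per_unit=800)
cam = VirtualCamera()
update_from_rig(cam, track, boom, focus,
                {"track_ticks": 6000, "boom_ticks": -2000, "focus_ticks": 2400})
print(cam)   # VirtualCamera(position=[1.5, 1.0, 0.0], focus_distance=3.0)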

Because the area within VR was so vast, a menu system was created so the filmmakers could locate one another within the virtual environment, making location scouting much easier. They also could take snapshots of different areas and angles and share them with the group. “We were standing next to each other [on stage], but within the virtual environment, we could be miles apart and not see each other because we’re maybe behind trees or rocks.”

As Legato points out, the menu tool is pretty robust. “We basically built a game of film production. Everything was customizable,” he says. Using iPads, the group could play the animation. As the camera was in operation, they could stop the animation, wind it backward, speed it forward, shoot it in slow motion or faster motion. “These options were all accessible to us,” he adds.

Legato provides the following brief step-by-step overview of how the virtual production occurred. First, the art department created the sets — Africa with the trees, ponds, rivers, mountains, waterfalls and so forth. “Based on the script, you know somewhat where you need to be [in the set],” he says. Production designer James Chinlund would make a composite background, and then they — along with Favreau, Deschanel and animation supervisor Andrew Jones — would go into VR.

“We had built these full-size stationary chess pieces of the animals, and in VR, we’d have these tools that let us grab a lion, for instance, or a meerkat, and position them, and then we’d look through the lens and start from there,” says Legato. “We would either move them by hand or puppeteer a simple walk cycle to get the idea of the blocking.”

Jones and his team would animate that tableau and port it back into the game engine as an animation cycle. “We’d find camera angles and augment them. We’d change some of the animation or slow it down or move the animals in slightly different positions. And then we’d shoot it like it’s on a live-action stage,” explains Legato. “We’d put a dolly track down, cover the action with various types of lenses, create full-coverage film dailies… We could shoot the same scene in as many different angles as we’d wish. We could then play it out to a video deck and start editing it right away.” The shots they liked might get rendered with more light or motion blur, but a lot of the time, they’d go right off the video tap.

Meanwhile, MPC recorded everything the filmmakers did and moved — every leaf, rock, tree, animal. Then, in post, all of that information would be reconverted back into Maya sets and the animation fine-tuned.

“In a nutshell, the filmmakers were imparting a live-action quality to the process — by not faking it, but by actually doing it,” says Legato. “And we still have the flexibility of full CGI.”

The Same, But Different
According to Legato, it did not take the group long to get the hang of working in VR. And the advantages are many — chief among them, time savings when it comes to planning and creating the sequence editorially, and then instantly being able to reshoot or iterate the scene inexpensively. “There is literally no downside to exploring a bold choice or an alternate angle on the concept,” he points out.

Yes, virtual filmmaking is the future, contends Legato.

So, back to the original question: Is The Lion King a VFX film or an animated film? “It’s perhaps a hybrid,” says Legato. “But, if you didn’t know how we did it and if the animals didn’t talk, you’d think it was done in the traditional manner of a live-action film. Which it is, visually speaking. You wouldn’t necessarily describe it as looking like ‘an animated film’ because it doesn’t really look like an animated film, like a Pixar or DreamWorks movie. By labeling it as such, you’re putting it into a hole that it’s not. It’s truly just a movie. How we achieved it is immaterial, as it should be.”

Legato and his colleagues call it “live action,” which it truly is. But some, including the Golden Globes, categorized it as “animation.” (They also called 2015’s The Martian and 2010’s The Tourist “comedies.”)

Call it what you will; the bottom line is that the film is breathtaking and the storytelling is amazing. And the filmmaking is inventive and pushes traditional boundaries, making it difficult to perhaps fit into a traditional category. Therefore, “beautiful,” “riveting,” “creative” and “innovative” might be the only descriptions necessary.


Karen Moltenbrey is a veteran writer, covering visual effects and post production.

Director James Mangold on Oscar-nominated Ford v Ferrari

By Iain Blair

Filmmaker James Mangold has been screenwriting, producing and directing for years. He has made films about country legends (Walk the Line), cowboys (3:10 to Yuma), superheroes (Logan) and cops (Cop Land), and has tackled mental illness (Girl, Interrupted) as well.

Now he’s turned his attention to race car drivers and endurance racing with his movie Ford v Ferrari, which has earned Mangold an Oscar nomination for Best Picture. The film also received nods for its editing, sound editing and sound mixing.

James Mangold (beard) on set.

The high-octane drama was inspired by a true-life friendship that forever changed racing history. In 1959, Carroll Shelby (Matt Damon) is on top of the world after winning the most difficult race in all of motorsports, the 24 Hours of Le Mans. But his greatest triumph is followed quickly by a crushing blow — the fearless Texan is told by doctors that a grave heart condition will prevent him from ever racing again.

Endlessly resourceful, Shelby reinvents himself as a car designer and salesman working out of a warehouse space in Venice Beach with a team of engineers and mechanics that includes hot-tempered test driver Ken Miles (Christian Bale). A champion British race car driver and a devoted family man, Miles is brilliant behind the wheel, but he’s also blunt, arrogant and unwilling to compromise.

After Shelby’s vehicles make a strong showing at Le Mans against Italy’s venerable Enzo Ferrari, Ford Motor Company recruits the firebrand visionary to design the ultimate race car, a machine that can beat even Ferrari on the unforgiving French track. Determined to succeed against overwhelming odds, Shelby, Miles and their ragtag crew battle corporate interference, the laws of physics and their own personal demons to develop a revolutionary vehicle that will outshine every competitor. The film culminates in the historic showdown between the US and Italy at the grueling 1966 24 Hours of Le Mans.

Mangold’s below-the-line talent, many of whom have collaborated with the director before, includes Academy Award-nominated director of photography Phedon Papamichael; film editors Michael McCusker, ACE, and Andrew Buckland; visual effects supervisor Olivier Dumont; and composers Marco Beltrami and Buck Sanders.

L-R: Writer Iain Blair and Director James Mangold

I spoke with Mangold — whose other films include Logan, The Wolverine and Knight and Day — about making the film and his workflow.

You obviously love exploring very different subject matter in every film you make.
Yes, and I do every movie like a sci-fi film — meaning inventing a new world that has its own rules, customs, language, laws of physics and so on, and you need to set it up so the audience understands and they get it all. It’s like being a world-builder, and I feel every film should have that, as you’re entering this new world, whether it’s Walk the Line or The French Connection. And the rules and behavior are different from our own universe, and that’s what makes the story and characters interesting to me.

What sort of film did you set out to make?
Well, given all that, I wanted to make an exciting racing movie about that whole world, but it’s also that it was a moment when racing was free of all things that now turn me off about it. The cars were more beautiful then, and free of all the branding. Today, the cars are littered with all the advertising and trademarks — and it’s all nauseating to me. I don’t even feel like I’m watching a sport anymore.

When this story took place, it was also a time when all the new technology was just exploding. Racing hasn’t changed that much over the past 20 years. It’s just refining and tweaking to get that tiny edge, but back in the ‘60s they were still inventing the modern race car, and discovering aerodynamics and alternate building materials and methods. It was a brand-new world, so there was this great sense of discovery and charm along with all that.

What were the main technical challenges in pulling it all together?
Trying to do what I felt all the other racing movies hadn’t really done — taking the driving out of the CG world and putting it back in the real world, so you could feel the raw power and the romanticism of racing. A lot of that’s down to the particulates in the air, the vibrations of the camera, the way light moves around the drivers — and the reality of behavior when you’re dealing with incredibly powerful machines. So right from the start, I decided we had to build all the race cars; that was a huge challenge right there.

How early on did you start integrating post and all the VFX?
Day one. I wanted to use real cars and shoot the Le Mans and other races in camera rather than using CGI. But this is a period piece, so we did use a lot of CGI for set extensions and all the crowds. We couldn’t afford 50,000 extras, so just the first six rows or so were people in the stands; the rest were digital.

Did you do a lot of previz?
A lot, especially for Le Mans, as it was such a big, three-act sequence with so many moving parts. We used far less for Daytona. We did a few storyboards, and then my second unit director, Darrin Prescott — who has choreographed car chases and races in such movies as Drive, Deadpool 2, Baby Driver and The Bourne Ultimatum — and I planned it out using matchbox cars.

I didn’t want that “previzy” feeling. Even when I do a lot of previz, whether it’s a Marvel movie or like this, I always tell my previz team “Don’t put the camera anywhere it can’t go.” One of the things that often happens when you have the ability to make your movie like a cartoon in a laboratory — which is what previz is — is that you start doing a lot of gimmicky shots and flying the camera through keyholes and floating like a drone, because it invites you to do all that crazy shit. It’s all very show-offy as a director — “Look at me!” — and a turnoff to me. It takes me out of the story, and it’s also not built off the subjective experience of your characters.

This marks your fifth collaboration with DP Phedon Papamichael, and I noticed there are none of the big swooping camera moves or the beauty-shot approach you see in all the car commercials.
Yes, we wanted it to look beautiful, but in a real way. There’s so much technology available now, like gyroscopic setups and arms that let you chase the cars in high-speed vehicles down tracks. You can do so much, so why do you need to do more? I’m conservative that way. My goal isn’t to brand myself through my storytelling tricks.

How tough was the shoot?
It was one of the most fun shoots I’ve ever had, with my regular crew and a great cast. But it was also very grueling, as we were outside a lot, often in 115-degree heat in the desert on blacktop. And locations were big challenges. The original Le Mans course doesn’t exist anymore like it used to be, so we used several locations in Georgia to double for it. We shot the races wide-angle anamorphic with a team of a dozen professional drivers, and with anamorphic you can shoot the cars right up into the lens — just inches away from camera, while they’d be doing 150 mph or 160 mph.

Where did you post?
All on the Fox lot at my offices. We scored at Capitol Records and mixed the score in Malibu at my composer’s home studio. I really love the post, and for me it’s all part of the same process — the same cutting and pasting I do when I’m writing, and even when I’m directing. You’re manipulating all these elements and watching it take form — and particularly in this film, where all the sound design and music and dialogue are all playing off one another and are so key. Take the races. By themselves, they look like nothing. It’s just a car whipping by. The power of it all only happens with the editing.

You had two editors — Michael McCusker and Andrew Buckland. How did that work?
Mike’s been with me for 20 years, so he’s kind of the lead. Mike and Drew take and trade scenes, and they’re good friends so they work closely together. I move back and forth between them, which also gives them each some space. It’s very collaborative. We all want it to look beautiful and elegant and well-designed, but no one’s a slave to any pre-existing ideas about structure or pace.

What were the big editing challenges?
It’s a car racing movie with drama, so we had to hit you with adrenalin and then hold you with what’s a fairly procedural and process-oriented film about these guys scaling the corporate wall to get this car built and on the track. Most of that’s dramatic scenes. The flashiest editing is the races, which was a huge, year-long effort. Mike was cutting the previz before we shot a foot, and initially we just had car footage, without the actors, so that was a challenge. It all transformed once we added the actors.

Can you talk about working on the visual effects with Method’s VFX supervisor Olivier Dumont?
He did an incredible job; no one realizes just how many visual effects shots there are. They’re really invisible, and that’s what I love — the film feels 100% analog, but of course it isn’t. It’s impossible to build giant race tracks as they were in the ‘60s. But having real foregrounds really helped. We had very few scenes where actors were wandering around in a green void like on so many movies now. So you’re always anchored in the real world, and then all the set extensions were in softer focus or backlit.

This film really lends itself to sound.
Absolutely, as every car has its own signature sound, and as we cut rapidly from interiors to exteriors, from cars to pits and so on. The perspective aural shifts are exciting, but we also tried to keep it simple and not lose the dramatic identity of the story. We even removed sounds in the mix if they weren’t important, so we could focus on what was important.

Where did you do the DI, and how important is it to you?
At Efilm with Skip Kimball (working on Blackmagic DaVinci Resolve), and it was huge on this, especially dealing with the 24-hour race, the changing light, and the rain and night scenes; having to match five different locations was a nightmare. So we worked on all that and the overall look from early on in the edit.

What’s next?
Don’t know. I’ve got two projects I’m working on. We’ll see.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Talking with Franki Ashiruka of Nairobi’s Africa Post Office

By Randi Altman

After two decades of editing award-winning film and television projects for media companies throughout Kenya and around the world, Franki Ashiruka opened Africa Post Office, a standalone post house in Nairobi, Kenya. The studio provides color grading, animation, visual effects, motion graphics, compositing and more. In addition, they maintain a database of the Kenyan post production community that allows them to ramp up with the right artists when the need arises.

Here she talks about the company, its workflow and being a pioneer in Nairobi’s production industry.

When did you open Africa Post Office, and what was your background prior to starting this studio?
Africa Post Office (APO) opened its doors in February 2017. Prior to starting APO, I was a freelance editor with plenty of experience working with well-established media houses such as Channel 4 (UK), Fox International Channels (UK), 3D Global Leadership (Nigeria), PBS (USA), Touchdown (New Zealand), Greenstone Pictures (New Zealand) and Shadow Films (South Africa).

In terms of Kenya-based projects, I’ve worked with a number of production houses including Quite Bright Films, Fat Rain Films, Film Crew in Africa, Mojo Productions, Multichoice, Zuku, Content House and Ginger Ink Films.

I imagine female-run, independent studios in Africa are rare?
On the contrary, Kenya has reached a point where more and more women are emerging as leaders of their own companies. I actually think there are more women-led film production companies than male-led. The real challenge was that before APO, there was nothing quite like it in Nairobi. Historically, video production here was very vertical — if you shot something, you’d need to also manage post within whatever production house you were working in. There were no standalone post houses until us. That said, with my experience, even though hugely daunting, I never thought twice about starting APO. It is what I have always wanted to do, and if being the first company of our kind didn’t intimidate me, being female was never going to be a hindrance.

L-R: Franki Ashiruka, Kevin Kyalo, Carole Kinyua and Evans Wenani

What is the production and post industry like in Nairobi? 
When APO first opened, the workload was commercial-heavy, but in the last two years that has steadily declined. We’re seeing this gap filled by documentary films, corporate work and television series. Feature films are also slowly gaining traction and becoming the focus of many up-and-coming filmmakers.

What services do you provide, and what types of projects do you work on?
APO has a proven track record of successful delivery on hundreds of film and video projects for a diverse range of clients and collaborators, including major corporate entities, NGOs, advertising and PR agencies, and television stations. We also have plenty of experience mastering according to international delivery standards. We’re proud to house a complete end-to-end post ecosystem of offline and online editing suites.

Most importantly, we maintain a very thorough database of the post production community in Kenya.
This is of great benefit to our clients who come to us for a range of services including color grading, animation, visual effects, motion graphics and compositing. We are always excited to collaborate with the right people and get additional perspectives on the job at hand. One of our most notable collaborators is Ikweta Arts (Avatar, Black Panther, Game of Thrones, Hacksaw Ridge), owned and run by Yvonne Muinde. They specialize in providing VFX services with a focus in quality matte painting/digital environments, art direction, concept and post visual development art. We also collaborate with Keyframe (L’Oréal, BMW and Mitsubishi Malaysia) for motion graphics and animations.

Can you name some recent projects and the work you provided?
We are incredibly fortunate to be able to select projects that align with our beliefs and passions.

Our work on the short film Poacher (directed by Tom Whitworth) won us three global Best Editing Awards from the Short to the Point Online Film Festival (Romania, 2018), Feel the Reel International Film Festival (Glasgow, 2018) and Five Continents International Film Festival (Venezuela, 2019).

Other notable work includes three feature documentaries for the Big Story segment on China Global Television Network, directed by Juan Reina (director of the Netflix Original film Diving Into the Unknown); Lion’s Den (Quite Bright Films), an adaptation of ABC’s Shark Tank; and The Great Kenyan Bake Off (Showstopper Media), adapted from the BBC series The Great British Bake Off. We also worked on Disconnect, a feature film produced by Kenya’s Tosh Gitonga (Nairobi Half Life), a director who is passionate about taking Africa’s budding film industry to the next level. We have also worked on a host of television commercials for clients extending across East Africa, including Kenya, Rwanda, South Sudan and Uganda.

What APO is most proud of though, is our clients’ ambitions and determination to contribute toward the growth of the African film industry. This truly resonates with APO’s mantra.

You recently added a MAM and some other gear. Can you talk about the need to upgrade?
Bringing on the EditShare EFS 200 nodes has significantly improved the collaborative possibilities of APO. We reached a point where we were quickly growing, and the old approach just wasn’t going to cut it.

Prior to centralizing our content, projects lived on individual hard disks. This meant that if I was editing and needed my assistant to find me a scene or a clip, or I needed VFX on something, I would have to export individual clips to different workstations. This created workflow redundancies and increased potential for versioning issues, which is something we couldn’t afford to be weighed down with.

The remote capabilities of the EditShare system were very appealing as well. Our color grading collaborator, Nic Apostoli of Comfort and Fame, is based in Cape Town, South Africa. From there, he can access the footage on the server and grade it while the client reviews with us in Nairobi. Flow media asset management also helps in this regard. We’re able to effectively organize and index clips, graphics, versions, etc. into clearly marked folders so there is no confusion about what media should be used. Collaboration among the team members is now seamless regardless of their physical location or tools used, which include the Adobe Creative Suite, Foundry Nuke, Autodesk Maya and Maxon Cinema 4D.

Any advice for others looking to break out on their own and start a post house?
Know what you want to do, and just do it! Thanks Nike …


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Directing bookend sequences for Portals, a horror anthology film

By Hasraf “HaZ” Dulull

Portals is a genre-bending feature film anthology focusing on a series of worldwide blackouts — after which millions of mysterious objects appear everywhere across the planet. While many flee from the sentient objects, some people are drawn toward and into them with horrifying consequences.

Portals

The film was in the final stages of post when writer/director Liam O’Donnell (Beyond Skyline and the upcoming Skylines film) called to see if I would like to get involved and direct some bookend sequences to add more scope and setup, which the producers felt was very much needed. I loved the premise and the world of the anthology, so I said yes. I pitched an idea for an ending that quickly evolved into an extra segment at the end of the film, which I directed. That’s why there are officially four directors on the show, with me getting executive producer and “end-segment created by” credits.

Two of the other sequences run around 20 to 25 minutes each, and O’Donnell’s sequence runs around 35 minutes; the film is 85 minutes long. Eduardo Sanchez and Gregg Hale (The Blair Witch Project) co-directed their segments. So the anthology feature is really three long segments plus my bookend sequences. The only connections among all the stories are the objects that appear, the event itself and the actual “portal”; everything else was unique to each segment’s story. In terms of production, the only consistencies throughout the anthology were the camera language — that slight hand-held feel — and, of course, the music/sound.

I had to watch the latest cut of the entire anthology film to get my head into that world, but I was given freedom to bring my own style to my sequences. That is exactly the point of an anthology — for each director to bring his or her own sensibilities to the individual segments. Besides Liam, the main producers I worked closely with on this project were Alyssa Devine and Griffin Devine from Pigrat Productions. They are fans of my first feature film, The Beyond, so they really encouraged the grounded tone I had demonstrated in that film.

The portal in Portals.

I’ve been a huge advocate of Blackmagic cameras and technology for a long time. Additionally, I knew I had a lot to shoot in a very short space of time (two days!), so I needed a camera that was light and flexible yet able to shoot 4K. I brought on cinematographer Colin Emerson, who shoots in a very loose way but always makes his stuff look cinematic. We watched the cut of the film and noticed the consistently loose nature of the cinematography across all the segments. Colin uses the Fig Rig a lot, and I love the way that rig works; the BMD Pocket Cinema 4K fits nicely on it along with the DSLR lenses he likes to use. The other reason was being able to use Blackmagic’s new BRaw format too.

We also shot the segment using a skeleton crew, which comprised myself as director/producer; VFX supervisor/1st AD John Sellings, who also did some focus pulling; James De Taranto (sound recording); DP/camera op Colin Emerson; FX makeup artists Kate Griffith and Jay James; and our two actors, Georgina Blackledge and Dare Emmanuel. I worked with both of them on my feature film The Beyond.

The Post
One thing I wanted to make sure of was that the post team at The Institution in LA was able to take my Resolve files and literally work from them for the picture post. One of the things I did during prep of the project (before we even cast) was shoot some tests to show what I had in mind in terms of look and feel. We also tested the BRaw and color workflow between my setup in London and the LA team; Colin and I did this during the location recce. This proved extremely useful in ensuring we set our camera to the exact specs the post house wanted. So we shot at 23.98fps in 4K (4096×1716, 2.39:1 cropped) using the Blackmagic Design log color space.

HaZ’s segments were captured with the Blackmagic Pocket Cinema Camera.

During the test, I did some quick color tests to show the producers in LA the tone and mood I was going for and to make sure everyone was on board before I shot it. The look was very post apocalyptic, as it’s set after the main events have happened. I wanted the locations to be a contrast with each other, one interior and one exterior with greens.

Colin is used to shooting most of his stuff on the Panasonic GH, but he had the Pocket Cinema Camera and was looking for the right project to use it on. He found he could use all of his usual lenses because the Pocket Cinema Camera has the same mount. The lenses used were the Sigma 18-35mm f/1.8 with a Metabones Speed Booster, the Olympus 12mm f/2 and the Lumix 35-100mm f/2.8.

Colin used the onboard monitor screen on the Pocket Cinema Camera, while I used a tethered external monitor — the Ikan DH5e — for directing. We used a 1TB Samsung external SSD securely attached to the rig cage along with a 64GB CFast card. The resolution we shot in was determined by the tests we did. We set up the rushes for post after each of the two days of the shoot, so during the day we would swap out drives and back things up. At the end of the day, we would bring in all the picture and sound rushes and use the amazing autosync feature in Blackmagic DaVinci Resolve to set it all up. This way, when I headed back home I could start editing right away inside Resolve.
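
For anyone curious what that daily setup can look like in script form, here is a minimal sketch in Python using Resolve’s built-in scripting API (this is an illustration, not the autosync workflow HaZ describes above); the drive path and bin name are hypothetical.

import DaVinciResolveScript as dvr   # Resolve's scripting module

resolve = dvr.scriptapp("Resolve")   # connect to the running Resolve instance
project = resolve.GetProjectManager().GetCurrentProject()
media_pool = project.GetMediaPool()
storage = resolve.GetMediaStorage()
# Create a bin for the day's rushes and make it the current folder (placeholder names)
day_bin = media_pool.AddSubFolder(media_pool.GetRootFolder(), "Day_01_Rushes")
media_pool.SetCurrentFolder(day_bin)
# Pull in everything backed up from that day's SSD/CFast media
clips = storage.AddItemListToMediaPool("/Volumes/PORTALS_RUSHES/Day_01")
print("Imported %d clips" % len(clips or []))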

Resolve

I have to admit, we were hesitant at first because I was shooting and capturing log in QuickTime ProRes 4:4:4:4, and I had always avoided DNG raw because of the huge file sizes and data transfer. But the team at Blackmagic has always been so supportive, providing help right up until the end of the shoot, and after testing BRaw I was impressed. We had so much control, as all that information is accessible within Resolve. I was able to set the temp look during editing, and the colorist worked from there. Skin tones were of utmost importance; because of the intimate nature of the drama, I wanted a natural look to the skin tones. I am really happy with the way they came out in the end.

The team in LA couldn’t believe how cinematic the footage looked when we told them we had shot it on the Pocket Cinema Camera, since the other segments were shot on cameras like Red. We delivered the same 4K deliverables spec as the other segments in the film.

HaZ on set, second from right.

I used the AMD Radeon RX Vega 56 version of the Blackmagic eGPU. I wanted to edit on my MacBook Pro (late 2017) and needed the power to run 4K in realtime. I was so impressed with how much power it provided; it was like having a new MacBook Pro without having to buy one. The eGPU also gave me all the connectivity (two Thunderbolt and four USB 3 ports) I needed, which addresses a limitation of the MacBook Pro.

The beauty of keeping everything native was that there wasn’t much work to do when porting, as it’s just plug and play, and Resolve detects the eGPU, which you can then set as the default. The BRaw format makes it all so manageable to preview and play back in real time. Also, since it’s native, Resolve doesn’t need to do any transcoding in the background. I have always been a huge fan of the tracking in Resolve, and I was able to do eye effects very easily without them being budgeted or done as VFX shots. I was able to get the VFX render assets from the visual effects artist (Justin Martinez) in LA and do quick slap comps during editing. I love the idea that I can set looks and store them as memories, which I can then recall very quickly to apply to a bunch of shots. This allowed me to have a slick-looking preview rough cut of the film.

Portals

I sent a hard drive containing all the organized rushes to the team in LA while I was doing the final tweaks to the edit. Once the edit was signed off, or if any last-minute notes came in, I would address them and email them my Resolve file. It was super simple, and the colorist (Oliver Ojeil) and the post team (Chad Van Horn and Danny Barone) in LA appreciated the simple workflow because there really wasn’t any conforming for them to do apart from a one-click relink of media locations; they would just take my Resolve file and start working away with it.

We used practical effects to keep the horror as real and grounded as possible and used VFX to augment further. We were fortunate to be able to get special effects makeup artist Kate Griffiths. Given the tight schedule, she was able to create a terrifying effect, which I won’t give away. You need to watch the film to see it! We had to shoot those makeup-FX-heavy shots at the end of the day, which meant we had to be smart about how we scheduled the shoot given the hours-long makeup process. Kate was also on hand to provide effects like the liquid coming out of the eyes, sweat, etc. — every detail of which the camera picked up for us so we could bring it out in the grade.

The Skype-style shots at the start of the film (phone and computer monitor shots) had their VFX screen elements placed as a separate layer so the post team in LA could grade them separately and control the filters applied on them. For some of the wide shots showing our characters entering and leaving the portal, we keyframed some movement of the 4K shot along with motion blur to give the effect of in-camera movement. I also used the camera shake within Resolve, which comes with so many options to create bespoke movement on static frames.

Portals is now available on iTunes and other VOD platforms.


HaZ Dulull is known for his sci-fi feature films The Beyond and 2036 Origin Unknown, as well as for his television work on the pilot and episodes of Disney’s Fast Layne. He is currently busy on projects at various stages of development and production at his production company, hazfilm.com.

Conductor Companion app targets VFX boutiques and freelancers

Conductor Technologies has introduced Conductor Companion, a desktop app designed to simplify the use of the cloud-based rendering service. Tailored for boutique studios and freelance artists, Companion streamlines the Conductor on-ramp and rendering experience, allowing users to easily manage and download files, write commands and handle custom submissions or plug-ins from their laptops or workstations. Along with this release, Conductor has added initial support for Blender creative software.

“Conductor was originally designed to meet the needs of larger VFX studios, focusing our efforts on maximizing efficiency and scalability when many artists simultaneously leverage the platform and optimizing how Conductor hooks into those pipelines,” explains CEO Mac Moore. “As Conductor’s user base has grown, we’ve been blown away by the number of freelance artists and small studios that have come to us for help, each of which has their own unique needs. Conductor Companion is a nod to that community, bringing all the functionality and massive render resource scale of Conductor into a user-friendly app, so that artists can focus on content creation versus pipeline management. And given that focus, it was a no-brainer to add Blender support, and we are eager to serve the passionate users of that product.”

Moore reports that this app will be the foundation of Conductor’s Intelligence Hub in the near future, “acting as a gateway to more advanced functionality like Shot Analytics and Intelligent Bid Assist. These features will leverage AI and Conductor’s cloud knowledge to help owners and freelancers make more informed business decisions as it pertains to project-to-project rendering financials.”

Conductor Companion is currently in public beta. You can download the app here.

In addition to Blender, applications currently supported by Conductor include Autodesk Maya and Arnold; Foundry’s Nuke, Cara VR, Katana, Modo and Ocula; Chaos Group’s V-Ray; Pixar’s RenderMan; Isotropix’s Clarisse; Golaem; Ephere’s Ornatrix; Yeti; and Miarmy.

The Mill opens boutique studio in Berlin

Technicolor’s The Mill has officially launched in Berlin. This new boutique studio is located in the heart of Berlin, situated in the creative hub of Mitte, near many of Germany’s agencies, production companies and brands.

The Mill has been working with German clients for years. Recent projects include the Mercedes’ Bertha Benz spot with director Sebastian Strasser; Netto’s The Easter Surprise, directed in-house by The Mill; and BMW The 8 with director Daniel Wolfe. The new studio will bring The Mill’s full range of creative services from color to experiential and interactive, as well as visual effects and design.

The Mill Berlin crew

Creative director Greg Spencer will lead the creative team. He is a multi-award winning creative, having won several VES, Cannes Lions and British Arrow awards. His recent projects include Carlsberg’s The Lake, PlayStation’s This Could Be You and Eve Cuddly Toy. Spencer also played a role in some of Mill Film’s major titles. He was the 2D supervisor for Les Misérables and also worked on the Lord of the Rings trilogy. His resume also includes campaigns for brands such as Nike and Samsung.

Executive producer Justin Stiebel moves from The Mill London, where he has been since early 2014, to manage client relationships and new business. Since joining the company, Stiebel has produced spots such as Audi’s Next Level and the Mini’s “The Faith of a Few” campaign. He has also collaborated with directors such as Sebastian Strasser, Markus Walter and Daniel Wolfe while working on brands like Mercedes, Audi and BMW.

Sean Costelloe is managing director of The Mill London and The Mill Berlin.

Main Image Caption: (L-R) Justin Stiebel and Greg Spencer

Directing Olly’s ‘Happy Inside Out’ campaign

How do you express how vitamins make you feel? Well, production company 1stAveMachine partnered with independent creative agency Yard NYC to develop the stylized “Happy Inside Out” campaign for Olly multivitamin gummies to show just that.

Beauty

The directing duo of Erika Zorzi and Matteo Sangalli, known as Mathery, highlighted the brand’s products and benefits by using rich textures, colors and lighting. They shot on an ARRI Alexa Mini. “Our vision was to tell a cohesive narrative, where each story of the supplements spoke the same visual language,” Mathery explains. “We created worlds where everything is possible and sometimes took each product’s concept to the extreme and other times added some romance to it.”

Each spot imagines various benefits of taking Olly products. The side-scrolling Energy, which features a green palette, shows a woman jumping and doing flips through life’s everyday challenges, including getting from her home to work, doing laundry and going to the movies. Beauty, with its pink color palette, features another woman “feeling beautiful” while turning the heads of a parliament of owls. Meanwhile, Stress, with its purple/blue palette, features a woman tied up in a giant ball of yarn; as she unspools herself, the things that were tying her up spin away. In the purple-shaded Sleep, a lady lies in bed pulling off layer after layer of sleep masks until she just happily sleeps.

Sleep

The spots were shot with minimal VFX, other than a few greenscreen moments, and the team found itself making decisions on the fly, constantly managing logistics for stunt choreography, animal performances and wardrobe. Jogger Studios provided the VFX using Autodesk Flame for conform, cleanup and composite work. Adobe After Effects was used for all of the end tag animation. Cut+Run edited the campaign.

According to Mathery, “The acrobatic moves and obstacle pieces in the Energy spot were rehearsed on the same day of the shoot. We had to be mindful because the action was physically demanding on the talent. With the Beauty spot, we didn’t have time to prepare with the owls. We had no idea if they would move their heads on command or try to escape and fly around the whole time. For the Stress spot, we experimented with various costume designs and materials until we reached a look that humorously captured the concept.”

The campaign marks Mathery’s second collaboration with Yard NYC and Olly, who brought the directing team into the fold very early on, during the initial stages of the project. This familiarity gave everyone plenty of time to let the ideas breathe.

VES Awards: The Lion King and Alita earn five noms each

The Visual Effects Society (VES) has announced its nominees for the 18th Annual VES Awards, which recognize outstanding visual effects artistry and innovation in film, animation, television, commercials and video games and the VFX supervisors, VFX producers and hands-on artists who bring this work to life. Alita: Battle Angel and The Lion King have five nominations each; Toy Story 4 is the top animated film contender with five nominations; and Game of Thrones and The Mandalorian tie to lead the broadcast field with six nominations each.

Nominees in 25 categories were selected by VES members via events hosted by 11 VES sections, including Australia, the Bay Area, Germany, London, Los Angeles, Montreal, New York, New Zealand, Toronto, Vancouver and Washington.

The VES Awards will be held on January 29 at the Beverly Hilton Hotel. The VES Lifetime Achievement Award will be presented to Academy, DGA and Emmy Award-winning director-producer-screenwriter Martin Scorsese. The VES Visionary Award will be presented to director-producer-screenwriter Roland Emmerich. And the VES Award for Creative Excellence will be given to visual effects supervisor Sheena Duggal. Award-winning actor-comedian-author Patton Oswalt will once again host the event.

The nominees for the 18th Annual VES Awards in 25 categories are:

 

Outstanding Visual Effects in a Photoreal Feature

 

ALITA: BATTLE ANGEL

Richard Hollander

Kevin Sherwood

Eric Saindon

Richard Baneham

Bob Trevino

 

AVENGERS: ENDGAME

Daniel DeLeeuw

Jen Underdahl

Russell Earl

Matt Aitken

Daniel Sudick

 

GEMINI MAN

Bill Westenhofer

Karen Murphy-Mundell

Guy Williams

Sheldon Stopsack

Mark Hawker

 

STAR WARS: THE RISE OF SKYWALKER

Roger Guyett

Stacy Bissell

Patrick Tubach

Neal Scanlan

Dominic Tuohy

 

THE LION KING

Robert Legato

Tom Peitzman

Adam Valdez

Andrew R. Jones

 

Outstanding Supporting Visual Effects in a Photoreal Feature

 

1917

Guillaume Rocheron

Sona Pak

Greg Butler

Vijay Selvam

Dominic Tuohy

 

FORD V FERRARI

Olivier Dumont

Kathy Siegel

Dave Morley

Malte Sarnes

Mark Byers

 

JOKER

Edwin Rivera

Brice Parker

Mathew Giampa

Bryan Godwin

Jeff Brink

 

THE AERONAUTS

Louis Morin

Annie Godin

Christian Kaestner

Ara Khanikian

Mike Dawson

 

THE IRISHMAN

Pablo Helman

Mitch Ferm

Jill Brooks

Leandro Estebecorena

Jeff Brink

 

Outstanding Visual Effects in an Animated Feature

 

FROZEN 2

Steve Goldberg

Peter Del Vecho

Mark Hammel

Michael Giaimo

 

KLAUS

Sergio Pablos

Matthew Teevan

Marcin Jakubowski

Szymon Biernacki

 

MISSING LINK

Brad Schiff

Travis Knight

Steve Emerson

Benoit Dubuc

 

THE LEGO MOVIE 2

David Burgess

Tim Smith

Mark Theriault

John Rix

 

TOY STORY 4

Josh Cooley

Mark Nielsen

Bob Moyer

Gary Bruins

 

Outstanding Visual Effects in a Photoreal Episode

 

GAME OF THRONES; The Bells

Joe Bauer

Steve Kullback

Ted Rae

Mohsen Mousavi

Sam Conway

 

HIS DARK MATERIALS; The Fight to the Death

Russell Dodgson

James Whitlam

Shawn Hillier

Robert Harrington

 

LADY AND THE TRAMP

Robert Weaver

Christopher Raimo

Arslan Elver

Michael Cozens

Bruno Van Zeebroeck

 

LOST IN SPACE – Episode: Ninety-Seven

Jabbar Raisani

Terron Pratt

Niklas Jacobson

Juri Stanossek

Paul Benjamin

 

STRANGER THINGS – Chapter Six: E Pluribus Unum

Paul Graff

Tom Ford

Michael Maher Jr.

Martin Pelletier

Andy Sowers

 

THE MANDALORIAN; The Child

Richard Bluff

Abbigail Keller

Jason Porter

Hayden Jones

Roy Cancinon

 

Outstanding Supporting Visual Effects in a Photoreal Episode

 

CHERNOBYL; 1:23:45

Max Dennison

Lindsay McFarlane

Clare Cheetham

Paul Jones

Claudius Christian Rauch

 

LIVING WITH YOURSELF; Nice Knowing You

Jay Worth

Jacqueline VandenBussche

Chris Wright

Tristan Zerafa

 

SEE; Godflame

Adrian de Wet

Eve Fizzinoglia

Matthew Welford

Pedro Sabrosa

Tom Blacklock

 

THE CROWN; Aberfan

Ben Turner

Reece Ewing

David Fleet

Jonathan Wood

 

VIKINGS; What Happens in the Cave

Dominic Remane

Mike Borrett

Ovidiu Cinazan

Tom Morrison

Paul Byrne

 

Outstanding Visual Effects in a Real-Time Project

 

Call of Duty Modern Warfare

Charles Chabert

Chris Parise

Attila Zalanyi

Patrick Hagar

 

Control

Janne Pulkkinen

Elmeri Raitanen

Matti Hämäläinen

James Tottman

 

Gears 5

Aryan Hanbeck

Laura Kippax

Greg Mitchell

Stu Maxwell

 

Myth: A Frozen Tale

Jeff Gipson

Nicholas Russell

Brittney Lee

Jose Luis Gomez Diaz

 

Vader Immortal: Episode I

Ben Snow

Mike Doran

Aaron McBride

Steve Henricks

 

Outstanding Visual Effects in a Commercial

 

Anthem Conviction

Viktor Muller

Lenka Likarova

Chris Harvey

Petr Marek

 

BMW Legend

Michael Gregory

Christian Downes

Tim Kafka

Toya Drechsler

 

Hennessy: The Seven Worlds

Carsten Keller

Selcuk Ergen

Kiril Mirkov

William Laban

 

PlayStation: Feel The Power of Pro

Sam Driscoll

Clare Melia

Gary Driver

Stefan Susemihl

 

Purdey’s: Hummingbird

Jules Janaud

Emma Cook

Matthew Thomas

Philip Child

 

Outstanding Visual Effects in a Special Venue Project

 

Avengers: Damage Control

Michael Koperwas

Shereif Fattouh

Ian Bowie

Kishore Vijay

Curtis Hickman

 

Jurassic World: The Ride

Hayden Landis

Friend Wells

Heath Kraynak

Ellen Coss

 

Millennium Falcon: Smugglers Run

Asa Kalama

Rob Huebner

Khatsho Orfali

Susan Greenhow

 

Star Wars: Rise of the Resistance

Jason Bayever

Patrick Kearney

Carol Norton

Bill George

 

Universal Sphere

James Healy

Morgan MacCuish

Ben West

Charlie Bayliss

 

Outstanding Animated Character in a Photoreal Feature

 

ALITA: BATTLE ANGEL; Alita

Michael Cozens

Mark Haenga

Olivier Lesaint

Dejan Momcilovic

 

AVENGERS: ENDGAME; Smart Hulk

Kevin Martel

Ebrahim Jahromi

Sven Jensen

Robert Allman

 

GEMINI MAN; Junior

Paul Story

Stuart Adcock

Emiliano Padovani

Marco Revelant

 

THE LION KING; Scar

Gabriel Arnold

James Hood

Julia Friedl

Daniel Fortheringham

 

Outstanding Animated Character in an Animated Feature

 

FROZEN 2; The Water Nøkk

Svetla Radivoeva

Marc Bryant

Richard E. Lehmann

Cameron Black

 

KLAUS; Jesper

Yoshimishi Tamura

Alfredo Cassano

Maxime Delalande

Jason Schwartzman

 

MISSING LINK; Susan

Rachelle Lambden

Brenda Baumgarten

Morgan Hay

Benoit Dubuc

 

TOY STORY 4; Bo Peep

Radford Hurn

Tanja Krampfert

George Nguyen

Becki Rocha Tower

 

Outstanding Animated Character in an Episode or Real-Time Project

 

LADY AND THE TRAMP; Tramp

Thiago Martins

Arslan Elver

Stanislas Paillereau

Martine Chartrand

 

STRANGER THINGS 3; Tom/Bruce Monster

Joseph Dubé-Arsenault

Antoine Barthod

Frederick Gagnon

Xavier Lafarge

 

THE MANDALORIAN; The Child; Mudhorn

Terry Bannon

Rudy Massar

Hugo Leygnac

 

THE UMBRELLA ACADEMY; Pilot; Pogo

Aidan Martin

Craig Young

Olivier Beierlein

Laurent Herveic

 

Outstanding Animated Character in a Commercial

 

Apex Legends; Meltdown; Mirage

Chris Bayol

John Fielding

Derrick Sesson

Nole Murphy

 

Churchill; Churchie

Martino Madeddu

Philippe Moine

Clement Granjon

Jon Wood

 

Cyberpunk 2077; Dex

Jonas Ekman

Jonas Skoog

Marek Madej

Grzegorz Chojnacki

 

John Lewis; Excitable Edgar; Edgar

Tim van Hussen

Diarmid Harrison-Murray

Amir Bazzazi

Michael Diprose

 

 

Outstanding Created Environment in a Photoreal Feature

 

ALADDIN; Agrabah

Daniel Schmid

Falk Boje

Stanislaw Marek

Kevin George

 

ALITA: BATTLE ANGEL; Iron City

John Stevenson-Galvin

Ryan Arcus

Mathias Larserud

Mark Tait

 

MOTHERLESS BROOKLYN; Penn Station

John Bair

Vance Miller

Sebastian Romero

Steve Sullivan

 

STAR WARS: THE RISE OF SKYWALKER; Pasaana Desert

Daniele Bigi

Steve Hardy

John Seru

Steven Denyer

 

THE LION KING; The Pridelands

Marco Rolandi

Luca Bonatti

Jules Bodenstein

Filippo Preti

 

 

Outstanding Created Environment in an Animated Feature

 

FROZEN 2; Giants’ Gorge

Samy Segura

Jay V. Jackson

Justin Cram

Scott Townsend

 

HOW TO TRAIN YOUR DRAGON: THE HIDDEN WORLD; The Hidden World

Chris Grun

Ronnie Cleland

Ariel Chisholm

Philippe Brochu

 

MISSING LINK; Passage to India Jungle

Oliver Jones

Phil Brotherton

Nick Mariana

Ralph Procida

 

TOY STORY 4; Antiques Mall

Hosuk Chang

Andrew Finley

Alison Leaf

Philip Shoebottom

 

 

Outstanding Created Environment in an Episode, Commercial, or Real-Time Project

 

GAME OF THRONES; The Iron Throne; Red Keep Plaza

Carlos Patrick DeLeon

Alonso Bocanegra Martinez

Marcela Silva

Benjamin Ross

 

LOST IN SPACE; Precipice; The Trench

Philip Engström

Benjamin Bernon

Martin Bergquist

Xuan Prada

 

THE DARK CRYSTAL: AGE OF RESISTANCE; The Endless Forest

Sulé Bryan

Charles Chorein

Christian Waite

Martyn Hawkins

 

THE MANDALORIAN; Nevarro Town

Alex Murtaza

Yanick Gaudreau

Marco Tremblay

Maryse Bouchard

 

Outstanding Virtual Cinematography in a CG Project

 

ALITA: BATTLE ANGEL

Emile Ghorayeb

Simon Jung

Nick Epstein

Mike Perry

 

THE LION KING

Robert Legato

Caleb Deschanel

Ben Grossmann

AJ Sciutto

 

THE MANDALORIAN; The Prisoner; The Roost

Richard Bluff

Jason Porter

Landis Fields IV

Baz Idione

 

 

TOY STORY 4

Jean-Claude Kalache

Patrick Lin

 

Outstanding Model in a Photoreal or Animated Project

 

LOST IN SPACE; The Resolute

Xuan Prada

Jason Martin

Jonathan Vårdstedt

Eric Andersson

 

MISSING LINK; The Manchuria

Todd Alan Harvey

Dan Casey

Katy Hughes

 

THE MAN IN THE HIGH CASTLE; Rocket Train

Neil Taylor

Casi Blume

Ben McDougal

Chris Kuhn

 

THE MANDALORIAN; The Sin; The Razorcrest

Doug Chiang

Jay Machado

John Goodson

Landis Fields IV

 

Outstanding Effects Simulations in a Photoreal Feature

 

DUMBO; Bubble Elephants

Sam Hancock

Victor Glushchenko

Andrew Savchenko

Arthur Moody

 

SPIDER-MAN: FAR FROM HOME; Molten Man

Adam Gailey

Jacob Santamaria

Jacob Clark

Stephanie Molk

 

STAR WARS: THE RISE OF SKYWALKER

Don Wong

Thibault Gauriau

Goncalo Cababca

Francois-Maxence Desplanques

 

THE LION KING

David Schneider

Samantha Hiscock

Andy Feery

Kostas Strevlos

 

Outstanding Effects Simulations in an Animated Feature

 

ABOMINABLE

Alex Timchenko

Domin Lee

Michael Losure

Eric Warren

 

FROZEN 2

Erin V. Ramos

Scott Townsend

Thomas Wickes

Rattanin Sirinaruemarn

 

HOW TO TRAIN YOUR DRAGON: THE HIDDEN WORLD; Water and Waterfalls

Derek Cheung

Baptiste Van Opstal

Youxi Woo

Jason Mayer

 

TOY STORY 4

Alexis Angelidis

Amit Baadkar

Lyon Liew

Michael Lorenzen

 

Outstanding Effects Simulations in an Episode, Commercial, or Real-Time Project

 

GAME OF THRONES; The Bells

Marcel Kern

Paul Fuller

Ryo Sakaguchi

Thomas Hartmann

 

Hennessy: The Seven Worlds

Selcuk Ergen

Radu Ciubotariu

Andreu Lucio

Vincent Ullmann

 

LOST IN SPACE; Precipice; Water Planet

Juri Bryan

Hugo Medda

Kristian Olsson

John Perrigo

 

STRANGER THINGS 3; Melting Tom/Bruce

Nathan Arbuckle

Christian Gaumond

James Dong

Aleksandr Starkov

 

THE MANDALORIAN; The Child; Mudhorn

Xavier Martin Ramirez

Ian Baxter

Fabio Siino

Andrea Rosa

 

Outstanding Compositing in a Feature

 

ALITA: BATTLE ANGEL

Adam Bradley

Carlo Scaduto

Hirofumi Takeda

Ben Roberts

 

AVENGERS: ENDGAME

Tim Walker

Blake Winder

Tobias Wiesner

Joerg Bruemmer

 

CAPTAIN MARVEL; Young Nick Fury

Trent Claus

David Moreno Hernandez

Jeremiah Sweeney

Yuki Uehara

 

STAR WARS: THE RISE OF SKYWALKER

Jeff Sutherland

John Galloway

Sam Bassett

Charles Lai

 

THE IRISHMAN

Nelson Sepulveda

Vincent Papaix

Benjamin O’Brien

Christopher Doerhoff

 

 

Outstanding Compositing in an Episode

 

GAME OF THRONES; The Bells

Sean Heuston

Scott Joseph

James Elster

Corinne Teo

 

GAME OF THRONES; The Long Night; Dragon Ground Battle

Mark Richardson

Darren Christie

Nathan Abbott

Owen Longstaff

 

STRANGER THINGS 3; Starcourt Mall Battle

Simon Lehembre

Andrew Kowbell

Karim El-Masry

Miklos Mesterhazy

 

WATCHMEN; Pilot; Looking Glass

Nathaniel Larouche

Iyi Tubi

Perunika Yorgova

Mitchell Beaton

 

Outstanding Compositing in a Commercial

 

BMW Legend

Toya Drechsler

Vivek Tekale

Guillaume Weiss

Alexander Kulikov

 

Feeding America; I Am Hunger in America

Dan Giraldo

Marcelo Pasqualino

Alexander Koester

 

Hennessy; The Seven Worlds

Rod Norman

Guillaume Weiss

Alexander Kulikov

Alessandro Granella

 

PlayStation: Feel the Power of Pro

Gary Driver

Stefan Susemihl

Greg Spencer

Theajo Dharan

 

Outstanding Special (Practical) Effects in a Photoreal or Animated Project

 

ALADDIN; Magic Carpet

Mark Holt

Jay Mallet

Will Wyatt

Dickon Mitchell

 

GAME OF THRONES; The Bells

Sam Conway

Terry Palmer

Laurence Harvey

Alastair Vardy

 

TERMINATOR: DARK FATE

Neil Corbould

David Brighton

Ray Ferguson

Keith Dawson

 

THE DARK CRYSTAL: THE AGE OF RESISTANCE; She Knows All the Secrets

Sean Mathiesen

Jon Savage

Toby Froud

Phil Harvey

 

Outstanding Visual Effects in a Student Project

 

DOWNFALL

Matias Heker

Stephen Moroz

Bradley Cocksedge

 

LOVE AND FIFTY MEGATONS

Denis Krez

Josephine Roß

Paulo Scatena

Lukas Löffler

 

OEIL POUR OEIL

Alan Guimont

Thomas Boileau

Malcom Hunt

Robin Courtoise

 

THE BEAUTY

Marc Angele

Aleksandra Todorovic

Pascal Schelbli

Noel Winzen

 

 

Recreating the Vatican and Sistine Chapel for Netflix’s The Two Popes

The Two Popes, directed by Fernando Meirelles, stars Anthony Hopkins as Pope Benedict XVI and Jonathan Pryce as current pontiff Pope Francis in a story about one of the most dramatic transitions of power in the Catholic Church’s history. The film follows a frustrated Cardinal Bergoglio (the future Pope Francis) who in 2012 requests permission from Pope Benedict to retire because of his issues with the direction of the church. Instead, facing scandal and self-doubt, the introspective Benedict summons his harshest critic and future successor to Rome to reveal a secret that would shake the foundations of the Catholic Church.

London’s Union was approached in May 2017 and supervised visual effects on location in Argentina and Italy over several months. A large proportion of the film takes place within the walls of Vatican City. The Vatican was not involved in the production and the team had very limited or no access to some of the key locations.

Under the direction of production designer Mark Tildesley, the production replicated parts of the Vatican at Rome’s Cinecitta Studios, including a life-size, open-ceiling Sistine Chapel, which took two months to build.

The team LIDAR-scanned everything available and set about amassing as much reference material as possible — photographing from a permitted distance, scanning the set builds and buying every photographic book they could lay their hands on.

From this material, the team set about building 3D models — created in Autodesk Maya — of St. Peter’s Square, the Basilica and the Sistine Chapel. The environments team was tasked with texturing all of these well-known locations using digital matte painting techniques, including recreating Michelangelo’s masterpiece on the ceiling of the Sistine Chapel.

The story centers on two key changes of pope, in 2005 and 2013. Those events attracted huge attention, filling St. Peter’s Square with people eager to discover the identity of the new pope and celebrate his ascension. News crews from around the world also camped out to provide coverage for the billions of Catholics all over the world.

To recreate these scenes, the crew shot at a school in Rome (Ponte Mammolo) that has the same pattern on its floor. A cast of 300 extras was shot in blocks in different positions at different times of day, with costume tweaks including the addition of umbrellas to build a library that would provide enough flexibility during post to recreate these moments at different times of day and in different weather conditions.

Union also called on Clear Angle Studios to individually scan 50 extras to provide additional options for the VFX team. This was an ambitious crowd project; the team couldn’t shoot in the actual location, and the end result had to stand up at 4K in very close proximity to the camera. Union designed a Houdini-based system to deal with the number of assets and clothing in such a way that the studio could easily art-direct the crowd members as individuals, allow the director to choreograph them and deliver a believable result.

Union conducted several motion capture shoots in-house to provide specific animation cycles that married with the occasions being recreated. This provided even more authentic-looking crowds for the post team.

Union worked on a total of 288 VFX shots, including greenscreens, set extensions, window reflections, muzzle flashes, fog and rain and a storm that included a lightning strike on the Basilica.

In addition, the team did a significant amount of de-aging work to accommodate the film’s eight-year main narrative timeline as well as a long period in Pope Francis’ younger years.

ILM’s Pablo Helman on The Irishman‘s visual effects

By Karen Moltenbrey

When a film stars Robert De Niro, Joe Pesci and Al Pacino, well, expectations are high. These are no ordinary actors, and Martin Scorsese is no ordinary director. These are movie legends. And their latest project, Netflix’s The Irishman, is no ordinary film. It features cutting-edge de-aging technology from visual effects studio Industrial Light & Magic (ILM) and earned the film’s VFX supervisor, Pablo Helman, an Oscar nomination.

The Irishman, adapted from the book “I Heard You Paint Houses,” tells the story of an elderly Frank “The Irishman” Sheeran (De Niro), whose life is nearing the end, as he looks back on his earlier years as a truck driver-turned-mob hitman for Russell Bufalino (Pesci) and family. While reminiscing, he recalls the role he played in the disappearance of his longtime friend, Jimmy Hoffa (Al Pacino), former president of the Teamsters, who famously disappeared in 1975 at the age of 62, and whose body has never been found.

The film contains 1,750 visual effects shots, most of which involve the de-aging of the three actors. In the film, the actors are depicted at various stages of their lives — mostly younger than their present age. Pacino is the least aged of the three actors, since he enters the story about a third of the way through — from the 1940s to his disappearance three decades later. He was 78 at the time of filming, and he plays Hoffa at various ages, from age 44 to 62. De Niro, who was 76 at the time of filming, plays Sheeran at certain points from age 20 to 80. Pesci plays Bufalino between age 53 and 83.

For the significantly older Sheeran, during his introspection, makeup was used. However, making the younger versions of all three actors was much more difficult. Indeed, current technology makes it possible to create believable younger digital doubles. But, it typically requires actors to perform alone on a soundstage wearing facial markers and helmet cameras, or requires artists to enhance or create performances with CG animation. That simply would not do for this film. Neither the actors nor Scorsese wanted the tech to interfere with the acting process in any way. Recreating their performances was also off the table.

“They wanted a technology that was non-intrusive and one that would be completely separate from the performances. They didn’t want markers on their faces, they did not want to wear helmet cams and they did not want to wear the gray [markered] pajamas that we normally use,” says VFX supervisor Helman. “They also wanted to be on set with theatrical lighting, and there wasn’t going to be any kind of re-shoots of performances outside the set.”

In a nutshell, ILM needed a markerless approach that occurred on-set during filming. To this end, ILM spent two years developing Flux, a new camera system and software, whereby a three-camera rig would extract performance data from lighting and textures captured on set and translate that to 3D computer-generated versions of the actors’ younger selves.

The camera rig was developed in collaboration with The Irishman’s DP, Rodrigo Prieto, and camera maker ARRI. It included two high-resolution (3.8K) Alexa Mini witness cameras that were modified with infrared rings; the two cameras were attached to and synched up with the primary sensor camera (the director’s Red Helium 8K camera). The infrared light from the two cameras was necessary to help neutralize any shadows on the actors’ faces, since Flux does not handle shadows well, yet remained “unseen” by the production camera.

Flux, meanwhile, used that camera information and translated it into a deformable geometry mesh. “Flux takes that information from the three cameras and compares it to the lighting on set, deforms the geometry and changes the geometry and the shape of the actors on a frame-by-frame basis,” says Helman.

In fact, ILM continued to develop the software as it was working on the film. “It’s kind of like running the Grand Prix while you’re building the Ferrari,” Helman adds. “Then, you get better and better, and faster and faster, and your software gets better, and you are solving problems and learning from the software. Yes, it took a long time to do, but we knew we had time to do it and make it work.”

Pablo Helman (right) on The Irishman set.

At the beginning of the project, prior to the filming, the actors were digitally scanned performing a range of facial movements using ILM’s Medusa system, as well as on a light stage, which captured texture info under different lighting conditions. All that data was then used to create a 3D contemporary digital double of each of the actors. The models were sculpted in Autodesk’s Maya and with proprietary tools running on ILM’s Zeno platform.

ILM applied the 3D models to the exact performance data of each actor captured on set with the special camera rig, so the physical performances were now digital. No keyframe animation was used. However, the characters were still contemporary to the actors’ ages.

As Helman explains, after the performance, the footage was returned to ILM, where an intense matchmove was done of the actors’ bodies and heads. “The first thing that got matchmoved was the three cameras that were documenting what the actor was doing in the performance, and then we matchmoved the lighting instruments that were lighting the actor because Flux needs that lighting information in order to work,” he says.

Helman likens Flux to a black box full of little drawers where various aspects are inserted, like the layout, the matchimation, the lighting information and so forth, and it combines all that information to come up with the geometry for the digital double.

The actual de-aging occurs in modeling using a combination of libraries that were created for each actor and connected to and referenced by Flux. Later, modelers created the age variations, starting with the youngest version of each person. Variants were then generated gradually using a slider to move through life’s timeline. This process was labor-intensive as artists had to also erase the effects of time, such as wrinkles and age spots.

Since The Irishman is not an action movie, creating motion for decades-younger versions of the characters was not an issue. However, a motion analyst was on set to work with the actors as they played the younger versions of their characters. Also, some visual effects work helped thin out the younger characters.

Helman points out that Scorsese stressed that he did not want to see a younger version of the actors playing roles from the past; he wanted to see younger versions of these particular characters. “He did not want to rewind the clock and see Robert De Niro as Jimmy Conway in 1990’s Goodfellas. He wanted to see De Niro as a 30-year-younger Frank Sheeran,” he explains.

When asked which actor posed the most difficulty to de-age, Helman explains that once you crack the code of capturing the performance and then retargeting the performance to a younger variation of the character, there’s little difference. Nevertheless, De Niro had the most screen time and the widest age range.

Performance capture began about 15 years ago, and Helman sees this achievement as a natural evolution of the technology. “Eventually those [facial] markers had to go away because for actors, that’s a very interesting way to work, if you really think about it. They have to try to ignore the markers and not be distracted by all the other intrusive stuff going on,” Helman says. “That time is now gone. If you let the actors do what they do, the performances will be so much better and the shots will look so much better because there is eye contact and context with another actor.”

While this technology is a quantum leap forward, there are still improvements to be made. The camera rig needs to get smaller and the software faster — and ILM is working on both aspects, Helman says. Nevertheless, the accomplishment made here is impressive and groundbreaking — the first markerless system that captures performance on set with theatrical lighting, thanks to more than 500 artists working around the world to make this happen. As a result, it opens up the door for more storytelling and acting options — not only for de-aging, but for other types of characters too.

Commenting on his Oscar nomination, Helman said, “It was an incredible, surreal experience to work with Scorsese and the actors, De Niro, Pacino and Pesci, on this movie. We are so grateful for the trust and support we got from the producers and from Netflix, and the talent and dedication of our team. We’re honored to be recognized by our colleagues with this nomination.”


Karen Moltenbrey is a veteran writer, covering visual effects and post production.

Maxon and Red Giant to merge

Maxon, developers of pro 3D software solutions, and Red Giant, makers of tools for editors, VFX artists, and motion designers, have agreed to merge under the media and entertainment division of Nemetschek Group. The transaction is expected to close in January 2020, subject to regulatory approval and customary closing conditions.

Maxon, best known for its 3D product Cinema 4D, was formed in 1986 to provide high-end yet accessible 3D software solutions. Artists across the globe rely on Maxon products to create high-end visuals. In April of this year, Maxon acquired Redshift, developer of the GPU-accelerated Redshift render engine.

Since 2002, Red Giant has built its brand through products such as Trapcode, Magic Bullet, Universe, PluralEyes and its line of visual effects software. Its tools are used in the fields of film, broadcast and advertising.

The two companies provide tools for companies including ABC, CBS, NBC, HBO, BBC, Sky, Fox Networks, Turner Broadcasting, NFL Network, WWE, Viacom, Netflix, ITV Creative, Discovery Channel, MPC, Digital Domain, VDO, Sony, Universal, The Walt Disney Company, Blizzard Entertainment, BMW, Facebook, Apple, Google, Vitra, Nike and many more.

Main Photo: L-R: Maxon CEO Dave McGavran and Red Giant CEO Chad Bechert

Shape+Light VFX boutique opens in LA with Trent, Lehr at helm


Visual effects and design boutique Shape+Light has officially launched in Santa Monica. At the helm are managing director/creative director Rob Trent and executive producer Cara Lehr. Shape+Light provides visual effects, design and finishing services for agency and brand-direct clients. The studio, which has been quietly operating since this summer, has already delivered work for Nike, Apple, Gatorade, Lexus and Procter & Gamble.

Gatorade

Trent is no stranger to running VFX boutiques. An industry veteran, he began his career as a Flame artist, working at studios including Imaginary Forces and Digital Domain, and then at Asylum VFX as a VFX supervisor/creative director before co-founding The Mission VFX in 2010. In 2015, he established Saint Studio. During his career he has worked on big campaigns, including the launch of the Apple iPhone with David Fincher, celebrating the NFL with Nike and Michael Mann, and honoring moms with Alma Har’el and P&G for the Olympics. He has also contributed to award-winning feature films such as The Curious Case of Benjamin Button, Minority Report, X-Men and Zodiac.

Lehr is an established VFX producer with over 20 years of experience in both commercials and features. She has worked for many of LA’s leading VFX studios, including Zoic Studios, Asylum VFX, Digital Domain, Brickyard VFX and Psyop. She most recently served as EP at Method Studios, where she was on staff since 2012. She has worked on ad campaigns for brands including Apple, Microsoft, Nike, ESPN, Coca Cola, Taco Bell, AT&T, the NBA, Chevrolet and more.

Maya 2020 and Arnold 6 now available from Autodesk

Autodesk has released Autodesk Maya 2020 and Arnold 6 with Arnold GPU. Maya 2020 brings animators, modelers, riggers and technical artists a host of new tools and improvements for CG content creation, while Arnold 6 allows for production rendering on both the CPU and GPU.

Maya 2020 adds more than 60 new updates, as well as performance enhancements and new simulation features to Bifrost, the visual programming environment in Maya.

Maya 2020

Release highlights include:

— Over 60 animation features and updates to the graph editor and time slider.
— Cached Playback: New preview modes, layered dynamics caching and more efficient caching of image planes.
— Animation bookmarks: Mark, organize and navigate through specific events in time and frame playback ranges.
— Bifrost for Maya: Performance improvements, Cached Playback support and new MPM cloth constraints.
— Viewport improvements: Users can interact with and select dense geometry or a large number of smaller meshes faster in the viewport and UV editors.
— Modeling enhancements: New Remesh and Retopologize features (see the sketch after this list).
— Rigging improvements: Matrix-driven workflows, nodes for precisely tracking positions on deforming geometry and a new GPU-accelerated wrap deformer.
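
As a rough illustration only (not Autodesk’s own example), the snippet below assumes the Remesh and Retopologize features are driven by the polyRemesh and polyRetopo commands and runs them with default settings on a hypothetical test mesh in the Maya 2020 Script Editor.

import maya.cmds as cmds
# Build a hypothetical test mesh and add some density to work with
src = cmds.polyCube(name="retopo_test")[0]
cmds.polySmooth(src, divisions=2)
cmds.select(src)
cmds.polyRemesh()    # redistribute triangles evenly across the surface
cmds.polyRetopo()    # rebuild the result as a cleaner, quad-dominant mesh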

The Arnold GPU is based on Nvidia’s OptiX framework and takes advantage of Nvidia RTX technology. Arnold 6 highlights include:

— Unified renderer: Toggle between CPU and GPU rendering (see the sketch after this list).
— Lights, cameras and more: Support for OSL, OpenVDB volumes, on-demand texture loading, most LPEs, lights, shaders and all cameras.
— Reduced GPU noise: Comparable to CPU noise levels when using adaptive sampling, which has been improved to yield faster, more predictable results regardless of the renderer used.
— Optimized for Nvidia RTX hardware: Scale up rendering power when production demands it.
— New USD components: Hydra render delegate, Arnold USD procedural and USD schemas for Arnold nodes and properties are now available on GitHub.

Arnold 6

— Performance improvements: Faster creased subdivisions, an improved Physical Sky shader and dielectric microfacet multiple scattering.
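
As a quick, unofficial illustration of that CPU/GPU toggle, a script could flip devices from Arnold’s Python API along these lines; this sketch assumes the render_device parameter that Arnold 6 exposes on its global options node (in Maya, the same switch lives in the Arnold Render Settings).

from arnold import AiBegin, AiEnd, AiUniverseGetOptions, AiNodeSetStr, AiNodeGetStr
AiBegin()                                        # start an Arnold session
options = AiUniverseGetOptions()                 # global render options node
AiNodeSetStr(options, "render_device", "GPU")    # render on the GPU
print("Render device:", AiNodeGetStr(options, "render_device"))
AiNodeSetStr(options, "render_device", "CPU")    # switch back to the CPU renderer
AiEnd()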

Maya 2020 and Arnold 6 are available now as standalone subscriptions or with a collection of end-to-end creative tools within the Autodesk Media & Entertainment Collection. Monthly, annual and three-year single-user subscriptions of Arnold are available on the Autodesk e-store.

Arnold GPU is also available to try with a free 30-day trial of Arnold 6. Arnold GPU is available in all supported plug-ins for Autodesk Maya, Autodesk 3ds Max, SideFX Houdini, Maxon Cinema 4D and Foundry Katana.

Storage for Visual Effects

By Karen Moltenbrey

When creating visual effects for a live-action film or television project, the artist digs right in. But not before the source files are received and backed up. Of course, during the process, storage again comes into play, as the artist’s work is saved and composited into the live-action file and then saved (and stored) yet again. At mid-sized Artifex Studios and the larger Jellyfish Pictures, two visual effects studios, storage might not be the sexiest part of the work they do, but it is vital to a successful outcome nonetheless.

Artifex Studios
An independent studio in Vancouver, BC, Artifex Studios is a small- to mid-sized visual effects facility producing film and television projects for networks, film studios and streaming services. Founded in 1997 by VFX supervisor Adam Stern, the studio has grown over the years from a one- to two-person operation to one staffed by 35 to 45 artists. During that time it has built up a lengthy and impressive resume, from Charmed, Descendants 3 and The Crossing to Mission to Mars, The Company You Keep and Apollo 18.

To handle its storage needs, Artifex uses the Qumulo QC24 four-node storage cluster for its main storage system, along with G-Tech and LaCie portable RAIDs and Angelbird Technologies and Samsung portable SSD drives. “We’ve been running [Qumulo] for several years now. It was a significant investment for us because we’re not a huge company, but it has been tremendously successful for us,” says Stern.

“The most important things for us when it comes to storage are speed, data security and minimal downtime. They’re pretty obvious things, but Qumulo offered us a system that eliminated one of the problems we had been having with the [previous] system bogging down as concurrent users were moving the files around quickly between compositors and 3D artists,” says Stern. “We have 40-plus people hitting this thing, pulling in 4K, 6K, 8K footage from it, rendering and [creating] 3D, and it just ticks along. That was huge for us.”

Of course, speed is of utmost importance, but so is maintaining the data’s safety. To this end, the new system self-monitors, taking its own snapshots to maintain its own health and making sure there are constantly rotating levels of backups. Having the ability to monitor everything about the system is a big plus for the studio as well.

Because data safety and security are non-negotiable, Artifex uses Google Cloud services along with Qumulo for incremental storage, backing up to Google Cloud every night. “So while Qumulo is doing its own snapshots incrementally, we have another hard-drive system from Synology, which is more of a prosumer NAS system, whose only job is to do a local current backup,” Stern explains. “So in-house, we have two local backups between Qumulo and Synology, and then we have a third backup going to the cloud every night that’s off-site. When a project is complete, we archive it onto two sets of local hard drives, and one leaves the premises and the other is stored here.” At this point, the material is taken off the Qumulo system, and seven days later, the last of the so-called snapshots is removed.
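
To make that three-tier idea concrete, here is a rough sketch in Python of what such a nightly job could look like (this is not Artifex’s actual tooling): mirror the primary share to the local Synology NAS with rsync, then push an incremental copy to a Google Cloud Storage bucket with gsutil. The paths, hostname and bucket name are placeholders.

import subprocess
SRC = "/mnt/qumulo/projects"                            # primary Qumulo share (placeholder path)
NAS = "backup@synology-nas:/volume1/backup/projects/"   # local NAS mirror (placeholder host)
BUCKET = "gs://example-offsite-backup/projects"         # off-site cloud copy (placeholder bucket)
# 1) Local mirror: only changed files are transferred, deletions are propagated
subprocess.run(["rsync", "-a", "--delete", SRC + "/", NAS], check=True)
# 2) Off-site incremental sync to Google Cloud Storage
subprocess.run(["gsutil", "-m", "rsync", "-r", SRC, BUCKET], check=True)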

As soon as data comes into Artifex — either via Aspera, Signiant’s Media Shuttle or hard disks — the material is immediately transferred to the Qumulo system, and then it is cataloged and placed into the studio’s ftrack database, which the studio uses for shot tracking. Then, as Stern says, the floodgates open, and all the artists, compositors, 3D team members and admin coordination team members access the material that resides on the Qumulo system.

Desktops at the studio have local storage, generally an SSD built into the machine, but as Stern points out, that is a temporary solution used by the artists while working on a specific shot, not to hold studio data.

Artifex generally works on a handful of projects simultaneously, including the Nickelodeon horror anthology Are You Afraid of the Dark? “Everything we do here requires storage, and we’re always dealing with high-resolution footage, and that project was no exception,” says Stern. For instance, the series required Artifex to simulate 10,000 CG cockroaches spilling out of every possible hole in a room — work that required a lot of high-speed caching.

“FX artists need to access temporary storage very quickly to produce those simulations. In terms of the Qumulo system, we need it to retrieve files at the speed our effects artists can simulate and cache, and make sure they are able to manage what can be thousands and thousands of files generated just within a few hours.”

Similarly, for Netflix’s Wu Assassins, the studio generated multiple simulations of CG smoke and fog within SideFX Houdini and again had to generate thousands and thousands of cache files for all the particles and volume information. Just as it did with the caching for the CG cockroaches, the current system handled caching for the smoke and fog quite efficiently.

At this point, Stern says the vendor is doing some interesting things that his company has not yet taken advantage of. For instance, today one of the big pushes is working in the cloud and integrating that with infrastructures and workflows. “I know they are working on that, and we’re looking into that,” he adds. There are also some new equipment features, “bleeding-edge stuff” Artifex has not explored yet. “It’s OK to be cutting-edge, but bleeding-edge is a little scary for us,” Stern notes. “I know they are always playing with new features, but just having the important foundation of speed and security is right where we are at the moment.”

Jellyfish Pictures
When it comes to big projects with big storage needs, Jellyfish Pictures is no fish out of water. The studio works on myriad projects, from Hollywood blockbusters like Star Wars to high-end TV series like Watchmen to episodic animation like Floogals and Dennis & Gnasher: Unleashed! Recently, it has embarked on an animated feature for DreamWorks and has a dedicated art department that works on visual development for substantial VFX projects and children’s animated TV content.

To handle all this work, Jellyfish has five studios across the UK: four in London and one in Sheffield, in the north of England. What’s more, in early December, Jellyfish expanded further with a brand-new virtual studio in London seating over 150 artists — increasing its capacity to over 300 people. In line with this expansion, Jellyfish is removing all on-site infrastructure from its existing locales and moving everything to a co-location. This means that all five present locations will be wholly virtual as well, making Jellyfish the largest VFX and animation studio in the world operating this way, contends CTO Jeremy Smith.

“We are dealing with shows that have very large datasets, which, therefore, require high-performance computing. It goes without saying, then, that we need some pretty heavy-duty storage,” says Smith.

Not only must the storage solution be able to handle Jellyfish’s data needs, it must also fit into its operational model. “Even though we work across multiple sites, we don’t want our artists to feel that. We need a storage system that can bring together all locations into one centralized hub,” Smith explains. “As a studio, we do not rely on one storage hardware vendor; therefore, we need to work with a company that is hardware-agnostic in addition to being able to operate in the cloud.”

Also, Jellyfish is a TPN-assessed studio and thus has to work with vendors that are TPN-compliant — another vital consideration when choosing its storage solution. TPN is an initiative between the Motion Picture Association of America (MPAA) and the Content Delivery and Security Association (CDSA) that provides a set of requirements and best practices for preventing leaks, breaches and hacks of pre-release, high-value media content.

With all those factors in mind, Jellyfish uses PixStor from Pixit Media for its storage solution. PixStor is a software-defined storage solution that allows the studio to use various hardware storage from other vendors under the hood. With PixStor, data moves seamlessly through many tiers of storage — from fast flash and disk tiers to cost-effective, high-capacity object storage to the cloud. In addition, the studio uses NetApp storage within a different part of the same workflow on Dell R740 hardware and alternates between SSD and spinning disks, depending on the purpose of the data and the file size.
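
As a generic illustration of the tiering idea (not PixStor's actual mechanism, which handles this transparently), a policy like "demote anything untouched for 30 days from the flash tier to the capacity tier" can be expressed in a few lines; the paths and threshold below are placeholders.

```python
# Generic storage-tiering sketch: move files not read for AGE_LIMIT_DAYS
# from a fast flash tier to a high-capacity tier. Paths are hypothetical.
import os
import shutil
import time

FAST_TIER = "/mnt/flash/projects"
CAPACITY_TIER = "/mnt/capacity/projects"
AGE_LIMIT_DAYS = 30

cutoff = time.time() - AGE_LIMIT_DAYS * 86400

for root, _dirs, files in os.walk(FAST_TIER):
    for name in files:
        src = os.path.join(root, name)
        if os.path.getatime(src) < cutoff:  # not accessed recently
            dst = os.path.join(CAPACITY_TIER, os.path.relpath(src, FAST_TIER))
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.move(src, dst)           # demote to the cheaper tier
```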

“We’ve future-proofed our studio with the Mellanox SN2100 switch for the heavy lifting, and for connecting our virtual workstations to the storage, we are using several servers from the Dell N3000 series,” says Smith.

As a wholly virtual studio, Jellyfish has no storage housed locally; it all sits in a co-location, which is accessed through remote workstations powered by Teradici’s PCoIP technology.

According to Smith, becoming a completely virtual studio is a new development for Jellyfish. Nevertheless, the facility has been working with Pixit Media since 2014 and launched its first virtual studio in 2017, “so the building blocks have been in place for a while,” he says.

Prior to moving all the infrastructure off-site, Jellyfish ran its storage system out of its Brixton and Soho studios locally. Its own private cloud from Brixton powered Jellyfish’s Soho and Sheffield studios. Both PixStor storage solutions in Brixton and Soho were linked with the solution’s PixCache. The switches and servers were still from Dell and Mellanox but were an older generation.

“Way back when, before we adopted this virtual world we are living in, we still worked with on-premises and inflexible storage solutions. It limited us in terms of the work we could take on and where we could operate,” says Smith. “With this new solution, we can scale up to meet our requirements.”

Now, however, using Mellanox SN2100, which has 100GbE, Jellyfish can deal with obscene amounts of data, Smith contends. “The way the industry is moving with 4K and 8K, even 16K being thrown around, we need to be ready,” he says.

Before the co-location, the different sites were connected through PixCache; now the co-location and public cloud are linked via Ngenea, which pre-caches files locally to the render node before the render starts. Furthermore, the studio is able to unlock true multi-tenancy with a single storage namespace, rapidly deploying logical TPN-accredited data separation and isolation and scaling up services as needed. “Probably two of the most important facets for us in running a successful studio: security and flexibility,” says Smith.

Artists access the storage via their Teradici Zero Clients, which, through the Dell switches, connect users to the standard Samba SMB network. Users who are working on realtime clients or in high resolution are connected to the Pixit storage through the Mellanox switch, where PixStor Native Client is used.

“Storage is a fundamental part of any VFX and animation studio’s workflow. Implementing the correct solution is critical to the seamless running of a project, as well as the security and flexibility of the business,” Smith concludes. “Any good storage system is invisible to the user. Only the people who build it will ever know the precision it takes to get it up and running — and that is the sign you’ve got the perfect solution.”


Karen Moltenbrey is a veteran writer, covering visual effects and post production.

Reallusion’s Headshot plugin for realistic digi-doubles via AI

Reallusion has introduced a plugin for Character Creator 3 to help create realistic-looking digital doubles. According to the company, the Headshot plugin uses AI technology to automatically generate a digital human in minutes from one single photo, and those characters are fully rigged for voice lipsync, facial expression and full body animation.

Headshot allows game developers and virtual production teams to quickly funnel a cast of digital doubles into iClone, Unreal, Unity, Maya, ZBrush and more. The idea is to allow the digital humans to go anywhere they like and give creators a solution to rapidly develop, iterate and collaborate in realtime.

The plugin has two AI modes: Auto Mode and Pro Mode. Auto Mode is a one-click solution for creating mid-rez digital human crowds, generating both the head and the hair for realtime 3D head models in a single pass. It also produces a separate 3D hair mesh with an alpha mask to soften edge lines. The 3D hair is fully compatible with Character Creator's conformable hair format (.ccHair), so users can add it to their hair library and apply it to other CC characters.

Headshot Pro Mode offers full control of the 3D head generation process with advanced features such as Image Matching, Photo Reprojection and Custom Mask, with texture resolution up to 4,096 pixels.

The Image Matching Tool overlays an image reference plane for advanced head shape refinement and lens correction. With Photo Reprojection, users can easily fix the texture-to-mesh discrepancies resulting from face morph change.

Using high-rez source images and Headshot's 1,000-plus morphs, users can get a scan-quality digital human face with 4K texture detail. Additional textures include normal, AO, roughness, metallic, SSS and Micro Normal for more realistic digital human rendering.

The 3D Head Morph System is designed to achieve the professional and detailed look of 3D scan models. The 3D sculpting design allows users to hover over a control area and use directional mouse drags to adjust the corresponding mesh shape, from full head and face sculpting to individual features such as head contour, face, eyes, nose, mouth and ears, with more than 1,000 head morphs. It is now free with purchase of the Headshot plugin.

The Headshot plugin for Character Creator is $199 and comes with the content pack Headshot Morph 1,000+ ($99). Character Creator 3 Pipeline costs $199.

Framestore VFX will open in Mumbai in 2020

Oscar-winning creative studio Framestore will be opening a full-service visual effects studio in Mumbai in 2020 to target India’s booming creative industry. The studio will be located in the Nesco IT Park in Goregaon, in the center of Mumbai’s technology district. The news hammers home Framestore’s continued interest in India, after having made a major investment in Jesh Krishna Murthy’s VFX studio, Anibrain, in 2017.

“Mumbai represents a rolling of wheels that were set in motion over two years ago,” says Framestore founder/CEO William Sargent. “Our investment in Anibrain has grown considerably, and we continue in our partnership with Jesh Krishna Murthy to develop and grow that business. Indeed, they will become a valued production partner to our Mumbai offering.”

Framestore looks to make considerable hires in the coming months, aiming to build an initial 500-strong team with existing Framestore talent combined with the best of local Indian expertise. Mumbai will work alongside the global network, including London and Montreal, to create a cohesive virtual team delivering high-quality international work.

“Mumbai has become a center of excellence in digital filmmaking. There’s a depth of talent that can deliver to the scale of Hollywood with the color and flair of Bollywood,” Sargent continues. “It’s an incredibly vibrant city and its presence on the international scene is holding us all to a higher standard. In terms of visual effects, we will set the standard here as we did in Montreal almost eight years ago.”

 

London’s Freefolk beefs up VFX team

Soho-based visual effects studio Freefolk, which has seen growth in its commercials and longform work, has expanded its staff to meet the demand. As part of the uptick in work, Freefolk promoted Cheryl Payne from senior producer to head of commercial production. Additionally, Laura Rickets has joined as senior producer, and 2D artist Bradley Cocksedge has been added to the commercials VFX team.

Payne, who has been with Freefolk since the early days, has worked on some of the studio's biggest commercials, including Warburtons for Engine, Peloton for Dark Horses and Cadburys for VCCP.

Rickets comes to Freefolk with over 18 years of production experience working at some of the biggest VFX houses in London, including Framestore, The Mill and Smoke & Mirrors, as well as agency side for McCann. Since joining the team, Rickets has VFX-produced work on the I'm A Celebrity IDs, a set of seven technically challenging and CG-heavy spots for the new series of the show, as well as ads for the Rugby World Cup and Who Wants to Be a Millionaire?

Cocksedge is a recent graduate who joins from Framestore, where he was working as an intern on Fantastic Beasts: The Crimes of Grindelwald. While in school at the University of Hertfordshire, he interned at Freefolk and is happy to be back in a full-time position.

“We’ve had an exciting year and have worked on some really stand-out commercials, like TransPennine for Engine and the beautiful spot for The Guardian we completed with Uncommon, so we felt it was time to add to the Freefolk family,” says Fi Kilroe, Freefolk’s co-managing director/executive producer.

Main Image: (L-R) Cheryl Payne, Laura Rickets and Bradley Cocksedge

Behind the Title: MPC’s CD Morten Vinther

This creative director/director still jumps on the Flame and also edits from time to time. “I love mixing it up and doing different things,” he says.

NAME: Morten Vinther

COMPANY: Moving Picture Company, Los Angeles

CAN YOU DESCRIBE YOUR COMPANY?
From original ideas all the way through to finished production, we are an eclectic mix of hard-working and passionate artists, technologists and creatives who push the boundaries of what’s possible for our clients. We aim to move the audience through our work.

WHAT’S YOUR JOB TITLE?
Creative Director and Director

WHAT DOES THAT ENTAIL?
I guide our clients through challenging shoots and post. I try to keep us honest in terms of making sure that our casting is right and the team is looked after and has the appropriate resources available for the tasks ahead, while ensuring that we go above and beyond on quality and experience. In addition to this, I direct projects, pitch on new business and develop methodology for visual effects.

American Horror Story

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I still occasionally jump on Flame and comp a job — right now I’m editing a commercial. I love mixing it up and doing different things.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Writing treatments. The moments where everything is crystal clear in your head and great ideas and concepts are rushing onto paper like an unstoppable torrent.

WHAT’S YOUR LEAST FAVORITE?
Writing treatments. Staring at a blank page, writing something and realizing how contrived it sounds before angrily deleting everything.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
Early mornings. A good night’s sleep and freshly ground coffee creates a fertile breeding ground for pure clarity, ideas and opportunities.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I would be carefully malting barley for my next small batch of artisan whisky somewhere on the Scottish west coast.

Adidas Creators

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I remember making a spoof commercial at my school when I was about 13 years old. I became obsessed with operating cameras and editing, and I began to study filmmakers like Scorsese and Kubrick. After a failed career as a shopkeeper, a documentary production company in Copenhagen took mercy on me, and I started as an assistant editor.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
American Horror Story, Apple Unlock, directed by Dougal Wilson, and Adidas Creators, directed by Stacy Wall.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
If I had to single one out, it would probably be Apple’s Unlock commercial. The spot looks amazing, and the team was incredibly creative on this one. We enjoyed a great collaboration between several of our offices, and it was a lot of fun putting it together.

Apple’s Unlock

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My phone, laptop and PlayStation.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Some say social media rots your brains. That’s probably why I’m an Instagram addict.

CARE TO SHARE YOUR FAVORITE MUSIC TO WORK TO?
Odesza, SBTRKT, Little Dragon, Disclosure and classic reggae.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I recently bought a motorbike, and I spin around LA and Southern California most weekends. Concentrating on how to survive the next turn is a great way for me to clear the mind.

Director Robert Eggers talks about his psychological thriller The Lighthouse

By Iain Blair

Writer/director Robert Eggers burst onto the scene when his feature film debut, The Witch, won the Directing Award in the US Dramatic category at the 2015 Sundance Film Festival. He followed up that success by co-writing and directing another supernatural, hallucinatory horror film, The Lighthouse, which is set in the maritime world of the late 19th century.

L-R: Director Robert Eggers and cinematographer Jarin Blaschke on set.

The story begins when two lighthouse keepers (Willem Dafoe and Robert Pattinson) arrive on a remote island off the coast of New England for their month-long stay. But that stay gets extended as they’re trapped and isolated due to a seemingly never-ending storm. Soon, the two men engage in an escalating battle of wills, as tensions boil over and mysterious forces (which may or may not be real) loom all around them.

The Lighthouse has the power of an ancient myth. To tell this tale, which was shot in black and white, Eggers called on many of those who helped him create The Witch, including cinematographer Jarin Blaschke, production designer Craig Lathrop, composer Mark Korven and editor Louise Ford.

I recently talked to Eggers, who got his professional start directing and designing experimental and classical theater in New York City, about making the film, his love of horror and the post workflow.

Why does horror have such an enduring appeal?
My best argument is that there’s darkness in humanity, and we need to explore that. And horror is great at doing that, from the Gothic to a bad slasher movie. While I may prefer authors who explore the complexities in humanity, others may prefer schlocky films with jump scares that make you spill your popcorn, which still give them that dose of darkness. Those films may not be seriously probing the darkness, but they can relate to it.

This film seems more psychological than simple horror.
We’re talking about horror, but I’m not even sure that this is a horror film. I don’t mind the label, even though most wannabe auteurs are like, “I don’t like labels!” It started with an idea my brother Max had for a ghost story set in a lighthouse, which is not what this movie became. But I loved the idea, which was based on a true story. It immediately evoked a black and white movie on 35mm negative with a boxy aspect ratio of 1.19:1, like the old movies, and a fusty, dusty, rusty, musty atmosphere — the pipe smoke and all the facial hair — so I just needed a story that went along with all of that. (Laughs) We were also thinking a lot about influences and writers from the time — like Poe, Melville and Stevenson — and soaking up the jargon of the day. There were also influences like Prometheus and Proteus and God knows what else.

Casting the two leads was obviously crucial. What did Willem and Robert bring to their roles?
Absolute passion and commitment to the project and their roles. Who else but Willem can speak like a North Atlantic pirate stereotype and make it totally believable? Robert has this incredible intensity, and together they play so well against each other and are so well suited to this world. And they both have two of the best faces ever in cinema.

What were the main technical challenges in pulling it all together, and is it true you actually built the lighthouse?
We did. We built everything, including the 70-foot tower — a full-scale working lighthouse, along with its house and outbuildings — on Cape Forchu in Nova Scotia, which is this very dramatic outcropping of volcanic rock. Production designer Craig Lathrop and his team did an amazing job, and the reason we did that was because it gave us far more control than if we’d used a real lighthouse.

We scouted a lot but just couldn’t find one that suited us, and the few that did were far too remote to access. We needed road access and a place with the right weather, so in the end it was better to build it all. We also shot some of the interiors there as well, but most of them were built on soundstages and warehouses in Halifax since we knew it’d be very hard to shoot interiors and move the camera inside the lighthouse tower itself.

Your go-to DP, Jarin Blaschke, shot it. Talk about how you collaborated on the look and why you used black and white.
I love the look of black and white, because it’s both dreamlike and also more realistic than color in a way. It really suited both the story and the way we shot it, with the harsh landscape and a lot of close-ups of Willem and Robert. Jarin shot the film on the Panavision Millennium XL2, and we also used vintage Baltar lenses from the 1930s, which gave the film a great look, as they make the sea, water and sky all glow and shimmer more. He also used a custom cyan filter by Schneider Filters that gave us that really old-fashioned look. Then by using black and white, it kept the overall look very bleak at all times.

How tough was the shoot?
It was pretty tough, and all the rain and pounding wind you see onscreen is pretty much real. Even on the few sunny days we had, the wind was just relentless. The shoot was about 32 days, and we were out in the elements in March and April of last year, so it was freezing cold and very tough for the actors. It was very physically demanding.

Where did you post?
We did it all in New York at Harbor Post, with some additional ADR work at Goldcrest in London with Robert.

Do you like the post process?
I love post, and after the very challenging shoot, it was such a relief to just get in a warm, dry, dark room and start cutting and pulling it all together.

Talk about editing with Louise Ford, who also cut The Witch. How did that work?
She was with us on the shoot at a bed and breakfast, so I could check in with her at the end of the day. But it was so tough shooting that I usually waited until the weekends to get together and go over stuff. Then when we did the stage work at Halifax, she had an edit room set up there, and that was much easier.

What were the big editing challenges?
The DP and I developed such a specific and detailed cinema language without a ton of coverage and with little room for error that we painted ourselves into a corner. So that became the big challenge… when something didn’t work. It was also about getting the running time down but keeping the right pace since the performances dictate the pace of the edit. You can’t just shorten stuff arbitrarily. But we didn’t leave a lot of stuff on the cutting room floor. The assembly was just over two hours and the final film isn’t much shorter.

All the sound effects play a big role. Talk about the importance of sound and working on them with sound designer Damian Volpe, whose credits include Can You Ever Forgive Me?, Leave No Trace, Mudbound, Drive, Winter’s Bone and Margin Call.
It’s hugely important in this film, and Louise and I did a lot of work in the picture edit to create temps for Damian to inspire him. And he was so relentless in building up the sound design, and even creating weird sounds to go with the actual light, and to go with the score by Mark Korven, who did The Witch, and all the brass and unusual instrumentation he used on this. So the result is both experimental and also quite traditional, I think.

There are quite a few VFX shots. Who did them, and what was involved?
We had MELS and Oblique in Quebec and Brainstorm Digital in New York also did some. The big one was that the movie’s set on an island but we shot on a peninsula, which also had a lighthouse further north, which unfortunately didn’t look at all correct, so we framed it out a lot but we had to erase it for some of the time. And our period-correct sea ship broke down and had to be towed around by other ships, so there was a lot of clean up. Also with all the safety cables we had to use for cliff shots with the actors.

Where did you do the DI, and how important is it to you?
We did it at Harbor with colorist Joe Gawler, and it was hugely important although it was fairly simple because there’s very little latitude on the Double-X film stock we used. We did a lot of fine detail work to finesse it, but it was a lot quicker than if it’d been in color.

Did the film turn out the way you hoped?
No, they always change and surprise you, but I’m very proud of what we did.

What’s next?
I’m prepping another period piece, but it’s not a horror film. That’s all I can say.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Alkemy X adds Albert Mason as head of production

Albert Mason has joined VFX house Alkemy X as head of production. He comes to Alkemy X with over two decades of experience in visual effects and post production. He has worked on projects directed by such industry icons as Peter Jackson on the Lord of the Rings trilogy, Tim Burton on Alice in Wonderland and Robert Zemeckis on The Polar Express. In his new role at Alkemy X, he will use his experience in feature films to target the growing episodic space.

A large part of Alkemy X’s work has been for episodic visual effects, with credits that include Amazon Prime’s Emmy-winning original series, The Marvelous Mrs. Maisel, USA’s Mr. Robot, AMC’s Fear the Walking Dead, Netflix’s Maniac, NBC’s Blindspot and Starz’s Power.

Mason began his career at MTV’s on-air promos department, sharpening his production skills on top series promo campaigns and as a part of its newly launched MTV Animation Department. He took an opportunity to transition into VFX, stepping into a production role for Weta Digital and spending three years working globally on the Lord of the Rings trilogy. He then joined Sony Pictures Imageworks, where he contributed to features including Spider-Man 3 and Ghost Rider. He has also produced work for such top industry shops as Logan, Rising Sun Pictures and Greymatter VFX.

“[Albert’s] expertise in constructing advanced pipelines that embrace emerging technologies will be invaluable to our team as we continue to bolster our slate of VFX work,” says Alkemy X president/CEO Justin Wineburgh.

2019 HPA Award winners announced

The industry came together on November 21 in Los Angeles to celebrate its own at the 14th annual HPA Awards. Awards were given to individuals and teams working in 12 creative craft categories, recognizing outstanding contributions to color grading, sound, editing and visual effects for commercials, television and feature film.

Rob Legato receiving Lifetime Achievement Award from presenter Mike Kanfer. (Photo by Ryan Miller/Capture Imaging)

As was previously announced, renowned visual effects supervisor and creative Robert Legato, ASC, was honored with this year’s HPA Lifetime Achievement Award; Peter Jackson’s They Shall Not Grow Old was presented with the HPA Judges Award for Creativity and Innovation; acclaimed journalist Peter Caranicas was the recipient of the very first HPA Legacy Award; and special awards were presented for Engineering Excellence.

The winners of the 2019 HPA Awards are:

Outstanding Color Grading – Theatrical Feature

WINNER: “Spider-Man: Into the Spider-Verse”
Natasha Leonnet // Efilm

“First Man”
Natasha Leonnet // Efilm

“Roma”
Steven J. Scott // Technicolor

Natasha Leonnet (Photo by Ryan Miller/Capture Imaging)

“Green Book”
Walter Volpatto // FotoKem

“The Nutcracker and the Four Realms”
Tom Poole // Company 3

“Us”
Michael Hatzer // Technicolor

 

Outstanding Color Grading – Episodic or Non-theatrical Feature

WINNER: “Game of Thrones – Winterfell”
Joe Finley // Sim, Los Angeles

 “The Handmaid’s Tale – Liars”
Bill Ferwerda // Deluxe Toronto

“The Marvelous Mrs. Maisel – Vote for Kennedy, Vote for Kennedy”
Steven Bodner // Light Iron

“I Am the Night – Pilot”
Stefan Sonnenfeld // Company 3

“Gotham – Legend of the Dark Knight: The Trial of Jim Gordon”
Paul Westerbeck // Picture Shop

“The Man in The High Castle – Jahr Null”
Roy Vasich // Technicolor

 

Outstanding Color Grading – Commercial  

WINNER: Hennessy X.O. – “The Seven Worlds”
Stephen Nakamura // Company 3

Zara – “Woman Campaign Spring Summer 2019”
Tim Masick // Company 3

Tiffany & Co. – “Believe in Dreams: A Tiffany Holiday”
James Tillett // Moving Picture Company

Palms Casino – “Unstatus Quo”
Ricky Gausis // Moving Picture Company

Audi – “Cashew”
Tom Poole // Company 3

 

Outstanding Editing – Theatrical Feature

Once Upon a Time… in Hollywood

WINNER: “Once Upon a Time… in Hollywood”
Fred Raskin, ACE

“Green Book”
Patrick J. Don Vito, ACE

“Rolling Thunder Revue: A Bob Dylan Story by Martin Scorsese”
David Tedeschi, Damian Rodriguez

“The Other Side of the Wind”
Orson Welles, Bob Murawski, ACE

“A Star Is Born”
Jay Cassidy, ACE

 

Outstanding Editing – Episodic or Non-theatrical Feature (30 Minutes and Under)

VEEP

WINNER: “Veep – Pledge”
Roger Nygard, ACE

“Russian Doll – The Way Out”
Todd Downing

“Homecoming – Redwood”
Rosanne Tan, ACE

“Withorwithout”
Jake Shaver, Shannon Albrink // Therapy Studios

“Russian Doll – Ariadne”
Laura Weinberg

 

Outstanding Editing – Episodic or Non-theatrical Feature (Over 30 Minutes)

WINNER: “Stranger Things – Chapter Eight: The Battle of Starcourt”
Dean Zimmerman, ACE, Katheryn Naranjo

“Chernobyl – Vichnaya Pamyat”
Simon Smith, Jinx Godfrey // Sister Pictures

“Game of Thrones – The Iron Throne”
Katie Weiland, ACE

“Game of Thrones – The Long Night”
Tim Porter, ACE

“The Bodyguard – Episode One”
Steve Singleton

 

Outstanding Sound – Theatrical Feature

WINNER: “Godzilla: King of the Monsters”
Tim LeBlanc, Tom Ozanich, MPSE // Warner Bros.
Erik Aadahl, MPSE, Nancy Nugent, MPSE, Jason W. Jennings // E Squared

“Shazam!”
Michael Keller, Kevin O’Connell // Warner Bros.
Bill R. Dean, MPSE, Erick Ocampo, Kelly Oxford, MPSE // Technicolor

“Smallfoot”
Michael Babcock, David E. Fluhr, CAS, Jeff Sawyer, Chris Diebold, Harrison Meyle // Warner Bros.

“Roma”
Skip Lievsay, Sergio Diaz, Craig Henighan, Carlos Honc, Ruy Garcia, MPSE, Caleb Townsend

“Aquaman”
Tim LeBlanc // Warner Bros.
Peter Brown, Joe Dzuban, Stephen P. Robinson, MPSE, Eliot Connors, MPSE // Formosa Group

 

Outstanding Sound – Episodic or Non-theatrical Feature

WINNER: “The Haunting of Hill House – Two Storms”
Trevor Gates, MPSE, Jason Dotts, Jonathan Wales, Paul Knox, Walter Spencer // Formosa Group

“Chernobyl – 1:23:45”
Stefan Henrix, Stuart Hilliker, Joe Beal, Michael Maroussas, Harry Barnes // Boom Post

“Deadwood: The Movie”
John W. Cook II, Bill Freesh, Mandell Winter, MPSE, Daniel Colman, MPSE, Ben Cook, MPSE, Micha Liberman // NBC Universal

“Game of Thrones – The Bells”
Tim Kimmel, MPSE, Onnalee Blank, CAS, Mathew Waters, CAS, Paula Fairfield, David Klotz

“Homecoming – Protocol”
John W. Cook II, Bill Freesh, Kevin Buchholz, Jeff A. Pitts, Ben Zales, Polly McKinnon // NBC Universal

 

Outstanding Sound – Commercial 

WINNER: John Lewis & Partners – “Bohemian Rhapsody”
Mark Hills, Anthony Moore // Factory

Audi – “Life”
Doobie White // Therapy Studios

Leonard Cheshire Disability – “Together Unstoppable”
Mark Hills // Factory

New York Times – “The Truth Is Worth It: Fearlessness”
Aaron Reynolds // Wave Studios NY

John Lewis & Partners – “The Boy and the Piano”
Anthony Moore // Factory

 

Outstanding Visual Effects – Theatrical Feature

WINNER: “The Lion King”
Robert Legato
Andrew R. Jones
Adam Valdez, Elliot Newman, Audrey Ferrara // MPC Film
Tom Peitzman // T&C Productions

“Avengers: Endgame”
Matt Aitken, Marvyn Young, Sidney Kombo-Kintombo, Sean Walker, David Conley // Weta Digital

“Spider-Man: Far From Home”
Alexis Wajsbrot, Sylvain Degrotte, Nathan McConnel, Stephen Kennedy, Jonathan Opgenhaffen // Framestore

“Alita: Battle Angel”
Eric Saindon, Michael Cozens, Dejan Momcilovic, Mark Haenga, Kevin Sherwood // Weta Digital

“Pokemon Detective Pikachu”
Jonathan Fawkner, Carlos Monzon, Gavin Mckenzie, Fabio Zangla, Dale Newton // Framestore

 

Outstanding Visual Effects – Episodic (Under 13 Episodes) or Non-theatrical Feature

Game of Thrones

WINNER: “Game of Thrones – The Bells”
Steve Kullback, Joe Bauer, Ted Rae
Mohsen Mousavi // Scanline
Thomas Schelesny // Image Engine

“Game of Thrones – The Long Night”
Martin Hill, Nicky Muir, Mike Perry, Mark Richardson, Darren Christie // Weta Digital

“The Umbrella Academy – The White Violin”
Everett Burrell, Misato Shinohara, Chris White, Jeff Campbell, Sebastien Bergeron

“The Man in the High Castle – Jahr Null”
Lawson Deming, Cory Jamieson, Casi Blume, Nick Chamberlain, William Parker, Saber Jlassi, Chris Parks // Barnstorm VFX

“Chernobyl – 1:23:45”
Lindsay McFarlane
Max Dennison, Clare Cheetham, Steven Godfrey, Luke Letkey // DNEG

 

Outstanding Visual Effects – Episodic (Over 13 Episodes)

Team from The Orville – Outstanding VFX, Episodic, Over 13 Episodes (Photo by Ryan Miller/Capture Imaging)

WINNER: “The Orville – Identity: Part II”
Tommy Tran, Kevin Lingenfelser, Joseph Vincent Pike // FuseFX
Brandon Fayette, Brooke Noska // Twentieth Century FOX TV

“Hawaii Five-O – Ke iho mai nei ko luna”
Thomas Connors, Anthony Davis, Chad Schott, Gary Lopez, Adam Avitabile // Picture Shop

“9-1-1 – 7.1”
Jon Massey, Tony Pirzadeh, Brigitte Bourque, Gavin Whelan, Kwon Choi // FuseFX

“Star Trek: Discovery – Such Sweet Sorrow Part 2”
Jason Zimmerman, Ante Dekovic, Aleksandra Kochoska, Charles Collyer, Alexander Wood // CBS Television Studios

“The Flash – King Shark vs. Gorilla Grodd”
Armen V. Kevorkian, Joshua Spivack, Andranik Taranyan, Shirak Agresta, Jason Shulman // Encore VFX

The 2019 HPA Engineering Excellence Awards were presented to:

Adobe – Content-Aware Fill for Video in Adobe After Effects

Epic Games — Unreal Engine 4

Pixelworks — TrueCut Motion

Portrait Displays and LG Electronics — CalMan LUT based Auto-Calibration Integration with LG OLED TVs

Honorable Mentions were awarded to Ambidio for Ambidio Looking Glass; Grass Valley for creative grading; and Netflix for Photon.

Creating With Cloud: A VFX producer’s perspective

By Chris Del Conte

The ‘90s was an explosive era for visual effects, with films like Jurassic Park, Independence Day, Titanic and The Matrix shattering box office records and inspiring a generation of artists and filmmakers, myself included. I got my start in VFX working on seaQuest DSV, an Amblin/NBC sci-fi series that was ground-breaking for its time, but looking at the VFX of modern films like Gemini Man, The Lion King and Ad Astra, it’s clear just how far the industry has come. A lot of that progress has been enabled by new technology and techniques, from the leap to fully digital filmmaking and emergence of advanced viewing formats like 3D, Ultra HD and HDR to the rebirth of VR and now the rise of cloud-based workflows.

In my nearly 25 years in VFX, I’ve worn a lot of hats, including VFX producer, head of production and business development manager. Each role involved overseeing many aspects of a production and, collectively, they’ve all shaped my perspective when it comes to how the cloud is transforming the entire creative process. Thanks to my role at AWS Thinkbox, I have a front-row seat to see why studios are looking at the cloud for content creation, how they are using the cloud, and how the cloud affects their work and client relationships.

Chris Del Conte on the set of the IMAX film Magnificent Desolation.

Why Cloud?
We’re in a climate of high content demand and massive industry flux. Studios are incentivized to find ways to take on more work, and that requires more resources — not just artists, but storage, workstations and render capacity. Driving a need to scale, this trend often motivates studios to consider the cloud for production or to strengthen their use of cloud in their pipelines if already in play. Cloud-enabled studios are much more agile than traditional shops. When opportunities arise, they can act quickly, spinning resources up and down at a moment’s notice. I realize that for some, the concept of the cloud is still a bit nebulous, which is why finding the right cloud partner is key. Every facility is different, and part of the benefit of cloud is resource customization. When studios use predominantly physical resources, they have to make decisions about storage and render capacity, electrical and cooling infrastructure, and staff accommodations up front (and pay for them). Using the cloud allows studios to adjust easily to better accommodate whatever the current situation requires.

Artistic Impact
Advanced technology is great, but artists are by far a studio’s biggest asset; automated tools are helpful but won’t deliver those “wow moments” alone. Artists bring the creativity and talent to the table, then, in a perfect world, technology helps them realize their full potential. When artists are free of pipeline or workflow distractions, they can focus on creating. The positive effects spill over into nearly every aspect of production, which is especially true when cloud-based rendering is used. By scaling render resources via the cloud, artists aren’t limited by the capacity of their local machines. Since they don’t have to wait as long for shots to render, artists can iterate more fluidly. This boosts morale because the final results are closer to what artists envisioned, and it can improve work-life balance since artists don’t have to stick around late at night waiting for renders to finish. With faster render results, VFX supervisors also have more runway to make last-minute tweaks. Ultimately, cloud-based rendering enables a higher caliber of work and more satisfied artists.
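
To make "scaling render resources via the cloud" concrete, here is a rough sketch using AWS's boto3 SDK to burst a batch of spot instances from a render-node machine image; the AMI ID, instance type, counts and tags are placeholders rather than a recommended configuration or any particular studio's setup.

```python
# Illustrative burst-rendering sketch: launch extra spot render nodes on EC2.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical image with the render client pre-installed
    InstanceType="c5.9xlarge",         # CPU-heavy instance type for rendering
    MinCount=1,
    MaxCount=20,                       # scale with the size of the render backlog
    InstanceMarketOptions={"MarketType": "spot"},  # spot pricing keeps burst capacity affordable
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "role", "Value": "render-node"}],
    }],
)

print("Launched", len(response["Instances"]), "render nodes")
```

When the queue drains, the same API can terminate those instances, so the studio only pays for the burst.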

Budget Considerations
There are compelling arguments for shifting capital expenditures to operational expenditures with the cloud. New studios get the most value out of this model since they don’t have legacy infrastructure to accommodate. Cloud-based solutions level the playing field in this respect; it’s easier for small studios and freelancers to get started because there’s no significant up-front hardware investment. This is an area where we’ve seen rapid cloud adoption. Considering how fast technology changes, it seems ill-advised to limit a new studio’s capabilities to today’s hardware when the cloud provides constant access to the latest compute resources.

When a studio has been in business for decades and might have multiple locations with varying needs, its infrastructure is typically well established. Some studios may opt to wait until their existing hardware has fully depreciated before shifting resources to the cloud, while others dive in right away, with an eye on the bigger picture. Rendering is generally a budgetary item on project bids, but with local hardware, studios are working to recoup a sunk cost. Using the cloud, render compute can be part of a bid and becomes a negotiable item. Clients can determine the delivery timeline based on render budget, and the elasticity of cloud resources allows VFX studios to pick up more work. (Even the most meticulously planned productions can run into 911 issues ahead of delivery, and cloud-enabled studios have bandwidth to be the hero when clients are in dire straits.)

Looking Ahead
When I started in VFX, giant rooms filled with racks and racks of servers and hardware were the norm, and VFX studios were largely judged by the size of their infrastructure. I’ve heard from an industry colleague about how their VFX studio’s server room was so impressive that they used to give clients tours of the space, seemingly a visual reminder of the studio’s vast compute capabilities. Today, there wouldn’t be nearly as much to view. Modern technology is more powerful and compact but still requires space, and that space has to be properly equipped with the necessary electricity and cooling. With cloud, studios don’t need switchers and physical storage to be competitive off the bat, and they experience fewer infrastructure headaches, like losing freon in the AC.

The cloud also opens up the available artist talent pool. Studios can dedicate the majority of physical space to artists as opposed to machines and even hire artists in remote locations on a per-project or long-term basis. Facilities of all sizes are beginning to recognize that becoming cloud-enabled brings a significant competitive edge, allowing them to harness the power to render almost any client request. VFX producers will also start to view facility cloud-enablement as a risk management tool that allows control of any creative changes or artistic embellishments up until delivery, with the rendering output no longer a blocker or a limited resource.

Bottom line: Cloud transforms nearly every aspect of content creation into a near-infinite resource, whether storage capacity, render power or artistic talent.


Chris Del Conte is senior EC2 business development manager at AWS Thinkbox.

Motorola’s next-gen Razr gets a campaign for today

Many of us have fond memories of our Razr flip phone. At the time, it was the latest and greatest. Then new technology came along, and the smartphone era was born. Now Motorola is asking, “Why can’t you have both?”

Available as of November 13, the new Razr fits in a palm or pocket when shut and flips open to reveal an immersive, full-length touch screen. There is a display screen called the Quick View when closed and the larger Flex View when open — and the two displays are made to work together. Whatever you see on Quick View then moves to the larger Flex View display when you flip it open.

In order to help tell this story, Motorola called on creative shop Los York to help relaunch the Razr. Los York created the new smartphone campaign to tap into the Razr’s original DNA and launch it for today’s user.

Los York developed a 360 campaign that included films, social, digital, TV, print and billboards, with visuals in stores and on devices (wallpapers, ringtones, startup screens). Los York treated the Razr as a luxury item and a piece of art, letting the device reveal itself unencumbered by taglines and copy. The campaign showcases the Razr as a futuristic, high-end “fashion accessory” that speaks to new industry conversations, such as whether advancing tech is leading toward a utopian or dystopian future.

The campaign features a mix of live action and CG. Los York shot on a Panavision DXL with Primo 70 lenses. CG was created using Maxon Cinema 4D with Redshift and composited in Adobe After Effects. The piece was edited in-house on Adobe Premiere.

We reached out to Los York CEO and founder Seth Epstein to find out more:

How much of this is live action versus CG?
The majority is CG, but originally the piece was intended to be entirely CG. Early in the creative process, we defined the world in which the new Razr existed and who would belong there. As we worked on the project, we kept feeling the pull to bring our characters to life in live action and blend the two worlds. The live action was envisioned after the fact, which is somewhat unusual.

What were some of the most challenging aspects of this piece?
The most challenging part was that the project happened over a period of nine months. The product release wisely needed to push, and we continued to evolve the project over time, which is a blessing and a curse.

How did it feel taking on a product with a lot of history and then rebranding it for the modern day?
We felt the key was to relaunch an iconic product like the Razr with an eye to the future. The trap of launching anything iconic is falling back on the obvious retro throwback references, which can come across as too obvious. We dove into the original product and campaigns to extract the brand DNA of 2004 using archetype exercises. We tapped into the attitude and voice of the Razr at that time — and used that attitude as a starting point. We also wanted to look forward and stand three years in the future and imagine what the tone and campaign would be then. All of this is to say that we wanted the new Razr to extract the power of the past but also speak to audiences in a totally fresh and new way.


Blur Studio uses new AMD Threadripper for Terminator: Dark Fate VFX

By Dayna McCallum

AMD has announced new additions to its high-end desktop processor family. Built for demanding desktop and content creation workloads, the 24-core AMD Ryzen Threadripper 3960X and the 32-core AMD Ryzen Threadripper 3970X processors will be available worldwide November 25.

Tim Miller on the set of Dark Fate.

AMD states that the powerful new processors provide up to 90 percent more performance and up to 2.5 times more available storage bandwidth than competitive offerings, per testing and specifications by AMD performance labs. The 3rd Gen AMD Ryzen Threadripper lineup features two new processors built on 7nm “Zen 2” core architecture, claiming up to 88 PCIe 4.0 lanes and 144MB cache with 66 percent better power efficiency.

Prior to the official product launch, AMD made the 3rd Gen Threadrippers available to LA’s Blur Studio for work on the recent Terminator: Dark Fate and continued a collaboration with the film’s director — and Blur Studio founder — Tim Miller.

Before the movie’s release, AMD hosted a private Q&A with Miller, moderated by AMD’s James Knight. Please note that we’ve edited the lively conversation for space and taken a liberty with some of Miller’s more “colorful” language. (Also watch this space to see if a wager is won that will result in Miller sporting a new AMD tattoo.) Here is the Knight/Miller conversation…

So when we dropped off the 3rd Gen Threadripper to you guys, how did your IT guys react?
Like little children left in a candy shop with no adult supervision. The nice thing about our atmosphere here at Blur is we have an open layout. So when (bleep) like these new AMD processors drops in, you know it runs through the studio like wildfire, and I sit out there like everybody else does. You hear the guys talking about it, you hear people giggling and laughing hysterically at times on the second floor where all the compositors are. That’s where these machines really kick ass — busting through these comps that would have had to go to the farm, but they can now do it on a desktop.

James Knight

As an artist, the speed is crucial. You know, if you have a machine that takes 15 minutes to render, you want to stop and do something else while you wait for a render. It breaks your whole chain of thought. You get out of that fugue state that you produce the best art in. It breaks the chain between art and your brain. But if you have a machine that does it in 30 seconds, that’s not going to stop it.

But really, more speed means more iterations. It means you deal with heavier scenes, which means you can throw more detail at your models and your scenes. I don’t think we do the work faster, necessarily, but the work is much higher quality. And much more detailed. It’s like you create this vacuum, and then everybody rushes into it and you have this silly idea that it is really going to increase productivity, but what it really increases most is quality.

When your VFX supervisor showed you the difference between the way it was done with your existing ecosystem and then with the third-gen Threadripper, what were you thinking about?
There was the immediate thing — when we heard from the producers about the deadline, shots that weren’t going to get done for the trailer, suddenly were, which was great. More importantly, you heard from the artists. What you started to see was that it allows for all different ways of working, instead of just the elaborate pipeline that we’ve built up — to work on your local box and then submit it to the farm and wait for that render to hit the queue of farm machines that can handle it, then send that render back to you.

It has a rhythm that is at times tiresome for the artists, and I know that because I hear it all the time. Now I say, “How’s that comp coming and when are we going to get it, tick tock?” And they say, “Well, it’s rendering in the background right now, as I’m watching them work on another comp or another piece of that comp.” That’s pretty amazing. And they’re doing it all locally, which saves so much time and frustration compared to sending it down the pipeline and then waiting for it to come back up.

I know you guys are here to talk about technology, but the difference for the artists is that instead of working here until 1:00am, they’re going home to put their children to bed. That’s really what this means at the end of the day. Technology is so wonderful when it enables that, not just the creativity of what we do, but the humanity… allowing artists to feel like they’re really on the cutting edge, but also have a life of some sort outside.

Endoskeleton — Terminator: Dark Fate

As you noted, certain shots and sequences wouldn’t have made it in time for the trailer. How important was it for you to get that Terminator splitting in the trailer?
 Marketing was pretty adamant that that shot had to be in there. There’s always this push and pull between marketing and VFX as you get closer. They want certain shots for the trailer, but they’re almost always those shots that are the hardest to do because they have the most spectacle in them. And that’s one of the shots. The sequence was one of the last to come together because we changed the plan quite a bit, and I kept changing shots on Dan (Akers, VFX supervisor). But you tell marketing people that they can’t have something, and they don’t really give a (bleep) about you and your schedule or the path of that artist and shot. (Laughing)

Anyway, we said no. They begged, they pleaded, and we said, “We’ll try.” Dan stepped up and said, “Yeah, I think I can make it.” And we just made it, but that sounds like we were in danger because we couldn’t get it done fast enough. All of this was happening in like a two-day window. If you didn’t notice (in the trailer), that’s a Rev 7. Gabriel Luna is a Rev 9, which is the next gen. But the Rev 7s that you see in his future flashback are just pure killers. They’re still the same technology, which looks like metal on the outside over a carbon endoskeleton that splits. So you have to run the simulation where the skeleton separates through the liquid that hangs off of it in strings; it’s a really hard simulation to do. That’s why we thought maybe it wasn’t going to get done, but running the simulation on the AMD boxes was lightning fast.


Todd Phillips talks directing Warner Bros.’ Joker

By Iain Blair

Filmmaker Todd Phillips began his career in comedy, most notably with the blockbuster franchise The Hangover, which racked up $1.4 billion at the box office globally. He then leveraged that clout and left his comedy comfort zone to make the genre-defying War Dogs.

Todd Phillips directing Joaquin Phoenix

Joker puts comedy even further in his rearview mirror. This bleak, intense, disturbing and chilling tragedy has earned over $1 billion worldwide since its release, making it the seventh-highest-grossing film of 2019 and the highest-grossing R-rated film of all time. Not surprisingly, Joker was celebrated by the Academy, earning a total of 11 Oscar nods, including two for Phillips.

Directed, co-written and produced by Phillips (nominated for Directing and Screenplay), Joker is the filmmaker’s original vision of the infamous DC villain — an origin story infused with the character’s more traditional mythologies. Phillips’ exploration of Arthur Fleck, who is portrayed — and fully inhabited — by three-time Oscar-nominee Joaquin Phoenix, is of a man struggling to find his way in Gotham’s fractured society. Longing for any light to shine on him, he tries his hand as a stand-up comic but finds the joke always seems to be on him. Caught in a cyclical existence between apathy, cruelty and, ultimately, betrayal, Arthur makes one bad decision after another that brings about a chain reaction of escalating events in this powerful, allegorical character study.

Phoenix is joined by Oscar-winner Robert De Niro, who plays TV host Murray Franklin, and a cast that includes Zazie Beetz, Frances Conroy, Brett Cullen, Marc Maron, Josh Pais and Leigh Gill.

Behind the scenes, Phillips was joined by a couple of frequent collaborators in DP Lawrence Sher, ASC, and editor Jeff Groth. Also on the journey were Oscar-nominated co-writer Scott Silver, production designer Mark Friedberg and Oscar-winning costume designer Mark Bridges. Hildur Guðnadóttir provided the music.

Joker was produced by Phillips and actor/director Bradley Cooper, under their Joint Effort banner, and Emma Tillinger Koskoff.

I recently talked to Phillips, whose credits include Borat (for which he earned an Oscar nod for Best Adapted Screenplay), Due Date, Road Trip and Old School, about making the film, his love of editing and post.

You co-wrote this very complex, timely portrait of a man and a city. Was that the appeal for you?
Absolutely, 100 percent. While it takes place in the late ‘70s and early ‘80s, and we wrote it in 2016, it was very much about making a movie that deals with issues happening right now. Movies are often mirrors of society, and I feel this is exactly that.

Do you think that’s why so many people have been offended by it?
I do. It’s really resonated with audiences. I know it’s also been somewhat divisive, and a lot of people were saying, “You can’t make a movie about a guy like this — it’s irresponsible.” But do we want to pretend that these people don’t exist? When you hold up a mirror to society, people don’t always like what they see.

Especially when we don’t look so good.
(Laughs) Exactly.

This is a million miles away from the usual comic-book character and cartoon violence. What sort of film did you set out to make?
We set out to make a tragedy, which isn’t your usual Hollywood approach these days, for sure.

It’s hard to picture any other actor pulling this off. What did Joaquin bring to the role?
When Scott and I wrote it, we had him in mind. I had a picture of him as my screensaver on my laptop — and he still is. And then when I pitched this, it was with him in mind. But I didn’t really know him personally, even though we created the character “in his voice.” Everything we wrote, I imagined him saying. So he was really in the DNA of the whole film as we wrote it, and he brought the vulnerability and intensity needed.

You’d assume that he’d jump at this role, but I heard it wasn’t so simple getting him.
You’re right. Getting him was a bit of a thing because it wasn’t something he was looking to do — to be in a movie set in the comic book world. But we spent a lot of time talking about it, what it would be, what it means and what it says about society today and the lack of empathy and compassion that we have now. He really connected with those themes.

Now, looking back, it seems like an obvious thing for him to do, but it’s hard for actors because the business has changed so much and there’s so many of these superhero movies and comic book films now. Doing them is a big thing for an actor, because then you’re in “that group,” and not every actor wants to be in that group because it follows you, so to speak. A lot of actors have done really well in superhero movies and have done other things too, but it’s a big step and commitment for an actor. And he’d never really been in this kind of film before.

What were the main technical challenges in pulling it all together?
I really wanted to shoot on location all around New York City, and that was a big challenge because it’s far harder than it sounds. But it was so important to the vibe and feel of the movie. So many superhero movies use lots of CGI, but I needed that gritty reality of the actual streets. And I think that’s why it’s so unsettling to people because it does feel so real. Luckily, we had Emma Tillinger Koskoff, who’s one of the great New York producers. She was key in getting locations.

Did you do a lot of previz?
I don’t usually do that much. We did it once for War Dogs and it worked well, but it’s a really slow and annoying process to some extent. As crazy as it sounds, we tried it once on the big Murray Franklin scene with De Niro at the end, which is not a scene you’d normally previz — it’s just two guys sitting on a couch. But it was a 12-page scene with so many camera angles, so we began to previz it and then just abandoned it half-way through. The DP and I were like, “This isn’t worth it. We’ll just do it like we always do and just figure it out as we go.” But previz is an amazing tool. It just needed more time and money than we had, and definitely more patience than I have.

Where did you post?
We started off at my house, where Jeff and I had an Avid setup. We also had a satellite office at 9000 Sunset, where all the assistants were. VFX and our VFX supervisor Edwin Rivera were also based out of there along with our music editor, and that’s where most of it was done. Our supervising sound editor was Alan Robert Murray, a two-time Oscar-winner for his work on American Sniper and Letters From Iwo Jima, and we did the Atmos sound mix on the lot at Warners with Tom Ozanich and Dean Zupancic.

Talk about editing with Jeff Groth. What were the big editing challenges?
There are a lot of delusions in Arthur’s head, so it was a big challenge to know when to hide them and when to reveal them. The scene order in the final film is pretty different from the scripted order, and that’s all about deciding when to reveal information. When you write the script, every scene seems important, and everything has to happen in this order, but when you edit, it’s like, “What were we thinking? This could move here, we can cut this, and so on.”

Todd Phillips on set with Robert De Niro

That’s what’s so fun about editing and why I love it and post so much. I see my editor as a co-writer. I think every director loves editing the most, because let’s face it — directors are all control freaks, and you have the most control in post and the editing room. So for me at least, I direct movies and go through all the stress of production and shooting just to get to the editing room. It’s all stuff I just have to deal with so I can then sit down and actually make the movie. So it’s the final draft of the script and I very much see it as a writing exercise.

Post is your last shot at getting the script right, and the most fun part of making a movie is the first 10 to 12 weeks of editing. The worst part is the final stretch of post, all that detail work and watching the movie 400 times. You get sick of it, and it’s so hard to be objective. This ended up taking 20 weeks before we had the first cut. Usually you get 10 for the director’s cut, but I asked Warners for more time and they were like, “OK.”

Visual effects play a big role in the film. How many were there?
More than you’d think, but they’re not flashy. I told Edwin early on, if you do your job right, no one will guess there are any VFX shots at all. He had a great team, and we used various VFX houses, including Scanline, Shade and Branch.

There’s a lot of blood, and I’m guessing that was all enhanced a lot?
In fact, there was no real blood — not a drop — used on set, and that amazes people when I tell them. That’s one of the great things about VFX now — you can do all the blood work in post. For instance, traditionally, when you film a guy being shot on the subway, you have all the blood spatters and for take two, you have to clean all that up and repaint the walls and reset, and it takes 45 minutes. This way, with VFX, you don’t have to deal with any of that. You just do a take, do it again until it’s right, and add all the blood in post. That’s so liberating.

L-R: Iain Blair and Todd Phillips

What was the most difficult VFX shot to do?
I’d say the scene with Randall at his apartment. All that blood tracking on the walls and on Arthur’s face and hands is pretty amazing, and we spent the most time getting that right.

Where did you do the DI, and how important is it to you?
At Company 3 with my regular colorist Jill Bogdanowicz, and it’s vital for the look. I only began doing DIs on the first Hangover, and the great thing about it is you can go in and surgically fix anything. And if you have a great DP like Larry Sher, who’s shot the last six movies for me, you don’t get lost in the maze of possibilities, and I trust him more than I trust myself sometimes.

We shot it digitally, though the original plan was to shoot 65mm large format and, when that fell through, 35mm. Then Larry and I did a lot of tests and decided we’d shoot digital and make it look like film. And thanks to the way he lit and all the work he and Jill did, it has this weird photochemical feel and look. It’s not quite film, but it’s definitely not digital. It’s somewhere in the middle, its own thing.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Behind the Title: Sarofsky EP Steven Anderson

This EP’s responsibilities run the gamut “from managing our production staff to treating clients to an amazing dinner.”

Company: Chicago’s Sarofsky

Can you describe your company?
We like to describe ourselves as a design-driven production company. I like to think of us as that but so much more. We can be a one-stop shop for everything from concept through finish, or we can partner with a variety of other companies and just be one piece of the puzzle. It’s like ordering from a Chinese menu — you get to pick what items you want.

What’s your job title, and what does the job entail?
I’m executive producer, and that means different things at different companies and industries. Here at Sarofsky, I am responsible for things that run the gamut from managing our production staff to treating clients to an amazing dinner.

Sarofsky

What would surprise people the most about what falls under that title?
I also run payroll, and I am damn good at it.

How has the VFX industry changed in the time you’ve been working?
It used to be that when you told someone, “This is going to take some time to execute,” that’s what it meant. But now, everyone wants everything two hours ago. On the flip side, the technology we now have access to has streamlined the production process and provided us with some terrific new tools.

Why do you like being on set for shoots? What are the benefits?
I always like being on set whenever I can because decisions are being made that are going to affect the rest of the production paradigm. It’s also a good opportunity to bond with clients and, sometimes, get some kick-ass homemade guacamole.

Did a particular film inspire you along this path in entertainment?
I have been around this business for quite a while, and one of the reasons I got into it was my love of film and filmmaking. I can’t say that one particular film inspired me to do this, but I remember being a young kid and my dad taking me to see The Towering Inferno in the movie theater. I was blown away.

What’s your favorite part of the job?
Choosing a spectacular bottle of wine for a favorite client and watching their face when they taste it. My least favorite has to be chasing down clients for past due invoices. It gets old very quickly.

What is your most productive time of the day?
It’s 6:30am with my first cup of coffee sitting at my kitchen counter before the day comes at me. I get a lot of good thinking and writing done in those early morning hours.

Original Bomb Pop via agency VMLY&R

If you didn’t have this job, what would you be doing instead?
I would own a combo bookstore/wine shop where people could come and enjoy two of my favorite things.

Why did you choose this profession?
I would say this profession chose me. I studied to be an actor and made my living at it for several years, but due to some family issues, I ended up taking a break for a few years. When I came back, I went for a job interview at FCB and the rest is history. I made the move from agency producing to post executive producer five years ago and have not looked back since.

Can you briefly explain one or more ways Sarofsky is addressing the issue of workplace diversity in its business?
We are a smallish women-owned business, and I am a gay man; diversity is part of our DNA. We always look out for the best talent but also try to ensure we are providing opportunities for people who may not have access to them. For example, one of our amazing summer interns came to us through a program called Kaleidoscope 4 Kids, and we all benefited from the experience.

Name some recent projects you have worked on, which are you most proud of, and why?
My first week here as EP, we went to LA for the friends and family screening of Guardians of the Galaxy, and I thought, what an amazing company I work for! Marvel Studios is a terrific production partner, and I would say there is something special about so many of our clients because they keep coming back. I do have a soft spot for our main title for Animal Kingdom just because I am a big Ellen Barkin fan.

Original Bomb Pop via agency VMLY&R

Name three pieces of technology you can’t live without.
I’d be remiss if I didn’t say my MacBook and iPhone, but I also wouldn’t want to live without my cooking thermometer, as I’ve learned how to make sourdough bread this year, and it’s essential.

What social media channels do you follow?
I am a big fan of Instagram; it’s just visual eye candy and provides a nice break during the day. I don’t really partake in much else unless you count NPR. They occupy most of my day.

Do you listen to music while you work? Care to share your favorite music to work to?
I go in waves. Sometimes I do but then I won’t listen to anything for weeks. But I recently enjoyed listening to “Ladies & Gentlemen: The Best of George Michael.” It was great to listen to an entire album, a rare treat.

What do you do to de-stress from it all?
I get up early and either walk or do some type of exercise to set the tone for the day. It’s also so important to unplug; my partner and I love to travel, so we do that as often as we can. All that and a 2006 Chateau Margaux usually washes away the day in two delicious sips.

Filmmaker Hasraf “HaZ” Dulull talks masterclass on sci-fi filmmaking

By Randi Altman

Hasraf “HaZ” Dulull is a producer/director and a hands-on VFX and post pro. His most recent credits include the feature films 2036 Origin Unknown and The Beyond, the Disney TV series Fast Layne and the Disney Channel original movie Under the Sea — A Descendants Story, which takes place between Descendants 2 and 3. Recently, Dulull developed a masterclass on Sci-Fi Filmmaking, which can be bought or rented.

Why would this already very busy man decide to take on another project and one that is a little off his current path? Well, we reached out to find out.

Why, at this point in your career, did you think it was important to create this masterclass?
I have seen other masterclasses out there to do with filmmaking and they were always academic based, which turned me off. The best ones were the ones that were taught by actual filmmakers who had made commercial projects, films or TV shows… not just short films. So I knew that if I was to create and deliver a masterclass, I would do it after having made a couple of feature films that have been released out there in the world. I wanted to lead by example and experience.

When I was in LA explaining to studio people, executives and other filmmakers how I made my feature films, they were impressed and fascinated with my process. They were amazed that I was able to pull off high-concept sci-fi films on tight budgets and schedules but still produce a film that looked expensive to make.

When I was researching existing masterclasses or online courses as references, I found that no one was actually going through the entire process. Instead they were offering specialized training in either cinematography or VFX, but there wasn’t anything about how to break down a script and put a budget and schedule together; how to work with locations to make your film work; how to use visual effects smartly in production; how to prepare for marketing and delivering your film for distribution. None of these things were covered as a part of a general masterclass, so I set out to fill that void with my masterclass series.

Clearly this genre holds a special place in your heart. Can you talk about why?
I think it’s because the genre allows for so much creative freedom; sci-fi relies on world-building and imagination. That freedom leads to some “out of this world” storytelling and visuals, but on the flip side it can tempt the filmmaker to be too ambitious on a tight budget. This can lead to cheap-looking films because of the overambitious need to create amazing worlds. Not many filmmakers know how to do this in a fiscally sensible way, and they may try to make Star Wars on a shoestring budget. So this is why I decided to use the sci-fi genre in this masterclass to share my experience of smart filmmaking that achieves commercially successful results.

How did you decide on what topics to cover? What was your process?
I thought about the questions the people and studio executives were asking me when I was in those LA meetings, which pretty much boiled down to, “How did you put the movie together for that tight budget and schedule?” When answering that question, I ended up mapping out my process and the various stages and approaches I took in preproduction, production and post production, but also in the deliverables stage and marketing and distribution stage too. As an indie filmmaker, you really need to get a good grasp on that part to ensure your film is able to be released by the distributors and received commercially.

I also wanted each class/episode to have a variety of timings and not go more than around 10 minutes (the longest one is around 12 minutes, and the shortest is three minutes). I went with a more bite-sized approach to make the experience snappy, fun yet in-depth to allow the viewers to really soak in the knowledge. It also allows for repeat viewing.

Why was it important to teach these classes yourself?
I wanted it to feel raw and personal when talking about my experience of putting two sci-fi feature films together. Plus I wanted to talk about the constant problem solving, which is what filmmaking is all about. Teaching the class myself allowed me to get this all out of my system in my voice and style to really connect with the audience intimately.

Can you talk about what the experience will be like for the student?
I want the students to be like flies on the wall throughout the classes — seeing how I put those sci-fi feature films together. By the end of the series, I want them to feel like they have been on an entire production, from receiving a script to the releasing of the movie. The aim was to inspire others to go out and make their film. Or to instill confidence in those who have fears of making their film, or for existing filmmakers to learn some new tips and tricks because in this industry we are always learning on each project.

Why the rental and purchase options? What have most people been choosing?
Before I released it, one of the big factors that kept me up nights was how to make this accessible and affordable for everyone. The idea of renting is for those who can’t afford to purchase it but would love to experience the course. They can do so at a cut-down price but can only view it within a 48-hour window. The purchase price is a little higher, but you get to access it as many times as you like. It’s pretty much the same model as iTunes when you rent or buy a movie.

So far I have found that people have been buying more than renting, which is great, as this means audiences want to do repeat viewings of the classes.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Terminator: Dark Fate director Tim Miller

By Iain Blair

He said he’d be back, and he meant it. Thirty-five years after he first arrived to menace the world in the 1984 classic The Terminator, Arnold Schwarzenegger has returned as the implacable killing machine in Terminator: Dark Fate, the latest installment of the long-running franchise.

And he’s not alone in his return. Terminator: Dark Fate also reunites the film’s producer and co-writer James Cameron with original franchise star Linda Hamilton for the first time in 28 years in a new sequel that picks up where Terminator 2: Judgment Day left off.

When the film begins, more than two decades have passed since Sarah Connor (Hamilton) prevented Judgment Day, changed the future and re-wrote the fate of the human race. Now, Dani Ramos (Natalia Reyes) is living a simple life in Mexico City with her brother (Diego Boneta) and father when a highly advanced and deadly new Terminator — a Rev-9 (Gabriel Luna) — travels back through time to hunt and kill her. Dani’s survival depends on her joining forces with two warriors: Grace (Mackenzie Davis), an enhanced super-soldier from the future, and a battle-hardened Sarah Connor. As the Rev-9 ruthlessly destroys everything and everyone in its path on the hunt for Dani, the three are led to a T-800 (Schwarzenegger) from Sarah’s past that might be their last best hope.

To helm all the on-screen mayhem, black humor and visual effects, Cameron handpicked Tim Miller, whose credits include the global blockbuster Deadpool, one of the highest grossing R-rated films of all time (it grossed close to $800 million). Miller then assembled a close-knit team of collaborators that included director of photography Ken Seng (Deadpool, Project X), editor Julian Clarke (Deadpool, District 9) and visual effects supervisor Eric Barba (The Curious Case of Benjamin Button, Oblivion).

Tim Miller on set

I recently talked to Miller about making the film, its cutting-edge VFX, the workflow and his love of editing and post.

How daunting was it when James Cameron picked you to direct this?
I think there’s something wrong with me because I don’t really feel fear as normal people do. It just manifests as a sense of responsibility, and with this I knew I’d never measure up to Jim’s movies but felt I could do a good job. Jim was never going to tell this story, and I wanted to see it, so it just became more about the weight of that sense of responsibility, but not in a debilitating way. I felt pretty confident I could carry this off. But later, the big anxiety was not to let down Linda Hamilton. Before I knew her, it wasn’t a thing, but later, once I got to know her I really felt I couldn’t mess it up (laughs).

This is still Cameron’s baby even though he handed over the directing to you. How hands-on was he?
He was busy with Avatar, but he was there for a lot of the early meetings and was very involved with the writing and ideas, which was very helpful thematically. But he wasn’t overbearing on all that. Then later when we shot, he wanted to write a few of the key scenes, which he did, and then in the edit he was in and out, but he never came into my edit room. He’d give notes and let us get on with it.

What sort of film did you set out to make?
A continuation of Sarah’s story. I never felt it was John’s story to me. It was always about a mother’s love for a son, and I felt like there was a real opportunity here. And that that story hadn’t been told — partly because the other sequels never had Linda. Once she wanted to come back, it was always the best possible story. No one else could be her or Arnold’s character.

Any surprises working with them?
Before we shot, people were telling me, “You got to be ready, we can’t mess around. When Arnold walks on set you’d better be rolling!” Sure enough, when he walked on he’d go, “And…” (Laughs) He really likes to joke around. With Linda — and the other actors — it was a love-fest. They’re both such nice, down-to-earth people, and I like a collegial atmosphere. I’m not a screamer. I’m very prepared, and I feel if you just show up on time, you’re already ahead of the game as a director.

What were the main technical challenges in pulling it all together?
They were all different for each big action set piece, and fitting it all into a schedule was tough, as we had a crazy amount of VFX. The C-5 plane sequence was far and away the biggest challenge to do and [SFX supervisor] Neil Corbould and his team designed and constructed all the effects rigs for the movie. The C-5 set was incredible, with two revolving sets, one vertical and one horizontal. It was so big you could put a bus in it, and it was able to rotate 360 degrees and tilt in either direction at the same time.

You just can’t simulate that reality of zero gravity on the actors. And then after we got it all in camera, which took weeks, our VFX guy Eric Barba finished it off. The other big one was the whole underwater scene, where the Humvee falls over the top of a dam and goes underwater as it’s swept down a river. For that, we put the Humvee on a giant scissor lift that could take it all the way under, so the water rushes in and fills it up. It’s really safe to do, but it feels frighteningly realistic for the actors.

This is only my second movie, so I’m still learning, but the advantage is I’m really willing to listen to any advice from the smart people around me on set on how best to do all this stuff.

How early on did you start integrating post and all the VFX?
Right from the start. I use previz a lot, as I come from that environment and I’m very comfortable with it, and that becomes the template for all of production to work from. Sometimes it’s too much of a template and treated like a bible, but I’m like, “Please keep thinking. Is there a better idea?” But it’s great to get everyone on the same page, so very early on you see what’s VFX, what’s live-action only, what’s a combination, and you can really plan your shoot. We did over 45 minutes of previz, along with storyboards. We did tons of postviz. My director’s cut had no blue/green at all. It was all postviz for every shot.

Tim Miller and Linda Hamilton

DP Ken Seng, who did Deadpool with you, shot it. Talk about how you collaborated on the look.
We didn’t really have time to plan shot lists that much since we moved so much and packed so much into every day. A lot of it was just instinctive run-and-gun, as the shoot was pretty grueling. We shot in Madrid and [other parts of] Spain, which doubled for Mexico. Then we did studio work in Budapest. The script was in flux a lot, and Jim wrote a few scenes that came in late, and I was constantly re-writing and tweaking dialogue and adjusting to the locations because there’s the location you think you’ll get and then the one you actually get.

Where did you post?
All at Blur, my company where we did Deadpool. The edit bays weren’t big enough for this though, so we spilled over into another building next door. That became Terminator HQ with the main edit bay and several assistant bays, plus all the VFX and compositing post teams. Blur also helped out with postviz and previz.

Do you like the post process?
I love post! I was an animator and VFX guy first, so it’s very natural to me, and I had a lot of the same team from Deadpool, which was great.

Talk about editing with Julian Clarke who cut Deadpool. How did that work?
It was the same setup. He’d be back here in LA cutting while we shot. He’s so fast; he’d be just one day behind me — I’ve never met anyone who works as hard. Then after the shoot, we’d edit all day and then I’d deal with VFX reviews for hours.

Can you talk about how Adobe Creative Cloud helped the post and VFX teams achieve their creative and technical goals?
I’m a big fan, and that started back on Deadpool as David Fincher was working closely with Adobe to make Premiere something that could beat Avid. We’re good friends — we’re doing our animated Netflix show Love, Death & Robots together — and he was like, “Dude, you gotta use this tool,” so we used it on Deadpool. It was still a little rocky on that one, but overall it was a great experience, and we knew we’d use it on this one. Adobe really helped refine it and the workflow, and it was a huge leap.

What were the big editing challenges?
(Laughs) We just shot too much movie. We had many discussions about cutting one or more of the action scenes, but in the end, we just took out some of the action from all of them, instead of cutting a particular set piece. But it’s tricky cutting stuff and still making it seamless, especially in a very heavily choreographed sequence like the C-5.

VFX plays a big role. How many were there?
Over 2,500 — a huge amount. The VFX on this were so huge it became a bit of a problem, to be honest.

L-R: Writer Iain Blair and director Tim Miller

How did you work with VFX supervisor Eric Barba?
He did a great job and oversaw all the vendors, including ILM, who did most of them. We tried to have them do all the character-based stuff, to keep it in one place, but in the end, we also had Digital Domain, Method, Blur, UPP, Cantina, and some others. We also brought on Jeff White from ILM since it was more than Eric could handle.

Talk about the importance of sound and music.
Tom Holkenborg, who scored Deadpool, did another great job. We also reteamed with sound designer and mixer Craig Henighan, and we did the mix at Fox. They’re both crucial in a film like this, but I’m the first to admit music’s not my strength. Luckily, Julian Clarke is excellent with that and very focused. He worked hard at pulling it all together. I love sound design and we talked about all the spotting, and Julian managed a lot of that too for me because I was so busy with the VFX.

Where did you do the DI and how important is it to you?
It’s huge, and we did it at Company 3 with Tim Stipan, who did Deadpool. I like to do a lot of reframing, adding camera shake and so on. It has a subtle but important effect on the overall film.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Technicolor Post opens in Wales 

Technicolor has opened a new facility in Cardiff, Wales, within Wolf Studios. This expansion of the company’s post production footprint in the UK is a result of the growing demand for more high-quality content across streaming platforms and the need to post these projects, as well as the growth of production in Wales.

The facility is connected to all of Technicolor’s locations worldwide through the Technicolor Production Network, giving creatives easy access to their projects no matter where they are shooting or posting.

The facility, an extension of Technicolor’s London operations, supports all Welsh productions and features a multi-purpose, state-of-the-art suite as well as space for VFX and front-end services including dailies. Technicolor Wales is working on Bad Wolf Production’s upcoming fantasy epic His Dark Materials, providing picture and sound services for the BBC/HBO show. Technicolor London’s recent credits include The Two Popes, The Souvenir, Chernobyl, Black Mirror, Gentleman Jack and The Spanish Princess.

Within this new Cardiff facility, Technicolor is offering 2K digital cinema projection, FilmLight Baselight color grading, realtime 4K HDR remote review, 4K OLED video monitoring, 5.1/7.1 sound, ADR recording/source connect, Avid Pro Tools sound mixing, dailies processing and Pulse cloud storage.

Bad Wolf Studios in Cardiff offers 125,000 square feet of stage space across five stages. There is flexible office space, as well as auxiliary rooms and costume and props storage.

Rising Sun Pictures’ Anna Hodge talks VFX education and training

Based in Adelaide, South Australia, Rising Sun Pictures (RSP) has created stunning visual effects for films including Spider-Man: Far From Home, Captain Marvel, Thor: Ragnarok and Game of Thrones.

It also operates a visual effects training program in conjunction with the University of South Australia in which students learn such skills as compositing, tracking, effects, lighting, look development and modeling from working professionals. Thanks to this program, many students have landed jobs in the industry.

We recently spoke with RSP’s manager of training and education, Anna Hodge, about the school’s success.

Tell us about the education program at Rising Sun Pictures.
Rising Sun Pictures is an independently owned visual effects company. We’ve worked on more than 130 films, as well as commercials and streaming series, and we are very much about employing locals from South Australia. When this is not possible, we hire staff from interstate and overseas for key senior positions.

Our education program was established in 2015 in conjunction with the University of South Australia (UniSA) in order to directly feed our junior talent pool. We found there was a gap between traditional visual effects training and the skills young artists needed to hit the ground running in a studio.

How is the program structured?
We began with a single 12-week Graduate Certificate in Visual Effects program designed for students coming out of vocational colleges and universities who want to improve their skills and employability. Students apply through a portfolio process. The program accepts 10 students each term and exposes them to Foundry Nuke and other visual effects software. They gain experience by working on shots from past movies and creating a short film.

The idea is to give them a true industry experience, develop a showreel in the process and gain a qualification through a prestigious university. Our students are exposed to the studio floor from day one. They attend RSP five days a week. They work in our training rooms and are immersed in the life of the company. We want them to feel as much a part of RSP as our regular employees.

Our program has grown to include two graduate certificate streams: we added the Graduate Certificate in Effects and Lighting, and our original certificate was rebadged as the Graduate Certificate in Compositing and Tracking. Both have been highly successful in helping graduates gain employment at RSP after their studies.

Anna Hodge and students

We also offer course work toward the university’s media arts degree. We teach two elective courses in the second year, specializing in modeling and texturing and look development and lighting. The university students attend RSP as part of their studies at UniSA. It gives them exposure to our artists, industry-type projects and expectations of the industry through workshop-based delivery.

In 2019, our education program expanded, and we introduced “visual effects specialization” as part of the media arts degree. Unlike any other degree, the students spend their entire last year of studies at RSP. They are integrated with the graduate certificate classes, and learning at RSP for the whole year enables them to build skills in both compositing and tracking and effects and lighting, making them highly skilled and desirable employees at the end of their studies.

What practical skills do students learn?
In the Media Arts Modeling and Texturing elective course, they are exposed to Maya and are introduced to Pixologic ZBrush. In the second semester, they can study look development and lighting and learn Substance Painter and how to light in SideFX Houdini.

Both degree and graduate certificate students in the dynamic effects and lighting course receive around nine weeks of Houdini training and then move onto lighting. Those in the compositing and tracking stream learn Nuke, as well as 3D Equalizer and Silhouette. All our degree and graduate certificate students are also exposed to Autodesk’s Shotgun. They learn the tools we use on the floor and apply them in the same workflow.

Skills are never taught in isolation. They learn how they fit into the whole movie-making process. Working on the short film project, run in conjunction with We Made a Thing Studios (WEMAT), students learn how to work collaboratively, take direction and gain other necessary skills required for working in a deadline-driven environment.

Where do your students come from?
We attract applications from South Australia. Over the past few years, applications from interstate and overseas have significantly increased. The benefit of our program is that it’s only 12 weeks long, so students can pick up the skills they require without a huge investment in time. There is strong job growth in South Australia, so graduates are often employed locally, or they sometimes return to their hometowns to gain employment.

What are the advantages of training in a working VFX studio?
Our training goes beyond simple software skills. Our students are taught by some of our best artists and by professionals who have been working in the industry for years. Students can walk around the studio, talk to and shadow artists, and attend a company staff meeting. We schedule what we call “Day in the Life Of” presentations so students can gain an understanding of the various roles that make up our company. Students hear from department heads, senior artists, producers and even juniors. They talk about their jobs and their pathways into the industry. They provide students with sound practical advice on how to improve their skills and present themselves. We also run sessions with recruiters, who share insights on building good resumes and showreels.

We are always trying to reinvent and improve what we do. I have one-on-ones with students to find out how they are doing and what we can do to improve their learning experience. We take feedback seriously. Our instructors are passionate artists and educators. Over time, I think we’ve built something quite unique and special at RSP.

How do you support your students in their transition from the program into the professional world?
We have an excellent relationship with recruiters at other visual effects companies in South Australia, interstate and globally, and we use those connections to help our students find work. A VFX company that opened in Brisbane recently hired two of our students and wants to hire more.

Of course, one reason we created the program was to meet our own need for juniors. So I work closely with our department heads to meet their needs. If a job lands and they have positions open, I will refer students for interviews. Many of our students stay in touch after they leave here. Our support doesn’t stop after 12 weeks. When former students add new material to their showreels, I encourage them to send them in and I forward them to the relevant heads of department. When one of our graduates secures his or her first VFX job, it’s the best news. This really makes my day.

How do you see the program evolving over the next few years?
We are working on new initiatives with UniSA. Nothing to reveal yet, but I do expect our numbers to grow simply because our graduate results are excellent. Our employment rate is well above 70 percent. I spoke with someone yesterday who is looking to apply next year. She was at a recent film event and met a bunch of our graduates who raved about the programs they studied at RSP. Hearing that sort of thing is really exciting and something that we are really proud of.

RSP and UniSA are both mindful that when scaling up we don’t compromise on quality delivery. It is important to us that students consistently receive the same high-quality training and support regardless of class size.

Do you feel that visual effects offer a strong career path?
Absolutely. I am constantly contacted by recruiters who are looking to hire our graduates. I don’t foresee a lack of jobs, only a lack of qualified artists. We need to keep educating students to avoid a skill shortage. There has never been a better time to train for a career in visual effects.

De-aging John Goodman 30 years for HBO’s The Righteous Gemstones

For HBO’s original series The Righteous Gemstones, VFX house Gradient Effects de-aged John Goodman using Shapeshifter, its proprietary AI-assisted tool that can turn back time on any video footage. With Shapeshifter, Gradient sidestepped the uncanny valley to shave decades off Goodman for an entire episode, delivering nearly 30 minutes of film-quality VFX in six weeks.

In the show’s fifth episode, “Interlude,” viewers journey back to 1989, a time when the Gemstone empire was still growing and Eli’s wife, Aimee-Leigh, was still alive. But going back also meant de-aging Goodman for an entire episode, something never attempted before on television. Gradient accomplished it using Shapeshifter, which allows artists to “reshape” an individual frame and the performers in it and then extend those results across the rest of a shot.

Shapeshifter worked by first analyzing the underlying shape of Goodman’s face. It then extracted important anatomical characteristics, like skin details, stretching and muscle movements. With the extracted elements saved as layers to be reapplied at the end of the process, artists could start reshaping his face without breaking the original performance or footage. Artists could tweak additional frames in 3D down the line as needed, but they often didn’t need to, making the de-aging process nearly automated.
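
Shapeshifter itself is proprietary, so purely as a rough illustration, here is a minimal Python sketch of the pipeline described above: analyze the face, set the detail aside as layers, let an artist reshape a single reference frame, extend that change across the shot and reapply the saved layers. Every function and data structure here is a hypothetical stand-in, not Gradient’s actual code.

```python
# Conceptual sketch only: hypothetical stand-ins for the stages described
# above, not Gradient Effects' actual Shapeshifter code.

def analyze_face(frame):
    # Stand-in for capturing the underlying facial shape on every frame.
    return frame["shape"]

def extract_detail_layers(frame):
    # Stand-in for setting aside skin detail, stretching and muscle movement.
    return {"skin": frame["skin"], "muscle": frame["muscle"]}

def de_age_shot(frames, reshaped_reference):
    """Reshape one reference frame, then extend that change across the shot."""
    base = analyze_face(frames[0])
    detail = extract_detail_layers(frames[0])
    # The artist's single-frame reshape is stored as a delta from the original.
    delta = {k: reshaped_reference[k] - base[k] for k in base}
    result = []
    for frame in frames:
        shape = analyze_face(frame)
        new_shape = {k: shape[k] + delta[k] for k in shape}  # propagate the reshape
        result.append({"shape": new_shape, **detail})        # reapply saved detail layers
    return result

# Toy usage: three frames, with the artist narrowing the jaw on the reference frame.
frames = [{"shape": {"jaw_width": 1.00 + 0.01 * i}, "skin": "layer", "muscle": "layer"}
          for i in range(3)]
younger = de_age_shot(frames, {"jaw_width": 0.90})
```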

“Shapeshifter is an entirely new way to de-age people,” says Olcun Tan, owner and visual effects supervisor at Gradient Effects. “While most productions are limited by time or money, we can turn around award-quality VFX on a TV schedule, opening up new possibilities for shows and films.”

Traditionally, de-aging work for film and television has been done in one of two ways: through filtering (saves time, but hard to scale) or CG replacements (better quality, higher cost), which can take six months to a year. Shapeshifter introduces a new method that not only preserves the actor’s original performance, but also interacts naturally with other objects in the scene.

“One of the first shots of ‘Interlude’ shows stage crew walking in front of John Goodman,” describes Tan. “In the past, a studio would have recommended a full CGI replacement for Goodman’s character because it would be too hard or take too much time to maintain consistency across the shot. With Shapeshifter, we can just reshape one frame and the work is done.”

This is possible because Shapeshifter continuously captures the face, including all of its essential details, using the source footage as its guide. With the data being constantly logged, artists can extract movement information from anywhere on the face whenever they want, replacing expensive motion-capture stages, equipment and makeup teams.

Director Ang Lee: Gemini Man and a digital clone

By Iain Blair

Filmmaker Ang Lee has always pushed the boundaries in cinema, both technically and creatively. His film Life of Pi, which he directed and produced, won four Academy Awards — for Best Direction, Best Cinematography, Best Visual Effects and Best Original Score.

Lee’s Brokeback Mountain won three Academy Awards, including Best Direction, Best Adapted Screenplay and Best Original Score. Crouching Tiger, Hidden Dragon was nominated for 10 Academy Awards and won four, including Best Foreign Language Film for Lee, Best Cinematography, Best Original Score and Best Art Direction/Set Decoration.

His latest, Paramount’s Gemini Man, is another innovative film, this time disguised as an action-thriller. It stars Will Smith in two roles — first, as Henry Brogan, a former Special Forces sniper-turned-assassin for a clandestine government organization; and second (with the assistance of ground-breaking visual effects) as “Junior,” a cloned younger version of himself with peerless fighting skills who is suddenly targeting him in a global chase. The chase takes them from the estuaries of Georgia to the streets of Cartagena and Budapest.

Rounding out the cast is Mary Elizabeth Winstead as Danny Zakarweski, a DIA agent sent to surveil Henry; Golden Globe Award-winner Clive Owen as Clay Verris, a former Marine officer now seeking to create his own personal military organization of elite soldiers; and Benedict Wong as Henry’s longtime friend, Baron.

Lee’s creative team included director of photography Dion Beebe (Memoirs of a Geisha, Chicago), production designer Guy Hendrix Dyas (Inception, Indiana Jones and the Kingdom of the Crystal Skull), longtime editor Tim Squyres (Life of Pi and Crouching Tiger, Hidden Dragon) and composer Lorne Balfe (Mission: Impossible — Fallout, Terminator Genisys).

The groundbreaking visual effects were supervised by Bill Westenhofer, Academy Award-winner for Life of Pi as well as The Golden Compass, and Weta Digital’s Guy Williams, an Oscar-nominee for The Avengers, Iron Man 3 and Guardians of the Galaxy Vol. 2.

Will Smith and Ang Lee on set

I recently talked to Lee — whose directing credits include Taking Woodstock, Hulk, Ride With the Devil, The Ice Storm and Billy Lynn’s Long Halftime Walk — about making the film, which has already generated a lot of awards talk about its cutting-edge technology, the workflow and his love of editing and post.

Hollywood’s been trying to make this for over two decades now, but the technology just wasn’t there before. Now it’s finally here!
It was such a great idea, if you can visualize it. When I was first approached about it by Jerry Bruckheimer and David Ellison, they said, “We need a movie star who’s been around a long time to play Henry, and it’s an action-thriller and he’s being chased by a clone of himself,” and I thought the whole clone idea was so fascinating. I think if you saw a young clone version of yourself, you wouldn’t see yourself as special anymore. It would be, “What am I?” That also brought up themes like nature versus nurture and how different two people with the same genes can be. Then the whole idea of what makes us human? So there was a lot going on, a lot of great ideas that intrigued me. How does aging work and affect you? How would you feel meeting a younger version of yourself? I knew right away it had to be a digital clone.

You certainly didn’t make it easy for yourself as you also decided to shoot it in 120fps at 4K and in 3D.
(Laughs) You’re right, but I’ve been experimenting with new technology for the past decade, and it all started with Life of Pi. That was my first taste of 3D, and for 3D you really need to shoot digitally because of the need for absolute precision and accuracy in synchronizing the two cameras and your eyes. And you need a higher frame rate to get rid of the strobing effect and any strangeness. Then when you go to 120 frames per second, the image becomes so clear and far smoother. It’s like a whole new kind of moviemaking, and that’s fascinating to me.

Did you shoot native 3D?
Yes, even though it’s still so clumsy and not easy. But for me it’s also a learning process on the set, which I enjoy.

Junior

There’s been a lot of talk about digital de-aging use, especially in Scorsese’s The Irishman. But you didn’t use that technique for Will’s younger self, right?
Right. I haven’t seen The Irishman so I don’t know exactly what they did, but this was a total CGI creation, and it’s a lead character where you need all the details and performance. Maybe the de-aging is fine for a quick flashback, but it’s very expensive to do, and it’s all done manually. This was also quite hard to do, and there are two parts to it: Scientifically, it’s quite mind-boggling, and our VFX supervisor Bill Westenhofer and his team worked so hard at it, along with the Weta team headed by VFX supervisor Guy Williams. So did Will. But then the hardest part is dealing with audiences’ impressions of Junior, as you know in the back of your mind that a young Will Smith doesn’t really exist. Creating a fully digital believable human being has been one of the hardest things to do in movies, but now we can.

How early on did you start integrating post and all the VFX?
Before we even started anything, as we didn’t have unlimited money, a big part of the budget went to doing a lot of tests, new equipment, R&D and so on, so we had to be very careful about planning everything. That’s the only way you can reduce costs in VFX. You have to be a good citizen and very disciplined. It was a two-year process, and you plan and shoot layer by layer, and you have to be very patient… then you start making the film in post.

I assume you did a lot of previz?
(Laughs) A whole lot, and not only for all the obvious action scenes. Even for the non-action stuff, we designed and made the cartoons and did previz and had endless meetings and scouted and measured and so on. It was a lot of effort.

How tough was the shoot?
It was very tough and very slow. My last three movies have been like this since the technology’s all so new, so it’s a learning process as you’re figuring it all out as you go. No matter how much you plan, new stuff comes up all the time and equipment fails. It feels very fragile and very vulnerable sometimes. And we only had a budget for a regular movie, so we could only shoot for 80 days, and we were on three continents and places like Budapest and Cartagena as well as around Savannah in the US. Then I insist on doing all the second unit stuff as well, apart from a few establishing shots and sunsets. I have to shoot everything, so we had to plan very carefully with the sound team as every shot is a big deal.

Where did you post?
All in New York. We rented space at Final Frame, and then later we were at Harbor. The thing is, no lab could process our data since it was so huge, so when we were based in Savannah we just built our own technology base and lab so we could process all our dailies and so on — and we bought all our servers, computers and all the equipment needed. It was all in-house, and our technical supervisor Ben Gervais oversaw it all. It was too difficult to take all that to Cartagena, but we took it all to Budapest and then set it all up later in New York for post.
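
For a sense of scale, here is a rough back-of-envelope sketch of the raw data rate implied by shooting native stereo 4K at 120fps. The resolution, bit depth and uncompressed assumption are chosen only for illustration and are not actual production figures.

```python
# Rough illustration only -- resolution, bit depth and the "uncompressed"
# assumption are not Gemini Man production figures.
width, height = 4096, 2160      # 4K DCI container (assumed)
fps = 120                       # stated high frame rate
eyes = 2                        # native stereo: two synchronized cameras
bytes_per_pixel = 3 * 10 / 8    # assume 10-bit RGB, uncompressed

bytes_per_second = width * height * fps * eyes * bytes_per_pixel
print(f"{bytes_per_second / 1e9:.1f} GB/s, "
      f"~{bytes_per_second * 3600 / 1e12:.0f} TB per hour of footage")
# Prints roughly 8 GB/s, on the order of 29 TB per hour of captured footage.
```

Even allowing for camera compression, volumes in that range sit well beyond a typical dailies pipeline, which helps explain the decision to process everything in-house.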

Do you like the post process?
I like the first half, but then it’s all about previews, getting notes, changing things. That part is excruciating. Although I have to give a lot of credit to Paramount as they totally committed to all the VFX quite early and put the big money there before they even saw a cut so we had time to do them properly.

Junior

Talk about editing with Tim Squyres. How did that work?
We sent him dailies. When I’m shooting, I just want to live in my dreams, unless something alarms me, and he’ll let me know. Otherwise, I prefer to work separately. But on this one, since we had to turn over some shots while we were shooting, he came to the set in Budapest, and we’d start post already, which was new to me. Before, I always liked to cut separately.

What were the big editing challenges?
Trying to put all the complex parts together, dealing with the rhythm and pace, going from quiet moments to things like the motorcycle chase scenes and telling the story as effectively as we could — all the usual things. In this medium, everything is more critical visually.

All the VFX play a big role. How many were there?
Over 1,000, but then Junior alone is a huge visual effect in every scene he’s in. Weta did all of him and complained that they got the hardest and most expensive part. (Laughs) The other, easier stuff was spread out to several companies, including Scanline and Clear Angle.

Ang Lee and Iain Blair

Talk about the importance of sound and music.
We did the mix at Harbor on its new stage, and it’s always so important. This time we did something new. Typically, you do Atmos at the final mix and mix the music along with all the rest, but our music editor did an Atmos mix on all the music first and then brought it to us for the final mix. That was very special.

Where did you do the DI and how important is it to you?
It’s huge on a movie like this. We set up our own DI suite in-house at Final Frame with the latest FilmLight Baselight, which is amazing. Our colorist Marcy Robinson had trained on it, and it was a lot easier than on the last film. Dion came in a lot and they worked together, and then I’d come in. We did a lot of work, especially on all the night scenes, enhancing moonlight and various elements.

I think the film turned out really well and looks great. When you have the combination of these elements like 3D, digital cinematography, high frame rate and high resolution, you really get “new immersive cinema.” So for me, it’s a new and different way of telling stories and processing them in your head. The funny thing is, personally I’m a very low-tech person, but I’ve been really pursuing this for the last few years.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Foundry updates Nuke to version 12.0

Foundry has released Nuke 12.0, which introduces the next cycle of releases for the Nuke family. The release brings improved interactivity and performance across the family, from additional GPU-enabled nodes for cleanup to a rebuilt playback engine in Nuke Studio and Hiero. Nuke 12.0 also integrates GPU-accelerated tools from Cara VR for camera solving, stitching and corrections, along with updates to the latest industry standards.

OpenEXR

New features of Nuke 12.0 include:
• UI interactivity and script loading – This release includes a variety of optimizations throughout the software to improve performance, especially when working at scale. One key improvement offers a much smoother experience, with noticeably better UI interactivity and reduced loading times when working in large scripts.
• Read and write performance – Nuke 12.0 includes focused improvements to OpenEXR read and write performance, including optimizations for several popular compression types (Zip1, Zip16, PIZ, DWAA, DWAB), improving render times and interactivity in scripts. Red and Sony camera formats also see additional GPU support. (A short scripting sketch of these EXR write settings follows this list.)
• Inpaint and EdgeExtend – These GPU-accelerated nodes provide faster and more intuitive workflows for common tasks, with fine detail controls and contextual paint strokes.
• Grid Warp Tracker – Extending the Smart Vector toolset in NukeX, this node uses Smart Vectors to drive grids for match moving, warping and morphing images.
• Cara VR node integration – The majority of Cara VR’s nodes are now integrated into NukeX, including a suite of GPU-enabled tools for VR and stereo workflows and tools that enhance traditional camera solving and cleanup workflows.
• Nuke Studio, Hiero and HieroPlayer Playback – The timeline-based tools in the Nuke family see dramatic improvements in playback stability and performance as a result of a rebuilt playback engine optimized for the heavy I/O demands of color-managed workflows with multichannel EXRs.
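
As a rough illustration of where the EXR write settings mentioned in the read and write performance item live for artists, here is a minimal Nuke Python sketch. The file paths are placeholders, and the exact compression knob labels are assumptions that may vary between Nuke versions.

```python
# Minimal sketch for Nuke's Script Editor. Paths are placeholders and the
# compression knob labels are assumptions that may differ between versions.
import nuke

read = nuke.nodes.Read(file="/shows/demo/plates/shot010.####.exr",
                       first=1001, last=1100)

write = nuke.nodes.Write(file="/shows/demo/renders/shot010_comp.####.exr")
write.setInput(0, read)
write["file_type"].setValue("exr")
write["compression"].setValue("Zip (1 scanline)")  # or PIZ, DWAA, DWAB, etc.

# Render the frame range through the Write node.
nuke.execute(write, 1001, 1100)
```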

HPA Awards name 2019 creative nominees

The HPA Awards Committee has announced the nominees for the creative categories for the 2019 HPA Awards. The HPA Awards honor outstanding achievement and artistic excellence by the individuals and teams who help bring stories to life. Launched in 2006, the HPA Awards recognize outstanding achievement in color grading, editing, sound and visual effects for work in episodic, spots and feature films.

The winners of the 14th Annual HPA Awards will be announced at a gala ceremony on November 21 at the Skirball Cultural Center in Los Angeles.

The 2019 HPA Awards Creative Category nominees are:

Outstanding Color Grading – Theatrical Feature

-“First Man”

Natasha Leonnet // Efilm

-“Roma”

Steven J. Scott // Technicolor

-“Green Book”

Walter Volpatto // FotoKem

-“The Nutcracker and the Four Realms”

Tom Poole // Company 3

-“Us”

Michael Hatzer // Technicolor

-“Spider-Man: Into the Spider-Verse”

Natasha Leonnet // Efilm

 

Outstanding Color Grading – Episodic or Non-theatrical Feature

-“The Handmaid’s Tale – Liars”

Bill Ferwerda // Deluxe Toronto

-“The Marvelous Mrs. Maisel – Vote for Kennedy, Vote for Kennedy”

Steven Bodner // Light Iron

-“Game of Thrones – Winterfell”

Joe Finley // Sim, Los Angeles

-“I am the Night – Pilot”

Stefan Sonnenfeld // Company 3

-“Gotham – Legend of the Dark Knight: The Trial of Jim Gordon”

Paul Westerbeck // Picture Shop

-“The Man in the High Castle – Jahr Null”

Roy Vasich // Technicolor

 

Outstanding Color Grading – Commercial  

-Zara – “Woman Campaign Spring Summer 2019”

Tim Masick // Company 3

-Tiffany & Co. – “Believe in Dreams: A Tiffany Holiday”

James Tillett // Moving Picture Company

-Hennessy X.O. – “The Seven Worlds”

Stephen Nakamura // Company 3

-Palms Casino – “Unstatus Quo”

Ricky Gausis // Moving Picture Company

-Audi – “Cashew”

Tom Poole // Company 3

 

Outstanding Editing – Theatrical Feature

-“Once Upon a Time… in Hollywood”

Fred Raskin, ACE

-“Green Book”

Patrick J. Don Vito, ACE

-“Rolling Thunder Revue: A Bob Dylan Story by Martin Scorsese”

David Tedeschi, Damian Rodriguez

-“The Other Side of the Wind”

Orson Welles, Bob Murawski, ACE

-“A Star Is Born”

Jay Cassidy, ACE

 

Outstanding Editing – Episodic or Non-theatrical Feature (30 Minutes and Under)

-“Russian Doll – The Way Out”

Todd Downing

-“Homecoming – Redwood”

Rosanne Tan, ACE

-“Veep – Pledge”

Roger Nygard, ACE

-“Withorwithout”

Jake Shaver, Shannon Albrink // Therapy Studios

-“Russian Doll – Ariadne”

Laura Weinberg

 

Outstanding Editing – Episodic or Non-theatrical Feature (Over 30 Minutes)

-“Stranger Things – Chapter Eight: The Battle of Starcourt”

Dean Zimmerman, ACE, Katheryn Naranjo

-“Chernobyl – Vichnaya Pamyat”

Simon Smith, Jinx Godfrey // Sister Pictures

-“Game of Thrones – The Iron Throne”

Katie Weiland, ACE

-“Game of Thrones – The Long Night”

Tim Porter, ACE

-“The Bodyguard – Episode One”

Steve Singleton

 

Outstanding Sound – Theatrical Feature

-“Godzilla: King of Monsters”

Tim LeBlanc, Tom Ozanich, MPSE // Warner Bros.

Erik Aadahl, MPSE, Nancy Nugent, MPSE, Jason W. Jennings // E Squared

-“Shazam!”

Michael Keller, Kevin O’Connell // Warner Bros.

Bill R. Dean, MPSE, Erick Ocampo, Kelly Oxford, MPSE // Technicolor

-“Smallfoot”

Michael Babcock, David E. Fluhr, CAS, Jeff Sawyer, Chris Diebold, Harrison Meyle // Warner Bros.

-“Roma”

Skip Lievsay, Sergio Diaz, Craig Henighan, Carlos Honc, Ruy Garcia, MPSE, Caleb Townsend

-“Aquaman”

Tim LeBlanc // Warner Bros.

Peter Brown, Joe Dzuban, Stephen P. Robinson, MPSE, Eliot Connors, MPSE // Formosa Group

 

Outstanding Sound – Episodic or Non-theatrical Feature

-“Chernobyl – 1:23:45”

Stefan Henrix, Stuart Hilliker, Joe Beal, Michael Maroussas, Harry Barnes // Boom Post

-“Deadwood: The Movie”

John W. Cook II, Bill Freesh, Mandell Winter, MPSE, Daniel Coleman, MPSE, Ben Cook, MPSE, Micha Liberman // NBC Universal

-“Game of Thrones – The Bells”

Tim Kimmel, MPSE, Onnalee Blank, CAS, Mathew Waters, CAS, Paula Fairfield, David Klotz

-“The Haunting of Hill House – Two Storms”

Trevor Gates, MPSE, Jason Dotts, Jonathan Wales, Paul Knox, Walter Spencer // Formosa Group

-“Homecoming – Protocol”

John W. Cook II, Bill Freesh, Kevin Buchholz, Jeff A. Pitts, Ben Zales, Polly McKinnon // NBC Universal

 

Outstanding Sound – Commercial 

-John Lewis & Partners – “Bohemian Rhapsody”

Mark Hills, Anthony Moore // Factory

-Audi – “Life”

Doobie White // Therapy Studios

-Leonard Cheshire Disability – “Together Unstoppable”

Mark Hills // Factory

-New York Times – “The Truth Is Worth It: Fearlessness”

Aaron Reynolds // Wave Studios NY

-John Lewis & Partners – “The Boy and the Piano”

Anthony Moore // Factory

 

Outstanding Visual Effects – Theatrical Feature

-“Avengers: Endgame”

Matt Aitken, Marvyn Young, Sidney Kombo-Kintombo, Sean Walker, David Conley // Weta Digital

-“Spider-Man: Far From Home”

Alexis Wajsbrot, Sylvain Degrotte, Nathan McConnel, Stephen Kennedy, Jonathan Opgenhaffen // Framestore

-“The Lion King”

Robert Legato

Andrew R. Jones

Adam Valdez, Elliot Newman, Audrey Ferrara // MPC Film

Tom Peitzman // T&C Productions

-“Alita: Battle Angel”

Eric Saindon, Michael Cozens, Dejan Momcilovic, Mark Haenga, Kevin Sherwood // Weta Digital

-“Pokemon Detective Pikachu”

Jonathan Fawkner, Carlos Monzon, Gavin Mckenzie, Fabio Zangla, Dale Newton // Framestore

 

Outstanding Visual Effects – Episodic (Under 13 Episodes) or Non-theatrical Feature

-“Game of Thrones – The Long Night”

Martin Hill, Nicky Muir, Mike Perry, Mark Richardson, Darren Christie // Weta Digital

-“The Umbrella Academy – The White Violin”

Everett Burrell, Misato Shinohara, Chris White, Jeff Campbell, Sebastien Bergeron

-“The Man in the High Castle – Jahr Null”

Lawson Deming, Cory Jamieson, Casi Blume, Nick Chamberlain, William Parker, Saber Jlassi, Chris Parks // Barnstorm VFX

-“Chernobyl – 1:23:45”

Lindsay McFarlane

Max Dennison, Clare Cheetham, Steven Godfrey, Luke Letkey // DNEG

-“Game of Thrones – The Bells”

Steve Kullback, Joe Bauer, Ted Rae

Mohsen Mousavi // Scanline

Thomas Schelesny // Image Engine

 

Outstanding Visual Effects – Episodic (Over 13 Episodes)

-“Hawaii Five-O – Ke iho mai nei ko luna”

Thomas Connors, Anthony Davis, Chad Schott, Gary Lopez, Adam Avitabile // Picture Shop

-“9-1-1 – 7.1”

Jon Massey, Tony Pizadeh, Brigitte Bourque, Gavin Whelan, Kwon Choi // FuseFX

-“Star Trek: Discovery – Such Sweet Sorrow Part 2”

Jason Zimmerman, Ante Dekovic, Aleksandra Kochoska, Charles Collyer, Alexander Wood // CBS Television Studios

-“The Flash – King Shark vs. Gorilla Grodd”

Armen V. Kevorkian, Joshua Spivack, Andranik Taranyan, Shirak Agresta, Jason Shulman // Encore VFX

-“The Orville – Identity: Part II”

Tommy Tran, Kevin Lingenfelser, Joseph Vincent Pike // FuseFX

Brandon Fayette, Brooke Noska // Twentieth Century Fox TV

 

In addition to the nominations announced today, the HPA Awards will present a small number of special awards. Visual effects supervisor and creative Robert Legato (The Lion King, The Aviator, Hugo, Harry Potter and the Sorcerer’s Stone, Titanic, Avatar) will receive the HPA Award for Lifetime Achievement.

Winners of the Engineering Excellence Award include Adobe, Epic Games, Pixelworks, Portrait Displays Inc. and LG Electronics. The recipient of the Judges Award for Creativity and Engineering, a juried honor, will be announced in the coming weeks. All awards will be bestowed at the HPA Awards gala.

For more information or to buy tickets to the 2019 HPA Awards, click here.

 

 

Using VFX to turn back time for Downton Abbey film

The feature film Downton Abbey is a continuation of the popular TV series, which followed the lives of the aristocratic Crawley family and their domestic help. Created by Julian Fellowes, the film is set in 1927, one year after the show’s final episode, and brings with it the exciting announcement of a royal visit to Downton from King George V and Queen Mary.

Framestore supported the film’s shoot and post, with VFX supervisor Kyle McCulloch and senior producer Ken Dailey leading the team. Following Framestore’s work creating post-war Britain for the BAFTA-nominated Darkest Hour, the VFX studio was approached to work directly with the film’s director, Michael Engler, to help ground the historical accuracy of the film.

Much of the original cast and crew returned, with a screenplay that required the new addition of a VFX department, “although it was important that we had a light footprint,” explains McCulloch. “I want people to see the credits and be surprised that there are visual effects in it.” The supporting VFX, spanning more than 170 shots, ranged from cleanups and seamless set transitions to extensive environment builds and augmentation.

Transporting the audience to an idealized interpretation of 1920s Britain required careful work on the structures of buildings, including the Abbey (Highclere Castle), Buckingham Palace and Lacock, a National Trust village in Wiltshire that was used as a location for Downton’s village. Using the available photogrammetry and captured footage, the artists set to work restoring the period, adding layers of dirt to the existing historical buildings and removing contemporary details.

Having changed so much since the early 20th century, King’s Cross Station needed a complete rebuild in CG, with digital train carriages, atmospheric smoke and large interior and exterior environment builds.

The team also helped with landscaping the idyllic grounds of the Abbey, replacing the lawn, trees and grass and removing power lines, cars and modern roads. Research was key, with the team collaborating with production designer Donal Woods and historical advisor Alastair Bruce, who came equipped with look books and photographs from the era. “A huge amount of the work was in the detail,” explains McCulloch. “We questioned everything; looking at the street surfaces, the type of asphalt used, down to how the gutters were built. All these tiny elements create the texture of the entire film. Everyone went through it with a very fine-tooth comb — every single frame.”

 

In addition, a long shot that followed the letter from the Royal Household from the exterior of the Abbey, through the corridors of the domestic “downstairs” to the aristocratic “upstairs,” was a particular challenge. The scenes based downstairs — including in the kitchen — were shot at Shepperton Studios on a set, with the upstairs being captured on location at Highclere Castle. It was important to keep the illusion of the action all being within one large household, requiring Framestore to stitch the two halves together.

Says McCulloch, “It was brute force, it was months of work and I challenge anyone to spot where the seam is.”

Flavor adds Joshua Studebaker as CG supervisor

Creative production house Flavor has added CG supervisor Joshua Studebaker to its Los Angeles studio. For more than eight years, Studebaker has been a freelance CG artist in LA, specializing in design, animation, dynamics, lighting/shading and compositing via Maya, Cinema 4D, V-Ray/Octane, Nuke and After Effects.

A frequent collaborator with Flavor and its brand and agency partners, Studebaker has also worked with Alma Mater, Arsenal FX, Brand New School, Buck, Greenhaus GFX, Imaginary Forces and We Are Royale in the past five years alone. In his new role with Flavor, Studebaker oversees visual effects and 3D services across the company’s global operations. Flavor’s Chicago, Los Angeles and Detroit studios offer color grading, VFX and picture finishing using tools like Autodesk Lustre and Flame Premium.

Flavor creative director Jason Cook also has a long history of working with Studebaker and deep respect for his talent. “What I love most about Josh is that he is both technical and a really amazing artist and designer. Adding him is a huge boon to the Flavor family, instantly elevating our production capabilities tenfold.”

Flavor has always emphasized creativity as a key ingredient, and according to Studebaker, that’s what attracted him. “I see Flavor as a place to grow my creative and design skills, as well as help bring more standardization to our process in house,” he explained. “My vision is to help Flavor become more agile and more efficient and to do our best work together.”

Visual Effects in Commercials: Chantix, Verizon

By Karen Moltenbrey

Once too expensive to consider for use in television commercials, visual effects soon found their way into this realm, enlivening and enhancing the spots. Today, countless commercials use increasingly complex VFX to entertain, to explain and to elevate a message. Here, we examine two very different approaches to using effects in this way. In the Verizon commercial Helping Doctors Fight Cancer, augmented reality imagery is transferred from a holographic medical application and fused into a heartwarming piece thanks to an extremely delicate production process. For the Chantix Turkey campaign, digital artists took a completely different approach, incorporating a stylized digital spokes-character, feathers and all, into various scenes.

Verizon Helping Doctors Fight Cancer

The main goal of television advertisements — whether they are 15, 30 or 60 seconds in length — is to sell a product. Some do it through a direct sales approach. Some by “selling” a lifestyle or brand. And some opt to tell a story. Verizon took the latter approach for a campaign promoting its 5G Ultra Wideband.

Vico Sharabani

For the spot Helping Doctors Fight Cancer, directed by Christian Weber, Verizon adds a human touch to its technology through a compelling story illustrating how its 5G network is being used within a mixed-reality environment so doctors can better treat cancer patients. The 30-second commercial features surgeons and radiologists using high-fidelity holographic 3D anatomical renderings that can be viewed from every angle and even projected onto a person’s body for a more comprehensive examination, while the imagery can potentially be shared remotely in near real time. The augmented-reality application is from Medivis, a start-up medical visualization company that is using Verizon’s next-generation 5G wireless speeds to deliver the high speeds and low latencies necessary for the application’s large datasets and interactive frame rates.

The spot introduces video footage of patients undergoing MRIs and discussion by Medivis cofounder Dr. Osamah Choudhry about how treatment could be radically changed using the technology. Holographic medical imagery is then displayed showing the Medivis AR application being used on a patient.

“McGarryBowen New York, Verizon’s advertising agency, wanted to show the technology in the most accurate and the most realistic way possible. So, we studied the technology,” says Vico Sharabani, founder/COO of The-Artery, which was tasked with the VFX work in the spot. To this end, The-Artery opted to use as much of the actual holographic content as possible, pulling assets from the Medivis software and fusing them with other broadcast-quality content.

The-Artery is no stranger to augmented reality, virtual reality and mixed reality. Highly experienced in visual effects, Sharabani founded the company to solve business problems within the visual space across all platforms, from films to commercials to branding, and as such, alternate reality and story have been integral elements to achieving that goal. Nevertheless, the work required for this spot was difficult and challenging.

“It’s not just acquiring and melding together 3D assets,” says Sharabani. “The process is complex, and there are different ways to do it — some better than others. And the agency wanted it to be true to the real-life application. This was not something we could just illustrate in a beautiful way; it had to be very technically accurate.”

To this end, much of the holographic imagery consisted of actual 3D assets from the Medivis holographic AR system, captured live. At times, though, The-Artery had to rework the imagery using multiple assets from the Medivis application, and at other times the artists re-created the medical imagery in CG.

Initially, the ad agency expected that The-Artery would recreate all the digital assets in CG. But after learning as much as they could about the Medivis system, Sharabani and the team were confident they could export actual data for the spot. “There was much greater value to using actual data when possible, actual CT data,” says Sharabani. “Then you have the most true-to-life representation, which makes the story even more heartfelt. And because we were telling a true story about the capabilities of the network around a real application being used by doctors, any misrepresentation of the human anatomy or scans would hurt the message and intention of the campaign.”

The-Artery began developing a solution with technicians at Medivis to export actual imagery via the HoloLens headset that’s used by the medical staff to view and manipulate the holographic imagery, to coincide with the needs of the commercial. Sometimes this involved merely capturing the screen performance as the HoloLens was being used. Other times the assets from the Medivis system were rendered over a greenscreen without a background and later composited into a scene.

“We have the ability to shoot through the HoloLens, which was our base; we used that as our virtual camera whereby the output of the system is driven by the HoloLens. Every time we would go back to do a capture (if the edit changed or the camera position changed), we had to use the HoloLens as our virtual camera in order to get the proper camera angle,” notes Sharabani. Because the HoloLens is a stereoscopic device, The-Artery always used the right-eye view for the representations, as it most closely reflected the experience of the user wearing the device.

Since the Medivis system is driven by the HoloLens, there is some shakiness present — an artifact the group retained in some of the shots to make it truer to life. “It’s a constant balance of how far we go with realism and at what point it is too distracting for the broadcast,” says Sharabani.

For imagery like the CT scans, the point cloud data was imported directly into Autodesk’s Maya, where it was turned into a 3D model. Other times the images were rendered out at 4K directly from the system. The Medivis imagery was later composited into the scenes using Autodesk’s Flame.
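
To make that step concrete, here is a minimal sketch, in Maya’s Python API, of how CT-derived point data might be brought in as a particle cloud for modelers to surface over. This is not The-Artery’s actual pipeline; the file path and .xyz format are hypothetical stand-ins for whatever the Medivis system exported.

    # Hypothetical example: read an .xyz export of CT scan points and build a
    # particle cloud in Maya that artists can model or retopologize over.
    import maya.cmds as cmds

    points = []
    with open("/jobs/verizon_5g/ct_scan_points.xyz") as f:  # hypothetical path
        for line in f:
            x, y, z = map(float, line.split()[:3])
            points.append((x, y, z))

    # One particle per scan point; the cloud acts as a modeling reference.
    cmds.particle(p=points, name="ctScan_pointCloud")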

However, not every bit of imagery was extracted from the system. Some had to be re-created using a standard 3D pipeline. For instance, the “scan” of the actor’s skull was replicated by the artists so that the skull model matched perfectly with the holographic imagery that was overlaid in post production (since everyone’s skull proportions are different). The group began by creating the models in Maya and then composited the imagery within Autodesk’s Flame, along with a 3D bounding box of the creative implant.

The artists also replicated the Medivis UI in 3D to recreate and match the performance of the three-dimensional UI to the hand gestures of the person “using” the Medivis system in the spot; both were filmed separately. For the CG interface, the group used Autodesk’s Maya and Flame, as well as Adobe’s After Effects.

“The process was so integrated to the edit, we needed the proper 3D tracking and some of the assets to be built as a 3D screen element,” explains Sharabani. “It gave us more flexibility to build the 3D UI inside of Flame, enabling us to control it more quickly and easily when we changed a hand gesture or expanded the shots.”

With The-Artery’s experience pertaining to virtual technology, the team was quick to understand the limitations of the project using this particular equipment. Once that was established, however, they began to push the boundaries with small hacks that enabled them to achieve their goals of using actual holographic data to tell an amazing story.

Chantix “Turkey” Campaign

Chantix is a medication that helps smokers kick the habit. To get its message across in a series of television commercials, the drug maker decided to talk turkey, focusing the campaign on a CG turkey that, well, goes “cold turkey” with the assistance of Chantix.

A series of four spots — Slow Turkey, Camping, AC and Beach Day — prominently feature the turkey, created at The Mill. The spots were directed and produced in-house by Mill+, The Mill’s end-to-end production arm, with Jeffrey Dates directing.


L-R: John Montefusco, Dave Barosin and Scott Denton

“Each one had its own challenges,” says CG lead John Montefusco. Nevertheless, the initial commercial, Slow Turkey, presented the biggest obstacle: the build of the character from the ground up. “It was not only a performance feat, but a technical one as well,” he adds.

Effects artist Dave Barosin echoed Montefusco’s assessment of Slow Turkey, which, in addition to building the main asset from scratch, required the development of a feather system. Meanwhile, Camping and AC added clothing, and Beach Day presented the challenge of wind, water and simulation in a moving vehicle.

According to senior modeler Scott Denton, the team was given a good deal of creative freedom when crafting the turkey. The artists were presented with some initial sketches, he adds, but more or less had free rein in the creation of the look and feel of the model. “We were looking to tread the line between cartoony and realistic,” he says. The first iterations became very cartoony, but the team subsequently worked backward to where the character was more of a mix between the two styles.

The crew modeled the turkey using Autodesk’s Maya and Pixologic’s ZBrush. It was then textured within Adobe’s Substance and Foundry’s Mari. All the details of the model were hand-sculpted. “Nailing the look and feel was the toughest challenge. We went through a hundred iterations before getting to the final character you see in the commercial,” Denton says.

The turkey contains 6,427 body feathers, 94 flight feathers and eight scalp feathers. They were simulated using a custom feather setup built by the lead VFX artist within SideFX Houdini, which made the process more efficient. Proprietary tools also were used to groom the character.
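
As a rough illustration only (not The Mill’s proprietary setup), that kind of feather pass can be blocked out in Houdini’s Python API by scattering points across the skin and copying a feather card onto them; the object and node names below are hypothetical.

    # Hypothetical Houdini sketch: scatter the 6,427 body-feather points on the
    # turkey's skin and instance a feather card onto each one.
    import hou

    geo = hou.node("/obj/turkey")          # assumed geometry network
    skin = geo.node("skin_mesh")           # assumed skin surface
    card = geo.node("feather_card")        # assumed single feather card

    scatter = geo.createNode("scatter", "body_feather_points")
    scatter.setFirstInput(skin)
    scatter.parm("npts").set(6427)         # body feather count cited above

    # In production, orientation and scale would come from groom attributes
    # (N, up, pscale) authored upstream; here the card is simply copied.
    copy = geo.createNode("copytopoints", "place_feathers")
    copy.setFirstInput(card)
    copy.setInput(1, scatter)
    copy.setDisplayFlag(True)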

The artists initially developed a concept sculpt in ZBrush of just the turkey’s head, which underwent numerous changes and versions before they added it to the body of the model. Denton then sculpted a posed version with sculpted feathers to show what the model might look like when posed, giving the client a better feel for the character. The artists later animated the turkey using Maya. Rendering was performed in Autodesk’s Arnold, while compositing was done within Foundry’s Nuke.

“Developing animation that holds good character and personality is a real challenge,” says Montefusco. “There’s a huge amount of evolution in the subtleties that ultimately make our turkey ‘the turkey.’”

For the most part, the same turkey model was used for all four spots, although the artists did adapt and change certain aspects, such as the skeleton and simulation meshes, for each as needed in the various scenarios.

For the turkey’s clothing (sweater, knitted vest, scarf, down vest, knitted cap, life vest), the group used Marvelous Designer 3D software for virtual clothes and fabrics, along with Maya and ZBrush. However, as Montefusco explains, tailoring for a turkey is far different than developing CG clothing for human characters. “Seeing as a lot of the clothes that were selected were knit, we really wanted to push the envelope and build the knit with geometry. Even though this made things a bit slower for our effects and lighting team, in the end, the finished clothing really spoke for itself.”

The four commercials also feature unique environments ranging from the interior and exterior of a home to a wooded area and beach. The artists used mostly plates for the environments, except for an occasional tent flap and chair replacement. The most challenging of these settings, says Montefusco, was the beach scene, which required full water replacement for the shot of the turkey on the paddle board.


Karen Moltenbrey is a veteran writer, covering visual effects and post production.

VFX in Features: Hobbs & Shaw, Sextuplets

By Karen Moltenbrey

What a difference a year makes. Then again, what a difference 30 years make. That’s about the time when the feature film The Abyss included photoreal CGI integrated with live action, setting a trend that continues to this day. Since that milestone many years ago, VFX wizards have tackled a plethora of complicated problems, including realistic hair and skin (paving the way for believable digital humans), as well as convincing water, fire and other elements. With each new blockbuster VFX film, digital artists continually raise the bar, challenging the status quo and themselves to elevate the art even further.

The visual effects in today’s feature films run the gamut from in-your-face imagery that can put you on the edge of your seat through heightened action to the kind that can make you laugh by amping up the comedic action. As detailed here, Fast & Furious Presents: Hobbs & Shaw takes the former approach, using VFX to carry out amazing stunts that are bigger and “badder” than ever. Opposite that is Sextuplets, which uses VFX to pull off a gag central to the film in a way that also pushes the envelope.

Fast & Furious Presents: Hobbs & Shaw

The Fast and the Furious film franchise, which has included eight features that collectively have amassed more than $5 billion worldwide since first hitting the road in 2001, is known for its high-octane action and visual effects. The latest installment, Fast & Furious Presents: Hobbs & Shaw, continues that tradition.

At the core of the franchise are next-level underground street racers who become reluctant fugitives pulling off big heists. Hobbs & Shaw, the first stand-alone vehicle, has Dwayne Johnson and Jason Statham reprising their roles as loyal Diplomatic Security Service lawman Luke Hobbs and lawless former British operative Deckard Shaw, respectively. This comes after facing off in Furious 7 (2015) and then playing cat and mouse as Shaw tries to escape from prison and Hobbs tries to stop him in 2017’s The Fate of the Furious. (Hobbs first appeared in 2011’s Fast Five and became an ally to the gang. Shaw’s first foray was in 2013’s Fast & Furious 6.)

Now, in the latest installment, the pair are forced to join forces to hunt down anarchist Brixton Lorr (Idris Elba), who has control of a bio weapon. The trackers are hired separately to find Hattie, a rogue MI6 agent (who is also Shaw’s sister, a fact that initially eludes Hobbs) after she injects herself with the bio agent and is on the run, searching for a cure.

The Universal Pictures film is directed by David Leitch (Deadpool 2, Atomic Blonde). Jonathan Sela (Deadpool 2, John Wick) is the DP, and the visual effects supervisor is Dan Glass (Deadpool 2, Jupiter Ascending). A number of VFX facilities worked on the film, including key vendor DNeg along with other contributors such as Framestore.

DNeg delivered 1,000-plus shots for the film, including a range of vehicle-based action sequences set in different global locations. The work involved the creation of full digi-doubles and digi-vehicle duplicates for the death-defying stunts, jumps and crashes, as well as complex effects simulations and extensive digital environments. Naturally, all the work had to fit seamlessly alongside live-action stunts and photography from a director with a stunt coordinator pedigree and a keen eye for authentic action sequences. In all, the studio worked on 26 sequences divided among the Vancouver, London and Mumbai locations. Vancouver handled mostly the Chernobyl break-in and escape sequences, as well as the Samoa chase. London did the McLaren chase and the cave fight, as well as London chase sequences. The Mumbai team assisted its colleagues in Vancouver and London.

When you think of the Fast & Furious franchise, the first thing that comes to mind is an intense car chase, and according to Chris Downs, CG supervisor at DNeg Vancouver, the Chernobyl beat is essentially one long, giant car-and-motorcycle pursuit, which he describes as “a pretty epic car chase.”

“We essentially have Brixton chasing Shaw and Hattie, and then Shaw and Hattie are trying to catch up to a truck that’s being driven by Hobbs, and they end up on these utility ramps and pipes, using them almost as a roadway to get up and into the turbine rooms, onto the rooftops and then jump between buildings,” he says. “All the while, everyone is getting chased by these drones that Brixton is controlling.”

The Chernobyl sequences — the break-in and the escape — were the most challenging work on the film for DNeg Vancouver. The villain, Brixton, is using the Chernobyl nuclear power plant in Ukraine as the site of his hideaway, leading Hobbs and Shaw to secretly break into his secret lab underneath Chernobyl to locate a device Brixton has there — and then not-so-secretly break out.

The break-in was filmed at a location outside of London, with the decommissioned Eggborough coal-fired power station serving as a backdrop. To transform the locale into Chernobyl, DNeg augmented the site with cooling towers and other digital structures. The artists also built an entire CG version of the site for the more extreme action, using photos of the actual Chernobyl as reference for their work. “It was a very intense build. We had artistic liberty, but it was based off of Chernobyl, and a lot of the buildings match the reference photography. It definitely maintained the feeling of a nuclear power plant,” says Downs.

Not only did the construction involve all the exteriors of the industrial complex around Chernobyl, but also an interior build of an “insanely complicated” turbine hall that the characters race through at one point.

The sequence required other environment work, too, as well as effects, digi-doubles and cloth sims for the characters’ flight suits and parachutes as they drop into the setting.

Following the break-in, Hobbs and Shaw are captured and tortured and then manage to escape from the lab just in time as the site begins to explode. For this escape sequence, the crew created a CG Chernobyl reactor and power station, automated drones, a digital chimney, an epic collapse of buildings, complex pyrotechnic clouds and burning material.

“The scope of the work, the amount of buildings and pipes, and the number of shots made this sequence our most difficult,” says Downs. “We were blowing it up, so all the buildings had to be effects-friendly as we’re crashing things through them.” Hobbs and Shaw commandeer vehicles as they try to outrun Brixton and the explosion, but Brixton and his henchmen give chase in a range of vehicles, including trucks, Range Rovers, motorcycles and more — a mix of CGI and practical with expert stunt drivers behind the wheel.

As expected for a Fast & Furious film, there’s a big variety of custom-built vehicles. Yet, for this scene and especially in Samoa, DNeg Vancouver crafted a range of CG vehicles, including motorcycles, SUVs, transport trucks, a flatbed truck, drones and a helicopter — 10 in all.

According to Downs, maintaining the appropriate wear and tear on the vehicles as the sequences progressed was not always easy. “Some are getting shot up, or something is blown up next to them, and you want to maintain the dirt and grime on an appropriate level,” he says. “And, we had to think of that wear and tear in advance because you need to build it into the model and the texture as you progress.”

The CG vehicles are mostly used for complex stunts, “which are definitely an 11 on the scale,” says Downs. Along with the CG vehicles, digi-doubles of the actors were also used for the various stunt work. “They are fairly straightforward, though we had a couple shots where we got close to the digi-doubles, so they needed to be at a high level of quality,” he adds. The Hattie digi-double proved the most difficult due to the hair simulation, which had to match the action on set, and the cloth simulation, which had to replicate the flow of her clothing.

“She has a loose sweater on during the Chernobyl sequence, which required some simulation to match the plate,” Downs adds, noting that the artists built the digi-doubles from scratch, using scans of the actors provided by production for quality checks.

The final beat of the Chernobyl escape comes with the chimney collapse. As the chase through Chernobyl progresses, Shaw tries to get Hattie to Hobbs, and Brixton tries to grab Hattie from Shaw. In the process, charges are detonated around the site, leading to the collapse of the main chimney, which just misses obliterating the vehicle they are all in as it travels down a narrow alleyway.

DNeg did a full environment build of the area for this scene, which included the entire alleyway and the chimney, and simulated the destruction of the chimney along with an explosive concussive force from the detonation. “There’s a large fireball at the beginning of the explosion that turns into a large volumetric cloud of dust that’s getting kicked up as the chimney is collapsing, and all that had to interact with itself,” Downs says of the scene. “Then, as the chimney is collapsing toward the end of the sequence, we had the huge chunks ripping through the volumetrics and kicking up more pyrotechnic-style explosions. As it is collapsing, it is taking out buildings along the way, so we had those blowing up and collapsing and interacting with our dust cloud, as well. It’s quite a VFX extravaganza.”

Adding to the chaos: The sequence was reshot. “We got new plates for the end of that escape sequence that we had to turn around in a month, so that was definitely a white-knuckle ride,” says Downs. “Thankfully we had already been working on a lot of the chimney collapse and had the Chernobyl build mostly filled in when word came in about the reshoot. But, just the amount of effects that went into it — the volumetrics, the debris and then the full CG environment in the background — was a staggering amount of very complex work.”

The action later moves from London at the start of the film to Ukraine for the Chernobyl sequences and then, in the third act, to Samoa, home of the Hobbs family, as the main characters seek refuge on the island while trying to escape from Brixton. But Brixton soon catches up to them, and the last showdown begins amid the island’s tranquil setting, with its shimmering blue ocean and lush green mountains. Some of the landscape is natural, some is man-made (sets) and some is CGI. To aid in the digital build of the Samoan environment, Glass traveled to the Hawaiian island of Kauai, where the filming took place, and took a good amount of reference footage.

For a daring chase in Samoa, the artists built out the cliff’s edge and sent a CG helicopter tumbling down the steep incline in the final battle with Brixton. In addition to creating the fully-digital Samoan roadside, CG cliff and 3D Black Hawk, the artists completed complex VFX simulations and destruction, and crafted high-tech combat drones and more for the sequence.

The helicopter proved to be the most challenging of all the vehicles, as it had a couple of hero moments when certain sections were fairly close to the camera. “We had to have a lot of model and texture detail,” Downs notes. “And then with it falling down the cliff and crash-landing onto the beach area, the destruction was quite tricky. We had to plan out which parts would be damaged the most and keep that consistent across the shots, and then go back in and do another pass of textures to support the scratches, dents and so forth.”

Meanwhile, DNeg London and Mumbai handled a number of sequences, among them the compelling McLaren chase, the CIA building descent and the final cave fight in Samoa. There were also a number of smaller sequences, for a total of approximately 750 shots.

One of the scenes in the film’s trailer that immediately caught fans’ attention was the McLaren escape/motorcycle transformation sequence, during which Hobbs, Shaw and Hattie are being chased by Brixton baddies on motorcycles through the streets of London. Shaw, behind the wheel of a McLaren 720S, tries to evade the motorbikes by maneuvering the prized vehicle underneath two crossing tractor trailer rigs, squeezing through with barely an inch to spare. The bad news for the trio: Brixton pulls an even more daring move, hopping off the bike while grabbing onto the back of it and then sliding parallel inches above the pavement as the bike zips under the road hazard practically on its side; once cleared, he pulls himself back onto the motorbike (in a memorable slow-motion stunt) and continues the pursuit thanks to his cybernetically altered body.

Chris Downs

According to Stuart Lashley, DNeg VFX supervisor, this sequence contained a lot of bluescreen car comps in which the actors were shot on stage in a McLaren rigged on a mechanical turntable. The backgrounds were shot alongside the stunt work in Glasgow (playing as London). In addition, there were a number of CG cars added throughout the sequence. “The main VFX set pieces were Hobbs grabbing the biker off his bike, the McLaren and Brixton’s transforming bike sliding under the semis, and Brixton flying through the double-decker bus,” he says. “These beats contained full-CG vehicles and characters for the most part. There was some background DMP [digital matte-painting] work to help the location look more like London. There were also a few shots of motion graphics where we see Brixton’s digital HUD through his helmet visor.”

As Lashley notes, it was important for the CG work to blend in with the surrounding practical stunt photography. “The McLaren itself had to hold up very close to the camera; it has a very distinctive look to its coating, which had to match perfectly,” he adds. “The bike transformation was a welcome challenge. There was a period of experimentation to figure out the mechanics of all the small moving parts while achieving something that looked cool at the same time.”

As exciting and complex as the McLaren scene is, Lashley believes the cave fight sequence following the helicopter/tractor trailer crash was perhaps even more of a difficult undertaking, as it had a particular VFX challenge in terms of the super slow-motion punches. The action takes place at a rock-filled waterfall location — a multi-story set on a 30,000-square-foot soundstage — where the three main characters battle it out. The film’s final sequence is a seamless blend of CG and live footage.

Stuart Lashley

“David [Leitch] had the idea that this epic final fight should be underscored by these very stylized, powerful impact moments, where you see all this water explode in very graphic ways,” explains Lashley. “The challenge came in finding the right balance between physics-based water simulation and creative stylization. We went through a lot of iterations of different looks before landing on something David and Dan [Glass] felt struck the right balance.”

The DNeg teams used a unified pipeline for their work, which includes Autodesk’s Maya for modeling, animation and the majority of cloth and hair sims; Foundry’s Mari for texturing; Isotropix’s Clarisse for lighting and rendering; Foundry’s Nuke for compositing; and SideFX’s Houdini for effects work, such as explosions, dust clouds, particulates and fire.

With expectations running high for Hobbs & Shaw, filmmakers and VFX artists once more delivered, putting audiences on the edge of their seats with jaw-dropping VFX work that shifted the franchise’s action into overdrive yet again. “We hope people have as much fun watching the result as we had making it. This was really an exercise in pushing everything to the max,” says Lashley, “often putting the physics book to one side for a bit and picking up the Fast & Furious manual instead.”

Sextuplets

When actor/comedian/screenwriter/film producer Marlon Wayans signed on to play the lead in the Netflix original movie Sextuplets, he was committing to a role requiring an extensive acting range. That’s because he was filling not one but seven different lead roles in the same film.

In Sextuplets, directed by Michael Tiddes, Wayans plays soon-to-be father Alan, who hopes to uncover information about his family history before his child’s arrival and sets out to locate his birth mother. Imagine Alan’s surprise when he finds out that he is part of “identical” sextuplets! Nevertheless, his siblings are about as unique as they come.

There’s Russell, the nerdy, overweight introvert and the only sibling not given up by their mother, with whom he lived until her recent passing. Ethan, meanwhile, is the embodiment of a 1970s pimp. Dawn is an exotic dancer who is in jail. Baby Pete is on his deathbed and needs a kidney. Jaspar is a villain reminiscent of Austin Powers’ Dr. Evil. Okay, that is six characters, all played by Wayans. Who is the seventh? (Spoiler alert: Wayans also plays their mother, who was simply on vacation and not actually dead as Russell had claimed.)

There are over 1,100 VFX shots in the movie. None, really, involved the transformation of the actor into the various characters — that was done using prosthetics, makeup, wigs and so forth, with slight digital touch-ups as needed. Instead, the majority of the effects work resulted from shooting with a motion-controlled camera and then compositing two (or more) of the siblings together in a shot. For Baby Pete, the artists also had to do a head replacement, comp’ing Wayans onto the body of a much smaller actor.

“We used quite a few visual effects techniques to pull off the movie. At the heart was motion control, [which enables precise control and repetition of camera movement] and allowed us to put multiple characters played by Marlon together in the scenes,” says Tiddes, who has worked with Wayans on multiple projects in the past, including A Haunted House.

The majority of shots involving the siblings were done on stage, filmed on bluescreen with a TechnoDolly for the motion control, as it is too impractical to fit the large rig inside an actual house for filming. “The goal was to find locations that had the exterior I liked [for those scenes] and then build the interior on set,” says Tiddes. “This gave me the versatility to move walls and use the TechnoDolly to create multiple layers so we could then add multiple characters into the same scene and interact together.”

According to Tiddes, the team approached exterior shots similarly to interior ones, with the added challenge of shooting the duplicate moments at the same time each day to get consistent lighting. “Don Burgess, the DP, was amazing in that sense. He was able to create almost exactly the same lighting elements from day to day,” he notes.

Michael Tiddes

So, whenever there was a scene with multiple Wayans characters, it would be filmed on back-to-back days with each of the characters. Tiddes usually started off with Alan, the straight man, to set the pace for the scene, using body doubles for the other characters. Next, the director would work out the shot with the motion control until the timing, composition and so forth was perfected. Then he would hit the Record button on the motion-control device, and the camera would repeat the same exact move over and over as many times as needed. The next day, the shot was replicated with the other character, and the camera would move automatically, and Wayans would have to hit the same marks at the same moment established on the first day.

“Then we’d do it again on the third day with another character. It’s kind of like building layers in Photoshop, and in the end, we would composite all those layers on top of each other for the final version,” explains Tiddes.

When one character would pass in front of another, it became a roto’d shot. Oftentimes a small bluescreen was set up on stage to allow for easier rotoscoping.

Image Engine was the main visual effects vendor on the film, with Bryan Jones serving as visual effects supervisor. The rotoscoping was done using a mix of SilhouetteFX’s Silhouette and Foundry’s Nuke, while compositing was mainly done using Nuke and Autodesk’s Flame.
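
A minimal Nuke Python sketch of that layer-by-layer assembly follows. It is not Image Engine’s actual comp script; the file paths and roto mattes are hypothetical stand-ins for the motion-control passes and rotoscoping described above.

    # Hypothetical sketch: merge separately shot motion-control passes, one
    # character per layer, each isolated by its roto matte.
    import nuke

    comp = nuke.nodes.Read(file="plates/alan_pass.####.exr")  # day-one base pass

    layers = [("plates/russell_pass.####.exr", "mattes/russell_roto.####.exr"),
              ("plates/dawn_pass.####.exr", "mattes/dawn_roto.####.exr")]

    for plate_path, matte_path in layers:
        plate = nuke.nodes.Read(file=plate_path)
        matte = nuke.nodes.Read(file=matte_path)
        # Push the roto into the plate's alpha, premultiply, then lay it over
        # the running composite -- "building layers in Photoshop."
        with_alpha = nuke.nodes.Copy(inputs=[plate, matte],
                                     from0="rgba.red", to0="rgba.alpha")
        premult = nuke.nodes.Premult(inputs=[with_alpha])
        comp = nuke.nodes.Merge2(inputs=[comp, premult], operation="over")

    nuke.nodes.Write(inputs=[comp], file="comp/party_scene_v001.####.exr")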

Make no mistake … using the motion-controlled camera was not without challenges. “When you attack a scene, traditionally you can come in and figure out the blocking on the day [of the shoot],” says Tiddes. “With this movie, I had to previsualize all the blocking because once I put the TechnoDolly in a spot on the set, it could not move for the duration of time we shot in that location. It’s a large 13-foot crane with pieces of track that are 10 feet long and 4 feet wide.”

In fact, one of the main reasons Tiddes wanted to do the film was because of the visual effects challenges it presented. In past films where an actor played multiple characters in a scene, usually one character is on one side of the screen and the other character is on the other side, and a basic split-screen technique would have been used. “For me to do this film, I wanted to visually do it like no one else has ever done it, and that was accomplished by creating camera movement,” he explains. “I didn’t want to be constrained to only split-screen lock-off camera shots that would lack energy and movement. I wanted the freedom to block scenes organically, allowing the characters the flexibility to move through the room, with the opportunity to cross each other and interact together physically. By using motion control, by being able to re-create the same camera movement and then composite the characters into the scene, I was able to develop a different visual style than previous films and create a heightened sense of interactivity and interaction between two or multiple characters on the screen while simultaneously creating dynamic movement with the camera and invoking energy into the scene.”

At times, Gregg Wayans, Marlon’s nephew, served as his body double. He even appears in a very wide shot as one of the siblings, although that occurred only once. “At the end of the day, when the concept of the movie is about Marlon playing multiple characters, the perfectionist in me wanted Marlon to portray every single moment of these characters on screen, even when the character is in the background and out of focus,” says Tiddes. “Because there is only one Marlon Wayans, and no one can replicate what he does physically and comedically in the moment.”

Tiddes knew he would be challenged going into the project, but the process was definitely more complicated than he had initially expected — even with his VFX editorial background. “I had a really good starting point as far as conceptually knowing how to execute motion control. But, it’s not until you get into the moment and start working with the actors that you really understand and digest exactly how to pull off the comedic timing needed for the jokes with the visual effects,” he says. “That is very difficult, and every situation is unique. There was a learning curve, but we picked it up quickly, and I had a great team.”

A system was established that worked for Tiddes and Burgess, as well as Wayans, who had to execute and hit certain marks and look at proper eyelines with precise timing. “He has an earwig, and I am talking to him, letting him know where to look, when to look,” says Tiddes. “At the same time, he’s also hearing dialogue that he’s done the day before in his ear, and he’s reacting to that dialogue while giving his current character’s lines in the moment. So, there’s quite a bit going on, and it all becomes more complex when you add the character and camera moving through the scene. After weeks of practice, in one of the final scenes with Jaspar, we were able to do 16 motion-controlled moments in that scene alone, which was a lot!”

At the very end of the film, the group tested its limits and had all six characters (mom and all the siblings, with the exception of Alan) gathered around a table. That scene was shot over a span of five days. “The camera booms down from a sign and pans across the party, landing on all six characters around a table. Getting that motion and allowing the camera to flow through the party onto all six of them seamlessly interacting around the table was a goal of mine throughout the project,” Tiddes says.

Other shots that proved especially difficult were those of Baby Pete in the hospital room, since the entire scene involved Wayans playing three additional characters who are also present: Alan, Russell and Dawn. And then they amped things up with the head replacement on Baby Pete. “I had to shoot the scene and then, on the same day, select the take I would use in the final cut of the movie, rather than select it in post, where traditionally I could pick another take if that one was not working,” Tiddes adds. “I had to set the pace on the first day and work things out with Marlon ahead of time and plan for the subsequent days — What’s Dawn going to say? How is Russell going to react to what Dawn says? You have to really visualize and previsualize all the ad-libbing that was going on and work it out right there in the moment and discuss it, to have kind of a loose plan, then move forward and be confident that you have enough time between lines to allow room for growth when a joke just comes out of nowhere. You don’t want to stifle that joke.”

While the majority of effects involved motion control, there is a scene that contains a good amount of traditional effects work. In it, Alan and Russell park their car in a field to rest for the night, only to awake the next morning to find they have inadvertently provoked a bull, which sees red, literally — both from Alan’s jacket and his shiny car. Artists built the bull in CG. (They used Maya and Side Effects Houdini to build the 3D elements and rendered them in Autodesk’s Arnold.) Physical effects were then used to lift the actual car to simulate the digital bull slamming into the vehicle. In some shots of the bull crashing into the car doors, a 3D car was used to show the doors being damaged.

In another scene, Russell and Alan catch a serious amount of air when they crash through a barn, desperately trying to escape the bull. “I thought it would be hilarious if, in that moment, cereal exploded and individual pieces flew wildly through the car, while [the cereal-obsessed] Russell scooped up one of the cereal pieces mid-air with his tongue for a quick snack,” says Tiddes. To do this, “I wanted to create a zero-gravity slow-motion moment. We shot the scene using a [Vision Research] high-speed Phantom camera at 480fps. Then in post, we created the cereal as a CG element so I could control how every piece moved in the scene. It’s one of my favorite VFX/comedy moments in the movie.”

As Tiddes points out, Sextuplets was the first project on which he used motion control, which let him create motion with the camera and still have the characters interact, giving the subconscious feeling they were actually in the room with one another. “That’s what made the comedy shine,” he says.


Karen Moltenbrey is a veteran writer/editor covering VFX and post production.

Mavericks VFX provides effects for Hulu’s The Handmaid’s Tale

By Randi Altman

Season 3 episodes of Hulu’s The Handmaid’s Tale are available for streaming, and if you had any illusions that things would lighten up a bit for June (Elisabeth Moss) and the ladies of Gilead, I’m sorry to say you will be disappointed. What’s not disappointing is that, in addition to the amazing acting and storylines, the show’s visual effects once again play a heavy role.

Brendan Taylor

Toronto’s Mavericks VFX has created visual effects for all three seasons of the show, based on Margaret Atwood’s dystopian view of the not-too-distant future. Its work has earned two Emmy nominations.

We recently reached out to Mavericks’ founder and visual effects supervisor, Brendan Taylor, to talk about the new season and his workflow.

How early did you get involved in each season? What sort of input did you have regarding the shots?
The Handmaid’s Tale production is great because they involve us as early as possible. Back in Season 2, when we had to do the Fenway Park scene, for example, we were in talks in August but didn’t shoot until November. For this season, they called us in August for the big fire sequence in Episode 1, and the scene was shot in December.

There’s a lot of nice leadup and planning that goes into it. Our opinions are sought after, and we’re able to provide input on the best methodology to use to achieve a shot. Showrunner Bruce Miller, along with the directors, has a sense of how they’d like to see it, and they’re great at taking in our recommendations. It’s very collaborative, and we all approach the process with “what’s best for the show” in mind.

What are some things that the showrunners asked of you in terms of VFX? How did they describe what they wanted?
Each person has a different approach. Bruce speaks in story terms, providing a broader sense of what he’s looking for. He gave us the overarching direction of where he wants to go with the season. Mike Barker, who directed a lot of the big episodes, speaks in more specific terms. He really gets into the details, determining the moods of the scene and communicating how each part should feel.

What types of effects did you provide? Can you give examples?
Some standout effects were the CG smoke in the burning fire sequence and the aftermath of the house being burned down. For the smoke, we had to make it snake around corners in a believable yet magical way. We had a lot of fire going on set, and we couldn’t have any actors or stunt people near it due to its size, so we had to line up multiple shots and composite them together to make everything look realistic. We then had to recreate the whole house in 3D in order to create the aftermath of the fire, with the house completely burned down.

We also went to Washington, and since we obviously couldn’t destroy the Lincoln Memorial, we recreated it all in 3D. That was a lot of back and forth between Bruce, the director and our team. Different parts of Lincoln being chipped away mean different things, and Bruce definitely wanted the head to be off. It was really fun because we got to provide a lot of suggestions. On top of that, we also had to create CGI handmaids and all the details that came with them. We had to get the robes right and did cloth simulation to match what was shot on set. There were about a hundred handmaids on set, but we had to make it look like there were thousands.

Were you able to reuse assets from last season for this one?
We were able to reuse a handmaid asset from last season, but it needed a lot of upgrades. Because there were closer shots of the handmaids this season, we had to tweak it and make sure little things like the texture, shaders and different cloth simulations were right.

Were you on set? How did that help?
Yes, I was on set, especially for the fire sequences. We spent a lot of time talking about what’s possible and testing different ways to make it happen. We want it to be as perfect as possible, so I had to make sure it was all done properly from the start. We sent another visual effects supervisor, Leo Bovell, down to Washington to supervise out there as well.

Can you talk about a scene or scenes where being on set played a part in doing something either practical or knowing you could do it in CG?
The fire sequence with the smoke going around the corner took a lot of on-set collaboration. We had tried doing it practically, but the smoke was moving too fast for what we wanted, and there was no way we could physically slow it down.

Having the special effects coordinator, John MacGillivray, there to give us real smoke that we could then match to was invaluable. In most cases on this show, very few audibles were called. They want to go into the show knowing exactly what to expect, so we were prepared and ready.

Can you talk about turnaround time? Typically, series have short ones. How did that affect how you worked?
The average turnaround time was eight weeks. We began discussions in August, before shooting, and had to deliver by January. We worked with Mike to simplify things without diminishing the impact. We just wanted to make sure we had the chance to do it well given the time we had. Mike was very receptive in asking what we needed to do to make it the best it could be in the timeframe that we had. Take the fire sequence, for example. We could have done full-CGI fire, but that would have taken six months. So we did our research and testing to find the most efficient way to merge practical effects with CGI and presented the best version in a shorter period of time.

What tools were used?
We used Foundry Nuke for compositing. We used Autodesk Maya to build all the 3D houses, including the burned-down house, and to destroy the Lincoln Memorial. Then we used Side Effects Houdini to do all the simulations, which can range from the smoke and fire to crowd and cloth.

Is there a shot that you are most proud of or that was very challenging?
The shot where we reveal the crowd over June when we’re in Washington was incredibly challenging. The actual Lincoln Memorial, where we shot, is an active public park, so we couldn’t prevent people from visiting the site. The most we could do was hold them off for a few minutes. We ended up having to clean out all of the tourists, which is difficult with a moving camera and moving people. We had to reconstruct about 50% of the plate. Then, in order to get the CG people to be standing there, we had to create a replica of the ground they’re standing on in CG. There were some models we got from the US Geological Survey, but they didn’t completely line up, so we had to make a lot of decisions on the fly.

The cloth simulation in that scene was perfect. We had to match the dampening and the movement of all the robes. Stephen Wagner, who is our effects lead on it, nailed it. It looked perfect, and it was really exciting to see it all come together. It looked seamless, and when you saw it in the show, nobody believed that the foreground handmaids were all CG. We’re very proud.

What other projects are you working on?
We’re working on a movie called Queen & Slim by Melina Matsoukas with Universal. It’s really great. We’re also doing YouTube Premium’s Impulse and Netflix’s series Madam C.J. Walker.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Visual Effects Roundtable

By Randi Altman

With Siggraph 2019 in our not-too-distant rearview mirror, we thought it was a good time to reach out to visual effects experts to talk about trends. Everyone has had a bit of time to digest what they saw. Users are thinking about what new tools and technologies might help their current and future workflows. Manufacturers are thinking about how their products will incorporate these new technologies.

We provided these experts with questions relating to realtime raytracing, the use of game engines in visual effects workflows, easier ways to share files and more.

Ben Looram, partner/owner, Chapeau Studios
Chapeau Studios provides production, VFX/animation, design and creative IP development (both for digital content and technology) for all screens.

What film inspired you to work in VFX?
There was Ray Harryhausen’s film Jason and the Argonauts, which I watched on TV when I was seven. The skeleton-fighting scene has been visually burned into my memory ever since. Later in life I watched an artist compositing some tough bluescreen shots on a Quantel Henry in 1997, and I instantly knew that that was going to be in my future.

What trends have you been seeing? USD? Rendering in the cloud? What do you feel is important?
Double the content for half the cost seems to be the industry’s direction lately. This is coming from new in-house/client-direct agencies that sometimes don’t know what they don’t know … so we help guide/teach them where it’s OK to trim budgets or dedicate more funds for creative.

Are game engines affecting how you work, or how you will work in the future?
Yes, rendering on device and all the subtle shifts in video fidelity shifted our attention toward game engine technology a couple years ago. As soon as the game engines start to look less canned and have accurate depth of field and parallax, we’ll start to integrate more of those tools into our workflow.

Right now we have a handful of projects in the forecast where we will be using realtime game engine outputs as backgrounds on set instead of shooting greenscreen.

What about realtime raytracing? How will that affect VFX and the way you work?
We just finished an R&D project with Intel’s new raytracing engine OSPRay for Siggraph. The ability to work on a massive scale with last-minute creative flexibility was my main takeaway. This will allow our team to support our clients’ swift changes in direction with ease on global launches. I see this ingredient as really exciting for our creative tech devs moving into 2020. Proof of concept iterations will become finaled faster, and we’ve seen efficiencies in lighting, render and compositing effort.

How have ML/AI affected your workflows, if at all?
None to date, but we’ve been making suggestions for new tools that will make our compositing and color correction process more efficient.

The Uncanny Valley. Where are we now?
Still uncanny. Even with well-done virtual avatar influencers on Instagram like Lil Miquela, we’re still caught with that eerie feeling of close-to-visually-correct with a “meh” filter.

Apple

Can you name some recent projects?
The Rookie’s Guide to the NFL. This was a fun hybrid project where we mixed CG character design with realtime rendering and voice activation. We created an avatar named Matthew for the NFL’s Amazon Alexa Skills store that answers your football questions in real time.

Microsoft AI: Carlsberg and Snow Leopard. We designed Microsoft’s visual language of AI on multiple campaigns.

Apple Trade In campaign: Our team concepted, shot and created an in-store video wall activation and on-all-device screen saver for Apple’s iPhone Trade In Program.

 

Mac Moore, CEO, Conductor
Conductor is a secure cloud-based platform that enables VFX, VR/AR and animation studios to seamlessly offload rendering and simulation workloads to the public cloud.

What are some of today’s VFX trends? Is cloud playing an even larger role?
Cloud is absolutely a growing trend. I think for many years the inherent complexity and perceived cost of cloud has limited adoption in VFX, but there’s been a marked acceleration in the past 12 months.

Two years ago at Siggraph, I was explaining the value of elastic compute and how it perfectly aligns with the elastic requirements that define our project-based industry; this year there was a much more pragmatic approach to cloud, and many of the people I spoke with are either using the cloud or planning to use it in the near future. Studios have seen referenceable success, both technically and financially, with cloud adoption and are now defining cloud’s role in their pipeline for fear of being left behind. Having a cloud-enabled pipeline is really a game changer; it is leveling the field and allowing artistic talent to be the differentiation, rather than the size of the studio’s wallet (and its ability to purchase a massive render farm).

How are game engines changing how VFX are done? Is this for everyone or just a select few?
Game engines for VFX have definitely attracted interest lately and show a lot of promise in certain verticals like virtual production. There’s more work to be done in terms of out-of-the-box usability, but great strides have been made in the past couple years. I also think various open source initiatives and the inherent collaboration those initiatives foster will help move VFX workflows forward.

Will realtime raytracing play a role in how your tool works?
There’s a need for managing the “last mile,” even in realtime raytracing, which is where Conductor would come in. We’ve been discussing realtime assist scenarios with a number of studios, such as pre-baking light maps and similar applications, where we’d perform some of the heavy lifting before assets are integrated in the realtime environment. There are certainly benefits on both sides, so we’ll likely land in some hybrid best practice using realtime and traditional rendering in the near future.

How do ML/AI and AR/VR play a role in your tool? Are you supporting OpenXR 1.0? What about Pixar’s USD?
Machine learning and artificial intelligence are critical for our next evolutionary phase at Conductor. To date we’ve run over 250 million core-hours on the platform, and for each of those hours, we have a wealth of anonymous metadata about render behavior, such as the software run, duration, type of machine, etc.

Conductor

For our next phase, we’re focused on delivering intelligent rendering akin to ride-share app pricing; the goal is to provide producers with an upfront cost estimate before they submit the job, so they have a fixed price that they can leverage for their bids. There is also a rich set of analytics that we can mine, and those analytics are proving invaluable for studios in the planning phase of a project. We’re working with data science experts now to help us deliver this insight to our broader customer base.
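As a rough illustration of that upfront-estimate idea (not Conductor’s actual model or API), the sketch below quotes a fixed price from historical per-frame core-hour metadata; the record format, rate and safety buffer are assumptions.

```python
from statistics import median

# Hypothetical historical metadata: (software, machine type, core-hours per frame).
history = [
    ("arnold", "64-core", 1.9),
    ("arnold", "64-core", 2.3),
    ("arnold", "64-core", 2.1),
]

def estimate_job_cost(frame_count, rate_per_core_hour, records, buffer=1.15):
    """Quote a fixed, upfront price from the median core-hours per frame of similar
    past jobs, padded with a safety buffer so the fixed bid can be honored."""
    per_frame = median(hours for _, _, hours in records)
    return per_frame * frame_count * rate_per_core_hour * buffer

# Example: a 1,200-frame sequence at an assumed $0.06 per core-hour.
print(f"Estimated cost: ${estimate_job_cost(1200, 0.06, history):,.2f}")
```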

The AR/VR front presents a unique challenge for cloud, due to the large size and variety of datasets involved. The rendering of these workloads is less about compute cycles and more about scene assembly, so we’re determining how we can deliver more of a whole product for this market in particular.

OpenXR and USD are certainly helping with industry best practices and compatibility, which build recipes for repeatable success, and Conductor is collaborating on guidelines for using those standards with cloud computing.

What is next on the horizon for VFX?
Cloud, open source and realtime technologies are all disrupting VFX norms and are converging in a way that’s driving an overall democratization of the industry. Gone are the days when you need a pile of cash and a big brick-and-mortar building to house all of your tech and talent.

Streaming services and new mediums, along with a sky-high quality bar, have increased the pool of available VFX work, which is attracting new talent. Many of these new entrants are bootstrapping their businesses with cloud, standards-based approaches and geographically dispersed artistic talent.

Conductor recently became a fully virtual company for this reason. I hire based on expertise, not location, and today’s technology allows us to collaborate as if we are in the same building.

 

Aruna Inversin, creative director/VFX supervisor, Digital Domain 
Digital Domain has provided visual effects and technology for hundreds of motion pictures, commercials, video games, music videos and virtual reality experiences. It also livestreams events in 360-degree virtual reality, creates “virtual humans” for use in films and live events, and develops interactive content, among other things.

What film inspired you to work in VFX?
RoboCop in 1987. The combination of practical effects, miniatures and visual effects inspired me to start learning about what some call “The Invisible Art.”

What trends have you been seeing? What do you feel is important?
There has been a large focus on realtime rendering and virtual production and on using them to increase the throughput of visual effects. While realtime rendering does indeed increase throughput, there is now a greater onus on filmmakers to plan their creative ideas and assets before they can be rendered. It is no longer truly post production; we are back in the realm of preproduction, using post tools and realtime tools to help define how a story is created and eventually filmed.

USD and cloud rendering are also important components, which give many different VFX facilities the ability to manage their resources effectively. Another trend that has since come to pass and gained more traction is the availability of ACES and a more unified color space from the Academy. This allows quicker throughput between all facilities.

Are game engines affecting how you work or how you will work in the future?
As my primary focus is in new media and experiential entertainment at Digital Domain, I already use game engines (cinematic engines, realtime engines) for the majority of my deliverables. I also use our traditional visual effects pipeline; we have created a pipeline that flows from our traditional cinematic workflow directly into our realtime workflow, speeding up the development process of asset creation and shot creation.

What about realtime raytracing? How will that affect VFX and the way you work?
The ability to use Nvidia’s RTX and raytracing increases the physicality and realistic approximations of virtual worlds, which is really exciting for the future of cinematic storytelling in realtime narratives. I think we are just seeing the beginnings of how RTX can help VFX.

How have AR/VR and AI/ML affected your workflows, if at all?
Augmented reality has occasionally been a client deliverable for us, but we are not using it heavily in our VFX pipeline. Machine learning, on the other hand, allows us to continually improve our digital humans projects, providing quicker turnaround with higher fidelity than competitors.

The Uncanny Valley. Where are we now?
There is no more uncanny valley. We have the ability to create a digital human with all the nuance expected! The only limitations are time and resources.

Can you name some recent projects?
I am currently working on a Time project but I cannot speak too much about it just yet. I am also heavily involved in creating digital humans for realtime projects for a number of game companies that wish to push the boundaries of storytelling in realtime. All these projects have a release date of 2020 or 2021.

 

Matt Allard, strategic alliances lead, M&E, Dell Precision Workstations
Dell Precision workstations feature the latest processors and graphics technology and target those working in the editing studio or at a drafting table, at the office or on location.

What are some of today’s VFX trends?
We’re seeing a number of trends in VFX at the moment — from 4K mastering from even higher-resolution acquisition formats and an increase in HDR content to game engines taking a larger role on set in VFX-heavy productions. Of course, we are also seeing rising expectations for more visual sophistication, complexity and film-level VFX, even in TV post (for example, Game of Thrones).

Will realtime raytracing play a role in how your tools work?
We expect that Dell customers will embrace realtime and hardware-accelerated raytracing as creative, cost-saving and time-saving tools. With the availability of Nvidia Quadro RTX across the Dell Precision portfolio, including on our 7000 series mobile workstations, customers can realize these benefits now to deliver better content wherever a production takes them in the world.

Large-scale studio users will not only benefit from the freedom to create the highest-quality content faster, but they’ll also likely see an overall impact on their energy consumption as they assess the move away from CPU rendering, which dominates studio data centers today. Moving toward GPU and hybrid CPU/GPU rendering approaches can offer equal or better rendering output with less energy consumption.

How are game engines changing how VFX are done? Is this for everyone or just a select few?
Game engines have made their way into VFX-intensive productions to deliver in-context views of the VFX during the practical shoot. With increasing quality driven by realtime raytracing, game engines have the potential to drive a master-quality VFX shot on set, helping to minimize the need to “fix it in post.”

What is next on the horizon for VFX?
The industry is at the beginning of a new era as artificial intelligence and machine learning techniques are brought to bear on VFX workflows. Analytical and repetitive tasks are already being targeted by major software applications to accelerate or eliminate cumbersome elements in the workflow. And as with most new technologies, it can result in improved creative output and/or cost savings. It really is an exciting time for VFX workflows!

Ongoing performance improvements to the computing infrastructure will continue to accelerate and democratize the highest-resolution workflows. Now more than ever, small shops and independents can access the computing power, tools and techniques that were previously available only to top-end studios. Additionally, virtualization techniques will allow flexible means to maximize the utilization and proliferation of workstation technology.

 

Carl Flygare, manager, Quadro Marketing, PNY
PNY provides tools for realtime raytracing, augmented reality and virtual reality with the goal of advancing VFX workflow creativity and productivity. PNY is Nvidia’s Quadro channel partner throughout North America, Latin America, Europe and India.

How will realtime raytracing play a role in workflows?
Budgets are getting tighter, timelines are contracting, and audience expectations are increasing. This sounds like a perfect storm, in the bad sense of the term, but with the right tools, it is actually an opportunity.

Realtime raytracing, based on Nvidia’s RTX technology and support from leading ISVs, enables VFX shops to fit into these new realities while delivering brilliant work. Whiteboarding a VFX workflow is a complex task, so let’s break it down by categories. In preproduction, specifically previz, realtime raytracing will let VFX artists present far more realistic and compelling concepts much earlier in the creative process than ever before.

This extends to the next phase, asset creation and character animation, in which models can incorporate essentially lifelike nuance, including fur, cloth, hair or feathers – or something else altogether! Shot layout, blocking, animation, simulation, lighting and, of course, rendering all benefit from additional iterations, nuanced design and the creative possibilities that realtime raytracing can express and realize. Even finishing, particularly compositing, can benefit. Given the applicable scope of realtime raytracing, it will essentially remake VFX workflows and overall film pipelines, and Quadro RTX series products are the go-to tools enabling this revolution.

How are game engines changing how VFX is done? Is this for everyone or just a select few?
Variety had a great article on this last May. ILM substituted realtime rendering and five 4K laser projectors for a greenscreen shot during a sequence from Solo: A Star Wars Story. This allowed the actors to perform in context — in this case, a hyperspace jump — but also allowed cinematographers to capture arresting reflections of the jump effect in the actors’ eyes. Think of it as “practical digital effects” created during shots, not added later in post. The benefits are significant enough that the entire VFX ecosystem, from high-end shops and major studios to independent producers, is using realtime production tools to rethink how movies and TV shows happen while extending their vision to realize previously unrealizable concepts or projects.

Project Sol

How do ML and AR play a role in your tool? And are you supporting OpenXR 1.0? What about Pixar’s USD?
Those are three separate but somewhat interrelated questions! ML (machine learning) and AI (artificial intelligence) can contribute by rapidly denoising raytraced images in far less time than would be required by letting a given raytracing algorithm run to conclusion. Nvidia enables AI denoising in OptiX 5.0 and is working with a broad array of leading ISVs to bring ML/AI-enhanced realtime raytracing techniques into the mainstream.

OpenXR 1.0 was released at Siggraph 2019. Nvidia (among others) is supporting this open, royalty-free and cross-platform standard for VR/AR. Nvidia is now providing VR enhancing technologies, such as variable rate shading, content adaptive shading and foveated rendering (among others), with the launch of Quadro RTX. This provides access to the best of both worlds — open standards and the most advanced GPU platform on which to build actual implementations.

Pixar and Nvidia have collaborated to make Pixar’s USD (Universal Scene Description) and Nvidia’s complementary MDL (Materials Definition Language) software open source in an effort to catalyze the rapid development of cinematic quality realtime raytracing for M&E applications.

Project Sol

What is next on the horizon for VFX?
VFX professionals and audiences have an insatiable desire to explore edge-of-the-envelope VFX, and they will increasingly turn to realtime raytracing based on the actual behavior of light and real materials, increasingly sophisticated shader technology and new mediums like VR and AR to explore new creative possibilities and entertainment experiences.

AI, specifically DNNs (deep neural networks) of various types, will automate many repetitive VFX workflow tasks, allowing creative visionaries and artists to focus on realizing formerly impossible digital storytelling techniques.

One obvious need is increasing the resolution at which VFX shots are rendered. We’re in a 4K world, but many films are finished at 2K, primarily because of VFX. 8K is unleashing the abilities (and changing the economics) of cinematography, so expect increasingly powerful realtime rendering solutions, such as Quadro RTX (and successor products when they come to market), along with amazing advances in AI, to allow the VFX community to innovate in tandem.

 

Chris Healer, CEO/CTO/VFX supervisor, The Molecule 
Founded in 2005, The Molecule creates bespoke VFX imagery for clients worldwide. Over 80 artists, producers, technicians and administrative support staff collaborate at our New York City and Los Angeles studios.

What film or show inspired you to work in VFX?
I have to admit, The Matrix was a big one for me.

Are game engines affecting how you work or how you will work?
Game engines are coming, but the talent pool is difficult and the bridge is hard to cross … a realtime artist doesn’t have the same mindset as a traditional VFX artist. The last small percentage of completion on a shot can invalidate any value gained by working in a game engine.

What about realtime raytracing?
I am amazed at this technology, and as a result bought stock in Nvidia, but the software has to get there. It’s a long game, for sure!

How have AR/VR and ML/AI affected your workflows?
I think artists are thinking more about how images work and how to generate them. There is still value in a plain-old four-cornered 16:9 rectangle that you can make the most beautiful image inside of.

AR, VR, ML, etc., are not that, to be sure. I think VR got skipped over in all the hype. There’s way more to explore in VR, and that will inform AR tremendously. It is going to take a few more turns to find a real home for all of this.

What trends have you been seeing? Cloud workflows? What else?
Everyone is rendering in the cloud. The biggest problem I see now is lack of a UBL model that is global enough to democratize it. UBL = usage-based licensing. I would love to be able to render while paying by the second or minute at large or small scales. I would love for Houdini or Arnold to be rentable on a Satoshi level … that would be awesome! Unfortunately, it is each software vendor that needs to provide this, which is a lot to organize.

The Uncanny Valley. Where are we now?
We saw in the recent Avengers film that Mark Ruffalo was in it. Or was he? I totally respect the Uncanny Valley, but within the complexity and context of VFX, this is not my battle. Others have to sort this one out, and I commend the artists who are working on it. Deepfake and Deeptake are amazing.

Can you name some recent projects?
We worked on Fosse/Verdon, but more recent stuff, I can’t … sorry. Let’s just say I have a lot of processors running right now.

 

Matt Bach and William George, lab technicians, Puget Systems 
Puget Systems specializes in high-performance custom-built computers — emphasizing each customer’s specific workflow.

Matt Bach

William George

What are some of today’s VFX trends?
Matt Bach: There are so many advances going on right now that it is really hard to identify specific trends. However, one of the most interesting to us is the back and forth between local and cloud rendering.

Cloud rendering has been progressing for quite a few years and is a great way to get a nice burst of rendering performance when you are in a crunch. However, there have been huge improvements in GPU-based rendering with technology like Nvidia OptiX. Because of these, you no longer have to spend a fortune to have a local render farm, and even a relatively small investment in hardware can often move the production bottleneck away from rendering to other parts of the workflow. Of course, this technology should make its way to the cloud at some point, but as long as these types of advances keep happening, the cloud is going to continue playing catch-up.

A few other trends that we are keeping our eyes on are the growing use of game engines, motion capture suits and realtime markerless facial tracking in VFX pipelines.

Realtime raytracing is becoming more prevalent in VFX. What impact does realtime raytracing have on system hardware, and what do VFX artists need to be thinking about when optimizing their systems?
William George: Most realtime raytracing requires specialized computer hardware, specifically video cards with dedicated raytracing functionality. Raytracing can be done on the CPU and/or normal video cards as well, which is what render engines have done for years, but not quickly enough for realtime applications. Nvidia is the only game in town at the moment for hardware raytracing on video cards with its RTX series.

Nvidia’s raytracing technology is available on its consumer (GeForce) and professional (Quadro) RTX lines, but which one to use depends on your specific needs. Quadro cards are specifically made for this kind of work, with higher reliability and more VRAM, which allows for the rendering of more complex scenes … but they also cost a lot more. GeForce, on the other hand, is more geared toward consumer markets, but the “bang for your buck” is incredibly high, allowing you to get several times the performance for the same cost.

In between these two is the Titan RTX, which offers very good performance and VRAM for its price, but due to its fan layout, it should only be used as a single card (or at most in pairs, if used in a computer chassis with lots of airflow).

Another thing to consider is that if you plan on using multiple GPUs (which is often the case for rendering), the size of the computer chassis itself has to be fairly large in order to fit all the cards, power supply, and additional cooling needed to keep everything going.

How are game engines changing or impacting VFX workflows?
Bach: Game engines have been used for previsualization for a while, but we are starting to see them being used further and further down the VFX pipeline. In fact, there are already several instances where renders directly captured from game engines, like Unity or Unreal, are being used in the final film or animation.

This is getting into speculation, but I believe that as the quality of what game engines can produce continues to improve, it is going to drastically shake up VFX workflows. The fact that you can make changes in real time, as well as use motion capture and facial tracking, is going to dramatically reduce the amount of time necessary to produce a highly polished final product. Game engines likely won’t completely replace more traditional rendering for quite a while (if ever), but it is going to be significant enough that I would encourage VFX artists to at least familiarize themselves with the popular engines like Unity or Unreal.

What impact do you see ML/AI and AR/VR having for your customers?
We are seeing a lot of work being done for machine learning and AI, but a lot of it is still on the development side of things. We are starting to get a taste of what is possible with things like Deepfakes, but there is still so much that could be done. I think it is too early to really tell how this will affect VFX in the long term, but it is going to be exciting to see.

AR and VR are cool technologies, but it seems like they have yet to really take off, in part because designing for them takes a different way of thinking than traditional media, but also in part because there isn’t one major platform that’s an overwhelming standard. Hopefully, that is something that gets addressed over time, because once creative folks really get a handle on how to use the unique capabilities of AR/VR to their fullest, I think a lot of neat stories will be told.

What is next on the horizon for VFX?
Bach: The sky is really the limit due to how fast technology and techniques are changing, but there are two things in particular that will be very interesting to watch play out.

First, we are hitting a point where ethics (“With great power comes great responsibility” and all that) is a serious concern. With how easy it is to create highly convincing Deepfakes of celebrities or other individuals, even for someone who has never used machine learning before, I believe that there is the potential of backlash from the general public. At the moment, every use of this type of technology has been for entertainment or otherwise rightful purposes, but the potential to use it for harm is too significant to ignore.

Something else I believe we will start to see is “VFX for the masses,” similar to how video editing used to be a purely specialized skill, but now anyone with a camera can create and produce content on social platforms like YouTube. Advances in game engines, facial/body tracking for animated characters and other technologies that remove a number of skills and hardware barriers for relatively simple content are going to mean that more and more people with no formal training will take on simple VFX work. This isn’t going to impact the professional VFX industry by a significant degree, but I think it might spawn a number of interesting techniques or styles that might make their way up to the professional level.

 

Paul Ghezzo, creative director, Technicolor Visual Effects
Technicolor and its family of VFX brands provide visual effects services tailored to each project’s needs.

What film inspired you to work in VFX?
At a pretty young age, I fell in love with Star Wars: Episode IV – A New Hope and learned about the movie magic that was developed to make those incredible visuals come to life.

What trends have you been seeing? USD? Rendering in the cloud? What do you feel is important?
USD will help structure some of what we currently do, and cloud rendering is an incredible source to use when needed. I see both of them maturing and being around for years to come.

As for other trends, I see new methods of photogrammetry and HDRI photography/videography providing datasets for digital environment creation and capturing lighting content; performance capture (smart 2D tracking and manipulation or 3D volumetric capture) for ease of performance manipulation or layout; and even post camera work. New simulation engines are creating incredible and dynamic sims in a fraction of the time, and all of this comes together through video cards that streamline the creation of the end product. In many ways it might reinvent what can be done, but it might take a few cutting-edge shows to embrace and perfect the recipe and show its true value.

Production cameras tethered to digital environments for live set extensions are also coming of age, and with realtime rendering becoming a viable option, I can imagine that it will only be a matter of time before LED walls become the new greenscreen. Can you imagine a live-action set extension that parallaxes, distorts and is exposed in the same way as its real-life foreground? How about adding explosions, bullet hits or even an armada of spaceships landing in the BG, all on cue? I imagine this will happen in short order. Exciting times.

Are game engines affecting how you work or how you will work in the future?
Game engines have affected how we work. The speed and quality that they offer are undoubtedly a game changer, but they don’t always create the desired elements and AOVs that are typically needed in TV/film production.

They are also creating a level of competition that is spurring other render engines to be competitive and provide a similar or better solution. I can imagine that our future will use Unreal/Unity engines for fast turnaround productions like previz and stylized content, as well as for visualizing virtual environments and digital sets as realtime set extensions and a lot more.

Snowfall

What about realtime raytracing? How will that affect VFX and the way you work?
GPU rendering has single-handedly changed how we render and what we render with. A handful of GPUs and a GPU-accelerated render engine can equal or surpass a CPU farm that’s several times larger and much more expensive. In VFX, iterations equal quality, and if multiple iterations can be completed in a fraction of the time — and with production time usually being finite — then GPU-accelerated rendering equates to higher quality in the time given.

There are a lot of hidden variables in that equation (changes of direction, level of talent, work ethic, hardware/software limitations, etc.), but simply put, if you can hit the notes as fast as they are given and don’t have to wait hours for a render farm to churn out a product, then the faster each iteration can be provided, the more iterations can be produced, allowing for a higher-quality product in the time given.
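The underlying arithmetic is simple. Here is a minimal sketch with assumed render and review times rather than real production figures:

```python
# Back-of-the-envelope sketch of the "iterations equal quality" argument.
def iterations_available(hours_available, render_hours_per_iteration,
                         review_hours_per_iteration=0.5):
    """How many look-dev/lighting iterations fit in a fixed production window."""
    cycle = render_hours_per_iteration + review_hours_per_iteration
    return int(hours_available // cycle)

window = 24.0  # assumed hours until the next client review
print("CPU farm (3.0 h per iteration render):", iterations_available(window, 3.0))  # 6
print("GPU rig  (0.5 h per iteration render):", iterations_available(window, 0.5))  # 24
```

Same deadline, several times as many chances to address notes.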

How have AR or ML affected your workflows, if at all?
ML and AR haven’t significantly affected our current workflows yet … but I believe they will very soon.

One aspect of AR/VR/MR that we occasionally use in TV/film production is previzing environments, props and vehicles, which lets everyone in production and on set/location see what the greenscreen will be replaced with and allows for greater communication and understanding among the directors, DPs, gaffers, stunt teams, SFX and talent. I can imagine that AR/VR/MR will only become more popular as a preproduction tool, allowing productions to front-load and approve all aspects of production way before the camera is loaded and the clock is running on cast and crew.
Machine learning is on the cusp of general usage, but it currently seems to be used by productions with lengthy schedules that can benefit from development teams building those toolsets. There are tasks that ML will undoubtedly revolutionize, but it hasn’t affected our workflows yet.

The Uncanny Valley. Where are we now?
Making the impossible possible … that is what we do in VFX. Looking at everything from Digital Emily in 2008 to Thanos and Hulk in Avengers: Endgame, we’ve seen what can be done. The Uncanny Valley will likely remain, but only on productions that can’t afford the time or cost of flawless execution.

Can you name some recent projects?
Big Little Lies, Dead to Me, NOS4A2, True Detective, Veep, This Is Us, Snowfall, The Loudest Voice, and Avengers: Endgame.

 

James Knight, virtual production director, AMD 
AMD is a semiconductor company that develops computer processors and related technologies for M&E as well as other markets. Its tools include Ryzen and Threadripper.

What are some of today’s VFX trends?
Well, certainly the exploration for “better, faster, cheaper” keeps going. Faster rendering, so our community can accomplish more iterations in a much shorter amount of time, seems to be something I’ve heard about the whole time I’ve been in the business.

I’d surely say the virtual production movement (or on-set visualization) is gaining steam, finally. I work with almost all the major studios in my role, and all of them, at a minimum, have the ability to speed up post and blend it with production on their radar; many have virtual production departments.

How are game engines changing how VFX are done? Is this for everyone or just a select few?
I would say game engines are where most of the innovation comes from these days. Think about Unreal, for example. Epic pioneered Fortnite, and the revenue from that must be astonishing, and they’re not going to sit on their hands. The feature film and TV post/VFX business benefits from the requirement of the gaming consumer to see higher-resolution, more photorealistic images in real time. That gets passed on to our community by eliminating guesswork on set when framing partial or completely CG shots.

It should be for everyone or most, because the realtime and post production time savings are rather large. I think many still have a personal preference for what they’re used to. And that’s not wrong, if it works for them, obviously that’s fine. I just think that even in 2019, use of game engines is still new to some … which is why it’s not completely ubiquitous.

How do ML or AR play a role in your tool? Are you supporting OpenXR 1.0? What about Pixar’s USD?
Well, it’s more the reverse. With our new Rome and Threadripper CPUs, we’re powering AR. Yes, we are supporting OpenXR 1.0.

What is next on the horizon for VFX?
Well, the demand for VFX is increasing, not the opposite, so the pursuit of faster photographic reality is perpetually in play. That’s good job security for me at a CPU/GPU company, as we have a way to go to properly bridge the Uncanny Valley completely, for example.

I’d love to say lower-cost CG is part of the future, but then look at the budgets of major features — they’re not exactly falling. The dance of Moore’s law will forever be in effect more than likely, with momentary huge leaps in compute power — like with Rome and Threadripper — catching amazement for a period. Then, when someone sees the new, expanded size of their sandpit, they then fill that and go, “I now know what I’d do if it was just a bit bigger.”

I am invested in and fascinated by the future of VFX, but I think it goes hand in hand with great storytelling. If we don’t have great stories, then directing and artistry innovations don’t properly get noticed. Look at the top 20 highest-grossing films in history … they’re all fantasy. We all want to be taken away from our daily lives and immersed in a beautiful, realistic, VFX-intense fictional world for 90 minutes, so we’ll be forever pushing the boundaries of rigging, texturing, shading, simulations, etc. To put my finger on exactly what’s next, I’d say I happen to know of a few amazing things that are coming, but sadly, I’m not at liberty to say right now.

 

Michel Suissa, managing director of pro solutions, The Studio-B&H 
The Studio-B&H provides hands-on experience to high-end professionals. Its Technology Center is a fully operational studio with an extensive display of high-end products and state-of-the-art workflows.

What are some of today’s VFX trends?
AI, ML, NN (GAN) and realtime environments

Will realtime raytracing play a role in how the tools you provide work?
It already does with most relevant applications in the market.

How are game engines changing how VFX are done? Is this for everyone or just a select few?
Realtime game engines are becoming more mainstream with every passing year. They are becoming fairly accessible to a number of disciplines across different target markets.

What is next on the horizon for VFX?
New pipeline architectures that will rely on different implementations (traditional and AI/ML/NN) and mixed infrastructures (local and cloud-based).

What trends have you been seeing? USD? Rendering in the cloud? What do you feel is important?
AI, ML and realtime environments. New cloud toolsets. Prominence of neural networks and GANs. Proliferation of convincing “deepfakes” as a proof of concept for the use of generative networks as resources for VFX creation.

What about realtime raytracing? How will that affect VFX workflows?
RTX is changing how most people see their work being done. It is also changing expectations about what it takes to create and render CG images.



The Uncanny Valley. Where are we now?
AI and machine learning will help us get there. Perfection still remains too costly. The amount of time and resources required to create something convincing is prohibitive for the large majority of the budgets.

 

Marc Côté, CEO, Real by Fake 
Real by Fake services include preproduction planning, visual effects, post production and tax-incentive financing.

What film or show inspired you to work in VFX?
George Lucas’ Star Wars and Indiana Jones (Raiders of the Lost Ark). For Star Wars, I was a kid and I saw this movie. It brought me to another universe. Star Wars was so inspiring even though I was too young to understand what the movie was about. The robots in the desert and the spaceships flying around. It looked real; it looked great. I was like, “Wow, this is amazing.”

Indiana Jones because it was a great adventure; we really visit the worlds. I was super-impressed by the action, by the way it was done. It was mostly practical effects, not really visual effects. Later on I realized that in Star Wars, they were using robots (motion control systems) to shoot the spaceships. And as a kid, I was very interested in robots. And I said, “Wow, this is great!” So I thought maybe I could use my skills and what I love and combine it with film. So that’s the way it started.

What trends have you been seeing? What do you feel is important?
The trend right now is using realtime rendering engines. It’s coming on pretty strong. The game companies who build engines like Unity or Unreal are offering a good product.

It’s a bit of a hack to use these tools for rendering or in production at this point. They’re great for previz, and they’re great for generating realtime environments and realtime playback. But having the capacity to change or modify imagery with the director during the process of finishing is still not easy. But it’s a very promising trend.

Rendering in the cloud gives you a very rapid capacity, but I think it’s very expensive. You also have to download and upload 4K images, so you need a very big internet pipe. So I still believe in local rendering — either with CPUs or GPUs. But cloud rendering can be useful for very tight deadlines or for small companies that want to achieve something that’s impossible to do with the infrastructure they have.
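A quick back-of-the-envelope sketch shows why the pipe matters; the frame sizes and link speeds below are assumptions for illustration, not figures from Real by Fake.

```python
# Time to move a 4K EXR sequence to or from the cloud at a given connection speed.
def transfer_hours(frames, mb_per_frame, mbps):
    """Hours to move an image sequence over a link rated in megabits per second."""
    total_megabits = frames * mb_per_frame * 8
    return total_megabits / mbps / 3600

shot_frames = 2000      # a modest batch of frames, several render layers deep
frame_size_mb = 50      # assumed multi-channel 4K EXR frame

for link in (100, 1000, 10000):  # 100 Mb/s, 1 Gb/s, 10 Gb/s
    print(f"{link:>6} Mb/s link: {transfer_hours(shot_frames, frame_size_mb, link):5.2f} hours")
```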

My hope is that AI will minimize repetition in visual effects. For example, in keying. We key multiple sections of the body, but we get keying errors in plotting or transparency or in the edges, and they are all a bit different, so you have to use multiple keys. AI would be useful to define which key you need to use for every section and do it automatically and in parallel. AI could be an amazing tool to be able to make objects disappear by just selecting them.

Pixar’s USD is interesting. The question is: Will the industry take it as a standard? It’s like anything else. Kodak invented DPX, and it became the standard through time. Now we are using EXR. We have different software, and having exchange between them will be great. We’ll see. We have FBX, which is a really good standard right now. It was built by Kaydara, a Montreal company whose technology eventually ended up at Autodesk. So we’ll see. The demand and the companies who build the software — they will be the ones who take it up or not. A big company like Pixar has the advantage that other companies will follow it and use the format.

The last trend is remote access. The internet is now allowing us to connect cross-country, like from LA to Montreal or Atlanta. We have a sophisticated remote infrastructure, and we do very high-quality remote sessions with artists who work from disparate locations. It’s very secure and very seamless.

What about realtime raytracing? How will that affect VFX and the way you work?
I think we have pretty good raytracing compared to what we had two years ago. I think it’s a question of performance, and of making it user-friendly in the application so it’s easy to light with natural lighting and you don’t have to fake the bounces to get two or three of them. I think it’s coming along very well and quickly.

Sharp Objects

So what about things like AI/ML or AR/VR? Have those things changed anything in the way movies and TV shows are being made?
My feeling right now is that we are getting into an era where I don’t think you’ll have enough visual effects companies to cover the demand.

Every show has visual effects. It can be a complete character, like a Transformer, or a movie from the Marvel Universe where the entire film is CG. Or it can be the huge number of invisible effects that are starting to appear in virtually every show. You need capacity to get all this done.

AI can help minimize repetition so artists can work more on the art and what is being created. This will accelerate things and give us the capacity to respond to what’s being demanded of us. Clients want a faster, cheaper product, and they want the quality to be as high as a movie’s.

The only scenario where we are looking at using AR is when we are filming. For example, you need a good camera track in real time, and then you want to be able to quickly add a CGI environment around the actors so the director can make the right decisions about the background or the interactive characters in the scene. The actors will not see it unless they have a monitor or a pair of glasses or something that can show them the result.

So AR is a tool to be able to make faster decisions when you’re on set shooting. This is what we’ve been working on for a long time: bringing post production and preproduction together. To have an engineering department who designs and conceptualizes and creates everything that needs to be done before shooting.

The Uncanny Valley. Where are we now?
In terms of the environment, I think we’re pretty much there. We can create an environment that nobody will know is fake. Respectfully, I think our company Real by Fake is pretty good at doing it.

In terms of characters, I think we’re still not there. I think the game industry is helping a lot to push this. I think we’re on the verge of having characters look as close as possible to live actors, but if you’re in a closeup, it still feels fake. For mid-ground and long shots, it’s fine. You can make sure nobody will know. But I don’t think we’ve crossed the valley just yet.

Can you name some recent projects?
Big Little Lies and Sharp Objects for HBO, Black Summer for Netflix and Brian Banks, an indie feature.

 

Jeremy Smith, CTO, Jellyfish Pictures
Jellyfish Pictures provides a range of services including VFX for feature film, high-end TV and episodic animated kids’ TV series and visual development for projects spanning multiple genres.

What film or show inspired you to work in VFX?
Forrest Gump really opened my eyes to how VFX could support filmmaking. Seeing Tom Hanks interact with historic footage (e.g., John F. Kennedy) was something that really grabbed my attention, and I remember thinking, “Wow … that is really cool.”

What trends have you been seeing? What do you feel is important?
The use of cloud technology is really empowering “digital transformation” within the animation and VFX industry. The result of this is that there are new opportunities that simply wouldn’t have been possible otherwise.

Jellyfish Pictures uses burst rendering into the cloud, extending our capacity and enabling us to take on more work. In addition to cloud rendering, Jellyfish Pictures was an early adopter of virtual workstations, and, especially after Siggraph this year, it is apparent that this is the future for VFX and animation.

Virtual workstations promote a flexible and scalable way of working, with global reach for talent. This is incredibly important for studios to remain competitive in today’s market. As well as the cloud, formats such as USD are making it easier to exchange data with others, which allows us to work in a more collaborative environment.

It’s important for the industry to pay attention to these, and similar, trends, as they will have a massive impact on how productions are carried out going forward.

Are game engines affecting how you work or how you will work in the future?
Game engines are offering ways to enhance certain parts of the workflow. We see a lot of value in the previz stage of the production. This allows artists to iterate very quickly and helps move shots onto the next stage of production.

What about realtime raytracing? How will that affect VFX and the way you work?
The realtime raytracing from Nvidia (as well as GPU compute in general) offers artists a new way to iterate and help create content. However, with recent advancements in CPU compute, we can see that “traditional” workloads aren’t going to be displaced. The RTX solution is another tool that can be used to assist in the creation of content.

How have AR/VR and ML/AI affected your workflows, if at all?
Machine learning has the power to really assist certain workloads. For example, it’s possible to use machine learning to assist a video editor by cataloging speech in a certain clip. When a director says, “find the spot where the actor says ‘X,’” we can go directly to that point in time on the timeline.
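As a rough illustration of the idea (not necessarily how Jellyfish implements it), the sketch below uses the open-source openai-whisper package, an assumption chosen for the example, to transcribe a clip and return the timestamps of any segment containing a phrase; the clip name and phrase are placeholders.

```python
# Minimal sketch: index a clip's dialogue so an editor can jump to a spoken line.
import whisper  # pip install openai-whisper; requires ffmpeg on the system

def find_line(media_path, phrase, model_name="base"):
    """Return (start, end, text) for transcript segments containing `phrase`."""
    model = whisper.load_model(model_name)
    result = model.transcribe(media_path)
    return [
        (seg["start"], seg["end"], seg["text"].strip())
        for seg in result["segments"]
        if phrase.lower() in seg["text"].lower()
    ]

# Example: jump to every take where the actor says the line.
for start, end, text in find_line("scene_12_take_03.mov", "I'm not leaving"):
    print(f"{start:8.2f}s - {end:8.2f}s  {text}")
```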

In addition, ML can be used to mine existing file servers that contain vast amounts of unstructured data. When mining this “dark data,” an organization may find a lot of great additional value in the existing content, which machine learning can uncover.

The Uncanny Valley. Where are we now?
With recent advancements in technology, the Uncanny Valley is closing; however, it is still there. We see more digital humans in cinema than ever before (a digital Peter Cushing played a main character in Rogue One: A Star Wars Story), and I fully expect to see more advances as time goes on.

Can you name some recent projects?
Our latest credits include Solo: A Star Wars Story, Captive State, The Innocents, Black Mirror, Dennis & Gnasher: Unleashed! and Floogals Seasons 1 through 3.

 

Andy Brown, creative director, Jogger 
Jogger Studios is a boutique visual effects studio with offices in London, New York and LA. With capabilities in color grading, compositing and animation, Jogger works on a variety of projects, from TV commercials and music videos to projections for live concerts.

What inspired you to work in VFX?
First of all, my sixth form English project was writing treatments for music videos to songs that I really liked. You could do anything you wanted to for this project, and I wanted to create pictures using words. I never actually made any of them, but it planted the seed of working with visual images. Soon after that I went to university in Birmingham in the UK. I studied communications and cultural studies there, and as part of the course, we visited the BBC Studios at Pebble Mill. We visited one of the new edit suites, where they were putting together a story on the inquiry into the Handsworth riots in Birmingham. It struck me how these two people, the journalist and the editor, could shape the story and tell it however they saw fit. That’s what got me interested on a critical level in the editorial process. The practical interest in putting pictures together developed from that experience and all the opportunities that opened up when I started work at MPC after leaving university.

What trends have you been seeing? What do you feel is important?
Remote workstations and cloud rendering are all really interesting. It’s giving us more opportunities to work with clients across the world using our resources in LA, SF, Austin, NYC and London. I love the concept of a centralized remote machine room that runs all of your software for all of your offices and allows you scaled rendering in an efficient and seamless manner. The key part of that sentence is seamless. We’re doing remote grading and editing across our offices so we can share resources and personnel, giving the clients the best experience that we can without the carbon footprint.

Are game engines affecting how you work or how you will work in the future?
Game engines are having a tremendous effect on the entire media and entertainment industry, from conception to delivery. Walking around Siggraph last month, seeing what was not only possible but practical and available today using gaming engines, was fascinating. It’s hard to predict industry trends, but the technology felt like it will change everything. The possibilities on set look great, too, so I’m sure it will mean a merging of production and post production in many instances.

What about realtime raytracing? How will that affect VFX and the way you work?
Faster workflows and less time waiting for something to render have got to be good news. It gives you more time to experiment and refine things.

Chico for Wendy’s

How have AR/VR or ML/AI affected your workflows, if at all?
Machine learning is making its way into new software releases, and the tools are useful. Anything that makes it easier to get where you need to go on a shot is welcome. AR, not so much. I viewed the new Mac Pro sitting on my kitchen work surface through my phone the other day, but it didn’t make me want to buy it any more or less. It feels more like something that we can take technology from rather than something that I want to see in my work.

I’d like 3D camera tracking and facial tracking to be realtime on my box, for example. That would be a huge time-saver in set extensions and beauty work. Anything that makes getting a perfect key easier would also be great.

The Uncanny Valley. Where are we now?
It always used to be “Don’t believe anything you read.” Now it’s, “Don’t believe anything you see.” I used to struggle to see the point of an artificial human, except for resurrecting dead actors, but now I realize the ultimate aim is suppression of the human race and the destruction of democracy by multimillionaire despots and their robot underlings.

Can you name some recent projects?
I’ve started prepping for the apocalypse, so it’s hard to remember individual jobs, but there’s been the usual kind of stuff — beauty, set extensions, fast food, Muppets, greenscreen, squirrels, adding logos, removing logos, titles, grading, finishing, versioning, removing rigs, Frankensteining, animating, removing weeds, cleaning runways, making tenders into wings, split screens, roto, grading, polishing cars, removing camera reflections, stabilizing, tracking, adding seatbelts, moving seatbelts, adding photos, removing pictures and building petrol stations. You know, the usual.

 

James David Hattin, founder/creative director, VFX Legion 
Based in Burbank and British Columbia, VFX Legion specializes in providing episodic shows and feature films with an efficient approach to creating high-quality visual effects.

What film or show inspired you to work in VFX?
Star Wars was my ultimate source of inspiration for doing visual effects. Many of the effects in the movies didn’t make sense to me as a six-year-old, but I knew that this was the next best thing to magic. Visual effects create a wondrous world where everyday people can become superheroes, leaders of a resistance or rulers of a 5th-century dynasty. Watching X-wings fly over the surface of a space station the size of a small moon was exquisite. I also learned, much later on, that the visual effects we couldn’t see were as important as the ones we could.

I had already been steeped in visual effects with Star Trek — phasers, spaceships and futuristic transporters. Models hung from wires on a moon base convinced me that we could survive on the moon as it broke free from orbit. All of this fueled my budding imagination. Exploring computer technology and creating alternate realities, CGI and digitally enhanced solutions have been my passion for over a quarter of a century.

What trends have you been seeing? What do you feel is important?
More and more of the work is going to happen inside a cloud structure. That is definitely something being pushed very heavily by the tech giants, like Google and Amazon, that rule our world. There is no Moore’s law for computers anymore; the price and power we see out of computers are almost plateauing. The technology is now in the world of optimizing algorithms or rendering with video cards. It’s about getting bigger, better effects out more efficiently. Some companies are opting to run their entire operations in the cloud or in co-located server locations. This can theoretically free up workers to be in different locations around the world, provided they have solid, low-latency, high-speed internet.

When Legion was founded in 2013, the best way around cloud costs was to have on-premises servers and workstations that supported global connectivity. It was a cost control issue that has benefitted the company to this day, enabling us to bring a global collective of artists and clients into our fold in a controlled and secure way. Legion works in what we consider a “private cloud,” eschewing the costs of egress from large providers and working directly with on-premises solutions.

Are game engines affecting how you work or how you will work in the future?
Game engines are perfect for previsualization in large, involved scenes. We create a lot of environments and invisible effects. For the larger bluescreen shoots, we can build out our sets in the Unreal engine, previsualizing how the scene will play for the director or DP. This helps get everyone on the same page when it comes to how a particular sequence is going to be filmed. It’s a technique that also helps the CG team focus on adding details to the areas of a set that we know will be seen. When the schedule is tight, the assets are camera-ready by the time the cut comes to us.

What about realtime raytracing via Nvidia’s RTX? How will that affect VFX and the way you work?
The type of visual effects that we create for feature films and television shows involves a lot of layers and technology that provides efficient, comprehensive compositing solutions. Many of the video card rendering engines, like OctaneRender, Redshift and V-Ray RT, are limited when it comes to what they can create with layers. They often have issues with getting what is called a “back to beauty,” in which the sum of the render passes equals the final render. However, the workarounds we’ve developed enable us to achieve the quality we need. Realtime raytracing is a fantastic technology that will someday be an ideal fit for our needs. We’re keeping an eye out for it as it evolves and becomes more robust.
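For readers unfamiliar with the term, a “back to beauty” check simply verifies that the light AOVs sum to the beauty render. Here is a minimal sketch with assumed pass names and tolerance, using toy arrays in place of real EXR layers; it is not VFX Legion’s actual pipeline code.

```python
import numpy as np

def back_to_beauty(aovs, beauty, tolerance=1e-3):
    """Return (matches, max_error): does the sum of the light AOVs rebuild the beauty pass?"""
    rebuilt = np.zeros_like(beauty)
    for name in ("diffuse", "specular", "transmission", "sss", "emission"):
        if name in aovs:
            rebuilt += aovs[name]
    max_error = float(np.max(np.abs(rebuilt - beauty)))
    return max_error <= tolerance, max_error

# Toy example with random arrays standing in for real render passes.
rng = np.random.default_rng(0)
passes = {n: rng.random((4, 4, 3)).astype(np.float32) for n in ("diffuse", "specular")}
beauty = passes["diffuse"] + passes["specular"]
print(back_to_beauty(passes, beauty))  # (True, ~0.0)
```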

How have AR/VR or ML/AI affected your workflows, if at all?
AR has been in the wings of the industry for a while, but there’s nothing specific we would take advantage of. Machine learning has been introduced a number of times to solve various problems. It’s a pretty exciting time for these things. One of our partner contacts, who left to join Facebook, was keen to try a number of machine learning tricks for a couple of projects that might have come through, but we didn’t get to put them to the test. There’s an enormous amount of power to be had in machine learning, and I think we are going to see big changes in that field over the next five years, and in how it affects all of post production.

The Uncanny Valley. Where are we now?
Climbing up the other side, not quite at the summit for daily use. As long as the character isn’t a full normal human, it’s almost indistinguishable from reality.

Can you name some recent projects?
We create visual effects on an ongoing basis for a variety of television shows that include How to Get Away with Murder, DC’s Legends of Tomorrow, Madam Secretary and The Food That Built America. Our team is also called upon to craft VFX for a mix of movies, from the groundbreaking feature film Hardcore Henry to recently released films such as Ma, SuperFly and After.

MAIN IMAGE: Good Morning Football via Chapeau Studios.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Whiskytree experiences growth, upgrades tools

Visual effects and content creation company Whiskytree has gone through a growth spurt that included a substantial increase in staff, a new physical space and new infrastructure.

Providing content for films, television, the Web, apps, games and VR or AR, Whiskytree’s team of artists, designers and technicians use applications such as Autodesk Maya, Side Effects Houdini, Autodesk Arnold, Gaffer and Foundry Nuke on Linux — along with custom tools — to create computer graphics and visual effects.

To help manage its growth and the increase in data that came with it, Whiskytree recently installed Panasas ActiveStor. The platform is used to store and manage Whiskytree’s computer graphics and visual effects workflows, including data-intensive rendering and realtime collaboration using extremely large data sets for movies, commercials and advertising; work for realtime render engines and games; and augmented reality and virtual reality applications.

“We recently tripled our employee count in a single month while simultaneously finalizing the build-out of our new facility and network infrastructure, all while working on a 700-shot feature film project [The Captain],” says Jonathan Harb, chief executive officer and owner of Whiskytree. “Panasas not only delivered the scalable performance that we required during this critical period, but also delivered a high level of support and expertise. This allowed us to add artists at the rapid pace we needed with an easy-to-work-with solution that didn’t require fine-tuning to maintain and improve our workflow and capacity in an uninterrupted fashion. We literally moved from our old location on a Friday, then began work in our new facility the following Monday morning, with no production downtime. The company’s ‘set it and forget it’ appliance resulted in overall smooth operations, even under the trying circumstances.”

In the past, Whiskytree operated a multi-vendor storage solution that was complex and time consuming to administer, modify and troubleshoot. With the office relocation and rapid team expansion, Whiskytree didn’t have time to build a new custom solution or spend a lot of time tuning. It also needed storage that would grow as project and facility needs change.

Projects from the studio include Thor: Ragnarok, Monster Hunt 2, Bolden, Mother, Star Wars: The Last Jedi, Downsizing, Warcraft and Rogue One: A Star Wars Story.

Tips from a Flame Artist: things to do before embarking on a VFX project

By Andy Brown

I’m creative director and Flame artist at Jogger Studios in Los Angeles. We are a VFX and finishing studio and sister company to Cut+Run, which has offices in LA, New York, London, San Francisco and Austin. As an experienced visual effects artist, I’ve seen a lot in my time in the industry, and not just what ends up on the screen. I’m also an Englishman living in LA.

I was asked to put together some tips to help make your next project a little bit easier, and in the process, I remembered many things I had forgotten. I hope these tips help!

1) Talk to production.

2) Trust your producers.

3) Don’t assume anyone (including you) knows anything.

4) Forget about the money; it’s not your job. Well, it’s kind of your job, but in the context of doing the work, it’s not.

5) Read everything that you’ve been sent, then read it again. Make sure you actually understand what is being asked of you.

6) Make a list of questions that cover any uncertainty you might have about any aspect of the project you’re bidding for. Then ask those questions.

7) Ask production to talk to you if they have any questions. It’s better to get interrupted on your weekend off than for the client to ask her friend Bob, who makes videos for YouTube. To be fair to Bob, he might have a million subscribers, but Bob isn’t doing the job, so please, keep Bob out of it.

8) Remember that what the client thinks is “a small amount of cleanup” isn’t necessarily a small amount of cleanup.

9) Bring your experience to the table. Even if it’s your experience in how not to do things.

10) If you can do some tests, then do some tests. Not only will you learn something about how you’re going to approach the problem, but it will show your client that you’re engaged with the project.

11) Ask about the deliverables. How many aspect ratios? How many versions? Then factor in the slated, the unslated and the generics and take a deep breath.

12) Don’t believe that a lift (a cutdown edit) is a lift is a lift. It won’t be a lift.

13) Make sure you have enough hours in your bid for what you’re being asked to do. The hours are more important than the money.

14) Attend the shoot. If you can’t attend the shoot, then send someone to the shoot … someone who knows about VFX. And don’t be afraid to pipe up on the shoot; that’s what you’re there for. Be prepared to make suggestions on set about little things that will make the VFX go more smoothly.

15) Give yourself time. Don’t get too frustrated if you haven’t got everything perfect on the first day.

16) Tackle things methodically.

17) Get organized.

18) Make a list.

19) Those last three were all the same thing, but that’s because it’s important.

20) Try to remember everyone’s names. Write them down. If you can’t remember, ask.

21) Sit up straight.

22) Be positive. You blew that already by being too English.

23) Remember we all want to get the best result that we can.

24) Forget about the money again. It’s not your job.

25) Work hard and don’t get pissed off if someone doesn’t like what you’ve done so far. You’ll get there. You always do.

26) Always send WIPs to the editor. Not only do they appreciate it, but they can add useful info along the way.

27) Double-check the audio.

28) Double-check for black lines at the edges of frame. There’s no cutoff anymore. Everything lives on the internet.

29) Check your spelling. Even if you spelled it right, it might be wrong. Colour. Realise. Etcetera. Etc.

Company 3 buys Sixteen19, offering full-service post in NYC

Company 3 has acquired Sixteen19, a creative editorial, production and post company based in New York City. The deal includes Sixteen19’s visual effects wing, PowerHouse VFX, and a mobile dailies operation with international reach.

The acquisition helps Company 3 further serve NYC’s booming post market for feature film and episodic TV. As part of the acquisition, industry veterans and Sixteen19 co-founders Jonathan Hoffman and Pete Conlin, along with their longtime collaborator, EVP of business development and strategy Alastair Binks, will join Company 3’s leadership team.

“With Sixteen19 under the Company 3 umbrella, we significantly expand what we bring to the production community, addressing a real unmet need in the industry,” says Company 3 president Stefan Sonnenfeld. “This infusion of talent and infrastructure will allow us to provide a complete suite of services for clients, from the start of production through the creative editing process to visual effects, final color, finishing and mastering. We’ve worked in tandem with Sixteen19 many times over the years, so we know that they have always provided strong client relationships, a best-in-class team and a deeply creative environment. We’re excited to bring that company’s vision into the fold at Company 3.”

Sonnenfeld will continue to serve as president of Company 3 and will oversee operations of Sixteen19. As a subsidiary of Deluxe, Company 3 is part of a broad portfolio of post services. Bringing together the complementary services and geographic reach of Company 3, Sixteen19 and PowerHouse VFX will expand Company 3’s overall portfolio of post offerings and help it reach new markets in the US and internationally.

Sixteen19’s New York location includes 60 large editorial suites, two 4K digital cinema grading theaters, and a number of comfortable open spaces and common areas. Its mobile dailies services are a natural companion to Company 3’s existing offerings in that arena. PowerHouse VFX brings dedicated teams of experienced supervisors, producers and artists in 2D and 3D visual effects and compositing.

“The New York film community initially recognized the potential for a Company 3 and Sixteen19 partnership,” says Sixteen19’s Hoffman. “It’s not just the fact that a significant majority of the projects we work on are finished at Company 3, it’s more that our fundamental vision about post has always been aligned with Stefan’s. We value innovation; we’ve built terrific creative teams; and above all else, we both put clients first, always.”

Sixteen19 and PowerHouse VFX will retain their company names.