
Category Archives: VFX

Reallusion’s Headshot plugin for realistic digi-doubles via AI

Reallusion has introduced a plugin for Character Creator 3 to help create realistic-looking digital doubles. According to the company, the Headshot plugin uses AI technology to automatically generate a digital human in minutes from one single photo, and those characters are fully rigged for voice lipsync, facial expression and full body animation.

Headshot allows game developers and virtual production teams to quickly funnel a cast of digital doubles into iClone, Unreal, Unity, Maya, ZBrush and more. The idea is to allow the digital humans to go anywhere they like and give creators a solution to rapidly develop, iterate and collaborate in realtime.

The plugin has two AI modes: Auto Mode and Pro Mode. Auto Mode is a one-click solution for creating mid-res digital human crowds, generating head and hair for realtime 3D head models in a single click. It also generates a separate 3D hair mesh with an alpha mask to soften edge lines. The 3D hair is fully compatible with Character Creator’s conformable hair format (.ccHair), so users can add the results to their hair library and apply them to other CC characters.

Headshot Pro Mode offers full control of the 3D head generation process, with advanced features such as Image Matching, Photo Reprojection and Custom Mask, and texture resolution up to 4,096 x 4,096.

The Image Matching Tool overlays an image reference plane for advanced head shape refinement and lens correction. With Photo Reprojection, users can easily fix the texture-to-mesh discrepancies resulting from face morph change.

Using high-rez source images and Headshot’s 1,000-plus morphs, users can get a scan-quality digital human face in 4K texture details. Additional textures include normal, AO, roughness, metallic, SSS and Micro Normal for more realistic digital human rendering.

The 3D Head Morph System is designed to achieve the professional, detailed look of 3D scan models. The sculpting design allows users to hover over a control area and use directional mouse drags to adjust the corresponding mesh shape — from full head and face sculpting to individual features such as head contour, face, eyes, nose, mouth and ears — using more than 1,000 head morphs. It is free with the purchase of the Headshot plugin.

The Headshot plugin for Character Creator costs $199 and comes with the Headshot Morph 1,000+ content pack ($99). Character Creator 3 Pipeline costs $199.

Redshift integrates Cinema 4D noises, nodes and more

Maxon and Redshift Rendering Technologies have released Redshift 3.0.12, which has native support for Cinema 4D noises and deeper integration with Cinema 4D, including the option to define materials using Cinema 4D’s native node-based material system.

Cinema 4D noise effects have been in demand within other 3D software packages because of their flexibility, efficiency and look. Native support in Redshift means that users of other DCC applications can now access Cinema 4D noises by using Redshift as their rendering solution. Procedural noise allows artists to easily add surface detail and randomness to otherwise perfect surfaces. Cinema 4D offers 32 different types of noise and countless variations based on settings. Native support for Cinema 4D noises means Redshift can preserve GPU memory while delivering high-quality rendered results.

Redshift 3.0.12 provides content creators deeper integration of Redshift within Cinema 4D. Redshift materials can now be defined using Cinema 4D’s nodal material framework, introduced in Release 20. As well, Redshift materials can use the Node Space system introduced in Release 21, which combines the native nodes of multiple render engines into a single material. Redshift is the first to take advantage of the new API in Cinema 4D to implement its own Node Spaces. Users can now also use any Cinema 4D view panel as a Redshift IPR (interactive preview render) window, making it easier to work within compact layouts and interact with a scene while developing materials and lighting.

Redshift 3.0.12 is immediately available from the Redshift website.

Maxon acquired Redshift in April 2019.


Framestore VFX will open in Mumbai in 2020

Oscar-winning creative studio Framestore will open a full-service visual effects studio in Mumbai in 2020 to target India’s booming creative industry. The studio will be located in the Nesco IT Park in Goregaon, in the center of Mumbai’s technology district. The news underscores Framestore’s continued interest in India, following its major investment in Jesh Krishna Murthy’s VFX studio, Anibrain, in 2017.

“Mumbai represents a rolling of wheels that were set in motion over two years ago,” says Framestore founder/CEO William Sargent. “Our investment in Anibrain has grown considerably, and we continue in our partnership with Jesh Krishna Murthy to develop and grow that business. Indeed, they will become a valued production partner to our Mumbai offering.”

Framestore plans to hire extensively in the coming months, aiming to build an initial 500-strong team that combines existing Framestore talent with the best of local Indian expertise. The Mumbai studio will work alongside the global network, including London and Montreal, as a cohesive virtual team delivering high-quality international work.

“Mumbai has become a center of excellence in digital filmmaking. There’s a depth of talent that can deliver to the scale of Hollywood with the color and flair of Bollywood,” Sargent continues. “It’s an incredibly vibrant city and its presence on the international scene is holding us all to a higher standard. In terms of visual effects, we will set the standard here as we did in Montreal almost eight years ago.”

 


London’s Freefolk beefs up VFX team

Soho-based visual effects studio Freefolk, which has seen growth in its commercials and longform work, has grown its staff to meet this demand. As part of the uptick in work, Freefolk promoted Cheryl Payne from senior producer to head of commercial production. Additionally, Laura Rickets has joined as senior producer, and 2D artist Bradley Cocksedge has been added to the commercials VFX team.

Payne, who has been with Freefolk since the early days, has worked on some of the studio’s biggest commercials, including Warburtons for Engine, Peloton for Dark Horses and Cadbury for VCCP.

Rickets comes to Freefolk with over 18 years of production experience at some of the biggest VFX houses in London, including Framestore, The Mill and Smoke & Mirrors, as well as agency-side for McCann. Since joining the team, Rickets has VFX-produced work on the I’m A Celebrity IDs — a set of seven technically challenging and CG-heavy spots for the new series of the show — as well as ads for the Rugby World Cup and Who Wants to Be a Millionaire?

Cocksedge is a recent graduate who joins from Framestore, where he was working as an intern on Fantastic Beasts: The Crimes of Grindelwald. While in school at the University of Hertfordshire, he interned at Freefolk and is happy to be back in a full-time position.

“We’ve had an exciting year and have worked on some really stand-out commercials, like TransPennine for Engine and the beautiful spot for The Guardian we completed with Uncommon, so we felt it was time to add to the Freefolk family,” says Fi Kilroe, Freefolk’s co-managing director/executive producer.

Main Image: (L-R) Cheryl Payne, Laura Rickets and Bradley Cocksedge


Behind the Title: MPC’s CD Morten Vinther

This creative director/director still jumps on the Flame and also edits from time to time. “I love mixing it up and doing different things,” he says.

NAME: Morten Vinther

COMPANY: Moving Picture Company, Los Angeles

CAN YOU DESCRIBE YOUR COMPANY?
From original ideas all the way through to finished production, we are an eclectic mix of hard-working and passionate artists, technologists and creatives who push the boundaries of what’s possible for our clients. We aim to move the audience through our work.

WHAT’S YOUR JOB TITLE?
Creative Director and Director

WHAT DOES THAT ENTAIL?
I guide our clients through challenging shoots and post. I try to keep us honest in terms of making sure that our casting is right and the team is looked after and has the appropriate resources available for the tasks ahead, while ensuring that we go above and beyond on quality and experience. In addition to this, I direct projects, pitch on new business and develop methodology for visual effects.

American Horror Story

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I still occasionally jump on Flame and comp a job — right now I’m editing a commercial. I love mixing it up and doing different things.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Writing treatments. The moments where everything is crystal clear in your head and great ideas and concepts are rushing onto paper like an unstoppable torrent.

WHAT’S YOUR LEAST FAVORITE?
Writing treatments. Staring at a blank page, writing something and realizing how contrived it sounds before angrily deleting everything.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
Early mornings. A good night’s sleep and freshly ground coffee creates a fertile breeding ground for pure clarity, ideas and opportunities.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I would be carefully malting barley for my next small batch of artisan whisky somewhere on the Scottish west coast.

Adidas Creators

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I remember making a spoof commercial at my school when I was about 13 years old. I became obsessed with operating cameras and editing, and I began to study filmmakers like Scorsese and Kubrick. After a failed career as a shopkeeper, a documentary production company in Copenhagen took mercy on me, and I started as an assistant editor.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
American Horror Story, Apple Unlock, directed by Dougal Wilson, and Adidas Creators, directed by Stacy Wall.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
If I had to single one out, it would probably be Apple’s Unlock commercial. The spot looks amazing, and the team was incredibly creative on this one. We enjoyed a great collaboration between several of our offices, and it was a lot of fun putting it together.

Apple’s Unlock

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My phone, laptop and PlayStation.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Some say social media rots your brains. That’s probably why I’m an Instagram addict.

CARE TO SHARE YOUR FAVORITE MUSIC TO WORK TO?
Odesza, SBTRKT, Little Dragon, Disclosure and classic reggae.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I recently bought a motorbike, and I spin around LA and Southern California most weekends. Concentrating on how to survive the next turn is a great way for me to clear the mind.


Director Robert Eggers talks about his psychological thriller The Lighthouse

By Iain Blair

Writer/director Robert Eggers burst onto the scene when his feature film debut, The Witch, won the Directing Award in the US Dramatic category at the 2015 Sundance Film Festival. He followed up that success by co-writing and directing another supernatural, hallucinatory horror film, The Lighthouse, which is set in the maritime world of the late 19th century.

L-R: Director Robert Eggers and cinematographer Jarin Blaschke on set.

The story begins when two lighthouse keepers (Willem Dafoe and Robert Pattinson) arrive on a remote island off the coast of New England for their month-long stay. But that stay gets extended as they’re trapped and isolated due to a seemingly never-ending storm. Soon, the two men engage in an escalating battle of wills, as tensions boil over and mysterious forces (which may or may not be real) loom all around them.

The Lighthouse has the power of an ancient myth. To tell this tale, which was shot in black and white, Eggers called on many of those who helped him create The Witch, including cinematographer Jarin Blaschke, production designer Craig Lathrop, composer Mark Korven and editor Louise Ford.

I recently talked to Eggers, who got his professional start directing and designing experimental and classical theater in New York City, about making the film, his love of horror and the post workflow.

Why does horror have such an enduring appeal?
My best argument is that there’s darkness in humanity, and we need to explore that. And horror is great at doing that, from the Gothic to a bad slasher movie. While I may prefer authors who explore the complexities in humanity, others may prefer schlocky films with jump scares that make you spill your popcorn, which still give them that dose of darkness. Those films may not be seriously probing the darkness, but they can relate to it.

This film seems more psychological than simple horror.
We’re talking about horror, but I’m not even sure that this is a horror film. I don’t mind the label, even though most wannabe auteurs are like, “I don’t like labels!” It started with an idea my brother Max had for a ghost story set in a lighthouse, which is not what this movie became. But I loved the idea, which was based on a true story. It immediately evoked a black and white movie on 35mm negative with a boxy aspect ratio of 1.19:1, like the old movies, and a fusty, dusty, rusty, musty atmosphere — the pipe smoke and all the facial hair — so I just needed a story that went along with all of that. (Laughs) We were also thinking a lot about influences and writers from the time — like Poe, Melville and Stevenson — and soaking up the jargon of the day. There were also influences like Prometheus and Proteus and God knows what else.

Casting the two leads was obviously crucial. What did Willem and Robert bring to their roles?
Absolute passion and commitment to the project and their roles. Who else but Willem can speak like a North Atlantic pirate stereotype and make it totally believable? Robert has this incredible intensity, and together they play so well against each other and are so well suited to this world. And they both have two of the best faces ever in cinema.

What were the main technical challenges in pulling it all together, and is it true you actually built the lighthouse?
We did. We built everything, including the 70-foot tower — a full-scale working lighthouse, along with its house and outbuildings — on Cape Forchu in Nova Scotia, which is this very dramatic outcropping of volcanic rock. Production designer Craig Lathrop and his team did an amazing job, and the reason we did that was because it gave us far more control than if we’d used a real lighthouse.

We scouted a lot but just couldn’t find one that suited us, and the few that did were far too remote to access. We needed road access and a place with the right weather, so in the end it was better to build it all. We also shot some of the interiors there as well, but most of them were built on soundstages and warehouses in Halifax since we knew it’d be very hard to shoot interiors and move the camera inside the lighthouse tower itself.

Your go-to DP, Jarin Blaschke, shot it. Talk about how you collaborated on the look and why you used black and white.
I love the look of black and white, because it’s both dreamlike and also more realistic than color in a way. It really suited both the story and the way we shot it, with the harsh landscape and a lot of close-ups of Willem and Robert. Jarin shot the film on the Panavision Millennium XL2, and we also used vintage Baltar lenses from the 1930s, which gave the film a great look, as they make the sea, water and sky all glow and shimmer more. He also used a custom cyan filter by Schneider Filters that gave us that really old-fashioned look. Then by using black and white, it kept the overall look very bleak at all times.

How tough was the shoot?
It was pretty tough, and all the rain and pounding wind you see onscreen is pretty much real. Even on the few sunny days we had, the wind was just relentless. The shoot was about 32 days, and we were out in the elements in March and April of last year, so it was freezing cold and very tough for the actors. It was very physically demanding.

Where did you post?
We did it all in New York at Harbor Post, with some additional ADR work at Goldcrest in London with Robert.

Do you like the post process?
I love post, and after the very challenging shoot, it was such a relief to just get in a warm, dry, dark room and start cutting and pulling it all together.

Talk about editing with Louise Ford, who also cut The Witch. How did that work?
She was with us on the shoot at a bed and breakfast, so I could check in with her at the end of the day. But it was so tough shooting that I usually waited until the weekends to get together and go over stuff. Then when we did the stage work at Halifax, she had an edit room set up there, and that was much easier.

What were the big editing challenges?
The DP and I developed such a specific and detailed cinema language without a ton of coverage and with little room for error that we painted ourselves into a corner. So that became the big challenge… when something didn’t work. It was also about getting the running time down but keeping the right pace since the performances dictate the pace of the edit. You can’t just shorten stuff arbitrarily. But we didn’t leave a lot of stuff on the cutting room floor. The assembly was just over two hours and the final film isn’t much shorter.

All the sound effects play a big role. Talk about the importance of sound and working on them with sound designer Damian Volpe, whose credits include Can You Ever Forgive Me?, Leave No Trace, Mudbound, Drive, Winter’s Bone and Margin Call.
It’s hugely important in this film, and Louise and I did a lot of work in the picture edit to create temps for Damian to inspire him. And he was so relentless in building up the sound design, and even creating weird sounds to go with the actual light, and to go with the score by Mark Korven, who did The Witch, and all the brass and unusual instrumentation he used on this. So the result is both experimental and also quite traditional, I think.

There are quite a few VFX shots. Who did them, and what was involved?
We had MELS and Oblique in Quebec, and Brainstorm Digital in New York also did some. The big one was that the movie’s set on an island, but we shot on a peninsula that also had a lighthouse further north, which unfortunately didn’t look at all correct, so we framed it out a lot, but we had to erase it for some of the time. And our period-correct sea ship broke down and had to be towed around by other ships, so there was a lot of cleanup — also with all the safety cables we had to use for cliff shots with the actors.

Where did you do the DI, and how important is it to you?
We did it at Harbor with colorist Joe Gawler, and it was hugely important although it was fairly simple because there’s very little latitude on the Double-X film stock we used. We did a lot of fine detail work to finesse it, but it was a lot quicker than if it’d been in color.

Did the film turn out the way you hoped?
No, they always change and surprise you, but I’m very proud of what we did.

What’s next?
I’m prepping another period piece, but it’s not a horror film. That’s all I can say.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.


Alkemy X adds Albert Mason as head of production

Albert Mason has joined VFX house Alkemy X as head of production. He comes to Alkemy X with over two decades of experience in visual effects and post production. He has worked on projects directed by such industry icons as Peter Jackson on the Lord of the Rings trilogy, Tim Burton on Alice in Wonderland and Robert Zemeckis on The Polar Express. In his new role at Alkemy X, he will use his experience in feature films to target the growing episodic space.

A large part of Alkemy X’s work has been for episodic visual effects, with credits that include Amazon Prime’s Emmy-winning original series, The Marvelous Mrs. Maisel, USA’s Mr. Robot, AMC’s Fear the Walking Dead, Netflix’s Maniac, NBC’s Blindspot and Starz’s Power.

Mason began his career at MTV’s on-air promos department, sharpening his production skills on top series promo campaigns and as a part of its newly launched MTV Animation Department. He took an opportunity to transition into VFX, stepping into a production role for Weta Digital and spending three years working globally on the Lord of the Rings trilogy. He then joined Sony Pictures Imageworks, where he contributed to features including Spider-Man 3 and Ghost Rider. He has also produced work for such top industry shops as Logan, Rising Sun Pictures and Greymatter VFX.

“[Albert’s] expertise in constructing advanced pipelines that embrace emerging technologies will be invaluable to our team as we continue to bolster our slate of VFX work,” says Alkemy X president/CEO Justin Wineburgh.


2019 HPA Award winners announced

The industry came together on November 21 in Los Angeles to celebrate its own at the 14th annual HPA Awards. Awards were given to individuals and teams working in 12 creative craft categories, recognizing outstanding contributions to color grading, sound, editing and visual effects for commercials, television and feature film.

Rob Legato receiving Lifetime Achievement Award from presenter Mike Kanfer. (Photo by Ryan Miller/Capture Imaging)

As was previously announced, renowned visual effects supervisor and creative Robert Legato, ASC, was honored with this year’s HPA Lifetime Achievement Award; Peter Jackson’s They Shall Not Grow Old was presented with the HPA Judges Award for Creativity and Innovation; acclaimed journalist Peter Caranicas was the recipient of the very first HPA Legacy Award; and special awards were presented for Engineering Excellence.

The winners of the 2019 HPA Awards are:

Outstanding Color Grading – Theatrical Feature

WINNER: “Spider-Man: Into the Spider-Verse”
Natasha Leonnet // Efilm

“First Man”
Natasha Leonnet // Efilm

“Roma”
Steven J. Scott // Technicolor

Natasha Leonnet (Photo by Ryan Miller/Capture Imaging)

“Green Book”
Walter Volpatto // FotoKem

“The Nutcracker and the Four Realms”
Tom Poole // Company 3

“Us”
Michael Hatzer // Technicolor

 

Outstanding Color Grading – Episodic or Non-theatrical Feature

WINNER: “Game of Thrones – Winterfell”
Joe Finley // Sim, Los Angeles

 “The Handmaid’s Tale – Liars”
Bill Ferwerda // Deluxe Toronto

“The Marvelous Mrs. Maisel – Vote for Kennedy, Vote for Kennedy”
Steven Bodner // Light Iron

“I Am the Night – Pilot”
Stefan Sonnenfeld // Company 3

“Gotham – Legend of the Dark Knight: The Trial of Jim Gordon”
Paul Westerbeck // Picture Shop

“The Man in The High Castle – Jahr Null”
Roy Vasich // Technicolor

 

Outstanding Color Grading – Commercial  

WINNER: Hennessy X.O. – “The Seven Worlds”
Stephen Nakamura // Company 3

Zara – “Woman Campaign Spring Summer 2019”
Tim Masick // Company 3

Tiffany & Co. – “Believe in Dreams: A Tiffany Holiday”
James Tillett // Moving Picture Company

Palms Casino – “Unstatus Quo”
Ricky Gausis // Moving Picture Company

Audi – “Cashew”
Tom Poole // Company 3

 

Outstanding Editing – Theatrical Feature

Once Upon a Time… in Hollywood

WINNER: “Once Upon a Time… in Hollywood”
Fred Raskin, ACE

“Green Book”
Patrick J. Don Vito, ACE

“Rolling Thunder Revue: A Bob Dylan Story by Martin Scorsese”
David Tedeschi, Damian Rodriguez

“The Other Side of the Wind”
Orson Welles, Bob Murawski, ACE

“A Star Is Born”
Jay Cassidy, ACE

 

Outstanding Editing – Episodic or Non-theatrical Feature (30 Minutes and Under)

VEEP

WINNER: “Veep – Pledge”
Roger Nygard, ACE

“Russian Doll – The Way Out”
Todd Downing

“Homecoming – Redwood”
Rosanne Tan, ACE

“Withorwithout”
Jake Shaver, Shannon Albrink // Therapy Studios

“Russian Doll – Ariadne”
Laura Weinberg

 

Outstanding Editing – Episodic or Non-theatrical Feature (Over 30 Minutes)

WINNER: “Stranger Things – Chapter Eight: The Battle of Starcourt”
Dean Zimmerman, ACE, Katheryn Naranjo

“Chernobyl – Vichnaya Pamyat”
Simon Smith, Jinx Godfrey // Sister Pictures

“Game of Thrones – The Iron Throne”
Katie Weiland, ACE

“Game of Thrones – The Long Night”
Tim Porter, ACE

“The Bodyguard – Episode One”
Steve Singleton

 

Outstanding Sound – Theatrical Feature

WINNER: “Godzilla: King of the Monsters”
Tim LeBlanc, Tom Ozanich, MPSE // Warner Bros.
Erik Aadahl, MPSE, Nancy Nugent, MPSE, Jason W. Jennings // E Squared

“Shazam!”
Michael Keller, Kevin O’Connell // Warner Bros.
Bill R. Dean, MPSE, Erick Ocampo, Kelly Oxford, MPSE // Technicolor

“Smallfoot”
Michael Babcock, David E. Fluhr, CAS, Jeff Sawyer, Chris Diebold, Harrison Meyle // Warner Bros.

“Roma”
Skip Lievsay, Sergio Diaz, Craig Henighan, Carlos Honc, Ruy Garcia, MPSE, Caleb Townsend

“Aquaman”
Tim LeBlanc // Warner Bros.
Peter Brown, Joe Dzuban, Stephen P. Robinson, MPSE, Eliot Connors, MPSE // Formosa Group

 

Outstanding Sound – Episodic or Non-theatrical Feature

WINNER: “The Haunting of Hill House – Two Storms”
Trevor Gates, MPSE, Jason Dotts, Jonathan Wales, Paul Knox, Walter Spencer // Formosa Group

“Chernobyl – 1:23:45”
Stefan Henrix, Stuart Hilliker, Joe Beal, Michael Maroussas, Harry Barnes // Boom Post

“Deadwood: The Movie”
John W. Cook II, Bill Freesh, Mandell Winter, MPSE, Daniel Colman, MPSE, Ben Cook, MPSE, Micha Liberman // NBC Universal

“Game of Thrones – The Bells”
Tim Kimmel, MPSE, Onnalee Blank, CAS, Mathew Waters, CAS, Paula Fairfield, David Klotz

“Homecoming – Protocol”
John W. Cook II, Bill Freesh, Kevin Buchholz, Jeff A. Pitts, Ben Zales, Polly McKinnon // NBC Universal

 

Outstanding Sound – Commercial 

WINNER: John Lewis & Partners – “Bohemian Rhapsody”
Mark Hills, Anthony Moore // Factory

Audi – “Life”
Doobie White // Therapy Studios

Leonard Cheshire Disability – “Together Unstoppable”
Mark Hills // Factory

New York Times – “The Truth Is Worth It: Fearlessness”
Aaron Reynolds // Wave Studios NY

John Lewis & Partners – “The Boy and the Piano”
Anthony Moore // Factory

 

Outstanding Visual Effects – Theatrical Feature

WINNER: “The Lion King”
Robert Legato
Andrew R. Jones
Adam Valdez, Elliot Newman, Audrey Ferrara // MPC Film
Tom Peitzman // T&C Productions

“Avengers: Endgame”
Matt Aitken, Marvyn Young, Sidney Kombo-Kintombo, Sean Walker, David Conley // Weta Digital

“Spider-Man: Far From Home”
Alexis Wajsbrot, Sylvain Degrotte, Nathan McConnel, Stephen Kennedy, Jonathan Opgenhaffen // Framestore

“Alita: Battle Angel”
Eric Saindon, Michael Cozens, Dejan Momcilovic, Mark Haenga, Kevin Sherwood // Weta Digital

“Pokémon Detective Pikachu”
Jonathan Fawkner, Carlos Monzon, Gavin Mckenzie, Fabio Zangla, Dale Newton // Framestore

 

Outstanding Visual Effects – Episodic (Under 13 Episodes) or Non-theatrical Feature

Game of Thrones

WINNER: “Game of Thrones – The Bells”
Steve Kullback, Joe Bauer, Ted Rae
Mohsen Mousavi // Scanline
Thomas Schelesny // Image Engine

“Game of Thrones – The Long Night”
Martin Hill, Nicky Muir, Mike Perry, Mark Richardson, Darren Christie // Weta Digital

“The Umbrella Academy – The White Violin”
Everett Burrell, Misato Shinohara, Chris White, Jeff Campbell, Sebastien Bergeron

“The Man in the High Castle – Jahr Null”
Lawson Deming, Cory Jamieson, Casi Blume, Nick Chamberlain, William Parker, Saber Jlassi, Chris Parks // Barnstorm VFX

“Chernobyl – 1:23:45”
Lindsay McFarlane
Max Dennison, Clare Cheetham, Steven Godfrey, Luke Letkey // DNEG

 

Outstanding Visual Effects – Episodic (Over 13 Episodes)

Team from The Orville – Outstanding VFX, Episodic, Over 13 Episodes (Photo by Ryan Miller/Capture Imaging)

WINNER: “The Orville – Identity: Part II”
Tommy Tran, Kevin Lingenfelser, Joseph Vincent Pike // FuseFX
Brandon Fayette, Brooke Noska // Twentieth Century FOX TV

“Hawaii Five-O – Ke iho mai nei ko luna”
Thomas Connors, Anthony Davis, Chad Schott, Gary Lopez, Adam Avitabile // Picture Shop

“9-1-1 – 7.1”
Jon Massey, Tony Pirzadeh, Brigitte Bourque, Gavin Whelan, Kwon Choi // FuseFX

“Star Trek: Discovery – Such Sweet Sorrow Part 2”
Jason Zimmerman, Ante Dekovic, Aleksandra Kochoska, Charles Collyer, Alexander Wood // CBS Television Studios

“The Flash – King Shark vs. Gorilla Grodd”
Armen V. Kevorkian, Joshua Spivack, Andranik Taranyan, Shirak Agresta, Jason Shulman // Encore VFX

The 2019 HPA Engineering Excellence Awards were presented to:

Adobe – Content-Aware Fill for Video in Adobe After Effects

Epic Games — Unreal Engine 4

Pixelworks — TrueCut Motion

Portrait Displays and LG Electronics — CalMan LUT based Auto-Calibration Integration with LG OLED TVs

Honorable Mentions were awarded to Ambidio for Ambidio Looking Glass; Grass Valley for Creative Grading; and Netflix for Photon.


Creating With Cloud: A VFX producer’s perspective

By Chris Del Conte

The ‘90s was an explosive era for visual effects, with films like Jurassic Park, Independence Day, Titanic and The Matrix shattering box office records and inspiring a generation of artists and filmmakers, myself included. I got my start in VFX working on seaQuest DSV, an Amblin/NBC sci-fi series that was ground-breaking for its time, but looking at the VFX of modern films like Gemini Man, The Lion King and Ad Astra, it’s clear just how far the industry has come. A lot of that progress has been enabled by new technology and techniques, from the leap to fully digital filmmaking and emergence of advanced viewing formats like 3D, Ultra HD and HDR to the rebirth of VR and now the rise of cloud-based workflows.

In my nearly 25 years in VFX, I’ve worn a lot of hats, including VFX producer, head of production and business development manager. Each role involved overseeing many aspects of a production and, collectively, they’ve all shaped my perspective when it comes to how the cloud is transforming the entire creative process. Thanks to my role at AWS Thinkbox, I have a front-row seat to see why studios are looking at the cloud for content creation, how they are using the cloud, and how the cloud affects their work and client relationships.

Chris Del Conte on the set of the IMAX film Magnificent Desolation.

Why Cloud?
We’re in a climate of high content demand and massive industry flux. Studios are incentivized to find ways to take on more work, and that requires more resources — not just artists, but storage, workstations and render capacity. Driving a need to scale, this trend often motivates studios to consider the cloud for production or to strengthen their use of cloud in their pipelines if already in play. Cloud-enabled studios are much more agile than traditional shops. When opportunities arise, they can act quickly, spinning resources up and down at a moment’s notice. I realize that for some, the concept of the cloud is still a bit nebulous, which is why finding the right cloud partner is key. Every facility is different, and part of the benefit of cloud is resource customization. When studios use predominantly physical resources, they have to make decisions about storage and render capacity, electrical and cooling infrastructure, and staff accommodations up front (and pay for them). Using the cloud allows studios to adjust easily to better accommodate whatever the current situation requires.
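To make the “spin resources up and down at a moment’s notice” idea concrete, here is a minimal sketch using boto3 (the AWS SDK for Python) to launch a burst of render nodes and release them when the queue drains. The AMI ID, instance type, region and node counts are hypothetical placeholders, and in practice a render manager such as AWS Thinkbox Deadline would typically drive this automatically rather than a hand-rolled script.

```python
# Minimal sketch: elastic render capacity on EC2 (all IDs and sizes are placeholders).
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

def spin_up_render_nodes(count):
    """Launch a burst of render nodes from a pre-baked render-node image."""
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical render-node AMI
        InstanceType="c5.4xlarge",         # CPU-heavy type chosen for rendering (assumption)
        MinCount=count,
        MaxCount=count,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": "render-node"}],
        }],
    )
    return [inst["InstanceId"] for inst in response["Instances"]]

def spin_down_render_nodes(instance_ids):
    """Terminate the burst nodes so costs stop the moment the work is done."""
    ec2.terminate_instances(InstanceIds=instance_ids)

# Example: add 50 nodes for an overnight crunch, then release them in the morning.
nodes = spin_up_render_nodes(50)
# ... render jobs run against the expanded farm ...
spin_down_render_nodes(nodes)
```

The point of the sketch is the shape of the workflow — capacity is requested when an opportunity arises and handed back immediately afterward — rather than any particular instance choice.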

Artistic Impact
Advanced technology is great, but artists are by far a studio’s biggest asset; automated tools are helpful but won’t deliver those “wow moments” alone. Artists bring the creativity and talent to the table, then, in a perfect world, technology helps them realize their full potential. When artists are free of pipeline or workflow distractions, they can focus on creating. The positive effects spill over into nearly every aspect of production, which is especially true when cloud-based rendering is used. By scaling render resources via the cloud, artists aren’t limited by the capacity of their local machines. Since they don’t have to wait as long for shots to render, artists can iterate more fluidly. This boosts morale because the final results are closer to what artists envisioned, and it can improve work-life balance since artists don’t have to stick around late at night waiting for renders to finish. With faster render results, VFX supervisors also have more runway to make last-minute tweaks. Ultimately, cloud-based rendering enables a higher caliber of work and more satisfied artists.
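As a simple illustration of why elastic render capacity changes how artists iterate, the sketch below compares wall-clock turnaround for one shot on a single workstation versus a pool of cloud nodes; the frame counts and per-frame render times are made-up examples, not benchmarks.

```python
# Wall-clock turnaround for one shot: single box vs. an elastic render pool.
# All numbers are illustrative assumptions, not measurements.
frames = 240               # a 10-second shot at 24fps
minutes_per_frame = 12     # average render time per frame on one node

def turnaround_hours(node_count):
    """Frames render in parallel across nodes, so wall-clock time divides by pool size."""
    return frames * minutes_per_frame / node_count / 60

for nodes in (1, 20, 100):
    print(f"{nodes:3d} node(s): {turnaround_hours(nodes):5.1f} hours")
```

At roughly 48 hours on a single machine versus about half an hour on a 100-node pool, the difference is what turns overnight waits into same-session iterations.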

Budget Considerations
There are compelling arguments for shifting capital expenditures to operational expenditures with the cloud. New studios get the most value out of this model since they don’t have legacy infrastructure to accommodate. Cloud-based solutions level the playing field in this respect; it’s easier for small studios and freelancers to get started because there’s no significant up-front hardware investment. This is an area where we’ve seen rapid cloud adoption. Considering how fast technology changes, it seems ill-advised to limit a new studio’s capabilities to today’s hardware when the cloud provides constant access to the latest compute resources.

When a studio has been in business for decades and might have multiple locations with varying needs, its infrastructure is typically well established. Some studios may opt to wait until their existing hardware has fully depreciated before shifting resources to the cloud, while others dive in right away, with an eye on the bigger picture. Rendering is generally a budgetary item on project bids, but with local hardware, studios are working to recoup a sunk cost. Using the cloud, render compute can be part of a bid and becomes a negotiable item. Clients can determine the delivery timeline based on render budget, and the elasticity of cloud resources allows VFX studios to pick up more work. (Even the most meticulously planned productions can run into 911 issues ahead of delivery, and cloud-enabled studios have bandwidth to be the hero when clients are in dire straits.)
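To show how render compute can become a negotiable line item on a bid, here is a back-of-the-envelope sketch; every figure in it is a hypothetical example rate, not a real studio or AWS price.

```python
# Back-of-the-envelope cloud render budget for a bid (all figures are hypothetical).
shots = 120                     # shots in the sequence
frames_per_shot = 150           # roughly six seconds per shot at 24fps
core_hours_per_frame = 2.5      # averaged from test renders (assumption)
iterations = 3                  # expected versions per shot before final

instance_cores = 16             # cores per cloud render node (assumption)
price_per_instance_hour = 0.68  # example hourly rate, e.g. a spot price (assumption)

total_core_hours = shots * frames_per_shot * core_hours_per_frame * iterations
instance_hours = total_core_hours / instance_cores
render_line_item = instance_hours * price_per_instance_hour

print(f"Total core-hours: {total_core_hours:,.0f}")      # 135,000
print(f"Instance-hours:   {instance_hours:,.1f}")        # 8,437.5
print(f"Render line item: ${render_line_item:,.2f}")     # $5,737.50
```

Because the cost scales linearly with iterations and delivery speed scales with node count, the client can trade schedule against the render line item in a way that is hard to do when rendering is a sunk hardware cost.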

Looking Ahead
When I started in VFX, giant rooms filled with racks and racks of servers and hardware were the norm, and VFX studios were largely judged by the size of their infrastructure. I’ve heard from an industry colleague about how their VFX studio’s server room was so impressive that they used to give clients tours of the space, seemingly a visual reminder of the studio’s vast compute capabilities. Today, there wouldn’t be nearly as much to view. Modern technology is more powerful and compact but still requires space, and that space has to be properly equipped with the necessary electricity and cooling. With cloud, studios don’t need switchers and physical storage to be competitive off the bat, and they experience fewer infrastructure headaches, like losing freon in the AC.

The cloud also opens up the available artist talent pool. Studios can dedicate the majority of physical space to artists as opposed to machines and even hire artists in remote locations on a per-project or long-term basis. Facilities of all sizes are beginning to recognize that becoming cloud-enabled brings a significant competitive edge, allowing them to harness the power to render almost any client request. VFX producers will also start to view facility cloud-enablement as a risk management tool that allows control of any creative changes or artistic embellishments up until delivery, with the rendering output no longer a blocker or a limited resource.

Bottom line: Cloud transforms nearly every aspect of content creation into a near-infinite resource, whether storage capacity, render power or artistic talent.


Chris Del Conte is senior EC2 business development manager at AWS Thinkbox.

Motorola’s next-gen Razr gets a campaign for today

Many of us have fond memories of our Razr flip phone. At the time, it was the latest and greatest. Then new technology came along, and the smartphone era was born. Now Motorola is asking, “Why can’t you have both?”

Available as of November 13, the new Razr fits in a palm or pocket when shut and flips open to reveal an immersive, full-length touchscreen. When closed, it offers a smaller display called Quick View; when open, the larger Flex View — and the two displays are made to work together. Whatever you see on Quick View moves to the larger Flex View display when you flip the phone open.

In order to help tell this story, Motorola called on creative shop Los York to help relaunch the Razr. Los York created the new smartphone campaign to tap into the Razr’s original DNA and launch it for today’s user.

Los York developed a 360 campaign that included films, social, digital, TV, print and billboards, with visuals in stores and on devices (wallpapers, ringtones, startup screens). Los York treated the Razr as a luxury item and a piece of art, letting the device reveal itself unencumbered by taglines and copy. The campaign showcases the Razr as a futuristic, high-end “fashion accessory” that speaks to new industry conversations, such as advancing tech along a utopian or dystopian future.

The campaign features a mix of live action and CG. Los York shot on a Panavision DXL with Primo 70 lenses. CG was created using Maxon Cinema 4D with Redshift and composited in Adobe After Effects. The piece was edited in-house on Adobe Premiere.

We reached out to Los York CEO and founder Seth Epstein to find out more:

How much of this is live action versus CG?
The majority is CG; originally, the piece was intended to be entirely CG. Early in the creative process, we defined the world in which the new Razr existed and who would belong there. As we worked on the project, we kept feeling the pull to bring our characters to life in live action and blend the two worlds. The live action was envisioned after the fact, which is somewhat unusual.

What were some of the most challenging aspects of this piece?
The most challenging part was that the project happened over a period of nine months. The product release wisely needed to be pushed, and we continued to evolve the project over time, which is a blessing and a curse.

How did it feel taking on a product with a lot of history and then rebranding it for the modern day?
We felt the key was to relaunch an iconic product like the Razr with an eye to the future. The trap of launching anything iconic is falling back on the obvious retro throwback references, which can come across as too obvious. We dove into the original product and campaigns to extract the brand DNA of 2004 using archetype exercises. We tapped into the attitude and voice of the Razr at that time — and used that attitude as a starting point. We also wanted to look forward and stand three years in the future and imagine what the tone and campaign would be then. All of this is to say that we wanted the new Razr to extract the power of the past but also speak to audiences in a totally fresh and new way.

Check out the campaign here.

Blur Studio uses new AMD Threadripper for Terminator: Dark Fate VFX

By Dayna McCallum

AMD has announced new additions to its high-end desktop processor family. Built for demanding desktop and content creation workloads, the 24-core AMD Ryzen Threadripper 3960X and the 32-core AMD Ryzen Threadripper 3970X processors will be available worldwide November 25.

Tim Miller on the set of Dark Fate.

AMD states that the powerful new processors provide up to 90 percent more performance and up to 2.5 times more available storage bandwidth than competitive offerings, per testing and specifications by AMD performance labs. The 3rd Gen AMD Ryzen Threadripper lineup features two new processors built on 7nm “Zen 2” core architecture, claiming up to 88 PCIe 4.0 lanes and 144MB cache with 66 percent better power efficiency.

Prior to the official product launch, AMD made the 3rd Gen Threadrippers available to LA’s Blur Studio for work on the recent Terminator: Dark Fate and continued a collaboration with the film’s director — and Blur Studio founder — Tim Miller.

Before the movie’s release, AMD hosted a private Q&A with Miller, moderated by AMD’s James Knight. Please note that we’ve edited the lively conversation for space and taken a liberty with some of Miller’s more “colorful” language. (Also watch this space to see if a wager is won that will result in Miller sporting a new AMD tattoo.) Here is the Knight/Miller conversation…

So when we dropped off the 3rd Gen Threadripper to you guys, how did your IT guys react?
Like little children left in a candy shop with no adult supervision. The nice thing about our atmosphere here at Blur is we have an open layout. So when (bleep) like these new AMD processors drops in, you know it runs through the studio like wildfire, and I sit out there like everybody else does. You hear the guys talking about it, you hear people giggling and laughing hysterically at times on the second floor where all the compositors are. That’s where these machines really kick ass — busting through these comps that would have had to go to the farm, but they can now do it on a desktop.

James Knight

As an artist, the speed is crucial. You know, if you have a machine that takes 15 minutes to render, you want to stop and do something else while you wait for a render. It breaks your whole chain of thought. You get out of that fugue state that you produce the best art in. It breaks the chain between art and your brain. But if you have a machine that does it in 30 seconds, that’s not going to stop it.

But really, more speed means more iterations. It means you deal with heavier scenes, which means you can throw more detail at your models and your scenes. I don’t think we do the work faster, necessarily, but the work is much higher quality. And much more detailed. It’s like you create this vacuum, and then everybody rushes into it and you have this silly idea that it is really going to increase productivity, but what it really increases most is quality.

When your VFX supervisor showed you the difference between the way it was done with your existing ecosystem and then with the third-gen Threadripper, what were you thinking about?
There was the immediate thing — when we heard from the producers about the deadline, shots that weren’t going to get done for the trailer, suddenly were, which was great. More importantly, you heard from the artists. What you started to see was that it allows for all different ways of working, instead of just the elaborate pipeline that we’ve built up — to work on your local box and then submit it to the farm and wait for that render to hit the queue of farm machines that can handle it, then send that render back to you.

It has a rhythm that is at times tiresome for the artists, and I know that because I hear it all the time. Now I say, “How’s that comp coming and when are we going to get it, tick tock?” And they say, “Well, it’s rendering in the background right now, as I’m watching them work on another comp or another piece of that comp.” That’s pretty amazing. And they’re doing it all locally, which saves so much time and frustration compared to sending it down the pipeline and then waiting for it to come back up.

I know you guys are here to talk about technology, but the difference for the artists is that instead of working here until 1:00am, they’re going home to put their children to bed. That’s really what this means at the end of the day. Technology is so wonderful when it enables that — not just the creativity of what we do, but the humanity… allowing artists to feel like they’re really on the cutting edge, but also have a life of some sort outside.

Endoskeleton — Terminator: Dark Fate

As you noted, certain shots and sequences wouldn’t have made it in time for the trailer. How important was it for you to get that Terminator splitting in the trailer?
 Marketing was pretty adamant that that shot had to be in there. There’s always this push and pull between marketing and VFX as you get closer. They want certain shots for the trailer, but they’re almost always those shots that are the hardest to do because they have the most spectacle in them. And that’s one of the shots. The sequence was one of the last to come together because we changed the plan quite a bit, and I kept changing shots on Dan (Akers, VFX supervisor). But you tell marketing people that they can’t have something, and they don’t really give a (bleep) about you and your schedule or the path of that artist and shot. (Laughing)

Anyway, we said no. They begged, they pleaded, and we said, “We’ll try.” Dan stepped up and said, “Yeah, I think I can make it.” And we just made it, but that sounds like we were in danger because we couldn’t get it done fast enough. All of this was happening in like a two-day window. If you didn’t notice (in the trailer), that’s a Rev 7. Gabriel Luna is a Rev 9, which is the next gen. But the Rev 7s that you see in his future flashback are just pure killers. They’re still the same technology — looking like metal on the outside, with a carbon endoskeleton that splits. So you have to run the simulation where the skeleton separates through the liquid that hangs off of it in strings; it’s a really hard simulation to do. That’s why we thought maybe it wasn’t going to get done, but running the simulation on the AMD boxes was lightning fast.

 

 

 

Carbon New York grows with three industry vets

Carbon in New York has grown with two senior hires — executive producer Nick Haynes and head of CG Frank Grecco — and the relocation of existing ECD Liam Chapple, who joins from the Chicago office.

Chapple joined Carbon in 2016, moving from Mainframe in London to open Carbon’s Chicago facility.  He brought in clients such as Porsche, Lululemon, Jeep, McDonald’s, and Facebook. “I’ve always looked to the studios, designers and directors in New York as the high bar, and now I welcome the opportunity to pitch against them. There is an amazing pool of talent in New York, and the city’s energy is a magnet for artists and creatives of all ilk. I can’t wait to dive into this and look forward to expanding upon our amazing team of artists and really making an impression in such a competitive and creative market.”

Chapple recently wrapped direction and VFX on films for Teflon and American Express (Ogilvy) and multiple live-action projects for Lululemon. The most recent shoot, conceived and directed by Chapple, was a series of eight live-action films focusing on Lululemon’s brand ambassadors and its new flagship store in Chicago.

Haynes joins Carbon from his former role as EP of MPC, bringing over 20 years of experience earned at The Mill, MPC and Absolute. Haynes recently wrapped the launch film for the Google Pixel phone and the Chromebook, as well as an epic Middle Earth: Shadow of War Monolith Games trailer combining photo-real CGI elements with live-action shot on the frozen Black Sea in Ukraine.  “We want to be there at the inception of the creative and help steer it — ideally, lead it — and be there the whole way through the process, from concept and shoot to delivery. Over the years, whether working for the world’s most creative agencies or directly with prestigious clients like Google, Guinness and IBM, I aim to be as close to the project as possible from the outset, allowing my team to add genuine value that will garner the best result for everyone involved.”

Grecco joins Carbon from Method Studios, where he most recently led projects for Google, Target, Microsoft, Netflix and Marvel’s Deadpool 2.  With a wide range of experience from Emmy-nominated television title sequences to feature films and Super Bowl commercials, Grecco looks forward to helping Carbon continue to push its visuals beyond the high bar that has already been set.

In addition to New York and Chicago, Carbon has a studio in Los Angeles.

Main Image: (L-R) Frank Grecco, Liam Chapple, Nick Haynes

Behind the Title: Sarofsky EP Steven Anderson

This EP’s responsibilities run the gamut “from managing our production staff to treating clients to an amazing dinner.”

Company: Chicago’s Sarofsky

Can you describe your company?
We like to describe ourselves as a design-driven production company. I like to think of us as that but so much more. We can be a one-stop shop for everything from concept through finish, or we can partner with a variety of other companies and just be one piece of the puzzle. It’s like ordering from a Chinese menu — you get to pick what items you want.

What’s your job title, and what does the job entail?
I’m executive producer, and that means different things at different companies and industries. Here at Sarofsky, I am responsible for things that run the gamut from managing our production staff to treating clients to an amazing dinner.

Sarofsky

What would surprise people the most about what falls under that title?
I also run payroll, and I am damn good at it.

How has the VFX industry changed in the time you’ve been working?
It used to be that when you told someone, “This is going to take some time to execute,” that’s what it meant. But now, everyone wants everything two hours ago. On the flip side, the technology we now have access to has streamlined the production process and provided us with some terrific new tools.

Why do you like being on set for shoots? What are the benefits?
I always like being on set whenever I can because decisions are being made that are going to affect the rest of the production paradigm. It’s also a good opportunity to bond with clients and, sometimes, get some kick-ass homemade guacamole.

Did a particular film inspire you along this path in entertainment?
I have been around this business for quite a while, and one of the reasons I got into it was my love of film and filmmaking. I can’t say that one particular film inspired me to do this, but I remember being a young kid and my dad taking me to see The Towering Inferno in the movie theater. I was blown away.

What’s your favorite part of the job?
Choosing a spectacular bottle of wine for a favorite client and watching their face when they taste it. My least favorite has to be chasing down clients for past due invoices. It gets old very quickly.

What is your most productive time of the day?
It’s 6:30am with my first cup of coffee sitting at my kitchen counter before the day comes at me. I get a lot of good thinking and writing done in those early morning hours.

Original Bomb Pop via agency VMLY&R

If you didn’t have this job, what would you be doing instead?
I would own a combo bookstore/wine shop where people could come and enjoy two of my favorite things.

Why did you choose this profession?
I would say this profession chose me. I studied to be an actor and made my living at it for several years, but due to some family issues, I ended up taking a break for a few years. When I came back, I went for a job interview at FCB and the rest is history. I made the move from agency producing to post executive producer five years ago and have not looked back since.

Can you briefly explain one or more ways Sarofsky is addressing the issue of workplace diversity in its business?
We are a smallish women-owned business, and I am a gay man; diversity is part of our DNA. We always look out for the best talent but also try to ensure we are providing opportunities for people who may not have access to them. For example, one of our amazing summer interns came to us through a program called Kaleidoscope 4 Kids, and we all benefited from the experience.

Name some recent projects you have worked on, which are you most proud of, and why?
My first week here as EP, we went to LA for the friends and family screening of Guardians of the Galaxy, and I thought, what an amazing company I work for! Marvel Studios is a terrific production partner, and I would say there is something special about so many of our clients because they keep coming back. I do have a soft spot for our main title for Animal Kingdom just because I am a big Ellen Barkin fan.

Original Bomb Pop via agency VMLY&R

Name three pieces of technology you can’t live without.
I’d be remiss if I didn’t say my MacBook and iPhone, but I also wouldn’t want to live without my cooking thermometer, as I’ve learned how to make sourdough bread this year, and it’s essential.

What social media channels do you follow?
I am a big fan of Instagram; it’s just visual eye candy and provides a nice break during the day. I don’t really partake in much else unless you count NPR. They occupy most of my day.

Do you listen to music while you work? Care to share your favorite music to work to?
I go in waves. Sometimes I do but then I won’t listen to anything for weeks. But I recently enjoyed listening to “Ladies and Gentleman: The Best of George Michael.” It was great to listen to an entire album, a rare treat.

What do you do to de-stress from it all?
I get up early and either walk or do some type of exercise to set the tone for the day. It’s also so important to unplug; my partner and I love to travel, so we do that as often as we can. All that and a 2006 Chateau Margaux usually washes away the day in two delicious sips.

Filmmaker Hasraf “HaZ” Dulull talks masterclass on sci-fi filmmaking

By Randi Altman

Hasraf “HaZ” Dulull is a producer/director and a hands-on VFX and post pro. His most recent credits include the feature films 2036 Origin Unknown and The Beyond, the Disney TV series Fast Layne and the Disney Channel original movie Under the Sea — A Descendants Story, which takes place between Descendants 2 and 3. Recently, Dulull developed a masterclass on Sci-Fi Filmmaking, which can be bought or rented.

Why would this already very busy man decide to take on another project and one that is a little off his current path? Well, we reached out to find out.

Why, at this point in your career, did you think it was important to create this masterclass?
I have seen other masterclasses out there to do with filmmaking and they were always academic based, which turned me off. The best ones were the ones that were taught by actual filmmakers who had made commercial projects, films or TV shows… not just short films. So I knew that if I was to create and deliver a masterclass, I would do it after having made a couple of feature films that have been released out there in the world. I wanted to lead by example and experience.

When I was in LA explaining to studio people, executives and other filmmakers how I made my feature films, they were impressed and fascinated with my process. They were amazed that I was able to pull off high-concept sci-fi films on tight budgets and schedules but still produce a film that looked expensive to make.

When I was researching existing masterclasses or online courses as references, I found that no one was actually going through the entire process. Instead they were offering specialized training in either cinematography or VFX, but there wasn’t anything about how to break down a script and put a budget and schedule together; how to work with locations to make your film work; how to use visual effects smartly in production; how to prepare for marketing and delivering your film for distribution. None of these things were covered as a part of a general masterclass, so I set out to fill that void with my masterclass series.

Clearly this genre holds a special place in your heart. Can you talk about why?
I think it’s because the genre allows for so much creative freedom; sci-fi relies on world-building and imagination. That freedom leads to some “out of this world” storytelling and visuals, but on the flip side it may tempt the filmmaker to be too ambitious on a tight budget. This can lead to cheap-looking films because of the overambitious need to create amazing worlds. Not many filmmakers know how to do this in a fiscally sensible way, and they may try to make Star Wars on a shoestring budget. So this is why I decided to use the sci-fi genre in this masterclass to share my experience of smart filmmaking that achieves commercially successful results.

How did you decide on what topics to cover? What was your process?
I thought about the questions the people and studio executives were asking me when I was in those LA meetings, which pretty much boiled down to, “How did you put the movie together for that tight budget and schedule?” When answering that question, I ended up mapping out my process and the various stages and approaches I took in preproduction, production and post production, but also in the deliverables stage and marketing and distribution stage too. As an indie filmmaker, you really need to get a good grasp on that part to ensure your film is able to be released by the distributors and received commercially.

I also wanted each class/episode to vary in length and run no more than around 10 minutes (the longest one is around 12 minutes, and the shortest is three minutes). I went with a more bite-sized approach to make the experience snappy and fun yet in-depth, allowing viewers to really soak in the knowledge. It also allows for repeat viewing.

Why was it important to teach these classes yourself?
I wanted it to feel raw and personal when talking about my experience of putting two sci-fi feature films together. Plus I wanted to talk about the constant problem solving, which is what filmmaking is all about. Teaching the class myself allowed me to get this all out of my system in my voice and style to really connect with the audience intimately.

Can you talk about what the experience will be like for the student?
I want the students to be like flies on the wall throughout the classes — seeing how I put those sci-fi feature films together. By the end of the series, I want them to feel like they have been on an entire production, from receiving a script to the releasing of the movie. The aim was to inspire others to go out and make their film. Or to instill confidence in those who have fears of making their film, or for existing filmmakers to learn some new tips and tricks because in this industry we are always learning on each project.

Why the rental and purchase options? What have most people been choosing?
Before I released it, one of the big factors that kept me up at night was how to make this accessible and affordable for everyone. The idea of renting is for those who can’t afford to purchase it but would love to experience the course. They can do so at a cut-down price but can only view it within the 48-hour window. The purchase price is a little higher, but you get to access it as many times as you like. It’s pretty much the same model as iTunes when you rent or buy a movie.

So far I have found that people have been buying more than renting, which is great, as this means audiences want to do repeat viewings of the classes.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Review: Lenovo Yoga A940 all-in-one workstation

By Brady Betzel

While more and more creators are looking for alternatives to the iMac, iMac Pro and Mac Pro, there are few options with high-quality, built-in monitors: Microsoft Surface Studio, HP Envy, and Dell 7000 are a few. There are even fewer choices if you want touch and pen capabilities. It’s with that need in mind that I decided to review the Lenovo Yoga A940, a 27-inch, UHD, pen- and touch-capable Intel Core i7 computer with an AMD Radeon RX 560 GPU.

While I haven’t done a lot of all-in-one system reviews like the Yoga A940, I have had my eyes on the Microsoft Surface Studio 2 for a long time. The only problem is the hefty price tag of around $3,500. The Lenovo’s most appealing feature — in addition to the tech specs I will go over — is its price point: It’s available from $2,200 and up. (I saw Best Buy selling a similar system to the one I reviewed for around $2,299. The insides of the Yoga and the Surface Studio 2 aren’t that far off from each other either, at least not enough to make up for the $1,300 disparity.)

Here are the parts inside the Lenovo Yoga A940:
• Intel Core i7-8700 3.2GHz processor (up to 4.6GHz with Turbo Boost), six cores (12 threads) and 12MB cache
• 27-inch 4K UHD IPS multitouch display with 100% Adobe RGB coverage
• 16GB DDR4 2666MHz (SODIMM) memory
• 1TB 5400rpm hard drive plus 256GB PCIe SSD
• AMD Radeon RX 560 4GB graphics processor
• 25-degree monitor tilt angle
• Dolby Atmos speakers
• Dimensions: 25 inches by 18.3 inches by 9.6 inches
• Weight: 32.2 pounds
• 802.11ac and Bluetooth 4.2 connectivity
• Side panel inputs: Intel Thunderbolt, USB 3.1, 3-in-1 card reader and audio jack
• Rear panel inputs: AC-in, RJ45, HDMI and four USB 3.0
• Bluetooth active pen (appears to be the Lenovo Active Pen 2)
• Qi wireless charging platform

Digging In
Right off the bat, I just happened to put my Samsung Galaxy phone on the odd little flat platform located on the right side of the all-in-one workstation, just under the monitor, and I saw my phone begin to charge wirelessly. Qi wireless charging is an amazing little addition to the Yoga; it really comes through in a pinch when I need my phone charged and don’t have a cable or charging dock around.

Other than that nifty feature, why would you choose a Lenovo Yoga A940 over any other all-in-one system? Well, as mentioned, the price point is very attractive, but you are also getting a near-professional-level system in a very tiny footprint — including Thunderbolt 3 and USB connections, an HDMI port, a network port and an SD card reader. While it would be incredible to have an Intel i9 processor inside the Yoga, the i7 clocks in at 3.2GHz with six cores. It’s not a beast, but it’s enough to get the job done inside Adobe Premiere and Blackmagic’s DaVinci Resolve, though likely with transcoded files instead of Red raw or the like.

The Lenovo Yoga A940 is outfitted with front-facing Dolby Atmos audio speakers as well as Dolby Vision technology in the IPS display. The audio could use a little more low end, but it is good. The monitor is surprisingly great — the whites are white and the blacks are black, something not everyone gets right. It has 100% Adobe RGB color coverage and is Pantone-validated. The HDR is technically Dolby Vision and looks great at about 350 nits (not the brightest, but it won’t burn your eyes out either). The Lenovo BT active pen works well. I use Wacom tablets and laptop tablets daily, so this pen had a lot to live up to. While I still prefer the Wacom pen, the Lenovo pen, with 4,096 levels of sensitivity, will do just fine. I actually found myself using the touchscreen with my fingers way more than the pen.

One feature that sets the A940 apart from the other all-in-one machines is the USB Content Creation dial. With the little time I had with the system, I only used it to adjust speaker volume when playing Spotify, but in time I can see myself customizing the dials to work in Premiere and Resolve. The dial has good action and resistance. To customize the dial, you can jump into the Lenovo Dial Customization Assistant.

Besides the Intel i7, there is an AMD Radeon RX 560 with 4GB of memory, two 3W and two 5W speakers, 32GB of DDR4 2666MHz memory, a 1TB 5400rpm hard drive for storage and a 256GB PCIe SSD. I wish the 1TB drive were also an SSD, but obviously Lenovo has to keep that price point down somehow.

Real-World Testing
I use Premiere Pro, After Effects and Resolve all the time and can understand the horsepower of a machine through these apps. Whether editing and/or color correcting, the Lenovo A940 is a good medium ground — it won’t be running much more than 4K Red raw footage in real time without cutting the debayering quality down to half if not one-eighth. This system would make a good “offline” edit system, where you transcode your high-res media to a mezzanine codec like DNxHR or ProRes for your editing and then up-res your footage back to the highest resolution you have. Or, if you are in Resolve, maybe you could use optimized media for 80% of the workflow until you color. You will really want a system with a higher-end GPU if you want to fluidly cut and color in Premiere and Resolve. That being said, you can make it work with some debayer tweaking and/or transcoding.
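
For readers who want to try that offline/online approach, below is a minimal sketch of how camera masters could be batch-transcoded to a UHD mezzanine codec with Python and ffmpeg. This is not part of the review’s test setup: the folder names are placeholders, it assumes ffmpeg is installed on the system path, and R3D or BRAW originals generally need to be exported from Resolve, REDCINE-X or another vendor tool first, since stock ffmpeg builds don’t decode those raw formats.

import subprocess
from pathlib import Path

SOURCE_DIR = Path("camera_exports")   # hypothetical folder of decodable camera masters
PROXY_DIR = Path("offline_proxies")
PROXY_DIR.mkdir(exist_ok=True)

def make_proxy(src: Path, codec: str = "dnxhr") -> None:
    """Transcode one clip to a UHD mezzanine file for offline editing."""
    dst = PROXY_DIR / (src.stem + "_proxy.mov")
    if codec == "dnxhr":
        video_args = ["-c:v", "dnxhd", "-profile:v", "dnxhr_hq", "-pix_fmt", "yuv422p"]
    else:  # ProRes 422 HQ via ffmpeg's prores_ks encoder
        video_args = ["-c:v", "prores_ks", "-profile:v", "3", "-pix_fmt", "yuv422p10le"]
    cmd = ["ffmpeg", "-y", "-i", str(src)] + video_args + [
        "-vf", "scale=3840:2160",   # conform proxies to UHD
        "-c:a", "pcm_s16le",        # uncompressed audio keeps sync simple
        str(dst),
    ]
    subprocess.run(cmd, check=True)

for clip in sorted(SOURCE_DIR.glob("*.mov")):
    make_proxy(clip)

The resulting DNxHR or ProRes files cut comfortably on a machine in this class, and the edit can be relinked to the camera originals for the final color pass.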

In my testing I downloaded some footage from Red’s sample library, which you can find here. I also used some BRAW clips to test inside of Resolve, which can be downloaded here. I grabbed 4K, 6K, and 8K Red raw R3D files and the UHD-sized Blackmagic raw (BRAW) files to test with.

Adobe Premiere
Using the same Red clips as above, I created two one-minute-long UHD (3840×2160) sequences. I also clicked “Set to Frame Size” for all the clips. Sequence 1 contained these clips with a simple contrast, brightness and color cast applied. Sequence 2 contained the same clips with the same color correction applied, but also a 110% resize, 100 sharpen and 20 Gaussian Blur. I then exported them to various codecs via Adobe Media Encoder using OpenCL for processing. Here are my results:

QuickTime (.mov) H.264, No Audio, UHD, 23.98fps, Maximum Render Quality, 10Mb/s:
Color Correction Only: 24:07
Color Correction w/ 110% Resize, 100 Sharpen, 20 Gaussian Blur: 26:11

DNxHR HQX 10-bit, UHD:
Color Correction Only: 25:42
Color Correction w/ 110% Resize, 100 Sharpen, 20 Gaussian Blur: 27:03

ProRes HQ:
Color Correction Only: 24:48
Color Correction w/ 110% Resize, 100 Sharpen, 20 Gaussian Blur: 25:34

As you can see, the export times are pretty long. And let me tell you, once the sequence with the Gaussian Blur and Resize kicked in, so did the fans. While it wasn’t like a jet taking off, the sound of the fans definitely made me and my wife glance at the system. It was also throwing some heat out the back. Premiere relies heavily on the CPU over the GPU. Not that it doesn’t embrace the GPU, but, as you will see later, Resolve takes more advantage of it. Either way, Premiere really taxed the Lenovo A940 when using 4K, 6K and 8K Red raw files. Realtime playback wasn’t possible except for the 4K files. I probably wouldn’t recommend this system for someone working with lots of higher-than-4K raw files; it seems to be simply too much for it to handle. But if you transcode the files down to ProRes, you will be in business.

Blackmagic Resolve 16 Studio
Resolve seemed to take better advantage of the AMD Radeon RX 560 GPU in combination with the CPU, as well as the onboard Intel GPU. In this test I added in Resolve’s amazing built-in spatial noise reduction, so other than the Red R3D footage, this test and the Premiere test weren’t exactly comparing apples to apples. Overall the export times will be significantly higher (or, in theory, they should be). I also added in some BRAW footage to test for fun, and that footage was way easier to work and color with. Both sequences were UHD (3840×2160) 23.98. I will definitely be looking into working with more BRAW footage. Here are my results:

Playback: 4K played back in realtime at half-res premium debayer; 6K and 8K would not play back in realtime

H.264, No Audio, UHD, 23.98fps, force sizing and debayering to highest quality
Export 1 = native renderer; Export 2 = AMD renderer; Export 3 = Intel Quick Sync

Color Only
Export 1: 3:46
Export 2: 4:35
Export 3: 4:01

Color, 110% Resize, Spatial NR (Enhanced, Medium, 25), Sharpening, Gaussian Blur
Export 1: 36:51
Export 2: 37:21
Export 3: 37:13

BRAW 4K (4608×2592) Playback and Export Tests

Playback: Full-res played at about 22fps; half-res played in realtime

H.264, No Audio, UHD, 23.98fps, force sizing and debayering to highest quality

Color Only
Export 1: 1:26
Export 2: 1:31
Export 3: 1:29

Color, 110% Resize, Spatial NR (Enhanced, Medium, 25), Sharpening, Gaussian Blur
Export 1: 36:30
Export 2: 36:24
Export 3: 36:22

DNxHR 10-bit:
Color Correction Only: 3:42
Color, 110% Resize, Spatial NR (Enhanced, Medium, 25), Sharpening, Gaussian Blur: 39:03

One takeaway from the Resolve exports is that the color-only export was much more efficient than in Premiere, taking just over three to four times realtime for the intensive Red R3D files and just over one and a half times realtime for BRAW.

Summing Up
In the end, the Lenovo A940 is a sleek-looking all-in-one touchscreen- and pen-compatible system. While it isn’t jam-packed with the latest high-end AMD GPUs or Intel i9 processors, the A940 is a mid-level system with an incredibly good-looking IPS Dolby Vision monitor and Dolby Atmos speakers. It has some other features — like an IR camera, Qi wireless charging and the USB dial — that you might not necessarily be looking for but will love to find.

The power adapter is like a large laptop power brick, so you will need somewhere to stash it. Overall, the monitor has a really nice 25-degree tilt that is comfortable when using just the touchscreen or pen, or when using the wireless keyboard and mouse.

Because the Lenovo A940 starts at around $2,299, I think it really deserves a look when you’re searching for a new system. If you are working primarily in HD video and/or graphics, this is the all-in-one system for you. Check out more at Lenovo’s website.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and The Shop. He is also a member of the Producers Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Bonfire adds Jason Mayo as managing director/partner

Jason Mayo has joined digital production company Bonfire in New York as managing director and partner. Industry veteran Mayo will be working with Bonfire’s new leadership lineup, which includes founder/Flame artist Brendan O’Neil, CD Aron Baxter, executive producer Dave Dimeola and partner Peter Corbett. Bonfire’s offerings include VFX, design, CG, animation, color, finishing and live action.

Mayo comes to Bonfire after several years building Postal, the digital arm of the production company Humble. Prior to that he spent 14 years at Click 3X, where he worked closely with Corbett as his partner. While there he also worked with Dimeola, who cut his teeth at Click as a young designer/compositor. Dimeola later went on to create The Brigade, where he developed the network and technology that now forms the remote, cloud-based backbone referred to as the Bonfire Platform.

Mayo says a number of factors convinced him that Bonfire was the right fit for him. “This really was what I’d been looking for,” he says. “The chance to be part of a creative and innovative operation like Bonfire in an ownership role gets me excited, as it allows me to make a real difference and genuinely effect change. And when you’re working closely with a tight group of people who are focused on a single vision, it’s much easier for that vision to be fully aligned. That’s harder to do in a larger company.”

O’Neil says that having Mayo join as partner/MD is a major move for the company. “Jason’s arrival is the missing link for us at Bonfire,” he says. “While each of us has specific areas to focus on, we needed someone who could both handle the day to day of running the company while keeping an eye on our brand and our mission and introducing our model to new opportunities. And that’s exactly his strong suit.”

For the most part, Mayo’s familiarity with his new partners means he’s arriving with a head start. Indeed, his connection to Dimeola, who built the Bonfire Platform — the company’s proprietary remote talent network, nicknamed the “secret sauce” — continued as Mayo tapped Dimeola’s network for overflow and outsourced work while at Postal. Their relationship, he says, was founded on trust.

“Dave came from the artist side, so I knew the work I’d be getting would be top quality and done right,” Mayo explains. “I never actually questioned how it was done, but now that he’s pulled back the curtain, I’ve been blown away by the capabilities of the Platform and how dramatically it differentiates us.

“What separates our system is that we can go to top-level people around the world but have them working on the Bonfire Platform, which gives us total control over the process,” he continues. “They work on our cloud servers with our licenses and use our cloud rendering. The Platform lets us know everything they’re doing, so it’s much easier to track costs and make sure you’re only paying for the work you actually need. More importantly, it’s a way for us to feel connected – it’s like they’re working in a suite down the hall, except they could be anywhere in the world.”

Mayo stresses that while the cloud-based Platform is a huge advantage for Bonfire, it’s just one part of its profile. “We’re not a company riding on the backs of freelancers,” he points out. “We have great, proven talent in our core team who work directly with clients. What I’ve been telling my longtime client contacts is that Bonfire represents a huge step forward in terms of the services and level of work I can offer them.”

Corbett believes he and Mayo will continue to explore new ways of working now that he’s at Bonfire. “In the 14 years Jason and I built Click 3X, we were constantly innovating across both video and digital, integrating live action, post production, VFX and digital engagements in unique ways,” he observes. “I’m greatly looking forward to continuing on that path with him here.”

Technicolor Post opens in Wales 

Technicolor has opened a new facility in Cardiff, Wales, within Wolf Studios. This expansion of the company’s post production footprint in the UK is a result of the growing demand for more high-quality content across streaming platforms and the need to post these projects, as well as the growth of production in Wales.

The facility is connected to all of Technicolor’s locations worldwide through the Technicolor Production Network, giving creatives easy access to their projects no matter where they are shooting or posting.

The facility, an extension of Technicolor’s London operations, supports all Welsh productions and features a multi-purpose, state-of-the-art suite as well as space for VFX and front-end services including dailies. Technicolor Wales is working on Bad Wolf Production’s upcoming fantasy epic His Dark Materials, providing picture and sound services for the BBC/HBO show. Technicolor London’s recent credits include The Two Popes, The Souvenir, Chernobyl, Black Mirror, Gentleman Jack and The Spanish Princess.

Within this new Cardiff facility, Technicolor is offering 2K digital cinema projection, FilmLight Baselight color grading, realtime 4K HDR remote review, 4K OLED video monitoring, 5.1/7.1 sound, ADR recording/source connect, Avid Pro Tools sound mixing, dailies processing and Pulse cloud storage.

Bad Wolf Studios in Cardiff offers 125,000 square feet of stage space with five stages. There is flexible office space, as well as auxiliary rooms and costume and props storage.

Rising Sun Pictures’ Anna Hodge talks VFX education and training

Based in Adelaide, South Australia, Rising Sun Pictures (RSP) has created stunning visual effects for films including Spider-Man: Far From Home, Captain Marvel, Thor: Ragnarok and Game of Thrones.

It also operates a visual effects training program in conjunction with the University of South Australia in which students learn such skills as compositing, tracking, effects, lighting, look development and modeling from working professionals. Thanks to this program, many students have landed jobs in the industry.

We recently spoke with RSP’s manager of training and education, Anna Hodge, about the school’s success.

Tell us about the education program at Rising Sun Pictures.
Rising Sun Pictures is an independently owned visual effects company. We’ve worked on more than 130 films, as well as commercials and streaming series, and we are very much about employing locals from South Australia. When this is not possible, we hire staff from interstate and overseas for key senior positions.

Our education program was established in 2015 in conjunction with the University of South Australia (UniSA) in order to directly feed our junior talent pool. We found there was a gap between traditional visual effects training and the skills young artists needed to hit the ground running in a studio.

How is the program structured?
We began with a single 12-week Graduate Certificate in Visual Effects program designed for students coming out of vocational colleges and universities who want to improve their skills and employability. Students apply through a portfolio process. The program accepts 10 students each term and exposes them to Foundry Nuke and other visual effects software. They gain experience by working on shots from past movies and creating a short film.

The idea is to give them a true industry experience, develop a showreel in the process and gain a qualification through a prestigious university. Our students are exposed to the studio floor from day one. They attend RSP five days a week. They work in our training rooms and are immersed in the life of the company. We want them to feel as much a part of RSP as our regular employees.

Our program has grown to include two graduate certificate streams: we added the Graduate Certificate in Effects and Lighting, and our original offering was rebadged as the Graduate Certificate in Compositing and Tracking. Both have been highly successful in helping graduates gain employment at RSP after their studies.

Anna Hodge and students

We also offer coursework toward the university’s media arts degree. We teach two elective courses in the second year, specializing in modeling and texturing, and look development and lighting. The university students attend RSP as part of their studies at UniSA. It gives them exposure to our artists, industry-style projects and industry expectations through workshop-based delivery.

In 2019, our education program expanded, and we introduced “visual effects specialization” as part of the media arts degree. Unlike any other degree, the students spend their entire last year of studies at RSP. They are integrated with the graduate certificate classes, and learning at RSP for the whole year enables them to build skills in both compositing and tracking and effects and lighting, making them highly skilled and desirable employees at the end of their studies.

What practical skills do students learn?
In the Media Arts Modeling and Texturing elective course, they are exposed to Maya and are introduced to Pixologic ZBrush. In the second semester, they can study look development and lighting and learn Substance Painter and how to light in SideFX Houdini.

Both degree and graduate certificate students in the dynamic effects and lighting course receive around nine weeks of Houdini training and then move onto lighting. Those in the compositing and tracking stream learn Nuke, as well as 3D Equalizer and Silhouette. All our degree and graduate certificate students are also exposed to Autodesk’s Shotgun. They learn the tools we use on the floor and apply them in the same workflow.

Skills are never taught in isolation. They learn how they fit into the whole movie-making process. Working on the short film project, run in conjunction with We Made a Thing Studios (WEMAT), students learn how to work collaboratively, take direction and gain other necessary skills required for working in a deadline-driven environment.

Where do your students come from?
We attract applications from South Australia, and over the past few years, applications from interstate and overseas have significantly increased. The benefit of our program is that it’s only 12 weeks long, so students can pick up the skills they require without a huge investment of time. There is strong jobs growth in South Australia, so graduates are often employed locally or sometimes return to their hometowns to gain employment.

What are the advantages of training in a working VFX studio?
Our training goes beyond simple software skills. Our students are taught by some of the world’s best artists and by professionals who have been working in the industry for years. Students can walk around the studio, talk to and shadow artists, and attend a company staff meeting. We schedule what we call “Day in the Life Of” presentations so students can gain an understanding of the various roles that make up our company. Students hear from department heads, senior artists, producers and even juniors. They talk about their jobs and their pathways into the industry. They provide students with sound practical advice on how to improve their skills and present themselves. We also run sessions with recruiters, who share insights into building good resumes and showreels.

We are always trying to reinvent and improve what we do. I have one-on-ones with students to find out how they are doing and what we can do to improve their learning experience. We take feedback seriously. Our instructors are passionate artists and educators. Over time, I think we’ve built something quite unique and special at RSP.

How do you support your students in their transition from the program into the professional world?
We have an excellent relationship with recruiters at other visual effects companies in South Australia, interstate and globally, and we use those connections to help our students find work. A VFX company that opened in Brisbane recently hired two of our students and wants to hire more.

Of course, one reason we created the program was to meet our own need for juniors. So I work closely with our department heads to meet their needs. If a job lands and they have positions open, I will refer students for interviews. Many of our students stay in touch after they leave here. Our support doesn’t stop after 12 weeks. When former students add new material to their showreels, I encourage them to send them in, and I forward them to the relevant heads of department. When one of our graduates secures his or her first VFX job, it’s the best news. This really makes my day.

How do you see the program evolving over the next few years?
We are working on new initiatives with UniSA. Nothing to reveal yet, but I do expect our numbers to grow simply because our graduate results are excellent. Our employment rate is well above 70 percent. I spoke with someone yesterday who is looking to apply next year. She was at a recent film event and met a bunch of our graduates who raved about the programs they studied at RSP. Hearing that sort of thing is really exciting and something that we are really proud of.

RSP and UniSA are both mindful that when scaling up we don’t compromise on quality delivery. It is important to us that students consistently receive the same high-quality training and support regardless of class size.

Do you feel that visual effects offer a strong career path?
Absolutely. I am constantly contacted by recruiters who are looking to hire our graduates. I don’t foresee a lack of jobs, only a lack of qualified artists. We need to keep educating students to avoid a skill shortage. There has never been a better time to train for a career in visual effects.

VFX house Blacksmith now offering color grading, adds Mikey Pehanich

New York-based visual effects studio Blacksmith has added colorist Mikey Pehanich to its team. With this new addition, Blacksmith expands its capabilities to now offer color grading in addition to VFX.

Pehanich has worked on projects for high-profile brands including Amazon, Samsung, Prada, Nike, New Balance, Marriott and Carhartt. Most recently, Pehanich worked on Smirnoff’s global “Infamous Since 1864” campaign directed by Rupert Sanders, Volkswagen’s Look Down in Awe spot from Garth Davis, Fisher-Price’s “Let’s Be Kids” campaign and Miller Lite’s newly launched Followers spot, both directed by Ringan Ledwidge.

Prior to joining Blacksmith, Pehanich spent six years as a colorist at The Mill in Chicago. Pehanich was the first local hire when The Mill opened its Chicago studio in 2013. Initially cutting his teeth as a color assistant, he quickly worked his way up to becoming a full-fledged colorist, lending his talent to campaigns that include Michelob’s 2019 Super Bowl spot featuring Zoe Kravitz and directed by Emma Westenberg, as well as music videos, including Regina Spektor’s Black and White.

In addition to commercial work, Pehanich’s diverse portfolio encompasses several feature films, short films and music videos. His recent longform work includes Shabier Kirchner’s short film Dadli about an Antiguan boy and his community, and Andre Muir’s short film 4 Corners, which tackles Chicago’s problem with gun violence.

“New York has always been a creative hub for all industries — the energy and vibe that is forever present in the air here has always been a draw for me. When the opportunity presented itself to join the incredible team over at Blacksmith, there was no way I could pass it up,” says Pehanich, who will be working on Blackmagic’s DaVinci Resolve.


Sheena Duggal to get VES Award for Creative Excellence

The Visual Effects Society (VES) named acclaimed visual effects supervisor Sheena Duggal as the forthcoming recipient of the VES Award for Creative Excellence in recognition of her valuable contributions to filmed entertainment. The award will be presented at the 18th Annual VES Awards on January 29, 2020, at the Beverly Hilton Hotel.

The VES Award for Creative Excellence, bestowed by the VES Board of Directors, recognizes individuals who have made significant and lasting contributions to the art and science of the visual effects industry by uniquely and consistently creating compelling and creative imagery in service to story. The VES will honor Duggal for breaking new ground in compelling storytelling through the use of stunning visual effects. Duggal has been at the forefront of embracing emerging technology to enhance the moviegoing experience, and her creative vision and inventive techniques have paved the way for future generations of filmmakers.

Duggal is an acclaimed visual effects supervisor and artist whose work has shaped numerous studio tentpole and Academy Award-nominated productions. She is known for her design skills, creative direction and visual effects work on blockbuster films such as Venom, The Hunger Games, Mission: Impossible, Men in Black II, Spider-Man 3 and Contact. She has worked extensively with Marvel Studios as VFX supervisor on projects including Doctor Strange, Thor: The Dark World, Iron Man 3, Marvel One-Shot: Agent Carter and the Agent Carter TV series. She also contributed to Sci-Tech Academy Award wins for visual effects and compositing software Flame and Inferno. Since 2012, Duggal has been consulting with Codex (and now Codex and Pix), providing guidance on various new technologies for the VFX community. Duggal is currently visual effects supervisor for Venom 2 and recently completed design and prep for Ghostbusters 2020.

In 2007, Duggal made her debut as a director on an award-winning short film to showcase the Chicago Spire, simultaneously designing all of the visual effects. Her career in movies began when she moved to Los Angeles to work as a Flame artist on Super Mario Bros. for Roland Joffe and Jake Eberts’ Lightmotive Fatman. She had previously been based in London, where she created high-resolution digital composites for Europe’s top advertising and design agencies. Her work included album covers for Elton John and Traveling Wilburys.

Already an accomplished compositor (she began in 1985 working on early generation paint software), in 1992 Duggal worked as a Flame artist on the world’s first Flame feature production. Soon after, she was hired by Industrial Light & Magic as a supervising lead Flame artist on a number of high-profile projects (Mission: Impossible, Congo and The Indian in the Cupboard). In 1996, Duggal left ILM to join Sony Pictures Imageworks as creative director of high-speed compositing and soon began to take on the additional responsibilities of visual effects supervisor. She was production-side VFX supervisor for multiple directors during this time, including Jane Anderson (The Prize Winner of Defiance, Ohio), Peter Segal (50 First Dates and Anger Management) and Ridley Scott (Body of Lies and Matchstick Men).

In addition to feature films, Duggal has also worked on a number of design projects. In 2013 she designed the logo and the main-on-ends for Agent Carter. She was production designer for SIGGRAPH Electronic Theatre 2001, and she created the title design for the groundbreaking Technology Entertainment and Design conference (TED) in 2004.

Duggal is also a published photographer and traveled to Zimbabwe and Malawi on her last assignment on behalf of UK water charity Pump Aid, where she was photo-documenting how access to clean water has transformed the lives of thousands of people in rural areas.

Duggal is a member of the Academy of Motion Picture Arts and Sciences and serves on the executive committee for the VFX branch.

De-aging John Goodman 30 years for HBO’s The Righteous Gemstones

For HBO’s original series The Righteous Gemstones, VFX house Gradient Effects de-aged John Goodman using its proprietary Shapeshifter tool, an AI-assisted tool that can turn back the time on any video footage. With Shapeshifter, Gradient sidestepped the Uncanny Valley to shave decades off Goodman for an entire episode, delivering nearly 30 minutes of film-quality VFX in six weeks.

In the show’s fifth episode, “Interlude,” viewers journey back to 1989, a time when the Gemstone empire was still growing and Eli’s wife, Aimee-Leigh, was still alive. But going back also meant de-aging Goodman for an entire episode, something never attempted before on television. Gradient accomplished it using Shapeshifter, which allows artists to “reshape” an individual frame and the performers in it and then extend those results across the rest of a shot.

Shapeshifter worked by first analyzing the underlying shape of Goodman’s face. It then extracted important anatomical characteristics, like skin details, stretching and muscle movements. With the extracted elements saved as layers to be reapplied at the end of the process, artists could start reshaping his face without breaking the original performance or footage. Artists could tweak additional frames in 3D down the line as needed, but they often didn’t need to, making the de-aging process nearly automated.

“Shapeshifter is an entirely new way to de-age people,” says Olcun Tan, owner and visual effects supervisor at Gradient Effects. “While most productions are limited by time or money, we can turn around award-quality VFX on a TV schedule, opening up new possibilities for shows and films.”

Traditionally, de-aging work for film and television has been done in one of two ways: through filtering (saves time, but hard to scale) or CG replacements (better quality, higher cost), which can take six months to a year. Shapeshifter introduces a new method that not only preserves the actor’s original performance, but also interacts naturally with other objects in the scene.

“One of the first shots of ‘Interlude’ shows stage crew walking in front of John Goodman,” describes Tan. “In the past, a studio would have recommended a full CGI replacement for Goodman’s character because it would be too hard or take too much time to maintain consistency across the shot. With Shapeshifter, we can just reshape one frame and the work is done.”

This is possible because Shapeshifter continuously captures the face, including all of its essential details, using the source footage as its guide. With the data being constantly logged, artists can extract movement information from anywhere on the face whenever they want, replacing expensive motion-capture stages, equipment and makeup teams.

Director Ang Lee: Gemini Man and a digital clone

By Iain Blair

Filmmaker Ang Lee has always pushed the boundaries in cinema, both technically and creatively. His film Life of Pi, which he directed and produced, won four Academy Awards — for Best Direction, Best Cinematography, Best Visual Effects and Best Original Score.

Lee’s Brokeback Mountain won three Academy Awards, including Best Direction, Best Adapted Screenplay and Best Original Score. Crouching Tiger, Hidden Dragon was nominated for 10 Academy Awards and won four, including Best Foreign Language Film for Lee, Best Cinematography, Best Original Score and Best Art Direction/Set Decoration.

His latest, Paramount’s Gemini Man, is another innovative film, this time disguised as an action-thriller. It stars Will Smith in two roles — first, as Henry Brogan, a former Special Forces sniper-turned-assassin for a clandestine government organization; and second (with the assistance of ground-breaking visual effects) as “Junior,” a cloned younger version of himself with peerless fighting skills who is suddenly targeting him in a global chase. The chase takes them from the estuaries of Georgia to the streets of Cartagena and Budapest.

Rounding out the cast is Mary Elizabeth Winstead as Danny Zakarweski, a DIA agent sent to surveil Henry; Golden Globe Award-winner Clive Owen as Clay Verris, a former Marine officer now seeking to create his own personal military organization of elite soldiers; and Benedict Wong as Henry’s longtime friend, Baron.

Lee’s creative team included director of photography Dion Beebe (Memoirs of a Geisha, Chicago), production designer Guy Hendrix Dyas (Inception, Indiana Jones and the Kingdom of the Crystal Skull), longtime editor Tim Squyres (Life of Pi and Crouching Tiger, Hidden Dragon) and composer Lorne Balfe (Mission: Impossible — Fallout, Terminator Genisys).

The groundbreaking visual effects were supervised by Bill Westenhofer, Academy Award winner for Life of Pi as well as The Golden Compass, and Weta Digital’s Guy Williams, an Oscar nominee for The Avengers, Iron Man 3 and Guardians of the Galaxy Vol. 2.

Will Smith and Ang Lee on set

I recently talked to Lee — whose directing credits include Taking Woodstock, Hulk, Ride With the Devil, The Ice Storm and Billy Lynn’s Long Halftime Walk — about making the film, which has already generated a lot of awards talk about its cutting-edge technology, the workflow and his love of editing and post.

Hollywood’s been trying to make this for over two decades now, but the technology just wasn’t there before. Now it’s finally here!
It was such a great idea, if you can visualize it. When I was first approached about it by Jerry Bruckheimer and David Ellison, they said, “We need a movie star who’s been around a long time to play Henry, and it’s an action-thriller and he’s being chased by a clone of himself,” and I thought the whole clone idea was so fascinating. I think if you saw a young clone version of yourself, you wouldn’t see yourself as special anymore. It would be, “What am I?” That also brought up themes like nature versus nurture and how different two people with the same genes can be. Then the whole idea of what makes us human? So there was a lot going on, a lot of great ideas that intrigued me. How does aging work and affect you? How would you feel meeting a younger version of yourself? I knew right away it had to be a digital clone.

You certainly didn’t make it easy for yourself as you also decided to shoot it in 120fps at 4K and in 3D.
(Laughs) You’re right, but I’ve been experimenting with new technology for the past decade, and it all started with Life of Pi. That was my first taste of 3D, and for 3D you really need to shoot digitally because of the need for absolute precision and accuracy in synchronizing the two cameras and your eyes. And you need a higher frame rate to get rid of the strobing effect and any strangeness. Then when you go to 120 frames per second, the image becomes so clear and far smoother. It’s like a whole new kind of moviemaking, and that’s fascinating to me.

Did you shoot native 3D?
Yes, even though it’s still so clumsy and not easy. But for me it’s also a learning process on the set, which I enjoy.

Junior

There’s been a lot of talk about digital de-aging use, especially in Scorsese’s The Irishman. But you didn’t use that technique for Will’s younger self, right?
Right. I haven’t seen The Irishman so I don’t know exactly what they did, but this was a total CGI creation, and it’s a lead character where you need all the details and performance. Maybe the de-aging is fine for a quick flashback, but it’s very expensive to do, and it’s all done manually. This was also quite hard to do, and there are two parts to it: Scientifically, it’s quite mind-boggling, and our VFX supervisor Bill Westenhofer and his team worked so hard at it, along with the Weta team headed by VFX supervisor Guy Williams. So did Will. But then the hardest part is dealing with audiences’ impressions of Junior, as you know in the back of your mind that a young Will Smith doesn’t really exist. Creating a fully digital believable human being has been one of the hardest things to do in movies, but now we can.

How early on did you start integrating post and all the VFX?
Before we even started anything, as we didn’t have unlimited money, a big part of the budget went to doing a lot of tests, new equipment, R&D and so on, so we had to be very careful about planning everything. That’s the only way you can reduce costs in VFX. You have to be a good citizen and very disciplined. It was a two-year process, and you plan and shoot layer by layer, and you have to be very patient… then you start making the film in post.

I assume you did a lot of previz?
(Laughs) A whole lot, and not only for all the obvious action scenes. Even for the non-action stuff, we designed and made the cartoons and did previz and had endless meetings and scouted and measured and so on. It was a lot of effort.

How tough was the shoot?
It was very tough and very slow. My last three movies have been like this since the technology’s all so new, so it’s a learning process as you’re figuring it all out as you go. No matter how much you plan, new stuff comes up all the time and equipment fails. It feels very fragile and very vulnerable sometimes. And we only had a budget for a regular movie, so we could only shoot for 80 days, and we were on three continents and places like Budapest and Cartagena as well as around Savannah in the US. Then I insist on doing all the second unit stuff as well, apart from a few establishing shots and sunsets. I have to shoot everything, so we had to plan very carefully with the sound team as every shot is a big deal.

Where did you post?
All in New York. We rented space at Final Frame, and then later we were at Harbor. The thing is, no lab could process our data since it was so huge, so when we were based in Savannah we just built our own technology base and lab so we could process all our dailies and so on — and we bought all our servers, computers and all the equipment needed. It was all in-house, and our technical supervisor Ben Gervais oversaw it all. It was too difficult to take all that to Cartagena, but we took it all to Budapest and then set it all up later in New York for post.

Do you like the post process?
I like the first half, but then it’s all about previews, getting notes, changing things. That part is excruciating. Although I have to give a lot of credit to Paramount as they totally committed to all the VFX quite early and put the big money there before they even saw a cut so we had time to do them properly.

Junior

Talk about editing with Tim Squyres. How did that work?
We sent him dailies. When I’m shooting, I just want to live in my dreams, unless something alarms me, and he’ll let me know. Otherwise, I prefer to work separately. But on this one, since we had to turn over some shots while we were shooting, he came to the set in Budapest, and we’d start post already, which was new to me. Before, I always liked to cut separately.

What were the big editing challenges?
Trying to put all the complex parts together, dealing with the rhythm and pace, going from quiet moments to things like the motorcycle chase scenes, and telling the story as effectively as we could — all the usual things. In this medium, everything is more critical visually.

All the VFX play a big role. How many were there?
Over 1,000, but then Junior alone is a huge visual effect in every scene he’s in. Weta did all of him and complained that they got the hardest and most expensive part. (Laughs) The other, easier stuff was spread out to several companies, including Scanline and Clear Angle.

Ang Lee and Iain Blair

Talk about the importance of sound and music.
We did the mix at Harbor on its new stage, and it’s always so important. This time we did something new. Typically, you do Atmos at the final mix and mix the music along with all the rest, but our music editor did an Atmos mix on all the music first and then brought it to us for the final mix. That was very special.

Where did you do the DI and how important is it to you?
It’s huge on a movie like this. We set up our own DI suite in-house at Final Frame with the latest FilmLight Baselight, which is amazing. Our colorist Marcy Robinson had trained on it, and it was a lot easier than on the last film. Dion came in a lot and they worked together, and then I’d come in. We did a lot of work, especially on all the night scenes, enhancing moonlight and various elements.

I think the film turned out really well and looks great. When you have the combination of these elements like 3D, digital cinematography, high frame rate and high resolution, you really get “new immersive cinema.” So for me, it’s a new and different way of telling stories and processing them in your head. The funny thing is, personally I’m a very low-tech person, but I’ve been really pursuing this for the last few years.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Harbor adds talent to its London, LA studios

Harbor has added to its London- and LA-based studios. Marcus Alexander joins as VP of picture post, West Coast, and Darren Rae joins as senior colorist; Rae will be supervising all dailies in the UK.

Marcus Alexander started his film career in London almost 20 years ago as an assistant editor before joining Framestore as a VFX editor. He helped Framestore launch its digital intermediate division, producing multiple finishes on a host of tent-pole and independent titles, before joining Deluxe to set up its London DI facility. Alexander then relocated to New York to head up Deluxe New York DI. With the growth in 3D movies, he returned to the UK to supervise stereo post conversions for multiple studios before his segue into VFX supervising.

“I remember watching It Came from Outer Space at a very young age and deciding there and then to work in movies,” says Alexander. “Having always been fascinated with photography and moving images, I take great pride in thorough involvement in my capacity from either a production or creative standpoint. Joining Harbor allows me to use my skills from a post-finishing background along with my production experience in creating both 2D and 3D images to work alongside the best talent in the industry and deliver content we can be extremely proud of.”

Rae began his film career in the UK in 1995 as a sound sync operator at Mike Fraser Neg Cutters. He moved into the telecine department in 1997 as a trainee, and by 1998 he was a dailies colorist working with 16mm and 35mm film. From 2001, Rae spent three years with The Machine Room in London as a telecine operator before joining Todd AO’s London lab in 2004 as a colorist, working on drama and commercials shot on 35mm and 16mm film as well as 8mm projects for music videos. In 2006, Rae moved into grading dailies at Todd AO’s parent company, Deluxe, in Soho, London, before moving to Company 3 London in 2007 as senior dailies colorist. In 2009, he was promoted to supervising colorist.

Prior to joining Harbor, Rae was senior colorist for Pinewood Digital, supervising multiple shows and overseeing a team of four, eventually becoming head of grading. Projects include Pokemon Detective Pikachu, Dumbo, Solo: A Star Wars Story, The Mummy, Rogue One, Doctor Strange and Star Wars Episode VII — The Force Awakens.

“My main goal is to make the director of photography feel comfortable. I can work on a big feature film from three months to a year, and the trust the DP has in you is paramount. They need to know that wherever they are shooting in the world, I’m supporting them. I like to get under the skin of the DP right from the start to get a feel for their wants and needs and to provide my own input throughout the entire creative process. You need to interpret their instructions and really understand their vision. As a company, Harbor understands and respects the filmmaker’s process and vision, so it’s the ideal new home for me,” says Rae.

Harbor has also announced that colorists Elodie Ichter and Katie Jordan are now available to work with clients on both the East and West Coasts in North America as well as the UK. Some of the team’s work includes Once Upon a Time in Hollywood, The Irishman, The Hunger Games, The Maze Runner, Maleficent, The Wolf of Wall Street, Anna, Snow White and the Huntsman and Rise of the Planet of the Apes.

Foundry updates Nuke to version 12.0

Foundry has released Nuke 12.0, which introduces the next cycle of releases for the Nuke family. The Nuke 12.0 release brings improved interactivity and performance across the Nuke family, from additional GPU-enabled nodes for cleanup to a rebuilt playback engine in Nuke Studio and Hiero. Nuke 12.0 also integrates GPU-accelerated tools from Cara VR for camera solving, stitching and corrections, and updates to the latest industry standards.

OpenEXR

New features of Nuke 12.0 include:
• UI interactivity and script loading – This release includes a variety of optimizations throughout the software to improve performance, especially when working at scale. One key improvement offers a much smoother experience, with noticeably better UI interactivity and reduced loading times when working in large scripts.
• Read and write performance – Nuke 12.0 includes focused improvements to OpenEXR read and write performance, including optimizations for several popular compression types (Zip1, Zip16, PIZ, DWAA, DWAB), improving render times and interactivity in scripts. Red and Sony camera formats also see additional GPU support.
• Inpaint and EdgeExtend – These GPU-accelerated nodes provide faster and more intuitive workflows for common cleanup tasks, with fine detail controls and contextual paint strokes (see the scripting sketch after this list).
• Grid Warp Tracker – Extending the Smart Vector toolset in NukeX, this node uses Smart Vectors to drive grids for match moving, warping and morphing images.
• Cara VR node integration – The majority of Cara VR’s nodes are now integrated into NukeX, including a suite of GPU-enabled tools for VR and stereo workflows and tools that enhance traditional camera solving and cleanup workflows.
• Nuke Studio, Hiero and HieroPlayer Playback – The timeline-based tools in the Nuke family see dramatic improvements in playback stability and performance as a result of a rebuilt playback engine optimized for the heavy I/O demands of color-managed workflows with multichannel EXRs.
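
To illustrate how the new cleanup nodes can be driven from Nuke’s Python API, here is a minimal scripting sketch that wires a Read node into Inpaint and EdgeExtend and renders the result. It is an illustration under stated assumptions, not Foundry’s documented workflow: the node class names ("Inpaint2", "EdgeExtend"), file paths and frame range are placeholders that may differ from the shipping build.

import nuke

# Read the plate (path is a hypothetical placeholder).
read = nuke.createNode("Read")
read["file"].setValue("/shots/plate_v001/plate.%04d.exr")

# Assumed class name for the new GPU-accelerated Inpaint node.
inpaint = nuke.createNode("Inpaint2")
inpaint.setInput(0, read)

# Assumed class name for the EdgeExtend node.
edge = nuke.createNode("EdgeExtend")
edge.setInput(0, inpaint)

# Write the cleaned-up plate back out as EXRs.
write = nuke.createNode("Write")
write.setInput(0, edge)
write["file"].setValue("/shots/plate_v001/cleanup.%04d.exr")
write["channels"].setValue("rgba")

# Render frames 1001-1100 (frame range is hypothetical).
nuke.execute(write, 1001, 1100)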

Ziva VFX 1.7 helps simplify CG character creation


Ziva Dynamics has introduced Ziva VFX 1.7, designed to make CG character creation easier thanks to the introduction of Art Directable Rest Shapes (ADRS). The tool allows artists to make characters conform to any shape without losing their dynamic properties, opening up a faster path to cartoons and digi-doubles.

Users can now adjust a character’s silhouette with simple sculpting tools. Once the goal shape is established, Ziva VFX can morph to match it, maintaining all of the dynamics embedded before the change. Whether unnatural or precise, ADRS works with any shape, removing the difficulty of both complex setups and time-intensive corrective work.

The Art Directable Rest Shapes feature has been in development for over a year and was created in collaboration with several major VFX and feature animation studios. According to Ziva, while outputs and art styles differed, each group essentially requested the same thing: extreme accuracy and more control without compromising the dynamics that sell a final shot.

For feature animation characters not based on humans or nature, ADRS can rapidly alter and exaggerate key characteristics, allowing artists to be expressive and creative without losing the power of secondary physics. For live-action films, where the use of digi-doubles and other photorealistic characters is growing, ADRS can minimize the setup process when teams want to quickly tweak a silhouette or make muscles fire in multiple ways during a shot.

According to Josh diCarlo, head of rigging at Sony Pictures Imageworks, “Our creature team is really looking forward to the potential of Art Directable Rest Shapes to augment our facial and shot-work pipelines by adding quality while reducing effort. Ziva VFX 1.7 holds the potential to shave weeks of work off of both processes while simultaneously increasing the quality of the end results.”

To use Art Directable Rest Shapes, artists must duplicate a tissue mesh, sculpt their new shape onto the duplicate and add the new geometry as a Rest Shape over select frames. This process will intuitively morph the character, creating a smooth, novel deformation that adheres to any artistic direction a creative team can think up. On top of ADRS, Ziva VFX 1.7 will also include a new zRBFWarp feature, which can warp NURBS surfaces, curves and meshes.
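
As a rough sketch of that duplicate-sculpt-assign workflow in Maya Python, the example below duplicates a tissue mesh, assumes an artist has sculpted the copy, then registers it as a rest shape and keys its influence over a few frames. The zRestShape command and its flag, the node and attribute names and the frame numbers are assumptions based on the workflow described above, not Ziva’s documented API, so check the Ziva VFX 1.7 documentation before relying on it.

import maya.cmds as cmds

TISSUE = "bicep_tissue"  # hypothetical Ziva tissue mesh

# 1. Duplicate the tissue mesh; the copy becomes the sculpt target.
target = cmds.duplicate(TISSUE, name=TISSUE + "_restShapeTarget")[0]

# 2. An artist now sculpts the duplicate into the desired silhouette
#    (Maya sculpt tools, ZBrush, etc.) -- no code for that step.

# 3. Register the sculpted duplicate as a rest shape on the tissue so the
#    solver morphs toward it while keeping its dynamics (assumed command/flag).
cmds.select([TISSUE, target])
cmds.zRestShape(a=True)

# 4. Key the rest shape's influence over the frames where the new silhouette
#    should take over (node and attribute names are assumptions).
cmds.setKeyframe(TISSUE + "_zRestShape.weights[0]", time=1001, value=0.0)
cmds.setKeyframe(TISSUE + "_zRestShape.weights[0]", time=1010, value=1.0)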

For a free 60-day trial, click here. Ziva VFX 1.7 is available now as an Autodesk Maya plugin for Windows and Linux users. Ziva VFX 1.7 can be purchased in monthly or yearly installments, depending on user type.

According to Michael Smit, chief commercial officer at Ziva Dynamics, “Ziva is working towards a new platform that will more easily allow us to deploy the software into other software packages, operating systems, and different network architectures. As an example we are currently working on our integrations into iOS and Unreal, both of which have already been used in limited release for production settings. We’re hopeful that once we launch the new platform commercially there will be an opportunity to deploy tools for macOS users.”

Using VFX to turn back time for Downton Abbey film

The feature film Downton Abbey is a continuation of the popular TV series, which followed the lives of the aristocratic Crawley family and their domestic help. Created by Julian Fellowes, the film is set in 1927, one year after the show’s final episode, and brings with it the exciting announcement of a royal visit to Downton from King George V and Queen Mary.

Framestore supported the film’s shoot and post, with VFX supervisor Kyle McCulloch and senior producer Ken Dailey leading the team. Following Framestore’s work creating post-war Britain for the BAFTA-nominated Darkest Hour, the VFX studio was approached to work directly with the film’s director, Michael Engler, to help ground the historical accuracy of the film.

Much of the original cast and crew returned, with a screenplay that required the new addition of a VFX department, “although it was important that we had a light footprint,” explains McCulloch. “I want people to see the credits and be surprised that there are visual effects in it.” The supporting VFX work on over 170 shots ranged from cleanups and seamless set transitions to extensive environment builds and augmentation.

Transporting the audience to an idealized interpretation of 1920s Britain required careful work on the structures of buildings, including the Abbey (Highclere Castle), Buckingham Palace and Lacock village, a National Trust village in the Cotswolds that was used as a location for Downton’s village. Using the available photogrammetry and captured footage, the artists set to work restoring the period, adding layers of dirt and removing contemporary details from existing historical buildings.

Having changed so much since the early 20th century, King’s Cross Station needed a complete rebuild in CG, with digital train carriages, atmospheric smoke and large interior and exterior environment builds.

The team also helped with landscaping the idyllic grounds of the Abbey, replacing the lawn, trees and grass and removing power lines, cars and modern roads. Research was key, with the team collaborating with production designer Donal Woods and historical advisor Alastair Bruce, who came equipped with look books and photographs from the era. “A huge amount of the work was in the detail,” explains McCulloch. “We questioned everything; looking at the street surfaces, the type of asphalt used, down to how the gutters were built. All these tiny elements create the texture of the entire film. Everyone went through it with a very fine-tooth comb — every single frame.”


In addition, a long shot that followed the letter from the Royal Household from the exterior of the abbey, through the corridors of the domestic “downstairs” to the aristocratic “upstairs,” was a particular challenge. The scenes based downstairs — including in the kitchen — were shot at Shepperton Studios on a set, with the upstairs being captured on location at Highclere Castle. It was important to keep the illusion of the action all being within one large household, requiring Framestore to stitch the two shots together.

Says McCulloch, “It was brute force, it was months of work and I challenge anyone to spot where the seam is.”

Flavor adds Joshua Studebaker as CG supervisor

Creative production house Flavor has added CG supervisor Joshua Studebaker to its Los Angeles studio. For more than eight years, Studebaker has been a freelance CG artist in LA, specializing in design, animation, dynamics, lighting/shading and compositing via Maya, Cinema 4D, V-Ray/Octane, Nuke and After Effects.

A frequent collaborator with Flavor and its brand and agency partners, Studebaker has also worked with Alma Mater, Arsenal FX, Brand New School, Buck, Greenhaus GFX, Imaginary Forces and We Are Royale in the past five years alone. In his new role with Flavor, Studebaker oversees visual effects and 3D services across the company’s global operations. Flavor’s Chicago, Los Angeles and Detroit studios offer color grading, VFX and picture finishing using tools like Autodesk Lustre and Flame Premium.

Flavor creative director Jason Cook also has a long history of working with Studebaker and deep respect for his talent. “What I love most about Josh is that he is both technical and a really amazing artist and designer. Adding him is a huge boon to the Flavor family, instantly elevating our production capabilities tenfold.”

Flavor has always emphasized creativity as a key ingredient, and according to Studebaker, that’s what attracted him. “I see Flavor as a place to grow my creative and design skills, as well as help bring more standardization to our process in house,” he explained. “My vision is to help Flavor become more agile and more efficient and to do our best work together.”

Pace Pictures and ShockBox VFX formalize partnership

Hollywood post house Pace Pictures and bicoastal visual effects, animation and motion graphics specialist ShockBox VFX have formed a strategic alliance for film and television projects. The two specialist companies provide studios and producers with integrated services encompassing all aspects of post in order to finish any project efficiently, cost-effectively and with greater creative control.

The agreement formalizes a successful collaborative partnership that has been evolving over many years. Pace Pictures and ShockBox collaborated informally in 2015 on the independent feature November Rule. Since then, they have teamed up on numerous projects, including, most recently, the Hulu series Veronica Mars, Lionsgate’s 3 From Hell and Universal Pictures’ Grand-Daddy Day Care and Undercover Brother 2. Pace provided services including creative editorial, color grading, editorial finishing and sound mixing. ShockBox contributed visual effects, animation and main title design.

“We offer complementary services, and our staff have developed a close working rapport,” says Pace Pictures president Heath Ryan. “We want to keep building on that. A formal alliance benefits both companies and our clients.”

“In today’s world of shrinking budgets and delivery schedules, the time for creativity in the post process can often suffer,” adds ShockBox founder and director Steven Addair. “Through our partnership with Pace, producers and studios of all sizes will be able to maximize our integrated VFX pipeline for both quality and volume.”

As part of the agreement, ShockBox will move its West Coast operations to a new facility that Pace plans to open later this fall. The two companies have also set up an encrypted, high-speed data connection between Pace Pictures Hollywood and ShockBox New York, allowing them to exchange project data quickly and securely.

Martin Scorsese to receive VES Lifetime Achievement Award  

The Visual Effects Society (VES) has named Martin Scorsese as the forthcoming recipient of the VES Lifetime Achievement Award in recognition of his valuable contributions to filmed entertainment. The award will be presented next year at the 18th Annual VES Awards at the Beverly Hilton Hotel.

The VES Lifetime Achievement Award, voted on by the VES Board of Directors, recognizes an outstanding body of work that has significantly contributed to the art and/or science of the visual effects industry.  The VES will honor Scorsese for “his artistry, expansive storytelling and gift for blending iconic imagery and unforgettable narrative.”

“Martin Scorsese is one of the most influential filmmakers in modern history and has made an indelible mark on filmed entertainment,” says Mike Chambers, VES board chair. “His work is a master class in storytelling, which has brought us some of the most memorable films of all time. His intuitive vision and fiercely innovative direction have given rise to a new era of storytelling and have made a profound impact on future generations of filmmakers. Martin has given us a rich body of groundbreaking work to aspire to, and for this, we are honored to award him with the Visual Effects Society Lifetime Achievement Award.”

Martin Scorsese has directed critically acclaimed, award-winning films including Mean Streets, Taxi Driver, Raging Bull, The Last Temptation of Christ, Goodfellas, Gangs of New York, The Aviator, The Departed (Academy Award for Best Director and Best Picture), Shutter Island and Hugo (Golden Globe for Best Director).

Scorsese has also directed numerous documentaries, including Rolling Thunder Revue: A Bob Dylan Story by Martin Scorsese, Elia Kazan: A Letter to Elia and the classic The Last Waltz about The Band’s final concert. His George Harrison: Living in the Material World received Emmy Awards for Outstanding Directing for Nonfiction Programming and Outstanding Nonfiction Special.

In 2010, Scorsese executive produced the HBO series Boardwalk Empire, winning Emmy and DGA Awards for directing the pilot episode. In 2014, he co-directed The 50 Year Argument with his long-time documentary editor David Tedeschi.

This September, Scorsese’s film, The Irishman, starring Robert De Niro, Al Pacino and Joe Pesci, will make its world premiere at the New York Film Festival and will have a theatrical release starting November 1 in New York and Los Angeles before arriving on Netflix on November 27.

Scorsese is the founder and chair of The Film Foundation, a non-profit organization dedicated to the preservation and protection of motion picture history.

Previous winners of the VES Lifetime Achievement Award have included George Lucas; Robert Zemeckis; Dennis Muren, VES; Steven Spielberg; Kathleen Kennedy and Frank Marshall; James Cameron; Ray Harryhausen; Stan Lee; Richard Edlund, VES; John Dykstra; Sir Ridley Scott; Ken Ralston; Jon Favreau and Chris Meledandri.

Visual Effects in Commercials: Chantix, Verizon

By Karen Moltenbrey

Once too expensive to consider for use in television commercials, visual effects soon found their way into this realm, enlivening and enhancing the spots. Today, countless commercials are using increasingly complex VFX to entertain, to explain and to elevate a message. Here, we examine two very different approaches to using effects in this way. In the Verizon commercial Helping Doctors Fight Cancer, augmented reality is transferred from a holographic medical application and fused into a heartwarming piece thanks to an extremely delicate production process. For the Chantix Turkey Campaign, digital artists took a completely different approach, incorporating a stylized digital spokes-character, feathers and all, into various scenes.

Verizon Helping Doctors Fight Cancer

The main goal of television advertisements — whether they are 15, 30 or 60 seconds in length — is to sell a product. Some do it through a direct sales approach. Some by “selling” a lifestyle or brand. And some opt to tell a story. Verizon took the latter approach for a campaign promoting its 5G Ultra Wideband.

Vico Sharabani

For the spot Helping Doctors Fight Cancer, directed by Christian Weber, Verizon adds a human touch to its technology through a compelling story illustrating how its 5G network is being used within a mixed-reality environment so doctors can better treat cancer patients. The 30-second commercial features surgeons and radiologists using high-fidelity holographic 3D anatomical renderings that can be viewed from every angle and even projected onto a person’s body for a more comprehensive examination, while the imagery can potentially be shared remotely in near real time. The augmented-reality application is from Medivis, a start-up medical visualization company that is using Verizon’s next-generation 5G wireless speeds to deliver the high speeds and low latencies necessary for the application’s large datasets and interactive frame rates.

The spot features video footage of patients undergoing MRIs and commentary from Medivis cofounder Dr. Osamah Choudhry about how treatment could be radically changed using the technology. Holographic medical imagery is then displayed showing the Medivis AR application being used on a patient.

“McGarryBowen New York, Verizon’s advertising agency, wanted to show the technology in the most accurate and the most realistic way possible. So, we studied the technology,” says Vico Sharabani, founder/COO of The-Artery, which was tasked with the VFX work in the spot. To this end, the team at The-Artery opted to use as much of the actual holographic content as possible, pulling assets from the Medivis software and fusing them with other broadcast-quality content.

The-Artery is no stranger to augmented reality, virtual reality and mixed reality. Highly experienced in visual effects, Sharabani founded the company to solve business problems within the visual space across all platforms, from films to commercials to branding, and as such, alternate reality and story have been integral elements to achieving that goal. Nevertheless, the work required for this spot was difficult and challenging.

“It’s not just acquiring and melding together 3D assets,” says Sharabani. “The process is complex, and there are different ways to do it — some better than others. And the agency wanted it to be true to the real-life application. This was not something we could just illustrate in a beautiful way; it had to be very technically accurate.”

To this end, much of the holographic imagery consisted of actual 3D assets from the Medivis holographic AR system, captured live. At times, though, The-Artery had to rework the imagery using multiple assets from the Medivis application; at other times, the artists re-created the medical imagery in CG.

Initially, the ad agency expected that The-Artery would recreate all the digital assets in CG. But after learning as much as they could about the Medivis system, Sharabani and the team were confident they could export actual data for the spot. “There was much greater value to using actual data when possible, actual CT data,” says Sharabani. “Then you have the most true-to-life representation, which makes the story even more heartfelt. And because we were telling a true story about the capabilities of the network around a real application being used by doctors, any misrepresentation of the human anatomy or scans would hurt the message and intention of the campaign.”

The-Artery began developing a solution with technicians at Medivis to export actual imagery via the HoloLens headset that’s used by the medical staff to view and manipulate the holographic imagery, to coincide with the needs of the commercial. Sometimes this involved merely capturing the screen performance as the HoloLens was being used. Other times the assets from the Medivis system were rendered over a greenscreen without a background and later composited into a scene.

“We have the ability to shoot through the HoloLens, which was our base; we used that as our virtual camera whereby the output of the system is driven by the HoloLens. Every time we would go back to do a capture (if the edit changed or the camera position changed), we had to use the HoloLens as our virtual camera in order to get the proper camera angle,” notes Sharabani. Because the HoloLens is a stereoscopic device, The-Artery always used the right-eye view for the representations, as it most closely reflected the experience of the user wearing the device.

Since the Medivis system is driven by the HoloLens, there is some shakiness present — an artifact the group retained in some of the shots to make it truer to life. “It’s a constant balance of how far we go with realism and at what point it is too distracting for the broadcast,” says Sharabani.

For imagery like the CT scans, the point cloud data was imported directly into Autodesk’s Maya, where it was turned into a 3D model. Other times the images were rendered out at 4K directly from the system. The Medivis imagery was later composited into the scenes using Autodesk’s Flame.
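
For readers curious about what that import step can look like in practice, here is a rough Python sketch only, not The-Artery’s actual pipeline: it assumes a hypothetical ASCII XYZ export of the point-cloud data and brings it into Maya as a particle cloud that a modeler could then turn into a mesh. The file path, column layout and helper name are invented for illustration.

import maya.cmds as cmds

def import_point_cloud(path, max_points=200000):
    """Read 'x y z' lines from path and create a Maya particle object."""
    points = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 3:
                continue  # skip headers or malformed lines
            points.append(tuple(float(value) for value in parts[:3]))
            if len(points) >= max_points:
                break  # cap the count to keep the scene responsive
    # cmds.particle takes a list of positions and returns the new particle node names.
    new_nodes = cmds.particle(p=points, name="ctScan_pointCloud")
    return new_nodes

# Hypothetical usage; the path is a placeholder:
# import_point_cloud("/projects/spot/ct_export/scan_01.xyz")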

However, not every bit of imagery was extracted from the system. Some had to be re-created using a standard 3D pipeline. For instance, the “scan” of the actor’s skull was replicated by the artists so that the skull model matched perfectly with the holographic imagery that was overlaid in post production (since everyone’s skull proportions are different). The group began by creating the models in Maya and then composited the imagery within Autodesk’s Flame, along with a 3D bounding box of the creative implant.

The artists also replicated the Medivis UI in 3D to re-create and match the performance of the three-dimensional UI to the hand gestures of the person “using” the Medivis system in the spot — both of which were filmed separately. For the CG interface, the group used Autodesk’s Maya and Flame, as well as Adobe’s After Effects.

“The process was so integrated to the edit, we needed the proper 3D tracking and some of the assets to be built as a 3D screen element,” explains Sharabani. “It gave us more flexibility to build the 3D UI inside of Flame, enabling us to control it more quickly and easily when we changed a hand gesture or expanded the shots.”

Given The-Artery’s experience with virtual technology, the team was quick to understand the limitations of the project using this particular equipment. Once those limits were established, however, they began to push the boundaries with small hacks that enabled them to achieve their goal of using actual holographic data to tell an amazing story.

Chantix “Turkey” Campaign

Chantix is a medication that helps smokers kick the habit. To get its message across in a series of television commercials, the drug maker decided to talk turkey, focusing the campaign on a CG turkey that, well, goes “cold turkey” with the assistance of Chantix.

A series of four spots — Slow Turkey, Camping, AC and Beach Day — prominently feature the turkey, created at The Mill. The spots were directed and produced in-house by Mill+, The Mill’s end-to-end production arm, with Jeffrey Dates directing.


L-R: John Montefusco, Dave Barosin and Scott Denton

“Each one had its own challenges,” says CG lead John Montefusco. Nevertheless, the initial commercial, Slow Turkey, presented the biggest obstacle: the build of the character from the ground up. “It was not only a performance feat, but a technical one as well,” he adds.

Effects artist Dave Barosin echoed Montefusco’s assessment of Slow Turkey, which, in addition to the main asset being built from scratch, required the development of a feather system. Meanwhile, Camping and AC added clothing, and Beach Day presented the challenge of wind, water and simulation in a moving vehicle.

According to senior modeler Scott Denton, the team was given a good deal of creative freedom when crafting the turkey. The artists were presented with some initial sketches, he adds, but more or less had free rein in the creation of the look and feel of the model. “We were looking to tread the line between cartoony and realistic,” he says. The first iterations became very cartoony, but the team subsequently worked backward to where the character was more of a mix between the two styles.

The crew modeled the turkey using Autodesk’s Maya and Pixologic’s ZBrush. It was then textured within Adobe’s Substance and Foundry’s Mari. All the details of the model were hand-sculpted. “Nailing the look and feel was the toughest challenge. We went through a hundred iterations before getting to the final character you see in the commercial,” Denton says.

The turkey contains 6,427 body feathers, 94 flight feathers and eight scalp feathers. They were simulated using a custom feather setup built by the lead VFX artist within SideFX Houdini, which made the process more efficient. Proprietary tools also were used to groom the character.
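
The Mill’s setup itself is proprietary and wasn’t described beyond that, but the underlying pattern, scattering points across a skin surface and then instancing feather geometry onto those points, is standard Houdini practice. The Python sketch below, written against Houdini’s hou module, wires up that minimal network as an illustration only; the node names, file paths and the way the feather count is plugged in are assumptions, not The Mill’s tools.

import hou

def build_feather_layout(skin_path="$HIP/geo/turkey_skin.bgeo",
                         feather_path="$HIP/geo/feather_card.bgeo",
                         count=6427):
    """Scatter points on a skin mesh and copy a feather card onto each one."""
    container = hou.node("/obj").createNode("geo", "feather_layout")

    skin = container.createNode("file", "skin_in")
    skin.parm("file").set(skin_path)

    feather = container.createNode("file", "feather_in")
    feather.parm("file").set(feather_path)

    # One scattered point per body feather (assumes the Scatter SOP's
    # default count-based generation).
    scatter = container.createNode("scatter", "feather_points")
    scatter.setInput(0, skin)
    scatter.parm("npts").set(count)

    # Instance the feather card onto every scattered point.
    copy = container.createNode("copytopoints", "copy_feathers")
    copy.setInput(0, feather)   # first input: geometry to copy
    copy.setInput(1, scatter)   # second input: target points
    copy.setDisplayFlag(True)
    container.layoutChildren()
    return copy

In production, per-point orientation, scale and groom attributes (plus simulation) would determine how each feather actually sits and moves; the sketch shows only the scatter-and-copy skeleton.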

The artists initially developed a concept sculpt in ZBrush of just the turkey’s head, which underwent numerous changes and versions before they added it to the body of the model. Denton then sculpted a posed version with sculpted feathers to show what the model might look like when posed, giving the client a better feel for the character. The artists later animated the turkey using Maya. Rendering was performed in Autodesk’s Arnold, while compositing was done within Foundry’s Nuke.

“Developing animation that holds good character and personality is a real challenge,” says Montefusco. “There’s a huge amount of evolution in the subtleties that ultimately make our turkey ‘the turkey.’”

For the most part, the same turkey model was used for all four spots, although the artists did adapt and change certain aspects — such as the skeleton and simulation meshes — for each as needed in the various scenarios.

For the turkey’s clothing (sweater, knitted vest, scarf, down vest, knitted cap, life vest), the group used Marvelous Designer 3D software for virtual clothes and fabrics, along with Maya and ZBrush. However, as Montefusco explains, tailoring for a turkey is far different than developing CG clothing for human characters. “Seeing as a lot of the clothes that were selected were knit, we really wanted to push the envelope and build the knit with geometry. Even though this made things a bit slower for our effects and lighting team, in the end, the finished clothing really spoke for itself.”

The four commercials also feature unique environments ranging from the interior and exterior of a home to a wooded area and beach. The artists used mostly plates for the environments, except for an occasional tent flap and chair replacement. The most challenging of these settings, says Montefusco, was the beach scene, which required full water replacement for the shot of the turkey on the paddle board.


Karen Moltenbrey is a veteran writer, covering visual effects and post production.

VFX in Features: Hobbs & Shaw, Sextuplets

By Karen Moltenbrey

What a difference a year makes. Then again, what a difference 30 years make. That’s about the time when the feature film The Abyss included photoreal CGI integrated with live action, setting a trend that continues to this day. Since that milestone, VFX wizards have tackled a plethora of complicated problems, including convincing hair and skin for believable digital humans, as well as realistic water, fire and other elements. With each new blockbuster VFX film, digital artists continually raise the bar, challenging the status quo and themselves to elevate the art even further.

The visual effects in today’s feature films run the gamut from in-your-face imagery that can put you on the edge of your seat through heightened action to the kind that can make you laugh by amping up the comedic action. As detailed here, Fast & Furious Presents: Hobbs & Shaw takes the former approach, helping to carry out amazing stunts that are bigger and “badder” than ever. Opposite that is Sextuplets, which uses VFX to carry out a gag central to the film in a way that also pushes the envelope.

Fast & Furious Presents: Hobbs & Shaw

The Fast and the Furious film franchise, which has included eight features that collectively have amassed more than $5 billion worldwide since first hitting the road in 2001, is known for its high-octane action and visual effects. The latest installment, Fast & Furious Presents: Hobbs & Shaw, continues that tradition.

At the core of the franchise are next-level underground street racers who become reluctant fugitives pulling off big heists. Hobbs & Shaw, the first stand-alone vehicle, has Dwayne Johnson and Jason Statham reprising their roles as loyal Diplomatic Security Service lawman Luke Hobbs and lawless former British operative Deckard Shaw, respectively. This comes after facing off in Furious 7 (2015) and then playing cat and mouse as Shaw tries to escape from prison and Hobbs tries to stop him in 2017’s The Fate of the Furious. (Hobbs first appeared in 2011’s Fast Five and became an ally to the gang. Shaw’s first foray was in 2013’s Fast & Furious 6.)

Now, in the latest installment, the pair are forced to join forces to hunt down anarchist Brixton Lorr (Idris Elba), who has control of a bio weapon. The trackers are hired separately to find Hattie, a rogue MI6 agent (who is also Shaw’s sister, a fact that initially eludes Hobbs) after she injects herself with the bio agent and is on the run, searching for a cure.

The Universal Pictures film is directed by David Leitch (Deadpool 2, Atomic Blonde). Jonathan Sela (Deadpool 2, John Wick) is DP, and visual effects supervisor is Dan Glass (Deadpool 2, Jupiter Ascending). A number of VFX facilities worked on the film, including key vendor DNeg along with other contributors such as Framestore.

DNeg delivered 1,000-plus shots for the film, including a range of vehicle-based action sequences set in different global locations. The work involved the creation of full digi-doubles and digi-vehicle duplicates for the death-defying stunts, jumps and crashes, as well as complex effects simulations and extensive digital environments. Naturally, all the work had to fit seamlessly alongside live-action stunts and photography from a director with a stunt coordinator pedigree and a keen eye for authentic action sequences. In all, the studio worked on 26 sequences divided among the Vancouver, London and Mumbai locations. Vancouver handled mostly the Chernobyl break-in and escape sequences, as well as the Samoa chase. London did the McLaren chase and the cave fight, as well as London chase sequences. The Mumbai team assisted its colleagues in Vancouver and London.

When you think of the Fast & Furious franchise, the first things that come to mind are intense car chases, and according to Chris Downs, CG supervisor at DNeg Vancouver, the Chernobyl beat is essentially one long, giant car-and-motorcycle pursuit, which he describes as “a pretty epic car chase.”

“We essentially have Brixton chasing Shaw and Hattie, and then Shaw and Hattie are trying to catch up to a truck that’s being driven by Hobbs, and they end up on these utility ramps and pipes, using them almost as a roadway to get up and into the turbine rooms, onto the rooftops and then jump between buildings,” he says. “All the while, everyone is getting chased by these drones that Brixton is controlling.”

The Chernobyl sequences — the break-in and the escape — were the most challenging work on the film for DNeg Vancouver. The villain, Brixton, is using the Chernobyl nuclear power plant as the site of his hideaway, leading Hobbs and Shaw to covertly break into his secret lab beneath Chernobyl to locate a device Brixton has there — and then not-so-secretly break out.

The break-in was filmed at a location outside of London, at the decommissioned Eggborough coal-fired power plant, which served as a backdrop. To transform the locale into Chernobyl, DNeg augmented the site with cooling towers and other digital structures. Nevertheless, the artists also built an entire CG version of the site for the more extreme action, using photos of the actual Chernobyl as reference for their work. “It was a very intense build. We had artistic liberty, but it was based off of Chernobyl, and a lot of the buildings match the reference photography. It definitely maintained the feeling of a nuclear power plant,” says Downs.

Not only did the construction involve all the exteriors of the industrial complex around Chernobyl, but also an interior build of an “insanely complicated” turbine hall that the characters race through at one point.

The sequence required other environment work, too, as well as effects, digi-doubles and cloth sims for the characters’ flight suits and parachutes as they drop into the setting.

Following the break-in, Hobbs and Shaw are captured and tortured, then manage to escape from the lab just in time as the site begins to explode. For this escape sequence, the crew built a CG Chernobyl reactor and power station, automated drones and a digital chimney, and created an epic collapse of buildings, complex pyrotechnic clouds and burning material.

“The scope of the work, the amount of buildings and pipes, and the number of shots made this sequence our most difficult,” says Downs. “We were blowing it up, so all the buildings had to be effects-friendly as we’re crashing things through them.” Hobbs and Shaw commandeer vehicles as they try to outrun Brixton and the explosion, but Brixton and his henchmen give chase in a range of vehicles, including trucks, Range Rovers, motorcycles and more — a mix of CGI and practical with expert stunt drivers behind the wheel.

As expected for a Fast & Furious film, there’s a big variety of custom-built vehicles. Yet, for this scene and especially in Samoa, DNeg Vancouver crafted a range of CG vehicles, including motorcycles, SUVs, transport trucks, a flatbed truck, drones and a helicopter — 10 in all.

According to Downs, maintaining the appropriate wear and tear on the vehicles as the sequences progressed was not always easy. “Some are getting shot up, or something is blown up next to them, and you want to maintain the dirt and grime on an appropriate level,” he says. “And, we had to think of that wear and tear in advance because you need to build it into the model and the texture as you progress.”

The CG vehicles are mostly used for complex stunts, “which are definitely an 11 on the scale,” says Downs. Along with the CG vehicles, digi-doubles of the actors were also used for the various stunt work. “They are fairly straightforward, though we had a couple shots where we got close to the digi-doubles, so they needed to be at a high level of quality,” he adds. The Hattie digi-double proved the most difficult due to the hair simulation, which had to match the action on set, and the cloth simulation, which had to replicate the flow of her clothing.

“She has a loose sweater on during the Chernobyl sequence, which required some simulation to match the plate,” Downs adds, noting that the artists built the digi-doubles from scratch, using scans of the actors provided by production for quality checks.

The final beat of the Chernobyl escape comes with the chimney collapse. As the chase through Chernobyl progresses, Shaw tries to get Hattie to Hobbs, and Brixton tries to grab Hattie from Shaw. In the process, charges are detonated around the site, leading to the collapse of the main chimney, which just misses obliterating the vehicle they are all in as it travels down a narrow alleyway.

DNeg did a full environment build of the area for this scene, which included the entire alleyway and the chimney, and simulated the destruction of the chimney along with an explosive concussive force from the detonation. “There’s a large fireball at the beginning of the explosion that turns into a large volumetric cloud of dust that’s getting kicked up as the chimney is collapsing, and all that had to interact with itself,” Downs says of the scene. “Then, as the chimney is collapsing toward the end of the sequence, we had the huge chunks ripping through the volumetrics and kicking up more pyrotechnic-style explosions. As it is collapsing, it is taking out buildings along the way, so we had those blowing up and collapsing and interacting with our dust cloud, as well. It’s quite a VFX extravaganza.”

Adding to the chaos: The sequence was reshot. “We got new plates for the end of that escape sequence that we had to turn around in a month, so that was definitely a white-knuckle ride,” says Downs. “Thankfully we had already been working on a lot of the chimney collapse and had the Chernobyl build mostly filled in when word came in about the reshoot. But, just the amount of effects that went into it — the volumetrics, the debris and then the full CG environment in the background — was a staggering amount of very complex work.”

The action later moves from London at the start of the film to Chernobyl, and then, in the third act, to Samoa, home of the Hobbs family, as the main characters seek refuge on the island while trying to escape from Brixton. But Brixton soon catches up to them, and the last showdown begins amid the island’s tranquil setting of shimmering blue ocean and lush green mountains. Some of the landscape is natural, some is man-made (sets) and some is CGI. To aid in the digital build of the Samoan environment, Glass traveled to the Hawaiian island of Kauai, where the filming took place, and captured a good amount of reference footage.

For a daring chase in Samoa, the artists built out the cliff’s edge and sent a CG helicopter tumbling down the steep incline in the final battle with Brixton. In addition to creating the fully digital Samoan roadside, CG cliff and 3D Black Hawk, the artists completed complex VFX simulations and destruction, and crafted high-tech combat drones and more for the sequence.

The helicopter proved to be the most challenging of all the vehicles, as it had a couple of hero moments when certain sections were fairly close to the camera. “We had to have a lot of model and texture detail,” Downs notes. “And then with it falling down the cliff and crash-landing onto the beach area, the destruction was quite tricky. We had to plan out which parts would be damaged the most and keep that consistent across the shots, and then go back in and do another pass of textures to support the scratches, dents and so forth.”

Meanwhile, DNeg London and Mumbai handled a number of sequences, among them the compelling McLaren chase, the CIA building descent and the final cave fight in Samoa. There were also a number of smaller sequences, for a total of approximately 750 shots.

One of the scenes in the film’s trailer that immediately caught fans’ attention was the McLaren escape/motorcycle transformation sequence, during which Hobbs, Shaw and Hattie are being chased by Brixton baddies on motorcycles through the streets of London. Shaw, behind the wheel of a McLaren 720S, tries to evade the motorbikes by maneuvering the prized vehicle underneath two crossing tractor trailer rigs, squeezing through with barely an inch to spare. The bad news for the trio: Brixton pulls an even more daring move, hopping off the bike while grabbing onto the back of it and then sliding parallel inches above the pavement as the bike zips under the road hazard practically on its side; once cleared, he pulls himself back onto the motorbike (in a memorable slow-motion stunt) and continues the pursuit thanks to his cybernetically altered body.

Chris Downs

According to Stuart Lashley, DNeg VFX supervisor, this sequence contained a lot of bluescreen car comps in which the actors were shot on stage in a McLaren rigged on a mechanical turntable. The backgrounds were shot alongside the stunt work in Glasgow (playing as London). In addition, there were a number of CG cars added throughout the sequence. “The main VFX set pieces were Hobbs grabbing the biker off his bike, the McLaren and Brixton’s transforming bike sliding under the semis, and Brixton flying through the double-decker bus,” he says. “These beats contained full-CG vehicles and characters for the most part. There was some background DMP [digital matte-painting] work to help the location look more like London. There were also a few shots of motion graphics where we see Brixton’s digital HUD through his helmet visor.”

As Lashley notes, it was important for the CG work to blend in with the surrounding practical stunt photography. “The McLaren itself had to hold up very close to the camera; it has a very distinctive look to its coating, which had to match perfectly,” he adds. “The bike transformation was a welcome challenge. There was a period of experimentation to figure out the mechanics of all the small moving parts while achieving something that looked cool at the same time.”

As exciting and complex as the McLaren scene is, Lashley believes the cave fight sequence following the helicopter/tractor trailer crash was perhaps even more of a difficult undertaking, as it had a particular VFX challenge in terms of the super slow-motion punches. The action takes place at a rock-filled waterfall location — a multi-story set on a 30,000-square-foot soundstage — where the three main characters battle it out. The film’s final sequence is a seamless blend of CG and live footage.

Stuart Lashley

“David [Leitch] had the idea that this epic final fight should be underscored by these very stylized, powerful impact moments, where you see all this water explode in very graphic ways,” explains Lashley. “The challenge came in finding the right balance between physics-based water simulation and creative stylization. We went through a lot of iterations of different looks before landing on something David and Dan [Glass] felt struck the right balance.”

The DNeg teams used a unified pipeline for their work, which includes Autodesk’s Maya for modeling, animation and the majority of cloth and hair sims; Foundry’s Mari for texturing; Isotropix’s Clarisse for lighting and rendering; Foundry’s Nuke for compositing; and SideFX’s Houdini for effects work, such as explosions, dust clouds, particulates and fire.

With expectations running high for Hobbs & Shaw, filmmakers and VFX artists once more delivered, putting audiences on the edge of their seats with jaw-dropping VFX work that shifted the franchise’s action into overdrive yet again. “We hope people have as much fun watching the result as we had making it. This was really an exercise in pushing everything to the max,” says Lashley, “often putting the physics book to one side for a bit and picking up the Fast & Furious manual instead.”

Sextuplets

When actor/comedian/screenwriter/film producer Marlon Wayans signed on to play the lead in the Netflix original movie Sextuplets, he was committing to a role requiring an extensive acting range. That’s because he was filling not one but seven different lead roles in the same film.

In Sextuplets, directed by Michael Tiddes, Wayans plays soon-to-be father Alan, who hopes to uncover information about his family history before his child’s arrival and sets out to locate his birth mother. Imagine Alan’s surprise when he finds out that he is one of a set of “identical” sextuplets! Nevertheless, his siblings are about as unique as they come.

There’s Russell, the nerdy, overweight introvert and the only sibling not given up by their mother, with whom he lived until her recent passing. Ethan, meanwhile, is the embodiment of a 1970s pimp. Dawn is an exotic dancer who is in jail. Baby Pete is on his deathbed and needs a kidney. Jaspar is a villain reminiscent of Austin Powers’ Dr. Evil. Okay, that is six characters, all played by Wayans. Who is the seventh? (Spoiler alert: Wayans also plays their mother, who was simply on vacation and not actually dead as Russell had claimed.)

There are over 1,100 VFX shots in the movie. None, really, involved the transformation of the actor into the various characters — that was done using prosthetics, makeup, wigs and so forth, with slight digital touch-ups as needed. Instead, the majority of the effects work resulted from shooting with a motion-controlled camera and then compositing two (or more) of the siblings together in a shot. For Baby Pete, the artists also had to do a head replacement, comp’ing Wayans onto the body of a much smaller actor.

“We used quite a few visual effects techniques to pull off the movie. At the heart was motion control [which enables precise control and repetition of camera movement], which allowed us to put multiple characters played by Marlon together in the scenes,” says Tiddes, who has worked with Wayans on multiple projects in the past, including A Haunted House.

The majority of shots involving the siblings were done on stage, filmed on bluescreen with a TechnoDolly for the motion control, as it is too impractical to fit the large rig inside an actual house for filming. “The goal was to find locations that had the exterior I liked [for those scenes] and then build the interior on set,” says Tiddes. “This gave me the versatility to move walls and use the TechnoDolly to create multiple layers so we could then add multiple characters into the same scene and interact together.”

According to Tiddes, the team approached exterior shots similarly to interior ones, with the added challenge of shooting the duplicate moments at the same time each day to get consistent lighting. “Don Burgess, the DP, was amazing in that sense. He was able to create almost exactly the same lighting elements from day to day,” he notes.

Michael Tiddes

So, whenever there was a scene with multiple Wayans characters, it would be filmed on back-to-back days, once for each character. Tiddes usually started with Alan, the straight man, to set the pace for the scene, using body doubles for the other characters. Next, the director would work out the shot with the motion control until the timing, composition and so forth were perfected. Then he would hit the Record button on the motion-control device, and the camera would repeat the same exact move as many times as needed. The next day, the shot was replicated with the next character: the camera moved automatically, and Wayans had to hit the same marks at the same moments established on the first day.

“Then we’d do it again on the third day with another character. It’s kind of like building layers in Photoshop, and in the end, we would composite all those layers on top of each other for the final version,” explains Tiddes.

When one character passed in front of another, the shot had to be rotoscoped. Oftentimes a small bluescreen was set up on stage to allow for easier rotoscoping.

Image Engine was the main visual effects vendor on the film, with Bryan Jones serving as visual effects supervisor. The rotoscoping was done using a mix of SilhouetteFX’s Silhouette and Foundry’s Nuke, while compositing was mainly done using Nuke and Autodesk’s Flame.
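
None of the actual comp scripts were published, of course, but the layering Tiddes describes maps naturally onto a stack of “over” merges in Nuke, with a rotoscoped matte limiting whichever pass crosses in front. The short Python sketch below, written against Nuke’s scripting API, is only an illustration of that idea; the function name and file paths are placeholders.

import nuke

def stack_moco_passes(plate_paths):
    """Read each motion-control pass and merge the plates 'over' one another."""
    reads = [nuke.nodes.Read(file=path) for path in plate_paths]
    result = reads[0]  # base pass, e.g. the straight-man plate from day one
    for read in reads[1:]:
        merge = nuke.nodes.Merge2(operation="over")
        merge.setInput(0, result)  # B input: everything layered so far
        merge.setInput(1, read)    # A input: the next day's character pass
        result = merge
    # In practice, a roto matte would be copied into the alpha of any pass
    # whose character crosses in front of another before that pass is merged.
    return result

# Hypothetical usage with placeholder, frame-padded paths:
# stack_moco_passes(["/shots/sc010/alan_v01.####.exr",
#                    "/shots/sc010/russell_v01.####.exr"])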

Make no mistake … using the motion-controlled camera was not without challenges. “When you attack a scene, traditionally you can come in and figure out the blocking on the day [of the shoot],” says Tiddes. “With this movie, I had to previsualize all the blocking because once I put the TechnoDolly in a spot on the set, it could not move for the duration of time we shot in that location. It’s a large 13-foot crane with pieces of track that are 10 feet long and 4 feet wide.”

In fact, one of the main reasons Tiddes wanted to do the film was because of the visual effects challenges it presented. In past films where an actor played multiple characters in a scene, usually one character is on one side of the screen and the other character is on the other side, and a basic split-screen technique would have been used. “For me to do this film, I wanted to visually do it like no one else has ever done it, and that was accomplished by creating camera movement,” he explains. “I didn’t want to be constrained to only split-screen lock-off camera shots that would lack energy and movement. I wanted the freedom to block scenes organically, allowing the characters the flexibility to move through the room, with the opportunity to cross each other and interact together physically. By using motion control, by being able to re-create the same camera movement and then composite the characters into the scene, I was able to develop a different visual style than previous films and create a heightened sense of interactivity and interaction between two or multiple characters on the screen while simultaneously creating dynamic movement with the camera and invoking energy into the scene.”

At times, Gregg Wayans, Marlon’s nephew, served as his body double. He even appears in a very wide shot as one of the siblings, although that occurred only once. “At the end of the day, when the concept of the movie is about Marlon playing multiple characters, the perfectionist in me wanted Marlon to portray every single moment of these characters on screen, even when the character is in the background and out of focus,” says Tiddes. “Because there is only one Marlon Wayans, and no one can replicate what he does physically and comedically in the moment.”

Tiddes knew he would be challenged going into the project, but the process was definitely more complicated than he had initially expected — even with his VFX editorial background. “I had a really good starting point as far as conceptually knowing how to execute motion control. But, it’s not until you get into the moment and start working with the actors that you really understand and digest exactly how to pull off the comedic timing needed for the jokes with the visual effects,” he says. “That is very difficult, and every situation is unique. There was a learning curve, but we picked it up quickly, and I had a great team.”

A system was established that worked for Tiddes and Burgess, as well as Wayans, who had to execute and hit certain marks and look at proper eyelines with precise timing. “He has an earwig, and I am talking to him, letting him know where to look, when to look,” says Tiddes. “At the same time, he’s also hearing dialogue that he’s done the day before in his ear, and he’s reacting to that dialogue while giving his current character’s lines in the moment. So, there’s quite a bit going on, and it all becomes more complex when you add the character and camera moving through the scene. After weeks of practice, in one of the final scenes with Jaspar, we were able to do 16 motion-controlled moments in that scene alone, which was a lot!”

At the very end of the film, the group tested its limits and had all six characters (mom and all the siblings, with the exception of Alan) gathered around a table. That scene was shot over a span of five days. “The camera booms down from a sign and pans across the party, landing on all six characters around a table. Getting that motion and allowing the camera to flow through the party onto all six of them seamlessly interacting around the table was a goal of mine throughout the project,” Tiddes says.

Other shots that proved especially difficult were those of Baby Pete in the hospital room, since the entire scene involved Wayans playing three additional characters who are also present: Alan, Russell and Dawn. And then they amped things up with the head replacement on Baby Pete. “I had to shoot the scene and then, on the same day, select the take I would use in the final cut of the movie, rather than select it in post, where traditionally I could pick another take if that one was not working,” Tiddes adds. “I had to set the pace on the first day and work things out with Marlon ahead of time and plan for the subsequent days — What’s Dawn going to say? How is Russell going to react to what Dawn says? You have to really visualize and previsualize all the ad-libbing that was going on and work it out right there in the moment and discuss it, to have kind of a loose plan, then move forward and be confident that you have enough time between lines to allow room for growth when a joke just comes out of nowhere. You don’t want to stifle that joke.”

While the majority of effects involved motion control, there is a scene that contains a good amount of traditional effects work. In it, Alan and Russell park their car in a field to rest for the night, only to awake the next morning to find they have inadvertently provoked a bull, which sees red, literally — both from Alan’s jacket and his shiny car. Artists built the bull in CG. (They used Maya and SideFX’s Houdini to build the 3D elements and rendered them in Autodesk’s Arnold.) Physical effects were then used to lift the actual car to simulate the digital bull slamming into the vehicle. In some shots of the bull crashing into the car doors, a 3D car was used to show the doors being damaged.

In another scene, Russell and Alan catch a serious amount of air when they crash through a barn, desperately trying to escape the bull. “I thought it would be hilarious if, in that moment, cereal exploded and individual pieces flew wildly through the car, while [the cereal-obsessed] Russell scooped up one of the cereal pieces mid-air with his tongue for a quick snack,” says Tiddes. To do this, “I wanted to create a zero-gravity slow-motion moment. We shot the scene using a [Vision Research] high-speed Phantom camera at 480fps. Then in post, we created the cereal as a CG element so I could control how every piece moved in the scene. It’s one of my favorite VFX/comedy moments in the movie.”

As Tiddes points out, Sextuplets was the first project on which he used motion control, which let him create motion with the camera and still have the characters interact, giving the subconscious feeling they were actually in the room with one another. “That’s what made the comedy shine,” he says.


Karen Moltenbrey is a veteran writer/editor covering VFX and post production.

Mavericks VFX provides effects for Hulu’s The Handmaid’s Tale

By Randi Altman

Season 3 episodes of Hulu’s The Handmaid’s Tale are available for streaming, and if you had any illusions that things would lighten up a bit for June (Elizabeth Moss) and the ladies of Gilead, I’m sorry to say you will be disappointed. What’s not disappointing is that, in addition to the amazing acting and storylines, the show’s visual effects once again play a heavy role.

Brendan Taylor

Toronto’s Mavericks VFX has created visual effects for all three seasons of the show, based on Margaret Atwood’s dystopian view of the not-too-distant future. Its work has earned two Emmy nominations.

We recently reached out to Mavericks’ founder and visual effects supervisor, Brendan Taylor, to talk about the new season and his workflow.

How early did you get involved in each season? What sort of input did you have regarding the shots?
The Handmaid’s Tale production is great because they involve us as early as possible. Back in Season 2, when we had to do the Fenway Park scene, for example, we were in talks in August but didn’t shoot until November. For this season, they called us in August for the big fire sequence in Episode 1, and the scene was shot in December.

There’s a lot of nice leadup and planning that goes into it. Our opinions are sought, and we’re able to provide input on the best methodology to use to achieve a shot. Showrunner Bruce Miller, along with the directors, has an idea of how he’d like to see it, and they’re great at taking in our recommendations. It’s very collaborative, and we all approach the process with “what’s best for the show” in mind.

What are some things that the showrunners asked of you in terms of VFX? How did they describe what they wanted?
Each person has a different approach. Bruce speaks in story terms, providing a broader sense of what he’s looking for. He gave us the overarching direction of where he wants to go with the season. Mike Barker, who directed a lot of the big episodes, speaks in more specific terms. He really gets into the details, determining the moods of the scene and communicating how each part should feel.

What types of effects did you provide? Can you give examples?
Some standout effects were the CG smoke in the burning fire sequence and the aftermath of the house being burned down. For the smoke, we had to make it snake around corners in a believable yet magical way. We had a lot of fire going on set, and because of its size we couldn’t have any actors or stunt people near it, so we had to line up multiple shots and composite them together to make everything look realistic. We then had to recreate the whole house in 3D to show the aftermath of the fire, with the house completely burned down.

We also went to Washington, and since we obviously couldn’t destroy the Lincoln Memorial, we recreated it all in 3D. That was a lot of back and forth between Bruce, the director and our team. Different parts of Lincoln being chipped away mean different things, and Bruce definitely wanted the head to be off. It was really fun because we got to provide a lot of suggestions. On top of that, we also had to create CGI handmaids and all the details that come with them. We had to get the robes right and did cloth simulation to match what was shot on set. There were about a hundred handmaids on set, but we had to make it look like there were thousands.

Were you able to reuse assets from last season for this one?
We were able to reuse the handmaid asset from last season, but it needed a lot of upgrades. Because there were closer shots of the handmaids this season, we had to tweak it and make sure little things like the textures, shaders and different cloth simulations were right.

Were you on set? How did that help?
Yes, I was on set, especially for the fire sequences. We spent a lot of time talking about what’s possible and testing different ways to make it happen. We want it to be as perfect as possible, so I had to make sure it was all done properly from the start. We sent another visual effects supervisor, Leo Bovell, down to Washington to supervise out there as well.

Can you talk about a scene or scenes where being on set played a part in doing something either practical or knowing you could do it in CG?
The fire sequence with the smoke going around the corner took a lot of on-set collaboration. We had tried doing it practically, but the smoke was moving too fast for what we wanted, and there was no way we could physically slow it down.

Having the special effects coordinator, John MacGillivray, there to give us real smoke that we could then match to was invaluable. In most cases on this show, very few audibles were called. They want to go into the show knowing exactly what to expect, so we were prepared and ready.

Can you talk about turnaround time? Typically, series have short ones. How did that affect how you worked?
The average turnaround time was eight weeks. We began discussions in August, before shooting, and had to deliver by January. We worked with Mike to simplify things without diminishing the impact. We just wanted to make sure we had the chance to do it well given the time we had. Mike was very receptive in asking what we needed to do to make it the best it could be in the timeframe that we had. Take the fire sequence, for example. We could have done full-CGI fire, but that would have taken six months. So we did our research and testing to find the most efficient way to merge practical effects with CGI and presented the best version in a shorter period of time.

What tools were used?
We used Foundry Nuke for compositing. We used Autodesk Maya to build all the 3D houses, including the burned-down house, and to destroy the Lincoln Memorial. Then we used Side Effects Houdini to do all the simulations, which can range from the smoke and fire to crowd and cloth.

Is there a shot that you are most proud of or that was very challenging?
The shot where we reveal the crowd over June when we’re in Washington was incredibly challenging. The actual Lincoln Memorial, where we shot, is an active public park, so we couldn’t prevent people from visiting the site. The most we could do was hold them off for a few minutes. We ended up having to clean out all of the tourists, which is difficult with a moving camera and moving people. We had to reconstruct about 50% of the plate. Then, in order to get the CG people standing there, we had to create a replica of the ground they’re standing on in CG. There were some models we got from the US Geological Survey, but they didn’t completely line up, so we had to make a lot of decisions on the fly.

The cloth simulation in that scene was perfect. We had to match the dampening and the movement of all the robes. Stephen Wagner, who is our effects lead on it, nailed it. It looked perfect, and it was really exciting to see it all come together. It looked seamless, and when you saw it in the show, nobody believed that the foreground handmaids were all CG. We’re very proud.

What other projects are you working on?
We’re working on a movie called Queen & Slim by Melina Matsoukas with Universal. It’s really great. We’re also doing YouTube Premium’s Impulse and Netflix’s series Madam C.J. Walker.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

VFX in Series: The Man in the High Castle, Westworld

By Karen Moltenbrey

The look of television changed forever starting in the 1990s as computer graphics technology began to mature to the point where it could be incorporated within television productions. Indeed, the applications initially were minor, but soon audiences were witnessing very complicated work on the small screen. Today, we see a wide range of visual effects being used in television series, from minor wire and sign removal to all-CG characters and complete CG environments — pretty much anything and everything to augment the action and story, or to turn a soundstage or location into a specific locale that could be miles away or even non-existent.

Here, we examine two prime examples in which a wide range of visual effects sets the stage and propels the action for a pair of series with very distinctive settings. The Man in the High Castle uses effects to turn back the clock to the 1960s, but also to create an alternate reality for the period, turning the familiar on its head. In Westworld, effects create a unique Wild West of the future. In both series, VFX also help turn up the volume on very creative storylines.

The Man in the High Castle

What would life in the US be like if the Axis powers had defeated the Allied forces during World War II? The Amazon TV series The Man in the High Castle explores that alternate history scenario. Created by Frank Spotnitz and produced by Amazon Studios, Scott Free Productions, Headline Pictures, Electric Shepherd Productions and Big Light Productions, the series is scheduled to start its fourth and final season in mid-November. The story is based on the book by Philip K. Dick.

High Castle begins in the early 1960s in a dystopian America. Nazi Germany and the Empire of Japan have divvied up the US as their spoils of war. Germany rules the East, known as the Greater Nazi Reich (with New York City as the regional capital), while Japan controls the West, known as the Japanese Pacific States (whose capital is now San Francisco). The Rocky Mountains serve as the Neutral Zone. The American Resistance works to thwart the occupiers, spurred on after the discovery of materials displaying an alternate reality where the Allies were victorious, making them ponder this scenario.

With this unique storyline, visual effects artists were tasked with turning back the clock on present-day locations to the ’60s and then turning them into German- and Japanese-dominated and inspired environments. Starting with Season 2, the main studio filling this role has been Barnstorm Visual Effects (Los Angeles, Vancouver). Barnstorm operated as one of the vendors for Season 1, but has since ramped up its crew from a dozen to around 70 to take on the additional work. (Barnstorm also works on CBS All Access shows such as The Good Fight and Strange Angel, in addition to Get Shorty, Outlander and the HBO series Room 104 and Silicon Valley.)

According to Barnstorm co-owner and VFX supervisor Lawson Deming, the studio is responsible for all types of effects for the series, ranging from simple cleanups and fixes, such as removing modern objects from shots, to more extensive period work involving the addition of period set pieces and set extensions. There are also flashback scenes that call for the artists to digitally de-age the actors, lots of military vehicles to add and science-fiction objects to create. The majority of the overall work entails CG set extensions and world creation, Deming explains: “That involves matte paintings and CG vehicles and buildings.”

The number of visual effects shots per episode also varies greatly, depending on the story line; there are an average of 60 VFX shots an episode, with each season encompassing 10 episodes. Currently the team is working on Season 4. A core group of eight to 10 CG artists and 12 to 18 compositors work on the show at any given time.

For Season 3, released last October, there are a number of scenes that take place in Reich-occupied New York City. Although it was possible to go to NYC and photograph buildings for reference, the city has changed significantly since the 1960s, “even notwithstanding the fact that this is an alternate-history 1960s,” says Deming. “There would have been a lot of work required to remove modern-day elements from shots, particularly at the street level of buildings where modern-day shops are located, even if it was a building from the 1940s, ’50s or ’60s. The whole main floor would have needed to be replaced.”

So, in many cases, the team found it more prudent to create set extensions for NYC from scratch. The artists created sections of Fifth and Sixth avenues, both for the area where American-born Reichmarshall and Resistance investigator John Smith has his apartment and for a parade sequence that occurs in the middle of Season 3. They also constructed a digital version of Central Park for that sequence, which involved crafting a lot of modular buildings with mix-and-match pieces and stories to produce what looked like a wide variety of period-accurate buildings, with matte paintings for the backgrounds. Elements such as fire escapes and various types of windows (some with curtains open, some closed) helped randomize the structures, and shaders for brick, stucco, wood and so forth further enabled the artists to get a lot of usage from relatively few assets.
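
To illustrate the kind of mix-and-match logic described above, here is a minimal, purely hypothetical Python sketch of modular facade assembly; the piece names, probabilities and counts are invented for illustration and are not Barnstorm’s actual asset library or tools.

import random

# Hypothetical modular pieces; a real library would reference actual geometry assets.
WINDOW_STYLES = ["open_curtain", "closed_curtain", "blinds"]
FACADE_SHADERS = ["brick", "stucco", "wood"]

def build_facade(num_stories, bays_per_story, seed=None):
    """Assemble one pseudo-random period facade from a small set of modular pieces."""
    rng = random.Random(seed)
    return {
        "shader": rng.choice(FACADE_SHADERS),
        "fire_escape": rng.random() < 0.4,  # some buildings get a fire escape, most don't
        "stories": [
            [rng.choice(WINDOW_STYLES) for _ in range(bays_per_story)]
            for _ in range(num_stories)
        ],
    }

# One block of avenue: a handful of assets yields many apparent variations.
block = [build_facade(stories, bays_per_story=4, seed=i)
         for i, stories in enumerate([5, 8, 6, 12])]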

“That was a large undertaking, particularly because in a lot of those scenes, we also had crowd duplication, crowd systems, tiling and so on to create everything that was there,” Deming explains. “So even though it’s just a city and there’s nothing necessarily fantastical about it, it was almost fully created digitally.”

The styles of NYC and San Francisco are very different in the series narrative. The Nazis are rebuilding NYC in their own image, so there is a lot of influence from brutalist architecture, and cranes often dot the skyline to emphasize all the construction taking place. Meanwhile, San Francisco has more of a 1940s look, as the Japanese are less interested in imposing architectural change than they are in occupation.

“We weren’t trying to create a science-fiction world because we wanted to be sure that what was there would be believable and sell the realistic feel of the story. So, we didn’t want to go too far in what we created. We wanted it to feel familiar enough, though, that you could believe this was really happening,” says Deming.

One of the standout episodes for visual effects is “Jahr Null” (Season 3, Episode 10), which has been nominated for a 2019 Emmy in the Outstanding Special Visual Effects category. It entails the destruction of the Statue of Liberty, which crashes into the water, requiring just about every tool available at Barnstorm. “Prior to [the upcoming] Season 4, our biggest technical challenge was the Statue of Liberty destruction. There were just so many moving parts, literally and figuratively,” says Deming. “So many things had to occur in the narrative – the Nazis had this sense of showmanship, so they filmed their events and there was this constant stream of propaganda and publicity they had created.”

Ferries full of spectators gather to watch the event, spotlights are trained on the statue, and an air show with music precedes the destruction as planes trailing colored smoke fly toward the statue. When the planes fire their missiles at the base of the statue, it’s for show: a ring of explosives planted in the base goes off to force the collapse. Deming explains the logistics challenge: “We wanted the statue’s torch arm to break off and sink in the water, but the statue sits too far back. We had to manufacture a way for the statue to not just tip over, but to sort of slide down the rubble of the base so it would be close enough to the edge and the arm would snap off against the side of the island.”

The destruction simulation, including the explosions, fire, water and so forth, was handled primarily in Side Effects Houdini. Because there was so much sim work, a good deal of the effects work for the entire sequence was done in Houdini as well. Lighting and rendering for the scene were done in Autodesk’s Arnold.

Barnstorm also used Blender, an open-source 3D program for modeling and asset creation, for a small portion of the assets in this sequence. In addition, the artists used Houdini Mantra for the water rendering, while textures and shaders were built in Adobe’s Substance Painter; later the team used Foundry’s Nuke to composite the imagery. “There was a lot of deep compositing involved in that scene because we had to have the lighting interact in three dimensions with things like the smoke simulation,” says Deming. “We had a bunch of simulations stacked on top of one another that created a lot of data to work with.”
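
As a rough illustration of the deep-compositing idea Deming describes, the following sketch shows how deep renders might be combined in Nuke’s Python API before being flattened for 2D compositing; the file paths and node graph are hypothetical and not Barnstorm’s actual script.

import nuke

# Hypothetical deep EXR sequences: the statue destruction and the smoke simulation.
statue = nuke.nodes.DeepRead(file="renders/statue_destruction.%04d.exr")
smoke = nuke.nodes.DeepRead(file="renders/smoke_sim.%04d.exr")

# Combine the deep samples so the elements hold out and interact in depth.
merge = nuke.nodes.DeepMerge()
merge.setInput(0, statue)
merge.setInput(1, smoke)

# Flatten the merged deep data into a regular 2D image for the rest of the comp.
flat = nuke.nodes.DeepToImage()
flat.setInput(0, merge)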

The artists referenced historical photographs as they designed and built the statue with a period-accurate torch. In the wide aerial shots, the team used some stock footage of the statue with New York City in the background, but had to replace pretty much everything in the shot, shortening the city buildings and replacing Liberty Island, the water surrounding it and the vessels in the water. “So yeah, it ended up being a fully digital model throughout the sequence,” says Deming.

Deming cannot discuss the effects work coming up in Season 4, but he does note that Season 3 contained a lot of digital NYC. This included a sequence wherein John Smith was installed as the Reichmarshall near Central Park, a scene that comprised a digital NYC and digital crowd duplication. On the other side of the country, the team built digital versions of all the ships in San Francisco harbor, including CG builds of period Japanese battleships retrofitted with more modern equipment. Water simulations rounded out the scene.

In another sequence, the Japanese performed nuclear testing in Monument Valley, blowing the caps off the mesas. For that, the artists used reference photos to build the landscape and then created a digital simulation of a nuclear blast.

In addition, there were a multitude of banners on the various buildings. Because of the provocative nature of some of the Nazi flags and Fascist propaganda, solid-color banners were often hung on location, with artists adding the offending imagery in post so as not to upset locals where the series was filmed. Other times, the VFX artists added all-digital signage to the scenes.

As Deming points out, there is only so much that can be created through production design and costumes. Some of the big things have to be done with visual effects. “There are large world events in the show that happen and large settings that we’re not able to re-create any other way. So, the visual effects are integral to the process of creating the aesthetic world of the show,” he adds. “We’re creating things that, while visually impressive, also feel authentic, like a world that could really exist. That’s where the power and the horror of this world come from.”

High Castle is up for a total of three Emmy awards later this month. It was nominated for three Emmys in 2017 for Season 2 and four in 2016 for Season 1, taking home two Emmys that year: one for Outstanding Cinematography for a Single-Camera Series and another for Outstanding Title Design.

Westworld

What happens when high tech meets the Wild West, and wealthy patrons can indulge their fantasies with no limits? That is the premise of the Emmy-winning HBO series Westworld from creators Jonathan Nolan and Lisa Joy, who executive produce along with J.J. Abrams, Athena Wickham, Richard J. Lewis, Ben Stephenson and Denise Thé.

Westworld is set in the fictitious western theme park called Westworld, one of multiple parks where advanced technology enables the use of lifelike android hosts to cater to the whims of guests who are able to pay for such services — all without repercussions, as the hosts are programmed not to retaliate or harm the guests. After each role-play cycle, the host’s memory is erased, and then the cycle begins anew until eventually the host is either decommissioned or used in a different narrative. Staffers are situated out of sight while overseeing park operations and performing repairs on the hosts as necessary. As you can imagine, guests often play out the darkest of desires. So, what happens if some of the hosts retain their memories and begin to develop emotions? What if some escape from the park? What occurs in the other themed parks?

The series debuted in October 2016, with Season 2 running from April through June of 2018. Production on Season 3 began this past spring, and the new season is planned for release in 2020.

The first two seasons were shot in various locations in California, as well as in Castle Valley near Moab, Utah. Multiple vendors provide the visual effects, including the team at CoSA VFX (North Hollywood, Vancouver and Atlanta), which has been with the show since the pilot, working closely with Westworld VFX supervisor Jay Worth. CoSA worked with Worth in the past on other series, including Fringe, Undercovers and Person of Interest.

The number of VFX shots per episode varies, depending on the storyline, and that means the number of shots CoSA is responsible for varies widely as well. For instance, the facility did approximately 360 shots for Season 1 and more than 200 for Season 2. The studio is unable to discuss its work at this time on the upcoming Season 3.

The type of effects work CoSA has done on Westworld varies as well, ranging from concept art out of the studio’s concept department to extension work from its environments department. “Our CG team is quite large, so we handle every task from modeling and texturing to rigging, animation and effects,” says Laura Barbera, head of 3D at CoSA. “We’ve created some seamless digital doubles for the show that even I forget are CG! We’ve done crowd duplication, for which we did a fun shoot where we dressed up in period costumes. Our 2D department is also sizable, and they do everything from roto, to comp and creative 2D solutions, to difficult greenscreen elements. We even have a graphics department that did some wonderful shots for Season 2, including holograms and custom interfaces.”

On the 3D side, the studio’s pipeline is built mainly around Autodesk’s Maya and Side Effects Houdini, along with Adobe’s Substance, Foundry’s Mari and Pixologic’s ZBrush. Maxon’s Cinema 4D and Interactive Data Visualization’s SpeedTree vegetation modeler are also used. On the 2D side, the artists employ Foundry’s Nuke and the Adobe suite, including After Effects and Photoshop; rendering is done in Chaos Group’s V-Ray and Redshift.

Of course, there have been some recurring effects each season, such as the host “twitches and glitches.” And while some of the same locations have been revisited, the CoSA artists have had to modify the environments to fit with the changing timeline of the story.

“Every season sees us getting more and more into the characters and their stories, so it’s been important for us to develop along with it. We’ve had to make our worlds more immersive so that we are feeling out the new and changing surroundings just like the characters are,” Barbera explains. “So the set work gets more complex and the realism gets even more heightened, ensuring that our VFX become even more seamless.”

At center stage have been the park locations, which are rooted in existing terrain, as there is a good deal of location shooting for the series. The challenge for CoSA then becomes how to enhance that terrain and make nature seem even fuller and more impressive, while still subtly hinting at the changes in the story, says Barbera. For instance, the studio did a significant amount of work to the Skirball Cultural Center locale in LA for the outdoor environment of Delos, which owns and operates the parks. “It’s now sitting atop a tall mesa instead of overlooking the 405!” she notes. The team also added elements to the abandoned Hawthorne Plaza mall to depict the sublevels of the Delos complex. They’re constantly creating and extending the environments in locations inside and out of the park, including the town of Pariah, a particularly lawless area.

“We’ve created beautiful additions to the outdoor sets. I feel sometimes like we’re looking at a John Ford film, where you don’t realize how important the world around you is to the feel of the story,” Barbera says.

CoSA has done significant interior work too, creating spaces that did not exist on set “but that you’d never know weren’t there unless you saw the before and afters,” Barbera says. “It’s really very visually impressive — from futuristic set extensions, cars and [Westworld park co-creator] Arnold’s house in Season 2, it’s amazing how much we’ve done to extend the environments to make the world seem even bigger than it is on location.”

One of the larger challenges of the first two seasons came in Season 2: creating the Delos complex and, in the final episodes, building a world inside of a world, the Sublime, as well as the gateway to get there. “Creating the Sublime was a challenge because we had to reuse and yet completely change existing footage to design a new environment,” explains Barbera. “We had to find out what kind of trees and foliage would live in that environment, and then figure out how to populate it with hosts that were never in the original footage. This was another sequence where we had to get particularly creative about how to put all the elements together to make it believable.”

In the final episode of the second season, the group created environment work on the hills, pinnacles and quarry where the door to the Sublime appears. They also did an extensive rebuild of the Sublime environment, where the hosts emerge after crossing over. “In the first season, we did a great deal of work on the plateau side of Delos, as well as adding mesas into the background of other shots — where [hosts] Dolores and Teddy are — to make the multiple environments feel connected,” adds Barbera.

Aside from the environments, CoSA also did some subtle work on the robots, especially in Season 2, to make them appear as if they were becoming unhinged, hinting at a malfunction. The comp department also added eye twitches, subtle facial tics and even rapid blinks to provide a sense of uneasiness.

While Westworld’s blending of the Old West’s past and the robotic future initially may seem at thematic odds, the balance of that duality is cleverly accomplished in the filming of the series and the way it is performed, Barbera points out. “Jay Worth has a great vision for the integrated feel of the show. He established the looks for everything,” she adds.

The balance of the visual effects is equally important because it enhances the viewer experience. “There are things happening that can be so subtle but have so much impact. Much of our work on the second season was making sure that the world stayed grounded, so that the strangeness that happened with the characters and story line read as realistic,” Barbera explains. “Our job as visual effects artists is to help our professional storytelling partners tell their tales by adding details and elements that are too difficult or fantastic to accomplish live on set in the midst of production. If we’re doing our job right, you shouldn’t feel suddenly taken out of the moment because of a splashy effect. The visuals are there to supplement the story.”


Karen Moltenbrey is a veteran writer/editor covering VFX and post production.

Visual Effects Roundtable

By Randi Altman

With Siggraph 2019 in our not-too-distant rearview mirror, we thought it was a good time to reach out to visual effects experts to talk about trends. Everyone has had a bit of time to digest what they saw. Users are thinking about which new tools and technologies might help their current and future workflows. Manufacturers are thinking about how their products will incorporate these new technologies.

We provided these experts with questions relating to realtime raytracing, the use of game engines in visual effects workflows, easier ways to share files and more.

Ben Looram, partner/owner, Chapeau Studios
Chapeau Studios provides production, VFX/animation, design and creative IP development (both for digital content and technology) for all screens.

What film inspired you to work in VFX?
There was Ray Harryhausen’s film Jason and the Argonauts, which I watched on TV when I was seven. The skeleton-fighting scene has been visually burned into my memory ever since. Later in life I watched an artist compositing some tough bluescreen shots on a Quantel Henry in 1997, and I instantly knew that that was going to be in my future.

What trends have you been seeing? USD? Rendering in the cloud? What do you feel is important?
Double the content for half the cost seems to be the industry’s direction lately. This is coming from new in-house/client-direct agencies that sometimes don’t know what they don’t know … so we help guide/teach them where it’s OK to trim budgets or dedicate more funds for creative.

Are game engines affecting how you work, or how you will work in the future?
Yes. Rendering on device and all the subtle shifts in video fidelity turned our attention toward game engine technology a couple of years ago. As soon as the game engines start to look less canned and have accurate depth of field and parallax, we’ll start to integrate more of those tools into our workflow.

Right now we have a handful of projects in the forecast where we will be using realtime game engine outputs as backgrounds on set instead of shooting greenscreen.

What about realtime raytracing? How will that affect VFX and the way you work?
We just finished an R&D project with Intel’s new raytracing engine OSPRay for Siggraph. The ability to work on a massive scale with last-minute creative flexibility was my main takeaway. This will allow our team to support our clients’ swift changes in direction with ease on global launches. I see this ingredient as really exciting for our creative tech devs moving into 2020. Proof of concept iterations will become finaled faster, and we’ve seen efficiencies in lighting, render and compositing effort.

How have ML/AI affected your workflows, if at all?
None to date, but we’ve been making suggestions for new tools that will make our compositing and color correction process more efficient.

The Uncanny Valley. Where are we now?
Still uncanny. Even with well-done virtual avatar influencers on Instagram like Lil Miquela, we’re still caught with that eerie feeling of close-to-visually-correct with a “meh” filter.

Apple

Can you name some recent projects?
The Rookie’s Guide to the NFL. This was a fun hybrid project where we mixed CG character design with realtime rendering and voice activation. We created an avatar named Matthew for the NFL’s Amazon Alexa Skills store that answers your football questions in real time.

Microsoft AI: Carlsberg and Snow Leopard. We designed Microsoft’s visual language of AI on multiple campaigns.

Apple Trade In campaign: Our team concepted, shot and created an in-store video wall activation and on-all-device screen saver for Apple’s iPhone Trade In Program.

 

Mac Moore, CEO, Conductor
Conductor is a secure cloud-based platform that enables VFX, VR/AR and animation studios to seamlessly offload rendering and simulation workloads to the public cloud.

What are some of today’s VFX trends? Is cloud playing an even larger role?
Cloud is absolutely a growing trend. I think for many years the inherent complexity and perceived cost of cloud has limited adoption in VFX, but there’s been a marked acceleration in the past 12 months.

Two years ago at Siggraph, I was explaining the value of elastic compute and how it perfectly aligns with the elastic requirements that define our project-based industry; this year there was a much more pragmatic approach to cloud, and many of the people I spoke with are either using the cloud or planning to use it in the near future. Studios have seen referenceable success, both technically and financially, with cloud adoption and are now defining cloud’s role in their pipeline for fear of being left behind. Having a cloud-enabled pipeline is really a game changer; it is leveling the field and allowing artistic talent to be the differentiation, rather than the size of the studio’s wallet (and its ability to purchase a massive render farm).

How are game engines changing how VFX are done? Is this for everyone or just a select few?
Game engines for VFX have definitely attracted interest lately and show a lot of promise in certain verticals like virtual production. There’s more work to be done in terms of out-of-the-box usability, but great strides have been made in the past couple years. I also think various open source initiatives and the inherent collaboration those initiatives foster will help move VFX workflows forward.

Will realtime raytracing play a role in how your tool works?
There’s a need for managing the “last mile,” even in realtime raytracing, which is where Conductor would come in. We’ve been discussing realtime assist scenarios with a number of studios, such as pre-baking light maps and similar applications, where we’d perform some of the heavy lifting before assets are integrated in the realtime environment. There are certainly benefits on both sides, so we’ll likely land in some hybrid best practice using realtime and traditional rendering in the near future.

How do ML/AI and AR/VR play a role in your tool? Are you supporting OpenXR 1.0? What about Pixar’s USD?
Machine learning and artificial intelligence are critical for our next evolutionary phase at Conductor. To date we’ve run over 250 million core-hours on the platform, and for each of those hours, we have a wealth of anonymous metadata about render behavior, such as the software run, duration, type of machine, etc.

Conductor

For our next phase, we’re focused on delivering intelligent rendering akin to ride-share app pricing; the goal is to provide producers with an upfront cost estimate before they submit the job, so they have a fixed price that they can leverage for their bids. There is also a rich set of analytics that we can mine, and those analytics are proving invaluable for studios in the planning phase of a project. We’re working with data science experts now to help us deliver this insight to our broader customer base.
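
To make the arithmetic behind such an estimate concrete, here is a small, hypothetical Python sketch; the function, rates and shot numbers are illustrative assumptions, not Conductor’s actual pricing model or API.

def estimate_render_cost(frames, avg_core_hours_per_frame, price_per_core_hour):
    """Return (total core-hours, estimated cost) for a render job."""
    core_hours = frames * avg_core_hours_per_frame
    return core_hours, core_hours * price_per_core_hour

# Example: a 240-frame shot that historical metadata suggests averages
# 3.5 core-hours per frame, priced at $0.06 per core-hour.
hours, cost = estimate_render_cost(240, 3.5, 0.06)
print(f"{hours:.0f} core-hours, roughly ${cost:.2f}")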

The AR/VR front presents a unique challenge for cloud, due to the large size and variety of the datasets involved. The rendering of these workloads is less about compute cycles and more about scene assembly, so we’re determining how we can deliver more of a whole product for this market in particular.

OpenXR and USD are certainly helping with industry best practices and compatibility, which build recipes for repeatable success, and Conductor is collaborating on creating those guidelines for success when it comes to cloud computing with those standards.

What is next on the horizon for VFX?
Cloud, open source and realtime technologies are all disrupting VFX norms and are converging in a way that’s driving an overall democratization of the industry. Gone are the days when you need a pile of cash and a big brick-and-mortar building to house all of your tech and talent.

Streaming services and new mediums, along with a sky-high quality bar, have increased the pool of available VFX work, which is attracting new talent. Many of these new entrants are bootstrapping their businesses with cloud, standards-based approaches and geographically dispersed artistic talent.

Conductor recently became a fully virtual company for this reason. I hire based on expertise, not location, and today’s technology allows us to collaborate as if we are in the same building.

 

Aruna Inversin, creative director/VFX supervisor, Digital Domain 
Digital Domain has provided visual effects and technology for hundreds of motion pictures, commercials, video games, music videos and virtual reality experiences. It also livestreams events in 360-degree virtual reality, creates “virtual humans” for use in films and live events, and develops interactive content, among other things.

What film inspired you to work in VFX?
RoboCop in 1987. The combination of practical effects, miniatures and visual effects inspired me to start learning about what some call “The Invisible Art.”

What trends have you been seeing? What do you feel is important?
There has been a large focus on realtime rendering and virtual production and on using them to help increase the throughput and workflow of visual effects. While realtime rendering does indeed increase throughput, there is now a greater onus on filmmakers to plan their creative ideas and assets before they can be rendered. No longer is it truly post production; we are back into the realm of preproduction, using post tools and realtime tools to help define how a story is created and eventually filmed.

USD and cloud rendering are also important components, which give many different VFX facilities the ability to manage their resources effectively. Another trend that has been building and has now gained more traction is the availability of ACES and a more unified color space from the Academy. This allows quicker throughput between all facilities.

Are game engines affecting how you work or how you will work in the future?
As my primary focus is in new media and experiential entertainment at Digital Domain, I already use game engines (cinematic engines, realtime engines) for the majority of my deliverables. I also use our traditional visual effects pipeline; we have created a pipeline that flows from our traditional cinematic workflow directly into our realtime workflow, speeding up the development process of asset creation and shot creation.

What about realtime raytracing? How will that affect VFX and the way you work?
The ability to use Nvidia’s RTX and raytracing increases the physicality and realistic approximations of virtual worlds, which is really exciting for the future of cinematic storytelling in realtime narratives. I think we are just seeing the beginnings of how RTX can help VFX.

How have AR/VR and AI/ML affected your workflows, if at all?
Augmented reality has occasionally been a client deliverable for us, but we are not using it heavily in our VFX pipeline. Machine learning, on the other hand, allows us to continually improve our digital humans projects, providing quicker turnaround with higher fidelity than competitors.

The Uncanny Valley. Where are we now?
There is no more uncanny valley. We have the ability to create a digital human with the nuance expected! The only limitation is time and resources.

Can you name some recent projects?
I am currently working on a Time project but I cannot speak too much about it just yet. I am also heavily involved in creating digital humans for realtime projects for a number of game companies that wish to push the boundaries of storytelling in realtime. All these projects have a release date of 2020 or 2021.

 

Matt Allard, strategic alliances lead, M&E, Dell Precision Workstations
Dell Precision workstations feature the latest processors and graphics technology and target those working in the editing studio or at a drafting table, at the office or on location.

What are some of today’s VFX trends?
We’re seeing a number of trends in VFX at the moment — from 4K mastering from even higher-resolution acquisition formats and an increase in HDR content to game engines taking a larger role on set in VFX-heavy productions. Of course, we are also seeing rising expectations for more visual sophistication, complexity and film-level VFX, even in TV post (for example, Game of Thrones).

Will realtime raytracing play a role in how your tools work?
We expect that Dell customers will embrace realtime and hardware-accelerated raytracing as creative, cost-saving and time-saving tools. With the availability of Nvidia Quadro RTX across the Dell Precision portfolio, including on our 7000 series mobile workstations, customers can realize these benefits now to deliver better content wherever a production takes them in the world.

Large-scale studio users will not only benefit from the freedom to create the highest-quality content faster, but they’ll also likely see an overall impact on their energy consumption as they assess the move away from CPU rendering, which dominates studio data centers today. Moving toward GPU and hybrid CPU/GPU rendering approaches can offer equal or better rendering output with less energy consumption.

How are game engines changing how VFX are done? Is this for everyone or just a select few?
Game engines have made their way into VFX-intensive productions to deliver in-context views of the VFX during the practical shoot. With increasing quality driven by realtime raytracing, game engines have the potential to drive a master-quality VFX shot on set, helping to minimize the need to “fix it in post.”

What is next on the horizon for VFX?
The industry is at the beginning of a new era as artificial intelligence and machine learning techniques are brought to bear on VFX workflows. Analytical and repetitive tasks are already being targeted by major software applications to accelerate or eliminate cumbersome elements in the workflow. And as with most new technologies, it can result in improved creative output and/or cost savings. It really is an exciting time for VFX workflows!

Ongoing performance improvements to the computing infrastructure will continue to accelerate and democratize the highest-resolution workflows. Now more than ever, small shops and independents can access the computing power, tools and techniques that were previously available only to top-end studios. Additionally, virtualization techniques will allow flexible means to maximize the utilization and proliferation of workstation technology.

 

Carl Flygare, manager, Quadro Marketing, PNY
PNY provides tools for realtime raytracing, augmented reality and virtual reality with the goal of advancing VFX workflow creativity and productivity. PNY is Nvidia’s Quadro channel partner throughout North America, Latin America, Europe and India.

How will realtime raytracing play a role in workflows?
Budgets are getting tighter, timelines are contracting, and audience expectations are increasing. This sounds like a perfect storm, in the bad sense of the term, but with the right tools, it is actually an opportunity.

Realtime raytracing, based on Nvidia’s RTX technology and support from leading ISVs, enables VFX shops to fit into these new realities while delivering brilliant work. Whiteboarding a VFX workflow is a complex task, so let’s break it down by categories. In preproduction, specifically previz, realtime raytracing will let VFX artists present far more realistic and compelling concepts much earlier in the creative process than ever before.

This extends to the next phase, asset creation and character animation, in which models can incorporate essentially lifelike nuance, including fur, cloth, hair or feathers – or something else altogether! Shot layout, blocking, animation, simulation, lighting and, of course, rendering all benefit from additional iterations, nuanced design and the creative possibilities that realtime raytracing can express and realize. Even finishing, particularly compositing, can benefit. Given the applicable scope of realtime raytracing, it will essentially remake VFX workflows and overall film pipelines, and Quadro RTX series products are the go-to tools enabling this revolution.

How are game engines changing how VFX is done? Is this for everyone or just a select few?
Variety had a great article on this last May. ILM substituted realtime rendering and five 4K laser projectors for a greenscreen shot during a sequence from Solo: A Star Wars Story. This allowed the actors to perform in context — in this case, a hyperspace jump — but also allowed cinematographers to capture arresting reflections of the jump effect in the actors’ eyes. Think of it as “practical digital effects” created during shots, not added later in post. The benefits are significant enough that the entire VFX ecosystem, from high-end shops and major studios to independent producers, is using realtime production tools to rethink how movies and TV shows happen while extending its vision to realize previously unrealizable concepts or projects.

Project Sol

How do ML and AR play a role in your tool? And are you supporting OpenXR 1.0? What about Pixar’s USD?
Those are three separate but somewhat interrelated questions! ML (machine learning) and AI (artificial intelligence) can contribute by rapidly denoising raytraced images in far less time than would be required by letting a given raytracing algorithm run to conclusion. Nvidia enables AI denoising in OptiX 5.0 and is working with a broad array of leading ISVs to bring ML/AI-enhanced realtime raytracing techniques into the mainstream.

OpenXR 1.0 was released at Siggraph 2019. Nvidia (among others) is supporting this open, royalty-free and cross-platform standard for VR/AR. Nvidia is now providing VR enhancing technologies, such as variable rate shading, content adaptive shading and foveated rendering (among others), with the launch of Quadro RTX. This provides access to the best of both worlds — open standards and the most advanced GPU platform on which to build actual implementations.

Pixar and Nvidia have collaborated to make Pixar’s USD (Universal Scene Description) and Nvidia’s complementary MDL (Materials Definition Language) software open source in an effort to catalyze the rapid development of cinematic quality realtime raytracing for M&E applications.

Project Sol

What is next on the horizon for VFX?
The insatiable desire of VFX professionals, and audiences, to explore edge-of-the-envelope VFX means the industry will increasingly turn to realtime raytracing based on the actual behavior of light and real materials, increasingly sophisticated shader technology, and new mediums like VR and AR to explore new creative possibilities and entertainment experiences.

AI, specifically DNNs (deep neural networks) of various types, will automate many repetitive VFX workflow tasks, allowing creative visionaries and artists to focus on realizing formerly impossible digital storytelling techniques.

One obvious need is increasing the resolution at which VFX shots are rendered. We’re in a 4K world, but many films are still finished at 2K, primarily because of the VFX. 8K is unleashing the abilities (and changing the economics) of cinematography, so expect increasingly powerful realtime rendering solutions, such as Quadro RTX (and successor products when they come to market), along with amazing advances in AI, to allow the VFX community to innovate in tandem.

 

Chris Healer, CEO/CTO/VFX supervisor, The Molecule 
Founded in 2005, The Molecule creates bespoke VFX imagery for clients worldwide. Over 80 artists, producers, technicians and administrative support staff collaborate at its New York City and Los Angeles studios.

What film or show inspired you to work in VFX?
I have to admit, The Matrix was a big one for me.

Are game engines affecting how you work or how you will work?
Game engines are coming, but the talent pool is a challenge and the bridge is hard to cross … a realtime artist doesn’t have the same mindset as a traditional VFX artist. The last small percentage of completion on a shot can invalidate any value gained by working in a game engine.

What about realtime raytracing?
I am amazed at this technology, and as a result bought stock in Nvidia, but the software has to get there. It’s a long game, for sure!

How have AR/VR and ML/AI affected your workflows?
I think artists are thinking more about how images work and how to generate them. There is still value in a plain-old four-cornered 16:9 rectangle that you can make the most beautiful image inside of.

AR, VR, ML, etc., are not that, to be sure. I think VR got skipped over in all the hype. There’s way more to explore in VR, and that will inform AR tremendously. It is going to take a few more turns to find a real home for all this.

What trends have you been seeing? Cloud workflows? What else?
Everyone is rendering in the cloud. The biggest problem I see now is the lack of a UBL (usage-based licensing) model that is global enough to democratize it. I would love to be able to render while paying by the second or minute, at large or small scales. I would love for Houdini or Arnold to be rentable on a Satoshi level … that would be awesome! Unfortunately, each software vendor needs to provide this, which is a lot to organize.

The Uncanny Valley. Where are we now?
We saw in the recent Avengers film that Mark Ruffalo was in it. Or was he? I totally respect the Uncanny Valley, but within the complexity and context of VFX, this is not my battle. Others have to sort this one out, and I commend the artists who are working on it. Deepfake and Deeptake are amazing.

Can you name some recent projects?
We worked on Fosse/Verdon, but more recent stuff, I can’t … sorry. Let’s just say I have a lot of processors running right now.

 

Matt Bach and William George, lab technicians, Puget Systems 
Puget Systems specializes in high-performance custom-built computers — emphasizing each customer’s specific workflow.

Matt Bach

William George

What are some of today’s VFX trends?
Matt Bach: There are so many advances going on right now that it is really hard to identify specific trends. However, one of the most interesting to us is the back and forth between local and cloud rendering.

Cloud rendering has been progressing for quite a few years and is a great way to get a nice burst in rendering performance when you are in a crunch. However, there have been huge improvements in GPU-based rendering with technology like Nvidia OptiX. Because of these, you no longer have to spend a fortune to have a local render farm, and even a relatively small investment in hardware can often move the production bottleneck away from rendering to other parts of the workflow. Of course, this technology should make its way to the cloud at some point, but as long as these types of advances keep happening, the cloud is going to continue playing catch-up.

A few other trends that we are keeping our eyes on are the growing use of game engines, motion capture suits and realtime markerless facial tracking in VFX pipelines.

Realtime raytracing is becoming more prevalent in VFX. What impact does realtime raytracing have on system hardware, and what do VFX artists need to be thinking about when optimizing their systems?
William George: Most realtime raytracing requires specialized computer hardware, specifically video cards with dedicated raytracing functionality. Raytracing can be done on the CPU and/or normal video cards as well, which is what render engines have done for years, but not quickly enough for realtime applications. Nvidia is the only game in town at the moment for hardware raytracing on video cards with its RTX series.

Nvidia’s raytracing technology is available on its consumer (GeForce) and professional (Quadro) RTX lines, but which one to use depends on your specific needs. Quadro cards are specifically made for this kind of work, with higher reliability and more VRAM, which allows for the rendering of more complex scenes … but they also cost a lot more. GeForce, on the other hand, is more geared toward consumer markets, but the “bang for your buck” is incredibly high, allowing you to get several times the performance for the same cost.

In between these two is the Titan RTX, which offers very good performance and VRAM for its price, but due to its fan layout, it should only be used as a single card (or at most in pairs, if used in a computer chassis with lots of airflow).

Another thing to consider is that if you plan on using multiple GPUs (which is often the case for rendering), the size of the computer chassis itself has to be fairly large in order to fit all the cards, power supply, and additional cooling needed to keep everything going.

How are game engines changing or impacting VFX workflows?
Bach: Game engines have been used for previsualization for a while, but we are starting to see them being used further and further down the VFX pipeline. In fact, there are already several instances where renders directly captured from game engines, like Unity or Unreal, are being used in the final film or animation.

This is getting into speculation, but I believe that as the quality of what game engines can produce continues to improve, it is going to drastically shake up VFX workflows. The fact that you can make changes in real time, as well as use motion capture and facial tracking, is going to dramatically reduce the amount of time necessary to produce a highly polished final product. Game engines likely won’t completely replace more traditional rendering for quite a while (if ever), but it is going to be significant enough that I would encourage VFX artists to at least familiarize themselves with the popular engines like Unity or Unreal.

What impact do you see ML/AI and AR/VR playing for your customers?
We are seeing a lot of work being done for machine learning and AI, but a lot of it is still on the development side of things. We are starting to get a taste of what is possible with things like Deepfakes, but there is still so much that could be done. I think it is too early to really tell how this will affect VFX in the long term, but it is going to be exciting to see.

AR and VR are cool technologies, but it seems like they have yet to really take off, in part because designing for them takes a different way of thinking than traditional media, but also in part because there isn’t one major platform that’s an overwhelming standard. Hopefully, that is something that gets addressed over time, because once creative folks really get a handle on how to use the unique capabilities of AR/VR to their fullest, I think a lot of neat stories will be told.

What is next on the horizon for VFX?
Bach: The sky is really the limit due to how fast technology and techniques are changing, but I think there are two things in particular that are going to be very interesting to see how they play out.

First, we are hitting a point where ethics (“With great power comes great responsibility” and all that) is a serious concern. With how easy it is to create highly convincing Deepfakes of celebrities or other individuals, even for someone who has never used machine learning before, I believe that there is the potential for backlash from the general public. At the moment, every use of this type of technology has been for entertainment or otherwise legitimate purposes, but the potential to use it for harm is too significant to ignore.

Something else I believe we will start to see is “VFX for the masses,” similar to how video editing used to be a purely specialized skill, but now anyone with a camera can create and produce content on social platforms like YouTube. Advances in game engines, facial/body tracking for animated characters and other technologies that remove a number of skill and hardware barriers for relatively simple content are going to mean that more and more people with no formal training will take on simple VFX work. This isn’t going to impact the professional VFX industry to a significant degree, but I think it might spawn a number of interesting techniques or styles that could make their way up to the professional level.

 

Paul Ghezzo, creative director, Technicolor Visual Effects
Technicolor and its family of VFX brands provide visual effects services tailored to each project’s needs.

What film inspired you to work in VFX?
At a pretty young age, I fell in love with Star Wars: Episode IV – A New Hope and learned about the movie magic that was developed to make those incredible visuals come to life.

What trends have you been seeing? USD? Rendering in the cloud? What do you feel is important?
USD will help structure some of what we currently do, and cloud rendering is an incredible source to use when needed. I see both of them maturing and being around for years to come.

As for other trends, I see new methods of photogrammetry and HDRI photography/videography providing datasets for digital environment creation and for capturing lighting content; performance capture (smart 2D tracking and manipulation or 3D volumetric capture) for ease of performance manipulation or layout; and even post camera work. New simulation engines are creating incredible and dynamic sims in a fraction of the time, and all of this comes together on the GPU to streamline the creation of the end product. In many ways it might reinvent what can be done, but it might take a few cutting-edge shows to embrace and perfect the recipe and show its true value.

Production cameras tethered to digital environments for live set extensions are also coming of age, and with realtime rendering becoming a viable option, I can imagine that it will only be a matter of time before LED walls become the new greenscreen. Can you imagine a live-action set extension that parallaxes, distorts and is exposed in the same way as its real-life foreground? How about adding explosions, bullet hits or even an armada of spaceships landing in the BG, all on cue? I imagine this will happen in short order. Exciting times.

Are game engines affecting how you work or how you will work in the future?
Game engines have affected how we work. The speed and quality they offer are undoubtedly a game changer, but they don’t always create the desired elements and AOVs that are typically needed in TV/film production.

They are also creating a level of competition that is spurring other render engines to be competitive and provide a similar or better solution. I can imagine that our future will use Unreal/Unity engines for fast turnaround productions like previz and stylized content, as well as for visualizing virtual environments and digital sets as realtime set extensions and a lot more.

Snowfall

What about realtime raytracing? How will that affect VFX and the way you work?
GPU rendering has single-handedly changed how we render and what we render with. A handful of GPUs and a GPU-accelerated render engine can equal or surpass a CPU farm that’s several times larger and much more expensive. In VFX, iterations equal quality, and if multiple iterations can be completed in a fraction of the time — and with production time usually being finite — then GPU-accelerated rendering equates to higher quality in the time given.

There are a lot of hidden variables to that equation (change of direction, level of talent provided, work ethics, hardware/software limitations, etc.), but simply said, if you can hit the notes as fast as they are given and not have to wait hours for a render farm to churn out a product, then clearly the faster an iteration can be provided, the more iterations can be produced, allowing for a higher-quality product in the time given.
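
Ghezzo’s point about iterations can be put into simple numbers. The following sketch is illustrative arithmetic only, with made-up render and artist times rather than Technicolor figures.

def iterations_per_day(hours_available, render_hours_per_iteration, artist_hours_per_iteration):
    """How many look-dev/lighting iterations fit in a fixed working window."""
    per_iteration = render_hours_per_iteration + artist_hours_per_iteration
    return int(hours_available // per_iteration)

# Assume a 10-hour window and an hour of artist time per pass (both invented numbers).
cpu_iterations = iterations_per_day(10, render_hours_per_iteration=3.0, artist_hours_per_iteration=1.0)  # 2 passes
gpu_iterations = iterations_per_day(10, render_hours_per_iteration=0.5, artist_hours_per_iteration=1.0)  # 6 passes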

How have AR or ML affected your workflows, if at all?
ML and AR haven’t significantly affected our current workflows yet … but I believe they will very soon.

One aspect of AR/VR/MR that we occasionally use in TV/film production is previzing environments, props and vehicles, which lets everyone in production and on set/location see what the greenscreen will be replaced with, enabling greater communication and understanding among the directors, DPs, gaffers, stunt teams, SFX and talent. I can imagine that AR/VR/MR will only become more popular as a preproduction tool, allowing productions to front-load and approve all aspects of production well before the camera is loaded and the clock is running on cast and crew.
Machine learning is on the cusp of general usage, but it currently seems to be used by productions with lengthy schedules that will benefit from development teams building those toolsets. There are tasks that ML will undoubtably revolutionize, but it hasn’t affected our workflows yet.

The Uncanny Valley. Where are we now?
Making the impossible possible … That *is* what we do in VFX. Looking at everything from Digital Emily in 2011 to Thanos and Hulk in Avengers: Endgame, we’ve seen what can be done, and the Uncanny Valley will likely remain, but only on productions that can’t afford the time or cost of flawless execution.

Can you name some recent projects?
Big Little Lies, Dead to Me, NOS4A2, True Detective, Veep, This Is Us, Snowfall, The Loudest Voice, and Avengers: Endgame.

 

James Knight, virtual production director, AMD 
AMD is a semiconductor company that develops computer processors and related technologies for M&E as well as other markets. Its tools include Ryzen and Threadripper.

What are some of today’s VFX trends?
Well, certainly the exploration for “better, faster, cheaper” keeps going. Faster rendering, so our community can accomplish more iterations in a much shorter amount of time, seems to be something I’ve heard the whole time I’ve been in the business.

I’d surely say the virtual production movement (or on-set visualization) is gaining steam, finally. I work with almost all the major studios in my role, and all of them, at a minimum, have the ability to speed up post and blend it with production on their radar; many have virtual production departments.

How are game engines changing how VFX are done? Is this for everyone or just a select few?
I would say game engines are where most of the innovation comes from these days. Think about Unreal, for example. Epic created Fortnite, and the revenue from that must be astonishing, so they’re not going to sit on their hands. The feature film and TV post/VFX business benefits from gaming consumers’ demand to see higher-resolution, more photorealistic images in real time. That gets passed on to our community by eliminating guesswork on set when framing partially or completely CG shots.

It should be for everyone or most, because the realtime and post production time savings are rather large. I think many still have a personal preference for what they’re used to. And that’s not wrong, if it works for them, obviously that’s fine. I just think that even in 2019, use of game engines is still new to some … which is why it’s not completely ubiquitous.

How do ML or AR play a role in your tool? Are you supporting OpenXR 1.0? What about Pixar’s USD?
Well, it’s more the reverse. With our new Rome and Threadripper CPUs, we’re powering AR. Yes, we are supporting OpenXR 1.0.

What is next on the horizon for VFX?
Well, the demand for VFX is increasing, not the opposite, so the pursuit of faster photographic reality is perpetually in play. That’s good job security for me at a CPU/GPU company, as we have a way to go to properly bridge the Uncanny Valley completely, for example.

I’d love to say lower-cost CG is part of the future, but then look at the budgets of major features — they’re not exactly falling. The dance of Moore’s law will more than likely be in effect forever, with momentary huge leaps in compute power — like with Rome and Threadripper — drawing amazement for a period. Then, when someone sees the new, expanded size of their sandpit, they fill it and go, “I now know what I’d do if it was just a bit bigger.”

I am invested in and fascinated by the future of VFX, but I think it goes hand in hand with great storytelling. If we don’t have great stories, then directing and artistry innovations don’t properly get noticed. Look at the top 20 highest-grossing films in history … they’re all fantasy. We all want to be taken away from our daily lives and immersed in a beautiful, realistic, VFX-intensive fictional world for 90 minutes, so we’ll be forever pushing the boundaries of rigging, texturing, shading, simulations, etc. To put my finger on exactly what’s next: I happen to know of a few amazing things that are coming, but sadly, I’m not at liberty to say right now.

 

Michel Suissa, managing director of pro solutions, The Studio-B&H 
The Studio-B&H provides hands-on experience to high-end professionals. Its Technology Center is a fully operational studio with an extensive display of high-end products and state-of-the-art workflows.

What are some of today’s VFX trends?
AI, ML, NN (GAN) and realtime environments

Will realtime raytracing play a role in how the tools you provide work?
It already does with most relevant applications in the market.

How are game engines changing how VFX are done? Is this for everyone or just a select few?
Realtime game engines are becoming more mainstream with every passing year, and they are becoming fairly accessible to a number of disciplines across different target markets.

What is next on the horizon for VFX?
New pipeline architectures that will rely on different implementations (traditional and AI/ML/NN) and mixed infrastructures (local and cloud-based).

What trends have you been seeing? USD? Rendering in the cloud? What do you feel is important?
AI, ML and realtime environments. New cloud toolsets. Prominence of neural networks and GANs. Proliferation of convincing “deepfakes” as a proof of concept for the use of generative networks as resources for VFX creation.

What about realtime raytracing? How will that affect VFX workflows?
RTX is changing how most people see their work being done. It is also changing expectations about what it takes to create and render CG images.



The Uncanny Valley. Where are we now?
AI and machine learning will help us get there. Perfection still remains too costly. The amount of time and resources required to create something convincing is prohibitive for the large majority of budgets.

 

Marc Côté, CEO, Real by Fake 
Real by Fake services include preproduction planning, visual effects, post production and tax-incentive financing.

What film or show inspired you to work in VFX?
George Lucas’ Star Wars and Indiana Jones (Raiders of the Lost Ark). For Star Wars, I was a kid and I saw this movie. It brought me to another universe. Star Wars was so inspiring even though I was too young to understand what the movie was about. The robots in the desert and the spaceships flying around. It looked real; it looked great. I was like, “Wow, this is amazing.”

Indiana Jones because it was a great adventure; we really visit the worlds. I was super-impressed by the action, by the way it was done. It was mostly practical effects, not really visual effects. Later on I realized that in Star Wars, they were using robots (motion control systems) to shoot the spaceships. And as a kid, I was very interested in robots. And I said, “Wow, this is great!” So I thought maybe I could use my skills and what I love and combine it with film. So that’s the way it started.

What trends have you been seeing? What do you feel is important?
The trend right now is using realtime rendering engines. It’s coming on pretty strong. The game companies who build engines like Unity or Unreal are offering a good product.

It’s a bit of a hack to use these tools for rendering or in production at this point. They’re great for previz, and they’re great for generating realtime environments and realtime playback. But having the capacity to change or modify imagery with the director during the process of finishing is still not easy. But it’s a very promising trend.

Rendering in the cloud gives you a very rapid capacity, but I think it’s very expensive. You also have to download and upload 4K images, so you need a very big internet pipe. So I still believe in local rendering — either with CPUs or GPUs. But cloud rendering can be useful for very tight deadlines or for small companies that want to achieve something that’s impossible to do with the infrastructure they have.

My hope is that AI will minimize repetition in visual effects. For example, in keying. We key multiple sections of the body, but we get keying errors in plotting or transparency or in the edges, and they are all a bit different, so you have to use multiple keys. AI would be useful to define which key you need to use for every section and do it automatically and in parallel. AI could be an amazing tool to be able to make objects disappear by just selecting them.

Pixar’s USD is interesting. The question is: Will the industry take it as a standard? It’s like anything else. Kodak invented DPX, and it became the standard over time. Now we are using EXR. We have different software, and having exchange between them will be great. We’ll see. We have FBX, which is a really good standard right now; it started out as Filmbox, built by Kaydara in Montreal, and later ended up at Autodesk. So we’ll see. The demand, and the companies who build the software, will determine whether it gets adopted or not. A big company like Pixar has the advantage that other companies will use what it uses.

The last trend is remote access. The internet is now allowing us to connect cross-country, like from LA to Montreal or Atlanta. We have a sophisticated remote infrastructure, and we do very high-quality remote sessions with artists who work from disparate locations. It’s very secure and very seamless.

What about realtime raytracing? How will that affect VFX and the way you work?
I think we have pretty good raytracing compared to what we had two years ago. I think it’s a question of performance, and of making it user-friendly in the application so it’s easy to light with natural lighting. To not have to fake the rebounds so you can get two or three rebounds. I think it’s coming along very well and quickly.

Sharp Objects

So what about things like AI/ML or AR/VR? Have those things changed anything in the way movies and TV shows are being made?
My feeling right now is that we are getting into an era where I don’t think you’ll have enough visual effects companies to cover the demand.

Every show has visual effects. It can be a complete character, like a Transformer, or a movie from the Marvel Universe where the entire film is CG. Or it can be the huge number of invisible effects that are starting to appear in virtually every show. You need capacity to get all this done.

AI can help minimize repetition so artists can work more on the art and what is being created. This will accelerate things and give us the capacity to respond to what’s being demanded of us. They want a faster, cheaper product, and they want the quality to be as high as a movie’s.

The only scenario where we are looking at using AR is when we are filming. For example, you need to have a good camera track in real time, and then you want to be able to quickly add a CGI environment around the actors so the director can make the right decision in terms of the background or interactive characters who are in the scene. The actors will not see it until they have a monitor or a pair of glasses or something to be able to give them the result.

So AR is a tool to be able to make faster decisions when you’re on set shooting. This is what we’ve been working on for a long time: bringing post production and preproduction together. To have an engineering department who designs and conceptualizes and creates everything that needs to be done before shooting.

The Uncanny Valley. Where are we now?
In terms of the environment, I think we’re pretty much there. We can create an environment that nobody will know is fake. Respectfully, I think our company Real by Fake is pretty good at doing it.

In terms of characters, I think we’re still not there. I think the game industry is helping a lot to push this. I think we’re on the verge of having characters look as close as possible to live actors, but if you’re in a closeup, it still feels fake. For mid-ground and long shots, it’s fine. You can make sure nobody will know. But I don’t think we’ve crossed the valley just yet.

Can you name some recent projects?
Big Little Lies and Sharp Objects for HBO, Black Summer for Netflix and Brian Banks, an indie feature.

 

Jeremy Smith, CTO, Jellyfish Pictures
Jellyfish Pictures provides a range of services including VFX for feature film, high-end TV and episodic animated kids’ TV series and visual development for projects spanning multiple genres.

What film or show inspired you to work in VFX?
Forrest Gump really opened my eyes to how VFX could support filmmaking. Seeing Tom Hanks interact with historic footage (e.g., John F. Kennedy) was something that really grabbed my attention, and I remember thinking, “Wow … that is really cool.”

What trends have you been seeing? What do you feel is important?
The use of cloud technology is really empowering “digital transformation” within the animation and VFX industry. The result of this is that there are new opportunities that simply wouldn’t have been possible otherwise.

Jellyfish Pictures uses burst rendering into the cloud, extending our capacity and enabling us to take on more work. In addition to cloud rendering, Jellyfish Pictures were early adopters of virtual workstations, and, especially after Siggraph this year, it is apparent that this is the future for VFX and animation.

Virtual workstations promote a flexible and scalable way of working, with global reach for talent. This is incredibly important for studios to remain competitive in today’s market. As well as the cloud, formats such as USD are making it easier to exchange data with others, allowing us to work in a more collaborative environment.

It’s important for the industry to pay attention to these, and similar, trends, as they will have a massive impact on how productions are carried out going forward.

Are game engines affecting how you work, or how you will work in the future?
Game engines are offering ways to enhance certain parts of the workflow. We see a lot of value in the previz stage of the production. This allows artists to iterate very quickly and helps move shots onto the next stage of production.

What about realtime raytracing? How will that affect VFX and the way you work?
The realtime raytracing from Nvidia (as well as GPU compute in general) offers artists a new way to iterate and create content. However, with recent advancements in CPU compute, we can see that “traditional” workloads aren’t going to be displaced. The RTX solution is another tool that can be used to assist in the creation of content.

How have AR/VR and ML/AI affected your workflows, if at all?
Machine learning has the power to really assist certain workloads. For example, it’s possible to use machine learning to assist a video editor by cataloging speech in a certain clip. When a director says, “find the spot where the actor says ‘X,’” we can go directly to that point in time on the timeline.

 In addition, ML can be used to mine existing file servers that contain vast amounts of unstructured data. When mining this “dark data,” an organization may find a lot of great additional value in the existing content, which machine learning can uncover.
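To make that timeline lookup concrete, here is a minimal sketch, and not Jellyfish’s actual tooling: once any speech-to-text pass has produced word-level timestamps for a clip, “find the spot where the actor says X” becomes a simple search over those timestamps. The transcript data and function names below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float  # seconds from the head of the clip

# A tiny, made-up transcript; a real one would come from a speech-to-text service.
transcript = [
    Word("we", 12.1), Word("should", 12.3), Word("head", 12.6),
    Word("north", 12.9), Word("at", 13.2), Word("dawn", 13.4),
]

def find_phrase(words, phrase):
    """Return the time (in seconds) where the phrase starts, or None if absent."""
    tokens = phrase.lower().split()
    texts = [w.text.lower() for w in words]
    for i in range(len(texts) - len(tokens) + 1):
        if texts[i:i + len(tokens)] == tokens:
            return words[i].start
    return None

print(find_phrase(transcript, "head north"))  # 12.6 -> jump the edit timeline here
```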

The Uncanny Valley. Where are we now?
With recent advancements in technology, the Uncanny Valley is closing; however, it is still there. We see more digital humans in cinema than ever before (Peter Cushing in Rogue One: A Star Wars Story was a main character), and I fully expect to see more advances as time goes on.

Can you name some recent projects?
Our latest credits include Solo: A Star Wars Story, Captive State, The Innocents, Black Mirror, Dennis & Gnasher: Unleashed! and Floogals Seasons 1 through 3.

 

Andy Brown, creative director, Jogger 
Jogger Studios is a boutique visual effects studio with offices in London, New York and LA. With capabilities in color grading, compositing and animation, Jogger works on a variety of projects, from TV commercials and music videos to projections for live concerts.

What inspired you to work in VFX?
First of all, my sixth form English project was writing treatments for music videos to songs that I really liked. You could do anything you wanted to for this project, and I wanted to create pictures using words. I never actually made any of them, but it planted the seed of working with visual images. Soon after that I went to university in Birmingham in the UK. I studied communications and cultural studies there, and as part of the course, we visited the BBC Studios at Pebble Mill. We visited one of the new edit suites, where they were putting together a story on the inquiry into the Handsworth riots in Birmingham. It struck me how these two people, the journalist and the editor, could shape the story and tell it however they saw fit. That’s what got me interested on a critical level in the editorial process. The practical interest in putting pictures together developed from that experience and all the opportunities that opened up when I started work at MPC after leaving university.

What trends have you been seeing? What do you feel is important?
Remote workstations and cloud rendering are all really interesting. It’s giving us more opportunities to work with clients across the world using our resources in LA, SF, Austin, NYC and London. I love the concept of a centralized remote machine room that runs all of your software for all of your offices and allows you scaled rendering in an efficient and seamless manner. The key part of that sentence is seamless. We’re doing remote grading and editing across our offices so we can share resources and personnel, giving the clients the best experience that we can without the carbon footprint.

Are game engines affecting how you work or how you will work in the future?
Game engines are having a tremendous effect on the entire media and entertainment industry, from conception to delivery. Walking around Siggraph last month, seeing what was not only possible but practical and available today using gaming engines, was fascinating. It’s hard to predict industry trends, but the technology felt like it will change everything. The possibilities on set look great, too, so I’m sure it will mean a merging of production and post production in many instances.

What about realtime raytracing? How will that affect VFX and the way you work?
Faster workflows and less time waiting for something to render have got to be good news. It gives you more time to experiment and refine things.

Chico for Wendy’s

How have AR/VR or ML/AI affected your workflows, if at all?
Machine learning is making its way into new software releases, and the tools are useful. Anything that makes it easier to get where you need to go on a shot is welcome. AR, not so much. I viewed the new Mac Pro sitting on my kitchen work surface through my phone the other day, but it didn’t make me want to buy it any more or less. It feels more like something that we can take technology from rather than something that I want to see in my work.

I’d like 3D camera tracking and facial tracking to be realtime on my box, for example. That would be a huge time-saver in set extensions and beauty work. Anything that makes getting a perfect key easier would also be great.

The Uncanny Valley. Where are we now?
It always used to be “Don’t believe anything you read.” Now it’s, “Don’t believe anything you see.” I used to struggle to see the point of an artificial human, except for resurrecting dead actors, but now I realize the ultimate aim is suppression of the human race and the destruction of democracy by multimillionaire despots and their robot underlings.

Can you name some recent projects?
I’ve started prepping for the apocalypse, so it’s hard to remember individual jobs, but there’s been the usual kind of stuff — beauty, set extensions, fast food, Muppets, greenscreen, squirrels, adding logos, removing logos, titles, grading, finishing, versioning, removing rigs, Frankensteining, animating, removing weeds, cleaning runways, making tenders into wings, split screens, roto, grading, polishing cars, removing camera reflections, stabilizing, tracking, adding seatbelts, moving seatbelts, adding photos, removing pictures and building petrol stations. You know, the usual.

 

James David Hattin, founder/creative director, VFX Legion 
Based in Burbank and British Columbia, VFX Legion specializes in providing episodic shows and feature films with an efficient approach to creating high-quality visual effects.

What film or show inspired you to work in VFX?
Star Wars was my ultimate source of inspiration for doing visual effects. Many of the effects in the movies didn’t make sense to me as a six-year-old, but I knew that this was the next best thing to magic. Visual effects create a wondrous world where everyday people can become superheroes, leaders of a resistance or rulers of a 5th-century dynasty. Watching X-wings flying over the surface of a space station the size of a small moon was exquisite. I also learned, much later on, that the visual effects that we couldn’t see were as important as what we could see.

I had already been steeped in visual effects with Star Trek — phasers, spaceships and futuristic transporters. Models held from wires on a moon base convinced me that we could survive on the moon as it broke free from orbit. All of this fueled my budding imagination. Exploring computer technology and creating alternate realities, CGI and digitally enhanced solutions have been my passion for over a quarter of a century.

What trends have you been seeing? What do you feel is important?
More and more of the work is going to happen inside a cloud structure. That is definitely something that is being pressed on very heavily by the tech giants like Google and Amazon that rule our world. There is no Moore’s law for computers anymore. The price and power we see out of computers are almost plateauing. The technology is now in the world of optimizing algorithms or rendering with video cards. It’s about getting bigger, better effects out more efficiently. Some companies are opting to run their entire operations in the cloud or co-located server locations. This can theoretically free up the workers to be in different locations around the world, provided they have solid, low-latency, high-speed internet.

When Legion was founded in 2013, the best way around cloud costs was to have on-premises servers and workstations that supported global connectivity. It was a cost control issue that has benefitted the company to this day, enabling us to bring a global collective of artists and clients into our fold in a controlled and secure way. Legion works in what we consider a “private cloud,” eschewing the costs of egress from large providers and working directly with on-premises solutions.

Are game engines affecting how you work or how you will work in the future?
Game engines are perfect for previsualization in large, involved scenes. We create a lot of environments and invisible effects. For the larger bluescreen shoots, we can build out our sets in Unreal Engine, previsualizing how the scene will play for the director or DP. This helps get everyone on the same page when it comes to how a particular sequence is going to be filmed. It’s a technique that also helps the CG team focus on adding details to the areas of a set that we know will be seen. When the schedule is tight, the assets are camera-ready by the time the cut comes to us.

What about realtime raytracing via Nvidia’s RTX? How will that affect VFX and the way you work?
The type of visual effects that we create for feature films and television shows involves a lot of layers and technology that provides efficient, comprehensive compositing solutions. Many of the video card rendering engines like Octanerender, Redshift and V-Ray RT are limited when it comes to what they can create with layers. They often have issues with getting what is called a “back to beauty,” in which the sum of the render passes equals the final render. However, the workarounds we’ve developed enable us to achieve the quality we need. Realtime raytracing introduces a fantastic technology that will someday be an ideal fit for our needs. We’re keeping an eye out for it as it evolves and becomes more robust.
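As a rough illustration of that “back to beauty” idea, and not a description of VFX Legion’s pipeline or of any particular renderer, the additive light AOVs of a render should sum back to the beauty pass; the pass names, image sizes and tolerance below are assumptions for the example.

```python
import numpy as np

def back_to_beauty(aovs):
    """Rebuild the beauty image by summing the additive light passes (AOVs)."""
    return np.sum(list(aovs.values()), axis=0)

def decomposition_holds(beauty, aovs, tolerance=1e-3):
    """True if the summed AOVs match the rendered beauty within the tolerance."""
    return np.max(np.abs(back_to_beauty(aovs) - beauty)) < tolerance

# Synthetic 4x4 RGB passes stand in for real renders.
rng = np.random.default_rng(0)
passes = {name: rng.random((4, 4, 3))
          for name in ("diffuse", "specular", "transmission", "emission")}
beauty = back_to_beauty(passes)              # a renderer would output this directly
print(decomposition_holds(beauty, passes))   # True only when the passes are purely additive
```

When a renderer’s passes don’t reconstruct the beauty this way, compositors can’t grade individual passes and trust the recombined result, which is the limitation described above.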

How have AR/VR or ML/AI affected your workflows, if at all?
AR has been in the wings of the industry for a while. There’s nothing specific that we would take advantage of. Machine learning has been introduced a number of times to solve various problems. It’s a pretty exciting time for these things. One of our partner contacts, who left to join Facebook, was keen to try a number of machine learning tricks for a couple of projects that might have come through, but we didn’t get to put it through the test. There’s an enormous amount of power to be had in machine learning, and I think we are going to see big changes over the next five years in that field and how it affects all of post production.

The Uncanny Valley. Where are we now?
Climbing up the other side, not quite at the summit for daily use. As long as the character isn’t a full normal human, it’s almost indistinguishable from reality.

Can you name some recent projects?
We create visual effects on an ongoing basis for a variety of television shows that include How to Get Away with Murder, DC’s Legends of Tomorrow, Madam Secretary and The Food That Built America. Our team is also called upon to craft VFX for a mix of movies, from the groundbreaking feature film Hardcore Henry to recently released films such as Ma, SuperFly and After.

MAIN IMAGE: Good Morning Football via Chapeau Studios.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Whiskytree experiences growth, upgrades tools

Visual effects and content creation company Whiskytree has gone through a growth spurt that included a substantial increase in staff, a new physical space and new infrastructure.

Providing content for films, television, the Web, apps, games and VR or AR, Whiskytree’s team of artists, designers and technicians use applications such as Autodesk Maya, Side Effects Houdini, Autodesk Arnold, Gaffer and Foundry Nuke on Linux — along with custom tools — to create computer graphics and visual effects.

To help manage its growth and the increase in data that came with it, Whiskytree recently installed Panasas ActiveStor. The platform is used to store and manage Whiskytree’s computer graphics and visual effects workflows, including data-intensive rendering and realtime collaboration using extremely large data sets for movies, commercials and advertising; work for realtime render engines and games; and augmented reality and virtual reality applications.

“We recently tripled our employee count in a single month while simultaneously finalizing the build-out of our new facility and network infrastructure, all while working on a 700-shot feature film project [The Captain],” says Jonathan Harb, chief executive officer and owner of Whiskytree. “Panasas not only delivered the scalable performance that we required during this critical period, but also delivered a high level of support and expertise. This allowed us to add artists at the rapid pace we needed with an easy-to-work-with solution that didn’t require fine-tuning to maintain and improve our workflow and capacity in an uninterrupted fashion. We literally moved from our old location on a Friday, then began work in our new facility the following Monday morning, with no production downtime. The company’s ‘set it and forget it’ appliance resulted in overall smooth operations, even under the trying circumstances.”

In the past, Whiskytree operated a multi-vendor storage solution that was complex and time consuming to administer, modify and troubleshoot. With the office relocation and rapid team expansion, Whiskytree didn’t have time to build a new custom solution or spend a lot of time tuning. It also needed storage that would grow as project and facility needs change.

Projects from the studio include Thor: Ragnarok, Monster Hunt 2, Bolden, Mother, Star Wars: The Last Jedi, Downsizing, Warcraft and Rogue One: A Star Wars Story.

Tips from a Flame Artist: things to do before embarking on a VFX project

By Andy Brown

I’m creative director and Flame artist at Jogger Studios in Los Angeles. We are a VFX and finishing studio and sister company to Cut+Run, which has offices in LA, New York, London, San Francisco and Austin. As an experienced visual effects artist, I’ve seen a lot in my time in the industry, and not just what ends up on the screen. I’m also an Englishman living in LA.

I was asked to put together some tips to help make your next project a little bit easier, but in the process, I remembered many things I forgot. I hope these tips help!

1) Talk to production.

2) Trust your producers.

3) Don’t assume anyone (including you) knows anything.

4) Forget about the money; it’s not your job. Well, it’s kind of your job, but in the context of doing the work, it’s not.

5) Read everything that you’ve been sent, then read it again. Make sure you actually understand what is being asked of you.

6) Make a list of questions that cover any uncertainty you might have about any aspect of the project you’re bidding for. Then ask those questions.

7) Ask production to talk to you if they have any questions. It’s better to get interrupted on your weekend off than for the client to ask her friend Bob, who makes videos for YouTube. To be fair to Bob, he might have a million subscribers, but Bob isn’t doing the job, so please, keep Bob out of it.

8) Remember that what the client thinks is “a small amount of cleanup” isn’t necessarily a small amount of cleanup.

9) Bring your experience to the table. Even if it’s your experience in how not to do things.

10) If you can do some tests, then do some tests. Not only will you learn something about how you’re going to approach the problem, but it will show your client that you’re engaged with the project.

11) Ask about the deliverables. How many aspect ratios? How many versions? Then factor in the slated, the unslated and the generics and take a deep breath.

12) Don’t believe that a lift (a cutdown edit) is a lift is a lift. It won’t be a lift.

13) Make sure you have enough hours in your bid for what you’re being asked to do. The hours are more important than the money.

14) Attend the shoot. If you can’t attend the shoot, then send someone to the shoot … someone who knows about VFX. And don’t be afraid to pipe up on the shoot; that’s what you’re there for. Be prepared to make suggestions on set about little things that will make the VFX go more smoothly.

15) Give yourself time. Don’t get too frustrated that you haven’t got everything perfect in the first day.

16) Tackle things methodically.

17) Get organized.

18) Make a list.

19) Those last three were all the same thing, but that’s because it’s important.

20) Try to remember everyone’s names. Write them down. If you can’t remember, ask.

21) Sit up straight.

22) Be positive. You blew that already by being too English.

23) Remember we all want to get the best result that we can.

24) Forget about the money again. It’s not your job.

25) Work hard and don’t get pissed off if someone doesn’t like what you’ve done so far. You’ll get there. You always do.

26) Always send WIPs to the editor. Not only do they appreciate it, but they can add useful info along the way.

27) Double-check the audio.

28) Double-check for black lines at the edges of frame. There’s no cutoff anymore. Everything lives on the internet.

29) Check your spelling. Even if you spelled it right, it might be wrong. Colour. Realise. Etcetera. Etc.

 

Boris FX beefs up film VFX arsenal, buys SilhouetteFX, Digital Film Tools

Boris FX, a provider of integrated VFX and workflow solutions for video and film, has bought SilhouetteFX (SFX) and Digital Film Tools (DFT). The two companies have a long history of developing tools used on Hollywood blockbusters and experience collaborating with top VFX studios, including Weta Digital, Framestore, Technicolor and Deluxe.

This is the third acquisition by Boris FX in recent years — Imagineer Systems (2014) and GenArts (2016) — and builds upon the company’s editing, visual effects, and motion graphics solutions used by post pros working in film and television. Silhouette and Digital Film Tools join Boris FX’s tools Sapphire, Continuum and Mocha Pro.

Silhouette’s groundbreaking non-destructive paint and advanced rotoscoping technology was recognized earlier this year by the Academy of Motion Picture Arts and Sciences with a Technical Achievement Award. It first gained prominence after Weta Digital used the rotoscoping tools on King Kong (2005). Now the full-fledged GPU-accelerated node-based compositing app features over 100 VFX nodes and integrated Boris FX Mocha planar tracking. Over the last 15 years, feature film artists have used Silhouette on films including Avatar (2009), The Hobbit (2012), Wonder Woman (2017), Avengers: Endgame (2019) and Fast & Furious Presents: Hobbs & Shaw (2019).

Avengers: Endgame courtesy of Marvel

Digital Film Tools (DFT) emerged as an offshoot of an LA-based motion picture visual effects facility whose work included hundreds of feature films, commercials and television shows.

The Digital Film Tools portfolio includes standalone applications as well as professional plug-in collections for filmmakers, editors, colorists and photographers. The products offer hundreds of realistic filters for optical camera simulation, specialized lenses, film stocks and grain, lens flares, optical lab processes, color correction, keying and compositing, as well as natural light and photographic effects. DFT plug-ins support Adobe’s Photoshop, Lightroom, After Effects and Premiere Pro; Apple’s Final Cut Pro X and Motion; Avid’s Media Composer; and OFX hosts, including Foundry Nuke and Blackmagic DaVinci Resolve.

“This acquisition is a natural next step to our continued growth strategy and singular focus on delivering the most powerful VFX tools and plug-ins to the content creation market,” says Boris Yamnitsky, CEO/founder of Boris FX. “Silhouette fits perfectly into our product line with superior paint and advanced roto tools that highly complement Mocha’s core strength in planar tracking and object removal. Rotoscoping, paint, digital makeup and stereo conversion are some of the most time-consuming, labor-intensive aspects of feature film post. Sharing technology and tools across all our products will make Silhouette even stronger as the leader in these tasks. Furthermore, we are very excited to be working with such an accomplished team [at DFT] and look forward to collaborating on new product offerings for photography, film and video.”

Silhouette founders Marco Paolini, Paul Miller and Peter Moyer will continue in their current leadership roles and partner with the Mocha product development team to collaborate on delivering next-generation tools. “By joining forces with Boris FX, we are not only dramatically expanding our team’s capabilities, but we are also joining a group of like-minded film industry pros to provide the best solutions and support to our customers,” says Marco Paolini, product designer. “The Mocha planar tracking option we currently license is extremely popular with Silhouette paint and roto artists, and more recently, through OFX, we’ve added support for Sapphire plug-ins. Working together under the Boris FX umbrella is our next logical step, and we are excited to add new features and continue advancing Silhouette for our user base.”

Both Silhouette and the Digital Film Tools plug-ins will continue to be developed and sold under the Boris FX brand. Silhouette will adopt the Boris FX commitment to agile development with annual releases, annual support and subscription options.

Main Image: Silhouette

Game of Thrones’ Emmy-nominated visual effects

By Iain Blair

Once upon a time, only glamorous movies could afford the time and money it took to create truly imaginative and spectacular visual effects. Meanwhile, television shows either tried to avoid them altogether or had to rely on hand-me-downs. But the digital revolution changed all that, with technological advances and new tools quickly leveling the playing field. Today, television is giving the movies a run for their money when it comes to sophisticated visual effects, as evidenced by HBO’s blockbuster series Game of Thrones.

Mohsen Mousavi

This fantasy series was recently Emmy-nominated a record-busting 32 times for its eighth and final season — including one for its visually ambitious VFX in the penultimate episode, “The Bells.”

The epic mass destruction presented Scanline’s VFX supervisor, Mohsen Mousavi, and his team with many challenges. But his expertise in high-end visual effects, and his reputation for constant innovation in advanced methodology, made him a perfect fit to oversee Scanline’s VFX for the crucial last three episodes of the final season of Game of Thrones.

Mousavi started his VFX career in the field of artificial intelligence and advanced physics-based simulations. He spearheaded the design and development of many proprietary toolsets and pipelines for crowd, fluid and rigid-body simulation, including FluidIT, BehaveIT and CardIT, a node-based crowd choreography toolset.

Prior to joining Scanline VFX Vancouver, Mousavi rose through the ranks of top visual effects houses, working in jobs that ranged from lead effects technical director to CG supervisor and, ultimately, VFX supervisor. He’s been involved in such high-profile projects as Hugo, The Amazing Spider-Man and Sucker Punch.

In 2012, he began working with Scanline, acting as digital effects supervisor on 300: Rise of an Empire, for which Scanline handled almost 700 water-based sea battle shots. He then served as VFX supervisor on San Andreas, helping develop the company’s proprietary city-generation software. That software and pipeline were further developed and enhanced for scenes of destruction in director Roland Emmerich’s Independence Day: Resurgence. In 2017, he served as the lead VFX supervisor for Scanline on the Warner Bros. shark thriller, The Meg.

I spoke with Mousavi about creating the VFX and their pipeline.

Congratulations on being Emmy-nominated for “The Bells,” which showcased so many impressive VFX. How did all your work on Season 4 prepare you for the big finale?
We were heavily involved in the finale of Season 4; however, the scope was far smaller. What we learned was the collaboration and the nature of the show, and what the expectations were in terms of the quality of the work and what HBO wanted.

You were brought onto the project by lead VFX supervisor Joe Bauer, correct?
Right. Joe was the “client VFX supervisor” on the HBO side and was involved since Season 3. Together with my producer, Marcus Goodwin, we also worked closely with HBO’s lead visual effects producer, Steve Kullback, who I’d worked with before on a different show and in a different capacity. We all had daily sessions and conversations, a lot of back and forth, and Joe would review the entire work, give us feedback and manage everything between us and other vendors, like Weta, Image Engine and Pixomondo. This was done both technically and creatively, so no one stepped on each other’s toes if we were sharing a shot and assets. But it was so well-planned that there wasn’t much overlap.

[Editor’s Note: Here is the full list of those nominated for their VFX work on Game of Thrones — Joe Bauer, lead visual effects supervisor; Steve Kullback, lead visual effects producer; Adam Chazen, visual effects associate producer; Sam Conway, special effects supervisor; Mohsen Mousavi, visual effects supervisor; Martin Hill, visual effects supervisor; Ted Rae, visual effects plate supervisor; Patrick Tiberius Gehlen, previz lead; and Thomas Schelesny, visual effects and animation supervisor.]

What were you tasked with doing on Season 8?
We were involved as one of the lead vendors on the last three episodes and covered a variety of sequences. In episode four, “The Last of the Starks,” we worked on the confrontation between Daenerys and Cersei in front of the King’s Landing’s gate, which included a full CG environment of the city gate and the landscape around it, as well as Missandei’s death sequence, which featured a full CG Missandei. We also did the animated Drogon outside the gate while the negotiations took place.

Then for “The Bells” we were responsible for most of the Battle of King’s Landing, which included the full digital city, Daenerys’ army camp site outside the walls of King’s Landing, the gathering of soldiers in front of the King’s Landing walls, Dany’s attack on the scorpions, the city gate, streets and the Red Keep, which had some very close-up set extensions, close-up fire and destruction simulations and full CG crowds of various factions — armies and civilians. We also did the iconic Cleganebowl fight between The Hound and The Mountain and Jaime Lannister’s fight with Euron at the beach underneath the Red Keep. In Episode 5, we received raw animation caches of the dragon from Image Engine and did the full look-dev, lighting and rendering of the final dragon in our composites.

For the final episode, “The Iron Throne,” we were responsible for the entire Daenerys speech sequence, which included a full 360-degree digital environment of the city aftermath and the Red Keep plaza filled with digital Unsullied, Dothraki and CG horses, leading into the majestic confrontation between Jon and Drogon, where the dragon reveals itself from underneath a huge pile of snow outside the Red Keep. We were also responsible for the iconic throne melt sequence, which included some advanced simulation of highly viscous fluid and destruction of the area around the throne, finishing the dramatic sequence with Drogon carrying Dany out of the throne room and away from King’s Landing into the unknown.

Where was all this work done?
The majority of the work was done here in Vancouver, which is the biggest Scanline office. Additionally we had teams working in our Munich, Montreal and LA offices. We’re a 100% connected company, all working under the same infrastructure in the same pipeline. So if I work with the team in Munich, it’s like they’re sitting in the next room. That allows us to set up and attack the project with a larger crew and get the benefit of the 24/7 scenario; as we go home, they can continue working, and it makes us far more productive.

How many VFX did you have to create for the final season?
We worked on over 600 shots across the final three episodes, which gave us over an hour of screen time of high-end, consistent visual effects.

Isn’t that hour length unusual for 600 shots?
Yes, but we had a number of shots that were really long, including some ground coverage shots of Arya in the streets of King’s Landing that were over four or five minutes long. So we had the complexity along with the long duration.

How many people were on your team?
At the height, we had about 350 artists on the project, and we began in March 2018 and didn’t wrap till nearly the end of April 2019 — so it took us over a year of very intense work.

Tell us about the pipeline specific to Game of Thrones.
Scanline has an industry-wide reputation for delivering very complex, full CG environments combined with complex simulation scenarios of all sorts of fluid dynamics and destruction, based on our simulation framework, Flowline. We had a high-end digital character and hero creature pipeline that gave the final three episodes a boost up front. What was new were the additions to our procedural city-generation pipeline for the recreation of King’s Landing, making sure it could deliver both in wide-angle shots and in extreme close-up set extensions.

How did you do that?
We used a framework we developed for Independence Day: Resurgence, a module-based procedural city-generation system that leverages some incredible scans of the historical city of Dubrovnik as the blueprint and foundation for King’s Landing. Instead of doing the modeling conventionally, you model a lot of small modules, kind of like Lego blocks. You create various windows, stones, doors, shingles and so on, and once they’re encoded in the system, you can semi-automatically generate variations of buildings on the fly. The same goes for texturing. We had procedurally generated layers of façade textures, which gave us a lot of flexibility in texturing the entire city, with full control over the level of aging and damage. We could easily decide to make a block look older without going back to square one. That’s how we could create King’s Landing with its hundreds of thousands of unique buildings.

The same technology was applied to the aftermath of the city in Episode 6. We took the intact King’s Landing and ran a number of procedural collapsing simulations on the buildings to get the correct weight based on references from the bombed city of Dresden during WWII, and then we added procedurally created CG snow on the entire city.
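Purely as an illustration of that “Lego block” idea, and not Scanline’s actual toolset, a module-based generator can be sketched as a small module library plus seeded random assembly; the module names, counts and the aging parameter below are assumptions for the example.

```python
import random

# Hypothetical module library: small, reusable building pieces.
MODULES = {
    "window": ["arched", "square", "shuttered"],
    "door":   ["wooden", "iron", "grand"],
    "roof":   ["shingle", "slate", "thatch"],
}

def generate_building(floors, bays, aging, seed):
    """Assemble one building variation by picking modules per floor and bay."""
    rng = random.Random(seed)
    return {
        "floors": [[rng.choice(MODULES["window"]) for _ in range(bays)]
                   for _ in range(floors)],
        "door": rng.choice(MODULES["door"]),
        "roof": rng.choice(MODULES["roof"]),
        "aging": aging,  # would drive procedurally blended facade-texture layers
    }

def generate_block(num_buildings, aging, seed):
    """Generate a city block; changing `aging` re-dresses every building at once."""
    rng = random.Random(seed)
    return [generate_building(rng.randint(2, 4), rng.randint(3, 6),
                              aging, rng.randrange(1 << 30))
            for _ in range(num_buildings)]

old_quarter = generate_block(num_buildings=12, aging=0.8, seed=42)
print(len(old_quarter), old_quarter[0]["roof"])
```

The seeded randomness is what makes the approach practical: a block can be regenerated deterministically, while a single parameter (here, aging) re-dresses it globally rather than building by building.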

It didn’t look like the usual matte paintings were used at all.
You’re right, and there were a lot of shots that normally would be done that way, but to Joe’s credit, he wanted to make sure the environments weren’t cheated in any way. That was a big challenge, to keep everything consistent and accurate. Even if we used traditional painting methods, it was all done on top of an accurate 3D representation with correct lighting and composition.

What other tools did you use?
We use Autodesk Maya for all our front-end departments, including modeling, layout, animation, rigging and creature effects, and we bridge the results to Autodesk 3ds Max, which encapsulates our look-dev/FX and rendering departments, powered by Flowline and Chaos Group’s V-Ray as our primary render engine, followed by Foundry’s Nuke as our main compositing package.

At the heart of our crowd pipeline we use Massive, and our creature department is driven by Ziva muscles, a collaboration we started with Ziva Dynamics for the creation of the hero megalodon in The Meg.

Fair to say that your work on Game of Thrones was truly cutting-edge?
Game of Thrones has pushed the limit above and beyond and has effectively erased the TV/feature line. In terms of environment and effects and the creature work, this is what you’d do for a high-end blockbuster for the big screen. No difference at all.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

FilmLight sets speakers for free Color On Stage seminar at IBC

At this year’s IBC, FilmLight will host a free two-day seminar series, Color On Stage, on September 14 and 15. The event features live presentations and discussions with colorists and other creative professionals, covering topics that range from the role of the colorist today to understanding color management and next-generation grading tools.

“Color on Stage offers a good platform to hear about real-world interaction between colorists, directors and cinematographers,” explains Alex Gascoigne, colorist at Technicolor and one of this year’s presenters. “Particularly when it comes to large studio productions, a project can take place over several months and involve a large creative team and complex collaborative workflows. This is a chance to find out about the challenges involved with big shows and demystify some of the more mysterious areas in the post process.”

This year’s IBC program includes colorists from broadcast, film and commercials, as well as DITs, editors, VFX artists and post supervisors.

Program highlights include:
•    Creating the unique look for Mindhunter Season 2
Colorist Eric Weidt will talk about his collaboration with director David Fincher — from defining the workflow to creating the look and feel of Mindhunter. He will break down scenes and run through color grading details of the masterful crime thriller.

•    Realtime collaboration on the world’s longest running continuing drama, ITV Studios’ Coronation Street
The session will address improving production processes and enhancing pictures with efficient renderless workflows, with colorist Stephen Edwards, finishing editor Tom Chittenden and head of post David Williams.

•    Looking to the future: Creating color for the TV series Black Mirror
Colorist Alex Gascoigne of Technicolor will explain the process behind grading Black Mirror, including the interactive episode Bandersnatch and the latest Season 5.

•    Bollywood: A World of Color
This session will delve into the Indian film industry with CV Rao, technical general manager at Annapurna Studios in Hyderabad. In this talk, CV will discuss grading and color as exemplified by the hit film Baahubali 2: The Conclusion.

•    Joining forces: Strengthening VFX and finishing with the BLG workflow
Mathieu Leclercq, head of post at Mikros Image in Paris, will be joined by colorist Sebastian Mingam and VFX supervisor Franck Lambertz to showcase their collaboration on recent projects.

•    Maintaining the DP’s creative looks from set to post
Meet with French DIT Karine Feuillard, ADIT — who worked on the latest Luc Besson film Anna as well as the TV series The Marvelous Mrs Maisel — and FilmLight workflow specialist Matthieu Straub.

•    New color management and creative tools to make multi-delivery easier
The latest and upcoming Baselight developments, including a host of features aimed to simplify delivery for emerging technologies such as HDR. With FilmLight’s Martin Tlaskal, Daniele Siragusano and Andy Minuth.

Color On Stage will take place in Room D201 on the second floor of the Elicium Centre (Entrance D), close to Hall 13. The event is free to attend, but spaces are limited. Registration is available here.

Rob Legato to receive HPA’s Lifetime Achievement Award 

The Hollywood Professional Association (HPA) will honor renowned visual effects supervisor and creative Robert Legato with its Lifetime Achievement Award at the HPA Awards at the Skirball Cultural Center in Los Angeles on November 21. Now in its 14th year, the HPA Awards recognize creative artistry, innovation and engineering excellence in the media content industry. The Lifetime Achievement Award honors the recipients’ dedication to the betterment of the industry.

Legato is an iconic figure in the visual effects industry with multiple Oscar, BAFTA and Visual Effects Society nominations and awards to his credit. He is a multi-hyphenate on many of his projects, serving as visual effects supervisor, VFX director of photography and second unit director. From his work with studios and directors and in his roles at Sony Pictures Imageworks and Digital Domain, he has developed a variety of digital workflows.

He has enjoyed collaborations with leading directors including James Cameron, Jon Favreau, Martin Scorsese and Robert Zemeckis. Legato’s career in VFX began in television at Paramount Pictures, where he supervised visual effects on two Star Trek series, which earned him two Emmy awards. He left Paramount to join the newly formed Digital Domain where he worked with founders James Cameron, Stan Winston and Scott Ross. He remained at Digital Domain until he segued to Sony Imageworks.

Legato began his feature VFX career on Neil Jordan’s Interview with the Vampire. He then served as VFX supervisor and DP for the VFX unit on Ron Howard’s Apollo 13, which earned him his first Academy Award nomination and a win at the BAFTAs. His work with James Cameron on Titanic earned him his first Academy Award. Legato continued to work with Cameron, conceiving and creating the virtual cinematography pipeline for Cameron’s visionary Avatar.

Legato has also enjoyed a long collaboration with Martin Scorsese that began with his consultation on Kundun and continued with the multi-award-winning film The Aviator, on which he served as co-second unit director/cameraman and VFX supervisor. Legato’s work on The Aviator won him three VES awards. He returned to work with the director on the Oscar Best Picture winner The Departed as the 2nd unit director/cameraman and VFX supervisor. Legato and Scorsese collaborated once again on Shutter Island, on which he was both VFX supervisor and 2nd unit director/cameraman. He continued on to Scorsese’s 3D film Hugo, which was nominated for 11 Oscars and 11 BAFTAs, including Best Picture and Best Visual Effects. Legato won his second Oscar for Hugo as well as three VES Awards. His collaboration with Scorsese continued with The Wolf of Wall Street as well as with non-theatrical and advertising projects such as the Clio award-winning Freixenet: The Key to Reserva, a 10-minute commercial project, and the Rolling Stones feature documentary, Shine a Light.

Legato worked with director Jon Favreau on Disney’s The Jungle Book (second unit director/cinematographer and VFX supervisor) for which he received his third Academy Award, a British Academy Award, five VES Awards, an HPA Award and the Critics’ Choice Award for Best Visual Effects for 2016. His latest film with Favreau is Disney’s The Lion King, which surpassed $1 billion in box office after fewer than three weeks in theaters.

Legato’s extensive credits include serving as VFX supervisor on Chris Columbus’ Harry Potter and the Sorcerer’s Stone, as well as on two Robert Zemeckis films, What Lies Beneath and Cast Away. He was senior VFX supervisor on Michael Bay’s Bad Boys II, which was nominated for a VES Award for Outstanding Supporting Visual Effects, and for Digital Domain he worked on Bay’s Armageddon.

Legato is a member of ASC, BAFTA, DGA, AMPAS, VES, and the Local 600 and Local 700 unions.

Shipping + Handling adds Jerry Spivack, Mike Pethel, Matthew Schwab

VFX creative director Jerry Spivack and colorists Michael Pethel and Matthew Schwab have joined LA’s Shipping + Handling, Spot Welders‘ VFX, color grading, animation, and finishing arm/sister company.

Alongside executive producer Scott Friske and current creative director Casey Price, Spivack will help lead the company’s creative team. As the creative director/co-founder at Ring of Fire, Spivack was responsible for crafting and spearheading VFX on commercials for brands including FedEx, Nike and Jaguar; episodic work for series television including Netflix’s Wormwood and 12 seasons of FX’s It’s Always Sunny in Philadelphia; promos for NBC’s The Voice and The Titan Games; and feature films such as Sony Pictures’ Spider-Man 2, Bold Films’ Drive and Warner Bros.’ The Bucket List.

Colorist Pethel was a founding partner of Company 3 and for the past five years has served client and director relationships under his BeachHouse Color brand, which he will continue to maintain. Pethel’s body of work includes campaigns for Carl’s Jr., Chase, Coke, Comcast/Xfinity, Hyundai, Jeep, Netflix and Southwest Airlines.

Commenting on the move, Pethel says, “I’m thrilled to be joining such a fantastic group of highly regarded and skilled professionals at Shipping + Handling. There is so much creativity here; the people are awesome to work with and the technology they are able to offer clientele at the facility is top-notch.”

Schwab formally joins the Shipping + Handling roster after working closely with the company over the past two years on multiple campaigns for Apple, Acura, QuickBooks and many others. Aside from his role at Shipping + Handling, Schwab will also continue his work through Roving Picture Company. Having worked with a number of internationally recognized brands, Schwab has collaborated on projects for Amazon, Honda, Mercedes-Benz, National Geographic, Netflix, Nike, PlayStation and Smirnoff.

“It’s exciting to be part of a team that approaches every project with such energy. This partnership represents a shared commitment to always deliver outstanding color and technical results for our clients,” says Schwab.

“Pethel is easily amongst the best colorists in our industry. As a longtime client of his, I have a real understanding of the professionalism he brings to every session. He is a delight in the room and wickedly talented. Schwab’s talent has just been realized in the last few years, and we are pleased to offer his skill to our clients. If our experience working with him over the last couple of years is any indication, we’re going to make a lot of clients happy he’s on our roster,” adds Friske.

Spivack, Pethel and Schwab will operate out of Shipping + Handling’s West Coast office on the creative campus it shares with its sister company, editorial post house Spot Welders.

Image: (L-R) Mike Pethel, Matthew Schwab, Jerry Spivack

 

Matthew Bristowe joins Jellyfish as COO

UK-based VFX and animation studio Jellyfish Pictures has hired Matthew Bristowe as director of operations. With a career spanning over 20 years, Bristowe joins Jellyfish Pictures after a stint as head of production at Technicolor.

During his 20 years in the industry, Bristowe has overseen hundreds of productions, including Aladdin (Disney), Star Wars: The Last Jedi (Lucasfilm/Disney), Avengers: Age of Ultron (Marvel) and Guardians of the Galaxy (Marvel). In 2014, he was honored with the Advanced Imaging Society’s Lumiere Award for his work on Alfonso Cuarón’s Academy Award-winning Gravity.

Bristowe led the One Of Us VFX team to success in the category of Special, Visual and Graphic Effects at the BAFTAs and Best Digital Effects at the Royal Television Society Awards for The Crown Season 1. Another RTS award and BAFTA nomination followed in 2018 for The Crown Season 2. Prior to working with Technicolor and One of Us, Bristowe held senior positions at MPC and Prime Focus.

“Matt joining Jellyfish Pictures is a substantial hire for the company,” explains CEO Phil Dobree. “2019 has seen us focus on our growth, following the opening of our newest studio in Sheffield, and Matt’s extensive experience of bringing together creativity and strategy will be instrumental in our further expansion.”

An artist’s view of SIGGRAPH 2019

By Andy Brown

While I’ve been lucky enough to visit NAB and IBC several times over the years, this was my first SIGGRAPH. Of course, there are similarities. There are lots of booths, lots of demos, lots of branded T-shirts, lots of pairs of black jeans and a lot of beards. I fit right in. I know we’re not all the same, but we certainly looked like it. (The stats regarding women and diversity in VFX are pretty poor, but that’s another topic.)

Andy Brown

You spend your whole career in one industry and I guess you all start to look more and more like each other. That’s partly the problem for the people selling stuff at SIGGRAPH.

There were plenty of compositing demos from all sorts of software. (Blackmagic was running a hands-on class for 20 people at a time.) I’m a Flame artist, so I think that Autodesk’s offering is best, obviously. Everyone’s compositing tool can play back large files and color correct, composite, edit, track and deliver, so in the midst of a buzzy trade show, the differences feel far fewer than the similarities.

Mocap
Take the world of tracking and motion capture as another example. There were more booths demonstrating tracking and motion capture than anything in the main hall, and all that tech came in different shapes and sizes and an interesting mix of hardware and software.

The motion capture solution required for a Hollywood movie isn’t the same as the one to create a live avatar on your phone, however. That’s where it gets interesting. There are solutions that can capture and translate the movement of everything from your fingers to your entire body using hardware from an iPhone X to a full 360-camera array. Some solutions used tracking ball markers, some used strips in the bodysuit and some used tiny proximity sensors, but the results were all really impressive.

Vicon

Some tracking solution companies had different versions of their software and hardware. If you don’t need all of the cameras and all of the accuracy, then there’s a basic version for you. But if you need everything to be perfectly tracked in real time, then go for the full-on pro version with all the bells and whistles. I had a go at live-animating a monkey using just my hands, and apart from ending with him licking a banana in a highly inappropriate manner, I think it worked pretty well.

AR/VR
AR and VR were everywhere, too. You couldn’t throw a peanut across the room without hitting someone wearing a VR headset. They’d probably be able to bat it away whilst thinking they were Joe Root or Max Muncy (I had to Google him), with the real peanut being replaced with a red or white leather projectile. Haptic feedback made a few appearances, too, so expect to be able to feel those virtual objects very soon. Some of the biggest queues were at the North stand where the company had glasses that looked like the glasses everyone was wearing already (like mine, obviously) except the glasses incorporated a head-up display. I have mixed feelings about this. Google Glass didn’t last very long for a reason, although I don’t think North’s glasses have a camera in them, which makes things feel a bit more comfortable.

Nvidia

Data
One of the central themes for me was data, data and even more data. Whether you are interested in how to capture it, store it, unravel it, play it back or distribute it, there was a stand for you. This mass of data was being managed by really intelligent components and software. I was expecting to be writing all about artificial intelligence and machine learning from the show, and it’s true that there was a lot of software that used machine learning and deep neural networks to create things that looked really cool. Environments created using simple tools looked fabulously realistic because of deep learning. Basic pen strokes could be translated into beautiful pictures because of the power of neural networks. But most of that machine learning is in the background; it’s just doing the work that needs to be done to create the images, lighting and physical reactions that go to make up convincing and realistic images.

The Experience Hall
The Experience Hall was really great because no one was trying to sell me anything. It felt much more like an art gallery than a trade show. There were long waits for some of the exhibits (although not for the golf swing improver that I tried), and it was all really fascinating. I didn’t want to take part in the experiment that recorded your retina scan and made some art out of it, because, well, you know, it’s my retina scan. I also felt a little reluctant to check out the booth that made light-based animated artwork derived from your date of birth, time of birth and location of birth. But maybe all of these worries are because I’ve just finished watching the Netflix documentary The Great Hack. I can’t help but think that a better source of the data might be something a little less sinister.

The walls of posters back in the main hall described research projects that hadn’t yet made it into full production and gave more insight into what the future might bring. It was all about refinement, creating better algorithms, creating more realistic results. These uses of deep learning and virtual reality were applied to subjects as diverse as translating verbal descriptions into character design, virtual reality therapy for post-stroke patients, relighting portraits and haptic feedback anesthesia training for dental students. The range of the projects was wide. Yet everyone started from the same place, analyzing vast datasets to give more useful results. That brings me back to where I started. We’re all the same, but we’re all different.

Main Image Credit: Mike Tosti


Andy Brown is a Flame artist and creative director of Jogger Studios, a visual effects studio with offices in Los Angeles, New York, San Francisco and London.

Autodesk intros Bifrost for Maya at SIGGRAPH

At SIGGRAPH, Autodesk announced a new visual programming environment in Maya called Bifrost, which makes it possible for 3D artists and technical directors to create serious effects quickly and easily.

“Bifrost for Maya represents a major development milestone for Autodesk, giving artists powerful tools for building feature-quality VFX quickly,” says Chris Vienneau, senior director, Maya and Media & Entertainment Collection. “With visual programming at its core, Bifrost makes it possible for TDs to build custom effects that are reusable across shows. We’re also rolling out an array of ready-to-use graphs to make it easy for artists to get 90% of the way to a finished effect fast. Ultimately, we hope Bifrost empowers Maya artists to streamline the creation of anything from smoke, fire and fuzz to high-performance particle systems.”

Bifrost highlights include:

  • Ready-to-Use Graphs: Artists can quickly create state-of-the-art effects that meet today’s quality demands.
  • One Graph: In a single visual programming graph, users can combine nodes ranging from math operations to simulations.
  • Realistic Previews: Artists can see exactly how effects will look after lighting and rendering right in the Arnold Viewport in Maya.
  • Detailed Smoke, Fire and Explosions: New physically-based solvers for aerodynamics and combustion make it easy to create natural-looking fire effects.
  • The Material Point Method: The new MPM solver helps artists tackle realistic granular, cloth and fiber simulations.
  • High-Performance Particle System: A new particle system crafted entirely using visual programming adds power and scalability to particle workflows in Maya.
  • Artistic Effects with Volumes: Bifrost comes loaded with nodes that help artists convert between meshes, points and volumes to create artistic effects.
  • Flexible Instancing: High-performance, rendering-friendly instancing empowers users to create enormous complexity in their scenes.
  • Detailed Hair, Fur and Fuzz: Artists can now model things consisting of multiple fibers (or strands) procedurally.

Bifrost is available for download now and works with any version of Maya 2018 or later. It will also be included in the installer for Maya 2019.2 and later versions. Updates to Bifrost between Maya releases will be available for download from Autodesk AREA.
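
For pipeline TDs scripting around the new toolset, a sensible first step is confirming the plugin is loaded before batch operations touch Bifrost graphs. The snippet below is a minimal sketch using Maya’s standard Python commands module; the plugin name "bifrostGraph" is an assumption that may vary by Maya version and install, so treat it as illustrative rather than Autodesk’s documented setup.

```python
# Minimal sketch (assumptions noted in the lead-in): make sure the Bifrost
# plugin is loaded before opening or batch-processing scenes that contain
# Bifrost graphs. "bifrostGraph" is the assumed plugin name.
import maya.cmds as cmds

def ensure_bifrost(plugin_name="bifrostGraph"):
    """Load the Bifrost plugin if needed and report whether it is available."""
    if not cmds.pluginInfo(plugin_name, query=True, loaded=True):
        cmds.loadPlugin(plugin_name)
    return cmds.pluginInfo(plugin_name, query=True, loaded=True)

if ensure_bifrost():
    print("Bifrost is loaded; graph-based effects can be created or evaluated.")
```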

In addition to the release of Bifrost, Autodesk highlighted the latest versions of Shotgun, Arnold, Flame and 3ds Max. The company gave a tech preview of a new secure enterprise Shotgun that supports network segregation and customer-managed media isolation on AWS, making it possible for the largest studios to collaborate in a closed-network pipeline in the cloud. Shotgun Create, now out of beta, delivers a cloud-connected desktop experience, making it easier for artists and reviewers to see which tasks demand attention while providing a collaborative environment to review media and exchange feedback accurately and efficiently. Arnold 5.4 adds important updates to the GPU renderer, including OSL and OpenVDB support, while Flame 2020.1 introduces more uses of AI with new Sky Extraction tools and specialized image segmentation features. Also on display, the 3ds Max 2020.1 update features modernized procedural tools for 3D modeling.

Maxon intros Cinema 4D R21, consolidates versions into one offering

By Brady Betzel

At SIGGRAPH 2019, Maxon introduced the next release of its graphics software, Cinema 4D R21. Maxon also announced a subscription-based pricing structure as well as a very welcomed consolidation of its Cinema 4D versions into a single version, aptly titled Cinema 4D.

That’s right, no more Studio, Broadcast or BodyPaint. It all comes in one package at one price, and that pricing will now be subscription-based — but don’t worry, the online anxiety over this change seems to have been misplaced.

The cost of Cinema 4D R21 has dropped substantially, kicking off what Maxon is calling the “3D for the Real World” initiative. Maxon wants it to be the tool you choose for your graphics needs.

If you plan on upgrading every year or two, the new subscription-based model seems to be a great deal:

– Cinema 4D subscription paid annually: $59.99/month
– Cinema 4D subscription paid monthly: $94.99/month
– Cinema 4D subscription with Redshift paid annually: $81.99/month
– Cinema 4D subscription with Redshift paid monthly: $116.99/month
– Cinema 4D perpetual pricing: $3,495 (upgradeable)

Maxon did mention that if you have previously purchased Cinema 4D, there will be subscription-based upgrade/crossgrade deals coming.

The Updates
Cinema 4D R21 includes some great updates that will be welcomed by many users, both new and experienced. The new Field Force dynamics object allows the use of dynamic forces in modeling and animation within the MoGraph toolset. Caps and bevels get an all-new system that not only allows the extrusion of 3D logos and text effects but also integrates caps and bevels on all spline-based objects.

Furthering Cinema 4D’s integration with third-party apps, there is an all-new Mixamo Control rig allowing you to easily control any Mixamo characters. (If you haven’t checked out the models from Mixamo, you should. It’s a great way to find character rigs fast.)

An all-new Intel Open Image Denoise integration has been added to R21 in what seems like part of a rendering revolution for Cinema 4D. From the acquisition of Redshift to this integration, Maxon is expanding its third-party reach and doesn’t seem scared.

There is a new Node Space, which shows what materials are compatible with chosen render engines, as well as a new API available to third-party developers that allows them to integrate render engines with the new material node system. R21 has overall speed and efficiency improvements, with Cinema 4D supporting the latest processor optimizations from both Intel and AMD.

All this being said, my favorite update — or map toward the future — was actually announced last week. Unreal Engine added Cinema 4D .c4d file support via the Datasmith plugin, which is featured in the free Unreal Studio beta.

Today, Maxon is also announcing its integration with yet another game engine: Unity. In my opinion, the future lies in this mix of real-time rendering alongside real-world television and film production as well as gaming. With Cinema 4D, Maxon is bringing all sides to the table with a mix of 3D modeling, motion-graphics-building support, motion tracking, integration with third-party apps like Adobe After Effects via Cineware, and now integration with real-time game engines like Unreal Engine. Now I just have to learn it all.

Cinema 4D R21 will be available on both Mac OS and Windows on Tuesday, Sept. 3. In the meantime, watch out for some great SIGGRAPH presentations, including one from my favorite, Mike Winkelmann, better known as Beeple. You can find some past presentations on how he uses Cinema 4D to create his “Everydays.”

Virtual Production Field Guide: Fox VFX Lab’s Glenn Derry

Just ahead of SIGGRAPH, Epic Games has published a resource guide called “The Virtual Production Field Guide” — a comprehensive look at how virtual production impacts filmmakers, from directors to the art department to stunt coordinators to VFX teams and more. The guide is workflow-agnostic.

The use of realtime game engine technology has the potential to impact every aspect of traditional filmmaking, and the trend is increasingly being used in productions ranging from films like Avengers: Endgame and the upcoming Artemis Fowl to TV series like Game of Thrones.

The Virtual Production Field Guide offers an in-depth look at different types of techniques from creating and integrating high-quality CG elements live on set to virtual location scouting to using photoreal LED walls for in-camera VFX. It provides firsthand insights from award-winning professionals who have used these techniques – including directors Kenneth Branagh and Wes Ball, producers Connie Kennedy and Ryan Stafford, cinematographers Bill Pope and Haris Zambarloukos, VFX supervisors Ben Grossmann and Sam Nicholson, virtual production supervisors Kaya Jabar and Glenn Derry, editor Dan Lebental, previs supervisor Felix Jorge, stunt coordinators Guy and Harrison Norris, production designer Alex McDowell, and grip Kim Heath.

As mentioned, the guide is dense with information, so we decided to run an excerpt to give you an idea of what it covers.

Here is an interview with Glenn Derry, founder and VP of visual effects at Fox VFX Lab, which offers a variety of virtual production services with a focus on performance capture. Derry is known for his work as a virtual production supervisor on projects like Avatar, Real Steel and The Jungle Book.

Let’s find out more.

How has performance capture evolved since projects such as The Polar Express?
In those earlier eras, there was no realtime visualization during capture. You captured everything as a standalone piece, and then you did what they called the director layout. After the fact, you would assemble the animation sequences from the captured motion data. Today, we’ve got a combo platter where we’re able to visualize in realtime.
When we bring a cinematographer in, he can start lining up shots with another device called the hybrid camera. It’s a tracked reference camera that he can handhold. I can immediately toggle between an Unreal overview and a camera view of that scene.

The earlier process was minimal in terms of aesthetics. We did everything we could in MotionBuilder, and we made it look as good as it could. Now we can make a lot more mission-critical decisions earlier in the process because the aesthetics of the renders look a lot better.

What are some additional uses for performance capture?
Sometimes we’re working with a pitch piece, where the studio is deciding whether they want to make a movie at all. We use the capture stage to generate what the director has in mind tonally and how the project could feel. That could be a short pitch piece or, for something like Call of the Wild, 20 minutes covering three key scenes from the film to show the studio we could make it work.

The second the movie gets greenlit, we flip over into preproduction. Now we’re breaking down the full script and working with the art department to create concept art. Then we build the movie’s world out around those concepts.

We have our team doing environmental builds based on sketches. Or in some cases, the concept artists themselves are in Unreal Engine doing the environments. Then our virtual art department (VAD) cleans those up and optimizes them for realtime.

Are the artists modeling directly in Unreal Engine?
The artists model in Maya, Modo, 3ds Max, etc. — we’re not particular about the application as long as the output is FBX. The look development, which is where the texturing happens, is all done within Unreal. We’ll also have artists working in Substance Painter and it will auto-update in Unreal. We have to keep track of assets through the entire process, all the way through to the last visual effects vendor.
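
Since the handoff format here is FBX regardless of the modeling package, that export step is easy to standardize with a script. Below is a minimal sketch for Maya using the stock fbxmaya translator; the output path is a placeholder, and this is only an illustration of the handoff described above, not Fox VFX Lab’s actual pipeline code.

```python
# Minimal sketch: export the currently selected asset(s) from Maya as FBX
# for look development in Unreal. The output path below is a placeholder.
import maya.cmds as cmds

def export_selection_as_fbx(output_path):
    # The FBX translator ships with Maya but must be loaded before use.
    if not cmds.pluginInfo("fbxmaya", query=True, loaded=True):
        cmds.loadPlugin("fbxmaya")
    if not cmds.ls(selection=True):
        raise RuntimeError("Select the asset(s) to export first.")
    # exportSelected limits the export to the selection; "FBX export" names the translator.
    cmds.file(output_path, force=True, exportSelected=True, type="FBX export")

export_selection_as_fbx("/projects/show/assets/env_archway_v001.fbx")
```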

How do you handle the level of detail decimation so realtime assets can be reused for visual effects?
The same way we would work on AAA games. We begin with high-resolution detail and then use combinations of texture maps, normal maps and bump maps. That allows us to get high-texture detail without a huge polygon count. There are also some amazing LOD [level of detail] tools built into Unreal, which enable us to take a high-resolution asset and derive something that looks pretty much identical unless you’re right next to it, but runs at a much higher frame rate.
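
The LOD behavior described here (a high-resolution asset with cheaper derived versions that swap in as the object covers less of the frame) can be illustrated in plain Python. This is a conceptual sketch only; the asset names, triangle counts and thresholds are invented, and it is not Unreal’s actual LOD system.

```python
# Conceptual sketch of screen-size-based LOD selection: pick the cheapest
# version of an asset whose detail threshold still covers the object's
# current on-screen size. All values are illustrative.
from dataclasses import dataclass

@dataclass
class LODLevel:
    mesh_name: str              # e.g. "hero_rock_LOD0" is the full-detail asset
    triangle_count: int
    min_screen_coverage: float  # smallest screen fraction at which this LOD is used

LODS = [
    LODLevel("hero_rock_LOD0", 250_000, 0.50),  # fills half the frame or more
    LODLevel("hero_rock_LOD1", 60_000, 0.20),
    LODLevel("hero_rock_LOD2", 15_000, 0.05),
    LODLevel("hero_rock_LOD3", 2_000, 0.00),    # fallback for distant objects
]

def pick_lod(screen_coverage: float) -> LODLevel:
    """Return the lightest LOD whose coverage threshold is still met."""
    for lod in LODS:  # ordered from most to least detailed
        if screen_coverage >= lod.min_screen_coverage:
            return lod
    return LODS[-1]

print(pick_lod(0.35).mesh_name)  # -> "hero_rock_LOD1"
```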

Do you find there’s a learning curve for crew members more accustomed to traditional production?
We’re the team productions come to do realtime on live-action sets. That’s pretty much all we do. That said, it requires prep, and if you want it to look great, you have to make decisions. If you were going to shoot rear projection back in the 1940s or Terminator 2 with large rear projection systems, you still had to have all that material pre-shot to make it work.
It’s the same concept in realtime virtual production. If you want to see it look great in Unreal live on the day, you can’t just show up and decide. You have to pre-build that world and figure out how it’s going to integrate.

The visual effects team and the virtual production team have to be involved from day one. They can’t just be brought in at the last minute. And that’s a significant change for producers and productions in general. It’s not that it’s a tough nut to swallow, it’s just a very different methodology.

How does the cinematographer collaborate with performance capture?
There are two schools of thought: one is to work live with camera operators, shooting the tangible part of the action that’s going on, as the camera is an actor in the scene as much as any of the people are. You can choreograph it all out live if you’ve got the performers and the suits. The other version of it is treated more like a stage play. Then you come back and do all the camera coverage later. I’ve seen DPs like Bill Pope and Caleb Deschanel pick this right up.

How is the experience for actors working in suits and a capture volume?
One of the harder problems we deal with is eye lines. How do we assist the actors so that they’re immersed in this and don’t just look around at a bunch of gray box material on a set? On any modern visual effects movie, you’re going to be standing in front of a 50-foot-tall bluescreen at some point.

Performance capture is in some ways more actor-centric versus a traditional set because there aren’t all the other distractions in a volume, such as complex lighting and camera setup time. The director gets to focus in on the actors. The challenge is getting the actors to interact with something unseen. We’ll project pieces of the set on the walls and use lasers for eye lines. The quality of the HMDs today is also excellent for showing the actors what they would be seeing.

How do you see performance capture tools evolving?
I think a lot of the stuff we’re prototyping today will soon be available to consumers, home content creators, YouTubers, etc. A lot of what Epic develops also gets released in the engine. Money won’t be the driver in terms of being able to use the tools; your creative vision will be.

My teenage son uses Unreal Engine to storyboard. He knows how to do fly-throughs and use the little camera tools we built — he’s all over it. As it becomes easier to create photorealistic visual effects in realtime with a smaller team and at very high fidelity, the movie business will change dramatically.

Something that used to cost $10 million to produce might be a million or less. It’s not going to take away from artists; you still need them. But you won’t necessarily need these behemoth post companies because you’ll be able to do a lot more yourself. It’s just like desktop video — what used to take hundreds of thousands of dollars’ worth of Flame artists, you can now do yourself in After Effects.

Do you see new opportunities arising as a result of this democratization?
Yes, there are a lot of opportunities. High-quality, good-looking CG assets are still expensive to produce and expensive to make look great. There are already stock sites like TurboSquid and CGTrader where you can purchase beautiful assets economically.

But with the final assembly and coalescing of environments and characters, there’s still a lot of need for talented people to do it effectively. I can see companies emerging out of that necessity. We spend a lot of time talking about assets because it’s the core of everything we do. You need to have a set to shoot on and you need compelling characters, which is why actors won’t go away.

What’s happening today isn’t even the tip of the iceberg. There are going to be 50 more big technological breakthroughs along the way. There’s tons of new content being created for Apple, Netflix, Amazon, Disney+, etc. And they’re all going to leverage virtual production.

What’s changing is previs’ role and methodology in the overall scheme of production.
While you might have previously conceived of previs as focused on the pre-production phase of a project and less integral to production, that conception shifts with a realtime engine. Previs is also typically a hands-off collaboration. In a traditional pipeline, a previs artist receives creative notes and art direction, then goes off to create animation and present it back to creatives later for feedback.

In the realtime model, because the assets are directly malleable and rendering time is not a limiting factor, creatives can be much more directly and interactively involved in the process. This leads to higher levels of agency and creative satisfaction for all involved. This also means that instead of working with just a supervisor you might be interacting with the director, editor and cinematographer to design sequences and shots earlier in the project. They’re often right in the room with you as you edit the previs sequence and watch the results together in realtime.

Previs image quality has continued to increase in visual fidelity. This means a greater relationship between previs and final pixel image quality. When the assets you develop as a previs artist are of a sufficient quality, they may form the basis of final models for visual effects. The line between pre and final will continue to blur.

The efficiency of modeling assets only once is evident to all involved. By spending the time early in the project to create models of a very high quality, post begins at the outset of a project. Instead of waiting until the final phase of post to deliver the higher-quality models, the production has those assets from the beginning. And the models can also be fed into ancillary areas such as marketing, games, toys and more.

Beecham House‘s VFX take viewers back in time

Cambridge, UK-based Vine FX was the sole visual effects vendor on Gurinder Chadha’s Beecham House, a new Sunday night drama airing on ITV in the UK. Set in the India of 1795, Beecham House is the story of John Beecham (Tom Bateman), an Englishman who resigned from military service to set up as an honorable trader of the East India Company.

The series was shot at Ealing Studios and at some locations in India, with the visual effects work focusing on the Port of Delhi, the emperor’s palace and Beecham’s house. Vine FX founder Michael Illingworth assisted during development of the series and supervised his team of artists, creating intricate set extensions, matte paintings and period assets.

To make the shots believable and true to the era, the Vine FX team consulted closely with the show’s production designer and researched the period thoroughly. All modern elements — wires, telegraph poles, cars and lamp posts — had to be removed from the shoot footage, but the biggest challenge for the team was the Port of Delhi itself, a key location in the series.

Vine FX created a digital matte painting to extend the port and added numerous 3D boats and 3D people working on the docks to create a busy working port of 1795 — a complex task achieved by the expert eye of the Vine team.

“The success of this type of VFX is in its subtlety. We had to create a Delhi of 1795 that the audience believed, and that involved a great deal of research into how it would have looked, which was essential to making it realistic,” says Illingworth. “Hopefully, we managed to do this. I’m particularly happy with the finished port sequences, as originally there were just three boats.

“I worked very closely with on-set supervisor Oliver Milburn while he was on set in India, so I was very much part of the production process in terms of VFX,” he continues. “Oliver would send me reference material from the shoot; this is always fundamental to the outcome of the VFX, as it allows you to plan ahead and work out any potential upcoming challenges. I was working on the VFX in Cambridge while Oliver was on set in Delhi — perfect!”

Vine FX used Photoshop and Nuke as its main tools. The artists modeled assets with Maya and ZBrush and painted assets using Substance Painter. They rendered with Arnold.

Vine FX is currently working on War of the Worlds for Fox Networks and Canal+, due for release next year.

The Umbrella Academy‘s Emmy-nominated VFX supe Everett Burrell

By Iain Blair

If all ambitious TV shows with a ton of visual effects aspire to be cinematic, then Netflix’s The Umbrella Academy has to be the gold standard. The acclaimed sci-fi, superhero, adventure mash-up was just Emmy-nominated for its season-ending episode “The White Violin,” which showcased a full range of spectacular VFX. This included everything from the fully-CG Dr. Pogo to blowing up the moon and a mansion to the characters’ varied superpowers. Those VFX, mainly created by movie powerhouse Weta Digital in New Zealand and Spin VFX in Toronto, indeed rival anything in cinema. This is partly thanks to Netflix’s 4K pipeline.

The Umbrella Academy is based on the popular, Eisner Award-winning comics and graphic novels created and written by Gerard Way (“My Chemical Romance”), illustrated by Gabriel Bá, and published by Dark Horse Comics.

The story starts when, on the same day in 1989, 43 infants are born to unconnected women who showed no signs of pregnancy the day before. Seven are adopted by Sir Reginald Hargreeves, a billionaire industrialist, who creates The Umbrella Academy and prepares his “children” to save the world. But not everything went according to plan. In their teenage years, the family fractured and the team disbanded. Now, six of the surviving members reunite upon the news of Hargreeves’ death. Luther, Diego, Allison, Klaus, Vanya and Number Five work together to solve a mystery surrounding their father’s death. But the estranged family once again begins to come apart due to divergent personalities and abilities, not to mention the imminent threat of a global apocalypse.

The live-action series stars Ellen Page, Tom Hopper, Emmy Raver-Lampman, Robert Sheehan, David Castañeda, Aidan Gallagher, Cameron Britton and Mary J. Blige. It is produced by Universal Content Productions for Netflix. Steve Blackman (Fargo, Altered Carbon) is the executive producer and showrunner, with additional executive producers Jeff F. King, Bluegrass Television, and Mike Richardson and Keith Goldberg from Dark Horse Entertainment.

Everett Burrell

I spoke with senior visual effects supervisor and co-producer Everett Burrell (Pan’s Labyrinth, Altered Carbon), who has an Emmy for his work on Babylon 5, about creating the VFX and the 4K pipeline.

Congratulations on being nominated for the first season-ending episode “The White Violin,” which showcased so many impressive visual effects.
Thanks. We’re all really proud of the work.

Have you started season two?
Yes, and we’re already knee-deep in the shooting up in Canada. We shoot in Toronto, where we’re based, as well as Hamilton, which has this great period look. So we’re up there quite a bit. We’re just back here in LA for a couple of weeks working on editorial with Steve Blackman, the executive producer and showrunner. Our offices are in Encino, in a merchant bank building. I’m a co-producer as well, so I also deal a lot with editorial — more than normal.

Have you planned out all the VFX for the new season?
To a certain extent. We’re working on the scripts and have a good jump on them. We definitely plan to blow the first season out of the water in terms of what we come up with.

What are the biggest challenges of creating all the VFX on the show?
The big one is the sheer variety of the VFX; they’re all over the map in terms of type. They go from a completely animated talking CG chimpanzee Dr. Pogo to creating a very unusual apocalyptic world, with scenes like blowing up the moon and, of course, all the superpowers. One of the hardest things we had to do — which no one will ever know just watching it — was a ton of leaf replacement on trees.

Digital leaves via Montreal’s Folks.

When we began shooting, it was winter and there were no leaves on the trees. When we got to editorial we realized that the story spans just eight days, so it wouldn’t make any sense if in one scene we had no leaves and in the next we had leaves. So we had to add every single leaf to the trees for all of the first five episodes, which was a huge amount of work. The way we did it was to go back to all the locations and re-shoot all the trees from the same angles once they were in bloom. Then we had to composite all that in. Folks in Montreal did all of it, and it was very complicated. Lola did a lot of great work on Hargreeves, getting his young look for the early 1900s and cleaning up the hair and wrinkles and making it all look totally realistic. That was very tricky too.

Netflix is ahead of the curve thanks to its 4K policy. Tell us about the pipeline.
For a start, we shoot with the ARRI Alexa 65, which is a very robust cinema camera that was used on The Revenant. With its 65mm sensor, it’s meant for big-scope, epic movies, and we decided to go with it to give our show that great cinema look. The depth of field is like film, and it can also emulate film grain for this fantastic look. That camera shoots natively at 5K — it won’t go any lower. That means we’re at a much higher resolution than any other show out there.

And you’re right, Netflix requires a 4K master as future-proofing for streaming and so on. Those very high standards then trickle down to us and all the VFX. We also use a very unique system developed by Deluxe and Efilm called Portal, which basically stores the entire show in the cloud on a server somewhere, and we can get background plates to the vendors within 10 minutes. It’s amazing. Back in the old days, you’d have to make a request and maybe within 24 or 48 hours, you’d get those plates. So this system makes it almost instantaneous, and that’s a lifesaver.

   
Method blows up the moon.

How closely do you work with Steve Blackman and the editors?
I think Steve said it best: “There’s no daylight between the two of us.” We’re linked at the hip pretty much all the time. He comes to my office if he has issues, and I go to his if we have complications; we resolve all of it together in probably the best creative relationship I’ve ever had. He relies on me and counts on me, and I trust him completely. Bottom line, if we need to write ourselves out of a sticky situation, he’s also the head writer, so he’ll just go off and rewrite a scene to help us out.

How many VFX do you average for each show?
We average between 150 and 200 per episode. Last season we did nearly 2,000 in total, so it’s a huge amount for a TV show, and there’s a lot of data being pushed. Luckily, I have an amazing team, including my production manager Misato Shinohara. She’s just the best and really takes care of all the databases, and manages all the shot data, reference, slates and so on. All that stuff we take on set has to go into this massive database, and just maintaining that is a huge job.

Who are the main VFX vendors?
The VFX are mainly created by Weta in New Zealand and Spin VFX in Toronto. Weta did all the Pogo stuff. Then we have Folks, Lola, Marz, Deluxe Toronto, DigitalFilm Tree in LA… and then Method Studios in Vancouver did great work on our end-of-the-world apocalyptic sequence. They blew up the moon and had a chunk of it hitting the Earth, along with all the surrounding imagery. We started R&D on that pretty early to get a jump on it. We gave them storyboards and they did previz. We used that as a cut to get iterations of it all. There were a lot of particle simulations, which was pretty intense.

Weta created Dr. Pogo

What have been the most difficult VFX sequences to create?
Just dealing with Pogo is obviously very demanding, and we had to come up with a fast shortcut to dealing with the photo-real look as we just don’t have the time or budget they have for the Planet of the Apes movies. The big thing is integrating him in the room as an actor with the live actors, and that was a huge challenge. We used just two witness cameras to capture our Pogo body performer. All the apocalyptic scenes were also very challenging because of the scale, and then those leaves were very hard to do and make look real. That alone took us a couple of months. And we might have the same problem this year, as we’re shooting in the summer through fall, and I’m praying that the leaves don’t start falling before we wrap.

What have been the main advances in technology that have really helped you pull off some of the show’s VFX?
I think the rendering and the graphics cards are the big ones, and the hardware talks together much more efficiently now. Even just a few years ago, it might have taken weeks and weeks to render a Pogo. Now we can do it in a day. Weta developed new software for creating the texture and fabric of Pogo’s clothes. They also refined their hair programs.

 

I assume as co-producer that you’re very involved with the DI?
I am… and keeping track of all that and making sure we keep pushing the envelope. We do the DI at Company 3 with colorist Jill Bogdanowicz, who’s a partner in all of this. She brings so much to the show, and her work is a big part of why it looks so good. I love the DI. It’s where all the magic happens, and I get in there early with Jill and take care of the VFX tweaks. Then Steve comes in and works on contrast and color tweaks. By the time Steve gets there, we’re probably 80% of the way there already.

What can fans expect from season two?
Bigger, better visual effects. We definitely pay attention to the fans. They love the graphic novel, so we’re getting more of that into the show.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

UK’s Molinare adds two to its VFX team

Molinare has boosted its visual effects team with the addition of head of VFX production Kerrie Bryant and VFX supervisor Andy Tusabe.

Bryant comes to Molinare after working at DNeg TV and Technicolor, where she oversaw all projects within the studio, as well as supervising line producers and coordinators on their projects.

Tusabe joins Molinare with over 26 years’ experience across TV, film and commercials production and post production. He knows the Molinare VFX team well, having worked with them as a freelancer over the past two years, on titles such as Good Omens, The Crown, A Discovery of Witches, King Lear and Yardie.

So far this year, Molinare has completed VFX post on high-end dramas such as Good Omens, Strike Back: Silent War, Beecham House and the next series of The Crown, as well as Gurinder Chadha‘s new feature film Blinded by the Light, which will be released internationally in August.

Meet the Artist: The Mill’s Anne Trotman

Anne Trotman is a senior Flame artist and VFX supervisor at The Mill in New York. She specializes in beauty and fashion work but gets to work on a variety of other projects as well.

A graduate of Kings College in London, Trotman took on what she calls “a lot of very random temp jobs” before finally joining London’s Blue Post Production as a runner.

“In those days a runner did a lot of ‘actual’ running around SoHo, dropping off tapes and picking up lunches,” she says, admitting she was also sent out for extra green for color bars and warm sake at midnight. After being promoted to the machine room, she spent her time assisting all the areas of the company, including telecine grading, offline, online, VFX and audio. “This gave me a strong understanding of the post production process as a whole.”

Trotman then joined the 2D VFX teams from Blue, Clear Post Production, The Hive and VTR to create a team at Prime Focus London. She moved into film compositing, where she headed up the 2D team as a senior Flame operator, overseeing projects, including shot allocation and VFX reviews. Then she joined SFG-Technicolor’s commercials facility in Shanghai. After a year in China, she joined The Mill in New York, where she is today.

We reached out to Trotman to find out more about The Mill, a technology and visual effects studio, how she works and some recent projects. Enjoy.

Bumble

Can you talk about some recent high-profile projects you’ve completed?
The most recent high-profile project I’ve worked on was for Bumble’s Super Bowl 2019 spot. It was its first commercial ever. Being that Bumble is a female-founded company, it was important for this project to celebrate female artists and empowerment, something I strongly support. Therefore, I was thrilled to lead an all-female team for this project. The agency creatives and producers were all female and so was almost the whole post team, including the editor, colorist and all the VFX artists.

How did you first learn Flame, and how has your use of it evolved over the years?
I had been assisting artists working on a Quantel Editbox at Blue. They then installed a Flame and hired a female artist who had worked on Gladiator. That’s when I knew I had found my calling. Working with technical equipment was very attractive to me, and in those days it was a dark art, and you had to work in a company to get your hands on one. I worked nights doing a lot of conforming and rotoscoping. I also started doing small jobs for clients I knew well. I remember assisting on an Adele pop video, which is where my love of beauty started.

When I first started using Flame, the whole job was usually completed by one artist. These days, jobs are much bigger, and with so many versions for social media, much of my day can go to coordinating the team of artists. Workshare and remote artists are becoming a big part of our industry, so communicating with artists all over the world has become a big part of my job in order to bring everything together to create the final film.

In addition to Flame, what other tools are used in your workflow?
Post production has changed so much in the past five years. My job is not just to press buttons on a Flame to get a commercial on television anymore; that’s only a small part. My job is to help the director and/or the agency position a brand and connect it with the consumer.

My workflow usually starts with bidding an agency or a director’s brief. Sometimes they need tests to sell an idea to a client. I might supervise a previz artist on Maxon Cinema 4D to help them achieve the director’s vision. I attend most of the shoots, which gives me an insight into the project while assessing the client’s goals and vision. I can take Flame on a laptop to my shoots to do tests for the director to help explain how certain shots will look after post. This process is so helpful all around in order for me to see if what we are shooting is correct and for the client to understand the director’s vision.

At The Mill, I work closely with the colorists who work on FilmLight Baselight before completing the work on Flame. All the artists at The Mill use Flame and Foundry Nuke, although my Flame skills are 100% better than my Nuke skills.

What are the most fulfilling aspects of the work you do?
I’m lucky to work with many directors and agency creatives that I now call friends. It still gives me a thrill when I’m able to interpret the vision of the creative or director to create the best work possible and convey the message of the brand.

I also love working with the next generation of artists. I especially love being able to work alongside the young female talent at The Mill. This is the first company I’ve worked at where I’ve not been “the one and only female Flame artist.”

At The Mill NY, we currently have 11 full-time female 2D artists working in our team, which has a 30/70 male-to-female ratio. There’s still a way to go to get to 50/50, so if I can inspire another female intern or runner who is thinking of becoming a VFX artist or colorist, then it’s a good day. Helping the cycle continue for female artists is so important to me.

What is the greatest challenge you’ve faced in your career?
Moving to Shanghai. Not only did I have the challenge of the language barrier to overcome but also the culture — from having lunch at noon to working with clients from a completely different background than mine. I had to learn all I could about the Chinese culture to help me connect with my clients.

Covergirl with Issa Rae

Out of all of the projects you’ve worked on, which one are you the most proud of?
There are many, but one that stands out is the Covergirl brand relaunch (2018) for director Matt Lambert at Prettybird. As an artist working on high-profile beauty brands, what they stand for is very important to me. I know every young girl will want to use makeup to make themselves feel great, but it’s so important to make sure young women are using it for the right reason. The new tagline “I am what I make-up” — together with a very diverse group of female ambassadors — was such a positive message to put out into the world.

There was also 28 Weeks Later, a feature film from director Juan Carlos Fresnadillo. My first time working on a feature was an amazing experience. I got to make lifelong friends working on this project. My technical abilities as an artist grew so much that year, from learning the patience needed to work on the same shot for two months to discovering the technical difficulties in compositing fire to be able to blow up parts of London. Such fun!

Finally, there was also a spot for the Target Summer 2019 campaign. It was directed by Whitelabel’s Lacey, who I collaborate with on a lot of projects. Tristan Sheridan was the DP and the agency was Mother NY.

Target Summer Campaign

What advice do you have for a young professional trying to break into the industry?
Try everything. Don’t get pigeonholed into one area of the industry too early on. Learn about every part of the post process; it will be so helpful to you as you progress through your career.

I was lucky my first boss in the industry (Dave Cadle) was patient and gave me time to find out what I wanted to focus on. I try to be a positive mentor to the young runners and interns at The Mill, especially the young women. I was so lucky to have had female role models throughout my career, from the person that employed me to the first person that started training me on Flame. I know how important it is to see someone like you in a role you are thinking of pursuing.

Outside of work, how do you enjoy spending your free time?
I travel as much as I can. I love learning about new cultures; it keeps me grounded. I live in New York City, which is a bubble, and if you stay here too long, you start to forget what the real world looks like. I also try to give back when I can. I’ve been helping a director friend of mine with some films focusing on the issue of female homelessness around the world. We collaborated on some lovely films about women in LA and are currently working on some London-based ones.

You can find out more here.

Anne Trotman Image: Photo by Olivia Burke