

Picture Shop VFX acquires Denmark’s Ghost VFX

Burbank’s Picture Shop VFX has acquired Denmark’s Ghost VFX. The Copenhagen-based studio, founded in 1999, provides high-end visual effects work for film, television and streaming platforms. The move helps Picture Shop “increase its services worldwide and broaden its talent and expertise,” according to Picture Shop VFX president Tom Kendall.

Over the years, Ghost has contributed to more than 70 feature films and titles, including Star Wars: The Rise of Skywalker, The Mandalorian, The Walking Dead, See, Black Panther and Star Trek: Discovery.

“As we continue to expand our VFX footprint into the international market, I am extremely excited to have Ghost join Picture Shop VFX,” says Bill Romeo, president of Picture Head Holdings.

Ghost’s Christensen says the studio takes up three floors and 13,000 square feet in a “vintage and beautifully renovated office building” in Copenhagen. The studio’s main tools are Autodesk Maya, Foundry Nuke and SideFX Houdini.

“We are really looking forward to a tight-knit collaboration with all the VFX teams in the Picture Shop group,” says Christensen. “Right now Ghost will continue servicing current clients and projects, but we’re really looking forward to exploring the massive potential of being part of a larger and international family.”

Picture Shop VFX is a division of Picture Head Holdings, which has locations in Los Angeles, Vancouver, the United Kingdom and Denmark.

Main Image: Ghost artists at work.

Conductor Companion app targets VFX boutiques and freelancers

Conductor Technologies has introduced Conductor Companion, a desktop app designed to simplify the use of the company’s cloud-based rendering service. Tailored for boutique studios and freelance artists, Companion streamlines the Conductor on-ramp and rendering experience, allowing users to easily manage and download files, write commands and handle custom submissions or plug-ins from their laptops or workstations. Along with this release, Conductor has added initial support for Blender.

“Conductor was originally designed to meet the needs of larger VFX studios, focusing our efforts on maximizing efficiency and scalability when many artists simultaneously leverage the platform and optimizing how Conductor hooks into those pipelines,” explains CEO Mac Moore. “As Conductor’s user base has grown, we’ve been blown away by the number of freelance artists and small studios that have come to us for help, each of which has their own unique needs. Conductor Companion is a nod to that community, bringing all the functionality and massive render resource scale of Conductor into a user-friendly app, so that artists can focus on content creation versus pipeline management. And given that focus, it was a no-brainer to add Blender support, and we are eager to serve the passionate users of that product.”

Moore reports that this app will be the foundation of Conductor’s Intelligence Hub in the near future, “acting as a gateway to more advanced functionality like Shot Analytics and Intelligent Bid Assist. These features will leverage AI and Conductor’s cloud knowledge to help owners and freelancers make more informed business decisions as it pertains to project-to-project rendering financials.”

Conductor Companion is currently in public beta and is available to download from the Conductor website.

In addition to Blender, applications currently supported by Conductor include Autodesk Maya and Arnold; Foundry’s Nuke, Cara VR, Katana, Modo and Ocula; Chaos Group’s V-Ray; Pixar’s RenderMan; Isotropix’s Clarisse; Golaem; Ephere’s Ornatrix; Yeti; and Miarmy.


The Mill opens boutique studio in Berlin

Technicolor’s The Mill has officially launched in Berlin. This new boutique studio is located in the heart of Berlin, situated in the creative hub of Mitte, near many of Germany’s agencies, production companies and brands.

The Mill has been working with German clients for years. Recent projects include Mercedes’ Bertha Benz spot with director Sebastian Strasser; Netto’s The Easter Surprise, directed in-house by The Mill; and BMW’s The 8 with director Daniel Wolfe. The new studio will bring The Mill’s full range of creative services, from color to experiential and interactive, as well as visual effects and design.

The Mill Berlin crew

Creative director Greg Spencer will lead the creative team. He is a multi-award-winning creative, having won several VES, Cannes Lions and British Arrow awards. His recent projects include Carlsberg’s The Lake, PlayStation’s This Could Be You and Eve Cuddly Toy. Spencer also played a role in some of Mill Film’s major titles. He was the 2D supervisor for Les Misérables and also worked on the Lord of the Rings trilogy. His resume also includes campaigns for brands such as Nike and Samsung.

Executive producer Justin Stiebel moves from The Mill London, where he has been since early 2014, to manage client relationships and new business. Since joining the company, Stiebel has produced spots such as Audi’s Next Level and Mini’s “The Faith of a Few” campaign. He has also collaborated with directors such as Sebastian Strasser, Markus Walter and Daniel Wolfe while working on brands like Mercedes, Audi and BMW.

Sean Costelloe is managing director of The Mill London and The Mill Berlin.

Main Image Caption: (L-R) Justin Stiebel and Greg Spencer


Directing Olly’s ‘Happy Inside Out’ campaign

How do you express how vitamins make you feel? Well, production company 1stAveMachine partnered with independent creative agency Yard NYC to develop the stylized “Happy Inside Out” campaign for Olly multivitamin gummies to show just that.

Beauty

The directing duo of Erika Zorzi and Matteo Sangalli, known as Mathery, highlighted the brand’s products and benefits by using rich textures, colors and lighting. They shot on an ARRI Alexa Mini. “Our vision was to tell a cohesive narrative, where each story of the supplements spoke the same visual language,” Mathery explains. “We created worlds where everything is possible and sometimes took each product’s concept to the extreme and other times added some romance to it.”

Each spot imagines various benefits of taking Olly products. The side-scrolling Energy, which features a green palette, shows a woman jumping and doing flips through life’s everyday challenges, including through her home to work, doing laundry and going to the movies. Beauty, with its pink color palette, features another woman “feeling beautiful” while turning the heads of a parliament of owls. Meanwhile, Stress, with its purple/blue palette, features a woman tied up in a giant ball of yarn, and as she unspools herself, the things that were tying her up spin away. In the purple-shaded Sleep, a lady lies in bed pulling off layer after layer of sleep masks until she just happily sleeps.

Sleep

The spots were shot with minimal VFX, other than a few greenscreen moments, and the team found itself making decisions on the fly, constantly managing logistics for stunt choreography, animal performances and wardrobe. Jogger Studios provided the VFX using Autodesk Flame for conform, cleanup and composite work. Adobe After Effects was used for all of the end tag animation. Cut+Run edited the campaign.

According to Mathery, “The acrobatic moves and obstacle pieces in the Energy spot were rehearsed on the same day of the shoot. We had to be mindful because the action was physically demanding on the talent. With the Beauty spot, we didn’t have time to prepare with the owls. We had no idea if they would move their heads on command or try to escape and fly around the whole time. For the Stress spot, we experimented with various costume designs and materials until we reached a look that humorously captured the concept.”

The campaign marks Mathery’s second collaboration with Yard NYC and Olly, which brought the directing team into the fold very early on, during the initial stages of the project. This familiarity gave everyone plenty of time to let the ideas breathe.


VES Awards: The Lion King and Alita earn five noms each

The Visual Effects Society (VES) has announced its nominees for the 18th Annual VES Awards, which recognize outstanding visual effects artistry and innovation in film, animation, television, commercials and video games, as well as the VFX supervisors, VFX producers and hands-on artists who bring this work to life. Alita: Battle Angel and The Lion King lead the feature contenders with five nominations each; Toy Story 4 is the top animated film contender with five nominations; and Game of Thrones and The Mandalorian tie to lead the broadcast field with six nominations each.

Nominees in 25 categories were selected by VES members via events hosted by 11 VES sections, including Australia, the Bay Area, Germany, London, Los Angeles, Montreal, New York, New Zealand, Toronto, Vancouver and Washington.

The VES Awards will be held on January 29 at the Beverly Hilton Hotel. The VES Lifetime Achievement Award will be presented to Academy, DGA and Emmy Award-winning director-producer-screenwriter Martin Scorsese. The VES Visionary Award will be presented to director-producer-screenwriter Roland Emmerich. And the VES Award for Creative Excellence will be given to visual effects supervisor Sheena Duggal. Award-winning actor-comedian-author Patton Oswalt will once again host the event.

The nominees for the 18th Annual VES Awards in 25 categories are:

 

Outstanding Visual Effects in a Photoreal Feature

 

ALITA: BATTLE ANGEL

Richard Hollander

Kevin Sherwood

Eric Saindon

Richard Baneham

Bob Trevino

 

AVENGERS: ENDGAME

Daniel DeLeeuw

Jen Underdahl

Russell Earl

Matt Aitken

Daniel Sudick

 

GEMINI MAN

Bill Westenhofer

Karen Murphy-Mundell

Guy Williams

Sheldon Stopsack

Mark Hawker

 

STAR WARS: THE RISE OF SKYWALKER

Roger Guyett

Stacy Bissell

Patrick Tubach

Neal Scanlan

Dominic Tuohy

 

THE LION KING

Robert Legato

Tom Peitzman

Adam Valdez

Andrew R. Jones

 

Outstanding Supporting Visual Effects in a Photoreal Feature

 

1917

Guillaume Rocheron

Sona Pak

Greg Butler

Vijay Selvam

Dominic Tuohy

 

FORD V FERRARI

Olivier Dumont

Kathy Siegel

Dave Morley

Malte Sarnes

Mark Byers

 

JOKER

Edwin Rivera

Brice Parker

Mathew Giampa

Bryan Godwin

Jeff Brink

 

THE AERONAUTS

Louis Morin

Annie Godin

Christian Kaestner

Ara Khanikian

Mike Dawson

 

THE IRISHMAN

Pablo Helman

Mitch Ferm

Jill Brooks

Leandro Estebecorena

Jeff Brink

 

Outstanding Visual Effects in an Animated Feature

 

FROZEN 2

Steve Goldberg

Peter Del Vecho

Mark Hammel

Michael Giaimo

 

KLAUS

Sergio Pablos

Matthew Teevan

Marcin Jakubowski

Szymon Biernacki

 

MISSING LINK

Brad Schiff

Travis Knight

Steve Emerson

Benoit Dubuc

 

THE LEGO MOVIE 2

David Burgess

Tim Smith

Mark Theriault

John Rix

 

TOY STORY 4

Josh Cooley

Mark Nielsen

Bob Moyer

Gary Bruins

 

Outstanding Visual Effects in a Photoreal Episode

 

GAME OF THRONES; The Bells

Joe Bauer

Steve Kullback

Ted Rae

Mohsen Mousavi

Sam Conway

 

HIS DARK MATERIALS; The Fight to the Death

Russell Dodgson

James Whitlam

Shawn Hillier

Robert Harrington

 

LADY AND THE TRAMP

Robert Weaver

Christopher Raimo

Arslan Elver

Michael Cozens

Bruno Van Zeebroeck

 

LOST IN SPACE; Ninety-Seven

Jabbar Raisani

Terron Pratt

Niklas Jacobson

Juri Stanossek

Paul Benjamin

 

STRANGER THINGS; Chapter Six: E Pluribus Unum

Paul Graff

Tom Ford

Michael Maher Jr.

Martin Pelletier

Andy Sowers

 

THE MANDALORIAN; The Child

Richard Bluff

Abbigail Keller

Jason Porter

Hayden Jones

Roy Cancinon

 

Outstanding Supporting Visual Effects in a Photoreal Episode

 

CHERNOBYL; 1:23:45

Max Dennison

Lindsay McFarlane

Clare Cheetham

Paul Jones

Claudius Christian Rauch

 

LIVING WITH YOURSELF; Nice Knowing You

Jay Worth

Jacqueline VandenBussche

Chris Wright

Tristan Zerafa

 

SEE; Godflame

Adrian de Wet

Eve Fizzinoglia

Matthew Welford

Pedro Sabrosa

Tom Blacklock

 

THE CROWN; Aberfan

Ben Turner

Reece Ewing

David Fleet

Jonathan Wood

 

VIKINGS; What Happens in the Cave

Dominic Remane

Mike Borrett

Ovidiu Cinazan

Tom Morrison

Paul Byrne

 

Outstanding Visual Effects in a Real-Time Project

 

Call of Duty Modern Warfare

Charles Chabert

Chris Parise

Attila Zalanyi

Patrick Hagar

 

Control

Janne Pulkkinen

Elmeri Raitanen

Matti Hämäläinen

James Tottman

 

Gears 5

Aryan Hanbeck

Laura Kippax

Greg Mitchell

Stu Maxwell

 

Myth: A Frozen Tale

Jeff Gipson

Nicholas Russell

Brittney Lee

Jose Luis Gomez Diaz

 

Vader Immortal: Episode I

Ben Snow

Mike Doran

Aaron McBride

Steve Henricks

 

Outstanding Visual Effects in a Commercial

 

Anthem Conviction

Viktor Muller

Lenka Likarova

Chris Harvey

Petr Marek

 

BMW Legend

Michael Gregory

Christian Downes

Tim Kafka

Toya Drechsler

 

Hennessy: The Seven Worlds

Carsten Keller

Selcuk Ergen

Kiril Mirkov

William Laban

 

PlayStation: Feel The Power of Pro

Sam Driscoll

Clare Melia

Gary Driver

Stefan Susemihl

 

Purdey’s: Hummingbird

Jules Janaud

Emma Cook

Matthew Thomas

Philip Child

 

Outstanding Visual Effects in a Special Venue Project

 

Avengers: Damage Control

Michael Koperwas

Shereif Fattouh

Ian Bowie

Kishore Vijay

Curtis Hickman

 

Jurassic World: The Ride

Hayden Landis

Friend Wells

Heath Kraynak

Ellen Coss

 

Millennium Falcon: Smugglers Run

Asa Kalama

Rob Huebner

Khatsho Orfali

Susan Greenhow

 

Star Wars: Rise of the Resistance

Jason Bayever

Patrick Kearney

Carol Norton

Bill George

 

Universal Sphere

James Healy

Morgan MacCuish

Ben West

Charlie Bayliss

 

Outstanding Animated Character in a Photoreal Feature

 

ALITA: BATTLE ANGEL; Alita

Michael Cozens

Mark Haenga

Olivier Lesaint

Dejan Momcilovic

 

AVENGERS: ENDGAME; Smart Hulk

Kevin Martel

Ebrahim Jahromi

Sven Jensen

Robert Allman

 

GEMINI MAN; Junior

Paul Story

Stuart Adcock

Emiliano Padovani

Marco Revelant

 

THE LION KING; Scar

Gabriel Arnold

James Hood

Julia Friedl

Daniel Fortheringham

 

Outstanding Animated Character in an Animated Feature

 

FROZEN 2; The Water Nøkk

Svetla Radivoeva

Marc Bryant

Richard E. Lehmann

Cameron Black

 

KLAUS; Jesper

Yoshimishi Tamura

Alfredo Cassano

Maxime Delalande

Jason Schwartzman

 

MISSING LINK; Susan

Rachelle Lambden

Brenda Baumgarten

Morgan Hay

Benoit Dubuc

 

TOY STORY 4; Bo Peep

Radford Hurn

Tanja Krampfert

George Nguyen

Becki Rocha Tower

 

Outstanding Animated Character in an Episode or Real-Time Project

 

LADY AND THE TRAMP; Tramp

Thiago Martins

Arslan Elver

Stanislas Paillereau

Martine Chartrand

 

STRANGER THINGS 3; Tom/Bruce Monster

Joseph Dubé-Arsenault

Antoine Barthod

Frederick Gagnon

Xavier Lafarge

 

THE MANDALORIAN; The Child; Mudhorn

Terry Bannon

Rudy Massar

Hugo Leygnac

 

THE UMBRELLA ACADEMY; Pilot; Pogo

Aidan Martin

Craig Young

Olivier Beierlein

Laurent Herveic

 

Outstanding Animated Character in a Commercial

 

Apex Legends; Meltdown; Mirage

Chris Bayol

John Fielding

Derrick Sesson

Nole Murphy

 

Churchill; Churchie

Martino Madeddu

Philippe Moine

Clement Granjon

Jon Wood

 

Cyberpunk 2077; Dex

Jonas Ekman

Jonas Skoog

Marek Madej

Grzegorz Chojnacki

 

John Lewis; Excitable Edgar; Edgar

Tim van Hussen

Diarmid Harrison-Murray

Amir Bazzazi

Michael Diprose

 

Outstanding Created Environment in a Photoreal Feature

 

ALADDIN; Agrabah

Daniel Schmid

Falk Boje

Stanislaw Marek

Kevin George

 

ALITA: BATTLE ANGEL; Iron City

John Stevenson-Galvin

Ryan Arcus

Mathias Larserud

Mark Tait

 

MOTHERLESS BROOKLYN; Penn Station

John Bair

Vance Miller

Sebastian Romero

Steve Sullivan

 

STAR WARS: THE RISE OF SKYWALKER; Pasaana Desert

Daniele Bigi

Steve Hardy

John Seru

Steven Denyer

 

THE LION KING; The Pridelands

Marco Rolandi

Luca Bonatti

Jules Bodenstein

Filippo Preti

 

Outstanding Created Environment in an Animated Feature

 

FROZEN 2; Giants’ Gorge

Samy Segura

Jay V. Jackson

Justin Cram

Scott Townsend

 

HOW TO TRAIN YOUR DRAGON: THE HIDDEN WORLD; The Hidden World

Chris Grun

Ronnie Cleland

Ariel Chisholm

Philippe Brochu

 

MISSING LINK; Passage to India Jungle

Oliver Jones

Phil Brotherton

Nick Mariana

Ralph Procida

 

TOY STORY 4; Antiques Mall

Hosuk Chang

Andrew Finley

Alison Leaf

Philip Shoebottom

 

Outstanding Created Environment in an Episode, Commercial, or Real-Time Project

 

GAME OF THRONES; The Iron Throne; Red Keep Plaza

Carlos Patrick DeLeon

Alonso Bocanegra Martinez

Marcela Silva

Benjamin Ross

 

LOST IN SPACE; Precipice; The Trench

Philip Engström

Benjamin Bernon

Martin Bergquist

Xuan Prada

 

THE DARK CRYSTAL: AGE OF RESISTANCE; The Endless Forest

Sulé Bryan

Charles Chorein

Christian Waite

Martyn Hawkins

 

THE MANDALORIAN; Nevarro Town

Alex Murtaza

Yanick Gaudreau

Marco Tremblay

Maryse Bouchard

 

Outstanding Virtual Cinematography in a CG Project

 

ALITA: BATTLE ANGEL

Emile Ghorayeb

Simon Jung

Nick Epstein

Mike Perry

 

THE LION KING

Robert Legato

Caleb Deschanel

Ben Grossmann

AJ Sciutto

 

THE MANDALORIAN; The Prisoner; The Roost

Richard Bluff

Jason Porter

Landis Fields IV

Baz Idione

 

TOY STORY 4

Jean-Claude Kalache

Patrick Lin

 

Outstanding Model in a Photoreal or Animated Project

 

LOST IN SPACE; The Resolute

Xuan Prada

Jason Martin

Jonathan Vårdstedt

Eric Andersson

 

MISSING LINK; The Manchuria

Todd Alan Harvey

Dan Casey

Katy Hughes

 

THE MAN IN THE HIGH CASTLE; Rocket Train

Neil Taylor

Casi Blume

Ben McDougal

Chris Kuhn

 

THE MANDALORIAN; The Sin; The Razorcrest

Doug Chiang

Jay Machado

John Goodson

Landis Fields IV

 

Outstanding Effects Simulations in a Photoreal Feature

 

DUMBO; Bubble Elephants

Sam Hancock

Victor Glushchenko

Andrew Savchenko

Arthur Moody

 

SPIDER-MAN: FAR FROM HOME; Molten Man

Adam Gailey

Jacob Santamaria

Jacob Clark

Stephanie Molk

 

STAR WARS: THE RISE OF SKYWALKER

Don Wong

Thibault Gauriau

Goncalo Cababca

Francois-Maxence Desplanques

 

THE LION KING

David Schneider

Samantha Hiscock

Andy Feery

Kostas Strevlos

 

Outstanding Effects Simulations in an Animated Feature

 

ABOMINABLE

Alex Timchenko

Domin Lee

Michael Losure

Eric Warren

 

FROZEN 2

Erin V. Ramos

Scott Townsend

Thomas Wickes

Rattanin Sirinaruemarn

 

HOW TO TRAIN YOUR DRAGON: THE HIDDEN WORLD; Water and Waterfalls

Derek Cheung

Baptiste Van Opstal

Youxi Woo

Jason Mayer

 

TOY STORY 4

Alexis Angelidis

Amit Baadkar

Lyon Liew

Michael Lorenzen

 

Outstanding Effects Simulations in an Episode, Commercial, or Real-Time Project

 

GAME OF THRONES; The Bells

Marcel Kern

Paul Fuller

Ryo Sakaguchi

Thomas Hartmann

 

Hennessy: The Seven Worlds

Selcuk Ergen

Radu Ciubotariu

Andreu Lucio

Vincent Ullmann

 

LOST IN SPACE; Precipice; Water Planet

Juri Bryan

Hugo Medda

Kristian Olsson

John Perrigo

 

STRANGER THINGS 3; Melting Tom/Bruce

Nathan Arbuckle

Christian Gaumond

James Dong

Aleksandr Starkov

 

THE MANDALORIAN; The Child; Mudhorn

Xavier Martin Ramirez

Ian Baxter

Fabio Siino

Andrea Rosa

 

Outstanding Compositing in a Feature

 

ALITA: BATTLE ANGEL

Adam Bradley

Carlo Scaduto

Hirofumi Takeda

Ben Roberts

 

AVENGERS: ENDGAME

Tim Walker

Blake Winder

Tobias Wiesner

Joerg Bruemmer

 

CAPTAIN MARVEL; Young Nick Fury

Trent Claus

David Moreno Hernandez

Jeremiah Sweeney

Yuki Uehara

 

STAR WARS: THE RISE OF SKYWALKER

Jeff Sutherland

John Galloway

Sam Bassett

Charles Lai

 

THE IRISHMAN

Nelson Sepulveda

Vincent Papaix

Benjamin O’Brien

Christopher Doerhoff

 

Outstanding Compositing in an Episode

 

GAME OF THRONES; The Bells

Sean Heuston

Scott Joseph

James Elster

Corinne Teo

 

GAME OF THRONES; The Long Night; Dragon Ground Battle

Mark Richardson

Darren Christie

Nathan Abbott

Owen Longstaff

 

STRANGER THINGS 3; Starcourt Mall Battle

Simon Lehembre

Andrew Kowbell

Karim El-Masry

Miklos Mesterhazy

 

WATCHMEN; Pilot; Looking Glass

Nathaniel Larouche

Iyi Tubi

Perunika Yorgova

Mitchell Beaton

 

Outstanding Compositing in a Commercial

 

BMW Legend

Toya Drechsler

Vivek Tekale

Guillaume Weiss

Alexander Kulikov

 

Feeding America; I Am Hunger in America

Dan Giraldo

Marcelo Pasqualino

Alexander Koester

 

Hennessy; The Seven Worlds

Rod Norman

Guillaume Weiss

Alexander Kulikov

Alessandro Granella

 

PlayStation: Feel the Power of Pro

Gary Driver

Stefan Susemihl

Greg Spencer

Theajo Dharan

 

Outstanding Special (Practical) Effects in a Photoreal or Animated Project

 

ALADDIN; Magic Carpet

Mark Holt

Jay Mallet

Will Wyatt

Dickon Mitchell

 

GAME OF THRONES; The Bells

Sam Conway

Terry Palmer

Laurence Harvey

Alastair Vardy

 

TERMINATOR: DARK FATE

Neil Corbould

David Brighton

Ray Ferguson

Keith Dawson

 

THE DARK CRYSTAL: THE AGE OF RESISTANCE; She Knows All the Secrets

Sean Mathiesen

Jon Savage

Toby Froud

Phil Harvey

 

Outstanding Visual Effects in a Student Project

 

DOWNFALL

Matias Heker

Stephen Moroz

Bradley Cocksedge

 

LOVE AND FIFTY MEGATONS

Denis Krez

Josephine Roß

Paulo Scatena

Lukas Löffler

 

OEIL POUR OEIL

Alan Guimont

Thomas Boileau

Malcom Hunt

Robin Courtoise

 

THE BEAUTY

Marc Angele

Aleksandra Todorovic

Pascal Schelbli

Noel Winzen

 


Recreating the Vatican and Sistine Chapel for Netflix’s The Two Popes

The Two Popes, directed by Fernando Meirelles, stars Anthony Hopkins as Pope Benedict XVI and Jonathan Pryce as current pontiff Pope Francis in a story about one of the most dramatic transitions of power in the Catholic Church’s history. The film follows a frustrated Cardinal Bergoglio (the future Pope Francis) who in 2012 requests permission from Pope Benedict to retire because of his issues with the direction of the church. Instead, facing scandal and self-doubt, the introspective Benedict summons his harshest critic and future successor to Rome to reveal a secret that would shake the foundations of the Catholic Church.

London’s Union was approached in May 2017 and supervised visual effects on location in Argentina and Italy over several months. A large proportion of the film takes place within the walls of Vatican City. The Vatican was not involved in the production and the team had very limited or no access to some of the key locations.

Under the direction of production designer Mark Tildesley, the production replicated parts of the Vatican at Rome’s Cinecittà Studios, including a life-size, open-ceilinged Sistine Chapel that took two months to build.

The team LIDAR-scanned everything available and set about amassing as much reference material as possible — photographing from a permitted distance, scanning the set builds and buying every photographic book they could lay their hands on.

From this material, the team set about building 3D models — created in Autodesk Maya — of St. Peter’s Square, the Basilica and the Sistine Chapel. The environments team was tasked with texturing all of these well-known locations using digital matte painting techniques, including recreating Michelangelo’s masterpiece on the ceiling of the Sistine Chapel.

The story centers on two key changes of pope, in 2005 and 2013. Those events attracted huge attention, filling St. Peter’s Square with people eager to discover the identity of the new pope and celebrate his ascension, while news crews from around the world camped out to provide coverage for billions of Catholics.

To recreate these scenes, the crew shot at a school in Rome (Ponte Mammolo) that has the same paving pattern as St. Peter’s Square. A cast of 300 extras was shot in blocks, in different positions and at different times of day, with costume tweaks such as the addition of umbrellas, to build a library flexible enough to recreate these moments in post at different times of day and in different weather conditions.

Union also called on Clear Angle Studios to individually scan 50 extras to provide additional options for the VFX team. This was an ambitious crowd project: the team couldn’t shoot in the actual location, and the end result had to stand up at 4K in very close proximity to the camera. Union designed a Houdini-based system to deal with the number of assets and clothing in such a way that the studio could easily art-direct the crowd members as individuals, allow the director to choreograph them and deliver a believable result.
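
The internals of Union’s system aren’t public, but the general idea of seeding per-agent variation so a crowd is both randomized and art-directable can be sketched in a few lines of Python. The costume, prop and cycle names below are hypothetical placeholders, not Union’s actual asset library.

import random

# Hypothetical variation tables; Union's actual asset names are not public.
COSTUMES = ["clergy_black", "tourist_raincoat", "press_vest", "nun_habit"]
PROPS = [None, "umbrella", "camera", "flag"]
CYCLES = ["idle_sway", "cheer_wave", "phone_photo", "kneel_pray"]

def assign_variation(agent_id, seed=1234):
    """Deterministically pick a look and an animation cycle for one agent."""
    rng = random.Random(seed * 100003 + agent_id)  # per-agent, reproducible
    return {
        "costume": rng.choice(COSTUMES),
        "prop": rng.choice(PROPS),
        "cycle": rng.choice(CYCLES),
    }

# Hand-placed overrides let a supervisor art-direct hero extras near camera.
overrides = {17: {"costume": "clergy_black", "prop": None, "cycle": "kneel_pray"}}

crowd = {i: {**assign_variation(i), **overrides.get(i, {})} for i in range(300)}

Seeding by agent ID keeps every render reproducible, while an override table is where individuals can be dressed and choreographed by hand.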

Union conducted several motion capture shoots in-house to provide specific animation cycles that married with the occasions being recreated. This provided even more authentic-looking crowds for the post team.

Union worked on a total of 288 VFX shots, including greenscreens, set extensions, window reflections, muzzle flashes, fog, rain and a storm that included a lightning strike on the Basilica.

In addition, the team did a significant amount of de-aging work to accommodate the film’s eight-year main narrative timeline as well as a long period in Pope Francis’ younger years.


VFX pipeline trends for 2020

By Simon Robinson

A new year, more trends — some burgeoning, and others that have been dominating industry discussions for a while. Underpinning each is the common sentiment that 2020 seems especially geared toward streamlining artist workflows, more so than ever before.

There’s an increasing push for efficiency; not just through hardware but through better business practices and solutions to throughput problems.

Exciting times lie ahead for artists and studios everywhere. I believe the trends below form the pillars of this key industry mission for 2020.

Machine Learning Will Make Better, Faster Artists
Machines are getting smarter. AI software is becoming more universally applied in the VFX industry, and with this comes benefits and implications for artist workflows.

As adoption of machine learning increases, the core challenge for 2020 lies in artist direction and participation, especially since the M.O. of machine learning is its ability to solve entire problems on its own.

The issue is this: if you rely on something 99.9% of the time, what happens if it fails in that extra 0.1%? Can you fix it? While ML means less room for human error, will people have the skills to fix something gone wrong if they don’t need them anymore?

So this issue necessitates building a bridge between artist and algorithm. ML can do the hard work, giving artists the time to get creative and perfect their craft in the final stages.

Gemini Man

We’ve seen this bridge pay off in the face of accessible, inexpensive deepfake technology giving rise to “quick and easy” deepfakes, which rely entirely on ML. In contrast, crossing the uncanny valley remains the realm of highly skilled artists, requiring thought, artistry and care to produce something that tricks the human eye. Weta Digital’s work on Gemini Man is a prime example.

As massive projects like these continue to emerge, studios strive for efficiency and the ability to produce at scale. Since ML and AI are all about data, the manipulation of both can unlock endless potential for the speed and scale at which artists can operate.

Foundry’s own efforts in this regard revolve around improving the persistence and availability of captured data. We’re figuring out how to deliver data in a more sensible way downstream, from initial capture to timestamping and synchronization, and then final arrangement in an easy, accessible format.

Underpinning our research into this is Universal Scene Description (USD), which you’ve probably heard about…

USD Becomes Uniform
Despite the legacy and prominence of its development at Pixar, the relatively recent open-sourcing and gradual adoption of Universal Scene Description mean that it’s still maturing for wider pipelines and workflows.

New iterations of USD are now being released at a three-month cadence, where before it used to be every two months. Each new release brings improvements as growing pains and teething issues are ironed out, and the slower pace provides some respite for artists who rely on specific versions of USD.

But challenges still exist, namely mismatched USD pipelines and scattered documentation, which means that solutions to these problems can’t easily be found. Currently, no one is officially rubber-stamping USD best practice.

Capturing volumetric datasets for future testing.

To solve this issue, the industry needs a universal application of USD so it can exist in pipelines as an application-standard plugin to prevent an explosion of multiple variants of USD, which may cause further confusion.

If this comes off, documentation could be made uniform, and information could be shared across software, teams and studios with even more ease and efficiency.

It’ll make Foundry’s life easier, too. USD is vital to us to power interoperability in our products, allowing clients to extend their software capabilities on top of what we do ourselves.

At Foundry, our lighting tool, Katana, uses USD Hydra tech as the basis for much improved viewer experiences. Most recently, its Advanced Viewport Technology aims at delivering a consistent visual experience across software.

This wouldn’t be possible without USD. Even in its current state, the benefits are tangible, and its core principles — flexibility, modularity, interoperability — underpin 2020’s next big trends.

Artist Pipelines Will Look More Iterative 
The industry is asking, “How can you be more iterative through everything?” Calls for this will only grow louder as we move into next year.

There’s an increasing push for efficiency as the common sentiment prevails: too much work, not enough people to do it. While maximizing hardware usage might seem like a go-to solution to this, the actual answer lies in solving throughput problems by improving workflows and facilitating sharing between studios and artists.

Increasingly, VFX pipelines don’t work well as a waterfall structure anymore, where each stage is done, dusted and passed on to the next department in a structured, rigid process.

Instead, artists are thinking about how data persists throughout their pipeline and how to make use of it in a smart way. The main aim is to iterate on everything simultaneously for a more fluid, consistent experience across teams and studios.

USD helps tremendously here, since it captures all of the data layers and iterations in one. Artists can go to any one point in their pipeline, change different aspects of it, and it’s all maintained in one neat “chunk.” No waterfalls here.
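
As a rough illustration of that layering idea, here is a minimal sketch using USD’s Python API (pxr); the file and prim names are invented for the example. Layout authors a base layer, lighting stacks a sparse override on top, and neither department overwrites the other’s file.

from pxr import Usd, Sdf, UsdGeom

# Layout authors the base geometry in its own layer.
layout = Usd.Stage.CreateNew("shot010_layout.usda")
prop = UsdGeom.Sphere.Define(layout, "/World/Prop")
prop.GetRadiusAttr().Set(1.0)
layout.GetRootLayer().Save()

# Lighting gets an (initially empty) layer of its own.
Sdf.Layer.CreateNew("shot010_lighting.usda").Save()

# The shot layer simply stacks the two; stronger layers win, nothing is overwritten.
shot_layer = Sdf.Layer.CreateNew("shot010.usda")
shot_layer.subLayerPaths = ["shot010_lighting.usda", "shot010_layout.usda"]
shot_layer.Save()

# A lighting artist opens the shot, targets their own layer and overrides the prop
# without touching layout's file; the iteration lives as a sparse opinion.
stage = Usd.Stage.Open("shot010.usda")
stage.SetEditTarget(Usd.EditTarget(Sdf.Layer.FindOrOpen("shot010_lighting.usda")))
UsdGeom.Sphere(stage.GetPrimAtPath("/World/Prop")).GetRadiusAttr().Set(2.0)
stage.GetEditTarget().GetLayer().Save()

Because each department’s opinions sit in their own layer of the same stage, anyone can step back to any point in the stack and adjust it without a hand-off.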

Compositing in particular benefits from this new style of working. Being able to easily review in context lends an immense amount of efficiency and creativity to artists working in post production.

That’s Just the Beginning
Other drivers for artist efficiency that may gain traction in 2020 include: working across multiple shots (currently featured in Nuke Studio), process automation, and volumetric-style workflows to let artists work with 3D representations featuring depth and volume.

The bottom line is that 2020 looks to be the year of the artist — and we can’t wait.


Simon Robinson is the co-founder and chief scientist at Foundry.


ILM’s Pablo Helman on The Irishman‘s visual effects

By Karen Moltenbrey

When a film stars Robert De Niro, Joe Pesci and Al Pacino, well, expectations are high. These are no ordinary actors, and Martin Scorsese is no ordinary director. These are movie legends. And their latest project, Netflix’s The Irishman, is no ordinary film. It features cutting-edge de-aging technology from visual effects studio Industrial Light & Magic (ILM) and earned the film’s VFX supervisor, Pablo Helman, an Oscar nomination.

The Irishman, adapted from the book “I Heard You Paint Houses,” tells the story of an elderly Frank “The Irishman” Sheeran (De Niro), whose life is nearing the end, as he looks back on his earlier years as a truck driver-turned-mob hitman for Russell Bufalino (Pesci) and family. While reminiscing, he recalls the role he played in the disappearance of his longtime friend, Jimmy Hoffa (Al Pacino), former president of the Teamsters, who famously disappeared in 1975 at the age of 62, and whose body has never been found.

The film contains 1,750 visual effects shots, most of which involve the de-aging of the three actors. In the film, the actors are depicted at various stages of their lives — mostly younger than their present age. Pacino is the least aged of the three actors, since he enters the story about a third of the way through — from the 1940s to his disappearance three decades later. He was 78 at the time of filming, and he plays Hoffa at various ages, from age 44 to 62. De Niro, who was 76 at the time of filming, plays Sheeran at certain points from age 20 to 80. Pesci plays Bufalino between age 53 and 83.

For the significantly older Sheeran, during his introspection, makeup was used. However, making the younger versions of all three actors was much more difficult. Indeed, current technology makes it possible to create believable younger digital doubles. But, it typically requires actors to perform alone on a soundstage wearing facial markers and helmet cameras, or requires artists to enhance or create performances with CG animation. That simply would not do for this film. Neither the actors nor Scorsese wanted the tech to interfere with the acting process in any way. Recreating their performances was also off the table.

“They wanted a technology that was non-intrusive and one that would be completely separate from the performances. They didn’t want markers on their faces, they did not want to wear helmet cams and they did not want to wear the gray [markered] pajamas that we normally use,” says VFX supervisor Helman. “They also wanted to be on set with theatrical lighting, and there wasn’t going to be any kind of re-shoots of performances outside the set.”

In a nutshell, ILM needed a markerless approach that occurred on-set during filming. To this end, ILM spent two years developing Flux, a new camera system and software, whereby a three-camera rig would extract performance data from lighting and textures captured on set and translate that to 3D computer-generated versions of the actors’ younger selves.

The camera rig was developed in collaboration with The Irishman’s DP, Rodrigo Prieto, and camera maker ARRI. It included two high-resolution (3.8K) Alexa Mini witness cameras that were modified with infrared rings; the two cameras were attached to and synched up with the primary sensor camera (the director’s Red Helium 8K camera). The infrared light from the two cameras was necessary to help neutralize any shadows on the actors’ faces, since Flux does not handle shadows well, yet remained “unseen” by the production camera.

Flux, meanwhile, used that camera information and translated it into a deformable geometry mesh. “Flux takes that information from the three cameras and compares it to the lighting on set, deforms the geometry and changes the geometry and the shape of the actors on a frame-by-frame basis,” says Helman.

In fact, ILM continued to develop the software as it was working on the film. “It’s kind of like running the Grand Prix while you’re building the Ferrari,” Helman adds. “Then, you get better and better, and faster and faster, and your software gets better, and you are solving problems and learning from the software. Yes, it took a long time to do, but we knew we had time to do it and make it work.”

Pablo Helman (right) on The Irishman set.

At the beginning of the project, prior to the filming, the actors were digitally scanned performing a range of facial movements using ILM’s Medusa system, as well as on a light stage, which captured texture info under different lighting conditions. All that data was then used to create a 3D contemporary digital double of each of the actors. The models were sculpted in Autodesk’s Maya and with proprietary tools running on ILM’s Zeno platform.

ILM applied the 3D models to the exact performance data of each actor captured on set with the special camera rig, so the physical performances were now digital. No keyframe animation was used. However, the characters were still contemporary to the actors’ ages.

As Helman explains, after the performance, the footage was returned to ILM, where an intense matchmove was done of the actors’ bodies and heads. “The first thing that got matchmoved was the three cameras that were documenting what the actor was doing in the performance, and then we matchmoved the lighting instruments that were lighting the actor because Flux needs that lighting information in order to work,” he says.

Helman likens Flux to a black box full of little drawers where various aspects are inserted, like the layout, the matchimation, the lighting information and so forth, and it combines all that information to come up with the geometry for the digital double.

The actual de-aging occurs in modeling, using a combination of libraries that were created for each actor and connected to and referenced by Flux. Later, modelers created the age variations, starting with the youngest version of each person. Variants were then generated gradually using a slider to move through life’s timeline. This process was labor-intensive, as artists also had to erase the effects of time, such as wrinkles and age spots.
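
ILM’s Flux retargeting is proprietary, but the slider concept itself, interpolating between sculpted age variants that share topology, can be illustrated with a toy NumPy blend; the function and the ages below are illustrative only, not ILM’s pipeline.

import numpy as np

def blend_age_variant(base_verts, young_verts, age, base_age=76, young_age=20):
    """Linearly blend vertex positions between two sculpted variants of one actor.

    base_verts and young_verts are (N, 3) arrays with matching topology;
    age is where the slider sits on the character's timeline.
    """
    t = np.clip((base_age - age) / float(base_age - young_age), 0.0, 1.0)
    return (1.0 - t) * base_verts + t * young_verts

# Toy example: dial a 4-vertex "mesh" back to roughly age 46.
contemporary = np.zeros((4, 3))
youngest = np.ones((4, 3))
mid_forties = blend_age_variant(contemporary, youngest, age=46)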

Since The Irishman is not an action movie, creating motion for decades-younger versions of the characters was not an issue. However, a motion analyst was on set to work with the actors as they played the younger versions of their characters. Some visual effects work also helped thin out the younger characters.

Helman points out that Scorsese stressed that he did not want to see a younger version of the actors playing roles from the past; he wanted to see younger versions of these particular characters. “He did not want to rewind the clock and see Robert De Niro as Jimmy Conway in 1990’s Goodfellas. He wanted to see De Niro as a 30-year-younger Frank Sheeran,” he explains.

When asked which actor posed the most difficulty to de-age, Helman explains that once you crack the code of capturing the performance and then retargeting the performance to a younger variation of the character, there’s little difference. Nevertheless, De Niro had the most screen time and the widest age range.

Performance capture began about 15 years ago, and Helman sees this achievement as a natural evolution of the technology. “Eventually those [facial] markers had to go away because for actors, that’s a very interesting way to work, if you really think about it. They have to try to ignore the markers and not be distracted by all the other intrusive stuff going on,” Helman says. “That time is now gone. If you let the actors do what they do, the performances will be so much better and the shots will look so much better because there is eye contact and context with another actor.”

While this technology is a quantum leap forward, there are still improvements to be made. The camera rig needs to get smaller and the software faster — and ILM is working on both aspects, Helman says. Nevertheless, the accomplishment made here is impressive and groundbreaking — the first markerless system that captures performance on set with theatrical lighting, thanks to more than 500 artists working around the world to make this happen. As a result, it opens up the door for more storytelling and acting options — not only for de-aging, but for other types of characters too.

Commenting on his Oscar nomination, Helman said, “It was an incredible, surreal experience to work with Scorsese and the actors, De Niro, Pacino and Pesci, on this movie. We are so grateful for the trust and support we got from the producers and from Netflix, and the talent and dedication of our team. We’re honored to be recognized by our colleagues with this nomination.”


Karen Moltenbrey is a veteran writer, covering visual effects and post production.


Shape+Light VFX boutique opens in LA with Trent, Lehr at helm


Visual effects and design boutique Shape+Light has officially launched in Santa Monica. At the helm are managing director/creative director Rob Trent and executive producer Cara Lehr. Shape+Light provides visual effects, design and finishing services for agency and brand-direct clients. The studio, which has been quietly operating since this summer, has already delivered work for Nike, Apple, Gatorade, Lexus and Procter & Gamble.

Gatorade

Trent is no stranger to running VFX boutiques. An industry veteran, he began his career as a Flame artist, working at studios including Imaginary Forces and Digital Domain, and then at Asylum VFX as a VFX supervisor/creative director before co-founding The Mission VFX in 2010. In 2015, he established Saint Studio. During his career he has worked on big campaigns, including the launch of the Apple iPhone with David Fincher, celebrating the NFL with Nike and Michael Mann, and honoring moms with Alma Har’el and P&G for the Olympics. He has also contributed to award-winning feature films such as The Curious Case of Benjamin Button, Minority Report, X-Men and Zodiac.

Lehr is an established VFX producer with over 20 years of experience in both commercials and features. She has worked for many of LA’s leading VFX studios, including Zoic Studios, Asylum VFX, Digital Domain, Brickyard VFX and Psyop. She most recently served as EP at Method Studios, where she was on staff since 2012. She has worked on ad campaigns for brands including Apple, Microsoft, Nike, ESPN, Coca Cola, Taco Bell, AT&T, the NBA, Chevrolet and more.

Maya 2020 and Arnold 6 now available from Autodesk

Autodesk has released Autodesk Maya 2020 and Arnold 6 with Arnold GPU. Maya 2020 brings animators, modelers, riggers and technical artists a host of new tools and improvements for CG content creation, while Arnold 6 allows for production rendering on both the CPU and GPU.

Maya 2020 adds more than 60 new updates, as well as performance enhancements and new simulation features to Bifrost, the visual programming environment in Maya.

Maya 2020

Release highlights include:

— Over 60 animation features and updates to the graph editor and time slider.
— Cached Playback: New preview modes, layered dynamics caching and more efficient caching of image planes.
— Animation bookmarks: Mark, organize and navigate through specific events in time and frame playback ranges.
— Bifrost for Maya: Performance improvements, Cached Playback support and new MPM cloth constraints.
— Viewport improvements: Users can interact with and select dense geometry or a large number of smaller meshes faster in the viewport and UV editors.
— Modeling enhancements: New Remesh and Retopologize features.
— Rigging improvements: Matrix-driven workflows, nodes for precisely tracking positions on deforming geometry and a new GPU-accelerated wrap deformer.

The Arnold GPU is based on Nvidia’s OptiX framework and takes advantage of Nvidia RTX technology. Arnold 6 highlights include:

— Unified renderer: Toggle between CPU and GPU rendering.
— Lights, cameras and more: Support for OSL, OpenVDB volumes, on-demand texture loading, most LPEs, lights, shaders and all cameras.
— Reduced GPU noise: Comparable to CPU noise levels when using adaptive sampling, which has been improved to yield faster, more predictable results regardless of the renderer used.
— Optimized for Nvidia RTX hardware: Scale up rendering power when production demands it.
— New USD components: Hydra render delegate, Arnold USD procedural and USD schemas for Arnold nodes and properties are now available on GitHub.

Arnold 6

— Performance improvements: Faster creased subdivisions, an improved Physical Sky shader and dielectric microfacet multiple scattering.

Maya 2020 and Arnold 6 are available now as standalone subscriptions or with a collection of end-to-end creative tools within the Autodesk Media & Entertainment Collection. Monthly, annual and three-year single-user subscriptions of Arnold are available on the Autodesk e-store.

Arnold GPU is also available to try with a free 30-day trial of Arnold 6. Arnold GPU is available in all supported plug-ins for Autodesk Maya, Autodesk 3ds Max, SideFX Houdini, Maxon Cinema 4D and Foundry Katana.

Storage for Visual Effects

By Karen Moltenbrey

When creating visual effects for a live-action film or television project, the artist digs right in. But not before the source files are received and backed up. Of course, during the process, storage again comes into play, as the artist’s work is saved and composited into the live-action file and then saved (and stored) yet again. At mid-sized Artifex Studios and the larger Jellyfish Pictures, two visual effects studios, storage might not be the sexiest part of the work they do, but it is vital to a successful outcome nonetheless.

Artifex Studios
An independent studio in Vancouver, BC, Artifex Studios is a small- to mid-sized visual effects facility producing film and television projects for networks, film studios and streaming services. Founded in 1997 by VFX supervisor Adam Stern, the studio has grown over the years from a one- to two-person operation to one staffed by 35 to 45 artists. During that time it has built up a lengthy and impressive resume, from Charmed, Descendants 3 and The Crossing to Mission to Mars, The Company You Keep and Apollo 18.

To handle its storage needs, Artifex uses the Qumulo QC24 four-node storage cluster for its main storage system, along with G-Tech and LaCie portable RAIDs and Angelbird Technologies and Samsung portable SSD drives. “We’ve been running [Qumulo] for several years now. It was a significant investment for us because we’re not a huge company, but it has been tremendously successful for us,” says Stern.

“The most important things for us when it comes to storage are speed, data security and minimal downtime. They’re pretty obvious things, but Qumulo offered us a system that eliminated one of the problems we had been having with the [previous] system bogging down as concurrent users were moving the files around quickly between compositors and 3D artists,” says Stern. “We have 40-plus people hitting this thing, pulling in 4K, 6K, 8K footage from it, rendering and [creating] 3D, and it just ticks along. That was huge for us.”

Of course, speed is of utmost importance, but so is maintaining the data’s safety. To this end, the new system self-monitors, taking its own snapshots to maintain its own health and making sure there are constantly rotating levels of backups. Having the ability to monitor everything about the system is a big plus for the studio as well.

Because data safety and security are non-negotiable, Artifex uses Google Cloud services along with Qumulo for incremental storage, backing up incrementally to Google Cloud every night. “So while Qumulo is doing its own snapshots incrementally, we have another hard-drive system from Synology, which is more of a prosumer NAS system, whose only job is to do a local current backup,” Stern explains. “So in-house, we have two local backups between Qumulo and Synology, and then we have a third backup going to the cloud every night that’s off-site. When a project is complete, we archive it onto two sets of local hard drives, and one leaves the premises and the other is stored here.” At this point, the material is taken off the Qumulo system, and seven days later, the last of the so-called snapshots is removed.
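
As a rough sketch of what such a nightly rotation might look like (the mount points, bucket name and tooling here are assumptions, not Artifex’s actual setup), a simple script could mirror the production share to the local NAS and then push an incremental copy off-site:

import datetime
import subprocess

# Hypothetical mount points and bucket; the real paths are specific to the studio.
SOURCE = "/mnt/qumulo/projects/"
LOCAL_NAS = "/mnt/synology/nightly/current/"
OFFSITE_BUCKET = "gs://studio-offsite/projects"

def nightly_backup():
    # Local mirror on the NAS; rsync only copies what changed since last night.
    subprocess.run(["rsync", "-a", "--delete", SOURCE, LOCAL_NAS], check=True)

    # Off-site incremental push to Google Cloud Storage.
    subprocess.run(["gsutil", "-m", "rsync", "-r", SOURCE, OFFSITE_BUCKET], check=True)

    print(datetime.date.today().isoformat() + ": nightly backup complete")

if __name__ == "__main__":
    nightly_backup()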

As soon as data comes into Artifex — either via Aspera, Signiant’s Media Shuttle or hard disks — the material is immediately transferred to the Qumulo system, and then it is cataloged and placed into the studio’s ftrack database, which the studio uses for shot tracking. Then, as Stern says, the floodgates open, and all the artists, compositors, 3D team members and admin coordination team members access the material that resides on the Qumulo system.
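
The cataloging step could look something like the following minimal sketch using the ftrack Python API; the project, shot and path names are placeholders, and the actual ingest logic at Artifex is certainly more involved.

import ftrack_api

# Credentials are read from FTRACK_SERVER, FTRACK_API_USER and FTRACK_API_KEY.
session = ftrack_api.Session()

# Placeholder names; a real ingest script would derive these from the delivery.
project = session.query('Project where name is "demo_show"').one()
sequence = session.create('Sequence', {'name': 'sq010', 'parent': project})
shot = session.create('Shot', {'name': 'sh010', 'parent': sequence})

# Track where the plate landed on the central storage as metadata on the shot.
shot['metadata']['plate_path'] = '/mnt/qumulo/demo_show/sq010/sh010/plates/v001'

session.commit()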

Desktops at the studio have local storage, generally an SSD built into the machine, but as Stern points out, that is a temporary solution used by the artists while working on a specific shot, not to hold studio data.

Artifex generally works on a handful of projects simultaneously, including the Nickelodeon horror anthology Are You Afraid of the Dark? “Everything we do here requires storage, and we’re always dealing with high-resolution footage, and that project was no exception,” says Stern. For instance, the series required Artifex to simulate 10,000 CG cockroaches spilling out of every possible hole in a room — work that required a lot of high-speed caching.

“FX artists need to access temporary storage very quickly to produce those simulations. In terms of the Qumulo system, we need it to retrieve files at the speed our effects artists can simulate and cache, and make sure they are able to manage what can be thousands and thousands of files generated just within a few hours.”

Similarly, for Netflix’s Wu Assassins, the studio generated multiple simulations of CG smoke and fog within SideFX Houdini and again had to generate thousands and thousands of cache files for all the particles and volume information. Just as it did with the caching for the CG cockroaches, the current system handled caching for the smoke and fog quite efficiently.

At this point, Stern says the vendor is doing some interesting things that his company has not yet taken advantage of. For instance, today one of the big pushes is working in the cloud and integrating that with infrastructures and workflows. “I know they are working on that, and we’re looking into that,” he adds. There are also some new equipment features, “bleeding-edge stuff” Artifex has not explored yet. “It’s OK to be cutting-edge, but bleeding-edge is a little scary for us,” Stern notes. “I know they are always playing with new features, but just having the important foundation of speed and security is right where we are at the moment.”

Jellyfish Pictures
When it comes to big projects with big storage needs, Jellyfish Pictures is no fish out of water. The studio works on myriad projects, from Hollywood blockbusters like Star Wars to high-end TV series like Watchmen to episodic animation like Floogals and Dennis & Gnasher: Unleashed! Recently, it has embarked on an animated feature for DreamWorks and has a dedicated art department that works on visual development for substantial VFX projects and children’s animated TV content.

To handle all this work, Jellyfish has five studios across the UK: four in London and one in Sheffield, in the north of England. What’s more, in early December, Jellyfish expanded further with a brand-new virtual studio in London seating over 150 artists — increasing its capacity to over 300 people. In line with this expansion, Jellyfish is removing all on-site infrastructure from its existing locales and moving everything to a co-location. This means that all five present locations will be wholly virtual as well, making Jellyfish the largest VFX and animation studio in the world operating this way, contends CTO Jeremy Smith.

“We are dealing with shows that have very large datasets, which, therefore, require high-performance computing. It goes without saying, then, that we need some pretty heavy-duty storage,” says Smith.

Not only must the storage solution be able to handle Jellyfish’s data needs, it must also fit into its operational model. “Even though we work across multiple sites, we don’t want our artists to feel that. We need a storage system that can bring together all locations into one centralized hub,” Smith explains. “As a studio, we do not rely on one storage hardware vendor; therefore, we need to work with a company that is hardware-agnostic in addition to being able to operate in the cloud.”

Also, Jellyfish is a TPN-assessed studio and thus has to work with vendors that are TPN compliant — another serious, and vital, consideration when choosing its storage solution. TPN is an initiative between the Motion Picture Association of America (MPAA) and the Content Delivery and Security Association (CDSA) that provides a set of requirements and best practices around preventing leaks, breaches and hacks of pre-released, high-valued media content.

With all those factors in mind, Jellyfish uses PixStor from Pixit Media for its storage solution. PixStor is a software-defined storage solution that allows the studio to use various hardware storage from other vendors under the hood. With PixStor, data moves seamlessly through many tiers of storage — from fast flash and disk tiers to cost-effective, high-capacity object storage to the cloud. In addition, the studio uses NetApp storage within a different part of the same workflow on Dell R740 hardware and alternates between SSD and spinning disks, depending on the purpose of the data and the file size.

“We’ve future-proofed our studio with the Mellanox SN2100 switch for the heavy lifting, and for connecting our virtual workstations to the storage, we are using several servers from the Dell N3000 series,” says Smith.

As a wholly virtual studio, Jellyfish has no storage housed locally; it all sits in a co-location, which is accessed through remote workstations powered by Teradici’s PCoIP technology.

According to Smith, becoming a completely virtual studio is a new development for Jellyfish. Nevertheless, the facility has been working with Pixit Media since 2014 and launched its first virtual studio in 2017, “so the building blocks have been in place for a while,” he says.

Prior to moving all the infrastructure off-site, Jellyfish ran its storage system locally out of its Brixton and Soho studios. Its own private cloud from Brixton powered Jellyfish’s Soho and Sheffield studios. The PixStor storage solutions in Brixton and Soho were linked via the solution’s PixCache. The switches and servers were still from Dell and Mellanox but were an older generation.

“Way back when, before we adopted this virtual world we are living in, we still worked with on-premises and inflexible storage solutions. It limited us in terms of the work we could take on and where we could operate,” says Smith. “With this new solution, we can scale up to meet our requirements.”

Now, however, using Mellanox SN2100, which has 100GbE, Jellyfish can deal with obscene amounts of data, Smith contends. “The way the industry is moving with 4K and 8K, even 16K being thrown around, we need to be ready,” he says.

Before the co-location, the different sites were connected through PixCache; now the co-location and public cloud are linked via Ngenea, which pre-caches files locally to the render node before the render starts. Furthermore, the studio is able to unlock true multi-tenancy with a single storage namespace, rapidly deploying logical TPN-accredited data separation and isolation and scaling up services as needed. “Probably two of the most important facets for us in running a successful studio: security and flexibility,” says Smith.

Artists access the storage via their Teradici Zero Clients, which, through the Dell switches, connect users to the standard Samba SMB network. Users who are working on realtime clients or in high resolution are connected to the Pixit storage through the Mellanox switch, where PixStor Native Client is used.

“Storage is a fundamental part of any VFX and animation studio’s workflow. Implementing the correct solution is critical to the seamless running of a project, as well as the security and flexibility of the business,” Smith concludes. “Any good storage system is invisible to the user. Only the people who build it will ever know the precision it takes to get it up and running — and that is the sign you’ve got the perfect solution.”


Karen Moltenbrey is a veteran writer, covering visual effects and post production.

Reallusion’s Headshot plugin for realistic digi-doubles via AI

Reallusion has introduced a plugin for Character Creator 3 to help create realistic-looking digital doubles. According to the company, the Headshot plugin uses AI technology to automatically generate a digital human in minutes from one single photo, and those characters are fully rigged for voice lipsync, facial expression and full body animation.

Headshot allows game developers and virtual production teams to quickly funnel a cast of digital doubles into iClone, Unreal, Unity, Maya, ZBrush and more. The idea is to allow the digital humans to go anywhere they like and give creators a solution to rapidly develop, iterate and collaborate in realtime.

The plugin has two AI modes: Auto Mode and Pro Mode. Auto Mode is a one-click solution for creating mid-rez digital human crowds. This process allows one-click head and hair creation for realtime 3D head models. It also generates a separate 3D hair mesh with alpha mask to soften edge lines. The 3D hair is fully compatible with Character Creator’s conformable hair format (.ccHair). Users can add them into their hair library, and apply them to other CC characters.

Headshot Pro Mode offers full control of the 3D head generation process with advanced features such as Image Matching, Photo Reprojection and Custom Mask, with texture resolution of up to 4,096 pixels.

The Image Matching Tool overlays an image reference plane for advanced head shape refinement and lens correction. With Photo Reprojection, users can easily fix the texture-to-mesh discrepancies resulting from face morph change.

Using high-rez source images and Headshot’s 1,000-plus morphs, users can get a scan-quality digital human face in 4K texture details. Additional textures include normal, AO, roughness, metallic, SSS and Micro Normal for more realistic digital human rendering.

The 3D Head Morph System is designed to achieve the professional and detailed look of 3D scan models. The 3D sculpting design allows users to hover over a control area and use directional mouse drags to adjust the corresponding mesh shape, from full head and face sculpting to individual features (head contour, face, eyes, nose, mouth and ears), drawing on more than 1,000 head morphs. The system is now free with a purchase of the Headshot plugin.

The Headshot plugin for Character Creator is $199 and comes with the content pack Headshot Morph 1,000+ ($99). Character Creator 3 Pipeline costs $199.

Redshift integrates Cinema 4D noises, nodes and more

Maxon and Redshift Rendering Technologies have released Redshift 3.0.12, which has native support for Cinema 4D noises and deeper integration with Cinema 4D, including the option to define materials using Cinema 4D’s native node-based material system.

Cinema 4D noise effects have been in demand within other 3D software packages because of their flexibility, efficiency and look. Native support in Redshift means that users of other DCC applications can now access Cinema 4D noises by using Redshift as their rendering solution. Procedural noise allows artists to easily add surface detail and randomness to otherwise perfect surfaces. Cinema 4D offers 32 different types of noise and countless variations based on settings. Native support for Cinema 4D noises means Redshift can preserve GPU memory while delivering high-quality rendered results.

Redshift 3.0.12 provides content creators deeper integration of Redshift within Cinema 4D. Redshift materials can now be defined using Cinema 4D’s nodal material framework, introduced in Release 20. As well, Redshift materials can use the Node Space system introduced in Release 21, which combines the native nodes of multiple render engines into a single material. Redshift is the first to take advantage of the new API in Cinema 4D to implement its own Node Spaces. Users can now also use any Cinema 4D view panel as a Redshift IPR (interactive preview render) window, making it easier to work within compact layouts and interact with a scene while developing materials and lighting.

Redshift 3.0.12 is immediately available from the Redshift website.

Maxon acquired Redshift in April 2019.

Framestore VFX will open in Mumbai in 2020

Oscar-winning creative studio Framestore will be opening a full-service visual effects studio in Mumbai in 2020 to target India’s booming creative industry. The studio will be located in the Nesco IT Park in Goregaon, in the center of Mumbai’s technology district. The news underscores Framestore’s continued interest in India, following the company’s major investment in Jesh Krishna Murthy’s VFX studio, Anibrain, in 2017.

“Mumbai represents a rolling of wheels that were set in motion over two years ago,” says Framestore founder/CEO William Sargent. “Our investment in Anibrain has grown considerably, and we continue in our partnership with Jesh Krishna Murthy to develop and grow that business. Indeed, they will become a valued production partner to our Mumbai offering.”

Framestore plans to hire considerably in the coming months, aiming to build an initial 500-strong team that combines existing Framestore talent with the best of local Indian expertise. Mumbai will work alongside the global network, including London and Montreal, to create a cohesive virtual team delivering high-quality international work.

“Mumbai has become a center of excellence in digital filmmaking. There’s a depth of talent that can deliver to the scale of Hollywood with the color and flair of Bollywood,” Sargent continues. “It’s an incredibly vibrant city and its presence on the international scene is holding us all to a higher standard. In terms of visual effects, we will set the standard here as we did in Montreal almost eight years ago.”

 

London’s Freefolk beefs up VFX team

Soho-based visual effects studio Freefolk, which has seen growth in its commercials and longform work, has expanded its staff to meet that demand. As part of the uptick in work, Freefolk promoted Cheryl Payne from senior producer to head of commercial production. Additionally, Laura Rickets has joined as senior producer, and 2D artist Bradley Cocksedge has been added to the commercials VFX team.

Payne, who has been with Freefolk since the early days, has worked on some of the studio’s biggest commercials, including Warburtons for Engine, Peloton for Dark Horses and Cadburys for VCCP.

Rickets comes to Freefolk with over 18 years of production experience working at some of the biggest VFX houses in London, including Framestore, The Mill and Smoke & Mirrors, as well as agency side for McCann. Since joining the team, Rickets has VFX-produced work on the I’m A Celebrity IDs, a set of seven technically challenging and CG-heavy spots for the new series of the show, as well as ads for the Rugby World Cup and Who Wants to Be a Millionaire?

Cocksedge is a recent graduate who joins from Framestore, where he was working as an intern on Fantastic Beasts: The Crimes of Grindelwald. While in school at the University of Hertfordshire, he interned at Freefolk and is happy to be back in a full-time position.

“We’ve had an exciting year and have worked on some really stand-out commercials, like TransPennine for Engine and the beautiful spot for The Guardian we completed with Uncommon, so we felt it was time to add to the Freefolk family,” says Fi Kilroe, Freefolk’s co-managing director/executive producer.

Main Image: (L-R) Cheryl Payne, Laura Rickets and Bradley Cocksedge

Behind the Title: MPC’s CD Morten Vinther

This creative director/director still jumps on the Flame and also edits from time to time. “I love mixing it up and doing different things,” he says.

NAME: Morten Vinther

COMPANY: Moving Picture Company, Los Angeles

CAN YOU DESCRIBE YOUR COMPANY?
From original ideas all the way through to finished production, we are an eclectic mix of hard-working and passionate artists, technologists and creatives who push the boundaries of what’s possible for our clients. We aim to move the audience through our work.

WHAT’S YOUR JOB TITLE?
Creative Director and Director

WHAT DOES THAT ENTAIL?
I guide our clients through challenging shoots and post. I try to keep us honest in terms of making sure that our casting is right and the team is looked after and has the appropriate resources available for the tasks ahead, while ensuring that we go above and beyond on quality and experience. In addition to this, I direct projects, pitch on new business and develop methodology for visual effects.

American Horror Story

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I still occasionally jump on Flame and comp a job — right now I’m editing a commercial. I love mixing it up and doing different things.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Writing treatments. The moments where everything is crystal clear in your head and great ideas and concepts are rushing onto paper like an unstoppable torrent.

WHAT’S YOUR LEAST FAVORITE?
Writing treatments. Staring at a blank page, writing something and realizing how contrived it sounds before angrily deleting everything.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
Early mornings. A good night’s sleep and freshly ground coffee creates a fertile breeding ground for pure clarity, ideas and opportunities.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I would be carefully malting barley for my next small batch of artisan whisky somewhere on the Scottish west coast.

Adidas Creators

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I remember making a spoof commercial at my school when I was about 13 years old. I became obsessed with operating cameras and editing, and I began to study filmmakers like Scorsese and Kubrick. After a failed career as a shopkeeper, a documentary production company in Copenhagen took mercy on me, and I started as an assistant editor.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
American Horror Story, Apple Unlock, directed by Dougal Wilson, and Adidas Creators, directed by Stacy Wall.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
If I had to single one out, it would probably be Apple’s Unlock commercial. The spot looks amazing, and the team was incredibly creative on this one. We enjoyed a great collaboration between several of our offices, and it was a lot of fun putting it together.

Apple’s Unlock

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My phone, laptop and PlayStation.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Some say social media rots your brains. That’s probably why I’m an Instagram addict.

CARE TO SHARE YOUR FAVORITE MUSIC TO WORK TO?
Odesza, SBTRKT, Little Dragon, Disclosure and classic reggae.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I recently bought a motorbike, and I spin around LA and Southern California most weekends. Concentrating on how to survive the next turn is a great way for me to clear the mind.

Director Robert Eggers talks about his psychological thriller The Lighthouse

By Iain Blair

Writer/director Robert Eggers burst onto the scene when his feature film debut, The Witch, won the Directing Award in the US Dramatic category at the 2015 Sundance Film Festival. He followed up that success by co-writing and directing another supernatural, hallucinatory horror film, The Lighthouse, which is set in the maritime world of the late 19th century.

L-R: Director Robert Eggers and cinematographer Jarin Blaschke on set.

The story begins when two lighthouse keepers (Willem Dafoe and Robert Pattinson) arrive on a remote island off the coast of New England for their month-long stay. But that stay gets extended as they’re trapped and isolated due to a seemingly never-ending storm. Soon, the two men engage in an escalating battle of wills, as tensions boil over and mysterious forces (which may or may not be real) loom all around them.

The Lighthouse has the power of an ancient myth. To tell this tale, which was shot in black and white, Eggers called on many of those who helped him create The Witch, including cinematographer Jarin Blaschke, production designer Craig Lathrop, composer Mark Korven and editor Louise Ford.

I recently talked to Eggers, who got his professional start directing and designing experimental and classical theater in New York City, about making the film, his love of horror and the post workflow.

Why does horror have such an enduring appeal?
My best argument is that there’s darkness in humanity, and we need to explore that. And horror is great at doing that, from the Gothic to a bad slasher movie. While I may prefer authors who explore the complexities in humanity, others may prefer schlocky films with jump scares that make you spill your popcorn, which still give them that dose of darkness. Those films may not be seriously probing the darkness, but they can relate to it.

This film seems more psychological than simple horror.
We’re talking about horror, but I’m not even sure that this is a horror film. I don’t mind the label, even though most wannabe auteurs are like, “I don’t like labels!” It started with an idea my brother Max had for a ghost story set in a lighthouse, which is not what this movie became. But I loved the idea, which was based on a true story. It immediately evoked a black and white movie on 35mm negative with a boxy aspect ratio of 1.19:1, like the old movies, and a fusty, dusty, rusty, musty atmosphere — the pipe smoke and all the facial hair — so I just needed a story that went along with all of that. (Laughs) We were also thinking a lot about influences and writers from the time — like Poe, Melville and Stevenson — and soaking up the jargon of the day. There were also influences like Prometheus and Proteus and God knows what else.

Casting the two leads was obviously crucial. What did Willem and Robert bring to their roles?
Absolute passion and commitment to the project and their roles. Who else but Willem can speak like a North Atlantic pirate stereotype and make it totally believable? Robert has this incredible intensity, and together they play so well against each other and are so well suited to this world. And they both have two of the best faces ever in cinema.

What were the main technical challenges in pulling it all together, and is it true you actually built the lighthouse?
We did. We built everything, including the 70-foot tower — a full-scale working lighthouse, along with its house and outbuildings — on Cape Forchu in Nova Scotia, which is this very dramatic outcropping of volcanic rock. Production designer Craig Lathrop and his team did an amazing job, and the reason we did that was because it gave us far more control than if we’d used a real lighthouse.

We scouted a lot but just couldn’t find one that suited us, and the few that did were far too remote to access. We needed road access and a place with the right weather, so in the end it was better to build it all. We also shot some of the interiors there as well, but most of them were built on soundstages and warehouses in Halifax since we knew it’d be very hard to shoot interiors and move the camera inside the lighthouse tower itself.

Your go-to DP, Jarin Blaschke, shot it. Talk about how you collaborated on the look and why you used black and white.
I love the look of black and white, because it’s both dreamlike and also more realistic than color in a way. It really suited both the story and the way we shot it, with the harsh landscape and a lot of close-ups of Willem and Robert. Jarin shot the film on the Panavision Millennium XL2, and we also used vintage Baltar lenses from the 1930s, which gave the film a great look, as they make the sea, water and sky all glow and shimmer more. He also used a custom cyan filter by Schneider Filters that gave us that really old-fashioned look. Then by using black and white, it kept the overall look very bleak at all times.

How tough was the shoot?
It was pretty tough, and all the rain and pounding wind you see onscreen is pretty much real. Even on the few sunny days we had, the wind was just relentless. The shoot was about 32 days, and we were out in the elements in March and April of last year, so it was freezing cold and very tough for the actors. It was very physically demanding.

Where did you post?
We did it all in New York at Harbor Post, with some additional ADR work at Goldcrest in London with Robert.

Do you like the post process?
I love post, and after the very challenging shoot, it was such a relief to just get in a warm, dry, dark room and start cutting and pulling it all together.

Talk about editing with Louise Ford, who also cut The Witch. How did that work?
She was with us on the shoot at a bed and breakfast, so I could check in with her at the end of the day. But it was so tough shooting that I usually waited until the weekends to get together and go over stuff. Then when we did the stage work at Halifax, she had an edit room set up there, and that was much easier.

What were the big editing challenges?
The DP and I developed such a specific and detailed cinema language without a ton of coverage and with little room for error that we painted ourselves into a corner. So that became the big challenge… when something didn’t work. It was also about getting the running time down but keeping the right pace since the performances dictate the pace of the edit. You can’t just shorten stuff arbitrarily. But we didn’t leave a lot of stuff on the cutting room floor. The assembly was just over two hours and the final film isn’t much shorter.

All the sound effects play a big role. Talk about the importance of sound and working on them with sound designer Damian Volpe, whose credits include Can You Ever Forgive Me?, Leave No Trace, Mudbound, Drive, Winter’s Bone and Margin Call.
It’s hugely important in this film, and Louise and I did a lot of work in the picture edit to create temps for Damian to inspire him. And he was so relentless in building up the sound design, and even creating weird sounds to go with the actual light, and to go with the score by Mark Korven, who did The Witch, and all the brass and unusual instrumentation he used on this. So the result is both experimental and also quite traditional, I think.

There are quite a few VFX shots. Who did them, and what was involved?
We had MELS and Oblique in Quebec, and Brainstorm Digital in New York also did some. The big one was that the movie is set on an island, but we shot on a peninsula that also had a lighthouse further north, which unfortunately didn’t look at all correct, so we framed it out a lot, but we still had to erase it some of the time. And our period-correct sea ship broke down and had to be towed around by other ships, so there was a lot of cleanup, along with all the safety cables we had to use for cliff shots with the actors.

Where did you do the DI, and how important is it to you?
We did it at Harbor with colorist Joe Gawler, and it was hugely important although it was fairly simple because there’s very little latitude on the Double-X film stock we used. We did a lot of fine detail work to finesse it, but it was a lot quicker than if it’d been in color.

Did the film turn out the way you hoped?
No, they always change and surprise you, but I’m very proud of what we did.

What’s next?
I’m prepping another period piece, but it’s not a horror film. That’s all I can say.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Alkemy X adds Albert Mason as head of production

Albert Mason has joined VFX house Alkemy X as head of production. He comes to Alkemy X with over two decades of experience in visual effects and post production. He has worked on projects directed by such industry icons as Peter Jackson on the Lord of the Rings trilogy, Tim Burton on Alice in Wonderland and Robert Zemeckis on The Polar Express. In his new role at Alkemy X, he will use his experience in feature films to target the growing episodic space.

A large part of Alkemy X’s work has been for episodic visual effects, with credits that include Amazon Prime’s Emmy-winning original series, The Marvelous Mrs. Maisel, USA’s Mr. Robot, AMC’s Fear the Walking Dead, Netflix’s Maniac, NBC’s Blindspot and Starz’s Power.

Mason began his career at MTV’s on-air promos department, sharpening his production skills on top series promo campaigns and as a part of its newly launched MTV Animation Department. He took an opportunity to transition into VFX, stepping into a production role for Weta Digital and spending three years working globally on the Lord of the Rings trilogy. He then joined Sony Pictures Imageworks, where he contributed to features including Spider-Man 3 and Ghost Rider. He has also produced work for such top industry shops as Logan, Rising Sun Pictures and Greymatter VFX.

“[Albert’s] expertise in constructing advanced pipelines that embrace emerging technologies will be invaluable to our team as we continue to bolster our slate of VFX work,” says Alkemy X president/CEO Justin Wineburgh.

2019 HPA Award winners announced

The industry came together on November 21 in Los Angeles to celebrate its own at the 14th annual HPA Awards. Awards were given to individuals and teams working in 12 creative craft categories, recognizing outstanding contributions to color grading, sound, editing and visual effects for commercials, television and feature film.

Rob Legato receiving Lifetime Achievement Award from presenter Mike Kanfer. (Photo by Ryan Miller/Capture Imaging)

As was previously announced, renowned visual effects supervisor and creative Robert Legato, ASC, was honored with this year’s HPA Lifetime Achievement Award; Peter Jackson’s They Shall Not Grow Old was presented with the HPA Judges Award for Creativity and Innovation; acclaimed journalist Peter Caranicas was the recipient of the very first HPA Legacy Award; and special awards were presented for Engineering Excellence.

The winners of the 2019 HPA Awards are:

Outstanding Color Grading – Theatrical Feature

WINNER: “Spider-Man: Into the Spider-Verse”
Natasha Leonnet // Efilm

“First Man”
Natasha Leonnet // Efilm

“Roma”
Steven J. Scott // Technicolor

Natasha Leonnet (Photo by Ryan Miller/Capture Imaging)

“Green Book”
Walter Volpatto // FotoKem

“The Nutcracker and the Four Realms”
Tom Poole // Company 3

“Us”
Michael Hatzer // Technicolor

 

Outstanding Color Grading – Episodic or Non-theatrical Feature

WINNER: “Game of Thrones – Winterfell”
Joe Finley // Sim, Los Angeles

 “The Handmaid’s Tale – Liars”
Bill Ferwerda // Deluxe Toronto

“The Marvelous Mrs. Maisel – Vote for Kennedy, Vote for Kennedy”
Steven Bodner // Light Iron

“I Am the Night – Pilot”
Stefan Sonnenfeld // Company 3

“Gotham – Legend of the Dark Knight: The Trial of Jim Gordon”
Paul Westerbeck // Picture Shop

“The Man in The High Castle – Jahr Null”
Roy Vasich // Technicolor

 

Outstanding Color Grading – Commercial  

WINNER: Hennessy X.O. – “The Seven Worlds”
Stephen Nakamura // Company 3

Zara – “Woman Campaign Spring Summer 2019”
Tim Masick // Company 3

Tiffany & Co. – “Believe in Dreams: A Tiffany Holiday”
James Tillett // Moving Picture Company

Palms Casino – “Unstatus Quo”
Ricky Gausis // Moving Picture Company

Audi – “Cashew”
Tom Poole // Company 3

 

Outstanding Editing – Theatrical Feature

Once Upon a Time… in Hollywood

WINNER: “Once Upon a Time… in Hollywood”
Fred Raskin, ACE

“Green Book”
Patrick J. Don Vito, ACE

“Rolling Thunder Revue: A Bob Dylan Story by Martin Scorsese”
David Tedeschi, Damian Rodriguez

“The Other Side of the Wind”
Orson Welles, Bob Murawski, ACE

“A Star Is Born”
Jay Cassidy, ACE

 

Outstanding Editing – Episodic or Non-theatrical Feature (30 Minutes and Under)

VEEP

WINNER: “Veep – Pledge”
Roger Nygard, ACE

“Russian Doll – The Way Out”
Todd Downing

“Homecoming – Redwood”
Rosanne Tan, ACE

“Withorwithout”
Jake Shaver, Shannon Albrink // Therapy Studios

“Russian Doll – Ariadne”
Laura Weinberg

 

Outstanding Editing – Episodic or Non-theatrical Feature (Over 30 Minutes)

WINNER: “Stranger Things – Chapter Eight: The Battle of Starcourt”
Dean Zimmerman, ACE, Katheryn Naranjo

“Chernobyl – Vichnaya Pamyat”
Simon Smith, Jinx Godfrey // Sister Pictures

“Game of Thrones – The Iron Throne”
Katie Weiland, ACE

“Game of Thrones – The Long Night”
Tim Porter, ACE

“The Bodyguard – Episode One”
Steve Singleton

 

Outstanding Sound – Theatrical Feature

WINNER: “Godzilla: King of the Monsters”
Tim LeBlanc, Tom Ozanich, MPSE // Warner Bros.
Erik Aadahl, MPSE, Nancy Nugent, MPSE, Jason W. Jennings // E Squared

“Shazam!”
Michael Keller, Kevin O’Connell // Warner Bros.
Bill R. Dean, MPSE, Erick Ocampo, Kelly Oxford, MPSE // Technicolor

“Smallfoot”
Michael Babcock, David E. Fluhr, CAS, Jeff Sawyer, Chris Diebold, Harrison Meyle // Warner Bros.

“Roma”
Skip Lievsay, Sergio Diaz, Craig Henighan, Carlos Honc, Ruy Garcia, MPSE, Caleb Townsend

“Aquaman”
Tim LeBlanc // Warner Bros.
Peter Brown, Joe Dzuban, Stephen P. Robinson, MPSE, Eliot Connors, MPSE // Formosa Group

 

Outstanding Sound – Episodic or Non-theatrical Feature

WINNER: “The Haunting of Hill House – Two Storms”
Trevor Gates, MPSE, Jason Dotts, Jonathan Wales, Paul Knox, Walter Spencer // Formosa Group

“Chernobyl – 1:23:45”
Stefan Henrix, Stuart Hilliker, Joe Beal, Michael Maroussas, Harry Barnes // Boom Post

“Deadwood: The Movie”
John W. Cook II, Bill Freesh, Mandell Winter, MPSE, Daniel Colman, MPSE, Ben Cook, MPSE, Micha Liberman // NBC Universal

“Game of Thrones – The Bells”
Tim Kimmel, MPSE, Onnalee Blank, CAS, Mathew Waters, CAS, Paula Fairfield, David Klotz

“Homecoming – Protocol”
John W. Cook II, Bill Freesh, Kevin Buchholz, Jeff A. Pitts, Ben Zales, Polly McKinnon // NBC Universal

 

Outstanding Sound – Commercial 

WINNER: John Lewis & Partners – “Bohemian Rhapsody”
Mark Hills, Anthony Moore // Factory

Audi – “Life”
Doobie White // Therapy Studios

Leonard Cheshire Disability – “Together Unstoppable”
Mark Hills // Factory

New York Times – “The Truth Is Worth It: Fearlessness”
Aaron Reynolds // Wave Studios NY

John Lewis & Partners – “The Boy and the Piano”
Anthony Moore // Factory

 

Outstanding Visual Effects – Theatrical Feature

WINNER: “The Lion King”
Robert Legato
Andrew R. Jones
Adam Valdez, Elliot Newman, Audrey Ferrara // MPC Film
Tom Peitzman // T&C Productions

“Avengers: Endgame”
Matt Aitken, Marvyn Young, Sidney Kombo-Kintombo, Sean Walker, David Conley // Weta Digital

“Spider-Man: Far From Home”
Alexis Wajsbrot, Sylvain Degrotte, Nathan McConnel, Stephen Kennedy, Jonathan Opgenhaffen // Framestore

“Alita: Battle Angel”
Eric Saindon, Michael Cozens, Dejan Momcilovic, Mark Haenga, Kevin Sherwood // Weta Digital

“Pokémon Detective Pikachu”
Jonathan Fawkner, Carlos Monzon, Gavin Mckenzie, Fabio Zangla, Dale Newton // Framestore

 

Outstanding Visual Effects – Episodic (Under 13 Episodes) or Non-theatrical Feature

Game of Thrones

WINNER: “Game of Thrones – The Bells”
Steve Kullback, Joe Bauer, Ted Rae
Mohsen Mousavi // Scanline
Thomas Schelesny // Image Engine

“Game of Thrones – The Long Night”
Martin Hill, Nicky Muir, Mike Perry, Mark Richardson, Darren Christie // Weta Digital

“The Umbrella Academy – The White Violin”
Everett Burrell, Misato Shinohara, Chris White, Jeff Campbell, Sebastien Bergeron

“The Man in the High Castle – Jahr Null”
Lawson Deming, Cory Jamieson, Casi Blume, Nick Chamberlain, William Parker, Saber Jlassi, Chris Parks // Barnstorm VFX

“Chernobyl – 1:23:45”
Lindsay McFarlane
Max Dennison, Clare Cheetham, Steven Godfrey, Luke Letkey // DNEG

 

Outstanding Visual Effects – Episodic (Over 13 Episodes)

Team from The Orville – Outstanding VFX, Episodic, Over 13 Episodes (Photo by Ryan Miller/Capture Imaging)

WINNER: “The Orville – Identity: Part II”
Tommy Tran, Kevin Lingenfelser, Joseph Vincent Pike // FuseFX
Brandon Fayette, Brooke Noska // Twentieth Century FOX TV

“Hawaii Five-O – Ke iho mai nei ko luna”
Thomas Connors, Anthony Davis, Chad Schott, Gary Lopez, Adam Avitabile // Picture Shop

“9-1-1 – 7.1”
Jon Massey, Tony Pirzadeh, Brigitte Bourque, Gavin Whelan, Kwon Choi // FuseFX

“Star Trek: Discovery – Such Sweet Sorrow Part 2”
Jason Zimmerman, Ante Dekovic, Aleksandra Kochoska, Charles Collyer, Alexander Wood // CBS Television Studios

“The Flash – King Shark vs. Gorilla Grodd”
Armen V. Kevorkian, Joshua Spivack, Andranik Taranyan, Shirak Agresta, Jason Shulman // Encore VFX

The 2019 HPA Engineering Excellence Awards were presented to:

Adobe – Content-Aware Fill for Video in Adobe After Effects

Epic Games — Unreal Engine 4

Pixelworks — TrueCut Motion

Portrait Displays and LG Electronics — CalMan LUT based Auto-Calibration Integration with LG OLED TVs

Honorable Mentions were awarded to Ambidio for Ambidio Looking Glass; Grass Valley, for creative grading; and Netflix for Photon.

Creating With Cloud: A VFX producer’s perspective

By Chris Del Conte

The ‘90s was an explosive era for visual effects, with films like Jurassic Park, Independence Day, Titanic and The Matrix shattering box office records and inspiring a generation of artists and filmmakers, myself included. I got my start in VFX working on seaQuest DSV, an Amblin/NBC sci-fi series that was ground-breaking for its time, but looking at the VFX of modern films like Gemini Man, The Lion King and Ad Astra, it’s clear just how far the industry has come. A lot of that progress has been enabled by new technology and techniques, from the leap to fully digital filmmaking and emergence of advanced viewing formats like 3D, Ultra HD and HDR to the rebirth of VR and now the rise of cloud-based workflows.

In my nearly 25 years in VFX, I’ve worn a lot of hats, including VFX producer, head of production and business development manager. Each role involved overseeing many aspects of a production and, collectively, they’ve all shaped my perspective when it comes to how the cloud is transforming the entire creative process. Thanks to my role at AWS Thinkbox, I have a front-row seat to see why studios are looking at the cloud for content creation, how they are using the cloud, and how the cloud affects their work and client relationships.

Chris Del Conte on the set of the IMAX film Magnificent Desolation.

Why Cloud?
We’re in a climate of high content demand and massive industry flux. Studios are incentivized to find ways to take on more work, and that requires more resources — not just artists, but storage, workstations and render capacity. Driving a need to scale, this trend often motivates studios to consider the cloud for production or to strengthen their use of cloud in their pipelines if already in play. Cloud-enabled studios are much more agile than traditional shops. When opportunities arise, they can act quickly, spinning resources up and down at a moment’s notice. I realize that for some, the concept of the cloud is still a bit nebulous, which is why finding the right cloud partner is key. Every facility is different, and part of the benefit of cloud is resource customization. When studios use predominantly physical resources, they have to make decisions about storage and render capacity, electrical and cooling infrastructure, and staff accommodations up front (and pay for them). Using the cloud allows studios to adjust easily to better accommodate whatever the current situation requires.
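
To show what that spin-up/spin-down elasticity looks like in practice, here is a rough sketch using the AWS SDK for Python (boto3) that launches a handful of render instances for a burst of work and terminates them once the queue drains. The AMI ID, instance type and tags are placeholder assumptions, and in a real pipeline a render manager would typically drive this rather than raw EC2 calls.

    # Rough sketch of elastic render capacity with boto3: launch extra render nodes
    # for a burst of work, then terminate them when the queue drains. The AMI ID,
    # instance type and tags are placeholders; a render manager would normally
    # orchestrate this rather than raw EC2 calls.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-2")

    def spin_up(count):
        """Launch `count` render nodes and return their instance IDs."""
        response = ec2.run_instances(
            ImageId="ami-0123456789abcdef0",   # placeholder render-node image
            InstanceType="c5.9xlarge",         # placeholder instance type
            MinCount=count,
            MaxCount=count,
            TagSpecifications=[{
                "ResourceType": "instance",
                "Tags": [{"Key": "role", "Value": "render-node"}],
            }],
        )
        return [i["InstanceId"] for i in response["Instances"]]

    def spin_down(instance_ids):
        """Terminate the burst capacity once the render queue is empty."""
        ec2.terminate_instances(InstanceIds=instance_ids)

    if __name__ == "__main__":
        ids = spin_up(10)   # scale up for the overnight renders
        # ... submit frames and wait for the queue to drain ...
        spin_down(ids)      # stop paying the moment the work is done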

Artistic Impact
Advanced technology is great, but artists are by far a studio’s biggest asset; automated tools are helpful but won’t deliver those “wow moments” alone. Artists bring the creativity and talent to the table, then, in a perfect world, technology helps them realize their full potential. When artists are free of pipeline or workflow distractions, they can focus on creating. The positive effects spill over into nearly every aspect of production, which is especially true when cloud-based rendering is used. By scaling render resources via the cloud, artists aren’t limited by the capacity of their local machines. Since they don’t have to wait as long for shots to render, artists can iterate more fluidly. This boosts morale because the final results are closer to what artists envisioned, and it can improve work-life balance since artists don’t have to stick around late at night waiting for renders to finish. With faster render results, VFX supervisors also have more runway to make last-minute tweaks. Ultimately, cloud-based rendering enables a higher caliber of work and more satisfied artists.

Budget Considerations
There are compelling arguments for shifting capital expenditures to operational expenditures with the cloud. New studios get the most value out of this model since they don’t have legacy infrastructure to accommodate. Cloud-based solutions level the playing field in this respect; it’s easier for small studios and freelancers to get started because there’s no significant up-front hardware investment. This is an area where we’ve seen rapid cloud adoption. Considering how fast technology changes, it seems ill-advised to limit a new studio’s capabilities to today’s hardware when the cloud provides constant access to the latest compute resources.

When a studio has been in business for decades and might have multiple locations with varying needs, its infrastructure is typically well established. Some studios may opt to wait until their existing hardware has fully depreciated before shifting resources to the cloud, while others dive in right away, with an eye on the bigger picture. Rendering is generally a budgetary item on project bids, but with local hardware, studios are working to recoup a sunk cost. Using the cloud, render compute can be part of a bid and becomes a negotiable item. Clients can determine the delivery timeline based on render budget, and the elasticity of cloud resources allows VFX studios to pick up more work. (Even the most meticulously planned productions can run into 911 issues ahead of delivery, and cloud-enabled studios have bandwidth to be the hero when clients are in dire straits.)

Looking Ahead
When I started in VFX, giant rooms filled with racks and racks of servers and hardware were the norm, and VFX studios were largely judged by the size of their infrastructure. I’ve heard from an industry colleague about how their VFX studio’s server room was so impressive that they used to give clients tours of the space, seemingly a visual reminder of the studio’s vast compute capabilities. Today, there wouldn’t be nearly as much to view. Modern technology is more powerful and compact but still requires space, and that space has to be properly equipped with the necessary electricity and cooling. With cloud, studios don’t need switchers and physical storage to be competitive off the bat, and they experience fewer infrastructure headaches, like losing freon in the AC.

The cloud also opens up the available artist talent pool. Studios can dedicate the majority of physical space to artists as opposed to machines and even hire artists in remote locations on a per-project or long-term basis. Facilities of all sizes are beginning to recognize that becoming cloud-enabled brings a significant competitive edge, allowing them to harness the power to render almost any client request. VFX producers will also start to view facility cloud-enablement as a risk management tool that allows control of any creative changes or artistic embellishments up until delivery, with the rendering output no longer a blocker or a limited resource.

Bottom line: Cloud transforms nearly every aspect of content creation into a near-infinite resource, whether storage capacity, render power or artistic talent.


Chris Del Conte is senior EC2 business development manager at AWS Thinkbox.

Motorola’s next-gen Razr gets a campaign for today

Many of us have fond memories of our Razr flip phone. At the time, it was the latest and greatest. Then new technology came along, and the smartphone era was born. Now Motorola is asking, “Why can’t you have both?”

Available as of November 13, the new Razr fits in a palm or pocket when shut and flips open to reveal an immersive, full-length touch screen. There is a display screen called the Quick View when closed and the larger Flex View when open — and the two displays are made to work together. Whatever you see on Quick View then moves to the larger Flex View display when you flip it open.

In order to help tell this story, Motorola called on creative shop Los York to help relaunch the Razr. Los York created the new smartphone campaign to tap into the Razr’s original DNA and launch it for today’s user.

Los York developed a 360 campaign that included films, social, digital, TV, print and billboards, with visuals in stores and on devices (wallpapers, ringtones, startup screens). Los York treated the Razr as a luxury item and a piece of art, letting the device reveal itself unencumbered by taglines and copy. The campaign showcases the Razr as a futuristic, high-end “fashion accessory” that speaks to new industry conversations, such as whether advancing tech is leading us toward a utopian or dystopian future.

The campaign features a mix of live action and CG. Los York shot on a Panavision DXL with Primo 70 lenses. CG was created using Maxon Cinema 4D with Redshift and composited in Adobe After Effects. The piece was edited in-house on Adobe Premiere.

We reached out to Los York CEO and founder Seth Epstein to find out more:

How much of this is live action versus CG?
The majority is CG, but originally the piece was intended to be entirely CG. Early in the creative process, we defined the world in which the new Razr existed and who would belong there. As we worked on the project, we kept feeling the pull to bring our characters to life in live action and blend the two worlds. The live action itself was envisioned after the fact, which is somewhat unusual.

What were some of the most challenging aspects of this piece?
The most challenging part was the fact that the project happened over a period of nine months. Wisely, the product release needed to be pushed, and we continued to evolve the project over time, which is a blessing and a curse.

How did it feel taking on a product with a lot of history and then rebranding it for the modern day?
We felt the key was to relaunch an iconic product like the Razr with an eye to the future. The trap of launching anything iconic is falling back on the obvious retro throwback references, which can come across as too obvious. We dove into the original product and campaigns to extract the brand DNA of 2004 using archetype exercises. We tapped into the attitude and voice of the Razr at that time — and used that attitude as a starting point. We also wanted to look forward and stand three years in the future and imagine what the tone and campaign would be then. All of this is to say that we wanted the new Razr to extract the power of the past but also speak to audiences in a totally fresh and new way.

Check out the campaign here.

Blur Studio uses new AMD Threadripper for Terminator: Dark Fate VFX

By Dayna McCallum

AMD has announced new additions to its high-end desktop processor family. Built for demanding desktop and content creation workloads, the 24-core AMD Ryzen Threadripper 3960X and the 32-core AMD Ryzen Threadripper 3970X processors will be available worldwide November 25.

Tim Miller on the set of Dark Fate.

AMD states that the powerful new processors provide up to 90 percent more performance and up to 2.5 times more available storage bandwidth than competitive offerings, per testing and specifications by AMD performance labs. The 3rd Gen AMD Ryzen Threadripper lineup features two new processors built on 7nm “Zen 2” core architecture, claiming up to 88 PCIe 4.0 lanes and 144MB cache with 66 percent better power efficiency.

Prior to the official product launch, AMD made the 3rd Gen Threadrippers available to LA’s Blur Studio for work on the recent Terminator: Dark Fate and continued a collaboration with the film’s director — and Blur Studio founder — Tim Miller.

Before the movie’s release, AMD hosted a private Q&A with Miller, moderated by AMD’s James Knight. Please note that we’ve edited the lively conversation for space and taken a liberty with some of Miller’s more “colorful” language. (Also watch this space to see if a wager is won that will result in Miller sporting a new AMD tattoo.) Here is the Knight/Miller conversation…

So when we dropped off the 3rd Gen Threadripper to you guys, how did your IT guys react?
Like little children left in a candy shop with no adult supervision. The nice thing about our atmosphere here at Blur is we have an open layout. So when (bleep) like these new AMD processors drops in, you know it runs through the studio like wildfire, and I sit out there like everybody else does. You hear the guys talking about it, you hear people giggling and laughing hysterically at times on the second floor where all the compositors are. That’s where these machines really kick ass — busting through these comps that would have had to go to the farm, but they can now do it on a desktop.

James Knight

As an artist, the speed is crucial. You know, if you have a machine that takes 15 minutes to render, you want to stop and do something else while you wait for a render. It breaks your whole chain of thought. You get out of that fugue state that you produce the best art in. It breaks the chain between art and your brain. But if you have a machine that does it in 30 seconds, that’s not going to stop it.

But really, more speed means more iterations. It means you deal with heavier scenes, which means you can throw more detail at your models and your scenes. I don’t think we do the work faster, necessarily, but the work is much higher quality. And much more detailed. It’s like you create this vacuum, and then everybody rushes into it and you have this silly idea that it is really going to increase productivity, but what it really increases most is quality.

When your VFX supervisor showed you the difference between the way it was done with your existing ecosystem and then with the third-gen Threadripper, what were you thinking about?
There was the immediate thing — when we heard from the producers about the deadline, shots that weren’t going to get done for the trailer, suddenly were, which was great. More importantly, you heard from the artists. What you started to see was that it allows for all different ways of working, instead of just the elaborate pipeline that we’ve built up — to work on your local box and then submit it to the farm and wait for that render to hit the queue of farm machines that can handle it, then send that render back to you.

It has a rhythm that is at times tiresome for the artists, and I know that because I hear it all the time. Now I say, “How’s that comp coming and when are we going to get it, tick tock?” And they say, “Well, it’s rendering in the background right now, as I’m watching them work on another comp or another piece of that comp.” That’s pretty amazing. And they’re doing it all locally, which saves so much time and frustration compared to sending it down the pipeline and then waiting for it to come back up.

I know you guys are here to talk about technology, but the difference for the artists is that instead of working here until 1:00am, they’re going home to put their children to bed. That’s really what this means at the end of the day. Technology is so wonderful when it enables that, not just the creativity of what we do, but the humanity… allowing artists to feel like they’re really on the cutting edge, but also have a life of some sort outside.

Endoskeleton — Terminator: Dark Fate

As you noted, certain shots and sequences wouldn’t have made it in time for the trailer. How important was it for you to get that Terminator splitting in the trailer?
 Marketing was pretty adamant that that shot had to be in there. There’s always this push and pull between marketing and VFX as you get closer. They want certain shots for the trailer, but they’re almost always those shots that are the hardest to do because they have the most spectacle in them. And that’s one of the shots. The sequence was one of the last to come together because we changed the plan quite a bit, and I kept changing shots on Dan (Akers, VFX supervisor). But you tell marketing people that they can’t have something, and they don’t really give a (bleep) about you and your schedule or the path of that artist and shot. (Laughing)

Anyway, we said no. They begged, they pleaded, and we said, “We’ll try.” Dan stepped up and said, “Yeah, I think I can make it.” And we just made it, but that sounds like we were in danger because we couldn’t get it done fast enough. All of this was happening in like a two-day window. If you didn’t notice (in the trailer), that’s a Rev 7. Gabriel Luna is a Rev 9, which is the next gen. But the Rev 7s that you see in his future flashback are just pure killers. They’re still the same technology, which is looking like metal on the outside and a carbon endoskeleton that splits. So you have to run the simulation where the skeleton separates through the liquid that hangs off of it in strings; it’s a really hard simulation to do. That’s why we thought maybe it wasn’t going to get done, but running the simulation on the AMD boxes was lightning fast.


Carbon New York grows with three industry vets

Carbon in New York has grown with two senior hires — executive producer Nick Haynes and head of CG Frank Grecco — and the relocation of existing ECD Liam Chapple, who joins from the Chicago office.

Chapple joined Carbon in 2016, moving from Mainframe in London to open Carbon’s Chicago facility.  He brought in clients such as Porsche, Lululemon, Jeep, McDonald’s, and Facebook. “I’ve always looked to the studios, designers and directors in New York as the high bar, and now I welcome the opportunity to pitch against them. There is an amazing pool of talent in New York, and the city’s energy is a magnet for artists and creatives of all ilk. I can’t wait to dive into this and look forward to expanding upon our amazing team of artists and really making an impression in such a competitive and creative market.”

Chapple recently wrapped direction and VFX on films for Teflon and American Express (Ogilvy) and multiple live-action projects for Lululemon. The most recent shoot, conceived and directed by Chapple, was a series of eight live-action films focusing on Lululemon’s brand ambassadors and its new flagship store in Chicago.

Haynes joins Carbon from his former role as EP of MPC, bringing over 20 years of experience earned at The Mill, MPC and Absolute. Haynes recently wrapped the launch film for the Google Pixel phone and the Chromebook, as well as an epic Middle Earth: Shadow of War Monolith Games trailer combining photo-real CGI elements with live-action shot on the frozen Black Sea in Ukraine.  “We want to be there at the inception of the creative and help steer it — ideally, lead it — and be there the whole way through the process, from concept and shoot to delivery. Over the years, whether working for the world’s most creative agencies or directly with prestigious clients like Google, Guinness and IBM, I aim to be as close to the project as possible from the outset, allowing my team to add genuine value that will garner the best result for everyone involved.”

Grecco joins Carbon from Method Studios, where he most recently led projects for Google, Target, Microsoft, Netflix and Marvel’s Deadpool 2.  With a wide range of experience from Emmy-nominated television title sequences to feature films and Super Bowl commercials, Grecco looks forward to helping Carbon continue to push its visuals beyond the high bar that has already been set.

In addition to New York and Chicago, Carbon has a studio in Los Angeles.

Main Image: (L-R) Frank Grecco, Liam Chapple, Nick Haynes

Behind the Title: Sarofsky EP Steven Anderson

This EP’s responsibilities run the gamut “from managing our production staff to treating clients to an amazing dinner.”

Company: Chicago’s Sarofsky

Can you describe your company?
We like to describe ourselves as a design-driven production company. I like to think of us as that but so much more. We can be a one-stop shop for everything from concept through finish, or we can partner with a variety of other companies and just be one piece of the puzzle. It’s like ordering from a Chinese menu — you get to pick what items you want.

What’s your job title, and what does the job entail?
I’m executive producer, and that means different things at different companies and industries. Here at Sarofsky, I am responsible for things that run the gamut from managing our production staff to treating clients to an amazing dinner.

Sarofsky

What would surprise people the most about what falls under that title?
I also run payroll, and I am damn good at it.

How has the VFX industry changed in the time you’ve been working?
It used to be that when you told someone, “This is going to take some time to execute,” that’s what it meant. But now, everyone wants everything two hours ago. On the flip side, the technology we now have access to has streamlined the production process and provided us with some terrific new tools.

Why do you like being on set for shoots? What are the benefits?
I always like being on set whenever I can because decisions are being made that are going to affect the rest of the production paradigm. It’s also a good opportunity to bond with clients and, sometimes, get some kick-ass homemade guacamole.

Did a particular film inspire you along this path in entertainment?
I have been around this business for quite a while, and one of the reasons I got into it was my love of film and filmmaking. I can’t say that one particular film inspired me to do this, but I remember being a young kid and my dad taking me to see The Towering Inferno in the movie theater. I was blown away.

What’s your favorite part of the job?
Choosing a spectacular bottle of wine for a favorite client and watching their face when they taste it. My least favorite has to be chasing down clients for past due invoices. It gets old very quickly.

What is your most productive time of the day?
It’s 6:30am with my first cup of coffee sitting at my kitchen counter before the day comes at me. I get a lot of good thinking and writing done in those early morning hours.

Original Bomb Pop via agency VMLY&R

If you didn’t have this job, what would you be doing instead?
I would own a combo bookstore/wine shop where people could come and enjoy two of my favorite things.

Why did you choose this profession?
I would say this profession chose me. I studied to be an actor and made my living at it for several years, but due to some family issues, I ended up taking a break for a few years. When I came back, I went for a job interview at FCB and the rest is history. I made the move from agency producing to post executive producer five years ago and have not looked back since.

Can you briefly explain one or more ways Sarofsky is addressing the issue of workplace diversity in its business?
We are a smallish women-owned business, and I am a gay man; diversity is part of our DNA. We always look out for the best talent but also try to ensure we are providing opportunities for people who may not have access to them. For example, one of our amazing summer interns came to us through a program called Kaleidoscope 4 Kids, and we all benefited from the experience.

Name some recent projects you have worked on, which are you most proud of, and why?
My first week here at EP, we went to LA for the friends and family screening of Guardians of the Galaxy, and I thought, what an amazing company I work for! Marvel Studios is a terrific production partner, and I would say there is something special about so many of our clients because they keep coming back. I do have a soft spot for our main title for Animal Kingdom just because I am a big Ellen Barkin fan.

Original Bomb Pop via agency VMLY&R

Name three pieces of technology you can’t live without.
I’d be remiss if I didn’t say my MacBook and iPhone, but I also wouldn’t want to live without my cooking thermometer, as I’ve learned how to make sourdough bread this year, and it’s essential.

What social media channels do you follow?
I am a big fan of Instagram; it’s just visual eye candy and provides a nice break during the day. I don’t really partake in much else unless you count NPR. They occupy most of my day.

Do you listen to music while you work? Care to share your favorite music to work to?
I go in waves. Sometimes I do but then I won’t listen to anything for weeks. But I recently enjoyed listening to “Ladies and Gentleman: The Best of George Michael.” It was great to listen to an entire album, a rare treat.

What do you do to de-stress from it all?
I get up early and either walk or do some type of exercise to set the tone for the day. It’s also so important to unplug; my partner and I love to travel, so we do that as often as we can. All that and a 2006 Chateau Margaux usually washes away the day in two delicious sips.

Filmmaker Hasraf “HaZ” Dulull talks masterclass on sci-fi filmmaking

By Randi Altman

Hasraf “HaZ” Dulull is a producer/director and a hands-on VFX and post pro. His most recent credits include the feature films 2036 Origin Unknown and The Beyond, the Disney TV series Fast Layne and the Disney Channel original movie Under the Sea — A Descendants Story, which takes place between Descendants 2 and 3. Recently, Dulull developed a masterclass on Sci-Fi Filmmaking, which can be bought or rented.

Why would this already very busy man decide to take on another project and one that is a little off his current path? Well, we reached out to find out.

Why, at this point in your career, did you think it was important to create this masterclass?
I have seen other filmmaking masterclasses out there, and they were always academically based, which turned me off. The best ones were those taught by actual filmmakers who had made commercial projects, films or TV shows… not just short films. So I knew that if I was to create and deliver a masterclass, I would do it after having made a couple of feature films that had been released out in the world. I wanted to lead by example and experience.

When I was in LA explaining to studio people, executives and other filmmakers how I made my feature films, they were impressed and fascinated with my process. They were amazed that I was able to pull off high-concept sci-fi films on tight budgets and schedules but still produce a film that looked expensive to make.

When I was researching existing masterclasses or online courses as references, I found that no one was actually going through the entire process. Instead they were offering specialized training in either cinematography or VFX, but there wasn’t anything about how to break down a script and put a budget and schedule together; how to work with locations to make your film work; how to use visual effects smartly in production; how to prepare for marketing and delivering your film for distribution. None of these things were covered as a part of a general masterclass, so I set out to fill that void with my masterclass series.

Clearly this genre holds a special place in your heart. Can you talk about why?
I think it’s because the genre allows for so much creative freedom; sci-fi relies on world-building and imagination. That freedom leads to some “out of this world” storytelling and visuals, but on the flip side it can tempt the filmmaker into being too ambitious on a tight budget. This can lead to cheap-looking films born of an overambitious need to create amazing worlds. Not many filmmakers know how to do this in a fiscally sensible way, and they may try to make Star Wars on a shoestring budget. So this is why I decided to use the sci-fi genre in this masterclass to share my experience of smart filmmaking that achieves commercially successful results.

How did you decide on what topics to cover? What was your process?
I thought about the questions the people and studio executives were asking me when I was in those LA meetings, which pretty much boiled down to, “How did you put the movie together for that tight budget and schedule?” When answering that question, I ended up mapping out my process and the various stages and approaches I took in preproduction, production and post production, but also in the deliverables stage and marketing and distribution stage too. As an indie filmmaker, you really need to get a good grasp on that part to ensure your film is able to be released by the distributors and received commercially.

I also wanted each class/episode to have a variety of timings and not go more than around 10 minutes (the longest one is around 12 minutes, and the shortest is three minutes). I went with a more bite-sized approach to make the experience snappy, fun yet in-depth to allow the viewers to really soak in the knowledge. It also allows for repeat viewing.

Why was it important to teach these classes yourself?
I wanted it to feel raw and personal when talking about my experience of putting two sci-fi feature films together. Plus I wanted to talk about the constant problem solving, which is what filmmaking is all about. Teaching the class myself allowed me to get this all out of my system in my voice and style to really connect with the audience intimately.

Can you talk about what the experience will be like for the student?
I want the students to be like flies on the wall throughout the classes — seeing how I put those sci-fi feature films together. By the end of the series, I want them to feel like they have been on an entire production, from receiving a script to the releasing of the movie. The aim was to inspire others to go out and make their film. Or to instill confidence in those who have fears of making their film, or for existing filmmakers to learn some new tips and tricks because in this industry we are always learning on each project.

Why the rental and purchase options? What have most people been choosing?
Before I released it, one of the big factors that kept me up nights was how to make this accessible and affordable for everyone. The idea of renting is for those who can’t afford to purchase it but would love to experience the course. They can do so at a cut-down price but can only view within the 48-hour window. The purchase price is a little higher, but you get to access it as many times as you like. It’s pretty much the same model as iTunes when you rent or buy a movie.

So far I have found that people have been buying more than renting, which is great, as this means audiences want to do repeat viewings of the classes.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Review: Lenovo Yoga A940 all-in-one workstation

By Brady Betzel

While more and more creators are looking for alternatives to the iMac, iMac Pro and Mac Pro, there are few options with high-quality built-in monitors; the Microsoft Surface Studio, HP Envy and Dell 7000 are among them. There are even fewer choices if you want touch and pen capabilities. It’s with that need in mind that I decided to review the Lenovo Yoga A940, a 27-inch, UHD, pen- and touch-capable Intel Core i7 computer with an AMD Radeon RX 560 GPU.

While I haven’t done a lot of all-in-one system reviews like the Yoga A940, I have had my eyes on the Microsoft Surface Studio 2 for a long time. The only problem is the hefty price tag of around $3,500. The Lenovo’s most appealing feature — in addition to the tech specs I will go over — is its price point: It’s available from $2,200. (I saw Best Buy selling a similar system to the one I reviewed for around $2,299. The insides of the Yoga and the Surface Studio 2 aren’t that far off from each other either, at least not enough to make up for the $1,300 disparity.)

Here are the parts inside the Lenovo Yoga A940:
• Intel Core i7-8700 3.2GHz processor (up to 4.6GHz with Turbo Boost), six cores (12 threads) and 12MB cache
• 27-inch 4K UHD IPS multitouch display with 100% Adobe RGB coverage
• 16GB DDR4 2666MHz (SODIMM) memory
• 1TB 5400 RPM hard drive plus 256GB PCIe SSD
• AMD Radeon RX 560 4GB graphics processor
• 25-degree monitor tilt angle
• Dolby Atmos speakers
• Dimensions: 25 inches by 18.3 inches by 9.6 inches; weight: 32.2 pounds
• 802.11AC and Bluetooth 4.2 connectivity
• Side panel inputs: Intel Thunderbolt, USB 3.1, 3-in-1 card reader and audio jack
• Rear panel inputs: AC-in, RJ45, HDMI and four USB 3.0
• Bluetooth active pen (appears to be the Lenovo Active Pen 2)
• QI wireless charging platform

Digging In
Right off the bat, I just happened to put my Android Galaxy phone on the odd little flat platform located on the right side of the all-in-one workstation, just under the monitor, and I saw my phone begin to charge wirelessly. QI wireless charging is an amazing little addition to the Yoga; it really comes through in a pinch when I need my phone charged and don’t have the cable or charging dock around.

Other than that nifty feature, why would you choose a Lenovo Yoga A940 over any other all-in-one system? Well, as mentioned, the price point is very attractive, but you are also getting a near-professional-level system in a very tiny footprint — including Thunderbolt 3 and USB connections, HDMI port, network port and SD card reader. While it would be incredible to have an Intel i9 processor inside the Yoga, the i7 clocks in at 3.2GHz with six cores. Not a beast, but enough to get the job done inside Adobe Premiere and Blackmagic’s DaVinci Resolve, though likely with transcoded files rather than Red raw or the like.

The Lenovo Yoga A940 is outfitted with a front-facing Dolby Atmos audio speaker as well as Dolby Vision technology in the IPS display. The audio could use a little more low end, but it is good. The monitor is surprisingly great — the whites are white and the blacks are black; something not everyone can get right. It has 100% Adobe RGB color coverage and is Pantone-validated. The HDR is technically Dolby Vision and looks great at about 350 nits (not the brightest, but it won’t burn your eyes out either). The Lenovo BT active pen works well. I use Wacom tablets and laptop tablets daily, so this pen had a lot to live up to. While I still prefer the Wacom pen, the Lenovo pen, with 4,096 levels of sensitivity, will do just fine. I actually found myself using the touchscreen with my fingers way more than the pen.

One feature that sets the A940 apart from the other all-in-one machines is the USB Content Creation dial. With the little time I had with the system, I only used it to adjust speaker volume when playing Spotify, but in time I can see myself customizing the dials to work in Premiere and Resolve. The dial has good action and resistance. To customize the dial, you can jump into the Lenovo Dial Customization Assistant.

Besides the Intel i7, there is an AMD Radeon RX 560 with 4GB of memory, two 3W and two 5W speakers, 32 GB of DDR4 2666 MHz memory, a 1 TB 5400 RPM hard drive for storage, and a 256GB PCIe SSD. I wish the 1TB drive was also an SSD, but obviously Lenovo has to keep that price point somehow.

Real-World Testing
I use Premiere Pro, After Effects and Resolve all the time and can understand the horsepower of a machine through these apps. Whether editing and/or color correcting, the Lenovo A940 is a good medium ground — it won’t be running much more than 4K Red raw footage in real time without cutting the debayering quality down to half if not one-eighth. This system would make a good “offline” edit station, where you transcode your high-res media to a mezzanine codec like DNxHR or ProRes for editing and then conform back to the highest resolution you have for finishing. Or, if you are in Resolve, maybe you could use optimized media for 80% of the workflow until you color. You will really want a system with a higher-end GPU if you want to fluidly cut and color in Premiere and Resolve. That being said, you can make it work with some debayer tweaking and/or transcoding.
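If you do go the proxy route, the transcode step itself is easy to batch. Below is a minimal sketch of that kind of offline transcode (my own, not from the review), assuming ffmpeg is installed and on your PATH and that the camera originals are in a format ffmpeg can decode (Red R3D and BRAW need their vendor tools or Resolve itself); the folder names and the DNxHR HQ profile choice are purely illustrative.

```python
# Minimal proxy-transcode sketch (assumes ffmpeg is installed and on PATH):
# batch-convert camera originals to DNxHR HQ QuickTimes for offline editing.
import subprocess
from pathlib import Path

SOURCE_DIR = Path("camera_originals")   # illustrative folder names
PROXY_DIR = Path("proxies")
PROXY_DIR.mkdir(exist_ok=True)

for clip in sorted(SOURCE_DIR.glob("*.mov")):
    out = PROXY_DIR / f"{clip.stem}_dnxhr.mov"
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-c:v", "dnxhd", "-profile:v", "dnxhr_hq",   # DNxHR HQ mezzanine codec
        "-pix_fmt", "yuv422p",
        "-c:a", "pcm_s16le",                         # uncompressed audio
        str(out),
    ], check=True)
```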

In my testing I downloaded some footage from Red’s sample library, which you can find here. I also used some BRAW clips to test inside of Resolve, which can be downloaded here. I grabbed 4K, 6K, and 8K Red raw R3D files and the UHD-sized Blackmagic raw (BRAW) files to test with.

Adobe Premiere
Using the same Red clips as above, I created two one-minute-long UHD (3840×2160) sequences. I also clicked “Set to Frame Size” for all the clips. Sequence 1 contained these clips with a simple contrast, brightness and color cast applied. Sequence 2 contained the same clips with the same color correction applied, but also a 110% resize, 100 sharpen and 20 Gaussian Blur. I then exported them to various codecs via Adobe Media Encoder using OpenCL for processing. Here are my results:

QuickTime (.mov) H.264, No Audio, UHD, 23.98 Maximum Render Quality, 10 Mb/s:
Color Correction Only: 24:07
Color Correction w/ 110% Resize, 100 Sharpen, 20 Gaussian Blur: 26:11

DNxHR HQX 10 bit UHD
Color Correction Only: 25:42
Color Correction w/ 110% Resize, 100 Sharpen, 20 Gaussian Blur: 27:03

ProRes HQ
Color Correction Only: 24:48
Color Correction w/ 110% Resize, 100 Sharpen, 20 Gaussian Blur: 25:34

As you can see, the export times are pretty long. And let me tell you, once the sequence with the Gaussian Blur and Resize kicked in, so did the fans. While it wasn’t like a jet was taking off, the sound of the fans definitely made me and my wife glance at the system. It was also throwing some heat out the back. Because of the way Premiere works, it relies heavily on the CPU over the GPU. Not that it doesn’t embrace the GPU, but, as you will see later, Resolve takes more advantage of it. Either way, Premiere really taxed the Lenovo A940 when using 4K, 6K and 8K Red raw files. Playback in real time wasn’t possible except for the 4K files. I probably wouldn’t recommend this system for someone working with lots of higher-than-4K raw files; it seems to be simply too much for it to handle. But if you transcode the files down to ProRes, you will be in business.

Blackmagic Resolve 16 Studio
Resolve seemed to take better advantage of the AMD Radeon RX 560 GPU in combination with the CPU, as well as the onboard Intel GPU. In this test I added in Resolve’s amazing built-in spatial noise reduction, so other than the Red R3D footage, this test and the Premiere test weren’t exactly comparing apples to apples. Overall the export times will be significantly higher (or, in theory, they should be). I also added in some BRAW footage to test for fun, and that footage was way easier to work and color with. Both sequences were UHD (3840×2160) 23.98. I will definitely be looking into working with more BRAW footage. Here are my results:

Playback: 4K played back in realtime at half-res premium debayer quality; 6K and 8K could not play back in realtime

H.264 no audio, UHD, 23.98fps, force sizing and debayering to highest quality
Export 1 (Native Renderer)
Export 2 (AMD Renderer)
Export 3 (Intel QuickSync)

Color Only
Export 1: 3:46
Export 2: 4:35
Export 3: 4:01

Color, 110% Resize, Spatial NR: Enhanced, Medium, 25; Sharpening, Gaussian Blur
Export 1: 36:51
Export 2: 37:21
Export 3: 37:13

BRAW 4K (4608×2592) Playback and Export Tests

Playback: Full-res would play at about 22fps; half-res plays at realtime

H.264 No Audio, UHD, 23.98 fps, Force Sizing and Debayering to highest quality
Color Only
Export 1: 1:26
Export 2: 1:31
Export 3: 1:29
Color, 110% Resize, Spatial NR: Enhanced, Medium, 25; Sharpening, Gaussian Blur
Export 1: 36:30
Export 2: 36:24
Export 3: 36:22

DNxHR 10 bit:
Color Correction Only: 3:42
Color, 110% Resize, Spatial NR: Enhanced, Medium, 25; Sharpening, Gaussian Blur: 39:03

One takeaway from the Resolve exports is that the color-only export was much more efficient than in Premiere, taking just over three or four times realtime for the intensive Red R3D files, and just over one and a half times real time for BRAW.
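To make those comparisons concrete, here is a small Python sketch (mine, not the reviewer’s) that converts an export time for a one-minute timeline into a realtime multiple, using the color-only Export 1 timings quoted above.

```python
# Convert "mm:ss" export times for a one-minute timeline into realtime multiples.
# The sample timings are the color-only Export 1 results quoted above.
def realtime_multiple(export_time: str, timeline_seconds: int = 60) -> float:
    minutes, seconds = (int(part) for part in export_time.split(":"))
    return (minutes * 60 + seconds) / timeline_seconds

color_only_exports = {
    "Premiere, Red R3D, H.264": "24:07",
    "Resolve, Red R3D, H.264 (native)": "3:46",
    "Resolve, BRAW, H.264 (native)": "1:26",
}

for label, export_time in color_only_exports.items():
    print(f"{label}: {realtime_multiple(export_time):.1f}x realtime")
# Prints roughly 24.1x, 3.8x and 1.4x realtime, respectively.
```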

Summing Up
In the end, the Lenovo A940 is a sleek-looking all-in-one touchscreen- and pen-compatible system. While it isn’t jam-packed with the latest high-end AMD GPUs or Intel i9 processors, the A940 is a mid-level system with an incredibly good-looking IPS Dolby Vision monitor and Dolby Atmos speakers. It has some other features — like the IR camera, QI wireless charger and USB Dial — that you might not necessarily be looking for but will love to find.

The power adapter is like a large laptop power brick, so you will need somewhere to stash that, but overall the monitor has a really nice 25-degree tilt that is comfortable when using just the touchscreen or pen, or when using the wireless keyboard and mouse.

Because the Lenovo A940 starts at around $2,299, I think it really deserves a look when you’re searching for a new system. If you are working primarily in HD video and/or graphics, this is the all-in-one system for you. Check out more at Lenovo’s website.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and The Shop. He is also a member of the Producers Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Bonfire adds Jason Mayo as managing director/partner

Jason Mayo has joined digital production company Bonfire in New York as managing director and partner. Industry veteran Mayo will be working with Bonfire’s new leadership lineup, which includes founder/Flame artist Brendan O’Neil, CD Aron Baxter, executive producer Dave Dimeola and partner Peter Corbett. Bonfire’s offerings include VFX, design, CG, animation, color, finishing and live action.

Mayo comes to Bonfire after several years building Postal, the digital arm of the production company Humble. Prior to that he spent 14 years at Click 3X, where he worked closely with Corbett as his partner. While there he also worked with Dimeola, who cut his teeth at Click as a young designer/compositor. Dimeola later went on to create The Brigade, where he developed the network and technology that now forms the remote, cloud-based backbone referred to as the Bonfire Platform.

Mayo says a number of factors convinced him that Bonfire was the right fit for him. “This really was what I’d been looking for,” he says. “The chance to be part of a creative and innovative operation like Bonfire in an ownership role gets me excited, as it allows me to make a real difference and genuinely effect change. And when you’re working closely with a tight group of people who are focused on a single vision, it’s much easier for that vision to be fully aligned. That’s harder to do in a larger company.”

O’Neil says that having Mayo join as partner/MD is a major move for the company. “Jason’s arrival is the missing link for us at Bonfire,” he says. “While each of us has specific areas to focus on, we needed someone who could both handle the day-to-day of running the company and keep an eye on our brand and our mission while introducing our model to new opportunities. And that’s exactly his strong suit.”

For the most part, Mayo’s familiarity with his new partners means he’s arriving with a head start. Indeed, his connection to Dimeola, who built the Bonfire Platform — the company’s proprietary remote talent network, nicknamed the “secret sauce” — continued as Mayo tapped Dimeola’s network for overflow and outsourced work while at Postal. Their relationship, he says, was founded on trust.

“Dave came from the artist side, so I knew the work I’d be getting would be top quality and done right,” Mayo explains. “I never actually questioned how it was done, but now that he’s pulled back the curtain, I was blown away by the capabilities of the Platform and how it dramatically differentiates us.

“What separates our system is that we can go to top-level people around the world but have them working on the Bonfire Platform, which gives us total control over the process,” he continues. “They work on our cloud servers with our licenses and use our cloud rendering. The Platform lets us know everything they’re doing, so it’s much easier to track costs and make sure you’re only paying for the work you actually need. More importantly, it’s a way for us to feel connected – it’s like they’re working in a suite down the hall, except they could be anywhere in the world.”

Mayo stresses that while the cloud-based Platform is a huge advantage for Bonfire, it’s just one part of its profile. “We’re not a company riding on the backs of freelancers,” he points out. “We have great, proven talent in our core team who work directly with clients. What I’ve been telling my longtime client contacts is that Bonfire represents a huge step forward in terms of the services and level of work I can offer them.”

Corbett believes he and Mayo will continue to explore new ways of working now that he’s at Bonfire. “In the 14 years Jason and I built Click 3X, we were constantly innovating across both video and digital, integrating live action, post production, VFX and digital engagements in unique ways,” he observes. “I’m greatly looking forward to continuing on that path with him here.”

Technicolor Post opens in Wales 

Technicolor has opened a new facility in Cardiff, Wales, within Wolf Studios. This expansion of the company’s post production footprint in the UK is a result of the growing demand for more high-quality content across streaming platforms and the need to post these projects, as well as the growth of production in Wales.

The facility is connected to all of Technicolor’s locations worldwide through the Technicolor Production Network, giving creatives easy access to their projects no matter where they are shooting or posting.

The facility, an extension of Technicolor’s London operations, supports all Welsh productions and features a multi-purpose, state-of-the-art suite as well as space for VFX and front-end services including dailies. Technicolor Wales is working on Bad Wolf Production’s upcoming fantasy epic His Dark Materials, providing picture and sound services for the BBC/HBO show. Technicolor London’s recent credits include The Two Popes, The Souvenir, Chernobyl, Black Mirror, Gentleman Jack and The Spanish Princess.

Within this new Cardiff facility, Technicolor is offering 2K digital cinema projection, FilmLight Baselight color grading, realtime 4K HDR remote review, 4K OLED video monitoring, 5.1/7.1 sound, ADR recording/source connect, Avid Pro Tools sound mixing, dailies processing and Pulse cloud storage.

Bad Wolf Studios in Cardiff offers 125,000 square feet of stage space across five stages. There is flexible office space, as well as auxiliary rooms and costume and props storage.

Rising Sun Pictures’ Anna Hodge talks VFX education and training

Based in Adelaide, South Australia, Rising Sun Pictures (RSP) has created stunning visual effects for films including Spider-Man: Far From Home, Captain Marvel, Thor: Ragnarok and Game of Thrones.

It also operates a visual effects training program in conjunction with the University of South Australia in which students learn such skills as compositing, tracking, effects, lighting, look development and modeling from working professionals. Thanks to this program, many students have landed jobs in the industry.

We recently spoke with RSP’s manager of training and education, Anna Hodge, about the school’s success.

Tell us about the education program at Rising Sun Pictures.
Rising Sun Pictures is an independently owned visual effects company. We’ve worked on more than 130 films, as well as commercials and streaming series, and we are very much about employing locals from South Australia. When this is not possible, we hire staff from interstate and overseas for key senior positions.

Our education program was established in 2015 in conjunction with the University of South Australia (UniSA) in order to directly feed our junior talent pool. We found there was a gap between traditional visual effects training and the skills young artists needed to hit the ground running in a studio.

How is the program structured?
We began with a single 12-week Graduate Certificate in Visual Effects program designed for students coming out of vocational colleges and universities who want to improve their skills and employability. Students apply through a portfolio process. The program accepts 10 students each term and exposes them to Foundry Nuke and other visual effects software. They gain experience by working on shots from past movies and creating a short film.

The idea is to give them a true industry experience, develop a showreel in the process and gain a qualification through a prestigious university. Our students are exposed to the studio floor from day one. They attend RSP five days a week. They work in our training rooms and are immersed in the life of the company. We want them to feel as much a part of RSP as our regular employees.

Our program has grown to include two graduate certificate streams: we added the Graduate Certificate in Effects and Lighting, and our first graduate certificate was rebadged as the Graduate Certificate in Compositing and Tracking. Both have been highly successful in helping our graduates gain employment at RSP after their studies.

Anna Hodge and students

We also offer course work toward the university’s media arts degree. We teach two elective courses in the second year, specializing in modeling and texturing and look development and lighting. The university students attend RSP as part of their studies at UniSA. It gives them exposure to our artists, industry-type projects and expectations of the industry through workshop-based delivery.

In 2019, our education program expanded, and we introduced “visual effects specialization” as part of the media arts degree. Unlike any other degree, the students spend their entire last year of studies at RSP. They are integrated with the graduate certificate classes, and learning at RSP for the whole year enables them to build skills in both compositing and tracking and effects and lighting, making them highly skilled and desirable employees at the end of their studies.

What practical skills do students learn?
In the Media Arts Modeling and Texturing elective course, they are exposed to Maya and are introduced to Pixologic ZBrush. In the second semester, they can study look development and lighting and learn Substance Painter and how to light in SideFX Houdini.

Both degree and graduate certificate students in the dynamic effects and lighting course receive around nine weeks of Houdini training and then move onto lighting. Those in the compositing and tracking stream learn Nuke, as well as 3D Equalizer and Silhouette. All our degree and graduate certificate students are also exposed to Autodesk’s Shotgun. They learn the tools we use on the floor and apply them in the same workflow.

Skills are never taught in isolation. They learn how they fit into the whole movie-making process. Working on the short film project, run in conjunction with We Made a Thing Studios (WEMAT), students learn how to work collaboratively, take direction and gain other necessary skills required for working in a deadline-driven environment.

Where do your students come from?
We attract applications from South Australia, and over the past few years, applications from interstate and overseas have significantly increased. The benefit of our program is that it’s only 12 weeks long, so students can pick up the skills they require without a huge investment in time. There is strong job growth in South Australia, so graduates are often employed locally, or they sometimes return to their hometowns to gain employment.

What are the advantages of training in a working VFX studio?
Our training goes beyond simple software skills. Our students are taught by some of the best artists in the world, professionals who have been working in the industry for years. Students can walk around the studio, talk to and shadow artists, and attend a company staff meeting. We schedule what we call “Day in the Life Of” presentations so students can gain an understanding of the various roles that make up our company. Students hear from department heads, senior artists, producers and even juniors. They talk about their jobs and their pathways into the industry. They provide students with sound practical advice on how to improve their skills and present themselves. We also run sessions with recruiters, who share insights on building good resumes and showreels.

We are always trying to reinvent and improve what we do. I have one-on-ones with students to find out how they are doing and what we can do to improve their learning experience. We take feedback seriously. Our instructors are passionate artists and educators. Over time, I think we’ve built something quite unique and special at RSP.

How do you support your students in their transition from the program into the professional world?
We have an excellent relationship with recruiters at other visual effects companies in South Australia, interstate and globally, and we use those connections to help our students find work. A VFX company that opened in Brisbane recently hired two of our students and wants to hire more.

Of course, one reason we created the program was to meet our own need for juniors. So I work closely with our department heads to meet their needs. If a job lands and they have positions open, I will refer students for interviews. Many of our students stay in touch after they leave here. Our support doesn’t stop after 12 weeks. When former students add new material to their showreels, I encourage them to send them in, and I forward them to the relevant heads of department. When one of our graduates secures his or her first VFX job, it’s the best news. It really makes my day.

How do you see the program evolving over the next few years?
We are working on new initiatives with UniSA. Nothing to reveal yet, but I do expect our numbers to grow simply because our graduate results are excellent. Our employment rate is well above 70 percent. I spoke with someone yesterday who is looking to apply next year. She was at a recent film event and met a bunch of our graduates who raved about the programs they studied at RSP. Hearing that sort of thing is really exciting and something that we are really proud of.

RSP and UniSA are both mindful that when scaling up we don’t compromise on quality delivery. It is important to us that students consistently receive the same high-quality training and support regardless of class size.

Do you feel that visual effects offer a strong career path?
Absolutely. I am constantly contacted by recruiters who are looking to hire our graduates. I don’t foresee a lack of jobs, only a lack of qualified artists. We need to keep educating students to avoid a skill shortage. There has never been a better time to train for a career in visual effects.

VFX house Blacksmith now offering color grading, adds Mikey Pehanich

New York-based visual effects studio Blacksmith has added colorist Mikey Pehanich to its team. With this new addition, Blacksmith expands its capabilities to now offer color grading in addition to VFX.

Pehanich has worked on projects for high-profile brands including Amazon, Samsung, Prada, Nike, New Balance, Marriott and Carhartt. Most recently, Pehanich worked on Smirnoff’s global “Infamous Since 1864” campaign directed by Rupert Sanders, Volkswagen’s Look Down in Awe spot from Garth Davis, Fisher-Price’s “Let’s Be Kids” campaign and Miller Lite’s newly launched Followers spot, both directed by Ringan Ledwidge.

Prior to joining Blacksmith, Pehanich spent six years as colorist at The Mill in Chicago. Pehanich was the first local hire when The Mill opened its Chicago studio in 2013. Initially cutting his teeth as color assistant, he quickly worked his way up to becoming a full-fledged colorist, lending his talent to campaigns that include Michelob’s 2019 Super Bowl spot featuring Zoe Kravitz and directed by Emma Westenberg, as well as music videos, including Regina Spektor’s Black and White.

In addition to commercial work, Pehanich’s diverse portfolio encompasses several feature films, short films and music videos. His recent longform work includes Shabier Kirchner’s short film Dadli about an Antiguan boy and his community, and Andre Muir’s short film 4 Corners, which tackles Chicago’s problem with gun violence.

“New York has always been a creative hub for all industries — the energy and vibe that is forever present in the air here has always been a draw for me. When the opportunity presented itself to join the incredible team over at Blacksmith, there was no way I could pass it up,” says Pehanich, who will be working on Blackmagic’s DaVinci Resolve.


Sheena Duggal to get VES Award for Creative Excellence

The Visual Effects Society (VES) named acclaimed visual effects supervisor Sheena Duggal as the forthcoming recipient of the VES Award for Creative Excellence in recognition of her valuable contributions to filmed entertainment. The award will be presented at the 18th Annual VES Awards on January 29, 2020, at the Beverly Hilton Hotel.

The VES Award for Creative Excellence, bestowed by the VES Board of Directors, recognizes individuals who have made significant and lasting contributions to the art and science of the visual effects industry by uniquely and consistently creating compelling and creative imagery in service to story. The VES will honor Duggal for breaking new ground in compelling storytelling through the use of stunning visual effects. Duggal has been at the forefront of embracing emerging technology to enhance the moviegoing experience, and her creative vision and inventive techniques have paved the way for future generations of filmmakers.

Duggal is an acclaimed visual effects supervisor and artist whose work has shaped numerous studio tentpole and Academy Award-nominated productions. She is known for her design skills, creative direction and visual effects work on blockbuster films such as Venom, The Hunger Games, Mission: Impossible, Men in Black II, Spider-Man 3 and Contact. She has worked extensively with Marvel Studios as VFX supervisor on projects including Doctor Strange, Thor: The Dark World, Iron Man 3, Marvel One-Shot: Agent Carter and the Agent Carter TV series. She also contributed to Sci-Tech Academy Award wins for visual effects and compositing software Flame and Inferno. Since 2012, Duggal has been consulting with Codex (and now Codex and Pix), providing guidance on various new technologies for the VFX community. Duggal is currently visual effects supervisor for Venom 2 and recently completed design and prep for Ghostbusters 2020.

In 2007, Duggal made her debut as a director on an award-winning short film to showcase the Chicago Spire, simultaneously designing all of the visual effects. Her career in movies began when she moved to Los Angeles to work as a Flame artist on Super Mario Bros. for Roland Joffe and Jake Eberts’ Lightmotive Fatman. She had previously been based in London, where she created high-resolution digital composites for Europe’s top advertising and design agencies. Her work included album covers for Elton John and Traveling Wilburys.

Already an accomplished compositor (she began in 1985 working on early generation paint software), in 1992 Duggal worked as a Flame artist on the world’s first Flame feature production. Soon after, she was hired by Industrial Light & Magic as a supervising lead Flame artist on a number of high-profile projects (Mission: Impossible, Congo and The Indian in the Cupboard). In 1996, Duggal left ILM to join Sony Pictures Imageworks as creative director of high-speed compositing and soon began to take on the additional responsibilities of visual effects supervisor. She was production-side VFX supervisor for multiple directors during this time, including Jane Anderson (The Prize Winner of Defiance, Ohio), Peter Segal (50 First Dates and Anger Management) and Ridley Scott (Body of Lies and Matchstick Men).

In addition to feature films, Duggal has also worked on a number of design projects. In 2013 she designed the logo and the main-on-ends for Agent Carter. She was production designer for SIGGRAPH Electronic Theatre 2001, and she created the title design for the groundbreaking Technology, Entertainment and Design (TED) conference in 2004.

Duggal is also a published photographer and traveled to Zimbabwe and Malawi on her last assignment on behalf of UK water charity Pump Aid, where she was photo-documenting how access to clean water has transformed the lives of thousands of people in rural areas.

Duggal is a member of the Academy of Motion Picture Arts and Sciences and serves on the executive committee of its VFX branch.

De-aging John Goodman 30 years for HBO’s The Righteous Gemstones

For HBO’s original series The Righteous Gemstones, VFX house Gradient Effects de-aged John Goodman using Shapeshifter, its proprietary AI-assisted tool that can turn back time on any video footage. With Shapeshifter, Gradient sidestepped the uncanny valley to shave decades off Goodman for an entire episode, delivering nearly 30 minutes of film-quality VFX in six weeks.

In the show’s fifth episode, “Interlude,” viewers journey back to 1989, a time when the Gemstone empire was still growing and Eli’s wife, Aimee-Leigh, was still alive. But going back also meant de-aging Goodman for an entire episode, something never attempted before on television. Gradient accomplished it using Shapeshifter, which allows artists to “reshape” an individual frame and the performers in it and then extend those results across the rest of a shot.

Shapeshifter worked by first analyzing the underlying shape of Goodman’s face. It then extracted important anatomical characteristics, like skin details, stretching and muscle movements. With the extracted elements saved as layers to be reapplied at the end of the process, artists could start reshaping his face without breaking the original performance or footage. Artists could tweak additional frames in 3D down the line as needed, but they often didn’t need to, making the de-aging process nearly automated.

“Shapeshifter is an entirely new way to de-age people,” says Olcun Tan, owner and visual effects supervisor at Gradient Effects. “While most productions are limited by time or money, we can turn around award-quality VFX on a TV schedule, opening up new possibilities for shows and films.”

Traditionally, de-aging work for film and television has been done in one of two ways: through filtering (saves time, but hard to scale) or CG replacements (better quality, higher cost), which can take six months to a year. Shapeshifter introduces a new method that not only preserves the actor’s original performance, but also interacts naturally with other objects in the scene.

“One of the first shots of ‘Interlude’ shows stage crew walking in front of John Goodman,” describes Tan. “In the past, a studio would have recommended a full CGI replacement for Goodman’s character because it would be too hard or take too much time to maintain consistency across the shot. With Shapeshifter, we can just reshape one frame and the work is done.”

This is possible because Shapeshifter continuously captures the face, including all of its essential details, using the source footage as its guide. With the data being constantly logged, artists can extract movement information from anywhere on the face whenever they want, replacing expensive motion-capture stages, equipment and makeup teams.

Director Ang Lee: Gemini Man and a digital clone

By Iain Blair

Filmmaker Ang Lee has always pushed the boundaries in cinema, both technically and creatively. His film Life of Pi, which he directed and produced, won four Academy Awards — for Best Direction, Best Cinematography, Best Visual Effects and Best Original Score.

Lee’s Brokeback Mountain won three Academy Awards, including Best Direction, Best Adapted Screenplay and Best Original Score. Crouching Tiger, Hidden Dragon was nominated for 10 Academy Awards and won four, including Best Foreign Language Film for Lee, Best Cinematography, Best Original Score and Best Art Direction/Set Decoration.

His latest, Paramount’s Gemini Man, is another innovative film, this time disguised as an action-thriller. It stars Will Smith in two roles — first, as Henry Brogan, a former Special Forces sniper-turned-assassin for a clandestine government organization; and second (with the assistance of ground-breaking visual effects) as “Junior,” a cloned younger version of himself with peerless fighting skills who is suddenly targeting him in a global chase. The chase takes them from the estuaries of Georgia to the streets of Cartagena and Budapest.

Rounding out the cast is Mary Elizabeth Winstead as Danny Zakarweski, a DIA agent sent to surveil Henry; Golden Globe Award-winner Clive Owen as Clay Verris, a former Marine officer now seeking to create his own personal military organization of elite soldiers; and Benedict Wong as Henry’s longtime friend, Baron.

Lee’s creative team included director of photography Dion Beebe (Memoirs of a Geisha, Chicago), production designer Guy Hendrix Dyas (Inception, Indiana Jones and the Kingdom of the Crystal Skull), longtime editor Tim Squyres (Life of Pi and Crouching Tiger, Hidden Dragon) and composer Lorne Balfe (Mission: Impossible — Fallout, Terminator Genisys).

The groundbreaking visual effects were supervised by Bill Westenhofer, Academy Award-winner for Life of Pi as well as The Golden Compass, and Weta Digital’s Guy Williams, an Oscar-nominee for The Avengers, Iron Man 3 and Guardians of the Galaxy Vol. 2.

Will Smith and Ang Lee on set

I recently talked to Lee — whose directing credits include Taking Woodstock, Hulk, Ride With the Devil, The Ice Storm and Billy Lynn’s Long Halftime Walk — about making the film, which has already generated a lot of awards talk about its cutting-edge technology, the workflow and his love of editing and post.

Hollywood’s been trying to make this for over two decades now, but the technology just wasn’t there before. Now it’s finally here!
It was such a great idea, if you can visualize it. When I was first approached about it by Jerry Bruckheimer and David Ellison, they said, “We need a movie star who’s been around a long time to play Henry, and it’s an action-thriller and he’s being chased by a clone of himself,” and I thought the whole clone idea was so fascinating. I think if you saw a young clone version of yourself, you wouldn’t see yourself as special anymore. It would be, “What am I?” That also brought up themes like nature versus nurture and how different two people with the same genes can be. Then the whole idea of what makes us human? So there was a lot going on, a lot of great ideas that intrigued me. How does aging work and affect you? How would you feel meeting a younger version of yourself? I knew right away it had to be a digital clone.

You certainly didn’t make it easy for yourself as you also decided to shoot it in 120fps at 4K and in 3D.
(Laughs) You’re right, but I’ve been experimenting with new technology for the past decade, and it all started with Life of Pi. That was my first taste of 3D, and for 3D you really need to shoot digitally because of the need for absolute precision and accuracy in synchronizing the two cameras and your eyes. And you need a higher frame rate to get rid of the strobing effect and any strangeness. Then when you go to 120 frames per second, the image becomes so clear and far smoother. It’s like a whole new kind of moviemaking, and that’s fascinating to me.

Did you shoot native 3D?
Yes, even though it’s still so clumsy, and not easy, but for me it’s also a learning process on the set which I enjoy.

Junior

There’s been a lot of talk about digital de-aging use, especially in Scorsese’s The Irishman. But you didn’t use that technique for Will’s younger self, right?
Right. I haven’t seen The Irishman so I don’t know exactly what they did, but this was a total CGI creation, and it’s a lead character where you need all the details and performance. Maybe the de-aging is fine for a quick flashback, but it’s very expensive to do, and it’s all done manually. This was also quite hard to do, and there are two parts to it: Scientifically, it’s quite mind-boggling, and our VFX supervisor Bill Westenhofer and his team worked so hard at it, along with the Weta team headed by VFX supervisor Guy Williams. So did Will. But then the hardest part is dealing with audiences’ impressions of Junior, as you know in the back of your mind that a young Will Smith doesn’t really exist. Creating a fully digital believable human being has been one of the hardest things to do in movies, but now we can.

How early on did you start integrating post and all the VFX?
Before we even started anything, as we didn’t have unlimited money, a big part of the budget went to doing a lot of tests, new equipment, R&D and so on, so we had to be very careful about planning everything. That’s the only way you can reduce costs in VFX. You have to be a good citizen and very disciplined. It was a two-year process, and you plan and shoot layer by layer, and you have to be very patient… then you start making the film in post.

I assume you did a lot of previz?
(Laughs) A whole lot, and not only for all the obvious action scenes. Even for the non-action stuff, we designed and made the cartoons and did previz and had endless meetings and scouted and measured and so on. It was a lot of effort.

How tough was the shoot?
It was very tough and very slow. My last three movies have been like this since the technology’s all so new, so it’s a learning process as you’re figuring it all out as you go. No matter how much you plan, new stuff comes up all the time and equipment fails. It feels very fragile and very vulnerable sometimes. And we only had a budget for a regular movie, so we could only shoot for 80 days, and we were on three continents and places like Budapest and Cartagena as well as around Savannah in the US. Then I insist on doing all the second unit stuff as well, apart from a few establishing shots and sunsets. I have to shoot everything, so we had to plan very carefully with the sound team as every shot is a big deal.

Where did you post?
All in New York. We rented space at Final Frame, and then later we were at Harbor. The thing is, no lab could process our data since it was so huge, so when we were based in Savannah we just built our own technology base and lab so we could process all our dailies and so on — and we bought all our servers, computers and all the equipment needed. It was all in-house, and our technical supervisor Ben Gervais oversaw it all. It was too difficult to take all that to Cartagena, but we took it all to Budapest and then set it all up later in New York for post.

Do you like the post process?
I like the first half, but then it’s all about previews, getting notes, changing things. That part is excruciating. Although I have to give a lot of credit to Paramount as they totally committed to all the VFX quite early and put the big money there before they even saw a cut so we had time to do them properly.

Junior

Talk about editing with Tim Squyres. How did that work?
We sent him dailies. When I’m shooting, I just want to live in my dreams, unless something alarms me, and he’ll let me know. Otherwise, I prefer to work separately. But on this one, since we had to turn over some shots while we were shooting, he came to the set in Budapest, and we’d start post already, which was new to me. Before, I always liked to cut separately.

What were the big editing challenges?
Trying to put all the complex parts together, dealing with the rhythm and pace, going from quiet moments to things like the motorcycle chase scenes and telling the story as effectively as we could — all the usual things. In this medium, everything is more critical visually.

All the VFX play a big role. How many were there?
Over 1,000, but then Junior alone is a huge visual effect in every scene he’s in. Weta did all of him and complained that they got the hardest and most expensive part. (Laughs) The other, easier stuff was spread out to several companies, including Scanline and Clear Angle.

Ang Lee and Iain Blair

Talk about the importance of sound and music.
We did the mix at Harbor on its new stage, and it’s always so important. This time we did something new. Typically, you do Atmos at the final mix and mix the music along with all the rest, but our music editor did an Atmos mix on all the music first and then brought it to us for the final mix. That was very special.

Where did you do the DI and how important is it to you?
It’s huge on a movie like this. We set up our own DI suite in-house at Final Frame with the latest FilmLight Baselight, which is amazing. Our colorist Marcy Robinson had trained on it, and it was a lot easier than on the last film. Dion came in a lot and they worked together, and then I’d come in. We did a lot of work, especially on all the night scenes, enhancing moonlight and various elements.

I think the film turned out really well and looks great. When you have the combination of these elements like 3D, digital cinematography, high frame rate and high resolution, you really get “new immersive cinema.” So for me, it’s a new and different way of telling stories and processing them in your head. The funny thing is, personally I’m a very low-tech person, but I’ve been really pursuing this for the last few years.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Harbor adds talent to its London, LA studios

Harbor has added to its London- and LA-based studios. Marcus Alexander joins as VP of picture post, West Coast, and Darren Rae joins as senior colorist; Rae will be supervising all dailies in the UK.

Marcus Alexander started his film career in London almost 20 years ago as an assistant editor before joining Framestore as a VFX editor. He helped Framestore launch its digital intermediate division, producing multiple finishes on a host of tent-pole and independent titles, before joining Deluxe to set up its London DI facility. Alexander then relocated to New York to head up Deluxe New York DI. With the growth in 3D movies, he returned to the UK to supervise stereo post conversions for multiple studios before his segue into VFX supervising.

“I remember watching It Came from Outer Space at a very young age and deciding there and then to work in movies,” says Alexander. “Having always been fascinated with photography and moving images, I take great pride in thorough involvement in my capacity from either a production or creative standpoint. Joining Harbor allows me to use my skills from a post-finishing background along with my production experience in creating both 2D and 3D images to work alongside the best talent in the industry and deliver content we can be extremely proud of.”

Rae began his film career in the UK in 1995 as a sound sync operator at Mike Fraser Neg Cutters. He moved into the telecine department in 1997 as a trainee, and by 1998 he was a dailies colorist working with 16mm and 35mm film. From 2001, Rae spent three years with The Machine Room in London as a telecine operator, then joined Todd AO’s London lab in 2004 as a colorist, working on drama and commercials shot on 35mm and 16mm film as well as 8mm projects for music videos. In 2006, Rae moved into grading dailies at Todd AO parent company Deluxe in Soho, London, before moving to Company 3 London in 2007 as senior dailies colorist. In 2009, he was promoted to supervising colorist.

Prior to joining Harbor, Rae was senior colorist for Pinewood Digital, supervising multiple shows and overseeing a team of four, eventually becoming head of grading. Projects include Pokemon Detective Pikachu, Dumbo, Solo: A Star Wars Story, The Mummy, Rogue One, Doctor Strange and Star Wars Episode VII — The Force Awakens.

“My main goal is to make the director of photography feel comfortable. I can work on a big feature film from three months to a year, and the trust the DP has in you is paramount. They need to know that wherever they are shooting in the world, I’m supporting them. I like to get under the skin of the DP right from the start to get a feel for their wants and needs and to provide my own input throughout the entire creative process. You need to interpret their instructions and really understand their vision. As a company, Harbor understands and respects the filmmaker’s process and vision, so it’s the ideal new home for me.”

Harbor has also announced that colorists Elodie Ichter and Katie Jordan are now available to work with clients on both the East and West Coasts in North America as well as the UK. Some of the team’s work includes Once Upon a Time in Hollywood, The Irishman, The Hunger Games, The Maze Runner, Maleficent, The Wolf of Wall Street, Anna, Snow White and the Huntsman and Rise of the Planet of the Apes.

Foundry updates Nuke to version 12.0

Foundry has released Nuke 12.0, which kicks off the next cycle of releases for the Nuke family. The release brings improved interactivity and performance across the Nuke family, from additional GPU-enabled nodes for cleanup to a rebuilt playback engine in Nuke Studio and Hiero. Nuke 12.0 also integrates GPU-accelerated tools from Cara VR for camera solving, stitching and corrections, and updates to the latest industry standards.

OpenEXR

New features of Nuke 12.0 include:
• UI interactivity and script loading – This release includes a variety of optimizations throughout the software to improve performance, especially when working at scale. One key improvement offers a much smoother experience, maintaining UI interactivity and reducing loading times when working in large scripts.
• Read and write performance – Nuke 12.0 includes focused improvements to OpenEXR read and write performance, including optimizations for several popular compression types (Zip1, Zip16, PIZ, DWAA, DWAB), improving render times and interactivity in scripts. Red and Sony camera formats also see additional GPU support.
• Inpaint and EdgeExtend – These GPU-accelerated nodes provide faster and more intuitive workflows for common cleanup tasks, with fine detail controls and contextual paint strokes (see the sketch after this list).
• Grid Warp Tracker – Extending the Smart Vector toolset in NukeX, this node uses Smart Vectors to drive grids for match moving, warping and morphing images.
• Cara VR node integration – The majority of Cara VR’s nodes are now integrated into NukeX, including a suite of GPU-enabled tools for VR and stereo workflows and tools that enhance traditional camera solving and cleanup workflows.
• Nuke Studio, Hiero and HieroPlayer Playback – The timeline-based tools in the Nuke family see dramatic improvements in playback stability and performance as a result of a rebuilt playback engine optimized for the heavy I/O demands of color-managed workflows with multichannel EXRs.
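For pipeline folks, the new nodes are scriptable like anything else in Nuke. Here is a minimal Nuke Python sketch, not taken from Foundry’s documentation, that drops one of the new GPU cleanup nodes into a comp and writes DWAA-compressed EXRs; the Inpaint class name and the exact compression knob string are assumptions that may vary by Nuke version.

```python
# Minimal Nuke Python sketch (not from Foundry's docs). Assumptions: the new
# cleanup node registers under the class name "Inpaint" and the Write node's
# EXR compression knob accepts the string "DWAA" -- both may differ by version.
import nuke

read = nuke.nodes.Read(file="plate.####.exr", first=1001, last=1100)

inpaint = nuke.nodes.Inpaint()         # one of the new GPU-accelerated nodes
inpaint.setInput(0, read)

write = nuke.nodes.Write(file="out/cleanup.####.exr")
write["file_type"].setValue("exr")
write["compression"].setValue("DWAA")  # one of the newly optimized codecs
write.setInput(0, inpaint)

nuke.execute(write, 1001, 1100)        # render the frame range
```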

Ziva VFX 1.7 helps simplify CG character creation


Ziva Dynamics has introduced Ziva VFX 1.7, which aims to make CG character creation easier with the addition of Art Directable Rest Shapes (ADRS). This tool allows artists to make a character conform to any shape without losing its dynamic properties, opening up a faster path to cartoons and digi-doubles.

Users can now adjust a character’s silhouette with simple sculpting tools. Once the goal shape is established, Ziva VFX can morph to match it, maintaining all of the dynamics embedded before the change. Whether unnatural or precise, ADRS works with any shape, removing the difficulty of both complex setups and time-intensive corrective work.

The Art Directable Rest Shapes feature has been in development for over a year and was created in collaboration with several major VFX and feature animation studios. According to Ziva, while outputs and art styles differed, each group essentially requested the same thing: extreme accuracy and more control without compromising the dynamics that sell a final shot.

For feature animation characters not based on humans or nature, ADRS can rapidly alter and exaggerate key characteristics, allowing artists to be expressive and creative without losing the power of secondary physics. For live-action films, where the use of digi-doubles and other photorealistic characters is growing, ADRS can minimize the setup process when teams want to quickly tweak a silhouette or make muscles fire in multiple ways during a shot.

According to Josh diCarlo, head of rigging at Sony Pictures Imageworks, “Our creature team is really looking forward to the potential of Art Directable Rest Shapes to augment our facial and shot-work pipelines by adding quality while reducing effort. Ziva VFX 1.7 holds the potential to shave weeks of work off of both processes while simultaneously increasing the quality of the end results.”

To use Art Directable Rest Shapes, artists must duplicate a tissue mesh, sculpt their new shape onto the duplicate and add the new geometry as a Rest Shape over select frames. This process will intuitively morph the character, creating a smooth, novel deformation that adheres to any artistic direction a creative team can think up. On top of ADRS, Ziva VFX 1.7 will also include a new zRBFWarp feature, which can warp NURBS surfaces, curves and meshes.
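For Maya users, that duplicate-sculpt-register workflow can be scripted. The sketch below is a rough illustration only: the tissue mesh name is made up, and the zRestShape command and its -a flag are recalled from Ziva’s Maya integration rather than confirmed here, so check the Ziva VFX documentation before relying on them.

```python
# Rough Maya Python sketch of the ADRS workflow described above.
# Assumptions: plugin name "ziva", command "zRestShape" and its "-a" (add)
# flag are recalled from Ziva's Maya integration and may differ; the mesh
# name is purely illustrative.
import maya.cmds as cmds
import maya.mel as mel

cmds.loadPlugin("ziva", quiet=True)

tissue_mesh = "muscle_biceps"   # an existing Ziva tissue mesh (illustrative)
target = cmds.duplicate(tissue_mesh, name=tissue_mesh + "_restShapeTarget")[0]

# ... sculpt `target` with Maya's sculpting tools to the silhouette the
# art director wants ...

# Register the sculpted duplicate as an Art Directable Rest Shape target.
cmds.select([tissue_mesh, target])
mel.eval("zRestShape -a;")      # assumed command; see the Ziva VFX docs
```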

For a free 60-day trial, click here. Ziva VFX 1.7 is available now as an Autodesk Maya plugin for Windows and Linux users. Ziva VFX 1.7 can be purchased in monthly or yearly installments, depending on user type.

According to Michael Smit, chief commercial officer at Ziva Dynamics, “Ziva is working towards a new platform that will more easily allow us to deploy the software into other software packages, operating systems, and different network architectures. As an example we are currently working on our integrations into iOS and Unreal, both of which have already been used in limited release for production settings. We’re hopeful that once we launch the new platform commercially there will be an opportunity to deploy tools for macOS users.”

Using VFX to turn back time for Downton Abbey film

The feature film Downton Abbey is a continuation of the popular TV series, which followed the lives of the aristocratic Crawley family and their domestic help. Created by Julian Fellowes, the film is set in 1927, one year after the show’s final episode, bringing with it the exciting announcement of a royal visit to Downton from King George V and Queen Mary.

Framestore supported the film’s shoot and post, with VFX supervisor Kyle McCulloch and senior producer Ken Dailey leading the team. Following Framestore’s work creating post-war Britain for the BAFTA-nominated Darkest Hour, the VFX studio was approached to work directly with the film’s director, Michael Engler, to help ground the historical accuracy of the film.

Much of the original cast and crew returned, with a screenplay that required the new addition of a VFX department, “although it was important that we had a light footprint,” explains McCulloch. “I want people to see the credits and be surprised that there are visual effects in it.” The VFX work, spanning more than 170 shots, ranged from cleanups and seamless set transitions to extensive environment builds and augmentation.

Transporting the audience to an idealized interpretation of 1920s Britain required careful work on the structures of buildings, including the Abbey (Highclere Castle), Buckingham Palace and Lacock village, a National Trust village in the Cotswolds that was used as a location for Downton’s village. Using the available photogrammetry and captured footage, the artists set to work restoring the period, adding layers of dirt and removing contemporary details from existing historical buildings.

Having changed so much since the early 20th century, King’s Cross Station needed a complete rebuild in CG, with digital train carriages, atmospheric smoke and large interior and exterior environment builds.

The team also helped with landscaping the idyllic grounds of the Abbey, replacing the lawn, trees and grass and removing power lines, cars and modern roads. Research was key, with the team collaborating with production designer Donal Woods and historical advisor Alastair Bruce, who came equipped with look books and photographs from the era. “A huge amount of the work was in the detail,” explains McCulloch. “We questioned everything; looking at the street surfaces, the type of asphalt used, down to how the gutters were built. All these tiny elements create the texture of the entire film. Everyone went through it with a very fine-tooth comb — every single frame.”


In addition, a long shot that followed the letter from the Royal Household from the exterior of the abbey, through the corridors of the domestic “downstairs” to the aristocratic “upstairs,” was a particular challenge. The scenes based downstairs — including in the kitchen — were shot at Shepperton Studios on a set, with the upstairs being captured on location at Highclere Castle. It was important to keep the illusion of the action all being within one large household, requiring Framestore to stitch the two shots together.

Says McCulloch, “It was brute force, it was months of work and I challenge anyone to spot where the seam is.”

Flavor adds Joshua Studebaker as CG supervisor

Creative production house Flavor has added CG supervisor Joshua Studebaker to its Los Angeles studio. For more than eight years, Studebaker has been a freelance CG artist in LA, specializing in design, animation, dynamics, lighting/shading and compositing via Maya, Cinema 4D, Vray/Octane, Nuke and After Effects.

A frequent collaborator with Flavor and its brand and agency partners, Studebaker has also worked with Alma Mater, Arsenal FX, Brand New School, Buck, Greenhaus GFX, Imaginary Forces and We Are Royale in the past five years alone. In his new role with Flavor, Studebaker oversees visual effects and 3D services across the company’s global operations. Flavor’s Chicago, Los Angeles and Detroit studios offer color grading, VFX and picture finishing using tools like Autodesk Lustre and Flame Premium.

Flavor creative director Jason Cook also has a long history of working with Studebaker and deep respect for his talent. “What I love most about Josh is that he is both technical and a really amazing artist and designer. Adding him is a huge boon to the Flavor family, instantly elevating our production capabilities tenfold.”

Flavor has always emphasized creativity as a key ingredient, and according to Studebaker, that’s what attracted him. “I see Flavor as a place to grow my creative and design skills, as well as help bring more standardization to our process in house,” he explained. “My vision is to help Flavor become more agile and more efficient and to do our best work together.”

Pace Pictures and ShockBox VFX formalize partnership

Hollywood post house Pace Pictures and bicoastal visual effects, animation and motion graphics specialist ShockBox VFX have formed a strategic alliance for film and television projects. The two specialist companies provide studios and producers with integrated services encompassing all aspects of post in order to finish any project efficiently, cost-effectively and with greater creative control.

The agreement formalizes a successful collaborative partnership that has been evolving over many years. Pace Pictures and ShockBox collaborated informally in 2015 on the independent feature November Rule. Since then, they have teamed up on numerous projects, including, most recently, the Hulu series Veronica Mars, Lionsgate’s 3 From Hell and Universal Pictures’ Grand-Daddy Day Care and Undercover Brother 2. Pace provided services including creative editorial, color grading, editorial finishing and sound mixing. ShockBox contributed visual effects, animation and main title design.

“We offer complementary services, and our staff have developed a close working rapport,” says Pace Pictures president Heath Ryan. “We want to keep building on that. A formal alliance benefits both companies and our clients.”

“In today’s world of shrinking budgets and delivery schedules, the time for creativity in the post process can often suffer,” adds ShockBox founder and director Steven Addair. “Through our partnership with Pace, producers and studios of all sizes will be able to maximize our integrated VFX pipeline for both quality and volume.”

As part of the agreement, ShockBox will move its West Coast operations to a new facility that Pace plans to open later this fall. The two companies have also set up an encrypted, high-speed data connection between Pace Pictures Hollywood and ShockBox New York, allowing them to exchange project data quickly and securely.

Martin Scorsese to receive VES Lifetime Achievement Award  

The Visual Effects Society (VES) has named Martin Scorsese as the forthcoming recipient of the VES Lifetime Achievement Award in recognition of his valuable contributions to filmed entertainment. The award will be presented next year at the 18th Annual VES Awards at the Beverly Hilton Hotel.

The VES Lifetime Achievement Award, voted on by the VES Board of Directors, recognizes an outstanding body of work that has significantly contributed to the art and/or science of the visual effects industry.  The VES will honor Scorsese for “his artistry, expansive storytelling and gift for blending iconic imagery and unforgettable narrative.”

“Martin Scorsese is one of the most influential filmmakers in modern history and has made an indelible mark on filmed entertainment,” says Mike Chambers, VES board chair. “His work is a master class in storytelling, which has brought us some of the most memorable films of all time. His intuitive vision and fiercely innovative direction have given rise to a new era of storytelling and made a profound impact on future generations of filmmakers. Martin has given us a rich body of groundbreaking work to aspire to, and for this, we are honored to award him with the Visual Effects Society Lifetime Achievement Award.”

Martin Scorsese has directed critically acclaimed, award-winning films including Mean Streets, Taxi Driver, Raging Bull, The Last Temptation of Christ, Goodfellas, Gangs of New York, The Aviator, The Departed (Academy Award for Best Director and Best Picture), Shutter Island and Hugo (Golden Globe for Best Director).

Scorsese has also directed numerous documentaries, including Rolling Thunder Revue: A Bob Dylan Story by Martin Scorsese, Elia Kazan: A Letter to Elia and the classic The Last Waltz about The Band’s final concert. His George Harrison: Living in the Material World received Emmy Awards for Outstanding Directing for Nonfiction Programming and Outstanding Nonfiction Special.

In 2010, Scorsese executive produced the HBO series Boardwalk Empire, winning Emmy and DGA Awards for directing the pilot episode. In 2014, he co-directed The 50 Year Argument with his long-time documentary editor David Tedeschi.

This September, Scorsese’s film, The Irishman, starring Robert De Niro, Al Pacino and Joe Pesci, will make its world premiere at the New York Film Festival and will have a theatrical release starting November 1 in New York and Los Angeles before arriving on Netflix on November 27.

Scorsese is the founder and chair of The Film Foundation, a non-profit organization dedicated to the preservation and protection of motion picture history.

Previous winners of the VES Lifetime Achievement Award have included George Lucas; Robert Zemeckis; Dennis Muren, VES; Steven Spielberg; Kathleen Kennedy and Frank Marshall; James Cameron; Ray Harryhausen; Stan Lee; Richard Edlund, VES; John Dykstra; Sir Ridley Scott; Ken Ralston; Jon Favreau and Chris Meledandri.

Visual Effects in Commercials: Chantix, Verizon

By Karen Moltenbrey

Once too expensive to consider for use in television commercials, visual effects soon found their way into this realm, enlivening and enhancing the spots. Today, countless commercials use increasingly complex VFX to entertain, to explain and to elevate a message. Here, we examine two very different approaches to using effects in this way. In the Verizon commercial Helping Doctors Fight Cancer, augmented reality is transferred from a holographic medical application and fused into a heartwarming piece thanks to an extremely delicate production process. For the Chantix Turkey Campaign, digital artists took a completely different approach, incorporating a stylized digital spokes-character (with feathers, no less) into various scenes.

Verizon Helping Doctors Fight Cancer

The main goal of television advertisements — whether they are 15, 30 or 60 seconds in length — is to sell a product. Some do it through a direct sales approach. Some by “selling” a lifestyle or brand. And some opt to tell a story. Verizon took the latter approach for a campaign promoting its 5G Ultra Wideband.

Vico Sharabani

For the spot Helping Doctors Fight Cancer, directed by Christian Weber, Verizon adds a human touch to its technology through a compelling story illustrating how its 5G network is being used within a mixed-reality environment so doctors can better treat cancer patients. The 30-second commercial features surgeons and radiologists using high-fidelity holographic 3D anatomical renderings that can be viewed from every angle and even projected onto a person’s body for a more comprehensive examination, while the imagery can potentially be shared remotely in near real time. The augmented-reality application is from Medivis, a start-up medical visualization company that is using Verizon’s next-generation 5G wireless speeds to deliver the high speeds and low latencies necessary for the application’s large datasets and interactive frame rates.

The spot introduces video footage of patients undergoing MRIs and discussion by Medivis cofounder Dr. Osamah Choudhry about how treatment could be radically changed using the technology. Holographic medical imagery is then displayed showing the Medivis AR application being used on a patient.

“McGarryBowen New York, Verizon’s advertising agency, wanted to show the technology in the most accurate and the most realistic way possible. So, we studied the technology,” says Vico Sharabani, founder/COO of The-Artery, which was tasked with the VFX work in the spot. To this end, The-Artery team opted to use as much of the actual holographic content as possible, pulling assets from the Medivis software and fusing it with other broadcast-quality content.

The-Artery is no stranger to augmented reality, virtual reality and mixed reality. Highly experienced in visual effects, Sharabani founded the company to solve business problems within the visual space across all platforms, from films to commercials to branding, and as such, alternate reality and story have been integral elements to achieving that goal. Nevertheless, the work required for this spot was difficult and challenging.

“It’s not just acquiring and melding together 3D assets,” says Sharabani. “The process is complex, and there are different ways to do it — some better than others. And the agency wanted it to be true to the real-life application. This was not something we could just illustrate in a beautiful way; it had to be very technically accurate.”

To this end, much of the holographic imagery consisted of actual 3D assets from the Medivis holographic AR system, captured live. At times, though, The-Artery had to rework the imagery using multiple assets from the Medivis application, and other times the artists re-created the medical imagery in CG.

Initially, the ad agency expected that The-Artery would recreate all the digital assets in CG. But after learning as much as they could about the Medivis system, Sharabani and the team were confident they could export actual data for the spot. “There was much greater value to using actual data when possible, actual CT data,” says Sharabani. “Then you have the most true-to-life representation, which makes the story even more heartfelt. And because we were telling a true story about the capabilities of the network around a real application being used by doctors, any misrepresentation of the human anatomy or scans would hurt the message and intention of the campaign.”

The-Artery began developing a solution with technicians at Medivis to export actual imagery via the HoloLens headset that’s used by the medical staff to view and manipulate the holographic imagery, to coincide with the needs of the commercial. Sometimes this involved merely capturing the screen performance as the HoloLens was being used. Other times the assets from the Medivis system were rendered over a greenscreen without a background and later composited into a scene.

“We have the ability to shoot through the HoloLens, which was our base; we used that as our virtual camera whereby the output of the system is driven by the HoloLens. Every time we would go back to do a capture (if the edit changed or the camera position changed), we had to use the HoloLens as our virtual camera in order to get the proper camera angle,” notes Sharabani. Because the HoloLens is a stereoscopic device, The-Artery always used the right-eye view for the representations, as it most closely reflected the experience of the user wearing the device.

Since the Medivis system is driven by the HoloLens, there is some shakiness present — an artifact the group retained in some of the shots to make it truer to life. “It’s a constant balance of how far we go with realism and at what point it is too distracting for the broadcast,” says Sharabani.

For imagery like the CT scans, the point cloud data was imported directly into Autodesk’s Maya, where it was turned into a 3D model. Other times the images were rendered out at 4K directly from the system. The Medivis imagery was later composited into the scenes using Autodesk’s Flame.
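As a rough illustration of that step, the hypothetical Python sketch below reads an ASCII point cloud (one “x y z” sample per line) and brings it into Maya as a particle object that modelers can use as a reference while rebuilding the surface. The file format, paths and names are assumptions made for illustration, not the actual Medivis/CT export pipeline.

# Hypothetical sketch: load an ASCII point cloud into Maya as a particle
# object for modeling reference (file format and names are assumed).
import maya.cmds as cmds

def import_point_cloud(path, name="ct_pointCloud"):
    points = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 3:
                points.append(tuple(float(v) for v in parts[:3]))
    # Create one particle object holding every sample point
    result = cmds.particle(p=points, name=name)
    return result[0]

# Usage inside a Maya session:
# import_point_cloud("/path/to/ct_scan.xyz")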

However, not every bit of imagery was extracted from the system. Some had to be re-created using a standard 3D pipeline. For instance, the “scan” of the actor’s skull was replicated by the artists so that the skull model matched perfectly with the holographic imagery that was overlaid in post production (since everyone’s skull proportions are different). The group began by creating the models in Maya and then composited the imagery within Autodesk’s Flame, along with a 3D bounding box of the creative implant.

The artists also replicated the Medivis UI in 3D, recreating and matching the performance of the three-dimensional interface to the hand gestures of the person “using” the Medivis system in the spot — both of which were filmed separately. For the CG interface, the group used Autodesk’s Maya and Flame, as well as Adobe’s After Effects.

“The process was so integrated to the edit, we needed the proper 3D tracking and some of the assets to be built as a 3D screen element,” explains Sharabani. “It gave us more flexibility to build the 3D UI inside of Flame, enabling us to control it more quickly and easily when we changed a hand gesture or expanded the shots.”

With The-Artery’s experience pertaining to virtual technology, the team was quick to understand the limitations of the project using this particular equipment. Once that was established, however, they began to push the boundaries with small hacks that enabled them to achieve their goals of using actual holographic data to tell an amazing story.

Chantix “Turkey” Campaign

Chantix is a medication that helps smokers kick the habit. To get its message across in a series of television commercials, the drug maker decided to talk turkey, focusing the campaign on a CG turkey that, well, goes “cold turkey” with the assistance of Chantix.

A series of four spots — Slow Turkey, Camping, AC and Beach Day — prominently feature the turkey, created at The Mill. The spots were directed and produced in-house by Mill+, The Mill’s end-to-end production arm, with Jeffrey Dates directing.


L-R: John Montefusco, Dave Barosin and Scott Denton

“Each one had its own challenges,” says CG lead John Montefusco. Nevertheless, the initial commercial, Slow Turkey, presented the biggest obstacle: the build of the character from the ground up. “It was not only a performance feat, but a technical one as well,” he adds.

Effects artist Dave Barosin echoed Montefusco’s assessment of Slow Turkey, which, in addition to building the main asset from scratch, required the development of a feather system. Meanwhile, Camping and AC added clothing, and Beach Day presented the challenge of wind, water and simulation in a moving vehicle.

According to senior modeler Scott Denton, the team was given a good deal of creative freedom when crafting the turkey. The artists were presented with some initial sketches, he adds, but more or less had free rein in the creation of the look and feel of the model. “We were looking to tread the line between cartoony and realistic,” he says. The first iterations became very cartoony, but the team subsequently worked backward to where the character was more of a mix between the two styles.

The crew modeled the turkey using Autodesk’s Maya and Pixologic’s ZBrush. It was then textured within Adobe’s Substance and Foundry’s Mari. All the details of the model were hand-sculpted. “Nailing the look and feel was the toughest challenge. We went through a hundred iterations before getting to the final character you see in the commercial,” Denton says.

The turkey contains 6,427 body feathers, 94 flight feathers and eight scalp feathers. They were simulated using a custom feather setup built by the lead VFX artist within SideFX Houdini, which made the process more efficient. Proprietary tools also were used to groom the character.

The artists initially developed a concept sculpt in ZBrush of just the turkey’s head, which underwent numerous changes and versions before they added it to the body of the model. Denton then sculpted a posed version with sculpted feathers to show what the model might look like when posed, giving the client a better feel for the character. The artists later animated the turkey using Maya. Rendering was performed in Autodesk’s Arnold, while compositing was done within Foundry’s Nuke.

“Developing animation that holds good character and personality is a real challenge,” says Montefusco. “There’s a huge amount of evolution in the subtleties that ultimately make our turkey ‘the turkey.’”

For the most part, the same turkey model was used for all four spots, although the artists did adapt and change certain aspects (such as the skeleton and simulation meshes) for each as needed in the various scenarios.

For the turkey’s clothing (sweater, knitted vest, scarf, down vest, knitted cap, life vest), the group used Marvelous Designer 3D software for virtual clothes and fabrics, along with Maya and ZBrush. However, as Montefusco explains, tailoring for a turkey is far different than developing CG clothing for human characters. “Seeing as a lot of the clothes that were selected were knit, we really wanted to push the envelope and build the knit with geometry. Even though this made things a bit slower for our effects and lighting team, in the end, the finished clothing really spoke for itself.”

The four commercials also feature unique environments ranging from the interior and exterior of a home to a wooded area and beach. The artists used mostly plates for the environments, except for an occasional tent flap and chair replacement. The most challenging of these settings, says Montefusco, was the beach scene, which required full water replacement for the shot of the turkey on the paddle board.


Karen Moltenbrey is a veteran writer, covering visual effects and post production.

VFX in Features: Hobbs & Shaw, Sextuplets

By Karen Moltenbrey

What a difference a year makes. Then again, what a difference 30 years make. That’s about the time when the feature film The Abyss included photoreal CGI integrated with live action, setting a trend that continues to this day. Since that milestone, VFX wizards have tackled a plethora of complicated problems, from realistic hair and skin for believable digital humans to convincing water, fire and other elements. With each new blockbuster VFX film, digital artists continually raise the bar, challenging the status quo and themselves to elevate the art even further.

The visual effects in today’s feature films run the gamut from in-your-face imagery that can put you on the edge of your seat through heightened action to the kind that can make you laugh by amping up the comedic action. As detailed here, Fast & Furious Presents: Hobbs & Shaw takes the former approach, helping to carry out amazing stunts that are bigger and “badder” than ever. Opposite that is Sextuplets, which uses VFX to carry out a gag central to the film in a way that also pushes the envelope.

Fast & Furious Presents: Hobbs & Shaw

The Fast and the Furious film franchise, which has included eight features that collectively have amassed more than $5 billion worldwide since first hitting the road in 2001, is known for its high-octane action and visual effects. The latest installment, Fast & Furious Presents: Hobbs & Shaw, continues that tradition.

At the core of the franchise are next-level underground street racers who become reluctant fugitives pulling off big heists. Hobbs & Shaw, the first stand-alone vehicle, has Dwayne Johnson and Jason Statham reprising their roles as loyal Diplomatic Security Service lawman Luke Hobbs and lawless former British operative Deckard Shaw, respectively. This comes after facing off in Furious 7 (2015) and then playing cat and mouse as Shaw tries to escape from prison and Hobbs tries to stop him in 2017’s The Fate of the Furious. (Hobbs first appeared in 2011’s Fast Five and became an ally to the gang. Shaw’s first foray was in 2013’s Fast & Furious 6.)

Now, in the latest installment, the pair are forced to join forces to hunt down anarchist Brixton Lorr (Idris Elba), who has control of a bio weapon. The trackers are hired separately to find Hattie, a rogue MI6 agent (who is also Shaw’s sister, a fact that initially eludes Hobbs) after she injects herself with the bio agent and is on the run, searching for a cure.

The Universal Pictures film is directed by David Leitch (Deadpool 2, Atomic Blonde). Jonathan Sela (Deadpool 2, John Wick) is DP, and visual effects supervisor is Dan Glass (Deadpool 2, Jupiter Ascending). A number of VFX facilities worked on the film, including key vendor DNeg along with other contributors such as Framestore.

DNeg delivered 1,000-plus shots for the film, including a range of vehicle-based action sequences set in different global locations. The work involved the creation of full digi-doubles and digi-vehicle duplicates for the death-defying stunts, jumps and crashes, as well as complex effects simulations and extensive digital environments. Naturally, all the work had to fit seamlessly alongside live-action stunts and photography from a director with a stunt coordinator pedigree and a keen eye for authentic action sequences. In all, the studio worked on 26 sequences divided among the Vancouver, London and Mumbai locations. Vancouver handled mostly the Chernobyl break-in and escape sequences, as well as the Samoa chase. London did the McLaren chase and the cave fight, as well as London chase sequences. The Mumbai team assisted its colleagues in Vancouver and London.

When you think of the Fast & Furious franchise, the first things that come to mind are intense car chases, and according to Chris Downs, CG supervisor at DNeg Vancouver, the Chernobyl beat is essentially one long, giant car-and-motorcycle pursuit, which he describes as “a pretty epic car chase.”

“We essentially have Brixton chasing Shaw and Hattie, and then Shaw and Hattie are trying to catch up to a truck that’s being driven by Hobbs, and they end up on these utility ramps and pipes, using them almost as a roadway to get up and into the turbine rooms, onto the rooftops and then jump between buildings,” he says. “All the while, everyone is getting chased by these drones that Brixton is controlling.”

The Chernobyl sequences — the break-in and the escape — were the most challenging work on the film for DNeg Vancouver. The villain, Brixton, is using the Chernobyl nuclear power plant as the site of his hideaway, leading Hobbs and Shaw to covertly break into his secret lab underneath Chernobyl to locate a device Brixton has there — and then not-so-covertly break out.

The break-in was filmed at the decommissioned Eggborough coal-fired power plant, which served as a backdrop. To transform the locale into Chernobyl, DNeg augmented the site with cooling towers and other digital structures. Nevertheless, the artists also built an entire CG version of the site for the more extreme action, using photos of the actual Chernobyl as reference for their work. “It was a very intense build. We had artistic liberty, but it was based off of Chernobyl, and a lot of the buildings match the reference photography. It definitely maintained the feeling of a nuclear power plant,” says Downs.

The construction involved not only all the exteriors of the industrial complex around Chernobyl, but also an interior build of an “insanely complicated” turbine hall that the characters race through at one point.

The sequence required other environment work as well, along with effects, digi-doubles and cloth sims for the characters’ flight suits and parachutes as they drop into the setting.

Following the break-in, Hobbs and Shaw are captured and tortured and then manage to escape from the lab just in time as the site begins to explode. For this escape sequence, the crew created a CG Chernobyl reactor and power station, automated drones and a digital chimney, along with an epic collapse of buildings, complex pyrotechnic clouds and burning material.

“The scope of the work, the amount of buildings and pipes, and the number of shots made this sequence our most difficult,” says Downs. “We were blowing it up, so all the buildings had to be effects-friendly as we’re crashing things through them.” Hobbs and Shaw commandeer vehicles as they try to outrun Brixton and the explosion, but Brixton and his henchmen give chase in a range of vehicles, including trucks, Range Rovers, motorcycles and more — a mix of CGI and practical with expert stunt drivers behind the wheel.

As expected for a Fast & Furious film, there’s a big variety of custom-built vehicles. Yet, for this scene and especially in Samoa, DNeg Vancouver crafted a range of CG vehicles, including motorcycles, SUVs, transport trucks, a flatbed truck, drones and a helicopter — 10 in all.

According to Downs, maintaining the appropriate wear and tear on the vehicles as the sequences progressed was not always easy. “Some are getting shot up, or something is blown up next to them, and you want to maintain the dirt and grime on an appropriate level,” he says. “And, we had to think of that wear and tear in advance because you need to build it into the model and the texture as you progress.”

The CG vehicles are mostly used for complex stunts, “which are definitely an 11 on the scale,” says Downs. Along with the CG vehicles, digi-doubles of the actors were also used for the various stunt work. “They are fairly straightforward, though we had a couple shots where we got close to the digi-doubles, so they needed to be at a high level of quality,” he adds. The Hattie digi-double proved the most difficult due to the hair simulation, which had to match the action on set, and the cloth simulation, which had to replicate the flow of her clothing.

“She has a loose sweater on during the Chernobyl sequence, which required some simulation to match the plate,” Downs adds, noting that the artists built the digi-doubles from scratch, using scans of the actors provided by production for quality checks.

The final beat of the Chernobyl escape comes with the chimney collapse. As the chase through Chernobyl progresses, Shaw tries to get Hattie to Hobbs, and Brixton tries to grab Hattie from Shaw. In the process, charges are detonated around the site, leading to the collapse of the main chimney, which just misses obliterating the vehicle they are all in as it travels down a narrow alleyway.

DNeg did a full environment build of the area for this scene, which included the entire alleyway and the chimney, and simulated the destruction of the chimney along with an explosive concussive force from the detonation. “There’s a large fireball at the beginning of the explosion that turns into a large volumetric cloud of dust that’s getting kicked up as the chimney is collapsing, and all that had to interact with itself,” Downs says of the scene. “Then, as the chimney is collapsing toward the end of the sequence, we had the huge chunks ripping through the volumetrics and kicking up more pyrotechnic-style explosions. As it is collapsing, it is taking out buildings along the way, so we had those blowing up and collapsing and interacting with our dust cloud, as well. It’s quite a VFX extravaganza.”

Adding to the chaos: The sequence was reshot. “We got new plates for the end of that escape sequence that we had to turn around in a month, so that was definitely a white-knuckle ride,” says Downs. “Thankfully we had already been working on a lot of the chimney collapse and had the Chernobyl build mostly filled in when word came in about the reshoot. But, just the amount of effects that went into it — the volumetrics, the debris and then the full CG environment in the background — was a staggering amount of very complex work.”

The action later moves from London at the start of the film to Chernobyl for the break-in and escape, and then, in the third act, to Samoa, home of the Hobbs family, as the main characters seek refuge on the island while trying to escape from Brixton. But Brixton soon catches up to them, and the last showdown begins amid the island’s tranquil setting of shimmering blue ocean and lush green mountains. Some of the landscape is natural, some is man-made (sets) and some is CGI. To aid in the digital build of the Samoan environment, Glass traveled to the Hawaiian island of Kauai, where the filming took place, and took a good amount of reference footage.

For a daring chase in Samoa, the artists built out the cliff’s edge and sent a CG helicopter tumbling down the steep incline in the final battle with Brixton. In addition to creating the fully digital Samoan roadside, CG cliff and 3D Black Hawk, the artists completed complex VFX simulations and destruction, and crafted high-tech combat drones and more for the sequence.

The helicopter proved to be the most challenging of all the vehicles, as it had a couple of hero moments when certain sections were fairly close to the camera. “We had to have a lot of model and texture detail,” Downs notes. “And then with it falling down the cliff and crash-landing onto the beach area, the destruction was quite tricky. We had to plan out which parts would be damaged the most and keep that consistent across the shots, and then go back in and do another pass of textures to support the scratches, dents and so forth.”

Meanwhile, DNeg London and Mumbai handled a number of sequences, among them the compelling McLaren chase, the CIA building descent and the final cave fight in Samoa. There were also a number of smaller sequences, for a total of approximately 750 shots.

One of the scenes in the film’s trailer that immediately caught fans’ attention was the McLaren escape/motorcycle transformation sequence, during which Hobbs, Shaw and Hattie are being chased by Brixton baddies on motorcycles through the streets of London. Shaw, behind the wheel of a McLaren 720S, tries to evade the motorbikes by maneuvering the prized vehicle underneath two crossing tractor trailer rigs, squeezing through with barely an inch to spare. The bad news for the trio: Brixton pulls an even more daring move, hopping off the bike while grabbing onto the back of it and then sliding parallel inches above the pavement as the bike zips under the road hazard practically on its side; once cleared, he pulls himself back onto the motorbike (in a memorable slow-motion stunt) and continues the pursuit thanks to his cybernetically altered body.

Chris Downs

According to Stuart Lashley, DNeg VFX supervisor, this sequence contained a lot of bluescreen car comps in which the actors were shot on stage in a McLaren rigged on a mechanical turntable. The backgrounds were shot alongside the stunt work in Glasgow (playing as London). In addition, there were a number of CG cars added throughout the sequence. “The main VFX set pieces were Hobbs grabbing the biker off his bike, the McLaren and Brixton’s transforming bike sliding under the semis, and Brixton flying through the double-decker bus,” he says. “These beats contained full-CG vehicles and characters for the most part. There was some background DMP [digital matte-painting] work to help the location look more like London. There were also a few shots of motion graphics where we see Brixton’s digital HUD through his helmet visor.”

As Lashley notes, it was important for the CG work to blend in with the surrounding practical stunt photography. “The McLaren itself had to hold up very close to the camera; it has a very distinctive look to its coating, which had to match perfectly,” he adds. “The bike transformation was a welcome challenge. There was a period of experimentation to figure out the mechanics of all the small moving parts while achieving something that looked cool at the same time.”

As exciting and complex as the McLaren scene is, Lashley believes the cave fight sequence following the helicopter/tractor trailer crash was perhaps even more of a difficult undertaking, as it had a particular VFX challenge in terms of the super slow-motion punches. The action takes place at a rock-filled waterfall location — a multi-story set on a 30,000-square-foot soundstage — where the three main characters battle it out. The film’s final sequence is a seamless blend of CG and live footage.

Stuart Lashley

“David [Leitch] had the idea that this epic final fight should be underscored by these very stylized, powerful impact moments, where you see all this water explode in very graphic ways,” explains Lashley. “The challenge came in finding the right balance between physics-based water simulation and creative stylization. We went through a lot of iterations of different looks before landing on something David and Dan [Glass] felt struck the right balance.”

The DNeg teams used a unified pipeline for their work, which includes Autodesk’s Maya for modeling, animation and the majority of cloth and hair sims; Foundry’s Mari for texturing; Isotropix’s Clarisse for lighting and rendering; Foundry’s Nuke for compositing; and SideFX’s Houdini for effects work, such as explosions, dust clouds, particulates and fire.

With expectations running high for Hobbs & Shaw, filmmakers and VFX artists once more delivered, putting audiences on the edge of their seats with jaw-dropping VFX work that shifted the franchise’s action into overdrive yet again. “We hope people have as much fun watching the result as we had making it. This was really an exercise in pushing everything to the max,” says Lashley, “often putting the physics book to one side for a bit and picking up the Fast & Furious manual instead.”

Sextuplets

When actor/comedian/screenwriter/film producer Marlon Wayans signed on to play the lead in the Netflix original movie Sextuplets, he was committing to a role requiring an extensive acting range. That’s because he was filling not one but seven different lead roles in the same film.

In Sextuplets, directed by Michael Tiddes, Wayans plays soon-to-be father Alan, who hopes to uncover information about his family history before his child’s arrival and sets out to locate his birth mother. Imagine Alan’s surprise when he finds out that he is part of “identical” sextuplets! Nevertheless, his siblings are about as unique as they come.

There’s Russell, the nerdy, overweight introvert and the only sibling not given up by their mother, with whom he lived until her recent passing. Ethan, meanwhile, is the embodiment of a 1970s pimp. Dawn is an exotic dancer who is in jail. Baby Pete is on his deathbed and needs a kidney. Jaspar is a villain reminiscent of Austin Powers’ Dr. Evil. Okay, that is six characters, all played by Wayans. Who is the seventh? (Spoiler alert: Wayans also plays their mother, who was simply on vacation and not actually dead as Russell had claimed.)

There are over 1,100 VFX shots in the movie. None, really, involved the transformation of the actor into the various characters — that was done using prosthetics, makeup, wigs and so forth, with slight digital touch-ups as needed. Instead, the majority of the effects work resulted from shooting with a motion-controlled camera and then compositing two (or more) of the siblings together in a shot. For Baby Pete, the artists also had to do a head replacement, comp’ing Wayans onto the body of a much smaller actor.

“We used quite a few visual effects techniques to pull off the movie. At the heart was motion control, [which enables precise control and repetition of camera movement] and allowed us to put multiple characters played by Marlon together in the scenes,” says Tiddes, who has worked with Wayans on multiple projects in the past, including A Haunted House.

The majority of shots involving the siblings were done on stage, filmed on bluescreen with a TechnoDolly for the motion control, as it is too impractical to fit the large rig inside an actual house for filming. “The goal was to find locations that had the exterior I liked [for those scenes] and then build the interior on set,” says Tiddes. “This gave me the versatility to move walls and use the TechnoDolly to create multiple layers so we could then add multiple characters into the same scene and interact together.”

According to Tiddes, the team approached exterior shots similarly to interior ones, with the added challenge of shooting the duplicate moments at the same time each day to get consistent lighting. “Don Burgess, the DP, was amazing in that sense. He was able to create almost exactly the same lighting elements from day to day,” he notes.

Michael Tiddes

So, whenever there was a scene with multiple Wayans characters, it would be filmed on back-to-back days, once for each of the characters. Tiddes usually started off with Alan, the straight man, to set the pace for the scene, using body doubles for the other characters. Next, the director would work out the shot with the motion control until the timing, composition and so forth were perfected. Then he would hit the Record button on the motion-control device, and the camera would repeat the same exact move over and over as many times as needed. The next day, the shot was replicated with another character: the camera would move automatically, and Wayans would have to hit the same marks at the same moments established on the first day.

“Then we’d do it again on the third day with another character. It’s kind of like building layers in Photoshop, and in the end, we would composite all those layers on top of each other for the final version,” explains Tiddes.
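In compositing terms, that layering step is straightforward: each day’s motion-control pass reads in as its own plate and is merged over the accumulated result. The minimal Nuke Python sketch below illustrates the idea; the file paths and character passes are hypothetical, and in the actual comp each pass would be keyed or roto’d before the merge.

# Hypothetical sketch: layer motion-control passes in Nuke, merging each
# character plate over the base (paths and names are illustrative only).
import nuke

comp = nuke.nodes.Read(file="/plates/alan_day1.%04d.exr")
character_passes = [
    "/plates/russell_day2.%04d.exr",
    "/plates/ethan_day3.%04d.exr",
]

for path in character_passes:
    plate = nuke.nodes.Read(file=path)
    # In production, a keyer or roto would isolate the character upstream
    comp = nuke.nodes.Merge2(inputs=[comp, plate], operation="over")

nuke.nodes.Write(inputs=[comp], file="/comp/sextuplets_layered.%04d.exr")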

When one character would pass in front of another, it became a roto’d shot. Oftentimes a small bluescreen was set up on stage to allow for easier rotoscoping.

Image Engine was the main visual effects vendor on the film, with Bryan Jones serving as visual effects supervisor. The rotoscoping was done using a mix of SilhouetteFX’s Silhouette and Foundry’s Nuke, while compositing was mainly done using Nuke and Autodesk’s Flame.

Make no mistake … using the motion-controlled camera was not without challenges. “When you attack a scene, traditionally you can come in and figure out the blocking on the day [of the shoot],” says Tiddes. “With this movie, I had to previsualize all the blocking because once I put the TechnoDolly in a spot on the set, it could not move for the duration of time we shot in that location. It’s a large 13-foot crane with pieces of track that are 10 feet long and 4 feet wide.”

In fact, one of the main reasons Tiddes wanted to do the film was because of the visual effects challenges it presented. In past films where an actor played multiple characters in a scene, usually one character is on one side of the screen and the other character is on the other side, and a basic split-screen technique would have been used. “For me to do this film, I wanted to visually do it like no one else has ever done it, and that was accomplished by creating camera movement,” he explains. “I didn’t want to be constrained to only split-screen lock-off camera shots that would lack energy and movement. I wanted the freedom to block scenes organically, allowing the characters the flexibility to move through the room, with the opportunity to cross each other and interact together physically. By using motion control, by being able to re-create the same camera movement and then composite the characters into the scene, I was able to develop a different visual style than previous films and create a heightened sense of interactivity and interaction between two or multiple characters on the screen while simultaneously creating dynamic movement with the camera and invoking energy into the scene.”

At times, Gregg Wayans, Marlon’s nephew, served as his body double. He even appears in a very wide shot as one of the siblings, although that occurred only once. “At the end of the day, when the concept of the movie is about Marlon playing multiple characters, the perfectionist in me wanted Marlon to portray every single moment of these characters on screen, even when the character is in the background and out of focus,” says Tiddes. “Because there is only one Marlon Wayans, and no one can replicate what he does physically and comedically in the moment.”

Tiddes knew he would be challenged going into the project, but the process was definitely more complicated than he had initially expected — even with his VFX editorial background. “I had a really good starting point as far as conceptually knowing how to execute motion control. But, it’s not until you get into the moment and start working with the actors that you really understand and digest exactly how to pull off the comedic timing needed for the jokes with the visual effects,” he says. “That is very difficult, and every situation is unique. There was a learning curve, but we picked it up quickly, and I had a great team.”

A system was established that worked for Tiddes and Burgess, as well as Wayans, who had to execute and hit certain marks and look at proper eyelines with precise timing. “He has an earwig, and I am talking to him, letting him know where to look, when to look,” says Tiddes. “At the same time, he’s also hearing dialogue that he’s done the day before in his ear, and he’s reacting to that dialogue while giving his current character’s lines in the moment. So, there’s quite a bit going on, and it all becomes more complex when you add the character and camera moving through the scene. After weeks of practice, in one of the final scenes with Jaspar, we were able to do 16 motion-controlled moments in that scene alone, which was a lot!”

At the very end of the film, the group tested its limits and had all six characters (mom and all the siblings, with the exception of Alan) gathered around a table. That scene was shot over a span of five days. “The camera booms down from a sign and pans across the party, landing on all six characters around a table. Getting that motion and allowing the camera to flow through the party onto all six of them seamlessly interacting around the table was a goal of mine throughout the project,” Tiddes says.

Other shots that proved especially difficult were those of Baby Pete in the hospital room, since the entire scene involved Wayans playing three additional characters who are also present: Alan, Russell and Dawn. And then they amped things up with the head replacement on Baby Pete. “I had to shoot the scene and then, on the same day, select the take I would use in the final cut of the movie, rather than select it in post, where traditionally I could pick another take if that one was not working,” Tiddes adds. “I had to set the pace on the first day and work things out with Marlon ahead of time and plan for the subsequent days — What’s Dawn going to say? How is Russell going to react to what Dawn says? You have to really visualize and previsualize all the ad-libbing that was going on and work it out right there in the moment and discuss it, to have kind of a loose plan, then move forward and be confident that you have enough time between lines to allow room for growth when a joke just comes out of nowhere. You don’t want to stifle that joke.”

While the majority of effects involved motion control, there is a scene that contains a good amount of traditional effects work. In it, Alan and Russell park their car in a field to rest for the night, only to awake the next morning to find they have inadvertently provoked a bull, which sees red, literally — both from Alan’s jacket and his shiny car. Artists built the bull in CG. (They used Maya and Side Effects Houdini to build the 3D elements and rendered them in Autodesk’s Arnold.) Physical effects were then used to lift the actual car to simulate the digital bull slamming into the vehicle. In some shots of the bull crashing into the car doors, a 3D car was used to show the doors being damaged.

In another scene, Russell and Alan catch a serious amount of air when they crash through a barn, desperately trying to escape the bull. “I thought it would be hilarious if, in that moment, cereal exploded and individual pieces flew wildly through the car, while [the cereal-obsessed] Russell scooped up one of the cereal pieces mid-air with his tongue for a quick snack,” says Tiddes. To do this, “I wanted to create a zero-gravity slow-motion moment. We shot the scene using a [Vision Research] high-speed Phantom camera at 480fps. Then in post, we created the cereal as a CG element so I could control how every piece moved in the scene. It’s one of my favorite VFX/comedy moments in the movie.”

As Tiddes points out, Sextuplets was the first project on which he used motion control, which let him create motion with the camera and still have the characters interact, giving the subconscious feeling they were actually in the room with one another. “That’s what made the comedy shine,” he says.


Karen Moltenbrey is a veteran writer/editor covering VFX and post production.

Mavericks VFX provides effects for Hulu’s The Handmaid’s Tale

By Randi Altman

Season 3 episodes of Hulu’s The Handmaid’s Tale are available for streaming, and if you had any illusions that things would lighten up a bit for June (Elizabeth Moss) and the ladies of Gilead, I’m sorry to say you will be disappointed. What’s not disappointing is that, in addition to the amazing acting and storylines, the show’s visual effects once again play a heavy role.

Brendan Taylor

Toronto’s Mavericks VFX has created visual effects for all three seasons of the show, based on Margaret Atwood’s dystopian view of the not-too-distant future. Its work has earned two Emmy nominations.

We recently reached out to Mavericks’ founder and visual effects supervisor, Brendan Taylor, to talk about the new season and his workflow.

How early did you get involved in each season? What sort of input did you have regarding the shots?
The Handmaid’s Tale production is great because they involve us as early as possible. Back in Season 2, when we had to do the Fenway Park scene, for example, we were in talks in August but didn’t shoot until November. For this season, they called us in August for the big fire sequence in Episode 1, and the scene was shot in December.

There’s a lot of nice leadup and planning that goes into it. Our opinions are sought, and we’re able to provide input on the best methodology to achieve a shot. Showrunner Bruce Miller, along with the directors, has a sense of how they’d like to see it, and they’re great at taking in our recommendations. It was very collaborative, and we all approached the process with “what’s best for the show” in mind.

What are some things that the showrunners asked of you in terms of VFX? How did they describe what they wanted?
Each person has a different approach. Bruce speaks in story terms, providing a broader sense of what he’s looking for. He gave us the overarching direction of where he wants to go with the season. Mike Barker, who directed a lot of the big episodes, speaks in more specific terms. He really gets into the details, determining the moods of the scene and communicating how each part should feel.

What types of effects did you provide? Can you give examples?
Some standout effects were the CG smoke in the burning fire sequence and the aftermath of the house being burned down. For the smoke, we had to make it snake around corners in a believable yet magical way. We had a lot of fire going on set, and we couldn’t have any actors or stunt people near it due to its size, so we had to line up multiple shots and composite them together to make everything look realistic. We then had to recreate the whole house in 3D in order to create the aftermath of the fire, with the house completely burned down.

We also went to Washington, and since we obviously couldn’t destroy the Lincoln Memorial, we recreated it all in 3D. That was a lot of back and forth between Bruce, the director and our team. Different parts of Lincoln being chipped away means different things, and Bruce definitely wanted the head to be off. It was really fun because we got to provide a lot of suggestions. On top of that, we also had to create CGI handmaids and all the details that came with it. We had to get the robes right and did cloth simulation to match what was shot on set. There were about a hundred handmaids on set, but we had to make it look like there were thousands.

Were you able to reuse assets from last season for this one?
We were able to use a handmaid asset from last season, but it needed a lot of upgrades for this season. Because there were closer shots of the handmaids, we had to tweak it and make sure little things like the textures, shaders and different cloth simulations were right for this season.

Were you on set? How did that help?
Yes, I was on set, especially for the fire sequences. We spent a lot of time talking about what’s possible and testing different ways to make it happen. We want it to be as perfect as possible, so I had to make sure it was all done properly from the start. We sent another visual effects supervisor, Leo Bovell, down to Washington to supervise out there as well.

Can you talk about a scene or scenes where being on set played a part in doing something either practical or knowing you could do it in CG?
The fire sequence with the smoke going around the corner took a lot of on-set collaboration. We had tried doing it practically, but the smoke was moving too fast for what we wanted, and there was no way we could physically slow it down.

Having the special effects coordinator, John MacGillivray, there to give us real smoke that we could then match to was invaluable. In most cases on this show, very few audibles were called. They want to go into the show knowing exactly what to expect, so we were prepared and ready.

Can you talk about turnaround time? Typically, series have short ones. How did that affect how you worked?
The average turnaround time was eight weeks. We began discussions in August, before shooting, and had to deliver by January. We worked with Mike to simplify things without diminishing the impact. We just wanted to make sure we had the chance to do it well given the time we had. Mike was very receptive in asking what we needed to make it the best it could be in the timeframe that we had. Take the fire sequence, for example. We could have done full-CGI fire, but that would have taken six months. So we did our research and testing to find the most efficient way to merge practical effects with CGI and presented the best version in a shorter period of time.

What tools were used?
We used Foundry Nuke for compositing. We used Autodesk Maya to build all the 3D houses, including the burned-down house, and to destroy the Lincoln Memorial. Then we used Side Effects Houdini to do all the simulations, which can range from the smoke and fire to crowd and cloth.

Is there a shot that you are most proud of or that was very challenging?
The shot where we reveal the crowd over June when we’re in Washington was incredibly challenging. The actual Lincoln Memorial, where we shot, is an active public park, so we couldn’t prevent people from visiting the site. The most we could do was hold them off for a few minutes. We ended up having to clean out all of the tourists, which is difficult with a moving camera and moving people. We had to reconstruct about 50% of the plate. Then, in order to get the CG people to be standing there, we had to create a replica of the ground they’re standing on in CG. There were some models we got from the US Geological Survey, but they didn’t completely line up, so we had to make a lot of decisions on the fly.

The cloth simulation in that scene was perfect. We had to match the dampening and the movement of all the robes. Stephen Wagner, who is our effects lead on it, nailed it. It looked perfect, and it was really exciting to see it all come together. It looked seamless, and when you saw it in the show, nobody believed that the foreground handmaids were all CG. We’re very proud.

What other projects are you working on?
We’re working on a movie called Queen & Slim by Melina Matsoukas with Universal. It’s really great. We’re also doing YouTube Premium’s Impulse and Netflix’s series Madam C.J. Walker.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

VFX in Series: The Man in the High Castle, Westworld

By Karen Moltenbrey

The look of television changed forever starting in the 1990s as computer graphics technology began to mature to the point where it could be incorporated within television productions. Indeed, the applications initially were minor, but soon audiences were witnessing very complicated work on the small screen. Today, we see a wide range of visual effects being used in television series, from minor wire and sign removal to all-CG characters and complete CG environments — pretty much anything and everything to augment the action and story, or to turn a soundstage or location into a specific locale that could be miles away or even non-existent.

Here, we examine two prime examples where a wide range of visual effects are used to set the stage and propel the action for a pair of series with unique settings. The Man in the High Castle uses effects to turn back the clock to the 1960s, but also to create an alternate reality for the period, turning the familiar on its head. In Westworld, effects create a unique Wild West of the future. In both series, VFX also help turn up the volume on the shows’ very creative storylines.

The Man in the High Castle

What would life in the US be like if the Axis powers had defeated the Allied forces during World War II? The Amazon TV series The Man in the High Castle explores that alternate history scenario. Created by Frank Spotnitz and produced by Amazon Studios, Scott Free Productions, Headline Pictures, Electric Shepherd Productions and Big Light Productions, the series is scheduled to start its fourth and final season in mid-November. The story is based on the book by Philip K. Dick.

High Castle begins in the early 1960s in a dystopian America. Nazi Germany and the Empire of Japan have divvied up the US as their spoils of war. Germany rules the East, known as the Greater Nazi Reich (with New York City as the regional capital), while Japan controls the West, known as the Japanese Pacific States (whose capital is now San Francisco). The Rocky Mountains serve as the Neutral Zone. The American Resistance works to thwart the occupiers, spurred on after the discovery of materials displaying an alternate reality where the Allies were victorious, making them ponder this scenario.

With this unique storyline, visual effects artists were tasked with turning back the clock on present-day locations to the ’60s and then turning them into German- and Japanese-dominated and inspired environments. Starting with Season 2, the main studio filling this role has been Barnstorm Visual Effects (Los Angeles, Vancouver). Barnstorm operated as one of the vendors for Season 1, but has since ramped up its crew from a dozen to around 70 to take on the additional work. (Barnstorm also works on CBS All Access shows such as The Good Fight and Strange Angel, in addition to Get Shorty, Outlander and the HBO series Room 104 and Silicon Valley.)

According to Barnstorm co-owner and VFX supervisor Lawson Deming, the studio is responsible for all types of effects for the series, ranging from simple cleanups and fixes, such as removing modern objects from shots, to more extensive period work through the addition of period set pieces and set extensions. There are also flashback scenes that call for the artists to digitally de-age the actors, plus lots of military vehicles and science-fiction objects to add. The majority of the overall work entails CG set extensions and world creation, Deming explains. “That involves matte paintings and CG vehicles and buildings.”

The number of visual effects shots per episode also varies greatly, depending on the story line; there are an average of 60 VFX shots an episode, with each season encompassing 10 episodes. Currently the team is working on Season 4. A core group of eight to 10 CG artists and 12 to 18 compositors work on the show at any given time.

For Season 3, released last October, there are a number of scenes that take place in Reich-occupied New York City. Although it was possible to go to NYC and photograph buildings for reference, the city has changed significantly since the 1960s, “even notwithstanding the fact that this is an alternate history 1960s,” says Deming. “There would have been a lot of work required to remove modern-day elements from shots, particularly at the street level of buildings where modern-day shops are located, even if it was a building from the 1940s, ’50s or ’60s. The whole main floor would have needed to be replaced.”

So, in many cases, the team found it more prudent to create set extensions for NYC from scratch. The artists created sections of Fifth and Sixth avenues, both for the area where American-born Reichmarshall and Resistance investigator John Smith has his apartment and also for a parade sequence that occurs in the middle of Season 3. They also constructed a digital version of Central Park for that sequence, which involved crafting a lot of modular buildings with mix-and-match pieces and stories to make what looked like a wide variety of different period-accurate buildings, with matte paintings for the backgrounds. Elements such as fire escapes and various types of windows (some with curtains open, some closed) helped randomize the structures. Shaders for brick, stucco, wood and so forth further enabled the artists to get a lot of usage from relatively few assets.

“That was a large undertaking, particularly because in a lot of those scenes, we also had crowd duplication, crowd systems, tiling and so on to create everything that was there,” Deming explains. “So even though it’s just a city and there’s nothing necessarily fantastical about it, it was almost fully created digitally.”

The styles of NYC and San Francisco are very different in the series narrative. The Nazis are rebuilding NYC in their own image, so there is a lot of influence from brutalist architecture, and cranes often dot the skyline to emphasize all the construction taking place. Meanwhile, San Francisco has more of a 1940s look, as the Japanese are less interested in imposing architectural change than they are in occupation.

“We weren’t trying to create a science-fiction world because we wanted to be sure that what was there would be believable and sell the realistic feel of the story. So, we didn’t want to go too far in what we created. We wanted it to feel familiar enough, though, that you could believe this was really happening,” says Deming.

One of the standout episodes for visual effects is “Jahr Null” (Season 3, Episode 10), which has been nominated for a 2019 Emmy in the Outstanding Special Visual Effects category. It entails the destruction of the Statue of Liberty, which crashes into the water, requiring just about every tool available at Barnstorm. “Prior to [the upcoming] Season 4, our biggest technical challenge was the Statue of Liberty destruction. There were just so many moving parts, literally and figuratively,” says Deming. “So many things had to occur in the narrative – the Nazis had this sense of showmanship, so they filmed their events and there was this constant stream of propaganda and publicity they had created.”

Ferries carry spectators out to watch the event, spotlights hit the statue, and an air show with music precedes the destruction as planes with trails of colored smoke fly toward the statue. When the planes fire their missiles at the base of the statue, it’s for show; explosives planted in the base go off in a ring formation to force the collapse. Deming explains the logistics challenge: “We wanted the statue’s torch arm to break off and sink in the water, but the statue sits too far back. We had to manufacture a way for the statue to not just tip over, but to sort of slide down the rubble of the base so it would be close enough to the edge and the arm would snap off against the side of the island.”

The destruction simulation, including the explosions, fire, water and so forth, was handled primarily in SideFX Houdini. Because there was so much sim work, a good deal of the effects work for the entire sequence was done in Houdini as well. Lighting and rendering for the scene were done in Autodesk’s Arnold.

Barnstorm also used Blender, an open-source 3D program for modeling and asset creation, for a small portion of the assets in this sequence. In addition, the artists used Houdini Mantra for the water rendering, while textures and shaders were built in Adobe’s Substance Painter; later the team used Foundry’s Nuke to composite the imagery. “There was a lot of deep compositing involved in that scene because we had to have the lighting interact in three dimensions with things like the smoke simulation,” says Deming. “We had a bunch of simulations stacked on top of one another that created a lot of data to work with.”
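
Deep compositing is the part worth unpacking: instead of a single flat color and alpha per pixel, each render delivers multiple samples with depth and opacity, so separately simulated elements can be interleaved correctly at comp time. The toy Python sketch below is illustrative only — real deep workflows run through Nuke and deep OpenEXR images — but it shows the idea for a single pixel.

```python
# Conceptual sketch of deep compositing for a single pixel: each element
# contributes samples with a depth (z), color and alpha; merging interleaves
# them by depth and composites front to back with the "over" operator.
# (Illustrative only -- not Nuke's internals.)

def deep_merge(*sample_lists):
    """Combine per-pixel deep samples from several renders, sorted by depth."""
    merged = [s for samples in sample_lists for s in samples]
    return sorted(merged, key=lambda s: s["z"])

def flatten(samples):
    """Composite depth-sorted samples front to back into a flat color + alpha."""
    out_rgb, out_a = [0.0, 0.0, 0.0], 0.0
    for s in samples:
        weight = (1.0 - out_a) * s["a"]
        out_rgb = [c + weight * sc for c, sc in zip(out_rgb, s["rgb"])]
        out_a += weight
    return out_rgb, out_a

statue_debris = [{"z": 40.0, "rgb": [0.35, 0.33, 0.30], "a": 1.0}]
smoke_sim     = [{"z": 25.0, "rgb": [0.60, 0.58, 0.55], "a": 0.4},
                 {"z": 55.0, "rgb": [0.50, 0.48, 0.45], "a": 0.3}]

# Front smoke partially veils the debris; back smoke is hidden behind it.
print(flatten(deep_merge(statue_debris, smoke_sim)))
```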

The artists referenced historical photographs as they designed and built the statue with a period-accurate torch. In the wide aerial shots, the team used some stock footage of the statue with New York City in the background, but had to replace pretty much everything in the shot, shortening the city buildings and replacing Liberty Island, the water surrounding it and the vessels in the water. “So yeah, it ended up being a fully digital model throughout the sequence,” says Deming.

Deming cannot discuss the effects work coming up in Season 4, but he does note that Season 3 contained a lot of digital NYC. This included a sequence wherein John Smith was installed as the Reichmarshall near Central Park, a scene that comprised a digital NYC and digital crowd duplication. On the other side of the country, the team built digital versions of all the ships in San Francisco harbor, including CG builds of period Japanese battleships retrofitted with more modern equipment. Water simulations rounded out the scene.

In another sequence, the Japanese performed nuclear testing in Monument Valley, blowing the caps off the mesas. For that, the artists used reference photos to build the landscape and then created a digital simulation of a nuclear blast.

In addition, there were a multitude of banners on the various buildings. Because of the provocative nature of some of the Nazi flags and Fascist propaganda, solid-color banners were often hung on location, with artists adding the offensive imagery in post so as not to upset locals where the series was filmed. Other times, the VFX artists added all-digital signage to the scenes.

As Deming points out, there is only so much that can be created through production design and costumes. Some of the big things have to be done with visual effects. “There are large world events in the show that happen and large settings that we’re not able to re-create any other way. So, the visual effects are integral to the process of creating the aesthetic world of the show,” he adds. “We’re creating things that, while visually impressive, also feel authentic, like a world that could really exist. That’s where the power and the horror of this world come from.”

High Castle is up for a total of three Emmy awards later this month. It was nominated for three Emmys in 2017 for Season 2 and four in 2016 for Season 1, taking home two Emmys that year: one for Outstanding Cinematography for a Single-Camera Series and another for Outstanding Title Design.

Westworld

What happens when high tech meets the Wild West, and wealthy patrons can indulge their fantasies with no limits? That is the premise of the Emmy-winning HBO series Westworld from creators Jonathan Nolan and Lisa Joy, who executive produce along with J.J. Abrams, Athena Wickham, Richard J. Lewis, Ben Stephenson and Denise Thé.

Westworld is set in the fictitious western theme park called Westworld, one of multiple parks where advanced technology enables the use of lifelike android hosts to cater to the whims of guests who are able to pay for such services — all without repercussions, as the hosts are programmed not to retaliate or harm the guests. After each role-play cycle, the host’s memory is erased, and then the cycle begins anew until eventually the host is either decommissioned or used in a different narrative. Staffers are situated out of sight while overseeing park operations and performing repairs on the hosts as necessary. As you can imagine, guests often play out the darkest of desires. So, what happens if some of the hosts retain their memories and begin to develop emotions? What if some escape from the park? What occurs in the other themed parks?

The series debuted in October 2016, with Season 2 running from April through June of 2018. Production for Season 3 began this past spring, and the season is planned for release in 2020.

The first two seasons were shot in various locations in California, as well as in Castle Valley near Moab, Utah. Multiple vendors provide the visual effects, including the team at CoSA VFX (North Hollywood, Vancouver and Atlanta), which has been with the show since the pilot, working closely with Westworld VFX supervisor Jay Worth. CoSA worked with Worth in the past on other series, including Fringe, Undercovers and Person of Interest.

The number of VFX shots per episode varies, depending on the storyline, and that means the number of shots CoSA is responsible for varies widely as well. For instance, the facility did approximately 360 shots for Season 1 and more than 200 for Season 2. The studio is unable to discuss its work at this time on the upcoming Season 3.

The type of effects work CoSA has done on Westworld varies as well, ranging from concept art created by its concept department to extension work handled by its environments department. “Our CG team is quite large, so we handle every task from modeling and texturing to rigging, animation and effects,” says Laura Barbera, head of 3D at CoSA. “We’ve created some seamless digital doubles for the show that even I forget are CG! We’ve done crowd duplication, for which we did a fun shoot where we dressed up in period costumes. Our 2D department is also sizable, and they do everything from roto, to comp and creative 2D solutions, to difficult greenscreen elements. We even have a graphics department that did some wonderful shots for Season 2, including holograms and custom interfaces.”

On the 3D side, the studio’s pipeline is built mainly around Autodesk’s Maya and SideFX Houdini, along with Adobe’s Substance, Foundry’s Mari and Pixologic’s ZBrush. Maxon’s Cinema 4D and Interactive Data Visualization’s SpeedTree vegetation modeler are also used. On the 2D side, the artists employ Foundry’s Nuke and the Adobe suite, including After Effects and Photoshop; rendering is done in Chaos Group’s V-Ray and Redshift.

Of course, there have been some recurring effects each season, such as the host “twitches and glitches.” And while some of the same locations have been revisited, the CoSA artists have had to modify the environments to fit with the changing timeline of the story.

“Every season sees us getting more and more into the characters and their stories, so it’s been important for us to develop along with it. We’ve had to make our worlds more immersive so that we are feeling out the new and changing surroundings just like the characters are,” Barbera explains. “So the set work gets more complex and the realism gets even more heightened, ensuring that our VFX become even more seamless.”

At center stage have been the park locations, which are rooted in existing terrain, as there is a good deal of location shooting for the series. The challenge for CoSA then becomes how to enhance it and make nature seem even more full and impressive, while still subtly hinting toward the changes in the story, says Barbera. For instance, the studio did a significant amount of work to the Skirball Cultural Center locale in LA for the outdoor environment of Delos, which owns and operates the parks. “It’s now sitting atop a tall mesa instead of overlooking the 405!” she notes. The team also added elements to the abandoned Hawthorne Plaza mall to depict the sublevels of the Delos complex. They’re constantly creating and extending the environments in locations inside and out of the park, including the town of Pariah, a particularly lawless area.

“We’ve created beautiful additions to the outdoor sets. I feel sometimes like we’re looking at a John Ford film, where you don’t realize how important the world around you is to the feel of the story,” Barbera says.

CoSA has done significant interior work too, creating spaces that did not exist on set “but that you’d never know weren’t there unless you’d see the before and afters,” Barbera says. “It’s really very visually impressive — from futuristic set extensions, cars and [Westworld park co-creator] Arnold’s house in Season 2, it’s amazing how much we’ve done to extend the environments to make the world seem even bigger than it is on location.”

One of the larger challenges of the first two seasons came in Season 2: creating the Delos complex and, in the final episodes, building a world inside a world – the Sublime – as well as the gateway to get there. “Creating the Sublime was a challenge because we had to reuse and yet completely change existing footage to design a new environment,” explains Barbera. “We had to find out what kind of trees and foliage would live in that environment, and then figure out how to populate it with hosts that were never in the original footage. This was another sequence where we had to get particularly creative about how to put all the elements together to make it believable.”

In the final episode of the second season, the group created environment work on the hills, pinnacles and quarry where the door to the Sublime appears. They also did an extensive rebuild of the Sublime environment, where the hosts emerge after crossing over. “In the first season, we did a great deal of work on the plateau side of Delos, as well as adding mesas into the background of other shots — where [hosts] Dolores and Teddy are — to make the multiple environments feel connected,” adds Barbera.

Aside from the environments, CoSA also did some subtle work on the robots, especially in Season 2, to make them appear as if they were becoming unhinged, hinting at a malfunction. The comp department also added eye twitches, subtle facial tics and even rapid blinks to provide a sense of uneasiness.

While Westworld’s blending of the Old West’s past and the robotic future initially may seem at thematic odds, the balance of that duality is cleverly accomplished in the filming of the series and the way it is performed, Barbera points out. “Jay Worth has a great vision for the integrated feel of the show. He established the looks for everything,” she adds.

The balance of the visual effects is equally important because it enhances the viewer experience. “There are things happening that can be so subtle but have so much impact. Much of our work on the second season was making sure that the world stayed grounded, so that the strangeness that happened with the characters and story line read as realistic,” Barbera explains. “Our job as visual effects artists is to help our professional storytelling partners tell their tales by adding details and elements that are too difficult or fantastic to accomplish live on set in the midst of production. If we’re doing our job right, you shouldn’t feel suddenly taken out of the moment because of a splashy effect. The visuals are there to supplement the story.”


Karen Moltenbrey is a veteran writer/editor covering VFX and post production.

Visual Effects Roundtable

By Randi Altman

With Siggraph 2019 in our not-too-distant rearview mirror, we thought it was a good time to reach out to visual effects experts to talk about trends. Everyone has had a bit of time to digest what they saw. Users are thinking about which new tools and technologies might help their current and future workflows, and manufacturers are thinking about how their products will incorporate these new technologies.

We provided these experts with questions relating to realtime raytracing, the use of game engines in visual effects workflows, easier ways to share files and more.

Ben Looram, partner/owner, Chapeau Studios
Chapeau Studios provides production, VFX/animation, design and creative IP development (both for digital content and technology) for all screens.

What film inspired you to work in VFX?
There was Ray Harryhausen’s film Jason and the Argonauts, which I watched on TV when I was seven. The skeleton-fighting scene has been visually burned into my memory ever since. Later in life I watched an artist compositing some tough bluescreen shots on a Quantel Henry in 1997, and I instantly knew that that was going to be in my future.

What trends have you been seeing? USD? Rendering in the cloud? What do you feel is important?
Double the content for half the cost seems to be the industry’s direction lately. This is coming from new in-house/client-direct agencies that sometimes don’t know what they don’t know … so we help guide/teach them where it’s OK to trim budgets or dedicate more funds for creative.

Are game engines affecting how you work, or how you will work in the future?
Yes. Rendering on device and all the subtle shifts in video fidelity turned our attention toward game engine technology a couple of years ago. As soon as the game engines start to look less canned and have accurate depth of field and parallax, we’ll start to integrate more of those tools into our workflow.

Right now we have a handful of projects in the forecast where we will be using realtime game engine outputs as backgrounds on set instead of shooting greenscreen.

What about realtime raytracing? How will that affect VFX and the way you work?
We just finished an R&D project with Intel’s new raytracing engine OSPRay for Siggraph. The ability to work on a massive scale with last-minute creative flexibility was my main takeaway. This will allow our team to support our clients’ swift changes in direction with ease on global launches. I see this ingredient as really exciting for our creative tech devs moving into 2020. Proof of concept iterations will become finaled faster, and we’ve seen efficiencies in lighting, render and compositing effort.

How have ML/AI affected your workflows, if at all?
None to date, but we’ve been making suggestions for new tools that will make our compositing and color correction process more efficient.

The Uncanny Valley. Where are we now?
Still uncanny. Even with well-done virtual avatar influencers on Instagram like Lil Miquela, we’re still caught with that eerie feeling of close-to-visually-correct with a “meh” filter.

Apple

Can you name some recent projects?
The Rookie’s Guide to the NFL. This was a fun hybrid project where we mixed CG character design with realtime rendering and voice activation. We created an avatar named Matthew for the NFL’s Amazon Alexa Skills store that answers your football questions in real time.

Microsoft AI: Carlsberg and Snow Leopard. We designed Microsoft’s visual language of AI on multiple campaigns.

Apple Trade In campaign: Our team concepted, shot and created an in-store video wall activation and on-all-device screen saver for Apple’s iPhone Trade In Program.

 

Mac Moore, CEO, Conductor
Conductor is a secure cloud-based platform that enables VFX, VR/AR and animation studios to seamlessly offload rendering and simulation workloads to the public cloud.

What are some of today’s VFX trends? Is cloud playing an even larger role?
Cloud is absolutely a growing trend. I think for many years the inherent complexity and perceived cost of cloud has limited adoption in VFX, but there’s been a marked acceleration in the past 12 months.

Two years ago at Siggraph, I was explaining the value of elastic compute and how it perfectly aligns with the elastic requirements that define our project-based industry; this year there was a much more pragmatic approach to cloud, and many of the people I spoke with are either using the cloud or planning to use it in the near future. Studios have seen referenceable success, both technically and financially, with cloud adoption and are now defining cloud’s role in their pipeline for fear of being left behind. Having a cloud-enabled pipeline is really a game changer; it is leveling the field and allowing artistic talent to be the differentiation, rather than the size of the studio’s wallet (and its ability to purchase a massive render farm).

How are game engines changing how VFX are done? Is this for everyone or just a select few?
Game engines for VFX have definitely attracted interest lately and show a lot of promise in certain verticals like virtual production. There’s more work to be done in terms of out-of-the-box usability, but great strides have been made in the past couple years. I also think various open source initiatives and the inherent collaboration those initiatives foster will help move VFX workflows forward.

Will realtime raytracing play a role in how your tool works?
There’s a need for managing the “last mile,” even in realtime raytracing, which is where Conductor would come in. We’ve been discussing realtime assist scenarios with a number of studios, such as pre-baking light maps and similar applications, where we’d perform some of the heavy lifting before assets are integrated in the realtime environment. There are certainly benefits on both sides, so we’ll likely land in some hybrid best practice using realtime and traditional rendering in the near future.

How do ML/AI and AR/VR play a role in your tool? Are you supporting OpenXR 1.0? What about Pixar’s USD?
Machine learning and artificial intelligence are critical for our next evolutionary phase at Conductor. To date we’ve run over 250 million core-hours on the platform, and for each of those hours, we have a wealth of anonymous metadata about render behavior, such as the software run, duration, type of machine, etc.

Conductor

For our next phase, we’re focused on delivering intelligent rendering akin to ride-share app pricing; the goal is to provide producers with an upfront cost estimate before they submit the job, so they have a fixed price that they can leverage for their bids. There is also a rich set of analytics that we can mine, and those analytics are proving invaluable for studios in the planning phase of a project. We’re working with data science experts now to help us deliver this insight to our broader customer base.
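
The kind of estimate Moore describes could, in rough outline, be computed from exactly that historical metadata. The sketch below is hypothetical — it is not Conductor’s actual pricing logic or API — and simply turns past core-hours-per-frame into a buffered, fixed-price figure a producer could carry into a bid.

```python
# Hypothetical sketch of an upfront render-cost estimate built from
# historical metadata (not Conductor's actual pricing logic or API).
from statistics import median

# Assumed historical records: core-hours consumed per frame on similar jobs.
history_core_hours_per_frame = [1.8, 2.1, 1.6, 2.4, 2.0]

def estimate_job_cost(frame_count, rate_per_core_hour, buffer=1.15):
    """Median-based estimate with a safety buffer for a fixed-price bid."""
    per_frame = median(history_core_hours_per_frame)
    core_hours = per_frame * frame_count
    return core_hours * rate_per_core_hour * buffer

# e.g. a 240-frame shot at an assumed $0.08 per core-hour
print(f"${estimate_job_cost(240, 0.08):,.2f}")
```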

The AR/VR front presents a unique challenge for cloud, due to the large size and variety of the datasets involved. The rendering of these workloads is less about compute cycles and more about scene assembly, so we’re determining how we can deliver more of a whole product for this market in particular.

OpenXR and USD are certainly helping with industry best practices and compatibility, which create recipes for repeatable success, and Conductor is collaborating on guidelines for using those standards with cloud computing.

What is next on the horizon for VFX?
Cloud, open source and realtime technologies are all disrupting VFX norms and are converging in a way that’s driving an overall democratization of the industry. Gone are the days when you need a pile of cash and a big brick-and-mortar building to house all of your tech and talent.

Streaming services and new mediums, along with a sky-high quality bar, have increased the pool of available VFX work, which is attracting new talent. Many of these new entrants are bootstrapping their businesses with cloud, standards-based approaches and geographically dispersed artistic talent.

Conductor recently became a fully virtual company for this reason. I hire based on expertise, not location, and today’s technology allows us to collaborate as if we are in the same building.

 

Aruna Inversin, creative director/VFX supervisor, Digital Domain 
Digital Domain has provided visual effects and technology for hundreds of motion pictures, commercials, video games, music videos and virtual reality experiences. It also livestreams events in 360-degree virtual reality, creates “virtual humans” for use in films and live events, and develops interactive content, among other things.

What film inspired you to work in VFX?
RoboCop in 1987. The combination of practical effects, miniatures and visual effects inspired me to start learning about what some call “The Invisible Art.”

What trends have you been seeing? What do you feel is important?
There has been a large focus on realtime rendering and virtual production and on using them to increase the throughput of visual effects. While realtime rendering does increase throughput, there is now a greater onus on filmmakers to plan their creative ideas and assets before they can be rendered. It is no longer truly post production; we are back in the realm of preproduction, using post tools and realtime tools to help define how a story is created and eventually filmed.

USD and cloud rendering are also important components, which give many different VFX facilities the ability to manage their resources effectively. Another trend that has been building for a while and has gained more traction is the availability of ACES and a more unified color space from the Academy. This allows quicker throughput between all facilities.

Are game engines affecting how you work or how you will work in the future?
As my primary focus is in new media and experiential entertainment at Digital Domain, I already use game engines (cinematic engines, realtime engines) for the majority of my deliverables. I also use our traditional visual effects pipeline; we have created a pipeline that flows from our traditional cinematic workflow directly into our realtime workflow, speeding up the development process of asset creation and shot creation.

What about realtime raytracing? How will that affect VFX and the way you work?
The ability to use Nvidia’s RTX and raytracing increases the physicality and realistic approximations of virtual worlds, which is really exciting for the future of cinematic storytelling in realtime narratives. I think we are just seeing the beginnings of how RTX can help VFX.

How have AR/VR and AI/ML affected your workflows, if at all?
Augmented reality has occasionally been a client deliverable for us, but we are not using it heavily in our VFX pipeline. Machine learning, on the other hand, allows us to continually improve our digital humans projects, providing quicker turnaround with higher fidelity than competitors.

The Uncanny Valley. Where are we now?
There is no more uncanny valley. We have the ability to create a digital human with the nuance expected! The only limitations are time and resources.

Can you name some recent projects?
I am currently working on a Time project but I cannot speak too much about it just yet. I am also heavily involved in creating digital humans for realtime projects for a number of game companies that wish to push the boundaries of storytelling in realtime. All these projects have a release date of 2020 or 2021.

 

Matt Allard, strategic alliances lead, M&E, Dell Precision Workstations
Dell Precision workstations feature the latest processors and graphics technology and target those working in the editing studio or at a drafting table, at the office or on location.

What are some of today’s VFX trends?
We’re seeing a number of trends in VFX at the moment — from 4K mastering from even higher-resolution acquisition formats and an increase in HDR content to game engines taking a larger role on set in VFX-heavy productions. Of course, we are also seeing rising expectations for more visual sophistication, complexity and film-level VFX, even in TV post (for example, Game of Thrones).

Will realtime raytracing play a role in how your tools work?
We expect that Dell customers will embrace realtime and hardware-accelerated raytracing as creative, cost-saving and time-saving tools. With the availability of Nvidia Quadro RTX across the Dell Precision portfolio, including on our 7000 series mobile workstations, customers can realize these benefits now to deliver better content wherever a production takes them in the world.

Large-scale studio users will not only benefit from the freedom to create the highest-quality content faster, but they’ll likely see an overall impact on their energy consumption as they assess the move away from CPU rendering, which dominates studio data centers today. Moving toward GPU and hybrid CPU/GPU rendering approaches can offer equal or better rendering output with less energy consumption.

How are game engines changing how VFX are done? Is this for everyone or just a select few?
Game engines have made their way into VFX-intensive productions to deliver in-context views of the VFX during the practical shoot. With increasing quality driven by realtime raytracing, game engines have the potential to drive a master-quality VFX shot on set, helping to minimize the need to “fix it in post.”

What is next on the horizon for VFX?
The industry is at the beginning of a new era as artificial intelligence and machine learning techniques are brought to bear on VFX workflows. Analytical and repetitive tasks are already being targeted by major software applications to accelerate or eliminate cumbersome elements in the workflow. And as with most new technologies, it can result in improved creative output and/or cost savings. It really is an exciting time for VFX workflows!

Ongoing performance improvements to the computing infrastructure will continue to accelerate and democratize the highest-resolution workflows. Now more than ever, small shops and independents can access the computing power, tools and techniques that were previously available only to top-end studios. Additionally, virtualization techniques will allow flexible means to maximize the utilization and proliferation of workstation technology.

 

Carl Flygare, manager, Quadro Marketing, PNY
PNY provides tools for realtime raytracing, augmented reality and virtual reality with the goal of advancing VFX workflow creativity and productivity. PNY is Nvidia’s Quadro channel partner throughout North America, Latin America, Europe and India.

How will realtime raytracing play a role in workflows?
Budgets are getting tighter, timelines are contracting, and audience expectations are increasing. This sounds like a perfect storm, in the bad sense of the term, but with the right tools, it is actually an opportunity.

Realtime raytracing, based on Nvidia’s RTX technology and support from leading ISVs, enables VFX shops to fit into these new realities while delivering brilliant work. Whiteboarding a VFX workflow is a complex task, so let’s break it down by categories. In preproduction, specifically previz, realtime raytracing will let VFX artists present far more realistic and compelling concepts much earlier in the creative process than ever before.

This extends to the next phase, asset creation and character animation, in which models can incorporate essentially lifelike nuance, including fur, cloth, hair or feathers – or something else altogether! Shot layout, blocking, animation, simulation, lighting and, of course, rendering all benefit from additional iterations, nuanced design and the creative possibilities that realtime raytracing can express and realize. Even finishing, particularly compositing, can benefit. Given the applicable scope of realtime raytracing, it will essentially remake VFX workflows and overall film pipelines, and Quadro RTX series products are the go-to tools enabling this revolution.

How are game engines changing how VFX is done? Is this for everyone or just a select few?
Variety had a great article on this last May. ILM substituted realtime rendering and five 4K laser projectors for a greenscreen shot during a sequence from Solo: A Star Wars Story. This allowed the actors to perform in context — in this case, a hyperspace jump — but also allowed cinematographers to capture arresting reflections of the jump effect in the actors’ eyes. Think of it as “practical digital effects” created during shots, not added later in post. The benefits are significant enough that the entire VFX ecosystem, from high-end shops and major studios to independent producers, is using realtime production tools to rethink how movies and TV shows happen while extending its vision to realize previously unrealizable concepts or projects.

Project Sol

How do ML and AR play a role in your tool? And are you supporting OpenXR 1.0? What about Pixar’s USD?
Those are three separate but somewhat interrelated questions! ML (machine learning) and AI (artificial intelligence) can contribute by rapidly denoising raytraced images in far less time than would be required by letting a given raytracing algorithm run to conclusion. Nvidia enables AI denoising in OptiX 5.0 and is working with a broad array of leading ISVs to bring ML/AI-enhanced realtime raytracing techniques into the mainstream.

OpenXR 1.0 was released at Siggraph 2019. Nvidia (among others) is supporting this open, royalty-free and cross-platform standard for VR/AR. Nvidia is now providing VR enhancing technologies, such as variable rate shading, content adaptive shading and foveated rendering (among others), with the launch of Quadro RTX. This provides access to the best of both worlds — open standards and the most advanced GPU platform on which to build actual implementations.

Pixar and Nvidia have collaborated to make Pixar’s USD (Universal Scene Description) and Nvidia’s complementary MDL (Materials Definition Language) software open source in an effort to catalyze the rapid development of cinematic quality realtime raytracing for M&E applications.

Project Sol

What is next on the horizon for VFX?
VFX professionals, and audiences, have an insatiable desire to explore edge-of-the-envelope VFX. To satisfy it, the industry will increasingly turn to realtime raytracing based on the actual behavior of light and real materials, to increasingly sophisticated shader technology and to new mediums like VR and AR, opening up new creative possibilities and entertainment experiences.

AI, specifically DNNs (deep neural networks) of various types, will automate many repetitive VFX workflow tasks, allowing creative visionaries and artists to focus on realizing formerly impossible digital storytelling techniques.

One obvious need is increasing the resolution at which VFX shots are rendered. We’re in a 4K world, but many films are finished at 2K, primarily because of the VFX. 8K is unleashing the abilities (and changing the economics) of cinematography, so expect increasingly powerful realtime rendering solutions, such as Quadro RTX (and successor products when they come to market), along with amazing advances in AI, to allow the VFX community to innovate in tandem.

 

Chris Healer, CEO/CTO/VFX supervisor, The Molecule 
Founded in 2005, The Molecule creates bespoke VFX imagery for clients worldwide. Over 80 artists, producers, technicians and administrative support staff collaborate at its New York City and Los Angeles studios.

What film or show inspired you to work in VFX?
I have to admit, The Matrix was a big one for me.

Are game engines affecting how you work or how you will work?
Game engines are coming, but the talent pool is a challenge and the bridge is hard to cross … a realtime artist doesn’t have the same mindset as a traditional VFX artist. And the last small percentage of completion on a shot can wipe out any gains made by working in a game engine.

What about realtime raytracing?
I am amazed at this technology, and as a result bought stock in Nvidia, but the software has to get there. It’s a long game, for sure!

How have AR/VR and ML/AI affected your workflows?
I think artists are thinking more about how images work and how to generate them. There is still value in a plain-old four-cornered 16:9 rectangle that you can make the most beautiful image inside of.

AR, VR, ML, etc., are not that, to be sure. I think VR got skipped over in all the hype. There’s way more to explore in VR, and that will inform AR tremendously. It is going to take a few more turns to find a real home for all this.

What trends have you been seeing? Cloud workflows? What else?
Everyone is rendering in the cloud. The biggest problem I see now is the lack of a usage-based licensing (UBL) model that is global enough to democratize it. I would love to be able to render while paying by the second or minute, at large or small scales. I would love for Houdini or Arnold to be rentable on a Satoshi level … that would be awesome! Unfortunately, each software vendor needs to provide this, which is a lot to organize.
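
As a back-of-the-envelope illustration of the pay-per-second model Healer describes (all rates and durations here are made up), usage-based licensing boils down to simple arithmetic:

```python
# Rough sketch of usage-based licensing (UBL): pay for a renderer license
# only for the seconds it actually runs. All numbers below are assumptions.
def ubl_cost(license_seconds_per_node, nodes, rate_per_node_second):
    return license_seconds_per_node * nodes * rate_per_node_second

# 500 frames x 20 minutes each, spread across 100 cloud nodes,
# at an assumed $0.0005 per node-second of license time.
frames, secs_per_frame, nodes = 500, 20 * 60, 100
seconds_per_node = frames * secs_per_frame / nodes   # wall-clock seconds each node runs

print(ubl_cost(seconds_per_node, nodes, 0.0005))     # 300.0 -> $300 of license usage
```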

The Uncanny Valley. Where are we now?
We saw in the recent Avengers film that Mark Ruffalo was in it. Or was he? I totally respect the Uncanny Valley, but within the complexity and context of VFX, this is not my battle. Others have to sort this one out, and I commend the artists who are working on it. Deepfake and Deeptake are amazing.

Can you name some recent projects?
We worked on Fosse/Verdon, but more recent stuff, I can’t … sorry. Let’s just say I have a lot of processors running right now.

 

Matt Bach and William George, lab technicians, Puget Systems 
Puget Systems specializes in high-performance custom-built computers — emphasizing each customer’s specific workflow.

Matt Bach

William George

What are some of today’s VFX trends?
Matt Bach: There are so many advances going on right now that it is really hard to identify specific trends. However, one of the most interesting to us is the back and forth between local and cloud rendering.

Cloud rendering has been progressing for quite a few years and is a great way to get a burst of rendering performance when you are in a crunch. However, there have been big improvements in GPU-based rendering with technology like Nvidia’s OptiX. Because of these, you no longer have to spend a fortune to have a local render farm, and even a relatively small investment in hardware can often move the production bottleneck away from rendering to other parts of the workflow. Of course, this technology should make its way to the cloud at some point, but as long as these types of advances keep happening, the cloud is going to continue playing catch-up.

A few other things we are keeping our eyes on are the growing use of game engines, motion capture suits and realtime markerless facial tracking in VFX pipelines.

Realtime raytracing is becoming more prevalent in VFX. What impact does realtime raytracing have on system hardware, and what do VFX artists need to be thinking about when optimizing their systems?
William George: Most realtime raytracing requires specialized computer hardware, specifically video cards with dedicated raytracing functionality. Raytracing can be done on the CPU and/or normal video cards as well, which is what render engines have done for years, but not quickly enough for realtime applications. Nvidia is the only game in town at the moment for hardware raytracing on video cards with its RTX series.

Nvidia’s raytracing technology is available on its consumer (GeForce) and professional (Quadro) RTX lines, but which one to use depends on your specific needs. Quadro cards are specifically made for this kind of work, with higher reliability and more VRAM, which allows for the rendering of more complex scenes … but they also cost a lot more. GeForce, on the other hand, is more geared toward consumer markets, but the “bang for your buck” is incredibly high, allowing you to get several times the performance for the same cost.

In between these two is the Titan RTX, which offers very good performance and VRAM for its price, but due to its fan layout, it should only be used as a single card (or at most in pairs, if used in a computer chassis with lots of airflow).

Another thing to consider is that if you plan on using multiple GPUs (which is often the case for rendering), the size of the computer chassis itself has to be fairly large in order to fit all the cards, power supply, and additional cooling needed to keep everything going.
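
Since VRAM is the gating factor George points to for scene complexity, a quick sanity check of what is actually installed can be scripted. The minimal Python sketch below assumes a machine with an Nvidia driver and nvidia-smi on the PATH.

```python
# Quick check of installed GPUs and their VRAM (the practical limit on scene
# complexity in GPU rendering). Assumes an Nvidia driver with nvidia-smi on PATH.
import subprocess

def list_gpus():
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.strip() for line in out.splitlines() if line.strip()]

for gpu in list_gpus():
    print(gpu)   # e.g. "Quadro RTX 6000, 24576 MiB"
```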

How are game engines changing or impacting VFX workflows?
Bach: Game engines have been used for previsualization for a while, but we are starting to see them being used further and further down the VFX pipeline. In fact, there are already several instances where renders directly captured from game engines, like Unity or Unreal, are being used in the final film or animation.

This is getting into speculation, but I believe that as the quality of what game engines can produce continues to improve, it is going to drastically shake up VFX workflows. The fact that you can make changes in real time, as well as use motion capture and facial tracking, is going to dramatically reduce the amount of time necessary to produce a highly polished final product. Game engines likely won’t completely replace more traditional rendering for quite a while (if ever), but it is going to be significant enough that I would encourage VFX artists to at least familiarize themselves with the popular engines like Unity or Unreal.

What impact do you see ML/AI and AR/VR playing for your customers?
We are seeing a lot of work being done for machine learning and AI, but a lot of it is still on the development side of things. We are starting to get a taste of what is possible with things like Deepfakes, but there is still so much that could be done. I think it is too early to really tell how this will affect VFX in the long term, but it is going to be exciting to see.

AR and VR are cool technologies, but it seems like they have yet to really take off, in part because designing for them takes a different way of thinking than traditional media, but also in part because there isn’t one major platform that’s an overwhelming standard. Hopefully, that is something that gets addressed over time, because once creative folks really get a handle on how to use the unique capabilities of AR/VR to their fullest, I think a lot of neat stories will be told.

What is next on the horizon for VFX?
Bach: The sky is really the limit due to how fast technology and techniques are changing, but I think there are two things in particular that are going to be very interesting to see how they play out.

First, we are hitting a point where ethics (“With great power comes great responsibility” and all that) is a serious concern. With how easy it is to create highly convincing Deepfakes of celebrities or other individuals, even for someone who has never used machine learning before, I believe there is potential for backlash from the general public. At the moment, every use of this type of technology has been for entertainment or other legitimate purposes, but the potential to use it for harm is too significant to ignore.

Something else I believe we will start to see is “VFX for the masses,” similar to how video editing used to be a purely specialized skill, but now anyone with a camera can create and produce content on social platforms like YouTube. Advances in game engines, facial/body tracking for animated characters and other technologies that remove a number of skills and hardware barriers for relatively simple content are going to mean that more and more people with no formal training will take on simple VFX work. This isn’t going to impact the professional VFX industry by a significant degree, but I think it might spawn a number of interesting techniques or styles that might make their way up to the professional level.

 

Paul Ghezzo, creative director, Technicolor Visual Effects
Technicolor and its family of VFX brands provide visual effects services tailored to each project’s needs.

What film inspired you to work in VFX?
At a pretty young age, I fell in love with Star Wars: Episode IV – A New Hope and learned about the movie magic that was developed to make those incredible visuals come to life.

What trends have you been seeing? USD? Rendering in the cloud? What do you feel is important?
USD will help structure some of what we currently do, and cloud rendering is an incredible source to use when needed. I see both of them maturing and being around for years to come.

As for other trends, I see new methods of photogrammetry and HDRI photography/videography providing datasets for digital environment creation and capturing lighting content; performance capture (smart 2D tracking and manipulation or 3D volumetric capture) for ease of performance manipulation or layout; and even post camera work. New simulation engines are creating incredible and dynamic sims in a fraction of the time, and all of this coming together through video cards streamlining the creation of the end product. In many ways it might reinvent what can be done, but it might take a few cutting-edge shows to embrace and perfect the recipe and show its true value.

Production cameras tethered to digital environments for live set extensions are also coming of age, and with realtime rendering becoming a viable option, I can imagine it will only be a matter of time before LED walls become the new greenscreen. Can you imagine a live-action set extension that parallaxes, distorts and is exposed in the same way as its real-life foreground? How about adding explosions, bullet hits or even an armada of spaceships landing in the BG, all on cue? I imagine this will happen in short order. Exciting times.

Are game engines affecting how you work or how you will work in the future?
Game engines have affected how we work. The speed and quality that they offer is undoubtedly a game changer, but they don’t always create the desired elements and AOVs that are typically needed in TV/film production.

They are also creating a level of competition that is spurring other render engines to be competitive and provide a similar or better solution. I can imagine that our future will use Unreal/Unity engines for fast turnaround productions like previz and stylized content, as well as for visualizing virtual environments and digital sets as realtime set extensions and a lot more.

Snowfall

What about realtime raytracing? How will that affect VFX and the way you work?
GPU rendering has single-handedly changed how we render and what we render with. A handful of GPUs and a GPU-accelerated render engine can equal or surpass a CPU farm that’s several times larger and much more expensive. In VFX, iterations equal quality, and if multiple iterations can be completed in a fraction of the time — and with production time usually being finite — then GPU-accelerated rendering equates to higher quality in the time given.

There are a lot of hidden variables in that equation (changes of direction, level of talent, work ethic, hardware/software limitations, etc.), but simply put, if you can hit the notes as fast as they are given and don’t have to wait hours for a render farm to churn out a product, then the faster each iteration can be delivered, the more iterations can be produced, allowing for a higher-quality product in the time given.
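
Ghezzo’s point reduces to simple arithmetic: with a fixed production window, shorter render times buy more iterations. The numbers in the sketch below are assumed, purely for illustration.

```python
# The iteration argument as arithmetic, with assumed (illustrative) times.
def iterations(window_hours, render_hours_per_iteration, artist_hours_per_iteration=1.0):
    """How many note/render cycles fit in a fixed window."""
    per_iteration = render_hours_per_iteration + artist_hours_per_iteration
    return int(window_hours // per_iteration)

window = 40.0  # one working week available for a shot
print(iterations(window, render_hours_per_iteration=6.0))   # slow farm:  5 iterations
print(iterations(window, render_hours_per_iteration=0.75))  # fast GPUs: 22 iterations
```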

How have AR or ML affected your workflows, if at all?
ML and AR haven’t significantly affected our current workflows yet … but I believe they will very soon.

One aspect of AR/VR/MR that we occasionally use in TV/film production is previzing environments, props and vehicles, which lets everyone in production and on set/location see what the greenscreen will be replaced with and allows for greater communication and understanding with the directors, DPs, gaffers, stunt teams, SFX and talent. I can imagine that AR/VR/MR will only become more popular as a preproduction tool, allowing productions to front-load and approve all aspects of production well before the camera is loaded and the clock is running on cast and crew.

Machine learning is on the cusp of general usage, but it currently seems to be used by productions with lengthy schedules that can benefit from development teams building those toolsets. There are tasks that ML will undoubtedly revolutionize, but it hasn’t affected our workflows yet.

The Uncanny Valley. Where are we now?
Making the impossible possible … that is what we do in VFX. Looking at everything from Digital Emily in 2008 to Thanos and Hulk in Avengers: Endgame, we’ve seen what can be done. The Uncanny Valley will likely remain, but only on productions that can’t afford the time or cost of flawless execution.

Can you name some recent projects?
Big Little Lies, Dead to Me, NOS4A2, True Detective, Veep, This Is Us, Snowfall, The Loudest Voice, and Avengers: Endgame.

 

James Knight, virtual production director, AMD 
AMD is a semiconductor company that develops computer processors and related technologies for M&E as well as other markets. Its tools include Ryzen and Threadripper.

What are some of today’s VFX trends?
Well, certainly the exploration for “better, faster, cheaper” keeps going. Faster rendering, so our community can accomplish more iterations in a much shorter amount of time, seems to be something I’ve heard the whole time I’ve been in the business.

I’d surely say the virtual production movement (or on-set visualization) is finally gaining steam. I work with almost all the major studios in my role, and all of them, at a minimum, have on their radar the ability to speed up post and blend it with production; many have virtual production departments.

How are game engines changing how VFX are done? Is this for everyone or just a select few?
I would say game engines are where most of the innovation comes from these days. Think about Unreal, for example. Epic created Fortnite, the revenue from that must be astonishing, and they’re not going to sit on their hands. The feature film and TV post/VFX business benefits from gaming consumers’ demand for higher-resolution, more photorealistic images in real time. That gets passed on to our community by eliminating guesswork on set when framing partially or completely CG shots.

It should be for everyone or most, because the realtime and post production time savings are rather large. I think many still have a personal preference for what they’re used to. And that’s not wrong, if it works for them, obviously that’s fine. I just think that even in 2019, use of game engines is still new to some … which is why it’s not completely ubiquitous.

How do ML or AR play a role in your tool? Are you supporting OpenXR 1.0? What about Pixar’s USD?
Well, it’s more the reverse. With our new Rome and Threadripper CPUs, we’re powering AR. Yes, we are supporting OpenXR 1.0.

What is next on the horizon for VFX?
Well, the demand for VFX is increasing, not the opposite, so the pursuit of faster photographic reality is perpetually in play. That’s good job security for me at a CPU/GPU company, as we still have a way to go to properly bridge the Uncanny Valley, for example.

I’d love to say lower-cost CG is part of the future, but then look at the budgets of major features — they’re not exactly falling. The dance of Moore’s law will more than likely be in effect forever, with momentary huge leaps in compute power — like with Rome and Threadripper — inspiring amazement for a period. Then, when someone sees the new, expanded size of their sandpit, they fill it and say, “I now know what I’d do if it was just a bit bigger.”

I am invested in and fascinated by the future of VFX, but I think it goes hand in hand with great storytelling. If we don’t have great stories, then directing and artistry innovations don’t properly get noticed. Look at the top 20 highest-grossing films in history … they’re all fantasy. We all want to be taken away from our daily lives and immersed in a beautiful, realistic, VFX-intense fictional world for 90 minutes, so we’ll be forever pushing the boundaries of rigging, texturing, shading, simulations, etc. To put my finger on exactly what’s next: I happen to know of a few amazing things that are coming, but sadly, I’m not at liberty to say right now.

 

Michel Suissa, managing director of pro solutions, The Studio-B&H 
The Studio-B&H provides hands-on experience to high-end professionals. Its Technology Center is a fully operational studio with an extensive display of high-end products and state-of-the-art workflows.

What are some of today’s VFX trends?
AI, ML, NN (GAN) and realtime environments

Will realtime raytracing play a role in how the tools you provide work?
It already does with most relevant applications in the market.

How are game engines changing how VFX are done? Is this for everyone or just a select few?
Realtime game engines are becoming more mainstream with every passing year, and they are becoming fairly accessible to a number of disciplines across different target markets.

What is next on the horizon for VFX?
New pipeline architectures that will rely on different implementations (traditional and AI/ML/NN) and mixed infrastructures (local and cloud-based).

What trends have you been seeing? USD? Rendering in the cloud? What do you feel is important?
AI, ML and realtime environments. New cloud toolsets. Prominence of neural networks and GANs. Proliferation of convincing “deepfakes” as a proof of concept for the use of generative networks as resources for VFX creation.

What about realtime raytracing? How will that affect VFX workflows?
RTX is changing how most people see their work being done. It is also changing expectations about what it takes to create and render CG images.



The Uncanny Valley. Where are we now?
AI and machine learning will help us get there. Perfection remains too costly: the amount of time and resources required to create something convincing is prohibitive for the large majority of budgets.

 

Marc Côté, CEO, Real by Fake 
Real by Fake services include preproduction planning, visual effects, post production and tax-incentive financing.

What film or show inspired you to work in VFX?
George Lucas’ Star Wars and Indiana Jones (Raiders of the Lost Ark). For Star Wars, I was a kid and I saw this movie. It brought me to another universe. Star Wars was so inspiring even though I was too young to understand what the movie was about. The robots in the desert and the spaceships flying around. It looked real; it looked great. I was like, “Wow, this is amazing.”

Indiana Jones because it was a great adventure; we really visit the worlds. I was super-impressed by the action, by the way it was done. It was mostly practical effects, not really visual effects. Later on I realized that in Star Wars, they were using robots (motion control systems) to shoot the spaceships. And as a kid, I was very interested in robots. And I said, “Wow, this is great!” So I thought maybe I could use my skills and what I love and combine it with film. So that’s the way it started.

What trends have you been seeing? What do you feel is important?
The trend right now is using realtime rendering engines. It’s coming on pretty strong. The game companies who build engines like Unity or Unreal are offering a good product.

It’s a bit of a hack to use these tools for rendering or in production at this point. They’re great for previz, and they’re great for generating realtime environments and realtime playback. But having the capacity to change or modify imagery with the director during the process of finishing is still not easy. Still, it’s a very promising trend.

Rendering in the cloud gives you a very rapid capacity, but I think it’s very expensive. You also have to download and upload 4K images, so you need a very big internet pipe. So I still believe in local rendering — either with CPUs or GPUs. But cloud rendering can be useful for very tight deadlines or for small companies that want to achieve something that’s impossible to do with the infrastructure they have.
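
The “very big internet pipe” point is easy to quantify. The sketch below uses an assumed frame size of roughly 50MB for a 4K EXR (real sizes vary widely with compression and channel count) to show how shot transfers scale with link speed.

```python
# Rough arithmetic behind the "very big internet pipe" point. The ~50 MB
# per 4K EXR frame is an assumption; real sizes vary with compression/AOVs.
def transfer_hours(frames, frame_mb, link_mbps):
    """Hours to move a frame sequence one way over a link of a given speed."""
    total_megabits = frames * frame_mb * 8
    return total_megabits / link_mbps / 3600

shot = 24 * 10                         # a 10-second shot at 24fps = 240 frames
print(transfer_hours(shot, 50, 50))    # ~0.53 h each way on a 50 Mbps line
print(transfer_hours(shot, 50, 1000))  # ~0.03 h (under two minutes) on gigabit
```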

My hope is that AI will minimize repetition in visual effects. For example, in keying. We key multiple sections of the body, but we get keying errors in plotting or transparency or in the edges, and they are all a bit different, so you have to use multiple keys. AI would be useful to define which key you need to use for every section and do it automatically and in parallel. AI could be an amazing tool to be able to make objects disappear by just selecting them.

Pixar’s USD is interesting. The question is: Will the industry take it as a standard? It’s like anything else. Kodak invented DPX, and it became the standard over time. Now we are using EXR. We have different software packages, and having an exchange format between them will be great. We’ll see. We have FBX, which is a really good standard right now; it came out of FiLMBOX, built by Kaydara, a Montreal company whose technology was later acquired by Autodesk. So we’ll see. The demand and the companies who build the software will decide whether it’s taken up or not. A big company like Pixar has the advantage that other companies will adopt what it uses.

The last trend is remote access. The internet is now allowing us to connect cross-country, like from LA to Montreal or Atlanta. We have a sophisticated remote infrastructure, and we do very high-quality remote sessions with artists who work from disparate locations. It’s very secure and very seamless.

What about realtime raytracing? How will that affect VFX and the way you work?
I think we have pretty good raytracing compared to what we had two years ago. Now it’s a question of performance, and of making it user-friendly in the application so it’s easy to light with natural lighting, without having to fake the bounces to get two or three bounces of light. I think it’s coming along very well and quickly.

Sharp Objects

So what about things like AI/ML or AR/VR? Have those things changed anything in the way movies and TV shows are being made?
My feeling right now is that we are getting into an era where I don’t think you’ll have enough visual effects companies to cover the demand.

Every show has visual effects. It can be a complete character, like a Transformer, or a movie from the Marvel Universe where the entire film is CG. Or it can be the huge number of invisible effects that are starting to appear in virtually every show. You need capacity to get all this done.

AI can help minimize repetition so artists can spend more time on the art and what is being created. This will accelerate things and give us the capacity to respond to what’s being demanded of us. Clients want a faster, cheaper product, and they want the quality to be as high as a movie’s.

The only scenario where we are looking at using AR is when we are filming. For example, you need a good camera track in real time, and then you want to be able to quickly add a CGI environment around the actors so the director can make the right decision about the background or the interactive characters in the scene. The actors won’t see it unless they have a monitor or a pair of glasses or something that can show them the result.

So AR is a tool for making faster decisions when you’re on set shooting. This is what we’ve been working on for a long time: bringing post production and preproduction together, with an engineering department that designs, conceptualizes and creates everything that needs to be done before shooting.

The Uncanny Valley. Where are we now?
In terms of the environment, I think we’re pretty much there. We can create an environment that nobody will know is fake. Respectfully, I think our company Real by Fake is pretty good at doing it.

In terms of characters, I think we’re still not there. I think the game industry is helping a lot to push this. I think we’re on the verge of having characters look as close as possible to live actors, but if you’re in a closeup, it still feels fake. For mid-ground and long shots, it’s fine. You can make sure nobody will know. But I don’t think we’ve crossed the valley just yet.

Can you name some recent projects?
Big Little Lies and Sharp Objects for HBO, Black Summer for Netflix and Brian Banks, an indie feature.

 

Jeremy Smith, CTO, Jellyfish Pictures
Jellyfish Pictures provides a range of services, including VFX for feature films, high-end TV and episodic animated kids’ TV series, as well as visual development for projects spanning multiple genres.

What film or show inspired you to work in VFX?
Forrest Gump really opened my eyes to how VFX could support filmmaking. Seeing Tom Hanks interact with historic footage (e.g., John F. Kennedy) was something that really grabbed my attention, and I remember thinking, “Wow … that is really cool.”

What trends have you been seeing? What do you feel is important?
The use of cloud technology is really empowering “digital transformation” within the animation and VFX industry. The result of this is that there are new opportunities that simply wouldn’t have been possible otherwise.

Jellyfish Pictures uses burst rendering into the cloud, extending our capacity and enabling us to take on more work. In addition to cloud rendering, Jellyfish Pictures were early adopters of virtual workstations, and, especially after Siggraph this year, it is apparent that this is the future for VFX and animation.

Virtual workstations promote a flexible and scalable way of working, with global reach for talent. This is incredibly important for studios to remain competitive in today’s market. As well as the cloud, formats such as USD are making it easier to exchange data with others, allowing us to work in a more collaborative environment.

It’s important for the industry to pay attention to these, and similar, trends, as they will have a massive impact on how productions are carried out going forward.

Are game engines affecting how you work, or how you will work in the future?
Game engines are offering ways to enhance certain parts of the workflow. We see a lot of value in the previz stage of the production. This allows artists to iterate very quickly and helps move shots onto the next stage of production.

What about realtime raytracing? How will that affect VFX and the way you work?
The realtime raytracing from Nvidia (as well as GPU compute in general) offers artists a new way to iterate and helps them create content. However, with recent advancements in CPU compute, we can see that “traditional” workloads aren’t going to be displaced. The RTX solution is another tool that can be used to assist in the creation of content.

How have AR/VR and ML/AI affected your workflows, if at all?
Machine learning has the power to really assist certain workloads. For example, it’s possible to use machine learning to assist a video editor by cataloging speech in a certain clip. When a director says, “find the spot where the actor says ‘X,’” we can go directly to that point in time on the timeline.

In addition, ML can be used to mine existing file servers that contain vast amounts of unstructured data. When mining this “dark data,” an organization may find a lot of great additional value in the existing content, which machine learning can uncover.
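As a rough sketch of the speech-cataloging idea above, the snippet below transcribes a clip and returns the point on the timeline where a given phrase is spoken. It assumes the open-source Whisper library purely for illustration; it is not a description of Jellyfish Pictures’ actual tooling.

```python
# Minimal sketch: transcribe a clip, then locate a spoken phrase on the timeline.
# Assumes the open-source openai-whisper package; illustrative only.
import whisper

def find_phrase(clip_path: str, phrase: str):
    model = whisper.load_model("base")       # small general-purpose speech model
    result = model.transcribe(clip_path)     # returns full text plus timed segments
    for seg in result["segments"]:
        if phrase.lower() in seg["text"].lower():
            return seg["start"], seg["end"]  # seconds from the head of the clip
    return None

# Hypothetical clip name and phrase, just to show the call pattern.
hit = find_phrase("interview_take3.wav", "find the spot")
if hit:
    print(f"Phrase starts at {hit[0]:.1f}s and ends at {hit[1]:.1f}s")
```

A real editorial integration would map those seconds back to sequence timecode, but the principle is the same: once speech is indexed, “find the spot where the actor says X” becomes a text search.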

The Uncanny Valley. Where are we now?
With recent advancements in technology, the Uncanny Valley is closing; however, it is still there. We see more digital humans in cinema than ever before (Peter Cushing in Rogue One: A Star Wars Story was a main character), and I fully expect to see more advances as time goes on.

Can you name some recent projects?
Our latest credits include Solo: A Star Wars Story, Captive State, The Innocents, Black Mirror, Dennis & Gnasher: Unleashed! and Floogals Seasons 1 through 3.

 

Andy Brown, creative director, Jogger 
Jogger Studios is a boutique visual effects studio with offices in London, New York and LA. With capabilities in color grading, compositing and animation, Jogger works on a variety of projects, from TV commercials and music videos to projections for live concerts.

What inspired you to work in VFX?
First of all, my sixth form English project was writing treatments for music videos to songs that I really liked. You could do anything you wanted to for this project, and I wanted to create pictures using words. I never actually made any of them, but it planted the seed of working with visual images. Soon after that I went to university in Birmingham in the UK. I studied communications and cultural studies there, and as part of the course, we visited the BBC Studios at Pebble Mill. We visited one of the new edit suites, where they were putting together a story on the inquiry into the Handsworth riots in Birmingham. It struck me how these two people, the journalist and the editor, could shape the story and tell it however they saw fit. That’s what got me interested on a critical level in the editorial process. The practical interest in putting pictures together developed from that experience and all the opportunities that opened up when I started work at MPC after leaving university.

What trends have you been seeing? What do you feel is important?
Remote workstations and cloud rendering are all really interesting. It’s giving us more opportunities to work with clients across the world using our resources in LA, SF, Austin, NYC and London. I love the concept of a centralized remote machine room that runs all of your software for all of your offices and allows you scaled rendering in an efficient and seamless manner. The key part of that sentence is seamless. We’re doing remote grading and editing across our offices so we can share resources and personnel, giving the clients the best experience that we can without the carbon footprint.

Are game engines affecting how you work or how you will work in the future?
Game engines are having a tremendous effect on the entire media and entertainment industry, from conception to delivery. Walking around Siggraph last month, seeing what was not only possible but practical and available today using gaming engines, was fascinating. It’s hard to predict industry trends, but the technology felt like it will change everything. The possibilities on set look great, too, so I’m sure it will mean a merging of production and post production in many instances.

What about realtime raytracing? How will that affect VFX and the way you work?
Faster workflows and less time waiting for something to render have got to be good news. It gives you more time to experiment and refine things.

Chico for Wendy’s

How have AR/VR or ML/AI affected your workflows, if at all?
Machine learning is making its way into new software releases, and the tools are useful. Anything that makes it easier to get where you need to go on a shot is welcome. AR, not so much. I viewed the new Mac Pro sitting on my kitchen work surface through my phone the other day, but it didn’t make me want to buy it any more or less. It feels more like something that we can take technology from rather than something that I want to see in my work.

I’d like 3D camera tracking and facial tracking to be realtime on my box, for example. That would be a huge time-saver in set extensions and beauty work. Anything that makes getting a perfect key easier would also be great.

The Uncanny Valley. Where are we now?
It always used to be “Don’t believe anything you read.” Now it’s, “Don’t believe anything you see.” I used to struggle to see the point of an artificial human, except for resurrecting dead actors, but now I realize the ultimate aim is suppression of the human race and the destruction of democracy by multimillionaire despots and their robot underlings.

Can you name some recent projects?
I’ve started prepping for the apocalypse, so it’s hard to remember individual jobs, but there’s been the usual kind of stuff — beauty, set extensions, fast food, Muppets, greenscreen, squirrels, adding logos, removing logos, titles, grading, finishing, versioning, removing rigs, Frankensteining, animating, removing weeds, cleaning runways, making tenders into wings, split screens, roto, grading, polishing cars, removing camera reflections, stabilizing, tracking, adding seatbelts, moving seatbelts, adding photos, removing pictures and building petrol stations. You know, the usual.

 

James David Hattin, founder/creative director, VFX Legion 
Based in Burbank and British Columbia, VFX Legion specializes in providing episodic shows and feature films with an efficient approach to creating high-quality visual effects.

What film or show inspired you to work in VFX?
Star Wars was my ultimate source of inspiration for doing visual effects. Many of the effects in the movies didn’t make sense to me as a six-year-old, but I knew that this was the next best thing to magic. Visual effects create a wondrous world where everyday people can become superheroes, leaders of a resistance or rulers of a 5th-century dynasty. Watching X-wings flying over the surface of a space station the size of a small moon was exquisite. I also learned, much later on, that the visual effects we couldn’t see were as important as what we could see.

I had already been steeped in visual effects with Star Trek — phasers, spaceships and futuristic transporters. Models held from wires on a moon base convinced me that we could survive on the moon as it broke free from orbit. All of this fueled my budding imagination. Exploring computer technology and creating alternate realities, CGI and digitally enhanced solutions have been my passion for over a quarter of a century.

What trends have you been seeing? What do you feel is important?
More and more of the work is going to happen inside a cloud structure. That is definitely something being pushed very heavily by the tech giants, like Google and Amazon, that rule our world. There is no Moore’s law for computers anymore; the price and power we see out of computers are almost plateauing. The focus now is on optimizing algorithms or rendering with video cards. It’s about getting bigger, better effects out more efficiently. Some companies are opting to run their entire operations in the cloud or in co-located server locations. This can theoretically free up workers to be in different locations around the world, provided they have solid, low-latency, high-speed internet.

When Legion was founded in 2013, the best way around cloud costs was to have on-premises servers and workstations that supported global connectivity. It was a cost control issue that has benefitted the company to this day, enabling us to bring a global collective of artists and clients into our fold in a controlled and secure way. Legion works in what we consider a “private cloud,” eschewing the costs of egress from large providers and working directly with on-premises solutions.

Are game engines affecting how you work or how you will work in the future?
Game engines are perfect for previsualization in large, involved scenes. We create a lot of environments and invisible effects. For the larger bluescreen shoots, we can build out our sets in Unreal Engine, previsualizing how the scene will play for the director or DP. This helps get everyone on the same page when it comes to how a particular sequence is going to be filmed. It’s a technique that also helps the CG team focus on adding details to the areas of a set that we know will be seen. When the schedule is tight, the assets are camera-ready by the time the cut comes to us.

What about realtime raytracing via Nvidia’s RTX? How will that affect VFX and the way you work?
The type of visual effects that we create for feature films and television shows involves a lot of layers and technology that provides efficient, comprehensive compositing solutions. Many of the video card rendering engines, like OctaneRender, Redshift and V-Ray RT, are limited when it comes to what they can create with layers. They often have issues with getting what is called a “back to beauty,” in which the sum of the render passes equals the final render. However, the workarounds we’ve developed enable us to achieve the quality we need. Realtime raytracing is a fantastic technology that will someday be an ideal fit for our needs. We’re keeping an eye out for it as it evolves and becomes more robust.

How have AR/VR or ML/AI affected your workflows, if at all?
AR has been in the wings of the industry for a while. There’s nothing specific that we would take advantage of. Machine learning has been introduced a number of times to solve various problems. It’s a pretty exciting time for these things. One of our partner contacts, who left to join Facebook, was keen to try a number of machine learning tricks for a couple of projects that might have come through, but we didn’t get to put it through the test. There’s an enormous amount of power to be had in machine learning, and I think we are going to see big changes over the next five years in that field and how it affects all of post production.

The Uncanny Valley. Where are we now?
Climbing up the other side, not quite at the summit for daily use. As long as the character isn’t a full normal human, it’s almost indistinguishable from reality.

Can you name some recent projects?
We create visual effects on an ongoing basis for a variety of television shows that include How to Get Away with Murder, DC’s Legends of Tomorrow, Madam Secretary and The Food That Built America. Our team is also called upon to craft VFX for a mix of movies, from the groundbreaking feature film Hardcore Henry to recently released films such as Ma, SuperFly and After.

Main Image: Good Morning Football via Chapeau Studios.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Whiskytree experiences growth, upgrades tools

Visual effects and content creation company Whiskytree has gone through a growth spurt that included a substantial increase in staff, a new physical space and new infrastructure.

Providing content for films, television, the Web, apps, games, and VR or AR, Whiskytree’s team of artists, designers and technicians uses applications such as Autodesk Maya, SideFX Houdini, Autodesk Arnold, Gaffer and Foundry Nuke on Linux — along with custom tools — to create computer graphics and visual effects.

To help manage its growth and the increase in data that came with it, Whiskytree recently installed Panasas ActiveStor. The platform is used to store and manage Whiskytree’s computer graphics and visual effects workflows, including data-intensive rendering and realtime collaboration using extremely large data sets for movies, commercials and advertising; work for realtime render engines and games; and augmented reality and virtual reality applications.

“We recently tripled our employee count in a single month while simultaneously finalizing the build-out of our new facility and network infrastructure, all while working on a 700-shot feature film project [The Captain],” says Jonathan Harb, chief executive officer and owner of Whiskytree. “Panasas not only delivered the scalable performance that we required during this critical period, but also delivered a high level of support and expertise. This allowed us to add artists at the rapid pace we needed with an easy-to-work-with solution that didn’t require fine-tuning to maintain and improve our workflow and capacity in an uninterrupted fashion. We literally moved from our old location on a Friday, then began work in our new facility the following Monday morning, with no production downtime. The company’s ‘set it and forget it’ appliance resulted in overall smooth operations, even under the trying circumstances.”

In the past, Whiskytree operated a multi-vendor storage solution that was complex and time consuming to administer, modify and troubleshoot. With the office relocation and rapid team expansion, Whiskytree didn’t have time to build a new custom solution or spend a lot of time tuning. It also needed storage that would grow as project and facility needs change.

Projects from the studio include Thor: Ragnarok, Monster Hunt 2, Bolden, Mother, Star Wars: The Last Jedi, Downsizing, Warcraft and Rogue One: A Star Wars Story.

Tips from a Flame Artist: things to do before embarking on a VFX project

By Andy Brown

I’m creative director and Flame artist at Jogger Studios in Los Angeles. We are a VFX and finishing studio and sister company to Cut+Run, which has offices in LA, New York, London, San Francisco and Austin. As an experienced visual effects artist, I’ve seen a lot in my time in the industry, and not just what ends up on the screen. I’m also an Englishman living in LA.

I was asked to put together some tips to help make your next project a little bit easier, and in the process I remembered many things I had forgotten. I hope these tips help!

1) Talk to production.

2) Trust your producers.

3) Don’t assume anyone (including you) knows anything.

4) Forget about the money; it’s not your job. Well, it’s kind of your job, but in the context of doing the work, it’s not.

5) Read everything that you’ve been sent, then read it again. Make sure you actually understand what is being asked of you.

6) Make a list of questions that cover any uncertainty you might have about any aspect of the project you’re bidding for. Then ask those questions.

7) Ask production to talk to you if they have any questions. It’s better to get interrupted on your weekend off than for the client to ask her friend Bob, who makes videos for YouTube. To be fair to Bob, he might have a million subscribers, but Bob isn’t doing the job, so please, keep Bob out of it.

8) Remember that what the client thinks is “a small amount of cleanup” isn’t necessarily a small amount of cleanup.

9) Bring your experience to the table. Even if it’s your experience in how not to do things.

10) If you can do some tests, then do some tests. Not only will you learn something about how you’re going to approach the problem, but it will show your client that you’re engaged with the project.

11) Ask about the deliverables. How many aspect ratios? How many versions? Then factor in the slated, the unslated and the generics and take a deep breath.

12) Don’t believe that a lift (a cutdown edit) is a lift is a lift. It won’t be a lift.

13) Make sure you have enough hours in your bid for what you’re being asked to do. The hours are more important than the money.

14) Attend the shoot. If you can’t attend the shoot, then send someone to the shoot … someone who knows about VFX. And don’t be afraid to pipe up on the shoot; that’s what you’re there for. Be prepared to make suggestions on set about little things that will make the VFX go more smoothly.

15) Give yourself time. Don’t get too frustrated that you haven’t got everything perfect in the first day.

16) Tackle things methodically.

17) Get organized.

18) Make a list.

19) Those last three were all the same thing, but that’s because it’s important.

20) Try to remember everyone’s names. Write them down. If you can’t remember, ask.

21) Sit up straight.

22) Be positive. You blew that already by being too English.

23) Remember we all want to get the best result that we can.

24) Forget about the money again. It’s not your job.

25) Work hard and don’t get pissed off if someone doesn’t like what you’ve done so far. You’ll get there. You always do.

26) Always send WIPs to the editor. Not only do they appreciate it, but they can add useful info along the way.

27) Double-check the audio.

28) Double-check for black lines at the edges of frame. There’s no cutoff anymore. Everything lives on the internet.

29) Check your spelling. Even if you spelled it right, it might be wrong. Colour. Realise. Etcetera. Etc.

 

Boris FX beefs up film VFX arsenal, buys SilhouetteFX, Digital Film Tools

Boris FX, a provider of integrated VFX and workflow solutions for video and film, has bought SilhouetteFX (SFX) and Digital Film Tools (DFT). The two companies have a long history of developing tools used on Hollywood blockbusters and experience collaborating with top VFX studios, including Weta Digital, Framestore, Technicolor and Deluxe.

This is the third acquisition by Boris FX in recent years — Imagineer Systems (2014) and GenArts (2016) — and builds upon the company’s editing, visual effects, and motion graphics solutions used by post pros working in film and television. Silhouette and Digital Film Tools join Boris FX’s tools Sapphire, Continuum and Mocha Pro.

Silhouette’s groundbreaking non-destructive paint and advanced rotoscoping technology was recognized earlier this year by the Academy of Motion Picture Arts and Sciences with a Technical Achievement Award. It first gained prominence after Weta Digital used the rotoscoping tools on King Kong (2005). Now a full-fledged GPU-accelerated node-based compositing app, Silhouette features over 100 VFX nodes and integrated Boris FX Mocha planar tracking. Over the last 15 years, feature film artists have used Silhouette on films including Avatar (2009), The Hobbit (2012), Wonder Woman (2017), Avengers: Endgame (2019) and Fast & Furious Presents: Hobbs & Shaw (2019).

Avengers: Endgame courtesy of Marvel

Digital Film Tools (DFT) emerged as an offshoot of an LA-based motion picture visual effects facility whose work included hundreds of feature films, commercials and television shows.

The Digital Film Tools portfolio includes standalone applications as well as professional plug-in collections for filmmakers, editors, colorists and photographers. The products offer hundreds of realistic filters for optical camera simulation, specialized lenses, film stocks and grain, lens flares, optical lab processes, color correction, keying and compositing, as well as natural light and photographic effects. DFT plug-ins support Adobe’s Photoshop, Lightroom, After Effects and Premiere Pro; Apple’s Final Cut Pro X and Motion; Avid’s Media Composer; and OFX hosts, including Foundry Nuke and Blackmagic DaVinci Resolve.

“This acquisition is a natural next step to our continued growth strategy and singular focus on delivering the most powerful VFX tools and plug-ins to the content creation market,” says Boris Yamnitsky, CEO/founder of Boris FX. “Silhouette fits perfectly into our product line with superior paint and advanced roto tools that highly complement Mocha’s core strength in planar tracking and object removal. Rotoscoping, paint, digital makeup and stereo conversion are some of the most time-consuming, labor-intensive aspects of feature film post. Sharing technology and tools across all our products will make Silhouette even stronger as the leader in these tasks. Furthermore, we are very excited to be working with such an accomplished team [at DFT] and look forward to collaborating on new product offerings for photography, film and video.”

Silhouette founders Marco Paolini, Paul Miller and Peter Moyer will continue in their current leadership roles and will partner with the Mocha product development team to collaborate on delivering next-generation tools. “By joining forces with Boris FX, we are not only dramatically expanding our team’s capabilities, but we are also joining a group of like-minded film industry pros to provide the best solutions and support to our customers,” says Paolini, product designer. “The Mocha planar tracking option we currently license is extremely popular with Silhouette paint and roto artists, and more recently, through OFX, we’ve added support for Sapphire plug-ins. Working together under the Boris FX umbrella is our next logical step, and we are excited to add new features and continue advancing Silhouette for our user base.”

Both Silhouette and the Digital Film Tools plug-ins will continue to be developed and sold under the Boris FX brand. Silhouette will adopt the Boris FX commitment to agile development, with annual releases, annual support and subscription options.

Main Image: Silhouette

Game of Thrones’ Emmy-nominated visual effects

By Iain Blair

Once upon a time, only glamorous movies could afford the time and money it took to create truly imaginative and spectacular visual effects. Meanwhile, television shows either tried to avoid them altogether or had to rely on hand-me-downs. But the digital revolution changed all that, with technological advances and new tools quickly leveling the playing field. Today, television is giving the movies a run for their money when it comes to sophisticated visual effects, as evidenced by HBO’s blockbuster series Game of Thrones.

Mohsen Mousavi

This fantasy series was recently Emmy-nominated a record-busting 32 times for its eighth and final season — including one for its visually ambitious VFX in the penultimate episode, “The Bells.”

The epic mass destruction presented Scanline’s VFX supervisor, Mohsen Mousavi, and his team with many challenges. But his expertise in high-end visual effects, and his reputation for constant innovation in advanced methodology, made him a perfect fit to oversee Scanline’s VFX for the crucial last three episodes of the final season of Game of Thrones.

Mousavi started his VFX career in the field of artificial intelligence and advanced-physics-based simulations. He spearheaded designing and developing many different proprietary toolsets and pipelines for doing crowd, fluid and rigid body simulation, including FluidIT, BehaveIT and CardIT, a node-based crowd choreography toolset.

Prior to joining Scanline VFX Vancouver, Mousavi rose through the ranks of top visual effects houses, working in jobs that ranged from lead effects technical director to CG supervisor and, ultimately, VFX supervisor. He’s been involved in such high-profile projects as Hugo, The Amazing Spider-Man and Sucker Punch.

In 2012, he began working with Scanline, acting as digital effects supervisor on 300: Rise of an Empire, for which Scanline handled almost 700 water-based sea battle shots. He then served as VFX supervisor on San Andreas, helping develop the company’s proprietary city-generation software. That software and pipeline were further developed and enhanced for scenes of destruction in director Roland Emmerich’s Independence Day: Resurgence. In 2017, he served as the lead VFX supervisor for Scanline on the Warner Bros. shark thriller, The Meg.

I spoke with Mousavi about creating the VFX and their pipeline.

Congratulations on being Emmy-nominated for “The Bells,” which showcased so many impressive VFX. How did all your work on Season 4 prepare you for the big finale?
We were heavily involved in the finale of Season 4; however, the scope was far smaller. What we learned was the collaboration and the nature of the show, what the expectations were in terms of the quality of the work, and what HBO wanted.

You were brought onto the project by lead VFX supervisor Joe Bauer, correct?
Right. Joe was the “client VFX supervisor” on the HBO side and was involved since Season 3. Together with my producer, Marcus Goodwin, we also worked closely with HBO’s lead visual effects producer, Steve Kullback, who I’d worked with before on a different show and in a different capacity. We all had daily sessions and conversations, a lot of back and forth, and Joe would review the entire work, give us feedback and manage everything between us and other vendors, like Weta, Image Engine and Pixomondo. This was done both technically and creatively, so no one stepped on each other’s toes if we were sharing a shot and assets. But it was so well-planned that there wasn’t much overlap.

[Editor’s Note: Here is the full list of those nominated for their VFX work on Game of Thrones — Joe Bauer, lead visual effects supervisor; Steve Kullback, lead visual effects producer; Adam Chazen, visual effects associate producer; Sam Conway, special effects supervisor; Mohsen Mousavi, visual effects supervisor; Martin Hill, visual effects supervisor; Ted Rae, visual effects plate supervisor; Patrick Tiberius Gehlen, previz lead; and Thomas Schelesny, visual effects and animation supervisor.]

What were you tasked with doing on Season 8?
We were involved as one of the lead vendors on the last three episodes and covered a variety of sequences. In Episode 4, “The Last of the Starks,” we worked on the confrontation between Daenerys and Cersei in front of the King’s Landing gate, which included a full CG environment of the city gate and the landscape around it, as well as Missandei’s death sequence, which featured a full CG Missandei. We also did the animated Drogon outside the gate while the negotiations took place.

Then for “The Bells,” we were responsible for most of the Battle of King’s Landing, which included the full digital city, Daenerys’ army camp site outside the walls of King’s Landing, the gathering of soldiers in front of the King’s Landing walls, Dany’s attack on the scorpions, the city gate, the streets and the Red Keep, which had some very close-up set extensions, close-up fire and destruction simulations, and a full CG crowd of various factions — armies and civilians. We also did the iconic Cleganebowl fight between The Hound and The Mountain, and Jaime Lannister’s fight with Euron on the beach underneath the Red Keep. In Episode 5, we received raw animation caches of the dragon from Image Engine and did the full look-dev, lighting and rendering of the final dragon in our composites.

For the final episode, “The Iron Throne,” we were responsible for the entire Daenerys speech sequence, which included a full 360-degree digital environment of the city aftermath and the Red Keep plaza filled with digital Unsullied, Dothraki and CG horses, leading into the majestic confrontation between Jon and Drogon, where the dragon reveals itself from underneath a huge pile of snow outside the Red Keep. We were also responsible for the iconic throne-melt sequence, which included some advanced simulation of highly viscous fluid and destruction of the area around the throne, and we finished the dramatic sequence with Drogon carrying Dany out of the throne room and away from King’s Landing into the unknown.

Where was all this work done?
The majority of the work was done here in Vancouver, which is the biggest Scanline office. Additionally we had teams working in our Munich, Montreal and LA offices. We’re a 100% connected company, all working under the same infrastructure in the same pipeline. So if I work with the team in Munich, it’s like they’re sitting in the next room. That allows us to set up and attack the project with a larger crew and get the benefit of the 24/7 scenario; as we go home, they can continue working, and it makes us far more productive.

How many VFX did you have to create for the final season?
We worked on over 600 shots across the final three episodes, which gave us approximately an hour of screen time of high-end, consistent visual effects.

Isn’t that hour length unusual for 600 shots?
Yes, but we had a number of shots that were really long, including some ground coverage shots of Arya in the streets of King’s Landing that were over four or five minutes long. So we had the complexity along with the long duration.

How many people were on your team?
At the height, we had about 350 artists on the project, and we began in March 2018 and didn’t wrap till nearly the end of April 2019 — so it took us over a year of very intense work.

Tell us about the pipeline specific to Game of Thrones.
Scanline has an industry-wide reputation for delivering very complex, full CG environments combined with complex simulation scenarios of all sort of fluid dynamics and destruction based on our simulation framework “Flowline.” We had a high-end digital character and hero creature pipeline that gave the final three episodes a boost up front. What was new were the additions to our procedural city generation pipeline for the recreation of King’s Landing, making sure it can deliver both in wide angle shots as well as some extreme close-up set extensions.

How did you do that?
We used a framework we developed back on Independence Day: Resurgence, a module-based procedural city-generation system, leveraging some incredible scans of the historic city of Dubrovnik as the blueprint and foundation of King’s Landing. Instead of doing the modeling conventionally, you model a lot of small modules, kind of like Lego blocks. You create various windows, stones, doors, shingles and so on, and once they’re encoded in the system, you can semi-automatically generate variations of buildings on the fly. That also goes for texturing. We had procedurally generated layers of façade textures, which gave us a lot of flexibility in texturing the entire city, with full control over the level of aging and damage. We could easily make a block look older without going back to square one. That’s how we could create King’s Landing with its hundreds of thousands of unique buildings.
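The Lego-block idea is easier to see in miniature. The toy sketch below assembles buildings from a small library of reusable modules with an aging control; the module names and parameters are invented for illustration and have nothing to do with Scanline’s proprietary system.

```python
# Toy illustration of module-based procedural building assembly.
# Module names, ranges and the "age" control are invented for this sketch.
import random

MODULES = {
    "window": ["arched", "shuttered", "narrow"],
    "door": ["plank", "iron-banded"],
    "roof": ["clay_shingle", "slate"],
}

def generate_building(seed: int, age: float) -> dict:
    """Assemble one building from reusable modules.

    age (0 to 1) drives how weathered the facade layers read, so a whole
    block can be made to look older without remodeling anything.
    """
    rng = random.Random(seed)
    return {
        "floors": rng.randint(2, 5),
        "window": rng.choice(MODULES["window"]),
        "door": rng.choice(MODULES["door"]),
        "roof": rng.choice(MODULES["roof"]),
        "weathering": round(age * rng.uniform(0.8, 1.2), 2),
    }

# A district is just many seeded variations; reusing the seeds regenerates
# the identical district, which keeps wide shots and close-ups consistent.
district = [generate_building(seed, age=0.6) for seed in range(1000)]
print(district[0])
```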

The same technology was applied to the aftermath of the city in Episode 6. We took the intact King’s Landing and ran a number of procedural collapsing simulations on the buildings to get the correct weight based on references from the bombed city of Dresden during WWII, and then we added procedurally created CG snow on the entire city.

It didn’t look like the usual matte paintings were used at all.
You’re right, and there were a lot of shots that normally would be done that way, but to Joe’s credit, he wanted to make sure the environments weren’t cheated in any way. That was a big challenge, to keep everything consistent and accurate. Even if we used traditional painting methods, it was all done on top of an accurate 3D representation with correct lighting and composition.

What other tools did you use?
We use Autodesk Maya for all our front-end departments, including modeling, layout, animation, rigging and creature effects, and we bridge the results to Autodesk 3ds Max, which encapsulates our look-dev/FX and rendering departments, powered by Flowline and Chaos Group’s V-Ray as our primary render engine, followed by Foundry’s Nuke as our main compositing package.

At the heart of our crowd pipeline we use Massive, and our creature department is driven by Ziva muscles, a collaboration we started with Ziva Dynamics for the creation of the hero megalodon in The Meg.

Fair to say that your work on Game of Thrones was truly cutting-edge?
Game of Thrones has pushed the limit above and beyond and has effectively erased the TV/feature line. In terms of environment and effects and the creature work, this is what you’d do for a high-end blockbuster for the big screen. No difference at all.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

FilmLight sets speakers for free Color On Stage seminar at IBC

At this year’s IBC, FilmLight will host a free two-day seminar series, Color On Stage, on September 14 and 15. The event features live presentations and discussions with colorists and other creative professionals, covering topics that range from the colorist’s role today to understanding color management and next-generation grading tools.

“Color on Stage offers a good platform to hear about real-world interaction between colorists, directors and cinematographers,” explains Alex Gascoigne, colorist at Technicolor and one of this year’s presenters. “Particularly when it comes to large studio productions, a project can take place over several months and involve a large creative team and complex collaborative workflows. This is a chance to find out about the challenges involved with big shows and demystify some of the more mysterious areas in the post process.”

This year’s IBC program includes colorists from broadcast, film and commercials, as well as DITs, editors, VFX artists and post supervisors.

Program highlights include:
•    Creating the unique look for Mindhunter Season 2
Colorist Eric Weidt will talk about his collaboration with director David Fincher — from defining the workflow to creating the look and feel of Mindhunter. He will break down scenes and run through color grading details of the masterful crime thriller.

•    Realtime collaboration on the world’s longest running continuing drama, ITV Studios’ Coronation Street
The session will address improving production processes and enhancing pictures with efficient renderless workflows, with colorist Stephen Edwards, finishing editor Tom Chittenden and head of post David Williams.

•    Looking to the future: Creating color for the TV series Black Mirror
Colorist Alex Gascoigne of Technicolor will explain the process behind grading Black Mirror, including the interactive episode Bandersnatch and the latest Season 5.

•    Bollywood: A World of Color
This session will delve into the Indian film industry with CV Rao, technical general manager at Annapurna Studios in Hyderabad. In this talk, CV will discuss grading and color as exemplified by the hit film Baahubali 2: The Conclusion.

•    Joining forces: Strengthening VFX and finishing with the BLG workflow
Mathieu Leclercq, head of post at Mikros Image in Paris, will be joined by colorist Sebastian Mingam and VFX supervisor Franck Lambertz to showcase their collaboration on recent projects.

•    Maintaining the DP’s creative looks from set to post
Meet with French DIT Karine Feuillard, ADIT — who worked on the latest Luc Besson film Anna as well as the TV series The Marvelous Mrs Maisel — and FilmLight workflow specialist Matthieu Straub.

•    New color management and creative tools to make multi-delivery easier
The latest and upcoming Baselight developments, including a host of features aimed at simplifying delivery for emerging technologies such as HDR. With FilmLight’s Martin Tlaskal, Daniele Siragusano and Andy Minuth.

Color On Stage will take place in Room D201 on the second floor of the Elicium Centre (Entrance D), close to Hall 13. The event is free to attend, but spaces are limited. Registration is available here.