

Visual Effects in Commercials: Chantix, Verizon

By Karen Moltenbrey

Once too expensive to consider for use in television commercials, visual effects soon found their way into this realm, enlivening and enhancing the spots. Today, countless commercials are using increasingly complex VFX to entertain, to explain and to elevate a message. Here, we examine two very different approaches to using effects in this way. In the Verizon commercial Helping Doctors Fight Cancer, augmented reality is transferred from a holographic medical application and fused into a heartwarming piece thanks to an extremely delicate production process. For the Chantix Turkey Campaign, digital artists took a completely different approach, incorporating a stylized digital spokes-character — with feathers, no less — into various scenes.

Verizon Helping Doctors Fight Cancer

The main goal of television advertisements — whether they are 15, 30 or 60 seconds in length — is to sell a product. Some do it through a direct sales approach. Some by “selling” a lifestyle or brand. And some opt to tell a story. Verizon took the latter approach for a campaign promoting its 5G Ultra Wideband.

Vico Sharabani

For the spot Helping Doctors Fight Cancer, directed by Christian Weber, Verizon adds a human touch to its technology through a compelling story illustrating how its 5G network is being used within a mixed-reality environment so doctors can better treat cancer patients. The 30-second commercial features surgeons and radiologists using high-fidelity holographic 3D anatomical renderings that can be viewed from every angle and even projected onto a person’s body for a more comprehensive examination, while the imagery can potentially be shared remotely in near real time. The augmented-reality application is from Medivis, a start-up medical visualization company that is using Verizon’s next-generation 5G wireless speeds to deliver the high speeds and low latencies necessary for the application’s large datasets and interactive frame rates.

The spot introduces video footage of patients undergoing MRIs and discussion by Medivis cofounder Dr. Osamah Choudhry about how treatment could be radically changed using the technology. Holographic medical imagery is then displayed showing the Medivis AR application being used on a patient.

“McGarryBowen New York, Verizon’s advertising agency, wanted to show the technology in the most accurate and the most realistic way possible. So, we studied the technology,” says Vico Sharabani, founder/COO of The-Artery, which was tasked with the VFX work in the spot. To this end, The-Artery team opted to use as much of the actual holographic content as possible, pulling assets from the Medivis software and fusing them with other broadcast-quality content.

The-Artery is no stranger to augmented reality, virtual reality and mixed reality. Highly experienced in visual effects, Sharabani founded the company to solve business problems within the visual space across all platforms, from films to commercials to branding, and as such, alternate reality and story have been integral to achieving that goal. Nevertheless, the work required for this spot was challenging.

“It’s not just acquiring and melding together 3D assets,” says Sharabani. “The process is complex, and there are different ways to do it — some better than others. And the agency wanted it to be true to the real-life application. This was not something we could just illustrate in a beautiful way; it had to be very technically accurate.”

To this end, much of the holographic imagery consisted of actual 3D assets from the Medivis holographic AR system, captured live. At times, though, The-Artery had to rework the imagery using multiple assets from the Medivis application, and other times the artists re-created the medical imagery in CG.

Initially, the ad agency expected that The-Artery would recreate all the digital assets in CG. But after learning as much as they could about the Medivis system, Sharabani and the team were confident they could export actual data for the spot. “There was much greater value to using actual data when possible, actual CT data,” says Sharabani. “Then you have the most true-to-life representation, which makes the story even more heartfelt. And because we were telling a true story about the capabilities of the network around a real application being used by doctors, any misrepresentation of the human anatomy or scans would hurt the message and intention of the campaign.”

The-Artery began developing a solution with technicians at Medivis to export actual imagery via the HoloLens headset that’s used by the medical staff to view and manipulate the holographic imagery, to coincide with the needs of the commercial. Sometimes this involved merely capturing the screen performance as the HoloLens was being used. Other times the assets from the Medivis system were rendered over a greenscreen without a background and later composited into a scene.

“We have the ability to shoot through the HoloLens, which was our base; we used that as our virtual camera whereby the output of the system is driven by the HoloLens. Every time we would go back to do a capture (if the edit changed or the camera position changed), we had to use the HoloLens as our virtual camera in order to get the proper camera angle,” notes Sharabani. Because the HoloLens is a stereoscopic device, The-Artery always used the right-eye view for the representations, as it most closely reflected the experience of the user wearing the device.

Since the Medivis system is driven by the HoloLens, there is some shakiness present — an artifact the group retained in some of the shots to make it truer to life. “It’s a constant balance of how far we go with realism and at what point it is too distracting for the broadcast,” says Sharabani.

For imagery like the CT scans, the point cloud data was imported directly into Autodesk’s Maya, where it was turned into a 3D model. Other times the images were rendered out at 4K directly from the system. The Medivis imagery was later composited into the scenes using Autodesk’s Flame.
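
The article doesn’t detail how the CT point cloud becomes a mesh, but the general technique is surface reconstruction. Below is a minimal, hedged sketch of that step using the open-source Open3D library, assuming the points have been exported to a standard interchange format; the file names and parameter values are hypothetical, and this illustrates the general approach rather than The-Artery’s actual pipeline.

```python
# Illustrative sketch only: convert a point cloud (e.g., exported from CT data)
# into a triangle mesh that a DCC app such as Maya can import.
# File names and parameter values are hypothetical.
import open3d as o3d

# Load the point cloud (PLY is a common interchange format).
pcd = o3d.io.read_point_cloud("ct_scan_points.ply")

# Estimate normals, which Poisson surface reconstruction requires.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=2.0, max_nn=30)
)

# Reconstruct a surface from the oriented points.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)

# Export as OBJ so the model can be brought into Maya for cleanup and lookdev.
o3d.io.write_triangle_mesh("ct_scan_mesh.obj", mesh)
```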

However, not every bit of imagery was extracted from the system. Some had to be re-created using a standard 3D pipeline. For instance, the “scan” of the actor’s skull was replicated by the artists so that the skull model matched perfectly with the holographic imagery that was overlaid in post production (since everyone’s skull proportions are different). The group began by creating the models in Maya and then composited the imagery within Autodesk’s Flame, along with a 3D bounding box of the creative implant.

The artists also replicated the Medivis UI in 3D, matching the performance of the three-dimensional interface to the hand gestures of the person “using” the Medivis system in the spot — both of which were filmed separately. For the CG interface, the group used Autodesk’s Maya and Flame, as well as Adobe’s After Effects.

“The process was so integrated with the edit that we needed the proper 3D tracking and some of the assets to be built as 3D screen elements,” explains Sharabani. “It gave us more flexibility to build the 3D UI inside of Flame, enabling us to control it more quickly and easily when we changed a hand gesture or expanded the shots.”

Given The-Artery’s experience with virtual technology, the team was quick to understand the limitations of working with this particular equipment. Once those limits were established, however, they began to push the boundaries with small hacks that enabled them to achieve their goal of using actual holographic data to tell an amazing story.

Chantix “Turkey” Campaign

Chantix is a medication that helps smokers kick the habit. To get its message across in a series of television commercials, the drug maker decided to talk turkey, focusing the campaign on a CG turkey that, well, goes “cold turkey” with the assistance of Chantix.

A series of four spots — Slow Turkey, Camping, AC and Beach Day — prominently feature the turkey, created at The Mill. The spots were directed and produced in-house by Mill+, The Mill’s end-to-end production arm, with Jeffrey Dates directing.


L-R: John Montefusco, Dave Barosin and Scott Denton

“Each one had its own challenges,” says CG lead John Montefusco. Nevertheless, the initial commercial, Slow Turkey, presented the biggest obstacle: the build of the character from the ground up. “It was not only a performance feat, but a technical one as well,” he adds.

Effects artist Dave Barosin echoed Montefusco’s assessment of Slow Turkey, which, in addition to building the main asset from scratch, required the development of a feather system. Meanwhile, Camping and AC added clothing to the character, and Beach Day presented the challenge of wind, water and simulation in a moving vehicle.

According to senior modeler Scott Denton, the team was given a good deal of creative freedom when crafting the turkey. The artists were presented with some initial sketches, he adds, but more or less had free rein in the creation of the look and feel of the model. “We were looking to tread the line between cartoony and realistic,” he says. The first iterations became very cartoony, but the team subsequently worked backward to where the character was more of a mix between the two styles.

The crew modeled the turkey using Autodesk’s Maya and Pixologic’s ZBrush. It was then textured within Adobe’s Substance and Foundry’s Mari. All the details of the model were hand-sculpted. “Nailing the look and feel was the toughest challenge. We went through a hundred iterations before getting to the final character you see in the commercial,” Denton says.

The turkey contains 6,427 body feathers, 94 flight feathers and eight scalp feathers. They were simulated using a custom feather setup built by the lead VFX artist within SideFX Houdini, which made the process more efficient. Proprietary tools also were used to groom the character.

The artists initially developed a concept sculpt in ZBrush of just the turkey’s head, which underwent numerous changes and versions before they added it to the body of the model. Denton then sculpted a posed version with sculpted feathers to show what the model might look like when posed, giving the client a better feel for the character. The artists later animated the turkey using Maya. Rendering was performed in Autodesk’s Arnold, while compositing was done within Foundry’s Nuke.

“Developing animation that holds good character and personality is a real challenge,” says Montefusco. “There’s a huge amount of evolution in the subtleties that ultimately make our turkey ‘the turkey.’”

For the most part, the same turkey model was used for all four spots, although the artists did adapt and change certain aspects — such as the skeleton and simulation meshes — for each as needed in the various scenarios.

For the turkey’s clothing (sweater, knitted vest, scarf, down vest, knitted cap, life vest), the group used Marvelous Designer 3D software for virtual clothes and fabrics, along with Maya and ZBrush. However, as Montefusco explains, tailoring for a turkey is far different than developing CG clothing for human characters. “Seeing as a lot of the clothes that were selected were knit, we really wanted to push the envelope and build the knit with geometry. Even though this made things a bit slower for our effects and lighting team, in the end, the finished clothing really spoke for itself.”

The four commercials also feature unique environments ranging from the interior and exterior of a home to a wooded area and beach. The artists used mostly plates for the environments, except for an occasional tent flap and chair replacement. The most challenging of these settings, says Montefusco, was the beach scene, which required full water replacement for the shot of the turkey on the paddle board.


Karen Moltenbrey is a veteran writer, covering visual effects and post production.

VFX in Features: Hobbs & Shaw, Sextuplets

By Karen Moltenbrey

What a difference a year makes. Then again, what a difference 30 years make. That’s about the time when the feature film The Abyss included photoreal CGI integrated with live action, setting a trend that continues to this day. Since that milestone, VFX wizards have tackled a plethora of complicated problems, including realistic hair and skin for believable digital humans, as well as convincing water, fire and other elements. With each new blockbuster VFX film, digital artists continually raise the bar, challenging the status quo and themselves to elevate the art even further.

The visual effects in today’s feature films run the gamut from in-your-face imagery that can put you on the edge of your seat through heightened action to the kind that can make you laugh by amping up the comedic action. As detailed here, Fast & Furious Presents: Hobbs & Shaw takes the former approach, helping to carry out amazing stunts that are bigger and “badder” than ever. Opposite that is Sextuplets, which uses VFX to carry out a gag central to the film in a way that also pushes the envelope.

Fast & Furious Presents: Hobbs & Shaw

The Fast and the Furious film franchise, which has included eight features that collectively have amassed more than $5 billion worldwide since first hitting the road in 2001, is known for its high-octane action and visual effects. The latest installment, Fast & Furious Presents: Hobbs & Shaw, continues that tradition.

At the core of the franchise are next-level underground street racers who become reluctant fugitives pulling off big heists. Hobbs & Shaw, the first stand-alone vehicle, has Dwayne Johnson and Jason Statham reprising their roles as loyal Diplomatic Security Service lawman Luke Hobbs and lawless former British operative Deckard Shaw, respectively. This comes after facing off in Furious 7 (2015) and then playing cat and mouse as Shaw tries to escape from prison and Hobbs tries to stop him in 2017’s The Fate of the Furious. (Hobbs first appeared in 2011’s Fast Five and became an ally to the gang. Shaw’s first foray was in 2013’s Fast & Furious 6.)

Now, in the latest installment, the pair are forced to join forces to hunt down anarchist Brixton Lorr (Idris Elba), who has control of a bio weapon. The trackers are hired separately to find Hattie, a rogue MI6 agent (who is also Shaw’s sister, a fact that initially eludes Hobbs) after she injects herself with the bio agent and is on the run, searching for a cure.

The Universal Pictures film is directed by David Leitch (Deadpool 2, Atomic Blonde). Jonathan Sela (Deadpool 2, John Wick) is DP, and visual effects supervisor is Dan Glass (Deadpool 2, Jupiter Ascending). A number of VFX facilities worked on the film, including key vendor DNeg along with other contributors such as Framestore.

DNeg delivered 1,000-plus shots for the film, including a range of vehicle-based action sequences set in different global locations. The work involved the creation of full digi-doubles and digi-vehicle duplicates for the death-defying stunts, jumps and crashes, as well as complex effects simulations and extensive digital environments. Naturally, all the work had to fit seamlessly alongside live-action stunts and photography from a director with a stunt coordinator pedigree and a keen eye for authentic action sequences. In all, the studio worked on 26 sequences divided among the Vancouver, London and Mumbai locations. Vancouver handled mostly the Chernobyl break-in and escape sequences, as well as the Samoa chase. London did the McLaren chase and the cave fight, as well as London chase sequences. The Mumbai team assisted its colleagues in Vancouver and London.

When you think of the Fast & Furious franchise, the first things that come to mind are intense car chases, and according to Chris Downs, CG supervisor at DNeg Vancouver, the Chernobyl beat is essentially one long, giant car-and-motorcycle pursuit, which he describes as “a pretty epic car chase.”

“We essentially have Brixton chasing Shaw and Hattie, and then Shaw and Hattie are trying to catch up to a truck that’s being driven by Hobbs, and they end up on these utility ramps and pipes, using them almost as a roadway to get up and into the turbine rooms, onto the rooftops and then jump between buildings,” he says. “All the while, everyone is getting chased by these drones that Brixton is controlling.”

The Chernobyl sequences — the break-in and the escape — were the most challenging work on the film for DNeg Vancouver. The villain, Brixton, is using the Chernobyl nuclear power plant in Ukraine as the site of his hideaway, leading Hobbs and Shaw to secretly break into his lab underneath the plant to locate a device Brixton has there — and then not-so-secretly break out.

The break-in was filmed in the UK at the decommissioned Eggborough coal-fired power plant, which served as a backdrop. To transform the locale into Chernobyl, DNeg augmented the site with cooling towers and other digital structures. Nevertheless, the artists also built an entire CG version of the site for the more extreme action, using photos of the actual Chernobyl as reference for their work. “It was a very intense build. We had artistic liberty, but it was based off of Chernobyl, and a lot of the buildings match the reference photography. It definitely maintained the feeling of a nuclear power plant,” says Downs.

Not only did the construction involve all the exteriors of the industrial complex around Chernobyl, but also an interior build of an “insanely complicated” turbine hall that the characters race through at one point.

The sequence required other environment work, too, as well as effects, digi-doubles and cloth sims for the characters’ flight suits and parachutes as they drop into the setting.

Following the break-in, Hobbs and Shaw are captured and tortured, then manage to escape from the lab just in time as the site begins to explode. For this escape sequence, the crew built a CG Chernobyl reactor and power station, automated drones and a digital chimney, and created an epic collapse of buildings, complex pyrotechnic clouds and burning material.

“The scope of the work, the amount of buildings and pipes, and the number of shots made this sequence our most difficult,” says Downs. “We were blowing it up, so all the buildings had to be effects-friendly as we’re crashing things through them.” Hobbs and Shaw commandeer vehicles as they try to outrun Brixton and the explosion, but Brixton and his henchmen give chase in a range of vehicles, including trucks, Range Rovers, motorcycles and more — a mix of CGI and practical with expert stunt drivers behind the wheel.

As expected for a Fast & Furious film, there’s a big variety of custom-built vehicles. Yet, for this scene and especially in Samoa, DNeg Vancouver crafted a range of CG vehicles, including motorcycles, SUVs, transport trucks, a flatbed truck, drones and a helicopter — 10 in all.

According to Downs, maintaining the appropriate wear and tear on the vehicles as the sequences progressed was not always easy. “Some are getting shot up, or something is blown up next to them, and you want to maintain the dirt and grime on an appropriate level,” he says. “And, we had to think of that wear and tear in advance because you need to build it into the model and the texture as you progress.”

The CG vehicles are mostly used for complex stunts, “which are definitely an 11 on the scale,” says Downs. Along with the CG vehicles, digi-doubles of the actors were also used for the various stunt work. “They are fairly straightforward, though we had a couple shots where we got close to the digi-doubles, so they needed to be at a high level of quality,” he adds. The Hattie digi-double proved the most difficult due to the hair simulation, which had to match the action on set, and the cloth simulation, which had to replicate the flow of her clothing.

“She has a loose sweater on during the Chernobyl sequence, which required some simulation to match the plate,” Downs adds, noting that the artists built the digi-doubles from scratch, using scans of the actors provided by production for quality checks.

The final beat of the Chernobyl escape comes with the chimney collapse. As the chase through Chernobyl progresses, Shaw tries to get Hattie to Hobbs, and Brixton tries to grab Hattie from Shaw. In the process, charges are detonated around the site, leading to the collapse of the main chimney, which just misses obliterating the vehicle they are all in as it travels down a narrow alleyway.

DNeg did a full environment build of the area for this scene, which included the entire alleyway and the chimney, and simulated the destruction of the chimney along with an explosive concussive force from the detonation. “There’s a large fireball at the beginning of the explosion that turns into a large volumetric cloud of dust that’s getting kicked up as the chimney is collapsing, and all that had to interact with itself,” Downs says of the scene. “Then, as the chimney is collapsing toward the end of the sequence, we had the huge chunks ripping through the volumetrics and kicking up more pyrotechnic-style explosions. As it is collapsing, it is taking out buildings along the way, so we had those blowing up and collapsing and interacting with our dust cloud, as well. It’s quite a VFX extravaganza.”

Adding to the chaos: The sequence was reshot. “We got new plates for the end of that escape sequence that we had to turn around in a month, so that was definitely a white-knuckle ride,” says Downs. “Thankfully we had already been working on a lot of the chimney collapse and had the Chernobyl build mostly filled in when word came in about the reshoot. But, just the amount of effects that went into it — the volumetrics, the debris and then the full CG environment in the background — was a staggering amount of very complex work.”

The action later turns from London at the start of the film to Ukraine for the Chernobyl sequences, and then, in the third act, to Samoa, home of the Hobbs family, as the main characters seek refuge on the island while trying to escape from Brixton. But Brixton soon catches up to them, and the last showdown begins amid the island’s tranquil setting, with a shimmering blue ocean and lush green mountains. Some of the landscape is natural, some is man-made (sets) and some is CGI. To aid in the digital build of the Samoan environment, Glass traveled to the Hawaiian island of Kauai, where the filming took place, and took a good amount of reference footage.

For a daring chase in Samoa, the artists built out the cliff’s edge and sent a CG helicopter tumbling down the steep incline in the final battle with Brixton. In addition to creating the fully digital Samoan roadside, CG cliff and 3D Black Hawk, the artists completed complex VFX simulations and destruction, and crafted high-tech combat drones and more for the sequence.

The helicopter proved to be the most challenging of all the vehicles, as it had a couple of hero moments when certain sections were fairly close to the camera. “We had to have a lot of model and texture detail,” Downs notes. “And then with it falling down the cliff and crash-landing onto the beach area, the destruction was quite tricky. We had to plan out which parts would be damaged the most and keep that consistent across the shots, and then go back in and do another pass of textures to support the scratches, dents and so forth.”

Meanwhile, DNeg London and Mumbai handled a number of sequences, among them the compelling McLaren chase, the CIA building descent and the final cave fight in Samoa. There were also a number of smaller sequences, for a total of approximately 750 shots.

One of the scenes in the film’s trailer that immediately caught fans’ attention was the McLaren escape/motorcycle transformation sequence, during which Hobbs, Shaw and Hattie are being chased by Brixton baddies on motorcycles through the streets of London. Shaw, behind the wheel of a McLaren 720S, tries to evade the motorbikes by maneuvering the prized vehicle underneath two crossing tractor trailer rigs, squeezing through with barely an inch to spare. The bad news for the trio: Brixton pulls an even more daring move, hopping off the bike while grabbing onto the back of it and then sliding parallel inches above the pavement as the bike zips under the road hazard practically on its side; once cleared, he pulls himself back onto the motorbike (in a memorable slow-motion stunt) and continues the pursuit thanks to his cybernetically altered body.

Chris Downs

According to Stuart Lashley, DNeg VFX supervisor, this sequence contained a lot of bluescreen car comps in which the actors were shot on stage in a McLaren rigged on a mechanical turntable. The backgrounds were shot alongside the stunt work in Glasgow (playing as London). In addition, there were a number of CG cars added throughout the sequence. “The main VFX set pieces were Hobbs grabbing the biker off his bike, the McLaren and Brixton’s transforming bike sliding under the semis, and Brixton flying through the double-decker bus,” he says. “These beats contained full-CG vehicles and characters for the most part. There was some background DMP [digital matte-painting] work to help the location look more like London. There were also a few shots of motion graphics where we see Brixton’s digital HUD through his helmet visor.”

As Lashley notes, it was important for the CG work to blend in with the surrounding practical stunt photography. “The McLaren itself had to hold up very close to the camera; it has a very distinctive look to its coating, which had to match perfectly,” he adds. “The bike transformation was a welcome challenge. There was a period of experimentation to figure out the mechanics of all the small moving parts while achieving something that looked cool at the same time.”

As exciting and complex as the McLaren scene is, Lashley believes the cave fight sequence following the helicopter/tractor trailer crash was perhaps even more of a difficult undertaking, as it had a particular VFX challenge in terms of the super slow-motion punches. The action takes place at a rock-filled waterfall location — a multi-story set on a 30,000-square-foot soundstage — where the three main characters battle it out. The film’s final sequence is a seamless blend of CG and live footage.

Stuart Lashley

“David [Leitch] had the idea that this epic final fight should be underscored by these very stylized, powerful impact moments, where you see all this water explode in very graphic ways,” explains Lashley. “The challenge came in finding the right balance between physics-based water simulation and creative stylization. We went through a lot of iterations of different looks before landing on something David and Dan [Glass] felt struck the right balance.”

The DNeg teams used a unified pipeline for their work, which includes Autodesk’s Maya for modeling, animation and the majority of cloth and hair sims; Foundry’s Mari for texturing; Isotropix’s Clarisse for lighting and rendering; Foundry’s Nuke for compositing; and SideFX’s Houdini for effects work, such as explosions, dust clouds, particulates and fire.

With expectations running high for Hobbs & Shaw, filmmakers and VFX artists once more delivered, putting audiences on the edge of their seats with jaw-dropping VFX work that shifted the franchise’s action into overdrive yet again. “We hope people have as much fun watching the result as we had making it. This was really an exercise in pushing everything to the max,” says Lashley, “often putting the physics book to one side for a bit and picking up the Fast & Furious manual instead.”

Sextuplets

When actor/comedian/screenwriter/film producer Marlon Wayans signed on to play the lead in the Netflix original movie Sextuplets, he was committing to a role requiring an extensive acting range. That’s because he was filling not one but seven different lead roles in the same film.

In Sextuplets, directed by Michael Tiddes, Wayans plays soon-to-be father Alan, who hopes to uncover information about his family history before his child’s arrival and sets out to locate his birth mother. Imagine Alan’s surprise when he finds out that he is part of “identical” sextuplets! Nevertheless, his siblings are about as unique as they come.

There’s Russell, the nerdy, overweight introvert and the only sibling not given up by their mother, with whom he lived until her recent passing. Ethan, meanwhile, is the embodiment of a 1970s pimp. Dawn is an exotic dancer who is in jail. Baby Pete is on his deathbed and needs a kidney. Jaspar is a villain reminiscent of Austin Powers’ Dr. Evil. Okay, that is six characters, all played by Wayans. Who is the seventh? (Spoiler alert: Wayans also plays their mother, who was simply on vacation and not actually dead as Russell had claimed.)

There are over 1,100 VFX shots in the movie. None, really, involved the transformation of the actor into the various characters — that was done using prosthetics, makeup, wigs and so forth, with slight digital touch-ups as needed. Instead, the majority of the effects work resulted from shooting with a motion-controlled camera and then compositing two (or more) of the siblings together in a shot. For Baby Pete, the artists also had to do a head replacement, comp’ing Wayans onto the body of a much smaller actor.

“We used quite a few visual effects techniques to pull off the movie. At the heart was motion control, [which enables precise control and repetition of camera movement] and allowed us to put multiple characters played by Marlon together in the scenes,” says Tiddes, who has worked with Wayans on multiple projects in the past, including A Haunted House.

The majority of shots involving the siblings were done on stage, filmed on bluescreen with a TechnoDolly for the motion control, as it was impractical to fit the large rig inside an actual house for filming. “The goal was to find locations that had the exterior I liked [for those scenes] and then build the interior on set,” says Tiddes. “This gave me the versatility to move walls and use the TechnoDolly to create multiple layers so we could then add multiple characters into the same scene and interact together.”

According to Tiddes, the team approached exterior shots similarly to interior ones, with the added challenge of shooting the duplicate moments at the same time each day to get consistent lighting. “Don Burgess, the DP, was amazing in that sense. He was able to create almost exactly the same lighting elements from day to day,” he notes.

Michael Tiddes

So, whenever there was a scene with multiple Wayans characters, it would be filmed on back-to-back days, one character at a time. Tiddes usually started off with Alan, the straight man, to set the pace for the scene, using body doubles for the other characters. Next, the director would work out the shot with the motion control until the timing, composition and so forth were perfected. Then he would hit the Record button on the motion-control device, and the camera would repeat the same exact move over and over as many times as needed. The next day, the shot was replicated with the next character: the camera would move automatically, and Wayans would have to hit the same marks at the same moments established on the first day.

“Then we’d do it again on the third day with another character. It’s kind of like building layers in Photoshop, and in the end, we would composite all those layers on top of each other for the final version,” explains Tiddes.
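
Tiddes’ Photoshop analogy maps onto the standard “over” operation compositors use when stacking keyed or roto’d plates. The sketch below is a generic illustration of that layering step, not Image Engine’s actual setup; it assumes premultiplied-alpha RGBA plates of identical resolution, and the plate names are hypothetical.

```python
# Generic illustration of stacking motion-control passes with the "over"
# operator, assuming premultiplied-alpha RGBA plates of identical size.
import numpy as np

def over(fg, bg):
    """Composite a premultiplied foreground over a background: fg + bg * (1 - fg_alpha)."""
    alpha = fg[..., 3:4]               # foreground coverage
    return fg + bg * (1.0 - alpha)

def stack_layers(layers):
    """Layers are ordered back to front, like a Photoshop layer stack."""
    result = layers[0]
    for layer in layers[1:]:
        result = over(layer, result)
    return result

# Hypothetical plates: the background set, then one keyed character pass per shooting day.
h, w = 1080, 1920
background = np.zeros((h, w, 4), dtype=np.float32)
background[..., 3] = 1.0               # fully opaque plate
alan_pass = np.zeros((h, w, 4), dtype=np.float32)     # keyed/roto'd pass from day one
russell_pass = np.zeros((h, w, 4), dtype=np.float32)  # matching pass from day two

final_frame = stack_layers([background, alan_pass, russell_pass])
```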

When one character would pass in front of another, it became a roto’d shot. Oftentimes a small bluescreen was set up on stage to allow for easier rotoscoping.

Image Engine was the main visual effects vendor on the film, with Bryan Jones serving as visual effects supervisor. The rotoscoping was done using a mix of SilhouetteFX’s Silhouette and Foundry’s Nuke, while compositing was mainly done using Nuke and Autodesk’s Flame.

Make no mistake … using the motion-controlled camera was not without challenges. “When you attack a scene, traditionally you can come in and figure out the blocking on the day [of the shoot],” says Tiddes. “With this movie, I had to previsualize all the blocking because once I put the TechnoDolly in a spot on the set, it could not move for the duration of time we shot in that location. It’s a large 13-foot crane with pieces of track that are 10 feet long and 4 feet wide.”

In fact, one of the main reasons Tiddes wanted to do the film was because of the visual effects challenges it presented. In past films where an actor played multiple characters in a scene, usually one character is on one side of the screen and the other character is on the other side, and a basic split-screen technique would have been used. “For me to do this film, I wanted to visually do it like no one else has ever done it, and that was accomplished by creating camera movement,” he explains. “I didn’t want to be constrained to only split-screen lock-off camera shots that would lack energy and movement. I wanted the freedom to block scenes organically, allowing the characters the flexibility to move through the room, with the opportunity to cross each other and interact together physically. By using motion control, by being able to re-create the same camera movement and then composite the characters into the scene, I was able to develop a different visual style than previous films and create a heightened sense of interactivity and interaction between two or multiple characters on the screen while simultaneously creating dynamic movement with the camera and invoking energy into the scene.”

At times, Gregg Wayans, Marlon’s nephew, served as his body double. He even appears in a very wide shot as one of the siblings, although that occurred only once. “At the end of the day, when the concept of the movie is about Marlon playing multiple characters, the perfectionist in me wanted Marlon to portray every single moment of these characters on screen, even when the character is in the background and out of focus,” says Tiddes. “Because there is only one Marlon Wayans, and no one can replicate what he does physically and comedically in the moment.”

Tiddes knew he would be challenged going into the project, but the process was definitely more complicated than he had initially expected — even with his VFX editorial background. “I had a really good starting point as far as conceptually knowing how to execute motion control. But, it’s not until you get into the moment and start working with the actors that you really understand and digest exactly how to pull off the comedic timing needed for the jokes with the visual effects,” he says. “That is very difficult, and every situation is unique. There was a learning curve, but we picked it up quickly, and I had a great team.”

A system was established that worked for Tiddes and Burgess, as well as Wayans, who had to execute and hit certain marks and look at proper eyelines with precise timing. “He has an earwig, and I am talking to him, letting him know where to look, when to look,” says Tiddes. “At the same time, he’s also hearing dialogue that he’s done the day before in his ear, and he’s reacting to that dialogue while giving his current character’s lines in the moment. So, there’s quite a bit going on, and it all becomes more complex when you add the character and camera moving through the scene. After weeks of practice, in one of the final scenes with Jaspar, we were able to do 16 motion-controlled moments in that scene alone, which was a lot!”

At the very end of the film, the group tested its limits and had all six characters (mom and all the siblings, with the exception of Alan) gathered around a table. That scene was shot over a span of five days. “The camera booms down from a sign and pans across the party, landing on all six characters around a table. Getting that motion and allowing the camera to flow through the party onto all six of them seamlessly interacting around the table was a goal of mine throughout the project,” Tiddes says.

Other shots that proved especially difficult were those of Baby Pete in the hospital room, since the entire scene involved Wayans playing three additional characters who are also present: Alan, Russell and Dawn. And then they amped things up with the head replacement on Baby Pete. “I had to shoot the scene and then, on the same day, select the take I would use in the final cut of the movie, rather than select it in post, where traditionally I could pick another take if that one was not working,” Tiddes adds. “I had to set the pace on the first day and work things out with Marlon ahead of time and plan for the subsequent days — What’s Dawn going to say? How is Russell going to react to what Dawn says? You have to really visualize and previsualize all the ad-libbing that was going on and work it out right there in the moment and discuss it, to have kind of a loose plan, then move forward and be confident that you have enough time between lines to allow room for growth when a joke just comes out of nowhere. You don’t want to stifle that joke.”

While the majority of effects involved motion control, there is a scene that contains a good amount of traditional effects work. In it, Alan and Russell park their car in a field to rest for the night, only to awake the next morning to find they have inadvertently provoked a bull, which sees red, literally — both from Alan’s jacket and his shiny car. Artists built the bull in CG. (They used Maya and SideFX Houdini to build the 3D elements and rendered them in Autodesk’s Arnold.) Physical effects were then used to lift the actual car to simulate the digital bull slamming into the vehicle. In some shots of the bull crashing into the car doors, a 3D car was used to show the doors being damaged.

In another scene, Russell and Alan catch a serious amount of air when they crash through a barn, desperately trying to escape the bull. “I thought it would be hilarious if, in that moment, cereal exploded and individual pieces flew wildly through the car, while [the cereal-obsessed] Russell scooped up one of the cereal pieces mid-air with his tongue for a quick snack,” says Tiddes. To do this, “I wanted to create a zero-gravity slow-motion moment. We shot the scene using a [Vision Research] high-speed Phantom camera at 480fps. Then in post, we created the cereal as a CG element so I could control how every piece moved in the scene. It’s one of my favorite VFX/comedy moments in the movie.”

As Tiddes points out, Sextuplets was the first project on which he used motion control, which let him create motion with the camera and still have the characters interact, giving the subconscious feeling they were actually in the room with one another. “That’s what made the comedy shine,” he says.


Karen Moltenbrey is a veteran writer/editor covering VFX and post production.


Mavericks VFX provides effects for Hulu’s The Handmaid’s Tale

By Randi Altman

Season 3 episodes of Hulu’s The Handmaid’s Tale are available for streaming, and if you had any illusions that things would lighten up a bit for June (Elisabeth Moss) and the ladies of Gilead, I’m sorry to say you will be disappointed. What’s not disappointing is that, in addition to the amazing acting and storylines, the show’s visual effects once again play a heavy role.

Brendan Taylor

Toronto’s Mavericks VFX has created visual effects for all three seasons of the show, based on Margaret Atwood’s dystopian view of the not-too-distant future. Its work has earned two Emmy nominations.

We recently reached out to Mavericks’ founder and visual effects supervisor, Brendan Taylor, to talk about the new season and his workflow.

How early did you get involved in each season? What sort of input did you have regarding the shots?
The Handmaid’s Tale production is great because they involve us as early as possible. Back in Season 2, when we had to do the Fenway Park scene, for example, we were in talks in August but didn’t shoot until November. For this season, they called us in August for the big fire sequence in Episode 1, and the scene was shot in December.

There’s a lot of nice leadup and planning that goes into it. Our opinions are sought after, and we’re able to provide input on the best methodology to use to achieve a shot. Showrunner Bruce Miller, along with the directors, has a sense of how they’d like to see it, and they’re great at taking in our recommendations. It was very collaborative, and we all approached the process with “what’s best for the show” in mind.

What are some things that the showrunners asked of you in terms of VFX? How did they describe what they wanted?
Each person has a different approach. Bruce speaks in story terms, providing a broader sense of what he’s looking for. He gave us the overarching direction of where he wants to go with the season. Mike Barker, who directed a lot of the big episodes, speaks in more specific terms. He really gets into the details, determining the moods of the scene and communicating how each part should feel.

What types of effects did you provide? Can you give examples?
Some standout effects were the CG smoke in the burning fire sequence and the aftermath of the house being burned down. For the smoke, we had to make it snake around corners in a believable yet magical way. We had a lot of fire going on set, and we couldn’t have any actors or stunt people near it due to its size, so we had to line up multiple shots and composite them together to make everything look realistic. We then had to recreate the whole house in 3D in order to create the aftermath of the fire, with the house being completely burned down.

We also went to Washington, and since we obviously couldn’t destroy the Lincoln Memorial, we recreated it all in 3D. That was a lot of back and forth between Bruce, the director and our team. Different parts of Lincoln being chipped away means different things, and Bruce definitely wanted the head to be off. It was really fun because we got to provide a lot of suggestions. On top of that, we also had to create CGI handmaids and all the details that came with it. We had to get the robes right and did cloth simulation to match what was shot on set. There were about a hundred handmaids on set, but we had to make it look like there were thousands.

Were you able to reuse assets from last season for this one?
We were able to reuse a handmaids asset from last season, but it needed a lot of upgrades for this season. Because there were closer shots of the handmaids, we had to tweak it and make sure little things like the texture, shaders and different cloth simulations were right for this season.

Were you on set? How did that help?
Yes, I was on set, especially for the fire sequences. We spent a lot of time talking about what’s possible and testing different ways to make it happen. We want it to be as perfect as possible, so I had to make sure it was all done properly from the start. We sent another visual effects supervisor, Leo Bovell, down to Washington to supervise out there as well.

Can you talk about a scene or scenes where being on set played a part in doing something either practical or knowing you could do it in CG?
The fire sequence with the smoke going around the corner took a lot of on-set collaboration. We had tried doing it practically, but the smoke was moving too fast for what we wanted, and there was no way we could physically slow it down.

Having the special effects coordinator, John MacGillivray, there to give us real smoke that we could then match to was invaluable. In most cases on this show, very few audibles were called. They want to go into the show knowing exactly what to expect, so we were prepared and ready.

Can you talk about turnaround time? Typically, series have short ones. How did that affect how you worked?
The average turnaround time was eight weeks. We began discussions in August, before shooting, and had to deliver by January. We worked with Mike to simplify things without diminishing the impact. We just wanted to make sure we had the chance to do it well given the time we had. Mike was very receptive in asking what we needed to do to make it the best it could be in the timeframe that we had. Take the fire sequence, for example. We could have done full-CGI fire, but that would have taken six months. So we did our research and testing to find the most efficient way to merge practical effects with CGI and presented the best version in a shorter period of time.

What tools were used?
We used Foundry Nuke for compositing. We used Autodesk Maya to build all the 3D houses, including the burned-down house, and to destroy the Lincoln Memorial. Then we used SideFX Houdini to do all the simulations, which can range from smoke and fire to crowd and cloth.

Is there a shot that you are most proud of or that was very challenging?
The shot where we reveal the crowd over June when we’re in Washington was incredibly challenging. The actual Lincoln Memorial, where we shot, is an active public park, so we couldn’t prevent people from visiting the site. The most we could do was hold them off for a few minutes. We ended up having to clean out all of the tourists, which is difficult with a moving camera and moving people. We had to reconstruct about 50% of the plate. Then, in order to get the CG people to be standing there, we had to create a replica of the ground they’re standing on in CG. There were some models we got from the US Geological Survey, but they didn’t completely line up, so we had to make a lot of decisions on the fly.

The cloth simulation in that scene was perfect. We had to match the dampening and the movement of all the robes. Stephen Wagner, who is our effects lead on it, nailed it. It looked perfect, and it was really exciting to see it all come together. It looked seamless, and when you saw it in the show, nobody believed that the foreground handmaids were all CG. We’re very proud.

What other projects are you working on?
We’re working on a movie called Queen & Slim by Melina Matsoukas with Universal. It’s really great. We’re also doing YouTube Premium’s Impulse and Netflix’s series Madam C.J. Walker.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 


VFX in Series: The Man in the High Castle, Westworld

By Karen Moltenbrey

The look of television changed forever starting in the 1990s as computer graphics technology began to mature to the point where it could be incorporated within television productions. Indeed, the applications initially were minor, but soon audiences were witnessing very complicated work on the small screen. Today, we see a wide range of visual effects being used in television series, from minor wire and sign removal to all-CG characters and complete CG environments — pretty much anything and everything to augment the action and story, or to turn a soundstage or location into a specific locale that could be miles away or even non-existent.

Here, we examine two prime examples where a wide range of visual effects are used to set the stage and propel the action for a pair of series with unique settings. The Man in the High Castle uses effects to turn back the clock to the 1960s, but also to create an alternate reality for the period, turning the familiar on its head. In Westworld, effects create a unique Wild West of the future. In both series, VFX also help turn up the volume on very creative storylines.

The Man in the High Castle

What would life in the US be like if the Axis powers had defeated the Allied forces during World War II? The Amazon TV series The Man in the High Castle explores that alternate history scenario. Created by Frank Spotnitz and produced by Amazon Studios, Scott Free Productions, Headline Pictures, Electric Shepherd Productions and Big Light Productions, the series is scheduled to start its fourth and final season in mid-November. The story is based on the book by Philip K. Dick.

High Castle begins in the early 1960s in a dystopian America. Nazi Germany and the Empire of Japan have divvied up the US as their spoils of war. Germany rules the East, known as the Greater Nazi Reich (with New York City as the regional capital), while Japan controls the West, known as the Japanese Pacific States (whose capital is now San Francisco). The Rocky Mountains serve as the Neutral Zone. The American Resistance works to thwart the occupiers, spurred on after the discovery of materials displaying an alternate reality where the Allies were victorious, making them ponder this scenario.

With this unique storyline, visual effects artists were tasked with turning back the clock on present-day locations to the ’60s and then turning them into German- and Japanese-dominated and inspired environments. Starting with Season 2, the main studio filling this role has been Barnstorm Visual Effects (Los Angeles, Vancouver). Barnstorm operated as one of the vendors for Season 1, but has since ramped up its crew from a dozen to around 70 to take on the additional work. (Barnstorm also works on CBS All Access shows such as The Good Fight and Strange Angel, in addition to Get Shorty, Outlander and the HBO series Room 104 and Silicon Valley.)

According to Barnstorm co-owner and VFX supervisor Lawson Deming, the studio is responsible for all types of effects for the series, ranging from simple cleanup and fixes, such as removing modern objects from shots, to more extensive period work through the addition of period set pieces and set extensions. In addition, there are some flashback scenes that call for the artists to digitally de-age the actors, as well as lots of military vehicles and science-fiction objects to add. The majority of the overall work entails CG set extensions and world creation, Deming explains. “That involves matte paintings and CG vehicles and buildings.”

The number of visual effects shots per episode also varies greatly, depending on the story line; there are an average of 60 VFX shots an episode, with each season encompassing 10 episodes. Currently the team is working on Season 4. A core group of eight to 10 CG artists and 12 to 18 compositors work on the show at any given time.

For Season 3, released last October, there are a number of scenes that take place in Reich-occupied New York City. Although it was possible to go to NYC and photograph buildings for reference, the city has changed significantly since the 1960s, “even notwithstanding the fact that this is an alternate history 1960s,” says Deming. “There would have been a lot of work required to remove modern-day elements from shots, particularly at the street level of buildings where modern-day shops are located, even if it was a building from the 1940s, ’50s or ’60s. The whole main floor would have needed to be replaced.”

So, in many cases, the team found it more prudent to create set extensions for NYC from scratch. The artists created sections of Fifth and Sixth avenues, both for the area where American-born Reichmarshall and Resistance investigator John Smith has his apartment and also for a parade sequence that occurs in the middle of Season 3. They also constructed a digital version of Central Park for that sequence, which involved crafting a lot of modular buildings with mix-and-match pieces and stories to make what looked like a wide variety of different period-accurate buildings, with matte paintings for the backgrounds. Elements such as fire escapes and various types of windows (some with curtains open, some closed) helped randomize the structures. Shaders for brick, stucco, wood and so forth further enabled the artists to get a lot of usage from relatively few assets.
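
As a rough illustration of the mix-and-match approach Deming describes, the sketch below assembles varied facades from a small library of modular pieces; the piece names, counts and seed are invented for the example and are not Barnstorm’s actual assets.

```python
# Hypothetical sketch of mix-and-match building assembly: pick a ground floor,
# a random number of storeys and a roof piece from small libraries of modules.
import random

STOREY_STYLES = ["brick_plain", "brick_fire_escape", "stucco_curtains_open",
                 "stucco_curtains_closed", "wood_bay_window"]
GROUND_FLOORS = ["storefront_1940s", "lobby_entrance", "service_entrance"]
ROOFS = ["flat_parapet", "cornice", "water_tower"]

def build_facade(rng, min_storeys=4, max_storeys=12):
    """Return a simple description of one randomized, period-style facade."""
    storeys = rng.randint(min_storeys, max_storeys)
    return {
        "ground": rng.choice(GROUND_FLOORS),
        "storeys": [rng.choice(STOREY_STYLES) for _ in range(storeys)],
        "roof": rng.choice(ROOFS),
    }

# A seeded generator keeps the street layout reproducible between renders.
rng = random.Random(1962)
city_block = [build_facade(rng) for _ in range(20)]
```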

“That was a large undertaking, particularly because in a lot of those scenes, we also had crowd duplication, crowd systems, tiling and so on to create everything that was there,” Deming explains. “So even though it’s just a city and there’s nothing necessarily fantastical about it, it was almost fully created digitally.”

The styles of NYC and San Francisco are very different in the series narrative. The Nazis are rebuilding NYC in their own image, so there is a lot of influence from brutalist architecture, and cranes often dot the skyline to emphasize all the construction taking place. Meanwhile, San Francisco has more of a 1940s look, as the Japanese are less interested in driving architectural change than in occupation.

“We weren’t trying to create a science-fiction world because we wanted to be sure that what was there would be believable and sell the realistic feel of the story. So, we didn’t want to go too far in what we created. We wanted it to feel familiar enough, though, that you could believe this was really happening,” says Deming.

One of the standout episodes for visual effects is “Jahr Null” (Season 3, Episode 10), which has been nominated for a 2019 Emmy in the Outstanding Special Visual Effects category. It entails the destruction of the Statue of Liberty, which crashes into the water, requiring just about every tool available at Barnstorm. “Prior to [the upcoming] Season 4, our biggest technical challenge was the Statue of Liberty destruction. There were just so many moving parts, literally and figuratively,” says Deming. “So many things had to occur in the narrative — the Nazis had this sense of showmanship, so they filmed their events and there was this constant stream of propaganda and publicity they had created.”

There are ferries with people on them to watch the event, spotlights are on the statue and an air show with music prior to the destruction as planes with trails of colored smoke fly toward the statue. When the planes fire their missiles at the base of the statue, it’s for show, as there are a number of explosives planted in the base of the statue that go off in a ring formation to force the collapse. Deming explains the logistics challenge: “We wanted the statue’s torch arm to break off and sink in the water, but the statue sits too far back. We had to manufacture a way for the statue to not just tip over, but to sort of slide down the rubble of the base so it would be close enough to the edge and the arm would snap off against the side of the island.”

The destruction simulation, including the explosions, fire, water and so forth, was handled primarily in SideFX Houdini. Because there was so much sim work, a good deal of the effects work for the entire sequence was done in Houdini as well. Lighting and rendering for the scene were done within Autodesk’s Arnold.

Barnstorm also used Blender, an open-source 3D program for modeling and asset creation, for a small portion of the assets in this sequence. In addition, the artists used Houdini Mantra for the water rendering, while textures and shaders were built in Adobe’s Substance Painter; later the team used Foundry’s Nuke to composite the imagery. “There was a lot of deep compositing involved in that scene because we had to have the lighting interact in three dimensions with things like the smoke simulation,” says Deming. “We had a bunch of simulations stacked on top of one another that created a lot of data to work with.”

The artists referenced historical photographs as they designed and built the statue with a period-accurate torch. In the wide aerial shots, the team used some stock footage of the statue with New York City in the background, but had to replace pretty much everything in the shot, shortening the city buildings and replacing Liberty Island, the water surrounding it and the vessels in the water. “So yeah, it ended up being a fully digital model throughout the sequence,” says Deming.

Deming cannot discuss the effects work coming up in Season 4, but he does note that Season 3 contained a lot of digital NYC. This included a sequence wherein John Smith was installed as the Reichmarshall near Central Park, a scene that comprised a digital NYC and digital crowd duplication. On the other side of the country, the team built digital versions of all the ships in San Francisco harbor, including CG builds of period Japanese battleships retrofitted with more modern equipment. Water simulations rounded out the scene.

In another sequence, the Japanese performed nuclear testing in Monument Valley, blowing the caps off the mesas. For that, the artists used reference photos to build the landscape and then created a digital simulation of a nuclear blast.

In addition, there were a multitude of banners on the various buildings. Because of the provocative nature of some of the Nazi flags and Fascist propaganda, solid-color banners were often hung on location, with artists adding the offensive imagery in post so as not to upset locals where the series was filmed. Other times, the VFX artists added all-digital signage to the scenes.

As Deming points out, there is only so much that can be created through production design and costumes. Some of the big things have to be done with visual effects. “There are large world events in the show that happen and large settings that we’re not able to re-create any other way. So, the visual effects are integral to the process of creating the aesthetic world of the show,” he adds. “We’re creating things that, while visually impressive, also feel authentic, like a world that could really exist. That’s where the power and the horror of this world come from.”

High Castle is up for a total of three Emmy awards later this month. It was nominated for three Emmys in 2017 for Season 2 and four in 2016 for Season 1, taking home two Emmys that year: one for Outstanding Cinematography for a Single-Camera Series and another for Outstanding Title Design.

Westworld

What happens when high tech meets the Wild West, and wealthy patrons can indulge their fantasies with no limits? That is the premise of the Emmy-winning HBO series Westworld from creators Jonathan Nolan and Lisa Joy, who executive produce along with J.J. Abrams, Athena Wickham, Richard J. Lewis, Ben Stephenson and Denise Thé.

Westworld is set in the fictitious western theme park called Westworld, one of multiple parks where advanced technology enables the use of lifelike android hosts to cater to the whims of guests who are able to pay for such services — all without repercussions, as the hosts are programmed not to retaliate or harm the guests. After each role-play cycle, the host’s memory is erased, and then the cycle begins anew until eventually the host is either decommissioned or used in a different narrative. Staffers are situated out of sight while overseeing park operations and performing repairs on the hosts as necessary. As you can imagine, guests often play out the darkest of desires. So, what happens if some of the hosts retain their memories and begin to develop emotions? What if some escape from the park? What occurs in the other themed parks?

The series debuted in October 2016, with Season 2 running from April through June of 2018. Production for Season 3 began this past spring, and the new season is planned for release in 2020.

The first two seasons were shot in various locations in California, as well as in Castle Valley near Moab, Utah. Multiple vendors provide the visual effects, including the team at CoSA VFX (North Hollywood, Vancouver and Atlanta), which has been with the show since the pilot, working closely with Westworld VFX supervisor Jay Worth. CoSA worked with Worth in the past on other series, including Fringe, Undercovers and Person of Interest.

The number of VFX shots per episode varies, depending on the storyline, and that means the number of shots CoSA is responsible for varies widely as well. For instance, the facility did approximately 360 shots for Season 1 and more than 200 for Season 2. The studio is unable to discuss its work at this time on the upcoming Season 3.

The type of effects work CoSA has done on Westworld varies as well, ranging from concept art through the concept department and extension work through the studio’s environments department. “Our CG team is quite large, so we handle every task from modeling and texturing to rigging, animation and effects,” says Laura Barbera, head of 3D at CoSA. “We’ve created some seamless digital doubles for the show that even I forget are CG! We’ve done crowd duplication, for which we did a fun shoot where we dressed up in period costumes. Our 2D department is also sizable, and they do everything from roto, to comp and creative 2D solutions, to difficult greenscreen elements. We even have a graphics department that did some wonderful shots for Season 2, including holograms and custom interfaces.”

On the 3D side, the studio’s pipeline is built mainly around Autodesk’s Maya and Side Effects Houdini, along with Adobe’s Substance, Foundry’s Mari and Pixologic’s ZBrush. Maxon’s Cinema 4D and Interactive Data Visualization’s SpeedTree vegetation modeler are also used. On the 2D side, the artists employ Foundry’s Nuke and the Adobe suite, including After Effects and Photoshop; rendering is done in Chaos Group’s V-Ray and Redshift.

Of course, there have been some recurring effects each season, such as the host “twitches and glitches.” And while some of the same locations have been revisited, the CoSA artists have had to modify the environments to fit with the changing timeline of the story.

“Every season sees us getting more and more into the characters and their stories, so it’s been important for us to develop along with it. We’ve had to make our worlds more immersive so that we are feeling out the new and changing surroundings just like the characters are,” Barbera explains. “So the set work gets more complex and the realism gets even more heightened, ensuring that our VFX become even more seamless.”

At center stage have been the park locations, which are rooted in existing terrain, as there is a good deal of location shooting for the series. The challenge for CoSA then becomes how to enhance it and make nature seem even more full and impressive, while still subtly hinting toward the changes in the story, says Barbera. For instance, the studio did a significant amount of work to the Skirball Cultural Center locale in LA for the outdoor environment of Delos, which owns and operates the parks. “It’s now sitting atop a tall mesa instead of overlooking the 405!” she notes. The team also added elements to the abandoned Hawthorne Plaza mall to depict the sublevels of the Delos complex. They’re constantly creating and extending the environments in locations inside and out of the park, including the town of Pariah, a particularly lawless area.

“We’ve created beautiful additions to the outdoor sets. I feel sometimes like we’re looking at a John Ford film, where you don’t realize how important the world around you is to the feel of the story,” Barbera says.

CoSA has done significant interior work too, creating spaces that did not exist on set “but that you’d never know weren’t there unless you’d see the before and afters,” Barbera says. “It’s really very visually impressive — from futuristic set extensions, cars and [Westworld park co-creator] Arnold’s house in Season 2, it’s amazing how much we’ve done to extend the environments to make the world seem even bigger than it is on location.”

One of the larger challenges of the first two seasons came in Season 2: creating the Delos complex and the final episodes, where the studio had to build a world inside of a world – the Sublime – as well as the gateway to get there. “Creating the Sublime was a challenge because we had to reuse and yet completely change existing footage to design a new environment,” explains Barbera. “We had to find out what kind of trees and foliage would live in that environment, and then figure out how to populate it with hosts that were never in the original footage. This was another sequence where we had to get particularly creative about how to put all the elements together to make it believable.”

In the final episode of the second season, the group created environment work on the hills, pinnacles and quarry where the door to the Sublime appears. They also did an extensive rebuild of the Sublime environment, where the hosts emerge after crossing over. “In the first season, we did a great deal of work on the plateau side of Delos, as well as adding mesas into the background of other shots — where [hosts] Dolores and Teddy are — to make the multiple environments feel connected,” adds Barbera.

Aside from the environments, CoSA also did some subtle work on the robots, especially in Season 2, to make them appear as if they were becoming unhinged, hinting at a malfunction. The comp department also added eye twitches, subtle facial tics and even rapid blinks to provide a sense of uneasiness.

While Westworld’s blending of the Old West’s past and the robotic future initially may seem at thematic odds, the balance of that duality is cleverly accomplished in the filming of the series and the way it is performed, Barbera points out. “Jay Worth has a great vision for the integrated feel of the show. He established the looks for everything,” she adds.

The balance of the visual effects is equally important because it enhances the viewer experience. “There are things happening that can be so subtle but have so much impact. Much of our work on the second season was making sure that the world stayed grounded, so that the strangeness that happened with the characters and story line read as realistic,” Barbera explains. “Our job as visual effects artists is to help our professional storytelling partners tell their tales by adding details and elements that are too difficult or fantastic to accomplish live on set in the midst of production. If we’re doing our job right, you shouldn’t feel suddenly taken out of the moment because of a splashy effect. The visuals are there to supplement the story.”


Karen Moltenbrey is a veteran writer/editor covering VFX and post production.


Visual Effects Roundtable

By Randi Altman

With Siggraph 2019 in our not-too-distant rearview mirror, we thought it was a good time to reach out to visual effects experts to talk about trends. Everyone has had a bit of time to digest what they saw. Users are thinking about which new tools and technologies might help their current and future workflows. Manufacturers are thinking about how their products will incorporate these new technologies.

We provided these experts with questions relating to realtime raytracing, the use of game engines in visual effects workflows, easier ways to share files and more.

Ben Looram, partner/owner, Chapeau Studios
Chapeau Studios provides production, VFX/animation, design and creative IP development (both for digital content and technology) for all screens.

What film inspired you to work in VFX?
There was Ray Harryhausen’s film Jason and the Argonauts, which I watched on TV when I was seven. The skeleton-fighting scene has been visually burned into my memory ever since. Later in life I watched an artist compositing some tough bluescreen shots on a Quantel Henry in 1997, and I instantly knew that that was going to be in my future.

What trends have you been seeing? USD? Rendering in the cloud? What do you feel is important?
Double the content for half the cost seems to be the industry’s direction lately. This is coming from new in-house/client-direct agencies that sometimes don’t know what they don’t know … so we help guide/teach them where it’s OK to trim budgets or dedicate more funds for creative.

Are game engines affecting how you work, or how you will work in the future?
Yes, rendering on device and all the subtle shifts in video fidelity shifted our attention toward game engine technology a couple years ago. As soon as the game engines start to look less canned and have accurate depth of field and parallax, we’ll start to integrate more of those tools into our workflow.

Right now we have a handful of projects in the forecast where we will be using realtime game engine outputs as backgrounds on set instead of shooting greenscreen.

What about realtime raytracing? How will that affect VFX and the way you work?
We just finished an R&D project with Intel’s new raytracing engine OSPRay for Siggraph. The ability to work on a massive scale with last-minute creative flexibility was my main takeaway. This will allow our team to support our clients’ swift changes in direction with ease on global launches. I see this ingredient as really exciting for our creative tech devs moving into 2020. Proof of concept iterations will become finaled faster, and we’ve seen efficiencies in lighting, render and compositing effort.

How have ML/AI affected your workflows, if at all?
None to date, but we’ve been making suggestions for new tools that will make our compositing and color correction process more efficient.

The Uncanny Valley. Where are we now?
Still uncanny. Even with well-done virtual avatar influencers on Instagram like Lil Miquela, we’re still caught with that eerie feeling of close-to-visually-correct with a “meh” filter.

Apple

Can you name some recent projects?
The Rookie’s Guide to the NFL. This was a fun hybrid project where we mixed CG character design with realtime rendering and voice activation. We created an avatar named Matthew for the NFL’s Amazon Alexa Skills store that answers your football questions in real time.

Microsoft AI: Carlsberg and Snow Leopard. We designed Microsoft’s visual language of AI on multiple campaigns.

Apple Trade In campaign: Our team concepted, shot and created an in-store video wall activation and on-all-device screen saver for Apple’s iPhone Trade In Program.

 

Mac Moore, CEO, Conductor
Conductor is a secure cloud-based platform that enables VFX, VR/AR and animation studios to seamlessly offload rendering and simulation workloads to the public cloud.

What are some of today’s VFX trends? Is cloud playing an even larger role?
Cloud is absolutely a growing trend. I think for many years the inherent complexity and perceived cost of cloud has limited adoption in VFX, but there’s been a marked acceleration in the past 12 months.

Two years ago at Siggraph, I was explaining the value of elastic compute and how it perfectly aligns with the elastic requirements that define our project-based industry; this year there was a much more pragmatic approach to cloud, and many of the people I spoke with are either using the cloud or planning to use it in the near future. Studios have seen referenceable success, both technically and financially, with cloud adoption and are now defining cloud’s role in their pipeline for fear of being left behind. Having a cloud-enabled pipeline is really a game changer; it is leveling the field and allowing artistic talent to be the differentiation, rather than the size of the studio’s wallet (and its ability to purchase a massive render farm).

How are game engines changing how VFX are done? Is this for everyone or just a select few?
Game engines for VFX have definitely attracted interest lately and show a lot of promise in certain verticals like virtual production. There’s more work to be done in terms of out-of-the-box usability, but great strides have been made in the past couple years. I also think various open source initiatives and the inherent collaboration those initiatives foster will help move VFX workflows forward.

Will realtime raytracing play a role in how your tool works?
There’s a need for managing the “last mile,” even in realtime raytracing, which is where Conductor would come in. We’ve been discussing realtime assist scenarios with a number of studios, such as pre-baking light maps and similar applications, where we’d perform some of the heavy lifting before assets are integrated in the realtime environment. There are certainly benefits on both sides, so we’ll likely land in some hybrid best practice using realtime and traditional rendering in the near future.

How do ML/AI and AR/VR play a role in your tool? Are you supporting OpenXR 1.0? What about Pixar’s USD?
Machine learning and artificial intelligence are critical for our next evolutionary phase at Conductor. To date we’ve run over 250 million core-hours on the platform, and for each of those hours, we have a wealth of anonymous metadata about render behavior, such as the software run, duration, type of machine, etc.

Conductor

For our next phase, we’re focused on delivering intelligent rendering akin to ride-share app pricing; the goal is to provide producers with an upfront cost estimate before they submit the job, so they have a fixed price that they can leverage for their bids. There is also a rich set of analytics that we can mine, and those analytics are proving invaluable for studios in the planning phase of a project. We’re working with data science experts now to help us deliver this insight to our broader customer base.
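
To make the ride-share analogy concrete, here is a small, purely illustrative sketch of how an upfront estimate could be derived from historical render metadata. It is not Conductor’s actual API or pricing; every number and name in it is invented.

# Illustrative only (not Conductor's API): estimating an upfront render cost
# from historical job metadata, in the spirit of ride-share-style pricing.
from statistics import mean

# Hypothetical historical metadata: average core-hours per frame observed for
# similar jobs (same renderer, comparable scene complexity).
history_core_hours_per_frame = [0.42, 0.38, 0.45, 0.40]

# Hypothetical pricing for a cloud instance type, in dollars per core-hour.
RATE_PER_CORE_HOUR = 0.05

def estimate_job_cost(frame_count: int, safety_margin: float = 1.2) -> float:
    """Return a padded upfront cost estimate for rendering `frame_count` frames."""
    expected_core_hours = mean(history_core_hours_per_frame) * frame_count
    return expected_core_hours * RATE_PER_CORE_HOUR * safety_margin

# A 2,000-frame sequence would be quoted at roughly:
print(f"${estimate_job_cost(2000):,.2f}")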

The AR/VR front presents a unique challenge for cloud, due to the large size and variety of datasets involved. The rendering of these workloads is less about compute cycles and more about scene assembly, so we’re determining how we can deliver more of a whole product for this market in particular.

OpenXR and USD are certainly helping with industry best practices and compatibility, which build recipes for repeatable success, and Conductor is collaborating on creating those guidelines for success when it comes to cloud computing with those standards.

What is next on the horizon for VFX?
Cloud, open source and realtime technologies are all disrupting VFX norms and are converging in a way that’s driving an overall democratization of the industry. Gone are the days when you need a pile of cash and a big brick-and-mortar building to house all of your tech and talent.

Streaming services and new mediums, along with a sky-high quality bar, have increased the pool of available VFX work, which is attracting new talent. Many of these new entrants are bootstrapping their businesses with cloud, standards-based approaches and geographically dispersed artistic talent.

Conductor recently became a fully virtual company for this reason. I hire based on expertise, not location, and today’s technology allows us to collaborate as if we are in the same building.

 

Aruna Inversin, creative director/VFX supervisor, Digital Domain 
Digital Domain has provided visual effects and technology for hundreds of motion pictures, commercials, video games, music videos and virtual reality experiences. It also livestreams events in 360-degree virtual reality, creates “virtual humans” for use in films and live events, and develops interactive content, among other things.

What film inspired you to work in VFX?
RoboCop in 1987. The combination of practical effects, miniatures and visual effects inspired me to start learning about what some call “The Invisible Art.”

What trends have you been seeing? What do you feel is important?
There has been a large focus on realtime rendering and virtual production, and on using them to help increase the throughput of visual effects work. While realtime rendering does indeed increase throughput, there is now a greater onus on filmmakers to plan their creative ideas and assets before they can be rendered. It is no longer truly post production; we are back in the realm of preproduction, using post tools and realtime tools to help define how a story is created and eventually filmed.

USD and cloud rendering are also important components, which allow many different VFX facilities the ability to manage their resources effectively. Another trend, one that has been building for a while and has since gained more traction, is the availability of ACES and a more unified color space from the Academy. This allows quicker throughput between all facilities.

Are game engines affecting how you work or how you will work in the future?
As my primary focus is in new media and experiential entertainment at Digital Domain, I already use game engines (cinematic engines, realtime engines) for the majority of my deliverables. I also use our traditional visual effects pipeline; we have created a pipeline that flows from our traditional cinematic workflow directly into our realtime workflow, speeding up the development process of asset creation and shot creation.

What about realtime raytracing? How will that affect VFX and the way you work?
The ability to use Nvidia’s RTX and raytracing increases the physicality and realistic approximations of virtual worlds, which is really exciting for the future of cinematic storytelling in realtime narratives. I think we are just seeing the beginnings of how RTX can help VFX.

How have AR/VR and AI/ML affected your workflows, if at all?
Augmented reality has occasionally been a client deliverable for us, but we are not using it heavily in our VFX pipeline. Machine learning, on the other hand, allows us to continually improve our digital humans projects, providing quicker turnaround with higher fidelity than competitors.

The Uncanny Valley. Where are we now?
There is no more uncanny valley. We have the ability to create a digital human with the nuance expected! The only limitations are time and resources.

Can you name some recent projects?
I am currently working on a Time project but I cannot speak too much about it just yet. I am also heavily involved in creating digital humans for realtime projects for a number of game companies that wish to push the boundaries of storytelling in realtime. All these projects have a release date of 2020 or 2021.

 

Matt Allard, strategic alliances lead, M&E, Dell Precision Workstations
Dell Precision workstations feature the latest processors and graphics technology and target those working in the editing studio or at a drafting table, at the office or on location.

What are some of today’s VFX trends?
We’re seeing a number of trends in VFX at the moment — from 4K mastering from even higher-resolution acquisition formats and an increase in HDR content to game engines taking a larger role on set in VFX-heavy productions. Of course, we are also seeing rising expectations for more visual sophistication, complexity and film-level VFX, even in TV post (for example, Game of Thrones).

Will realtime raytracing play a role in how your tools work?
We expect that Dell customers will embrace realtime and hardware-accelerated raytracing as creative, cost-saving and time-saving tools. With the availability of Nvidia Quadro RTX across the Dell Precision portfolio, including on our 7000 series mobile workstations, customers can realize these benefits now to deliver better content wherever a production takes them in the world.

Large-scale studio users will not only benefit from the freedom to create the highest-quality content faster, but they’ll likely see an overall impact on their energy consumption as they assess the move from CPU rendering, which dominates studio data centers today. Moving toward GPU and hybrid CPU/GPU rendering approaches can offer equal or better rendering output with less energy consumption.

How are game engines changing how VFX are done? Is this for everyone or just a select few?
Game engines have made their way into VFX-intensive productions to deliver in-context views of the VFX during the practical shoot. With increasing quality driven by realtime raytracing, game engines have the potential to drive a master-quality VFX shot on set, helping to minimize the need to “fix it in post.”

What is next on the horizon for VFX?
The industry is at the beginning of a new era as artificial intelligence and machine learning techniques are brought to bear on VFX workflows. Analytical and repetitive tasks are already being targeted by major software applications to accelerate or eliminate cumbersome elements in the workflow. And as with most new technologies, it can result in improved creative output and/or cost savings. It really is an exciting time for VFX workflows!

Ongoing performance improvements to the computing infrastructure will continue to accelerate and democratize the highest-resolution workflows. Now more than ever, small shops and independents can access the computing power, tools and techniques that were previously available only to top-end studios. Additionally, virtualization techniques will allow flexible means to maximize the utilization and proliferation of workstation technology.

 

Carl Flygare, manager, Quadro Marketing, PNY
PNY provides tools for realtime raytracing, augmented reality and virtual reality with the goal of advancing VFX workflow creativity and productivity. PNY is Nvidia’s Quadro channel partner throughout North America, Latin America, Europe and India.

How will realtime raytracing play a role in workflows?
Budgets are getting tighter, timelines are contracting, and audience expectations are increasing. This sounds like a perfect storm, in the bad sense of the term, but with the right tools, it is actually an opportunity.

Realtime raytracing, based on Nvidia’s RTX technology and support from leading ISVs, enables VFX shops to fit into these new realities while delivering brilliant work. Whiteboarding a VFX workflow is a complex task, so let’s break it down by categories. In preproduction, specifically previz, realtime raytracing will let VFX artists present far more realistic and compelling concepts much earlier in the creative process than ever before.

This extends to the next phase, asset creation and character animation, in which models can incorporate essentially lifelike nuance, including fur, cloth, hair or feathers – or something else altogether! Shot layout, blocking, animation, simulation, lighting and, of course, rendering all benefit from additional iterations, nuanced design and the creative possibilities that realtime raytracing can express and realize. Even finishing, particularly compositing, can benefit. Given the applicable scope of realtime raytracing, it will essentially remake VFX workflows and overall film pipelines, and Quadro RTX series products are the go-to tools enabling this revolution.

How are game engines changing how VFX is done? Is this for everyone or just a select few?
Variety had a great article on this last May. ILM substituted realtime rendering and five 4K laser projectors for a greenscreen shot during a sequence from Solo: A Star Wars Story. This allowed the actors to perform in context — in this case, a hyperspace jump — but also allowed cinematographers to capture arresting reflections of the jump effect in the actors’ eyes. Think of it as “practical digital effects” created during shots, not added later in post. The benefits are significant enough that the entire VFX ecosystem, from high-end shops and major studios to independent producers, is using realtime production tools to rethink how movies and TV shows happen while extending their vision to realize previously unrealizable concepts or projects.

Project Sol

How do ML and AR play a role in your tool? And are you supporting OpenXR 1.0? What about Pixar’s USD?
Those are three separate but somewhat interrelated questions! ML (machine learning) and AI (artificial intelligence) can contribute by rapidly denoising raytraced images in far less time than would be required by letting a given raytracing algorithm run to conclusion. Nvidia enables AI denoising in OptiX 5.0 and is working with a broad array of leading ISVs to bring ML/AI-enhanced realtime raytracing techniques into the mainstream.

OpenXR 1.0 was released at Siggraph 2019. Nvidia (among others) is supporting this open, royalty-free and cross-platform standard for VR/AR. Nvidia is now providing VR enhancing technologies, such as variable rate shading, content adaptive shading and foveated rendering (among others), with the launch of Quadro RTX. This provides access to the best of both worlds — open standards and the most advanced GPU platform on which to build actual implementations.

Pixar and Nvidia have collaborated to make Pixar’s USD (Universal Scene Description) and Nvidia’s complementary MDL (Materials Definition Language) software open source in an effort to catalyze the rapid development of cinematic quality realtime raytracing for M&E applications.

Project Sol

What is next on the horizon for VFX?
The insatiable desire of VFX professionals, and audiences, to explore edge-of-the-envelope VFX will increasingly be met by realtime raytracing based on the actual behavior of light and real materials, by increasingly sophisticated shader technology and by new mediums like VR and AR that open up new creative possibilities and entertainment experiences.

AI, specifically DNNs (deep neural networks) of various types, will automate many repetitive VFX workflow tasks, allowing creative visionaries and artists to focus on realizing formerly impossible digital storytelling techniques.

One obvious need is increasing the resolution at which VFX shots are rendered. We’re in a 4K world, but many films are still finished at 2K, primarily because of the VFX. 8K is unleashing the abilities (and changing the economics) of cinematography, so expect increasingly powerful realtime rendering solutions, such as Quadro RTX (and successor products when they come to market), along with amazing advances in AI, to allow the VFX community to innovate in tandem.

 

Chris Healer, CEO/CTO/VFX supervisor, The Molecule 
Founded in 2005, The Molecule creates bespoke VFX imagery for clients worldwide. Over 80 artists, producers, technicians and administrative support staff collaborate at its New York City and Los Angeles studios.

What film or show inspired you to work in VFX?
I have to admit, The Matrix was a big one for me.

Are game engines affecting how you work or how you will work?
Game engines are coming, but the talent pool is a challenge and the bridge is hard to cross … a realtime artist doesn’t have the same mindset as a traditional VFX artist. The last small percentage of completion on a shot can cancel out any gains made by working in a game engine.

What about realtime raytracing?
I am amazed at this technology, and as a result bought stock in Nvidia, but the software has to get there. It’s a long game, for sure!

How have AR/VR and ML/AI affected your workflows?
I think artists are thinking more about how images work and how to generate them. There is still value in a plain-old four-cornered 16:9 rectangle that you can make the most beautiful image inside of.

AR, VR, ML, etc., are not that, to be sure. I think VR got skipped over in all the hype. There’s way more to explore in VR, and that will inform AR tremendously. It is going to take a few more turns to find a real home for all of this.

What trends have you been seeing? Cloud workflows? What else?
Everyone is rendering in the cloud. The biggest problem I see now is lack of a UBL model that is global enough to democratize it. UBL = usage-based licensing. I would love to be able to render while paying by the second or minute at large or small scales. I would love for Houdini or Arnold to be rentable on a Satoshi level … that would be awesome! Unfortunately, it is each software vendor that needs to provide this, which is a lot to organize.

The Uncanny Valley. Where are we now?
We saw in the recent Avengers film that Mark Ruffalo was in it. Or was he? I totally respect the Uncanny Valley, but within the complexity and context of VFX, this is not my battle. Others have to sort this one out, and I commend the artists who are working on it. Deepfake and Deeptake are amazing.

Can you name some recent projects?
We worked on Fosse/Verdon, but more recent stuff, I can’t … sorry. Let’s just say I have a lot of processors running right now.

 

Matt Bach and William George, lab technicians, Puget Systems 
Puget Systems specializes in high-performance custom-built computers — emphasizing each customer’s specific workflow.

Matt Bach

William George

What are some of today’s VFX trends?
Matt Bach: There are so many advances going on right now that it is really hard to identify specific trends. However, one of the most interesting to us is the back and forth between local and cloud rendering.

Cloud rendering has been progressing for quite a few years and is a great way to get a nice burst of rendering performance when you are in a crunch. However, there have been big improvements in GPU-based rendering with technology like Nvidia’s OptiX. Because of these, you no longer have to spend a fortune to have a local render farm, and even a relatively small investment in hardware can often move the production bottleneck away from rendering to other parts of the workflow. Of course, this technology should make its way to the cloud at some point, but as long as these types of advances keep happening, the cloud is going to continue playing catch-up.

A few other trends that we are keeping our eyes on are the growing use of game engines, motion capture suits and realtime markerless facial tracking in VFX pipelines.

Realtime raytracing is becoming more prevalent in VFX. What impact does realtime raytracing have on system hardware, and what do VFX artists need to be thinking about when optimizing their systems?
William George: Most realtime raytracing requires specialized computer hardware, specifically video cards with dedicated raytracing functionality. Raytracing can be done on the CPU and/or normal video cards as well, which is what render engines have done for years, but not quickly enough for realtime applications. Nvidia is the only game in town at the moment for hardware raytracing on video cards with its RTX series.

Nvidia’s raytracing technology is available on its consumer (GeForce) and professional (Quadro) RTX lines, but which one to use depends on your specific needs. Quadro cards are specifically made for this kind of work, with higher reliability and more VRAM, which allows for the rendering of more complex scenes … but they also cost a lot more. GeForce, on the other hand, is more geared toward consumer markets, but the “bang for your buck” is incredibly high, allowing you to get several times the performance for the same cost.

In between these two is the Titan RTX, which offers very good performance and VRAM for its price, but due to its fan layout, it should only be used as a single card (or at most in pairs, if used in a computer chassis with lots of airflow).

Another thing to consider is that if you plan on using multiple GPUs (which is often the case for rendering), the size of the computer chassis itself has to be fairly large in order to fit all the cards, power supply, and additional cooling needed to keep everything going.

How are game engines changing or impacting VFX workflows?
Bach: Game engines have been used for previsualization for a while, but we are starting to see them being used further and further down the VFX pipeline. In fact, there are already several instances where renders directly captured from game engines, like Unity or Unreal, are being used in the final film or animation.

This is getting into speculation, but I believe that as the quality of what game engines can produce continues to improve, it is going to drastically shake up VFX workflows. The fact that you can make changes in real time, as well as use motion capture and facial tracking, is going to dramatically reduce the amount of time necessary to produce a highly polished final product. Game engines likely won’t completely replace more traditional rendering for quite a while (if ever), but it is going to be significant enough that I would encourage VFX artists to at least familiarize themselves with the popular engines like Unity or Unreal.

What impact do you see ML/AI and AR/VR playing for your customers?
We are seeing a lot of work being done for machine learning and AI, but a lot of it is still on the development side of things. We are starting to get a taste of what is possible with things like Deepfakes, but there is still so much that could be done. I think it is too early to really tell how this will affect VFX in the long term, but it is going to be exciting to see.

AR and VR are cool technologies, but it seems like they have yet to really take off, in part because designing for them takes a different way of thinking than traditional media, but also in part because there isn’t one major platform that’s an overwhelming standard. Hopefully, that is something that gets addressed over time, because once creative folks really get a handle on how to use the unique capabilities of AR/VR to their fullest, I think a lot of neat stories will be told.

What is next on the horizon for VFX?
Bach: The sky is really the limit due to how fast technology and techniques are changing, but there are two things in particular that will be very interesting to watch play out.

First, we are hitting a point where ethics (“With great power comes great responsibility” and all that) is a serious concern. With how easy it is to create highly convincing Deepfakes of celebrities or other individuals, even for someone who has never used machine learning before, I believe that there is the potential of backlash from the general public. At the moment, every use of this type of technology has been for entertainment or other legitimate purposes, but the potential to use it for harm is too significant to ignore.

Something else I believe we will start to see is “VFX for the masses,” similar to how video editing used to be a purely specialized skill, but now anyone with a camera can create and produce content on social platforms like YouTube. Advances in game engines, facial/body tracking for animated characters and other technologies that remove a number of skills and hardware barriers for relatively simple content are going to mean that more and more people with no formal training will take on simple VFX work. This isn’t going to impact the professional VFX industry by a significant degree, but I think it might spawn a number of interesting techniques or styles that might make their way up to the professional level.

 

Paul Ghezzo, creative director, Technicolor Visual Effects
Technicolor and its family of VFX brands provide visual effects services tailored to each project’s needs.

What film inspired you to work in VFX?
At a pretty young age, I fell in love with Star Wars: Episode IV – A New Hope and learned about the movie magic that was developed to make those incredible visuals come to life.

What trends have you been seeing? USD? Rendering in the cloud? What do you feel is important?
USD will help structure some of what we currently do, and cloud rendering is an incredible source to use when needed. I see both of them maturing and being around for years to come.

As for other trends, I see new methods of photogrammetry and HDRI photography/videography providing datasets for digital environment creation and capturing lighting content; performance capture (smart 2D tracking and manipulation or 3D volumetric capture) for ease of performance manipulation or layout; and even post camera work. New simulation engines are creating incredible and dynamic sims in a fraction of the time, and all of this comes together on the video card, streamlining the creation of the end product. In many ways it might reinvent what can be done, but it might take a few cutting-edge shows to embrace and perfect the recipe and show its true value.

Production cameras tethered to digital environments for live set extensions are also coming of age, and with realtime rendering becoming a viable option, I can imagine it will only be a matter of time before LED walls become the new greenscreen. Can you imagine a live-action set extension that parallaxes, distorts and is exposed in the same way as its real-life foreground? How about adding explosions, bullet hits or even an armada of spaceships landing in the BG, all on cue? I imagine this will happen in short order. Exciting times.

Are game engines affecting how you work or how you will work in the future?
Game engines have affected how we work. The speed and quality they offer are undoubtedly game changing, but they don’t always create the desired elements and AOVs that are typically needed in TV/film production.

They are also creating a level of competition that is spurring other render engines to be competitive and provide a similar or better solution. I can imagine that our future will use Unreal/Unity engines for fast turnaround productions like previz and stylized content, as well as for visualizing virtual environments and digital sets as realtime set extensions and a lot more.

Snowfall

What about realtime raytracing? How will that affect VFX and the way you work?
GPU rendering has single-handedly changed how we render and what we render with. A handful of GPUs and a GPU-accelerated render engine can equal or surpass a CPU farm that’s several times larger and much more expensive. In VFX, iterations equal quality, and if multiple iterations can be completed in a fraction of the time — and with production time usually being finite — then GPU-accelerated rendering equates to higher quality in the time given.

There are a lot of hidden variables to that equation (change of direction, level of talent provided, work ethic, hardware/software limitations, etc.), but simply put, if you can hit the notes as fast as they are given and not have to wait hours for a render farm to churn out a product, then clearly the faster an iteration can be provided, the more iterations can be produced, allowing for a higher-quality product in the time given.
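
As a back-of-the-envelope illustration of the iterations-equal-quality point, the short sketch below counts how many review iterations fit into a fixed window at two hypothetical per-iteration render times; the numbers are invented for the example, not benchmarks.

# Hypothetical illustration: iterations that fit into a fixed window before a review.
WINDOW_HOURS = 10  # time available before the next client review

def iterations(render_minutes_per_pass: float) -> int:
    # Whole iterations that fit into the window at a given per-pass render time.
    return int(WINDOW_HOURS * 60 // render_minutes_per_pass)

print(iterations(90))  # a 90-minute farm render allows 6 iterations
print(iterations(12))  # a 12-minute GPU-accelerated render allows 50 iterations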

How have AR or ML affected your workflows, if at all?
ML and AR haven’t significantly affected our current workflows yet … but I believe they will very soon.

One aspect of AR/VR/MR that we occasionally use in TV/film production is previzing environments, props and vehicles, which lets everyone in production and on set/location see what the greenscreen will be replaced with, enabling greater communication and understanding among the directors, DPs, gaffers, stunt teams, SFX and talent. I can imagine that AR/VR/MR will only become more popular as a preproduction tool, allowing productions to front-load and approve all aspects of production well before the camera is loaded and the clock is running on cast and crew.
Machine learning is on the cusp of general usage, but it currently seems to be used by productions with lengthy schedules that can benefit from development teams building those toolsets. There are tasks that ML will undoubtedly revolutionize, but it hasn’t affected our workflows yet.

The Uncanny Valley. Where are we now?
Making the impossible possible … That *is* what we do in VFX. Looking at everything from Digital Emily in 2008 to Thanos and Hulk in Avengers: Endgame, we’ve seen what can be done, and the Uncanny Valley will likely remain, but only on productions that can’t afford the time or cost of flawless execution.

Can you name some recent projects?
Big Little Lies, Dead to Me, NOS4A2, True Detective, Veep, This Is Us, Snowfall, The Loudest Voice, and Avengers: Endgame.

 

James Knight, virtual production director, AMD 
AMD is a semiconductor company that develops computer processors and related technologies for M&E as well as other markets. Its tools include Ryzen and Threadripper.

What are some of today’s VFX trends?
Well, certainly the exploration for “better, faster, cheaper” keeps going. Faster rendering, so our community can accomplish more iterations in a much shorter amount of time, seems to be something I’ve heard the whole time I’ve been in the business.

I’d surely say the virtual production movement (or on-set visualization) is gaining steam, finally. I work with almost all the major studios in my role, and all of them, at a minimum, have speeding up post and blending it with production on their radar; many have virtual production departments.

How are game engines changing how VFX are done? Is this for everyone or just a select few?
I would say game engines are where most of the innovation comes from these days. Think about Unreal, for example. Epic created Fortnite, and the revenue from that must be astonishing; they’re not going to sit on their hands. The feature film and TV post/VFX business benefits from gaming consumers’ demand for higher-resolution, more photorealistic images in real time. That gets passed on to our community by eliminating guesswork on set when framing partial or completely CG shots.

It should be for everyone or most, because the realtime and post production time savings are rather large. I think many still have a personal preference for what they’re used to. And that’s not wrong, if it works for them, obviously that’s fine. I just think that even in 2019, use of game engines is still new to some … which is why it’s not completely ubiquitous.

How do ML or AR play a role in your tool? Are you supporting OpenXR 1.0? What about Pixar’s USD?
Well, it’s more the reverse. With our new Rome and Threadripper CPUs, we’re powering AR. Yes, we are supporting OpenXR 1.0.

What is next on the horizon for VFX?
Well, the demand for VFX is increasing, not the opposite, so the pursuit of faster photographic reality is perpetually in play. That’s good job security for me at a CPU/GPU company, as we have a way to go to properly bridge the Uncanny Valley completely, for example.

I’d love to say lower-cost CG is part of the future, but then look at the budgets of major features — they’re not exactly falling. The dance of Moore’s law will more than likely be in effect forever, with momentary huge leaps in compute power — like with Rome and Threadripper — drawing amazement for a period. Then, when someone sees the new, expanded size of their sandpit, they fill it and go, “I now know what I’d do if it was just a bit bigger.”

I am invested in and fascinated by the future of VFX, but I think it goes hand in hand with great storytelling. If we don’t have great stories, then directing and artistry innovations don’t properly get noticed. Look at the top 20 highest-grossing films in history … they’re all fantasy. We all want to be taken away from our daily lives and immersed in a beautiful, realistic, VFX-intense fictional world for 90 minutes, so we’ll be forever pushing the boundaries of rigging, texturing, shading, simulations, etc. To put my finger on exactly what’s next, I’d say I happen to know of a few amazing things that are coming, but sadly, I’m not at liberty to say right now.

 

Michel Suissa, managing director of pro solutions, The Studio-B&H 
The Studio-B&H provides hands-on experience to high-end professionals. Its Technology Center is a fully operational studio with an extensive display of high-end products and state-of-the-art workflows.

What are some of today’s VFX trends?
AI, ML, NN (GAN) and realtime environments

Will realtime raytracing play a role in how the tools you provide work?
It already does with most relevant applications in the market.

How are game engines changing how VFX are done? Is this for everyone or just a select few?
Realtime game engines are becoming more mainstream with every passing year, and they are becoming fairly accessible to a number of disciplines within different target markets.

What is next on the horizon for VFX?
New pipeline architectures that will rely on different implementations (traditional and AI/ML/NN) and mixed infrastructures (local and cloud-based).

What trends have you been seeing? USD? Rendering in the cloud? What do you feel is important?
AI, ML and realtime environments. New cloud toolsets. Prominence of neural networks and GANs. Proliferation of convincing “deepfakes” as a proof of concept for the use of generative networks as resources for VFX creation.

What about realtime raytracing? How will that affect VFX workflows?
RTX is changing how most people see their work being done. It is also changing expectations about what it takes to create and render CG images.



The Uncanny Valley. Where are we now?
AI and machine learning will help us get there. Perfection still remains too costly. The amount of time and resources required to create something convincing is prohibitive for the large majority of the budgets.

 

Marc Côté, CEO, Real by Fake 
Real by Fake services include preproduction planning, visual effects, post production and tax-incentive financing.

What film or show inspired you to work in VFX?
George Lucas’ Star Wars and Indiana Jones (Raiders of the Lost Ark). For Star Wars, I was a kid and I saw this movie. It brought me to another universe. Star Wars was so inspiring even though I was too young to understand what the movie was about. The robots in the desert and the spaceships flying around. It looked real; it looked great. I was like, “Wow, this is amazing.”

Indiana Jones because it was a great adventure; we really visit the worlds. I was super-impressed by the action, by the way it was done. It was mostly practical effects, not really visual effects. Later on I realized that in Star Wars, they were using robots (motion control systems) to shoot the spaceships. And as a kid, I was very interested in robots. And I said, “Wow, this is great!” So I thought maybe I could use my skills and what I love and combine it with film. So that’s the way it started.

What trends have you been seeing? What do you feel is important?
The trend right now is using realtime rendering engines. It’s coming on pretty strong. The game companies who build engines like Unity or Unreal are offering a good product.

It’s a bit of a hack to use these tools for rendering or in production at this point. They’re great for previz, and they’re great for generating realtime environments and realtime playback. But having the capacity to change or modify imagery with the director during the process of finishing is still not easy. But it’s a very promising trend.

Rendering in the cloud gives you a very rapid capacity, but I think it’s very expensive. You also have to download and upload 4K images, so you need a very big internet pipe. So I still believe in local rendering — either with CPUs or GPUs. But cloud rendering can be useful for very tight deadlines or for small companies that want to achieve something that’s impossible to do with the infrastructure they have.
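
To put rough numbers on that “big internet pipe” point, here is a small, illustrative calculation; the frame size and link speed are assumptions for the example, not measurements from Real by Fake’s pipeline.

# Illustrative transfer math for moving a 4K sequence to and from the cloud.
FRAME_MB  = 50    # assumed size of one 4K EXR frame with a few AOVs, in megabytes
FRAMES    = 2400  # 100 seconds of footage at 24fps
LINK_MBPS = 500   # assumed sustained upload/download bandwidth, in megabits per second

total_megabits = FRAME_MB * FRAMES * 8
hours_each_way = total_megabits / LINK_MBPS / 3600

print(f"{FRAME_MB * FRAMES / 1024:.0f} GB per direction, "
      f"about {hours_each_way:.1f} hours each way at {LINK_MBPS} Mbps")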

My hope is that AI will minimize repetition in visual effects. For example, in keying. We key multiple sections of the body, but we get keying errors in plotting or transparency or in the edges, and they are all a bit different, so you have to use multiple keys. AI would be useful to define which key you need to use for every section and do it automatically and in parallel. AI could be an amazing tool to be able to make objects disappear by just selecting them.

Pixar’s USD is interesting. The question is: Will the industry take it as a standard? It’s like anything else. Kodak invented DPX, and it became the standard through time. Now we are using EXR. We have different software, and having exchange between them will be great. We’ll see. We have FBX, which is a really good standard right now. It was built by Kaydara, the Montreal company behind Filmbox, whose technology eventually ended up at Autodesk. So we’ll see. The demand and the companies who build the software — they will be the ones who take it up or not. A big company like Pixar has the advantage of other companies using it.

The last trend is remote access. The internet is now allowing us to connect cross-country, like from LA to Montreal or Atlanta. We have a sophisticated remote infrastructure, and we do very high-quality remote sessions with artists who work from disparate locations. It’s very secure and very seamless.

What about realtime raytracing? How will that affect VFX and the way you work?
I think we have pretty good raytracing compared to what we had two years ago. I think it’s a question of performance, and of making it user-friendly in the application so it’s easy to light with natural lighting, without having to fake the bounces to get two or three of them. I think it’s coming along very well and quickly.

Sharp Objects

So what about things like AI/ML or AR/VR? Have those things changed anything in the way movies and TV shows are being made?
My feeling right now is that we are getting into an era where I don’t think you’ll have enough visual effects companies to cover the demand.

Every show has visual effects. It can be a complete character, like a Transformer, or a movie from the Marvel Universe where the entire film is CG. Or it can be the huge number of invisible effects that are starting to appear in virtually every show. You need capacity to get all this done.

AI can help minimize repetition so artists can work more on the art and what is being created. This will accelerate and give us the capacity to respond to what’s being demanded of us. They want a faster, cheaper product, and they want the quality to be as high as a movie’s.

The only scenario where we are looking at using AR is when we are filming. For example, you need to have a good camera track in real time, and then you want to be able to quickly add a CGI environment around the actors so the director can make the right decision in terms of the background or interactive characters who are in the scene. The actors will not see it unless they have a monitor or a pair of glasses or something that can show them the result.

So AR is a tool to be able to make faster decisions when you’re on set shooting. This is what we’ve been working on for a long time: bringing post production and preproduction together. To have an engineering department who designs and conceptualizes and creates everything that needs to be done before shooting.

The Uncanny Valley. Where are we now?
In terms of the environment, I think we’re pretty much there. We can create an environment that nobody will know is fake. Respectfully, I think our company Real by Fake is pretty good at doing it.

In terms of characters, I think we’re still not there. I think the game industry is helping a lot to push this. I think we’re on the verge of having characters look as close as possible to live actors, but if you’re in a closeup, it still feels fake. For mid-ground and long shots, it’s fine. You can make sure nobody will know. But I don’t think we’ve crossed the valley just yet.

Can you name some recent projects?
Big Little Lies and Sharp Objects for HBO, Black Summer for Netflix and Brian Banks, an indie feature.

 

Jeremy Smith, CTO, Jellyfish Pictures
Jellyfish Pictures provides a range of services including VFX for feature film, high-end TV and episodic animated kids’ TV series and visual development for projects spanning multiple genres.

What film or show inspired you to work in VFX?
Forrest Gump really opened my eyes to how VFX could support filmmaking. Seeing Tom Hanks interact with historic footage (e.g., John F. Kennedy) was something that really grabbed my attention, and I remember thinking, “Wow … that is really cool.”

What trends have you been seeing? What do you feel is important?
The use of cloud technology is really empowering “digital transformation” within the animation and VFX industry. The result of this is that there are new opportunities that simply wouldn’t have been possible otherwise.

Jellyfish Pictures uses burst rendering into the cloud, extending our capacity and enabling us to take on more work. In addition to cloud rendering, Jellyfish Pictures were early adopters of virtual workstations, and, especially after Siggraph this year, it is apparent to see that this is the future for VFX and animation.

Virtual workstations promote a flexible and scalable way of working, with global reach for talent. This is incredibly important for studios to remain competitive in today’s market. As well as the cloud, formats such as USD are making it easier to exchange data with others, which allows us to work in a more collaborative environment.

It’s important for the industry to pay attention to these, and similar, trends, as they will have a massive impact on how productions are carried out going forward.

Are game engines affecting how you work, or how you will work in the future?
Game engines are offering ways to enhance certain parts of the workflow. We see a lot of value in the previz stage of the production. This allows artists to iterate very quickly and helps move shots onto the next stage of production.

What about realtime raytracing? How will that affect VFX and the way you work?
The realtime raytracing from Nvidia (as well as GPU compute in general) offers artists a new way to iterate and help create content. However, with recent advancements in CPU compute, we can see that “traditional” workloads aren’t going to be displaced. The RTX solution is another tool that can be used to assist in the creation of content.

How have AR/VR and ML/AI affected your workflows, if at all?
Machine learning has the power to really assist certain workloads. For example, it’s possible to use machine learning to assist a video editor by cataloging speech in a certain clip. When a director says, “find the spot where the actor says ‘X,’” we can go directly to that point in time on the timeline.

 In addition, ML can be used to mine existing file servers that contain vast amounts of unstructured data. When mining this “dark data,” an organization may find a lot of great additional value in the existing content, which machine learning can uncover.
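
As a small, hypothetical illustration of the speech-cataloging idea above: once an ML transcription pass has produced word-level timestamps, “find the spot where the actor says X” reduces to a simple lookup. The transcript data below is invented rather than output from any particular speech-to-text tool.

# Hypothetical word-level transcript produced by a speech-to-text pass (times in seconds).
transcript = [
    {"start": 12.4, "end": 12.9, "word": "find"},
    {"start": 13.0, "end": 13.3, "word": "the"},
    {"start": 13.4, "end": 14.1, "word": "briefcase"},
]

def find_word(word: str):
    """Return the (start, end) timecodes where the word is spoken."""
    return [(w["start"], w["end"]) for w in transcript if w["word"].lower() == word.lower()]

print(find_word("briefcase"))  # -> [(13.4, 14.1)]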

The Uncanny Valley. Where are we now?
With recent advancements in technology, the Uncanny Valley is narrowing; however, it is still there. We see more digital humans in cinema than ever before (the digital re-creation of Peter Cushing in Rogue One: A Star Wars Story was a main character), and I fully expect to see more advances as time goes on.

Can you name some recent projects?
Our latest credits include Solo: A Star Wars Story, Captive State, The Innocents, Black Mirror, Dennis & Gnasher: Unleashed! and Floogals Seasons 1 through 3.

 

Andy Brown, creative director, Jogger 
Jogger Studios is a boutique visual effects studio with offices in London, New York and LA. With capabilities in color grading, compositing and animation, Jogger works on a variety of projects, from TV commercials and music videos to projections for live concerts.

What inspired you to work in VFX?
First of all, my sixth form English project was writing treatments for music videos to songs that I really liked. You could do anything you wanted to for this project, and I wanted to create pictures using words. I never actually made any of them, but it planted the seed of working with visual images. Soon after that I went to university in Birmingham in the UK. I studied communications and cultural studies there, and as part of the course, we visited the BBC Studios at Pebble Mill. We visited one of the new edit suites, where they were putting together a story on the inquiry into the Handsworth riots in Birmingham. It struck me how these two people, the journalist and the editor, could shape the story and tell it however they saw fit. That’s what got me interested on a critical level in the editorial process. The practical interest in putting pictures together developed from that experience and all the opportunities that opened up when I started work at MPC after leaving university.

What trends have you been seeing? What do you feel is important?
Remote workstations and cloud rendering are both really interesting. They’re giving us more opportunities to work with clients across the world using our resources in LA, SF, Austin, NYC and London. I love the concept of a centralized remote machine room that runs all of your software for all of your offices and allows you scaled rendering in an efficient and seamless manner. The key part of that sentence is seamless. We’re doing remote grading and editing across our offices so we can share resources and personnel, giving the clients the best experience that we can without the carbon footprint.

Are game engines affecting how you work or how you will work in the future?
Game engines are having a tremendous effect on the entire media and entertainment industry, from conception to delivery. Walking around Siggraph last month, seeing what was not only possible but practical and available today using gaming engines, was fascinating. It’s hard to predict industry trends, but the technology felt like it will change everything. The possibilities on set look great, too, so I’m sure it will mean a merging of production and post production in many instances.

What about realtime raytracing? How will that affect VFX and the way you work?
Faster workflows and less time waiting for something to render have got to be good news. It gives you more time to experiment and refine things.

Chico for Wendy’s

How have AR/VR or ML/AI affected your workflows, if at all?
Machine learning is making its way into new software releases, and the tools are useful. Anything that makes it easier to get where you need to go on a shot is welcome. AR, not so much. I viewed the new Mac Pro sitting on my kitchen work surface through my phone the other day, but it didn’t make me want to buy it any more or less. It feels more like something that we can take technology from rather than something that I want to see in my work.

I’d like 3D camera tracking and facial tracking to be realtime on my box, for example. That would be a huge time-saver in set extensions and beauty work. Anything that makes getting a perfect key easier would also be great.

The Uncanny Valley. Where are we now?
It always used to be “Don’t believe anything you read.” Now it’s, “Don’t believe anything you see.” I used to struggle to see the point of an artificial human, except for resurrecting dead actors, but now I realize the ultimate aim is suppression of the human race and the destruction of democracy by multimillionaire despots and their robot underlings.

Can you name some recent projects?
I’ve started prepping for the apocalypse, so it’s hard to remember individual jobs, but there’s been the usual kind of stuff — beauty, set extensions, fast food, Muppets, greenscreen, squirrels, adding logos, removing logos, titles, grading, finishing, versioning, removing rigs, Frankensteining, animating, removing weeds, cleaning runways, making tenders into wings, split screens, roto, grading, polishing cars, removing camera reflections, stabilizing, tracking, adding seatbelts, moving seatbelts, adding photos, removing pictures and building petrol stations. You know, the usual.

 

James David Hattin, founder/creative director, VFX Legion 
Based in Burbank and British Columbia, VFX Legion specializes in providing episodic shows and feature films with an efficient approach to creating high-quality visual effects.

What film or show inspired you to work in VFX?
Star Wars was my ultimate source of inspiration for doing visual effects. Many of the effects in the movies didn’t make sense to me as a six-year-old, but I knew that this was the next best thing to magic. Visual effects create a wondrous world where everyday people can become superheroes, leaders of a resistance or rulers of a 5th-century dynasty. Watching X-wings fly over the surface of a space station the size of a small moon was exquisite. I also learned, much later on, that the visual effects that we couldn’t see were as important as what we could see.

I had already been steeped in visual effects with Star Trek — phasers, spaceships and futuristic transporters. Models hung from wires on a moon base convinced me that we could survive on the moon as it broke free from orbit. All of this fueled my budding imagination. Exploring computer technology and creating alternate realities, CGI and digitally enhanced solutions have been my passion for over a quarter of a century.

What trends have you been seeing? What do you feel is important?
More and more of the work is going to happen inside a cloud structure. That is definitely something that is being pressed on very heavily by the tech giants like Google and Amazon that rule our world. There is no Moore’s law for computers anymore. The prices and power we see out of computers are almost plateauing. The technology is now in the world of optimizing algorithms or rendering with video cards. It’s about getting bigger, better effects out more efficiently. Some companies are opting to run their entire operations in the cloud or co-located server locations. This can theoretically free up the workers to be in different locations around the world, provided they have solid, low-latency, high-speed internet.

When Legion was founded in 2013, the best way around cloud costs was to have on-premises servers and workstations that supported global connectivity. It was a cost control issue that has benefitted the company to this day, enabling us to bring a global collective of artists and clients into our fold in a controlled and secure way. Legion works in what we consider a “private cloud,” eschewing the costs of egress from large providers and working directly with on-premises solutions.

Are game engines affecting how you work or how you will work in the future?
Game engines are perfect for previsualization in large, involved scenes. We create a lot of environments and invisible effects. For the larger bluescreen shoots, we can build out our sets in Unreal Engine, previsualizing how the scene will play for the director or DP. This helps get everyone on the same page when it comes to how a particular sequence is going to be filmed. It’s a technique that also helps the CG team focus on adding details to the areas of a set that we know will be seen. When the schedule is tight, the assets are camera-ready by the time the cut comes to us.

What about realtime raytracing via Nvidia’s RTX? How will that affect VFX and the way you work?
The type of visual effects we create for feature films and television shows involves a lot of layers and requires technology that provides efficient, comprehensive compositing solutions. Many of the video card rendering engines like OctaneRender, Redshift and V-Ray RT are limited when it comes to what they can create with layers. They often have issues with getting what is called a “back to beauty,” in which the sum of the render passes equals the final render. However, the workarounds we’ve developed enable us to achieve the quality we need. Realtime raytracing is a fantastic technology that will someday be an ideal fit for our needs. We’re keeping an eye out for it as it evolves and becomes more robust.
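
For readers unfamiliar with the term, “back to beauty” simply means that the individual render passes (AOVs) sum back to the final beauty render, so a compositor can regrade each pass and still reconstruct the original image. Below is a minimal sketch of that idea, assuming additive light AOVs loaded as floating-point arrays; the pass names are illustrative and not tied to any particular renderer.

import numpy as np

def rebuild_beauty(passes):
    """Sum additive light AOVs back into a beauty image."""
    return np.sum(np.stack(list(passes.values())), axis=0)

# Illustrative float AOVs (height x width x RGB); in production these would be EXR layers.
h, w = 4, 4
passes = {
    "diffuse_direct":   np.random.rand(h, w, 3).astype(np.float32),
    "diffuse_indirect": np.random.rand(h, w, 3).astype(np.float32),
    "specular":         np.random.rand(h, w, 3).astype(np.float32),
    "emission":         np.random.rand(h, w, 3).astype(np.float32),
}

beauty = rebuild_beauty(passes)
print(beauty.shape)  # (4, 4, 3); in practice this sum is compared against the renderer's own beauty pass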

How have AR/VR or ML/AI affected your workflows, if at all?
AR has been in the wings of the industry for a while. There’s nothing specific that we would take advantage of. Machine learning has been introduced a number of times to solve various problems. It’s a pretty exciting time for these things. One of our partner contacts, who left to join Facebook, was keen to try a number of machine learning tricks for a couple of projects that might have come through, but we didn’t get to put it to the test. There’s an enormous amount of power to be had in machine learning, and I think we are going to see big changes over the next five years in that field and how it affects all of post production.

The Uncanny Valley. Where are we now?
Climbing up the other side, not quite at the summit for daily use. As long as the character isn’t a full normal human, it’s almost indistinguishable from reality.

Can you name some recent projects?
We create visual effects on an ongoing basis for a variety of television shows that include How to Get Away with Murder, DC’s Legends of Tomorrow, Madam Secretary and The Food That Built America. Our team is also called upon to craft VFX for a mix of movies, from the groundbreaking feature film Hardcore Henry to recently released films such as Ma, SuperFly and After.

MAIN IMAGE: Good Morning Football via Chapeau Studios.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 


Tips from a Flame Artist: things to do before embarking on a VFX project

By Andy Brown

I’m creative director and Flame artist at Jogger Studios in Los Angeles. We are a VFX and finishing studio and sister company to Cut+Run, which has offices in LA, New York, London, San Francisco and Austin. As an experienced visual effects artist, I’ve seen a lot in my time in the industry, and not just what ends up on the screen. I’m also an Englishman living in LA.

I was asked to put together some tips to help make your next project a little bit easier, and in the process, I remembered many things I had forgotten. I hope these tips help!

1) Talk to production.

2) Trust your producers.

3) Don’t assume anyone (including you) knows anything.

4) Forget about the money; it’s not your job. Well, it’s kind of your job, but in the context of doing the work, it’s not.

5) Read everything that you’ve been sent, then read it again. Make sure you actually understand what is being asked of you.

6) Make a list of questions that cover any uncertainty you might have about any aspect of the project you’re bidding for. Then ask those questions.

7) Ask production to talk to you if they have any questions. It’s better to get interrupted on your weekend off than for the client to ask her friend Bob, who makes videos for YouTube. To be fair to Bob, he might have a million subscribers, but Bob isn’t doing the job, so please, keep Bob out of it.

8) Remember that what the client thinks is “a small amount of cleanup” isn’t necessarily a small amount of cleanup.

9) Bring your experience to the table. Even if it’s your experience in how not to do things.

10) If you can do some tests, then do some tests. Not only will you learn something about how you’re going to approach the problem, but it will show your client that you’re engaged with the project.

11) Ask about the deliverables. How many aspect ratios? How many versions? Then factor in the slated, the unslated and the generics and take a deep breath.

12) Don’t believe that a lift (a cutdown edit) is a lift is a lift. It won’t be a lift.

13) Make sure you have enough hours in your bid for what you’re being asked to do. The hours are more important than the money.

14) Attend the shoot. If you can’t attend the shoot, then send someone to the shoot … someone who knows about VFX. And don’t be afraid to pipe up on the shoot; that’s what you’re there for. Be prepared to make suggestions on set about little things that will make the VFX go more smoothly.

15) Give yourself time. Don’t get too frustrated that you haven’t got everything perfect in the first day.

16) Tackle things methodically.

17) Get organized.

18) Make a list.

19) Those last three were all the same thing, but that’s because it’s important.

20) Try to remember everyone’s names. Write them down. If you can’t remember, ask.

21) Sit up straight.

22) Be positive. You blew that already by being too English.

23) Remember we all want to get the best result that we can.

24) Forget about the money again. It’s not your job.

25) Work hard and don’t get pissed off if someone doesn’t like what you’ve done so far. You’ll get there. You always do.

26) Always send WIPs to the editor. Not only do they appreciate it, but they can add useful info along the way.

27) Double-check the audio.

28) Double-check for black lines at the edges of frame. There’s no cutoff anymore. Everything lives on the internet.

29) Check your spelling. Even if you spelled it right, it might be wrong. Colour. Realise. Etcetera. Etc.

 


VFX Roundtable: Trends and inspiration

By Randi Altman

The world of visual effects is ever-changing, and the speed at which artists are being asked to create new worlds or make things invisible is moving full speed ahead. How do visual effects artists (and studios) prepare for these challenges, and what inspired them to get into this business? We reached out to a small group of visual effects pros working in television, commercials and feature films to find out how they work and what gets their creative juices flowing.

Let’s find out what they had to say…

KEVIN BAILLIE, CEO, ATOMIC FICTION
What do you wish clients would know before jumping into a VFX-heavy project?
The core thing for every filmmaking team to recognize is that VFX isn’t a “post process.” Careful advance planning and a tight relationship between the director, production designer, stunt team and cinematographer will yield a far superior result much more cost effectively.

In the best-looking and best-managed productions I’ve ever been a part of, the VFX team is the first department to be brought onto the show and the last one off. It truly acts as a partner in the filmmaking process. After all, once the VFX post phase starts, it’s effectively a continuation of production — with there being a digital corollary to every single department on set, from painters to construction to costume!

What trends in VFX have impressed you the most over the last year or two, and how are they affecting your work?
The move to cloud computing is one of the most exciting trends in VFX. The cloud is letting smaller teams do much bigger work, allowing bigger teams to do things that have never been seen before, and will ultimately result in compute resources no longer being a constraint on the creative process.

Cloud computing allowed Atomic Fiction to play alongside the most prestigious companies in the world, even when we were just 20 people. That capability has allowed us to grow to over 200 people, and now we’re able to take the lead vendor position on A-list shows. It’s remarkable what dynamic and large-scale infrastructure in the cloud has enabled Atomic to accomplish.

How many years have you been working in VFX, and what project inspired you to get into this line of work?
I grew up in Seattle and started dabbling in 3D as a hobby when I was 14 years old, having been immensely inspired by Jurassic Park. Soon thereafter, I started working at Microsoft in the afternoons, developing visual content to demonstrate their upcoming technologies. I was fortunate enough to land a job with Lucasfilm right after graduating high school, which was 20 years ago at this point! I’ve been lucky enough to work with many of the directors that inspired me as a child, such as George Lucas and Robert Zemeckis, and modern pioneers like JJ Abrams.

Looking back on my career so far, I truly feel like I’ve been living the dream. I can’t wait for what’s next in this exciting, ever-changing business.

ROB LEGATO, OSCAR-WINNING VFX SUPERVISOR, SECOND UNIT DIRECTOR, SECOND UNIT DIRECTOR OF PHOTOGRAPHY
What do you wish clients would know before jumping into a VFX-heavy project?
It takes a good bit of time to come up with a plan that will ensure a sustainable attack when making the film. They need to ask someone in authority, “What does it take to do it?” and then make a reasonable plan. Everyone wants to do a great job all the time, and if they could maneuver the schedule — even with the same timeframe — it could be a much less frustrating job.

It happens time and time again: someone comes up with a budget and a schedule that doesn’t really fit the task and forces you to live with it. That makes for a very difficult assignment that gets done because of the hard work of the people who are in the trenches.

What trends in VFX have impressed you the most over the last year or two, and how are they affecting your work?
For me, it’s how realistic you can make something. The rendering capabilities — like what we did on Jungle Book with the animals — are so sophisticated that they fool your eye into believing it’s real. Once you do that, you’ve opened the magic door that allows you to do anything with a tremendous amount of fidelity. You can make good movies without it being a special-venue movie or a VFX movie. The computer power and rendering abilities — along with the incredible artistic talent pool that we have created over the years — are very impressive, especially for me, coming from a more traditional camera background. I tended to shy away from computer-generated things because they never had the authenticity you would have wanted.

Then there is the happy accident of shooting something, where an angle you wouldn’t have considered appears as you look through the camera; now you can do that in the computer, which I find infinitely fascinating. This is where all the virtual cinematography things I’ve done in the past come in to help create that happy accident.

How many years have you been working in VFX, and what project inspired you to get into this line of work?
I’ve been working in VFX since about 1984. Visual effects wasn’t my dream. I wanted to make movies: direct, shoot and be a cameraman and editor. I fell into it and then used it as an avenue to allow me to create sequences in films and commercials.

The reason you go to movies is to see something you have never seen before, and for me that was Close Encounters. The first time I saw the mothership in Close Encounters, it wasn’t just an effect, it became an art form. It was beautifully realized and it made the story. Blade Runner was another where it’s no longer a visual effect, it’s filmmaking as an art form.

There was also my deep appreciation for Doug Trumbull, whose quality of work was so high it transcended being a visual effect or a photographic effect.

LISA MAHER, VP OF PRODUCTION, SHADE VFX 
What do you wish clients would know before jumping into a VFX-heavy project?
That it’s less expensive in the end to have a VFX representative involved on the project from the get-go, just like all the other filmmaking craft-persons that are represented. It’s getting better all the time though, and we are definitely being brought on board earlier these days.

At Shade, we specialize in invisible or supporting VFX. So-called invisible effects are often much harder to pull off. It’s all about integrating digital elements that support the story but don’t pull the audience out of a scene. Being able to assist in the planning stages of a difficult VFX sequence often results in the filmmakers achieving what they envisioned more readily. It also helps tremendously to keep the costs in line with what was originally budgeted. And it goes without saying that it makes for happier VFX artists, as they receive photography captured with their best interests in mind.

What trends in VFX have impressed you the most over the last year or two, and how are they affecting your work?
I would say the most exciting development affecting visual effects is the explosion of opportunities offered by the OTT content providers such as Netflix, Amazon, HBO and Hulu. Shade primarily served the feature film market up to three years ago, but with the expanding needs of television, our offices in Los Angeles and New York are now evenly split between film and TV work.

We often find that the film work is still being done at the good old reliable 2K resolution while our TV shows are always 4K plus. The quality and diversity of projects being produced for TV now make visual effects a much more buoyant enterprise for a mid-sized company and also a real source of employment for VFX professionals who were previously so dependent on big studio generated features.

How many years have you been working in VFX, and what project inspired you to get into this line of work?
I’ve been working in visual effects for close to 20 years now. I grew up in Ireland; as a child, the world of film, and especially images of sunny California, was always a huge draw for me. Those images helped me survive the many grey and rainy days of the Irish climate. I can’t point to one project that inspired me to get into filmmaking — there have been so many — just a general love for storytelling, I guess. Films like Westworld (the 1973 version), Silent Running, Cinema Paradiso, Close Encounters of the Third Kind, Blade Runner and, of course, the original Star Wars were truly inspirational.

DAVID SHELDON-HICKS, CO-FOUNDER/EXECUTIVE CREATIVE DIRECTOR, TERRITORY STUDIO
What do you wish clients would know before jumping into a VFX-heavy project?
The craft and care and love that go into VFX are often forgotten in the “business” of it all. As a design-led studio that straddles art and VFX departments in our screen graphics and VFX work, we prefer to work with the director from the preproduction phase. This ensures that all aspects of our work are integrated into story and world building.

The talent and gut instinct, the eye for composition and lighting, the appreciation of form, the choreography of movement and, most notably, the appreciation of the classics are so pertinent to the art of VFX and are undersold in conversations about shot counts, pipelines, bidding and numbers of artists. Bringing the filmmakers into the creative process has to be the way forward for an art form still finding its own voice.

What trends in VFX have impressed you the most over the last year or two, and how are they affecting your work?
The level of concept art and postviz coming through from VFX studios is quite staggering. It gets back to my point above about bringing filmmakers into the VFX dialogue, with VFX artists concentrating on world building and narrative expansion. It’s so exciting to see concept art and postviz reaching a new level of sophistication and influence in the filmmaking process.

How many years have you been working in VFX, and what project inspired you to get into this line of work?
I have been working professionally in VFX for over 15 years. My love of VFX and creativity in general came from the moment I picked up a pencil and imagined new possibilities. But once I cut my film teeth designing screen graphics on Casino Royale, followed by The Dark Knight, I left my freelance days behind and co-founded Territory Studio. Our first film as a studio was Prometheus, and working with Ridley Scott was a formative experience that has influenced our own design-led approach to motion graphics and VFX, which has established us in the industry and seen the studio grow and expand.

MARK BREAKSPEAR, VFX SUPERVISOR, SONY PICTURES IMAGEWORKS
What do you wish clients would know before jumping into a VFX-heavy project?

Firstly, I think the clients I have worked with have always been extremely cognizant of the key areas affecting VFX heavy projects and consequently have built frameworks that help plan and execute these mammoth shows successfully.

Ironically, it’s the smaller shows that sometimes have the surprising “gotchas” in them. The big shows come with built-in checks and balances in the form of experienced people who are looking out for the best interests of the project and how to navigate the many pitfalls that can make the VFX costs increase.

Smaller shows sometimes don’t allow enough discussion and planning time for the VFX components in pre-production, which could result in the photography not being captured as well as it could have been. Everything goes wrong from there.

So, when I approach any show, I always look for the shots that are going to be underestimated and try to give them the attention they need to succeed. You can get taken out of a movie by a bad driving comp as much as you can by a monster space goat biting a planet in half.

What trends in VFX have impressed you the most over the last year or two, and how are they affecting your work?
I think there are several red herrings out there right now… the big one being VR. To me, VR is like someone has invented teleportation, but it only works on feet.

So, right now, it’s essentially useless and won’t make creating VFX any easier or make the end result any more spectacular. I would like to see VR used to aid artists working on shots. If you could comp in VR, I could see that being a good way to help create more complex and visually thrilling shots. The user interface world is really the key area where VR can be of benefit.

Suicide Squad

I do think, however, that AR is very interesting. The real world, with added layers of information, is a hugely powerful prospect. Imagine looking at a building in any city of the world, and the apartments for sale in it are highlighted in realtime, with facts like cost, square footage, etc. all right there in front of you.

How does AR benefit VFX? An artist could use AR to get valuable info about shots just by looking at them. How often do we look at a shot and ask, “What lens was this?” AR could have all that metadata ready to display at any point on any shot.

How many years have you been working in VFX, and what project inspired you to get into this line of work?
I’ve been in VFX for 25 years. When I started, VFX was not really a common term. I came to this industry through the commercial world… as a compositor on TV shows and music videos. Lots of (as we would call it now) visual effects, but done in a world bereft of pipelines and huge cloud-based renderfarms.

I was never inspired by a specific project to get into the visual effects world. I was a creative kid who also liked the sciences. I liked to work out why things ticked, and also draw them, and sometimes try to draw them with improvements or updates as I could imagine. It’s a common set of passions that I find in my colleagues.

I watched Star Wars and came out wondering why there were black lines around some of the space ships. Maybe there’s your answer… I was inspired by the broken parts of movies, rather than being swept up in the worlds they portrayed. After all that effort, time and energy… why did it still look wrong? How can I fix it for next time?

CHRIS HEALER, CEO/CTO/VFX SUPERVISOR, THE MOLECULE
What do you wish clients would know before jumping into a VFX-heavy project?

Plan, plan, plan… previs, storyboarding and initial design are crucial to VFX-heavy projects. The mindset should ideally be that most (or all) decisions have been made before the shoot starts, as opposed to a “we’ll figure it out in post” approach.

What trends in VFX have impressed you the most over the last year or two, and how are they affecting your work?
Photogrammetry, image modeling and data capture are so much more available than ever before. Instead of an expensive Lidar rig that only produces geometry without color, there are many new ways to capture the color and geometry of the physical world, even using a simple smartphone or DSLR.
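
As a rough example of how accessible this has become, the sketch below drives the open-source COLMAP toolkit’s automatic reconstruction from Python to turn a folder of overlapping smartphone or DSLR photos into a colored 3D reconstruction. It assumes COLMAP is installed and on the PATH, and the folder names are hypothetical; it illustrates the general workflow, not The Molecule’s pipeline.

import subprocess
from pathlib import Path

def reconstruct(image_dir, workspace_dir):
    """Run COLMAP's automatic reconstructor on a folder of overlapping photos."""
    Path(workspace_dir).mkdir(parents=True, exist_ok=True)
    subprocess.run(
        [
            "colmap", "automatic_reconstructor",
            "--workspace_path", str(workspace_dir),  # output: sparse/dense reconstruction
            "--image_path", str(image_dir),          # input: overlapping stills
        ],
        check=True,
    )

# Hypothetical usage with a folder of on-set reference photos.
reconstruct("set_photos", "recon_workspace")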

How many years have you been working in VFX, and what project inspired you to get into this line of work?
I’ve been doing VFX now for over 16 years. I would have to say that The Matrix (part 1) was really inspiring when I saw it the first time, and it made clear that VFX as an art form was coming and available to artists of all kinds all over the world. Previous to that, VFX was very difficult to approach for the average student with limited resources.

PAUL MARANGOS, SENIOR VFX FLAME ARTIST, HOOLIGAN
What do you wish clients would know before jumping into a VFX-heavy project?

The more involved I can be in the early stages, the more I can educate clients on all of the various effects they could use, as well as technical hurdles to watch out for. In general, I wish more clients involved the VFX guys earlier in the process — even at the concepting and storyboarding stages — because we can consult on a range of critical matters related to budgets, timelines, workflow and, of course, bringing the creative to life with the best possible quality.

Fortunately, more and more agencies realize the value of this. For instance, with a recent campaign Hooligan finished for Harvoni, we were able to plan shots for a big scene featuring hundreds of lanterns in the sky, which required lanterns of various sizes for every angle that Elma Garcia’s production team shot. With everything well storyboarded, and under the direction of Elma, who left no detail unnoticed, we managed to create a spectacular display of lantern composites for the commercial.

We were also involved early on in a campaign for MyHeritage DNA (above) via creative agency Berlin Cameron, featuring spoken word artist Prince Ea and directed by Jonathan Augustavo of Skunk. The spot was devised as if projected on a wall, so we mapped the motion graphics onto the 3D environments.

What trends in VFX have impressed you the most over the last year or two, and how are they affecting your work?
Of course VR and 360 live TV shows are exciting, but augmented reality is what I find particularly interesting — mixing the real world with graphics and video all around you. The interactivity of both of these emerging platforms presents an endless area of growth, as our industry is on the cusp of a sea change that hasn’t quite yet begun to directly affect my day-to-day.

Meanwhile, at Hooligan, we’re always educating ourselves on the latest software, tools and technological trends in order to prepare for the future of media and entertainment — which is wise if you want to be relevant 10 years from now. For instance, I recently attended the TED conference, where Chris Milk spoke on the birth of virtual reality as an art form. I’m also seeing advances in Google Cardboard, which is making the platform affordable, too. Seeing companies open up VR departments is an exciting step for us all, and it shows the vision for the future of advertising.

How many years have you been working in VFX, and what project inspired you to get into this line of work?
I have worked in VFX for 25 years. After initially studying fine art and graphic design, the craft aspect of visual effects really appealed to me. Seeing special effects genius Ray Harryhausen’s four-minute skeleton fight was a big inspiration. He rear-projected footage of the actual actors and then combined the shots to make a realistic skeleton-Argonaut battle. It took him over four and a half months to shoot the stop-motion animation.

Main Image: Deadpool/Atomic Fiction.


The hybridization of VFX and motion design

Plus the rise of the small studio

By Miguel Lee

There has long been a dichotomy between motion graphics and VFX because they have traditionally serviced very different creative needs. However, with the democratization of tools and the migration of talent between these two pillars of the CG industry, a new “hybrid” field of content creators is emerging. And for motion designers like myself, this trend reflects the many exciting things taking place in our industry today, especially as content platforms increase at an incredible rate with smartphones and new LED technologies, not to mention a renaissance in the fields of VR, AR, and projection mapping, to name a few.

Miguel Lee

I’ve always likened the comparison of motion graphics and VFX to the Science Club and the Art Club we remember at school. VFX has its roots in an objective goal: to seamlessly integrate CG into the narrative or spectacle in a convincing and highly technical way. Motion graphics, on the other hand, can be highly subjective. One studio, for instance, might produce a broadcast package laden with 3D animations, whereas another studio will opt for a more minimal, graphical approach to communicating the same brand. A case can typically be made for either direction.

So where do the new “hybrid” studios fit into this analogy? Let’s call them the “Polymath Club,” given their abilities to tap into the proverbial hemispheres of the brain — the “left” representing their affinity for the tools, and the “right” driving the aesthetics and creative. With this “Polymath” mentality, CG artists are now able to generate work that was once only achievable by a large team of artists and technicians. Concurrently, it is influencing the hybridization of the CG industry at large, as VFX companies build motion design teams in-house, while motion graphics studios increasingly incorporate VFX tools into their own workflow.

As a result, we’ve seen a proliferation of the “lean-and-mean” production studio over the last few years. Their rise is the direct result of the democratization of our industry, where content creation tools have significantly evolved in terms of technology, accessibility and reliability. One such example is the dramatic increase in render power with the rise of third-party GPU renderers, such as Otoy’s Octane and Redshift, which have essentially made 3D photorealism more attainable. Cloud rendering solutions have also popped up for conventional and third-party renderers, which mitigates the need to build out expensive renderfarms — a luxury still reserved for companies of a certain size.

Otoy’s Octane being used on one of Midnight Sherpa’s jobs.

Motion artists, too, have become far more adventurous in employing VFX-specific software like Houdini, which has simultaneously become far more accessible and egalitarian without any compromise to its capability. Maxon’s Cinema 4D, the heavily favored 3D application in motion graphics, has had a long tradition of implementing efficient software-specific workflows to bridge its ecosystem to other programs. Coding and script-based animation has also found a nice home in the Motion repertoire to create inventive and efficient ways to generate content. Even the barrier of entry for creating VR and AR content has eased quite a bit with the latest releases of both the Unity and Unreal engines.

Aside from lower overhead costs, the horizontal work structure of the “lean-and-mean” model has also cultivated truly collaborative environments where artists from trans-disciplinary backgrounds can work together in more streamlined ways than an oversized team can. In many cases, these smaller studios are forced to develop workflows that more effectively reflect their team’s makeup — and these systems often enjoy more success because they reflect the styles, and even the personalities, of the core teams that institute them.

The nature of being small also pushes you to innovate and develop greater efficiencies, rather than just throwing more bodies at the problem. These solutions and workflows are often baked into the core team and rolled out on future projects. Smaller studios also have a reputation for cultivating talent. Junior artists and interns are often put on a wider range of projects and into more roles out of necessity to fulfill the various needs of production, whereas they are typically relegated to a single role at larger studios and oftentimes are not afforded the opportunity to branch out. This conversely creates an incentive to hire artists with the intent of developing them over a long term.

There are downsides, of course, to being small — chief among them is how quickly these studios reach physical capacity, at which point jobs have to be turned down. Still, the proliferation of small studios means more voices in the landscape of content, which in turn contributes directly to the greater evolution of design as a whole.

Now that the playing field has been technologically equalized, the difference between failure and success for many of these companies lies in whether or not they can craft a voice that is unique amongst their peers in an increasingly saturated landscape.

Main Image: Audi – Photorealism is more achievable in a streamlined production pipeline.


Miguel Lee is partner/creative director at LA’s Midnight Sherpa, a boutique creative studio for brands and entertainment.


The importance of on-set VFX supervision

By Karen Maierhofer

Some contend that having a visual effects supervisor present on set during production is a luxury; others deem it a necessity. However, few, if any, see it as unnecessary.

Today, more and more VFX supes can be found alongside directors and DPs during filming, advising and problem-solving, with the goal of saving valuable time and expense during production and, later, in post.

John Kilshaw

“A VFX supervisor is on set and in pre-production to help the director and production team achieve their creative goals. By having the supervisor on set, they gain the flexibility to cope with the unexpected and allow for creative changes in scope or creative direction,” says Zoic Studios creative director John Kilshaw, a sought-after VFX supervisor known for his collaborative creative approach.

Kilshaw, who has worked at a number of top VFX studios including ILM, Method and Double Negative, has an impressive resume of features, among them The Avengers, Pirates of the Caribbean: On Stranger Tides, Mission: Impossible – Ghost Protocol and various Harry Potter films. More recently, he was visual effects supervisor for the TV series Marvel’s The Defenders and Iron Fist.

Weta Digital’s Erik Winquist (Apes trilogy, Avatar, The Hobbit: An Unexpected Journey) believes the biggest contribution a VFX supervisor can make while on set comes during prep. “Involving the VFX supervisor as early as possible can only mean less surprises during principal photography. This is when the important conversations are taking place between the various heads of departments. ‘Does this particular effect need to be executed with computer graphics, or is there a way to get this in-camera? Do we need to build a set for this, or would it be better for the post process to be greenscreen? Can we have practical smoke and air mortars firing debris in this shot, or is that going to mess with the visual effects that have to be added behind it later?’”

War for the Planet of the Apes via Weta Digital

According to Winquist, who is VFX supervisor on Rampage (2018), currently in post production, having a VFX supe around can help clear up misconceptions in the mind of the director or other department heads: “No, putting that guy in a green suit doesn’t make him magically disappear from the shot. Yes, replacing that sky is probably relatively straightforward. No, modifying the teeth of that actor to look more like a vampire’s while he’s talking is actually pretty involved.”

Both Kilshaw and Winquist note that it is not uncommon to have a VFX supervisor on set whenever there are shots that include visual effects. In fact, Winquist has not heard of a major production that didn’t have a visual effects supervisor present for principal photography. “From the filmmaker’s point of view, I can’t imagine why you would not want to have your VFX supervisor there to advise,” he says. “Film is a collaborative medium. Building a solid team is how you put your vision up on the screen in the most cost-effective way possible.”

At Industrial Light & Magic, which has a long list of major VFX film credits, it is a requirement. “We always have a visual effects supervisor on set, and we insist on it. It is critical to our success on a project,” says Lindy De Quattro, VFX supervisor at ILM. “Frankly, it terrifies me to think about what could happen without one present.”

Lindy De Quattro

For some films, such as Evan Almighty, Pacific Rim, Mission: Impossible — Ghost Protocol and the upcoming Downsizing, De Quattro spent an extended period on set, while for many others she was only present for a week or two while big VFX scenes were shot. “No matter how much time you have put into planning, things rarely go entirely as planned. And someone has to be present to make last-minute adjustments and changes, and deal with new ideas that might arise on that day — it’s just part of the creative process,” she says.

For instance, while working on Pacific Rim, director Guillermo del Toro would stay up until the wee hours of the night making new boards for what would be shot the following day, and the next morning everyone would crowd around his hand-drawn sketches and notebooks as he said, “OK, this is what we are shooting.” So the VFX team had to be prepared and do everything in its power to help ensure that the director’s vision became reality on screen.

“I cannot imagine how they would have gone about setting up the shots if they didn’t have a VFX supervisor on set. Someone has to be there to be sure we are gathering the data needed to recreate the environment and the camera move in post, to be sure these things, and the greenscreens, are set up correctly so the post is successful,” De Quattro says. If you don’t know to put in greenscreen, you may be in a position where you cannot extract the foreground elements the way you need to, she warns. “So, suddenly, two days of an extraction and composite turns into three weeks of roto and hair replacement, and a bunch of other time-consuming and expensive work because it wasn’t set up properly in initial photography.”

Sometimes, a VFX supervisor ends up running the second unit, where the bulk of the VFX work is done, if the director is at a different location with the first unit. This was the case recently when De Quattro was in Norway for the Downsizing shoot. She ended up overseeing the plate unit and did location scouting with the DP each morning to find shots or elements that could be used in post. “It’s not that unusual for a VFX supervisor to operate as a second unit director and get a credit for that work,” she adds.

Kilshaw often finds himself discussing the best way of achieving the show’s creative goals with the director and producer while on set. Also, he makes sure that the producer is always informed of changes that will impact the budget. “It becomes very easy for people to say, ‘we can fix this in post.’ It is at this time when costs can start to spiral, and having a VFX supervisor on set to discuss options helps stop this from happening,” he adds. “At Zoic, we ensure that the VFX supervisor is also able to suggest alternative approaches that may help directors achieve what they need.”

Erik Winquist

According to Winquist, the tasks a VFX supe does on set depends on the size of the budget and crew. In a low-budget production, a person might be doing a myriad of different tasks themselves: creating previs and techvis, working with the cinematographer and key grip concerning greenscreen or bluescreen placement, placing tracking markers, collecting camera information for each setup or take, shooting reference photos of the set, helping with camera or lighting placement, gathering lighting measurements with gray and chrome reference spheres — basically any information that will help the person best execute the visual effects requirements of the shot. “And all the while being available to answer questions the director might have,” he says.

If the production has a large budget, the role is more about spreading out and managing those tasks among an on-set visual effects team: data wranglers, surveyors, photographers, coordinators, PAs, perhaps a motion capture crew, “so that each aspect of it is done as thoroughly as possible,” says Winquist. “Your primary responsibility is being there for the director and staying in close communication with the ADs so that you or your team are able to get all the required data from the shoot. You only have one chance to do so.”

The benefits of on-set VFX supervision are not just for those working on big-budget features, however. As Winquist points out, the larger the budget, the more demanding the VFX work and the higher the shot count, therefore the more important it is to involve the VFX supervisor in the shoot. “But it could also be argued that a production with a shoestring budget also can’t afford to get it wrong or be wasteful during the shoot, and the best way to ensure that footage is captured in a way that will make for a cost-effective post process is to have the VFX supervisor there to help.”

Kilshaw concurs. “Regardless of whether it is a period drama or superhero show, whether you need to create a superpower or a digital version of 1900 New York, the advantages of visual effects and visual effects supervision on set are equally important.”

While De Quattro’s resume is overflowing with big-budget VFX films, she has also assisted on smaller projects where a VFX supervisor’s presence was also critical. She recalls a commercial shoot, one that prompted her to question the need for her presence. However, production hit a snag when a young actor was unable to physically accomplish a task during multiple takes, and she was able to step in and offer a suggestion, knowing it would require just a minor VFX fix. “It’s always something like that. Even if the shoot is simple and you think there is no need, inevitably someone will need you and the input of someone who understands the process and what can be done,” she says.

De Quattro’s husband is also a VFX supervisor who is presently working on a non-VFX-driven Netflix series. While he is not on set every day, he is called when there is an effects shoot scheduled.

Mission: Impossible – Ghost Protocol

So, with so many benefits to be had, why would someone opt not to have a VFX supervisor on set? De Quattro assumes it is the cost. “What’s that saying, ‘penny wise and pound foolish?’ A producer thinks he or she is saving money by eliminating the line item of an on-set supervisor but doesn’t realize the invisible costs, including how much more expensive the work can be, and often is, on the back end,” she notes.

“On set, people always tell me their plans, and I find myself advising them not to bother building this or that — we are not going to need it, and the money saved could be better utilized elsewhere,” De Quattro says.

On Mission: Impossible, for example, the crew was filming a complicated underwater escape scene with Tom Cruise and finally got the perfect take, only for his emergency rig to become exposed. However, rather than have the actor go back into the frigid water for another take, De Quattro assured the team that the rig could be removed in post within the original scope of the VFX work. While most people are aware that this can be done now, having someone with the authority and knowledge to know it for sure was a relief, she says.

Despite their extensive knowledge of VFX, these supervisors all say they support the best tool for the job on set and, mostly, that is to capture the shot in-camera first. “In most instances, the best way to make something look real is to shoot it real, even if it’s ultimately just a small part of the final frame,” Winquist says. However, when factors conspire against that, whether it be weather, animals, extras, or something similar, “having a VFX supervisor there during the shoot will allow a director to make decisions with confidence.”

Main Image: Weta’s Erik Winquist on set for Planet of the Apes.

Transitioning from VFX artist to director

By Karen Maierhofer

It takes a certain type of person to be a director — someone who has an in-depth understanding of the production process; is an exceptional communicator, planner and organizer; who possesses creative vision; and is able to see the big picture where one does not yet exist. And those same qualities can be found in a visual effects or CG supervisor.

In fact, there are a number of former visual effects artists and supes who have made the successful transition to the director’s chair – Neill Blomkamp (District 9), Andrew Adamson (Shrek, Narnia), Carlos Saldanha (Ice Age, Rio) and Tim Miller (Deadpool), to name a few. And while VFX supervisors possess many of the skills necessary for directing, it is still relatively uncommon for them to bear that credit, whether it is on a feature film, television series, commercial, music video or other project.

Armen Kevorkian
Armen Kevorkian, VFX supervisor and executive creative director at Deluxe’s Encore, says, “It’s not necessarily a new trend, but it’s really not that common.”

Armen Kevorkian (flannel shirt) on set.

Kevorkian, who has a long list of visual effects credits on various television series — including two, Supergirl and The Flash, for which he has also directed episodes — has always wanted to direct but embraced VFX, winning an Emmy and three LEO Awards in addition to garnering multiple nominations for that work. “It’s all about filmmaking and storytelling. I loved what I was doing but always wanted to pursue directing, although I was not going to be pushy about it. If it happened, it happened.”

Indeed, it happened. And having the VFX experience gave Kevorkian the confidence and skills to handle being a director. “A VFX supervisor is often directing the second unit, which makes you comfortable with directing. When you direct an entire episode, though, it is not just about a few pieces; it’s about telling an entire story. That is something you learn to handle as you go.”

As a VFX supe, Kevorkian often was present from start to finish, and was able to see the whole preparation process of what worked and what didn’t. “With VFX, you are there for prep, shooting and post — the whole gamut. Not many other departments get to experience that,” he says.

When he was given the chance to direct an episode, Kevorkian was “the visual effects guy directing.” Luckily, he had worked with the actors on previous episodes in his VFX role and had a good relationship with them. “They were really supportive, and I couldn’t have done it without that, but I can see situations where you might be treated differently because your background is visual effects, and it takes more than that to tell a story and direct a full episode,” he adds.

Proving oneself can be scary, and Kevorkian has known others who directed one project and never did it again. Not so for Kevorkian, who has now directed three episodes of The Flash and one episode of Supergirl thus far, and will direct another Supergirl episode later this year.

While the episodes he has directed were not VFX-heavy, he foresees times when he will have to make a certain decision on the spot, and knowing that something can be fixed easily and less expensively in post, as opposed to wasting precious time trying to fix it practically, will be very helpful. “You are not asking the VFX guy, ‘Hey, is this going to work?’ You pretty much know the answer because of your background,” he explains.

Despite his turn directing, Kevorkian is still “the VFX guy” for the series. “I love VFX and also love directing,” he says, hoping to one day direct feature films. “A lot of people think they want to direct but don’t realize how difficult it can be,” he adds.

HaZ Dulull
Hasraf “HaZ” Dulull doesn’t see VFX artists turning director as being so unique anymore — “there are more of us now” — and recognizes the advantages such a background can bring to the new role.

“The type of films I make are considered high-concept sci-fi, which rely on VFX to help present the vision and tell the story. But it’s not just putting pretty pixels on screen as an artist that has helped me; it was also being in VFX management roles. This meant I spent a lot of time with TV showrunners and film producers on set and in the edit bay,” says Dulull. “I learned a lot from that, such as how to deal with producers, executive producers and timelines. And all the other exposure I got in my VFX management role helped me prep for directing/producing a film.”

Dulull has an extensive resume, having worked as a VFX artist on films such as The Dark Knight and Prince of Persia, before moving into a supervisor role on TV shows including Planet Dinosaur and America: The Story of Us, and then into a VFX producer role. While working in VFX, he created several short films, and one of them — Project Kronos — went viral and caught the attention of Hollywood producers. Soon after, Dulull directed his first feature, The Beyond, which will be released the first quarter of next year by Gravitas Ventures. Another, Origin Unknown, based on a story he wrote, will be released later in 2018 by Content.

Before making the transition to director, Dulull had to overcome the stigma of being a first-time director — despite the success three of his short films had online. At the time, “film investors and studios were not too keen on throwing money at me yet to make a feature.” Frustrated, he decided to take the plunge and used his savings to finance his debut feature film The Beyond, based on Project Kronos. That move later on caught the attention of some investors, who helped finance the remaining post budget.

For Dulull, his VFX background is a definite plus when it comes to directing. “When I say we can add a giant alien sphere in the sky while our character looks out of the car window, with helicopters zipping by, I can say it with confidence. Also, when financiers/producers look at the storyboards and mood boards and see the amount of VFX in there, they know they have a director who can handle that and use VFX smartly as a tool to tell the story. This is as opposed to a director who has no experience in VFX and whose production would probably end up costing more due to the lack of education and wrong decisions, or trial and error made on set and in post.”

The Beyond, courtesy of HaZ Film LTD.

Because of his VFX background, Dulull has learned to always shoot clean plates and not to encourage the DP to do zooms or whip pans when a scene has VFX elements. “For The Beyond, there are digital body replacements, and although this was not the same budget as Batman v Superman, we were still able to do it because all the camera moves were on sliders and we acquired a lot of data on the day of the shoot. In fact, I ensured I had budget to hire a tracking master on set who would gather all the data required to get an accurate object and camera track later in CG,” he says.

Dulull also plans for effects early in the production, making notes during the script stages concerning the VFX and researching ideas on how to achieve them so that the producers budget for them.

While on set, though, he focuses on the actors and HODs, and doesn’t get too involved with the VFX beyond showing actors a Photoshop mockup he might have done the night before a greenscreen shoot, to give them a sense of what will be occurring in the scene.

Yet, oftentimes Dulull’s artist side takes over in post. On The Beyond, he handled 75 to 80 percent of the work (mainly compositing), while CG houses and trusted freelancers did the CGI and rendering. “It was my baby and my first film, and I was a control freak on every single shot — the curse of having a VFX background,” he says. On his second feature, Origin Unknown, he found it easier to hand off the work — in this instance it was to Territory Studio.

“I still find I end up doing a lot of the key creative VFX scenes merely because there is no budget for it and basically because it was created during the editorial process — which means you can’t go and raise more money at this stage. But since I can do those ideas myself, I can come up with the concepts in the editorial process and pay the price with long nights and lots of coffee with support from Territory – but I have to ensure I don’t push the VFX studio to the breaking point with overages just because I had a creative burst of inspiration in the edit!” he says.

However, Dulull is confident that on his next feature he will be hands-off on the VFX and focused on the time-demanding duties of directing and producing, though he will still be involved in designing the VFX, working closely with Territory.

When it comes to outsourcing the VFX, knowing what the work costs helps keep that part of the budget from getting out of hand, Dulull says. And being able to offer up solutions or alternatives enables a studio to get a shot done faster and with better results.

Freddy Chavez Olmos
Freddy Chavez Olmos got the filmmaking/directing bug at an early age while recording horror-style home movies. Later, he found himself working in the visual effects industry in Vancouver, and counts many impressive VFX credits to his name: District 9, Pacific Rim, Deadpool, Chappie, Sin City 2 and the upcoming Blade Runner 2049. He also writes and directs projects independently, including the award-winning short films Shhh (2012) and Leviticus 24:20 (2016) — both in collaboration with VFX studio Image Engine — and R3C1CL4 (2017).

Working in visual effects, particularly compositing, has taught Olmos the artistic and technical sides of filmmaking during production and post, helping him develop a deeper understanding of the process and improving his problem-solving skills on set.

As more features rely on the use of VFX, having a director or producer with a clear understanding of that process has become almost necessary, according to Olmos. “It’s a process that requires constant feedback and clear communication. I’ve seen a lot of productions suffer visually and budget-wise due to a lack of decision-making in the post production process.”

Olmos has learned a number of lessons from VFX that he believes will help him on future directorial projects:
• Avoid last-minute changes.
• Don’t let too many cooks in the kitchen.
• Be clear on your feedback and use references when possible.
• If you can fix it on set, don’t leave it for post to handle.
• Always stay humble and give credit to those who help you.
• CG is time-consuming and expensive. If it doesn’t serve your story, don’t use it.
• Networking and professional relationships are crucial.
• Don’t become a pixel nitpicker. No one will analyze every single frame of your film unless you work on a Star Wars sequel. Your VFX crew will be more gracious to you, too.

Despite his VFX experience, Olmos, like others interviewed for this article, tries to use a practical approach first while in the director’s seat. Nevertheless, he always keeps the “VFX side of his brain open.”

For instance, the first short film he co-directed called for a full-body creature. “I didn’t want to go full CG with it because I knew we could achieve most of it practically, but I also understood the limitations. So we decided to only ‘digitally enhance’ what we couldn’t do on set and become more selective in our shot list,” he explains. “In the end, I was glad we worked as efficiently as we did on the project and didn’t have any throw-away work.”

Shhh film

While some former VFX artists/supervisors may find it difficult to hand off a project they directed to a VFX facility, Olmos says that as long as there is someone he trusts on set by his side, he is able to detach himself "from micromanaging that part." He does, however, like to be heavily involved in the storyboarding and previs processes whenever possible. "A lot of the changes happen during that stage, and I like giving freedom to the VFX supervisor on set to do what he thinks is best for the project," says Olmos.

"A few years ago, there were two VFX artists who became mainstream directors because they knew how to tell a good story using visual effects as a supporting platform (Neill Blomkamp, and Gareth Edwards of Godzilla and Rogue One). Now there is a similar wave of talented filmmakers with a VFX and animation background doing original short projects," says Olmos. "We have common interests, and I have become friends with a lot of them. I have no doubt they will end up doing big things in the near future."

David Mellor
David Mellor is the creative director of Framestore’s new Chicago office and a director with the studio’s production company Framestore Pictures. With a background in computer visualization and animation, he started out in a support role with the rendering team and eventually transitioned to commercials and music videos, working his way up to CG lead and head of the CG department in the studio’s New York office.

In that capacity, Mellor was exposed to the creative side and worked with directors and agencies, and that led to the creative director and director roles he now enjoys.

Mellor has directed spots for Chick-fil-A (VR and live action), Redd’s Wicked Apple, Chex Mix and a series for Qualcomm’s Snapdragon.

Without hesitation, Mellor credits his VFX experience for helping him prepare for directing in that it enables him to “see” the big picture and final result from a fragment of elements, giving him a more solid direction. “VFX supervisors have a full understanding of how to build a scene, how light and camera work, and what effect lensing has,” he says.

Additionally, VFX supervisors are prepared to react to a given situation, as things are always changing. They also have to be able to break down a shot in moments on set and run the whole project, from shoot through post to finish, through their head when asked a question by a director or DP. "So it gives you this very good instinct as a director and allows you to see beyond what's in front of you," Mellor says. "It also allows you to plan well and be creative while looking at the entire timeline of the project. 'Fix it in post' is no longer acceptable with everyone wanting more for less time/money."

And as projects become larger and incorporate more effects, directors like Mellor will be able to tackle them more efficiently and at a higher quality, knowing all that is needed to produce the final piece. He also values his ability to communicate and collaborate, skills that are necessary for effects supervisors on big VFX projects.

“Our career path to directing hasn’t been the traditional one, but we have more exposure working with the client from conception through to a project’s finish. That means collaboration is a big aspect for me, working toward the best result holistically within the parameters of time and budget.”

Still, Mellor believes the transition from VFX supervisor to director remains rare. One reason is that a person often becomes pigeonholed in a role.

While their numbers are still low, VFX artists/supervisors-turned-directors are making their mark across various genres, proving themselves capable and worthy of the much-deserved moniker of director, and in doing so, are helping to pave the way for others in visual effects roles.

Our Main Image: The Beyond, courtesy of HaZ Film LTD.

The Third Floor: Previs and postvis for Wonder Woman

To help realize the cinematic world of Warner Bros.’s Wonder Woman, artists at The Third Floor London, led by Vincent Aupetit, visualized key scenes using previs and postvis. Work spanned nearly two years, as the team collaborated with director Patty Jenkins and visual effects supervisor Bill Westenhofer to map out key action and visual effects scenes.

Previs was also used to explore story elements and to identify requirements for the physical shoot as well as visual effects. Following production, postvis shots with temp CG elements stood in for finals as the editorial cut progressed.

We checked in with previs supervisor Vincent Aupetit at The Third Floor London to find out more.

Wonder Woman is a good example of filmmaking that leveraged not just the technical, but also the creative advantages of previs. How can a director maximize the benefits of having a previs team?
Each project is different, with different needs and opportunities as well as creative styles, but for Wonder Woman our director worked very closely with us and got involved with previs and postvis as much as she could. Even though this was her first time using previs, she was open and enthusiastic and quickly recognized the possibilities. She engaged with us and used our resources to further develop the ideas she had for the story and action, including iconic moments she envisioned for the main character. Seeing the ideas she was after successfully portrayed as moving previs was exciting for her and motivating for us.

How do you ensure what is being visualized translates to what can be achieved through actual filming and visual effects?
We put a big emphasis on shooting methodology and helping with requirements for the physical shoot and visual effects work — even when we are not specifically doing techvis diagrams or schematics. We conceive previs shots from the start with a shooting method in mind to make sure no shots represented in previs would prove impossible to achieve down the line.

What can productions look to previs for when preparing for large-scale visual effects scenes?
Of course, previs can be an important guide in deciding what parts of sets to build, determining equipment, camera and greenscreen needs and having a roadmap of shots. The previs team is in a position to gather input across many departments — art department, camera department, stunt department and visual effects — and effectively communicate the vision and plan.

But another huge part of it is creating a working visual outline of what the characters are doing and what action is happening. If a director wants to try different narrative beats, or put them in a new order, they can do that in the previs world before committing to the shoot. If they want to do multiple iterations, it's possible to do that before embarking on production. All of this helps streamline complexities that are already there for intensive action and visual effects sequences.

On Wonder Woman, we had a couple of notable scenes, including the beach battle, where we combined previs, storyboards and fight tests to convey a sense of how the story and choreography would unfold. Another was the final battle in the third act of the film. It’s an epic 40 minutes that includes a lot of conceptual development. What is the form and shape of Ares, the movie’s antagonist, as he evolves and reveals his true god nature? What happens in each blow of his fight with Diana on the airfield? How do her powers grow, and what do those abilities look like? Previs can definitely help answer important questions that influence the narrative as well as the technical visuals to be produced.

How can directors leverage the postvis process?
Postvis has become more and more instrumental, especially as sequences go through editorial versions and evolving cuts. For Wonder Woman, the extensive postvis aided the director in making editorial choices when she was refining the story for key sequences.

Being able to access postvis during and after reshoots was very helpful as well. When you can see a more complete picture of the scene you have been imagining, with temp characters and backdrops in place, your decisions are much more informed.

How do you balance the ability to explore ideas and shots with the need to turn them around quickly?
This is one of the qualities of previs artists — we need to be both effective and flexible! Our workflow has to sustain and keep track of shots, versions and approvals. On Wonder Woman, our on-board previs editor literally did wonders keeping the show organized and reacting near instantaneously to director or visual effects supervisor requests.

The pace of the show, and the will to explore and develop with a passionate director, led to our producing an astonishing number of shots at a very rapid rate despite a challenging schedule. We also had a great working relationship, in which the client trusted us fully and we repaid that trust by meeting deliveries with a high level of professionalism and quality.