VFX Chat: Digital Domain discusses its work on ‘Maleficent’

By Randi Altman

The Disney hit movie train keeps rolling along, with audiences getting on board at every stop; this time it’s Maleficent’s turn. The studio has taken a classic animated villain, Maleficent, and put her into a live-action film, starring Angelina Jolie as the title character.

Directed by Robert Stromberg, the story is a “re-imagining” of the classic Sleeping Beauty, but from Maleficent’s perspective. There are battles and fairies and magical wings. All of which imply visual effects.

One of the houses called on to help was Digital Domain, which provided 540 shots for the film, more than originally planned; as the show went through some adjustments, that number grew.

Other studios contributed VFX work for Maleficent, most notably MPC, which provided a whopping 875 shots. MPC’s work included a full CG fairy world with a variety of environments and 15 different hero fairies, all with their own unique characteristics. MPC also created Maleficent’s castle, a 3D model layered with different brick, mortar and masonry effects to enhance detail and give a realistic look. You can check out more details of their work here.

Darren Hendler and Kelly Port.

postPerspective reached out to Digital Domain with some questions about their work on the film, and the team of DFX supervisor Darren Hendler, CG supervisor Jonathan Litt and VFX supervisor Kelly Port took part in a round robin of sorts, all contributing to each answer. Talk about teamwork!

Can you talk about creating the fairly complex pixies?
The key to building the pixies was to start by building photoreal CG versions of the actors, then transforming these digital actors into the pixies. This ensured our pixies always retained the exact likeness of the actors: each of their thousands of facial expressions, and the way their skin moved and looked.

The pixies’ faces are proportionally different from the actors’ faces. For example, they have bigger eyes, smaller chins and wider cheeks. Finding the right proportion changes for each pixie took quite a bit of experimentation and collaboration with Rob [the director] and Carey [Villegas, senior VFX supervisor on the film]. We would create multiple versions of each pixie face and then use it in tests for several weeks to “try on for size.” Flittle found her look fairly quickly, but both Knotgrass and Thistlewit took longer to settle on, and small tweaks were made to the faces throughout shot production.
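As a rough illustration of the idea of layering proportion changes onto a scanned likeness, per-region vertex deltas can be dialed up or down for each pixie. This is a generic sketch, not Digital Domain’s actual tooling; every name, shape and weight below is hypothetical.

```python
import numpy as np

def apply_proportion_deltas(neutral_verts, deltas, weights):
    """Blend stylization deltas (e.g. bigger eyes, smaller chin) onto a
    scanned neutral mesh. `deltas` maps a region name to a per-vertex
    offset array the same shape as the mesh."""
    result = neutral_verts.copy()
    for region, delta in deltas.items():
        result += weights.get(region, 0.0) * delta
    return result

# Illustrative only: random stand-ins for the actor scan and three
# stylization deltas, each dialed to a different strength per pixie.
n_verts = 5000
neutral = np.random.rand(n_verts, 3)
deltas = {
    "eyes_bigger":  np.random.randn(n_verts, 3) * 0.01,
    "chin_smaller": np.random.randn(n_verts, 3) * 0.01,
    "cheeks_wider": np.random.randn(n_verts, 3) * 0.01,
}
flittle = apply_proportion_deltas(
    neutral, deltas,
    {"eyes_bigger": 1.2, "chin_smaller": 0.8, "cheeks_wider": 1.0})
```

In a setup like this, “trying a face on for size” amounts to adjusting the per-region weights and re-rendering, which is why small tweaks could continue throughout shot production.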

How did you accomplish the facial performances?
The face is the hardest part of the character to create — to have it look real, especially when the character is life-like and looks like the actor. We spent a considerable amount of time with the actors, capturing high-resolution face scans in all of their facial expression poses.

During pre-production we shot each actor in ICT’s Lightstage X, which gives us a nearly perfect digital model of each face all the way down to the pore level. A base set of facial motion data — so-called FACS shapes — was captured separately at Disney Zurich.

Digital Domain has its own virtual production team, supervised by Gary Roberts, which was responsible for all facial and body performance capture during the shoot itself. During the motion capture shoot, the actors wear custom helmets with four cameras that capture their facial performances to the accuracy of about 200 points, based on dot patterns drawn on their faces.

For each shot, the dot-based performance is “solved” using custom software to derive the true high-resolution performance from the “recipe” of the high-resolution FACS and ICT data. Even with the facial solves, animation is still a huge component in bringing life and realism to the characters.
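Digital Domain’s solver is custom software, but the general idea of deriving a high-resolution performance from sparse markers can be sketched generically as a non-negative least-squares fit of FACS blendshape weights. The sketch below is illustrative only; the array shapes, marker counts and function names are assumptions, not their pipeline.

```python
import numpy as np
from scipy.optimize import nnls

def solve_facs_weights(marker_positions, neutral_markers, facs_marker_deltas):
    """Fit per-frame FACS blendshape weights so the weighted sum of shape
    deltas best reproduces the tracked helmet-camera markers.

    marker_positions:   (M, 3) tracked marker positions for one frame
    neutral_markers:    (M, 3) marker positions on the neutral face
    facs_marker_deltas: (S, M, 3) marker offsets for each of S FACS shapes
    """
    S, M, _ = facs_marker_deltas.shape
    A = facs_marker_deltas.reshape(S, M * 3).T        # (3M, S) basis matrix
    b = (marker_positions - neutral_markers).ravel()  # (3M,) observed offsets
    weights, _ = nnls(A, b)                           # non-negative fit
    return np.clip(weights, 0.0, 1.0)

# Illustrative frame with ~200 markers and a small FACS basis.
rng = np.random.default_rng(0)
neutral = rng.random((200, 3))
basis = rng.standard_normal((40, 200, 3)) * 0.01
truth = rng.random(40) * 0.5
observed = neutral + np.tensordot(truth, basis, axes=1)
w = solve_facs_weights(observed, neutral, basis)
```

The fitted weights then drive the full-resolution FACS shapes, which is why the sparse ~200-point capture can recover pore-level detail from the scans while still leaving room for animators to correct what the solve misses.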

At Digital Domain we have a team of facial animators so attuned to the nuance of human facial performances that they are able to tell when something is not exactly right. The animators then have to do a considerable amount of work to adjust the final animation since the solve is never perfect. There are often changes requested after initial animation that are necessary due to other constraints in the shot or the animation of the bodies.

What about the body movements?
We captured the actresses’ body motion during their performances on the mocap stage. In many of the shots their pixie characters were flying, so to help get them in character we had them suspended in stunt rigs, flying around and interacting with one another. The way the actors moved and flew differed from how the pixies would ultimately fly, but the body capture still proved very helpful in getting the actors’ motion cues.

What about Maleficent?
The digi-double was primarily used for flying shots or extreme stunt work that was not possible to do practically. We knew from the beginning that the Maleficent digi-double would be held to the highest scrutiny, so it was very important to match her to Angelina as closely as possible. We were fortunate to get very high-resolution scans of Angelina in full make-up and prosthetics. These scans were used as a base to build the digital Maleficent face.

When building her digital wardrobes, we deconstructed how her practical wardrobes were tailored and re-built them panel by panel in CG. This ensured that when we simulated them they moved, looked and felt like the real ones. To confirm our digital Maleficent matched Angelina as closely as possible, we created test shots with side-by-side comparisons of the two, so the digi-double would be indistinguishable in the CG shots. The wardrobe was also pushed through extreme wind simulations, such as when she is flying at hundreds of miles per hour through the fairy world.

What about the environments?
MPC was responsible for a large number of the environments in Maleficent. The most complicated environment Digital Domain was involved with was the setting for the Angelina Jolie reveal. This landscape was extremely complex, as it included a variety of different elements: rocky terrain, water, waterfalls, clouds, mist, trees and flowers. These elements were often generated by different Digital Domain teams and in different software packages, which made creating a seamless landscape all the more difficult. The water surfaces, waterfalls and mist clouds were created in Houdini and rendered in Mantra.

There are also several shots that have large whale-like creatures (called carriage faeries) from MPC jumping through the water like dolphins. For these shots we received Alembic geometry from MPC and simulated the water surfaces and spray in Digital Domain’s custom water solver. The final shots used deep compositing to put MPC’s renders into our final water and landscapes. This kind of tight integration between two companies in a single shot is becoming more and more common. The trees and foliage were mostly full 3D geometry, lit and rendered in V-Ray. We used custom layout tools to populate the environments with the large numbers of trees required. The cliffs were mostly 2.5D matte paintings since nearly every shot had a unique set of cliffs not used in any other shot.
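Digital Domain’s layout tools are proprietary, but the basic job they describe, scattering large numbers of tree instances across an environment with some variation and spacing control, can be sketched generically. Everything below (asset names, bounds, spacing) is made up for illustration.

```python
import random

def scatter_instances(bounds, count, variants, min_spacing, seed=42):
    """Randomly place instances inside a rectangular ground region,
    rejecting points that fall too close to an existing instance.

    bounds:      (xmin, xmax, zmin, zmax) ground-plane extents
    variants:    list of source asset names to choose from
    min_spacing: minimum distance between any two instances
    """
    rng = random.Random(seed)
    xmin, xmax, zmin, zmax = bounds
    placed, attempts = [], 0
    while len(placed) < count and attempts < count * 50:
        attempts += 1
        x, z = rng.uniform(xmin, xmax), rng.uniform(zmin, zmax)
        if all((x - px) ** 2 + (z - pz) ** 2 >= min_spacing ** 2
               for px, _, pz in (p["position"] for p in placed)):
            placed.append({
                "asset": rng.choice(variants),
                "position": (x, 0.0, z),
                "rotation_y": rng.uniform(0.0, 360.0),  # random facing
                "scale": rng.uniform(0.8, 1.2),         # size variation
            })
    return placed

# Hypothetical usage: a few thousand trees from three source assets.
trees = scatter_instances((0, 500, 0, 500), 2000,
                          ["oak_a", "oak_b", "birch_a"], min_spacing=4.0)
```

A production layout tool would add painting, density maps and terrain projection on top of this, but the output is the same kind of instance list a renderer such as V-Ray consumes.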

Can you talk about the tools you used? Did you have to write your own sims for any of the work?
At Digital Domain we favor using off-the-shelf software as much as possible, with custom tools and plug-ins to help us in cases where we aren’t able to do what we need to within the applications. For all of our modeling work we used a combination of Maya, Mudbox and ZBrush. For texturing we used a combination of Mari and Photoshop. All animation was generated in Maya, and we used V-Ray for our character and environment rendering.

Our water and clouds are all created in Houdini and rendered through Mantra. We have a proprietary fluid simulation engine, which we used for many of the environment shots that involved water simulation: waterfalls, creatures interacting with water and so on. We also have an in-house hair grooming tool called Sampson, which was used to create the pixies’ hairstyles and parts of their wardrobes.
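That fluid engine is proprietary, but as a toy illustration of the kind of height-field wave propagation commonly used for water surfaces, a greatly simplified version (not their solver) might look like this:

```python
import numpy as np

def step_heightfield(height, velocity, dt=0.05, c=2.0, damping=0.995):
    """One explicit step of a damped 2D wave equation on a height field,
    a toy stand-in for the surface simulation used in water shots."""
    # Discrete Laplacian of the height field (edge-clamped borders).
    padded = np.pad(height, 1, mode="edge")
    lap = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
           padded[1:-1, :-2] + padded[1:-1, 2:] - 4.0 * height)
    velocity = (velocity + dt * c * c * lap) * damping
    return height + dt * velocity, velocity

# Drop a single disturbance in the middle of the grid and let it ripple out.
h = np.zeros((128, 128))
v = np.zeros_like(h)
h[64, 64] = 1.0
for _ in range(200):
    h, v = step_heightfield(h, v)
```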

Jonathan Litt

What scene or sequence are you most proud of?
The scene where we meet the pixies for the first time. This scene was added very late in the schedule and has most [of our] hero shots. It also takes place in an MPC environment with other MPC characters, so our integration within the MPC environment and with their characters had to be seamless.

Each of our pixie shots had to pass through 10 departments, from initial actor capture through to final comp, and it was amazing to see how quickly we could turn shots around through all these groups once everyone was immersed in the characters.

 

