Sony Imageworks’ VFX work on Spider-Man: Homecoming

By Daniel Restuccio

With Sony’s Spider-Man: Homecoming getting ready to release digitally on September 26 and on 4K Ultra HD/Blu-ray, Blu-ray 3D, Blu-ray and DVD on October 17, we thought this was a great opportunity to talk about some of the film’s VFX.

Sony Imageworks has worked on every single Spider-Man movie in some capacity since Sam Raimi’s 2002 original. On Spider-Man: Homecoming, Imageworks worked mostly on the “third act,” which encompasses the warehouse, hijacked plane and beach destruction scenes. That meant delivering over 500 VFX shots, created by a team of more than 30 artists and compositors (peaking at around 200 at one point), and rendering out finished scenes at 2K.

All of the Imageworks artists used Dell R7910 workstations with 24-core Intel Xeon E5-2620 CPUs, 64GB of memory and Nvidia Quadro P5000 graphics cards. They used Cinesync for client reviews, and internally they used their in-house Itview software. Rendering was handled by SPI Arnold (not the commercial version) and their custom shading system. Other software included Autodesk 2015, Foundry’s Nuke X 10.0 and Side Effects Houdini 15.5. They avoided plug-ins so that their auto-vend process (breaking comps into layers for the 3D conversion) would be as smooth as possible. Everything was rendered internally on their on-premises renderfarm. They also used a Kinect-based scanning technique that allowed their artists to do performance capture on themselves, rapidly prototype ideas and generate reference.

We sat down with Sony Imageworks VFX supervisor Theo Bailek, who talks about the studio’s contribution to this latest Spidey film.

You worked on The Amazing Spider-Man in 2012 and The Amazing Spider-Man 2 in 2014. From a visual effects standpoint, what was different?
You know, not a lot. Most of the changes have been iterative improvements. We used many of the same technologies that we developed on the first few movies. How we do our city environments is a specific example of how we build off our previous assets and techniques, leveraging the library of buildings and props. As the machines get faster and the software more refined, our artists can do more iterations. This alone gave our team a big advantage over the workflows from five years earlier. And as the software and pipeline here at Sony have gotten more accessible, we’ve been able to integrate new artists more quickly.

It’s a lot of very small, incremental improvements along the way. The biggest technological change between now and the early Spider-Man films is our rendering technology. We use a more physically accurate incarnation of our global illumination renderer, Arnold. As the shaders and rendering algorithms become more naturalistic, we’re able to conform our assets and workflows to them. In the end, this translates to a more realistic image out of the box.

The biggest thing on this movie was the inclusion of Spider-Man in the Marvel Universe: a different take on the film and on how they wanted it to go. That would probably be the biggest difference.

Did you work directly with director Jon Watts, or did you work with production VFX supervisor Janek Sirrs in terms of the direction on the VFX?
During the shooting of the film I had the advantage of working directly with both Janek and Jon. The entire creative team pushed for open collaboration, and Janek was very supportive of this goal. He would encourage and facilitate interaction with both the director and Tom Holland (who played Spider-Man) whenever possible. Everything moved so quickly on set; oftentimes, if you waited to suggest an idea you’d lose the chance, as they would have to set up for the next scene.

The sooner Janek could get his vendor supervisors comfortable interacting, the bigger our contributions. While on set I often had the opportunity to bring our asset work and designs directly to Jon for feedback. There were times on set when we’d iterate on a design three or four times over the span of the day. Getting this type of realtime feedback was amazing. Once post work began, most of our reviews were directly with Janek.

When you had that first meeting about the tone of the movie, what was Jon’s vision? What did he want to accomplish in this movie?
Early on, it was communicated from him through Janek. It was described as, “This is sort of like a John Hughes, Ferris Bueller’s take on Spider-Man. Being a teenager, he’s not meant to be fully in control of his powers or the responsibility that comes with them. This translates to not always being super-confident or proficient in his maneuvers.” That was the basis of it.

Their goal was a more playful, relatable character. We accomplished this by being more conservative in our performances and in what Spider-Man was capable of doing. Yes, he has heightened abilities, but we never wanted every landing and jump to be perfect. Even superheroes have off days, especially teenage ones.

This being part of the Marvel Universe, was there a pool of common assets that all the VFX houses used?
Yes. The Marvel movies are incredibly collaborative and always use multiple vendors, so we’re constantly sharing assets. That said, there are a lot of things you just can’t share because of the different systems under the hood. Textures and models are easily exchanged, but how the textures are combined in the materials and shaders isn’t reusable, given the different renderers at each company. Character rigs are not reusable across vendors either, as facilities have very specific binding and animation tools.

It is typical to expect only models, textures, base joint locations and finished turntable renders for reference when sending or receiving character assets. As an example, we were able to leverage the Avengers Tower model we received from ILM to some extent. We supplied our Toomes costume model and our Spider-Man character and costume models to other vendors as well.

The scan data of Tom Holland, was it a 3D body scan of him or was there any motion capture?
Multiple sessions were done throughout the production process. A large volume of stunts and test footage was shot with Tom before filming, which proved to be invaluable to our team. He’s incredibly athletic and can do a lot of his own stunts, so the mocap takes we came away with were often directly usable. Given that Tom could do backflips and somersaults in the air, we were able to use this footage as reference for instructing our animators later down the road.
Toward the latter end of filming we did a second capture session, focusing on the shots we wanted to acquire using specific mocap performances. Then, several months later, we followed up with a third mocap session to get any new performances required as the edit solidified.

As we were trying to create a signature performance that felt like Tom Holland, we exclusively stuck to his performances whenever possible. On rare occasions when the stunt was too dangerous, a stuntman was used. Other times we resorted to using our own in-house method of performance capture using a modified Xbox Kinect system to record our own animators as they acted out performances.

In the end performance capture accounted for roughly 30% of the character animation of Spider-Man and Vulture in our shots, with the remaining 70% being completed using traditional key-framed methods.

How did you approach the fresh take on this iconic film franchise?
It was clear from our first meeting with the filmmakers that Spider-Man in this film was intended to be a more relatable and light-hearted take on the genre. Yes, we wanted to take the characters and their stories seriously, but not at the expense of having fun with Peter Parker along the way.

For us that meant that despite Spider-Man’s enhanced abilities, how we displayed those abilities on screen needed to always feel grounded in realism. If we faltered on this goal, we ran the risk of eroding the sense of peril and therefore any empathy toward the characters.

When you’re animating a superhero it’s not easy to keep the action relatable. When your characters possess abilities that you never see in the real world, it’s a very thin line between something that looks amazing and something that is amazingly silly and unrealistic. Over-extend the performances and you blow the illusion. Given that Peter Parker is a teenager and he’s coming to grips with the responsibilities and limits of his abilities, we really tried to key into the performances from Tom Holland for guidance.

The first tool at our disposal and the most direct representation of Tom as Spider-Man was, of course, motion capture of his performances. On three separate occasions we recorded Tom running through stunts and other generic motions. For the more dangerous stunts, wires and a stuntman were employed as we pushed the limit of what could be recorded. Even though the cables allowed us to record huge leaps, you couldn’t easily disguise the augmented feel to the actor’s weight and motion. Even so, every session provided us with amazing reference.

Though the bulk of the shots were keyframed, the work was always informed by reference. We looked at everything that was remotely relevant for inspiration. For example, we have a scene in the warehouse where the Vulture’s wings are racing toward you as Spider-Man leaps into the air, stepping on top of the wings before flipping to avoid the attack. We found amazing reference of people who leap over cars racing at them in excess of 70mph. It’s absurdly dangerous and hard to justify why anyone would attempt a stunt like that, and yet it was the perfect inspiration for our shot.

In trying to keep the performances grounded and stay true to the goals of the filmmakers, we also found it was better to err on the side of simplicity when possible. Typically, when animating a character, you look for opportunities to create strong silhouettes so the actions read clearly, but we tended to ignore those rules in favor of keeping everything dirty, with an unscripted feel. We let his legs cross over and his knees knock together. Our animation supervisor, Richard Smith, pushed our team to follow the guidelines of “economy of motion.” If Spider-Man needed to get from point A to B, he’d take the shortest route; there’s no time to strike an iconic pose in between!


Let’s talk a little bit about the third act. You had previsualizations from The Third Floor?
Right. All three of the main sequences we worked on in the third act had extensive previs completed before filming began. Janek worked extremely closely with The Third Floor and the director throughout the entire process of the film. In addition, Imageworks was tapped to help come up with ideas and takes. From early on it was a very collaborative effort on the part of the whole production.
The previs for the warehouse sequence was immensely helpful in the planning of the shoot. Given we were filming on location and the VFX shots would largely rely on carefully choreographed plate photography and practical effects, everything had to be planned ahead of time. In the end, the previs for that sequence resembled the final shots in most cases.

The digital performances of our CG Spider-Man varied at times, but the pacing and spirit remained true to the previs. As our plane battle sequence was almost entirely CG, the previs stage was more of an ongoing process for this section. Given that we weren’t locked into plates for the action, the filmmakers were free to iterate and refine ideas well past the time of filming. In addition to The Third Floor’s previs, Imageworks’ internal animation team also contributed heavily to the ideas that eventually formed the sequence.

For the beach battle, we had a mix of plate and all-CG shots. Here the previs was invaluable once again in informing the shoot and subsequent reshoots later on. As there were several all-CG beats to the fight, we again had sections where we continued to refine and experiment till late into post. As with the plane battle, Imageworks’ internal team contributed extensively to pre and postvis of this sequence.

The one scene, you mentioned — the fight in the warehouse — in the production notes, it talks about that scene being inspired by an actual scene from the comic The Amazing Spider-Man #33.
Yes, in our warehouse sequence there is a series of shots directly inspired by the comic book’s panels. Different circumstances in the comic and in our sequence lead to Spider-Man being trapped under debris, but Tom’s performance and the camera angles that were shot pay homage to the comic as he escapes. As a side note, many of those shots were added later in the production and filmed as reshoots.

What sort of CG enhancements did you bring to that scene?
For the warehouse sequence, we added digital Spider-Man, the Vulture wings and CG destruction, enhanced the practical effects, and extended or repaired the plate as needed. The columns that the Vulture wings slice through as they circle Spider-Man were destroyed practically with small detonating charges. These explosives were rigged within cement that encased the actual warehouse’s steel girder columns. Fans on set were used to help mimic the turbulence that would be present from a flying wingsuit powered by turbines. These practical effects were immensely helpful for our effects artists, as they provided the best possible in-camera reference. We kept much of what was filmed, adding our fully reactive FX on top to help tie it into the motion of our CG wings.

There’s quite a bit of destruction when the Vulture wings blast through walls as well. For those shots we relied entirely on CG rigid body dynamic simulations, as filming the destruction practically would have been prohibitive and unreliable. Though most of the shots in this sequence had photographed plates, a few still required the background to be generated in CG. One shot in particular, with Spider-Man sliding back and rising up, stands out. As the shot was conceived later in the production, there was no footage for us to use as our main plate. We did, however, have many photographed tiles of the environment, which we were able to use to quickly reconstruct the entire set in CG.

I was particularly proud of our team for their work on the warehouse sequence. The quality of our CG performances and the look of the rendering is difficult to discern from the live action. Even the rare all-CG shots blended seamlessly between scenes.

When you were looking at that ending plane scene, what sort of challenges were there?
Since over 90 shots within the plane sequence were entirely CG we faced many challenges, for sure. With such a large number of shots without the typical constraints that practical plates impose, we knew a turnkey pipeline was needed. There just wouldn’t be time to have custom workflows for each shot type. This was something Janek, our client-side VFX supervisor, stressed from the onset, “show early, show often and be prepared to change constantly!”

To accomplish this, a balance of 3D and 2D techniques were developed to make the shot production as flexible as possible. Using our compositing software Nuke’s 3D abilities we were able to offload significant portions of the shot production into the compositor’s hands. For example: the city ground plane you see through the clouds, the projections of the imagery on the plane’s cloaking LED’s and the damaged flickering LED’s were all techniques done in the composite.

A unique challenge to the sequence that stands out is definitely the cloaking. Making an invisible jet was only half of the equation. The LEDs that made up the basis for the effect also needed to be able to illuminate our characters. This was true for wide and extreme close-up shots. We’re talking about millions of tiny light sources, which is a particularly expensive rendering problem to tackle. Mix in the fact that the design of these flashing light sources is highly subjective and thus prone to needing many revisions to get the look right.

Painting control texture maps for the location of these LEDs wouldn’t be feasible for the detail needed on our extreme close-up shots. Modeling them in would have been prohibitive as well, resulting in excessive geometric complexity. Instead, using Houdini, our effects software, we built algorithms to automate the distribution of point clouds of data to intelligently represent each LED position. This technique could be reprocessed as necessary without incurring the large amounts of time a texture or model solution would have required. As the plane base model often needed adjustments to accommodate design or performance changes, this was a real factor. The point cloud data was then used by our rendering software to instance geometric approximations of inset LED compartments on the surface.
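To make that concrete, here is a minimal sketch, in Python, of the general approach Bailek describes: procedurally scattering LED sample points across a stand-in surface and writing them to a point cloud a renderer could later use for instancing. The cylinder surface, spacing values and file format are illustrative assumptions, not Imageworks’ actual tools.

```python
# Minimal sketch: distribute LED sample points over a parametric surface
# and write them to a simple ASCII point cloud for later instancing.
# The surface (a cylinder standing in for a fuselage section), spacing,
# jitter and file format are all illustrative, not production values.

import math
import random

def led_points(radius=2.0, length=10.0, spacing=0.05, jitter=0.3, seed=7):
    """Yield (position, normal) pairs laid out on a cylinder surface."""
    rng = random.Random(seed)
    n_u = max(1, int((2 * math.pi * radius) / spacing))  # around the hull
    n_v = max(1, int(length / spacing))                   # along the hull
    for i in range(n_u):
        for j in range(n_v):
            theta = (i + rng.uniform(-jitter, jitter)) / n_u * 2 * math.pi
            z = (j + rng.uniform(-jitter, jitter)) / n_v * length
            nx, ny = math.cos(theta), math.sin(theta)
            yield (radius * nx, radius * ny, z), (nx, ny, 0.0)

def write_point_cloud(path, points):
    """One LED per line: id, position xyz, normal xyz."""
    with open(path, "w") as f:
        for idx, (p, n) in enumerate(points):
            f.write(f"{idx} {p[0]:.4f} {p[1]:.4f} {p[2]:.4f} "
                    f"{n[0]:.4f} {n[1]:.4f} {n[2]:.4f}\n")

if __name__ == "__main__":
    write_point_cloud("led_cloud.txt", led_points())
```

Because the cloud is generated rather than painted or modeled by hand, it can be regenerated whenever the base model changes, which is the flexibility Bailek points to.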

Interestingly, this was a technique we adopted from rendering technology we use to create room interiors for our CG city buildings. When rendering large CG buildings we can’t afford to model the hundreds and sometimes thousands of rooms you see through the office windows. Instead of modeling the complex geometry you see through the windows, we procedurally generate small inset boxes for each window that have randomized pictures of different rooms. This is the same underlying technology we used to create the millions of highly detailed LEDs on our plane.
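The rooms-through-windows trick works along similar lines. Below is a rough sketch of the idea, with arbitrary example values: for each window on a facade grid, emit a small inset box and a randomly chosen room image index instead of modeling a real interior.

```python
# Sketch of the 'rooms through windows' idea: for each window on a building
# facade, record a small inset box and a randomly chosen room image index,
# instead of modeling real interiors. Grid size, depth and image count are
# arbitrary example values.

import random

def window_interiors(cols=12, rows=30, window_w=1.2, window_h=1.5,
                     inset_depth=0.6, n_room_images=20, seed=3):
    """Return one dict per window describing its inset box and room image."""
    rng = random.Random(seed)
    interiors = []
    for col in range(cols):
        for row in range(rows):
            interiors.append({
                "origin": (col * window_w, row * window_h, 0.0),  # facade space
                "size": (window_w, window_h, inset_depth),         # box extents
                "room_image": rng.randrange(n_room_images),        # random room
            })
    return interiors

if __name__ == "__main__":
    boxes = window_interiors()
    print(len(boxes), "window interiors generated; first:", boxes[0])
```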

First our lighters supplied base renders to our compositors to work with inside of Nuke. The compositors quickly animated flashing damage to the LEDs by projecting animated imagery on the plane using Nuke’s 3D capabilities. Once we got buyoff on the animation of the imagery we’d pass this work back to the lighters as 2D layers that could be used as texture maps for our LED lights in the renderer. These images would instruct each LED when it was on and what color it needed to be. This back and forth technique allowed us to more rapidly iterate on the look of the LEDs in 2D before committing and submitting final 3D renders that would have all of the expensive interactive lighting.
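As a loose illustration of that 2D-to-3D round trip, the sketch below samples an animated control image at each LED’s UV coordinate to decide, per frame, whether the LED is on and what color it emits. The procedural control image and the data layout are stand-ins for the layers the compositors would actually paint and animate in Nuke.

```python
# Sketch: drive per-LED on/off state and color from an animated 2D control
# image by sampling the image at each LED's UV coordinate, one frame at a
# time. The control image here is procedural; in practice it would be the
# 2D layer handed back by the compositors.

def control_frame(frame, width=256, height=256):
    """Stand-in for one frame of the animated control image."""
    band = (frame * 4) % height          # a bright band scrolling upward
    return [[(1.0, 0.9, 0.6) if abs(y - band) < 8 else (0.0, 0.0, 0.0)
             for _ in range(width)] for y in range(height)]

def sample(image, u, v):
    """Nearest-neighbor sample of an image stored as rows of RGB tuples."""
    h, w = len(image), len(image[0])
    x = min(w - 1, max(0, int(u * (w - 1))))
    y = min(h - 1, max(0, int(v * (h - 1))))
    return image[y][x]

def led_states(led_uvs, frame, threshold=0.05):
    """Return (led_index, rgb) for every LED lit in this frame."""
    img = control_frame(frame)
    lit = []
    for idx, (u, v) in enumerate(led_uvs):
        rgb = sample(img, u, v)
        if max(rgb) > threshold:          # treat near-black as 'off'
            lit.append((idx, rgb))
    return lit

if __name__ == "__main__":
    uvs = [(i / 99.0, j / 9.0) for i in range(100) for j in range(10)]
    print(len(led_states(uvs, frame=12)), "LEDs lit on frame 12")
```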

Is that a proprietary system?
Yes, this is a shading system that was actually developed for our earlier Spider-Man films back when we used RenderMan. It has since been ported to work in our proprietary version of Arnold, our current renderer.

The Third Floor: Previs and postvis for Wonder Woman

To help realize the cinematic world of Warner Bros.’s Wonder Woman, artists at The Third Floor London, led by Vincent Aupetit, visualized key scenes using previs and postvis. Work spanned nearly two years, as the team collaborated with director Patty Jenkins and visual effects supervisor Bill Westenhofer to map out key action and visual effects scenes.

Previs was also used to explore story elements and to identify requirements for the physical shoot as well as visual effects. Following production, postvis shots with temp CG elements stood in for finals as the editorial cut progressed.

We checked in with previs supervisor Vincent Aupetit at The Third Floor London to find out more.

Wonder Woman is a good example of filmmaking that leveraged not just the technical, but also the creative advantages of previs. How can a director maximize the benefits of having a previs team?
Each project is different, with different needs and opportunities as well as creative styles, but for Wonder Woman our director worked very closely with us and got involved with previs and postvis as much as she could. Even though this was her first time using previs, she was open and enthusiastic and quickly recognized the possibilities. She engaged with us and used our resources to further develop the ideas she had for the story and action, including iconic moments she envisioned for the main character. Seeing the ideas she was after successfully portrayed as moving previs was exciting for her and motivating for us.

How do you ensure what is being visualized translates to what can be achieved through actual filming and visual effects?
We put a big emphasis on shooting methodology and helping with requirements for the physical shoot and visual effects work — even when we are not specifically doing techvis diagrams or schematics. We conceive previs shots from the start with a shooting method in mind to make sure no shots represented in previs would prove impossible to achieve down the line.

What can productions look to previs for when preparing for large-scale visual effects scenes?
Of course, previs can be an important guide in deciding what parts of sets to build, determining equipment, camera and greenscreen needs and having a roadmap of shots. The previs team is in a position to gather input across many departments — art department, camera department, stunt department and visual effects — and effectively communicate the vision and plan.

But another huge part of it is creating a working visual outline of what the characters are doing and what action is happening. If a director wants to try different narrative beats, or put them in a new order, they can do that in the previs world before committing to the shoot. If they want to do multiple iterations, it’s possible to do that before embarking on production. All of this helps streamline complexities that are already there for intensive action and visual effects sequences.

On Wonder Woman, we had a couple of notable scenes, including the beach battle, where we combined previs, storyboards and fight tests to convey a sense of how the story and choreography would unfold. Another was the final battle in the third act of the film. It’s an epic 40 minutes that includes a lot of conceptual development. What is the form and shape of Ares, the movie’s antagonist, as he evolves and reveals his true god nature? What happens in each blow of his fight with Diana on the airfield? How do her powers grow, and what do those abilities look like? Previs can definitely help answer important questions that influence the narrative as well as the technical visuals to be produced.

How can directors leverage the postvis process?
Postvis has become more and more instrumental, especially as sequences go through editorial versions and evolving cuts. For Wonder Woman, the extensive postvis aided the director in making editorial choices when she was refining the story for key sequences.

Being able to access postvis during and after reshoots was very helpful as well. When you can see a more complete picture of the scene you have been imagining, with temp characters and backdrops in place, your decisions are much more informed.

How do you balance the ability to explore ideas and shots with the need to turn them around quickly?
This is one of the qualities of previs artists — we need to be both effective and flexible! Our workflow has to sustain and keep track of shots, versions and approvals. On Wonder Woman, our on-board previs editor literally did wonders keeping the show organized and reacting near instantaneously to director or visual effects supervisor requests.

The pace of the show and the will to explore and develop with a passionate director led to our producing an astonishing number of shots at a very rapid rate, despite a challenging schedule. We also had a great working relationship: we were truly and fully trusted by the client, and we repaid this trust by meeting deliveries with a high level of professionalism and quality.

Behind the Title: Edit 1 creative director/editor Ken Kresge

NAME: Ken Kresge

COMPANY: New York City’s Edit 1

CAN YOU DESCRIBE YOUR COMPANY?
We are a production company with a unique talent for previs and content work.

WHAT’S YOUR JOB TITLE?
VP creative director/editor

WHAT DOES THAT ENTAIL?
At heart, I’m an editor. I have been for the past 20 years. Editing led to a natural progression into being a CD. I manage the creative expectations of our clients and make sure our team of pros executes in a manner that I feel represents the client’s needs.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
In today’s business environment, being an editor means more than editing video, offline color correct and light After Effects work. Now the job includes all that plus a myriad of other things, ranging from collaborating on concept and script to audio, online color correct, graphic design, 3D animation and even helping with PR and social media. Fortunately, I love doing all those things, so for me now is a great time to be a creative editor.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Working with my new colleagues. It’s been 17 years since I was the new guy, and for me it’s very exciting getting to know them and seeing their talent for the first time. Oh yeah, the pizza on Friday is pretty awesome.

WHAT’S YOUR LEAST FAVORITE?
Give me some time and I am sure I will find something.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I like the mornings, when it’s just one or two of us. I put on some music and get ready for whatever is coming up.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I would be a scientist or a host for a comedic documentary TV show. Yes, that is pretty specific for me.

WHY DID YOU CHOOSE THIS PROFESSION?
I used editing as a way of expressing myself. I started my first editing gig at McCann Erickson.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
Is it ok if that hasn’t happened yet?

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I just joined Edit 1, so much of it is still going on, but I can say I was able to reconnect with some of my previous clients and friends on a few projects from McCann, Evoke, DDB, One World Trade and Grey. They have kept me quite busy over the past four weeks.

Edit 1’s rooftop.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
One World is a project I am proud of for a few reasons. I helped on the agency pitch and then helped create this interactive AR project for the iPad, which involved editing almost 50 different videos about places in New York City. Moreover, I am a New Yorker, and having lived here through 9/11, this project gave me an overwhelming sense of pride. For me it’s like patriotism, in a New Yorker way. I’m also proud of a personal project called “Road Trip Earth.”

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
A powerful laptop, Adobe’s Creative Cloud and a decent camera/phone.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Honestly, I don’t follow anything.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
I listen to KEXP almost every day. I love John in the Morning. Other than that, lately, it’s a lot of ALT-J, Modest Mouse and the new Slowdive.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I just don’t sweat it anymore. Life can be pretty hard but I have a good one, with an amazing wife and kids to share it with. I try not to mess with those things and everything else seems to fall into place.

Fantastic Beasts VFX workflow employs previs and postvis

By Daniel Restuccio

Warner Bros.’ Fantastic Beasts and Where to Find Them is considered by some a Harry Potter prequel and by others an entirely new J.K. Rowling franchise. Filled with nearly 1,500 VFX shots, this live-action, CG- and character-driven movie put a huge emphasis on pre-pro and established the increased role of postvis in the film’s visual effects post pipeline.

For the film’s overall visual effects supervisors, Tim Burke and Christian Manz, it was a family reunion of sorts, reteaming with many of the companies and individuals that worked on the Harry Potter movies, including director David Yates and producers David Heyman, Steve Kloves, J.K. Rowling and Lionel Wigram.

According to Manz, one of the most significant aspects of this film was how visual effects were integrated into the story from the very beginning. The direction from Yates was very clear: “Make things fantastic, but not fantasy.” For every creature design presented, Yates would ask, “What would be the story behind that creature? What would that character do if the audience saw it from one moment to the next?” Says Manz, “It all had to work to support the story, but not be the story.”

Manz feels that this movie speaks to a new way of storytelling with VFX. “Visual effects is now a part of that filmmaking and storytelling team rather than being the guys who stick in everything afterwards.”

Starting in January 2015, while Burke was busy as VFX supervisor on The Legend of Tarzan, Manz worked with Framestore animation director Pablo Grillo, a Framestore art and animation team and a group of freelance concept and previs artists doing creature development and scene design. Over eight months of design sessions they created 18 main animal types based on hundreds of variations, and worked with the Framestore art department to conceive the turn-of-the-century New York City sets and set extensions.

“Of course, there were creatures we tested that didn’t make the film,” says Framestore animator Andras Ormos, “but it was about the process of whittling it down, and that was the real joy of working on this project. The creative input stretched beyond the post production stage, deciding what worked and what wouldn’t in the overall film.”

“J.K. Rowling’s wonderful script was filled with characters and creatures,” explains Grillo. “Having seen how animation is such a big part of the process in a film like this, we decided that it was important to be involved from the concept stage onwards.” The character development and scene work sessions were so impressive they actually influenced subsequent drafts of the script.

Burke came on full-time in June 2015, and they split the movie in half. Manz took the lead developing the world inside Newt’s magical case, and Burke did the “Obscurus” and the third act. Principal photography took place from August 2015 to January 2016, and they took turns on set supervising their own and each other’s VFX sequences.

With Framestore and Double Negative taking the lead, the shots were spread out among nine main VFX companies and three previs/postvis companies, including Cinesite, Image Engine, Method Studios, Milk Visual Effects, Moving Picture Company, Nvizible, Proof, Rodeo FX, Secret Lab, The Third Floor and others. Burke says they divided the work by “the strengths of the companies and without overlapping them too much.”

Framestore
Framestore took on the majority of the complex character animation: the Niffler, Gnarlack, the Erumpent and Pickett the Bowtruckle, as well as many goblins and elves. Grillo first tackled the Niffler, described by Rowling as “a long-snouted, burrowing creature native to Britain with a penchant for anything shiny.” The creature design was a mash-up of a spiny anteater, platypus and mole, and it went through hundreds of iterations and many animated prototypes. Framestore used Flesh and Flex, the skin and muscle rigging toolkit developed for Tarzan, on the Niffler’s magic “loot stuffing” pouch.

The reason the audience is so delighted when this character first appears, explains Ormos, is that “this scene is driven by the relationship between Newt and the Niffler. There was a history we had to get across — the fact that the Niffler was notorious for escaping and pick-pocketing, and that Newt was going through the motions in trying to catch him. They understood each other and there were little looks, a language in their movement.”

Gnarlack, an American, cigar-chewing, snarky goblin, voiced and facially mocapped by actor Ron Perlman, “is one of the best digital humanoids yet,” reports Grillo. Perlman donned a Vicon Cara 3D facial motion capture headset, surrounded by four high-resolution, high-speed witness cameras. According to Framestore VFX supervisor Andy Kind, Perlman also sat in front of 98 cameras for a facial action coding shape (FACS) session so the team could sculpt the face directly in 3D.

“We created CG characters for the giants, elves and band ensemble,” says Kind. “Then we gave them crazy instruments, including a sousaphone/trumpet concoction.”

A 17-foot carbon fiber puppet, built by Handspring Puppet, substituted for the amorous rhinoceros Erumpent during the Central Park chase scene. It was switched out for the CG version later, and dynamic simulations of shattering ice, explosive snow and water effects were added to the concluding shots. There’s a liquid, light-filled sac on the Erumpent’s forehead that Manz says “made her slightly more unusual than a normal creature.”

“There was an awful lot of digital environment as well as the beast itself,” continues Manz. “David Yates fell in love with the postvis for this scene. It was great to be able to play with shots and offer up suggestions for the edit. It was a very organic way of filmmaking.”

Newt’s pocket-hiding creature sidekick, Pickett the Bowtruckle, took two months and 200 versions to get right. “We were told that Pickett moved too slowly at first and that he appeared too old. We played with the speed but kept his movements graceful,” explains Manz. “He didn’t really have any facial animation, but he does blow a raspberry at one point. In the end, we added more shots to get Pickett’s story through, as everyone just loved him.”

MPC
The Moving Picture Company (MPC) completed more than 220 shots and created the Demiguise, Occamy and Billiwig, as well as 3D set extensions of period Manhattan.

For Demiguise’s long, flowing hair and invisibility effect, MPC used their Furtility groom technology. According to MPC VFX supervisor Ferran Domenech, using Furtility “allows for the hair to move naturally and interact with the creature’s arms, legs and the environment around it.” Demiguise was animated using enhanced mocap with keyframed facial expressions.

MPC

MPC built the large feathered dragon-snake Occamy in sections to fill the real and CG-extended attic. They used Furtility once more, this time to add feathers, and they augmented the code so that in the climactic fight scene they could scale the giant version of the creature down to mouse size. MPC’s effects team then used its in-house Kali destruction technology to wreck the attic.

Finally, MPC worked on the Billiwig, a magical bug that can change its flight mode from dragonfly to propeller plane. “This little creature has lots of character and was great fun to bring to life,” reports Domenech.

Previs and Postvis
A major technical advance for Fantastic Beasts can be found in the workflow. It’s been 15 years since the first Harry Potter movie and five years since Deathly Hallows. Over that time Burke had designed a very efficient, streamlined, mostly film-based VFX workflow.

“In the past, we were always stuck at the point where when we shot the film, it was put into editorial, they cut it and then gave it back to us — quite often with big holes where creatures would exist or environments needed to be placed,” describes Burke. “Then we would have to involve the facilities to use their real power to push things through and start blocking out all of the characters. This took quite a bit of time and would always slow the process down, and time is really the key difference with everything we do these days.”

In the past, says Burke, he might wait two months to see an early block of an animated character, “which always then restricts what you can do at the back end or restricts the director’s ability to make creative changes.”

Thankfully this wasn’t the case with Fantastic Beasts. “In the middle of the shoot, Christian and I started supervising the postvis of the scenes we’d already shot,” he explains. They assembled a 50-artist in-house postvis team made up of members of The Third Floor, Proof and Framestore. While only some of the movie was prevised, all of it was postvised.

“The path from previs to postvis varied from sequence to sequence,” explains Peter McDonald, previs/postvis supervisor for The Third Floor, London. “At one end of the scale, we had sequences that never really survived through shooting, while at the other end we had sequences that were followed shot-for-shot during the shoot and subsequent editorial process.”

Third Floor postvis

“As an example,” he continues, “the Demiguise and Occamy scene in the department store attic was heavily prevised. The final previs was a pretty polished and spectacular piece in its own right with some relatively sophisticated animation and a highly refined edit. This previs edit was taken onto the stage, with printouts of the shots being referenced as the shoot day progressed. What later came back our way for postvis was very similar to what had been designed in the previs, which was very satisfying from our point of view. It’s nice to know that previs can help drive a production at this level of fidelity!”

One of the benefits of this process was having a movie during editorial that had no “holes” where VFX shots were to later appear. The “postvis” was so good that it was used during audience screenings before the VFX shots were actually built and rendered.

“There were a couple of factors that elevated the postvis,” says McDonald. “Probably the primary one was the integration of Framestore’s team with our team at The Third Floor London. Having them present and being supervised by Pablo Grillo guaranteed that the work we were putting together was being judged from almost a ‘finals’ point of view, as Pablo and his artists would also be the ones finishing the job in post. It meant that our postvis wasn’t a throwaway — it was the first step in the post production pipeline. This philosophy extended beyond the personnel involved. We also had creature rigs whose animation could be translated down the line.”

Third Floor’s previs of subway rampage.

One example of a sequence that carried through from previs to postvis was part of the Obscurus rampage in the subway. “Pablo and I worked very closely with artists at both Framestore and The Third Floor on this ‘sequence within a sequence,’” says McDonald. “We started with Bifrost fluid simulations created in Maya by our own senior asset builder Chris Dawson. We then had our animators distort these simulations into the virtual subway set. Through iteration, we developed the choreography of the piece and refined its overall rhythm and shape with our previs editor. This previs then became the template for what was shot on the actual set with Eddie Redmayne and Colin Farrell in the roles of Newt and Graves. When the plate edit returned to us for postvis, we were pretty much able to drop the same distorted simulations onto the plates. The camera angles and cutting pattern in the previs edit had been followed very closely by the live-action unit. We then animated a layer of environment destruction and comped it into the shots to help tie it all together.”

During postvis, says Manz, “A character could go into a shot within a day or two. You would track the shot plate, put the character in, light it and stick it back into editorial. That sort of turn around, that in-house work that we did was the big, big difference with how the film worked. It allowed us to feed all that stuff to David Yates.”

Yates showed his director’s cut to the studio with every single shot of the film blocked out. There were no empty spaces. “We even got the environments in so he had a representation of every street,” says Manz. They completed a three-hour assembly of the film in about five months.

Creatively, it was very liberating, and it enabled them to do additional shoots just to enhance the movie. Burke says they were designing and changing shots right up to the end. The final reveal of Johnny Depp as dark wizard Gellert Grindelwald came together at the last minute.

Fantastic Beasts is like a Harry Potter movie because it exists in the J.K. Rowling story universe and is part of the Harry Potter lore. “Where it’s similar to Potter in terms of the filmmaking,” says Manz, “is in making it feel very real and not fantasy. What I always enjoyed about the Potter films was that they really felt like they were set in a real version of the UK; you could almost believe that sort of magical school existed.”

Third Floor previs

How Fantastic Beasts is different, says Burke, is that it is set in turn-of-the-century New York City, a real city, and not the magical school environment of Hogwarts. “We were dealing with adults,” continues Burke. “We’re not talking about a child growing up and running everything through a child’s life. We’re talking about a series of adults. In that sense, it felt like we were making a film for adults, which obviously has great appeal to children as well. But I do feel it’s more of a film for grown-ups in terms of its storyline and the issues it’s challenging and discussing.”

Someone told Manz that they “felt like Fantastic Beasts was made for the audience that read the books and watched those films and has now grown up.”

IMDb lists four new Fantastic Beasts movies in development. Burke and Manz are already in pre-production on the second, penciled in for a November 16, 2018 release date. “I think it’s fair to say,” says Burke, “that we’ll obviously be trying to expand on the universe we’ve started to create. Newt will be there with his animals and various other characters, which are going to bring a lot more interest as the story evolves.”

Manz predicts, “It’s just trying to push the believability of the interactions and that world even further. The first film was all about creating that new world, and now it’s out there. It will be a new city (Paris), so we’ll have that challenge again, but we’ll build on what we’ve learned. You don’t often get an opportunity to work with the same team of people, so that’s going to be great for the second one.”

VFX Supervisor Volker Engel: ‘Independence Day,’ technology and more

Uncharted Territory’s Volker Engel is one of Hollywood’s leading VFX supervisors, working on movies as diverse as White House Down, Hugo and Roland Emmerich’s Shakespeare movie Anonymous. Most recently he was in charge of the huge number of effects for Emmerich’s Independence Day: Resurgence.

Engel was kind enough to make time in his schedule to discuss his 28-year history with Emmerich, his favorite scenes from Independence Day, his experience with augmented reality on set and more.

When did you get involved with Independence Day?
I was probably the earliest person involved after Roland Emmerich himself! He kept me posted over the years while we were working on other projects because we were always going to do this movie.

I think it was 2009 when the first negotiations with 20th Century Fox started, but the important part was early 2014. Roland had to convince the studio regarding the visuals of the project. Everyone was happy with the screenplay, but they said it would be great to get some key images. I hired a company called Trixter — they are based in Germany, but also have an office in LA. They have a very strong art department. In about six weeks we finished 16 images that are what you can call “concept art,” but they are extremely detailed. Most of these concepts can be seen as finished shots in the movie. This artwork was presented to 20th Century Fox and the movie was greenlit.

Concept art via Trixter.

You have worked with Emmerich many times. You must have developed a sort of shorthand?
This is now a 28-year working relationship. Obviously, we haven’t done every movie as a team but I think this is our eighth movie together. There is a shorthand and that helps a lot. I don’t think we really know what the actual shorthand is other than things that we don’t need to talk about because we know what needs to happen.

Technology continues to advance. Does that make life easier, or because you have more options does it make it even more complex?
It’s less the fact that there are more options; it’s that the audience is so much more sophisticated. We now have better tools available to make better pictures. We can do things now that we were not able to do before. So, for example, now we can imagine a mothership that’s 3,000 miles in diameter and actually lands on Earth. There is a reason we had a smaller mothership in the first movie and that it didn’t touch down anywhere on the planet.

The mothership touching down in DC.

So it changes the way you tell stories in a really fundamental way?
Absolutely. If you look at a movie like Ex Machina, for example, you can show a half-human/half-robot and make it incredibly visually convincing. So all of a sudden you can tell a story that you wouldn’t have been able to tell before.

If you look at the original Independence Day movie, you really only see glimpses of the aliens because we had to do it with practical effects and men in suits. For Independence Day: Resurgence we had the chance to go much further. What I like actually is that Roland decided not to make it too gratuitous, but at least we were able to fully show the aliens.

Reports vary, but they suggest about 1,700 effects shots in Independence Day: Resurgence. Is that correct?
It was 1,748. Close to two-thirds of the movie!

What was your previs process like?
We had two different teams: one joined us from Method Studios and the other was our own Uncharted Territory team, and we split the task in half. The Method artists were working in our facility, so we were all under one roof.

Method focused on the whole lunar sequence, for example, while our in-house team started with the queen/bus chase toward the end of the movie. Roland loves to work with two specific storyboard artists and has several sessions during the week with them, and we used this as a foundation for the previs.

Trixter concept art.

So Roland was involved at the previs stage looking at how it was all going to fit together?
He had an office where the previs team was working, so we could get him over and go literally from artist to artist. We usually did these sessions twice a day.

What tools were you using?
Our in-house artists are Autodesk 3D Studio Max specialists, and the good folks from Method worked with Autodesk Maya.

The live shoot used camera-tracking technology from Ncam to marry the previs graphics and the live action in realtime to give a precise impression of how the final married shot would work.

How were you using the Ncam exactly?
The advantage is that we took the assets we had already built for previs and then re-used them inside the Ncam set-up, doing this with Autodesk Motion Builder. But some of the animation had to be done right there on set.

After: Area 51

I’ll give you an example. When we’re inside the hangar at Area 51, Roland wanted to pan off an actor’s face to reveal 20 jet fighters lifting off and flying into the distance. The Ncam team and Marion [Spates, the on-set digital effects supervisor] had to do the animation for the fighters right there, on the spot. In about five minutes they had to come up with something, and what’s more, it worked. That’s why Roland also loves to work with Ncam: it gives him the flexibility to make decisions right there in the moment.

So you’re actually updating or even creating shots on set?
Yes, exactly. We have the toolbox there — the assets like the interior of the hangar — but then we do it right there to the picture. Sometimes for both the A-camera and the B-camera.

We did a lot of extensions and augmentations on this movie and what really helped was our experience of working with Ncam on White House Down. For Roland, as the director, it helps him compose his images instead of just looking at a gigantic bluescreen. That’s really what it is, and he’s really good at that.

The Ncam at use on set.

I explain it this way: imagine you already have your first composite right there, which goes straight to editorial. They immediately have something to work with. We just deliver two video files: the clean one with the bluescreen and another from Ncam that has the composite.

Did using Ncam add to the shooting time?
Working with AR on set always adds some shooting time, and it’s really important that the director is briefed and wants to use this tool. The Ncam prep often runs parallel to the rehearsals with the actors, but sometimes it adds two or three additional minutes. When you have someone who’s not prepared for it, two or three minutes can feel like a lifetime. It does, however, save a lot of time in post.

On White House Down, when we used Ncam for the first time, it actually took a little over a week until everything grooved and everyone was aware of it — especially the camera department. After a little while they just knew this is exactly what needed to be done. It all became instant teamwork. It is something that supports the picture and it’s not a hindrance. It’s something that the director really wants.

Do you have a favorite scene from Resurgence?
There is a sequence inside the mothership where our actors are climbing up one of these gigantic columns. We had a small set piece being built for the actors to climb, and it was really important for Roland to compose the whole image. He could ask for a landing platform to be removed and more columns to be added to create a sense of depth, then move the view around another 50 or 60 degrees.

He was creating his images right there, and that’s why the guys have to be really quick on their feet and build these things in and make it work. At the same time, the assistant director is there saying the cameras are ready, the actors are ready and we’re ready to shoot, and of course no one wants them to wait around, so they better have their stuff ready!

The destruction of Singapore.

Some of my other favorite sequences from the film are the destruction of Singapore while the mothership enters the atmosphere and the alien queen chasing the school bus!

What is next for you?
In 1999, when I started Uncharted Territory with my business partner Marc Weigert, we set it up as a production company and started developing our own projects. We joke that Roland keeps interrupting our development because he comes to us with projects of his own that we just cannot say no to! But we have just come back from a trip to Ireland, where we scouted two studios and met with several potential production partners for a new project of our own. Stay tuned!

The Third Floor’s Eric Carney on the evolution of previs

When people hear the word previs, they likely think of visual effects, but today’s previs goes way beyond VFX. Our industry is made up of artists who think visually, so why not get a mock-up of what a scene might look like before it’s shot, whether it includes visual effects or not?

Eric Carney is a previs supervisor and co-founder of The Third Floor, which has studios in Los Angeles, Montreal and London. He defines today’s previs as a Swiss army knife that helps define the vision and solutions for all departments on a project. “Previs is not exclusively for visual effects,” he explains. “Previs teams work with producers, directors, cinematographers, stunt coordinators, special effects crews, grips, locations, editorial and many other collaborators, including visual effects, to help map out ideas for scenes and how they can be effectively executed on the day.”

Eric Carney

Let’s find out more from Carney about previs’ meaning and evolution.

How has the definition of previs changed over the years?
While previs is often categorized as a visual effects tool, it’s really a process for the entire production and is being regularly used that way. In a heads-of-department meeting, where scenes are being discussed with a large group of people, it can be hard to describe with words something that is a moving image. Being able to say, “Why don’t we mock up something in previs?” makes everyone happy because they know everyone will get something they can watch and understand, and we can move on to the next item in the meeting.

We’re also seeing previs used more frequently to develop the storytelling and to visualize a large percentage of a film — 80 to 100 percent on some we’ve collaborated on. If you can sit down and “see” a version of the movie, what works (or doesn’t) really comes to light.

Can you give an example?
Maybe a certain scene doesn’t play very well when placed after a certain other scene — maybe the order should be flipped. Maybe there are two scenes that are too similar. Maybe the pacing should be changed, or maybe the last part of the scene is unnecessary. You used to have to wait until the first cut to have this type of insight, but with previs filmmakers and studios can discover these things much earlier, before visual effects may have been ordered and oftentimes before the scenes get filmed.

Postvis for Age of Ultron

What is the relationship between previs and production?
Previs helps to produce a blueprint for production from which everything can be planned. Once all departments have a good idea of the desired scene, they can apply their specialized knowledge for how to accomplish that — from the equipment and personnel they are going to need on the day to figuring out how many days they will be filming or where they are going to shoot. All of the nuts and bolts become easier and more efficient when the production has invested in accurate previs.

What is the relationship between previs and post production?
In post production, previs becomes something called “postvis,” which can be the editorial department’s best friend. Many big-budget movies have so many visual effects that it can be challenging to produce a truly representative cut prior to visual effects delivery if your footage is mostly greenscreen. Postvis is able to fill the live plates with temp effects, characters or environments so the creatures, backgrounds or other elements that are important to the shot appear in context. Because postvis can be done quickly, editors can request shots on the fly to help them try out and drop in different options. It’s such a useful process that we’re spending as much and sometimes more time on postvis as we do on previs.

Can you describe the creative aspect of previs?
Previs involves all aspects of filmmaking, and there are no creative boundaries. This is why directors love previs; it’s a giant sandbox, free from the realities of physical production. A previs team is typically small, so the work can be very collaborative. Operating in service of the director, frequently also including producers, visual effects supervisors or other collaborators, creative visualization helps find effective ways to visually tell the story, to show the key beats and the way a scene goes together. The previs team’s starting point is often the script or storyboards, but this can also be general descriptions of the action that needs to occur. Through previs, we often have the latitude to explore possible flows of action and brainstorm different details or gags that might be a creative fit.

While previs supports having a very fully realized creative vision, it’s also important that what is visualized translates into shots and scenes that are possible for real-world production and budgets. It’s all well and good to come up with great ideas, but eventually someone has to actually film or post produce the shot.

Can you talk about the technical aspects of previs?
Previs has an important function in helping plan complicated technical aspects of production. We call it “techvis.” This is where we incorporate input and information from all the key departments to produce detailed shooting logistics and plans. By working collaboratively to bring these details into the previs, any number of shooting and visual effects details can be determined and shots can be rehearsed virtually with a good deal of technical accuracy corresponding to the setup for the shooting day.

Many things can be figured out using techvis, including positions for the camera, how far and fast it should move, which lenses are needed and where the actors need to be. It’s also possible to define equipment needs and a host of specific details. Can the shot be done on a Fisher Dolly or will you need a jib arm or a Technocrane? Should it be Techno 30 or Techno 50? Where should it go and how much track are you going to need? Or maybe the move is too fast for a crane and you’d be better off with a Spidercam?
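As a small example of the arithmetic techvis bakes into those answers, the sketch below computes the horizontal field of view for a given lens and sensor, and the average speed a dolly must sustain to cover a planned move within a shot’s duration. The sensor width and the numbers used are generic defaults, not values from any particular show.

```python
import math

def horizontal_fov_deg(focal_length_mm, sensor_width_mm=24.89):
    """Horizontal angle of view for a given lens on a Super 35-ish sensor."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

def dolly_speed(track_length_m, shot_duration_s):
    """Average speed the dolly must sustain to cover the move in the shot."""
    return track_length_m / shot_duration_s

if __name__ == "__main__":
    print(f"35mm lens: {horizontal_fov_deg(35):.1f} deg horizontal FOV")
    print(f"12m move over 4s: {dolly_speed(12, 4):.1f} m/s average")
```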

By interfacing with all the departments and bringing together the collective wisdom about the scene at hand, we can produce on-set specifications ahead of time so everyone can refer to the diagrams that have been created without spending time figuring it out on the day.

One area where previs and techvis artists often contribute is in scenes with motion control work. We might be charged with visualizing the types of moves that the motion control crane can achieve, or looking at the best places to position the rig. We’ve built a large library of motion control cranes in 3D that can be dropped into the virtual scene to aid this process. Not only can the move be calculated in advance, the camera path from the computer can be loaded directly to the physical rig to have the on-set equipment execute the same move.
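As a hedged sketch of that previs-to-rig handoff, the snippet below exports a simple orbiting camera move as per-frame position and pan/tilt rows in a CSV file a motion control operator could ingest. The move, frame rate and column layout are invented for illustration; real rigs use their own exchange formats.

```python
# Sketch: export a previs camera move as per-frame position/rotation rows
# that could be handed to a motion control operator. The circular move and
# the CSV layout are illustrative only.

import csv
import math

def orbit_move(frames=96, radius=6.0, height=1.8, fps=24.0):
    """Generate (frame, x, y, z, pan_deg, tilt_deg) for a simple orbit."""
    for f in range(frames):
        t = f / fps
        theta = 0.5 * t                         # slow orbit, radians/second
        x, y = radius * math.cos(theta), radius * math.sin(theta)
        pan = math.degrees(math.atan2(-y, -x))  # keep aiming at the origin
        yield (f, x, y, height, pan, 0.0)

def export_move(path, rows):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "x", "y", "z", "pan_deg", "tilt_deg"])
        writer.writerows(rows)

if __name__ == "__main__":
    export_move("camera_move.csv", orbit_move())
```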

We are at a point where virtual planning and on-set production can really work hand in hand, each process feeding the other to realize the vision more effectively.

Previs for Ant-Man.

Name some common challenges that can be solved via the previs process.
One common request for previs is in planning large-scale fight scenes. Knowing what each character, creature or ship is doing at any given moment is important for the story as well as for orchestrating filming and visual effects. Scenes like the final car chase in Mad Max: Fury Road, armies clashing in Game of Thrones or the epic action in a Marvel film like Avengers: Age of Ultron or Captain America: Civil War are good examples. Visualizing heroes, villains and the inevitable destruction that follows can be lots of fun.

As mentioned, previs also comes up to help with things that pose specific technical challenges, such as those that rely on representing physics or scale. If you’re depicting astronauts in zero gravity, a fleet of hover cars or a superhero shrinking from human to ant-size, you are likely using previs to help conceptualize, as well as realize, the scene.

What are some less common ways previs is used?
A newer trend is using previs within a virtual camera system to explore frame-ups of the shot. Previs visuals appear in a display that the director can control and reposition to see what type of coverage works best. On The Walk, postvis composites actually fed a virtual camera that was used to explore shot plates and extend practical camera moves. On some shows, previs versions of real locations or CG environments might be used to virtually “scout” and more extensively develop the shots, or a real location might be scouted to match the size or description suggested in a previs mockup of the scene.

Beyond showing the action, previs artists are sometimes asked to develop and test the characteristics of a character, environment, prop or type of effect. In Godzilla, we did animation tests for our director with possible fighting styles for Godzilla and the Mutos, cueing off large animals from nature. For Thor, we looked at things like how the hero’s hammer and cape would fly and behave. On Total Recall, we considered different sets of rules that might apply to cities and vehicles in a futuristic world. On special venue projects, we’ve tested things like the flow of a ride from the audience’s POV.

Previs for Game of Thrones

While previs is used a lot on CG-heavy scenes, it’s worth noting that visualization can also be vital for scenes largely based on practical filming. This is especially true for coordinating complex stunts. On Mission Impossible: Rogue Nation, for example, stunt and camera teams tightly coordinated their work with previs to identify requirements for safely and effectively pulling off in-camera stunts that ranged from Tom Cruise riding on the wing of an Airbus to being rotated in an underwater chamber. The same is true on Season 5 of Game of Thrones where the approach to realizing the ambitious arena scene in Episode 9 relied on syncing up the actions of a digital dragon with real pyrotechnics and stunt performances on the location set.

What do you see for the future of previs?
In film, we’re seeing higher proportions of movies being previsualized and previs being requested by directors and studios on productions of all sizes and scale. We’re doing more techvis, virtual production and visualization from on set. We’re looking into how modern game engines can support the process with increased interactivity and visual quality. And we are applying skills, tools and collaborations from the previs process to create content for platforms like VR.

You’ve won two Emmys as part of the Game of Thrones team. Can you talk about your work on the show?
We don’t actually think about it as a television program but more like a 10-hour movie. The trick is that we have a smaller team and less time than we would on a two-hour feature. Being able to visualize the large set-piece sequences — like Drogon in the arena or the battle at Hardhome — is an indispensable part of the production process, and it’s difficult to imagine achieving such sequences without it. Everything involving visual effects can be planned down to the inch, with it all being done in half the time of a normal film.

All of the contributors on the show — from the producers and directors to the special effects, stunts and camera crews, and the visual effects team headed up by Joe Bauer and Steve Kullback — are so very collaborative.

Being on a show like this only inspires innovation even more. Last season, we had a flame-throwing Technodolly playing a CG dragon with real actors in a real location in Spain. This season…stay tuned!

Ncam hires industry vet Vincent Maza to head up LA office

Ncam, maker of camera tracking technology for augmented reality production and previs, has opened a new office in Los Angeles and has brought on Vincent Maza to run the operation.

Maza spent much of his career at Avid and as an HD engineer at Fletcher Chicago. More recently he has been working with the professional imaging division of Dolby and with data transfer specialist Aspera. He is also a member of the board of directors of the HPA (Hollywood Post Alliance), now part of SMPTE. He will be in Indian Wells, California next week representing Ncam at the HPA Tech Retreat.

“2016 is going to be a great year for augmented reality and we believe we will see a huge uptake in people using it to make television more engaging, more exciting and more challenging,” commented Maza. “Ncam’s camera tracking technology makes augmented reality a practical proposition, and I am very excited to be at the heart of it and supporting our US presence.”

Ncam’s tracking system captures all six degrees of camera movement: XYZ position in 3D space, plus pan, tilt and roll. Even handheld cameras can be precisely tracked with minimal latency.
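To illustrate what those six degrees of freedom look like as data, here is a generic sketch, not Ncam’s actual API or wire format, of a per-frame camera pose and its conversion to the 4x4 camera-to-world matrix a graphics engine would consume; the rotation order shown is just one common convention and varies between systems.

```python
# Generic illustration of a six-degree-of-freedom camera sample: XYZ position
# plus pan/tilt/roll angles. This is NOT Ncam's actual API or wire format.
import math
from dataclasses import dataclass

@dataclass
class CameraPose:
    x: float            # position in meters
    y: float
    z: float
    pan: float          # rotations in degrees
    tilt: float
    roll: float

    def to_matrix(self):
        """4x4 camera-to-world matrix using a Rz(pan) * Rx(tilt) * Ry(roll) order."""
        p, t, r = (math.radians(a) for a in (self.pan, self.tilt, self.roll))
        cp, sp = math.cos(p), math.sin(p)
        ct, st = math.cos(t), math.sin(t)
        cr, sr = math.cos(r), math.sin(r)
        return [
            [cp * cr - sp * st * sr, -sp * ct, cp * sr + sp * st * cr, self.x],
            [sp * cr + cp * st * sr,  cp * ct, sp * sr - cp * st * cr, self.y],
            [-ct * sr,                st,      ct * cr,                self.z],
            [0.0,                     0.0,     0.0,                    1.0],
        ]

# Example: one tracked sample from a handheld camera, handed to the renderer.
pose = CameraPose(x=1.2, y=-0.4, z=1.7, pan=32.0, tilt=-5.5, roll=0.8)
camera_to_world = pose.to_matrix()
```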

Broadcasters have embraced augmented reality with Ncam, including CNN, ESPN, Fox Sports and the NFL. The same technology is used to provide directors and cinematographers with realtime visualization of effects shots. Recent movies using the technology include Avengers: Age of Ultron, Edge of Tomorrow and White House Down.

The A-List: An interview with ‘The Martian’ director Ridley Scott

By Iain Blair

A mysterious alien world in deep space, hundreds of years in the future. The gore and glory of imperial Rome, and the spectacle of its doomed gladiators. The nightmarish vision of a dystopian Los Angeles and its rogue replicants. The colossal grandeur of ancient Egypt and its massive monuments. The bloody battlefields of the Crusades. The pastoral glow of vineyards in southern France.

Those are just a few of the “other worlds” that Ridley Scott, one of the supreme stylists of contemporary cinema, has brought to life over the nearly four decades since making his feature debut with The Duellists in 1977. Scott’s directorial resume also includes Blade Runner, Alien and Thelma and Louise. Of all his contemporaries working today, Scott alone seems to be equally at ease creating vast landscapes set in both the distant past and distant future, in the process channeling David Lean, Cecil B. DeMille and Jim Cameron along with his own prodigious gifts as an epic storyteller and visual artist.

THE MARTIAN

Ridley Scott on location in Jordan for ‘The Martian.’

Now, the three-time Oscar-nominated director — whose credits include such varied fare as Hannibal, Robin Hood, Black Hawk Down, Exodus: Gods and Kings, A Good Year and G.I. Jane — has turned his attention to the red badlands of Mars in his new sci-fi thriller The Martian. Starring Matt Damon, Jessica Chastain, Kate Mara and Jeff Daniels, it tells the story of a botanist astronaut (Damon) left behind on the dead, hostile planet after an aborted mission and the efforts of NASA and a team of international scientists to rescue him.

I recently spoke to Scott, whose other credits include Prometheus, Matchstick Men, American Gangster and Legend, about making the 3D film, which was shot in Jordan and Hungary. We discussed his love of previs and post and — hold onto your seats!! — why post schedules are way too long for his liking.

You’ve made a lot of sci-fi films. What’s the appeal?
It’s a new canvas, it takes you into the arena of “anything goes,” but you also need to create a rulebook so the world you create is coherent; otherwise you just get rubbish. You also need a story that’s valid in that universe… and to create parameters. Anything doesn’t go!

The appeal here was it’s a sort of Robinson Crusoe survival story, set in space, five years in the future. There are no aliens and we went for a very realistic look and approach — I knew exactly what to do with it. Even as I was reading it, I was seeing Wadi Rum in Jordan, where we shot the landscapes, and I knew grading would be simple, as I could adjust terra cotta to orange landscapes.

THE MARTIAN

How early on did you decide to go 3D?
Immediately. I loved 3D when I first tried it out on Prometheus, and then we used it on Exodus, so this is the third one. Again, DP Dariusz Wolski used the 3ality Technica TS-5 rigs with Red Epic Dragons and Scarlet Dragons. I love it! It’s only a problem if you allow it to become brain surgery, so you just need to know what you’re doing. It’s a bit like shooting four cameras, which I do anyway.

All the visual effects were obviously crucial. How soon did you integrate post and VFX with the production?
I start it almost immediately, and I also do a lot of boarding. I started well before we began The Martian, with a particular view or rock. I board it all myself, which makes it more accurate, and it allows you to pace a scene. They’re very instructive and they become the bible for everyone, and you can tell the VFX guys, “Here’s the lead-in, this is the cross-over, now we’re in the full VFX shot.”

What about digi-data animation?
I absolutely love it. I think it’s essential before you go into anything complex, because, first, you see what the problems are and, second, in editing you invariably haven’t got the greenscreen, so digital data enables you to cut it into the film instead of having blank space, and it stays there until you get a complete shot.

Matt Damon portrays an astronaut who draws upon his ingenuity to subsist on a hostile planet.

Did you do a lot of previs?
Yes, at MPC and Argon. I love that too as it lets me see what’s what. It can be very sophisticated now in terms of working out the pacing and how you’ll cut. You can get very close to what the final thing will be.

Where did you post?
Partly in London and Budapest, and we did a lot of post as we shot. I cut as we go, every night, so by the end of the shoot I’m pretty close to the director’s cut. I hate waiting until the end of the shoot to start editing, so editor Pietro Scalia just got on with it. That lets me see where I am.

We did the sound mix at Twickenham in the big new Dolby Atmos room. The mix is amazing as it gives you all this clarity and separation between dialogue and all the effects and other layers.

A lot of filmmakers complain about today’s accelerated post schedules. I assume you’re not one of them?
Are you kidding me? It’s like watching ivy grow when you’re waiting for all the VFX shots and so on. We worked 25 weeks on post for this, and I still think it’s a bit long. Today’s digital technology means you no longer travel with a million feet [of film], just digital output and data, and post is getting faster and faster, thank God. Shooting and posting in 35mm drove me crazy! To be honest, I’d be happy with an even shorter post. I love post, but if you know what you’re doing you don’t need to spend all that time. And digital has changed everything.

Matt Damon portrays an astronaut who must draw upon his ingenuity to survive on a hostile planet.

How many visual effects shots are there?
Probably 1,300, and we had a lot of companies — Framestore, ILM, Milk, Prime Focus, The Senate, [The Territory for screen graphics] and my usual VFX supervisor Richard Stammers, who’s been with me since Kingdom of Heaven. Funnily enough, the hardest shot to do was the [scene] with the tape, where it floats around and curls in a rather balletic fashion. That was very tricky to get right.

Where was the DI?
At Company 3 in London. I love the DI. For me it’s the final touch, like grading still photographs, and I used my favorite colorist, Stephen Nakamura, who’s based at their LA office and would travel to London. He’s very fast, and we did the whole film in just two weeks. I’m very happy with the way it looks. (Nakamura used Blackmagic’s DaVinci Resolve on the film.)

What’s next?
I plan to start Alien: Paradise Lost in February, maybe in Toronto. It’s a sequel to Prometheus and a prequel to Alien. I’m also doing a lot of TV projects, including The Hot Zone, a drama with Fox about the Ebola virus.

You seem to be working at a flat-out pace these days, directing a huge movie every year. You turn 78 in November. Do you ever see yourself slowing down?
(Laughs) Hopefully not! I actually think I’m speeding up, and as long as I find great projects to make that really interest me, I’ll keep working.

Photos by Giles Keyte and Aidan Monaghan.


Check back in soon for our audio post coverage of The Martian.

Gaining more control over VFX shoots

By Randi Altman

During IBC this year, I saw many companies, some I was familiar with and some that were new to me. One I was eager to get to know was SolidAnim.

Alas, fate intervened. Well, more accurately, the size of the RAI Convention Center where IBC took place intervened — it is huge, with a crazy number of exhibit halls to navigate. I never made it there (damn you RAI, shaking fist in air!). But happily I did get to connect with the company’s Lamia Nouri on a call recently.

Founded by three artists — mocap supervisors and animation directors Isaac Partouche, Emmanuel Linot and Jean-Francois Szlapka — this French animation company was born in…

Zoic’s Mike Romey discusses previs app ZEUS:Scout

By Randi Altman

Visual effects studio Zoic has released to the masses an iPad-based previs tool they developed while providing shots for VFX-heavy shows such as Once Upon a Time, Intelligence, Pan Am and V. The app is available now via iTunes for $9.99.

According to Zoic (@zoicstudios), ZEUS:Scout (Zoic Environmental Unification System) offers seven main modes: View allows the user to move the camera around and save camera positions for shot blocking purposes; Measurements mode allows users to bring real-world measurements into a virtual world; Characters mode can be used to insert character cards into the virtual location; Props lets users add, move, rotate and scale set props that are available for in-app purchase; Previs Animation lets users explore camera moves for previs and rehearsal purposes. The Tracking mode allows users to use the tablet as a virtual camera with the CG view matching the…