Sony Imageworks’ VFX work on Spider-Man: Homecoming

By Daniel Restuccio

With Sony’s Spider-Man: Homecoming getting ready to release digitally on September 26 and on 4K Ultra HD/Blu-ray, Blu-ray 3D, Blu-ray and DVD on October 17, we thought this was a great opportunity to talk about some of the film’s VFX.

Sony Imageworks has worked on every single Spider-Man movie in some capacity since the 2002 Sam Raimi version. On Spider-Man: Homecoming, Imageworks worked mostly on the "third act," which encompasses the warehouse, hijacked plane and beach destruction scenes. This meant delivering over 500 VFX shots, created by a team of over 30 artists and compositors (peaking at around 200 at the height of production), and rendering out finished scenes at 2K.

All of the Imageworks artists used Dell R7910 workstations with Intel Xeon E5-2620 CPUs (24 cores), 64GB of memory and Nvidia Quadro P5000 graphics cards. They used cineSync for client reviews, and internally they used their in-house Itview software. Rendering technology was SPI Arnold (not the commercial version) along with their custom shading system. Software used was Autodesk Maya 2015, Foundry's Nuke X 10.0 and Side Effects Houdini 15.5. They avoided plug-ins so that their auto-vend process, which breaks comps into layers for the 3D conversion, would be as smooth as possible. Everything was rendered internally on their on-premises renderfarm. They also used a modified Xbox Kinect scanning technique that allowed their artists to do performance capture on themselves and rapidly prototype ideas and generate reference.

We sat down with Sony Imageworks VFX supervisor Theo Bailek, who talks about the studio’s contribution to this latest Spidey film.

You worked on The Amazing Spider-Man in 2012 and The Amazing Spider-Man 2 in 2014. From a visual effects standpoint, what was different?
You know, not a lot. Most of the changes have been iterative improvements. We used many of the same technologies that we developed on the first few movies. How we do our city environments is a specific example of how we build off of our previous assets and techniques, leveraging our library of buildings and props. As the machines get faster and the software more refined, our artists can do more iterations. This alone gave our team a big advantage over the workflows from five years earlier. As the software and pipeline here at Sony have gotten more accessible, it has allowed us to more quickly integrate new artists.

It’s a lot of very small, incremental improvements along the way. The biggest technological change between now and the early Spider-Man films is our rendering technology. We use a more physically accurate incarnation of our global illumination Arnold renderer. As the shaders and rendering algorithms become more naturalistic, we’re able to conform our assets and workflows. In the end, this translates to a more realistic image out of the box.

The biggest thing on this movie was the inclusion of Spider-Man in the Marvel Universe: a different take on this film and how they wanted it to go. That would probably be the biggest difference.

Did you work directly with director Jon Watts, or did you work with production VFX supervisor Janek Sirrs in terms of the direction on the VFX?
During the shooting of the film I had the advantage of working directly with both Janek and Jon. The entire creative team pushed for open collaboration, and Janek was very supportive toward this goal. He would encourage and facilitate interaction with both the director and Tom Holland (who played Spider-Man) whenever possible. Everything moved so quickly on set; oftentimes, if you waited to suggest an idea you’d lose the chance, as they would have to set up for the next scene.

The sooner Janek could get his vendor supervisors comfortable interacting, the bigger our contributions. While on set I often had the opportunity to bring our asset work and designs directly to Jon for feedback. There were times on set when we’d iterate on a design three or four times over the span of the day. Getting this type of realtime feedback was amazing. Once post work began, most of our reviews were directly with Janek.

When you had that first meeting about the tone of the movie, what was Jon’s vision? What did he want to accomplish in this movie?
Early on, it was communicated from him through Janek. It was described as, “This is sort of like a John Hughes, Ferris Bueller’s take on Spider-Man. Being a teenager, he’s not meant to be fully in control of his powers or the responsibility that comes with them. This translates to not always being super-confident or proficient in his maneuvers.” That was the basis of it.

Their goal was a more playful, relatable character. We accomplished this by being more conservative in our performances, in what Spider-Man was capable of doing. Yes, he has heightened abilities, but we never wanted every landing and jump to be perfect. Even superheroes have off days, especially teenage ones.

This being part of the Marvel Universe, was there a pool of common assets that all the VFX houses used?
Yes. With the Marvel movies, they’re incredibly collaborative and always use multiple vendors. We’re constantly sharing the assets. That said, there are a lot of things you just can’t share because of the different systems under the hood. Textures and models are easily exchanged, but how the textures are combined in the materials and shaders makes them not reusable given the different renderers at each company. Character rigs are not reusable across vendors, as facilities have very specific binding and animation tools.

It is typical to expect only models, textures, base joint locations and finished turntable renders for reference when sending or receiving character assets. As an example, we were able to leverage, to some extent, the Avengers Tower model we received from ILM. We also supplied our Toomes costume model and our Spider-Man character and costume models to other vendors.

The scan data of Tom Holland, was it a 3D body scan of him or was there any motion capture?
Multiple sessions were done through the production process. A large volume of stunts and test footage were shot with Tom before filming that proved to be invaluable to our team. He’s incredibly athletic and can do a lot of his own stunts, so the mocap takes we came away with were often directly usable. Given that Tom could do backflips and somersaults in the air we were able to use this footage as a reference for how to instruct our animators later on down the road.
Toward the latter end of filming we did a second capture session, focusing on the shots we wanted to acquire using specific mocap performances. Then again several months later, we followed up with a third mocap session to get any new performances required as the edit solidified.

As we were trying to create a signature performance that felt like Tom Holland, we exclusively stuck to his performances whenever possible. On rare occasions when the stunt was too dangerous, a stuntman was used. Other times we resorted to using our own in-house method of performance capture using a modified Xbox Kinect system to record our own animators as they acted out performances.

In the end performance capture accounted for roughly 30% of the character animation of Spider-Man and Vulture in our shots, with the remaining 70% being completed using traditional key-framed methods.

How did you approach the fresh take on this iconic film franchise?
It was clear from our first meeting with the filmmakers that Spider-Man in this film was intended to be a more relatable and light-hearted take on the genre. Yes, we wanted to take the characters and their stories seriously, but not at the expense of having fun with Peter Parker along the way.

For us that meant that despite Spider-Man’s enhanced abilities, how we displayed those abilities on screen needed to always feel grounded in realism. If we faltered on this goal, we ran the risk of eroding the sense of peril and therefore any empathy toward the characters.

When you’re animating a superhero it’s not easy to keep the action relatable. When your characters possess abilities that you never see in the real world, it’s a very thin line between something that looks amazing and something that is amazingly silly and unrealistic. Over-extend the performances and you blow the illusion. Given that Peter Parker is a teenager and he’s coming to grips with the responsibilities and limits of his abilities, we really tried to key into the performances from Tom Holland for guidance.

The first tool at our disposal and the most direct representation of Tom as Spider-Man was, of course, motion capture of his performances. On three separate occasions we recorded Tom running through stunts and other generic motions. For the more dangerous stunts, wires and a stuntman were employed as we pushed the limit of what could be recorded. Even though the cables allowed us to record huge leaps, you couldn’t easily disguise the augmented feel to the actor’s weight and motion. Even so, every session provided us with amazing reference.

Though the bulk of the shots were keyframed, the animation was always informed by reference. We looked at everything that was remotely relevant for inspiration. For example, we have a scene in the warehouse where the Vulture’s wings are racing toward you as Spider-Man leaps into the air, stepping on the top of the wings before flipping to avoid the attack. We found this amazing reference of people who leap over cars racing in excess of 70mph. It’s absurdly dangerous and hard to justify why someone would attempt a stunt like that, and yet it was the perfect inspiration for our shot.

In trying to keep the performances grounded and stay true to the goals of the filmmakers, we also found it was always better to err on the side of simplicity when possible. Typically, when animating a character, you look for opportunities to create strong silhouettes so the actions read clearly, but we tended to ignore these rules in favor of keeping everything dirty and with an unscripted feel. We let his legs cross over and knees knock together. Our animation supervisor, Richard Smith, pushed our team to follow the guidelines of “economy of motion.” If Spider-Man needed to get from point A to B he’d take the shortest route — there’s no time to strike an iconic pose in-between!


Let’s talk a little bit about the third act. You had previsualizations from The Third Floor?
Right. All three of the main sequences we worked on in the third act had extensive previs completed before filming began. Janek worked extremely closely with The Third Floor and the director throughout the entire process of the film. In addition, Imageworks was tapped to help come up with ideas and takes. From early on it was a very collaborative effort on the part of the whole production.
The previs for the warehouse sequence was immensely helpful in the planning of the shoot. Given we were filming on location and the VFX shots would largely rely on carefully choreographed plate photography and practical effects, everything had to be planned ahead of time. In the end, the previs for that sequence resembled the final shots in most cases.

The digital performances of our CG Spider-Man varied at times, but the pacing and spirit remained true to the previs. As our plane battle sequence was almost entirely CG, the previs stage was more of an ongoing process for this section. Given that we weren’t locked into plates for the action, the filmmakers were free to iterate and refine ideas well past the time of filming. In addition to The Third Floor’s previs, Imageworks’ internal animation team also contributed heavily to the ideas that eventually formed the sequence.

For the beach battle, we had a mix of plate and all-CG shots. Here the previs was invaluable once again in informing the shoot and subsequent reshoots later on. As there were several all-CG beats to the fight, we again had sections where we continued to refine and experiment till late into post. As with the plane battle, Imageworks’ internal team contributed extensively to pre and postvis of this sequence.

The one scene, you mentioned — the fight in the warehouse — in the production notes, it talks about that scene being inspired by an actual scene from the comic The Amazing Spider-Man #33.
Yes, in our warehouse sequence there is a series of shots that are directly inspired by the comic book’s panels. Different circumstances in the comic and our sequence lead to Spider-Man being trapped under debris. However, Tom’s performance and the camera angles that were shot pay homage to the comic as he escapes. As a side note, many of those shots were added later in the production and filmed as reshoots.

What sort of CG enhancements did you bring to that scene?
For the warehouse sequence, we added digital Spider-Man, Vulture wings and CG destruction, enhanced any practical effects, and extended or repaired the plate as needed. The columns that the Vulture wings slice through as it circles Spider-Man were practically destroyed with small detonation charges. These explosives were rigged within cement that encased the warehouse’s actual steel girder columns. They had fans on set that were used to help mimic interaction from the turbulence that would be present from a flying wingsuit powered by turbines. These practical effects were immensely helpful for our effects artists, as they provided the best-possible in-camera reference. We kept much of what was filmed, adding our fully reactive FX on top to help tie it into the motion of our CG wings.

There’s quite a bit of destruction when the Vulture wings blast through walls as well. For those shots we relied entirely on CG rigid body dynamic simulations for the CG effects, as filming it would have been prohibitive and unreliable. Though most of the shots in this sequence had photographed plates, there were still a few that required the background to be generated in CG. One shot in particular, with Spider-Man sliding back and rising up, stands out. As the shot was conceived later in the production, there was no footage for us to use as our main plate. We did, however, have many tiles shot of the environment, which we were able to use to quickly reconstruct the entire set in CG.

I was particularly proud of our team for their work on the warehouse sequence. The quality of our CG performances and the look of the rendering is difficult to discern from the live action. Even the rare all-CG shots blended seamlessly between scenes.

When you were looking at that ending plane scene, what sort of challenges were there?
Since over 90 shots within the plane sequence were entirely CG, we faced many challenges, for sure. With such a large number of shots without the typical constraints that practical plates impose, we knew a turnkey pipeline was needed. There just wouldn’t be time to have custom workflows for each shot type. This was something Janek, our client-side VFX supervisor, stressed from the outset: “show early, show often and be prepared to change constantly!”

To accomplish this, a balance of 3D and 2D techniques was developed to make the shot production as flexible as possible. Using the 3D abilities of Nuke, our compositing software, we were able to offload significant portions of the shot production into the compositors’ hands. For example: the city ground plane you see through the clouds, the projections of the imagery on the plane’s cloaking LEDs and the damaged, flickering LEDs were all techniques done in the composite.

A unique challenge to the sequence that stands out is definitely the cloaking. Making an invisible jet was only half of the equation. The LEDs that made up the basis for the effect also needed to be able to illuminate our characters, in both wide and extreme close-up shots. We’re talking about millions of tiny light sources, which is a particularly expensive rendering problem to tackle. On top of that, the design of these flashing light sources is highly subjective and thus prone to needing many revisions to get the look right.

Painting control texture maps for the location of these LEDs wouldn’t be feasible for the detail needed on our extreme close-up shots. Modeling them in would have been prohibitive as well, resulting in excessive geometric complexity. Instead, using Houdini, our effects software, we built algorithms to automate the distribution of point clouds of data to intelligently represent each LED position. This technique could be reprocessed as necessary without incurring the large amounts of time a texture or model solution would have required. As the plane base model often needed adjustments to accommodate design or performance changes, this was a real factor. The point cloud data was then used by our rendering software to instance geometric approximations of inset LED compartments on the surface.
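The area-weighted scattering idea described above can be sketched in a few lines. This is an illustrative Python approximation, not Imageworks' actual Houdini setup: triangles are sampled in proportion to their surface area, then a uniform point is picked inside each one, so LED density stays even across the hull and, with a fixed seed, the point cloud can be regenerated deterministically whenever the plane model changes. All names here are hypothetical.

```python
import random

def triangle_area(a, b, c):
    # Area via the cross product of two edge vectors.
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5

def scatter_leds(triangles, count, seed=0):
    """Return `count` points spread evenly over a triangle mesh surface."""
    rng = random.Random(seed)  # fixed seed: reprocessing is repeatable
    areas = [triangle_area(*t) for t in triangles]
    points = []
    for _ in range(count):
        # Pick a triangle with probability proportional to its area.
        (a, b, c), = rng.choices(triangles, weights=areas, k=1)
        # Uniform barycentric sample inside the chosen triangle.
        r1, r2 = rng.random(), rng.random()
        s1 = r1 ** 0.5
        w0, w1, w2 = 1.0 - s1, s1 * (1.0 - r2), s1 * r2
        points.append(tuple(w0 * a[i] + w1 * b[i] + w2 * c[i]
                            for i in range(3)))
    return points
```

Each resulting point would then be handed to the renderer as an instancing location, as the interview describes, rather than modeled or painted by hand.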

Interestingly, this was a technique we adopted from rendering technology we use to create room interiors for our CG city buildings. When rendering large CG buildings we can’t afford to model the hundreds and sometimes thousands of rooms you see through the office windows. Instead of modeling the complex geometry you see through the windows, we procedurally generate small inset boxes for each window that have randomized pictures of different rooms. This is the same underlying technology we used to create the millions of highly detailed LEDs on our plane.
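The room-interior trick Bailek describes can be illustrated with a small sketch. This is an assumption about the general shape of such a generator, not the studio's pipeline code: for each window cell on a facade, emit one shallow inset box and assign it a randomized room picture, instead of modeling the room geometry.

```python
import random

def window_insets(cols, rows, num_room_images, depth=0.5, seed=7):
    """Generate one inset-box record per window cell on a facade grid."""
    rng = random.Random(seed)  # deterministic per building
    boxes = []
    for col in range(cols):
        for row in range(rows):
            boxes.append({
                "cell": (col, row),                       # window position
                "depth": depth,                           # how far the fake room recedes
                "room_image": rng.randrange(num_room_images),  # randomized picture
                "lit": rng.random() < 0.4,                # some rooms have lights on
            })
    return boxes
```

At render time, each record would drive an instanced box whose interior faces are textured with the chosen room image — the same instancing idea the plane's LED compartments reused.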

First our lighters supplied base renders to our compositors to work with inside of Nuke. The compositors quickly animated flashing damage to the LEDs by projecting animated imagery on the plane using Nuke’s 3D capabilities. Once we got buyoff on the animation of the imagery we’d pass this work back to the lighters as 2D layers that could be used as texture maps for our LED lights in the renderer. These images would instruct each LED when it was on and what color it needed to be. This back and forth technique allowed us to more rapidly iterate on the look of the LEDs in 2D before committing and submitting final 3D renders that would have all of the expensive interactive lighting.
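The 2D-to-3D handoff described above boils down to a texture lookup per LED: each LED carries a UV coordinate, and the compositors' rendered control image is sampled at that UV each frame to decide whether the light is on and what color it emits. This is a minimal, hypothetical Python sketch of that idea with a plain nested list standing in for the image.

```python
def sample_control_map(image, u, v):
    """Nearest-neighbor lookup of an (r, g, b) pixel at normalized UV."""
    height, width = len(image), len(image[0])
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return image[y][x]

def led_emission(leds, image, threshold=0.05):
    """Return a per-LED color; LEDs darker than `threshold` are switched off."""
    out = []
    for led in leds:
        r, g, b = sample_control_map(image, led["u"], led["v"])
        if max(r, g, b) < threshold:
            out.append((0.0, 0.0, 0.0))  # off: skip it as a light source
        else:
            out.append((r, g, b))
    return out
```

Because the expensive part (interactive lighting) only happens once the 2D look is approved, iterating on the control image is cheap compared with re-rendering.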

Is that a proprietary system?
Yes, this is a shading system that was actually developed for our earlier Spider-Man films back when we used RenderMan. It has since been ported to work in our proprietary version of Arnold, our current renderer.

The hybridization of VFX and motion design

Plus the rise of the small studio

By Miguel Lee

There has long been a dichotomy between motion graphics and VFX because they have traditionally serviced very different creative needs. However, with the democratization of tools and the migration of talent between these two pillars of the CG industry, a new “hybrid” field of content creators is emerging. And for motion designers like myself, this trend reflects the many exciting things taking place in our industry today, especially as content platforms increase at an incredible rate with smartphones and new LED technologies, not to mention a renaissance in the fields of VR, AR, and projection mapping, to name a few.

Miguel Lee

I’ve always likened the comparison of motion graphics and VFX to the Science Club and the Art Club we remember from school. VFX has its roots in an objective goal: to seamlessly integrate CG into the narrative or spectacle in a convincing and highly technical way. Motion graphics, on the other hand, can be highly subjective. One studio, for instance, might produce a broadcast package laden with 3D animations, whereas another studio will opt for a more minimal, graphical approach to communicating the same brand. A case can typically be made for either direction.

So where do the new “hybrid” studios fit into this analogy? Let’s call them the “Polymath Club,” given their ability to tap into the proverbial hemispheres of the brain — the “left” representing their affinity for the tools, and the “right” driving the aesthetics and creative. With this “Polymath” mentality, CG artists are now able to generate work that was once only achievable by a large team of artists and technicians. Concurrently, it is influencing the hybridization of the CG industry at large, as VFX companies build motion design teams in-house, while motion graphics studios increasingly incorporate VFX tools into their own workflows.

As a result, we’ve seen a proliferation of the “lean-and-mean” production studio over the last few years. Their rise is the direct result of the democratization of our industry, where content creation tools have significantly evolved in terms of technology, accessibility and reliability. One such example is the dramatic increase in render power with the rise of third-party GPU renderers, such as Otoy’s Octane and Redshift, which have essentially made 3D photorealism more attainable. Cloud rendering solutions have also popped up for conventional and third-party renderers, mitigating the need to build out expensive renderfarms, a luxury still reserved for companies of a certain size.

Otoy’s Octane being used on one of Midnight Sherpa’s jobs.

Motion artists, too, have become far more adventurous in employing VFX-specific software like Houdini, which has simultaneously become far more accessible and egalitarian without any compromise to its capability. Maxon’s Cinema 4D, the heavily favored 3D application in motion graphics, has a long tradition of implementing efficient software-specific workflows that bridge its ecosystem to other programs. Coding and script-based animation have also found a nice home in the motion design repertoire as inventive and efficient ways to generate content. Even the barrier to entry for creating VR and AR content has eased quite a bit with the latest releases of both the Unity and Unreal engines.

Aside from lower overhead costs, the horizontal work structure of the “lean-and-mean” model has also cultivated truly collaborative environments where artists of trans-disciplinary backgrounds can work together in more streamlined ways than can be done in an oversized team. In many cases, these smaller studios are forced to develop workflows that more effectively reflect their team’s makeup — these systems often enjoy more success because they reflect the styles and even the personalities of the core teams that institute them.

The nature of being small also pushes you to innovate and develop greater efficiencies, rather than just throwing more bodies at the problem. These solutions and workflows are often baked into the core team and rolled out on future projects. Smaller studios also have a reputation for cultivating talent. Junior artists and interns are often put on a wider range of projects and into more roles out of necessity to fulfill the various needs of production, whereas they are typically relegated to a single role at larger studios and oftentimes are not afforded the opportunity to branch out. This conversely creates an incentive to hire artists with the intent of developing them over a long term.

There are downsides, of course, to being small, chief among them how quickly these studios reach physical capacity, at which point jobs have to be turned down. Still, the proliferation of small studios means more voices in the landscape of content, which in turn contributes directly to the greater evolution of design as a whole.

Now that the playing field has been technologically equalized, the key between failure and success for many of these companies lies in whether or not they can craft a voice that is unique amongst their peers in an increasingly saturated landscape.

Main Image: Audi – Photorealism is more achievable in a streamlined production pipeline.


Miguel Lee is partner/creative director at LA’s Midnight Sherpa, a boutique creative studio for brands and entertainment.

The importance of on-set VFX supervision

By Karen Maierhofer

Some contend that having a visual effects supervisor present on set during production is a luxury; others deem it a necessity. However, few, if any, see it as unnecessary.

Today, more and more VFX supes can be found alongside directors and DPs during filming, advising and problem-solving, with the goal of saving valuable time and expense during production and, later, in post.

John Kilshaw

“A VFX supervisor is on set and in pre-production to help the director and production team achieve their creative goals. By having the supervisor on set, they gain the flexibility to cope with the unexpected and allow for creative changes in scope or creative direction,” says Zoic Studios creative director John Kilshaw, a sought-after VFX supervisor known for his collaborative creative approach.

Kilshaw, who has worked at a number of top VFX studios including ILM, Method and Double Negative, has an impressive resume of features, among them The Avengers, Pirates of the Caribbean: On Stranger Tides, Mission: Impossible – Ghost Protocol and various Harry Potter films. More recently, he was visual effects supervisor for the TV series Marvel’s The Defenders and Iron Fist.

Weta Digital’s Erik Winquist (Apes trilogy, Avatar, The Hobbit: An Unexpected Journey) believes the biggest contribution a VFX supervisor can make while on set comes during prep. “Involving the VFX supervisor as early as possible can only mean less surprises during principal photography. This is when the important conversations are taking place between the various heads of departments. ‘Does this particular effect need to be executed with computer graphics, or is there a way to get this in-camera? Do we need to build a set for this, or would it be better for the post process to be greenscreen? Can we have practical smoke and air mortars firing debris in this shot, or is that going to mess with the visual effects that have to be added behind it later?’”

War for the Planet of the Apes via Weta Digital

According to Winquist, who is VFX supervisor on Rampage (2018), currently in post production, having a VFX supe around can help clear up misconceptions in the mind of the director or other department heads: “No, putting that guy in a green suit doesn’t make him magically disappear from the shot. Yes, replacing that sky is probably relatively straightforward. No, modifying the teeth of that actor to look more like a vampire’s while he’s talking is actually pretty involved.”

Both Kilshaw and Winquist note that it is not uncommon to have a VFX supervisor on set whenever there are shots that include visual effects. In fact, Winquist has not heard of a major production that didn’t have a visual effects supervisor present for principal photography. “From the filmmaker’s point of view, I can’t imagine why you would not want to have your VFX supervisor there to advise,” he says. “Film is a collaborative medium. Building a solid team is how you put your vision up on the screen in the most cost-effective way possible.”

At Industrial Light & Magic, which has a long list of major VFX film credits, it is a requirement. “We always have a visual effects supervisor on set, and we insist on it. It is critical to our success on a project,” says Lindy De Quattro, VFX supervisor at ILM. “Frankly, it terrifies me to think about what could happen without one present.”

Lindy De Quattro

For some films, such as Evan Almighty, Pacific Rim, Mission: Impossible — Ghost Protocol and the upcoming Downsizing, De Quattro spent an extended period on set, while for many others she was only present for a week or two while big VFX scenes were shot. “No matter how much time you have put into planning, things rarely go entirely as planned. And someone has to be present to make last-minute adjustments and changes, and deal with new ideas that might arise on that day — it’s just part of the creative process,” she says.

For instance, while working on Pacific Rim, director Guillermo del Toro would stay up until the wee hours of the night making new boards for what would be shot the following day. The next morning, everyone would crowd around his hand-drawn sketches and notebooks and he would say, “OK, this is what we are shooting.” So the VFX team had to be prepared and do everything in its power to help ensure that the director’s vision became reality on screen.

“I cannot imagine how they would have gone about setting up the shots if they didn’t have a VFX supervisor on set. Someone has to be there to be sure we are gathering the data needed to recreate the environment and the camera move in post, to be sure these things, and the greenscreens, are set up correctly so the post is successful,” De Quattro says. If you don’t know to put in greenscreen, you may be in a position where you cannot extract the foreground elements the way you need to, she warns. “So, suddenly, two days of an extraction and composite turns into three weeks of roto and hair replacement, and a bunch of other time-consuming and expensive work because it wasn’t set up properly in initial photography.”

Sometimes, a VFX supervisor ends up running the second unit, where the bulk of the VFX work is done, if the director is at a different location with the first unit. This was the case recently when De Quattro was in Norway for the Downsizing shoot. She ended up overseeing the plate unit and did location scouting with the DP each morning to find shots or elements that could be used in post. “It’s not that unusual for a VFX supervisor to operate as a second unit director and get a credit for that work,” she adds.

Kilshaw often finds himself discussing the best way of achieving the show’s creative goals with the director and producer while on set. Also, he makes sure that the producer is always informed of changes that will impact the budget. “It becomes very easy for people to say, ‘we can fix this in post.’ It is at this time when costs can start to spiral, and having a VFX supervisor on set to discuss options helps stop this from happening,” he adds. “At Zoic, we ensure that the VFX supervisor is also able to suggest alternative approaches that may help directors achieve what they need.”

Erik Winquist

According to Winquist, the tasks a VFX supe does on set depend on the size of the budget and crew. In a low-budget production, a person might be doing a myriad of different tasks themselves: creating previs and techvis, working with the cinematographer and key grip concerning greenscreen or bluescreen placement, placing tracking markers, collecting camera information for each setup or take, shooting reference photos of the set, helping with camera or lighting placement, gathering lighting measurements with gray and chrome reference spheres — basically any information that will help the person best execute the visual effects requirements of the shot. “And all the while being available to answer questions the director might have,” he says.

If the production has a large budget, the role is more about spreading out and managing those tasks among an on-set visual effects team: data wranglers, surveyors, photographers, coordinators, PAs, perhaps a motion capture crew, “so that each aspect of it is done as thoroughly as possible,” says Winquist. “Your primary responsibility is being there for the director and staying in close communication with the ADs so that you or your team are able to get all the required data from the shoot. You only have one chance to do so.”

The benefits of on-set VFX supervision are not just for those working on big-budget features, however. As Winquist points out, the larger the budget, the more demanding the VFX work and the higher the shot count, therefore the more important it is to involve the VFX supervisor in the shoot. “But it could also be argued that a production with a shoestring budget also can’t afford to get it wrong or be wasteful during the shoot, and the best way to ensure that footage is captured in a way that will make for a cost-effective post process is to have the VFX supervisor there to help.”

Kilshaw concurs. “Regardless of whether it is a period drama or superhero show, whether you need to create a superpower or a digital version of 1900 New York, the advantages of visual effects and visual effects supervision on set are equally important.”

While De Quattro’s resume is overflowing with big-budget VFX films, she has also assisted on smaller projects where a VFX supervisor’s presence was critical. She recalls a commercial shoot, one that prompted her to question the need for her presence. However, production hit a snag when a young actor was unable to physically accomplish a task during multiple takes, and she was able to step in and offer a suggestion, knowing it would require just a minor VFX fix. “It’s always something like that. Even if the shoot is simple and you think there is no need, inevitably someone will need you and the input of someone who understands the process and what can be done,” she says.

De Quattro’s husband is also a VFX supervisor who is presently working on a non-VFX-driven Netflix series. While he is not on set every day, he is called when there is an effects shoot scheduled.

Mission: Impossible – Ghost Protocol

So, with so many benefits to be had, why would someone opt not to have a VFX supervisor on set? De Quattro assumes it is the cost. “What’s that saying, ‘penny wise and pound foolish?’ A producer thinks he or she is saving money by eliminating the line item of an on-set supervisor but doesn’t realize the invisible costs, including how much more expensive the work can be, and often is, on the back end,” she notes.

“On set, people always tell me their plans, and I find myself advising them not to bother building this or that — we are not going to need it, and the money saved could be better utilized elsewhere,” De Quattro says.

On Mission: Impossible, for example, the crew was filming a complicated underwater escape scene with Tom Cruise and finally got the perfect take, only to find his emergency rig exposed in the shot. Rather than have the actor go back into the frigid water for another take, De Quattro assured the team that the rig could be removed in post within the original scope of the VFX work. While most people are aware that such fixes are possible, having someone with the authority and knowledge to confirm it on the spot was a relief, she says.

Despite their extensive knowledge of VFX, these supervisors all say they support the best tool for the job on set and, mostly, that is to capture the shot in-camera first. “In most instances, the best way to make something look real is to shoot it real, even if it’s ultimately just a small part of the final frame,” Winquist says. However, when factors conspire against that, whether it be weather, animals, extras, or something similar, “having a VFX supervisor there during the shoot will allow a director to make decisions with confidence.”

Main Image: Weta’s Erik Winquist on set for Planet of the Apes.

Transitioning from VFX artist to director

By Karen Maierhofer

It takes a certain type of person to be a director — someone who has an in-depth understanding of the production process; is an exceptional communicator, planner and organizer; who possesses creative vision; and is able to see the big picture where one does not yet exist. And those same qualities can be found in a visual effects or CG supervisor.

In fact, there are a number of former visual effects artists and supes who have made the successful transition to the director’s chair – Neill Blomkamp (District 9), Andrew Adamson (Shrek, Narnia), Carlos Saldanha (Ice Age, Rio) and Tim Miller (Deadpool), to name a few. And while VFX supervisors possess many of the skills necessary for directing, it is still relatively uncommon for them to bear that credit, whether it is on a feature film, television series, commercial, music video or other project.

Armen Kevorkian
Armen Kevorkian, VFX supervisor and executive creative director at Deluxe’s Encore, says, “It’s not necessarily a new trend, but it’s really not that common.”

Armen Kevorkian (flannel shirt) on set.

Kevorkian, who has a long list of visual effects credits on various television series — two of which he has also directed episodes of (Supergirl and The Flash) — has always wanted to direct but embraced VFX first, winning an Emmy and three Leo Awards in addition to garnering multiple nominations for that work. “It’s all about filmmaking and storytelling. I loved what I was doing but always wanted to pursue directing, although I was not going to be pushy about it. If it happened, it happened.”

Indeed, it happened. And having the VFX experience gave Kevorkian the confidence and skills to handle being a director. “A VFX supervisor is often directing the second unit, which makes you comfortable with directing. When you direct an entire episode, though, it is not just about a few pieces; it’s about telling an entire story. That is something you learn to handle as you go.”

As a VFX supe, Kevorkian was often present from start to finish and was able to see the whole process, learning what worked and what didn’t. “With VFX, you are there for prep, shooting and post — the whole gamut. Not many other departments get to experience that,” he says.

When he was given the chance to direct an episode, Kevorkian was “the visual effects guy directing.” Luckily, he had worked with the actors on previous episodes in his VFX role and had a good relationship with them. “They were really supportive, and I couldn’t have done it without that, but I can see situations where you might be treated differently because your background is visual effects, and it takes more than that to tell a story and direct a full episode,” he adds.

Proving oneself can be scary, and Kevorkian has known others who directed one project and never did it again. Not so for Kevorkian, who has now directed three episodes of The Flash and one episode of Supergirl thus far, and will direct another Supergirl episode later this year.

While the episodes he has directed were not VFX-heavy, he foresees times when he will have to make a certain decision on the spot, and knowing that something can be fixed easily and less expensively in post, as opposed to wasting precious time trying to fix it practically, will be very helpful. “You are not asking the VFX guy, ‘Hey, is this going to work?’ You pretty much know the answer because of your background,” he explains.

Despite his turn directing, Kevorkian is still “the VFX guy” for the series. “I love VFX and also love directing,” he says, hoping to one day direct feature films. “A lot of people think they want to direct but don’t realize how difficult it can be,” he adds.

HaZ Dulull
Hasraf “HaZ” Dulull doesn’t see VFX artists becoming directors as so unique anymore — “there are more of us now” — and recognizes the advantages such a background can bring to the new role.

“The type of films I make are considered high-concept sci-fi, which rely on VFX to help present the vision and tell the story. But it’s not just putting pretty pixels on screen as an artist that has helped me; it was also being in VFX management roles. That meant I spent a lot of time with TV showrunners and film producers on set and in the edit bay,” says Dulull. “I learned a lot from that, such as how to deal with producers, executive producers and timelines. And all the other exposure I got in my VFX management role helped me prep for directing/producing a film.”

Dulull has an extensive resume, having worked as a VFX artist on films such as The Dark Knight and Prince of Persia, before moving into a supervisor role on TV shows including Planet Dinosaur and America: The Story of Us, and then into a VFX producer role. While working in VFX, he created several short films, and one of them — Project Kronos — went viral and caught the attention of Hollywood producers. Soon after, Dulull directed his first feature, The Beyond, which will be released the first quarter of next year by Gravitas Ventures. Another, Origin Unknown, based on a story he wrote, will be released later in 2018 by Content.

Before making the transition to director, Dulull had to overcome the stigma of being a first-time director — despite the online success of three of his short films. At the time, “film investors and studios were not too keen on throwing money at me yet to make a feature.” Frustrated, he decided to take the plunge and used his savings to finance his debut feature film, The Beyond, based on Project Kronos. That move later caught the attention of some investors, who helped finance the remaining post budget.

For Dulull, his VFX background is a definite plus when it comes to directing. “When I say we can add a giant alien sphere in the sky while our character looks out of the car window, with helicopters zipping by, I can say it with confidence. Also, when financiers/producers look at the storyboards and mood boards and see the amount of VFX in there, they know they have a director who can handle that and use VFX smartly as a tool to tell the story. This is as opposed to a director who has no experience in VFX and whose production would probably end up costing more due to the lack of education and wrong decisions, or trial and errors made on set and in post.”

The Beyond, courtesy of HaZ Film LTD.

Because of VFX, Dulull has learned to always shoot clean plates and not to encourage the DP to do zooms or whip pans when a scene has VFX elements. “For The Beyond, there are digital body replacements, and although this was not the same budget as Batman v Superman, we were still able to do it because all the camera moves were on sliders and we acquired a lot of data on the day of the shoot. In fact, I ensured I had budget to hire a tracking master on set who would gather all the data required to get an accurate object and camera track later in CG,” he says.

Dulull also plans for effects early in production, making notes during the script stages concerning the VFX and researching ideas on how to achieve them so that the producers can budget for them.

While on set, though, he focuses on the actors and HODs, and doesn’t get too involved with the VFX beyond showing actors a Photoshop mockup he might have done the night before a greenscreen shoot, to give them a sense of what will be occurring in the scene.

Yet, oftentimes Dulull’s artist side takes over in post. On The Beyond, he handled 75 to 80 percent of the work (mainly compositing), while CG houses and trusted freelancers did the CGI and rendering. “It was my baby and my first film, and I was a control freak on every single shot — the curse of having a VFX background,” he says. On his second feature, Origin Unknown, he found it easier to hand off the work — in this instance it was to Territory Studio.

“I still find I end up doing a lot of the key creative VFX scenes merely because there is no budget for it and basically because it was created during the editorial process — which means you can’t go and raise more money at this stage. But since I can do those ideas myself, I can come up with the concepts in the editorial process and pay the price with long nights and lots of coffee with support from Territory – but I have to ensure I don’t push the VFX studio to the breaking point with overages just because I had a creative burst of inspiration in the edit!” he says.

However, Dulull is confident that on his next feature he will be hands-off on the VFX and focused on the time-demanding duties of directing and producing, though he will still be involved in designing the VFX, working closely with Territory.

When it comes to outsourcing the VFX, knowing how much they cost helps keep that part of the budget from getting out of hand, Dulull says. And being able to offer up solutions or alternatives enables a studio to get a shot done faster and with better results.

Freddy Chavez Olmos
Freddy Chavez Olmos got the filmmaking/directing bug at an early age while recording horror-style home movies. Later, he found himself working in the visual effects industry in Vancouver, and counts many impressive VFX credits to his name: District 9, Pacific Rim, Deadpool, Chappie, Sin City 2 and the upcoming Blade Runner 2049. He also writes and directs projects independently, including the award-winning short films Shhh (2012) and Leviticus 24:20 (2016) — both in collaboration with VFX studio Image Engine — and R3C1CL4 (2017).

Working in visual effects, particularly compositing, has taught Olmos the artistic and technical sides of filmmaking during production and post, helping him develop a deeper understanding of the process and improving his problem-solving skills on set.

As more features rely on the use of VFX, having a director or producer with a clear understanding of that process has become almost necessary, according to Olmos. “It’s a process that requires constant feedback and clear communication. I’ve seen a lot of productions suffer visually and budget-wise due to a lack of decision-making in the post production process.”

Olmos has learned a number of lessons from VFX that he believes will help him on future directorial projects:
• Avoid last-minute changes.
• Don’t let too many cooks in the kitchen.
• Be clear on your feedback and use references when possible.
• If you can fix it on set, don’t leave it for post to handle.
• Always stay humble and give credit to those who help you.
• CG is time-consuming and expensive. If it doesn’t serve your story, don’t use it.
• Networking and professional relationships are crucial.
• Don’t become a pixel nitpicker. No one will analyze every single frame of your film unless you work on a Star Wars sequel. Your VFX crew will be more gracious to you, too.

Despite his VFX experience, Olmos, like others interviewed for this article, tries to use a practical approach first while in the director’s seat. Nevertheless, he always keeps the “VFX side of his brain open.”

For instance, the first short film he co-directed called for a full-body creature. “I didn’t want to go full CG with it because I knew we could achieve most of it practically, but I also understood the limitations. So we decided to only ‘digitally enhance’ what we couldn’t do on set and become more selective in our shot list,” he explains. “In the end, I was glad we worked as efficiently as we did on the project and didn’t have any throw-away work.”

Shhh film

While some former VFX artists/supervisors may find it difficult to hand off a project they directed to a VFX facility, Olmos maintains that as long as there is someone he trusts on set who is always by his side, he is able to detach himself “from micromanaging that part,” he says, although he does like to be heavily involved in the storyboarding and previs processes whenever possible. “A lot of the changes happen during that stage, and I like giving freedom to the VFX supervisor on set to do what he thinks is best for the project,” says Olmos.

“A few years ago, there were two VFX artists who became mainstream directors because they knew how to tell a good story using visual effects as a supporting platform (Neill Blomkamp and Gareth Edwards, Godzilla, Rogue One). Now there is a similar wave of talented filmmakers with a VFX and animation background doing original short projects,” says Olmos. “We have common interests, and I have become friends with a lot of them. I have no doubt they will end up doing big things in the near future.”

David Mellor
David Mellor is the creative director of Framestore’s new Chicago office and a director with the studio’s production company Framestore Pictures. With a background in computer visualization and animation, he started out in a support role with the rendering team and eventually transitioned to commercials and music videos, working his way up to CG lead and head of the CG department in the studio’s New York office.

In that capacity, Mellor was exposed to the creative side and worked with directors and agencies, and that led to the creative director and director roles he now enjoys.

Mellor has directed spots for Chick-fil-A (VR and live action), Redd’s Wicked Apple, Chex Mix and a series for Qualcomm’s Snapdragon.

Without hesitation, Mellor credits his VFX experience for helping him prepare for directing in that it enables him to “see” the big picture and final result from a fragment of elements, giving him a more solid direction. “VFX supervisors have a full understanding of how to build a scene, how light and camera work, and what effect lensing has,” he says.

Additionally, VFX supervisors are prepared to react to a given situation, as things are always changing. They also have to be able to break down a shot in moments on set and run the whole shoot, prep to finish, through their head when asked a question by a director or DP. “So it gives you this very good instinct as a director and allows you to see beyond what’s in front of you,” Mellor says. “It also allows you to plan well and be creative while looking at the entire timeline of the project. ‘Fix it in post’ is no longer acceptable with everyone wanting more for less time/money.”

And as projects become larger and incorporate more effects, directors like Mellor will be able to tackle them more efficiently and with higher quality, knowing all that is needed to produce the final piece. He also values his ability to communicate and collaborate, skills that are necessary for effects supervisors on big VFX projects.

“Our career path to directing hasn’t been the traditional one, but we have more exposure working with the client from conception through to a project’s finish. That means collaboration is a big aspect for me, working toward the best result holistically within the parameters of time and budget.”

Still, Mellor believes the transition to director for a VFX supervisor remains rare. One reason is that a person often becomes pigeonholed in a role.

While their numbers are still low, VFX artists/supervisors-turned-directors are making their mark across various genres, proving themselves capable and worthy of the much-deserved moniker of director, and in doing so, are helping to pave the way for others in visual effects roles.

Our Main Image: The Beyond, courtesy of HaZ Film LTD.

The Third Floor: Previs and postvis for Wonder Woman

To help realize the cinematic world of Warner Bros.’s Wonder Woman, artists at The Third Floor London, led by Vincent Aupetit, visualized key scenes using previs and postvis. Work spanned nearly two years, as the team collaborated with director Patty Jenkins and visual effects supervisor Bill Westenhofer to map out key action and visual effects scenes.

Previs was also used to explore story elements and to identify requirements for the physical shoot as well as visual effects. Following production, postvis shots with temp CG elements stood in for finals as the editorial cut progressed.

We checked in with previs supervisor Vincent Aupetit at The Third Floor London to find out more.

Wonder Woman is a good example of filmmaking that leveraged not just the technical, but also the creative advantages of previs. How can a director maximize the benefits of having a previs team?
Each project is different, with different needs and opportunities as well as creative styles, but for Wonder Woman our director worked very closely with us and got involved with previs and postvis as much as she could. Even though this was her first time using previs, she was open and enthusiastic and quickly recognized the possibilities. She engaged with us and used our resources to further develop the ideas she had for the story and action, including iconic moments she envisioned for the main character. Seeing the ideas she was after successfully portrayed as moving previs was exciting for her and motivating for us.

How do you ensure what is being visualized translates to what can be achieved through actual filming and visual effects?
We put a big emphasis on shooting methodology and helping with requirements for the physical shoot and visual effects work — even when we are not specifically doing techvis diagrams or schematics. We conceive previs shots from the start with a shooting method in mind to make sure no shots represented in previs would prove impossible to achieve down the line.

What can productions look to previs for when preparing for large-scale visual effects scenes?
Of course, previs can be an important guide in deciding what parts of sets to build, determining equipment, camera and greenscreen needs and having a roadmap of shots. The previs team is in a position to gather input across many departments — art department, camera department, stunt department and visual effects — and effectively communicate the vision and plan.

But another huge part of it is creating a working visual outline for what the characters are doing and what action is happening. If a director wants to try different narrative beats, or put them in a new order, they can do that in the previs world before committing to the shoot. If they want to do multiple iterations, it’s possible to do that before embarking on production. All of this helps streamline complexities that are already there for intensive action and visual effects sequences.

On Wonder Woman, we had a couple of notable scenes, including the beach battle, where we combined previs, storyboards and fight tests to convey a sense of how the story and choreography would unfold. Another was the final battle in the third act of the film. It’s an epic 40 minutes that includes a lot of conceptual development. What is the form and shape of Ares, the movie’s antagonist, as he evolves and reveals his true god nature? What happens in each blow of his fight with Diana on the airfield? How do her powers grow, and what do those abilities look like? Previs can definitely help answer important questions that influence the narrative as well as the technical visuals to be produced.

How can directors leverage the postvis process?
Postvis has become more and more instrumental, especially as sequences go through editorial versions and evolving cuts. For Wonder Woman, the extensive postvis aided the director in making editorial choices when she was refining the story for key sequences.

Being able to access postvis during and after reshoots was very helpful as well. When you can see a more complete picture of the scene you have been imagining, with temp characters and backdrops in place, your decisions are much more informed.

How do you balance the ability to explore ideas and shots with the need to turn them around quickly?
This is one of the qualities of previs artists — we need to be both effective and flexible! Our workflow has to sustain and keep track of shots, versions and approvals. On Wonder Woman, our on-board previs editor literally did wonders keeping the show organized and reacting near instantaneously to director or visual effects supervisor requests.

The pace of the show and the will to explore and develop with a passionate director led to our producing an astonishing number of shots at a very rapid rate despite a challenging schedule. We also had a great working relationship, where we were trusted truly and fully by the client and repaid this trust by meeting deliveries with a high level of professionalism and quality.

More speakers added for Italy’s upcoming View Conference

More than 50 speakers are confirmed for 2017’s View Conference, a digital media conference that takes place in Turin, Italy, from October 23-27. Those speakers include six visual effects Oscar winners, two Academy Sci-Tech award winners, animated feature film directors, virtual reality pioneers, computer graphics researchers, game developers, photographers, writers and studio executives.

“One of the special reasons to attend View is that our speakers like to stay for the entire week and attend talks given by the other speakers, so our attendees have many opportunities to interact with them,” says conference director Dr. Maria Elena Gutierrez. “View brings together the world’s best and brightest minds across multiple disciplines, in an intimate and collaborative place where creatives can incubate and celebrate.”

Newly confirmed speakers include:

Scott Stokdyk – This Academy Award winner (VFX supervisor, Valerian and the City of a Thousand Planets) will showcase VFX from the film – from concept, design and inspiration to final color timing.

Paul Debevec – This Academy Award winner (senior staff engineer, Google VR, ICT) will give attendees a glimpse inside the latest work from Google VR and ICT.

Martyn Culpitt – This VFX supervisor at Image Engine will break down the film Logan, highlighting the visual effects behind Wolverine’s gripping final chapter.

Jan-Bart Van Beek – This studio art director at Guerrilla Games will take attendees through the journey that Guerrilla Games underwent to design the post-apocalyptic world of the game franchise, Horizon Zero Dawn.

David Rosenbaum – This chief creative officer at Cinesite Studios, along with Cinesite EP Warren Franklin, will present a talk titled “It’s All Just Funny Business: Looking for IP, Talent and Audiences.”

Elisabeth Morant – This product manager for Google’s Tilt Brush will discuss the company’s VR painting application in a talk called “Real Decisions, Virtual Space: Designing for VR.”

Donald Greenberg – This professor of computer graphics at Cornell University will be discussing the “Next-gen of Virtual Reality.”

Steve Muench – He will present “The Labor of Loving Vincent: Animating Van Gogh to Solve a Mystery.”

Deborah Fowler – This professor of visual effects at Savannah College of Art and Design/SCAD will showcase “Procedural and Production Techniques using Houdini.”

Daniele Federico – This co-founder and developer at Toolchefs will present “Make us Alive. An In-Depth Look at Atoms Crowd Software.”

Jason Bickerstaff – This character artist from Pixar Animation Studios will present “Crossing The Dimensional Rift.”

Steve Beck – This VFX art director from ILM will discuss “The Future of Storytelling.”

Nancy Basi – She is executive director of the Film and Media Centre – Vancouver Economic Commission.

For a complete listing of speakers, visit http://www.viewconference.it/speakers

 

Director Ava DuVernay named VES Summit’s keynote speaker

Director/producer/writer Ava DuVernay has been named keynote speaker at the 2017 VES Summit, “Inspiring Change: Building on 20 Years of VES Innovation.” The forum, which takes place Saturday, October 28, celebrates the Visual Effects Society’s 20th anniversary and brings together creatives, executives and visionaries from a variety of disciplines to discuss the evolution of visual imagery and the VFX industry landscape in a TED Talks-like atmosphere.

At the 2012 Sundance Film Festival, DuVernay won the Best Director Prize for her second feature film Middle of Nowhere, which she also wrote and produced. For her work on Selma in 2014, she was nominated for the Academy Award for Best Picture. In 2017, she was nominated for the Academy Award for Best Documentary Feature for her film 13th. Her current directorial work includes the dramatic television series Queen Sugar, and the upcoming Disney feature film A Wrinkle in Time.

DuVernay made her directorial debut with the acclaimed 2008 hip-hop documentary This Is The Life, and she has gone on to direct several network documentaries, including Venus Vs. for ESPN. She has also directed significant short-form work, including August 28: A Day in the Life of a People, commissioned by The Smithsonian’s National Museum of African American History and Culture, as well as fashion and beauty films for Prada and Apple.

Other speakers include:
– Syd Mead, visual futurist and conceptual artist
– President of IMAX Home Entertainment Jason Brenek on “Evolution in Entertainment: VR, Cinema and Beyond”
– CEO of SSP Blue Hemanshu Nigam on “When Hackers Attack: How Can Hollywood Fight Back?”
– Head of Adobe Research Gavin Miller on “Will the Future Look More Like Harry Potter or Star Trek?”
– Senior research engineer at Autodesk Evan Atherton on “The Age of Imagination”
– Founder/CEO of the Emblematic Group Nonny de la Peña on “Creating for Virtual, Augmented and Mixed Realities”

Additional speakers and roundtable moderators will be announced soon. The 2017 VES Summit takes place at the Sofitel Hotel Beverly Hills.

Chaos Group acquires Render Legion and its Corona Renderer

Chaos Group has purchased Prague-based Render Legion, creator of the Corona Renderer. With this new product and Chaos Group’s own V-Ray, the company is offering even more rendering solutions for M&E and the architectural visualization world.

Known for its ease of use, the Corona Renderer has become a popular choice for architectural visualization, but according to Chaos Group’s David Tracy, “There are a few benefits for M&E. Corona plans to implement some VFX-related features, such as hair and skin with the help of the V-Ray team. Also, Corona is sharing technology, like the way they optimize dome lights. That will definitely be a benefit for V-Ray users in the VFX space.”

The Render Legion team, including its founders and developers, will join Chaos Group as they continue to develop Corona using additional support and resources provided through the deal.

Chaos Group’s Academy Award-winning renderer, V-Ray, will continue to be a core component of the company’s portfolio. Both V-Ray and Corona will benefit from joint collaborations, bringing complementary features and optimizations to each product.

The Render Legion acquisition is Chaos Group’s largest investment to date. It is the company’s third investment in a visualization company in the last two years, following interactive presentation platform CL3VER and virtual reality pioneer Nurulize. According to Chaos Group, the computer graphics industry is expected to reach $112 billion in 2019, fueled by a rise in the demand for 3D visuals. This, they say, has presented a prime opportunity for companies that make the creation of photorealistic imagery more accessible.

Main Image: (L-R) Chaos Group co-founder Vlado Koylazov and Render Legion CEO/co-founder Ondřej Karlík.

postPerspective Impact Award winners from SIGGRAPH 2017

Last April, postPerspective announced the debut of our Impact Awards, celebrating innovative products and technologies for the post production and production industries that will influence the way people work. We are now happy to present our second set of Impact Awards, celebrating the outstanding offerings presented at SIGGRAPH 2017.

Now that the show is over, and our panel of VFX/VR/post pro judges has had time to decompress, dig out and think about what impressed them, we are happy to announce our honorees.

And the winners of the postPerspective Impact Award from SIGGRAPH 2017 are:

  • Faceware Technologies for Faceware Live 2.5
  • Maxon for Cinema 4D R19
  • Nvidia for OptiX 5.0  

“All three of these technologies are very worthy recipients of our first postPerspective Impact Awards from SIGGRAPH,” said Randi Altman, postPerspective’s founder and editor-in-chief. “These awards celebrate companies that define the leading-edge of technology while producing tools that actually make users’ working lives easier and projects better, and our winners certainly fall into that category.

“While SIGGRAPH’s focus is on VFX, animation, VR/AR and the like, the types of gear they have on display vary. Some are suited for graphics and animation, while others have uses that slide into post production. We’ve tapped real-world users in these areas to vote for our Impact Awards, and they have determined what tools might be most impactful to their day-to-day work. That’s what makes our awards so special.”

There were many new technologies and products at SIGGRAPH this year, and while only three won an Impact Award, our judges felt there were other updates people should know about as well.

Blackmagic Design’s Fusion 9 was certainly turning heads, and Nvidia’s VRWorks 360 Video was called out as well. Chaos Group also caught our judges’ attention with V-Ray for Unreal Engine 4.

Stay tuned for future Impact Award winners in the coming months — voted on by users for users — from IBC.

WAR FOR THE PLANET OF THE APES

Editor William Hoy — working on VFX-intensive War for the Planet of the Apes

By Mel Lambert

For William Hoy, ACE, story and character come first. He also likes to use visual effects “to help achieve that idea.” This veteran film editor points to director Alex Proyas’ VFX-heavy I, Robot, director Matt Reeves’ 2014 Dawn of the Planet of the Apes and his new installment, War for the Planet of the Apes, as “excellent examples of this tenet.”

War for the Planet of the Apes, the final part of the current reboot trilogy, follows a band of apes and their leader as they are forced into a deadly conflict with a rogue paramilitary faction known as Alpha-Omega. After the apes suffer unimaginable losses, their leader begins a quest to avenge his kind, setting in motion an epic battle that will determine the fate of both species and the future of the planet.

Marking the picture editor’s second collaboration with Reeves, Hoy recalls that he initially secured an interview with the director through industry associates. “Matt and I hit it off immediately. We liked each other,” Hoy recalls. “Dawn of the Planet of the Apes had a very short schedule for such a complicated film, and Matt had his own ideas about the script — particularly how the narrative ended. He was adamant that he ‘start over’ when he joined the film project.

“The previous Dawn script, for example, had [the lead ape character] Caesar and his followers gaining intelligence and driving motorized vehicles,” Hoy says. “Matt wanted the action to be incremental which, it turned out, was okay with the studio. But a re-written script meant that we had a very tight shoot and post schedule. Swapping its release date with X-Men: Days of Future Past gave us an additional four or five weeks, which was a huge advantage.”

William Hoy, ACE (left), Matt Reeves (right).

Such a close working relationship on Dawn of the Planet of the Apes meant that Hoy came to the third installment in the current trilogy with a good understanding of the way Reeves likes to work. “He has his own way of editing from the dailies, so I can see what we will need in the rough cut as the filmed drama is unfolding. We keep different versions in Avid Media Composer, with trusted performances and characters, and can see where they are going” with the narrative. Having worked with Reeves over the past two decades, Stan Salfas, ACE, served as co-editor on the project, joining prior to the Director’s Cut.

A member of the Academy of Motion Picture Arts and Sciences, Hoy also worked with director Randall Wallace on We Were Soldiers and The Man in the Iron Mask, with director Phillip Noyce on The Bone Collector and with director Zack Snyder on Watchmen, a film “filled with emotional complexity and heavy with visual effects,” he says.

An Evolutionary Editing Process
“Working scene-by-scene with motion capture images and background artwork laid onto the Avid timeline, I can show Matt my point of view,” explains Hoy. “We fill in as we go — it’s an evolutionary process. I will add music and some sound effects for that first cut so we can view it objectively. We ask, ‘Is it working?’ We swap around ideas and refine the look. This is a film that we could definitely not have cut on film; there are simply too many layers as the characters move through these varied backgrounds. And with the various actors in motion capture suits giving us dramatic performances, with full face movements [CGI-developed facial animation], I can see how they are interacting.”

To oversee the dailies on location, Hoy set up a Media Composer editing system in Vancouver, close to the film locations used for principal photography. “War for the Planet of the Apes was shot on Arri Alexa 65 digital cameras that deliver 6K images,” the editor recalls. “These files were down-sampled to 4K and delivered to Weta Digital [in New Zealand] as source material, where they were further down-sampled to 2K for CGI work and then up-sampled back to 4K for the final release. I also converted our camera masters to 2K DNxHD 32/36 for editing color-timed dailies within my Avid workstation.”
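A dailies transcode like the one Hoy describes is commonly scripted with a command-line tool such as ffmpeg. The sketch below is an assumption, not the production pipeline: the filenames are hypothetical, and it targets DNxHD 36 at 1920×1080 (the DNxHD codec is defined at 1080 lines, which is likely what “2K DNxHD 32/36” editorial media refers to in practice).

```shell
# Hypothetical dailies transcode: downsample a camera master to a
# DNxHD 36 offline file (1920x1080, 23.976fps, 8-bit 4:2:2) with
# uncompressed PCM audio -- roughly the flavor of editorial media
# described above. Source and output filenames are made up.
ffmpeg -i A001_C001_master.mov \
    -vf "scale=1920:1080,fps=24000/1001" \
    -c:v dnxhd -b:v 36M -pix_fmt yuv422p \
    -c:a pcm_s16le \
    A001_C001_dailies.mov
```

For direct use in Media Composer, DNxHD media is typically wrapped as MXF OP-Atom rather than QuickTime, so a file like this would usually be brought in via Avid’s own transcode or link workflow.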

In terms of overall philosophy, “we did not want to give away Caesar’s eventual demise. From the script, I determined that the key arc was the unfolding mystery of ‘What is going on?’ And ‘Where will it take us?’ We hid that Caesar [played by Andy Serkis] is shot with an arrow, and initially just showed the blood on the hand of the orangutan, Maurice [Karin Konoval]; we had to decide how to hide that until the key moment.”

Because of the large number of effect-heavy films that Hoy has worked on, he is considered an action/visual effects editor. “But I am drawn to performances of actors and their characters,” he stresses. “If I’m not invested in their fate, I cannot be involved in the action. I like to bring an emotional value to the characters, and visualize battle scenes. In that respect Matt and I are very much in tune. He doesn’t hide his emotion as we work out a lot of the moves in the editing room.”

For example, in Dawn of the Planet of the Apes, Koba, a human-hating bonobo who led a failed coup against Caesar, is leading apes against the human population. “It was unsatisfying that the apes would be killing humans while the humans were killing apes. Instead, I concentrated on the POV of Caesar’s oldest son, Blue Eyes. We see the events through his eyes, which changed the overall idea of the battle. We shot some additional material but most of the scene — probably 75% — existed; we also spoke with the FX house about the new CGI material,” which involved re-imaged action of horses and backgrounds within the virtual sets that were fashioned by Weta Digital.

Hoy utilized VFX tools on various layers within his Media Composer sessions that carried the motion capture images, plus the 3D channels, in addition to different backgrounds. “Sometimes we could use one background version and other times we might need to look around for a new perspective,” Hoy says. “It was a trial-and-error process, but Matt was very receptive to that way of working; it was very collaborative.”

Twentieth Century Fox’s War for the Planet of the Apes.

Developing CGI Requests for Key Scenes
By working closely with Weta Digital, the editor could develop new CGI requests for key scenes and then have them rendered as necessary. “We worked with the post-viz team to define exactly what we needed from a scene — maybe to put a horse into a blizzard, for example. Ryan Stafford, the film’s co-producer and visual effects producer, was our liaison with the CGI team. On some scenes I might have as many as a dozen or more separate layers in the Avid, including Caesar, rendered backgrounds, apes in the background, plus other actors in middle and front layers” that could be moved within the frame. “We had many degrees of freedom so that Matt and I could develop alternate ideas while still preserving the actors’ performances. That way of working could be problematic if you have a director who couldn’t make up his mind; happily, Matt is not that way!”

Hoy cites one complex scene that needed to be revised dramatically. “There is a segment in which Bad Ape [an intelligent chimpanzee who lived in the Sierra Zoo before the Simian Flu pandemic] is seen in front of a hearth. That scene was shot twice because Matt did not consider it frenetic enough. The team returned to the motion capture stage and re-shot the scene [with actor Steve Zahn]. That allowed us to start over again with new, more frantic physical performances against resized backgrounds. We drove the downstream activities – asking Weta to add more snow in another scene, for example, or maybe bring Bad Ape forward in the frame so that we can see him more clearly. Weta was amazing during that collaborative process, with great input.”

The editor also received a number of sound files for use within his Avid workstation. “In the beginning, I used some library effects and some guide music — mostly some cues of composer Michael Giacchino’s Dawn score music from music editor Paul Apelgren. Later, when the picture was in one piece, I received some early sketches from the sound design team. For the Director’s Cut we had a rough cut with no CGI from Weta Digital. But when we received more sound design, I would create temp mixes on the Avid, with a 5.1-channel mix for the sound-editorial team using maybe 24 tracks of effects, dialog and music elements. It was a huge session, but Media Composer is versatile. After turning over that mix to Will Files, the film’s sound designer, supervising sound editor and co-mixer, I was present with Matt on the re-recording stage for maybe six weeks of the final mix as the last VFX elements came in. We were down to the wire!”

Hoy readily concedes that while he loves to work with new directors — “and share their point of view” — returning to a director with whom he has collaborated previously is a rewarding experience. “You develop a friendly liaison because it becomes easier once you understand the ways in which a director works. But I do like to be challenged with new ideas and new experiences.” He may get to work again with Reeves on the director’s next outing, The Batman, “but since Matt is still writing the screenplay, time will tell!”


Mel Lambert is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA. He is also a long-time member of the UK’s National Union of Journalists.