Sony Imageworks’ VFX work on Spider-Man: Homecoming

By Daniel Restuccio

With Sony’s Spider-Man: Homecoming getting ready to release digitally on September 26 and on 4K Ultra HD/Blu-ray, Blu-ray 3D, Blu-ray and DVD on October 17, we thought this was a great opportunity to talk about some of the film’s VFX.

Sony Imageworks has worked on every Spider-Man movie in some capacity since Sam Raimi’s 2002 original. On Spider-Man: Homecoming, Imageworks worked mostly on the “third act,” which encompasses the warehouse, hijacked plane and beach destruction scenes. That meant delivering over 500 VFX shots, created by a team of artists and compositors that started at over 30 and peaked at around 200, with finished scenes rendered at 2K.

All of the Imageworks artists used Dell R7910 workstations with 24-core Intel Xeon E5-2620 CPUs, 64GB of memory and Nvidia Quadro P5000 graphics cards. They used cineSync for client reviews and their in-house Itview software internally. Rendering was done with SPI’s own version of Arnold (not the commercial release) and their custom shading system. Other software included Autodesk Maya 2015, Foundry’s NukeX 10.0 and Side Effects Houdini 15.5. They avoided plug-ins so that their auto-vend process, which breaks comps into layers for the stereo 3D conversion, would run as smoothly as possible. Everything was rendered on their on-premises renderfarm. They also used a modified Kinect-based scanning setup that allowed their artists to do performance capture on themselves, rapidly prototype ideas and generate reference.

We sat down with Sony Imageworks VFX supervisor Theo Bailek, who talks about the studio’s contribution to this latest Spidey film.

You worked on The Amazing Spider-Man in 2012 and The Amazing Spider-Man 2 in 2014. From a visual effects standpoint, what was different?
You know, not a lot. Most of the changes have been iterative improvements. We used many of the same technologies that we developed on the first few movies. How we do our city environments is a specific example of how we build off of our previous assets and techniques, leveraging off the library of buildings and props. As the machines get faster and the software more refined, it allows our artists increased iterations. This alone gave our team a big advantage over the workflows from five years earlier. As the software and pipeline here at Sony has gotten more accessible, it has allowed us to more quickly integrate new artists.

It’s a lot of very small, incremental improvements along the way. The biggest technological change between now and the early Spider-Man films is our rendering technology. We use a more physically accurate incarnation of our global-illumination Arnold renderer. As the shaders and rendering algorithms become more naturalistic, we’re able to conform our assets and workflows to them. In the end, this translates to a more realistic image out of the box.

The biggest thing on this movie was the inclusion of Spider-Man in the Marvel Universe, which meant a different take on the character and on how they wanted the film to go. That would probably be the biggest difference.

Did you work directly with director Jon Watts, or did you work with production VFX supervisor Janek Sirrs in terms of the direction on the VFX?
During the shooting of the film I had the advantage of working directly with both Janek and Jon. The entire creative team pushed for open collaboration, and Janek was very supportive of this goal. He would encourage and facilitate interaction with both the director and Tom Holland (who played Spider-Man) whenever possible. Everything moved so quickly on set; oftentimes if you waited to suggest an idea you’d lose the chance, as they would have to set up for the next scene.

The sooner Janek could get his vendor supervisors comfortable interacting, the bigger our contributions. While on set I often had the opportunity to bring our asset work and designs directly to Jon for feedback. There were times on set when we’d iterate on a design three or four times over the span of the day. Getting this type of realtime feedback was amazing. Once post work began, most of our reviews were directly with Janek.

When you had that first meeting about the tone of the movie, what was Jon’s vision? What did he want to accomplish in this movie?
Early on, it was communicated from him through Janek. It was described as, “This is sort of like a John Hughes, Ferris Bueller’s take on Spider-Man.” Being a teenager, he’s not meant to be fully in control of his powers or the responsibility that comes with them, which translates to not always being super-confident or proficient in his maneuvers. That was the basis of it.

Their goal was a more playful, relatable character. We accomplished this by being more conservative in our performances and in what Spider-Man was capable of doing. Yes, he has heightened abilities, but we never wanted every landing and jump to be perfect. Even superheroes have off days, especially teenage ones.

This being part of the Marvel Universe, was there a pool of common assets that all the VFX houses used?
Yes. The Marvel movies are incredibly collaborative and always use multiple vendors, so we’re constantly sharing assets. That said, there are a lot of things you just can’t share because of the different systems under the hood. Textures and models are easily exchanged, but how the textures are combined in the materials and shaders isn’t reusable given the different renderers each company uses. Character rigs aren’t reusable across vendors either, since facilities have very specific binding and animation tools.

It is typical to expect only models, textures, base joint locations and finished turntable renders for reference when sending or receiving character assets. As an example, we were able to leverage the Avengers Tower model we received from ILM to some extent. We also supplied our Toomes costume model and our Spider-Man character and costume models to other vendors.

The scan data of Tom Holland, was it a 3D body scan of him or was there any motion capture?
Multiple sessions were done throughout the production process. A large volume of stunt and test footage was shot with Tom before filming, which proved invaluable to our team. He’s incredibly athletic and can do a lot of his own stunts, so the mocap takes we came away with were often directly usable. Given that Tom could do backflips and somersaults in the air, we were able to use this footage as reference for instructing our animators later down the road.
Toward the latter end of filming we did a second capture session, focusing on the shots we wanted to acquire using specific mocap performances. Then, several months later, we followed up with a third mocap session to get any new performances required as the edit solidified.

As we were trying to create a signature performance that felt like Tom Holland, we stuck to his performances whenever possible. On rare occasions when the stunt was too dangerous, a stuntman was used. Other times we resorted to our own in-house method of performance capture, using a modified Xbox Kinect system to record our own animators as they acted out performances.

In the end performance capture accounted for roughly 30% of the character animation of Spider-Man and Vulture in our shots, with the remaining 70% being completed using traditional key-framed methods.

How did you approach the fresh take on this iconic film franchise?
It was clear from our first meeting with the filmmakers that Spider-Man in this film was intended to be a more relatable and light-hearted take on the genre. Yes, we wanted to take the characters and their stories seriously, but not at the expense of having fun with Peter Parker along the way.

For us that meant that despite Spider-Man’s enhanced abilities, how we displayed those abilities on screen needed to always feel grounded in realism. If we faltered on this goal, we ran the risk of eroding the sense of peril and therefore any empathy toward the characters.

When you’re animating a superhero it’s not easy to keep the action relatable. When your characters possess abilities that you never see in the real world, it’s a very thin line between something that looks amazing and something that is amazingly silly and unrealistic. Over-extend the performances and you blow the illusion. Given that Peter Parker is a teenager and he’s coming to grips with the responsibilities and limits of his abilities, we really tried to key into the performances from Tom Holland for guidance.

The first tool at our disposal and the most direct representation of Tom as Spider-Man was, of course, motion capture of his performances. On three separate occasions we recorded Tom running through stunts and other generic motions. For the more dangerous stunts, wires and a stuntman were employed as we pushed the limit of what could be recorded. Even though the cables allowed us to record huge leaps, you couldn’t easily disguise the augmented feel to the actor’s weight and motion. Even so, every session provided us with amazing reference.

Though the bulk of the shots were keyframed, the animation was always informed by reference. We looked at everything that was remotely relevant for inspiration. For example, we have a scene in the warehouse where the Vulture’s wings are racing toward you as Spider-Man leaps into the air, stepping on the top of the wings before flipping to avoid the attack. We found this amazing reference of people who leap over cars racing at in excess of 70mph. It’s absurdly dangerous and hard to justify why someone would attempt a stunt like that, and yet it was the perfect inspiration for our shot.

In trying to keep the performances grounded and stay true to the goals of the filmmakers, we also found it was always better to err on the side of simplicity when possible. Typically, when animating a character, you look for opportunities to create strong silhouettes so the actions read clearly, but we tended to ignore these rules in favor of keeping everything dirty and with an unscripted feel. We let his legs cross over and knees knock together. Our animation supervisor, Richard Smith, pushed our team to follow the guidelines of “economy of motion.” If Spider-Man needed to get from point A to B he’d take the shortest route — there’s no time to strike an iconic pose in between!


Let’s talk a little bit about the third act. You had previsualizations from The Third Floor?
Right. All three of the main sequences we worked on in the third act had extensive previs completed before filming began. Janek worked extremely closely with The Third Floor and the director throughout the entire process of the film. In addition, Imageworks was tapped to help come up with ideas and takes. From early on it was a very collaborative effort on the part of the whole production.
The previs for the warehouse sequence was immensely helpful in the planning of the shoot. Given we were filming on location and the VFX shots would largely rely on carefully choreographed plate photography and practical effects, everything had to be planned ahead of time. In the end, the previs for that sequence resembled the final shots in most cases.

The digital performances of our CG Spider-Man varied at times, but the pacing and spirit remained true to the previs. As our plane battle sequence was almost entirely CG, the previs stage was more of an ongoing process for this section. Given that we weren’t locked into plates for the action, the filmmakers were free to iterate and refine ideas well past the time of filming. In addition to The Third Floor’s previs, Imageworks’ internal animation team also contributed heavily to the ideas that eventually formed the sequence.

For the beach battle, we had a mix of plate and all-CG shots. Here the previs was invaluable once again in informing the shoot and subsequent reshoots later on. As there were several all-CG beats to the fight, we again had sections where we continued to refine and experiment till late into post. As with the plane battle, Imageworks’ internal team contributed extensively to pre and postvis of this sequence.

The one scene you mentioned — the fight in the warehouse — the production notes say it was inspired by an actual scene from the comic The Amazing Spider-Man #33.
Yes, in our warehouse sequence there is a series of shots that are directly inspired by the comic book’s panels. Different circumstances in the comic and our sequence lead to Spider-Man being trapped under debris, but Tom’s performance and the camera angles that were shot pay homage to the comic as he escapes. As a side note, many of those shots were added later in the production and filmed as reshoots.

What sort of CG enhancements did you bring to that scene?
For the warehouse sequence, we added digital Spider-Man, the Vulture wings and CG destruction, enhanced the practical effects, and extended or repaired the plate as needed. The columns that the Vulture wings slice through as they circle Spider-Man were practically destroyed with small detonated charges. These explosives were rigged within cement that encased the actual warehouse’s steel girder columns. They had fans on set to help mimic the turbulence that would be present from a flying wingsuit powered by turbines. These practical effects were immensely helpful for our effects artists, as they provided the best-possible in-camera reference. We kept much of what was filmed, adding our fully reactive FX on top to help tie it into the motion of our CG wings.

There’s quite a bit of destruction when the Vulture wings blast through walls as well. For those shots we relied entirely on CG rigid body dynamic simulations, as filming the destruction would have been prohibitive and unreliable. Though most of the shots in this sequence had photographed plates, there were still a few that required the background to be generated in CG. One shot, with Spider-Man sliding back and rising up, stands out in particular. As the shot was conceived later in the production, there was no footage for us to use as our main plate. We did, however, have many tiles of the environment shot, which we were able to use to quickly reconstruct the entire set in CG.

I was particularly proud of our team for their work on the warehouse sequence. The quality of our CG performances and the look of the rendering are difficult to distinguish from the live action. Even the rare all-CG shots blend seamlessly with the surrounding scenes.

When you were looking at that ending plane scene, what sort of challenges were there?
Since over 90 shots within the plane sequence were entirely CG, we faced many challenges, for sure. With such a large number of shots without the typical constraints that practical plates impose, we knew a turnkey pipeline was needed; there just wouldn’t be time to build custom workflows for each shot type. This was something Janek, our client-side VFX supervisor, stressed from the outset: “Show early, show often and be prepared to change constantly!”

To accomplish this, a balance of 3D and 2D techniques was developed to make shot production as flexible as possible. Using the 3D abilities of our compositing software, Nuke, we were able to offload significant portions of the shot production into the compositors’ hands. For example, the city ground plane you see through the clouds, the projections of imagery on the plane’s cloaking LEDs and the damaged, flickering LEDs were all techniques done in the composite.
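To make that division of labor concrete, here is a rough sketch, in plain Python rather than a Nuke script, of the math behind one of those comp-side tricks: ray-casting each output pixel onto a flat ground plane and sampling a city texture through a simple pinhole camera. The function name, camera values and texture mapping are illustrative assumptions, not Imageworks’ actual setup.

```python
import numpy as np

def render_ground_plane(city_tex, width, height, focal_px=1200.0,
                        cam_height=800.0, pitch=0.35):
    """Ray-cast every output pixel onto a flat y=0 ground plane and sample a
    tiling city texture. Slow reference loop, written for clarity."""
    out = np.zeros((height, width, 3), dtype=np.float32)
    cx, cy = width / 2.0, height / 2.0
    cos_p, sin_p = np.cos(pitch), np.sin(pitch)
    for py in range(height):
        for px in range(width):
            # Pixel ray in camera space: x right, y up, z forward.
            dx = (px - cx) / focal_px
            dy = (cy - py) / focal_px
            dz = 1.0
            # Tilt the camera down by 'pitch' (rotation about the x axis).
            wy = cos_p * dy - sin_p * dz
            wz = sin_p * dy + cos_p * dz
            if wy >= 0.0:                 # ray points at or above the horizon
                continue
            t = cam_height / -wy          # distance along the ray to the ground
            hit_x, hit_z = t * dx, t * wz
            # One texel per scene unit, tiled across the ground.
            u = int(hit_x) % city_tex.shape[1]
            v = int(hit_z) % city_tex.shape[0]
            out[py, px] = city_tex[v, u]
    return out
```

Because nothing in a setup like this depends on the lighting renders, a compositor can retune the camera height, tilt or texture and see the result immediately, which is the flexibility Bailek describes.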

A unique challenge of the sequence that stands out is definitely the cloaking. Making an invisible jet was only half of the equation; the LEDs that formed the basis of the effect also needed to be able to illuminate our characters, in both wide and extreme close-up shots. We’re talking about millions of tiny light sources, which is a particularly expensive rendering problem to tackle. On top of that, the design of these flashing light sources is highly subjective and thus prone to needing many revisions to get the look right.

Painting control texture maps for the location of these LEDs wouldn’t be feasible for the detail needed on our extreme close-up shots. Modeling them in would have been prohibitive as well, resulting in excessive geometric complexity. Instead, using Houdini, our effects software, we built algorithms to automate the distribution of point clouds of data to intelligently represent each LED position. This technique could be reprocessed as necessary without incurring the large amounts of time a texture or model solution would have required. As the plane base model often needed adjustments to accommodate design or performance changes, this was a real factor. The point cloud data was then used by our rendering software to instance geometric approximations of inset LED compartments on the surface.
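The article doesn’t show the Houdini network itself, but the core idea, an area-weighted scatter over the hull that stores per-point attributes a renderer can later instance on, can be sketched in a few lines. The Python/NumPy snippet below is an illustrative assumption (including the function and attribute names), not the production tool.

```python
import numpy as np

def scatter_leds(vertices, triangles, leds_per_unit_area, seed=0):
    """vertices: (V, 3) float array; triangles: (T, 3) int index array.
    Returns per-LED position, surface normal and id, ready to instance on."""
    rng = np.random.default_rng(seed)
    v0, v1, v2 = (vertices[triangles[:, i]] for i in range(3))
    cross = np.cross(v1 - v0, v2 - v0)
    area = 0.5 * np.linalg.norm(cross, axis=1)          # triangle areas
    unit_n = cross / (2.0 * area)[:, None]              # unit normals
    counts = rng.poisson(area * leds_per_unit_area)     # LEDs per triangle
    pts, nrms = [], []
    for tri, n in enumerate(counts):
        if n == 0:
            continue
        # Uniform barycentric samples inside triangle 'tri'.
        r1 = np.sqrt(rng.random(n))[:, None]
        r2 = rng.random(n)[:, None]
        p = (1 - r1) * v0[tri] + r1 * (1 - r2) * v1[tri] + r1 * r2 * v2[tri]
        pts.append(p)
        nrms.append(np.repeat(unit_n[tri][None, :], n, axis=0))
    pts = np.concatenate(pts)
    return {"P": pts, "N": np.concatenate(nrms), "led_id": np.arange(len(pts))}
```

Because the points are generated procedurally from the current hull mesh, the whole cloud can simply be regenerated whenever the plane model changes, which is the property the interview highlights.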

Interestingly, this was a technique we adopted from the rendering technology we use to create room interiors for our CG city buildings. When rendering large CG buildings we can’t afford to model the hundreds and sometimes thousands of rooms you see through the office windows. Instead, we procedurally generate small inset boxes for each window with randomized pictures of different rooms inside. This is the same underlying technology we used to create the millions of highly detailed LEDs on our plane.
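As a rough illustration of that window trick (again an assumption for this article, not Imageworks’ actual system), the per-window data a renderer needs is tiny: a transform for the inset box and a randomly chosen room picture.

```python
import random

def fake_room_instances(window_xforms, num_room_textures, inset_depth=3.0, seed=0):
    """window_xforms: list of 4x4 matrices placing each window on a facade.
    Emits lightweight instance records instead of real interior geometry."""
    rng = random.Random(seed)
    instances = []
    for xform in window_xforms:
        instances.append({
            "transform": xform,                                # window placement
            "inset_depth": inset_depth,                        # how deep the fake room reads
            "room_texture": rng.randrange(num_room_textures),  # randomized room picture
        })
    return instances
```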

First, our lighters supplied base renders for our compositors to work with inside of Nuke. The compositors quickly animated flashing damage on the LEDs by projecting animated imagery onto the plane using Nuke’s 3D capabilities. Once we got buy-off on the animation of the imagery, we’d pass this work back to the lighters as 2D layers that could be used as texture maps for our LED lights in the renderer. These images would instruct each LED when it was on and what color it needed to be. This back-and-forth technique allowed us to iterate on the look of the LEDs more rapidly in 2D before committing to final 3D renders with all of the expensive interactive lighting.
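A stripped-down sketch of that handoff might look like the following, where each LED carries a UV into the comp-authored “damage” sequence and a per-frame lookup decides whether it is lit and in what color. The names and threshold are assumptions for illustration, not the studio’s shader code.

```python
import numpy as np

def led_emission(damage_frame, led_uvs, on_threshold=0.05):
    """damage_frame: (H, W, 3) float image for this frame; led_uvs: (N, 2) in [0, 1].
    Returns an (N, 3) emission color per LED, with dark pixels treated as 'off'."""
    h, w, _ = damage_frame.shape
    px = np.clip((led_uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    py = np.clip(((1.0 - led_uvs[:, 1]) * (h - 1)).astype(int), 0, h - 1)  # flip V
    colors = damage_frame[py, px]                       # nearest-neighbour lookup
    on = colors.max(axis=1) > on_threshold              # below threshold = LED off
    return colors * on[:, None]
```

The expensive part, millions of light sources actually illuminating the characters, only has to be rendered once the 2D animation is approved.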

Is that a proprietary system?
Yes, this is a shading system that was actually developed for our earlier Spider-Man films back when we used RenderMan. It has since been ported to work in our proprietary version of Arnold, our current renderer.

VFX Roundtable: Trends and inspiration

By Randi Altman

The world of visual effects is ever-changing, and the speed at which artists are being asked to create new worlds, or to make things invisible is moving full-speed ahead. How do visual effects artists (and studios) prepare for these challenges, and what inspired them to get into this business? We reached out to a small group of visual effects pros working in television, commercials and feature films to find out how they work and what gets their creative juices flowing.

Let’s find out what they had to say…

KEVIN BAILLIE, CEO, ATOMIC FICTION
What do you wish clients would know before jumping into a VFX-heavy project?
The core thing for every filmmaking team to recognize is that VFX isn’t a “post process.” Careful advance planning and a tight relationship between the director, production designer, stunt team and cinematographer will yield a far superior result much more cost effectively.

In the best-looking and best-managed productions I’ve ever been a part of, the VFX team is the first department to be brought onto the show and the last one off. It truly acts as a partner in the filmmaking process. After all, once the VFX post phase starts, it’s effectively a continuation of production, with a digital counterpart to every single department on set, from painters to construction to costume!

What trends in VFX have impressed you the most over the last year or two, and how are they affecting your work?
The move to cloud computing is one of the most exciting trends in VFX. The cloud is letting smaller teams do much bigger work, allowing bigger teams to do things that have never been seen before, and will ultimately mean that compute resources are no longer a constraint on the creative process.

Cloud computing allowed Atomic Fiction to play alongside the most prestigious companies in the world, even when we were just 20 people. That capability has allowed us to grow to over 200 people, and now we’re able to take the lead vendor position on A-list shows. It’s remarkable what dynamic and large-scale infrastructure in the cloud has enabled Atomic to accomplish.

How many years have you been working in VFX, and what project inspired you to get into this line of work?
I grew up in Seattle and started dabbling in 3D as a hobby when I was 14 years old, having been immensely inspired by Jurassic Park. Soon thereafter, I started working at Microsoft in the afternoons, developing visual content to demonstrate their upcoming technologies. I was fortunate enough to land a job with Lucasfilm right after graduating high school, which was 20 years ago at this point! I’ve been lucky enough to work with many of the directors that inspired me as a child, such as George Lucas and Robert Zemeckis, and modern pioneers like JJ Abrams.

Looking back on my career so far, I truly feel like I’ve been living the dream. I can’t wait for what’s next in this exciting, ever-changing business.

ROB LEGATO, OSCAR-WINNING VFX SUPERVISOR, SECOND UNIT DIRECTOR, SECOND UNIT DIRECTOR OF PHOTOGRAPHY
What do you wish clients would know before jumping into a VFX-heavy project?
It takes a good bit of time to come up with a plan that will ensure a sustainable attack when making the film. They need to ask someone in authority, “What does it take to do it?” and then make a reasonable plan. Everyone wants to do a great job all the time, and if they could maneuver the schedule — even with the same timeframe — it could be a much less frustrating job.

It happens time and time again: someone comes up with a budget and a schedule that don’t really fit the task, and you’re forced to live with it. That makes for a very difficult assignment that gets done only because of the hard work of the people in the trenches.

What trends in VFX have impressed you the most over the last year or two, and how are they affecting your work?
For me, it’s how realistic you can make something. The rendering capabilities — like what we did on The Jungle Book with the animals — are so sophisticated that they fool your eye into believing it’s real. Once you do that, you’ve opened the magic door that allows you to do anything with a tremendous amount of fidelity. You can make good movies without it being a special-venue movie or a VFX movie. The computer power and rendering abilities — along with the incredible artistic talent pool that we have created over the years — are very impressive, especially for me, coming from a more traditional camera background. I tended to shy away from computer-generated things because they never had the authenticity you would have wanted.

Then there is the happy accident of shooting something, where an angle you wouldn’t have considered appears as you look through the camera; now you can do that in the computer, which I find infinitely fascinating. This is where all the virtual cinematography things I’ve done in the past come in to help create that happy accident.

How many years have you been working in VFX, and what project inspired you to get into this line of work?
I’ve been working in VFX since about 1984. Visual effects wasn’t my dream. I wanted to make movies: direct, shoot and be a cameraman and editor. I fell into it and then used it as an avenue to allow me to create sequences in films and commercials.

The reason you go to movies is to see something you have never seen before, and for me that was Close Encounters. The first time I saw the mothership in Close Encounters, it wasn’t just an effect, it became an art form. It was beautifully realized and it made the story. Blade Runner was another where it’s no longer a visual effect, it’s filmmaking as an art form.

There was also my deep appreciation for Doug Trumbull, whose quality of work was so high it transcended being a visual effect or a photographic effect.

LISA MAHER, VP OF PRODUCTION, SHADE VFX 
What do you wish clients would know before jumping into a VFX-heavy project?
That it’s less expensive in the end to have a VFX representative involved on the project from the get-go, just like all the other filmmaking craft-persons that are represented. It’s getting better all the time though, and we are definitely being brought on board earlier these days.

At Shade we specialize in invisible or supporting VFX. So-called invisible effects are often much harder to pull off. It’s all about integrating digital elements that support the story but don’t pull the audience out of a scene. Being able to assist in the planning stages of a difficult VFX sequence often results in the filmmakers achieving what they envisioned more readily. It also helps tremendously to keep the costs in line with what was originally budgeted. It also goes without saying that it makes for happier VFX artists as they receive photography captured with their best interests in mind.

What trends in VFX have impressed you the most over the last year or two, and how are they affecting your work?
I would say the most exciting development affecting visual effects is the explosion of opportunities offered by the OTT content providers such as Netflix, Amazon, HBO and Hulu. Shade primarily served the feature film market up to three years ago, but with the expanding needs of television, our offices in Los Angeles and New York are now evenly split between film and TV work.

We often find that the film work is still being done at the good old reliable 2K resolution while our TV shows are always 4K plus. The quality and diversity of projects being produced for TV now make visual effects a much more buoyant enterprise for a mid-sized company and also a real source of employment for VFX professionals who were previously so dependent on big studio generated features.

How many years have you been working in VFX, and what project inspired you to get into this line of work?
I’ve been working in visual effects for close to 20 years now. I grew up in Ireland; as a child, the world of film, and especially images of sunny California, was always a huge draw for me. It helped me survive the many grey and rainy days of the Irish climate. I can’t point to one project that inspired me to get into filmmaking — there have been so many — just a general love for storytelling, I guess. Films like Westworld (the 1973 version), Silent Running, Cinema Paradiso, Close Encounters of the Third Kind, Blade Runner and, of course, the original Star Wars were truly inspirational.

DAVID SHELDON-HICKS, CO-FOUNDER/EXECUTIVE CREATIVE DIRECTOR, TERRITORY STUDIO
What do you wish clients would know before jumping into a VFX-heavy project?
The craft, care and love that go into VFX are often forgotten in the “business” of it all. As a design-led studio that straddles the art and VFX departments in our screen graphics and VFX work, we prefer to work with the director from the preproduction phase. This ensures that all aspects of our work are integrated into story and world building.

The talent and gut instinct, the eye for composition and lighting, the appreciation of form, the choreography of movement and, most notably, the appreciation of the classics are all pertinent to the art of VFX, and they get undersold in conversations about shot counts, pipelines, bidding and numbers of artists. Bringing the filmmakers into the creative process has to be the way forward for an art form still finding its own voice.

What trends in VFX have impressed you the most over the last year or two, and how are they affecting your work?
The level of concept art and postviz coming through from VFX studios is quite staggering. It gets back to my point above about bringing filmmakers into a dialogue with VFX artists concentrated on world building and narrative expansion. It’s so exciting to see concept art and postviz getting to a new level of sophistication and influence in the filmmaking process.

How many years have you been working in VFX, and what project inspired you to get into this line of work?
I have been working professionally in VFX for over 15 years. My love of VFX and creativity in general came from the moment I picked up a pencil and imagined new possibilities. But once I cut my film teeth designing screen graphics on Casino Royale, followed by The Dark Knight, I left my freelance days behind and co-founded Territory Studio. Our first film as a studio was Prometheus, and working with Ridley Scott was a formative experience that has influenced our design-led approach to motion graphics and VFX, established us in the industry and seen the studio grow and expand.

MARK BREAKSPEAR, VFX SUPERVISOR, SONY PICTURES IMAGEWORKS
What do you wish clients would know before jumping into a VFX-heavy project?

Firstly, I think the clients I have worked with have always been extremely cognizant of the key areas affecting VFX-heavy projects and consequently have built frameworks that help plan and execute these mammoth shows successfully.

Ironically, it’s the smaller shows that sometimes have the surprising “gotchas” in them. The big shows come with built-in checks and balances in the form of experienced people who are looking out for the best interests of the project and how to navigate the many pitfalls that can make the VFX costs increase.

Smaller shows sometimes don’t allow enough discussion and planning time for the VFX components in pre-production, which could result in the photography not being captured as well as it could have been. Everything goes wrong from there.

So, when I approach any show, I always look for the shots that are going to be underestimated and try to give them the attention they need to succeed. You can get taken out of a movie by a bad driving comp as much as you can a monster space goat biting a planet in half.

What trends in VFX have impressed you the most over the last year or two, and how are they affecting your work?
I think there are several red herrings out there right now… the big one being VR. To me, VR is like someone has invented teleportation, but it only works on feet.

So, right now, it’s essentially useless and won’t make creating VFX any easier or make the end result any more spectacular. I would like to see VR used to aid artists working on shots. If you could comp in VR, I could see that being a good way to help create more complex and visually thrilling shots. The user interface world is really the key area where VR can provide a benefit.

Suicide Squad

I do think, however, that AR is very interesting. The real world, with added layers of information, is a hugely powerful prospect. Imagine looking at a building in any city of the world and seeing the apartments for sale in it highlighted in realtime, with facts like cost, square footage, etc. right there in front of you.

How does AR benefit VFX? An artist could use AR to get valuable info about shots just by looking at them. How often do we look at a shot and ask, “What lens was this?” AR could have all that metadata ready to display at any point on any shot.

How many years have you been working in VFX, and what project inspired you to get into this line of work?
I’ve been in VFX for 25 years. When I started, VFX was not really a common term. I came to this industry through the commercial world… as a compositor on TV shows and music videos. Lots of (as we would call it now) visual effects, but done in a world bereft of pipelines and huge cloud-based renderfarms.

I was never inspired by a specific project to get into the visual effects world. I was a creative kid who also liked the sciences. I liked to work out why things ticked, and also to draw them, sometimes with whatever improvements or updates I could imagine. It’s a common set of passions that I find in my colleagues.

I watched Star Wars and came out wondering why there were black lines around some of the space ships. Maybe there’s your answer… I was inspired by the broken parts of movies, rather than being swept up in the worlds they portrayed. After all that effort, time and energy… why did it still look wrong? How can I fix it for next time?

CHRIS HEALER, CEO/CTO/VFX SUPERVISOR, THE MOLECULE
What do you wish clients would know before jumping into a VFX-heavy project?

Plan, plan, plan… previs, storyboarding and initial design are crucial to VFX-heavy projects. The mindset should ideally be that most (or all) decisions have been made before the shoot starts, as opposed to a “we’ll figure it out in post” approach.

What trends in VFX have impressed you the most over the last year or two, and how are they affecting your work?
Photogrammetry, image modeling and data capture are so much more available than ever before. Instead of an expensive lidar rig that only produces geometry without color, there are many, many new ways to capture the color and geometry of the physical world, even using a simple smartphone or DSLR.

How many years have you been working in VFX, and what project inspired you to get into this line of work?
I’ve been doing VFX now for over 16 years. I would have to say that The Matrix (part 1) was really inspiring when I saw it the first time; it made clear that VFX as an art form was becoming available to artists of all kinds all over the world. Prior to that, VFX was very difficult to approach for the average student with limited resources.

PAUL MARANGOS, SENIOR VFX FLAME ARTIST, HOOLIGAN
What do you wish clients would know before jumping into a VFX-heavy project?

The more involved I can be in the early stages, the more I can educate clients on all of the various effects they could use, as well as technical hurdles to watch out for. In general, I wish more clients involved the VFX guys earlier in the process — even at the concepting and storyboarding stages — because we can consult on a range of critical matters related to budgets, timelines, workflow and, of course, bringing the creative to life with the best possible quality.

Fortunately, more and more agencies realize the value of this. For instance, with a recent campaign Hooligan finished for Harvoni, we were able to plan shots for a big scene featuring hundreds of lanterns in the sky, which required lanterns of various sizes for every angle that Elma Garcia’s production team shot. With everything well storyboarded, and under the direction of Elma, who left no detail unnoticed, we managed to create a spectacular display of lantern composites for the commercial.

We were also involved early on in a campaign for MyHeritage DNA via creative agency Berlin Cameron, featuring spoken word artist Prince Ea and directed by Jonathan Augustavo of Skunk. Devised as if projected on a wall, the motion graphics were mapped into the 3D environments.

What trends in VFX have impressed you the most over the last year or two, and how are they affecting your work?
Of course VR and 360 live TV shows are exciting, but augmented reality is what I find particularly interesting — mixing the real world with graphics and video all around you. The interactivity of both of these emerging platforms presents an endless area of growth, as our industry is on the cusp of a sea change that hasn’t quite yet begun to directly affect my day-to-day.

Meanwhile, at Hooligan, we’re always educating ourselves on the latest software, tools and technological trends in order to prepare for the future of media and entertainment — which is wise if you want to be relevant 10 years from now. For instance, I recently attended the TED conference, where Chris Milk spoke on the birth of virtual reality as an art form. I’m also seeing advances in Google Cardboard, which is making the platform affordable, too. Seeing companies open up VR departments is an exciting step for us all, and it shows the vision for the future of advertising.

How many years have you been working in VFX, and what project inspired you to get into this line of work?
I have worked in VFX for 25 years. After initially studying fine art and graphic design, the craft aspect of visual effects really appealed to me. Seeing special effects genius Ray Harryhausen’s four-minute skeleton fight was a big inspiration. He rear-projected footage of the actual actors and then combined the shots to make a realistic skeleton-Argonaut battle. It took him over four and a half months to shoot the stop-motion animation.

Main Image: Deadpool/Atomic Fiction.

‘Suicide Squad’: Imageworks VFX supervisor Mark Breakspear 

By Randi Altman

In Warner Bros.’ Suicide Squad, a band of captured super-villains is released from prison by the government and tasked with working together to fight a common enemy, the evil Enchantress. This film, which held top box office honors for weeks, has a bit of everything: comic book antiheroes, super powers, epic battles and redemption. It also features a ton of visual effects work that was supervised by Sony Imageworks’ Mark Breakspear, who worked closely with production-side VFX supervisor Jerome Chen and director David Ayer (see our interview with Ayer).

Mark Breakspear

Breakspear is an industry veteran with more than 20 years of experience as a visual effects supervisor and artist, working on feature films, television and commercials. His credits include American Sniper, The Giver, Ender’s Game, Thor: The Dark World, The Great Gatsby… and that’s just to name a few.

Suicide Squad features approximately 1,200 shots, with Imageworks doing about 300, including the key fight at the end of the film between Enchantress, the Squad, Incubus and Mega Diablo. Imageworks also provided shots for several other sequences throughout the movie.

MPC worked on the majority of the other visual effects, with The Third Floor creating postviz after the shoot to help with the cutting of the film.

I recently threw some questions at Breakspear about his process and work on Suicide Squad.

How early did you get involved in the project?
Jerome Chen, the production supervisor, involved us from the very beginning in the spring of 2015. We read the script and started designing one of the most challenging characters — Incubus. We spent a couple of months working with designer Tim Borgmann to finesse the details of his overall look, shape and, specifically, his skin and sub-surface qualities.


How did Imageworks prepare for taking on the film?

We spent time gathering as much information as we could about the work we were looking to do. That involved lengthy calls with Jerome to pick over every aspect of the designs that David Ayer wanted. As it was still pretty early, there was a lot more “something like” rather than “exactly like” when it came to the ideas. But this is what the prepro was for, and we were able to really focus on narrowing down the many ideas into key selections and give the crew something to work with during the shoot in Toronto.

Can you talk about being on set?
The main shoot was at Pinewood in Toronto. We had several soundstages that were used for the creation of the various sets. Shoot days are usually long and arduous, and this was no exception. For VFX crews, the days are typically hectic, quiet, hectic, very hectic, quiet and then suddenly very hectic again. After wrap, you still have to download all the data, organize it and prep everything for the next day.

I had fantastic help on set from Chris Hebert who was our on-set photographer. His job was to make sure we had accurate records (photographic and data sets) of anything that could be used in our work later on. That meant actors, props, witness cameras, texture photography and any specific one-off moments that occur 300 times a day. Every movie set needs a Chris Hebert or it’s going to be a huge struggle later on in post!

Ok, let’s dig into the workflow. Can you walk us through it?

Workflow is a huge subject, so I’ll keep the answer somewhat concise! The general day would begin with a team meeting between all the various VFX departments here at Imageworks. The work was split across teams in both Culver City and Vancouver, so we did regular video Hangouts to discuss the daily plan, the weekly targets and generally where we were at, plus any specific needs people had. We would usually follow this with department meetings prior to AM dailies, where I would review the latest work from the department leads, give notes, select things to show Jerome and David, and pass along feedback I may have received from production.

We tried our best to keep our afternoons meeting-free so actual work could get done! Toward the end of the day we would have more dailies, and the final day’s selection of notes and pulls would go out to the client. Most days ended fairly late, as we had to round off the hundreds of emails with meaningful replies, prep for the next day and catch any late submissions from artists that might benefit from notes before the morning.

What tool, or tools, did you use for remote collaboration?
We used Google Hangouts for video conferencing, and Itview for shot discussion and notes with Jerome and David. Itview is our own software that replaces the need to use [off-the-shelf tools], and allows a much faster, more secure and more accurate way to discuss and share shots. Jerome had a system in post, and we would place data on it remotely for him to view and comment on in realtime with us during client calls. The notes and drawings he made would go straight into our note tracker and then on to artists as required.

What was the most challenging shot or shots, and why?

Our most challenging work was in understanding and implementing fractals into the design of the characters and their weapons. We had to get up to speed on three-dimensional mandelbulbs and how we could render them into our body of work. We also had to create vortical flow simulations that came off the fractal weapons, which created their own set of challenges due to the way particles behave when near high-velocity emissions.
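For readers curious what rendering a mandelbulb involves, the standard approach is to ray-march a distance estimator. The snippet below is the textbook power-8 formulation in Python, offered as a sketch of the general technique rather than anything from Imageworks’ pipeline.

```python
import math

def mandelbulb_de(px, py, pz, power=8, iterations=12, bailout=2.0):
    """Approximate distance from point (px, py, pz) to the power-8 Mandelbulb surface."""
    x, y, z = px, py, pz
    dr, r = 1.0, 0.0
    for _ in range(iterations):
        r = math.sqrt(x * x + y * y + z * z)
        if r > bailout:
            break
        if r < 1e-9:          # avoid division by zero at the origin
            return 0.0
        # Convert the current point to spherical coordinates.
        theta = math.acos(z / r)
        phi = math.atan2(y, x)
        # Running derivative used by the distance estimate.
        dr = (r ** (power - 1)) * power * dr + 1.0
        # "z -> z^power + c" expressed in spherical form.
        zr = r ** power
        theta *= power
        phi *= power
        x = zr * math.sin(theta) * math.cos(phi) + px
        y = zr * math.sin(theta) * math.sin(phi) + py
        z = zr * math.cos(theta) + pz
    return 0.5 * math.log(r) * r / dr
```

A ray marcher steps along each camera ray by the returned distance until it converges on the surface, which is what makes these infinitely detailed shapes renderable at all.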

So there wasn’t a specific shot that was more challenging than another, but the work that went into most of them required a very challenging pre-design and concept solve involving fractal physics to make them work.

Can you talk about tools — off-the-shelf or proprietary — you used for the VFX? Any rendering in the cloud?
We used Side Effects Houdini and Autodesk Maya for the majority of shots and The Foundry’s Nuke to comp everything. When it came to rendering we used Arnold, and in regards to cloud rendering, we did render remotely to our own cloud, which is about 1,000 miles away — does that count (smiles)?

Sony Imageworks helps take ‘Alice Through the Looking Glass’

By Christine Holmes

Sony Imageworks VFX supervisor Jay Redd’s journey with Alice Through the Looking Glass began at the end of 2013, a full two and a half years before the US domestic release. A seasoned veteran in the visual effects world, Redd partnered closely with director James Bobin, Imageworks VFX supervisor Ken Ralston, production designer Dan Hennah and crew to bring this vibrant adventure to life.

Jay Redd

Time itself plays multiple roles in this new chapter of the Alice in Wonderland story. We see Alice return to Wonderland — or “Underland,” as it’s referred to in the film — to help her friend the Mad Hatter find out what happened to his family many years ago. She sets out on a solo quest to find the Chronosphere, a small object at the center of a giant clock in a castle. The sphere, which controls time, is guarded by a new character called Time (played by Sacha Baron Cohen), a true personification of time. When taken from the clock and activated for time travel, the Chronosphere brings Alice to a place where all the moments in Underland’s history are displayed within the waves of the Oceans of Time. Disrupting the Chronosphere ultimately has consequences, as Time, and time itself, begin to break down.

Redd was kind enough to talk to postPerspective about the challenges of representing Time the character, and the concept of time, in this film.

Did it help your initial process to have had a visual language already established from Alice in Wonderland?
I would say it served as a foundation, but part of what was exciting about working with James Bobin on this one was that he wanted to make it feel really different. The time travel element allowed us to go back into Underland before things became sad — before the Jabberwocky attacked the village, before the Red Queen was in power, before all of these things. It allowed us to expand on the palette to be much more saturated and vibrant, and I think you can see that as compared to Alice in Wonderland.

How did you begin the challenge of representing time in both human form and in an entirely new time travel world?
The character Time is not in any of Lewis Carroll’s books, and in one of the early drafts of the script, time itself is just a concept. Then James Bobin brought the idea that time is a character. Personify time and create an actual character. The idea of the back of his head being clockwork came from James as well. He wanted Time to be part of the clock. There’s a moment in the film where Time says, “I am he and he is me.” The clock and Time are the same thing, so when we see Time open his chest, there’s a miniature version of the clock in there as well.

The idea of the Oceans of Time was just one line in the script: “Alice traveled through the Oceans of Time.” Then Ken Ralston, James, myself and our team had to figure out what that looked like. James was very adamant about wanting to include images. Pardon the pun, but over time, we experimented with a number of different looks and ended up with this ocean setting that surrounded you — the characters and the audience. You would go in and out of the ocean to enter moments and different times in history.

clock before

clock after

With the moments depicted within the ocean waves, it felt like there was a very painterly style employed there. What was the thought process behind that decision?
Those images actually came from original shots from the first movie. We started with footage — either completed shots from the first movie or footage from other scenes in this sequel. We couldn’t complete some of the Oceans of Time shots until we had finished the others, so that became a weird schedule for us. We processed the footage for the moments to make them feel like they were in the water. We didn’t want the moments to feel like you were at a drive-in theater, as if they were just projected on a wall or surface. We wanted them to feel volumetric — to have volume and feel thick and deep — 100 feet in the air. It’s a really interesting process that all had to start with 2D processing in our compositing department, which required slowing down the footage to make it feel bigger and more present.

To add scale?
Yes, exactly, scale. Sometimes a shot used for these moments would only be one second long, but we needed three seconds. So we would slow it down, process it and use our own optical flow processes from our 2D workflow, which would then feed into our water simulations. Then, in the 3D simulations, those moving pieces of footage, using the vector data from the optical flow, would actually move the water and the wave spray around. When you see the Jabberwocky swing its head, it’s actually affecting the surface of the water. That’s why it has that painterly, or liquid, feel to it. I’m really impressed with what our team did in 3D. That’s the stuff you really can’t see as clearly in a flat 2D theatrical projection. In a 3D theater, it really comes alive. I’m happy you picked up on it because it was a lot of work.
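As a simplified sketch of that coupling (an assumption for illustration, not the production Houdini setup), per-pixel optical-flow vectors extracted from the retimed footage can both advect a 2D water height field and inject extra displacement where the plate moves fast, so motion in the footage visibly pushes the water around.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def step_height_field(height, flow, dt=1.0, gain=0.05):
    """height: (H, W) water height field; flow: (H, W, 2) optical-flow vectors,
    stored as (x, y) displacements in pixels per frame."""
    h, w = height.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Semi-Lagrangian advection: pull each sample from where the flow came from.
    src_y = yy - dt * flow[..., 1]
    src_x = xx - dt * flow[..., 0]
    advected = map_coordinates(height, [src_y, src_x], order=1, mode="nearest")
    # Let strong motion in the footage kick up extra displacement (the "spray").
    speed = np.linalg.norm(flow, axis=-1)
    return advected + gain * dt * speed
```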

Those are some of my favorite moments, especially in 3D.
Awesome! Mine too. The Oceans of Time is so much bigger in 3D projection. That little Chronosphere you see, that’s the thing that tells you how big everything is. We can play with the size of the Chronosphere. You can make the Chronosphere bigger and everything feels smaller. Or you can make the Chronosphere smaller and animate it in a slightly different way and suddenly the Oceans of Time is huge. There’s a really interesting relationship between the size of the Chronosphere, how fast it’s moving, how fast we’re moving, and the scale of the world. That was something that took weeks to figure out in animation. If you move too quickly through a large environment, it doesn’t feel that huge. That’s something we wanted to play with and, of course, keep the pacing of the chase sequence to keep it exciting. Those are the kind of things that take months to figure out.

What about the creative evolution of the effect used to represent time breaking down in a tangible way in Underland?
We knew Time’s castle had to get completely rusted over, or frozen over. On a trip to the Natural History Museum in Los Angeles, Ralston and I came upon an obsidian rock with a kind of mineral growth on it. It was a bright orange and red mineral deposit that had started growing across the crystals, and it probably took a million years to happen. We looked at each other, recognizing just how cool that was.

Fast forward a few months to meetings with the effects lead and the rest of our team about covering the entire world with rust. Ken and I were shooting in London and had been doing dailies with our team for a few weeks, and even while we were shooting we were creating this reference imagery. We gave all of this material to our team, led by Imageworks senior effects/simulation supervisor Joseph Pepper. A few weeks later he had put together a very rough test.

After the first viewing I said, “At some point we’ll want to get spikes and dust in there, but don’t do that right now, just keep it simple.” Well, Pepper and team went further with the next test. It’s Alice running down a hallway in the castle — granted this is all digital — and there’s this rust aggressively chasing her. Her hair is flopping all over and she looks back, and there is rust shooting up the walls, down the stairs, and through all the arches. A piece of stone breaks off and all this dust is falling and then the rust catches her by the ankle and freezes her in a second and a half, putting her in this physically impossible shape where she’s balancing on her toe reaching for a window. A lot of chaos and excitement! Ken and I looked at that with our jaws dropped. It really showed us what was possible with this idea.

The last week of shooting at Shepperton Studios was the moment we knew the rust was going to be something really cool. We made a couple of small changes to the test and showed it to James Bobin. I was watching his face as he viewed it. It kind of sank and I thought, “Oh crap.” Then he said, “Wow. That’s scary.” We knew we had hit it! There was a kind of nervous chuckle and then he said something like, “Yeah, that’ll scare the kids.”

What was the most challenging shot in this film?
The toughest shot was the real movie version of the one I just described to you, when the rust comes over the clock cog and big gear, grabs Alice by the ankle, then climbs up her body and freezes her face all the way to her fingertips as she’s trying to drop the Chronosphere back into the holder. That was one of the most difficult shots of the film.

In fact, we reshot the live-action element of Alice a year later because we wanted to do a slightly better version of it to expose more of her ankle and her body. What we were doing was blending from a full live-action version of Alice to a full photoreal digital double. That’s the kind of shot that every single department touched, from paint and stabilization in the 2D world to wire rig removals, from full on modeling to all the textures of the costumes.

Then there was the lighting, the castle and all the effects — very detailed timing, art direction and control of the animation of the rust coming around her face, crossing her eyes, nose and arms at specific times. There was also a subtle effect on the cloth, adding weight as the rust went over her arm. It’s incredibly dense and detailed digital work. Our team developed a lot of specialized and cool technology for that: new animation tools, rendering tools and FX tools to make it happen — definitely pushing boundaries for us at Sony Imageworks.

Christine Holmes is a freelance artist and manager of animated content. She has worked in the film industry for the last six years.

The A-List: ‘Hotel Transylvania 2’ director Genndy Tartakovsky

By Jennifer Walden

Hotel Transylvania 2 has been kicking some box office butt since it was released on September 25. In this sequel to the original, the story follows Dracula (voiced by Adam Sandler) and his posse as they team up to teach grandson Dennis, who is half-human/half-vampire, a few lessons on how to be a monster, all in an effort to (once again) keep daughter Mavis safely at home.

The sequel is not only a return for the “Drac Pack,” it was also a reunion for the Hotel Transylvania creative teams at Sony, with the visual effects team at Sony Pictures Imageworks and the post sound team at Sony Pictures Post once again guided by director Genndy Tartakovsky.

Genndy Tartakovsky

After finishing work on the original Hotel Transylvania (2012), Tartakovsky and the Sony animation team immediately started on Popeye and were concurrently developing Hotel Transylvania 2. “We had a script from Adam Sandler and Robert Smigel, which was completely different from what we ended up with in the movie,” notes Tartakovsky. “We worked out all the story problems and then we watched the film in animatic and storyboard form, to see what was good and what was bad.”

They made adjustments until the story felt solid, and then Tartakovsky and his team began work on the design, building sets and new characters in preparation for animation. He notes, “The story is what drives everything initially.”

Tartakovsky, who cut his feature film directing teeth on Hotel Transylvania, explains how that experience informed the entire process on Hotel T2. “I knew where we fell short on the first one visually and animation-wise, so we tried to strengthen that,” says Tartakovsky. “On the first film we just made it all work, but this time around we wanted to actually write some new programs to fit this style of animation. The style of this film, the style that we do, it doesn’t work well with computers, because computers are good for mimicking live-action.”

But the world of Hotel Transylvania doesn’t mimic live-action; it’s quicker and cartoonier, so tools based on real-world physics aren’t always applicable. Imageworks developed a new motion blur system and new ways to manipulate clothing. New rigging techniques were developed, along with simpler ways to do the modeling. “We made a lot of animation tools to make this style a little more feasible and make it happen quicker.”

Another turbo-boost to the process came from Tartakovsky picking up visual elements for returning characters, like Dracula, Wayne (voiced by Steve Buscemi) and Frankenstein (voiced by Kevin James), and using those as a starting point for the sequel. “Everything was upgraded, and then we developed a library system, so if we liked an expression that Drac had from the first movie we could just punch it in and that would be the starting place,” says Tartakovsky. “We would be halfway there, and then the animators would push and pull him to get him to be even more extreme or in a more specific mood.”

In the case of Blobby, a minor character in the first Hotel Transylvania, Tartakovsky knew his role would be expanded in the sequel. He redesigned Blobby, giving him more rolls, giving him arms and lending an extra jellyness to the character. “We developed this whole system of jellying up and down. We knew the range that Blobby needed to be in, so we went for it. That’s the great thing about working at Sony Imageworks; the process is not super complicated for them because they’ve done all these VFX movies like Spider-Man and Edge of Tomorrow and Alice in Wonderland. They’ve done projects 10 times harder and they bring that experience to our world,” says Tartakovsky.

New characters include human-sized bats called Cronies that hang out with Vlad (Drac’s dad, voiced by Mel Brooks). Tartakovsky drew inspiration from Nosferatu for his initial sketch, which he then turned over to character designer Stephen DeStefano. “He really nailed it by finding the right balance between scary and funny,” Tartakovsky says.

Composer Mark Mothersbaugh was also tasked with balancing the scary with the funny, particularly during fight scenes and scenes involving Vlad and his Cronies. “We wanted to make sure that the scary elements didn’t overtake the film, so we made the music more fun and silly to take a bit of the pressure off that,” he explains. “Also, because we have much more of the human world involved, we ended up doing more contemporary music for this one.” One of Tartakovsky’s favorite moments in the score happens during the big fight at the end. “I saw the film in the theater this weekend with an audience and they were actually cheering on that scene — I think the music had a lot to do with that.”

Sony Pictures Post, Imageworks, Colorworks lend a hand on ‘The Amazing Spider-Man 2’

The visual effects, sound and post teams from Sony Pictures Entertainment teamed up on the post production workflow for Columbia Pictures’ The Amazing Spider-Man 2.

Sony Pictures Imageworks provided over 1,000 visual effects shots, Sony Pictures Post Production provided sound, in its first project in Dolby Atmos, and Colorworks performed film scanning, conforming and color grading all in 4K. An innovative workflow provided the teams with unprecedented access to data associated with the production, enabling them to work together in close and simultaneous collaboration. The result was improved efficiency and a more spectacular cinematic experience for audiences around the world.

“Our groups have developed a close, creative partnership, aided by remarkable new technologies that allowed them to work across disciplines as one team,” said Randy Lake, EVP/GM of Digital Production Services. “The results are clearly evident in The Amazing Spider-Man 2, which established new standards for technical innovation.”

Working with senior visual effects supervisor Jerome Chen, Sony Pictures Imageworks created a variety of visual effects that blend seamlessly with live-action stunts and performances. Artists created three new villains — Electro, Rhino and Goblin — and developed fully digital CG environments representing New York’s Times Square, Manhattan skyscrapers, a next-generation hydroelectric plant and an art deco-era clock tower, among many other one-off effects seen throughout the film.

The film’s sound team was led by sound supervisors/designers Addison Teague and Eric Norris, and re-recording mixers Paul Massey and David Giammarco. Working in the newly renovated William Holden Theater on the Sony Pictures Entertainment lot, Massey and Giammarco mixed the soundtrack natively in Dolby Atmos and finished in Atmos, Auro and 5.1 formats, a combination that had never been done before. They used a Harrison MPC4D console with the Xrange engine.

At Colorworks, film dailies — amounting to more than 1.5 million feet of 35mm film — were scanned to 4K digital format prior to editorial. The film later returned to Colorworks for conforming, final color grading (in 4K) and mastering in 2D and stereo 3D.

Collaboration between the teams was facilitated by Sony Pictures Entertainment’s Production Backbone, the studio’s shared-storage environment. Original production elements, along with associated metadata — cumulatively amounting to more than 2.4 petabytes — were stored on the backbone, where they were accessible to sound and picture editors, visual effects artists and others, as needed and in appropriate file formats. The backbone also simplified delivery of elements to external visual effects vendors, who were able to access it through secure, high-speed connections.

“Scanning all of the original film elements at 4K allowed us to work at the highest quality and implement a true digital workflow for the production,” explained Bill Baggelaar, Senior VP of Technology for Colorworks and Post Production. “That saved significant time. The backbone not only provided access to data, it also tracked progress. If an editor or a visual effects artist made a change, it was quickly available to all the other members of the team, which was increasingly important for quick turnaround on a dynamic film like The Amazing Spider-Man 2.”