
Sony Imageworks provides big effects, animation for Warner’s Smallfoot

By Randi Altman

The legend of Bigfoot: a giant, hairy two-legged creature roaming the forests and giving humans just enough of a glimpse to freak them out. Sightings have been happening for centuries with no sign of slowing down — seriously, Google it.

But what if that story was turned around, and it was Bigfoot who was freaked out by a Smallfoot (human)? Well, that is exactly the premise of the new Warner Bros. film Smallfoot, directed by Karey Kirkpatrick. It’s based on the book “Yeti Tracks” by Sergio Pablos.

Karl Herbst

Instead of a human catching a glimpse of the mysterious giant, a yeti named Migo (Channing Tatum) sees a human (James Corden) and tells his entire snow-filled village about the existence of Smallfoot. Of course, no one believes him so he goes on a trek to find this mythical creature and bring him home as proof.

Sony Pictures Imageworks was tasked with all of the animation and visual effects work on the film, while Warner Animation Group did all of the front-end work, such as adapting the script, creating the production design, editing, directing and producing. We reached out to Imageworks VFX supervisor Karl Herbst (Hotel Transylvania 2) to find out more about creating the animation and effects for Smallfoot.

The film has a Looney Tunes-type feel with squash and stretch. Did this provide more freedom or less?
In general, it provided more freedom since it allowed the animation team to really have fun with gags. It also gave them a ton of reference material to pull from and come up with new twists on older ideas. Once out of animation, depending on how far the performance was pushed, other departments — like the character effects team — would have additional work due to all of the exaggerated movements. But all of the extra work was worth it because everyone really loved seeing the characters pushed.

We also found that as the story evolved, Migo’s journey became more emotionally driven, and we needed to find a style that also let the audience truly connect with what he was going through. We brought in a lot more subtlety, and a more truthful physicality to the animation when needed. As a result, we have these incredibly heartfelt performances alongside moments that would feel right at home in an old Road Runner short. Yet it all still feels like part of the same world, with these truly believable characters at the center of it.

Was scale between such large and small characters a challenge?
It was one of the first areas we wanted to tackle, since the look of the yeti fur next to a human was really important to the filmmakers. In the end, we found that the thickness and fidelity of the yeti hair had to be very high so you could see each hair next to the hairs of the humans.

It also meant making the rigs for the humans and yetis flexible enough to be scaled as needed, so that in moments where they are very close together they did not feel so disproportionate to each other. Everything in our character pipeline, from animation down to lighting, had to be flexible in dealing with these scale changes. Even the subsurface scattering in the skin had dials to deal with moments when Percy, or any human character, was scaled up or down in a shot.

How did you tackle the hair?
We updated a couple of key areas in our hair pipeline, starting with how we build our hair. In the past, we would make curves that looked more like small clumps of hairs. In this case, each curve was a single strand of hair. To shade this hair in a way that gave artists better control over the look, our development team created a new hair shader that used true multiple scattering within the hair.

We then extended that hair shading model to add control over the distribution around the hair fiber to model the effect of animal hair, which tends to scatter differently than human hair. This gave artists the ability to create lots of different hair looks, which were not based on human hair, as was the case with our older models.
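One way to picture that azimuthal control is as a normalized lobe around the fiber whose width is a dialable roughness, with animal fur getting a wider lobe than human hair. A toy sketch of the idea; the function name and Gaussian parameterization are ours for illustration, not the production shader:

```python
import math

def azimuthal_lobe(phi, roughness):
    """Normalized Gaussian lobe over the azimuthal offset `phi` (radians,
    measured around the hair fiber from the ideal scattering direction).
    A larger `roughness` spreads energy around the fiber, mimicking the
    wider azimuthal scattering of animal fur versus human hair."""
    return math.exp(-0.5 * (phi / roughness) ** 2) / (roughness * math.sqrt(2.0 * math.pi))
```

Widening the lobe lowers the peak and pushes energy toward grazing azimuths, which is the kind of knob an artist would turn to move a groom away from a human-hair look.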

Was rendering so many furry characters on screen at a time an issue?
Yes. In the past it would have been hard to shade them all at once, mostly due to our reliance on opacity to create the soft shadows needed for fur. With the new shading model we were no longer using opacity at all, so the number of rays needed to resolve the hair was lower than before. But we still needed to resolve the aliasing caused by the sheer number of fine hairs (nine million for LeBron James’ Gwangi).

We developed a few other new tools within our version of the Arnold renderer to help with aliasing and render time in general. The first was adaptive sampling, which allowed us to raise the anti-aliasing samples drastically: some pixels would use only a few samples while others would use very many, whereas in the past all pixels got the same number. This focused our render times where we needed them, helping to reduce overall rendering time. Our development team also added the ability to pick a render up from its previous point. This meant we could do all of our lighting work at a lower quality level, get creative approval from the filmmakers, and then pick up the renders and bring them to full quality without losing the time already spent.

What tools were used for the hair simulations specifically, and what tools did you call on in general?
We used Maya and the Nucleus solvers for all of the hair simulations, but developed tools on top of them to deal with so much hair per character and so many characters on screen at once. The simulation for each character was driven by its design and motion requirements.

The Looney Tunes-inspired design and motion created a challenge around how to keep hair simulations from breaking with all of the quick and stretched motion while being able to have light wind for the emotional subtle moments. We solved all of those requirements by using a high number of control hairs and constraints. Meechee (Zendaya) used 6,000 simulation curves with over 200 constraints, while Migo needed 3,200 curves with around 30 constraints.

Stonekeeper (Common) was the most complex of the characters, with long braided hair on his head, a beard, shaggy arms and a cloak made of stones. He required a cloth simulation pass for the cloak, a rigid-body simulation for the stones, and hair simulated on top of the stones. Our in-house tool, Kami, builds all of the hair at render time and also lets us add procedurals to the hair at that point. We relied on those procedurals to create the many varied hair looks for all of the generic characters needed to fill the village with yetis.

How many different types of snow did you have?
We created three different snow systems for environmental effects. The first was a particle simulation of flakes for near-ground detail. The second was a set of volumetric effects that gave the backgrounds lots of atmosphere with texture and movement; we ran these on each of the large sets and stored them so lighters could pick which parts they wanted in each shot. The third, to help artistically drive the look of each shot, was a library of 2D elements the effects team rendered, which could be added during compositing to bring in details late in shot production.

For ground snow, we had different systems based on the needs in each shot. For shallow footsteps, we used displacement of the ground surface with additional little pieces of geometry to add crumble detail around the prints. This could be used in foreground or background.

For heavy interactions, like tunneling or sliding in the snow, we developed a new tool we called Katyusha. It combined rigid-body destruction with fluid simulation to capture all of the different states snow can take in any given interaction. We then rendered these simulations as volumetrics to achieve the complex lighting look the filmmakers wanted. Because the snow was, in essence, a cloud, light could transport through all of the layers of geometry and volume present at any given point in a scene, which made it easier for the lighters to get the right look for the snow in any lighting situation.

Was there a particular scene or effect that was extra challenging? If so, what was it and how did you overcome it?
The biggest challenge to the film as a whole was the environments. The story was very fluid, so design and build of the environments came very late in the process. Coupling that with a creative team that liked to find their shots — versus design and build them — meant we needed to be very flexible on how to create sets and do them quickly.

To achieve this, we began by breaking the environments into a subset of source shapes that could be combined in any fashion to build Yeti Mountain, Yeti Village and the surrounding environments. Surfacing artists then created materials that could be applied to any set piece, allowing for quick creative decisions about what was rock, snow and ice, and creating many different looks. All of these materials were built with PatternCreate networks as part of our OSL shaders, which let us heavily leverage portable procedural texturing between assets, making location construction quicker, more flexible and easier to dial.

To get the right snow look for all levels of detail needed, we used a combination of textured snow, modeled snow and a simulation of geometric snowfall, all of which needed to shade the same way. For the simulated snowfall we created a padding system that could be run at any time on an environment, giving it a fresh coating of snow. We did this so the filmmakers could modify sets freely in layout without having to worry about broken snow lines. Doing all of that with modeled snow would have been too time-consuming and costly. The padding system worked not only in organic environments, like Yeti Village, but also in the Human City at the end of the film; the snow you see there is a combination of the padding system in the foreground and textures in the background.
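The padding idea can be illustrated with a toy vertex pass: offset vertices along the up axis in proportion to how upward-facing their normals are, so freshly modified geometry instantly gets a plausible snow line. This is only a sketch of the general technique, not Imageworks’ tool; the function name and linear falloff are invented for illustration.

```python
def pad_with_snow(vertices, normals, depth=0.1, up=(0.0, 0.0, 1.0), threshold=0.3):
    """Toy snow-padding pass: push each vertex along `up` by an amount
    that ramps from zero (steep or downward faces) to the full `depth`
    (flat, upward-facing geometry)."""
    padded = []
    for v, n in zip(vertices, normals):
        facing = sum(ni * ui for ni, ui in zip(n, up))  # cosine of the slope angle
        amount = depth * max(0.0, facing - threshold) / (1.0 - threshold)
        padded.append(tuple(vi + amount * ui for vi, ui in zip(v, up)))
    return padded
```

Because the pass derives the coat from the current geometry rather than from hand-modeled snow, it can be rerun after any layout change, which is the property the article highlights.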

Dell partners with Sony on Spider-Man film, showcases VR experience

By Jay Choi

Sony Pictures Imageworks used Dell technology during the creation of Spider-Man: Homecoming. To celebrate, Dell and Sony held a press junket in New York City that included tech demos and details on the film, as well as the Spider-Man: Homecoming Virtual Reality Experience. While I’m a huge Spider-Man fan, I am not biased in saying it was spectacular.

To begin the VR demo, users are given the same suit Tony Stark designs for Peter Parker in Captain America: Civil War and Spider-Man: Homecoming. The first thing you do is grab the mask and put on the costume. You then jump into a tutorial that teaches you the web-shooter mechanics (which map intuitively to your VR controllers).

You are then tasked with stopping the villainous Vulture from attacking you and the city of New York. Admittedly, I didn’t get too far into the demo. I was a bit confused as to where to progress, but also absolutely stunned by the mechanics and details. Along with pulling triggers to fire webs, each button accessed a different type of web cartridge in your web shooter. So, like Spidey, I had to be both strategic and adaptive to each changing scenario. I actually felt like I was shooting webs and pulling large crates around… I honestly spent most of my time seeing how far the webs could go and what they could stick to — it was amazing!

The Tech
With the power of thousands of workstations, servers and over a petabyte of storage from Dell, Sony Pictures Imageworks and other studios, such as MPC and Method, were able to create the visual effects for the Spider-Man: Homecoming film. The Virtual Reality Experience actually pulled the same models, assets and details used in the film, giving users a truly awesome and immersive experience.

When I asked what this particular VR experience would cost the typical consumer, I was told that when developing the game, Dell researched the major VR consoles and workstations and set a performance benchmark to strive for, so that most consumers should be able to run the experience without much of a difference.

Along with the VR game, Dell also showcased its new gaming laptop: the Inspiron 15 7000. With a quad-core 7th-generation Intel Core H-series processor and Nvidia GeForce GTX 1050/1050 Ti graphics, the laptop is marketed for hardcore gaming. It has a tough-yet-sleek design that’s appealing to the eye. However, I was more impressed with its power and potential. The junket had one of these new Inspiron laptops running the recently rebooted Killer Instinct fighting game (which ironically was my very first video game on the Super Nintendo… I guess violent video games did an okay job raising me). As a fighting game fanatic and occasional competitor, I have to say the game ran very smoothly. I couldn’t spot any latency between inputs from the USB-connected Xbox One controllers, or any frame skipping. It does what it says it can do!

The Inspiron 15 7000 was also featured in the Spider-Man: Homecoming film, used by Jacob Batalon’s character, Ned, to aid Peter Parker in his web-tastic mission.

I was also lucky enough to try out the Sony Future Lab Program’s projector-based interactive Find Spider-Man game, where the game’s “screen” is projected onto a table from a depth-perceiving projector lamp. A blank board served as a scroll to maneuver a map of New York City, while piles of movable blocks were recognized as buildings with individual floors. Sometimes Spidey was found sitting on a roof, while other times he was hiding inside on one of the floors.

All in all, the Dell and Sony Pictures Imageworks partnership provided some sensational insight into what being Spider-Man is like through their technology and innovation, and I hope to see it evolve even further alongside future Spider-Man films.

The Spider-Man: Homecoming Virtual Reality Experience arrives on June 30th for all major VR platforms. Marvel’s Spider-Man: Homecoming releases in theaters on July 7th.


Jay Choi is a Korean-American screenwriter, who has an odd fascination with Lego minifigures, a big heart for his cat Sula, and an obsession with all things Spider-Man. He is currently developing an animated television pitch he sold to Nickelodeon and resides in Brooklyn.