
Behind the Title: Legwork director of production Chris Grey

NAME: Chris Grey

COMPANY: Denver-based Legwork

CAN YOU DESCRIBE YOUR COMPANY?
Legwork is an independent creative studio combining animation and technology to create memorable stories and experiences for advertising, entertainment and education.

WHAT’S YOUR JOB TITLE?
Director of Production

WHAT DOES THAT ENTAIL?
I touch almost all parts of the business, including business development, client relationships, scoping, resourcing, strategy, producer mentorship and making sure every project that goes out the door is up to our high standards. Oh, and I still produce several projects myself.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
It might be cliché, but you still need to get your hands dirty producing things. You just can’t escape it, nor should you want to. It sets the example for your team.

Dominos

WHAT’S YOUR FAVORITE PART OF THE JOB?
The problem-solving aspect of it. No matter how tight your project plan is, it’s a given that curveballs are going to happen. Planning for those and being able to react with smart solutions is what makes every day different.

WHAT’S YOUR LEAST FAVORITE?
Anxiety isn’t fun, but it comes with the job. Just know how to deal with it and don’t let it rub off on others.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
First hour of the day is for emails. I do my best to keep my afternoons meeting-free unless it’s a client meeting. My last job put a lot of emphasis on “flow” and staying in it, so I try to keep all internal meetings in the morning so the whole team, myself included, can work in the afternoon.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I’ve always wanted to own a cool bodega/deli type of place. We’d specialize in proper sandwiches, hard-to-find condiments and cheap beer. Keeping this dream alive…

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I knew in college. Crispin Porter + Bogusky was moving to Boulder during my junior or senior year at the University of Colorado. I read up on them and thought to myself, “That’s it. That’s what I want to do.” I was lucky enough to get an internship there after graduation and I haven’t really looked back.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Can I take credit for the team on these two? Cool, because we’re super-proud of these, but I didn’t “produce” them:
Rise: Hope-a-monics
Pandora: Smokepurpp

Yeti

Some stuff I worked on recently that we are equally proud of:
https://www.yeticycles.com/
https://ifthisthendominos.com/
L.L.Bean: Find Your Park

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
More than a project, our relationship with YouTube has been super rewarding. The View in 2 series is now on its fifth season and it was one of the first things I worked on when I got to Legwork. Watching the show and our relationship with the client evolve is something I am proud of. In the coming months, there will be a new show that we’re releasing with them that pushes the style even further.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
1. This is a cheat because it covers music, my calendar, email, etc., but one is my iCloud and Google accounts — because 75 percent of my life is on there now.
2. My Nest camera gives me peace of mind when I’m out of town and lets me know my dog isn’t too lonely.
3. Phonograph records — old tech that I love to collect.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Besides friends and family? Lots of food-related ones (current favorites are @wdurney and @turkeyandthewolf), sports/sneakers (@houseofhighlights, @jordansdaily), history (@ww2nowandthen) and a good random one is @celebsonsandwhiches.

I also like every @theonion post.

That was all for Instagram. I save Twitter for political rants and Liverpool F.C.

DO YOU LISTEN TO MUSIC WHILE YOU WORK? CARE TO SHARE YOUR FAVORITE MUSIC TO WORK TO?
We have a Sonos at the office and more often than not it forces me to put on my headphones. Sorry, Legworkers. So it might be a podcast, Howard Stern, KEXP or something British.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I’m a new dad, so that helps keep everything in perspective. That and some brewery visits on the weekend, which are totally socially acceptable to bring infants to!

Behind the Title: ATK PLN Technical Supervisor Jon Speer

NAME: Jon Speer

COMPANY: ATK PLN (@atkpln_studio) in Dallas

CAN YOU DESCRIBE YOUR COMPANY?
We are a strategic creative group that specializes in design and animation for commercials and short-form video productions.

WHAT’S YOUR JOB TITLE?
Technical Supervisor

WHAT DOES THAT ENTAIL?
In general, a technical supervisor is responsible for leading the technical director team and making sure the pipeline supports our artists in fulfilling the client’s vision.

Day-to-day responsibilities include:
– Reviewing upcoming jobs and making sure we have the necessary hardware resources to complete them
– Working with our producers and VFX supervisors to bid and plan future work
– Working with our CG/VFX supervisors to develop and implement new technologies that make our pipeline more efficient
– When problems arise in production, I am there to determine the cause, find a solution and help implement the fix
– Developing junior technical directors so they can be effective in mitigating pipeline issues that crop up during production

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I would say the most surprising thing that falls under the title is how much people and personality management it requires.

As a technical supervisor, you have to represent every single person’s different perspectives and goals. Making everyone from artists, producers, management and, most importantly, clients happy is a tough balancing act. That balancing act needs to be constantly evaluated to make sure you have both the short-term and long-term interests of the company, clients and artists in mind.

WHAT TOOLS DO YOU USE?
Maya, Houdini and Nuke are the main tools we support for shot production. We also integrate with our own internal tracking software.

From text editors for coding, to content creation programs and even budgeting programs, I typically use it all.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Starting the next project. Each new project offers the chance for us to try out a new or revamped pipeline tool that we hope will make things that much better for our team. I love efficiencies, so getting to try new tools, whether they are internally or externally developed, is always fun.

WHAT’S YOUR LEAST FAVORITE?
I know it sounds cliché, but I don’t really have one. My entire job is based on figuring out why things don’t work or how they could work better. So when things are breaking or getting technically difficult, that is why I am here. If I had to pick one thing, I suppose it would be looking at spreadsheets of any kind.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Early morning when no one else is in. This is the time of day that I get to see what new tools are out there and try them. This is when I get to come up with the crazy ideas and plans for what we do next from a pipeline standpoint. Most of the rest of my day usually includes dealing with issues that crop up during production, or being in meetings.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I think I would have to give teaching a try. Having studied architecture in school, I always thought it would be fun to teach architectural history.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We just wrapped on a set of Lego spots for the new Lego Movie 2.

Fallout 76

We also did an E3 piece for Fallout 76 this year that was a lot of fun. We are currently helping out with a spot for the big game this year that has been a blast.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
I think I am most proud of our Lego spots we have created over the last three years. We have really experimented with pipeline on those spots. We saw a new technology out there — rendering in Octane — and decided to jump in head first. While it wasn’t the easiest thing to do, we forced ourselves to become even more efficient in all aspects of production.

NAME PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Houdini really makes the difficult things simple to do. I also love Nuke. It does what it does so well, and is amazingly fast and simple to program in.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Mainly I’ll listen to soundtracks when I am working; the lack of words is best when I am programming.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Golf is something I really enjoy on the weekends. However, like a lot of people, I find travel is easily the best way for me to hit the reset button.

London’s Jelly opens in NYC, EP relocates

London-based Jelly, an animation, design and production company that has produced work for many US-based agencies and direct clients, has opened a full-time presence in New York. Senior creative producer Eri Panasci will relocate to lead the new entity as executive producer.

Launched in 2002, Jelly functions as both a production company and artist management agency. On the commercials front, Jelly represents a global roster of directors and creators who’ve produced animation and motion graphics for brands like Lacoste, Apple, Samsung, Adidas and others. In the latter role, it represents a roster of illustrators and designers who regularly collaborate with brands on print, digital and outdoor ad campaigns.

Panasci’s move to New York is also a homecoming. A Connecticut native and graduate of Boston University, she has worked in New York, San Francisco and London for McCann and Vice Media. She joined Jelly in London in 2016, overseeing design and production assignments for such clients as Virgin Media, Google, Nespresso, McDonald’s and Bombay Sapphire.

“One of the things I’ll be able to do is provide a deeper level of service for our US clients from New York versus London,” says Panasci, “and meld that with the Jelly model and culture. And being able to put a face to a name is always good, especially when you’re dealing with someone who understands the American market and its expectations.”

The studio has lined up US representation with James Bartlett of Mr. Bartlett, whose initial brief will be to handle the East Coast.

Coming from the UK, how does Panasci describe the Jelly approach? “It’s playful yet competent,” she says with enthusiasm. “We don’t take ourselves too seriously, but on the other hand we get shit done, and we do it well. We’re known for craft and solutions, and famously for not saying the word ‘no’ — unless we really have to!”

Recent Jelly projects include Hot House, a zany TVC for Virgin Mobile, co-directed by Design Lad and Kitchen; Soho, an animated short for the shared workspace company Fora and London agency Anyways, directed by Niceshit; and Escape, a spot for the outdoor clothing company Berghaus, directed by Em Cooper for VCCP that uses the director’s unique, hand-painted technique.

Panasci says the focus of Jelly’s US operations will initially be motion work, but adds their illustration talents will also be available, and they’ll be showing print portfolios along with show reels when meeting with agencies and clients. Jelly’s head of illustration, Nicki Field, will accompany Panasci in March to kick off the New York presence with a series of meetings and screenings.

While based in London, the studio is at ease working in America, Panasci says. They’ve produced campaigns for such shops as 72andSunny, Mother, Droga5, BBH, Wieden + Kennedy, Publicis and more, working with both their US and European offices.

Most recently, Jelly signed the New York-based animation team Roof to a representation agreement for the UK market; the team played a leading role in the recent “Imaginary Friends” campaign from RPA in Santa Monica.

Logan uses CG to showcase the luxury of the Lexus ES series

Logan, a creative studio with offices in Los Angeles and New York, worked on the new Lexus ES series “A Product of Mastery” campaign with agency Team One. The goal was to showcase the interior craftsmanship and amenities of this luxury sedan with detailed animations. Viewers are at first given just a glimpse of these features as the spot builds toward a reveal of the sedan’s design.

The campaign was created entirely in CG. “When we first saw Team One’s creative brief, we realized we would be able to control the environments, lighting and the overall mood better by using CG, which allowed us to make the campaign stand apart aesthetically and dramatically compared to shooting the products practically. From day one, our team and Team One were aligned on everything and they were an incredible partner throughout the entire process,” says Logan executive producer Paul Abatemarco.

The three spots in the campaign totaled 23 shots, highlighting features like the car’s high-end Mark Levinson sound system. One spot reveals the craftsmanship of the driver seat’s reverse ventilation as infinite bars of light, while in another the sedan’s wide-view high-definition monitor is unveiled through a vivid use of color and shape.

Autodesk Maya was Logan’s main CG tool, but for the speaker spot they also called on Side Effects Houdini and Cinema 4D. All previs was done in Maya.

Editing was done in Adobe Premiere, and the color grade was done in Resolve in their Dolby-certified color studio.

According to Waka Ichinose and Sakona Kong, co-creative leads on the project, “We had a lot of visual ideas, and there was a lot of exploration on the design side of things. But finding the balance between the beautiful, abstract imagery and then clearly conveying the meaning of each product so that the viewers were intrigued and ultimately excited was a challenge. But it was also really fun and ultimately very satisfying to solve.”

Promoting a Mickey Mouse watch without Mickey

Imagine creating a spot for a watch that celebrates the 90th anniversary of Mickey Mouse — but you can’t show Mickey Mouse. Already Been Chewed (ABC), a design and motion graphics studio, developed a POV concept that met this challenge and also tied in the design of the actual watch.

Nixon, a California-based premium watch company that is releasing a series of watches around the Mickey Mouse anniversary, called on Already Been Chewed to create the 20-second spot.

“The challenge was that the licensing arrangement that Disney made with Nixon doesn’t allow Mickey’s image to be in the spot,” explains Barton Damer, creative director at Already Been Chewed. “We had to come up with a campaign that promotes the watch and has some sort of call to action that inspires people to want this watch. But, at the same time, what were we going to do for 20 seconds if we couldn’t show Mickey?”

After much consideration, Damer and his team developed a concept to determine if they could push the limits on this restriction. “We came up with a treatment for the video that would be completely point-of-view, and the POV would do a variety of things for us that were working in our favor.”

The solution was to show Mickey’s hands and feet without actually showing the whole character. In another instance, a silhouette of Mickey is seen in the shadows on a wall, sending a clear message to viewers that the spot is an official Disney and Mickey Mouse release and not just something that was inspired by Mickey Mouse.

Targeting the appropriate consumer demographic segment was another key issue. “Mickey Mouse has long been one of the most iconic brands in the history of branding, so we wanted to make sure that it also appealed to the Nixon target audience and not just a Disney consumer,” Damer says. “When you think of Disney, you could brand Mickey for children or you could brand it for adults who still love Mickey Mouse. So, we needed to find a style and vibe that would speak to the Nixon target audience.”

The Already Been Chewed team chose surfing and skateboarding as dominant themes, since 16- to 30-year-olds are the target demographic and also because Disney is a West Coast brand.

Damer comments, “We wanted to make sure we were creating Mickey in a kind of 3D, tangible way, with more of a feature film and 3D feel. We felt that it should have a little bit more of a modern approach. But at the same time, we wanted to mesh it with a touch of the old-school vibe, like 1950s cartoons.”

In that spirit, the team wanted the action to start with Mickey walking from his car and then culminate at the famous Venice Beach basketball courts and skate park. Here’s the end result.

“The challenge, of course, is how to do all this in 15 seconds so that we can show the logos at the front and back and a hero image of the watch. And that’s where it was fun thinking it through and coming up with the flow of the spot and seamless transitions with no camera cuts or anything like that. It was a lot to pull off in such a short time, but I think we really succeeded.”

Already Been Chewed achieved these goals with an assist from Maxon’s Cinema 4D and Adobe After Effects. With Damer as creative lead, the complete cast of characters was: head of production Aaron Smock; 3D design by Thomas King, Barton Damer, Bryan Talkish and Lance Eckert; animation by Bryan Talkish and Lance Eckert; character animation by Chris Watson; and soundtrack by DJ Sean P.

How to use animation in reality TV

By Aline Maalouf

The world of animation is changing and evolving at a rapid pace, bringing photorealistic imagery to the small screen and the big screen — animation rendered with such detail that you can imagine the exact sensation of the water, feel the heat of the sunshine and experience the wilderness. Just look at the difference between the first Toy Story film, released in 1995, and Toy Story 3, released in 2010.

Over those 15 years, a complete world of difference emerged — the characters and environments gained depth and dimension, the colors became richer, we can read changes in shadow and light, and the sequences move much more quickly. The third film was a major feat for the studio, and now, eight years later, that technology is already on the cusp of being old news.

Technology is advancing faster than it can be implemented — and it isn’t just the Pixars and Disneys of the world who have to stay ahead of the curve with each release. Boutique companies are under just as much pressure to continually push the envelope on what’s possible in the animation space, while still delivering top results to clients within the sometimes demanding time constraints of film and television.

Aline Maalouf

Working in reality TV presents its own set of challenges in comparison to a fully animated program. To start, you need to seamlessly combine real-life interaction with animation — often showcasing what is there against what could be there. As animation continues to evolve, integrating with emerging technology, such as virtual reality, augmented reality and immersive platforms, understanding how users interact with the platform and how to best engage the audience will be crucial.

Here are four ways using animation can enhance a reality TV program:

Showcasing a World of Possibilities
With the introduction of 3D animation, we are able to create imagery so realistic that it is often hard to define what is “real” and what is virtually designed. The real anchor of hyper-realistic animation is the ability to add and play with light and shadows. Layers of light allow us to see reflection, to experience a room from a new angle and to challenge viewers to experience the room in both the daylight and at nighttime.

For example, within our work on Gusto TV’s Where To I Do, couples must select their perfect wedding venue — often viewing a blank space and trying to envision their theme inside it. Using animation, those spaces are brought to life in full, rich color, from dawn to the glaring midday sun to dusk and midnight — no additional film crew time required.

Speeds up Production Process
Gone are the days when studios spent large budgets resetting room after room to showcase before-and-after options, particularly when it comes to renovation shows. It’s time-consuming and laborious. Working with an animation studio allows producers to showcase a renovated room three different ways, and the audience develops an early feel for the space without the need to see it physically set up.

It’s faster (with the right tools and technology to match TV timelines), allows more flexibility and eliminates the need to build costly sets for one-time use. Even outside of reality TV, the use of greenscreen space, green stages and CGI technology allows a flexibility in filming that didn’t necessarily exist two decades ago.

Makes Viewers Part of the Program
If animation is done well, it should make the viewers feel more invested in the program — as if they are part of this experience. Animation should not break what is happening in reality. In order to make this happen, it is essential to have up-to-date software and hardware that bridges the gap between the vision and what is actually accomplished within each scene.

Software and hardware go hand-in-hand in creating high-quality animations. If the software is up to date and not the hardware, the work will be compromised as the rendering process will not be able to support the full project scope. One ripple in the wave of animation and the viewer is reminded that what they’re seeing doesn’t really exist.

Opens Doors to Immersive Experiences
Although we have only scratched the surface of what’s possible when it comes to virtual reality, augmented reality and generating immersive experiences for viewers from the comfort of their living rooms, I anticipate a wave of growth in this space over the next five years. Our studio is already building some of these capabilities into our current projects. Overall, studios and production companies are looking for new ways to engage an audience that is exposed to hours of content a day.

Rather than simply viewing the animation of a wedding venue, viewers will be able to click through the space — guiding their own passage from point A to point B. They become the host of their own journey.

Programs of all genres are dazzling their audiences with the future of animation, and reality TV is right there with them.


Aline Maalouf is co-founder/EVP of Neezo Studios, which has produced the animation and renderings for all six seasons of the Property Brothers and all live episodes of Brother vs Brother, in addition to other network shows.

Sony Imageworks provides big effects, animation for Warner’s Smallfoot

By Randi Altman

The legend of Bigfoot: a giant, hairy two-legged creature roaming the forests and giving humans just enough of a glimpse to freak them out. Sightings have been happening for centuries with no sign of slowing down — seriously, Google it.

But what if that story was turned around, and it was Bigfoot who was freaked out by a Smallfoot (human)? Well, that is exactly the premise of the new Warner Bros. film Smallfoot, directed by Karey Kirkpatrick. It’s based on the book “Yeti Tracks” by Sergio Pablos.

Karl Herbst

Instead of a human catching a glimpse of the mysterious giant, a yeti named Migo (Channing Tatum) sees a human (James Corden) and tells his entire snow-filled village about the existence of Smallfoot. Of course, no one believes him so he goes on a trek to find this mythical creature and bring him home as proof.

Sony Pictures Imageworks was tasked with all of the animation and visual effects work on the film, while Warner Animation Group did all of the front-end work — such as adapting the script, creating the production design, editing, directing, producing and more. We reached out to Imageworks VFX supervisor Karl Herbst (Hotel Transylvania 2) to find out more about creating the animation and effects for Smallfoot.

The film has a Looney Tunes-type feel with squash and stretch. Did this provide more freedom or less?
In general, it provided more freedom since it allowed the animation team to really have fun with gags. It also gave them a ton of reference material to pull from and come up with new twists on older ideas. Once out of animation, depending on how far the performance was pushed, other departments — like the character effects team — would have additional work due to all of the exaggerated movements. But all of the extra work was worth it because everyone really loved seeing the characters pushed.

We also found that as the story evolved, Migo’s journey became more emotionally driven; we needed to find a style that also let the audience truly connect with what he was going through. We brought in a lot more subtlety and a more truthful physicality to the animation when needed. As a result, we have these incredibly heartfelt performances and moments that would feel right at home in an old Road Runner short. Yet it all still feels like part of the same world, with these truly believable characters at the center of it.

Was scale between such large and small characters a challenge?
It was one of the first areas we wanted to tackle, since the look of the yeti fur next to a human was really important to the filmmakers. In the end, we found that the thickness and fidelity of the yeti hair had to be very high so you could see each hair next to the hairs of the humans.

It also meant the rigs for the humans and yetis had to be flexible enough to scale as needed, so moments where they are very close together didn’t feel disproportionate. Everything in our character pipeline, from animation down to lighting, had to handle these scale changes. Even the subsurface scattering in the skin had dials to deal with Percy, or any human character, being scaled up or down in a shot.
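
Herbst doesn’t detail those dials, but the underlying idea is straightforward: subsurface scattering is defined as a world-space distance, so when a character is rescaled the scatter radius must follow, or the skin reads as a different material. Here is a minimal sketch of that relationship, with hypothetical names rather than Imageworks’ actual pipeline:

```python
def scaled_scatter_radius(base_radius_mm, character_scale):
    """Keep subsurface scattering proportional when a character is rescaled.

    base_radius_mm:  scatter radius authored at the character's default size
    character_scale: uniform scale applied to the rig in a given shot
    """
    # Scattering distance is a world-space length, so it must follow the
    # character's scale; otherwise a scaled-down Percy would look waxy
    # (too much relative scattering) and a scaled-up one too opaque.
    return base_radius_mm * character_scale

# Example: Percy scaled to half size for a shot next to a yeti.
print(scaled_scatter_radius(2.5, 0.5))  # 1.25 mm
```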

How did you tackle the hair?
We updated a couple of key areas in our hair pipeline, starting with how we build our hair. In the past, we would make curves that looked more like small groups of hairs in a clump. In this case, we made each curve a single strand of hair. To shade this hair in a way that gave artists better control over the look, our development team created a new hair shader that used true multiple scattering within the hair.

We then extended that hair shading model to add control over the distribution around the hair fiber to model the effect of animal hair, which tends to scatter differently than human hair. This gave artists the ability to create lots of different hair looks, which were not based on human hair, as was the case with our older models.

Was rendering so many furry characters on screen at a time an issue?
Yes. In the past it would have been hard to shade all of that hair at once, mostly due to our reliance on opacity to create the soft shadows needed for fur. With the new shading model, we no longer used opacity at all, so the number of rays needed to resolve the hair was lower than in the past. But we now needed to resolve the aliasing from the sheer number of fine hairs (9 million for LeBron James’ Gwangi).

We developed a few other new tools within our version of the Arnold renderer to help with aliasing and render time in general. The first was adaptive sampling, which allowed us to raise the anti-aliasing samples drastically: some pixels would use only a few samples while others would use very high sampling, whereas in the past all pixels got the same number. This focused our render time where we needed it, helping to reduce overall rendering. Our development team also added the ability to pick a render up from its previous point. This meant we could do all of our lighting work at a lower quality level, get creative approval from the filmmakers, then pick the renders up and bring them to full quality without losing the time already spent.
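
Herbst doesn’t describe the sampler’s internals, but the general shape of adaptive sampling is easy to sketch: give every pixel a small base number of samples, then keep sampling only where the estimated variance of the pixel mean is still above a noise threshold. A toy Python illustration of that control loop (not Imageworks’ Arnold code):

```python
import random

MIN_SAMPLES, MAX_SAMPLES = 16, 1024
NOISE_THRESHOLD = 0.001  # target variance of the pixel mean

def sample_pixel():
    # Stand-in for one path-traced sample; a real renderer traces rays here.
    return random.gauss(0.5, 0.2)

def render_pixel():
    total, total_sq, n = 0.0, 0.0, 0
    while n < MAX_SAMPLES:
        s = sample_pixel()
        total += s
        total_sq += s * s
        n += 1
        if n >= MIN_SAMPLES:
            mean = total / n
            var = max(total_sq / n - mean * mean, 0.0)
            # Variance of the mean shrinks as 1/n; stop once converged,
            # so smooth pixels finish early and noisy ones keep sampling.
            if var / n < NOISE_THRESHOLD:
                break
    return total / n, n

color, samples_used = render_pixel()
print(f"converged to {color:.3f} after {samples_used} samples")
```

The render pickup he mentions falls out of the same bookkeeping: persist each pixel’s running totals and sample count, and a later pass can simply continue accumulating from there.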

What tools were used for the hair simulations specifically, and what tools did you call on in general?
We used Maya and the Nucleus solvers for all of the hair simulations, but developed tools over them to deal with so much hair per character and so many characters on screen at once. The simulation for each character was driven by their design and motion requirements.

The Looney Tunes-inspired design and motion created a challenge: keeping the hair simulations from breaking under all of the quick, stretched motion while still allowing light wind for the subtle emotional moments. We solved those requirements by using a high number of control hairs and constraints. Meechee (Zendaya) used 6,000 simulation curves with over 200 constraints, while Migo needed 3,200 curves with around 30 constraints.

Stonekeeper (Common) was the most complex of the characters, with long braided hair on his head, a beard, shaggy arms and a cloak made of stones. He required a cloth simulation pass, a rigid-body simulation for the stones and hair simulated on top of the stones. Our in-house tool called Kami builds all of the hair at render time and also lets us add procedurals to the hair at that point. We relied on those procedurals to create the many varied hair looks for all of the generic characters needed to fill the village full of yetis.
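
Kami is proprietary, so the sketch below only illustrates the general guide-hair approach described here: simulate a few thousand control curves, then generate the full set of render strands at render time by blending nearby guides and layering procedural variation on top. All names are hypothetical; NumPy is used for brevity:

```python
import numpy as np

rng = np.random.default_rng(7)
NUM_GUIDES, POINTS, NUM_RENDER_HAIRS = 8, 12, 2000

# Fake "simulated" guide curves: roots along a line, strands arcing in +y.
guide_roots = np.zeros((NUM_GUIDES, 3))
guide_roots[:, 2] = np.linspace(0.0, 1.0, NUM_GUIDES)
t = np.linspace(0.0, 1.0, POINTS)[None, :, None]   # parameter along strand
guides = guide_roots[:, None, :] + t * np.array([0.3, 1.0, 0.0])
guides += rng.normal(0.0, 0.01, guides.shape)      # simulated variation

def interpolate_hair(root, k=3):
    """Build one render strand by blending the k nearest guide curves."""
    d = np.linalg.norm(guide_roots - root, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-6)
    w /= w.sum()                                   # inverse-distance weights
    # Blend the guides' shapes (offsets from their roots), re-root at follicle.
    shapes = guides[nearest] - guide_roots[nearest, None, :]
    return root + np.einsum("g,gpc->pc", w, shapes)

# Scatter render-hair follicles, interpolate, then add per-strand procedural
# jitter of the kind render-time hair tools layer on top of the simulation.
follicles = rng.uniform([0.0, 0.0, 0.0], [0.05, 0.0, 1.0], (NUM_RENDER_HAIRS, 3))
strands = np.array([interpolate_hair(r) for r in follicles])
strands += rng.normal(0.0, 0.005, strands.shape)

print(strands.shape)  # (2000, 12, 3): 2,000 render strands from 8 guides
```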

How many different types of snow did you have?
We created three different snow systems for environmental effects. The first was a particle simulation of flakes for near-ground detail. The second was volumetric effects to create lots of atmosphere in the backgrounds that had texture and movement. We used this on each of the large sets and then stored those so lighters could pick which parts they wanted in each shot. To also help with artistically driving the look of each shot, our third system was a library of 2D elements that the effects team rendered and could be added during compositing to add details late in shot production.

For ground snow, we had different systems based on the needs in each shot. For shallow footsteps, we used displacement of the ground surface with additional little pieces of geometry to add crumble detail around the prints. This could be used in foreground or background.
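
The crumble geometry is bespoke, but the displacement half of this is a standard trick: the ground carries a heightfield, and each footfall stamps a depression into it. A toy sketch of that stamping, assuming a simple Gaussian print shape rather than the production setup:

```python
import numpy as np

RES = 256
snow_height = np.zeros((RES, RES))  # displacement map, 0 = undisturbed snow

def stamp_footprint(height, cx, cy, radius, depth):
    """Press a Gaussian-shaped depression into the snow heightfield."""
    y, x = np.mgrid[0:height.shape[0], 0:height.shape[1]]
    d2 = (x - cx) ** 2 + (y - cy) ** 2
    print_shape = depth * np.exp(-d2 / (2.0 * radius ** 2))
    # Keep the deeper of existing and new depression so prints overlap cleanly.
    np.minimum(height, -print_shape, out=height)

for cx, cy in [(60, 80), (100, 110), (140, 80), (180, 110)]:
    stamp_footprint(snow_height, cx, cy, radius=10.0, depth=1.0)

print(snow_height.min())  # deepest point of the prints, about -1.0
```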

For heavy interactions, like tunneling or sliding in the snow, we developed a new tool we called Katyusha. This new system combined rigid-body destruction with fluid simulations to achieve all of the different states snow can take in any given interaction. We then rendered these simulations as volumetrics to achieve the complex lighting response the filmmakers were looking for. The snow, being in essence a cloud, allowed light transport through all of the different layers of geometry and volume present at any given point in a scene. This made it easier for the lighters to give the snow the right look in any lighting situation.

Was there a particular scene or effect that was extra challenging? If so, what was it and how did you overcome it?
The biggest challenge to the film as a whole was the environments. The story was very fluid, so design and build of the environments came very late in the process. Coupling that with a creative team that liked to find their shots — versus design and build them — meant we needed to be very flexible on how to create sets and do them quickly.

To achieve this, we began by breaking the environments into a subset of source shapes that could be combined in any fashion to build Yeti Mountain, Yeti Village and the surrounding environments. Surfacing artists then created materials that could be applied to any set piece, allowing for quick creative decisions about what was rock, snow and ice, and creating many different looks. All of these materials were created using PatternCreate networks as part of our OSL shaders. With them, we could heavily leverage portable procedural texturing between assets, making location construction quicker, more flexible and easier to dial.

To get the right snow look for all levels of detail needed, we used a combination of textured snow, modeled snow and a simulation of geometric snowfall, which all needed to shade the same. For the simulated snowfall we created a padding system that could be run at any time on an environment giving it a fresh coating of snow. We did this so that filmmakers could modify sets freely in layout and not have to worry about broken snow lines. Doing all of that with modeled snow would have been too time-consuming and costly. This padding system worked not only in organic environments, like Yeti Village, but also in the Human City at the end of the film. The snow you see in the Human City is a combination of this padding system in the foreground and textures in the background.
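
The padding system itself is in-house, but the core rule of any fresh-coat pass can be sketched: offset each surface point along its normal by an amount that grows with how upward-facing it is, leaving steep and downward faces bare. A minimal illustration of that rule, with hypothetical parameters:

```python
import numpy as np

UP = np.array([0.0, 1.0, 0.0])
MAX_DEPTH = 0.08       # world units of fresh snow on a flat surface
FACING_CUTOFF = 0.2    # surfaces steeper than this get no accumulation

def pad_with_snow(points, normals):
    """Offset surface points along their normals to fake a fresh snowfall."""
    facing = normals @ UP                          # 1 = flat ground, 0 = wall
    depth = MAX_DEPTH * np.clip(
        (facing - FACING_CUTOFF) / (1.0 - FACING_CUTOFF), 0.0, 1.0)
    return points + normals * depth[:, None]

# Example: a flat patch accumulates the full depth, a wall stays bare.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.5, 0.0]])
nrm = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]])
print(pad_with_snow(pts, nrm))
```

A production version would presumably also occlusion-test against geometry overhead so overhangs stay clear, which is what keeps the snow lines unbroken when a set piece moves in layout.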

Allegorithmic’s Substance Painter adds subsurface scattering

Allegorithmic has released the latest additions to its Substance Painter tool, targeted at VFX and game studios and pros who are looking for ways to create realistic lighting effects. Substance Painter enhancements include subsurface scattering (SSS), new projection and fill tools, improvements to the UX and support for a range of new meshes.

Using Substance Painter’s newly updated shaders, artists will be able to add subsurface scattering as a default option. Artists can add a Scattering map to a texture set and activate the new SSS post-effect. Skin, organic surfaces, wax, jade and any other translucent materials that require extra care will now look more realistic, with redistributed light shining through from under the surface.

The release also includes updates to the projection and fill tools, beginning with the user-requested addition of non-square projection. Images can be loaded into both the projection and stencil tools without altering the ratio or resolution, and tiling can be disabled in one or both axes. Fill layers can be manipulated directly in the viewport using new manipulator controls: standard UV projections feature a 2D manipulator in the UV viewport, Triplanar Projection received a full 3D manipulator in the 3D viewport, and both can be translated, scaled and rotated directly in-scene.
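
For readers unfamiliar with triplanar projection itself (as opposed to the new manipulator for it), the technique samples a texture through the three world axes and blends the results by how squarely each axis faces the surface. A generic sketch of that blend, not Substance Painter’s internals:

```python
import numpy as np

def triplanar_weights(normal, sharpness=4.0):
    """Blend weights for the X/Y/Z projections from a surface normal."""
    w = np.abs(normal) ** sharpness   # favor the axis the surface faces
    return w / w.sum()

def triplanar_sample(point, normal, tex):
    """Sample a texture through all three axes and blend the results."""
    x, y, z = point
    samples = np.array([tex(y, z), tex(x, z), tex(x, y)])  # YZ, XZ, XY planes
    return triplanar_weights(normal) @ samples

# Toy checkerboard "texture" to show the mechanics.
checker = lambda u, v: float((int(np.floor(u * 4)) + int(np.floor(v * 4))) % 2)
print(triplanar_sample(np.array([0.3, 0.7, 0.1]),
                       np.array([0.0, 1.0, 0.0]), checker))
```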

Along with the improvements to the artist tools, Substance Painter includes several updates designed to improve the overall experience for users of all skill levels. Consistency between tools has been improved, and additions like exposed presets in Substance Designer and a revamped, universal UI guide make it easier for users to jump between tools.

Additional updates include:
• Alembic support — The Alembic file format is now supported by Substance Painter, starting with mesh and camera data. Full animation support will be added in a future update.
• Camera import and selection — Multiple cameras can be imported with a mesh, allowing users to switch between angles in the viewport; previews of the framed camera angle now appear as an overlay in the 3D viewport.
• Full glTF support — Substance Painter now automatically imports and applies textures when loading glTF meshes, removing the need to import or adapt mesh downloads from Sketchfab.
• ID map drag-and-drop — Both materials and smart materials can be taken from the shelf and dropped directly onto ID colors, automatically creating an ID mask.
• Improved Substance format support — Improved tweaking of Substance-made materials and effects thanks to visible-if and embedded presets.

SIGGRAPH conference chair Roy C. Anthony: VR, AR, AI, VFX, more

By Randi Altman

Next month, SIGGRAPH returns to Vancouver after turns in Los Angeles and Anaheim. This gorgeous city, whose convention center offers a water view, is home to many visual effects studios providing work for film, television and spots.

As usual, SIGGRAPH will host many presentations, showcase artists’ work, display technology and offer a glimpse into what’s on the horizon for this segment of the market.

Roy C. Anthony

Leading up to the show — which takes place August 12-16 — we reached out to Roy C. Anthony, this year’s conference chair. For his day job, Anthony recently joined Ventuz Technology as VP, creative development. There, he leads initiatives to bring Ventuz’s realtime rendering technologies to creators of sets, stages and ProAV installations around the world.

SIGGRAPH is back in Vancouver this year. Can you talk about why it’s important for the industry?
There are 60-plus world-class VFX and animation studios in Vancouver. There are more than 20,000 film and TV jobs, and more than 8,000 VFX and animation jobs in the city.

So, Vancouver’s rich production-centric communities are leading the way in film and VFX production for television and features. They are also busy with new media content, games work and new workflows, including those for AR/VR/mixed reality.

How many exhibitors this year?
The conference and exhibition will play host to over 150 exhibitors on the show floor, showcasing the latest in computer graphics and interactive technologies, products and services. Due to the amount of new technology that has debuted in the computer graphics marketplace over the past year, almost one quarter of this year’s 150 exhibitors will be presenting at SIGGRAPH for the first time.

In addition to the traditional exhibit floor and conferences, what are some of the can’t-miss offerings this year?
We have increased the presence of virtual, augmented and mixed reality projects and experiences — and we are introducing our new Immersive Pavilion in the east convention center, which will be dedicated to this area. We’ve incorporated immersive tech into our computer animation festival with the inclusion of our VR Theater, back for its second year, as well as inviting a special, curated experience with New York University’s Ken Perlin — he’s a legendary computer graphics professor.

We’ll be kicking off the week in a big VR way with a special session following the opening ceremony featuring Ivan Sutherland, considered by many as “the father of computer graphics.” That 50-year retrospective will present the history and innovations that sparked our industry.

We have also brought Syd Mead, a legendary “visual futurist” (Blade Runner, Tron, Star Trek: The Motion Picture, Aliens, Time Cop, Tomorrowland, Blade Runner 2049), who will display an arrangement of his art in a special collection called Progressions. This will be seen within our Production Gallery experience, which also returns for its second year. Progressions will exhibit more than 50 years of artwork by Syd, from his academic years to his most current work.

We will have an amazing array of guest speakers, including those featured within the Business Symposium, which is making a return to SIGGRAPH after an absence of a few years. Among these speakers are people from the Disney Technology Innovation Group, Unity and Georgia Tech.

On Tuesday, August 14, our SIGGRAPH Next series will present a keynote speaker each morning to kick off the day with an inspirational talk. These speakers are Tony DeRose, a senior scientist from Pixar; Daniel Szecket, VP of design for Quantitative Imaging Systems; and Bob Nicoll, dean of Blizzard Academy.

There will also be a 25th anniversary showing of the original Jurassic Park, hosted by “Spaz” Williams, a digital artist who worked on that film.

Can you talk about this year’s keynote and why he was chosen?
We’re thrilled to have ILM head and senior VP, ECD Rob Bredow deliver the keynote address this year. Rob is all about innovation — pushing through scary new directions while maintaining the leadership of artists and technologists.

Rob is the ultimate modern-day practitioner, a digital VFX supervisor who has been disrupting ‘the way it’s always been done’ to move to new ways. He truly reflects the spirit of ILM, which was founded in 1975 and is just one year younger than SIGGRAPH.

A large part of SIGGRAPH is its slant toward students and education. Can you discuss how this came about and why this is important?
SIGGRAPH supports education in all sub-disciplines of computer graphics and interactive techniques, and it promotes and improves the use of computer graphics in education. Our Education Committee sponsors a broad range of projects, such as curriculum studies, resources for educators and SIGGRAPH conference-related activities.

SIGGRAPH has always been a welcoming and diverse community, one that encourages mentorship, and acknowledges that art inspires science and science enables advances in the arts. SIGGRAPH was built upon a foundation of research and education.

How are the Computer Animation Festival films selected?
The Computer Animation Festival has two programs, the Electronic Theater and the VR Theater. Because of the large volume of submissions for the Electronic Theater (over 400), there is a triage committee for the first phase. The CAF chair then takes the high-scoring pieces to a jury comprised of industry professionals. The jury’s selections then become the Electronic Theater show pieces.

The selections for the VR Theater are made by a smaller panel comprised mostly of sub-committee members that watch each film in a VR headset and vote.

Can you talk more about how SIGGRAPH is tackling AR/VR/AI and machine learning?
Since SIGGRAPH 2018 is about the theme of “Generations,” we took a step back to look at how we got where we are today in terms of AR/VR, and where we are going with it. Much of what we know today wouldn’t have been possible without the research and creation of Ivan Sutherland’s 1968 head-mounted display. We have a fantastic panel celebrating the 50-year anniversary of his HMD, which is widely considered the first VR HMD.

AI tools are newer, and we created a panel that focuses on trends and the future of AI tools in VFX, called “Future Artificial Intelligence and Deep Learning Tools for VFX.” This panel gains insight from experts embedded in both the AI and VFX industries and gives attendees a look at how different companies plan to further their technology development.

What is the process for making sure that all aspects of the industry are covered in terms of panels?
Every year new ideas for panels and sessions are submitted by contributors from all over the globe. Those submissions are then reviewed by a jury of industry experts, and it is through this process that panelists and cross-industry coverage is determined.

Each year, the conference chair oversees the program chairs, then each of the program chairs become part of a jury process — this helps to ensure the best program with the most industries represented from across all disciplines.

In the rare case a program committee feels they are missing something key in the industry, they can try to curate a panel in, but we still require that that panel be reviewed by subject matter experts before it would be considered for final acceptance.


Jamm hires animation supervisor Steward Burris

Santa Monica-based visual effects house Jamm has added animation vet and longtime collaborator Steward Burris as animation supervisor.

Burris has been working with Jamm in a freelance capacity since its inception four years ago, and this position makes the partnership official. He has been animating and supervising on feature films, television, commercials, games and VR since graduating from Vancouver Film School over two decades ago. His resume includes a variety of projects, from The X-Files and Breaking Bad to The Curious Case of Benjamin Button, Harry Potter and the famous dancing Kia hamsters.

Burris specializes in character performance and photoreal creature work. A recent job was a Universal Parks and Resorts Grow Bolder spot, where Burris and VFX supervisor Andy Boyd led the Jamm team to seamlessly integrate CG into live action and further enhance the in-camera elements with additional atmosphere and texture. Recreating King Kong and Transformers sequences was a top favorite for the CG team. Other examples of Burris’ skill for injecting warmth and personality into animated creations can be seen in the Kia hamster spots, and in the awkward interactions between robots and humans in the Kohler Never Too Next commercial.

“There’s often a belief that to handle a giant CG character job, you need a massive team,” Burris says. “Jamm has shown time and again you can achieve this with a small but highly skilled crew. If you give the best tools to the most talented people, you’ll get fantastic results — in half the time.”