
Sony Imageworks provides big effects, animation for Warner’s Smallfoot

By Randi Altman

The legend of Bigfoot: a giant, hairy two-legged creature roaming the forests and giving humans just enough of a glimpse to freak them out. Sightings have been happening for centuries with no sign of slowing down — seriously, Google it.

But what if that story was turned around, and it was Bigfoot who was freaked out by a Smallfoot (human)? Well, that is exactly the premise of the new Warner Bros. film Smallfoot, directed by Karey Kirkpatrick. It’s based on the book “Yeti Tracks” by Sergio Pablos.

Karl Herbst

Instead of a human catching a glimpse of the mysterious giant, a yeti named Migo (Channing Tatum) sees a human (James Corden) and tells his entire snow-filled village about the existence of Smallfoot. Of course, no one believes him so he goes on a trek to find this mythical creature and bring him home as proof.

Sony Pictures Imageworks was tasked with all of the animation and visual effects work on the film, while Warner Animation Group did all of the front-end work — such as adapting the script, creating the production design, editing, directing, producing and more. We reached out to Imageworks VFX supervisor Karl Herbst (Hotel Transylvania 2) to find out more about creating the animation and effects for Smallfoot.

The film has a Looney Tunes-type feel with squash and stretch. Did this provide more freedom or less?
In general, it provided more freedom since it allowed the animation team to really have fun with gags. It also gave them a ton of reference material to pull from and come up with new twists on older ideas. Once out of animation, depending on how far the performance was pushed, other departments — like the character effects team — would have additional work due to all of the exaggerated movements. But all of the extra work was worth it because everyone really loved seeing the characters pushed.

We also found that as the story evolved, Migo’s journey became more emotionally driven, so we needed to find a style that also let the audience truly connect with what he was going through. We brought in a lot more subtlety, and a more truthful physicality, to the animation when needed. As a result, we have these incredibly heartfelt performances and moments that would feel right at home in an old Road Runner short, yet it all still feels like part of the same world, with these truly believable characters at the center of it.

Was scale between such large and small characters a challenge?
It was one of the first areas we wanted to tackle, since the look of the yeti’s fur next to a human was really important to the filmmakers. In the end, we found that the thickness and fidelity of the yeti hair had to be very high so you could see each hair next to the hairs of the humans.

It also meant the rigs for the humans and yetis had to be flexible enough to scale as needed, so that in moments where they are very close together they did not feel disproportionate to each other. Everything in our character pipeline, from animation down to lighting, had to be flexible in dealing with these scale changes. Even things like subsurface scattering in the skin had dials to deal with when Percy, or any human character, was scaled up or down in a shot.

How did you tackle the hair?
We updated a couple of key areas in our hair pipeline starting with how we would build our hair. In the past, we would make curves that look more like small groups of hairs in a clump. In this case, we made each curve its own strand of a single hair. To shade this hair in a way that allowed artists to have better control over the look, our development team created a new hair shader that used true multiple-scattering within the hair.

We then extended that hair shading model to add control over the distribution around the hair fiber to model the effect of animal hair, which tends to scatter differently than human hair. This gave artists the ability to create lots of different hair looks, which were not based on human hair, as was the case with our older models.
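
Classic fiber-shading models of this kind (Marschner-style hair shading) split scattering into a longitudinal lobe along the fiber and an azimuthal distribution around it. The sketch below is a toy illustration of that split, not Imageworks’ shader; all function names and parameters are invented for the example.

```python
import math

def longitudinal_lobe(theta_i, theta_o, shift, roughness):
    """Gaussian longitudinal lobe M(theta), peaked where the half-angle
    of the light/view inclinations matches the cuticle-tilt `shift`.
    A wider `roughness` softens the highlight along the fiber."""
    x = (theta_i + theta_o) / 2.0 - shift
    return math.exp(-x * x / (2 * roughness ** 2)) / (roughness * math.sqrt(2 * math.pi))

def azimuthal_weight(phi, spread):
    """Toy azimuthal distribution around the fiber: a cosine lobe whose
    falloff is flattened by `spread`. Raising `spread` scatters light
    more evenly around the hair, loosely mimicking animal fur, which
    scatters differently than human hair."""
    return (math.cos(phi / 2.0) ** 2) ** (1.0 / spread)
```

Exposing a control like `spread` as an artist dial is what lets one shader cover many non-human hair looks.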

Was rendering so many furry characters on screen at a time an issue?
Yes. In the past this would have been hard to shade all at once, mostly due to our reliance on opacity to create the soft shadows needed for fur. With the new shading model we were no longer using opacity at all, so the number of rays needed to resolve the hair was lower than before. But we now needed to resolve the aliasing caused by the number of fine hairs (9 million for LeBron James’ Gwangi).

We developed a few other new tools within our version of the Arnold renderer to help with aliasing and render time in general. The first was adaptive sampling, which allowed us to raise the anti-aliasing samples drastically. Some pixels would use only a few samples while others would use very high sampling, whereas in the past all pixels got the same number. This focused our render time where we needed it, helping to reduce overall rendering. Our development team also added the ability to pick a render up from its previous point. This meant we could do all of our lighting work at a lower quality level, get creative approval from the filmmakers, then pick up the renders and bring them to full quality without losing the time already spent.
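
The idea behind adaptive sampling is simple to sketch: keep adding samples to a pixel only while its noise estimate stays above a tolerance. The snippet below is a minimal illustration of that principle, not Arnold’s or Imageworks’ actual criterion, and the checkpoint/resume feature is omitted.

```python
import statistics

def render_pixel(sample_fn, tol=0.01, min_samples=8, max_samples=256):
    """Adaptively sample one pixel: draw samples until the standard
    error of the running mean falls below `tol`, or the budget runs out.
    Flat pixels stop early; noisy pixels get the full budget."""
    samples = [sample_fn() for _ in range(min_samples)]
    while len(samples) < max_samples:
        stderr = statistics.stdev(samples) / len(samples) ** 0.5
        if stderr < tol:
            break
        samples.append(sample_fn())
    return statistics.fmean(samples), len(samples)
```

A constant pixel converges at the 8-sample minimum, while a noisy one runs to the cap — exactly the behavior that focuses render time where it is needed.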

What tools were used for the hair simulations specifically, and what tools did you call on in general?
We used Maya and the Nucleus solvers for all of the hair simulations, but developed tools over them to deal with so much hair per character and so many characters on screen at once. The simulation for each character was driven by their design and motion requirements.

The Looney Tunes-inspired design and motion created a challenge around how to keep hair simulations from breaking with all of the quick and stretched motion while being able to have light wind for the emotional subtle moments. We solved all of those requirements by using a high number of control hairs and constraints. Meechee (Zendaya) used 6,000 simulation curves with over 200 constraints, while Migo needed 3,200 curves with around 30 constraints.

Stonekeeper (Common) was the most complex of the characters, with long braided hair on his head, a beard, shaggy arms and a cloak made of stones. He required a cloth simulation pass for the cloak, a rigid-body simulation for the stones, and hair simulated on top of the stones. Our in-house tool, Kami, builds all of the hair at render time and also allows us to add procedurals to the hair at that point. We relied on those procedurals to create many varied hair looks for all of the generics needed to fill the village full of yetis.

How many different types of snow did you have?
We created three different snow systems for environmental effects. The first was a particle simulation of flakes for near-ground detail. The second was volumetric effects to create lots of atmosphere in the backgrounds that had texture and movement. We used this on each of the large sets and then stored those so lighters could pick which parts they wanted in each shot. To also help with artistically driving the look of each shot, our third system was a library of 2D elements that the effects team rendered and could be added during compositing to add details late in shot production.

For ground snow, we had different systems based on the needs in each shot. For shallow footsteps, we used displacement of the ground surface with additional little pieces of geometry to add crumble detail around the prints. This could be used in foreground or background.
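
As a toy illustration of the displacement approach, a shallow footprint can be stamped into a height field by pushing the ground down inside the print and raising a thin rim just outside it for the crumble detail. This is an invented sketch of the idea, not the studio’s tool:

```python
import math

def stamp_footprint(height, cx, cy, radius, depth, rim=0.2):
    """Press a circular footprint into a height field (list of rows).

    Inside `radius` the ground is pushed down by up to `depth`; a narrow
    ring just outside is raised by `rim * depth` to fake the displaced
    snow crumbling around the print."""
    for y, row in enumerate(height):
        for x in range(len(row)):
            d = math.hypot(x - cx, y - cy)
            if d < radius:                 # smooth depression toward the center
                row[x] -= depth * (1 - d / radius)
            elif d < radius * 1.3:         # raised crumble rim around the edge
                row[x] += rim * depth
    return height
```

Because it only edits the height field, the same stamp works for foreground or background prints.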

For heavy interactions, like tunneling or sliding in the snow, we developed a new tool we called Katyusha. This new system combined rigid body destruction with fluid simulations to achieve all of the different states snow can take in any given interaction. We then rendered these simulations as volumetrics to give the complex lighting look the filmmakers were looking for. The snow, being in essence a cloud, allowed light transport through all of the different layers of geometry and volume that could be present at any given point in a scene. This made it easier for the lighters to give the snow its light look in any given lighting situation.

Was there a particular scene or effect that was extra challenging? If so, what was it and how did you overcome it?
The biggest challenge to the film as a whole was the environments. The story was very fluid, so design and build of the environments came very late in the process. Coupling that with a creative team that liked to find their shots — versus design and build them — meant we needed to be very flexible on how to create sets and do them quickly.

To achieve this, we began by breaking the environments into a subset of source shapes that could be combined in any fashion to build Yeti Mountain, Yeti Village and the surrounding environments. Surfacing artists then created materials that could be applied to any set piece, allowing for quick creative decisions about what was rock, snow and ice, and creating many different looks. All of these materials were created using PatternCreate networks as part of our OSL shaders. With them, we could heavily leverage portable procedural texturing between assets, making location construction quicker, more flexible and easier to dial.
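
The kind of slope-and-height masking such procedural material networks typically expose can be sketched in a few lines; the thresholds and material names below are invented for illustration, not taken from Imageworks’ shaders:

```python
def classify_point(normal_up, height, snow_line=0.5, slope_limit=0.7):
    """Pick a material for a point on a set piece from how much it faces
    up and how high it sits -- a toy analogue of the slope/height masks
    a procedural shading network might expose as artist dials.

    normal_up: dot(normal, up) in [0, 1]; height: normalized altitude."""
    if normal_up > slope_limit and height > snow_line:
        return "snow"      # flat-ish and high: snow accumulates
    if height < 0.1:
        return "ice"       # low-lying areas freeze over
    return "rock"          # steep faces shed their snow
```

Since the decision is driven entirely by dials, the same material can be dropped onto any source shape and redialed per location.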

To get the right snow look for all levels of detail needed, we used a combination of textured snow, modeled snow and a simulation of geometric snowfall, which all needed to shade the same. For the simulated snowfall we created a padding system that could be run at any time on an environment giving it a fresh coating of snow. We did this so that filmmakers could modify sets freely in layout and not have to worry about broken snow lines. Doing all of that with modeled snow would have been too time-consuming and costly. This padding system worked not only in organic environments, like Yeti Village, but also in the Human City at the end of the film. The snow you see in the Human City is a combination of this padding system in the foreground and textures in the background.
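
Conceptually, a padding pass like this pushes surfaces outward where they face up, so a freshly modified set gets a consistent coat of snow with unbroken snow lines. A minimal sketch assuming per-vertex normals (not the actual Imageworks system):

```python
def pad_with_snow(vertices, normals, thickness=0.1):
    """Give a mesh a fresh coat of snow by pushing each vertex out along
    its normal, scaled by how much the surface faces up (normal z).
    Downward-facing surfaces collect none."""
    out = []
    for (x, y, z), (nx, ny, nz) in zip(vertices, normals):
        t = thickness * max(nz, 0.0)   # only up-facing surfaces collect snow
        out.append((x + nx * t, y + ny * t, z + nz * t))
    return out
```

Because the pass can be rerun at any time, layout changes never leave bare geometry behind.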

Creating super sounds for Disney XD’s Marvel Rising: Initiation

By Jennifer Walden

Marvel revealed “the next generation of Marvel heroes for the next generation of Marvel fans” in a behind-the-scenes video back in December. Those characters stayed tightly under wraps until August 13, when a compilation of animated shorts called Marvel Rising: Initiation aired on Disney XD. Those shorts dive into the back story of the new heroes and give audiences a taste of what they can expect in the feature-length animated film Marvel Rising: Secret Warriors, which aired for the first time on September 30 on both the Disney Channel and Disney XD.

L-R: Pat Rodman and Eric P. Sherman

Handling audio post on both the animated shorts and the full-length feature is the Bang Zoom team led by sound supervisor Eric P. Sherman and chief sound engineer Pat Rodman. They worked on the project at the Bang Zoom Atomic Olive location in Burbank. The sounds they created for this new generation of Marvel heroes fit right in with the established Marvel universe but aren’t strictly limited to what already exists. “We love to keep it kind of close, unless Marvel tells us that we should match a specific sound. It really comes down to whether it’s a sound for a new tech or an old tech,” says Rodman.

Sherman adds, “When they are talking about this being for the next generation of fans, they’re creating a whole new collection of heroes, but they definitely want to use what works. The fans will not be disappointed.”

The shorts begin with a helicopter flyover of New York City at night. Blaring sirens mix with police radio chatter as searchlights sweep over a crime scene on the street below. A SWAT team moves in as a voice blasts over a bullhorn, “To the individual known as Ghost Spider, we’ve got you surrounded. Come out peacefully with your hands up and you will not be harmed.” Marvel Rising: Initiation wastes no time in painting a grim picture of New York City. “There is tension and chaos. You feel the oppressiveness of the city. It’s definitely the darker side of New York,” says Sherman.

The sound of the city throughout the series was created using a combination of sourced recordings of authentic New York City street ambience and custom recordings of bustling crowds that Rodman captured at street markets in Los Angeles. Mix-wise, Rodman says they chose to play the backgrounds of the city hotter than normal just to give the track a more immersive feel.

Ghost Spider
Not even 30 seconds into the shorts, the first new Marvel character makes her dramatic debut. Ghost Spider (Dove Cameron), who is also known as Spider Gwen, bursts from a third-story window, slinging webs at the waiting officers. Since she’s a new character, Rodman notes that she’s still finding her way and there’s a bit of awkwardness to her character. “We didn’t want her to sound too refined. Her tech is good, but it’s new. It’s kind of like Spider-Man first starting out as a kid and his tech was a little off,” he says.

Sound designer Gordon Hookailo spent a lot of time crafting the sound of Spider Gwen’s webs, which according to Sherman have more of a nylon, silky kind of sound than Spider-Man’s webs. There’s a subliminal ghostly wisp sound to her webs also. “It’s not very overt. There’s just a little hint of a wisp, so it’s not exactly like regular Spider-Man’s,” explains Rodman.

Initially, Spider Gwen seems to be a villain. She’s confronted by the young-yet-authoritative hero Patriot (Kamil McFadden), a member of S.H.I.E.L.D. who was trained by Captain America. Patriot carries a versatile, high-tech shield that can do lots of things, like become a hovercraft. It shoots lasers and rockets too. The hovercraft makes a subtle whooshy, humming sound that’s high-tech in a way that’s akin to the Goblin’s hovercraft. “It had to sound like Captain America too. We had to make it match with that,” notes Rodman.

Later on in the shorts, Spider Gwen’s story reveals that she’s actually one of the good guys. She joins forces with a crew of new heroes, starting with Ms. Marvel and Squirrel Girl.

Ms. Marvel (Kathreen Khavari) has the ability to stretch and grow. When she reaches out to grab Spider Gwen’s leg, there’s a rubbery, creaking sound. When she grows 50 feet tall she sounds 50 feet tall, complete with massive, ground-shaking footsteps and a lower-ranged voice that’s sweetened with big delays and reverbs. “When she’s large, she almost has a totally different voice. She sounds like a large, forceful woman,” says Sherman.

Squirrel Girl
One of the favorites on the series so far is Squirrel Girl (Milana Vayntrub) and her squirrel sidekick Tippy Toe. Squirrel Girl has the power to call a stampede of squirrels. Sound-wise, the team had fun with that, capturing recordings of animals small and large with their Zoom H6 field recorder. “We recorded horses and dogs mainly because we couldn’t find any squirrels in Burbank; none that would cooperate, anyway,” jokes Rodman. “We settled on a larger animal sound that we manipulated to sound like it had little feet. And we made it sound like there are huge numbers of them.”

Squirrel Girl is a fan of anime, and so she incorporates an anime style into her attacks, like calling out her moves before she makes them. Sherman shares, “Bang Zoom cut its teeth on anime; it’s still very much a part of our lifeblood. Pat and I worked on thousands of episodes of anime together, and we came up with all of these techniques for making powerful power moves.” For example, they add reverb to the power moves and choose “shings” that have an anime style sound.

What is an anime-style sound, you ask? “Diehard fans of anime will debate this to the death,” says Sherman. “It’s an intuitive thing, I think. I’ll tell Pat to do that thing on that line, and he does. We’re very much ‘go with the gut’ kind of people.

“As far as anime style sound effects, Gordon [Hookailo] specifically wanted to create new anime sound effects so we didn’t just take them from an existing library. He created these new, homegrown anime effects.”

Quake
The other hero briefly introduced in the shorts is Quake, voiced by Chloe Bennet, the same actress who plays Daisy Johnson, aka Quake, on Agents of S.H.I.E.L.D. Sherman says, “Gordon is a big fan of that show and has watched every episode. He used that as a reference for the sound of Quake in the shorts.”

The villain in the shorts has so far remained nameless, but when she first battles Spider Gwen the audience sees her pair of super-daggers that pulse with a green glow. The daggers are somewhat “alive,” and when they cut someone they take some of that person’s life force. “We definitely had them sound as if the power was coming from the daggers and not from the person wielding them,” explains Rodman. “The sounds that Gordon used were specifically designed — not pulled from a library — and there is a subliminal vocal effect when the daggers make a cut. It’s like the blade is sentient. It’s pretty creepy.”

Voices
The character voices were recorded at Bang Zoom, either in the studio or via ISDN. The challenge was getting all the different voices to sound as though they were in the same space together on-screen. Also, some sessions were recorded with single mics on each actor while other sessions were recorded as an ensemble.

Sherman notes it was an interesting exercise in casting. Some of the actors were YouTube stars (who don’t have much formal voice acting experience) and some were experienced voice actors. When an actor without voiceover experience comes in to record, the Bang Zoom team likes to start with mic technique 101. “Mic technique was a big aspect and we worked on that. We are picky about mic technique,” says Sherman. “But, on the other side of that, we got interesting performances. There’s a realism, a naturalness, that makes the characters very relatable.”

To get the voices to match, Rodman spent a lot of time using Waves EQ, Pro Tools Legacy Pitch, and occasionally Waves UltraPitch for when an actor slipped out of character. “They did lots of takes on some of these lines, so an actor might lose focus on where they were, performance-wise. You either have to pull them back in with EQ, pitching or leveling,” Rodman explains.

One highlight of the voice recording process was working with voice actor Dee Bradley Baker, who did the squirrel voice for Tippy Toe. Most of Tippy Toe’s final track was Dee Bradley Baker’s natural voice. Rodman rarely had to tweak the pitch, and it needed no other processing or sound design enhancement. “He’s almost like a Frank Welker (who did the voice of Fred Jones on Scooby-Doo, the voice of Megatron starting with the ‘80s Transformers franchise and Nibbler on Futurama).”

Marvel Rising: Initiation was like a training ground for the sound of the feature-length film. The ideas that Bang Zoom worked out there were expanded upon for the soon-to-be released Marvel Rising: Secret Warriors. Sherman concludes, “The shorts gave us the opportunity to get our arms around the property before we really dove into the meat of the film. They gave us a chance to explore these new characters.”


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter @audiojeney.


London design, animation studio Golden Wolf sets up shop in NYC

Animation studio Golden Wolf, headquartered in London, has launched its first stateside location in New York City. The expansion comes on the heels of an alliance with animation/VFX/live-action studio Psyop, a minority investor in the company. Golden Wolf now occupies studio space in SoHo adjacent to Psyop and its sister company Blacklist, which formerly represented Golden Wolf stateside and was instrumental to the relationship.

Among the year’s highlights from Golden Wolf are an integrated campaign for Nike FA18 Phantom (client direct), a spot for the adidas x Parley Run for the Oceans initiative (TBWA Amsterdam) in collaboration with Psyop, and Marshmello’s Fly music video for Disney. Golden Wolf also received an Emmy nomination for its main title sequence for Disney’s DuckTales reboot.

Heading up Golden Wolf’s New York office are two transplants from the London studio, executive producer Dotti Sinnott and art director Sammy Moore. Both joined Golden Wolf in 2015, Sinnott from motion design studio Bigstar, where she was a senior producer, and Moore after a run as a freelance illustrator/designer in London’s agency scene.

Sinnott comments: “Building on the strength of our London team, the Golden Wolf brand will continue to grow and evolve with the fresh perspective of our New York creatives. Our presence on either side of the Atlantic not only brings us closer to existing clients, but also positions us perfectly to build new relationships with New York-based agencies and brands. On top of this, we’re able to use the time difference to our advantage to work on faster turnarounds and across a range of budgets.”

Founded in 2013 by Ingi Erlingsson, the studio’s executive creative director, Golden Wolf is known for youth-oriented work — especially content for social media, entertainment and sports — that blurs the lines of irreverent humor, dynamic action and psychedelia. Erlingsson was once a prolific graffiti artist and, later, illustrator/designer and creative director at U.K.-based design agency ilovedust. Today he inspires Golden Wolf’s creative culture and disruptive style fed in part by a wave of next-gen animation talent coming out of schools such as Gobelins in France and The Animation Workshop in Denmark.

“I’m excited about our affiliation with Psyop, which enjoys an incredible legacy producing industry-leading animated advertising content,” Erlingsson says. “Golden Wolf is the new kid on the block, with bags of enthusiasm and an aim to disrupt the industry with new ideas. The combination of the two studios means that we are able to tackle any challenge, regardless of format or technical approach, with the support of some of the world’s best artists and directors. The relationship allows brands and agencies to have complete confidence in our ability to solve even the biggest challenges.”

Golden Wolf’s initial work out of its New York studio includes spots for Supercell (client direct) and Bulleit Bourbon (Barton F. Graf). Golden Wolf is represented in the US market by Hunky Dory for the East Coast, Baer Brown for the Midwest and In House Reps for the West Coast. Stink represents the studio for Europe.

Main Photo: (L-R) Dotti Sinnott, Ingi Erlingsson and Sammy Moore.


Using VFX to bring the new Volkswagen Jetta to life

LA-based studio Jamm provided visual effects for the all-new 2019 Volkswagen Jetta campaign Betta Getta Jetta. Created by Deutsch and produced by ManvsMachine, the series of 12 spots brings the Jetta to life by combining Jamm’s CG design with a color palette inspired by the car’s 10-color ambient lighting system.

“The VW campaign offered up some incredibly fun and intricate challenges. Most notable was the volume of work to complete in a limited amount of time — 12 full-CG spots in just nine weeks, each one unique with its own personality,” says VFX supervisor Andy Boyd.

Collaboration was key to delivering so many spots in such a short span of time. Jamm worked closely with ManvsMachine on every shot. “The team had a very strong creative vision, which is crucial in the full 3D world where anything is possible,” explains Boyd.

Jamm employed a variety of techniques for the music-centric campaign, which highlights updated features such as ambient lighting and Beats Audio. The series includes spots titled Remix, Bumper-to-Bumper, Turb-Whoa, Moods, Bass, Rings, Puzzle and App Magnet, along with 15-second teasers, all of which aired on various broadcast, digital and social channels during the World Cup.

For Remix, Jamm brought both a 1985 and a 2019 Jetta to life, along with a hybrid mix of the two, adding a cool layer of turntablist VFX, whereas for Puzzle, they cut up the car procedurally in Houdini, which allowed the team to change around the slices as needed.

For Bass, Jamm helped bring personality to the car while keeping its movements grounded in reality. Animation supervisor Stew Burris pushed the car’s performance and dialed in the choreography of the dance with ManvsMachine as the Jetta discovered the beat, adding exciting life to the car as it bounced to the bassline and hit the switches on a little three-wheel motion.

We reached out to Jamm’s Boyd to find out more.

How early did Jamm get involved?
We got involved as soon as agency boards were client-approved. We worked hand in hand with ManvsMachine to previs each of the spots in order to lay the foundation for our CG team to execute both the agency’s and the director’s vision.

What were the challenges of working on so many spots at once?
The biggest challenge was for editorial to keep up with the volume of previs options we gave them to present to the agency.

Other than Houdini, what tools did you use?
Flame, Nuke and Maya were used as well.

What was your favorite spot of the 12 and why?
Puzzle was our favorite to work on. It was the last of the bunch delivered to Deutsch, and we treated it with a more technical approach, slicing up the car like a Rubik’s Cube.



Allegorithmic’s Substance Painter adds subsurface scattering

Allegorithmic has released the latest additions to its Substance Painter tool, targeted at VFX and game studios and pros who are looking for ways to create realistic lighting effects. Substance Painter enhancements include subsurface scattering (SSS), new projection and fill tools, improvements to the UX and support for a range of new meshes.

Using Substance Painter’s newly updated shaders, artists will be able to add subsurface scattering as a default option. Artists can add a Scattering map to a texture set and activate the new SSS post-effect. Skin, organic surfaces, wax, jade and any other translucent materials that require extra care will now look more realistic, with redistributed light shining through from under the surface.

The release also includes updates to projection and fill tools, beginning with the user-requested addition of non-square projection. Images can be loaded in both the projection and stencil tool without altering the ratio or resolution. Those projection and stencil tools can also disable tiling in one or both axes. Fill layers can be manipulated directly in the viewport using new manipulator controls. Standard UV projections feature a 2D manipulator in the UV viewport. Triplanar Projection received a full 3D manipulator in the 3D viewport, and both can be translated, scaled and rotated directly in-scene.

Along with the improvements to the artist tools, Substance Painter includes several updates designed to improve the overall experience for users of all skill levels. Consistency between tools has been improved, and additions like exposed presets in Substance Designer and a revamped, universal UI guide make it easier for users to jump between tools.

Additional updates include:
• Alembic support — The Alembic file format is now supported by Substance Painter, starting with mesh and camera data. Full animation support will be added in a future update.
• Camera import and selection — Multiple cameras can be imported with a mesh, allowing users to switch between angles in the viewport; previews of the framed camera angle now appear as an overlay in the 3D viewport.
• Full glTF support — Substance Painter now automatically imports and applies textures when loading glTF meshes, removing the need to import or adapt mesh downloads from Sketchfab.
• ID map drag-and-drop — Both materials and smart materials can be taken from the shelf and dropped directly onto ID colors, automatically creating an ID mask.
• Improved Substance format support — Improved tweaking of Substance-made materials and effects thanks to visible-if and embedded presets.


Maxon intros Cinema 4D Release 20

Maxon will be at Siggraph this year showing Cinema 4D Release 20 (R20), the next iteration of its 3D design and animation software. Release 20 introduces high-end features for VFX and motion graphics artists, including node-based materials, volume modeling, CAD import and an evolution of the MoGraph toolset.

Maxon expects Cinema 4D Release 20 to be available this September for both Mac and Windows operating systems.

Key highlights in Release 20 include:
Node-Based Materials – This feature provides new possibilities for creating materials — from simple references to complex shaders — in a node-based editor. With more than 150 nodes to choose from that perform different functions, artists can combine nodes to easily build complex shading effects. Users new to a node-based material workflow can still rely on Cinema 4D’s standard Material Editor interface to create the corresponding node material in the background automatically. Node-based materials can be packaged into assets with user-defined parameters exposed in an interface similar to Cinema 4D’s Material Editor.

MoGraph Fields – New capabilities in this procedural animation toolset offer an entirely new way to define the strength of effects by combining falloffs from simple shapes, shaders, sounds, objects and formulas. Artists can layer Fields atop each other with standard mixing modes and remap their effects. They can also group multiple Fields together and use them to control effectors, deformers, weights and more.
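
The layering idea can be sketched as folding field strengths together with blend modes; the mode names and structure here are illustrative, not Cinema 4D’s actual API:

```python
def evaluate_fields(point, layers):
    """Layer field strengths the way stacked falloffs combine: each
    layer is (field_fn, blend_mode), evaluated at a point and folded
    into a running strength in [0, 1]."""
    strength = 0.0
    for field_fn, mode in layers:
        v = field_fn(point)
        if mode == "add":
            strength = min(1.0, strength + v)
        elif mode == "multiply":
            strength *= v
        elif mode == "max":
            strength = max(strength, v)
    return strength
```

The resulting strength can then drive an effector, deformer or weight, exactly as a single falloff would.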

CAD Data Import – Popular CAD formats can be imported into Cinema 4D R20 with a drag and drop. A new scale-based tessellation interface allows users to adjust detail to build amazing visualizations. Step, Solidworks, JT, Catia V5 and IGES formats are supported.

Volume Modeling – Users can create complex models by adding or subtracting basic shapes in Boolean-type operations using Cinema 4D R20’s OpenVDB–based Volume Builder and Mesher. They can also procedurally build organic or hard-surface volumes using any Cinema 4D object, including new Field objects. Volumes can be exported in sequenced .vdb format for use in any application or render engine that supports OpenVDB.
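
The Boolean-style volume building described above can be mimicked with plain voxel sets, where union and subtraction are set operations — a toy stand-in for the OpenVDB grids Cinema 4D actually uses:

```python
def sphere_voxels(cx, cy, cz, r):
    """Voxelize a sphere into a set of integer grid coordinates."""
    return {(x, y, z)
            for x in range(cx - r, cx + r + 1)
            for y in range(cy - r, cy + r + 1)
            for z in range(cz - r, cz + r + 1)
            if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= r * r}

# Boolean-style volume building: add one shape, then carve another away.
model = sphere_voxels(0, 0, 0, 4) | sphere_voxels(5, 0, 0, 2)   # union
model -= sphere_voxels(0, 0, 4, 2)                               # subtract
```

A mesher would then wrap this voxel set back into renderable geometry, which is the Volume Builder/Mesher division of labor.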

ProRender Enhancements — ProRender in Cinema 4D R20 extends the GPU-rendering toolset with key features including subsurface scattering, motion blur and multipasses. Also included are Metal 2 support, an updated ProRender core, out-of-core textures and other architectural enhancements.

Core Technology Modernization — As part of the transition to a more modern core in Cinema 4D, R20 comes with substantial API enhancements, the new node framework, further development on the new modeling framework and a new UI framework.

During Siggraph, Maxon will have guest artists presenting at their booth each day of the show. Presentations will be live streamed on C4DLive.com.



SIGGRAPH conference chair Roy C. Anthony: VR, AR, AI, VFX, more

By Randi Altman

Next month, SIGGRAPH returns to Vancouver after turns in Los Angeles and Anaheim. This gorgeous city, whose convention center offers a water view, is home to many visual effects studios providing work for film, television and spots.

As usual, SIGGRAPH will host many presentations, showcase artists’ work, display technology and offer a glimpse into what’s on the horizon for this segment of the market.

Roy C. Anthony

Leading up to the show — which takes place August 12-16 — we reached out to Roy C. Anthony, this year’s conference chair. For his day job, Anthony recently joined Ventuz Technology as VP, creative development. There, he leads initiatives to bring Ventuz’s realtime rendering technologies to creators of sets, stages and ProAV installations around the world.

SIGGRAPH is back in Vancouver this year. Can you talk about why it’s important for the industry?
There are 60-plus world-class VFX and animation studios in Vancouver. There are more than 20,000 film and TV jobs, and more than 8,000 VFX and animation jobs in the city.

So, Vancouver’s rich production-centric communities are leading the way in VFX production for film and television. They are also busy with new media content, games work and new workflows, including those for AR/VR/mixed reality.

How many exhibitors this year?
The conference and exhibition will play host to over 150 exhibitors on the show floor, showcasing the latest in computer graphics and interactive technologies, products and services. Due to the amount of new technology that has debuted in the computer graphics marketplace over this past year, almost one quarter of this year’s 150 exhibitors will be presenting at SIGGRAPH for the first time.

In addition to the traditional exhibit floor and conferences, what are some of the can’t-miss offerings this year?
We have increased the presence of virtual, augmented and mixed reality projects and experiences — and we are introducing our new Immersive Pavilion in the east convention center, which will be dedicated to this area. We’ve incorporated immersive tech into our computer animation festival with the inclusion of our VR Theater, back for its second year, as well as inviting a special, curated experience with New York University’s Ken Perlin — he’s a legendary computer graphics professor.

We’ll be kicking off the week in a big VR way with a special session following the opening ceremony featuring Ivan Sutherland, considered by many as “the father of computer graphics.” That 50-year retrospective will present the history and innovations that sparked our industry.

We have also brought Syd Mead, a legendary “visual futurist” (Blade Runner, Tron, Star Trek: The Motion Picture, Aliens, Timecop, Tomorrowland, Blade Runner 2049), who will display an arrangement of his art in a special collection called Progressions. This will be seen within our Production Gallery experience, which also returns for its second year. Progressions will exhibit more than 50 years of artwork by Syd, from his academic years to his most current work.

We will have an amazing array of guest speakers, including those featured within the Business Symposium, which is making a return to SIGGRAPH after an absence of a few years. Among these speakers are people from the Disney Technology Innovation Group, Unity and Georgia Tech.

On Tuesday, August 14, our SIGGRAPH Next series will present a keynote speaker each morning to kick off the day with an inspirational talk. These speakers are Tony DeRose, a senior scientist from Pixar; Daniel Szecket, VP of design for Quantitative Imaging Systems; and Bob Nicoll, dean of Blizzard Academy.

There will be a 25th anniversary showing of the original Jurassic Park movie, hosted by Steve “Spaz” Williams, a digital artist who worked on that film.

Can you talk about this year’s keynote and why he was chosen?
We’re thrilled to have Rob Bredow, ILM’s head and senior VP/ECD, deliver the keynote address this year. Rob is all about innovation — pushing through scary new directions while maintaining the leadership of artists and technologists.

Rob is the ultimate modern-day practitioner, a digital VFX supervisor who has been disrupting ‘the way it’s always been done’ to move to new ways. He truly reflects the spirit of ILM, which was founded in 1975 and is just one year younger than SIGGRAPH.

A large part of SIGGRAPH is its slant toward students and education. Can you discuss how this came about and why this is important?
SIGGRAPH supports education in all sub-disciplines of computer graphics and interactive techniques, and it promotes and improves the use of computer graphics in education. Our Education Committee sponsors a broad range of projects, such as curriculum studies, resources for educators and SIGGRAPH conference-related activities.

SIGGRAPH has always been a welcoming and diverse community, one that encourages mentorship, and acknowledges that art inspires science and science enables advances in the arts. SIGGRAPH was built upon a foundation of research and education.

How are the Computer Animation Festival films selected?
The Computer Animation Festival has two programs, the Electronic Theater and the VR Theater. Because of the large volume of submissions for the Electronic Theater (over 400), there is a triage committee for the first phase. The CAF chair then takes the high-scoring pieces to a jury of industry professionals. The jury’s selections then become the Electronic Theater show pieces.

The selections for the VR Theater are made by a smaller panel, made up mostly of subcommittee members, who watch each film in a VR headset and vote.

Can you talk more about how SIGGRAPH is tackling AR/VR/AI and machine learning?
Since SIGGRAPH 2018 is about the theme of “Generations,” we took a step back to look at how we got where we are today in terms of AR/VR, and where we are going with it. Much of what we know today wouldn’t have been possible without the research behind Ivan Sutherland’s 1968 head-mounted display. We have a fantastic panel celebrating the 50-year anniversary of his HMD, which is widely considered the first VR HMD.

AI tools are newer, and we created a panel that focuses on trends and the future of AI tools in VFX, called “Future Artificial Intelligence and Deep Learning Tools for VFX.” This panel gains insight from experts embedded in both the AI and VFX industries and gives attendees a look at how different companies plan to further their technology development.

What is the process for making sure that all aspects of the industry are covered in terms of panels?
Every year new ideas for panels and sessions are submitted by contributors from all over the globe. Those submissions are then reviewed by a jury of industry experts, and it is through this process that panelists and cross-industry coverage are determined.

Each year, the conference chair oversees the program chairs, then each of the program chairs become part of a jury process — this helps to ensure the best program with the most industries represented from across all disciplines.

In the rare case a program committee feels it is missing something key in the industry, it can curate a panel itself, but we still require that the panel be reviewed by subject-matter experts before it is considered for final acceptance.



Boxx’s Apexx SE capable of 5.0GHz clock speed

Boxx Technologies has introduced the Apexx Special Edition (SE), a workstation featuring a professionally overclocked Intel Core i7-8086K limited edition processor capable of reaching 5.0GHz across all six of its cores.

In celebration of the 40th anniversary of the Intel 8086 (the processor that launched x86 architecture), Intel provided Boxx with a limited number of the high-performance CPUs ideal for 3D modeling, animation and CAD workflows.

Available only while supplies last and custom-configured to accelerate Autodesk’s 3ds Max and Maya, Adobe CC, Maxon Cinema 4D and other pro apps, the Apexx SE features a six-core, 8th-generation Intel Core i7-8086K limited edition processor professionally overclocked to 5.0GHz. Unlike PC gaming systems, the liquid-cooled Apexx SE sustains that frequency across all cores — even in the most demanding situations.

Featuring a compact, metallic-blue chassis, the Apexx SE supports up to three Nvidia or AMD Radeon Pro graphics cards, and features solid-state drives and 2600MHz DDR4 memory. Boxx is offering a three-year warranty on the systems.

“As longtime Intel partners, Boxx is honored to be chosen to offer this state-of-the-art technology. Lightly threaded 3D content creation tools are limited by the frequency of the processor, so a faster clock speed means more creating and less waiting,” explains Boxx VP, marketing and business development Shoaib Mohammad.


Luke Scott to run newly created Ridley Scott Creative Group

Filmmaker Ridley Scott has brought all of his RSA Films-affiliated companies together in a multi-business restructure to form the Ridley Scott Creative Group. The Ridley Scott Creative Group aims to strengthen the network across the related companies to take advantage of emerging opportunities across all entertainment genres, as well as their existing work in film, television, branded entertainment, commercials, VR, short films, documentaries, music videos, design and animation, and photography.

Ridley Scott

Luke Scott will assume the role of global CEO, working with founder Ridley Scott and partners Jake and Jordan Scott to oversee the future strategic direction of the newly formed group.

“We are in a new golden age of entertainment,” says Ridley Scott. “The world’s greatest brands, platforms, agencies, new entertainment players and studios are investing hugely in entertainment. We have brought together our talent, capabilities and creative resources under the Ridley Scott Creative Group, and I look forward to maximizing the creative opportunities we now see unfolding with our executive team.”

The companies that make up the RSCG will continue to operate autonomously but will now offer clients synergy under the group offering.

The group includes commercial production company RSA Films, which produced such ads as Apple’s 1984, Budweiser’s Super Bowl favorite Lost Dog and, more recently, Adidas Originals’ Original is Never Finished campaign, as well as branded content for Johnnie Walker, HBO, Jaguar, Ford, Nike and the BMW Films series; the music video production company founded by Jake Scott, Black Dog Films (Justin Timberlake, Maroon 5, Nicki Minaj, Beyoncé, Coldplay, Björk and Radiohead); the entertainment marketing company 3AM; commercial production company Hey Wonderful, founded by Michael Di Girolamo; newly founded UK commercial production company Darling Films; and film and television production company Scott Free (Gladiator, Taboo, The Martian, The Good Wife), which continues to be led by David W. Zucker, president, US television; Kevin J. Walsh, president, US film; and Ed Rubin, managing director, UK television/film.

“Our Scott Free Films and Television divisions have an unprecedented number of movies and shows in production,” reports Luke Scott. “We are also seeing a huge appetite for branded entertainment from our brand and agency partners to run alongside high-quality commercials. Our entertainment marketing division 3AM is extending its capabilities to all our partners, while Black Dog is moving into short films and breaking new, world-class talent. It is a very exciting time to be working in entertainment.”


Carbon creates four animated thrill ride spots

Carbon was called on once again by agency Cramer-Krasselt to create four spots — Railblazer, Twisted Timbers, Steel Vengeance and HangTime — for Cedar Fair Entertainment Company, which owns and operates 11 amusement parks across North America.

Following the success of Carbon’s creepy 2017 teaser film for the ride Mystic Timbers, Cramer-Krasselt senior art director David Vaca and his team presented Carbon with four ideas, each a deep dive into the themes and backstories of the rides.

Working across four 30-second films simultaneously and leading a “tri-coastal” team of artists, CD Liam Chapple shared directing duties with lead artists Tim Little and Gary Fouchy. The studio has offices in NYC, LA and Chicago.

According to Carbon executive producer/managing director Phil Linturn, “We soaked each script in the visual language, color grades, camera framing and edits reminiscent of our key inspiration films for each world — a lone gun-slinger arriving to town at sundown in the wild west, the carefree and nostalgic surf culture of California, and extreme off-roading adventures in the twisting canyons of the southwest.”

Carbon’s technical approach to these films was dictated by the fast turnaround and having all films in production at the same time. To achieve the richness, tone and detail required to immerse the viewer in these worlds, Carbon blended stylized CGI with hyper-real matte paintings and realistic lighting to create a look somewhere between their favorite children’s storybooks, contemporary manga animation, the Spaghetti Westerns of Sergio Leone and one or two of their favorite Pixar films.

Carbon called on Side Effects Houdini (in part for its procedural ocean toolkit), Autodesk Maya’s nCloth and 3ds Max, Pixologic’s ZBrush for 3D sculpting and matte painting, Foundry’s Nuke for compositing and FilmLight’s Baselight for color.

“We always love working with Cramer-Krasselt,” concludes Linturn. “They come with awesome concepts and an open mind, challenging us to surprise them with each new deck. This was a fantastic opportunity to expand on our body of full CGI-direction work and to explore some interesting looks and styles. It also allowed us to come up with some very creative workflows across all three offices and to achieve two minutes of animation in just a few weeks. The fact that these four films are part of a much bigger broadcast campaign comprising 70-plus broadcast spots is a testament to the focus and range of the production team.”