Category Archives: 3D

Sony Imageworks provides big effects, animation for Warner’s Smallfoot

By Randi Altman

The legend of Bigfoot: a giant, hairy two-legged creature roaming the forests and giving humans just enough of a glimpse to freak them out. Sightings have been happening for centuries with no sign of slowing down — seriously, Google it.

But what if that story was turned around, and it was Bigfoot who was freaked out by a Smallfoot (human)? Well, that is exactly the premise of the new Warner Bros. film Smallfoot, directed by Karey Kirkpatrick. It’s based on the book “Yeti Tracks” by Sergio Pablos.

Karl Herbst

Instead of a human catching a glimpse of the mysterious giant, a yeti named Migo (Channing Tatum) sees a human (James Corden) and tells his entire snow-filled village about the existence of Smallfoot. Of course, no one believes him so he goes on a trek to find this mythical creature and bring him home as proof.

Sony Pictures Imageworks was tasked with all of the animation and visual effects work on the film, while Warner Animation Group did all of the front-end work — such as adapting the script, creating the production design, editing, directing, producing and more. We reached out to Imageworks VFX supervisor Karl Herbst (Hotel Transylvania 2) to find out more about creating the animation and effects for Smallfoot.

The film has a Looney Tunes-type feel with squash and stretch. Did this provide more freedom or less?
In general, it provided more freedom since it allowed the animation team to really have fun with gags. It also gave them a ton of reference material to pull from and come up with new twists on older ideas. Once out of animation, depending on how far the performance was pushed, other departments — like the character effects team — would have additional work due to all of the exaggerated movements. But all of the extra work was worth it because everyone really loved seeing the characters pushed.

We also found that as the story evolved, Migo’s journey became more emotionally driven; we needed to find a style that also let the audience truly connect with what he was going through. We brought in a lot more subtlety, and a more truthful physicality to the animation when needed. As a result, we have these incredibly heartfelt performances alongside moments that would feel right at home in an old Road Runner short. Yet it all still feels like part of the same world with these truly believable characters at the center of it.

Was scale between such large and small characters a challenge?
It was one of the first areas we wanted to tackle, since the look of the yeti fur next to a human was really important to the filmmakers. In the end, we found that the thickness and fidelity of the yeti hair had to be very high so you could see each individual hair next to the hairs of the humans.

It also meant making the rigs for the humans and yetis flexible enough to scale as needed, so that in moments where they are very close together they did not feel disproportionate to each other. Everything in our character pipeline, from animation down to lighting, had to handle these scale changes. Even the subsurface scattering in the skin had dials to deal with Percy, or any human character, being scaled up or down in a shot.

How did you tackle the hair?
We updated a couple of key areas in our hair pipeline, starting with how we build our hair. In the past, we would make curves that looked more like small clumps of hairs. In this case, we made each curve a single strand of hair. To shade this hair in a way that gave artists better control over the look, our development team created a new hair shader that used true multiple scattering within the hair.

We then extended that hair shading model to add control over the distribution around the hair fiber to model the effect of animal hair, which tends to scatter differently than human hair. This gave artists the ability to create lots of different hair looks, which were not based on human hair, as was the case with our older models.
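
To illustrate the general idea (a toy, single-lobe sketch, not the shader Imageworks built): physically based hair models factor reflectance into a longitudinal term and an azimuthal term, and blending the azimuthal term toward uniform is one simple way to push a groom from sleek human hair toward matte animal fur.

```python
import math

def gaussian(x, stddev):
    """Unnormalized Gaussian lobe."""
    return math.exp(-0.5 * (x / stddev) ** 2)

def hair_reflectance(theta_h, phi, beta_long, beta_azim):
    """Toy single-lobe hair reflectance.

    theta_h   -- longitudinal half-angle between light and view
    phi       -- azimuthal angle around the hair fiber
    beta_long -- longitudinal roughness
    beta_azim -- azimuthal roughness (0..1)

    Blending the azimuthal term N toward uniform spreads light around
    the fiber, pushing the look from sleek human hair toward matte fur.
    """
    m = gaussian(theta_h, beta_long)                 # M: longitudinal term
    t = min(max(beta_azim, 0.0), 1.0)
    n = (1.0 - t) * gaussian(phi, max(beta_azim, 1e-3)) + t / (2.0 * math.pi)
    return m * n

# A rougher azimuthal term reads as fur rather than glossy human hair.
print(hair_reflectance(0.1, 0.3, beta_long=0.06, beta_azim=0.05))  # human-like
print(hair_reflectance(0.1, 0.3, beta_long=0.12, beta_azim=0.8))   # fur-like
```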

Was rendering so many furry characters on screen at a time an issue?
Yes. In the past, this would have been hard to shade all at once, mostly due to our reliance on opacity to create the soft shadows needed for fur. With the new shading model we no longer used opacity at all, so the number of rays needed to resolve the hair was lower than before. But we then needed to resolve the aliasing caused by the sheer number of fine hairs (9 million for LeBron James’ Gwangi).

We developed a few other new tools within our version of the Arnold renderer to help with aliasing and render time in general. The first was adaptive sampling, which allowed us to raise the anti-aliasing samples drastically: some pixels would use only a few samples while others would use very high sampling, whereas in the past all pixels got the same number. This focused our render times where we needed them, helping to reduce overall render time. Our development team also added the ability to pick a render up from its previous point. This meant we could do all of our lighting work at a lower quality level, get creative approval from the filmmakers and then pick up the renders to bring them to full quality without losing the time already spent.
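
A minimal sketch of those two ideas, with a hypothetical render_sample() callable standing in for the renderer (Imageworks’ Arnold changes are proprietary): adaptive sampling keeps adding samples to a pixel only while its noise estimate stays above a target, and a checkpoint simply persists the accumulators so a later pass can resume without redoing work.

```python
import json

def sample_pixel(render_sample, min_spp=16, max_spp=1024, noise_target=0.002):
    """Variance-driven adaptive sampling for one pixel: keep taking
    samples only while the estimated error of the mean is above the
    target, so flat pixels stop early and noisy ones keep going."""
    total = total_sq = 0.0
    n = 0
    while n < max_spp:
        v = render_sample()          # one radiance sample (hypothetical)
        total += v
        total_sq += v * v
        n += 1
        if n >= min_spp:
            mean = total / n
            var = max(total_sq / n - mean * mean, 0.0)
            if (var / n) ** 0.5 < noise_target:   # std. error of the mean
                break
    return {"sum": total, "sum_sq": total_sq, "count": n}

def save_checkpoint(path, per_pixel_accumulators):
    """Persist the raw accumulators so a render can later be picked up
    at a higher quality level without losing the samples already taken."""
    with open(path, "w") as f:
        json.dump(per_pixel_accumulators, f)
```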

What tools were used for the hair simulations specifically, and what tools did you call on in general?
We used Maya and the Nucleus solvers for all of the hair simulations, but developed tools on top of them to deal with so much hair per character and so many characters on screen at once. The simulation for each character was driven by their design and motion requirements.

The Looney Tunes-inspired design and motion created a challenge: keeping hair simulations from breaking under all of the quick, stretched motion while still allowing light wind for the subtle emotional moments. We solved those requirements with a high number of control hairs and constraints. Meechee (Zendaya) used 6,000 simulation curves with over 200 constraints, while Migo needed 3,200 curves with around 30 constraints.
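
The Nucleus setup itself isn’t public, but the underlying idea of control hairs held together by constraints can be sketched simply: treat each control hair as a chain of points advanced with Verlet integration, then relax distance constraints every step so extreme squash-and-stretch motion cannot pull the curve apart. A minimal, illustrative version:

```python
def step_control_hair(points, prev_points, rest_len, dt=1 / 24.0,
                      gravity=(0.0, -9.8, 0.0), iterations=10):
    """One Verlet step for a single control hair. The root point
    (index 0) stays pinned to the character."""
    # Verlet integration: new position from current velocity plus gravity.
    new_pts = []
    for p, q in zip(points, prev_points):
        vel = [a - b for a, b in zip(p, q)]
        new_pts.append([a + v + g * dt * dt
                        for a, v, g in zip(p, vel, gravity)])
    new_pts[0] = list(points[0])  # pin the root

    # Constraint relaxation: pull neighboring points back to rest length
    # so fast, stretched motion cannot tear the curve apart.
    for _ in range(iterations):
        for i in range(len(new_pts) - 1):
            a, b = new_pts[i], new_pts[i + 1]
            d = [y - x for x, y in zip(a, b)]
            dist = sum(c * c for c in d) ** 0.5 or 1e-9
            corr = (dist - rest_len) / dist * 0.5
            for c in range(3):
                a[c] += d[c] * corr
                b[c] -= d[c] * corr
        new_pts[0] = list(points[0])  # keep the root pinned each pass
    return new_pts
```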

Stonekeeper (Common) was the most complex of the characters, with long braided hair on his head, a beard, shaggy arms and a cloak made of stones. He required a cloth simulation pass for the cloak, a rigid-body simulation for the stones and a hair simulation on top of the stones. Our in-house tool, Kami, builds all of the hair at render time and also lets us add procedurals to the hair at that point. We relied on those procedurals to create the many varied hair looks for all of the generic characters needed to fill the village full of yetis.
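
Kami is proprietary, but the seeding trick behind that kind of render-time variation is easy to sketch: derive each generic yeti’s groom overrides deterministically from its character ID, so a background character looks identical on every frame and in every render without storing per-character data. The parameter names and ranges below are invented for illustration.

```python
import random

def generic_yeti_groom(character_id):
    """Derive a repeatable set of groom overrides from a character ID.
    Seeding the generator with the ID means the same background yeti
    gets the same hair on every frame, with no stored per-character data."""
    rng = random.Random(character_id)
    return {
        "length_scale": rng.uniform(0.8, 1.3),
        "curl_amount": rng.uniform(0.0, 0.6),
        "clumping": rng.uniform(0.1, 0.9),
        "hue_shift": rng.uniform(-0.05, 0.05),
    }

# Example: a village crowd of distinct but stable grooms.
crowd = {cid: generic_yeti_groom(cid) for cid in range(200)}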

How many different types of snow did you have?
We created three different snow systems for environmental effects. The first was a particle simulation of flakes for near-ground detail. The second was a volumetric system that created textured, moving atmosphere in the backgrounds; we ran it on each of the large sets and stored the results so lighters could pick which parts they wanted in each shot. The third, to help artistically drive the look of each shot, was a library of 2D elements that the effects team rendered, which could be added during compositing to drop in details late in shot production.

For ground snow, we had different systems based on the needs in each shot. For shallow footsteps, we used displacement of the ground surface with additional little pieces of geometry to add crumble detail around the prints. This could be used in foreground or background.
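
As a minimal sketch of that kind of footstep treatment, assuming a simple heightfield ground (the function and numbers are illustrative, not the production setup): press the snow down inside the print and lift a thin rim around it, which is where the crumble geometry would be scattered.

```python
import numpy as np

def stamp_footprint(height, center, radius, depth):
    """Press a footprint into a snow heightfield: displace the ground
    down inside the print and raise a thin rim around the edge."""
    h, w = height.shape
    ys, xs = np.mgrid[0:h, 0:w]
    d = np.hypot(ys - center[0], xs - center[1])
    inside = d < radius
    rim = (d >= radius) & (d < radius * 1.3)
    height[inside] -= depth * np.cos(0.5 * np.pi * d[inside] / radius)
    height[rim] += depth * 0.15   # pushed-up snow around the print
    return height

snow = np.zeros((256, 256))
stamp_footprint(snow, center=(128, 128), radius=12, depth=0.05)
```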

For heavy interactions, like tunneling or sliding in the snow, we developed a new tool we called Katyusha. This system combined rigid-body destruction with fluid simulations to achieve all of the different states snow can take in any given interaction. We then rendered these simulations as volumetrics to get the complex lighting look the filmmakers were after. Because the snow was, in essence, a cloud, light could travel through all of the different layers of geometry and volume present at any point in a scene, which made it easier for the lighters to get the right snow look in any lighting situation.

Was there a particular scene or effect that was extra challenging? If so, what was it and how did you overcome it?
The biggest challenge for the film as a whole was the environments. The story was very fluid, so the design and build of the environments came very late in the process. Couple that with a creative team that liked to find their shots — versus designing and building them — and we needed to be very flexible in how we created sets, and quick about it.

To achieve this, we began by breaking the environments into a subset of source shapes that could be combined in any fashion to build Yeti Mountain, Yeti Village and the surrounding environments. Surfacing artists then created materials that could be applied to any set piece, allowing for quick creative decisions about what was rock, snow and ice, and creating many different looks. All of these materials were created using PatternCreate networks as part of our OSL shaders. With them we could heavily leverage portable procedural texturing between assets, making location construction quicker, more flexible and easier to dial.
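
The PatternCreate/OSL networks themselves aren’t public, but the core trick of deciding what reads as rock, snow or ice from the geometry alone, so that one material works on any set piece, can be sketched in a few lines (the thresholds are invented for illustration):

```python
def material_weights(normal_y, height, snow_line=0.55, ice_height=0.2):
    """Procedurally decide what reads as rock, snow or ice from the
    geometry alone: snow accumulates on up-facing surfaces, ice sits
    low, rock shows through on steep faces. normal_y is the upward
    component of the surface normal in [-1, 1]; height is normalized
    world height in [0, 1]."""
    snow = max(0.0, (normal_y - snow_line) / (1.0 - snow_line))
    ice = max(0.0, (ice_height - height) / ice_height) * (1.0 - snow)
    rock = max(0.0, 1.0 - snow - ice)
    return {"rock": rock, "snow": snow, "ice": ice}

print(material_weights(normal_y=0.9, height=0.7))   # mostly snow
print(material_weights(normal_y=0.1, height=0.05))  # icy rock near the ground
```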

To get the right snow look for all levels of detail needed, we used a combination of textured snow, modeled snow and a simulation of geometric snowfall, which all needed to shade the same. For the simulated snowfall we created a padding system that could be run at any time on an environment giving it a fresh coating of snow. We did this so that filmmakers could modify sets freely in layout and not have to worry about broken snow lines. Doing all of that with modeled snow would have been too time-consuming and costly. This padding system worked not only in organic environments, like Yeti Village, but also in the Human City at the end of the film. The snow you see in the Human City is a combination of this padding system in the foreground and textures in the background.
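
As a hedged sketch of what such a padding pass might look like, using precomputed sky-occlusion flags in place of real raycasts: deposit snow thickness on upward-facing, unoccluded points, so that re-running the pass after a layout change restores clean snow lines automatically.

```python
def pad_snow(points, normals, sky_occluded, max_thickness=0.12):
    """Give an environment a fresh coating of snow: each surface point
    receives thickness proportional to how much it faces up, and none
    at all if something overhead blocks the sky."""
    padded = []
    for p, n, occluded in zip(points, normals, sky_occluded):
        up = max(0.0, n[1])                       # upward-facing component
        t = 0.0 if occluded else max_thickness * up * up
        padded.append((p[0], p[1] + t, p[2]))     # offset along world up
    return padded
```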

Creating super sounds for Disney XD’s Marvel Rising: Initiation

By Jennifer Walden

Marvel revealed “the next generation of Marvel heroes for the next generation of Marvel fans” in a behind-the-scenes video back in December. Those characters stayed tightly under wraps until August 13, when a compilation of animated shorts called Marvel Rising: Initiation aired on Disney XD. The shorts dive into the back stories of the new heroes and give audiences a taste of what to expect from the feature-length animated film Marvel Rising: Secret Warriors, which premiered September 30 on the Disney Channel and Disney XD simultaneously.

L-R: Pat Rodman and Eric P. Sherman

Handling audio post on both the animated shorts and the full-length feature is the Bang Zoom team led by sound supervisor Eric P. Sherman and chief sound engineer Pat Rodman. They worked on the project at the Bang Zoom Atomic Olive location in Burbank. The sounds they created for this new generation of Marvel heroes fit right in with the established Marvel universe but aren’t strictly limited to what already exists. “We love to keep it kind of close, unless Marvel tells us that we should match a specific sound. It really comes down to whether it’s a sound for a new tech or an old tech,” says Rodman.

Sherman adds, “When they are talking about this being for the next generation of fans, they’re creating a whole new collection of heroes, but they definitely want to use what works. The fans will not be disappointed.”

The shorts begin with a helicopter flyover of New York City at night. Blaring sirens mix with police radio chatter as searchlights sweep over a crime scene on the street below. A SWAT team moves in as a voice blasts over a bullhorn, “To the individual known as Ghost Spider, we’ve got you surrounded. Come out peacefully with your hands up and you will not be harmed.” Marvel Rising: Initiation wastes no time in painting a grim picture of New York City. “There is tension and chaos. You feel the oppressiveness of the city. It’s definitely the darker side of New York,” says Sherman.

The sound of the city throughout the series was created using a combination of sourced recordings of authentic New York City street ambience and custom recordings of bustling crowds that Rodman captured at street markets in Los Angeles. Mix-wise, Rodman says they chose to play the backgrounds of the city hotter than normal just to give the track a more immersive feel.

Ghost Spider
Not even 30 seconds into the shorts, the first new Marvel character makes her dramatic debut. Ghost Spider (Dove Cameron), who is also known as Spider Gwen, bursts from a third-story window, slinging webs at the waiting officers. Since she’s a new character, Rodman notes that she’s still finding her way and there’s a bit of awkwardness to her character. “We didn’t want her to sound too refined. Her tech is good, but it’s new. It’s kind of like Spider-Man first starting out as a kid and his tech was a little off,” he says.

Sound designer Gordon Hookailo spent a lot of time crafting the sound of Spider Gwen’s webs, which according to Sherman have more of a nylon, silky kind of sound than Spider-Man’s webs. There’s a subliminal ghostly wisp sound to her webs also. “It’s not very overt. There’s just a little hint of a wisp, so it’s not exactly like regular Spider-Man’s,” explains Rodman.

Initially, Spider Gwen seems to be a villain. She’s confronted by the young-yet-authoritative hero Patriot (Kamil McFadden), a member of S.H.I.E.L.D. who was trained by Captain America. Patriot carries a versatile, high-tech shield that can do lots of things, like become a hoverboard. It shoots lasers and rockets too. The hoverboard makes a subtle whooshy, humming sound that’s high-tech in a way that’s akin to the Goblin’s hovercraft. “It had to sound like Captain America too. We had to make it match with that,” notes Rodman.

Later on in the shorts, Spider Gwen’s story reveals that she’s actually one of the good guys. She joins forces with a crew of new heroes, starting with Ms. Marvel and Squirrel Girl.

Ms. Marvel (Kathreen Khavari) has the ability to stretch and grow. When she reaches out to grab Spider Gwen’s leg, there’s a rubbery, creaking sound. When she grows 50 feet tall she sounds 50 feet tall, complete with massive, ground-shaking footsteps and a lower-ranged voice sweetened with big delays and reverbs. “When she’s large, she almost has a totally different voice. She sounds like a large, forceful woman,” says Sherman.

Squirrel Girl
One of the favorites on the series so far is Squirrel Girl (Milana Vayntrub) and her squirrel sidekick Tippy Toe. Squirrel Girl has the power to call a stampede of squirrels. Sound-wise, the team had fun with that, capturing recordings of animals small and large with their Zoom H6 field recorder. “We recorded horses and dogs mainly because we couldn’t find any squirrels in Burbank; none that would cooperate, anyway,” jokes Rodman. “We settled on a larger animal sound that we manipulated to sound like it had little feet. And we made it sound like there are huge numbers of them.”

Squirrel Girl is a fan of anime, and so she incorporates an anime style into her attacks, like calling out her moves before she makes them. Sherman shares, “Bang Zoom cut its teeth on anime; it’s still very much a part of our lifeblood. Pat and I worked on thousands of episodes of anime together, and we came up with all of these techniques for making powerful power moves.” For example, they add reverb to the power moves and choose “shings” that have an anime style sound.

What is an anime-style sound, you ask? “Diehard fans of anime will debate this to the death,” says Sherman. “It’s an intuitive thing, I think. I’ll tell Pat to do that thing on that line, and he does. We’re very much ‘go with the gut’ kind of people.

“As far as anime style sound effects, Gordon [Hookailo] specifically wanted to create new anime sound effects so we didn’t just take them from an existing library. He created these new, homegrown anime effects.”

Quake
The other hero briefly introduced in the shorts is Quake (Chloe Bennet), voiced by the same actress who plays Daisy Johnson, aka Quake, on Agents of S.H.I.E.L.D. Sherman says, “Gordon is a big fan of that show and has watched every episode. He used that as a reference for the sound of Quake in the shorts.”

The villain in the shorts has so far remained nameless, but when she first battles Spider Gwen the audience sees her pair of super-daggers that pulse with a green glow. The daggers are somewhat “alive,” and when they cut someone they take some of that person’s life force. “We definitely had them sound as if the power was coming from the daggers and not from the person wielding them,” explains Rodman. “The sounds that Gordon used were specifically designed — not pulled from a library — and there is a subliminal vocal effect when the daggers make a cut. It’s like the blade is sentient. It’s pretty creepy.”

Voices
The character voices were recorded at Bang Zoom, either in the studio or via ISDN. The challenge was getting all the different voices to sound as though they were in the same space together on-screen. Also, some sessions were recorded with single mics on each actor while other sessions were recorded as an ensemble.

Sherman notes it was an interesting exercise in casting. Some of the actors were YouTube stars (who don’t have much formal voice acting experience) and some were experienced voice actors. When an actor without voiceover experience comes in to record, the Bang Zoom team likes to start with mic technique 101. “Mic technique was a big aspect and we worked on that. We are picky about mic technique,” says Sherman. “But, on the other side of that, we got interesting performances. There’s a realism, a naturalness, that makes the characters very relatable.”

To get the voices to match, Rodman spent a lot of time using Waves EQ, Pro Tools Legacy Pitch, and occasionally Waves UltraPitch for when an actor slipped out of character. “They did lots of takes on some of these lines, so an actor might lose focus on where they were, performance-wise. You either have to pull them back in with EQ, pitching or leveling,” Rodman explains.
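
Rodman’s work was done in Pro Tools with Waves plug-ins, but for readers who want to experiment, the same kind of corrective nudge can be sketched with open-source Python tools (the file names here are hypothetical):

```python
import librosa
import soundfile as sf

# Load a take that drifted slightly sharp (hypothetical file name).
y, sr = librosa.load("take_37.wav", sr=None)

# Nudge it down half a semitone, the kind of small corrective move
# used to pull a performance back toward the rest of the takes.
corrected = librosa.effects.pitch_shift(y, sr=sr, n_steps=-0.5)

sf.write("take_37_corrected.wav", corrected, sr)
```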

One highlight of the voice recording process was working with voice actor Dee Bradley Baker, who did the squirrel voice for Tippy Toe. Most of Tippy Toe’s final track was Baker’s natural voice. Rodman rarely had to tweak the pitch, and it needed no other processing or sound design enhancement. “He’s almost like a Frank Welker (who did the voice of Fred Jones on Scooby-Doo, the voice of Megatron starting with the ‘80s Transformers franchise and Nibbler on Futurama).”

Marvel Rising: Initiation was like a training ground for the sound of the feature-length film. The ideas that Bang Zoom worked out there were expanded upon for the soon-to-be released Marvel Rising: Secret Warriors. Sherman concludes, “The shorts gave us the opportunity to get our arms around the property before we really dove into the meat of the film. They gave us a chance to explore these new characters.”


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter @audiojeney.


A Conversation: 3P Studio founder Haley Stibbard

Australia’s 3P Studio is a post house founded and led by artisan Haley Stibbard. The company’s portfolio of work includes commercials for brands such as Subway, Allianz and Isuzu Motor Company as well as iconic shows like Sesame Street. Stibbard’s path to opening her own post house was based on necessity.

After going on maternity leave to have her first child in 2013, she returned to her job at a content studio to find that her role had been made redundant, and she was let go. Needing and wanting to work, she began freelancing as an editor — working seven days a week and never turning down a job. Eventually she realized that she couldn’t keep up with that type of schedule and took her fate into her own hands: she launched 3P Studio, one of Brisbane’s few women-led post facilities.

We reached out to Stibbard to ask about her love of post and her path to 3P Studio.

What made you want to get into post production? School?
I had a strong love of film, which I got from my late dad, Ray. He was a big film buff and would always come home from work when I was a kid with a shopping bag full of $2 movies from the video store and he would watch them. He particularly liked the crime stories and thrillers! So I definitely got my love of film and television from him.

We did not have any film courses at high school in the ‘90s, so the closest I could get was photography. Without a show reel it was hard to get a place at university in the college of art; a portfolio was a requirement and I didn’t have one. I remember I had to talk my way into the film program, and in the end I think they just got sick of me and let me into the course through the back door without a show reel — I can be very persistent when I want to be. I always had enjoyed editing and I was good at it, so in group tasks I was always chosen as the editor and then my love of post came from there.

What was your first job?
My very first job was quite funny, actually. I was working in both a shoe store and a supermarket at the time, and two post positions became available one day, an in-house editor for a big furniture chain and a job as a production assistant for a large VFX company at Movie World on the Gold Coast. Anyone who knows me knows that I would be the worst PA in the world. So, luckily for that company director, I didn’t get the PA job and became the in-house editor for the furniture chain.

I’m glad that I took that job, as it taught me so much — how to work under pressure, how to use an Avid, how to work with deadlines, what a key number was, how to dispatch TVCs to the stations, how to be quick and accurate, and how to take constructive feedback.

I made every mistake known to man, including one weekend when I forgot to remove the 4×3 safe bars from a TVC and my boss saw it on TV. I ended up having to drive to the office, climb the locked fence to get in and pull the spot off air. So I’ve learned a lot of things the hard way, but my boss was a very patient and forgiving man, and 18 years later he is now a client of mine!

What job did you hold when you went out on maternity leave?
Before I left on maternity leave to have my son Dashiell, I was an editor for a small content company. I have always been a jack-of-all-trades and I took care of everything from offline to online, grading in Resolve, motion graphics in After Effects and general design. I loved my job and I loved the variety that it brought. Doing something different every day was very enjoyable.

After leaving that job, you started freelancing as an editor. What systems did you edit on at the time and what types of projects? How difficult a time was that for you? New baby, working all the time, etc.
I started freelancing when my son was just past seven months old. I had a mortgage and had just come off six months of unpaid maternity leave, so I needed to make a living and I needed to make it quickly. I also had the added pressure of looking after a young child under the age of one who still needed his mother.

So I started contacting advertising agencies and production companies that I thought may be interested in my skill set. I just took every job that I could get my hands on, as I was always worried that every job that I took could potentially be my last for a while. I was lucky that I had an incredibly well-behaved baby! I never said “no” to a job.

As my client base started to grow, my clients would always book me since they knew that I would never say “no” (they know I still don’t say no!). It got to the point where I was working seven days a week. I worked all day when my son was in childcare and all night after he would go to bed. I would take the baby monitor downstairs where I worked out of my husband’s ‘man den.’

As my freelance business grew, I was so lucky to have the most supportive husband in the world, who was doing everything for me: the washing, the cleaning, the cooking, bath time, as well as holding down his own full-time job as an engineer. I wouldn’t have been able to do what I did for that period of time without his support and encouragement. This time really proved to be a huge stepping stone for 3P Studio.

Do you remember the moment you decided you would start your own business?
There wasn’t really a specific moment where I decided to start my own business. It was something that seemed to just naturally come together. The busier I became, the more opportunities came about, like having enough work through the door to build a space and hire staff. I have always been very strategic in regard to the people that I have brought on at 3P, and the timing in which they have come on board.

Can you walk us through that bear of a process?
At the start of 2016, I made the decision to get out of the house; my work life was starting to blend into my home life and I needed that separation. I worked out of a small office for 12 months, and about six months in I was able to purchase the office space that became our studio today.

I went to work planning the fit out for the next six months. The studio was an investment in the business and I needed a place that my clients could also bring their clients for approvals, screenings and collaboration on jobs, as well as just generally enjoying the space.

The office space was an empty white shell, but the beauty of coming into a blank canvas was that I was able to create a studio that was specifically built for post production. I was lucky in that I had worked in some of the best post houses in the country as an editor, and this being a custom build I was able to take all the best bits out of all the places I had previously worked and put them into my studio without the restriction of existing walls.

I built up the walls, ripped down the ceilings and was able to design the edit suites and infrastructure, right down to designing and laying the cable runs myself, knowing they would work for us down the line. Then we saved money and added more equipment to the studio bit by bit. It wasn’t 0 to 100 overnight; I had to work hard at the business development side of the company, and I spent a lot of long days sitting by myself in those edit suites doing everything. Soon, word of mouth started to circulate and the business started to grow on the back of some nice jobs from my existing loyal clients.

What type of work do you do, and what gear do you call on?
3P Studio is a boutique studio specializing in full-service post production; we also shoot content when required.

Our clients range anywhere from small content videos for the web all the way up to large commercial campaigns and everything in between.

There are currently six of us working full time in the studio, and we handle everything in-house, from offline editing to VFX to videography and sound design. We work primarily in the Adobe Creative Suite, with offline editing in Premiere, Maxon Cinema 4D and Autodesk Maya for 3D work, Autodesk Flame and Side Effects Houdini for online compositing and VFX, Blackmagic Resolve for color grading and Pro Tools HD for sound mixing. We use EditShare EFS shared storage nodes for collaborative working and sharing content between the mix of creative platforms we use.

This year we have invested in a Red Digital Cinema camera as well as an EditShare XStream 200 EFS scale-out single-node server so we can become that one-stop shop for our clients. We have been able to create an amazing creative space for our clients to come and work with us, be it from the bespoke design of our editorial suites or the high level of client service we offer.

How did you build 3P Studios to be different from other studios you’ve worked at?
From a personal perspective, the culture that we have been able to build in the studio is unlike anywhere else I have worked in that we genuinely work as a team and support each other. On the business side, we cater to clients of all sizes and budgets while offering uncompromising services and experience whether they be large or small. Making sure they walk away feeling that they have had great value and exemplary service for their budget means that they will end up being a customer of ours for life. This is the mantra that I have been able to grow the business on.

What is your hiring process like, and how do you protect employees who need to go out on maternity or family leave?
When I interview people to join 3P, attitude and willingness to learn is everything to me — hands down. You can be the most amazing operator on the planet, but if your attitude stinks then I’m really not interested. I’ve been incredibly lucky with the team that I have, and I have met them along the journey at exactly the right times. We have an amazing team culture and as the company grows our success is shared.

I always make it clear that it’s swings and roundabouts and that family is always number one. I am there to support my team if they need me to be, not just inside of work but outside as well, and I receive the same support in return. We have flexible working hours, and I have team members with young families who, at times, are able to work both in the studio and from home so they can be there for their kids when they need to be. This flexibility works fine for us. Happy team members make for a happy, productive workplace, and I like to think that 3P is forward-thinking in that respect.

Any tips for young women either breaking into the industry or in it that want to start a family but are scared it could cost them their job?
Well, for starters, we have laws in Australia that make it illegal for any woman in this country to be discriminated against for starting a family. 3P also supports the 18 weeks paid maternity leave available to women heading out to start a family. I would love to see more female workers in post production, especially in operator roles. We aren’t just going to be the coffee and tea girls, we are directors, VFX artists, sound designers, editors and cinematographers — the future is female!

Any tips for anyone starting a new business?
Work hard, be nice to people and stay humble because you’re only as good as your last job.

Main Image: Haley Stibbard (second from left) with her team.


London design, animation studio Golden Wolf sets up shop in NYC

Animation studio Golden Wolf, headquartered in London, has launched its first stateside location in New York City. The expansion comes on the heels of an alliance with animation/VFX/live-action studio Psyop, a minority investor in the company. Golden Wolf now occupies studio space in SoHo adjacent to Psyop and its sister company Blacklist, which formerly represented Golden Wolf stateside and was instrumental to the relationship.

Among the year’s highlights from Golden Wolf are an integrated campaign for Nike FA18 Phantom (client direct), a spot for the adidas x Parley Run for the Oceans initiative (TBWA Amsterdam) in collaboration with Psyop, and Marshmello’s Fly music video for Disney. Golden Wolf also received an Emmy nomination for its main title sequence for Disney’s DuckTales reboot.

Heading up Golden Wolf’s New York office are two transplants from the London studio, executive producer Dotti Sinnott and art director Sammy Moore. Both joined Golden Wolf in 2015, Sinnott from motion design studio Bigstar, where she was a senior producer, and Moore after a run as a freelance illustrator/designer in London’s agency scene.

Sinnott comments: “Building on the strength of our London team, the Golden Wolf brand will continue to grow and evolve with the fresh perspective of our New York creatives. Our presence on either side of the Atlantic not only brings us closer to existing clients, but also positions us perfectly to build new relationships with New York-based agencies and brands. On top of this, we’re able to use the time difference to our advantage to work on faster turnarounds and across a range of budgets.”

Founded in 2013 by Ingi Erlingsson, the studio’s executive creative director, Golden Wolf is known for youth-oriented work — especially content for social media, entertainment and sports — that blurs the lines of irreverent humor, dynamic action and psychedelia. Erlingsson was once a prolific graffiti artist and, later, illustrator/designer and creative director at U.K.-based design agency ilovedust. Today he inspires Golden Wolf’s creative culture and disruptive style fed in part by a wave of next-gen animation talent coming out of schools such as Gobelins in France and The Animation Workshop in Denmark.

“I’m excited about our affiliation with Psyop, which enjoys an incredible legacy producing industry-leading animated advertising content,” Erlingsson says. “Golden Wolf is the new kid on the block, with bags of enthusiasm and an aim to disrupt the industry with new ideas. The combination of the two studios means that we are able to tackle any challenge, regardless of format or technical approach, with the support of some of the world’s best artists and directors. The relationship allows brands and agencies to have complete confidence in our ability to solve even the biggest challenges.”

Golden Wolf’s initial work out of its New York studio includes spots for Supercell (client direct) and Bulleit Bourbon (Barton F. Graf). Golden Wolf is represented in the US market by Hunky Dory for the East Coast, Baer Brown for the Midwest and In House Reps for the West Coast. Stink represents the studio for Europe.

Main Photo: (L-R) Dotti Sinnott, Ingi Erlingsson and Sammy Moore.


Reallusion intros three tools for mocap, characters

Reallusion has launched three new motion capture and character creation products: Character Creator 3, a stand-alone character creation tool; Motion Live, a realtime motion capture solution; and 3D Face Motion Capture with Live Face for iPhone X. With these products Reallusion is offering a total solution to build, morph, animate and gamify 3D characters.

Character Creator 3 (CC3), the new generation of iClone Character Creator, has separated from iClone to become a professional stand-alone tool. With a new quad base, roundtrip editing with ZBrush and photorealistic rendering using Iray, Character Creator 3 is a full character-creation solution for generating optimized 3D characters that are ready for games or intensive artistic design.

CC3 provides a new game character base with topology optimized for mobile, game and AR/VR developers. The big breakthrough is the integration of InstaLOD’s model and material optimization technologies to generate game-ready characters that are animatable on the fly, covering the complete character pipeline: polygon reduction, material merging, texture baking, remeshing and LOD generation.

CC3 launches this month and is available now for preorder for $199.

iClone Motion Live, the multidevice motion capture system, connects industry-standard motion gear — including Rokoko, Leap Motion, Xsens, Faceware, OptiTrack, Noitom and iPhone X — into one solution.

Motion Live’s intuitive plug-and-play design makes connecting complicated mocap devices simple by animating custom imported characters or fully rigged 3D characters generated by Character Creator, Daz Studio or other industry-standard sources.

Reallusion has also debuted 3D Face Motion Capture for iPhone X, paired with the Live Face app for iClone. As a result, users can record instant facial motion capture on any 3D character with an iPhone X. Reallusion has expanded the technology behind Animoji and Memoji to lift iPhone X animation and motion capture to the next level for studios and independent creators. The solution combines the power of iPhone X mocap with iClone Motion Live to blend face motion capture with Xsens, Perception Neuron, Rokoko, OptiTrack and Leap Motion for a truly realtime live experience in full-body mocap.


Review: Foundry’s Athera cloud platform

By David Cox

I’ve been thinking for a while that there are two types of post houses — those that know what cloud technology can do for them, and those whose days are numbered. That isn’t to say that the use of cloud technology is essential to the survival of a post house, but if they haven’t evaluated the possibilities of it they’re probably living in the past. In such a fast-moving business, that’s not a good place to be.

The term “cloud computing” suffers a bit from being hijacked by know-nothing marketeers and has become a bit vague in meaning. It’s quite simple though: it just means a computer (or storage) owned and maintained by someone else, housed somewhere else and used remotely. The advantage is that a post house can reduce its destructive fixed overheads by owning fewer computers and thus save money on installation and upkeep. Cloud computers can be used as and when they are needed. This allows scaling up and down in proportion to workload.

Over the last few years, several providers have created global datacenters containing upwards of 50,000 servers per site, entirely for the use of anyone who wants to “remote in.” Amazon and Google are the two biggest providers, but as anyone who has tried to harness their power for post production can confirm, they’re not simple to understand or configure. Amazon alone has hundreds of different computer “instance” types, and accessing them requires navigating through a sea of unintelligible jargon. You must know your Elastic Beanstalks from your EC2, EKS and Lambda. And make sure you’ve worked out how to connect your S3, EFS and Glacier. Software licensing can also be tricky.
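
To give a sense of the jargon hurdle, here is roughly what asking Amazon for a single GPU-backed machine looks like in its Python SDK, boto3, and this is the short version. The AMI, key pair and security group IDs below are placeholders you would have to create and configure first:

```python
import boto3

# Even "just give me one GPU machine" means knowing your region,
# instance family, machine image (AMI), key pair and security group.
ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI with your software baked in
    InstanceType="p3.2xlarge",         # one of dozens of GPU-backed instance types
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",              # placeholder SSH key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder firewall rules
)
print(response["Instances"][0]["InstanceId"])
```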

The truth is, these incredible cloud installations are for cleverer people than those of us that just like to make pretty pictures. They are more for the sort that like to build neural networks and don’t go outside very much. What our industry needs is some clever company to make a nice shiny front end that allows us to harness that power using the tools we know and love, and just make it all a bit simpler. Enter Athera, from Foundry. That’s exactly what they’ve done.

What is Athera?

Athera is a platform hosted on Google Cloud infrastructure that presents a user with icons for apps such as Nuke and Houdini. Access to each app is via short-term (30-day) rental. When an available app icon is clicked, a cloud computer is commanded into action, pre-installed with the chosen app. From then on, the app is used just as if locally installed. Of course, the app is actually running on a high-performance computer located in a secure and nicely cooled datacenter environment. Provided the user has a vaguely decent Internet connection, they’re good to go, because only the user interface is being transmitted across the network, not the actual raw image data.

Apps available on Athera include Foundry’s products, plus a few others. Nuke is represented in its base form, plus the Nuke X variant, Nuke Studio, and a combination of Nuke X and Cara VR. Also available are the Mari texture-painting suite, the Katana look-development app and the Modo CGI modeling software.

Athera also offers access to non-Foundry products like CGI software Houdini and Blender, as well as the Gaffer management tool.

Nuke

In my first test, I rustled up an instance of Nuke Studio and one of Blender. The first thing I wanted to test was the GPU speed, as this can be somewhat variable for many cloud computer types (usually between zero and not much). I was pleasantly surprised, as the rendering speed was close to that of a local Nvidia GeForce GTX 1080, which is pretty decent. I was also pleased to see that user preferences were maintained between sessions.

One thing that particularly impressed me was how I could call up multiple apps together and Athera would effectively build a network in the background to link them all up. Frames rendered out of Blender were instantly available in the cloud-hosted Nuke Studio, even though it was running on a different machine. This suggests the Athera infrastructure is well thought out because multi-machine, networked pipelines with attached storage are constructed with just a few clicks and without really thinking about it.

Access to the Athera apps is either by web browser or via a local client software called “Orbit.” In web browser mode, each app opens in its own browser tab. With Orbit, each app appears in a dedicated local window. Orbit boasts lower latency and the ability to use local hardware such as multiple monitors. Latency, which would show itself as a frustrating delay between control input and visual feedback, was impressively low, even when using the web browser interface. Generally, it was easy to forget that the app being used was not installed locally.

Getting files in and out was also straightforward. A Dropbox account can be directly linked, although a Google or Amazon S3 storage “bucket” is preferred for speed. There is also a hosted app called “Toolbox,” which is effectively a file browser to allow the management of files and folders.

The Athera platform also contains management and reporting features. A manager can set up projects and users, setting out which apps and projects a user has access to. Quotas can be set, and full reports are given as to who did what, when and with which app.

Athera’s pricing is laid out on their website and it’s interesting to drill into the costs and make comparisons. A user buys access to apps in 30-day blocks. Personally, I would like to see shorter blocks at some point to increase up/down scale flexibility. That said, render-only instances for many of the apps can be accessed on a per-second billing basis. The 30-day block comes with a “fair use” policy of 200 hours. This is a hard limit, which equates to around nine and a half hours per day for five-day weeks (which is technically known in post production as part time).

Figuring Out Cost
Blender is a good place to start analyzing cost because it’s open source (free) software, so the $244 Athera cost to run for 30 days/200 hours must be for hardware only. This equates to $1.22 per hour, which, compared to direct cloud computer usage, is pretty good value for the GPU-backed machine on offer.

Modo

Another way to frame $244 a month: a new computer costing $5,800 depreciates at roughly that monthly rate if written off over two years. That is to say, if a computer of that value is kept for two years before being replaced, it effectively loses roughly $241 per month in value; depreciated over three years, the figure is about $80 per month less. Of course, that’s just the cost of depreciation. Cost of ownership must also include updating, maintaining, powering, cooling, insuring, housing and repairing the machine if (when!) it breaks down. If a cloud computer breaks down, Google has a few thousand waiting in the wings. In general, the base hardware cost seems quite competitive.
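
The comparison is easy to sanity-check with a few lines of arithmetic using the figures above:

```python
athera_blender = 244.0                  # dollars per 30-day Blender block
included_hours = 200.0
print(athera_blender / included_hours)  # 1.22 dollars per hour of GPU machine

workstation = 5800.0                    # local machine purchase price
print(workstation / 24)                 # ~241.67/month written off over two years
print(workstation / 36)                 # ~161.11/month over three years (~$80 less)
```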

Of course, Blender is not really the juicy stuff. Access to a base Nuke, complete with workstation, is $685 per 30 days / 200 hours. Nuke X is $1,025. There are also “power” options for around 20% more, where a significantly more powerful machine is provided. Compared to running a local machine with purchased or rented software, these prices are very interesting. But when the ability to scale up and down with workload is factored in, especially being able to scale down to nothing during quiet times, the case for Athera becomes quite compelling.

Another helpful factor is that a single 30-day access block to a particular app can be shared between multiple users — as long as only one user has control of the app at a time. This is subject to the fair use limitation.

There is an issue if commercial (licensed) plug-ins are needed. For the time being, these can’t be used on Athera due to the obvious licensing issues relating to their installation on a different cloud machine each time. Hopefully, plugin developers will become alive to the possibilities of pay-per-use licensing, as a platform like Athera could be the perfect storefront.

Mari

Security
One of the biggest concerns about using remote computing is that of security. This concern tends to be more perceptual than real. The truth is that a Google datacenter is likely to have significantly more security than an average post company’s machine room. Also, they will be employing the best in the security business. But if material being worked on leaks out into the public, telling a client, “But I just sent it to Google and figured it would be fine,” isn’t going to sound great. Realistically, the most likely concern for security is the sending of data to and from a datacenter. A security breach inside the datacenter is very unlikely. As ever, a post producer has to remain vigilant.

Summing Up
I think Foundry has been very smart and forward thinking to create a platform that is able to support more than just Foundry products in the cloud. It would have been understandable if they just made it a storefront for alternative ways of using a Nuke (etc), but they clearly see a bigger picture. Using a platform like Athera, post infrastructure can be assembled and disassembled on demand to allow post producers to match their overheads to their workload.

Athera enables smart post producers to build a highly scalable post environment with access to a global pool of creative talent who can log in and contribute from anywhere with little more than a modest computer and internet connection.

I hate the term game-changer — it’s another term so abused by know-nothing marketeers who have otherwise run out of ideas — but Athera, or at least what this sort of platform promises to provide, is most certainly a game-changer. Especially if more apps from different manufacturers can be included.


David Cox is a VFX compositor and colorist with 20-plus years of experience. He started his career with MPC and The Mill before forming his own London-based post facility. Cox recently created interactive projects with full body motion sensors and 4D/AR experiences.


Our SIGGRAPH 2018 video coverage

SIGGRAPH is always a great place to wander around and learn about new and future technology. You can see amazing visual effects reels and learn how the work was created from the artists themselves. You can get demos of new products, and you can immerse yourself in a completely digital environment. In short, SIGGRAPH is educational and fun.

If you weren’t able to make it this year, or attended but couldn’t see it all, we would like to invite you to watch our video coverage from the show.

SIGGRAPH 2018


postPerspective Impact Award winners from SIGGRAPH 2018

postPerspective has announced the winners of our Impact Awards from SIGGRAPH 2018 in Vancouver. Seeking to recognize debut products with real-world applications, the postPerspective Impact Awards are voted on by an anonymous judging body made up of respected industry artists and professionals. It’s working pros who are going to be using new tools — so we let them make the call.

The awards honor innovative products and technologies for the visual effects, post production and production industries that will influence the way people work. They celebrate companies that push the boundaries of technology to produce tools that accelerate artistry and actually make users’ working lives easier.

While SIGGRAPH’s focus is on VFX, animation, VR/AR, AI and the like, the types of gear they have on display vary. Some are suited for graphics and animation, while others have uses that slide into post production, which makes these SIGGRAPH Impact Awards doubly interesting.

The winners are as follows:

postPerspective Impact Award — SIGGRAPH 2018 MVP Winner:

They generated a lot of buzz at the show, as well as a lot of votes from our team of judges, so our MVP Impact Award goes to Nvidia for its Quadro RTX raytracing GPU.

postPerspective Impact Awards — SIGGRAPH 2018 Winners:

  • Maxon for its Cinema 4D R20 3D design and animation software.
  • StarVR for its StarVR One headset with integrated eye tracking.

postPerspective Impact Awards — SIGGRAPH 2018 Horizon Winners:

This year we started a new Impact Award category. Our Horizon Award celebrates the next wave of impactful products being previewed at a particular show. At SIGGRAPH, the winners were:

  • Allegorithmic for its Substance Alchemist tool powered by AI.
  • OTOY and Epic Games for their OctaneRender 2019 integration with Unreal Engine 4.

And while these products and companies didn’t win enough votes for an award, our voters believe they do deserve a mention and your attention: Wrnch, Google Lightfields, Microsoft Mixed Reality Capture and Microsoft Cognitive Services integration with PixStor.

 


Artifex provides VFX limb removal for Facebook Watch’s Sacred Lies

Vancouver-based VFX house Artifex Studios created CG amputation effects for the lead character in Blumhouse Productions’ new series for Facebook Watch, Sacred Lies. In the show, the lead character, Minnow Bly (Elena Kampouris), emerges after 12 years in the Kevinian cult missing both of her hands. Artifex was called on to remove the actress’ limbs.

VFX supervisor Rob Geddes led the Artifex team that created the hand/stump transposition, which encompassed 165 shots across the series. This involved detailed paint work to remove the real hands, while Artifex 3D artists simultaneously performed tracking and matchmove in SynthEyes to align the CG stump assets to the actress’ forearm.

This was followed up with some custom texture and lighting work in Autodesk Maya and Chaos V-Ray to dial in the specific degree of scarring or level of healing on the stumps, depending on each scene’s context in the story. While the main focus of Artifex’s work was on hand removal, the team also created a pair of severed hands for the first episode after rubber prosthetics didn’t pass the eye test. VFX work was run through Side Effects Houdini and composited in Foundry’s Nuke.
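
The SynthEyes-to-Maya pipeline is the studio’s own, but the core of any matchmove-driven attachment can be sketched simply: the solve yields a per-frame rigid transform for the forearm, and the stump’s rest-pose vertices are carried along with it. The values below are dummy data for illustration.

```python
import numpy as np

def attach_stump(stump_verts, forearm_matrices):
    """Pin CG stump geometry to a tracked forearm: for each frame the
    matchmove solve provides a 4x4 world transform, and the stump's
    rest-pose vertices are carried along with it."""
    homog = np.hstack([stump_verts, np.ones((len(stump_verts), 1))])
    return [(homog @ m.T)[:, :3] for m in forearm_matrices]

# Dummy data: 8 rest-pose vertices, 24 solved frames of identity transforms.
verts = np.random.rand(8, 3)
frames = [np.eye(4) for _ in range(24)]
animated = attach_stump(verts, frames)
```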

“The biggest hurdle for the team during this assignment was working with the actress’ movements and complex performance demands, especially the high level of interaction with her environment, clothing or hair,” says Adam Stern, founder of Artifex. “In one visceral sequence, Rob and his team created the actual severed hands. These were originally shot practically with prosthetics; however, the consensus was that the practical hands weren’t working. We fully replaced them with CG hands, which allowed us to dial in the level of decomposition, dirt, blood and torn skin around the cuts. We couldn’t be happier with the results.”

Geddes adds, “One interesting thing we discovered when wrangling the stumps is that the logical and accurate placement of the wrist bone didn’t necessarily feel correct when the hands weren’t there. There was quite a bit of experimentation to keep the ‘hand-less’ arms from looking unnaturally long or thin.”

Artifex also created a scene of absolute devastation in a burnt forest for Episode 101, using matte painting and set extensions to depict extensive fire damage that couldn’t safely be achieved on set. Artifex fell back on its experience in environmental VFX creation, using matte painting and projections tied together with ample rotoscope work.

Approximately 20 Artifex artists took part in Sacred Lies across 3D, compositing, matte painting, I/O and production staff.

Watch Artifex founder Adam Stern talk about the show from the floor of SIGGRAPH 2018:

Patrick Ferguson joins MPC LA as VFX supervisor

MPC’s Los Angeles studio has added Patrick Ferguson to its staff as visual effects supervisor. He brings with him experience working in both commercials and feature films.

Ferguson started out in New York and moved to Los Angeles in 2002, and he has since worked at a range of visual effects houses along the West Coast, including The Mission, where he was VFX supervisor, and Method, where he was head of 2D. “No matter where I am in the world or what I’m working on, one thing has remained consistent since I started working in the industry: I still love what I do. I think that’s the most important thing.”

Ferguson has collaborated with directors such as Stacy Wall, Mark Romanek, Melina Matsoukas, Brian Billow and Carl Rinsch, and has worked on campaigns for big global brands, including Nike, Apple, Audi, HP and ESPN.

He has also worked on high-profile films, including Pirates of the Caribbean and Alice in Wonderland, and he was a member of the Academy Award-winning team for The Curious Case of Benjamin Button.

“In this new role at MPC, I hope to bring my varied experience of working on large-scale feature films as well as on commercials that have a much quicker turnaround time,” he says. “It’s all about knowing what the correct tools are for the particular job at hand, as every project is unique.”

For Ferguson, there is no substitute for being on set: “Being on set is vital, as that’s when key relationships are forged between the director, the crew, the agency and the entire team. Those shared experiences go a long way in creating a trust that is carried all the way through to the end of the project and beyond.”