

2 Chainz’s 2 Dolla Bill gets VFX from Timber

Santa Monica’s Timber, known for its VMA-winning work on the Kendrick Lamar music video “Humble,” provided visual effects and post production for the latest music video from 2 Chainz, featuring E-40 and Lil Wayne — “2 Dolla Bill.”

The video begins with a group of people in a living room with the artist singing, “I’m rare” while holding a steak. It transitions to a poker game where the song continues with “I’m rare, like a two dollar bill.” We then see a two-dollar bill with Thomas Jefferson singing the phrase as well. The video takes us back to the living room, the poker game, an operating room, a kitchen and other random locations.

Artists at collaborating company Kevin provided 2D visual effects for the music video, including the scene with the third eye.

According to Timber creative director/partner Kevin Lau, “The main challenge for this project was the schedule. It was a quick turnaround initially, so it was great to be able to work in tandem with offline to get ahead of the schedule. This also allowed us to work closely with the director and implement some of his requests to enhance the video after it was shot.”

Timber got involved early on in the project and was on set while they shot the piece. The studio called on Autodesk Flame for clean-up, compositing and enhancement work, as well as the animation of the talking money.

Lau was happy Timber got the chance to be on set. “It was very useful to have a VFX supervisor on set for this project, given the schedule and scope of work. We were able to flag any concerns/issues right away so they didn’t become bigger problems in post.”

Arcade Edit’s Geoff Hounsell edited the piece. Daniel de Vue from A52 provided the color grade.

 

Marvel Studios’ Victoria Alonso to keynote SIGGRAPH 2019

Marvel Studios executive VP of production Victoria Alonso has been named keynote speaker for SIGGRAPH 2019, which will run from July 28 through August 1 in downtown Los Angeles. Registration is now open. The annual SIGGRAPH conference is a melting pot for researchers, artists and technologists, among other professionals.

“Victoria is the ultimate symbol of where the computer graphics industry is headed and a true visionary for inclusivity,” says SIGGRAPH 2019 conference chair Mikki Rose. “Her outlook reflects the future I envision for computer graphics and for SIGGRAPH. I am thrilled to have her keynote this summer’s conference and cannot wait to hear more of her story.”

One of the few women in Hollywood to hold such a prominent title, Alonso has long been admired for her dedication to the industry, which has earned her multiple awards and honors, including the 2015 New York Women in Film & Television Muse Award for Outstanding Vision and Achievement, the Advanced Imaging Society’s Harold Lloyd Award (its first female recipient) and the 2017 VES Visionary Award (another female first). A native of Buenos Aires, Alonso began her career in visual effects, including a four-year stint at Digital Domain.

Alonso’s film credits include productions such as Ridley Scott’s Kingdom of Heaven, Tim Burton’s Big Fish, Andrew Adamson’s Shrek, and numerous Marvel titles — Iron Man, Iron Man 2, Thor, Captain America: The First Avenger, Iron Man 3, Captain America: The Winter Soldier, Captain America: Civil War, Thor: The Dark World, Avengers: Age of Ultron, Ant-Man, Guardians of the Galaxy, Doctor Strange, Guardians of the Galaxy Vol. 2, Spider-Man: Homecoming, Thor: Ragnarok, Black Panther, Avengers: Infinity War, Ant-Man and the Wasp and, most recently, Captain Marvel.

“I’ve been attending SIGGRAPH since before there was a line at the ladies’ room,” says Alonso. “I’m very much looking forward to having a candid conversation about the state of visual effects, diversity and representation in our industry.”

She adds, “At Marvel Studios, we have always tried to push boundaries with both our storytelling and our visual effects. Bringing our work to SIGGRAPH each year offers us the opportunity to help shape the future of filmmaking.”

The 2019 keynote session will be presented as a fireside chat, allowing attendees the opportunity to hear Alonso discuss her life and career in an intimate setting.


Review: Maxon Cinema 4D Release 20

By Brady Betzel

Last August, Maxon released Cinema 4D Release 20. From the new node-based Material Editor to the all-new console for developing and debugging scripts, Maxon has really upped the ante.

At the recent NAB show, Maxon announced that it has acquired Redshift Rendering Technologies, makers of the Redshift rendering engine. This acquisition will hopefully integrate an industry-standard GPU-based rendering engine into Cinema 4D R20’s workflow and speed up rendering. For now, the same licensing fees apply as before the acquisition: a node-locked license is $500 and a floating license is $600.

Digging In
The first update to Cinema 4D R20 that I wanted to touch on is the new node-based Material Editor. If you are familiar with Blackmagic’s DaVinci Resolve or Foundry’s Nuke, then you have seen how nodes work. I love working with nodes: they let you wire up everything from layered effects to, in Cinema 4D R20’s case, attributes like diffusion and camera distance. There are over 150 nodes inside of the Material Editor to build textures with.

One small change that I noticed inside of the updated Material Editor was the new gradient settings. When you are working with gradient knots, you can now select multiple knots at once, then right-click to duplicate the selected knots, invert them, choose different knot interpolations (including stepped, smooth, cubic, linear and blend) and even distribute the knots to clean up your pattern. A really nice and convenient update to gradient workflows.

In Cinema 4D R20, not only can you add new nodes from the search menu, but you can also click the node dots in the Basic properties window and route nodes through there. When you are happy with your materials made in the node editor, you can save them as assets in the scene file or even compress them in a .zip file to share with others.

In a related update category, Cinema 4D Release 20 has introduced the Uber Material. In simple terms (and I mean real simple), the Uber Material is a node-based material that differs from standard or physical materials in that it can be edited inside of the Attribute Manager or Material Editor while retaining the properties available in the Node Editor.

The Camera Tracking and 2D Camera View have been updated. While the Camera Tracking mode has been improved, the new 2D Camera View mode combines the Film Move mode with the Film Zoom mode, adding the ability to use standard shortcuts to move around a scene instead of messing with the Film Offset or Focal Length in the Camera Object properties dialogue. For someone like me who isn’t a certified pro in Cinema 4D, these little shortcuts really make me feel at home, much more like the apps I’m used to, such as Mocha Pro or After Effects. Maxon has also improved the 2D tracking algorithm for much tighter tracks and added virtual keyframes, which are an extreme help when you don’t have time for minute adjustments.

Volume Modeling
What seems to be one of the largest updates in Cinema 4D R20 is the addition of Volume Modeling with the OpenVDB-based Volume Builder. According to www.openvdb.org, “OpenVDB is an Academy Award-winning C++ library comprising a hierarchical data structure and a suite of tools for the efficient manipulation of sparse, time-varying, volumetric data discretized on three-dimensional grids,” developed by Ken Museth at DreamWorks Animation. It uses 3D pixels called voxels instead of polygons. When using the Volume Builder you can combine multiple polygon and primitive objects using Boolean operations: Union, Subtract or Intersect. Furthermore you can smooth your volume using multiple techniques, including one that made me do some extra Google work: Laplacian Flow.
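Cinema 4D’s Volume Builder sits on top of OpenVDB’s sparse voxel grids, but the Boolean logic itself is easy to picture. Here is a minimal NumPy sketch (a dense-grid toy for illustration, not OpenVDB’s sparse hierarchy or Maxon’s implementation) showing how Union, Subtract and Intersect combine two voxelized primitives:

```python
import numpy as np

# Sample a 64x64x64 grid of points spanning [-1, 1]^3.
n = 64
axis = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")

# Voxelize two primitives as boolean occupancy masks:
# a sphere of radius 0.6 and an axis-aligned box.
sphere = x**2 + y**2 + z**2 <= 0.6**2
box = (np.abs(x) <= 0.5) & (np.abs(y) <= 0.5) & (np.abs(z - 0.3) <= 0.5)

# The three Boolean modes the Volume Builder exposes:
union = sphere | box       # Union: voxel is filled by either shape
subtract = sphere & ~box   # Subtract: carve the box out of the sphere
intersect = sphere & box   # Intersect: only voxels inside both shapes

print(union.sum(), subtract.sum(), intersect.sum())  # filled-voxel counts
```

Smoothing operations like Laplacian Flow then relax the resulting voxel surface, roughly by nudging each surface voxel toward the average of its neighbors.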

Fields
When going down the voxel rabbit hole in Cinema 4D R20, you will run into another new update: Fields. Prior to Cinema 4D R20, we would use Effectors to affect strength values of an object. You would stack and animate multiple effectors to achieve different results. In Cinema 4D R20, under the Falloff tab you will now see a Fields list along with the types of Field Objects to choose from.

Imagine you’ve made a MoGraph object whose opacity you want controlled by a box object moving through it, while a capsule poking through physically modifies it as well. You can combine these different field objects by using compositing functions in the Fields list, as the sketch below illustrates. In addition, you can animate or alter these new fields straight away in the Objects window.
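To make the compositing idea concrete, here is a rough sketch (plain NumPy, not Cinema 4D’s actual Fields API) of two falloff fields, a box and a sphere, whose 0-to-1 strength values are blended the way field layers are in the Fields list:

```python
import numpy as np

# Sample field strengths over a 2D slice of the scene for simplicity.
axis = np.linspace(-2.0, 2.0, 200)
x, y = np.meshgrid(axis, axis, indexing="ij")

def box_field(x, y, half=1.0, falloff=0.5):
    # Strength 1.0 inside the box, fading linearly to 0.0 over `falloff` units.
    d = np.maximum(np.abs(x) - half, np.abs(y) - half)  # distance outside the box
    return np.clip(1.0 - d / falloff, 0.0, 1.0)

def sphere_field(x, y, radius=0.8, falloff=0.5):
    d = np.sqrt(x**2 + y**2) - radius  # signed distance to the sphere surface
    return np.clip(1.0 - d / falloff, 0.0, 1.0)

a = box_field(x, y)
b = sphere_field(x + 1.0, y)  # offset sphere, as if poking through the box

blended_max = np.maximum(a, b)  # "Max" blending: either field drives strength
blended_mul = a * b             # "Multiply": effect only where fields overlap
```

Animating the box’s position then animates the strength values it contributes, which is essentially what moving a Field Object through a MoGraph setup does.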

Summing Up
Cinema 4D Release 20 has some amazing updates that will greatly improve the efficiency and quality of your work. From tracking updates to field updates, there are plenty of exciting tools to dive into. And if you are reading this as an After Effects user who isn’t sure about Cinema 4D, now is the time to dive in. Once you learn the basics, whether from YouTube tutorials or classes at www.cineversity.com, you will immediately see an increase in the quality of your work.

Combining Adobe After Effects, Element 3D and Cinema 4D R20 is the ultimate in 3D motion graphics and 2D compositing — accessible to almost everyone. And I didn’t even touch on the dozens of other updates to Cinema 4D R20, like the multitude of ProRender updates, FBX import/export options, new node materials and CAD import support for Catia, IGES, JT, SolidWorks and STEP formats. Check out Cinema 4D Release 20’s newest features on YouTube and on Maxon’s website.

And, finally, I think it’s safe to assume that Maxon’s acquisition of the Redshift renderer points to a bright future for Cinema 4D users.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.


Quantum offers new F-Series NVMe storage arrays

During the NAB show, Quantum introduced its new F-Series NVMe storage arrays designed for performance, availability and reliability. Using non-volatile memory express (NVMe) Flash drives for ultra-fast reads and writes, the series supports massive parallel processing and is intended for studio editing, rendering and other performance-intensive workloads using large unstructured datasets.

Incorporating the latest Remote Direct Memory Access (RDMA) networking technology, the F-Series provides direct access between workstations and the NVMe storage devices, resulting in predictable and fast network performance. By combining these hardware features with the new Quantum Cloud Storage Platform and the StorNext file system, the F-Series offers end-to-end storage capabilities for post houses, broadcasters and others working in rich media environments, such as visual effects rendering.

The first product in the F-Series is the Quantum F2000, a 2U dual-node server with two hot-swappable compute canisters and up to 24 dual-ported NVMe drives. Each compute canister can access all 24 NVMe drives and includes processing power, memory and connectivity specifically designed for high performance and availability.

The F-Series is based on the Quantum Cloud Storage Platform, a software-defined block storage stack tuned specifically for video and video-like data. The platform eliminates data services unrelated to video while enhancing data protection, offering networking flexibility and providing block interfaces.

According to Quantum, the F-Series is as much as five times faster than traditional Flash storage/networking, delivering extremely low latency and hundreds of thousands of IOPS per chassis. The series allows users to reduce infrastructure costs by moving from Fibre Channel to Ethernet IP-based infrastructures. Additionally, users leveraging a large number of HDDs or SSDs to meet their performance requirements can gain back racks of data center space.

The F-Series is the first product line based on the Quantum Cloud Storage Platform.


NAB 2019: Maxon acquires Redshift Rendering Technologies

Maxon, maker of Cinema 4D, has purchased Redshift Rendering Technologies, developers of the Redshift rendering engine. Redshift is a flexible GPU-accelerated renderer targeting high-end production, with an extensive suite of features that makes rendering complicated 3D projects faster. It is available as a plugin for Maxon’s Cinema 4D and other industry-standard 3D applications.

“Rendering can be the most time-consuming and demanding aspect of 3D content creation,” said David McGavran, CEO of Maxon. “Redshift’s speed and efficiency combined with Cinema 4D’s responsive workflow make it a perfect match for our portfolio.”

“We’ve always admired Maxon and the Cinema 4D community, and are thrilled to be a part of it,” said Nicolas Burtnyk, co-founder/CEO, Redshift. “We are looking forward to working closely with Maxon, collaborating on seamless integration of Redshift into Cinema 4D and continuing to push the boundaries of what’s possible with production-ready GPU rendering.”

Redshift is used by post companies, including Technicolor, Digital Domain, Encore Hollywood and Blizzard. Redshift has been used for VFX and motion graphics on projects such as Black Panther, Aquaman, Captain Marvel, Rampage, American Gods, Gotham, The Expanse and more.


Autodesk’s Flame 2020 features machine learning tools

Autodesk’s new Flame 2020 offers a machine-learning-powered feature set with a host of new capabilities for Flame artists working in VFX, color grading, look development or finishing. The latest update will be showcased at the upcoming NAB Show.

Advancements in computer vision, photogrammetry and machine learning have made it possible to extract motion vectors, Z depth and 3D normals based on software analysis of digital stills or image sequences. The Flame 2020 release adds built-in machine learning analysis algorithms to isolate and modify common objects in moving footage, dramatically accelerating VFX and compositing workflows.

New creative tools include:
· Z-Depth Map Generator — Enables Z-depth map extraction analysis using machine learning for live-action scene depth reclamation. This allows artists doing color grading or look development to quickly analyze a shot and apply effects accurately based on distance from camera.
· Human Face Normal Map Generator — Since all human faces have common recognizable features (relative distance between eyes, nose, location of mouth), machine learning algorithms can be trained to find these patterns. This tool can be used to simplify accurate color adjustment, relighting and digital cosmetic/beauty retouching.
· Refraction — With this feature, a 3D object can now refract, distorting background objects based on its surface material characteristics. To achieve convincing transparency through glass, ice, windshields and more, the index of refraction can be set to an accurate approximation of real-world material light refraction.
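For context (this is standard optics, not anything specific to Flame’s implementation), the index of refraction the Refraction tool exposes feeds Snell’s law, which determines how much a ray bends as it crosses a material boundary:

```latex
n_1 \sin\theta_1 = n_2 \sin\theta_2
% typical real-world indices: air \approx 1.0, water \approx 1.33,
% glass \approx 1.5, diamond \approx 2.42
```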

Productivity updates include:
· Automatic Background Reactor — Immediately after a shot is modified, this mode is triggered, sending jobs to process. Accelerated, automated background rendering allows Flame artists to keep projects moving, using GPU and system capacity to the fullest. This feature is available on Linux only and can function on a single GPU.
· Simpler UX in Core Areas — A new expanded full-width UX layout for MasterGrade, Image surface and several Map User interfaces is now available, allowing for easier discoverability of and accessibility to key tools.
· Manager for Action, Image, Gmask — A simplified list schematic view, Manager makes it easier to add, organize and adjust video layers and objects in the 3D environment.
· Open FX Support — Flame, Flare and Flame Assist version 2020 now include comprehensive support for industry-standard Open FX creative plugins, available as Batch/BFX nodes or on the Flame timeline.
· Cryptomatte Support — Available in Flame and Flare, support for the Cryptomatte open-source advanced rendering technique offers a new way to pack alpha channels for every object in a 3D rendered scene.

Linux customers can now opt for monthly, yearly and three-year single-user licensing options. Customers with an existing Mac-only single-user license can transfer their license to run Flame on Linux.
Flame, Flare, Flame Assist and Lustre 2020 will be available on April 16, 2019 at no additional cost to customers with a current Flame Family 2019 subscription. Pricing details can be found at the Autodesk website.


Adobe’s new Content-Aware fill in AE is magic, plus other CC updates

By Brady Betzel

NAB is just under a week away, and we are here to share some of Adobe’s latest Creative Cloud offerings. There are a few updates worth mentioning, such as a freeform Project panel in Premiere Pro, AI-driven Auto Ducking for ambience in Audition and the addition of a Twitch extension for Character Animator. But, in my opinion, the Adobe After Effects updates are what this year’s release will be remembered for.


Content Aware: Here is the before and after. Our main image is the mask.

There is a new expression editor in After Effects, so us old pseudo-website designers can now feel at home with highlighting, line numbers and more. There are also performance improvements, such as faster project loading times and new deBayering support for Metal on macOS. But the first-prize ribbon goes to Content-Aware Fill for video, powered by Adobe Sensei, the company’s AI technology. It’s one of those voodoo features that will blow you away when you use it. If you have ever used Mocha Pro by Boris FX, then you have used a similar tool known as Object Removal. Essentially, you draw around the object you want to remove, such as a camera shadow or boom mic, hit the magic button and your object is removed, with a new background in its place. This will save users hours of manual work.

Freeform Project panel in Premiere.

Here are some details on other new features:

● Freeform Project panel in Premiere Pro — Arrange assets visually and save layouts for shot selects, production tasks, brainstorming story ideas and assembly edits.
● Rulers and Guides — Work with familiar Adobe design tools inside Premiere Pro, making it easier to align titling, animate effects and ensure consistency across deliverables.
● Punch and Roll in Audition — The new feature provides efficient production workflows in both Waveform and Multitrack for longform recording, including voiceover and audiobook creators.
● Twitch live-streaming triggers with the Character Animator extension — Livestream performances are enhanced as audiences engage with characters in realtime through on-the-fly costume changes, impromptu dance moves and signature gestures and poses — a new way to interact and even monetize, using Bits to trigger actions.
● Auto Ducking for ambient sound in Audition and Premiere Pro — Also powered by Adobe Sensei, Auto Ducking now allows for dynamic adjustments to ambient sounds against spoken dialog. Keyframed adjustments can be manually fine-tuned to retain creative control over a mix.
● Adobe Stock — Now offers 10 million professional-quality, curated, royalty-free HD and 4K video clips and Motion Graphics templates from leading agencies and independent editors to use for editorial content, establishing shots or filling gaps in a project.
● Premiere Rush — Introduced late last year, Premiere Rush offers a mobile-to-desktop workflow integrated with Premiere Pro for on-the-go editing and video assembly. Built-in camera functionality in Premiere Rush helps you take pro-quality video on your mobile devices.

The new features for Adobe Creative Cloud are now available with the latest version of Creative Cloud.


Wonder Park’s whimsical sound

By Jennifer Walden

The imagination of a young girl comes to life in the animated feature Wonder Park. A Paramount Animation and Nickelodeon Movies film, the story follows June (Brianna Denski) and her mother (Jennifer Garner) as they build a pretend amusement park in June’s bedroom. There are rides that defy the laws of physics — like a merry-go-round with flying fish that can leave the carousel and travel all over the park; a Zero-G-Land where there’s no gravity; a waterfall made of firework sparks; a super tube slide made from bendy straws; and other wild creations.

But when her mom gets sick and leaves for treatment, June’s creative spark fizzles out. She disassembles the park and packs it away. Then one day as June heads home through the woods, she stumbles onto a real-life Wonderland that mirrors her make-believe one. Only this Wonderland is falling apart and being consumed by the mysterious Darkness. June and the park’s mascots work together to restore Wonderland by stopping the Darkness.

Even in its more tense moments — like June and her friend Banky (Oev Michael Urbas) riding a homemade rollercoaster cart down their suburban street and narrowly missing an oncoming truck — the sound isn’t intense. The cart doesn’t feel rickety or squeaky, like it’s about to fly apart (even though the brake handle breaks off). There’s a sense of danger that could result in non-serious injury, but never death. And that’s perfect for the target audience of this film — young children. Wonder Park is meant to be sweet and fun, and supervising sound editor John Marquis captures that masterfully.

Marquis and his core team — sound effects editor Diego Perez, sound assistant Emma Present, dialogue/ADR editor Michele Perrone and Foley supervisor Jonathan Klein — handled sound design, sound editorial and pre-mixing at E² Sound on the Warner Bros. lot in Burbank.

Marquis was first introduced to Wonder Park back in 2013, but the team’s real work began in January 2017. The animated sequences steadily poured in for 17 months. “We had a really long time to work the track, to get some of the conceptual sounds nailed down before going into the first preview. We had two previews with temp score and then two more with mockups of composer Steven Price’s score. It was a real luxury to spend that much time massaging and nitpicking the track before getting to the dub stage. This made the final mix fun; we were having fun mixing and not making editorial choices at that point.”

The final mix was done at Technicolor’s Stage 1, with re-recording mixers Anna Behlmer (effects) and Terry Porter (dialogue/music).

Here, Marquis shares insight on how he created the whimsical sound of Wonder Park, from the adorable yet naughty chimpanzombies to the tonally pleasing, rhythmic and resonant bendy-straw slide.

The film’s sound never felt intense even in tense situations. That approach felt perfectly in-tune with the sensibilities of the intended audience. Was that the initial overall goal for this soundtrack?
When something was intense, we didn’t want it to be painful. We were always in search of having a nice round sound that had the power to communicate the energy and intensity we wanted without having the pointy, sharp edges that hurt. This film is geared toward a younger audience and we were supersensitive about that right out of the gate, even without having that direction from anyone outside of ourselves.

I have two kids — one 10 and one five. Often, they will pop by the studio and listen to what we’re doing. I can get a pretty good gauge right off the bat if we’re doing something that is not resonating with them. Then, we can redirect more toward the intended audience. I pretty much previewed every scene for my kids, and they were having a blast. I bounced ideas off of them so the soundtrack evolved easily toward their demographic. They were at the forefront of our thoughts when designing these sequences.

John Marquis recording the bendy straw sound.

There were numerous opportunities to create fun, unique palettes of sound for this park and these rides that stem from this little girl’s imagination. If I’m a little kid and I’m playing with a toy fish and I’m zipping it around the room, what kind of sound am I making? What kind of sounds am I imagining it making?

This film reminded me of being a kid and playing with toys. So, for the merry-go-round sequence with the flying fish, I asked my kids, “What do you think that would sound like?” And they’d make some sound with their mouths and start playing, and I’d just riff off of that.

I loved the sound of the bendy-straw slide — from the sound of it being built, to the characters traveling through it, and even the reverb on their voices while inside of it. How did you create those sounds?
Before that scene came to us, before we talked about it or saw it, I had the perfect sound for it. We had been having a lot of rain, so I needed to get an expandable gutter for my house. It starts at about one foot long but can be pulled out to three feet long if needed. It works exactly like a bendy straw, but it’s huge. So when I saw the scene in the film, I knew I had the exact, perfect sound for it.

We mic’d it with a Sanken CO-100k, inside and out. We pulled the tube apart and closed it, and got this great, ribbed, rippling, zuzzy sound. We also captured impulse responses inside the tube so we could create custom reverbs. It was one of those magical things that I didn’t even have to think about or go hunting for. This one just fell in my lap. It’s a really fun and tonal sound. It’s musical and has a rhythm to it. You can really play with the Doppler effect to create interesting pass-bys for the building sequences.
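Those impulse responses are the raw material for convolution reverb: convolve any dry recording with the captured IR and the tube’s acoustics are stamped onto it. As a minimal sketch of the technique (SciPy here, not the team’s actual tooling, and the file names are placeholders):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Load a dry voice recording and the impulse response captured in the tube.
rate, dry = wavfile.read("dry_voice.wav")                # placeholder file
ir_rate, ir = wavfile.read("tube_impulse_response.wav")  # placeholder file
assert rate == ir_rate, "resample one file so the sample rates match"

# Mix to mono floats for simplicity.
dry = dry.astype(np.float64)
ir = ir.astype(np.float64)
if dry.ndim > 1:
    dry = dry.mean(axis=1)
if ir.ndim > 1:
    ir = ir.mean(axis=1)

# Convolving the dry signal with the IR applies the tube's reverb.
wet = fftconvolve(dry, ir)
wet *= np.max(np.abs(dry)) / np.max(np.abs(wet))  # match the dry peak level

wavfile.write("voice_in_tube.wav", rate, wet.astype(np.float32))
```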

Another fun sequence for sound was inside Zero-G-Land. How did you come up with those sounds?
That’s a huge, open space. Our first instinct was to go with a very reverberant sound to showcase the size of the space and the fact that June is in there alone. But as we discussed it further, we came to the conclusion that since this is a zero-gravity environment there would be no air for the sound waves to travel through. So, we decided to treat it like space. That approach really worked out because in the scene preceding Zero-G-Land, June is walking through a chasm and there are huge echoes. So the contrast between that and the airless Zero-G-Land worked out perfectly.

Inside Zero-G-Land’s tight, quiet environment we have the sound of these giant balls that June is bouncing off of. They look like balloons so we had balloon bounce sounds, but it wasn’t whimsical enough. It was too predictable. This is a land of imagination, so we were looking for another sound to use.

John Marquis with the Wind Wand.

My friend has an instrument called a Wind Wand, which combines the sound of a didgeridoo with a bullroarer. The Wind Wand is about three feet long and has a gigantic rubber band that goes around it. When you swing the instrument around in the air, the rubber band vibrates. It sounds almost like an organic lightsaber. I had been playing around with that for another film and thought the rubbery, resonant quality of its vibration could work for these gigantic ball bounces. So we recorded it and applied mild processing to get some shape and movement. It was just a bit of pitching and Doppler effect; we didn’t have to do much to it because the actual sound itself was so expressive and rich and it just fell into place. Once we heard it in the cut, we knew it was the right sound.

How did you approach the sound of the chimpanzombies? Again, this could have been an intense sound, but it was cute! How did you create their sounds?
The key was to make them sound exciting and mischievous instead of scary. It can’t ever feel like June is going to die. There is danger. There is confusion. But there is never a fear of death.

The chimpanzombies are actually these Wonder Chimp dolls gone crazy. So they were all supposed to have the same voice — this pre-recorded voice that is in every Wonder Chimp doll. So, you see this horde of chimpanzombies coming toward you and you think something really threatening is happening but then you start to hear them and all they are saying is, “Welcome to Wonderland!” or something sweet like that. It’s all in a big cacophony of high-pitched voices, and they have these little squeaky dog-toy feet. So there’s this contrast between what you anticipate will be scary but it turns out these things are super-cute.

The big challenge was that they were all supposed to sound the same, just this one pre-recorded voice that’s in each one of these dolls. I was afraid it was going to sound like an indecipherable wall of noise and a big, looping mess. There’s a software program that I ended up using a lot on this film called Sound Particles. It’s really cool, and I’ve been finding a reason to use it on every movie now. So, I loaded this pre-recorded snippet from the Wonder Chimp doll into Sound Particles and then changed different parameters — I wanted a crowd of 20 dolls that could vary in pitch by 10%, and they’re going to walk by at a medium pace.

Changing the parameters will change the results, and I was able to make a mass of different voices based off of this one, individual audio file. It worked perfectly once I came up with a recipe for it. What would have taken me a day or more — to individually pitch a copy of a file numerous times to create a crowd of unique voices — only took me a few minutes. I just did a bunch of varieties of that, with smaller groups and bigger groups, and I did that with their feet as well. The key was that the chimpanzombies were all one thing, but in the context of music and dialogue, you had to be able to discern the individuality of each little one.
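Sound Particles itself is a closed tool, but the recipe Marquis describes (one source file, a crowd of copies varied in pitch by about 10% and staggered in time) is simple to approximate. A rough NumPy/SciPy sketch under those assumptions (the file names are placeholders):

```python
import numpy as np
from scipy.io import wavfile

rate, voice = wavfile.read("wonder_chimp_line.wav")  # placeholder file
if voice.ndim > 1:
    voice = voice.mean(axis=1)  # mix to mono
voice = voice.astype(np.float64)

rng = np.random.default_rng(seed=7)
num_dolls = 20
# Output buffer long enough for the slowest copy plus a 2-second stagger.
crowd = np.zeros(int(len(voice) / 0.9) + rate * 2 + 1)

for _ in range(num_dolls):
    pitch = rng.uniform(0.9, 1.1)  # vary pitch by +/-10%, as in the article
    # Pitch-shift by resampling: step through the source at `pitch` speed.
    idx = np.arange(0.0, len(voice) - 1, pitch)
    shifted = np.interp(idx, np.arange(len(voice)), voice)
    start = int(rng.integers(0, rate * 2))  # stagger each doll up to 2 seconds
    crowd[start:start + len(shifted)] += shifted

crowd *= np.max(np.abs(voice)) / np.max(np.abs(crowd))  # normalize the mix
wavfile.write("chimpanzombie_horde.wav", rate, crowd.astype(np.float32))
```

Each pass through the loop is one “doll”; varying the pitch and start time is what keeps the horde from collapsing into a phasey loop.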

There’s a fun scene where the chimpanzombies are using little pickaxes and hitting the underside of the glass walkway that June and the Wonderland mascots are traversing. How did you make that?
That was for Fireworks Falls, one of the big scenes that we had waited a long time for. We weren’t really sure how that was going to look — if the waterfall would be more fiery or more sparkly.

The little pickaxes were a blacksmith’s hammer beating an iron bar on an anvil. Those “tink” sounds were pitched up and given a little resonance to lend them a glass feel. The key with that, again, was to try to make it cute. You have these mischievous chimpanzombies all pecking away at the glass. It had to sound like they were being naughty, not malicious.

When the glass shatters and they all fall down, we had these little pinball bell sounds that would pop in from time to time. It kept the scene feeling mildly whimsical as the debris is falling and hitting the patio umbrellas and tables in the background.

Here again, it could have sounded intense as June makes her escape using the patio umbrella, but it didn’t. It sounded fun!
I grew up in the Midwest and every July 4th we would shoot off fireworks on the front lawn and on the sidewalk. I was thinking about the fun fireworks that I remembered, like sparklers, and these whistling spinning fireworks that had a fun acceleration sound. Then there were bottle rockets. When I hear those sounds now I remember the fun time of being a kid on July 4th.

So, for the Fireworks Falls, I wanted to use those sounds as the fun details, the top notes that poke through. There are rocket crackles and whistles that support the low-end, powerful portion of the rapids. As June is escaping, she’s saying, “This is so amazing! This is so cool!” She’s a kid exploring something really amazing and realizing that this is all of the stuff that she was imagining and is now experiencing for real. We didn’t want her to feel scared, but rather to be overtaken by the joy and awesomeness of what she’s experiencing.

The most ominous element in the park is the Darkness. What was your approach to the sound in there?
It needed to be something that was more mysterious than ominous. It’s only scary because of the unknown factor. At first, we played around with storm elements, but that wasn’t right. So I played around with a recording of my son as a baby; he’s cooing. I pitched that sound down a ton, so it has this natural, organic, undulating, human spine to it. I mixed in some dissonant windchimes. I have a nice set of windchimes at home and I arranged them so they wouldn’t hit in a pleasing way. I pitched those way down, and it added a magical/mystical feel to the sound. It’s almost enticing June to come and check it out.

The Darkness is the thing that is eating up June’s creativity and imagination. It’s eating up all of the joy. It’s never entirely clear what it is, though. When June gets inside the Darkness, everything is silent. The things in there get picked up, rearranged and dropped. As with the Zero-G-Land moment, we bring everything to a head. We go from a full-spectrum sound, with the score and June yelling and the sound design, to a quiet moment where we only hear her breathing. From there, it opens up and blossoms with the pulse of her creativity returning and her memories returning. It’s a very subjective moment that’s hard to put into words.

When June whispers into Peanut’s ear, his marker comes alive again. How did you make the sound of Peanut’s marker? And how did you give it movement?
The sound was primarily this ceramic, water-based bird whistle, which gave it a whimsical element. It reminded me of a show I watched when I was little where the host would draw with his marker and it would make a little whistling, musical sound. So anytime the marker was moving, it would make this really fun sound. This marker needed to feel like something you would pick up and wave around. It had to feel like something that would inspire you to draw and create with it.

To get the movement, it was partially performance based and partially done by adding in a Doppler effect. I used variations in the Waves Doppler plug-in. This was another sound that I also used Sound Particles for, but I didn’t use it to generate particles. I used it to generate varied movement for a single source, to give it shape and speed.
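For reference, the pitch behavior a Doppler plug-in emulates is the textbook relationship for a moving sound source and a stationary listener (general acoustics, not anything specific to the Waves implementation):

```latex
f' = f \cdot \frac{c}{c - v_s}
% c \approx 343 m/s is the speed of sound; a source approaching at v_s > 0
% raises the perceived pitch, and a receding source (v_s < 0) lowers it
```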

Did you use Sound Particles on the paper flying sound too? That one also had a lot of movement, with lots of twists and turns.
No, that one was an old-fashioned fader move. What gave that sound its interesting quality — this soft, almost ethereal and inviting feel — was the practical element we used to create it. It was a piece of paper bag that was super-crumpled up, so it felt fluttery and soft. Then, every time it moved, it had a vocal whoosh element that gave it personality. So once we got that practical element nailed down, the key was to accentuate it with a little wispy whoosh to make it feel like the paper was whispering to June, saying, “Come follow me!”

Wonder Park is in theaters now. Go see it!


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.


Behind the Title: Nice Shoes animator Yandong Dino Qiu

This artist/designer has taken to sketching people on the subway to keep his skills fresh and mind relaxed.

NAME: Yandong Dino Qiu

COMPANY: New York’s Nice Shoes

CAN YOU DESCRIBE YOUR COMPANY?
Nice Shoes is a full-service creative studio. We offer design, animation, VFX, editing, color grading and VR/AR, working with agencies, brands and filmmakers to help realize their creative visions.

WHAT’S YOUR JOB TITLE?
Designer/Animator

WHAT DOES THAT ENTAIL?
Helping our clients to explore different looks in the pre-production stage, while aiding them in getting as close as possible to the final look of the spot. There’s a lot of exploration and trial and error as we try to deliver beautiful still frames that inform the look of the moving piece.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Not so much for the title, but for myself, design and animation can be quite broad. People may assume you’re only 2D, but the job also involves a lot of other skill sets, such as 3D lighting and rendering. It’s pretty close to a generalist role that requires you to know nearly every piece of software and to turn things around very quickly.

WHAT TOOLS DO YOU USE?
Photoshop, After Effects, Illustrator, InDesign — the full Adobe Creative Suite — and Maxon Cinema 4D.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Pitch and exploration. At that stage, all possibilities are open. The job is alive… like a baby. You’re seeing it form and helping to make new life. Before this, you have no idea what it’s going to look like. After this phase, everyone has an idea. It’s very challenging, exciting and rewarding.

WHAT’S YOUR LEAST FAVORITE?
Revisions. Especially toward the end of a project. Everything is set up. One little change will affect everything else.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
2:15pm. It’s right after lunch. You know you have the whole afternoon. The sun is bright. The mood is light. It’s not too late for anything.

Sketching on the subway.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I would be a Manga artist.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
La Mer. Frontline. Friskies. I’ve also been drawing during my commute every day, sketching the people I see on the subway. I’m trying to post every week on Instagram. I think it’s important for artists to keep to a routine. I started up with this at the beginning of 2019, and there have been about 50 drawings already. Artists need to keep their pens sharp all the time. By doing these sketches, I’m not only benefiting my drawing skills, but I’m improving my observation of shapes and compositions, which is extremely valuable for work. Being able to break down shapes and components is a key principle of design, and honing that skill helps me in responding to client briefs.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
TED-Ed What Is Time? We had a lot of freedom in figuring out how to animate Einstein’s theories in a fun and engaging way. I worked with our creative director Harry Dorrington to establish the look and then with our CG team to ensure that the feel we established in the style frames was implemented throughout the piece.

TED-Ed What Is Time?

The film was extremely well received. There was a lot of excitement at Nice Shoes when it premiered, and TED-Ed’s audience seemed to respond really warmly as well. It’s rare to see so much positivity in the YouTube comments.

NAME SOME TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My Wacom tablet for drawing and my iPad for reading.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I take time and draw for myself. I love that drawing and creating is such a huge part of my job, but it can get stressful and tiring only creating for others. I’m proud of that work, but when I can draw something that makes me personally happy, any stress or exhaustion from the work day just melts away.

Quick Chat: Lord Danger takes on VFX-heavy Devil May Cry 5 spot

By Randi Altman

Visual effects for spots have become more and more sophisticated, and the recent Capcom trailer promoting the availability of its game Devil May Cry 5 is a perfect example.

The Mike Diva-directed Something Greater starts off like it might be a commercial for an antidepressant, with images of a woman cooking dinner for some guests, people working at a construction site, a bored guy trimming hedges… but suddenly each of our “everyday Joes” turns into a warrior fighting baddies in a video game.

Josh Shadid

The hedge trimmer’s right arm turns into a futuristic weapon, the construction worker evokes a panther to fight a monster, and the lady cooking is seen with guns a blazin’ in both hands. When she runs out of ammo, and to the dismay of her dinner guests, her arms turn into giant saws. 

Lord Danger’s team worked closely with Capcom USA to create this over-the-top experience, and they provided everything from production to VFX to post, including sound and music.

We reached out to Lord Danger founder/EP Josh Shadid to learn more about their collaboration with Capcom, as well as their workflow.

How much direction did you get from Capcom? What was their brief to you?
Capcom’s fight-games director of brand marketing, Charlene Ingram, came to us with a simple request — make a memorable TV commercial that did not use gameplay footage but still illustrated the intensity and epic-ness of the DMC series.

What was it shot on and why?
We shot on both the Arri Alexa Mini and the Phantom Flex 4K using Zeiss Super Speed MKII prime lenses, thanks to our friends at Antagonist Camera, and a Technodolly motion control crane arm. We used the Phantom on the Technodolly to capture the high-speed shots. We used that setup to speed-ramp through character actions while maintaining 4K resolution for post in both the garden and kitchen transformations.

We used the Alexa Mini on the rest of the spot. It’s our preferred camera for most of our shoots because we love the combination of its size and image quality. The Technodolly allowed us to create frame-accurate, repeatable camera movements around the characters so we could seamlessly stitch together multiple shots as one. We also needed to cue the fight choreography to sync up with our camera positions.

You had a VFX supervisor on set. Can you give an example of how that was beneficial?
We did have a VFX supervisor on site for this production. Our usual VFX supervisor is one of our lead animators — having him on site to work with means we’re often starting elements in our post production workflow while we’re still shooting.

Assuming some of it was greenscreen?
We shot elements of the construction site and gardening scene on greenscreen. We used pop-ups to film these elements on set so we could mimic camera moves and lighting perfectly. We also took photogrammetry scans of our characters to help rebuild parts of their bodies during transition moments, and to emulate flying without requiring wire work — which would have been difficult to control outside during windy and rainy weather.

Can you talk about some of the more challenging VFX?
The shot of the gardener jumping into the air while the camera spins around him twice was particularly difficult. The camera starts on a 45-degree frontal, swings behind him and then returns to a 45-degree frontal once he’s in the air.

We had to digitally recreate the entire street, so we used the technocrane at the highest position possible to capture data from a slow pan across the neighborhood in order to rebuild the world. We also had to shoot this scene in several pieces and stitch it together. Since we didn’t use wire work to suspend the character, we also had to recreate the lower half of his body in 3D to achieve a natural-looking jump position. That, combined with the CG weapon elements, made for a challenging composite — but in the end, it turned out really dramatic (and pretty cool).

Were any of the assets provided by Capcom? All created from scratch?
We were provided with the character and weapon models from Capcom — but these were in-game assets, and if you’ve played the game you’ll see that the environments are often dark and moody, so the textures and shaders really didn’t apply to a real-world scenario.

Our character modeling team had to recreate and re-interpret what these characters and weapons would look like in the real world — and they had to nail it — because game culture wouldn’t forgive a poor interpretation of these iconic elements. So far the feedback has been pretty darn good.

In what ways did being the production company and the VFX house on the project help?
The separation of creative from production and post production is an outdated model. The time it takes to bring each team up to speed, to manage the communication of ideas between creatives and to ensure there is a cohesive vision from start to finish, increases both the costs and the time it takes to deliver a final project.

We shot and delivered all of Devil May Cry’s Something Greater in four weeks total, all in-house. We find that working as the production company and VFX house reduces the ratio of managers per creative significantly, putting more of the money into the final product.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

VFX supervisor Christoph Schröer joins NYC’s Artjail

New York City-based VFX house Artjail has added Christoph Schröer as VFX supervisor. Previously a VFX supervisor/senior compositor at The Mill, Schröer brings over a decade of experience to his new role at Artjail. His work has been featured in spots for Mercedes-Benz, Visa, Volkswagen, Samsung, BMW, Hennessy and Cartier.

Combining his computer technology expertise and a passion for graffiti design, Schröer applied his degree in Computer and Media Sciences to begin his career in VFX. He started off working at visual effects studios in Germany and Switzerland where he collaborated with a variety of European auto clients. His credits from his tenure in the European market include lead compositor for multiple Mercedes-Benz spots, two global Volkswagen campaign launches and BMW’s “Rev Up Your Family.”

In 2016, Schröer made the move to New York to take on a role as senior compositor and VFX supervisor at The Mill. There, he teamed with directors such as Tarsem Singh and Derek Cianfrance, and worked on campaigns for Hennessy, Nissan Altima, Samsung, Cartier and Visa.

Autodesk Arnold 5.3 with Arnold GPU in public beta

Autodesk has made its Arnold 5.3 with Arnold GPU available as a public beta. The release provides artists with GPU rendering for a set of features, and the flexibility to choose between rendering on the CPU or GPU without changing renderers.

From look development to lighting, support for GPU acceleration brings greater interactivity and speed to artist workflows, helping reduce iteration and review cycles. Arnold 5.3 also adds new functionality to help maximize performance and give artists more control over their rendering processes, including updates to adaptive sampling, a new version of the Randomwalk SSS mode and improved Operator UX.

Arnold GPU rendering makes it easier for artists and small studios to iterate quickly in a fast working environment and scale rendering capacity to accommodate project demands. From within the standard Arnold interface, users can switch between rendering on the CPU and GPU with a single click. Arnold GPU currently supports features such as arbitrary shading networks, SSS, hair, atmospherics, instancing, and procedurals. Arnold GPU is based on the Nvidia OptiX framework and is optimized to leverage Nvidia RTX technology.

New feature summary:
— Major improvements to quality and performance for adaptive sampling, helping to reduce render times without jeopardizing final image quality
— Improved version of Randomwalk SSS mode for more realistic shading
— Enhanced usability for Standard Surface, giving users more control
— Improvements to the Operator framework
— Better sampling of Skydome lights, reducing direct illumination noise
— Updates to support for MaterialX, allowing users to save a shading network as a MaterialX look

Arnold 5.3 with Arnold GPU in public beta will be available March 20 as a standalone subscription or with a collection of end-to-end creative tools within the Autodesk Media & Entertainment Collection. You can also try Arnold GPU with a free 30-day trial of Arnold. Arnold GPU is available in all supported plug-ins for Autodesk Maya, Autodesk 3ds Max, Houdini, Cinema 4D and Katana.

Sandbox VR partners with Vicon on Amber Sky 2088 experience

VR gaming company Sandbox VR has been working with Vicon motion capture tools to create next-generation immersive experiences. Using Vicon’s motion capture cameras and its location-based VR (LBVR) software Evoke, the Hong Kong-based Sandbox VR is working to transport up to six people at a time into the Amber Sky 2088 experience, which takes place in a future where the fate of humanity hangs in the balance.

Sandbox VR’s adventures resemble movies where the players become the characters. With two proprietary AAA-quality games already in operation across its seven locations, Sandbox VR needed a new motion capture solution for its third title, Amber Sky 2088. In the futuristic game, users step into the role of androids, which grants players abilities far beyond the average human while still scaling the game to their actual movements. To accurately convey that for multiple users in a free-roam environment, precision tracking and flexible scalability were vital. For that, Sandbox VR turned to Vicon.

Set in the twilight of the 21st century, Amber Sky 2088 takes players to a futuristic version of Hong Kong, then through the clouds to the edge of space to fight off an alien invasion. Android abilities allow players to react with incredible strength and move at speeds fast enough to dodge bullets. And while the in-game action is furious, participants in the real world — equipped with VR headsets — freely roam an open environment as Vicon LBVR motion capture cameras track their movement.

Vicon’s motion capture cameras record every player movement, then send the data to its Evoke software, a solution introduced last year as part of its LBVR platform, Origin. Vicon’s solution offers precise tracking, while also animating player motion in realtime, creating a seamless in-game experience. Automatic re-calibration also makes the experience’s operation easier than ever despite its complex nature, and the system’s scalability means fewer cameras can be used to capture more movement, making it cost-effective for large-scale expansion.

Since its founding in 2016, Sandbox VR has been creating interactive experiences by combining motion capture technology with virtual reality. After opening its first location in Hong Kong in 2017, the company has since expanded to seven locations across Asia and North America, with six new sites on the way. Each 30- to 60-minute experience is created in-house by Sandbox VR, and each can accommodate up to six players at a time.

The recent partnership with Vicon is the first step in Sandbox VR’s expansion plans that will see it open over 40 experience rooms across 12 new locations around the world by the end of the year. In considering its plans to build and operate new locations, the VR makers chose to start with five systems from Vicon, in part because of the company’s collaborative nature.

Review: Red Giant’s Trapcode Suite 15

By Brady Betzel

We are now comfortably into 2019 and enjoying the Chinese Year of the Pig — or at least I am! So readers, you might remember that with each new year comes a Red Giant Trapcode Suite update. And Red Giant didn’t disappoint with Trapcode Suite 15.

Every year Red Giant adds more amazing features to its already amazing particle generator and emitter toolset, Trapcode Suite, and this year is no different. Trapcode Suite 15 is keeping tools like 3D Stroke, Shine, Starglow, Sound Keys, Lux, Tao, Echospace and Horizon while significantly updating Particular, Form and Mir.

I won’t be covering each plugin in this review, but you can check out what each individual plugin does on Red Giant’s website.

Particular 4
The bread and butter of the Trapcode Suite has always been Particular, and Version 4 continues to be a powerhouse. The biggest differences between a true 3D app like Maxon’s Cinema 4D or Autodesk Maya and Adobe After Effects (which is pseudo-3D at best) are features like true raytraced rendering and particle systems that interact with each other through fluid dynamics. As I alluded to, After Effects isn’t technically a 3D app, but with plugins like Particular you can create pseudo-3D particle systems that can affect and be affected by different particle emitters in your scenes. Trapcode Suite 15 and, in particular (pun fully intended), Particular 4 have evolved to another level with the latest update, which adds Dynamic Fluids. Dynamic Fluids essentially allows particle systems that have the fluid-physics engine enabled to interact with one another, creating mind-blowing liquid-like simulations inside of After Effects.

What’s even more impressive is that with the Particular Designer and over 335 presets, you don’t need a master’s degree to make impressive motion graphics. While I love to work in After Effects, I don’t always have eight hours to make a fluidly dynamic particle system bounce off 3D text, or have two systems interact with each other for a text reveal. This is where Particular 4 really pays for itself. With a little research and tutorial watching, you will be up and rendering within 30 minutes.

When I was using Particular 4, I simply wanted to recreate the Dynamic Fluid interaction I had seen in one of Red Giant’s promos: basically, two emitters crashing into each other in a viscous fluid, then interacting. While it isn’t necessarily easy, if you have a slightly above-beginner amount of After Effects knowledge, you will be able to do this. Apply the Particular plugin to a new solid object and open up the Particular Designer in Effect Controls. From there you can designate emitter type, motion, particle type, particle shadowing, particle color and dispersion types, as well as add multiple instances of emitters, adjust physics and much more.

The presets for all of these options can be accessed by clicking the “>” symbol in the upper left of the Designer interface. You can access all of the detailed settings and building “Blocks” of each of these categories by clicking the “<” in the same area. With a few hours spent watching tutorials on YouTube, you can be up and running with particle emitters and fluid dynamics. The preset emitters are pretty amazing, including my favorite, the two-emitter fluid dynamic systems that interact with one another.

Form 4
The second plugin in the Trapcode Suite 15 that has been updated is Trapcode Form 4. Form is a plugin that literally creates forms using particles that live forever in a unified 3D space, allowing for interaction. Form 4 adds the updated Designer, which makes particle grids a little more accessible and easier to construct for non-experts. Form 4 also includes the latest Fluid Dynamics update that Particular gained. The Fluid Dynamics engine really adds another level of beauty to Form projects, allowing you to create fluid-like particle grids from the 150 included presets or even your own .obj files.

My favorite settings to tinker with are Swirl and Viscosity. Using both settings in tandem can help create an ooey-gooey liquid particle grid that can interact with other Form systems to build pretty incredible scenes. To test out how .obj models work within Form, I clicked over to www.sketchfab.com and downloaded an .obj 3D model. If you search for downloadable models that don’t cost anything, you can use them in your projects under Creative Commons licensing, as long as you credit the creator. When in doubt, always read the license; in any case, these free models make for great practice assets.

Anyway, Form 4 allows us to import .obj files, including animated .obj sequences, as well as their textures. I found a Day of the Dead-type skull created by JMUHIST, pointed Form to the .obj as well as its included texture, added a couple of After Effects lights and a camera, and I was in business. Form has a great replicator feature (much like Element 3D). There are a ton of options, including fog distance under Visibility, animation properties and even the ability to quickly add a null object linked to your model for quick alignment of other elements in the scene.

Mir 3
Up last is Trapcode Mir 3. Mir 3 is used to create 3D terrains, objects and wireframes in After Effects. In this latest update, Mir has added the ability to import .obj models and textures. Using fractal displacement mapping, you can quickly create some amazing terrains. From mountain-like peaks to alien terrains, Mir is a great supplement when using plugins like Video Copilot Element 3D to add endless tunnels or terrains to your 3D scenes quickly and easily.

And if you don’t have or own Element 3D, you will really enjoy the particle replication system: use one 3D object, duplicate it, then twist, distort and animate multiple instances of it quickly. The best part about all of these Trapcode Suite tools is that they interact with the cameras and lighting native to After Effects, making for a unified animating experience (instead of animating separate camera and lighting rigs like in the old days). Two of my favorite features from the last update are the ability to texture your surfaces with quad- or triangle-based polygons, which can quickly give an 8-bit or low-poly feel, and a second-pass wireframe that adds a grid-like surface to your terrain.
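Fractal displacement of the kind Mir uses is a standard graphics idea: sum several octaves of noise at increasing frequency and decreasing amplitude, then push a flat grid up and down by the result. A small NumPy illustration (simple value noise standing in for Mir’s fractal generator):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def value_noise(size, freq):
    """Bilinearly interpolated random-lattice noise at a given frequency."""
    lattice = rng.random((freq + 1, freq + 1))
    pos = np.linspace(0.0, freq, size, endpoint=False)
    i = pos.astype(int)
    t = pos - i
    return (lattice[i][:, i] * np.outer(1 - t, 1 - t)
            + lattice[i][:, i + 1] * np.outer(1 - t, t)
            + lattice[i + 1][:, i] * np.outer(t, 1 - t)
            + lattice[i + 1][:, i + 1] * np.outer(t, t))

def fractal_heightfield(size=256, octaves=5, lacunarity=2, gain=0.5):
    height = np.zeros((size, size))
    freq, amp = 4, 1.0
    for _ in range(octaves):
        height += amp * value_noise(size, freq)
        freq *= lacunarity  # each octave doubles the spatial detail...
        amp *= gain         # ...and contributes half as much amplitude
    return height

terrain = fractal_heightfield()  # a displacement map for a flat grid
print(terrain.shape, float(terrain.min()), float(terrain.max()))
```

Render such a heightfield as a displaced plane and you get the mountain-like peaks the plugin generates procedurally.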

Summing Up
Red Giant’s Trapcode Suite 15 is amazing. If you have a previous version of the Trapcode Suite, you’re in luck: the upgrade is “only” $199. If you need to purchase the full suite, it will cost you $999. Students get a bit of a break at $499.

If you are on the fence about it, go watch Daniel Hashimoto’s Cheap Tricks: Aquaman Underwater Effects tutorial (Part 1 and Part 2). He explains how you can use all of the Red Giant Trapcode Suite effects with other plugins like Video CoPilot’s Element 3D and Red Giant’s Universe and offers up some pro tips when using www.sketchfab.com to find 3D models.

I think I even saw him using Video CoPilot’s FX Console, which is a free After Effects plugin that makes accessing plugins much faster. You may have seen his work as @ActionMovieKid on Twitter or @TheActionMovieKid on Instagram. He does some amazing VFX with his kids — he’s a must-follow. Red Giant made a power move to get him to make tutorials for them! Anyway, his Aquaman Underwater Effects tutorial takes you step by step through how to use each part of the Trapcode Suite 15 in an amazing way. He makes it look a little too easy, but I guess that is a combination of his VFX skills and the Trapcode Suite toolset.

If you are excited about 3D objects, particle systems and fluid dynamics you must check out Trapcode Suite 15 and its latest updates to Particular, Mir and Form.

After I finished the Trapcode Suite 15 review, Red Giant released the Trapcode Suite 15.1 update. The 15.1 update includes Text and Mask Emitters for Form and Particular 4.1, an updated Designer, Shadowlet particle-type matching, Shadowlet softness and 21 additional presets.

This is a free update that can be downloaded from the Red Giant website.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

 

Behind the Title: Gentleman Scholar MD/EP Jo Arghiris

LA-based Jo Arghiris embraces the creativity of the job and enjoys “pulling treatments together with our directors. It’s always such a fun, collaborative process.” Find out more…

Name: Jo Arghiris

Company: Gentleman Scholar (@gentscholar)

Can You Describe Your Company?
Gentleman Scholar is a creative production studio, drawn together by a love of design and an eagerness to push boundaries. Since launching in Los Angeles in 2010, and expanding to New York in 2016, we have evolved within the disciplines of live-action production, digital exploration, print and VR. At our very core, we are a band of passionate artists and fearless makers.

The biggest thing that struck me when I joined Scholar was everyone’s willingness to roll up their sleeves and give it a go. There are so many creative people working across both our studios, it’s quite amazing what we can achieve when we put our collective minds to it. In fact, it’s really hard to put us in a category or to define what we do on a day-to-day basis. But if I had to sum it up in just one word, our company feels like “home”; there’s no place quite like it.

What’s Your Job Title?
Managing Director/EP Los Angeles

What Does That Entail?
Truth be told, it’s evolving all the time. In its purest form, my job entails having top-line involvement on everything going on in the LA studio, both from operational and new business POVs. I face inwards and outwards. I mentor and I project. I lead and I follow. But the main thing I want to mention is that I couldn’t do my job without all these incredible people by my side. It really does take a village, every single day.

What Would Surprise People the Most About What Falls Under That Title?
Not so much “surprising,” but certainly different from other roles: my job is never done (or at least it shouldn’t be). I never go home with all my to-dos ticked off. The deck is constantly shuffled and re-dealt. This fluidity can be off-putting to some people who like to have a clear idea of what they need to achieve on any given day. But I really like to work that way, as it keeps my mind nimble and fresh.

What’s Your Favorite Part of the Job?
Learning new things and expanding my mind. I like to see our teams push themselves in this way, too. It’s incredibly satisfying watching folks overcome challenges and grow into their roles. Also, I obviously love winning work, especially if it’s an intense pitch process. I’m a creative person and I really enjoy pulling treatments together with our directors. It’s always such a fun, collaborative process.

What’s Your Least Favorite?
Well, I guess the 24/7 availability thing that we’ve all become accustomed to and are all guilty of. It’s so, so important for us to have boundaries. If I’m emailing the team late at night or on the weekend, I will write in the subject line, “For the Morning” or “For Monday.” I sometimes need to get stuff set up in advance, but I absolutely do not expect a response at 10pm on a Sunday night. To do your best work, it’s essential that you have a healthy work/life balance.

What is Your Favorite Time of the Day?
As clichéd as it may sound, I love to get up before anyone else and sit, in silence, with a cup of coffee. I’m a one-a-day kind of girl, so it’s pretty sacred to me. Weekdays or weekends, I have so much going on, I need to set my day up in these few solitary moments. I am not a night person at all and can usually be found fast asleep on the sofa sometime around 9pm each night. Equally favorite is when my kids get up and we do “huggle” time together, before the day takes us away on our separate journeys.

Bleacher Report

Can you Name Some Recent Projects?
Gentleman Scholar worked on a big Acura TLX campaign, which is probably one of my all-time favorites. Other fun projects include Legends Club for Timberland, the Upwork “Hey World!” campaign from Duncan Channon, the Sponsor Reel for the 2018 AICP Show and Bleacher Report’s Sports Alphabet.

If You Didn’t Have This Job, What Would You be Doing Instead?
I love photography, writing and traveling. So if I could do it all again, I’d be some kind of travel writer/photographer combo or a journalist or something. My brother actually does just that, and I’m super-proud of his choices. To stand behind your own creative point of view takes skill and dedication.

How Did You Know This Would Be Your Path?
The road has been long, and it has carried me from London to New York to Los Angeles. I originally started in post production and VFX, where I got a taste for creative problem-solving. The jump from this world to a creative production studio like Scholar was perfectly timed and I relished the learning curve that came with it. I think it’s quite hard to have a defined “path” these days.

My advice to anyone getting into our industry right now would be to understand that knowledge and education are powerful tools, so go out of your way to harness them. And never stand still; always keep pushing yourself.

Name Three Pieces of Technology You Can’t Live Without.
My AirPods — so happy to not have that charging/listening conflict with my iPhone anymore; all the apps that allow me to streamline my life and get shit done any time of day, no matter what, no matter where; I think my electric toothbrush is pretty high up there too. Can I have one more? Not “tech” per se, but my super-cute mini hair straightener, which makes my bangs look on point, even after working out!

What Social Media Channels Do You Follow?
Well, I like Instagram mostly. Do you count Pinterest? I love a Pinterest board. I have many of those. And I read Twitter, but I don’t Tweet too much. To be honest, I’m pretty lame on social media, and all my accounts are private. But I realize they are such important tools in our industry so I use them on an as-needed basis. Also, it’s something I need to consider soon for my kids, who are obsessed with watching random, “how-to” videos online and periodically ask me, “Are you going to put that on YouTube?” So I need to keep on top of it, not just for work, but also for them. It will be their world very soon.

Do You Listen to Music While You Work? Care to Share Your Favorite Music to Work to?
Yes, I have a Sonos set up in my office. I listen to a lot of playlists — found ones and the random ones that your streaming services build for you. Earlier this morning I had the album blkswn by Smino playing. Right now I’m listening to a band called Pronoun. They were on a playlist Nylon Studios released called “All the Brooklyn Bands You Should Be Listening To.”

My drive home is all about the podcast. I’m trying to educate myself more on American history at the moment. I’m also tempted to get into Babbel and learn French. With all the hours I spend in the car, I’m pretty sure I would be fluent in no time!

What Do You Do to De-stress From it All?
So many things! I literally never stop. Hot yoga, spinning, hiking, mountain biking, cooking and thinking of new projects for my house. Road tripping, camping and exploring new places with my family and friends. Taking photographs and doing art projects with my kids. My all-time favorite thing to do is hit the beach for the day, winter and summer. I find it one of the most restorative places on Earth. I’m so happy to call LA my home. It suits me down to the ground!

Autodesk cloud-enabled tools now work with BeBop post platform

Autodesk has enabled use of its software in the cloud — including 3DS Max, Arnold, Flame and Maya — and BeBop Technology will deploy the tools on its cloud-based post platform. The BeBop platform enables processing-heavy post projects, such as visual effects and editing, in the cloud on powerful and highly secure virtualized desktops. Creatives can process, render, manage and deliver media files from anywhere on BeBop using any computer and an Internet connection as modest as 20Mbps.

The ongoing deployment of Autodesk software on the BeBop platform mirrors the ways BeBop and Adobe work closely together to optimize the experience of Adobe Creative Cloud subscribers. Adobe applications have been available natively on BeBop since April 2018.

Autodesk software users will now also gain access to BeBop Rocket Uploader, which enables ingestion of large media files at incredibly high speeds for a predictable monthly fee with no volume limits. Additionally, BeBop Over the Shoulder (OTS) enables secure and affordable remote collaboration, review and approval sessions in real-time. BeBop runs on all of the major public clouds, including Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.

Cinesite recreates Nottingham for Lionsgate’s Robin Hood

The city of Nottingham perpetually exists in two states: the metropolitan center that it is today, and the fictional home of one of the world’s most famous outlaws. So when the filmmakers behind Robin Hood, which is now streaming and on DVD, looked to recreate the fictional Nottingham, they needed to build it from scratch with help from London’s Cinesite Studio. The film stars Taron Egerton, Jamie Foxx, Ben Mendelsohn, Eve Hewson, and Jamie Dornan.

Working closely with Robin Hood’s VFX supervisor Simon Stanley-Clamp and director Otto Bathurst, Cinesite created a handful of settings and backgrounds for the film, starting with a digital model of Nottingham built to scale. Given its modern look and feel, Nottingham of today wouldn’t do, so the team used Dubrovnik, Croatia, as its template. The Croatian city — best known to TV fans around the world as the model for Game of Thrones’ King’s Landing — has become a popular spot for filming historical fiction, thanks to its famed stone walls and medieval structures. That made it an ideal starting point for a film set around the time of the Crusades.

“Robin’s Nottingham is a teeming industrial city dominated by global influences, politics and religion. It’s also full of posh grandeur but populated by soot-choked mines and sprawling slums reflecting the gap between haves and have-nots, and we needed to establish that at a glance for audiences,” says Cinesite’s head of assets, Tim Potter. “With so many buildings making up the city, the Substance Suite allowed us to achieve the many variations and looks that were required for the large city of Nottingham in a very quick and easy manner.”

Using Autodesk Maya for the builds and Pixologic ZBrush for sculpting and displacement, the VFX team then relied on Allegorithmic Substance Designer (Allegorithmic was recently acquired by Adobe) to customize the city, creating detailed materials that would give life and personality to the stone and wood structures. From the slums inspired by Brazilian favelas to the gentry and nobility’s grandiose environments, the texturing and materials helped provide audiences with unspoken clues about the outlaw archer’s world.

Creating these swings from the oppressors to the oppressed was often a matter of dirt, dust and grime, which were added to the RGB channels over the textures to add wear and tear to the city. Once the models and layouts were finalized, Cinesite then added even more intricate details using Substance Painter, giving an already realistic recreation additional touches to reflect the sometimes messy lives of the people that would inhabit a city like Nottingham.
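
The "dirt in the RGB channels" trick is easy to picture: a grayscale wear mask decides where the base texture gets pulled toward a dark grime color. Here's a quick sketch of that layering, my own illustration with stand-in arrays, not Cinesite's actual setup:

```python
import numpy as np

# Stand-in arrays: a mid-gray "stone" texture and a random wear mask.
rng = np.random.default_rng(1)
base = rng.uniform(0.4, 0.8, (512, 512, 3))   # hypothetical base texture
grime = rng.uniform(0.0, 1.0, (512, 512))     # hypothetical wear mask

GRIME_STRENGTH = 0.6
dirt_color = np.array([0.18, 0.15, 0.12])     # dark sooty brown

# Where the mask is strong, pull the texture's RGB toward the dirt color.
w = GRIME_STRENGTH * grime[..., None]
weathered = base * (1 - w) + dirt_color * w
```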

At its peak, Cinesite had around 145 artists working on the project, including around 10 artists focusing on texturing and look development. The team spent six months alone creating the reimagined Nottingham, with another three months spent on additional scenes. Although the city of Dubrovnik informed many of the design choices, one of the pieces that had to be created from scratch was a massive cathedral, a focal point of the story. To fit with the film’s themes, Cinesite took inspiration from several real churches around the world to create something original, with a brutalist feel.

Using models and digital texturing, the team also created Robin’s childhood home of Loxley Manor, which was loosely based on a real structure in Završje, Croatia. There were two versions of the manor: one meant to convey the Loxley family in better times, and another seen after years of neglect and damage. Cinesite also helped to create one of the film’s most integral and complex moments, which saw Robin engage in a wagon chase through Nottingham. The scene was far too dangerous to use real animals in most shots, requiring Cinesite to dip back into its toolbox to create the texturing and look of the horse and its groom, along with the rigging and CFX.

“To create the world that the filmmakers wanted, we started by going through the process of understanding the story. From there we saw what the production had filmed and where the action needed to take place within the city, then we went about creating something unique,” Potter says. “The scale was massive, but the end result is a realistic world that will feel somewhat familiar, and yet still offer plenty of surprises.”

Robin Hood was released on home media on February 19.

Behind the Title: Carousel’s Head of VFX/CD Jeff Spangler

This creative has been an artist for as long as he could remember. “I’ve always loved the process of creation and can’t imagine any career where I’m not making something,” he says.

Name: Jeff Spangler

Company: NYC’s Carousel

Can you describe your company?
Carousel is a “creative collective” that was a response to this rapidly changing industry we all know and love. Our offerings range from agency creative services to editorial, design, animation (including both motion design and CGI), retouching, color correction, compositing, music licensing, content creation, and pretty much everything that falls between.

We have created a flexible workflow that covers everything from concept to execution (and delivery), while also allowing for clients whose needs are less all-encompassing to step on or off at any point in the process. That’s just one of the reasons we called ourselves Carousel — our clients have the freedom to climb on board for as much of the ride as they desire. And with the different disciplines all living under the same roof, we find that a lot of the inefficiencies and miscommunications that can get in the way of achieving the best possible result are eliminated.

What’s your job title?
Head of VFX/Creative Director

What does that entail?
That’s a really good question. There is the industry-standard definition of that title as it applies to most companies. But it’s quite different if you are talking about a collective that combines creative with post production, animation and design. So for me, the dual role of CD and head of VFX works in a couple of ways. Where we have the opportunity to work with agencies, I am able to bring my experience and talents as a VFX lead to bear, communicating with the agency creatives and ensuring that the different Carousel artists involved are all able to collaborate and communicate effectively to get the work done.

Alternatively, when we work direct-to-client, I get involved much earlier in the process, collaborating with the Carousel creative directors to conceptualize and pitch new ideas, design brand elements, visualize concept art, storyboard and write copy, or even work with strategists to help hone the direction and target of a campaign.

That’s the true strength of Carousel — getting creatives from different backgrounds involved early on in the process where their experience and talent can make a much bigger impact in the long run. Most importantly, my role is not about dictating direction as much as it is about guiding and allowing for people’s talents to shine. You have to give artists the room to flourish if you really want to serve your clients and are serious about getting them something more than what they expected.

What would surprise people the most about what falls under that title?
I think that there is this misconception that it’s one creative sitting in a room that comes up with the “Big Idea” and he or she just dictates that idea to everyone. My experience is that any good idea started out as a lot of different ideas that were merged, pruned, refined and polished until they began to resemble something truly great.

Then after 24 hours, you look at that idea again and tear it apart because all of the flaws have started to show and you realize it still needs to be pummeled into shape. That process is generally a collaboration within a group of talented people who all look at the world very differently.

What tools do you use?
Anything that I can get my hands on (and my brain wrapped around). My foundation is as a traditional artist and animator, and I find that those core skills are really the strength behind what I do every day. I started out after college as a broadcast designer and later transitioned to being a Flame artist, spending many years working as a beauty retouch artist and motion designer.

These days, I primarily use Adobe Creative Suite as my role has become more creative in nature. I use Photoshop for digital painting and concept art, Illustrator for design and InDesign for layouts and decks. I also have a lot of experience in After Effects and Autodesk Maya and will use those tools for any animation or CGI that requires me to be hands-on, even if just to communicate the initial concept or design.

What’s your favorite part of the job?
Coming up with new ideas at the very start. At that point, the gloves are off and everything is possible.

What’s your least favorite?
Navigating politics within the industry that can sometimes get in the way of people doing their best work.

What is your favorite time of the day?
I’m definitely more of a night person. But if I had to choose a favorite time of day, it would be early morning — before everything has really started and there’s still a ton of anticipation and potential.

If you didn’t have this job, what would you be doing instead?
Working as a full-time concept artist. Or a logo designer. While I frequently have the opportunity to do both of those things in my role at Carousel, they are, for me, the most rewarding expression of being creative.

A&E’s Scraps

How early on did you know this would be your path?
I’ve been an artist for as long as I can remember and never really had any desire (or ability) to set it aside. I’ve always loved the process of creation and can’t imagine any career where I’m not “making” something.

Can you name some recents projects you have worked on?
We are wrapping up Season 2 of an A&E food show titled Scraps that has allowed us to flex our animation muscles. We’ve also been doing some in-store work with Victoria’s Secret for some of their flagship stores that has been amazing in terms of collaboration and results.

What is the project that you are most proud of?
It’s always hard to pick a favorite and my answer would probably change if you asked me more than once. But I recently had the opportunity to work with an up-and-coming eSports company to develop their logo. Collaborating with their CD, we landed on a design and aesthetic that makes me smile every time I see it out there. The client has taken that initial work and continues to surprise me with the way they use it across print, social media, swag, etc. Seeing their ability to be creative and flexible with what I designed is just validation that I did a good job. That makes me proud.

Name pieces of technology you can’t live without.
My iPad Pro. It’s my portable sketch tablet and presentation device that also makes for a damn good movie player during long commutes.

What do you do to de-stress from it all?
Muay Thai. Don’t get me wrong. I’m no serious martial artist and have never had the time to dedicate myself properly. But working out by punching and kicking a heavy bag can be very cathartic.

Review: Boris FX’s Continuum and Mocha Pro 2019

By Brady Betzel

I realize I might sound like a broken record, but if you are looking for the best plugin to help with object removals or masking, you should seriously consider the Mocha Pro plugin. And if you work inside of Avid Media Composer, you should also seriously consider Boris Continuum and/or Sapphire, which can use the power of Mocha.

As an online editor, I consistently use Continuum along with Mocha for tight blur and mask tracking. If you use After Effects, there is even a whittled-down version of Mocha built in for free. For those pros who don’t want to deal with Mocha inside of an app, it also comes as a standalone software solution where you can copy and paste tracking data between apps or even export the masks, object removals or insertions as self-contained files.

The latest releases of Continuum and Mocha Pro 2019 continue the evolution of Boris FX’s role in post production image restoration, keying and general VFX plugins, at least inside of NLEs like Media Composer and Adobe Premiere.

Mocha Pro

As an online editor I am always calling on Continuum for its great Chroma Key Studio, Flicker Fixer and blurring. Because Mocha is built into Continuum, I am able to quickly track (backwards and forwards) difficult shapes and even erase shapes that the built-in Media Composer tools simply can’t do. But if you are lucky enough to own Mocha Pro, you also get access to some amazing tools that go beyond planar tracking — such as automated object removal, object insertion, stabilizing and much more.

Boris FX’s latest updates to Boris Continuum and Mocha Pro go even further than what I’ve already mentioned and have resulted in a new version naming scheme; this round we are at 2019 (think of it as Version 12). They have also created the new Application Manager, which makes it a little easier to find the latest downloads. You can find them here. This really helps when jumping between machines and you need to quickly activate and deactivate licenses.

Boris Continuum 2019
I often get offline edits with effects from a variety of plugins — lens flares, random edits, light flashes, whip transitions and many more — so I need Continuum to be compatible with offline clients. I also need to use it for image repair and compositing.

In this latest version of Continuum, Boris FX has not only kept plugins like Primatte Studio, but also brought back Particle Illusion and updated Mocha and Title Studio. Overall, Continuum and Mocha Pro 2019 feel a lot snappier when applying and rendering effects, probably because of the overall GPU-acceleration improvements.

Particle Illusion has been brought back from the brink of death in Continuum 2019 as a 64-bit, keyframe-able particle emitter system that can even be tracked and masked with Mocha. The revamp brings an updated interface, realtime GPU-based particle generation, an expanded and improved emitter library (complete with motion-blur-enabled particle systems) and even a standalone app that can design systems to be used in the host app — though you cannot render systems inside of the standalone app.

While Particle Illusion is part of the entire Continuum toolset that works with OFX apps like Blackmagic’s DaVinci Resolve, Media Composer, After Effects and Premiere, it seems to work best in applications like After Effects, which can handle composites simply and naturally. Inside the Particle Illusion interface you can find all of the pre-built emitters. If you only have a handful, make sure you download the additional emitters, which you can find in the Boris FX App Manager.

Particle Illusion: Before and After

I had a hard time seeing my footage in a Media Composer timeline inside of Particle Illusion, but I could still pick my emitter, change specs like life and opacity, exit out and apply to my footage. I used Mocha to track some fire from Particle Illusion to a dumpster I had filmed. Once I dialed in the emitter, I launched Mocha and tracked the dumpster.

The first time I went into Mocha I didn’t see the preset tracks for the emitter or the world in which the emitter lives; the second time I launched Mocha, I saw track points. From there you can track the area where you want your emitter to be placed. Once you are done and happy with your track, jump back to your timeline, where it should be reflected. In Media Composer I noticed that I had to go to the Mocha options and change the option from Mocha Shape to no shape. Essentially, the Mocha shape will act like a matte and cut off anything outside of it.

If you are inside of After Effects, most parameters can now be keyframed and parented (aka pick-whipped) natively in the timeline. The Particle Illusion plugin is a quick, easy and good-looking tool to add sparks, Milky Way-like star trails or even fireworks to any scene. Check out @SurfacedStudio’s tutorial on Particle Illusion to get a good sense of how it works in Adobe Premiere Pro.

Continuum Title Studio
When inside of Media Composer (prior to the latest release, 2018.12), there were very few ways to create titles at higher resolutions than HD (1920×1080) — the NewBlue Titler was the only other option if you wanted to stay within Media Composer.

Title Studio within Media Composer

At first, the Continuum Title Studio interface appeared to be a mildly updated Boris Red interface — and I am allergic to the Boris Red interface. Some of the icons for keyframing and the way properties are adjusted look similar, and that threw me off. I tried really hard to jump into Title Studio and love it, but I never got comfortable with it.

On the flip side, there are hundreds of presets that can help build quick titles that render a lot faster than the NewBlue Titler did. In some of the presets I noticed the text was placed outside of 16×9 title safety, which is odd since that is a long-standing rule in television. In the author’s defense, the titles are within action safety, but still.

If you need a quick way to make 4K titles, Title Studio might be what you want. The updated Title Studio includes realtime playback using the GPU instead of the CPU, new materials, new shaders and external monitoring support using Blackmagic hardware (AJA support will be coming at some point). There are some great presets, including pre-built slates, lower thirds, kinetic text and even progress bars.

If you don’t have Mocha Pro, Continuum can still access and use Mocha to track shapes and masks. Almost every plugin can access Mocha and can track objects quickly and easily.

That brings me to the newly updated Mocha, which has some new features that are extremely helpful, including a Magnetic Spline tool, prebuilt geometric shapes and more.

Mocha Pro 2019
If you loved the previous version of Mocha, you are really going to love Mocha Pro 2019. Not only do you get the Magnetic Spline tool, pre-built geometric shapes, the Essentials interface and high-resolution display support, but Boris FX has rewritten the Remove Module code to use GPU video hardware, which increases render speeds about four to five times. In addition, there is no longer a separate Mocha VR software suite; all of the VR tools are included inside of Mocha Pro 2019.

If you are unfamiliar with what Mocha is, then I have a treat for you. Mocha is a standalone planar tracking app as well as a native plugin that works with Media Composer, Premiere and After Effects, or through OFX in Blackmagic’s Fusion, Foundry’s Nuke, Vegas Pro and Hitfilm.

Mocha tracking

In addition (and unofficially) it will work with Blackmagic DaVinci Resolve by way of importing the Mocha masks through Fusion. While I prefer to use After Effects for my work, importing Mocha masks is relatively painless. You can watch colorist Dan Harvey run through the process of importing Mocha masks to Resolve through Fusion, here.

But really, Mocha is a planar tracker, which means it tracks multiple points in a defined area. It works best on flat surfaces, or at least segmented surfaces: the side of a face, for example, with ear, nose, mouth and forehead tracked separately instead of all at once. From blurs to mattes, Mocha tracks objects like glue and can be a great asset for an online editor or colorist.
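
For the curious, the math underneath planar tracking is a homography: one 3x3 transform that explains how every point on a flat surface moves between frames. Here's a rough sketch of the principle using OpenCV, my own illustration of the concept rather than Mocha's tracker, with a hypothetical function name:

```python
import cv2
import numpy as np

def track_plane(prev_gray, next_gray, region_pts):
    """Sketch of planar tracking: follow feature points that sit on a flat
    region, then fit one homography to their motion. region_pts is an
    (N, 1, 2) float32 array of points inside the planar area."""
    # Optical flow follows each point from one frame to the next.
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, region_pts, None)
    good = status.ravel() == 1
    # One perspective transform for the whole plane; RANSAC discards
    # points that left the plane (occlusions, foreground edges).
    H, _ = cv2.findHomography(region_pts[good], next_pts[good],
                              cv2.RANSAC, 3.0)
    return H  # warp a mask's corners by H to carry the roto shape along
```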

If you have read any of my plugin reviews you probably are sick of me spouting off about Mocha, saying how it is probably the best plugin ever made. But really, it is amazing — especially when incorporated with plugins like Continuum and Sapphire. Also, thanks to the latest Media Composer with Symphony option you can incorporate the new Color Correction shapes with Mocha Pro to increase the effectiveness of your secondary color corrections.

Mocha Pro Remove module

So how fast is Mocha Pro 2019’s Remove Module these days? Well, it used to be a very slow process, taking lots of time to calculate an object’s removal. With the latest Mocha Pro 2019 release, including improved GPU support, the render time has been cut down tremendously. In my estimation, I would say three to four times the speed (and that’s on the safe side). Removal jobs that take under 30 seconds in Mocha Pro 2019 would have taken four to five minutes in previous versions. It’s quite a big improvement in render times.

There are a few changes in the new Mocha Pro, including interface changes and some amazing tool additions. There is a new drop-down tab that offers different workflow views once you are inside of Mocha: Essentials, Classic, Big Picture and Roto. I really wish the Essentials view was out when I first started using Mocha, because it gives you the basic tools you need to get a roto job done and nothing more.

For instance, just giving access to the track motion objects (Translation, Scale, Rotate, Skew and Perspective) with big shiny buttons helps eliminate my need to watch YouTube videos on how to navigate the Mocha interface. However, if like me you are more than just a beginner, the Classic interface is still available and is the one I reach for most often — it’s literally the old interface. Big Picture hides the tools and gives you the most screen real estate for your roto work. My favorite after Classic is Roto. The Roto interface shows just the project window and the classic top toolbar. It’s the best of both worlds.

Mocha Pro 2019 Essentials Interface

Beyond the interface changes are some additional tools that will speed up any roto work. This has been one of the longest-running user requests; I imagine the most requested feature Boris FX gets for Mocha is the addition of basic shapes, such as rectangles and circles. In my work, I am often drawing rectangles around license plates or circles around faces with X-splines, so why not eliminate a few clicks and have that done already? Answering my need, Mocha now has elliptical and rectangular shapes ready to go in both X-splines and B-splines with one click.

I use Continuum and Mocha hand in hand. Inside of Media Composer I will use tools like Gaussian Blur or Remover, which typically need tracking and roto shapes created. Once I apply the Continuum effect, I launch Mocha from the Effect Editor and bam, I am inside Mocha. From here I track the objects I want to affect, as well as any objects I don’t want to affect (think of it like an erase track).

Summing Up
I can save tons of time and also improve the effectiveness of my work exponentially when working in Continuum 2019 and Mocha Pro 2019. It’s amazing how much more intuitive Mocha is to track with instead of the built-in Media Composer and Symphony trackers.

In the end, I can’t say enough great things about Continuum and especially Mocha Pro. Mocha saves me tons of time in my VFX and image restoration work. From removing camera people behind the main cast in the wilderness to blurring faces and license plates, using Mocha in tandem with Continuum is a match made in post production heaven.

Rendering in Continuum and Mocha Pro 2019 is a lot faster than in previous versions, really giving me a leg up on efficiency. Time is money, right?! On top of that, using Mocha Pro’s magic object removal and other modules takes my image restoration work to the next level, separating me from other online editors who use standard paint and tracking tools.

In Continuum, Primatte Studio gives me a leg up on greenscreen keys with its exceptional ability to auto-analyze a scene and perform 80% of the keying work before I dial in the details. Whenever anyone asks me what tools I couldn’t live without, I without a doubt always say Mocha.

If you want a real Mocha Pro education, you need to watch all of Mary Poplin’s tutorials. You can find them on YouTube. Check out this one on how to track and replace a logo using Mocha Pro 2019 in Adobe After Effects. You can also find great videos at Borisfx.com.

Mocha point parameter tracking

I always feel like there are tons of tools inside of the Mocha Pro toolset that go unused simply because I don’t know about them. One I recently learned about in a Surfaced Studio tutorial was the Quick Stabilize function. It essentially stabilizes the video around the object you are tracking allowing you to more easily rotoscope your object with it sitting still instead of moving all over the screen. It’s an amazing feature that I just didn’t know about.
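
Conceptually, Quick Stabilize is an inverse warp: take the motion the tracker found and apply its inverse to each frame so the tracked object holds still while you draw. A tiny sketch of the idea (my own, using OpenCV, not Boris FX code):

```python
import cv2
import numpy as np

def stabilize_frame(frame, H):
    """Sketch of the quick-stabilize idea. H is the homography mapping the
    reference frame's tracked plane into this frame; warping by its inverse
    pins the object in place so roto splines barely need to move."""
    h, w = frame.shape[:2]
    return cv2.warpPerspective(frame, np.linalg.inv(H), (w, h))

# After roto, apply the forward H to the finished shapes so they ride the
# original, unstabilized footage again.
```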

As I was finishing up this review I saw that Boris FX came out with a training series, which I will be checking out. One thing I always wanted was a top-down set of tutorials like the ones on Mocha’s YouTube page but organized and sent along with practical footage to practice with.

You can check out Curious Turtle’s “More Than The Essentials: Mocha in After Effects” on their website where I found more Mocha training. There is even a great search parameter called Getting Started on BorisFX.com. Definitely check them out. You can never learn enough Mocha!


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Behind the Title: Left Field Labs ECD Yann Caloghiris

NAME: Yann Caloghiris

COMPANY: Left Field Labs (@LeftFieldLabs)

CAN YOU DESCRIBE YOUR COMPANY?
Left Field Labs is a Venice, California-based creative agency dedicated to applying creativity to emerging technologies. We create experiences at the intersection of strategy, design and code for our clients, who include Google, Uber, Discovery and Estée Lauder.

But it’s how we go about our business that has shaped who we have become. Over the past 10 years, we have consciously moved away from the traditional agency model and have grown by deepening our expertise, sourcing exceptional talent and, most importantly, fostering a “lab-like” creative culture of collaboration and experimentation.

WHAT’S YOUR JOB TITLE?
Executive Creative Director

WHAT DOES THAT ENTAIL?
My role is to drive the creative vision across our client accounts, as well as our own ventures. In practice, that can mean anything from providing insights for ongoing work to proposing creative strategies to running ideation workshops. Ultimately, it’s whatever it takes to help the team flourish and push the envelope of our creative work.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Probably that I learn more now than I did at the beginning of my career. When I started, I imagined that the executive CD roles were occupied by seasoned industry veterans, who had seen and done it all, and would provide tried and tested direction.

Today, I think that cliché is out of touch with what’s required from agency culture and where the industry is going. Sure, some aspects of the role remain unchanged — such as being a supportive team lead or appreciating the value of great copy — but the pace of change is such that the role often requires both the ability to leverage past experience and accept that sometimes a new paradigm is emerging and assumptions need to be adjusted.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Working with the team, and the excitement that comes from workshopping the big ideas that will anchor the experiences we create.

WHAT’S YOUR LEAST FAVORITE?
The administrative parts of a creative business are not always the most fulfilling. Thankfully, tasks like timesheeting, expense reporting and invoicing are becoming less exhausting thanks to better predictive tools and machine learning.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
The early hours of the morning, usually when inspiration strikes — when we haven’t had to deal with the unexpected day-to-day challenges that come with managing a busy design studio.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I’d probably be somewhere at the cross-section between an artist, like my mum was, and an engineer like my dad. There is nothing more satisfying than to apply art to an engineering challenge or vice versa.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I went to school in France, and there wasn’t much room for anything other than school and homework. When I got my Baccalaureate, I decided that, from that point onward, whatever I did would be fun, deeply engaging and at a place where being creative was an asset.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We recently partnered with ad agency RK Venture to craft a VR experience for the New Mexico Department of Transportation’s ongoing ENDWI campaign, which immerses viewers into a real-life drunk-driving scenario.

ENDWI

To best communicate and tell the human side of this story, we turned to rapid breakthroughs within volumetric capture and 3D scanning. Working with Microsoft’s Mixed Reality Capture Studio, we were able to bring every detail of an actor’s performance to life with volumetric performance capture in a way that previous techniques could not.

Bringing a real actor’s performance into a virtual experience is a game changer because of the emotional connection it creates. For ENDWI, the combination of rich immersion with compelling non-linear storytelling proved to affect the participants at a visceral level — with the goal of changing behavior further down the road.

Throughout this past year, we partnered with the VMware Cloud Marketing Team to create a one-of-a-kind immersive booth experience for VMworld Las Vegas 2018 and Barcelona 2018 called Cloud City. VMware’s cloud offering needed a distinct presence to foster a deeper understanding and greater connectivity between brand, product and customers stepping into the cloud.

Cloud City

Our solution was Cloud City, a destination merging future-forward architecture, light, texture, sound and interactions with VMware Cloud experts to give consumers a window into how the cloud, and more specifically how VMware Cloud, can be an essential solution for them. VMworld is the brand’s premier engagement, where hands-on learning helped showcase its cloud offerings. Cloud City garnered 4,000-plus demos, which led to a 20% lead conversion in 10 days.

Finally, for Google, we designed and built a platform for the hosting of online events anywhere in the world: Google Gather. For its first release, teams across Google, including Android, Cloud and Education, used Google Gather to reach and convert potential customers across the globe. With hundreds of events to date, the platform now reaches enterprise decision-makers at massive scale, spanning far beyond what has been possible with traditional event marketing, management and hosting.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Recently, a friend and I shot and edited a fun video homage to the original technology boom-town: Detroit, Michigan. It features two cultural icons from the region, an original big block ‘60s muscle car and some gritty electro beats. My four-year-old son thinks it’s the coolest thing he’s ever seen. It’s going to be hard for me to top that.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Human flight, the Internet and our baby monitor!

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Instagram, Twitter, Medium and LinkedIn.

CARE TO SHARE YOUR FAVORITE MUSIC TO WORK TO?
Where to start?! Music has always played an important part of my creative process, and the joy I derive from what we do. I have day-long playlists curated around what I’m trying to achieve during that time. Being able to influence how I feel when working on a brief is essential — it helps set me in the right mindset.

Sometimes, it might be film scores when working on visuals, jazz to design a workshop schedule or techno to dial up productivity when doing expenses.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Spend time with my kids. They remind me that there is a simple and unpretentious way to look at life.

Adobe acquires Allegorithmic, makers of Substance

Adobe has acquired Allegorithmic, makers of Substance, the industry standard for 3D texture and material creation in games and post production. By combining Allegorithmic’s Substance 3D design tools with Creative Cloud’s imaging, video and motion graphics tools, Adobe will empower video game creators, VFX artists working in film and television, designers and marketers to deliver the next generation of immersive experiences.

As brands look to compete and differentiate themselves, compelling, interactive experiences enabled by 3D content, VR and AR will become more critical to their future success. 3D content is already transforming traditional workflows into fully immersive and digital ones that save time, reduce cost and open new creative horizons. With the acquisition of Allegorithmic, Adobe has added expanded 3D and immersive workflows to Creative Cloud and provides Adobe’s users a new set of tools for 3D projects.

“We are seeing an increasing appetite from customers to leverage 3D technology across media, entertainment, retail and marketing to design and deliver fully immersive experiences,” says Scott Belsky, chief product officer/executive VP, Creative Cloud, Adobe. “Substance products are a natural complement to existing Creative Cloud apps that are used in the creation of immersive content, including Photoshop, Dimension, After Effects and Project Aero.”

Allegorithmic has users working across the gaming, film and television, automotive, design and advertising industries, including brands like Electronic Arts, Ubisoft, BMW, Ikea, Louis Vuitton and Foster + Partners. Allegorithmic is used on AAA gaming franchises, including Call of Duty, Assassin’s Creed and Forza, and was used in the making of movies, including Blade Runner 2049, Pacific Rim Uprising and Tomb Raider.

Allegorithmic tools are already offered as a subscription service to individuals and enterprise customers, and in the future Adobe will focus on expanding the availability of the Allegorithmic tools via subscription. Later this year, Adobe will announce an update on new offerings that will bring the full power of Allegorithmic technology and Adobe Creative Cloud together.

Efilm’s Natasha Leonnet: Grading Spider-Man: Into the Spider-Verse

By Randi Altman

Sony Pictures’ Spider-Man: Into the Spider-Verse is not your typical Spider-Man film… in so many ways. The most obvious is the movie’s look, which was designed to make the viewer feel they are walking inside a comic book. This tale, which blends CGI with 2D hand-drawn animation and comic book textures, focuses on a Brooklyn teen who is bitten by a radioactive spider on the subway and soon develops special powers.

Natasha Leonnet

When he meets Peter Parker, he realizes he’s not alone in the Spider-Verse. It was co-directed by Peter Ramsey, Robert Persichetti Jr. and Rodney Rothman and produced by Phil Lord and Chris Miller, the pair behind 21 Jump Street and The Lego Movie.

Efilm senior colorist Natasha Leonnet provided the color finish for the film, which was nominated for an Oscar in the Best Animated Feature category. We reached out to find out more.

How early were you brought on the film?
I had worked on Angry Birds with visual effects supervisor Danny Dimian, which is how I was brought onto the film. It was a few months before we started color correction. Also, there was no LUT for the film. They used the ACES workflow, developed by The Academy and Efilm’s VP of technology, Joachim “JZ” Zell.

Can you talk about the kind of look they were after and what it took to achieve that look?
They wanted to achieve a comic book look. You look at the edges of characters or objects in comic books and you actually see aspects of the color printing from the beginning of comic book printing — the CMYK dyes wouldn’t all line up — and it creates a layered look, along with the comic book dots and expression lines on faces, as if you’re drawing a comic book.

For example, if someone gets hurt you put actual slashes on their face. For me it was a huge education about the comic book art form. Justin Thompson, the art director, in particular is so knowledgeable about the history of comic books. I was so inspired I just bought my first comic book. Also, with the overall look, the light is painting color everywhere the way it does in life.
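
That misregistered-print idea is easy to play with digitally: split an image into rough CMYK "plates", nudge each plate a few pixels in its own direction and recombine. Here's a toy sketch of the effect, my own illustration and in no way the film's pipeline:

```python
import numpy as np

def misregister(rgb, shift=3):
    """Toy comic-print look: split float RGB (0..1) into rough CMY plates
    plus a key (K) plate, nudge each color plate a few pixels in its own
    direction, and recombine."""
    c, m, y = 1 - rgb[..., 0], 1 - rgb[..., 1], 1 - rgb[..., 2]
    k = np.minimum(np.minimum(c, m), y)               # shared "black" plate
    plates = [c - k, m - k, y - k]
    offsets = [(0, shift), (shift, 0), (0, -shift)]   # one nudge per plate
    shifted = [np.roll(p, off, axis=(0, 1)) for p, off in zip(plates, offsets)]
    c2, m2, y2 = [p + k for p in shifted]             # re-add the key plate
    return np.clip(np.stack([1 - c2, 1 - m2, 1 - y2], axis=-1), 0, 1)
```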

You worked closely with Justin, VFX supervisor Danny Dimian and art director Dean Gordon. What was that process like?
They were incredible. It was usually a group of us working together during the color sessions — a real exercise in collaboration. They were all so open to each other’s opinions and constantly discussing every change in order to make certain that the change best served the film. There was no idea that was more important than another idea. Everyone listened to each other’s ideas.

Had you worked on an animated film previously? What are the challenges and benefits of working with animation?
I’ve been lucky enough to do all of Blue Sky Studios’ color finishes so far, except for the first Ice Age. One of the special aspects of working on animated films is that you’re often working with people who are fine-art painters. As a result, they bring in a different background and way of analyzing the images. That’s really special. They often focus on the interplay of different hues.

In the case of Spider-Man: Into the Spider-Verse, they also wanted to bring a certain naturalism to the color experience. With this particular film, they made very bold choices in their use of color finishing. They used an aspect of color correctors that shifts all of the hues and colors, something usually reserved for music videos, and they completely embraced it. They were basically using color finishing to augment the story and refine their hues, especially time of day and the progression of day into night. They used it as their extra lighting step.
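
A global hue shift of that kind is simple to sketch: move every pixel's hue around the color wheel by the same amount. Here's a minimal illustration using OpenCV's HSV conversion, a toy of my own rather than the color corrector they used, with a hypothetical function name:

```python
import cv2
import numpy as np

def shift_hue(rgb_u8, degrees):
    """Rotate every hue in an 8-bit RGB image by the same angle: the
    wholesale hue-shift idea described above, in toy form."""
    hsv = cv2.cvtColor(rgb_u8, cv2.COLOR_RGB2HSV)
    # OpenCV stores hue in [0, 180), i.e. degrees / 2; wrap around the wheel.
    hsv[..., 0] = ((hsv[..., 0].astype(int) + int(degrees / 2)) % 180).astype(np.uint8)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)
```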

Can you talk about your typical process? Did that differ because of the animated content?
My process actually does not differ when I’m color finishing animated content. Continuity is always at the forefront, even in animation. I use the color corrector as a creative tool on every project.

How would you describe the look of the film?
The film embodies the vivid and magical colors that I always observed in childhood but never saw reflected on the screen. The film is very color intense. It’s as if you’re stepping inside a comic book illustrator’s mind. It’s a mind-meld with how they’re imagining things.

What system did you use for color and why?
I used Resolve on this project, as it was the system that the clients were most familiar with.

Any favorite parts of the process?
My favorite part is from start to finish. It was all magical on this film.

What was your path to being a colorist?
My parents loved going to the cinema. They didn’t believe in babysitters, so they took me to everything. They were big fans of the French new wave movement and films that offered unconventional ways of depicting the human experience. As a result, I got to see some pretty unusual films. I got to see how passionate my parents were about these films and their stories and unusual way of telling them, and it sparked something in me. I think I can give my parents full credit for my career.

I studied non-narrative experimental filmmaking in college even though ultimately my real passion was narrative film. I started as a runner in the Czech Republic, which is where I’d made my thesis film for my BA degree. From there I worked my way up and met a colorist (Biggi Klier) who really inspired me. I was hooked and lucky enough to study with her and another mentor of mine in Munich, Germany.

How do you prefer a director and DP describe a look?
Every single person I’ve worked with works differently, and that’s what makes it so fun and exciting, but also challenging. Every person communicates about color differently, and our vocabulary for color is so limited; therein lies the challenge.

Where do you find inspiration?
From both the natural world and the world of films. I live in a place that faces east, and I get up every morning to watch the sunrise and the color palette is always different. It’s beautiful and inspiring. The winter palettes in particular are gorgeous, with reds and oranges that don’t exist in summer sunrises.

Autodesk launches Maya 2019 for animation, rendering, more

Autodesk has released the latest version of Maya, its 3D animation, modeling, simulation and rendering software. Maya 2019 features significant updates for speed and interactivity, addressing challenges artists face throughout production: faster animation playback to reduce the need for playblasts, higher-quality 3D previews with Autodesk Arnold updates in viewport 2.0, improved pipeline integration with more flexible development environment support, and performance improvements that most Maya artists will notice in their daily work.

Key new Maya 2019 features include:
• Faster Animation: New cached playback increases animation playback speeds in viewport 2.0, giving animators a more interactive and responsive animating environment to produce better quality animations. It helps reduce the need to produce time-consuming playblasts to evaluate animation work, so animators can work faster.
• Higher Quality Previews Closer to Final Renders: Arnold upgrades improve realtime previews in viewport 2.0, allowing artists to preview higher quality results that are closer to the final Arnold render for better creativity and less wasted time.
• Faster Maya: New performance and stability upgrades help improve daily productivity in a range of areas that most artists will notice in their daily work.
• Refining Animation Data: New filters within the graph editor make it easier to work with motion capture data, including the Butterworth filter and the key reducer to help refine animation curves (see the sketch after this list).
• Rigging Improvements: New updates help make the work of riggers and character TDs easier, including the ability to hide sets from the outliner to streamline scenes, improvements to the bake deformer tool and new methods for saving deformer weights to more easily script rig creation.
• Pipeline Integration Improvements: Development environment updates make it easier for pipeline and tool developers to create, customize and integrate into production pipelines.
• Help for Animators in Training: Sample rigged and animated characters, as well as motion capture samples, make it easier for students to learn and quickly get started animating.
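
For a feel of what a Butterworth pass does to a noisy mocap curve, here is a short SciPy sketch. It illustrates the filtering concept on synthetic data and is not Maya's graph editor code:

```python
import numpy as np
from scipy.signal import butter, filtfilt

# A noisy "mocap" rotation curve sampled at 24fps (synthetic stand-in data).
fps = 24.0
t = np.arange(0, 4, 1 / fps)
curve = np.sin(t * 2.0) + 0.05 * np.random.default_rng(0).standard_normal(t.size)

# 2nd-order Butterworth low-pass at 2Hz. filtfilt runs the filter forward
# and backward, so the smoothed curve stays in phase with the original keys:
# jitter goes away while the broad motion survives.
b, a = butter(2, 2.0, btype="low", fs=fps)
smoothed = filtfilt(b, a, curve)
```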

Maya 2019 is available now as a standalone subscription or with a collection of end-to-end creative tools within the Autodesk Media & Entertainment Collection.

Avengers: Infinity War leads VES Awards with six noms

The Visual Effects Society (VES) has announced the nominees for the 17th Annual VES Awards, which recognize outstanding visual effects artistry and innovation in film, animation, television, commercials and video games as well as the VFX supervisors, VFX producers and hands-on artists who bring this work to life.

Avengers: Infinity War garners the most feature film nominations with six. Incredibles 2 is the top animated film contender with five nominations, and Lost in Space leads the broadcast field with six nominations.

Nominees in 24 categories were selected by VES members via events hosted by 11 of the organization’s Sections, including Australia, the Bay Area, Germany, London, Los Angeles, Montreal, New York, New Zealand, Toronto, Vancouver and Washington.

The VES Awards will be held on February 5th at the Beverly Hilton Hotel. As previously announced, the VES Visionary Award will be presented to writer/director/producer and co-creator of Westworld Jonathan Nolan. The VES Award for Creative Excellence will be given to award-winning creators/executive producers/writers/directors David Benioff and D.B. Weiss of Game of Thrones fame. Actor-comedian-author Patton Oswalt will once again host the VES Awards.

Here are the nominees:

Outstanding Visual Effects in a Photoreal Feature

Avengers: Infinity War

Daniel DeLeeuw

Jen Underdahl

Kelly Port

Matt Aitken

Daniel Sudick

 

Christopher Robin

Chris Lawrence

Steve Gaub

Michael Eames

Glenn Melenhorst

Chris Corbould

 

Ready Player One

Roger Guyett

Jennifer Meislohn

David Shirk

Matthew Butler

Neil Corbould

 

Solo: A Star Wars Story

Rob Bredow

Erin Dusseault

Matt Shumway

Patrick Tubach

Dominic Tuohy

 

Welcome to Marwen

Kevin Baillie

Sandra Scott

Seth Hill

Marc Chu

James Paradis

 

Outstanding Supporting Visual Effects in a Photoreal Feature 

12 Strong

Roger Nall

Robert Weaver

Mike Meinardus

 

Bird Box

Marcus Taormina

David Robinson

Mark Bakowski

Sophie Dawes

Mike Meinardus

 

Bohemian Rhapsody

Paul Norris

Tim Field

May Leung

Andrew Simmonds

 

First Man

Paul Lambert

Kevin Elam

Tristan Myles

Ian Hunter

JD Schwalm

 

Outlaw King

Alex Bicknell

Dan Bethell

Greg O’Connor

Stefano Pepin

 

Outstanding Visual Effects in an Animated Feature

Dr. Seuss’ The Grinch

Pierre Leduc

Janet Healy

Bruno Chauffard

Milo Riccarand

 

Incredibles 2

Brad Bird

John Walker

Rick Sayre

Bill Watral

 

Isle of Dogs

Mark Waring

Jeremy Dawson

Tim Ledbury

Lev Kolobov

 

Ralph Breaks the Internet

Scott Kersavage

Bradford Simonsen

Ernest J. Petti

Cory Loftis

 

Spider-Man: Into the Spider-Verse

Joshua Beveridge

Christian Hejnal

Danny Dimian

Bret St. Clair

 

Outstanding Visual Effects in a Photoreal Episode

Altered Carbon; Out of the Past

Everett Burrell

Tony Meagher

Steve Moncur

Christine Lemon

Joel Whist

 

Krypton; The Phantom Zone

Ian Markiewicz

Jennifer Wessner

Niklas Jacobson

Martin Pelletier

 

Lost in Space; Danger, Will Robinson

Jabbar Raisani

Terron Pratt

Niklas Jacobson

Joao Sita

 

The Terror; Go For Broke

Frank Petzold

Lenka Líkařová

Viktor Muller

Pedro Sabrosa

 

Westworld; The Passenger

Jay Worth

Elizabeth Castro

Bruce Branit

Joe Wehmeyer

Michael Lantieri

 

Outstanding Supporting Visual Effects in a Photoreal Episode

Tom Clancy’s Jack Ryan; Pilot

Erik Henry

Matt Robken

Bobo Skipper

Deak Ferrand

Pau Costa

 

The Alienist; The Boy on the Bridge

Kent Houston

Wendy Garfinkle

Steve Murgatroyd

Drew Jones

Paul Stephenson

 

The Deuce; We’re All Beasts

Jim Rider

Steven Weigle

John Bair

Aaron Raff

 

The First; Near and Far

Karen Goulekas

Eddie Bonin

Roland Langschwert

Bryan Godwin

Matthew James Kutcher

 

The Handmaid’s Tale; June

Brendan Taylor

Stephen Lebed

Winston Lee

Leo Bovell

 

Outstanding Visual Effects in a Realtime Project

Age of Sail

John Kahrs

Kevin Dart

Cassidy Curtis

Theresa Latzko

 

Cycles

Jeff Gipson

Nicholas Russell

Lauren Nicole Brown

Jorge E. Ruiz Cano

 

Dr Grordbort’s Invaders

Greg Broadmore

Mhairead Connor

Steve Lambert

Simon Baker

 

God of War

Maximilian Vaughn Ancar

Corey Teblum

Kevin Huynh

Paolo Surricchio

 

Marvel’s Spider-Man

Grant Hollis

Daniel Wang

Seth Faske

Abdul Bezrati

 

Outstanding Visual Effects in a Commercial 

Beyond Good & Evil 2

Maxime Luere

Leon Berelle

Remi Kozyra

Dominique Boidin

 

John Lewis; The Boy and the Piano

Kamen Markov

Philip Whalley

Anthony Bloor

Andy Steele

 

McDonald’s; #ReindeerReady

Ben Cronin

Josh King

Gez Wright

Suzanne Jandu

 

U.S. Marine Corps; A Nation’s Call

Steve Drew

Nick Fraser

Murray Butler

Greg White

Dave Peterson

 

Volkswagen; Born Confident

Carsten Keller

Anandi Peiris

Dan Sanders

Fabian Frank

 

Outstanding Visual Effects in a Special Venue Project

Beautiful Hunan; Flight of the Phoenix

R. Rajeev

Suhit Saha

Arish Fyzee

Unmesh Nimbalkar

 

Childish Gambino’s Pharos

Keith Miller

Alejandro Crawford

Thelvin Cabezas

Jeremy Thompson

 

DreamWorks Theatre Presents Kung Fu Panda

Marc Scott

Doug Cooper

Michael Losure

Alex Timchenko

 

Osheaga Music and Arts Festival

Andre Montambeault

Marie-Josee Paradis

Alyson Lamontagne

David Bishop Noriega

 

Pearl Quest

Eugénie von Tunzelmann

Liz Oliver

Ian Spendloff

Ross Burgess

 

Outstanding Animated Character in a Photoreal Feature

Avengers: Infinity War; Thanos

Jan Philip Cramer

Darren Hendler

Paul Story

Sidney Kombo-Kintombo

 

Christopher Robin; Tigger

Arslan Elver

Kayn Garcia

Laurent Laban

Mariano Mendiburu

 

Jurassic World: Fallen Kingdom; Indoraptor

Jance Rubinchik

Ted Lister

Yannick Gillain

Keith Ribbons

 

Ready Player One; Art3mis

David Shirk

Brian Cantwell

Jung-Seung Hong

Kim Ooi

 

Outstanding Animated Character in an Animated Feature

Dr. Seuss’ The Grinch; The Grinch

David Galante

Francois Boudaille

Olivier Luffin

Yarrow Cheney

 

Incredibles 2; Helen Parr

Michal Makarewicz

Ben Porter

Edgar Rodriguez

Kevin Singleton

 

Ralph Breaks the Internet; Ralphzilla

Dong Joo Byun

Dave K. Komorowski

Justin Sklar

Le Joyce Tong

 

Spider-Man: Into the Spider-Verse; Miles Morales

Marcos Kang

Chad Belteau

Humberto Rosa

Julie Bernier Gosselin

 

Outstanding Animated Character in an Episode or Realtime Project

Cycles; Rae

Jose Luis Gomez Diaz

Edward Everett Robbins III

Jorge E. Ruiz Cano

Jose Luis -Weecho- Velasquez

 

Lost in Space; Humanoid

Chad Shattuck

Paul Zeke

Julia Flanagan

Andrew McCartney

 

Nightflyers; All That We Have Found; Eris

Peter Giliberti

James Chretien

Ryan Cromie

Cesar Dacol Jr.

 

Spider-Man; Doc Ock

Brian Wyser

Henrique Naspolini

Sophie Brennan

William Salyers

 

Outstanding Animated Character in a Commercial

McDonald’s; Bobbi the Reindeer

Gabriela Ruch Salmeron

Joe Henson

Andrew Butler

Joel Best

 

Overkill’s The Walking Dead; Maya

Jonas Ekman

Goran Milic

Jonas Skoog

Henrik Eklundh

 

Peta; Best Friend; Lucky

Bernd Nalbach

Emanuel Fuchs

Sebastian Plank

Christian Leitner

 

Volkswagen; Born Confident; Bam

David Bryan

Chris Welsby

Fabian Frank

Chloe Dawe

 

Outstanding Created Environment in a Photoreal Feature

Ant-Man and the Wasp; Journey to the Quantum Realm

Florian Witzel

Harsh Mistri

Yuri Serizawa

Can Yuksel

 

Aquaman; Atlantis

Quentin Marmier

Aaron Barr

Jeffrey De Guzman

Ziad Shureih

 

Ready Player One; The Shining, Overlook Hotel

Mert Yamak

Stanley Wong

Joana Garrido

Daniel Gagiu

 

Solo: A Star Wars Story; Vandor Planet

Julian Foddy

Christoph Ammann

Clement Gerard

Pontus Albrecht

 

Outstanding Created Environment in an Animated Feature

Dr. Seuss’ The Grinch; Whoville

Loic Rastout

Ludovic Ramiere

Henri Deruer

Nicolas Brack

 

Incredibles 2; Parr House

Christopher M. Burrows

Philip Metschan

Michael Rutter

Joshua West

 

Ralph Breaks the Internet; Social Media District

Benjamin Min Huang

Jon Kim Krummel II

Gina Warr Lawes

Matthias Lechner

 

Spider-Man: Into the Spider-Verse; Graphic New York City

Terry Park

Bret St. Clair

Kimberly Liptrap

Dave Morehead

 

Outstanding Created Environment in an Episode, Commercial, or Realtime Project

Cycles; The House

Michael R.W. Anderson

Jeff Gipson

Jose Luis Gomez Diaz

Edward Everett Robbins III

 

Lost in Space; Pilot; Impact Area

Philip Engström

Kenny Vähäkari

Jason Martin

Martin Bergquist

 

The Deuce; 42nd St

John Bair

Vance Miller

Jose Marin

Steve Sullivan

 

The Handmaid’s Tale; June; Fenway Park

Patrick Zentis

Kevin McGeagh

Leo Bovell

Zachary Dembinski

 

The Man in the High Castle; Reichsmarschall Ceremony

Casi Blume

Michael Eng

Ben McDougal

Sean Myers

 

Outstanding Virtual Cinematography in a Photoreal Project

Aquaman; Third Act Battle

Claus Pedersen

Mohammad Rastkar

Cedric Lo

Ryan McCoy

 

Echo; Time Displacement

Victor Perez

Tomas Tjernberg

Tomas Wall

Marcus Dineen

 

Jurassic World: Fallen Kingdom; Gyrosphere Escape

Pawl Fulker

Matt Perrin

Oscar Faura

David Vickery

 

Ready Player One; New York Race

Daniele Bigi

Edmund Kolloen

Mathieu Vig

Jean-Baptiste Noyau

 

Welcome to Marwen; Town of Marwen

Kim Miles

Matthew Ward

Ryan Beagan

Marc Chu

 

Outstanding Model in a Photoreal or Animated Project 

Avengers: Infinity War; Nidavellir Forge Megastructure

Chad Roen

Ryan Rogers

Jeff Tetzlaff

Ming Pan

 

Incredibles 2; Underminer Vehicle

Neil Blevins

Philip Metschan

Kevin Singleton

 

Mortal Engines; London

Matthew Sandoval

James Ogle

Nick Keller

Sam Tack

 

Ready Player One; DeLorean DMC-12

Giuseppe Laterza

Kim Lindqvist

Mauro Giacomazzo

William Gallyot

 

Solo: A Star Wars Story; Millennium Falcon

Masa Narita

Steve Walton

David Meny

James Clyne

 

Outstanding Effects Simulations in a Photoreal Feature

Avengers: Infinity War; Titan

Gerardo Aguilera

Ashraf Ghoniem

Vasilis Pazionis

Hartwell Durfor

 

Avengers: Infinity War; Wakanda

Florian Witzel

Adam Lee

Miguel Perez Senent

Francisco Rodriguez

 

Fantastic Beasts: The Crimes of Grindelwald

Dominik Kirouac

Chloe Ostiguy

Christian Gaumond

 

Venom

Aharon Bourland

Jordan Walsh

Aleksandar Chalyovski

Federico Frassinelli

 

Outstanding Effects Simulations in an Animated Feature

Dr. Seuss’ The Grinch; Snow, Clouds and Smoke

Eric Carme

Nicolas Brice

Milo Riccarand

 

Incredibles 2

Paul Kanyuk

Tiffany Erickson Klohn

Vincent Serritella

Matthew Kiyoshi Wong

 

Ralph Breaks the Internet; Virus Infection & Destruction

Paul Carman

Henrik Fält

Christopher Hendryx

David Hutchins

 

Smallfoot

Henrik Karlsson

Theo Vandernoot

Martin Furness

Dmitriy Kolesnik

 

Spider-Man: Into the Spider-Verse

Ian Farnsworth

Pav Grochola

Simon Corbaux

Brian D. Casper

 

Outstanding Effects Simulations in an Episode, Commercial, or Realtime Project

Altered Carbon

Philipp Kratzer

Daniel Fernandez

Xavier Lestourneaud

Andrea Rosa

 

Lost in Space; Jupiter is Falling

Denys Shchukin

Heribert Raab

Michael Billette

Jaclyn Stauber

 

Lost in Space; The Get Away

Juri Bryan

Will Elsdale

Hugo Medda

Maxime Marline

 

The Man in the High Castle; Statue of Liberty Destruction

Saber Jlassi

Igor Zanic

Nick Chamberlain

Chris Parks

 

Outstanding Compositing in a Photoreal Feature

Avengers: Infinity War; Titan

Sabine Laimer

Tim Walker

Tobias Wiesner

Massimo Pasquetti

 

First Man

Joel Delle-Vergin

Peter Farkas

Miles Lauridsen

Francesco Dell’Anna

 

Jurassic World: Fallen Kingdom

John Galloway

Enrik Pavdeja

David Nolan

Juan Espigares Enriquez

 

Welcome to Marwen

Woei Lee

Saul Galbiati

Max Besner

Thai-Son Doan

 

Outstanding Compositing in a Photoreal Episode

Altered Carbon

Jean-François Leroux

Reece Sanders

Stephen Bennett

Laraib Atta

 

The Handmaid’s Tale; June

Winston Lee

Gwen Zhang

Xi Luo

Kevin Quatman

 

Lost in Space; Impact; Crash Site Rescue

David Wahlberg

Douglas Roshamn

Sofie Ljunggren

Fredrik Lönn

 

Silicon Valley; Artificial Emotional Intelligence; Fiona

Tim Carras

Michael Eng

Shiying Li

Bill Parker

 

Outstanding Compositing in a Photoreal Commercial

Apple; Unlock

Morten Vinther

Michael Gregory

Gustavo Bellon

Rodrigo Jimenez

 

Apple; Welcome Home

Michael Ralla

Steve Drew

Alejandro Villabon

Peter Timberlake

 

Genesis; G90 Facelift

Neil Alford

Jose Caballero

Joseph Dymond

Greg Spencer

 

John Lewis; The Boy and the Piano

Kamen Markov

Pratyush Paruchuri

Kalle Kohlstrom

Daniel Benjamin

 

Outstanding Visual Effects in a Student Project

Chocolate Man

David Bellenbaum

Aleksandra Todorovic

Jörg Schmidt

Martin Boué

 

Proxima-b

Denis Krez

Tina Vest

Elias Kremer

Lukas Löffler

 

Ratatoskr

Meike Müller

Lena-Carolin Lohfink

Anno Schachner

Lisa Schachner

 

Terra Nova

Thomas Battistetti

Mélanie Geley

Mickael Le Mezo

Guillaume Hoarau

VFX studio Electric Theatre Collective adds three to London team

London visual effects studio Electric Theatre Collective has added three to its production team: Elle Lockhart, Polly Durrance and Antonia Vlasto.

Lockhart brings with her extensive CG experience, joining from Touch Surgery where she ran the Johnson & Johnson account. Prior to that she worked at Analog as a VFX producer where she delivered three global campaigns for Nike. At Electric, she will serve as producer on Martini and Toyota.

Vlasto joins Electric to work with clients such as Mercedes, Tourism Ireland and Tui. She arrives from 750MPH where, over a four-year period, she served as producer on Nike, Great Western Railway, VW and Amazon, to name but a few.

At Electric, Polly Durrance will serve as producer on H&M, TK Maxx and Carphone Warehouse. She joins from Unit, where she helped launch their in-house Design Collective and worked with clients such as Lush, Pepsi and Thatchers Cider. Prior to Unit, she was at Big Buoy, where she produced work for Jaguar Land Rover, giffgaff and Red Bull.

Recent projects at the studio, which also has an office in Santa Monica, California, include Tourism Ireland Capture Your Heart and Honda Palindrome.

Main Image: (L-R) Elle Lockhart, Antonia Vlasto and Polly Durrance.

Behind the Title: FuseFX VFX supervisor Marshall Krasser

Over the years, this visual effects veteran has worked with both George Lucas and Steven Spielberg, whose films helped inspire his career path.

NAME: Marshall Krasser

COMPANY: FuseFX 

CAN YOU DESCRIBE YOUR COMPANY?
FuseFX offers visual effects services for episodic television, feature films, commercials and VR productions. Founded in 2006, the company employs over 300 people across three studio locations in LA, NYC and Vancouver.

WHAT’S YOUR JOB TITLE?
Visual Effects Supervisor

WHAT DOES THAT ENTAIL?
In general, a VFX supervisor is responsible for leading the creative team that brings the director’s vision to life. The role does vary from show to show depending on whether or not there is an on-set or studio-side VFX supervisor.

Here is a list of responsibilities across the board:
– Read and flag the required VFX shots in the script.
– Work with the producer and team to bid the VFX work.
– Attend the creative meetings and location scouts.
– Work with the studio creative team to determine what they want and what we need to achieve it.
– Be the on-set presence for VFX work — making sure the required data and information we need is shot, gathered and catalogued.
– Work with our in-house team to start developing assets and any pre-production concept art that will be needed.
– Once the VFX work is in post production, the VFX supervisor guides the team of in-house artists and technicians through the shot creation/completion phase, while working with the producer to keep the show within the budget’s constraints.
– Keep the client happy!

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
That the job is much more than pointing at the computer screen and making pretty images. Team management is critical. Since you are working with very talented and creative people, it takes a special skill set and understanding. Having worked my way up through the VFX ranks helps me understand their mindset, since I have been in their shoes.

HOW LONG HAVE YOU BEEN WORKING IN VFX?
My first job was creating computer graphic images for speaker support presentations on a Genigraphics workstation in 1984. I then transitioned into feature film in 1994.

HOW HAS THE VFX INDUSTRY CHANGED IN THE TIME YOU’VE BEEN WORKING? WHAT’S BEEN GOOD, WHAT’S BEEN BAD?
It’s changed a lot. In the early days at ILM, we were breaking ground by being asked to create imagery that had never been seen before. This involved creating new tools and approaches that had not been previously possible.

Today, VFX has less of the “man behind the curtain” mystique and has become more mainstream and familiar to most. The tools and computer power have evolved so there is less of the “heavy lifting” that was required in the past. This is all good, but the “bad” part is the fact that “tricking” people’s eyes is more difficult these days.

DID A PARTICULAR FILM INSPIRE YOU ALONG THIS PATH IN ENTERTAINMENT?
A couple really focused my attention toward VFX. There is a whole generation that was enthralled with the first Star Wars movie. I will never forget the feeling I had upon first viewing it — it was magical.

The other was E.T., since it was more grounded on Earth and more plausible. I was blessed to be able to work directly with both George Lucas and Steven Spielberg [and the artisans who created the VFX for these films] during the course of my career.

DID YOU GO TO FILM SCHOOL?
I did not. At the time, there was virtually no opportunity to attend a film school, or any school, that taught VFX. I took the route that made the most sense for me at the time — art major. I am a classically trained artist who focused on graphic design and illustration, but I also took computer programming.

On a typical Saturday, I would spend the morning in the computer lab programming and the afternoon on the potter’s wheel throwing pots. Always found that ironic – primitive to modern in the same day!

WHAT’S YOUR FAVORITE PART OF THE JOB?
Working with the team and bringing the creative to life.

WHAT’S YOUR LEAST FAVORITE?
Numbers. No one told me there would be math! (Re: bidding.)

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Maybe a fishing or outdoor adventure guide. Something far away from computers and an office.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
– The Vice movie
– The Waco miniseries
– The Life Sentence TV series
– The Needle in a Timestack film
– The 100 TV series

WHAT IS THE PROJECT/S THAT YOU ARE MOST PROUD OF?
A few stand out, in no particular order. Pearl Harbor, Harry Potter, Galaxy Quest, Titanic, War of the Worlds and the last Indiana Jones movie.

WHAT TOOLS DO YOU USE DAY TO DAY?
I would have to say Nuke. I use it for shot and concept work when needed.

WHERE DO YOU FIND INSPIRATION NOW?
Everything around me. I am heavily into photography these days, and am always looking at putting a new spin on ordinary things and capturing the unique.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Head into the great British Columbian outdoors for camping and other outdoor activities.

Asahi beer spot gets the VFX treatment

A collaboration between The Monkeys Melbourne, In The Thicket and Alt, a newly released Asahi campaign takes viewers on a journey through landscapes built around surreal Japanese iconography. Watch Asahi Super Dry — Enter Asahi here.

From script to shoot — a huge operation that took place at Sydney’s Fox Studios — director Marco Prestini and his executive producer Genevieve Triquet (from production house In The Thicket) brought on the VFX team at Alt to help realize the creative vision.

The VFX team at Alt (which has offices in Sydney, Melbourne and Los Angeles) worked with Prestini to help design and build the complex “one shot” look, with everything from robotic geishas to a gigantic CG squid in the mix, alongside a seamless blend of CG set extensions and beautifully shot live-action plates.

“VFX supervisor Dave Edwards and the team at Alt, together with my EP Genevieve, have been there since the very beginning, and their creative input and expertise were key in every step of the way,” explains Prestini. “Everything we did on set was the result of weeks of endless back and forth on technical previz, a process that required pretty much everyone’s input on a daily basis and that was incredibly inspiring for me to be part of.”

Dave Edwards, VFX supervisor at Alt, shares: “Production designer Michael Iacono designed sets in 3D, with five huge sets built for the shoot. The team then worked out camera speeds for timings based on these five sets and seven plates. DP Stefan Duscio would suggest rigs and mounts, which our team was then able to test in previs to see if they would work with the set. During previs, we worked out that we couldn’t get the resolution and the required frame rate to shoot the high-frame-rate samurais, so we had to use the Alexa LF. Of course, that also helped Marco, who wanted minimal lens distortion, as it allowed a wide field of view without the distortion of normal anamorphic lenses.”

One complex scene involves a character battling a gigantic underwater squid, which was done via a process known as “dry for wet” — a film technique in which smoke, colored filters and/or lighting effects are used to simulate a character being underwater while filming on a dry stage. The team at Alt did a rough animation of the squid to help drive the actions of the talent and the stunt team on the day, before spending the final weeks perfecting the look of the photoreal monster.

In terms of tools, for concept design/matte painting Alt used Adobe Photoshop while previs/modeling/texturing/animation was done in Autodesk Maya. All of the effects/lighting/look development was via Side Effects Houdini; the compositing pipeline was built around Foundry Nuke; final online was completed in Autodesk Flame; and for graphics, they used Adobe After Effects.
The final edit was done by The Butchery.

Here is the VFX breakdown:

Enter Asahi – VFX Breakdown from altvfx on Vimeo.

Full-service creative agency Carousel opens in NYC

Carousel, a new creative agency helmed by Pete Kasko and Bernadette Quinn, has opened its doors in New York City. Billing itself as “a collaborative collective of creative talent,” Carousel is positioned to handle projects from television series to ad campaigns for brands, media companies and advertising agencies.

Clients such as PepsiCo’s Pepsi, Quaker and Lays brands; Victoria’s Secret; Interscope Records; A&E Network and The Skimm have all worked with the company.

Designed to provide full 360 capabilities, Carousel allows its brand partners to partake of all its services or pick and choose specific offerings including strategy, creative development, brand development, production, editorial, VFX/GFX, color, music and mix. Along with its client relationships, Carousel has also been the post production partner for agencies such as McGarryBowen, McCann, Publicis and Virtue.

“The industry is shifting in how the work is getting done. Everyone has to be faster and more adaptable to change without sacrificing the things that matter,” says Quinn. “Our goal is to combine brilliant, high-caliber people, seasoned in all aspects of the business, under one roof together with a shared vision of how to create better content in a more efficient way.”

Managing director Dee Tagert comments, “The name Carousel describes having a full set of capabilities from ideation to delivery so that agencies or brands can jump on at any point in their process. By having a small but complete agency team that can manage and execute everything from strategy, creative development and brand development to production and post, we can prove more effective and efficient than a traditional agency model.”

Danielle Russo, Dee Tagert, AnaLiza Alba Leen

AnaLiza Alba Leen comes on board Carousel as creative director with 15 years of global agency experience, and executive producer Danielle Russo brings 12 years of agency experience.
Tagert adds, “The industry has been drastically changing over the last few years. As clients’ hunger for content is driving everything at a much faster pace, it was completely logical to us to create a fully integrative company to be able to respond to our clients in a highly productive, successful manner.”

Carousel is currently working on several upcoming projects for clients including Victoria’s Secret, DNTL, Subway, US Army, Tazo Tea and Range Rover.

Main Image: Bernadette Quinn and Pete Kasko

Behind the Title: Aardman director/designer Gavin Strange

NAME: Gavin Strange

COMPANY: Bristol, England-based Aardman. They also have an office in NYC under the banner Aardman Nathan Love.

CAN YOU DESCRIBE HOW YOUR CAREER AT AARDMAN BEGAN?
I can indeed! I started 10 years ago as a freelancer, joining the fledgling Interactive department (or Aardman Online as it was known back then). They needed a digital designer for a six-month project for the UK’s Channel 4.

I was a freelancer in Bristol at the time and I made it my business to be quite vocal on all the online platforms, always updating those platforms and my own website with my latest work — whether that be client work or self-initiated projects. Luckily for me, the creative director of Aardman Online, Dan Efergan, saw my work when he was searching for a designer and got in touch (it was the most exciting email ever, with the subject “Hello from Aardman!”).

The short version of this story is that I got Dan’s email, popped in for a cup of tea and a chat, and 10 years later I’m still here! Ha!

The slightly longer but still truncated version is that after the six-month freelance project was done, the role of senior designer for the online team became open and I gave up the freelance life and, very excitedly, joined the team as an official Aardmanite!

Thing is, I was never shy about sharing with my new colleagues the other work I did. My role in the beginning was primarily digital/graphic design, but in my own time, under the banner of JamFactory (my own artist alter-ego name), I put out all sorts of work that was purely passion projects: films, characters, toys, clothing, art.

Gavin Strange directed this Christmas spot for the luxury brand Fortnum & Mason.

Filmmaking was a huge passion of mine and even at the earliest stages in my career when I first started out (I didn’t go to university so I got my first role as a junior designer when I was 17) I’d always be blending graphic design and film together.

Over those 10 years at Aardman I continued to make films of all kinds and share them with my colleagues. Because of that, more opportunities arose to develop my film work within my existing design role. I had the unique advantage of having a lot of brilliant mentors who guided me and helped me with my moving image projects.

Those opportunities continued to grow and happen more frequently. I was doing more and more directing here, finally becoming officially represented by Aardman and added to their roster of directors. It’s a dream come true for me, because, not only do I get to work at the place I’ve admired growing up, but I’ve been mentored and shaped by the very individuals who make this place so special — that’s a real privilege.

What I really love is that my role is so varied — I’m both a director and a senior designer. I float between projects, and I love that variety. Sometimes I’m directing a commercial, sometimes I’m illustrating icons, other times I’m animating motion graphics. To me though, I don’t see a difference — it’s all creating something engaging, beautiful and entertaining — whatever the final format or medium!

So that’s my Aardman story. Ten years in, and I just feel like I’m getting started. I love this place.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE OF DIRECTOR?
Hmm, it’s tricky, as I actually think that most people’s perception of being a director is true: it’s that person’s responsibility to bring the creative vision to life.

Maybe what people don’t know is how flexible the role is, depending on the project. I love smaller projects where I get to board, design and animate, but then I love larger jobs with a whole crew of people. It’s always hands-on, but in many different ways.

Perhaps what would surprise a lot of people is that it’s every director’s responsibility to clean the toilets at the end of the day. That’s what Aardman has always told me and, of course, I honor that tradition. I mean, I haven’t actually ever seen anyone else do it, but that’s because everyone else just gets on with it quietly, right? Right!?

WHAT’S YOUR FAVORITE PART OF THE JOB?
Oh man, can I say everything!? I really, really enjoy the job as a whole — having that creative vision, working with yourself, your colleagues and your clients to bring it to life. Adapting and adjusting to changes and ensuring something great pops out the other end.

I really, genuinely, get a thrill seeing something on screen. I love concentrating on every single frame — it’s a win-win situation. You get to make a lovely image each frame, but when you stitch them together and play them really fast one after another, then you get a lovely movie — how great is that?

In short, I really love the sum total of the job. All those different exciting elements that all come together for the finished piece.

WHAT’S YOUR LEAST FAVORITE?
I pride myself on being an optimist and being a right positive pain in the bum, so I don’t know if there’s any part I don’t enjoy — if anything is tricky I try and see it as a challenge and something that will only improve my skillset.

I know that sounds super annoying, doesn’t it? I know that can seem all floaty and idealistic, but I pride myself on being a “realistic idealist” — recognizing the reality of a tricky situation, but seeing it through an idealistic lens.

If I’m being honest, then probably that really early stage is my least favorite — when the project is properly kicking off and you’ve got that gap between what the treatment/script/vision says it will be and the huge gulf in between that and the finished thing. That’s also the most exciting too, the not knowing how it will turn out. It’s terrifying and thrilling, in all good measure. It surprises me every single time, but I think that panic is an essential part of any creative process.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
In an alternate world, I’d be a photographer, traveling the world, documenting everything I see, living the nomadic life. But that’s still a creative role, and I still class it as the same job, really. I love my graphic design roots too — print and digital design — but, again, I see it as all the same role really.

So that means, if I didn’t have this job, I’d be roaming the lands, offering to draw/paint/film/make for anyone that wanted it! (Is that a mercenary? Is there such a thing as a visual mercenary? I don’t really have the physique for that I don’t think.)

WHY DID YOU CHOOSE THIS PROFESSION?
This profession chose me. I’m just kidding, that’s ridiculous, I just always wanted to say that.

I think, like most folks, I fell into it through a series of natural choices. Art, design, graphics and games always stole my attention as a kid, and I just followed the natural path into that, which turned into my career. I’m lucky enough that I didn’t feel the need to single out any one passion, and I kept them all bubbling along even as I made my career choices from designer to director. I still did and still do indulge my passion for all types of mediums in my own time.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I’m not sure. I wasn’t particularly driven or focused as a kid. I knew I loved design and art, but I didn’t know of the many, many different roles out there that existed. I like that though, I see that as a positive, and also as an achievable way to progress through a career path. I speak to a lot of students and young professionals and I think it can be so overwhelming to plot a big ‘X’ on a career map and then feel all confused about how to get there. I’m an advocate of taking it one step at a time and making more manageable advances forward — as things always get in the way and change anyway.

I love the idea of a meandering, surprising path. Who knows where it will lead!? I think as long as your aim is to make great work, then you’ll surprise yourself where you end up.

WHAT WAS IT ABOUT DIRECTING THAT ATTRACTED YOU?
I’ve always obsessed over films, and obsessed over the creation of them. I’ll watch a behind-the-scenes on any film or bit of moving image. I just love the fact that the role is to bring something to life — it’s to oversee and create something from nothing, ensuring every frame is right. The way it makes you feel, the way it looks, the way it sounds.

It’s just such an exciting role. There’s a lot of unknowns too, on every project. I think that’s where the good stuff lies. Trusting in the process and moving forwards, embracing it.

HOW DOES DIRECTING FOR ANIMATION DIFFER FROM DIRECTING FOR LIVE ACTION — OR DOES IT?
Technically it’s different — with animation your choices are pretty much made all up front, with the storyboards and animatic as your guides, and then they’re brought to life with animation. Whereas, for me, the excitement in live action is not really knowing what you’ll get until there’s a lens on it. And even then, it can come together in a totally new way in the edit.

I don’t try to differentiate myself as an “animation director” or “live-action” director. They’re just different tools for the job. Whatever tells the best story and connects with audiences!

HOW DO YOU PICK THE PEOPLE YOU WORK WITH ON A PARTICULAR PROJECT?
Their skillset is paramount, but equally as important is their passion and their kindness. There are so many great people out there, but I think it’s so important to work with people who are great and kind. Too many people get a free pass for being brilliant and feel that celebration of their work means it’s okay to mistreat others. It’s not okay… ever. I’m lucky that Aardman is a place full of excited, passionate and engaged folk who are a pleasure to work with, because you can tell they love what they do.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I’ve been lucky enough to work on a real variety of projects recently. I directed an ident for the rebrand of BBC2, a celebratory Christmas spot for the luxury brand Fortnum & Mason and an autobiographical motion graphics short film about Maya Angelou for BBC Radio 4.

Maya Angelou short film for BBC Radio 4

I love the variety of them; just those three projects alone were so different. The BBC2 ident was live-action in-camera effects with a great crew of people, whereas the Maya Angelou film was just me on design, direction and animation. I love hopping between projects of all types and sizes!

I’m working on development of a stop-frame short at the moment, which is all I can say for now, but just the process alone going from idea to a scribble in a notebook to a script is so exciting. Who knows what 2019 holds!?

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Oh man, that’s a tough one! A few years back I co-directed a title sequence for a creative festival called OFFF, which happens every year in Barcelona. I worked with Aardman legend Merlin Crossingham to bring this thing to life, and it’s a proper celebration of what we both love — it ended up being what we lovingly refer to as our “stop-frame live-action motion-graphics rap-video title-sequence.” It really was all those things.

That was really special as not only did we have a great crew, I got to work with one of my favorite rappers, P.O.S., who kindly provided the beats and the raps for the film.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT
– My iPhone. It’s my music player, Internet checker, email giver, tweet maker, picture capturer.
– My Leica M6 35mm camera. It’s my absolute pride and joy. I love the images it makes.
– My Screens. At work I have a 27-inch iMac and then two 25-inch monitors on either side. I just love screens. If I could have more, I would!

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I genuinely love what I do, so I rarely feel like I “need to get away from it all.” But I do enjoy life outside of work. I’m a drummer and that really helps with any and all stress really. Even just practicing on a practice pad is cathartic, but nothing compares to smashing away on a real kit.

I like to run, and I sometimes do a street dance class, which is both great fun and excruciatingly frustrating because I’m not very good.

I’m a big gamer, even though I don’t have much time for it anymore. A blast on the PS4 is a treat. In fact, after this I’m going to have a little session on God of War before bedtime.

I love hanging with my family. My wife Jane, our young son Sullivan and our dog Peggy. Just hanging out, being a dad and being a husband is the best for de-stressing. Unless Sullivan gets up at 3am, then I change my answer back to the PS4.

I’m kidding, I love my family, I wouldn’t be anything or be anywhere without them.

Foundry Nuke 11.3’s performance, collaboration updates

Foundry has launched Nuke 11.3, introducing new features and updates to the company’s family of compositing and review tools. The release is the fourth update to the Nuke 11 Series and is designed to improve the user experience and to speed up heavy processing tasks for pipelines and individual users.

Nuke 11.3 lands with major enhancements to its Live Groups feature. It introduces new functionality along with corresponding Python callbacks and UI notifications that will allow for greater collaboration and offer more control. These updates make Live Groups easier for larger pipelines to integrate and give artists more visibility over the state of the Live Group and flexibility when using user knobs to override values within a Live Group.

The particle system in NukeX has been optimized to produce particle simulations up to six times faster than previous versions of the software, and up to four times faster for playback, allowing for faster iteration when setting up particle systems.

New Timeline Multiview support provides an extension to stereo and VR workflows. Artists can now use the same multiple-file stereo workflows that exist in Nuke on the Nuke Studio, Hiero and HieroPlayer timeline. The updated export structure can also be used to create multiple-view Nuke scripts from the timeline in Nuke Studio and Hiero.

Support for full-resolution stereo on monitor out makes review sessions even easier, and a new export preset helps with rendering of stereo projects.

New UI indications for changes in bounding box size and channel count help artists troubleshoot their scripts. A visual indication flags nodes that grow the bounding box beyond the image size, helping artists identify the state of the bounding box at a glance. Channel count is now displayed in the status bar, and a warning is triggered when the 1024-channel limit is exceeded. The appearance and thresholds for the bounding box and channel warnings can be set in the preferences.
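
For pipeline folks who already script checks like these, here is a rough idea of what the new warnings correspond to. This is a minimal Script Editor sketch of my own, not Foundry’s implementation, and it assumes the long-standing Python calls (allNodes, bbox, width/height, channels) behave as they have in prior Nuke releases:

    import nuke

    # Flag nodes whose output bounding box is larger than the frame.
    for node in nuke.allNodes():
        bbox = node.bbox()  # the node's output bounding box
        if bbox.w() > node.width() or bbox.h() > node.height():
            print('%s: bbox %dx%d exceeds format %dx%d' % (
                node.name(), bbox.w(), bbox.h(), node.width(), node.height()))

    # Warn when the script approaches the 1024-channel limit.
    channel_count = len(nuke.root().channels())
    if channel_count > 1024:
        print('Warning: %d channels exceeds the 1024-channel limit' % channel_count)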

The selection tool has also been improved in both 2D and 3D views, and an updated marquee and new lasso tool make selecting shapes and points even easier.

Nuke 11.3 is available for purchase — alongside full release details — on Foundry’s website and via accredited resellers.

Making an animated series with Adobe Character Animator

By Mike McCarthy

In a departure from my normal film production technology focus, I have also been working on an animated web series called Grounds of Freedom. Over the past year I have been directing the effort and working with a team of people across the country who are helping in various ways. After a year of meetings, experimentation and work, we finally started releasing finished episodes on YouTube.

The show takes place in Grounds of Freedom, a coffee shop where a variety of animated mini-figures gather to discuss freedom and its application to present-day cultural issues and events. The show is created with a workflow that weaves through a variety of Adobe Creative Cloud apps. Back in October I presented our workflow during Adobe Max in LA, and I wanted to share it with postPerspective’s readers as well.

When we first started planning for the series, we considered using live action. Ultimately, after being inspired by the preview releases of Adobe Character Animator, I decided to pursue a new digital approach to brick filming (making films with Legos), which is traditionally accomplished through stop-motion animation. Once everyone else realized the simpler workflow possibilities and increased level of creative control offered by that new animation process, they were excited to pioneer this new approach. Animation gives us more control and flexibility over the message and dialog, lowers production costs and eases collaboration over long distances, as there is no “source footage” to share.

Creating the Characters
The biggest challenge to using Character Animator is creating digital puppets, which are deeply layered Photoshop PSDs with very precise layer naming and stacking. There are ways to generate the underlying source imagery in 3D animation programs, but I wanted the realism and authenticity of sourcing from actual photographs of the models and figures. So we took lots of 5K macro shots of our sets and characters in various positions with our Canon 60D and 70D DSLRs and cut out hundreds of layers of content in Photoshop to create our characters and all of their various possible body positions. The only thing that was synthetically generated was the various facial expressions digitally painted onto their clean yellow heads, usually to match an existing physical reference character face.

Mike McCarthy shooting stills.

Once we had our source imagery organized into huge PSDs, we rigged those puppets in Character Animator with various triggers, behaviors and controls. The walking was accomplished by cycling through various layers, instead of the default bending of the leg elements. We created arm movement by mapping each arm position to a MIDI key. We controlled facial expressions and head movement via webcam, and the mouth positions were calculated by the program based on the accompanying audio dialog.

Animating Digital Puppets
The puppets had to be finished and fully functional before we could start animating on the digital stages we had created. We had been writing the scripts during that time, parallel to generating the puppet art, so we were ready to record the dialog by the time the puppets were finished. We initially attempted to record live in Character Animator while capturing the animation motions as well, but we didn’t have the level of audio editing functionality we needed available to us in Character Animator. So during that first session, we switched over to Adobe Audition, and planned to animate as a separate process, once the audio was edited.

That whole idea of live capturing audio and facial animation data is laughable now, looking back, since we usually spend a week editing the dialog before we do any animating. We edited each character audio on a separate track and exported those separate tracks to Character Animator. We computed lipsync for each puppet based on their dedicated dialog track and usually exported immediately. This provided a draft visual that allowed us to continue editing the dialog within Premiere Pro. Having a visual reference makes a big difference when trying to determine how a conversation will feel, so that was an important step — even though we had to throw away our previous work in Character Animator once we made significant edit changes that altered sync.

We repeated the process once we had a more final edit. We carried on from there in Character Animator, recording arm and leg motions with the MIDI keyboard in realtime for each character. Once those trigger layers had been cleaned up and refined, we recorded the facial expressions, head positions and eye gaze with a single pass on the webcam. Every re-record to alter a particular section adds a layer to the already complicated timeline, so we limited that as much as possible, usually re-recording instead of making quick fixes unless we were nearly finished.

Compositing the Characters Together
Once we had fully animated scenes in Character Animator, we would turn off the background elements and isolate each character layer to be exported in Media Encoder via dynamic link. I did a lot of testing before settling on JPEG2000 MXF as the format of choice. I wanted a highly compressed file but needed alpha channel support, and that was the best option available. Each of those renders became a character layer, which was composited into our stage layers in After Effects. We could have dynamically linked the characters directly into AE, but with that many layers that would decrease performance for the interactive part of the compositing work. We added shadows and reflections in AE, as well as various other effects.

Walking was one of the most challenging effects to properly recreate digitally. Our layer cycling in Character Animator resulted in a static figure swinging its legs, but people (and mini figures) have a bounce to their step, and move forward at an uneven rate as they take steps. With some pixel measurement and analysis, I was able to use anchor point keyframes in After Effects to get a repeating movement cycle that made the character appear to be walking on a treadmill.

I then used carefully calculated position keyframes to add the appropriate amount of travel per frame for the feet to stick to the ground, which varies based on the scale as the character moves toward the camera. (In my case the velocity was half the scale value in pixels per second.) We then duplicated that layer to create the reflection and shadow of the character as well. That result can then be composited onto various digital stages. In our case, the first two shots of the intro were designed to use the same walk animation with different background images.
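
To make that arithmetic concrete, here is a toy Python version of the travel calculation — my reading of the ratio quoted above, not code from the production; the 24fps frame rate and the sample scale values are assumptions:

    FPS = 24.0  # assumed frame rate

    def travel_per_frame(scale_percent):
        """Horizontal travel in pixels for one frame at a given scale value."""
        velocity = scale_percent / 2.0  # pixels per second, per the stated ratio
        return velocity / FPS

    # A character at 100% scale covers about 2.08 px per frame; at 50% scale
    # (farther from camera), about 1.04 px per frame.
    for scale in (100.0, 50.0):
        print('%s%%: %.2f px/frame' % (scale, travel_per_frame(scale)))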

All of the character layers were pre-comped, so we only needed to update a single location when a new version of a character was rendered out of Media Encoder, or when we brought in a dynamically linked layer. It would propagate all the necessary comp layers to generate updated reflections and shadows. Once the main compositing work was finished, we usually only needed to make slight changes in each scene between episodes. These scenes were composited at 5K, based on the resolution of the DSLR photos of the sets we had built. These 5K plates could be dynamically linked directly into Premiere Pro, and occasionally used later in the process to ripple slight changes through the workflow. For the interactive work, we got far better editing performance by rendering out flattened files. We started with DNxHR 5K assets, but eventually switched to HEVC files since they were 30x smaller and imperceptibly different in quality with our relatively static animated content.

Editing the Animated Scenes
In Premiere Pro, we had the original audio edit, and usually a draft render of the characters with just their mouths moving. Once we had the plate renders, we placed them each in their own 5K scene sub-sequence and used those sequences as source on our master timeline. This allowed us to easily update the content when new renders were available, or source from dynamically linked layers instead if needed. Our master timeline was 1080p, so with 5K source content we could push in two and a half times the frame size without losing resolution. This allowed us to digitally frame every shot, usually based on one of two rendered angles, and gave us lots of flexibility all the way to the end of the editing process.
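
The punch-in headroom is simple to verify. Assuming a 5K plate roughly 5120 pixels wide (the exact width depends on the camera), the maximum scale-up before you drop below 1:1 pixels in a 1080p timeline is:

    # Maximum punch-in before a 5K plate falls below 1:1 pixels at 1080p.
    source_width = 5120.0   # assumed "5K" plate width
    timeline_width = 1920.0
    print(source_width / timeline_width)  # ~2.67x

That lines up with the roughly two-and-a-half-times figure above, with a little margin to spare.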

Collaborative Benefits of Dynamic Link
While Dynamic Link doesn’t offer the best playback performance without making temp renders, it does have two major benefits in this workflow. It ripples changes to the source PSD all the way to the final edit in Premiere just by bringing each app into focus once. (I added a name tag to one character’s PSD during my presentation, and 10 seconds later, it was visible throughout my final edit.) Even more importantly, it allows us to collaborate online without having to share any exported video assets. As long as each member of the team has the source PSD artwork and audio files, all we have to exchange online are the Character Animator project (which is small once the temp files are removed), the .AEP file and the .PrProj file.

This gives any of us the option to render full-quality visual assets anytime we need them, but the work we do on those assets is all contained within the project files that we sync to each other. The coffee shop was built and shot in Idaho, our voice artist was in Florida, and our puppets’ faces were created in LA. I animate and edit in Northern California, the AE compositing was done in LA, and the audio is mixed in New Jersey. We did all of that with nothing but a Dropbox account, using the workflow I have just outlined.

Past that point, it was a fairly traditional finish, in that we edited in music and sound effects and sent an OMF to Steve, our sound guy at DAWPro Studios (http://dawpro.com/photo_gallery.html), for the final mix. During that time we added other b-roll visuals or other effects, and once we had the final audio back we rendered the final result to H.264 at 1080p and uploaded it to YouTube.


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

VFX supervisor Simon Carr joins London’s Territory

Simon Carr has joined visual effects house Territory, bringing with him 20 years of experience as a VFX supervisor. He most recently served in that role at London’s Halo, where he built the VFX department from scratch. He has also supervised at Realise Studio, Method Studios, Pixomondo, Digital Domain and others. While Carr will be based in London, he will also support the studio’s San Francisco offices as needed.

Having invested in a Shotgun pipeline, with a bespoke toolkit that integrates Territory’s design-led approach with VFX delivery, Carr’s appointment, according to the studio, signals a strategic approach to expanding the team’s capabilities. “Simon’s experience of all stages of the VFX process from pre-production to final delivery means that our clients and partners can be confident of seamless high-end VFX delivery at every stage of a project,” says David Sheldon-Hicks, Territory’s founder and executive creative director.

At Territory, Carr will use his experience building and leading teams of artists, from compositing through to complex environment builds. The studio will also benefit from his experience of building a facility from scratch — establishing pipelines and workflows, recruiting and retaining artists, developing and maintaining client relationships, and being involved with the pitching and bidding process.

The studio has worked on high-profile film projects, including Blade Runner 2049, Ready Player One, Pacific Rim: Uprising, Ghost in the Shell, The Martian and Guardians of the Galaxy. On the broadcast front, they have worked on the new series based on George R.R. Martin’s novella, Nightflyers, Amazon Prime/Channel 4’s Electric Dreams and National Geographic’s Year Million.

 

Behind the Title: Lobo EP for Europe, Loic Francois Marie Dubois

NAME: Loic Francois Marie Dubois

COMPANY: New York- and São Paulo, Brazil-based Lobo

CAN YOU DESCRIBE YOUR COMPANY?
We are a full-service creative studio offering design, live action, stop motion, 3D & 2D, mixed media, print, digital, AR and VR.

Day One spot Sunshine

WHAT’S YOUR JOB TITLE?
Creative executive producer for Europe and formerly head of production. I’m based in Brazil, but work out of the New York office as well.

WHAT DOES THAT ENTAIL?
Managing, hiring creative teams, designers, producers and directors for international productions (USA, Europe, Asia). Also, I have served as the creative executive director for TBWA Paris on the McDonald’s Happy Meal global campaign for the last five years. Now as creative EP for Europe, I am also responsible for streamlining information from pre-production to post production between all production parties for a more efficient and prosperous sales outcome.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
The patience and the fun psychological side you need to have to handle all the production peeps, agencies, and clients.

WHAT TOOLS DO YOU USE?
Excel, Word, Showbiz, Keynote, Pages, Adobe Package (Photoshop, Illustrator, After Effects, Premiere, InDesign), Maya, Flame, Nuke and AR/VR technology.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Working with talented creative people on extraordinary projects with a stunning design and working on great narratives, such as the work we have done for clients including Interface, Autism Speaks, Imaginary Friends, Unicef and Travelers, to name a few.

WHAT’S YOUR LEAST FAVORITE?
Monday morning.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Early afternoon between Europe closing down and the West Coast waking up.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Meditating in Tibet…

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
Since I was 13 years old. After shooting and editing a student short film (an Oliver Twist adaptation) with a Bolex 16mm on location in London and Paris, I was hooked.

Promoting Lacta 5Star chocolate bars

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
An animated campaign for the candy company Mondelez’s Lacta 5Star chocolate bars; an animated short film for the Imaginary Friends Society; a powerful animated short on the dangers of dating abuse and domestic violence for nonprofit Day One; a mixed media campaign for Chobani called FlipLand; and a broadcast spot for McDonald’s and Spider-Man.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
My three kids 🙂

It’s really hard to choose one project, as they are all equally different and amazing in their own way, but maybe D&AD Wish You Were Here. It stands out for the number of awards it won and the collective creative production process.

NAME PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
The Internet.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Meditation and yoga.

Epic Games’ Unreal Engine 4.21 adds more mobile optimizations, efficiencies

Epic Games’ Unreal Engine 4.21 is designed to offer greater efficiency, performance and stability for developers working on any platform.

Unreal Engine 4.21 adds even more mobile optimizations to both Android and iOS, up to 60% speed increases when cooking content, and more power and flexibility in the Niagara effects toolset for realtime VFX. Also, the new production-ready Replication Graph plugin enables developers to build multiplayer experiences at a scale that hasn’t been possible before, and Pixel Streaming allows users to stream interactive content directly to remote devices with no compromises on rendering quality.

Updates in Unreal Studio 4.21 also offer new capabilities and enhanced productivity for users in the enterprise space, including architecture, manufacturing, product design and other areas of professional visualization. Unreal Studio’s Datasmith workflow toolkit now includes support for Autodesk Revit and enhanced material translation for Autodesk 3ds Max, enabling more efficient design review and iteration.

Here is more about the key features:
Replication Graph: The Replication Graph plugin, which is now production-ready, makes it possible to customize network replication in order to build large-scale multiplayer games that would not be viable with traditional replication strategies.
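
For context, wiring a custom Replication Graph into a project is a small config change per Epic’s Replication Graph documentation: point the net driver at your UReplicationGraph subclass in DefaultEngine.ini. The module and class names below are hypothetical placeholders:

    ; DefaultEngine.ini — a sketch, assuming a project module "MyGame"
    ; with a UReplicationGraph subclass "MyReplicationGraph"
    [/Script/OnlineSubsystemUtils.IpNetDriver]
    ReplicationDriverClassName="/Script/MyGame.MyReplicationGraph"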

Niagara Enhancements: The Niagara VFX feature set continues to grow, with substantial quality of life improvements and Nintendo Switch support added in Unreal Engine 4.21.

Sequencer Improvements: New capabilities within Sequencer allow users to record incoming video feeds to disk as OpenEXR frames and create a track in Sequencer, with the ability to edit and scrub the track as usual. This enables users to synchronize video with CG assets and play them back together from the timeline.

Pixel Streaming (Early Access): With the new Pixel Streaming feature, users can author interactive experiences such as product configurators or training applications, host them on a cloud-based GPU or local server, and stream them to remote devices via web browser without the need for additional software or porting.

Mobile Optimizations: The mobile development process gets even better thanks to all of the mobile optimizations that were developed for Fortnite‘s initial release on Android, in addition to all of the iOS improvements from Epic’s ongoing updates. With the help of Samsung, Unreal Engine 4.21 includes all of the Vulkan engineering and optimization work that was done to help ship Fortnite on the Samsung Galaxy Note 9 and is 100% feature compatible with OpenGL ES 3.1.

Much Faster Cook Times: In addition to an optimized cooking process, low-level code now avoids unnecessary file system operations, streamlining cook times.

Gauntlet Automation Framework (Early Access): The new Gauntlet automation framework enables developers to automate the process of deploying builds to devices, running one or more clients and/or servers, and processing the results. Gauntlet scripts can automatically profile points of interest, validate gameplay logic, check return values from backend APIs and more. Gauntlet has been battle-tested for months in the process of optimizing Fortnite and is a key part of ensuring it runs smoothly on all platforms.

Animation System Optimizations and Improvements: Unreal Engine’s animation system continues to build on best-in-class features thanks to new workflow improvements, better surfacing of information, new tools, and more.

Blackmagic Video Card Support: Unreal Engine 4.21 also adds support for Blackmagic video I/O cards for those working in film and broadcast. Creatives in the space can now choose between Blackmagic and AJA Video Systems, the two most popular options for professional video I/O.

Improved Media I/O: Unreal Engine 4.21 now supports 10-bit video I/O, audio I/O, 4K, and Ultra HD output over SDI, as well as legacy interlaced and PsF HD formats, enabling greater color accuracy and integration of some legacy formats still in use by large broadcasters.

Windows Mixed Reality: Unreal Engine 4.21 natively supports the Windows Mixed Reality (WMR) platform and headsets, such as the HP Mixed Reality headset and the Samsung HMD Odyssey headset.

Magic Leap Improvements: Unreal Engine 4.21 supports all the features needed to develop complete applications on Magic Leap’s Lumin-based devices — rendering, controller support, gesture recognition, audio input/output, media, and more.

Oculus Avatars: The Oculus Avatar SDK includes an Unreal package to assist developers in implementing first-person hand presence for the Rift and Touch controllers. The package includes avatar hand and body assets that are viewable by other users in social applications.

Datasmith for Revit (Unreal Studio): Unreal Studio’s Datasmith workflow toolkit for streamlining the transfer of CAD data into Unreal Engine now includes support for Autodesk Revit. Supported elements include materials, metadata, hierarchy, geometric instancing, lights and cameras.

Multi-User Viewer Project Template (Unreal Studio): A new project template for Unreal Studio 4.21 enables multiple users to connect in a real-time environment via desktop or VR, facilitating interactive, collaborative design reviews across any work site.

Accelerated Automation with Jacketing and Defeaturing (Unreal Studio): Jacketing automatically identifies meshes and polygons that have a high probability of being hidden from view, and lets users hide, remove or move them to another layer; this command is also available through Python so Unreal Studio users can integrate this step into automated preparation workflows. Defeaturing automatically removes unnecessary detail (e.g. blind holes, protrusions) from mechanical models to reduce polygon count and boost performance.

Enhanced 3ds Max Material Translation (Unreal Studio): There is now support for most commonly used 3ds Max maps, improving visual fidelity and reducing rework. Those in the free Unreal Studio beta can now translate 3ds Max material graphs to Unreal graphs when exporting, making materials easier to understand and work with. Users can also leverage improvements in BRDF matching from V-Ray materials, especially metal and glass.

DWG and Alias Wire Import (Unreal Studio): Datasmith now supports DWG and Alias Wire file types, enabling designers to import more 3D data directly from Autodesk AutoCAD and Autodesk Alias.

Chaos Group to support Cinema 4D with two rendering products

At the Maxon Supermeet 2018 event, Chaos Group announced its plans to support the Maxon Cinema 4D community with two rendering products: V-Ray for Cinema 4D and Corona for Cinema 4D. Based on V-Ray’s Academy Award-winning raytracing technology, the development of V-Ray for Cinema 4D will be focused on production rendering for high-end visual effects and motion graphics. Corona for Cinema 4D will focus on artist-friendly design visualization.

Chaos Group, which acquired the V-Ray for Cinema 4D product from LAUBlab and will lead development on the product for the first time, will offer current customers free migration to a new update, V-Ray 3.7 for Cinema 4D. All users who move to the new version will receive a free V-Ray for Cinema 4D license, including all product updates, through January 15, 2020. Moving forward, Chaos Group will be providing all support, sales and product development in-house.

In addition to ongoing improvements to V-Ray for Cinema 4D, Chaos Group also released the Corona for Cinema 4D beta 2 at Supermeet, with the final product to follow in January 2019.

Main Image: Daniel Sian created Robots using V-Ray for Cinema 4D.

Promoting a Mickey Mouse watch without Mickey

Imagine creating a spot for a watch that celebrates the 90th anniversary of Mickey Mouse — but you can’t show Mickey Mouse. Already Been Chewed (ABC), a design and motion graphics studio, developed a POV concept that met this challenge and also tied in the design of the actual watch.

Nixon, a California-based premium watch company that is releasing a series of watches around the Mickey Mouse anniversary, called on Already Been Chewed to create the 20-second spot.

“The challenge was that the licensing arrangement that Disney made with Nixon doesn’t allow Mickey’s image to be in the spot,” explains Barton Damer, creative director at Already Been Chewed. “We had to come up with a campaign that promotes the watch and has some sort of call to action that inspires people to want this watch. But, at the same time, what were we going to do for 20 seconds if we couldn’t show Mickey?”

After much consideration, Damer and his team developed a concept to determine if they could push the limits on this restriction. “We came up with a treatment for the video that would be completely point-of-view, and the POV would do a variety of things for us that were working in our favor.”

The solution was to show Mickey’s hands and feet without actually showing the whole character. In another instance, a silhouette of Mickey is seen in the shadows on a wall, sending a clear message to viewers that the spot is an official Disney and Mickey Mouse release and not just something that was inspired by Mickey Mouse.

Targeting the appropriate consumer demographic segment was another key issue. “Mickey Mouse has long been one of the most iconic brands in the history of branding, so we wanted to make sure that it also appealed to the Nixon target audience and not just a Disney consumer,” Damer says. “When you think of Disney, you could brand Mickey for children or you could brand it for adults who still love Mickey Mouse. So, we needed to find a style and vibe that would speak to the Nixon target audience.”

The Already Been Chewed team chose surfing and skateboarding as dominant themes, since 16- to 30-year-olds are the target demographic and also because Disney is a West Coast brand.
Damer comments, “We wanted to make sure we were creating Mickey in a kind of 3D, tangible way, with more of a feature film and 3D feel. We felt that it should have a little bit more of a modern approach. But at the same time, we wanted to mesh it with a touch of the old-school vibe, like 1950s cartoons.”

In that spirit, the team wanted the action to start with Mickey walking from his car and then culminate at the famous Venice Beach basketball courts and skate park.

“The challenge, of course, is how to do all this in 15 seconds so that we can show the logos at the front and back and a hero image of the watch. And that’s where it was fun thinking it through and coming up with the flow of the spot and seamless transitions with no camera cuts or anything like that. It was a lot to pull off in such a short time, but I think we really succeeded.”

Already Been Chewed achieved these goals with an assist from Maxon’s Cinema 4D and Adobe After Effects. With Damer as creative lead, here’s the complete cast of characters: Aaron Smock, head of production; Thomas King, Barton Damer, Bryan Talkish and Lance Eckert, 3D design; Bryan Talkish and Lance Eckert, animation; Chris Watson, character animation; and DJ Sean P, soundtrack.

Sony Imageworks provides big effects, animation for Warner’s Smallfoot

By Randi Altman

The legend of Bigfoot: a giant, hairy two-legged creature roaming the forests and giving humans just enough of a glimpse to freak them out. Sightings have been happening for centuries with no sign of slowing down — seriously, Google it.

But what if that story was turned around, and it was Bigfoot who was freaked out by a Smallfoot (human)? Well, that is exactly the premise of the new Warner Bros. film Smallfoot, directed by Karey Kirkpatrick. It’s based on the book “Yeti Tracks” by Sergio Pablos.

Karl Herbst

Instead of a human catching a glimpse of the mysterious giant, a yeti named Migo (Channing Tatum) sees a human (James Corden) and tells his entire snow-filled village about the existence of Smallfoot. Of course, no one believes him so he goes on a trek to find this mythical creature and bring him home as proof.

Sony Pictures Imageworks was tasked with all of the animation and visual effects work on the film, while Warner Animation Group did all of the front-end work — such as adapting the script, creating the production design, editing, directing, producing and more. We reached out to Imageworks VFX supervisor Karl Herbst (Hotel Transylvania 2) to find out more about creating the animation and effects for Smallfoot.

The film has a Looney Tunes-type feel with squash and stretch. Did this provide more freedom or less?
In general, it provided more freedom since it allowed the animation team to really have fun with gags. It also gave them a ton of reference material to pull from and come up with new twists on older ideas. Once out of animation, depending on how far the performance was pushed, other departments — like the character effects team — would have additional work due to all of the exaggerated movements. But all of the extra work was worth it because everyone really loved seeing the characters pushed.

We also found that as the story evolved, Migo’s journey became more emotionally driven, so we needed to find a style that also let the audience truly connect with what he was going through. We brought in a lot more subtlety and a more truthful physicality to the animation when needed. As a result, we have these incredibly heartfelt performances and moments that would feel right at home in an old Road Runner short. Yet it all still feels like part of the same world, with these truly believable characters at the center of it.

Was scale between such large and small characters a challenge?
It was one of the first areas we wanted to tackle, since the look of the yeti fur next to a human was really important to the filmmakers. In the end, we found that the thickness and fidelity of the yeti hair had to be very high so you could see each hair next to the hairs of the humans.

It also meant allowing the rigs for the human and yetis to be flexible enough to scale them as needed to have moments where they are very close together and they did not feel so disproportionate to each other. Everything in our character pipeline from animation down to lighting had to be flexible in dealing with these scale changes. Even things like subsurface scattering in the skin had dials in it to deal with when Percy, or any human character, was scaled up or down in a shot.

How did you tackle the hair?
We updated a couple of key areas in our hair pipeline starting with how we would build our hair. In the past, we would make curves that look more like small groups of hairs in a clump. In this case, we made each curve its own strand of a single hair. To shade this hair in a way that allowed artists to have better control over the look, our development team created a new hair shader that used true multiple-scattering within the hair.

We then extended that hair shading model to add control over the distribution around the hair fiber to model the effect of animal hair, which tends to scatter differently than human hair. This gave artists the ability to create lots of different hair looks, which were not based on human hair, as was the case with our older models.

Was rendering so many furry characters on screen at a time an issue?
Yes. In the past this would have been hard to shade all at once, mostly due to our reliance on opacity to create the soft shadows needed for fur. With the new shading model, we were no longer using opacity at all so the number of rays needed to resolve the hair was lower than in the past. But we now needed to resolve the aliasing due to the number of fine hairs (9 million for LeBron James’ Gwangi).

We developed a few other new tools within our version of the Arnold renderer to help with aliasing and render time in general. The first was adaptive sampling, which allowed us to raise the anti-aliasing samples drastically: some pixels would use only a few samples while others would use very high sampling, whereas in the past all pixels got the same number. This focused our render times where we needed them, helping to reduce overall render time. Our development team also added the ability to pick a render up from its previous point. This meant that at a lower quality level we could do all of our lighting work, get creative approval from the filmmakers and then pick the renders up to bring them to full quality without losing the time already spent.
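
Imageworks’ renderer is proprietary, but the adaptive-sampling idea Herbst describes is general: keep sampling a pixel while its estimated error is high, and stop early once it converges. Here is a toy sketch of that logic (our illustration under simple assumptions, not Imageworks’ code):

```python
# Toy variance-driven adaptive sampling (illustrative only). Smooth pixels
# stop after a few samples; noisy ones (e.g. fine hair against a bright
# background) keep sampling up to the budget.
import random

MIN_SAMPLES, MAX_SAMPLES, NOISE_THRESHOLD = 16, 1024, 1e-3

def shade(x: float, y: float) -> float:
    return random.random()  # stand-in for tracing one camera ray

def render_pixel(x: float, y: float) -> tuple[float, int]:
    mean = m2 = 0.0
    n = 0
    while n < MAX_SAMPLES:
        n += 1
        s = shade(x, y)
        delta = s - mean          # Welford's online mean/variance update
        mean += delta / n
        m2 += delta * (s - mean)
        if n >= MIN_SAMPLES and (m2 / (n - 1)) / n < NOISE_THRESHOLD:
            break                 # variance of the mean is low: converged
    return mean, n                # pixel value plus how many samples it cost
```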

What tools were used for the hair simulations specifically, and what tools did you call on in general?
We used Maya and the Nucleus solvers for all of the hair simulations, but developed tools over them to deal with so much hair per character and so many characters on screen at once. The simulation for each character was driven by their design and motion requirements.

The Looney Tunes-inspired design and motion created a challenge around how to keep hair simulations from breaking with all of the quick and stretched motion while being able to have light wind for the emotional subtle moments. We solved all of those requirements by using a high number of control hairs and constraints. Meechee (Zendaya) used 6,000 simulation curves with over 200 constraints, while Migo needed 3,200 curves with around 30 constraints.

Stonekeeper (Common) was the most complex of the characters, with long braided hair on his head, a beard, shaggy arms and a cloak made of stones. He required a cloth simulation pass, a rigid-body simulation for the stones and hair simulated on top of the stones. Our in-house tool called Kami builds all of the hair at render time and also allows us to add procedurals to the hair at that point. We relied on those procedurals to create the many varied hair looks for all of the generics needed to fill the village full of yetis.

How many different types of snow did you have?
We created three different snow systems for environmental effects. The first was a particle simulation of flakes for near-ground detail. The second was volumetric effects to create lots of atmosphere in the backgrounds that had texture and movement. We used this on each of the large sets and then stored those so lighters could pick which parts they wanted in each shot. To also help with artistically driving the look of each shot, our third system was a library of 2D elements that the effects team rendered and could be added during compositing to add details late in shot production.

For ground snow, we had different systems based on the needs in each shot. For shallow footsteps, we used displacement of the ground surface with additional little pieces of geometry to add crumble detail around the prints. This could be used in foreground or background.

For heavy interactions, like tunneling or sliding in the snow, we developed a new tool we called Katyusha. This new system combined rigid body destruction with fluid simulations to achieve all of the different states snow can take in any given interaction. We then rendered these simulations as volumetrics to give the complex lighting look the filmmakers were looking for. The snow, being in essence a cloud, allowed light transport through all of the different layers of geometry and volume that could be present at any given point in a scene. This made it easier for the lighters to give the snow its light look in any given lighting situation.

Was there a particular scene or effect that was extra challenging? If so, what was it and how did you overcome it?
The biggest challenge to the film as a whole was the environments. The story was very fluid, so design and build of the environments came very late in the process. Coupling that with a creative team that liked to find their shots — versus design and build them — meant we needed to be very flexible on how to create sets and do them quickly.

To achieve this, we began by breaking the environments into a subset of source shapes that could be combined in any fashion to build Yeti Mountain, Yeti Village and the surrounding environments. Surfacing artists then created materials that could be applied to any set piece, allowing for quick creative decisions about what was rock, snow and ice, and creating many different looks. All of these materials were created using PatternCreate networks as part of our OSL shaders. With them, we could heavily leverage portable procedural texturing between assets, making location construction quicker, more flexible and easier to dial.

To get the right snow look for all levels of detail needed, we used a combination of textured snow, modeled snow and a simulation of geometric snowfall, which all needed to shade the same. For the simulated snowfall we created a padding system that could be run at any time on an environment giving it a fresh coating of snow. We did this so that filmmakers could modify sets freely in layout and not have to worry about broken snow lines. Doing all of that with modeled snow would have been too time-consuming and costly. This padding system worked not only in organic environments, like Yeti Village, but also in the Human City at the end of the film. The snow you see in the Human City is a combination of this padding system in the foreground and textures in the background.
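
As a rough illustration of what a “padding” pass like that does, here is a toy sketch (our own, under simple assumptions, not Imageworks’ tool): displace each vertex outward along its normal, weighted by how much the surface faces up, so ledges and rooftops accumulate snow while walls stay bare.

```python
# Toy snow-padding pass (illustrative only): push vertices outward where
# their normals face up, giving an environment a fresh coating of snow.
import numpy as np

def pad_with_snow(verts: np.ndarray, normals: np.ndarray,
                  max_depth: float = 0.05) -> np.ndarray:
    """verts, normals: (N, 3) arrays of positions and unit normals."""
    up = np.array([0.0, 1.0, 0.0])             # assume a Y-up world
    facing = np.clip(normals @ up, 0.0, None)  # 1 = straight up, 0 = wall/under
    # Squaring biases the buildup toward flat, upward-facing surfaces.
    return verts + normals * (facing ** 2 * max_depth)[:, None]
```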

Creating super sounds for Disney XD’s Marvel Rising: Initiation

By Jennifer Walden

Marvel revealed “the next generation of Marvel heroes for the next generation of Marvel fans” in a behind-the-scenes video back in December. Those characters stayed tightly under wraps until August 13, when a compilation of animated shorts called Marvel Rising: Initiation aired on Disney XD. Those shorts dive into the back story of the new heroes and give audiences a taste of what they can expect in the feature-length animated film Marvel Rising: Secret Warriors, which premiered September 30 on the Disney Channel and Disney XD simultaneously.

L-R: Pat Rodman and Eric P. Sherman

Handling audio post on both the animated shorts and the full-length feature is the Bang Zoom team led by sound supervisor Eric P. Sherman and chief sound engineer Pat Rodman. They worked on the project at the Bang Zoom Atomic Olive location in Burbank. The sounds they created for this new generation of Marvel heroes fit right in with the established Marvel universe but aren’t strictly limited to what already exists. “We love to keep it kind of close, unless Marvel tells us that we should match a specific sound. It really comes down to whether it’s a sound for a new tech or an old tech,” says Rodman.

Sherman adds, “When they are talking about this being for the next generation of fans, they’re creating a whole new collection of heroes, but they definitely want to use what works. The fans will not be disappointed.”

The shorts begin with a helicopter flyover of New York City at night. Blaring sirens mix with police radio chatter as searchlights sweep over a crime scene on the street below. A SWAT team moves in as a voice blasts over a bullhorn, “To the individual known as Ghost Spider, we’ve got you surrounded. Come out peacefully with your hands up and you will not be harmed.” Marvel Rising: Initiation wastes no time in painting a grim picture of New York City. “There is tension and chaos. You feel the oppressiveness of the city. It’s definitely the darker side of New York,” says Sherman.

The sound of the city throughout the series was created using a combination of sourced recordings of authentic New York City street ambience and custom recordings of bustling crowds that Rodman captured at street markets in Los Angeles. Mix-wise, Rodman says they chose to play the backgrounds of the city hotter than normal just to give the track a more immersive feel.

Ghost Spider
Not even 30 seconds into the shorts, the first new Marvel character makes her dramatic debut. Ghost Spider (Dove Cameron), who is also known as Spider Gwen, bursts from a third-story window, slinging webs at the waiting officers. Since she’s a new character, Rodman notes that she’s still finding her way and there’s a bit of awkwardness to her character. “We didn’t want her to sound too refined. Her tech is good, but it’s new. It’s kind of like Spider-Man first starting out as a kid and his tech was a little off,” he says.

Sound designer Gordon Hookailo spent a lot of time crafting the sound of Spider Gwen’s webs, which according to Sherman have more of a nylon, silky kind of sound than Spider-Man’s webs. There’s a subliminal ghostly wisp sound to her webs also. “It’s not very overt. There’s just a little hint of a wisp, so it’s not exactly like regular Spider-Man’s,” explains Rodman.

Initially, Spider Gwen seems to be a villain. She’s confronted by the young-yet-authoritative hero Patriot (Kamil McFadden), a member of S.H.I.E.L.D. who was trained by Captain America. Patriot carries a versatile, high-tech shield that can do lots of things, like become a hovercraft. It shoots lasers and rockets too. The hovercraft makes a subtle, whooshy humming sound that’s high-tech in a way that’s akin to the Goblin’s hovercraft. “It had to sound like Captain America too. We had to make it match with that,” notes Rodman.

Later on in the shorts, Spider Gwen’s story reveals that she’s actually one of the good guys. She joins forces with a crew of new heroes, starting with Ms. Marvel and Squirrel Girl.

Ms. Marvel (Kathreen Khavari) has the ability to stretch and grow. When she reaches out to grab Spider Gwen’s leg, there’s a rubbery, creaking sound. When she grows 50 feet tall, she sounds 50 feet tall, complete with massive, ground-shaking footsteps and a lower-ranged voice that’s sweetened with big delays and reverbs. “When she’s large, she almost has a totally different voice. She sounds like a large, forceful woman,” says Sherman.

Squirrel Girl
One of the favorites on the series so far is Squirrel Girl (Milana Vayntrub) and her squirrel sidekick Tippy Toe. Squirrel Girl has the power to call a stampede of squirrels. Sound-wise, the team had fun with that, capturing recordings of animals small and large with their Zoom H6 field recorder. “We recorded horses and dogs mainly because we couldn’t find any squirrels in Burbank; none that would cooperate, anyway,” jokes Rodman. “We settled on a larger animal sound that we manipulated to sound like it had little feet. And we made it sound like there are huge numbers of them.”

Squirrel Girl is a fan of anime, and so she incorporates an anime style into her attacks, like calling out her moves before she makes them. Sherman shares, “Bang Zoom cut its teeth on anime; it’s still very much a part of our lifeblood. Pat and I worked on thousands of episodes of anime together, and we came up with all of these techniques for making powerful power moves.” For example, they add reverb to the power moves and choose “shings” that have an anime style sound.

What is an anime-style sound, you ask? “Diehard fans of anime will debate this to the death,” says Sherman. “It’s an intuitive thing, I think. I’ll tell Pat to do that thing on that line, and he does. We’re very much ‘go with the gut’ kind of people.

“As far as anime style sound effects, Gordon [Hookailo] specifically wanted to create new anime sound effects so we didn’t just take them from an existing library. He created these new, homegrown anime effects.”

Quake
The other hero briefly introduced in the shorts is Quake (Chloe Bennet), voiced by the same actress who plays Daisy Johnson, aka Quake, on Agents of S.H.I.E.L.D. Sherman says, “Gordon is a big fan of that show and has watched every episode. He used that as a reference for the sound of Quake in the shorts.”

The villain in the shorts has so far remained nameless, but when she first battles Spider Gwen the audience sees her pair of super-daggers that pulse with a green glow. The daggers are somewhat “alive,” and when they cut someone they take some of that person’s life force. “We definitely had them sound as if the power was coming from the daggers and not from the person wielding them,” explains Rodman. “The sounds that Gordon used were specifically designed — not pulled from a library — and there is a subliminal vocal effect when the daggers make a cut. It’s like the blade is sentient. It’s pretty creepy.”

Voices
The character voices were recorded at Bang Zoom, either in the studio or via ISDN. The challenge was getting all the different voices to sound as though they were in the same space together on-screen. Also, some sessions were recorded with single mics on each actor while other sessions were recorded as an ensemble.

Sherman notes it was an interesting exercise in casting. Some of the actors were YouTube stars (who don’t have much formal voice acting experience) and some were experienced voice actors. When an actor without voiceover experience comes in to record, the Bang Zoom team likes to start with mic technique 101. “Mic technique was a big aspect and we worked on that. We are picky about mic technique,” says Sherman. “But, on the other side of that, we got interesting performances. There’s a realism, a naturalness, that makes the characters very relatable.”

To get the voices to match, Rodman spent a lot of time using Waves EQ, Pro Tools Legacy Pitch, and occasionally Waves UltraPitch for when an actor slipped out of character. “They did lots of takes on some of these lines, so an actor might lose focus on where they were, performance-wise. You either have to pull them back in with EQ, pitching or leveling,” Rodman explains.

One highlight of the voice recording process was working with voice actor Dee Bradley Baker, who did the squirrel voice for Tippy Toe. Most of Tippy Toe’s final track was Dee Bradley Baker’s natural voice. Rodman rarely had to tweak the pitch, and it needed no other processing or sound design enhancement. “He’s almost like a Frank Welker,” says Rodman (Welker did the voice of Fred Jones on Scooby-Doo, the voice of Megatron starting with the ‘80s Transformers franchise and Nibbler on Futurama).

Marvel Rising: Initiation was like a training ground for the sound of the feature-length film. The ideas that Bang Zoom worked out there were expanded upon for the soon-to-be released Marvel Rising: Secret Warriors. Sherman concludes, “The shorts gave us the opportunity to get our arms around the property before we really dove into the meat of the film. They gave us a chance to explore these new characters.”


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter @audiojeney.

A Conversation: 3P Studio founder Haley Stibbard

Australia’s 3P Studio is a post house founded and led by artisan Haley Stibbard. The company’s portfolio of work includes commercials for brands such as Subway, Allianz and Isuzu Motor Company as well as iconic shows like Sesame Street. Stibbard’s path to opening her own post house was based on necessity.

After going on maternity leave to have her first child in 2013, she returned to her job at a content studio to find that her role had been made redundant. She was subsequently let go. Needing and wanting to work, she began freelancing as an editor — working seven days a week and never turning down a job. Eventually she realized that she couldn’t keep up with that type of schedule and took her fate into her own hands. She launched 3P Studio, one of Brisbane’s few women-led post facilities.

We reached out to Stibbard to ask about her love of post and her path to 3P Studio.

What made you want to get into post production? School?
I had a strong love of film, which I got from my late dad, Ray. He was a big film buff and would always come home from work when I was a kid with a shopping bag full of $2 movies from the video store and he would watch them. He particularly liked the crime stories and thrillers! So I definitely got my love of film and television from him.

We did not have any film courses at high school in the ‘90s, so the closest I could get was photography. Without a show reel it was hard to get a place at university in the college of art; a portfolio was a requirement and I didn’t have one. I remember I had to talk my way into the film program, and in the end I think they just got sick of me and let me into the course through the back door without a show reel — I can be very persistent when I want to be. I always had enjoyed editing and I was good at it, so in group tasks I was always chosen as the editor and then my love of post came from there.

What was your first job?
My very first job was quite funny, actually. I was working in both a shoe store and a supermarket at the time, and two post positions became available one day: an in-house editor role at a big furniture chain and a production assistant job at a large VFX company at Movie World on the Gold Coast. Anyone who knows me knows that I would be the worst PA in the world. So, luckily for that company director, I didn’t get the PA job and became the in-house editor for the furniture chain.

I’m glad that I took that job, as it taught me so much — how to work under pressure, how to use an Avid, how to work with deadlines, what a key number was, how to dispatch TVCs to the stations, how to be quick and accurate, and how to take constructive feedback.

I made every mistake known to man, including one weekend when I forgot to remove the 4×3 safe bars from a TVC and my boss saw it on TV. I ended up having to drive to the office, climb the locked fence to get in and pull it off air. So I’ve learned a lot of things the hard way, but my boss was a very patient and forgiving man, and 18 years later he is now a client of mine!

What job did you hold when you went out on maternity leave?
Before I left on maternity leave to have my son Dashiell, I was an editor for a small content company. I have always been a jack-of-all-trades and I took care of everything from offline to online, grading in Resolve, motion graphics in After Effects and general design. I loved my job and I loved the variety that it brought. Doing something different every day was very enjoyable.

After leaving that job, you started freelancing as an editor. What systems did you edit on at the time and what types of projects? How difficult a time was that for you? New baby, working all the time, etc.
I started freelancing when my son was just past seven months old. I had a mortgage and had just come off six months of unpaid maternity leave, so I needed to make a living and I needed to make it quickly. I also had the added pressure of looking after a young child under the age of one who still needed his mother.

So I started contacting advertising agencies and production companies that I thought may be interested in my skill set. I just took every job that I could get my hands on, as I was always worried that every job that I took could potentially be my last for a while. I was lucky that I had an incredibly well-behaved baby! I never said “no” to a job.

As my client base started to grow, my clients would always book me since they knew that I would never say “no” (they know I still don’t say no!). It got to the point where I was working seven days a week. I worked all day when my son was in childcare and all night after he would go to bed. I would take the baby monitor downstairs where I worked out of my husband’s ‘man den.’

As my freelance business grew, I was so lucky that I had the most supportive husband in the world, who was doing everything for me: the washing, the cleaning, the cooking, bath time, as well as holding down his own full-time job as an engineer. I wouldn’t have been able to do what I did for that period of time without his support and encouragement. This time really proved to be a huge stepping stone for 3P Studio.

Do you remember the moment you decided you would start your own business?
There wasn’t really a specific moment where I decided to start my own business. It was something that seemed to just naturally come together. The busier I became, the more opportunities came about, like having enough work through the door to build a space and hire staff. I have always been very strategic in regard to the people that I have brought on at 3P, and the timing in which they have come on board.

Can you walk us through that bear of a process?
At the start of 2016, I made the decision to get out of the house. My work life was starting to blend in with my home life and I needed to have that separation. I worked out of a small office for 12 months, and about six months into that I was able to purchase the office space that would become our studio today.

I went to work planning the fit-out for the next six months. The studio was an investment in the business, and I needed a place where my clients could also bring their clients for approvals, screenings and collaboration on jobs, as well as just generally enjoying the space.

The office space was an empty white shell, but the beauty of coming into a blank canvas was that I was able to create a studio that was specifically built for post production. I was lucky in that I had worked in some of the best post houses in the country as an editor, and this being a custom build I was able to take all the best bits out of all the places I had previously worked and put them into my studio without the restriction of existing walls.

I built up the walls, ripped down the ceilings and was able to design the edit suites and infrastructure, all the way down to designing and laying the cable runs myself in a way I knew would work for us down the line. Then we saved money and added more equipment to the studio bit by bit. It wasn’t 0 to 100 overnight; I had to work at the business development side of the company a lot, and I spent a lot of long days sitting by myself in those edit suites doing everything. Soon, word of mouth started to circulate and the business started to grow on the back of some nice jobs from my existing loyal clients.

What type of work do you do, and what gear do you call on?
3P Studio is a boutique studio that specializes in full-service post production; we also shoot content when required.

Our clients range anywhere from small content videos for the web all the way up to large commercial campaigns and everything in between.

There are currently six of us working full time in the studio, and we handle everything in-house, from offline editing to VFX to videography and sound design. We work primarily in the Adobe Creative Suite for offline editing in Premiere, mixed with Maxon Cinema 4D/Autodesk Maya for 3D work, Autodesk Flame and Side Effects Houdini for online compositing and VFX, Blackmagic Resolve for color grading and Pro Tools HD for sound mixing. We use EditShare EFS shared storage nodes for collaborative working and sharing of content between the mix of creative platforms we use.

This year we have invested in a Red Digital Cinema camera as well as an EditShare XStream 200 EFS scale-out single-node server so we can become that one-stop shop for our clients. We have been able to create an amazing creative space for our clients to come and work with us, be it from the bespoke design of our editorial suites or the high level of client service we offer.

How did you build 3P Studios to be different from other studios you’ve worked at?
From a personal perspective, the culture that we have been able to build in the studio is unlike anywhere else I have worked in that we genuinely work as a team and support each other. On the business side, we cater to clients of all sizes and budgets while offering uncompromising services and experience whether they be large or small. Making sure they walk away feeling that they have had great value and exemplary service for their budget means that they will end up being a customer of ours for life. This is the mantra that I have been able to grow the business on.

What is your hiring process like, and how do you protect employees who need to go out on maternity or family leave?
When I interview people to join 3P, attitude and willingness to learn is everything to me — hands down. You can be the most amazing operator on the planet, but if your attitude stinks then I’m really not interested. I’ve been incredibly lucky with the team that I have, and I have met them along the journey at exactly the right times. We have an amazing team culture and as the company grows our success is shared.

I always make it clear that it’s swings and roundabouts and that family is always number one. I am there to support my team if they need me to be, not just inside of work but outside as well and I receive the same support in return. We have flexible working hours, I have team members with young families who, at times, are able to work both in the studio and from home so that they can be there for their kids when they need to be. This flexibility works fine for us. Happy team members make for a happy, productive workplace, and I like to think that 3P is forward thinking in that respect.

Any tips for young women either breaking into the industry or in it that want to start a family but are scared it could cost them their job?
Well, for starters, we have laws in Australia that make it illegal for any woman in this country to be discriminated against for starting a family. 3P also supports the 18 weeks paid maternity leave available to women heading out to start a family. I would love to see more female workers in post production, especially in operator roles. We aren’t just going to be the coffee and tea girls, we are directors, VFX artists, sound designers, editors and cinematographers — the future is female!

Any tips for anyone starting a new business?
Work hard, be nice to people and stay humble because you’re only as good as your last job.

Main Image: Haley Stibbard (second from left) with her team.

London design, animation studio Golden Wolf sets up shop in NYC

Animation studio Golden Wolf, headquartered in London, has launched its first stateside location in New York City. The expansion comes on the heels of an alliance with animation/VFX/live-action studio Psyop, a minority investor in the company. Golden Wolf now occupies studio space in SoHo adjacent to Psyop and its sister company Blacklist, which formerly represented Golden Wolf stateside and was instrumental to the relationship.

Among the year’s highlights from Golden Wolf are an integrated campaign for Nike FA18 Phantom (client direct), a spot for the adidas x Parley Run for the Oceans initiative (TBWA Amsterdam) in collaboration with Psyop, and Marshmello’s Fly music video for Disney. Golden Wolf also received an Emmy nomination for its main title sequence for Disney’s DuckTales reboot.

Heading up Golden Wolf’s New York office are two transplants from the London studio, executive producer Dotti Sinnott and art director Sammy Moore. Both joined Golden Wolf in 2015, Sinnott from motion design studio Bigstar, where she was a senior producer, and Moore after a run as a freelance illustrator/designer in London’s agency scene.

Sinnott comments: “Building on the strength of our London team, the Golden Wolf brand will continue to grow and evolve with the fresh perspective of our New York creatives. Our presence on either side of the Atlantic not only brings us closer to existing clients, but also positions us perfectly to build new relationships with New York-based agencies and brands. On top of this, we’re able to use the time difference to our advantage to work on faster turnarounds and across a range of budgets.”

Founded in 2013 by Ingi Erlingsson, the studio’s executive creative director, Golden Wolf is known for youth-oriented work — especially content for social media, entertainment and sports — that blurs the lines of irreverent humor, dynamic action and psychedelia. Erlingsson was once a prolific graffiti artist and, later, illustrator/designer and creative director at U.K.-based design agency ilovedust. Today he inspires Golden Wolf’s creative culture and disruptive style fed in part by a wave of next-gen animation talent coming out of schools such as Gobelins in France and The Animation Workshop in Denmark.

“I’m excited about our affiliation with Psyop, which enjoys an incredible legacy producing industry-leading animated advertising content,” Erlingsson says. “Golden Wolf is the new kid on the block, with bags of enthusiasm and an aim to disrupt the industry with new ideas. The combination of the two studios means that we are able to tackle any challenge, regardless of format or technical approach, with the support of some of the world’s best artists and directors. The relationship allows brands and agencies to have complete confidence in our ability to solve even the biggest challenges.”

Golden Wolf’s initial work out of its New York studio includes spots for Supercell (client direct) and Bulleit Bourbon (Barton F. Graf). Golden Wolf is represented in the US market by Hunky Dory for the East Coast, Baer Brown for the Midwest and In House Reps for the West Coast. Stink represents the studio for Europe.

Main Photo: (L-R) Dotti Sinnott, Ingi Erlingsson and Sammy Moore.

Reallusion intros three tools for mocap, characters

Reallusion has launched three new motion capture and character creation products: Character Creator 3, a stand-alone character creation tool; Motion Live, a realtime motion capture solution; and 3D Face Motion Capture with Live Face for iPhone X. With these products Reallusion is offering a total solution to build, morph, animate and gamify 3D characters.

Character Creator 3 (CC3), the new generation of iClone Character Creator, has separated from iClone to become a professional stand-alone tool. With a new quad base, roundtrip editing with ZBrush and photorealistic rendering using Iray, Character Creator 3 is a full character-creation solution for generating optimized 3D characters that are ready for games or intensive artistic design.

CC3 provides a new game character base with topology optimized for mobile, game and AR/VR developers. The big breakthrough is the integration with InstaLOD’s model and material optimization technologies to generate game-ready characters that are animatable on the fly, fulfilling the complete character pipeline on polygon reduction, material merge, texture baking, remeshing and LOD generation.

CC3 launches this month and is available now for preorder for $199.

iClone Motion Live, the multidevice motion capture system, connects industry-standard motion gear — including Rokoko, Leap Motion, Xsens, Faceware, OptiTrack, Noitom and iPhone X — into one solution.

Motion Live’s intuitive plug-and-play design makes connecting complicated mocap devices simple by animating custom imported characters or fully rigged 3D characters generated by Character Creator, Daz Studio or other industry-standard sources.

Reallusion has also debuted 3D Face Motion Capture for iPhone X, paired with the Live Face app for iClone. As a result, users can record instant facial motion capture on any 3D character with an iPhone X. Reallusion has expanded the technology behind Animoji and Memoji to lift iPhone X animation and motion capture to the next level for studios and independent creators. The solution combines the power of iPhone X mocap with iClone Motion Live to blend face motion capture with Xsens, Perception Neuron, Rokoko, OptiTrack and Leap Motion for a truly realtime live experience in full-body mocap.

Review: Foundry’s Athera cloud platform

By David Cox

I’ve been thinking for a while that there are two types of post houses — those that know what cloud technology can do for them, and those whose days are numbered. That isn’t to say that the use of cloud technology is essential to the survival of a post house, but if they haven’t evaluated the possibilities of it they’re probably living in the past. In such a fast-moving business, that’s not a good place to be.

The term “cloud computing” suffers a bit from being hijacked by know-nothing marketeers and has become a bit vague in meaning. It’s quite simple though: it just means a computer (or storage) owned and maintained by someone else, housed somewhere else and used remotely. The advantage is that a post house can reduce its destructive fixed overheads by owning fewer computers and thus save money on installation and upkeep. Cloud computers can be used as and when they are needed. This allows scaling up and down in proportion to workload.

Over the last few years, several providers have created global datacenters containing upwards of 50,000 servers per site, entirely for the use of anyone who wants to “remote in.” Amazon and Google are the two biggest providers, but as anyone who has tried to harness their power for post production can confirm, they’re not simple to understand or configure. Amazon alone has hundreds of different computer “instance” types, and accessing them requires navigating through a sea of unintelligible jargon. You must know your Elastic Beanstalks from your EC2, EKS and Lambda. And make sure you’ve worked out how to connect your S3, EFS and Glacier. Software licensing can also be tricky.

The truth is, these incredible cloud installations are for cleverer people than those of us that just like to make pretty pictures. They are more for the sort that like to build neural networks and don’t go outside very much. What our industry needs is some clever company to make a nice shiny front end that allows us to harness that power using the tools we know and love, and just make it all a bit simpler. Enter Athera, from Foundry. That’s exactly what they’ve done.

What is Athera?

Athera is a platform hosted on Google Cloud infrastructure that presents a user with icons for apps such as Nuke and Houdini. Access to each app is via short-term (30-day) rental. When an available app icon is clicked, a cloud computer is commanded into action, pre-installed with the chosen app. From then on, the app is used just as if locally installed. Of course, the app is actually running on a high-performance computer located in a secure and nicely cooled datacenter environment. Provided the user has a vaguely decent Internet connection, they’re good to go, because only the user interface is being transmitted across the network, not the actual raw image data.

Apps available on Athera include Foundry’s products, plus a few others. Nuke is represented in its base form, plus a Nuke X variant, Nuke Studio, and a combination of Nuke X and Cara VR. Also available are the Mari texture painting suite, the Katana look development app and the Modo CGI modeling software.

Athera also offers access to non-Foundry products like CGI software Houdini and Blender, as well as the Gaffer management tool.

Nuke

In my first test, I rustled up an instance of Nuke Studio and one of Blender. The first thing I wanted to test was the GPU speed, as this can be somewhat variable for many cloud computer types (usually between zero and not much). I was pleasantly surprised, as the rendering speed was close to that of a local Nvidia GeForce GTX 1080, which is pretty decent. I was also pleased to see that user preferences were maintained between sessions.

One thing that particularly impressed me was how I could call up multiple apps together and Athera would effectively build a network in the background to link them all up. Frames rendered out of Blender were instantly available in the cloud-hosted Nuke Studio, even though it was running on a different machine. This suggests the Athera infrastructure is well thought out because multi-machine, networked pipelines with attached storage are constructed with just a few clicks and without really thinking about it.

Access to the Athera apps is either by web browser or via a local client software called “Orbit.” In web browser mode, each app opens in its own browser tab. With Orbit, each app appears in a dedicated local window. Orbit boasts lower latency and the ability to use local hardware such as multiple monitors. Latency, which would show itself as a frustrating delay between control input and visual feedback, was impressively low, even when using the web browser interface. Generally, it was easy to forget that the app being used was not installed locally.

Getting files in and out was also straightforward. A Dropbox account can be directly linked, although a Google or Amazon S3 storage “bucket” is preferred for speed. There is also a hosted app called “Toolbox,” which is effectively a file browser to allow the management of files and folders.

The Athera platform also contains management and reporting features. A manager can set up projects and users, setting out which apps and projects a user has access to. Quotas can be set, and full reports are given as to who did what, when and with which app.

Athera’s pricing is laid out on their website and it’s interesting to drill into the costs and make comparisons. A user buys access to apps in 30-day blocks. Personally, I would like to see shorter blocks at some point to increase up/down scale flexibility. That said, render-only instances for many of the apps can be accessed on a per-second billing basis. The 30-day block comes with a “fair use” policy of 200 hours. This is a hard limit, which equates to around nine and a half hours per day for five-day weeks (which is technically known in post production as part time).

Figuring Out Cost
Blender is a good place to start analyzing cost because it’s open source (free) software, so the $244 Athera cost to run for 30 days/200 hours must be for hardware only. This equates to $1.22 per hour, which, compared to direct cloud computer usage, is pretty good value for the GPU-backed machine on offer.

Modo

Another way of looking at $244 a month: a new computer costing $5,800 depreciates at roughly this monthly rate if written off over two years. That is to say, if a computer of that value is kept for two years before being replaced, it effectively loses roughly $241 per month in value. If depreciated over three years, the figure is about $80 per month less. Of course, that’s just the cost of depreciation. Cost of ownership must also include the costs of updating, maintaining, powering, cooling, insuring, housing and repairing it if (when!) it breaks down. If a cloud computer breaks down, Google has a few thousand waiting in the wings. In general, the base hardware cost seems quite competitive.
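
Laying that arithmetic out explicitly (figures from above; straight-line depreciation to a residual value of zero is assumed):

```python
# Worked version of the cost comparison above (straight-line depreciation,
# zero residual value assumed; all dollar figures are from the article).
athera_blender = 244.0                       # $ per 30-day block, 200-hour cap
print(round(athera_blender / 200, 2))        # 1.22 -> $/hour of machine time

workstation = 5800.0
print(round(workstation / 24, 2))            # 241.67 -> $/month over two years
print(round(workstation / 36, 2))            # 161.11 -> $/month over three years
print(round(workstation / 24 - workstation / 36, 2))  # 80.56 -> ~$80/month less
```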

Of course, Blender is not really the juicy stuff. Access to a base Nuke, complete with workstation, is $685 per 30 days / 200 hours. Nuke X is $1,025. There are also “power” options for around 20% more, where a significantly more powerful machine is provided. Compared to running a local machine with purchased or rented software, these prices are very interesting. But when the ability to scale up and down with workload is factored in, especially being able to scale down to nothing during quiet times, the case for Athera becomes quite compelling.

Another helpful factor is that a single 30-day access block to a particular app can be shared between multiple users — as long as only one user has control of the app at a time. This is subject to the fair use limitation.

There is an issue if commercial (licensed) plug-ins are needed. For the time being, these can’t be used on Athera due to the obvious licensing issues relating to their installation on a different cloud machine each time. Hopefully, plug-in developers will become alive to the possibilities of pay-per-use licensing, as a platform like Athera could be the perfect storefront.

Mari

Security
One of the biggest concerns about using remote computing is that of security. This concern tends to be more perceptual than real. The truth is that a Google datacenter is likely to have significantly more security than an average post company’s machine room. Also, they will be employing the best in the security business. But if material being worked on leaks out into the public, telling a client, “But I just sent it to Google and figured it would be fine,” isn’t going to sound great. Realistically, the most likely concern for security is the sending of data to and from a datacenter. A security breach inside the datacenter is very unlikely. As ever, a post producer has to remain vigilant.

Summing Up
I think Foundry has been very smart and forward thinking to create a platform that is able to support more than just Foundry products in the cloud. It would have been understandable if they had just made it a storefront for alternative ways of using Nuke (etc.), but they clearly see a bigger picture. Using a platform like Athera, post infrastructure can be assembled and disassembled on demand, allowing post producers to match their overheads to their workload.

Athera enables smart post producers to build a highly scalable post environment with access to a global pool of creative talent who can log in and contribute from anywhere with little more than a modest computer and internet connection.

I hate the term game-changer — it’s another term so abused by know-nothing marketeers who have otherwise run out of ideas — but Athera, or at least what this sort of platform promises to provide, is most certainly a game-changer. Especially if more apps from different manufacturers can be included.


David Cox is a VFX compositor and colorist with 20-plus years of experience. He started his career with MPC and The Mill before forming his own London-based post facility. Cox recently created interactive projects with full body motion sensors and 4D/AR experiences.

Our SIGGRAPH 2018 video coverage

SIGGRAPH is always a great place to wander around and learn about new and future technology. You can see amazing visual effects reels and learn how the work was created from the artists themselves. You can get demos of new products, and you can immerse yourself in a completely digital environment. In short, SIGGRAPH is educational and fun.

If you weren’t able to make it this year, or attended but couldn’t see it all, we would like to invite you to watch our video coverage from the show.

postPerspective Impact Award winners from SIGGRAPH 2018

postPerspective has announced the winners of our Impact Awards from SIGGRAPH 2018 in Vancouver. Seeking to recognize debut products with real-world applications, the postPerspective Impact Awards are voted on by an anonymous judging body made up of respected industry artists and professionals. It’s working pros who are going to be using new tools — so we let them make the call.

The awards honor innovative products and technologies for the visual effects, post production and production industries that will influence the way people work. They celebrate companies that push the boundaries of technology to produce tools that accelerate artistry and actually make users’ working lives easier.

While SIGGRAPH’s focus is on VFX, animation, VR/AR, AI and the like, the types of gear they have on display vary. Some are suited for graphics and animation, while others have uses that slide into post production, which makes these SIGGRAPH Impact Awards doubly interesting.

The winners are as follows:

postPerspective Impact Award — SIGGRAPH 2018 MVP Winner:

They generated a lot of buzz at the show, as well as a lot of votes from our team of judges, so our MVP Impact Award goes to Nvidia for its Quadro RTX raytracing GPU.

postPerspective Impact Awards — SIGGRAPH 2018 Winners:

  • Maxon for its Cinema 4D R20 3D design and animation software.
  • StarVR for its StarVR One headset with integrated eye tracking.

postPerspective Impact Awards — SIGGRAPH 2018 Horizon Winners:

This year we started a new Impact Award category. Our Horizon Award celebrates the next wave of impactful products being previewed at a particular show. At SIGGRAPH, the winners were:

  • Allegorithmic for its Substance Alchemist tool powered by AI.
  • OTOY and Epic Games for their OctaneRender 2019 integration with Unreal Engine 4.

And while these products and companies didn’t win enough votes for an award, our voters believe they do deserve a mention and your attention: Wrnch, Google Lightfields, Microsoft Mixed Reality Capture and Microsoft Cognitive Services integration with PixStor.

Artifex provides VFX limb removal for Facebook Watch’s Sacred Lies

Vancouver-based VFX house Artifex Studios created CG amputation effects for the lead character in Blumhouse Productions’ new series for Facebook Watch, Sacred Lies. In the show, the lead character, Minnow Bly (Elena Kampouris), emerges after 12 years in the Kevinian cult missing both of her hands. Artifex was called on to remove the actress’ limbs.

VFX supervisor Rob Geddes led the Artifex team that created the hand/stump transposition, which encompassed 165 shots across the series. This involved detailed paint work to remove the real hands, while Artifex 3D artists simultaneously performed tracking and matchmove in SynthEyes to align the CG stump assets to the actress’ forearm.

This was followed up with some custom texture and lighting work in Autodesk Maya and Chaos V-Ray to dial in the specific degree of scarring or level of healing on the stumps, depending on each scene’s context in the story. While the main focus of Artifex’s work was on hand removal, the team also created a pair of severed hands for the first episode after rubber prosthetics didn’t pass the eye test. VFX work was run through Side Effects Houdini and composited in Foundry’s Nuke.

“The biggest hurdle for the team during this assignment was working with the actress’ movements and complex performance demands, especially the high level of interaction with her environment, clothing or hair,” says Adam Stern, founder of Artifex. “In one visceral sequence, Rob and his team created the actual severed hands. These were originally shot practically with prosthetics, however the consensus was that the practical hands weren’t working. We fully replaced these with CG hands, which allowed us to dial in the level of decomposition, dirt, blood and torn skin around the cuts. We couldn’t be happier with the results.”

Geddes adds, “One interesting thing we discovered when wrangling the stumps is that the logical and accurate placement of the wrist bone didn’t necessarily feel correct when the hands weren’t there. There was quite a bit of experimentation to keep the ‘hand-less’ arms from looking unnaturally long or thin.”

Artifex also created a scene of absolute devastation in a burnt forest for Episode 101, using matte painting and set extension to depict extensive fire damage that couldn’t safely be achieved on set. Artifex drew on its experience in environmental VFX creation, tying matte painting and projections together with ample rotoscope work.

Approximately 20 Artifex artists took part in Sacred Lies across 3D, compositing, matte painting, I/O and production.

Watch Artifex founder Adam Stern talk about the show from the floor of SIGGRAPH 2018.

Patrick Ferguson joins MPC LA as VFX supervisor

MPC’s Los Angeles studio has added Patrick Ferguson to its staff as visual effects supervisor. He brings with him experience working in both commercials and feature films.

Ferguson started out in New York and moved to Los Angeles in 2002; he has since worked at a range of visual effects houses along the West Coast, including The Mission, where he was VFX supervisor, and Method, where he was head of 2D. “No matter where I am in the world or what I’m working on, one thing has remained consistent since I started working in the industry: I still love what I do. I think that’s the most important thing.”

Ferguson has collaborated with directors such as Stacy Wall, Mark Romanek, Melina Matsoukas, Brian Billow and Carl Rinsch, and has worked on campaigns for big global brands, including Nike, Apple, Audi, HP and ESPN.

He has also worked on high-profile films, including Pirates of the Caribbean and Alice in Wonderland, and he was a member of the Academy Award-winning team for The Curious Case of Benjamin Button.

“In this new role at MPC, I hope to bring my varied experience of working on large-scale feature films as well as on commercials that have a much quicker turnaround time,” he says. “It’s all about knowing what the correct tools are for the particular job at hand, as every project is unique.”

For Ferguson, there is no substitute for being on set: “Being on set is vital, as that’s when key relationships are forged between the director, the crew, the agency and the entire team. Those shared experiences go a long way in creating a trust that is carried all the way through to the end of the project and beyond.”

Using VFX to bring the new Volkswagen Jetta to life

LA-based studio Jamm provided visual effects for the all-new 2019 Volkswagen Jetta campaign Betta Getta Jetta. Created by Deutsch and produced by ManvsMachine, the series of 12 spots brings the Jetta to life by combining Jamm’s CG design with a color palette inspired by the car’s 10-color ambient lighting system.

“The VW campaign offered up some incredibly fun and intricate challenges. Most notable was the volume of work to complete in a limited amount of time — 12 full-CG spots in just nine weeks, each one unique with its own personality,” says VFX supervisor Andy Boyd.

Collaboration was key to delivering so many spots in such a short span of time. Jamm worked closely with ManvsMachine on every shot. “The team had a very strong creative vision which is crucial in the full 3D world where anything is possible,” explains Boyd.

Jamm employed a variety of techniques for the music-centric campaign, which highlights updated features such as ambient lighting and Beats Audio. The series includes spots titled “Remix,” “Bumper-to-Bumper,” “Turb-Whoa,” “Moods,” “Bass,” “Rings,” “Puzzle” and “App Magnet,” along with 15-second teasers, all of which aired on various broadcast, digital and social channels during the World Cup.

For “Remix,” Jamm brought both a 1985 and a 2019 Jetta to life, along with a hybrid mix of the two, adding a cool layer of turntablist VFX. For “Puzzle,” the team cut up the car procedurally in Houdini, which allowed them to change the slices around as needed.
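
Procedural slicing keeps the cuts as parameters rather than baked geometry, which is what makes late changes cheap. As a rough idea of the underlying math (a plain-Python sketch with invented names, not Jamm’s actual Houdini setup), you can bin points by a stack of parallel cutting planes and then nudge individual slices:

    # Hypothetical sketch of procedural slicing: partition points along one
    # axis into slices so individual slices can be offset or reordered later.

    def slice_points(points, axis=0, num_slices=6, lo=-1.0, hi=1.0):
        """Assign each point to a slice index based on its position."""
        width = (hi - lo) / num_slices
        slices = {i: [] for i in range(num_slices)}
        for p in points:
            i = int((p[axis] - lo) / width)
            slices[min(max(i, 0), num_slices - 1)].append(p)
        return slices

    def offset_slice(slices, index, dy):
        """'Change around' one slice by nudging it vertically."""
        slices[index] = [(x, y + dy, z) for (x, y, z) in slices[index]]

    pts = [(-0.9, 0.0, 0.0), (-0.1, 0.0, 0.0), (0.8, 0.0, 0.0)]
    s = slice_points(pts)
    offset_slice(s, 5, 0.2)   # lift the frontmost slice by 0.2 units
    print(s)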

For “Bass,” Jamm helped bring personality to the car while keeping its movements grounded in reality. Animation supervisor Stew Burris pushed the car’s performance and dialed in the choreography of the dance with ManvsMachine as the Jetta discovered the beat, adding exciting life to the car as it bounced to the bassline and hit the switches on a little three-wheel motion.

We reached out to Jamm’s Boyd to find out more.

How early did Jamm get involved?
We got involved as soon as agency boards were client-approved. We worked hand in hand with ManvsMachine to previs each of the spots in order to lay the foundation for our CG team to execute both the agency’s and the director’s vision.

What were the challenges of working on so many spots at once?
The biggest challenge was for editorial to keep up with the volume of previs options we gave them to present to the agency.

Other than Houdini, what tools did you use?
Flame, Nuke and Maya were used as well.

What was your favorite spot of the 12 and why?
“Puzzle” was our favorite to work on. It was the last of the bunch delivered to Deutsch, and we treated it with a more technical approach, slicing up the car like a Rubik’s Cube.

SIGGRAPH: StarVR One’s VR headset with integrated eye tracking

StarVR was at SIGGRAPH 2018 with the StarVR One, its next-generation VR headset built to deliver a lifelike VR experience. Featuring advanced optics, VR-optimized displays, integrated eye tracking and a vendor-agnostic tracking architecture, the StarVR One is designed from the ground up for use cases in the commercial and enterprise sectors.

The StarVR One head-mounted display provides a nearly 100 percent human viewing angle — a 210-degree horizontal and 130-degree vertical field of view — and supports a more expansive user experience. Approximating natural human peripheral vision, StarVR One can support rigorous and exacting VR experiences such as driving and flight simulations, as well as tasks such as identifying design issues in engineering applications.

StarVR’s custom AMOLED displays serve up 16 million subpixels at a refresh rate of 90 frames per second. The proprietary displays are designed specifically for VR with a unique full-RGB-per-pixel arrangement to provide a professional-grade color spectrum for real-life color. Coupled with StarVR’s custom Fresnel lenses, the result is a clear visual experience within the entire field of view.
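
A quick back-of-the-envelope conversion helps put that subpixel figure in context. With a full-RGB-per-pixel arrangement, each pixel contributes three subpixels, so 16 million subpixels corresponds to roughly 5.3 million pixels split across the two eyes (our arithmetic from the published specs, not an official resolution figure):

    # Back-of-the-envelope reading of the published specs; assumes the
    # 16M subpixels are split evenly across the two per-eye displays.
    subpixels = 16_000_000
    per_pixel = 3                      # full RGB: three subpixels per pixel
    pixels_total = subpixels / per_pixel
    print(round(pixels_total / 1e6, 1), "million pixels total")   # ~5.3
    print(round(pixels_total / 2 / 1e6, 1), "million per eye")    # ~2.7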

StarVR One automatically measures interpupillary distance (IPD) and instantly provides the best image adjusted for every user. Integrated Tobii eye-tracking technology enables foveated rendering, a technology that concentrates high-quality rendering only where the eyes are focused. As a result, the headset pushes the highest-quality imagery to the eye-focus area while maintaining the right amount of peripheral image detail.
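
The decision at the heart of foveated rendering is simple to state: shading quality falls off with angular distance from the tracked gaze point. A toy sketch of that logic follows (illustrative thresholds only; this is not StarVR’s or Tobii’s actual API):

    import math

    # Toy foveated-rendering decision: pick a relative shading rate per
    # screen tile from its angular distance to the gaze point. The radii
    # and rates below are invented for illustration.

    def tile_quality(tile_deg, gaze_deg, foveal=10.0, mid=30.0):
        """tile_deg/gaze_deg are (x, y) positions in degrees of visual angle."""
        eccentricity = math.hypot(tile_deg[0] - gaze_deg[0],
                                  tile_deg[1] - gaze_deg[1])
        if eccentricity < foveal:
            return 1.0    # full quality where the eyes are focused
        if eccentricity < mid:
            return 0.5    # reduced shading rate in the near periphery
        return 0.25       # coarse shading in the far periphery

    print(tile_quality((5.0, 0.0), (0.0, 0.0)))    # 1.0, in the fovea
    print(tile_quality((40.0, 0.0), (0.0, 0.0)))   # 0.25, far periphery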

StarVR One eye-tracking thus opens up commercial possibilities that leverage user-intent data for content gaze analysis and improved interactivity, including heat maps.

Two products are available with two different integrated tracking systems. The StarVR One is ready out of the box for the SteamVR 2.0 tracking solution. Alternatively, StarVR One XT is embedded with active optical markers for compatibility with optical tracking systems for more demanding use cases. It is further enhanced with ready-to-use plugins for a variety of tracking systems and with additional customization tools.

The StarVR One headset weighs 450 grams, and its ergonomic headband design evenly distributes this weight to ensure comfort even during extended sessions.

The StarVR software development kit (SDK) simplifies the development of new content or the upgrade of an existing VR experience to StarVR’s premium wide-field-of-view platform. Developers also have the option of leveraging the StarVR One dual-input VR SLI mode, maximizing the rendering performance. The StarVR SDK API is designed to be familiar to developers working with existing industry standards.

The development effort that culminated in the launch of StarVR One involved extensive collaboration with StarVR technology partners, which include Intel, Nvidia and Epic Games.

Allegorithmic’s Substance Painter adds subsurface scattering

Allegorithmic has released the latest additions to its Substance Painter tool, aimed at VFX and game studios and pros looking for ways to create realistic lighting effects. Substance Painter enhancements include subsurface scattering (SSS), new projection and fill tools, improvements to the UX and support for a range of new meshes.

Using Substance Painter’s newly updated shaders, artists can add subsurface scattering as a default option: add a Scattering map to a texture set and activate the new SSS post-effect. Skin, organic surfaces, wax, jade and other translucent materials that require extra care will now look more realistic, with redistributed light shining through from under the surface.
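
The reason a Scattering map reads as wax or skin comes down to light surviving a short trip through the material instead of stopping at the surface. As a heavily simplified illustration (a basic Beer-Lambert falloff with made-up coefficients, not Allegorithmic’s actual shader math), transmitted light decays exponentially with thickness:

    import math

    # Simplified translucency model: transmittance follows Beer-Lambert
    # decay with thickness. Coefficients are invented for illustration.

    def transmittance(thickness_mm, extinction_per_mm):
        """Fraction of light surviving a straight path through the material."""
        return math.exp(-extinction_per_mm * thickness_mm)

    # Thin regions pass far more light than thick ones, which is what
    # makes backlit skin, wax and jade glow at their edges.
    for thickness in (0.5, 2.0, 8.0):
        print(thickness, "mm ->", round(transmittance(thickness, 0.8), 3))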

The release also includes updates to projection and fill tools, beginning with the user-requested addition of non-square projection. Images can be loaded in both the projection and stencil tool without altering the ratio or resolution. Those projection and stencil tools can also disable tiling in one or both axes. Fill layers can be manipulated directly in the viewport using new manipulator controls. Standard UV projections feature a 2D manipulator in the UV viewport. Triplanar Projection received a full 3D manipulator in the 3D viewport, and both can be translated, scaled and rotated directly in-scene.

Along with the improvements to the artist tools, Substance Painter includes several updates designed to improve the overall experience for users of all skill levels. Consistency between tools has been improved, and additions like exposed presets in Substance Designer and a revamped, universal UI guide make it easier for users to jump between tools.

Additional updates include:
• Alembic support — The Alembic file format is now supported by Substance Painter, starting with mesh and camera data. Full animation support will be added in a future update.
• Camera import and selection — Multiple cameras can be imported with a mesh, allowing users to switch between angles in the viewport; previews of the framed camera angle now appear as an overlay in the 3D viewport.
• Full glTF support — Substance Painter now automatically imports and applies textures when loading glTF meshes, removing the need to import or adapt mesh downloads from Sketchfab.
• ID map drag-and-drop — Both materials and smart materials can be taken from the shelf and dropped directly onto ID colors, automatically creating an ID mask.
• Improved Substance format support — Improved tweaking of Substance-made materials and effects thanks to visible-if and embedded presets.

Behind the Title: Weta Digital VFX supervisor Erik Winquist

NAME: Erik Winquist

COMPANY: Wellington, New Zealand’s Weta Digital

CAN YOU DESCRIBE YOUR COMPANY?
We’re currently a collection of about 1,600 ridiculously talented artists and developers down at the bottom of the world who have created some of the most memorable digital characters and visual effects for film over the last couple of decades. We’re named after a giant New Zealand bug.

WHAT’S YOUR JOB TITLE?
Visual Effects Supervisor

WHAT DOES THAT ENTAIL?
Making the director and studio happy without making my crew unhappy. Ensuring that everybody on the shoot has the same goal in mind for a shot before the cameras start rolling is one way to help accomplish both of those goals. Using the strengths and good ideas of everybody on your team is another.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
The amount of problem solving that is required. Every show is completely different from the last. We’re often asked to do something and don’t know how we’re going to accomplish it at the outset. That’s where it’s incredibly important to have a crew full of insanely brilliant people you can bash ideas around with.

HOW DID YOU START YOUR CAREER IN VFX?
I went to school for it. After graduating from the Ringling College of Art and Design with a degree in computer animation, I eventually landed a job as an assistant animator at Pacific Data Images (PDI). The job title was a little misleading, because although my degree was fairly character animation-centric, the first thing I was asked to do at PDI was morphing. I found that I really enjoyed working on the 2D side of things, and that sent me down a path that ultimately got me hired as a compositor at Weta on The Lord of the Rings.

HOW LONG HAVE YOU BEEN WORKING IN VFX?
I was hired by PDI in 1998, so I guess that means 20 years now. (Whoa.)

HOW HAS THE VFX INDUSTRY CHANGED IN THE TIME YOU’VE BEEN WORKING? WHAT’S BEEN GOOD? WHAT’S BEEN BAD?
Oh, there’s just been so much great stuff. We’re able to make images now that are completely indistinguishable from reality. Thanks to massive technology advancements over the years, interactivity for artists has gotten way better. We’re sculpting incredible amounts of detail into our models, painting them with giga-pixels worth of texture information, scrubbing our animation in realtime, using hardware-accelerated engines to light our scenes, rendering them with physically-based renderers and compositing with deep images and a 3D workspace.

Of course, all of these efficiency gains get gobbled up pretty quickly by the ever-expanding vision of the directors we work for!

The industry’s technology advancements and flexibility have also perhaps had some downsides. Studios demand increasingly shorter post schedules, prep time is reduced, and shots can be less planned out because so much can be decided in post. When the brief is constantly shifting, it’s difficult to deliver the quality that everyone wants. And when the quality isn’t there, suddenly the Internet starts clamoring that “CGI is ruining movies!”

But when a great idea is planned well by a decisive director and executed brilliantly by a visual effects team working in concert with all of the other departments, the movie magic that results is just amazing. And that’s why we’re all here doing what we do.

DID A PARTICULAR FILM INSPIRE YOU ALONG THIS PATH IN ENTERTAINMENT?
There were some films I saw very early on that left a lasting impression: Clash of the Titans, The Empire Strikes Back. Later inspiration came in high school with the TV spots that Pixar was doing prior to Toy Story, and the early computer graphics work that Disney Feature Animation was employing in their films of the early ‘90s.

But the big ones that really set me off around this time were ILM’s work on Jurassic Park, and films like Jim Cameron’s The Abyss and Terminator 2. That’s why it was a particular kick to find myself on set with Jim on Avatar.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Dailies. When I challenge an artist to bring their best and they come up with an idea that completely surprises me, one that is way better than what I had imagined or asked for. Those moments are gold. Dailies is pretty much the only chance I have to see a shot for the first time the way an audience member does, so I pay a lot of attention to my reaction to that very first impression.

WHAT’S YOUR LEAST FAVORITE?
Getting a shot ripped from our hands by those pesky deadlines before every little thing is perfect. And scheduling meetings. Though, the latter is critically important to make sure that the former doesn’t happen.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
There was a time in grade school when I thought I might like to go into sound effects, which is a really interesting what-if scenario for me to think about. But these days, if I were to hang up my VFX hat, I imagine I would end up doing something photography-related. It’s been a passion for a very long time.

Rampage

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I supervised Weta’s work on Rampage, starring Dwayne Johnson and a very large albino gorilla. Prior to that was War for the Planet of the Apes, Spectral and Dawn of the Planet of the Apes.

WHAT IS THE PROJECT/S THAT YOU ARE MOST PROUD OF?
We had a lot of fun working on Rampage, and I think audiences had a ton of fun watching it. I’m quite proud of what we achieved with Dawn of the Planet of the Apes. But I’m also really fond of what our crew turned out for the Netflix film Spectral. That project gave us the opportunity to explore some VFX-heavy sci-fi imagery and was a really interesting challenge.

WHAT TOOLS DO YOU USE DAY TO DAY?
Most of my day revolves around reviewing work and communicating with my production team and the crew, so it’s our in-house review software, Photoshop and e-mail. But I’m constantly jumping in and out of Maya, and always have a Nuke session open for one thing or another. I’m also never without my camera and am constantly shooting reference photos or video, and have been known to initiate impromptu element shoots at a moment’s notice.

WHERE DO YOU FIND INSPIRATION NOW?
Everywhere. It’s why I always have my camera in my bag.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Scuba diving and sea kayaking are two hobbies that get me out in the water, though that happens far less often than I would like. My wife and I recently bought a small rural place north of Wellington. I’ve found going up there and doing “farm stuff” on the weekend is a great way to recalibrate.