Category Archives: 3D

Full-service creative agency Carousel opens in NYC

Carousel, a new creative agency helmed by Pete Kasko and Bernadette Quinn, has opened its doors in New York City. Billing itself as “a collaborative collective of creative talent,” Carousel is positioned to handle projects from television series to ad campaigns for brands, media companies and advertising agencies.

Clients such as PepsiCo’s Pepsi, Quaker and Lays brands; Victoria’s Secret; Interscope Records; A&E Network and The Skimm have all worked with the company.

Designed to provide full 360 capabilities, Carousel allows its brand partners to partake of all its services or pick and choose specific offerings including strategy, creative development, brand development, production, editorial, VFX/GFX, color, music and mix. Along with its client relationships, Carousel has also been the post production partner for agencies such as McGarryBowen, McCann, Publicis and Virtue.

“The industry is shifting in how the work is getting done. Everyone has to be faster and more adaptable to change without sacrificing the things that matter,” says Quinn. “Our goal is to combine brilliant, high-caliber people, seasoned in all aspects of the business, under one roof together with a shared vision of how to create better content in a more efficient way.”

Managing director Dee Tagert comments, “The name Carousel describes having a full set of capabilities from ideation to delivery so that agencies or brands can jump on at any point in their process. By having a small but complete agency team that can manage and execute everything from strategy, creative development and brand development to production and post, we can prove more effective and efficient than a traditional agency model.”

Danielle Russo, Dee Tagert, AnaLiza Alba Leen

AnaLiza Alba Leen comes on board Carousel as creative director with 15 years of global agency experience, and executive producer Danielle Russo brings 12 years of agency experience.
Tagert adds, “The industry has been drastically changing over the last few years. As clients’ hunger for content is driving everything at a much faster pace, it was completely logical to us to create a fully integrative company to be able to respond to our clients in a highly productive, successful manner.”

Carousel is currently working on several upcoming projects for clients including Victoria’s Secret, DNTL, Subway, US Army, Tazo Tea and Range Rover.

Main Image: Bernadette Quinn and Pete Kasko

Behind the Title: Aardman director/designer Gavin Strange

NAME: Gavin Strange

COMPANY: Bristol, England-based Aardman. They also have an office in NYC under the banner Aardman Nathan Love

CAN YOU DESCRIBE HOW YOUR CAREER AT AARDMAN BEGAN?
I can indeed! I started 10 years ago as a freelancer, joining the fledgling Interactive department (or Aardman Online as it was known back then). They needed a digital designer for a six-month project for the UK’s Channel 4.

I was a freelancer in Bristol at the time and I made it my business to be quite vocal on all the online platforms, always updating those platforms and my own website with my latest work — whether that be client work or self-initiated projects. Luckily for me, the creative director of Aardman Online, Dan Efergan, saw my work when he was searching for a designer and got in touch (it was the most exciting email ever, with the subject of “Hello from Aardman!”).

The short version of this story is that I got Dan’s email, popped in for a cup of tea and a chat, and 10 years later I’m still here! Ha!

The slightly longer but still truncated version is that after the six-month freelance project was done, the role of senior designer for the online team became open and I gave up the freelance life and, very excitedly, joined the team as an official Aardmanite!

Thing is, I was never shy about sharing with my new colleagues the other work I did. My role in the beginning was primarily digital/graphic design, but in my own time, under the banner of JamFactory (my own artist alter-ego name), I put out all sorts of work that was purely passion projects: films, characters, toys, clothing, art.

Gavin Strange directed this Christmas spot for the luxury brand Fortnum & Mason.

Filmmaking was a huge passion of mine and even at the earliest stages in my career when I first started out (I didn’t go to university so I got my first role as a junior designer when I was 17) I’d always be blending graphic design and film together.

Over those 10 years at Aardman I continued to make films of all kinds and share them with my colleagues. Because of that, more opportunities arose to develop my film work within my existing design role. I had the unique advantage of having a lot of brilliant mentors who guided me and helped me with my moving image projects.

Those opportunities continued to grow and happen more frequently. I was doing more and more directing here, finally becoming officially represented by Aardman and added to their roster of directors. It’s a dream come true for me because not only do I get to work at the place I admired growing up, but I’ve been mentored and shaped by the very individuals who make this place so special — that’s a real privilege.

What I really love is that my role is so varied — I’m both a director and a senior designer. I float between projects, and I love that variety. Sometimes I’m directing a commercial, sometimes I’m illustrating icons, other times I’m animating motion graphics. To me though, I don’t see a difference — it’s all creating something engaging, beautiful and entertaining — whatever the final format or medium!

So that’s my Aardman story. Ten years in, and I just feel like I’m getting started. I love this place.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE OF DIRECTOR?
Hmm, it’s tricky, as I actually think that most people’s perception of being a director is true: it’s that person’s responsibility to bring the creative vision to life.

Maybe what people don’t know is how flexible the role is, depending on the project. I love smaller projects where I get to board, design and animate, but then I love larger jobs with a whole crew of people. It’s always hands-on, but in many different ways.

Perhaps what would surprise a lot of people is that it’s every director’s responsibility to clean the toilets at the end of the day. That’s what Aardman has always told me and, of course, I honor that tradition. I mean, I haven’t actually ever seen anyone else do it, but that’s because everyone else just gets on with it quietly, right? Right!?

WHAT’S YOUR FAVORITE PART OF THE JOB?
Oh man, can I say everything!? I really, really enjoy the job as a whole — having that creative vision, working with yourself, your colleagues and your clients to bring it to life. Adapting and adjusting to changes and ensuring something great pops out the other end.

I really, genuinely, get a thrill seeing something on screen. I love concentrating on every single frame — it’s a win-win situation. You get to make a lovely image each frame, but when you stitch them together and play them really fast one after another, then you get a lovely movie — how great is that?

In short, I really love the sum total of the job. All those different exciting elements that all come together for the finished piece.

WHAT’S YOUR LEAST FAVORITE?
I pride myself on being an optimist and being a right positive pain in the bum, so I don’t know if there’s any part I don’t enjoy — if anything is tricky I try and see it as a challenge and something that will only improve my skillset.

I know that sounds super annoying, doesn’t it? I know that can seem all floaty and idealistic, but I pride myself on being a “realistic idealist” — recognizing the reality of a tricky situation, but seeing it through an idealistic lens.

If I’m being honest, then that really early stage is probably my least favorite, when the project is properly kicking off and there’s a huge gulf between what the treatment/script/vision says it will be and the finished thing. That’s also the most exciting part, though: the not knowing how it will turn out. It’s terrifying and thrilling in equal measure. It surprises me every single time, but I think that panic is an essential part of any creative process.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
In an alternate world, I’d be a photographer, traveling the world, documenting everything I see, living the nomadic life. But that’s still a creative role, and I still class it as the same job, really. I love my graphic design roots too — print and digital design — but, again, I see it as all the same role really.

So that means, if I didn’t have this job, I’d be roaming the lands, offering to draw/paint/film/make for anyone that wanted it! (Is that a mercenary? Is there such a thing as a visual mercenary? I don’t really have the physique for that I don’t think.)

WHY DID YOU CHOOSE THIS PROFESSION?
This profession chose me. I’m just kidding, that’s ridiculous, I just always wanted to say that.

I think, like most folks, I fell into it in a series of natural choices. Art, design, graphics and games always stole my attention as a kid, and I just followed the natural path into that, which turned into my career. I’m lucky enough that I didn’t feel the need to single out any one passion, and I kept them all bubbling along even as my career moved from designer to director. I still indulge my passion for all types of mediums in my own time.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I’m not sure. I wasn’t particularly driven or focused as a kid. I knew I loved design and art, but I didn’t know about the many, many different roles out there. I like that though, I see it as a positive, and also as an achievable way to progress through a career path. I speak to a lot of students and young professionals, and I think it can be so overwhelming to plot a big ‘X’ on a career map and then feel all confused about how to get there. I’m an advocate of taking it one step at a time and making more manageable advances forward, as things always get in the way and change anyway.

I love the idea of a meandering, surprising path. Who knows where it will lead!? I think as long as your aim is to make great work, then you’ll surprise yourself where you end up.

WHAT WAS IT ABOUT DIRECTING THAT ATTRACTED YOU?
I’ve always obsessed over films, and obsessed over the creation of them. I’ll watch a behind-the-scenes on any film or bit of moving image. I just love the fact that the role is to bring something to life — it’s to oversee and create something from nothing, ensuring every frame is right. The way it makes you feel, the way it looks, the way it sounds.

It’s just such an exciting role. There’s a lot of unknowns too, on every project. I think that’s where the good stuff lies. Trusting in the process and moving forwards, embracing it.

HOW DOES DIRECTING FOR ANIMATION DIFFER FROM DIRECTING FOR LIVE ACTION — OR DOES IT?
Technically it’s different — with animation your choices are pretty much made all up front, with the storyboards and animatic as your guides, and then they’re brought to life with animation. Whereas, for me, the excitement in live action is not really knowing what you’ll get until there’s a lens on it. And even then, it can come together in a totally new way in the edit.

I don’t try to differentiate myself as an “animation director” or a “live-action director.” They’re just different tools for the job. Whatever tells the best story and connects with audiences!

HOW DO YOU PICK THE PEOPLE YOU WORK WITH ON A PARTICULAR PROJECT?
Their skillset is paramount, but equally as important is their passion and their kindness. There are so many great people out there, but I think it’s so important to work with people who are great and kind. Too many people get a free pass for being brilliant and feel that celebration of their work means it’s okay to mistreat others. It’s not okay… ever. I’m lucky that Aardman is a place full of excited, passionate and engaged folk who are a pleasure to work with, because you can tell they love what they do.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I’ve been lucky enough to work on a real variety of projects recently. I directed an ident for the rebrand of BBC2, a celebratory Christmas spot for the luxury brand Fortnum & Mason and an autobiographical motion graphics short film about Maya Angelou for BBC Radio 4.

Maya Angelou short film for BBC Radio 4

I love the variety of them; just those three projects alone were so different. The BBC2 ident was live-action in-camera effects with a great crew of people, whereas the Maya Angelou film was just me on design, direction and animation. I love hopping between projects of all types and sizes!

I’m working on development of a stop-frame short at the moment, which is all I can say for now, but just the process alone, going from idea to a scribble in a notebook to a script, is so exciting. Who knows what 2019 holds!?

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Oh man, that’s a tough one! A few years back I co-directed a title sequence for a creative festival called OFFF, which happens every year in Barcelona. I worked with Aardman legend Merlin Crossingham to bring this thing to life, and it’s a proper celebration of what we both love — it ended up being what we lovingly refer to as our “stop-frame live-action motion-graphics rap-video title-sequence.” It really was all those things.

That was really special as not only did we have a great crew, I got to work with one of my favorite rappers, P.O.S., who kindly provided the beats and the raps for the film.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT
– My iPhone. It’s my music player, Internet checker, email giver, tweet maker, picture capturer.
– My Leica M6 35mm camera. It’s my absolute pride and joy. I love the images it makes.
– My Screens. At work I have a 27-inch iMac and then two 25-inch monitors on either side. I just love screens. If I could have more, I would!

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I genuinely love what I do, so I rarely feel like I “need to get away from it all.” But I do enjoy life outside of work. I’m a drummer and that really helps with any and all stress really. Even just practicing on a practice pad is cathartic, but nothing compares to smashing away on a real kit.

I like to run, and I sometimes do a street dance class, which is both great fun and excruciatingly frustrating because I’m not very good.

I’m a big gamer, even though I don’t have much time for it anymore. A blast on the PS4 is a treat. In fact, after this I’m going to have a little session on God of War before bedtime.

I love hanging with my family. My wife Jane, our young son Sullivan and our dog Peggy. Just hanging out, being a dad and being a husband is the best for de-stressing. Unless Sullivan gets up at 3am, then I change my answer back to the PS4.

I’m kidding, I love my family, I wouldn’t be anything or be anywhere without them.


Foundry Nuke 11.3’s performance, collaboration updates

Foundry has launched Nuke 11.3, introducing new features and updates to the company’s family of compositing and review tools. The release is the fourth update to the Nuke 11 Series and is designed to improve the user experience and to speed up heavy processing tasks for pipelines and individual users.

Nuke 11.3 lands with major enhancements to its Live Groups feature. It introduces new functionality along with corresponding Python callbacks and UI notifications that will allow for greater collaboration and offer more control. These updates make Live Groups easier for larger pipelines to integrate and give artists more visibility over the state of the Live Group and flexibility when using user knobs to override values within a Live Group.
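
For studios wiring these notifications into their pipeline, the hooks follow Nuke’s familiar Python callback pattern. As a rough, hypothetical sketch only (the generic knobChanged callback and a “LiveGroup” node class are assumptions here; the dedicated Live Group callback names are in Foundry’s 11.3 documentation):

    # Hypothetical sketch: tracking Live Group overrides with Nuke's
    # generic Python callback API. The dedicated Live Group callbacks
    # in 11.3 may use different names; see Foundry's release notes.
    import nuke

    def _on_livegroup_change():
        node = nuke.thisNode()
        knob = nuke.thisKnob()
        # Log local overrides so pipeline tools can flag out-of-sync state
        print("LiveGroup %s: knob '%s' is now %s"
              % (node.name(), knob.name(), knob.value()))

    # Register for knob changes on LiveGroup nodes only.
    nuke.addKnobChanged(_on_livegroup_change, nodeClass="LiveGroup")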

The particle system in NukeX has been optimized to produce particle simulations up to six times faster than previous versions of the software, and up to four times faster for playback, allowing for faster iteration when setting up particle systems.

New Timeline Multiview support provides an extension to stereo and VR workflows. Artists can now use the same multiple-file stereo workflows that exist in Nuke on the Nuke Studio, Hiero and HieroPlayer timeline. The updated export structure can also be used to create multiple-view Nuke scripts from the timeline in Nuke Studio and Hiero.

Support for full-resolution stereo on monitor out makes review sessions even easier, and a new export preset helps with rendering of stereo projects.

New UI indications for changes in bounding box size and channel count help artists troubleshoot their scripts. A visual indication identifies nodes that increase bounding box size to be greater than the image, helping artists to identify the state of the bounding box at a glance. Channel count is now displayed in the status bar, and a warning is triggered when the 1024-channel limit is exceeded. The appearance and threshold for triggering the bounding box and channel warnings can be set in the preferences.

The selection tool has also been improved in both 2D and 3D views, and an updated marquee and new lasso tool make selecting shapes and points even easier.

Nuke 11.3 is available for purchase — alongside full release details — on Foundry’s website and via accredited resellers.


Making an animated series with Adobe Character Animator

By Mike McCarthy

In a departure from my normal film production technology focus, I have also been working on an animated web series called Grounds of Freedom. Over the past year I have been directing the effort and working with a team of people across the country who are helping in various ways. After a year of meetings, experimentation and work, we finally started releasing finished episodes on YouTube.

The show takes place in Grounds of Freedom, a coffee shop where a variety of animated mini-figures gather to discuss freedom and its application to present-day cultural issues and events. The show is created with a workflow that weaves through a variety of Adobe Creative Cloud apps. Back in October I presented our workflow during Adobe Max in LA, and I wanted to share it with postPerspective’s readers as well.

When we first started planning for the series, we considered using live action. Ultimately, after being inspired by the preview releases of Adobe Character Animator, I decided to pursue a new digital approach to brick filming (filmmaking with Lego bricks), which is traditionally accomplished through stop-motion animation. Once everyone else realized the simpler workflow possibilities and increased level of creative control offered by that new animation process, they were excited to pioneer this new approach. Animation gives us more control and flexibility over the message and dialog, lowers production costs and eases collaboration over long distances, as there is no “source footage” to share.

Creating the Characters
The biggest challenge to using Character Animator is creating digital puppets, which are deeply layered Photoshop PSDs with very precise layer naming and stacking. There are ways to generate the underlying source imagery in 3D animation programs, but I wanted the realism and authenticity of sourcing from actual photographs of the models and figures. So we took lots of 5K macro shots of our sets and characters in various positions with our Canon 60D and 70D DSLRs and cut out hundreds of layers of content in Photoshop to create our characters and all of their various possible body positions. The only thing that was synthetically generated was the various facial expressions digitally painted onto their clean yellow heads, usually to match an existing physical reference character face.
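
To make the format concrete, here is an illustrative slice of the kind of layer tree Character Animator expects. The names are generic stand-ins, not the show’s actual layers, and the “+” prefix is Character Animator’s convention for groups that warp independently:

    character.psd
      +Head
        +Mouth        (one layer per viseme: Aa, D, Ee, F, M, Oh, etc.)
        Expressions   (the digitally painted face variants)
      +Body
        +Right Arm    (one layer per photographed arm position)
        +Left Arm
        +Right Leg    (walk-cycle layers, cycled in sequence)
        +Left Leg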

Mike McCarthy shooting stills.

Once we had our source imagery organized into huge PSDs, we rigged those puppets in Character Animator with various triggers, behaviors and controls. The walking was accomplished by cycling through various layers, instead of the default bending of the leg elements. We created arm movement by mapping each arm position to a MIDI key. We controlled facial expressions and head movement via webcam, and the mouth positions were calculated by the program based on the accompanying audio dialog.
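
Character Animator handles the MIDI binding internally, but the trigger table amounts to a simple note-to-layer map. Here is a standalone sketch of the concept (using the third-party mido library; the note numbers and layer names are invented for illustration):

    # Conceptual sketch of MIDI-key-to-trigger-layer mapping, similar in
    # spirit to how arm positions were bound to keys. Requires the mido
    # library; note numbers and layer names are illustrative only.
    import mido

    ARM_TRIGGERS = {
        60: "arm_right_down",  # C4
        62: "arm_right_out",   # D4
        64: "arm_right_up",    # E4
    }

    def fire_trigger(layer_name):
        # Stand-in for showing the mapped trigger layer in the puppet
        print("show layer:", layer_name)

    with mido.open_input() as port:  # opens the default MIDI input
        for msg in port:
            if msg.type == "note_on" and msg.note in ARM_TRIGGERS:
                fire_trigger(ARM_TRIGGERS[msg.note])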

Animating Digital Puppets
The puppets had to be finished and fully functional before we could start animating on the digital stages we had created. We had been writing the scripts during that time, in parallel with generating the puppet art, so we were ready to record the dialog by the time the puppets were finished. We initially attempted to record live in Character Animator while capturing the animation motions as well, but we didn’t have the level of audio editing functionality we needed available to us in Character Animator. So during that first session, we switched over to Adobe Audition, and planned to animate as a separate process, once the audio was edited.

That whole idea of live capturing audio and facial animation data is laughable now, looking back, since we usually spend a week editing the dialog before we do any animating. We edited each character’s audio on a separate track and exported those separate tracks to Character Animator. We computed lip sync for each puppet based on their dedicated dialog track and usually exported immediately. This provided a draft visual that allowed us to continue editing the dialog within Premiere Pro. Having a visual reference makes a big difference when trying to determine how a conversation will feel, so that was an important step — even though we had to throw away our previous work in Character Animator once we made significant edit changes that altered sync.

We repeated the process once we had a more final edit. We carried on from there in Character Animator, recording arm and leg motions with the MIDI keyboard in realtime for each character. Once those trigger layers had been cleaned up and refined, we recorded the facial expressions, head positions and eye gaze with a single pass on the webcam. Every re-record to alter a particular section adds a layer to the already complicated timeline, so we limited that as much as possible, usually re-recording instead of making quick fixes unless we were nearly finished.

Compositing the Characters Together
Once we had fully animated scenes in Character Animator, we would turn off the background elements and isolate each character layer to be exported in Media Encoder via Dynamic Link. I did a lot of testing before settling on JPEG2000 MXF as the format of choice. I wanted a highly compressed file but needed alpha channel support, and that was the best option available. Each of those renders became a character layer, which was composited into our stage layers in After Effects. We could have dynamically linked the characters directly into AE, but with that many layers that would decrease performance for the interactive part of the compositing work. We added shadows and reflections in AE, as well as various other effects.

Walking was one of the most challenging effects to properly recreate digitally. Our layer cycling in Character Animator resulted in a static figure swinging its legs, but people (and mini figures) have a bounce to their step, and move forward at an uneven rate as they take steps. With some pixel measurement and analysis, I was able to use anchor point keyframes in After Effects to get a repeating movement cycle that made the character appear to be walking on a treadmill.

I then used carefully calculated position keyframes to add the appropriate amount of travel per frame for the feet to stick to the ground, which varies based on the scale as the character moves toward the camera. (In my case the velocity was half the scale value in pixels per second.) We then duplicated that layer to create the reflection and shadow of the character as well. That result can then be composited onto various digital stages. In our case, the first two shots of the intro were designed to use the same walk animation with different background images.
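
In code terms, the math described above reduces to a few lines. This is only a back-of-the-envelope sketch of the relationship (the 24fps rate and the sample scale values are assumptions, not production numbers):

    # Walk-travel math: horizontal travel per frame is derived from the
    # character's current scale so the feet appear to stick. The 0.5
    # factor is the "velocity = half the scale value" rule noted above.
    FPS = 24.0  # assumed frame rate

    def travel_per_frame(scale_percent):
        velocity_px_per_sec = 0.5 * scale_percent  # scale 100 -> 50 px/s
        return velocity_px_per_sec / FPS

    x = 0.0
    for frame in range(5):
        scale = 100 + frame * 2.0  # character growing as it nears camera
        x += travel_per_frame(scale)
        print(frame, round(x, 2))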

All of the character layers were pre-comped, so we only needed to update a single location when a new version of a character was rendered out of Media Encoder, or when we brought in a dynamically linked layer. It would propagate all the necessary comp layers to generate updated reflections and shadows. Once the main compositing work was finished, we usually only needed to make slight changes in each scene between episodes. These scenes were composited at 5K, based on the resolution of the DSLR photos of the sets we had built. These 5K plates could be dynamically linked directly into Premiere Pro, and occasionally used later in the process to ripple slight changes through the workflow. For the interactive work, we got far better editing performance by rendering out flattened files. We started with DNxHR 5K assets, but eventually switched to HEVC files since they were 30x smaller and imperceptibly different in quality with our relatively static animated content.

Editing the Animated Scenes
In Premiere Pro, we had the original audio edit, and usually a draft render of the characters with just their mouths moving. Once we had the plate renders, we placed them each in their own 5K scene sub-sequence and used those sequences as source on our master timeline. This allowed us to easily update the content when new renders were available, or source from dynamically linked layers instead if needed. Our master timeline was 1080p, so with 5K source content we could push in two and a half times the frame size without losing resolution. This allowed us to digitally frame every shot, usually based on one of two rendered angles, and gave us lots of flexibility all the way to the end of the editing process.

Collaborative Benefits of Dynamic Link
While Dynamic Link doesn’t offer the best playback performance without making temp renders, it does have two major benefits in this workflow. It ripples changes to the source PSD all the way to the final edit in Premiere just by bringing each app into focus once. (I added a name tag to one character’s PSD during my presentation, and 10 seconds later it was visible throughout my final edit.) Even more importantly, it allows us to collaborate online without having to share any exported video assets. As long as each member of the team has the source PSD artwork and audio files, all we have to exchange online are the Character Animator project (which is small once the temp files are removed), the .AEP file and the .PrProj file.

This gives any of us the option to render full-quality visual assets anytime we need them, but the work we do on those assets is all contained within the project files that we sync to each other. The coffee shop was built and shot in Idaho, our voice artist was in Florida and our puppets’ faces were created in LA. I animate and edit in Northern California, the AE compositing was done in LA, and the audio is mixed in New Jersey. We did all of that with nothing but a Dropbox account, using the workflow I have just outlined.

Past that point, it was a fairly traditional finish, in that we edited in music and sound effects, and sent an OMF to Steve, our sound guy at DAWPro Studios (http://dawpro.com/photo_gallery.html), for the final mix. During that time we added other b-roll visuals or other effects, and once we had the final audio back we rendered the final result to H.264 at 1080p and uploaded it to YouTube.


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.


VFX supervisor Simon Carr joins London’s Territory

Simon Carr has joined visual effects house Territory, bringing with him 20 years of experience as a VFX supervisor. He most recently served in that role at London’s Halo, where he built the VFX department from scratch. He has also supervised at Realise Studio, Method Studios, Pixomondo, Digital Domain and others. While Carr will be based in London, he will also support the studio’s San Francisco offices as needed.

Territory has invested in a Shotgun pipeline, with a bespoke toolkit that integrates its design-led approach with VFX delivery, and Carr’s appointment, according to the studio, signals a strategic approach to expanding the team’s capabilities. “Simon’s experience of all stages of the VFX process, from pre-production to final delivery, means that our clients and partners can be confident of seamless high-end VFX delivery at every stage of a project,” says David Sheldon-Hicks, Territory’s founder and executive creative director.

At Territory, Carr will use his experience building and leading teams of artists, from compositing through to complex environment builds. The studio will also benefit from his experience building a facility from scratch — establishing pipelines and workflows, recruiting and retaining artists, developing and maintaining relationships with clients and being involved in the pitching and bidding process.

The studio has worked on high-profile film projects, including Blade Runner 2049, Ready Player One, Pacific Rim: Uprising, Ghost in the Shell, The Martian and Guardians of the Galaxy. On the broadcast front, they have worked on the new series based on George R.R. Martin’s novella, Nightflyers, Amazon Prime/Channel 4’s Electric Dreams and National Geographic’s Year Million.



Behind the Title: Lobo EP for Europe, Loic Francois Marie Dubois

NAME: Loic Francois Marie Dubois

COMPANY: New York- and São Paulo, Brazil-based Lobo

CAN YOU DESCRIBE YOUR COMPANY?
We are a full-service creative studio offering design, live action, stop motion, 3D & 2D, mixed media, print, digital, AR and VR.

Day One spot Sunshine

WHAT’S YOUR JOB TITLE?
Creative executive producer for Europe and formerly head of production. I’m based in Brazil, but work out of the New York office as well.

WHAT DOES THAT ENTAIL?
Managing and hiring creative teams, designers, producers and directors for international productions (USA, Europe, Asia). Also, I have served as the creative executive director for TBWA Paris on the McDonald’s Happy Meal global campaign for the last five years. Now, as creative EP for Europe, I am also responsible for streamlining information from pre-production to post production between all production parties for a more efficient and prosperous sales outcome.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
The patience and the fun psychological side you need to have to handle all the production peeps, agencies, and clients.

WHAT TOOLS DO YOU USE?
Excel, Word, Showbiz, Keynote, Pages, Adobe Package (Photoshop, Illustrator, After Effects, Premiere, InDesign), Maya, Flame, Nuke and AR/VR technology.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Working with talented creative people on extraordinary projects with a stunning design and working on great narratives, such as the work we have done for clients including Interface, Autism Speaks, Imaginary Friends, Unicef and Travelers, to name a few.

WHAT’S YOUR LEAST FAVORITE?
Monday morning.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Early afternoon between Europe closing down and the West Coast waking up.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Meditating in Tibet…

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
Since I was 13 years old. After shooting and editing a student short film (an Oliver Twist adaptation) with a Bolex 16mm on location in London and Paris, I was hooked.

Promoting Lacta 5Star chocolate bars

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
An animated campaign for the candy company Mondelez’s Lacta 5Star chocolate bars; an animated short film for the Imaginary Friends Society; a powerful animated short on the dangers of dating abuse and domestic violence for nonprofit Day One; a mixed media campaign for Chobani called FlipLand; and a broadcast spot for McDonald’s and Spider-Man.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
My three kids 🙂

It’s really hard to choose one project, as they are all equally different and amazing in their own way, but maybe D&AD Wish You Were Here. It stands out for the number of awards it won and the collective creative production process.

NAME PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
The Internet.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Meditation and yoga.


Epic Games’ Unreal Engine 4.21 adds more mobile optimizations, efficiencies

Epic Games’ Unreal Engine 4.21 is designed to offer greater efficiency, performance and stability for developers working on any platform.

Unreal Engine 4.21 adds even more mobile optimizations to both Android and iOS, up to 60% speed increases when cooking content and more power and flexibility in the Niagara effects toolset for realtime VFX. Also, the new production-ready Replication Graph plugin enables developers to build multiplayer experiences at a scale that hasn’t been possible before, and Pixel Streaming allows users to stream interactive content directly to remote devices with no compromises on rendering quality.

Updates in Unreal Studio 4.21 also offer new capabilities and enhanced productivity for users in the enterprise space, including architecture, manufacturing, product design and other areas of professional visualization. Unreal Studio’s Datasmith workflow toolkit now includes support for Autodesk Revit and enhanced material translation for Autodesk 3ds Max, enabling more efficient design review and iteration.

Here is more about the key features:
Replication Graph: The Replication Graph plugin, which is now production-ready, makes it possible to customize network replication in order to build large-scale multiplayer games that would not be viable with traditional replication strategies.

Niagara Enhancements: The Niagara VFX feature set continues to grow, with substantial quality of life improvements and Nintendo Switch support added in Unreal Engine 4.21.

Sequencer Improvements: New capabilities within Sequencer allow users to record incoming video feeds to disk as OpenEXR frames and create a track in Sequencer, with the ability to edit and scrub the track as usual. This enables users to synchronize video with CG assets and play them back together from the timeline.

Pixel Streaming (Early Access): With the new Pixel Streaming feature, users can author interactive experiences such as product configurations or training applications, host them on a cloud-based GPU or local server, and stream them to remote devices via web browser without the need for additional software or porting.

Mobile Optimizations: The mobile development process gets even better thanks to all of the mobile optimizations that were developed for Fortnite‘s initial release on Android, in addition to all of the iOS improvements from Epic’s ongoing updates. With the help of Samsung, Unreal Engine 4.21 includes all of the Vulkan engineering and optimization work that was done to help ship Fortnite on the Samsung Galaxy Note 9 and is 100% feature compatible with OpenGL ES 3.1.

Much Faster Cook Times: In addition to the optimized cooking process, low-level code avoids performing unnecessary file system operations, and cooker timers have been streamlined.

Gauntlet Automation Framework (Early Access): The new Gauntlet automation framework enables developers to automate the process of deploying builds to devices, running one or more clients and/or servers, and processing the results. Gauntlet scripts can automatically profile points of interest, validate gameplay logic, check return values from backend APIs and more. Gauntlet has been battle-tested for months in the process of optimizing Fortnite, and is a key part of ensuring it runs smoothly on all platforms.

Animation System Optimizations and Improvements: Unreal Engine’s animation system continues to build on best-in-class features thanks to new workflow improvements, better surfacing of information, new tools, and more.

Blackmagic Video Card Support: Unreal Engine 4.21 also adds support for Blackmagic video I/O cards for those working in film and broadcast. Creatives in the space can now choose between Blackmagic and AJA Video Systems, the two most popular options for professional video I/O.

Improved Media I/O: Unreal Engine 4.21 now supports 10-bit video I/O, audio I/O, 4K, and Ultra HD output over SDI, as well as legacy interlaced and PsF HD formats, enabling greater color accuracy and integration of some legacy formats still in use by large broadcasters.

Windows Mixed Reality: Unreal Engine 4.21 natively supports the Windows Mixed Reality (WMR) platform and headsets, such as the HP Mixed Reality headset and the Samsung HMD Odyssey headset.

Magic Leap Improvements: Unreal Engine 4.21 supports all the features needed to develop complete applications on Magic Leap’s Lumin-based devices — rendering, controller support, gesture recognition, audio input/output, media, and more.

Oculus Avatars: The Oculus Avatar SDK includes an Unreal package to assist developers in implementing first-person hand presence for the Rift and Touch controllers. The package includes avatar hand and body assets that are viewable by other users in social applications.

Datasmith for Revit (Unreal Studio): Unreal Studio’s Datasmith workflow toolkit for streamlining the transfer of CAD data into Unreal Engine now includes support for Autodesk Revit. Supported elements include materials, metadata, hierarchy, geometric instancing, lights and cameras.

Multi-User Viewer Project Template (Unreal Studio): A new project template for Unreal Studio 4.21 enables multiple users to connect in a real-time environment via desktop or VR, facilitating interactive, collaborative design reviews across any work site.

Accelerated Automation with Jacketing and Defeaturing (Unreal Studio): Jacketing automatically identifies meshes and polygons that have a high probability of being hidden from view, and lets users hide, remove or move them to another layer; this command is also available through Python so Unreal Studio users can integrate this step into automated preparation workflows. Defeaturing automatically removes unnecessary detail (e.g. blind holes, protrusions) from mechanical models to reduce polygon count and boost performance.

Enhanced 3ds Max Material Translation (Unreal Studio): There is now support for most commonly used 3ds Max maps, improving visual fidelity and reducing rework. Those in the free Unreal Studio beta can now translate 3ds Max material graphs to Unreal graphs when exporting, making materials easier to understand and work with. Users can also leverage improvements in BRDF matching from V-Ray materials, especially metal and glass.

DWG and Alias Wire Import (Unreal Studio): Datasmith now supports DWG and Alias Wire file types, enabling designers to import more 3D data directly from Autodesk AutoCAD and Autodesk Alias.


Chaos Group to support Cinema 4D with two rendering products

At the Maxon Supermeet 2018 event, Chaos Group announced its plans to support the Maxon Cinema 4D community with two rendering products: V-Ray for Cinema 4D and Corona for Cinema 4D. Based on V-Ray’s Academy Award-winning raytracing technology, the development of V-Ray for Cinema 4D will be focused on production rendering for high-end visual effects and motion graphics. Corona for Cinema 4D will focus on artist-friendly design visualization.

Chaos Group, which acquired the V-Ray for Cinema 4D product from LAUBlab and will lead development on the product for the first time, will offer current customers free migration to a new update, V-Ray 3.7 for Cinema 4D. All users who move to the new version will receive a free V-Ray for Cinema 4D license, including all product updates, through January 15, 2020. Moving forward, Chaos Group will be providing all support, sales and product development in-house.

In addition to ongoing improvements to V-Ray for Cinema 4D, Chaos Group also released the Corona for Cinema 4D beta 2 at Supermeet, with the final product to follow in January 2019.

Main Image: Daniel Sian created Robots using V-Ray for Cinema 4D.


Promoting a Mickey Mouse watch without Mickey

Imagine creating a spot for a watch that celebrates the 90th anniversary of Mickey Mouse — but you can’t show Mickey Mouse. Already Been Chewed (ABC), a design and motion graphics studio, developed a POV concept that met this challenge and also tied in the design of the actual watch.

Nixon, a California-based premium watch company that is releasing a series of watches around the Mickey Mouse anniversary, called on Already Been Chewed to create the 20-second spot.

“The challenge was that the licensing arrangement that Disney made with Nixon doesn’t allow Mickey’s image to be in the spot,” explains Barton Damer, creative director at Already Been Chewed. “We had to come up with a campaign that promotes the watch and has some sort of call to action that inspires people to want this watch. But, at the same time, what were we going to do for 20 seconds if we couldn’t show Mickey?”

After much consideration, Damer and his team developed a concept to determine if they could push the limits on this restriction. “We came up with a treatment for the video that would be completely point-of-view, and the POV would do a variety of things for us that were working in our favor.”

The solution was to show Mickey’s hands and feet without actually showing the whole character. In another instance, a silhouette of Mickey is seen in the shadows on a wall, sending a clear message to viewers that the spot is an official Disney and Mickey Mouse release and not just something that was inspired by Mickey Mouse.

Targeting the appropriate consumer demographic segment was another key issue. “Mickey Mouse has long been one of the most iconic brands in the history of branding, so we wanted to make sure that it also appealed to the Nixon target audience and not just a Disney consumer,” Damer says. “When you think of Disney, you could brand Mickey for children or you could brand it for adults who still love Mickey Mouse. So, we needed to find a style and vibe that would speak to the Nixon target audience.”

The Already Been Chewed team chose surfing and skateboarding as dominant themes, since 16- to 30-year-olds are the target demographic and also because Disney is a West Coast brand.

Damer comments, “We wanted to make sure we were creating Mickey in a kind of 3D, tangible way, with more of a feature film and 3D feel. We felt that it should have a little bit more of a modern approach. But at the same time, we wanted to mesh it with a touch of the old-school vibe, like 1950s cartoons.”

In that spirit, the team wanted the action to start with Mickey walking from his car and then culminate at the famous Venice Beach basketball courts and skate park. Here’s the end result.

“The challenge, of course, is how to do all this in 15 seconds so that we can show the logos at the front and back and a hero image of the watch. And that’s where it was fun thinking it through and coming up with the flow of the spot and seamless transitions with no camera cuts or anything like that. It was a lot to pull off in such a short time, but I think we really succeeded.”

Already Been Chewed achieved these goals with an assist from Maxon’s Cinema 4D and Adobe After Effects. With Damer as creative lead, here’s the complete cast of characters: head of production Aaron Smock; 3D design by Thomas King, Barton Damer, Bryan Talkish and Lance Eckert; animation by Bryan Talkish and Lance Eckert; character animation by Chris Watson; and soundtrack by DJ Sean P.

Sony Imageworks provides big effects, animation for Warner’s Smallfoot

By Randi Altman

The legend of Bigfoot: a giant, hairy two-legged creature roaming the forests and giving humans just enough of a glimpse to freak them out. Sightings have been happening for centuries with no sign of slowing down — seriously, Google it.

But what if that story was turned around, and it was Bigfoot who was freaked out by a Smallfoot (human)? Well, that is exactly the premise of the new Warner Bros. film Smallfoot, directed by Karey Kirkpatrick. It’s based on the book “Yeti Tracks” by Sergio Pablos.

Karl Herbst

Instead of a human catching a glimpse of the mysterious giant, a yeti named Migo (Channing Tatum) sees a human (James Corden) and tells his entire snow-filled village about the existence of Smallfoot. Of course, no one believes him so he goes on a trek to find this mythical creature and bring him home as proof.

Sony Pictures Imageworks was tasked with all of the animation and visual effects work on the film, while Warner Animation Group did all of the front-end work — such as adapting the script, creating the production design, editing, directing, producing and more. We reached out to Imageworks VFX supervisor Karl Herbst (Hotel Transylvania 2) to find out more about creating the animation and effects for Smallfoot.

The film has a Looney Tunes-type feel with squash and stretch. Did this provide more freedom or less?
In general, it provided more freedom since it allowed the animation team to really have fun with gags. It also gave them a ton of reference material to pull from and come up with new twists on older ideas. Once out of animation, depending on how far the performance was pushed, other departments — like the character effects team — would have additional work due to all of the exaggerated movements. But all of the extra work was worth it because everyone really loved seeing the characters pushed.

We also found that as the story evolved, Migo’s journey became more emotionally driven; we needed to find a style that also let the audience truly connect with what he was going through. We brought in a lot more subtlety, and a more truthful physicality to the animation when needed. As a result, we have these incredibly heartfelt performances and moments that would feel right at home in an old Road Runner short. Yet it all still feels like part of the same world with these truly believable characters at the center of it.

Was scale between such large and small characters a challenge?
It was one of the first areas we wanted to tackle since the look of the yeti’s fur next to a human was really important to the filmmakers. In the end, we found that the thickness and fidelity of the yeti hair had to be very high so you could see each hair next to the hairs of the humans.

It also meant allowing the rigs for the human and yetis to be flexible enough to scale them as needed to have moments where they are very close together and they did not feel so disproportionate to each other. Everything in our character pipeline from animation down to lighting had to be flexible in dealing with these scale changes. Even things like subsurface scattering in the skin had dials in it to deal with when Percy, or any human character, was scaled up or down in a shot.

How did you tackle the hair?
We updated a couple of key areas in our hair pipeline, starting with how we would build our hair. In the past, we would make curves that looked more like small groups of hairs in a clump. In this case, we made each curve its own strand of a single hair. To shade this hair in a way that allowed artists to have better control over the look, our development team created a new hair shader that used true multiple scattering within the hair.

We then extended that hair shading model to add control over the distribution around the hair fiber to model the effect of animal hair, which tends to scatter differently than human hair. This gave artists the ability to create lots of different hair looks, which were not based on human hair, as was the case with our older models.

Was rendering so many furry characters on screen at a time an issue?
Yes. In the past this would have been hard to shade all at once, mostly due to our reliance on opacity to create the soft shadows needed for fur. With the new shading model, we were no longer using opacity at all so the number of rays needed to resolve the hair was lower than in the past. But we now needed to resolve the aliasing due to the number of fine hairs (9 million for LeBron James’ Gwangi).

We developed a few other new tools within our version of the Arnold renderer to help with aliasing and render time in general. The first was adaptive sampling, which allowed us to raise the anti-aliasing samples drastically. This meant some pixels would only use a few samples while others would use very high sampling, whereas in the past all pixels would get the same number. This focused our render times where we needed them, helping to reduce overall rendering. Our development team also added the ability for us to pick a render up from its previous point. This meant that at a lower quality level we could do all of our lighting work, get creative approval from the filmmakers and then pick the renders up to bring them to full quality without losing the time already spent.
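
Imageworks’ renderer changes are proprietary, but the core idea of adaptive sampling is easy to illustrate: keep adding samples to a pixel until its noise estimate drops below a threshold, up to a budget. Here is a generic sketch of that loop (not Sony’s code; the thresholds and sample counts are arbitrary):

    # Generic adaptive-sampling loop: spend samples only where the pixel
    # estimate is still noisy. Smooth pixels exit early; noisy ones use
    # the full budget. Purely illustrative, not Imageworks' renderer.
    import random
    import statistics

    def render_pixel(sample_fn, min_samples=8, max_samples=256, tol=0.01):
        samples = [sample_fn() for _ in range(min_samples)]
        while len(samples) < max_samples:
            # Standard error of the mean as a simple convergence metric
            err = statistics.stdev(samples) / len(samples) ** 0.5
            if err < tol:
                break  # pixel has converged; stop early
            samples.append(sample_fn())
        return sum(samples) / len(samples)

    print(render_pixel(lambda: 0.5 + random.uniform(-0.01, 0.01)))  # clean
    print(render_pixel(lambda: random.random()))                    # noisy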

What tools were used for the hair simulations specifically, and what tools did you call on in general?
We used Maya and the Nucleus solvers for all of the hair simulations, but developed tools over them to deal with so much hair per character and so many characters on screen at once. The simulation for each character was driven by their design and motion requirements.

The Looney Tunes-inspired design and motion created a challenge: how to keep hair simulations from breaking during all of the quick and stretched motion while still allowing light wind for the subtle emotional moments. We solved those requirements by using a high number of control hairs and constraints. Meechee (Zendaya) used 6,000 simulation curves with over 200 constraints, while Migo needed 3,200 curves with around 30 constraints.

Stonekeeper (Common) was the most complex of the characters, with long braided hair on his head, a beard, shaggy arms and a cloak made of stones. He required a cloth simulation pass, a rigid-body simulation for the stones and hair simulated on top of the stones. Our in-house tool called Kami builds all of the hair at render time and also allows us to add procedurals to the hair at that point. We relied on those procedurals to create many varied hair looks for all of the generics needed to fill the village full of yetis.

How many different types of snow did you have?
We created three different snow systems for environmental effects. The first was a particle simulation of flakes for near-ground detail. The second was volumetric effects to create lots of atmosphere in the backgrounds, with texture and movement. We used this on each of the large sets and then stored those volumes so lighters could pick which parts they wanted in each shot. To help artistically drive the look of each shot, our third system was a library of 2D elements that the effects team rendered; these could be added during compositing to add details late in shot production.

For ground snow, we had different systems based on the needs in each shot. For shallow footsteps, we used displacement of the ground surface with additional little pieces of geometry to add crumble detail around the prints. This could be used in foreground or background.

For heavy interactions, like tunneling or sliding in the snow, we developed a new tool we called Katyusha. This new system combined rigid body destruction with fluid simulations to achieve all of the different states snow can take in any given interaction. We then rendered these simulations as volumetrics to give the complex lighting look the filmmakers were looking for. The snow, being in essence a cloud, allowed light transport through all of the different layers of geometry and volume that could be present at any given point in a scene. This made it easier for the lighters to give the snow its light look in any given lighting situation.

Was there a particular scene or effect that was extra challenging? If so, what was it and how did you overcome it?
The biggest challenge to the film as a whole was the environments. The story was very fluid, so design and build of the environments came very late in the process. Coupling that with a creative team that liked to find their shots — versus design and build them — meant we needed to be very flexible on how to create sets and do them quickly.

To achieve this, we began by breaking the environments into a subset of source shapes that could be combined in any fashion to build Yeti Mountain, Yeti Village and the surrounding environments. Surfacing artists then created materials that could be applied to any set piece, allowing for quick creative decisions about what was rock, snow and ice, and creating many different looks. All of these materials were created using PatternCreate networks as part of our OSL shaders. With them we could heavily leverage portable procedural texturing between assets, making location construction quicker, more flexible and easier to dial.
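
As a generic illustration of that kit-of-parts idea (invented names, not Imageworks’ pipeline code), an environment build amounts to picking from small shape and material libraries and scattering the results:

    # Generic kit-bashing sketch: a set is a list of (shape, material,
    # transform) picks from reusable libraries. Names are invented.
    import random

    SHAPES = ["spire_a", "slab_b", "boulder_c"]
    MATERIALS = ["rock", "snow", "ice"]

    def build_set(piece_count, seed=0):
        rng = random.Random(seed)  # deterministic so a set is repeatable
        return [
            {
                "shape": rng.choice(SHAPES),
                "material": rng.choice(MATERIALS),
                "position": (rng.uniform(-50, 50), 0.0, rng.uniform(-50, 50)),
            }
            for _ in range(piece_count)
        ]

    for piece in build_set(3):
        print(piece)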

To get the right snow look for all levels of detail needed, we used a combination of textured snow, modeled snow and a simulation of geometric snowfall, which all needed to shade the same. For the simulated snowfall we created a padding system that could be run at any time on an environment, giving it a fresh coating of snow. We did this so that filmmakers could modify sets freely in layout and not have to worry about broken snow lines. Doing all of that with modeled snow would have been too time-consuming and costly. This padding system worked not only in organic environments, like Yeti Village, but also in the Human City at the end of the film. The snow you see in the Human City is a combination of this padding system in the foreground and textures in the background.