Behind the Title: Aardman director/designer Gavin Strange

NAME: Gavin Strange

COMPANY: Bristol, England-based Aardman. They also have an office in NYC under the banner Aardman Nathan Love

CAN YOU DESCRIBE HOW YOUR CAREER AT AARDMAN BEGAN?
I can indeed! I started 10 years ago as a freelancer, joining the fledgling Interactive department (or Aardman Online as it was known back then). They needed a digital designer for a six-month project for the UK’s Channel 4.

I was a freelancer in Bristol at the time and I made it my business to be quite vocal on all the online platforms, always updating those platforms and my own website with my latest work — whether that be client work or self-initiated projects. Luckily for me, the creative director of Aardman Online, Dan Efergan, saw my work when he was searching for a designer to work with and got in touch (it was the most exciting email ever, with the subject line “Hello from Aardman!”).

The short version of this story is that I got Dan’s email, popped in for a cup of tea and a chat, and 10 years later I’m still here! Ha!

The slightly longer but still truncated version is that after the six-month freelance project was done, the role of senior designer for the online team became open and I gave up the freelance life and, very excitedly, joined the team as an official Aardmanite!

Thing is, I was never shy about sharing with my new colleagues the other work I did. My role in the beginning was primarily digital/graphic design, but in my own time, under the banner of JamFactory (my own artist alter-ego name) I put out all sorts of work that was purely passion projects; films, characters, toys, clothing, art.

Gavin Strange directed this Christmas spot for the luxury brand Fortnum & Mason.

Filmmaking was a huge passion of mine and even at the earliest stages in my career when I first started out (I didn’t go to university so I got my first role as a junior designer when I was 17) I’d always be blending graphic design and film together.

Over those 10 years at Aardman, I continued to make films of all kinds and share them with my colleagues. Because of that, more opportunities arose to develop my film work within my existing design role. I had the unique advantage of having a lot of brilliant mentors who would guide me and help me with my moving image projects.

Those opportunities continued to grow and happen more frequently, until I was doing more and more directing here, becoming officially represented by Aardman and added to their roster of directors. It’s a dream come true for me, because not only do I get to work at the place I’ve admired growing up, but I’ve been mentored and shaped by the very individuals who make this place so special — that’s a real privilege.

What I really love is that my role is so varied — I’m both a director and a senior designer. I float between projects, and I love that variety. Sometimes I’m directing a commercial, sometimes I’m illustrating icons, other times I’m animating motion graphics. To me though, I don’t see a difference — it’s all creating something engaging, beautiful and entertaining — whatever the final format or medium!

So that’s my Aardman story. Ten years in, and I just feel like I’m getting started. I love this place.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE OF DIRECTOR?
Hmm, it’s tricky, as I actually think that most people’s perception of being a director is true: it’s that person’s responsibility to bring the creative vision to life.

Maybe what people don’t know is how flexible the role is, depending on the project. I love smaller projects where I get to board, design and animate, but then I love larger jobs with a whole crew of people. It’s always hands-on, but in many different ways.

Perhaps what would surprise a lot of people is that it’s every director’s responsibility to clean the toilets at the end of the day. That’s what Aardman has always told me and, of course, I honor that tradition. I mean, I haven’t actually ever seen anyone else do it, but that’s because everyone else just gets on with it quietly, right? Right!?

WHAT’S YOUR FAVORITE PART OF THE JOB?
Oh man, can I say everything!? I really, really enjoy the job as a whole — having that creative vision, working with yourself, your colleagues and your clients to bring it to life. Adapting and adjusting to changes and ensuring something great pops out the other end.

I really, genuinely get a thrill seeing something on-screen. I love concentrating on every single frame — it’s a win-win situation. You get to make a lovely image in each frame, and when you stitch those frames together and play them back really fast, one after another, you get a lovely movie — how great is that?

In short, I really love the sum total of the job. All those different exciting elements that all come together for the finished piece.

WHAT’S YOUR LEAST FAVORITE?
I pride myself on being an optimist and being a right positive pain in the bum, so I don’t know if there’s any part I don’t enjoy — if anything is tricky I try and see it as a challenge and something that will only improve my skillset.

I know that sounds super annoying, doesn’t it? I know that can seem all floaty and idealistic, but I pride myself on being a “realistic idealist” — recognizing the reality of a tricky situation, but seeing it through an idealistic lens.

If I’m being honest, then probably that really early stage is my least favorite, when the project is properly kicking off and there’s a huge gulf between what the treatment/script/vision says it will be and the finished thing. That’s also the most exciting part, the not knowing how it will turn out. It’s terrifying and thrilling, in equal measure. It surprises me every single time, but I think that panic is an essential part of any creative process.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
In an alternate world, I’d be a photographer, traveling the world, documenting everything I see, living the nomadic life. But that’s still a creative role, and I still class it as the same job really. I love my graphic design roots too, print and digital design, but again I see it as all the same role really.

So that means, if I didn’t have this job, I’d be roaming the lands, offering to draw/paint/film/make for anyone that wanted it! (Is that a mercenary? Is there such a thing as a visual mercenary? I don’t really have the physique for that I don’t think.)

WHY DID YOU CHOOSE THIS PROFESSION?
This profession chose me. I’m just kidding, that’s ridiculous, I just always wanted to say that.

I think, like most folks, I fell into it through a series of natural choices. Art, design, graphics and games always stole my attention as a kid, and I just followed the natural path into them, which turned into my career. I’m lucky enough that I didn’t feel the need to single out any one passion, and I kept them all bubbling along even as my career moved from designer to director. I still did, and still do, indulge my passion for all types of mediums in my own time.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I’m not sure. I wasn’t particularly driven or focused as a kid. I knew I loved design and art, but I didn’t know about the many, many different roles that existed out there. I like that, though; I see it as a positive, and also as an achievable way to progress through a career path. I speak to a lot of students and young professionals, and I think it can be so overwhelming to plot a big ‘X’ on a career map and then feel all confused about how to get there. I’m an advocate of taking it one step at a time and making more manageable advances forward — as things always get in the way and change anyway.

I love the idea of a meandering, surprising path. Who knows where it will lead!? I think as long as your aim is to make great work, then you’ll surprise yourself where you end up.

WHAT WAS IT ABOUT DIRECTING THAT ATTRACTED YOU?
I’ve always obsessed over films, and obsessed over the creation of them. I’ll watch a behind-the-scenes on any film or bit of moving image. I just love the fact that the role is to bring something to life — it’s to oversee and create something from nothing, ensuring every frame is right. The way it makes you feel, the way it looks, the way it sounds.

It’s just such an exciting role. There’s a lot of unknowns too, on every project. I think that’s where the good stuff lies. Trusting in the process and moving forwards, embracing it.

HOW DOES DIRECTING FOR ANIMATION DIFFER FROM DIRECTING FOR LIVE ACTION — OR DOES IT?
Technically it’s different — with animation your choices are pretty much made all up front, with the storyboards and animatic as your guides, and then they’re brought to life with animation. Whereas, for me, the excitement in live action is not really knowing what you’ll get until there’s a lens on it. And even then, it can come together in a totally new way in the edit.

I don’t try to differentiate myself as an “animation director” or a “live-action director.” They’re just different tools for the job. Whatever tells the best story and connects with audiences!

HOW DO YOU PICK THE PEOPLE YOU WORK WITH ON A PARTICULAR PROJECT?
Their skillset is paramount, but equally as important is their passion and their kindness. There are so many great people out there, but I think it’s so important to work with people who are great and kind. Too many people get a free pass for being brilliant but feel that celebration of their work means it’s okay to mistreat others. It’s not okay… ever. I’m lucky that Aardman is a place full of excited, passionate and engaged folk who are a pleasure to work with, because you can tell they love what they do.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I’ve been lucky enough to work on a real variety of projects recently. I directed an ident for the rebrand of BBC2, a celebratory Christmas spot for the luxury brand Fortnum & Mason and an autobiographical motion graphics short film about Maya Angelou for BBC Radio 4.

Maya Angelou short film for BBC Radio 4

I love the variety of them, just those three projects alone were so different. The BBC2 ident was live-action in-camera effects with a great crew of people, whereas the Maya Angelou film was just me on design, direction and animation. I love hopping between projects of all types and sizes!

I’m working on development of a stop-frame short at the moment, which is all I can say for now, but just the process alone going from idea to a scribble in a notebook to a script is so exciting. Who knows what 2019 holds!?

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Oh man, that’s a tough one! A few years back I co-directed a title sequence for a creative festival called OFFF, which happens every year in Barcelona. I worked with Aardman legend Merlin Crossingham to bring this thing to life, and it’s a proper celebration of what we both love — it ended up being, and we lovingly referred to as, our “stop-frame live-action motion-graphics rap-video title-sequence.” It really was all those things!

That was really special as not only did we have a great crew, I got to work with one of my favorite rappers, P.O.S., who kindly provided the beats and the raps for the film.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT
– My iPhone. It’s my music player, Internet checker, email giver, tweet maker, picture capturer.
– My Leica M6 35mm camera. It’s my absolute pride and joy, I love the images it makes.
– My Screens. At work I have a 27-inch iMac and then two 25-inch monitors on either side. I just love screens. If I could have more, I would!

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I genuinely love what I do, so I rarely feel like I “need to get away from it all.” But I do enjoy life outside of work. I’m a drummer and that really helps with any and all stress really. Even just practicing on a practice pad is cathartic, but nothing compares to smashing away on a real kit.

I like to run, and I sometimes do a street dance class, which is both great fun and excruciatingly frustrating because I’m not very good.

I’m a big gamer, even though I don’t have much time for it anymore. A blast on the PS4 is a treat. In fact, after this I’m going to have a little session on God of War before bedtime.

I love hanging with my family. My wife Jane, our young son Sullivan and our dog Peggy. Just hanging out, being a dad and being a husband is the best for de-stressing. Unless Sullivan gets up at 3am, then I change my answer back to the PS4.

I’m kidding, I love my family, I wouldn’t be anything or be anywhere without them.

Foundry Nuke 11.3’s performance, collaboration updates

Foundry has launched Nuke 11.3, introducing new features and updates to the company’s family of compositing and review tools. The release is the fourth update to the Nuke 11 Series and is designed to improve the user experience and to speed up heavy processing tasks for pipelines and individual users.

Nuke 11.3 lands with major enhancements to its Live Groups feature. It introduces new functionality along with corresponding Python callbacks and UI notifications that will allow for greater collaboration and offer more control. These updates make Live Groups easier for larger pipelines to integrate and give artists more visibility over the state of the Live Group and flexibility when using user knobs to override values within a Live Group.

The particle system in NukeX has been optimized to produce particle simulations up to six times faster than previous versions of the software, and up to four times faster for playback, allowing for faster iteration when setting up particle systems.

New Timeline Multiview support provides an extension to stereo and VR workflows. Artists can now use the same multiple-file stereo workflows that exist in Nuke on the Nuke Studio, Hiero and HieroPlayer timeline. The updated export structure can also be used to create multiple-view Nuke scripts from the timeline in Nuke Studio and Hiero.

Support for full-resolution stereo on monitor out makes review sessions even easier, and a new export preset helps with rendering of stereo projects.

New UI indications for changes in bounding box size and channel count help artists troubleshoot their scripts. A visual indication identifies nodes that increase bounding box size to be greater than the image, helping artists to identify the state of the bounding box at a glance. Channel count is now displayed in the status bar, and a warning is triggered when the 1024-channel limit is exceeded. The appearance and threshold for triggering the bounding box and channel warnings can be set in the preferences.

The selection tool has also been improved in both 2D and 3D views, and an updated marquee and new lasso tool make selecting shapes and points even easier.

Nuke 11.3 is available for purchase — alongside full release details — on Foundry’s website and via accredited resellers.


Making an animated series with Adobe Character Animator

By Mike McCarthy

In a departure from my normal film production technology focus, I have also been working on an animated web series called Grounds of Freedom. Over the past year I have been directing the effort and working with a team of people across the country who are helping in various ways. After a year of meetings, experimentation and work, we finally started releasing finished episodes on YouTube.

The show takes place in Grounds of Freedom, a coffee shop where a variety of animated mini-figures gather to discuss freedom and its application to present-day cultural issues and events. The show is created with a workflow that weaves through a variety of Adobe Creative Cloud apps. Back in October I presented our workflow during Adobe Max in LA, and I wanted to share it with postPerspective’s readers as well.

When we first started planning for the series, we considered using live action. Ultimately, after being inspired by the preview releases of Adobe Character Animator, I decided to pursue a new digital approach to brick filming (making films with Lego bricks), which is traditionally accomplished through stop-motion animation. Once everyone else realized the simpler workflow possibilities and increased level of creative control offered by that new animation process, they were excited to pioneer this new approach. Animation gives us more control and flexibility over the message and dialog, lowers production costs and eases collaboration over long distances, as there is no “source footage” to share.

Creating the Characters
The biggest challenge to using Character Animator is creating digital puppets, which are deeply layered Photoshop PSDs with very precise layer naming and stacking. There are ways to generate the underlying source imagery in 3D animation programs, but I wanted the realism and authenticity of sourcing from actual photographs of the models and figures. So we took lots of 5K macro shots of our sets and characters in various positions with our Canon 60D and 70D DSLRs and cut out hundreds of layers of content in Photoshop to create our characters and all of their various possible body positions. The only thing that was synthetically generated was the various facial expressions digitally painted onto their clean yellow heads, usually to match an existing physical reference character face.
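That precise layer naming matters because Character Animator’s automatic lip sync looks for a mouth group containing its standard viseme layers. As a minimal sketch, a script like the following could sanity-check a puppet’s mouth group before import (this is a hypothetical helper, not a tool from the Grounds of Freedom pipeline):

```python
# Character Animator's auto lip sync expects a mouth group containing
# its standard viseme layers. Hypothetical sanity check, not part of
# the production workflow described in the article.
VISEMES = {"Neutral", "Aa", "D", "Ee", "F", "L", "M",
           "Oh", "R", "S", "Uh", "W-Oo"}

def missing_visemes(mouth_layers):
    """Return the standard viseme names absent from a mouth group."""
    return sorted(VISEMES - set(mouth_layers))

# Example: a puppet PSD whose mouth group is missing two mouth shapes.
layers = ["Neutral", "Aa", "D", "Ee", "F", "L", "M", "Oh", "R", "S"]
print(missing_visemes(layers))  # -> ['Uh', 'W-Oo']
```

Catching a misnamed or missing viseme layer up front is much cheaper than discovering it after computing lip sync for a whole dialog track.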

Mike McCarthy shooting stills.

Once we had our source imagery organized into huge PSDs, we rigged those puppets in Character Animator with various triggers, behaviors and controls. The walking was accomplished by cycling through various layers, instead of the default bending of the leg elements. We created arm movement by mapping each arm position to a MIDI key. We controlled facial expressions and head movement via webcam, and the mouth positions were calculated by the program based on the accompanying audio dialog.

Animating Digital Puppets
The puppets had to be finished and fully functional before we could start animating on the digital stages we had created. We had been writing the scripts during that time, parallel to generating the puppet art, so we were ready to record the dialog by the time the puppets were finished. We initially attempted to record live in Character Animator while capturing the animation motions as well, but we didn’t have the level of audio editing functionality we needed available to us in Character Animator. So during that first session, we switched over to Adobe Audition, and planned to animate as a separate process, once the audio was edited.

That whole idea of live capturing audio and facial animation data is laughable now, looking back, since we usually spend a week editing the dialog before we do any animating. We edited each character audio on a separate track and exported those separate tracks to Character Animator. We computed lipsync for each puppet based on their dedicated dialog track and usually exported immediately. This provided a draft visual that allowed us to continue editing the dialog within Premiere Pro. Having a visual reference makes a big difference when trying to determine how a conversation will feel, so that was an important step — even though we had to throw away our previous work in Character Animator once we made significant edit changes that altered sync.

We repeated the process once we had a more final edit. We carried on from there in Character Animator, recording arm and leg motions with the MIDI keyboard in realtime for each character. Once those trigger layers had been cleaned up and refined, we recorded the facial expressions, head positions and eye gaze with a single pass on the webcam. Every re-record to alter a particular section adds a layer to the already complicated timeline, so we limited that as much as possible, usually re-recording instead of making quick fixes unless we were nearly finished.

Compositing the Characters Together
Once we had fully animated scenes in Character Animator, we would turn off the background elements and isolate each character layer to be exported in Media Encoder via Dynamic Link. I did a lot of testing before settling on JPEG 2000 MXF as the format of choice. I wanted a highly compressed file but needed alpha channel support, and that was the best option available. Each of those renders became a character layer, which was composited into our stage layers in After Effects. We could have dynamically linked the characters directly into AE, but with that many layers it would decrease performance for the interactive part of the compositing work. We added shadows and reflections in AE, as well as various other effects.

Walking was one of the most challenging effects to properly recreate digitally. Our layer cycling in Character Animator resulted in a static figure swinging its legs, but people (and mini figures) have a bounce to their step, and move forward at an uneven rate as they take steps. With some pixel measurement and analysis, I was able to use anchor point keyframes in After Effects to get a repeating movement cycle that made the character appear to be walking on a treadmill.

I then used carefully calculated position keyframes to add the appropriate amount of travel per frame for the feet to stick to the ground, which varies based on the scale as the character moves toward the camera. (In my case the velocity was half the scale value in pixels per second.) We then duplicated that layer to create the reflection and shadow of the character as well. That result can then be composited onto various digital stages. In our case, the first two shots of the intro were designed to use the same walk animation with different background images.
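The travel-per-frame arithmetic is simple to work out from that rule of thumb. A quick sketch (the 24fps frame rate and the scale values below are illustrative assumptions; the article only gives the velocity-equals-half-the-scale relationship):

```python
# Per-frame travel needed to keep the feet planted during the walk
# cycle, using the rule above: velocity in pixels per second is half
# the layer's scale value. The 24fps frame rate is an assumption.
FPS = 24.0

def travel_per_frame(scale_percent, fps=FPS):
    """Pixels the layer should move each frame at a given scale."""
    velocity = scale_percent / 2.0  # pixels per second
    return velocity / fps           # pixels per frame

# As the character scales up walking toward camera, the per-frame
# travel grows proportionally, which is why the position keyframes
# had to account for scale.
for scale in (50.0, 100.0, 150.0):
    print(scale, round(travel_per_frame(scale), 3))
```

At 100% scale this works out to roughly 2.08 pixels per frame at 24fps, and half or one-and-a-half times that at 50% and 150% scale respectively.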

All of the character layers were pre-comped, so we only needed to update a single location when a new version of a character was rendered out of Media Encoder, or when we brought in a dynamically linked layer. It would propagate all the necessary comp layers to generate updated reflections and shadows. Once the main compositing work was finished, we usually only needed to make slight changes in each scene between episodes. These scenes were composited at 5K, based on the resolution of the DSLR photos of the sets we had built. These 5K plates could be dynamically linked directly into Premiere Pro and occasionally used later in the process to ripple slight changes through the workflow. For the interactive work, we got far better editing performance by rendering out flattened files. We started with DNxHR 5K assets but eventually switched to HEVC files, since they were 30x smaller and imperceptibly different in quality with our relatively static animated content.

Editing the Animated Scenes
In Premiere Pro, we had the original audio edit, and usually a draft render of the characters with just their mouths moving. Once we had the plate renders, we placed them each in their own 5K scene sub-sequence and used those sequences as source on our master timeline. This allowed us to easily update the content when new renders were available, or source from dynamically linked layers instead if needed. Our master timeline was 1080p, so with 5K source content we could push in two and a half times the frame size without losing resolution. This allowed us to digitally frame every shot, usually based on one of two rendered angles, and gave us lots of flexibility all the way to the end of the editing process.
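The headroom for that digital framing is just the ratio of the plate width to the timeline width. A quick check (the 5120-pixel "5K" width is my assumption; the article simply describes the plates as 5K and rounds the headroom to about two and a half times):

```python
# Maximum "push in" before a plate drops below native resolution on
# the timeline: source width divided by timeline width. A 5K width of
# 5120px is an assumed value; the actual plate width may differ.
def max_push_in(source_width, timeline_width=1920):
    return source_width / timeline_width

print(round(max_push_in(5120), 2))  # -> 2.67 on a 1080p timeline
```

Any punch-in below that ratio keeps the shot at or above native 1080p resolution, which is what made reframing every shot in the edit essentially free.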

Collaborative Benefits of Dynamic Link
While Dynamic Link doesn’t offer the best playback performance without making temp renders, it does have two major benefits in this workflow. It ripples changes to the source PSD all the way to the final edit in Premiere just by bringing each app into focus once. (I added a name tag to one character’s PSD during my presentation, and 10 seconds later it was visible throughout my final edit.) Even more importantly, it allows us to collaborate online without having to share any exported video assets. As long as each member of the team has the source PSD artwork and audio files, all we have to exchange online are the Character Animator project (which is small once the temp files are removed), the .AEP file and the .PrProj file.

This gives any of us the option to render full-quality visual assets anytime we need them, but the work we do on those assets is all contained within the project files that we sync to each other. The coffee shop was built and shot in Idaho, our voice artist was in Florida and our puppets’ faces were created in LA. I animate and edit in Northern California, the AE compositing was done in LA, and the audio is mixed in New Jersey. We did all of that with nothing but a Dropbox account, using the workflow I have just outlined.

Past that point, it was a fairly traditional finish, in that we edited in music and sound effects and sent an OMF to Steve, our sound guy at DAWPro Studios (http://dawpro.com/photo_gallery.html), for the final mix. During that time we added other b-roll visuals or other effects, and once we had the final audio back we rendered the final result to H.264 at 1080p and uploaded it to YouTube.


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.


VFX supervisor Simon Carr joins London’s Territory

Simon Carr has joined visual effects house Territory, bringing with him 20 years of experience as a VFX supervisor. He most recently served that role at London’s Halo, where he built the VFX department from scratch. He has also supervised at Realise Studio, Method Studios, Pixomondo, Digital Domain and others. While Carr will be based in London, he will also support the studio’s San Francisco offices as needed.

The studio has invested in a Shotgun pipeline, with a bespoke toolkit that integrates Territory’s design-led approach with VFX delivery, and Carr’s appointment, according to the studio, signals a strategic approach to expanding the team’s capabilities. “Simon’s experience of all stages of the VFX process from pre-production to final delivery means that our clients and partners can be confident of seamless high-end VFX delivery at every stage of a project,” says David Sheldon-Hicks, Territory’s founder and executive creative director.

At Territory, Carr will use his experience building and leading teams of artists, from compositing through to complex environment builds. The studio will also benefit from his experience of building a facility from scratch — establishing pipelines and workflows, recruiting and retaining artists, developing and maintaining relationships with clients and being involved with the pitching and bidding process.

The studio has worked on high-profile film projects, including Blade Runner 2049, Ready Player One, Pacific Rim: Uprising, Ghost in the Shell, The Martian and Guardians of the Galaxy. On the broadcast front, they have worked on Nightflyers, the new series based on George R.R. Martin’s novella, Amazon Prime/Channel 4’s Electric Dreams and National Geographic’s Year Million.



Behind the Title: Lobo EP, Europe Loic Francois Marie Dubois

NAME: Loic Francois Marie Dubois

COMPANY: New York- and São Paulo, Brazil-based Lobo

CAN YOU DESCRIBE YOUR COMPANY?
We are a full-service creative studio offering design, live action, stop motion, 3D & 2D, mixed media, print, digital, AR and VR.

Day One spot Sunshine

WHAT’S YOUR JOB TITLE?
Creative executive producer for Europe and formerly head of production. I’m based in Brazil, but work out of the New York office as well.

WHAT DOES THAT ENTAIL?
Managing and hiring creative teams, designers, producers and directors for international productions (USA, Europe, Asia). Also, I have served as the creative executive director for TBWA Paris on the McDonald’s Happy Meal global campaign for the last five years. Now, as creative EP for Europe, I am also responsible for streamlining information from pre-production to post-production between all production parties for a more efficient and prosperous sales outcome.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
The patience and the fun psychological side you need to have to handle all the production peeps, agencies, and clients.

WHAT TOOLS DO YOU USE?
Excel, Word, Showbiz, Keynote, Pages, Adobe Package (Photoshop, Illustrator, After Effects, Premiere, InDesign), Maya, Flame, Nuke and AR/VR technology.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Working with talented creative people on extraordinary projects with a stunning design and working on great narratives, such as the work we have done for clients including Interface, Autism Speaks, Imaginary Friends, Unicef and Travelers, to name a few.

WHAT’S YOUR LEAST FAVORITE?
Monday morning.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Early afternoon between Europe closing down and the West Coast waking up.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Meditating in Tibet…

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
Since I was 13 years old. After shooting and editing a student short film (an Oliver Twist adaptation) with a Bolex 16mm on location in London and Paris, I was hooked.

Promoting Lacta 5Star chocolate bars

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
An animated campaign for the candy company Mondelez’s Lacta 5Star chocolate bars; an animated short film for the Imaginary Friends Society; a powerful animated short on the dangers of dating abuse and domestic violence for nonprofit Day One; a mixed media campaign for Chobani called FlipLand; and a broadcast spot for McDonald’s and Spider-Man.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
My three kids 🙂

It’s really hard to choose one project, as they are all equally different and amazing in their own way, but maybe D&AD Wish You Were Here. It stands out for the number of awards it won and the collective creative production process.

NAME PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
The Internet.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Meditation and yoga.


Chaos Group to support Cinema 4D with two rendering products

At the Maxon Supermeet 2018 event, Chaos Group announced its plans to support the Maxon Cinema 4D community with two rendering products: V-Ray for Cinema 4D and Corona for Cinema 4D. Based on V-Ray’s Academy Award-winning raytracing technology, the development of V-Ray for Cinema 4D will be focused on production rendering for high-end visual effects and motion graphics. Corona for Cinema 4D will focus on artist-friendly design visualization.

Chaos Group, which acquired the V-Ray for Cinema 4D product from LAUBlab and will lead development on the product for the first time, will offer current customers free migration to a new update, V-Ray 3.7 for Cinema 4D. All users who move to the new version will receive a free V-Ray for Cinema 4D license, including all product updates, through January 15, 2020. Moving forward, Chaos Group will be providing all support, sales and product development in-house.

In addition to ongoing improvements to V-Ray for Cinema 4D, Chaos Group also released the Corona for Cinema 4D beta 2 at Supermeet, with the final product to follow in January 2019.

Main Image: Daniel Sian created Robots using V-Ray for Cinema 4D.


Promoting a Mickey Mouse watch without Mickey

Imagine creating a spot for a watch that celebrates the 90th anniversary of Mickey Mouse — but you can’t show Mickey Mouse. Already Been Chewed (ABC), a design and motion graphics studio, developed a POV concept that met this challenge and also tied in the design of the actual watch.

Nixon, a California-based premium watch company that is releasing a series of watches around the Mickey Mouse anniversary, called on Already Been Chewed to create the 20-second spot.

“The challenge was that the licensing arrangement that Disney made with Nixon doesn’t allow Mickey’s image to be in the spot,” explains Barton Damer, creative director at Already Been Chewed. “We had to come up with a campaign that promotes the watch and has some sort of call to action that inspires people to want this watch. But, at the same time, what were we going to do for 20 seconds if we couldn’t show Mickey?”

After much consideration, Damer and his team developed a concept to determine if they could push the limits on this restriction. “We came up with a treatment for the video that would be completely point-of-view, and the POV would do a variety of things for us that were working in our favor.”

The solution was to show Mickey’s hands and feet without actually showing the whole character. In another instance, a silhouette of Mickey is seen in the shadows on a wall, sending a clear message to viewers that the spot is an official Disney and Mickey Mouse release and not just something that was inspired by Mickey Mouse.

Targeting the appropriate consumer demographic segment was another key issue. “Mickey Mouse has long been one of the most iconic brands in the history of branding, so we wanted to make sure that it also appealed to the Nixon target audience and not just a Disney consumer,” Damer says. “When you think of Disney, you could brand Mickey for children or you could brand it for adults who still love Mickey Mouse. So, we needed to find a style and vibe that would speak to the Nixon target audience.”

The Already Been Chewed team chose surfing and skateboarding as dominant themes, since 16- to 30-year-olds are the target demographic and also because Disney is a West Coast brand.
Damer comments, “We wanted to make sure we were creating Mickey in a kind of 3D, tangible way, with more of a feature film and 3D feel. We felt that it should have a little bit more of a modern approach. But at the same time, we wanted to mesh it with a touch of the old-school vibe, like 1950s cartoons.”

In that spirit, the team wanted the action to start with Mickey walking from his car and then culminate at the famous Venice Beach basketball courts and skate park.

“The challenge, of course, is how to do all this in 15 seconds so that we can show the logos at the front and back and a hero image of the watch. And that’s where it was fun thinking it through and coming up with the flow of the spot and seamless transitions with no camera cuts or anything like that. It was a lot to pull off in such a short time, but I think we really succeeded.”

Already Been Chewed achieved these goals with an assist from Maxon’s Cinema 4D and Adobe After Effects. With Damer as creative lead, here’s the complete cast of characters: head of production Aaron Smock; 3D design by Thomas King, Barton Damer, Bryan Talkish and Lance Eckert; animation by Bryan Talkish and Lance Eckert; character animation by Chris Watson; soundtrack by DJ Sean P.


London design, animation studio Golden Wolf sets up shop in NYC

Animation studio Golden Wolf, headquartered in London, has launched its first stateside location in New York City. The expansion comes on the heels of an alliance with animation/VFX/live-action studio Psyop, a minority investor in the company. Golden Wolf now occupies studio space in SoHo adjacent to Psyop and its sister company Blacklist, which formerly represented Golden Wolf stateside and was instrumental to the relationship.

Among the year’s highlights from Golden Wolf are an integrated campaign for Nike FA18 Phantom (client direct), a spot for the adidas x Parley Run for the Oceans initiative (TBWA Amsterdam) in collaboration with Psyop, and Marshmello’s Fly music video for Disney. Golden Wolf also received an Emmy nomination for its main title sequence for Disney’s DuckTales reboot.

Heading up Golden Wolf’s New York office are two transplants from the London studio, executive producer Dotti Sinnott and art director Sammy Moore. Both joined Golden Wolf in 2015, Sinnott from motion design studio Bigstar, where she was a senior producer, and Moore after a run as a freelance illustrator/designer in London’s agency scene.

Sinnott comments: “Building on the strength of our London team, the Golden Wolf brand will continue to grow and evolve with the fresh perspective of our New York creatives. Our presence on either side of the Atlantic not only brings us closer to existing clients, but also positions us perfectly to build new relationships with New York-based agencies and brands. On top of this, we’re able to use the time difference to our advantage to work on faster turnarounds and across a range of budgets.”

Founded in 2013 by Ingi Erlingsson, the studio’s executive creative director, Golden Wolf is known for youth-oriented work — especially content for social media, entertainment and sports — that blurs the lines of irreverent humor, dynamic action and psychedelia. Erlingsson was once a prolific graffiti artist and, later, illustrator/designer and creative director at U.K.-based design agency ilovedust. Today he inspires Golden Wolf’s creative culture and disruptive style fed in part by a wave of next-gen animation talent coming out of schools such as Gobelins in France and The Animation Workshop in Denmark.

“I’m excited about our affiliation with Psyop, which enjoys an incredible legacy producing industry-leading animated advertising content,” Erlingsson says. “Golden Wolf is the new kid on the block, with bags of enthusiasm and an aim to disrupt the industry with new ideas. The combination of the two studios means that we are able to tackle any challenge, regardless of format or technical approach, with the support of some of the world’s best artists and directors. The relationship allows brands and agencies to have complete confidence in our ability to solve even the biggest challenges.”

Golden Wolf’s initial work out of its New York studio includes spots for Supercell (client direct) and Bulleit Bourbon (Barton F. Graf). Golden Wolf is represented in the US market by Hunky Dory for the East Coast, Baer Brown for the Midwest and In House Reps for the West Coast. Stink represents the studio for Europe.

Main Photo: (L-R) Dotti Sinnott, Ingi Erlingsson and Sammy Moore.


Review: Foundry’s Athera cloud platform

By David Cox

I’ve been thinking for a while that there are two types of post houses — those that know what cloud technology can do for them, and those whose days are numbered. That isn’t to say that the use of cloud technology is essential to the survival of a post house, but if they haven’t evaluated the possibilities of it they’re probably living in the past. In such a fast-moving business, that’s not a good place to be.

The term “cloud computing” suffers a bit from being hijacked by know-nothing marketeers and has become a bit vague in meaning. It’s quite simple though: it just means a computer (or storage) owned and maintained by someone else, housed somewhere else and used remotely. The advantage is that a post house can reduce its destructive fixed overheads by owning fewer computers and thus save money on installation and upkeep. Cloud computers can be used as and when they are needed. This allows scaling up and down in proportion to workload.

Over the last few years, several providers have created global datacenters containing upwards of 50,000 servers per site, entirely for the use of anyone who wants to “remote in.” Amazon and Google are the two biggest providers, but as anyone who has tried to harness their power for post production can confirm, they’re not simple to understand or configure. Amazon alone has hundreds of different computer “instance” types, and accessing them requires navigating through a sea of unintelligible jargon. You must know your Elastic Beanstalks from your EC2, EKS and Lambda. And make sure you’ve worked out how to connect your S3, EFS and Glacier. Software licensing can also be tricky.

The truth is, these incredible cloud installations are for cleverer people than those of us that just like to make pretty pictures. They are more for the sort that like to build neural networks and don’t go outside very much. What our industry needs is some clever company to make a nice shiny front end that allows us to harness that power using the tools we know and love, and just make it all a bit simpler. Enter Athera, from Foundry. That’s exactly what they’ve done.

What is Athera?

Athera is a platform hosted on Google Cloud infrastructure that presents a user with icons for apps such as Nuke and Houdini. Access to each app is via short-term (30-day) rental. When an available app icon is clicked, a cloud computer is commanded into action, pre-installed with the chosen app. From then on, the app is used just as if locally installed. Of course, the app is actually running on a high-performance computer located in a secure and nicely cooled datacenter environment. Provided the user has a vaguely decent Internet connection, they’re good to go, because only the user interface is being transmitted across the network, not the actual raw image data.

Apps available on Athera include Foundry’s products, plus a few others. Nuke is represented in its base form, plus a Nuke X variant, Nuke Studio, and a combination of Nuke X and Cara VR. Also available are the Mari texture painting suite, Katana look-creating app and Modo CGI modeling software.

Athera also offers access to non-Foundry products like CGI software Houdini and Blender, as well as the Gaffer management tool.

Nuke

In my first test, I rustled up an instance of Nuke Studio and one of Blender. The first thing I wanted to test was the GPU speed, as this can be somewhat variable for many cloud computer types (usually between zero and not much). I was pleasantly surprised as the rendering speed was close to that of a local Nvidia GeForce GTX 1080, which is pretty decent. I was also pleased to see that user preferences were maintained between sessions.

One thing that particularly impressed me was how I could call up multiple apps together and Athera would effectively build a network in the background to link them all up. Frames rendered out of Blender were instantly available in the cloud-hosted Nuke Studio, even though it was running on a different machine. This suggests the Athera infrastructure is well thought out because multi-machine, networked pipelines with attached storage are constructed with just a few clicks and without really thinking about it.

Access to the Athera apps is either by web browser or via local client software called “Orbit.” In web browser mode, each app opens in its own browser tab. With Orbit, each app appears in a dedicated local window. Orbit boasts lower latency and the ability to use local hardware such as multiple monitors. Latency, which would show itself as a frustrating delay between control input and visual feedback, was impressively low, even when using the web browser interface. Generally, it was easy to forget that the app being used was not installed locally.

Getting files in and out was also straightforward. A Dropbox account can be directly linked, although a Google or Amazon S3 storage “bucket” is preferred for speed. There is also a hosted app called “Toolbox,” which is effectively a file browser to allow the management of files and folders.

The Athera platform also contains management and reporting features. A manager can set up projects and users, setting out which apps and projects a user has access to. Quotas can be set, and full reports are given as to who did what, when and with which app.

Athera’s pricing is laid out on their website and it’s interesting to drill into the costs and make comparisons. A user buys access to apps in 30-day blocks. Personally, I would like to see shorter blocks at some point to increase up/down scale flexibility. That said, render-only instances for many of the apps can be accessed on a per-second billing basis. The 30-day block comes with a “fair use” policy of 200 hours. This is a hard limit, which equates to around nine and a half hours per day for five-day weeks (which is technically known in post production as part time).

Figuring Out Cost
Blender is a good place to start analyzing cost because it’s open source (free) software, so the $244 Athera cost to run for 30 days/200 hours must be for hardware only. This equates to $1.22 per hour, which, compared to direct cloud computer usage, is pretty good value for the GPU-backed machine on offer.

Modo

Another way to look at $244 a month: a new computer costing $5,800 loses roughly that amount in value each month if depreciated over two years. If depreciated over three years, the figure is about $80 per month less. Of course, that’s just the cost of depreciation. Cost of ownership must also include updating, maintaining, powering, cooling, insuring, housing and repairing the machine if (when!) it breaks down. If a cloud computer breaks down, Google has a few thousand waiting in the wings. In general, the base hardware cost seems quite competitive.
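As a back-of-the-envelope check, the comparison above can be sketched in a few lines of Python. The figures are the ones quoted in this review (the $5,800 workstation is a hypothetical example, not a specific product):

```python
# Rough comparison of Athera's hardware-only cost vs. owning a workstation,
# using the figures quoted in the review (illustrative, not official pricing).

athera_block = 244        # Blender block: 30 days / 200 hours, hardware only
hours_per_block = 200

per_hour = athera_block / hours_per_block
print(f"Athera effective rate: ${per_hour:.2f}/hour")

workstation_cost = 5800   # hypothetical new workstation price
for years in (2, 3):
    monthly = workstation_cost / (years * 12)
    print(f"Owned machine, {years}-year depreciation: ${monthly:.0f}/month")
# Over two years the owned machine works out close to the Athera block;
# over three years it is roughly $80/month less -- before power, cooling,
# insurance and repairs are counted.
```

This only compares depreciation against rental; the ownership side would also carry the running costs listed above.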

Of course, Blender is not really the juicy stuff. Access to a base Nuke, complete with workstation, is $685 per 30 days / 200 hours. Nuke X is $1,025. There are also “power” options for around 20% more, where a significantly more powerful machine is provided. Compared to running a local machine with purchased or rented software, these prices are very interesting. But when the ability to scale up and down with workload is factored in, especially being able to scale down to nothing during quiet times, the case for Athera becomes quite compelling.

Another helpful factor is that a single 30-day access block to a particular app can be shared between multiple users — as long as only one user has control of the app at a time. This is subject to the fair use limitation.

There is an issue if commercial (licensed) plug-ins are needed. For the time being, these can’t be used on Athera due to the obvious licensing issues of installing them on a different cloud machine each time. Hopefully, plug-in developers will become alive to the possibilities of pay-per-use licensing, as a platform like Athera could be the perfect storefront.

Mari

Security
One of the biggest concerns about using remote computing is that of security. This concern tends to be more perceptual than real. The truth is that a Google datacenter is likely to have significantly more security than an average post company’s machine room. Also, they will be employing the best in the security business. But if material being worked on leaks out into the public, telling a client, “But I just sent it to Google and figured it would be fine,” isn’t going to sound great. Realistically, the most likely concern for security is the sending of data to and from a datacenter. A security breach inside the datacenter is very unlikely. As ever, a post producer has to remain vigilant.

Summing Up
I think Foundry has been very smart and forward-thinking to create a platform that can support more than just Foundry products in the cloud. It would have been understandable if they had just made it a storefront for alternative ways of using Nuke and company, but they clearly see a bigger picture. Using a platform like Athera, post infrastructure can be assembled and disassembled on demand, allowing post producers to match their overheads to their workload.

Athera enables smart post producers to build a highly scalable post environment with access to a global pool of creative talent who can log in and contribute from anywhere with little more than a modest computer and internet connection.

I hate the term game-changer — it’s another term so abused by know-nothing marketeers who have otherwise run out of ideas — but Athera, or at least what this sort of platform promises to provide, is most certainly a game-changer. Especially if more apps from different manufacturers can be included.


David Cox is a VFX compositor and colorist with 20-plus years of experience. He started his career with MPC and The Mill before forming his own London-based post facility. Cox recently created interactive projects with full body motion sensors and 4D/AR experiences.

Our SIGGRAPH 2018 video coverage

SIGGRAPH is always a great place to wander around and learn about new and future technology. You can see amazing visual effects reels and learn how the work was created by the artists themselves. You can get demos of new products, and you can immerse yourself in a completely digital environment. In short, SIGGRAPH is educational and fun.

If you weren’t able to make it this year, or attended but couldn’t see it all, we would like to invite you to watch our video coverage from the show.

SIGGRAPH 2018