
Category Archives: motion graphics

Behind the Title: Title Designer Nina Saxon

For 40 years, Nina Saxon has been a pioneer in movie title design, and she remains one of the few women working in this part of the industry.

NAME: Nina Saxon

COMPANY: Nina Saxon Design

CAN YOU DESCRIBE YOUR COMPANY?
We design main and end titles for film and television as well as branding for still and moving images.

WHAT’S YOUR JOB TITLE?
Title Designer

WHAT DOES THAT ENTAIL?
Making a moving introduction for a film, like a book cover, that sets up the story. Or it might be simple type over picture. It also means watching the film and showing the director samples or storyboards of what I think should be used.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
That I’m one of only a few women in this field and have worked for 40 years, hiring others to help me only if necessary.

WHAT’S YOUR FAVORITE PART OF THE JOB?
When my project is done and I get to see my finished work up on the screen.

WHAT’S YOUR LEAST FAVORITE?
Waiting to be paid.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
Morning

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I’d probably be a psychologist.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
In 1975, I was in the film department at UCLA and became determined to work in the film business.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
The upcoming documentary on Paul McCartney called Here, There and Everywhere, and upcoming entertainment industry corporate logos that will be revealed in October. In the past, I did the movie Salt with Angelina Jolie and the movie Flight with Denzel Washington.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Working on the main title open for Forrest Gump.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My iPad, iPhone and computer

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I exercise a lot, five to six days a week; drink a nice glass of wine; try to get enough sleep; listen to music while meditating before sleep; and make sure I know what I need to do the next day before I go to bed.

Game of Thrones’ Emmy-nominated visual effects

By Iain Blair

Once upon a time, only glamorous movies could afford the time and money it took to create truly imaginative and spectacular visual effects. Meanwhile, television shows either tried to avoid them altogether or had to rely on hand-me-downs. But the digital revolution changed all that, with technological advances and new tools quickly leveling the playing field. Today, television is giving the movies a run for their money when it comes to sophisticated visual effects, as evidenced by HBO’s blockbuster series, Game of Thrones.

Mohsen Mousavi

This fantasy series was recently Emmy-nominated a record-busting 32 times for its eighth and final season — including one for its visually ambitious VFX in the penultimate episode, “The Bells.”

The epic mass destruction presented Scanline’s VFX supervisor, Mohsen Mousavi, and his team with many challenges. But his expertise in high-end visual effects, and his reputation for constant innovation in advanced methodology, made him a perfect fit to oversee Scanline’s VFX for the crucial last three episodes of the final season of Game of Thrones.

Mousavi started his VFX career in the field of artificial intelligence and advanced-physics-based simulations. He spearheaded designing and developing many different proprietary toolsets and pipelines for doing crowd, fluid and rigid body simulation, including FluidIT, BehaveIT and CardIT, a node-based crowd choreography toolset.

Prior to joining Scanline VFX Vancouver, Mousavi rose through the ranks of top visual effects houses, working in jobs that ranged from lead effects technical director to CG supervisor and, ultimately, VFX supervisor. He’s been involved in such high-profile projects as Hugo, The Amazing Spider-Man and Sucker Punch.

In 2012, he began working with Scanline, acting as digital effects supervisor on 300: Rise of an Empire, for which Scanline handled almost 700 water-based sea battle shots. He then served as VFX supervisor on San Andreas, helping develop the company’s proprietary city-generation software. That software and pipeline were further developed and enhanced for scenes of destruction in director Roland Emmerich’s Independence Day: Resurgence. In 2017, he served as the lead VFX supervisor for Scanline on the Warner Bros. shark thriller, The Meg.

I spoke with Mousavi about creating the VFX and their pipeline.

Congratulations on being Emmy-nominated for “The Bells,” which showcased so many impressive VFX. How did all your work on Season 4 prepare you for the big finale?
We were heavily involved in the finale of Season 4, but the scope was far smaller. What we learned was the collaborative nature of the show, what the expectations were in terms of the quality of the work, and what HBO wanted.

You were brought onto the project by lead VFX supervisor Joe Bauer, correct?
Right. Joe was the “client VFX supervisor” on the HBO side and was involved since Season 3. Together with my producer, Marcus Goodwin, we also worked closely with HBO’s lead visual effects producer, Steve Kullback, who I’d worked with before on a different show and in a different capacity. We all had daily sessions and conversations, a lot of back and forth, and Joe would review the entire work, give us feedback and manage everything between us and other vendors, like Weta, Image Engine and Pixomondo. This was done both technically and creatively, so no one stepped on each other’s toes if we were sharing a shot and assets. But it was so well-planned that there wasn’t much overlap.

[Editor’s Note: Here is the full list of those nominated for their VFX work on Game of Thrones — Joe Bauer, lead visual effects supervisor; Steve Kullback, lead visual effects producer; Adam Chazen, visual effects associate producer; Sam Conway, special effects supervisor; Mohsen Mousavi, visual effects supervisor; Martin Hill, visual effects supervisor; Ted Rae, visual effects plate supervisor; Patrick Tiberius Gehlen, previz lead; and Thomas Schelesny, visual effects and animation supervisor.]

What were you tasked with doing on Season 8?
We were involved as one of the lead vendors on the last three episodes and covered a variety of sequences. In episode four, “The Last of the Starks,” we worked on the confrontation between Daenerys and Cersei in front of the King’s Landing’s gate, which included a full CG environment of the city gate and the landscape around it, as well as Missandei’s death sequence, which featured a full CG Missandei. We also did the animated Drogon outside the gate while the negotiations took place.

Then for “The Bells” we were responsible for most of the Battle of King’s Landing, which included the full digital city, Daenerys’ army camp outside the walls of King’s Landing, the gathering of soldiers in front of the King’s Landing walls, Dany’s attack on the scorpions, the city gate, streets and the Red Keep, which had some very close-up set extensions, close-up fire and destruction simulations, and full CG crowds of various factions — armies and civilians. We also did the iconic Cleganebowl fight between The Hound and The Mountain and Jaime Lannister’s fight with Euron on the beach beneath the Red Keep. In Episode 5, we received raw animation caches of the dragon from Image Engine and did the full look-dev, lighting and rendering of the final dragon in our composites.

For the final episode, “The Iron Throne,” we were responsible for the entire Daenerys speech sequence, which included a full 360-degree digital environment of the city aftermath and the Red Keep plaza filled with digital Unsullied, Dothraki and CG horses, leading into the majestic confrontation between Jon and Drogon, where the dragon reveals itself from underneath a huge pile of snow outside the Red Keep. We were also responsible for the iconic throne-melt sequence, which included some advanced simulation of highly viscous fluid and destruction of the area around the throne, finishing the dramatic sequence with Drogon carrying Dany out of the throne room and away from King’s Landing into the unknown.

Where was all this work done?
The majority of the work was done here in Vancouver, which is the biggest Scanline office. Additionally we had teams working in our Munich, Montreal and LA offices. We’re a 100% connected company, all working under the same infrastructure in the same pipeline. So if I work with the team in Munich, it’s like they’re sitting in the next room. That allows us to set up and attack the project with a larger crew and get the benefit of the 24/7 scenario; as we go home, they can continue working, and it makes us far more productive.

How many VFX did you have to create for the final season?
We worked on over 600 shots across the final three episodes, which gave us over an hour of screen time of high-end, consistent visual effects.

Isn’t that hour length unusual for 600 shots?
Yes, but we had a number of shots that were really long, including some ground-coverage shots of Arya in the streets of King’s Landing that ran four or five minutes. So we had the complexity along with the long duration.

How many people were on your team?
At the height, we had about 350 artists on the project, and we began in March 2018 and didn’t wrap till nearly the end of April 2019 — so it took us over a year of very intense work.

Tell us about the pipeline specific to Game of Thrones.
Scanline has an industry-wide reputation for delivering very complex, full CG environments combined with complex simulation scenarios of all sorts of fluid dynamics and destruction, based on our simulation framework, Flowline. We had a high-end digital character and hero creature pipeline that gave the final three episodes a boost up front. What was new were the additions to our procedural city generation pipeline for the recreation of King’s Landing, making sure it could deliver both in wide-angle shots and in some extreme close-up set extensions.

How did you do that?
We used a framework we developed back on Independence Day: Resurgence, which is a module-based procedural city generation system leveraging some incredible scans of the historical city of Dubrovnik as a blueprint and foundation of King’s Landing. Instead of doing the modeling conventionally, you model a lot of small modules, kind of like Lego blocks. You create various windows, stones, doors, shingles and so on, and once it’s encoded in the system, you can semi-automatically generate variations of buildings on the fly. That also goes for texturing. We had procedurally generated layers of façade textures, which gave us a lot of flexibility on texturing the entire city, with full control over the level of aging and damage. We could decide to make a block look older easily without going back to square one. That’s how we could create King’s Landing with its hundreds of thousands of unique buildings.
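The Lego-block approach Mousavi describes can be sketched in a few lines of Python. This is purely illustrative: the module library, the `generate_building` function and the `aging` parameter are hypothetical stand-ins, not Scanline's actual tooling.

```python
import random

# Hypothetical module library: small reusable pieces, like Lego blocks.
MODULES = {
    "window": ["arched", "square", "shuttered"],
    "door": ["plank", "iron", "double"],
    "shingle": ["slate", "clay", "wood"],
}

def generate_building(seed, floors=3, bays=4, aging=0.0):
    """Semi-automatically assemble one building variation from modules.

    `aging` (0..1) stands in for the procedural damage/weathering layers
    applied on top of the facade textures.
    """
    rng = random.Random(seed)
    facade = [[rng.choice(MODULES["window"]) for _ in range(bays)]
              for _ in range(floors)]
    return {
        "door": rng.choice(MODULES["door"]),
        "roof": rng.choice(MODULES["shingle"]),
        "facade": facade,
        "aging": aging,
    }

# The same seed always reproduces the same building, so a whole city
# block can be re-aged or re-damaged without remodeling anything.
b1 = generate_building(seed=42, aging=0.2)
b2 = generate_building(seed=42, aging=0.8)  # same building, more damage
```

Because every building derives from a seed plus a handful of parameters, "making a block look older without going back to square one" is just a parameter change rather than a remodel.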

The same technology was applied to the aftermath of the city in Episode 6. We took the intact King’s Landing and ran a number of procedural collapsing simulations on the buildings to get the correct weight based on references from the bombed city of Dresden during WWII, and then we added procedurally created CG snow on the entire city.

It didn’t look like the usual matte paintings were used at all.
You’re right, and there were a lot of shots that normally would be done that way, but to Joe’s credit, he wanted to make sure the environments weren’t cheated in any way. That was a big challenge, to keep everything consistent and accurate. Even if we used traditional painting methods, it was all done on top of an accurate 3D representation with correct lighting and composition.

What other tools did you use?
We use Autodesk Maya for all our front-end departments, including modeling, layout, animation, rigging and creature effects, and we bridge the results to Autodesk 3ds Max, which encapsulates our look-dev/FX and rendering departments, powered by Flowline and Chaos Group’s V-Ray as our primary render engine, followed by Foundry’s Nuke as our main compositing package.

At the heart of our crowd pipeline we use Massive, and our creature department is driven by Ziva muscles, a collaboration we started with Ziva Dynamics back during the creation of the hero Megalodon in The Meg.

Fair to say that your work on Game of Thrones was truly cutting-edge?
Game of Thrones has pushed the limit above and beyond and has effectively erased the TV/feature line. In terms of environment and effects and the creature work, this is what you’d do for a high-end blockbuster for the big screen. No difference at all.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.


Maxon intros Cinema 4D R21, consolidates versions into one offering

By Brady Betzel

At SIGGRAPH 2019, Maxon introduced the next release of its graphics software, Cinema 4D R21. Maxon also announced a subscription-based pricing structure as well as a very welcome consolidation of its Cinema 4D versions into a single version, aptly titled Cinema 4D.

That’s right, no more Studio, Broadcast or BodyPaint. It all comes in one package at one price, and that pricing will now be subscription-based — but don’t worry, the online anxiety over this change seems to have been misplaced.

The cost of Cinema 4D R21 has been substantially reduced, leading the way for what Maxon is calling its “3D for the Real World” initiative. Maxon wants it to be the tool you choose for your graphics needs.

If you plan on upgrading every year or two, the new subscription-based model seems to be a great deal:

– Cinema 4D subscription paid annually: $59.99/month
– Cinema 4D subscription paid monthly: $94.99/month
– Cinema 4D subscription with Redshift paid annually: $81.99/month
– Cinema 4D subscription with Redshift paid monthly: $116.99/month
– Cinema 4D perpetual pricing: $3,495 (upgradeable)

Maxon did mention that if you have previously purchased Cinema 4D, there will be subscription-based upgrade/crossgrade deals coming.

The Updates
Cinema 4D R21 includes some great updates that will be welcomed by many users, both new and experienced. The new Field Force dynamics object allows the use of dynamic forces in modeling and animation within the MoGraph toolset. Caps and bevels have an all-new system that not only allows the extrusion of 3D logos and text effects but also means caps and bevels are integrated on all spline-based objects.

Furthering Cinema 4D’s integration with third-party apps, there is an all-new Mixamo Control rig allowing you to easily control any Mixamo characters. (If you haven’t checked out the models from Mixamo, you should. It’s a great way to find character rigs fast.)

An all-new Intel Open Image Denoise integration has been added to R21 in what seems like part of a rendering revolution for Cinema 4D. From the acquisition of Redshift to this integration, Maxon is expanding its third-party reach and doesn’t seem scared.

There is a new Node Space, which shows what materials are compatible with chosen render engines, as well as a new API available to third-party developers that allows them to integrate render engines with the new material node system. R21 has overall speed and efficiency improvements, with Cinema 4D supporting the latest processor optimizations from both Intel and AMD.

All this being said, my favorite update — or map toward the future — was actually announced last week. Unreal Engine added Cinema 4D .c4d file support via the Datasmith plugin, which is featured in the free Unreal Studio beta.

Today, Maxon is also announcing its integration with yet another game engine: Unity. In my opinion, the future lies in this mix of real-time rendering alongside real-world television and film production as well as gaming. With Cinema 4D, Maxon is bringing all sides to the table with a mix of 3D modeling, motion-graphics-building support, motion tracking, integration with third-party apps like Adobe After Effects via Cineware, and now integration with real-time game engines like Unreal Engine. Now I just have to learn it all.

Cinema 4D R21 will be available on both macOS and Windows on Tuesday, Sept. 3. In the meantime, watch out for some great SIGGRAPH presentations, including one from my favorite, Mike Winkelmann, better known as Beeple. You can find some past presentations on how he uses Cinema 4D to cover his “Everydays.”


Review: Maxon Cinema 4D Release 20

By Brady Betzel

Last August, Maxon made its Cinema 4D Release 20 available. From the new node-based Material Editor to the all-new console used to debug and develop scripts, Maxon has really upped the ante.

At the recent NAB show, Maxon announced that it had acquired Redshift Rendering Technologies, the makers of the Redshift rendering engine. This acquisition will hopefully tie an industry-standard GPU-based rendering engine into Cinema 4D R20’s workflow and speed up rendering. As of now, the same licensing fees are attached to Redshift as before the acquisition: Node-Locked is $500 and Floating is $600.

Digging In
The first update to Cinema 4D R20 that I wanted to touch on is the new node-based Material Editor. If you are familiar with Blackmagic’s DaVinci Resolve or Foundry’s Nuke, then you have seen how nodes work. I love nodes, which let the user not only layer up effects but also drive one attribute with another; in Cinema 4D R20’s case, for example, diffusion by camera distance. There are over 150 nodes inside the Material Editor to build textures with.

One small change that I noticed inside the updated Material Editor was the new gradient settings. When you are working with gradient knots, you can now select multiple knots at once, then right-click to duplicate the selected knots, invert the knots, select different knot interpolations (including stepped, smooth, cubic, linear and blend) and even distribute the knots to clean up your pattern. A really nice and convenient update to gradient workflows.
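For illustration, two of those knot operations behave roughly like the Python sketch below. The function names and the `(position, color)` tuple representation are assumptions for the sake of the example, not Maxon's API.

```python
def distribute(knots):
    """Evenly space gradient knots across [0, 1], keeping their order.

    Each knot is a (position, color) tuple with position in [0, 1].
    """
    n = len(knots)
    if n < 2:
        return list(knots)
    ordered = sorted(knots)  # sort by position
    return [(i / (n - 1), color) for i, (pos, color) in enumerate(ordered)]

def invert(knots):
    """Mirror knot positions around the gradient midpoint."""
    return sorted((1.0 - pos, color) for pos, color in knots)

# A lopsided three-knot gradient, cleaned up and flipped:
knots = [(0.7, "blue"), (0.1, "red"), (0.5, "white")]
evened = distribute(knots)   # red at 0.0, white at 0.5, blue at 1.0
flipped = invert(knots)      # blue now near the start, red near the end
```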

In Cinema 4D R20, not only can you add new nodes from the search menu, but you can also click the node dots in the Basic properties window and route nodes through there. When you are happy with your materials made in the node editor, you can save them as assets in the scene file or even compress them in a .zip file to share with others.

In a related update category, Cinema 4D Release 20 has introduced the Uber Material. In simple terms (and I mean real simple), the Uber Material is a node-based material that is different from standard or physical materials because it can be edited inside of the Attribute Manager or Material Editor but retain the properties available in the Node Editor.

The Camera Tracking and 2D Camera View have been updated. While the Camera Tracking mode has been improved, the new 2D Camera View mode combines the Film Move mode with the Film Zoom mode, adding the ability to use standard shortcuts to move around a scene instead of messing with the Film Offset or Focal Length in the Camera Object properties dialogue. For someone like me who isn’t a certified pro in Cinema 4D, these little shortcuts really make me feel at home, much more like the apps I’m used to, such as Mocha Pro or After Effects. Maxon has also improved the 2D tracking algorithm for much tighter tracks and added virtual keyframes, which are an extreme help when you don’t have time for minute adjustments.

Volume Modeling
What seems to be one of the largest updates in Cinema 4D R20 is the addition of Volume Modeling with the OpenVDB-based Volume Builder. According to www.openvdb.org, “OpenVDB is an Academy Award-winning C++ library comprising a hierarchical data structure and a suite of tools for the efficient manipulation of sparse, time-varying, volumetric data discretized on three-dimensional grids,” developed by Ken Museth at DreamWorks Animation. It uses 3D pixels called voxels instead of polygons. When using the Volume Builder, you can combine multiple polygon and primitive objects using Boolean operations: Union, Subtract or Intersect. Furthermore, you can smooth your volume using multiple techniques, including one that made me do some extra Google work: Laplacian Flow.
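The Boolean side of this is easy to picture. The real Volume Builder works on sparse OpenVDB grids, but a dense NumPy occupancy grid shows the same Union/Subtract/Intersect logic on voxels. The grid size and the two shapes here are arbitrary choices for the example.

```python
import numpy as np

# Two hypothetical objects voxelized onto the same 32^3 grid:
# a sphere and a box, each stored as a boolean occupancy volume.
n = 32
idx = np.indices((n, n, n))          # (3, n, n, n) coordinate grids
center = (n - 1) / 2.0
sphere = ((idx - center) ** 2).sum(axis=0) <= (n / 3) ** 2
box = (idx > n // 4).all(axis=0) & (idx < 3 * n // 4).all(axis=0)

# The three Boolean operations the Volume Builder exposes:
union = sphere | box        # Union: voxels in either object
subtract = sphere & ~box    # Subtract: sphere with the box carved out
intersect = sphere & box    # Intersect: only the overlapping voxels
```

Smoothing operations like Laplacian Flow then work on the resulting voxel field rather than on polygons, which is why booleans that would produce messy geometry in a polygon modeler stay clean here.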

Fields
When going down the voxel rabbit hole in Cinema 4D R20, you will run into another new update: Fields. Prior to Cinema 4D R20, we would use Effectors to affect strength values of an object. You would stack and animate multiple effectors to achieve different results. In Cinema 4D R20, under the Falloff tab you will now see a Fields list along with the types of Field Objects to choose from.

Imagine you make a MoGraph object whose opacity you want controlled by a box object moving through it, while it is also physically modified by a capsule poking through. You can combine these different field objects using compositing functions in the Fields list. In addition, you can animate or alter these new fields straight away in the Objects window.
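Conceptually, each field returns a strength between 0 and 1 at every point, and the Fields list composites those samples like layers. A minimal one-dimensional Python sketch, with hypothetical field functions and blend modes, might look like this:

```python
def box_field(p, lo=-1.0, hi=1.0):
    """Hypothetical box field: full strength inside, zero outside."""
    return 1.0 if lo <= p <= hi else 0.0

def capsule_field(p, center=0.0, radius=2.0):
    """Hypothetical capsule-like field with a linear falloff."""
    return max(0.0, 1.0 - abs(p - center) / radius)

def compose(strengths, mode="multiply"):
    """Blend field samples the way a Fields list composites its layers."""
    out = strengths[0]
    for s in strengths[1:]:
        if mode == "add":
            out = min(1.0, out + s)
        elif mode == "multiply":
            out *= s
        elif mode == "max":
            out = max(out, s)
    return out

# Sample both fields at one point on the MoGraph object and blend them;
# the result would drive opacity (or any other effector strength) there.
p = 0.5
strength = compose([box_field(p), capsule_field(p)], mode="multiply")
```

Stacking animated effectors, as in earlier releases, amounts to the same thing; Fields just make the sampling and compositing explicit and reorderable.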

Summing Up
Cinema 4D Release 20 has some amazing updates that will greatly improve the efficiency and quality of your work. From tracking updates to field updates, there are plenty of exciting tools to dive into. And if you are reading this as an After Effects user who isn’t sure about Cinema 4D, now is the time to dive in. Once you learn the basics, whether it’s from YouTube tutorials or you sign up for www.cineversity.com classes, you will immediately see an increase in the quality of your work.

Combining Adobe After Effects, Element 3D and Cinema 4D R20 is the ultimate in 3D motion graphics and 2D compositing — accessible to almost everyone. And I didn’t even touch on the dozens of other updates in Cinema 4D R20, like the multitude of ProRender updates, FBX import/export options, new node materials and CAD import support for Catia, Iges, JT, Solidworks and Step formats. Check out Cinema 4D Release 20’s newest features on YouTube and on Maxon’s website.

And, finally, I think it’s safe to assume that Maxon’s acquisition of the Redshift renderer points to a bright future for Cinema 4D users.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.


Review: Red Giant’s Trapcode Suite 15

By Brady Betzel

We are now comfortably into 2019 and enjoying the Chinese Year of the Pig — or at least I am! So readers, you might remember that with each new year comes a Red Giant Trapcode Suite update. And Red Giant didn’t disappoint with Trapcode Suite 15.

Every year Red Giant adds more amazing features to its already amazing particle generator and emitter toolset, Trapcode Suite, and this year is no different. Trapcode Suite 15 is keeping tools like 3D Stroke, Shine, Starglow, Sound Keys, Lux, Tao, Echospace and Horizon while significantly updating Particular, Form and Mir.

I won’t be covering each plugin in this review, but you can check out what each individual plugin does on Red Giant’s website.

Particular 4
The bread and butter of the Trapcode Suite has always been Particular, and Version 4 continues to be a powerhouse. The biggest differences between a true 3D app like Maxon’s Cinema 4D or Autodesk Maya and Adobe After Effects (which is only pseudo-3D) are features like true raytraced rendering and particle systems that interact through fluid dynamics. As I alluded to, After Effects isn’t technically a 3D app, but with plugins like Particular you can create pseudo-3D particle systems that can affect and be affected by different particle emitters in your scenes. Trapcode Suite 15 and, in particular (pun intended), Particular 4 have evolved to another level with the latest update to include Dynamic Fluids. Dynamic Fluids essentially allows particle systems that have the fluid-physics engine enabled to interact with one another, creating mind-blowing liquid-like simulations inside of After Effects.

What’s even more impressive is that with the Particular Designer and over 335 presets, you don’t need a master’s degree to make impressive motion graphics. While I love to work in After Effects, I don’t always have eight hours to make a fluid-dynamic particle system bounce off 3D text, or have two systems interact with each other for a text reveal. This is where Particular 4 really pays for itself. With a little research and tutorial watching, you will be up and rendering within 30 minutes.

When I was using Particular 4, I simply wanted to recreate the Dynamic Fluid interaction I had seen in one of Red Giant’s promos: basically, two emitters crashing into each other in a viscous fluid, then interacting. While it isn’t necessarily easy, if you have a slightly above-beginner amount of After Effects knowledge, you will be able to do this. Apply the Particular plugin to a new solid object and open up the Particular Designer in Effect Controls. From there you can designate emitter type, motion, particle type, particle shadowing, particle color and dispersion types, as well as add multiple instances of emitters, adjust physics and much more.

The presets for all of these options can be accessed by clicking the “>” symbol in the upper left of the Designer interface. You can access all of the detailed settings and building “Blocks” of each of these categories by clicking the “<” in the same area. With a few hours spent watching tutorials on YouTube, you can be up and running with particle emitters and fluid dynamics. The preset emitters are pretty amazing, including my favorite, the two-emitter fluid dynamic systems that interact with one another.

Form 4
The second plugin in the Trapcode Suite 15 that has been updated is Trapcode Form 4. Form is a plugin that literally creates forms using particles that live forever in a unified 3D space, allowing for interaction. Form 4 adds the updated Designer, which makes particle grids a little more accessible and easier to construct for non-experts. Form 4 also includes the latest Fluid Dynamics update that Particular gained. The Fluid Dynamics engine really adds another level of beauty to Form projects, allowing you to create fluid-like particle grids from the 150 included presets or even your own .obj files.

My favorite settings to tinker with are Swirl and Viscosity. Using both settings in tandem can help create an ooey-gooey liquid particle grid that can interact with other Form systems to build pretty incredible scenes. To test out how .obj models worked within Form, I clicked over to www.sketchfab.com and downloaded an .obj 3D model. If you search for downloadable models that cost nothing, you can use them in your projects under Creative Commons licensing, as long as you credit the creator. When in doubt, always read the license; in any case, these make great practice models.

Anyway, Form 4 allows us to import .obj files, including animated .obj sequences as well as their textures. I found a Day of the Dead-type skull created by JMUHIST, pointed Form to the .obj as well as its included texture, added a couple of After Effects lights and a camera, and I was in business. Form has a great replicator feature (much like Element 3D). There are a ton of options, including fog distance under visibility, animation properties, and even the ability to quickly add a null object linked to your model for quick alignment of other elements in the scene.

Mir 3
Up last is Trapcode Mir 3. Mir 3 is used to create 3D terrains, objects and wireframes in After Effects. In this latest update, Mir has added the ability to import .obj models and textures. Using fractal displacement mapping, you can quickly create some amazing terrains. From mountain-like peaks to alien terrains, Mir is a great supplement when using plugins like Video Copilot Element 3D to add endless tunnels or terrains to your 3D scenes quickly and easily.

And if you don’t own Element 3D, you will really enjoy the particle replication system: use one 3D object and duplicate, then twist, distort and animate multiple instances of it quickly. The best part about all of these Trapcode Suite tools is that they interact with the cameras and lighting native to After Effects, making it a unified animating experience (instead of animating separate camera and lighting rigs like in the old days). Two of my favorite features from the last update are the ability to use quad- or triangle-based polygons to texture your surfaces, which can quickly give an 8-bit or low-poly feel, and a second-pass wireframe to add a grid-like surface to your terrain.
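At its core, that replication idea reduces to generating one transform per instance. A toy Python sketch (the `replicate` function and its parameters are invented for illustration, not Red Giant's API) might look like this:

```python
def replicate(count, twist_deg=15.0, offset=1.0, taper=0.1):
    """Generate per-instance transforms for one source 3D object.

    Each copy is pushed further down the z axis, twisted a little more
    and scaled down, the way a tunnel of repeated shapes is built.
    """
    instances = []
    for i in range(count):
        instances.append({
            "position": (0.0, 0.0, i * offset),
            "rotation_deg": (i * twist_deg) % 360.0,
            "scale": 1.0 / (1.0 + taper * i),  # taper toward the far end
        })
    return instances

# 24 copies of one object, receding and twisting into a tunnel.
tunnel = replicate(24)
```

Because each instance is derived from the loop index, animating a single parameter (the twist angle, say) ripples through every copy at once, which is what makes this kind of replication so fast to art-direct.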

Summing Up
Red Giant’s Trapcode Suite 15 is amazing. If you have a previous version of the Trapcode Suite, you’re in luck: the upgrade is “only” $199. If you need to purchase the full suite, it will cost you $999. Students get a bit of a break at $499.

If you are on the fence about it, go watch Daniel Hashimoto’s Cheap Tricks: Aquaman Underwater Effects tutorial (Part 1 and Part 2). He explains how you can use all of the Red Giant Trapcode Suite effects with other plugins like Video CoPilot’s Element 3D and Red Giant’s Universe and offers up some pro tips when using www.sketchfab.com to find 3D models.

I think I even saw him using Video Copilot’s FX Console, a free plugin that makes accessing other plugins much faster in After Effects. You may have seen his work as @ActionMovieKid on Twitter or @TheActionMovieKid on Instagram. He does some amazing VFX with his kids — he’s a must-follow. Red Giant made a power move getting him to make tutorials for them! Anyway, his Aquaman Underwater Effects tutorials take you step by step through each part of Trapcode Suite 15 in an amazing way. He makes it look a little too easy, but I guess that is a combination of his VFX skills and the Trapcode Suite toolset.

If you are excited about 3D objects, particle systems and fluid dynamics you must check out Trapcode Suite 15 and its latest updates to Particular, Mir and Form.

After I finished the Trapcode Suite 15 review, Red Giant released the Trapcode Suite 15.1 update. The 15.1 update includes Text and Mask Emitters for Form and Particular 4.1, updated Designer, Shadowlet particle type matching, shadowlet softness and 21 additional presets.

This is a free update that can be downloaded from the Red Giant website.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.



AMD Radeon Vega mobile graphics coming to MacBook Pro

New AMD Radeon Vega Mobile graphics processors — including the AMD Radeon Pro Vega 20 and Radeon Pro Vega 16 graphics — will be available as configuration options on Apple’s 15-inch MacBook Pro starting in late November.

AMD Radeon Vega Mobile graphics offer performance upgrades in 3D rendering, video editing and other creative applications, as well as 1080p HD gaming at ultra settings in the most popular AAA and eSports games.

Built around AMD’s Vega architecture, the new graphics processors were engineered to excel in notebooks for cool and quiet operation. In addition, the processor’s thin design features HBM2 memory (2nd-generation high-bandwidth memory), which takes up less space in a notebook compared to traditional GDDR5-based graphics processors.



Animation and design studio Lobo expands to NYC’s Chinatown

After testing the New York market with a small footprint in Manhattan, creative animation/design studio Lobo has moved its operations to a new studio in New York’s Chinatown. The new location will be led by creative director Guilherme Marcondes, art director Felipe Jornada and executive producer Luis Ribeiro.

The space includes two suites, featuring the Adobe Creative Cloud apps, Autodesk Flame, Foundry Nuke and Blackmagic Resolve. There is also a finished rooftop deck and a multipurpose production space that will allow the team to scale per the specifications of each project.

Director/founder Mateus De Paula Santos will continue to oversee both offices creatively. Lobo’s NYC team will work closely with the award-winning São Paulo office, pairing the infrastructure and horsepower of its nearly 200-person staff with the US-based creative team.

Marcondes brings a distinct styling that fuses live action and animation techniques to craft immersive worlds. His art-driven style can be seen in work for clients such as Google, Chobani, Autism Speaks, Hyundai, Pepsi, Audi and British Gas. His short films have been screened at festivals worldwide, with his Tiger winning over 20 international awards. His latest film, Caveirão, made its worldwide premiere at SXSW.

Ribeiro brings over two decades of experience running business development and producing for creative post shops in the US, including Framestore, Whitehouse Post, Deluxe, Method Studios, Beast, Company 3 and Speedshape. He also served as the US consultant for FilmBrazil for four years, connecting US and Brazilian companies in the advertising production network.

Recent work out of Lobo’s US office includes the imaginative mixed media FlipLand campaign for Chobani, the animated PSA Sunshine for Day One out of BBDO NY and an animated short for the Imaginary Friends Society out of RPA.

Our Main Image: L-R: Luis Ribeiro, Mateus De Paula Santos, Felipe Jornada and Guilherme Marcondes.


Behind the Title: Trollbäck+Company’s David Edelstein

NAME: David Edelstein

COMPANY: Trollbäck+Company (@trollback)

CAN YOU DESCRIBE YOUR COMPANY?
We are a creative agency that believes in the power of communication, craft and collaboration.
Our mission is to promote innovation, create beauty and foster a lasting partnership. We believe that the brands of the future will thrive on the constant spirit of invention. We apply the same principle to our work, always evolving our practice and reaching across disciplines to produce unexpected, original results.

WHAT’S YOUR JOB TITLE?
Executive Director of Client Partnerships

WHAT DOES THAT ENTAIL?
I’m responsible for building on current client relationships and bringing in new ones. I work closely with the team on our strategic approach to presenting us to a wide array of clients.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I think you need to be in a position of doing business development to really understand that question. The goal is to land work that the company wants to do and balance that with the needs of running a business. It is not an easy task to juggle.

WHAT’S YOUR FAVORITE PART OF THE JOB?
I love working with a talented team, and being in a position to present a company with such a strong legacy.

WHAT’S YOUR LEAST FAVORITE?
Even after all these years, rejection still isn’t easy, but it’s something you deal with on a daily, sometimes hourly, basis.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I’m a morning person, so I find it’s the perfect time to reach out to people when they’re fresh — and before their day gets chaotic.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Are you trying to tell me something? (laughs) I actually think I’d be doing the same thing, but perhaps for a different industry. I truly enjoy the experience of developing relationships and the challenge of solving creative problems with others. I think it’s a valuable skill set that can be applied to other types of jobs.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
This career came about pretty organically for me. I had a traditional production background and grew up in LA. When I moved to New York, I wound up at Showtime as a producer and discovered motion graphics. When I left there, I was fortunate enough to launch a few small studios. Being an owner makes you the head of business development from the start. These experiences have certainly prepared me for where I’ve been and where I am today.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I’m only a few months in, but we are currently spearheading branding for a Fortune 500 company. Trollbäck is also coming off a fantastic title sequence and package for the final episode of the Motion Conference, which just took place in June.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
It’s tough to call out one particular project, but some career highlights have been a long relationship with Microsoft, as well as traveling the world with Marriott and Hilton.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Cell phone, computer/email and iPad.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Twitter, Facebook, LinkedIn and Instagram.

DO YOU LISTEN TO MUSIC WHILE YOU WORK? CARE TO SHARE YOUR FAVORITE MUSIC TO WORK TO?
I try to give different types of music a go, so Spotify works well for me. But, honestly, I’m still a Springsteen guy.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I go home to relax and then come back the next day and try to be positive and grateful. Repeat!


Review: Maxon Cinema 4D R19 — an editor’s perspective

By Brady Betzel

It’s time for my yearly review of Maxon’s Cinema 4D. Currently in Release 19, Cinema 4D comes with a good amount of under-the-hood updates. I am an editor, first and foremost, so while I dabble in Cinema 4D, I am not an expert. There are a few things in the latest release, however, that directly correlate to editors like me.

Maxon offers five versions of Cinema 4D, not including BodyPaint 3D. There is the Cinema 4D Lite, which comes free with Adobe After Effects. It is really an amazing tool for discovering the world of 3D without having to invest a bunch of money. But, if you want all the goodies that come packed into Cinema 4D you will have to pay the piper and purchase one of the other four versions. The other versions include Prime, Broadcast, Visualize and Studio.

Cinema 4D Prime is the first version that includes features like lighting, cameras and animation. Cinema 4D Broadcast includes all of Cinema 4D Prime’s features as well as the beloved MoGraph tools and the Broadcast Library, which offers pre-built objects and cameras that will work with motion graphics. Cinema 4D Visualize includes Cinema 4D Prime features as well, but is geared more toward architects and designers. It includes Sketch and Toon, as well as an architecturally focused library of objects and presets. Cinema 4D Studio includes everything in the other versions plus unlimited Team Render nodes, a hair system, a motion/object tracker and much more. If you want to see a side-by-side comparison you can check out Maxon’s website.

What’s New
As usual, there are a bunch of new updates to Cinema 4D Release 19, but I am going to focus on my top three, which relate to the workflows and processes I might use as an editor: New Media Core, Scene Reconstruction and the Spherical Camera. Obviously, there are a lot more updates — including the incredible new OpenGL Previews and the cross-platform ProRender, which adds the ability to use AMD or Nvidia graphics cards — but to keep this review under 30 pages I am focusing on the three that directly impact my work.

New Media Core
Buckle up: you can now import animated GIFs into Cinema 4D Release 19. But that is just one tiny aspect of this update. The really big addition is QuickTime-free support for MP4 video. MP4s can now be imported and used as textures, as well as exported with different compression settings, directly from within Cinema 4D’s interface, all without needing QuickTime installed. What is cool about this is that you no longer need to export image-based file sequences to get your movie inside Cinema 4D. The only slowdown will be how long it takes Cinema 4D R19 to cache your MP4 so that you have realtime playback… if possible.

In my experience, it doesn’t take that much time, but that will be dependent on your system performance. While this is a big under-the-hood type of update, it is great for those quick exports of a scene for approval. No need to take your export into Adobe Media Encoder, or something else, to squeeze out an MP4.

Scene Reconstruction
First off, for any new Cinema 4D users out there, Scene Reconstruction is convoluted and a little thick to wade through. However, if you work with footage and want to add motion graphics work to a scene, you will want to learn this. You can check out this Cineversity.com video for an eight-minute overview.

Cinema 4D’s Scene Reconstruction works by tracking your footage to generate point clouds; when you then enable Scene Reconstruction, it creates a mesh from the scene calculation that Cinema 4D computes. In the end, depending on how compatible your footage is with scene detection (contrasting textures and good lighting help), you get a camera view with matching scene vertices that are fully animatable. Unfortunately, I did not have enough time to recreate a set or scene inside Cinema 4D R19; however, it feels like Maxon is getting very close to fully automated scene reconstruction, which would be very, very interesting.

I’ve seen a lot of ideas from pros on Twitter and YouTube that really blow my mind, like 3D scanning with a prosumer camera to recreate objects inside of Cinema 4D. Scene Reconstruction could be a game-changing update, especially if it becomes more automated, as it would allow base users like me to recreate a set in Cinema 4D without having to physically rebuild one. A pretty incredible motion graphics-compositing future is really starting to emerge from Cinema 4D.

In addition, the Motion Tracker has received some updates, including manual tracking on the R, G, B or a custom channel (viewed in the Tracker View), and the tracker can now work with a circular tracking pattern.

Spherical Camera
Finally, there is the new Spherical Camera, which seems incredible. Maybe it’s because I have been testing and using a lot more 360 video, but the ability to render your scene with a spherical camera is a welcome addition. You can now create a scene, add a camera and enable spherical mapping, including equirectangular, cubic string, cubic cross or even Facebook’s 3×2 cubic 360 video format. In addition, there is now support for stereo VR as well as dome projection.
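
Equirectangular mapping, the most common of these spherical formats, simply unwraps viewing direction into longitude and latitude. As a rough, illustrative sketch of that math (my own toy version, not Cinema 4D’s actual implementation), mapping a unit direction vector to UV coordinates looks something like this:

```python
import math

def direction_to_equirect(x, y, z):
    """Map a unit direction vector to equirectangular UV coordinates in [0, 1]."""
    # Longitude: angle around the vertical axis; latitude: elevation angle.
    lon = math.atan2(x, -z)                   # -pi..pi, treating -z as "forward"
    lat = math.asin(max(-1.0, min(1.0, y)))   # -pi/2..pi/2, clamped for safety
    u = (lon / math.pi + 1.0) / 2.0           # 0..1 across the full 360-degree sweep
    v = lat / math.pi + 0.5                   # 0..1 from bottom pole to top pole
    return u, v

# The straight-ahead direction lands in the middle of the frame.
print(direction_to_equirect(0.0, 0.0, -1.0))  # → (0.5, 0.5)
```

Every pixel of latitude covers the same angular height, which is why equirectangular renders look stretched near the poles.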

Other Updates
In addition to the three top updates I’ve covered, there are numerous other updates that are just as important, if not more so, for those who use Cinema 4D in other ways. In my opinion, the rendering updates take the cake. As mentioned before, there is support for both Nvidia and AMD GPUs, multi-GPU support, incredible viewport enhancements with Physical Rendering and interactive Preview Renders in the viewport.

Under MoGraph, there is an improved Voronoi Fracture system (the ability to destroy an object quickly), including improved performance for high polygon counts and detailing to give fractures a more realistic look. There is also a new Sound Effector that allows for interactive MoGraph creation to the beat of the music. One final note: the new modern modeling kernel has been introduced, which improves operations like polygon reduction and levels of detail.

In the end, Cinema 4D Release 19 is a huge under-the-hood update that will please legacy users but will also attract new users with AMD-based GPUs. Moreover, Maxon seems to be slowly morphing Cinema 4D into a total 2D and 3D modeling and motion graphics powerhouse, much like the way Blackmagic’s Resolve is for colorists, video editors, VFX creators and audio mixers.

Summing Up
With updates like Scene Reconstruction and improved motion tracking, Maxon gives users like me the ability to work well above our pay grade, compositing 3D objects onto 2D footage. If any of this sounds interesting to you and you are a paying Adobe Creative Cloud user, download and open Cinema 4D Lite along with After Effects, then run over to Cineversity and brush up on the basics. Cinema 4D Release 19 is an immensely powerful 3D application that is blurring the boundaries between 3D and 2D compositing. With its large library of objects, preset scenes and lighting setups, you can be experimenting in no time, and I didn’t even touch on the modeling and sculpting power!



Quick Chat: FOM’s Adam Espinoza on DirecTV graphics campaign

By Randi Altman

Denver-based creative brand firm Friends of Mine (FOM) recently completed a graphics package for DirecTV Latin America that they had been working on for almost a year. The campaign, which first aired at the start of the 2017/2018 soccer season in August, has been airing on DirecTV’s Latin American network since then.

In addition to providing the graphics packages that ran on DirecTV Sports throughout the European Football League seasons (in Spain, England and France), FOM is currently creating graphics that will promote the World Cup games, set to take place between June 14 and July 15 in Russia.

Adam Espinoza

We reached out to FOM’s co-founder and creative director, Adam Espinoza, to find out more.

How early did you get involved in the piece? How much input did you have?
We were invited to the RFP process two months before the season started. We fully developed the look and concept from their written creative brief and objectives. We had complete input on the direction and execution.

What was it the client wanted to accomplish, and what did you suggest? 
The client wanted to convey the excitement of soccer throughout the season. There were two objectives: highlight the exclusive benefits of DirecTV for its subscribers while at the same time showing footage of goals and celebrations from the best players and teams in the world. We suggested the idea of intersections and digital energy.

Why did you think the visuals you created told the story the client needed? 
The digital energy graphics created a kinetic movement inherent in the sport while connecting the players around the league. The intersections concept helped to integrate the world of soccer seamlessly with DirecTV’s message.

What exactly did you provide services-wise on the piece? 
Conceptual design, art direction, 2D and 3D animation and video editing.

What gear/tools did you use for each of those services? 
Our secret sauce along with Cinema 4D, Adobe Premiere, Adobe After Effects and Adobe Illustrator.

What was the most challenging part of the process?
Evolving the look from month to month throughout the season and building to the climactic finals, while still staying true to the original concept.

WHAT WAS YOUR FAVORITE PART OF THE PROCESS?
Being able to fine tune a concept over such a stretch of time.

Behind the Title: Julia Siemón

NAME: Julia Siemón

COMPANY: New York City-based GIMIK/Julia Siemón

CAN YOU DESCRIBE YOUR COMPANY?
We strive to achieve beauty through design with a primary focus on 3D motion content creation. We help clients solve creative challenges and constantly evolve by staying ahead of technology trends. I am currently working on developing content using Oculus VR.

WHAT’S YOUR JOB TITLE?
A Creative (Designer, Animator and Director)

WHAT DOES THAT ENTAIL?
Everything. I wear many hats at my job, whether it is art direction, character animation or pitching. Being a creative means you have to be able to adapt to the needs of the project and pick up where someone else left off.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Budgets. Having to always be mindful of budget and time. Also snacks. My snack game is the best in town.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Constantly discovering new ways to achieve our creative goals. Whether it’s through the use of a new technology or inspiration. Randomly seeing my work on TV, a subway or on Instagram is also pretty awesome.

WHAT’S YOUR LEAST FAVORITE?
Dealing with finances, writing up invoices and collecting. Having to “hound” clients to collect is never fun, but unfortunately frequently comes with the territory.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
I find my most productive times for creating are between 10am and 2pm, and from 5pm to 7pm. I would love to nap between 3-4pm. I think that would expand my productive time to 9pm. Just that one-hour nap would do wonders. I guess my favorite time of day is when I get to climb back into bed.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I teach at the School of Visual Arts in New York City, so I would either teach full time and/or design video games. If it wasn’t design related, I would work with plants, helping people set up and manage gardens in their backyards to produce enough fresh vegetables and herbs for their families. Or I’d run a travel blog.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
At age three, I drew a baby pram with one continuous stroke. From that moment on my mother knew I was to become an artist, so my art education began early on. However, I always gravitated towards new technology. By combining my visual talent with new media I was able to find a career path that is always surprising and rewarding.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Over the summer I worked with creative production studio Hey Beautiful Jerk on three Yahoo Fantasy Football spots. It is the type of project I love working on: a mixture of absurd and comic elements where the client goes for something out of their comfort zone and I get to play with Maxon Cinema 4D.

The spots each included a purple hue over everything to tie in the Yahoo brand color. I worked on several backgrounds and most of the character animation. While I did various backdrops and animations for all three spots, my favorite is called Glory Year. For this piece I created, textured and animated the floating brains and the futuristic city. I had very limited time to create a Fifth Element-inspired cityscape in Cinema 4D, so I relied heavily on ready-made models I found online, combining and adjusting them to get the desired look. I also added a sky highway to help illustrate the future. A lot of work for half a second.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF? WHAT SOFTWARE DID YOU RELY ON?
I would have to say the CCTV-9 IDs I did with branding and design studio Trollbäck+Company a few years back. We did six IDs for the CCTV9 documentary channel promoting their new cube logo. I designed and animated CCTV ID Electronica, which was meant to reflect the energy of Chinese cities. The spots won an international BDA award that year. Cinema 4D was the perfect tool for this project; it allowed me to explore multiple creative directions quickly and easily. I relied heavily on the MoGraph module in Cinema 4D to give the CCTV cube the vibrancy and spirit of a Chinese metropolis.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My camera, Google maps/satellites and my Wacom tablet.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Twitter, Instagram and Pinterest.

DO YOU LISTEN TO MUSIC WHILE YOU WORK? IF SO, WHAT KIND?
It varies greatly depending on my mood. I listen to anything from Leonard Cohen and Imogen Heap to Tool and Smashing Pumpkins, with a bit of Zedd/Deadmau5 thrown in.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I’ve been running a mini farm out in New Jersey for four years now; it helps me to clear my mind and stay close to nature and our roots as an agrarian society. I’ve also gotten into food preservation. Both have been very therapeutic. However, when the stress is too great even for dehydrated kale chips and apples I try to take time off to travel.

Foundry intros Mari 4.0

Foundry has released Mari 4.0, the latest version of the company’s digital 3D painting and texturing tool. The release comes with a host of advanced features that make the tool easier to use and faster to learn, including more flexible and configurable exporting, simpler navigation and a raft of improved workflows.

Key benefits of Mari 4.0 include:
Quicker start-up and export: Mari 4.0 allows artists to get projects up and running faster with a new startup mechanism that automatically performs the steps previously completed manually by the user. Shaders are built automatically, with channels connected to them as defined by the channel presets in the startup dialog. The user also now gets a choice of initial lighting and shading setup. The new Export Manager configures the batch exporting of Channels and Bake Point Nodes. Artists can create and manage multiple export targets from the same source, as well as perform format conversions during export. This allows for far more control and flexibility when passing Mari’s texture maps down the pipeline.

Better navigation: A new Palettes Toolbar containing all Mari’s palettes offers easy access and visibility to everything Mari can do. It’s now easier to expand a Palette to fullscreen by hitting the spacebar while your mouse is hovered over it. Tools of a similar function have been grouped under a single button in the Tools toolbar, taking up less space and allowing the user to better focus on the Canvas. Various Palettes have been merged together, removing duplication and simplifying the UI, making Mari both easier to learn and use.

Improved UI: The Colors Palette is now scalable for better precision, and the component sliders have been improved to show the resulting color at each point along the control. Users can now fine tune their procedural operations with precision keyboard stepping functionality brought into Mari’s numeric controls.

The HUD has been redesigned so it no longer draws over the paint subject, allowing the user to better focus on their painting and work more effectively. Basic Node Graph mode has been removed: Advanced is now the default. For everyone learning Mari, the Non-Commercial version now has full Node Graph access.

Enhanced workflows: A number of key workflow improvements have been brought to Mari 4.0. A drag-and-drop fill mechanism allows users to fill paint across their selections in a far more intuitive manner, reducing time and increasing efficiency. The Brush Editor has been merged into the Tool Properties Palette, with the brush being used now clearly displayed. It’s now easy to browse and load sets of texture files into Mari, with a new Palette for browsing texture sets. The Layers Palette is now more intuitive when working with Group layers, allowing users to achieve the setups they desire with fewer steps. And users now have a shader in Mari that previews and works with the channels that match their final 3D program/shader: the Principled BRDF, based on the 2012 paper from Brent Burley of Walt Disney Animation Studios.
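
For readers curious about what the Principled BRDF means in practice: among other terms, Burley’s 2012 paper describes a diffuse lobe that adds a roughness-driven retro-reflection on top of plain Lambert shading. The following is a rough, illustrative sketch of that diffuse factor from the paper, not Mari’s actual shader code:

```python
def burley_diffuse(n_dot_l, n_dot_v, l_dot_h, roughness):
    """Burley (Disney) diffuse factor.

    Multiply by albedo/pi and n_dot_l for final shading. The fd90 term makes
    retro-reflection grow with roughness at grazing angles.
    """
    fd90 = 0.5 + 2.0 * roughness * l_dot_h ** 2

    def schlick(cos_theta):
        # Schlick-style interpolation between 1 at normal incidence and fd90 at grazing.
        return 1.0 + (fd90 - 1.0) * (1.0 - cos_theta) ** 5

    return schlick(n_dot_l) * schlick(n_dot_v)

# At normal incidence the factor reduces to 1, matching plain Lambert.
print(burley_diffuse(1.0, 1.0, 1.0, 0.5))  # → 1.0
```

For a rough surface lit and viewed at a grazing angle the factor rises above 1, which is the paper’s retro-reflective behavior.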

Core: Mari has upgraded to OpenSubdiv 3.1.x and introduced its features into the UI, so users can better match the mesh-subdivision behavior they get in software renderers. Mari’s user preference files are now saved with the application version embedded in the file names, meaning artists can work between different versions of Mari without the danger of corrupting their UI or preferences. Many preferences have had their groups, labels and tooltips modified to be easier to understand. All third-party libraries have been upgraded to match those specified by the VFX Reference Platform 2017.
Mari 4.0 is available now.

Review: Red Giant Trapcode Suite 14

By Brady Betzel

Every year we get multiple updates to Red Giant’s Adobe After Effects plugin behemoth, Trapcode Suite. The 14th update to the suite is small but powerful, bringing significant updates in Version 3 of Particular and Form (Trapcode Form 3 is a particle system generator much like Particular, but instead of the particles living and dying, they stay alive forever as grids, 3D objects and other organic shapes). If you own the Trapcode Suite from a previous purchase, the update will cost $199; if you are new, the suite costs $999, or $499 with an academic discount.

Particular 3 UI

There are three updates to the suite that warrant the $199 upgrade fee: Particular 3, Form 3 and the Tao 1.2 update. However, you still get the rest of the products with Trapcode Suite 14: Mir 2.1, Shine 2.0, Lux 1.4, 3D Stroke 2.6, Echospace 1.1, Starglow 1.7, Sound Keys 1.1 and Horizon 1.1.

First up is the Tao 1.2 update. Trapcode Tao allows you to create 3D geometric patterns along a path in After Effects. If you do a quick YouTube search of Tao you will find some amazing examples of what it can do. In the Tao 1.2 update Red Giant has added a Depth-of-Field tool to create realistic bokeh effects on your Tao objects. It’s a simple but insanely powerful update that really gives your Tao creations a sense of realism and beauty. To enable the new Depth-of-Field, wander over to the Rendering twirl-down menu under Tao and either select “off” or “Camera Settings.” It’s pretty simple. From there it is up to your After Effects camera skills and Tao artistry.

Trapcode Particular 3
Trapcode Particular is one of Red Giant’s flagship plugins and it’s easy to see why. Particular allows you to create complex particle animations within After Effects. From fire to smoke to star trails, it can pretty much do whatever your mind can come up with, and Version 3 has some powerful updates, including the overhauled Trapcode Particular Designer.

The updated designer window is very reminiscent of the Magic Bullet Designer window, easy and natural to use. Here you design your particle system, including the look, speed and overall lifespan of your system. While you can also adjust all of these parameters in the Effects Window dialog, the Designer gives an immediate visual representation of your particle systems that you can drag around and see how it interacts with movement. In addition you can see any presets that you want to use or create.

Particular 3

In Particular 3, you can now use OBJ objects as emitters. An OBJ is essentially a 3D object. You can use the OBJ’s faces, vertices, edges, and the volume inside the object to create your particle system.

The largest and most important update in all of Trapcode Suite 14 is found within Particular 3: the ability to add up to eight particle systems per instance of Particular. What does that mean? Your particle systems can now interact, letting you add details such as dust or a bright core that carry over properties from other particle systems in the same instance, which makes far more intricate systems possible than before.

Personally, the newly updated Designer is what allows me to dial in these details easily without twirling down tons of menus in the Effect Editor window. A specific use case: if you want to duplicate your system and inherit its properties but change the blend mode and/or colors, you simply click the drop-down arrow under the system and click “duplicate.” Another great addition within the multiple-particle-system update is the ability to create and load “multi-system” presets quickly and easily.
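
Conceptually, duplicating a system is just “copy every property, then override a few.” Here is a toy, hypothetical sketch of that idea in Python; the field names are invented for illustration and have nothing to do with Particular’s internals:

```python
from dataclasses import dataclass, replace

@dataclass
class ParticleSystem:
    name: str
    emitter: str      # systems in one instance can share an emitter
    color: str
    blend_mode: str
    size: float

# Base system: a bright additive core.
core = ParticleSystem("core", emitter="obj_vertices", color="white",
                      blend_mode="add", size=2.0)

# A "duplicate" inherits every property, then overrides only what changes,
# which is roughly what per-system duplication buys you creatively.
dust = replace(core, name="dust", color="grey", blend_mode="normal", size=0.3)

print(dust.emitter)  # → obj_vertices (inherited from the core system)
```

The design point is that the secondary system stays in sync with the base one except for the handful of properties you deliberately change.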

Now, with all of these particle systems mashed together, you are probably wondering, “How in the world will my system handle all of this when it’s hard to even play back a system in the older Trapcode Suite?” Well, lucky for us, Trapcode Particular 3 is now GPU-accelerated via OpenGL, sometimes allowing for 4x speed increases. To access these options in the Designer window, click the cogwheel on the lower edge of the window, toward the middle. You will find the option to render using the CPU or the GPU. There are some limitations to the GPU acceleration. For instance, when using mixed blend modes you might not be able to use certain GPU acceleration types; the render will not reflect the blend mode you selected. Another limitation is with Sprites that are QuickTime movies; you may have to use the CPU mode.

Last but not least, Particular 3’s AUX system (a particle system within the main particle system) has been redesigned. You can now choose custom Sprites, as well as keyframe many parameters that could not be keyframed before.

Form 3 UI

Trapcode Form 3
For clarification, Trapcode Particular creates particle emitters that emit particles that have a life: basically, they are born and they die. Trapcode Form is a particle system that does not have a life; it is not born and it does not die. Practical examples include a ribbon-like background or a starfield. These particle systems can be made from 3D models and can even be dynamically driven by an audio track. Much like Particular’s updated Designer, Form 3 has an updated designer that helps you build your particle array quickly and easily. Once done inside the Designer, you can hop out and adjust parameters in the Effects Panel. If you want to use pre-built objects or images as your particles, you can load those as Sprites or Textured Polygons and animate their movement.
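
The born-and-die versus immortal distinction is easy to picture as a simulation loop. This toy Python sketch (purely illustrative, not how either plugin is implemented) treats each particle as a (position, velocity, age) tuple; passing a maximum age gives Particular-style behavior, while omitting it gives Form-style persistence:

```python
def step(particles, dt, max_age=None):
    """Advance particles one frame; drop any past max_age (None = immortal)."""
    aged = [(x + vx * dt, vx, age + dt) for x, vx, age in particles]
    if max_age is None:
        return aged                                  # Form-style: persists forever
    return [p for p in aged if p[2] <= max_age]      # Particular-style: born, then dies

flock = [(0.0, 1.0, 0.0), (0.0, 2.0, 0.0)]
for _ in range(5):
    flock = step(flock, dt=1.0, max_age=3.0)
print(len(flock))  # → 0: every finite-lifetime particle has died

grid = [(0.0, 1.0, 0.0), (0.0, 2.0, 0.0)]
for _ in range(5):
    grid = step(grid, dt=1.0)
print(len(grid))   # → 2: an immortal Form-style array keeps all its points
```

That persistence is what makes Form suited to stable shapes like grids and starfields, while Particular suits transient effects like smoke and sparks.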

Another really handy update in Trapcode Form 3 is the addition of the Graphing System. This allows you to animate controls like color, size, opacity and dispersion over time.
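
Animating a control over time boils down to sampling a curve of keyframes. As a minimal sketch of the idea (linear interpolation only; Form’s Graphing System is far richer, and the keyframe values here are invented for illustration):

```python
def sample(keyframes, t):
    """Linearly interpolate a parameter value from sorted (time, value) keyframes."""
    keyframes = sorted(keyframes)
    if t <= keyframes[0][0]:
        return keyframes[0][1]          # clamp before the first keyframe
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return keyframes[-1][1]             # clamp after the last keyframe

# A hypothetical particle-size curve: ramp up, then settle back down.
size_curve = [(0.0, 0.0), (1.0, 10.0), (2.0, 4.0)]
print(sample(size_curve, 0.5))  # → 5.0: halfway up the first ramp
print(sample(size_curve, 1.5))  # → 7.0: halfway back down toward 4
```

The same sampling idea applies to any animatable control, whether it drives size, opacity, color channels or dispersion.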

Just like Particular, Form reacts to After Effects’ cameras and lights, completely immersing its creations in any scene you’ve built. For someone like me, who loves After Effects and the beauty of creations from Form and Particular but doesn’t necessarily have the time to create from scratch, there is a library of over 70 pre-built elements. Finally, Form has added a new rendering option called Shadowlet rendering, which adds light falloff to your particle grid or array.

Form 3

Summing Up
In the end, the Trapcode Suite 14 has significantly updated Trapcode Particular 3 with multiple particle systems, Trapcode Form 3 with a beautiful new Designer, and Trapcode Tao with Depth-of-Field, all for an upgrade price of $199. Some Trapcode Particular users have been asking for the ability to build and manipulate multiple particle systems together, and Red Giant has answered their wishes.

If you’ve never used the Trapcode Suite, you should also check out the rest of the mega-bundle, which includes apps like Shine, 3D Stroke, Starglow, Mir, Lux, Sound Keys, Horizon and Echospace. And if you want more in-depth rundowns of each of these programs, check out Harry Frank’s (@graymachine) and Chad Perkins’ tutorials on the Red Giant News website. Then immediately follow @trapcode_lab and @RedGiantNews on Twitter.

If you want to find out more about the other tools in the Trapcode Suite check out my previous two-part review of Suite 13 here on postPerspective: https://postperspective.com/review-red-giants-trapcode-suite-13-part-1 and https://postperspective.com/review-red-giant-trapcode-suite-13-part-2.



Behind the Title: Undefined Creative founder/CD Maria Rapetskaya

NAME: Maria Rapetskaya

COMPANY: Undefined Creative

CAN YOU DESCRIBE YOUR COMPANY?
Undefined Creative is a Brooklyn-based media production agency specializing in motion graphics.

Our portfolio spans television, digital marketing, social media and live events, making us the perfect studio for big brands, agencies and networks looking to establish holistic creative partnerships. We deliver premium-grade motion media, at fair and transparent prices, on time, on budget, on the mark and with a personal touch.

WHAT’S YOUR JOB TITLE?
Founder/Creative Director

WHAT DOES THAT ENTAIL?
There are two sides to my job: the entrepreneur and the creative. The “entrepreneur” is the founder part, and that makes me responsible for nearly everything, even if only in a supervising or approval role.

I am responsible for the majority of business development. I set the company vision and work on the strategy to get there. I work in tandem with my executive producer on marketing. I oversee finances and operations, and do a good deal of maintaining client relationships.

The “creative” part of my job is being the creative director of a boutique. This encompasses setting the aesthetic direction of the studio in general and each project in particular, communicating with clients about all aspects of a project, guiding the creative along the production process and, since we are a boutique, doing a good deal of hands-on production. I love that last part, since I never wanted to get away entirely from actually DOING what I love.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Being both an entrepreneur and a creative director is primarily about managing people. I have to manage our clients, setting realistic expectations without creating negative sentiment and guiding them effectively through the process so that they understand and appreciate the creative decisions and directions we’re taking.

I also have to manage my team, making sure that everyone understands, for example, that there are objective and subjective comments when it comes to my critiques. The objective comments are not a judgment on anyone’s aesthetic, but a way to develop the best solution for the problem at hand. If I fail to do any of these, all I wind up with is miserable clients and miserable co-workers. So, in essence, the success of this studio depends in large part on my ability to communicate accurately, efficiently, courteously and empathetically.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Getting unsolicited happy feedback from our clients. We’ve gotten such amazing notes following project delivery. It’s part of our company mission to never forget that our clients are people, so knowing that we made them look good, that their experience of working with us was enjoyable… that they’re less stressed out because they know we’ll take good care of them. All these things really inspire and encourage all of us here.

WHAT’S YOUR LEAST FAVORITE?
Experiencing a project I was really excited about become drudgery. It happens and it happens everywhere, to all creatives. There’s usually a combination of factors that contribute to this, like deadlines getting pushed up suddenly and significantly, or a lot of voices in the approval process pulling in completely different directions that are incompatible. I’ve learned over the course of my career to keep a healthy distance from my work, and that helps me manage my reactions, stay focused and motivated. But I’m still human, and even if I don’t get bummed, it’s hard to see the occasional disappointment in the team when this kind of stuff happens.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
Whatever I can squeeze in before 9am. Zero distractions, plenty of caffeine.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I’d do something that combines people, travel, teaching/mentoring and health/wellness.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I was always into, and good at, art. So once I recognized that the only high school classes I was super-excited about were my art classes, I knew I could do this for a living. I come from a creative family of people who love to work for themselves, so even starting a company of my own wasn’t a big surprise. However, my landing in animation and motion graphics specifically was pretty random. I picked animation as a college major by default, on the advice and encouragement of an older friend who was graduating from the animation department when I was a freshman. And I didn’t discover motion graphics until about a year after I graduated.

The NHL Awards

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
This summer, we branded the NHL Awards Show in Las Vegas, creating all of the live-event animations for multiple screens on the show stage. We re-branded the Maury Show for the seventh time, creating new graphics packages for on-air, marketing and social media. We did a couple of cool broadcast promo spots for A&E. We worked on an animation for the US Navy and Men’s Health that described some fun facts about sailors (did you know the fitness test includes two minutes of pushups?).

Most recently, we created a graphics package for the United Nations Equator Prize to play on stage during their 2017 Awards Ceremony.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
That’s a very hard question to answer. I don’t think it’s an individual project, but rather our commitment to doing work pro bono for social causes. We’ve created 10-plus (I am actually losing track of how many) awareness videos since 2010, as well as a number of other projects for organizations and missions we care about.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My iPhone, although I am now very conscious of when and how much I’m on it. My analog alarm clock that ensures my iPhone can stay out of the bedroom. My MacBook Air, which lets me get away from my desk even if I’m still working.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
None, if I can help it. I don’t have much love for social media, and if not for needing it to run a business, I would gladly disconnect altogether. I do appreciate LinkedIn as a business community, but I try to not get sucked in.

DO YOU LISTEN TO MUSIC WHILE YOU WORK? CARE TO SHARE YOUR FAVORITE MUSIC TO WORK TO?
Funny you ask. In my twenties, I listened to music while working… loudly and all day long. Now, I just love silence when I work. Helps me focus.

THIS IS A HIGH STRESS JOB WITH DEADLINES AND CLIENT EXPECTATIONS. WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I’ve been a professional creative for nearly 20 years, and coming up with fresh ideas on demand and all the time isn’t easy. Neither is running a company, which is a Ferris wheel ride of gaining clients, losing clients, getting jobs, not getting jobs. People depend on me to pay their bills. My job can be either exhilarating or exhausting, and which it will be depends on my ability to stay creative, productive and encouraged.

If I don’t take care of my mind and body properly, consistently and thoroughly, I’ll burn out. So I take control of my time. I don’t work after hours unless it’s actually necessary. I meditate every day. I try to get a workout in daily. I disconnect whenever I can. I stay off my smartphone when possible. I don’t have a TV — in fact, I rarely watch anything once I’m done working. Staring at a screen all day for work makes it far less enticing to stare at one for leisure. I love what I do, but I take time off to travel whenever I can, and I never guilt myself for wanting a life outside of work.

Red Giant Universe 2.2 gets 11 new transitions, supports Media Composer

Red Giant is now offering Universe 2.2, which features 11 new transition tools — 76 transitions and effects in total — for editors and motion graphics artists. In addition to brand new transitions, Red Giant has made updates to two existing plugins and added support for Avid Media Composer. The Universe toolset, and more, can be seen in action in the brand new short film Hewlogram, written and directed by Red Giant’s Aharon Rabinowitz, and starring David Hewlett from the Stargate: Atlantis series.

The latest update to Red Giant’s collection of GPU-accelerated plugins, Universe 2.2 offers transitions ranging from Retrograde, which creates an authentic film strip transition using real scans of 16mm and 8mm film, to Channel Surf, which creates the effect of changing channels on an old CRT TV.

This release brings the complete set of Universe tools to Avid Media Composer, which means that all 76 Red Giant Universe effects and transitions now run in eight host applications, including Adobe Premiere Pro CC, After Effects CC, Apple Final Cut Pro X, Blackmagic DaVinci Resolve and more.

Retrograde

Brand-new transition effects in Red Giant Universe 2.2 include:
• VHS Transition: A transition that mimics the effect that occurs when a VCR has been used to record over pre-existing footage.
• Retrograde Transition: A transition that uses real scans of 16mm and 8mm film to create an authentic film strip transition.
• Carousel Transition: A transition that mimics advancing to the next slide in an old slide projector.
• Flicker Cut: A transition that rapidly cuts between two clips or a solid color, and which can invert the clips or add fades.
• Camera Shake Transition: A transition that mimics camera shake while it transitions between clips.
• Channel Surf: A transition that mimics the distortion you’d get by changing the channel on a cathode ray tube TV.
• Channel Blur: A transition that blurs each of the RGB channels separately for a unique chromatic effect.
• Linear Wipe: A classic linear wipe with the addition of wipe mirroring, as well as an inner/outer stroke with glow on the wipe border.
• Shape Wipe: A transition that uses an ellipse, rectangle or star shape to move between two pieces of footage. Includes control over points, size, stroke and fill.
• Color Mosaic: A transition that overlays a variety of colors in a mosaic pattern as it transitions between two clips.
• Clock Wipe: A classic radial wipe transition with feathering and the option for a dual clock wipe.
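Red Giant’s implementation isn’t public, but the idea behind a transition like Channel Blur — blurring each RGB channel by a different amount while crossfading between two clips — can be sketched in a few lines of NumPy. Everything here (the function names, the per-channel radius ratios, the triangular blur envelope that peaks mid-transition) is illustrative, not Red Giant’s actual algorithm:

```python
import numpy as np

def box_blur_2d(channel, radius):
    """Separable box blur: average rows, then columns."""
    if radius <= 0:
        return channel.astype(float)
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, channel.astype(float))
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)

def channel_blur_transition(frame_a, frame_b, t, max_radius=8):
    """Crossfade frame_a -> frame_b (t in [0, 1]), blurring R, G and B
    by different amounts for a chromatic smear. Frames are HxWx3 uint8."""
    # Blur strength ramps up to its max at t = 0.5, back to 0 at the ends,
    # so the transition starts and ends on clean frames.
    strength = 1.0 - abs(2.0 * t - 1.0)
    out = np.empty(frame_a.shape, dtype=float)
    for c, scale in enumerate((1.0, 0.6, 0.3)):  # per-channel radius ratios
        radius = int(round(max_radius * strength * scale))
        a = box_blur_2d(frame_a[:, :, c], radius)
        b = box_blur_2d(frame_b[:, :, c], radius)
        out[:, :, c] = (1.0 - t) * a + t * b
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)
```

Because the red channel gets the largest radius and blue the smallest, edges fringe into color during the move, which is the “unique chromatic effect” the plugin description refers to; at t = 0 and t = 1 the radii fall to zero and the function returns the untouched clips.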

Updates to existing effects in Universe 2.2 include:
• VHS: This update includes new VHS noise samples, VHS-style text, timecode and function icons (like play, fast-forward and rewind), updated presets, and updated defaults for better results upon application.
• Retrograde: This update includes a small but valuable addition that allows Retrograde to use the original aspect ratio of your footage for the effect.

Existing Universe customers can download the new tools directly by launching Red Giant Link. Universe is available as an annual subscription ($99/year) or as a monthly subscription ($20/month). Red Giant Universe is available in Red Giant’s Volume Program, the flexible and affordable solution for customers who need five or more floating licenses.

Sonnet’s portable eGPU accelerates computer graphics

Sonnet has introduced a Thunderbolt-connected external GPU (eGPU) device called the eGFX Breakaway Puck, which is a portable, high-performance, all-in-one eGPU for Thunderbolt 3 computers. The Puck offers accelerated graphics and provides multi-display connectivity thanks to AMD’s Eyefinity technology. Users employing a Puck will experience boosted GPU acceleration when using professional video apps.

Sonnet is offering two Puck models: the eGFX Breakaway Puck Radeon RX 560 and eGFX Breakaway Puck Radeon RX 570. Each Puck model is 6 inches wide by 5.1 inches deep by 2 inches tall. Both feature one Thunderbolt 3 port, three DisplayPorts and one HDMI port to support up to four 4K displays in multi-monitor mode.

The Puck connects to a computer with a single Thunderbolt 3 cable and provides up to 45W of power to charge the computer. On the desktop, the Puck has a minimal footprint. With an optional VESA mounting bracket kit, the Puck can be attached to the back of a display or the arm of a multi-monitor stand, leaving a zero footprint on the desktop. The kit also includes a 0.5-meter cable to help reduce cable clutter.

The eGFX Breakaway Puck Radeon RX 560 sells for $449, and the eGFX Breakaway Puck Radeon RX 570 costs $599. The optional PuckCuff VESA Mounting Bracket Kit has an MSRP of $59. All models are immediately available.


Quick Chat: Creating graphics package for UN’s Equator Prize ceremony

Undefined Creative (UC) was recently commissioned by the United Nations Development Programme (UNDP) to produce a fresh package of event graphics for its Equator Prize 2017 Award Ceremony. This project is the latest in a series of motion design-centered work collaborations between the creative studio and the UN, a relationship that began when UC donated their skills to the Equator Prize in 2010.

The Equator Prize recognizes local and indigenous community initiatives from across the planet that are advancing innovative on-the-ground solutions to climate, environment and poverty challenges. Award categories honor achievement and local innovation in the thematic areas of oceans, forests, grasslands and wildlife protection.

For this year’s ceremony, UNDP wanted a complete refresh that gave the on-stage motion graphics a current vibe while incorporating the key icons behind its sustainable development goals (SDGs). Consisting of a “Countdown to Ceremony” screensaver, an opening sequence, 15 winner slates, three category slates and 11 presenter slates, the package had to align visually with a presentation from National Geographic Society, which was part of the evening’s program.

To bring it all together, UC drew from the SDG color palettes and relied on subject matter knowledge of both the UNDP and National Geographic in establishing the ceremony graphics’ overall look and feel. With only still photos available for the Equator Prize winners, UC created motion and depth by intertwining the best shots with moving graphics and strategically selected stock footage. Naturally moving flora and fauna livened up the photography, added visual diversity and contributed to creating a unique aesthetic.

We reached out to Undefined Creative’s founder/creative director Maria Rapetskaya to find out more:

How early did you get involved in the project, and was the client open to input?
We got the call a couple of months before the event. The original show had been used multiple times since we created it in 2010, so the client was definitely looking for input on how we could refresh or even rebrand.

Any particular challenges for this one?
For non-commercial organizations, budgets and messaging are equally sensitive topics. We have to be conscious of costs, and also very aware of the do’s and don’ts when it comes to assets and use. Our creative discussions took place over several calls, laying out options and ideas at different budget tiers — anything from simply updating the existing package to creating something entirely different. In the case of the latter, parameters had to be established right away for how different “different” could be.

For example, it was agreed that we should stick with photography provided by the 2017 award winners, while our proposal to include stock footage of flora and fauna was approved by all involved. Which SDG icons would be used and how, what partner and UN organizational branding should be featured prominently as design inspiration, how this would integrate with content being produced for UNDP/Equator Prize by Nat Geo… all of these questions had to be addressed before we started any real ideation in order for the creative to stay on brand, on message, on budget and on time.

What tools did you use on the project?
We relied on Adobe CC, in particular After Effects, which is our staple software. On this project, we also relied heavily on stock from multiple vendors. Pond5 has a robust and cost-effective collection of the video elements we were seeking.

Why is this project important to you?
The majority of our clients are for-profit commercial entities, and while that’s wonderful, there’s always a different feeling of reward when we have the chance to do something for the good of humanity at large, however minuscule our contribution is. The winners are coming from such different corners of the globe — at times, very remote. They’re incredibly excited to be honored, on stage, in New York City, and we can only imagine what it feels like to see their faces, the faces of their colleagues and friends, the names of their projects, up on this screen in front of a large, live audience. This particular event brings us a lot closer to what we’re creating, on a really empathetic, human level.

Updating the long-running Ford F-150 campaign

Giving a decade-long, very successful campaign a bit of a goose presents unique challenges, including maintaining tone and creative continuity while bringing a fresh perspective. To help launch the new 2018 Ford F-150, Big Block director Paul Trillo brought all of his tools to the table, offering an innovative spin on the campaign.

Big Block worked closely with agency GTB from development to previz, live-action, design and editorial, all the way through color and finish.

Trillo wanted to maintain the tone and voice of the original campaign while adding a distinct technical style and energy. Dynamic camera movement and quick editing helped bring new vitality to the “Built Ford Tough” concept.

Technically challenging camera moves help guide the audience through distinct moments. While previous spots relied largely on motion graphics, Trillo’s spots used custom camera rigs on real locations.

Typography remained a core element of the spots, underscored by an array of stop-motion, hyperlapse, dolly zooms, drone footage, camera flips, motion control and match frames.

We reached out to Big Block’s Paul Trillo and VFX supervisor John Cherniack to find out more…

How early did Big Block get involved in this F-150 campaign?
We worked with Detroit agency GTB starting in May 2017.

How much creative input did you have on the campaign? In terms of both original concept and execution?
Trillo: What was so original about this pitch was that they gave us a blank canvas and VO script to work with, and that’s it. I was building off a campaign that had been running for nearly 10 years and I knew what the creatives were looking for in terms of some sort of kinetic, constantly transitioning energy. However, it was essentially up to me to design each moment of the spot and how we get from A to B to C.

Typically, car commercials can be pretty prescriptive and sensitive to how the car is depicted. This campaign functions a lot differently than your typical car commercial. There was a laundry list of techniques, concepts, tricks and toys I’ve wanted to implement, so we seized the opportunity to throw the kitchen sink at this. Then, by breaking down the script and pairing it with the different tricks I wanted to try out, I sort of formed the piece. It was through the development of the scripts, boards and animatics that certain ideas fell to the wayside and the best rose to the top.

Cherniack: Paul had some great ideas from the very beginning, and the whole team got to help contribute to the brainstorming. We took the best ideas and started to put them all together in a previz to see which ones would stitch together seamlessly.

Paul, Justin Trask (production designer) and I all spent a very long time together going through each board and shot, determining which elements we could build and what we would make in CG. As much as we wanted to build a giant gantry to raise the bar, some elements were cost-prohibitive. This is where we were able to get creative about what we could achieve between practical and CG elements.

How much creative input did you have on set?
Trillo: The only creative decisions we were allowed to make on set were creative solutions to logistical challenges. We’d done all the pre-production work, mapping out the camera moves and transitions down to the frame, so the heavy lifting was finished. Of course, you always look to make it better on set and find the right energy in the moment, but that’s all icing.

Cherniack: By the time we started shooting, we had gone through a good amount of planning, and I had a good feeling about everything that Paul was trying to achieve. One area that we both worked together on set was to get the most creative shot, while also maintaining our plans for combining the shots in post.

What challenges did you face?
Trillo: I think I have a sort of addictive personality when it comes to logistical and creative challenges. Before this thing was fully locked in, before we had any storyboards or a single location, I knew what I had written out was going to be super challenging, if not impossible, especially because I wanted to shoot as much as we could practically. However, what you write down on a piece of paper and what you animate in a 3D environment doesn’t always align with the physics of the real world. Each shot provided its own unique challenge, whether it was an art department build or deciding which type of camera rig to use to move the camera in an unusual way. Fortunately, I had a top-notch crew both in camera (DP Dan Mindel) and production design (Justin Trask), so there were always a couple of ways to solve each problem.

Cherniack: In order to gather all of the measurements, HDRI, set surveys and reference photography, I had to always be on the move, while staying close enough should any VFX questions come up. Doing this in 110+ degree heat in the quarry, during three of the hottest days of the summer, was quite a challenge. We also had very little control over the lake currents and had to modify the way we shot the boat scene in Brainiac on the fly. We had a great crew who was able to change directions quickly.

What was your favorite part of working on this campaign? What aspect are you most proud of?
Trillo: It was pretty spectacular to see each of these pieces evolve from chicken scratch into a fully realized image. There was little creative compromise in that entire process. But I have to say I think I’m proudest of dropping 400 pounds of french fries out of a shipping container.

Any major differences between automotive campaigns and ads for other industries?
The main difference is there aren’t any rules here. The only thing you need to keep in mind when doing this campaign is to stay true to the F-150’s brand and ethos. As long as you remain true to that spirit, there are no other guidelines for how a car commercial needs to function. What appeals to me about this campaign is that it combines a few of my interests: design, technical camera work and a dash of humor.

What tools did you use?
Cherniack: We used Maya, 3ds Max, Nuke, Flame and PFTrack for post-production.

Tobin Kirk joins design/animation house Laundry as EP

Tobin Kirk has joined LA-based design and animation studio Laundry as executive producer. Kirk brings nearly 20 years of experience spanning broadcast design, main title sequences, integrated content, traditional on-air spots, branded content, digital and social. At Laundry, he will work closely with executive producer Garrett Braren on business development, as well as client and project management efforts.

Kirk was most recently managing executive producer at Troika, where he oversaw all production at the entertainment brand agency’s 25,000-square-foot facility in Hollywood, including its creative studio and live-action production subsidiary, Troika Production Group. Prior to that, he spent nearly five years as executive producer at Blind, managing projects for Xbox/Microsoft, AT&T, ancestry.com and Sealy Mattress, among others.

As a producer, Kirk’s background is highlighted by such projects as the main title sequence for David Fincher’s The Girl With the Dragon Tattoo at Blur Studio, commercials for Chrysler and Gatorade at A52 and an in-flight video for Method/Virgin America at Green Dot Films. He also spent three years with Farmer Brown working for TBS, CBS, Mark Burnett Productions, Al Roker Productions, The Ant Farm, Bunim/Murray and Endemol USA.

In addition, Kirk collaborated with video artist Bill Viola for over six years, producing projects for the London National Gallery, Athens Olympics, the Getty Museum, Opera National de Paris, Guggenheim Museum, Munich’s E.ON Corporation and Anthony d’Offay Gallery.

Behind the Title: Artist Jayse Hansen

NAME: Jayse Hansen

COMPANY: Jayse Design Group

CAN YOU DESCRIBE YOUR COMPANY?
I specialize in designing and animating completely fake-yet-advanced-looking user interfaces, HUDs (head-up displays) and holograms for film franchises such as The Hunger Games, Star Wars, Iron Man, The Avengers, Guardians of the Galaxy, Spider-Man: Homecoming, Big Hero 6, Ender’s Game and others.

On the side, this has led to developing untraditional, real-world, outside-the-rectangle type UIs, mainly with companies looking to have an edge in efficiency/data-storytelling and to provide a more emotional connection with all things digital.

Iron Man

WHAT’S YOUR JOB TITLE?
Designer/Creative Director

WHAT DOES THAT ENTAIL?
Mainly, I try to help filmmakers (or companies) figure out how to tell stories in quick reads with visual graphics. In a film, we sometimes only have 24 frames (one second) to get information across to the audience. It has to look super complex, but it has to be super clear at the same time. This usually involves working with directors, VFX supervisors, editorial and art directors.

With real-world companies, the way I work is similar. I help figure out what story can be told visually with the massive amount of data we have available to us nowadays. We’re all quickly finding that data is useless without some form of engaging story and a way to quickly ingest, make sense of and act on that data. And, of course, with design-savvy users, a necessary emotional component is that the user interface looks f’n rad.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
A lot of R&D! Movie audiences have become more sophisticated, and they groan if a fake UI seems outlandish, impossible or Playskool cartoon-ish. Directors strive to not insult their audience’s intelligence, so we spend a lot of time talking to experts and studying real UIs in order to ground them in reality while still making them exciting, imaginative and new.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Research, breaking down scripts and being able to fully explore and do things that have never been done before. I love the challenge of mixing strong design principles with storytelling and imagination.

WHAT’S YOUR LEAST FAVORITE?
Paperwork!

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Early morning and late nights. I like to jam on design when everyone else is sleeping.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I actually can’t imagine doing anything else. It’s what I dream about and obsess about day and night. And I have since I was little. So I’m pretty lucky that they pay me well for it!

If I lost my sight, I’d apply for Oculus or Meta brain implants and live in the AR/VR world to keep creating visually.

SO YOU KNEW THIS WAS YOUR PATH EARLY ON?
When I was 10 I learned that they used small models for the big giant ships in Star Wars. Mind blown! Suddenly, it seemed like I could also do that!

As a kid I would pause movies and draw all the graphic parts of films, such as the UIs in the X-wings in Star Wars, or the graphics on the pilot helmets. I never guessed this was actually a “specialty niche” until I met Mark Coleran, an amazing film UI designer who coined the term “FUI” (Fictional User Interface). Once I knew it was someone’s “everyday” job, I didn’t rest until I made it MY everyday job. And it’s been an insanely great adventure ever since.

CAN YOU TALK MORE ABOUT FUI AND WHAT IT MEANS?
FUI stands for Fictional (or Future, Fantasy, Fake) User Interface. UIs have been used in films for a long time to tell an audience many things, such as: their hero can’t do what they need to do (Access Denied) or that something is urgent (Countdown Timer), or they need to get from point A to point B, or a threat is “incoming” (The Map).

Mockingjay Part I

As audiences get more tech-savvy, the potential for screens to act as story devices has grown, and writers and directors have gotten more creative. Now, entire stretches of story are being told through interfaces, such as in The Hunger Games: Mockingjay Part 1, where Katniss, Peeta, Beetee and President Snow have some of their most tense moments.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
The most recent projects I can talk about are Guardians of the Galaxy 2 and Spider-Man: Homecoming, both with the Cantina Creative team and Marvel. For Guardians 2, I had a ton of fun designing and animating various screens, including Rocket, Gamora and Star-Lord’s glass screens and the large “Drone Tactical Situation Display” holograms for the Sovereign (gold people). Spider-Man was my favorite superhero as a child, so I was honored to be asked to define the “Stark-Designed” UI design language of the HUDs, holograms and various AR overlays.

I spent a good amount of time researching the comic book version of Spider-Man. His suit and abilities are actually quite complex, and I ended up writing a 30-plus-page guide to all of its functions so I could build out the HUD and blueprint diagrams in a way that made sense to Marvel fans.

In the end, it was a great challenge to blend the combination of the more military Stark HUDs for Iron Man, which I’m very used to designing, and a new, slightly “webby” and somewhat cute “training-wheels” UI that Stark designed for the young Peter Parker. I loved the fact that in the film they played up the humor of a teenager trying to understand the complexities of Stark’s UIs.

Star Wars: The Force Awakens

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
I think Star Wars: The Force Awakens is the one I was most proud to be a part of. It was my one bucket list film to work on from childhood, and I got to work with some of the best talents in the business. Not only JJ Abrams and his production team at Bad Robot, but with my longtime industry friends Navarro Parker and Andrew Kramer.

WHAT SOFTWARE DID YOU RELY ON?
As always, we used a ton of Maxon Cinema 4D, Adobe’s After Effects and Illustrator and Element 3D to pull off rather complex and lengthy design sequences such as the Starkiller Base hologram and the R2D2/BB8 “Map to Luke Skywalker” holograms.

Cinema 4D was essential in allowing us to be super creative while still meeting rather insane deadlines. It also integrates so well with the Adobe suite, which allowed us to iterate really quickly when the inevitable last-minute design changes came flying in. I would do initial textures in Adobe Illustrator, then design in C4D, and transfer that into After Effects using the Element 3D plugin. It was a great workflow.

YOU ALSO CREATE VR AND AR CONTENT. CAN YOU TELL US MORE ABOUT THAT?
Yes! Finally, AR and VR are allowing what I’ve been doing for years in film to actually happen in the real world. With a Meta (AR) or Oculus (VR) headset, you can actually walk around your UI like an Iron Man hologram and interact with it like the volumetric UIs we did for Ender’s Game.

For instance, today with Google Earth VR you can use a holographic mapping interface like in The Hunger Games to plan your next vacation. With apps like Medium, Quill, Tilt Brush or Gravity Sketch you can design 3D parts for your robot like Hiro did in Big Hero 6.

Big Hero 6

While wearing a Meta 2, you can surround yourself with multiple monitors of content and pull 3D models from them and enlarge them to life size.

So we have a deluge of new abilities, but most designers have only designed on flat, traditional monitors or phone screens. They’re used to the two screen dimensions (X and Y) but have never had the opportunity to use the Z axis. So you have all kinds of new challenges, like: “What does this added dimension do for my UI? How is it better? Why would I use it? And what does the back of a UI look like when other people are looking at it?”

For instance, in the Iron Man HUD, most of the time I was designing for when the audience is looking at Tony Stark, which is the back of the UI. But I also had to design it from the side. And it all had to look proper, of course, from the front. UI design becomes a bit like product design at this point.

In AR and VR, similar design challenges arise. When we share volumetric UIs, we will see other people’s UIs from the back. At times, we want to be able to understand them, and at other times, they should be disguised, blurred or shrouded for privacy reasons.

How do you design when your UI can take up the whole environment? How can a UI give you important information without distracting you from the world around you? How do you deal with additive displays where black is not a color you can use? And on and on. These are all things we tackle with each film, so we have a bit of a head start in those areas.
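The additive-display constraint is easy to demonstrate: an optical see-through headset can only add light on top of what you already see, so a “black” pixel is simply invisible. A minimal, device-agnostic Python sketch of the idea (illustrative only):

```python
def additive_composite(background, overlay):
    """Simulate an optical see-through (additive) display.

    Each pixel is an (r, g, b) tuple in 0-255. The display can only
    ADD light to the real-world background, so the result is
    background plus overlay, clamped to the displayable maximum.
    """
    return tuple(min(b + o, 255) for b, o in zip(background, overlay))

# A white UI element stays visible against a mid-gray wall...
print(additive_composite((128, 128, 128), (255, 255, 255)))  # (255, 255, 255)

# ...but "black" adds no light at all: you just see the wall.
print(additive_composite((128, 128, 128), (0, 0, 0)))        # (128, 128, 128)
```

This is why HUD-style designs for AR lean on bright strokes and glows rather than dark fills.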

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
I love tech, but it would be fun to be stuck with just a pen, paper and a book… for a while, anyway.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I’m on Twitter (@jayse_), Instagram (@jayse_) and Pinterest (skyjayse). Aside from that I also started a new FUI newsletter to discuss some behind the scenes of this type of work.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Heck yeah. Lately, I find myself working to Chillstep and Deep House playlists on Spotify. But check out The Cocteau Twins. They sing in a “non-language,” and it’s awesome.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I chill with my best friend and fiancée, Chelsea. We have a rooftop wet-bar area with a 360-degree view of Las Vegas from the hills. We try to go up each evening at sunset with our puppy Bella and just chill. Sometimes it’s all fancy-like with a glass of wine and fruit. Chelsea likes to make it all pretty.

It’s a long way from just 10 years ago, when we were hunting spare change in the car to afford 99-cent nachos from Taco Bell, so we’re super appreciative of how far we’ve come. And because of that, no matter how many times my machine has crashed, or how many changes my client wants — we always make time for just each other. It’s important to keep perspective and realize your work is not life or death, even though in films sometimes they try to make it seem that way.

It’s important to always have something that is only for you and your loved ones that nobody can take away. After all, as long as we’re healthy and alive, life is good!

Heidi Netzley joins We Are Royale as director of biz dev

Creative agency We Are Royale has added Heidi Netzley as director of business development. She will be responsible for helping to evolve the company’s business development process and building its direct-to-brand vertical.

Most recently, Netzley held a similar position at Digital Kitchen, where she expanded and diversified the company’s client base and also led projects like a digital documentary series for Topgolf and show launch campaigns for CBS and E! Network. Prior to that, she was business development manager at Troika, where she oversaw brand development initiatives and broadcast network rebrands for the agency’s network clients, including ABC, AwesomenessTV, Audience Network and Sundance Cinemas.

Netzley launched her career at Disney/ABC Television Group within the entertainment marketing division. During her seven-year tenure, she held various roles ranging from marketing specialist to manager of creative services, where she helped manage the brand across multi-platform marketing campaigns for all of ABC’s primetime properties.

“With our end-to-end content creation capabilities, we can be both a strategic and creative partner to other types of brands, and I look forward to helping make that happen,” says Netzley.

When Netzley isn’t training for the 2018 LA Marathon, she’s busy fundraising for causes close to her heart, including the Leukemia & Lymphoma Society, for which she was nominated as the organization’s 2016 Woman of the Year. She currently sits on the society’s leadership committee.

Red Giant Trapcode Suite 14 now available

By Brady Betzel

Red Giant has released an update to its Adobe After Effects-focused plug-in toolset, Trapcode Suite 14, including new versions of Trapcode Particular and Form as well as an update to Trapcode Tao.

The biggest updates seem to be in Red Giant’s flagship product Trapcode Particular 3. Trapcode Particular is now GPU accelerated through OpenGL with a proclaimed 4X speed increase over previous versions. The Designer has been re-imagined and seems to take on a more Magic Bullet-esque look and feel. You can now include multiple particle systems inside the same 3D space, which will add to the complexity and skill level needed to work with Particular.

You can now also load your own 3D model OBJ files as emitters in the Designer panel or use any image in your comp as a particle. There are also a bunch of new presets that have been added to start you on your Particular system building journey — over 210 new presets, to be exact.

Trapcode Form has been updated to version 3 with the updated Designer, the ability to add 3D models and animated OBJ sequences as particle grids, the ability to load images for use as particles, a new graphing system for more precise control over the system, and over 70 presets in the Designer.

Trapcode Tao has been updated with depth of field effects to allow for that beautiful camera-realistic blur that really sets pro After Effects users apart.

Trapcode Particular 3 and Form 3 are paid updates, while Tao is free for existing users. If you want to update only Tao, make sure you select only Tao in the installer; otherwise you will install new Trapcode plug-ins over your old ones.

Trapcode Particular 3 is available now for $399. The update is $149 and the academic version is $199. You can also get it as a part of the Trapcode Suite 14 for $999.

Trapcode Form 3 is available now for $199. The update is $99 and the academic version costs $99. It can be purchased as part of the Trapcode Suite 14 for $999.

Check out the new Trapcode Suite 14 bundle.

 

Maxon debuts Cinema 4D Release 19 at SIGGRAPH

Maxon was at this year’s SIGGRAPH in Los Angeles showing Cinema 4D Release 19 (R19). This next generation of Maxon’s pro 3D app offers a new viewport and a new Sound Effector, and additional features for Voronoi Fracturing have been added to the MoGraph toolset. It also boasts a new Spherical Camera, the integration of AMD’s ProRender technology and more. Designed to serve individual artists as well as large studio environments, Release 19 offers a streamlined workflow for general design, motion graphics, VFX, VR/AR and all types of visualization.

With Cinema 4D Release 19, Maxon also introduced a few re-engineered foundational technologies, which the company will continue to develop in future versions. These include core software modernization efforts, a new modeling core, integrated GPU rendering for Windows and Mac, and OpenGL capabilities in BodyPaint 3D, Maxon’s pro paint and texturing toolset.

More details on the offerings in R19:
Viewport Improvements provide artists with added support for screen-space reflections and OpenGL depth-of-field, in addition to the screen-space ambient occlusion and tessellation features (added in R18). Results are so close to final render that client previews can be output using the new native MP4 video support.

MoGraph enhancements expand on Cinema 4D’s toolset for motion graphics with faster results and added workflow capabilities in Voronoi Fracturing, such as the ability to break objects progressively, add displaced noise details for improved realism or glue multiple fracture pieces together more quickly for complex shape creation. An all-new Sound Effector in R19 allows artists to create audio-reactive animations based on multiple frequencies from a single sound file.
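The idea behind an audio-reactive effector like the one described above can be sketched in a few lines of Python. This is not Maxon’s implementation, just the general technique: sample one frequency band’s amplitude per frame and remap it onto an animation parameter such as a clone’s scale.

```python
def amplitude_to_scale(amplitudes, min_scale=1.0, max_scale=2.0):
    """Map per-frame amplitudes (normalized 0.0-1.0) of one frequency
    band onto a scale parameter, frame by frame: silence gives
    min_scale, a full-amplitude peak gives max_scale."""
    return [min_scale + a * (max_scale - min_scale) for a in amplitudes]

# Hypothetical per-frame amplitudes for a bass band:
bass = [0.0, 0.5, 1.0, 0.25]
print(amplitude_to_scale(bass))  # [1.0, 1.5, 2.0, 1.25]
```

A multi-frequency setup, as described above, would run this remapping once per band, each band driving a different parameter.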

The new Spherical Camera allows artists to render stereoscopic 360° virtual reality videos and dome projections. Artists can specify a latitude and longitude range, and render in equirectangular, cubic string, cubic cross or 3×2 cubic format. The new spherical camera also includes stereo rendering with pole smoothing to minimize distortion.
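Equirectangular output, the most common of those formats, maps longitude and latitude linearly onto image columns and rows. A rough Python sketch of that projection (an illustration of the math, not Maxon’s code):

```python
def equirectangular_uv(lat_deg, lon_deg):
    """Map a view direction (latitude -90..90, longitude -180..180,
    in degrees) to normalized (u, v) coordinates in an equirectangular
    frame: u runs left-to-right with longitude, v top-to-bottom
    with latitude."""
    u = (lon_deg + 180.0) / 360.0
    v = (90.0 - lat_deg) / 180.0
    return u, v

# Looking straight ahead (0, 0) lands in the center of the frame:
print(equirectangular_uv(0.0, 0.0))  # (0.5, 0.5)
```

Restricting the latitude/longitude range, as R19 allows, simply crops this mapping to a sub-rectangle of the full sphere.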

New Polygon Reduction works as a generator, so it’s easy to reduce entire hierarchies. The reduction is pre-calculated, so adjusting the reduction strength or desired vertex count is extremely fast. The new Polygon Reduction preserves vertex maps, selection tags and UV coordinates, ensuring textures continue to map properly and providing control over areas where polygon detail is preserved.

Level of Detail (LOD) Object features a new interface element that lets customers define and manage settings to maximize viewport and render speed, create new types of animations or prepare optimized assets for game workflows. Level of Detail data exports via the FBX 3D file exchange format for use in popular game engines.
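The core mechanic of a level-of-detail object can be sketched generically: pick a representation based on camera distance (or screen size), so distant objects render with fewer polygons. A hedged, engine-agnostic Python sketch; the distances and mesh names are made up for illustration:

```python
def pick_lod(distance, lods):
    """Choose a level of detail for a given camera distance.

    `lods` is a list of (max_distance, mesh_name) pairs sorted by
    max_distance; beyond every threshold, the coarsest entry is used.
    """
    for max_dist, mesh in lods:
        if distance <= max_dist:
            return mesh
    return lods[-1][1]  # fall back to the coarsest mesh

# Hypothetical LOD chain for one asset:
chain = [(10.0, "hero_high"), (50.0, "hero_mid"), (200.0, "hero_low")]
print(pick_lod(5.0, chain))    # hero_high
print(pick_lod(120.0, chain))  # hero_low
```

Game engines that consume the exported FBX data apply essentially this selection at render time.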

AMD’s Radeon ProRender technology is now seamlessly integrated into R19, providing artists a cross-platform GPU rendering solution. Though just the first phase of integration, it provides a useful glimpse into the power ProRender will eventually provide as more features and deeper Cinema 4D integration are added in future releases.

Modernization efforts in R19 reflect Maxon’s development legacy and offer the first glimpse into the company’s planned ‘under-the-hood’ future efforts to modernize the software, as follows:

  • Revamped Media Core gives Cinema 4D R19 users a completely rewritten software core to increase speed and memory efficiency for image, video and audio formats. Native support for MP4 video without QuickTime delivers advantages to preview renders, incorporate video as textures or motion track footage for a more robust workflow. Export for production formats, such as OpenEXR and DDS, has also been improved.
  • Robust Modeling offers a new modeling core with improved support for edges and N-gons, which can be seen in the Align and Reverse Normals commands. More modeling tools and generators will directly use this new core in future versions.
  • BodyPaint 3D now uses an OpenGL painting engine, giving R19 artists who paint color and add surface details in film, game design and other workflows a real-time display of reflections, alpha, bump or normal maps, and even displacement, for improved visual feedback and texture painting. Redevelopment efforts to improve the UV editing toolset in Cinema 4D continue, with the first fruits of this work available in R19: faster and more efficient options to convert point and polygon selections, grow and shrink UV point selections, and more.

Liron Ashkenazi-Eldar joins The Artery as design director  

Creative studio The Artery has brought on Liron Ashkenazi-Eldar as lead design director. In her new role, she will spearhead the formation of a department that will focus on design and branding. Ashkenazi-Eldar and team are also developing in-house design capabilities to support the company’s VFX, experiential and VR/AR content, as well as website development, including providing motion graphics, print and social campaigns.

“While we’ve been well established for many years in the areas of production and VFX, our design team can now bring a new dimension to our company,” says Ashkenazi-Eldar, who is based in The Artery’s NYC office. “We are seeking brand clients with strong identities so that we can offer them exciting, new and even weird creative solutions that are not part of the traditional branding process. We will be taking a completely new approach to branding — providing imagery that is more emotional and more personal, instead of just following an existing protocol. Our goal is to provide a highly immersive experience for our new brand clients.”

Originally from Israel, the 27-year-old Ashkenazi-Eldar is a recent graduate of New York’s School of Visual Arts with a BFA degree in Design. She is the winner of a 2017 ADC Silver Cube Award from The One Club, in the category 2017 Design: Typography, for her contributions to a project titled Asa Wife Zine. She led the Creative Team that submitted the project via the School of Visual Arts.

 

Nutmeg and Nickelodeon team up to remix classic SpongeBob songs

New York creative studio Nutmeg Creative was called on by Nickelodeon to create trippy music-video-style remixes of some classic SpongeBob SquarePants songs for the kids network’s YouTube channel. Catchy, sing-along kids’ songs have been an integral part of SpongeBob since its debut in 1999.

Though there are dozens of unofficial fan remixes on YouTube, Nickelodeon frequently turns to Nutmeg for official remixes: vastly reimagined versions accompanied by trippy, trance-inducing visuals that inevitably go viral. It all starts with the music, and the music is inspired by the show.

Infused with the manic energy of classic Warner Bros. Looney Tunes, SpongeBob is simultaneously slapstick and surreal, with an upbeat vibe that has attracted a cult-like following from the get-go. Now in its 10th season, SpongeBob attracts fans that span two generations: kids who grew up watching SpongeBob now have kids of their own.

The show’s sensibility and multi-generational audience informs the approach of Nutmeg sound designer, mixer and composer JD McMillin, whose remixes of three popular and vintage SpongeBob songs have become viral hits: Krusty Krab Pizza and Ripped My Pants from 1999, and The Campfire Song Song (yes, that’s correct) from 2004. With musical styles ranging from reggae, hip-hop and trap/EDM to stadium rock, drum and bass and even Brazilian dance, McMillin’s remixes expand the appeal of the originals with ear candy for whole new audiences. That’s why, when Nickelodeon provides a song to Nutmeg, McMillin is given free rein to remix it.

“No one from Nick is sitting in my studio babysitting,” he says. “They could, but they don’t. They know that if they let me do my thing they will get something great.”

“Nickelodeon gives us a lot of creative freedom,” says executive producer Mike Greaney. “The creative briefs are, in a word, brief. There are some parameters, of course, but, ultimately, they give us a track and ask us to make something new and cool out of it.”

All three remixes have collectively racked up hundreds of thousands of views on YouTube, with The Campfire Song Song remix generating 655K views in less than 24 hours on the SpongeBob Facebook page.

McMillin credits the success to the fact that Nutmeg serves as a creative collaborative force: what he delivers is more reinvention than remix.

“We’re not just mixing stuff,” he says. “We’re making stuff.”

Once Nick signs off on the audio, that approach continues with the editorial. Editors Liz Burton, Brian Donnelly and Drew Hankins each bring their own unique style and sensibility, with graphic effects designer Stephen C. Walsh adding the finishing touches.

But Greaney isn’t always content with cut, shaken and stirred clips from the show, going the extra mile to deliver something unexpected. Case in point: he recently donned a pair of red track pants and high-kicked in front of a greenscreen to add a suitably outrageous element to the Ripped My Pants remix.

In terms of tools used for audio work, Nutmeg used Ableton Live, Native Instruments Maschine and Avid Pro Tools. For editorial, they called on Avid Media Composer, Sapphire and Boris FX. Graphics were created in Adobe After Effects and Mocha Pro.

Exceptional Minds: Autistic students learn VFX, work on major feature films

After graduation, these artists have been working on projects for Marvel, Disney, Fox and HBO.

By Randi Altman

With an estimated 1 in 68 children in the US being born with some sort of autism spectrum disorder, according to the Centers for Disease Control’s Autism and Developmental Disabilities Monitoring Network, I think it’s fair to say that most people have been touched in some way by a child on the spectrum.

As a parent of a teenager with autism, I can attest to the fact that one of our biggest worries, the thing that keeps us up at night, is the question of independence. Will he be able to make a living? Will there be an employer who can see beyond his deficits to his gifts and exploit those gifts in the best possible way?

Enter Exceptional Minds, a school in Los Angeles that teaches young adults with autism how to create visual effects and animation while working as part of a team. This program recognizes how bright these young people are and how focused they can be, surrounds them with the right teachers and behavioral therapists, puts the right tools in their hands and lets them fly.

The school, which also has a VFX and animation studio that employs its graduates, was started in 2011 by a group of parents who have children on the spectrum. “They were looking for work opportunities for their kids, and quickly discovered they couldn’t find any. So they decided to start Exceptional Minds and prepare them for careers in animation and visual effects,” explains Susan Zwerman, the studio executive producer at Exceptional Minds and a long-time VFX producer whose credits include Broken Arrow, Alien Resurrection, Men of Honor, Around the World in 80 Days and The Guardian.

Since the program began, these young people have had the opportunity to work on some very high-profile films and TV programs. Recent credits include Game of Thrones, The Fate of the Furious and Doctor Strange, which was nominated for an Oscar for visual effects this year.

We reached out to Zwerman to find out more about this school, its studio and how they help young people with autism find a path to independence.

The school came first and then the studio?
Yes. We started training them for visual effects and animation and then the conversation turned to, “What do they do when they graduate?” That led to the idea to start a visual effects studio. I came on board two years ago to organize and set it up. It’s located downstairs from the school.

How do you pick who is suitable for the program?
We can only take 10 students each year, and unfortunately, there is a waiting list because we are the only program of its kind anywhere. We have a review process that our educators and teachers have in terms of assessing the student’s ability to be able to work in this area. You know, not everybody can function working on a computer for six or eight hours. There are different levels of the spectrum. So the higher functioning and the medium functioning are more suited for this work, which takes a lot of focus.

Students are vetted by our teachers and behavioral specialists, who take into account the student’s ability, as well as their enthusiasm for visual effects and animation — it’s very intense, and they have to be motivated.

Susie Zwerman (in back row, red hair) with artists in the Exceptional Minds studio.

I know that kids on the spectrum aren’t necessarily social butterflies, how do you teach them to work as a team?
Oh, that’s a really good question. We have what’s called our Work Readiness program. They practice interviewing, they practice working as a team, they learn about appearance, attitude, organization and how to problem solve in a work place.

A lot of it is all about working in a team, and developing their social skills. That’s something we really stress in terms of behavioral curriculum.

Can you describe how the school works?
It’s a three-year program. In the first year, they learn about the principles of design and using programs like Adobe’s Flash and Photoshop. In Flash, they study 2D animation and in Photoshop they learn how to do backgrounds for their animation work.

During year two, they learn how to work in a production pipeline. They are given a project that the class works on together, and then they learn how to edit using Adobe Premiere Pro and compositing on Adobe After Effects.

In the third year, they are developing their skills in 3D via Autodesk Maya and compositing with The Foundry’s Nuke. So they learn the way we work in the studio and our pipeline, as well as preparing their portfolios for the workplace. At the end of three years, each student completes their training with a demo reel and resume of their work.

Who helps with the reels and resumes?
Their teachers supervise that process and help them with editing and picking the best pieces for their reel. Having a reel is important for many reasons. While many students will work in our studio for a year after graduation, I was able to place some directly into the work environment because their talent was so good… and their reel was so good.

What is the transition like from school to studio?
They graduate in June and we transition many of them to the studio, where they learn about deadlines and get paid for their work. Here, many experience independence for the first time. We do a lot of 2D-type visual effects clean-up work. We give them shots to work on and test them for the first month to see how they are doing. That’s when we decide if they need more training.

The visual effects side of the studio deals with paint work, wire and rod removal and tracker or marker removals — simple composites — plus a lot of rotoscoping and some greenscreen keying. We also do end title credits for the major movies.

We just opened the animation side of the studio in 2016, so it’s still in the beginning stages, but we’re doing 2D animation. We are not a 3D studio… yet! The 2D work we’ve done includes music videos, websites, PowerPoint presentations and some stuff for the LA Zoo. We are gearing up for major projects.

How many work in the studio?
Right now, we have about 15 artists at workstations in our current studio. Some of these will be placed on the outside, but that’s part of using strategic planning in the future to figure out how much expansion we want to do over the next five years.

Thanks to your VFX background, you have many existing relationships with the major studios. Can you talk about how that has benefitted Exceptional Minds?
We have had so much support from the studios; they really want to help us get work for the artists. We started out with Fox, then Disney and then HBO for television. Marvel Studios is one of our biggest fans. Marvel’s Victoria Alonso is a big supporter, so much so that we gave her our Ed Asner Award last June.

Once we started to do tracker marker and end title credits for Marvel, it opened doors. People say, “Well, if you work for Marvel, you could work for us.” So she has been so instrumental in our success.

What were the Fox and Marvel projects?
Our very first client was Fox and we did tracker removals for Dawn of the Planet of the Apes — that was about three years ago. Marvel happened about two years ago and our first job for them was on Avengers: Age of Ultron.

What are some of the other projects Exceptional Minds has worked on?
We worked on Doctor Strange, providing tracker marker removals and end credits. We worked on Ant-Man, Captain America: Civil War, Pete’s Dragon, Alvin & the Chipmunks: The Road Chip and X-Men: Apocalypse.

Thanks to HBO’s Holly Schiffer we did a lot of Game of Thrones work. She has also been a huge supporter of ours.

It’s remarkable how far you guys have come in a short amount of time. Can you talk about how you ended up at Exceptional Minds?
I used to be a DGA production manager/location manager and then segued into visual effects as a freelance VFX producer for all the major studios. About three years ago, my best friend Yudi Bennett, who is one of the founders of Exceptional Minds, convinced me to leave my career and come here to help set up the studio. I was also tasked with producing, scheduling and budgeting work to come into the studio. For me, personally, this has been a spiritual journey. I have had such a good career in the industry, and this is my way of giving back.

So some of these kids move on to other places?
After they have worked in the studio for about a year, or sometimes longer, I look to have them placed at an outside studio. Some of them will stay here at our studio because they may not have the social skills to work on the outside.

Five graduates have been placed so far and they are working full time at various productions studios and visual effects facilities in Los Angeles. We have also had graduates in internships at Cartoon Network and Nickelodeon.

One student is at Marvel, and others are at Stargate Studios, Mr. Wolf and New Edit. To be able to place our artists on the outside is our ultimate goal. We love to place them because it’s sort of life changing. For example, one of the first students we placed, Kevin, is at Stargate. He moved out of his parents’ apartment, he is traveling by himself to and from the studio, he is getting raises and he is moving up as a rotoscope artist.

What is the tuition like?
Students pay about 50 percent and we fundraise the other 50 percent. We also have scholarships for those that can’t afford it. We have to raise a lot of money to support the efforts of the school and studio.

Do companies donate gear?
When we first started, Adobe donated software. That’s how we were able to fund the school before the studio was up and running. Now we’re on an educational plan with them where we pay the minimum. Autodesk and The Foundry also give us discounts or try to donate licenses to us. In terms of hardware, we have been working with Melrose Mac, who is giving us discounts on computers for the school and studio.


Check out the Exceptional Minds website for more info.

Aardman creates short film, struts its stuff

By Randi Altman

All creative studios strive for creative ways to show off their talent and offerings, and London-based Aardman is no exception. Famous for its stop-motion animation work (remember the Wallace and Gromit films?), this studio now provides so much more, including live-action, CG, 2D animation and character creation.

Danny Capozzi

To help hammer home all of its offerings, and in hopes of breaking that stop-motion stereotype, Aardman has created a satirical short film called Visualize This, which depicts a conference call between a production company and an advertising agency and lets the studio show off the range of solutions it can provide for clients. Each time the fictional client suggests something, that visual pops up on the screen, whether it’s adding graffiti to a snail’s shell, texturing type or making a giant monster out of CG cardboard boxes.

We reached out to Aardman’s Danny Capozzi, who directed the short, to find out more about this project and the studio in general.

How did the idea for this short come about?
I felt that the idea of making a film based on a conference call was something that would resonate with a lot of people in any creative industry. The continuous spitballing of ideas and suggestions would make a great platform to demonstrate a lot of different styles that Aardman and I can produce. Aardman is well known for its high level of stop-motion/Claymation work, but we do CGI, live action and 2D just as well. We also create brand-new ways of animating by combining styles and techniques.

Why was now the right time to do this?
I think we are living in a time of uncertainty, and this film really expresses that. We do a lot of procrastinating. We have the luxury to change our minds, our tastes and our styles every two minutes. With so much choice of everything at our fingertips we can no longer make quick decisions and stick to them. There’s always that sense of “I love this… it’s perfect, but what if there’s something better?” I think Visualize This sums it up.

You guys work with agencies and directly with brands — how would you break that up percentage wise?
The large majority of our advertising work still comes through agencies, although we are increasingly doing one-off projects for clients who seek us out for our storytelling and characters. It’s hard to give a percentage on it because the one-offs vary so much in size that they can skew the numbers and give the wrong impression. More often than not, they aren’t advertising projects either and tend to fall into the realm of short films for organizations, which can be either charities, museums or visitor attractions, or even mass participation arts projects and events.

Can you talk about making the short? Your workflow?
When I first pitched the idea to our executive producer Heather Wright, she immediately loved the idea. After a bit of tweaking on the script and the pace of the dialogue we soon went into production. The film was achieved during some down time from commercial productions and took about 14 weeks on and off over several months.

What tools did you call on?
We used a large variety of techniques: CGI, stop-motion, 2D, live action, timelapse photography and greenscreen. Compositing and CG were done in Maya, Houdini and Nuke. We used HDRI (High Dynamic Range Images). We also used Adobe’s After Effects, Premiere, Photoshop and Illustrator, along with clay sculpting, model making and blood, sweat and, of course, some tears.

What was the most complicated shot?
The glossy black oil shot. This could have been done in CGI with a very good team of modelers and lighters and compositors, but I wanted to achieve this in-camera.

Firstly, I secretly stole some of my son Vinny’s toys away to Aardman’s model-making workshop and spray painted them black. Sorry Vinny! I hot glued the black toys onto a black board (huge mistake!), you’ll see why later. Then I cleared Asda out of cheap cooking oil — 72 litres of the greasy stuff. I mixed it with black oil paint and poured it into a casket.

We then rigged the board of toys to a motion control rig. This would act as the winch to raise the toys out of the black oily soup. Another motion control was rigged to do the panning shot with the camera attached to it. This way we get a nice up and across motion in-camera.

We lowered the board of toys into the black soup and the cables that held it up sagged and released the board of toys. Noooooo! I watched them sink. Then to add insult to injury, the hot glue gave way and the toys floated up. How do you glue something to an oily surface?? You don’t! You use screws. After much tinkering it was ready to be submerged again. After a couple of passes, it worked. I just love the way the natural glossy highlights move over the objects. All well worth doing in-camera for real, and so much more rewarding.

What sort of response has it received?
I’m delighted. It has really travelled since we launched a couple of weeks ago, and it’s fantastic to keep seeing it pop up in my news feed on various social media sites! I think we are at over 20,000 YouTube views and 40,000-odd views on Facebook.

Behind the Title: Artist/Creative Director Barton Damer

NAME: Barton Damer

COMPANY: Dallas-based Already Been Chewed

CAN YOU DESCRIBE YOUR COMPANY?
Already Been Chewed is a boutique studio that I founded in 2010. We have created a variety of design, motion graphics and 3D animated content for iconic brands, including Nike, Vans, Star Wars, Harry Potter and Marvel Comics. Check out our motion reel.

WHAT’S YOUR JOB TITLE?
Owner/Founding Artist/Creative Director

WHAT DOES THAT ENTAIL?
My job is to set the vibe for the types of projects, clients and style of work we create. I’m typically developing the creative, working with our chief strategy officer to land projects and then directing the team to execute the creative for the project.

WHAT WOULD SURPRISE PEOPLE ABOUT WHAT FALLS UNDER THAT TITLE?
When you launch out on your own, it’s surprising how much non-creative work there is to do. It’s no longer good enough to be great at what you do (being an artist). Now you have to be excellent with communication skills, people skills, business, organization, marketing, sales and leadership skills. It’s surprising how much you have to juggle in the course of a single day and still hit deadlines.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Developing a solution that will not only meet the client’s needs but also push us forward as a studio is always exciting. My favorite part of any job is making sure it looks amazing. That’s my passion. The way it animates is secondary. If it doesn’t look good to begin with, it won’t look better just because you start animating it.

WHAT’S YOUR LEAST FAVORITE?
Dealing with clients who stress me out for various reasons — whether it’s scope creep, not paying a bill or not realizing what they agreed to when they signed a contract. We get a lot of clients who will sign a contract without even registering its terms, and it’s always stressful when you have to remind them what they signed. Fortunately, I have a team of great people who help relieve that stress for me, but it can still be stressful knowing that they are fighting those battles for the company.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Night time! That’s when the freaks come out! I do my best creative work at night. No doubt!

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Real estate investing/fixing up/flipping. I like all aspects of designing, including interior design. I’ve designed and renovated three different studio spaces for Already Been Chewed over the last seven years.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I blew out my ACL and tore my meniscus while skateboarding. I wanted to stay involved with the friends I skated with, knowing that surgery and rehab would keep me off the board for at least a full year. During that time, I began filming and editing skate videos of my friends. I quickly discovered that logging and capturing footage was my least favorite part, but I loved adding graphics and motion graphics to the skate videos. I then began to learn Adobe After Effects and Maxon Cinema 4D.

At this time I was already a full-time graphic designer, but didn’t really know what motion graphics were. I had been working professionally for about five or six years before making the switch from print design to animation. That was after dabbling in Flash animations and discovering I didn’t want to code websites (this was around 2003-2004).

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We recently worked with Nike on various activations for the Super Bowl and March Madness, and we got to create motion graphics for storefronts as part of the Equality campaign they launched during Black History Month. It was cool to see our work in the flagship Niketown NYC store while visiting New York a few weeks ago.

We are currently working on a variety of projects for Nike, Malibu Boats, Training Mask, Marvel and DC Comics licensed product releases, as well as investing heavily in GPUs and creating 360 animated videos for VR content.

HOW DID THE NIKE EQUALITY MOTION GRAPHICS CAMPAIGN COME TO FRUITION?
Nike had been working on a variety of animated concepts to bring the campaign to life for storefronts. They had a library of animation styles that had already been done, but felt they weren’t working. Our job was to come up with something that would fit the campaign’s style.

We recreated 16 athlete portraits in 3D so that we could cast light and shadows across their faces to slowly reveal them from black, and we created a seamless video loop transitioning between the athlete portraits and various quotes about equality.

CAN YOU DESCRIBE THE MOTION GRAPHICS SCOPE OF THE NIKE EQUALITY CAMPAIGN, AND THE SOFTWARE USED?
The video we created was used in various Nike flagship stores — Niketown NYC, Soho and LA, to name a few. We reformatted the video to work in a variety of sizes. We were able to see the videos at Niketown NYC where it was on the front of the window displays. It was also used on large LED walls on the interior as well as a four-story vertical screen in store.

We created the portrait technique on all 16 athletes using Cinema 4D and Octane. The remainder of the video was animated in After Effects.

The portraits were sculpted in Cinema 4D, and we used camera projection to accurately project real photos of the athletes onto the 3D portraits. This allowed us to keep the photos Nike provided 100 percent accurate, while still being able to relight them and cast shadows to reveal the faces up from black.
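Camera projection of this sort can be sketched as a pinhole mapping: each vertex of the sculpted portrait is transformed into the photo camera’s space, projected onto the image plane, and the resulting normalized coordinates become a texture lookup into the original photo. A minimal illustration only; the function name, matrix convention and sensor size are assumptions, not the studio’s actual setup:

```python
import numpy as np

def camera_project_uv(point, view_matrix, focal_mm, sensor_w_mm=36.0, aspect=1.0):
    """Project a 3D world-space point into a photo camera's image plane
    and return normalized (u, v) texture coordinates in [0, 1]."""
    # Transform the point into camera space (camera looks down -Z here).
    p = view_matrix @ np.append(point, 1.0)
    x, y, z = p[0], p[1], p[2]
    if z >= 0:
        return None  # behind the camera: no valid projection
    # Pinhole projection onto the sensor plane, then remap to [0, 1] UVs.
    u = (focal_mm * x) / (-z) / sensor_w_mm + 0.5
    v = (focal_mm * y) / (-z) / (sensor_w_mm / aspect) + 0.5
    return (u, v)
```

With the camera at the origin looking down -Z, a point centered in front of the lens lands at UV (0.5, 0.5); offsets slide the lookup toward the frame edges, which is what lets relit 3D geometry keep sampling the original photograph.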

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
That’s a tough one. Usually, it’s whatever the latest project is. We’re blessed to be working on some really fun projects. That being said… working on the Vans 50th anniversary campaign for the Era shoe is pretty epic! Especially since I am a longtime skateboarder.

Our work was used globally on everything from POP displays to storefronts to an interactive website takeover and 3D animated spots for broadcast. It was amazing to see it being used across so many mediums.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
A computer, my iPhone and speakers!

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I’m very active on Instagram and Facebook. I chose to say “no” to Snapchat in hopes that it will go away so that I don’t have to worry about one more thing (he laughs), and Twitter is pretty much dead for me these days. I log in once a month and see if I have any notifications. I also use Behance and LinkedIn a lot, and Dribbble once in a blue moon.

DO YOU LISTEN TO MUSIC WHILE YOU WORK? IF SO, WHAT KIND?
My 25-year-old self would cyber bully me for saying this but soft Drake is “Too Good” these days. Loving Travis Scott and Migos among a long list of others.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
First I bought a swimming pool to help me get away from the computer/emails and swim laps with the kids. That worked for a while, but then I bought a convertible BMW to try to ease the tension and enjoy the wind through my hair. Once that wore off and the stress came back, I bought a puppy. Then I started doing yoga. A year later I bought another puppy.

Industry vet Alex Moulton joins NYC’s Trollbäck+Company

New York’s Trollbäck+Company has hired Alex Moulton as chief creative officer. In the role, he is tasked with helping businesses and organizations develop sustainable brands through design-driven strategy and mixed media.

Moulton, who joins the agency from Vice Media, was recently at the helm of NBC Universo’s highly regarded brand refresh, as well as show packaging for ESPN’s The Undefeated In-Depth: Serena With Common.

“Alex brings an invaluable perspective to Trollbäck+Company as both an artist and entrepreneur,” says founder Jakob Trollbäck. “In his short time here, he has already reinvigorated the collective creative energy of our company. This clearly stems from his constant quest to dig deeper as a creative problem solver, which falls perfectly into our philosophy of ‘Discard everything that means nothing.’”

Says Moulton, “My vision for Trollbäck+Company is very clear: design culturally relevant, sustainable brands — from initial strategy and positioning to content and experiential activations — with a nimble and holistic approach that makes us the ultimate partner for CMOs who care about designing an enduring brand and bringing it to market with integrity.”

Prior to Trollbäck+Company, as senior director, creative and content at Vice, Moulton helped launch digital content channel Live Nation TV (LNTV) — a joint venture for which he led brand creative, content development, production and partnership initiatives.

As executive creative director at advertising agency Eyeball, Moulton led product launches, rebrands and campaigns for major brands, including Amazon, New York Public Radio, Wildlife Conservation Society’s New York Aquarium, A&E, CMT, Disney, E!, Nickelodeon, Oxygen, Ovation and VH1.

An early adopter of audio branding, Moulton founded his own branding agency and record label, Expansion Team, in 2002. As chief creative officer of the company, he created the sonic identities of Aetna, Amazon Studios/Originals, Boeing, JetBlue and Rovi, as well as more than 15 TV networks, including CNN International, Discovery, PBS, Universal and Comedy Central.

A DJ, composer and speaker about topics that combine music and design, Moulton has been featured in Billboard, V Man, Electronic Musician and XLR8R and has performed at The Guggenheim.

Behind the Title: Director/Designer Ash Thorp

NAME: Ash Thorp (@ashthorp)

COMPANY: ALT Creative, Inc.

CAN YOU DESCRIBE YOUR COMPANY?
ALT Creative is co-owned by my wife Monica and myself. She helps coordinate and handle the company operations, while I manage the creative needs of clients. We work with a select list of outside contractors as needed, mainly depending on the size and scale of the project.

WHAT’S YOUR JOB TITLE?
I fulfill many roles, but if I had to summarize, I would say I am most commonly hired as a director or designer.

WHAT DOES THAT ENTAIL?
Directing is about facilitating the team to achieve the best outcome on a given project. My ability to communicate with and engage my team toward a visionary goal is my top priority as a director. As a designer, I look at my role as an individual problem solver. My goal is to find the root of what is needed or requested and solve it through design.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I believe that directing is more about communication than about how well you can design, so many would be surprised by the amount of time and energy spent outside of “creative” tasks, such as emails, critiques, listening, observation and deep analysis.

WHAT’S YOUR FAVORITE PART OF THE JOB?
As a director, I love the freedom to expose the ideas in my mind to others and work closely with them to bring them to life. It’s immensely liberating and rewarding.

WHAT’S YOUR LEAST FAVORITE?
Redundancy often eats away at my ambition. Explaining my vision repeatedly to numerous teammates and partners can be mentally taxing at times.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
The late evening because that is often when I have my mind to myself and am free of outside world distractions and noise.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Nothing. I strongly believe that this is what I was put on earth to do. This is the path I have been focused on since I was a child.

SO YOU KNEW EARLY ON THIS WOULD BE YOUR PATH?
I grew up with a very artistic family; my mother’s side of the family displays creative traits in one medium or another. They were and still are all very deeply committed to supporting me in my creative endeavors. Based on my upbringing, it was a natural progression to also be a creative person.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
As for client projects that are publicly released, I most recently worked on the Assassin’s Creed feature film and Call of Duty: Infinite Warfare video game.

For my own projects, I designed and co-directed a concept short for Lost Boy with Anthony Scott Burns. In addition, I released two personal projects: None is a short expression film devised to capture the tone and mood of finding oneself in a city of darkness, and Epoch is an 11-minute space odyssey that merges my deep love of space and design.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
With Epoch being the most recently released project, I have received so many kind and congratulatory correspondences from viewers about how much they love the film. I am very proud of all the hard work and internal thought, development and personal growth it took to build this project with Chris Bjerre. I believe Epoch shows who I truly am, and I consider it one of the best projects of my personal career to date.

WHAT SOFTWARE DID YOU RELY ON FOR EPOCH?
We used a pretty wide spectrum of tools. Our general production tool kit consisted of Adobe Photoshop for stills, texture building and 2D image editing; Adobe Bridge for reviewing frames and keeping a clear vision of the project; Adobe Premiere for editing everything from the beginning animatic to the final film; and, of course, our main staple in 3D was Maxon Cinema 4D, which we used to construct all of the final scenes, rendering everything with Octane Renderer.

We used Cinema 4D for everything — from building shots for the rough animatic to compiling entire scenes and shots for final render. We used it to animate the planets, moons, orbits, lights and the Vessel. It really is a rock-solid piece of software; I couldn’t imagine trying to build a film like Epoch without it. It allowed us to capture the animations, look, lighting and shots seamlessly from the project’s inception.

WHAT WAS YOUR INSPIRATION FOR THIS WORK?
I am personally inspired by so many things. Epoch was a personal tribute to Stanley Kubrick’s 2001: A Space Odyssey, Alien, Carl Sagan, my love of space and space travel, classical sci-fi art and literature, and my personal love of graphic design all combined into one. We put tremendous effort into Epoch to pay proper homage to these things, yet also invite a new audience to experience something uniquely new. We hope you all enjoyed it!

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
The Internet, computers and physical traveling devices (like cars, planes).

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I try to limit my time spent on social media, but I have two Facebook accounts, plus Instagram, Twitter and a Behance account.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
I frequently listen to music while I work, as it helps me fall into a deep, focused state of mind. The type of music varies, as some genres work better than others; they trigger different emotions for different tasks. When I am in deep thought, I listen to composers whose work has no lyrics that might pull away my focus. When I am doing ordinary tasks or busy work, I listen to anything from heavy metal to drum and bass. The range of music really varies for me, as it’s often based on my current mood. Music is a big part of my workday and my life.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I actually let the stress in and let it shape my decision-making. I feel that if I run away from it or unwind my mind, it takes double the effort to get back into work. I embrace it as part of the high-consumption industry in which I have chosen to work. It’s not always ideal and is often very demanding, but I often let it be the spark that fires my work.

Digging Deep: Helping launch the OnePlus 3T phone

By Jonathan Notaro

It’s always a big deal when a company drops a new smartphone. The years of planning and development culminate in a single moment, and the consumers are left to judge whether or not the new device is worthy of praise and — more importantly — worthy of purchase.

For bigger companies like Google and Apple, a misstep with a new phone release can often amount to nothing more than a hiccup in their operations. But for upstarts like OnePlus, it’s a make-or-break event. When we got the call at Brand New School to develop a launch spot for the company’s 3T smartphone, along with the agency Carrot Creative, we didn’t hesitate to dive in.

The Idea
OnePlus has built a solid foundation of loyal fans with their past releases, but with the 3T they saw the chance to build their fanbase out to more everyday consumers who may not be as tech-obsessed as their existing fans. It is an entirely new offering and, as creatives, the chance to present such a technologically advanced device to a new, wider audience was an opportunity we couldn’t pass up.

Carrot wanted to create something for OnePlus that gave viewers a unique sense of what the phone was capable of — to capture the energy, momentum and human element of the OnePlus 3T. The 3T is meant to be an extension of its owner, so this spot was designed to explore the parallels between man and machine. This approach runs the risk of being cliché, so we opted for futuristic, abstract imagery that gets the point across effectively without being too heavy-handed. We focused on representing the features that set the phone apart from other devices in this market, such as its powerful processor and its memory and storage capabilities.

How We Did It
Inspired by the brooding, alluring mood of the title sequence for The Girl With the Dragon Tattoo, we set out to meld lavish shots of the OnePlus 3T with robotically infused human anatomy, drawing up initial designs in Autodesk Maya and Maxon Cinema 4D.

When the project moved into the animation phase, we stuck with Maya and used Nuke for compositing. Type designs were done in Adobe Illustrator and animated in Adobe After Effects.

Collaboration is always a concern when there are this many different scenes and moving parts, but this project was particularly challenging. With a CG-heavy production like this, there’s no room for error, so we had to make sure that all of the different artists were on the same page every step of the way.

Our CG supervisor Russ Wootton and technical director Dan Bradham led the way and compiled a crack team to make this thing happen. I may be biased, but they continue to amaze me with what they can accomplish.

The Final Product
The project was a two-month production process. Along the way, we found that working with Carrot and the brand was a breath of fresh air, as they were very knowledgeable and amenable to what we had in mind. They afforded us the creative space to take a few risks and explore some more abstract, avant-garde imagery that I felt represented what they were looking to achieve with this project.

In the end, we created something that I hope cuts through the crowded landscape of product videos and appeals to both the brand’s diehard-tech-savvy following and consumers who may not be as deep into that world. (Check it out here.)

Fueled by the goal of conveying the underlying message of “raw power” while balancing the scales of artificial and human elements, we created something I believe is beautiful, compelling and completely unique. Ultimately though, the biggest highlight was seeing the positive reaction the piece received when it was released. Normally, reaction from consumers would be centered solely on the product, but to have the video receive praise from a very discerning audience was truly satisfying.


Jonathan Notaro is a director at Brand New School, a bicoastal studio that provides VFX, animation and branding. 

Nickelodeon gets new on-air brand refresh

The children’s network Nickelodeon has debuted an all-new brand refresh of its on-air and online look and feel. Created with animation, design, global branding and creative agency Superestudio, based in Buenos Aires, Argentina, Nick’s new look features an array of kids interacting with the real world and Nick’s characters in live-action and graphic environments.

The new look consists of almost 300 deliverables, including bumpers, IDs, promo toolkits and graphic developments that first rolled out across the network’s US linear platform, followed by online, social media and off-channel. Updated elements for the network’s international channels will follow.

“We really wanted to highlight how much surprise and fun are parts of kids’ lives, so we took as our inspiration the surreal nature of GIFs, memes and emoticons and created an entire new visual vocabulary,” says Michael Waldron, SVP, creative director art and design for Nickelodeon Group and Nick@Nite. “Using a mix of real kids and on-air talent, the refresh looks through the lens of how kids see things — the unpredictable, extraordinary and joyful nature of a child’s imagination. Superestudio was the right company for this refresh because they use a great mix of different techniques, and they brought a fresh viewpoint that had just the right amount of quirk and whimsy.”

Nickelodeon’s new look was created by combining real kids with 2D and 3D graphics, turning imaginative reinterpretations of Nickelodeon’s properties and characters into real-world playgrounds for kids to bring to life, rearrange and redesign. From turning SpongeBob’s face into a tongue-twisted fun zone to kids rearranging and rebuilding Lincoln Loud from The Loud House, everything from the overhead and docu-style camera angles to the seamless blend of real-world and tactile elements serves that playful point of view.

Nickelodeon’s classic orange logo is now set against an updated color palette of bright tones, including purple, light blue, lime and cream.

According to Superestudio executive creative director Ezequiel Rormoser, “The software that we used is Adobe After Effects and Maxon Cinema 4D. I think the most interesting thing is how we mixed live action with graphics, not in terms of technical complexity, but in the way they interact in an unexpected way.”

Flavor Detroit welcomes VFX artist/designer Scott Stephens

Twenty-year industry veteran Scott Stephens has joined Flavor Detroit as senior VFX artist/designer. Previously the lead designer at Postique, Stephens had also been a key part of post boutique Section 8 as co-founder and lead designer since its launch in 2001.

Known for his work with top brands and directors on major commercial campaigns for Blue Cross Blue Shield (BCBS), Chrysler, Expedia, Food Network, Mazda and Six Flags, to name but a few, Stephens also brings vast experience creating content that maximizes unique environments and screens of all sizes.

Recent projects include the Amazon Kindle release in Times Square, the Ford Focus theatrical release for the Electric Music Festival, BCBS media for the Pandora app, Buick’s multi-screen auto show installations and the Mount St. Helens installation for the National Park Service.

Lost in Time game show embraces ‘Interactive Mixed Reality’

By Daniel Restuccio

The Future Group — which has partnered with Fremantle Media, Ross Video and Epic Games — has created a new super-agile entertainment platform that blends linear television and game technology into a hybrid format called “Interactive Mixed Reality” (IMR).

The brainchild of Bård Anders Kasin, this innovative content deployment medium generated a storm of industry buzz at NAB 2016, and the company’s first production, Lost in Time — a weekly primetime game show — is scheduled to air this month on Norwegian television.

The Idea
The idea originated more than 13 years ago in Los Angeles. In 2003, at age 22, Kasin, a self-taught multimedia artist from Notodden, Norway, sent his CV and a bunch of media projects to Warner Bros. in Burbank, California, in hopes of working on The Matrix. They liked it. His interview was on a Wednesday and by Friday he had a job as a technical director.

Kasin immersed himself in the cutting-edge movie revolution that was The Matrix franchise. The Wachowskis’ visionary production was a masterful inspiration, featuring a compelling sci-fi action story, Oscar-winning editing, breakthrough visual effects (“bullet time”) and an expanded media universe that included video games and an anime-style short, The Animatrix. The Matrix Reloaded and The Matrix Revolutions were shot at the same time, along with more than an hour of footage designed specifically for the video game. The Matrix Online, an Internet gaming platform, was a direct sequel to The Matrix Revolutions.

L-R: Bård Anders Kasin and Jens Petter Høili.

Fast forward to 2013, when Kasin connected with software engineer and serial entrepreneur Jens Petter Høili, founder of EasyPark and Fairchance. “There was this producer I knew in Norway,” explains Kasin, “who runs this thing called the Artists’ Gala charity. He called and said, ‘There’s this guy you should meet. I think you’ll really hit it off.’” Kasin met Høili, and the two had lunch and discussed the projects they were each working on. “We both immediately felt there was a connection,” recalls Kasin. No persuading was necessary. “We thought that if we combined forces we were going to get something that’s truly amazing.”

That meeting of the minds led to the merging of their companies and the formation of The Future Group. The mandate of the Oslo-based company is to revolutionize the television medium by combining linear TV production with cutting-edge visual effects, interactive gameplay, home viewer participation and e-commerce. Their IMR concept ditches the limiting individual virtual reality (VR) headset, but keeps the idea of creating content that is a multi-level, intricate and immersive experience.

Lost in Time
Fast forward again, this time to 2014. Through another mutual friend, The Future Group formed an alliance with Fremantle Media. Fremantle, a global media company, has produced some of the highest-rated and longest-running shows in the world, and is responsible for top international entertainment brands such as Got Talent, Idol and The X Factor.

Kasin started developing the first IMR prototype. At this point, the Lost in Time production had expanded to include Ross Video and Epic Games. Ross Video is a broadcast technology innovator and Epic Games is a video game producer and the inventor of the Unreal game engine. The Future Group, in collaboration with Ross Video, engineered the production technology and developed a broadcast-compatible version of the Unreal game engine called Frontier, shown at NAB 2016, to generate high-resolution, realtime graphics used in the production.

On January 15, 2015, the first prototype was shown. When Fremantle saw the prototype, they were amazed. They went directly to stage two, moving to the larger stages at Dagslys Studios. “Lost in Time has been the driver for the technology,” explains Kasin. “We’re a very content-driven company. We’ve used that content to drive the development of the platform and the technology, because there’s nothing better than having actual content to set the requirements for the technology rather than building technology for general purposes.”

In Lost in Time, three studio contestants are set loose on a greenscreen stage and perform timed, physical game challenges. The audience, which could be watching at home or on a mobile device, sees the contestant seamlessly blended into a virtual environment built out of realtime computer graphics. The environments are themed as western, ice age, medieval times and Jurassic period sets (among others) with interactive real props.

The audience can watch the contestants play the game or participate in the contest as players on their mobile device at home, riding the train or literally anywhere. They can play along or against contestants, performing customized versions of the scripted challenges in the TV show. The mobile content uses graphics generated from the same Unreal engine that created the television version.

“It’s a platform,” reports partner Høili, referring to the technology behind Lost in Time. A business model is a way you make money, notes tech blogger Jonathan Clarks, and a platform is something that generates business models. So while Lost in Time is a specific game show with specific rules, built on television technology, it’s really a business technology framework where multiple kinds of interactive content could be generated. Lost in Time is like the Unreal engine itself, software that can be used to create games, VR experiences and more, limited only by the imagination of the content creator. What The Future Group has done is create a high-tech kitchen from which any kind of cuisine can be cooked up.

Soundstages and Gear
Lost in Time is produced on two greenscreen soundstages at Dagslys Studios in Oslo. The main “gameplay set” takes up all of Studio 1 (5,393 square feet) and the “base station set” is on Studio 3 (1,345 square feet). Over 150 liters (40 gallons) of ProCyc greenscreen paint was used to cover both studios.

Ross Video, in collaboration with The Future Group, devised an integrated technology of hardware and software that supports the Lost in Time production platform. This platform consists of custom cameras, lenses, tracking, control, delay, chroma key, rendering, greenscreen, lighting and switcher technology. This system includes the new Frontier hardware, introduced at NAB 2016, which runs the Unreal game engine 3D graphics software.

Eight Sony HDC-2500 cameras running HZC-UG444 software are used for the production. Five are deployed on the “gameplay set.” One camera rides on a technocrane, two are on manual pedestal dollies and one is on Steadicam. For fast-action tracking shots, another camera sits on the Furio RC dolly that rides on a straight track that runs the 90-foot length of the studio. The Furio RC pedestal, controlled by SmartShell, guarantees smooth movement in virtual environments and uses absolute encoders on all axes to send complete 3D tracking data into the Unreal engine.

There is also one Sony HDC-P1 camera that is used as a static, center stage, ceiling cam flying 30 feet above the gameplay set. There are three cameras in the home base set, two on Furio Robo dollies and one on a technocrane. In the gameplay set, all cameras (except the ceiling cam) are tracked with the SolidTrack IR markerless tracking system.

All filming is done at 1080p25 and output as RGB 444 via SDI. They use a custom LUT on the cameras to avoid clipping and to preserve an expanded dynamic range for post work. All nine camera ISOs, the separate camera “clean feeds,” are recorded with a “flat” LUT in RGB 444. For all other video streams, including keying and compositing, they use LUT boxes to invert the signal back to Rec 709.
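The principle behind a “flat” recording LUT can be illustrated with a toy log curve: the encode compresses highlights so bright values survive recording without clipping, and the LUT boxes apply the exact inverse to restore normal-contrast levels for keying and monitoring. This is only an illustrative sketch, not the production’s actual (custom, undisclosed) LUT:

```python
import math

def log_encode(x, a=8.0):
    """Toy 'flat' log curve: lifts shadows and compresses highlights so a
    wide dynamic range fits the recorded signal (illustrative only)."""
    return math.log1p(a * x) / math.log1p(a)

def log_decode(y, a=8.0):
    """Exact inverse of log_encode, as a LUT box would apply it to bring
    the flat signal back to normal-contrast levels."""
    return math.expm1(y * math.log1p(a)) / a
```

The key property the workflow relies on is that the pair round-trips losslessly: decoding an encoded value returns the original, so recording flat costs nothing as long as the inverse LUT is applied downstream.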

Barnfind provided the fiber optic network infrastructure that links all the systems. Ross Video Dashboard controls the BarnOne frames as well as the router, Carbonite switchers, Frontier graphics system and robotic cameras.

A genlock signal distributed via OpenGear syncs all the gear to a master clock. The Future Group added proprietary code to Unreal so the render engine can genlock, receive and record linear timecode (LTC) and output video via SDI in all industry standard formats. They also added additional functionality to the Unreal engine to control lights via DMX, send and receive GPI signals, communicate with custom sensors, buttons, switches and wheels used for interaction with the games and controlling motion simulation equipment.

In order for the “virtual cameras” in the graphics systems and the real cameras viewing the real elements to have the exact same perspectives, an “encoded” camera lens is required that provides the lens focal length (zoom) and focus data. In addition, the virtual lens field of view (FOV) must be properly calibrated to match the FOV of the real lens. Full servo digital lenses with 16-bit encoders are needed for virtual productions. Lost in Time uses three Canon lenses with these specifications: Canon Hj14ex4.3B-IASE, Canon Hj22ex7.6B-IASE-A and Canon Kj17ex7.7B-IASE-A.
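For an ideal pinhole model, the horizontal FOV follows directly from focal length and sensor width, which is why the encoder’s zoom data can drive the virtual camera. A first-order sketch (the 9.59mm default approximates a 2/3-inch broadcast sensor; real calibration also has to account for lens distortion and the focus data, so treat this as illustrative):

```python
import math

def horizontal_fov_deg(focal_mm, sensor_width_mm=9.59):
    """Horizontal field of view of an ideal pinhole lens.

    sensor_width_mm defaults to a nominal 2/3-inch broadcast sensor;
    a production system would calibrate against the actual lens."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))
```

At the 7.7mm wide end of a Kj17ex7.7-class lens, this gives roughly 64 degrees; zooming in narrows the angle sharply, and the virtual camera must track that change frame by frame.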

The Lost in Time camera feeds are routed to the Carbonite family hardware: Ultrachrome HR, Carbonite production frame and Carbonite production switcher. Carbonite Ultrachrome HR is a stand-alone multichannel chroma key processor based on the Carbonite Black processing engine. On Lost in Time, the Ultrachrome switcher accepts the Sony camera RGB 444 signal and uses high-resolution chroma keyers, each with full control of delay management, fill color temperature for scene matching, foreground key and fill, and internal storage for animated graphics.

Isolated feeds of all nine cameras are recorded, plus two quad-splits with the composited material and the program feed. Metus Ingest, a proprietary hardware solution from The Future Group, was used for all video recording. It can simultaneously capture and record up to six HD channels of video and audio from multiple devices on a single platform.

Post Production
While the system is capable of being broadcast live, they decided not to go live for the debut. Instead they are only doing a modest amount of post to retain the live feel. That said, the potential of the post workflow on Lost in Time arguably sets a whole new post paradigm. “Post allows us to continue to develop the virtual worlds for a longer amount of time,” says Kasin. “This gives us more flexibility in terms of storytelling. We’re always trying to push the boundaries with the creative content. How we tell the story of the different challenges.”

All camera metadata, including position, rotation, lens data, etc., and all game interaction, were recorded in the Unreal engine with a proprietary system. This allowed graphics playback as a recorded session later. This also let the editors change any part of the graphics non-destructively. They could choose to replace 3D models or textures or in post change the tracking or point-of-view of any of the virtual cameras as well as add cameras for more virtual “coverage.”
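Conceptually, this kind of non-destructive replay just requires storing the full virtual-camera state per frame, then reading the take back at re-render time, optionally with overrides. A minimal sketch (class and field names are hypothetical; The Future Group’s recorder is proprietary):

```python
class CameraTake:
    """Per-frame virtual-camera metadata recorder/replayer
    (illustrative only, not the production system)."""

    def __init__(self):
        self._frames = {}

    def record(self, frame, position, rotation, focal_mm):
        # Store everything needed to re-create the virtual camera later.
        self._frames[frame] = {"pos": position, "rot": rotation, "focal": focal_mm}

    def replay(self, frame, focal_override_mm=None):
        # Non-destructive: an override changes the re-render, not the take.
        state = dict(self._frames[frame])
        if focal_override_mm is not None:
            state["focal"] = focal_override_mm
        return state
```

Because the recorded take is never mutated, editors can re-render the same session with a different lens, point of view or even extra cameras, exactly as described above.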

Lost in Time episodes were edited as a multicam project, based on the program feed, in Adobe Premiere CC. They have a multi-terabyte storage solution from Pixit Media running Tiger Technology’s workflow manager. “The EDL from the final edit is fed through a custom system, which then builds a timeline in Unreal to output EXR sequences for a final composite.”
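The custom EDL-to-Unreal system isn’t public, but the first step of any such tool is parsing the edit decision list itself. A minimal sketch that reads CMX3600-style event lines (the reel names and timecodes in the sample are invented):

```python
import re

# A CMX3600 EDL event line: event number, reel, track, cut type,
# then source in/out and record in/out timecodes.
EVENT = re.compile(
    r"^(\d+)\s+(\S+)\s+(\S+)\s+C\s+"
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})\s+"
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})"
)

def parse_edl(text):
    events = []
    for line in text.splitlines():
        m = EVENT.match(line.strip())
        if m:
            num, reel, track, src_in, src_out, rec_in, rec_out = m.groups()
            events.append({"reel": reel, "src_in": src_in, "src_out": src_out,
                           "rec_in": rec_in, "rec_out": rec_out})
    return events

sample = """TITLE: LOST_IN_TIME_EP01
001  CAM_A  V  C  01:00:10:00 01:00:14:12 00:00:00:00 00:00:04:12
002  CAM_B  V  C  02:03:00:05 02:03:02:00 00:00:04:12 00:00:06:07
"""
for ev in parse_edl(sample):
    print(ev["reel"], ev["rec_in"])
```

From events like these, a timeline of virtual-camera ranges can be rebuilt in Unreal and rendered out as EXR sequences for the final composite.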

That’s it for now, but be sure to visit this space again to see part two of our coverage of The Future Group’s Lost in Time. Our next story will cover the real and virtual lighting systems, the SolidTrack IR tracking system and the backend component, and will include interviews with Epic Games’ Kim Libreri, about Unreal engine development and integration, and with a Lost in Time episode editor.


Daniel Restuccio, who traveled to Oslo for this piece, is a writer, producer and teacher. He is currently multimedia department chairperson at California Lutheran University in Thousand Oaks.

Alkemy X adds creative director Geoff Bailey

Alkemy X, which offers live-action production, design, high-end VFX and post services, has added creative director Geoff Bailey to its New York office, which has now almost doubled in staff. The expansion comes after Alkemy X served as the exclusive visual effects company on M. Night Shyamalan’s Split.

Alkemy X and Bailey started collaborating in 2016 when the two worked together on a 360 experiential film project for EY (formerly Ernst & Young) and brand consultancy BrandPie. Bailey was creative director on the project, which was commissioned for EY’s Strategic Growth Forum held in Palm Desert, California, last November. The project featured Alkemy X’s live-action, VFX, animation, design and editorial work.

“I enjoy creating at the convergence of many disciplines and look forward to leveraging my branding knowledge to support Alkemy X’s hybrid creation pipeline — from ideation and strategy, to live-action production, design and VFX,” says Bailey.

Most recently, Bailey was a creative director at Loyalkaspar, where he creatively led the launch campaign for A&E’s Bates Motel. He also served as creative director/designer on the title sequence for the American launch of A&E’s The Returned, and as CD/director on a series of launch spots for the debut of Vice Media’s TV channel Viceland.

Prior to that, Bailey freelanced for several New York design firms as a director, designer and animator. His freelance résumé includes work for HBO, Showtime, Hulu, ABC, Cinemax, HP, Jay-Z, U2, Travel Channel, Comedy Central, CourtTV, Fuse, AMC Networks, Kiehl’s and many more. Bailey holds an MFA in film production from Columbia University.

Nutmeg adds Broadway Video’s former design group

New York City-based Nutmeg, a creative marketing and post production house, has acquired Broadway Video’s design team formerly known as FAC5. Under the Nutmeg brand, they are now known as NTMG Design.

The team of four — executive creative producer Doug LeBow, executive creative director Fred Salkind, creative director David Rogers and art director Karolina Dawson — is an Emmy, Telly and PromaxBDA award-winning creative collective working on design across multiple media platforms. Existing clients that could benefit from the new services include broadcast networks, cable channels and brands.

With services that include main titles and show packaging, experiential and event design, promotions and image campaigns, the group has worked with a variety of clients on a wide range of projects, including Nickelodeon HALO Awards; Nickelodeon Kids’ Choice Awards; The Emmys for Don Mischer Productions; Indy 500 100th Anniversary for ESPN; HBO’s Rock and Roll Hall of Fame Induction Ceremony for Line-by-Line Productions; Thursday Night Football and Sunday Night Football tune-in promo packaging for CBS Sports; AT&T Concert Series for iHeart Media; The Great Human Race for National Geographic Channel; The Peabody Awards for Den of Thieves and others.

“Nutmeg has always embraced growth,” says Nutmeg executive producer Laura Vick. “As our clients and the marketplace shift to engage end users, the addition of a full-service design team allows us to offer all aspects of content creation under one roof. We can now assist at the inception of an idea to help create complete visual experiences — show opens, trade shows, corporate interiors or digital billboards.”

“We look at these new design capabilities as both a new frontier unto itself, and as yet another component of what we’re already doing — telling compelling stories,” says Nutmeg executive creative director Dave Rogan. “Nothing at Nutmeg is created in a vacuum, so these new areas of design crossing over into an interactive web environment, for example, is natural.”

The new NTMG Design team will be working within Nutmeg’s midtown location. Their suite contains five workstations supported by a 10-box renderfarm, Maxon Cinema 4D, Adobe After Effects, one seat of Flame, Assimilate Scratch access for color and an insert stage for practical shooting. It is further supported by 28TB of Infortrend storage.

While acknowledging tools are important, executive creative director Fred Salkind says, “Sometimes when I’m asked what we work with, I say Scotch tape and scissors, because it’s the idea that puts the tools to work, not the other way around.”

Main Photo by Eljay Aguillo. L-R: Fred Salkind, David Rogers, Doug LeBow and Karolina Dawson.

Review: Maxon Cinema 4D Studio Release 18

By Brady Betzel

Each year I get to test out some of the latest and greatest software and hardware releases our industry has to offer. One of my favorites — and most challenging — is Maxon’s Cinema 4D. I say challenging because while I love Cinema 4D, I don’t use it every day. So, in order to test it thoroughly, I watched tutorials on Cineversity to brush up on what I forgot and what’s new. Even though I don’t use it every day, I do love it.

I’ve reviewed Cinema 4D Release 15 through R18. I started using the product when I was studying at California Lutheran University in Thousand Oaks, California, which coincidentally is about 10 minutes from Maxon’s Newbury Park office.

Voronoi Fracture

Each version update has been packed full of remarkable additions and updates. From the grass generator in R15 and the addition of the Reflectance channel in R16 to the lens distortion tools in R17 and the multitude of updates in R18, Cinema 4D keeps on cranking out the hits. I say multitude because there are a ton of updates packed into the latest Cinema 4D Release 18. You can check out a complete list of them, as well as comparisons between the Cinema 4D Studio, Visualize, Broadcast, Prime, BodyPaint 3D and Lite Release 18 versions, on the Maxon site.

For this review, I’m going to touch on three of what I think are the most compelling updates in Release 18: the new Voronoi Fracture, Thin Film Shader and the Push Apart Effector. Yes, I know there are a bazillion other great updates to Cinema 4D R18 — such as Weight Painting, new Knife Tools, Inverse Ambient Occlusion, the ability to save cache files externally and many more — but I’m going to stick to the features that I think stand out.

Keep in mind that I am using Cinema 4D Studio R18 for this review, so if you don’t have Studio, some of the features might not be available in your version. For instance, I am going to touch on some of the MoGraph toolset updates, and those are only inside the Studio and Broadcast versions. Finally, while you should use a super-powerful workstation to get the smoothest and most robust experience when using Cinema 4D R18, I am using a tablet with a quad-core Intel i7 3.1GHz processor, 8GB of RAM and an Intel Iris Graphics 6100 GPU. That is definitely on the lower end of processing power for this app, but it works, and I have to credit Maxon for making it work so well.

Voronoi Fracture
If, like me, you’ve never heard of the term Voronoi, check out the first paragraph of this Wiki page. A very simple way to imagine a Voronoi diagram is as a bunch of cell-like polygons that are all connected (there’s a much more intricate and deeply mathematical definition, but I can barely understand it, and it’s really beyond the scope of this review). In Cinema 4D Studio R18, the Voronoi Fracture object allows us to easily, and I mean really easily, procedurally break apart objects like MoGraph text, or any other object, without the need for external third-party plug-ins such as Nitro4D’s Thrausi.
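To make the “cell-like polygons” idea concrete, here is a tiny Python sketch of the underlying math (an illustration of the concept only, not Cinema 4D code): assign each point of a small grid to its nearest seed point, and the Voronoi cells emerge as regions of the same digit.

```python
import math

# Three arbitrary seed points; every grid location belongs to the
# cell of whichever seed is closest to it.
seeds = [(2, 2), (7, 3), (4, 8)]

def nearest_seed(x, y):
    return min(range(len(seeds)),
               key=lambda i: math.dist((x, y), seeds[i]))

for y in range(10):
    print("".join(str(nearest_seed(x, y)) for x in range(10)))
```

A fracture tool does essentially this in 3D, scattering seed points through the volume of an object and cutting it along the cell boundaries.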

To apply Voronoi Fracture in as few steps as possible, you apply the Voronoi Fracture object (located in the MoGraph menu) to your object, adjust parameters under the Sources menu (like distribution type or point amount), add effectors to cause dispersion, keyframe values and render. With a little practice you can explode your raytraced MoGraph text in no time. The best part is that your object will not look fractured until it is animated, which in the past took some work, so this is a great update.

Thin Film Shader
Transparent objects, such as glass bottles, windows and bubbles, are hard to recreate in a photorealistic way. In Cinema 4D R18, Maxon has added the new Thin Film Shader, which can add the film-like quality you see on bubbles or soap. It’s an incredible addition to Cinema 4D, furthering the idea that Maxon is concentrating on features that improve efficiency for people like me, who want to use Cinema 4D but sometimes don’t because making a material like Thin Film would take a long time.

To apply the Thin Film to your object, open the Reflectance channel of the material you want to add the Thin Film property to, add a new Beckmann or GGX layer, lower the Specular Strength of that layer to zero, and under Layer Color choose Texture > Effects > Thin Film. From there, if you want to see the Thin Film as a true layer of film, you need to change the layer’s composite setting to Add; you should then see it properly. You can get some advanced tips from the great tutorials over at Cineversity and from Andy Needham (Twitter: @imcalledandy) on lynda.com. One tip I learned from Andy is to change the Index of Refraction, found under the Shader properties, to get some different looks.

Push Apart Effector
The new Push Apart Effector is a simple but super-powerful addition to Cinema 4D. The easiest way to describe it is to imagine a bunch of objects in an array, or in a Cloner, where all of your objects are touching; the Push Apart Effector pushes them away from each other. To decrease the intersection of your clones, you can dial in the specific radius of your objects (like a sphere) and then tell Cinema 4D R18 how many times you want it to look through the scene by specifying iterations. The more iterations, the less chance your objects will intersect, but the more time it will take to compute.
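The radius-and-iterations behavior can be pictured as a simple relaxation loop. This Python sketch illustrates the idea (it is not Maxon’s actual implementation): on each pass, any two circles of the given radius that overlap are nudged apart along the line between their centers.

```python
import math

def push_apart(centers, radius, iterations):
    """Repeatedly separate overlapping circles of equal radius."""
    centers = [list(c) for c in centers]
    for _ in range(iterations):
        for i in range(len(centers)):
            for j in range(i + 1, len(centers)):
                dx = centers[j][0] - centers[i][0]
                dy = centers[j][1] - centers[i][1]
                d = math.hypot(dx, dy) or 1e-9  # avoid divide-by-zero
                overlap = 2 * radius - d
                if overlap > 0:  # intersecting: split the correction
                    push = overlap / (2 * d)
                    centers[i][0] -= dx * push; centers[i][1] -= dy * push
                    centers[j][0] += dx * push; centers[j][1] += dy * push
    return centers

a, b = push_apart([(0.0, 0.0), (1.0, 0.0)], radius=1.0, iterations=4)
print(round(b[0] - a[0], 3))  # centers end up 2 * radius apart
```

With many clones, one pass can push a pair into a new neighbor, which is exactly why more iterations reduce residual intersections at the cost of compute time.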

Summing Up
I love Maxon’s continual development of Cinema 4D in Release 18. I specifically love that while they are adding new features, like Weight Painting and the updated Knife Tools, they are also improving efficiency for people like me who love to work in Cinema 4D but sometimes skip it because of the steep learning curve and technical know-how needed to operate it. You should not fear, though: I cannot emphasize enough how much you can learn at Cineversity, Lynda.com and on YouTube from an expert like Sean Frangella. Whether you are new to the world of Cinema 4D, mildly experienced like me, or an expert, you can always learn something new.

Something I love about Maxon’s licensing for education is that if you go to a qualified school, you can get a free Cinema 4D license. Instructors can get access to Cineversity to use the tutorials in their curriculum as well as project files to use. It’s an amazing resource.

Thin Film Render

If you are an Adobe After Effects user, don’t forget that you automatically get a free version of Cinema 4D bundled with After Effects — Cinema 4D Lite. Even though you have to have After Effects open to use Cinema 4D Lite, it is still a great way to dip your toes into the 3D world, and maybe even bring your projects back into After Effects to do some compositing.

Cinema 4D Studio R18’s pricing breaks down like this: Commercial Pricing/Annual License Pricing/Upgrade R17 to R18 pricing — Cinema 4D Studio Release 18: $3,695/$650/$995; Cinema 4D Visualize Release 18: $2,295/$500/$795; Cinema 4D Broadcast Release 18: $1,695/$400/$795; Cinema 4D Prime Release 18: $995/$250/$395.

Another interesting option is Maxon’s short-term licensing in three- or six-month chunks for the Studio version ($600/$1,100), and 75 percent of the fees you pay for a short-term license can be applied to the purchase of a full license later. Keep in mind that when using software as powerful and robust as Cinema 4D, you are making an investment that will pay off with concentrated effort in learning the software. With a few hours of training from some of the top trainers — like Tim Clapham on www.helloluxx.com, Greyscalegorilla.com and Motionworks.com — you will be off and running in 3D land.

For everyday Cinema 4D creations and inspiration, check out @beeple_crap on Instagram. He produces amazing work all the time.

In this review, I tested some of the new updates to Cinema 4D Studio R18 with sample projects from Andy Needham’s Lynda.com class Cinema 4D R18: New Features and Joren Kandel’s awesome website, which offers tons of free content to play with while learning the new tools.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Ingenuity Studios helps VFX-heavy spot get NASCAR-ready

Hollywood-based VFX house Ingenuity Studios recently worked on a 60-second Super Bowl spot for agency Pereira & O’Dell promoting Fox Sports’ coverage of the Daytona 500, which takes place on February 26. The ad, directed by Joseph Kahn, features people from all over the country gearing up to watch the Daytona 500, including footage from NASCAR races, drivers and, for some reason, actor James Van Der Beek.

The Ingenuity team had only two weeks to turn around this VFX-heavy spot, called Daytona Day. Some CG elements include a giant robot, race cars and crowds. While they were working on the effects, Fox was shooting footage in Charlotte, North Carolina and Los Angeles.

“When we were initially approached about this project we knew the turnaround would be a challenge,” explains creative director/VFX supervisor Grant Miller. “Editorial wasn’t fully locked until Thursday before the big game! With such a tight deadline preparing as much as we could in advance was key.”

Portions of the shoot took place at the Daytona Speedway, and since it was an off day the stadium and infield were empty. “In preparation, our CG team built the entire Daytona stadium while we were still shooting, complete with cheering CG crowds, RVs filling the interior, pit crews, etc.,” says Miller. “This meant that once shots were locked we simply needed to track the camera, adjust the lighting and render all the stadium passes for each shot.”

Additional shooting took place at the Charlotte Motor Speedway, Downtown Los Angeles and Pasadena, California.

In addition to prepping CG for set extensions, Ingenuity also got a head start on the giant robot that shows up halfway through the commercial.  “Once the storyboards were approved and we were clear on the level of detail required, we took our ‘concept bot’ out of ZBrush, retopologized and unwrapped it, then proceeded to do surfacing and materials in Substance Painter. While we had some additional detailing to do, we were able to get the textures 80 percent completed by applying a variety of procedural materials to the mesh, saving a ton of manual painting.”

Other effects work included over 40 CG NASCAR vehicles to fill the track, additional cars for the traffic jam and lots of greenscreen and roto work to get the scenes shot in Charlotte into Daytona. There was also a fair bit of invisible work that included cleaning up sets, removing rain, painting out logos, etc.

Other tools used include Autodesk’s Maya, The Foundry’s Nuke and BorisFX’s Mocha.

Hush adds Eloise Murphy as senior producer

Design agency Hush has expanded its creative production team with the addition of senior producer Eloise Murphy. In her new position at Hush, Murphy will oversee all project phases and develop relationships with new and existing vendors.

During her career, Murphy has worked in the UK and North America for companies such as the BBC, TED and Moment Factory. Her resume is diverse, with projects ranging from content production for Madonna’s Rebel Heart Tour to experiential production for TED Talks in Rio de Janeiro. Her experience spans digital design, content production and experiential activations for brands including Samsung, Intel and BBC Radio 1.

“Having worked with a variety of brands, artists and companies, I have a solid understanding of how to manage projects optimally within different settings, parameters and environments,” says Murphy. “It has enabled me to be highly adaptable, flexible and develop a strong knack for pre-empting, identifying and resolving issues promptly and successfully. I believe my international experience has made me well-versed in managing complex projects and I’m looking forward to bringing new ideas to the table at Hush.”

Chris Hill & Sami Tahari

Imaginary Forces expands with EP Chris Hill and director of biz dev Sami Tahari

Imaginary Forces has added executive producer Chris Hill and director of business development Sami Tahari to its Los Angeles studio. The additions come at a time when the creative studio is looking to further expand its cross-platform presence with projects that mix VR/AR/360 with traditional, digital and social media.

Celebrating 20 years in business this year, the independently owned Imaginary Forces is a creative company specializing in brand strategy and visual storytelling encompassing many disciplines, including full-service design, production and post production. Being successful for that long in this business means they are regularly innovating and moving where the industry takes them. This led to the hiring of Hill and Tahari, whose diverse backgrounds will help strengthen the company’s long-standing relationships, as well as its continuous expansion into emerging markets.

Recent work of note includes main titles for Netflix’s beloved Stranger Things, the logo reveal for Michael Bay’s Transformers: The Last Knight and an immersive experience for the Empire State Building.

Hill’s diverse production experience includes commercials, experience design, entertainment marketing and branding for such clients as HBO Sports, Google, A&E and the Jacksonville Jaguars, among others. He joins Imaginary Forces after recently presiding over the broadcast division of marketing agency BPG.

Tahari brings extensive marketing, business and product development experience spanning the tech and entertainment spaces. His resume includes time at Lionsgate and Google, where he was an instrumental leader in the creative development and marketing of Google Glass.

“Imaginary Forces has a proven ability to use design and storytelling across any medium or industry,” adds Hill. “We can expand that ability to new markets, whether it’s emerging technologies, original content or sports franchises. When you consider, for example, the investment in massive screens and new technologies in stadiums across the country, it demands [that] same high level of brand strategy and visual storytelling.”

Our Main Image: L-R: Chris Hill and Sami Tahari.

Review: Nvidia’s new Pascal-based Quadro cards

By Mike McCarthy

Nvidia has announced a number of new professional graphics cards, filling out their entire Quadro line-up with models based on their newest Pascal architecture. At the absolute top end, there is the new Quadro GP100, which is a PCIe card implementation of their supercomputer chip. It has similar 32-bit (graphics) processing power to the existing Quadro P6000, but adds 16-bit (AI) and 64-bit (simulation). It is intended to combine compute and visualization capabilities into a single solution. It has 16GB of new HBM2 (High Bandwidth Memory) and two cards can be paired together with NVLink at 80GB/sec to share a total of 32GB between them.

This powerhouse is followed by the existing P6000 and P5000 announced last July. The next addition to the line-up is the single-slot VR-ready Quadro P4000. With 1,792 CUDA cores running at 1200MHz, it should outperform a previous-generation M5000 for less than half the price. It is similar to its predecessor the M4000 in having 8GB RAM, four DisplayPort connectors, and running on a single six-pin power connector. The new P2000 follows next with 1,024 cores at 1076MHz and 5GB of RAM, giving it similar performance to the K5000, which is nothing to scoff at. The P1000, P600 and P400 are all low-profile cards with Mini-DisplayPort connectors.

All of these cards run on PCIe Gen3 x16 and use DisplayPort 1.4, which adds support for HDR and DSC. They all support 4Kp60 output, with the higher-end cards allowing 5K and 4Kp120 displays. Nvidia also continues to push forward on high-resolution displays, allowing up to 32 synchronized displays to be connected to a single system, provided you have enough slots for eight Quadro P4000 cards and two Quadro Sync II boards.

Nvidia also announced a number of Pascal-based mobile Quadro GPUs last month, with the mobile P4000 having roughly comparable specifications to the desktop version. But you can read the paper specs for the new cards elsewhere on the Internet. More importantly, I have had the opportunity to test out some of these new cards over the last few weeks, to get a feel for how they operate in the real world.

DisplayPorts

Testing
I was able to run tests and benchmarks with the P6000, P4000 and P2000 against my current M6000 for comparison. All of these tests were done on a top-end Dell 7910 workstation, with a variety of display outputs, primarily using Adobe Premiere Pro, since I am a video editor after all.

I ran a full battery of benchmark tests on each of the cards using Premiere Pro 2017. I measured both playback performance and encoding speed, monitoring CPU and GPU use, as well as power usage throughout the tests. I had HD, 4K, and 6K source assets to pull from, and tested monitoring with an HD projector, a 4K LCD and a 6K array of TVs. I had assets that were RAW R3D files, compressed MOVs and DPX sequences. I wanted to see how each of the cards would perform at various levels of production quality and measure the differences between them to help editors and visual artists determine which option would best meet the needs of their individual workflow.

I started with the intuitive expectation that the P2000 would be sufficient for most HD work, but that a P4000 would be required to effectively handle 4K. I also assumed that a top-end card would be required to playback 6K files and split the image between my three Barco Escape formatted displays. And I was totally wrong.

Except when using the higher-end options within Premiere’s Lumetri-based color corrector, all of the cards were fully capable of every editing task I threw at them. To be fair, the P6000 usually renders out files about 30 percent faster than the P2000, but that is a minimal difference compared to the difference in cost. Even the P2000 was able to play back my uncompressed 6K assets on my array of Barco Escape displays without issue. It was only when I started making heavy color changes in Lumetri that I began to observe any performance differences at all.

Lumetri

Color correction is an inherently parallel, graphics-related computing task, so this is where GPU processing really shines. Premiere’s Lumetri color tools are based on SpeedGrade’s original CUDA processing engine, and it can really harness the power of the higher-end cards. The P2000 can make basic corrections to 6K footage, but it is possible to max out the P6000 with HD footage if I adjust enough different parameters. Fortunately, most people aren’t looking for more stylized footage than the 300 had, so in this case, my original assumptions seem to be accurate. The P2000 can handle reasonable corrections to HD footage, the P4000 is probably a good choice for VR and 4K footage, while the P6000 is the right tool for the job if you plan to do a lot of heavy color tweaking or are working on massive frame sizes.

The other way I expected to be able to measure a difference between the cards would be in playback while rendering in Adobe Media Encoder. By default, Media Encoder pauses exports during timeline playback, but this behavior can be disabled by reopening Premiere after queuing your encode. Even with careful planning to avoid reading from the same disks as the encoder was accessing from, I was unable to get significantly better playback performance from the P6000 compared to the P2000. This says more about the software than it says about the cards.

P6000

The largest difference I was able to consistently measure across the board was power usage, with each card averaging about 30 watts more as I stepped up from the P2000 to the P4000 to the P6000. But they all are far more efficient than the previous M6000, which frequently sucked up an extra 100 watts in the same tests. While “watts” may not be a benchmark most editors worry too much about, among other things it does equate to money for electricity. Lower wattage also means less cooling is needed, which results in quieter systems that can be kept closer to the editor without distracting from the creative process or interfering with audio editing. It also allows these new cards to be installed in smaller systems with smaller power supplies, using up fewer power connectors. My HP Z420 workstation only has one six-pin PCIe power plug, so the P4000 is the ideal GPU solution for that system.
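To put that wattage difference in money terms, here is a rough back-of-the-envelope calculation (the eight-hour day, 250 working days and $0.15/kWh rate are all illustrative assumptions, not figures from the tests):

```python
# Annual electricity cost of the ~30W step between adjacent cards.
watts_saved = 30
hours_per_year = 8 * 250          # assumed editing hours per year
kwh = watts_saved * hours_per_year / 1000
print(f"{kwh:.0f} kWh, about ${kwh * 0.15:.2f}/year")
```

The M6000’s extra 100 watts would scale the same arithmetic by more than three times, before counting the added cooling load.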

Summing Up
It appears that we have once again reached a point where hardware processing capabilities have surpassed the software capacity to use them, at least within Premiere Pro. This leads to the cards performing relatively similar to one another in most of my tests, but true 3D applications might reveal much greater differences in their performance. Further optimization of CUDA implementation in Premiere Pro might also lead to better use of these higher-end GPUs in the future.


Mike McCarthy is an online editor and workflow consultant with 10 years of experience on feature films and commercials. He has been on the forefront of pioneering new solutions for tapeless workflows, DSLR filmmaking and now multiscreen and surround video experiences. If you want to see more specific details about performance numbers and benchmark tests for these Nvidia cards, check out techwithmikefirst.com.

Reel FX hires Chad Moseley as senior designer

Chad Moseley has joined Reel FX as senior designer. Moseley brings with him nearly a decade of experience in motion graphics and design, spanning television, advertising and broadcast promos.

He comes to Reel FX, which has offices in Dallas and Santa Monica, from Starz Entertainment, where he spent two years as a broadcast designer, concepting and executing promotions for original programming on series such as Outlander, Da Vinci’s Demons and Flesh and Bone, including teasers, spots and graphics packages. His work for brands such as Enterprise, Nestle, Purina and Busch Gardens has earned him a Gold American Advertising Award (AAA), a Gold Addy Award and an AAF Best of Digital Award.

Texas native Moseley studied graphic design and 3D animation in Denver. He developed his career at a Texas news channel, handling the video and graphics for the channel’s website. While there he learned post production. He then worked as a video editor/animator at Denver-based ORCC, later relocating to St. Louis to take a position as senior motion graphics/VFX artist at 90 Degrees West. While there, he contributed to post projects from concept through completion for national brands including Anheuser-Busch, Enterprise and UPS, among others. An opportunity as an in-house broadcast designer at Starz Entertainment led Moseley back to Denver in 2014, before he returned to Dallas once again to join the Reel FX team.

ESPN’s NBA coverage gets a rebrand

The bi-coastal studio Big Block recently collaborated with ESPN to develop, design and animate a rebrand package promoting the network’s NBA coverage. Over nearly a year of design development, the studio’s role expanded beyond that of a simple production partner, with Big Block executive creative director Curtis Doss and managing director Kenny Solomon leading the charge.

The package, which features a rich palette of textures and fluid elegance, was designed to reflect the style of the NBA. Additionally, Big Block embedded what they call “visual touchstones” to put the spotlight on the stars of the show — the NBA players, the NBA teams and the redesigned NBA and ESPN co-branded logo.

Big Block and ESPN’s creative teams — which included senior coordinating producer for the NBA on ESPN Tim Corrigan — collaborated closely on the logos. The NBA’s was reconfigured and simplified, allowing it to combine with ESPN’s as well as support the iconic silhouette of Jerry West as the centerpiece of the new creation.

Next, the team worked on taking the unique branding and colors of each NBA team and using them as focal points within the broadcasts. Team logos were assembled, rendered and given textures and fast-moving action, providing the broadcast with a high-end look that Big Block and ESPN feel matches the face of the league itself.

Big Block provided ESPN with a complete toolkit for the integration of live game footage with team logos, supers, buttons and transitions, as well as team and player-based information like player comparisons and starting lineups. The materials were designed to be visually cohesive between ESPN’s pre-show, game and post-show broadcasts, with Big Block crafting high-end solutions to keep the sophisticated look and feel consistent across the board.

When asked if working with such iconic logos added challenges to the project, Doss said, “It definitely adds pressure anytime you’re combining multiple brands. However, it was not the first time ESPN and the NBA have collaborated, obviously. I will say that there were needs unique to each brand that we absolutely had to consider. This did take us down many paths during the design process, but we feel that the result is a very strong marriage of the two icons that both benefit from a brand perspective.”

In terms of tools, the studio called on Adobe’s Creative Suite and Maxon Cinema 4D. Final renders were done in Cinema 4D’s Physical Render.

Wacom’s Intuos Pro Paper Edition lets artists sketch old-school

Do you miss the days of just pulling out your sketchpad and letting your creative energy flow? Well, Wacom has a new solution for you that bridges old-school paper-and-ink drawings with portable digital technology.

Wacom is at CES showing its new Intuos Pro and Intuos Pro Paper Edition pen and touch tablets. While the two products have similar functionality, the Intuos Pro Paper Edition gives artists the ability to incorporate paper into their workflow — and when not used with paper, this version will also function as a regular Intuos Pro.

The tablet allows ink-on-paper drawings to be captured and stored digitally on the Intuos Pro Paper Edition so they can be refined later on the tablet with any compatible layered raster or vector software application. This means no more scanning.

“The Paper Edition lets artists secure a paper on the device and sketch, draw or write with a real-ink analog pen, while it captures the information digitally, because it is seated on our Electro-Magnetic Resonance board, and stores it for later use,” explains Wacom’s Doug Little.

Little also emphasizes that while the Paper Edition does function as an Intuos Pro when paper isn’t involved, the newest Intuos Pro is “thinner and lighter and features our new Pro Pen 2 (4x the pressure sensitivity of our previous pen). It also features the same ExpressKeys for creating shortcuts and modifiers.”

The new Intuos Pro is less than half an inch thick but offers the same-sized active area in a smaller overall footprint. It comes equipped with an anodized aluminum backing, a smaller pen stand with 10 nibs and a new pen case. Both sizes of the Intuos Pro, Medium and Large, use a TouchRing, multi-touch and eight ExpressKeys for the creation of customized shortcuts to speed up the creative workflow.

The Paper Edition adds a Paper Clip (to attach the artist’s favorite drawing paper), a pressure-sensitive Finetip gel ink pen and the Wacom Inkspace App to convert drawings for use with leading creative software applications. The Inkspace App environment also allows users to easily store and share their artwork.

The new Wacom Pro Pen 2 comes with both the Intuos Pro and Intuos Pro Paper Edition. This new pen offers four times the pressure sensitivity of the former Pro Pen, delivering 8,192 levels of pressure to support a natural and intuitive creative process.
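For context, 8,192 levels is a 13-bit range; drawing applications typically normalize the raw reading to 0.0–1.0 before mapping it to brush size or opacity. A minimal illustrative sketch of that mapping — the function name and gamma curve are hypothetical, not part of any Wacom API:

```python
# Normalize a raw 13-bit pen pressure reading (0..8191) to the 0.0..1.0
# range most graphics apps expect, with an optional gamma curve to
# soften light strokes. The 8,192-level figure comes from the article;
# the gamma value is an arbitrary illustration.
MAX_LEVEL = 8192 - 1  # 8,192 discrete levels -> top raw value is 8191

def normalize_pressure(raw: int, gamma: float = 1.5) -> float:
    raw = max(0, min(raw, MAX_LEVEL))   # clamp out-of-range readings
    return (raw / MAX_LEVEL) ** gamma   # 0.0 (no touch) .. 1.0 (full press)

print(normalize_pressure(0))      # 0.0
print(normalize_pressure(8191))   # 1.0
```

A finer raw range mostly pays off at the light end of a stroke, which is why a gamma above 1.0 is a common choice: it spreads more of the output range across gentle pressure.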

The recently released Wacom Finetip Pen, included with the Intuos Pro Paper Edition, provides smooth-gel ink. Designed for those who begin their creative process on paper, the Finetip lets users visually depict ideas that are automatically digitized. Users can also select a Ballpoint Pen as an optional purchase.

Available in medium and large models, Intuos Pro is Bluetooth-enabled and compatible with Macs and PCs. The Intuos Pro Medium ($349.95 USD) and Large ($499.95 USD) will be available this month.

Intuos Pro Paper Edition will contain added features as a bundled package to enable paper-to-digital creation. The Intuos Pro Paper Edition Medium ($399.95) and Large ($549.95) will be available this month as well.

New Wacom Cintiq Pro line offers portability, updated pen, more

Wacom has introduced a new line of Wacom Cintiq Pro creative pen displays: the Cintiq Pro 13 and Cintiq Pro 16. Both models feature a thin and portable form factor, making them suitable for working on the road or remotely.

Cintiq Pro’s new Pro Pen 2, according to Wacom, offers four times greater accuracy and pressure sensitivity than the previous Pro Pen. The improved Pro Pen 2 creates an intuitive experience with virtually lag-free tracking on a glass surface that produces the right amount of friction, and is coated to reduce reflection.

Additionally, the new optical bonding process reduces parallax, providing a pen-on-screen performance that feels natural and has the feedback of a traditional pen or brush. Both Cintiq Pro models also feature multi-touch for easy and fast navigation, as well as the ability to pinch, zoom and rotate illustrations, photos or models within supporting 2D or 3D creative software apps.

Both high-resolution Cintiq Pro models come with an optimized edge-to-edge etched glass workspace. The Cintiq Pro also builds on its predecessor, the Cintiq 13HD touch, offering the ExpressKey Remote as an optional accessory so users can customize their most commonly used shortcuts and modifiers when working with their most-used software applications. In addition, ergonomic features, such as ErgoFlex, fully integrated pop-out legs and an optional three-position desk stand (available in February), let users focus on their work instead of constantly adjusting for comfort.

The Wacom Cintiq Pro 13 and 16 are compatible with both Macs and PCs and feature full HD (1920×1080) and UHD (3840×2160) resolution, respectively. Both Cintiq Pro configurations deliver vivid colors, the 13-inch model providing 87 percent Adobe RGB and the 16-inch, 94 percent.

Priced at $999.95 USD, the Cintiq Pro 13 is expected to be available online and at select retail locations at the beginning of December. The Cintiq Pro 16, $1499.95 USD, is expected in February.

Super Hero music video gets Aardman Nathan Love treatment

The Aardman Nathan Love animation studio recently finished design and animation work on director Kris Merc’s music video for Super Hero, the leadoff single from Kool Keith’s new album Feature Magnetic that is a collaboration with MF Doom.

The video starts with a variety of hypnotic imagery, from eye charts to kaleidoscopic wheels, with Doom’s iconic, ever-rotating mask as its centerpiece.

“Being a huge fan of both Kool Keith and MF Doom for years, and knowing our studio had capacity to help Kris out, we couldn’t not get involved,” recalls Aardman Nathan Love (ANL) founder/executive creative director Joe Burrascano. “Kris was able to let his imagination run wild. ANL’s team of designers, 3D artists and technical directors gave him the support he needed to help shape his vision and make the final piece as strong and unique as possible.”

According to Merc, who’s helmed notable projects from music videos for hip-hop pioneers De La Soul to spots for HTC during his lengthy career, the Super Hero production afforded him the space to realize his vision of bending and manipulating pop aesthetics to create something altogether mysterious and otherworldly. “I wanted to capture something that felt like a visual pop travesty,” explains the director. “I wanted it to visually speak to the legacy of the artists, and Afrofuturism mixed with comic book concepts. I’m a fan of the unseen, and I was obsessed with the idea of using Doom’s mask and the iconography as a centralized point – as if time and space converged around these strange, sometimes magical tableaus and we were witnessing an ascension.”

To help develop his concepts, Merc worked closely with Aardman Nathan Love in several key stages of production from the idea and design stage to technical aspects like compositing and rendering. “Our specialty lies mainly in CG character animation work, which typically involves a lot of careful planning and development work up front,” adds ANL CG director Eric Cunha. “Kris has a very organic process, and is constantly finding inspiration for new and exciting ideas. The biggest challenge we faced was being able to respond to this constant flow of new ideas, and facilitate the growth of the piece. In the end, it was an exciting new challenge that pushed us to develop a new way of working that resulted in an amazing, visually fresh and creative piece of work.”

ZBrush was used to create some of the assets, and Autodesk Maya was Aardman Nathan Love’s main animation tool. Most of the rendering was done in Maxwell, aside from two or so shots that were done in Arnold.

FCPX Creative Summit keynotes announced

Later this month, in Cupertino, California, Apple Final Cut Pro X editors and potential users will be attending the second annual FCPX Creative Summit. The event will take place October 27-30.

The keynote line-up consists of two panels: the first features directors Glenn Ficarra and John Requa, along with editor Jan Kovac. The trio worked together on two of the first feature films edited in Final Cut Pro X — Focus and Whiskey Tango Foxtrot.

The second panel includes two-time Emmy-winner Chuck Braverman, Supersphere VR executive producer Lucas Wilson and creative director Duncan Shepherd.

Organized by Future Media Concepts (FMC), in collaboration with Apple, this year’s event will take place next door to the Apple Campus. In addition to the keynote presentations, there will be 30-plus sessions focused on editorial, motion graphics, workflow and case studies.

postPerspective readers can save $125 off registration with code: POST16.

Trippy Empire of the Sun music video mixes live-action and animation

NYC’s Roof Studio recently partnered with Australian music duo Empire of the Sun on their music video “High and Low,” a surreal visual journey into a psychedelic trip that captures the song’s celebration of the innocence and boldness of youth. The lead single from the band’s upcoming Two Vines LP, “High and Low” follows a small group of people as they are guided by a shaman into the forest to indulge in the experience of mind-altering substances. Using a mix of live-action and animation, the video shows the group’s trip experience — the Empire of the Sun members serve as “emperors.”

Roof and Empire of the Sun previously worked together on the Honda Civic The Dreamer spot via ad agency RPA, which combined Roof’s visual language and direction with the band’s “Walking on a Dream” track.

“We recognized that there was something special in our initial partnership,” says Vinicius Costa, Roof Studio co-founder/director. “[The band] wanted a psychedelic film with a strong connection to nature to visually, yet indirectly, represent the mind-bending journey. However, they were open to our ideas on execution.”

The only constraint Luke Steele and Nick Littlemore imposed was not to take a too-literal approach to visualizing the lyrics. In contrast to the desert-landscape art direction of the duo’s previous album, this time around they wanted to explore a tropical environment. Initially, Roof sought to create the entire film in CG; however, due to the limited timeframe of three weeks, they felt it was best to combine live-action with animation in order to focus on delivering more than 40 realistic CG shots. This shift in direction spurred the studio to develop the natural and psychedelic narratives that tie the piece together.

“The band came to us with a clear point of view, even referencing aspects of some of our previous work,” says Guto Terni, Roof Studio co-founder/director. “From there, we created a loose narrative based on the right balance of live-action and post production visuals. As directors, we engage in every step of the process, from concept to storyboarding and pre-visualization to shooting, and finally, post production and finishing. This project really showcased our full range of capabilities.”

Roof’s directors, Costa, Terni and Sam Mason, were on set for both live-action shoots, including the band shoot in a Los Angeles studio, and the actors and extras shoot in Costa Rica, which provided the tropical aesthetics. Roof had one week to plan and facilitate both shoots and then two weeks to execute the ambitious CG animation.

Roof created a 3D scan of Steele and Littlemore using a technique called photogrammetry in order to create the telescope shot featured in the video. The process captures multiple images of the band, from which the studio was able to generate a 3D version of its members. From there, Roof added CG cloth simulation to mimic wind blowing their clothes for more believability. The result is a fantastic shot in which only the band members’ faces are real and the rest is CG.

Roof used a mix of tools, including Nuke, After Effects, Maya, Modo, 3ds Max and Corona, to blend the live-action and animation.

The creative process behind The Human Rights Zoetrope

By Sophia Kyriacou

As an artist who has worked in the broadcast industry for almost 20 years, I’ve designed everything from opening title sequences to program brands to content graphics. About three years into my career, I was asked to redesign a program entirely in 3D. The rest, as they say, is history.

Over two years ago I was working full-time at the BBC doing the same work I do now, as a broadcast designer and 3D artist, but I decided it was time to cut my hours in half and allow myself to focus on my own creative ventures. I wanted to work with external and varied clients, both here in the UK and internationally. I also wanted to use my spare time for development work. In an industry where technology is constantly evolving, it’s essential to keep ahead of the game.

One of those creative ventures was a commission from Noon Visual Creatives — a London-based production and post company that serves several Arabic broadcasters in the UK and worldwide — to create a television branding package for a program called Human Rights.

I had previously worked with Noon on a documentary about the ill-fated 1999 EgyptAir plane crash (which is still awaiting broadcast), so when I was approached again I was more than happy to create their Human Rights brand.

My Inspiration
I was very lucky in that my client essentially gave me free rein, which I find is a rarity these days. I have always been excited and inspired by the works of the creative illusionist M.C. Escher. His work has always made me think and explore how you can hook your viewer by giving them something to unravel and interact with. His 1960 lithograph, called Ascending and Descending, was my initial starting point. There was something about the figures going round and round but getting nowhere.

While Escher’s work kickstarted my creative process, I also wanted to create something that was illusion-based, so I revisited Mark Gertler’s Merry-Go-Round. As a young art student I had his poster on my wall. Sometimes I would find myself staring at it for hours, looking at the people’s expressions and the movement Gertler had expressed in the figures with his onion-skin-style strokes. There was so much movement within the painting that it jumped out at me. I loved the contrasting colors of orange and blue; the composition was incredibly strong and animated.

I have always been fascinated by the mechanics of old hand-cranked metal toys, including zoetropes, and I have always loved how inanimate objects could come alive to tell you a story. It is very powerful. You have the control to be given the narrative or you can walk away from it — it’s about making a choice and being in control.

Once I had established I was going to build a 3D zoetrope, I explored the mechanics of building one. It was the perfect object for addressing the issue of human rights, because without its trigger it would remain lifeless. I then started digging into the Declaration of Human Rights to put forward a proposal of what I thought would work within the program. I shortlisted 10 rights and culled that down to the final eight. Everything had to be considered; the positioning of the final eight had its own hierarchy and place.

At the base of the zoetrope are water pumps, signifying the right to clean water and sanitation. This is the most important element of the entire zoetrope, grounding the whole structure: without water there is simply no life, no existence. Above, a prisoner gestures for attention from the outside world, his bleak surroundings given hope by an energetic burst of comforting orange. The gavel references the right to justice and is subliminally inspired by the hammers walking defiantly in the Pink Floyd video Another Brick in the Wall. Within the zoetrope, the gavel becomes that monumental object of power, helped along by the dynamic camera, with repetitions of itself staggered over time like echoes on a loop. Surrounding the gavel of justice is a dove flying free from a metal birdcage in the shape of the world. This was my reference to the wonderful book I Know Why the Caged Bird Sings, by Maya Angelou.

My client wanted to highlight the crisis of the Syrian refugees, so I decided to depict an exhausted child wearing a life jacket, suggesting he had travelled across the Mediterranean Sea, while a young girl at his side, oblivious, happily plays with a spinning top. I wanted to show the negativity being cancelled out by optimism.

To hammer home the feeling of isolation and emptiness that the lack of human rights brings, I placed the zoetrope in a cold, almost brutal environment: an empty warehouse. My theme of positivity canceling out negativity is echoed once again as sunlight penetrates through, hitting the cold floor, signifying hope and a reconnection with the outside world.

Every level of detail was broken up into sections. I created very simple one-second loops of animation that were subtle, but enough to tell the story. Once I had animated each section, it was a case of painstakingly pulling apart each object into a stop-frame animated existence, so that once the objects were placed in position and spun, they would animate back into life again.
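The mechanics behind those one-second loops come down to simple arithmetic: with N frozen poses spaced evenly around the drum, rotating the drum one pose-width per frame replays the loop. A rough sketch of that arithmetic — the pose count here is illustrative, not taken from the actual build:

```python
# A zoetrope replays a short loop by rotating N frozen poses past the
# viewer, one pose per frame.
def zoetrope_step(num_poses: int) -> float:
    """Degrees the drum must rotate per frame to advance one pose."""
    return 360.0 / num_poses

def visible_pose(frame: int, num_poses: int) -> int:
    """Index of the pose facing the viewer on a given frame."""
    return frame % num_poses

# A one-second loop at 25fps needs 25 poses spaced around the drum:
step = zoetrope_step(25)
print(step)                  # 14.4 degrees per frame
print(visible_pose(27, 25))  # 2 -- the loop has wrapped around
```

The same relationship drives the animation: keyframing the drum’s rotation at exactly one pose-width per frame is what makes the frozen figures appear to move.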

My Workflow
For ease and budget, I used Poser Pro, a character-animation software package, to animate all the figures in isolation first. Using both the PoserFusion plug-in and Alembic export, I was able to import each looping character into Maxon Cinema 4D, where I froze and separated each 3D object one by one. Any looping objects that were not figure-based were modelled and animated within Cinema 4D. Once the individual components were animated and positioned, I imported everything into a master 3D scene, where I was able to focus on the lighting and camera shots.

For the zoetrope centrepiece, I built a simple lighting rig made up of two soft boxes from the GSG Light Kit Pro, which I had adapted and placed within a null, with an area omni light above. This allowed me to rotate the rig around according to my camera shot. Having a default position and brightness set-up was great; it helped get me out of trouble if I got a little too carried away with the settings, and it meant the lighting didn’t change too dramatically from one camera shot to the next. I also added a couple of visible area spotlights outside the warehouse pointing inwards to give the environment a foggy, distant feel.

I deliberately chose not to render using volumetric lighting because I didn’t want that specific look and did not want any light bursts hitting my zoetrope. The zoetrope was the star of the show and nothing else. Another lighting feature I tend to use within my work is the combination of the Physical Sky and the Sun. Both give a natural warm feel and I wanted sunlight to burst through the window; it was conceptually important and it added balance to the composition.

The most challenging part of the entire project was getting the lighting to work seamlessly throughout, as well as the composition within some of the camera shots. Some shots were very tight in frame, so I could not rely on the default rig and needed additional lighting to catch objects where the three-point lights didn’t work so well. Very early on I decided that, rather than work from a single master file, I would keep a default “get me out of trouble” master, as with the lighting, and save each shot with its own independent settings as I went along to keep my workflow clean. Each scene file was around a gigabyte in size, as none of the objects within the zoetrope were parametric anymore once they had been split, separated out and converted to polygons.

My working machine was a 3.2GHz 8-core Mac Pro with 24GB of RAM; rendering was done on a PC, a custom-built 3X3 machine with a water-cooled Intel Core i7-5960X, 32GB of RAM and clockable to 4.5GHz.

Since completion, The Human Rights Zoetrope titles have won several awards, including a Gold at the Muse Creative Awards in the Best Motion Graphics category, a Platinum Best of Show in the Art Direction category, and a Gold in the Best Graphic Design category at the Aurora Awards.

The Human Rights Zoetrope is also a Finalist at the New York Festivals 2017 in the Animation: Promotion/Open & IDs category. The winners will be announced at the NAB Show.

 

Sophia Kyriacou is a London-based broadcast designer and 3D artist.

My first trip to IBC

By Sophia Kyriacou

When I was asked by the team at Maxon to present my work at their IBC stand this year, I jumped at the chance. I’m a London-based working professional with 20 years of experience as a designer and 3D artist, but I had never been to an IBC. My first impression of the RAI convention center in Amsterdam was that it’s super huge and easy to get lost in for days. But once I found the halls relevant to my interests, the creative and technical buzz hit me like heat in the face when disembarking from a plane in a hot humid summer. It was immediate, and it felt so good!

The sounds and lights were intense. I was surrounded by booths with basslines of audio vibrating against the floor, changing as you walked along. It was a great atmosphere; so warm and friendly.

My first Maxon presentation was on day two of IBC — it was a show-and-tell of three award-winning and nominated sequences I created for the BBC in London and one for Noon Visual Creatives. As a Cinema 4D user, it was great to see the audience at the stand captivated by my work, and knowing it was streamed live to a large global audience made it even more exciting.

The great thing about IBC is that it’s not only about companies shouting about their new toys. I also saw how it brings passionate pros from all over the world together — people you would never meet in your usual day-to-day work life. I met people from all over the globe and made new friends. Everyone appeared to share the same or similar experience, which was wonderful.

The great thing about having the first presentation of the day at the Maxon stand was that I could then take a breather and look around the show. I also sat in on a Dell Precision/Radeon Technologies roundtable event one afternoon. That was a really interesting meeting. We were a group of pros from varied disciplines within the industry, and it was great to talk about what hardware works, what doesn’t work, and how it could all get better. I don’t work in a realtime area, but I do know what I would like to see as someone who works in 3D. Everyone was so welcoming; I thoroughly enjoyed it.

Sunday evening, I went over to the SuperMeet — such an energetic and friendly vibe. The stage demos were very interesting. I was particularly taken with the fayIN tracker plug-in for Adobe After Effects. It appears to be a very effective tool, and I will certainly look into purchasing it. The new Adobe Premiere features look fantastic as well.

Everything about my time at IBC was so enjoyable. I went back to London buzzing, and I’m already looking forward to next year’s show.

Sophia Kyriacou is a London-based broadcast designer and 3D artist who splits her time working as a freelancer and for the BBC.