Epic Games’ Unreal Engine 4.21 adds more mobile optimizations, efficiencies

Epic Games’ Unreal Engine 4.21 is designed to offer developers greater efficiency, performance and stability for those working on any platform.

Unreal Engine 4.21 adds even more mobile optimizations for both Android and iOS, speed increases of up to 60% when cooking content, and more power and flexibility in the Niagara effects toolset for realtime VFX. Also, the new production-ready Replication Graph plugin enables developers to build multiplayer experiences at a scale that hasn’t been possible before, and Pixel Streaming allows users to stream interactive content directly to remote devices with no compromises on rendering quality.

Updates in Unreal Studio 4.21 also offer new capabilities and enhanced productivity for users in the enterprise space, including architecture, manufacturing, product design and other areas of professional visualization. Unreal Studio’s Datasmith workflow toolkit now includes support for Autodesk Revit, and enhanced material translation for Autodesk 3ds Max, all enabling more efficient design review and iteration.

Here is more about the key features:
Replication Graph: The Replication Graph plugin, which is now production-ready, makes it possible to customize network replication in order to build large-scale multiplayer games that would not be viable with traditional replication strategies.

Niagara Enhancements: The Niagara VFX feature set continues to grow, with substantial quality of life improvements and Nintendo Switch support added in Unreal Engine 4.21.

Sequencer Improvements: New capabilities within Sequencer allow users to record incoming video feeds to disk as OpenEXR frames and create a track in Sequencer, with the ability to edit and scrub the track as usual. This enables users to synchronize video with CG assets and play them back together from the timeline.

Pixel Streaming (Early Access): With the new Pixel Streaming feature, users can author interactive experiences such as product configurations or training applications, host them on a cloud-based GPU or local server, and stream them to remote devices via web browser without the need for additional software or porting.

Mobile Optimizations: The mobile development process gets even better thanks to all of the mobile optimizations that were developed for Fortnite‘s initial release on Android, in addition to all of the iOS improvements from Epic’s ongoing updates. With the help of Samsung, Unreal Engine 4.21 includes all of the Vulkan engineering and optimization work that was done to help ship Fortnite on the Samsung Galaxy Note 9 and is 100% feature compatible with OpenGL ES 3.1.

Much Faster Cook Times: In addition to the optimized cooking process, low-level code avoids performing unnecessary file system operations, and cooker timers have been streamlined.

Gauntlet Automation Framework (Early Access): The new Gauntlet automation framework enables developers to automate the process of deploying builds to devices, running one or more clients and/or servers, and processing the results. Gauntlet scripts can automatically profile points of interest, validate gameplay logic, check return values from backend APIs and more. Gauntlet has been battle tested for months in the process of optimizing Fortnite, and is a key part of ensuring it runs smoothly on all platforms.

Animation System Optimizations and Improvements: Unreal Engine’s animation system continues to build on best-in-class features thanks to new workflow improvements, better surfacing of information, new tools, and more.

Blackmagic Video Card Support: Unreal Engine 4.21 also adds support for Blackmagic video I/O cards for those working in film and broadcast. Creatives in the space can now choose between Blackmagic and AJA Video Systems, the two most popular options for professional video I/O.

Improved Media I/O: Unreal Engine 4.21 now supports 10-bit video I/O, audio I/O, 4K, and Ultra HD output over SDI, as well as legacy interlaced and PsF HD formats, enabling greater color accuracy and integration of some legacy formats still in use by large broadcasters.

Windows Mixed Reality: Unreal Engine 4.21 natively supports the Windows Mixed Reality (WMR) platform and headsets, such as the HP Mixed Reality headset and the Samsung HMD Odyssey headset.

Magic Leap Improvements: Unreal Engine 4.21 supports all the features needed to develop complete applications on Magic Leap’s Lumin-based devices — rendering, controller support, gesture recognition, audio input/output, media, and more.

Oculus Avatars: The Oculus Avatar SDK includes an Unreal package to assist developers in implementing first-person hand presence for the Rift and Touch controllers. The package includes avatar hand and body assets that are viewable by other users in social applications.

Datasmith for Revit (Unreal Studio): Unreal Studio’s Datasmith workflow toolkit for streamlining the transfer of CAD data into Unreal Engine now includes support for Autodesk Revit. Supported elements include materials, metadata, hierarchy, geometric instancing, lights and cameras.

Multi-User Viewer Project Template (Unreal Studio): A new project template for Unreal Studio 4.21 enables multiple users to connect in a real-time environment via desktop or VR, facilitating interactive, collaborative design reviews across any work site.

Accelerated Automation with Jacketing and Defeaturing (Unreal Studio): Jacketing automatically identifies meshes and polygons that have a high probability of being hidden from view, and lets users hide, remove or move them to another layer; this command is also available through Python so Unreal Studio users can integrate this step into automated preparation workflows. Defeaturing automatically removes unnecessary detail (e.g. blind holes, protrusions) from mechanical models to reduce polygon count and boost performance.

Enhanced 3ds Max Material Translation (Unreal Studio): There is now support for most commonly used 3ds Max maps, improving visual fidelity and reducing rework. Those in the free Unreal Studio beta can now translate 3ds Max material graphs to Unreal graphs when exporting, making materials easier to understand and work with. Users can also leverage improvements in BRDF matching from V-Ray materials, especially metal and glass.

DWG and Alias Wire Import (Unreal Studio): Datasmith now supports DWG and Alias Wire file types, enabling designers to import more 3D data directly from Autodesk AutoCAD and Autodesk Alias.

Chaos Group to support Cinema 4D with two rendering products

At the Maxon Supermeet 2018 event, Chaos Group announced its plans to support the Maxon Cinema 4D community with two rendering products: V-Ray for Cinema 4D and Corona for Cinema 4D. Based on V-Ray’s Academy Award-winning raytracing technology, the development of V-Ray for Cinema 4D will be focused on production rendering for high-end visual effects and motion graphics. Corona for Cinema 4D will focus on artist-friendly design visualization.

Chaos Group, which acquired the V-Ray for Cinema 4D product from LAUBlab and will lead development on the product for the first time, will offer current customers free migration to a new update, V-Ray 3.7 for Cinema 4D. All users who move to the new version will receive a free V-Ray for Cinema 4D license, including all product updates, through January 15, 2020. Moving forward, Chaos Group will be providing all support, sales and product development in-house.

In addition to ongoing improvements to V-Ray for Cinema 4D, Chaos Group also released the Corona for Cinema 4D beta 2 at Supermeet, with the final product to follow in January 2019.

Main Image: Daniel Sian created Robots using V-Ray for Cinema 4D.


Promoting a Mickey Mouse watch without Mickey

Imagine creating a spot for a watch that celebrates the 90th anniversary of Mickey Mouse — but you can’t show Mickey Mouse. Already Been Chewed (ABC), a design and motion graphics studio, developed a POV concept that met this challenge and also tied in the design of the actual watch.

Nixon, a California-based premium watch company that is releasing a series of watches around the Mickey Mouse anniversary, called on Already Been Chewed to create the 20-second spot.

“The challenge was that the licensing arrangement that Disney made with Nixon doesn’t allow Mickey’s image to be in the spot,” explains Barton Damer, creative director at Already Been Chewed. “We had to come up with a campaign that promotes the watch and has some sort of call to action that inspires people to want this watch. But, at the same time, what were we going to do for 20 seconds if we couldn’t show Mickey?”

After much consideration, Damer and his team developed a concept to determine if they could push the limits on this restriction. “We came up with a treatment for the video that would be completely point-of-view, and the POV would do a variety of things for us that were working in our favor.”

The solution was to show Mickey’s hands and feet without actually showing the whole character. In another instance, a silhouette of Mickey is seen in the shadows on a wall, sending a clear message to viewers that the spot is an official Disney and Mickey Mouse release and not just something that was inspired by Mickey Mouse.

Targeting the appropriate consumer demographic segment was another key issue. “Mickey Mouse has long been one of the most iconic brands in the history of branding, so we wanted to make sure that it also appealed to the Nixon target audience and not just a Disney consumer,” Damer says. “When you think of Disney, you could brand Mickey for children or you could brand it for adults who still love Mickey Mouse. So, we needed to find a style and vibe that would speak to the Nixon target audience.”

The Already Been Chewed team chose surfing and skateboarding as dominant themes, since 16- to 30-year-olds are the target demographic and also because Disney is a West Coast brand.

Damer comments, “We wanted to make sure we were creating Mickey in a kind of 3D, tangible way, with more of a feature film and 3D feel. We felt that it should have a little bit more of a modern approach. But at the same time, we wanted to mesh it with a touch of the old-school vibe, like 1950s cartoons.”

In that spirit, the team wanted the action to start with Mickey walking from his car and then culminate at the famous Venice Beach basketball courts and skate park.

“The challenge, of course, is how to do all this in 15 seconds so that we can show the logos at the front and back and a hero image of the watch. And that’s where it was fun thinking it through and coming up with the flow of the spot and seamless transitions with no camera cuts or anything like that. It was a lot to pull off in such a short time, but I think we really succeeded.”

Already Been Chewed achieved these goals with an assist from Maxon’s Cinema 4D and Adobe After Effects. With Damer as creative lead, here’s the complete cast of characters: head of production Aaron Smock; 3D design by Thomas King, Barton Damer, Bryan Talkish and Lance Eckert; animation by Bryan Talkish and Lance Eckert; character animation by Chris Watson; and soundtrack by DJ Sean P.


Sony Imageworks provides big effects, animation for Warner’s Smallfoot

By Randi Altman

The legend of Bigfoot: a giant, hairy two-legged creature roaming the forests and giving humans just enough of a glimpse to freak them out. Sightings have been happening for centuries with no sign of slowing down — seriously, Google it.

But what if that story was turned around, and it was Bigfoot who was freaked out by a Smallfoot (human)? Well, that is exactly the premise of the new Warner Bros. film Smallfoot, directed by Karey Kirkpatrick. It’s based on the book “Yeti Tracks” by Sergio Pablos.

Karl Herbst

Instead of a human catching a glimpse of the mysterious giant, a yeti named Migo (Channing Tatum) sees a human (James Corden) and tells his entire snow-filled village about the existence of Smallfoot. Of course, no one believes him so he goes on a trek to find this mythical creature and bring him home as proof.

Sony Pictures Imageworks was tasked with all of the animation and visual effects work on the film, while Warner Animation Group did all of the front-end work — such as adapting the script, creating the production design, editing, directing, producing and more. We reached out to Imageworks VFX supervisor Karl Herbst (Hotel Transylvania 2) to find out more about creating the animation and effects for Smallfoot.

The film has a Looney Tunes-type feel with squash and stretch. Did this provide more freedom or less?
In general, it provided more freedom since it allowed the animation team to really have fun with gags. It also gave them a ton of reference material to pull from and come up with new twists on older ideas. Once out of animation, depending on how far the performance was pushed, other departments — like the character effects team — would have additional work due to all of the exaggerated movements. But all of the extra work was worth it because everyone really loved seeing the characters pushed.

We also found that as the story evolved, Migo’s journey became more emotionally driven; we needed to find a style that also let the audience truly connect with what he was going through. We brought in a lot more subtlety, and a more truthful physicality to the animation when needed. As a result, we have these incredibly heartfelt performances and moments that would feel right at home in an old Road Runner short. Yet it all still feels like part of the same world with these truly believable characters at the center of it.

Was scale between such large and small characters a challenge?
It was one of the first areas we wanted to tackle since the look of the yeti’s fur next to a human was really important to filmmakers. In the end, we found that the thickness and fidelity of the yeti hair had to be very high so you could see each hair next to the hairs of the humans.

It also meant allowing the rigs for the human and yetis to be flexible enough to scale them as needed to have moments where they are very close together and they did not feel so disproportionate to each other. Everything in our character pipeline from animation down to lighting had to be flexible in dealing with these scale changes. Even things like subsurface scattering in the skin had dials in it to deal with when Percy, or any human character, was scaled up or down in a shot.

How did you tackle the hair?
We updated a couple of key areas in our hair pipeline starting with how we would build our hair. In the past, we would make curves that look more like small groups of hairs in a clump. In this case, we made each curve its own strand of a single hair. To shade this hair in a way that allowed artists to have better control over the look, our development team created a new hair shader that used true multiple-scattering within the hair.

We then extended that hair shading model to add control over the distribution around the hair fiber to model the effect of animal hair, which tends to scatter differently than human hair. This gave artists the ability to create lots of different hair looks, which were not based on human hair, as was the case with our older models.

Was rendering so many furry characters on screen at a time an issue?
Yes. In the past this would have been hard to shade all at once, mostly due to our reliance on opacity to create the soft shadows needed for fur. With the new shading model, we were no longer using opacity at all so the number of rays needed to resolve the hair was lower than in the past. But we now needed to resolve the aliasing due to the number of fine hairs (9 million for LeBron James’ Gwangi).

We developed a few other new tools within our version of the Arnold renderer to help with aliasing and render time in general. The first was adaptive sampling, which allowed us to raise the anti-aliasing samples drastically: some pixels would use only a few samples while others would use very high sampling, whereas in the past all pixels would get the same number. This focused our render time where we needed it, helping to reduce overall render time. Our development team also added the ability for us to pick a render up from its previous point. This meant that at a lower quality level we could do all of our lighting work, get creative approval from the filmmakers and then pick the renders up and bring them to full quality without losing the time already spent.
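To make the adaptive-sampling idea concrete, here is a minimal sketch of the general technique: each pixel keeps taking samples only until its noise estimate converges, so difficult pixels (fine fur, for instance) get high sample counts while smooth ones stop early. This is an illustration of the concept only, not Imageworks’ Arnold implementation; the shade() function below is a hypothetical stand-in for a real radiance estimate.

```python
# Minimal adaptive-sampling sketch (illustrative only).
import random
import statistics

def shade(px, py):
    # Hypothetical stand-in for a path-traced radiance estimate; returns a noisy value.
    return random.gauss(0.5, 0.2)

def render_pixel(px, py, min_samples=8, max_samples=256, tolerance=0.01):
    samples = [shade(px, py) for _ in range(min_samples)]
    while len(samples) < max_samples:
        # Standard error of the mean as a simple per-pixel noise estimate.
        stderr = statistics.stdev(samples) / len(samples) ** 0.5
        if stderr < tolerance:
            break  # This pixel has converged; stop sampling early.
        samples.append(shade(px, py))
    return sum(samples) / len(samples), len(samples)

if __name__ == "__main__":
    value, used = render_pixel(0, 0)
    print(f"pixel value {value:.3f} after {used} samples")
```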

What tools were used for the hair simulations specifically, and what tools did you call on in general?
We used Maya and the Nucleus solvers for all of the hair simulations, but developed tools over them to deal with so much hair per character and so many characters on screen at once. The simulation for each character was driven by their design and motion requirements.

The Looney Tunes-inspired design and motion created a challenge around how to keep hair simulations from breaking with all of the quick and stretched motion while being able to have light wind for the emotional subtle moments. We solved all of those requirements by using a high number of control hairs and constraints. Meechee (Zendaya) used 6,000 simulation curves with over 200 constraints, while Migo needed 3,200 curves with around 30 constraints.

Stonekeeper (Common) was the most complex of the characters, with long braided hair on his head, a beard, shaggy arms and a cloak made of stones. He required a cloth simulation pass, a rigid-body simulation for the stones and hair simulated on top of the stones. Our in-house tool called Kami builds all of the hair at render time and also allows us to add procedurals to the hair at that point. We relied on those procedurals to create many varied hair looks for all of the generics needed to fill the village full of yetis.

How many different types of snow did you have?
We created three different snow systems for environmental effects. The first was a particle simulation of flakes for near-ground detail. The second was volumetric effects to create lots of atmosphere in the backgrounds that had texture and movement. We used this on each of the large sets and then stored those so lighters could pick which parts they wanted in each shot. To also help with artistically driving the look of each shot, our third system was a library of 2D elements that the effects team rendered and could be added during compositing to add details late in shot production.

For ground snow, we had different systems based on the needs in each shot. For shallow footsteps, we used displacement of the ground surface with additional little pieces of geometry to add crumble detail around the prints. This could be used in foreground or background.

For heavy interactions, like tunneling or sliding in the snow, we developed a new tool we called Katyusha. This new system combined rigid body destruction with fluid simulations to achieve all of the different states snow can take in any given interaction. We then rendered these simulations as volumetrics to give the complex lighting look the filmmakers were looking for. The snow, being in essence a cloud, allowed light transport through all of the different layers of geometry and volume that could be present at any given point in a scene. This made it easier for the lighters to give the snow its light look in any given lighting situation.

Was there a particular scene or effect that was extra challenging? If so, what was it and how did you overcome it?
The biggest challenge to the film as a whole was the environments. The story was very fluid, so design and build of the environments came very late in the process. Coupling that with a creative team that liked to find their shots — versus design and build them — meant we needed to be very flexible on how to create sets and do them quickly.

To achieve this, we began by breaking the environments into a subset of source shapes that could be combined in any fashion to build Yeti Mountain, Yeti Village and the surrounding environments. Surfacing artists then created materials that could be applied to any set piece, allowing for quick creative decisions about what was rock, snow and ice, and creating many different looks. All of these materials were created using PatternCreate networks as part of our OSL shaders. With them we could heavily leverage the portable procedural texturing between assets, making location construction quicker, more flexible and easier to dial.

To get the right snow look for all levels of detail needed, we used a combination of textured snow, modeled snow and a simulation of geometric snowfall, which all needed to shade the same. For the simulated snowfall we created a padding system that could be run at any time on an environment giving it a fresh coating of snow. We did this so that filmmakers could modify sets freely in layout and not have to worry about broken snow lines. Doing all of that with modeled snow would have been too time-consuming and costly. This padding system worked not only in organic environments, like Yeti Village, but also in the Human City at the end of the film. The snow you see in the Human City is a combination of this padding system in the foreground and textures in the background.


Creating super sounds for Disney XD’s Marvel Rising: Initiation

By Jennifer Walden

Marvel revealed “the next generation of Marvel heroes for the next generation of Marvel fans” in a behind-the-scenes video back in December. Those characters stayed tightly under wraps until August 13, when a compilation of animated shorts called Marvel Rising: Initiation aired on Disney XD. Those shorts dive into the back story of the new heroes and give audiences a taste of what they can expect in the feature-length animated film Marvel Rising: Secret Warriors that aired for the first time on September 30 on both the Disney Channel and Disney XD simultaneously.

L-R: Pat Rodman and Eric P. Sherman

Handling audio post on both the animated shorts and the full-length feature is the Bang Zoom team led by sound supervisor Eric P. Sherman and chief sound engineer Pat Rodman. They worked on the project at the Bang Zoom Atomic Olive location in Burbank. The sounds they created for this new generation of Marvel heroes fit right in with the established Marvel universe but aren’t strictly limited to what already exists. “We love to keep it kind of close, unless Marvel tells us that we should match a specific sound. It really comes down to whether it’s a sound for a new tech or an old tech,” says Rodman.

Sherman adds, “When they are talking about this being for the next generation of fans, they’re creating a whole new collection of heroes, but they definitely want to use what works. The fans will not be disappointed.”

The shorts begin with a helicopter flyover of New York City at night. Blaring sirens mix with police radio chatter as searchlights sweep over a crime scene on the street below. A SWAT team moves in as a voice blasts over a bullhorn, “To the individual known as Ghost Spider, we’ve got you surrounded. Come out peacefully with your hands up and you will not be harmed.” Marvel Rising: Initiation wastes no time in painting a grim picture of New York City. “There is tension and chaos. You feel the oppressiveness of the city. It’s definitely the darker side of New York,” says Sherman.

The sound of the city throughout the series was created using a combination of sourced recordings of authentic New York City street ambience and custom recordings of bustling crowds that Rodman captured at street markets in Los Angeles. Mix-wise, Rodman says they chose to play the backgrounds of the city hotter than normal just to give the track a more immersive feel.

Ghost Spider
Not even 30 seconds into the shorts, the first new Marvel character makes her dramatic debut. Ghost Spider (Dove Cameron), who is also known as Spider Gwen, bursts from a third-story window, slinging webs at the waiting officers. Since she’s a new character, Rodman notes that she’s still finding her way and there’s a bit of awkwardness to her character. “We didn’t want her to sound too refined. Her tech is good, but it’s new. It’s kind of like Spider-Man first starting out as a kid and his tech was a little off,” he says.

Sound designer Gordon Hookailo spent a lot of time crafting the sound of Spider Gwen’s webs, which according to Sherman have more of a nylon, silky kind of sound than Spider-Man’s webs. There’s a subliminal ghostly wisp sound to her webs also. “It’s not very overt. There’s just a little hint of a wisp, so it’s not exactly like regular Spider-Man’s,” explains Rodman.

Initially, Spider Gwen seems to be a villain. She’s confronted by the young-yet-authoritative hero Patriot (Kamil McFadden), a member of S.H.I.E.L.D. who was trained by Captain America. Patriot carries a versatile, high-tech shield that can do lots of things, like become a hovercraft. It shoots lasers and rockets too. The hoverboard makes a subtle whooshy, humming sound that’s high-tech in a way that’s akin to the Goblin’s hovercraft. “It had to sound like Captain America too. We had to make it match with that,” notes Rodman.

Later on in the shorts, Spider Gwen’s story reveals that she’s actually one of the good guys. She joins forces with a crew of new heroes, starting with Ms. Marvel and Squirrel Girl.

Ms. Marvel (Kathreen Khavari) has the ability to stretch and grow. When she reaches out to grab Spider Gwen’s leg, there’s a rubbery, creaking sound. When she grows 50 feet tall she sounds 50 feet tall, complete with massive, ground-shaking footsteps and a lower-ranged voice that’s sweetened with big delays and reverbs. “When she’s large, she almost has a totally different voice. She sounds like a large, forceful woman,” says Sherman.

Squirrel Girl
One of the favorites on the series so far is Squirrel Girl (Milana Vayntrub) and her squirrel sidekick Tippy Toe. Squirrel Girl has the power to call a stampede of squirrels. Sound-wise, the team had fun with that, capturing recordings of animals small and large with their Zoom H6 field recorder. “We recorded horses and dogs mainly because we couldn’t find any squirrels in Burbank; none that would cooperate, anyway,” jokes Rodman. “We settled on a larger animal sound that we manipulated to sound like it had little feet. And we made it sound like there are huge numbers of them.”

Squirrel Girl is a fan of anime, and so she incorporates an anime style into her attacks, like calling out her moves before she makes them. Sherman shares, “Bang Zoom cut its teeth on anime; it’s still very much a part of our lifeblood. Pat and I worked on thousands of episodes of anime together, and we came up with all of these techniques for making powerful power moves.” For example, they add reverb to the power moves and choose “shings” that have an anime style sound.

What is an anime-style sound, you ask? “Diehard fans of anime will debate this to the death,” says Sherman. “It’s an intuitive thing, I think. I’ll tell Pat to do that thing on that line, and he does. We’re very much ‘go with the gut’ kind of people.

“As far as anime style sound effects, Gordon [Hookailo] specifically wanted to create new anime sound effects so we didn’t just take them from an existing library. He created these new, homegrown anime effects.”

Quake
The other hero briefly introduced in the shorts is Quake (Chloe Bennet), voiced by the same actress who plays Daisy Johnson, aka Quake, on Agents of S.H.I.E.L.D. Sherman says, “Gordon is a big fan of that show and has watched every episode. He used that as a reference for the sound of Quake in the shorts.”

The villain in the shorts has so far remained nameless, but when she first battles Spider Gwen the audience sees her pair of super-daggers that pulse with a green glow. The daggers are somewhat “alive,” and when they cut someone they take some of that person’s life force. “We definitely had them sound as if the power was coming from the daggers and not from the person wielding them,” explains Rodman. “The sounds that Gordon used were specifically designed — not pulled from a library — and there is a subliminal vocal effect when the daggers make a cut. It’s like the blade is sentient. It’s pretty creepy.”

Voices
The character voices were recorded at Bang Zoom, either in the studio or via ISDN. The challenge was getting all the different voices to sound as though they were in the same space together on-screen. Also, some sessions were recorded with single mics on each actor while other sessions were recorded as an ensemble.

Sherman notes it was an interesting exercise in casting. Some of the actors were YouTube stars (who don’t have much formal voice acting experience) and some were experienced voice actors. When an actor without voiceover experience comes in to record, the Bang Zoom team likes to start with mic technique 101. “Mic technique was a big aspect and we worked on that. We are picky about mic technique,” says Sherman. “But, on the other side of that, we got interesting performances. There’s a realism, a naturalness, that makes the characters very relatable.”

To get the voices to match, Rodman spent a lot of time using Waves EQ, Pro Tools Legacy Pitch, and occasionally Waves UltraPitch for when an actor slipped out of character. “They did lots of takes on some of these lines, so an actor might lose focus on where they were, performance-wise. You either have to pull them back in with EQ, pitching or leveling,” Rodman explains.

One highlight of the voice recording process was working with voice actor Dee Bradley Baker, who did the squirrel voice for Tippy Toe. Most of Tippy Toe’s final track was Dee Bradley Baker’s natural voice. Rodman rarely had to tweak the pitch, and it needed no other processing or sound design enhancement. “He’s almost like a Frank Welker (who did the voice of Fred Jones on Scooby-Doo, the voice of Megatron starting with the ’80s Transformers franchise and Nibbler on Futurama).”

Marvel Rising: Initiation was like a training ground for the sound of the feature-length film. The ideas that Bang Zoom worked out there were expanded upon for the soon-to-be released Marvel Rising: Secret Warriors. Sherman concludes, “The shorts gave us the opportunity to get our arms around the property before we really dove into the meat of the film. They gave us a chance to explore these new characters.”


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter @audiojeney.


A Conversation: 3P Studio founder Haley Stibbard

Australia’s 3P Studio is a post house founded and led by artisan Haley Stibbard. The company’s portfolio of work includes commercials for brands such as Subway, Allianz and Isuzu Motor Company as well as iconic shows like Sesame Street. Stibbard’s path to opening her own post house was based on necessity.

After going on maternity leave to have her first child in 2013, she returned to her job at a content studio to find that her role had been made redundant. She was subsequently let go. Needing and wanting to work, she began freelancing as an editor — working seven days a week and never turning down a job. Eventually she realized that she couldn’t keep up with that type of schedule and took her fate into her own hands. She launched 3P Studio, one of Brisbane’s few women-led post facilities.

We reached out to Stibbard to ask about her love of post and her path to 3P Studio.

What made you want to get into post production? School?
I had a strong love of film, which I got from my late dad, Ray. He was a big film buff and would always come home from work when I was a kid with a shopping bag full of $2 movies from the video store and he would watch them. He particularly liked the crime stories and thrillers! So I definitely got my love of film and television from him.

We did not have any film courses at high school in the ‘90s, so the closest I could get was photography. Without a show reel it was hard to get a place at university in the college of art; a portfolio was a requirement and I didn’t have one. I remember I had to talk my way into the film program, and in the end I think they just got sick of me and let me into the course through the back door without a show reel — I can be very persistent when I want to be. I always had enjoyed editing and I was good at it, so in group tasks I was always chosen as the editor and then my love of post came from there.

What was your first job?
My very first job was quite funny, actually. I was working in both a shoe store and a supermarket at the time, and two post positions became available one day, an in-house editor for a big furniture chain and a job as a production assistant for a large VFX company at Movie World on the Gold Coast. Anyone who knows me knows that I would be the worst PA in the world. So, luckily for that company director, I didn’t get the PA job and became the in-house editor for the furniture chain.

I’m glad that I took that job, as it taught me so much — how to work under pressure, how to use an Avid, how to work with deadlines, what a key number was, how to dispatch TVCs to the stations, how to be quick and accurate, and how to take constructive feedback.

I made every mistake known to man, including one weekend when I forgot to remove the 4×3 safe bars from a TVC and my boss saw it on TV. I ended up having to drive to the office, climb the fence that was locked to get into the office and pull it off air. So I’ve learned a lot of things the hard way, but my boss was a very patient and forgiving man, and 18 years later is now a client of mine!

What job did you hold when you went out on maternity leave?
Before I left on maternity leave to have my son Dashiell, I was an editor for a small content company. I have always been a jack-of-all-trades and I took care of everything from offline to online, grading in Resolve, motion graphics in After Effects and general design. I loved my job and I loved the variety that it brought. Doing something different every day was very enjoyable.

After leaving that job, you started freelancing as an editor. What systems did you edit on at the time and what types of projects? How difficult a time was that for you? New baby, working all the time, etc.
I started freelancing when my son was just past seven months old. I had a mortgage and had just come off six months of unpaid maternity leave, so I needed to make a living and I needed to make it quickly. I also had the added pressure of looking after a young child under the age of one who still needed his mother.

So I started contacting advertising agencies and production companies that I thought may be interested in my skill set. I just took every job that I could get my hands on, as I was always worried that every job that I took could potentially be my last for a while. I was lucky that I had an incredibly well-behaved baby! I never said “no” to a job.

As my client base started to grow, my clients would always book me since they knew that I would never say “no” (they know I still don’t say no!). It got to the point where I was working seven days a week. I worked all day when my son was in childcare and all night after he would go to bed. I would take the baby monitor downstairs where I worked out of my husband’s ‘man den.’

As my freelance business grew, I was so lucky that I had the most supportive husband in the world who was doing everything for me: the washing, the cleaning, the cooking, bath time, as well as holding down his own full-time job as an engineer. I wouldn’t have been able to do what I did for that period of time without his support and encouragement. This time really proved to be a huge stepping stone for 3P Studio.

Do you remember the moment you decided you would start your own business?
There wasn’t really a specific moment where I decided to start my own business. It was something that seemed to just naturally come together. The busier I became, the more opportunities came about, like having enough work through the door to build a space and hire staff. I have always been very strategic in regard to the people that I have brought on at 3P, and the timing in which they have come on board.

Can you walk us through that bear of a process?
At the start of 2016, I made the decision to get out of the house. My work life was starting to blend in with my home life and I needed to have that separation. I worked out of a small office for 12 months, and about six months into that it came to a point where I was able to purchase an office space that would become our studio today.

I went to work planning the fit out for the next six months. The studio was an investment in the business and I needed a place that my clients could also bring their clients for approvals, screenings and collaboration on jobs, as well as just generally enjoying the space.

The office space was an empty white shell, but the beauty of coming into a blank canvas was that I was able to create a studio that was specifically built for post production. I was lucky in that I had worked in some of the best post houses in the country as an editor, and this being a custom build I was able to take all the best bits out of all the places I had previously worked and put them into my studio without the restriction of existing walls.

I built up the walls, ripped down the ceilings and was able to design the edit suites and infrastructure all the way down to designing and laying the cable runs myself that I knew would work for us down the line. Then we saved money and added more equipment to the studio bit by bit. It wasn’t 0 to 100 overnight; I had to work at the business development side of the company a lot, and I spent a lot of long days sitting by myself in those edit suites doing everything. Soon, word of mouth started to circulate and the business started to grow on the back of some nice jobs from my existing loyal clients.

What type of work do you do, and what gear do you call on?
3P Studio is a boutique studio that specializes in full-service post production; we also shoot content when required.

Our clients range anywhere from small content videos for the web all the way up to large commercial campaigns and everything in between.

There are currently six of us working full time in the studio, and we handle everything in-house from offline editing to VFX to videography and sound design. We work primarily in the Adobe Creative Suite for offline editing in Premiere, mixed with Maxon Cinema 4D/Autodesk Maya for 3D work, Autodesk Flame and Side Effects Houdini for online compositing and VFX, Blackmagic Resolve for color grading and Pro Tools HD for sound mixing. We use EditShare EFS shared storage nodes for collaborative working and sharing of content between the mix of creative platforms we use.

This year we have invested in a Red Digital Cinema camera as well as an EditShare XStream 200 EFS scale-out single-node server so we can become that one-stop shop for our clients. We have been able to create an amazing creative space for our clients to come and work with us, be it from the bespoke design of our editorial suites or the high level of client service we offer.

How did you build 3P Studios to be different from other studios you’ve worked at?
From a personal perspective, the culture that we have been able to build in the studio is unlike anywhere else I have worked in that we genuinely work as a team and support each other. On the business side, we cater to clients of all sizes and budgets while offering uncompromising services and experience whether they be large or small. Making sure they walk away feeling that they have had great value and exemplary service for their budget means that they will end up being a customer of ours for life. This is the mantra that I have been able to grow the business on.

What is your hiring process like, and how do you protect employees who need to go out on maternity or family leave?
When I interview people to join 3P, attitude and willingness to learn is everything to me — hands down. You can be the most amazing operator on the planet, but if your attitude stinks then I’m really not interested. I’ve been incredibly lucky with the team that I have, and I have met them along the journey at exactly the right times. We have an amazing team culture and as the company grows our success is shared.

I always make it clear that it’s swings and roundabouts and that family is always number one. I am there to support my team if they need me to be, not just inside of work but outside as well and I receive the same support in return. We have flexible working hours, I have team members with young families who, at times, are able to work both in the studio and from home so that they can be there for their kids when they need to be. This flexibility works fine for us. Happy team members make for a happy, productive workplace, and I like to think that 3P is forward thinking in that respect.

Any tips for young women either breaking into the industry or in it that want to start a family but are scared it could cost them their job?
Well, for starters, we have laws in Australia that make it illegal for any woman in this country to be discriminated against for starting a family. 3P also supports the 18 weeks paid maternity leave available to women heading out to start a family. I would love to see more female workers in post production, especially in operator roles. We aren’t just going to be the coffee and tea girls, we are directors, VFX artists, sound designers, editors and cinematographers — the future is female!

Any tips for anyone starting a new business?
Work hard, be nice to people and stay humble because you’re only as good as your last job.

Main Image: Haley Stibbard (second from left) with her team.


London design, animation studio Golden Wolf sets up shop in NYC

Animation studio Golden Wolf, headquartered in London, has launched its first stateside location in New York City. The expansion comes on the heels of an alliance with animation/VFX/live-action studio Psyop, a minority investor in the company. Golden Wolf now occupies studio space in SoHo adjacent to Psyop and its sister company Blacklist, which formerly represented Golden Wolf stateside and was instrumental to the relationship.

Among the year’s highlights from Golden Wolf are an integrated campaign for Nike FA18 Phantom (client direct), a spot for the adidas x Parley Run for the Oceans initiative (TBWA Amsterdam) in collaboration with Psyop, and Marshmello’s Fly music video for Disney. Golden Wolf also received an Emmy nomination for its main title sequence for Disney’s DuckTales reboot.

Heading up Golden Wolf’s New York office are two transplants from the London studio, executive producer Dotti Sinnott and art director Sammy Moore. Both joined Golden Wolf in 2015, Sinnott from motion design studio Bigstar, where she was a senior producer, and Moore after a run as a freelance illustrator/designer in London’s agency scene.

Sinnott comments: “Building on the strength of our London team, the Golden Wolf brand will continue to grow and evolve with the fresh perspective of our New York creatives. Our presence on either side of the Atlantic not only brings us closer to existing clients, but also positions us perfectly to build new relationships with New York-based agencies and brands. On top of this, we’re able to use the time difference to our advantage to work on faster turnarounds and across a range of budgets.”

Founded in 2013 by Ingi Erlingsson, the studio’s executive creative director, Golden Wolf is known for youth-oriented work — especially content for social media, entertainment and sports — that blurs the lines of irreverent humor, dynamic action and psychedelia. Erlingsson was once a prolific graffiti artist and, later, illustrator/designer and creative director at U.K.-based design agency ilovedust. Today he inspires Golden Wolf’s creative culture and disruptive style fed in part by a wave of next-gen animation talent coming out of schools such as Gobelins in France and The Animation Workshop in Denmark.

“I’m excited about our affiliation with Psyop, which enjoys an incredible legacy producing industry-leading animated advertising content,” Erlingsson says. “Golden Wolf is the new kid on the block, with bags of enthusiasm and an aim to disrupt the industry with new ideas. The combination of the two studios means that we are able to tackle any challenge, regardless of format or technical approach, with the support of some of the world’s best artists and directors. The relationship allows brands and agencies to have complete confidence in our ability to solve even the biggest challenges.”

Golden Wolf’s initial work out of its New York studio includes spots for Supercell (client direct) and Bulleit Bourbon (Barton F. Graf). Golden Wolf is represented in the US market by Hunky Dory for the East Coast, Baer Brown for the Midwest and In House Reps for the West Coast. Stink represents the studio for Europe.

Main Photo: (L-R) Dotti Sinnott, Ingi Erlingsson and Sammy Moore.


Reallusion intros three tools for mocap, characters

Reallusion has launched three new motion capture and character creation products: Character Creator 3, a stand-alone character creation tool; Motion Live, a realtime motion capture solution; and 3D Face Motion Capture with Live Face for iPhone X. With these products Reallusion is offering a total solution to build, morph, animate and gamify 3D characters.

Character Creator 3 (CC3), the new generation of iClone Character Creator, has separated from iClone to become a professional stand-alone tool. With a new quad base, roundtrip editing with ZBrush and photorealistic rendering using Iray, Character Creator 3 is a full character-creation solution for generating optimized 3D characters that are ready for games or intensive artistic design.

CC3 provides a new game character base with topology optimized for mobile, game and AR/VR developers. The big breakthrough is the integration with InstaLOD’s model and material optimization technologies to generate game-ready characters that are animatable on the fly, fulfilling the complete character pipeline on polygon reduction, material merge, texture baking, remeshing and LOD generation.

CC3 launches this month and is available now for preorder for $199.

iClone Motion Live, the multidevice motion capture system, connects industry-standard motion gear — including Rokoko, Leap Motion, Xsens, Faceware, OptiTrack, Noitom and iPhone X — into one solution.

Motion Live’s intuitive plug-and-play design makes connecting complicated mocap devices simple by animating custom imported characters or fully rigged 3D characters generated by Character Creator, Daz Studio or other industry-standard sources.

Reallusion has also debuted its 3D Face Motion Capture solution, which pairs the iPhone X with the Live Face app for iClone. As a result, users can record instant facial motion capture on any 3D character with an iPhone X. Reallusion has expanded the technology behind Animoji and Memoji to lift iPhone X animation and motion capture to the next level for studios and independent creators. The solution combines the power of iPhone X mocap with iClone Motion Live to blend face motion capture with Xsens, Perception Neuron, Rokoko, OptiTrack and Leap Motion for a truly realtime live experience in full-body mocap.


Review: Foundry’s Athera cloud platform

By David Cox

I’ve been thinking for a while that there are two types of post houses — those that know what cloud technology can do for them, and those whose days are numbered. That isn’t to say that the use of cloud technology is essential to the survival of a post house, but if they haven’t evaluated the possibilities of it they’re probably living in the past. In such a fast-moving business, that’s not a good place to be.

The term “cloud computing” suffers a bit from being hijacked by know-nothing marketeers and has become a bit vague in meaning. It’s quite simple though: it just means a computer (or storage) owned and maintained by someone else, housed somewhere else and used remotely. The advantage is that a post house can reduce its destructive fixed overheads by owning fewer computers and thus save money on installation and upkeep. Cloud computers can be used as and when they are needed. This allows scaling up and down in proportion to workload.

Over the last few years, several providers have created global datacenters containing upwards of 50,000 servers per site, entirely for the use of anyone who wants to “remote in.” Amazon and Google are the two biggest providers, but as anyone who has tried to harness their power for post production can confirm, they’re not simple to understand or configure. Amazon alone has hundreds of different computer “instance” types, and accessing them requires navigating through a sea of unintelligible jargon. You must know your Elastic Beanstalks from your EC2, EKS and Lambda. And make sure you’ve worked out how to connect your S3, EFS and Glacier. Software licensing can also be tricky.

The truth is, these incredible cloud installations are for cleverer people than those of us that just like to make pretty pictures. They are more for the sort that like to build neural networks and don’t go outside very much. What our industry needs is some clever company to make a nice shiny front end that allows us to harness that power using the tools we know and love, and just make it all a bit simpler. Enter Athera, from Foundry. That’s exactly what they’ve done.

What is Athera?

Athera is a platform hosted on Google Cloud infrastructure that presents a user with icons for apps such as Nuke and Houdini. Access to each app is via short-term (30-day) rental. When an available app icon is clicked, a cloud computer is commanded into action, pre-installed with the chosen app. From then on, the app is used just as if locally installed. Of course, the app is actually running on a high-performance computer located in a secure and nicely cooled datacenter environment. Provided the user has a vaguely decent Internet connection, they’re good to go, because only the user interface is being transmitted across the network, not the actual raw image data.

Apps available on Athera include Foundry’s products, plus a few others. Nuke is represented in its base form, plus a Nuke X variant, Nuke Studio, and a combination of Nuke X and Cara VR. Also available are the Mari texture painting suite, Katana look-creating app and Modo CGI modeling software.

Athera also offers access to non-Foundry products like CGI software Houdini and Blender, as well as the Gaffer management tool.

Nuke

In my first test, I rustled up an instance of Nuke Studio and one of Blender. The first thing I wanted to test was the GPU speed, as this can be somewhat variable for many cloud computer types (usually between zero and not much). I was pleasantly surprised as the rendering speed was close to that of a local Nvidia GeForce GTX 1080, which is pretty decent. I was also pleased to see that user preferences were maintained between sessions.

One thing that particularly impressed me was how I could call up multiple apps together and Athera would effectively build a network in the background to link them all up. Frames rendered out of Blender were instantly available in the cloud-hosted Nuke Studio, even though it was running on a different machine. This suggests the Athera infrastructure is well thought out because multi-machine, networked pipelines with attached storage are constructed with just a few clicks and without really thinking about it.

Access to the Athera apps is either by web browser or via a local client software called “Orbit.” In web browser mode, each app opens in its own browser tab. With Orbit, each app appears in a dedicated local window. Orbit boasts lower latency and the ability to use local hardware such as multiple monitors. Latency, which would show itself as a frustrating delay between control input and visual feedback, was impressively low, even when using the web browser interface. Generally, it was easy to forget that the app being used was not installed locally.

Getting files in and out was also straightforward. A Dropbox account can be directly linked, although a Google or Amazon S3 storage “bucket” is preferred for speed. There is also a hosted app called “Toolbox,” which is effectively a file browser to allow the management of files and folders.

The Athera platform also contains management and reporting features. A manager can set up projects and users, setting out which apps and projects a user has access to. Quotas can be set, and full reports are given as to who did what, when and with which app.

Athera’s pricing is laid out on their website and it’s interesting to drill into the costs and make comparisons. A user buys access to apps in 30-day blocks. Personally, I would like to see shorter blocks at some point to increase up/down scale flexibility. That said, render-only instances for many of the apps can be accessed on a per-second billing basis. The 30-day block comes with a “fair use” policy of 200 hours. This is a hard limit, which equates to around nine and a half hours per day for five-day weeks (which is technically known in post production as part time).

Figuring Out Cost
Blender is a good place to start analyzing cost because it’s open source (free) software, so the $244 Athera cost to run for 30 days/200 hours must be for hardware only. This equates to $1.22 per hour, which, compared to direct cloud computer usage, is pretty good value for the GPU-backed machine on offer.

Modo

Another way of comparing the amount of $244 a month would be to say that a new computer costing $5,800 depreciates at roughly this monthly rate if depreciated over two years. That is to say, if a computer of that value is kept for two years before being replaced, it effectively loses roughly $241 per month in value. If depreciated over three years, the figure is $80 per month less. Of course, that’s just comparing the cost of depreciation. Cost of ownership must also include the costs of updating, maintaining, powering, cooling, insuring, housing and repairing if (when!) it breaks down. If a cloud computer breaks down, Google has a few thousand waiting in the wings. In general, the base hardware cost seems quite competitive.

Of course, Blender is not really the juicy stuff. Access to a base Nuke, complete with workstation, is $685 per 30 days / 200 hours. Nuke X is $1,025. There are also “power” options for around 20% more, where a significantly more powerful machine is provided. Compared to running a local machine with purchased or rented software, these prices are very interesting. But when the ability to scale up and down with workload is factored in, especially being able to scale down to nothing during quiet times, the case for Athera becomes quite compelling.
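For readers who want to play with these comparisons themselves, here is a small back-of-the-envelope sketch using only the figures quoted in this review. The straight-line depreciation and the 200-usable-hours-per-block assumption are simplifications drawn from the discussion above, not Foundry’s official pricing model.

```python
# Back-of-the-envelope cost comparison using the figures quoted in this review.
# Assumptions (straight-line depreciation, 200 usable hours per 30-day block)
# are simplifications for illustration, not Foundry's pricing model.

BLOCK_COST = 244.0    # Blender on Athera: cost per 30-day block (hardware only)
BLOCK_HOURS = 200.0   # "fair use" hours included in each block

hourly_rate = BLOCK_COST / BLOCK_HOURS
print(f"Effective cloud hardware cost: ${hourly_rate:.2f}/hour")  # ~$1.22/hour

WORKSTATION_COST = 5800.0  # comparable local workstation, as discussed above
for months in (24, 36):
    monthly_loss = WORKSTATION_COST / months
    print(f"${WORKSTATION_COST:,.0f} workstation depreciated over {months} "
          f"months: about ${monthly_loss:.0f}/month")
# Depreciation alone ignores power, cooling, maintenance, insurance and
# housing, which the cloud figure already absorbs.
```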

Another helpful factor is that a single 30-day access block to a particular app can be shared between multiple users — as long as only one user has control of the app at a time. This is subject to the fair use limitation.

There is an issue if commercial (licensed) plug-ins are needed. For the time being, these can’t be used on Athera due to the obvious licensing issues relating to their installation on a different cloud machine each time. Hopefully, plugin developers will become alive to the possibilities of pay-per-use licensing, as a platform like Athera could be the perfect storefront.

Mari

Security
One of the biggest concerns about using remote computing is that of security. This concern tends to be more perceptual than real. The truth is that a Google datacenter is likely to have significantly more security than an average post company’s machine room. Also, they will be employing the best in the security business. But if material being worked on leaks out into the public, telling a client, “But I just sent it to Google and figured it would be fine,” isn’t going to sound great. Realistically, the most likely concern for security is the sending of data to and from a datacenter. A security breach inside the datacenter is very unlikely. As ever, a post producer has to remain vigilant.

Summing Up
I think Foundry has been very smart and forward thinking to create a platform that is able to support more than just Foundry products in the cloud. It would have been understandable if they just made it a storefront for alternative ways of using a Nuke (etc), but they clearly see a bigger picture. Using a platform like Athera, post infrastructure can be assembled and disassembled on demand to allow post producers to match their overheads to their workload.

Athera enables smart post producers to build a highly scalable post environment with access to a global pool of creative talent who can log in and contribute from anywhere with little more than a modest computer and internet connection.

I hate the term game-changer — it’s another term so abused by know-nothing marketeers who have otherwise run out of ideas — but Athera, or at least what this sort of platform promises to provide, is most certainly a game-changer. Especially if more apps from different manufacturers can be included.


David Cox is a VFX compositor and colorist with 20-plus years of experience. He started his career with MPC and The Mill before forming his own London-based post facility. Cox recently created interactive projects with full body motion sensors and 4D/AR experiences.

Our SIGGRAPH 2018 video coverage

SIGGRAPH is always a great place to wander around and learn about new and future technology. You can see amazing visual effects reels and learn how the work was created from the artists themselves. You can get demos of new products, and you can immerse yourself in a completely digital environment. In short, SIGGRAPH is educational and fun.

If you weren’t able to make it this year, or attended but couldn’t see it all, we would like to invite you to watch our video coverage from the show.

SIGGRAPH 2018

postPerspective Impact Award winners from SIGGRAPH 2018

postPerspective has announced the winners of our Impact Awards from SIGGRAPH 2018 in Vancouver. Seeking to recognize debut products with real-world applications, the postPerspective Impact Awards are voted on by an anonymous judging body made up of respected industry artists and professionals. It’s working pros who are going to be using new tools — so we let them make the call.

The awards honor innovative products and technologies for the visual effects, post production and production industries that will influence the way people work. They celebrate companies that push the boundaries of technology to produce tools that accelerate artistry and actually make users’ working lives easier.

While SIGGRAPH’s focus is on VFX, animation, VR/AR, AI and the like, the types of gear they have on display vary. Some are suited for graphics and animation, while others have uses that slide into post production, which makes these SIGGRAPH Impact Awards doubly interesting.

The winners are as follows:

postPerspective Impact Award — SIGGRAPH 2018 MVP Winner:

They generated a lot of buzz at the show, as well as a lot of votes from our team of judges, so our MVP Impact Award goes to Nvidia for its Quadro RTX raytracing GPU.

postPerspective Impact Awards — SIGGRAPH 2018 Winners:

  • Maxon for its Cinema 4D R20 3D design and animation software.
  • StarVR for its StarVR One headset with integrated eye tracking.

postPerspective Impact Awards — SIGGRAPH 2018 Horizon Winners:

This year we have started a new Impact Award category. Our Horizon Award celebrates the next wave of impactful products being previewed at a particular show. At SIGGRAPH, the winners were:

  • Allegorithmic for its Substance Alchemist tool powered by AI.
  • OTOY and Epic Games for their OctaneRender 2019 integration with Unreal Engine 4.

And while these products and companies didn’t win enough votes for an award, our voters believe they do deserve a mention and your attention: Wrnch, Google Lightfields, Microsoft Mixed Reality Capture and Microsoft Cognitive Services integration with PixStor.

 

Artifex provides VFX limb removal for Facebook Watch’s Sacred Lies

Vancouver-based VFX house Artifex Studios created CG amputation effects for the lead character in Blumhouse Productions’ new series for Facebook Watch, Sacred Lies. In the show, the lead character, Minnow Bly (Elena Kampouris), emerges after 12 years in the Kevinian cult missing both of her hands. Artifex was called on to remove the actress’ limbs.

VFX supervisor Rob Geddes led the Artifex team that created the hand/stump transposition, which encompassed 165 shots across the series. This involved detailed paint work to remove the real hands, while Artifex 3D artists simultaneously performed tracking and match move in SynthEyes to align the CG stump assets to the actress' forearm.

This was followed up with some custom texture and lighting work in Autodesk Maya and Chaos V-Ray to dial in the specific degree of scarring or level of healing on the stumps, depending on each scene’s context in the story. While the main focus of Artifex’s work was on hand removal, the team also created a pair of severed hands for the first episode after rubber prosthetics didn’t pass the eye test. VFX work was run through Side Effects Houdini and composited in Foundry’s Nuke.

“The biggest hurdle for the team during this assignment was working with the actress' movements and complex performance demands, especially the high level of interaction with her environment, clothing or hair,” says Adam Stern, founder of Artifex. “In one visceral sequence, Rob and his team created the actual severed hands. These were originally shot practically with prosthetics; however, the consensus was that the practical hands weren't working. We fully replaced them with CG hands, which allowed us to dial in the level of decomposition, dirt, blood and torn skin around the cuts. We couldn't be happier with the results.”

Geddes adds, “One interesting thing we discovered when wrangling the stumps is that the logical and accurate placement of the wrist bone didn't necessarily feel correct when the hands weren't there. There was quite a bit of experimentation to keep the ‘hand-less' arms from looking unnaturally long or thin.”

Artifex also created a scene of absolute devastation in a burnt forest for Episode 101, which called for matte painting and set extension of extensive fire damage that couldn't safely be achieved on set. Artifex drew on its experience in environmental VFX creation, using matte painting and projections tied together with ample rotoscope work.

Approximately 20 Artifex artists took part in Sacred Lies across 3D, compositing, matte painting, I/O and production staff.

Watch Artifex founder Adam Stern talk about the show from the floor of SIGGRAPH 2018:

Patrick Ferguson joins MPC LA as VFX supervisor

MPC’s Los Angeles studio has added Patrick Ferguson to its staff as visual effects supervisor. He brings with him experience working in both commercials and feature films.

Ferguson started out in New York and moved to Los Angeles in 2002, and he has since worked at a range of visual effects houses along the West Coast, including The Mission, where he was VFX supervisor, and Method, where he was head of 2D. “No matter where I am in the world or what I'm working on, one thing has remained consistent since I started working in the industry: I still love what I do. I think that's the most important thing.”

Ferguson has collaborated with directors such as Stacy Wall, Mark Romanek, Melina Matsoukas, Brian Billow and Carl Rinsch, and has worked on campaigns for big global brands, including Nike, Apple, Audi, HP and ESPN.

He has also worked on high-profile films, including Pirates of the Caribbean and Alice in Wonderland, and he was a member of the Academy Award-winning team for The Curious Case of Benjamin Button.

“In this new role at MPC, I hope to bring my varied experience of working on large scale feature films as well as on commercials that have a much quicker turnaround time,” he says. “It’s all about knowing what the correct tools are for the particular job at hand, as every project is unique.”

For Ferguson, there is no substitute for being on set: “Being on set is vital, as that’s when key relationships are forged between the director, the crew, the agency and the entire team. Those shared experiences go a long way in creating a trust that is carried all the way through to end of the project and beyond.”

Using VFX to bring the new Volkswagen Jetta to life

LA-based studio Jamm provided visual effects for the all-new 2019 Volkswagen Jetta campaign Betta Getta Jetta. Created by Deutsch and produced by ManvsMachine, the series of 12 spots brings the Jetta to life by combining Jamm's CG design with a color palette inspired by the car's 10-color ambient lighting system.

“The VW campaign offered up some incredibly fun and intricate challenges. Most notable was the volume of work to complete in a limited amount of time — 12 full-CG spots in just nine weeks, each one unique with its own personality,” says VFX supervisor Andy Boyd.

Collaboration was key to delivering so many spots in such a short span of time. Jamm worked closely with ManvsMachine on every shot. “The team had a very strong creative vision which is crucial in the full 3D world where anything is possible,” explains Boyd.

Jamm employed a variety of techniques for the music-centric campaign, which highlights updated features such as ambient lighting and Beats Audio. The series includes spots titled Remix, Bumper-to-Bumper, Turb-Whoa, Moods, Bass, Rings, Puzzle and App Magnet, along with 15-second teasers, all of which aired on various broadcast, digital and social channels during the World Cup.

For “Remix,” Jamm brought both a 1985 and a 2019 Jetta to life, along with a hybrid mix of the two, adding a cool layer of turntablist VFX, whereas for “Puzzle,” they cut up the car procedurally in Houdini, which allowed the team to change around the slices as needed.

For Bass, Jamm helped bring personality to the car while keeping its movements grounded in reality. Animation supervisor Stew Burris pushed the car’s performance and dialed in the choreography of the dance with ManvsMachine as the Jetta discovered the beat, adding exciting life to the car as it bounced to the bassline and hit the switches on a little three-wheel motion.

We reached out to Jamm’s Boyd to find out more.

How early did Jamm get involved?
We got involved as soon as the agency boards were client-approved. We worked hand in hand with ManvsMachine to previs each of the spots in order to lay the foundation for our CG team to execute both the agency's and the directors' vision.

What were the challenges of working on so many spots at once?
The biggest challenge was for editorial to keep up with the volume of previs options we gave them to present to the agency.

Other than Houdini, what tools did you use?
Flame, Nuke and Maya were used as well.

What was your favorite spot of the 12 and why?
Puzzle was our favorite to work on. It was the last of the bunch delivered to Deutsch, and we treated it with a more technical approach, slicing up the car like a Rubik's Cube.

 

Siggraph: StarVR One’s VR headset with integrated eye tracking

StarVR was at SIGGRAPH 2018 with the StarVR One, its next-generation VR headset built to deliver an optimal, lifelike VR experience. Featuring advanced optics, VR-optimized displays, integrated eye tracking and a vendor-agnostic tracking architecture, StarVR One is built from the ground up to support use cases in the commercial and enterprise sectors.

The StarVR One VR head-mounted display provides a nearly 100 percent human viewing angle — a 210-degree horizontal and 130-degree vertical field-of-view — and supports a more expansive user experience. Approximating natural human peripheral vision, StarVR One can support rigorous and exacting VR experiences such as driving and flight simulations, as well as tasks such as identifying design issues in engineering applications.

StarVR’s custom AMOLED displays serve up 16 million subpixels at a refresh rate of 90 frames per second. The proprietary displays are designed specifically for VR with a unique full-RGB-per-pixel arrangement to provide a professional-grade color spectrum for real-life color. Coupled with StarVR’s custom Fresnel lenses, the result is a clear visual experience within the entire field of view.

StarVR One automatically measures interpupillary distance (IPD) and instantly provides the best image adjusted for every user. Integrated Tobii eye-tracking technology enables foveated rendering, a technology that concentrates high-quality rendering only where the eyes are focused. As a result, the headset pushes the highest-quality imagery to the eye-focus area while maintaining the right amount of peripheral image detail.

StarVR One eye-tracking thus opens up commercial possibilities that leverage user-intent data for content gaze analysis and improved interactivity, including heat maps.
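As a rough illustration of the foveated rendering idea described above — and emphatically not StarVR's or Tobii's actual pipeline — the sketch below composites a cheap low-resolution render with a high-resolution inset around the reported gaze point. The image sizes, gaze coordinates and function names are made up for the example.

```python
import numpy as np

def foveated_composite(render_hi, render_lo, gaze_xy, inset_radius):
    """Toy foveated composite: keep full quality only near the gaze point.

    render_hi    : HxWx3 array, the expensive high-quality render
    render_lo    : HxWx3 array, a cheap render upscaled to the same size
    gaze_xy      : (x, y) pixel position reported by the eye tracker
    inset_radius : radius in pixels of the high-quality (foveal) region

    In a real renderer the high-quality pass is only produced for the inset;
    here both full frames exist just to keep the illustration simple.
    """
    h, w = render_hi.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    mask = (dist <= inset_radius)[..., None]      # True inside the foveal region
    return np.where(mask, render_hi, render_lo)

# Hypothetical usage with synthetic stand-in images
hi = np.random.rand(1440, 1280, 3)                    # "full quality" frame
lo = np.repeat(np.repeat(hi[::4, ::4], 4, 0), 4, 1)   # quarter-res frame, upscaled
frame = foveated_composite(hi, lo, gaze_xy=(640, 720), inset_radius=200)
```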

Two products are available with two different integrated tracking systems. The StarVR One is ready out of the box for the SteamVR 2.0 tracking solution. Alternatively, StarVR One XT is embedded with active optical markers for compatibility with optical tracking systems for more demanding use cases. It is further enhanced with ready-to-use plugins for a variety of tracking systems and with additional customization tools.

The StarVR One headset weighs 450 grams, and its ergonomic headband design evenly distributes this weight to ensure comfort even during extended sessions.

The StarVR software development kit (SDK) simplifies the development of new content or the upgrade of an existing VR experience to StarVR’s premium wide-field-of-view platform. Developers also have the option of leveraging the StarVR One dual-input VR SLI mode, maximizing the rendering performance. The StarVR SDK API is designed to be familiar to developers working with existing industry standards.

The development effort that culminated in the launch of StarVR One involved extensive collaboration with StarVR technology partners, which include Intel, Nvidia and Epic Games.

Allegorithmic’s Substance Painter adds subsurface scattering

Allegorithmic has released the latest additions to its Substance Painter tool, targeted at VFX and game studios and pros who are looking for ways to create realistic lighting effects. Substance Painter enhancements include subsurface scattering (SSS), new projection and fill tools, improvements to the UX and support for a range of new meshes.

Using Substance Painter’s newly updated shaders, artists will be able to add subsurface scattering as a default option. Artists can add a Scattering map to a texture set and activate the new SSS post-effect. Skin, organic surfaces, wax, jade and any other translucent materials that require extra care will now look more realistic, with redistributed light shining through from under the surface.

The release also includes updates to projection and fill tools, beginning with the user-requested addition of non-square projection. Images can be loaded in both the projection and stencil tool without altering the ratio or resolution. Those projection and stencil tools can also disable tiling in one or both axes. Fill layers can be manipulated directly in the viewport using new manipulator controls. Standard UV projections feature a 2D manipulator in the UV viewport. Triplanar Projection received a full 3D manipulator in the 3D viewport, and both can be translated, scaled and rotated directly in-scene.

Along with the improvements to the artist tools, Substance Painter includes several updates designed to improve the overall experience for users of all skill levels. Consistency between tools has been improved, and additions like exposed presets in Substance Designer and a revamped, universal UI guide make it easier for users to jump between tools.

Additional updates include:
• Alembic support — The Alembic file format is now supported by Substance Painter, starting with mesh and camera data. Full animation support will be added in a future update.
• Camera import and selection — Multiple cameras can be imported with a mesh, allowing users to switch between angles in the viewport; previews of the framed camera angle now appear as an overlay in the 3D viewport.
• Full glTF support — Substance Painter now automatically imports and applies textures when loading glTF meshes, removing the need to import or adapt mesh downloads from Sketchfab.
• ID map drag-and-drop — Both materials and smart materials can be taken from the shelf and dropped directly onto ID colors, automatically creating an ID mask.
• Improved Substance format support — Improved tweaking of Substance-made materials and effects thanks to visible-if and embedded presets.

Behind the Title: Weta Digital VFX supervisor Erik Winquist

NAME: Erik Winquist

COMPANY: Wellington, New Zealand’s Weta Digital

CAN YOU DESCRIBE YOUR COMPANY?
We're currently a collection of about 1,600 ridiculously talented artists and developers down at the bottom of the world who have created some of the most memorable digital characters and visual effects for film over the last couple of decades. We're named after a giant New Zealand bug.

WHAT’S YOUR JOB TITLE?
Visual Effects Supervisor

WHAT DOES THAT ENTAIL?
Making the director and studio happy without making my crew unhappy. Ensuring that everybody on the shoot has the same goal in mind for a shot before the cameras start rolling is one way to help accomplish both of those goals. Using the strengths and good ideas of everybody on your team is another.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
The amount of problem solving that is required. Every show is completely different from the last. We’re often asked to do something and don’t know how we’re going to accomplish it at the outset. That’s where it’s incredibly important to have a crew full of insanely brilliant people you can bash ideas around with.

HOW DID YOU START YOUR CAREER IN VFX?
I went to school for it. After graduating from the Ringling College of Art and Design with a degree in computer animation, I eventually landed a job as an assistant animator at Pacific Data Images (PDI). The job title was a little misleading, because although my degree was fairly character animation-centric, the first thing I was asked to do at PDI was morphing. I found that I really enjoyed working on the 2D side of things, and that sent me down a path that ultimately got me hired as a compositor at Weta on The Lord of the Rings.

HOW LONG HAVE YOU BEEN WORKING IN VFX?
I was hired by PDI in 1998, so I guess that means 20 years now. (Whoa.)

HOW HAS THE VFX INDUSTRY CHANGED IN THE TIME YOU’VE BEEN WORKING? WHAT’S BEEN GOOD? WHAT’S BEEN BAD?
Oh, there’s just been so much great stuff. We’re able to make images now that are completely indistinguishable from reality. Thanks to massive technology advancements over the years, interactivity for artists has gotten way better. We’re sculpting incredible amounts of detail into our models, painting them with giga-pixels worth of texture information, scrubbing our animation in realtime, using hardware-accelerated engines to light our scenes, rendering them with physically-based renderers and compositing with deep images and a 3D workspace.

Of course, all of these efficiency gains get gobbled up pretty quickly by the ever-expanding vision of the directors we work for!

The industry's technology advancements and flexibility have also perhaps had some downsides. Studios demand ever-shorter post schedules, prep time is reduced, and shots can be less planned out because so much can be decided in post. When the brief is constantly shifting, it's difficult to deliver the quality that everyone wants. And when the quality isn't there, suddenly the Internet starts clamoring that “CGI is ruining movies!”

But when a great idea comes together — planned well by a decisive director and executed brilliantly by a visual effects team working in concert with all of the other departments — the movie magic that results is just amazing. And that's why we're all here doing what we do.

DID A PARTICULAR FILM INSPIRE YOU ALONG THIS PATH IN ENTERTAINMENT?
There were some films I saw very early on that left a lasting impression: Clash of the Titans, The Empire Strikes Back. Later inspiration came in high school with the TV spots that Pixar was doing prior to Toy Story, and the early computer graphics work that Disney Feature Animation was employing in their films of the early ‘90s.

But the big ones that really set me off around this time were ILM’s work on Jurassic Park, and films like Jim Cameron’s The Abyss and Terminator 2. That’s why it was a particular kick to find myself on set with Jim on Avatar.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Dailies. When I challenge an artist to bring their best, and they come up with an idea that completely surprises me — one that is way better than what I had imagined or asked for — those moments are gold. Dailies is pretty much the only chance I have to see a shot for the first time the way an audience member gets to, so I pay a lot of attention to my reaction to that very first impression.

WHAT’S YOUR LEAST FAVORITE?
Getting a shot ripped from our hands by those pesky deadlines before every little thing is perfect. And scheduling meetings. Though, the latter is critically important to make sure that the former doesn’t happen.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
There was a time in grade school when I thought I might like to go into sound effects, which is a really interesting what-if scenario for me to think about. But these days, if I were to hang up my VFX hat, I imagine I would end up doing something photography-related. It's been a passion for a very long time.

Rampage

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I supervised Weta’s work on Rampage, starring Dwayne Johnson and a very large albino gorilla. Prior to that was War for the Planet of the Apes, Spectral and Dawn of the Planet of the Apes.

WHAT IS THE PROJECT/S THAT YOU ARE MOST PROUD OF?
We had a lot of fun working on Rampage, and I think audiences had a ton of fun watching it. I’m quite proud of what we achieved with Dawn of the Planet of the Apes. But I’m also really fond of what our crew turned out for the Netflix film Spectral. That project gave us the opportunity to explore some VFX-heavy sci-fi imagery and was a really interesting challenge.

WHAT TOOLS DO YOU USE DAY TO DAY?
Most of my day revolves around reviewing work and communicating with my production team and the crew, so it’s our in-house review software, Photoshop and e-mail. But I’m constantly jumping in and out of Maya, and always have a Nuke session open for one thing or another. I’m also never without my camera and am constantly shooting reference photos or video, and have been known to initiate impromptu element shoots at a moment’s notice.

WHERE DO YOU FIND INSPIRATION NOW?
Everywhere. It’s why I always have my camera in my bag.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Scuba diving and sea kayaking are two hobbies that get me out in the water, though that happens far less than I would like. My wife and I recently bought a small rural place north of Wellington. I've found that going up there and doing “farm stuff” on the weekend is a great way to recalibrate.

Animation and design studio Lobo expands to NYC’s Chinatown

After testing the New York market with a small footprint in Manhattan, creative animation/design studio Lobo has moved its operations to a new studio in New York’s Chinatown. The new location will be led by creative director Guilherme Marcondes, art director Felipe Jornada and executive producer Luis Ribeiro.

The space includes two suites, featuring the Adobe Creative Cloud apps, Autodesk Flame, Foundry Nuke and Blackmagic Resolve. There is also a finished rooftop deck and a multipurpose production space that will allow the team to scale per the specifications of each project.

Director/founder Mateus De Paula Santos will continue to oversee both New York offices creatively. Lobo's NYC team will work closely with the award-winning São Paulo office, pairing the infrastructure and horsepower of its nearly 200-person staff with the US-based creative team.

Marcondes brings a distinct styling that fuses live action and animation techniques to craft immersive worlds. His art-driven style can be seen in work for clients such as Google, Chobani, Autism Speaks, Hyundai, Pepsi, Audi and British Gas. His short films have been screened at festivals worldwide, with his Tiger winning over 20 international awards. His latest film, Caveirão, made its worldwide premiere at SXSW.

Ribeiro brings over two decades of experience running business development and producing for creative post shops in the US, including Framestore, Whitehouse Post, Deluxe, Method Studios, Beast, Company 3 and Speedshape. He also served as the US consultant for FilmBrazil for four years, connecting US and Brazilian companies in the advertising production network.

Recent work out of Lobo’s US office includes the imaginative mixed media FlipLand campaign for Chobani, the animated PSA Sunshine for Day One out of BBDO NY and an animated short for the Imaginary Friends Society out of RPA.

Our Main Image: L-R: Luis Ribeiro, Mateus De Paula Santos, Felipe Jornada and Guilherme Marcondes.

Maxon intros Cinema 4D Release 20

Maxon will be at SIGGRAPH this year showing Cinema 4D Release 20 (R20), the next iteration of its 3D design and animation software. Release 20 introduces high-end features for VFX and motion graphics artists, including node-based materials, volume modeling, CAD import and an evolution of the MoGraph toolset.

Maxon expects Cinema 4D Release 20 to be available this September for both Mac and Windows operating systems.

Key highlights in Release 20 include:
Node-Based Materials – This feature provides new possibilities for creating materials — from simple references to complex shaders — in a node-based editor. With more than 150 nodes to choose from that perform different functions, artists can combine nodes to easily build complex shading effects. Users new to a node-based material workflow still can rely on Cinema 4D’s standard Material Editor interface to create the corresponding node material in the background automatically. Node-based materials can be packaged into assets with user-defined parameters exposed in a similar interface to Cinema 4D’s Material Editor.

MoGraph Fields – New capabilities in this procedural animation toolset offer an entirely new way to define the strength of effects by combining falloffs — from simple shapes, to shaders or sounds to objects and formulas. Artists can layer Fields atop each other with standard mixing modes and remap their effects. They can also group multiple Fields together and use them to control effectors, deformers, weights and more.
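To give a sense of what layering falloff-based fields with mixing modes means in practice, here is a small generic sketch in Python (my own illustration of the concept, not Cinema 4D's implementation or API): each “field” returns a 0–1 strength per point, and layered fields are combined with simple mixing operations before driving an effect.

```python
import numpy as np

points = np.random.uniform(-2, 2, size=(1000, 3))   # e.g. clone positions

def spherical_falloff(pts, center, radius):
    """Strength 1 at the center, fading to 0 at the given radius."""
    d = np.linalg.norm(pts - center, axis=1)
    return np.clip(1.0 - d / radius, 0.0, 1.0)

def linear_falloff(pts, axis=0, start=-1.0, end=1.0):
    """Strength ramping from 0 to 1 along one axis."""
    return np.clip((pts[:, axis] - start) / (end - start), 0.0, 1.0)

a = spherical_falloff(points, center=np.zeros(3), radius=1.5)
b = linear_falloff(points, axis=0)

# "Mixing modes" for layered fields, plus a simple remap curve
strength_multiply = a * b
strength_max      = np.maximum(a, b)
strength_remapped = strength_multiply ** 2.0

# The resulting per-point strength would then weight an effector
# (position offset, scale, color, deformer influence and so on).
print(strength_remapped.min(), strength_remapped.max())
```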

CAD Data Import – Popular CAD formats can be imported into Cinema 4D R20 with a drag and drop. A new scale-based tessellation interface allows users to adjust detail to build amazing visualizations. Step, Solidworks, JT, Catia V5 and IGES formats are supported.

Volume Modeling – Users can create complex models by adding or subtracting basic shapes in Boolean-type operations using Cinema 4D R20’s OpenVDB–based Volume Builder and Mesher. They can also procedurally build organic or hard-surface volumes using any Cinema 4D object, including new Field objects. Volumes can be exported in sequenced .vdb format for use in any application or render engine that supports OpenVDB.
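The Boolean-style combination of volumes that a volume builder performs can be illustrated with plain signed distance fields. The sketch below is generic NumPy, not Cinema 4D or OpenVDB code: union, intersection and subtraction reduce to simple min/max operations on sampled distance values, and a mesher would then extract the zero isosurface of the result.

```python
import numpy as np

# Sample two signed distance fields (SDFs) on a small grid:
# negative inside the shape, positive outside.
n = 64
axis = np.linspace(-1.0, 1.0, n)
xs, ys, zs = np.meshgrid(axis, axis, axis, indexing="ij")

sphere = np.sqrt(xs**2 + ys**2 + zs**2) - 0.6                          # radius-0.6 sphere
box = np.maximum.reduce([np.abs(xs), np.abs(ys), np.abs(zs)]) - 0.45   # axis-aligned cube

# Boolean-style volume operations on the distance fields
union        = np.minimum(sphere, box)    # either shape
intersection = np.maximum(sphere, box)    # both shapes
difference   = np.maximum(sphere, -box)   # sphere with the cube carved out

# A mesher (e.g. marching cubes) would extract the 0-isosurface of the
# chosen field; here we just report how many voxels fall inside each result.
for name, field in [("union", union), ("intersection", intersection), ("difference", difference)]:
    print(name, int((field < 0).sum()), "interior voxels")
```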

ProRender Enhancements — ProRender in Cinema 4D R20 extends the GPU-rendering toolset with key features including subsurface scattering, motion blur and multipasses. Also included are Metal 2 support, an updated ProRender core, out-of-core textures and other architectural enhancements.

Core Technology Modernization — As part of the transition to a more modern core in Cinema 4D, R20 comes with substantial API enhancements, the new node framework, further development on the new modeling framework and a new UI framework.

During SIGGRAPH, Maxon will have guest artists presenting at its booth each day of the show. Presentations will be live-streamed on C4DLive.com.

 

 

SIGGRAPH conference chair Roy C. Anthony: VR, AR, AI, VFX, more

By Randi Altman

Next month, SIGGRAPH returns to Vancouver after turns in Los Angeles and Anaheim. This gorgeous city, whose convention center offers a water view, is home to many visual effects studios providing work for film, television and spots.

As usual, SIGGRAPH will host many presentations, showcase artists’ work, display technology and offer a glimpse into what’s on the horizon for this segment of the market.

Roy C. Anthony

Leading up to the show — which takes place August 12-16 — we reached out to Roy C. Anthony, this year's conference chair. For his day job, Anthony recently joined Ventuz Technology as VP, creative development. There, he leads initiatives to bring Ventuz's realtime rendering technologies to creators of sets, stages and ProAV installations around the world.

SIGGRAPH is back in Vancouver this year. Can you talk about why it’s important for the industry?
There are 60-plus world-class VFX and animation studios in Vancouver. There are more than 20,000 film and TV jobs, and more than 8,000 VFX and animation jobs in the city.

So, Vancouver's rich production-centric communities are leading the way in film and VFX production for television and onscreen films. They are also busy with new media content, games work and new workflows, including those for AR/VR/mixed reality.

How many exhibitors this year?
The conference and exhibition will play host to over 150 exhibitors on the show floor, showcasing the latest in computer graphics and interactive technologies, products and services. Due to the increase in the amount of new technology that has debuted in the computer graphics marketplace over this past year, almost one quarter of this year's 150 exhibitors will be presenting at SIGGRAPH for the first time.

In addition to the traditional exhibit floor and conferences, what are some of the can’t-miss offerings this year?
We have increased the presence of virtual, augmented and mixed reality projects and experiences — and we are introducing our new Immersive Pavilion in the east convention center, which will be dedicated to this area. We’ve incorporated immersive tech into our computer animation festival with the inclusion of our VR Theater, back for its second year, as well as inviting a special, curated experience with New York University’s Ken Perlin — he’s a legendary computer graphics professor.

We’ll be kicking off the week in a big VR way with a special session following the opening ceremony featuring Ivan Sutherland, considered by many as “the father of computer graphics.” That 50-year retrospective will present the history and innovations that sparked our industry.

We have also brought Syd Mead, a legendary “visual futurist” (Blade Runner, Tron, Star Trek: The Motion Picture, Aliens, Time Cop, Tomorrowland, Blade Runner 2049), who will display an arrangement of his art in a special collection called Progressions. This will be seen within our Production Gallery experience, which also returns for its second year. Progressions will exhibit more than 50 years of artwork by Syd, from his academic years to his most current work.

We will have an amazing array of guest speakers, including those featured within the Business Symposium, which is making a return to SIGGRAPH after an absence of a few years. Among these speakers are people from the Disney Technology Innovation Group, Unity and Georgia Tech.

On Tuesday, August 14, our SIGGRAPH Next series will present a keynote speaker each morning to kick off the day with an inspirational talk. These speakers are Tony DeRose, a senior scientist from Pixar; Daniel Szecket, VP of design for Quantitative Imaging Systems; and Bob Nicoll, dean of Blizzard Academy.

There will be a 25th anniversary showing of the original Jurassic Park movie, being hosted by “Spaz” Williams, a digital artist who worked on that film.

Can you talk about this year’s keynote and why he was chosen?
We’re thrilled to have ILM head and senior VP, ECD Rob Bredow deliver the keynote address this year. Rob is all about innovation — pushing through scary new directions while maintaining the leadership of artists and technologists.

Rob is the ultimate modern-day practitioner, a digital VFX supervisor who has been disrupting ‘the way it’s always been done’ to move to new ways. He truly reflects the spirit of ILM, which was founded in 1975 and is just one year younger than SIGGRAPH.

A large part of SIGGRAPH is its slant toward students and education. Can you discuss how this came about and why this is important?
SIGGRAPH supports education in all sub-disciplines of computer graphics and interactive techniques, and it promotes and improves the use of computer graphics in education. Our Education Committee sponsors a broad range of projects, such as curriculum studies, resources for educators and SIGGRAPH conference-related activities.

SIGGRAPH has always been a welcoming and diverse community, one that encourages mentorship, and acknowledges that art inspires science and science enables advances in the arts. SIGGRAPH was built upon a foundation of research and education.

How are the Computer Animation Festival films selected?
The Computer Animation Festival has two programs, the Electronic Theater and the VR Theater. Because of the large volume of submissions for the Electronic Theater (over 400), there is a triage committee for the first phase. The CAF chair then takes the high-scoring pieces to a jury of industry professionals. The jury's selections then become the Electronic Theater show pieces.

The selections for the VR Theater are made by a smaller panel, comprised mostly of sub-committee members, who watch each film in a VR headset and vote.

Can you talk more about how SIGGRAPH is tackling AR/VR/AI and machine learning?
Since SIGGRAPH 2018 is about the theme of “Generations,” we took a step back to look at how we got where we are today in terms of AR/VR, and where we are going with it. Much of what we know today couldn't have been possible without the research and creation of Ivan Sutherland's 1968 head-mounted display. We have a fantastic panel celebrating the 50-year anniversary of his HMD, which is widely considered the first VR HMD.

AI tools are newer, and we created a panel that focuses on trends and the future of AI tools in VFX, called “Future Artificial Intelligence and Deep Learning Tools for VFX.” This panel gains insight from experts embedded in both the AI and VFX industries and gives attendees a look at how different companies plan to further their technology development.

What is the process for making sure that all aspects of the industry are covered in terms of panels?
Every year new ideas for panels and sessions are submitted by contributors from all over the globe. Those submissions are then reviewed by a jury of industry experts, and it is through this process that panelists and cross-industry coverage are determined.

Each year, the conference chair oversees the program chairs, then each of the program chairs become part of a jury process — this helps to ensure the best program with the most industries represented from across all disciplines.

In the rare case a program committee feels they are missing something key in the industry, they can try to curate a panel in, but we still require that that panel be reviewed by subject matter experts before it would be considered for final acceptance.

 

Review: Maxon Cinema 4D R19 — an editor’s perspective

By Brady Betzel

It’s time for my yearly review of Maxon’s Cinema 4D. Currently in Release 19, Cinema 4D comes with a good amount of under-the-hood updates. I am an editor, first and foremost, so while I dabble in Cinema 4D, I am not an expert. There are a few things in the latest release, however, that directly correlate to editors like me.

Maxon offers five versions of Cinema 4D, not including BodyPaint 3D. There is the Cinema 4D Lite, which comes free with Adobe After Effects. It is really an amazing tool for discovering the world of 3D without having to invest a bunch of money. But, if you want all the goodies that come packed into Cinema 4D you will have to pay the piper and purchase one of the other four versions. The other versions include Prime, Broadcast, Visualize and Studio.

Cinema 4D Prime is the first version that includes features like lighting, cameras and animation. Cinema 4D Broadcast includes all of Cinema 4D Prime’s features as well as the beloved MoGraph tools and the Broadcast Library, which offers pre-built objects and cameras that will work with motion graphics. Cinema 4D Visualize includes Cinema 4D Prime features as well, but is geared more toward architects and designers. It includes Sketch and Toon, as well as an architecturally focused library of objects and presets. Cinema 4D Studio includes everything in the other versions plus unlimited Team Render nodes, a hair system, a motion/object tracker and much more. If you want to see a side-by-side comparison you can check out Maxon’s website.

What’s New
As usual, there are a bunch of new updates to Cinema 4D Release 19, but I am going to focus on my top three, which relate to the workflows and processes I might use as an editor: New Media Core, Scene Reconstruction and the Spherical Camera. Obviously, there are a lot more updates — including the incredible new OpenGL Previews and the cross-platform ProRender, which adds the ability to use AMD or Nvidia graphics cards — but to keep this review under 30 pages I am focusing on the three that directly impact my work.

New Media Core
Buckle up! You can now import animated GIFs into Cinema 4D. So, yes, you can import animated GIFs into Cinema 4D Release 19, but that is just one tiny aspect of this update. The really big addition is the QuickTime-free support of MP4 videos. Now MP4s can be imported and used as textures, as well as exported with different compression settings, directly from within Cinema 4D's interface — all of this without the need to have QuickTime installed. What is cool about this is that you no longer need to export image-based file sequences to get your movie inside of Cinema 4D. The only slowdown will be how long it takes Cinema 4D R19 to cache your MP4 so that you will have realtime playback… if possible.

In my experience, it doesn’t take that much time, but that will be dependent on your system performance. While this is a big under-the-hood type of update, it is great for those quick exports of a scene for approval. No need to take your export into Adobe Media Encoder, or something else, to squeeze out an MP4.

Scene Reconstruction
First off, for any new Cinema 4D users out there, Scene Reconstruction is convoluted and a little thick to wade through. However, if you work with footage and want to add motion graphics work to a scene, you will want to learn this. You can check out this Cineversity.com video for an eight-minute overview.

Cinema 4D's Scene Reconstruction works by tracking your footage to generate point clouds; then, once you go back and enable Scene Reconstruction, it creates a mesh from the resulting scene calculation that Cinema 4D computes. In the end, depending on how compatible your footage is with scene detection (contrasting textures and good lighting will help), you will get a camera view with matching scene vertices that are then fully animatable. I, unfortunately, do not have enough time to recreate a set or scene inside of Cinema 4D R19; however, it feels like Maxon is getting very close to fully automated scene reconstruction, which would be very, very interesting.

I’ve seen a lot of ideas from pros on Twitter and YouTube that really blow my mind, like 3D scanning with a prosumer camera to recreate objects inside of Cinema 4D. Scene Reconstruction could be a game-changing update, especially if it becomes more automated as it would allow base users like me to recreate a set in Cinema 4D without having to physically rebuild a set. A pretty incredible motion graphics-compositing future is really starting to emerge from Cinema 4D.

In addition, the Motion Tracker has received some updates, including manual tracking on the R, G, B or a custom channel — viewed in the Tracker View — and the tracker can now work with a circular tracking pattern.

Spherical Camera
The last update, which seems incredible, is the new Spherical Camera. It probably stands out to me because I have been testing and using a lot more 360 video, but the ability to render your scene using a spherical camera is here. You can now create a scene, add a camera and enable spherical mapping, including equirectangular, cubic string, cubic cross or even Facebook's 360 video 3×2 cubic format. In addition, there is now support for stereo VR as well as dome projection.
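For readers unfamiliar with the equirectangular option, the mapping itself is straightforward: each view direction from the camera is converted to longitude and latitude and then to image coordinates. Here is a minimal generic sketch (the axis convention is a choice for the example, not Cinema 4D's code):

```python
import math

def direction_to_equirectangular(x, y, z, width, height):
    """Map a unit view direction to pixel coordinates in an equirectangular image.

    Longitude (rotation around the vertical axis) spans the full image width;
    latitude (up/down) spans the image height. Assumes -Z is the forward axis.
    """
    lon = math.atan2(x, -z)                        # -pi .. pi
    lat = math.asin(max(-1.0, min(1.0, y)))        # -pi/2 .. pi/2
    u = (lon / (2.0 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v

# The forward direction lands in the center of a 4096x2048 frame:
print(direction_to_equirectangular(0.0, 0.0, -1.0, 4096, 2048))   # ~ (2048.0, 1024.0)
```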

Other Updates
In addition to the three top updates I've covered, there are numerous other updates that are just as important, if not more so, to those who use Cinema 4D in other ways. In my opinion, the rendering updates take the cake. Also, as mentioned before, there is support for both Nvidia and AMD GPUs, multi-GPU support, and incredible viewport enhancements with Physical Rendering and interactive Preview Renders in the viewport.

Under MoGraph, there is an improved Voronoi Fracture system (the ability to destroy an object quickly), including improved performance for high polygon counts and detailing to give the fracture a more realistic look. There is also a new Sound Effector that allows for interactive MoGraph creation to the beat of the music. One final note: a new, modern modeling kernel has been introduced, giving more capability for things like polygon reduction and levels of detail.

In the end, Cinema 4D Release 19 is a huge under-the-hood update that will please legacy users but will also attract new users with AMD-based GPUs. Moreover, Maxon seems to be slowly morphing Cinema 4D into a total 2D and 3D modeling and motion graphics powerhouse, much like the way Blackmagic’s Resolve is for colorists, video editors, VFX creators and audio mixers.

Summing Up
With updates like Scene Reconstruction and improved motion tracking, Maxon gives users like me the ability to work way above our pay grade, compositing 3D objects onto our 2D footage. If any of this sounds interesting to you and you are a paying Adobe Creative Cloud user, download and open up Cinema 4D Lite along with After Effects, then run over to Cineversity and brush up on the basics. Cinema 4D Release 19 is an immensely powerful 3D application that is blurring the boundaries between 3D and 2D compositing. With Cinema 4D Release 19's large library of objects, preset scenes and lighting setups you can be experimenting in no time, and I didn't even touch on the modeling and sculpting power!


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

iPi Motion Capture V.4 software offers live preview

iPi Soft, makers of motion capture technology, has introduced iPi Motion Capture Version 4, the next version of its markerless motion capture software. Version 4 includes realtime preview capability for a single depth sensor. Other new features and enhancements include support for new depth sensors (Intel RealSense D415/D435, ASUS Xtion2 and Orbbec Astra/Astra Pro); improved arm and body tracking; and support for action cameras such as GoPro and SJCAM. With Version 4, iPi Soft also introduces a perpetual license model.

The realtime tracking feature in Version 4 uses iPi Recorder — a free application from iPi Soft for capturing, playing back and processing video recordings from multiple cameras and depth sensors — to communicate with the iPi Mocap Studio software, which tracks in realtime and instantly transfers motion to 3D characters. This allows users to see how the motion will look on a 3D character and improve the motion accordingly at the time of acting and recording, without the need to redo multiple iterations of acting, recording and offline tracking.

Live tracking results can then be stored to disk for additional offline post processing, such as tracking refinement (to improve tracking accuracy), manual corrections and jitter removal.

iPi Mocap Version 4 currently includes the realtime tracking feature for a single depth sensor only. iPi Soft is scheduled to bring realtime functionality for multiple depth sensors to users by the end of this year.

Development of plug-ins for popular 3D game engines, including Unreal Engine and Unity, is also underway.

Tracking Improvements include:
• Realtime tracking of human performance for live preview with a single depth sensor (Basic and Pro configurations). Motion can be transferred to a 3D character.
• Improved individual body-part tracking: after performing initial tracking, users can redo tracking for selected body parts to fix tracking errors more quickly.
• Head and hand tracking, when used in conjunction with Sony's PS Move motion controller, now takes joint limits into account.

New Sensors and cameras supported include:
• Support for Intel RealSense D415/D435 depth cameras, ASUS Xtion2 motion sensors and Orbbec Astra/Astra Pro 3D cameras.
• Support for action cameras such as GoPro and SJCAM, including wide-angle cameras, allows users to get closer to the camera, decreasing space requirements.
• The ability to calibrate the individual internal parameters of any camera helps users correctly reconstruct 3D information from video for improved overall tracking quality.
• The ability to load unsynchronized videos from multiple cameras and then use iPi Recorder to sync and convert footage to the .iPiVideo format used by iPi Mocap Studio.
• Support for fast-motion action cameras — the video frame rate can reach up to 120fps to allow for tracking of extremely fast motions.

Version 4’s perpetual license is not time-limited and includes two years with full support and software updates. Afterwards, users have the option to subscribe to a support plan to continue receiving full support and software updates. Alternatively, they can continue using their latest software version.

iPi Motion Capture Version 4 is also available as a subscription-based model. Prices range from $165 to $1995 depending on the version of software (Express, Basic, Pro) and the duration of subscription.

The Basic edition provides support for up to 6 Sony PS3 Eye cameras or 2 Kinect sensors, and tracking of a single actor. The Pro version features full 16-camera/four depth sensors capability and can track up to three actors. A 30-day free trial for Version 4 is available.

 

Boxx’s Apexx SE capable of 5.0GHz clock speed

Boxx Technologies has introduced the Apexx Special Edition (SE), a workstation featuring a professionally overclocked Intel Core i7-8086K limited edition processor capable of reaching 5.0GHz across all six of its cores.

In celebration of the 40th anniversary of the Intel 8086 (the processor that launched x86 architecture), Intel provided Boxx with a limited number of the high-performance CPUs ideal for 3D modeling, animation and CAD workflows.

Available only while supplies last and custom-configured to accelerate Autodesk's 3ds Max and Maya, Adobe CC, Maxon Cinema 4D and other pro apps, the Apexx SE features a six-core, 8th-generation Intel Core i7-8086K limited edition processor professionally overclocked to 5.0GHz. Unlike PC gaming systems, the liquid-cooled Apexx SE sustains that frequency across all cores — even in the most demanding situations.

With its compact, metallic-blue chassis, the Apexx S3 supports up to three Nvidia or AMD Radeon Pro graphics cards, solid-state drives and 2600MHz DDR4 memory. Boxx is offering a three-year warranty on the systems.

“As longtime Intel partners, Boxx is honored to be chosen to offer this state-of-the-art technology. Lightly threaded 3D content creation tools are limited by the frequency of the processor, so a faster clock speed means more creating and less waiting,” explains Boxx VP, marketing and business development Shoaib Mohammad.

Luke Scott to run newly created Ridley Scott Creative Group

Filmmaker Ridley Scott has brought all of his RSA Films-affiliated companies together in a multi-business restructure to form the Ridley Scott Creative Group. The group aims to strengthen the network across the related companies to take advantage of emerging opportunities across all entertainment genres, as well as to build on their existing work in film, television, branded entertainment, commercials, VR, short films, documentaries, music video, design and animation, and photography.

Ridley Scott

Luke Scott will assume the role of global CEO, working with founder Ridley Scott and partners Jake and Jordan Scott to oversee the future strategic direction of the newly formed group.

“We are in a new golden age of entertainment,” says Ridley Scott. “The world’s greatest brands, platforms, agencies, new entertainment players and studios are investing hugely in entertainment. We have brought together our talent, capabilities and creative resources under the Ridley Scott Creative Group, and I look forward to maximizing the creative opportunities we now see unfolding with our executive team.”

The companies that make up the RSCG will continue to operate autonomously but will now offer clients synergy under the group offering.

The group includes commercial production company RSA Films, which produced such ads as Apple's 1984, Budweiser's Super Bowl favorite Lost Dog and, more recently, Adidas Originals' Original is Never Finished campaign, as well as branded content for Johnnie Walker, HBO, Jaguar, Ford, Nike and the BMW Films series; the music video production company founded by Jake Scott, Black Dog Films (Justin Timberlake, Maroon 5, Nicki Minaj, Beyoncé, Coldplay, Björk and Radiohead); the entertainment marketing company 3AM; commercial production company Hey Wonderful, founded by Michael Di Girolamo; newly founded UK commercial production company Darling Films; and film and television production company Scott Free (Gladiator, Taboo, The Martian, The Good Wife), which continues to be led by David W. Zucker, president, US television; Kevin J. Walsh, president, US film; and Ed Rubin, managing director, UK television/film.

“Our Scott Free Films and Television divisions have an unprecedented number of movies and shows in production,” reports Luke Scott. “We are also seeing a huge appetite for branded entertainment from our brand and agency partners to run alongside high-quality commercials. Our entertainment marketing division 3AM is extending its capabilities to all our partners, while Black Dog is moving into short films and breaking new, world-class talent. It is a very exciting time to be working in entertainment.”

 

 

 

 

 

MPC directs, provides VFX, color for Fiji Water spot

To launch the new Fiji Sports Cap bottle, Wonderful Agency came up with the concept of a drop of rain from the clouds high above Fiji making its way down through the pristine environment to showcase the source of their water. The story then transitions to the Fiji Water Sports Cap bottle being used by athletes during a tough workout.

To bring that idea to life, Wonderful Agency turned to MPC and creative director Michael Gregory, who made his MPC directorial debut, helming both spots while also leading the VFX team. These spots will air on primetime television.

Gregory’s skills in visual effects made him the perfect fit as director of the spots, since it was essential to seamlessly depict the raindrop’s fast-paced journey through the different environments. MPC was tasked with building the CG water droplet that falls from the sky, while reflecting and magnifying the beauty of the scenes shot in Fiji.

“It was key to film in low light, cloudy conditions in Fiji,” explains Gregory. “We shot over five days with a drone in the most remote parts of the main island, taking the drone above the clouds and shooting many different angles on the descent, so we had all the textures and plates we needed.”

For the Fiji section, Gregory and team used the Zenmuse X7 camera that sits on a DJI Inspire 2 drone. “We chose this because logistically it was easier to get it to Fiji by plane. It's a much smaller drone and isn't as battery-hungry. You can only travel with a certain number of batteries on a plane, and the larger drones that carry the Reds and Alexas would need the batteries shipped by sea. Being smaller meant it had much longer flying times. That meant we could have it in the air at height for much longer periods. The footage was edited in Adobe Premiere.”

MPC’s VFX team then got to work. According to lead compositor Oliver Caiden, “The raindrop itself was simulated CG geometry that then had all of the different textures refracted through the UV map. This process was also applied to the droplet reflections, mapping high dynamic range skies onto the outside, so we could achieve a more immersive and richer effect.”

This process enabled the compositors to animate the raindrops and have full control over motion blur, depth of focus, refraction and reflections, making them as realistic and multifaceted as possible. The shots were a mixture of multiple plates, matte painting, 2D and CG clouds, which ultimately created a sequence that felt seamless with reality. The spot was graded by MPC’s colorist Ricky Gausis.
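The refraction trick Caiden describes — bending whatever sits behind and around the drop through its surface — ultimately comes down to Snell's law. Here is a minimal generic sketch of the direction calculation (standard optics, not MPC's compositing setup):

```python
import numpy as np

def refract(incident, normal, eta):
    """Refracted direction via Snell's law in vector form.

    incident : unit vector pointing toward the surface
    normal   : unit surface normal pointing back toward the incident ray
    eta      : ratio of refractive indices n1/n2 (air to water is about 1.0/1.33)
    Returns the refracted unit direction, or None at total internal reflection.
    """
    cos_i = -float(np.dot(incident, normal))
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0.0:
        return None  # total internal reflection
    return eta * incident + (eta * cos_i - np.sqrt(k)) * normal

# A ray hitting the droplet head-on passes straight through (no bending):
print(refract(np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0]), 1.0 / 1.33))
```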

The tools used by MPC were Autodesk Maya, Side Effects Houdini, Adobe Photoshop as well as Foundry Nuke for the VFX and FilmLight Baselight for color.

The latest Fiji campaign marks a continued partnership between MPC and Wonderful Agency — they previously handled VFX for Wonderful Pistachios and Wonderful Halos spots — but this latest campaign sees MPC managing the production from start to finish.

Therapy Studios provided the final audio mix.

 

Creating the 3D stereo version of Ron Howard’s Solo: A Star Wars Story

At Stereo D, a company that converts 2D theatrical content into stereoscopic 3D imagery, creating 3D editions of blockbuster movies takes hundreds of people working under the leadership of a dedicated stereo producer and stereographer team. For Solo: A Star Wars Story, the studio’s Tim Broderick and Yo Aoki talked to us about their collaboration and how they supported Ron Howard’s 3D vision for the film.

Yo Aoki

Can you talk about how your partnership works to produce a 3D version of every frame in a feature film?
Tim Broderick: Think of it this way — Yo is the artistic lead and I’m the one trying to give him as much of a runway as possible. My job entails a lot more of the phone calls to the client, setting up schedules and our pipelines and managing the material as it comes over to us. Yo’s focus is mainly in the theater and working with the stereo supervisors about his approach to each show. He homes in on what the style will be, analyzes each scene and builds a stereo plan for each.

Tim Broderick

Yo Aoki: Then we come together with the clients and all talk through the work… what they’re going for and how we can add to it using the 3D. We both have our strengths in the room with the clients and we play off each other very well and comfortably, which is why we’ve been doing this together for so long.

Did your collaboration begin at Stereo D?
Aoki: Yes. My first major project at Stereo D was James Cameron’s Titanic, but my history with Tim began with Godzilla and Jurassic Park 3D, where Tim was our production supervisor. Then we worked together with Tim serving as stereo producer on Jurassic World, followed by The BFG, Rogue One, The Mummy, Ready Player One and Solo.

So, you’re not the team that leads the 3D on the Star Wars episodes?
Broderick: Lucasfilm thought it made more sense to assign a dedicated team to the stories and another to the episodes as a practical matter since the schedules were overlapping.

How do you describe the aesthetic that applies to the 3D of Solo: A Star Wars Story compared to Rogue One?
Aoki: For Rogue One, Gareth Edwards and John Knoll preferred realism, while in Solo, Ron Howard and Rob Bredow preferred to make the 3D fun and comfortable. Different approaches and both worked for the experience each film offered.

Can you explain the difference between 3D realism and 3D fun and comfortable?
Broderick: Sure. For “realism,” we take more of an analytical approach, making sure objects are correct in their spatial relationships. Is that rock the correct size in perspective relative to K2? No. It needs to be pushed positive so it reads larger. Whereas with “fun and comfortable,” we certainly keep things realistic, but the analytical approach is more Yo putting himself in the audience's shoes, looking at each shot and asking: 1) What can we do with this shot to make it just a little more of a fun ride for the audience? And 2) Is it comfortable? What do we need to sacrifice in terms of realism to help that?
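For context on what “pushed positive” means here: in a stereo pair, an object's on-screen parallax determines where the viewer perceives it in depth. The standard stereoscopy relationship (general textbook math, not Stereo D's internal formula) is

\[ Z_{\text{perceived}} = \frac{e \cdot D}{e - P} \]

where \(e\) is the viewer's interocular separation, \(D\) is the distance to the screen and \(P\) is the on-screen parallax (positive when the right-eye image of a point sits to the right of its left-eye image). At \(P = 0\) the object appears on the screen plane; pushing \(P\) positive moves the perceived point behind the screen, which is what makes a background element like the rock read as farther away — and therefore larger in world terms — relative to a foreground character.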

What kind of guidance did director Ron Howard provide to inform your approach to the 3D?
Aoki: Take the Kessel Run sequence, which is a wild flight through fog and debris crashing into the ship as the team runs from TIE fighters and a giant monster. There was discussion early on about playing the sequence very shallow to avoid miniaturization of the Falcon and keep it to actual scale, but by doing that you lose the fun of the 3D tunnel effect of the environment they’re in, so Ron said to cheat the reality and make it more fun. There are also rocks and debris flying past the window, which Ron wanted to pull closer to the ship for more drama. The scale is unconventional, but it’s a lot more fun playing it this way. It adds to the impact of the scene.

You’ve worked with some pretty amazing directors — Steven Spielberg, Ron Howard, James Cameron — what’s the best part of the job?
Aoki: It’s been great. After each show we walk away with more ideas to offer the next filmmaker to use in their 3D storytelling.

Broderick: Definitely one of the most fun parts of what we do is spending time with filmmakers learning what they’re going for, presenting how we can aid that story in terms of 3D and having the creative flexibility to see it through.

Carbon creates four animated thrill ride spots

Carbon was called on once again by agency Cramer-Krasselt to create four spots — Railblazer, Twisted Timbers, Steel Vengeance and HangTime — for Cedar Fair Entertainment Company, which owns and operates 11 amusement parks across North America.

Following the success of Carbon’s creepy 2017 teaser film for the ride Mystic Timbers, Cramer-Krasselt senior art director David Vaca and his team presented Carbon with four ideas, each a deep dive into the themes and backstories of the rides.

Working across four 30-second films simultaneously and leading a “tri-coastal” team of artists, CD Liam Chapple shared directing duties with lead artists Tim Little and Gary Fouchy. The studio has offices in NYC, LA and Chicago.

According to Carbon executive producer/managing director Phil Linturn, “We soaked each script in the visual language, color grades, camera framing and edits reminiscent of our key inspiration films for each world — a lone gun-slinger arriving to town at sundown in the wild west, the carefree and nostalgic surf culture of California, and extreme off-roading adventures in the twisting canyons of the southwest.”

Carbon’s technical approach to these films was dictated by the fast turnaround and having all films in production at the same time. To achieve the richness, tone and detail required to immerse the viewer in these worlds, Carbon blended stylized CGI with hyper-real matte paintings and realistic lighting to create a look somewhere between their favorite children’s storybooks, contemporary manga animation, the Spaghetti Westerns of Sergio Leone and one or two of their favorite Pixar films.

Carbon called on Side Effects Houdini (partially for its procedural ocean toolkit), Autodesk Maya and its nCloth toolset, Autodesk 3ds Max, Pixologic's ZBrush for 3D sculpting and matte painting, Foundry's Nuke for compositing and FilmLight's Baselight for color.

“We always love working with Cramer-Krasselt,” concludes Linturn. “They come with awesome concepts and an open mind, challenging us to surprise them with each new deck. This was a fantastic opportunity to expand on our body of full CGI-direction work and to explore some interesting looks and styles. It also allowed us to come up with some very creative workflows across all three offices and to achieve two minutes of animation in just a few weeks. The fact that these four films are part of a much bigger broadcast campaign comprising 70-plus broadcast spots is a testament to the focus and range of the production team.”

Quick Chat: Maxon focuses on highlighting women in mograph

By Randi Altman

Maxon is well known for having hands-on artists as presenters at their trade show booths. Highlighting these artists and their work — and streaming their presentations — has been intrinsic in what they do. Sometimes they also have artists on hand at press luncheons, where Maxon talks about their Cinema 4D product and advances that have been made while at the same time highlighting users’ work.

This year’s conversation was a bit different in that it featured an all-woman panel of motion graphics artists, including moderator Tuesday McGowan, Penelope Nederlander, Julia Siemón, Caitlin Cadieux, Robyn Haddow and Sarah Wickliffe. They talked about their career experiences, the everyday challenges women face to achieve recognition and gender parity in a male-dominated work environment and strategies women can use to advance their careers.

During the panel (which you can watch here), McGowan talked about how motion graphics, in general, is a very young field, and how she believes an evolution in diversity is coming, and not just for women. "I think it will happen through awareness and panel discussions like this one that will actually increase the visibility and get the word out — to influence Generation Z, the next generation of women and people of color, to get involved," says McGowan. "We think that the young women of Generation Z, who are more familiar and very confident with technology, will branch out and become great 3D artists."

Paul Babb

We reached out to Maxon US president/CEO Paul Babb to find out why promoting women in motion graphics is a cause he has dug into wholeheartedly.

How did the idea for a panel like this come about?
During one of the trade shows, we got beat up a little on one of the public forums for not having enough female artists presenting. At first I was angry because, for one thing, I think we had more female presenters than any other company at the show, and we have historically been sensitive to this.

Then I thought about it and realized we are one of the few companies streaming our presentations. So people who do not attend shows have no idea. So I decided we had to be more proactive about it and tried to come up with some ideas to encourage more women to come out to the shows. The idea of the panel seemed to be a great starting point — what better way to find out what would encourage women in the industry than to give successful women an opportunity to discuss it and share their insights?

Did the panel turn out the way you hoped?
Absolutely. Really, all I had hoped for was to contribute to a conversation that has already started. I wanted to give some great female artists a forum to share their experiences and hopefully encourage a new generation of female artists.

The conversation was great — candid, constructive and informative.

The panel generated frank conversation about the gender gap in 3D motion graphics. Topics examined included negotiating wages, mansplaining and being "talked over," the importance of flexible work time for women raising families, the need for women to seek out industry mentorship, and tips for industry leaders to make workplace life inclusive for women.

What were a few takeaways from the panel?
I wasn’t surprised by any of the comments — good and bad. The biggest takeaway is that we have to find ways of encouraging more women in the industry, and encourage those in the industry to be more vocal.

What are the challenges they face and what do you think needs to change in the industry in general?
Other than the usual issues of a male-dominated society, the one thing that struck me is how women need to be more empowered — to toot their own horn, to recognize they are an expert and to stand up and be heard.

Maxon announced sponsorship of a new Women in Motion Graphics website. Tell us about the site offerings. Do you expect to continue to promote female graphic artists?
The Women in Motion Graphics website is intended as a resource to help women get ahead in the industry and to promote industry role models. It includes the video of the panel we organized at NAB 2018 featuring successful female artists, each working in different areas of the motion graphics business, addressing their struggles in the workplace. The artists who were on the panel share their insights into the motion graphics industry, its influencers and best practices for artists to achieve recognition. There is also a page with links to motion graphics education and training resources.

We will continue to sponsor the site and allow the women involved to drive its growth and evolution. In addition, we will continue to make great effort to get more women to come present for us at industry events, focus on doing customer profiles that feature women artists as well as sponsor scholarships and events that promote women in the industry.

Combining 3D and 360 VR for The Cabiri: Anubis film

Whether you are using 360 VR or 3D, both allow audiences to feel in on the action and emotion of a film narrative or performance, but combine the two together and you can create a highly immersive experience that brings the audience directly into the “reality” of the scenes.

This is exactly what film producers and directors Fred Beahm and Bogdan Darev have done in The Cabiri: Anubis, a 3D/360VR performance art film showing at the Seattle International Film Festival's (SIFF) VR Zone from May 18 through June 10.

The Cabiri is a Seattle-based performance art group that creates stylistic and athletic dance and entertainment routines at theater venues throughout North America. The 3D/360VR film can now be streamed from the Pixvana app to the new Oculus Go headset, which is specifically designed for 3D and 360 streaming and viewing.

“As a director working in cinema to create worlds where reality is presented in highly stylized stories, VR seemed the perfect medium to explore. What took me by complete surprise was the emotional impact, the intimacy and immediacy the immersive experience allows,” says Darev. “VR is truly a medium that highlights our collective responsibility to create original and diverse content through the power of emerging technologies that foster curiosity and the imagination.”

“Other than a live show, 3D/360VR is the ideal medium for viewers to experience the rhythmic movement in The Cabiri’s performances. Because they have the feeling of being within the scene, the viewers become so engaged in the experience that they feel the emotional and dramatic impact,” explains Beahm, who is also the cinematographer, editor and post talent for The Cabiri film.

Beahm has a long list of credits to his name, and a strong affinity for the post process that requires a keen sense of the look and feel a director or producer is striving to achieve in a film. “The artistic and technical functions of the post process take a film from raw footage to a good result, and with the right post artist and software tools to a great film,” he says. “This is why I put a strong emphasis on the post process, because along with a great story and cinematography, it’s a key component of creating a noteworthy film. VR and 3D require several complex steps, and you want to use tools that simplify the process so you can save time, create high-quality results and stay within budget.”

For The Cabiri film, he used the Kandao Obsidian S camera, filming in 6K 3D360, then SGO's Mistika VR for its stereo 3D optical-flow stitching. He edited in Adobe's Premiere Pro CC 2018 and finished in Assimilate's Scratch VR, using its 3D/360VR painting, tracking and color grading tools. He then delivered in 4K 3D360 to Pixvana's Spin Studio.

“Scratch VR is fast. For example, with the VR transform-and-vector paint tools I can quickly paint out the nadir, or easily delete unwanted artifacts like portions of a camera rig and wires, or even a person. It’s also easy to add in graphics and visual effects with the built-in tracker and compositing tools. It’s also the only software I use that renders content in the background while you continue working on your project. Another advantage is that Scratch VR will automatically connect to an Oculus headset for viewing 3D and 360,” he continues. “During our color grading session, Bogdan would wear an Oculus Rift headset and give me suggestions about changes I should make, such as saturation and hues, and I could quickly do these on the fly and save the versions for comparison.”

Quick Chat: FOM’s Adam Espinoza on DirecTV graphics campaign

By Randi Altman

Denver-based creative brand firm Friends of Mine (FOM) recently completed a graphics package for DirecTV Latin America that they had been working on for almost a year. The campaign, which first aired at the start of the 2017/2018 soccer season in August, has been airing on DirecTV’s Latin American network since then.

In addition to providing the graphics packages that ran on DirecTV Sports throughout the European Football League seasons (in Spain, England and France), FOM is currently creating graphics that will promote the World Cup games, set to take place between June 14 and July 15 in Russia.

Adam Espinoza

We reached out to FOM’s co-founder and creative director, Adam Espinoza, to find out more.

How early did you get involved in the piece? How much input did you have?
We were invited to the RFP process two months before the season started. We fully developed the look and concept from their written creative brief and objectives. We had complete input on the direction and execution.

What was it the client wanted to accomplish, and what did you suggest? 
The client wanted to convey the excitement of soccer throughout the season. There were two objectives: highlight the exclusive benefits of DirecTV for its subscribers while at the same time showing footage of goals and celebrations from the best players and teams in the world. We suggested the idea of intersections and digital energy.

Why did you think the visuals you created told the story the client needed? 
The digital energy graphics created a kinetic movement inherent in the sport while connecting the players around the league. The intersections concept helped to integrate the world of soccer seamlessly with DirecTV’s message.

What exactly did you provide services-wise on the piece? 
Conceptual design, art direction, 2D and 3D animation and video editing.

What gear/tools did you use for each of those services? 
Our secret sauce along with Cinema 4D, Adobe Premiere, Adobe After Effects and Adobe Illustrator.

What was the most challenging part of the process?
Evolving the look from month to month throughout the season and building to the climactic finals, while still staying true to the original concept.

What’s was your favorite part of the process?
Being able to fine tune a concept over such a stretch of time.

Framestore London adds joint heads of CG

Framestore has named Grant Walker and Ahmed Gharraph as joint heads of CG at its London studio. The two will lead the company’s advertising, television and immersive work alongside head of animation Ross Burgess.

Gharraph has returned to Framestore after a two-year stint at ILM, where he was lead FX artist on Star Wars: The Last Jedi, receiving a VES nomination for Outstanding Effects Simulations in a Photoreal Feature. His credits on the advertising side as CG supervisor include Mog's Christmas Calamity, Sainsbury's 2015 festive campaign, and Shell V-Power Shapeshifter, directed by Carl Erik Rinsch.

Walker joined Framestore in 2009, and in his time at the company he has worked across film, advertising and television, building a portfolio as a CG artist with campaigns including Freesat's VES-nominated Sheldon. He was also instrumental in Framestore's digital recreation of Audrey Hepburn in Galaxy's 2013 campaign Chauffeur for AMV BBDO. Most recently, he was BAFTA-nominated for his creature work on the Black Mirror episode "Playtest."

Freefolk New York hires Flame artist Brandon Danowski

Freefolk’s New York studio has beefed up its staff with the addition of Brandon Danowski as lead Flame artist. Danowski joins Freefolk after spending four years at The Mill’s New York City office, where he worked on the NFL’s 2015 Super Bowl Babies spot, among other things.

His resume includes working on brand campaigns for Samsung, The New York Times, HBO, Verizon, Cadillac, Lincoln and TD Ameritrade. "I learned so much at The Mill, and it was brilliant being part of a global company of that scale. I'm now excited about working in the atmosphere of a boutique and am delighted to have joined the roster at Freefolk."

Danowski started in the industry in 2010 as an intern with Beast, Company 3 and Method in Atlanta. He was brought on full time when his internship ended.

Working across TV, film and commercial projects, Freefolk provides full-service post and VFX, including 2D & 3D visual effects, high-end color grading, shoot supervision, animation, design, concept and direction.

VR at NAB 2018: A Parisian’s perspective

By Alexandre Regeffe

Even though my cab driver from the airport to my hotel offered these words of wisdom — "What happens in Vegas, stays in Vegas" — I've decided not to listen to him and instead share the things that impressed me in the VR world at NAB 2018.

Back in September of 2017, I shared with you my thoughts on the VR offerings at the IBC show in Amsterdam. In case you don’t remember my story, I’m a French guy who jumped into the VR stuff three years ago and started a cinematic VR production company called Neotopy with a friend. Three years is like a century in VR. Indeed, this medium is constantly evolving, both technically and financially.

So what has become of VR today? Lots of different things. VR is a big bag where people throw AR, MR, 360, LBE, 180 and 3D. And from all of that, XR (Extended Reality) was born, which means everything.

Insta360 Titan

But if this blurred concept leads to some misunderstanding, is it really good for consumers? Even we pros find it difficult to explain what exactly VR is at the moment.

While at NAB, I saw a presentation from Nick Bicanic during which he used the term "frameless media." And, thank you, Nick, because I think that is exactly what's in this big bag called VR… or XR. Today, we consume a lot of content through a frame, which is our TV, computer, smartphone or cinema screen. VR allows us to go beyond the frame, and this is a very important shift for cinematographers and content creators.

But enough concepts and ideas, let us start this journey on the NAB show floor! My first stop was the VR pavilion, also called the “immersive storytelling pavilion” this year.

My next stop was to see SGO Mistika. For over a year, the SGO team has been delivering incredible stitching software with Mistika VR. In my opinion, there is a "before" and an "after" this tool. Thanks to its optical-flow capabilities, you can achieve a seamless stitch 99% of the time, even in very difficult shooting situations. The latest version of the software added features like stabilization, keyframe capabilities, more camera presets and easy integration with Kandao and Insta360 camera profiles. VR pros used Mistika's booth as a sort of base camp, meeting the development team directly.

A few steps from Mistika was Insta360, with a large, yellow booth. This Chinese company is a success story with the consumer product Insta360 One, a small 360 camera for the masses. But I was more interested in the Insta360 Pro, their 8K stereoscopic 3D360 flagship camera used by many content creators.

At the show, Insta360's big announcement was the Titan, a premium version of the Insta360 Pro offering better lenses and sensors; it will be available later this year. Oh, and there was the lightfield camera prototype, the company's first step into the volumetric capture world.

Another interesting camera manufacturer at the show was Human Eyes Technology, presenting their Vuze+. With this affordable 3D360 camera, you can dive into stereoscopic 360 content and learn the basics of this technology. Side note: The Vuze+ was chosen by National Geographic to shoot some stunning sequences aboard the International Space Station.

Kandao Obsidian

My favorite VR camera company, Kandao, was at NAB showing new features for its Obsidian R and S cameras. One of the best is its 6DoF capability. With this technology, you can generate a depth map from the camera directly in Kandao Studio, the stitching software that comes free when you buy an Obsidian. With the combination of a 360 stitched image and a depth map, you can "walk" into your movie. It's an awesome technique for better immersion. For me, this was by far the best innovation in VR technology presented on the show floor.

The live capabilities of the Obsidian cameras have also been improved with dedicated Kandao Live software, which allows you to live stream 4K stereoscopic 360 with optical-flow stitching on the fly! And, of course, do not forget their new Qoocam camera. With its three-lens-equipped little stick, you can either shoot VR180 stereoscopic or 360 monoscopic, while using depth map technology to refocus or replace the background in post — all with a simple click. Thanks to all these innovations, Kandao is now a top player in the cinematic VR industry.
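
(A technical aside on the 6DoF point above: the usual approach, and it is an assumption that Kandao Studio works along these lines rather than a description of its implementation, is to lift each equirectangular pixel to a 3D point using the depth map, then re-project those points for a slightly translated viewpoint. A minimal Python/NumPy sketch with illustrative names:)

import numpy as np

def equirect_to_points(depth):
    """Lift an equirectangular depth map (H x W, metres) to 3D points."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    lon = (u / w) * 2.0 * np.pi - np.pi           # yaw, -pi..pi
    lat = np.pi / 2.0 - (v / h) * np.pi           # pitch, +pi/2..-pi/2
    dirs = np.stack([np.cos(lat) * np.sin(lon),   # unit view directions
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    return dirs * depth[..., None]

def reproject(points, new_eye):
    """Longitude/latitude/range of each point as seen from a translated eye."""
    rel = points - np.asarray(new_eye, dtype=float)
    rng = np.linalg.norm(rel, axis=-1)
    lon = np.arctan2(rel[..., 0], rel[..., 2])
    lat = np.arcsin(np.clip(rel[..., 1] / np.maximum(rng, 1e-9), -1.0, 1.0))
    return lon, lat, rng

# Example: view the scene from 10cm to the right of the original camera.
# points = equirect_to_points(depth_map)
# lon, lat, rng = reproject(points, new_eye=[0.10, 0.0, 0.0])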

One Kandao competitor is ZCam, who came with a couple of new products. The first is the ZCam V1, a 3D360 camera with a tiny form factor. It's very interesting for shooting scenes where things are very close to the camera: it keeps good stereoscopy even on nearby objects, which is a major issue with most VR cameras and rigs. The second is the small E2; while it's not really a VR camera, it can be used as an underwater rig, for example.

ZCam K1 Pro

The ZCam product range is really impressive and squarely targets professionals, from the ZCam S1 to the ZCam V1 Pro. Important note: Take a look at their K1 Pro, a VR180 camera, if you want to produce high-end content for the Google VR180 ecosystem.

Another VR camera at NAB was Samsung's Round, offering stereoscopic capabilities. This relatively compact device comes with a proprietary software suite for stitching and viewing 360 shots. Thanks to its IP65 rating, you can use this camera outdoors in difficult weather conditions like rain, dust or snow. It was great to see live streaming of 4K 3D360 operating on the show floor, using several Round cameras combined with powerful Next Computing hardware.

VR Post
Adobe Creative Cloud 2018 remains the must-have tool for getting through VR post production without losing your mind. Numerous 360-specific functionalities have been added over the last year, after Adobe bought the Mettle Skybox suite. The most impressive feature is that you can now stay in your 360 environment for editing. You just put your Oculus Rift headset on, manipulate your Premiere timeline with the Touch controllers and proceed to edit your shots. Think of it as a Minority Report-style editing interface! I am sure we can expect more amazing VR tools from Adobe this year.

Google’s Lightfield technology

Mettle was at the Dell booth showing their new Adobe CC 360 plugin, called Flux. After an impressive Mantra release last year, Flux is now available for VR artists, allowing them to do 3D volumetric fractals and to create entire futuristic worlds. It was awesome to see the results in a headset!

Distributing VR
So once you have produced your cinematic VR content, how can you distribute it? One option is to use the Liquid Cinema platform. They were at NAB with a major update and some new features, including seamless transitions between a “flat” video and a 360 video. As a content creator you can also manage your 360 movies in a very smart CMS linked to your app and instantly add language versions, thumbnails, geoblocking, etc. Another exciting thing is built-in 6DoF capability right in the editor with a compatible headset — allowing you to walk through your titles, graphics and more!

I can't leave without mentioning Voysys for live-streaming VR; Kodak PixPro and its new cameras; Google's next move into lightfield technology; Bonsai's launch of a new version of the Excalibur rig; and many other great manufacturers, software editors and partners.

See you next time, Sin City.

The-Artery embraces a VR workflow for Mercedes spots

The-Artery founder and director Vico Sharabani recently brought together an elite group of creative artists and skilled technologists to create a cross-continental VR production pipeline for Mercedes-Benz’s Masters tournament brand campaign called “What Makes Us.”

Emmy-nominated cinematographer Paul Cameron (Westworld) and VFX supervisor Rob Moggach co-directed the project, which features a series of six intense broadcast commercials — including two fully CGI spots that were "shot" in a completely virtual world.

The agency and The-Artery team, including Vico Sharabani (third from the right).

This pair of 30-second commercials, First and Can’t, are the first to be created using a novel, realtime collaborative VR software application called Nu Design with Atom View technology. While in Los Angeles, Cameron worked within a virtual world, choosing camera bodies and lenses inside the space that allowed him to “shoot” for POV and angles that would have taken weeks to complete in the real world.

The software enabled him to grab and move the camera while all artistic camera direction was recorded virtually and used for final renders. This allowed both Sharabani, who was in NYC, and Moggach, who was in Toronto, to interact live and in realtime as if they were standing together on a physical set.

We reached out to Sharabani, Cameron and Moggach for details on VR workflow, and how they see the technology impacting production and creativity.

How did you come to know about Nurulize and the Nu Design Atom View technology?
Vico Sharabani: Scott Metzger, co-founder of Nurulize, is a long-time friend, colleague and collaborator. We have all been supporting each other’s careers and initiatives, so as soon as the alpha version of Nu Design was operational, we jumped on the opportunity of deploying it in real production.

How does the ability to shoot in VR change the production paradigm moving forward?
Rob Moggach: From scout to pre-light to shoot, through to dailies and editorial, it allows us to collaborate on digital productions in a traditional filmmaking process with established roles and procedures that are known to work.

Instead of locking animated productions into a rigid board, previs, animation workflow, a director can make decisions on editorial and find unexpected moments in the capture that wouldn’t necessarily be boarded and animated otherwise. Being able to do all of this without geographical restriction and still feel like you’re together in the same room is remarkable.

What types of projects are ideal for this new production pipeline?
Sharabani: The really beautiful thing for The-Artery, as a first-time user of this technology, is to prove that this workflow can be used by companies like us on every project, and not only in films by Steven Spielberg and James Cameron. The obvious ideal fit is for projects like fully CGI productions; previs of big CGI environments that need to be considered in photography; virtual previs of scouted locations in remote or dangerous places; blocking of digital sets in pre-existing greenscreen or partially built stages; and multiple remote creative teams that need to share a vision and input.

What are the specific benefits?
Moggach: With a virtual pipeline, we are able to…
1) Work much faster than traditional previs to quickly capture multiple camera setups.
2) Visualize environments and CGI with a camera in-hand to find shots you didn’t know were there on screen.
3) Interact closely regardless of location and truly feel together in the same place.
4) Use known filmmaking processes, allowing us to capitalize on established wisdom and experience.

What impacts will it have to creativity?
Paul Cameron: For me, the VR workflow added a great impact to the overall creative approach for both commercials. It enabled me to go into the environment and literally grab a camera, move around the car, be in the middle of the car, pull the camera over the car. Basically, it allowed me to put the camera in places I always wanted to put the camera, but it would take hours to get cranes or scaffold for different positions.

The other fascinating thing is that you are able to scale the set up and down. For instance, I was able to scale the car down to 25% of its normal size and make a very drastic camera move over the car, handheld with a VR camera, and with the combination of slowing it down and smoothing it out a bit, we were able to design camera moves that were very organic and very natural.

I think it also allowed me to achieve a greater understanding of the set size and space, the geometry of the set and the relationship of the car to the set. In the past, it would be a process of going through a wireframe, waiting for the rendering — in this case, the car — and programming camera moves. It basically helps with conceptualization of camera moves and shot design in a new way for me.

Also being a director of photography, it is very empowering to be able to grab the camera literally with a controller and move through that space. Again, it just takes a matter of seconds to make very dramatic camera moves, whereas even on set it could take upwards of an hour or two to move a technocrane and actually get a feel for that shot, so it is very empowering overall.
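
(To make the set-scaling trick Cameron describes above concrete: a handheld path recorded against a 25%-scale set can be mapped back to full scale by scaling the translations up, stretching the timing and lightly smoothing. The sketch below is purely illustrative, with hypothetical function and parameter names; it is not the Nu Design/Atom View pipeline.)

import numpy as np

def retarget_handheld_path(positions, times, set_scale=0.25,
                           time_stretch=2.0, smooth_taps=5):
    """positions: (N, 3) camera translations recorded in the scaled set (metres);
    times: (N,) timestamps in seconds."""
    full_scale = np.asarray(positions, dtype=float) / set_scale   # 25% set -> 4x move
    stretched = np.asarray(times, dtype=float) * time_stretch     # slow the move down
    kernel = np.ones(smooth_taps) / smooth_taps                   # simple box filter
    smoothed = np.stack([np.convolve(full_scale[:, i], kernel, mode="same")
                         for i in range(3)], axis=1)
    return smoothed, stretched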

What does it now allow directors to achieve?
Cameron: One of the better features of the VR workflow is that you can actually just teleport yourself around the set while you are inside of it. So, basically, you picture yourself inside this set, and with a controller in each hand you have the ability to kind of teleport yourself to different perspectives. In this case, that meant the automobile and the wireframe geometry of the set, so it gives you a very good idea of the perspectives from different angles, and you can move around really quickly.

The other thing that I found fascinating was that not only can you move around this set, in this case, I was able to fly… upwards of about 150 feet and look down on the set. This was, while you are immersed in the VR world, quite intoxicating. You are literally flying and hovering above the set, and it kind of feels like you are standing on a beam with no room to move forward or backward without falling.

Paul Cameron

So the ability to move around in an endless set perspective-wise and teleport yourself around and above the set looking down, was amazing. In the case of the Can’t commercial, I was able to teleport on the other side of the wind turbine and look back at the automobile.

Although we had the 3D CADs of sets in the past, and we were able to travel around and look at camera positions, somehow the immediacy and the power of being in the VR environment with the two controllers was quite powerful. I think for one of the sessions I had the glasses on for almost four hours straight. We recorded multiple camera moves, and everybody was quite shocked that I was in the environment for that long. But for me, it was like being on a set, almost like a pre-pre-light or something, where I was able to have my space as a director and move around and get to see my angles and design my shots.

What other tools did you use?
Sharabani: Houdini for CG, Redshift (with support from GridMarkets) for rendering, Nuke for compositing, Flame for finishing, Resolve for color grading and Premiere for editing.

V-Ray GPU is Chaos Group’s new GPU rendering architecture

Chaos Group has redesigned its V-Ray RT product. The new V-Ray GPU rendering architecture, according to the company, effectively doubles the speed of production rendering for film, broadcast and design artists. This represents a redesign of V-Ray's kernel structure, delivering both high-performance speed and accuracy.

Chaos Group has renamed V-Ray RT to V-Ray GPU, wanting to establish the latter as a professional production renderer capable of supporting volumetrics, advanced shading and other smart tech coming down the road.

Current internal tests have V-Ray GPU running 80 percent faster on Nvidia's Titan V, a big gain from previous benchmarks on the Titan Xp, and up to 10-15x faster than an Intel Core i7-7700K, with the same high level of accuracy across interactive and production renders. (For its testing, Chaos Group uses a battery of production scenes to benchmark each release.)

“V-Ray GPU might be the biggest speed leap we’ve ever made,” says Blagovest Taskov, V-Ray GPU lead developer at Chaos Group. “Redesigning V-Ray GPU to be modular makes it much easier for us to exploit the latest GPU architectures and to add functionality without impacting performance. With our expanded feature set, V-Ray GPU can be used in many more production scenarios, from big-budget films to data-heavy architecture projects, while providing more speed than ever before.”

Representing over two years of dedicated R&D, V-Ray GPU builds on nine years of GPU-driven development in V-Ray. New gains for production artists include:

• Volume Rendering – Fog, smoke and fire can be rendered with the speed of V-Ray GPU. It’s compatible with V-Ray Volume Grid, which supports OpenVDB, Field3D and Phoenix FD volume caches.
• Adaptive Dome Light – Cleaner image-based lighting is now faster and even more accurate.
• V-Ray Denoising – Offering GPU-accelerated denoising across render elements and animations.
• Nvidia AI Denoiser – Fast, real-time denoising based on Nvidia OptiX AI-accelerated denoising technology.
• Interface Support – Instant filtering of GPU-supported features lets artists know what’s available in V-Ray GPU (starting within 3ds Max).

V-Ray GPU will be made available as part of the next update of V-Ray Next for 3ds Max beta.

Review: Wacom Mobile Studio Pro 16

By Sophia Kyriacou

As a designer who appreciates how products are packaged, my first impression of the Mobile Studio Pro when it arrived was very positive. I loved the minimalism of the design and how everything was carefully considered and placed within the box. It felt special and aimed at a creative who had earned it.

While I have been using Wacom tablet products professionally for over 20 years, I had never previously used a Wacom PC tablet. I didn’t have any expectations or preconceived ideas of what this box of tricks was capable of. It was great to stumble across things by accident, and it felt very intuitive.

The Mobile Studio Pro is a self-contained computer tablet device. You don’t need a laptop or a desktop to use it, as everything is within one handy box. You can, however, plug the device into a separate monitor should you need the additional screen. While I haven’t done this yet myself, I would imagine a second monitor would be handy when you need to spread out your application interface.

The tablet arrives with Windows 10 pre-installed. It's essentially a Windows PC rather than a mobile tablet device. You simply install your software as you would on your laptop or desktop workstation, and off you go. It's as simple as that. I installed my Adobe Creative Cloud apps, with a special interest in Photoshop, as it was perfect for painting and drawing, and even sketching initial ideas. I also installed Allegorithmic's Substance Painter, a brilliant painting package I use for my texture mapping, as well as the Studio version of Maxon Cinema 4D, which I predominantly use for exporting geometry ready for texture mapping in Substance Painter.

Digging In
Immediately, I liked the idea of being able to see where my pen was pointing at the screen before the pen had actually touched the screen itself. The little circular indicator was very simple and very useful, as it allowed me to target my pen exactly where it was going. Simple things count. The pen is very comfortable to hold, slightly weightier than other tablet pens but not unwieldy. It has a sturdy rubber grip and an attachment should I want to let the pen hang from the tablet itself.

 

The overall design is minimal, with a set of function keys and a wheel to one side. All can be easily changed to suit your needs. The screen is semi-matte and perfectly smooth; I personally prefer a glossy screen, as the blacks look more crushed, but I appreciate that is a matter of taste. The surface is super-smooth and the pen glides easily without slipping as it could on a glossy, shiny surface. I did notice some minor light bleeding at the bottom edge in three places, but it was only slightly noticeable on start-up and never interfered with my workflow.

The 16-inch model is perfect for working between 3D and 2D texturing, although that again is a personal choice. The full-size version comes with an Nvidia Quadro M1000M card with 4GB of GDDR5, which is super-punchy — it handles high-resolution imagery and geometry with no lag. Texturing in 4K+ is demanding, so this high-spec box of tricks is essential. The pixel resolution is highly respectable at 3,840x2,160, and along with an i7-6567U processor and 16GB of RAM you have a very powerful tablet that perhaps provides more power than you may need, but it is there to be taken advantage of when you do need it. The Pro Pen 2 is very accurate, with no lag, and comfortable; switching between the pen and the touch function feels very natural.

One of the drawbacks for me is the weight of the top-spec model — my MacBook Pro weighs 4.46 pounds and the Mobile Studio Pro weighs 4.85 pounds. As the name suggests, it's a "mobile studio." For me it felt mobile only from room to room, and it is not a device I could carry around with me for too long. The battery drains very quickly (about four hours of battery time), but given the amount of hardware inside this punchy unit, that is to be expected. The battery brick is very large, so if you are carrying the Mobile Studio out and about, you have to consider this and all the peripherals. While USB-C is still new compared to standard USB, I would have preferred to see perhaps two USB-C ports and one standard USB port, but I guess this is a forward-thinking product and an adapter will do the trick, so this can be forgiven.

I found it very useful to pair the tablet with an inexpensive wireless Logitech keyboard with a trackpad, as constantly going back and forth between the on-screen keyboard and the application was a little cumbersome and broke up my workflow. What I would like to see is a simple button in the top corner: click it once to bring the keyboard up and again to dismiss it, rather than having to dig into the bottom menus.

Real-World Work
When I took on the task of reviewing the Wacom Mobile Studio Pro, I thought it would be best suited to a project that benefitted from heavy use of texture mapping and texture painting. I decided to start working on a "concept film" where I would use the tablet to texture all the 3D assets. As this is a work-in-progress project, I have included with my review an asset I textured using the Wacom Mobile Studio Pro. I plan to finish the film this year, so please come back to see the results.

I am often inspired by sounds and music. Concepts have always been my main focus, and I was inspired by a piece of cinematic music that I thought would work incredibly well. It's a short sequence about emotion. I want to take the viewer through a series of emotions and leave them thinking, so the piece stays with them. At the moment I am inspired by concept art and surrealism, and I like how chain reactions take you places. Some scenes may be logical, others not, but a thread will link them all together. The opening of the track has a piano piece in which the keys travel downwards. To express this, I built a spiral staircase travelling in a downward motion, taking the viewer into another world.

Pricing
For the MobileStudio Pro 13, prices vary with storage capacity: $1,500 for a 64GB SSD, $1,800 for 128GB, $2,000 for 256GB and $2,500 for 512GB.

As for the MobileStudio Pro 16, the less expensive $2,400 model incorporates an Nvidia Quadro M600M GPU with 2GB of video RAM and a 256GB SSD, while the $3,000 model has an Nvidia Quadro M1000M with 4GB of video RAM and a 512GB SSD.

Summing Up
Would I recommend the Mobile Studio Pro? Absolutely. It’s powerful and it’s a computer, so I am able to install and use my software with ease. It works very well within my wider workflow, which is how I prefer to work. I think its success also comes down to the fact that this is a computer tablet device and not just a tablet that relies only on apps.


Sophia Kyriacou is an award-winning motion designer and 3D artist with over 20 years working in the broadcast industry. She is also a full voting member at BAFTA and has presented her various projects on the international stage at IBC for Maxon. She splits her time between freelancing and the BBC in London. Follow her on Twitter (@SophiaKyriacou) and Instagram (@sophiakyriacou).

 

Behind the Title: Flight School concept artist Ruby Wang

NAME: Ruby Wang

COMPANY: Dallas-based Flight School Studio

CAN YOU DESCRIBE YOUR COMPANY?
Flight School is a place where everybody gets together and comes up with an awesome idea and then works on that idea to create something that is wonderful. We use AR, VR or other tech to tell stories. We do that with our own ideas and we also work with clients.

WHAT’S YOUR JOB TITLE?
Concept Artist.

WHAT DOES THAT ENTAIL?
As a concept artist, I take the idea of the story and visualize it. We create characters, backgrounds and props to support the story. I work at the very beginning of a project. Sometimes when working on a character, we’ll have to explain the character to the 3D modelers and texture artists so they understand how it functions: The skeleton, textures and clothes. That’s so they can create it in 3D for the rest of the team to use.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I think people expect concept artists to make pretty pictures. As concept artists, we’re not necessarily fine-art illustrators. All the pretty pictures we create have to be functional and explain something to other people. We create a lot of not-so-pretty pictures — it’s all about ideas, not about how pretty the picture is. As a concept artist you can’t be afraid to create crappy drawings. As long as it conveys the idea, that is a good piece of concept work.

WHAT TOOLS DO YOU USE?
Mainly Adobe Photoshop. Sometimes we go back to hand drawings — pencil and paper. It just feels good.

WHAT’S YOUR FAVORITE PART OF THE JOB?
My favorite part of the job is that you can actually create and foresee the final look of the project. You create it and then someone actually makes it happen! It is the most satisfying thing to see your work go through everybody’s hands. We all create something together based on my designs. That is really awesome.

WHAT’S YOUR LEAST FAVORITE?
We have to do a lot of technical and specific drawings to explain stuff and make things functional. That part is so mechanical sometimes. You really have to dig in and make sure everything works for the modeler. That part can sometimes be really boring.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
In the early morning. Super early morning when the sunlight hasn’t quite come out yet. I like to wake up early and go jogging or take a walk. Then, I come in to the studio and start my day. If you wake up super early in the morning, you have a lot of time to do other stuff and it’s not even noon yet!

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I think I would be a children’s book illustrator and telling my own stories in different ways. I am very curious, and I like trying to explain what I am learning to other people or even children. But, it is hard to convey what’s in my mind, so I use stories to tell other people my ideas and what I’ve learned. To convey a heavy or dark idea, you have to use art to help other people more easily understand.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
Pretty early on. When I was little I would tell stories to my sister through my art. I would sit in a room, drawing my story, paper by paper, and I would force her to listen to my stories every weekend. I always loved telling stories. When I grew up, I tried to find ways to tell stories every day. So, I studied 3D animation and found out animation is fun, but the visual development part is what I really enjoyed. You can support the story by creating the characters and environments. I could be doing other jobs but I always want to be telling stories.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I am working on Island Time, a VR game where you crash into an island. It is a really tiny island and you have to survive by catching fish and eating coconuts. You also get to make friends with Carl, a crab who lives there. I had a really great time designing Carl. Designing stuff for Island was really fun because the game is very goofy and cartoony. It’s more fun to see your work created in 3D and in-engine because you can interact with it. It’s really fun.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
It is really hard to choose. I write stories in my spare time, but I rarely show them to other people. The Flight School team and I are working on a project right now that is based around my original story. I am really excited about it because I get to do more than just concept art. It is the first time I’ve shared my stories with a larger team, and it is exciting to craft the world and tell a story from my own perspective. I wish I could talk more about it!

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
One thing is Google Maps – I am super bad at directions. I need Google Maps to survive.
My camera. Sometimes, I’ll see some lighting that is awesome in nature, or a texture I’ve never seen before and want to study it but can’t memorize it. So I need a camera to capture the moment so I can really study it. My iPhone — I have an iPhone 6, and I use it to connect with other people. I used to live in San Francisco and I had a lot of friends. Now that I’m in Texas, I get lonely and my friends take turns calling me each night to keep me company.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Yes, but not only music. I also listen to audiobooks and podcasts. For audiobooks, it is usually a book I’ve read in English already so I don’t have to focus too much on the narrator talking. The Harry Potter audiobooks are awesome. Podcasts are whatever my other colleagues recommend to me. Dirty John and S-Town are two I have really enjoyed.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
My work is creating art and drawings, but the way I destress is by creating art and drawings… for my own pleasure. I create stuff for myself. You don't have to make it functional – it's just pure fun. If I don't draw, I feel weird. It's like when you are addicted to something and you have to do it every day or you feel like something is wrong with you!

Industry mainstay Click3X purchased by Industrial Color Studios

Established New York City post house Click3X has been bought by Industrial Color Studios. Click3X is a 25-year-old facility that specializes in new media formats such as VR, AR, CGI and live streaming. Industrial Color Studios is a visual content production company. Founded in 1992, Industrial Color’s services range from full image capture and e-commerce photography to production support and post services, including creative editorial, color grading and CG.

With offices in New York and LA, Industrial Color has developed its own proprietary systems to support online digital asset management for video editing and high-speed file transfers for its clients working in broadcast and print media. The company is an end-to-end visual content production provider, partnering with top brands, agencies and creative professionals to accelerate multi-channel creative content.

Click3X was founded in 1993 by Peter Corbett, co-founder of numerous companies specializing in both traditional and emerging forms of media. These include Media Circus (a digital production and web design company), IllusionFusion, Full Blue, ClickFire Media, Reason2Be, Sound Lounge and Heard City. A long-time member of the DGA as a commercial film director, Corbett emigrated to the US from Australia to pursue a career as a commercial director and, shortly thereafter, segued into integrated media and mixed media, becoming one of the first established film directors to do so.

Projects produced at Click3X have been honored with the industry’s top awards, including Cannes Lions, Clios, Andy Awards and others. Click3X also was presented with the Crystal Apple Award, presented by the New York City Mayor’s Office of Media and Entertainment, in recognition of its contributions to the city’s media landscape.

Corbett will remain in place at Click3X and eventually the companies will share the ICS space on 6th Avenue in NYC.

“We’ve seen a growing need for video production capabilities and have been in the market for a partner that would not only enhance our video offering, but one that provided a truly integrated and complementary suite of services,” says Steve Kalalian, CEO of Industrial Color Studios. “And Click3X was the ideal fit. While the industry continues to evolve at lightning speed, I’ve long admired Click3X as a company that’s consistently been on the cutting edge of technology as it pertains to creative film, digital video and new media solutions. Our respective companies share a passion for creativity and innovation, and I’m incredibly excited to share this unique new offering with our clients.”

"When Steve and I first entered into talks to align on the state of our clients' future, we were immediately on the same page," says Corbett, president of Click3X. "We share a vision for creating compelling content in all formats. As complementary production providers, we will now have the exciting opportunity not only to collaborate on a robust and highly regarded client roster, but also to expand the company's creative and new media capabilities, using over 200,000 square feet of state-of-the-art facilities in New York, Los Angeles and Philadelphia."

The added capabilities Click3X gives Industrial Color in video production and new media mirror its growth in the field of e-commerce photography and image capture. The company has recently opened a new 30,000-square-foot studio in downtown Los Angeles designed to produce high-volume, high-quality product photography for advertisers. That studio complements the company's existing e-commerce photography hub in Philadelphia.

Main Image: (L-R) Peter Corbett and Steve Kalalian

VFX house Kevin adds three industry veterans

Venice, California-based visual effects house Kevin, founded by Tim Davies, Sue Troyan and Darcy Parsons, has beefed up its team even further with the hiring of head of CG Mike Dalzell, VFX supervisor Theo Maniatis and head of technology Carl Loeffler. This three-month-old studio has already worked on spots for Jaguar, Land Rover, Target and Old Spice, and is currently working on a series of commercials for the Super Bowl.

Dalzell brings years of experience as a CG supervisor and lead artist — he started as a 3D generalist before focusing on look development and lighting — at top creative studios including Digital Domain, MPC, Psyop, The Mill, Sony Imageworks and Method. He was instrumental in the look development of the VFX Gold Clio and British Arrow winner Call of Duty Seize Glory and GE's Childlike Imagination. He has also worked on commercials for Nissan, BMW, Lexus, Visa, Cars.com, the Air Force and others. Early on, Dalzell honed his skills on music videos in Toronto, and then on feature films such as Iron Man 3 and The Matrix movies, as well as The Curious Case of Benjamin Button.

Maniatis, a Flame artist and on-set VFX supervisor, has a wide breadth of experience in the US, London and his native Sydney. “Tim [Davies] and I used to work together back in Australia, so reconnecting with him and moving to LA has been a blast.”

Maniatis’s work includes spots for Apple Watch 3 + Apple Music’s Roll (directed by Sam Brown), TAG Heuer’s To Jack (directed by and featuring Patrick Dempsey), Destiny 2’s Rally the Troops and Titanfall 2’s Become One (via Blur Studios), and PlayStation VR’s Batman Arkham and Axe’s Office Love, both directed by Filip Engstrom. Prior to joining Kevin, Maniatis worked with Blur Studios, Psyop, The Mill, Art Jail and Framestore.

Loeffler is creating the studio's production model using the latest Autodesk Flame systems, high-end 3D workstations and render nodes, and putting new networking and storage systems into place. Kevin's new Culver City studio will open its doors in Q1 2018, and Loeffler will guide the current growth in both hardware and software, plan for the future and make sure Kevin's studio is optimized for the needs of production. He has over two decades of experience building out and expanding the technologies for facilities including MPC and Technicolor.

Image: (L-R) Mike Dalzell, Carl Loeffler and Theo Maniatis.

Quick Chat: Ntropic CD, NIM co-founder Andrew Sinagra

Some of the most efficient tools being used by pros today were created by their peers, those working in real-world post environments who develop workflows in-house. Many are robust enough to share with the world. One such tool is NIM, a browser-based studio management app for post houses that tracks a production pipeline from start to finish.

Andrew Sinagra, co-founder of NIM Labs and creative director of Ntropic, a creative studio that provides VFX, design, color and live action, was kind enough to answer some trends questions relating to tight turnarounds in post and visual effects.

What do you feel are the biggest challenges facing post and VFX studios in the coming year?
It’s an interesting time for VFX, in general. The post-Netflix era has ushered in a whole new range of opportunities, but the demands have shifted. We’re seeing quality expectations for television soar, but schedules and budgets have remained the same — or have tightened.

The challenges that will face post production studios will be to continue to create quality and competitive work while also working with faster turnarounds and ever-fluctuating budgets. It seems like an impossible problem, but thankfully tools, technology and talent continue to improve and deliver better results at a fraction of the time. By investing in those three Ts, the forward-thinking studios can balance expectation with necessary cost.

What have you found to be the typical pain points for studios with regards to project management in the past? What are the main complaints you hear time and time again?
Throughout my career I have met with many industry pros, from on-the-box artists and creative directors through to heads of production and studio owners. They have all shared their trials and tribulations – as well as their methods for staying ahead of the curve. The common pain point question is always the same: “How can I get a clearer view of my studio operations on a daily basis from resource utilization through running actuals?” It’s a growing concern. Managing budgets has been a major pain point for studios. Most just want a better way to visualize and gain back some control over what’s being spent and where. It’s all about the need for efficiency and clarity of vision on a project.

Is business intelligence very important to post studios at this point? Do you see it as an emerging trend over 2018?
Yes, absolutely. Studios need to know what’s going on, on any project, at a moment’s notice. They need to know if it will be affected by endless change orders, or if they’re consistently underbidding on a specific discipline, or if they’re marking something up that is actually affecting their overall margins. These can be the kind of statistics and influences that can impact the bottom line, but the problem is they are incredibly difficult to pull out from an ocean of numbers on a spreadsheet.

Studios that invest in business intelligence, and can see such issues immediately quantified, will be capable of performing at a much higher efficiency level than those that do not. The status quo of comparing spreadsheets and juggling emails works to an extent, but it’s very difficult to pull analysis out of that. Studios instead need solutions that can help them to better visualize their approach from the inside out. It enables stakeholders to make decisions going by their brain, rather than their gut. I can’t imagine any studio heading into 2018 will want to brave the turbulent seas without having that kind of business intelligence on their side.

What are the limitations with today’s approaches to bidding and the time and materials model? What changes do you see around financial modeling in VFX in the coming years?
The time and materials model seems largely dead, and has been for quite some time. I have seen a few studios still working with the time and materials model with specific clients, but as a whole I find studios working to flat bids with explicitly clear statements of work. The burden is then on the studio to stay within its limits and find creative solutions to the project challenges. This puts extra stress on producers to fully understand the financial ramifications of decisions made on a day-to-day basis. Will slipping in a client request push the budget when we don’t have the margin to spare? How can I reallocate my crew to be more efficient? Can we reorganize the project so that waiting for client feedback doesn’t stop us dead in the water? These are just a few of the questions that, when answered, can squeeze out that extra 10% to get the job done.

Additionally, having the right information arms the studio with the right ammunition to approach the client for overages when the time comes. Having at your fingertips how much time has been spent on a project, and what any requested changes would require, gives studios the opportunity to educate their clients. And educating clients is a big part of being profitable.

What will studios need to do in 2018 to ensure continued success? What advice would you give them at this stage?
Other than business intelligence, staying ahead of the curve in today’s environment will also mean staying flexible, scalable and nimble. Nimbleness is perhaps the most important of the three — studios need to have this attribute to work in the ever-changing world of post production. It is rare that projects reach the finish line with the deliveries matching exactly what was outlined in the initial bid. Studios must be able to respond to the inevitable requested changes even in the middle of production. That means being able to make informed decisions that meet the client’s expectations, while also remaining within the scope of the budget. That can mean the difference between a failed project and a triumphant delivery.

Basically, my advice is this: Going into 2018, ask yourself, “Are you using your resources to your maximum potential, or are you leaving man hours on the table?” Take a close look at everything you’re doing and ensure you’re not pouring budget into areas where it’s simply not needed. With so many moving pieces in production, it’s imperative to understand at a glance where your efforts are being placed and how you can better use your artists.

House of Moves adds Selma Gladney-Edelman, Alastair Macleod

Animation and motion capture studio House of Moves (HOM) has strengthened its team with two new hires — Selma Gladney-Edelman was brought on as executive producer and Alastair Macleod as head of production technology. The two industry vets are coming on board as the studio shifts to offer more custom short- and long-form content, and expands its motion capture technology workflows to its television, feature film, video game and corporate clients.

Selma Gladney-Edelman was most recently VP of Marvel Television for their primetime and animated series. She has worked in film production, animation and visual effects, and was a producer on multiple episodic series at Walt Disney Television Animation, Cartoon Network and Universal Animation. As director of production management across all of the Discovery Channels, she oversaw thousands of hours of television and film programming including TLC projects Say Yes To the Dress, Little People, Big World and Toddlers and Tiaras, while working on the team that garnered an Oscar nom for Werner Herzog’s Encounters at the End of the World and two Emmy wins for Best Children’s Animated Series for Tutenstein.

Scotland native Alastair Macleod is a motion capture expert who has worked in production, technology development and as an animation educator. His production experience includes work on films such as Lord of the Rings: The Two Towers, The Matrix Reloaded, The Matrix Revolutions, 2012, The Twilight Saga: Breaking Dawn — Part 2 and Kubo and the Two Strings for facilities that include Laika, Image Engine, Weta Digital and others.

Macleod pioneered full body motion capture and virtual reality at the research department of Emily Carr University in Vancouver. He was also the head of animation at Vancouver Film School and an instructor at Capilano University in Vancouver. Additionally, he developed PeelSolve, a motion capture solver plug-in for Autodesk Maya.

Behind the Title: Artist Jayse Hansen

NAME: Jayse Hansen

COMPANY: Jayse Design Group

CAN YOU DESCRIBE YOUR COMPANY?
I specialize in designing and animating completely fake-yet-advanced-looking user interfaces, HUDs (head-up displays) and holograms for film franchises such as The Hunger Games, Star Wars, Iron Man, The Avengers, Guardians of the Galaxy, Spider-Man: Homecoming, Big Hero 6, Ender’s Game and others.

On the side, this has led to developing untraditional, real-world, outside-the-rectangle type UIs, mainly with companies looking to have an edge in efficiency/data-storytelling and to provide a more emotional connection with all things digital.

Iron Man

WHAT’S YOUR JOB TITLE?
Designer/Creative Director

WHAT DOES THAT ENTAIL?
Mainly, I try to help filmmakers (or companies) figure out how to tell stories in quick reads with visual graphics. In a film, we sometimes only have 24 frames (one second) to get information across to the audience. It has to look super complex, but it has to be super clear at the same time. This usually involves working with directors, VFX supervisors, editorial and art directors.

With real-world companies, the way I work is similar. I help figure out what story can be told visually with the massive amount of data we have available to us nowadays. We’re all quickly finding that data is useless without some form of engaging story and a way to quickly ingest, make sense of and act on that data. And, of course, with design-savvy users, a necessary emotional component is that the user interface looks f’n rad.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
A lot of R&D! Movie audiences have become more sophisticated, and they groan if a fake UI seems outlandish, impossible or Playskool cartoon-ish. Directors strive to not insult their audience’s intelligence, so we spend a lot of time talking to experts and studying real UIs in order to ground them in reality while still making them exciting, imaginative and new.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Research, breaking down scripts and being able to fully explore and do things that have never been done before. I love the challenge of mixing strong design principles with storytelling and imagination.

WHAT’S YOUR LEAST FAVORITE?
Paperwork!

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Early morning and late nights. I like to jam on design when everyone else is sleeping.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I actually can’t imagine doing anything else. It’s what I dream about and obsess about day and night. And I have since I was little. So I’m pretty lucky that they pay me well for it!

If I lost my sight, I’d apply for Oculus or Meta brain implants and live in the AR/VR world to keep creating visually.

SO YOU KNEW THIS WAS YOUR PATH EARLY ON?
When I was 10 I learned that they used small models for the big giant ships in Star Wars. Mind blown! Suddenly, it seemed like I could also do that!

As a kid I would pause movies and draw all the graphic parts of films, such as the UIs in the X-wings in Star Wars, or the graphics on the pilot helmets. I never guessed this was actually a “specialty niche” until I met Mark Coleran, an amazing film UI designer who coined the term “FUI” (Fictional User Interface). Once I knew it was someone’s “everyday” job, I didn’t rest until I made it MY everyday job. And it’s been an insanely great adventure ever since.

CAN YOU TALK MORE ABOUT FUI AND WHAT IT MEANS?
FUI stands for Fictional (or Future, Fantasy, Fake) User Interface. UIs have been used in films for a long time to tell an audience many things, such as: their hero can’t do what they need to do (Access Denied) or that something is urgent (Countdown Timer), or they need to get from point A to point B, or a threat is “incoming” (The Map).

Mockingjay Part I

As audiences are getting more tech-savvy, the potential for screens to act as story devices has developed, and writers and directors have gotten more creative. Now, entire stretches of story are being told through interfaces, such as in The Hunger Games: Mockingjay Part I, where Katniss, Peeta, Beetee and President Snow have some of their most tense moments.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
The most recent projects I can talk about are Guardians of the Galaxy 2 and Spider-Man: Homecoming, both with the Cantina Creative team and Marvel. For Guardians 2, I had a ton of fun designing and animating various screens, including Rocket, Gamora and Star-Lord’s glass screens and the large “Drone Tactical Situation Display” holograms for the Sovereign (gold people). Spider-Man was my favorite superhero as a child, so I was honored to be asked to define the “Stark-Designed” UI design language of the HUDs, holograms and various AR overlays.

I spent a good amount of time researching the comic book version of Spider-Man. His suit and abilities are actually quite complex, and I ended up writing a 30-plus-page guide to all of its functions so I could build out the HUD and blueprint diagrams in a way that made sense to Marvel fans.

In the end, it was a great challenge to blend the combination of the more military Stark HUDs for Iron Man, which I’m very used to designing, and a new, slightly “webby” and somewhat cute “training-wheels” UI that Stark designed for the young Peter Parker. I loved the fact that in the film they played up the humor of a teenager trying to understand the complexities of Stark’s UIs.

Star Wars: The Force Awakens

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
I think Star Wars: The Force Awakens is the one I was most proud to be a part of. It was my one bucket list film to work on from childhood, and I got to work with some of the best talents in the business. Not only JJ Abrams and his production team at Bad Robot, but with my longtime industry friends Navarro Parker and Andrew Kramer.

WHAT SOFTWARE DID YOU RELY ON?
As always, we used a ton of Maxon Cinema 4D, Adobe’s After Effects and Illustrator, and Video Copilot’s Element 3D to pull off rather complex and lengthy design sequences such as the Starkiller Base hologram and the R2-D2/BB-8 “Map to Luke Skywalker” holograms.

Cinema 4D was essential in allowing us to be super creative while still meeting rather insane deadlines. It also integrates so well with the Adobe suite, which allowed us to iterate really quickly when the inevitable last-minute design changes came flying in. I would do initial textures in Adobe Illustrator, then design in C4D, and transfer that into After Effects using the Element 3D plugin. It was a great workflow.

YOU ALSO CREATE VR AND AR CONTENT. CAN YOU TELL US MORE ABOUT THAT?
Yes! Finally, AR and VR are allowing what I’ve been doing for years in film to actually happen in the real world. With a Meta (AR) or Oculus (VR) headset you can actually walk around your UI like an Iron Man hologram and interact with it like the volumetric UIs we did for Ender’s Game.

For instance, today with Google Earth VR you can use a holographic mapping interface like in The Hunger Games to plan your next vacation. With apps like Medium, Quill, Tilt Brush or Gravity Sketch you can design 3D parts for your robot like Hiro did in Big Hero 6.

Big Hero 6

While wearing a Meta 2, you can surround yourself with multiple monitors of content and pull 3D models from them and enlarge them to life size.

So we have a deluge of new abilities, but most designers have only designed on flat traditional monitors or phone screens. They’re used to the two dimensions of up and down (X and Y), but have never had the opportunity to use the Z axis. So you have all kinds of new challenges like, “What does this added dimension do for my UI? How is it better? Why would I use it? And what does the back of a UI look like when other people are looking at it?”

For instance, in the Iron Man HUD, most of the time I was designing for when the audience is looking at Tony Stark, which is the back of the UI. But I also had to design it from the side. And it all had to look proper, of course, from the front. UI design becomes a bit like product design at this point.

In AR and VR, similar design challenges arise. When we are sharing volumetric UIs, we will see other people’s UIs from the back. At times, we want to be able to understand them, and at other times they should be disguised, blurred or shrouded for privacy reasons.

How do you design when your UI can take up the whole environment? How can a UI give you important information without distracting you from the world around you? How do you deal with additive displays where black is not a color you can use? And on and on. These are all things we tackle with each film, so we have a bit of a head start in those areas.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
I love tech, but it would be fun to be stuck with just a pen, paper and a book… for a while, anyway.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I’m on Twitter (@jayse_), Instagram (@jayse_) and Pinterest (skyjayse). Aside from that, I also started a new FUI newsletter to discuss some of the behind-the-scenes aspects of this type of work.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Heck yeah. Lately, I find myself working to Chillstep and Deep House playlists on Spotify. But check out The Cocteau Twins. They sing in a “non-language,” and it’s awesome.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I chill with my best friend and fiancée, Chelsea. We have a rooftop wet-bar area with a 360-degree view of Las Vegas from the hills. We try to go up each evening at sunset with our puppy Bella and just chill. Sometimes it’s all fancy-like with a glass of wine and fruit. Chelsea likes to make it all pretty.

It’s a long way from just 10 years ago, when we were hunting for spare change in the car to afford 99-cent nachos from Taco Bell, so we’re super appreciative of how far we’ve come. And because of that, no matter how many times my machine has crashed, or how many changes my client wants, we always make time for just each other. It’s important to keep perspective and realize your work is not life or death, even though in films sometimes they try to make it seem that way.

It’s important to always have something that is only for you and your loved ones that nobody can take away. After all, as long as we’re healthy and alive, life is good!

Chaos Group acquires Render Legion and its Corona Renderer

Chaos Group has purchased Prague-based Render Legion, creator of the Corona Renderer. With this new product and Chaos Group’s own V-Ray, the company is offering even more rendering solutions for M&E and the architectural visualization world.

Known for its ease of use, the Corona Renderer has become a popular choice for architectural visualization, but according to Chaos Group’s David Tracy, “There are a few benefits for M&E. Corona plans to implement some VFX-related features, such as hair and skin with the help of the V-Ray team. Also, Corona is sharing technology, like the way they optimize dome lights. That will definitely be a benefit for V-Ray users in the VFX space.”

The Render Legion team, including its founders and developers, will join Chaos Group as they continue to develop Corona using additional support and resources provided through the deal.

Chaos Group’s Academy Award-winning renderer, V-Ray, will continue to be a core component of the company’s portfolio. Both V-Ray and Corona will benefit from joint collaborations, bringing complementary features and optimizations to each product.

The Render Legion acquisition is Chaos Group’s largest investment to date. It is the company’s third investment in a visualization company in the last two years, following interactive presentation platform CL3VER and virtual reality pioneer Nurulize. According to Chaos Group, the computer graphics industry is expected to reach $112 billion in 2019, fueled by a rise in demand for 3D visuals. This, they say, has presented a prime opportunity for companies that make the creation of photorealistic imagery more accessible.

Main Image: (L-R) Chaos Group co-founder Vlado Koylazov and Render Legion CEO/co-founder Ondřej Karlík.

Red Giant Trapcode Suite 14 now available

By Brady Betzel

Red Giant has released an update to its Adobe After Effects-focused plug-in toolset, Trapcode Suite 14, including new versions of Trapcode Particular and Form as well as an update to Trapcode Tao.

The biggest updates seem to be in Red Giant’s flagship product Trapcode Particular 3. Trapcode Particular is now GPU accelerated through OpenGL with a proclaimed 4X speed increase over previous versions. The Designer has been re-imagined and seems to take on a more Magic Bullet-esque look and feel. You can now include multiple particle systems inside the same 3D space, which will add to the complexity and skill level needed to work with Particular.

You can now also load your own 3D model OBJ files as emitters in the Designer panel or use any image in your comp as a particle. There are also a bunch of new presets that have been added to start you on your Particular system building journey — over 210 new presets, to be exact.

Trapcode Form has been updated to Version 3 with the updated Designer, the ability to add 3D models and animated OBJ sequences as particle grids, the ability to load images for use as particles, a new graphing system for more precise control over the system, and over 70 presets in the Designer.

Trapcode Tao has been updated with depth of field effects to allow for that beautiful camera-realistic blur that really sets pro After Effects users apart.

Trapcode Particular 3 and Form 3 are paid updates, while Tao is free for existing users. If you want to update only Tao, make sure you select only Tao in the updater; otherwise you will install new Trapcode plug-ins over your old ones.

Trapcode Particular 3 is available now for $399. The update is $149 and the academic version is $199. You can also get it as a part of the Trapcode Suite 14 for $999.

Trapcode Form 3 is available now for $199. The update is $99 and the academic costs $99. It can be purchased as part of the Trapcode Suite 14 for $999.

Check out the new Trapcode Suite 14 bundle.

 

Maxon debuts Cinema 4D Release 19 at SIGGRAPH

Maxon was at this year’s SIGGRAPH in Los Angeles showing Cinema 4D Release 19 (R19). This next generation of Maxon’s pro 3D app offers a new viewport and a new Sound Effector, and additional features for Voronoi Fracturing have been added to the MoGraph toolset. It also boasts a new Spherical Camera, the integration of AMD’s ProRender technology and more. Designed to serve individual artists as well as large studio environments, Release 19 offers a streamlined workflow for general design, motion graphics, VFX, VR/AR and all types of visualization.

With Cinema 4D Release 19, Maxon also introduced a few re-engineered foundational technologies, which the company will continue to develop in future versions. These include core software modernization efforts, a new modeling core, integrated GPU rendering for Windows and Mac, and OpenGL capabilities in BodyPaint 3D, Maxon’s pro paint and texturing toolset.

More details on the offerings in R19:
Viewport Improvements provide artists with added support for screen-space reflections and OpenGL depth-of-field, in addition to the screen-space ambient occlusion and tessellation features (added in R18). Results are so close to final render that client previews can be output using the new native MP4 video support.

MoGraph enhancements expand on Cinema 4D’s toolset for motion graphics with faster results and added workflow capabilities in Voronoi Fracturing, such as the ability to break objects progressively, add displaced noise details for improved realism or glue multiple fracture pieces together more quickly for complex shape creation. An all-new Sound Effector in R19 allows artists to create audio-reactive animations based on multiple frequencies from a single sound file.
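For readers curious what driving animation from “multiple frequencies from a single sound file” looks like under the hood, the sketch below shows the generic DSP step behind any sound effector: split a block of audio into frequency bands and measure the energy in each, then map those levels to clone scale, color or position. This is a minimal Python illustration of the concept, not Maxon’s implementation; the band edges are arbitrary examples.

    import numpy as np

    def band_levels(samples, sample_rate, bands):
        # Return normalized energy per frequency band for one block of audio,
        # the kind of per-band amplitude a sound effector maps to animation.
        spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
        freqs = np.fft.rfftfreq(len(samples), 1.0 / sample_rate)
        levels = np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                           for lo, hi in bands])
        peak = levels.max()
        return levels / peak if peak > 0 else levels

    # Example: low/mid/high energy for a 1,024-sample block at 48kHz.
    block = np.random.randn(1024)
    print(band_levels(block, 48000, [(20, 250), (250, 2000), (2000, 16000)]))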

The new Spherical Camera allows artists to render stereoscopic 360° virtual reality videos and dome projections. Artists can specify a latitude and longitude range, and render in equirectangular, cubic string, cubic cross or 3×2 cubic format. The new spherical camera also includes stereo rendering with pole smoothing to minimize distortion.
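As a point of reference for the equirectangular option, the mapping itself is standard: longitude spreads across the image width and latitude down its height. The sketch below is a minimal, generic illustration of that lat-long convention (the axis conventions are assumptions of mine), not Cinema 4D code.

    import math

    def direction_to_equirect(dx, dy, dz, width, height):
        # Map a unit view direction (z forward, y up, assumed convention)
        # to pixel coordinates in an equirectangular (lat-long) image.
        lon = math.atan2(dx, dz)                   # longitude, -pi..pi
        lat = math.asin(max(-1.0, min(1.0, dy)))   # latitude, -pi/2..pi/2
        u = (lon + math.pi) / (2.0 * math.pi)      # 0..1 across the width
        v = (math.pi / 2.0 - lat) / math.pi        # 0..1 top to bottom
        return u * (width - 1), v * (height - 1)

    # Example: looking straight ahead lands at the center of a 4096x2048 frame.
    print(direction_to_equirect(0.0, 0.0, 1.0, 4096, 2048))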

New Polygon Reduction works as a generator, so it’s easy to reduce entire hierarchies. The reduction is pre-calculated, so adjusting the reduction strength or desired vertex count is extremely fast. The new Polygon Reduction preserves vertex maps, selection tags and UV coordinates, ensuring textures continue to map properly and providing control over areas where polygon detail is preserved.

Level of Detail (LOD) Object features a new interface element that lets customers define and manage settings to maximize viewport and render speed, create new types of animations or prepare optimized assets for game workflows. Level of Detail data exports via the FBX 3D file exchange format for use in popular game engines.
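To make the LOD idea concrete, the toy sketch below shows the logic such a system automates: pick a detail level from a list of switch distances as the camera moves away. It is a generic illustration under assumed thresholds, not Cinema 4D’s LOD Object interface.

    def pick_lod(distance, switch_distances):
        # Return the LOD index for a camera distance, given sorted switch
        # distances; index 0 is the highest-detail level.
        for level, max_dist in enumerate(switch_distances):
            if distance <= max_dist:
                return level
        return len(switch_distances)  # past the last switch: lowest detail

    # Example: three detail levels that switch at 10 and 50 scene units.
    for d in (5.0, 25.0, 120.0):
        print(d, "->", "LOD", pick_lod(d, [10.0, 50.0]))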

AMD’s Radeon ProRender technology is now seamlessly integrated into R19, providing artists a cross-platform GPU rendering solution. Though just the first phase of integration, it provides a useful glimpse into the power ProRender will eventually provide as more features and deeper Cinema 4D integration are added in future releases.

Modernization efforts in R19 reflect Maxon’s development legacy and offer the first glimpse into the company’s planned ‘under-the-hood’ future efforts to modernize the software, as follows:

  • Revamped Media Core gives Cinema 4D R19 users a completely rewritten software core that increases speed and memory efficiency for image, video and audio formats. Native support for MP4 video without QuickTime delivers advantages when previewing renders, incorporating video as textures or motion tracking footage, for a more robust workflow. Export to production formats, such as OpenEXR and DDS, has also been improved.
  • Robust Modeling offers a new modeling core with improved support for edges and N-gons, which can be seen in the Align and Reverse Normals commands. More modeling tools and generators will directly use this new core in future versions.
  • BodyPaint 3D now uses an OpenGL painting engine, giving R19 artists who paint color and add surface details in film, game design and other workflows a realtime display of reflections, alpha, bump or normal maps, and even displacement, for improved visual feedback and texture painting. Redevelopment efforts to improve the UV editing toolset in Cinema 4D continue, with the first fruits of this work available in R19 as faster and more efficient options to convert point and polygon selections, grow and shrink UV point selections, and more.

Calabash animates characters for health PSA

It’s a simple message, told in a very simple way: having a health issue and being judged for it hurts. A PSA for The Simon Foundation, titled Rude2Respect, was animated by Chicago’s Calabash in conjunction with the creative design studio Group Chicago.

Opening with the on-screen type “Challenging Health Stigma,” the PSA features two friends — a short, teal-colored, teardrop-shaped blob known simply as Blue and his slender companion Pink — taking a walk on a bright sunny day in the city. Blue nervously says, “I’m not sure about this,” to which Pink responds, “You can’t stay home forever.” From there the two embark on what seems like a simple stroll to get ice cream, but there is a deeper message about how such common events can be fraught with anxiety for those suffering from an array of health conditions that often result in awkward stares, well-intentioned but inappropriate comments or plain rudeness. Blue and Pink decide it’s the people with the comments who are in the wrong and continue on to get ice cream. The spot ends with the simple words “Health stigma hurts. We can change lives,” followed by a link to www.rude2respect.org.

“We had seen Calabash’s work and sought them out,” says Barbara Lynk, Group Chicago’s creative director. “We were impressed with how well their creative team immediately understood the characters and their visual potential. Creatively they brought a depth of experience on the conceptual and production side that helped bring the characters to life. They also understood the spare visual approach we were trying to achieve. It was a wonderful creative collaboration throughout the process, and they are a really fun group of creatives to work with.”

The PSA is based on illustrated characters created by Group Chicago founder/creative director Kurt Meinecke. Calabash creative director Wayne Brejcha notes that early on in the creative process they decided to go with what he calls a “two-and-a-half-D look.”

“There is a charm in the simplicity of Kurt’s original illustrations with the flat shapes that we had to try very hard to keep as we translated Blue and Pink to the 3D world,” Brejcha says. “We also didn’t want to overly complicate it with a lot of crazy camera moves rollercoastering through the space or rotating around the characters. We constrained it to feel a little like two-and-a-half dimensions – 2D characters, but with the lighting and textures and additional physical feel you expect with 3D animation.

“We spent a good deal of time with thumbnail boardomatics, a scratch track and stand-in music as it began to gel,” he continues. “Kurt searched out some piano music for the intro and outro, which also set tone, and we cast for voices with the personalities of the figures in mind. After a few conversations with Kurt and Barb we understood the personalities of Blue and Pink very well. They’re archetypes or incarnations of two stages of dealing with, say, going out in public with some medical apparatus you’re attached to that’s plainly visible to everyone. The Blue guy is self-conscious, defensive, readily upset and also ready to bring a little push-back to the folks who call out his non-normative qualities. Pink is a little further along in accepting the trials. She can shake off with equanimity all the outright insults, dopey condescension and the like. She’s something of a mentor or role model for Blue. The characters are made of simple shapes, so animator Nick Oropezas did a lot of tests and re-animation to get just the right movements, pauses, timing and expressions to capture those spirits.”

For Sean Henry, Calabash’s executive producer, the primary creative obstacles centered on finding the right pacing for the story. “We played with the timing of the edits all the way through production,” he explains. “The pace of it had a large role to play in the mood, which is more thoughtful than your usual rapid-fire ad. Finding the right emotions for the voices was also a major concern. We needed warmth and a friendly mentoring feel for Pink, and a feisty, insecure but likeable voice for Blue. Our voice talent nailed those qualities. Additionally, the dramatic events in the spot happen only in the audio, with Pink and Blue responding to off-screen voices and action, so the sound design and music had a major storytelling role to play as well.”

Calabash called on Autodesk Maya for the characters and Foundry’s Nuke for effects/compositing. Adobe Premiere was used for the final edit.

Nugen adds 3D Immersive Extension to Halo Upmix

Nugen Audio has updated its Halo Upmix with a new 3D Immersive Extension, adding further options beyond the existing Dolby Atmos bed track capability. The 3D Immersive Extension now provides ambisonic-compatible output as an alternative to channel-based output for VR, game and other immersive applications. This makes it possible to upmix, re-purpose or convert channel-based audio for an ambisonic workflow.
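For context on what “ambisonic-compatible output” means, first-order ambisonics describes the sound field with four signals (W, X, Y, Z) rather than speaker channels. The sketch below encodes a mono source at a given direction into that classic B-format set; it is a generic illustration of the target format, assuming the traditional FuMa-style trigonometric gains, and not Nugen’s upmix algorithm.

    import numpy as np

    def encode_first_order_bformat(mono, azimuth, elevation):
        # Encode a mono signal into first-order ambisonic B-format
        # (W, X, Y, Z) for a source at azimuth/elevation in radians,
        # using the traditional FuMa-style W = S / sqrt(2) weighting.
        w = mono / np.sqrt(2.0)
        x = mono * np.cos(azimuth) * np.cos(elevation)
        y = mono * np.sin(azimuth) * np.cos(elevation)
        z = mono * np.sin(elevation)
        return np.stack([w, x, y, z])

    # Example: pan a one-second 1kHz tone 45 degrees left, 10 degrees up.
    sr = 48000
    t = np.arange(sr) / sr
    tone = 0.5 * np.sin(2 * np.pi * 1000 * t)
    print(encode_first_order_bformat(tone, np.radians(45), np.radians(10)).shape)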

With this 3D Immersive Extension, Halo fully supports Avid’s newly announced Pro Tools 12.8, now with native 7.1.2 stems for Dolby Atmos mixing. The combination of Pro Tools 12.8 and the Halo 3D Immersive Extension can provide a more fluid workflow for audio post pros handling multi-channel and object-based audio formats.

Halo Upmix is available immediately at a list price of $499 for both OS X and Windows, with support for Avid AAX, AudioSuite, VST2, VST3 and AU formats. The new 3D Immersive Extension replaces the Halo 9.1 Extension and can now be purchased for $199. Owners of the existing Halo 9.1 Extension can upgrade to the Halo 3D Immersive Extension for no additional cost. Support for native 7.1.2 stems in Avid Pro Tools 12.8 is available on launch.

Red’s Hydrogen One: new 3D-enabled smartphone

In their always subtle way, Red has stated that “the future of personal communication, information gathering, holographic multi-view, 2D, 3D, AR/VR/MR and image capture just changed forever” with the introduction of Hydrogen One, a pocket-sized, glasses-free “holographic media machine.”

Hydrogen One is a standalone, full-featured, unlocked multi-band smartphone, operating on Android OS, that promises “look around depth in the palm of your hand” without the need for separate glasses or headsets. The device features a 5.7-inch professional hydrogen holographic display that switches between traditional 2D content, holographic multi-view content, 3D content and interactive games, and it supports both landscape and portrait modes. Red has also embedded a proprietary H30 algorithm in the OS system that will convert stereo sound into multi-dimensional audio.

The Hydrogen system incorporates a high-speed data bus to enable a comprehensive and expandable modular component system, including future attachments for shooting high-quality motion, still and holographic images. It will also integrate into the professional Red camera program, working together with Scarlet, Epic and Weapon as a user interface and monitor.

Future users are already talking about this “nifty smartphone with glasses-free 3D,” and one has gone so far as to describe the announcement as “the day 360 video became Betamax, and AR won the race.” Others are more tempered in their enthusiasm, viewing this as a really expensive smartphone with a holographic screen that may or may not kill 360 video. Time will tell.

Initially priced between $1,195 and $1,595, the Hydrogen One is targeted to ship in Q1 of 2018.

John Hughes, Helena Packer, Kevin Donovan open post collective

Three industry vets have combined to launch PHD, a Los Angeles-based full-service post collective. Led by John Hughes (founder of Rhythm & Hues), Helena Packer (VFX supervisor/producer) and Kevin Donovan (film/TV/commercials director), PHD works across VR/AR, independent films, documentaries and TV, including limited series and commercials. In addition to post production services, including color grading, offline and online editorial, visual effects and final delivery, the collective offers live-action production. Beyond Los Angeles, PHD has locations in India, Malaysia and South Africa.

Hughes was the co-founder of the legendary VFX shop Rhythm & Hues (R&H) and led that studio for 26 years, earning three Academy Awards for “Best Visual Effects” (Babe, The Golden Compass, Life of Pi) as well as four scientific and engineering Academy Awards.

Packer was inducted into the Academy of Motion Picture Arts and Sciences (AMPAS) in 2008 for her creative contributions to filmmaking as an accomplished VFX artist, supervisor and producer. Her expertise extends beyond feature films to episodic TV, stereoscopic 3D and animation. Packer has been the VFX supervisor and Flame artist for hundreds of commercials and over 20 films, including 21 Jump Street and Charlie Wilson’s War.

Director Kevin Donovan is particularly well-versed in action and visual effects. He directed the feature film The Tuxedo and is currently producing the TV series What Would Trejo Do? He has shot over 700 commercials during the course of his career and is the winner of six Cannes Lions.

Since the company’s launch, PHD has worked on a number of projects: two PSAs for the climate change organization 5 To Do Today featuring Arnold Schwarzenegger and James Cameron, called Don’t Buy It and Precipice; a PSA for the international animal advocacy group WildAid shot in Tanzania and Oregon, called Talking Elephant; another for WildAid shot in Cape Town, South Africa, called Talking Rhino; and two additional WildAid PSAs featuring actor Josh Duhamel, called Souvenir and Situation.

“In a sense, our new company is a reconfigured version of R&H, but now we are much smarter, much more nimble and much more results driven,” says Hughes about PHD. “We have very little overhead to deal with. Our team has worked on hundreds of award-winning films and commercials…”

Main Photo: (L-R) John Hughes, Helena Packer and Kevin Donovan.

Liron Ashkenazi-Eldar joins The Artery as design director  

Creative studio The Artery has brought on Liron Ashkenazi-Eldar as lead design director. In her new role, she will spearhead the formation of a department that will focus on design and branding. Ashkenazi-Eldar and team are also developing in-house design capabilities to support the company’s VFX, experiential and VR/AR content, as well as website development, including providing motion graphics, print and social campaigns.

“While we’ve been well established for many years in the areas of production and VFX, our design team can now bring a new dimension to our company,” says Ashkenazi-Eldar, who is based in The Artery’s NYC office. “We are seeking brand clients with strong identities so that we can offer them exciting, new and even weird creative solutions that are not part of the traditional branding process. We will be taking a completely new approach to branding — providing imagery that is more emotional and more personal, instead of just following an existing protocol. Our goal is to provide a highly immersive experience for our new brand clients.”

Originally from Israel, the 27-year-old Ashkenazi-Eldar is a recent graduate of New York’s School of Visual Arts with a BFA in design. She is the winner of a 2017 ADC Silver Cube Award from The One Club, in the category 2017 Design: Typography, for her contributions to a project titled Asa Wife Zine. She led the creative team that submitted the project via the School of Visual Arts.