
Nvidia intros Turing-powered Titan RTX

Nvidia has introduced the Titan RTX, a desktop GPU that provides the kind of massive performance needed for creative applications, AI research and data science. Driven by the new Nvidia Turing architecture, Titan RTX — dubbed T-Rex — delivers 130 teraflops of deep learning performance and 11 GigaRays per second of raytracing performance.

Turing features new RT Cores to accelerate raytracing, plus new multi-precision Tensor Cores for AI training and inferencing. These two engines — along with more powerful compute and enhanced rasterization — will help speed the work of developers, designers and artists across multiple industries.

Designed for computationally demanding applications, Titan RTX combines AI, realtime raytraced graphics, next-gen virtual reality and high-performance computing. It offers the following features and capabilities:
• 576 multi-precision Turing Tensor Cores, providing up to 130 Teraflops of deep learning performance
• 72 Turing RT Cores, delivering up to 11 GigaRays per second of realtime raytracing performance
• 24GB of high-speed GDDR6 memory with 672GB/s of bandwidth — two times the memory of previous-generation Titan GPUs — to fit larger models and datasets (see the quick bandwidth arithmetic after this list)
• 100GB/s Nvidia NVLink, which can pair two Titan RTX GPUs to scale memory and compute
• Performance and memory bandwidth sufficient for realtime 8K video editing
• VirtualLink port, which provides the performance and connectivity required by next-gen VR headsets
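
For context, the quoted 672GB/s figure is consistent with GDDR6 running at 14Gb/s per pin on a 384-bit memory interface — pin rate and bus width widely reported for Turing-class boards, stated here as an assumption rather than from Nvidia's spec sheet:

$$ \frac{384 \times 14\ \text{Gb/s}}{8\ \text{bits/byte}} = 672\ \text{GB/s} $$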

Titan RTX provides multi-precision Turing Tensor Cores for breakthrough performance across FP32, FP16, INT8 and INT4 precisions, allowing faster training and inference of neural networks. It offers twice the memory capacity of previous-generation Titan GPUs, along with NVLink, allowing researchers to experiment with larger neural networks and datasets.
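
As a rough illustration of the mixed-precision training such Tensor Cores accelerate, here is a minimal PyTorch sketch using its automatic mixed-precision utilities. This is a generic example, not Nvidia-supplied code; the toy model, data and hyperparameters are placeholders:

```python
import torch
from torch.cuda.amp import autocast, GradScaler

# Toy stand-ins for a real model and dataset
model = torch.nn.Linear(1024, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loader = [(torch.randn(64, 1024), torch.randint(0, 10, (64,))) for _ in range(8)]

scaler = GradScaler()  # rescales the loss so FP16 gradients don't underflow
for inputs, targets in loader:
    optimizer.zero_grad()
    with autocast():  # eligible ops run in FP16 on the Tensor Cores
        loss = torch.nn.functional.cross_entropy(
            model(inputs.cuda()), targets.cuda())
    scaler.scale(loss).backward()
    scaler.step(optimizer)  # unscales gradients before stepping
    scaler.update()
```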

Titan RTX accelerates data analytics with RAPIDS. RAPIDS open-source libraries integrate seamlessly with the world’s most popular data science workflows to speed up machine learning.
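
In practice, that integration looks much like ordinary pandas code. A minimal cuDF sketch follows; the CSV filename and column names are hypothetical:

```python
import cudf  # RAPIDS GPU DataFrame library, mirrors much of the pandas API

# Hypothetical dataset; parsing and aggregation both run on the GPU
df = cudf.read_csv("transactions.csv")
per_customer = df.groupby("customer_id")["amount"].mean()
print(per_customer.sort_values(ascending=False).head(10))
```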

Titan RTX will be available later in December in the US and Europe for $2,499.

Post production in the cloud

By Adrian Pennington

After years of talk, using the cloud for the full arsenal of post workflows is possible today, with huge ramifications for the facilities business.

Rendering frames for visual effects requires an extraordinary amount of compute power, for which VFX studios have historically dedicated whole rooms full of servers to act as their renderfarm. As visual quality has escalated, most vendors have had to either limit the scope of their projects or buy or rent new machines on-premises to cope with the extra rendering. In recent times this has been upended, as cloud networking has enabled VFX shops to relieve internal bottlenecks by scaling up, and then contracting, at will.

The cloud rendering process has become so established that this once-groundbreaking capability has evolved to encompass a whole host of post workflows, from previz to transcoding. In the process, the conventional business model for post is being uprooted and reimagined.

“Early on, global facility powerhouses first recognized how access to unlimited compute and remote storage could empower the creative process to reach new heights,” explains Chuck Parker, CEO of Sohonet. “Despite spending millions of dollars on hardware, the demands of working on multiple, increasingly complex projects simultaneously, combined with decreasing timeframes, stretched on-premise facilities to their limits.”

Chuck Parker

Public cloud providers (Amazon Web Services, Google Cloud Platform, Microsoft Azure) changed the game by solving space, time and capacity problems for resource-intensive tasks. “Sohonet Fastlane and Google Compute Engine, for example, enabled MPC to complete The Jungle Book on time and to Oscar-winning standards, thanks to being able to run millions of core hours in the cloud,” notes Parker.

Small- to mid-sized companies followed suit. “They lacked the financial resources and the physical space of larger competitors, and initially found themselves priced out of major studio projects,” says Parker. “But by accessing renderfarms in the cloud they can eliminate the cost and logistics of installing and configuring physical machines. Flexible pricing and the option of preemptible instances mean only paying for the compute power used, further minimizing costs and expanding the scope of possible projects.”
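
As a purely illustrative sketch of what renting a preemptible render node can look like, here is a request for a preemptible Google Compute Engine instance through the Compute Engine v1 API via the google-api-python-client library. The project, zone, machine type and image are placeholders, and a real renderfarm would add scheduling and retry logic on top, since preemptible nodes can be reclaimed at any time:

```python
from googleapiclient import discovery  # pip install google-api-python-client

PROJECT, ZONE = "my-vfx-project", "us-central1-a"  # placeholder identifiers

compute = discovery.build("compute", "v1")  # uses application-default credentials
body = {
    "name": "render-node-001",
    "machineType": f"zones/{ZONE}/machineTypes/n1-standard-16",
    # Preemptible: billed at a deep discount, but may be reclaimed at any time
    "scheduling": {"preemptible": True},
    "disks": [{
        "boot": True,
        "autoDelete": True,
        "initializeParams": {
            "sourceImage": "projects/debian-cloud/global/images/family/debian-11",
        },
    }],
    "networkInterfaces": [{"network": "global/networks/default"}],
}
compute.instances().insert(project=PROJECT, zone=ZONE, body=body).execute()
```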

Milk VFX did just this when rendering the complex sequences on Adrift. Without the extra horsepower, the London-based house could not have bid on the project in the first place.

“The technology has now evolved to a point where any filmmaker with any VFX project or theatrical, TV or spot editorial can call on the cloud to operate at scale when needed — and still stay affordable,” says Parker. “Long anticipated and theorized, the ability to collaborate in realtime with teams in multiple geographic locations is a reality that is altering the post production landscape for enterprises of all sizes.”

Parker says the new post model might look like this. He uses the example of a company headquartered in Berlin — “an innovative company might employ only a dozen managers and project supervisors on its books. They can bid with confidence on jobs of any scale and any timeframe knowing that they can readily rent physical space in any location, anywhere in the world, to flexibly take advantage of tax breaks and populate it with freelance artists: 100 one week, say, 200 in week three, 300 in week five. The only hardware (rental) costs would be thin-client workstations and Wacom tablets, plus software licenses for 3D, roto, compositing and other key tools. With the job complete, the whole infrastructure can be smoothly scaled back.”

The compute costs of spinning up cloud processing and storage can be modelled into client pitches. “But building out and managing such connectivity independently may still require considerable CAPEX — an outlay that might be cost-prohibitive if you only need the infrastructure for short periods,” notes Parker. “Cloud-compute resources are perfect for spikes in workload but, in between those spikes, paying for bandwidth you don’t need will hurt the bottom line.”

Dedicated, “burstable” connectivity at speeds from 100Mbit/s up to 50Gbit/s, with flexibility, security and reliability, is highly desirable for creative workflows. Price points, as ever, are a motivating concern. Sohonet’s offerings, says Parker, “move your data away from Internet bandwidth, removing network congestion and decreasing the time it takes to transfer your data. With a direct link to the major cloud provider of your choice, customers can be in control of how their data is routed, leading to a more consistent network experience.”
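
Some rough arithmetic shows what those tiers mean for moving media. A 1TB batch of footage (8 × 10^12 bits), ignoring protocol overhead, transfers in roughly:

$$ t_{100\ \text{Mbit/s}} = \frac{8\times10^{12}}{10^{8}}\ \text{s} \approx 22\ \text{hours}, \qquad t_{50\ \text{Gbit/s}} = \frac{8\times10^{12}}{5\times10^{10}}\ \text{s} \approx 3\ \text{minutes} $$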

“Direct links into major studios like Pinewood UK open up realtime on-set CGI rendering with live-action photography for virtual production scenarios,” adds Parker. “It is vital that your data transits straight to the cloud and never touches the Internet.”

With file sizes set to increase exponentially over the next few years as 4K and HDR become standard and new immersive media like VR emerge into the mainstream, leveraging the cloud will not only be routine for the highest-budget projects and largest vendors, it will become the new post production paradigm. In the cloud, creative workflows are demystified and democratized.


Video editing and VFX app HitFilm gets an upgrade

FXhome has upgraded its video editing and VFX app. The new HitFilm version 11.0 features Surface Studio, a new VFX plugin modeled on Video Copilot’s Damage and Decay and Cinematic Titles tutorials. Based on customer requests, this new VFX tool enables users to create smooth or rough-textured metallic and vitreous surfaces on any text or layer. By dropping a clear PNG file onto the HitFilm timeline, users can instantly turn text titles into weathered, rusty and worn metallic signs.

HitFilm’s Surface Studio also joins FXhome’s expanding library of VFX plugins, Ignite Pro. This set of plugins is available on Mac and PC, and is compatible with 10 of the most widely used host applications, including Adobe Creative Cloud, Apple Final Cut Pro X, Avid, DaVinci Resolve and others.

Last month, FXhome added to its product family with Imerge Pro, a non-destructive RAW image compositor with fully flexible layers and advanced keying for content creators. FXhome is also integrating a number of Imerge Pro plugins with HitFilm, including Exposure, Outer Glow, Inner Glow and Dehaze. The new Imerge Pro plugins are tightly integrated with HitFilm 11.0’s interface, ensuring smooth, uninterrupted workflows.

Minimum system requirements for Apple are macOS 10.13 High Sierra, macOS 10.12 Sierra or OS X 10.11 El Capitan; for Windows, Microsoft Windows 10 (64-bit) or Microsoft Windows 8 (64-bit).

HitFilm 11.0 is available immediately from the FXhome store for $299. FXhome is also celebrating this holiday season with its annual sale. Through December 4, 2018, they are offering a 33% discount when users purchase the FXhome Pro Bundle, which includes HitFilm 11.0, Action, Ignite and Imerge.


Behind the Title: Weta Digital’s Paolo Emilio Selva

NAME: Paolo Emilio Selva 

COMPANY: Weta Digital

CAN YOU DESCRIBE YOUR COMPANY?
In the middle of Middle-earth, Weta Digital is a VFX company with more than a thousand artists and developers. While focused on delivering amazing movies, Weta Digital also invests in research and development for VFX.

WHAT’S YOUR JOB TITLE?
Head of Software Engineering 

WHAT DOES THAT ENTAIL?
In the software engineering department, we write tools for artists and make sure their creative intent is maintained across the pipeline. We also make sure production isn’t disrupted across the facility.  

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Writing code, maybe? Yeah, I’m still writing code when I can, mostly fixing bugs and taking nasty issues off other developers’ plates so they can stay focused on research and development, while providing support.

HOW DID YOU START YOUR CAREER?
I started my career as a researcher in human-computer interfaces at a university in Rome. I liked to solve problems, and the VFX industry has lots of problems to be solved 😉

HOW LONG HAVE YOU BEEN WORKING IN VFX?
Ten years  

DID A PARTICULAR FILM INSPIRE YOU ALONG THIS PATH IN ENTERTAINMENT?
I grew up with Pixar movies and lots of animated shorts. I also played video games. I was always fascinated by what was behind those things. I wanted to replicate them, which I did by re-writing games or effects seen in movies.

I started by using existing tools. Then, during high school — thanks to my older cousin — I found Basic and started writing my own tools. I found that I was able to control external devices with Basic and my Commodore 64. I also started enjoying electronics and micro-controllers. All of this reached its peak with my thesis at university, when I created a data-glove from scratch — from the hardware to the software — and started looking at example applications for it. This was between 1999 and 2001, when I also started working at the Human-Computer Interaction Lab.

WHAT’S YOUR FAVORITE PART OF THE JOB?
It’s challenging, in a good way, every day. And as a problem solver, I like this part of my job.

WHAT’S YOUR LEAST FAVORITE?
Sometimes too many meetings, but it’s important to communicate with every department and understand their needs. 

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Probably teaching and researching at university in Human-Computer Interaction. 

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Just to name some of them: War for the Planet of the Apes, Valerian, The BFG and Guardians of the Galaxy Vol. 2.          

WHAT IS THE PROJECT/S THAT YOU ARE MOST PROUD OF?
I was lucky enough to be at Weta Digital when we worked on Avatar and The Jungle Book, which both won Oscars for Best Visual Effects, and also The Adventures of Tintin, where I was directly involved in the hair-rendering process and all the TopoClouds tools for the Pantaray pipeline.

WHAT TOOLS DO YOU USE DAY TO DAY?
Nowadays, it’s my email client, my phone and, every now and then, a text editor and a C++ compiler.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Mostly enjoy time with my wife, my cats, video games and the gym when I can.


Milk VFX provides 926 shots for YouTube’s Origin series

London’s Milk VFX, known for its visual effects work on Adrift, Annihilation and Altered Carbon, has just completed production on YouTube Premium’s new sci-fi thriller original series, Origin.

Milk created all 926 VFX shots for Origin in 4K, encompassing a wide range of VFX work, in a four-month timeframe. Milk rendered entirely in the cloud (via AWS), allowing the team to scale while handling its current roster of projects, which includes Amazon’s Good Omens and the feature film Four Kids and It.

VFX supervisor and Milk co-founder Nicolas Hernandez supervised the entire roster of VFX work on Origin. Milk also supervised the VFX shoot on location in South Africa.

“As we created all the VFX for the 10-episode series, it was even more important for us to be on set,” says Hernandez. “As such, our VFX supervisor Murray Barber and on-set production manager David Jones supervised the Origin VFX shoot, which meant being based at the South Africa shoot location for several months.”

The series is from Left Bank Pictures, Sony Pictures Television and Midnight Radio in association with China International Television Corporation (CITVC). Created by Mika Watkins, Origin stars Tom Felton and Natalia Tena and will premiere on November 14 on YouTube Premium.

“The intense challenge of delivering and supervising a show on the scale of Origin — 900 4K shots in four months — was not only helped by our recent expansion and the use of the cloud for rendering, but was largely due to the passion and expertise of the Milk Origin team in collaboration with Left Bank Pictures,” says Milk CEO and co-founder Will Cohen.

In terms of tools, Milk used Autodesk Maya, Side Effects Houdini, Foundry’s Nuke and Mari, Shotgun, Photoshop, Deadline for renderfarm management, Arnold for rendering and a variety of in-house tools. Hardware includes HP Z series workstations and Nvidia graphics; storage is Pixitmedia’s PixStor.

The series, from director Paul W.S. Anderson and the producers of The Crown and Lost, follows a group of outsiders who find themselves abandoned on a ship bound for a distant land. Now they must work together for survival, but quickly realize that one of them is far from who they claim to be.

 


Post studio Nomad adds Tokyo location

Creative editorial/VFX/sound design company Nomad has expanded its global footprint with a space in Tokyo, adding to a network that also includes offices in New York, Los Angeles and London. It will be led by managing director Yoshinori Fujisawa and executive producer Masato Midorikawa.

The Tokyo office has three client suites, an assistant support suite, a production office and a machine room. The tools for its post workflow include Adobe Creative Cloud (Premiere, After Effects, Photoshop), Flame, Flame Assist, Avid Pro Tools and various other support tools.

Nomad partner/editor Glenn Martin says the studio often works with creatives who regularly travel between LA and Tokyo. He says Nomad will support the new Tokyo-based group with editors and VFX artists from its other offices whenever larger teams are needed.

“The role of a post production house is quite different between the US and Japan,” say Fujisawa and Midorikawa jointly. “Although people in Japan are starting to see the value of the Western-style post production model, it has not been properly established here yet. We are able to give our Japanese directors and creatives the ability to collaborate with Nomad’s talented editors and VFX artists, who have great skills in storytelling and satisfying the needs of brands. Nomad has a comprehensive post-production workflow that enables the company to execute global projects. It’s now time for Japan to experience this process and be a part of the future of global advertising.”

Main Image: (L-R) Yoshinori Fujisawa and Masato Midorikawa


Cinesite London promotes Caroline Garrett to head of VFX

Cinesite in London has upped Caroline Garrett to head of its London VFX studio. She will oversee day-to-day operations at the independent VFX facility. Garrett will work with her colleagues at Cinesite Montreal, Cinesite Vancouver and partner facilities Image Engine and Trixter to increase Cinesite’s global capacity for feature films, broadcast and streaming shows.

Garrett started her career in 1998 as an artist at the Magic Camera Company in Shepperton Film Studios. In 2002 she co-founded the previsualization company Fuzzygoat, working closely with directors at the start of the creative process. In 2009 she joined Cinesite, taking on the position of CG manager and overseeing the management of the 3D department as well as serving as both producer and executive producer. Prior to her most recent promotion, she was head of production, overseeing all aspects of production for the London studio.

With 20 years of industry experience and her own background as an animator and CG artist, Garrett understands the rigors that artists face while solving technical and creative challenges. Since joining Cinesite in 2009, she has worked as senior production executive on high-profile features, including Harry Potter and the Deathly Hallows, Skyfall, World War Z, The Revenant, Fantastic Beasts and Where to Find Them and, most recently, Avengers: Infinity War.

Garrett is the second woman to be appointed to a head of studio role within the Cinesite group, following Tara Kemes’ appointment as GM of Cinesite’s Vancouver animation studio earlier this year.


Sony Imageworks provides big effects, animation for Warner’s Smallfoot

By Randi Altman

The legend of Bigfoot: a giant, hairy two-legged creature roaming the forests and giving humans just enough of a glimpse to freak them out. Sightings have been happening for centuries with no sign of slowing down — seriously, Google it.

But what if that story was turned around, and it was Bigfoot who was freaked out by a Smallfoot (human)? Well, that is exactly the premise of the new Warner Bros. film Smallfoot, directed by Karey Kirkpatrick. It’s based on the book “Yeti Tracks” by Sergio Pablos.

Karl Herbst

Instead of a human catching a glimpse of the mysterious giant, a yeti named Migo (Channing Tatum) sees a human (James Corden) and tells his entire snow-filled village about the existence of Smallfoot. Of course, no one believes him so he goes on a trek to find this mythical creature and bring him home as proof.

Sony Pictures Imageworks was tasked with all of the animation and visual effects work on the film, while Warner Animation Group did all of the front-end work — such as adapting the script, creating the production design, editing, directing, producing and more. We reached out to Imageworks VFX supervisor Karl Herbst (Hotel Transylvania 2) to find out more about creating the animation and effects for Smallfoot.

The film has a Looney Tunes-type feel with squash and stretch. Did this provide more freedom or less?
In general, it provided more freedom since it allowed the animation team to really have fun with gags. It also gave them a ton of reference material to pull from and come up with new twists on older ideas. Once out of animation, depending on how far the performance was pushed, other departments — like the character effects team — would have additional work due to all of the exaggerated movements. But all of the extra work was worth it because everyone really loved seeing the characters pushed.

We also found that as the story evolved, Migo’s journey became more emotionally driven; we needed to find a style that also let the audience truly connect with what he was going through. We brought in a lot more subtlety, and a more truthful physicality to the animation when needed. As a result, we have these incredibly heartfelt performances and moments that would feel right at home in an old Road Runner short. Yet it all still feels like part of the same world, with these truly believable characters at the center of it.

Was scale between such large and small characters a challenge?
It was one of the first areas we wanted to tackle, since the look of the yeti fur next to a human was really important to the filmmakers. In the end, we found that the thickness and fidelity of the yeti hair had to be very high so you could see each hair next to the hairs of the humans.

It also meant the rigs for the humans and yetis had to be flexible enough to scale as needed, so that in moments where they are very close together the characters did not feel disproportionate to each other. Everything in our character pipeline from animation down to lighting had to be flexible in dealing with these scale changes. Even things like subsurface scattering in the skin had dials to deal with Percy, or any human character, being scaled up or down in a shot.

How did you tackle the hair?
We updated a couple of key areas in our hair pipeline, starting with how we build our hair. In the past, we would make curves that looked more like small groups of hairs in a clump. In this case, we made each curve represent a single strand of hair. To shade this hair in a way that gave artists better control over the look, our development team created a new hair shader that used true multiple scattering within the hair.

We then extended that hair shading model to add control over the distribution around the hair fiber, to capture the effect of animal hair, which tends to scatter differently than human hair. This gave artists the ability to create lots of different hair looks not based on human hair, as our older models were.

Was rendering so many furry characters on screen at a time an issue?
Yes. In the past it would have been hard to shade them all at once, mostly due to our reliance on opacity to create the soft shadows needed for fur. With the new shading model, we were no longer using opacity at all, so the number of rays needed to resolve the hair was lower than before. But we still needed to resolve the aliasing caused by the sheer number of fine hairs (9 million for LeBron James’ Gwangi).

We developed a few other new tools within our version of the Arnold renderer to help with aliasing and render time in general. The first was adaptive sampling, which allowed us to raise the anti-aliasing samples drastically: some pixels would use only a few samples while others would use very high sampling, whereas in the past all pixels would get the same number. This focused our render times where we needed them, helping to reduce overall rendering. Our development team also added the ability to pick a render up from its previous point. This meant that we could do all of our lighting work at a lower quality level, get creative approval from the filmmakers, and then pick the renders up and bring them to full quality without losing the time already spent.
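
The core idea of variance-driven adaptive sampling can be sketched in a few lines of Python. This is an illustrative toy, not Imageworks' Arnold implementation; the thresholds and sample functions are invented:

```python
import random
import statistics

def render_pixel(sample_fn, min_samples=4, max_samples=256, noise_threshold=0.01):
    """Keep taking samples until the pixel's estimated error is small enough."""
    samples = [sample_fn() for _ in range(min_samples)]
    while len(samples) < max_samples:
        # Standard error of the mean: falls as more samples accumulate
        stderr = statistics.stdev(samples) / len(samples) ** 0.5
        if stderr <= noise_threshold:  # converged: smooth pixels stop early
            break
        samples.append(sample_fn())
    return statistics.fmean(samples), len(samples)

# A flat pixel resolves in a handful of samples; a noisy one (fine hair,
# say) keeps sampling until it converges or hits the cap.
flat = lambda: 0.5 + random.uniform(-0.005, 0.005)
noisy = lambda: random.choice([0.0, 1.0])
print(render_pixel(flat))   # few samples used
print(render_pixel(noisy))  # many samples used
```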

What tools were used for the hair simulations specifically, and what tools did you call on in general?
We used Maya and the Nucleus solvers for all of the hair simulations, but developed tools over them to deal with so much hair per character and so many characters on screen at once. The simulation for each character was driven by their design and motion requirements.

The Looney Tunes-inspired design and motion created a challenge: keeping hair simulations from breaking during all of the quick, stretched motion, while still allowing light wind for the subtle, emotional moments. We solved those requirements by using a high number of control hairs and constraints. Meechee (Zendaya) used 6,000 simulation curves with over 200 constraints, while Migo needed 3,200 curves with around 30 constraints.

Stonekeeper (Common) was the most complex of the characters, with long braided hair on his head, a beard, shaggy arms and a cloak made of stones. He required a cloth simulation pass, a rigid-body simulation for the stones, and hair simulated on top of the stones. Our in-house tool, Kami, builds all of the hair at render time and also allows us to add procedurals to the hair at that point. We relied on those procedurals to create the many varied hair looks for all of the generic characters needed to fill the village full of yetis.

How many different types of snow did you have?
We created three different snow systems for environmental effects. The first was a particle simulation of flakes for near-ground detail. The second was a volumetric system creating lots of textured, moving atmosphere in the backgrounds. We ran it on each of the large sets and then stored the results so lighters could pick which parts they wanted in each shot. To further help artistically drive the look of each shot, our third system was a library of 2D elements that the effects team rendered, which could be added during compositing to add details late in shot production.

For ground snow, we had different systems based on the needs in each shot. For shallow footsteps, we used displacement of the ground surface with additional little pieces of geometry to add crumble detail around the prints. This could be used in foreground or background.

For heavy interactions, like tunneling or sliding in the snow, we developed a new tool we called Katyusha. This new system combined rigid-body destruction with fluid simulations to achieve all of the different states snow can take in any given interaction. We then rendered these simulations as volumetrics to get the complex lighting behavior the filmmakers were looking for. The snow, being in essence a cloud, allowed light transport through all of the different layers of geometry and volume that could be present at any given point in a scene. This made it easier for the lighters to achieve the right snow look in any given lighting situation.

Was there a particular scene or effect that was extra challenging? If so, what was it and how did you overcome it?
The biggest challenge for the film as a whole was the environments. The story was very fluid, so the design and build of the environments came very late in the process. Coupling that with a creative team that liked to find their shots — versus designing and building them — meant we needed to be very flexible in how we created sets, and to create them quickly.

To achieve this, we began by breaking the environments into a subset of source shapes that could be combined in any fashion to build Yeti Mountain, Yeti Village and the surrounding environments. Surfacing artists then created materials that could be applied to any set piece, allowing for quick creative decisions about what was rock, snow and ice, and creating many different looks. All of these materials were created using PatternCreate networks as part of our OSL shaders. With them we could heavily leverage portable procedural texturing between assets, making location construction quicker, more flexible and easier to dial.
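
The "easier to dial" idea can be illustrated with a toy procedural mask in Python. The production work was done in OSL shader networks; the function, parameters and thresholds below are invented for illustration:

```python
def snow_mask(height, up_facing, snow_line=0.6, softness=0.15):
    """Toy rock-vs-snow mask: snow accumulates on high, up-facing surfaces.

    height    -- normalized 0..1 elevation of the shading point
    up_facing -- vertical component of the surface normal, 0..1
    snow_line -- artist dial: where snow starts to take over
    softness  -- artist dial: width of the rock-to-snow transition
    """
    t = height * up_facing
    e0, e1 = snow_line - softness, snow_line + softness
    x = min(max((t - e0) / (e1 - e0), 0.0), 1.0)
    return x * x * (3.0 - 2.0 * x)  # smoothstep: 0 = rock, 1 = snow

# A steep cliff face stays rocky even at altitude, while a flat ledge
# at the same height is snowed over.
print(snow_mask(height=0.8, up_facing=0.2))  # ~0.0, mostly rock
print(snow_mask(height=0.8, up_facing=1.0))  # ~1.0, mostly snow
```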

To get the right snow look for all levels of detail needed, we used a combination of textured snow, modeled snow and a simulation of geometric snowfall, all of which needed to shade the same. For the simulated snowfall we created a padding system that could be run at any time on an environment, giving it a fresh coating of snow. We did this so that filmmakers could modify sets freely in layout and not have to worry about broken snow lines. Doing all of that with modeled snow would have been too time-consuming and costly. This padding system worked not only in organic environments, like Yeti Village, but also in the Human City at the end of the film. The snow you see in the Human City is a combination of this padding system in the foreground and textures in the background.


MPC Film provides VFX for new Predator movie

From outer space to suburban streets, the hunt comes to Earth in director Shane Black’s reinvention of the Predator series. Now, the universe’s most lethal hunters are stronger, smarter and deadlier than ever before, having genetically upgraded themselves with DNA from other species.

When a young boy accidentally triggers their return to Earth, only a crew of ex-soldiers and an evolutionary biology professor can prevent the end of the human race.

 

MPC Film’s team, led by VFX supervisors Richard Little and Arundi Asregadoo, created 500 shots for the new Predator movie. The majority of the work was hero animation for the Upgrade Predator; additional work included the Predator dogs, effects simulations and a CG swamp environment.

MPC Film’s character lab department modeled, sculpted and textured the Upgrade and the Predator dogs to a high level of detail, giving the director the flexibility to view closeups if needed. The technical animation department applied movement to the muscle systems and added flowing motion to the dreadlocks on all of the film’s hero alien characters, an integral part of the Predator aesthetic in the franchise.

MPC’s artists also created photorealistic Predator One and digital double mercenary characters for the film. Sixty-four other assets were created, ranging from stones and sticks for the swamp floor to grenades, a grenade launcher and bombs.

MPC’s artists also worked on the scene where we first meet the Upgrade’s dogs, which are sent out to hunt the Predator. The sequence was shot on a bluescreen stage on location. The digital environments team built a 360-degree baseball field matching the location shoot from reference photography; creating simple geometry and re-projecting the textures helped achieve a realistic environment.

Once the Upgrade tracks down the “fugitive” Predator, the fight begins. To create the scene, MPC used a mixture of the live-action Predator intercut with its full-CG Predator. The battle culminates with the Upgrade ripping the head and spine away from the body of the fugitive. This shot was a big challenge for the FX and tech animation teams, who also added green Predator blood into the mix, amplifying the gore factor.

In the hunt scene, the misfit heroes trap the Upgrade and set the ultimate hunter alight. This sequence was technically challenging for the animation, lighting and FX teams, which worked closely together to create a convincing Upgrade that appeared to be on fire.

For the final battle, MPC Film’s digital environments artists created a full CG swamp where the Upgrade’s ship crash-lands. The team was tasked with matching the set and creating a 360-degree CG set extension with water effects.

A big challenge was how to ensure the Upgrade Predator interacted realistically with the actors and set. The animation, tech animation and FX teams worked hard to make the Upgrade Predator fit seamlessly into this environment.

London design, animation studio Golden Wolf sets up shop in NYC

Animation studio Golden Wolf, headquartered in London, has launched its first stateside location in New York City. The expansion comes on the heels of an alliance with animation/VFX/live-action studio Psyop, a minority investor in the company. Golden Wolf now occupies studio space in SoHo adjacent to Psyop and its sister company Blacklist, which formerly represented Golden Wolf stateside and was instrumental to the relationship.

Among the year’s highlights from Golden Wolf are an integrated campaign for Nike FA18 Phantom (client direct), a spot for the adidas x Parley Run for the Oceans initiative (TBWA Amsterdam) in collaboration with Psyop, and Marshmello’s Fly music video for Disney. Golden Wolf also received an Emmy nomination for its main title sequence for Disney’s DuckTales reboot.

Heading up Golden Wolf’s New York office are two transplants from the London studio, executive producer Dotti Sinnott and art director Sammy Moore. Both joined Golden Wolf in 2015, Sinnott from motion design studio Bigstar, where she was a senior producer, and Moore after a run as a freelance illustrator/designer in London’s agency scene.

Sinnott comments: “Building on the strength of our London team, the Golden Wolf brand will continue to grow and evolve with the fresh perspective of our New York creatives. Our presence on either side of the Atlantic not only brings us closer to existing clients, but also positions us perfectly to build new relationships with New York-based agencies and brands. On top of this, we’re able to use the time difference to our advantage to work on faster turnarounds and across a range of budgets.”

Founded in 2013 by Ingi Erlingsson, the studio’s executive creative director, Golden Wolf is known for youth-oriented work — especially content for social media, entertainment and sports — that blurs the lines of irreverent humor, dynamic action and psychedelia. Erlingsson was once a prolific graffiti artist and, later, an illustrator/designer and creative director at UK-based design agency ilovedust. Today he inspires Golden Wolf’s creative culture and disruptive style, fed in part by a wave of next-gen animation talent coming out of schools such as Gobelins in France and The Animation Workshop in Denmark.

“I’m excited about our affiliation with Psyop, which enjoys an incredible legacy producing industry-leading animated advertising content,” Erlingsson says. “Golden Wolf is the new kid on the block, with bags of enthusiasm and an aim to disrupt the industry with new ideas. The combination of the two studios means that we are able to tackle any challenge, regardless of format or technical approach, with the support of some of the world’s best artists and directors. The relationship allows brands and agencies to have complete confidence in our ability to solve even the biggest challenges.”

Golden Wolf’s initial work out of its New York studio includes spots for Supercell (client direct) and Bulleit Bourbon (Barton F. Graf). Golden Wolf is represented in the US market by Hunky Dory for the East Coast, Baer Brown for the Midwest and In House Reps for the West Coast. Stink represents the studio for Europe.

Main Photo: (L-R) Dotti Sinnott, Ingi Erlingsson and Sammy Moore.