
Category Archives: Animation

Behind the Title: Title Designer Nina Saxon

For 40 years, Nina Saxon has been a pioneer in the area of designing movie titles. She is still one of the few women working in this part of the industry.

NAME: Nina Saxon

COMPANY: Nina Saxon Design

CAN YOU DESCRIBE YOUR COMPANY?
We design main and end titles for film and television as well as branding for still and moving images.

WHAT’S YOUR JOB TITLE?
Title Designer

WHAT DOES THAT ENTAIL?
Making a moving introduction for a film, like a book cover, that sets up the story. Or it might be simple type over picture. Also watching a film and showing the director samples or storyboards of what I think should be used for the film.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
That I’m one of only a few women in this field and have worked for 40 years, hiring others to help me only if necessary.

WHAT’S YOUR FAVORITE PART OF THE JOB?
When my project is done and I get to see my finished work up on the screen.

WHAT’S YOUR LEAST FAVORITE?
Waiting to be paid.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
Morning

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I’d probably be a psychologist.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
In 1975, I was in the film department at UCLA and decided I was determined to work in the film business.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
The upcoming documentary on Paul McCartney called Here, There and Everywhere, and upcoming entertainment industry corporate logos that will be revealed in October. In the past, I did the movie Salt with Angelina Jolie and the movie Flight with Denzel Washington.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Working on the main title open for Forrest Gump.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My iPad, iPhone and computer

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I exercise a lot, five to six days a week; drink a nice glass of wine; try to get enough sleep; listen to music while meditating before sleep; and make sure I know what I need to do the next day before I go to bed.

Ziva VFX 1.7 helps simplify CG character creation


Ziva Dynamics has introduced Ziva VFX 1.7, designed to make CG character creation easier thanks to the introduction of Art Directable Rest Shapes (ADRS). This tool allows artists to make characters conform to any shape without losing their dynamic properties, opening up a faster path to cartoons and digi-doubles.

Users can now adjust a character’s silhouette with simple sculpting tools. Once the goal shape is established, Ziva VFX can morph to match it, maintaining all of the dynamics embedded before the change. Whether unnatural or precise, ADRS works with any shape, removing the difficulty of both complex setups and time-intensive corrective work.

The Art Directable Rest Shapes feature has been in development for over a year and was created in collaboration with several major VFX and feature animation studios. According to Ziva, while outputs and art styles differed, each group essentially requested the same thing: extreme accuracy and more control without compromising the dynamics that sell a final shot.

For feature animation characters not based on humans or nature, ADRS can rapidly alter and exaggerate key characteristics, allowing artists to be expressive and creative without losing the power of secondary physics. For live-action films, where the use of digi-doubles and other photorealistic characters is growing, ADRS can minimize the setup process when teams want to quickly tweak a silhouette or make muscles fire in multiple ways during a shot.

According to Josh diCarlo, head of rigging at Sony Pictures Imageworks, “Our creature team is really looking forward to the potential of Art Directable Rest Shapes to augment our facial and shot-work pipelines by adding quality while reducing effort. Ziva VFX 1.7 holds the potential to shave weeks of work off of both processes while simultaneously increasing the quality of the end results.”

To use Art Directable Rest Shapes, artists must duplicate a tissue mesh, sculpt their new shape onto the duplicate and add the new geometry as a Rest Shape over select frames. This process will intuitively morph the character, creating a smooth, novel deformation that adheres to any artistic direction a creative team can think up. On top of ADRS, Ziva VFX 1.7 will also include a new zRBFWarp feature, which can warp NURBS surfaces, curves and meshes.
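For readers who want to picture that workflow in script form, here is a minimal sketch of those steps as they might look in Maya's Python console. The zRestShape command flag and the keyed weight attribute are assumptions based on the process described above, not confirmed Ziva API; consult Ziva's documentation for the real calls.

```python
# Minimal sketch (Maya Python) of the ADRS steps described above.
# The zRestShape flag and weight attribute are assumptions for
# illustration; check the Ziva VFX docs for the confirmed API.
import maya.cmds as cmds

# 1. Duplicate the simulated tissue mesh to use as a sculpt target.
goal = cmds.duplicate('bicep_tissue', name='bicep_tissue_goal')[0]

# 2. (Artist step) Sculpt 'bicep_tissue_goal' with Maya's sculpt tools.

# 3. Add the sculpted duplicate as a Rest Shape on the tissue...
cmds.select('bicep_tissue', goal)
cmds.zRestShape(a=True)  # hypothetical flag: add selected target

# 4. ...and key its influence over select frames so the dynamics
#    blend toward the sculpted silhouette without being replaced.
cmds.setKeyframe('zRestShape1.weights[0]', time=100, value=0.0)
cmds.setKeyframe('zRestShape1.weights[0]', time=110, value=1.0)
```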

Ziva VFX 1.7 is available now as an Autodesk Maya plugin for Windows and Linux users; it can be purchased in monthly or yearly installments, depending on user type, and a free 60-day trial is available from Ziva Dynamics.

According to Michael Smit, chief commercial officer at Ziva Dynamics, “Ziva is working towards a new platform that will more easily allow us to deploy the software into other software packages, operating systems, and different network architectures. As an example we are currently working on our integrations into iOS and Unreal, both of which have already been used in limited release for production settings. We’re hopeful that once we launch the new platform commercially there will be an opportunity to deploy tools for macOS users.”


Behind the Title: Chapeau CD Lauren Mayer-Beug

This creative director loves the ideation process at the start of a project when anything is possible, and saving some of those ideas for future use.

COMPANY: LA’s Chapeau Studios

CAN YOU DESCRIBE YOUR COMPANY?
Chapeau provides visual effects, editorial, design, photography and story development, with additional experience in web development and software and app engineering.

WHAT’S YOUR JOB TITLE?
Creative Director

WHAT DOES THAT ENTAIL?
It often entails seeing a job through from start to finish. I look at it like making a painting or a sculpture.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Perhaps just how hands-on the process actually is. And how analog I am, considering we work in such a tech-driven environment.

Beats

WHAT’S YOUR FAVORITE PART OF THE JOB?
Thinking. I’m always thinking big picture to small details. I love the ideation process at the start of a project when anything is possible. Saving some of those ideas for future use, learning about what you want to do through that process. I always learn more about myself through every ideation session.

WHAT’S YOUR LEAST FAVORITE?
Letting go of the details that didn’t get addressed. Not everything is going to be perfect, and since it’s a learning process there is inevitably something that will catch your eye.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
My mind goes to so many buckets. A published children’s book author with a kick-ass coffee shop. A coffee bean buyer so I could travel the world.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I always skewed in this direction. My thinking has always been in the mindset of idea coaxer and gatherer. I was put in that position in my mid-20s and realized I liked it (with lots to learn, of course), and I’ve run with it ever since.

IS THERE A PROJECT YOU ARE MOST PROUD OF?
That’s hard to say. Every project is really so different. A lot of what I’m most proud of is behind the scenes… the process that will go into what I see as bigger things. With Chapeau, I will always love the Facebook projects, all the pieces that came together — both on the engineering side and the fun creative elements.

Facebook

What I’m most excited about is our future stuff. There’s a ton on the sticky board that we aim to accomplish in the very near future. Thinking about how much is actually being set in motion is mind-blowing, humbling and — dare I say — makes me outright giddy. That is why I’m here, to tell these new stories — stories that take part in forming the new landscape of narrative.

WHAT TOOLS DO YOU USE DAY TO DAY?
Anything Adobe. My most effective tool is good old pen and paper. Nothing works better for conveying ideas and working out the knots.

WHERE DO YOU FIND INSPIRATION?
I’m always looking for inspiration and find it everywhere, as many other creatives do. However, nature is where I’ve always found my greatest inspiration. I’m constantly taking photos of interesting moments to save for later. Oftentimes I will refer back to those moments in my work. When I need a reset I hike, run or bike. Movement helps.

I’m always going outside to look at how the light interacts with the environment. Something I’ve become known for at work is going out of my way to see a sunset (or sunrise). They know me to be the first one on the roof for a particularly enchanting magic hour. I’m always staring at the clouds — the subtle color combinations and my fascination with how colors look the way they do only by context. All that said, I often have my nose in a graphic design book.

The overall mood realized from gathering and creating the ever-popular Pinterest board is so helpful. Seeing the mood in color and texture never gets old. Suddenly, you have a fully formed example of where your mind is at, something you could never have talked your way through.

Then, of course, there are people. People/peers and what they are capable of will always amaze me.


OptiTrack reveals new skeletal solver

OptiTrack has a new skeletal solver that brings artifact-free, realtime character animation to its optical motion capture systems.

Key features of OptiTrack’s skeletal solver include:

– Accurate human movement tracking in realtime
– Major advances in solve quality and artifact-free streaming of character data
– Compatible with any OptiTrack system, including those used for live-action camera tracking, virtual camera tracking and virtual reality
– Supports industry-standard tools, including Epic Games’ Unreal Engine, Unity Technologies’ Unity realtime platform and Autodesk MotionBuilder (a rough streaming sketch follows this list)
– Extremely low latency (less than 10 milliseconds)
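For pipelines that script against the stream directly, the sketch below shows roughly how the solver's realtime output might be consumed, modeled loosely on the NatNet SDK's Python sample. The class and attribute names follow that sample loosely and should be treated as assumptions; check the sample shipped with your SDK.

```python
# Rough sketch of consuming realtime skeleton data, in the style of
# OptiTrack's NatNet Python sample (NatNetClient.py). Attribute and
# callback names here are assumptions, not confirmed SDK API.
from NatNetClient import NatNetClient

def receive_frame(frame_data):
    # Called once per mocap frame (sub-10 ms behind capture); forward
    # solved skeleton segments to Unreal, Unity or MotionBuilder here.
    pass

client = NatNetClient()
client.new_frame_listener = receive_frame  # assumed hookup point
client.run()                               # starts the listener threads
```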

As a complement to its new skeletal solver, OptiTrack has introduced an equally high-performing finger-tracking solution created in partnership with Manus VR. Embedded with OptiTrack’s signature pulse Active technology, Inertial Measurement Units (IMU) and bend sensors, the gloves deliver accurate, continuous finger-tracking data in real time that is fully compatible with existing character animation and VR pipelines when used with OptiTrack systems.


Conductor boosts its cloud rendering with Amazon EC2

Conductor Technologies’ cloud rendering platform will now support Amazon Web Services (AWS) and Amazon Elastic Compute Cloud (Amazon EC2), bringing the virtual compute resources of AWS to Conductor customers. This new capability will provide content production studios working in visual effects, animation and immersive media access to new, secure, powerful resources that will allow them — according to the company — to quickly and economically scale render capacity. Amazon EC2 instances, including cost-effective Spot Instances, are expected to be available via Conductor this summer.

“Our goal has always been to ensure that Conductor users can easily access reliable, secure instances on a massive scale. AWS has the largest and most geographically diverse compute, and the AWS Thinkbox team, which is highly experienced in all facets of high-volume rendering, is dedicated to M&E content production, so working with them was a natural fit,” says Conductor CEO Mac Moore. “We’ve already been running hundreds of thousands of simultaneous cores through Conductor, and with AWS as our preferred cloud provider, I expect we’ll be over the million simultaneous core mark in no time.”

Simple to deploy and highly scalable, Conductor is equally effective as an off-the-shelf solution or customized to a studio’s needs through its API. Conductor’s intuitive UI and accessible analytics provide a wealth of insightful data for keeping studio budgets on track. Apps supported by Conductor include Autodesk Maya and Arnold; Foundry’s Nuke, Cara VR, Katana, Modo and Ocula; Chaos Group’s V-Ray; Pixar’s RenderMan; Isotropix’s Clarisse; Golaem; Ephere’s Ornatrix; Yeti; and Miarmy. Additional software and plug-in support are in progress, and may be available upon request.
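As a rough illustration of what scripted submission through an API like Conductor's could look like, here is a hypothetical sketch. The module, function and parameter names below are invented for illustration only and are not Conductor's actual client library.

```python
# Hypothetical sketch of a scripted render submission through an API
# like Conductor's. Module/function/parameter names are invented for
# illustration; they are not Conductor's real client.
import conductor_api  # hypothetical module

job = conductor_api.submit_job(
    project='spring_campaign',
    software=['maya-2018', 'arnold'],          # host app + renderer
    instance_type='highcpu-32',                # example machine profile
    preemptible=True,                          # Spot-style instances for cost savings
    frames='1001-1240',                        # frame range to fan out across instances
    upload_paths=['/shows/spot/shot010.ma'],   # scene plus dependencies
    output_path='/shows/spot/renders/shot010',
)
print(job.id, job.status())                    # poll progress, pull analytics
```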

Some background on Conductor: it’s a secure cloud-based platform that enables VFX, VR/AR and animation studios to seamlessly offload rendering and simulation workloads to the public cloud. As the only rendering service that is scalable to meet the exact needs of even the largest studios, Conductor easily integrates into existing workflows, features an open architecture for customization, provides data insights and can implement controls over usage to ensure budgets and timelines stay on track.


Technicolor opens prepro studio in LA

Technicolor is opening a new studio in Los Angeles dedicated to creating a seamless pipeline for feature projects — from concept art and visualization through virtual production, production and into final VFX.

As new distribution models increase the demand for content, Technicolor Pre-Production will provide the tools, the talent and the space for creatives to collaborate from day one of their project – from helping set the vision at the start of a job to ensuring that the vision carries through to production and VFX. The result is a more efficient filmmaking process.

Technicolor Pre-Production studio is headed by Kerry Shea, an industry veteran with over 20 years of experience. She is no stranger to this work, having held executive positions at Method Studios, The Third Floor, Digital Domain, The Jim Henson Company, DreamWorks Animation and Sony Pictures Imageworks.

Kerry Shea

Credited on more than 60 feature films including The Jungle Book, Pirates of the Caribbean: Dead Men Tell No Tales and Guardians of the Galaxy Vol. 2, Shea has an extensive background in VFX and post production, as well as live action, animatronics and creature effects.

While the Pre-Production studio stands apart from Technicolor’s visual effects studios — MPC Film, Mill Film, MR. X and Technicolor VFX — it can work seamlessly in conjunction with one or any combination of them.

The Technicolor Pre-Production Studio will comprise five key departments:
– The Business Development Department will work with clients, from project budgeting to consulting on VFX workflows, to help plan and prepare projects for a smooth transition into VFX.
– The VFX Supervisors Department will offer creative supervision across all aspects of VFX on client projects, whether delivered by Technicolor’s studios or third-party vendors.
– The Art Department will work with clients to understand their vision – including characters, props, technologies, and environments – creating artwork that delivers on that vision and sets the tone for the rest of the project.
– The Virtual Production Department will partner with filmmakers to bridge the gap between them and VFX through the production pipeline. Working on the ground and on location, the department will deliver a fully integrated pipeline and shooting services with the flexibility of a small, manageable team — allowing critical players in the filmmaking process to collaborate, view and manipulate media assets and scenes across multiple locations as the production process unfolds.
– The Visualization Department will deliver visualizations that will assist in achieving on screen exactly what clients envisioned.

“With the advancements of tools and technologies, such as virtual production, filmmaking has reached an inflection point, one in which storytellers can redefine what is possible on-set and beyond,” says Shea. “I am passionate about the increasing role and influence that the tools and craft of visual effects can have on the production pipeline and the even more important role in creating more streamlined and efficient workflows that create memorable stories.”


Behind the Title: Neko founder Lirit Rosenzweig Topaz

NAME: Lirit Rosenzweig Topaz

COMPANY: Burbank’s Neko Productions

CAN YOU DESCRIBE YOUR COMPANY?
We are an animation studio working on games, TV, film, digital, AR, VR and promotional projects in a variety of styles, including super-cartoony and hyper-realistic CG and 2D. We believe in producing the best product for the budget, and giving our clients and partners peace of mind.

WHAT’S YOUR JOB TITLE?
Founder/Executive Producer

WHAT DOES THAT ENTAIL?
I established the company and built it from scratch. I am the face of the company and the force behind it. I am in touch with our clients and potential clients to make sure all are getting the best service possible.

Dr. Ruth doc

I am a part of the hiring process, making sure our team meets the standards of creativity, communication ability, responsibility and humanness. It is important for me to make sure all of our team members are great human beings, as well as being amazing and talented artists. I oversee all projects and make sure the machine is working smoothly to everyone’s satisfaction.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I am always looking at the big picture, but from the macro down to the micro as well. I need to be aware of so many of the smaller details, making sure everything is running smoothly for both sides, employees and clients.

WHAT HAVE YOU LEARNED OVER THE YEARS ABOUT RUNNING A BUSINESS?
I have learned that it is a roller coaster and one should enjoy the ride, and that one day doesn’t look like another day. I learned that if you are true to yourself, stick to your objectives and listen to your inner voice while doing a great job, things will work out. I always remember we are all human beings; you can succeed as a business person and have people and clients love working with you at the same time.

A LOT OF IT MUST BE ABOUT TRYING TO KEEP EMPLOYEES AND CLIENTS HAPPY. HOW DO YOU BALANCE THAT?
For sure! That is the key for everything. When employees are happy, they give their heart and soul. As a result, the workplace becomes a place they appreciate, not just a place they need to go to earn a living. Happy clients mean that you did your job well. I balance it by checking in with my team to make sure all is well by asking them to share with me any concerns they may have. At the end of the day, when the team is happy, they do a good job, and that results in satisfied clients.

WHAT’S YOUR FAVORITE PART OF THE JOB?
It is important to me that everybody comes to work with a smile on their face, working as a united team with the goal of creating great projects. This usually results in their thinking out of the box and looking for ways to be efficient, to push the envelope and to make sure creativity is always at the highest level. I also enjoy working on projects similar to ones we’ve done in the past, as well as on projects and styles we haven’t tried before.

Dr. Ruth doc

I like the fact that I am a woman running a company. Being a woman allows me to juggle well, be on top of a few things at the same time and still be caring and loving.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I have two. One is the beginning of the day when I know I have a full day ahead of me to create work, influence, achieve and do many things. Two is the evening, when I am back home with my family.

CAN YOU NAME SOME RECENT CLIENTS?
Sega, WayForward and the recent Ask Dr. Ruth documentary.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My iPhone, my iPad and my computer.


Quick Chat: Sinking Ship’s Matt Bishop on live-action/CG series

By Randi Altman

Toronto’s Sinking Ship Entertainment is a production, distribution and interactive company specializing in children’s live-action and CGI-blended programming. The company has 13 Daytime Emmys and a variety of other international awards on its proverbial mantel. Sinking Ship has over 175 employees across all its divisions, including its VFX and interactive studio.

Matt Bishop

Needless to say, the company has a lot going on. We decided to reach out to Matt Bishop, founding partner at Sinking Ship, to find out more.

Sinking Ship produces, creates visual effects and posts its own content, but are you also open to outside projects?
Yes, we do work in co-production with other companies or contract our post production service to shows that are looking for cutting-edge VFX.

Have you always created your own content?
Sinking Ship has developed a number of shows and feature films, as well as worked in co-production with production companies around the world.

What came first, your post or your production services? Or were they introduced in tandem?
Both sides of the company evolved together as a way to push our creative visions. We started acquiring equipment on our first series in 2004, and we always look for new ways to push the technology.

Can you mention some of your most recent projects?
Some of our current projects include Dino Dana (Season 4), Dino Dana: The Movie, Endlings and Odd Squad Mobile Unit.

What is your typical path getting content from set to post?
We have been working with Red cameras for years, and we were the first company in Canada to shoot in 4K over a decade ago. We shoot a lot of content, so we create backups in the field before the media is sent to the studio.

Dino Dana

You work with a lot of data. How do you manage and keep all of that secure?
Backups, lots of backups. We use a massive LTO-7 tape robot, and we have over 2PB of backup storage on top of that. We recently added Qumulo to our workflow to ensure the most secure method possible.

What do you use for your VFX work? What about your other post tools?
We use a wide range of software, but our main tools in our creature department are Pixologic ZBrush and Foundry Mari, with all animation happening inside Autodesk Maya.

We also have a large renderfarm to handle the volume of shots, and our render engine of choice is Arnold, which is now an Autodesk product. In post, we use an Adobe Creative Cloud pipeline, with 4K HDR color grading happening in DaVinci Resolve. Qumulo is going to be a welcome addition as we continue to grow and our outputs become more complex.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 


Axis provides 1,000 VFX shots for the TV series Happy!

UK-based animation and visual effects house Axis Studios has delivered 1,000 shots across 10 episodes on the second series of the UCP-produced hit Syfy show Happy!.

Based on Grant Morrison and Darick Robertson’s graphic novel, Happy! follows alcoholic ex-cop turned hitman Nick Sax (Christopher Meloni), who teams up with imaginary unicorn Happy (voiced by Patton Oswalt). In the second season, the action moves from Christmastime to “the biggest holiday rebranding of all time” and a plot to “make Easter great again,” courtesy of last season’s malevolent child-kidnapper, Sonny Shine (Christopher Fitzgerald).

Axis Studios, working across its three creative sites in Glasgow, Bristol, and London, collaborated with executive producer and director Brian Taylor and showrunner Patrick Macmanus to raise the bar on the animation of the fully CG character. The studio also worked on a host of supporting characters, including a “chain-smoking man-baby,” a gimp-like Easter Bunny and even a Jeff Goldblum-shaped cloud. Alongside the extensive animation work, the team’s VFX workload greatly increased from the first season — including two additional episodes, creature work, matte painting, cloud simulations, asset building and extensive effects and clean-up work.

Building on the success of the first season, the 100-person team of artists further developed the animation of the lead character, Happy, improving the rig, giving him more nuanced emotions and continually working to integrate him more convincingly into the real-world environments.

UK’s Jellyfish adds virtual animation studio and Kevin Spruce

London-based visual effects and animation studio Jellyfish Pictures is opening a new virtual animation facility in Sheffield. The new site is the company’s fifth studio in the UK, in addition to its established studios in Fitzrovia, Central London; Brixton, South London; and Oval, South London. This addition is no surprise considering Jellyfish created one of Europe’s first virtual VFX studios back in 2017.

With no hardware housed onsite, Jellyfish Pictures’ Sheffield studio — situated in the city center within the Cooper Project Complex — will operate in a completely PC-over-IP environment. With all technology and pipeline housed in a centrally-based co-location, the studio is able to virtualize its distributed workstations through Teradici’s remote visualization solution, allowing for total flexibility and scalability.

The Sheffield site will sit on the same logical LAN as the other four studios, providing access to the company’s software-defined storage (SDS) from Pixit Media, enabling remote collaboration and support for flexible working practices. With the rest of Jellyfish Pictures’ studios all TPN-accredited, the Sheffield studio will follow in their footsteps, using Pixit Media’s container solution within PixStor 5.

The innovative studio will be headed up by Jellyfish Pictures’ newest appointment, animation director Kevin Spruce. With a career spanning over 30 years, Spruce joins Jellyfish from Framestore, where he oversaw a team of 120 as the company’s head of animation. During his time at Framestore, Spruce worked as animation supervisor on feature films such as Fantastic Beasts and Where to Find Them, The Legend of Tarzan and Guardians of the Galaxy. Prior to his 17-year stint at Framestore, Spruce held positions at Canadian animation company Bardel Entertainment and the Spielberg-helmed feature animation studio Amblimation.

Jellyfish Pictures’ northern presence will start off with a small team of animators working on the company’s original animation projects, with a view to expand its team and set up with a large feature animation project by the end of the year.

“We have multiple projects coming up that will demand crewing up with the very best talent very quickly,” reports Phil Dobree, CEO of Jellyfish Pictures. “Casting off the constraints of infrastructure, which traditionally has been the industry’s way of working, means we are not limited to the London talent pool and can easily scale up in a more efficient and economical way than ever before. We all know London, and more specifically Soho, is an expensive place to play, both for employees working here and for the companies operating here. Technology is enabling us to expand our horizon across the UK and beyond, as well as offer talent a way out of living in the big city.”

For Spruce, the move made perfect sense: “After 30 years working in and around Soho, it was time for me to move north and settle in Sheffield to achieve a better work life balance with family. After speaking with Phil, I was excited to discover he was interested in expanding his remote operation beyond London. With what technology can offer now, the next logical step is to bring the work to people rather than always expecting them to move south.

“As animation director for Jellyfish Pictures Sheffield, it’s my intention to recruit a creative team here to strengthen the company’s capacity to handle the expanding slate of work currently in-house and beyond. I am very excited to be part of this new venture north with Jellyfish. It’s a vision of how creative companies can grow in new ways and access talent pools farther afield.”

 

Behind the Title: Ntropic Flame artist Amanda Amalfi

NAME: Amanda Amalfi

COMPANY: Ntropic (@ntropic)

CAN YOU DESCRIBE YOUR COMPANY?
Ntropic is a content creator producing work for commercials, music videos and feature films as well as crafting experiential and interactive VR and AR media. We have offices in San Francisco, Los Angeles, New York City and London. Some of the services we provide include design, VFX, animation, editing, color grading and finishing.

WHAT’S YOUR JOB TITLE?
Senior Flame Artist

WHAT DOES THAT ENTAIL?
Being a senior Flame artist involves a variety of tasks that really span the duration of a project, from communicating with directors, agencies and production teams, to helping plan out any visual effects that might be in a project (often serving as VFX supervisor on set), to the actual post process of the job.

Amanda worked on this lipstick branding video for the makeup brand Morphe.

It involves client and team management (as you are often also the 2D lead on a project) and calls for a thorough working knowledge of the Flame itself, both in timeline management and that little thing called compositing. The compositing could cross multiple disciplines — greenscreen keying, 3D compositing, set extension and beauty cleanup to name a few. And it helps greatly to have a good eye for color and to be extremely detail-oriented.

WHAT MIGHT SURPRISE PEOPLE ABOUT YOUR ROLE?
How much it entails. Since this is usually a position that exists in a commercial house, we don’t have as many specialties as there would be in the film world.

WHAT’S YOUR FAVORITE PART OF THE JOB?
First is the artwork. I like that we get to work intimately with the client in the room to set looks. It’s often a very challenging position to be in — having to create something immediately — but the challenge is something that can be very fun and rewarding. Second, I enjoy being the overarching VFX eye on the project; being involved from the outset and seeing the project through to delivery.

WHAT’S YOUR LEAST FAVORITE?
We’re often meeting tight deadlines, so the hours can be unpredictable. But the best work happens when the project team and clients are all in it together until the last minute.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
The evening. I’ve never been a morning person so I generally like the time right before we leave for the day, when most of the office is wrapping up and it gets a bit quieter.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Probably a tactile art form. Sometimes I have the urge to create something that is tangible, not viewed through an electronic device — a painting or a ceramic vase, something like that.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I loved films that were animated and/or used 3D elements growing up and wanted to know how they were made. So I decided to go to a college that had a computer art program with connections in the industry and was able to get my first job as a Flame assistant in between my junior and senior years of college.

ANA Airlines

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Most recently I worked on a campaign for ANA Airlines. It was a fun, creative challenge on set and in post production. Before that I worked on a very interesting project for Facebook’s F8 conference featuring its AR functionality and helped create a lipstick branding video for the makeup brand Morphe.

IS THERE A PROJECT THAT YOU ARE MOST PROUD OF?
I worked on a spot for Vaseline that was a “through the ages” concept, and we had to create looks that would read as the 1880s, 1900, the 1940s, the 1970s and present day, in locations that varied from the Arctic to the building of the Brooklyn Bridge to a boxing ring. To start, we sent the digitally shot footage with our 3D and comps to a printing house and had it printed and re-digitized. This worked perfectly for the ’70s-era look. Then we did additional work to age it further to the other eras — though my favorite was the Arctic turn-of-the-century look.

NAME SOME TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Flame… first and foremost. It really is the most inclusive software — I can grade, track, comp, paint and deliver all in one program. My monitors, a 4K Eizo and a color-calibrated broadcast monitor, are also essential.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Mostly Instagram.

DO YOU LISTEN TO MUSIC WHILE YOU WORK? 
I generally have music on with clients, so I will put on some relaxing music. If I’m not with clients, I listen to podcasts. I love How Did This Get Made and Conan O’Brien Needs a Friend.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Hiking and cooking are two great de-stressors for me. I love being in nature and working out and then going home and making a delicious meal.

Wacom updates its Intuos Pro Small tablet

Wacom has introduced a new Small model of its Intuos Pro pen and touch tablet to its advanced line of creative pen tablets. The new Intuos Pro Small joins the Intuos Pro Medium and Intuos Pro Large sizes already available.

Featuring Wacom’s precise Pro Pen 2 technology with over 8K pen pressure levels, pen tilt sensitivity, natural pen-on-paper feel and battery-free performance, artists now can choose the size — small, medium or large — that best fits their way of working.

The new small size features the same tablet working area as the previous model of Intuos Pro Small and targets on-the-go creatives whose Wacom tablet and PC or Mac laptop are always nearby. The space-saving tablet’s small footprint, wireless connectivity and battery-free pen technology that never needs charging make working anywhere easy.

Built for pros, all three sizes of Intuos Pro tablets feature a TouchRing and ExpressKeys (six on the Small, eight on the Medium and Large) for creating customized shortcuts that speed up the creative workflow. In addition, incorporating both pen and touch on the tablet allows users to explore new ways to navigate and makes the whole creative experience more interactive. The slim tablets also feature a durable anodized aluminum back case and come with a desktop pen stand containing 10 replacement pen nibs.

The Wacom Pro Pen 2 features Wacom’s most advanced creative pen technology to date, with four times the pressure sensitivity of the original Pro Pen. Its 8,192 levels of pressure, tilt recognition and lag-free tracking effectively emulate working with traditional media by offering a natural drawing experience. Additionally, the pen’s customizable side switch allows one to easily access commonly used shortcuts, greatly speeding production.
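To make those pressure levels concrete: a drawing application typically normalizes the raw pen value to a 0-1 range and shapes it with a response curve before mapping it to brush size or opacity. A minimal, generic sketch follows; only the 0-8,191 raw range reflects the Pro Pen 2 spec, and the gamma curve is an arbitrary example of a pressure response setting.

```python
# Minimal illustration of mapping raw pen pressure to a brush parameter.
# The 0-8191 raw range matches the Pro Pen 2's 8,192 levels; the gamma
# curve is an arbitrary example of a pressure response setting.
MAX_RAW = 8191

def brush_opacity(raw_pressure: int, gamma: float = 1.8) -> float:
    """Normalize raw pressure and shape it with a soft response curve."""
    p = max(0, min(raw_pressure, MAX_RAW)) / MAX_RAW
    return p ** gamma  # 0.0 = no touch, 1.0 = full pressure

print(brush_opacity(4096))  # roughly half pressure -> ~0.29 opacity
```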

Wacom offers two helpful accessory pens (purchased separately). The Pro Pen 3D features a third button, which can be set to perform typical 3D tasks such as tumbling objects in commonly used 3D creative apps. The newly released Pro Pen slim supports some artists’ ergonomic preference for a slimmer pen with a more pencil-like feel. Both are compatible with the Intuos Pro family and can help customize and speed up the creative experience.

Intuos Pro is Bluetooth-enabled and compatible with Macs and PCs. All three sizes come with the Wacom Pro Pen 2, pen stand and feature ExpressKeys, TouchRing and multi-touch gesture control. The Intuos Pro Small ($249.95), Intuos Pro Medium ($379.95) and Intuos Pro Large ($499.95) are available now.

Marvel Studios’ Victoria Alonso to keynote SIGGRAPH 2019

Marvel Studios executive VP of production Victoria Alonso has been named keynote speaker for SIGGRAPH 2019, which will run from July 28 through August 1 in downtown Los Angeles. Registration is now open. The annual SIGGRAPH conference is a melting pot for researchers, artists and technologists, among other professionals.

“Victoria is the ultimate symbol of where the computer graphics industry is headed and a true visionary for inclusivity,” says SIGGRAPH 2019 conference chair Mikki Rose. “Her outlook reflects the future I envision for computer graphics and for SIGGRAPH. I am thrilled to have her keynote this summer’s conference and cannot wait to hear more of her story.”

One of the few women in Hollywood to hold such a prominent title, Alonso has long been admired for her dedication to the industry, which has earned her multiple awards and honors, including the 2015 New York Women in Film & Television Muse Award for Outstanding Vision and Achievement, the Advanced Imaging Society’s first Harold Lloyd Award given to a woman, and the 2017 VES Visionary Award (another female first). A native of Buenos Aires, she began her career in visual effects, including a four-year stint at Digital Domain.

Alonso’s film credits include productions such as Ridley Scott’s Kingdom of Heaven, Tim Burton’s Big Fish, Andrew Adamson’s Shrek, and numerous Marvel titles — Iron Man, Iron Man 2, Thor, Captain America: The First Avenger, Iron Man 3, Captain America: The Winter Soldier, Captain America: Civil War, Thor: The Dark World, Avengers: Age of Ultron, Ant-Man, Guardians of the Galaxy, Doctor Strange, Guardians of the Galaxy Vol. 2, Spider-Man: Homecoming, Thor: Ragnarok, Black Panther, Avengers: Infinity War, Ant-Man and the Wasp and, most recently, Captain Marvel.

“I’ve been attending SIGGRAPH since before there was a line at the ladies’ room,” says Alonso. “I’m very much looking forward to having a candid conversation about the state of visual effects, diversity and representation in our industry.”

She adds, “At Marvel Studios, we have always tried to push boundaries with both our storytelling and our visual effects. Bringing our work to SIGGRAPH each year offers us the opportunity to help shape the future of filmmaking.”

The 2019 keynote session will be presented as a fireside chat, allowing attendees the opportunity to hear Alonso discuss her life and career in an intimate setting.

Wonder Park’s whimsical sound

By Jennifer Walden

The imagination of a young girl comes to life in the animated feature Wonder Park. A Paramount Animation and Nickelodeon Movies film, the story follows June (Brianna Denski) and her mother (Jennifer Garner) as they build a pretend amusement park in June’s bedroom. There are rides that defy the laws of physics — like a merry-go-round with flying fish that can leave the carousel and travel all over the park; a Zero-G-Land where there’s no gravity; a waterfall made of firework sparks; a super tube slide made from bendy straws; and other wild creations.

But when her mom gets sick and leaves for treatment, June’s creative spark fizzles out. She disassembles the park and packs it away. Then one day as June heads home through the woods, she stumbles onto a real-life Wonderland that mirrors her make-believe one. Only this Wonderland is falling apart and being consumed by the mysterious Darkness. June and the park’s mascots work together to restore Wonderland by stopping the Darkness.

Even in its more tense moments — like June and her friend Banky (Oev Michael Urbas) riding a homemade rollercoaster cart down their suburban street and narrowly missing an oncoming truck — the sound isn’t intense. The cart doesn’t feel rickety or squeaky, like it’s about to fly apart (even though the brake handle breaks off). There’s a sense of danger that could result in non-serious injury, but never death. And that’s perfect for the target audience of this film — young children. Wonder Park is meant to be sweet and fun, and supervising sound editor John Marquis captures that masterfully.

Marquis and his core team — sound effects editor Diego Perez, sound assistant Emma Present, dialogue/ADR editor Michele Perrone and Foley supervisor Jonathan Klein — handled sound design, sound editorial and pre-mixing at E² Sound on the Warner Bros. lot in Burbank.

Marquis was first introduced to Wonder Park back in 2013, but the team’s real work began in January 2017. The animated sequences steadily poured in for 17 months. “We had a really long time to work the track, to get some of the conceptual sounds nailed down before going into the first preview. We had two previews with temp score and then two more with mockups of composer Steven Price’s score. It was a real luxury to spend that much time massaging and nitpicking the track before getting to the dub stage. This made the final mix fun; we were having fun mixing and not making editorial choices at that point.”

The final mix was done at Technicolor’s Stage 1, with re-recording mixers Anna Behlmer (effects) and Terry Porter (dialogue/music).

Here, Marquis shares insight on how he created the whimsical sound of Wonder Park, from the adorable yet naughty chimpanzombies to the tonally pleasing, rhythmic and resonant bendy-straw slide.

The film’s sound never felt intense even in tense situations. That approach felt perfectly in-tune with the sensibilities of the intended audience. Was that the initial overall goal for this soundtrack?
When something was intense, we didn’t want it to be painful. We were always in search of having a nice round sound that had the power to communicate the energy and intensity we wanted without having the pointy, sharp edges that hurt. This film is geared toward a younger audience and we were supersensitive about that right out of the gate, even without having that direction from anyone outside of ourselves.

I have two kids — one 10 and one five. Often, they will pop by the studio and listen to what we’re doing. I can get a pretty good gauge right off the bat if we’re doing something that is not resonating with them. Then, we can redirect more toward the intended audience. I pretty much previewed every scene for my kids, and they were having a blast. I bounced ideas off of them so the soundtrack evolved easily toward their demographic. They were at the forefront of our thoughts when designing these sequences.

John Marquis recording the bendy straw sound.

There were numerous opportunities to create fun, unique palettes of sound for this park and these rides that stem from this little girl’s imagination. If I’m a little kid and I’m playing with a toy fish and I’m zipping it around the room, what kind of sound am I making? What kind of sounds am I imagining it making?

This film reminded me of being a kid and playing with toys. So, for the merry-go-round sequence with the flying fish, I asked my kids, “What do you think that would sound like?” And they’d make some sound with their mouths and start playing, and I’d just riff off of that.

I loved the sound of the bendy-straw slide — from the sound of it being built, to the characters traveling through it, and even the reverb on their voices while inside of it. How did you create those sounds?
Before that scene came to us, before we talked about it or saw it, I had the perfect sound for it. We had been having a lot of rain, so I needed to get an expandable gutter for my house. It starts at about one foot long but can be pulled out to three feet if needed. It works exactly like a bendy straw, but it’s huge. So when I saw the scene in the film, I knew I had the exact, perfect sound for it.

We mic’d it with a Sanken CO-100k, inside and out. We pulled the tube apart and closed it, and got this great, ribbed, rippling, zuzzy sound. We also captured impulse responses inside the tube so we could create custom reverbs. It was one of those magical things that I didn’t even have to think about or go hunting for. This one just fell in my lap. It’s a really fun and tonal sound. It’s musical and has a rhythm to it. You can really play with the Doppler effect to create interesting pass-bys for the building sequences.
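Those impulse responses are what make the custom reverbs possible: convolving any dry recording with an IR captured inside the tube places that sound "inside" the space. A minimal sketch of the idea in Python, with placeholder file names and assuming mono recordings:

```python
# Minimal convolution-reverb sketch: place a dry sound "inside" the
# bendy-straw tube using an impulse response recorded in it.
# File names are placeholders; both files are assumed mono.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read('voice_dry.wav')        # dry dialogue or effect
ir, sr_ir = sf.read('tube_impulse.wav')   # IR captured inside the tube
assert sr == sr_ir, 'resample the IR to match the dry recording'

wet = fftconvolve(dry, ir)                # convolution applies the space
wet /= np.max(np.abs(wet))                # normalize to avoid clipping
sf.write('voice_in_tube.wav', wet, sr)
```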

Another fun sequence for sound was inside Zero-G-Land. How did you come up with those sounds?
That’s a huge, open space. Our first instinct was to go with a very reverberant sound to showcase the size of the space and the fact that June is in there alone. But as we discussed it further, we came to the conclusion that since this is a zero-gravity environment there would be no air for the sound waves to travel through. So, we decided to treat it like space. That approach really worked out because in the scene preceding Zero-G-Land, June is walking through a chasm and there are huge echoes. So the contrast between that and the airless Zero-G-Land worked out perfectly.

Inside Zero-G-Land’s tight, quiet environment we have the sound of these giant balls that June is bouncing off of. They look like balloons so we had balloon bounce sounds, but it wasn’t whimsical enough. It was too predictable. This is a land of imagination, so we were looking for another sound to use.

John Marquis with the Wind Wand.

My friend has an instrument called a Wind Wand, which combines the sound of a didgeridoo with a bullroarer. The Wind Wand is about three feet long and has a gigantic rubber band that goes around it. When you swing the instrument around in the air, the rubber band vibrates. It almost sounds like an organic lightsaber. I had been playing around with that for another film and thought the rubbery, resonant quality of its vibration could work for these gigantic ball bounces. So we recorded it and applied mild processing to get some shape and movement. It was just a bit of pitching and Doppler effect; we didn’t have to do much to it because the actual sound itself was so expressive and rich and it just fell into place. Once we heard it in the cut, we knew it was the right sound.

How did you approach the sound of the chimpanzombies? Again, this could have been an intense sound, but it was cute! How did you create their sounds?
The key was to make them sound exciting and mischievous instead of scary. It can’t ever feel like June is going to die. There is danger. There is confusion. But there is never a fear of death.

The chimpanzombies are actually these Wonder Chimp dolls gone crazy. So they were all supposed to have the same voice — this pre-recorded voice that is in every Wonder Chimp doll. So, you see this horde of chimpanzombies coming toward you and you think something really threatening is happening but then you start to hear them and all they are saying is, “Welcome to Wonderland!” or something sweet like that. It’s all in a big cacophony of high-pitched voices, and they have these little squeaky dog-toy feet. So there’s this contrast between what you anticipate will be scary but it turns out these things are super-cute.

The big challenge was that they were all supposed to sound the same, just this one pre-recorded voice that’s in each one of these dolls. I was afraid it was going to sound like a wall of noise that was indecipherable, and a big, looping mess. There’s a software program that I ended up using a lot on this film. It’s called Sound Particles. It’s really cool, and I’ve been finding a reason to use it on every movie now. So, I loaded this pre-recorded snippet from the Wonder Chimp doll into Sound Particles and then changed different parameters — I wanted a crowd of 20 dolls that could vary in pitch by 10%, and they’re going to walk by at a medium pace.

Changing the parameters will change the results, and I was able to make a mass of different voices based off of this one, individual audio file. It worked perfectly once I came up with a recipe for it. What would have taken me a day or more — to individually pitch a copy of a file numerous times to create a crowd of unique voices — only took me a few minutes. I just did a bunch of varieties of that, with smaller groups and bigger groups, and I did that with their feet as well. The key was that the chimpanzombies were all one thing, but in the context of music and dialogue, you had to be able to discern the individuality of each little one.
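The same one-voice-to-crowd trick can be sketched outside Sound Particles. The example below clones a single recording into a staggered, pitch-varied crowd with librosa (a ±10% pitch variation is roughly ±1.7 semitones); the file name is a placeholder, and this illustrates the technique rather than the plugin itself.

```python
# Sketch of the "one doll voice -> crowd" technique using librosa
# instead of Sound Particles: clone one recording with randomized
# pitch and start offsets. File name is a placeholder.
import numpy as np
import librosa
import soundfile as sf

voice, sr = librosa.load('wonder_chimp_line.wav', sr=None, mono=True)
rng = np.random.default_rng(seed=7)

n_dolls = 20
length = len(voice) + sr * 3              # room for staggered entrances
crowd = np.zeros(length)

for _ in range(n_dolls):
    semitones = rng.uniform(-1.65, 1.65)  # ~ +/-10% pitch variation
    shifted = librosa.effects.pitch_shift(voice, sr=sr, n_steps=semitones)
    start = rng.integers(0, sr * 3)       # staggered, medium-paced walk-by
    crowd[start:start + len(shifted)] += shifted

crowd /= np.max(np.abs(crowd))            # normalize the summed voices
sf.write('chimpanzombie_crowd.wav', crowd, sr)
```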

There’s a fun scene where the chimpanzombies are using little pickaxes and hitting the underside of the glass walkway that June and the Wonderland mascots are traversing. How did you make that?
That was for Fireworks Falls, one of the big scenes that we had waited a long time for. We weren’t really sure how that was going to look — if the waterfall would be more fiery or more sparkly.

The little pickaxes were a blacksmith’s hammer beating an iron bar on an anvil. Those “tink” sounds were pitched up and resonated just a little bit to give it a glass feel. The key with that, again, was to try to make it cute. You have these mischievous chimpanzombies all pecking away at the glass. It had to sound like they were being naughty, not malicious.

When the glass shatters and they all fall down, we had these little pinball bell sounds that would pop in from time to time. It kept the scene feeling mildly whimsical as the debris is falling and hitting the patio umbrellas and tables in the background.

Here again, it could have sounded intense as June makes her escape using the patio umbrella, but it didn’t. It sounded fun!
I grew up in the Midwest and every July 4th we would shoot off fireworks on the front lawn and on the sidewalk. I was thinking about the fun fireworks that I remembered, like sparklers, and these whistling spinning fireworks that had a fun acceleration sound. Then there were bottle rockets. When I hear those sounds now I remember the fun time of being a kid on July 4th.

So, for the Fireworks Falls, I wanted to use those sounds as the fun details, the top notes that poke through. There are rocket crackles and whistles that support the low-end, powerful portion of the rapids. As June is escaping, she’s saying, “This is so amazing! This is so cool!” She’s a kid exploring something really amazing and realizing that this is all of the stuff that she was imagining and is now experiencing for real. We didn’t want her to feel scared, but rather to be overtaken by the joy and awesomeness of what she’s experiencing.

The most ominous element in the park is the Darkness. What was your approach to the sound in there?
It needed to be something that was more mysterious than ominous. It’s only scary because of the unknown factor. At first, we played around with storm elements, but that wasn’t right. So I played around with a recording of my son as a baby; he’s cooing. I pitched that sound down a ton, so it has this natural, organic, undulating, human spine to it. I mixed in some dissonant windchimes. I have a nice set of windchimes at home and I arranged them so they wouldn’t hit in a pleasing way. I pitched those way down, and it added a magical/mystical feel to the sound. It’s almost enticing June to come and check it out.

The Darkness is the thing that is eating up June’s creativity and imagination. It’s eating up all of the joy. It’s never entirely clear what it is though. When June gets inside the Darkness, everything is silent. The things in there get picked up and rearranged and dropped. As with the Zero-G-Land moment, we bring everything to a head. We go from a full-spectrum sound, with the score and June yelling and the sound design, to a quiet moment where we only hear her breathing. From there, it opens up and blossoms with the pulse of her creativity returning and her memories returning. It’s a very subjective moment that’s hard to put into words.

When June whispers into Peanut’s ear, his marker comes alive again. How did you make the sound of Peanut’s marker? And how did you give it movement?
The sound was primarily this ceramic, water-based bird whistle, which gave it a whimsical element. It reminded me of a show I watched when I was little where the host would draw with his marker and it would make a little whistling, musical sound. So anytime the marker was moving, it would make this really fun sound. This marker needed to feel like something you would pick up and wave around. It had to feel like something that would inspire you to draw and create with it.

To get the movement, it was partially performance based and partially done by adding in a Doppler effect. I used variations in the Waves Doppler plug-in. This was another sound that I also used Sound Particles for, but I didn’t use it to generate particles. I used it to generate varied movement for a single source, to give it shape and speed.
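A Doppler pass-by of the kind he describes can be roughed out in a few lines: read the source at a time-varying rate driven by a fly-by speed, so the pitch bends up on approach and down on departure. The sketch below is a generic approximation, not the Waves plug-in, and assumes a placeholder mono source file.

```python
# Rough Doppler pass-by sketch (not the Waves plug-in): a mono source
# is read at a time-varying rate so pitch bends up on approach and
# down as it recedes, with simple distance attenuation.
import numpy as np
import soundfile as sf

src, sr = sf.read('marker_whistle.wav')    # placeholder mono source
c, v, d = 343.0, 20.0, 2.0                 # speed of sound, fly-by speed (m/s), closest distance (m)

t = np.arange(len(src)) / sr
x = v * (t - t[-1] / 2)                    # source position; passes the listener mid-file
range_rate = v * x / np.sqrt(x**2 + d**2)  # change in source-listener distance per second

rate = c / (c + range_rate)                # >1 while approaching, <1 while receding
phase = np.cumsum(rate)                    # variable-speed read position, in samples
phase = phase[phase < len(src) - 1]
out = np.interp(phase, np.arange(len(src)), src)

amp = d / np.sqrt(x[:len(out)]**2 + d**2)  # loudest at the closest point
sf.write('marker_flyby.wav', out * amp, sr)
```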

Did you use Sound Particles on the paper flying sound too? That one also had a lot of movement, with lots of twists and turns.
No, that one was an old-fashioned fader move. What gave that sound its interesting quality — this soft, almost ethereal and inviting feel — was the practical element we used to create the sound. It was a piece of paper bag that was super-crumpled up, so it felt fluttery and soft. Then, every time it moved, it had a vocally whoosh element that gave it personality. So once we got that practical element nailed down, the key was to accentuate it with a little wispy whoosh to make it feel like the paper was whispering to June, saying, “Come follow me!”

Wonder Park is in theaters now. Go see it!


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Behind the Title: Nice Shoes animator Yandong Dino Qiu

This artist/designer has taken to sketching people on the subway to keep his skills fresh and mind relaxed.

NAME: Yandong Dino Qiu

COMPANY: New York’s Nice Shoes

CAN YOU DESCRIBE YOUR COMPANY?
Nice Shoes is a full-service creative studio. We offer design, animation, VFX, editing, color grading, VR/AR, working with agencies, brands and filmmakers to help realize their creative vision.

WHAT’S YOUR JOB TITLE?
Designer/Animator

WHAT DOES THAT ENTAIL?
Helping our clients to explore different looks in the pre-production stage, while aiding them in getting as close as possible to the final look of the spot. There’s a lot of exploration and trial and error as we try to deliver beautiful still frames that inform the look of the moving piece.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Not so much for the title, but for myself, design and animation can be quite broad. People may assume you’re only 2D, but the role also involves a lot of other skill sets, such as 3D lighting and rendering. It’s pretty close to a generalist role that requires you to know nearly every piece of software, as well as to turn things around very quickly.

WHAT TOOLS DO YOU USE?
Photoshop, After Effects, Illustrator, InDesign — the full Adobe Creative Suite — and Maxon Cinema 4D.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Pitch and exploration. At that stage, all possibilities are open. The job is alive… like a baby. You’re seeing it form and helping to make new life. Before this, you have no idea what it’s going to look like. After this phase, everyone has an idea. It’s very challenging, exciting and rewarding.

WHAT’S YOUR LEAST FAVORITE?
Revisions. Especially toward the end of a project. Everything is set up. One little change will affect everything else.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
2:15pm. It’s right after lunch. You know you have the whole afternoon. The sun is bright. The mood is light. It’s not too late for anything.

Sketching on the subway.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I would be a Manga artist.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
La Mer. Frontline. Friskies. I’ve also been drawing during my commute every day, sketching the people I see on the subway. I’m trying to post every week on Instagram. I think it’s important for artists to keep to a routine. I started this at the beginning of 2019, and there have been about 50 drawings already. Artists need to keep their pen sharp all the time. By doing these sketches, I’m not only benefiting my drawing skills, but I’m improving my observation of shapes and compositions, which is extremely valuable for work. Being able to break down shapes and components is a key principle of design, and honing that skill helps me in responding to client briefs.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
TED-Ed What Is Time? We had a lot of freedom in figuring out how to animate Einstein’s theories in a fun and engaging way. I worked with our creative director Harry Dorrington to establish the look and then with our CG team to ensure that the feel we established in the style frames was implemented throughout the piece.

TED-Ed What Is Time?

The film was extremely well received. There was a lot of excitement at Nice Shoes when it premiered, and TED-Ed’s audience seemed to respond really warmly as well. It’s rare to see so much positivity in the YouTube comments.

NAME SOME TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My Wacom tablet for drawing and my iPad for reading.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I take time and draw for myself. I love that drawing and creating is such a huge part of my job, but it can get stressful and tiring only creating for others. I’m proud of that work, but when I can draw something that makes me personally happy, any stress or exhaustion from the work day just melts away.

Quick Chat: Lord Danger takes on VFX-heavy Devil May Cry 5 spot

By Randi Altman

Visual effects for spots have become more and more sophisticated, and the recent Capcom trailer promoting the availability of its game Devil May Cry 5 is a perfect example.

 The Mike Diva-directed Something Greater starts off like it might be a commercial for an anti-depressant with images of a woman cooking dinner for some guests, people working at a construction site, a bored guy trimming hedges… but suddenly each of our “Everyday Joes” turns into a warrior fighting baddies in a video game.

Josh Shadid

The hedge trimmer’s right arm turns into a futuristic weapon, the construction worker summons a panther to fight a monster, and the lady cooking is seen with guns a-blazin’ in both hands. When she runs out of ammo, and to the dismay of her dinner guests, her arms turn into giant saws.

Lord Danger’s team worked closely with Capcom USA to create this over-the-top experience, and they provided everything from production to VFX to post, including sound and music.

We reached out to Lord Danger founder/EP Josh Shadid to learn more about their collaboration with Capcom, as well as their workflow.

How much direction did you get from Capcom? What was their brief to you?
Capcom’s fight-games director of brand marketing, Charlene Ingram, came to us with a simple request — make a memorable TV commercial that did not use gameplay footage but still illustrated the intensity and epic-ness of the DMC series.

What was it shot on and why?
We shot on both the Arri Alexa Mini and the Phantom Flex4K using Zeiss Super Speed MK II prime lenses, thanks to our friends at Antagonist Camera, and a Technodolly motion control crane arm. We used the Phantom on the Technodolly to capture the high-speed shots. That setup let us speed ramp through character actions while maintaining 4K resolution for post in both the garden and kitchen transformations.

We used the Alexa Mini on the rest of the spot. It’s our preferred camera for most of our shoots because we love the combination of its size and image quality. The Technodolly allowed us to create frame-accurate, repeatable camera movements around the characters so we could seamlessly stitch together multiple shots as one. We also needed to cue the fight choreography to sync up with our camera positions.

You had a VFX supervisor on set. Can you give an example of how that was beneficial?
We did have a VFX supervisor on site for this production. Our usual VFX supervisor is one of our lead animators — having him on site to work with means we’re often starting elements in our post production workflow while we’re still shooting.

Assuming some of it was greenscreen?
We shot elements of the construction site and gardening scene on greenscreen. We used pop-ups to film these elements on set so we could mimic camera moves and lighting perfectly. We also took photogrammetry scans of our characters to help rebuild parts of their bodies during transition moments, and to emulate flying without requiring wire work — which would have been difficult to control outside during windy and rainy weather.

Can you talk about some of the more challenging VFX?
The shot of the gardener jumping into the air while the camera spins around him twice was particularly difficult. The camera starts on a 45-degree frontal, swings behind him and then returns to a 45-degree frontal once he’s in the air.

We had to digitally recreate the entire street, so we used the technocrane at the highest position possible to capture data from a slow pan across the neighborhood in order to rebuild the world. We also had to shoot this scene in several pieces and stitch it together. Since we didn’t use wire work to suspend the character, we also had to recreate the lower half of his body in 3D to achieve a natural-looking jump position. That, combined with the CG weapon elements, made for a challenging composite — but in the end, it turned out really dramatic (and pretty cool).

Were any of the assets provided by Capcom, or was everything created from scratch?
We were provided with the character and weapons models from Capcom — but these were in-game assets, and if you’ve played the game you’ll see that the environments are often dark and moody, so the textures and shaders really didn’t apply to a real-world scenario.

Our character modeling team had to recreate and re-interpret what these characters and weapons would look like in the real world — and they had to nail it — because game culture wouldn’t forgive a poor interpretation of these iconic elements. So far the feedback has been pretty darn good.

In what ways did being the production company and the VFX house on the project help?
The separation of creative from production and post production is an outdated model. The time it takes to bring each team up to speed, to manage the communication of ideas between creatives and to ensure there is a cohesive vision from start to finish, increases both the costs and the time it takes to deliver a final project.

We shot and delivered all of Devil May Cry’s Something Greater in four weeks total, all in-house. We find that working as the production company and VFX house reduces the ratio of managers per creative significantly, putting more of the money into the final product.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Autodesk Arnold 5.3 with Arnold GPU in public beta

Autodesk has made its Arnold 5.3 with Arnold GPU available as a public beta. The release provides artists with GPU rendering for a set of supported features, and the flexibility to choose between rendering on the CPU or GPU without changing renderers.

From look development to lighting, support for GPU acceleration brings greater interactivity and speed to artist workflows, helping reduce iteration and review cycles. Arnold 5.3 also adds new functionality to help maximize performance and give artists more control over their rendering processes, including updates to adaptive sampling, a new version of the Randomwalk SSS mode and improved Operator UX.

Arnold GPU rendering makes it easier for artists and small studios to iterate quickly in a fast working environment and scale rendering capacity to accommodate project demands. From within the standard Arnold interface, users can switch between rendering on the CPU and GPU with a single click. Arnold GPU currently supports features such as arbitrary shading networks, SSS, hair, atmospherics, instancing, and procedurals. Arnold GPU is based on the Nvidia OptiX framework and is optimized to leverage Nvidia RTX technology.

New feature summary:
— Major improvements to quality and performance for adaptive sampling, helping to reduce render times without jeopardizing final image quality
— Improved version of Randomwalk SSS mode for more realistic shading
— Enhanced usability for Standard Surface, giving users more control
— Improvements to the Operator framework
— Better sampling of Skydome lights, reducing direct illumination noise
— Updates to support for MaterialX, allowing users to save a shading network as a MaterialX look

Arnold 5.3 with Arnold GPU in public beta will be available March 20 as a standalone subscription or with a collection of end-to-end creative tools within the Autodesk Media & Entertainment Collection. You can also try Arnold GPU with a free 30-day trial of Arnold. Arnold GPU is available in all supported plug-ins for Autodesk Maya, Autodesk 3ds Max, Houdini, Cinema 4D and Katana.
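
The CPU/GPU toggle is also scriptable. Here is a minimal sketch for Maya users, assuming the MtoA plug-in is loaded and that the render options node exposes a renderDevice attribute (0 = CPU, 1 = GPU; the attribute name and values may vary by version). The helper name below is our own, purely for illustration:

import maya.cmds as cmds

def set_arnold_render_device(use_gpu):
    # Hypothetical helper: flips Arnold between CPU and GPU rendering.
    # Assumes MtoA is loaded and that renderDevice takes 0 (CPU) or 1 (GPU).
    if not cmds.objExists("defaultArnoldRenderOptions"):
        cmds.error("Arnold render options not found -- is MtoA loaded?")
    cmds.setAttr("defaultArnoldRenderOptions.renderDevice", 1 if use_gpu else 0)

set_arnold_render_device(True)   # render on the GPU
set_arnold_render_device(False)  # switch back to the CPU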

Behind the Title: Legwork director of production Chris Grey

NAME: Chris Grey

COMPANY: Denver-based Legwork

CAN YOU DESCRIBE YOUR COMPANY?
Legwork is an independent creative studio combining animation and technology to create memorable stories and experiences for advertising, entertainment and education.

WHAT’S YOUR JOB TITLE?
Director of Production

WHAT DOES THAT ENTAIL?
I touch almost all parts of the business, including business development, client relationships, scoping, resourcing, strategy, producer mentorship and making sure every project that goes out the door is up to our high standards. Oh, and I still produce several projects myself.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
It might be cliché, but you still need to get your hands dirty producing things. You just can’t escape it, nor should you want to. It sets the example for your team.

Dominos

WHAT’S YOUR FAVORITE PART OF THE JOB?
The problem-solving aspect of it. No matter how tight your project plan is, it’s a given that curveballs are going to happen. Planning for those and being able to react with smart solutions is what makes every day different.

WHAT’S YOUR LEAST FAVORITE?
Anxiety isn’t fun, but it comes with the job. Just know how to deal with it and don’t let it rub off on others.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
First hour of the day for emails. I do my best to keep my afternoons meeting-free, unless it’s a client meeting. My last job put a lot of emphasis on “flow” and staying in it, so I do my best to keep all internals in the morning so the whole team can work in the afternoon, including myself.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I’ve always wanted to own a cool bodega/deli type of place. We’d specialize in proper sandwiches, hard-to-find condiments and cheap beer. Keeping this dream alive…

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I knew in college. Crispin Porter + Bogusky was moving to Boulder during my junior or senior year at Colorado University. I read up on them and thought to myself “That’s it. That’s what I want to do.” I was lucky enough to get an internship there after graduation and I haven’t really looked back.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Can I take credit for the team on these two? Cool, because we’re super-proud of these, but I didn’t “produce” them:
Rise: Hope-a-monics
Pandora: Smokepurpp

Yeti

Some stuff I worked on recently that we are equally proud of:
https://www.yeticycles.com/
https://ifthisthendominos.com/
L.L.Bean: Find Your Park

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
More than a project, our relationship with YouTube has been super rewarding. The View in 2 series is now on its fifth season and it was one of the first things I worked on when I got to Legwork. Watching the show and our relationship with the client evolve is something I am proud of. In the coming months, there will be a new show that we’re releasing with them that pushes the style even further.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
1. This is a cheat because it covers music, my calendar, email, etc., but one is my iCloud and Google accounts — because 75 percent of my life is on there now.
2. My Nest camera gives me peace of mind when I’m out of town and lets me know my dog isn’t too lonely.
3. Phonograph records — old tech that I love to collect.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Besides friends and family? Lots of food-related ones (current favorites are @wdurney and @turkeyandthewolf), sports/sneakers (@houseofhighlights, @jordansdaily), history (@ww2nowandthen) and a good random one is @celebsonsandwhiches.

I also like every @theonion post.

That was all for Instagram. I save Twitter for political rants and Liverpool F.C.

DO YOU LISTEN TO MUSIC WHILE YOU WORK? CARE TO SHARE YOUR FAVORITE MUSIC TO WORK TO?
We have a Sonos at the office and more often than not it forces me to put on my headphones. Sorry, Legworkers. So it might be a podcast, Howard Stern, KEXP or something British.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I’m a new dad, so that helps keep everything in perspective. That and some brewery visits on the weekend, which are totally socially acceptable to bring infants to!

Behind the Title: Gentleman Scholar MD/EP Jo Arghiris

LA-based Jo Arghiris embraces the creativity of the job and enjoys “pulling treatments together with our directors. It’s always such a fun, collaborative process.” Find out more…

Name: Jo Arghiris

Company: Gentleman Scholar (@gentscholar)

Can You Describe Your Company?
Gentleman Scholar is a creative production studio, drawn together by a love of design and an eagerness to push boundaries. Since launching in Los Angeles in 2010, and expanding to New York in 2016, we have evolved within the disciplines of live-action production, digital exploration, print and VR. At our very core, we are a band of passionate artists and fearless makers.

The biggest thing that struck me when I joined Scholar was everyone’s willingness to roll up their sleeves and give it a go. There are so many creative people working across both our studios, it’s quite amazing what we can achieve when we put our collective minds to it. In fact, it’s really hard to put us in a category or to define what we do on a day-to-day basis. But if I had to sum it up in just one word, our company feels like “home”; there’s no place quite like it.

What’s Your Job Title?
Managing Director/EP Los Angeles

What Does That Entail?
Truth be told, it’s evolving all the time. In its purest form, my job entails having top-line involvement on everything going on in the LA studio, both from operational and new business POVs. I face inwards and outwards. I mentor and I project. I lead and I follow. But the main thing I want to mention is that I couldn’t do my job without all these incredible people by my side. It really does take a village, every single day.

What Would Surprise People the Most About What Falls Under That Title?
Not so much “surprising,” but certainly different from other roles: my job is never done (or at least it shouldn’t be). I never go home with all my to-dos ticked off. The deck is constantly shuffled and re-dealt. This fluidity can be off-putting to some people who like to have a clear idea of what they need to achieve on any given day. But I really like to work that way, as it keeps my mind nimble and fresh.

What’s Your Favorite Part of the Job?
Learning new things and expanding my mind. I like to see our teams push themselves in this way, too. It’s incredibly satisfying watching folks overcome challenges and grow into their roles. Also, I obviously love winning work, especially if it’s an intense pitch process. I’m a creative person and I really enjoy pulling treatments together with our directors. It’s always such a fun, collaborative process.

What’s Your Least Favorite?
Well, I guess the 24/7 availability thing that we’ve all become accustomed to and are all guilty of. It’s so, so important for us to have boundaries. If I’m emailing the team late at night or on the weekend, I will write in the subject line, “For the Morning” or “For Monday.” I sometimes need to get stuff set up in advance, but I absolutely do not expect a response at 10pm on a Sunday night. To do your best work, it’s essential that you have a healthy work/life balance.

What is Your Favorite Time of the Day?
As clichéd as it may sound, I love to get up before anyone else and sit, in silence, with a cup of coffee. I’m a one-a-day kind of girl, so it’s pretty sacred to me. Weekdays or weekends, I have so much going on, I need to set my day up in these few solitary moments. I am not a night person at all and can usually be found fast asleep on the sofa sometime around 9pm each night. Equally favorite is when my kids get up and we do “huggle” time together, before the day takes us away on our separate journeys.

Bleacher Report

Can you Name Some Recent Projects?
Gentleman Scholar worked on a big Acura TLX campaign, which is probably one of my all-time favorites. Other fun projects include Legends Club for Timberland, Upwork “Hey World!” campaign from Duncan Channon, the Sponsor Reel for the 2018 AICP Show and Bleacher Report’s Sports Alphabet.

If You Didn’t Have This Job, What Would You be Doing Instead?
I love photography, writing and traveling. So if I could do it all again, I’d be some kind of travel writer/photographer combo or a journalist or something. My brother actually does just that, and I’m super-proud of his choices. To stand behind your own creative point of view takes skill and dedication.

How Did You Know This Would Be Your Path?
The road has been long, and it has carried me from London to New York to Los Angeles. I originally started in post production and VFX, where I got a taste for creative problem-solving. The jump from this world to a creative production studio like Scholar was perfectly timed and I relished the learning curve that came with it. I think it’s quite hard to have a defined “path” these days.

My advice to anyone getting into our industry right now would be to understand that knowledge and education are powerful tools, so go out of your way to harness them. And never stand still; always keep pushing yourself.

Name Three Pieces of Technology You Can’t Live Without.
My AirPods — so happy to not have that charging/listening conflict with my iPhone anymore; all the apps that allow me to streamline my life and get shit done any time of day, no matter what, no matter where; I think my electric toothbrush is pretty high up there too. Can I have one more? Not “tech” per se, but my super-cute mini hair straightener, which makes my bangs look on point, even after working out!

What Social Media Channels Do You Follow?
Well, I like Instagram mostly. Do you count Pinterest? I love a Pinterest board. I have many of those. And I read Twitter, but I don’t Tweet too much. To be honest, I’m pretty lame on social media, and all my accounts are private. But I realize they are such important tools in our industry so I use them on an as-needed basis. Also, it’s something I need to consider soon for my kids, who are obsessed with watching random, “how-to” videos online and periodically ask me, “Are you going to put that on YouTube?” So I need to keep on top of it, not just for work, but also for them. It will be their world very soon.

Do You Listen to Music While You Work? Care to Share Your Favorite Music to Work to?
Yes, I have a Sonos set up in my office. I listen to a lot of playlists — found ones and the random ones that your streaming services build for you. Earlier this morning I had blkswn, an album by Smino, playing. Right now I’m listening to a band called Pronoun. They were on a playlist Nylon Studios released called “All the Brooklyn Bands You Should Be Listening To.”

My drive home is all about the podcast. I’m trying to educate myself more on American history at the moment. I’m also tempted to get into Babbel and learn French. With all the hours I spend in the car, I’m pretty sure I would be fluent in no time!

What Do You Do to De-stress From it All?
So many things! I literally never stop. Hot yoga, spinning, hiking, mountain biking, cooking and thinking of new projects for my house. Road tripping, camping and exploring new places with my family and friends. Taking photographs and doing art projects with my kids. My all-time favorite thing to do is hit the beach for the day, winter and summer. I find it one of the most restorative places on Earth. I’m so happy to call LA my home. It suits me down to the ground!

Spider-Man Into the Spider-Verse: sound editors talk ‘magical realism’

By Randi Altman

Sony Pictures’ Spider-Man: Into the Spider-Verse isn’t your ordinary Spider-Man movie, from its story to its look to its sound. The filmmakers took a familiar story and turned it on its head a bit, letting audiences know that Spider-Man isn’t just one guy wearing that mask… or even a guy, or even from this dimension.

The film focuses on Miles Morales, a teenager from Brooklyn, struggling with all things teenager while also dealing with the added stress of being Spider-Man.

Geoff Rubay

Audio played a huge role in this story, and we recently reached out to Sony supervising sound editors Geoff Rubay and Curt Schulkey to dig in a bit deeper. The duo recently won an MPSE Award for Outstanding Achievement in Sound Editing — Feature Animation… industry peers recognizing the work that went into creating the sound for this stylized world.

Let’s find out more about the sound process on Spider-Man: Into the Spider-Verse, which won the Academy Award for Best Animated Feature.

What do you think is the most important element of this film’s sound?
Curt Schulkey: It is fun, it is bold, it has style and it has attitude. It has energy. We did everything we could to make the sound as stylistic and surprising as the imagery. We did that while supporting the story and the characters, which are the real stars of the movie. We had the opportunity to work with some incredibly creative filmmakers, and we did our best to surprise and delight them. We hope that audiences like it too.

Geoff Rubay: For me, it’s the fusion of the real and the fantastic. Right from the beginning, the filmmakers made it clear that it should feel believable — grounded — while staying true to the fantastic nature of the visuals. We did not hold back on the fantastic side, but we paid close attention to the story and made sure we were supporting that and not just making things sound awesome.

Curt Schulkey

How early did your team get involved in the film?
Rubay: We started on an SFX pre-design phase in late February for about a month. The goal was to create sounds for the picture editors and animators to work with. We ended up doing what amounted to a temp mix of some key sequences. The “Super Collider” was explored. We only worked on the first sequence for the collider, but the idea was that material could be recycled by the picture department and used in the early temp mixes until the final visuals arrived.

Justin Thompson, the production designer, was very generous with his time and resources early on. He spent several hours showing us work-in-progress visuals and concept art so that we would know where visuals would eventually wind up. This was invaluable. We were able to work on sounds long before we saw them as part of the movie. In the temp mix phase, we had to hold back or de-emphasize some of those elements because they were not relevant yet. In some cases, the sounds would not work at all with the storyboards or un-lit animation that was in the cut. Only when the final lit animation showed up would those sounds make sense.

Schulkey: I came onto the film in May, about 9.5 months before completion. We were neck-deep in following changes throughout our work. We were involved in the creation of sounds from the very first studio screening, through previews and temp mixes, right on to the end of the final mix. This sometimes gave us the opportunity to create sounds in advance of the images, or to influence the development of imagery and timing. Because they were so involved in building the movie, the directors did not always have time to discuss their needs with us, so we would speculate on what kinds of sounds they might need or want for events that they were molding visually. As Geoff said, the time that Justin Thompson spent with us was invaluable. The temp-mix process often gave us the opportunity to audition creations for the directors/producers.

What sort of direction did you receive from the directors?
Schulkey: Luckily, because of our previous experiences with producers Chris Miller and Phil Lord and editor Bob Fisher, we had a pretty good idea of their tastes and sensitivities, so our first attempts were usually pointed in the right direction. The three directors — Bob Persichetti, Peter Ramsey and Rodney Rothman — also provided input, so we were rich with direction.

As with all movies, we had hundreds of side discussions with the directors along the way about details, nuances, timing and so on. I think that the most important overall direction we got from the filmmakers was related to the dynamic arc of the movie. They wanted the soundtrack to be forceful but not so much that it hurt. They wanted it to breathe — quiet in some spots, loud in others, and they wanted it to be fun. So, we had to figure out what “fun” sounds like.

Rubay: This will sound strange, but we never did a spotting session for the movie. We just started our work and got feedback when we showed sequences or did temp mixes. Phil called when we started the pre-design phase and gave us general notes about tone and direction. He made it clear he did not want us to hold back, but he wanted to keep the film grounded. He explained the importance of the various levels of technology of different characters.

Peni Parker is from the 31st century, so her robot sidekick needed to sound futuristic. Scorpion is a pile of rusty metal. Prowler’s tech is appropriated from his surroundings and possibly with some help from Kingpin. We discussed the sound of previous Spider-Man movies and asked how much we needed to stay true to established sounds from those films. The direction was “not at all unless it makes sense.” We endeavored to make Peter Parker’s web-slings sound like the previous films. After that, we just “went for it.”

How was working on a film like this different than working on something live-action? Did it allow you more leeway?
Schulkey: In a live-action film, most or all of the imagery is shot before we begin working. Many aspects of the sound are already stamped in. On this film, we had a lot more creative involvement. At the start, a good percentage of the movie was still in storyboards, so if we expanded or contracted the timing of an event, the animators might adjust their work to fit the sounds. As the visual elements developed, we began creating layers of sound to support them.

For me, one of the best parts of an animated film’s soundtrack is that no sounds are imposed by the real world, as is often the case in live-action productions. In live-action, if a dialogue scene is shot on a city street in Brooklyn, there is a lot of uninteresting traffic noise built into the dialogue recordings.

Very few directors (or actors) want to lose the spontaneity of the original performance by re-recording dialogue in a studio, so we tweak, clean and process the dialogue to lessen unwanted noise, sometimes diminishing the quality of the recording. We sometimes make compromises with sound effects and music to support a not-so-ideal dialogue track. In an animated film, we don’t have that problem. Sound effects and ambiences can shine without getting in the way. This film has very quiet moments, which feel very natural and organic. That’s a pleasure to have in the movie.

Rubay: Everything Curt said! You have quite a bit of freedom because there is no “production track.” On the flip side, every sound that is added is just that — added. You have to be aware of that; more is not always better.

Spider-Man: Into the Spider-Verse is an animated film with a unique visual style. At times, we played the effects straight, as we might in a live-action picture, to ground it. Other times, we stripped away any notion of “reality.” Sometimes we would do both in the same scene as we cut from one angle to the next. Chris and Phil have always welcomed hard right angle turns, snapping sounds off on a cut or mixing and matching styles in close proximity. They like to do whatever supports the story and directs the audience. Often, we use sound to make your eye notice one thing or look away from another. Other times, we expand the frame, adding sounds outside of what you can see to further enhance the image.

There are many characters in the film. Can you talk about helping to create personality for each?
Rubay: There was a lot of effort made to differentiate the various “spider people” from each other. Whether it was through their web-slings or inherent technology, we were directed to give as much individual personality as possible to each character. Since that directive was baked in from the beginning, every department had it in mind. We paid attention to every visual cue. For example, Miles wears a particular pair of shoes — Nike Air Jordan 1s. My son, Alec Rubay, who was the Foley supervisor, is a real sneakerhead. He tracked down those shoes — very rare — and we recorded them, capturing every sound we could. When you hear Miles’s shoes squeak, you are hearing the correct shoes. Those shoes sound very specific. We applied that mentality wherever possible.

Schulkey: We took the opportunity to exploit the fact that some characters are from different universes in making their sound signatures different from one another. Spider-Ham is from a cartoon universe, so many of the sounds he makes are cartoon sounds. Sniffles, punches, swishes and other movements have a cartoon sensibility. Peni Parker, the anime character, is in a different sync than the rest of the cast, and her voice is somewhat more dynamic. We experimented with making Spider-Man Noir sound like he was coming from an old movie soundtrack, but that became obnoxious, so we abandoned the idea. Nicolas Cage was quite capable of conveying that aspect of the character without our help.

Because we wanted to ground characters in the real world, a lot of effort was put into attaching their voices to their images. Sync, of course, is essential, as is breathing. Characters in most animated films don’t do much breathing, but we added a lot of breaths, efforts and little stutters to add realism. That had to be done carefully. We had a very special, stellar cast and we wanted to maintain the integrity of their performances. I think that effort shows up nicely in some of the more intimate, personal scenes.

To create the unique look of this movie, the production sometimes chose to animate sections of the film “on twos.” That means that mouth movements change every other frame rather than every frame, so sync can be harder than usual to pinpoint. I worked closely with director Bob Persichetti to get dialogue to look in its best sync, doing careful reviews and special adjustments, as needed, on all dialogue in the film.

The main character in this Spider-Man thread is Miles Morales, a brilliant African-American/Puerto Rican Brooklyn teenager trying to find his way in his multi-cultural world. We took special care to show his Puerto Rican background with added Spanish-language dialogue from Miles and his friends. That required dialect coaches, special record sessions and thorough review.

The group ADR required a different level of care than most films. We created voices for crowds, onlookers and the normal “general” wash of voices for New York City. Our group voices covered many very specific characters and were cast in detail by our group leader, Caitlin McKenna. We took a very realistic approach to crowd activity. It had to be subtler than most live-action films to capture the dry nonchalance of Miles Morales’s New York.
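
As an aside for readers, the “on twos” timing Schulkey mentions above is easy to express in code. This is a toy sketch of the frame math only, not anything from the production pipeline:

def held_drawing_frame(frame, step=2):
    # On twos (step=2), a new drawing lands only every other frame;
    # the in-between frame holds the previous drawing.
    return frame - (frame % step)

# Frames 0 through 5 map to drawings 0, 0, 2, 2, 4, 4, which is why
# lip sync can be harder than usual to pinpoint.
print([held_drawing_frame(f) for f in range(6)])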

Would you describe the sounds as realistic? Fantastical? Both?
Schulkey: The sounds are fantastically realistic. For my money, I don’t want the sounds in my movie to seem fantastical. I see our job as creating an illusion for the audience — the illusion that they are hearing what they are seeing, and that what they are seeing is real. This is an animated film, where nothing is actually real, but the world has its own reality. The sounds need to live in the world we are watching. When something fantastical happens in the movie’s reality, we had to support that illusion, and we sometimes got to do fun stuff. I don’t mean to say that all sounds had to be realistic.

For example, we surmised that an actual supercollider firing up below the streets of Brooklyn would sound like 10,000 computer fans. Instead, we put together sounds that supported the story we were telling. The ambiences were as authentic as possible, including subway tunnels, Brooklyn streets and school hallways. Foley here was a great tool for giving reality to animated images. When Miles walks into the cemetery at night, you hear his footsteps on snow and sidewalk, gentle cloth movements and other subtle touches. This adds to a sense that he’s a real kid in a real city. Other times, we were in the Spider-Verse and our imagination drove the work.

Rubay: The visuals led the way, and we did whatever they required. There are some crazy things in this movie. The supercollider is based on a real thing so we started there. But supercolliders don’t act as they are depicted in the movie. In reality, they sound like a giant industrial site, fans and motors, but nothing so distinct or dramatic, so we followed the visuals.

Spider-sense is a kind of magical realism that supports, informs, warns, communicates, etc. There is no realistic basis for any of that, so we went with directions about feelings. Some early words of direction were “warm,” “organic,” “internal” and “magical.” Because there are no real sounds for those words, we created sounds that conveyed the emotional feelings of those ideas to the audience.

The portals that allow spider-people to move between dimensions are another example. Again, there was no real-world event to link to. We saw the visuals and assumed it should be a pretty big deal, real “force of nature” stuff. However, it couldn’t simply be big. We took big, energetic sounds and glued them onto what we were seeing. Of course, sometimes people are talking at the same time, so we shifted the frequency center of the moment to clear space for the dialogue. As music is almost always playing, we had to look for opportunities within the spaces it left.


Can you talk about working on the action scenes?
Rubay: For me, when the action starts, the sound had to be really specific. There is dialogue for sure. The music is often active. The guiding philosophy for me at that point is not “Keep adding until there is nothing left to add”; rather, it’s “We’re done when there is nothing left to strip out.” Busy action scene? Broom the backgrounds away. Usually, we don’t even cut BGs in a busy action scene, but if we do, we do so with a skeptical eye. How can we make it more specific? Also, I keep a keen eye on “scale.” One wrong, small detail sound, no matter how cool or interesting, will get the broom if it throws off the scale. Sometimes everything might be sounding nice and big; impressive but not loud, just big. And then some small detail creeps in and spoils it. I am constantly looking out for that.

The “Prowler Chase” scene was a fun exploration. There are times where the music takes over and runs; we pull out every sound we can. Other times, the sound effects blow over everything. It is a matter of give and take. There is a truck/car/prowler motorcycle crash that turns into a suspended slo-mo moment. We had to decide which sounds to play where and when. Its stripped-down nature made it among my favorite moments in the picture.

Can you talk about the multiple universes?
Rubay: The multiverse presented many challenges. It usually manifested itself as a portal or something we move between. The portals were energetic and powerful. The multiverse “place” was something that we used as a quiet place. We used it to provide contrast because, usually, there was big action on either side.

A side effect of the multiple universes interacting was a buildup or collision/overlap. When universes collide or overlap, matter from each tries to occupy the same space. Visually, this created some very interesting moments. We referred to the multi-colored prismatic-looking stuff as “Picasso” moments. The supporting sound needed to convey “force of nature” and “hard edges,” but couldn’t be explosive, loud or gritty. Ultimately, it was a very multi-layered sound event: some “real” sounds teamed with extreme synthesis. I think it worked.

Schulkey: Some of the characters in the movie are transported from another dimension into the dimension of the movie, but their bodies rebel, and from time to time their molecules try to jump back to their native dimension, causing “glitching.” Using a combination of plug-ins, blending, editing and panning, we developed a signature sound that signaled glitching throughout the movie and was applied individually to each iteration.

What stands out in your mind as the most challenging scenes audio wise?
Rubay: There is a very quiet moment between Miles and his dad, when dad is on one side of the door and Miles is on the other. It’s a very quiet, tender one-way conversation. When a movie gets that quiet, every sound counts. Every detail has to be perfect.

What about the Dolby Atmos mix? How did that enhance the film? Can you give a scene or two as an example?
Schulkey: This film was a native Atmos mix, meaning that the primary final mix was directly in the Atmos format, as opposed to making a 7.1 mix and then going back to re-mix sections using the Atmos format.

The native Atmos mix allowed us a lot more sonic room in the theater. This is an extremely complex and busy mix, heavily driven by dialogue. By moving the score out into the side and surround speakers — away from the center speaker — we were able to make the dialogue clearer and still have a very rich and exciting score. Sonic movement is much more effective in this format. When we panned sounds around the room, it felt more natural than in other formats.

Rubay: Atmos is fantastic. Being able to move sounds vertically creates so much space, so much interest, that might otherwise not be there. Also, the level and frequency response of the surround channels makes a huge difference.

You guys used Avid Pro Tools for editing. Can you mention some other favorite tools you employed on this film?
Schulkey: The Delete key and the Undo key.

Rubay: Pitch ’n Time, Envy, reverbs by Exponential Audio, and recording rigs and microphones of all sorts.

What haven’t I asked that’s important?
Our crew! Just in case anyone thinks this can be done by two people, it can’t.
– re-recording mixers Michael Semanick and Tony Lamberti
– sound designer John Pospisil
– dialogue editors James Morioka and Matthew Taylor
– sound effects editors David Werntz, Kip Smedley, Andy Sisul, Chris Aud, Donald Flick, Benjamin Cook, Mike Reagan and Ando Johnson
– Foley mixer Randy Singer
– Foley artists Gary Hecker, Michael Broomberg and Rick Owens

Behind the Title: ATK PLN Technical Supervisor Jon Speer

NAME: Jon Speer

COMPANY: ATK PLN (@atkpln_studio) in Dallas

CAN YOU DESCRIBE YOUR COMPANY?
We are a strategic creative group that specializes in design and animation for commercials and short-form video productions.

WHAT’S YOUR JOB TITLE?
Technical Supervisor

WHAT DOES THAT ENTAIL?
In general, a technical supervisor is responsible for leading the technical director team and making sure that the pipeline enables our artists’ effort of fulfilling the client’s vision.

Day-to-day responsibilities include:
– Reviewing upcoming jobs and making sure we have the necessary hardware resources to complete them
– Working with our producers and VFX supervisors to bid and plan future work
– Working with our CG/VFX supervisors to develop and implement new technologies that make our pipeline more efficient
– Determining the cause when problems arise in production, finding a solution and helping implement the fix
– Developing junior technical directors so they can be effective in mitigating pipeline issues that crop up during production

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I would say the most surprising thing that falls under the title is how much people and personality management you need to employ.

As a technical supervisor, you have to represent every single person’s different perspectives and goals. Making everyone from artists, producers, management and, most importantly, clients happy is a tough balancing act. That balancing act needs to be constantly evaluated to make sure you have both the short-term and long-term interests of the company, clients and artists in mind.

WHAT TOOLS DO YOU USE?
Maya, Houdini and Nuke are the main tools we support for shot production. We have our own internal tracking software that we also integrate with.

From text editors for coding, to content creation programs and even budgeting programs, I typically use it all.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Starting the next project. Each new project offers the chance for us to try out a new or revamped pipeline tool that we hope will make things that much better for our team. I love efficiencies, so getting to try new tools, whether they are internally or externally developed, is always fun.

WHAT’S YOUR LEAST FAVORITE?
I know it sounds cliché, but I don’t really have one. My entire job is based on figuring out why things don’t work or how they could work better. So when things are breaking or getting technically difficult, that is why I am here. If I had to pick one thing, I suppose it would be looking at spreadsheets of any kind.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Early morning when no one else is in. This is the time of day that I get to see what new tools are out there and try them. This is when I get to come up with the crazy ideas and plans for what we do next from a pipeline standpoint. Most of the rest of my day usually includes dealing with issues that crop up during production, or being in meetings.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I think I would have to give teaching a try. Having studied architecture in school, I always thought it would be fun to teach architectural history.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We just wrapped on a set of Lego spots for the new Lego Movie 2.

Fallout 76

We also did an E3 piece for Fallout 76 this year that was a lot of fun. We’re currently helping out with a spot for the big game, which has been a blast.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
I think I am most proud of our Lego spots we have created over the last three years. We have really experimented with pipeline on those spots. We saw a new technology out there — rendering in Octane — and decided to jump in head first. While it wasn’t the easiest thing to do, we forced ourselves to become even more efficient in all aspects of production.

NAME PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Houdini really makes the difficult things simple to do. I also love Nuke. It does what it does so well, and is amazingly fast and simple to program in.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Mainly I’ll listen to soundtracks when I am working; the lack of words is best when I am programming.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Golf is something I really enjoy on the weekends. However, like a lot of people, I find travel is easily the best way for me to hit the reset button.

London’s Jelly opens in NYC, EP relocates

London-based Jelly, an animation, design and production company that’s produced for many US-based agencies and direct clients, has opened a full-time presence in New York. Their senior creative producer Eri Panasci will relocate to lead the new entity as executive producer.

Launched in 2002, Jelly functions as both a production company and artist management agency. On the commercials front, Jelly represents a global roster of directors and creators who’ve produced animation and motion graphics for brands like Lacoste, Apple, Samsung, Adidas and others. In the latter role, it represents a roster of illustrators and designers who regularly collaborate with brands on print, digital and outdoor ad campaigns.

Panasci’s move to New York is also a homecoming. This Connecticut native and graduate of Boston University has worked in New York, San Francisco and London for McCann and Vice Media. She joined Jelly in London in 2016, overseeing design and production assignments for such clients as Virgin Media, Google, Nespresso, McDonald’s and Bombay Sapphire.

“One of the things I’ll be able to do is provide a deeper level of service for our US clients from New York versus London,” says Panasci, “and meld that with the Jelly model and culture. And being able to put a face to a name is always good, especially when you’re dealing with someone who understands the American market and its expectations.”

The studio has lined up US representation with James Bartlett of Mr. Bartlett, whose initial brief will be to handle the East Coast.

Coming from the UK, how does Panasci describe the Jelly approach? “It’s playful yet competent,” she says with enthusiasm. “We don’t take ourselves too seriously, but on the other hand we get shit done, and we do it well. We’re known for craft and solutions, and famously for not saying the word ‘no’ — unless we really have to!”

Recent Jelly projects include Hot House, a zany TVC for Virgin Mobile, co-directed by Design Lad and Kitchen; Soho, an animated short for the shared workspace company Fora and London agency Anyways, directed by Niceshit; and Escape, a spot for the outdoor clothing company Berghaus, directed by Em Cooper for VCCP that uses the director’s unique, hand-painted technique.

Panasci says the focus of Jelly’s US operations will initially be motion work, but adds their illustration talents will also be available, and they’ll be showing print portfolios along with show reels when meeting with agencies and clients. Jelly’s head of illustration, Nicki Field, will accompany Panasci in March to kick off the New York presence with a series of meetings and screenings.

While based in London, the studio is at ease working in America, Panasci says. They’ve produced campaigns for such shops as 72andSunny, Mother, Droga5, BBH, Wieden + Kennedy, Publicis and more, working with both their US and European offices.

Most recently, Jelly signed the New York-based animation team Roof to a representation agreement for the UK market; the team played a leading role in the recent “Imaginary Friends” campaign from RPA in Santa Monica.

AES/SMPTE panel: Spider-Man: Into the Spider-Verse sound

By Mel Lambert

As part of its successful series of sound showcases, a recent joint meeting of the Los Angeles Section of the Audio Engineering Society and SMPTE’s Hollywood Section focused on the soundtrack of the animated feature Spider-Man: Into the Spider-Verse, which has garnered several Oscar, BAFTA, CAS and MPSE award nominations, plus a Golden Globe win.

On January 31 at Sony Pictures Studios’ Kim Novak Theater in Culver City, many gathered to hear a panel discussion between the film’s sound and picture editors and re-recording mixers. Spider-Man: Into the Spider-Verse was co-directed by Peter Ramsey, Robert Persichetti Jr. and Rodney Rothman and produced by Phil Lord and Chris Miller, the creative minds behind The Lego Movie and 21 Jump Street.

The panel

The Sound Showcase panel included supervising sound editors Geoffrey Rubay and Curt Schulkey, re-recording mixer/sound designer Tony Lamberti, re-recording mixer Michael Semanick and associate picture editor Vivek Sharma. The Hollywood Reporter’s Carolyn Giardina moderated. The event concluded with a screening of Spider-Man: Into the Spider-Verse, which represents a different Spider-Man Universe, since it introduces Brooklyn teen Miles Morales and the expanding possibilities of the Spider-Verse, where more than one entity can wear the arachnid mask.

Following the screening of an opening sequence from the animated feature, Rubay acknowledged that the film’s producers were after a different take on the Spider-Man character, one based on the Marvel comic books but with a nod to previous live-action movies in the franchise. “They wanted us to make more of the period in which the new film is set,” he told the standing-room audience in the same dubbing stage where the soundtrack was re-recorded.

“[Producers] Phil Lord and Chris Miller have a specific style of soundtrack that they’ve developed,” stated Lamberti, “and so we premixed to get that overall shape.”

“The look is unique,” conceded Semanick, “and our mix needed to match that and make it sound like a comic book. It couldn’t be too dynamic; we didn’t want to assault the audience, but still make it loud here and softer there.”

Full house

“We also kept the track to its basics,” Rubay added, “and didn’t add a sound for every little thing. If the soundtrack had been as complicated as the visuals, the audience’s heads would have exploded.”

“Yes, simpler was often better,” Lamberti confirmed, “to let the soundtrack tell the story of the visuals.”

In terms of balancing sound effects against dialog, “We did a lot of experimentation and went with what seemed the best solution,” Semanick said. “We kept molding the soundtrack until we were satisfied.” As Lamberti confirmed: “It was always a matter of balancing all the sound elements, using trial and error.”

Nominated for a Cinema Audio Society Award in the Motion Picture — Animated category, Brian Smith, Aaron Hasson and Howard London served as original dialogue mixers on the film, with Sam Okell as scoring mixer and Randy K. Singer as Foley mixer. The crew also included sound designer John Pospisil, Foley supervisor Alec G. Rubay, and SFX editors Kip Smedley, Andy Sisul, David Werntz, Christopher Aud, Ando Johnson, Benjamin Cook, Mike Reagan and Donald Flick.

During picture editorial, “we lived with many versions until we got to the sound,” explained Sharma. “The premix was fantastic and worked very well. Visuals are important, but sound fulfills a complementary role. Dialogue is always key; the audience needs to hear what the characters say!”

“We present ideas and judge the results until everybody is happy,” said Semanick. “[Writer/producer] Phil Lord was very good at listening to everybody; he made the final decision, but deferred to the directors. ‘Maybe we should drop the music?’ ‘Does the result still pull the audience into the music?’ We worked until the elements worked very well together.”

The lead character’s “Spidey Sense” was also discussed. As co-supervisor Schulkey explained: “Our early direction was that it was an internal feeling … like a warm, fuzzy feeling. But warm and fuzzy didn’t cut through the music. In the end there was not just a single Spidey Sense — it was never the same twice. The web slings were a classic sound that we couldn’t get too far from.”

“And we used [Dolby] Atmos to spin and pan those sounds around the room,” added Lamberti, who told the audience that Spider-Man: Into the Spider-Verse marked Sony Animation’s first native Atmos mix. “We used the format to get the most out of it,” concluded the SFX re-recording mixer, who mixed sound effects “in the box” using an Avid S6 console/controller, while Semanick handled dialogue and music on the Kim Novak Theater’s Harrison MPC4D X-Range digital console.


Mel Lambert has been intimately involved with production industries on both sides of the Atlantic for more years than he cares to remember. He can be reached at mel.lambert@content-creators.com. He is also a long-time member of the UK’s National Union of Journalists. 

Quick Chat: Crew Cuts’ Nancy Jacobsen and Stephanie Norris

By Randi Altman

Crew Cuts, a full-service production and post house, has been a New York fixture since 1986. Originally established as an editorial house, it has added services over the years, as the industry evolved, to target all aspects of the workflow.

This independently owned facility is run by executive producer/partner Nancy Jacobsen, senior editor/partner Sherri Margulies Keenan and senior editor/partner Jake Jacobsen. While commercial spots might be in their wheelhouse, their projects vary and include social media, music videos and indie films.

We decided to reach out to Nancy Jacobsen, as well as EP of finishing Stephanie Norris, to find out about trends, recent work and succeeding in an industry and city that isn’t always so welcoming.

Can you talk about what Crew Cuts provides and how you guys have evolved over the years?
Jacobsen: We pretty much do it all. We have 10 offline editors as well as artists working in VFX, 2D/3D animation, motion graphics/design, audio mix and sound design, VO record, color grading, title treatment, advanced compositing and conform. Two of our editors double as directors.

In the beginning, Crew Cuts primarily offered only editorial. As the years went by and the industry climate changed, we began to cater to the needs of clients and slowly built out our entire finishing department. We started with some minimal graphics work and one staff artist in 2008.

In 2009, we expanded the team to include graphics, conform and audio mix. From there we just continued to grow and expand our department to the full finishing team we have today.

As a woman owner of a post house, what challenges have you had to overcome?
Jacobsen: When I started in this business, the industry was very different. I made less money than my male counterparts, and it took me twice as long to be promoted because I am a woman. I have since seen great change, with women leading post houses and production houses and finally getting the recognition they deserve for their hard work. Unfortunately, I had to “wait it out” and silently work harder than the men around me. This has paid off for me, and now I can help women get the credit they rightly deserve.

Do you see the industry changing and becoming less male-dominated?
Jacobsen: Yes, the industry is definitely becoming less male-dominated. In the current climate, with the birth of the #metoo movement and specifically in our industry with the birth of Diet Madison Avenue (@dietmadisonave), we are seeing a lot more women step up and take on leading roles.

Are you mostly a commercial house? What other segments of the industry do you work in?
Jacobsen: We are primarily a commercial house. However, we are not limited to just broadcast and digital commercial advertising. We have delivered specs for everything from the Godzilla screen in Times Square to :06 spots on Instagram. We have done a handful of music videos and also handle a ton of B2B videos for in-house client meetings, etc., as well as banner ads for conferences and trade shows. We’ve even worked on display ads for airports. Most recently, one of our editors finished a feature film called Public Figure that is being submitted around the film festival circuit.

What types of projects are you working on most often these days?
Jacobsen: The industry is all over the place. The current climate is very messy right now. Our projects are extremely varied. It’s hard to say what we work on most because it seems like there is no more norm. We are working on everything from sizzle pitch videos to spots for the Super Bowl.

What trends have you seen over the last year, and where do you expect to be in a year?
Jacobsen: Over the last year, we have noticed that the work comes from every angle. Our typical client is no longer just the marketing agency. It is also the production company, network, brand, etc. In a year we expect to be doing more production work. Seeing as how budgets are much smaller than they used to be and everyone wants a one-stop shop, we are hoping to stick with our gut and continue expanding our production arm.

Crew Cuts has beefed up its finishing services. Can you talk about that?
Stephanie Norris: We offer a variety of finishing services — from sound design to VO record and mix, compositing to VFX, 2D and 3D motion graphics and color grading. Our fully staffed in-house team loves the visual effects puzzle and enjoys working with clients to help interpret their vision.

Can you name some recent projects and the services you provided?
Norris: We just worked on a new campaign for New Jersey Lottery in collaboration with Yonder Content and PureRed. Brian Neaman directed and edited the spots. In addition to editorial, Crew Cuts also handled all of the finishing, including color, conform, visual effects, graphics, sound design and mix. This was one of those all-hands-on-deck projects. Keeping everything under one roof really helped us to streamline the process.

New Jersey Lottery

Working with Brian to carefully plan the shooting strategy, we filmed a series of plate shots as elements that could later be combined in post to build each scene. We added falling stacks of cash to the reindeer as he walks through the loading dock and incorporated CG inflatable decorations into a warehouse holiday lawn scene. We also dramatically altered the opening and closing exterior warehouse scenes, allowing one shot to work for multiple seasons. Keeping lighting and camera positions consistent was mission-critical, and having our VFX supervisor, Dulany Foster, on set saved us hours of work down the line.

For the New Jersey Lottery holiday spots, the Crew Cuts CG team, led by our creative director Ben McNamara, created a 3D inflatable display of lottery tickets. This was something that proved too costly and time-consuming to manufacture and shoot practically. After the initial R&D, our team created a few different CG inflatable simulations prior to the shoot, and Dulany was able to mock them up live while on set. Creating the simulations was crucial for giving the art department reference while building the set, and it also helped when shooting the plates needed to composite the scene together.

Ben and his team focused on the physics of the inflation, while also making sure the fabric simulations, textures and lighting blended seamlessly into the scene — it was important that everything felt realistic. In addition to the inflatables, our VFX team turned the opening and closing sunny, summer shots of the warehouse into a December winter wonderland thanks to heavy compositing, 3D set extension and snow simulations.

New Jersey Lottery

Any other projects you’d like to talk about?
Jacobsen: We are currently working on a project here that we are handling soup to nuts from production through finishing. It was a fun challenge to take on. The spot contains a hand model on a greenscreen showing the audience how to use a new product. The shoot itself took place here at Crew Cuts. We turned our common area into a stage for the day and were able to do so without interrupting any of the other employees and projects going on.

We are now working on editorial and finishing, and the edit is coming along nicely. What really drives the piece is the graphic icons, and our team is having a lot of fun designing these elements and implementing them into the spot. We’re proud that we budgeted wisely to accommodate all of the project’s needs, so we could handle everything and still turn a profit. It was so much fun to work in a different setting for the day, and it has been a very successful project so far. Clients are happy and so are we.

Main Image: (L-R) Stephanie Norris and Nancy Jacobsen

Efilm’s Natasha Leonnet: Grading Spider-Man: Into the Spider-Verse

By Randi Altman

Sony Pictures’ Spider-Man: Into the Spider-Verse is not your typical Spider-Man film… in so many ways. The most obvious is the movie’s look, which was designed to make the viewer feel they are walking inside a comic book. This tale, which blends CGI with 2D hand-drawn animation and comic book textures, focuses on a Brooklyn teen who is bitten by a radioactive spider on the subway and soon develops special powers.

Natasha Leonnet

When he meets Peter Parker, he realizes he’s not alone in the Spider-Verse. The film was co-directed by Peter Ramsey, Robert Persichetti Jr. and Rodney Rothman and produced by Phil Lord and Chris Miller, the pair behind 21 Jump Street and The Lego Movie.

Efilm senior colorist Natasha Leonnet provided the color finish for the film, which was nominated for an Oscar in the Best Animated Feature category. We reached out to find out more.

How early were you brought on the film?
I had worked on Angry Birds with visual effects supervisor Danny Dimian, which is how I was brought onto the film, a few months before we started color correction. Also, there was no LUT for the film; they used the ACES workflow, developed by the Academy with contributions from Efilm’s VP of technology, Joachim “JZ” Zell.

Can you talk about the kind of look they were after and what it took to achieve that look?
They wanted to achieve a comic book look. If you look at the edges of characters or objects in comic books, you actually see artifacts of the color printing from the early days of comic book printing — the CMYK dyes wouldn’t all land on the same line — which creates a layered look, along with the comic book dots and expression lines on faces, as if you’re drawing a comic book.

For example, if someone gets hurt, you put actual slashes on their face. For me it was a huge education about the comic book art form. Justin Thompson, the production designer, in particular is so knowledgeable about the history of comic books. I was so inspired I just bought my first comic book. Also, with the overall look, the light is painting color everywhere the way it does in life.
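
For readers curious about the mechanics of that misregistered-plate look, here is a toy sketch (not the film’s actual pipeline) that splits an image into CMYK plates with the Pillow library and nudges each plate so the dyes no longer land on the same line. The file names and per-plate offsets are arbitrary.

```python
# Toy illustration of the misregistered-CMYK print look described above.
# This is not Spider-Verse's pipeline, just the basic idea: separate an
# image into C/M/Y/K plates and offset each plate a few pixels.
from PIL import Image, ImageChops

def misregister(src: str, dst: str) -> None:
    img = Image.open(src).convert("RGB").convert("CMYK")
    c, m, y, k = img.split()
    c = ImageChops.offset(c, 2, 0)    # arbitrary per-plate offsets
    m = ImageChops.offset(m, -2, 1)
    y = ImageChops.offset(y, 0, -2)   # black plate (k) stays in place
    Image.merge("CMYK", (c, m, y, k)).convert("RGB").save(dst)

misregister("frame.png", "frame_misregistered.png")
```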

You worked closely with production designer Justin Thompson, VFX supervisor Danny Dimian and art director Dean Gordon. What was that process like?
They were incredible. It was usually a group of us working together during the color sessions — a real exercise in collaboration. They were all open to each other’s opinions and constantly discussed every change in order to make certain that it best served the film. No single idea was more important than another; everyone listened.

Had you worked on an animated film previously? What are the challenges and benefits of working with animation?
I’ve been lucky enough to do all of Blue Sky Studios’ color finishes so far, except for the first Ice Age. One of the special aspects of working on animated films is that you’re often working with people who are fine-art painters. As a result, they bring in a different background and way of analyzing the images. That’s really special. They often focus on the interplay of different hues.

In the case of Spider-Man: Into the Spider-Verse, they also wanted to bring a certain naturalism to the color experience. With this particular film, they made very bold choices in their use of color finishing. They embraced a capability of color correctors, shifting all of the hues at once, that is usually reserved for music videos. They were basically using color finishing to augment the story and refine their hues, especially for time of day and the progression of day or night. They used it as their extra lighting step.

Can you talk about your typical process? Did that differ because of the animated content?
My process actually does not differ when I’m color finishing animated content. Continuity is always at the forefront, even in animation. I use the color corrector as a creative tool on every project.

How would you describe the look of the film?
The film embodies the vivid and magical colors that I always observed in childhood but never saw reflected on the screen. The film is very color intense. It’s as if you’re stepping inside a comic book illustrator’s mind. It’s a mind-meld with how they’re imagining things.

What system did you use for color and why?
I used Resolve on this project, as it was the system that the clients were most familiar with.

Any favorite parts of the process?
My favorite part is from start to finish. It was all magical on this film.

What was your path to being a colorist?
My parents loved going to the cinema. They didn’t believe in babysitters, so they took me to everything. They were big fans of the French new wave movement and films that offered unconventional ways of depicting the human experience. As a result, I got to see some pretty unusual films. I got to see how passionate my parents were about these films and their stories and unusual way of telling them, and it sparked something in me. I think I can give my parents full credit for my career.

I studied non-narrative experimental filmmaking in college even though ultimately my real passion was narrative film. I started as a runner in the Czech Republic, which is where I’d made my thesis film for my BA degree. From there I worked my way up and met a colorist (Biggi Klier) who really inspired me. I was hooked and lucky enough to study with her and another mentor of mine in Munich, Germany.

How do you prefer a director and DP describe a look?
Every single person I’ve worked with works differently, and that’s what makes it so fun and exciting, but also challenging. Everyone communicates about color differently, and our vocabulary for color is so limited; therein lies the challenge.

Where do you find inspiration?
From both the natural world and the world of films. I live in a place that faces east, and I get up every morning to watch the sunrise; the color palette is always different. It’s beautiful and inspiring. The winter palettes in particular are gorgeous, with reds and oranges that don’t exist in summer sunrises.

Autodesk launches Maya 2019 for animation, rendering, more

Autodesk has released the latest version of Maya, its 3D animation, modeling, simulation and rendering software. Maya 2019 features significant updates for speed and interactivity and addresses challenges artists face throughout production: faster animation playback to reduce the need for playblasts, higher-quality 3D previews with Autodesk Arnold updates in Viewport 2.0, improved pipeline integration with more flexible development environment support, and performance improvements that most Maya artists will notice in their daily work.

Key new Maya 2019 features include:
• Faster Animation: New cached playback increases animation playback speeds in Viewport 2.0, giving animators a more interactive and responsive environment for producing better quality animations. It reduces the need for time-consuming playblasts to evaluate animation work, so animators can work faster.
• Higher-Quality Previews Closer to Final Renders: Arnold upgrades improve realtime previews in Viewport 2.0, letting artists preview results that are closer to the final Arnold render for better creativity and less wasted time.
• Faster Maya: New performance and stability upgrades help improve daily productivity in a range of areas that most artists will notice in their daily work.
• Refining Animation Data: New filters within the Graph Editor make it easier to work with motion capture data, including the Butterworth filter and the key reducer to help refine animation curves (see the scripting sketch after this list).
• Rigging Improvements: New updates help make the work of riggers and character TDs easier, including the ability to hide sets from the outliner to streamline scenes, improvements to the bake deformer tool and new methods for saving deformer weights to more easily script rig creation.
• Pipeline Integration Improvements: Development environment updates make it easier for pipeline and tool developers to create, customize and integrate into production pipelines.
• Help for Animators in Training: Sample rigged and animated characters, as well as motion capture samples, make it easier for students to learn and quickly get started animating.
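
For studios that drive Maya from scripts, those Graph Editor filters are also reachable from Python. The sketch below is a minimal illustration, not Autodesk’s documented recipe: the filterCurve command itself is long-standing, but the flag names used here for the Butterworth option are assumptions to verify against the Maya 2019 filterCurve documentation, and the node name is hypothetical.

```python
# Minimal sketch: smoothing a noisy mocap curve from Python inside Maya.
# filterCurve is a real, long-standing command; the "filter" and
# "cutoffFrequency" flag names for the Maya 2019 Butterworth option are
# assumptions -- confirm them in your version's filterCurve docs.
import maya.cmds as cmds

# Select the animation curve(s) driving a jittery mocap control
# ("mocapHip_translateY" is a hypothetical node name).
cmds.select("mocapHip_translateY", replace=True)

# Low-pass the selected curves; a higher cutoff keeps more detail.
cmds.filterCurve(filter="butterworth", cutoffFrequency=7.0)
```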

Maya 2019 is available now as a standalone subscription or with a collection of end-to-end creative tools within the Autodesk Media & Entertainment Collection.

Asahi beer spot gets the VFX treatment

A collaboration between The Monkeys Melbourne, In The Thicket and Alt, a newly released Asahi campaign takes viewers on a journey through landscapes built around surreal Japanese iconography. Watch Asahi Super Dry — Enter Asahi here.

From script to shoot — a huge operation that took place at Sydney’s Fox Studios — director Marco Prestini and his executive producer Genevieve Triquet (from production house In The Thicket) brought on the VFX team at Alt to help realize the creative vision.

The VFX team at Alt (which has offices in Sydney, Melbourne and Los Angeles) worked with Prestini to help design and build the complex “one shot” look, with everything from robotic geishas to a gigantic CG squid in the mix, alongside a seamless blend of CG set extensions and beautifully shot live-action plates.

“VFX supervisor Dave Edwards and the team at Alt, together with my EP Genevieve, have been there since the very beginning, and their creative input and expertise were key every step of the way,” explains Prestini. “Everything we did on set was the result of weeks of endless back and forth on technical previz, a process that required pretty much everyone’s input on a daily basis and that was incredibly inspiring for me to be part of.”

Dave Edwards, VFX supervisor at Alt, shares: “Production designer Michael Iacono designed sets in 3D, with five huge sets built for the shoot. The team then worked out camera speeds and timings based on these five sets and seven plates. DP Stefan Duscio would suggest rigs and mounts, which our team was then able to test in previs to see if they would work with the set. During previs, we worked out that we couldn’t get the resolution and the required frame rate to shoot the high-frame-rate samurais, so we had to use the Alexa LF. Of course, that also helped Marco, who wanted minimal lens distortion, as it allowed a wide field of view without the distortion of normal anamorphic lenses.”

One complex scene involves a character battling a gigantic underwater squid, which was done via a process known as “dry for wet” — a film technique in which smoke, colored filters and/or lighting effects are used to simulate a character being underwater while filming on a dry stage. The team at Alt did a rough animation of the squid to help drive the actions of the talent and the stunt team on the day, before spending the final weeks perfecting the look of the photoreal monster.

In terms of tools, for concept design/matte painting Alt used Adobe Photoshop, while previs/modeling/texturing/animation was done in Autodesk Maya. All of the effects/lighting/look development was via Side Effects Houdini; the compositing pipeline was built around Foundry Nuke; final online was completed in Autodesk Flame; and for graphics, they used Adobe After Effects. The final edit was done by The Butchery.

Here is the VFX breakdown:

Enter Asahi – VFX Breakdown from altvfx on Vimeo.

Behind the Title: Aardman director/designer Gavin Strange

NAME: Gavin Strange

COMPANY: Bristol, England-based Aardman. They also have an office in NYC under the banner Aardman Nathan Love.

CAN YOU DESCRIBE HOW YOUR CAREER AT AARDMAN BEGAN?
I can indeed! I started 10 years ago as a freelancer, joining the fledgling Interactive department (or Aardman Online as it was known back then). They needed a digital designer for a six-month project for the UK’s Channel 4.

I was a freelancer in Bristol at the time and I made it my business to be quite vocal on all the online platforms, always updating those platforms and my own website with my latest work — whether that be client work or self-initiated projects. Luckily for me, the creative director of Aardman Online, Dan Efergan, saw my work when he was searching for a designer and got in touch (it was the most exciting email ever, with the subject line of “Hello from Aardman!”).

The short version of this story is that I got Dan’s email, popped in for a cup of tea and a chat, and 10 years later I’m still here! Ha!

The slightly longer but still truncated version is that after the six-month freelance project was done, the role of senior designer for the online team became open and I gave up the freelance life and, very excitedly, joined the team as an official Aardmanite!

Thing is, I was never shy about sharing with my new colleagues the other work I did. My role in the beginning was primarily digital/graphic design, but in my own time, under the banner of JamFactory (my own artist alter ego), I put out all sorts of purely passion-project work: films, characters, toys, clothing, art.

Gavin Strange directed this Christmas spot for the luxury brand Fortnum & Mason.

Filmmaking was a huge passion of mine, and even at the earliest stages of my career (I didn’t go to university, so I got my first role as a junior designer when I was 17) I’d always be blending graphic design and film together.

Over those 10 years at Aardman, I continued to make films of all kinds and share them with my colleagues. Because of that, more opportunities arose to develop my film work within my existing design role. I had the unique advantage of having a lot of brilliant mentors who guided me and helped me with my moving-image projects.

Those opportunities continued to grow and happen more frequently. I was doing more and more directing here, finally becoming officially represented by Aardman and added to their roster of directors. It’s a dream come true for me because not only do I get to work at the place I admired growing up, but I’ve been mentored and shaped by the very individuals who make this place so special — that’s a real privilege.

What I really love is that my role is so varied — I’m both a director and a senior designer. I float between projects, and I love that variety. Sometimes I’m directing a commercial, sometimes I’m illustrating icons, other times I’m animating motion graphics. To me though, I don’t see a difference — it’s all creating something engaging, beautiful and entertaining — whatever the final format or medium!

So that’s my Aardman story. Ten years in, and I just feel like I’m getting started. I love this place.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE OF DIRECTOR?
Hmm, it’s tricky, as I actually think that most people’s perception of being a director is true: it’s that person’s responsibility to bring the creative vision to life.

Maybe what people don’t know is how flexible the role is, depending on the project. I love smaller projects where I get to board, design and animate, but then I love larger jobs with a whole crew of people. It’s always hands-on, but in many different ways.

Perhaps what would surprise a lot of people is that it’s every director’s responsibility to clean the toilets at the end of the day. That’s what Aardman has always told me and, of course, I honor that tradition. I mean, I haven’t actually ever seen anyone else do it, but that’s because everyone else just gets on with it quietly, right? Right!?

WHAT’S YOUR FAVORITE PART OF THE JOB?
Oh man, can I say everything!? I really, really enjoy the job as a whole — having that creative vision, working with yourself, your colleagues and your clients to bring it to life. Adapting and adjusting to changes and ensuring something great pops out the other end.

I really, genuinely, get a thrill seeing something on screen. I love concentrating on every single frame — it’s a win-win situation. You get to make a lovely image each frame, but when you stitch them together and play them really fast one after another, then you get a lovely movie — how great is that?

In short, I really love the sum total of the job. All those different exciting elements that all come together for the finished piece.

WHAT’S YOUR LEAST FAVORITE?
I pride myself on being an optimist and being a right positive pain in the bum, so I don’t know if there’s any part I don’t enjoy — if anything is tricky I try and see it as a challenge and something that will only improve my skillset.

I know that sounds super annoying, doesn’t it? I know it can seem all floaty and idealistic, but I pride myself on being a “realistic idealist” — recognizing the reality of a tricky situation, but seeing it through an idealistic lens.

If I’m being honest, then probably that really early stage is my least favorite — when the project is properly kicking off and there’s a huge gulf between what the treatment/script/vision says it will be and the finished thing. That’s also the most exciting part, though: not knowing how it will turn out. It’s terrifying and thrilling in equal measure. It surprises me every single time, but I think that panic is an essential part of any creative process.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
In an alternate world, I’d be a photographer, traveling the world, documenting everything I see, living the nomadic life. But that’s still a creative role, and I still class it as the same job, really. I love my graphic design roots too — print and digital design — but, again, I see it as all the same role really.

So that means, if I didn’t have this job, I’d be roaming the lands, offering to draw/paint/film/make for anyone that wanted it! (Is that a mercenary? Is there such a thing as a visual mercenary? I don’t really have the physique for that I don’t think.)

WHY DID YOU CHOOSE THIS PROFESSION?
This profession chose me. I’m just kidding, that’s ridiculous, I just always wanted to say that.

I think, like most folks, I fell into it through a series of natural choices. Art, design, graphics and games always stole my attention as a kid, and I just followed the natural path into that, which turned into my career. I’m lucky enough that I didn’t feel the need to single out any one passion, and I kept them all bubbling along even as I made my career moves from designer to director. I did, and still do, indulge my passion for all types of mediums in my own time.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I’m not sure. I wasn’t particularly driven or focused as a kid. I knew I loved design and art, but I didn’t know about the many, many different roles that existed out there. I like that though; I see it as a positive, and also as an achievable way to progress through a career path. I speak to a lot of students and young professionals, and I think it can be so overwhelming to plot a big ‘X’ on a career map and then feel all confused about how to get there. I’m an advocate of taking it one step at a time and making more manageable advances forward — as things always get in the way and change anyway.

I love the idea of a meandering, surprising path. Who knows where it will lead!? I think as long as your aim is to make great work, then you’ll surprise yourself where you end up.

WHAT WAS IT ABOUT DIRECTING THAT ATTRACTED YOU?
I’ve always obsessed over films, and obsessed over the creation of them. I’ll watch a behind-the-scenes on any film or bit of moving image. I just love the fact that the role is to bring something to life — it’s to oversee and create something from nothing, ensuring every frame is right. The way it makes you feel, the way it looks, the way it sounds.

It’s just such an exciting role. There’s a lot of unknowns too, on every project. I think that’s where the good stuff lies. Trusting in the process and moving forwards, embracing it.

HOW DOES DIRECTING FOR ANIMATION DIFFER FROM DIRECTING FOR LIVE ACTION — OR DOES IT?
Technically it’s different — with animation your choices are pretty much made all up front, with the storyboards and animatic as your guides, and then they’re brought to life with animation. Whereas, for me, the excitement in live action is not really knowing what you’ll get until there’s a lens on it. And even then, it can come together in a totally new way in the edit.

I don’t try to differentiate myself as an “animation director” or a “live-action director.” They’re just different tools for the job. Whatever tells the best story and connects with audiences!

HOW DO YOU PICK THE PEOPLE YOU WORK WITH ON A PARTICULAR PROJECT?
Their skillset is paramount, but equally as important is their passion and their kindness. There are so many great people out there, but I think it’s so important to work with people who are great and kind. Too many people get a free pass for being brilliant and feel that celebration of their work means it’s okay to mistreat others. It’s not okay… ever. I’m lucky that Aardman is a place full of excited, passionate and engaged folk who are a pleasure to work with, because you can tell they love what they do.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I’ve been lucky enough to work on a real variety of projects recently. I directed an ident for the rebrand of BBC2, a celebratory Christmas spot for the luxury brand Fortnum & Mason and an autobiographical motion graphics short film about Maya Angelou for BBC Radio 4.

Maya Angelou short film for BBC Radio 4

I love the variety of them; just those three projects alone were so different. The BBC2 ident was live-action in-camera effects with a great crew of people, whereas the Maya Angelou film was just me on design, direction and animation. I love hopping between projects of all types and sizes!

I’m working on development of a stop-frame short at the moment, which is all I can say for now, but just the process alone going from idea to a scribble in a notebook to a script is so exciting. Who knows what 2019 holds!?

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Oh man, that’s a tough one! A few years back I co-directed a title sequence for a creative festival called OFFF, which happens every year in Barcelona. I worked with Aardman legend Merlin Crossingham to bring this thing to life, and it’s a proper celebration of what we both love — it ended up being what we lovingly refer to as our “stop-frame live-action motion-graphics rap-video title-sequence.” It really was all those things.

That was really special as not only did we have a great crew, I got to work with one of my favorite rappers, P.O.S., who kindly provided the beats and the raps for the film.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT
– My iPhone. It’s my music player, Internet checker, email giver, tweet maker, picture capturer.
– My Leica M6 35mm camera. It’s my absolute pride and joy. I love the images it makes.
– My screens. At work I have a 27-inch iMac and two 25-inch monitors on either side. I just love screens. If I could have more, I would!

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I genuinely love what I do, so I rarely feel like I “need to get away from it all.” But I do enjoy life outside of work. I’m a drummer and that really helps with any and all stress really. Even just practicing on a practice pad is cathartic, but nothing compares to smashing away on a real kit.

I like to run, and I sometimes do a street dance class, which is both great fun and excruciatingly frustrating because I’m not very good.

I’m a big gamer, even though I don’t have much time for it anymore. A blast on the PS4 is a treat. In fact, after this I’m going to have a little session on God of War before bedtime.

I love hanging with my family. My wife Jane, our young son Sullivan and our dog Peggy. Just hanging out, being a dad and being a husband is the best for de-stressing. Unless Sullivan gets up at 3am, then I change my answer back to the PS4.

I’m kidding, I love my family, I wouldn’t be anything or be anywhere without them.

Making an animated series with Adobe Character Animator

By Mike McCarthy

In a departure from my normal film production technology focus, I have also been working on an animated web series called Grounds of Freedom. Over the past year I have been directing the effort and working with a team of people across the country who are helping in various ways. After a year of meetings, experimentation and work, we finally started releasing finished episodes on YouTube.

The show takes place in Grounds of Freedom, a coffee shop where a variety of animated mini-figures gather to discuss freedom and its application to present-day cultural issues and events. The show is created with a workflow that weaves through a variety of Adobe Creative Cloud apps. Back in October I presented our workflow during Adobe Max in LA, and I wanted to share it with postPerspective’s readers as well.

When we first started planning for the series, we considered using live action. Ultimately, after being inspired by the preview releases of Adobe Character Animator, I decided to pursue a new digital approach to brickfilming (making films with Lego bricks), which is traditionally accomplished through stop-motion animation. Once everyone else realized the simpler workflow and increased level of creative control offered by that new animation process, they were excited to pioneer this new approach. Animation gives us more control and flexibility over the message and dialog, lowers production costs and eases collaboration over long distances, as there is no “source footage” to share.

Creating the Characters
The biggest challenge to using Character Animator is creating the digital puppets, which are deeply layered Photoshop PSDs with very precise layer naming and stacking. There are ways to generate the underlying source imagery in 3D animation programs, but I wanted the realism and authenticity of sourcing from actual photographs of the models and figures. So we took lots of 5K macro shots of our sets and characters in various positions with our Canon 60D and 70D DSLRs and cut out hundreds of layers of content in Photoshop to create our characters and all of their possible body positions. The only synthetically generated elements were the various facial expressions, digitally painted onto their clean yellow heads, usually to match an existing physical reference character face.
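
Because everything downstream depends on that layer discipline, it can be worth sanity-checking the PSDs programmatically. Here is a minimal sketch using the open-source psd-tools library; the two rules it enforces (a required top-level group, no duplicate sibling names) are hypothetical examples of the kind of convention a puppet needs, not Adobe’s published specification.

```python
# Sketch: sanity-checking a puppet PSD's top-level layers with psd-tools.
# The rules below are hypothetical examples, not Adobe's requirements.
from psd_tools import PSDImage

def check_puppet(path):
    problems = []
    psd = PSDImage.open(path)
    names = [layer.name for layer in psd]       # top-level layers/groups
    if "Head" not in names:                     # example rule
        problems.append("missing a top-level 'Head' group")
    for name in {n for n in names if names.count(n) > 1}:
        problems.append(f"duplicate top-level layer name: {name!r}")
    return problems

for issue in check_puppet("character.psd"):
    print("WARN:", issue)
```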

Mike McCarthy shooting stills.

Once we had our source imagery organized into huge PSDs, we rigged those puppets in Character Animator with various triggers, behaviors and controls. The walking was accomplished by cycling through various layers, instead of the default bending of the leg elements. We created arm movement by mapping each arm position to a MIDI key. We controlled facial expressions and head movement via webcam, and the mouth positions were calculated by the program based on the accompanying audio dialog.
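
Character Animator handles that MIDI mapping natively, but the idea is easy to picture outside the app. The sketch below, using the open-source mido library, illustrates the note-to-trigger concept; the port name, note numbers and pose names are all hypothetical.

```python
# Concept sketch of mapping MIDI keys to puppet poses, as described above.
# Character Animator does this internally; mido is used here only to
# illustrate the idea. Port name and note numbers are hypothetical.
import mido

NOTE_TO_POSE = {
    60: "arm_left_raised",    # middle C
    62: "arm_left_forward",
    64: "arm_right_raised",
}

with mido.open_input("MIDI Keyboard") as port:   # hypothetical port name
    for msg in port:
        if msg.type == "note_on" and msg.velocity > 0:
            pose = NOTE_TO_POSE.get(msg.note)
            if pose:
                print("trigger on:", pose)
        elif msg.type == "note_off":
            print("note released:", msg.note)
```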

Animating Digital Puppets
The puppets had to be finished and fully functional before we could start animating on the digital stages we had created. We had been writing the scripts during that time, parallel to generating the puppet art, so we were ready to record the dialog by the time the puppets were finished. We initially attempted to record live in Character Animator while capturing the animation motions as well, but we didn’t have the level of audio editing functionality we needed available to us in Character Animator. So during that first session, we switched over to Adobe Audition, and planned to animate as a separate process, once the audio was edited.

That whole idea of live capturing audio and facial animation data is laughable now, looking back, since we usually spend a week editing the dialog before we do any animating. We edited each character audio on a separate track and exported those separate tracks to Character Animator. We computed lipsync for each puppet based on their dedicated dialog track and usually exported immediately. This provided a draft visual that allowed us to continue editing the dialog within Premiere Pro. Having a visual reference makes a big difference when trying to determine how a conversation will feel, so that was an important step — even though we had to throw away our previous work in Character Animator once we made significant edit changes that altered sync.

We repeated the process once we had a more final edit. We carried on from there in Character Animator, recording arm and leg motions with the MIDI keyboard in realtime for each character. Once those trigger layers had been cleaned up and refined, we recorded the facial expressions, head positions and eye gaze with a single pass on the webcam. Every re-record to alter a particular section adds a layer to the already complicated timeline, so we limited that as much as possible, usually re-recording instead of making quick fixes unless we were nearly finished.

Compositing the Characters Together
Once we had fully animated scenes in Character Animator, we would turn off the background elements and isolate each character layer to be exported from Media Encoder via Dynamic Link. I did a lot of testing before settling on JPEG 2000 MXF as the format of choice. I wanted a highly compressed file but needed alpha channel support, and that was the best option available. Each of those renders became a character layer, which was composited into our stage layers in After Effects. We could have dynamically linked the characters directly into AE, but with that many layers it would decrease performance for the interactive part of the compositing work. We added shadows and reflections in AE, as well as various other effects.

Walking was one of the most challenging effects to properly recreate digitally. Our layer cycling in Character Animator resulted in a static figure swinging its legs, but people (and mini figures) have a bounce to their step, and move forward at an uneven rate as they take steps. With some pixel measurement and analysis, I was able to use anchor point keyframes in After Effects to get a repeating movement cycle that made the character appear to be walking on a treadmill.

I then used carefully calculated position keyframes to add the appropriate amount of travel per frame for the feet to stick to the ground, which varies based on the scale as the character moves toward the camera. (In my case the velocity was half the scale value, in pixels per second.) We then duplicated that layer to create the reflection and shadow of the character as well. That result could then be composited onto various digital stages. In our case, the first two shots of the intro were designed to use the same walk animation with different background images.
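
To make the numbers concrete, here is a small worked version of that travel calculation. The velocity-equals-half-the-scale ratio comes straight from the text; the frame rate and the scale ramp are assumptions for the example.

```python
# Worked version of the walk-cycle travel math described above: forward
# velocity = half the layer's scale value, in pixels per second, so the
# feet appear planted as the character scales up toward camera.
FPS = 24  # assumed frame rate

def travel_per_frame(scale_percent: float) -> float:
    velocity = scale_percent / 2.0   # px/sec, per the half-the-scale ratio
    return velocity / FPS            # px moved this frame

x = 0.0
for frame in range(48):                  # two seconds of walking
    scale = 50.0 + frame * (50.0 / 47)   # hypothetical ramp: 50% -> 100%
    x += travel_per_frame(scale)
print(f"total x travel over 48 frames: {x:.1f} px")  # 75.0 px for this ramp
```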

All of the character layers were pre-comped, so we only needed to update a single location when a new version of a character was rendered out of Media Encoder, or when we brought in a dynamically linked layer. It would propagate all the necessary comp layers to generate updated reflections and shadows. Once the main compositing work was finished, we usually only needed to make slight changes in each scene between episodes. These scenes were composited at 5K, based on the resolution of the DSLR photos of the sets we had built. These 5K plates could be dynamically linked directly into Premiere Pro, and occasionally used later in the process to ripple slight changes through the workflow. For the interactive work, we got far better editing performance by rendering out flattened files. We started with DNxHR 5K assets but eventually switched to HEVC files, since they were 30x smaller and imperceptibly different in quality with our relatively static animated content.

Editing the Animated Scenes
In Premiere Pro, we had the original audio edit, and usually a draft render of the characters with just their mouths moving. Once we had the plate renders, we placed them each in their own 5K scene sub-sequence and used those sequences as source on our master timeline. This allowed us to easily update the content when new renders were available, or source from dynamically linked layers instead if needed. Our master timeline was 1080p, so with 5K source content we could push in two and a half times the frame size without losing resolution. This allowed us to digitally frame every shot, usually based on one of two rendered angles, and gave us lots of flexibility all the way to the end of the editing process.

Collaborative Benefits of Dynamic Link
While Dynamic Link doesn’t offer the best playback performance without making temp renders, it does have two major benefits in this workflow. It ripples changes to the source PSD all the way to the final edit in Premiere just by bringing each app into focus once. (I added a name tag to one character’s PSD during my presentation, and 10 seconds later, it was visible throughout my final edit.) Even more importantly, it allows us to collaborate online without having to share any exported video assets. As long as each member of the team has the source PSD artwork and audio files, all we have to exchange online are the Character Animator project (which is small once the temp files are removed), the .AEP file and the .PrProj file.

This gives any of us the option to render full-quality visual assets anytime we need them, but the work we do on those assets is all contained within the project files that we sync to each other. The coffee shop was built and shot in Idaho, our voice artist was in Florida, and our puppets’ faces were created in LA. I animate and edit in Northern California, the AE compositing was done in LA, and the audio is mixed in New Jersey. We did all of that with nothing but a Dropbox account, using the workflow I have just outlined.

Past that point, it was a fairly traditional finish: we edited in music and sound effects and sent an OMF to Steve, our sound guy at DAWPro Studios (http://dawpro.com/photo_gallery.html), for the final mix. During that time we added other b-roll visuals and effects, and once we had the final audio back, we rendered the final result to H.264 at 1080p and uploaded it to YouTube.


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Logan uses CG to showcase the luxury of the Lexus ES series

Logan, a creative studio with offices in Los Angeles and New York, worked on the new Lexus ES series “A Product of Mastery” campaign with agency Team One. The goal was to showcase the interior craftsmanship and amenities of this luxury sedan with detailed animations. Viewers are at first given just a glimpse of these features as the spot builds toward a reveal of the sedan’s design.

The campaign was created entirely in CG. “When we first saw Team One’s creative brief, we realized we would be able to control the environments, lighting and the overall mood better by using CG, which allowed us to make the campaign stand apart aesthetically and dramatically compared to shooting the products practically. From day one, our team and Team One were aligned on everything and they were an incredible partner throughout the entire process,” says Logan executive producer Paul Abatemarco.

The three spots in the campaign totaled 23 shots, highlighting features like the car’s high-end Mark Levinson sound system. One spot reveals the craftsmanship of the driver’s seat’s reverse ventilation as infinite bars of light, while in another, the sedan’s wide-view high-definition monitor is unveiled through a vivid use of color and shape.

Autodesk Maya was Logan’s main CG tool, but for the speaker spot they also called on Side Effects Houdini and Cinema 4D. All previs was done in Maya.

Editing was done in Adobe Premiere, and color grading was done in Resolve in their Dolby-certified color studio.

According to Waka Ichinose and Sakona Kong, co-creative leads on the project, “We had a lot of visual ideas, and there was a lot of exploration on the design side of things. But finding the balance between the beautiful, abstract imagery and then clearly conveying the meaning of each product so that the viewers were intrigued and ultimately excited was a challenge. But it was also really fun and ultimately very satisfying to solve.”

Storage for VFX Studios

By Karen Moltenbrey

Visual effects are dazzling — inviting eye candy, if you will. But when you mention the term “storage,” the wide eyes may turn into a stifled yawn from viewers of the amazing content. Not so for the makers of that content.

They know that the key to a successful project rests within the reliability of their storage solutions. Here, we look at two visual effects studios — both top players in television and feature film effects — as they discuss how data storage enables them to excel at their craft.

Zoic Studios
A Culver City-based visual effects facility, with shops in Vancouver and New York, Zoic Studios has been crafting visual effects for a host of television series since its founding in 2002, starting with Firefly. In addition to a full plate of episodics, Zoic also counts numerous feature films and spots to its credits.

Saker Klippsten

According to Saker Klippsten, CTO, the facility has used a range of storage solutions over the past 16 years from BlueArc (before it was acquired by Hitachi), DataDirect Networks and others, but now uses Dell EMC’s Isilon cluster file storage system for its current needs. “We’ve been a fan of theirs for quite a long time now. I think we were customer number two,” he says, “back when they were trying to break into the media and entertainment sector.”

Locally, the studio uses Intel NVMe drives in its workstations. NVMe, or non-volatile memory express, is an open logical device interface specification for accessing non-volatile (flash) storage attached via the PCI Express (PCIe) bus. Previously, Zoic had been using Samsung SSDs (1TB and 2TB EVO drives), but in the past year and a half it began migrating to NVMe on the local workstations.

Zoic transitioned to the Isilon system in 2004-2005 because of the heavy usage its renderfarm was getting. “Renderfarms work 24/7 and don’t take breaks. Our storage was getting really beat up, and people were starting to complain that it was slow accessing the file system and affecting playback of their footage and media,” explains Klippsten. “We needed to find something that could scale out horizontally.”

At the time, however, file-level storage was pretty much all that was available — “you were limited to this sort of vertical pool of storage,” says Klippsten. “You might have a lot of storage behind it, but you were still limited at the spigot, at the top end. You couldn’t get the data out fast enough.” But Isilon broke through that barrier by creating a cluster storage system that allotted the scale horizontally, “so we could balance our load, our render nodes and our artists across a number of machines, and access and update in parallel at the same time,” he adds.

Klippsten believes that solution was a big breakthrough for a lot of users; nevertheless, it took some time for others to get onboard. “In the media and entertainment industry, everyone seemed to be locked into BlueArc or NetApp,” he notes. Not so with Zoic.

Fairly recently, some new players have come onto the market, including Qumulo, touted as a “next-generation NAS company” built around advanced, distributed software running on commodity hardware. “That’s another storage platform that we have looked at and tested,” says Klippsten, adding that Zoic even has a number of nodes from the vendor.

There are other open-source options out there as well. Recently, Red Hat began offering Gluster Storage, an open, software-defined storage platform for physical, virtual and cloud environments. “And now with NVMe, it’s eliminating a lot of these problems as well,” Klippsten says.

Back when Zoic selected Isilon, there were a number of major issues that affected the studio’s decision making. As Klippsten notes, they had just opened the Vancouver office and were transferring data back and forth. “How do we back up that data? How do we protect it? Storage snapshot technology didn’t really exist at the time,” he says. But, Isilon had a number of features that the studio liked, including SyncIQ, software for asynchronous replication of data. “It could push data between different Isilon clusters from a block level, in a more automated fashion. It was very convenient. It offered a lot of parameters, such as moving data by time of day and access frequency.”

SyncIQ enabled the studio to archive the data. And for dealing with interim changes, such as a mistakenly deleted file, Zoic found Isilon’s SnapshotIQ ideal for fast data recovery. Moreover, Isilon was one of the first to support Aspera, right on the Isilon cluster. “You didn’t have to run it on a separate machine. It was a huge benefit because we transfer a lot of secure, encrypted data between us and a lot of our clients,” notes Klippsten.

Netflix’s The Chilling Adventures of Sabrina

Within the pipeline, Zoic’s storage system sits at the core. It is used immediately as the studio ingests the media, whether it is downloaded or transferred from hard drives – terabytes upon terabytes of data. The data is then cleaned up and distributed to project folders for tasks assigned to the various artists. In essence, it acts as a holding tank for the main production storage as an artist begins working on those specific shots, Klippsten explains.
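
As a rough picture of that ingest-and-distribute step, here is a toy sketch; the folder layout and the shot-naming pattern are hypothetical illustrations, not Zoic’s actual structure.

```python
# Toy sketch of an ingest step: drop incoming media into per-show,
# per-shot project folders so artists can pick up their assigned shots.
# The layout and shot-name pattern are hypothetical.
import re
import shutil
from pathlib import Path

SHOT_RE = re.compile(r"(?P<show>[A-Z]{3})_(?P<shot>\d{4})")  # e.g. SAB_0120

def ingest(incoming: Path, projects: Path) -> None:
    for f in incoming.iterdir():
        if not f.is_file():
            continue
        m = SHOT_RE.search(f.name)
        if m:
            dest = projects / m["show"] / m["shot"] / "plates"
        else:
            dest = projects / "_unsorted"
        dest.mkdir(parents=True, exist_ok=True)
        shutil.copy2(f, dest / f.name)  # copy; ingest area stays a holding tank

ingest(Path("/mnt/ingest"), Path("/mnt/projects"))
```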

Aside from using the storage at the floor level, the studio also employs it at the archive level, for data recovery as well as material that might not be accessed for weeks. “We have sort of a tiered level of storage — high-performance and deep-archival storage,” he says.

And the system is invaluable, as Zoic is handling 400 to 500 shots a week. If you multiply that by the number of revisions and versions that take place during that time frame, it adds up to hundreds of terabytes weekly. “Per day, we transfer between LA, Vancouver and New York somewhere around 20TB to 30TB,” he estimates. “That number increases quite a bit because we do a lot of cloud rendering. So, we’re pushing a lot of data up to Google and back for cloud rendering, and all of that hits our Isilon storage.”

When Zoic was founded, it originally saw itself as a visual effects company, but at the end of the day, Klippsten says they’re really a technology company that makes pretty pictures. “We push data and move it around to its limits. We’re constantly coming up with new, creative ideas, trying to find partners that can help provide solutions collaboratively if we cannot create them ourselves. The shot cost is constantly being squeezed by studios, which want these shots done faster and cheaper. So, we have to make sure our artists are working faster, too.”

The Chilling Adventures of Sabrina

Recently, Zoic has been working on a TV project involving a good deal of water simulations and other sims in general — which rapidly generate a tremendous amount of data. Then the data is transferred between the LA and Vancouver facilities. Having storage capable of handling that was unheard of three years ago, Klippsten says. However, Zoic has managed to do so using Isilon along with some off-the-shelf Supermicro storage with NVMe drives, enabling its dynamics department to tackle this and other projects. “When doing full simulation, you need to get that sim in front of the clients as soon as possible so they can comment on it. Simulations take a long time — we’re doing 26GB/sec, which is crazy. It’s close to something in the high-performance computing realm.”

With all that considered, it is hardly surprising to hear Klippsten say that Zoic could not function without a solid storage solution. “It’s funny. When people talk about storage, they are always saying they don’t have enough of it. Even when you have a lot of storage, it’s always running at 99 percent full, and they wonder why you can’t just go out to Best Buy and purchase another hard drive. It doesn’t work that way!”

Milk VFX
Founded just five years ago, Milk VFX is an independent visual effects facility in the UK with locations in London and Cardiff, Wales. While Milk VFX may be young, it was founded by experienced and award-winning VFX supervisors and producers. And the awards have continued, including an Oscar (Ex Machina), an Emmy (Sherlock) and three BAFTAs, as the studio creates innovative and complex work for high-end television and feature films.

Benoit Leveau

With so much precious data, and a lot of it, the studio has to ensure that its work is secure and the storage system is keeping pace with the staff using it. When the studio was set up, it installed Pixit Media’s PixStor, a parallel file system with limitless storage, for its central storage solution. And, it has been growing with the company ever since. (Milk uses almost no local storage, except for media playback.)

“It was a carefully chosen solution due to its enterprise-level performance,” says Benoit Leveau, head of pipeline at Milk, about the decision to select PixStor. “It allowed us to expand when setting up our second studio in Cardiff and our rendering solutions in the cloud.”

When Milk was shopping for a storage offering while opening the studio, four things were at the forefront of their minds: speed, scalability, performance and reliability. Those were the functions the group wanted from its storage system — exactly the same four demands that the studio’s projects place on it.

“A final image requires gigabytes, sometimes terabytes, of data in the form of detailed models, high-resolution textures, animation files, particles and effects caches and so forth,” says Leveau. “We need to be able to review 4K image sequences in real time, so it’s really essential for daily operation.”

This year alone, Milk has completed a number of high-end visual effects sequences for feature films such as Adrift, serving as the principal vendor on this true story about a young couple lost at sea during one of the most catastrophic hurricanes in recorded history. The Milk team created all the major water and storm sequences, including bespoke 100-foot waves, all of which were rendered entirely in the cloud.

As Leveau points out, one of the shots in the film was more than 60TB, as it required complex ocean simulations. “We computed the ocean simulations on our local renderfarm, but the rendering was done in the cloud, and with this setup, we were able to access the data from everywhere almost transparently for the artists,” he explains.

Adrift

The studio also recently completed work on the blockbuster Fantastic Beasts sequel, The Crimes of Grindelwald.

For television, the studio created visual effects for an episode of the Netflix Altered Carbon sci-fi series, where people can live forever, as they digitally store their consciousness (stacks) and then download themselves into new bodies (sleeves). For the episode, the Milk crew created forest fires and the aftermath, as well as an alien planet and escape ship. For Origin, an action-thriller, the team generated 926 VFX shots in 4K for the 10-part series, spanning a wide range of work. Milk is also serving as the VFX vendor for Good Omens, a six-part horror/fantasy/drama series.

“For Origin, all the data had to be online for the duration of the four-month project. At the same time, we commenced work as the sole VFX vendor on the BBC/Amazon Good Omens series, which is now rapidly filling up our PixStor, hence the importance of scalability!” says Leveau.

Main Image: Origin via Milk VFX


Karen Moltenbrey is a veteran VFX and post writer.

Behind the Title: Lobo EP, Europe Loic Francois Marie Dubois

NAME: Loic Francois Marie Dubois

COMPANY: New York- and São Paulo, Brazil-based Lobo

CAN YOU DESCRIBE YOUR COMPANY?
We are a full-service creative studio offering design, live action, stop motion, 3D & 2D, mixed media, print, digital, AR and VR.

Day One spot Sunshine

WHAT’S YOUR JOB TITLE?
Creative executive producer for Europe and formerly head of production. I’m based in Brazil, but work out of the New York office as well.

WHAT DOES THAT ENTAIL?
Managing and hiring creative teams, designers, producers and directors for international productions (USA, Europe, Asia). Also, I have served as the creative executive director for TBWA Paris on the McDonald’s Happy Meal global campaign for the last five years. Now, as creative EP for Europe, I am also responsible for streamlining information from pre-production to post production between all production parties for a more efficient and prosperous sales outcome.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
The patience and the fun psychological side you need to have to handle all the production peeps, agencies, and clients.

WHAT TOOLS DO YOU USE?
Excel, Word, Showbiz, Keynote, Pages, the Adobe suite (Photoshop, Illustrator, After Effects, Premiere, InDesign), Maya, Flame, Nuke and AR/VR technology.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Working with talented creative people on extraordinary projects with a stunning design and working on great narratives, such as the work we have done for clients including Interface, Autism Speaks, Imaginary Friends, Unicef and Travelers, to name a few.

WHAT’S YOUR LEAST FAVORITE?
Monday morning.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Early afternoon between Europe closing down and the West Coast waking up.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Meditating in Tibet…

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
Since I was 13 years old. After shooting and editing a student short film (an Oliver Twist adaptation) with a Bolex 16mm on location in London and Paris, I was hooked.

Promoting Lacta 5Star chocolate bars

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
An animated campaign for the candy company Mondelez’s Lacta 5Star chocolate bars; an animated short film for the Imaginary Friends Society; a powerful animated short on the dangers of dating abuse and domestic violence for nonprofit Day One; a mixed media campaign for Chobani called FlipLand; and a broadcast spot for McDonald’s and Spider-Man.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
My three kids 🙂

It’s really hard to choose one project, as they are all equally different and amazing in their own way, but maybe D&AD Wish You Were Here. It stands out for the number of awards it won and the collective creative production process.

NAME PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
The Internet.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Meditation and yoga.

Chaos Group to support Cinema 4D with two rendering products

At the Maxon Supermeet 2018 event, Chaos Group announced its plans to support the Maxon Cinema 4D community with two rendering products: V-Ray for Cinema 4D and Corona for Cinema 4D. Based on V-Ray’s Academy Award-winning raytracing technology, the development of V-Ray for Cinema 4D will be focused on production rendering for high-end visual effects and motion graphics. Corona for Cinema 4D will focus on artist-friendly design visualization.

Chaos Group, which acquired the V-Ray for Cinema 4D product from LAUBlab and will lead development on the product for the first time, will offer current customers free migration to a new update, V-Ray 3.7 for Cinema 4D. All users who move to the new version will receive a free V-Ray for Cinema 4D license, including all product updates, through January 15, 2020. Moving forward, Chaos Group will be providing all support, sales and product development in-house.

In addition to ongoing improvements to V-Ray for Cinema 4D, Chaos Group also released the Corona for Cinema 4D beta 2 at Supermeet, with the final product to follow in January 2019.

Main Image: Daniel Sian created Robots using V-Ray for Cinema 4D.

Promoting a Mickey Mouse watch without Mickey

Imagine creating a spot for a watch that celebrates the 90th anniversary of Mickey Mouse — but you can’t show Mickey Mouse. Already Been Chewed (ABC), a design and motion graphics studio, developed a POV concept that met this challenge and also tied in the design of the actual watch.

Nixon, a California-based premium watch company that is releasing a series of watches around the Mickey Mouse anniversary, called on Already Been Chewed to create the 20-second spot.

“The challenge was that the licensing arrangement that Disney made with Nixon doesn’t allow Mickey’s image to be in the spot,” explains Barton Damer, creative director at Already Been Chewed. “We had to come up with a campaign that promotes the watch and has some sort of call to action that inspires people to want this watch. But, at the same time, what were we going to do for 20 seconds if we couldn’t show Mickey?”

After much consideration, Damer and his team developed a concept to determine if they could push the limits on this restriction. “We came up with a treatment for the video that would be completely point-of-view, and the POV would do a variety of things for us that were working in our favor.”

The solution was to show Mickey’s hands and feet without actually showing the whole character. In another instance, a silhouette of Mickey is seen in the shadows on a wall, sending a clear message to viewers that the spot is an official Disney and Mickey Mouse release and not just something that was inspired by Mickey Mouse.

Targeting the appropriate consumer demographic segment was another key issue. “Mickey Mouse has long been one of the most iconic brands in the history of branding, so we wanted to make sure that it also appealed to the Nixon target audience and not just a Disney consumer,” Damer says. “When you think of Disney, you could brand Mickey for children or you could brand it for adults who still love Mickey Mouse. So, we needed to find a style and vibe that would speak to the Nixon target audience.”

The Already Been Chewed team chose surfing and skateboarding as dominant themes, since 16- to 30-year-olds are the target demographic and also because Disney is a West Coast brand.

Damer comments, “We wanted to make sure we were creating Mickey in a kind of 3D, tangible way, with more of a feature-film 3D feel. We felt that it should have a little bit more of a modern approach. But at the same time, we wanted to mesh it with a touch of the old-school vibe, like 1950s cartoons.”

In that spirit, the team wanted the action to start with Mickey walking from his car and then culminate at the famous Venice Beach basketball courts and skate park. Here’s the end result.

“The challenge, of course, is how to do all this in 15 seconds so that we can show the logos at the front and back and a hero image of the watch. And that’s where it was fun thinking it through and coming up with the flow of the spot and seamless transitions with no camera cuts or anything like that. It was a lot to pull off in such a short time, but I think we really succeeded.”

Already Been Chewed achieved these goals with an assist from Maxon’s Cinema 4D and Adobe After Effects. With Damer as creative lead, here’s the complete cast of characters: head of production Aaron Smock; 3D design by Thomas King, Barton Damer, Bryan Talkish and Lance Eckert; animation by Bryan Talkish and Lance Eckert; character animation by Chris Watson; soundtrack by DJ Sean P.

How to use animation in reality TV

By Aline Maalouf

The world of animation is changing and evolving at a rapid pace, bringing photorealistic imagery to both the small screen and the big screen — animation rendered with such detail that you can imagine the exact sensation of the water, feel the heat of the sunshine and experience the wilderness. Just look at the difference between the first Toy Story film, released in 1995, and Toy Story 3, released in 2010.

Over those 15 years, a complete world of difference emerged — we progressed from 2D to 3D, the colors became vivid, we can read the shifts between shadow and light and the sequences move much more quickly. The third film was a major feat for the studio, and now, eight years later, that technology is already on the cusp of being old news.

Technology is advancing faster than it can be implemented — and it isn’t just the Pixars and Disneys of the world who have to stay ahead of the curve with each release. Boutique companies are under just as much pressure to continually push the envelope on what’s possible in the animation space, while still delivering top results to clients within the sometimes demanding time constraints of film and television.

Aline Maalouf

Working in reality TV presents its own set of challenges in comparison to a fully animated program. To start, you need to seamlessly combine real-life interaction with animation — often showcasing what is there against what could be there. As animation continues to evolve, integrating with emerging technology, such as virtual reality, augmented reality and immersive platforms, understanding how users interact with the platform and how to best engage the audience will be crucial.

Here are four ways animation can enhance a reality TV program:

Showcasing a World of Possibilities
With the introduction of 3D animation, we are able to create imagery so realistic that it is often hard to define what is “real” and what is virtually designed. The real anchor of hyper-realistic animation is the ability to add and play with light and shadows. Layers of light allow us to see reflection, to experience a room from a new angle and to challenge viewers to experience the room in both the daylight and at nighttime.

For example, within our work on Gusto TV’s Where To I Do, couples must select their perfect wedding venue — often viewing a blank space and trying to envision their theme inside it. Using animation, those spaces are brought to life in full, rich color, from dawn to the glaring midday sun to dusk and midnight — no additional film crew time required.

Speeds up Production Process
Gone are the days when studios spent large budgets resetting room after room to showcase before-and-after options, particularly on renovation shows. It’s time-consuming and laborious. Working with an animation studio allows producers to showcase a renovated room three different ways, and the audience develops an early feel for the space without the need to see it physically set up.

It’s faster (with the right tools and technology to match TV timelines), allows more flexibility and eliminates the need to build costly sets for one-time use. Even outside of reality TV, the use of greenscreen space, green stages and CGI technology allows a flexibility in filming that didn’t necessarily exist two decades ago.

Makes Viewers Part of the Program
If animation is done well, it should make the viewers feel more invested in the program — as if they are part of this experience. Animation should not break what is happening in reality. In order to make this happen, it is essential to have up-to-date software and hardware that bridges the gap between the vision and what is actually accomplished within each scene.

Software and hardware go hand-in-hand in creating high-quality animations. If the software is up to date and not the hardware, the work will be compromised as the rendering process will not be able to support the full project scope. One ripple in the wave of animation and the viewer is reminded that what they’re seeing doesn’t really exist.

Opens Doors to Immersive Experiences
Although we have only scratched the surface of what’s possible when it comes to virtual reality, augmented reality and immersive experiences generated for viewers in the comfort of their living rooms, I anticipate a wave of growth in this space over the next five years. Our studio is already building some of these capabilities into our current projects. Overall, studios and production companies are looking for new ways to engage an audience that is exposed to hours of content a day.

Rather than just simply viewing the animation of a wedding venue, viewers will be able to click through the space — guiding their own passage from point A to point B. They become the host of their own journey.

Programs of all genres are dazzling their audiences with the future of animation, and reality TV is right there with them.


Aline Maalouf is co-founder/EVP of Neezo Studios, which has produced the animation and renderings for all six seasons of the Property Brothers and all live episodes of Brother vs Brother, in addition to other network shows.

Sony Imageworks provides big effects, animation for Warner’s Smallfoot

By Randi Altman

The legend of Bigfoot: a giant, hairy two-legged creature roaming the forests and giving humans just enough of a glimpse to freak them out. Sightings have been happening for centuries with no sign of slowing down — seriously, Google it.

But what if that story was turned around, and it was Bigfoot who was freaked out by a Smallfoot (human)? Well, that is exactly the premise of the new Warner Bros. film Smallfoot, directed by Karey Kirkpatrick. It’s based on the book “Yeti Tracks” by Sergio Pablos.

Karl Herbst

Instead of a human catching a glimpse of the mysterious giant, a yeti named Migo (Channing Tatum) sees a human (James Corden) and tells his entire snow-filled village about the existence of Smallfoot. Of course, no one believes him so he goes on a trek to find this mythical creature and bring him home as proof.

Sony Pictures Imageworks was tasked with all of the animation and visual effects work on the film, while Warner Animation Group handled all of the front-end work — adapting the script, creating the production design, editing, directing, producing and more. We reached out to Imageworks VFX supervisor Karl Herbst (Hotel Transylvania 2) to find out more about creating the animation and effects for Smallfoot.

The film has a Looney Tunes-type feel with squash and stretch. Did this provide more freedom or less?
In general, it provided more freedom since it allowed the animation team to really have fun with gags. It also gave them a ton of reference material to pull from and come up with new twists on older ideas. Once out of animation, depending on how far the performance was pushed, other departments — like the character effects team — would have additional work due to all of the exaggerated movements. But all of the extra work was worth it because everyone really loved seeing the characters pushed.

We also found that as the story evolved, Migo’s journey became more emotionally driven; we needed to find a style that also let the audience truly connect with what he was going through. We brought in a lot more subtlety and a more truthful physicality to the animation when needed. As a result, we have these incredibly heartfelt performances and moments that would feel right at home in an old Road Runner short. Yet it all still feels like part of the same world, with these truly believable characters at the center of it.

Was scale between such large and small characters a challenge?
It was one of the first areas we wanted to tackle since the look of the yeti’s fur next to a human was really important to filmmakers. In the end, we found that the thickness and fidelity of the yeti hair had to be very high so you could see each hair next to the hairs of the humans.

It also meant allowing the rigs for the human and yetis to be flexible enough to scale them as needed to have moments where they are very close together and they did not feel so disproportionate to each other. Everything in our character pipeline from animation down to lighting had to be flexible in dealing with these scale changes. Even things like subsurface scattering in the skin had dials in it to deal with when Percy, or any human character, was scaled up or down in a shot.

How did you tackle the hair?
We updated a couple of key areas in our hair pipeline starting with how we would build our hair. In the past, we would make curves that look more like small groups of hairs in a clump. In this case, we made each curve its own strand of a single hair. To shade this hair in a way that allowed artists to have better control over the look, our development team created a new hair shader that used true multiple-scattering within the hair.

We then extended that hair shading model to add control over the distribution around the hair fiber to model the effect of animal hair, which tends to scatter differently than human hair. This gave artists the ability to create lots of different hair looks, which were not based on human hair, as was the case with our older models.
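
Imageworks’ shader itself isn’t public, but the core idea described here — widening the azimuthal scattering distribution to move from human hair toward animal fur — can be sketched in a few lines of Python. Everything below (the Gaussian lobe, the beta values) is illustrative, not the studio’s actual model:

    import numpy as np

    def azimuthal_lobe(phi, beta):
        # Normalized lobe over the azimuthal angle phi (radians).
        # Small beta: tight, glossy highlight (human hair).
        # Large beta: broad, diffuse scatter (animal fur).
        g = np.exp(-0.5 * (phi / beta) ** 2)
        return g / np.trapz(g, phi)

    phi = np.linspace(-np.pi, np.pi, 512)
    human = azimuthal_lobe(phi, beta=np.radians(10))
    animal = azimuthal_lobe(phi, beta=np.radians(45))
    print(human.max() / animal.max())  # the human lobe peaks several times higher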

Was rendering so many furry characters on screen at a time an issue?
Yes. In the past this would have been hard to shade all at once, mostly due to our reliance on opacity to create the soft shadows needed for fur. With the new shading model, we were no longer using opacity at all so the number of rays needed to resolve the hair was lower than in the past. But we now needed to resolve the aliasing due to the number of fine hairs (9 million for LeBron James’ Gwangi).

We developed a few other new tools within our version of the Arnold renderer to help with aliasing and render time in general. The first was adaptive sampling, which allowed us to raise the anti-aliasing samples drastically: some pixels would use only a few samples while others would use very high sampling, whereas in the past all pixels got the same number. This focused our render time where we needed it, reducing it overall. Our development team also added the ability to pick a render up from its previous stopping point. This meant we could do all of our lighting work at a lower quality level, get creative approval from the filmmakers, then pick the renders back up and bring them to full quality without losing the time already spent.
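
Neither Imageworks’ Arnold fork nor its settings are public, but the two ideas — per-pixel adaptive sampling driven by variance, and picking a render up from a saved state — can be illustrated with a toy Python sketch. The shade() function and every threshold here are stand-ins:

    import numpy as np

    rng = np.random.default_rng(0)

    def shade(x, y):
        # Stand-in for tracing one camera ray: a noisy radiance estimate.
        return 0.5 + 0.5 * np.sin(x * y) + rng.normal(0.0, 0.2)

    def render(width, height, min_spp=8, max_spp=256, tol=0.02, checkpoint=None):
        # Resume from a saved (sum, sum_sq, count) checkpoint if one is given.
        if checkpoint is None:
            s = np.zeros((height, width))
            s2 = np.zeros_like(s)
            n = np.zeros_like(s)
        else:
            s, s2, n = checkpoint
        for y in range(height):
            for x in range(width):
                while n[y, x] < max_spp:
                    v = shade(x, y)
                    s[y, x] += v
                    s2[y, x] += v * v
                    n[y, x] += 1
                    if n[y, x] >= min_spp:
                        mean = s[y, x] / n[y, x]
                        var = max(s2[y, x] / n[y, x] - mean * mean, 0.0)
                        if (var / n[y, x]) ** 0.5 < tol:  # std error of the mean
                            break
        return s / n, (s, s2, n)

    # Low-quality pass for creative approval, then pick the render back up.
    preview, ckpt = render(16, 16, max_spp=32, tol=0.05)
    final, _ = render(16, 16, max_spp=256, checkpoint=ckpt)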

What tools were used for the hair simulations specifically, and what tools did you call on in general?
We used Maya and the Nucleus solvers for all of the hair simulations, but developed tools over them to deal with so much hair per character and so many characters on screen at once. The simulation for each character was driven by their design and motion requirements.

The Looney Tunes-inspired design and motion created a challenge around how to keep hair simulations from breaking with all of the quick and stretched motion while being able to have light wind for the emotional subtle moments. We solved all of those requirements by using a high number of control hairs and constraints. Meechee (Zendaya) used 6,000 simulation curves with over 200 constraints, while Migo needed 3,200 curves with around 30 constraints.
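
The tooling built on top of the Nucleus solvers is proprietary, but the standard trick it implies — simulate a sparse set of control curves, then interpolate the full head of render hairs from them — looks roughly like this. The k-nearest, inverse-distance blend is an assumption, not necessarily Imageworks’ scheme:

    import numpy as np

    def interpolate_hairs(control_roots, control_curves, render_roots, k=3):
        # Blend each render hair from its k nearest simulated control curves,
        # weighted by inverse distance between root positions.
        out = []
        for root in render_roots:
            d = np.linalg.norm(control_roots - root, axis=1) + 1e-6
            idx = np.argsort(d)[:k]
            w = 1.0 / d[idx]
            w /= w.sum()
            # control_curves has shape (num_controls, points_per_curve, 3).
            curve = np.einsum('i,ijk->jk', w, control_curves[idx])
            out.append(curve - curve[0] + root)  # re-root at the render follicle
        return np.stack(out)

    controls = np.zeros((4, 10, 3))
    controls[:, :, 1] = np.linspace(0, 1, 10)  # four straight, 1-unit strands
    roots = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.5]])
    hairs = interpolate_hairs(controls[:, 0], controls, roots)
    print(hairs.shape)  # (2, 10, 3): two render hairs, ten points each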

Stonekeeper (Common) was the most complex of the characters, with long braided hair on his head, a beard, shaggy arms and a cloak made of stones. He required a cloth simulation pass, a rigid body simulation was performed for the stones and the hair was simulated on top of the stones. Our in-house tool called Kami builds all of the hair at render time and also allows us to add procedurals to the hair at that point. We relied on those procedurals to create many varied hair looks for all of the generics needed to fill the village full of yetis.

How many different types of snow did you have?
We created three different snow systems for environmental effects. The first was a particle simulation of flakes for near-ground detail. The second was volumetric effects to create lots of atmosphere in the backgrounds that had texture and movement. We used this on each of the large sets and then stored those so lighters could pick which parts they wanted in each shot. To also help with artistically driving the look of each shot, our third system was a library of 2D elements that the effects team rendered and could be added during compositing to add details late in shot production.

For ground snow, we had different systems based on the needs in each shot. For shallow footsteps, we used displacement of the ground surface with additional little pieces of geometry to add crumble detail around the prints. This could be used in foreground or background.

For heavy interactions, like tunneling or sliding in the snow, we developed a new tool we called Katyusha. This new system combined rigid body destruction with fluid simulations to achieve all of the different states snow can take in any given interaction. We then rendered these simulations as volumetrics to give the complex lighting look the filmmakers were looking for. The snow, being in essence a cloud, allowed light transport through all of the different layers of geometry and volume that could be present at any given point in a scene. This made it easier for the lighters to give the snow its light look in any given lighting situation.
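
Katyusha itself is unpublished, but its final step — treating the snow “in essence as a cloud” by rasterizing simulation particles into a density volume that a renderer can light — is a common one. A minimal sketch with a uniform grid and unit-weight splats (production splatting is far more sophisticated):

    import numpy as np

    def particles_to_density(points, res=32, bound=1.0):
        # Splat particles into a voxel grid so the snow can be lit and
        # rendered as a participating medium instead of hard geometry.
        density = np.zeros((res, res, res))
        ijk = ((points + bound) / (2 * bound) * (res - 1)).astype(int)
        ijk = np.clip(ijk, 0, res - 1)
        np.add.at(density, tuple(ijk.T), 1.0)
        return density / max(density.max(), 1.0)

    pts = np.random.default_rng(1).uniform(-1, 1, size=(5000, 3))
    vol = particles_to_density(pts)
    print(vol.shape, float(vol.max()))  # (32, 32, 32) 1.0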

Was there a particular scene or effect that was extra challenging? If so, what was it and how did you overcome it?
The biggest challenge to the film as a whole was the environments. The story was very fluid, so design and build of the environments came very late in the process. Coupling that with a creative team that liked to find their shots — versus design and build them — meant we needed to be very flexible on how to create sets and do them quickly.

To achieve this, we began by breaking the environments into a subset of source shapes that could be combined in any fashion to build Yeti Mountain, Yeti Village and the surrounding environments. Surfacing artists then created materials that could be applied to any set piece, allowing for quick creative decisions about what was rock, snow and ice, and creating many different looks. All of these materials were created using PatternCreate networks as part of our OSL shaders. With them we could heavily leverage portable procedural texturing between assets, making location construction quicker, more flexible and easier to dial.
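
The PatternCreate networks are internal OSL, but the kind of portable, dialable procedural texturing Herbst describes can be illustrated with one toy rule — snow where a surface faces up, with noise breaking the line — written here in Python rather than OSL:

    import numpy as np

    rng = np.random.default_rng(2)

    def snow_mask(normals, snow_line=0.6, noise_amp=0.2):
        # Up-facing surfaces read as snow; noise breaks up the transition.
        # snow_line and noise_amp are the kind of knobs artists would dial.
        up = normals[:, 1]  # assumes Y-up normals
        noise = rng.uniform(-1.0, 1.0, len(normals)) * noise_amp
        return (up + noise > snow_line).astype(float)

    n = rng.normal(size=(1000, 3))
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    print(snow_mask(n).mean())  # fraction of the surface shaded as snow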

To get the right snow look for all levels of detail needed, we used a combination of textured snow, modeled snow and a simulation of geometric snowfall, all of which needed to shade the same. For the simulated snowfall we created a padding system that could be run at any time on an environment, giving it a fresh coating of snow. We did this so that filmmakers could modify sets freely in layout without worrying about broken snow lines. Doing all of that with modeled snow would have been too time-consuming and costly. This padding system worked not only in organic environments, like Yeti Village, but also in the Human City at the end of the film. The snow you see in the Human City is a combination of this padding system in the foreground and textures in the background.
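
The padding system is likewise proprietary, but a rough sketch of what such a re-runnable “fresh coat” pass might do is simple: push up-facing vertices outward so a redressed set keeps an unbroken snow line. The function name and depth value are hypothetical:

    import numpy as np

    def pad_snow(verts, normals, depth=0.05):
        # A re-runnable "fresh coat": push up-facing vertices outward so
        # snow lines stay unbroken after a set is re-dressed in layout.
        up_amount = np.clip(normals[:, 1], 0.0, None)  # zero on overhangs
        return verts + normals * (depth * up_amount)[:, None]

    verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    norms = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]])
    print(pad_snow(verts, norms))  # only the up-facing vertex moves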

Creating super sounds for Disney XD’s Marvel Rising: Initiation

By Jennifer Walden

Marvel revealed “the next generation of Marvel heroes for the next generation of Marvel fans” in a behind-the-scenes video back in December. Those characters stayed tightly under wraps until August 13, when a compilation of animated shorts called Marvel Rising: Initiation aired on Disney XD. Those shorts dive into the back story of the new heroes and give audiences a taste of what they can expect in the feature-length animated film Marvel Rising: Secret Warriors, which premiered September 30 on the Disney Channel and Disney XD simultaneously.

L-R: Pat Rodman and Eric P. Sherman

Handling audio post on both the animated shorts and the full-length feature is the Bang Zoom team led by sound supervisor Eric P. Sherman and chief sound engineer Pat Rodman. They worked on the project at the Bang Zoom Atomic Olive location in Burbank. The sounds they created for this new generation of Marvel heroes fit right in with the established Marvel universe but aren’t strictly limited to what already exists. “We love to keep it kind of close, unless Marvel tells us that we should match a specific sound. It really comes down to whether it’s a sound for a new tech or an old tech,” says Rodman.

Sherman adds, “When they are talking about this being for the next generation of fans, they’re creating a whole new collection of heroes, but they definitely want to use what works. The fans will not be disappointed.”

The shorts begin with a helicopter flyover of New York City at night. Blaring sirens mix with police radio chatter as searchlights sweep over a crime scene on the street below. A SWAT team moves in as a voice blasts over a bullhorn, “To the individual known as Ghost Spider, we’ve got you surrounded. Come out peacefully with your hands up and you will not be harmed.” Marvel Rising: Initiation wastes no time in painting a grim picture of New York City. “There is tension and chaos. You feel the oppressiveness of the city. It’s definitely the darker side of New York,” says Sherman.

The sound of the city throughout the series was created using a combination of sourced recordings of authentic New York City street ambience and custom recordings of bustling crowds that Rodman captured at street markets in Los Angeles. Mix-wise, Rodman says they chose to play the backgrounds of the city hotter than normal just to give the track a more immersive feel.

Ghost Spider
Not even 30 seconds into the shorts, the first new Marvel character makes her dramatic debut. Ghost Spider (Dove Cameron), who is also known as Spider Gwen, bursts from a third-story window, slinging webs at the waiting officers. Since she’s a new character, Rodman notes that she’s still finding her way and there’s a bit of awkwardness to her character. “We didn’t want her to sound too refined. Her tech is good, but it’s new. It’s kind of like Spider-Man first starting out as a kid and his tech was a little off,” he says.

Sound designer Gordon Hookailo spent a lot of time crafting the sound of Spider Gwen’s webs, which according to Sherman have more of a nylon, silky kind of sound than Spider-Man’s webs. There’s a subliminal ghostly wisp sound to her webs also. “It’s not very overt. There’s just a little hint of a wisp, so it’s not exactly like regular Spider-Man’s,” explains Rodman.

Initially, Spider Gwen seems to be a villain. She’s confronted by the young-yet-authoritative hero Patriot (Kamil McFadden), a member of S.H.I.E.L.D. who was trained by Captain America. Patriot carries a versatile, high-tech shield that can do lots of things, like become a hovercraft. It shoots lasers and rockets too. The hoverboard makes a subtle whooshy, humming sound that’s high-tech in a way that’s akin to the Goblin’s hovercraft. “It had to sound like Captain America too. We had to make it match with that,” notes Rodman.

Later on in the shorts, Spider Gwen’s story reveals that she’s actually one of the good guys. She joins forces with a crew of new heroes, starting with Ms. Marvel and Squirrel Girl.

Ms. Marvel (Kathreen Khavari) has the ability to stretch and grow. When she reaches out to grab Spider Gwen’s leg, there’s a rubbery, creaking sound. When she grows 50 feet tall she sounds 50 feet tall, complete with massive, ground-shaking footsteps and a lower-ranged voice sweetened with big delays and reverbs. “When she’s large, she almost has a totally different voice. She sounds like a large, forceful woman,” says Sherman.
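
Bang Zoom’s exact processing chain isn’t documented, but the treatment described — a pitch drop plus big delays — can be approximated in a few lines of Python. This sketch runs librosa’s pitch shifter on a synthetic stand-in signal; the interval and delay time are guesses:

    import numpy as np
    import librosa  # assumes librosa is installed

    sr = 22050
    t = np.linspace(0, 1.0, sr, endpoint=False)
    voice = 0.3 * np.sin(2 * np.pi * 220 * t)  # stand-in for a dialogue take

    # Drop the pitch roughly a major third for "50 feet tall."
    big = librosa.effects.pitch_shift(voice, sr=sr, n_steps=-4)

    # Crude "size" sweetener: mix in a short slapback delay.
    delay = int(0.08 * sr)
    wet = np.zeros_like(big)
    wet[delay:] = big[:-delay] * 0.4
    giant = (big + wet) / np.abs(big + wet).max()  # normalize to avoid clipping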

Squirrel Girl
One of the favorites on the series so far is Squirrel Girl (Milana Vayntrub) and her squirrel sidekick Tippy Toe. Squirrel Girl has the power to call a stampede of squirrels. Sound-wise, the team had fun with that, capturing recordings of animals small and large with their Zoom H6 field recorder. “We recorded horses and dogs mainly because we couldn’t find any squirrels in Burbank; none that would cooperate, anyway,” jokes Rodman. “We settled on a larger animal sound that we manipulated to sound like it had little feet. And we made it sound like there are huge numbers of them.”

Squirrel Girl is a fan of anime, and so she incorporates an anime style into her attacks, like calling out her moves before she makes them. Sherman shares, “Bang Zoom cut its teeth on anime; it’s still very much a part of our lifeblood. Pat and I worked on thousands of episodes of anime together, and we came up with all of these techniques for making powerful power moves.” For example, they add reverb to the power moves and choose “shings” that have an anime style sound.

What is an anime-style sound, you ask? “Diehard fans of anime will debate this to the death,” says Sherman. “It’s an intuitive thing, I think. I’ll tell Pat to do that thing on that line, and he does. We’re very much ‘go with the gut’ kind of people.

“As far as anime style sound effects, Gordon [Hookailo] specifically wanted to create new anime sound effects so we didn’t just take them from an existing library. He created these new, homegrown anime effects.”

Quake
The other hero briefly introduced in the shorts is Quake, voiced by Chloe Bennet — the same actress who plays Daisy Johnson, aka Quake, on Agents of S.H.I.E.L.D. Sherman says, “Gordon is a big fan of that show and has watched every episode. He used that as a reference for the sound of Quake in the shorts.”

The villain in the shorts has so far remained nameless, but when she first battles Spider Gwen the audience sees her pair of super-daggers that pulse with a green glow. The daggers are somewhat “alive,” and when they cut someone they take some of that person’s life force. “We definitely had them sound as if the power was coming from the daggers and not from the person wielding them,” explains Rodman. “The sounds that Gordon used were specifically designed — not pulled from a library — and there is a subliminal vocal effect when the daggers make a cut. It’s like the blade is sentient. It’s pretty creepy.”

Voices
The character voices were recorded at Bang Zoom, either in the studio or via ISDN. The challenge was getting all the different voices to sound as though they were in the same space together on-screen. Also, some sessions were recorded with single mics on each actor while other sessions were recorded as an ensemble.

Sherman notes it was an interesting exercise in casting. Some of the actors were YouTube stars (who don’t have much formal voice acting experience) and some were experienced voice actors. When an actor without voiceover experience comes in to record, the Bang Zoom team likes to start with mic technique 101. “Mic technique was a big aspect and we worked on that. We are picky about mic technique,” says Sherman. “But, on the other side of that, we got interesting performances. There’s a realism, a naturalness, that makes the characters very relatable.”

To get the voices to match, Rodman spent a lot of time using Waves EQ, Pro Tools Legacy Pitch and, occasionally, Waves UltraPitch when an actor slipped out of character. “They did lots of takes on some of these lines, so an actor might lose focus on where they were, performance-wise. You either have to pull them back in with EQ, pitching or leveling,” Rodman explains.
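
The EQ and pitch moves were made with the Waves and Pro Tools tools named above; leveling, the third fix Rodman mentions, is the simplest to show. A minimal sketch that matches one take’s RMS level to a reference take (synthetic audio, purely illustrative):

    import numpy as np

    def match_level(take, reference):
        # Level one take to another by matching RMS — the "leveling" part
        # of keeping dozens of takes of the same line consistent.
        rms = lambda x: np.sqrt(np.mean(x ** 2) + 1e-12)
        return take * (rms(reference) / rms(take))

    rng = np.random.default_rng(3)
    ref = 0.20 * rng.normal(size=22050)  # the take that sits right in the mix
    alt = 0.05 * rng.normal(size=22050)  # a quieter pickup line
    leveled = match_level(alt, ref)
    print(round(float(np.sqrt(np.mean(leveled ** 2))), 3))  # ~0.2, like ref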

One highlight of the voice recording process was working with voice actor Dee Bradley Baker, who did the squirrel voice for Tippy Toe. Most of Tippy Toe’s final track was Dee Bradley Baker’s natural voice. Rodman rarely had to tweak the pitch, and it needed no other processing or sound design enhancement. “He’s almost like a Frank Welker (who did the voice of Fred Jones on Scooby-Doo, the voice of Megatron starting with the ‘80s Transformers franchise and Nibbler on Futurama).”

Marvel Rising: Initiation was like a training ground for the sound of the feature-length film. The ideas that Bang Zoom worked out there were expanded upon for the soon-to-be released Marvel Rising: Secret Warriors. Sherman concludes, “The shorts gave us the opportunity to get our arms around the property before we really dove into the meat of the film. They gave us a chance to explore these new characters.”


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter @audiojeney.

London design, animation studio Golden Wolf sets up shop in NYC

Animation studio Golden Wolf, headquartered in London, has launched its first stateside location in New York City. The expansion comes on the heels of an alliance with animation/VFX/live-action studio Psyop, a minority investor in the company. Golden Wolf now occupies studio space in SoHo adjacent to Psyop and its sister company Blacklist, which formerly represented Golden Wolf stateside and was instrumental to the relationship.

Among the year’s highlights from Golden Wolf are an integrated campaign for Nike FA18 Phantom (client direct), a spot for the adidas x Parley Run for the Oceans initiative (TBWA Amsterdam) in collaboration with Psyop, and Marshmello’s Fly music video for Disney. Golden Wolf also received an Emmy nomination for its main title sequence for Disney’s DuckTales reboot.

Heading up Golden Wolf’s New York office are two transplants from the London studio, executive producer Dotti Sinnott and art director Sammy Moore. Both joined Golden Wolf in 2015, Sinnott from motion design studio Bigstar, where she was a senior producer, and Moore after a run as a freelance illustrator/designer in London’s agency scene.

Sinnott comments: “Building on the strength of our London team, the Golden Wolf brand will continue to grow and evolve with the fresh perspective of our New York creatives. Our presence on either side of the Atlantic not only brings us closer to existing clients, but also positions us perfectly to build new relationships with New York-based agencies and brands. On top of this, we’re able to use the time difference to our advantage to work on faster turnarounds and across a range of budgets.”

Founded in 2013 by Ingi Erlingsson, the studio’s executive creative director, Golden Wolf is known for youth-oriented work — especially content for social media, entertainment and sports — that blurs the lines of irreverent humor, dynamic action and psychedelia. Erlingsson was once a prolific graffiti artist and, later, illustrator/designer and creative director at U.K.-based design agency ilovedust. Today he inspires Golden Wolf’s creative culture and disruptive style fed in part by a wave of next-gen animation talent coming out of schools such as Gobelins in France and The Animation Workshop in Denmark.

“I’m excited about our affiliation with Psyop, which enjoys an incredible legacy producing industry-leading animated advertising content,” Erlingsson says. “Golden Wolf is the new kid on the block, with bags of enthusiasm and an aim to disrupt the industry with new ideas. The combination of the two studios means that we are able to tackle any challenge, regardless of format or technical approach, with the support of some of the world’s best artists and directors. The relationship allows brands and agencies to have complete confidence in our ability to solve even the biggest challenges.”

Golden Wolf’s initial work out of its New York studio includes spots for Supercell (client direct) and Bulleit Bourbon (Barton F. Graf). Golden Wolf is represented in the US market by Hunky Dory for the East Coast, Baer Brown for the Midwest and In House Reps for the West Coast. Stink represents the studio for Europe.

Main Photo: (L-R) Dotti Sinnott, Ingi Erlingsson and Sammy Moore.

Using VFX to bring the new Volkswagen Jetta to life

LA-based studio Jamm provided visual effects for the all-new 2019 Volkswagen Jetta campaign Betta Getta Jetta. Created by Deutsch and produced by ManvsMachine, the series of 12 spots brings the Jetta to life by combining Jamm’s CG design with a color palette inspired by the car’s 10-color ambient lighting system.

“The VW campaign offered up some incredibly fun and intricate challenges. Most notable was the volume of work to complete in a limited amount of time — 12 full-CG spots in just nine weeks, each one unique with its own personality,” says VFX supervisor Andy Boyd.

Collaboration was key to delivering so many spots in such a short span of time. Jamm worked closely with ManvsMachine on every shot. “The team had a very strong creative vision, which is crucial in the full-3D world, where anything is possible,” explains Boyd.

Jamm employed a variety of techniques for the music-centric campaign, which highlights updated features such as ambient lighting and Beats Audio. The series includes spots titled Remix, Bumper-to-Bumper, Turb-Whoa, Moods, Bass, Rings, Puzzle and App Magnet, along with 15-second teasers, all of which aired on various broadcast, digital and social channels during the World Cup.

For “Remix,” Jamm brought both a 1985 and a 2019 Jetta to life, along with a hybrid mix of the two, adding a cool layer of turntablist VFX, whereas for “Puzzle,” they cut up the car procedurally in Houdini, which allowed the team to change around the slices as needed.
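
Jamm’s Houdini setup isn’t public, but the underlying idea — partition the car into slabs that can be offset or reordered independently — is easy to sketch on a point cloud. The slab count, axis and offsets below are arbitrary:

    import numpy as np

    def slice_and_shuffle(points, num_slices=8, axis=2, offsets=None):
        # Partition a point cloud into slabs along one axis and offset each
        # slab independently, so slices can be swapped without remodeling.
        coords = points[:, axis]
        edges = np.linspace(coords.min(), coords.max(), num_slices + 1)
        slab = np.clip(np.digitize(coords, edges) - 1, 0, num_slices - 1)
        if offsets is None:
            offsets = np.zeros((num_slices, 3))
        return points + offsets[slab]

    car = np.random.default_rng(4).uniform(-1, 1, (1000, 3))
    off = np.zeros((8, 3))
    off[3] = [0.3, 0.0, 0.0]  # slide one slice out sideways
    print(slice_and_shuffle(car, offsets=off).shape)  # (1000, 3)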

For Bass, Jamm helped bring personality to the car while keeping its movements grounded in reality. Animation supervisor Stew Burris pushed the car’s performance and dialed in the choreography of the dance with ManvsMachine as the Jetta discovered the beat, adding exciting life to the car as it bounced to the bassline and hit the switches on a little three-wheel motion.

We reached out to Jamm’s Boyd to find out more.

How early did Jamm get involved?
We got involved as soon as the agency boards were client-approved. We worked hand in hand with ManvsMachine to previs each of the spots in order to lay the foundation for our CG team to execute both the agency’s and the directors’ vision.

What were the challenges of working on so many spots at once?
The biggest challenge was for editorial to keep up with the volume of previs options we gave them to present to the agency.

Other than Houdini, what tools did you use?
Flame, Nuke and Maya were used as well.

What was your favorite spot of the 12 and why?
Puzzle was our favorite to work on. It was the last of the bunch delivered to Deutsch, and we treated it with a more technical approach, slicing up the car like a Rubik’s Cube.


Allegorithmic’s Substance Painter adds subsurface scattering

Allegorithmic has released the latest version of its Substance Painter tool, aimed at VFX and game studios and pros looking for ways to create realistic lighting effects. Substance Painter enhancements include subsurface scattering (SSS), new projection and fill tools, improvements to the UX and support for a range of new meshes.

Using Substance Painter’s newly updated shaders, artists will be able to add subsurface scattering as a default option. Artists can add a Scattering map to a texture set and activate the new SSS post-effect. Skin, organic surfaces, wax, jade and any other translucent materials that require extra care will now look more realistic, with redistributed light shining through from under the surface.
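
Allegorithmic hasn’t published the post-effect’s internals, but the behavior described — light redistributed beneath the surface, with the Scattering map controlling how far it travels — can be illustrated with a toy 1D diffusion blur. The exponential falloff here is an assumption:

    import numpy as np

    def sss_blur(irradiance, scatter, radius=8):
        # Toy 1D diffusion: each texel's light bleeds to its neighbors with
        # an exponential falloff whose width follows the Scattering map.
        n = len(irradiance)
        out = np.zeros(n)
        for i in range(n):
            j = np.arange(max(0, i - radius), min(n, i + radius + 1))
            w = np.exp(-np.abs(j - i) / (1.0 + scatter[i] * radius))
            out[i] = np.sum(irradiance[j] * w) / np.sum(w)
        return out

    light = np.zeros(32)
    light[16] = 1.0                                  # one hot texel
    wax = sss_blur(light, scatter=np.full(32, 1.0))  # translucent: soft spread
    stone = sss_blur(light, scatter=np.zeros(32))    # near-opaque: stays tight
    print(wax[16] < stone[16])  # True: scattering flattens the hot spot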

The release also includes updates to projection and fill tools, beginning with the user-requested addition of non-square projection. Images can be loaded in both the projection and stencil tool without altering the ratio or resolution. Those projection and stencil tools can also disable tiling in one or both axes. Fill layers can be manipulated directly in the viewport using new manipulator controls. Standard UV projections feature a 2D manipulator in the UV viewport. Triplanar Projection received a full 3D manipulator in the 3D viewport, and both can be translated, scaled and rotated directly in-scene.
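
For readers unfamiliar with triplanar projection, the core of it fits in a few lines: project the texture along each axis and blend the three samples by the absolute value of the surface normal. A toy Python version, with a procedural checker standing in for the image:

    import numpy as np

    def triplanar(pos, normal, tex, scale=1.0):
        # Sample the texture on the YZ, XZ and XY planes, then blend the
        # three samples by the absolute value of the surface normal.
        w = np.abs(normal)
        w = w / w.sum()
        sx = tex(pos[1] * scale, pos[2] * scale)
        sy = tex(pos[0] * scale, pos[2] * scale)
        sz = tex(pos[0] * scale, pos[1] * scale)
        return w[0] * sx + w[1] * sy + w[2] * sz

    checker = lambda u, v: float((int(np.floor(u)) + int(np.floor(v))) % 2)
    p = np.array([0.2, 1.3, 2.7])
    n = np.array([0.0, 1.0, 0.0])  # up-facing point: pure XZ projection
    print(triplanar(p, n, checker))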

Along with the improvements to the artist tools, Substance Painter includes several updates designed to improve the overall experience for users of all skill levels. Consistency between tools has been improved, and additions like exposed presets in Substance Designer and a revamped, universal UI guide make it easier for users to jump between tools.

Additional updates include:
• Alembic support — The Alembic file format is now supported by Substance Painter, starting with mesh and camera data. Full animation support will be added in a future update.
• Camera import and selection — Multiple cameras can be imported with a mesh, allowing users to switch between angles in the viewport; previews of the framed camera angle now appear as an overlay in the 3D viewport.
• Full glTF support — Substance Painter now automatically imports and applies textures when loading glTF meshes, removing the need to import or adapt mesh downloads from Sketchfab.
• ID map drag-and-drop — Both materials and smart materials can be taken from the shelf and dropped directly onto ID colors, automatically creating an ID mask.
• Improved Substance format support — Improved tweaking of Substance-made materials and effects thanks to visible-if and embedded presets.

Maxon intros Cinema 4D Release 20

Maxon will be at Siggraph this year showing Cinema 4D Release 20 (R20), the next iteration of its 3D design and animation software. Release 20 introduces high-end features for VFX and motion graphics artists, including node-based materials, volume modeling, CAD import and an evolution of the MoGraph toolset.

Maxon expects Cinema 4D Release 20 to be available this September for both Mac and Windows operating systems.

Key highlights in Release 20 include:
Node-Based Materials – This feature provides new possibilities for creating materials — from simple references to complex shaders — in a node-based editor. With more than 150 nodes to choose from, each performing a different function, artists can combine nodes to easily build complex shading effects. Users new to a node-based material workflow can still rely on Cinema 4D’s standard Material Editor interface to create the corresponding node material in the background automatically. Node-based materials can be packaged into assets with user-defined parameters exposed in an interface similar to Cinema 4D’s Material Editor.

MoGraph Fields – New capabilities in this procedural animation toolset offer an entirely new way to define the strength of effects by combining falloffs — from simple shapes to shaders, sounds, objects and formulas. Artists can layer Fields atop each other with standard mixing modes and remap their effects. They can also group multiple Fields together and use them to control effectors, deformers, weights and more.
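
Maxon’s implementation is its own, but the concept — falloffs as composable fields whose mixed, remapped result drives an effector’s strength — can be sketched abstractly. The shapes, the “multiply” mixing mode and the remap range below are arbitrary choices:

    import numpy as np

    def sphere_falloff(p, center, radius):
        # Strength 1 at the center, fading linearly to 0 at the radius.
        return np.clip(1.0 - np.linalg.norm(p - center, axis=-1) / radius, 0.0, 1.0)

    def remap(v, lo=0.2, hi=0.8):
        # Remap the mixed falloff into a tighter working range.
        return np.clip((v - lo) / (hi - lo), 0.0, 1.0)

    pts = np.random.default_rng(5).uniform(-1, 1, (1000, 3))
    a = sphere_falloff(pts, np.array([0.0, 0.0, 0.0]), 1.0)
    b = sphere_falloff(pts, np.array([0.5, 0.0, 0.0]), 0.8)
    strength = remap(a * b)  # "multiply" mixing mode, then remapped
    print(float(strength.min()), float(strength.max()))

The resulting per-point strength array is exactly the kind of weight that would then drive an effector or deformer.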

CAD Data Import – Popular CAD formats can be imported into Cinema 4D R20 with a drag and drop. A new scale-based tessellation interface allows users to adjust detail to build amazing visualizations. Step, Solidworks, JT, Catia V5 and IGES formats are supported.

Volume Modeling – Users can create complex models by adding or subtracting basic shapes in Boolean-type operations using Cinema 4D R20’s OpenVDB–based Volume Builder and Mesher. They can also procedurally build organic or hard-surface volumes using any Cinema 4D object, including new Field objects. Volumes can be exported in sequenced .vdb format for use in any application or render engine that supports OpenVDB.
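
The Boolean workflow rests on a convenient property of signed-distance volumes that is easy to demonstrate: union and subtraction are just per-voxel min and max. A NumPy sketch (a real pipeline would use OpenVDB grids rather than dense arrays):

    import numpy as np

    # Signed-distance grids: negative inside a shape, positive outside.
    res = 64
    ax = np.linspace(-1, 1, res)
    x, y, z = np.meshgrid(ax, ax, ax, indexing='ij')

    sphere = np.sqrt(x**2 + y**2 + z**2) - 0.6
    box = np.maximum.reduce([np.abs(x), np.abs(y), np.abs(z)]) - 0.45

    union = np.minimum(sphere, box)    # add the two volumes
    carved = np.maximum(box, -sphere)  # subtract the sphere from the box
    # A mesher (marching cubes at level 0) would turn either grid into
    # polygons, which is the role a volume mesher plays in such a workflow.
    print(int((carved < 0).sum()), "voxels inside the carved shape")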

ProRender Enhancements — ProRender in Cinema 4D R20 extends the GPU-rendering toolset with key features including subsurface scattering, motion blur and multipasses. Also included are Metal 2 support, an updated ProRender core, out-of-core textures and other architectural enhancements.

Core Technology Modernization — As part of the transition to a more modern core in Cinema 4D, R20 comes with substantial API enhancements, the new node framework, further development on the new modeling framework and a new UI framework.

During Siggraph, Maxon will have guest artists presenting at their booth each day of the show. Presentations will be live streamed on C4DLive.com.

SIGGRAPH conference chair Roy C. Anthony: VR, AR, AI, VFX, more

By Randi Altman

Next month, SIGGRAPH returns to Vancouver after turns in Los Angeles and Anaheim. This gorgeous city, whose convention center offers a water view, is home to many visual effects studios providing work for film, television and spots.

As usual, SIGGRAPH will host many presentations, showcase artists’ work, display technology and offer a glimpse into what’s on the horizon for this segment of the market.

Roy C. Anthony

Leading up to the show — which takes place August 12-16 — we reached out to Roy C. Anthony, this year’s conference chair. For his day job, Anthony recently joined Ventuz Technology as VP, creative development. There, he leads initiatives to bring Ventuz’s realtime rendering technologies to creators of sets, stages and ProAV installations around the world.

SIGGRAPH is back in Vancouver this year. Can you talk about why it’s important for the industry?
There are 60-plus world-class VFX and animation studios in Vancouver. There are more than 20,000 film and TV jobs, and more than 8,000 VFX and animation jobs in the city.

So, Vancouver’s rich production-centric communities are leading the way in film and VFX production for television and onscreen films. They are also busy with new media content, games work and new workflows, including those for AR/VR/mixed reality.

How many exhibitors this year?
The conference and exhibition will play host to over 150 exhibitors on the show floor, showcasing the latest in computer graphics and interactive technologies, products and services. Due to the increase in the amount of new technology that has debuted in the computer graphics marketplace over this past year, almost one quarter of this year’s 150 exhibitors will be presenting at SIGGRAPH for the first time.

In addition to the traditional exhibit floor and conferences, what are some of the can’t-miss offerings this year?
We have increased the presence of virtual, augmented and mixed reality projects and experiences — and we are introducing our new Immersive Pavilion in the east convention center, which will be dedicated to this area. We’ve incorporated immersive tech into our computer animation festival with the inclusion of our VR Theater, back for its second year, as well as inviting a special, curated experience with New York University’s Ken Perlin — he’s a legendary computer graphics professor.

We’ll be kicking off the week in a big VR way with a special session following the opening ceremony featuring Ivan Sutherland, considered by many as “the father of computer graphics.” That 50-year retrospective will present the history and innovations that sparked our industry.

We have also brought Syd Mead, a legendary “visual futurist” (Blade Runner, Tron, Star Trek: The Motion Picture, Aliens, Time Cop, Tomorrowland, Blade Runner 2049), who will display an arrangement of his art in a special collection called Progressions. This will be seen within our Production Gallery experience, which also returns for its second year. Progressions will exhibit more than 50 years of artwork by Syd, from his academic years to his most current work.

We will have an amazing array of guest speakers, including those featured within the Business Symposium, which is making a return to SIGGRAPH after an absence of a few years. Among these speakers are people from the Disney Technology Innovation Group, Unity and Georgia Tech.

On Tuesday, August 14, our SIGGRAPH Next series will present a keynote speaker each morning to kick off the day with an inspirational talk. These speakers are Tony DeRose, a senior scientist from Pixar; Daniel Szecket, VP of design for Quantitative Imaging Systems; and Bob Nicoll, dean of Blizzard Academy.

There will be a 25th anniversary showing of the original Jurassic Park movie, being hosted by “Spaz” Williams, a digital artist who worked on that film.

Can you talk about this year’s keynote and why he was chosen?
We’re thrilled to have ILM head and senior VP, ECD Rob Bredow deliver the keynote address this year. Rob is all about innovation — pushing through scary new directions while maintaining the leadership of artists and technologists.

Rob is the ultimate modern-day practitioner, a digital VFX supervisor who has been disrupting ‘the way it’s always been done’ to move to new ways. He truly reflects the spirit of ILM, which was founded in 1975 and is just one year younger than SIGGRAPH.

A large part of SIGGRAPH is its slant toward students and education. Can you discuss how this came about and why this is important?
SIGGRAPH supports education in all sub-disciplines of computer graphics and interactive techniques, and it promotes and improves the use of computer graphics in education. Our Education Committee sponsors a broad range of projects, such as curriculum studies, resources for educators and SIGGRAPH conference-related activities.

SIGGRAPH has always been a welcoming and diverse community, one that encourages mentorship, and acknowledges that art inspires science and science enables advances in the arts. SIGGRAPH was built upon a foundation of research and education.

How are the Computer Animation Festival films selected?
The Computer Animation Festival has two programs, the Electronic Theater and the VR Theater. Because of the large volume of submissions for the Electronic Theater (over 400), there is a triage committee for the first phase. The CAF chair then takes the high-scoring pieces to a jury comprised of industry professionals. The jury’s selections then become the Electronic Theater show pieces.

The selections for the VR Theater are made by a smaller panel comprised mostly of sub-committee members that watch each film in a VR headset and vote.

Can you talk more about how SIGGRAPH is tackling AR/VR/AI and machine learning?
Since SIGGRAPH 2018 is about the theme of “Generations,” we took a step back to look at how we got where we are today in terms of AR/VR, and where we are going with it. Much of what we know today couldn’t have been possible without the research and creation of Ivan Sutherland’s 1968 head-mounted display. We have a fantastic panel celebrating the 50-year anniversary of his HMD, which is widely considered the first VR HMD.

AI tools are newer, and we created a panel that focuses on trends and the future of AI tools in VFX, called “Future Artificial Intelligence and Deep Learning Tools for VFX.” This panel gains insight from experts embedded in both the AI and VFX industries and gives attendees a look at how different companies plan to further their technology development.

What is the process for making sure that all aspects of the industry are covered in terms of panels?
Every year new ideas for panels and sessions are submitted by contributors from all over the globe. Those submissions are then reviewed by a jury of industry experts, and it is through this process that panelists and cross-industry coverage is determined.

Each year, the conference chair oversees the program chairs, then each of the program chairs become part of a jury process — this helps to ensure the best program with the most industries represented from across all disciplines.

In the rare case a program committee feels they are missing something key in the industry, they can try to curate a panel in, but we still require that that panel be reviewed by subject matter experts before it would be considered for final acceptance.

Boxx’s Apexx SE capable of 5.0GHz clock speed

Boxx Technologies has introduced the Apexx Special Edition (SE), a workstation featuring a professionally overclocked Intel Core i7-8086K limited edition processor capable of reaching 5.0GHz across all six of its cores.

In celebration of the 40th anniversary of the Intel 8086 (the processor that launched x86 architecture), Intel provided Boxx with a limited number of the high-performance CPUs ideal for 3D modeling, animation and CAD workflows.

Available only while supplies last, and custom-configured to accelerate Autodesk’s 3ds Max and Maya, Adobe CC, Maxon Cinema 4D and other pro apps, the Apexx SE features a six-core, 8th-generation Intel Core i7-8086K limited edition processor professionally overclocked to 5.0GHz. Unlike PC gaming systems, the liquid-cooled Apexx SE sustains that frequency across all cores — even in the most demanding situations.

Housed in a compact, metallic-blue chassis, the Apexx SE supports up to three Nvidia or AMD Radeon Pro graphics cards and features solid-state drives and 2600MHz DDR4 memory. Boxx is offering a three-year warranty on the systems.

“As longtime Intel partners, Boxx is honored to be chosen to offer this state-of-the-art technology. Lightly threaded 3D content creation tools are limited by the frequency of the processor, so a faster clock speed means more creating and less waiting,” explains Boxx VP, marketing and business development Shoaib Mohammad.

Luke Scott to run newly created Ridley Scott Creative Group

Filmmaker Ridley Scott has brought all of his RSA Films-affiliated companies together in a multi-business restructure to form the Ridley Scott Creative Group. The Ridley Scott Creative Group aims to strengthen the network across the related companies to take advantage of emerging opportunities across all entertainment genres, building on their existing work in film, television, branded entertainment, commercials, VR, short films, documentaries, music video, design and animation, and photography.

Ridley Scott

Luke Scott will assume the role of global CEO, working with founder Ridley Scott and partners Jake and Jordan Scott to oversee the future strategic direction of the newly formed group.

“We are in a new golden age of entertainment,” says Ridley Scott. “The world’s greatest brands, platforms, agencies, new entertainment players and studios are investing hugely in entertainment. We have brought together our talent, capabilities and creative resources under the Ridley Scott Creative Group, and I look forward to maximizing the creative opportunities we now see unfolding with our executive team.”

The companies that make up the RSCG will continue to operate autonomously but will now offer clients synergy under the group offering.

The group includes commercial production company RSA Films, which produced ads such as Apple’s 1984, Budweiser’s Super Bowl favorite Lost Dog and, more recently, Adidas Originals’ Original is Never Finished campaign, as well as branded content for Johnnie Walker, HBO, Jaguar, Ford, Nike and the BMW Films series; the music video production company founded by Jake Scott, Black Dog Films (Justin Timberlake, Maroon 5, Nicki Minaj, Beyoncé, Coldplay, Björk and Radiohead); the entertainment marketing company 3AM; commercial production company Hey Wonderful, founded by Michael Di Girolamo; newly founded UK commercial production company Darling Films; and film and television production company Scott Free (Gladiator, Taboo, The Martian, The Good Wife), which continues to be led by David W. Zucker, president, US television; Kevin J. Walsh, president, US film; and Ed Rubin, managing director, UK television/film.

“Our Scott Free Films and Television divisions have an unprecedented number of movies and shows in production,” reports Luke Scott. “We are also seeing a huge appetite for branded entertainment from our brand and agency partners to run alongside high-quality commercials. Our entertainment marketing division 3AM is extending its capabilities to all our partners, while Black Dog is moving into short films and breaking new, world-class talent. It is a very exciting time to be working in entertainment.”

Carbon creates four animated thrill ride spots

Carbon was called on once again by agency Cramer-Krasselt to create four spots — Railblazer, Twisted Timbers, Steel Vengeance and HangTime — for Cedar Fair Entertainment Company, which owns and operates 11 amusement parks across North America.

Following the success of Carbon’s creepy 2017 teaser film for the ride Mystic Timbers, Cramer-Krasselt senior art director David Vaca and his team presented Carbon with four ideas, each a deep dive into the themes and backstories of the rides.

Working across four 30-second films simultaneously and leading a “tri-coastal” team of artists, CD Liam Chapple shared directing duties with lead artists Tim Little and Gary Fouchy. The studio has offices in NYC, LA and Chicago.

According to Carbon executive producer/managing director Phil Linturn, “We soaked each script in the visual language, color grades, camera framing and edits reminiscent of our key inspiration films for each world — a lone gun-slinger arriving to town at sundown in the wild west, the carefree and nostalgic surf culture of California, and extreme off-roading adventures in the twisting canyons of the southwest.”

Carbon’s technical approach to these films was dictated by the fast turnaround and having all films in production at the same time. To achieve the richness, tone and detail required to immerse the viewer in these worlds, Carbon blended stylized CGI with hyper-real matte paintings and realistic lighting to create a look somewhere between their favorite children’s storybooks, contemporary manga animation, the Spaghetti Westerns of Sergio Leone and one or two of their favorite Pixar films.

Carbon called on Side Effects Houdini (partly for its procedural ocean toolkit), Autodesk’s Maya (nCloth) and 3ds Max, Pixologic’s ZBrush for 3D sculpting and matte painting, Foundry’s Nuke and FilmLight’s Baselight for color.

“We always love working with Cramer-Krasselt,” concludes Linturn. “They come with awesome concepts and an open mind, challenging us to surprise them with each new deck. This was a fantastic opportunity to expand on our body of full CGI-direction work and to explore some interesting looks and styles. It also allowed us to come up with some very creative workflows across all three offices and to achieve two minutes of animation in just a few weeks. The fact that these four films are part of a much bigger broadcast campaign comprising 70-plus broadcast spots is a testament to the focus and range of the production team.”

Behind the Title: Versus Partner/CD Justin Barnes

NAME: Justin Barnes

COMPANY: Versus (@vs_nyc)

CAN YOU DESCRIBE YOUR COMPANY?
We are “versus” the traditional model of a creative studio. Our approach is design driven and full service. We handle everything from live action to post production, animation and VFX. We often see projects from concept through delivery.

WHAT’S YOUR JOB TITLE?
Partner and Creative Director

WHAT DOES THAT ENTAIL?
I handle the creative side of Versus. From pitching to ideation, thought leadership and working closely with our editors, animators, artists and clients to make our creative — and our clients’ creative vision — the best it can be.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
There’s a lot of business and politics that you have to deal with being a creative.

Adidas

WHAT’S YOUR FAVORITE PART OF THE JOB?
Every day is different, full of new challenges and the opportunity to come up with new ideas and make really great work.

WHAT’S YOUR LEAST FAVORITE?
When I have to deal with the business side of things more than the creative side.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
For me, it’s very late at night; the only time I can work with no distractions.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Anything in the creative world.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
It’s been a natural progression for me to be where I am. Working with creative and talented people in an industry with unlimited possibilities has always seemed like a perfect fit.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
– Re-brand of The Washington Post
– Animated content series for the NCAA
– CG campaign for Zyrtec
– Live-action content for Adidas and Alltimers collaboration

Zyrtec

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
I am proud of all the projects we do, but the ones that stick out the most are the projects with the biggest challenges that we have pulled together and made look amazing. That seems like every project these days.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My laptop, my phone and Uber.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I can’t live without Pinterest. It’s a place to capture the huge streams of inspiration that come at us each day.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
We have music playing in the office 24/7, everything from hip-hop to classical. We love it all. When I am writing for a pitch, I need a little more concentration. I’ll throw on my headphones and put on something that I can get lost in.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Working on personal projects is big in helping de-stress. Also time at my weekend house in Connecticut.

Foundry intros Mari 4.0

Foundry has released Mari 4.0, the latest version of its digital 3D painting and texturing tool. Mari 4.0 arrives with a host of advanced features that make the tool easier to use and faster to learn, including more flexible and configurable exporting, simpler navigation and a raft of improved workflows.

Key benefits of Mari 4.0 include:
Quicker start-up and export: Mari 4.0 allows artists to get projects up-and-running faster with a new startup mechanism that automatically performs the steps previously completed manually by the user. Shaders are automatically built, with channels connected to them as defined by the channel presets in the startup dialog. The user also now gets the choice of initial lighting and shading setup. The new Export Manager configures the batch exporting of Channels and Bake Point Nodes. Artists can create and manage multiple export targets from the same source, as well as perform format conversions during export. This allows for far more control and flexibility when passing Mari’s texture maps down the pipeline.
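
Mari’s Export Manager is configured through its UI, but conceptually each export target is just a mapping of source channel to path, format and size. A hypothetical stand-alone sketch of the same idea using Pillow — none of these names come from Mari’s API:

    from PIL import Image  # assumes Pillow is installed

    # Hypothetical targets: one painted channel, several deliverables.
    targets = [
        {"path": "basecolor_1k.png", "format": "PNG", "size": (1024, 1024)},
        {"path": "basecolor_512.jpg", "format": "JPEG", "size": (512, 512)},
    ]

    source = Image.new("RGB", (1024, 1024), (128, 96, 64))  # stand-in texture
    for t in targets:
        source.resize(t["size"]).save(t["path"], t["format"])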

Better navigation: A new Palettes Toolbar containing all Mari’s palettes offers easy access and visibility to everything Mari can do. It’s now easier to expand a Palette to fullscreen by hitting the spacebar while your mouse is hovered over it. Tools of a similar function have been grouped under a single button in the Tools toolbar, taking up less space and allowing the user to better focus on the Canvas. Various Palettes have been merged together, removing duplication and simplifying the UI, making Mari both easier to learn and use.

Improved UI: The Colors Palette is now scalable for better precision, and the component sliders have been improved to show the resulting color at each point along the control. Users can now fine tune their procedural operations with precision keyboard stepping functionality brought into Mari’s numeric controls.

The HUD has been redesigned so it no longer draws over the paint subject, allowing the user to better focus on their painting and work more effectively. Basic Node Graph mode has been removed: Advanced is now the default. For everyone learning Mari, the Non-Commercial version now has full Node Graph access.

Enhanced workflows: A number of key workflow improvements have been brought to Mari 4.0. A drag-and-drop fill mechanism allows users to fill paint across their selections in a far more intuitive manner, reducing time and increasing efficiency. The Brush Editor has been merged into the Tool Properties Palette, with the brush being used now clearly displayed. It’s now easy to browse and load sets of texture files into Mari, with a new Palette for browsing texture sets. The Layers Palette is now more intuitive when working with Group layers, allowing users to achieve the setups they desire with fewer steps. And users now have a shader in Mari that previews and works with the channels that match their final 3D program/shader: The Principled BRDF, based on the 2012 paper from Brent Burley of Walt Disney Animation Studios.

Core: Having upgraded to OpenSubdiv 3.1.x and introduced the features into the UI, users are able to better match the behavior of mesh subdivision that they get in software renderers. Mari’s user preference files are now saved with the application version embedded in the file names — meaning artists can work between different versions of Mari without the danger of corrupting their UI or preferences. Many preferences have had their groups, labels and tooltips modified to be easier to understand. All third-party libraries have been upgraded to match those specified by the VFX Reference Platform 2017.
Mari 4.0 is available now.

House of Moves adds Selma Gladney-Edelman, Alastair Macleod

Animation and motion capture studio House of Moves (HOM) has strengthened its team with two new hires — Selma Gladney-Edelman was brought on as executive producer and Alastair Macleod as head of production technology. The two industry vets are coming on board as the studio shifts to offer more custom short- and long-form content, and expands its motion capture technology workflows to its television, feature film, video game and corporate clients.

Selma Gladney-Edelman was most recently VP of Marvel Television for its primetime and animated series. She has worked in film production, animation and visual effects, and was a producer on multiple episodic series at Walt Disney Television Animation, Cartoon Network and Universal Animation. As director of production management across all of the Discovery Channels, she oversaw thousands of hours of television and film programming, including the TLC projects Say Yes to the Dress, Little People, Big World and Toddlers and Tiaras, while working on the team that garnered an Oscar nom for Werner Herzog’s Encounters at the End of the World and two Emmy wins for Best Children’s Animated Series for Tutenstein.

Scotland native Alastair Macleod is a motion capture expert who has worked in production, technology development and as an animation educator. His production experience includes work on films such as Lord of the Rings: The Two Towers, The Matrix Reloaded, The Matrix Revolutions, 2012, The Twilight Saga: Breaking Dawn — Part 2 and Kubo and the Two Strings for facilities that include Laika, Image Engine, Weta Digital and others.

Macleod pioneered full body motion capture and virtual reality at the research department of Emily Carr University in Vancouver. He was also the head of animation at Vancouver Film School and an instructor at Capilano University in Vancouver. Additionally, he developed PeelSolve, a motion capture solver plug-in for Autodesk Maya.

Storage in the Studio: VFX Studios

By Karen Maierhofer

It takes talent and the right tools to generate visual effects of all kinds, whether it’s building breathtaking environments, creating amazing creatures or crafting lifelike characters cast in a major role for film, television, games or short-form projects.

Indeed, we are familiar with industry-leading content creation tools such as Autodesk’s Maya, Foundry’s Mari and more, which, when placed in the hands of creatives, result in pure digital magic. In fact, there is quite a bit of technological magic that occurs at visual effects facilities, including one kind in particular that may not have the inherent sparkle of modeling and animation tools but is just as integral to the visual effects process: storage. Storage solutions are the unsung heroes behind most projects, working behind the scenes to accommodate artists and keep their creative juices flowing.

Here we examine three VFX facilities and their use of various storage solutions and setups as they tackle projects large and small.

Framestore
Since it was founded in 1986, Framestore has placed its visual stamp on a plethora of Oscar-, Emmy- and British Academy Film Award-winning visual effects projects, including Harry Potter, Gravity and Guardians of the Galaxy. As projects multiplied, Framestore expanded from its original London location to North American locales such as Montreal, New York, Los Angeles and Chicago, handling films as well as immersive digital experiences and integrated advertisements for iconic brands, including Guinness, Geico, Coke and BMW.

Beren Lewis

As the company and its workload grew and expanded into other areas, including integrated advertising, so, too, did its storage needs. “Innovative changes, such as virtual-reality projects, brought on high demand for storage and top-tier performance,” says NYC-based Beren Lewis, CTO of advertising and applied technologies at Framestore. “The team is often required to swiftly accommodate multiple workflows, including stereoscopic 4K and VR.”

Without hesitation, Lewis believes storage is typically the most challenging aspect of technology within the VFX workflow. “If the storage isn’t working, then neither are the artists,” he points out. Furthermore, any issues with storage can potentially lead to massive financial implications for the company due to lost time and revenue.

According to Lewis, Framestore uses its storage solution, a Pixit Media PixStor General Parallel File System (GPFS) storage cluster running on NetApp E-Series hardware, for all its project data. The setup covers backups to remote co-location sites, video preprocessing, decompression and disaster recovery preparation, and provides the scalability and high performance needed for VFX, finishing and rendering workloads.

The studio moved all the integrated advertising teams over to the PixStor GPFS clusters this past spring. Currently, Framestore has five primary PixStor clusters built on NetApp E-Series hardware, one at each of its offices in London, New York, LA, Chicago and Montreal.

According to Lewis, Framestore partnered with Pixit Media and NetApp to take on increasingly complicated and resource-hungry VR projects. “This partnership has provided the global integrated advertising team with higher performance and nonstop access to data,” he says. “The Pixit Media PixStor software-defined scale-out storage solution running on NetApp E-Series systems brings fast, reliable data access for the integrated advertising division so the team can embrace performance and consistency across all five sites, take a cost-effective, simplified approach to disaster recovery and have a modular infrastructure to support multiple workflows and future expansion.”

BMW

Framestore selected its current solution after reviewing several major storage technologies. It was looking for a single namespace that was very stable, while providing great performance, but it also had to be scalable, Lewis notes. “The PixStor ticked all those boxes and provided the right balance between enterprise-grade hardware and support, and open-source standards,” he explains. “That balance allowed us to seamlessly integrate the PixStor into our network, while still maintaining many of the bespoke tools and services that we had developed in-house over the years, with minimum development time.”

In particular, the storage solution provides the required high performance so that the studio’s VFX, finishing and rendering workloads can all run “full-out with no negative effect on the finishing editors’ or graphic artists’ user experience,” Lewis says. “This is a game-changing capability for an industry that typically partitions off these three workloads to keep artists from having to halt operations. PixStor running on E-Series consolidates all three workloads onto a single IT infrastructure with streamlined end-to-end production of projects, which reduces both time to completion and operational costs, while IT acquisition and maintenance costs come down as well.”

At Framestore, integrating storage into the workflow is simple. The first step after a project is green-lit is the establishment of a new file set on the PixStor GPFS cluster, where ingested footage and all the CG artist-generated project data will live. “The PixStor is at the heart of the integrated advertising storage workflow from start to finish,” Lewis says. Because the PixStor GPFS cluster serves as the primary storage for all integrated advertising project data, the division’s workstations, renderfarm, editing and finishing stations connect to the cluster for review, generation and storage of project content.

Prior to the move to PixStor/NetApp, Framestore had been using a number of different storage offerings. According to Lewis, they all suffered from the same issues in terms of scalability and degradation of performance under render load — and that load was getting heavier and more unpredictable with every project. “We needed a technology that scaled and allowed us to maintain a single namespace but not suffer from continuous slowdowns for artists due to renderfarm load during crunch times or project delivery.”

Geico

As Lewis explains, with the PixStor/NetApp solution, processing was running at up to 270,000 IOPS (I/O operations per second), at least several times what Framestore’s previous infrastructure could have handled in a single namespace. “Notably, the development workflow for a major theme-park ride was unhindered by all the VR preprocessing, while backups to remote co-location sites synced every two hours without compromising the artist, rendering or finishing workloads,” he says. “This provided a cost-effective, simplified approach to disaster recovery, and Framestore now has a fast, tightly integrated platform to support its expansion plans.”

To stay at the top of its game, Framestore is always reviewing new technologies, and storage is often part of that conversation. To this end, the studio plans to build on the success it has had with PixStor by expanding the storage to handle some additional editorial playback and render workloads using an all-Non-Volatile Memory Express (NVMe) flash tier. Other projects include a review of object storage technology for use as a long-term, off-premises storage target for archival data.

Without question, the industry’s visual demands are rapidly changing. Not long ago, Framestore could easily predict storage and render requirements for a typical project. But that is no longer the case, and the studio finds itself working in ever-increasing resolutions and frame rates. Whereas projects may have been as small as 3TB in the recent past, nowadays the studio regularly handles multiple projects of 300TB or larger. And the storage must be shared with other projects of varying sizes and scope.

“This new ‘unknowns’ element of our workflow puts many strains on all aspects of our pipeline, but especially the storage,” Lewis points out. “Knowing that our storage can cope with the load and can scale allows us to turn our attention to the other issues that these new types of projects bring to Framestore.”

As Lewis notes, working with high-resolution images and large renderfarms creates a unique set of challenges for any storage technology, challenges not seen in many other fields. VFX work will often test a storage technology well beyond what other industries throw at it. “If there’s an issue or a break point, we will typically find it in spectacular fashion,” he adds.

Rising Sun Pictures
As a contributor to the design and execution of computer-generated effects on more than 100 feature films since its inception 22 years ago, Rising Sun Pictures (RSP) has raised the technical bar many times over in both film and television projects. Based in Adelaide, South Australia, RSP has built a top team of VFX artists who have tackled such box-office hits as Thor: Ragnarok and X-Men, the series Game of Thrones, and the Harry Potter and Hunger Games franchises.

Mark Day

Such demanding, high-level projects require demanding, high-level effects, which, in turn, demand a high-performance, reliable storage solution capable of handling varying data I/O profiles. “With more than 200 employees accessing and writing files in various formats, the need for a fast, reliable and scalable solution is paramount to business continuity,” says Mark Day, director of engineering at RSP.

Recently, RSP installed an Oracle ZS5 storage appliance to handle this important function. This high-performance, unified storage system provides NAS and SAN cloud-converged storage capabilities that enable on-premises storage to seamlessly access Oracle Public Cloud. Its advanced hardware and software architecture includes a multi-threaded SMP storage operating system for running multiple workloads and advanced data services without performance degradation. The offering also caches data on DRAM or flash for optimal performance and efficiency, while keeping data safely stored on high-capacity SSD (solid-state drive) or HDD (hard disk drive) storage.

Previously, the studio had been using a Dell EMC Isilon storage cluster with Avere caching appliances, and it still employs that solution for parts of its workflow.

When it came time to upgrade to handle RSP’s increased workload, the facility ran a proof of concept with multiple vendors in September 2016 and benchmarked their systems. Impressed with Oracle, RSP began installation in early 2017. According to Day, RSP liked the solution’s ability to support larger packet sizes, now up to 1MB. In addition, he says, its “exceptional” analytics engine gives introspection into a render job.

“It has a very appealing [total cost of ownership], and it has caching right out of the box, removing the need for additional caching appliances,” says Day. Storage is at the center of RSP’s workflow, storing all the relevant information for every department — from live-action plates that are turned over from clients, scene setup files and multi-terabyte cache files to iterations of the final product. “All employees work off this storage, and it needs to accommodate the needs of multiple projects and deadlines with zero downtime,” Day adds.

Machine Room

“Visual effects scenes are getting more complex, and in turn, data sizes are increasing. Working in 4K quadruples file sizes and, therefore, impacts storage performance,” explains Day. “We needed a solution that could cope with these requirements and future trends in the industry.”

According to Day, the data RSP deals with is broad, from small setup files to terabyte geocache files. A one-minute 2K DPX sequence is 17GB for the final pass, while 4K is 68GB. “Keep in mind this is only the final pass; a single shot could include hundreds of passes for a heavy computer-generated sequence,” he points out.
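Those figures hold up to a quick back-of-envelope check. Assuming full-aperture 2K DPX frames (2048×1556 pixels, 10-bit packed into 4 bytes per pixel) at 24fps:

    frame_bytes = 2048 * 1556 * 4            # ~12.7MB per 2K DPX frame
    minute_bytes = frame_bytes * 24 * 60     # 1,440 frames in one minute
    print(f"2K: {minute_bytes / 2**30:.1f} GiB")      # ~17.1
    print(f"4K: {minute_bytes * 4 / 2**30:.1f} GiB")  # ~68.4 (4x the pixels)

Read as binary gigabytes, that lands almost exactly on the quoted 17GB and 68GB.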

Thus, high-performance storage is important to the effective operation of a visual effects company like RSP. In fact, storage helps the artists stay on the creative edge by enabling them to iterate through the creative process of crafting a shot and a look. “Artists are required to iterate their creative process many times to perfect the look of a shot, and if they experience slowdowns when loading scenes, this can have a dramatic effect on how many iterations they can produce. And in turn, this affects employees’ efficiency and, ultimately, the profitability of the company,” says Day.

Thor: Ragnarok

Most recently, RSP used its new storage solution for work on the blockbuster Thor: Ragnarok, in particular for the Val’s Flashback sequence, which was extremely complex and involved extensive lighting and texture data, as well as high-frame-rate plates (sometimes more than 1,000fps across multiple live-action footage plates). “Before our storage refresh, early versions of this shot could take up to 24 hours to render on our server farm. But since installing our new storage, we saw this drastically reduced to six hours. That’s a 4x improvement, which is a fantastic outcome,” says Day.

Outpost VFX
A full-service VFX studio for film, broadcast and commercials, Outpost VFX, based in Bournemouth, England, has been operational since late 2012. Since that time, the facility has been growing by leaps and bounds, taking on major projects, including Life, Nocturnal Animals, Jason Bourne and 47 Meters Down.

Paul Francis

Due to this fairly rapid expansion, Outpost VFX has needed ever more storage capacity. “As the company grows, and as resolution increases and HDR comes in, file sizes increase, and we need much more capacity to deal with that effectively,” says CTO Paul Francis.

When setting up the facility five years ago, the decision was made to go with PixStor from Pixit Media and Synology’s NAS for its storage solution. “It’s an industry-recognized solution that is extremely resilient to errors. It’s fast, robust and the team at Pixit provides excellent support, which is important to us,” says Francis.

Foremost, the solution had to provide high capacity and high speeds. “We need lots of simultaneous connections to avoid bottlenecks and ensure speedy delivery of data,” Francis adds. “This is the only one we’ve used, really. It has proved to be stable enough to support us through our growth over the last couple of years — growth that has included a physical office move and an increase in artist capacity to 80 seats.”

Outpost VFX mainly works with image data and project files for use with Autodesk’s Maya, Foundry’s Nuke, Side Effects’ Houdini and other VFX and animation tools. The challenge this presents is twofold: very large files on one side, and masses of small files, such as metadata, on the other. Francis explains: “Sequentially loading small files can be time-consuming due to the current technology, so moving to something that can handle both of these areas will be of great benefit to us.”
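The small-file problem Francis describes is easy to reproduce: when thousands of tiny files are touched one after another, per-file metadata overhead dominates, no matter how fast the array streams large frames. A self-contained Python sketch (the /tmp path is purely illustrative; on network storage, each stat becomes a full round-trip):

    import os
    import time

    def touch_files(root, count):
        # Create a pile of 1-byte files to stand in for metadata/sidecars.
        os.makedirs(root, exist_ok=True)
        for i in range(count):
            with open(os.path.join(root, f"meta_{i}.txt"), "w") as f:
                f.write("x")

    def stat_all(root):
        # Sequentially hit every file: one metadata operation per file.
        start = time.perf_counter()
        for name in os.listdir(root):
            os.stat(os.path.join(root, name))
        return time.perf_counter() - start

    touch_files("/tmp/smallfile_demo", 10_000)
    print(f"10,000 sequential stats: {stat_all('/tmp/smallfile_demo'):.2f}s")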

Locally, artists use a mix of HDDs from a number of manufacturers to store reference imagery and the like; older-generation PCs mostly have Western Digital HDDs, while newer PCs have generic SSDs. When replacing or upgrading equipment, Outpost VFX opts for Samsung 900 Series SSDs, depending on the required performance and current market prices.

Life

Like many facilities, Outpost VFX is always weighing its options when it comes to finding the best solution for its current and future needs. Presently, it is looking at splitting up some of its storage solutions into smaller segments for greater resilience. “When you only have one storage solution and it fails, everything goes down. We’re looking to break our setup into smaller, faster solutions,” says Francis.

Additionally, security is a concern for Outpost VFX when it comes to its clients. According to Francis, certain shows need to be annexed, meaning the studio will need a separate storage solution outside of its main network to handle that data.

When Outpost VFX begins a job, the group ingests all the plates it needs to work on; these reside in a new job folder created by production and assigned to a specific drive for active jobs. This folder then becomes the go-to for all assets, elements and shot iterations created throughout the production. For security purposes, these areas of the server are visible to and accessible by artists only, and those artists cannot access the Internet; this ensures that the files are “watertight and immune to leaks,” says Francis. He adds that with PixStor, the studio is able to set up different partitions for different areas that artists can jump between easily.
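As a rough picture of that kind of ingest scaffold, here is a hypothetical Python sketch; the directory names and permission scheme are invented for illustration and are not Outpost VFX’s actual structure.

    import os

    # Hypothetical job-folder layout: one root per job on the active-jobs
    # drive, with separate areas for ingested plates and artist output.
    JOB_SUBDIRS = ["plates", "assets", "elements", "shots", "deliveries"]

    def create_job(active_root, job_name):
        job_root = os.path.join(active_root, job_name)
        for sub in JOB_SUBDIRS:
            os.makedirs(os.path.join(job_root, sub), exist_ok=True)
        # Group-only access (no world permissions), mirroring the
        # "visible only to artists" policy described above.
        os.chmod(job_root, 0o770)
        return job_root

    print(create_job("/mnt/active_jobs", "JOB_0423_demo"))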

How important is storage to Outpost VFX? “Frankly, there’d be no operation without storage!” Francis says emphatically. “We deal with hundreds of terabytes of data in visual effects, so having high-capacity, reliable storage available to us at all times is absolutely essential to ensure a smooth and successful operation.”

47 Meters Down

Because the studio delivers visual effects across film, TV and commercials simultaneously, storage is an important factor no matter what the crew is working on. A recent film project like 47 Meters Down required the full gamut of visual effects work, as Outpost VFX was the sole vendor on the project. The studio needed the space and responsiveness of a storage system that enabled it to deliver more than 420 shots, a number of which featured heavy 3D builds and multiple layers of render elements.

“We had only about 30 artists at that point, so having a stable solution that was easy for our team to navigate and use was crucial,” Francis points out.

Main Image: From Outpost VFX’s Domestos commercial out of agency MullenLowe London.

Saddington Baynes adds senior lighting artist Luis Cardoso

Creative production house Saddington Baynes has hired Luis Cardoso as a senior lighting artist, adding to the studio’s creative team with specialist CGI skills in luxury goods, beauty and cosmetics. He joins the team following a four-year stint at Burberry, where he worked on high-end CGI.

He specializes in Autodesk 3ds Max, Chaos Group’s V-Ray and Adobe Photoshop. Cardoso’s past work includes imagery for all Burberry fragrances, clothing and accessories, as well as social media assets for the Pinterest Cat Lashes campaign. He also has experience as a senior CG artist at Sectorlight and, later in his career, at Assembly Studios.

At Saddington Baynes, Cardoso will work on new cinematic motion sequences for online video, expanding the beauty, fragrance, fashion and beverage departments’ capabilities, particularly in regard to lighting for video.

According to executive creative director James Digby-Jones, “It no longer matters whether elements are static or moving; whether the brief is for a 20,000-pixel image or 4K animation mixed with live action. We stretch creative and technical boundaries with fully integrated production that encompasses everything from CGI and motion to shoot production and VR capability.”