
Making an animated series with Adobe Character Animator

By Mike McCarthy

In a departure from my normal film production technology focus, I have also been working on an animated web series called Grounds of Freedom. Over the past year I have been directing the effort and working with a team of people across the country who are helping in various ways. After a year of meetings, experimentation and work, we finally started releasing finished episodes on YouTube.

The show takes place in Grounds of Freedom, a coffee shop where a variety of animated mini-figures gather to discuss freedom and its application to present-day cultural issues and events. The show is created with a workflow that weaves through a variety of Adobe Creative Cloud apps. Back in October, I presented our workflow at Adobe MAX in LA, and I wanted to share it with postPerspective’s readers as well.

When we first started planning for the series, we considered using live action. Ultimately, after being inspired by the preview releases of Adobe Character Animator, I decided to pursue a new digital approach to brick filming (making films with Lego figures), which is traditionally accomplished through stop-motion animation. Once everyone else realized the simpler workflow possibilities and increased level of creative control offered by that new animation process, they were excited to pioneer this new approach. Animation gives us more control and flexibility over the message and dialog, lowers production costs and eases collaboration over long distances, as there is no “source footage” to share.

Creating the Characters
The biggest challenge in using Character Animator is creating the digital puppets, which are deeply layered Photoshop PSDs with very precise layer naming and stacking. There are ways to generate the underlying source imagery in 3D animation programs, but I wanted the realism and authenticity of sourcing from actual photographs of the models and figures. So we took lots of 5K macro shots of our sets and characters in various positions with our Canon 60D and 70D DSLRs and cut out hundreds of layers of content in Photoshop to create our characters and all of their various possible body positions. The only elements that were synthetically generated were the facial expressions, digitally painted onto the characters’ clean yellow heads, usually to match an existing physical reference character face.

Mike McCarthy shooting stills.

Once we had our source imagery organized into huge PSDs, we rigged those puppets in Character Animator with various triggers, behaviors and controls. The walking was accomplished by cycling through various layers, instead of the default bending of the leg elements. We created arm movement by mapping each arm position to a MIDI key. We controlled facial expressions and head movement via webcam, and the mouth positions were calculated by the program based on the accompanying audio dialog.
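Character Animator handles that MIDI mapping internally through its rigging panels, so no scripting was involved in our workflow. Purely as an illustration of the underlying idea, here is a rough Python sketch of how incoming MIDI notes could be translated into named arm-position triggers; the note numbers, trigger names and the use of the third-party mido library are all assumptions for the example, not part of our actual pipeline.

# Hypothetical sketch: map incoming MIDI note numbers to named arm-position triggers.
# Character Animator does this mapping internally; this only illustrates the concept.
# Requires the third-party "mido" library plus a MIDI backend such as python-rtmidi.
import mido

# Made-up mapping of MIDI notes to trigger names (not our real layer names)
ARM_TRIGGERS = {
    60: "LeftArm_Down",
    61: "LeftArm_Raised",
    62: "LeftArm_Wave",
    63: "RightArm_Down",
    64: "RightArm_Raised",
}

def listen_for_triggers(port_name=None):
    # Open the default MIDI input (or a named port) and print mapped triggers.
    with mido.open_input(port_name) as port:
        for msg in port:
            if msg.type == "note_on" and msg.velocity > 0:
                trigger = ARM_TRIGGERS.get(msg.note)
                if trigger:
                    print("Fire trigger:", trigger)

if __name__ == "__main__":
    listen_for_triggers()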

Animating Digital Puppets
The puppets had to be finished and fully functional before we could start animating on the digital stages we had created. We had been writing the scripts during that time, parallel to generating the puppet art, so we were ready to record the dialog by the time the puppets were finished. We initially attempted to record live in Character Animator while capturing the animation motions as well, but Character Animator didn’t offer the level of audio editing functionality we needed. So during that first session we switched over to Adobe Audition, and planned to animate as a separate process once the audio was edited.

That whole idea of live capturing audio and facial animation data is laughable now, looking back, since we usually spend a week editing the dialog before we do any animating. We edited each character’s audio on a separate track and exported those separate tracks to Character Animator. We computed lipsync for each puppet based on its dedicated dialog track and usually exported immediately. This provided a draft visual that allowed us to continue editing the dialog within Premiere Pro. Having a visual reference makes a big difference when trying to determine how a conversation will feel, so that was an important step, even though we had to throw away our previous work in Character Animator once we made significant edit changes that altered sync.

We repeated the process once we had a more final edit. We carried on from there in Character Animator, recording arm and leg motions with the MIDI keyboard in realtime for each character. Once those trigger layers had been cleaned up and refined, we recorded the facial expressions, head positions and eye gaze in a single pass on the webcam. Every re-record to alter a particular section adds a layer to the already complicated timeline, so we limited that as much as possible, usually re-recording a full pass rather than making quick patch fixes unless we were nearly finished.

Compositing the Characters Together
Once we had fully animated scenes in Character Animator, we would turn off the background elements and isolate each character layer to be exported in Media Encoder via Dynamic Link. I did a lot of testing before settling on JPEG2000 MXF as the format of choice. I wanted a highly compressed file but needed alpha channel support, and that was the best option available. Each of those renders became a character layer, which was composited into our stage layers in After Effects. We could have dynamically linked the characters directly into AE, but with that many layers that would have decreased performance for the interactive part of the compositing work. We added shadows and reflections in AE, as well as various other effects.
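For readers unfamiliar with why the alpha channel mattered so much, the compositing itself comes down to the standard “over” operation: each character pixel is blended onto the stage plate according to its transparency. Here is a minimal Python/NumPy sketch of that math, purely as a generic illustration and not a description of After Effects’ internals; the pixel values are made up.

# Minimal sketch of an "over" composite: foreground over background using an alpha matte.
# Generic compositing math for illustration only, not After Effects' implementation.
import numpy as np

def comp_over(fg_rgb, fg_alpha, bg_rgb):
    # Blend foreground onto background; alpha is 0-1, broadcast across RGB channels.
    a = fg_alpha[..., None]
    return fg_rgb * a + bg_rgb * (1.0 - a)

# Tiny 2x2 example: opaque character pixels keep their color,
# fully transparent pixels let the stage plate show through.
fg = np.array([[[1.0, 0.8, 0.0], [1.0, 0.8, 0.0]],
               [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]])
alpha = np.array([[1.0, 1.0],
                  [0.0, 0.0]])
bg = np.full((2, 2, 3), 0.3)  # flat gray stage plate
print(comp_over(fg, alpha, bg))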

Walking was one of the most challenging effects to properly recreate digitally. Our layer cycling in Character Animator resulted in a static figure swinging its legs, but people (and mini-figures) have a bounce to their step and move forward at an uneven rate as they take steps. With some pixel measurement and analysis, I was able to use anchor point keyframes in After Effects to get a repeating movement cycle that made the character appear to be walking on a treadmill.

I then used carefully calculated position keyframes to add the appropriate amount of travel per frame for the feet to stick to the ground, which varies based on the scale as the character moves toward the camera. (In my case the velocity was half the scale value, in pixels per second.) We then duplicated that layer to create the reflection and shadow of the character as well. That result could then be composited onto various digital stages. In our case, the first two shots of the intro were designed to use the same walk animation with different background images.
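To make that rule concrete, here is a small Python sketch of the arithmetic behind those position keyframes, assuming exactly the relationship described above (horizontal speed equal to half the layer’s scale value, in pixels per second); the frame rate, scale ramp and starting position are placeholder values, not numbers from the actual project.

# Sketch of the walk-travel arithmetic described above.
# Assumption from the text: horizontal speed = scale / 2, in pixels per second,
# so a character scaled larger (closer to camera) covers more ground per frame.
FRAME_RATE = 24.0  # placeholder frame rate, not necessarily the project's

def travel_per_frame(scale_percent):
    # Pixels of horizontal travel per frame for a layer at the given scale (%).
    pixels_per_second = scale_percent / 2.0
    return pixels_per_second / FRAME_RATE

def walk_positions(start_x, scales):
    # Accumulate an x position per frame from a per-frame list of scale values.
    x, positions = start_x, []
    for scale in scales:
        x += travel_per_frame(scale)
        positions.append(x)
    return positions

# Example: a character walking toward camera, scale ramping from 50% to 80%
scales = [50 + i * 0.5 for i in range(61)]
print(walk_positions(1000.0, scales)[:5])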

All of the character layers were pre-comped, so we only needed to update a single location when a new version of a character was rendered out of Media Encoder, or when we brought in a dynamically linked layer. It would propagate all the necessary comp layers to generate updated reflections and shadows. Once the main compositing work was finished, we usually only needed to make slight changes in each scene between episodes. These scenes were composited at 5K, based on the resolution of the DSLR photos of the sets we had built. These 5K plates could be dynamically linked directly into Premiere Pro and occasionally used later in the process to ripple slight changes through the workflow. For the interactive work, we got far better editing performance by rendering out flattened files. We started with DNxHR 5K assets, but eventually switched to HEVC files since they were 30x smaller and imperceptibly different in quality with our relatively static animated content.

Editing the Animated Scenes
In Premiere Pro, we had the original audio edit, and usually a draft render of the characters with just their mouths moving. Once we had the plate renders, we placed them each in their own 5K scene sub-sequence and used those sequences as source on our master timeline. This allowed us to easily update the content when new renders were available, or source from dynamically linked layers instead if needed. Our master timeline was 1080p, so with 5K source content we could push in two and a half times the frame size without losing resolution. This allowed us to digitally frame every shot, usually based on one of two rendered angles, and gave us lots of flexibility all the way to the end of the editing process.
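As a quick sanity check on that headroom, the maximum punch-in factor is simply the source width divided by the timeline width. The short Python calculation below assumes a 5120-pixel “5K” plate, which is an approximation rather than the project’s exact plate size, so the result lands in the same ballpark as the figure mentioned above.

# Rough punch-in headroom check for a 5K plate in a 1080p timeline.
# The 5120-pixel source width is an assumed "5K" value, not the exact plate size.
source_width = 5120    # assumed 5K plate width in pixels
timeline_width = 1920  # 1080p timeline width in pixels

max_punch_in = source_width / timeline_width
print("Maximum punch-in without upscaling: %.2fx" % max_punch_in)  # about 2.67x here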

Collaborative Benefits of Dynamic Link
While Dynamic Link doesn’t offer the best playback performance without making temp renders, it does have two major benefits in this workflow. It ripples changes to the source PSD all the way to the final edit in Premiere just by bringing each app into focus once. (I added a name tag to one character’s PSD during my presentation, and 10 seconds later it was visible throughout my final edit.) Even more importantly, it allows us to collaborate online without having to share any exported video assets. As long as each member of the team has the source PSD artwork and audio files, all we have to exchange online are the Character Animator project (which is small once the temp files are removed), the .AEP file and the .PrProj file.

This gives any of us the option to render full-quality visual assets anytime we need them, but the work we do on those assets is all contained within the project files that we sync to each other. The coffee shop was built and shot in Idaho, our voice artist was in Florida and our puppets’ faces were created in LA. I animate and edit in Northern California, the AE compositing was done in LA and the audio is mixed in New Jersey. We did all of that with nothing but a Dropbox account, using the workflow I have just outlined.

Past that point, it was a fairly traditional finish: we edited in music and sound effects, and sent an OMF to Steve, our sound guy at DAWPro Studios (http://dawpro.com/photo_gallery.html), for the final mix. During that time we added b-roll visuals and other effects, and once we had the final audio back we rendered the final result to H.264 at 1080p and uploaded it to YouTube.


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Made in NY’s free post training program continues in 2018

New York City’s post production industry continues to grow thanks to the creation of New York State’s Post Production Film Tax Credit, which was established in 2010. Since then, over 1,000 productions have applied for the credit, creating almost a million new jobs.

“While this creates more pathways for New York City residents to get into the industry, there is evidence that this growth is not equally distributed among women and people of color. In response to this need, the NYC Mayor’s Office of Media and Entertainment decided to create the Made in New York Post Production Training Program, which built on the success of the Made in New York PA Training Program, which for the last 11 years has trained over 700 production assistants for work on TV and film sets,” explains Ryan Penny, program director of the Made In NY Post Production Training Program.

The Post Production Training Program seeks to diversify New York’s post industry by training low-income and unemployed New Yorkers in the basics of editing, animation and visual effects. Created in partnership with the Blue Collar Post Collective, BRIC Media Arts and Borough of Manhattan Community College, the course is free to participants and consists of a five-week, full-time skills training and job placement program administered by workforce development non-profit Brooklyn Workforce Innovations.

Trainees take part in classroom training covering the history and theory of post production, as well as technical training in Avid Media Composer, Adobe’s Premiere, After Effects and Photoshop, and Foundry’s Nuke. “Upon successful completion of the training, our staff will work with graduates to identify job opportunities for a period of two years,” says Penny.

Ryan Penny, far left, with the most recent graduating class.

Launched in June 2017, the Made in New York Post Production Training Program graduated its second cycle of trainees in January 2018 and is now busy establishing partnerships with New York City post houses and productions who are interested in hiring graduates of the program as post PAs, receptionists, client service representatives, media management technicians and more.

“Employers can expect entry-level employees who are passionate about post and hungry to continue learning on the job,” reports Penny. “As an added incentive, the city has created a work-based learning program specifically for MiNY Post graduates, which allows qualified employers to be reimbursed for up to 80% of the first 280 hours of a trainee’s wages. This results in a win-win for employers and employees alike.”

The Made in New York Post Production Training Program will be conducting further cycles throughout the year, beginning with Cycle 3 planned for spring 2018. More information on the program and how to hire program graduates can be found here.

ESPN’s NBA coverage gets a rebrand

The bi-coastal studio Big Block recently collaborated with ESPN to develop, design and animate a rebrand package that promotes the network’s NBA coverage. Over nearly a year of design development, the studio’s role expanded beyond that of a simple production partner, with Big Block executive creative director Curtis Doss and managing director Kenny Solomon leading the charge.

The package, which features a rich palette of textures and fluid elegance, was designed to reflect the style of the NBA. Additionally, Big Block embedded what they call “visual touchstones” to put the spotlight on the stars of the show — the NBA players, the NBA teams and the redesigned NBA and ESPN co-branded logo.

Big Block and ESPN’s creative teams — which included senior coordinating producer for the NBA on ESPN Tim Corrigan — collaborated closely on the logos. The NBA’s was reconfigured and simplified, allowing it to combine with ESPN’s as well as support the iconic silhouette of Jerry West as the centerpiece of the new creation.

Next, the team worked on taking the unique branding and colors of each NBA team and using them as focal points within the broadcasts. Team logos were assembled, rendered and given textures and fast-moving action, providing the broadcast with a high-end look that Big Block and ESPN feel matches the face of the league itself.

Big Block provided ESPN with a complete toolkit for the integration of live game footage with team logos, supers, buttons and transitions, as well as team and player-based information like player comparisons and starting lineups. The materials were designed to be visually cohesive between ESPN’s pre-show, game and post-show broadcasts, with Big Block crafting high-end solutions to keep the sophisticated look and feel consistent across the board.

When asked if working with such iconic logos added some challenges to the project, Doss said, “It definitely adds pressure anytime you’re combining multiple brands; however, it was not the first time ESPN and the NBA have collaborated, obviously. I will say that there were needs unique to each brand that we absolutely had to consider. This did take us down many paths during the design process, but we feel that the result is a very strong marriage of the two icons that both benefit from a brand perspective.”

In terms of tools, the studio called on Adobe’s Creative Suite and Maxon Cinema 4D. Final renders were done in Cinema 4D’s Physical Render.

Bluefish444 offers new SDI features for Adobe CC, Media Composer 8, Scratch 8

Over the past week or so, Bluefish444 has made multiple announcements focused on an updated Windows Installer (V5.12.0) for its SDI input/output cards, specifically for the partner products Assimilate Scratch 8, Adobe Creative Cloud 2014 and Avid Media Composer 8.


Bluefish444 has released Windows Installer 5.12.0, a Windows 7/8 driver compatible with the Epoch|4K Supernova and Epoch|Supernova S+ cards, supporting new high-frame-rate YUV SDI output from Assimilate Scratch 8 software.

New support includes 4K SDI output at 4096×2160 at 60fps, as well as much-anticipated 2K/4K SDI output at 48fps.

Bluefish444’s 5.12.0 Windows 7/8 installer support for Adobe Creative Cloud 2014 includes compatibility with all Create, Epoch and Epoch|4K Supernova video cards.

This free release also provides added functionality for live capture of 4K SDI as 4096×2160 at 60fps from digital cinema cameras using Epoch|4K Supernova and Adobe Premiere Pro CC 2014. 

The 5.12.0 Windows 7/8 installer is a major update for Adobe After Effects CC 2014 users, adding 4K/2K/HD/SD RGB/YUV SDI output, full support for Adobe Mercury Transmit and audio monitoring through ASIO 64.

“Bluefish444 has lifted the bar for 4K SDI high frame rate workflows with new 4K 60p SDI capture through Adobe Premiere Pro, 4K 60p SDI preview through Adobe After Effects and Assimilate Scratch 8,” says Tom Lithgow, Bluefish444 product specialist. “Bluefish444 is committed to offering our customer base the full gamut of 4K 60p workflow options and our new Windows 7/8 installer extends that support to Bluefish444 Adobe and Assimilate customers.”

Windows Installer 5.12.0 is also compatible with all Bluefish444 Epoch hardware and the Create|3D Ultra, supporting dedicated HD 1080p30 YUV/RGB SDI I/O for Avid Media Composer 8 and Avid Symphony.

The new installer is freely available for all Bluefish444 Epoch and Create customers from the Bluefish444 homepage and is compatible with the complementary Bluefish444 Symmetry and Bluefish444 Fluid applications: IngeStore and DNxHD IngeStore.