
Making an animated series with Adobe Character Animator

By Mike McCarthy

In a departure from my normal film production technology focus, I have also been working on an animated web series called Grounds of Freedom. Over the past year I have been directing the effort and working with a team of people across the country who are helping in various ways. After a year of meetings, experimentation and work, we finally started releasing finished episodes on YouTube.

The show takes place in Grounds of Freedom, a coffee shop where a variety of animated mini-figures gather to discuss freedom and its application to present-day cultural issues and events. The show is created with a workflow that weaves through a variety of Adobe Creative Cloud apps. Back in October I presented our workflow during Adobe Max in LA, and I wanted to share it with postPerspective’s readers as well.

When we first started planning for the series, we considered using live action. Ultimately, after being inspired by the preview releases of Adobe Character Animator, I decided to pursue a new digital approach to brick filming (filmmaking with Lego figures), which is traditionally accomplished through stop-motion animation. Once everyone else realized the simpler workflow possibilities and increased level of creative control offered by that new animation process, they were excited to pioneer this new approach. Animation gives us more control and flexibility over the message and dialog, lowers production costs and eases collaboration over long distances, as there is no “source footage” to share.

Creating the Characters
The biggest challenge to using Character Animator is creating digital puppets, which are deeply layered Photoshop PSDs with very precise layer naming and stacking. There are ways to generate the underlying source imagery in 3D animation programs, but I wanted the realism and authenticity of sourcing from actual photographs of the models and figures. So we took lots of 5K macro shots of our sets and characters in various positions with our Canon 60D and 70D DSLRs and cut out hundreds of layers of content in Photoshop to create our characters and all of their various possible body positions. The only synthetically generated elements were the various facial expressions, digitally painted onto the characters’ clean yellow heads, usually to match an existing physical reference character face.

Mike McCarthy shooting stills.

Once we had our source imagery organized into huge PSDs, we rigged those puppets in Character Animator with various triggers, behaviors and controls. The walking was accomplished by cycling through various layers, instead of the default bending of the leg elements. We created arm movement by mapping each arm position to a MIDI key. We controlled facial expressions and head movement via webcam, and the mouth positions were calculated by the program based on the accompanying audio dialog.

Animating Digital Puppets
The puppets had to be finished and fully functional before we could start animating on the digital stages we had created. We had been writing the scripts during that time, parallel to generating the puppet art, so we were ready to record the dialog by the time the puppets were finished. We initially attempted to record live in Character Animator while capturing the animation motions as well, but Character Animator didn’t offer the level of audio editing functionality we needed. So during that first session, we switched over to Adobe Audition and planned to animate as a separate process once the audio was edited.

That whole idea of live-capturing audio and facial animation data is laughable now, looking back, since we usually spend a week editing the dialog before we do any animating. We edited each character’s audio on a separate track and exported those separate tracks to Character Animator. We computed lipsync for each puppet based on their dedicated dialog track and usually exported immediately. This provided a draft visual that allowed us to continue editing the dialog within Premiere Pro. Having a visual reference makes a big difference when trying to determine how a conversation will feel, so that was an important step — even though we had to throw away our previous work in Character Animator once we made significant edit changes that altered sync.

We repeated the process once we had a more final edit. We carried on from there in Character Animator, recording arm and leg motions with the MIDI keyboard in realtime for each character. Once those trigger layers had been cleaned up and refined, we recorded the facial expressions, head positions and eye gaze with a single pass on the webcam. Every re-record to alter a particular section adds a layer to the already complicated timeline, so we limited that as much as possible, usually re-recording instead of making quick fixes unless we were nearly finished.

Compositing the Characters Together
Once we had fully animated scenes in Character Animator, we would turn off the background elements and isolate each character layer to be exported in Media Encoder via dynamic link. I did a lot of testing before settling on JPEG2000 MXF as the format of choice. I wanted a highly compressed file, but needed alpha channel support, and that was the best option available. Each of those renders became a character layer, which was composited into our stage layers in After Effects. We could have dynamically linked the characters directly into AE, but with that many layers that would decrease performance for the interactive part of the compositing work. We added shadows and reflections in AE, as well as various other effects.

Walking was one of the most challenging effects to properly recreate digitally. Our layer cycling in Character Animator resulted in a static figure swinging its legs, but people (and mini figures) have a bounce to their step, and move forward at an uneven rate as they take steps. With some pixel measurement and analysis, I was able to use anchor point keyframes in After Effects to get a repeating movement cycle that made the character appear to be walking on a treadmill.

I then used carefully calculated position keyframes to add the appropriate amount of travel per frame for the feet to stick to the ground, which varies based on the scale as the character moves toward the camera. (In my case the velocity was half the scale value, in pixels per second.) We then duplicated that layer to create the reflection and shadow of the character as well. That result can then be composited onto various digital stages. In our case, the first two shots of the intro were designed to use the same walk animation with different background images.
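The travel arithmetic described above can be sketched as a quick calculation. This is only an illustration of the relationship from the text (velocity equals half the scale value, in pixels per second); the function name, frame rate and sample scale values are hypothetical, and nothing here is an actual After Effects API.

```python
# Sketch of the walk-travel math: how far a walking character should
# move per frame so its feet appear to stick to the ground.
# Relationship from the article: velocity (px/s) = scale value / 2.
# Sample numbers and the 24fps default are assumptions for illustration.

def travel_per_frame(scale_percent: float, fps: float = 24.0) -> float:
    """Horizontal travel in pixels per frame for a walking character."""
    velocity_px_per_sec = scale_percent / 2.0
    return velocity_px_per_sec / fps

# A character at 100% scale moves 50 px/s, about 2.08 px per frame at
# 24fps; as it scales up toward the camera, per-frame travel grows too.
print(travel_per_frame(100.0))
print(travel_per_frame(150.0))
```

In practice this kind of per-frame travel would be keyed as position keyframes in After Effects, scaling up as the character approaches the camera.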

All of the character layers were pre-comped, so we only needed to update a single location when a new version of a character was rendered out of Media Encoder, or when we brought in a dynamically linked layer. It would propagate all the necessary comp layers to generate updated reflections and shadows. Once the main compositing work was finished, we usually only needed to make slight changes in each scene between episodes. These scenes were composited at 5K, based on the resolution of the DSLR photos of the sets we had built. These 5K plates could be dynamically linked directly into Premiere Pro, and occasionally used later in the process to ripple slight changes through the workflow. For the interactive work, we got far better editing performance by rendering out flattened files. We started with DNxHR 5K assets, but eventually switched to HEVC files since they were 30x smaller and imperceptibly different in quality with our relatively static animated content.

Editing the Animated Scenes
In Premiere Pro, we had the original audio edit, and usually a draft render of the characters with just their mouths moving. Once we had the plate renders, we placed them each in their own 5K scene sub-sequence and used those sequences as source on our master timeline. This allowed us to easily update the content when new renders were available, or source from dynamically linked layers instead if needed. Our master timeline was 1080p, so with 5K source content we could push in two and a half times the frame size without losing resolution. This allowed us to digitally frame every shot, usually based on one of two rendered angles, and gave us lots of flexibility all the way to the end of the editing process.
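The reframing headroom mentioned above is just the ratio of the plate width to the timeline width. A minimal sketch, assuming a roughly 5120-pixel-wide 5K plate (the exact figure depends on the camera's still resolution) on a 1920-pixel 1080p timeline:

```python
# Rough check of the digital reframing headroom: how far you can punch
# in to a larger plate on a smaller timeline before the crop drops
# below 1:1 pixels and needs upscaling. The 5120px plate width is an
# assumption; "5K" formats vary.

def max_punch_in(source_width: int, timeline_width: int) -> float:
    """Maximum scale factor before the cropped region must be upscaled."""
    return source_width / timeline_width

# A ~5120px plate on a 1920px (1080p) timeline gives roughly the
# "two and a half times" of headroom described above.
print(max_punch_in(5120, 1920))
```

This is why every shot could be digitally framed in the edit without resolution loss, right up to the end of the process.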

Collaborative Benefits of Dynamic Link
While Dynamic Link doesn’t offer the best playback performance without making temp renders, it does have two major benefits in this workflow. It ripples changes to the source PSD all the way to the final edit in Premiere just by bringing each app into focus once. (I added a name tag to one character’s PSD during my presentation, and 10 seconds later, it was visible throughout my final edit.) Even more importantly, it allows us to collaborate online without having to share any exported video assets. As long as each member of the team has the source PSD artwork and audio files, all we have to exchange online are the Character Animator project (which is small once the temp files are removed), the .AEP file and the .PrProj file.

This gives any of us the option to render full-quality visual assets anytime we need them, but the work we do on those assets is all contained within the project files that we sync to each other. The coffee shop was built and shot in Idaho; our voice artist was in Florida; our puppets’ faces were created in LA. I animate and edit in Northern California, the AE compositing was done in LA, and the audio is mixed in New Jersey. We did all of that with nothing but a Dropbox account, using the workflow I have just outlined.

Past that point, it was a fairly traditional finish, in that we edited in music and sound effects, and sent an OMF to Steve, our sound guy at DAWPro Studios (http://dawpro.com/photo_gallery.html), for the final mix. During that time we added other b-roll visuals and other effects, and once we had the final audio back we rendered the final result to H.264 at 1080p and uploaded it to YouTube.


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

DevinSuperTramp: The making of a YouTube filmmaker

Devin Graham, aka DevinSuperTramp, made the unlikely journey from BYU dropout to a viral YouTube sensation who has over five million followers. After leaving school, Graham went to Hawaii to work on a documentary. The project soon ran out of money and he was stuck on the island… feeling very much a dropout and a failure. He started making fun videos with his friends to pass the time, and DevinSuperTramp was born. Now he travels, filming his view of the world, taking on daring adventures to get his next shot, and risking life and limb.

Shooting while snowboarding behind a trackhoe with a bunch of friends for a new video.

We recently had the chance to sit down with Graham to hear firsthand what lessons he’s learned along his journey, and how he’s developed into the filmmaker he is today.

Why extreme adventure content?
I grew up in the outdoors — always hiking and camping with my dad, and snowboarding. I’ve always been intrigued by pushing human limits. One thing I love about the extreme thing is that everyone we work with is the best at what they do. Like, we had the world’s best scooter riders. I love working with people who devote their entire lives to this one skillset. You get to see that passion come through. To me, it’s super inspiring to show off their talents to the world.

How did you get DevinSuperTramp off the ground? Pun intended.
I’ve made movies ever since I can remember. I was a little kid shooting Legos and stop-motion with my siblings. In high school, I took photography classes, and after I saw the movie Jurassic Park, I was like, “I want to make movies for a living. I want to do the next Jurassic Park.” So, I went to film school. Actually, I got rejected from the film program the first time I applied, which made me volunteer for every film thing going on at the college — craft service, carrying lights, whatever I could do. One day, my roommate was like, “YouTube is going to be the next big thing for videos. You should get on that.”

And you did.
Well, I started making videos just kind of for fun, not expecting anything to happen. But it blew up. Eight years later, it’s become the YouTube channel we have now, with five million subscribers. And we get to travel around the world creating content that we love creating.

Working on a promo video for Recoil – all the effects were done practically.

And you got to bring it full circle when you worked with Universal on promoting Fallen Kingdom.
I did! That was so fun and exciting. But yeah, I was always making content. I didn’t wait ‘til after I graduated. I was constantly looking for opportunities and networking with people from the film program. I think that was a big part of (succeeding at that time), just looking for every opportunity to milk it for everything I could.

In the early days, how did you promote your work?
I was creating all my stuff on YouTube, which, at that time, had hardly any solid, quality content. There was a lot of content, but it was mostly shot on whatever smartphone people had, or it was just people blogging. There wasn’t really anything cinematic, so right away our stuff stood out. One of the first videos I ever posted ended up getting like a million views right away, and people all around the world started contacting me, saying, “Hey, Devin, I’d love for you to shoot a commercial for us.” I had these big opportunities right from the start, just by creating content with my friends and putting it out on YouTube.

Where did you get the money for equipment?
In the beginning, I didn’t even own a camera. I just borrowed some from friends. We didn’t have any fancy stuff. I was using a Canon 5D Mark II and the Canon T2i, which are fairly cheap cameras compared to what we’re using now. But I was just creating the best content I could with the resources I had, and I was able to build a company from that.

If you had to start from scratch today, do you think you could do it again?
I definitely think it’s 100 percent doable, but I would have to play the game differently. Even now we are having to play the game differently than we did six months ago. Social media is hard because it’s constantly evolving. The algorithms keep changing.

Filming in Iceland for an upcoming documentary.

What are you doing today that’s different from before?
One thing is just using trends and popular things that are going on. For example, a year and a half ago, Pokémon Go was very popular, so we did a video on Pokémon and it got 20 million views within a couple weeks. We have to be very smart about what content we put out — not just putting out content to put out content.

One thing that’s always stayed true since the beginning is consistent content. When we don’t put out a video weekly, it actually hurts how widely our content is seen. The famous people on YouTube now are the ones putting out daily content. For what we’re doing, that’s impossible, so we’ve sort of shifted platforms from YouTube, which was our bread and butter. Facebook is where we push our main content now, because Facebook doesn’t favor daily content. It just favors good-quality content.

Teens will be the first to say that grown-ups struggle with knowing what’s cool. How do you chase after topics likely to blow up?
A big one is going on YouTube and seeing what videos are trending. Also, if you go to Google Trends, it shows you the top things that were searched that day, that week, that month. So, it’s being on top of that. Or, maybe, Taylor Swift is coming out with a new album; we know that’s going to be really popular. Just staying current with all that stuff. You can also use Facebook, Twitter and Instagram to get an idea of what people are really excited about.

Can you tell us about some of the equipment you use, and the demands that your workflow puts on your storage needs?
We shoot so much content. We own two Red 8K cameras that we film everything with, and we’re shooting daily for the most part. On an average week, we’re shooting about eight terabytes, and then backing that up — so 16 terabytes a week. Obviously, we need a lot of storage, and we need storage that we can access quickly. We’re not putting it on tape. We need to pull stuff up right there and start editing on it right away.

So, we need the biggest drives that are as fast as possible. That’s why we use G-Tech’s 96TB G-Speed Shuttle XL towers. We have around 10 of those, and we’ve been shooting with those for the last three to four years. We needed something super reliable. Some of these shoots involve forking out a lot of money. I can’t take a hard drive and just hope it doesn’t fail. I need something that never fails on me — like ever. It’s just not worth taking that risk. I need a drive I can completely trust and is also super-fast.

What’s the one piece of advice that you wish somebody had given you when you were starting out?
In my early days, I didn’t have much of a budget, so I would never back up any of my footage. I was working on two really important projects and had them all on one drive. My roommate knocked that drive off the table, and I lost all that footage. It wasn’t backed up. I only had little bits and pieces still saved on the card — enough to release it, but a lot of people wanted to buy the stock footage and I didn’t have most of the original content. I lost out on a huge opportunity.

Today, we back up every single thing we do, no matter how big or how small it is. So, if I could do my early days over again, even if I didn’t have all the money to fund it, I’d figure out a way to have backup drives. That was something I had to learn the hard way.

Using humor to tell a serious story for Greenpeace

By Randi Altman

When you think of the environmental organization Greenpeace, images of people protecting whales, forests and oceans come to mind. It’s serious business… but recently the non-profit decided to extend its reach with humor.

While Greenpeace videos are well viewed, it’s mostly Greenpeace enthusiasts and activists who hit play. In order to reach a more general audience the organization turned to comedy, specifically LA-based writer/director/editor Olivier Agostini.

This filmmaker has a lot of public service work experience where he uses humor to help tell a serious story. And he’s got the awards to prove it, including a first-place finish for his film Piñata at the 2010 Rome Film Festival, as well as Emmys, a Gold Addy and a Silver Telly.

The future of post — one man’s vision, part II

By Lucas Wilson

Tremors lead to earthquakes, and the industry has felt a few… the shelves are starting to rattle. And the Big One is not far away.

There is a fundamental change happening in the minds of creators right now. It is possibly the biggest shift since the dawn of film and the ability to make still pictures appear to move.
