Tag Archives: YouTube

Posting John Krasinski’s Some Good News

By Randi Altman

Need an escape from a world filled with coronavirus and murder hornets? You should try John Krasinski’s weekly YouTube show, Some Good News. It focuses on the good things that are happening during the COVID-19 crisis, giving people a reason to smile with things such as a virtual prom, Krasinski’s chat with astronauts on the ISS and a Zoom singalong with the original Broadway cast of Hamilton.

L-R: Remy, Olivier, Josh and Lila Senior

Josh Senior, owner of Leroi and Senior Post in Dumbo, New York, is providing editing and post for SGN. His involvement began when he got a call from a mutual friend of Krasinski’s, asking if he could help put something together. They sent him clips via Dropbox, and a workflow was born.

While the show is shot at Krasinski’s house in New York at different times during the week, Senior’s Fridays, Saturdays and Sundays are spent editing and posting SGN.

In addition to his post duties, Senior is an EP on the show, along with his producing partner Evan Wolf Buxbaum at their production company, Leroi. The two work in concert with Allyson Seeger and Alexa Ginsburg, who executive produce for Krasinski’s company, Sunday Night Productions. Production meetings are held on Tuesday, and then shooting begins. After footage is captured, it’s still shared via Dropbox or good old iMessage.

Let’s find out more…

What does John use for the shoot?
John films on two iPhones. A good portion of the show is screen-recorded on Zoom, and then there’s the found-footage, user-generated content component.

What’s your process once you get the footage? And, I’m assuming, it’s probably a little challenging getting footage from different kinds of cameras?
Yes. In the alternate reality where there’s no coronavirus, we run a pretty big post house in Dumbo, Brooklyn. And none of the tools of the trade that we have there are really at play here, outside of our server, which exists as the ever-present backend for all of our remote work.

The assets are pulled down from wherever they originate. The masters are then housed behind an encrypted firewall, like we do for all of our TV shows at the post house. Our online editor is the gatekeeper. All the editors, assistant editors, producers, animators, sound folks — they all get a mirrored drive that they download, locally, and we all get to work.

Do you have a style guide?
We have a bible, which is a living document that we’ve made week over week. It has music cues, editing style, technique, structure, recurring themes, a living archive of all the notes that we’ve received and how we’ve addressed them. Also, any style that’s specific to segments, post processing, any phasing or audio adjustments that we make all live within a document, that we give to whoever we onboard to the show.

Evan Wolf Buxbaum

Our post producers made this really elegant workflow that’s a combination of Vimeo and Slack where we post project files and review links and share notes. There’s nothing formal about this show, and that’s really cool. I mean, at the same time, as we’re doing this, we’re rapidly finishing and delivering the second season of Ramy on Hulu. It comes out on May 29.

I bet that workflow is a bit different than SGN’s.
It’s like bouncing between two poles. That show has a hierarchy, it’s formalized, there’s a production company, there’s a network, there’s a lot of infrastructure. This show is created in a group text with a bunch of friends.

What are you using to edit and color Some Good News?
We edit in Adobe Premiere, and that helps mitigate some of the challenges of the mixed media that comes in. We typically color inside of Adobe, and we use Pro Tools for our sound mix. We online and deliver out of Resolve, which is pretty much how we work on most of our things. Some of our shows edit in Avid Media Composer, but on our own productions we almost always post in Premiere — so when we can control the full pipeline, we tend to prefer Adobe software.

Are reviews and approvals with John and the producers done through iMessage and Dropbox too?
Yes, and we post links on Vimeo. Thankfully, we actually produce Some Good News as well as post it, so that intersection is really fluid. With Ramy it’s a bit more formalized. We do notes together and, usually internally, we get a cut that we like. Then it goes to John, and he gives us his thoughts and we retool the edit; it’s like rapid prototyping rather than a gated milestone process. There are no network cuts or anything like that.

Joanna Naugle

For me, what’s super-interesting is that everyone’s ideas have merit and are validated. I feel like there’s nothing that you shouldn’t say, because this show has no agenda outside of making people happy, and everybody’s uniquely qualified to speak to that. With other projects, there are people who have an experience advantage, a technical advantage or some established thought leadership. Everybody knows what makes people happy. So you can make the show, I can make the show, my mom can make the show, and because of that, everything’s almost implicitly right or wrong.

Let’s talk about specific episodes, like the ones featuring the prom and Hamilton. What were some of the challenges of working with all of that footage? Maybe start with Hamilton?
That one was a really fun puzzle. My partner at Senior Post, Joanna Naugle, edited that. She drew on a lot of her experience editing music videos, performance content, comedy specials, multicam live tapings. It was a lot like a multicam live pre-taped event being put together.

We all love Hamilton, so that helps. This was a combination of performers pre-taping the entire song and a live performance. The editing technique really dissolves into the background, but it’s clear that there’s an abundance of skill that’s been brought to that. For me, that piece is a great showcase of the aesthetic of the show, which is that it should feel homemade and lo-fi, but there’s this undercurrent of a feat to the way that it’s put together.

We had to get all of those people into the Zoom, get everyone to sound right, and have the ability to emphasize or de-emphasize different faces; to restructure the grid of the Zoom, if we needed to; and to make sure that there was more than one screen’s worth of people there and that everybody was visible and audible. It took a few days, but the whole show is made from Thursday to Sunday, so that’s a limiting factor, and it’s also this great challenge. It’s like a 48-hour film festival at a really high level.

What about the prom episode?
The prom episode was fantastic. We made the music performances the day before and preloaded them into the live player so that we could cut to them during the prom. Then we got to watch the prom. To be able to participate as an audience member in the content that you’re still creating is such a unique feeling and experience. The only agenda is happiness, and people need a prom, so there’s a service aspect of it, which feels really good.

John Krasinski setting up his shot.

Any challenges?
It’s hard to put things together that are flat, and I think one of the challenges that we found at the onset was that we weren’t getting multiple takes of anything, so we weren’t getting a lot of angles to play with. Things are coming in pretty baked from a production standpoint, so we’ve had to find unique and novel ways to be nonlinear when we want to emphasize and de-emphasize certain things. We want to present things in an expositional way, which is not that common. I couldn’t even tell you another thing that we’ve worked on that didn’t have any subjectivity to it.

Let’s talk sound. Is he just picking up audio from the iPhones or is he wearing a mic?
Nope, no mic. It’s audio from the iPhones that we just run through a few filters in Pro Tools. Nobody mics themselves. We do spend a lot of time balancing out the sound, but there’s not a lot of effects work.

Other than SGN and Ramy, what are some other shows you guys have worked on?
John Mulaney & the Sack Lunch Bunch, 2 Dope Queens, Random Acts of Flyness, Julio Torres: My Favorite Shapes by Julio Torres and others.

Anything that I haven’t asked that you think is important?
It’s really important for me to acknowledge that this is something that is enabling a New York-based production company and post house to work fully remotely. In doing this week over week, we’re really honing what we think are tangible practices that we can then turn around and evangelize out to the people that we want to work with in the future.

I don’t know when we’re going to get back to the post house, so being able to work on a show like this is providing this wonderful learning opportunity for my whole team to figure out what we can modulate from our workflow in the office to be a viable partner from home.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Making an animated series with Adobe Character Animator

By Mike McCarthy

In a departure from my normal film production technology focus, I have also been working on an animated web series called Grounds of Freedom. Over the past year I have been directing the effort and working with a team of people across the country who are helping in various ways. After a year of meetings, experimentation and work, we finally started releasing finished episodes on YouTube.

The show takes place in Grounds of Freedom, a coffee shop where a variety of animated mini-figures gather to discuss freedom and its application to present-day cultural issues and events. The show is created with a workflow that weaves through a variety of Adobe Creative Cloud apps. Back in October I presented our workflow during Adobe Max in LA, and I wanted to share it with postPerspective’s readers as well.

When we first started planning for the series, we considered using live action. Ultimately, after being inspired by the preview releases of Adobe Character Animator, I decided to pursue a new digital approach to brick filming (filmmaking with Legos), which is traditionally accomplished through stop-motion animation. Once everyone else realized the simpler workflow possibilities and increased level of creative control offered by that new animation process, they were excited to pioneer this new approach. Animation gives us more control and flexibility over the message and dialog, lowers production costs and eases collaboration over long distances, as there is no “source footage” to share.

Creating the Characters
The biggest challenge to using Character Animator is creating digital puppets, which are deeply layered Photoshop PSDs with very precise layer naming and stacking. There are ways to generate the underlying source imagery in 3D animation programs, but I wanted the realism and authenticity of sourcing from actual photographs of the models and figures. So we took lots of 5K macro shots of our sets and characters in various positions with our Canon 60D and 70D DSLRs and cut out hundreds of layers of content in Photoshop to create our characters and all of their various possible body positions. The only thing that was synthetically generated was the various facial expressions digitally painted onto their clean yellow heads, usually to match an existing physical reference character face.

Mike McCarthy shooting stills.

Once we had our source imagery organized into huge PSDs, we rigged those puppets in Character Animator with various triggers, behaviors and controls. The walking was accomplished by cycling through various layers, instead of the default bending of the leg elements. We created arm movement by mapping each arm position to a MIDI key. We controlled facial expressions and head movement via webcam, and the mouth positions were calculated by the program based on the accompanying audio dialog.
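As a mental model, that arm-trigger setup can be pictured as a note-to-layer lookup. The Python sketch below is purely hypothetical: the MIDI note numbers and layer names are invented for illustration, and Character Animator manages this mapping internally through its Triggers panel rather than exposing any such API.

```python
# Hypothetical mapping of MIDI notes to puppet arm-position layers.
# Note numbers and layer names are illustrative only.
ARM_TRIGGERS = {
    60: "arm_down",     # C4
    62: "arm_forward",  # D4
    64: "arm_raised",   # E4
}

def layer_for_note(note, default="arm_down"):
    """Return the arm layer to display for an incoming MIDI note,
    falling back to a rest pose for unmapped notes."""
    return ARM_TRIGGERS.get(note, default)
```

Pressing a mapped key shows the corresponding body-position layer, while unmapped keys fall back to the rest pose.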

Animating Digital Puppets
The puppets had to be finished and fully functional before we could start animating on the digital stages we had created. We had been writing the scripts during that time, parallel to generating the puppet art, so we were ready to record the dialog by the time the puppets were finished. We initially attempted to record live in Character Animator while capturing the animation motions as well, but we didn’t have the level of audio editing functionality we needed available to us in Character Animator. So during that first session, we switched over to Adobe Audition, and planned to animate as a separate process, once the audio was edited.

That whole idea of live capturing audio and facial animation data is laughable now, looking back, since we usually spend a week editing the dialog before we do any animating. We edited each character audio on a separate track and exported those separate tracks to Character Animator. We computed lipsync for each puppet based on their dedicated dialog track and usually exported immediately. This provided a draft visual that allowed us to continue editing the dialog within Premiere Pro. Having a visual reference makes a big difference when trying to determine how a conversation will feel, so that was an important step — even though we had to throw away our previous work in Character Animator once we made significant edit changes that altered sync.

We repeated the process once we had a more final edit. We carried on from there in Character Animator, recording arm and leg motions with the MIDI keyboard in realtime for each character. Once those trigger layers had been cleaned up and refined, we recorded the facial expressions, head positions and eye gaze with a single pass on the webcam. Every re-record to alter a particular section adds a layer to the already complicated timeline, so we limited that as much as possible, usually re-recording instead of making quick fixes unless we were nearly finished.

Compositing the Characters Together
Once we had fully animated scenes in Character Animator, we would turn off the background elements and isolate each character layer to be exported in Media Encoder via Dynamic Link. I did a lot of testing before settling on JPEG2000 MXF as the format of choice. I wanted a highly compressed file but needed alpha channel support, and that was the best option available. Each of those renders became a character layer, which was composited into our stage layers in After Effects. We could have dynamically linked the characters directly into AE, but with that many layers it would have decreased performance for the interactive part of the compositing work. We added shadows and reflections in AE, as well as various other effects.

Walking was one of the most challenging effects to properly recreate digitally. Our layer cycling in Character Animator resulted in a static figure swinging its legs, but people (and mini figures) have a bounce to their step, and move forward at an uneven rate as they take steps. With some pixel measurement and analysis, I was able to use anchor point keyframes in After Effects to get a repeating movement cycle that made the character appear to be walking on a treadmill.

I then used carefully calculated position keyframes to add the appropriate amount of travel per frame for the feet to stick to the ground, which varies based on the scale as the character moves toward the camera. (In my case, the velocity was half the scale value in pixels per second.) We then duplicated that layer to create the reflection and shadow of the character as well. That result can then be composited onto various digital stages. In our case, the first two shots of the intro were designed to use the same walk animation with different background images.
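That travel math can be sketched as a back-of-envelope calculation. This is a hypothetical Python illustration assuming a 24fps timeline and taking the stated rule (velocity equals half the scale value, in pixels per second) literally; the function names are invented and this is not an After Effects API.

```python
# Sketch of the walk-travel rule: horizontal velocity is half the
# character's scale value, in pixels per second.
FPS = 24  # assumed frame rate for this sketch

def travel_per_frame(scale_percent):
    """Pixels the character should advance in a single frame."""
    velocity_px_per_s = scale_percent / 2.0
    return velocity_px_per_s / FPS

def position_keyframes(start_x, scales):
    """Accumulate X positions for a list of per-frame scale values,
    so the feet appear to stick to the ground even as the character
    scales up while walking toward the camera."""
    x = start_x
    positions = []
    for scale in scales:
        positions.append(x)
        x += travel_per_frame(scale)
    return positions

# At a constant 100% scale, the character advances 50 px per second.
xs = position_keyframes(0.0, [100] * 25)
```

With a varying scale ramp, the accumulated positions naturally speed up as the character grows toward camera, which is the effect described above.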

All of the character layers were pre-comped, so we only needed to update a single location when a new version of a character was rendered out of Media Encoder, or when we brought in a dynamically linked layer. It would propagate all the necessary comp layers to generate updated reflections and shadows. Once the main compositing work was finished, we usually only needed to make slight changes in each scene between episodes. These scenes were composited at 5K, based on the resolution of the DSLR photos of the sets we had built. These 5K plates could be dynamically linked directly into Premiere Pro, and occasionally used later in the process to ripple slight changes through the workflow. For the interactive work, we got far better editing performance by rendering out flattened files. We started with DNxHR 5K assets, but eventually switched to HEVC files since they were 30x smaller and imperceptibly different in quality with our relatively static animated content.

Editing the Animated Scenes
In Premiere Pro, we had the original audio edit, and usually a draft render of the characters with just their mouths moving. Once we had the plate renders, we placed them each in their own 5K scene sub-sequence and used those sequences as source on our master timeline. This allowed us to easily update the content when new renders were available, or source from dynamically linked layers instead if needed. Our master timeline was 1080p, so with 5K source content we could push in two and a half times the frame size without losing resolution. This allowed us to digitally frame every shot, usually based on one of two rendered angles, and gave us lots of flexibility all the way to the end of the editing process.
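The reframing headroom works out roughly as follows. This is a sketch assuming a 5120x2880 plate; the actual plate dimensions depend on the DSLR crop used.

```python
def max_punch_in(src_w, src_h, timeline_w, timeline_h):
    """Largest punch-in factor before the source would have to be
    scaled past 100% and start to soften."""
    return min(src_w / timeline_w, src_h / timeline_h)

# A 5K plate in a 1080p (1920x1080) timeline leaves roughly a 2.7x
# push-in, in line with the "two and a half times" figure above.
factor = max_punch_in(5120, 2880, 1920, 1080)
```

Any reframe below that factor is effectively a lossless crop of the oversampled plate, which is what makes the digital framing of every shot possible so late in the edit.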

Collaborative Benefits of Dynamic Link
While Dynamic Link doesn’t offer the best playback performance without making temp renders, it does have two major benefits in this workflow. It ripples changes to the source PSD all the way to the final edit in Premiere just by bringing each app into focus once. (I added a name tag to one character’s PSD during my presentation, and 10 seconds later, it was visible throughout my final edit.) Even more importantly, it allows us to collaborate online without having to share any exported video assets. As long as each member of the team has the source PSD artwork and audio files, all we have to exchange online are the Character Animator project (which is small once the temp files are removed), the .AEP file and the .PrProj file.

This gives any of us the option to render full-quality visual assets anytime we need them, but the work we do on those assets is all contained within the project files that we sync to each other. The coffee shop was built and shot in Idaho, our voice artist was in Florida, and our puppets’ faces were created in LA. I animate and edit in Northern California, the AE compositing was done in LA, and the audio is mixed in New Jersey. We did all of that with nothing but a Dropbox account, using the workflow I have just outlined.

Past that point, it was a fairly traditional finish, in that we edited in music and sound effects, and sent an OMF to Steve, our sound guy at DAWPro Studios (http://dawpro.com/photo_gallery.html), for the final mix. During that time we added other b-roll visuals or other effects, and once we had the final audio back, we rendered the final result to H.264 at 1080p and uploaded it to YouTube.


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

DevinSuperTramp: The making of a YouTube filmmaker

Devin Graham, aka DevinSuperTramp, made the unlikely journey from BYU dropout to a viral YouTube sensation who has over five million followers. After leaving school, Graham went to Hawaii to work on a documentary. The project soon ran out of money and he was stuck on the island… feeling very much a dropout and a failure. He started making fun videos with his friends to pass the time, and DevinSuperTramp was born. Now he travels, filming his view of the world, taking on daring adventures to get his next shot, and risking life and limb.

Shooting while snowboarding behind a trackhoe with a bunch of friends for a new video.

We recently had the chance to sit down with Graham to hear firsthand what lessons he’s learned along his journey, and how he’s developed into the filmmaker he is today.

Why extreme adventure content?
I grew up in the outdoors — always hiking and camping with my dad, and snowboarding. I’ve always been intrigued by pushing human limits. One thing I love about the extreme thing is that everyone we work with is the best at what they do. Like, we had the world’s best scooter riders. I love working with people who devote their entire lives to this one skillset. You get to see that passion come through. To me, it’s super inspiring to show off their talents to the world.

How did you get DevinSuperTramp off the ground? Pun intended.
I’ve made movies ever since I can remember. I was a little kid shooting Legos and stop-motion with my siblings. In high school, I took photography classes, and after I saw the movie Jurassic Park, I was like, “I want to make movies for a living. I want to do the next Jurassic Park.” So, I went to film school. Actually, I got rejected from the film program the first time I applied, which made me volunteer for every film thing going on at the college — craft service, carrying lights, whatever I could do. One day, my roommate was like, “YouTube is going to be the next big thing for videos. You should get on that.”

And you did.
Well, I started making videos just kind of for fun, not expecting anything to happen. But it blew up. Eight years later, it’s become the YouTube channel we have now, with five million subscribers. And we get to travel around the world creating content that we love creating.

Working on a promo video for Recoil – all the effects were done practically.

And you got to bring it full circle when you worked with Universal on promoting Fallen Kingdom.
I did! That was so fun and exciting. But yeah, I was always making content. I didn’t wait ‘til after I graduated. I was constantly looking for opportunities and networking with people from the film program. I think that was a big part of succeeding at that time: just looking for every opportunity and milking it for everything I could.

In the early days, how did you promote your work?
I was creating all my stuff on YouTube, which, at that time, had hardly any solid, quality content. There was a lot of content, but it was mostly shot on whatever smartphone people had, or it was just people blogging. There wasn’t really anything cinematic, so right away our stuff stood out. One of the first videos I ever posted ended up getting like a million views right away, and people all around the world started contacting me, saying, “Hey, Devin, I’d love for you to shoot a commercial for us.” I had these big opportunities right from the start, just by creating content with my friends and putting it out on YouTube.

Where did you get the money for equipment?
In the beginning, I didn’t even own a camera. I just borrowed some from friends. We didn’t have any fancy stuff. I was using a Canon 5D Mark II and the Canon T2i, which are fairly cheap cameras compared to what we’re using now. But I was just creating the best content I could with the resources I had, and I was able to build a company from that.

If you had to start from scratch today, do you think you could do it again?
I definitely think it’s 100 percent doable, but I would have to play the game differently. Even now we are having to play the game differently than we did six months ago. Social media is hard because it’s constantly evolving. The algorithms keep changing.

Filming in Iceland for an upcoming documentary.

What are you doing today that’s different from before?
One thing is just using trends and popular things that are going on. For example, a year and a half ago, Pokémon Go was very popular, so we did a video on Pokémon and it got 20 million views within a couple weeks. We have to be very smart about what content we put out — not just putting out content to put out content.

One thing that’s always stayed true since the beginning is consistent content. When we don’t put out a video weekly, it actually hurts how widely our content is seen. The famous people on YouTube now are the ones putting out daily content. For what we’re doing, that’s impossible, so we’ve sort of shifted platforms from YouTube, which was our bread and butter. Facebook is where we push our main content now, because Facebook doesn’t favor daily content. It just favors good-quality content.

Teens will be the first to say that grown-ups struggle with knowing what’s cool. How do you chase after topics likely to blow up?
A big one is going on YouTube and seeing what videos are trending. Also, if you go to Google Trends, it shows you the top things that were searched that day, that week, that month. So, it’s being on top of that. Or, maybe, Taylor Swift is coming out with a new album; we know that’s going to be really popular. Just staying current with all that stuff. You can also use Facebook, Twitter and Instagram to get an idea of what people are really excited about.

Can you tell us about some of the equipment you use, and the demands that your workflow puts on your storage needs?
We shoot so much content. We own two Red 8K cameras that we film everything with, and we’re shooting daily for the most part. On an average week, we’re shooting about eight terabytes, and then backing that up — so 16 terabytes a week. Obviously, we need a lot of storage, and we need storage that we can access quickly. We’re not putting it on tape. We need to pull stuff up right there and start editing on it right away.

So, we need the biggest drives that are as fast as possible. That’s why we use G-Tech’s 96TB G-Speed Shuttle XL towers. We have around 10 of those, and we’ve been shooting with those for the last three to four years. We needed something super reliable. Some of these shoots involve forking out a lot of money. I can’t take a hard drive and just hope it doesn’t fail. I need something that never fails on me — like ever. It’s just not worth taking that risk. I need a drive I can completely trust and is also super-fast.
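For scale, the figures Graham quotes work out roughly as follows. This is a rough sketch; real-world overhead for projects, renders and RAID formatting would eat into the usable total.

```python
# Back-of-envelope capacity math from the interview's numbers.
WEEKLY_SHOOT_TB = 8
WEEKLY_TOTAL_TB = WEEKLY_SHOOT_TB * 2   # original footage + backup copy
TOWER_TB = 96                           # one G-Speed Shuttle XL
TOWERS = 10

weeks_per_tower = TOWER_TB // WEEKLY_TOTAL_TB         # full weeks per tower
total_weeks = (TOWER_TB * TOWERS) // WEEKLY_TOTAL_TB  # weeks across the fleet
```

At that shooting pace, one tower fills in about six weeks, and the fleet of ten holds a little over a year of doubled-up footage, which squares with replacing towers every few years.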

What’s the one piece of advice that you wish somebody had given you when you were starting out?
In my early days, I didn’t have much of a budget, so I would never back up any of my footage. I was working on two really important projects and had them all on one drive. My roommate knocked that drive off the table, and I lost all that footage. It wasn’t backed up. I only had little bits and pieces still saved on the card — enough to release it, but a lot of people wanted to buy the stock footage and I didn’t have most of the original content. I lost out on a huge opportunity.

Today, we back up every single thing we do, no matter how big or how small it is. So, if I could do my early days over again, even if I didn’t have all the money to fund it, I’d figure out a way to have backup drives. That was something I had to learn the hard way.

Using humor to tell serious story for Greenpeace

By Randi Altman

When you think of the environmental organization Greenpeace, images of people protecting whales, forests and oceans come to mind. It’s serious business… but recently the non-profit decided to extend its reach with humor.

While Greenpeace videos are well viewed, it’s mostly Greenpeace enthusiasts and activists who hit play. In order to reach a more general audience the organization turned to comedy, specifically LA-based writer/director/editor Olivier Agostini.

This filmmaker has a lot of public service work experience where he uses humor to help tell a serious story. And he’s got the awards to prove it, including a first-place finish for his film Piñata at the 2010 Rome Film Festival, as well as Emmys, a Gold Addy, a Silver Telly and… Continue reading

The future of post — one man’s vision, part II

By Lucas Wilson

Tremors lead to earthquakes, and the industry has felt a few… the shelves are starting to rattle. And the Big One is not far away.

There is a fundamental change happening in the minds of creators right now. It is possibly the biggest shift since the dawn of film and the ability to make still pictures appear to move.

Continue reading