Calabash animates characters for health PSA

It’s a simple message, told in a very simple way — having a health issue and being judged for it hurts. A PSA for The Simon Foundation, titled Rude2Respect, was animated by Chicago’s Calabash in conjunction with the creative design studio Group Chicago.

Opening with the on-screen title “Challenging Health Stigma,” the PSA features two friends — a short, teal-colored tear-dropped blob known simply as Blue and his slender companion Pink — taking a walk on a bright sunny day in the city. Blue nervously says, “I’m not sure about this,” to which Pink responds, “You can’t stay home forever.” From there the two embark on what seems like a simple stroll to get ice cream, but there is a deeper message about how such common events can be fraught with anxiety for those suffering from an array of health conditions that often result in awkward stares, well-intentioned but inappropriate comments or plain rudeness. Blue and Pink decide it’s the people making the comments who are in the wrong and continue on to get ice cream. The spot ends with the simple words “Health stigma hurts. We can change lives” followed by a link to www.rude2respect.org.

“We had seen Calabash’s work and sought them out,” says Barbara Lynk, Group Chicago’s creative director. “We were impressed with how well their creative team immediately understood the characters and their visual potential. Creatively they brought a depth of experience on the conceptual and production side that helped bring the characters to life. They also understood the spare visual approach we were trying to achieve. It was a wonderful creative collaboration throughout the process, and they are a really fun group of creatives to work with.”

Based on illustrated characters created by Group Chicago’s founder/creative director Kurt Meinecke, Calabash creative director Wayne Brejcha notes that early on in the creative process they decided to go with what he called a “two-and-a-half-D look.”

“There is a charm in the simplicity of Kurt’s original illustrations with the flat shapes that we had to try very hard to keep as we translated Blue and Pink to the 3D world,” Brejcha says. “We also didn’t want to overly complicate it with a lot of crazy camera moves rollercoastering through the space or rotating around the characters. We constrained it to feel a little like two-and-a-half dimensions – 2D characters, but with the lighting and textures and additional physical feel you expect with 3D animation.”

For Sean Henry, Calabash’s executive producer, the primary creative obstacles centered on finding the right pacing for the story. “We played with the timing of the edits all the way through production,” he explains. “The pace of it had a large role to play in the mood, which is more thoughtful than your usual rapid-fire ad. Finding the right emotions for the voices was also a major concern. We needed warmth and a friendly mentoring feel for Pink, and a feisty, insecure but likeable voice for Blue. Our voice talent nailed those qualities. Additionally, the dramatic events in the spot happen only in the audio with Pink and Blue responding to off-screen voices and action, so the sound design and music had a major storytelling role to play as well.”

Calabash called on Autodesk Maya for the characters and Foundry’s Nuke for effects/compositing. Adobe Premiere was used for the final edit.

Industry vets open NYC post boutique Twelve

Colorist Lez Rudge and veteran production and post executives Marcelo Gandola, Axel Ericson and Ed Rilli have joined forces to launch New York City-based Twelve, a high-end post boutique for the advertising, film and television industries. Twelve has already been working on campaigns for Jagermeister, Comcast, Maybelline and the NY Rangers.

Twelve’s 4,500-square-foot space in Manhattan’s NoMad neighborhood features three Blackmagic Resolve color rooms, two Autodesk Flame suites and a 4K DI theater with a 7.1 Dolby surround sound system and 25-person seating capacity. Here, clients also have access to a suite of film and production services — editorial, mastering, finishing and audio mixing — as part of a strategic alliance with Ericson and his team at Digital Arts. Ericson, who brings 25 years of experience in film and television, also serves as managing partner of Twelve.

From Twelve’s recent Avion tequila campaign.

Managing director Rilli will handle client relations, strategy, budgets and deadlines, among other deliverables for the business. He was previously head of production at Nice Shoes for 17 years. His long list of agency clients includes Hill Holiday, Publicis, Grey and Saatchi & Saatchi and projects for Dunkin Donuts, NFL, Maybelline and Ford.

Gandola was most recently chief operations officer at Harbor Picture Company. Other positions include EVP at Hogarth, SVP of creative services at Deluxe, VP of operations at Company 3 and principal of Burst @ Creative Bubble, a digital audio and sound design company.

On the creative side, Rudge was formerly a colorist and partner at Nice Shoes. Since 2015, Rudge has also been focusing on his directorial career. His most recent campaign for the NY Rangers and Madison Square Garden — a concept-to-completion project via Twelve — garnered more than 300,000 Facebook hits on its first day.

While Twelve is currently working on short-form content, such as commercials and marketing campaigns, the company is making a concerted effort to extend its reach into film and television. Meanwhile, the partners also have a significant roster expansion in the works.

“After all of these years on both the vendor and client side, we’ve learned how best to get things done,” concludes Gandola. “In a way, technology has become secondary, and artistry is where we keep the emphasis. That’s the essence of what we want to provide clients, and that’s ultimately what pushed us to open our own place.”

Main Image (L-R): Ed Rilli, Axel Ericson, Lez Rudge & Marcelo Gandola

Millennium Digital XL camera: development to delivery

By Lance Holte and Daniel Restuccio

Panavision’s Millennium DXL 8K may be one of today’s best digital cinema cameras, but it might also be one of the most misunderstood. Conceived and crafted to the exacting tradition of the company whose cameras captured such films as Lawrence of Arabia and Inception, the Millennium DXL challenges expectations. We recently sat down with Panavision to examine the history, workflow, some new features and how that all fits into a 2017 moviemaking ecosystem.

Announced at Cine Gear 2016, and released for rent through Panavision in January 2017, the Millennium DXL stepped into the digital large format field as, at first impression, a competitor to the Arri Alexa 65. The DXL was the collaborative result of a partnership of three companies: Panavision developed the optics, accessories and some of the electronics; Red Digital Cinema designed the 8K VV (VistaVision) sensor; and Light Iron provided the features, color science and general workflow for the camera system.

The collaboration for the camera first began when Light Iron was acquired by Panavision in 2015. According to Michael Cioni, Light Iron president/Millennium DXL product manager, the increase in 4K and HDR television and theatrical formats like Dolby Vision and Barco Escape created the perfect environment for the three-company partnership. “When Panavision bought Light Iron, our idea was to create a way for Panavision to integrate a production ecosystem into the post world. The DXL rests atop Red’s best tenets, Panavision’s best tenets and Light Iron’s best tenets. We’re partners in this — information can flow freely between post, workflow, color, electronics and data management into cameras, color science, ergonomics, accessories and lenses.”

HDR OLED viewfinder

Now, one year after the first announcement, with projects like the Lionsgate feature adventure Robin Hood, the Fox Searchlight drama Can You Ever Forgive Me?, the CBS crime drama S.W.A.T. and a Samsung campaign shot by Oscar-winner Linus Sandgren under the DXL’s belt, the camera sports an array of new upgrades, features and advanced tools. They include an HDR OLED viewfinder (which they say is the first), wireless control software for iOS, and a new series of lenses. According to Panavision, the new DXL offers “unprecedented development in full production-to-post workflow.”

Preproduction Considerations
With so many high-resolution cameras on the market, why pick the DXL? According to Cioni, cinematographers and their camera crew are no longer the only people who directly interact with cameras. Panavision examined the impact a camera had on each production department — camera assistants, operators, data managers, DITs, editors and visual effects supervisors. In response to this feedback, they designed the DXL to offer custom toolsets for every department. In addition, Panavision wanted to leverage the benefits of their heritage lenses and make the same glass that photographed Lawrence of Arabia available to a wider range of today’s filmmakers on the DXL.

When Arri first debuted the Alexa 65 in 2014, there were questions about whether such a high-resolution, data-heavy image was necessary or beneficial. But cinematographers jumped on it and have leaned on large format sensors and glass to lens pictures — ranging from Doctor Strange to Rogue One — that deliver greater immersiveness, detail and range. It seems that the large format trend is only accelerating, particularly among filmmakers who are interested in the optical magnification, depth of field and field-of-view characteristics that only large format photography offers.

Kramer Morgenthau

“I think large format is the future of cinematography for the big screen,” says cinematographer Kramer Morgenthau, who shot with the DXL in 2016. “[Large format cinematography] gives more of a feeling of the way human vision is. And so, it’s more cinematic. Same thing with anamorphic glass — anamorphic does a similar thing, and that’s one of the reasons why people love it. The most important thing is the glass, and then the support, and then the user-friendliness of the camera to move quickly. But these are all important.”

The DXL comes to market offering a myriad of creative choices for filmmakers. Among the large format cameras, the Millennium DXL aims to be the crème de la crème — it’s built around a 46mm 8192×4320 Red VV sensor and custom Panavision large format spherical and anamorphic lenses, wrapped in camera department-friendly electronics and using proprietary color science — all of which complements a mixed camera environment.

“The beauty of digital, and this camera in particular, is that DXL actually stands for ‘digital extra light.’ With a core body weight of only 10 pounds, and with its small form factor, I’ve seen DXL used in the back seat of a car as well as to capture the most incredible helicopter scenes,” Cioni notes.

With the help of Light Iron, Panavision developed a tool to match DXL footage to Panavised Red Weapon cameras. Guardians of the Galaxy Vol. 2 used Red Weapon 8K VV Cameras with Panavision Primo 70 lenses. “There are shows like Netflix’s 13 Reasons Why [Season Two] that combined this special matching of the DXL and the Red Helium sensor based on the workflow of the show,” Cioni notes. “They’re shooting [the second season] with two DXLs as their primary camera, and they have two 8K Red cameras with Helium sensors, and they match each other.”

If you are thinking the Millennium DXL will bust your budget, think again. Like many Panavision cameras, the DXL is exclusively leasable through Panavision, but Cioni says they’re happy to help filmmakers build the right package and workflow. “A lot of budgetary expense can be avoided with a more efficient workflow. Once customers learn how DXL streamlines the entire imaging chain, a DXL package might not be out of reach. We always work with customers to build the right package at a competitive price,” he says.

Using the DXL in Production
The DXL could be perceived as a classic dolly Panavision camera, especially with the large format moniker. “Not true,” says Morgenthau, who shot test footage with the camera slung over his shoulder in the back seat of a car.

He continues, “I sat in the back of a car and handheld it — in the back of a convertible. It’s very ergonomic and user-friendly. I think what’s exciting about the Millennium: its size and integration with technology, and the choice of lenses that you get with the Panavision lens family.”

Panavision’s fleet of large format lenses, many of which date back to the 1950s, made the company uniquely equipped to begin development on the new series of large format optics. To be available by the end of 2017, the Primo Artiste lenses are a full series of T/1.8 Primes — the fastest optics available for large format cinematography — with a completely internalized motor and included metadata capture. Additionally, the Primo Artiste lenses can be outfitted with an anamorphic glass attachment that retains the spherical nature of the base lens, yet induces anamorphic artifacts like directional flares and distorted bokeh.

Another new addition to the DXL is Panavision’s previously mentioned HDR OLED Primo viewfinder. Offering 600-nit brightness, image smoothing and optics to limit eye fatigue, the viewfinder also boasts a theoretical contrast ratio of 1,000,000:1. Like other elements on the camera, the Primo viewfinder was the result of extensive polling and camera operator feedback. “Spearheaded by Panavision’s Haluki Sadahiro and Dominick Aiello, we went to operators and asked them everything we could about what makes a good viewfinder,” notes Cioni. “Guiding an industry game-changing product meant we went through multiple iterations. We showed the first Primo HDR prototype version in November 2016, and after six months of field testing, the final version is both better and simpler, and it’s all thanks to user feedback.”

Michael Cioni

In response to the growing popularity of HDR delivery, Light Iron also provides a powerful on-set HDR viewing solution. The HDR Village cart is built with a 4K HDR Sony monitor with numerous video inputs. The system can simultaneously display A and B camera feeds in high dynamic range and standard dynamic range on four different split quadrants. This enables cinematographers to evaluate their images and better prepare for multi-format color grading in post, given that most HDR projects are also required to deliver in SDR.

Post Production
The camera captures R3D files, the same as any other Red camera, but does have metadata that is unique to the DXL, ranging from color science to lens information. It also uses Light Iron’s set of color matrices designed specifically for the DXL: Light Iron Color.

Designed by Light Iron supervising colorist Ian Vertovec, Light Iron Color deviates from traditional digital color matrices by following in the footsteps of film stock philosophy instead of direct replication of how colors look in nature. Cioni likens Light Iron Color to Kodak’s approach to film. “Kodak tried to make different film stocks for different intentions. Since one film stock cannot satisfy every creative intention, DXL is designed to allow look transforms that users can choose, export and integrate into the post process. They come in the form of cube lookup tables and are all non-destructive.”
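
To make the cube-LUT idea concrete, below is a minimal sketch (ours, not Light Iron’s implementation) of loading a .cube file and applying it as a non-destructive viewing transform: the graded result is a new buffer and the source pixels are never modified. The filename is hypothetical, and the nearest-neighbor lookup stands in for the trilinear or tetrahedral interpolation a real grading tool would use.

```python
# Minimal sketch of a non-destructive .cube LUT used as a viewing transform.
import numpy as np

def load_cube(path):
    """Parse a .cube file into an (n, n, n, 3) array; red varies fastest."""
    size, rows = 0, []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            if line.startswith("LUT_3D_SIZE"):
                size = int(line.split()[1])
            elif line[0].isdigit() or line[0] in "-.":
                rows.append([float(v) for v in line.split()[:3]])
    return np.array(rows, dtype=np.float32).reshape(size, size, size, 3), size

def apply_lut(rgb, lut, size):
    """rgb in [0, 1], shape (..., 3). Returns a new array; the source is untouched."""
    idx = np.clip(np.rint(rgb * (size - 1)).astype(int), 0, size - 1)
    return lut[idx[..., 2], idx[..., 1], idx[..., 0]]  # lut is indexed [b, g, r]

# Demo with a synthetic identity LUT so the script runs without any assets.
# With a real look: lut, n = load_cube("some_look.cube")  (hypothetical path)
n = 33
grid = np.linspace(0.0, 1.0, n, dtype=np.float32)
b, g, r = np.meshgrid(grid, grid, grid, indexing="ij")
identity = np.stack([r, g, b], axis=-1)

frame = np.random.rand(270, 480, 3).astype(np.float32)  # stand-in for a decoded frame
graded_view = apply_lut(frame, identity, n)              # original frame is unchanged
print(np.abs(graded_view - frame).max())                 # only small quantization error
```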

Light Iron Color can be adjusted and tweaked by the user or by Light Iron, which Cioni says has been done on many shows. The ability to adjust Light Iron Color to fit a particular project is also useful on shows that shoot with multiple camera types. Though Light Iron Color was designed specifically for the Millennium DXL, Light Iron has used it on other cameras — including the Sony A7, and Reds with Helium and Dragon sensors — to ensure that all the footage matches as closely as possible.

While it’s possible to cut online with high-resolution media on a blazing-fast workstation and storage solution, it’s a lot trickier to edit online with 8K media in a post production environment that often requires multiple editors, assistants, VFX editors, post PAs and more. The good news is that the DXL records onboard low-bitrate proxy media (ProRes or DNx) for offline editorial while simultaneously recording R3Ds, without requiring the use of an external recorder.

Cioni’s optimal camera recording setup for editorial is 5:1 compression for the R3Ds alongside 2K ProRes LT files. He explains, “My rule of thumb is to record super high and super low. And if I have high-res and low-res and I need to make something else, I can generate that somewhere in the middle from the R3Ds. But as long as I have the bottom and the top, I’m good.”

Storage is also a major post consideration. 8192×4320 R3Ds at 23.976fps run in the 1TB/hour range — that number may vary depending on the R3D compression, but when compared to an hour of 6560×3100 Arriraw footage, which lands at 2.6TB, the Millennium DXL’s lighter R3D workflow can be very attractive.
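
As a rough sanity check on those hourly figures, the arithmetic below works from resolution, frame rate and bit depth. The single-sample-per-photosite assumption and the bit depths (16-bit Red raw, 12-bit Arriraw) are our assumptions rather than vendor specs, and real Redcode data rates vary with the compression setting chosen.

```python
# Back-of-the-envelope check of the hourly storage figures quoted above.
def tb_per_hour(width, height, bits_per_sample, fps, compression=1.0):
    bytes_per_frame = width * height * bits_per_sample / 8
    return bytes_per_frame * fps * 3600 / compression / 1e12

print(f"DXL R3D 5:1  ~{tb_per_hour(8192, 4320, 16, 23.976, 5.0):.2f} TB/hour")
print(f"Arriraw      ~{tb_per_hour(6560, 3100, 12, 23.976):.2f} TB/hour")
# -> roughly 1.2 TB/hour vs 2.6 TB/hour, in line with the numbers quoted
```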

Conform and Delivery
One significant aspect of the Millennium DXL workflow is that even though the camera’s sensor, body, glass and other pipeline tools are all recently developed, R3D conform and delivery workflows remain tried and true. The onboard proxy media exactly matches the R3Ds by name and timecode, and since Light Iron Color is non-destructive, the conform and color-prep process is simple and adjustable, whether the conform is done with Adobe, Blackmagic, Avid or other software.
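
Conceptually, that conform is just a lookup from each offline event back to its camera original. The sketch below illustrates the name/timecode matching with a hypothetical folder layout and filename pattern; in practice the relink is driven by the same metadata inside Resolve, Media Composer or whatever finishing tool is used.

```python
# Sketch of the relink idea: match proxy clips from the offline cut back to
# camera-original R3Ds by clip name and start timecode. The mount point and
# the name_TC.R3D filename pattern are hypothetical.
from pathlib import Path

def index_originals(r3d_root):
    """Map (clip_name, start_timecode) -> path for every R3D under r3d_root."""
    root = Path(r3d_root)
    table = {}
    if root.is_dir():
        for clip in root.rglob("*.R3D"):
            name, tc = clip.stem.rsplit("_", 1)   # e.g. A001_C002_170501_12-00-00-00
            table[(name, tc.replace("-", ":"))] = clip
    return table

def relink(offline_events, originals):
    """offline_events: (clip_name, start_timecode) pairs pulled from the EDL/AAF."""
    return {event: originals.get(event) for event in offline_events}

originals = index_originals("/mnt/dailies/r3d")   # hypothetical mount point
cut = [("A001_C002_170501", "12:00:00:00")]       # hypothetical offline event
print(relink(cut, originals))                     # unmatched events map to None
```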

Additionally, since Red media can be imported into almost all major visual effects applications, it’s possible to work with the raw R3Ds as VFX plates. This retains the lens and camera metadata for better camera tracking and optical effects, as well as providing the flexibility of working with Light Iron Color turned on or off, and the 8K R3Ds are still lighter than working with 4K DPX or EXR plates (4K being the current VFX trend). The resolution also affords enormous space for opticals and stabilization in a 4K master.

4K is the increasingly common delivery resolution among studios, networks and over-the-top content distributors, but in a world of constant remastering and an exponential increase in television and display resolutions, the benefit in future-proofing a picture is easily apparent. Baselight, Resolve, Rio and other grading and finishing applications can handle 8K resolutions, and even if the final project is only rendered at 4K now, conforming and grading in 8K ensures the picture will be future-proofed for some time. It’s a simple task to re-export a 6K or 8K master when those resolutions become the standard years down the line.

After playing with DXL footage provided by Light Iron, we were surprised by how straightforward the workflow is. For a very small production, the trickiest part is the requirement of a powerful workstation — or sets of workstations — to conform and play 8K Red media, with a mix of (likely) 4K VFX shots, graphics and overlays. Michael Cioni notes, “[Everyone] already knows a RedCode workflow. They don’t have to learn it. I could show the DXL to anyone who has a Red Raven and in 30 seconds they’ll confidently say, ‘I got this.’”

Assimilate and Z Cam offer second integrated VR workflow bundle

Z Cam and Assimilate are offering their second VR integrated workflow bundle, which features the Z Cam S1 Pro VR camera and the Assimilate Scratch VR Z post tools. The new Z Cam S1 Pro offers a higher level of image quality that includes better handling of low light and dynamic range, with detailed, well-saturated, noise-free video. In addition to the new camera, this streamlined pro workflow combines Z Cam’s WonderStitch optical-flow stitch feature and the end-to-end Scratch VR Z tools.

Z Cam and Assimilate have designed their combined technologies to ensure as simple a workflow as possible, including making it easy to switch back and forth between the S1 Pro functions and the Scratch VR Z tools. Users can also employ Scratch VR Z to do live camera preview prior to shooting with the S1 Pro. Once the shoot begins with the S1 Pro, Scratch VR Z is then used for dailies and data management, including metadata. You don’t have to remove the SD cards and copy; there’s a direct connection from the camera to the PC via a high-speed Ethernet port. Stitching of the imagery is then done via Z Cam’s WonderStitch, now integrated into Scratch VR Z, which also handles traditional editing, color grading, compositing, support for multichannel audio from the S1 or external ambisonic sound, finishing and publishing (to all final online or standalone 360 platforms).

Z Cam S1 Pro/Scratch VR Z bundle highlights include:
• Low-light sensitivity and dynamic range – 4/3-inch CMOS image sensor
• Premium 220 degree MFT fisheye lens, f/2.8~11
• Coordinated AE (automatic exposure) and AWB (automatic white balance)
• Full integration with built-in Z Cam Sync
• 6K 30fps resolution (post stitching) output
• Gig-E port (video stream & setting control)
• WonderStitch optical flow-based stitching
• Live Streaming to Facebook, YouTube or a private server, including text overlays and green/composite layers for a virtual set
• Scratch VR Z, a single, streamlined, end-to-end, integrated VR post workflow

“We’ve already developed a few VR projects with the S1 Pro VR camera and the entire Neotopy team is awed by its image quality and performance,” says Alex Regeffe, VR post production manager at Neotopy Studio in Paris. “Together with the Scratch VR Z tools, we see this integrated workflow as a game changer in creating VR experiences, because our focus is now all on the creativity and storytelling rather than configuring multiple, costly tools and workflows.”

The Z Cam S1 Pro/Scratch VR Z bundle is available within 30 days of ordering. Priced at $11,999 (US), the bundle includes the following:
– Z Cam S1 Pro camera main unit, Z Cam S1 Pro battery unit (w/o battery cells), AC/DC power adapter unit and power connection cables (US, UK, EU).
– A Z Cam WonderStitch license, which is an optical flow-based stitching feature that performs offline stitching of files from Z Cam S1 Pro. Z Cam WonderStitch requires a valid software license associated with a designated Z Cam S1 Pro, and is nontransferable.
– A Scratch VR Z permanent license: a pro VR end-to-end post workflow with an all-inclusive, realtime toolset for data management, dailies, conform, color grading, compositing, multichannel and ambisonic sound, and finishing, all integrated with the Z Cam S1 Pro camera. Includes one year of support/updates.

The companies are offering a tutorial about the bundle.

MammothHD shooting, offering 8K footage

By Randi Altman

Stock imagery house MammothHD has embraced 8K production, shooting studio work, macros, aerials, landscapes, wildlife and more. Clark Dunbar, owner of MammothHD, is shooting with the Red 8K VistaVision model. He’s also getting 8K submissions from his network of shooters and producers around the world, who have been calling on the Red Helium s35 and Epic-W models.

“8K is coming fast — from feature films to broadcast to specialty uses, such as signage and exhibits — the Rio Olympics were shot partially in 8K, and the 2020 Tokyo Olympics will be broadcast in 8K,” says Dunbar. “Manufacturers of flat screens, monitors and projectors are moving to 8K and prices are dropping, so there is a current clientele for 8K, and we see a growing move to 8K in the near future.”

So why is it important to have 8K imagery while the path is still being paved? “Having an 8K master gives all the benefits of shooting in 8K, but also allows for a beautiful, better over-sampled down-rezzing to 4K or lower. There is less noise (if any, and smaller noise/grain patterns), so it’s smoother and sharper, and the new color space has incredible dynamic range. Also, shooting in RAW gives the advantage of working to any color grading or post conform you’d like, and with 8K original capture, if needed, there is a large canvas in which to re-frame.”
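
A quick way to see the oversampling benefit Dunbar describes: averaging an 8K frame down to 4K combines four photosites per output pixel, which cuts uncorrelated noise roughly in half. The sketch below uses a plain 2×2 box average on a synthetic single-channel frame, the simplest stand-in for the more sophisticated resampling filters finishing tools actually use.

```python
# Why an oversampled 8K -> 4K downscale looks cleaner: a 2x2 box average
# combines four noisy samples per output pixel, reducing uncorrelated
# noise by roughly 1/sqrt(4).
import numpy as np

rng = np.random.default_rng(0)
clean = np.full((4320, 8192), 0.5, dtype=np.float32)               # flat gray "8K" frame
noisy = clean + rng.normal(0.0, 0.05, clean.shape).astype(np.float32)

# Average non-overlapping 2x2 blocks to get the "4K" frame
down = noisy.reshape(2160, 2, 4096, 2).mean(axis=(1, 3))

print("8K noise sigma:", noisy.std())   # ~0.05
print("4K noise sigma:", down.std())    # ~0.025
```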

He says another benefit of 8K is in post — with all those pixels, if you need to stabilize a shot, “you have much more control and room for re-framing.”

In terms of lenses, which Dunbar says “are a critical part of the selection for each shot,” current VistaVision sessions have used Zeiss Otus, Zeiss Makro, Canon, Sigma and Nikon glass from 11mm to 600mm, including extension tubes for the macro work and 2X doublers for a few of the telephotos.

“Along with how the lighting conditions affect the intent of the shot, in the field we use everything from natural light (all times of day), along with on-camera filtration (ND, grad ND, polarizers) and LED panels as supplements, to studio set-ups with a choice of light fixtures,” explains Dunbar. “These range from flashlights, candles, LED panels from 2-x-3 inches to 1-x-2 feet, old tungsten units and light through the window. Having been shooting for almost 50 years, I like to use whatever tool is around that fits the need of the shot. If not, I figure out what will do from what’s in the kit.”

Dunbar not only shoots, he edits and colors as well. “My edit suite is kind of old. I have a MacPro (cylinder) with over a petabyte of online storage. I look forward to moving to the next-generation of Macs with Thunderbolt 3. On my current system, I rarely get to see the full 8K resolution. I can check files at 4K via the AJA io4K or the KiPro box to a 4K TV.

“As a stock footage house, other than our occasional demo reels, and a few custom-produced client show reels, we only work with single clips in review, selection and prepping for the MammothHD library and galleries,” he explains. “So as an edit suite, we don’t need a full bore throughput for 4K, much less 8K. Although at some point I’d love to have an 8K state-of-the-art system to see just what we’re actually capturing in realtime.”

Apps used in MammothHD’s Apple-based edit suite are Red’s RedCineX (the current beta build) using the new IPP2 pipeline, Apple’s Final Cut 7 and FCP X, Adobe’s Premiere, After Effects and Photoshop, and Blackmagic’s Resolve, along with QuickTime 7 Pro.

Working with these large 8K files has been a challenge, says Dunbar. “When selecting a single frame for export as a 16-bit tiff (via the RedCine-X application), the resulting tiff file in 8K is 200MB!”
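
That figure lines up with simple arithmetic for an uncompressed 16-bit RGB frame (assuming three channels, no alpha and no TIFF compression):

```python
# Quick check of the ~200MB figure for a single 8K, 16-bit RGB frame.
width, height, channels, bytes_per_channel = 8192, 4320, 3, 2
print(width * height * channels * bytes_per_channel / 1e6, "MB")  # ~212 MB before compression
```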

The majority of storage used at MammothHD is Promise Pegasus and G-Tech Thunderbolt and Thunderbolt 2 RAIDs, but the company has single disks, LTO tape and even some old SDLT media, with connections ranging from FireWire to eSATA.

“Like moving to 4K a decade ago, once you see it, it’s hard to go back to lower resolutions. I’m looking forward to expanding the MammothHD 8K galleries with more subjects and styles to fill the 8K markets.” Until then, Dunbar also remains focused on 4K+ footage, which he says is his site’s specialty.

Nugen adds 3D Immersive Extension to Halo Upmix

Nugen Audio has updated its Halo Upmix with a new 3D Immersive Extension, adding further options beyond the existing Dolby Atmos bed track capability. The 3D Immersive Extension now provides ambisonic-compatible output as an alternative to channel-based output for VR, game and other immersive applications. This makes it possible to upmix, re-purpose or convert channel-based audio for an ambisonic workflow.

With this 3D Immersive Extension, Halo fully supports Avid’s newly announced Pro Tools 12.8, now with native 7.1.2 stems for Dolby Atmos mixing. The combination of Pro Tools 12.8 and the Halo 3D Immersive Extension can provide a more fluid workflow for audio post pros handling multi-channel and object-based audio formats.

Halo Upmix is available immediately at a list price of $499 for both OS X and Windows, with support for Avid AAX, AudioSuite, VST2, VST3 and AU formats. The new 3D Immersive Extension replaces the Halo 9.1 Extension and can now be purchased for $199. Owners of the existing Halo 9.1 Extension can upgrade to the Halo 3D Immersive Extension for no additional cost. Support for native 7.1.2 stems in Avid Pro Tools 12.8 is available on launch.

Barry Sonnenfeld on Netflix’s A Series of Unfortunate Events

By Iain Blair

Director/producer/showrunner Barry Sonnenfeld has a gift for combining killer visuals with off-kilter, broad and often dark comedy, as showcased in such monster hits as the Men in Black and The Addams Family franchises.

He learned from the modern masters of black comedy, the Coen brothers, beginning his prolific career as their DP on their first feature film, Blood Simple, and then shooting such classics as Raising Arizona and Miller’s Crossing. He continued his comedy training as the DP on such films as Penny Marshall’s Big, Danny DeVito’s Throw Momma from the Train and Rob Reiner’s When Harry Met Sally.

So maybe it was just a matter of time before Sonnenfeld — whose directing credits include Get Shorty, Wild Wild West, RV and Nine Lives — gravitated toward helming the acclaimed new Netflix show A Series of Unfortunate Events, based on the beloved and best-selling “Lemony Snicket” children’s series by Daniel Handler. After all, with the series’ rat-a-tat dialogue, bizarre humor and dark comedy, it’s a perfect fit for the director’s own strengths and sensibilities.

I spoke with Sonnenfeld, who won a 2007 Primetime Emmy and a DGA Award for his directorial achievement on Pushing Daisies, about making the series, the new golden age of TV, his love of post — and the real story behind why he never directed the film version of A Series of Unfortunate Events.

Weren’t you originally set to direct the 2004 film, and you even hired Handler to write the screenplay?
That’s true. I was working with producer Scott Rudin, who had done the Addams Family films with me, and Paramount decided they needed more money, so they brought in another studio, DreamWorks. But the DreamWorks producer — who had done the Men in Black films with me — and I don’t really get along. So when they came on board, Daniel and I were let go. I’d been very involved with it for a long time. I’d already hired a crew, sets were all designed, and it was very disappointing as I loved the books.

But there’s a happy ending. You are doing the Netflix TV series, which seems much closer to the original books than the movie version. How important was finding the right tone?
The single most important job of a director is both finding and maintaining the right tone. Luckily, the tone of the books is exactly in my wheelhouse — creating worlds that are real, but also with some artifice in them, like the Men in Black and Addams Family movies, and Pushing Daisies. I tend to like things that are a bit dark, slightly quirky.

What did you think of the film version?
I thought it was slightly too big and loud, and I wanted to do something more like a children’s book, for adults.

The film version had to stuff Handler’s first three novels into a single movie, but the TV format, with its added length, must work far better for the books?
Far better, and the other great thing is that once Netflix hired me — and it was a long auditioning process — they totally committed. They take a long time finding the right material and pairing it with the right filmmaker but once they do, they really trust their judgment.

I really wanted to shoot it all on stages, so I could control everything. I didn’t want sun or rain. I wanted gloomy overhead. So we shot it all in Vancouver, and Netflix totally bought into that vision. I have an amazing team — the great production designer Bo Welch, who did Men in Black and other films with me, and DP Bernard Couture.

Patrick Warburton’s deadpan delivery as Lemony Snicket, the books’ unreliable narrator, is a great move compared with having just the film’s voiceover. How early on did you make that change?
When I first met with Netflix, I told them that Lemony should be an on-screen character. That was my goal. Patrick’s just perfect for the role. He’s the sort of Rod Serling/Twilight Zone presence — only more so, as he’s involved in the actual visual style of the show.

How early on do you deal with post and CG for each episode?
Even before we’re shooting. You don’t want to wait until you lock picture to start all that work, or you’ll never finish in time. I’m directing most of it — half the first season and over a third of the second. Bo’s doing some episodes, and we bring in the directors at least a month before the shoot, which is long for TV, to do a shot list. These shows, both creatively and in terms of budget, are made in prep. There should be very few decisions being made in the shoot or surprises in post because basically every two episodes equal one book, and they’re like feature films but on one-tenth of the budget and a quarter of the schedule.

We only have 24 days to do two hours’ worth of feature film. Our goal is to make it look as good as any feature, and I think we’ve done that. So once we have sequences we’re happy with, we show them to Netflix and start post, as we have a lot of greenscreen. We do some CGI, but not as much as we expected.

Do you also post in Vancouver?
No. We began doing post there for the first season, but we discovered that with our TV budget and my feature film demands and standards, it wasn’t working out. So now we work with several post vendors in LA and San Francisco. All the editorial is in LA.

Do you like the post process?
I’ve always loved it. As Truffaut said, the day you finish filming is the worst it’ll ever be, and then in post you get to make it great again, separating the wheat from the chaff, adding all the VFX and sound. I love prep and post — especially post as it’s the least stress and you have the most time to just think. Production is really tough. Things go wrong constantly.

You used two editors?
Yes, Stuart Bass and Skip MacDonald, and each edits two episodes/one book as we go. I’m very involved, but in TV the director gets a very short time to do their cut, and I like to give notes and then leave. My problem is I’m a micro-manager, so it’s best if I leave because I drive everyone crazy! Then the showrunner — which is also me — takes over. I’m very comfortable in post, with all the editing and VFX, and I represent the whole team and end up making all the post decisions.

Where did you mix the sound?
We did all the mixing on the Sony lot with the great Paul Ottosson, who won Oscars for Zero Dark Thirty and The Hurt Locker. We go way back, as he did Men in Black 3 and other shows for me, and what’s so great about him is that he both designs the sound and then also mixes.

The show uses a lot of VFX. Who did them?
We used three main houses — Shade and Digital Sandbox in LA and Tippett in San Francisco. We also used EDI, an Italian company, who came in late to do some wire removal and clean up.

How important was the DI on this and where did you do it?
We did it all at Encore LA, and the colorist on the first season was Laura Jans Fazio, who was fantastic. It’s the equivalent of a movie DI, where you do all the final color timing, and getting the right look was crucial. The DP created very good LUTs, and our rough cut was very close to where we wanted it, and then the DP and I piggy-backed sessions with the colorist. It’s a painful experience for me as it’s so slow, and like editing, I micro-manage. So I set looks for scenes and then leave.

Barry Sonnenfeld directs Joan Cusack.

Is it a golden age for TV?
Very much so. The writing’s a very high standard, and now that everyone has wide-screen TVs, there’s no more protecting the 4:3 image, which is almost square. When I began doing TV, there was no such thing as a wide shot. Executives would look at my cut, and the first thing they’d always say was, “Do you have a close-up of so and so?” Now it’s all changed. But TV is so different from movies. I look back fondly at movie schedules!

How important are the Emmys and other awards?
They’re very important for Netflix and all the new platforms. If you have critical success, then they get more subscribers, more money and then they develop more projects. And it’s great to be acknowledged by your peers.

What’s next?
I’ll finish season two and we’re hopeful about season three, which would keep us busy through fall 2018. And Vancouver’s a perfect place to be as long as you’re shooting on stage and don’t have to deal with the weather.

Will there be a fourth Men in Black?
If there is, I don’t think Will or I will be involved. I suspect there won’t be one, as it might be just too expensive to make now, with all the back-end deals for Spielberg and Amblin and so on. But I hope there’s one.

Images: Joe Lederer/Netflix


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Behind the Title: Nylon Studios creative director Simon Lister

NAME: Simon Lister

COMPANY: Nylon Studios

CAN YOU DESCRIBE YOUR COMPANY?
Nylon Studios is a New York- and Sydney-based music and sound house offering original composition and sound design for films and commercials. I am based in the Australia location.

WHAT’S YOUR JOB TITLE?
Creative Director

WHAT DOES THAT ENTAIL?
I help manage and steer the company, while also serving as a sound designer, client liaison, soundtrack creative and thinker.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
People are constantly surprised by the amount of work that goes into making a soundtrack.

WHAT TOOLS DO YOU USE?
I use Avid Pro Tools and some really cool plugins.

WHAT’S YOUR FAVORITE PART OF THE JOB?
My favorite part of the job is being able to bring a film to life through sound.

WHAT’S YOUR LEAST FAVORITE?
At times, clients can be so stressed and make things difficult. However, sometimes we just need to sit back and look at how lucky we are to be in such a fun industry. So in that case, we try our best to make the client’s experience with us as relaxing and seamless as possible.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Lunchtime.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Anything that involves me having a camera in my hand and taking pictures.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I was pretty young. I got a great break when I was 19 years old in one of the best music studios in New Zealand and haven’t stopped since. Now, I’ve been doing this for 31 years (cough).

Honda Civic spot

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
In the last couple of months I think I’ve counted several different car brand spots we’ve worked on, including Honda, Hyundai, Subaru, Audi and Toyota. All great spots to sink our teeth and ears into.

We have also been working on the great wildlife series Tales by Light, which airs on National Geographic and Netflix.

For Every Child

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
It would be having the opportunity to film and direct my own commercial, For Every Child, for Unicef’s global rebranding TVC. We had the amazing voiceover of Liam Neeson and the incredible singing voice of Lisa Gerrard (Gladiator, Heat, Black Hawk Down).

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My camera, my computer and my motorbike.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I ride motorbikes through Morocco, Baja, the Himalayas, Mongolia, Vietnam, Thailand, New Zealand and in the traffic of India.

Mocha VR: An After Effects user’s review

By Zach Shukan

If you’re using Adobe After Effects to do compositing and you’re not using Mocha, then you’re holding yourself back. If you’re using Mettle Skybox, you need to check out Mocha VR, the VR-enhanced edition of Mocha Pro.

Mocha Pro and Mocha VR are both standalone programs where you work entirely within the Mocha environment and then export your tracks, shapes or renders to another program to do the rest of the compositing work. There are plugins for Maxon Cinema 4D, The Foundry’s Nuke, HitFilm and After Effects that allow you to do more with the Mocha data within your chosen 3D or compositing program. Limited-feature versions of Mocha (Mocha AE and Mocha HitFilm) come installed with the Creative Cloud versions of After Effects and HitFilm 4 Pro, and every update of these plugins gets closer to looking like a full version of Mocha running inside of the effects panel.

Maybe I’m old school, or maybe I just try to get the maximum performance from my workstation, but I always choose to run Mocha VR by itself and only open After Effects when I’m ready to export. In my experience, all the features of Mocha run more smoothly in the standalone than when they’re launched and run inside of After Effects.**

How does Mocha VR compare to Mocha Pro? If you’re not doing VR, stick with Mocha Pro. However, if you are working with VR footage, you won’t have to bend over backwards to keep using Mocha.

Last year was the year of VR, when all my clients wanted to do something with VR. It was a crazy push to be the first to make something, and I rode the wave all year. The thing is, there really weren’t many tools specifically designed to work with 360 video. Now this year, the post tools for working with VR are catching up.

In the past, I forced previous versions of Mocha to work with 360 footage before the VR version, but since Mocha added its VR-specific features, stabilizing a 360-camera shot became cake compared to the kludgy way it works with the industry-standard After Effects 360 plugin, Skybox. Also, I’ve used Mocha to track objects in 360 before the addition of an equirectangular* camera, and it was super-complicated because I had to splice together a whole bunch of tracks to compensate for the 360 camera distortion. Now it’s possible to create a single track to follow objects as they travel around the camera. Read the footnote for an explanation of equirectangular, a fancy word that you need to know if you’re working in VR.

Now let’s talk about the rest of Mocha’s features…

Rotoscoping
I used to rotoscope by tracing every few frames and then refining the frames in between until I found out about the Mocha way to rotoscope. Because Mocha combines rotoscoping with tracking of arbitrary shapes, all you have to do is draw a shape and then use tracking to follow and deform it all the way through. It’s way smarter and, more importantly, faster. Also, with the Uberkey feature, you can adjust your shapes on multiple frames at once. If you’re still rotoscoping with After Effects alone, you’re doing it the hard way.

Planar Tracking
When I first learned about Mocha it was all about the planar tracker, and that really is still the heart of the program. Mocha’s basically my go-to when nothing else works. Recently, I was working on a shot where a woman had her dress tucked into her pantyhose, and I pretty much had to recreate a leg of a dress that swayed and flowed along with her as she walked. If it wasn’t for Mocha’s planar tracker I wouldn’t have been able to make a locked-on track of the soft-focus (solid color and nearly without detail) side of the dress. After Effects couldn’t make a track because there weren’t enough contrast-y details.

GPU Acceleration
I never thought Mocha’s planar tracking was slow, even though it is slower than point tracking, but then they added GPU acceleration a version or two ago and now it flies through shots. It has to be at least five times as fast now that it’s using my Nvidia Titan X (Pascal), and it’s not like my CPU was a slouch (an 8-core i7-5960X).

Object Removal
I’d be content using Mocha just to track difficult shots and for rotoscoping, but their object-removal feature has saved me hours of cloning/tracking work in After Effects, especially when I’ve used it to remove camera rigs or puppet rigs from shots.

Mocha’s remove module is the closest thing out there to automated object removal***. It’s as simple as 1) create a mask around the object you want to remove, 2) track the background that your object passes in front of, and then 3) render. Okay, there’s a little more to it, but compared to the cloning and tracking and cloning and tracking and cloning and tracking method, it’s pretty great. Also, a huge reason to get the VR edition of Mocha is that the remove module will work with a 360 camera.

Here I used Mocha object removal to remove ropes that pulled a go-cart in a spot for Advil.

VR Outside of After Effects?
I’ve spent most of this article talking about Mocha with After Effects, because it’s what I know best, but there is one VR pipeline that can match nearly all of Mocha VR’s capabilities: the Nuke plugin Cara VR. There is a cost to that workflow, though. More on this shortly.

Where you will hit the limit of Mocha VR (and After Effects in general) is if you are doing 3D compositing with CGI and real-world camera depth positioning. Mocha’s 3D Camera Solve module is not optimized for 360 and the After Effects 3D workspace can be limited for true 3D compositing, compared to software like Nuke or Fusion.

While After Effects sort of tacked on its 3D features to its established 2D workflow, Nuke is a true 3D environment as robust as Autodesk Maya or any of the high-end 3D software. This probably sounds great, but you should also know that Cara VR is $4,300 vs. $1,000 for Mocha VR (the standalone + Adobe plugin version) and Nuke starts at $4,300/year vs. $240/year for After Effects.

Conclusion
I think of Mocha as an essential companion to compositing in After Effects, because it makes routine work much faster and it does some things you just can’t do with After Effects alone. Mocha VR is a major release because VR has so much buzz these days, but in reality it’s pretty much just a version of Mocha Pro with the ability to also work with 360 footage.

*Equirectangular is a clever way of unwrapping a 360 spherical projection, a.k.a. the view we see in VR, by flattening it out into a rectangle. It’s a great way to see the whole 360 view in an editing program, but A: it’s very distorted, so it can cause problems for tracking, and B: anything that is moving up or down in the equirectangular frame will wrap around to the opposite side (a bit like Pacman when he exits the screen), and non-VR tracking programs will stop tracking when something exits the screen on one side.
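
For readers who want the math behind that footnote, the projection is the standard latitude/longitude mapping. The sketch below (a generic illustration, not any particular plugin’s implementation) shows how a 3D viewing direction lands in a 2:1 equirectangular frame, and why the wrap at the left/right seam and the stretching near the poles confuse ordinary 2D trackers.

```python
# Standard lat/long (equirectangular) mapping: a 3D viewing direction
# becomes a pixel in a 2:1 rectangle. Longitude wraps at the left/right
# edge, and areas near the poles are stretched across many pixels.
import math

def direction_to_equirect(x, y, z, width=8192, height=4096):
    lon = math.atan2(x, z)                                # -pi..pi, 0 = straight ahead
    lat = math.asin(y / math.sqrt(x*x + y*y + z*z))       # -pi/2..pi/2
    u = (lon / (2 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u % width, v                                   # longitude wraps horizontally

print(direction_to_equirect(0, 0, 1))        # straight ahead -> center of frame
print(direction_to_equirect(-0.01, 0, -1))   # just "behind" -> lands near the seam
```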

**Note: According to the developer, one of the main advantages to running Mocha as a plug-in (inside AE, Premiere, Nuke, etc.) for 360 video work is that you are using the host program’s render engine and proxy workflow. Having the ability to do all your tracking, masking and object removal at proxy resolutions is a huge benefit when working with large 360 formats that can be as big as 8K stereoscopic. Additionally, the Mocha modules that render, such as reorient for horizon stabilization or the remove module, render inside the plug-in, making for a streamlined workflow.

***FayOut was a “coming soon” product that promised an even more automated method for object removal, but as of the publishing of this article it appears that they are no longer “coming soon” and may have folded or maybe their technology was purchased and it will be included in a future product. We shall see…
________________________________________
Zach Shukan is the VFX specialist at SilVR and is constantly trying his hand at the latest technologies in the video post production world.

Baby Driver editors — Syncing cuts to music

By Mel Lambert

Writer/director Edgar Wright’s latest outing is a major departure from his normal offering of dark comedies. Unlike his Three Flavours Cornetto film trilogy — Shaun of the Dead, Hot Fuzz and The World’s End — and Scott Pilgrim vs. the World, TriStar Pictures’ Baby Driver has been best described as a romantic musical disguised as a car-chase thriller.

Wright’s regular pair of London-based picture editors, Paul Machliss, ACE, and Jonathan Amos, ACE, also brought a special brand of magic to the production. Machliss, who had worked with Wright on Scott Pilgrim, The World’s End and his TV series Spaced for Channel 4, recalls that, “very early on, Edgar decided that I should come along on the shoot in Atlanta to ensure that we had the material he’d already storyboarded in a series of complex animatics for the film [using animator Steve Markowski and editor Evan Schiff]. Jon Amos joined us when we returned to London for sound and picture post production, primarily handling the action sequences, at which he excels.”

Developed by Wright over the past two decades, Baby Driver tells the story of an eponymous getaway driver (Ansel Elgort), who uses earphones to drown out the “hum-in-the-drum” of tinnitus — the result of a childhood car accident — and to orchestrate his life to carefully chosen music. But now indebted to a sinister kingpin named Doc (Kevin Spacey), Baby becomes part of a seriously focused gang of bank robbers, including Buddy and Darling (Jon Hamm and Eiza González), Bats (Jamie Foxx) and Griff (Jon Bernthal). Debora, Baby’s love interest (Lily James), dreams of heading west “in a car I can’t afford, with a plan I don’t have.” Imagine, in a sense, Jim McBride’s Breathless rubbing metaphorical shoulders with Tony Scott’s True Romance.

The film also is indebted to Wright’s 2003 music video for Mint Royale’s Blue Song, during which UK comedian/actor Noel Fielding danced in a stationary getaway car. In that same vein, Baby Driver comprises a sequence of linked songs that tightly choreograph the action and underpin the dramatic arcs being played out, often keying off the songs’ lyrics.

The film’s opener, for example, features Elgort partly lipsyncing to “Bellbottoms,” by the Jon Spencer Blues Explosion, as the villains commit their first robbery. In subsequent scenes, our hero’s movements follow the opening bass riffs of The Damned’s “Neat Neat Neat,” then later to Golden Earring’s “Radar Love” before Queen’s “Brighton Rock” adds complex guitar cacophony to a key encounter scene.

Even the film’s opening titles are accompanied by Baby performing a casual coffee run in a continuous three-minute take to Bob & Earl’s “Harlem Shuffle” — a scene that reportedly took 28 takes on the first day of practical photography in Atlanta. And the percussion and horns of “Tequila” provide syncopation for a protracted gunfight. Fold in “Egyptian Reggae,” “Unsquare Dance,” and “Easy,” followed by “Debora,” and it’s easy to appreciate that Wright is using music as a key and underpinning component of this film. The director also brought in music video choreographer Ryan Heffington to achieve the timing precision he needed.

The swift action is reflected in a fast style of editing, including whip pans and crash zooms, with cuts that are tightly synchronized to the music. “Whereas the majority of Edgar’s previous TV series and films have been parodies, for Baby Driver he had a very different idea,” explains Machliss. Wright had accumulated a playlist of over 30 songs that would inspire various scenes in his script. “It’s something that’s very much a part of my previous films,” says director Wright, “and I thought of this idea of how to take that a stage further by having a character who listens to music the entire time.”

“Edgar had organized a table read of his script in the spring of 2012 in Los Angeles, at which he recorded all of the dialog,” says Machliss. “Taking that recording, some sound effects and the music tracks, I put together a 100-minute ‘radio play’ that was effectively the whole film in audio-only form that Edgar could then use as a selling tool to convince the studios that he had a viable idea. Remember, Baby Driver was a very different format for him and not what he is traditionally known for.”

Australia-native Machliss was on set to ensure that the gunshots, lighting effects, actors and camera movements, plus car hits, all happened to the beat of the accompanying music. “We were working with music that we could not alter or speed up or slow down,” he says. “We were challenged to make sure that each sequence fit in the time frame of the song, as well as following the cadence of the music.”

Almost 95% of the music included in the first draft of Wright’s script made it into the final movie, according to Machliss. “I laid up the relevant animatic as a video layer in my Avid Media Composer and then confirmed how each take worked against the choreographed timeline. This way I always had a reference to it as we were filming. It was a very useful guide to see if we were staying on track.”

Editing On Location
During the Atlanta shoot, Machliss used Apple ProRes digital files captured by an In2Core QTake video assist that was recording taps from the production’s 35mm cameras. “I connected to my Mac via Ethernet so I could create a network to the video assist’s storage. I had access to his QuickTime files the instant he stopped recording. I could use Avid’s AMA function to place the clip in the timeline without the need for transcoding. This allowed almost instantaneous feedback to Edgar as the sequence was built up.”

Paul Machliss on set.

While on location, Machliss used a 15-inch MacBook Pro, Avid Mojo DX and a JVC video monitor “which could double as a second screen for the Media Composer or show full-screen video output via the Mojo DX.” He also had a Wacom tablet, an 8TB Thunderbolt drive, a LaCie 500GB rugged drive — “which would shuttle my media between set and editorial” — and an APU “so that I wouldn’t lose power if the supply was shut down by the sparks!”

LA’s Fotokem handled film processing, with negative scanning by Efilm. DNX files were sent to Company 3 in Atlanta for picture editorial, “where we would also review rushes in 2K sent down the line from Efilm,” says Machliss. “All DI on-lining and grading took place at Molinare in London.” Bill Pope, ASC, was the film’s director of photography.

Picture and Sound Editorial in London
Instead of hiring out editorial suites at a commercial facility in London, Wright and his post teams opted for a different approach. Like an increasing number of London-based productions, they elected to rent an entire floor in an office building.

They located a suitable location on Berners Street, north of the Soho-based film community. As Machliss recalls: “That allowed us to have the picture editorial team in the same space as the sound crew,” which was headed up by Wright’s long-time collaborator Julian Slater, who served as sound designer, supervising sound editor and re-recording engineer on Baby Driver. “Having ready access to Julian and his team meant that we could collaborate very closely — as we had on Edgar’s other films — and share ideas on a regular basis,” as the 10-week Director’s Cut progressed.

British-born Slater then moved across Soho to Goldcrest Films for sound effects pre-dubs, while his co-mixer, Tim Cavagin, worked on dialog and Foley pre-mixes at Twickenham Studios. Print mastering of the Dolby Atmos soundtrack occurred in February 2017 at Goldcrest, with Slater handling music and SFX, while Cavagin oversaw dialog and Foley. “Following Edgar’s concept of threading together the highly choreographed songs with linking scenes, Jon and I began the cut in London against the pre-assembled material from Atlanta,” says Machliss.

To assist Machliss during his picture cut, the film’s sound designer had provided a series of audio stems for his Avid. “Julian [Slater] had been working on his sound effects and dialog elements since principal photography ended in Atlanta. He had prepared separate, color-coded left-center-right stems of the music, dialog and SFX elements he was working on. I laid these [high-quality tracks] into Media Composer so I could better appreciate the intricacies of Julian’s evolving soundtrack. It worked a lot better than a normal rough mix of production dialog, rough sound effects and guide music.”

“From its inception, this was a movie for which music and sound design worked together as a whole piece,” Slater recalls. “There is a large amount of syncopation of the diegetic sounds [implied by the film’s action] to the music track Baby is listening to. Sometimes it’s obvious because the action was filmed with that purpose in mind. For example, walking in tempo to the music track or guns being fired in tempo. But many times it’s more subtle, including police sirens or distant trains that have been pitched and timed to the music,” and hence blend into the overall musical journey. “We strived to always do this to support the story, and to never distract from it.”

Because of the lead character’s tinnitus, Slater worked with pitch changes to interweave elements of the film’s soundtrack. “Whenever Baby is not listening to music, his tinnitus is present to some degree. But it became apparent very soon in our design process that strident, high-pitched ‘whistle tones’ would not work for a sustained period of time. Working closely with composer Steven Price, we developed a varied set of methods to convey the tinnitus — it’s rarely the same sound twice. Much of the time, the tinnitus is pitched according to either the outgoing or incoming music track. This then enabled us to use more of it, yet at the same time be quite subtle.”

Meticulous Planning for Set Pieces and Car Chases
Picture editor Amos joined the project at the start of the Director’s Cut to handle the film’s set pieces. He says, “These set pieces were conceptually very different from the vast majority of action scenes in that they were literally built up around the music and then visualized. Meticulous development and planning went into these sequences before the shoot even began, which was decisive in making the action become musical. For example, the ‘Tequila’ gunfight started as a piece of music by Button Down Brass. It was then laced with gunfire and SFX pitched to the music, and in time with the drum hits — this was done at the script stage by Mark Nicholson (aka, Osymyso, a UK musician/DJ) who specializes in mashup/bastard pop and breakbeat.”

Storyboards then grew around this scripted sound collage, which became a precise shot list for the filmed sequences. “Guns were rigged to go off in time with the music; it was all a very deliberate thing,” adds Amos. “Clearly, there was a lot of editing still to be done, but this approach illustrates that there’s a huge difference between something that is shot and edited to music, and something that is built around the music.”

“All the car chases for Baby Driver were meticulously planned, and either prevised or storyboarded,” Amos explains. “This ensured that the action would always fit into the time slot permitted within the music. The first car chase [against the song ‘Bellbottoms’] is divided into 13 sections, to align to different progressions in the music. One of the challenges resulted from the decision to never edit the music, which meant that none of these could overrun. Stunts were tested and filmed by second unit director Darrin Prescott, and the footage passed back to editorial to test against the timing allowed in the animatic. If a stunt couldn’t be achieved in the time allowed, it was revised and tweaked until it worked. This detailed planning gave the perfect backbone to the sequences.”

Amos worked on the sequences sequentially, “using the animatic and Paul’s on-set assembly as reference,” and began to break down all the footage into rolls that aligned to specific passages of the music. “There was a vast amount of footage for all the set pieces, and things are not always shot in order. So generally I spent a lot of time breaking the material down very methodically. I then began to make selects and started to build the sequences from scratch, section by section. Once I completed a pass, I spent some time building up my sound layers. I find this helps evolve the cut, generating another level of picture ideas that further tighten the syncopation of sound and picture.”

Amos’ biggest challenge, despite all the planning, was finding ways to condense the material into its pre-determined time slot. “The real world never moves quite like animatics and boards. We had very specific points in every track where certain actions had to take place; we called these anchor points. When working on a section, we would often work backwards from the anchor point knowing, for instance, that we only had 20 seconds to tell a particular part of the story. Initially, it can seem quite restrictive, but the edits become so precise.

Jonathan Amos

“The time restriction led to a level of kineticism and syncopation that became a defining feature of the movie. While the music may be the driving force of the action scenes, editorial choices were always rooted in the story and the characters. If you lose sight of the characters, the audience will disengage with the sequence, and you’ll lose all the tension you’ve worked so hard to create. Every shot choice was therefore very considered, and we worked incredibly hard to ensure we never wasted a frame, telling the story in the most compelling, rhythmic and entertaining way we could.”

“Once we had our cut,” Machliss summarizes, “we could return the tracks to Julian for re-conforming,” to accommodate edit changes. “It was an excellent way of working, with full-sounding edit mixes.”

Summing up his experience on Baby Driver, Machliss considers the film to be “the hardest job I’ve ever done, but the most fun I’ve ever had. Ultimately, our task was to create a film that on one level could be purely enjoyed as an exciting/dramatic piece of cinema, but, on repeated viewing, would reveal all the little elements ‘under the surface’ that interlock together — which makes the film unique. It’s a testament to Edgar’s singular vision and, in that regard, he is a tremendously exciting director to work with.”


Mel Lambert has been involved with production industries on both sides of the Atlantic for more years than he cares to remember. He is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. He is also a long-time member of the UK’s National Union of Journalists.