Category Archives: TV Series

FX’s Fargo features sounds as distinctive as its characters

By Jennifer Walden

In Fargo, North Dakota, in the dead of winter, there’s been a murder. You might think you’ve heard this story before, but Noah Hawley keeps coming up with a fresh, new version of it for each season of his Fargo series on FX. Sure, his inspiration was the Coen brothers’ Oscar-winning Fargo film, but with Season 3 now underway it’s obvious that Hawley’s series isn’t simply a spin-off.

Martin Lee and Kirk Lynds.

Every season of the Emmy-winning Fargo series follows a different story, with its own distinct cast of characters, set in its own specified point in time. Even the location isn’t always the same — Season 3 takes place in Minnesota. What does link the seasons together is Hawley’s distinct black humor, which oozes from these disparate small-town homicides. He’s a writer and director on the series, in addition to being the showrunner and an executive producer. “Noah is very hands-on,” confirms re-recording mixer Martin Lee at Tattersall Sound & Picture in Toronto, part of the SIM Group family of companies, who has been mixing the show with re-recording mixer Kirk Lynds since Season 2.

“Fargo has a very distinct look, feel and sound that you have to maintain,” explains Lee. “The editors, producers and Noah put a lot of work into the sound design and sound ideas while they are cutting the picture. The music is very heavily worked while they are editing the show. By the time the soundtrack gets to us there is a pretty clear path as to what they are looking for. It’s up to us to take that and flesh it out, to make it fill the 5.1 environment. That’s one of the most unique parts of the process for us.”

Season 3 follows rival brothers, Emmit and Ray Stussy (both played by Ewan McGregor). Their feud over a rare postage stamp leads to a botched robbery attempt that ultimately ends in murder (don’t worry, neither of McGregor’s characters meets his demise… yet).

One of the most challenging episodes to mix this season, so far, was Episode 3, “The Law of Non-Contradiction.” The story plays out across four different settings, each with unique soundscapes: Minnesota, Los Angeles in 2010, Los Angeles in 1975 and an animated sci-fi realm. As police officer Gloria Burgle (Carrie Coon) unravels the homicide in Eden Valley, Minnesota, her journey leads her to Los Angeles. There the story dives into the past, to 1975, to reveal the life story of science fiction writer Thaddeus Mobley (Thomas Mann). The episode side-trips into animation land when Gloria reads Mobley’s book titled The Planet Wyh.

One sonic distinction between Los Angeles in 2010 and Los Angeles of 1975 was the density of traffic. Lee, who mixed the dialogue and music, says, “All of the scenes that were taking place in 2010 were very thick with traffic and cars. That was a technical challenge, because the recordings were very heavy with traffic.”

Another distinction is the pervasiveness of technology in social situations, like the bar scene where Gloria meets up with a local Los Angeles cop to talk about her stolen luggage. The patrons are all glued to their cell phones. As the camera pans down the bar, you hear different sounds of texting playing over a contemporary, techno dance track. “They wanted to have those sounds playing, but not become intrusive. They wanted to establish with sound that people are always tapping away on their phones. It was important to get those sounds to play through subtly,” explains Lynds.

In the animated sequences, Gloria’s voice narrates the story of a small android named MNSKY whose spaceman companion dies just before they reach Earth. The robot carries on the mission and records an eon’s worth of data on Earth. The robot is eventually reunited with members of The Federation of United Planets, who cull the android’s data and then order it to shut down. “Because it was this animated sci-fi story, we wanted to really fill the room with the environment much more so than we can when we are dealing with production sound,” says Lee. “As this little robotic character is moving through time on Earth, you see something like the history of man. There’s voiceover, sound effects and music through all of it. It required a lot of finesse to maintain all of those elements with the right kind of energy.”

The animation begins with a spaceship crashing into the moon. MNSKY wakes and approaches the injured spaceman who tells the android he’s going to die. Lee needed to create a vocal process for the spaceman, to make it sound as though his voice is coming through his helmet. With Audio Ease’s Altiverb, Lee tweaked the settings on a “long plastic tube” convolution reverb. Then he layered that processed vocal with the clean vocal. “It was just enough to create that sense of a helmet,” he says.

At the end, when MNSKY rejoins the members of the Federation on their spaceship, it’s a very different environment from Earth. The large, ethereal space is awash in long, warm reverbs, which Lynds applied using plug-ins like PhoenixVerb 5.1 and Altiverb. Lee also applied a long reverb treatment to the dialogue. “The reverbs have quite a significant pre-delay, so you almost have that sense of a repeat of the voice afterwards. This gives it a very distinctive, environmental feel.”
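For readers curious about the underlying technique, the pre-delayed wet/dry layering the mixers describe can be sketched in a few lines of Python. This is a simplified illustration, not Altiverb’s or PhoenixVerb’s actual processing; the one-tap impulse response, pre-delay length and wet gain are placeholder values.

```python
import numpy as np

def predelayed_reverb(dry, ir, predelay_samples, wet_gain=0.5):
    """Convolve the dry signal with a reverb impulse response,
    delay the wet result, then layer it under the clean signal."""
    # Convolution reverb: the impulse response models the "room."
    wet = np.convolve(dry, ir)
    # Pre-delay: shift the wet signal later in time, so the reverb
    # arrives as an audible "repeat" after the direct voice.
    wet = np.concatenate([np.zeros(predelay_samples), wet])
    # Sum the dry and (attenuated) wet layers at matching length.
    out = np.zeros(len(wet))
    out[:len(dry)] += dry
    out += wet_gain * wet
    return out

# Trivial check: a single click through a one-tap IR with a
# 4-sample pre-delay gives the direct click plus a later echo.
y = predelayed_reverb(np.array([1.0]), np.array([1.0]),
                      predelay_samples=4, wet_gain=0.5)
```

With a long impulse response and a large pre-delay, the echo separates clearly from the direct voice, which is the “repeat of the voice afterwards” effect Lee describes.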

Lynds and Lee spend two days premixing their material on separate dub stages. For the premix, Lynds typically has all the necessary tracks from supervising sound editor Nick Forshager while Lee’s dialogue and music tracks come in more piecemeal. “I get about half the production dialogue on day one and then I get the other half on day two,” says Lee. “ADR dribbles in the whole time, including well into the mixing process. ADR comes in even after we have had several playbacks already.”

Fortunately, the show doesn’t rely heavily on ADR. Lee notes that they put a lot of effort into preserving the production sound. “We use a combination of techniques. The editors find the cleanest lines and takes (while still keeping the performance), then I spend a lot of time cleaning that up,” he says.

This season Lee relies more on Cedar’s DNS One plug-in for noise reduction and less on the iZotope RX5 (Connect version). “I’m finding with Fargo that the showrunners are uniquely sensitive to the effects of the iZotope processing. This year it took more work to find the right sound. It ends up being a combination of both the Cedar and the RX5,” reports Lee.

After premixing, Lee and Lynds bring their tracks together on Tattersall’s Stage 1. They have three days for the 5.1 final mix. They spend one (very) long day building the episode in 5.1 and then send their mix to Los Angeles for Forshager and co-producer Gregg Tilson to review. Lee and Lynds address the first round of notes the next morning and send the mix back to Los Angeles for another playback. Each successive playback is screened for a larger group, with the last playback, on the third day, reserved for Hawley.

“One of the big challenges with the workflow is mixing an episode in one day. It’s a long mix day. At least the different time zones help. We send them a mix to listen to typically around 6-7pm PST, so it’s not super late for them. We start at 8am EST the next morning, which is three hours ahead of their time. By the time they’re in the studio and ready to listen, it is 10am their time and we’ve already spent three or four hours handling the revisions. That really works to our advantage,” says Lee.

Sound in the Fargo series is not an afterthought. It’s used to build tension, like a desk bell that rings for an uncomfortably long time, or to set the mood of a space, like an overly noisy fish tank in a cheap apartment. By the time the tracks have made it to the mixers, there’s been “a lot of time and effort spent thinking about what the show was going to sound like,” says Lynds. “From that sense, the entire mix for us is a creative opportunity. It’s our chance to re-create that in a 5.1 environment, and to make that bigger and better.”

You can catch new episodes of Fargo on FX Networks, Wednesdays at 10pm EST.


Jennifer Walden is a New Jersey-based audio engineer and writer.

Assistant Editors’ Bootcamp coming to Burbank in June

The new Assistant Editors’ Bootcamp, founded by assistant/lead editors Noah Chamow (The Voice) and Conor Burke (America’s Got Talent), is a place for assistant editors and aspiring assistants to learn and collaborate with one another in a low-stakes environment. The next Assistant Editors’ Bootcamp classes will be held on June 10-11, along with a Lead Assistant Editors’ class geared toward understanding troubleshooting and system performance on June 24-25. All classes, sponsored by AlphaDogs’ Editor’s Lounge, will be held at Skye Rentals in Burbank.

The classes will cover such topics as The Fundamentals of Video, Media Management, Understanding I/O and Drive Speed, Prepping Footage for Edit, What’s New in Media Composer, Understanding System Performance Bottlenecks and more. Cost is $199 for two days for the Assistant Editor class, and $299 for two days for the Lead Assistant Editor class. Space is on a first-come, first-served basis and is limited to 25 participants per course. You can register here.

A system with Media Composer 8.6 or later and an external hard drive is required to take the class (a 30-day Avid trial is available). 8GB of system memory and Windows 7/OS X 10.9 or later are needed to run Media Composer 8.6. Computer rentals are available for as little as $54 a week from Hi-Tech Computer Rental in Burbank.

Chamow and Burke came up with the idea for Assistant Editors’ Bootcamp when they realized how challenging it is to gain any real on-the-job experience in today’s workplace. With today’s focus being primarily on doing things faster and more efficiently, it’s almost impossible to find the time to figure out why one method of doing something is faster than another. Having worked extensively in reality television and created “The Super Grouper,” a multi-grouping macro for Avid that is now widely used in reality post workflows, Chamow understands first-hand the landscape of the assistant editor’s world. “One of the most difficult things about working in the entertainment industry, especially in a technical position, is that there is never time to learn,” he says. “I’m very passionate about education and hope by hosting these classes, I can help other assistants hone their skills as well as helping those who are new to the business get the experience they need.”

Having worked as both an assistant editor and lead assistant editor, Burke has created workflows and overseen post for up to 10 projects at a time, before moving into his current position at NBC’s America’s Got Talent. “In my years of experience working on grueling deadlines, I completely understand how difficult the job of an assistant editor can be, having little or no time to learn anything other than what’s right in front of you,” he says. “In teaching this class, I hope to make peers feel more confident and have a better understanding of their work, taking them to the next level in their careers.”

Main Image (L-R): Noah Chamow and Conor Burke.


The A-List: Director Ron Howard discusses National Geo’s Genius

By Iain Blair

Ron Howard has done it all in Hollywood. The former child star of The Andy Griffith Show and Happy Days not only successfully made the tricky transition to adult actor (at 22 he starred opposite John Wayne in The Shootist and was nominated for a Best Supporting Actor Oscar), but went on to establish himself as an Oscar-winning director and producer (A Beautiful Mind). He is also one of Hollywood’s most beloved, commercially successful and versatile helmers.

Since making his directorial debut in 1977 with Grand Theft Auto (when he was still on Happy Days), he’s made an eclectic group of films about boxers (Cinderella Man), astronauts (Apollo 13), mermaids (Splash), symbologists (The Da Vinci Code franchise), politicians (Frost/Nixon), firefighters (Backdraft), mathematicians (A Beautiful Mind), Formula One racing (Rush), whalers (In the Heart of the Sea) and the Fab Four (his first documentary, The Beatles: Eight Days a Week).

Born in Oklahoma with showbiz in his DNA — his parents were both actors — Howard “always wanted to direct” and notes that “producing gives you control.” In 1986, he co-founded Imagine Entertainment with Brian Grazer, a powerhouse in film and TV (Empire, Arrested Development) production. His latest project is the new Genius series for National Geographic.

The 10-part global event series — the network’s first scripted series — is based on Walter Isaacson’s book “Einstein: His Life and Universe” and tracks Albert Einstein’s rise from humble origins as an imaginative and rebellious thinker through his struggles to be recognized by the establishment, to his global celebrity status as the man who unlocked the mysteries of the cosmos with his theory of relativity.

But if you’re expecting a dry, intellectual by-the-numbers look at his life and career, you’re in for a big surprise.

With an impressive cast that includes Geoffrey Rush as the celebrated scientist in his later years, Johnny Flynn as Einstein in the years before he rose to international acclaim and Emily Watson as his second wife — and first cousin — Elsa Einstein, the show is full of sex, drugs and rock ‘n’ roll.

We’re mostly joking, but the series does balance the hard-to-grasp scientific theories with an entertaining exploration of a man with an often very messy private life as it follows Einstein’s alternately exhilarating emotions and heartlessness in dealing with his closest personal relationships, including his children, his two wives and the various women with whom he cheats on them.

Besides all the personal drama, there’s plenty of global drama as Genius is set against an era of international conflict over the course of two world wars. Faced with rising anti-Semitism in Europe, surveillance by spies and the potential for atomic annihilation, Einstein struggles as a husband and a father, not to mention as a man of principle, even as his own life is put in danger.

I talked recently with Ron Howard about directing the first episode and his love of production and post.

What was the appeal of doing this and making your scripted television directorial debut with the first episode?
I’ve become a big fan of all the great TV shows people are doing now, where you let a story unfold in a novelistic way, and I was envious of a lot of my peers getting into doing TV — and this was a great project that just really suits the TV format. Over the years, I had read various screenplays about Einstein but they just never worked as a movie, so when National Geographic wanted to reach out to their audience in a more ambitious way, suddenly there was this perfect platform to do this life justice and have the length it needed. It’s an ideal fit, and it was perfect to do it with National Geographic.

Given that you had considered making a film about him, how familiar were you with Einstein and his life? How do you find the drama in an academic’s life?
I thought I had some insight, but I was blown away by the book and Noah Pink’s screenplay, and everyone on the team brought their own research to the process, and it became more and more fascinating. There was this constant pressure on Einstein that I felt we could work with through the whole series, and that I never realized was there. And with that pressure, there’s drama. We came very close to not benefiting from his genius because of all the forces working against him: sometimes external ones, like governments and academic institutions, but often his own foibles and flaws. He was even on a hit list. So I was really fascinated by his whole story.

What most surprised you about Einstein once you began delving deeper into his private life?
That he was such a Lothario! He had quite a complicated love life, but it was also that he had such a dogged commitment to his principles and logic and point-of-view. I was doing post on the Beatles documentary as we prepped, and it was the same thing with those young men. They often didn’t listen to outside influences and people telling them it couldn’t be done. They absolutely committed to their musical vision and principles with all their drive and focus, and it worked — and collectively I think you could say the band was genius.

Einstein also trusted his convictions, whether it was physics or math, and if the conventional answers didn’t satisfy his sense of logic, he’d just dig deeper. The same thing can be said for his personal life and relationships, and trying to find a balance between his career and life’s work, and family and friends. Look at his falling in love with his fellow physics student Mileva Maric, which causes all sorts of problems, especially when she unexpectedly gets pregnant. No one else thought she was particularly attractive, she was a bit of an outcast as the only female physics student, and yet his logic called him to her. The same thing with politics. He went his own way in everything. He was a true renaissance man, eternally curious about everything.

In terms of dealing with very complex ideas that aren’t necessarily very cinematic, it must have helped that you’d made A Beautiful Mind?
Yes, we saw a lot of similarities between the two. It really helped that both men were essentially visualists — Einstein even more so than John Nash. That gave us a big advantage and the chance to show audiences some of his famous thought experiments in cinematic ways. He described them very vividly, and they’re a fantastic jumping-off point; it was his visualizations that helped him wrap his head around the physics. He began with something he could grasp physically and then went back to prove it with the math. Those principles gave him the amazing insights about the nature of the universe, and time and space, that we’ve all benefitted from.

I assume you began integrating post and all the VFX very early on?
Right away, in preproduction meetings in Prague, in the Czech Republic, where Einstein lived and taught early in his career. We had our whole team there on location, including our VFX supervisor Eric Durst and his team, DP Mathias Herndl, our production designers and art directors and so on. With all the VFX, we stayed pretty close to how Einstein described his thought experiments. The one that starts off this first episode is very vivid, whereas the first one he has as a 17-year-old boy is done in a more chalk-board kind of way, where he faints and can barely hang on mentally to the image. All the dailies and visual effects were done by UPP.

Where did you do the post?
We did all the editing and sound back in LA.

Do you like the post process?
I love it. I love the edit and slowly pulling it all together after the stress of the shoot.

It was edited by James Wilcox, who’s done CSI: Miami and Hawaii Five-O, along with Debby Germino and J. Kathleen Gibson. How early was James involved and was he on set?
Dan [Hanley] and Mike [Hill], my usual editors, weren’t available. It’s the first time I’d worked with James and he’s very creative and did a great job. He wasn’t on the set, but we were constantly in communication and we’d send him material back to LA and then when I got back, we sat down together.

The show constantly cuts back and forth in time.
Yes, I was fascinated by all those transitions and I worked very closely with my team to make sure we had all that down, and that it all flowed smoothly in the edit. For instance, Johnny Flynn plays violin and he trained classically, so he actually plays in all those scenes. Geoffrey doesn’t play violin, but he practiced for several months, and we had a teacher on set too. Geoffrey was so dedicated to creating this character. They both looked at tons of footage of Einstein as an older man, so Johnny could develop aspects of Einstein’s manner and behavior as the younger one, which Geoffrey could work with later, so we had a real continuity to the character. That’s a big reason why I wanted to be so hands-on with the first episode, as we were defining so many key aspects of the man and the aesthetics and the way we’d be telling the whole story.

Can you talk about working on the sound and music?
It’s always huge to me and adds so much to every scene. Lorne Balfe wrote a fantastic score and we had a great sound team: production sound mixer Peter Forejt, supervising sound editor Daniel Pagan, music editor Del Spiva and re-recording mixers Mark Hensley and Bob Bronow. For post production audio we used Smart Post Sound.

The DI must have been important?
It was very important since we were trying to do stuff with the concept of time in very subtle ways using the camera work, the palette and the lighting style. This all changed subtly depending on whether it was an Einstein memory, or a flashback to his younger, brasher self, or looking ahead to the iconic older man where it was all a little more formal. So we went for different looks to match the different energies and, of course, the editing style had to embody all of that as well. The colorist was Pankaj Bajpai, and he did a great job.

What’s next?
I plan to do more TV. Remember, I came out of TV and it’s so exciting now. I’m also developing several movie projects, including Seveneves, a sci-fi film, and Under the Banner of Heaven, which is based on the Jon Krakauer bestseller. So whatever comes together first.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.


Game of Thrones: VFX associate producer Adam Chazen

With excitement starting to build for the seventh season of HBO’s Game of Thrones, what better time to take a quick look back at last season’s VFX workflow. HBO associate VFX producer Adam Chazen was kind enough to spend some time answering questions after just wrapping Season 7.

Tell us about your background as a VFX associate producer and what led you to Game of Thrones.
I got my first job as a PA at VFX studio Pixomondo. I was there for a few years, working under my current boss Steve Kullback (visual effects producer on Game of Thrones). He took me with him when he moved to work on Yogi Bear, and then on Game of Thrones.

I’ve been with the show since 2011, so this is my sixth year on board. It’s become a real family at this point; lots of people have been on since the pilot.

From shooting to post, what is your role working on Game of Thrones?
As the VFX associate producer, in pre-production mode I assist with organizing our previs and concept work. I help run and manage our VFX database and I schedule reviews with producers, directors and heads of departments.

During production I make sure everyone has what they need on set in order to shoot for the various VFX requirements. Also during production, we start to post the show — I’m in charge of running review sessions with our VFX supervisor Joe Bauer. I make sure that all of his notes get across to the vendors and that the vendors have everything they need to put the shots together.

Season 7 has actually been the longest we’ve stayed on set before going back to LA for post. When in Belfast, it’s all about managing the pre-production and production process, making sure everything gets done correctly to make the later VFX adjustments as streamlined as possible. We’ll have vendors all over the world working on that next step — from Australia to Spain, Vancouver, Montreal, LA, Dublin and beyond. We like to say that the sun never sets on Game of Thrones.

What’s the process for bringing new vendors onto the show?
They could be vendors that we’ve worked with in the past. Other times, we employ vendors that come recommended by other people. We check out industry reels and have studios do testing for us. For example, when we have dragon work we ask around for vendors willing to run dragon animation tests for us. A lot of it is word of mouth. In VFX, you work with the people that you know will do great work.

What’s your biggest challenge in creating Game of Thrones?
We’re doing such complex work that we need to use multiple vendors. This can be a big hurdle. In general, whether it be film or TV, when you have multiple vendors working on the same shot, it becomes a potential issue.

Linking in with cineSync helps. We can have a vendor in Australia and a vendor in Los Angeles both working on the same shot, at exactly the same time. I first started using cineSync while at Pixomondo and found it makes the revision process a lot quicker. We send notes out to vendors, but most of the time it’s easier to get on cineSync, see the same image and draw on it.

Even the simple move of hovering a cursor over the frame can answer a million questions. We have several vendors who don’t use English as their first language, such as those in Spain. In these cases, communication is a lot easier via cineSync. By pointing to a single portion of a single frame, we completely bypass the language barrier. It definitely helps to see an image on screen versus just explaining it.

What is your favorite part of the cineSync toolkit?
We’ve seen a lot of cool updates to cineSync. Specifically, I like the notes section, where you can export a PDF that includes whichever frame each note is attached to.

Honestly, just seeing a cursor move on-screen from someone else’s computer is huge. It makes things so much easier to just point and click. If we’re talking to someone on the phone, trying to tell them about an issue in the upper left hand corner, it’s going to be hard to get our meaning across. cineSync takes away all of the guesswork.

Besides post, we also heavily use cineSync for shoot needs. We shoot the show in Northern Ireland, Iceland, Croatia, Spain and Calgary. With cineSync, we are able to review storyboards, previs, techvis and concepts with the producers, directors, HODs and others, wherever they are in the world. It’s crucial that everyone is on the same page. Being able to look at the same material together helps everyone get what they want from a day on set.

Is there a specific shot, effect or episode you’re particularly proud of?
The Battle of the Bastards — it was a huge episode. Particularly, the first half of the episode when Daenerys came in with her dragons at the battle of Meereen, showing those slavers who is boss. Meereen City itself was a large CG creation, which was unusual for Game of Thrones. We usually try to stay away from fully CG environments and like to get as much in-camera as possible.

For example, when the dragon breathes fire, that’s an actual flamethrower we shot. Back in Season 5, we started pre-animating the dragon, translating that animation to a motion control rig and attaching a flamethrower to it. It moves exactly how the dragon would move, giving us a practical element to use in the shot. CG fire can be done, but it’s really tricky. Real is real, so you can’t question it.

With multiple vendors working on the sequence, we had Rodeo FX do the environment while Rhythm & Hues did the dragons. We used cineSync a lot, reviewing shots between both vendors in order to point out areas of concern. Then in the second half of the episode, which was the actual Battle of the Bastards, the work was brilliantly done by Australian VFX studio Iloura.


Jason Moss composes music for ABC’s The Toy Box

By Jennifer Walden

Children may not be the best source for deciding when bedtime should be, or deciding what’s for dinner (chicken nuggets again?), but who better to decide what toys kids want to play with? A large part of the Tom Hanks film Big was based on this premise.

ABC’s new inventor-centric series, The Toy Box, which premiered in April, features four young judges who are presented with new toy inventions. They then get to decide which toy prototypes would be popular with others in their demographic. Toy inventors competing on the show first meet with a set of “expert mentors,” a small group of adults who delve into the specifics of the toy and offer advice.

Jason Moss

If the toy makes it past that panel, it gets put into the “toy box.” The toy is then presented to the four young judges, who get to play with it, ask questions and give their critique to the toy inventor. The four young judges deliberate and make a final decision on which toy will advance to the next round. At the end of the season, the judges will choose one winning toy to be made by Mattel and sold exclusively at Toys ‘R’ Us.

The Toy Box needed a soundtrack that could both embody the essence of juvenile joviality and portray the pseudo-seriousness of its pre-teen decision makers. It’s not a job for your average reality show composer. It required askew musical sensibilities. “The music is fun and super-pop sounding with cool analog synths and video game sounds. It’s really energetic and puts a smile on your face,” says the series composer/music supervisor Jason Moss at Super Sonic Noise in Los Angeles. “Then for the decision-making cues, as the kids decide whether they like a toy and what they’re going to do, it had to be something other than what you’d expect. It couldn’t sound too dark. It still had to be quirky.”

Moss knows quirky. He was the composer on IFC’s Gigi Does It, starring David Krumholtz as an eccentric Jewish grandmother living in Florida. Moss also composed the theme music for the Seeso original series Bajillion Dollar Propertie$, a partially improvised comedy series that pokes fun at real estate reality shows.

Moss covered all of The Toy Box’s musical needs — from high-energy pop and indie rock tracks when the kids are playing with the toys to comedic cues infused with ukulele and kitschy strings, and tension tracks for moments of decision. He wrote original music as well as curated selections from the Bulletproof Bear music catalog. Bulletproof Bear offers a wide variety of licensable tracks written by Moss, plus other music catalogs they represent. “It’s a big collection with over 33,000 tracks. We can really compete with bigger music license companies because we have a huge amount of diverse music that can cover the whole production from head to toe,” he says.

The Gear
Moss composes in Apple’s Logic Pro X. He performed live guitars, bass and ukulele (using the Kala U-Bass bass ukulele). For mics, he chose Miktek Audio’s CV4 large diaphragm tube condenser and their C5 small diaphragm pencil condenser, each paired with Empirical Labs Mike-E preamps.

Moss combined the live sounds with virtual instruments, particularly those from Spectrasonics. XLN Audio’s Addictive Drums were his go-to for classic and modern drum sounds. For synths, he used reFX’s Nexus, libraries from Native Instruments’ Kontakt, Arturia’s Analog Lab and their VOX Continental V. He also called on the ROLI Equator sound engine via the ROLI Rise 25-key MIDI controller, which features soft, squishy silicone keys much different from a traditional keyboard controller. The Akai MPK88 weighted key controller is Moss’ choice in that department. For processing and effects, he chose plug-ins by Soundtoys and PSP Audioware. He also incorporated various toy and video game sounds into the tracks.

The Score
The show’s two-minute opener combines three separate segments — the host (Modern Family‘s Eric Stonestreet), the expert mentor introductions and the judges’ introductions. Each has its own musical vibe. The host and the expert mentors have original music that Moss wrote specifically for the show. The judges have a dramatic pulsing-string track that is licensed from Bulletproof Bear’s catalog. In addition, a five-second tag for The Toy Box logo is licensed from the Bulletproof Bear catalog. That tag was composed by Jon LaCroix, who is one of Moss’ business partners. As for the dramatic strings on the kids’ entrance, Moss, who happened to write that cue, says, “The way they filmed the kids… it’s like they are little mini adults. So the theme has some seriousness to it. In context, it’s really cute.”

For the decision-making cues, Moss wanted to stay away from traditional tension strings. To give the track a more playful feel that would counterbalance the tension, he used video game sounds and 808 analog drum sounds. “I also wanted to use organic sounds that were arpeggiated and warm. They are decision-making tick-tock tracks, but I wanted to make it more fun and interesting,” says Moss.

“We were able to service the show on the underscore side with Bulletproof Bear’s music catalog in conjunction with my original music. It was a great opportunity for us to keep all the music within our company and give the client a one-stop shop, keeping the music process organized and easy,” he explains. “It was all about finding the right sound, or the right cue, for each of those segments. At the end of the day, I want to make sure that everybody is happy, acknowledge the showrunners’ musical vision and strive to capture that. It was a super-fun experience, and hopefully it will come back for a second, third and tenth season! It’s one of those shows you can watch with your kids. The kid judges are adorable and brutally honest, and with the myriad of adult programming out there, it’s refreshing to see a show like The Toy Box get green-lit.”


A chat with Emmy-winning comedy editor Sue Federman

This sitcom vet talks about cutting Man With A Plan and How I Met Your Mother.

By Dayna McCallum

The art of sitcom editing is widely enjoyed but underappreciated. While millions of people literally laugh out loud every day enjoying their favorite situation comedies, very few give credit to the maestro behind the scenes: the sitcom editor.

Sue Federman is one of the best in the business. Her work on the comedy How I Met Your Mother earned three Emmy wins and six nominations. Now the editor of CBS’ new series, Man With A Plan, Federman is working with comedy legends Matt LeBlanc and James Burrows to create another classic sitcom.

However, Federman’s career in entertainment didn’t start in the cutting room; it started in the orchestra pit! After working as a professional violinist with orchestras in Honolulu and San Francisco, she traded in her bow for an Avid.

We sat down to talk with Federman about the ins and outs of sitcom editing, that pesky studio audience, and her journey from musician to editor.

When did you get involved with your show, and what is your workflow like?
I came onto Man With A Plan (MWAP) after the original pilot had been picked up. They recast one of the leads, so there was a reshoot of about 75 percent of the pilot with our new Andi, Liza Snyder. My job was to integrate the new scenes with the old. It was interesting to preserve the pace and feel of the original and to be free to bring my own spin to the show.

The workflow of the show is pretty fast since there’s only one editor on a traditional audience sitcom. I usually put a show together in two to three days, then work with the producers for one to two days, and then send a pretty finished cut to the studio/network.

What are the biggest challenges you face as an editor on a traditional half-hour comedy?
One big challenge is managing two to three episodes at a time — assembling one show while doing producer or studio/network notes on another, as well as having to cut preshot playbacks for show night, which can be anywhere from three to eight minutes of material that has to be cut pretty quickly.

Another challenge is the live audience laughter. It’s definitely a unique part of this kind of show. I worked on How I Met Your Mother (HIMYM) for nine years without an audience, so I could completely control the pacing. I added fake laughs that fit the performances and things like that. When I came back to a live audience show, I realized the audience is a big part of the way the performances are shaped. I’ve learned all kinds of ways to manipulate the laughs and, hopefully, still preserve the spontaneous live energy of the show.

How would you compare cutting comedy to drama?
I haven’t done much drama, but I feel like the pace of comedy is faster in every regard, and I really enjoy working at a fast pace. Also, as opposed to a drama or anything shot single-camera, the coverage on a multi-cam show is pretty straightforward, so it’s really all about performance and pacing. There’s not a lot of music in a multi-cam, but you spend a lot of time working with the audience tracks.

What role would you say an editor has in helping to make a “bit” land in a half-hour comedy?
It’s performance, timing and camera choices — and when it works, it feels great. I’m always amazed at how changing an edit by a frame or two can make something pop. Same goes for playing something wider or closer depending on the situation.

MWAP is shot before a live studio audience. How does that affect your rhythm?
The audience definitely affects the rhythm of the show. I try to preserve the feeling of the laughs and still keep the show moving. A really long laugh is great on show night, but usually we cut it down a bit and play off more reactions. The actors on MWAP are great because they really know how to “ride” the laughs and not break character. I love watching great comedic actors, like the cast of I Love Lucy, for example, who were incredible at holding for laughs. It’s a real asset and very helpful to the editor.

Can you describe your process? And what system do you edit the show on?
I’ve always used the Avid Media Composer. Dabbled with Final Cut, but prefer Avid. I assemble the whole show in one sequence and go scene by scene. I watch all of the takes of a scene and make choices for each section or sometimes for each line. Then I chunk the scene together, sometimes putting in two choices for a line or area. I then cut into the big pieces to select the cameras for each shot. After that, I go back and find the rhythm of the scene — tightening the pace, cutting into the laughs and smoothing them.

After the show is put together, I go back and watch the whole thing again, pretending that I’ve never seen it, which is a challenge. That makes me adjust it even more. I try to send out a pretty polished first cut, without cutting any dialogue to show the producers everything, which seems to make the whole process go faster. I’m lucky that the directors on MWAP are very seasoned and don’t really give me many notes. Jimmy Burrows and Pam Fryman have directed almost all of the episodes, and I don’t send out a separate cut to either of them. Particularly with Pam, as I’ve worked with her for about 11 years, so we have a nice shorthand.

How do assistant editors work into the mix?
My assistant, Dan “Steely” Esparza, is incredible! He allows me to show up to work every day and not think about anything other than cutting the show. He’s told me, even though I always ask, that he prefers not to be an editor, so I don’t push him in that direction. He is excellent at visual effects and enjoys them, so I always have him do those. On HIMYM, we had quite a lot of visual effects, so he was pretty busy there. But on MWAP, it’s mostly rough composites for blue/greenscreen scenes and painting out errant boom shadows, boom mics and parts of people.

Your work on HIMYM was highly lauded. What are some of your favorite “editing” moments from that show and what were some of the biggest challenges they threw at you?
I really loved working on that show — every episode was unique, and it really gave me opportunities to grow as an editor. Carter Bays and Craig Thomas were amazing problem solvers. They were able to look at the footage and make something completely different out of it if need be. I remember times when a scene wasn’t working or was too long, and they would write some narration, record the temp themselves, and then we’d throw some music over it and make it into a montage.

Some of the biggest editing challenges were the music videos/sequences that were incorporated into episodes. There were three complete Robin Sparkles videos and many, many other musical pieces, almost always written by Carter and Craig. In “P.S. I Love You,” they incorporated her last video into kind of a Canadian Behind the Music about the demise of Robin Sparkles, and that was pretty epic for a sitcom. The gigantic “Subway Wars” was another big challenge, in that it had 85 “scenelets.” It was a five-way race around Manhattan to see who could get to a restaurant where Woody Allen was supposedly eating first, with each person using a different mode of transportation. Crazy fun and also extremely challenging to fit into a sitcom schedule.

You started in the business as a classical musician. How does your experience as a professional violinist influence your work as an editor?
I think the biggest thing is having a good feeling for the rhythm of whatever I’m working on. I love being able to change the tempo and make something really pop. And when asked to change the pacing or cut sections out, when doing various people’s notes, I can embrace that too. Collaborating is a big part of being a musician, and I think that’s helped me a lot in working with the different personalities. It’s not unlike responding to a conductor or playing chamber music. Having an understanding of phrasing and the overall structure of a piece is also valuable; even though that was musical phrasing and structure, it’s not all that different.

Obviously, whenever there’s actual music involved, I feel pretty comfortable handling it or choosing the right piece for a scene. If classical music’s involved, I have a great deal of knowledge that can be helpful. For example, in HIMYM, we needed something to be a theme for Barney’s Playbook antics. I tried a few things, and we landed on the Mozart Rondo Alla Turca, which I’ve been hearing lately in the Progresso Soup commercials.

How did you make the transition from the concert hall to the editing room?
It’s a long story! I was playing in the San Francisco Ballet Orchestra and was feeling stuck. I was lucky enough to find an amazing career counseling organization that helped me open my mind to all kinds of possibilities, and they helped me to discover the perfect job for me. It was quite a journey, but the main thing was to be open to anything and identify the things about myself that I wanted to use. I learned that I loved music (but not playing the violin), puzzles, stories and organizing — so editing!

I sold a bow, took the summer off from playing and enrolled in a summer production workshop at USC. I wasn’t quite ready to move to LA, so I went back to San Francisco and began interning at a small commercial editing house. I was answering phones, emptying the dishwasher, getting coffees and watching the editing, all while continuing to play in the Ballet Orchestra. The people were great and gave me opportunities to learn whenever possible. Luckily for me, they were using the Avid before it came to TV and features. Eventually, there was a very rough documentary that one of the editors wanted to cut, but it wasn’t organized. They gave me the key to the office and said, “You want to be an editor? Organize this!” So I did, and they started offering me assistant work on commercials. But I wanted to cut features, so I started to make little trips to LA to meet anybody I could.

Bill Steinberg, an editor working in the Universal Syndication department who I met at USC, got me hooked up with an editor who was to be one of Roger Corman’s first Avid editors. The Avids didn’t arrive right away, but he helped me put my name in the hat to be an assistant the next time. It happened, and I was on my way! I took a sabbatical from the orchestra, went down to LA, and worked my tail off for $400 a week on three low-budget features. I was in heaven. I had enough hours to join the union as an assistant, but I needed money to pay the admission fee. So I went back to San Francisco and played one month of Nutcrackers to cover the fee, and then I took another year sabbatical. Bill offered me a month position in the syndication department to fill in for him, and show the film editors what I knew about the Avid.

Eventually Andy Chulack, the editor of Coach, was looking for an Avid assistant, and I was recommended because I knew it. Andy hired me and took me under his wing, and I absolutely loved it. I guess the upshot is, I was fearlessly naive and knew the Avid!

What do you love most about being an editor?
I love the variation of material and people that I get to work with, and I like being able to take time to refine things. I don’t have to play it live anymore!


Lost in Time game show embraces ‘Interactive Mixed Reality’

By Daniel Restuccio

The Future Group — which has partnered with Fremantle Media, Ross Video and Epic Games — has created a new super-agile entertainment platform that blends linear television and game technology into a hybrid format called “Interactive Mixed Reality.”

The brainchild of Bård Anders Kasin, this innovative content deployment medium generated a storm of industry buzz at NAB 2016, and their first production Lost in Time — a weekly primetime game show — is scheduled to air this month on Norwegian television.

The Idea
The idea originated more than 13 years ago in Los Angeles. In 2003, at age 22, Kasin, a self-taught multimedia artist from Notodden, Norway, sent his CV and a bunch of media projects to Warner Bros. in Burbank, California, in hopes of working on The Matrix. They liked it. His interview was on a Wednesday and by Friday he had a job as a technical director.

Kasin immersed himself in the cutting-edge movie revolution that was The Matrix franchise. The Wachowskis’ visionary production was a masterful inspiration, featuring a compelling sci-fi action story, Oscar-winning editing, breakthrough visual effects (“bullet time”) and an expanded media universe that included video games and an anime-style short, The Animatrix. The Matrix Reloaded and The Matrix Revolutions were shot at the same time, along with more than an hour of footage designed specifically for the video game. The Matrix Online, an Internet gaming platform, was a direct sequel to The Matrix Revolutions.

L-R: Bård Anders Kasin and Jens Petter Høili.

Fast forward to 2013: Kasin connected with software engineer and serial entrepreneur Jens Petter Høili, founder of EasyPark and Fairchance. “There was this producer I knew in Norway,” explains Kasin, “who runs this thing called the Artists’ Gala charity. He called and said, ‘There’s this guy you should meet. I think you’ll really hit it off.’” Kasin met Høili for lunch, and they discussed the projects each was working on. “We both immediately felt there was a connection,” recalls Kasin. No persuading was necessary. “We thought that if we combined forces we were going to get something that’s truly amazing.”

That meeting of the minds led to the merging of their companies and the formation of The Future Group. The mandate of the Oslo-based company is to revolutionize the television medium by combining linear TV production with cutting-edge visual effects, interactive gameplay, home viewer participation and e-commerce. Their Interactive Mixed Reality (IMR) concept ditches the limiting individual virtual reality (VR) headset but keeps the idea of creating content that is a multi-level, intricate and immersive experience.

Lost in Time
Fast forward again, this time to 2014. Through another mutual friend, The Future Group formed an alliance with Fremantle Media. Fremantle, a global media company, has produced some of the highest-rated and longest-running shows in the world, and is responsible for top international entertainment brands such as Got Talent, Idol and The X Factor.

Kasin started developing the first IMR prototype. At this point, the Lost in Time production had expanded to include Ross Video and Epic Games. Ross Video is a broadcast technology innovator and Epic Games is a video game producer and the inventor of the Unreal game engine. The Future Group, in collaboration with Ross Video, engineered the production technology and developed a broadcast-compatible version of the Unreal game engine called Frontier, shown at NAB 2016, to generate high-resolution, realtime graphics used in the production.

The first prototype was shown on January 15, 2015. Fremantle was amazed and moved directly to stage two, shifting to the larger stages at Dagslys Studios. “Lost in Time has been the driver for the technology,” explains Kasin. “We’re a very content-driven company. We’ve used that content to drive the development of the platform and the technology, because there’s nothing better than having actual content to set the requirements for the technology rather than building technology for general purposes.”

In Lost in Time, three studio contestants are set loose on a greenscreen stage and perform timed, physical game challenges. The audience, which could be watching at home or on a mobile device, sees the contestant seamlessly blended into a virtual environment built out of realtime computer graphics. The environments are themed as western, ice age, medieval times and Jurassic period sets (among others) with interactive real props.

The audience can watch the contestants play the game or participate in the contest as players on their mobile device at home, riding the train or literally anywhere. They can play along or against contestants, performing customized versions of the scripted challenges in the TV show. The mobile content uses graphics generated from the same Unreal engine that created the television version.

“It’s a platform,” reports partner Høili, referring to the technology behind Lost in Time. A business model is a way you make money, notes tech blogger Jonathan Clarks, and a platform is something that generates business models. So while Lost in Time is a specific game show with specific rules, built on television technology, it’s really a business technology framework where multiple kinds of interactive content could be generated. Lost in Time is like the Unreal engine itself, software that can be used to create games, VR experiences and more, limited only by the imagination of the content creator. What The Future Group has done is create a high-tech kitchen from which any kind of cuisine can be cooked up.

Soundstages and Gear
Lost in Time is produced on two greenscreen soundstages at Dagslys Studios in Oslo. The main “gameplay set” takes up all of Studio 1 (5,393 square feet) and the “base station set” is on Studio 3 (1,345 square feet). Over 150 liters (40 gallons) of ProCyc greenscreen paint was used to cover both studios.

Ross Video, in collaboration with The Future Group, devised an integrated technology of hardware and software that supports the Lost in Time production platform. This platform consists of custom cameras, lenses, tracking, control, delay, chroma key, rendering, greenscreen, lighting and switcher technology. This system includes the new Frontier hardware, introduced at NAB 2016, which runs the Unreal game engine 3D graphics software.

Eight Sony HDC-2500 cameras running HZC-UG444 software are used for the production. Five are deployed on the “gameplay set.” One camera rides on a technocrane, two are on manual pedestal dollies and one is on Steadicam. For fast-action tracking shots, another camera sits on the Furio RC dolly that rides on a straight track that runs the 90-foot length of the studio. The Furio RC pedestal, controlled by SmartShell, guarantees smooth movement in virtual environments and uses absolute encoders on all axes to send complete 3D tracking data into the Unreal engine.

There is also one Sony HDC-P1 camera that is used as a static, center stage, ceiling cam flying 30 feet above the gameplay set. There are three cameras in the home base set, two on Furio Robo dollies and one on a technocrane. In the gameplay set, all cameras (except the ceiling cam) are tracked with the SolidTrack IR markerless tracking system.

All filming is done at 1080p25, output as RGB 444 over SDI. A custom LUT on the cameras avoids clipping and preserves an expanded dynamic range for post work. All nine camera ISOs — separate camera “clean feeds” — are recorded with the “flat” LUT in RGB 444. For all other video streams, including keying and compositing, LUT boxes invert the signal back to Rec 709.
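The idea of a “flat” capture curve that is inverted downstream can be sketched with a toy log encode. To be clear, the show’s actual camera LUT is proprietary; the curve and constant below are illustrative assumptions only:

```python
import math

A = 0.25  # hypothetical log density constant (illustrative, not the show's LUT)

def flat_encode(linear: float) -> float:
    """Map scene-linear light in [0, 15] into a 0-1 'flat' signal,
    compressing highlights so they survive without clipping."""
    return A * math.log2(linear + 1.0)

def flat_decode(signal: float) -> float:
    """Invert the flat curve downstream (what a LUT box would do)."""
    return 2.0 ** (signal / A) - 1.0

# A highlight at 8x diffuse white fits inside the 0-1 signal range...
encoded = flat_encode(8.0)
assert encoded < 1.0
# ...and inverting the curve recovers the original linear value.
assert abs(flat_decode(encoded) - 8.0) < 1e-9
```

The round trip is lossless in math; in practice precision is bounded by the bit depth of the signal path, which is why the production records RGB 444 rather than a subsampled format.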

Barnfind provided the fiber optic network infrastructure that links all the systems. Ross Video Dashboard controls the BarnOne frames as well as the router, Carbonite switchers, Frontier graphics system and robotic cameras.

A genlock signal distributed via openGear syncs all the gear to a master clock. The Future Group added proprietary code to Unreal so the render engine can genlock, receive and record linear timecode (LTC), and output video via SDI in all industry-standard formats. They also extended the engine to control lights via DMX, send and receive GPI signals, communicate with the custom sensors, buttons, switches and wheels used for interacting with the games, and control motion-simulation equipment.
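Because the show records at 1080p25, the LTC math is simple: at 25fps there is no drop-frame correction, so timecode and frame counts convert directly. A minimal sketch of the standard HH:MM:SS:FF conversion:

```python
FPS = 25  # Lost in Time shoots 1080p25; 25fps timecode has no drop-frame

def tc_to_frames(tc: str, fps: int = FPS) -> int:
    """Convert 'HH:MM:SS:FF' linear timecode to an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frames_to_tc(frames: int, fps: int = FPS) -> str:
    """Convert an absolute frame count back to 'HH:MM:SS:FF'."""
    ff = frames % fps
    total_seconds = frames // fps
    return (f"{total_seconds // 3600:02d}:{total_seconds // 60 % 60:02d}:"
            f"{total_seconds % 60:02d}:{ff:02d}")

assert tc_to_frames("01:00:00:00") == 90000  # one hour at 25 fps
assert frames_to_tc(90024) == "01:00:00:24"
```

At NTSC-derived rates (29.97fps) this arithmetic would need drop-frame handling, which is one practical benefit of the PAL-region frame rate here.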

In order for the “virtual cameras” in the graphics system and the real cameras viewing the real elements to share exactly the same perspective, an “encoded” camera lens is required that reports the lens focal length (zoom) and focus data. In addition, the virtual lens field of view (FOV) must be properly calibrated to match the FOV of the real lens. Full-servo digital lenses with 16-bit encoders are needed for virtual productions. Lost in Time uses three Canon lenses with these specifications: Canon Hj14ex4.3B-IASE, Canon Hj22ex7.6B-IASE-A and Canon Kj17ex7.7B-IASE-A.
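The quantity being calibrated follows the standard pinhole relationship FOV = 2·atan(sensor width / (2·focal length)). A quick sketch — the sensor width below approximates a 2/3-inch broadcast chip and is an assumption for illustration, not a published spec for these cameras:

```python
import math

def horizontal_fov_deg(focal_length_mm: float,
                       sensor_width_mm: float = 9.59) -> float:
    """Horizontal field of view for a pinhole camera model.
    Default sensor width approximates a 2/3-inch broadcast chip
    (an assumption, not a manufacturer spec)."""
    return math.degrees(2.0 * math.atan(sensor_width_mm /
                                        (2.0 * focal_length_mm)))

# Wide end of a 4.3mm broadcast zoom vs. zoomed in to 86mm:
wide = horizontal_fov_deg(4.3)   # roughly 96 degrees
tele = horizontal_fov_deg(86.0)  # roughly 6 degrees
assert wide > tele  # zooming in narrows the field of view
```

Feeding the encoded focal length through a function like this (per frame) is what lets the render engine keep the virtual FOV locked to the real lens as the operator zooms.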

The Lost in Time camera feeds are routed to the Carbonite family hardware: Ultrachrome HR, Carbonite production frame and Carbonite production switcher. Carbonite Ultrachrome HR is a stand-alone multichannel chroma key processor based on the Carbonite Black processing engine. On Lost in Time, the Ultrachrome switcher accepts the Sony camera RGB 444 signal and uses high-resolution chroma keyers, each with full control of delay management, fill color temperature for scene matching, foreground key and fill, and internal storage for animated graphics.

Isolated feeds of all nine cameras are recorded, plus two quad-splits with the composited material and the program feed. Metus Ingest, a proprietary hardware solution from The Future Group, was used for all video recording. Metus Ingest can simultaneously capture and record up to six HD channels of video and audio from multiple devices on a single platform.

Post Production
While the system is capable of broadcasting live, they decided not to go live for the debut. Instead, they are doing only a modest amount of post to retain the live feel. That said, the potential of the post workflow on Lost in Time arguably sets a whole new post paradigm. “Post allows us to continue to develop the virtual worlds for a longer amount of time,” says Kasin. “This gives us more flexibility in terms of storytelling. We’re always trying to push the boundaries with the creative content, how we tell the story of the different challenges.”

All camera metadata — including position, rotation and lens data — and all game interaction were recorded in the Unreal engine with a proprietary system. This allowed the graphics to be played back later as a recorded session, and it let the editors change any part of the graphics non-destructively: they could replace 3D models or textures, change the tracking or point of view of any of the virtual cameras in post, or add cameras for more virtual “coverage.”
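A minimal sketch of what per-frame camera recording for later replay might look like — the field names and structure are illustrative only, not The Future Group’s actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class CameraSample:
    """One frame's worth of tracked camera state (illustrative fields)."""
    frame: int
    position: tuple[float, float, float]   # x, y, z in meters
    rotation: tuple[float, float, float]   # pan, tilt, roll in degrees
    focal_length_mm: float

@dataclass
class TakeRecording:
    """Stores every frame's camera state so a graphics session can be
    replayed and re-rendered non-destructively after the shoot."""
    samples: dict[int, CameraSample] = field(default_factory=dict)

    def record(self, sample: CameraSample) -> None:
        self.samples[sample.frame] = sample

    def replay(self, frame: int) -> CameraSample:
        """Fetch recorded state to re-drive the virtual camera in post."""
        return self.samples[frame]

take = TakeRecording()
take.record(CameraSample(0, (0.0, 0.0, 1.8), (0.0, 0.0, 0.0), 7.7))
take.record(CameraSample(1, (0.1, 0.0, 1.8), (1.5, 0.0, 0.0), 7.7))
assert take.replay(1).rotation[0] == 1.5
```

Because only the metadata and game state are stored (not rendered pixels), swapping a texture or moving a virtual camera simply means replaying the session through the engine again.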

Lost in Time episodes were edited as a multicam project, based on the program feed, in Adobe Premiere CC. They have a multi-terabyte storage solution from Pixit Media running Tiger Technology’s workflow manager. “The EDL from the final edit is fed through a custom system, which then builds a timeline in Unreal to output EXR sequences for a final composite.”
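The conform step described above starts from an EDL. As a rough illustration of the first thing such a custom system has to do, here is a parser for CMX3600-style event lines — the event fields are standard EDL, but the tool itself is hypothetical and unrelated to the show’s actual system:

```python
import re
from dataclasses import dataclass

# CMX3600-style event line: number, reel, track, transition,
# then source in/out and record in/out timecodes.
EVENT = re.compile(
    r"^(?P<num>\d+)\s+(?P<reel>\S+)\s+(?P<track>\S+)\s+(?P<trans>\S+)\s+"
    r"(?P<src_in>[\d:]+)\s+(?P<src_out>[\d:]+)\s+"
    r"(?P<rec_in>[\d:]+)\s+(?P<rec_out>[\d:]+)"
)

@dataclass
class Event:
    num: int
    reel: str
    src_in: str
    src_out: str
    rec_in: str
    rec_out: str

def parse_edl(text: str) -> list[Event]:
    """Extract cut events, skipping title/comment lines."""
    events = []
    for line in text.splitlines():
        m = EVENT.match(line.strip())
        if m:
            events.append(Event(int(m["num"]), m["reel"],
                                m["src_in"], m["src_out"],
                                m["rec_in"], m["rec_out"]))
    return events

sample = """TITLE: LOST_IN_TIME_EP1
001  CAM1  V  C  01:00:10:00 01:00:15:00 00:00:00:00 00:00:05:00
002  CAM3  V  C  01:02:00:00 01:02:04:00 00:00:05:00 00:00:09:00
"""
events = parse_edl(sample)
assert len(events) == 2 and events[1].reel == "CAM3"
```

Each event’s reel and source range would then be matched against the recorded camera/graphics session to rebuild the corresponding span of the Unreal timeline for EXR output.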

That’s it for now, but be sure to visit this space again for part two of our coverage on The Future Group’s Lost in Time. Our next story will cover the real and virtual lighting systems, the SolidTrack IR tracking system and the backend component, plus interviews with Epic Games’ Kim Libreri about Unreal engine development/integration and with a Lost in Time episode editor.


Daniel Restuccio, who traveled to Oslo for this piece, is a writer, producer and teacher. He is currently multimedia department chairperson at California Lutheran University in Thousand Oaks.


The colorful dimensions of Amazon’s Mozart in the Jungle

By Randi Altman

How do you describe Amazon’s Mozart in the Jungle? Well, in its most basic form it’s a comedy about the changing of the guard — or maestro — at the New York Philharmonic, and the musicians that make up that orchestra. When you dig deeper you get a behind-the-scenes look at the back-biting and crazy that goes on in the lives and heads of these gifted artists.

Timothy Vincent

Based on the novel Mozart in the Jungle: Sex, Drugs, and Classical Music by oboist Blair Tindall, the series — which won the Golden Globe last year and was nominated this year — has shot in a number of locations over its three seasons, including Mexico and Italy.

Since its inception, Mozart in the Jungle has been finishing in 4K and streaming in both SDR and HDR. We recently reached out to Technicolor senior color timer Timothy Vincent, who has been on the show since the pilot, to find out more about the show’s color workflow.

Did Technicolor have to gear up infrastructure-wise for the show’s HDR workflow?
We were doing UHD 4K already and were just getting our HDR workflows worked out.

What is the workflow from offline to online to color?
The dailies are done in New York based on the Alexa K1S1 709 LUT. (Technicolor On-Location Services handled dailies out of Italy, and Technicolor PostWorks in New York.) After the offline and online, I get the offline reference made with the dailies, so I can refer to it if I have a question about what was intended.

If someone was unsure about watching in HDR versus SDR, what would you tell them?
The emotional feel of both the SDR and the HDR is the same. That is always the goal in the HDR pass for Mozart. One of the experiences that is enhanced in the HDR is the depth of field and the three-dimensional quality you gain in the image. This really plays nicely with the feel in the landscapes of Italy, the stage performances where you feel more like you are in the audience, and the long streets of New York just to name a few.

When I’m grading the HDR version, I’m able to retain more highlight detail than I was in the SDR pass. For someone who has not yet been able to experience HDR, I would actually recommend that they watch an episode of the show in SDR first and then in HDR so they can see the difference between them. At that point they can choose what kind of viewing experience they want. I think that Mozart looks fantastic in both versions.

What about the “look” of the show? What kind of direction were you given?
We established the look of the show based on conversations and collaboration in my bay. It has always been a filmic look with soft blacks and yellow warm tones as the main palette for the show. Then we added in a fearlessness to take the story in and out of strong shadows. We shape the look of the show to guide the viewers to exactly the story that is being told and the emotions that we want them to feel. Color has always been used as one of the storytelling tools on the show. There is a realistic beauty to the show.

What was your creative partnership like with the show’s cinematographer, Tobias Datum?
I look forward to each episode and discovering what Tobias has given me as palette and mood for each scene. For Season 3 we picked up where we left off at the end of Season 2. We had established the look and feel of the show and only had to account for a large portion of Season 3 being shot in Italy, making sure to convey the different quality of light and the warmth and beauty of Italy. We did this by playing with natural warm skin tones and the contrast of light and shadow he was creating for the different moods and locations. The same can be said for the two episodes in Mexico in Season 2. I now know what Tobias likes and can make decisions I’m confident he will like.

From a director and cinematographer’s point of view, what kind of choices does HDR open up creatively?
It depends on if they want to maintain the same feel of the SDR or if they want to create a new feel. If they choose to go in a different direction, they can accentuate the contrast and color more with HDR. You can keep more low-light detail while being dark, and you can really create a separate feel to different parts of the show… like a dream sequence or something like that.

Any workflow tricks/tips/trouble spots within the workflow or is it a well-oiled machine at this point?
I have actually changed the way I grade my shows based on the evolution of this show. My end results are the same, but I’ve learned how to build grades that translate to HDR much more easily and consistently.

Do you have a color assistant?
I have a couple of assistants that I work with who help me with prepping the show, getting proxies generated, color tracing and some color support.

What tools do you use — monitor, software, computer, scope, etc.?
I am working on Autodesk Lustre 2017 on an HP Z840, while monitoring on both a Panasonic CZ950 and a Sony X300. I work on Omnitek scopes off the downconverter to 2K. The show is shot on both Alexa XT and Alexa Mini, framing for 16×9. All finishing is done in 4K UHD for both SDR and HDR.

Anything you would like to add?
I would only say that everyone should be open to experiencing both SDR and HDR and giving themselves that opportunity to choose which they want to watch and when.


ACE Eddie nominees include Arrival, Manchester by the Sea, Better Call Saul

The American Cinema Editors (ACE) have named the nominees for the 67th ACE Eddie Awards, which recognize editing in 10 categories of film, television and documentaries.

Winners will be announced during ACE’s annual awards ceremony on January 27 at the Beverly Hilton Hotel. In addition to the regular editing awards, J.J. Abrams will receive the ACE Golden Eddie Filmmaker of the Year award.

Check out the nominees:

BEST EDITED FEATURE FILM (DRAMATIC)
Arrival
Joe Walker, ACE

Hacksaw Ridge
John Gilbert, ACE

Hell or High Water
Jake Roberts

Manchester by the Sea
Jennifer Lame
 
Moonlight
Nat Sanders, Joi McMillon

BEST EDITED FEATURE FILM (COMEDY)
Deadpool
Julian Clarke, ACE

Hail, Caesar!
Roderick Jaynes

The Jungle Book
Mark Livolsi, ACE

La La Land
Tom Cross, ACE

The Lobster
Yorgos Mavropsaridis

BEST EDITED ANIMATED FEATURE FILM
Kubo and the Two Strings
Christopher Murrie, ACE

Moana
Jeff Draheim, ACE

Zootopia
Fabienne Rawley and Jeremy Milton

BEST EDITED DOCUMENTARY (FEATURE)

13th
Spencer Averick

Amanda Knox
Matthew Hamachek

The Beatles: Eight Days a Week — The Touring Years
Paul Crowder

OJ: Made in America
Bret Granato, Maya Mumma and Ben Sozanski

Weiner
Eli B. Despres

BEST EDITED DOCUMENTARY (TELEVISION)
The Choice 2016
Steve Audette, ACE

Everything Is Copy
Bob Eisenhardt, ACE

We Will Rise: Michelle Obama’s Mission to Educate Girls Around the World
Oliver Lief

BEST EDITED HALF-HOUR SERIES
Silicon Valley: “The Uptick”
Brian Merken, ACE

Veep: “Morning After”
Steven Rasch, ACE

Veep: “Mother”
Shawn Paper

BEST EDITED ONE-HOUR SERIES — COMMERCIAL
Better Call Saul: “Fifi”
Skip Macdonald, ACE

Better Call Saul: “Klick”
Skip Macdonald, ACE & Curtis Thurber

Better Call Saul: “Nailed”
Kelley Dixon, ACE and Chris McCaleb

Mr. Robot: “eps2.4m4ster-s1ave.aes”
Philip Harrison

This is Us: “Pilot”
David L. Bertman, ACE

BEST EDITED ONE-HOUR SERIES – NON-COMMERCIAL
The Crown: “Assassins”
Yan Miles, ACE

Game of Thrones: “Battle of the Bastards”
Tim Porter, ACE

Stranger Things: “Chapter One: The Vanishing of Will Byers”
Dean Zimmerman

Stranger Things: “Chapter Seven: The Bathtub”
Kevin D. Ross

Westworld: “The Original”
Stephen Semel, ACE and Marc Jozefowicz

BEST EDITED MINISERIES OR MOTION PICTURE (NON-THEATRICAL)
All the Way
Carol Littleton, ACE

The Night Of: “The Beach”
Jay Cassidy, ACE

The People V. OJ Simpson: American Crime Story: “Marcia, Marcia, Marcia”
Adam Penn, Stewart Schill, ACE and C. Chi-yoon Chung

BEST EDITED NON-SCRIPTED SERIES
Anthony Bourdain: Parts Unknown: “Manila” 
Hunter Gross, ACE

Anthony Bourdain: Parts Unknown: “Senegal”
Mustafa Bhagat

Deadliest Catch: “Fire at Sea: Part 2”
Josh Earl, ACE and Alexander Rubinow, ACE

Final ballots will be mailed on January 6, and voting ends on January 17. The Blue Ribbon screenings, where judging for all television categories and the documentary categories takes place, will be on January 15. Projects in those categories are viewed and judged by committees composed of professional editors (all ACE members). All 850-plus ACE members vote during the final balloting of the ACE Eddies, including active members, life members, affiliate members and honorary members.

Main Image: Tilt Photo

What it sounds like when Good Girls Revolt for Amazon Studios

By Jennifer Walden

“Girls do not do rewrites,” says Jim Belushi’s character, Wick McFadden, in Amazon Studios’ series Good Girls Revolt. It’s 1969, and he’s the national editor at News of the Week, a fictional news magazine based in New York City. He’s confronting new researcher Nora Ephron (Grace Gummer), who claims credit for a story that Wick has just praised in front of the entire newsroom staff. The trouble is that women aren’t writers; they’re only “researchers,” following leads and gathering facts for the male writers.

When Nora’s writer drops the ball by delivering a boring courtroom story, she rewrites it as an insightful articulation of the country’s cultural climate. “If copy is good, it’s good,” she argues to Wick, testing the old conventions of workplace gender-bias. Wick tells her not to make waves, but it’s too late. Nora’s actions set in motion an unstoppable wave of change.

While the series is set in New York City, it was shot in Los Angeles. The newsroom set had an open floor plan with a bi-level design: the girls work in “the pit,” downstairs from the male writers. The set was hollow, which caused an issue with the actors’ footsteps recorded on the production tracks, explains supervising sound editor Peter Austin. “The set was not solid. It was built on a platform, so we had a lot of boomy production footsteps to work around. That was one of the big dialogue issues. We tried not to loop too much, so we did a lot of specific dialogue work to clean up all of those newsroom scenes,” he says.

The main character Patti Robinson (Genevieve Angelson) was particularly challenging because of her signature leather riding boots. “We wanted to have an interesting sound for her boots, and the production footsteps were just useless. So we did a lot of experimenting on the Foley stage,” says Austin, who worked with Foley artists Laura Macias and Sharon Michaels to find the right sound. All the post sound work (sound editorial, Foley, ADR, loop group and the final mix) was handled at Westwind Media in Burbank, under the guidance of post producer Cindy Kerber.

Austin and dialog editor Sean Massey made every effort to save production dialog when possible and to keep the total ADR to a minimum. Still, the newsroom environment and several busy street scenes proved challenging, especially when the characters were engaged in confidential whispers. Fortunately, “the set mixer Joe Foglia was terrific,” says Austin. “He captured some great tracks despite all these issues, and for that we’re very thankful!”

The Newsroom
The newsroom acts as another character in Good Girls Revolt. It has its own life and energy. Austin and sound effects editor Steve Urban built rich backgrounds with tactile sounds: typewriters clacking and dinging, rotary phones with whirring dials and bell-style ringers, papers shuffling and pencils scratching. They pulled effects from Austin’s personal sound library and from commercial sound libraries like Sound Ideas, and had the Foley artists create an array of period-appropriate sounds.

Loop group coordinator Julie Falls researched and recorded walla that contained period-appropriate colloquialisms, which Austin used to add even more depth and texture to the backgrounds. The lively backgrounds helped to hide some dialogue flaws and blend in the ADR. “Executive producer/series creator Dana Calvo actually worked in an environment like this, so she had very definite ideas about how it would sound, particularly the relentlessness of the newsroom,” explains Austin. “Dana had strong ideas about the newsroom being a character in itself. We followed her guide and wanted to support the scenes and communicate what the girls were going through — how they’re trying to break through this male-dominated barrier.”

Austin and Urban also used the backgrounds to reinforce the difference between the hectic state of “the pit” and the more mellow writers’ area. Austin says, “The girls’ area, the pit, sounds a little more shrill. We pitched up the phones a little bit, and made it feel more chaotic. The men’s raised area feels less strident. This was subtle, but I think it helps to set the tone that these girls were ‘in the pit’ so to speak.”

The busy backgrounds posed their own challenge too. When the characters are quiet, the room still had to feel frenetic without swallowing up their lines. “That was a delicate balance. You have characters who are talking low and you have this energy that you try to create on the set. That’s always a dance you have to figure out,” says Austin. “The whole anarchy of the newsroom was key to the story. It creates a good contrast for some of the other scenes where the characters’ private lives were explored.”

Peter Austin

The heartbeat of the newsroom is the teletype machines that fire off stories, which in turn set the newsroom in motion. The teletype sound, Austin reports, was captured from a working machine the production had on set. “They had an authentic teletype from that period, so we recorded that and augmented it with other sounds. Since that was a key motif in the show, we actually sweetened the teletype with other sounds, like machine guns for example, to give it a boost every now and then when it was a key element in the scene.”

Austin and Urban also built rich backgrounds for the exterior city shots. In the series opener, archival footage of New York City circa 1969 paints the picture of a rumbling city, moved by diesel-powered buses and trains, and hulking cars. That footage cuts to shots of war protestors and police lining the sidewalk. Their discontented shouts break through the city’s continuous din. “We did a lot of texturing with loop group for the protestors,” says Austin. He’s worked on several period projects over the years and has amassed a collection of old vehicle recordings that they used to build the street sounds on Good Girls Revolt. “I’ve collected a ton of NYC sounds over the years. New York in that time definitely has a different sound than it does today. It’s very distinct. We wanted to sell New York of that time.”

Sound Design
Good Girls Revolt is a dialogue-driven show, but it did provide Austin with several opportunities to use subjective sound design to pull the audience into a character’s experience. The most fun scene for Austin was in Episode 5, “The Year-Ender,” in which several newsroom researchers consume LSD at a party. As the scene progresses, the characters’ perspectives become warped. Austin notes they created an altered state by slowing down and pitching down sections of the loop group using Revoice Pro by Synchro Arts. They also used Avid’s D-Verb to distort and diffuse selected sounds.

“We got subjective by smearing different elements at different times. The regular sound would disappear and the music would dominate for a while and then that would smear out,” describes Austin. They also used breathing sounds to draw in the viewer. “This one character, Diane (Hannah Barefoot), has a bad experience. She’s crawling along the hallway and we hear her breathing while the rest of the sound slurs out in the background. We build up to her freaking out and falling down the stairs.”
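For readers curious about the mechanics: tools like Revoice Pro do far more sophisticated processing, but the basic “slowed down and pitched down” varispeed effect Austin describes can be sketched with simple resampling — playing samples back slower lowers the pitch proportionally, like slowing a tape. This is only an illustrative sketch, not the show’s actual workflow:

```python
# A minimal varispeed sketch: resampling a waveform so it plays back
# slower, which also lowers its pitch (tape-style). Not the production
# workflow -- just the underlying idea.
import numpy as np

def varispeed(samples: np.ndarray, speed: float) -> np.ndarray:
    """Resample audio to play at `speed` times the original rate.

    speed < 1.0 slows playback and lowers pitch by the same factor.
    """
    n_out = int(len(samples) / speed)
    # For each output sample, the (fractional) position to read
    # from in the original signal.
    positions = np.linspace(0, len(samples) - 1, n_out)
    return np.interp(positions, np.arange(len(samples)), samples)

# A 440 Hz tone slowed to half speed becomes a 220 Hz tone, twice as long.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
slowed = varispeed(tone, 0.5)
```

Half speed drops the tone exactly one octave, which is why tape-style slowdowns sound “darker” as well as longer.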

Austin and Urban did their design and preliminary sound treatments in Pro Tools 12 and then handed it off to sound effects re-recording mixer Derek Marcil, who polished the final sound. Marcil was joined by dialog/music re-recording mixer David Raines on Stage 1 at Westwind. Together they mixed the series in 5.1 on an Avid ICON D-Control console. “Everyone on the show was very supportive, and we had a lot of creative freedom to do our thing,” concludes Austin.