
Avid’s new control surfaces for Pro Tools, Media Composer, other apps

By Mel Lambert

During a recent come-and-see MPSE Sound Advice evening at Avid’s West Coast offices in Burbank, MPSE members and industry colleagues were treated to an exclusive look at two new control surfaces for editorial suites and film/TV post stages.

The S1 and S4 controllers join the current S3 and larger S6 control surfaces. Session files from all S Series surfaces are fully compatible with one another, enabling edit and mix session data to move freely from facility to facility. All surfaces provide comprehensive control of Eucon-enabled software, including Pro Tools, Cubase, Nuendo, Logic Pro, Media Composer and other apps to create and record tracks, write automation, control plugins, set up routing and a host of other essential operations via assignable faders, buttons and rotary controls.

S1


Jeff Komar, one of Avid’s pro audio solutions specialists, served as our guide during the evening’s demo sessions of the new surfaces for fully integrated sample-accurate editing and immersive mixing. Expected to ship toward the end of the year, the S1 is said to offer full software integration with Avid’s high-end consoles in a portable, slim-line surface, while the S4 — which reportedly begins shipping in September — is said to bring workstation control to small- to mid-sized post facilities in an ergonomic and compact package.

Pro-user prices start at $24,000 for a three-foot S4 with eight faders; a five-foot configuration with 24 on-surface faders and post-control sections should retail for around $50,000. The S1’s expected end-user price will be approximately $1,200.

The S4 provides extensive visual feedback, including displays switchable between channel meters, groups, EQ curves and automation data, in addition to scrolling Pro Tools waveforms that can be edited from the surface. The semi-modular architecture accommodates between eight and 24 assignable faders in eight-fader blocks, with add-on displays, joysticks, PEC/direct paddles and all-knob attention modules. The S4 also features assignable talkback, listenback and speaker sources/levels for Foley/ADR recording, plus Dolby Atmos and other immersive audio monitoring formats. The unit can command two connected playback/record workstations. In essence, the S4 replaces the current S6 M10 system.

Avid’s Jeff Komar

From recording and editing tracks to mixing and monitoring in stereo or surround, the smaller S1 surface provides comprehensive control and visual feedback with full-on Eucon compatibility for Pro Tools and Media Composer. There is also native support for third-party applications, such as Apple Logic Pro, Steinberg Cubase, Adobe Premiere Pro and others. Users can connect up to four units — and also add a Pro Tools|Dock — to create an extended controller. Each S1 has an upper shelf designed to hold an iOS- or Android-compatible tablet running the Pro Tools|Control app. With assignable motorized faders and knobs, as well as fast-access touchscreen workflows and programmable Soft Keys, the S1 is said to offer the speed and versatility needed to accelerate post and video projects.

Reaching deeper into the S4’s semi-modular topology, the surface can be configured with up to three Channel Strip Modules (offering a maximum of 24 faders), four Display Modules to provide visual feedback of each session, and up to three optional modules. The Display Module features a high-resolution TFT screen to show channel names, channel meters, routing, groups, automation data and DAW settings, as well as scrolling waveforms and master meters.

Eucon connectivity can be used to control two different software applications simultaneously, offering single-keypress access to plugin editing, session automation writing and other complex tasks. Adding joysticks, PEC/direct paddles and attention panels enables more functions to be controlled simultaneously from the modular control surface to handle various editing and mixing workflows.

S4

The Master Touch Module (MTM) provides fast access to mix and control parameters through a tilting 12.1-inch multipoint touchscreen, with eight programmable rotary encoders and dedicated knobs and keys. The Master Automation Module (MAM) streamlines session navigation plus project automation and features a comprehensive transport control section with shuttle/jog wheel, a Focus Fader, automation controls and numeric keypad. The Channel Strip Module (CSM) controls track levels, plugins and other parameters through eight channel faders, 32 top-lit knobs (four per channel) plus other programmable keys and switches.

For mixing and panning surround and immersive audio projects, including Atmos and Ambisonics, the Joystick Module features a pair of controllers with TFT and OLED displays. The Post Module enables switching between live and recorded tracks/stems through two rows of 10 PEC/direct paddles, while the Attention Knob Module features 32 top-lit knobs — or up to 64 via two modules — to provide extra assignable controls and feedback for plugins, EQ, dynamics, panning and more.

Depending upon the number of Channel Strip Modules and other options, a customized S4 surface can be housed in a three-, four- or five-foot pre-assembled frame. As a serving suggestion, the entry-level S4-3_CB_Top includes one CSM, one MTM, one MAM and filler panels/plates in a three-foot frame, while a 24-fader base system includes three CSMs, one MTM, one MAM and filler panels/plates in a five-foot frame.

My sincere thanks to members of Avid’s Burbank crew, including pro audio solutions specialists Tony Joy and Gil Gowing, together with Richard McKernan, professional console sales manager for the western region, for their hospitality and patience with my probing questions.


LA-based Mel Lambert is principal of Content Creators. He can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

London’s Media Production Show: technology for content creation

By Mel Lambert

The fourth annual Media Production Show, held June 11-12 at Olympia West, London, once again attracted a wide cross-section of European production, broadcast, post and media-distribution pros. According to its organizers, the two-day confab drew 5,300 attendees and “showcased the technology and creativity behind content creation,” focusing on state-of-the-art products and services. The full program of standing-room-only discussion seminars covered a number of contemporary topics, while 150-plus exhibitors presented wares from the media industry’s leading brands.

The State of the Nation: Post Production panel.

During a session called “The State of the Nation: Post Production,” Rowan Bray, managing director of Clear Cut Pictures, said that “while [wage and infrastructure] costs are rising, our income is not keeping up.” And with salaries, facility rent and equipment amortization representing 85% of fixed costs, “it leaves little over for investment in new technology and services. In other words, increasing costs are preventing us from embracing new technologies.”

Focusing on the long-term economic health of the UK post industry, Bray pointed out that few post facilities in London’s Soho area are changing hands, which she says “indicates that this is not a healthy sector [for investment].”

“Several years ago, a number of US companies [including Technicolor and Deluxe] invested £100 million [$130 million] in Soho; they are now gone,” stated Ian Dodd, head of post at Dock10.

Some 25 years ago, there were at least 20 leading post facilities in London. “Now we have a handful of high-end shops, a few medium-sized ones and a handful of boutiques,” Dodd concluded. Other panelists included Cara Kotschy, managing director of Fifty Fifty Post Production.

The Women in Sound panel

During his keynote presentation called “How we made Bohemian Rhapsody,” leading production designer Aaron Haye explained how the film’s large stadium concert scenes were staged and supplemented with high-resolution CGI; he is currently working on Charlie’s Angels (2019) with director/actress Elizabeth Banks.

The panel discussion “Women in Sound” brought together a trio of re-recording mixers with divergent secondary capabilities and experience. Participants were Emma Butt, a freelance mixer who also handles sound editorial and ADR recordings; Lucy Mitchell, a freelance sound editor and mixer; plus Kate Davis, head of sound at Directors Cut Films. As the audience discovered, their roles in professional sound differ. While exploring these differences, the panel revealed helpful tips and tricks for succeeding in the post world.



AES/SMPTE panel: Spider-Man: Into the Spider-Verse sound

By Mel Lambert

As part of its successful series of sound showcases, a recent joint meeting of the Los Angeles Section of the Audio Engineering Society and SMPTE’s Hollywood Section focused on the soundtrack of the animated feature Spider-Man: Into the Spider-Verse, which has garnered several Oscar, BAFTA, CAS and MPSE award nominations, plus a Golden Globe win.

On January 31 at Sony Pictures Studios’ Kim Novak Theater in Culver City, many gathered to hear a panel discussion between the film’s sound and picture editors and re-recording mixers. Spider-Man: Into the Spider-Verse was co-directed by Peter Ramsey, Robert Persichetti Jr. and Rodney Rothman, and produced by Phil Lord and Christopher Miller, the creative minds behind The Lego Movie and 21 Jump Street.

The panel

The Sound Showcase panel included supervising sound editors Geoffrey Rubay and Curt Schulkey, re-recording mixer/sound designer Tony Lamberti, re-recording mixer Michael Semanick and associate picture editor Vivek Sharma. The Hollywood Reporter’s Carolyn Giardina moderated. The event concluded with a screening of Spider-Man: Into the Spider-Verse, which represents a different Spider-Man Universe, since it introduces Brooklyn teen Miles Morales and the expanding possibilities of the Spider-Verse, where more than one entity can wear the arachnid mask.

Following the screening of an opening sequence from the animated feature, Rubay acknowledged that the film’s producers were looking for a different look for the Spider-Man character based on the Marvel comic books, but with a reference to previous live-action movies in the franchise. “They wanted us to make more of the period in which the new film is set,” he told the standing-room audience in the same dubbing stage where the soundtrack was re-recorded.

“[EVPs] Phil Lord and Chris Miller have a specific style of soundtrack that they’ve developed,” stated Lamberti, “and so we premixed to get that overall shape.”

“The look is unique,” conceded Semanick, “and our mix needed to match that and make it sound like a comic book. It couldn’t be too dynamic; we didn’t want to assault the audience, but still make it loud here and softer there.”

Full house

“We also kept the track to its basics,” Rubay added, “and didn’t add a sound for every little thing. If the soundtrack had been as complicated as the visuals, the audience’s heads would have exploded.”

“Yes, simpler was often better,” Lamberti confirmed, “to let the soundtrack tell the story of the visuals.”

In terms of balancing sound effects against dialog, “We did a lot of experimentation and went with what seemed the best solution,” Semanick said. “We kept molding the soundtrack until we were satisfied.” As Lamberti confirmed: “It was always a matter of balancing all the sound elements, using trial and error.”

Nominated for a Cinema Audio Society Award in the Motion Picture — Animated category, Brian Smith, Aaron Hasson and Howard London served as original dialogue mixers on the film, with Sam Okell as scoring mixer and Randy K. Singer as Foley mixer. The crew also included sound designer John Pospisil, Foley supervisor Alec G. Rubay, and SFX editors Kip Smedley, Andy Sisul, David Werntz, Christopher Aud, Ando Johnson, Benjamin Cook, Mike Reagan and Donald Flick.

During picture editorial, “we lived with many versions until we got to the sound,” explained Sharma. “The premix was fantastic and worked very well. Visuals are important but sound fulfils a complementary role. Dialogue is always key; the audience needs to hear what the characters say!”

“We present ideas and judge the results until everybody is happy,” said Semanick. “[Writer/producer] Phil Lord was very good at listening to everybody; he made the final decision, but deferred to the directors. ‘Maybe we should drop the music?’ ‘Does the result still pull the audience into the music?’ We worked until the elements worked very well together.”

The lead character’s “Spidey Sense” was also discussed. As co-supervisor Schulkey explained: “Our early direction was that it was an internal feeling … like a warm, fuzzy feeling. But warm and fuzzy didn’t cut through the music. In the end there was not just a single Spidey Sense — it was never the same twice. The web slings were a classic sound that we couldn’t get too far from.”

“And we used [Dolby] Atmos to spin and pan those sounds around the room,” added Lamberti, who told the audience that Spider-Man: Into the Spider-Verse marked Sony Animation’s first native Atmos mix. “We used the format to get the most out of it,” concluded the SFX re-recording mixer, who mixed sound effects “in the box” using an Avid S6 console/controller, while Semanick handled dialogue and music on the Kim Novak Theater’s Harrison MPC4D X-Range digital console.


Mel Lambert has been intimately involved with production industries on both sides of the Atlantic for more years than he cares to remember. He can be reached at mel.lambert@content-creators.com. He is also a long-time member of the UK’s National Union of Journalists. 

The Girl in the Spider’s Web: immersive audio and picture editing

By Mel Lambert

Key members of the post crew responsible for the fast-paced look and feel of director Fede Alvarez’s new film, The Girl in the Spider’s Web, came to the project via a series of right time/right place situations. First, co-supervising sound editor Julian Slater (who played a big role in Baby Driver’s audio post) met picture editor Tatiana Riegel at last year’s ACE Awards.

During early 2018, Slater was approached to work on the latest adaptation of the crime novels by the Swedish author Stieg Larsson. Alvarez was impressed with Slater’s contribution to both Baby Driver and the Oscar-winning Mad Max: Fury Road (2015). “Fede told me that he uses the soundtrack to Mad Max to show off his home Atmos playback system,” says Slater, who served as sound designer on that film. “I was happy to learn that Tatiana had also been tagged to work on The Girl in the Spider’s Web.”

Back row (L-R): Micah Loken, Sang Kim, Mandell Winter, Dan Boccoli, Tatiana Riegel, Kevin O’Connell, Fede Alvarez, Julian Slater, Hamilton Sterling, Kyle Arzt, Del Spiva and Maarten Hofmeijer. Front row (L-R): Pablo Prietto, Lola Gutierrez, Mathew McGivney and Ben Sherman.

Slater, who would also be working on the crime drama Bad Times at the El Royale for director Drew Goddard, wanted Mandell Winter as his co-supervising sound editor. “I very much liked his work on The Equalizer 2, Death Wish and The Magnificent Seven, and I knew that we could co-supervise well together. I came on full time after completing El Royale.”

Editor Riegel (Gringo, I, Tonya, Million Dollar Arm, Bad Words) was a fan of the original Stieg Larsson Millennium Series films — The Girl With the Dragon Tattoo, The Girl Who Kicked the Hornet’s Nest and The Girl Who Played with Fire — as well as David Fincher’s 2011 remake of The Girl With the Dragon Tattoo. She was already a fan of Alvarez, admiring his previous suspense film, Don’t Breathe, and told him she enjoyed working on different types of films to avoid being typecast. “We hit it off immediately,” says Riegel, who then got together with Julian Slater and Mandell Winter to discuss specifics.

The latest outing in the Stieg Larsson franchise, The Girl in the Spider’s Web: A New Dragon Tattoo Story, stars English actress Claire Foy (The Crown) in the eponymous role of young computer hacker Lisbeth Salander who, along with journalist Mikael Blomkvist, gets caught up in a web of spies, cybercriminals and corrupt government officials. The screenplay was co-written by Jay Basu and Alvarez from the novel by David Lagercrantz. The cast also includes Sylvia Hoeks, Stephen Merchant and Lakeith Stanfield.

Having worked previously with Niels Arden Oplev, the Swedish director of 2009’s The Girl with the Dragon Tattoo, Winter knew the franchise and was interested in working on the newest offering. He was also excited about working with director Fede Alvarez. “I loved the use of color and lighting choices that Fede selected for Don’t Breathe, so when Julian Slater called I jumped at the opportunity. None of us had worked together before, and it was Fede’s first large-budget film, having previously specialized in independent offerings. I was eager to help shepherd the film’s immersive soundtrack through the intricate process from location to the dub stage.”

From the very outset, Slater argued for a native Dolby Atmos soundtrack, with a 7.1-channel Avid Pro Tools bed that evolved through editorial, with appropriate objects being assigned during re-recording to surround and overhead locations. “We knew that the film would be very atmospheric,” Slater recalls, “so we decided to use spaces and ambiences to develop a moody, noir thriller.”

The film was dubbed on the William Holden Stage at Sony Pictures Studios, with Kevin O’Connell handling dialog and music, and Slater overseeing sound effects elements.

Cutting Picture on Location
Editor Riegel and two assistants joined the project at its Berlin location last January. “It was a 10-month journey until final print mastering in mid-October,” she says. “We knew CGI elements would be added later. Fede didn’t do any previz, instead focusing on VFX during post production. We set up Avid Media Composers and assemble-edited the dailies as we went” against early storyboards. “Fede wanted to play up the film’s rogue theme; he had a very, very clear focus of the film as spectacle. He wanted us to stay true to the Lisbeth Salander character from the original films, yet retain that dark, Scandinavian feel from the previous outings. The film is a fun ride!”

The team returned to Los Angeles in April and turned the VFX over to Pixomondo, which was brought on to handle the greenscreen CGI sequences. “We adjourned to Pivotal Post in Burbank for the Director’s Cut and then to the Sony lot in Culver City for the first temp mix,” explains Riegel. “My editing decisions were based on the innate DNA of the shot material, and honoring the script. I asked Fede a lot of questions to ensure that the story and the pacing were crystal clear. Our first assembly was around two hours and 15 minutes, which we trimmed to just under two hours during a series of refinements. We then removed 15 minutes to reach our final 1:45 running time, which worked for all of us. The cut was better without the dropped section.”

Daniel Boccoli served as first assistant picture editor, Patrick Clancey was post finishing editor, Matthew McGivney was VFX editor and Andrew McGivney was VFX assistant editor.

Because Riegel likes to cut against an evolving soundtrack, she developed a temporary dialog track in her Avid workstation, adding sound effects taken from commercial libraries. “But there is a complex fight and chase sequence in the middle of the film that I turned over to Mandell and Julian early on so I could secure realistic effects elements to help inform the cut,” she explains. “Those early tracks were wonderful and gave me a better idea of what the final film would sound like. That way I can get to know the film better — I can also open up the cut to make space for a sound if it works within the film’s creative arcs.”

“Our overall direction from Fede Alvarez was to make the soundtrack feel cold when we were outside and to grab the audience with the action… while focusing on the story,” Winter explains. “We were also working against a very tight schedule and had little time for distractions. After the first temp, Julian and I got notes from Fede and Tatiana and set off using that feedback, which continued through three more temp mixes.”

Having completed supervising work on The Equalizer 2, Winter came aboard full time in mid-June, with temp mixes running through the beginning of September. “We were finaling by the last week of September, ahead of the film’s world premiere on October 19 at the International Rome Film Festival.”

There was no spotting session, so the team was on a tight post schedule from day one, according to Slater. “There were a number of high-action scenes that needed intricate sound design, including the eight-minute sequence that begins with explosions in Lisbeth Salander’s apartment and the subsequent high-speed motorbike chase.”

Sound designer Hamilton Sterling crafted major sections of the film’s key fight and chase sequences.

Intricate Sound Design
“We liked Hamilton’s outstanding work on Independence Day: Resurgence and Logan and relied upon him to develop truly unique sounds for the industrial heating towers, motorbikes and fights,” says Winter. “Sound effects editor Ryan Collins cut the gas mask fight sequence, as well as a couple of reels, while Karen Vassar Triest handled another couple of reels, and David Esparza worked on several of the early sequences.”

Other sound effects editors included Ando Johnson and Robert Stambler, together with dialog editor Micah Loken and supervising Foley editor Sang Jun Kim.

Sterling is particularly proud of several sequences he designed for the film. “During a scene in which the lead character Lisbeth Salander is drugged, I used the Whoosh plug-in [from the German company, Tonsturm] inside Native Instruments’ Reaktor [modular music software] to create a variable, live-performable heartbeat. I used muffled explosion samples that were Doppler-shifted at different speeds against the picture to mimic the pulse-changing effects of various drugs. I also used Whoosh to create different turbo sounds for the Ducati motorcycle driven by Lisbeth, together with air-release sounds. They were subtle effects, because we didn’t want the result to sound like a ‘sci-fi bike’ — just a souped-up twin-cylinder Ducati.”
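Sterling’s Whoosh-and-Reaktor workflow is proprietary, but the underlying idea — a Doppler-style pitch shift produced by reading through a sample faster or slower — can be illustrated with a minimal Python sketch (the function name and the linear-interpolation approach here are illustrative, not part of his actual toolchain):

```python
def doppler_shift(samples, ratio):
    """Naive Doppler-style pitch shift by resampling.

    ratio > 1.0 reads through the source faster (higher pitch,
    shorter duration); ratio < 1.0 does the opposite. Linear
    interpolation is used between adjacent source samples.
    """
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # Blend the two neighboring samples around the read position.
        out.append(samples[i] * (1.0 - frac) + samples[i + 1] * frac)
        pos += ratio
    return out
```

Sweeping the ratio over time, rather than holding it constant, is what yields the speed-varying, pulse-like effect Sterling describes.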

For the car chases, Sterling used whale-spout blasts to mimic the sound of a car driving through deep puddles with water striking the inside of the wheel wells. For frightening laughs in another sequence, the sound designer turned to Tonsturm’s Doppler program, which he used in an unorthodox way. “The program can be set to break up a sound sample using, for example, a 5.1-channel star pattern with small Doppler shifts to produce very disturbing laughter,” he says. “For the heating towers I used several sound components, including slowed-down toaster noises to add depth and resonance — a hum from the heating elements, plus ticks and clangs as they warmed up. Julian suggested that we use ‘chittery’ effects for the computer user interfaces, so I used The Cargo Cult’s Envy plug-in to create unusual sounds and to avoid the conventional ‘bips’ and ‘boops’ noises. Envy is a spectral-shift, pitch- and amplitude-change application that is very pitch manipulatable. I also turned to the Sound Particles app to generate complex wind sounds that I delivered as immersive 7.1.2 Pro Tools tracks.”

“We also had a lot of Foley, which was recorded on Stage B at Sony Studios by Nerses Gezalyan with Foley artists Sara Monat and Robin Harlen,” Winter adds. “Unfortunately, the production dialog had a number of compromised tracks from the Berlin locations. As a result, we had a lot of ADR to shoot. Scheduling the ADR was complicated by the time difference, as most of our actors were in London, Berlin, Oslo or Stockholm. We used Foley to support the cleaned-up dialog tracks and backfilled tracks. Our dialog editor was very knowledgeable about iZotope RX 7 Advanced software. Micah Loken really understood how to use it, and how not to use it. He can dig deep into a track without affecting the quality of the voice, and without overdoing the processing.”

The music from composer Roque Baños — who also worked with Alvarez on Don’t Breathe and Evil Dead — arrived very late in the project, “and remained something of a mystery,” Riegel recalls. “Being a musician himself, Fede knew what he wanted and how to achieve that result. He would disappear into an edit suite close to the stage with the music editors Maarten Hofmeijer and Del Spiva, where they cut together the score against the locked picture — or as locked as it ever was! After that we could balance the music against the dialog and sound effects.”

Regarding sound effects elements, Winter acknowledges that his small editorial team needed to work against a tight schedule. “We had a 7.1.2 template that allowed Tony [Lamberti] and later Julian to use the automated panning data. For the final mix in Atmos, we used objects minimally for the music and dialog. However, we used overhead objects strategically for effects and design. In an early sequence we put the sound of the rope — used to suspend an abusive husband — above the audience.” Re-recording mixer Tony Lamberti handled some of the early temp mixes in Slater’s absence.

Collaborative Re-Recording Process
When the project reached the William Holden Stage, “we could see the overall shape of the film with the VFX elements and decide what sounds would now be needed to match the visuals, since we had a lot of new technology to cover, including computer screens,” Riegel says.

Winter agrees: “Yes, we could now see where Fede Alvarez wanted to take the film and make suggestions about new material. We started asking: ‘What do you think about this and that option?’ Or, ‘What’s missing?’ It was an ongoing series of conversations through the temp mixes, re-mixes and then the final.”

Having handled the first temp mix at Sony Studios, Slater returned full-time for the final Atmos mixes. “After so many temp mixes using the same templates, I knew that we would not be re-inventing the wheel on the William Holden Stage. We simply focused on changing the spatiality of what we had. Having worked with Kevin O’Connell on both Jumanji: Welcome to the Jungle and The Public, I knew that I had to do my homework and deliver what he needed from my side of the console. Kevin is very involved. He’ll make suggestions, but always based on what is best for the film. I learned a lot by seeing how he works; he is very experienced. It’s easy to find what works with Kevin, since he has experience with a wide range of technologies and keeps up with new advances.”

Describing the re-recording process as highly collaborative, Winter remained objective about creative options. “You can get too close to the soundtrack. With a number of German and English actors, we constantly had to ask ourselves: ‘Do we have clarity?’ If not, can we fix it in the track or turn to ADR? We maintained a continuing conversation with Tatiana and Fede, with ideas that we would circulate backwards and forwards. Since we had a lot of new people working on the crew, trust became a major factor. Everybody was incredibly professional.”

“It was a very rewarding experience working with so many talented new people,” Slater concludes. “I quickly tuned into Fede Alvarez’s specific needs and sensibilities. It was a successful liaison.”

Riegel says that her biggest challenge was “trying to figure out what the film is supposed to be — from the script and pre-production through the shoot and first assembly. It’s a gradual process and one that involves regular conversations with my assistant editors and the director as we develop characters and clarify the information being shown. But I didn’t want to hit the audience over the head with too much information. We needed to decide: ‘What is important?’ and retain as much realism as possible. It’s a complex, creative process … and one that I totally love being a part of!”


Mel Lambert has been involved with production industries on both sides of the Atlantic for more years than he cares to remember. He is principal of Content Creators, a Los Angeles-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. He is also a long-time member of the UK’s National Union of Journalists.

Post developments at the AES Berlin Convention

By Mel Lambert

The AES Convention returned to Berlin after a three-year absence, and once again demonstrated that the Audio Engineering Society can organize a series of well-attended paper programs, seminars and workshops, in addition to an exhibition of familiar brands, for the European tech-savvy post community. 

Held at the Maritim Hotel in the creative heart of Berlin in late May, the 142nd AES Convention was co-chaired by Sascha Spors from the University of Rostock in Germany and Nadja Wallaszkovits from the Austrian Academy of Sciences. According to AES executive director Bob Moses, attendance was 1,800 — a figure at least 10% higher than last year’s gathering in Paris — with post professionals from several overseas countries, including China and Australia.

During the opening ceremonies, current AES president Alex Case stated that “AES conventions represent an ideal interactive meeting place,” whereas “social media lacks the one-on-one contact that enhances our communications bandwidth with colleagues and co-workers.” Keynote speaker Dr. Alex Arteaga, whose research integrates aesthetic and philosophical practices, addressed the thorny subject of “Auditory Architecture: Bringing Phenomenology, Aesthetic Practices and Engineering Together,” arguing that when considering the differences between audio soundscapes, “our experience depends upon the listening environment.” His underlying message was that a full appreciation of the various ways in which we hear immersive sounds requires a deeper understanding of how listeners interact with that space.

As part of his Richard C. Heyser Memorial Lecture, Prof. Dr. Jorg Sennheiser outlined “A Historic Journey in Audio-Reality: From Mono to AMBEO,” during which he reviewed the basis of audio perception and the interdependence of hearing with other senses. “Our enjoyment and appreciation of audio quality is reflected in the continuous development from single- to multi-channel reproduction systems that are benchmarked against sonic reality,” he offered. “Augmented and virtual reality call for immersive audio, with multiple stakeholders working together to design the future of audio.”

Post-Focused Technical Papers
There were several interesting technical papers that covered the changing requirements of the post community, particularly in the field of immersive playback formats for TV and cinema. With the new ATSC 3.0 digital television format scheduled to come online soon, including object-based immersive sound, there is increasing interest in techniques for capturing surround material and then delivering the same to consumer audiences.

In a paper titled “The Median-Plane Summing Localization in Ambisonics Reproduction,” Bosun Xie from the South China University of Technology in Guangzhou explained that, while one aim of Ambisonics playback is to recreate the perception of a virtual source in arbitrary directions, practical techniques are unable to recreate the correct high-frequency spectra in binaural pressures that serve as front-back and vertical localization cues. Current research shows that the changes in interaural time difference (ITD) that result from head-turning during Ambisonics playback match those of a real source, and hence provide a dynamic cue for vertical localization, especially in the median plane. In addition, the low-frequency virtual source direction can be approximately evaluated using a set of panning laws.
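For context, the ITD cue that Xie’s paper leans on is often approximated with Woodworth’s spherical-head formula; this small Python sketch (the 8.75 cm head radius is a typical textbook assumption, not a figure from the paper) shows the scale of the cue:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate interaural time difference (seconds) for a source
    at the given azimuth (0 = straight ahead, 90 = directly to one
    side), using Woodworth's spherical-head model:
    ITD = (r / c) * (theta + sin(theta)).
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# A source directly to the side arrives roughly 0.66 ms earlier at the
# nearer ear; turning the head changes the azimuth and hence the ITD,
# which is the dynamic cue the paper discusses.
```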

“Exploring the Perceptual Sweet Area in Ambisonics,” presented by Matthias Frank from the University of Music in Graz, Austria, described how the sweet area of Ambisonics playback falls short of the large listening area needed in real-world applications. He outlined a method for experimentally determining the perceptual sweet area by assessing the localization of both dry and reverberant sound at different Ambisonic encoding orders.

Another paper, “Perceptual Evaluation of Synthetic Early Binaural Room Impulse Responses Based on a Parametric Model,” presented by Philipp Stade from the Technical University of Berlin, described how an acoustical environment can be modeled using sound-field analysis plus spherical head-related impulse responses/HRIRs — and the results compared with measured counterparts. The listening experiment showed comparable performance that was, in the main, independent of room and test signals. (Perhaps surprisingly, the synthesis of direct sound and diffuse reverberation yielded almost the same results as the parametric model.)

“Influence of Head Tracking on the Externalization of Auditory Events at Divergence between Synthesized and Listening Room Using a Binaural Headphone System,” presented by Stephan Werner from the Technical University of Ilmenau, Germany, reported on a study that considered the influence of head tracking on the localization of auditory events over a binaural headphone system. Impulse responses were recorded from a five-channel loudspeaker setup in two acoustically different rooms. Results revealed that head tracking increased sound externalization but did not overcome the room-divergence effect.

Heiko Purnhagen from Dolby Sweden, in a paper called “Parametric Joint Channel Coding of Immersive Audio,” described a coding scheme that can deliver channel-based immersive audio content in such formats as 7.1.4, 5.1.4 or 5.1.2 at very low bit rates. Based on a generalized approach for parametric spatial coding of groups of two, three or more channels using a single downmix channel, together with a compact parametrization that guarantees full covariance reinstatement in the decoder, the scheme is implemented via the A-JCC tool standardized in Dolby AC-4.
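As a rough illustration of the downmix-plus-parameters idea — emphatically not Dolby's A-JCC algorithm, whose details are proprietary — the sketch below downmixes a two-channel group to a single channel and transmits only the per-channel energies as parameters; the decoder rescales the downmix so each output regains its original energy. Reinstating the full covariance, including the cross-channel term, would additionally require decorrelators, which are omitted here. All function names are illustrative.

```python
import math
import random

def analyze_and_downmix(left, right):
    # Encoder side: a single downmix channel plus compact parameters
    # (here just the per-channel mean-square energies) for a two-channel group.
    n = len(left)
    downmix = [(l + r) * 0.5 for l, r in zip(left, right)]
    e_left = sum(x * x for x in left) / n
    e_right = sum(x * x for x in right) / n
    return downmix, (e_left, e_right)

def upmix(downmix, params):
    # Decoder side: rescale the downmix so each output channel regains
    # its original energy. Restoring the full covariance (including the
    # cross-channel term) would additionally require a decorrelator.
    e_left, e_right = params
    e_dmx = sum(x * x for x in downmix) / len(downmix)
    g_left = math.sqrt(e_left / e_dmx)
    g_right = math.sqrt(e_right / e_dmx)
    return [g_left * x for x in downmix], [g_right * x for x in downmix]

# Two uncorrelated test channels at different levels
random.seed(1)
left = [random.gauss(0.0, 1.0) for _ in range(4096)]
right = [random.gauss(0.0, 0.5) for _ in range(4096)]
downmix, params = analyze_and_downmix(left, right)
left_out, right_out = upmix(downmix, params)
```

The bit-rate appeal is visible even in this toy: one audio channel plus two scalar parameters stand in for two full channels.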

Hardware Choices for Post Users
Several manufacturers demonstrated compact near-field audio monitors targeted at editorial suites and pre-dub stages. Adam Audio focused on its new near/mid-field S Series, which uses the firm’s ART (Accelerating Ribbon Technology) ribbon tweeter. The five models — the S2V, S3H, S3V, S5V and S5H — are designed for horizontal or vertical orientation. The firm’s newly developed LF and mid-range drivers — with custom-designed waveguides for the tweeter, and for the MF driver on the larger, multiway models — are powered by a new DSP engine that “provides crossover optimization, voicing options and expansion potential,” according to the firm’s head of marketing, Andre Zeugner.

The Eve Audio SC203 near-field monitor features a three-inch LF/MF driver plus an AMT ribbon tweeter, and is supplied with a V-shaped rubberized pad that decouples the loudspeaker from its base to reduce unwanted resonances while positioning it flat or at a 7.5- or 15-degree angle. An adapter enables mounting directly on any microphone or speaker stand with a 3/8-inch thread. Integral DSP and a rear-mounted passive radiator are said to reinforce LF reproduction, extending the response down to 62Hz (-3dB).

Genelec showcased The Ones, a series of point-source monitors comprising the current three-way Model 8351 plus the new two-way Model 8331 and three-way Model 8341. All three units include a coaxial MF/HF driver plus two acoustically concealed LF drivers for vertical and horizontal operation. A new Minimum Diffraction Enclosure/MDE is featured, together with the firm’s loudspeaker management and alignment software, accessed via a dedicated Cat5 network port.

The Neumann KH-80 DSP near-field monitor is designed to offer automatic system alignment using the firm’s control software that is said to “mathematically model dispersion to deliver excellent detail in any surroundings.” The two-way active system features a four-inch LF/MF driver and one-inch HF tweeter with an elliptical, custom-designed waveguide. The design is described as offering a wide horizontal dispersion to ensure a wide sweet spot for the editor/mixer, and a narrow vertical dispersion to reduce sound reflections off the mix console.

To handle multiple monitoring sources and loudspeaker arrays, the Trinnov D-Mon Series controllers enable stereo to 7.1-channel monitoring from both analog and digital I/Os using Ethernet- and/or MIDI-based communication protocols and a fast-switching matrix. An internal mixer creates various combinations of stems, main or aux mixes from discrete inputs. An Optimizer processor offers tuning of the loudspeaker array to match studio acoustics.

Unveiled at last year’s AES Convention in Paris, the Eventide H9000 multichannel/multi-element processing system has been under constant development during the past 12 months, with new functions targeted at film and TV post, including EQ, dynamics and reverb effects. DSP elements can be run in parallel or in series to create multiple, fully programmable channel strips per engine. Control plug-ins for Avid Pro Tools and other DAWs are being finalized, together with Audinate Dante, Thunderbolt, Ravenna/AES67 and AVB networking.

Filmton, the German association for film sound professionals, explained to AES visitors its objective “to reinforce the importance of sound at an elemental level for the film community.” The association promotes the appreciation of film sound, together with the local film industry and its policy toward the public, while providing “an expert platform for technical, creative and legal issues.”

Philipp Sehling

Lawo demonstrated the new mc²96 Grand Audio production console, an IP-based networkable design for video post production, available with up to 200 on-surface faders. Innovative features include automatic gain control across multiple channels and miniature TFT color screens above each fader that display LiveView thumbnails of the incoming channel sources.

Stage Tec showed new processing features for its Crescendo Platinum TV post console, courtesy of v4.3 software, including an automixer based on gain sharing that can be used on every input channel, loudness metering to EBU R128 for sum and group channels, a de-esser on every channel path, and scene automation with individual user-adjustable blend curves and times for each channel.
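For readers unfamiliar with R128-style metering, the measurement Stage Tec implements is built on the ITU-R BS.1770 mapping from mean-square signal power to LUFS. The sketch below shows only that core mapping — the real meter also applies K-weighting pre-filters and 400ms gated blocks, both deliberately omitted here, and the function name is my own.

```python
import math

def integrated_loudness_sketch(samples):
    # Map mean-square power to LUFS per the BS.1770 formula
    # L = -0.691 + 10*log10(mean square). A real EBU R128 meter
    # additionally applies K-weighting filters and block-based
    # gating, which this simplified sketch omits.
    mean_square = sum(x * x for x in samples) / len(samples)
    return -0.691 + 10.0 * math.log10(mean_square)

# One second of a full-scale 1kHz sine at 48kHz has a mean square of 0.5,
# so this unweighted sketch reads roughly -3.7; the standardized,
# K-weighted measurement of such a tone is about -3 LUFS.
fs = 48000
tone = [math.sin(2.0 * math.pi * 1000.0 * k / fs) for k in range(fs)]
lufs = integrated_loudness_sketch(tone)
```

The logarithmic mapping is why gain-sharing automixers and loudness meters pair naturally on a console: both work from running channel power estimates.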

Avid demonstrated native support for the new 7.1.2 Dolby Atmos channel-bed format — basically the familiar 7.1-channel bed plus two height channels — for editorial suites and consumer remastering, plus several upgrades for Pro Tools, including new panning software for object-based audio and the ability to switch between automatable object and bus outputs. Pro Tools HD is said to be the only DAW natively supporting in-the-box Atmos mixing in this 10-channel 7.1.2 format. Full integration for Atmos workflows is now offered for control surfaces such as the Avid S6.

Jon Schorah

There was a new update to Nugen Audio’s popular Halo Upmix plug-in for Pro Tools — in addition to stereo to 5.1, 7.1 or 9.1 conversion, it is now capable of delivering 7.1.2-channel mixes for Dolby Atmos soundtracks.

A dedicated Dante Pavilion featured several manufacturers that offer network-capable products, including Solid State Logic, whose Tempest multi-path processing engine and router is now fully Audinate Dante-capable for T Series control surfaces with unique arbitration and ownership functions; Bosch RTS intercom systems featuring Dante connectivity with OCA system control; HEDD/Heinz Electrodynamic Designs, whose Series One monitor speakers feature both Dante and AES67/Ravenna ports; Focusrite, whose RedNet series of modular pre-amps and converters offer “enhanced reliability, security and selectivity” via Dante, according to product specialist for EMEA/Germany, Dankmar Klein; and NTP Technology’s DAD Series DX32R and RV32 Dante/MADI router bridges and control room monitor controllers, which are fully compatible with Dante-capable consoles and outboard systems, according to the firm’s business development manager Jan Lykke.

What’s Next For AES
The next European AES convention will be held in Milan during the spring of 2018. “The society also is planning a new format for the fall convention in New York,” said Moses, as the AES is now aligning with the National Association of Broadcasters. “Next January we will be holding a new type of event in Anaheim, California, to be titled AES @ NAMM.” Further details will be unveiled next month. He also explained that there will be no West Coast AES Convention next year. Instead, the AES will return to New York in the autumn of 2018 with another joint AES/NAB gathering at the Jacob K. Javits Convention Center.


Mel Lambert is an LA-based writer and photographer. He can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

Netflix's Stranger Things

AES LA Section & SMPTE Hollywood: Stranger Things sound

By Mel Lambert

The most recent joint AES/SMPTE meeting at the Sportsmen’s Lodge in Studio City showcased the talents of the post production crew that worked on the recent Netflix series Stranger Things at Technicolor’s facilities in Hollywood.

Over 160 attendees came to hear how supervising sound editor Brad North, sound designer Craig Henighan, sound effects editor Jordan Wilby, music editor David Klotz and dialog/music re-recording mixer Joe Barnett worked their magic on last year’s eight-episode Season One. (Sadly, effects re-recording mixer Adam Jenkins was unable to attend the gathering.) Stranger Things, from co-creators Matt Duffer and Ross Duffer, is scheduled to return in mid-year for Season 2.

L-R: Jordan Wilby, Brad North, Craig Henighan, Joe Barnett, David Klotz and Mel Lambert. Photo Credit: Steve Harvey.

Attendees heard how the crew developed each show’s unique 5.1-channel soundtrack, from editorial through re-recording — including an ‘80s-style, synth-based music score, from Austin-based composers Kyle Dixon and Michael Stein, that is key to the show’s look and feel — courtesy of a full-range surround sound playback system supplied by Dolby Labs.

“We drew our inspiration — subconsciously, at least — from sci-fi films like Alien, The Thing and Predator,” Henighan explained. The designer also revealed how he developed a characteristic sound for the monster that appears in key scenes. “The basic sound is that of a seal,” he said. “But it wasn’t as simple as just using a seal vocal, although it did provide a hook — an identifiable sound around which I could center the rest of the monster sounds. It’s fantastic to take what is normally known as a nice, light, fun-loving sound and use it in a terrifying way!” Tim Prebble, a New Zealand-based sound designer and owner of sound effects company Hiss and A Roar, offers a range of libraries, including SD003 Seal Vocals.

Gear used includes Avid Pro Tools DAWs — everybody works in the box — and an Avid 64-fader, dual-operator S6 console at the Technicolor Seward Stage. The composers use Apple Logic Pro to record and edit their AAF-format music files.


Mel Lambert is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

 

Editor Joe Walker on establishing a rhythm for Denis Villeneuve’s Arrival

By Mel Lambert

For seasoned picture editor Joe Walker, ACE, his work with directors Denis Villeneuve and Steve McQueen might best be described as “three times a charm.” His trio of successes with Villeneuve includes the drug enforcement drama Sicario, the alien visitor film Arrival and the much-anticipated, upcoming sci-fi drama Blade Runner 2049, which is currently in post. His three films with McQueen include Hunger, Shame and the 2014 Oscar winner for Best Picture, 12 Years a Slave, which earned Walker a nomination for his editing work.

In addition, he has worked on a broad array of films, ranging from director Michael Mann’s cyber thriller Blackhat to writer/director Rupert Wyatt’s The Escapist to director Daniel Barber’s Harry Brown to writer/director Rowan Joffe’s Brighton Rock, which is a reworking of the Graham Greene classic.

Arrival - Paramount

We are currently in the midst of awards season, and recently Paramount’s Arrival received eight Oscar noms, including Best Director and a Best Editing nod for Walker. The film also earned nine BAFTA Award nominations, including Best Editing, Best Director and Best Film. It has also been nominated for an American Cinema Editors Eddie in the Best Edited Feature Film — Dramatic category. (Read our interview with director Denis Villeneuve here.)

“My approach to all the films I have edited is to find the basic ‘rhythm’ of a scene,” Walker explains. His background as a sound designer and composer enhances those sensibilities, in terms of internal pacing, beat and dramatic pulse.

The editor’s path toward Villeneuve began at a 2010 screening of Incendies in his native London. “I was blown away and set my heart on working with this director. That same heart was beating out of my chest a few years later watching 2014’s Prisoners. While finishing Michael Mann’s Blackhat in 2015, my agent got me into the room with Denis for Sicario, which had a very solid script. That evolution felt like it was going in the right direction for me. Cinematographer Roger Deakins produced stunning work — he’s also cinematographer on Blade Runner 2049.” (Deakins was nominated for both Oscar and BAFTA Awards for Sicario.)

The Edit
For Arrival, Walker’s biggest challenge was reconciling the two parallel worlds that existed within the evolving dramatic arcs. While several alien spacecraft land around the world, a linguistics expert (Amy Adams) is recruited by the military to determine whether they come in peace. “On the one hand we have the natural setting of the mother/daughter relationship, with beautiful, intimate material shot by a lakeside near Montreal, and the narrative content on a far lower gas,” explains Walker. “That’s pitted against the high-tech world of space ships as we learn more about the alien visitors and the psychological task faced as the lead character tries to decode their complex written language. Without CGI visuals of the Heptapods — the multi-limb visitors — I had to make early decisions about what space to leave in a scene for their eventual movements. From what was shot on set, all we had were puppeteers holding tennis balls on a stick.”

Walker saw every Arrival daily and started his cut early. “We had to turn over the Heptapod sequences to Montreal VFX house Hybride almost as soon as the director’s cut began,” he says. “And because, for me, sound always drives a lot of what I do, I brought on creature sound designer Dave Whitehead ahead of the game. I’d been impressed by Dave’s work on [Neill Blomkamp’s] District 9. I needed to know what type of sounds would be used for the aliens, and cut accordingly. He developed a coherent language with an inbuilt syntax and really nailed the ‘character’ of the Heptapods. I laid up his sounds onto tracks in my Avid Media Composer and they stayed pretty much unchanged all the way through post.”

In terms of pace and narrative arcs, Walker states that director Villeneuve “chose to starve the audience of information and just offer intriguing nuggets, teasing out the suspense and keeping them waiting for the pay off. For example, on one scene we hold on Amy Adams’ face watching the breaking news on the TV rather than the TV show itself,” which was reporting the mysterious spacecraft touching down in 12 cities. “Forest Whitaker [US Army Colonel Weber] plays our first audio of the Heptapods on a Dictaphone and it stimulates such curiosity about how they may look or behave. We avoided any pressure of cutting for the sake of cutting. Instead, we stayed on a shot, let it play and did not do all the thinking for the audience. While editing 12 Years a Slave, we stay on the hanging scene and don’t cut away. There’s no relief, it allows the audience to be truly troubled by the horrible inertia of the scene.”

Again, the word “rhythm” figures prominently within Walker’s creative vocabulary. “I always try to find the rhythm of a scene — one that works with the sounds and music elements. For Sicario, I developed peaks and troughs in the dramatic flow that supported different points of view” as the audience slowly begins to understand the complexity of the drug enforcement campaign. “Bad sound disturbs me, including distorted or widely variable dialogue levels. I always work hard to get the best out of the production tracks, perhaps more than I really have time for.

“With both Steve McQueen and Denis Villeneuve, I’ve always tried to avoid using music temp tracks, so that we do not become too influenced during the editing process,” he continues. “By holding off until we’re late into a final cut, we can stay critical in our judgments about the story and characters. When brought in later, music becomes a huge bonus since you’ve already been ruthless with the story. You use music only where it’s absolutely necessary, allowing silence or sound effects to have their day. I think composers want the freedom of a blank canvas. Otherwise, as the English composer Matthew Herbert once said, ‘Music is in an abusive relationship with film.’”

Changing Direction During Edit
While cutting Arrival, Walker recalls that one key scene took a dramatic left turn. “As scripted and shot,” he explains, “the nightmare sequence started out as a normal scene in which Amy Adams’ character, Louise, is visited in her quarters by colleague Ian [Jeremy Renner] and her boss, Colonel Weber, who decides to bench her. This was the beginning of a long piece of story tubing, which felt redundant. We’d tried to discard it, but the scene had an essential piece of information that we couldn’t live without: the notion that exposure to a language can rewire your mind.

“We thought about conveying that information elsewhere as voiceover or ADR, but instead, as an experiment, we strung together very crudely only the pieces we needed, thereby creating at one point a jarring join between one line of Ian’s dialogue and another. I always try to be ballsy with material, to stay on it with confidence or maul it, to tell the story a better way.”

In that pivotal scene in Arrival, during a close-up, Adams’ character is looking off-camera toward Whitaker. “But we never cut to him because it would take us down the path we wanted to avoid,” explains Walker. “As it happened, that same day in the cutting room, we saw the first test shots from Hybride’s VFX team of an alien crawling forward, looking like an elephant shrouded in mist. That first look inspired our decision to hold onto Adams’ off-camera look for as long as we could, and then — instead of going to a matching reverse revealing Forest Whitaker — we cut to this huge alien crouching in the corner of her bedroom.

“The scene was rounded off by a shot of Amy’s character waking up and looking utterly thrown. We kept the jarring cut [from Ian and then back to him], and added the incongruous sound of a canary, since it signaled early on that all is not as it seems. A nightmare was a great way to get inside Louise’s head. Ian’s presence in her dream also platforms their romance, which enters so late in the story. Normally, returning material to a cut can feel like putting wet swimming trunks back on, but here it set our minds alight.”

Adams’ performance throughout Arrival was thrilling to cut, says Walker. “She is very real in every take and always true to character, keeping her performance at just the right temperature for each scene. Every nuance counts, particularly in a film that has to hold up to scrutiny on a second or third viewing when more is understood about the true nature of things. To hold the audience’s attention in a scene, an editor’s craft involves a balance between time and tension.”

Walker says, “Time is our superpower since we can slow a moment down, speed it up or jump from one shard of a timeline to another. In Arrival we had two parallel worlds: the real-life world of the army camp with all the news on TVs and heavy technology. In opposition is the child’s world of caterpillars and nature. I could cut those together at will and flip quickly from one to the other.”

Walker says that after the 10-week shoot for Arrival, he spent a week finalizing his editor’s cut and then 10 to 14 weeks on the director’s cut with basic CGI. “We then went through test screenings as the final photorealistic CGI elements slowly took shape,” he recalls. “We refined the film’s overall pace and rhythm and made sure that each tiny fragment of this fantastic puzzle was told as well as we could. I consider the result to be really one of the most successful edits I have been involved with.”


LA-based Mel Lambert is principal of Content Creators. He can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.


Main Image: Joe Walker and Denis Villeneuve. Photo Credit: Javier Marcheselli.

 

AES Conference focuses on immersive audio for VR/AR

By Mel Lambert

The AES Convention, which was held at the Los Angeles Convention Center in early October, attracted a broad cross-section of production and post professionals looking to discuss the latest technologies and creative offerings. The convention had approximately 13,000 registered attendees and more than 250 brands showing their wares in the exhibit halls and demo rooms.

Convention Committee co-chairs Valerie Tyler and Michael MacDonald, along with their team, created the comprehensive schedule of workshops, panels and special events for this year’s show. “The Los Angeles Convention Center’s West Hall was a great new location for the AES show,” said MacDonald. “We also co-located the AVAR conference, and that brought 3D audio for gaming and virtual reality into the mainstream of the AES.”

“VR seems to be the next big thing,” added AES executive director Bob Moses, “[with] the top developers at our event, mapping out the future.”

The two-day, co-located Audio for Virtual and Augmented Reality Conference was expected to attract about 290 attendees, but with aggressive marketing and outreach to the VR and AR communities, pre-registration closed at just over 400.

Aimed squarely at the fast-growing field of virtual/augmented reality audio, this conference focused on the creative process, applications workflow and product development. “Film director George Lucas once stated that sound represents 50 percent of the motion picture experience,” said conference co-chair Andres Mayo. “This conference demonstrates that convincing VR and AR productions require audio that follows the motions of the subject and produces a realistic immersive experience.”

Spatial sound that follows head orientation for headsets — powered either by dedicated DSP, game engines or smartphones — opens up exciting opportunities for VR and AR producers. Oculus Rift, HTC Vive, PlayStation VR and other systems are attracting added consumer interest for the coming holiday season. Many immersive-audio innovators, including DTS and Dolby, are offering variants of their cinema systems targeted at this booming consumer marketplace via binaural headphone playback.

Sennheiser’s remarkable new Ambeo VR microphone (pictured left) can be used to capture 3D sound that can then be post-produced to prepare different spatial perspectives — a perfect adjunct for AR/VR offerings. At the high end, Nokia unveiled its Ozo VR camera, equipped with eight camera sensors and eight microphones, as an alternative to a DIY assembly of GoPro cameras, for example.

Two fascinating keynotes bookended the AVAR Conference. The opening keynote, presented by Philip Lelyveld, VR/AR initiative program manager at the USC Entertainment Technology Center, Los Angeles, and called “The Journey into Virtual and Augmented Reality,” defined how virtual, augmented and mixed reality will impact entertainment, learning and social interaction. “Virtual, Augmented and Mixed Reality have the potential of delivering interactive experiences that take us to places of emotional resonance, give us agency to form our own experiential memories, and become part of the everyday lives we will live in the future,” he explained.

“Just as TV programming progressed from live broadcasts of staged performances to today’s very complex language of multithread long-form content,” Lelyveld stressed, “so such media will progress from the current early days of projecting existing media language with a few tweaks to a headset experience into a new VR/AR/MR-specific language that both the creatives and the audience understand.”

In his closing keynote, “Future Nostalgia, Here and Now: Let’s Look Back on Today from 20 Years Hence,” George Sanger, director of sonic arts at Magic Leap, attempted to predict where VR/AR/MR will be in two decades. “Two decades of progress can change how we live and think in ways that boggle the mind,” he acknowledged. “Twenty years ago, the PC had rudimentary sound cards; now the entire ‘multitrack recording studio’ lives on our computers. By 2036, we will be wearing lightweight portable devices all day. Our media experience will seamlessly merge the digital and physical worlds; how we listen to music will change dramatically. We live in the Revolution of Possibilities.”

According to conference co-chair Linda Gedemer, “It has been speculated by Wall Street [pundits] that VR/AR will be as game changing as the advent of the PC, so we’re in for an incredible journey!”

Mel Lambert, who also gets photo credit on pictures from the show, is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

Industry pros gather to discuss sound design for film and TV

By Mel Lambert

The third annual Mix Presents Sound for Film and Television conference attracted some 500 production and post pros to Sony Pictures Studios in Culver City, California, last week to hear about the art of sound design.

Subtitled “The Merging of Art, Technique and Tools,” the one-day conference kicked off with a keynote address by re-recording mixer Gary Bourgeois, followed by several panel discussions and presentations from Avid, Auro-3D, Steinberg, JBL Professional and Dolby.

L-R: Brett G. Crockett, Tom McCarthy, Gary Bourgeois and Mark Ulano.

During his keynote, Bourgeois advised, “Sound editors and re-recording mixers should be aware of the talent they bring to the project as storytellers. We need to explore the best ways of using technology to be creative and support the production.” He concluded with some more sage advice: “Do not let the geek take over! Instead,” he stressed, “show the passion we have for the final product.”

Other highlights included a “Sound Inspiration Within the Storytelling Process” panel organized by MPSE and moderated by Carolyn Giardina from The Hollywood Reporter. Panelists included Will Files, Mark P. Stoeckinger, Paula Fairfield, Ben L. Cook, Paul Menichini and Harry Cohen. The discussion focused on where sound designers find their inspiration and the paths they take to create unique soundtracks.

CAS hosted a sound-mixing panel titled “Workflow for Musicals in Film and Television Production” that focused on live recording and other techniques to give musical productions a more “organic” sound. Moderated by Glen Trew, the panel included music editor David Klotz, production mixer Phil Palmer, playback specialist Gary Raymond, production mixer Peter Kurland, re-recording mixer Gary Bourgeois and music editor Tim Boot.

Sound Inspiration Within the Storytelling Process panel (L-R): Will Files, Ben L. Cook, Mark P. Stoeckinger, Carolyn Giardina, Harry Cohen, Paula Fairfield and Paul Menichini.

Sponsored by Westlake Pro, a panel called “Building an Immersive Room: Small, Medium and Large” covered basic requirements of system design and setup — including console/DAW integration and monitor placement — to ensure that soundtracks translate to the outside world. Moderated by Westlake Pro’s CTO, Jonathan Deans, the panel was made up of Bill Johnston from Formosa Group, Nathan Oishi from Sony Pictures Studios, Jerry Steckling of JSX, Brett G. Crockett from Dolby Labs, Peter Chaikin from JBL and re-recording mixers Mark Binder and Tom Brewer.

Avid hosted a fascinating panel discussion called “The Sound of Stranger Things,” which focused on the soundtrack for the Netflix original series, with its signature sound design and ‘80s-style, synthesizer-based music score. Moderated by Avid’s Ozzie Sutherland, the panel included sound designer Craig Henighan, SSE Brad North, music editor David Klotz and sound effects editor Jordan Wilby. “We drew our inspiration from such sci-fi films as Alien, The Thing and Predator,” Henighan said. Re-recording mixers Adam Jenkins and Joe Barnett joined the discussion via Skype from the Technicolor Seward stage.

The Barbra Streisand Scoring Stage.

A stand-out event was the Production Sound Pavilion held on the Barbra Streisand Scoring Stage, where leading production sound mixers showed off their sound carts, with manufacturers also demonstrating wireless, microphone and recorder technologies. “It all starts on location, with a voice in a microphone and a clean recording,” offered CAS president Mark Ulano. “But over the past decade production sound has become much more complex, as technologies and workflows evolved both on-set and in post production.”

Sound carts on display included Tom Curley’s Sound Devices 788t recorder and Sound Devices CL9 mixer combination; Michael Martin’s Zaxcom Nomad 12 recorder and Zaxcom Mix-8 mixer; Danny Maurer’s Sound Devices 664 recorder and Sound Devices 633 mixer; Devendra Cleary’s Sound Devices 970, Pix 260i and 664 recorders with Yamaha 01V and Sound Devices CL-12 mixers; Charles Mead’s Sound Devices 688 recorder with CL-12 mixer; James DeVotre’s Sound Devices 688 recorder with CL-12 Alaia mixer; Blas Kisic’s Boom Recorder and Sound Devices 788 with Mackie Onyx 1620 mixer; Fernando Muga’s Sound Devices 788 and 633 recorders with CL-9 mixer; Thomas Cassetta’s Zaxcom Nomad 12 recorder with Zaxcom Oasis mixer; Chris Howland’s Boom Recorder, Sound Devices and 633 recorders, with Mackie Onyx 1620 and Sound Devices CL-12 mixers; Brian Patrick Curley’s Sound Devices 688 and 664 recorders with Sound Devices CL-12 Alaia mixer; Daniel Powell’s Zoom F8 recorder/mixer; and Landon Orsillo’s Sound Devices 688 recorder.

Lon Neumann

CAS also organized an interesting pair of Production Sound Workshops. During the first one, consultant Lon Neumann addressed loudness control with an overview of loudness levels and surround sound management of cinema content for distribution via broadcast television.

The second presentation, hosted by Bob Bronow (production mixer on Deadliest Catch) and Joe Foglia (Marley & Me, Scrubs and From the Earth to the Moon), covered EQ and noise reduction in the field. While it was conceded that, traditionally, any type of signal processing on location is strongly discouraged — such decisions normally being handled in post — the advent of multitrack recording and isolated channels means that it is becoming more common for mixers to use processing on the dailies mix track.

New for this year was a Sound Reel Showcase that featured short samples from award-contending and to-be-released films. The audience in the Dolby Atmos- and Auro 3D-equipped William Holden Theatre was treated to a high-action sequence from Mel Gibson’s new film, Hacksaw Ridge, which is scheduled for release on November 4. It follows the true story of a WWII army medic who served during the harrowing Battle of Okinawa and became the first conscientious objector to be awarded the Medal of Honor. The highly detailed Dolby Atmos soundtrack was created by SSE/sound designer/recording mixer Robert Mackenzie working at Sony Pictures Studios with dialogue editor Jed M. Dodge and ADR supervisor Kimberly Harris, with re-recording mixers Andy Wright and Kevin O’Connell.

Mel Lambert is principal of Content Creators, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

All photos by Mel Lambert.

 

AES Paris: A look into immersive audio, cinematic sound design

By Mel Lambert

The Audio Engineering Society (AES) came to the City of Light in early June with a technical program and companion exhibition that attracted close to 2,600 pre-registrants, including some 700 full-pass attendees. “The Paris International Convention surpassed all of our expectations,” AES executive director Bob Moses told postPerspective. “The research community continues to thrive — there was great interest in spatial sound and networked audio — while the business community once again embraced the show, with a 30 percent increase in exhibitors over last year’s show in Warsaw.” Moses confirmed that next year’s European convention will be held in Berlin, “probably in May.”

Tom Downes

Getting Immersed
There were plenty of new techniques and technologies targeting the post community. One presentation, in particular, caught my eye, since it posed some relevant questions about how we perceive immersive sound. In the session, “Immersive Audio Techniques in Cinematic Sound Design: Context and Spatialization,” co-authors Tom Downes and Malachy Ronan — both of whom are AES student members currently studying at the University of Limerick’s Digital Media and Arts Research Center in Ireland — questioned the role of increased spatial resolution in cinematic sound design. “Our paper considered the context that prompted the use of elevated loudspeakers, and examined the relevance of electro-acoustic spatialization techniques to 3D cinematic formats,” offered Downes. The duo brought with them a scene from writer/director Wolfgang Petersen’s submarine classic, Das Boot, to illustrate their thesis.

Using the university’s Spatialization and Auditory Display Environment (SpADE) linked to an Apple Logic Pro 9 digital audio workstation and a 7.1.4 playback configuration — with four overhead speakers — the researchers correlated visual stimuli with audio playback. (A 7.1-channel horizontal playback format was determined by the DAW’s I/O capabilities.) Different dynamic and static timbre spatializations were achieved by using separate EQ plug-ins assigned to horizontal and elevated loudspeaker channels.

“Sources were band-passed and a 3dB boost applied at 7kHz to enhance the perception of elevation,” Downes continued. “A static approach was used on atmospheric sounds to layer the soundscape using their dominant frequencies, whereas bubble sounds were also subjected to static timbre spatialization; the dynamic approach was applied when attempting to bridge the gap between elevated and horizontal loudspeakers. Sound sources were split, with high frequencies applied to the elevated layer, and low frequencies to the horizontal layer. By automating the parameters within both sets of equalization, a top-to-bottom trajectory was perceived. However, although the movement was evident, it was not perceived as immersive.”
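The band-splitting approach Downes describes (low frequencies routed to the horizontal loudspeaker layer, boosted high frequencies to the elevated layer) can be sketched in a few lines of Python. This is an illustrative sketch only; the one-pole crossover, sample rate and gain values are assumptions for demonstration, not the researchers' actual Logic Pro signal chain.

```python
# Illustrative sketch of a static timbre spatialization split.
# The one-pole crossover and the +3dB elevation boost are assumptions
# for demonstration, not the actual Logic Pro plug-in chain.
import math

def split_bands(signal, crossover_hz=7000.0, sample_rate=48000.0):
    """Split a mono signal into complementary low and high bands."""
    # One-pole low-pass coefficient derived from the crossover frequency.
    a = 1.0 - math.exp(-2.0 * math.pi * crossover_hz / sample_rate)
    low, high, state = [], [], 0.0
    for x in signal:
        state += a * (x - state)   # low-pass output
        low.append(state)
        high.append(x - state)     # complementary high-pass
    return low, high

def spatialize(signal, elevation_boost_db=3.0):
    """Route lows to the horizontal layer, boosted highs to the elevated layer."""
    low, high = split_bands(signal)
    gain = 10.0 ** (elevation_boost_db / 20.0)  # +3dB is roughly x1.41 in amplitude
    return low, [h * gain for h in high]

# The two bands are complementary, so low + high reconstructs the input.
sig = [math.sin(2 * math.pi * 440 * n / 48000.0) for n in range(256)]
low, high = split_bands(sig)
assert all(abs(l + h - s) < 1e-9 for l, h, s in zip(low, high, sig))
horizontal, elevated = spatialize(sig)
```

Because the high band is derived as the complement of the low band, summing the two layers (before the elevation boost) reconstructs the input exactly, which keeps the split transparent when both layers play together; automating the crossover or gains over time would give the dynamic, top-to-bottom trajectory the paper describes.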

The paper concluded that although multi-channel electro-acoustic spatialization techniques are seen as a rich source of ideas for sound designers, without sufficient visual context they are limited in the types of techniques that can be applied. “Screenwriters and movie directors must begin to conceptualize new ways of utilizing this enhanced spatial resolution,” said Downes.

Rich Nevens

Tools
Merging Technologies demonstrated immersive-sound applications for the v10 release of its Pyramix DAW software, offering up to 30.2-channel routing and panning, including compatibility with Barco Auro, Dolby Atmos and other surround formats, without the need for additional plug-ins or apps. Avid, meanwhile, showcased additions for the modular S6 Assignable Digital Console, including a Joystick Panning Module and a new Master Film Module with PEC/DIR switching.

“The S6 offers improved ergonomics,” explained Avid’s Rich Nevens, director of worldwide pro audio solutions, “including enhanced visibility across the control surface, and full Ethernet connectivity between eight-fader channel modules and the Pro Tools DSP engines.” Reportedly, more than 1,000 S6 systems have been sold worldwide since its introduction in December 2013, including two recent installations at Sony Pictures Studios in Culver City, California.

Finally, Eventide came to the Paris AES Convention with a remarkable new multichannel/multi-element processing system that was demonstrated by invitation only to selected customers and distributors; it will be formally introduced during the upcoming AES Convention in Los Angeles in October. Targeted at film/TV post production, the rackmount device features 32 inputs and 32 discrete outputs per DSP module, thereby allowing four multichannel effects paths to be implemented simultaneously. A quartet of high-speed ARM processors mounted on plug-in boards can be swapped out when more powerful DSP chips become available.

Joe Bamberg and Ray Maxwell

“Initially, effects will be drawn from our current H8000 and H9 processors — with other EQ, dynamics plus reverb effects in development — and can be run in parallel or in series, to effectively create a fully-programmable, four-element channel strip per processing engine,” explained Eventide software engineer Joe Bamberg.

“Remote control plug-ins for Avid Pro Tools and other DAWs are in development,” said Eventide’s VP of sales and marketing, Ray Maxwell. The device can also be used via a stand-alone application for Apple iPad tablets or Windows/Macintosh PCs.

Multi-channel I/O and processing options will enable object-based EQ, dynamics and ambience processing for immersive-sound production. An end-user price for the codenamed product, which will also feature Audinate Dante, Thunderbolt, Ravenna/AES67 and AVB networking, has yet to be announced.

Mel Lambert is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

Media Toaster creates efficient QC, 
post workflow for film, TV

By Mel Lambert

“We have developed a two-tier solution that expedites QC on one end, while enhancing and simplifying asset management and delivery on the other,” says Michael Meis, chief technology officer at Media Toaster, a recently opened multi-room post facility in Burbank. The studio has developed an innovative business model using proprietary technologies — including QiFile and MonsterFile — to speed up media review, approval, archival and delivery processes.

Meis was joined earlier this year by long-time collaborator Michael DeFusco, who is director of post production. The two met while working at Crest Digital for several years; later DeFusco moved on to Post Logic and then Sony Pictures, where he again worked with Meis.

L-R: Michael Meis and Mike DeFusco

The standard QC (quality control) model is to send a printed report to the client, who then has to search through a DVD or individual clips in order to make a decision about their material. “The QiFile is a way of enhancing the critical QC process by embedding an entire quality-control report — complete with suggested changes — into a relatively small HD file,” DeFusco explains. “This, in turn, enables the client to make informed decisions in a timely manner. Turnaround times are further improved by setting up a secured, easy-to-use virtual desktop where clients can play and download the QiFile [or other media] directly from our production server to [the client’s] computer or mobile device; the client can then work as efficiently as if they were accessing the file from within our facility.”

“We QC content from a variety of sources that is destined for delivery via a number of outlets and formats,” adds Meis. “Since we only have between 24 and 48 hours to perform our critical quality-control services, this proprietary process has noticeably increased our efficiency and throughput.”

By way of an example, Meis and DeFusco cite ongoing projects with Starz Entertainment. “Our embedded QC reports accompany the media files throughout the process and can be accessed by our operators and remote clients,” says DeFusco. The facility currently handles QC and media delivery for Starz’ Black Sails, Ash vs Evil Dead and other offerings.

What makes the process unique, the collaborators say, is that most archival and delivery workflows are limited by the number of available tracks. “MonsterFile has the capacity to hold an unlimited number of audio and video tracks,” says DeFusco. “Also, repurposing is typically done through various departments and by different operators. With our processes, many tasks — including transcodes, conversions, captions, pitch-correction, audio compliance and final delivery to anywhere in the world — can be quickly completed by one operator who never needs to leave his workstation,” reports Meis. “All of which saves our clients time and money.”

Mike DeFusco and Michael Meis at work in Media Toaster’s control room.

Media Toaster uses industry-standard Aspera file-transfer technology and MPAA-sanctioned firewalls to ensure high-speed access across multiple data networks. “We use high-speed Fibre Channel interconnects and a 10GbE intranet,” explains Meis. “Aspera’s software solution lets us move data at maximum speed, regardless of file size, transfer distance or network conditions.”

The facility operates a total of six post/QC suites, with close to a dozen operators and support staff. Apple Final Cut Pro X is used exclusively for picture editing, with extra support from Adobe Premiere Pro and Avid Media Composer. Staff call on industry-standard Avid Pro Tools for audio deliveries. The in-house server infrastructure uses about 0.5 petabytes of raw Promise Technology storage, with dual-band fiber-optic wiring and 10Gbit Ethernet speeds to move data around the facility.

Media Toaster offers a range of services “from new content creation, minor picture and sound tweaks, all the way up to complete overhaul or digital distribution, including 4K/UHD and DCP creation,” says Meis. Other services include picture and sound editorial, color grading, voiceover, music scoring, ADR and Foley.

Marc Vanocur

Video and broadcast material are only a part of Media Toaster’s offerings. For independent film productions, the company provides all the post services for modest-budget motion pictures. For Aristar Entertainment and Incendiary Features’ Dead Awake, directed by Phillip Guzman and written by Jeffrey Reddick (Final Destination), co-executive producer/producer Galen Walker opted to use Media Toaster for a variety of key post functions. Peter Devaney was the picture editor, while Jussi Tegelman was sound supervisor.

“Marc Vanocur of Shout|Softly has relocated to our facility, providing additional services,” says Walker. “Marc brings a production services component with camera, grip and lighting, and a large suite with both color and finishing and full music scoring capabilities. It is a highly collaborative effort that’s saving us time and money; we have shaved maybe six weeks off our post schedule. The QiFile has been the key to our tracking processes through the film’s completion.”

Mel Lambert is principal of Content Creators, an LA-based editorial service. He can be reached at mel.lambert@content-creators.com, and follow him on Twitter @MelLambertLA.

Mark Mangini keynotes The Art of Sound 
Design at Sony Studios

Panels focus on specifics of music, effects and dialog sound design, and immersive soundtracks

By Mel Lambert

Defining a sound designer as somebody “who uses sound to tell stories,” Mark Mangini, MPSE, was adamant that “sound editors and re-recording mixers should be authors of a film’s content, and take creative risks. Art doesn’t get made without risk.”

A sound designer/re-recording mixer at Hollywood’s Formosa Group Features, Mangini outlined his sound design philosophy during a keynote speech at the recent The Art of Sound Design: Music, Effects and Dialog in an Immersive World conference, which took place at Sony Pictures Studios in Culver City.

Mangini is the recipient of three Academy Award nominations, for The Fifth Element (1997), Aladdin (1992) and Star Trek IV: The Voyage Home (1986).

Acknowledging that an immersive soundtrack should fully engage the audience, Mangini outlined two ways to achieve that goal. “Physically, we can place sound around an audience, but we also need to engage them emotionally with the narrative, using sound to tell the story,” he explained to the 500-member audience. “We all need to better understand the role that sound plays in the filmmaking process. For me, sound design is storytelling — that may sound obvious, but it’s worth reminding ourselves on a regular basis.”

While an understanding of the tools available to a sound designer is important, Mangini readily concedes, “Too much emphasis on technology keeps us out of the conversation; we are just seen as technicians. Sadly, we are all too often referred to as ‘The Sound Guy.’ How much better would it be for us if the director asked to speak with the ‘Audiographer,’ the ‘Director of Sound’ or the ‘Sound Artist’ — terms that better describe what we actually do? After all, we don’t refer to a cinematographer as ‘The Image Guy.’”

Mangini explained that he always tries to emphasize the why and not the how, and is not tempted to imitate somebody else’s work. “After all, when you imitate you ensure that you will only be ‘almost’ as good as the person or thing you imitate. To understand the ‘why,’ I break down the script into story arcs and develop a sound script so I can reference the dramatic beats rather than the visual cues, and articulate the language of storytelling using sound.”

Past Work
Offering up examples of his favorite work as a soundtrack designer, Mangini provided two clips during his keynote. “While working on Star Trek [in 2009] with supervising sound editor Mark Stoeckinger, director J. J. Abrams gave me two days to prepare — with co-designer Mark Binder — a new soundtrack for the two-minute mind meld sequence. J. J. wanted something totally different from what he already had. We scrapped the design work we did on the first day, because it was only different, not better. On day two we rethought how sound could tell the story that J. J. wanted to tell. Having worked on three previous Star Trek projects [different directors], I was familiar with the narrative. We used a complex combination of orchestral music and sound effects that turned the sequence on its head; I’m glad to say that J. J. liked what we did for his film.”

The two collaborators received the following credit: “Mind Meld Soundscape by Mark Mangini and Mark Binder.”

Turning to his second soundtrack example, Mangini recalled receiving a call from Australia about the in-progress soundtrack for George Miller’s Mad Max: Fury Road, the director’s fourth outing with the franchise. “The mix they had prepared in Sydney just wasn’t working for George. I was asked to come down and help re-invigorate the track. One of the obstacles to getting this mix off the ground was the sheer abundance of material to choose from. When you have so many choices on a soundtrack, the mix can be an agonizing process of ‘Sound Design by Elimination.’ We needed to tell him, ‘Abandon what you have and start over.’ It was up to me, as an artist, to tell George that his V8 needed an overhaul and not just a tune-up!”

“We had 12 weeks, working at Formosa with co-supervising sound editor Scott Hecker — and at Warner Bros Studios with re-recording mixers Chris Jenkins and Greg Rudloff — to come up with what George Miller was looking for. We gave each vehicle [during the extended car-chase sequence that opens the film] a unique character with sound, and carefully defined [the lead proponent Max Rockatansky’s] changing mental state during the film. The desert chase became ‘Moby Dick,’ with the war rig as the white whale. We focused on narrative decisions as we reconstructed the soundtrack, always referencing ‘the why’ for our design choices in order to provide a meaningful sonic immersion. Miller has been quoted as saying, ‘Mad Max is a film where we see with our ears.’ This from a director who has been making films for 40 years!”

His advice to fledgling sound designers? Mangini kept it succinct: “Ask yourself why, not how. Be the author of content, take risks, tell stories.”

Creating a Sonic Immersive Experience
Subsequent panels during the all-day conference addressed how to design immersive music, sound effects and dialog elements used on film and TV soundtracks. For many audiences, a 5.1-channel format is sufficient for carrying music, effects and dialog in an immersive, surround experience, but 7.1-channel — with added side speakers, in addition to the new Dolby Atmos, Barco/Auro 3D and DTS:X/MDA formats — can extend that immersive experience.

“During editorial for Guardians of the Galaxy we had so many picture changes that the re-recording mixers needed all of the music stems and breakouts we could give them,” said music editor Will Kaplan, MPSE, from Warner Bros. Studio Facilities, during the “Music: Composing, Editing and Mixing Beyond 5.1” panel. It was presented by Formosa Group and moderated by scoring mixer Dennis Sands, CAS. “In a quieter movie we can deliver an entire orchestral track that carries the emotion of a scene.”

‘Music: Composing, Editing and Mixing Beyond 5.1’ panel (L-R): Andy Koyama, Bill Abbott, Joseph Magee, moderator Dennis Sands, Steven Saltzman and Will Kaplan.

Describing his collaboration with Tim Burton, music editor Bill Abbott, MPSE from Formosa reported that the director “liked to hear an entire orchestral track for its energy, and then we recorded it section by section with the players remaining on the stage, which can get expensive!”

Joseph Magee, CAS (supervising music mixer on such films as Pitch Perfect 2, The Wedding Ringer, Saving Mr. Banks and The Muppets), likes to collaborate closely with the effects editor to decide who handles which elements from each song. “Who gets the snaps and dance shoes? How do we divide up the synchronous ambience and the design ambience? The synchronous ambience from the set might carry tails from the sing-offs, and needs careful matching. What if they pitch shift the recorded music in post? We then need to change the pitch of the music captured in the audience mics using DAW plug-ins.”

“I like to invite the sound designer to the music spotting session,” advised Abbott, “and discuss who handles what — is it a music cue or a sound effect?”

“We need to immerse audiences with sound and use the surrounds for musical elements,” explained Formosa’s re-recording mixer, Andy Koyama, CAS. “That way we have more real estate in the front channels for sound effects.”

“We should get the sound right on the set because it can save a lot of processing time on the dub stage,” advised production mixer Lee Orloff, CAS, during the “A Dialog on Dialog: From Set to Screen” panel moderated by Jeff Wexler, CAS.

‘A Dialog on Dialog: From Set to Screen’ panel (L-R): Lee Orloff, Teri Dorman, CAS president Mark Ulano, moderator Jeff Wexler, Gary Bourgeois, Marla McGuire and Steve Tibbo.

“I recall working on The Patriot, where the director [Roland Emmerich] chose to create ground mist using smoke machines known as Smoker Boats,” recalled Orloff, who received Oscar and BAFTA Awards for Terminator 2: Judgment Day (1991). “The trouble was that they contained noisy lawnmower engines, whose sound could be heard under all of the dialog tracks. We couldn’t do anything about it! But, as it turned out, that low-level noise added to the sense of being there.”

“I do all of my best work in pre-production,” added Wexler, “by working out the noise problems we will face on location. It is more than just the words that we capture; a properly recorded performance tells you so much about the character.”

“I love it when the production track is full of dynamics,” added dialog/music re-recording mixer Gary Bourgeois, CAS. “The voice is an instrument; if I mask out everything that is not needed I lose the ‘essence’ of the character’s performance. The clarity of dialog is crucial.”

“We have tools that can clean up dialog,” conceded supervising sound editor Marla McGuire, MPSE, “but if we apply them too often and too deeply it takes the life out of the track.”

“Sound design can make an important scene more impactful, but you need to remember that you’re working in the service of the film,” advised sound designer/supervising sound editor Richard King, MPSE, during the “Sound Effects: How Far Can You Go?” panel, moderated by David Bondelevitch, MPSE, CAS.

‘Sound Effects: How Far Can You Go?’ panel (L-R): Mandell Winter, Scott Gershin, moderator David Bondelevitch, Greg Hedgpath, Richard King and Will Files.

In terms of music co-existing with sound effects, Formosa’s Scott Gershin, MPSE, advised, “During a plane crash sequence, I pitch shifted the sound effect to match the music.”

“I like to go to the music spotting session and ask if the director wants the music to serve as a rhythmic or thematic/tonal part of the soundtrack,” added sound effects re-recording mixer Will Files from Fox Post Production Services. “I just take the other one. Or if it’s all rhythm — a train ride, for example — we’ll agree to split [the elements].”

“On the stage, I’m constantly shifting sync and pitch shifting the sound effects to match the music track,” stated Gershin. “For Pacific Rim we had many visual effects arriving late with picture changes. Director Guillermo del Toro received so many new eight-frame VFX cues he wanted to use that the music track ended up looking like bar code” in the final Pro Tools sessions.

In terms of working with new directors, “I like to let them see some good movies with good sound design to start the conversation,” offered Files. “I front load the process by giving the director and picture editors a great-sounding temp track, using dialog predubs that they can load into the Avid Media Composer, to get them used to our sound ideas. It also helps the producers dazzle the studio!”

“Successful soundtrack design is a collaborative effort from production sound onwards,” advised re-recording mixer Mike Minkler, CAS, during “The Mix: Immersive Sound, Film and Television” panel, presented by DTS and moderated by Mix editor Tom Kenny. “It’s about storytelling. Somebody has to be the story’s guardian during the mix,” stated Minkler, who received Academy Awards for Dreamgirls (2006), Chicago (2002) and Black Hawk Down (2001). “Filmmaking is the ultimate collaboration. We need to be aware of what the director wants and what the picture needs. To establish your authority you need to gain their confidence.”

“For immersive mixes, you should start in Dolby Atmos as your head mix,” advised Jeremy Pearson, CAS, who is currently re-recording The Hunger Games: Mockingjay – Part 2 at Warner Bros. Studio. He also worked in that format on Mockingjay – Part 1 and Catching Fire. “Atmos is definitely the way to go; it’s what everyone can sign off on. In terms of creative decisions during an Atmos mix, I always ask myself, ‘Am I helping the story by moving a sound, or distracting the audience?’ After all, the story is up on the screen. We can enhance sound depth to put people into the scene, or during calmer, gentler scenes you can pinpoint sounds that engage the audience with the narrative.”

Kim Novak Theater at Sony Pictures Studios.

Minkler reported that he is currently working on director Quentin Tarantino’s The Hateful Eight, “which will be released initially for two weeks in a three-hour version on 70mm film to 100 screens, with an immersive 5.1-channel soundtrack mastered to 35 mm analog mag.”

Subsequently, the film will be released next year in a slightly different version via a conventional digital DCP.

“Our biggest challenge,” reported Matt Waters, CAS, sound effects re-recording mixer for HBO’s award-winning Game of Thrones, “is getting everything completed in time. Changes are critical and we might spend half a day on a sequence and then have only 10 minutes to update the mix when we receive picture changes.”

“When we receive new visuals,” added Onnalee Blank, CAS, who handles music and dialog re-recording on the show, “[the showrunners] tell us, ‘it will not change the sound.’ But if the boats become dragons…”

Photos by Mel Lambert.

Mel Lambert is principal of Content Creators, an LA-based editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

Cine Gear Expo showcases production, post solutions

By Mel Lambert

With its focus on sound and image acquisition, the annual Cine Gear Expo — now in its 20th year — offers attendees the opportunity to examine a wide cross section of systems targeted at the production and post communities, including capture, storage and delivery configurations that accommodate 4K and HDR workflows. Held last Friday and Saturday at the Paramount Studios complex in central Hollywood, the event saw a large number of companies show off new innovations on Stages 31 and 32, in addition to outdoor booths located throughout the New York Street area. This year’s event reportedly attracted in excess of 12,000 attendees.

One highlight was a rare 70mm screening by Band Pro Film & Digital of Baraka, followed by a Q&A with producer Mark Magidson. Shot in 25 countries on six continents, the film includes a number of scenes that director Ron Fricke defines as “a guided meditation on humanity.” Originally released in 1992, Baraka was the first film in over 20 years to be photographed in the 70mm Todd-AO format, and reportedly the first film ever to be restored and scanned at 8K resolution. “Last year we screened Samsara in 4K at Cine Gear,” reports Band Pro president/CEO Amnon Band. “The response was huge, and we wanted this year to be just as amazing. Baraka is a film that deserves to be projected and appreciated on the big screen.” (As critic Roger Ebert once commented: “If man sends another Voyager to the distant stars and it can carry only one film on board, that film might be Baraka.”)

Panavision Primo 70 lenses and the Red Weapon 8K.

Panavision/Light Iron showed test footage from director Quentin Tarantino’s The Hateful Eight, which was shot by Robert Richardson, ASC, in Ultra Panavision 70 and projected from 70mm anamorphic film at the Paramount Theater. The first production since Khartoum (1966) to be shot in Ultra Panavision 70, the anamorphic format is captured on 65mm negative stock to deliver an approximately 2.7:1 image that is described as “sharp but not clinical, with painterly bokeh and immersive depth.” Also shown in the Panavision/Light Iron booth were a demo of 8K footage shot on the RED Weapon 8K with Panavision Primo 70 lenses; PanaNet, a high-speed fiber network between Panavision locations for transferring media at up to 10GB per second; LightScan, a low-cost telecine solution offering ProRes UHD-quality transfers, targeted at independent films, commercials and TV shows that prefer film optics; and Live Play 3, an iPad dailies app for Mac OS X.

Canon’s EOS C300 Mark II digital cinema camera.

Canon took the opportunity to showcase the new EOS C300 Mark II digital cinema camera. According to Joseph Bogacz, a Canon advisor on professional engineering and solutions, the Mk II is a completely new camera, and not derived from the original C300. “The Mk II offers more than 15 stops of dynamic range, with ISO from 160 to 102,400. We have also included 10-bit recording for 4K shoots, in addition to 10- or 12-bit HD/2K resolutions. The Mk II also offers internal 4K recording, for less complexity on a film or TV set.” The camera’s power system has also been beefed up to 14.4 volts, with Lemo connectors.

Also shown was the new portable DP-V2410 24-inch 4K reference monitor, which is designed for on-set use during 4K cinema and 4K/UHD TV/commercial productions. The monitor “delivers a consistent look throughout the entire workflow,” according to Jon Sagud, a professional marketing manager with Canon Imaging Technologies and Communications Group. “It connects via a single cable to the C300 MkII and also accepts HDMI sources.”

Gale Tattersall

The RGB LED backlight panel is rated at 400 NIT light levels with several built-in waveform displays, and can be powered from 24V supplies. It can also de-Bayer live 4K RAW video from EOS C500 and C300 Mark II cameras, and supports 4K ACES proxy (ACES 1.0) to maintain a desired “look” throughout a production-to-post workflow.

The company also hosted a panel discussion, “A First Look at the EOS C300 Mark II with Gale Tattersall,” during which the acclaimed director of photography presented his first impressions of the new camera, together with reactions from first AC Tony Gutierrez, second AC Zoe Van Brunt and Steadicam operator Ari Robbins, while shooting Trick Shot, the first short to be shot entirely with the new system. “I was immediately impressed by the C300 Mk II’s wide dynamic range and output quality,” Tattersall confided. “I could avoid white-level clipping and hold shadow detail; you can go beyond the 15-stop range if you want to. We were working with a 50-1,000 T5 Canon lens, which is a perfect all-round zoom. With Netflix and other studios specifying 4K resolution, the MkII’s on-board recording will definitely streamline our workflows.”

Probably best known for his work as DP on Fox’s House television series, Tattersall currently is working on Netflix’s Grace and Frankie series, using a competitive 4K camera. “When you have [series principals] Jane Fonda and Lily Tomlin – ‘ladies in their seventies’ – wearing black against black backgrounds, dynamic range becomes a key factor! The Mk II offers outstanding performance down in the critical 15 IRE low-level range.”

During a panel discussion organized by Sony, cinematographer Rob Hardy, BSC, shared details of his work on director Alex Garland’s Ex Machina, using an F65 CineAlta digital camera. Because of his prior experience shooting 35mm film for commercials, “I wanted to retain the same operator workflow,” Hardy concedes. “During pre-production 4K tests [in the UK at Pinewood Studios] we compared the look of Red Dragon, Arri Alexa and Sony F65 cameras, with new and old glass [lenses]. I needed to capture in the camera what I was seeing on the set; skin tones became a key parameter across a range of interior and exterior lighting levels.

DP Rob Hardy during a panel discussion on using the Sony F65 CineAlta camera to shoot Ex Machina.

“We opted for Xtal Express anamorphic glass on the F65, a combination that offered everything I was looking for. The resultant images had the depth that I needed for the film; the F65 ‘read’ the glass perfectly for me at T2.8 or T2.3 apertures.” UK-based Joe Dunton Cameras supplied the Cooke Xtal (Crystal) Express anamorphic lenses, which are derived from vintage Cooke spherical lenses that, in the eighties, were rehoused and modified with anamorphic elements.

Turning to other booth displays at Cine Gear, Amimon demonstrated its Connex series of 5GHz wireless transmission units, which are said to deliver full-HD video with zero-latency transmission over distances of up to 3,300 feet. Targeted at feature films, documentaries, music videos and other productions that need realtime control of a camera and drone, the units provide telemetry information via a built-in OSD, can send commands to a drone’s gimbal via the Futaba S-Bus protocol, and support simultaneous multicasting to four screens.

Audio Intervisual Design (AID) showed examples of recent post-production design and installation projects, including a multi-function dub stage and DI/color-grading suite for Blumhouse Productions, which has enjoyed recent success with the Paranormal Activity, The Purge, Insidious and Sinister franchises, in addition to Oscar success with Whiplash and Emmy success with HBO’s The Normal Heart. Also shown at the AID booth were an Avid S6 control surface for Pro Tools and examples of IHSE USA’s extensive range of KVM switches and extenders, plus DVI splitters and converters.

GoPro demonstrated applications of its free-of-charge GoPro Studio software, which imports, trims and plays back videos and time-lapse photo sequences; edit templates offer music, edit points and slow-motion effects. Video playback speeds can also be changed for ultra-slow and fast motion using the built-in Flux app.

G-Tech’s Aimee Davos with G-Drive ev ATC drives.

G-Tech showed the new G-Drive ev ATC with either Thunderbolt or USB 3.0 interfaces, designed to withstand life on hostile locations. The ruggedized, watertight housing with tethered cable holds a removable hard drive and is available in various capacities. The ATC’s all-terrain case is compatible with the firm’s Evolution Series, with a durable 7,200 RPM drive that is said to leverage the speed of Thunderbolt while providing the flexibility of USB. A 1TB USB drive sells for $179; a 1TB Thunderbolt model retails for $229. Also shown was an 8-bay RAID Thunderbolt 2 storage solution designed to support multi-stream compressed 4K workflows at transfer rates of up to 1,350MB/s.

London-based iDailies offers 35mm/16mm processing and 35mm printing, together with telecine transfer and color grading; only two such film-processing facilities currently exist in the UK. “We are handling all of the processing for [Walt Disney Pictures’] new Star Wars: Episode VII – The Force Awakens, which is being shot entirely on film by director J.J. Abrams,” explains the firm’s senior colorist Dan Russell. Reportedly, the facility has processed every studio film shot in the UK since March 2013, including Spectre, Mission Impossible 5, Cinderella and Fury, together with The Imitation Game and Far From The Madding Crowd. It also supports the majority of film schools, to help “encourage and enable the next generation of filmmakers to discover the unique attributes of film origination.”

L-R: SNS’s Steve McKenna with John Diel.

Sound Devices showcased its PIX-E Series on-camera video monitors, which includes a five-inch 1,920-by-1,080 model and a seven-inch 1,920-by-1,200 model, both with integral monitoring tools, SDI and HDMI I/O, plus the ability to record 4K Apple ProRes 4444 to mSATA-based SpeedDrives. PIX-E monitors feature compact, die-cast metal housings and Gorilla Glass 2. Also shown was the 12-input Model 688 production mixer with 16-track recorder, offering eight outputs plus digital mixing and routing; its MixAssist feature automatically drops the volume of inactive inputs and maintains consistent background levels.

Studio Network Solutions (SNS) showcased practical applications for ShareBrowser, a file/project/asset management interface for OS X and Windows that is included with every EVO shared-storage system. “ShareBrowser lets post users search, index, share, preview and verify all assets,” explained sales manager Steve McKenna. “More than a file manager, the app enables automatic project locking for Apple Final Cut Pro, Adobe Premiere, Avid Pro Tools and other editors, as well as Avid project and bin sharing, and allows search across all EVO storage as well as local, offline and other network disks.”

Cine Gear photos by Mel Lambert

 

NAB 2015: Love and hate, plus blogs and videos

By Randi Altman

I have been to more NABs than I would like to admit, and I loved them all… I’ve also hated them all, but that is my love/hate relationship with the show. I love seeing the new technology, trends and friends I’ve made from my many years in the business.

I hate the way my feet feel at the end of the day. I hate that there is not enough lotion on the planet to keep my skin from falling off. I extra-hate the cab lines, but mostly I hate not being able to see everything that needs to be seen.

Continue reading

Sound developments at the NAB Show

Spotlighting Pro Sound Effects library, Genelec 7.1.4 Array, Avid Master Joystick Module and Sennheiser AVX wireless mic

By Mel Lambert

With a core theme of “Crave More,” which is intended to reflect the passion of our media and entertainment communities, and with products from 1,700 exhibitors this year – including over 200 first-time companies – there were plenty of new developments to see and hear at the NAB Show, which continues in Las Vegas until Thursday afternoon.

In addition to unveiling Master Library 2.0, which adds more than 30,000 new sound effects, online access, annual updates and new subscription pricing, Pro Sound Effects demonstrated a Continue reading

D-Cinema Summit: standardization of immersive sound formats

By Mel Lambert

“Our goal is to develop an interoperative audio-creation workflow and a single DCP that can be used to render to whatever playback format – Dolby Atmos, Barco/Auro 3D, DTS:X/MDA – has been installed in the exhibition space,” stated Brian Vessa, chairman of SMPTE Technology Committee 25CSS, which is considering a common standardized method for delivering immersive audio to cinemas. Vessa, who also serves as executive director of Digital Audio Mastering at Sony Pictures Entertainment, was speaking at this past weekend’s joint SMPTE/NAB Technology Summit on Cinema during a session focused on immersive sound formats, Continue reading

Oscar-nominated sound editors, mixers share insights with AES LA section

By Mel Lambert

A recent meeting of the Audio Engineering Society’s Los Angeles section offered an opportunity to hear from a number of Oscar nominees and winners as they shared their experiences while preparing dramatic film soundtracks, including how the various sound elements were secured, edited and mixed to picture, plus the types of hardware used in editorial suites and dubbing stages.

Whiplash, written and directed by Damien Chazelle, was re-recorded at Technicolor on Paramount’s Stage 4 by dialog/music mixer Craig Mann and sound effects mixer Ben Wilkins (see our interview with Wilkins), using tracks secured on location by production mixer Thomas Continue reading

‘Future of Audio Tech’ confab tackles acoustics, loudness, more

By Mel Lambert

Organized by the Audio Engineering Society, “The Future of Audio Entertainment Technology: Cinema, Television and the Internet” conference addressed the myriad challenges facing post professionals working in the motion picture and home delivery industries. Co-chaired by Dr. Sean Olive and Brian McCarty, and held at the TCL Chinese Theatre in Hollywood in early March, the three-day gathering comprised several keynote addresses, workshops and papers sessions.

In addition to sponsorship from Dolby, Harman, Auro3D, Avid, Sennheiser, DTS, NBC Universal Studio Post, MPSE and SMPTE, the event attracted a reported 155 attendees.

Referencing a report last year in The Hollywood Reporter that more than 350 different Continue reading