Tag Archives: AES

Post developments at the AES Berlin Convention

By Mel Lambert

The AES Convention returned to Berlin after a three-year absence, and once again demonstrated that the Audio Engineering Society can organize a series of well-attended paper programs, seminars and workshops, in addition to an exhibition of familiar brands, for the European tech-savvy post community. 

Held at the Maritim Hotel in the creative heart of Berlin in late May, the 142nd AES Convention was co-chaired by Sascha Spors from University of Rostock in Germany and Nadja Wallaszkovits from the Austrian Academy of Sciences. According to AES executive director Bob Moses, attendance was 1,800 — a figure at least 10% higher than last year’s gathering in Paris — with post professionals from several overseas countries, including China and Australia.

During the opening ceremonies, current AES president Alex Case stated that “AES conventions represent an ideal interactive meeting place,” while “social media lacks the one-on-one contact that enhances our communications bandwidth with colleagues and co-workers.” Keynote speaker Dr. Alex Arteaga, whose research integrates aesthetic and philosophical practices, addressed the thorny subject of “Auditory Architecture: Bringing Phenomenology, Aesthetic Practices and Engineering Together,” arguing that when considering the differences between audio soundscapes, “our experience depends upon the listening environment.” His underlying message was that a full appreciation of the various ways in which we hear immersive sounds requires a deeper understanding of how listeners interact with that space.

As part of his Richard C. Heyser Memorial Lecture, Prof. Dr. Jorg Sennheiser outlined “A Historic Journey in Audio-Reality: From Mono to AMBEO,” during which he reviewed the basis of audio perception and the interdependence of hearing with other senses. “Our enjoyment and appreciation of audio quality is reflected in the continuous development from single- to multi-channel reproduction systems that are benchmarked against sonic reality,” he offered. “Augmented and virtual reality call for immersive audio, with multiple stakeholders working together to design the future of audio.”

Post-Focused Technical Papers
There were several interesting technical papers covering the changing requirements of the post community, particularly in the field of immersive playback formats for TV and cinema. With the new ATSC 3.0 digital television format, including object-based immersive sound, scheduled to come online soon, there is increasing interest in techniques for capturing surround material and then delivering it to consumer audiences.

In a paper titled “The Median-Plane Summing Localization in Ambisonics Reproduction,” Bosun Xie from the South China University of Technology in Guangzhou explained that, while one aim of Ambisonics playback is to recreate the perception of a virtual source in arbitrary directions, practical techniques are unable to recreate the correct high-frequency spectra in the binaural pressures that serve as front-back and vertical localization cues. Current research shows that the changes in interaural time difference (ITD) that result from head-turning during Ambisonics playback match those of a real source, and hence provide a dynamic cue for vertical localization, especially in the median plane. In addition, the low-frequency virtual source direction can be approximately evaluated using a set of panning laws.
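As a rough illustration of the panning-law approach Xie mentions (a generic textbook sketch, not code from the paper), the stereophonic tangent law can be inverted to estimate a phantom source’s direction from a pair of channel gains; the 30-degree base angle below is an assumed loudspeaker layout:

```python
import math

def virtual_source_azimuth(g_left, g_right, base_angle_deg=30.0):
    """Invert the stereophonic tangent panning law,
    tan(phi) / tan(phi0) = (gL - gR) / (gL + gR),
    to estimate the perceived azimuth phi of a phantom source.
    phi0 is the half-angle of the loudspeaker pair (assumed 30 degrees)."""
    phi0 = math.radians(base_angle_deg)
    ratio = (g_left - g_right) / (g_left + g_right)
    return math.degrees(math.atan(ratio * math.tan(phi0)))

print(virtual_source_azimuth(1.0, 1.0))   # 0.0 (center)
print(virtual_source_azimuth(1.0, 0.0))   # ~30.0 (at the left loudspeaker)
```

Equal gains place the phantom source at the center of the pair, while a hard pan collapses it onto one loudspeaker, which is why such laws give only a low-frequency approximation of perceived direction.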

“Exploring the Perceptual Sweet Area in Ambisonics,” presented by Matthias Frank from the University of Music in Graz, Austria, described how the theoretical sweet spot does not match the larger listening area needed in real-world applications. Frank described a method for experimentally determining the perceptual sweet area by assessing the localization of both dry and reverberant sound rendered at different Ambisonic encoding orders.

Another paper, “Perceptual Evaluation of Synthetic Early Binaural Room Impulse Responses Based on a Parametric Model,” presented by Philipp Stade from the Technical University of Berlin, described how an acoustical environment can be modeled using sound-field analysis plus spherical head-related impulse responses (HRIRs) — and the results compared with measured counterparts. The listening experiment showed comparable performance that was, in the main, independent of room and test signals. (Perhaps surprisingly, a simple synthesis of direct sound and diffuse reverberation yielded almost the same results as the parametric model.)

“Influence of Head Tracking on the Externalization of Auditory Events at Divergence between Synthesized and Listening Room Using a Binaural Headphone System,” presented by Stephan Werner from the Technical University of Ilmenau, Germany, reported on a study using a binaural headphone system that considered the influence of head tracking on the localization of auditory events. Impulse responses were recorded from a five-channel loudspeaker setup in two acoustically different rooms. Results revealed that head tracking increased sound externalization, but that it did not overcome the room-divergence effect.

Heiko Purnhagen from Dolby Sweden, in a paper called “Parametric Joint Channel Coding of Immersive Audio,” described a coding scheme that can deliver channel-based immersive audio content in such formats as 7.1.4, 5.1.4 or 5.1.2 at very low bit rates. Based on a generalized approach for parametric spatial coding of groups of two, three or more channels using a single downmix channel, together with a compact parametrization that guarantees full covariance reinstatement in the decoder, the scheme is implemented as the A-JCC tool standardized in Dolby AC-4.
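The downmix-plus-parameters idea can be sketched as follows. This is a deliberately simplified toy (RMS level parameters only), not Dolby’s actual A-JCC algorithm, which also transmits inter-channel correlation data so the decoder can reinstate the full covariance of the group:

```python
import numpy as np

def encode_group(channels):
    """Encode a group of channels as one mono downmix plus per-channel
    RMS level parameters. (A toy sketch: real parametric coders also
    transmit inter-channel correlation so the decoder can reinstate
    the full covariance of the original group.)"""
    downmix = channels.mean(axis=0)
    ref = np.sqrt(np.mean(downmix ** 2)) + 1e-12
    levels = [float(np.sqrt(np.mean(ch ** 2)) / ref) for ch in channels]
    return downmix, levels

def decode_group(downmix, levels):
    """Approximate each original channel by re-scaling the downmix."""
    return np.stack([downmix * g for g in levels])

rng = np.random.default_rng(0)
chans = rng.standard_normal((3, 48000))   # one second of 3-channel audio
dmx, params = encode_group(chans)
approx = decode_group(dmx, params)
print(approx.shape)   # (3, 48000)
```

The bit-rate saving comes from transmitting one waveform plus a handful of parameters per group instead of every discrete channel; the decoder’s job is to make the reconstructed channels statistically resemble the originals.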

Hardware Choices for Post Users
Several manufacturers demonstrated compact near-field audio monitors targeted at editorial suites and pre-dub stages. Adam Audio focused on its new near/mid-field S Series, which uses the firm’s ART (Accelerating Ribbon Technology) tweeter. The range comprises five models: the S2V, S3V and S5V for vertical orientation, plus the S3H and S5H for horizontal use. The firm’s newly developed LF and mid-range drivers, with custom-designed waveguides for the tweeter (and the MF driver on the larger, multi-way models), are powered by a new DSP engine that “provides crossover optimization, voicing options and expansion potential,” according to the firm’s head of marketing, Andre Zeugner.

The Eve Audio SC203 near-field monitor features a three-inch LF/MF driver plus an AMT ribbon tweeter, and is supplied with a v-shaped rubberized pad that decouples the loudspeaker from its base to reduce unwanted resonances while positioning it flat or tilted at 7.5 or 15 degrees. An adapter enables mounting directly on any microphone or speaker stand with a 3/8-inch thread. Integral DSP and a rear-mounted passive radiator are said to reinforce LF reproduction, extending the response to 62Hz (-3dB).

Genelec showcased The Ones, a series of point-source monitors comprising the current three-way Model 8351 plus the new three-way Models 8331 and 8341. All three units include a coaxial MF/HF driver plus two acoustically concealed LF drivers for vertical and horizontal operation. A new Minimum Diffraction Enclosure (MDE) is featured, together with the firm’s loudspeaker management and alignment software via a dedicated Cat5 network port.

The Neumann KH-80 DSP near-field monitor is designed to offer automatic system alignment using the firm’s control software that is said to “mathematically model dispersion to deliver excellent detail in any surroundings.” The two-way active system features a four-inch LF/MF driver and one-inch HF tweeter with an elliptical, custom-designed waveguide. The design is described as offering a wide horizontal dispersion to ensure a wide sweet spot for the editor/mixer, and a narrow vertical dispersion to reduce sound reflections off the mix console.

To handle multiple monitoring sources and loudspeaker arrays, the Trinnov D-Mon Series controllers enable stereo to 7.1-channel monitoring from both analog and digital I/Os using Ethernet- and/or MIDI-based communication protocols and a fast-switching matrix. An internal mixer creates various combinations of stems, main or aux mixes from discrete inputs. An Optimizer processor offers tuning of the loudspeaker array to match studio acoustics.

Unveiled at last year’s AES Convention in Paris, the Eventide H9000 multichannel/multi-element processing system has been under constant development during the past 12 months, with new functions targeted at film and TV post, including EQ, dynamics and reverb effects. DSP elements can be run in parallel or in series to create multiple fully programmable channel strips per engine. Control plug-ins for Avid Pro Tools and other DAWs are being finalized, together with Audinate Dante, Thunderbolt, Ravenna/AES67 and AVB networking.

Filmton, the German association for film sound professionals, explained to AES visitors its objective “to reinforce the importance of sound at an elemental level for the film community.” The association promotes the appreciation of film sound, together with the local film industry and its policy toward the public, while providing “an expert platform for technical, creative and legal issues.”

Philipp Sehling

Lawo demonstrated the new mc²96 Grand Audio production console, an IP-based networkable design for video post production, available with up to 200 on-surface faders. Innovative features include automatic gain control across multiple channels and miniature TFT color screens above each fader that display LiveView thumbnails of the incoming channel sources.

Stage Tec showed new processing features for its Crescendo Platinum TV post console, courtesy of v4.3 software, including an automixer based on gain sharing that can be used on every input channel, loudness metering to EBU R128 for sum and group channels, a de-esser on every channel path, and scene automation with individual user-adjustable blend curves and times for each channel.
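Gain sharing of the kind Stage Tec describes is a classic automixing strategy (often associated with Dan Dugan’s designs); a minimal sketch, assuming smoothed per-channel power estimates as input, looks like this:

```python
import numpy as np

def gain_sharing_automix(levels):
    """Gain-sharing automix: each channel's gain is its share of the
    total input power, so the summed gain stays constant no matter
    how many microphones are active. `levels` holds smoothed
    per-channel power estimates."""
    levels = np.asarray(levels, dtype=float)
    return levels / (levels.sum() + 1e-12)   # linear gains summing to ~1.0

# A single dominant channel keeps nearly full gain:
print(gain_sharing_automix([1.0, 0.01, 0.01]))
# Equally active channels share the gain evenly:
print(gain_sharing_automix([1.0, 1.0, 1.0]))
```

Because the gains always sum to roughly unity, opening extra microphones does not raise the overall noise floor or push the mix toward feedback, which is why the technique suits every-channel deployment on a broadcast console.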

Avid demonstrated native support for the new 7.1.2 Dolby Atmos channel-bed format — basically the familiar 7.1-channel bed with two height channels — for editorial suites and consumer remastering, plus several upgrades for Pro Tools, including new panning software for object-based audio and the ability to switch between automatable object and bus outputs. Pro Tools HD is said to be the only DAW natively supporting in-the-box Atmos mixing for this 10-channel 7.1.2 format. Full integration for Atmos workflows is now offered for control surfaces such as the Avid S6.

Jon Schorah

There was a new update to Nugen Audio’s popular Halo Upmix plug-in for Pro Tools — in addition to stereo-to-5.1, -7.1 or -9.1 conversion, it is now capable of delivering 7.1.2-channel mixes for Dolby Atmos soundtracks.
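As a hedged illustration of what any stereo upmixer must do (a textbook mid/side split, not Nugen’s proprietary Halo algorithm, which uses frequency-dependent analysis and decorrelation):

```python
import numpy as np

def upmix_stereo(left, right):
    """Textbook mid/side upmix: correlated (mid) content feeds the
    center, the original pair feeds the fronts, and decorrelated
    (side) content feeds the surrounds. Returns 5.1-style stems
    with a silent LFE."""
    mid = 0.5 * (left + right)    # content common to both channels
    side = 0.5 * (left - right)   # width/ambience content
    return {
        "L": left, "R": right,
        "C": mid,
        "Ls": side, "Rs": -side,  # opposite polarity for envelopment
        "LFE": np.zeros_like(left),
    }

left = np.array([1.0, 0.0])
right = np.array([1.0, 2.0])
stems = upmix_stereo(left, right)
print(stems["C"])   # [1. 1.]
```

Extending the same idea to height channels (the “.2” of 7.1.2) means deciding which part of the extracted ambience is steered upward, which is where commercial upmixers differentiate themselves.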

A dedicated Dante Pavilion featured several manufacturers that offer network-capable products, including Solid State Logic, whose Tempest multi-path processing engine and router is now fully Audinate Dante-capable for T Series control surfaces with unique arbitration and ownership functions; Bosch RTS intercom systems featuring Dante connectivity with OCA system control; HEDD/Heinz Electrodynamic Designs, whose Series One monitor speakers feature both Dante and AES67/Ravenna ports; Focusrite, whose RedNet series of modular pre-amps and converters offer “enhanced reliability, security and selectivity” via Dante, according to product specialist for EMEA/Germany, Dankmar Klein; and NTP Technology’s DAD Series DX32R and RV32 Dante/MADI router bridges and control room monitor controllers, which are fully compatible with Dante-capable consoles and outboard systems, according to the firm’s business development manager Jan Lykke.

What’s Next For AES
The next European AES convention will be held in Milan during the spring of 2018. “The society also is planning a new format for the fall convention in New York,” said Moses, as the AES is now aligning its fall shows with the National Association of Broadcasters. “Next January we will be holding a new type of event in Anaheim, California, to be titled AES @ NAMM,” a partnership with the National Association of Music Merchants. Further details will be unveiled next month. He also explained that there will be no West Coast AES convention next year; instead, the AES will return to New York in the autumn of 2018 with another joint AES/NAB gathering at the Jacob K. Javits Convention Center.


Mel Lambert is an LA-based writer and photographer. He can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

Netflix's Stranger Things

AES LA Section & SMPTE Hollywood: Stranger Things sound

By Mel Lambert

The most recent joint AES/SMPTE meeting at the Sportsmen’s Lodge in Studio City showcased the talents of the post production crew that worked on the recent Netflix series Stranger Things at Technicolor’s facilities in Hollywood.

Over 160 attendees came to hear how supervising sound editor Brad North, sound designer Craig Henighan, sound effects editor Jordan Wilby, music editor David Klotz and dialog/music re-recording mixer Joe Barnett worked their magic on last year’s eight-episode Season One. (Sadly, effects re-recording mixer Adam Jenkins was unable to attend the gathering.) Stranger Things, from co-creators Matt Duffer and Ross Duffer, is scheduled to return in mid-year for Season Two.

L-R: Jordan Wilby, Brad North, Craig Henighan, Joe Barnett, David Klotz and Mel Lambert. Photo Credit: Steve Harvey.

Attendees heard how the crew developed each show’s unique 5.1-channel soundtrack, from editorial through re-recording — including an ‘80s-style, synth-based music score, from Austin-based composers Kyle Dixon and Michael Stein, that is key to the show’s look and feel — courtesy of a full-range surround sound playback system supplied by Dolby Labs.

“We drew our inspiration — subconsciously, at least — from sci-fi films like Alien, The Thing and Predator,” Henighan explained. The designer also revealed how he developed a characteristic sound for the monster that appears in key scenes. “The basic sound is that of a seal,” he said. “But it wasn’t as simple as just using a seal vocal, although it did provide a hook — an identifiable sound around which I could center the rest of the monster sounds. It’s fantastic to take what is normally known as a nice, light, fun-loving sound and use it in a terrifying way!” Tim Prebble, a New Zealand-based sound designer, and owner of sound effects company Hiss and A Roar, offers a range of libraries, including SD003 Seal Vocals|Hiss and A Roar.

Gear used includes Avid Pro Tools DAWs — everybody works in the box — and an Avid 64-fader, dual-operator S6 console at the Technicolor Seward stage. The composers use Apple Logic Pro to record and edit their music, which is delivered as AAF-format files.


Mel Lambert is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

 

Stranger Things

Upcoming AES LA meeting features Netflix’s Stranger Things sound team

On January 31, the AES LA Section monthly meeting will showcase the sound editorial and re-recording of the Netflix series Stranger Things. Attendees will hear first-hand how the sound team creates the 5.1-channel soundtrack, including the eerie music that is key to the show’s look and feel. A second season from the Duffer Brothers is scheduled to start later this year, with its haunting ’80s-style, synth-based musical score.

For those of you not familiar with the show, it’s set in Indiana in 1983 and focuses on a 12-year-old boy gone missing and the resulting search for him by the police chief and his friends.

The editorial team for Stranger Things is headed up by supervising sound editor Brad North, who works closely with sound designer Craig Henighan, sound effects editor Jordan Wilby and music editor David Klotz. The re-recording crew, working at the Technicolor Seward stage, is Joe Barnett, who handles dialogue and music, and Adam Jenkins, who handles sound effects.

“We drew our inspiration — subconsciously, at least — from such sci-fi films as Alien, The Thing and Predator,” Henighan recalls. Part sci-fi, part horror and part family drama, Stranger Things is often considered an homage to ’80s movies like Close Encounters of the Third Kind and E.T.

The joint AES/SMPTE January meeting, which will be held at the Sportsmen’s Lodge in Studio City on Tuesday, January 31, is open to both AES and SMPTE members and non-members.

Panelists will include Adam Jenkins, Jordan Wilby, Joe Barnett, David Klotz, Brad North and Craig Henighan.

AES Conference focuses on immersive audio for VR/AR

By Mel Lambert

The AES Convention, which was held at the Los Angeles Convention Center in early October, attracted a broad cross section of production and post professionals looking to discuss the latest technologies and creative offerings. The convention had approximately 13,000 registered attendees and more than 250 brands showing wares in the exhibits halls and demo rooms.

Convention Committee co-chairs Valerie Tyler and Michael MacDonald, along with their team, created the comprehensive schedule of workshops, panels and special events for this year’s show. “The Los Angeles Convention Center’s West Hall was a great new location for the AES show,” said MacDonald. “We also co-located the AVAR conference, and that brought 3D audio for gaming and virtual reality into the mainstream of the AES.”

“VR seems to be the next big thing,” added AES executive director Bob Moses, “[with] the top developers at our event, mapping out the future.”

The two-day, co-located Audio for Virtual and Augmented Reality Conference was expected to attract about 290 attendees, but with aggressive marketing and outreach to the VR and AR communities, pre-registration closed at just over 400.

Aimed squarely at the fast-growing field of virtual/augmented reality audio, this conference focused on the creative process, applications workflow and product development. “Film director George Lucas once stated that sound represents 50 percent of the motion picture experience,” said conference co-chair Andres Mayo. “This conference demonstrates that convincing VR and AR productions require audio that follows the motions of the subject and produces a realistic immersive experience.”

Spatial sound that follows head orientation for headsets powered either by dedicated DSP, game engines or smartphones opens up exciting opportunities for VR and AR producers. Oculus Rift, HTC Vive, PlayStation VR and other systems are attracting added consumer interest for the coming holiday season. Many immersive-audio innovators, including DTS and Dolby, are offering variants of their cinema systems targeted at this booming consumer marketplace via binaural headphone playback.

Sennheiser’s remarkable new Ambeo VR microphone can be used to capture 3D sound, with the recordings then post produced to create different spatial perspectives — a perfect adjunct for AR/VR offerings. At the high end, Nokia unveiled its Ozo VR camera, equipped with eight camera sensors and eight microphones, as an alternative to a DIY assembly of GoPro cameras, for example.

Two fascinating keynotes bookended the AVAR Conference. The opening keynote, presented by Philip Lelyveld, VR/AR initiative program manager at the USC Entertainment Technology Center, Los Angeles, and called “The Journey into Virtual and Augmented Reality,” defined how virtual, augmented and mixed reality will impact entertainment, learning and social interaction. “Virtual, Augmented and Mixed Reality have the potential of delivering interactive experiences that take us to places of emotional resonance, give us agency to form our own experiential memories, and become part of the everyday lives we will live in the future,” he explained.

“Just as TV programming progressed from live broadcasts of staged performances to today’s very complex language of multithread long-form content,” Lelyveld stressed, “so such media will progress from the current early days of projecting existing media language with a few tweaks to a headset experience into a new VR/AR/MR-specific language that both the creatives and the audience understand.”

In his closing keynote, “Future Nostalgia, Here and Now: Let’s Look Back on Today from 20 Years Hence,” George Sanger, director of sonic arts at Magic Leap, attempted to predict where VR/AR/MR will be in two decades. “Two decades of progress can change how we live and think in ways that boggle the mind,” he acknowledged. “Twenty years ago, the PC had rudimentary sound cards; now the entire ‘multitrack recording studio’ lives on our computers. By 2036, we will be wearing lightweight portable devices all day. Our media experience will seamlessly merge the digital and physical worlds; how we listen to music will change dramatically. We live in the Revolution of Possibilities.”

According to conference co-chair Linda Gedemer, “It has been speculated by Wall Street [pundits] that VR/AR will be as game changing as the advent of the PC, so we’re in for an incredible journey!”

Mel Lambert, who also gets photo credit on pictures from the show, is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

AES: Avid intros Pro Tools 12.6 and new MTRX audio interface

Avid was at AES in LA with several new tools and updates for audio post pros. New releases include Pro Tools 12.6 software and Pro Tools MTRX, an audio interface for Pro Tools, HDX and HD Native.

Avid Pro Tools 12.6 delivers new editing capabilities, including Clip Effects and layered editing features, making it possible to edit and prepare mixes faster. Production can also be accelerated with automatic playlist creation and shortcut-key playlist selection. Enhanced “in-the-box” dubber workflows have also been included.

Pro Tools MTRX, developed by Digital Audio Denmark, gives Pro Tools users the superior sonic quality of DAD’s A to D and D to A converters, along with flexible monitoring, I/O and routing capabilities, all in one unit. MTRX will let users gain extended monitor control and flexible routing with Pro Tools S6, S3 and other EUCON surfaces, use the converter as a high-performance 64-channel Pro Tools HD interface, and get automatic sample rate conversion on AES inputs. MTRX will be available later this year.
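Automatic sample rate conversion of the kind MTRX applies to its AES inputs can be illustrated with a toy linear-interpolation resampler; production converters use high-order polyphase filters for far lower distortion, so this is only a sketch of the concept:

```python
import numpy as np

def resample_linear(x, ratio):
    """Toy asynchronous sample-rate converter using linear interpolation.
    `ratio` is output_rate / input_rate, e.g. 48000/44100. Production
    converters use high-order polyphase filters instead."""
    n_out = int(round(len(x) * ratio))
    positions = np.arange(n_out) / ratio          # fractional read positions
    idx = np.minimum(positions.astype(int), len(x) - 2)
    frac = positions - idx
    return x[idx] * (1.0 - frac) + x[idx + 1] * frac

# Resample one second of a 1kHz tone from 44.1kHz to 48kHz.
tone = np.sin(2 * np.pi * 1000 * np.arange(44100) / 44100)
out = resample_linear(tone, 48000 / 44100)
print(len(out))   # 48000
```

The practical point is that an AES input clocked at a slightly different rate than the system clock must be continuously re-timed, which is why built-in SRC matters on a routing interface.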

Tony Cariddi

During AES LA, we caught up with Tony Cariddi, director of product and solutions marketing for Avid, to see what he had to say about where Avid is going next. “What we have seen in the industry is that there is no shortage of innovation and there are new solutions for problems that are always emerging,” says Cariddi. “But what happens when you have all of these different solutions is it puts a lot of pressure on the user to make sure everything works together seamlessly. So what you’ll see from Avid Everywhere going forward is a continuation of trying to connect our own products closer together on the MediaCentral Platform, so it’s really fluid for our users, but also for people to be able to integrate other solutions into that platform just as easily.

“We also have to be responsive to how people want to access our tools,” he continued. “What kind of packages are they looking for? Do they want to subscribe? Do they want to buy? Enterprise licensing? Floating license? So you’ll probably see bundles and new ways to access licensing and new flexible ways to maybe rent the software when you need it. We’re trying to be very responsive to the multifaceted needs of the industry, and part of that is workflow, part of that is financial and part of that is the integration of everything.”

AR/VR audio conference taking place with AES show in fall


The AES is tackling the augmented reality and virtual reality creative process, applications workflow and product development for the first time with a dedicated conference that will take place on 9/30-10/1 during the 141st AES Convention at the LA Convention Center’s West Hall.

The two-day program of technical papers, workshops, tutorials and a manufacturers’ expo will highlight the creative and technical challenges of providing immersive spatial audio to accompany virtual reality and augmented reality media.

The conference will attract content developers, researchers, manufacturers, consultants and students, in addition to audio engineers seeking to expand their knowledge about sound production for virtual and augmented reality. The companion expo will feature displays from leading-edge manufacturers and service providers looking to secure industry metrics for this emerging field.

“Film director George Lucas once stated that sound represents 50 percent of the motion picture experience,” shares conference co-chair Andres Mayo. “This conference will demonstrate that VR and AR productions, using a variety of playback devices, require audio that follows the motions of the subject, and produces a realistic immersive experience. Our program will spotlight the work of leading proponents in this exciting field of endeavor, and how realistic spatial audio can be produced from existing game console and DSP engines.”

Proposed topics include object-based audio mixing for VR/AR, immersive audio in VR/AR broadcast, live VR audio production, developing audio standards for VR/AR, cross platform audio considerations in VR and streaming immersive audio content.

Costs range from $195 for a one-day pass for AES members ($295 for a two-day pass) and $125 for accredited students, to $280/$435 for non-members. Early-bird discounts are also available.

Conference registrants can also attend the 141st AES Convention’s companion exhibition, select educational sessions and special events free of charge with an exhibits-plus badge.

Oscar-nominated sound editors, mixers share insights with AES LA section

By Mel Lambert

A recent meeting of the Audio Engineering Society’s Los Angeles section offered an opportunity to hear from a number of Oscar nominees and winners as they shared their experiences while preparing dramatic film soundtracks, including how the various sound elements were secured, edited and mixed to picture, plus the types of hardware used in editorial suites and dubbing stages.

Whiplash, written and directed by Damien Chazelle, was re-recorded at Technicolor at Paramount’s Stage 4 by dialog/music mixer Craig Mann and sound effects mixer Ben Wilkins (see our interview with Wilkins), using tracks secured on location by production mixer Thomas …

‘Future of Audio Tech’ confab tackles acoustics, loudness, more

By Mel Lambert

Organized by the Audio Engineering Society, “The Future of Audio Entertainment Technology: Cinema, Television and the Internet” conference addressed the myriad challenges facing post professionals working in the motion picture and home delivery industries. Co-chaired by Dr. Sean Olive and Brian McCarty, and held at the TCL Chinese Theatre in Hollywood in early March, the three-day gathering comprised several keynote addresses, workshops and papers sessions.

In addition to sponsorship from Dolby, Harman, Auro3D, Avid, Sennheiser, DTS, NBC Universal Studio Post, MPSE and SMPTE, the event attracted a reported 155 attendees.

Referencing a report last year in The Hollywood Reporter that more than 350 different …