
VR Audio: Virtual and spatial soundscapes

By Beth Marchant

The first things most people think of when starting out in VR are which 360-degree camera rig they need and what software is best for stitching. But virtual reality is not just a Gordian knot for production and post. Audio is as important — and as complex — a component as the rest. In fact, audio designers, engineers and composers have been fascinated and challenged by VR’s potential for some time and, working alongside future-looking production facilities, are equally engaged in forging its future path. We talked to several industry pros on the front lines.

Howard Bowler

Music industry veteran and Hobo Audio founder Howard Bowler traces his interest in VR back to the groundbreaking film Avatar. “When that movie came out, I saw it three times in the same week,” he says. “I was floored by the technology. It was the first time I felt like you weren’t just watching a film, but actually in the film.” As close to virtual reality as 3D films had gotten to that point, it was the blockbuster’s evolved process of motion capture and virtual cinematography that ultimately delivered its breathtaking result.

“Sonically it was extraordinary, but visually it was stunning as well,” he says. “As a result, I pressed everyone here at the studio to start buying 3D televisions, and you can see where that has gotten us — nowhere.” But a stepping stone in technology is more often a sturdy bridge, and Bowler was not discouraged. “I love my 3D TVs, and I truly believe my interest in that led me and the studio directly into VR-related projects.”

When discussing the kind of immersive technology Hobo is involved with today, Bowler — like others interviewed for this series — clearly defines VR’s parallel deliverables. “First, there’s 360 video, which is passive viewing, but still puts you in the center of the action. You just don’t interact with it. The second type, more truly immersive VR, lets you interact with the virtual environment as in a video game. The third area is augmented reality,” like the Pokémon Go phenomenon of projecting virtual objects and views onto your actual, natural environment. “It’s really important to know what you’re talking about when discussing these types of VR with clients, because there are big differences.”

With each segment comes related headsets, lenses and players. “Microsoft’s HoloLens, for example, operates solely in AR space,” says Hobo producer Jon Mackey. “It’s a headset, but will project anything that is digitally generated, either on the wall or to the space in front of you. True VR separates you from all that, and really good VR separates all your senses: your sight, your hearing and even touch and feeling, like some of those 4D rides at Disney World.” Which technology will triumph? “Some think VR will take it, and others think AR will have wider mass adoption,” says Mackey. “But we think it’s too early to decide between either one.”

Boxed Out

‘Boxed Out’ is a Hobo indie project about how gentrification is affecting artists’ studios in the Gowanus section of Brooklyn.

Those kinds of end-game obstacles are beside the point, says Bowler. “The main reason why we’re interested in VR right now is that the experiences, beyond the limitations of whatever headset you watch it on, are still mind-blowing. It gives you enough of a glimpse of the future that it’s incredible. There are all kinds of obstacles it presents just because it’s new technology, but from our point of view, we’ve honed it to make it pretty seamless. We’re digging past a lot of these problem areas, so at least from the user standpoint, it seems very easy. That’s our goal. Down the road, people from medical, education and training are going to need to understand VR for very productive reasons. And we’re positioning ourselves to be there on behalf of our clients.”

Hobo’s all-in commitment to VR has brought changes to its services as well. “Because VR is an emerging technology, we’re investing in it globally,” says Bowler. “Our company is expanding into complete production, from concepting — if the client needs it — to shooting, editing and doing all of the audio post. We have the longest experience in audio post, but we find that this is just such an exciting area that we wanted to embrace it completely. We believe in it and we believe this is where the future is going to be. Everybody here is completely on board to move this forward and sees its potential.”

To ramp up on the technology, Hobo teamed up with several local students who were studying at specialty schools. “As we expanded out, we got asked to work with a few production companies, including East Coast Digital and End of Era Productions, that are doing the video side of it. We’re bundling our services with theirs to provide clients a comprehensive package.” Hobo is also collaborating with Hidden Content, a VR production and post production company, to provide 360 audio for premium virtual reality content. Hidden Content’s clients include Samsung, 451 Media, Giant Step, PMK-BNC, Nokia and Popsugar.

There is still plenty of magic sauce in VR audio that continues to make it a very tricky part of the immersive experience, but Bowler and his team are engineering their way through it. “We’ve been developing a mixing technique that allows you to tie the audio to the actual object,” he says. “What that does is disrupt the normal stereo mix. Say you have a public speaker in the center of the room; normally that voice would turn with you in your headphones if you turn away from him. What we’re able to do is tie the audio of the speaker to the actual object, so when you turn your head, it will pan to the right earphone. That also allows you to use audio as a signaling device in the storyline. If you want the viewer to look in a certain direction in the environment, you can use an audio cue to do that.”
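To make the idea concrete, here is a minimal sketch in Python. Hobo’s actual mixing technique is not public, so the function name, the angle convention and the simple constant-power stereo pan are all illustrative assumptions; a production VR pipeline would typically render such a source binaurally rather than with a plain pan.

```python
import numpy as np

def world_locked_pan(mono, source_az_deg, head_yaw_deg):
    """Keep a source anchored to its place in the scene by re-panning
    it against the listener's head yaw (illustrative sketch only).

    mono          -- 1-D array of audio samples
    source_az_deg -- source direction in the scene (0 = front, +90 = left)
    head_yaw_deg  -- headset yaw from the tracker, same convention
    """
    # Where the source sits relative to where the head now points.
    rel_az = np.radians(source_az_deg - head_yaw_deg)
    # Constant-power pan: +1 = hard left, -1 = hard right.
    pan = np.clip(rel_az / (np.pi / 2), -1.0, 1.0)
    left = mono * np.cos((1.0 - pan) * np.pi / 4)
    right = mono * np.cos((1.0 + pan) * np.pi / 4)
    return np.stack([left, right])
```

With the speaker at 0 degrees and the head turned 90 degrees to the left, rel_az comes out to -90 degrees and the voice lands entirely in the right earphone, which is the behavior Bowler describes.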

Hobo engineer Diego Jimenez drove a lot of that innovation, says Mackey. “He’s a real VR aficionado and just explored a lot of the software and mixing techniques required to do audio in VR. We started out just doing a ton of tests and they all proved successful.” Jimenez was always driven by new inspiration, notes Bowler. “He’s certainly been leading our sound design efforts on a lot of fronts, from creating instruments to creating all sorts of unusual and original sounds. VR was just the natural next step for him, and for us. For example, one of the spots that we did recently was to create a music video and we had to create an otherworldly environment. And because we could use our VR mixing technology, we could also push the viewer right into the experience. It was otherworldly, but you were in that world. It’s an amazing feeling.”


‘Boxed Out’

What advice do Bowler and Mackey have for those interested in VR production and post? “360 video is to me the entry point to all other versions of immersive content,” says Bowler. “It’s the most basic, and it’s passive, like what we’re used to — television and film. But it’s also a completely undefined territory when it comes to production technique.” So what’s the way in? “You can draw on some of the older ways of doing productions,” he says, “but how do you storyboard in 360? Where does the director sit? How do you hide the crew? How do you light this stuff? All of these things have to be considered when creating 360 video. That also includes everyone on camera: all the viewer has to do is look around the virtual space to see what’s going on. You don’t want anything that takes the viewer out of that experience.”

Bowler thinks 360 video is also the perfect entry point to VR for marketers and advertisers creating branded VR content, and Hobo’s clients agree. “When we’ve suggested 360 video on certain projects and clients want to try it out, what that does is it allows the technology to breathe a little while it’s underwritten at the same time. It’s a good way to get the technology off the ground and also to let clients get their feet wet in it.”

Any studio or client contemplating VR, adds Mackey, should first find what works for them and develop an efficient workflow. “This is not really a solidified industry yet,” he says. “Nothing is standard, and everyone’s waiting to see who comes out on top and who falls by the wayside. What’s the file standard going to be? Or the export standard? Will it be custom-made apps on (Google) YouTube or Facebook? We’ll see Facebook and Google battle it out in the near term. Facebook has recently acquired an audio company to help them produce audio in 360 for their video app, and Google has the Daydream platform,” though neither platform’s codec is compatible with the other, he points out. “If you mix your audio to Facebook audio specs, you can actually have your audio come out in 360. For us, it’s been trial and error, where we’ve experimented with these different mixing techniques to see what fits and what works.”

Still, Bowler concedes, there is no true business yet in VR. “There are things happening and people getting things out there, but it’s still so early in the game. Sure, our clients are intrigued by it, but they are still a little mystified by what the return will be. I think this is just part of what happens when you deal with new technology. I still think it’s a very exciting area to be working in, and it wouldn’t surprise me if it touches many, many different subjects, from history to the arts to original content. Think about applications for geriatrics, with an aging population that gets less mobile but still wants to experience the Caribbean or our National Parks. The possibilities are endless.”

At some point, he admits, it may even become difficult to distinguish one’s real memory from one’s virtual memory. But is that really such a bad thing? “I’m already having this problem. I was watching an immersive video of Cuban music that was beautifully done, and by the end of the five-minute spot I had the visceral experience that I was actually there. It’s just a very powerful way of experiencing content. Let me put it another way: 3D TVs were at the rabbit hole, and immersive video will take you down the rabbit hole into the other world.”

Source Sound
LA-based Source Sound has provided supervision and sound design on a number of Jaunt-produced cinematic VR experiences, including a virtual fashion show, a horror short and a Godzilla short film written and directed by Oscar-winning VFX artist Ian Hunter, as well as final Atmos audio mastering for the early immersive release Sir Paul McCartney Live. The studio is ready for the spatial mixes to come. That wasn’t initially the case.


Tim Gedemer

“When Jaunt first got into this space three years ago, they went to Dolby to try to figure out the audio component,” says Source Sound owner/supervising sound designer/editor Tim Gedemer. “I got a call from Dolby, who told me about what Jaunt was doing, and the first thing I said was, ‘I have no idea what you are talking about!’ Whatever it is, I thought, there’s really no budget and I was dragging my feet. But I asked them to show me exactly what they were doing. I was getting curious at that point.”

After meeting the team at Jaunt, who strapped some VR goggles on him and showed him some footage, Gedemer was hooked. “It couldn’t have been more than 30 seconds in and I was just blown away. I took off the headset and said, ‘What the hell is this?! We have to do this right now.’ They could have reached out to a lot of people, but I was thrilled that we were able to help them by seizing the moment.”

Gedemer says Source Sound’s business has expanded in multiple directions in the past few years, and VR is still a significant part of the studio’s revenue. “People are often surprised when I tell them VR accounts for about 15-20 percent of our business today,” he says. “It could be a lot more, but we’d have to allocate the studios differently first.”

With a background in mixing and designing sound for film, gaming and theatrical trailers, Gedemer and his studio have a very focused definition of immersive experiences, and it all includes spatial audio. “Stereo 360 video with mono audio is not VR. For us, there’s cinematic, live-action VR, then straight-up game development that can easily migrate into a virtual reality world and, finally, VR for live broadcast.” Mass adoption of VR won’t happen, he believes, until enterprise and job training applications jump on the bandwagon with entertainment. “I think virtual reality may also be a stopover before we get to a world where augmented reality is commonplace. It makes more sense to me that we’ll just overlay all this content onto our regular days, instead of escaping from one isolated experience to the next.”

On set for the European launch of the Nokia Ozo VR camera in London, which featured live musical performances captured in 360 VR.

For now, Source Sound’s VR work is completed in dedicated studios configured with gear for that purpose. “It doesn’t mean that we can’t migrate more into other studios, and we’re certainly evolving our systems to be dual-purpose,” he says. “About a year ago we were finally able to get a grip on the kinds of hardware and software we needed to really start coagulating this workflow. It was also clear from the beginning of our foray into VR that we needed to partner with manufacturers, like Dolby and Nokia. Both of those companies’ R&D divisions are on the front lines of VR in the cinematic and live broadcast space, with Dolby’s Atmos for VR and Nokia’s Ozo camera.”

What missing tools and technology have to be developed to achieve VR audio nirvana? “We delivered a wish list to Dolby, and I think we got about a quarter of the list,” he says. “But those guys have been awesome in helping us out. Still, it seems like just about every VR project that we do, we have to invent something to get us to the end. You definitely have to have an adventurous spirit if you want to play in this space.”

The work has already influenced his approach to more traditional audio projects, he says, and he now notices the lack of spatial sound everywhere. “Everything out there is a boring rectangle of sound. It’s on my phone, on my TV, in the movie theater. I didn’t notice it as much before, but it really pops out at me now. The actual creative work of designing and mixing immersive sound has realigned the way I perceive it.”

Main Image: One of Hobo’s audio rooms, where the VR magic happens.


Beth Marchant has been covering the production and post industry for 21 years. She was the founding editor-in-chief of Studio/monthly magazine and the co-editor of StudioDaily.com. She continues to write about the industry.

 

VR Audio: What you need to know about Ambisonics

By Claudio Santos

The explosion of virtual reality as a new entertainment medium has been widely discussed in the filmmaking community over the past year, and there is still no consensus about what the future holds for the technology. But regardless of the predictions, it is a fact that more and more virtual reality content is being created, and various producers are experimenting to find just how the technology fits into the current market.

Out of the vast possibilities of virtual reality, there is one segment that is particularly close to us filmmakers, and that is 360 video. These videos are becoming more and more popular on platforms such as YouTube and Facebook, and they present a distinct advantage: besides playing in VR headsets such as the Gear VR or Daydream, they can also be played on mobile phones, tablets and desktops. This considerably expands the potential audience when compared to the relatively small group of people who own virtual reality headsets.

But simply making the image immerse the viewer into a 360 environment is not enough. Without accompanying spatial audio the illusion is very easily broken, and it becomes very difficult to cue the audience to look in the direction in which the main action of each moment is happening. While there are technically a few ways to design and implement spatial audio into a 360 video, I will share some thoughts and tips on how to work with Ambisonics, the spatial audio format chosen as the standard for platforms such as YouTube.

VR shoot in Bryce Canyon with Google for the Hidden Worlds of the National Parks project. Credit: Hunt Beaty

First, what is Ambisonics and why are we talking about it?
Ambisonics is a sound format that is slightly different from your usual stereo/surround paradigm because its channels are not attached to speakers. Instead, an Ambisonics recording represents the whole spherical soundfield around a point. In practice, this means you can represent sound coming from all directions around a listening position and, using an appropriate decoder, play back the same recording on any set of speakers, with any number of channels, arranged around the listener horizontally or vertically. That is exactly why it is so interesting to us when we are working with spatial sound for VR.

The biggest challenge of VR audio is that you can’t predict which direction the viewer will be looking at any given time. Using Ambisonics, we can design the whole sound sphere, and the VR player decodes the sound to match the direction of the video in realtime, rendering it to binaural for accurate headphone playback. The best part is that the decoding process is relatively light on processing power, which makes this a suitable option for platforms with limited resources, such as smartphones.
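In code, the realtime step the player performs is essentially a soundfield rotation ahead of the binaural decode. The sketch below is a hedged illustration that assumes first-order AmbiX material (ACN channel order W, Y, Z, X with SN3D weights, the convention YouTube uses); a real player also handles pitch and roll, not just yaw.

```python
import numpy as np

def rotate_foa_yaw(bformat, head_yaw_deg):
    """Counter-rotate a first-order AmbiX soundfield, shape (4, n) in
    ACN order (W, Y, Z, X), by the listener's head yaw."""
    phi = np.radians(-head_yaw_deg)  # head turns left -> field rotates right
    w, y, z, x = bformat
    # A rotation about the vertical axis mixes only the horizontal
    # components; W (omni) and Z (height) pass through unchanged.
    y_rot = y * np.cos(phi) + x * np.sin(phi)
    x_rot = x * np.cos(phi) - y * np.sin(phi)
    return np.stack([w, y_rot, z, x_rot])
```

A few multiplies and adds per sample is why the decode stays cheap enough for a phone.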

In order to work with Ambisonics we have two options. We can record the sound on location with an Ambisonics microphone, which gives us a very realistic representation of the sound in the location and is very well suited to ambience recordings, for example. Or we can encode other sound formats, such as mono and stereo, into Ambisonics and then manipulate the sound in the sphere from there, which gives us great flexibility in post production to use sound libraries and to create interesting effects by carefully adjusting the positioning and width of a sound in the sphere.
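The encoding option reduces to a handful of multiplications. As a sketch, again assuming AmbiX channel order and SN3D weights (other conventions, such as FuMa, scale and order the channels differently), a mono source is placed at a given azimuth and elevation like this:

```python
import numpy as np

def encode_foa(mono, az_deg, el_deg=0.0):
    """Encode a mono signal into first-order AmbiX (ACN: W, Y, Z, X)."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    w = mono                              # omnidirectional level
    y = mono * np.sin(az) * np.cos(el)    # left(+) / right(-)
    z = mono * np.sin(el)                 # up / down
    x = mono * np.cos(az) * np.cos(el)    # front / back
    return np.stack([w, y, z, x])
```

For example, encode_foa(effect, az_deg=90) places a library sound hard left in the sphere, and sweeping az_deg over time pans it around the viewer.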

Example: Mono “voice of God” placement. The left shows the soundfield completely filled, which gives the “in-head” illusion.

There are plenty of resources online explaining the technical nature of Ambisonics, and I definitely recommend reading them so you can better understand how to work with it and how the spatiality is achieved. But there aren’t many discussions yet about the creative decisions and techniques used in sound for 360 videos with Ambisonics, so that’s what we will be focusing on from now on.

What to do with mono “in-head” sources such as VO?
That was one of the first tricky challenges we found with Ambisonics. It is not exactly intuitive to place a sound source equally in all directions of the soundfield. The easiest solution comes more naturally once you understand how the four channels of the Ambisonics audio track interact with each other.

The first channel of the Ambisonics audio, named W, is omnidirectional and carries the level information of the sound. The other three channels (X, Y and Z) describe the position of the sound in the soundfield through phase relationships. Each of these channels represents one dimension, which enables the positioning of sounds in three dimensions.

Now, if we want the sound to play at the same level and centered from every direction, what we want is for the sound source to be at the center of the soundfield “sphere,” where the listener’s head is. In practice, that means that if you play the sound out of the first channel only, with no information in any of the other three channels, the sound will play “in-head.”
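As a sketch, using the same assumed AmbiX layout as above, the whole trick is one line of channel stacking:

```python
import numpy as np

def encode_in_head(mono):
    """Mono "voice of God" placement: signal in the omnidirectional W
    channel only, directional channels silent, so it decodes at the
    same level from every viewing direction."""
    silent = np.zeros_like(mono)
    return np.stack([mono, silent, silent, silent])  # W, Y, Z, X
```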

What to do with stereo non-diegetic music?
This is the natural question that follows knowing what to do with mono sources, and the answer is a bit trickier. The mono, first-channel trick doesn’t work perfectly with stereo sources because for it to work you would first have to sum the stereo to mono, which might be undesirable depending on your track.

If you want to maintain the stereo width of the source, one good option we found was to mirror the sound in two directions. Some plug-in suites, such as the Ambix VST, offer the functionality to mirror hemispheres of the soundfield. The same result could also be accomplished with careful positioning of a copy of the source, but the plug-in makes things easier.

Example of a sound placed in the “left” of the soundfield in Ambisonics.

Generally, what you want is to place the center of the stereo source at the focus of the action your audience will be looking at and mirror the top-bottom and the front-back. This keeps the music playing at the same level regardless of the direction the viewer looks, but preserves the spatiality of the source. The downside is that the sound is not anchored to the viewer, so the apparent direction of the sources will shift as the viewer turns around, notably inverting the sides when looking toward the back. I usually find this to be an interesting effect nonetheless, and it doesn’t distract the audience too much. If the directionality is too noticeable, you can always mix a bit of the mono sum of the music into both channels in order to reduce the perceived width of the track.
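Here is one way to express the mirroring trick in code, under the same AmbiX assumptions as the earlier sketches. Summing each encoded channel with its front-back and top-bottom mirror images cancels the X (front-back) and Z (height) components exactly, so only W and the left-right Y component survive. Shrinking width_deg mixes the image toward mono, which matches the width-reduction tip above.

```python
import numpy as np

def encode_stereo_mirrored(left, right, width_deg=45.0):
    """Stereo music bed mirrored front-back and top-bottom (first-order
    AmbiX: W, Y, Z, X). Level stays constant under head yaw; the
    left-right image flips behind the listener."""
    g = np.sin(np.radians(width_deg))
    w = left + right           # mirror pairs double W; 0.5x renormalized here
    y = (left - right) * g     # all that survives of the directionality
    silent = np.zeros_like(w)
    return np.stack([w, y, silent, silent])
```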

How to creatively use reverberation in Ambisonics?
There is a lot you can do with reverberation in Ambisonics; this is just one trick I find very useful when dealing with scenes in which you have one big obstacle in one direction (such as a wall) and no obstacles in the opposite direction.

In this situation, the sound would reflect from the barrier and return to the listener from one direction, while on the opposite side there would be no significant reflections because of the open field. You can simulate that by placing a slightly delayed reverb coming from the direction of the barrier only. You can adjust the width of the reflection sound to match the perceived size of the barrier and the delay based on the distance the barrier is from the viewer. In this case the effect usually works better with drier reverbs with defined early reflections but not a lot of late reflections.
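As a sketch of that setup, the reflection is a delayed, convolved copy of the dry sound encoded from the barrier’s direction only. The 343 m/s speed of sound is the one physical constant here; the impulse response early_ir and the exact delay are assumptions you would tune by ear, as described above.

```python
import numpy as np

def barrier_reflection(dry, sr, wall_az_deg, distance_m, early_ir):
    """One-sided reflection: delay a reverb return by the round trip
    to the barrier and encode it from the barrier's direction
    (first-order AmbiX W, Y, Z, X; horizontal-only encode)."""
    delay = int(round(sr * 2.0 * distance_m / 343.0))  # round trip, ~343 m/s
    wet = np.convolve(dry, early_ir)[: len(dry)]       # short, early-heavy IR
    wet = np.concatenate([np.zeros(delay), wet])[: len(dry)]
    az = np.radians(wall_az_deg)
    w, y, x = wet, wet * np.sin(az), wet * np.cos(az)
    return np.stack([w, y, np.zeros_like(w), x])
```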

Once you experiment with this technique you can use variations of it to simulate a variety of spaces and achieve even more realistic mixes that will fool anyone into believing the sounds you placed in post production were recorded on location.

Main Caption: VR shoot in Hawaii with Google for the Hidden Worlds of the National Parks project. Credit: Hunt Beaty.


Claudio Santos is a sound editor at Silver Sound/SilVR in New York.


Sony Pictures Post adds home theater dub stage

By Mel Lambert

Reacting to the increasing popularity of home theater systems that offer immersive sound playback, Sony Pictures Post Production has added a new mix stage to accommodate next-generation consumer audio formats.

Located in the landmark Thalberg Building on the Sony Pictures lot in Culver City, the new Home Theater Immersive Mix Stage features a flexible array of loudspeakers that can accommodate not only Dolby Atmos and Barco Auro-3D immersive consumer formats, but also other configurations as they become available, including DTS:X, as well as conventional 5.1- and 7.1-channel legacy formats.

The new room has already seen action on an Auro-3D consumer mix for director Paul Feig’s Ghostbusters and director Antoine Fuqua’s Magnificent Seven in both Atmos and Auro-3D. It is scheduled to handle home theater mixes for director Morten Tyldum’s new sci-fi drama Passengers, which will be overseen by Kevin O’Connell and Will Files, the re-recording mixers who worked on the theatrical release.

L-R: Nathan Oishi; Diana Gamboa, director of Sony Pictures Post Sound; Kevin O’Connell, re-recording mixer on ‘Passengers’; and Tom McCarthy.

“This new stage keeps us at the forefront in immersive sound, providing an ideal workflow and mastering environment for home theaters,” says Tom McCarthy, EVP of Sony Pictures Post Production Services. “We are empowering mixers to maximize the creative potential of these new sound formats, and deliver rich, enveloping soundtracks that consumers can enjoy in the home.”

Reportedly, Sony is one of the few major post facilities that currently can handle both Atmos and Auro-3D immersive formats. “We intend to remain ahead of the game,” McCarthy says.

The consumer mastering process involves repurposing original theatrical release soundtrack elements for a smaller domestic environment at reduced playback levels suitable for Blu-ray, 4K Ultra HD disc and digital delivery. The home Atmos format involves a 7.1.4 configuration, with a horizontal array of seven loudspeakers — three up front, two side channels and two rear surrounds — in addition to four overhead/height channels and a subwoofer/LFE channel. The consumer Auro-3D format, in essence, involves a pair of 5.1-channel loudspeaker arrays — left, center and right plus two rear surround channels — located one above the other, with all speakers approximately six feet from the listening position.
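Written out as plain data, the two layouts described above look roughly like this. The channel names are common shorthand, not a vendor specification, and the Auro height names in particular are illustrative:

```python
# Home Atmos 7.1.4 as described above: seven ear-level speakers,
# one subwoofer/LFE and four overhead/height channels.
HOME_ATMOS_7_1_4 = {
    "ear_level": ["L", "C", "R", "Lss", "Rss", "Lrs", "Rrs"],
    "lfe": ["LFE"],
    "height": ["Ltf", "Rtf", "Ltr", "Rtr"],
}

# Consumer Auro-3D as described above: a 5.1 bed with a matching
# speaker layer mounted directly above it.
AURO_3D_CONSUMER = {
    "lower": ["L", "C", "R", "Ls", "Rs", "LFE"],
    "height": ["HL", "HC", "HR", "HLs", "HRs"],
}
```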

Formerly an executive screening room, the new 600-square-foot stage is designed to replicate the dimensions and acoustics of a typical home-theater environment. According to the facility’s director of engineering, Nathan Oishi, “The room features a 24-fader Avid S6 control surface console with Pan/Post modules. The four in-room Avid Pro Tools HDX 3 systems provide playback and record duties via Apple 12-Core Mac Pro CPUs with MADI interfaces and an 8TB Promise Pegasus hard disk RAID array, plus a wide array of plug-ins. Picture playback is from a Mac Mini and Blackmagic HD Extreme video card with a Brainstorm DCD8 Clock for digital sync.”

An Avid/DAD AX32 matrix controller handles monitor assignments, which then route to a BSS BLU 806 programmable EQ that handles all of the standard B-chain duties for distribution to the room’s loudspeaker array. This comprises a total of 13 JBL LSR-708i two-way loudspeakers and two JBL 4642A dual-15 subwoofers powered by Crown DCi Series networked amplifiers. Atmos panning within Pro Tools is accommodated by the familiar Dolby Rendering and Mastering Unit (RMU).

During September’s “Sound for Film and Television Conference,” Dolby’s Gary Epstein demo’d Atmos. ©2016 Mel Lambert.

“A Delicate Audio custom truss system, coupled with Adaptive Technologies speaker mounts, enables the near-field monitor loudspeakers to be re-arranged and customized as necessary,” adds Oishi. “Flexibility is essential, since we designed the room to seamlessly and fully support both Dolby Atmos and Auro formats, while building in sufficient routing, monitoring and speaker flexibility to accommodate future immersive formats. Streaming and VR deliverables are upon us, and we will need to stay nimble enough to quickly adapt to new specifications.”

Regarding the choice of a mixing controller for the new room, McCarthy says that he is committed to integrating more Avid S6 control surfaces into the facility’s workflow, as witnessed by their current use within several theatrical stages on the Sony lot. “Our talent is demanding it,” he states. “Mixing in the box lets our editors and mixers keep their options open until print mastering. It’s a more efficient process, both creatively and technically.”

The new Immersive Mix Stage will also be used as a “Flex Room” for Atmos pre-dubs when other stages on the lot are occupied. “We are also planning to complete a dedicated IMAX re-recording stage early next year,” reports McCarthy.

“As home theaters grow in sophistication, consumers are demanding immersive sound, ultra HD resolution and high-dynamic range,” says Rich Berger, SVP of digital strategy at Sony Pictures Home Entertainment. “This new stage allows our technicians to more closely replicate a home theater set-up.”

“The Sony mix stage adds to the growing footprint of Atmos-enabled post facilities and gives the Hollywood creative community the tools they need to deliver an immersive experience to consumers,” states Curt Behlmer, Dolby’s SVP of content solutions and industry relations.

Adds Auro Technologies CEO Wilfried Van Baelen, “Having major releases from Sony Pictures Home Entertainment incorporate Auro-3D helps bring this immersive experience to consumers and ensures they are able to enjoy films as the creator intended.”


Mel Lambert is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.


Industry vet Tim Claman to lead Avid Audio business

Tim Claman, VP, platform and solutions at Avid, is now responsible for all of the company’s audio solutions. This move is intended to further unite the development of Avid’s tools and workflow solutions for media creation, distribution and optimization.


Tim Claman

As part of the company’s platform strategy, Avid will further integrate its audio solutions — including Pro Tools, Sibelius and Avid Venue products — into the MediaCentral Platform, which is their open, integrated platform designed for media.

Claman comes from the artist side of the business, having worked as an editor, sound designer and mixer. He first joined Avid as a senior product manager for Pro Tools in 1998, designing many of the features that led to Pro Tools’ 2004 Academy Award. Over 14 years, he played a major role in shaping the company’s technology and product strategies as he rose through the ranks to become CTO. After serving as CTO at Snell Advanced Media (SAM) and Quantel from 2013 through 2015, Claman rejoined Avid in February 2016 as VP, platform and solutions.

“I’m looking forward to working closely with our customer community to shape our collective future in the audio post production, music creation and live sound industries,” says Claman. “Through tight integration with MediaCentral, we’re dedicated to innovating our audio solutions to meet our customers’ emerging needs.”


Qwire’s tool for managing scoring, music licensing upped to v.2.0

Qwire, a maker of cloud-based tools for managing scoring and licensing music to picture, has launched QwireMusic 2.0, which expands the collaboration, licensing and cue sheet capabilities of QwireMusic. The tool also features a new and intuitive user interface as well as support for the Windows OS. User feedback played a role in many of the new updates, including marker import of scenes from Avid for post, Excel export functions for all forms and reports and expanded file sharing options.

QwireMusic is a suite of integrated modules that consolidates and streamlines a wide range of tasks and interactions for pros involved with music and picture across all stages of post, as well as music clearance and administration. QwireMusic was created to help facilitate collaboration among picture editors and post producers, music supervisors and clearance, composers, music editors and production studios.

Here are some highlights of the new version:
Presentations — Presentations allow music cues and songs to be shared between music providers (supervisors and composers) and their clients (picture editors, studio music departments, directors and producers). With Presentations, selected music is synced to video, where viewers can independently adjust the balance between music and dialogue and add comments on each track. The time-saving efficiency of this tool centralizes the music sharing and review process, eliminating the need for the confusing array of QuickTimes, Web links, emails and unsecured FTP sites that sometimes accompanies post production.

Real-time licensing status — QwireMusic 2.0 allows music supervisors to easily audition music, generate request letters, and share potential songs with anyone who needs to review them. When the music supervisor receives a quote approval, the picture editor and music editor are notified, and the studio music budget is updated instantly and seamlessly. In addition, problem songs can be instantly flagged. As with the original version of QwireMusic, request letters can be generated and emailed in one step with project-specific letterhead and signatures.

Electronic Cue Sheets — QwireMusic’s “visual cue sheet” allows users to review all of the information in a cue sheet displayed alongside the final picture lock. The cue sheet is automatically populated from data already entered in QwireMusic by the composer, music supervisor and music editor. Any errors or missing information are flagged. When the review is complete, a single button submits the cue sheet electronically to ASCAP and BMI.

QwireMusic has been used by music supervisors, composers, picture editors and music editors on over 40 productions in 2016, including Animals (HBO); Casual (Hulu); Fargo (FX); Guilt (Freeform); Harley and the Davidsons (Discovery); How to Get Away With Murder (ABC); Pitch (Fox); Shameless (Showtime); Teen Wolf (MTV); This Is Us (NBC); and Z: The Beginning of Everything (Amazon).

“Having everyone in the know on every cue ever put in a show saves a huge amount of time,” says Patrick Ward, a post producer for the shows Parenthood, The West Wing and Pure Genius. “With QwireMusic I spend about a tenth of the time that I used to disseminating cue information to different places and entities.”


Utopic editor talks post for David Lynch tribute Psychogenic Fugue

Director Sandro Miller called on Utopic partner and editor Craig Lewandowski to collaborate on Psychogenic Fugue, a 20-minute film starring John Malkovich in which the actor plays seven characters in scenes recreated from some of filmmaker David Lynch’s films and TV shows. These characters include The Log Lady, Special Agent Dale Cooper and even Lynch himself as narrator of the film.

It is part of a charity project called Playing Lynch that will benefit the David Lynch Foundation, which seeks to introduce at-risk populations affected by trauma to transcendental meditation.

Chicago-based Utopic handled all the post, including editing, graphics, VFX and sound design. The film is part of a multimedia fundraiser hosted by Squarespace and executed by Austin-based agency Preacher. The seven vignettes were released one at a time on PlayingLynch.com.

To find out more about Utopic’s work on the film, we reached out to Lewandowski with some questions.

How early were you brought in on the film?
We were brought in before the project was even finalized. There were a couple other ideas that were kicked around before this one rose to the top.

We cut together a timing board using all the pieces we would later be recreating. We also pulled some hallway scenes from an old PlayStation commercial that Lynch directed, and we then scratched in all the “Lynch” lines for timing.

You were on set. Can you talk about why and what the benefits were for the director and you as an editor?
My job on the set was to have our reference movie at the ready and make sure we were matching timing, framing, lighting, etc. Sandro would often check the reference to make sure we were on track.

For scenes like the particles in Eraserhead, I had the DP shoot it at various frame rates and at the highest possible resolution, so we could shoot it vertical and use the particles falling. I also worked with the Steadicam operator to get a variety of shots in the hallway since I knew we’d need to create some jarring cutaways.

How big of a challenge was it dealing with all those different iconic characters, especially in a 20-minute film?
Sandro was adamant that we not try to “improve” on anything that David Lynch originally shot. Having had a lot of experience with homages, Sandro knew that we couldn’t take liberties. So the sets and action were designed to be as close as possible to the original characters.

In shots where it was only one character originally (The Lady in the Radiator, Special Agent Dale Cooper, Elephant Man) it was easier, but in scenes where there were originally more characters and now it was just Malkovich, we had to be a little more creative (Frank Booth, Mystery Man). Ultimately, with the recreations, my job was to line up as closely as possible with what was originally done, and then with the audio do my best to stay true to the original.

Can you talk about your process and how you went about matching the original scenes? Did you feel much pressure?
Sandro and I have worked together before, so I didn’t feel a lot of pressure from him, but I think I probably put a fair amount on myself because I knew how important this project was for so many people. And, as is the case with anything I edit, I don’t take it lightly that all of that effort that went into preproduction and production now sits on my shoulders.

Again, with the recreations it was actually fairly straightforward. It was the corridor shots where Malkovich plays Lynch and recites lines taken from various interviews that offered the biggest opportunity, and challenge. Because there was no visual reference for this, I could have some more fun with it. Most of the recreations are fairly slow and ominous, so I really wanted these corridor shots to offset the vignettes, kind of jar you out of the trance you were just put in, make you uneasy and perhaps squirm a bit, before being thrust into the next recreation.

What about the VFX? Can you talk about how they fit in and how you worked with them?
Many of the VFX were either in-camera or achieved through editorial, but there were spots — like where he’s in the corridor and snaps from the front to the back — where I needed something more than I could accomplish on my own, so I used our team at Utopic. However, when cutting the trailer, I relied heavily on our motion graphics team for support.

Psychogenic Fugue is such an odd title, so the writer/creative director, Stephen Sayadin, came up with the idea of using the dictionary definition. We took it a step further, beginning the piece with the phonetic spelling and then seamlessly transitioning the whole thing. They then tried different options for titling the characters. I knew I wanted to use the hallway shot, close-ups of the characters and ending on Lynch/Malkovich in the chair. They gave me several great options.

What was the film shot on, and what editing system did you use?
The film was shot on Red at 6K. I worked in Adobe Premiere, using the native Red files. All of our edit machines at Utopic are custom-built, high-performance PCs assembled by the editors themselves.

What about tools for the visual effects?
Our compositor/creative finisher used an Autodesk Flame, and our motion graphics team used Adobe After Effects.

Can you talk about the sound design?
I absolutely love working on sound design and music, so this was a dream come true for me. With both the film and the trailer, our composer Eric Alexandrakis provided me with long, odd, disturbing tracks, complete with stems. So I spent a lot of time just taking his music and sound effects and manipulating them. I then had our sound designer, Brian Lietner, jump in and go crazy.

Is there a scene that you are most proud of, or that was most challenging, or both?
I really like the snap into the flame/cigarette at the very beginning. I spent a long time just playing with that shot, compositing a bunch of shots together, manipulating them, adjusting timing, coming back in the next morning and changing it all up again. I guess that and Eraserhead. We had so many passes of particles and layered so many throughout the piece. That shot was originally done with him speaking to camera, but we had this pass of him just looking around, and realized it was way more powerful to have the lines delivered as though they were internal monologue. It also allowed us to play with the timings in a way that we wouldn’t be able to with a one-take shot.

As far as what I’m most proud of, it’s the trailer. We worked really hard to get the recreations and full film done. Then I was able to take some time away from it all and come back fresh. I knew that there was a ton of great footage to work with and we had to do something that wasn’t just a cutdown. It was important to me that the trailer feel every bit as demented as the film itself, if not more. I think we accomplished that.



Margarita Mix’s Pat Stoltz gives us the low-down on VR audio

By Randi Altman

Margarita Mix, one of Los Angeles’ long-standing audio and video post facilities, has taken on virtual reality with the addition of 360-degree sound rooms at its facilities in Santa Monica and Hollywood. The FotoKem company now offers sound design, mix and final print masters for VR video, as well as remixing of current spots for a full-surround environment.

Workflows for VR are new and developing every day — there is no real standard. So creatives are figuring it out as they go, but they can also learn from those who were early to the party, like Margarita Mix. They recently worked on a full-length VR concert film with the band Eagles of Death Metal and director/producer Art Haynie of Big Monkey Films. The band’s 2015 tour came to an abrupt end after playing the Bataclan concert hall during last year’s terrorist attacks in Paris. The film is expected to be available online and via apps shortly.

Eagles of Death Metal film.

We reached out to Margarita Mix’s senior technical engineer, Pat Stoltz, to talk about his experience and see how the studio is tackling this growing segment of the industry.

Why was now the right time to open VR-dedicated suites?
VR/AR is an exciting emerging market and online streaming is a perfect delivery format, but VR pre-production, production and post are in their infancy. We are bringing sound design, editorial and mixing expertise to the next level based on our long history of industry-recognized work, and elevating audio for VR from a gaming platform to one suitable for the cinematic and advertising realms, where VR content production is exploding.

What is the biggest difference between traditional audio post and audio post for VR?
Traditional cinematic audio has always played a very important part in support of the visuals. Sound effects, Foley, background ambience, dialog and music clarity to set the mood have all aided in pulling the viewer into the story. With VR and AR you are not just pulled into the story, you are in the story! Having the ability to accurately recreate the audio of the filmed environment through higher-order Ambisonics, or object-based mixing, is crucial. Audio not only plays an important part in support of the visuals, but is now a director’s tool to help draw the viewer’s gaze to what he or she wants the audience to experience. Audio for VR is a critical component of storytelling that needs to be considered early in the production process.

What question do you get asked the most by clients in terms of sound for VR?
Surprisingly none! VR/AR is so new that directors and producers are just figuring things out as they go. On a traditional production set, you have audio mixers and boom operators capturing audio on set. On a VR/AR set, there is no hiding. No boom operators or audio mixers can be visible capturing high-quality audio of the performance.

Some productions have relied on the onboard camera microphones. Unfortunately, in most cases, this turns out to be completely unusable. When the client gets all the way to the audio post, there is a realization that hidden wireless mics on all the actors would have yielded a better result. In VR especially, we recommend starting the sound consultation in pre-production, so that we can offer advice and guide decisions for the best quality product.

What question should clients ask before embarking on VR?
They should ask what they want the viewer to get out of the experience. In VR, no two people are going to walk away with the same viewing experience. We recommend staying focused on the major points that they would like the viewer to walk away with. They should then expand that to answer: what do I have to do in VR to drive that point home, not only mentally, but by drawing the viewer’s gaze for visual support? Based on the genre of the project, considerations should be made to “physically” pull the audience in the direction that tells the story best. It could be through visual stepping stones, narration or audio pre-cues, etc.

What tools are you using on VR projects?
Because this is a nascent field, new tools are becoming available by the day, and we assess and use the best option for achieving the highest quality. To properly address this question, we ask: Where is your project going to be viewed? If the content is going to be distributed via a general Web streaming site, then it will need to be delivered in that audio file format.

There are numerous companies writing plug-ins that are quite good at delivering these formats. If you will be delivering to a site that supports Dolby VR (an object-based proprietary format), such as Jaunt, then you will need to generate the proper audio file for that platform. Facebook (higher-order Ambisonics) requires yet another format. We are currently working in all of these formats, as well as working closely with leaders in VR sound to create and test new workflows and guide developments in this new frontier.

What’s the one thing you think everyone should know about working and viewing VR?
As we go through life, we each have our own experiences or what we choose to experience. Our frame of reference directs our focus on things that are most interesting to us. Putting on VR goggles, the individual becomes the director. The wonderful thing about VR is now you can take that individual anywhere they want to go… both in this world and out of it. Directors and producers should think about how much can be packed into a story to draw people into the endless ways they perceive their world.


The sound of fighting in Jack Reacher: Never Go Back

By Jennifer Walden

Tom Cruise is one tough dude, and not just on the big screen. Cruise, who seems to be aging very gracefully, famously likes to do his own stunts, much to the dismay of many film studio execs.

Cruise’s most recent tough guy turn is in the sequel to 2014’s Jack Reacher. Jack Reacher: Never Go Back, which is in theaters now, is based on the protagonist in author Lee Child’s series of novels. Reacher, as viewers quickly find out, is a hands-on type of guy — he’s quite fond of hand-to-hand combat where he can throw a well-directed elbow or headbutt a bad guy square in the face.

Supervising sound editor Mark P. Stoeckinger, based at Formosa Group’s Santa Monica location, has worked on numerous Cruise films, including both Jack Reacher films, Mission: Impossible II and III and The Last Samurai, and he helped out on Edge of Tomorrow. Stoeckinger has a ton of respect for Cruise: “He’s my idol. Being about the same age, I’d love to be as active and in shape as he is. He’s an amazing guy because he is such a hard worker.”

The audio post crew on ‘Jack Reacher: Never Go Back.’ Mark Stoeckinger is on the right.

Because he does his own stunts, and thanks to the physicality of Jack Reacher’s fighting style, sometimes Cruise gets a bruise or two. “I know he goes through a fair amount of pain, because he’s so extreme,” says Stoeckinger, who strives to make the sound of Reacher’s punches feel as painful as they are intended to be. If Reacher punches through a car window to hit a guy in the face, Stoeckinger wants that sound to have power. “Tom wants to communicate the intensity of the impacts to the audience, so they can appreciate it. That’s why it was performed that way in the first place.”

To give the fights that visceral, intense Reacher feel, Stoeckinger takes a multi-frequency approach. He layers high-frequency sounds, like swishes and slaps that signify speed, with low-end impacts that add weight. The layers are always an amalgamation of sound effects and Foley.

Stoeckinger prefers pulling hit impacts from sound libraries, or creating impacts specifically with “oomph” in mind. Then he uses Foley to flesh out the fight, filling in the details to connect the separate sound effects elements in a way that makes the fights feel organic.

The Sounds of Fighting
Under Stoeckinger’s supervision, a fight scene’s sound design typically begins with sound effects. This allows his sound team to start immediately, working with what they have at hand. On Jack Reacher: Never Go Back this task was handed over to sound effects editor Luke Gibleon at Formosa Group. Once the sound effects were in place, Stoeckinger booked the One Step Up Foley stage with Foley artist Dan O’Connell. “Having the effects in place gives us a very clear idea of what we want to cover with Foley,” he says. “Between Luke and Dan, the fight soundscapes for the film came to life.”

The culminating fight sequence, where Reacher inevitably prevails over the bad guy, was Stoeckinger’s favorite to design. “The arc of the film built up to this fight scene, so we got to use some bigger sounds. Although it still needed to seem as real as a Hollywood fight scene can be.”

The sound there features low-frequency embellishments that help the audience feel the fight and not just hear it. The fight happens during a rowdy street festival in New Orleans in honor of the Day of the Dead. Crowds cavort with noisemakers, bead necklaces rain down, music plays and fireworks explode. “Story-wise, the fireworks were meant to mask any gunshots that happened in the scene,” he says. “So it was about melding those two worlds — the fight and the atmosphere of the crowds — to help mask what we were doing. That was fun and challenging.”

The sounds of the street festival scene were all created in post since there was music playing during filming that wasn’t meant to stay on the track. The location sound did provide a sonic map of the actual environment, which Stoeckinger considered when rebuilding the scene. He also relied on field recordings captured by Larry Blake, who lives in New Orleans. “Then we searched for other sounds that were similar because we wanted it to sound fun and festive but not draw the ear too much since it’s really just the background.”

Stoeckinger sweetened the crowd sounds with recordings they captured of various noisemakers, tambourines, bead necklaces and group ADR to add mid-field and near-field detail when desired. “We tried to recreate the scene, but also gave it a Hollywood touch by adding more specifics and details to bring it more to life in various shots, and bring the audience closer to it or further away from it.”

Stoeckinger also handled design on the film’s other backgrounds. His objective was to keep the locations feeling very real, so he used a combination of practical effects they recorded and field recordings captured by effects editor Luke Gibleon, in addition to library effects. “Luke has a friend with access to an airport, so he did some field recordings of the baggage area and various escalators with people moving around. He also captured recordings of downtown LA at night. All of those field recordings were important in giving the film a natural sound.”

There were numerous locations in this film. One was when Reacher meets up with a teenage girl he’s protecting from the bad guys. She lives in a sketchy part of town, so to reinforce the sketchiness of the neighborhood, Stoeckinger added nearby train tracks to the ambience and created street walla with an edgy tone. “It’s nothing that you see outside of course, but sound-wise, in the ambient tracks, we can paint that picture,” he explains.
In another location, Stoeckinger wanted to sell the idea that they were on a dock, so he added in a boat horn. “They liked the boat horn sound so much that they even put a ship in the background,” he says. “So we had little sounds like that to help ground you in the location.”

Tools and the Mix
At Formosa, Stoeckinger has his team work together in one big Avid Pro Tools 12 session that includes all of their sounds: the Foley, the backgrounds, sound effects, loop group and design elements. “We shared it,” he says. “We had a ‘check out’ system, like, ‘I’m going to check out reel three and work on this sequence.’ I did some pre-mixing, where I went through a scene or reel and decided what was working or what sections needed a bit more. I made a mark on a timeline and then handed that off to the appropriate person. Then they opened it up and did some work. This master session circulated between two or three of us that way.” Stoeckinger, Gibleon and sound designer Alan Rankin, who handled guns and miscellaneous fight sounds, worked on this section of the film.

All the sound effects, backgrounds, and Foley were mixed on a Pro Tools ICON, and kept virtual from editorial to the final mix. “That was helpful because all the little pieces that make up a sound moment, we were able to adjust them as necessary on the stage,” explains Stoeckinger.

Premixing and the final mixes were handled at Twentieth Century Fox Studios on the Howard Hawks Stage by re-recording mixers James Bolt (effects) and Andy Nelson (dialogue/music). Their console arrangement was a hybrid, with the effects being mixed on an Avid ICON, and the dialogue and music mixed on an AMS Neve DFC console.

Stoeckinger feels that Nelson did an excellent job of managing the dialogue, particularly for moments where noisy locations may have intruded upon subtle line deliveries. “In emotional scenes, if you have a bunch of noise that happens to be part of the dialogue track, that detracts from the scene. You have to get all of the noise under control from a technical standpoint.” On the creative side, Stoeckinger appreciated Nelson’s handling of Henry Jackman’s score.

On effects, Stoeckinger feels Bolt did an amazing job in working the backgrounds into the Dolby Atmos surround field, like placing PA announcements in the overheads, pulling birds, cars or airplanes into the surrounds. While Stoeckinger notes this is not an overtly Atmos film, “it helped to make the film more spatial, helped with the ambiences and they did a little bit of work with the music too. But, they didn’t go crazy in Atmos.”


SMPTE: The convergence of toolsets for television and cinema

By Mel Lambert

While the annual SMPTE Technical Conferences normally put a strong focus on things visual, there is no denying that these gatherings offer a number of interesting sessions for sound pros from the production and post communities. According to Aimée Ricca, who oversees marketing and communications for SMPTE, pre-registration included “nearly 2,500 registered attendees hailing from all over the world.” This year’s conference, held at the Loews Hollywood Hotel and Ray Dolby Ballroom from October 24-27, also attracted more than 108 exhibitors in two exhibit halls.

Setting the stage for the 2016 celebration of SMPTE’s Centenary, opening keynotes addressed the dramatic changes that have occurred within the motion picture and TV industries during the past 100 years, particularly with the advent of multichannel immersive sound. The two co-speakers — SMPTE president Robert Seidel and filmmaker/innovator Doug Trumbull — chronicled the advances in audio playback since, respectively, the advent of TV broadcasting after WWII and the introduction of film soundtracks in 1927 with The Jazz Singer.

Robert Seidel

ATSC 3.0
Currently VP of CBS Engineering and Advanced Technology, with responsibility for TV technologies at CBS and the CW networks, Seidel headed up the team that assisted WRAL-HD, the CBS affiliate in Raleigh, North Carolina, to become the first TV station to transmit HDTV in July 1996. The transition included adding the ability to carry 5.1-channel sound using Advanced Television Systems Committee (ATSC) standards and Dolby AC-3 encoding.

The 45th Grammy Awards Ceremony broadcast by CBS Television in February 2004 marked the first scheduled HD broadcast with a 5.1 soundtrack. The emergent ATSC 3.0 standard reportedly will provide increased bandwidth efficiency and compression performance. The drawback is the lack of backwards compatibility with current technologies, resulting in a need for new set-top boxes and TV receivers.

As Seidel explained, the upside for ATSC 3.0 will be immersive soundtracks, using either Dolby AC-4 or MPEG-H coding, together with audio objects that can carry alternate dialog and commentary tracks, plus other consumer features to be refined with companion 4K UHD, high dynamic range and high frame rate images. In June, WRAL-HD launched an experimental ATSC 3.0 channel carrying the station’s programming in 1080p with 4K segments, while in mid-summer South Korea adopted ATSC 3.0 and plans to begin broadcasts with immersive audio and object-based capabilities next February in anticipation of hosting the 2018 Winter Olympics. The 2016 World Series games between the Cleveland Indians and the Chicago Cubs marked the first live ATSC 3.0 broadcast of a major sporting event on experimental station Channel 31, with an immersive-audio simulcast on the Tribune Media-owned Fox affiliate WJW-TV.

Immersive audio will enable enhanced spatial resolution for 3D sound-source localization, providing an increased sense of envelopment throughout the home listening environment, while audio “personalization” will include level control for dialog elements, alternate audio tracks, assistive services, other-language dialog and special commentaries. ATSC 3.0 will also support loudness normalization and contouring of dynamic range.
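To make the object-audio idea concrete, here is a minimal sketch of how that kind of personalization can work: the broadcaster delivers stems as objects with default gains and adjustment limits, and the receiver applies the viewer’s preferences at render time. This is illustrative Python only; the field names are hypothetical and are not drawn from the AC-4 or MPEG-H specifications.

```python
# Illustrative sketch of object-based audio personalization; field names
# are hypothetical, not taken from the AC-4 or MPEG-H specifications.
from dataclasses import dataclass

@dataclass
class AudioObject:
    name: str               # e.g., "dialog", "crowd", "commentary-es"
    gain_db: float          # broadcaster-authored default level
    user_adjustable: bool
    min_db: float = -12.0   # adjustment range the broadcaster allows
    max_db: float = 6.0

def personalize(objects, preferences):
    """Apply viewer gain offsets, clamped to broadcaster-set limits."""
    for obj in objects:
        if obj.user_adjustable:
            offset = preferences.get(obj.name, 0.0)
            obj.gain_db = max(obj.min_db, min(obj.max_db, obj.gain_db + offset))
    return objects

# A hard-of-hearing viewer boosts dialog and ducks crowd noise.
mix = [
    AudioObject("dialog", 0.0, True),
    AudioObject("crowd", -3.0, True),
    AudioObject("commentary-es", -6.0, True),
]
personalize(mix, {"dialog": +4.0, "crowd": -10.0})
print([(o.name, o.gain_db) for o in mix])
```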

Doug Trumbull

Higher Frame Rates
With a wide range of experience in filmmaking and entertainment technologies, including visual effects supervision on 2001: A Space Odyssey, Close Encounters of the Third Kind, Star Trek: The Motion Picture and Blade Runner, Trumbull also directed Silent Running and Brainstorm, as well as special venue offerings. He won an Academy Award for his Showscan process for high-speed 70mm cinematography, helped develop IMAX technologies and now runs Trumbull Studios, which is innovating a new MAGI process to offer 4K 3D at 120fps. High production costs and a lack of playback environments meant that Trumbull’s Showscan format never really got off the ground, which was “a crushing disappointment,” he conceded to the SMPTE audience.

Meanwhile, responding to falling box office receipts during the ‘50s and ‘60s, Hollywood added audience draws such as large-screen presentations and surround sound, even as the movie industry began to rely on income from the TV community for broadcast rights to popular cinema releases.

As Seidel added, “The convergence of toolsets for both television and cinema — including 2K, 4K and eventually 8K — will lead to reduced costs, and help create a global market around the world [with] a significant income stream.” He also said that “cord cutting” — replacing cable subscription services with Amazon.com, Hulu, iTunes, Netflix and the like — is bringing people back to over-the-air broadcasting.

Trumbull countered that TV will continue at 60fps “with a live texture that we like,” whereas film will retain its 24fps frame rate “that we have loved for years and which has a ‘movie texture.’ Higher frame rates for cinema, such as the 48fps Peter Jackson used for the Hobbit films, have too much of a TV look. Showscan at 120fps and a 360-degree shutter avoided that TV look, which is considered objectionable.” (Early reviews of director Ang Lee’s upcoming 3D film Billy Lynn’s Long Halftime Walk, which was shot in 4K at 120fps, have been critical of its video look and feel.)
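The arithmetic behind these “texture” arguments is straightforward: per-frame exposure equals the shutter angle divided by 360, divided by the frame rate, and a 360-degree shutter exposes for the entire frame interval, leaving no temporal gap between frames. A quick sketch with standard values (not figures from the talk):

```python
# Per-frame exposure time from frame rate and shutter angle:
#   exposure = (shutter_angle / 360) / frame_rate
# A 360-degree shutter exposes for the whole frame interval, so motion
# blur joins frame to frame instead of strobing between exposures.

def exposure_seconds(frame_rate, shutter_angle=180.0):
    return (shutter_angle / 360.0) / frame_rate

for label, fps, angle in [
    ("24fps film, 180-degree shutter", 24, 180),
    ("60fps video-style, 180-degree shutter", 60, 180),
    ("Showscan-style 120fps, 360-degree shutter", 120, 360),
]:
    print(f"{label}: 1/{round(1 / exposure_seconds(fps, angle))}s per frame")
```

Note that 120fps at 360 degrees yields the same 1/120s per-frame exposure as 60fps at 180 degrees, but the camera captures continuously, with no unexposed gaps.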

Next-Gen Audio for Film and TV
During a series of “Advances in Audio Reproduction” conference sessions, chaired by Chris Witham, director of digital cinema technology at Walt Disney Studios, three presentations covered key design criteria for next-generation audio for TV and film. In his presentation, “Building the World’s Most Complex TV Network — A Test Bed for Broadcasting Immersive & Interactive Audio,” Robert Bleidt, GM of Fraunhofer USA’s audio and multimedia division, provided an overview of a complete end-to-end broadcast plant built to test operational features developed by Fraunhofer, Technicolor and Qualcomm. These tests were used to evaluate an immersive/object-based audio system based on MPEG-H for use in Korea during planned ATSC 3.0 broadcasting.

“At the NAB Convention we demonstrated The MPEG Network,” Bleidt stated. “It is perhaps the most complex combination of broadcast audio content ever made in a single plant, involving 13 different formats.” These include mono, stereo, 5.1-channel and other sources. “The network was designed to handle immersive audio in both channel- and HOA-based formats, using audio objects for interactivity. Live mixes from a simulated sports remote were connected to a network operations center, with distribution to affiliates, and then sent to a consumer living room, all using the MPEG-H audio system.”

Bleidt presented an overview of system and equipment design, together with details of a critical AMAU (audio monitoring and authoring unit) that will be used to mix immersive audio signals using existing broadcast consoles limited to 5.1-channel assignment and panning.

Dr. Jan Skoglund, who leads a team at Google developing audio signal processing solutions, addressed the subject of “Open-source Spatial Audio Compression for VR Content,” including the importance of providing realistic immersive audio experiences to accompany VR presentations and 360-degree 3D video.

“Ambisonics have reemerged as an important technique in providing immersive audio experiences,” Skoglund stated. “As an alternative to channel-based 3D sound, Ambisonics represent full-sphere sound, independent of loudspeaker location.” His presentation considered the ways in which open-source compression technologies can transport audio for various forms of next-generation immersive media. Skoglund compared the efficacy of several open-source codecs for first-order Ambisonics, as well as the progress being made toward higher-order Ambisonics (HOA) for VR content delivered via the internet, including the enhanced experience HOA provides.
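As a concrete illustration of “full-sphere sound, independent of loudspeaker location,” below is a minimal first-order Ambisonic (B-format) encoder in Python, using the ambiX convention (ACN channel order, SN3D normalization). The four encoded channels describe the sound field itself and can be decoded later to any speaker layout or to binaural; the codec comparisons from the talk are not reproduced here.

```python
# Minimal first-order Ambisonic (B-format) encoder, ambiX convention:
# ACN channel order (W, Y, Z, X) with SN3D normalization.
import numpy as np

def encode_foa(mono, azimuth_deg, elevation_deg):
    """Pan a mono signal to a direction on the full sphere.

    Returns a (4, N) array of B-format channels; the source direction
    lives in the channel gains, not in any particular loudspeaker.
    """
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    gains = np.array([
        1.0,                      # W: omnidirectional pressure (ACN 0)
        np.sin(az) * np.cos(el),  # Y: left-right               (ACN 1)
        np.sin(el),               # Z: up-down                  (ACN 2)
        np.cos(az) * np.cos(el),  # X: front-back               (ACN 3)
    ])
    return gains[:, None] * np.asarray(mono)[None, :]

# Example: place a 1kHz tone 45 degrees to the left and 30 degrees up.
sr = 48000
t = np.arange(sr) / sr
bformat = encode_foa(np.sin(2 * np.pi * 1000 * t), azimuth_deg=45, elevation_deg=30)
print(bformat.shape)  # (4, 48000)
```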

Finally, Paul Peace, who oversees loudspeaker development for cinema, retail and commercial applications at JBL Professional — and designed the Model 9350, 9300 and 9310 surround units — discussed “Loudspeaker Requirements in Object-Based Cinema,” including a valuable in-depth analysis of the acoustic delivery requirements in a typical movie theater that accommodates object-based formats.

Peace is proposing the use of a new metric for surround loudspeaker placement and selection when the layout relies on venue-specific immersive rendering engines for Dolby Atmos and Barco Auro-3D soundtracks, with object-based overhead and side-wall channels. “The metric is based on three foundational elements as mapped in a theater: frequency response, directionality and timing,” he explained. “Current set-up techniques are quite poor for a majority of seats in actual theaters.”

Peace also discussed the new loudspeaker requirements and layout criteria needed to ensure more consistent sound coverage throughout such venues, so they can more accurately replay material re-recorded on typical dub stages, which are often smaller and proportioned differently in width, length and height than most multiplex auditoriums.
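The article names the three elements of Peace’s metric but not its formula, so the following sketch only illustrates the timing element in the most obvious way: arrival-time spread from a row of side-wall surrounds to individual seats, at the speed of sound. All coordinates here are invented for the example.

```python
# Illustrative only: computes arrival-time spread (ms) from a set of
# surround loudspeakers to a seat; Peace's actual metric also weighs
# frequency response and directionality, which are not modeled here.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C

def arrival_spread_ms(seat, speakers):
    """Max minus min arrival time, in milliseconds, at one seat."""
    d = np.linalg.norm(np.asarray(speakers, float) - np.asarray(seat, float), axis=1)
    t_ms = d / SPEED_OF_SOUND * 1000.0
    return t_ms.max() - t_ms.min()

# Hypothetical side-wall surrounds along a 20m wall, mounted 4m high.
speakers = [(0.0, y, 4.0) for y in (2.0, 5.0, 8.0, 11.0, 14.0, 17.0)]
for seat in [(6.0, 5.0, 1.2), (6.0, 15.0, 1.2), (2.0, 10.0, 1.2)]:
    print(seat, f"{arrival_spread_ms(seat, speakers):.1f} ms spread")
```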


Mel Lambert, who also gets photo credit on pictures from the show, is principal of Content Creators, an LA-based copywriting and editorial service. He can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

Behind the Title: Sound mixer/sound designer Rob DiFondi

Name: Rob DiFondi

Company: New York City’s Sound Lounge

Can you describe your company?
Sound Lounge is an audio post company that provides creative services for TV and radio commercials, feature films, television series, digital campaigns, gaming and other emerging media. Artist-owned and operated, we’re made up of an incredibly diverse, talented and caring group of people who all love the advertising and film worlds.

We recently celebrated Sound Lounge’s 18th birthday. I’m proud to say I’ve been a part of the SL family for over 13 years now, and I couldn’t ask for a better group of friends to hang out with every day.

What’s your job title?
Senior Mixer/Sound Designer

What does that entail?
I have actors in my booth all day recording VO (voiceover) for different commercials. My clients (usually brands, ad agencies, production companies or editorial houses) hang in my room, and together we get the best possible read from the actor while they’re in the booth. I then craft the sound design for the spot, either pulling sound effects from my library or recreating the necessary sounds myself (a.k.a. “Foley”). Once that’s set, I’ll take the lines the actor recorded, the sound effects I created, and any music, and mix them all together so the spot sounds perfect (and is legal for TV broadcast)!

Being a mixer in the advertising post world isn’t easy. I also have to be able to provide a solid lunch recommendation — I always need to make sure I know where my clients can get the best sushi in the Flatiron district!
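That “legal for TV broadcast” aside refers to US loudness compliance: commercials are delivered to the CALM Act / ATSC A/85 target of -24 LKFS. A minimal sketch of such a check, assuming the third-party pyloudnorm and soundfile Python packages (the file name and the ±2dB tolerance are illustrative):

```python
# Hedged sketch of a broadcast loudness check against the CALM Act /
# ATSC A/85 target of -24 LKFS; tolerance and file name are illustrative.
import soundfile as sf
import pyloudnorm as pyln

def check_broadcast_loudness(path, target_lkfs=-24.0, tolerance_db=2.0):
    data, rate = sf.read(path)                  # samples (float), sample rate
    meter = pyln.Meter(rate)                    # ITU-R BS.1770 K-weighted meter
    loudness = meter.integrated_loudness(data)  # integrated loudness, LKFS
    ok = abs(loudness - target_lkfs) <= tolerance_db
    print(f"{path}: {loudness:.1f} LKFS -> {'PASS' if ok else 'FAIL'}")
    return ok

# check_broadcast_loudness("final_mix.wav")  # hypothetical delivery file
```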

What would surprise people the most about what falls under that title?
That most of us are musicians who wanted to be rock stars but thought better of it. Maybe that isn’t so surprising though.

Sound Lounge

What’s your favorite part of the job?
The people, and the social part of the advertising industry. This business is filled with so many kind, funny and talented people, and it’s so nice to have them be a part of your life. And how can you beat partying every year at MoMA for the AICP Gala?

What’s your least favorite?
Probably the lack of travel. I love our office, but it would be fun to do my job in different cities once in a while.

What is your favorite time of the day?
Walking in my front door and seeing my wife and kids.

If you didn’t have this job, what would you be doing instead?
Something that involves beaches and nice weather.

How early on did you know this would be your path?
I totally fell into this profession. I went to school to become a music engineer/producer. I had no idea there was a whole industry for mixing TV spots. Once I got into it though, I knew immediately that I loved it.

Can you name some recent projects you have worked on?
I worked on some really nice pieces for Maybelline, Google, Lincoln and TD Ameritrade.

What is the project that you are most proud of?
Miracle Stain, a Super Bowl commercial that I mixed for Tide a few years back. I finished the mix at 10pm on Thursday and got a call at 2am that there had been some changes, so I had to come back to work in the middle of the night. I tweaked the mix until the sun came up and had it ready to ship by 9am. It was one of those epic projects that had all the classic hallmarks of a Super Bowl spot.

Name three pieces of technology you can’t live without.
My iPhone, my DSLR camera and iZotope RX.

What social media channels do you follow?
I’m a big Instagram guy. I love seeing people’s lives told through photos. Facebook is so 2015.

Do you listen to music while you work? Care to share your favorite music to work to?
Since I work in audio I can’t listen to music while I work, but when I’m not working I listen to a lot of modern country music, Dave Matthews Band (not afraid to say it!), prog metal and pretty much everything in between.

This is a high stress job with deadlines and client expectations. What do you do to de-stress from it all?
I just leased a Jeep Wrangler Unlimited. There’s nothing like putting the top down and taking a drive to the beach!