Sight Sound & Story 2017: TV editing and Dylan Tichenor, ACE

By Amy Leland

This year, I was asked to live tweet from Sight Sound & Story on behalf of Blue Collar Post Collective. As part of their mission to make post events as accessible to members of our industry as possible, they often attend events like this one and provide live blogging, tweeting and recaps of the events for their members via their Facebook group. What follows are the recaps that I posted to that group after the event and massaged a bit for the sake of postPerspective.

TV is the New Black
Panelists included Kabir Akhtar, ACE, Suzy Elmiger, ACE, Julius Ramsay and moderator Michael Berenbaum, ACE.

While I haven’t made it a professional priority to break into scripted TV editing because my focus is on being a filmmaker, with editing as “just” a day job, I still love this panel, and every year it makes me reconsider that goal. This year’s was especially lively because two of the panelists, Kabir Akhtar and Julius Ramsay, have known each other from very early on in their careers and each had hilarious war stories to share.

Kabir Akhtar

The panelists were asked how they got into scripted TV editing, and if they had any advice for audience members who might want to do the same. One thing they all agreed on is that a good editor is a good editor. They said having experience in the exact same genre is less important than understanding how to interpret the style and tone of a show correctly. They also all agreed that people who hire editors often don’t get that. There is a real danger of being pigeonholed in our industry. If you start out editing a lot of reality TV and want to cross over to scripted, you’ll almost definitely have to take a steep pay cut and start lower down on the ladder. There is still the problem in the industry of people assuming that if you’ve cut comedy but not drama, you can’t cut drama. The same can be said for film versus TV, half-hour versus hour, etc. They all emphasized the importance of figuring out what kind of work you want to do and pursuing that. Don’t just rush headlong into all kinds of work. Find as much focus as you can. Akhtar said, “You’re better off at the bottom of a ladder you want to climb than high up on one that doesn’t interest you.”

They also all said to seek out the people doing the kind of work you want to do, because those are the people who can help you. Ramsay said the most important networking tool is a membership to IMDb Pro. This gives you contact information for people you might want to find. He said the first time someone contacts him unsolicited he will probably ignore it, but if they contact him more than once, and it’s obviously a genuine attempt at personal contact, he will most likely agree to meet with that person.

Next they discussed the skills needed to be a successful editor. They agreed that while being a fast editor with strong technical knowledge of the tools isn’t by itself enough to be a successful editor, it is an important part of being one. If you have people in the room with you, the faster and more dexterously you can do what they are asking, the better the process will be for everyone.

There was agreement that, for the most part, they don’t look at things like script notes and circle takes. As an editor, you aren’t hired just for your technical skills, but for your point of view. Use it. Don’t let someone decide for you what the good takes are. You have to look at all of the footage and decide for yourself. They said what can feel like a great take on the set may not be a great take in the context of the cut. However, it is important to understand why something was a circle take for the director. That may be an important aspect of the scene that needs to be included, even if it isn’t on that take.

The panel also spoke about the importance of sound. They’ve all met editors who aren’t as skilled at hearing and creating good sound. That can be the difference between a passable editor and a great editor. They said that a great assistant editor needs to be able to do at least some decent sound mixing, since most producers expect even first cuts to sound good, and that task is often given to the assistant. They all keep collections of music and sound to use as scratch tracks as they cut. This way they don’t have to wait until the sound mix to start hearing how it will all come together.

The entire TV is the New Black panel.

All agreed that the best assistant editors are those who are hungry and want to work. Having a strong artistic sense and drive is more important to them than specific credits or experience. They want someone they know will help them make the show the best it can be. In return, they have all given assistants opportunities that have led to them rising to editor positions.

When talking about changes and notes, they discussed the need for flexibility to show other options, even if you really believe in the choices you’ve made. But they all agreed the best feeling is when you’ve been asked to show other things and, in the long run, the producer or director comes back to what you had in the first place. They said when people give notes, they are pointing out the problems. Be very wary when they start telling you the solutions or how to fix the problems.

Check out the entire panel here. The TV panel begins at about 20:00.

Inside the Cutting Room
This panel focused on editor Dylan Tichenor, ACE, and was moderated by Bobbie O’Steen.

Of all of the Sight Sound & Story panels, this is by far the hardest to summarize effectively. Bobbie O’Steen is a film historian. Her preparation for interviews like this is incredibly deep and detailed. Her subject is always someone with an impressive list of credits. Dylan Tichenor has been Paul Thomas Anderson’s editor for most of his films. He has also edited such films as Brokeback Mountain, The Royal Tenenbaums and Zero Dark Thirty.

With that in mind, I will share some of the observations I wrote down while listening raptly to what was said. From the first moment, we got a great story. Tichenor’s grandfather worked as a film projector salesman. He described the first time he became aware of the concept of editing. When he was nine years old, he unspooled a film reel from an Orson Welles movie that his grandfather had left at the house and looked carefully at all of the frames. He noticed that between a frame of a wide shot and a frame of a close-up, there was a black line. And that was his first understanding of film having “cuts.” He also described an early love for classic films because of those reels his grandfather kept around, especially Murnau’s Nosferatu.

Much of what was discussed was his longtime collaboration with P.T. Anderson. In discussing Anderson’s influences, they described the blend of Martin Scorsese’s long tracking shots with Robert Altman’s complex tapestry of ensemble casts. Through his editing work on those films, Tichenor saw how Anderson wove those two things together. The greatest challenges were combining those long takes with coverage, and answering the question, “Whose story are we telling?” To illustrate this, he showed the party scene in Boogie Nights in which Scotty first meets Dirk Diggler.

Dylan Tichenor and Bobbie O’Steen.

For those complex tapestries of characters, there are frequent transitions from one person’s storyline to another’s. Tichenor said it’s important to transition with the heart and not just the head. You have to find the emotional resonance that connects those storylines.

He echoed the sentiment from the TV editing panel covered above about not simply using the director’s circle takes. He agreed with the importance of understanding what they were and what the director saw in them on set, but in the cut, what matters is including that essential element, not necessarily using that specific take.

O’Steen brought up the frequent criticism of Magnolia — that the film is too long. While Tichenor agreed that it was a valid criticism, he stood by the film as one that took chances and had something to say. More importantly, it asked something of the audience. When a movie doesn’t take chances or ask the audience to work a little, it’s like eating cotton candy. When the audience exerts effort in watching the story, that effort leads to catharsis.

In discussing The Royal Tenenbaums, they talked about the challenge of overlapping dialogue, illustrated by a scene between Gene Hackman and Danny Glover. Of course, what the director and actors want is to have freedom on the set, and let the overlapping dialogue flow. As an editor this can be a nightmare. In discussions with actors and directors, it can help to remind them that sometimes that overlapping dialogue can create situations where a take can’t be used. They can be robbed of a great performance by that overlap.

O’Steen described Wes Anderson as a mathematical editor. Tichenor agreed, and showed a clip with a montage of flashbacks from Tenenbaums. He said that Wes Anderson insisted that each shot in the montage be exactly the same duration. In editing, what Tichenor found was that those moments of breaking away from the mathematical formula, of working slightly against the beat of the music, were what gave it emotional life.

Tichenor described Brokeback Mountain as the best screenplay adaptation of a short story he had ever seen. He talked about a point during the editing when they all felt it just wasn’t working, specifically Heath Ledger’s character wasn’t resonating emotionally the way he should be. Eventually they realized the problem was that Ledger’s natural warmth and affectionate nature were coming through too much in his performance. He had moments of touching someone on the arm or the shoulder, or doing something else gentle and demonstrative.

He went back through and cut out every one of those moments he could find, which he admitted meant in some cases leaving “bad” cuts in the film. To be fair, in some cases that difference was maybe half a second of action and the cuts were not as bad as he feared, but the result was that the character suddenly felt cold and isolated in a way that was necessary. Tichenor also referred back to Nosferatu and how the editing of that film had inspired him. He pointed to the scene in which Jack comes to visit Ennis; he mimicked an editing trick from that film to create a moment of rush and surprise as Ennis ran down the stairs to meet him.

Dylan Tichenor

One thing he pointed out was that it can feel more vulnerable to cut a scene with a slower pace than an action scene. In an action scene, the cuts become almost a mosaic, blending into one another in a way that helps to make each cut a bit more anonymous. In a slower scene, each cut stands out more and draws more attention.

When P.T. Anderson and Tichenor came together again to collaborate on There Will Be Blood, they approached it very differently from Boogie Nights and Magnolia. Instead of the parallel narratives of that ensemble tapestry, this was a much more focused and often operatic story. They decided to approach it, in both shooting and editing, like a horror film. This meant framing shots in an almost gothic way, which allowed for building tension without frequent cutting. He showed an example of this in a clip of Daniel and his adopted son H.W. having Sunday dinner with the family to discuss buying their land.

He also talked about the need to humanize Daniel and make him more relatable and sympathetic. The best path to this was through the character of H.W. Showing how Daniel cared for the boy illuminated a different side of this otherwise potentially brutal character. He asked Anderson for additional shots of H.W. to incorporate into scenes. This even led to additional scenes between the two being added to the story.

After talking about this film, though there were still so many more that could be discussed, the panel sadly ran out of time. One thing that was abundantly clear was that there is a reason Tichenor has worked with some of the finest filmmakers. His passion for and knowledge of film flowed through every moment of this wonderful chat. He is the editor of many films that should be considered modern classics. Undoubtedly, between the depth of preparation O’Steen is known for and the deep well of material his career provided, they could have gone on much longer without running dry of inspirational and entertaining stories to share.

Check out the entire panel here. The interview begins at about 02:17:30.

———————————
Amy Leland is a film director and editor. Her short film, Echoes, is now available on Amazon Video. Her feature doc, Ambassador of Rhythm, is in post. She also has a feature screenplay in development and a new doc in pre-production. She is also an editor for CBS Sports Network. Find out more about Amy on her site http://amyleland.net, and follow her on Twitter at @amy-leland and Instagram at @la_directora.

What was new at GTC 2017

By Mike McCarthy

I, once again, had the opportunity to attend Nvidia’s GPU Technology Conference (GTC) in San Jose last week. The event has become much more focused on AI supercomputing and deep learning as those industries mature, but there was also a concentration on VR for those of us from the visual world.

The big news was that Nvidia released the details of its next-generation GPU architecture, code-named Volta. The flagship chip will be the Tesla V100 with 5,120 CUDA cores and 15 teraflops of computing power. It is a huge 815mm² chip, created with a 12nm manufacturing process for better energy efficiency. Most of its unique architectural improvements are focused on AI and deep learning, with specialized execution units for Tensor calculations, which are foundational to those processes.

Tesla V100

Similar to last year’s GP100, the new Volta chip will initially be available in Nvidia’s SXM2 form factor for dedicated GPU servers like their DGX-1, which uses the NVLink bus, now running at 300GB/s. The new GPUs will be a direct swap-in replacement for the current Pascal-based GP100 chips. There will also be a 150W version of the chip on a PCIe card similar to their existing Tesla lineup, but only requiring a single half-length slot.

Assuming that Nvidia puts similar processing cores into their next generation of graphics cards, we should be looking at a 33% increase in maximum performance at the top end. The intermediate stages are more difficult to predict, since that depends on how they choose to tier their cards. But the increased efficiency should allow more significant increases in performance for laptops, within existing thermal limitations.
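For what it’s worth, here is the back-of-the-envelope arithmetic behind that 33% figure. It assumes the baseline is the 3,840-core Pascal Titan X (the article doesn’t name a comparison card), and that performance scales with core count, so treat it as a sketch:

    # Rough check of the "33% increase" estimate.
    # Assumes the baseline is the 3,840-core Pascal Titan X (an assumption,
    # not something Nvidia stated) and that performance scales with cores.
    volta_cores = 5120    # Tesla V100, per the announcement
    pascal_cores = 3840   # Pascal Titan X (assumed baseline)
    print(f"{(volta_cores / pascal_cores - 1) * 100:.0f}%")  # -> 33%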

Nvidia is continuing its pursuit of GPU-enabled autonomous cars with its Drive PX 2 and Xavier systems for vehicles. The newest version will have a 512-core Volta GPU and a dedicated deep learning accelerator chip that they are going to open source for other devices. They are targeting larger vehicles now, specifically in the trucking industry this year, with an AI-enabled semi-truck in their booth.

They also had a tractor showing off Blue River’s AI-enabled spraying rig, targeting individual plants for fertilizer or herbicide. It seems like farm equipment would be an optimal place to implement autonomous driving, allowing perfectly straight rows and smooth grades, all in a flat controlled environment with few pedestrians or other dynamic obstructions to be concerned about (think Interstellar). But I didn’t see any reference to them looking in that direction, even with a giant tractor in their AI booth.

On the software and application front, software company SAP showed an interesting implementation of deep learning that analyzes broadcast footage and other content looking to identify logos and branding, in order to provide quantifiable measurements of the effectiveness of various forms of brand advertising. I expect we will continue to see more machine learning implementations of video analysis, for things like automated captioning and descriptive video tracks, as AI becomes more mature.

Nvidia also released an “AI-enabled” version of Iray that uses image prediction to increase the speed of interactive ray tracing renders. I am hopeful that similar technology could be used to effectively increase the resolution of video footage as well. Basically, a computer sees a low-res image of a car and says, “I know what that car should look like,” and fills in the rest of the visual data. The possibilities are pretty incredible, especially in regard to VFX.

Iray AI

On the VR front, Nvidia announced a new SDK that allows live GPU-accelerated image stitching for stereoscopic VR processing and streaming. It scales from HD to 5K output, splitting the workload across one to four GPUs. The stereoscopic version is doing much more than basic stitching, processing for depth information and using that to filter the output to remove visual anomalies and improve the perception of depth. The output was much cleaner than any other live solution I have seen.

I also got to try my first VR experience recorded with a light field camera. This not only gives the user a 360-degree stereo look-around capability, but also the ability to move their head around to shift their perspective within a limited range (based on the size of the recording array). The project they were using to demo the technology didn’t highlight the amazing results until the very end of the piece, but when it did, it was the most impressive VR implementation I have had the opportunity to experience yet.
———-
Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been working on new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

The VFX Industry: Where are the women?

By Jennie Zeiher

As anyone in the visual effects industry would know, Marvel’s Victoria Alonso was honored earlier this year with the Visual Effects Society Visionary Award. Victoria is an almighty trailblazer, someone we ladies can admire, aspire to and want to be.

Her acceptance speech was an important reminder to us of the imbalance of the sexes in our industry. During her speech, Victoria stated: “Tonight there were 476 of you nominated. Forty-three of which are women. We can do better.”

Over the years, I’ve had countless conversations with industry people — executives, supervisors and producers — about why there are fewer women in artist and supervisory roles. A recent article in the NY Times suggested that women made up only five percent of the VFX supervisors on the 250 top-grossing films of 2014. Pretty dismal.

I’ve always worked in male-dominated industries, so I’m possibly a bit blasé about it. I studied IT and worked as a network engineer in the late ‘90s, before moving to the United States where I worked on 4K digital media projects with technologists and scientists. One of a handful of women, I was always just one of the boys. To me it was the norm.

Moving into VFX about 10 years ago, I realized this industry was no different. From my viewpoint, I see a ratio of about one female artist for every eight male artists. The same is true from what I’ve seen through our affiliated training courses. Sadly, I’ve heard of some facilities that have no women in artist roles at all!

Most of the women in our industry work in other disciplines. At my workplace, Australia’s Rising Sun Pictures, half of our executive members are women (myself included), and women generally outnumber men in indirect overhead roles (HR, finance, administration and management), as well as production management.

Women bring unique qualities to the workplace: they’re team players, hard working, generous and empathetic. Numerous reports have found that companies with women on their boards of directors and in leadership positions perform better than those without. So why, in our industry, do we see such a male-dominated artist, technical and supervisory workforce?

By no means am I undervaluing the women in those other disciplines (we could not have functioning businesses without them); I’m merely trying to understand why there aren’t more women inclined to pursue artistic jobs and, ultimately, supervision roles.

I can’t yet say that one of the talented female artists I’ve had the pleasure of working with over the years has risen through the ranks to become a VFX supervisor… and that’s not to say that they couldn’t have, just that they didn’t, or haven’t yet. This is something that disappoints me deeply. I consider myself a (liberal) feminist. Someone who, in a leadership position, wants to enable other women to become the best they can be and to be equal among their male counterparts.
So, why? Where are the women?

Men and Women Are Wired Differently
A study reported by LiveScience suggests men and women really are wired differently. It says, “Male brains have more connections within hemispheres to optimize motor skills, whereas female brains are more connected between hemispheres to combine analytical and intuitive thinking.”

Apparently this difference is at its greatest during the adolescent years (13-17), though the differences get smaller with age. So, during the peak of an adolescent girl’s education, she’s more inclined toward combined analytical and intuitive thinking. Does that correlate directly with girls not choosing technical vocations? Then again, I would have thought that STEM/STEAM careers would interest girls if their brains are wired to be analytical.

This would also explain women having better organizational and management skills and therefore seeking out more “indirectly” associated roles.

Lean Out
For those women already in our industry, are they too afraid to seek out higher positions? Women are often more self-critical and self-doubting. Men will promote themselves and dive right in, even if they’re less capable. I have experienced this firsthand and didn’t actually recognize it in myself until I read Sheryl Sandberg’s Lean In.

Or, is it just simply that we’re in a “boys club” — that these career opportunities are not being presented to our female artists, and that we’d prefer to promote men over women?

The Star Wars Factor
Possibly one of the real reasons that there is a lack of women in our industry is what I call “The Star Wars factor.” For the most part, my male counterparts grew up watching (and being inspired by) Star Wars and Star Trek, whereas, personally, I was more inclined to watch Girls Just Want to Have Fun and Footloose. Did these adolescent boys want to be Luke or Han, or George for that matter? Were they so inspired by John Dykstra’s lightsabers that they wanted to do THAT when they grew up? And if this is true, maybe Jyn, Rey and Captain Marvel — and our own Captain Marvel, Victoria Alonso — will spur on a new generation of women in the industry. Maybe it’s a combination of all of these factors. Maybe it’s none.

I’m very interested in exploring this further. To address the problem, we need to ask ourselves why, so please share your thoughts and experiences — you can find me at jz@vfxjz.com. At least now the conversation has started.

One More Thing!
I am very proud that one of my female colleagues, Alana Newell (pictured with her fellow nominees), was nominated for a VES Award this year for Outstanding Compositing in a Photoreal Feature for X-Men: Apocalypse. She was one of the few, but hopefully as time goes by that will change.

Main Image: The women of Rising Sun Pictures.
——–

Jennie Zeiher is head of sales & business development at Adelaide, Australia’s Rising Sun Pictures.


Focusing on sound bars at CES 2017

By Tim Hoogenakker

My day job is as a re-recording mixer and sound editor working on long-form projects, so when I attended this year’s Consumer Electronics Show in Las Vegas, I homed in on the leading trends in home audio playback. It was important for me to see what the manufacturers are planning regarding multi-channel audio reproduction for the home. From the look of it, sound bars seem to be leading the charge. My focus was primarily on immersive sound bars: single-box audio components capable of playing Dolby Atmos and DTS:X as close to their original formats as they can.

Klipsch Theaterbar

Now I must admit, I’ve kicked and screamed about sound bars in the past, audibly rolling my eyes at the concept. We audio mixers are used to working in perfect discrete surround environments, but I wanted to keep an open mind. Whether we as sound professionals like it or not, this is where the consumer product technology is headed. That and I didn’t see quite the same glitz and glam over discrete surround speaker systems at CES.

Here are some basic details about immersive sound bars in general:

1. In addition to the front channels, they often have up-firing drivers on the left and right edges (normally on the top and sides) that are intended to reflect off the walls and the ceiling of the room. This is to replicate the immersiveness as much as possible. Sure, this isn’t exact replication, but I’ll certainly give manufacturers praise for their creativity.
2. Because of the required reflectivity, the walls have to be flat enough to reflect the signal, yet acoustically balanced enough that it doesn’t sound like you’re sitting in the middle of your shower.
3. There is definitely a sweet spot in the seating position when listening to sound bars. If you move off-axis, you may experience somewhat of a wash sitting near the sides, but considering what they’re trying to replicate, it’s an interesting take.
4. They usually have an auto-tuning microphone system for calibrating to the room as accurately as possible.
5. I’m convinced that there’s a conspiracy by the manufacturers to make each and every sound bar, in physical appearance, resemble the enigmatic Monolith in 2001: A Space Odyssey… as if someone literally just knocked it over.

Yamaha YSP-5600

My first real immersive sound bar experience happened last year with the Yamaha YSP-5600, which comes loaded with 40 (yes 40!) drivers. It’s a very meaty 26-pound sound bar with a height of 8.5 inches and width of 3.6 feet. I heard a few projects that I had mixed in Dolby Atmos played back on this system. Granted, even when correctly tuned it’s not going to sound the same as my dubbing stage or with dedicated home theater speakers, but knowing this I was pleasantly surprised. A few eyebrows were raised for sure. It was fun playing demo titles for friends, watching them turn around and look for surround speakers that weren’t there.

A number of the sound bars displayed at CES bring me to my next point, which honestly is a bit of a complaint. Many were very thin in physical design, often labeled as “ultra-thin,” which to me means very small drivers, which tells me that there’s an elevated frequency crossover line for the subwoofer(s). Sure, I understand that they need to look sleek so they can sell and be acceptable for room aesthetics, but I’m an audio nerd. I WANT those low- to mid-frequencies carried through from the drivers, don’t just jam ALL the low- and mid-frequencies to the sub. It’ll be interesting to see how this plays out as these products reach market during the year.

Sony HT-ST5000

Besides immersive audio, most of these sound bars will play from a huge variety of sources and formats — Blu-ray, Blu-ray UHD, DVD, DVD-Audio, streaming via network and USB — and offer connections for Wi-Fi, Bluetooth and 4K pass-through.

Some of these sound bars — like many things at CES 2017 — are supported with Amazon Alexa and Google Home. So, instead of fighting over the remote control, you and your family can now confuse Alexa with arguments over controlling your audio between “Game of Thrones” and Paw Patrol.

Finally, I probably won’t be installing a sound bar on my dub stage for reference anytime soon, but I do feel that professionally it’s very important for me to know the pros and the cons — and the quirks — so we can be aware how our audio mixes will translate through these systems. And considering that many major studios and content creators are becoming increasingly ready to make immersive formats their default deliverable standard, especially now with Dolby Vision, I’d say it’s a necessary responsibility.

Looking forward to seeing what NAB has up its sleeve on this as well.

Here are some of the more notable sound bars that debuted:

LG SJ9

Sony HT-ST5000: This sound bar is compatible with Google Home. They say it works well with ceilings as high as 17 feet. It’s not DTS:X-capable yet, but Sony said that will happen by the end of the year.

LG SJ9: The LG SJ9 sound bar is currently noted by LG as “4K high resolution audio” (which is an impossible statement). It’s possible that they mean it’ll pass through a 4K signal, but the LG folks couldn’t clarify. That snafu aside, it has a very wide dimensionality, which helps for stereo imaging. It will be Dolby Vision/HDR-capable via a future firmware upgrade.

The Klipsch “Theaterbar”: This is another eyebrow-raiser. It’ll release in Q4 of 2017. There’s no information on the web yet, but they’re showcasing this at CES.

Pioneer Elite FS-EB70: There’s no information on the web yet, but they were showcasing this at CES.

Onkyo SBT-A500 Network: Also no information but it was shown at CES.


Formosa Group re-recording mixer and sound editor Tim Hoogenakker has over 20 years of experience in audio post for music, features and documentaries, television and home entertainment formats. He had stints at Prince’s Paisley Park Studios and POP Sound before joining Formosa.


Industry pros gather to discuss sound design for film and TV

By Mel Lambert

The third annual Mix Presents Sound for Film and Television conference attracted some 500 production and post pros to Sony Pictures Studios in Culver City, California, last week to hear about the art of sound design.

Subtitled “The Merging of Art, Technique and Tools,” the one-day conference kicked off with a keynote address by re-recording mixer Gary Bourgeois, followed by several panel discussions and presentations from Avid, Auro-3D, Steinberg, JBL Professional and Dolby.

L-R: Brett G. Crockett, Tom McCarthy, Gary Bourgeois and Mark Ulano.

During his keynote, Bourgeois advised, “Sound editors and re-recording mixers should be aware of the talent they bring to the project as storytellers. We need to explore the best ways of using technology to be creative and support the production.” He concluded with some more sage advice: “Do not let the geek take over! Instead,” he stressed, “show the passion we have for the final product.”

Other highlights included a “Sound Inspiration Within the Storytelling Process” panel organized by MPSE and moderated by Carolyn Giardina from The Hollywood Reporter. Panelists included Will Files, Mark P. Stoeckinger, Paula Fairfield, Ben L. Cook, Paul Menichini and Harry Cohen. The discussion focused on where sound designers find their inspiration and the paths they take to create unique soundtracks.

CAS hosted a sound-mixing panel titled “Workflow for Musicals in Film and Television Production” that focused on live recording and other techniques to give musical productions a more “organic” sound. Moderated by Glen Trew, the panel included music editor David Klotz, production mixer Phil Palmer, playback specialist Gary Raymond, production mixer Peter Kurland, re-recording mixer Gary Bourgeois and music editor Tim Boot.

Sound Inspiration Within the Storytelling Process panel (L-R): Will Files, Ben L. Cook, Mark P. Stoeckinger, Carolyn Giardina, Harry Cohen, Paula Fairfield and Paul Menichini.

Sponsored by Westlake Pro, a panel called “Building an Immersive Room: Small, Medium and Large” covered basic requirements of system design and setup — including console/DAW integration and monitor placement — to ensure that soundtracks translate to the outside world. Moderated by Westlake Pro’s CTO, Jonathan Deans, the panel was made up of Bill Johnston from Formosa Group, Nathan Oishi from Sony Pictures Studios, Jerry Steckling of JSX, Brett G. Crockett from Dolby Labs, Peter Chaikin from JBL and re-recording mixers Mark Binder and Tom Brewer.

Avid hosted a fascinating panel discussion called “The Sound of Stranger Things,” which focused on the soundtrack for the Netflix original series, with its signature sound design and ‘80s-style, synthesizer-based music score. Moderated by Avid’s Ozzie Sutherland, the panel included sound designer Craig Henighan, SSE Brad North, music editor David Klotz and sound effects editor Jordan Wilby. “We drew our inspiration from such sci-fi films as Alien, The Thing and Predator,” Henighan said. Re-recording mixers Adam Jenkins and Joe Barnett joined the discussion via Skype from the Technicolor Seward stage.

The Barbra Streisand Scoring Stage.

A stand-out event was the Production Sound Pavilion held on the Barbra Streisand Scoring Stage, where leading production sound mixers showed off their sound carts, with manufacturers also demonstrating wireless, microphone and recorder technologies. “It all starts on location, with a voice in a microphone and a clean recording,” offered CAS president Mark Ulano. “But over the past decade production sound has become much more complex, as technologies and workflows evolved both on-set and in post production.”

Sound carts on display included Tom Curley’s Sound Devices 788t recorder and Sound Devices CL9 mixer combination; Michael Martin’s Zaxcom Nomad 12 recorder and Zaxcom Mix-8 mixer; Danny Maurer’s Sound Devices 664 recorder and Sound Devices 633 mixer; Devendra Cleary’s Sound Devices 970, Pix 260i and 664 recorders with Yamaha 01V and Sound Devices CL-12 mixers; Charles Mead’s Sound Devices 688 recorder with CL-12 mixer; James DeVotre’s Sound Devices 688 recorder with CL-12 Alaia mixer; Blas Kisic’s Boom Recorder and Sound Devices 788 with Mackie Onyx 1620 mixer; Fernando Muga’s Sound Devices 788 and 633 recorders with CL-9 mixer; Thomas Cassetta’s Zaxcom Nomad 12 recorder with Zaxcom Oasis mixer; Chris Howland’s Boom Recorder, Sound Devices and 633 recorders, with Mackie Onyx 1620 and Sound Devices CL-12 mixers; Brian Patrick Curley’s Sound Devices 688 and 664 recorders with Sound Devices CL-12 Alaia mixer; Daniel Powell’s Zoom F8 recorder/mixer; and Landon Orsillo’s Sound Devices 688 recorder.

Lon Neumann

CAS also organized an interesting pair of Production Sound Workshops. During the first one, consultant Lon Neumann addressed loudness control with an overview of loudness levels and surround sound management of cinema content for distribution via broadcast television.

The second presentation, hosted by Bob Bronow (production mixer on Deadliest Catch) and Joe Foglia (Marley & Me, Scrubs and From the Earth to the Moon), covered EQ and noise reduction in the field. While it was conceded that, traditionally, any type of signal processing on location is strongly discouraged — such decisions normally being handled in post — the advent of multitrack recording and isolated channels means that it is becoming more common for mixers to use processing on the dailies mix track.

New for this year was a Sound Reel Showcase that featured short samples from award-contending and to-be-released films. The audience in the Dolby Atmos- and Auro-3D-equipped William Holden Theatre was treated to a high-action sequence from Mel Gibson’s new film, Hacksaw Ridge, which is scheduled for release on November 4. It follows the true story of a WWII army medic who served during the harrowing Battle of Okinawa and became the first conscientious objector to be awarded the Medal of Honor. The highly detailed Dolby Atmos soundtrack was created by SSE/sound designer/recording mixer Robert Mackenzie working at Sony Pictures Studios with dialogue editor Jed M. Dodge and ADR supervisor Kimberly Harris, with re-recording mixers Andy Wright and Kevin O’Connell.

Mel Lambert is principal of Content Creators, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

All photos by Mel Lambert.

 


AES Paris: A look into immersive audio, cinematic sound design

By Mel Lambert

The Audio Engineering Society (AES) came to the City of Light in early June with a technical program and companion exhibition that attracted close to 2,600 pre-registrants, including some 700 full-pass attendees. “The Paris International Convention surpassed all of our expectations,” AES executive director Bob Moses told postPerspective. “The research community continues to thrive — there was great interest in spatial sound and networked audio — while the business community once again embraced the show, with a 30 percent increase in exhibitors over last year’s show in Warsaw.” Moses confirmed that next year’s European convention will be held in Berlin, “probably in May.”

Tom Downes

Getting Immersed
There were plenty of new techniques and technologies targeting the post community. One presentation, in particular, caught my eye, since it posed some relevant questions about how we perceive immersive sound. In the session, “Immersive Audio Techniques in Cinematic Sound Design: Context and Spatialization,” co-authors Tom Downes and Malachy Ronan — both of whom are AES student members currently studying at the University of Limerick’s Digital Media and Arts Research Center in Ireland — questioned the role of increased spatial resolution in cinematic sound design. “Our paper considered the context that prompted the use of elevated loudspeakers, and examined the relevance of electro-acoustic spatialization techniques to 3D cinematic formats,” offered Downes. The duo brought with them a scene from writer/director Wolfgang Petersen’s submarine classic, Das Boot, to illustrate their thesis.

Using the university’s Spatialization and Auditory Display Environment (SpADE) linked to an Apple Logic Pro 9 digital audio workstation and a 7.1.4 playback configuration — with four overhead speakers — the researchers correlated visual stimuli with audio playback. (A 7.1-channel horizontal playback format was determined by the DAW’s I/O capabilities.) Different dynamic and static timbre spatializations were achieved by using separate EQ plug-ins assigned to horizontal and elevated loudspeaker channels.

“Sources were band-passed and a 3dB boost applied at 7kHz to enhance the perception of elevation,” Downes continued. “A static approach was used on atmospheric sounds to layer the soundscape using their dominant frequencies, whereas bubble sounds were also subjected to static timbre spatialization; the dynamic approach was applied when attempting to bridge the gap between elevated and horizontal loudspeakers. Sound sources were split, with high frequencies applied to the elevated layer, and low frequencies to the horizontal layer. By automating the parameters within both sets of equalization, a top-to-bottom trajectory was perceived. However, although the movement was evident, it was not perceived as immersive.”
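To make that elevation EQ concrete, here is a minimal sketch in Python of the kind of filter described above: a +3dB peaking boost at 7kHz built from the standard RBJ Audio EQ Cookbook biquad. The researchers used separate EQ plug-ins in Logic Pro, so this stand-in is illustrative only; the 7kHz and 3dB values are the ones Downes quoted.

    import numpy as np
    from scipy.signal import lfilter

    def peaking_eq(x, fs, f0=7000.0, gain_db=3.0, q=1.0):
        """RBJ cookbook peaking EQ: boost gain_db at f0 Hz."""
        A = 10 ** (gain_db / 40)          # square root of the linear gain
        w0 = 2 * np.pi * f0 / fs
        alpha = np.sin(w0) / (2 * q)
        b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
        a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
        return lfilter(b / a[0], a / a[0], x)

    # e.g., process the band feeding the four elevated loudspeakers:
    # elevated_feed = peaking_eq(source_band, fs=48000)

Band-splitting a source and automating gain between the elevated and horizontal layers, as the paper describes, would follow the same pattern, with high- and low-pass filters in place of the peak.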

The paper concluded that although multi-channel electro-acoustic spatialization techniques are seen as a rich source of ideas for sound designers, without sufficient visual context they are limited in the types of techniques that can be applied. “Screenwriters and movie directors must begin to conceptualize new ways of utilizing this enhanced spatial resolution,” said Downes.

Rich Nevens

Tools
Merging Technologies demonstrated immersive-sound applications for the v.10 release of its Pyramix DAW software, with up to 30.2-channel routing and panning, including compatibility with Barco Auro, Dolby Atmos and other surround formats, without the need for additional plug-ins or apps. Avid, meanwhile, showcased additions for the modular S6 Assignable Digital Console, including a Joystick Panning Module and a new Master Film Module with PEC/DIR switching.

“The S6 offers improved ergonomics,” explained Avid’s Rich Nevens, director of worldwide pro audio solutions, “including enhanced visibility across the control surface, and full Ethernet connectivity between eight-fader channel modules and the Pro Tools DSP engines.” Reportedly, more than 1,000 S6 systems have been sold worldwide since its introduction in December 2013, including two recent installations at Sony Pictures Studios in Culver City, California.

Finally, Eventide came to the Paris AES Convention with a remarkable new multichannel/multi-element processing system that was demonstrated by invitation only to selected customers and distributors; it will be formally introduced during the upcoming AES Convention in Los Angeles in October. Targeted at film/TV post production, the rackmount device features 32 inputs and 32 discrete outputs per DSP module, thereby allowing four multichannel effects paths to be implemented simultaneously. A quartet of high-speed ARM processors mounted on plug-in boards can be swapped out when more powerful DSP chips become available.

Joe Bamberg and Ray Maxwell

“Initially, effects will be drawn from our current H8000 and H9 processors — with other EQ, dynamics plus reverb effects in development — and can be run in parallel or in series, to effectively create a fully-programmable, four-element channel strip per processing engine,” explained Eventide software engineer Joe Bamberg.

“Remote control plug-ins for Avid Pro Tools and other DAWs are in development,” said Eventide’s VP of sales and marketing, Ray Maxwell. The device can also be used via a stand-alone application for Apple iPad tablets or Windows/Macintosh PCs.

Multi-channel I/O and processing options will enable object-based EQ, dynamics and ambience processing for immersive-sound production. End-user pricing for the code-named product, which will also feature Audinate Dante, Thunderbolt, Ravenna/AES67 and AVB networking, has yet to be announced.

Mel Lambert is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.


Learning about LTO and Premiere workflows

By Chelsea Taylor

In late March, I attended a workflow event by Facilis Technology and StorageDNA in New York City. I didn’t know much going in other than it would be about collaborative workflows and shared storage for Adobe Premiere. While this event was likely set up to sell some systems, I did end up learning some worthwhile information about archiving and backup.

Full disclosure: going into this event I knew very little about LTO archiving. Previously, I had been archiving all of my projects by throwing a hard drive into the corner of my edit suite. Well, not really, but close! It seems that a lot of companies out there don’t put too much importance on archiving until after it becomes a problem (“All of our edits are crashing and we don’t know why!”).

At my last editing job, where we edited short-form content on Avid, our media manager would consolidate projects in Avid, create a FileMaker database that cataloged footage, manually add metadata, then put the archived files onto different G-Tech G-RAID drives (which, of course, could die after a couple of years). In short, it wasn’t the best way to archive and back up media, especially when an editor wanted to find something. They would have to walk over to the computer where the database was, figure out how to use the UI, search for the project (if it had the right metadata), find the physical drive, plug the drive into their machine, go through different files/folders until they found what they were looking for, copy however many large files to the SAN, and then start working. Suffice it to say, I had a lot to learn about archiving and was very excited to attend this event.

I arrived at the event about 30 minutes early, which turned out to be a good thing because I was immediately greeted by some of the experts and presenters from Facilis and StorageDNA. Not fully realizing who I was talking to, I started asking tons of questions about their products. What does StorageDNA do? How can it integrate with Premiere? Why is LTO tape archiving better? Who adds the metadata? How fast can you access the backup? And before I knew it, I was in a heated discussion with Jeff Krueger, worldwide VP of sales at StorageDNA, and Doug Hynes, director of product and solution marketing, about their products and the importance of archiving. Though I was fully inspired to archive and had tons more questions, our conversation got cut short as the event was about to begin.

While the Facilis offerings look cool (I want all of them!), I wasn’t at the event to buy things — I wanted to hear about the workflow and integration with Adobe Premiere (which is a language I better understand). As someone who would be actually using these products and not in charge of buying them, I didn’t care about the tech specs or new features. “Secure sharing with permissions. Low-level media management. Block-level virtualized storage pools.” It was hardware spec after hardware spec (which you can check out on their website). As the presenter spoke of the new features and specifications of their new models, I just kept thinking about what Jeff Krueger had told me right before the event about archiving, which I will share with you here.

StorageDNA presented on a product line called DNAevolution, which is an archive engine built on LTO tapes. Each model provides different levels of LTO automation, LTO drives and server hardware. As an editor, I was more concerned with the workflow.

The StorageDNA Workflow for Premiere
1. Card contents are ingested onto the SAN.
2. The high-res files are written to LTO/LTFS through DNAevolution and become permanent camera master files.
3. Low-res proxies are created and ingested onto the SAN for use in editorial. DNAevolution is pointed to the proxies, indexes them and links to the high-res clips on LTO.
4. Once the files are written to and verified on LTO, you can delete the high-res files from your spinning disk storage.
5. The editor works with the low-res proxies in Premiere Pro.
6. When complete, the editor exports an EDL, which DNAevolution parses to locate the high-res files on LTO via its database (see the sketch after these steps).
7. DNAevolution restores high-res files to the finishing station or SAN storage.
8. The editor can relink the media and distribute in high-res/4K.
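To picture step 6, here is a toy sketch in Python (explicitly not StorageDNA’s actual parser) that pulls source clip names out of a CMX3600-style EDL, assuming each event carries a “FROM CLIP NAME” comment. An archive engine would then look those names up in its database to find which LTO tape holds each camera master:

    import re

    def clips_to_restore(edl_path):
        """Collect unique clip names from '* FROM CLIP NAME:' comments."""
        clips = set()
        with open(edl_path) as edl:
            for line in edl:
                match = re.match(r"\*\s*FROM CLIP NAME:\s*(.+)", line.strip())
                if match:
                    clips.add(match.group(1).strip())
        return sorted(clips)

    # "final_cut.edl" is a hypothetical file name for illustration.
    for clip in clips_to_restore("final_cut.edl"):
        print(clip)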

The StorageDNA Archive Workflow
1. In the DNAevolution Archive Console, select your Premiere Pro project file.
2. DNAevolution scans the project, and generates a list of files to be archived. It then writes all associated media files and the project itself to LTO tape(s).
3. Once the files are written to and verified on LTO, you can delete the high-res files from your spinning disk storage.

Why I Was Impressed
All of your media is immediately backed up, ensuring it is in a safe place and not taking up your local or shared storage. You can delete the high-res files from your SAN storage immediately and work with proxies, onlining later. The problem I’ve had with SAN storage is that it fills up very quickly with large files, eventually slowing down your systems and leading to playback problems. Why have all of your raw, unused media just sitting there eating up your valuable space when you can free it up immediately?

DNAevolution works easily with Adobe’s Premiere, Prelude and Media Encoder. It uses the Adobe CC toolset to automate the process of creating LTO/LTFS camera masters while creating previews via Media Encoder.

DNAevolution archives all media from your Premiere projects with a single click and notifies you if files are missing. It also checks your files for existing camera and clip metadata, meaning that if you add all of that in at the start, archiving becomes much easier.

You have direct access to files on LTO tape, enabling third-party applications to access media directly on LTO for tasks such as transcoding, partial restore and playout. DNAevolution’s Archive Asset Management toolset allows you to browse and search archived content and provides proxy playback. It even has drag-and-drop functionality with Premiere, where you literally drop a file straight from the archive into your Premiere timeline, with little rendering, and start editing.

I have never tested an LTO archive workflow and am curious what other people’s experiences have been like. Feel free to leave your thoughts on LTO vs. Cloud vs. Disk in the comments below.

Chelsea Taylor is a freelance editor who has worked on a wide range of content: from viral videos and sizzles to web series and short films. She also works as an assistant editor on feature films and documentaries. Check out her site at StillRenderingProductions.com.


Talking storage with LaCie at NAB

By Isaac Spedding

As I power-walked my way through the NAB show floor, carefully avoiding eye contact with hopeful booth minders, my mind was trying to come up with fancy questions to ask the team at LaCie that would cement my knowledge of storage solutions and justify my press badge. After drawing a blank, I decided to just ask what I had always wanted to know about storage companies in general: How reliable are your drives and how do you prove it? Why is there a blue bubble on your enclosures? Why are drives still so damn heavy?

Fortunately, I met with two members of the LaCie team, who kindly answered my tough questions with valuable information and great stories. I should note that just prior to this NAB trip I had submitted an RMA for 10 ADATA USB 3.0 drives, as all the connectors on them had come loose and fallen out of, or into, the single-piece enclosures. So, as you can imagine, at that moment in time, I was not exactly the biggest fan of hard drive companies in general.

“We are never going to tell you (a drive) will never fail,” said Clement Barberis, marketing manager for LaCie. “We tell people to keep multiple copies. It doesn’t matter how, just copies. It’s not about losing your drive, it’s about losing your data.”

LaCie offers a three- to five-year warranty on all its products and has several services available, including fast replacement and data recovery. Connectors and drives are the two main points of failure for any portable drive product.

LaCie’s Clement Barberis and Kristin MacRostie.

Owned by Seagate, LaCie has a very close connection with that team and can select drives based on what the product needs. Design, development and target-user all have an impact on drive and connection selection. Importantly, LaCie decides on the connection options not by what is the newest but by what works best with the internal drive speed.

Their brand-new 12-bay enclosure, the LaCie 12big Thunderbolt 3 (our main image), captures the speed of Thunderbolt 3, and with a 96TB capacity (around 100 hours of uncompressed 4K), the system can transfer around 2600MB/s (yes, megabytes, not megabits). It is targeted at small production houses shooting high-resolution material.
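As a sanity check on that capacity figure, here is the rough arithmetic. The hours you get depend entirely on which flavor of “uncompressed 4K” you assume; with UHD at 8-bit 4:2:0 and 24fps (my assumption, not a LaCie spec), the claim lands in the right neighborhood:

    # Rough estimate of hours of uncompressed 4K in 96TB (assumed format).
    width, height = 3840, 2160   # UHD "4K"
    bytes_per_pixel = 1.5        # 8-bit 4:2:0
    fps = 24

    mb_per_sec = width * height * bytes_per_pixel * fps / 1e6  # ~299 MB/s
    hours = 96e12 / (mb_per_sec * 1e6) / 3600
    print(f"{mb_per_sec:.0f} MB/s -> {hours:.0f} hours")       # -> ~89 hours

A 10-bit 4:2:2 master would drop that closer to 50 hours, so take “around 100 hours” as a best-case figure.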

Why So Heavy?
After Barberis showed me the new LaCie 12big, I asked why the form factor and weight had not been redesigned after all these years. I mean, 96TB is great and all, but it’s not light — at 17.6kg (38.9 pounds) it’s not easy to take on the plane. Currently, the largest single drive available is 8TB and features six platters inside the traditional form factor. Each additional platter increases a drive’s weight and its capacity, and that extra capacity per drive is what makes a smaller form factor for a drive array possible. That’s why drive arrays have been staying the same size while gaining weight and storage capacity. So your sleek drive will be getting heavier.

LaCie produces several ranges of hard drives with different designs. This is most visually noticeable in LaCie’s Rugged drive series, which features bright orange bumpers. Other products have a “Porsche-like” design and feature the blue LaCie bubble. If you are like me, you might be curious how this look came about.

According to Kristin MacRostie, PR manager for LaCie, “The company founder, Philippe Spruch, wasn’t happy with the design of the products LaCie was putting out 25 years ago — in his words, they were ‘geeky and industrial.’ So, Spruch took a hard drive and a sticky note and he wrote, ‘Our hard drives look like shit, please help,’ and messengered it over to (designer) Philippe Starck’s office in Paris. Starck called Spruch right away.”

The sleek design started with Philippe Starck and continued with Neil Poulton, an apprentice to Starck who was brought on to design the drives we see today. The drive designs target the intended consumers, with the “Porsche design” aligning itself with Apple users.

Hearing the story behind LaCie’s design choice, the recommendation to keep multiple drives and not rely on just one, and the explanation of why each product is designed, convinced me that LaCie is producing drive solutions that are built for reliability and usability. Although not the cheapest option on the market today, the LaCie solutions justify this with solid design and logic behind the decision of components, connectors and cost. Besides, at the end of the day, your data is the most important thing and you shouldn’t be keeping it on the cheapest possible drive you found at Best Buy.

Isaac Spedding is a New Zealand-based creative technical director, camera operator and editor. You can follow him on Twitter @Isaacspedding.


Dolby Audio at NAB 2016

By Jonathan Abrams

Dolby, founded over 50 years ago as an audio company, is elevating the experience of watching movies and TV content through new technologies in audio and video, the latter of which is a relatively new area for the company’s offerings. This is being done with Dolby AC-4 and Dolby Atmos for audio, and Dolby Vision for video. In this post, the focus will be on Dolby’s audio technologies.

Why would Dolby create AC-4? Dolby AC-3 is over 20 years old, and as a function of its age, it does not do new things well. What are those new things and how will Dolby AC-4 elevate your audio experience?

First, let’s define some acronyms, as they are part of the past and present of Dolby audio in broadcasting. OTA stands for Over The Air, as in what you can receive with an antenna. ATSC stands for Advanced Television Systems Committee, the organization that standardized HDTV (ATSC 1.0) in the US 20 years ago and is now working to standardize Ultra HDTV broadcasts as ATSC 3.0. Ultra HD is referred to as UHD.

Now, some math. Dolby AC-3, which is used with ATSC 1.0, uses up to 384 kbps for 5.1 audio. Dolby AC-4 needs only 128 kbps for 5.1 audio. That increased coding efficiency, along with a maximum bit rate of 640 kbps, leaves 512 kbps to work with. What can be done with that extra 512 kbps?
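Here is that budget worked out, using the rates cited above (illustrative; real broadcast configurations vary):

    # Dolby AC-4 bit-budget arithmetic, using the rates cited above.
    ac3_51_kbps = 384     # Dolby AC-3, 5.1 audio in ATSC 1.0 (for comparison)
    ac4_51_kbps = 128     # Dolby AC-4, 5.1 audio
    ac4_max_kbps = 640    # maximum AC-4 bit rate

    print(ac4_max_kbps - ac4_51_kbps)   # -> 512 kbps left for extra services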

If you are watching sporting events, Dolby AC-4 allows broadcasters to provide you with the option to select which audio stream you are listening to. You can choose which team’s audio broadcast to listen to, listen to another language, hear what is happening on the field of play, or listen to the audio description of what is happening. This could be applicable to other types of broadcasts, though the demos I have heard, including one at this year’s NAB Show, have all been for sporting events.

Dolby AC-4 allows the viewer to select from three types of dialog enhancement: none, low and high. The dialog enhancement processing is done at the encoder, where it runs a sophisticated dialog identification algorithm and then creates a parametric description that is included as metadata in the Dolby AC-4 bit stream.

What if I told you that after implementing what I described above in a Dolby AC-4 bit stream, there were still bits available for other audio content? It is true, and Dolby AC-4 is what allows Dolby Atmos, a next-generation, rich and complex object audio system, to be inside ATSC 3.0 audio streams in the US. At my NAB demo, I heard a clip of Mad Max: Fury Road, which was mixed in Dolby Atmos, from a Yamaha sound bar. I perceived elements of the mix coming from places other than the screen, even though the sound bar was where all of the sound waves originated. Whatever is being done with psychoacoustics to make the experience of surround sound from a sound bar possible is convincing.

The advancements in both the coding and presentation of audio have applications beyond broadcasting. The next challenge that Dolby is taking on is mobile. Dolby’s audio codecs are being licensed to mobile applications, which allows them to be pushed out via apps, which in turn removes the dependency on the mobile device’s OS. I heard a Dolby Atmos clip from a Samsung mobile device. While the device had to be centered in front of me to perceive surround sound, I did perceive it.

Years of R&D at Dolby have yielded efficiencies in coding and new ways of presenting audio that will elevate your experience, from home theater to mobile and, once broadcasters adopt ATSC 3.0, Ultra HDTV.

Check out my coverage of Dolby’s Dolby Vision offerings at NAB as well.

Jonathan S. Abrams is the Chief Technical Engineer at Nutmeg, a creative marketing, production and post resource.

NAB 2016: My pick for this year’s gamechanger is Lytro

By Isaac Spedding

There has been a lot of buzz around what the gamechanger was at this year’s NAB show. What was released that will really change the way we all work? I was present for the conference session where an eloquent Jon Karafin, head of Light Field Video, explained that Lytro has created a camera system that essentially captures every aspect of your shot and allows you to recreate it in any way, at any position you want, using light field technology.

Typically, with game-changing technology comes uncertainty from the established industry, and that was made clear during the rushed Q&A session, where several people (after congratulating the Lytro team) nervously asked if they had thought about the fate of positions in the industry that the technology would make redundant. Jon’s reply was that core positions won’t change; however, the way in which they operate will. The mob of eager filmmakers, producers and young scientists that queued to meet him (I was one of them) was another sign that the technology is incredibly interesting and exciting for many.

“It’s a birth of a new technology that very well could replace the way that Hollywood makes films.” These are the words of Robert Stromberg (DGA), CCO and founder of The Virtual Reality Company, in the preview video for Lytro’s debut film Life, which will be screened on Tuesday to an audience of 500 lucky attendees. Karafin and Jason Rosenthal, CEO at Lytro, will provide a Lytro Cinema demonstration and breakdown of the short film.

Lytro Cinema is my pick for the game-changing technology of NAB 2016. It looks like it will not only advance capture, but also change post production methodology and open up new roles, possibilities and challenges for everyone in the industry.

Isaac Spedding is a New Zealand-based creative technical director, camera operator and editor. You can follow him on Twitter @Isaacspedding.