Category Archives: Blog

The VFX Industry: Where are the women?

By Jennie Zeiher

As anyone in the visual effects industry would know, Marvel's Victoria Alonso was honored earlier this year with the Visual Effects Society Visionary Award. Victoria is an almighty trailblazer, one whom we ladies can admire, aspire to and want to be.

Her acceptance speech was an important reminder to us of the imbalance of the sexes in our industry. During her speech, Victoria stated: “Tonight there were 476 of you nominated. Forty-three of which are women. We can do better.”

Over the years, I've had countless conversations with industry people — executives, supervisors and producers — about why there are fewer women in artist and supervisory roles. A recent article in the NY Times suggested that women made up only five percent of the VFX supervisors on the 250 top-grossing films of 2014. Pretty dismal.

I’ve always worked in male-dominated industries, so I’m possibly a bit blasé about it. I studied IT and worked as a network engineer in the late ‘90s, before moving to the United States where I worked on 4K digital media projects with technologists and scientists. One of a handful of women, I was always just one of the boys. To me it was the norm.

Moving into VFX about 10 years ago, I realized this industry was no different. From my viewpoint, I see a ratio of about one female artist to every eight male artists. The same is true from what I've seen through our affiliated training courses. Sadly, I've heard of some facilities that have no women in artist roles at all!

Most of the females in our industry work in other disciplines. At my workplace, Australia's Rising Sun Pictures, half of our executive members are women (myself included), and women generally outnumber men in indirect overhead roles (HR, finance, administration and management), as well as in production management.

Women bring unique qualities to the workplace: they're team players, hardworking, generous and empathetic. Numerous reports have found that companies that have women on their board of directors and in leadership positions perform better than those that don't. So in our industry, why do we see such a male-dominated artist, technical and supervisory workforce?

By no means am I undervaluing the women in those other disciplines (we could not have functioning businesses without them); I'm merely trying to understand why more women aren't inclined to pursue artist jobs and, ultimately, supervision roles.

I can't yet say that any of the talented female artists I've had the pleasure of working with over the years has risen through the ranks to become a VFX supervisor… and that's not to say that they couldn't have, just that they didn't, or haven't yet. This is something that disappoints me deeply. I consider myself a (liberal) feminist: someone who, in a leadership position, wants to enable other women to become the best they can be and to be equal among their male counterparts.
So, why? Where are the women?

Men and Women Are Wired Differently
A study reported by LiveScience suggests men and women really are wired differently. It says, "Male brains have more connections within hemispheres to optimize motor skills, whereas female brains are more connected between hemispheres to combine analytical and intuitive thinking."

Apparently this difference is at its greatest during the adolescent years (13-17), although with age the differences get smaller. So, during the peak of an adolescent girl's education, she's more inclined toward analytical and intuitive thinking. Does that correlate directly with girls not choosing a technical vocation? Then again, I would have thought STEM/STEAM careers would interest girls if their brains are wired to be analytical.

This would also explain women having better organizational and management skills and therefore seeking out more “indirectly” associated roles.

Lean Out
For those women already in our industry, are they too afraid to seek out higher positions? Women are often more self-critical and self-doubting. Men will promote themselves and dive right in, even if they're less capable. I have experienced this first-hand and didn't actually recognize it in myself until I read Sheryl Sandberg's Lean In.

Or, is it just simply that we’re in a “boys club” — that these career opportunities are not being presented to our female artists, and that we’d prefer to promote men over women?

The Star Wars Factor
Possibly one of the real reasons that there is a lack of women in our industry is what I call "The Star Wars factor." For the most part, my male counterparts grew up watching (and being inspired by) Star Wars and Star Trek, whereas, personally, I was more inclined to watch Girls Just Want to Have Fun and Footloose. Did these adolescent boys want to be Luke or Han, or George for that matter? Were they so inspired by John Dykstra's lightsabers that they wanted to do THAT when they grew up? And if this is true, maybe Jyn, Rey and Captain Marvel — and our own Captain Marvel, Victoria Alonso — will spur on a new generation of women in the industry. Maybe it's a combination of all of these factors. Maybe it's none.

I’m very interested in exploring this further. To address the problem, we need to ask ourselves why, so please share your thoughts and experiences — you can find me at jz@vfxjz.com. At least now the conversation has started.

One More Thing!
I am very proud that one of my female colleagues, Alana Newell (pictured with her fellow nominees), was nominated for a VES Award this year for Outstanding Compositing in a Photoreal Feature for X-Men: Apocalypse. She was one of the few, but hopefully as time goes by that will change.

Main Image: The women of Rising Sun Pictures.
——–

Jennie Zeiher is head of sales & business development at Adelaide, Australia’s Rising Sun Pictures.

Focusing on sound bars at CES 2017

By Tim Hoogenakker

My day job is as a re-recording mixer and sound editor working on long-form projects, so when I attended this year's Consumer Electronics Show in Las Vegas, I homed in on the leading trends in home audio playback. It was important for me to see what the manufacturers are planning regarding multi-channel audio reproduction for the home. From the look of it, sound bars seem to be leading the charge. My focus was primarily on immersive sound bars, single-box audio components capable of playing Dolby Atmos and DTS:X as close to their original format as they can.

Klipsch Theaterbar

Now I must admit, I've kicked and screamed about sound bars in the past, audibly rolling my eyes at the concept. We audio mixers are used to working in perfect discrete surround environments, but I wanted to keep an open mind. Whether we as sound professionals like it or not, this is where consumer product technology is headed. Besides, I didn't see quite the same glitz and glam around discrete surround speaker systems at CES.

Here are some basic details about immersive sound bars in general:

1. In addition to the front channels, they often have up-firing drivers on the left and right edges (normally on the top and sides) that are intended to reflect off the walls and ceiling of the room. This is to replicate the immersiveness as much as possible. Sure, this isn't exact replication, but I'll certainly give manufacturers praise for their creativity (see the quick geometry sketch after this list).
2. Because of the required reflectivity, the walls have to be flat enough to reflect the signal, yet still balanced acoustically so that it doesn't sound like you're sitting in the middle of your shower.
3. There is definitely a sweet spot in the seating position when listening to sound bars. If you move off-axis toward the sides, you may experience somewhat of a wash, but considering what they're trying to replicate, it's an interesting take.
4. They usually have an auto-tuning microphone system that measures the room to calibrate playback as accurately as possible.
5. I'm convinced that there's a conspiracy by the manufacturers to make each and every sound bar, in physical appearance, resemble the enigmatic Monolith in 2001: A Space Odyssey… as if someone had literally just knocked it over.
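To make the reflection idea in point 1 a little more concrete, here is a quick geometry sketch of the extra path an up-firing driver's signal travels via the ceiling. All of the numbers are assumptions chosen for illustration, not measurements of any particular product or room:

```python
# Hypothetical numbers for illustration only -- real rooms, sound bars and
# listening positions vary, and this ignores absorption and driver angling.
import math

SPEED_OF_SOUND = 343.0        # meters per second at roughly room temperature

ceiling_height = 2.4          # m from the up-firing driver to the ceiling
listener_distance = 3.0       # m horizontally from the sound bar to the listener
# Assume the listener's ears are level with the sound bar to keep the geometry simple.

# Direct path: straight from the bar to the listener.
direct_path = listener_distance

# Reflected path: up to the ceiling, then down to the listener. Using the
# image-source trick, the bounce is equivalent to a straight line to a mirror
# image of the listener placed 2 x ceiling_height above the bar.
reflected_path = math.hypot(listener_distance, 2.0 * ceiling_height)

extra_distance = reflected_path - direct_path
extra_delay_ms = extra_distance / SPEED_OF_SOUND * 1000.0

print(f"Direct path:    {direct_path:.2f} m")
print(f"Reflected path: {reflected_path:.2f} m")
print(f"Extra delay:    {extra_delay_ms:.1f} ms")   # about 8 ms with these numbers
```

That handful-of-milliseconds delay (and the loss from the bounce) is part of what the auto-tuning mentioned in point 4 has to compensate for.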

Yamaha YSP5600

My first real immersive sound bar experience happened last year with the Yamaha YSP-5600, which comes loaded with 40 (yes 40!) drivers. It’s a very meaty 26-pound sound bar with a height of 8.5 inches and width of 3.6 feet. I heard a few projects that I had mixed in Dolby Atmos played back on this system. Granted, even when correctly tuned it’s not going to sound the same as my dubbing stage or with dedicated home theater speakers, but knowing this I was pleasantly surprised. A few eyebrows were raised for sure. It was fun playing demo titles for friends, watching them turn around and look for surround speakers that weren’t there.

A number of the sound bars displayed at CES bring me to my next point, which honestly is a bit of a complaint. Many were very thin in physical design, often labeled as "ultra-thin," which to me means very small drivers, which tells me that there's an elevated crossover frequency to the subwoofer(s). Sure, I understand that they need to look sleek so they can sell and be acceptable for room aesthetics, but I'm an audio nerd. I WANT those low to mid frequencies carried by the main drivers; don't just jam ALL the low and mid frequencies into the sub. It'll be interesting to see how this plays out as these products reach market during the year.

Sony HT-ST5000

Besides immersive audio, most of these sound bars will play from a huge variety of sources, formats and specs, such as Blu-ray, Blu-ray UHD, DVD, DVD-Audio, streaming via network and USB, as well as connections for Wi-Fi, Bluetooth and 4K pass-through.

Some of these sound bars — like many things at CES 2017 — are supported with Amazon Alexa and Google Home. So, instead of fighting over the remote control, you and your family can now confuse Alexa with arguments over controlling your audio between “Game of Thrones” and Paw Patrol.

Finally, I probably won't be installing a sound bar on my dub stage for reference anytime soon, but I do feel that professionally it's very important for me to know the pros and the cons — and the quirks — so we can be aware of how our audio mixes will translate through these systems. And considering that many major studios and content creators are becoming increasingly ready to make immersive formats their default deliverable standard, especially now with Dolby Vision, I'd say it's a necessary responsibility.

Looking forward to seeing what NAB has up its sleeve on this as well.

Here are some of the more notable sound bars that debuted:

LG SJ9

Sony HT-ST5000: This sound bar is compatible with Google Home. They say it works well with ceilings as high as 17 feet. It's not DTS:X-capable yet, but Sony said that will happen by the end of the year.

LG SJ9: The LG SJ9 sound bar is currently noted by LG as "4K high resolution audio" (which is an impossible statement). It's possible that they mean it'll pass through a 4K signal, but the LG folks couldn't clarify. That snafu aside, it has a very wide dimensionality, which helps for stereo imaging. It will be Dolby Vision/HDR-capable via a future firmware upgrade.

The Klipsch "Theaterbar": This is another eyebrow-raiser. It's set to release in Q4 of 2017. There's no information on the web yet, but they were showcasing it at CES.

Pioneer Elite FS-EB70: There’s no information on the web yet, but they were showcasing this at CES.

Onkyo SBT-A500 Network: Also no information but it was shown at CES.


Formosa Group re-recording mixer and sound editor Tim Hoogenakker has over 20 years of experience in audio post for music, features and documentaries, television and home entertainment formats. He had stints at Prince’s Paisley Park Studios and POP Sound before joining Formosa.


Industry pros gather to discuss sound design for film and TV

By Mel Lambert

The third annual Mix Presents Sound for Film and Television conference attracted some 500 production and post pros to Sony Pictures Studios in Culver City, California, last week to hear about the art of sound design.

Subtitled “The Merging of Art, Technique and Tools,” the one-day conference kicked off with a keynote address by re-recording mixer Gary Bourgeois, followed by several panel discussions and presentations from Avid, Auro-3D, Steinberg, JBL Professional and Dolby.

L-R: Brett G. Crockett, Tom McCarthy, Gary Bourgeois and Mark Ulano.

During his keynote, Bourgeois advised, “Sound editors and re-recording mixers should be aware of the talent they bring to the project as storytellers. We need to explore the best ways of using technology to be creative and support the production.” He concluded with some more sage advice: “Do not let the geek take over! Instead,” he stressed, “show the passion we have for the final product.”

Other highlights included a “Sound Inspiration Within the Storytelling Process” panel organized by MPSE and moderated by Carolyn Giardina from The Hollywood Reporter. Panelists included Will Files, Mark P. Stoeckinger, Paula Fairfield, Ben L. Cook, Paul Menichini and Harry Cohen. The discussion focused on where sound designers find their inspiration and the paths they take to create unique soundtracks.

CAS hosted a sound-mixing panel titled “Workflow for Musicals in Film and Television Production” that focused on live recording and other techniques to give musical productions a more “organic” sound. Moderated by Glen Trew, the panel included music editor David Klotz, production mixer Phil Palmer, playback specialist Gary Raymond, production mixer Peter Kurland, re-recording mixer Gary Bourgeois and music editor Tim Boot.

Sound Inspiration Within the Storytelling Process panel (L-R): Will Files, Ben L. Cook, Mark P. Stoeckinger, Carolyn Giardina, Harry Cohen, Paula Fairfield and Paul Menichini.

Sponsored by Westlake Pro, a panel called “Building an Immersive Room: Small, Medium and Large” covered basic requirements of system design and setup — including console/DAW integration and monitor placement — to ensure that soundtracks translate to the outside world. Moderated by Westlake Pro’s CTO, Jonathan Deans, the panel was made up of Bill Johnston from Formosa Group, Nathan Oishi from Sony Pictures Studios, Jerry Steckling of JSX, Brett G. Crockett from Dolby Labs, Peter Chaikin from JBL and re-recording mixers Mark Binder and Tom Brewer.

Avid hosted a fascinating panel discussion called “The Sound of Stranger Things,” which focused on the soundtrack for the Netflix original series, with its signature sound design and ‘80s-style, synthesizer-based music score. Moderated by Avid’s Ozzie Sutherland, the panel included sound designer Craig Henighan, SSE Brad North, music editor David Klotz and sound effects editor Jordan Wilby. “We drew our inspiration from such sci-fi films as Alien, The Thing and Predator,” Henighan said. Re-recording mixers Adam Jenkins and Joe Barnett joined the discussion via Skype from the Technicolor Seward stage.

The Barbra Streisand Scoring Stage.

A stand-out event was the Production Sound Pavilion held on the Barbra Streisand Scoring Stage, where leading production sound mixers showed off their sound carts, with manufacturers also demonstrating wireless, microphone and recorder technologies. “It all starts on location, with a voice in a microphone and a clean recording,” offered CAS president Mark Ulano. “But over the past decade production sound has become much more complex, as technologies and workflows evolved both on-set and in post production.”

Sound carts on display included Tom Curley’s Sound Devices 788t recorder and Sound Devices CL9 mixer combination; Michael Martin’s Zaxcom Nomad 12 recorder and Zaxcom Mix-8 mixer; Danny Maurer’s Sound Devices 664 recorder and Sound Devices 633 mixer; Devendra Cleary’s Sound Devices 970, Pix 260i and 664 recorders with Yamaha 01V and Sound Devices CL-12 mixers; Charles Mead’s Sound Devices 688 recorder with CL-12 mixer; James DeVotre’s Sound Devices 688 recorder with CL-12 Alaia mixer; Blas Kisic’s Boom Recorder and Sound Devices 788 with Mackie Onyx 1620 mixer; Fernando Muga’s Sound Devices 788 and 633 recorders with CL-9 mixer; Thomas Cassetta’s Zaxcom Nomad 12 recorder with Zaxcom Oasis mixer; Chris Howland’s Boom Recorder, Sound Devices and 633 recorders, with Mackie Onyx 1620 and Sound Devices CL-12 mixers; Brian Patrick Curley’s Sound Devices 688 and 664 recorders with Sound Devices CL-12 Alaia mixer; Daniel Powell’s Zoom F8 recorder/mixer; and Landon Orsillo’s Sound Devices 688 recorder.

Lon Neumann

CAS also organized an interesting pair of Production Sound Workshops. During the first one, consultant Lon Neumann addressed loudness control with an overview of loudness levels and surround sound management of cinema content for distribution via broadcast television.

The second presentation, hosted by Bob Bronow (production mixer on Deadliest Catch) and Joe Foglia (Marley & Me, Scrubs and From the Earth to the Moon), covered EQ and noise reduction in the field. While it was conceded that, traditionally, any type of signal processing on location is strongly discouraged — such decisions normally being handled in post — the advent of multitrack recording and isolated channels means that it is becoming more common for mixers to use processing on the dailies mix track.

New for this year was a Sound Reel Showcase that featured short samples from award-contending and to-be-released films. The audience in the Dolby Atmos- and Auro 3D-equipped William Holden Theatre was treated to a high-action sequence from Mel Gibson’s new film, Hacksaw Ridge, which is scheduled for release on November 4. It follows the true story of a WWII army medic who served during the harrowing Battle of Okinawa and became the first conscientious objector to be awarded the Medal of Honor. The highly detailed Dolby Atmos soundtrack was created by SSE/sound designer/recording mixer Robert Mackenzie working at Sony Pictures Studios with dialogue editor Jed M. Dodge and ADR supervisor Kimberly Harris, with re-recording mixers Andy Wright and Kevin O’Connell.

Mel Lambert is principal of Content Creators, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

All photos by Mel Lambert.

 


AES Paris: A look into immersive audio, cinematic sound design

By Mel Lambert

The Audio Engineering Society (AES) came to the City of Light in early June with a technical program and companion exhibition that attracted close to 2,600 pre-registrants, including some 700 full-pass attendees. “The Paris International Convention surpassed all of our expectations,” AES executive director Bob Moses told postPerspective. “The research community continues to thrive — there was great interest in spatial sound and networked audio — while the business community once again embraced the show, with a 30 percent increase in exhibitors over last year’s show in Warsaw.” Moses confirmed that next year’s European convention will be held in Berlin, “probably in May.”

Tom Downes

Getting Immersed
There were plenty of new techniques and technologies targeting the post community. One presentation, in particular, caught my eye, since it posed some relevant questions about how we perceive immersive sound. In the session, "Immersive Audio Techniques in Cinematic Sound Design: Context and Spatialization," co-authors Tom Downes and Malachy Ronan — both of whom are AES student members currently studying at the University of Limerick's Digital Media and Arts Research Center, Ireland — questioned the role of increased spatial resolution in cinematic sound design. "Our paper considered the context that prompted the use of elevated loudspeakers, and examined the relevance of electro-acoustic spatialization techniques to 3D cinematic formats," offered Downes. The duo brought with them a scene from writer/director Wolfgang Petersen's submarine classic, Das Boot, to illustrate their thesis.

Using the university's Spatialization and Auditory Display Environment (SpADE) linked to an Apple Logic Pro 9 digital audio workstation and a 7.1.4 playback configuration — with four overhead speakers — the researchers correlated visual stimuli with audio playback. (A 7.1-channel horizontal playback format was determined by the DAW's I/O capabilities.) Different dynamic and static timbre spatializations were achieved by using separate EQ plug-ins assigned to horizontal and elevated loudspeaker channels.

“Sources were band-passed and a 3dB boost applied at 7kHz to enhance the perception of elevation,” Downes continued. “A static approach was used on atmospheric sounds to layer the soundscape using their dominant frequencies, whereas bubble sounds were also subjected to static timbre spatialization; the dynamic approach was applied when attempting to bridge the gap between elevated and horizontal loudspeakers. Sound sources were split, with high frequencies applied to the elevated layer, and low frequencies to the horizontal layer. By automating the parameters within both sets of equalization, a top-to-bottom trajectory was perceived. However, although the movement was evident, it was not perceived as immersive.”
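To illustrate the kind of elevation cue Downes describes, here is a minimal sketch of a peaking EQ that applies a 3dB boost at 7kHz. The sample rate and Q are assumed values, and this is in no way the researchers' actual SpADE/Logic Pro chain; the coefficients follow the widely used "Audio EQ Cookbook" formulas.

```python
# A minimal sketch, assuming a 48kHz sample rate and Q of 1.0.
import math

def peaking_eq_coeffs(fs, f0, gain_db, q):
    """Return normalized biquad coefficients (b0, b1, b2, a1, a2) for a peaking EQ."""
    a = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b0 = 1.0 + alpha * a
    b1 = -2.0 * math.cos(w0)
    b2 = 1.0 - alpha * a
    a0 = 1.0 + alpha / a
    a1 = -2.0 * math.cos(w0)
    a2 = 1.0 - alpha / a
    return b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0

def apply_biquad(samples, coeffs):
    """Filter a mono block of samples with a direct-form I biquad."""
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# The feed for the elevated loudspeaker channels gets a +3dB bump at 7kHz.
coeffs = peaking_eq_coeffs(fs=48000, f0=7000, gain_db=3.0, q=1.0)
elevated_feed = apply_biquad([0.0, 1.0, 0.0, 0.0, 0.0], coeffs)  # impulse, to inspect the response
```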

The paper concluded that although multi-channel electro-acoustic spatialization techniques are seen as a rich source of ideas for sound designers, without sufficient visual context they are limited in the types of techniques that can be applied. “Screenwriters and movie directors must begin to conceptualize new ways of utilizing this enhanced spatial resolution,” said Downes.

Rich Nevens

Tools
Merging Technologies demonstrated immersive-sound applications for the v.10 release of its Pyramix DAW software, with up to 30.2-channel routing and panning, including compatibility with Barco Auro, Dolby Atmos and other surround formats, without the need for additional plug-ins or apps. Meanwhile, Avid showcased additions for the modular S6 Assignable Digital Console, including a Joystick Panning Module and a new Master Film Module with PEC/DIR switching.

“The S6 offers improved ergonomics,” explained Avid’s Rich Nevens, director of worldwide pro audio solutions, “including enhanced visibility across the control surface, and full Ethernet connectivity between eight-fader channel modules and the Pro Tools DSP engines.” Reportedly, more than 1,000 S6 systems have been sold worldwide since its introduction in December 2013, including two recent installations at Sony Pictures Studios in Culver City, California.

Finally, Eventide came to the Paris AES Convention with a remarkable new multichannel/multi-element processing system that was demonstrated by invitation only to selected customers and distributors; it will be formally introduced during the upcoming AES Convention in Los Angeles in October. Targeted at film/TV post production, the rackmount device features 32 inputs and 32 discrete outputs per DSP module, thereby allowing four multichannel effects paths to be implemented simultaneously. A quartet of high-speed ARM processors mounted on plug-in boards can be swapped out when more powerful DSP chips become available.

Joe Bamberg and Ray Maxwell

“Initially, effects will be drawn from our current H8000 and H9 processors — with other EQ, dynamics plus reverb effects in development — and can be run in parallel or in series, to effectively create a fully-programmable, four-element channel strip per processing engine,” explained Eventide software engineer Joe Bamberg.

“Remote control plug-ins for Avid Pro Tools and other DAWs are in development,” said Eventide’s VP of sales and marketing, Ray Maxwell. The device can also be used via a stand-alone application for Apple iPad tablets or Windows/Macintosh PCs.

Multi-channel I/O and processing options will enable object-based EQ, dynamics and ambience processing for immersive-sound production. The end-user price for the codenamed product, which will also feature Audinate Dante, Thunderbolt, Ravenna/AES67 and AVB networking, has yet to be announced.

Mel Lambert is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.


Learning about LTO and Premiere workflows

By Chelsea Taylor

In late March, I attended a workflow event by Facilis Technology and StorageDNA in New York City. I didn’t know much going in other than it would be about collaborative workflows and shared storage for Adobe Premiere. While this event was likely set up to sell some systems, I did end up learning some worthwhile information about archiving and backup.

Full disclosure: going into this event I knew very little about LTO archiving. Previously, I had been archiving all of my projects by throwing a hard drive into the corner of my edit suite. Well, not really, but close! It seems that a lot of companies out there don't put too much importance on archiving until after it becomes a problem ("All of our edits are crashing and we don't know why!").

At my last editing job, where we edited short-form content on Avid, our media manager would consolidate projects in Avid, create a FileMaker database that cataloged footage, manually add metadata, then put the archived files onto different G-Tech G-RAID drives (which, of course, could die after a couple of years). In short, it wasn't the best way to archive and back up media, especially when an editor wanted to find something. They would have to walk over to the computer where the database was, figure out how to use the UI, search for the project (if it had the right metadata), find the physical drive, plug the drive into their machine, go through different files/folders until they found what they were looking for, copy however many large files to the SAN, and then start working. Suffice it to say, I had a lot to learn about archiving and was very excited to attend this event.

I arrived at the event about 30 minutes early, which turned out to be a good thing because I was immediately greeted by some of the experts and presenters from Facilis and StorageDNA. Not fully realizing who I was talking to, I started asking tons of questions about their products. What does StorageDNA do? How can it integrate with Premiere? Why is LTO tape archiving better? Who adds the metadata? How fast can you access the backup? And before I knew it, I was in a heated discussion with Jeff Krueger, worldwide VP of sales at StorageDNA, and Doug Hynes, director of product and solution marketing at StorageDNA, about their products and the importance of archiving. I was fully inspired to archive and had tons more questions, but our conversation got cut short as the event was about to begin.

While the Facilis offerings look cool (I want all of them!), I wasn’t at the event to buy things — I wanted to hear about the workflow and integration with Adobe Premiere (which is a language I better understand). As someone who would be actually using these products and not in charge of buying them, I didn’t care about the tech specs or new features. “Secure sharing with permissions. Low-level media management. Block-level virtualized storage pools.” It was hardware spec after hardware spec (which you can check out on their website). As the presenter spoke of the new features and specifications of their new models, I just kept thinking about what Jeff Krueger had told me right before the event about archiving, which I will share with you here.

StorageDNA presented on a product line called DNAevolution, which is an archive engine built on LTO tapes. Each model provides different levels of LTO automation, LTO drives and server hardware. As an editor, I was more concerned with the workflow.

The StorageDNA Workflow for Premiere
1. Card contents are ingested onto the SAN.
2. The high-res files are written to LTO/LTFS through DNAevolution and become permanent camera master files.
3. Low-res proxies are created and ingested onto the SAN for use in editorial. DNAevolution is pointed to the proxies, indexes them and links to the high-res clips on LTO.
4. Once the files are written to and verified on LTO, you can delete the high-res files from your spinning disk storage.
5. The editor works with the low-res proxies in Premiere Pro.
6. When complete, the editor exports an EDL; DNAevolution parses it and locates the high-res files on LTO via its database (a rough sketch of this kind of EDL parse follows this list).
7. DNAevolution restores high-res files to the finishing station or SAN storage.
8. The editor can relink the media and distribute in high-res/4K.
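To give a feel for what step 6 involves conceptually, here is a rough sketch of parsing an EDL for the reels and clip names a restore would need. This is purely illustrative Python, not StorageDNA's actual parser or API, and it assumes a CMX3600-style EDL with "FROM CLIP NAME" comments:

```python
# Illustrative only -- assumes event lines that start with an event number
# followed by a reel name, plus "* FROM CLIP NAME:" comment lines.
import re
import sys

EVENT_RE = re.compile(r"^\d{3,6}\s+(\S+)\s+\S+\s+\S+")            # event, reel, track, transition
CLIP_NAME_RE = re.compile(r"^\*\s*FROM CLIP NAME:\s*(.+)$", re.IGNORECASE)

def sources_in_edl(path):
    """Collect the unique reel names and clip names referenced by the cut."""
    reels, clip_names = set(), set()
    with open(path, "r", errors="replace") as edl:
        for raw_line in edl:
            line = raw_line.strip()
            event = EVENT_RE.match(line)
            if event:
                reels.add(event.group(1))
                continue
            clip = CLIP_NAME_RE.match(line)
            if clip:
                clip_names.add(clip.group(1).strip())
    return sorted(reels), sorted(clip_names)

if __name__ == "__main__":
    reels, clips = sources_in_edl(sys.argv[1])
    print("Reels referenced by the cut:")
    print("\n".join(reels) or "(none found)")
    print("\nClip names referenced by the cut:")
    print("\n".join(clips) or "(none found)")
```

With a list like that in hand, only the clips actually used in the cut need to come back from tape, which is what makes the proxy-offline/high-res-restore approach practical.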

The StorageDNA Archive Workflow
1. In the DNAevolution Archive Console, select your Premiere Pro project file.
2. DNAevolution scans the project, and generates a list of files to be archived. It then writes all associated media files and the project itself to LTO tape(s).
3. Once the files are written to and verified on LTO, you can delete the high-res files from your spinning disk storage.

Why I Was Impressed
All of your media is immediately backed up, ensuring it is in a safe place and not taking up your local or shared storage. You can delete the high-res files from your SAN storage immediately and work with proxies, onlining later down the line. The problem I’ve had with SAN storage is that it fills up very quickly with large files, eventually slowing down your systems and leading to playback problems. Why have all of your RAW unused media just sitting there eating up your valuable space when you can free it up immediately?

DNAevolution works easily with Adobe’s Premiere, Prelude and Media Encoder. It uses the Adobe CC toolset to automate the process of creating LTO/LTFS camera masters while creating previews via Media Encoder.

DNAevolution archives all media from your Premiere projects with a single click and notifies you if files are missing. It also checks your files for existing camera and clip metadata, meaning that if you add all of that in at the start, archiving becomes much easier.

You have direct access to files on LTO tape, enabling third-party applications to access media directly on LTO for tasks such as transcoding, partial restore and playout. DNAevolution's Archive Asset Management toolset allows you to browse/search archived content and provides proxy playback. It even has drag-and-drop functionality with Premiere, where you can literally drop a file straight from the archive into your Premiere timeline, with little rendering, and start editing.

I have never tested an LTO archive workflow and am curious what other people’s experiences have been like. Feel free to leave your thoughts on LTO vs. Cloud vs. Disk in the comments below.

Chelsea Taylor is a freelance editor who has worked on a wide range of content: from viral videos and sizzles to web series and short films. She also works as an assistant editor on feature films and documentaries. Check out her site at StillRenderingProductions.com.


Talking storage with LaCie at NAB

By Isaac Spedding

As I power-walked my way through the NAB show floor, carefully avoiding eye contact with hopeful booth minders, my mind was trying to come up with fancy questions to ask the team at LaCie that would cement my knowledge of storage solutions and justify my press badge. After drawing a blank, I decided to just ask what I had always wanted to know about storage companies in general: How reliable are your drives and how do you prove it? Why is there a blue bubble on your enclosures? Why are drives still so damn heavy?

Fortunately, I met with two members of the LaCie team, who kindly answered my tough questions with valuable information and great stories. I should note that just prior to this NAB trip I had submitted an RMA for 10 ADATA USB 3.0 drives, as all the connectors on them had become loose and fallen out of, or into, the single-piece enclosure. So, as you can imagine, at that moment in time, I was not exactly the biggest fan of hard drive companies in general.

“We are never going to tell you (a drive) will never fail,” said Clement Barberis, marketing manager for LaCie. “We tell people to keep multiple copies. It doesn’t matter how, just copies. It’s not about losing your drive it’s about losing your data.”

LaCie offers a three- to five-year warranty on all its products and has several services available, including fast replacement and data recovery. Connectors and drives are the two main points of failure for any portable drive product.


LaCie’s Clement Barberis and Kristin Macrostie.

Owned by Seagate, LaCie has a very close connection with that team and can select drives based on what the product needs. Design, development and target user all have an impact on drive and connection selection. Importantly, LaCie decides on the connection options not by what is newest but by what works best with the internal drive speed.

Their brand new 12-bay enclosure, the LaCie 12big Thunderbolt 3 (our main image), takes full advantage of Thunderbolt 3 speeds, and with a 96TB capacity (around 100 hours of uncompressed 4K), the system can transfer around 2600 MB/s (yes, megabytes, not megabits). It is targeted at small production houses shooting high-resolution material.

Why So Heavy?
After Barberis showed me the new LaCie 12big, I asked why the form factor and weight had not been redesigned after all these years. I mean, 96TB is great and all, but it's not light — at 17.6kg (38.9 pounds) it's not easy to take on a plane. Currently, the largest single drive available is 8TB and features six platters inside the traditional form factor. Each additional platter increases the weight of each drive (along with its capacity), and that higher per-drive capacity means a smaller form factor for a drive array is possible. That's why drive arrays have been staying the same size while gaining weight and storage capacity. So your sleek drive will be getting heavier.

LaCie produces several ranges of hard drives with different designs. This is most visually noticeable in LaCie's Rugged drive series, which features bright orange bumpers. Other products have a "Porsche-like" design and feature the blue LaCie bubble. If you are like me, you might be curious how this look came about.


According to Kristin MacRostie, PR manager for LaCie, “The company founder, Philippe Spruch, wasn’t happy with the design of the products LaCie was putting out 25 years ago — in his words, they were ‘geeky and industrial.’ So, Spruch took a hard drive and a sticky note and he wrote, ‘Our hard drives look like shit, please help,’ and messengered it over to (designer) Philippe Starck’s office in Paris. Starck called Spruch right away.”

The sleek design started with Philippe Starck and continued with Neil Poulton, an apprentice to Starck who was brought on to design the drives we see today. The drive designs target the intended consumers, with the "Porsche design" aligning itself with Apple users.

Hearing the story behind LaCie's design choices, the recommendation to keep multiple drives and not rely on just one, and the explanation of why each product is designed the way it is convinced me that LaCie is producing drive solutions that are built for reliability and usability. Although not the cheapest option on the market today, the LaCie solutions justify this with solid design and logic behind the choice of components, connectors and cost. Besides, at the end of the day, your data is the most important thing, and you shouldn't be keeping it on the cheapest possible drive you found at Best Buy.

Isaac Spedding is a New Zealand-based creative technical director, camera operator and editor. You can follow him on Twitter @Isaacspedding.


Dolby Audio at NAB 2016

By Jonathan Abrams

Dolby, founded over 50 years ago as an audio company, is elevating the experience of watching movies and TV content through new technologies in audio and video, the latter of which is a relatively new area for the company’s offerings. This is being done with Dolby AC-4 and Dolby Atmos for audio, and Dolby Vision for video. In this post, the focus will be on Dolby’s audio technologies.

Why would Dolby create AC-4? Dolby AC-3 is over 20 years old, and as a function of its age, it does not do new things well. What are those new things and how will Dolby AC-4 elevate your audio experience?

First, let’s define some acronyms, as they are part of the past and present of Dolby audio in broadcasting. OTA stands for Over The Air, as in what you can receive with an antenna. ATSC stands for Advanced Television Systems Committee, an organization based in the US that standardized HDTV (ATSC 1.0) in the US 20 years ago and is working to standardize Ultra HDTV broadcasts as ATSC 3.0. Ultra HD is referred to as UHD.

Now, some math. Dolby AC-3, which is used with ATSC 1.0, uses up to 384 kbps for 5.1 audio. Dolby AC-4 needs only 128 kbps for 5.1 audio. That increased coding efficiency, along with a maximum bit rate of 640 kbps, leaves 512 kbps to work with. What can be done with that extra 512 kbps?
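Before answering that, here is the bit budget spelled out as a quick sanity check; this just restates the figures quoted above.

```python
# A trivial restatement of the numbers above -- nothing here beyond the quoted figures.
AC3_51_KBPS = 384      # what Dolby AC-3 uses (up to) for 5.1 under ATSC 1.0
AC4_51_KBPS = 128      # what Dolby AC-4 needs for 5.1
AC4_MAX_KBPS = 640     # the maximum AC-4 bit rate cited above

leftover = AC4_MAX_KBPS - AC4_51_KBPS
print(f"Bits left over for additional audio content: {leftover} kbps")   # 512 kbps
```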

If you are watching sporting events, Dolby AC-4 allows broadcasters to provide you with the option to select which audio stream you are listening to. You can choose which team’s audio broadcast to listen to, listen to another language, hear what is happening on the field of play, or listen to the audio description of what is happening. This could be applicable to other types of broadcasts, though the demos I have heard, including one at this year’s NAB Show, have all been for sporting events.

Dolby AC-4 allows the viewer to select from three types of dialog enhancement: none, low and high. The dialog enhancement processing is done at the encoder, where it runs a sophisticated dialog identification algorithm and then creates a parametric description that is included as metadata in the Dolby AC-4 bit stream.

What if I told you that, after implementing what I described above in a Dolby AC-4 bit stream, there were still bits available for other audio content? It is true, and Dolby AC-4 is what allows Dolby Atmos, a next-generation, rich and complex object audio system, to be inside ATSC 3.0 audio streams in the US. At my NAB demo, I heard a clip of Mad Max: Fury Road, which was mixed in Dolby Atmos, from a Yamaha sound bar. I perceived elements of the mix coming from places other than the screen, even though the sound bar was where all of the sound waves originated. Whatever is being done with psychoacoustics to make the experience of surround sound from a sound bar possible is convincing.

The advancements in both the coding and presentation of audio have applications beyond broadcasting. The next challenge that Dolby is taking on is mobile. Dolby's audio codecs are being licensed to mobile applications, which allows them to be pushed out via apps, which in turn removes the dependency on the mobile device's OS. I heard a Dolby Atmos clip from a Samsung mobile device. While the device had to be centered in front of me to perceive surround sound, I did perceive it.

Years of R&D at Dolby have yielded efficiencies in coding and new ways of presenting audio that will elevate your experience, from home theater to mobile and, once broadcasters adopt ATSC 3.0, Ultra HDTV.

Check out my coverage of Dolby’s Dolby Vision offerings at NAB as well.

Jonathan S. Abrams is the Chief Technical Engineer at Nutmeg, a creative marketing, production and post resource.


NAB 2016: My pick for this year’s gamechanger is Lytro

By Isaac Spedding

There has been a lot of buzz around what the gamechanger was at this year’s NAB show. What was released that will really change the way we all work? I was present for the conference session where an eloquent Jon Karafin, head of Light Field Video, explained that Lytro has created a camera system that essentially captures every aspect of your shot and allows you to recreate it in any way, at any position you want, using light field technology.

Typically, with game-changing technology comes uncertainty from the established industry, and that was made clear during the rushed Q+A session, where several people (after congratulating the Lytro team) nervously asked if they had thought about the fate of positions in the industry that the technology would make redundant. Jon's reply was that core positions won't change; however, the way in which they operate will. The mob of eager filmmakers, producers and young scientists that queued to meet him (I was one of them) was another sign that the technology is incredibly interesting and exciting for many.

"It's a birth of a new technology that very well could replace the way that Hollywood makes films." These are words from Robert Stromberg (DGA), CCO and founder of The Virtual Reality Company, in the preview video for Lytro's debut film Life, which will be screened on Tuesday to an audience of 500 lucky attendees. Karafin and Jason Rosenthal, CEO at Lytro, will provide a Lytro Cinema demonstration and breakdown of the short film.

Lytro Cinema is my pick for the game-changing technology of NAB 2016, and it looks like it will not only advance capture, but also change post production methodology and open up new roles, possibilities and challenges for everyone in the industry.

Isaac Spedding is a New Zealand-based creative technical director, camera operator and editor. You can follow him on Twitter @Isaacspedding.


Nvidia's GTC 2016: VR, A.I. and self-driving cars, oh my!

By Mike McCarthy

Last week, I had the opportunity to attend Nvidia’s GPU Technology Conference, GTC 2016. Five thousand people filled the San Jose Convention Center for nearly a week to learn about GPU technology and how to use it to change our world. GPUs were originally designed to process graphics (hence the name), but are now used to accelerate all sorts of other computational tasks.

The current focus of GPU computing is in three areas:

Virtual reality is a logical extension of the original graphics processing design. VR requires high frame rates with low latency to keep up with the user's head movements; otherwise the lag results in motion sickness. This requires lots of processing power, and the imminent release of the Oculus Rift and HTC Vive head-mounted displays is sure to sell many high-end graphics cards. The new Quadro M6000 24GB PCIe card and M5500 mobile GPU have been released to meet this need.

Autonomous vehicles are being developed that will slowly replace many or all of the driver’s current roles in operating a vehicle. This requires processing lots of sensor input data and making decisions in realtime based on inferences made from that information. Nvidia has developed a number of hardware solutions to meet these needs, with the Drive PX and Drive PX2 expected to be the hardware platform that many car manufacturers rely on to meet those processing needs.

This author calls the Tesla P100 "a monster of a chip."

Artificial Intelligence has made significant leaps recently, and the need to process large data sets has grown exponentially. To that end, Nvidia has focused their newest chip development — not on graphics, at least initially — on a deep learning super computer chip. The first Pascal generation GPU, the Tesla P100 is a monster of a chip, with 15 billion 16nm transistors on a 600mm2 die. It should be twice as fast as current options for most tasks, and even more for double precision work and/or large data sets. The chip is initially available in the new DGX-1 supercomputer for $129K, which includes eight of the new GPUs connected in NVLink. I am looking forward to seeing the same graphics processing technology on a PCIe-based Quadro card at some point in the future.

While those three applications for GPU computing all had dedicated hardware released for them, Nvidia has also been working to make sure that software will be developed that uses the level of processing power they can now offer users. To that end, there are all sorts of SDKs and libraries they have been releasing to help developers harness the power of the hardware that is now available. For VR, they have Iray VR, which is a raytracing toolset for creating photorealistic VR experiences, and Iray VR Lite, which allows users to create still renderings to be previewed with HMD displays. They also have a broader VRWorks collection of tools for helping software developers adapt their work for VR experiences. For Autonomous vehicles they have developed libraries of tools for mapping, sensor image analysis, and a deep-learning decision-making neural net for driving called DaveNet. For A.I. computing, cuDNN is for accelerating emerging deep-learning neural networks, running on GPU clusters and supercomputing systems like the new DGX-1.

What Does This Mean for Post Production?
So from a post perspective (ha!), what does this all mean for the future of post production? First, newer and faster GPUs are coming, even if they are not here yet. Much farther off, deep-learning networks may someday log and index all of your footage for you. But the biggest change coming down the pipeline is virtual reality, led by the upcoming commercially available head-mounted displays (HMDs). Gaming will drive HMDs into the hands of consumers, and HMDs in the hands of consumers will drive demand for a new type of experience for storytelling, advertising and expression.

As I see it, VR can be created in a variety of continually more immersive steps. The starting point is the HMD, placing the viewer into an isolated and large-feeling environment. Existing flat video or stereoscopic content can be viewed without large screens, requiring only minimal processing to format the image for the HMD. The next step is a big jump — when we begin to support head tracking — to allow the viewer to control the direction that they are viewing. This is where we begin to see changes required at all stages of the content production and post pipeline. Scenes need to be created and filmed at 360 degrees.

At the conference, a high-fidelity VR simulation that uses scientifically accurate satellite imagery and data from NASA was shown.

The cameras required to capture 360 degrees of imagery produce a series of video streams that need to be stitched together into a single image, and that image needs to be edited and processed. Then the entire image is made available to the viewer, who then chooses which angle they want to view as it is played. This can be done as a flattened image sphere or, with more source data and processing, as a stereoscopic experience. The user can control the angle they view the scene from, but not the location they are viewing from, which was dictated by the physical placement of the 360-camera system. Video-Stitch just released a new all-in-one package for capturing, recording and streaming 360 video called the Orah 4i, which may make that format more accessible to consumers.
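To give a sense of what "choosing which angle to view" means in practice, here is a simplified sketch of how a viewing direction maps into a stitched equirectangular frame. The frame size is an assumption, and real players render a full perspective window on the GPU rather than sampling a single pixel:

```python
# A simplified sketch of the mapping only -- a real player renders a full
# perspective view on the GPU. The 3840x1920 frame size is an assumption.
import math

def direction_to_equirect_pixel(yaw_deg, pitch_deg, frame_width, frame_height):
    """Map a view direction (yaw/pitch, in degrees) to x, y in an equirectangular frame."""
    yaw = math.radians(yaw_deg)       # -180..180, with 0 looking straight ahead
    pitch = math.radians(pitch_deg)   # -90 (straight down) .. 90 (straight up)
    # In an equirectangular projection, longitude maps linearly to x and latitude to y.
    x = (yaw / (2.0 * math.pi) + 0.5) * (frame_width - 1)
    y = (0.5 - pitch / math.pi) * (frame_height - 1)
    return int(round(x)), int(round(y))

# Example: the viewer turns 30 degrees to the right and looks up 10 degrees.
print(direction_to_equirect_pixel(30, 10, 3840, 1920))
```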

Allowing the user to fully control their perspective and move around within a scene is what makes true VR so unique, but is also much more challenging to create content for. All viewed images must be rendered on the fly, based on input from the user’s motion and position. These renders require all content to exist in 3D space, for the perspective to be generated correctly. While this is nearly impossible for traditional camera footage, it is purely a render challenge for animated content — rendering that used to take weeks must be done in realtime, and at much higher frame rates to keep up with user movement.

For any camera image, depth information is required, which is possible to estimate with calculations based on motion, but not with the level of accuracy required. Instead, if many angles are recorded simultaneously, a 3D analysis of the combination can generate a 3D version of the scene. This is already being done in limited cases for advanced VFX work, but it would require taking it to a whole new level. For static content, a 3D model can be created by processing lots of still images, but storytelling will require 3D motion within this environment. This all seems pretty far out there for a traditional post workflow, but there is one case that will lend itself to this format.

Motion capture-based productions already have the 3D data required to render VR perspectives, because VR is the same basic concept as motion tracking cinematography, except that the viewer controls the “camera” instead of the director. We are already seeing photorealistic motion capture movies showing up in theaters, so these are probably the first types of productions that will make the shift to producing full VR content.

The Maxwell Kepler family of cards.

Viewing this content is still a challenge, where again Nvidia GPUs are used on the consumer end. Any VR viewing requires sensor input to track the viewer, which must be processed, and the resulting image must be rendered, usually twice for stereo viewing. This requires a significant level of processing power, so Nvidia has created two tiers of hardware recommendations to ensure that users can get a quality VR experience. For consumers, the VR-Ready program includes complete systems based on the GeForce 970 or higher GPUs, which meet the requirements for comfortable VR viewing. VR-Ready for Professionals is a similar program for the Quadro line, including the M5000 and higher GPUs, included in complete systems from partner ISVs. Currently, MSI's new WT72 laptop with the new M5500 GPU is the only mobile platform certified VR Ready for Pros. The new mobile Quadro M5500 has the same system architecture as the desktop workstation Quadro M5000, with all 2048 CUDA cores and 8GB RAM.

While the new top-end Maxwell-based Quadro GPUs are exciting, I am really looking forward to seeing Nvidia’s Pascal technology used for graphics processing in the near future. In the meantime, we have enough performance with existing systems to start processing 360-degree videos and VR experiences.

Mike McCarthy is a freelance post engineer and media workflow consultant based in Northern California. He shares his 10 years of technology experience on www.hd4pc.com, and he can be reached at mike@hd4pc.com.

Creative Thievery: Who owns the art?

By Kristine Pregot

Last month, I had the pleasure of checking out a very compelling panel at SXSW, led by Mary Crosse of Derby Content: Creative Thievery = What’s Yours is Mine?

It was a packed house, and I heard many people mention that this was their absolute favorite panel at SXSW, so it seemed like a good idea to continue the conversation.

How did you conceptualize this panel?
I had seen Richard Prince’s Instagram exhibit last year, and it caused a heated debate about what is art and who owns what outside of the typical art world. I felt it would be interesting to bring a debate about fine art into discussion with professionals in film, interactive and music attending SXSW. These appropriation discussions are so relevant to what we do everyday in the more commercial arts world.

Tell me about the panelists?
I had top panelists participate, including Sergio Munoz Sarmiento, a fine arts lawyer; Hrag Vartanian, the co-founder/editor-in-chief of Hyperallergic, a fine arts blogazine; and Jonathan Rosen, an appropriation artist and ex-advertising creative and commercial director. This trio gave us really unique and informed insights into all aspects of the examples I showed.

The first subject you talked about was Richard Prince taking a photograph of the famous Marlboro Man ad and selling this photo for a lot of money.
This is a pretty famous case in the art world. Richard Prince has made his career off of appropriating others’ work in the extreme. The panel had a mixed reaction to this, although by a near unanimous vote of hands, the crowd was much harsher and felt that what Richard Prince did was morally wrong.

Marlboro

What are your thoughts about Richard Prince?
I personally find the work to be an interesting statement on art, meaning and intent in a piece and on ownership. The fact that it has created so much dialogue about what is fine art over the years makes him relevant. I think many people don’t want to give him that much credit, and perhaps I shouldn’t. However, I think he’s made his art in the act of stealing itself, and if you look at this statement that he’s made with his work in that way, then it’s easier to see it as art.

I thought that Mike Tyson's tattoo artist and his lawsuit against Warner Bros. for the use of this artwork in the film, The Hangover II, made for very interesting subject matter. Can you break this case down a little bit?
The tattoo artist who designed Mike Tyson's face tattoo sued Warner Bros. for copyright infringement over Hangover II. In the film, Stu (Ed Helms) wakes up after a crazy night of partying in a Bangkok hotel with a replica of Mike Tyson's face tattoo. The tattoo artist designed it specifically for Mike Tyson and claimed it was a copyrighted work that Warner Bros. had no right to put in the film or on any promotional materials for the film.

The lawsuit nearly affected the release of the film, and there was a possibility that if the two parties couldn’t come to an agreement, the face tattoo would have to be digitally removed for the home video release. In the end, Warner Bros. settled the claim for an undisclosed amount.

This case does open up an interesting discussion about an individual not even owning the design tattooed on their body without a legal document from the tattoo artist saying as much. It also creates the need for filmmakers and advertisers to clear one more element in our work.

What surprised you the most about the panel? Did the audience’s morally correct “vote” surprise you?
We decided that after we discussed what was acceptable in the art world and what was legally right, we’d ask the audience what they felt was morally right. The audience, nearly unanimously, voted together on all examples shown, and very differently from how the art world felt things were acceptable and how the court ruled.

Kristine Pregot is a senior producer at New York City-based Nice Shoes.