Nvidia, AMD and Intel news from Computex

By Mike McCarthy

A number of new technologies and products were just announced at this year’s Computex event in Taipei, Taiwan. Let’s take a look at ones that seem relevant to media creation pros.

Nvidia released a line of mobile workstation GPUs based on its newest Turing architecture. Like the GeForce lineup, the Turing line has versions without the RTX designation. The Quadro RTX 5000, 4000 and 3000 have raytracing and Tensor cores, while the Quadro T2000 and T1000 do not, similar to the GeForce 16 products. The RTX 5000 matches the desktop version, with slightly more CUDA cores than the GeForce RTX 2080, although at lower clock speeds for reduced power consumption.

Nvidia’s new RTX 5000

The new Quadro RTX 3000 has a similar core configuration to the desktop Quadro RTX 4000 and GeForce RTX 2070. This leaves the new mobile RTX 4000 somewhere in between, with more cores than the desktop variant, aiming to provide similar overall performance at lower clock speeds and power consumption. While I can respect the attempt to offer similar performance at given tiers, doing so makes the naming more complicated than simply keeping names consistent for particular core configurations.

Nvidia also announced a new “RTX Studio” certification program for laptops targeted at content creators. These laptops are designed to support content creation applications with “desktop-like” performance. RTX Studio laptops will include an RTX GPU (either GeForce or Quadro), an H-Series or better Intel CPU, at least 16GB of RAM, a 512GB or larger SSD and at least a 1080p screen. Nvidia also announced a new line of Studio drivers that are supposed to work with both Quadro and GeForce hardware. They are optimized for content creators and tested for stability with applications from Adobe, Autodesk, Avid and others. Hopefully these drivers will simplify certain external GPU configurations that mix Quadro and GeForce hardware. It is unclear whether these new “Studio” drivers will replace the previously announced “Creator Ready” series of drivers.

Intel announced a new variant of its top-end 9900K CPU. The i9-9900KS has a similar configuration but runs at higher clock speeds, with a 4GHz base frequency and 5GHz boost speeds available on all eight cores. Intel also offered more details on its upcoming 10nm Ice Lake products with Gen 11 integrated graphics, which offer numerous performance improvements and VNNI support to accelerate AI processing. Intel is also integrating support for Thunderbolt 3 and Wi-Fi 6 into the new chipsets, which should lead to wider support for those interfaces. The first 10nm products to be released will be lower-power chips for tablets and ultraportable laptops, with higher-power variants coming further in the future.

AMD took the opportunity to release new generations of both CPUs and GPUs. On the CPU front, AMD has a number of new third-generation 7nm Ryzen processors, with six to 12 cores in the 4GHz range and supporting 20 lanes of fourth-gen PCIe. Priced between $200 and $500, they are targeted at consumers and gamers and are slated to be available July 7th. These CPUs compete with Intel’s 9900K and similar CPUs, which have been offering top performance for Premiere and After Effects users due to their high clock speed. It will be interesting to see if AMD’s new products offer competitive performance at that price point.

AMD also finally publicly released its Navi generation GPU architecture, in the form of the new Radeon 5700. The 5000 series has an entirely new core design, which they call Radeon DNA (RDNA) to replace the GCN architecture first released seven years ago. RDNA is supposed to offer 25% more performance per clock cycle and 50% more performance per watt. This is important, because power consumption was AMD’s weak point compared to competing products from Nvidia.

AMD president and CEO Dr. Lisa Su giving her keynote.

While GPU power consumption isn’t as big of a deal for gamers using their cards a couple of hours a day, commercial compute tasks that run 24/7 see significant increases in operating costs for electricity and cooling when power consumption is higher. AMD’s newest Radeon 5700 is advertised to compete performance-wise with the GeForce RTX 2070, meaning that Nvidia still holds the overall performance crown for the foreseeable future. But the new competition should drive down prices in the mid-range performance segment, which covers the cards most video editors need.


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

NAB 2019: An engineer’s perspective

By John Ferder

Last week I attended my 22nd NAB, and I’ve got the Ross lapel pin to prove it! This was a unique NAB for me. I attended my first 20 NABs with my former employer, and most of those had me setting up the booth visits for the entire contingent of my co-workers and making sure that the vendors knew we were at each booth and were ready to go. Thursday was my “free day” to go wandering and look at the equipment, cables, connectors, test gear and other items I was personally hunting for.

This year, I’m part of a new project, so I went with a shopping list and a rough schedule with the vendors we needed to see. While I didn’t get everywhere I wanted to go, the three days were very full and very rewarding.

Beck Video IP panel

Sessions and Panels
I also got the opportunity to attend the technical sessions on Saturday and Sunday. I spent my time at the BEITC in the North Hall and the SMPTE Future of Cinema Conference in the South Hall. Beck TV gave an interesting presentation on constructing IP-based facilities of the future. While SMPTE ST 2110 has been completed and issued, there are still implementation issues, as NMOS is still being developed. Today’s systems are, and will for the time being remain, hybrid facilities. The decision to be made is whether the facility will be built on an IP routing switcher core with gateways to SDI, or on an SDI routing switcher core with gateways to IP.

Although more expensive, building around an IP core would be more efficient and future-proof. Fiber infrastructure design, test equipment and finding engineers who are proficient in both IP and broadcast (the “Purple Squirrels”) are large challenges as well.

A lot of attention was also paid to cloud production and distribution, both in the BEITC and the FoCC. One such presentation, at the FoCC, was on VFX in the cloud with an eye toward the development of 5G. Nathaniel Bonini of BeBop Technology reported that BeBop has a new virtual studio partnership with Avid, and that the cloud allows tasks to be performed in a “massively parallel” way. He expects that 5G mobile technology will facilitate virtualization of the network.

VFX in the Cloud panel

Ralf Schaefer, of the Fraunhofer Heinrich-Hertz Institute, expressed his belief that all devices will be attached to the cloud via 5G, resulting in no cables and no mobile storage media. 5G for AR/VR distribution will render the scene in the network and transmit it directly to the viewer. Denise Muyco of StratusCore provided a link to a virtual workplace: https://bit.ly/2RW2Vxz. She felt that 5G would assist in the speed of the collaboration process between artist and client, making it nearly “friction-free.” While there are always security concerns, 5G would also help the prosumer creators to provide more content.

Chris Healer of The Molecule stated that 5G should help to compress VFX and production workflows, enable cloud computing to work better and perhaps provide realtime feedback for shots that are closer to perfect, showing line composites of VR renders to production crews in remote locations.

The Floor
I was very impressed with a number of manufacturers this year. Ross Video demonstrated new capabilities of Inception and OverDrive. Ross also showed its new Furio SkyDolly three-wheel rail camera system. In addition, 12G single-link capability was announced for Acuity, Ultrix and other products.

ARRI AMIRA (Photo by Cotch Diaz)

ARRI showed a cinematic multicam system built using the AMIRA camera with a DTS FCA fiber camera adapter back and a base station controllable by Sony RCP1500 or Skaarhoj RCP. The Sony panel will make broadcast-centric people comfortable, but I was very impressed with the versatility of the Skaarhoj RCP. The system is available using either EF, PL, or B4 mount lenses.

During the show, I learned from one of the manufacturers that one of my favorite OLED evaluation monitors is going to be discontinued. This was bad news for the new project I’ve embarked on. Then we came across the Plura booth in the North Hall. Plura was showing a new OLED monitor, the PRM-224-3G. It is a 24.5-inch diagonal OLED, featuring two 3G/HD/SD-SDI and three analog inputs, built-in waveform monitors and vectorscopes, LKFS audio measurement, PQ and HLG, 10-bit color depth, 608/708 closed caption monitoring and more for a very attractive price.

Sony showed the new HDC-3100/3500 3xCMOS HD cameras with global shutter. These have an upgrade program to UHD/HDR with an optional processor board and signal format software, and a 12G-SDI extension kit as well. There is an optional single-mode fiber connector kit to extend the maximum distance between camera and CCU to 10 kilometers. The CCUs work with the established 1000/1500 series of remote control panels and master setup units.

Sony’s HDC-3100/3500 3xCMOS HD camera

Canon showed its new line of 4K UHD lenses. One of my favorite lenses has been the HJ14ex4.3B HD wide-angle portable lens, which I have installed in many of the studios I’ve worked in. They showed the CJ14ex4.3B at NAB, and I was even more impressed with it. The 96.3-degree horizontal angle of view is stunning, and the minimization of chromatic aberration is carried over and perhaps improved from the HJ version. It features correction data supporting the BT.2020 wide color gamut. It works with the existing zoom and focus demand controllers for earlier lenses, so it’s easily integrated into existing facilities.

Foot Traffic
The official total of registered attendees was 91,460, down from 92,912 in 2018. The Evertz booth was actually easy to walk through at 10 a.m. on Monday, which I found surprising given the breadth of interesting new products and technologies Evertz had to show this year. The South Hall had the big crowds, but Wednesday seemed emptier than usual, almost like a Thursday.

The NAB announced that next year’s exhibition will begin on Sunday and end on Wednesday. That change might boost overall attendance, but I wonder how adversely it will affect the attendance at the conference sessions themselves.

I still enjoy attending NAB every year, seeing the new technologies and meeting with colleagues and former co-workers and clients. I hope that next year’s NAB will be even better than this year’s.

Main Image: Barbie Leung.


John Ferder is the principal engineer at John Ferder Engineer, currently Secretary/Treasurer of SMPTE, an SMPTE Fellow, and a member of IEEE. Contact him at john@johnferderengineer.com.


NAB NY: A DP’s perspective

By Barbie Leung

At this year’s NAB New York show, my third, I was able to wander the aisles in search of tools that fit into my world of cinematography. Here are just a few things that caught my eye…

Blackmagic, which had a large booth at the entrance to the hall, was giving demos of its Resolve 15, among other tools. Panasonic also had a strong presence mid-floor, with an emphasis on the EVA-1 cameras. As usual, B&H attracted a lot of attention, as did Arri, which brought a couple of Arri Trinity rigs to demo.

During the HDR Video Essentials session, colorist Juan Salvo of TheColourSpace talked about the emerging HDR10+ standard proposed by Samsung and Amazon Video. Also mentioned was the trend of consumer displays getting brighter every year and the impact that has on content creation and grading. Salvo pointed out the affordability of LG’s C7 OLEDs (about 700 nits) for use as client monitors, while Flanders Scientific (which had a booth at the show) remains the expensive standard for grading. It was interesting to note that LG, while being the show’s Official Display Partner, was conspicuously absent from the floor.

Many of the panels and presentations unsurprisingly focused on content monetization — how to monetize faster and cheaper. Amazon Web Services’ stage sessions emphasized various AWS Elemental technologies, including automating the creation of video highlight clips for content like sports, using facial recognition algorithms to generate closed captioning, and improving the streaming experience onboard airplanes. The latter will ultimately make content delivery a streamlined enough process for airlines that it would enable advertisers to enter this currently untapped space.

Editor Janis Vogel, a board member of the Blue Collar Post Collective, spoke at the #galsngear “Making Waves” panel, and noted the progression toward remote work in her field. She highlighted the fact that DaVinci Resolve, which had already made it possible for color work to be done remotely, is now also making it possible for editors to collaborate remotely. The ability to work remotely gives professionals the choice to work outside of the expensive-to-live-in major markets, which is highly desirable given that producers are trying to make more and more content while keeping budgets low.

Speaking at the same panel, director of photography/camera operator Selene Richholt spoke to the fact that crews themselves are being monetized, with content producers either asking production and post pros to provide standard services at substandard rates or expecting more services without paying more.

On a more exciting note, she cited recent 9×16 projects that she has shot with the camera mounted vertically (as opposed to shooting 16×9 and cropping in) in order to take full advantage of lens properties. She looks forward to the trend of more projects that mix aspect ratios and push aesthetics.

Well, that’s it for this year. I’m already looking forward to next year.

 


Barbie Leung is a New York-based cinematographer and camera operator working in film, music video and branded content. Her work has played Sundance, the Tribeca Film Festival, Outfest and Newfest. She is also the DCP mastering technician at the Tribeca Film Festival.


I was an IBC virgin

By Martina Nilgitsalanont

I recently had the opportunity to attend the IBC show in Amsterdam. My husband, Mike Nuget, was asked to demonstrate workflow and features of FilmLight’s Baselight software, and since I was in between projects — I’m an assistant editor on Showtime’s Billions and will start on Season 4 in early October — we turned his business trip into a bit of a vacation as well.

Although I’ve worked in television for quite some time, this was my first trip to an industry convention, and what an eye opener it was! The breadth and scope of the exhibit halls, the vendors, the attendees and all the fun tech equipment that gets used in the film and television industry took my breath away (dancing robotic cameras??!!). My husband attempted to prepare me for it before we left the states, but I think you have to experience it to fully appreciate it.

Since I edit on Media Composer, I stopped by Avid’s booth to see what new features they were showing off, and while I saw some great new additions, I was most tickled when one of the questions I asked stumped the coders. They took note of what I was asking the feature to do and let me know, “We’ll work on that.” I’ll be keeping an eye out!

Of course, I spent some time over at the FilmLight booth. It was great chatting with the folks there and getting to see some of Baselight’s new features. And since Mike was giving a demonstration of the software, I got to attend some of the other demos as well. It was a real eye opener as to how much time and effort goes into color correction, whether it’s on a 30-second commercial, documentary or feature film.

Another booth I stopped by was Cinedeck, over at the Launchpad. I got a demo of their CineXtools, and I was blown away. How many times do we receive a finished master (file) that we find errors in? With this software, instead of making the fixes and re-exporting (and QCing) a brand-new file, you can insert the fixes and be done! You can remap audio tracks if they’re incorrect, or even fix an incorrect closed caption. This is, I’m sure, a pretty watered down explanation of some of the things the CineX software is capable of, but I was floored by what I was shown. How more finishing houses aren’t aware of this is beyond me. It seems like it would be a huge time saver for the operator(s) that need to make the fixes.

Amsterdam!
Since we spent the week before the convention in Amsterdam, Mike and I got to do some sightseeing. One of our first stops was the Van Gogh Museum, which was very enlightening and had an impressive collection of his work. We took a canal cruise at night, which offered a unique vantage point of the city. And while the city is beautiful during the day, it’s simply magical at night — whether by boat or simply strolling through the streets — with the warm glow from living rooms and streetlights reflected in the water below.

One of my favorite things was a food tour in the Jordaan district, where we were introduced to a fantastic shop called Jwo Lekkernijen. They sell assorted cheeses, delectable deli meats, fresh breads and treats. Our prime focus while in Amsterdam was to taste the cheese, so we made a point of revisiting later in the week so that we could delight in some of the best sandwiches EVER.

I could go on and on about all our wanderings (Red Light District? Been there. Done that. Royal Palace? Check.), but I’ll keep it short and say that Amsterdam is definitely a city that should be explored fully. It’s a vibrant and multicultural metropolis, full of warm and friendly people, eager to show off and share their heritage with you.  I’m so glad I tagged along!


Presenting at IBC vs. NAB

By Mike Nuget

I have been lucky enough to attend NAB a few times over the years, both as an onlooker and as a presenter. In 2004, I went to NAB for the first time as an assistant online editor, mainly just tagging along with my boss. It was awesome! It was very overwhelming and, for the most part, completely over my head.  I loved seeing things demonstrated live by industry leaders. I felt I was finally a part of this crazy industry that I was new to. It was sort of a rite of passage.

Twelve years later, Avid asked me to present on the main stage. Knowing that I would be one of the demo artists that other people would sit down and watch — as I had done just 12 years earlier — was beyond anything I thought I would do back when I first started. The demo showed the Avid and FilmLight collaboration between the Media Composer and the Baselight color system. Two of my favorite systems to work on. (Watch Mike’s presentation here.)

Thanks to my friend and now former co-worker Matt Schneider, who also presented alongside me, I had developed a very good relationship with the Avid developers and some of the people who run the Avid booth at NAB. And at the same time, the FilmLight team was quickly being put on my speed dial, and that relationship strengthened as well.

This past NAB, Avid once again asked me to come back and present on the main stage about Avid Symphony Color and FilmLight’s Baselight Editions plug-in for Avid, but this time I would get to represent myself and my new freelance career change — I had just left my job at Technicolor-Postworks in New York a few weeks prior. I thought that since I was now a full-time freelancer this might be the last time I would ever do this kind of thing. That was until this past July, when I got an email from the FilmLight team asking me to present at IBC in Amsterdam. I was ecstatic.

Preparing for IBC was similar enough as far as my demo, but I was definitely more nervous than I was at NAB. I think there were two reasons. First, I was presenting in front of many different people in an international setting. Even though I am from the melting pot of NYC, it is a different and interesting feeling being surrounded by so many different nationalities all day long, and pretty much being the minority. On a personal note, I loved it. My wife and I love traveling, and to us this was an exciting chance to be around people from other cultures. On a business level, I guess I was a little afraid that my fast-talking New Yorker side would lose some people, and I didn’t want that to happen.

The second thing was that this was the first time that I was presenting strictly for FilmLight and not Avid. I have been an Avid guy for over 15 years. It’s my home, it’s my most comfortable system, and I feel like I know it inside and out. I discovered Baselight in 2012, so to be presenting in front of FilmLight people, who might have been using their systems for much longer, was a little intimidating.

When I walked into the room, they had set up a full-on production, along with spotlights, three cameras, a projector… the nerves rushed once again. The demo was standing room only. Sometimes when you are doing presentations, time seems to fly by, so I am not sure I remember every minute of the 50-minute presentation, but I do remember at one point within the first few minutes my voice actually trembled, which internally I thought was funny, because I do not tend to get nervous. So instead of fighting it, I actually just said out loud, “Sorry guys, I’m a little nervous here,” then took a deep breath, gathered myself and fell right into my routine.

I spent the rest of the day watching the other FilmLight demos and running around the convention again saying hello to some new vendors and goodbye to those I had already seen, as Sunday was my last day at the show.

That night I got to hang out with the entire FilmLight staff for dinner and some drinks. These guys are hilarious; what a great tight-knit family vibe they have. At one point they even started to label each other: the uncle, the crazy brother, the funny cousin. I can’t thank them enough for being so kind and welcoming. I kind of felt like a part of the family for a few days, and it was tremendously enjoyable and appreciated.

Overall, IBC felt similar enough to NAB, but with a nice international twist. I definitely got lost more since the layout is much more confusing than NAB’s. There are 14 halls!

I will say that the “relaxing areas” at IBC are much better than NAB’s! There is a sandy beach to sit on, a beautiful canal to sit by while having a Heineken (of course) and the food trucks were much, much better.

I do hope I get to come back one day!


Mike Nuget (known to most as just “Nuget”) is a NYC-based colorist and finishing editor. He recently decided to branch out on his own and become a freelancer after 13 years with Technicolor-Postworks. He has honed a skill set across multiple platforms, including FilmLight’s Baselight, Blackmagic’s Resolve, Avid and more. 


IBC 2018: Convergence and deep learning

By David Cox

In the 20 years I’ve been traveling to IBC, I’ve tried to seek out new technology, work practices and trends that could benefit my clients and help them be more competitive. One thing that is perennially exciting about this industry is the rapid pace of change. Certainly, from a post production point of view, there is a mini revolution every three years or so. In the past, those revolutions have increased image quality or the efficiency of making those images. The current revolution is to leverage the power and flexibility of cloud computing. But those revolutions haven’t fundamentally changed what we do. The images might have gotten sharper, brighter and easier to produce, but TV is still TV. This year, though, there are some fascinating undercurrents that could herald a fundamental shift in the sort of content we create and how we create it.

Games and Media Collide
There is a new convergence on the horizon in our industry. A few years ago, all the talk was about the merge between telecommunications companies and broadcasters, as well as the joining of creative hardware and software for broadcast and film, as both moved to digital.

The new convergence is between media content creation as we know it and the games industry. It was subtle, but technology from gaming was present in many applications around the halls of IBC 2018.

One of the drivers for this is a giant leap forward in the quality of realtime rendering by the two main game engine providers: Unreal and Unity. I program with Unity for interactive applications, and their new HDSRP rendering allows for incredible realism, even when being rendered fast enough for 60+ frames per second. In order to create such high-quality images, those game engines must start with reasonably detailed models. This is a departure from the past, where less detailed models were used for games than were used for film CGI shots, to preserve realtime performance. So, the first clear advantage created by the new realtime renderers is that a film and its inevitable related game can use the same or similar model data.

NCam

Being able to use the same scene data between final CGI and a realtime game engine allows for some interesting applications. Habib Zargarpour from Digital Monarch Media showed a system based on Unity that allows a camera operator to control a virtual camera in realtime within a complex CGI scene. The resulting camera moves feel significantly more real than if they had been keyframed by an animator. The camera operator chases high-speed action, jumps at surprises and reacts to unfolding scenes. The subtleties that these human reactions deliver via minor deviations in the movement of the camera can convey the mood of a scene as much as the design of the scene itself.

NCam was showing the possibilities of augmenting scenes with digital assets, using their system based on the Unreal game engine. The NCam system provides realtime tracking data to specify the position and angle of a freely moving physical camera. This data was being fed to an Unreal game engine, which was then adding in animated digital objects. They were also using an additional ultra-wide-angle camera to capture realtime lighting information from the scene, which was then being passed back to Unreal to be used as a dynamic reflection and lighting map. This ensured that digitally added objects were lit by the physical lights in the realworld scene.

Even a seemingly unrelated (but very enlightening) chat with StreamGuys president Kiriki Delany about all things related to content streaming still referenced gaming technology. Delany talked about their tests to build applications with Unity to provide streaming services in VR headsets.

Unity itself has further aspirations to move into storytelling rather than just gaming. The latest version of Unity features an editing timeline and color grading. This allows scenes to be built and animated, then played out through various virtual cameras to create a linear story. Since those scenes are being rendered in realtime, tweaks to scenes such as positions of objects, lights and material properties are instantly updated.

Game engines not only offer us new ways to create our content, but they are a pathway to create a new type of hybrid entertainment, which sits between a game and a film.

Deep Learning
Other undercurrents at IBC 2018 were the possibilities offered by machine learning and deep learning software. Essentially, a normal computer program is hard wired to give a particular output for a given input. Machine learning allows an algorithm to compare its output to a set of data and adjust itself if the output is not correct. Deep learning extends that principle by using neural network structures to make a vast number of assessments of input data, then draw conclusions and predictions from that data.
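As a minimal sketch of that compare-and-adjust loop (my own toy illustration, not tied to any product mentioned here), here is a tiny model that learns a simple rule from example data by nudging its parameters whenever its output is wrong:

```python
import numpy as np

# Toy "machine learning": fit y = 2x + 1 from noisy samples by repeatedly
# comparing the program's output to known answers and adjusting itself.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = 2 * x + 1 + rng.normal(0, 0.05, 200)   # the data to learn from

w, b = 0.0, 0.0                            # adjustable parameters
for _ in range(2000):
    pred = w * x + b                       # current output
    err = pred - y                         # how wrong the output is
    w -= 0.1 * np.mean(err * x)            # adjust to reduce the error
    b -= 0.1 * np.mean(err)

print(f"learned w={w:.2f}, b={b:.2f}")     # converges toward 2 and 1
```

Deep learning stacks many layers of such adjustable parameters, which is what lets it draw higher-level conclusions from raw input data.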

Real-world applications are already prevalent and are largely related in our industry to processing viewing metrics. For example, Netflix suggests what we might want to watch next by comparing our viewing habits to others with a similar viewing pattern.

But deep learning offers — indeed threatens — much more. Of course, it is understandable to think that, say, delivery drivers might be redundant in a world where autonomous vehicles rule, but surely creative jobs are safe, right? Think again!

IBM was showing how its Watson Studio has used deep learning to provide automated editing highlights packages for sporting events. The process is relatively simple to comprehend, although considerably more complicated in practice. A DL algorithm is trained to scan a video file and “listen” for a cheering crowd. This finds the highlight moment. Another algorithm rewinds back from that to find the logical beginning of that moment, such as the pass forward, the beginning of the volley etc. Taking the score into account helps decide whether that highlight was pivotal to the outcome of the game. Joining all that up creates a highlight package without the services of an editor. This isn’t future stuff. This has been happening over the last year.
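That sequence of steps can be sketched in a few lines of code. This is only my rough paraphrase of the logic as described, run on made-up per-second analysis values rather than anything resembling real Watson output:

```python
import numpy as np

# Made-up per-second analysis of a match: crowd loudness and score changes.
audio_level = np.array([0.2, 0.3, 0.2, 0.4, 0.9, 0.95, 0.3, 0.2, 0.8, 0.85, 0.2])
score_change = np.array([0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0])

CHEER = 0.7      # "cheering crowd" threshold that marks a highlight moment
PRE_ROLL = 3     # rewind a few seconds to catch the start of the play

highlights = []
t = 0
while t < len(audio_level):
    if audio_level[t] > CHEER:
        start = max(0, t - PRE_ROLL)              # logical beginning of the moment
        end = t
        while end + 1 < len(audio_level) and audio_level[end + 1] > CHEER:
            end += 1                              # extend through the cheering
        pivotal = bool(score_change[start:end + 1].any())   # did the score change?
        highlights.append((start, end, pivotal))
        t = end
    t += 1

print(highlights)   # [(1, 5, True), (5, 9, False)] -> clips to join into a package
```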

BBC R&D was talking about their trials to have DL systems control cameras at sporting events, as they could be trained to follow the “two thirds” framing rule and to spot moments of excitement that justified close-ups.

In post production, manual tasks such as rotoscoping and color matching in color grading could be automated. Even styles for graphics, color and compositing could be “learned” from other projects.

It’s certainly possible to see that deep learning systems could provide a great deal of assistance in the creation of day-to-day media. Tasks that are based on repetitiveness or formula would be the obvious targets. The truth is, much of our industry is repetitive and formulaic. Investors prefer content that is more likely to be a hit, and this leads to replication over innovation.

So, are we heading for “Skynet” and need Arnold to save us? I thought it was very telling that IBM occupied the central stand position in Hall 7 — traditionally the home of the tech companies that have driven creativity in post. Clearly, IBM and its peers are staking their claim. I have no doubt that DL and ML will make massive changes to this industry in the years ahead. Creativity is probably, but not necessarily, the only defence for mere humans to keep a hand in.

That said, at IBC2018 the most popular place for us mere humans to visit was a bar area called The Beach, where we largely drank Heineken. If the ultimate deep learning system is tasked to emulate media people, surely it would create digital alcohol and spend hours talking nonsense, rather than try and take over the media world? So perhaps we have a few years left yet.


David Cox is a VFX compositor and colorist with 20-plus years of experience. He started his career with MPC and The Mill before forming his own London-based post facility. Cox recently created interactive projects with full body motion sensors and 4D/AR experiences.


Riding the digital storage bus at the HPA Tech Retreat

By Tom Coughlin

At the 2018 HPA Tech Retreat in Palm Desert there were many panels that spoke to the changing requirements for digital storage to support today’s diverse video workflows. While at the show, I happened to snap a picture of the Maxx Digital bus — these guys supply video storage and RAID. I liked this picture because it had the logos of a number of companies with digital storage products serving the media and entertainment industry. So, this blog will ride the storage bus to see where digital storage in M&E is going.

Director of photography Bill Bennett, ASC, and senior scientist for RealD Tony Davis gave an interesting talk about why it can be beneficial to capture content at high frame rates, even if it will ultimately be shown at a much lower frame rate. They also offered some interesting statistics about Ang Lee’s 2016 technically groundbreaking movie, Billy Lynn’s Long Halftime Walk, which was shot in 3D at 4K resolution and 120 frames per second.

The image above is a slide from the talk describing the size of the data generated in creating this movie. Single Sony F65 frames with 6:1 compression were 5.2MB in size with 7.5TB of average footage per day over 49 days. They reported that 104-512GB cards were used to capture and transfer the content and the total raw negative size (including test materials) was 404TB. This was stored on 1.5PB of hard disk storage. The actual size of the racks used for storage and processing wasn’t all that big. The photo below shows the setup in Ang Lee’s apartment.
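Running the quoted figures through some quick arithmetic (these sums are mine, not the presenters’, and assume a single camera stream) helps put those numbers in perspective:

```python
# Back-of-the-envelope math on the quoted figures (single camera stream assumed).
frame_mb = 5.2                              # compressed F65 frame at 6:1, as quoted
fps = 120
stream_mb_per_sec = frame_mb * fps          # ~624MB/s of captured image data

daily_tb = 7.5                              # average footage per day, as quoted
days = 49
shoot_tb = daily_tb * days                  # ~367.5TB, consistent with the 404TB
                                            # raw negative once tests are included

hours_per_day = daily_tb * 1e6 / stream_mb_per_sec / 3600
print(round(stream_mb_per_sec), round(shoot_tb, 1), round(hours_per_day, 1))
# 624 367.5 3.3 -> roughly 3.3 hours of recorded material per stream per day
```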

Bennett and Davis went on to describe the advantages of shooting at high frame rates. Shooting at high frame rates gives greater on-set flexibility, since no motion data is lost during shooting, so things can be fixed in post more easily. Even when the material is shown at a lower frame rate in order to get conventional cinematic aesthetics, a synthetic shutter can be created with a different motion sense in different parts of the frame to create effective cinematic effects, using models for particle motion, rotary motion and speed ramps.
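To make the synthetic shutter idea concrete, here is a generic sketch (my own simplified illustration, not RealD’s actual tool) of how frames captured at 120fps can be averaged down to 24fps with different effective shutter choices:

```python
import numpy as np

# Fake clip: 120 frames of 4x4 grayscale "video" captured at 120fps.
clip = np.random.rand(120, 4, 4)

capture_fps, delivery_fps = 120, 24
group = capture_fps // delivery_fps          # 5 captured frames per delivered frame

# Synthetic 360-degree shutter at 24fps: average all frames in each group.
full_shutter = clip.reshape(-1, group, 4, 4).mean(axis=1)

# Narrower synthetic shutter: average only part of each group (crisper motion).
narrow_shutter = clip.reshape(-1, group, 4, 4)[:, : group // 2].mean(axis=1)

print(full_shutter.shape, narrow_shutter.shape)   # (24, 4, 4) each
```

Because every captured frame is retained, the shutter choice can be deferred to post and even varied across the frame, which is the flexibility the talk described.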

During Gary Demos’s talk on Parametric Appearance Compensation, he discussed the Academy Color Encoding System (ACES) implementation and testing. He presented an interesting slide on a single-master HDR architecture, shown below. A master will be an important element in an overall video workflow that can be part of an archival package, probably using the SMPTE (and now ISO) Archive eXchange Format (AXF) standard, and also used in a SMPTE Interoperable Mastering Format (IMF) delivery package.

The Demo Area
At the HPA Retreat exhibits area we found several interesting storage items. Microsoft had on exhibit one of its Data Boxes, which allow shipping up to 100TB of data to its Azure cloud. The Microsoft Azure Data Box joins Amazon’s Snowball and Google’s similar bulk-ingest box. Like the AWS Snowball, the Azure Data Box includes an e-paper display that also functions as a shipping label. Microsoft did early testing of the Data Box with Oceaneering International, which performs offline sub-sea oil industry inspection and uploaded its data to Azure using Data Box.

ATTO was showing its Direct2GPU technology, which allows direct transfer from storage to GPU memory for video processing without needing to pass through a system CPU. ATTO is a manufacturer of HBAs and other connectivity solutions for moving data, and it is developing smarter connectors that can reduce overall system overhead.

Henry Gu’s GIC company was showing its digital video processor with automatic QC and an IMF tool set enabling conversion of any file type to IMF, transcoding to any file format and playback of all file types, including 4K/UHD. He was doing his demonstration using a DDN storage array.

Digital storage is a crucial element in modern professional media workflows. Digital storage enables higher frame rate, HDR video recording and processing to create a variety of display formats. Digital storage also enables uploading bulk content to the cloud and implementing QC and IMF processes. Even SMPTE standards for AXF, IMF and others are dependent upon digital storage and memory technology in order to make them useful. In a very real sense, in the M&E industry, we are all riding the digital storage bus.


Dr. Tom Coughlin, president of Coughlin Associates, is a storage analyst and consultant. Coughlin has six patents to his credit and is active with SNIA, SMPTE, IEEE and other pro organizations. Additionally, Coughlin is the founder and organizer of the annual Storage Visions Conference as well as the Creative Storage Conference.


HPA Tech Retreat — Color flow in the desert

By Jesse Korosi

I recently had the opportunity to attend the HPA Tech Retreat in Palm Desert, California, not far from Palm Springs. If you work in post but aren’t familiar with this event, I would highly recommend attending. Once a year, many of the top technologists working in television and feature films get together to share ideas, creativity and innovations in technology. It is a place where the most highly credited talent come to learn alongside those who are just beginning their careers.

This year, a full day was dedicated to “workflow.” As the director of workflow at Sim, an end-to-end service provider for content creators working in film and TV, this was right up my alley. This year, I was honored to be a presenter on the topic of color flow.

Color flow is a term I like to use when describing how color values created on set translate into each department that needs access to them throughout post. In the past, this process had been very standardized, but over the last few years it has become much more complex.

I kicked off the presentation by showing everyone an example of an offline edit playing back through a projector. Each shot had slight variances in luminance, color shifts, extended-to-legal range changes, etc. During offline editing, the editor should not be distracted by color shifts like these. It’s also not uncommon to have executives come into the room to see the cut. The last thing you want is them questioning VFX shots because they are seeing these color anomalies. The shots coming back from the visual effects team will have the original dailies color baked into them and need to blend into the edit.

So why does this offline edit often look this way? The first thing to really hone in on is the number of options now available for color transforms. If you show people who aren’t involved in this process day to day a Log C image, compared to a graded image, they will tell you, “You applied a LUT, no big deal.” But it’s a misconception to think that if you give all of the departments that require access to this color the same LUT, they are going to see the same thing. Unfortunately, that’s not the case!

Traditionally, LUTs came in a few different formats, but now camera manufacturers and software developers have started creating their own color formats, each with its own bit depth, range and other attributes that further complicate matters. You can no longer simply use the blanket term LUT, because that is often not a clear description of what is actually being used.
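A toy example of why “the same LUT” doesn’t guarantee the same picture: the numbers below are arbitrary, but they show how one lookup table produces different results depending purely on whether the software applying it assumes full-range or legal-range code values:

```python
import numpy as np

# A toy 1D "LUT": a gamma-like curve sampled at 33 points over [0, 1].
lut = np.linspace(0.0, 1.0, 33) ** (1 / 2.2)

def apply_lut(code_values, bit_depth, full_range=True):
    """Normalize integer code values to [0, 1], then sample the LUT."""
    if full_range:
        norm = code_values / (2 ** bit_depth - 1)
    else:
        # Legal/video range, e.g. 64-940 in 10-bit.
        black, white = 64 * 2 ** (bit_depth - 10), 940 * 2 ** (bit_depth - 10)
        norm = np.clip((code_values - black) / (white - black), 0.0, 1.0)
    return np.interp(norm, np.linspace(0.0, 1.0, len(lut)), lut)

pixel = np.array([512])                           # a 10-bit code value
print(apply_lut(pixel, 10, full_range=True))      # one result...
print(apply_lut(pixel, 10, full_range=False))     # ...a different result, same LUT
```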

What makes this tricky is that each of these formats is only compatible within certain software or hardware. For example, Panasonic has created its own color transform called VLTs. This color file cannot be put into a Red camera or an Arri. Only certain software can read it. Continue down the line through the plethora of other color transform options available and each can only be used by certain software/departments across the post process.

Aside from all of these competing formats, we also have an ease-of-use issue. A great example of this would be a DP coming to me and saying (something I hear often), “I would like to create a set of six LUTs. I will write on the camera report the names of the ones I monitored with on set, and then you can apply it within the dailies process.”

For about 50 percent of the jobs we do, we deliver DPX or EXR frames to the VFX facility, along with the appropriate color files they need. However, we give the other 50 percent the master media, and along with doing their own conversion to DPX, this vendor is now on the hook to find out which of those LUTs the DP used on set go with which shots. This is a manual process for the majority of jobs using this workflow. For my presentation, I broke down why this is not a realistic request to put on vendors, which often leads to them simply not using the LUTs.

Workarounds
For my presentation, I broke down how to get around this LUT issue by staying within CDL compatibility. I also spoke about how to manage these files in post, while the on-set crew uses equivalent LUTs. This led to the discussion of how you should be prepping your color flow at the top of each job, as well as a few case studies on real-world jobs. One of those jobs was a BLG workflow providing secondaries on set that could track through into VFX and to the final colorist, while also giving the final colorist the ability to re-time shots when we needed to do a reprint, without the need to re-render new MXFs to be relinked in the Avid.
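For reference, the ASC CDL itself is just a handful of numbers per shot (slope, offset and power per channel, plus a saturation value), which is a big part of why it travels between departments so reliably. A minimal sketch of the transform, with arbitrary example values:

```python
import numpy as np

def apply_cdl(rgb, slope, offset, power, saturation=1.0):
    """Standard ASC CDL: per-channel slope/offset/power, then saturation."""
    rgb = np.asarray(rgb, dtype=float)
    out = np.clip(rgb * slope + offset, 0.0, None) ** power
    luma = (out * [0.2126, 0.7152, 0.0722]).sum(axis=-1, keepdims=True)  # Rec.709
    return luma + saturation * (out - luma)

pixel = [0.18, 0.18, 0.18]   # 18% grey
print(apply_cdl(pixel,
                slope=[1.1, 1.0, 0.9],
                offset=[0.01, 0.0, -0.01],
                power=[1.0, 1.0, 1.0],
                saturation=0.9))
```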

After a deep dive into competing formats, compatibility, ease of use, and a few case studies, the big take away I wanted to leave the audience with was this:
– Ensure a workflow call happens, ideally covering color flow with your on set DIT or DP, dailies vendor, VFX and DI representative
– Ensure a color flow pipeline test runs before day one of the shoot
– Allow enough time to react to issues
– When you aren’t sure how a certain department will get their color, ask!


Jesse Korosi is director of workflow at Sim.


Sight Sound & Story 2017: TV editing and Dylan Tichenor, ACE

By Amy Leland

This year, I was asked to live tweet from Sight Sound & Story on behalf of Blue Collar Post Collective. As part of their mission to make post events as accessible to members of our industry as possible, they often attend events like this one and provide live blogging, tweeting and recaps of the events for their members via their Facebook group. What follows are the recaps that I posted to that group after the event and massaged a bit for the sake of postPerspective.

TV is the New Black
Panelists included Kabir Akhtar, ACE, Suzy Elmiger, ACE, Julius Ramsay and moderator Michael Berenbaum, ACE.

While I haven’t made it a professional priority to break into scripted TV editing because my focus is on being a filmmaker, with editing as “just” a day job, I still love this panel, and every year it makes me reconsider that goal. This year’s was especially lively because two of the panelists, Kabir Akhtar and Julius Ramsay, have known each other from very early on in their careers and each had hilarious war stories to share.

Kabir Akhtar

The panelists were asked how they got into scripted TV editing, and if they had any advice for the audience who might want to do the same. One thing they all agreed on is that a good editor is a good editor. They said having experience in the exact same genre is less important than understanding how to interpret the style and tone of a show correctly. They also all agreed that people who hire editors often don’t get that. There is a real danger of being pigeonholed in our industry. If you start out editing a lot of reality TV and want to cross over to scripted, you’ll almost definitely have to take a steep pay cut and start lower down on the ladder. There is still the problem in the industry of people assuming if you’ve cut comedy but not drama, you can’t cut drama. The same can be said for film versus TV and half-hour versus hour, etc. They all emphasized the importance of figuring out what kind of work you want to do, and pursuing that. Don’t just rush headlong into all kinds of work. Find as much focus as you can. Akhtar said, “You’re better off at the bottom of a ladder you want to climb than high up on one that doesn’t interest you.”

They all also said to seek out the people doing the kind of work you want to do, because those are the people who can help you. Ramsay said the most important networking tool is a membership to IMDB Pro. This gives you contact information for people you might want to find. He said the first time someone contacts him unsolicited he will probably ignore it, but if they contact him more than once, and it’s obvious that it’s a real attempt at personal contact with him, he will most likely agree to meet with that person.

Next they discussed the skills needed to be a successful editor. They agreed that while being a fast editor with strong technical knowledge of the tools isn’t by itself enough to be a successful editor, it is an important part of being one. If you have people in the room with you, the faster and more dexterously you can do what they are asking, the better the process will be for everyone.

There was agreement that, for the most part, they don’t look at things like script notes and circle takes. As an editor, you aren’t hired just for your technical skills, but for your point of view. Use it. Don’t let someone decide for you what the good takes are. You have to look at all of the footage and decide for yourself. They said what can feel like a great take on the set may not be a great take in the context of the cut. However, it is important to understand why something was a circle take for the director. That may be an important aspect of the scene that needs to be included, even if it isn’t on that take.

The panel also spoke about the importance of sound. They’ve all met editors who aren’t as skilled at hearing and creating good sound. That can be the difference between a passable editor and a great editor. They said that a great assistant editor needs to be able to do at least some decent sound mixing, since most producers expect even first cuts to sound good, and that task is often given to the assistant. They all keep collections of music and sound to use as scratch tracks as they cut. This way they don’t have to wait until the sound mix to start hearing how it will all come together.

The entire TV is the New Black panel.

All agreed that the best assistant editors are those who are hungry and want to work. Having a strong artistic sense and drive are more important to them than specific credits or experience. They want someone they know will help them make the show the best. In return, they have all given assistants opportunities that have led to them rising to editor positions.

When talking about changes and notes, they discussed needing that flexibility to show other options, even if you really believe in the choices you’ve made. But they all agreed the best feeling was when you’ve been asked to show other things, and in the long run, the producer or director comes back to what you had in the first place. They said when people give notes, they are pointing out the problems. Be very wary when they start telling you the solutions or how to fix the problems.

Check out the entire panel here. The TV panel begins at about 20:00.

Inside the Cutting Room
This panel focused on editor Dylan Tichenor, ACE, and was moderated by Bobbie O’Steen.

Of all of the Sight Sound & Story panels, this is by far the hardest to summarize effectively. Bobbie O’Steen is a film historian. Her preparation for interviews like this is incredibly deep and detailed. Her subject is always someone with an impressive list of credits. Dylan Tichenor has been Paul Thomas Anderson’s editor for most of his films. He has also edited such films as Brokeback Mountain, The Royal Tenenbaums and Zero Dark Thirty.

With that in mind, I will share some of the observations I wrote down while listening raptly to what was said. From the first moment, we got a great story. Tichenor’s grandfather worked as a film projector salesman. He described the first time he became aware of the concept of editing. When he was nine years old, he unspooled a film reel from an Orson Welles movie that his grandfather had left at the house and looked carefully at all of the frames. He noticed that between a frame of a wide shot and a frame of a close-up, there was a black line. And that was his first understanding of film having “cuts.” He also described an early love for classic films because of those reels his grandfather kept around, especially Murnau’s Nosferatu.

Much of what was discussed was his longtime collaboration with P.T. Anderson. In discussing Anderson’s influences, they described the blend of Martin Scorsese’s long tracking shots with Robert Altman’s complex tapestry of ensemble casts. Through his editing work on those films, Tichenor saw how Anderson wove those two things together. The greatest challenges were combining those long takes with coverage, and answering the question, “Whose story are we telling?” To illustrate this, he showed the party scene in Boogie Nights in which Scotty first meets Dirk Diggler.

Dylan Tichenor and Bobbie O’Steen.

For those complex tapestries of characters, there are frequent transitions from one person’s storyline to another’s. Tichenor said it’s important to transition with the heart and not just the head. You have to find the emotional resonance that connects those storylines.

He echoed the sentiment from one of the other panels (this will be covered in my next recap) about not simply using the director’s circle takes. He agreed with the importance of understanding what they were and what the director saw in them on set, but in the cut, it was important to include that important element, not necessarily to use that specific take.

O’Steen brought up the frequent criticism of Magnolia — that the film is too long. While Tichenor agreed that it was a valid criticism, he stood by the film as one that took chances and had something to say. More importantly, it asked something of the audience. When a movie doesn’t take chances or ask the audience to work a little, it’s like eating cotton candy. When the audience exerts effort in watching the story, that effort leads to catharsis.

In discussing The Royal Tenenbaums, they talked about the challenge of overlapping dialogue, illustrated by a scene between Gene Hackman and Danny Glover. Of course, what the director and actors want is to have freedom on the set, and let the overlapping dialogue flow. As an editor this can be a nightmare. In discussions with actors and directors, it can help to remind them that sometimes that overlapping dialogue can create situations where a take can’t be used. They can be robbed of a great performance by that overlap.

O’Steen described Wes Anderson as a mathematical editor. Tichenor agreed, and showed a clip with a montage of flashbacks from Tenenbaums. He said that Wes Anderson insisted that each shot in the montage be exactly the same duration. In editing, what Tichenor found was that those moments of breaking away from the mathematical formula, of working slightly against the beat of the music, were what gave it emotional life.

Tichenor described Brokeback Mountain as the best screenplay adaptation of a short story he had ever seen. He talked about a point during the editing when they all felt it just wasn’t working, specifically Heath Ledger’s character wasn’t resonating emotionally the way he should be. Eventually they realized the problem was that Ledger’s natural warmth and affectionate nature were coming through too much in his performance. He had moments of touching someone on the arm or the shoulder, or doing something else gentle and demonstrative.

He went back through and cut out every one of those moments he could find, which he admitted meant in some cases leaving “bad” cuts in the film. To be fair, in some cases that difference was maybe half a second of action and the cuts were not as bad as he feared, but the result was that the character suddenly felt cold and isolated in a way that was necessary. Tichenor also referred back to Nosferatu and how the editing of that film had inspired him. He pointed to the scene in which Jack comes to visit Ennis; he mimicked an editing trick from that film to create a moment of rush and surprise as Ennis ran down the stairs to meet him.

Dylan Tichenor

One thing he pointed out was that it can feel more vulnerable to cut a scene with a slower pace than an action scene. In an action scene, the cuts become almost a mosaic, blending into one another in a way that helps to make each cut a bit more anonymous. In a slower scene, each cut stands out more and draws more attention.

When P.T. Anderson and Tichenor came together again to collaborate on There Will Be Blood, they approached it very differently from Boogie Nights and Magnolia. Instead of the parallel narratives of that ensemble tapestry, this was a much more focused and, often, operatic, story. They decided to approach it, in both shooting and editing, like a horror film. This meant framing shots in an almost gothic way, which allowed for building tension without frequent cutting. He showed an example of this in a clip of Daniel and his adopted son H.W. having Sunday dinner with the family to discuss buying their land.

He also talked about the need to humanize Daniel and make him more relatable and sympathetic. The best path to this was through the character of H.W. Showing how Daniel cared for the boy illuminated a different side to this otherwise potentially brutal character. He asked Anderson for additional shots of him to incorporate into scenes. This even led to additional scenes between the two being added to the story.

After talking about this film, though there were still so many more that could be discussed, the panel sadly ran out of time. One thing that was abundantly clear was that there is a reason Tichenor has worked with some of the finest filmmakers. His passion for and knowledge of film flowed through every moment of this wonderful chat. He is the editor for many films that should be considered modern classics. Undoubtedly between the depth of preparation O’Steen is known for, and the deep well of material his career provided, they could have gone on much longer without running dry of inspirational and entertaining stories to share.

Check out the entire panel here. The interview begins at about 02:17:30.

———————————
Amy Leland is a film director and editor. Her short film, Echoes, is now available on Amazon Video. Her feature doc, Ambassador of Rhythm, is in post. She also has a feature screenplay in development and a new doc in pre-production. She is also an editor for CBS Sports Network. Find out more about Amy on her site http://amyleland.net and follow her on social media on Twitter at @amy-leland and Instagram at @la_directora.

What was new at GTC 2017

By Mike McCarthy

I, once again, had the opportunity to attend Nvidia’s GPU Technology Conference (GTC) in San Jose last week. The event has become much more focused on AI supercomputing and deep learning as those industries mature, but there was also a concentration on VR for those of us from the visual world.

The big news was that Nvidia released the details of its next-generation GPU architecture, code-named Volta. The flagship chip will be the Tesla V100 with 5,120 CUDA cores and 15 teraflops of computing power. It is a huge 815mm² chip, created with a 12nm manufacturing process for better energy efficiency. Most of its unique architectural improvements are focused on AI and deep learning, with specialized execution units for Tensor calculations, which are foundational to those processes.

Tesla V100

Similar to last year’s GP100, the new Volta chip will initially be available in Nvidia’s SXM2 form factor for dedicated GPU servers like their DGX1, which uses the NVLink bus, now running at 300GB/s. The new GPUs will be a direct swap-in replacement for the current Pascal based GP100 chips. There will also be a 150W version of the chip on a PCIe card similar to their existing Tesla lineup, but only requiring a single half-length slot.

Assuming that Nvidia puts similar processing cores into their next generation of graphics cards, we should be looking at a 33% increase in maximum performance at the top end. The intermediate stages are more difficult to predict, since that depends on how they choose to tier their cards. But the increased efficiency should allow more significant increases in performance for laptops, within existing thermal limitations.

Nvidia is continuing its pursuit of GPU-enabled autonomous cars with its DrivePX2 and Xavier systems for vehicles. The newest version will have a 512 Core Volta GPU and a dedicated deep learning accelerator chip that they are going to open source for other devices. They are targeting larger vehicles now, specifically in the trucking industry this year, with an AI-enabled semi-truck in their booth.

They also had a tractor showing off Blue River’s AI-enabled spraying rig, targeting individual plants for fertilizer or herbicide. It seems like farm equipment would be an optimal place to implement autonomous driving, allowing perfectly straight rows and smooth grades, all in a flat controlled environment with few pedestrians or other dynamic obstructions to be concerned about (think Interstellar). But I didn’t see any reference to them looking in that direction, even with a giant tractor in their AI booth.

On the software and application front, software company SAP showed an interesting implementation of deep learning that analyzes broadcast footage and other content looking to identify logos and branding, in order to provide quantifiable measurements of the effectiveness of various forms of brand advertising. I expect we will continue to see more machine learning implementations of video analysis, for things like automated captioning and descriptive video tracks, as AI becomes more mature.

Nvidia also released an "AI-enabled" version of Iray that uses image prediction to increase the speed of interactive ray tracing renders. I am hopeful that similar technology could be used to effectively increase the resolution of video footage as well. Basically, a computer sees a low-res image of a car and says, "I know what that car should look like," and fills in the rest of the visual data. The possibilities are pretty incredible, especially in regard to VFX.

Iray AI

On the VR front, Nvidia announced a new SDK that allows live GPU-accelerated image stitching for stereoscopic VR processing and streaming. It scales from HD to 5K output, splitting the workload across one to four GPUs. The stereoscopic version is doing much more than basic stitching, processing for depth information and using that to filter the output to remove visual anomalies and improve the perception of depth. The output was much cleaner than any other live solution I have seen.

I also got to try my first VR experience recorded with a Light Field camera. This not only gives the user a 360 stereo look-around capability, but also the ability to move their head around to shift their perspective within a limited range (based on the size of the recording array). The project they were using to demo the technology didn't highlight the amazing results until the very end of the piece, but when it did, it was the most impressive VR implementation I have had the opportunity to experience yet.
———-
Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been working on new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

The VFX Industry: Where are the women?

By Jennie Zeiher

As anyone in the visual effects industry would know, Marvel's Victoria Alonso was honored earlier this year with the Visual Effects Society Visionary Award. Victoria is an almighty trailblazer, one whom we women can admire, aspire to and want to be.

Her acceptance speech was an important reminder to us of the imbalance of the sexes in our industry. During her speech, Victoria stated: “Tonight there were 476 of you nominated. Forty-three of which are women. We can do better.”

Over the years, I've had countless conversations with industry people — executives, supervisors and producers — about why there are fewer women in artist and supervisory roles. A recent article in the NY Times suggested that women made up only five percent of VFX supervisors on the 250 top-grossing films of 2014. Pretty dismal.

I’ve always worked in male-dominated industries, so I’m possibly a bit blasé about it. I studied IT and worked as a network engineer in the late ‘90s, before moving to the United States where I worked on 4K digital media projects with technologists and scientists. One of a handful of women, I was always just one of the boys. To me it was the norm.

Moving into VFX about 10 years ago, I realized this industry was no different. From my viewpoint, I see about a 1:8 ratio of female to male artists. The same is true from what I've seen through our affiliated training courses. Sadly, I've heard of some facilities that have no women in artist roles at all!

Most of the females in our industry work in other disciplines. At my workplace, Australia's Rising Sun Pictures, half of our executive members are women (myself included), and women generally outnumber men in indirect overhead roles (HR, finance, administration and management), as well as production management.

Women bring unique qualities to the workplace: they’re team players, hard working, generous and empathetic. Copious reports have found that companies that have women on their board of directors and in leadership positions perform better than those that don’t. So in our industry, why do we see such a male-dominated artist, technical and supervisory workforce?

By no means am I undervaluing the women in those other disciplines (we could not have functioning businesses without them); I'm merely trying to understand why there aren't more women inclined to pursue artistic jobs and, ultimately, supervision roles.

I can't yet say that one of the talented female artists I've had the pleasure of working with over the years has risen through the ranks to become a VFX supervisor… and that's not to say that they couldn't have, just that they didn't, or haven't yet. This is something that disappoints me deeply. I consider myself a (liberal) feminist: someone who, in a leadership position, wants to enable other women to become the best they can be and to be equal among their male counterparts.
So, why? Where are the women?

Men and Women Are Wired Differently
A study reported by LiveScience suggests men and women really are wired differently. It says, "Male brains have more connections within hemispheres to optimize motor skills, whereas female brains are more connected between hemispheres to combine analytical and intuitive thinking."

Apparently this difference is at its greatest during the adolescent years (13-17); however, with age these differences get smaller. So, during the peak of an adolescent girl's education, she's more inclined to be analytical and intuitive. Is that a direct correlation to girls not choosing a technical vocation? But then again, I would have thought that STEM/STEAM careers would be of interest to girls if their brains are wired to be analytical.

This would also explain women having better organizational and management skills and therefore seeking out more “indirectly” associated roles.

Lean Out
For those women already in our industry, are they too afraid to seek out higher positions? Women are often more self-critical and self-doubting. Men will promote themselves and dive right in, even if they're less capable. I have experienced this first hand and didn't actually recognize it in myself until I read Sheryl Sandberg's Lean In.

Or, is it just simply that we’re in a “boys club” — that these career opportunities are not being presented to our female artists, and that we’d prefer to promote men over women?

The Star Wars Factor
Possibly one of the real reasons that there is a lack of women in our industry is what I call "The Star Wars factor." For the most part, my male counterparts grew up watching (and being inspired by) Star Wars and Star Trek, whereas, personally, I was more inclined to watch Girls Just Want to Have Fun and Footloose. Did these adolescent boys want to be Luke or Han, or George for that matter? Were they so inspired by John Dykstra's lightsabers that they wanted to do THAT when they grew up? And if this is true, maybe Jyn, Rey and Captain Marvel — and our own Captain Marvel, Victoria Alonso — will spur on a new generation of women in the industry. Maybe it's a combination of all of these factors. Maybe it's none.

I’m very interested in exploring this further. To address the problem, we need to ask ourselves why, so please share your thoughts and experiences — you can find me at jz@vfxjz.com. At least now the conversation has started.

One More Thing!
I am very proud that one of my female colleagues, Alana Newell (pictured with her fellow nominees), was nominated for a VES Award this year for Outstanding Compositing in a Photoreal Feature for X-Men: Apocalypse. She was one of the few, but hopefully as time goes by that will change.

Main Image: The women of Rising Sun Pictures.
——–

Jennie Zeiher is head of sales & business development at Adelaide, Australia’s Rising Sun Pictures.

Focusing on sound bars at CES 2017

By Tim Hoogenakker

My day job is as a re-recording mixer and sound editor working on long-form projects, so when I attended this year's Consumer Electronics Show in Las Vegas, I homed in on the leading trends in home audio playback. It was important for me to see what the manufacturers are planning regarding multi-channel audio reproduction for the home. From the look of it, sound bars seem to be leading the charge. My focus was primarily on immersive sound bars, single-box audio components capable of playing Dolby Atmos and DTS:X as close to their original format as possible.

Klipsch Theaterbar

Now I must admit, I’ve kicked and screamed about sound bars in the past, audibly rolling my eyes at the concept. We audio mixers are used to working in perfect discrete surround environments, but I wanted to keep an open mind. Whether we as sound professionals like it or not, this is where the consumer product technology is headed. That and I didn’t see quite the same glitz and glam over discrete surround speaker systems at CES.

Here are some basic details about immersive sound bars in general:

1. In addition to the front channels, they often have up-firing drivers on the left and right edges (normally on the top and sides) that are intended to reflect onto the walls and the ceiling of the room. This is to replicate the immersiveness as much as possible. Sure this isn’t exact replication, but I’ll certainly give manufacturers praise for their creativity.
2. Because of the required reflectivity, the walls have to be of a flat enough surface to reflect the signal, yet still balanced so that it doesn’t sound like you’re sitting in the middle of your shower.
3. There is definitely a sweet spot in the seating position when listening to sound bars. If you move off-axis, you may experience somewhat of a wash sitting near the sides, but considering what they’re trying to replicate, it’s an interesting take.
4. They usually have an auto-tuning microphone system that calibrates for the room as accurately as possible.
5. I’m convinced that there’s a conspiracy by the manufacturers to make each and every sound bar, in physical appearance, resemble the enigmatic Monolith in 2001: A Space Odyssey…as if literally someone just knocked it over.

Yamaha YSP-5600

My first real immersive sound bar experience happened last year with the Yamaha YSP-5600, which comes loaded with 40 (yes 40!) drivers. It’s a very meaty 26-pound sound bar with a height of 8.5 inches and width of 3.6 feet. I heard a few projects that I had mixed in Dolby Atmos played back on this system. Granted, even when correctly tuned it’s not going to sound the same as my dubbing stage or with dedicated home theater speakers, but knowing this I was pleasantly surprised. A few eyebrows were raised for sure. It was fun playing demo titles for friends, watching them turn around and look for surround speakers that weren’t there.

A number of the sound bars displayed at CES bring me to my next point, which honestly is a bit of a complaint. Many were very thin in physical design, often labeled as “ultra-thin,” which to me means very small drivers, which tells me that there’s an elevated frequency crossover line for the subwoofer(s). Sure, I understand that they need to look sleek so they can sell and be acceptable for room aesthetics, but I’m an audio nerd. I WANT those low- to mid-frequencies carried through from the drivers, don’t just jam ALL the low- and mid-frequencies to the sub. It’ll be interesting to see how this plays out as these products reach market during the year.

Sony HT-ST5000

Besides immersive audio, most of these sound bars will play from a huge variety of sources, formats and specs, such as Blu-ray, Blu-ray UHD, DVD, DVD-Audio, streaming via network and USB, as well as connections for Wi-Fi, Bluetooth and 4K pass-through.

Some of these sound bars — like many things at CES 2017 — are supported with Amazon Alexa and Google Home. So, instead of fighting over the remote control, you and your family can now confuse Alexa with arguments over controlling your audio between “Game of Thrones” and Paw Patrol.

Finally, I probably won’t be installing a sound bar on my dub stage for reference anytime soon, but I do feel that professionally it’s very important for me to know the pros and the cons — and the quirks — so we can be aware how our audio mixes will translate through these systems. And considering that many major studios and content creators are becoming increasingly ready to make immersive formats their default deliverable standard, especially now with Dolby Vision, I’d say it’s a necessary responsibility.

Looking forward to seeing what NAB has up its sleeve on this as well.

Here are some of the more notable sound bars that debuted:

LG SJ9

Sony HT-ST5000: This sound bar is compatible with Google Home. They say it works well with ceilings as high as 17 feet. It's not DTS:X-capable yet, but Sony said that will happen by the end of the year.

LG SJ9: The LG SJ9 sound bar is currently noted by LG as "4K high resolution audio" (which is an impossible statement). It's possible that they mean it'll pass through a 4K signal, but the LG folks couldn't clarify. That snafu aside, it has very wide dimensionality, which helps with stereo imaging. It will be Dolby Vision/HDR-capable via a future firmware upgrade.

The Klipsch "Theaterbar": This is another eyebrow raiser. It'll be released in Q4 of 2017. There's no information on the web yet, but they were showcasing it at CES.

Pioneer Elite FS-EB70: There’s no information on the web yet, but they were showcasing this at CES.

Onkyo SBT-A500 Network: Also no information but it was shown at CES.


Formosa Group re-recording mixer and sound editor Tim Hoogenakker has over 20 years of experience in audio post for music, features and documentaries, television and home entertainment formats. He had stints at Prince’s Paisley Park Studios and POP Sound before joining Formosa.

Industry pros gather to discuss sound design for film and TV

By Mel Lambert

The third annual Mix Presents Sound for Film and Television conference attracted some 500 production and post pros to Sony Pictures Studios in Culver City, California, last week to hear about the art of sound design.

Subtitled “The Merging of Art, Technique and Tools,” the one-day conference kicked off with a keynote address by re-recording mixer Gary Bourgeois, followed by several panel discussions and presentations from Avid, Auro-3D, Steinberg, JBL Professional and Dolby.

L-R: Brett G. Crockett, Tom McCarthy, Gary Bourgeois and Mark Ulano.

During his keynote, Bourgeois advised, “Sound editors and re-recording mixers should be aware of the talent they bring to the project as storytellers. We need to explore the best ways of using technology to be creative and support the production.” He concluded with some more sage advice: “Do not let the geek take over! Instead,” he stressed, “show the passion we have for the final product.”

Other highlights included a “Sound Inspiration Within the Storytelling Process” panel organized by MPSE and moderated by Carolyn Giardina from The Hollywood Reporter. Panelists included Will Files, Mark P. Stoeckinger, Paula Fairfield, Ben L. Cook, Paul Menichini and Harry Cohen. The discussion focused on where sound designers find their inspiration and the paths they take to create unique soundtracks.

CAS hosted a sound-mixing panel titled “Workflow for Musicals in Film and Television Production” that focused on live recording and other techniques to give musical productions a more “organic” sound. Moderated by Glen Trew, the panel included music editor David Klotz, production mixer Phil Palmer, playback specialist Gary Raymond, production mixer Peter Kurland, re-recording mixer Gary Bourgeois and music editor Tim Boot.

Sound Inspiration Within the Storytelling Process panel (L-R): Will Files, Ben L. Cook, Mark P. Stoeckinger, Carolyn Giardina, Harry Cohen, Paula Fairfield and Paul Menichini.

Sponsored by Westlake Pro, a panel called “Building an Immersive Room: Small, Medium and Large” covered basic requirements of system design and setup — including console/DAW integration and monitor placement — to ensure that soundtracks translate to the outside world. Moderated by Westlake Pro’s CTO, Jonathan Deans, the panel was made up of Bill Johnston from Formosa Group, Nathan Oishi from Sony Pictures Studios, Jerry Steckling of JSX, Brett G. Crockett from Dolby Labs, Peter Chaikin from JBL and re-recording mixers Mark Binder and Tom Brewer.

Avid hosted a fascinating panel discussion called “The Sound of Stranger Things,” which focused on the soundtrack for the Netflix original series, with its signature sound design and ‘80s-style, synthesizer-based music score. Moderated by Avid’s Ozzie Sutherland, the panel included sound designer Craig Henighan, SSE Brad North, music editor David Klotz and sound effects editor Jordan Wilby. “We drew our inspiration from such sci-fi films as Alien, The Thing and Predator,” Henighan said. Re-recording mixers Adam Jenkins and Joe Barnett joined the discussion via Skype from the Technicolor Seward stage.

The Barbra Streisand Scoring Stage.

A stand-out event was the Production Sound Pavilion held on the Barbra Streisand Scoring Stage, where leading production sound mixers showed off their sound carts, with manufacturers also demonstrating wireless, microphone and recorder technologies. “It all starts on location, with a voice in a microphone and a clean recording,” offered CAS president Mark Ulano. “But over the past decade production sound has become much more complex, as technologies and workflows evolved both on-set and in post production.”

Sound carts on display included Tom Curley’s Sound Devices 788t recorder and Sound Devices CL9 mixer combination; Michael Martin’s Zaxcom Nomad 12 recorder and Zaxcom Mix-8 mixer; Danny Maurer’s Sound Devices 664 recorder and Sound Devices 633 mixer; Devendra Cleary’s Sound Devices 970, Pix 260i and 664 recorders with Yamaha 01V and Sound Devices CL-12 mixers; Charles Mead’s Sound Devices 688 recorder with CL-12 mixer; James DeVotre’s Sound Devices 688 recorder with CL-12 Alaia mixer; Blas Kisic’s Boom Recorder and Sound Devices 788 with Mackie Onyx 1620 mixer; Fernando Muga’s Sound Devices 788 and 633 recorders with CL-9 mixer; Thomas Cassetta’s Zaxcom Nomad 12 recorder with Zaxcom Oasis mixer; Chris Howland’s Boom Recorder, Sound Devices and 633 recorders, with Mackie Onyx 1620 and Sound Devices CL-12 mixers; Brian Patrick Curley’s Sound Devices 688 and 664 recorders with Sound Devices CL-12 Alaia mixer; Daniel Powell’s Zoom F8 recorder/mixer; and Landon Orsillo’s Sound Devices 688 recorder.

Lon Neumann

CAS also organized an interesting pair of Production Sound Workshops. During the first one, consultant Lon Neumann addressed loudness control with an overview of loudness levels and surround sound management of cinema content for distribution via broadcast television.

The second presentation, hosted by Bob Bronow (production mixer on Deadliest Catch) and Joe Foglia (Marley & Me, Scrubs and From the Earth to the Moon), covered EQ and noise reduction in the field. While it was conceded that, traditionally, any type of signal processing on location is strongly discouraged — such decisions normally being handled in post — the advent of multitrack recording and isolated channels means that it is becoming more common for mixers to use processing on the dailies mix track.

New for this year was a Sound Reel Showcase that featured short samples from award-contending and to-be-released films. The audience in the Dolby Atmos- and Auro 3D-equipped William Holden Theatre was treated to a high-action sequence from Mel Gibson’s new film, Hacksaw Ridge, which is scheduled for release on November 4. It follows the true story of a WWII army medic who served during the harrowing Battle of Okinawa and became the first conscientious objector to be awarded the Medal of Honor. The highly detailed Dolby Atmos soundtrack was created by SSE/sound designer/recording mixer Robert Mackenzie working at Sony Pictures Studios with dialogue editor Jed M. Dodge and ADR supervisor Kimberly Harris, with re-recording mixers Andy Wright and Kevin O’Connell.

Mel Lambert is principal of Content Creators, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

All photos by Mel Lambert.

 

AES Paris: A look into immersive audio, cinematic sound design

By Mel Lambert

The Audio Engineering Society (AES) came to the City of Light in early June with a technical program and companion exhibition that attracted close to 2,600 pre-registrants, including some 700 full-pass attendees. “The Paris International Convention surpassed all of our expectations,” AES executive director Bob Moses told postPerspective. “The research community continues to thrive — there was great interest in spatial sound and networked audio — while the business community once again embraced the show, with a 30 percent increase in exhibitors over last year’s show in Warsaw.” Moses confirmed that next year’s European convention will be held in Berlin, “probably in May.”

Tom Downes

Getting Immersed
There were plenty of new techniques and technologies targeting the post community. One presentation, in particular, caught my eye, since it posed some relevant questions about how we perceive immersive sound. In the session "Immersive Audio Techniques in Cinematic Sound Design: Context and Spatialization," co-authors Tom Downes and Malachy Ronan — both of whom are AES student members currently studying at the University of Limerick's Digital Media and Arts Research Center in Ireland — questioned the role of increased spatial resolution in cinematic sound design. "Our paper considered the context that prompted the use of elevated loudspeakers, and examined the relevance of electro-acoustic spatialization techniques to 3D cinematic formats," offered Downes. The duo brought with them a scene from writer/director Wolfgang Petersen's submarine classic, Das Boot, to illustrate their thesis.

Using the university's Spatialization and Auditory Display Environment (SpADE), linked to an Apple Logic Pro 9 digital audio workstation and a 7.1.4 playback configuration — with four overhead speakers — the researchers correlated visual stimuli with audio playback. (A 7.1-channel horizontal playback format was determined by the DAW's I/O capabilities.) Different dynamic and static timbre spatializations were achieved by using separate EQ plug-ins assigned to the horizontal and elevated loudspeaker channels.

“Sources were band-passed and a 3dB boost applied at 7kHz to enhance the perception of elevation,” Downes continued. “A static approach was used on atmospheric sounds to layer the soundscape using their dominant frequencies, whereas bubble sounds were also subjected to static timbre spatialization; the dynamic approach was applied when attempting to bridge the gap between elevated and horizontal loudspeakers. Sound sources were split, with high frequencies applied to the elevated layer, and low frequencies to the horizontal layer. By automating the parameters within both sets of equalization, a top-to-bottom trajectory was perceived. However, although the movement was evident, it was not perceived as immersive.”
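For readers who want to picture the signal flow, here is a minimal sketch of the kind of timbre-based elevation cue Downes describes: the feed for the elevated loudspeakers gets a gentle peaking boost around 7kHz while the horizontal bed stays untouched. This uses a standard audio-EQ-cookbook peaking biquad; the sample rate, Q, and noise test signal are placeholder choices of mine, not the SpADE/Logic Pro setup used in the paper.

```python
# Sketch of timbre spatialization: boost ~7kHz (+3dB) on the elevated feed
# to reinforce the perception of height. Placeholder signal and settings.
import numpy as np
from scipy.signal import lfilter

def peaking_biquad(f0, gain_db, q, fs):
    """RBJ audio-EQ-cookbook peaking filter coefficients (b, a)."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 48000
source = np.random.randn(fs) * 0.1           # stand-in for an atmosphere/bubble source

b, a = peaking_biquad(f0=7000, gain_db=3.0, q=1.0, fs=fs)
elevated_feed = lfilter(b, a, source)         # send to the overhead (x.x.4) channels
horizontal_feed = source                      # untouched feed for the ear-level bed
```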

The paper concluded that although multi-channel electro-acoustic spatialization techniques are seen as a rich source of ideas for sound designers, without sufficient visual context they are limited in the types of techniques that can be applied. “Screenwriters and movie directors must begin to conceptualize new ways of utilizing this enhanced spatial resolution,” said Downes.

Rich Nevens

Tools
Merging Technologies demonstrated immersive-sound applications for the v.10 release of its Pyramix DAW software, with up to 30.2-channel routing and panning, including compatibility with Barco Auro, Dolby Atmos and other surround formats, without the need for additional plug-ins or apps. Avid, meanwhile, showcased additions for the modular S6 Assignable Digital Console, including a Joystick Panning Module and a new Master Film Module with PEC/DIR switching.

“The S6 offers improved ergonomics,” explained Avid’s Rich Nevens, director of worldwide pro audio solutions, “including enhanced visibility across the control surface, and full Ethernet connectivity between eight-fader channel modules and the Pro Tools DSP engines.” Reportedly, more than 1,000 S6 systems have been sold worldwide since its introduction in December 2013, including two recent installations at Sony Pictures Studios in Culver City, California.

Finally, Eventide came to the Paris AES Convention with a remarkable new multichannel/multi-element processing system that was demonstrated by invitation only to selected customers and distributors; it will be formally introduced during the upcoming AES Convention in Los Angeles in October. Targeted at film/TV post production, the rackmount device features 32 inputs and 32 discrete outputs per DSP module, thereby allowing four multichannel effects paths to be implemented simultaneously. A quartet of high-speed ARM processors mounted on plug-in boards can be swapped out when more powerful DSP chips become available.

Joe Bamberg and Ray Maxwell

“Initially, effects will be drawn from our current H8000 and H9 processors — with other EQ, dynamics plus reverb effects in development — and can be run in parallel or in series, to effectively create a fully-programmable, four-element channel strip per processing engine,” explained Eventide software engineer Joe Bamberg.

“Remote control plug-ins for Avid Pro Tools and other DAWs are in development,” said Eventide’s VP of sales and marketing, Ray Maxwell. The device can also be used via a stand-alone application for Apple iPad tablets or Windows/Macintosh PCs.

Multi-channel I/O and processing options will enable object-based EQ, dynamics and ambience processing for immersive-sound production. End-user pricing for the codenamed product, which will also feature Audinate Dante, Thunderbolt, Ravenna/AES67 and AVB networking, has yet to be announced.

Mel Lambert is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

Learning about LTO and Premiere workflows

By Chelsea Taylor

In late March, I attended a workflow event by Facilis Technology and StorageDNA in New York City. I didn’t know much going in other than it would be about collaborative workflows and shared storage for Adobe Premiere. While this event was likely set up to sell some systems, I did end up learning some worthwhile information about archiving and backup.

Full disclosure: going into this event I knew very little about LTO archiving. Previously I had been archiving all of my projects by throwing a hard drive into the corner of my edit. Well, not really but close! It seems that a lot of companies out there don’t put too much importance on archiving until after it becomes a problem (“All of our edits are crashing and we don’t know why!”).

At my last editing job, where we edited short-form content on Avid, our media manager would consolidate projects in Avid, create a FileMaker database that cataloged footage, manually add metadata, then put the archived files onto different G-Tech G-RAID drives (which, of course, could die after a couple of years). In short, it wasn't the best way to archive and back up media, especially when an editor wanted to find something. They would have to walk over to the computer where the database was, figure out how to use the UI, search for the project (if it had the right metadata), find the physical drive, plug the drive into their machine, go through different files/folders until they found what they were looking for, copy however many large files to the SAN, and then start working. Suffice it to say, I had a lot to learn about archiving and was very excited to attend this event.

I arrived at the event about 30 minutes early, which turned out to be a good thing because I was immediately greeted by some of the experts and presenters from Facilis and StorageDNA. Not fully realizing who I was talking to, I started asking tons of questions about their products. What does StorageDNA do? How can it integrate with Premiere? Why is LTO tape archiving better? Who adds the metadata? How fast can you access the backup? Before I knew it, I was in a heated discussion with Jeff Krueger, worldwide VP of sales at StorageDNA, and Doug Hynes, director of product and solution marketing at StorageDNA, about their products and the importance of archiving. Fully inspired to archive and with tons more questions, I had to cut our conversation short as the event was about to begin.

While the Facilis offerings look cool (I want all of them!), I wasn’t at the event to buy things — I wanted to hear about the workflow and integration with Adobe Premiere (which is a language I better understand). As someone who would be actually using these products and not in charge of buying them, I didn’t care about the tech specs or new features. “Secure sharing with permissions. Low-level media management. Block-level virtualized storage pools.” It was hardware spec after hardware spec (which you can check out on their website). As the presenter spoke of the new features and specifications of their new models, I just kept thinking about what Jeff Krueger had told me right before the event about archiving, which I will share with you here.

StorageDNA presented on a product line called DNAevolution, which is an archive engine built on LTO tapes. Each model provides different levels of LTO automation, LTO drives and server hardware. As an editor, I was more concerned with the workflow.

The StorageDNA Workflow for Premiere
1. Card contents are ingested onto the SAN.
2. The high-res files are written to LTO/LTFS through DNAevolution and become permanent camera master files.
3. Low-res proxies are created and ingested onto the SAN for use in editorial. DNAevolution is pointed to the proxies, indexes them and links to the high-res clips on LTO.
4. Once the files are written to and verified on LTO, you can delete the high-res files from your spinning disk storage.
5. The editor works with the low-res proxies in Premiere Pro.
6. When complete, the editor exports an EDL, which DNAevolution parses to locate the high-res files on LTO via its database (a rough sketch of this parsing step follows this list).
7. DNAevolution restores high-res files to the finishing station or SAN storage.
8. The editor can relink the media and distribute in high-res/4K.
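To make step 6 a little more concrete, here is a minimal sketch of the kind of parsing any archive engine has to do at that point: pulling source reel/clip names out of a CMX3600-style EDL so they can be looked up in the tape database. The field layout shown is standard CMX3600 convention, and the file name is a placeholder; this is an illustration only, not StorageDNA's actual code or API.

```python
# Minimal sketch: extract source reel/clip names from a CMX3600-style EDL so
# they can be matched against an LTO archive database. Illustration only --
# not StorageDNA's implementation.
import re

EVENT = re.compile(r"^\d{3,6}\s+(\S+)\s+")                      # "001  A001C003  V  C ..."
FROM_CLIP = re.compile(r"^\*\s*FROM CLIP NAME:\s*(.+)$", re.IGNORECASE)

def clips_to_restore(edl_path):
    """Return the unique reel/clip names referenced by an EDL."""
    names = set()
    with open(edl_path) as edl:
        for line in edl:
            line = line.strip()
            m = EVENT.match(line) or FROM_CLIP.match(line)
            if m:
                names.add(m.group(1))
    return sorted(names)

# Hand this list to the archive engine to locate and restore the matching
# high-res masters from LTO, e.g.:
# print(clips_to_restore("final_cut_v12.edl"))
```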

The StorageDNA Archive Workflow
1. In the DNAevolution Archive Console, select your Premiere Pro project file.
2. DNAevolution scans the project, and generates a list of files to be archived. It then writes all associated media files and the project itself to LTO tape(s).
3. Once the files are written to and verified on LTO, you can delete the high-res files from your spinning disk storage.

Why I Was Impressed
All of your media is immediately backed up, ensuring it is in a safe place and not taking up your local or shared storage. You can delete the high-res files from your SAN storage immediately and work with proxies, onlining later down the line. The problem I’ve had with SAN storage is that it fills up very quickly with large files, eventually slowing down your systems and leading to playback problems. Why have all of your RAW unused media just sitting there eating up your valuable space when you can free it up immediately?

DNAevolution works easily with Adobe’s Premiere, Prelude and Media Encoder. It uses the Adobe CC toolset to automate the process of creating LTO/LTFS camera masters while creating previews via Media Encoder.

DNAevolution archives all media from your Premiere projects with a single click and notifies you if files are missing. It also checks your files for existing camera and clip metadata, meaning that if you add all of that in at the start, archiving becomes much easier.

You have direct access to files on LTO tape, enabling third-party applications to access media directly on LTO for tasks such as transcoding, partial restore and playout. DNAevolution's Archive Asset Management toolset allows you to browse/search archived content and provides proxy playback. It even has drag-and-drop functionality with Premiere, where you literally drop a file straight from the archive into your Premiere timeline, with little rendering, and start editing.

I have never tested an LTO archive workflow and am curious what other people’s experiences have been like. Feel free to leave your thoughts on LTO vs. Cloud vs. Disk in the comments below.

Chelsea Taylor is a freelance editor who has worked on a wide range of content: from viral videos and sizzles to web series and short films. She also works as an assistant editor on feature films and documentaries. Check out her site at StillRenderingProductions.com.

Talking storage with LaCie at NAB

By Isaac Spedding

As I power-walked my way through the NAB show floor, carefully avoiding eye contact with hopeful booth minders, my mind was trying to come up with fancy questions to ask the team at LaCie that would cement my knowledge of storage solutions and justify my press badge. After drawing a blank, I decided to just ask what I had always wanted to know about storage companies in general: How reliable are your drives and how do you prove it? Why is there a blue bubble on your enclosures? Why are drives still so damn heavy?

Fortunately, I met with two members of the LaCie team, who kindly answered my tough questions with valuable information and great stories. I should note that just prior to this NAB trip I had submitted an RMA for 10 ADATA USB 3.0 drives, as all the connectors on them had become loose and fallen out of or into the single-piece enclosures. So, as you can imagine, at that moment in time, I was not exactly the biggest fan of hard drive companies in general.

“We are never going to tell you (a drive) will never fail,” said Clement Barberis, marketing manager for LaCie. “We tell people to keep multiple copies. It doesn’t matter how, just copies. It’s not about losing your drive it’s about losing your data.”

LaCie offers a three- to five-year warranty on all its products and has several services available, including fast replacement and data recovery. Connectors and drives are the two main points of failure for any portable drive product.

LaCie's Clement Barberis and Kristin MacRostie.

Owned by Seagate, LaCie has a very close connection with that team and can select drives based on what the product needs. Design, development and target-user all have an impact on drive and connection selection. Importantly, LaCie decides on the connection options not by what is the newest but by what works best with the internal drive speed.

Their brand new 12-bay enclosure, the LaCie 12big Thunderbolt 3 (our main image), captures the speed of Thunderbolt 3, and with a 96TB capacity (around 100 hours of uncompressed 4K), the system can transfer around 2600 MB/s (yes, not bits). It is targeted at small production houses shooting high-resolution material.

Why So Heavy?
After Barberis showed me the new LaCie 12big, I asked why the form factor and weight had not been redesigned after all these years. I mean, 96TB is great and all, but it's not light — at 17.6kg (38.9 pounds), it's not easy to take on a plane. Currently, the largest single drive available is 8TB and features six platters inside the traditional form factor. Each additional platter increases the weight of a drive along with its capacity, which means more storage fits into the same size of drive array. That's why drive arrays have been staying the same size while gaining weight and storage capacity. So your sleek drive will be getting heavier.

LaCie produces several ranges of hard drives with different designs. It's most visually noticeable in LaCie's Rugged drive series, which features bright orange bumpers. Other products have a "Porsche-like" design and feature the blue LaCie bubble. If you are like me, you might be curious how this look came about.

According to Kristin MacRostie, PR manager for LaCie, “The company founder, Philippe Spruch, wasn’t happy with the design of the products LaCie was putting out 25 years ago — in his words, they were ‘geeky and industrial.’ So, Spruch took a hard drive and a sticky note and he wrote, ‘Our hard drives look like shit, please help,’ and messengered it over to (designer) Philippe Starck’s office in Paris. Starck called Spruch right away.”

The sleek design started with Philippe Starck and then Neil Poulton, who was an apprentice to Starck, and who was brought on to design the drives we see today. The drive designs target the intended consumers, with the “Porsche design” aligning itself to Apple users.

Hearing the story behind LaCie's design choices, the recommendation to keep multiple copies and not rely on just one drive, and the explanation of why each product is designed the way it is convinced me that LaCie is producing drive solutions built for reliability and usability. Although not the cheapest option on the market today, the LaCie solutions justify this with solid design and logic behind the choice of components, connectors and cost. Besides, at the end of the day, your data is the most important thing, and you shouldn't be keeping it on the cheapest possible drive you found at Best Buy.

Isaac Spedding is a New Zealand-based creative technical director, camera operator and editor. You can follow him on Twitter @Isaacspedding.

Dolby Audio at NAB 2016

By Jonathan Abrams

Dolby, founded over 50 years ago as an audio company, is elevating the experience of watching movies and TV content through new technologies in audio and video, the latter of which is a relatively new area for the company’s offerings. This is being done with Dolby AC-4 and Dolby Atmos for audio, and Dolby Vision for video. In this post, the focus will be on Dolby’s audio technologies.

Why would Dolby create AC-4? Dolby AC-3 is over 20 years old, and as a function of its age, it does not do new things well. What are those new things and how will Dolby AC-4 elevate your audio experience?

First, let's define some acronyms, as they are part of the past and present of Dolby audio in broadcasting. OTA stands for Over The Air, as in what you can receive with an antenna. ATSC stands for Advanced Television Systems Committee, the US-based organization that standardized HDTV (ATSC 1.0) 20 years ago and is working to standardize Ultra HDTV broadcasts as ATSC 3.0. Ultra HD is referred to as UHD.

Now, some math. Dolby AC-3, which is used with ATSC 1.0, uses up to 384 kbps for 5.1 audio. Dolby AC-4 needs only 128 kbps for 5.1 audio. That increased coding efficiency, along with a maximum bit rate of 640 kbps, leaves 512 kbps to work with. What can be done with that extra 512 kbps?
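Before looking at the answer, here is that bit budget laid out explicitly. The division of the headroom is my own illustrative arithmetic based on the figures above, not a Dolby allocation table.

```python
# Rough bit-budget arithmetic for the figures quoted above; how a real
# broadcaster divides the headroom is up to them.
ac3_51_kbps = 384      # typical Dolby AC-3 5.1 bit rate under ATSC 1.0
ac4_51_kbps = 128      # Dolby AC-4 5.1 bit rate
max_ac4_kbps = 640     # maximum AC-4 bit rate cited

print(f"Coding saving vs. AC-3 for 5.1: {ac3_51_kbps - ac4_51_kbps} kbps")   # 256
headroom = max_ac4_kbps - ac4_51_kbps
print(f"Headroom after the main 5.1 mix: {headroom} kbps")                   # 512
print(f"Additional 5.1 streams that could fit: {headroom // ac4_51_kbps}")   # 4
```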

If you are watching sporting events, Dolby AC-4 allows broadcasters to provide you with the option to select which audio stream you are listening to. You can choose which team’s audio broadcast to listen to, listen to another language, hear what is happening on the field of play, or listen to the audio description of what is happening. This could be applicable to other types of broadcasts, though the demos I have heard, including one at this year’s NAB Show, have all been for sporting events.

Dolby AC-4 allows the viewer to select from three types of dialog enhancement: none, low and high. The dialog enhancement processing is done at the encoder, where it runs a sophisticated dialog identification algorithm and then creates a parametric description that is included as metadata in the Dolby AC-4 bit stream.

What if I told you that after implementing what I described above in a Dolby AC-4 bit stream, there were still bits available for other audio content? It is true, and Dolby AC-4 is what allows Dolby Atmos, a next-generation, rich and complex object audio system, to be inside ATSC 3.0 audio streams in the US. At my NAB demo, I heard a clip of Mad Max: Fury Road, which was mixed in Dolby Atmos, from a Yamaha sound bar. I perceived elements of the mix coming from places other than the screen, even though the sound bar was where all of the sound waves originated. Whatever is being done with psychoacoustics to make the experience of surround sound from a sound bar possible is convincing.

The advancements in both the coding and presentation of audio have applications beyond broadcasting. The next challenge that Dolby is taking on is mobile. Dolby's audio codecs are being licensed to mobile applications, which allows them to be pushed out via apps, which in turn removes the dependency on the mobile device's OS. I heard a Dolby Atmos clip from a Samsung mobile device. While the device had to be centered in front of me to perceive surround sound, I did perceive it.

Years of R&D at Dolby have yielded efficiencies in coding and new ways of presenting audio that will elevate your experience, from home theater to mobile and, once broadcasters adopt ATSC 3.0, Ultra HDTV.

Check out my coverage of Dolby’s Dolby Vision offerings at NAB as well.

Jonathan S. Abrams is the Chief Technical Engineer at Nutmeg, a creative marketing, production and post resource.

NAB 2016: My pick for this year’s gamechanger is Lytro

By Isaac Spedding

There has been a lot of buzz around what the gamechanger was at this year’s NAB show. What was released that will really change the way we all work? I was present for the conference session where an eloquent Jon Karafin, head of Light Field Video, explained that Lytro has created a camera system that essentially captures every aspect of your shot and allows you to recreate it in any way, at any position you want, using light field technology.

Typically, with game-changing technology comes uncertainty from the established industry, and that was made clear during the rushed Q+A session, where several people (after congratulating the Lytro team) nervously asked if they had thought about the fate of positions in the industry that the technology would make redundant. Jon's reply was that core positions won't change; however, the way in which they operate will. The mob of eager filmmakers, producers and young scientists that queued to meet him (I was one of them) was another sign that the technology is incredibly interesting and exciting for many.

"It's a birth of a new technology that very well could replace the way that Hollywood makes films." These are words from Robert Stromberg (DGA), CCO and founder of The Virtual Reality Company, in the preview video for Lytro's debut film Life, which will be screened on Tuesday to an audience of 500 lucky attendees. Karafin and Jason Rosenthal, CEO at Lytro, will provide a Lytro Cinema demonstration and breakdown of the short film.

Lytro Cinema is my pick for the game-changing technology of NAB 2016, and it looks like it will not only advance capture, but also change post production methodology and open up new roles, possibilities and challenges for everyone in the industry.

Isaac Spedding is a New Zealand-based creative technical director, camera operator and editor. You can follow him on Twitter @Isaacspedding.

Nvidia's GTC 2016: VR, A.I. and self-driving cars, oh my!

By Mike McCarthy

Last week, I had the opportunity to attend Nvidia’s GPU Technology Conference, GTC 2016. Five thousand people filled the San Jose Convention Center for nearly a week to learn about GPU technology and how to use it to change our world. GPUs were originally designed to process graphics (hence the name), but are now used to accelerate all sorts of other computational tasks.

The current focus of GPU computing is in three areas:

Virtual reality is a logical extension of the original graphics processing design. VR requires high frame rates with low latency to keep up with the user's head movements; otherwise the lag results in motion sickness. This requires lots of processing power, and the imminent releases of the Oculus Rift and HTC Vive head-mounted displays are sure to sell many high-end graphics cards. The new Quadro M6000 24GB PCIe card and M5500 mobile GPU have been released to meet this need.

Autonomous vehicles are being developed that will slowly replace many or all of the driver’s current roles in operating a vehicle. This requires processing lots of sensor input data and making decisions in realtime based on inferences made from that information. Nvidia has developed a number of hardware solutions to meet these needs, with the Drive PX and Drive PX2 expected to be the hardware platform that many car manufacturers rely on to meet those processing needs.

This author calls the Tesla P100 "a monster of a chip."

Artificial Intelligence has made significant leaps recently, and the need to process large data sets has grown exponentially. To that end, Nvidia has focused its newest chip development — not on graphics, at least initially — on a deep learning supercomputer chip. The first Pascal-generation GPU, the Tesla P100, is a monster of a chip, with 15 billion 16nm transistors on a 600mm² die. It should be twice as fast as current options for most tasks, and even more for double-precision work and/or large data sets. The chip is initially available in the new DGX-1 supercomputer for $129K, which includes eight of the new GPUs connected via NVLink. I am looking forward to seeing the same graphics processing technology on a PCIe-based Quadro card at some point in the future.

While those three applications for GPU computing all had dedicated hardware released for them, Nvidia has also been working to make sure that software will be developed to use the level of processing power it can now offer users. To that end, Nvidia has been releasing all sorts of SDKs and libraries to help developers harness the power of the hardware that is now available. For VR, there is Iray VR, a raytracing toolset for creating photorealistic VR experiences, and Iray VR Lite, which allows users to create still renderings to be previewed with HMD displays. There is also a broader VRWorks collection of tools for helping software developers adapt their work for VR experiences. For autonomous vehicles, Nvidia has developed libraries of tools for mapping, sensor image analysis, and a deep-learning decision-making neural net for driving called DaveNet. For A.I. computing, cuDNN accelerates emerging deep-learning neural networks, running on GPU clusters and supercomputing systems like the new DGX-1.

What Does This Mean for Post Production?
So from a post perspective (ha!), what does this all mean for the future of post production? First, newer and faster GPUs are coming, even if they are not here yet. Much farther off, deep-learning networks may someday log and index all of your footage for you. But the biggest change coming down the pipeline is virtual reality, led by the upcoming commercially available head-mounted displays (HMDs). Gaming will drive HMDs into the hands of consumers, and HMDs in the hands of consumers will drive demand for a new type of experience for storytelling, advertising and expression.

As I see it, VR can be created in a variety of continually more immersive steps. The starting point is the HMD, placing the viewer into an isolated and large-feeling environment. Existing flat video or stereoscopic content can be viewed without large screens, requiring only minimal processing to format the image for the HMD. The next step is a big jump — when we begin to support head tracking — allowing the viewer to control the direction in which they are viewing. This is where we begin to see changes required at all stages of the content production and post pipeline. Scenes need to be created and filmed in 360 degrees.

At the conference, this high-fidelity VR simulation, which uses scientifically accurate satellite imagery and data from NASA, was shown.

The cameras required to capture 360 degrees of imagery produce a series of video streams that need to be stitched together into a single image, and that image needs to be edited and processed. Then the entire image is made available to the viewer, who chooses which angle they want to view as it is played. This can be done as a flattened image sphere or, with more source data and processing, as a stereoscopic experience. The user can control the angle they view the scene from, but not the location they are viewing from, which is dictated by the physical placement of the 360-camera system. Video-Stitch just released a new all-in-one package for capturing, recording and streaming 360 video called the Orah 4i, which may make that format more accessible to consumers.
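As a rough picture of what a 360 player does when the viewer picks an angle, here is the standard longitude/latitude lookup that maps a look direction onto a stitched equirectangular frame. This is generic projection math, not any particular player's or stitcher's code, and the frame dimensions are placeholders.

```python
# Standard equirectangular lookup: map a view direction (yaw/pitch, degrees)
# to pixel coordinates in a stitched 360 frame. Frame size is a placeholder.
import math

def view_to_pixel(yaw_deg, pitch_deg, width=3840, height=1920):
    """Return (x, y) in an equirectangular frame for a given look direction."""
    lon = math.radians(yaw_deg)      # -180..180 degrees, 0 = straight ahead
    lat = math.radians(pitch_deg)    # -90..90 degrees, positive = up
    x = (lon / (2 * math.pi) + 0.5) * width
    y = (0.5 - lat / math.pi) * height
    return int(x) % width, min(max(int(y), 0), height - 1)

# Looking 90 degrees to the right and slightly upward:
# print(view_to_pixel(90, 10))
```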

Allowing the user to fully control their perspective and move around within a scene is what makes true VR so unique, but is also much more challenging to create content for. All viewed images must be rendered on the fly, based on input from the user’s motion and position. These renders require all content to exist in 3D space, for the perspective to be generated correctly. While this is nearly impossible for traditional camera footage, it is purely a render challenge for animated content — rendering that used to take weeks must be done in realtime, and at much higher frame rates to keep up with user movement.
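To put "realtime at much higher frame rates" in numbers: the first-generation Rift and Vive both refresh at 90Hz, and each frame is typically rendered twice, once per eye. The arithmetic below is a simple illustration of that budget (treating the two eye views as sequential work), not a profile of any particular engine.

```python
# Rough per-frame render budget for a 90Hz stereo HMD (Rift/Vive class).
refresh_hz = 90
views_per_frame = 2                      # one render per eye

frame_budget_ms = 1000 / refresh_hz
print(f"Total frame budget: {frame_budget_ms:.1f} ms")                      # ~11.1 ms
print(f"Per-eye budget (if rendered sequentially): "
      f"{frame_budget_ms / views_per_frame:.1f} ms")                        # ~5.6 ms
```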

For any camera image, depth information is required, which is possible to estimate with calculations based on motion, but not with the level of accuracy required. Instead, if many angles are recorded simultaneously, a 3D analysis of the combination can generate a 3D version of the scene. This is already being done in limited cases for advanced VFX work, but it would require taking it to a whole new level. For static content, a 3D model can be created by processing lots of still images, but storytelling will require 3D motion within this environment. This all seems pretty far out there for a traditional post workflow, but there is one case that will lend itself to this format.

Motion capture-based productions already have the 3D data required to render VR perspectives, because VR is the same basic concept as motion tracking cinematography, except that the viewer controls the “camera” instead of the director. We are already seeing photorealistic motion capture movies showing up in theaters, so these are probably the first types of productions that will make the shift to producing full VR content.

The Maxwell Kepler family of cards.

Viewing this content is still a challenge, and here again Nvidia GPUs are used on the consumer end. Any VR viewing requires sensor input to track the viewer, which must be processed, and the resulting image must be rendered, usually twice for stereo viewing. This requires a significant level of processing power, so Nvidia has created two tiers of hardware recommendations to ensure that users can get a quality VR experience. For consumers, the VR-Ready program includes complete systems based on the GeForce 970 or higher GPUs, which meet the requirements for comfortable VR viewing. VR-Ready for Professionals is a similar program for the Quadro line, including the M5000 and higher GPUs, included in complete systems from partner ISVs. Currently, MSI's new WT72 laptop with the new M5500 GPU is the only mobile platform certified VR Ready for Pros. The new mobile Quadro M5500 has the same system architecture as the desktop workstation Quadro M5000, with all 2048 CUDA cores and 8GB RAM.

While the new top-end Maxwell-based Quadro GPUs are exciting, I am really looking forward to seeing Nvidia’s Pascal technology used for graphics processing in the near future. In the meantime, we have enough performance with existing systems to start processing 360-degree videos and VR experiences.

Mike McCarthy is a freelance post engineer and media workflow consultant based in Northern California. He shares his 10 years of technology experience on www.hd4pc.com, and he can be reached at mike@hd4pc.com.

Creative Thievery: Who owns the art?

By Kristine Pregot

Last month, I had the pleasure of checking out a very compelling panel at SXSW, led by Mary Crosse of Derby Content: Creative Thievery = What’s Yours is Mine?

It was a packed house, and I heard many people mention that this was their absolute favorite panel at SXSW, so it seemed like a good idea to continue the conversation.

How did you conceptualize this panel?
I had seen Richard Prince’s Instagram exhibit last year, and it caused a heated debate about what is art and who owns what outside of the typical art world. I felt it would be interesting to bring a debate about fine art into discussion with professionals in film, interactive and music attending SXSW. These appropriation discussions are so relevant to what we do everyday in the more commercial arts world.

Tell me about the panelists?
I had top panelists participate, including Sergio Munoz Sarmiento, a fine arts lawyer; Hrag Vartanian, the co-founder/editor-in-chief of Hyperallergic, a fine arts blogazine; and Jonathan Rosen, an appropriation artist and ex-advertising creative and commercial director. This trio gave us really unique and informed insights into all aspects of the examples I showed.

The first subject you talked about was Richard Prince taking a photograph of the famous Marlboro Man ad and selling this photo for a lot of money.
This is a pretty famous case in the art world. Richard Prince has made his career off of appropriating others’ work in the extreme. The panel had a mixed reaction to this, although by a near unanimous vote of hands, the crowd was much harsher and felt that what Richard Prince did was morally wrong.

Marlboro

What are your thoughts about Richard Prince?
I personally find the work to be an interesting statement on art, meaning and intent in a piece and on ownership. The fact that it has created so much dialogue about what is fine art over the years makes him relevant. I think many people don’t want to give him that much credit, and perhaps I shouldn’t. However, I think he’s made his art in the act of stealing itself, and if you look at this statement that he’s made with his work in that way, then it’s easier to see it as art.

I thought that Mike Tyson's tattoo artist and his lawsuit against Warner Bros. for the use of this artwork in the film, The Hangover II, was very interesting subject matter. Can you break this case down a little bit?
The tattoo artist who designed Mike Tyson’s face tattoo sued Warner Bros. for a copyright infringement in Hangover II. In the film, Stu (Ed Helms) wakes up after a crazy night of partying in a Bangkok hotel with a replica of Mike Tyson’s face tattoo. The tattoo artist designed it specifically for Mike Tyson and claimed it was a copyrighted work that Warner Bros. had no right to put in the film or on any promotional materials for the film.

The lawsuit nearly affected the release of the film, and there was a possibility that if the two parties couldn’t come to an agreement, the face tattoo would have to be digitally removed for the home video release. In the end, Warner Bros. settled the claim for an undisclosed amount.

This case does open up an interesting discussion about an individual not even owning the design tattooed on their body without a legal document from the tattoo artist saying as much, and it creates the need for filmmakers and advertisers to clear one more element in our work.

What surprised you the most about the panel? Did the audience’s morally correct “vote” surprise you?
We decided that after we discussed what was acceptable in the art world and what was legally right, we’d ask the audience what they felt was morally right. The audience, nearly unanimously, voted together on all examples shown, and very differently from how the art world felt things were acceptable and how the court ruled.

Kristine Pregot is a senior producer at New York City-based Nice Shoes.


Ergonomics from a post perspective (see what I did there?)

By Cory Choy

Austin’s SXSW is quite a conference, with pretty much something for everyone. I attended this year for three reasons: I’m co-producer and re-recording mixer on director Musa Syeed’s narrative feature film in competition, A Stray; I’m a member of the New York Post Alliance and was helping out at our trade show booth; and I’m a blogger and correspondent for this here online publication.

Given that my studio, Silver Sound in New York, has been doing a lot of sound for virtual reality recently, and given the mad scramble that every production company, agency and corporation has been in to make virtual reality content, I was pretty darn sure that my first post was going to be about VR (and don't fear, I will be following up with one soon). But while I was checking out the new 360-degree video camera and rig offerings from Theta360 and 360Heros, and taking a good look at the new Micro Cinema Camera from Blackmagic, I noticed a pretty enthused and sizable crowd at one of the booths. The free Stella Artois beer samples were behind me, so I was pretty excited to go check out what I was sure must be the hip, new virtual reality demonstration, The Martian VR Experience.

To my surprise, the hot demo wasn’t for a new camera rig or stitching software. It was for a chair… sort of. Folks were gathered around a tall table playing with Legos while resting on the Mogo, the latest “leaning seat” offering from inventor Martin Keen’s company, Focal Upright. It’s kind of a mix between a monopod, a swivel stool and an exercise ball chair, and it comes in a neat little portable bag — have chair, will travel! Leaning chairs allow people to comfortably maintain good posture while at their workstations. They also encourage you to work in a position that, unlike most traditional chairs, allows for good blood flow through the legs.

They were raffling off one of those suckers, hence all the people around. I didn’t win, but I did have the opportunity to talk to Keen about his products — a full line of leaning chairs, standing desks and workstations. Keen’s a really nice fellow, and I’m going to follow up with a more in-depth interview in the future. For now, though, the basics are that Keen’s company, Focal Upright, is one of several companies that have emerged to help folks who spend the majority of their days sitting (i.e. all of us post professionals) figure out a way to bring better posture and health back into their daily routines.

As a sound engineer, and therefore as someone who spends a whole lot of time every day at a console or mixing board, ergonomics is something I’ve had to pay a lot of attention to. So I thought I might share some of my, and my colleagues’, ergonomics experiences, thoughts and solutions.

Standing, Sitting and Posture
We’ve all been hearing about it for a while — sitting for extended periods of time can be bad for you. Sitting with bad posture can be even worse. My buddy and co-worker Luke Allen has been doing design and editing at a standing desk for the last couple of years, and he swears that it’s one of the best work decisions he’s ever made. After the first couple of months though, I noticed that he was complaining that his feet were getting tired and his knees hurt. In the same pickle? Luke solved his problem with a chef’s anti-fatigue mat. Want to move around a little more at the standing desk? Check out the Level from FluidStance, another exhibitor at this year’s SXSW show. Not ready for a standing desk? Maybe try exploring a ball chair or fluid disc from physical therapy equipment manufacturer Isokinetics Inc.

Feel a little silly with that stuff? Instead, try getting up and walking around, or stretching every 20 minutes or so — 30 seconds to a minute should do. When I was getting started in this business, I was lucky enough to have the opportunity to apprentice under sound master craftsman Bernie Hajdenberg. I first got to observe him working in the mix, and then after some time, I had the privilege of operating sessions with him. One of the things that struck me was that Bernie usually stood up for the majority of the mixing sessions, and he would pace while discussing changes. When I was operating for him, he had me sit in a seat with no arms that could be raised pretty high. He told me this was very important, and it’s something that I’ve continued throughout my career. And lo and behold, I now realize that part of what Bernie had me do was to make sure that I wasn’t cutting off the circulation in my legs by keeping them extended and a little in front of me. And the chair with no arms helped keep my back straight.

Repetitive Stress
People who use their fingers a lot, whether typing or using a mouse, run the risk of developing a repetitive stress injury. Personally, I had a lot of wrist pain after my first year or so. What to do? First, make sure that your set-up isn’t forcing you to put your hands or wrists in an uncomfortable position. One of the things I did was elevate my mouse pad and keyboard. My buddy Tarcisio, and many others, use a trackball mouse. Try to break up your typing or mouse movements every couple of minutes with frequent, short bursts of finger stretches. After a few weeks of introducing stretching into my routine, my wrist and finger pain was alleviated greatly.

Cory Choy is an audio engineer and co-founder of Silver Sound Studios in New York City. He was recently nominated for an Emmy for “Outstanding Sound Mixing for Live Action” for Born To Explore.

Catching up with Foundation Edit’s Jason Uson

By Kristine Pregot

Austin’s Foundation Editorial is a four-year-old editorial facility founded by editor Jason Uson. Nice Shoes and Foundation Edit have been working together since 2014, when our companies launched a remote partnership allowing clients in Austin to work with Nice Shoes colorists in New York, Chicago and Minneapolis. So, when it came time to pick a location for our 2016 SXSW party, which we hosted with our friends at Sound Lounge, Derby Content and Audio Network, Foundation Edit was a natural choice.

In-between the epic program of parties, panels and screenings, I was able to chat with Jason about his edit shop, SXSW, remote color, and the tattoo artist giving out real tattoos at our party…

What was the genesis of Foundation Editorial?
I started my career at Rock Paper Scissors, and spent four years there learning from the best. I then freelanced all over Los Angeles at the top shops and worked with some of the most talented editors in the industry, both in broadcast and film. I always dreamed of having my own shop and after years of building amazing relationships, it was time.

What platforms do you edit on?
I am a Media Composer editor. I always have been, but I haven’t touched it in over two years. Apple FCP 7 has been our go-to, as well as Adobe Premiere. They are both amazing tools, but there is something special about Avid Media Composer that I miss.

How many editors do you have at Foundation Edit?
We have two editors: myself and Blake Skaggs. Our styles are different, but our workflow is very similar. It’s nice to have someone with his caliber of talent working alongside me.

How do you usually spend SXSW?
I usually spend SXSW in my edit bay, typically booked on some fun projects. I was lucky enough this year to get Sunday off for the party. I hit up a few movies and shows.

How did the 2016 SXSW party come together?
It was a no-brainer. We are lucky to be in the heart of it all and surrounded by so much creativity. We have a great location that lends itself to hosting our clients, friends and colleagues, but with so many people involved and with SXSW being as big as it is, it was no small feat. It had its challenges, but in the end it was a great success.

The tattoo artist at the party was amazing. 
My partner, Transistor Studios, came up with the idea, and I thought it was a perfect fit for us. We all have tattoos and love the process, and we thought it would be a great addition to the party. Damon Meena, Aaron Baumle and Jamie Rockaway flew our tattoo artist, Mike Lucena, in from Brooklyn.

What’s your favorite thing about Austin?
That’s a loaded question. There is so much to love about Austin. I think it starts with the spirit of the city. Austin is a genuine community of people that celebrate and encourage talent, creativity and artistry. It’s in the DNA of who Austin is. Although the city is growing at a massive pace, and we all see and feel the changes, there is still that heart — that core Austin feeling. Let’s be honest though, the food is a major favorite! I’ll just leave you with some key words: barbeque and tacos.

Before I let you go, can you talk about the last collaboration between Nice Shoes and Foundation Edit?
Nice Shoes colorist Gene Curley outdid himself this time working on See What They See for Walgreens. We created six long-form pieces, three 30-second spots, and somewhere in the area of 50 social videos.

GSD&M’s group creative director, Bryan Edwards, and his team — Joel Guidry, Gregg Wyatt and Barrett Michaels — worked with associate producer Dylan Heimbrock. They went to Uganda and put cameras in kids’ hands to “See What They See.” So their campaign needed two “looks.” The beauty of Uganda for the first look, and then our second look needed to not only be beautiful and thoughtful, but different enough to tell the story through these kids’ eyes.

Gene really found the common thread the campaign needed to be successful. It’s really an amazing service to be able to collaborate with the entire team of Nice Shoes colorists in realtime between New York City and Austin.

Kristine Pregot is a senior producer at New York City-based Nice Shoes.

A glimpse at what Sony has in store for NAB

By Fergus Burnett

I visited Sony HQ in Manhattan for their pre-NAB Show press conference recently. In a board room with tiny muffins, mini bagels and a great view of New York, we sat pleasantly for a few hours to learn about the direction the company is taking in 2016.

Sony announced details for a slew of 4K- and HDR-capable broadcast cameras and workflow systems, all backwards compatible with standard HD to ease the professional and consumer transition to Ultra-HD.

As well as broadcast and motion picture, Sony’s Pro division has a finger in the corporate, healthcare, education and faith markets. They have been steadily pushing their new products and systems into universities, private companies, hospitals and every other kind of institution. Last year, they helped to fit out the very first 4K church.

I work as a DIT/dailies technician in the motion picture industry rather than broadcast, so many of these product announcements were outside my sphere of professional interest, but it was fascinating to gain an understanding of the immense scale and variety of markets that Sony is working in.

There were only a handful of new additions to the CineAlta line: firmware updates for the F5 and F55, and a new 4K recording module. These two cameras have really endured in popularity since their introduction in 2012.

The new AXS-R7 recording module offers a few improvements over its predecessor, the AXS-R5. It’s capable of full 4K up to 120fps and has a nifty 30-second cache capability, which is going to be really useful for shooting water droplets in slow motion. The AXS-R7 uses a new kind of high-speed media card that looks like a slightly smaller SxS — it’s called AXSM-S48. Sony is really on fire with these names!

A common and unfortunate problem when I am dealing with on-set dailies is sketchy card readers. This is something that ALL motion picture camera companies are guilty of producing. USB 3.0 is just not fast enough when copying huge chunks of critical camera data to multiple drives, and I’ve found the power connector on the current AXS card reader to be touchy on separate occasions with different readers, causing the card to eject in the midst of offloading. Though there are no details yet, I was assured that the AXSM-S48 reader would use a faster connection than USB 3.0. I certainly hope so; it’s a weak point in what is otherwise a fairly trouble-free camera ecosystem.

Looming at the top of the CineAlta lineup, the F65 is still Sony’s flagship camera for cinema production. Its specs were outrageous four years ago and still are, but it never became a common sight on film sets. The 8K resolution was mostly unnecessary even for top-tier productions. I inquired where Sony saw the F65 sitting among its competition from Arri and Red, as well as their own F55, which has become a staple of TV drama.

Sony sees the F65 as their true cinema camera, ideally suited for projection on large screens. They admitted that while uptake of the camera was slow after its introduction, rentals have been increasing as more DPs gain experience with the camera, enjoying its low-light capabilities, color gamut and sheer physical bulk.

Sony manufactures a gigantic fleet of sensible, soberly named cameras for every conceivable purpose. They are very capable production tools, but it’s only a small part of Sony’s overall strategy.

With 4K HDR delivery fast becoming standard and expected, we are headed for a future world where pictures are more appealing than reality. From production to consumption, Sony could well be set to dominate that world. We already watch Sony-produced movies shot on Sony cameras playing on Sony screens, and we listen to Sony musicians on Sony stereos as we make our way to worship the God of sound and vision in a 4K church.

Enjoy NAB everyone!

Quick Chat from Sundance: ‘Mobilize’ director Caroline Monnet

By Kristine Pregot

Caroline Monnet’s Mobilize takes viewers on a journey from Canada’s far north to its urban south, telling the story of those who live on the land and “are driven by the pulse of the natural world.” Mobilize is part of Souvenir, a four-film series addressing Aboriginal identity and representation by reworking material in the National Film Board of Canada’s archives.

The above description of Mobilize doesn’t do the film justice. It was amazing, and I was so impressed with this while attending Sundance’s Short Program 1 — the way the footage was brought together through the music and editing — that I had to interview Monnet about her process.

Can you explain how you conceived of the idea for the short?
I was one of four filmmakers approached by the National Film Board of Canada to create a four-minute film addressing Aboriginal identity. The idea was to revamp their archives in a contemporary way, with new meaning and context. I decided to focus on a positive representation of natives and explored the idea of moving forward. I used images with movement, people building stuff and showing off their skills. Really just natives kicking ass on screen!

I also thought it was interesting to use archival footage to speak about the future, to express an idea of contemporaneity while still honoring the past. I knew I wanted the film to feel like a journey, be fast paced and exhilarating. I wanted our hearts to start pounding as if it is time to stand up and mobilize.

How did you choose your music?
I decided to go with Tanya Tagaq’s song Uja to complement the visuals and inform the editing process. Tagaq is a Canadian Inuit throat singer. Her metal/punk/tribal sound helped in adding a level of urgency and intensity and in making the footage contemporary. Her music makes up 50 percent of the experience.

What was your editing process like?
The turn-around in making the films was extremely short — I had approximately one month. Along with editor Jesse Rivière, who cut on Adobe Premiere, we edited over an intense week. It was good to have a specific concept; this allowed me to go through the archives and search key words.

That must be a huge archive?
The National Film Board of Canada has over 700 films in their catalog, so you can imagine the amount of archival footage they have available. I did not purposely choose footage from a specific film. I wanted images that could work well together and would fit my concept. I went with my instincts and began to naturally pick clips from specific films.

In Mobilize, I used a lot of footage from films such as Cree Hunters of Mistassini, César et son canot d’écorce and High Steel, among others. These films are recognizable because they were quite successful NFB films.

For my part, I deconstructed the films and placed them in a different context. I really focused on labor and the expression of specific skills. I think cultural expression remains cultural expression, but with Mobilize we don’t necessarily focus on a specific character or narrative, we focus on the work and celebrating the amazing skills of these individuals. I juxtaposed a lot of footage of people building stuff and moving in a specific direction. The way I’ve reworked the footage makes it seem as if people are preparing for something… getting ready for something important that’s coming.

I wanted Mobilize to be an experience where viewers would be in for an upbeat adventure where their heart would start pounding, they would be out of breath and bombarded by a positive rendition of indigenous expression — a fast-paced ride where they would feel indigenous people are very much alive, moving forward, anchored in today’s reality, vibrant and contemporary.

What was the original footage shot on?
The original footage was shot on 16mm film, and I purposely decided to only use that kind of footage. It had to be color, and it had to be film. This was important because I wanted the film to have a certain consistency. I wanted audiences to wonder if I shot the footage myself or if it was really found footage. The 16mm footage adds a level of nostalgia and warmth to the piece without being outdated.

Where was the footage converted?
The National Film Board of Canada must have spent months digitizing all the films they produced over the years. I felt very lucky to have access to that material. There is some very valuable footage of indigenous expression that is still relevant today and could be used as a tool for education.

There are two parts to the piece — can you talk about that?
Mobilize is a call for action. It is also a call to change perceptions on native people.  It’s about being capable of movement, mobilizing us to keep moving forward and encouraging people to act for political and social change. I also think the title has a double meaning, because there are different ways of mobilizing ourselves. Building a canoe or snowshoes takes massive skills, and I wanted to showcase that.

I also wanted the outcomes to be positive. Today, being “urban indigenous” doesn’t make you any less native, or any more assimilated. It is just a reality that exists. For me, the ending of the film is not about assimilation or that there are more opportunities in the cities. I wanted to celebrate the value of hard labor, whether it’s done in an urban or a natural setting.

The sequencing of the images speaks a bit to my own family history, where my grandparents were living in the bush, and with the passing generations we became more and more urban. However, this does not mean that I cannot go back to the bush and learn all these things.

I refer to “people always moving forward” as a statement to say that we are everywhere, well present, active and ready to kick some serious ass.

Where can viewers watch this? Is this available online?
For now Mobilize is doing the festival circuit. It played at the Holiday Village Cinema in Park City on January 30. Next is Berlinale (Berlin), Uppsala (Sweden) and Tampere (Finland). Also, the National Film Board of Canada is planning on putting the film online this spring.

In the meantime, you can check out a clip at www.nfb.ca/film/mobilize/clip/mobilize_clip.

Kristine Pregot is a senior producer at New York City-based Nice Shoes.


The era of the demo reel has come and gone

By Joel Pilger

Be honest, when business gets slow at your creative studio, does someone suggest, “Let’s cut a new demo reel?”

Ah, the “demo reel.” That classic, rapid-fire, glitzy montage of your studio’s best work. The first demo reels I ever saw came from legendary studios like Pittard Sullivan, GRFX Novocom and Telezign. They gave me goose bumps. That was 1994.

Amazing how the demo reel (or “sizzle reel”) has remained the stalwart calling card for motion design studios and production companies for over two decades. Or has it? After all, the goose bumps are long gone.

I believe the venerable demo reel is finally dying. I submit that if we creative studios took the traditional “demo reel” out back and shot it, our clients would cheer.

Speed Dating Reality Check
Here is a real-world example, which makes my point: PromaxBDA Client/Agency Speed Dating. This event consists of 10 TV network clients sitting across from 10 top creative studios. As one of those studios, you’ve got just 10 minutes to introduce your firm and make your pitch. When the bell rings, you slide over to the next TV network and repeat, 10 times.

Those poor TV network folks… After one such event I asked a dazed looking client, “So what stood out?” He said, “Uh, I kinda remember the last demo reel. Maybe.”

Speed dating is quite a reality check. Yes, playing your demo reel for 10 top TV networks is thrilling, but going up against nine other studios — also showing demo reels — is not thrilling. In that context you suddenly realize all demo reels pretty much look the same. Speed dating is the chance of a lifetime, yet you’re walking away wishing you could have shown something… more.

Lori Pate, an industry veteran who has represented many creative studios, tells it like it is: “Montages do nothing more than list the clients who have trusted you with their brand. So if you insist on a montage, just show logos! I prefer to show full case studies.”

Demo reel montages don’t work at speed dating, or maybe demo reels just don’t work.

Has the demo reel gone the way of the View Master? This writer thinks so.

What a Demo Reel Says
If your creative studio features a demo reel on your website, it says two things to visitors: 1) You can edit a montage. 2) You look like everyone else.

Demo reels used to be a great way to make so-so work look stronger, but today’s savvy clients are not impressed. They want to get right to the work that best demonstrates your expertise and your personality. Michael Waldron, VP, creative director of art and design at TV Land, who also spent years on the vendor side as a principal at NYC creative studio Nailgun, says, “The only thing a demo reel shows is that the company is really good at editing. I would rather see their newest and best works. I want to know what the challenge was and how they solved it. A lot of that is stripped away in a montage.”

Montages are even less acceptable in the commercial world. Ellie Anderson, CEO at Griffin Archer, a Minneapolis-based ad agency, had this to say: “I don’t think a ‘sexy montage’ is a very accurate representation of the work because you’re only seeing the flashy bits and not the entire concept. Like a movie trailer showing you the one great joke when the rest of the film sucks. Personally, I’d rather see the work in its entirety.”

No one is going to tell you this truth, so I will: prospective clients don’t watch your demo reel. And don’t ever make the mistake of broadcasting a message like, “Check out our new reel!” across all your social media channels. That just makes matters worse.

Forget the Sizzle, Show the Steak
So the trend has shifted to presenting projects in their entirety. Why? Because showing entire projects answers questions about your creative studio’s ability to solve much bigger problems, such as, is the concept yours? Did you handle the copywriting? Did you shoot it? Direct it? Produce it? It’s tempting to hide behind a montage, but you’re not fooling anyone. Not anymore.

One former executive producer, Heidi Bayer, who now reps studios in the broadcast space via Numodo, says, “I have spoken with many network executives; they find it difficult to tell just what a company did on a particular project when viewed within a traditional sizzle/demo reel. Instead, I try to figure out the client’s need then curate spots which speak to that.”

It’s really that simple: try to figure out what your client needs, then show them solutions.

Make Sure You Answer This One Question
At the end of the day, the biggest question clients have goes something like this: If I hire your studio, can you deal with demanding requirements, never drop the ball, make me look great and yet somehow keep the creative pure? A demo reel can’t answer those questions.

However you choose to showcase your firm’s work, always remember expertise and personality get you noticed, not a demo reel. You can now safely take that old school approach out back and shoot it.

For 20 years, Joel Pilger helmed Impossible Pictures. Now as a consultant with RevThink, Joel advises owners of top agencies, studios and production companies. You can follow him on Twitter @joelpilger.

Slamdance, Sundance: Why it’s important to audio post pros

By Cory Choy

Why are we, audio post professionals, in Park City right now? The most immediate reason is Silver Sound has some skin in the game this year: we are both executive producers and the post sound team for Driftwood, a feature narrative in competition at Slamdance that was shot completely MOS. We also provided production and audio post for Resonance and World Tour, Google’s featured VR Google Cardboard demos at Sundance’s New Frontier.

Sundance’s footprint is everywhere here. During the festival, the entirety of Park City is transformed — schools, libraries, cafes, restaurants, hotels and office buildings are now venues for screenings, panel discussions and workshops. A complex and comprehensive network of shuttle buses allows festival goers to get around without having to rely on their own vehicles.

Tech companies, such as Samsung and Canon, set up public areas for people to rest, talk, demo their wares and mingle. You can’t take three steps in any direction without bumping into a director, producer or someone who provides services to filmmakers. In addition to being chock full of industry folk — and this is a very important ingredient — Park City is charming, beautiful and very different than the American film hubs, New York and Los Angeles. So people are in a relaxed and friendly mood.

Films in competition at Sundance often feature big-name actors, receive critical acclaim and more and more often are receiving distribution. In short, this is the place to make personal connections with “indie” filmmaking professionals who are either directly, or through friends, connected to the studio system.

As a partner and engineer at a boutique sound studio in Manhattan, I see this as a fantastic opportunity to cut through the noise and hopefully put myself, and my company, on the radar of folks with whom I might not otherwise get a chance to meet or collaborate. It’s a chance for me, a post professional in the indie world, to elevate my game.

Slamdance
Slamdance sets up shop in one very specific location, the Treasure Mountain Inn on Main Street in Park City. It happens at the same time as Sundance — and is located right in the eye of the storm — but has built a reputation for celebrating the most indie of the indies. Films in competition at Slamdance must have budgets under one million dollars (and many have budgets far below that). Where Sundance is a sprawling behemoth — long lines, hard-to-get tickets, dozens of venues, the inability to see all that is offered — Slamdance sort of feels like a friend’s very nice living room.

Slamdance logo

Many folks see most of or even the entire line-up of films. There’s no rushing about to different locations. Slamdance embraces the DIY, and is about empowering people outside of the industry establishment. Tech companies such as Blackmagic and Digital Bolex hold workshops geared toward helping filmmakers with smaller budgets make films unencumbered by technical limits. This is a place where daring and often new or first-time filmmakers showcase their work. Often it’s one of the first times, or perhaps even the first time, they’ve gone through the post and finishing process. It is the perfect place for an audio professional to shine.

In my experience, the films that screen best at Slamdance — the ones that are the most immersive and get the most attention — are the ones with a solid sound mix and a creative sound design. This is because some of the films in competition have had minimal or no post sound. They are enjoyable, but the audience finds itself sporadically taken out of the story for technical reasons. The directors and producers of these films are going to keep creating, and after being exposed to and competing against films with very good sound, are probably going to be looking to forge a creative partnership — one that could quite possibly grow and last the entirety or majority of their future careers — with a post sound person or team. Like Silver Sound!

Cory Choy is an audio engineer and co-founder of Silver Sound Studios in New York City.

The immeasurable beauty of film

This senior grader/colorist loves the look and feel of celluloid.

By Paul Dean

During my 34-year career as a film and digital grader/colorist, there has been much technological advancement. However, there is one enduring constant that never ceases to amaze me: film’s unique ability to capture and render light so beautifully.

This led me to ask myself some deep questions as to exactly why I prefer film over digital. The answer was quite a revelation and has more to do with human perception than anything that can be measured technically.

Regardless of how good next-generation digital cameras are, in my opinion, they simply fail to capture images with the same subtle, natural feel. This, I believe, is due to the unique way film captures light, which is incredibly similar to the way our eyes process light and color through the rods and cones of the retina.

If you view a very low-light scene in real life and allow your mind to describe what you are actually seeing, you will notice an effect comparable to film grain in dark areas as your eyes try to decipher what little light there is. The result is a subtle, step-less progression as the more discernible features emerge from granular darkness, forming an analogue curve that is perfectly replicated when film captures light.

The human subconscious has evolved over millions of years to be capable of detecting when something is instinctively wrong, without our conscious minds ever being aware of any impending danger. These deep subconscious survival instincts equip human beings with warning emotions such as the “sixth sense.” We have all experienced that gut feeling when we intuitively know when a situation is not as it appears and we are being deceived.

We are a highly developed, finely-tuned, organic, analog, three-dimensional species — is it really any wonder our powerful subconscious minds detect and reject two-dimensional 1s and 0s masquerading as human reality?

Cinelab recently worked on the dailies for Suffragette.

When viewing images captured on film I become totally immersed and engaged in the story and performances; the emotion and energy projected from the cast is fully conveyed to the audience, the movement and flow of the action effortlessly watchable.

Conversely, when I view images captured digitally, this all-enveloping engagement does not occur due to the constant distraction from my subconscious alarm warning, “Don’t trust it, it’s not real.” I feel that film absorbs the audience into the story itself, where digital leaves them on the outside looking in.

When you consider the huge amount of skill, time, passion, dedication and energy — not to mention the often vast sums of money involved in filmmaking — why choose to capture all of this on anything other than a technology that has been refined and perfected for over 100 years, and is absolutely unique in its ability to capture every facet of a production, every emotion and the very soul of a performance?

Of course, digital has its place and is rightly admired and respected; its technological achievements cannot be denied when specifications boast a resolution equal to, or greater than film, but I don’t agree with the fixation on these numbers — it is simply not better, it is just different.

Co-existence
The “Which is Best?” debate is irrelevant; the two technologies can and should co-exist. The choice will inevitably be genre-led; however, we must ensure a choice remains by not allowing film to disappear from the cinematographer’s palette.

Capturing a story in moving pictures is both complex and technical, but capturing emotion takes something more, something special. Film is something special, so let’s not lose it. If I had a story to tell, I would capture it on the beautiful canvas of film, and rest assured that nothing was lost in translation.

Paul Dean is head of telecine at Cinelab in London. For 20 years he has worked as a senior colorist, specializing in dailies/archive, working at Todd-AO UK, Soho Film Lab and Deluxe on hundreds of features and TV dramas. Dean joined Cinelab in 2013 to head up and develop its telecine, scanning and grading services.

Why fast file transfers are critical to video production, post


By Katie Staveley

Accelerated file transfer software is not new. It’s been around for many years and has been used by the world’s largest media brands. For those teams of content producers, it has been a critical piece of their workflow architecture, but it wasn’t until recently that this kind of software became accessible to companies of every size, not just the largest. And just in time.

It goes without saying that the process of producing and delivering content is ever-evolving. New problems and, as a result, new solutions arise all the time. However, a few challenges in particular seem to define the modern media landscape, including support for a globally distributed team, continuous demand for high-resolution content and managing the cost of production.

These challenges can be thought of from many different angles, and likewise resolved in different ways. One aspect that is often overlooked is how organizations are moving their precious video content around as part of the pre-production, post and distribution phases of the workflow. The impact of distributed teams, higher resolution content and increasing costs are driving organizations of all sizes to rethink how they are moving content. Solutions that were once “good enough” to get the job done — like FTP or shipping physical media — are rapidly being replaced with purpose-built file transfer tools.

Here are some of the reasons why:

1. Distributed teams require a new approach
Bringing final content to market very rarely happens under one roof or in one location anymore. More and more teams of media professionals are working around the globe. Obviously, production teams work remotely when they are filming on location. And now, with the help of technology, media organizations can build distributed teams and get access to the talent they need regardless of where they’re located, giving them a competitive advantage. In order to make this work well, organizations need to consider implementing a fast file transfer solution that is not only accessible globally, but moves large files fast, especially when bandwidth speeds are less than optimal.

2. File sizes are growing
The demand for higher resolution content is driving innovation of production technology like cameras, audio equipment and software. While HD and even Ultra HD (UHD) content is becoming more mainstream, media professionals have to think about how their entire toolset is helping them meet those demands. High-resolution content means dramatically larger file sizes. Moving large files around within the prepro and post workflows, or distributing final content to clients, can be especially difficult when you don’t have the right tools in place. If your team is delivering HD or UHD content today, or plans to in the future, implementing a fast file transfer solution that will help you send content of any size without disrupting your business is no longer a nice-to-have. It’s business critical.
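
To put that growth in perspective, here is a rough back-of-the-envelope sketch in Python (my own illustrative numbers, not Signiant’s) of what uncompressed HD versus UHD footage works out to per minute. The 10-bit 4:2:2 assumption and the 24fps frame rate are just placeholders; swap in your own codec math.

def uncompressed_gb_per_minute(width, height, fps, bits_per_pixel=20):
    # 10-bit 4:2:2 video averages roughly 20 bits per pixel (assumption)
    bits_per_second = width * height * bits_per_pixel * fps
    return bits_per_second * 60 / 8 / 1e9  # convert to gigabytes per minute

for label, (w, h) in {"HD 1080p": (1920, 1080), "UHD 2160p": (3840, 2160)}.items():
    print(f"{label}: ~{uncompressed_gb_per_minute(w, h, fps=24):.0f} GB per minute")

Even with compression doing much of the heavy lifting in practice, that four-fold jump in pixel count is what pushes delivery packages past what generic tools handle comfortably.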

3. You can’t afford delays
When it comes to getting your files where they need to be, hope is not a strategy. The reality is that production will often finish up later than you hoped. Deadlines are hard and you still need to get your content out the door. Any number of factors can cause you to miss deadlines, but transferring content files shouldn’t be your biggest delay. You can’t afford slow transfer times, or even worse, interruptions that force you to start the transfer all over again. Implementing a solution that gives you reliable, fast file transfer and predictability around when your files will arrive is a strategy. Not only will it enable your employees and partners to focus on producing the content, it will help you to create a positive experience for your customers whether they are reviewing pre-release content, or receiving the final cut.
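
For a sense of what those delays look like in practice, here is a hypothetical calculation (the file size, link speeds and efficiency factor are my own assumptions, not figures from this post) showing how long a finished deliverable takes at different sustained link speeds, and therefore what an interrupted transfer that restarts from zero really costs near a deadline.

def transfer_hours(file_gb, link_mbps, efficiency=0.8):
    # efficiency is a rough guess at protocol overhead and real-world throughput loss
    seconds = (file_gb * 8e9) / (link_mbps * 1e6 * efficiency)
    return seconds / 3600

package_gb = 500  # hypothetical UHD delivery package
for mbps in (100, 500, 1000):
    print(f"{package_gb} GB over {mbps} Mb/s: ~{transfer_hours(package_gb, mbps):.1f} hours")

At the low end that is well over a full working day per attempt, which is exactly why resumable, accelerated transfers matter.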

4. Customer experience matters
Any time your customers are interacting with your brand they are forming an opinion of you. In today’s highly-competitive world, it’s imperative that you delight your customers with the content you’re producing and their experience working with you. Your file transfer solution is part of building that positive experience. The solution needs to be reliable and fast and not leave your customers disappointed because the file didn’t arrive when they expected; or make them feel frustrated because it was too painful to use. They should be able to focus on your content, not on how you’re delivering it to them — your solution should just work. It’s a necessary part of today’s media business to have a cost-efficient, low-maintenance way to send and share content that ensures a delightful customer experience.

5. Your business is growing
Moving digital video content has been part of the media business for over a decade, and there have been solutions that have worked well enough for many organizations. But when considering the rapid growth in file sizes, increased distribution of teams and the importance of customer experience, you’ll find that those solutions are not built to scale as your business grows. Planning for the future means finding a solution that has flexibility of deployment, is easy to manage and maintain, and whose cost of expansion is proportional to your size. Growth is hard, but managing your file transfer tools doesn’t have to be.

Managing cost and keeping profit margins healthy is as imperative as always. Fortunately, the days when every technology purchase required significant capital investment are waning. The good news is that the availability of cloud-hosted solutions and other advancements has given rise to powerful solutions that are accessible to every size company. As a result, media professionals have affordable access to the technology they need to stay competitive without breaking the bank, which includes fast file transfer software. Investing today in the right solution will make a big impact on your business now and into the future.

Katie Staveley is VP of marketing at Signiant.

‘Fix it in…Prep?’ panel via Produced By: New York

Focusing on the value of getting post artists involved early

By Kristine Pregot

Working as a producer for the past 10 years, I have watched — along with many fellow producers — the enormous changes in technology and post workflows. The days of digitizing Digi Betas are long gone. It is truly an amazing and exciting time for post.

Producers are constantly (sometimes desperately) striving to stay current with advances in post production. So we at Nice Shoes thought it would be great to share the latest post workflows with the New York television and film community by participating in a panel at Produced By: New York, which was held last month at the Time Warner Center. We collaborated with partners FilmLight and Sony to develop this concept and co-sponsor the discussion.

Kristine Pregot introducing the “Fix It In… Prep?” panel.

The panel included experts who shared their tips on how to save time, money and, most importantly, headaches.

The discussion was moderated by Jennifer Lane, post production supervisor/secretary of the Post NY Alliance. She has supervised projects such as Billy Lynn’s Long Halftime Walk and Into the Woods. Lane acknowledged that while most producers have a strong concept of what they want a project to look like, they don’t always consider editorial, visual effects, music or color grading during the pre-production phase. Bringing post artists into the conversation at the start of a project leads to enhancing imagery rather than “fixing” it in post, she emphasized.

Other members of the panel included Alison Beckett (Kill the Messenger, The Hundred-Foot Journey, Bessie); Brad Carpenter (Vinyl, Boardwalk Empire, Nurse Jackie); Nice Shoes colorist Chris Ryan; Peter Saraf (About Ray, Safety Not Guaranteed, Little Miss Sunshine); Psyop senior VFX supervisor Dan Schrecker (Hail Caesar!, Black Swan); and Tim Squyres (Life of Pi, Unbroken).

L-R: Peter Saraf, Chris Ryan and Brad Carpenter.

The panelists talked about collaborating with their post teams, sharing how relying on those teams’ skills and experience early on expanded the possibilities for a project. There were even a few funny stories of how post saved the day. One example involved a song swap for a key scene that was saved by the editor. Another involved a crafty blend of visual effects and editorial skills that allowed filmmakers to create an exciting new ending for a film that previously ended in a mundane way.

Summing up, filmmaking is a team sport, and with new technologies blurring the lines between production and post, we are all in this together. So, yes, you can fix it in post, but it will cost you!

Kristine Pregot is a senior producer at New York City-based Nice Shoes.


Blog: The Hamptons International Film Festival 2015  

By Kristine Pregot

Recently, I attended the 23rd annual Hamptons International Film Festival (HIFF) — one of the East Coast’s best — which offers amazing films along with some gorgeous fall foliage by the ocean. It was a great weekend, one where I quite literally rubbed elbows with Alec Baldwin at a film screening — I ended up getting the armrest BTW.

Nice Shoes was a sponsor of this year’s festival, which was founded to showcase independent film — long, short, fiction and documentary — and to introduce a unique and varied spectrum of international films and filmmakers to the New York market. The festival is committed to exhibiting films that offer global perspectives and innovative messaging, with the hope that these programs will enlighten audiences.

Our sponsorship contribution to the festival was an in-kind service of color grading for the festival’s Best Documentary Feature Film Award, which went to David Shapiro for his documentary, Missing People. It was a compelling story, taking the viewer deep into the dark worlds of art and violence.

The Nice Shoes crew, including Kristine Pregot (fourth from left).

HIFF is an Oscar-qualifying festival for short films, and they host various competitions, including a series that focuses specifically on early-career filmmakers. The festival also helps to develop a discussion around their films, both within the film community and beyond.

HIFF also ensures that films screened in the festival garner attention and coverage by working closely with the New York Film Critics Circle, a group of NY-based film writers who come out to screen and recap this year’s buzz-worthy films.

NYWIFT (New York Women In Film and Television) had a nice presence at the festival, with the organization hosting a joint venture for the 13th year with the festival organizers, called “Women Calling the Shots.” This series gives voice to the creative visions of women through film and video, including narrative, documentary, animation and experimental works.

They generously hosted a brunch on Sunday morning where filmmakers gathered to discuss films in the festival as well as upcoming projects. The organization awarded several scholarships, grants and other awards. The brunch made me extremely proud to be a member of the women filmmaking community in New York.

That evening, Nice Shoes co-hosted a party with the festival organizers for the filmmakers at the lovely East Hampton hang, Race Lane. We had a blast chatting with filmmakers about upcoming projects around the lovely fireplace.

In a nutshell, the Hamptons International Film Festival offers an amazing opportunity to connect with an array of talented artists. We can’t wait to do it again next year.


Kristine Pregot is a senior producer at New York City-based Nice Shoes.


Dolby Cinema combines HDR video, immersive surround sound

By Mel Lambert

In addition to its advances in immersive surround sound, culminating in the new object-based Atmos format for theatrical and consumer playback, Dolby remains committed to innovating video solutions for the post and digital cinema communities.

Leveraging video technologies developed for high-resolution video monitors targeted at on-location, colorist and QC displays, the company also has been developing Dolby Cinema, which combines proprietary high dynamic range (HDR) Dolby Vision with Dolby Atmos immersive sound playback.

The first Dolby Cinema installations comprise a joint venture with AMC Entertainment — the nation’s second-largest theater chain — and, according to AMC’s EVP of US operations, John McDonald, the companies are planning to unveil up to 100 such “Dolby Cinema at AMC Prime” theaters around the world within the next decade. To date, approximately a dozen such premium large format (PLF) locations have opened in the US and Europe.

Dolby Vision requires two specially modified, HDR Christie Digital 4K laser projectors, together with state-of-the-art optics and image processing, to provide an HDR output with light levels significantly greater than conventional Xenon digital projectors. Dolby Vision’s HDR output, with enhanced color technology, has been lauded by filmmakers for its enhanced contrast, high brightness and gamut range that is said to more closely match human vision.

Unique to the Dolby Vision projection system, beyond its brightness and vivid color reproduction, is its claimed ability to deliver HDR images with an extended contrast ratio that exceeds any other image technology currently on the market. The result is described by Dolby as a “richer, more detailed viewing experience, with strikingly vivid and realistic images that transport audiences into a movie’s immersive world.”

During a recent system demo at AMC16 in Burbank, Doug Darrow, Dolby’s SVP of Cinema, said, “Today’s movie audiences have an insatiable appetite for experiences. They want to be moved, and they want to feel [the on-screen action]. The combination of our Dolby Vision technology and Dolby Atmos offers audiences an immersive audio-video experience.”

The new proprietary system offers up to 31 foot-lamberts (fL) of screen brightness for 2D Dolby Vision content, more than twice the 14 fL required by the Digital Cinema Initiatives (DCI) specification.

Recent films released in Dolby Cinema include Sony’s The Perfect Guy; Paramount’s Mission: Impossible – Rogue Nation; Fox’s Maze Runner: The Scorch Trials; Fox’s The Martian; Warner’s Pan; and Universal’s Everest. Upcoming releases include Warner’s In the Heart of the Sea, Lionsgate’s The Hunger Games: Mockingjay — Part 2 and Disney’s The Jungle Book.

During a series of endorsement videos shown at the Burbank showcase, Wes Ball, director of Maze Runner: The Scorch Trials, said, “It’s the only way I want to show movies.”

The new theatrical presentation format fits into existing post workflows, according to Stuart Bowling, Dolby’s director of content and creative relations. “Digital cameras are capable of capturing images with tremendous dynamic range that is suitable for Dolby Vision, which is capable of delivering a wide P3 color gamut. Laser projection can also extend the P3 color space to exceed Rec. 2020 [ITU-R Recommendation BT.2020], which is invaluable for animation and VFX. For now, however, we will likely see filmmakers stay within the P3 gamut.”

For enhanced visual coverage, the large-format screens extend from wall to wall and floor to ceiling, with matte-black side walls and fittings to reduce ambient light scattering that can easily diminish the HDR experience. “Whereas conventional presentations offer maybe 2,000:1 contrast ratios,” Bowling stressed, “Dolby Vision offers 1,000,000:1 [dynamic range], with true, inky blacks.”
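
For readers who think in stops rather than ratios, converting those quoted contrast figures to stops of dynamic range (log base 2) makes the gap more concrete. This is my own back-of-envelope math on the numbers above, not a Dolby calculation.

import math

for label, ratio in [("Conventional projection", 2000), ("Dolby Vision (claimed)", 1000000)]:
    # each stop doubles the light, so stops = log2(contrast ratio)
    print(f"{label}: {ratio:,}:1 contrast is roughly {math.log2(ratio):.1f} stops")

That works out to roughly 11 stops versus roughly 20 stops on screen, which is the difference Bowling is describing with “true, inky blacks.”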

Mel Lambert is principal of Content Creators, an LA-based copywriting and editorial service. He can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

IBC 2015: Adventures in archiving

By Tom Coughlin

Once you have your content and have completed that award-winning new project, Oscar-nominated film or brilliant and effective commercial, where does your data go? Today, video content can be reused and repurposed a number of times, producing a continuing revenue stream by providing viewing for many generations of people. That makes video archives valuable and also requires changes from in-active to more active archives. This article looks at some of the archiving products on display at the 2015 IBC.

Coughlin Associates’ Digital Storage in Media and Entertainment Report estimates the revenue spent on various media and entertainment storage markets in 2014. Note that although almost 96 percent of all M&E storage capacity is used for archiving, only about 45 percent of the spending is for archiving.

Quantum showcased its StorNext 5 shared storage architecture, which includes high-performance online storage, extended online storage and tape- and cloud-based archives. The company also highlighted the StorNext Connect, a management and monitoring console that provides an at-a-glance dashboard of the entire StorNext environment. At IBC, Quantum introduced their Q-Cloud Archive that extends StorNext workflow capabilities to the cloud, allowing end-to-end StorNext environments to leverage cloud storage fully with no additional hardware, separate applications or programming while maintaining full compatibility with existing software applications.

The Quantum Storage Manager migrates data from online storage to its object-based Lattus, allowing secure, long-term storage with greater durability than RAID and extremely high scalability. Content can be migrated from Lattus to tape archives or Q-Cloud archives automatically. In addition, Quantum’s Artico intelligent NAS archive appliance was on display, offering low-cost scale-out storage for active media archives that can scale to PBs of content across HDDs, extended online storage, tape and cloud storage.

Also during IBC, the LTO Program Technology Provider Companies — HP, IBM and Quantum — announced the LTO-7 tape format that will be available in late 2015. The native capacity of this drive is 6TB, while 2.5:1 compression provides 15TB of storage with up to 750MB/s data rates. This product will provide over twice the capacity of the LTO-6 drive generation. The LTO roadmap goes out to a generation 10 product with up to 120TB of compressed content and about 48TB native capacity.
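
As a quick sanity check of those LTO-7 numbers, the sketch below recomputes the compressed capacity and the time to fill a single tape. The 2.5:1 ratio and the quoted data rate are vendor assumptions about typical, compressible data, so treat the results as ballpark figures.

NATIVE_TB = 6
COMPRESSION_RATIO = 2.5
MB_PER_SECOND = 750  # quoted data rate, assumed sustained end to end

compressed_tb = NATIVE_TB * COMPRESSION_RATIO
fill_hours = compressed_tb * 1e6 / MB_PER_SECOND / 3600

print(f"Compressed capacity: {compressed_tb:.0f} TB")    # 15 TB, matching the announcement
print(f"Time to fill one tape: ~{fill_hours:.1f} hours")  # roughly 5.6 hours at that rate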

LTO proponents said that tape has some advantages over hard disk drives for archiving, despite the difference in latency to access content. In particular, they said tape has an error rate two orders of magnitude lower than HDDs, providing more accurate recording and reading of content. Among the interesting LTO developments at IBC were the mLogic Thunderbolt interface tape drives.

Tape can also be combined with capacity SATA HDDs to provide storage systems with performance approaching hard disk drive arrays and costs approaching magnetic tape libraries. Crossroads has teamed up with Fujifilm to provide NAS systems that combine HDDs and tape, as well as cloud storage built on the same combination. In fact, archiving is becoming one of the biggest growing applications in the media and entertainment industry, according to the 2015 Digital Storage in Media and Entertainment Report from Coughlin Associates.

Oracle was also showing its tape storage systems with 8TB native storage capacity in a half-inch tape form factor. Oracle now includes Front Porch Digital with its cloud archiving platform as well as digital ingest solutions for older analog and digital format media.

Some companies also use flash memory as a content cache in order to match the high speeds of data transfers to and from a tape library system. Companies such as XenData provide LTO tape and optical disc libraries for media and entertainment customers. Spectra Logic has made a big push into HDD-based archiving, using shingled magnetic recording (SMR) 3.5-inch HDDs in their DPE storage system to provide unstructured storage costs as low as 9 cents/GB. This system can provide up to 7.4PB of raw capacity in a single rack with 1GB/s data rates. Because it uses SMR HDDs, this sort of system is best for data that is seldom or never overwritten.

Sony was showing its 300GB Blu-ray optical WORM discs, although it was not clear whether the product would ship in storage cartridges in 2015. Archiving is a significant driver of M&E storage demand because all video eventually ends up in an archive. Because of more frequent access of archived content, the performance requirements of many archives are more demanding than in the past. This has led to the use of HDD-based archives and archives combining HDDs and magnetic tape. Even flash memory can play a role as a write and read cache in a tape-based system.

Dr. Tom Coughlin, president of Coughlin Associates, has over 35 years in the data storage industry. Coughlin is also the founder and organizer of the annual Storage Visions Conference, a partner to the International Consumer Electronics Show, as well as the Creative Storage Conference.

Blog: What will the future hold?

By Josh Rizzo

In our previous installment “Millennials are Know it Alls… and So is Everyone Else,” we asked the question “WTF is going on with the entertainment industry and what are the implications?” This time we are looking to the future of where we are going, making educated guesses, having a little fun making outright predictions, all the while considering what we can do to be relevant in the industry of the future.

The assumption this author makes is that the distance between the creative spark of one human and the experience of that idea by another human is going to be filled with more technology and fewer people. Software will continue to become more complex, packing in more and more hard-fought life lessons and scientific advancements. Hardware will become exponentially more powerful and tasks that we, the previous generations, thought were impossible (or at the very least fiscally impractical) will become everyday occurrences. Software will be able to take tens of thousands of hours of source elements and, under the guidance of a director or programmer, execute both technical and creative decisions.

Not long after that, technology will be able to flat out make creative decisions. Some readers may be offended or even insulted at this prospect, but I’d wager real money that, in our lifetime, aspects of many creative decisions could be automated by software. Any takers?

This then leads to the question, “Why do we want to climb this mountain of technological innovation?” The answer, as the saying goes, is, “Because it is there.” Technology is a tool that will make our dreams come true and probably cause a few nightmares along the way. We, as humans, will continue to endeavor to create new ways to bridge the gap between idea and experience using any and all capabilities within our grasp. Let’s not forget that the 100-plus years of film craft was upset by a few years of IT tech advancements. Humans, based on our nature, will increasingly use the latest and greatest technology to try to tell a story (or sell a car). This could be good or bad, depending on the perspective.

The kicker here is that just because a person helped to build the industry as it stands today does not mean that same person is guaranteed a place in its future. The socially preferred way to consume media and experience curated presentations will evolve. Who knows, perhaps one day soon “going to the movies” will be on par with “going to the opera” or to see a classical orchestra and relegated to older “intellectual” crowds while later generations do whatever it is they do for their audio visual experiences. All of this in turn will create new, never-before-imagined job opportunities while completely frustrating those who are left behind.

What if we change the scope of the discussion from the individual to the company or corporate level and ask the same question: “How will a company stay relevant?” To answer it, we have only to look to the lessons of the past, where successful companies suddenly became irrelevant, then reflect on the circumstances of their demise in an effort not to make the same mistakes.

Recognizing Opportunities
In the interest of transparency, parts of the following examples are borrowed (ok, stolen) from the ex-Apple evangelist, author and all-around swell guy Guy Kawasaki (@GuyKawasaki) and his discussion of the Art of Innovation.

Guy speaks of the history of the “ice” business, pre-industrial revolution. In the early days, folks at home would have an icebox and often paid a service to deliver blocks of ice to their homes. The ice would be placed in the top compartment of the icebox and due to proximity and insulation, would keep perishable foods cold.

At first, the source of the ice was harvesting: a company would find a frozen lake, cut the ice with huge saw blades, then move the large chunks to distribution centers near residential areas, where they would be cut into smaller blocks for home delivery. See the opening scenes of Disney’s Frozen for an animated example.

The next evolution in the ice biz was the creation of ice factories, where machines would freeze water into blocks to be delivered out to homes. The last and most familiar evolution was the refrigerator, which was essentially the miniaturization of the ice factory — a personal home ice factory.

The core point that Guy makes here is that 100 percent of ice harvest companies did not become ice factories, and 100 percent of ice factories did not become refrigerator manufacturers. Through some combination of culture and leadership, these businesses disregarded a trend in new technology, seeing it as a threat, not an opportunity.

If we look back at the history of the entertainment industry we can find examples of companies that did not successfully “jump to the next curve,” as Guy likes to say.

Kodak was the originator of the Bayer filter, which is at the heart of the raw image processing used to this day, but it faltered somewhere along the way and is no longer as close to the nucleus of imaging innovation as it once was. Bell & Howell and Aaton are two more names that have been consigned to our memory banks.

Companies that continue to thrive today have the culture and leadership to commit the company’s resources to the development of new technology or new business processes. We continue to use camera systems from Sony, Canon and Arri. IBM, once synonymous with industrial, business and personal computing, no longer earns its bread and butter by manufacturing hardware; rather, when it saw the coming changes of a connected “big data” world, IBM made the change to “business intelligence” consulting — outsourced R&D. IBM’s Watson system earns media headlines for being artificial intelligence, but serves mainly to help make sense of the information mess we humans create.

In the business of entertainment, we find ourselves in a world where the production of smaller, short-form content is heavily subsidized by a distribution platform that “gives away” the content for free, knowing that the real money is in the analytics and, yes, laser-focused advertising. This platform allows its talent to rise to fame organically, instead of forcing semi-fictionalized narratives like the Hollywood star system of old.

Keep Learning
Even as these words are written, I am daunted by the implications of what the future may hold. It brings on a small panic attack to imagine that I too must completely reinvent myself every few years just to keep up with the curve. So what is the plan? How do we not only survive, but thrive in a software-driven information economy?

First off, never stop learning. We must continually fill ourselves with childlike curiosity and wonder at the world around us. Then, ask questions. Lots of questions! Smart people ask questions while fools assume they know. Next, play. Try things, download, get a loaner, break your iPhone, rebuild your computer’s OS, be a little worried that you do not know the outcome of what you are about to do — but always, always make a few backups first.

Lastly, network, share, be patient with others… care. We must focus our energy to a life of learning and not waste what precious time we have on this planet worrying about things we cannot change. Rather, we must rely on each other as a community and work to positively affect the future in ways that are meaningful. Then, and only then, will we discover that the journey to the next curve is less a leap of faith and more a waypoint on a shared journey. Easy, right?

Josh Rizzo is a technologist, creative and lover of all things food. He is also CTO of Hula Post Production. You can follow him on Twitter @joshrizzo.

Public, Private, Hybrid Cloud: the basics and benefits

By Alex Grossman

The cloud is everywhere, and media facilities are constantly being inundated with messages about the benefits the cloud offers in every area, from production to delivery. While it is easy to locate information on how the cloud can be used for ingest, storage, post operations, transcoding, rendering, archive and, of course, delivery, many media facilities still have questions about the public, private and hybrid clouds, and how each of these cloud models can relate to their business. The following is a brief guide intended to answer these questions.

Public
Public cloud is the cloud as most people see it: a set of services hosted outside a facility and accessed through the Web, either securely through a gateway appliance or simply through a browser. The public nature of this cloud model does not mean that content from one person or one company can be accessed by another. It simply means that the same physical hardware is being shared by multiple users — a “multi-tenant” arrangement in which data from different users resides on one system. Through this approach, users get to take advantage of the scale of many servers and storage systems at the cloud facility. This scale can also improve accessibility and performance, which can be key considerations for many content creators.

Public cloud is the most versatile type of cloud, and it can offer a range of services, from hosted applications to “compute” capabilities. For media companies these services range from transcoding, rendering and animation to business services such as project tracking, billing and, in some cases, file sharing. (Box and Dropbox are good examples of file sharing enabled by public cloud.) Services may be generic or branded, and they are most often offered by a software vendor using a public cloud, or by the public cloud vendor itself. Public clouds are popular for content and asset storage, both for short-term transcode to delivery or project-in-process storage and for longer-term “third copy” off-site archive.

Public clouds can be very appealing due to the OPEX, pay-as-you-go nature of billing and the lack of any capital expense for ongoing hardware purchases and refreshes, but the downside is that public clouds take some control over the workflow out of the facility’s hands. While most public cloud vendors today are large and financially stable, it remains important to choose carefully.

Moreover, taking advantage of public cloud is rarely easy. This path involves dealing with new vendors, and possibly with unfamiliar applications and hardware gateways, and there can be unexpected charges for simple operations such as retrieving data. Although content security concerns are mostly overblown, they nevertheless are a source of apprehension for many potential public cloud users. These uncertainties have a lot of media companies looking to private cloud.

Private
Private cloud can most simply be defined, for media companies, as a walled machine-room environment with workflow compute and storage capabilities that offers outside connectivity while preventing outside intrusions into the facility.

A well-designed private cloud will allow facilities to extend most of their production and archive capabilities to remote users. The main difference between this approach and most current (non-cloud) storage and compute operations in a facility today is simply that a private cloud can isolate the current workflow from the outside world while extending a portion of it to remote users based on preferences and policies.

The idea of remote access is not confined to private cloud. It is possible to provide facility access to external users through normal networking protocols, but the private cloud takes this a step further through easier access for authorized users and greater security for others. The media facility remains in complete control of its content and assets. Furthermore, it can host its applications, and its content remains in its on-site storage, safe and secure, with no retrieval fees.

A facility that embraces private cloud cannot take advantage of the scale or pay-as-you-go benefits of public cloud. Even so, in order to provide greater accessibility and flexibility, some media companies have adopted a private cloud model as an extension of their online operations. Private cloud can effectively replace much of the hardware used today in post and archive operations, so it is a more cost-effective solution for many facilities considering cloud benefits.

Hybrid
Hybrid cloud is an interesting proposition. In the enterprise IT world, hybrid cloud implementations are seen as a way to bridge private and public and realize the best of both worlds — lower OPEX for certain functions such as SaaS (software as a service) and the security of keeping valuable data back in the company’s own data centers.

For media professionals, hybrid cloud may have even greater benefits. Considering the changing delivery requirements facing the industry and the sheer volume of content being created and reviewed — and, of course, keeping in mind the value of re-monetization — hybrid cloud has exciting potential. A well-designed hybrid cloud can provide the benefits of public and private cloud while avoiding much of the cost and complexity of maintaining end-to-end hardware on-premises. By sharing the load between the hardware at a facility and the massive scale of a public cloud, a media company can extend its workflow easily while controlling every stage — even on a project-by-project basis.
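To make the project-by-project idea concrete, here is a minimal sketch of the kind of placement policy a hybrid model implies: keep active work on-premises and push colder material to public cloud tiers. The field names, thresholds and tier labels are hypothetical examples, not drawn from any particular vendor’s product.

    # Illustrative sketch of a hybrid-cloud placement policy.
    # All field names, thresholds and tier labels are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Asset:
        name: str
        size_gb: float
        in_active_project: bool
        days_since_last_access: int

    def choose_tier(asset: Asset) -> str:
        # Keep work-in-progress material on-premises for performance and control.
        if asset.in_active_project:
            return "private: on-prem production storage"
        # Cold material becomes the off-site "third copy" in a cloud archive tier.
        if asset.days_since_last_access > 90:
            return "public: cloud archive tier"
        # Recently finished but inactive material sits in nearline object storage.
        return "public: nearline object storage"

    if __name__ == "__main__":
        for a in [Asset("ep101_conform", 850, True, 2),
                  Asset("ep099_masters", 1200, False, 30),
                  Asset("2014_promo_raw", 4000, False, 400)]:
            print(f"{a.name}: {choose_tier(a)}")

In practice the policy engine sits in the gateway or asset management layer, but the decision logic a facility has to define looks much like this.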

Choosing between public, private and hybrid cloud can be a daunting task. It is a decision that must start with understanding the unique needs and goals of the media company and its operations, and then involve careful mapping of the solutions the various vendors offer — with cost considerations always in mind. In the end, a facility may choose neither public, private nor hybrid cloud, but then it may miss out on the many and growing benefits the cloud enables.

Alex Grossman, a cloud workflow expert, is VP, Media and Entertainment at Quantum. You can follow him on Twitter @activeguy.

Notes from Adobe Max 2015

By Daniel Restuccio

Creativity, community, collaboration and the cloud were the dominant themes of Adobe Max 2015, which attracted over 7,000 creative types to the Los Angeles Convention Center earlier this week.

Adobe’s Creative Cloud, with 5.3 million subscribers, wants to be the antidote to the phenomenon of “content velocity” — the increasing demand for more content to be delivered faster, better and less expensively. Adobe highlighted solutions for managing the emerging, worldwide, 24/7 work schedule.

Here are some highlights:
– Mobile apps: Projects can be started on mobile devices with apps like Adobe Comp or Adobe Clip and then sent to desktop apps such as InDesign or Premiere for finishing. The newly announced Adobe Capture aggregates Adobe Shape, Color, Hue and Brush into one app.
– Creative Sync: all assets are in the Creative Cloud and can be instantly updated by anyone on any desktop or mobile device.
– All Adobe apps are now touch enabled on Microsoft Windows.
– Adobe Stock: The company’s royalty-free collection of high-quality photos, illustrations and graphics, which offers 40 million images, will soon expand to video.
– Video editing: Premiere is already UHD-, 4K- and 8K-enabled, with Dolby Vision HDR exhibition on the horizon.

Director Tim Miller (right) of Deadpool with Adobe senior VP Bryan Lamkin.

Movies and Premiere Pro
Deadpool director Tim Miller addressed the crowd and spoke with Adobe’s Senior VP/GM, Bryan Lamkin. Miller shared that David Fincher persuaded him to make the switch to Adobe Premiere Pro for this film. In fact, Premiere editor Vashi Nedomansky set up the Premiere Pro systems on Deadpool, which was shot on Arri Raw.

Nedomansky trained the editors and assistant editors on the system, which is pretty much a clone of the one he set up for David Fincher — including using Jeff Brue’s solid state drives via OpenDrive — on Gone Girl.

The Coen brothers’ next film, Hail, Caesar!, is also being cut on Premiere, so I think we’ve hit the tipping point with Premiere and feature work. I don’t suspect anyone’s going to throw out their Media Composers anytime soon, but Premiere is now the little engine that could, like Final Cut was back in the day.

Photos: Elizabeth Lippman

What I saw at the Toronto Film Fest 2015

By Kristine Pregot

For the 40th year, the city of Toronto has hosted one of the world’s biggest international movie festivals. I headed up to the festival to scout and chat with directors about future collaborations with Nice Shoes artists. The streets were swarming with some of the biggest names of Hollywood, Bollywood and stars from the silver screen of China. The Toronto International Film Festival (TIFF) truly lives up to its status as an international festival. Every continent (except for Antarctica) had films showcased.

The festival drew the biggest film enthusiasts (cough, film nerds) from around the world to screen films, meet filmmakers and attend industry panels. The two-week festival shows over 100 films per day… that’s a lot of popcorn sales.

It’s mind-boggling to make selections on what to see, because there will inevitably be FOMO (Fear Of Missing Out). I started the festival by watching the film Brooklyn. It is a beautifully made film, capturing the mood of Brooklyn immigrants in the 1940s. The filmmaker captured the beautiful and diverse melting pot that is New York. The film’s costumes, set design and art direction alone made it worth seeing. And more than just a few tears flowed while I watched this film on September 11.

I met Cynthia Wade, the producer of Freeheld, at a Producers Guild event, and she talked about how she directed the Oscar-winning short of the same name. She had read a New York Times article about terminally ill New Jersey police officer Laurel Hester and her legal battle to pass on her pension benefits to her domestic partner. Wade sought out the women and knew she had to make this into a film. After hearing her story, I knew I had to add that feature to my screening list. It was a deeply moving film and to hear how it came to be made was a true inspiration.

Out of all the films I watched, I Smile Back may have been one of the most depressing (in a virtuous way). Sarah Silverman’s performance was extremely convincing, and I am not sure I will ever see her in the same light again. After watching this film, I felt like I had to go to confession.

By far the best thriller I screened was Green Room. Writer/director Jeremy Saulnier nailed it with this Midnight Madness pick. The film had a gritty tone, featuring punk rock, skinheads, killer dogs and Patrick Stewart — I can’t think of a spookier combination.

Green Room’s Jeremy Saulnier, Anton Yelchin, Imogen Poots, Patrick Stewart, Alia Shawkat, Callum Turner and Joe Cole.

Panel: the cast of ‘Green Room’

During my time in Toronto, I was able to attend a few panels as well. One of the most educational ones detailed the Canadian tax incentives. Between the exchange rate, and the amazing cash back for post and VFX, it is no surprise so many filmmakers are turning to our friends up north for a hand.

The best tip I can provide for anyone attending next year’s festival is this: be sure to grab a drink at the Shangri-La bar. It is a very elegant hub between screenings — great drinks and fun celebrity sightings. Plus, you can eavesdrop on some of the best “for my next feature” pitching happening all around the bar.

Kristine Pregot is a senior producer at New York City-based Nice Shoes.

IBC Report: Making high-resolution panoramic video

By Tom Coughlin

Higher-resolution content is becoming the norm in today’s media workflows, but pixel count is not the only element that is changing. In addition to pixel density, the bit depth of the image, color gamut, frame rates and even the number of simultaneous streams of video will be important. At IBC 2015 in Amsterdam there was a clear picture of a future that includes UHD 4K and 8K video, as well as virtual reality, as the path to more immersive video and entertainment experiences.

NHK, a pioneer in 8K video hardware and infrastructure development, has given more details on its introduction of this higher-resolution format. It will start test broadcasts of its 8K technology in 2016, followed by significant satellite video transmission in 2018 and widespread deployment in 2020, in time for the Tokyo Olympic Games. The company is looking at using HEVC compression to put a 72Gb/s video stream with 22.2-channel audio into a 100Mb/s delivery channel.
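To put those figures in perspective, here is a rough back-of-the-envelope calculation using only the numbers NHK quoted; the hour-long totals are my own illustrative arithmetic, not NHK’s.

    # Rough arithmetic on the NHK figures quoted above (72Gb/s source into a 100Mb/s channel).
    source_rate_bps = 72e9      # uncompressed 8K stream, bits per second
    channel_rate_bps = 100e6    # satellite delivery channel, bits per second

    print(f"Required compression ratio: {source_rate_bps / channel_rate_bps:.0f}:1")  # 720:1

    # One hour of material, in terabytes, before and after compression
    hour = 3600
    print(f"Uncompressed hour: {source_rate_bps * hour / 8 / 1e12:.1f} TB")   # ~32.4 TB
    print(f"Delivered hour:    {channel_rate_bps * hour / 8 / 1e12:.3f} TB")  # ~0.045 TB (45 GB)

In other words, the HEVC encoder has to discard roughly 99.86 percent of the raw bits while keeping the picture watchable.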

In the Technology Zone at IBC there were displays of virtual reality and 8K video developments (mostly by NHK), as well as multiple-camera setups for creating virtual reality video and various ways to use panoramic video. Sphericam 2 is a Kickstarter-funded product that provides 60-frames-per-second 4K video capture for creating VR content. This six-camera device is compact and can be placed on a stick and used like a selfie camera to capture a 360-degree view.

Sphericam 2

At the 2015 Google Developers Conference, GoPro demonstrated a 360-degree camera rig (our main image) using 16 GoPro cameras to capture panoramic video. At the IBC, GoPro displayed a more compact 360 Hero six-camera rig for 3D video capture.

In the Technology Zone, Al Jazeera had an eight-camera rig for 4K video capture (made using a 3D printer) and was using software to create panoramic videos. There are many such videos on YouTube; they change perspective when viewed on a smartphone, whose accelerometer creates a reference around which the viewer can look at the panoramic scene. The Kolor software actually provides a number of different ways to view the captured content.

Eight-camera rig at the Al Jazeera stand.

While many viewing devices for VR video use special split-screen displays, or smartphones showing a split-screen image and using the phone’s accelerometers to give the sense of being surrounded by the viewed image — like Google Cardboard — there are other ways to create an immersive experience. As mentioned earlier, panoramic videos with a single (or split-screen) view are available on YouTube. There are also spherical display devices where the still or video image can be rotated by moving your hand across the sphere, like the one shown below.
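For readers curious how those phone-based viewers work, the underlying idea is simple: the panoramic frame is stored as an equirectangular image, and the phone’s orientation (yaw and pitch) selects which part of that image fills the screen. The sketch below is an illustrative approximation of that mapping, not code from any of the products mentioned here.

    # Illustrative sketch: mapping a phone's orientation to a point of interest
    # inside an equirectangular panoramic frame (as used by 360-degree video players).
    def viewport_center(yaw_deg, pitch_deg, frame_width, frame_height):
        # An equirectangular frame maps 360 degrees of yaw across its full width
        # and 180 degrees of pitch (+90 up to -90 down) across its full height.
        pitch_deg = max(-90.0, min(90.0, pitch_deg))
        x = (yaw_deg % 360.0) / 360.0 * frame_width
        y = (90.0 - pitch_deg) / 180.0 * frame_height
        return x, y

    # Looking 90 degrees to the right and slightly upward in a 7680x3840 panorama
    print(viewport_center(90.0, 10.0, 7680, 3840))  # (1920.0, ~1706.7)

A real player then warps the region around that point to correct for the projection, but this accelerometer-to-viewport mapping is the heart of the “look around” effect.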

Higher-resolution content is becoming mainstream, with 4K TVs set to make up the majority of TVs sold within the next few years. 8K video production, pioneered by NHK and others in Japan, could be the next 4K by the start of the next decade, driving even more realistic content capture and demanding higher bandwidth and higher storage capacity in post.

Multi-camera content is also growing in popularity to support virtual reality games and other applications. This growth is enabled by the proliferation of low-cost, high-resolution cameras and the sophisticated software that combines the video from these cameras to create panoramic video and virtual reality experiences.

The trend toward higher resolution, combined with greater color gamut, higher frame rates and greater color depth, will transform video experiences over the next decade, leading to new requirements for storage, networking and processing in video production and display.

Dr. Tom Coughlin, president of Coughlin Associates, has over 35 years in the data storage industry. Coughlin is also the founder and organizer of the annual Storage Visions Conference, a partner to the International Consumer Electronics Show, as well as the Creative Storage Conference.

IBC 2015 Blog: HDR displays

By Simon Ray

It was an interesting couple of days in Amsterdam. I was hoping to get some more clarity on where things were going with high dynamic range, in both professional and consumer panels, as well as the delivery mechanisms to get it to consumers. I am leaving IBC knowing more, but no nearer a coherent idea as to exactly where this is heading.

I initially visited Dolby to get an update on Dolby Vision (our main image), see where they were with the technology and, most importantly, get my reserved tickets for the screening of Fantastic Four in the Auditorium (laser projection and Dolby Atmos). It all sounded very positive, with news that a number of consumer panel manufacturers are close to releasing Dolby Vision-capable TVs (Vizio, for example, with its Reference Series panel) and that streaming services like VUDU are offering Dolby Vision HDR content, although just in the USA to begin with. I also had my first look at a Dolby “Quantum Dot” HDR display panel, which did look good and surely has the best name of any tech out here.

There are other HDR offerings out there: Amazon Prime announced in August that it will be streaming HDR content in the UK, though not initially in the Dolby Vision format (HDR video is available with the Amazon Instant Video app on Samsung SUHD TVs like the JS9000, JS9100 and JS9500 series and on selected LG TVs in the G9600 and G9700 series), and the “big” TV manufacturers have launched, or are about to launch, HDR panels. So far so good.

Pro HDR Monitors
Things got a bit more vague again when I started looking into HDR-equipped professional panels for color correction. I could only find two at the show: Sony had an impressive HDR-ready panel connected to a FilmLight Baselight tucked away on its large stand in Hall 12, and Canon had an equally impressive prototype display tucked away in Hall 11, connected to an SGO Mistika. The two displays had different brightness specs and gamma options.

Canon

When I asked some other manufacturers about their HDR panels, the response was the same: “We are going to wait until the specifications are finalized before committing to an HDR monitor.” This leaves me thinking it is a bad time to be buying a monitor. You are either going to buy an HDR monitor now, which may not conform to the final specifications, or a non-HDR monitor that is likely to be superseded in the near future.

Another thing I noticed was that the professional HDR panels were all being shown off in carefully controlled (or as carefully controlled as a trade show allows) lighting environments to give them the best opportunity to make an impact. Any ambient light getting into the viewing environment will detract from the benefits of the HDR display’s increased dynamic range and brightness, which I imagine might be a problem in the average living room. I hope this does not reduce the chance of this technology making an impact, because it is great to see images seemingly having more depth and quality to them. As a representative on the Sony stand said, “It feels more immersive — I am so much more engaged in the picture.”
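The ambient-light concern is easy to quantify with a rough model: light reflected off the screen raises the effective black level, which eats into exactly the contrast that makes HDR compelling. The luminance values below are made-up illustrative numbers, not measurements of any panel at the show.

    # Illustrative sketch of how reflected room light erodes HDR contrast.
    # The nit values are hypothetical examples, not real display specs.
    def effective_contrast(peak_nits, black_nits, reflected_ambient_nits):
        # Reflected ambient light adds to both the brightest and darkest parts of the image.
        return (peak_nits + reflected_ambient_nits) / (black_nits + reflected_ambient_nits)

    peak, black = 1000.0, 0.01                                # hypothetical HDR panel
    print(f"{effective_contrast(peak, black, 0.0):,.0f}:1")   # dark grading suite: 100,000:1
    print(f"{effective_contrast(peak, black, 5.0):,.0f}:1")   # bright living room: ~201:1

Even a few nits of reflected light collapses the ratio by orders of magnitude, which is why the demo rooms were so carefully darkened.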

Sony

Dolby
The problem of ambient light also came up in an interesting talk in the Auditorium as part of the “HDR: From zero to infinity” series. There were speakers from IMAX, Dolby, Barco and Sony talking about the challenges of bringing HDR to the cinema. I had come across the idea of HDR in cinema from Dolby through its “Dolby Cinema” project, which brings together HDR picture and immersive sound with Dolby Atmos.

I am in the process of building a theatre to mix theatrical soundtracks in Dolby Atmos, but despite the exciting opportunities Atmos offers sound teams, in the UK at least the take-up by cinemas is slow. One of the best things about Dolby Atmos for me is that if you go to see a film in Atmos, you know the speaker system is going to be of a certain standard, otherwise Dolby would not have given it Atmos status. For too long, cinemas have been allowed to let their speaker systems wear down to the point where they become unlistenable. If these new initiatives can give cinemas an opportunity to reinvest in their equipment (the various financial implications and challenges, and who would meet these costs, were discussed) and get a return on that investment, it could be a chance to stop the rot and improve the cinema-going experience. And, importantly, for us in post it sets an exciting high benchmark to aim for when working on films.

Simon Ray is head of operations and engineering at Goldcrest Post Production in London.

IBC 2015 Blog: Beautiful clouds in the sky, content in the cloud

By Robert Keske

The weather this year during IBC might be the best I have ever experienced in Amsterdam. Inside the RAI, IBC seemed quieter this year — the halls were less crowded and easier to navigate. I have to assume that everyone was enjoying the weather instead of being inside the RAI.

The theme of the 2015 show was “Content Everywhere.” This reflects the productization taking place to incorporate mobile and cloud technology into the production and post production process. Creative and collaborative applications are now running on tablets and smartphones in some innovative ways, from content bypassing traditional distribution to direct-to-mobile consumption.

After taking in the overall conference, I paid a visit to a few of our vendors to see what they were presenting at this year’s show.

FilmLight FLIP + FLIP remote

FilmLight has continued to impress me with its focus on delivering a full-service product line, offering solutions from on-set all the way through to the beginnings of a complete finishing toolset.

Autodesk has made some nice advancements to the latest release of Flame 2016 Premium. The latest workflow and UI improvements appear to have incorporated user feedback and will surely be welcomed by the Flame user community.

SGO Mistika has also listened to feedback from the community, with the beginnings of a new UI, and the media management UI has greatly improved.

Another bright spot is the work Henry Gu is doing in content delivery automation. Henry was at the DataDirect Networks booth, and I highly recommend paying him a visit to see his work.

New York-based Robert Keske is CIO/CTO at Nice Shoes (@niceshoesonline).

Releases & Updates: We are in this ecosystem together

By Sean Mullen

Just a few weeks ago, Adobe released a major upgrade to its Creative Cloud services. While these updates are welcomed by the community with excitement, there’s also a period of — for lack of a better term — stressful chaos as the third-party software and plug-in developers scramble to ensure their products will be compatible.

When Adobe speaks, the community listens. When Adobe does something new, the community listens even closer, because when Adobe does something new, it’s usually an amazing leap forward that makes our lives easier and our work look that much better. The latest updates to Adobe Creative Cloud are no different.

All of us at Rampant Design are big fans, and Adobe CC is a big part of what we do every day. It’s no mistake that our Style Effects complement Adobe CC so well. But we also understand — being part of this VFX community — that while change is great, those changes have an impact on the software and plug-in developers who make their living enhancing the Adobe CC workflow. But I’ll get to that in a minute.

Adobe After Effects CC

The Updates
Here are a couple of top-of-mind things that get us excited. We zeroed in on some of the applications and features within CC that impact us most on a daily basis, and those are the features in Premiere Pro and After Effects.

The Iridas acquisition of a couple of years ago is really showing its value, especially with this update. The Lumetri Color panel is amazing! You’re getting seriously powerful color tools built right into Premiere Pro. That’s pretty significant. Morph Cut is part voodoo and part rocket science — a very cool tool that smooths out jump cuts and pauses. There are some notable changes to After Effects too. While the AE Comp Scrollbar is now missing, the uninterrupted preview is a fantastic addition. The new Face Tracker is impressive as well.

The Adobe Ecosystem: Plug-Ins
There is most definitely an ecosystem around Adobe: an entire sub-segment of the post production software industry that makes tools to enhance the workflow — the plug-in developers.

Adobe Premiere

In any third-party plug-in environment, you have the host developer (in this case Adobe) and the third-party plug-in developers: companies like Red Giant, Video Copilot, GenArts and Boris FX, to name a few. While the host developers keep the third parties informed as much as possible, their main focus is on rolling out a solid product release.

So, inevitably, some things slip through the cracks — mainly the ability to interact with the plug-in developers in a timely way, at least from the plug-in developers’ perspective. As a result, you’ll notice a slew of newsletters and social network posts from these third parties announcing whether or not their products currently work with the latest release.

I’m sure the weeks leading up to and following a major release can be a hectic time for developers. Plug-in engineering isn’t free, so there is a small window within which the current build of any given third-party plug-in will work. Major releases come out every year and dot releases happen quite often.
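That “window” is essentially a version-compatibility range each developer has to track per host release. As a purely hypothetical illustration (the plug-in name and version numbers below are invented, not real compatibility data), the bookkeeping looks something like this:

    # Hypothetical sketch of tracking which plug-in builds work with which host versions.
    # The plug-in name and version ranges are invented for illustration only.
    COMPATIBILITY = {
        # plug-in build: (minimum host version, first known-broken host version)
        "ExamplePlugin 2.1": ("9.0", "10.0"),
        "ExamplePlugin 3.0": ("10.0", None),   # None = no known breakage yet
    }

    def version_tuple(v):
        return tuple(int(part) for part in v.split("."))

    def works_with(build, host_version):
        minimum, broken = COMPATIBILITY[build]
        host = version_tuple(host_version)
        if host < version_tuple(minimum):
            return False
        if broken is not None and host >= version_tuple(broken):
            return False
        return True

    print(works_with("ExamplePlugin 2.1", "9.3"))    # True
    print(works_with("ExamplePlugin 2.1", "10.1"))   # False: the user needs the 3.0 build

Every major or dot release from the host potentially shifts those ranges, which is exactly the scramble described above.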

At Rampant, our situation is a little different. We make tools that enhance the CC workflow and the plug-ins themselves. Style Effects aren’t an alternative to plug-ins; they are complementary. If we were bakers or chefs, Style Effects would be the spices or finishing touches. If we were carpenters, Style Effects would be the varnish. Style Effects work hand in hand with your favorite plug-ins.

Style Effects are QuickTime-based, so as long as you have QuickTime, these effects will work with any Adobe update. In our reality, artists and editors want instant gratification. Very few of us get the time to play. Most producers want to see something yesterday, and this is why the plug-in and Style Effects ecosystems are so critical. Major new host releases will always be challenging — and stressful — but the end product of all of us working together is what helps all of us create amazing content. We’re proud to be a part of it!

Sean Mullen is the founder/president of Rampant Design Tools. He is an award-winning VFX artist, but he’s also the creator of Rampant Style Effects, UHD visual effects and designs. Style Effects are packaged as QuickTime files, enabling artists to drag and drop them to any editing platform.