Category Archives: post production

HPA Tech Retreat 2019: An engineer’s perspective

By John Ferder

Each year, I look forward to attending the Hollywood Professional Association’s Tech Retreat, better known as the HPA Tech Retreat. Apart from escaping the New York winter, it gives me new perspectives and a chance to exchange ideas with friends and colleagues while exploring the latest technical and creative information. As a broadcast engineer, I get a renewed sense of excitement and purpose.

Also, as secretary/treasurer of SMPTE, I am energized each year by the Board of Governors meetings, as well as the Strategy Day held before the Tech Retreat. This year, we invited a group of younger professionals to tell us what we could do to attract them to SMPTE and HPA, and what they needed from us as experienced professionals.

Their enthusiasm and honesty were refreshing and encouraging. We learned that while we have been trying to reach out to them, they have been looking for us to invite them into the Society. They have been looking for mentors and industry leaders to engage them one-on-one and introduce them to SMPTE and how it can be of value to them.

Presentations and Hot Topics
While it is true that the Hollywood motion picture community is behind producing this Tech Retreat, it is by no means limited to the film industry. There was plenty of content and information for those of us on the broadcast side to learn and incorporate into our workflows and future planning, including a presentation on the successor to SMPTE timecode. Peter Symes, formerly director of standards for SMPTE and a SMPTE Fellow, presented an update on the TLX Project and the development of what is to be SMPTE Standard ST2120, the Extensible Time Label.

This suite of standards will be built on the work already done in ST2059, which describes the use of the IEEE 1588 Precision Time Protocol to synchronize video equipment over an IP network. The Extensible Time Label will succeed, not replace, ST12, the analog timecode that we have used with great success for 50 years. As production moves increasingly toward IP networks, this work will produce a digital time labeling system as universal as ST12 timecode has been. Symes invited audience members to join the 32NF80 Technology Committee, which is developing and drafting the standard.
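
To give a rough feel for epoch-based labeling, here is a minimal Python sketch of my own (not taken from the TLX drafts or Symes’ presentation): it derives a familiar HH:MM:SS:FF label from a PTP timestamp by counting frames elapsed since the epoch. Real ST2059/TLX labeling also handles drop-frame rates, leap-second offsets and richer label fields, all omitted here.

```python
# Hypothetical sketch: turn a PTP timestamp (seconds since the PTP epoch)
# into an HH:MM:SS:FF display label. Assumes an integer frame rate and
# ignores drop-frame counting and TAI/UTC leap-second handling.

def time_label(ptp_seconds: float, fps: int = 30) -> str:
    total_frames = int(ptp_seconds * fps)   # frames elapsed since the epoch
    ff = total_frames % fps
    total_seconds = total_frames // fps
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = (total_seconds // 3600) % 24       # wrap at 24 hours, like ST12
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

print(time_label(1_550_000_000.5))  # a mid-February 2019 PTP timestamp
```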

Phil Squyres

What were the hot topics this year? HDR, Wide Color Gamut, AI/machine learning, IMF and next-generation workflows drew a large number of presentations. While this may seem to be the “same old, same old,” the amount of both technical and practical information presented this year was a real eye-opener for many of us.

Phil Squyres gave a talk comparing next-generation and broadcast production workflows that revealed that completing a program episode for OTT distribution takes 2.2X or more the time and storage of completing one for broadcast. This echoed the observations of an earlier panel of colorists and post specialists for Netflix feature films, one of whom advised that instead of planning to complete post production two weeks prior to release, plan on finishing five to six weeks prior to allow for the additional QC of both HDR and SDR releases.

Artificial Intelligence and Machine Learning
Perhaps the most surprising presentation for me was given by Rival Theory, a company that generates AI personas based on real people’s memories, behaviors and mannerisms. They detailed the process by which they are creating a persona of Tony Robbins, the famous motivational speaker and an investor in Rival Theory. Robbins intends the life-like persona to help people with life coaching and to continue his mission to end suffering throughout the world, even after he dies. In addition to demonstrating the multi-camera capture and rendering of his face while talking and displaying many emotions, they showed how Robbins’ speech was recorded and synthesized for the persona. A rendering of the completed persona was presented and was very impressive.

Many presentations focused on applications of AI and machine learning in existing production and post workflows. I appreciated that a number of the presenters stressed that their solutions were meant not to replace the human element in these workflows, but to apply AI/ML to the redundant and tedious tasks, not the creative ones. Jason Brahms of Video Gorillas brought that point home in his presentation on “AI Film Restoration at 12 Million Frames per Second,” as did Tim Converse of Adobe in “Leveraging AI in Post Production.”

Broadcasters panel

Panels and Roundtables
Matthew Goldman of MediaKind chaired the annual Broadcasters Panel, which included Del Parks (Sinclair), Dave Siegler (Cox Media Group), Skip Pizzi (NAB) and Richard Friedel (Fox). They discussed the further development and implementation of the ATSC 3.0 broadcast standard, including the Pearl Consortium initiative in Phoenix and other locations, the outlook for ATSC 3.0 tuner chips in future television receivers and the applications of the standard beyond over-the-air broadcasting, with an emphasis on data-casting services.

All of the members of the panel are strong proponents of the ATSC 3.0 standard, and more broadcasters are joining the evolution toward implementing it. I would have appreciated the inclusion of someone of similar stature who is not quite so gung-ho on the standard, to discuss some of the unaddressed challenges and difficulties and give us a more balanced presentation. For example, there is no government mandate or sponsorship for the move to ATSC 3.0 as there was for the move to ATSC 1.0, so what really motivates broadcasters to make this move? Have the effects of the broadcast spectrum re-packing on available bandwidth negatively affected the ability of broadcasters in all markets to accommodate both ATSC 3.0 and ATSC 1.0 channels?

I really enjoyed “Adapting to a COTS Hardware World,” moderated by Stan Moote of the IABM. Paul Stechly, president of Applied Electronics, noted that more and more end users are building their own in-house solutions, aided by manufacturers moving away from proprietary applications toward open APIs. Another insight panelists shared was that COTS no longer applies only to data hubs and switches; today, the term extends to desktop computers and to consumer televisions and video displays as well. More and more production and post suites are incorporating these into their workflows and environments to test finished productions on the equipment their audiences will be viewing them on.

Breakfast roundtables

Breakfast Roundtables, held on Wednesday, Thursday and Friday mornings, are among my conference “must attends.” Over breakfast, manufacturers and industry experts are each given a table to present a topic for discussion by all the participants. The exchange of ideas and approaches benefits everyone at the tables and is a great wake-up exercise leading into the presentations. My favorite, and one of the most popular at the Tech Retreat, comes on Friday, when S. Merrill Weiss of the Merrill Weiss Group presents, as he has for many years, a list of about 12 topics to discuss. This year his co-host was Karl Paulsen, CTO of Diversified Systems, and the conversations were lively indeed. Among the topics we discussed were the cost of building a facility based on ST2110, the future of coaxial cable in the broadcast plant, security in modern IP networks and PTP, and the many issues in the evolution from ATSC 1.0 to ATSC 3.0.

As usual, the table was full, with a few extra people trying to fit in around it. We didn’t get to every topic, and we had to cut the discussions short or risk missing the first presentation of the day.

Final Thoughts
The HPA Tech Retreat’s presentations, panels and discussion forums are a continuing tool in my professional development. Attending this year reaffirmed and amplified my belief that this event should be on every broadcaster’s and content creator’s calendar. The presentations showed that the line between the motion picture and television communities is blurring further, and that techniques embraced by one community also benefit the other.

The HPA Tech Retreat is still small enough to allow engaging conversations with speakers and industry professionals as they share their industry, technical and creative insights, issues and findings.


John Ferder is the principal engineer at John Ferder Engineer, currently Secretary/Treasurer of SMPTE, an SMPTE Fellow, and a member of IEEE. Contact him at john@johnferderengineer.com.

Review: HP’s double-hinged ZBook Studio x360 mobile workstation

By Mike McCarthy

I recently had the opportunity to test HP’s ZBook Studio x360 mobile workstation over the course of a few weeks. HP’s ZBook mobile workstation division has really been thinking outside the box lately, with the release of the ZBook X2 tablet, the HP Z-VR backpack-mounted system and now the ZBook Studio x360.

The ZBook Studio x360 is similar in design to HP’s other x360 models — the Pavilion, Spectre, Envy, ProBook and EliteBook x360 — in that the display is double-hinged. The keyboard can be folded all the way behind the screen, allowing the system to be used like a tablet, or partially folded behind it for “tent” or “presentation” mode. But the ZBook is clearly the top-end option among systems in that form factor, and it inherits the engineering of HP’s extensive product portfolio with regard to security, serviceability and interface.

Performance-wise, this Studio x360 model sits somewhere in the middle of HP’s extensive ZBook mobile workstation lineup: above the lightweight ZBook 14U, 15U and X2 tablet, with their low-voltage U-series CPUs, and the value-oriented 15v; similar to the more traditional clamshell ultrabook ZBook Studio; and below the top-end ZBook 15 and 17 in graphics power and RAM.

It is distinguished from the ZBook Studio by its double-hinged, 360-degree folding chassis and its touch and pen inking capability, and it is larger than the ZBook X2, with more powerful internal hardware. This model is packed with processing power in the form of a 6-core 8th-generation Xeon processor, 32GB of RAM and an Nvidia Quadro P1000 GPU. The 15-inch UHD screen reaches up to 400 nits at full brightness and, of course, supports touch and pen input.

Configuration Options
The unit has a number of interesting configuration options, with two M.2 slots and a 2.5-inch bay allowing up to 6TB of internal storage, though most users will forgo the 2.5-inch SATA bay for an extended 96Whr battery. There is also the choice between a 4G WWAN card and a DreamColor display, giving users a wide selection of possible capabilities.

Because of the work I do, I am mostly interested in answering the question: “How small and light can I go and still get my work done effectively?” In order to answer that question, I am reviewing a system with most of the top-end options. I started at a 17-inch Lenovo P71 last year, then tried a large 15-inch PNY PrevailPro and now am trying out this much lighter 15-inch book. There is no compromise with the 6-core CPU, as it is the same chip found in the 17-inch beasts. The biggest difference is the GPU: the mobile Quadro P1000 has only 512 CUDA cores, roughly one-third the power of the Quadro P4000 I last tested. So VR is not going to work, but aside from heavy color grading, most video editing tasks should be supported. And 32GB of RAM should be enough for most users. I also installed a second NVMe drive, giving me a total of 2TB of storage.

Display
The 15.6-inch display is available in a number of different options, all supporting touch and digital pen input. The base-level full-HD screen can be upgraded to a Sure View screen, allowing the user to selectively narrow the viewing angle at the press of a key in order to increase their privacy. Next up is the beautiful 400-nit UHD screen that my unit came with. And the top option is a 600-nit DreamColor calibrated UHD panel. All of the options fully support touch and pen input.

Connectivity
The unit has dual Thunderbolt 3 ports supporting DisplayPort 1.3, as well as HDMI, dual USB 3.1 Type-A ports, an SDXC card slot and an audio jack. The main feature missing is an RJ-45 jack for Gigabit Ethernet. I get that there are trade-offs in any configuration, but that is the one item I miss on this unit. On the flip side, with the release of affordable Thunderbolt-based 10GbE adapters, that is probably what I would pair with this unit if I were going to use it to edit assets stored on my network. So it is a solvable problem.

Serviceability
Unlike the heavier ZBook 15 and 17 models, it does not have a tool-less chassis, but that is an understandable and totally reasonable compromise to reduce size and weight. I was able to remove the bottom cover with a single Torx screwdriver, giving me access to the RAM, the wireless cards and the M.2 slot I populated with a second NVMe drive for testing. The battery can also be replaced that way should the need arise, but the 96Whr long-life battery is fully covered by the system warranty, be that three or five years depending on your service level.

Security
There are a number of unique features that this model shares with many others in HP’s lineup. The UEFI-based HP Sure Start BIOS and pre-boot environment provide a host of options for enterprise-level IT management, and make it less likely that the boot process will get corrupted. HP Sure Click is a security mechanism that isolates each Chromium browser tab in its own virtual machine, protecting the rest of your system from any malware that it might otherwise be exposed to. Sure Run and Sure Recover are designed to prevent and recover from security failures that render the system unusable.

The HP Client Security Manager brings the controls for all of this functionality into one place and uses the system’s integrated fingerprint reader. HP WorkWise is a utility that integrates the laptop with one’s cell phone, automatically locking and unlocking the system as the phone leaves or enters Bluetooth range, and sending phone notifications from the other “Sure” security applications.

Thunderbolt Dock
HP also supplied me with their new Thunderbolt dock. The single most important feature on that unit from my perspective is the Gigabit Ethernet port, since there isn’t one built into the laptop. It also adds two DisplayPorts and one VGA output and includes five more USB ports. I was able to connect my 8K display to the DisplayPort output and it ran fine at 30Hz, as is to be expected from a single Thunderbolt connection. The dock should run anything smaller than that at 60Hz, including two 4K displays.
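
The 30Hz ceiling is easy to sanity-check with back-of-the-envelope math (my own figures, not HP’s spec sheet), since a single Thunderbolt 3 connection carries one four-lane DisplayPort 1.3 stream with roughly 25.9Gbps of usable payload:

```python
# Rough uncompressed-video data rates; blanking overhead is ignored,
# which would only push these numbers higher.

def display_gbps(width: int, height: int, hz: int, bpp: int = 24) -> float:
    """Raw pixel data rate in Gbps at the given resolution and refresh."""
    return width * height * hz * bpp / 1e9

DP13_PAYLOAD_GBPS = 25.92  # usable payload of a 4-lane DP 1.3 (HBR3) stream

print(display_gbps(7680, 4320, 30))      # ~23.9 -> 8K30 fits in one stream
print(display_gbps(7680, 4320, 60))      # ~47.8 -> 8K60 would need two
print(2 * display_gbps(3840, 2160, 60))  # ~23.9 -> dual 4K60 also fits
```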

The dock also supports an optional audio module to facilitate better conference calls, with a built-in speaker, microphone and call buttons. It is a nice idea but a bit redundant since the laptop has a “world-facing” microphone for noise cancellation or group calling and even has “Collaboration Keys” for controlling calls built into the top of the keyboard. Apparently, HP sees this functionality totally replacing office phones.

I initially struggled to get anything besides the DisplayPorts working on the dock, but this was because I had connected it before boot-up. Unlike docking stations from back in the day, Thunderbolt is fully hot-swappable; in fact, for security reasons, the system must be running the first time the dock is connected so it can trigger the dialog box that grants the dock low-level access to your computer. Once I did that, it has worked seamlessly.

The two-part cable integrates a dedicated power plug and a Thunderbolt 3 connection, magnetically joined for simple use while maintaining flexibility for future system compatibility. The system can receive power over the Thunderbolt port alone, but for maximum power and performance it also uses the dedicated 130W plug, which appears to be standardized across much of HP’s line of business products.

Touchscreens and Pens
I had never seriously considered tablets or touchscreen solutions for my own work until one of HP’s reps showed me an early prototype of the ZBook X2 a few years ago. I initially dismissed it until he explained how much processing power they had packed into it. Only then did I recognize that HP had finally fulfilled two of my very different and long-standing requests in a way that I hadn’t envisioned. I had been asking the display team for a lightweight battery-powered DreamColor display, and I had been asking the mobile workstation team for a 12- or 14-inch Nvidia-powered model — this new device was both.

I didn’t end up reviewing the X2 during its initial release last year, although I plan to soon. But once the X2 shifted my thinking about tablet and touch-based tools, I saw this ZBook Studio x360 as an even more powerful implementation of that idea, in a slightly larger form factor. While I have used pens on other people’s systems in the past, usually when doing tech support for other editors, this is my first attempt to do real work with a pen instead of a mouse and keyboard.

One of the first obstacles I encountered was getting the pen to work at all. Unlike the EMR-based pens from Wacom tablets and the ZBook X2, the x360 uses an AES-based pen, which requires power and a Bluetooth connection to communicate with the system. I am not the only user to be confused by this solution, but I have been assured by HP that the lack of documentation and USB-C charging cable have been remedied in currently shipping systems.

It took me a while (and some online research) to figure out that there was a USB-C port hidden in the pen and that it needed to be charged and paired with the system. Once I did that, it functioned fine. The pen itself works great, with high precision, 4,096 levels of pressure sensitivity and tilt support. I am not much of a sketcher or painter, but I do a lot of work in Photoshop, either cleaning up images or creating facial expressions for my Character Animator puppets. The pen is a huge step up from the mouse for creating smooth curves and natural lines, and the various buttons worked well for me once I got used to them. But I don’t do a lot of work that benefits from pen support, and trying to adapt other tasks to pen-based input was more challenging than I anticipated.

The other challenge I encountered was with the pen holder, which fits into the SD card slot. The design is good and works better than I expected, but removing the original SD plug that protects the slot was far more difficult than it should be. I assume the plug is necessary for the system to pass the 13 MIL-SPEC tests HP runs all of its ZBooks through, but I probably won’t be wedging it back in that slot as long as I have the system.

Inking
I am not much of a tablet user as of yet, since this was my first foray into the form factor, but the system is a bit large and bulky when folded back into tablet mode. I have hit the power button by accident on multiple occasions, hibernating the system while I was trying to use it; this has primarily been an issue in tablet mode, where I hold it with my left hand in that area by default. But the biggest limitation I encountered in tablet mode was recognizing just how frequently I use the keyboard during the course of my work. While Windows Inking does allow an onscreen keyboard to be brought up for text entry, functions like holding Alt for anchor-based resizing are especially challenging. I am curious to see whether some of these issues are alleviated on the X2 by the buttons built into the edge of its display. As long as I have easy access to Shift, Ctrl, Alt, C, V and a couple of others, I think I would be good to go, but it is one of those things you can’t know for sure until you try it yourself. And different people with varying habits and preferences might prefer different solutions to the same tasks. In my case, I have not found the optimal touch and inking experience yet.

Performance
I was curious to see what level of performance I would get from the Quadro P1000, as I usually use systems with far more GPU power. But I was impressed with how well it handled animating and editing the 5K assets for my Grounds of Freedom animated series. I was even able to dynamically link between the various Adobe apps with a reasonable degree of interactive feedback, although that is where you start to see the difference between this mobile system and a massive desktop workstation.

eGPU
Always looking for more power, I hooked up Sonnet’s eGFX Breakaway Box 550 with a variety of Nvidia GPUs to accelerate the graphics performance of the system. The Quadro P6000 was the best option, as it uses the same Quadro driver and Pascal architecture as the internal P1000 GPU but offers greatly increased performance.

It allowed me to use my Lenovo Explorer WMR headset to edit 360 video in VR with Premiere Pro, and I was able to play back 8K DNxHR files at full resolution in Premiere on my Dell 8K LCD display. I was also able to watch 8K HEVC files smoothly in the Windows movie player. That is pretty impressive for a 15-inch convertible laptop; the 6-core Xeon processor pairs well with a desktop GPU, making this an ideal system for harnessing the workflow possibilities offered by eGPU solutions.

Media Export Benchmarks
I did extensive benchmark testing, measuring export times for various media at different settings with different internal and external GPU options. The basic conclusion was that simple transcodes and conversions are currently not much faster with an eGPU, but once color correction and other effects are brought into the equation, the increased GPU power makes processing two to five times faster.

I also tested DCP exports with Quvis’ Wraptor plugin for AME and found the laptop took less than twice as long as my top-end desktop to make DCPs, which I consider a good result. You can kick out a 4K movie trailer in under 10 minutes. For a full feature film I would recommend a desktop, but this machine will do it in a couple of hours.

Final Observations
The ZBook Studio x360 is a powerful machine and an optimal host for eGPU workflows. While it exceeded my performance expectations, I did not find the touch and ink solution optimal for my needs, as I am a heavy keyboard user even when doing artistic tasks. (To be clear, I haven’t found a better solution; this one just doesn’t suitably replace my traditional mouse-and-keyboard approach.) So if I were buying one for myself, I would opt for the non-touch ZBook Studio model. But for anyone for whom inking is a critical part of an artistic workflow and who needs a powerful system on the go, this is a very capable model without many similar alternatives. It blends the power of the ZBook Studio with the inking experience of HP’s other x360 products.


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

SGO’s Mistika Ultima integrates AJA’s Kona 5

SGO has integrated AJA’s Kona 5 audio and video I/O cards into its full finishing and workflow solution Mistika Ultima, providing simplified and optimized 8K production for broadcast clients.

The new Mistika Ultima 8K System provides a realtime finishing workflow for 8K full UHD at 60p, even with uncompressed formats. It comprises an AJA Kona 5 card with 12G-SDI I/O connectivity, Mistika Ultima software, an HP Z8 workstation, a high-performance SGO storage solution and other industry-standard hardware.

Kona 5 is a high-performance eight-lane PCIe 3.0 capture and output card featuring 12G-SDI I/O and HDMI 2.0 output. For OEM partners, the card is supported on AJA’s SDK for Mac OS, Windows and Linux, offering advanced features such as 8K and multi-channel 4K. Kona 5 is also compatible with creative tools such as Adobe Premiere Pro, Apple Final Cut Pro X and Avid Media Composer, via AJA’s proven Mac OS and Windows drivers and application plug-ins. Kona 5 enables simultaneous capture with signal passthrough when using 12G-SDI, and offers HDMI 2.0 output, as well as deep-color and multi-format HDR support.


Molinare hires Nigel Bennett as commercial director

Nigel Bennett will be joining London’s Molinare as commercial director. He was most recently at Pinewood Studios and starts in May. 

Bennett brings experience managing creative, technical and financial pressures within post production.

At Pinewood Studios, Bennett was group director of creative services, a position he had held since 2014, in which he oversaw the opening of Pinewood Digital in Atlanta. Over a career in post, he worked his way up from re-recording mixer through operations management across film, TV and games, to head of operations for digital content services and, ultimately, his most recent role.

As a re-recording mixer at Shepperton Studios, he worked on a range of titles such as Nanny McPhee, Troy, Love Actually, Gosford Park and Last Orders. 

The London facility looks to build on the success of award-winning dramas Killing Eve and Bodyguard, the Primetime Emmy award-nominated Patrick Melrose, the documentary Three Identical Strangers and feature Mission: Impossible – Fallout, all from last year.


Sound designer Ash Knowlton joins Silver Sound

Emmy Award-winning NYC sound studio Silver Sound has added sound engineer Ash Knowlton to its roster. Knowlton is both a location sound recordist and sound designer, and on rare and glorious occasions she is DJ Hazyl. Knowlton has worked on film, television, and branded content for clients such as NBC, Cosmopolitan and Vice, among others.

“I know it might sound weird but for me, remixing music and designing sound occupy the same part of my brain. I love music, I love sound design — they are what make me happy. I guess that’s why I’m here,” she says.

Knowlton moved to Brooklyn from Albany when she was 18 years old. To this day, she considers making the move to NYC and surviving as one of her biggest accomplishments. One day, by chance, she ran into filmmaker John Zhao on the street and was cast on the spot as the lead for his feature film Alexandria Leaving. The experience opened Knowlton’s eyes to the wonders and complexity of the filmmaking process. She particularly fell in love with sound mixing and design.

Ten years later, with over seven independent feature films now under her belt, Knowlton is ready for the next 10 years as an industry professional.

Her tools of choice at Silver Sound are Reaper, Reason and Kontakt.

Main Photo Credit: David Choy


Method Studios adds Bill Tlusty as global head of production

Method Studios has brought veteran production executive and features VFX producer Bill Tlusty on board in the new role of global head of production. Reporting to Erika Burton, EVP of global features VFX, Tlusty will oversee Method’s global feature film and episodic production operation, leading teams worldwide.

Tlusty’s career as both a VFX producer and executive spans two decades. Most recently, as an executive with Universal Pictures, he managed more than 30 features, including First Man and The Huntsman: Winter’s War. His new role marks a return to Method Studios, as he served as head of studio in Vancouver prior to his gig at Universal. Tlusty also spent eight years as a VFX producer and executive producer at Rhythm & Hues.

In that capacity he was lead executive on Snow White and the Huntsman and the VFX Oscar-winning Life of Pi. His other VFX producer credits include Night at the Museum: Battle of the Smithsonian, The Mummy: Tomb of the Dragon Emperor and Yogi Bear, and he served as production manager on Hulk and Peter Pan and as coordinator on A.I. Artificial Intelligence. Early in his career, Tlusty worked as a production assistant at American Zoetrope for its iconic filmmaker founders, Francis Ford Coppola and George Lucas. His VFX career began at Industrial Light & Magic, where he worked in several capacities on the Star Wars prequel trilogy, first as a VFX coordinator and later as production manager on the series. He is a member of the Producers Guild of America.

“Method has pursued intelligent growth, leveraging the strength across all of its studios, gaining presence in key regions and building on that to deliver high-quality work on a massive scale,” says Tlusty. “Coming from the client side, I understand how important it is to have the flexibility to grow as needed for projects.”

Tlusty is based in Los Angeles and will travel extensively among Method’s global studios.


Updated Quantum Xcellis targets robust video workflows

Quantum has updated its Xcellis storage environment, which allows users to ingest, edit, share and store media content. The new appliances, which are powered by the company’s StorNext platform, are based on a next-generation server architecture that includes dual eight-core Intel Xeon CPUs, 64GB of memory, SSD boot drives and dual 100Gb Ethernet or 32Gb Fibre Channel ports.

The enhanced CPU and a 50% increase in RAM over the previous generation greatly improve StorNext metadata performance. These enhancements make tasks such as file auditing less time-intensive, support an even greater number of clients per node and enable the management of billions of files per node. Users running a dynamic application environment on storage nodes will also see performance improvements.

With the ability to provide cross-protocol locking for shared files across SAN, NFS and SMB, Xcellis targets organizations that have collaborative workflows and need to share content across both Fibre Channel and Ethernet.

Leveraging this next-generation hardware platform, StorNext will provide higher levels of streaming performance for video playback. Xcellis appliances provide a high-performance gateway for StorNext’s advanced data management software to integrate tiers of scalable on-premises and cloud-based storage. This end-to-end capability provides a cost-effective way to retain massive amounts of data.

StorNext offers a variety of features to ensure data protection for valuable content over its entire lifecycle. Users can easily copy files to off-site tiers, take advantage of versioning to roll back to an earlier point in time (prior to a malware attack, for example) and set up automated replication for disaster recovery purposes — all of which is designed to protect digital assets.

Quantum’s latest Xcellis appliances are available now.


AICE Awards rebranded to AICP Post Awards

AICP has announced the Call for Entries for the AICP Post Awards, its revamped and rebranded competition for excellence in the post production arts. Formerly the AICE Awards, the competition has had its categories re-imagined with a focus on recognizing standout examples of craft and technique in editing, audio, design, visual effects artistry and finishing. The AICP Post Awards are part of the AICP Awards suite of competitions, which also includes The AICP Show: The Art & Technique of the American Commercial and the AICP Next Awards, both of which are also currently accepting entries.

Among the changes for the AICP Post Awards this year is the opening of the competition to any entity involved in the creation of a piece of content, beyond the AICP membership; previously, the AICE Awards was a “members only” competition.

For the full rundown on rules, categories, eligibility and fees, visit the AICP Post Awards entry portal. Deadline for entries is Thursday, February 8 at 11:59pm PST. Entrants can use the portal to cross-enter work between all three of the 2019 AICP competitions, including the AICP Show: The Art & Technique of the American Commercial and the AICP Next Awards.

Regarding categories, the competition has regrouped its existing categories, introduced a range of new sections, expanded others and added an entirely new category for vertical video.

Danny Rosenbloom

“While we’ll continue to recognize editorial across a wide range of product, genre and technique categories, we now have a wider range of subcategories in areas like audio, visual effects and design, and color grading,” says Danny Rosenbloom, AICP’s VP of post and digital production.

“We saw this as an opportunity to make the Post Awards more reflective of the varied artists working across the spectrum of post production disciplines,” noted Matt Miller, president/CEO of AICP. “Now that we’ve brought all this post production expertise into AICP, we want the Post Awards to be a real celebration of creative talent and achievement.”

A full list of AICP Post Awards categories now includes the following:

Editorial Categories
Automotive
Cause Marketing
Comedy
Dialogue
Monologue/Spoken Word
Docu-Style
Fashion/Beauty
Montage
Music Video
Storytelling
National Campaign
Regional Campaign

Audio Categories
Audio Mix
Sound Design With Composed Music
Sound Design Without Composed Music

Color Categories
Color :60
Color :30
Color Other Lengths
Color Music Video

Design, Visual Effects & Finishing Categories
Character Design & Animation
Typography Design & Animation
Graphic Design & Animation
End Tag
CGI
Compositing & Visual Effects
Vertical

In addition to its category winners and Best of Show honoree, the AICP Post Awards will continue to recognize Best of Region winners that represent the best work emanating from companies submitting within each AICP Chapter. These now encompass East, Florida, Midwest, Minnesota, Southeast, Southwest and West.


Industry vets open editorial, post studio Made-SF

Made-SF, a creative studio offering editorial and other services, has been launched by executive producer Jon Ettinger, editor/director Doug Walker and editors Brian Lagerhausen and Connor McDonald, all formerly of Beast Editorial. Along with creative editorial (Adobe Premiere), the company will provide motion graphic design (After Effects, Mocha), color correction and editorial finishing (likely Flame and Resolve). Eventually, it plans to add concept development, directing and production to its mix.

“Clients today are looking for creative partners who can help them across the entire production chain,” says Ettinger. “They need to tell stories and they have limited budgets available to tell them. We know how to do both, and we are gathering the resources to do so under one roof.”

Made is currently set up in interim quarters while construction of its permanent studio space is completed. That space, housed in a century-old structure in San Francisco’s North Beach neighborhood, will feature five editorial suites, two motion graphics suites and two post production finishing suites, with room for further expansion.

The four Made partners bring deep experience in traditional advertising and branded content, working both with agencies and directly with clients. Ettinger and Walker have worked together for more than 20 years and originally teamed up to launch FilmCore, San Francisco. Both joined Beast Editorial in 2012. Similarly, Lagerhausen and McDonald have been editing in the Bay Area for more than two decades. Collectively, their credits include work for agencies in San Francisco and nationwide. They’ve also helped to create content directly for Google, Facebook, LinkedIn, Salesforce and other corporate clients.

Made is indicative of a trend in which companies engaged in content development are adopting fluid business models to address a diversifying media landscape, and in which individual talent is no longer confined to a single job title. Walker, for example, has recently served as director on several projects, including a series of short films for Kelly Services, conceived by agency Erich & Kallman and produced by Caruso Co.

“People used to go to great pains to make a distinction about what they do,” Ettinger observes. “You were a director or an editor or a colorist. Today, those lines have blurred. We are taking advantage of that flattening out to offer clients a better way to create content.”

Main Image Caption: (L-R) Doug Walker, Brian Lagerhausen, Jon Ettinger and Connor McDonald.

Company 3 to open Hollywood studio, adds Roma colorist Steve Scott

Company 3 has added Steve Scott as EVP/senior finishing artist. His long list of credits includes Alfonso Cuarón’s Oscar-nominated Roma and Gravity; 19 Marvel features, including entries in the Avengers, Iron Man and Guardians of the Galaxy franchises; and many Academy Award-winning films, including The Jungle Book, Birdman or The Unexpected Virtue of Ignorance and The Revenant (the latter two took Oscars for director Alejandro Iñárritu and cinematographer Emmanuel Lubezki).

Roma

The addition of Scott comes as Company 3 completes work on a new location at 950 Lillian Way in Hollywood, the first phase of a planned much larger footprint in that part of Los Angeles. The new space will enable the company to significantly expand its capacity while providing the level of artistry and personalized service the industry expects from Company 3, and it will let the company serve more East Side- and Valley-based clients.

“Steve is someone I’ve always wanted to work with and I am beyond thrilled that he has agreed to work with us at Company 3,” says CEO Stefan Sonnenfeld. “As we continue the process of re-imagining the entire concept of what ‘post production’ means creatively and technically, it makes perfect sense to welcome a leading innovator and brilliant artist to our team.”

Sonnenfeld and Scott will oversee every facet of this new boutique-style space to ensure it offers the same flexible experience clients have come to expect when working at Company 3. Scott, a devoted student of art and architecture, with extensive professional experience as a painter and architectural illustrator, says, “The opportunity to help design a new cutting-edge facility in my Hollywood hometown was too great to pass up.”

Scott will oversee a team of additional artists, offering filmmakers a significantly increased ability to augment and refine imagery as part of the finishing process.

“The industry is experiencing a renaissance of content,” says Sonnenfeld. “The old models of feature film vs. television, long- vs. short-form are changing rapidly. Workflows and delivery methods are undergoing revolutionary changes with more content, and innovative content, coming from a whole array of new sources. It’s a very exciting and challenging time and I think these major additions to our roster and infrastructure will go a long way towards our goal of continuing Company 3’s role as a major force in the industry.”

Main Image Credit: 2018 HPA Awards Ceremony/Ryan Miller/Capture Imaging

BlacKkKlansman director Spike Lee

By Iain Blair

Spike Lee has been on a roll recently. Last time we sat down for a talk, he’d just finished Chi-Raq, an impassioned rap reworking of Aristophanes’ “Lysistrata” set against a backdrop of Chicago gang violence. Since then, he’s directed various TV, documentary and video projects. And now his latest film, BlacKkKlansman, has been nominated for a host of Oscars, including Best Picture, Best Director, Best Adapted Screenplay, Best Film Editing, Best Original Score and Best Actor in a Supporting Role (Adam Driver).

Set in the early 1970s, the unlikely-but-true story details the exploits of Ron Stallworth (John David Washington), the first African-American detective to serve in the Colorado Springs Police Department. Determined to make a name for himself, Stallworth sets out on a dangerous mission: infiltrate and expose the Ku Klux Klan. The young detective soon recruits a more seasoned colleague, Flip Zimmerman (Adam Driver), into the undercover investigation. Together, they team up to take down the extremist hate group as the organization aims to sanitize its violent rhetoric to appeal to the mainstream. The film also stars Topher Grace as David Duke.

Behind the scenes, Lee reteamed with co-writer Kevin Willmott, longtime editor Barry Alexander Brown and composer Terence Blanchard, along with up-and-coming DP Chayse Irvin. I spoke with the always-entertaining Lee, who first burst onto the scene back in 1986 with She’s Gotta Have It, about making the film, his workflow and the Oscars.

Is it true Jordan Peele turned you onto this story?
Yeah, he called me out of the blue and gave me possibly the greatest six-word pitch in film history — “Black man infiltrates Ku Klux Klan.” I couldn’t resist it, not with that pitch.

Didn’t you think, “Wait, this is all too unbelievable, too Hollywood?”
Well, my first question was, “Is this actually true? Or is it a Dave Chappelle skit?” Jordan assured me it’s a true story and that Ron wrote a book about it. He sent me a script, and that’s where we began, but Kevin Willmott and I then totally rewrote it so we could include all the stuff like Charlottesville at the end.

Iain Blair and Spike Lee

Did you immediately decide to juxtapose the story’s period racial hatred with all the ripped-from-the-headlines news footage?
Pretty much, as the Charlottesville rally happened August 11, 2017 and we didn’t start shooting this until mid-September, so we could include all that. And then there was the terrible synagogue massacre, and all the pipe bombs. Hate crimes are really skyrocketing under this president.

Fair to say, it’s not just a film about America, though, but about what’s happening everywhere — the rise of neo-Nazism, racism, xenophobia and so on in Europe and other places?
I’m so glad you said that, as I’ve had to correct several people who want to just focus on America, as if this is just happening here. No, no, no! Look at the recent presidential elections in Brazil. This guy — oh my God! This is a global phenomenon, and the common denominator is fear. You fire up your base with fear tactics, and pinpoint your enemy — the bogeyman, the scapegoat — and today that is immigrants.

What were the main challenges in pulling it all together?
Any time you do a film, it’s so hard and challenging. I’ve been doing this for decades now, and it ain’t getting any easier. You have to tell the story the best way you can, given the time and money you have, and it has to be a team effort. I had a great team with me, and any time you do a period piece you have added challenges to get it looking right.

You assembled a great cast. What did John David Washington and Adam Driver bring to the main roles?
They brought the weight, the hammer! They had to do their thing and bring their characters head-to-head, so it’s like a great heavyweight fight, with neither one backing down. It’s like Inside Man with Denzel and Clive Owen.

It’s the first time you’ve worked with the Canadian DP Chayse Irvin, who mainly shot shorts before this. Can you talk about how you collaborated with him?
He’s young and innovative, and he shot a lot of Beyoncé’s Lemonade long-form video. What we wanted to do was shoot on film, not digital. I talked about all the ‘70s films I grew up with, like French Connection and Dog Day Afternoon. So that was the look I was after. It had to match the period, but not be too nostalgic. While we wanted to make a period film, I also wanted it to feel and look contemporary, and really connect that era with the world we live in now. He really nailed it. Then my great editor, Barry Alexander Brown, came up with all the split-screen stuff, which is also very ‘70s and really captured that era.

How tough was the shoot?
Every shoot’s tough. It’s part of the job. But I love shooting, and we used a mix of practical locations and sets in Brooklyn and other places that doubled for Colorado Springs.

Where did you post?
Same as always, in Brooklyn, at my 40 Acres and a Mule office.

Do you like the post process?
I love it, because post is when you finally sit down and actually make your film. It’s a lot more relaxing than the shoot — and a lot of it is just me and the editor and the Avid. You’re shaping and molding it and finding your way, cutting and adding stuff, flopping scenes, and it never really follows the shooting script. It becomes its own thing in post.

Talk about editing with Barry Alexander Brown, the Brit who’s cut so many of your films. What were the big editing challenges?
The big one was finding the right balance between the humor and the very serious subject matter. They’re two very different tones, and then the humor comes from the premise, which is absurd in itself. It’s organic to the characters and the situations.

Talk about the importance of sound and music, and Terence Blanchard’s spare score that blends funk with classical.
He’s done a lot of my films, and has never been nominated for an Oscar — and he should have been. He’s a truly great composer, trumpeter and bandleader, and a big part of what I do in post. I try to give him some pointers that aren’t restrictive, and then let him do his thing. I always put as much emphasis on sound and music as I do on the acting, editing and cinematography. It’s hugely important, and once we have the score, we have a film.

I had a great sound team. Phil Stockton, who began with me back on School Daze, was the sound designer. David Boulton, Mike Russo and Howard London did the ADR mix, and my longtime mixer Tommy Fleischman was on it. We did it all at C5 in New York. We spent a long time on the mix, building it all up.

Where did you do the DI and how important is it to you?
At Company 3 with colorist Tom Poole, who’s so good. It’s very important but I’m in and out, as I know Tom and the DP are going to get the look I want.

Spike Lee on set.

Did the film turn out the way you hoped?
Here’s the thing. You try to do the best you can, and I can’t predict what the reaction will be. I made the film I wanted to make, and then I put it out in the world. It’s all about timing. This was made at the right time and was made with a lot of urgency. It’s a crazy world and it’s getting crazier by the minute.

How important are industry awards and nominations to you?
They’re very important in that they bring more attention, more awareness to a film like this. One of the blessings from the strong critical response to this has been a resurgence in looking at my earlier films again, some of which may have been overlooked, like Bamboozled and Summer of Sam.

Do you see progress in Hollywood in terms of diversity and inclusion?
There’s been movement, maybe not as fast as I’d like, but it’s slowly happening, so that’s good.

What’s next?
We just finished the second season of She’s Gotta Have It for Netflix, and I have some movie things cooking. I’m pretty busy.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Post in the cloud company BeBop adds three tech pros

BeBop Technology, a provider of secure software solutions for moving media workflows to the cloud, has added three new members to its management team: director of business development Michael Kammes, VP of product management Patrick Cooper and director of technical sales Nathaniel Bonini.

Michael Kammes joins BeBop from the media technology reseller and integrator Key Code Media, where he was director of technology. In his new position, he will leverage his experience with creative technology and tools providers to accelerate growth and provide strategic perspective across marketing, sales and partnerships. In addition to his experience as an integrator, Kammes brings more than 15 years of experience in technology consulting for the media and entertainment industry. His 5 Things web series breaks down technology and techniques. Kammes is a graduate of Columbia College in Chicago.

Cooper joins BeBop from Nokia, where he served as product manager and technical lead for tools and workflows. As part of Nokia’s Ozo camera team, he was instrumental in developing software and hardware products and designing workflows for virtual reality pros. Cooper also led film restoration and theatrical feature image processing projects at Lowry Digital and was a key contributor to the creation of its Academy Award-winning motion picture imaging technology. He is a graduate of the University of Southern California.

Bonini brings more than 30 years of experience as a technologist for cinema and broadcast. He joins BeBop from Meredith Corporation and Time Inc., where he served as director of video engineering. Throughout his career Bonini has provided crucial technology guidance as a digital cinema consultant, worked in numerous on-set and post roles, was director of integration for AbelCine and CTO for Madstone Films. He is a graduate of Rochester Institute of Technology.

BeBop’s cloud technology solutions include its flagship post production platform, which provides robust and secure virtualized desktops capable of processing-heavy tasks such as editing and visual effects, as well as “over the shoulder” collaboration, review and approval. Creatives can use industry-standard tools such as Adobe Creative Cloud on BeBop with their existing software licenses, and can collaborate, process images, render, review and approve, ingest, manage and deliver media files from anywhere in the world using any computer with a 20Mbps internet connection.

Image Caption: Michael Kammes, Nathaniel Bonini, Patrick Cooper.

Catching up with Aquaman director James Wan

By Iain Blair

Director James Wan has become one of the biggest names in Hollywood thanks to the $1.5 billion-grossing Furious 7, as well as the Saw, Conjuring and Insidious films — three of the most successful horror franchises of the last decade.

Now the Malaysian-born, Australian-raised Wan, who also writes and produces, has taken on the challenge of bringing Aquaman and Atlantis to life. The origin story of half-surface dweller, half-Atlantean Arthur Curry stars Jason Momoa in the title role. Amber Heard plays Mera, a fierce warrior and Aquaman’s ally throughout his journey.

James Wan and Iain Blair

Additional cast includes Willem Dafoe as Vulko, counsel to the Atlantean throne; Patrick Wilson as Orm, the present King of Atlantis; Dolph Lundgren as Nereus, King of the Atlantean tribe Xebel; Yahya Abdul-Mateen II as the revenge-seeking Manta; and Nicole Kidman as Arthur’s mom, Atlanna.

Wan’s team behind the scenes included such collaborators as Oscar-nominated director of photography Don Burgess (Forrest Gump), his five-time editor Kirk Morri (The Conjuring), production designer Bill Brzeski (Iron Man 3), visual effects supervisor Kelvin McIlwain (Furious 7) and composer Rupert Gregson-Williams (Wonder Woman).

I spoke with the director about making the film, dealing with all the effects, and his workflow.

Aquaman is definitely not your usual superhero. What was the appeal of doing it? 
I didn’t grow up with Aquaman, but I grew up with other comic books, and I always was well aware of him as he’s iconic. A big part of the appeal for me was he’d never really been done before — not on the big screen and not really on TV. He’s never had the spotlight before. The other big clincher was this gave me the opportunity to do a world-creation film, to build a unique world we’ve never seen before. I loved the idea of creating this big fantasy world underwater.

What sort of film did you set out to make?
Something that was really faithful and respectful to the source material, as I loved the world of the comic book once I dove in. I realized how amazing this world is and how interesting Aquaman is. He’s bi-racial, half-Atlantean, half-human, and he feels he doesn’t really fit in anywhere at the start of the film. But by the end, he realizes he’s the best of both worlds and he embraces that. I loved that. I also loved the fact it takes place in the ocean so I could bring in issues like the environment and how we treat the sea, so I felt it had a lot of very cool things going for it — quite apart from all the great visuals I could picture.

Obviously, you never got the Jim Cameron post-Titanic memo — never, ever shoot in water.
(Laughs) I know, but to do this we unfortunately had to get really wet, as over two-thirds of the film is set underwater. The crazy irony is that when people are underwater, they don’t look wet. It’s only when you come out of the sea or pool that you’re glossy and dripping.

We did a lot of R&D early on, and decided that shooting underwater looking wet wasn’t the right look anyway, plus they’re superhuman and are able to move in water really fast, like fish, so we adopted the dry-for-wet technique. We used a lot of special rigs for the actors, along with bluescreen, and then combined all that with a ton of VFX for the hair and costumes. Hair is always a big problem underwater, as like clothing it behaves very differently, so we had to do a huge amount of work in post in those areas.

How early on did you start integrating post and all the VFX?
It’s that kind of movie where you have to start post and all the VFX almost before you start production. We did so much prep, just designing all the worlds and figuring out how they’d look, and how the actors would interact with them. We hired an army of very talented concept artists, and I worked very closely with my production designer Bill Brzeski, my DP Don Burgess and my visual effects supervisor Kelvin McIlwain. We went to work on creating the whole look and trying to figure out what we could shoot practically with the actors and stunt guys and what had to be done with VFX. And the VFX were crucial in dealing with the actors, too. If a body didn’t quite look right, they’d just replace them completely, and the only thing we’d keep was the face.

It almost sounds like making an animated film.
You’re right, as over 90% of it was VFX. I joke about it being an animated movie, but it’s not really a joke. It’s no different from, say, a Pixar movie.

Did you do a lot of previs?
A lot, with people like Third Floor, Day For Nite, Halon, Proof and others. We did a lot of storyboards too, as they are quicker if you want to change a camera angle, or whatever, on the fly. Then I’d hand them off to the previs guys and they’d build on those.

What were the main technical challenges in pulling it all together on the shoot?
We shot most of it Down Under, near Brisbane. We used all nine of Village Roadshow Studios’ soundstages, including the new Stage 9, as we had over 50 sets, including the Atlantis Throne Room and Coliseum. The hardest thing in terms of shooting it was just putting all the actors in the rigs for the dry-for-wet sequences; they’re very cumbersome and awkward, and the actors are also in these really outrageous costumes, and it can be quite painful at times for them. So you can’t have them up there too long. That was hard. Then we used a lot of newish technology, like virtual production, for scenes where the actors are, say, riding creatures underwater.

We’d have it hooked up to the cameras so you could frame a shot and actually see the whole environment and the creature the actor is supposed to be on — even though it’s just the actors and bluescreen and the creature is not there. And I could show the actors — look, you’re actually riding a giant shark — and also tell the camera operator to pan left or right. So it was invaluable in letting me adjust performance and camera setups as we shot, and all the actors got an idea of what they were doing and how the VFX would be added later in post. Designing the film was so much fun, but executing it was a pain.

The film was edited by Kirk Morri, who cut Furious 7, and worked with you on the Insidious and The Conjuring films. How did that work?
He wasn’t on set but he’d visit now and again, especially when we were shooting something crazy and it would be cool to actually see it. Then we’d send dailies and he’d start assembling, as we had so much bluescreen and VFX stuff to deal with. I’d hop in for an hour or so at the end of each day’s shoot to go over things as I’m very hands on — so much so that I can drive editors crazy, but Kirk puts up with all that.

I like to get a pretty solid cut from the start. I don’t do rough assemblies. I like to jump straight into the real cut, and that was so important on this because every shot is a VFX shot. So the sooner you can lock the shot, the better, and then the VFX teams can start their work. If you keep changing the cut, then you’ll never get your VFX shots done in time. So we’d put the scene together, then pass it to previs, so you don’t just have actors floating in a bluescreen, but they’re in Atlantis or wherever.

Where did you do the post?
We did most of it back in LA on the Warner lot.

Do you like the post process?
I absolutely love it, and it’s very important to my filmmaking style. For a start, I can never give up editing and tweaking all the VFX shots. They have to pull it away from me, and I’d say that my love of all the elements of the post process — editing, sound design, VFX, music — comes from my career in suspense movies. Getting all the pieces of post right is so crucial to the end result and success of any film. This post was creatively so much fun, but it was long and hard and exhausting.

James Wan

All the VFX must have been a huge challenge.
(Laughs) Yes, there are over 2,500 VFX shots and we had everyone working on it — ILM, Scanline, Base, Method, MPC, Weta, Rodeo, Digital Domain, Luma — anyone who had a computer! Every shot had some VFX, even the bar scene where Arthur’s with his dad. That was a set, but the environment outside the window was all VFX.

What was the hardest VFX sequence to do?
The answer is, the whole movie. The trench sequence was hard, but Scanline did a great job. Anything underwater was tough, and then the big final battle was super-difficult, and ILM did all that.

Did the film turn out the way you hoped?
For the most part, but like most directors, I’m never fully satisfied.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Review: Picture Instruments’ plugin and app, Color Cone 2

By Brady Betzel

There are a lot of different ways to color correct an image. Typically, colorists will start by adjusting contrast and saturation, followed by the lift, gamma and gain (a.k.a. shadows, midtones and highlights). For video, waveforms and vectorscopes are great ways of measuring color values and are about the only way to get objective readings of the colors you are manipulating.

Whether you are in Blackmagic Resolve, Avid Media Composer, Adobe Premiere Pro, Apple FCP X or any other nonlinear editor or color correction app, you usually have similar color correction tools across apps — whether you color based on curves, wheels, sliders or even interactively on screen. So when I heard about the way that Picture Instruments Color Cone 2 color corrects — via a Cone (or really a bicone) — I was immediately intrigued.

Color Cone 2 is a standalone app but also, more importantly, a plugin for Adobe After Effects, Adobe Premiere Pro and FCP X. In this review I am focusing on the Premiere Pro plugin, but keep in mind that the standalone version works on still images and allows you to export 3DL or Cube LUTs — a great way for a client to quickly see what type of result you can get from just a still image.

Color Cone 2 is purely a color corrector when used as a plugin for Adobe Premiere. There are no contrast or saturation adjustments, just the ability to select a color and transform it. For instance, you can select a blue sky and adjust the hue, chrominance (saturation) and/or luminance of the resulting color inside the Color Cone plugin.

To get started, you apply the Color Cone 2 plugin to your clip — the plugin is located under Picture Instruments in the Effects panel. Then you click the little square icon in the Effect Controls panel to open the Color Cone 2 interface. The interface contains the bicone representation of the color correction, presets for setting up a split-tone color map or a three-point color correction, and a radius slider to adjust how much your correction affects surrounding colors.

Once you are set on a look, you can jump out of the Color Cone interface and back into the Effect Controls panel inside Premiere. There you can keyframe all of the parameters you adjusted in the Color Cone interface, which makes it easy to transition from uncorrected to corrected footage.

The Cone
The Cone itself is the most interesting part of this plugin. Think of the bicone as the 3D side view of a vectorscope: if the traditional vectorscope is the top view, the bicone in Color Cone is the side view. Moving your target color from the top cone to the bottom cone adjusts luminance from light to dark. Saturation (or chrominance) lives at the intersection of the cones and increases as you move from the center outward. When a color is selected using the eyedropper you will see a square, which represents the source color selection, a circle representing the target color and an “x” with a line for reference on the middle section.

Additionally, there is a black circle on the saturation section in the middle that shows the boundaries of how far you can push your chrominance, and a lighter circle that represents the radius within which surrounding colors are affected. Effects can be layered on each video clip, and one instance of the plugin can handle five colors. If you need more than five, you can add another instance of the plugin to the same clip.
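
To make the bicone geometry concrete, here is a minimal Python sketch (my own illustration, not Picture Instruments’ code) that maps an RGB value into bicone-style coordinates using the standard library’s colorsys module: hue becomes the angle around the axis, chroma the distance from the axis and luminance the height between the black and white tips.

```python
import colorsys
import math

def to_bicone(r, g, b):
    """Map RGB (0-1 floats) to bicone-style coordinates."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)   # note the HLS order: hue, lightness, saturation
    chroma = (1.0 - abs(2.0 * l - 1.0)) * s  # radius shrinks toward the black/white tips
    angle = math.radians(h * 360.0)          # hue as the angle around the axis
    x = chroma * math.cos(angle)
    y = chroma * math.sin(angle)
    return x, y, l   # (x, y) is the vectorscope-like top view; l is the side-view height

# A saturated blue sits near the widest part of the bicone...
print(to_bicone(0.1, 0.2, 0.9))
# ...while a near-white collapses toward the top tip, whatever its hue.
print(to_bicone(0.95, 0.95, 1.0))
```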

If you are looking to export 3DL and Cube LUTs of your work, you will need to use the standalone Color Cone 2 app. The one caveat of the standalone app is that you can only apply color to still images. Once you do, you can export the LUT for use in any modern NLE or color correction app.
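
For a sense of how simple the exported format is, here is a minimal sketch that writes an identity .cube LUT, assuming the common convention in which the red index varies fastest. This illustrates the file layout only; it is not Picture Instruments’ exporter, and a real grade would remap the output triples rather than echo the input:

```python
# Minimal identity .cube LUT writer. A 3D LUT is just a lattice of
# output RGB triples sampled over input RGB.
SIZE = 17  # lattice points per axis; 33 or 65 are typical for finishing work

with open("identity.cube", "w") as f:
    f.write(f"LUT_3D_SIZE {SIZE}\n")
    for b in range(SIZE):
        for g in range(SIZE):
            for r in range(SIZE):
                # identity: output equals input; a grade would perturb these values
                f.write(f"{r/(SIZE-1):.6f} {g/(SIZE-1):.6f} {b/(SIZE-1):.6f}\n")
```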

Summing Up
To be honest, working in Color Cone 2 was a little weird for me. It’s not your usual color correction workflow, so I would need to sit with the plugin for a while to get used to its setup. That being said, it has some interesting components that I wish other color correction apps would use, such as the Cone view. The bicone is a phenomenal way to visualize color correction in realtime.

In my opinion, if Picture Instruments sold just the Cone as a color measurement tool to work in conjunction with Lumetri, they would have another solid product. Color Cone 2 offers a unique and interesting way to color correct in Premiere, acting as an advanced secondary color correction tool alongside the Lumetri color tools.

The Color Cone 2 standalone app and plugin cost $139 when purchased together, or $88 individually. In my opinion, video people should probably just stick with the plugin version. Check out Picture Instruments’ website for more info on Color Cone 2 as well as their other products. And check them out on Twitter @Pic_instruments.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Full-service creative agency Carousel opens in NYC

Carousel, a new creative agency helmed by Pete Kasko and Bernadette Quinn, has opened its doors in New York City. Billing itself as “a collaborative collective of creative talent,” Carousel is positioned to handle projects from television series to ad campaigns for brands, media companies and advertising agencies.

Clients such as PepsiCo’s Pepsi, Quaker and Lays brands; Victoria’s Secret; Interscope Records; A&E Network and The Skimm have all worked with the company.

Designed to provide full 360 capabilities, Carousel allows its brand partners to partake of all its services or pick and choose specific offerings including strategy, creative development, brand development, production, editorial, VFX/GFX, color, music and mix. Along with its client relationships, Carousel has also been the post production partner for agencies such as McGarryBowen, McCann, Publicis and Virtue.

“The industry is shifting in how the work is getting done. Everyone has to be faster and more adaptable to change without sacrificing the things that matter,” says Quinn. “Our goal is to combine brilliant, high-caliber people, seasoned in all aspects of the business, under one roof together with a shared vision of how to create better content in a more efficient way.”

Managing director Dee Tagert comments, “The name Carousel describes having a full set of capabilities from ideation to delivery so that agencies or brands can jump on at any point in their process. By having a small but complete agency team that can manage and execute everything from strategy, creative development and brand development to production and post, we can prove more effective and efficient than a traditional agency model.”

Danielle Russo, Dee Tagert, AnaLiza Alba Leen

AnaLiza Alba Leen comes on board Carousel as creative director with 15 years of global agency experience, and executive producer Danielle Russo brings 12 years of agency experience.
Tagert adds, “The industry has been drastically changing over the last few years. As clients’ hunger for content is driving everything at a much faster pace, it was completely logical to us to create a fully integrative company to be able to respond to our clients in a highly productive, successful manner.”

Carousel is currently working on several upcoming projects for clients including Victoria’s Secret, DNTL, Subway, US Army, Tazo Tea and Range Rover.

Main Image: Bernadette Quinn and Pete Kasko

Boxx adds new Apexx S-class workstations with 9th-gen Intel processors

Boxx Technologies is offering a new line of Apexx S-class workstations featuring the company’s flagship Apexx S3. Purpose-built for 3D design, CAD and motion media workflows requiring CPU frequencies suitable for lightly threaded apps, the compact Apexx S3 now features a 9th-generation, eight-core Intel Core i7 or i9 processor (professionally overclocked to 5.1GHz) to support more heavily threaded applications as well.

Designed to optimize Autodesk tools, Adobe Creative Cloud, Maxon Cinema 4D and other applications, the overclocked and liquid-cooled Apexx S3 sustains its 5.1GHz frequency across all cores. With increased storage and upgradability, as well as multiple Nvidia Quadro or AMD Radeon Pro graphics cards, S3 is also ideal for light GPU compute or virtual reality.

New to the S-class line is Apexx Enigma S3. Built to accelerate professional 3D applications, Enigma S3 is also configurable with 9th-generation, eight-core Intel Core i7/i9 processors overclocked to 5.1GHz and up to three professional GPUs, making it suitable for workflows that include significant GPU rendering or GPU compute work.

The compact Apexx S3 and Enigma S3 are joined by the Apexx S1. The S1 also features an overclocked, eight-core Intel Core i7 for 3D content creation, CAD design and motion media. With its ultra-compact chassis, the S1 is a good solution for limited desktop space, an open environment or workflows where a graphics card is used primarily for display.

Rounding out the S-class family is the Apexx S4, a rack-mount system designed for heavy rendering or GPU compute.

Technicolor welcomes colorists Trent Johnson and Andrew Francis

Technicolor in Los Angeles will be beefing up its color department in January with the addition of colorists Andrew Francis and Trent Johnson.

Francis joins Technicolor after spending the last three years building the digital intermediate department at Sixteen19 in New York. His recent credits include Second Act, Night School, Hereditary and Girls Trip. A trained fine artist, Francis has established a strong reputation for integrating bleeding-edge technology in support of the craft of color.

Johnson, a Technicolor alumnus, returns after stints as a digital colorist at MTI, Deluxe and Sony Colorworks. His recent credits include horror hits Slender Man and The Possession of Hannah Grace, as well as comedies Overboard and Ted 2.

Johnson will be using FilmLight and Resolve for his work, while Francis will toggle between Resolve, BaseLight and Lustre, depending on the project.

Francis and Johnson join Technicolor LA’s roster, which includes Pankaj Bajpai, Tony Dustin, Doug Delaney, Jason Fabbro, recent HPA award-winner Maxine Gervais, Michael Hatzer, Roy Vasich, Tim Vincent, Sparkle and others.

Main Image: Trent Johnson and Andrew Francis

Rohde & Schwarz’s storage system R&S SpycerNode shipping

First shown at IBC 2018, Rohde & Schwarz’s new media storage system, R&S SpycerNode, is now available for purchase. The system uses High Performance Computing (HPC), a combination of hardware, file system and RAID approach geared toward performance, scalability and redundancy. For redundancy, HPC employs software RAID technologies, erasure coding in combination with declustering, to increase performance and reduce rebuild times. Scalability is almost unlimited, and the system can be expanded during operation.

According to Rohde & Schwarz, in creating this new storage system, their engineers looked at many of the key issues that affect media storage systems within high-performance video editing environments — from annoying maintenance requirements, such as defragging, to much more serious system failures, including dying disk drives.

R&S SpycerNode features Rohde & Schwarz‘s device manager web application, which makes it much easier to set up and use Rohde & Schwarz solutions in an integrated fashion. Device manager helps reduce setup times and simplifies maintenance and service thanks to an intuitive web-based UI operated through a single client.

To ensure data security, Rohde & Schwarz has introduced data protection based on erasure coding and declustering within the R&S SpycerNode. Erasure coding means that every data block is written together with parity information, so the contents of a failed drive can be reconstructed from the surviving blocks.

Declustering is part of the data protection approach of HPC setups. It is software-based, and in comparison to a traditional RAID setup, the spare capacity is spread across all disks rather than sitting on a dedicated spare disk. This decreases rebuild times and reduces the performance impact of a rebuild. There is also no RAID controller to act as a bottleneck, which results in much higher IOPS (input/output operations per second). Importantly, declustering means system performance does not degrade over time.
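
As a toy illustration of the parity idea behind erasure coding, here is a minimal XOR sketch. Real systems, SpycerNode included, use stronger codes that can survive multiple simultaneous failures; this sketch survives exactly one:

```python
from functools import reduce

def xor_parity(blocks):
    """Byte-wise XOR of equal-length data blocks; written alongside the data."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def rebuild(survivors, parity):
    """Reconstruct the single missing block from the survivors plus parity."""
    return xor_parity(survivors + [parity])

stripe = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks spread across disks
parity = xor_parity(stripe)            # parity block written with the stripe

lost = stripe[1]                       # pretend the disk holding block 1 died
restored = rebuild([stripe[0], stripe[2]], parity)
assert restored == lost                # the stripe is whole again
```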

R&S SpycerNode comes in multiple 2U and 5U chassis designs, available with NL-SAS HDDs and SAS SSDs in different capacities. An additional 2U24 chassis design is a pure flash system with main processor units and JBOD units. A main unit is always redundant, equipped with two appliance controllers (APs). Each AP features two 100Gb interfaces, resulting in four 100Gb interfaces per main unit.

The combination of different chassis systems makes R&S SpycerNode applicable to a very broad range of applications. The 2U system represents a compact, lightweight unit that works well within mobile productions as well as offering a very dense, high-speed storage device for on-premise applications. A larger 5U system offers sophisticated large-scale storage facilities on-premise within broadcast production centers and post facilities.

Storage for Post Studios

By Karen Moltenbrey

The post industry relies heavily on storage solutions, without question. Facilities are juggling a variety of tasks and multiple projects all at once, and deadlines are always looming. These studios need storage that is fast and reliable, and each has different needs when searching for the right system to fit its particular workflow. Luckily, there are many storage options for pros to choose from.

For this article, we spoke with two post houses about their storage solutions and why they are a good fit for each of their needs.

Sugar Studios LA
Sugar Studios LA is a one-stop-shop playground for filmmakers that offers a full range of post production services, including editorial, color, VFX, audio, production and finishing, with each department led by seasoned professionals. Its office suites in the Wiltern Theater Tower, in the center of LA, serve an impressive list of clients, from numerous independent film producers and distributors to Disney, Marvel, Sony, MGM, Universal, Showtime, Netflix, AMC, Mercedes-Benz, Ferrari and others.

Jijo Reed and Sting in one of their post suites.

With so much important data in play at one time, Sugar needs a robust, secure and reliable storage system. However, with diverse offerings come diverse requirements. For its online and color projects, Sugar uses a Symply SAN with 200TB of usable storage. The color workstations are connected via 10Gb Ethernet over Fibre with a 40Gb uplink to the network. For mass storage and offline work, the studio uses a MacOS server acting as a NAS, with 530TB of usable storage connected via a 40Gb network uplink. For Avid offline jobs, the facility has an Avid Nexis Pro with 40TB of storage, and for Avid Pro Tools collaboration, a Facilis TerraBlock with 40TB of usable storage.

“We can collaborate with any and all client stations working on the same or different media and sharing projects across multiple software platforms,” says Jijo Reed, owner/executive producer of Sugar. “No station is limited to what it can do, since every station has access to all media. Centralized storage is so important because not only does it allow collaboration, we always have access to all media and don’t have to fumble through drives. It is also RAID-protected, so we don’t have to be concerned with losing data.”

Prior to employing the centralized storage, Sugar had been using G-Technology’s G-RAID drives, changing over in late 2016. “Once our technical service advisor, Zach Moller, came on board, he began immediately to institute a storage network solution that was tailored to our workflow,” says Reed.

Reed, an award-winning director/producer, founded the company in 2012, using a laptop (running Final Cut Pro 7) and an external hard drive he had purchased on sale at Fry’s. His target base at the time was producers and writers needing sizzle trailers to pitch their projects — at a time when the term “sizzle trailer” was not part of the common vernacular. “I attended festivals to pitch my wares, producing over 15 sizzles the first year,” he says, “and it grew from there.”

Since Reed was creating sizzles for yet-to-be-made features, he was in “pole position” to handle the post for some of these independent films when they got funded. In 2015, he, along with his senior editor, Paul Buhl, turned their focus to feature post work, which was “more lucrative and less exhausting, but mostly, we wanted to tell stories – the whole story.” He rebranded and changed the name of the company from Sizzlepitch to Sugar Studios, and brought on a feature post producer, Chris Harrington. Reed invested heavily in the company, purchasing equipment and acquiring space. Soon, one bay became two, then three and so on. Currently, the company spans three full floors, including the penthouse of the Wiltern Theater Tower.

As Reed proudly points out, the studio space features 21 bays and workstations, two screening theaters, including a 25-seat color and mix DI stage with a Barco DP4K projector and Dolby Atmos configuration. “We are fully staffed, all under one roof, with editorial, full audio services, color correction/grading, VFX and a greenscreen cyclorama stage with on-site 4K cameras, grip and lighting,” he details. “But, it’s the people who make this work. Our passion is obvious to our clients.”

While Sugar was growing and expanding, so, too, was its mass storage solution. According to Zach Moller, it started with the NAS due to its low price and fast (10Gb) connection to every client machine. “The Symply SAN solution was needed because we required a high-bandwidth system for online and color playback that used Fibre Channel technology for the low latency and local drive configuration,” he says.

Moreover, the facility wanted flexibility with its SAN solution; it was very expensive to have every machine connected via Fibre Channel, “and frankly, we didn’t need that bandwidth,” Reed says. “Symply allowed us to have client machines choose whether they connected via Fibre Channel or 10Gb. If this wasn’t the case, we would have been in a pickle, having to purchase expansion chassis for every machine to open up additional PCI slots.” (The bulk of the machines at Sugar connect using the pre-existing 10Gb Ethernet over Fibre network, thus negating the need to use another PCI slot on a Fibre Channel card.)

American Dreamer

At Sugar, the camera masters and production audio are loaded directly onto the NAS for mass storage. Then the group archives the camera masters to LTO for deep archive, as an additional backup. During LTO archival, the studio creates the dailies for the offline edit in either Avid Media Composer (where the MXFs are migrated to the Avid Nexis server) or Adobe Premiere (where the ProRes dailies continue to live on the NAS).

When adding visual effects, the artists render to the Symply SAN when preparing for the online, color and finishing.

The studio works with a wide range of codecs, some of which are extremely taxing on the systems. The SAN is ideal, especially for raster image files (EXRs), since each frame carries so much data — and there can be 100,000 frames per folder. “This can only be accomplished with a premium storage solution: our SAN,” Reed says.

When the studio moved to EXR files for the VFX on the American Dreamer feature film, for example, its original NAS solution over 10Gb didn’t have enough bandwidth for playback on its systems (1.2GB/sec). Once the studio upgraded the SAN with dual 16Gb Fibre Channel, the team was able to play back uncompressed 4K EXR footage without the headache or frustration of stuttering.
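
The arithmetic behind that bottleneck is easy to check. Assuming 4096x2160 frames with three half-float channels (a simplification; channel counts and compression vary per show), a quick back-of-the-envelope calculation in Python:

```python
# Back-of-the-envelope playback bandwidth for uncompressed 4K EXRs.
# Assumes 3 channels of 16-bit half floats; real frames vary with
# resolution, channel count and compression.
width, height = 4096, 2160
channels, bytes_per_channel = 3, 2   # RGB, half float
fps = 24

frame_bytes = width * height * channels * bytes_per_channel
per_second = frame_bytes * fps

print(f"{frame_bytes / 1e6:.1f} MB per frame")   # ~53.1 MB
print(f"{per_second / 1e9:.2f} GB per second")   # ~1.27 GB/sec
```

That lands right at the 1.2GB/sec figure Reed cites, beyond what a single 10Gb Ethernet link (about 1.25GB/sec theoretical, less in practice) can sustain, and comfortably inside dual 16Gb Fibre Channel.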

“We have created an environment that caters to the creative process with a technical infrastructure that is superfast and solid. Filmmakers love us, and I couldn’t be prouder of my team for making this happen,” says Reed.

Mike Seabrooke

Postal
Established in 2015, Postal is a boutique creative studio that produces motion graphics, visual effects, animation, live action and editorial, with the vision of transcending all mediums — whether it’s short animations for social media or big-budget visual effects for broadcast. “As a studio, we love to experiment with different techniques. We feel strongly that the idea should always come first,” says Mike Seabrooke, producer at New York’s Postal.

To ensure that these ideas make it to the final stage of a project, the company uses a mixture of hard drives, LTO tapes and servers that house the content while the artists are working on projects, as well as for archival purposes. Specifically, the studio employs the EditShare Storage v.7 shared storage platform and EditShare Ark Tape for managing the LTO tape libraries that serve as nearline and offline backup. Postal deployed this setup when it started up a few years ago and has been continuously updating and expanding it as the studio grows.

Let’s face it, hard drives always have the possibility of failing. But, failure is not something that Postal — or any other post house — can afford. That is why the studio keeps two instances per job on archive drives: a master and a backup. “Organized hard drives give us quick access to previous jobs if need be, which sometimes can be quite the lifesaver,” says Seabrooke.

 

Postal’s Nordstrom project.

LTO tapes, meanwhile, are used to back up the facility’s servers running EditShare v7 — which house Postal’s editorial jobs — on the off chance that something happens to that precious piece of hardware. “The recovery process isn’t the fastest, but the system is compact, self-contained and gives us peace of mind in case anything does go wrong,” Seabrooke explains.

In addition, the studio uses Retrospect backup and restore software for its working projects server. Seabrooke says, “We chose it because it offers a backup service that does not require much oversight.”

When Postal began shopping for a solution for its studio three years ago, reliability was at the top of its list. The facility needed a system it could rely on to back up its data, which would comprise the facility’s entire scope of work. Ease of use was also a concern, as was access. This decision prompted questions such as: Would we have to monitor it constantly? In what timeframe would we be able to access the data? Moreover, cost was yet another factor: Would the solution be effective without breaking our budget?

Postal’s solution indeed enabled it to check off every one of those boxes. “Our projects demand a system that we can count on, with the added benefit of quick retrieval,” Seabrooke says.

Throughout the studio’s production process, the artists are accessing project data on the servers. Then, once they complete the project, the data is transferred to the archival drives for backup. This frees up space on the company servers for new jobs, while providing access to the stored data if needed.

“Storage is so important in our work because it is our work. Starting over on a project is an outcome we cannot allow, so responsible storage is a necessity,” concludes Seabrooke.


Karen Moltenbrey is a long-time VFX and post production writer.

Post house Cinematic Media opens in Mexico City, targets film, TV

Mexico City is now home to Cinematic Media, a full-service post production finishing facility focused on television and cinema content. Located on the lot at Estudios GGM, the facility offers dailies, look development, editorial finishing, color grading and other services, and aims to capitalize on entertainment media production in Mexico and throughout Central and South America.

Scot Evans

In its first project, Cinematic Media provided finishing services for the second season of the Netflix series Ingobernable.

CEO Scot Evans brings more than 25 years of post experience and has managed large-scale post production operations in the United States, Mexico and Canada. His recent posts include executive VP at Technicolor PostWorks in New York, managing director of Technicolor in Vancouver and managing director of Moving Picture Company (MPC) in Mexico City.

“We’re excited about the future for entertainment production in Mexico,” says Evans. “Netflix opened the door and now Amazon is in Mexico. We expect film production to also grow. Through its geographic location, strong infrastructure and cinematic history, Mexico is well-positioned to become a strong producer of content for the world market.”

Cinematic Media has been built from the ground up with a workflow modeled after top-tier facilities in Hollywood and geared toward television and cinema finishing. Engineering design was supervised by John Stevens, whose four decades of post experience includes stints at Cinesite, Efilm, The Post Group, Encore Hollywood, MTI Film and, currently, the Foundation.

Resources include a DI theater with DaVinci Resolve, 4K projection and 7.1 surround sound, four color suites supporting 2K, 4K and HDR, multiple editorial finishing suites, and a Colorfront On-Set Dailies system. The facility also offers look development services to assist productions in creating end-to-end color pipelines, as well as quality control and deliverable services for streaming, broadcast and cinema. Plans to add visual effects services are in the works.

“We can handle six or seven series simultaneously,” says Evans. “There is a lot of redundancy built into our pipeline, making it incredibly efficient and virtually eliminating downtime. A lot of facilities in Hollywood would be envious of what we have here.”

Cinematic Media features high-speed connectivity via the private network Sohonet. It will be employed to share media with studios, producers and distributors around the globe securely and efficiently. It will also be used to facilitate remote collaboration with directors, cinematographers, editors, colorists and other production partners.

Evans cites Cinematic Media’s location within Estudios GGM, which has six sound stages, production and editorial office space, grip and lighting resources and more, as a further plus. Producers can take projects from concept to the screen from within the confines of the site. “We can literally walk down a flight of stairs to support a project shooting on one of the stages,” he says. “Proximity is important. We expect many productions to locate their offices and editorial teams here.”

Managing director Arturo Sedano will oversee day-to-day operations. He has supervised post for thousands of hours of television and cinema content on behalf of studios and producers from around the globe, including Netflix, Telemundo, Sony Pictures, Viacom, Lionsgate, HBO, TV Azteca, Grupo Imagen and Fox.

Other key staff include senior colorist Ana Montaño, whose experience as a digital colorist spans facilities in Mexico City, Barcelona, London, Dublin and Rome, and producer and post supervisor Cyntia Navarro, previously with Lejana Films and Instituto Mexicano de Cinematografía (IMCINE). Navarro’s credits span episodic television, feature films and documentaries, and include projects for IFC Films, Canal Once, UPI, Discovery Channel, Netflix and Amazon.

Additional staff includes chief technology officer Oliver De Gante, previously with Ollin VFX, where his credits included the hit films Chappie, Her, Tron: Legacy and The Social Network, as well as the Netflix series House of Cards; technical director Gabriel Kerlegand, a workflow specialist and digital imaging technologist with 18 years of experience in cinema and television; and coordinator and senior conform editor Humberto Flores, formerly senior editor at Zenith Adventure Media.

Industry vets launch hybrid studio, Olio Creative

Colorist Marshall Plante, producer Natalie Westerfield and director/creative director Justin Purser founded hybrid studio Olio Creative, which has opened its doors in Venice, California.

Olio features vintage-style décor and an open floor plan, and the space is adaptable for freelancers, mobile artists and traveling talent, with two color suites and a suite set up to toggle between editorial and Flame work.

Marshall Plante is a well-known colorist who has built his career at shops such as Digital Magic, Riot, Syndicate and, most recently, at Ntropic where he headed up the color department. His commercial credits include Samsung, Audi, Olay, Nike, Honda, Budweiser, and direct-to-brand projects for Apple and Riot Games. Recently, the Nick Jr. Girls in Charge: Girl Power campaign he graded won an Emmy for Outstanding Daytime Promo Announcement Brand Image Campaign, and the Uber campaign he graded, Rolling With the Champion with Lebron James, won a bronze Cannes Lion.

Marshall’s long-time producer, Natalie Westerfield, has over 10 years of experience producing at companies including The Mill and Ntropic. As executive producer, Westerfield will provide oversight to guide all projects that come through Olio’s pipeline.

The third member of the team is director/creative director Justin Purser. As a director, Purser has worked at production companies A Band Apart and Anonymous Content. He was one of the original creators and directors behind Maker Studios (acquired by The Walt Disney Company), which pioneered the multi-channel, YouTube-centric companies of today.

The three partners aim to bring an element of experimentation and collaboration to the post production field. “The ability to be chameleons within the industry keeps us open to fresh ideas,” says Purser. “Our motto is, ‘Try it. If it doesn’t work, pivot.’ And if we thrive in a new way of working, we’re going to share that with everyone. We want to not only make noise for ourselves, but for others in the same business.”

Quick Chat: Westwind Media president Doug Kent

By Dayna McCallum

Doug Kent has joined Westwind Media as president. The move is a homecoming of sorts for the audio post vet, who worked as a sound editor and supervisor at the facility when it opened its doors in 1997 (with Miles O’ Fun). He comes to Westwind after a long-tenured position at Technicolor.

While primarily known as an audio post facility, Burbank-based Westwind has grown into a three-acre campus comprising 10 buildings, which also house outposts for NBCUniversal and Technicolor, as well as media-focused companies Keywords Headquarters and Film Solutions.

We reached out to Kent to find out a little bit more about what is happening over at Westwind, why he made the move and changes he has seen in the industry.

Why was now the right time to make this change, especially after being at one place for so long?
Well, 17 years is a really long time to stay at one place in this day and age! I worked with an amazing team, but Westwind presented a very unique opportunity for me. John Bidasio (managing partner) and Sunder Ramani (president of Westwind Properties) approached me with the role of heading up Westwind and teaming with them in shaping the growth of their media campus. It was literally an offer I couldn’t refuse. Because of the campus size and versatility of the buildings, I have always considered Westwind to have amazing potential to be one of the premier post production boutique destinations in the LA area. I’m very excited to be part of that growth.

You’ve worked at studios and facilities of all sizes in your career. What do you see as the benefit of a boutique facility like Westwind?
After 30 years in the post audio business — which seems crazy to say out loud — moving to a boutique facility allows me more flexibility. It also lets me be personally involved with the delivery of all work to our customers. Because of our relationships with other facilities, we are able to offer services to our customers all over the Los Angeles area. It’s all about drive time on Waze!

What does your new position at Westwind involve?
The size of our business allows me to actively participate in every service we offer, from business development to capital expenditures, while also working on our management team’s growth strategy for the campus. Our value proposition as a nimble post audio provider centers on our high-quality brick-and-mortar facility, while we continue to expand our editorial and mix talent, working with many of the best mix facilities and sound designers in the LA area. Luckily, I now get to have a hand in all of it.

Westwind recently renovated two stages. Did Dolby Atmos certification drive that decision?
Netflix, Apple and Amazon all use Atmos materials for their original programming, so it was time to move forward. These immersive technologies have changed the way filmmakers shape the overall experience for the consumer. These new object-based technologies enhance our ability to embellish and manipulate the soundscape of each production, creating a visceral experience for the audience that is more exciting and dynamic.

How to Get Away With Murder

Can you talk specifically about the gear you are using on the stages?
Currently, Westwind runs entirely on a Dante network design. We have four dub stages, including both of the Atmos stages, outfitted with Dante interfaces. The signal path from our Avid Pro Tools source machines — all the way to the speakers — is entirely in Dante and the BSS BLU-link network. The monitor switching and stage are controlled through custom-made panels designed in Harman’s Audio Architect. The Dante network allows us to route signals with complete flexibility across our network.

What about some of the projects you are currently working on?
We provide post sound services to the team at ShondaLand for all their productions, including Grey’s Anatomy, which is now in its 15th year, Station 19, How to Get Away With Murder and For the People. We are also involved in the streaming content market, working on titles for Amazon, YouTube Red and Netflix.

Looking forward, what changes in technology and the industry do you see having the most impact on audio post?
The role of post production sound has greatly increased as technology has advanced.  We have become an active part of the filmmaking process and have developed closer partnerships with the executive producers, showrunners and creative executives. Delivering great soundscapes to these filmmakers has become more critical as technology advances and audiences become more sophisticated.

The Atmos system creates an immersive audio experience for the listener and has become a foundation for future technology. The Atmos master contains all of the uncompressed audio and panning metadata, and can be updated by re-encoding whenever a new process is released. With streaming speeds becoming faster and storage becoming more easily available, home viewers will most likely soon be experiencing Atmos technology in their living room.

What haven’t I asked that is important?
Relationships are the most important part of any business and my favorite part of being in post production sound. I truly value my connections and deep friendships with film executives and studio owners all over the Los Angeles area, not to mention the incredible artists I’ve had the great pleasure of working with and claiming as friends. The technology is amazing, but the people are what make being in this business fulfilling and engaging.

We are in a remarkable time in film, but really an amazing time in what we still call “television.” There is growth and expansion and foundational change in every aspect of this industry. Being at Westwind gives me the flexibility and opportunity to be part of that change and to keep growing.

AI for M&E: Should you take the leap?

By Nick Gold

In Hollywood, the promise of artificial intelligence is all the rage. Who wouldn’t want a technology that promises smarter computers and an instant solution to tedious, time-intensive problems? With artificial intelligence, anyone with abundant rich media assets can easily churn out more revenue or cut costs, while simplifying operations … or so we’re told.

If you attended IBC, you probably already heard the pitch: “It’s an ‘easy’ button that’s simple to add to the workflow and foolproof to operate, turning your massive amounts of uncategorized footage into metadata.”

But should you take the leap? Before you sign on the dotted line, take a closer look at the technology behind AI and what it can — and can’t — do for you.

First, it’s important to understand the bigger picture of artificial intelligence in today’s marketplace. Taking unstructured data and generating relevant metadata from it is something that other industries have been doing for some time. In fact, many of the tools we embrace today started off in other industries. But unlike banking, finance or healthcare, our industry prioritizes creativity, which is why we have always shied away from tools that automate. The idea that we can rely on the same technology as a hedge fund manager just doesn’t sit well with many people in our industry, and for good reason.

Nick Gold talks AI for a UCLA Annex panel.

In the media and entertainment industry, we’re looking for various types of metadata that could include a transcript of spoken words, important events within a period of time or information about the production (e.g., people, location, props), and currently there’s no single machine-learning algorithm that will solve for all these types of metadata parameters. For that reason, the best starting point is to define your problems and identify which machine learning tools may be able to solve them. Expecting to parse reams of untagged, uncategorized and unstructured media data is unrealistic until you know what you’re looking for.

What works for M&E?
AI has become pretty good at solving some specific problems for our industry. Speech-to-text is one of them. AI can automatically produce a generally accurate transcription, which saves real time over manual transcription. However, it’s important to note that AI tools still have limitations. A tool known as “sentiment analysis” can theoretically look for the emotional undertones in spoken words, but it first requires another tool to generate a transcript for analysis.
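
As a concrete sketch of that chaining, here is a deliberately simplified Python example. The function names, the canned transcript and the naive keyword scoring are hypothetical stand-ins for whichever licensed engines you choose; they are not real vendor APIs:

```python
def transcribe(audio_path: str) -> str:
    # Placeholder: in production this would call your licensed
    # speech-to-text engine and return its transcript.
    return "great take everyone, that read was perfect"

def score_sentiment(text: str) -> float:
    # Placeholder: a naive keyword count standing in for a real
    # sentiment engine; returns -1.0 (negative) to 1.0 (positive).
    positives = sum(word in text for word in ("great", "perfect", "love"))
    negatives = sum(word in text for word in ("terrible", "awful", "hate"))
    total = positives + negatives
    return 0.0 if total == 0 else (positives - negatives) / total

def tag_clip(audio_path: str) -> dict:
    transcript = transcribe(audio_path)       # step 1: get the words
    sentiment = score_sentiment(transcript)   # step 2: score the tone of those words
    return {"clip": audio_path, "transcript": transcript, "sentiment": sentiment}

print(tag_clip("interview_take_03.wav"))
```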

But no matter how good the algorithms are, they won’t give you the qualitative data that a human observer would provide, such as the emotions expressed through body language. They won’t tell you the facial expressions of the people being spoken to, the tone of voice, pacing and volume level of the speaker, or what is conveyed by a sarcastic tone or a wry expression. There are sentiment analysis engines that try to do this, but breaking the problem into components ensures the parameters you need will be addressed and solved.

Another task at which machine learning has progressed significantly is logo recognition. Certain engines are good at finding, for example, all the images with a Coke logo in 10,000 hours of video. That’s impressive and quite useful, but it’s another story if you also want to find footage of two people drinking from what are clearly Coke-shaped bottles with the logo obscured. Machine-learning engines tend to have a narrow focus, which goes back to the need to define very specifically what you hope to get from them.

There is a bevy of algorithms and engines out there. If you license a service that finds a specific logo, you haven’t also solved the problem of finding objects that represent the product. And even with the right engine, you’ve got to think about how this information fits into your pipeline; there are a lot of workflow questions to be explored.

Let’s say you’ve generated speech-to-text with audio media, but have you figured out how someone can search the results? There are several options. Sometimes vendors have their own front end for searching. Others may offer an export option from one engine into a MAM that you either already have on-premise or plan to purchase. There are also vendors that don’t provide machine learning themselves but act as a third-party service organizing the engines.

It’s important to remember that none of these AI solutions are accurate all the time. You might get a nudity detection filter, for example, but these vendors rely on probabilistic results. If having one nude image slip through is a huge problem for your company, then machine learning alone isn’t the right solution for you. You need to understand whether occasional inaccuracies will be acceptable or deal breakers for your company. Testing samples of your core content in the different scenarios you need to solve for becomes another crucial step, and many vendors are happy to test footage in their systems.
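
One way to reason about those probabilistic results is a simple confidence threshold over an engine’s detections. The detections below are made-up sample data, and the schema is illustrative only:

```python
# Made-up sample detections; real engines return label/confidence
# pairs in their own schema.
detections = [
    {"frame": 120, "label": "nudity", "confidence": 0.97},
    {"frame": 450, "label": "nudity", "confidence": 0.62},
    {"frame": 980, "label": "nudity", "confidence": 0.31},
]

THRESHOLD = 0.5  # raise it and reviewers see fewer false alarms,
                 # but more genuine hits slip through unflagged

for d in detections:
    if d["confidence"] >= THRESHOLD:
        print(f"frame {d['frame']}: flag for human review ({d['confidence']:.0%})")
```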

Although machine learning is still in its nascent stages, there is a lot of interest in learning how to make it work in the media workflow. It can do some magical things, but it’s not a magic “easy” button (yet, anyway). Exploring the options and understanding in detail what you need goes hand-in-hand with finding the right solution to integrate with your workflow.


Nick Gold is lead technologist for Baltimore’s Chesapeake Systems, which specializes in M&E workflows and solutions for the creation, distribution and preservation of content. Active in both SMPTE and the Association of Moving Image Archivists (AMIA), Gold speaks on a range of topics. He also co-hosts the Workflow Show Podcast.
 

Behind the Title: Pace Pictures owner Heath Ryan

NAME: Heath Ryan

COMPANY: Pace Pictures (@PacePictures)

CAN YOU DESCRIBE YOUR COMPANY?
We are a dailies-to-delivery post house, including audio mixing.

Pace’s Dolby Atmos stage.

WHAT’S YOUR JOB TITLE?
Owner and editor.

WHAT DOES THAT ENTAIL?
As owner, I need to make sure everyone is happy.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Psychology. I deal with a lot of producers, directors and artists that all have their own wants and needs. Sometimes what that entails is not strictly post production but managing personalities.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Editing. My company grew out of my love for editing. It’s the final draft of any film. In the over 30 years I have been editing, the power of what an editor can do has only grown.

WHAT’S YOUR LEAST FAVORITE?
Chasing unpaid invoices. It’s part of the job, but it’s not fun.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
Late, late in the evening when there are no other people around and you can get some real work done.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Not by design but through sheer single-mindedness, I have no other skill set but film production. And my sense of direction is so bad that, even armed with the GPS supercomputer in my phone, driving for Uber is not an option.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I started making films in my single-digit years. I won a few awards for my first short film in my teens and never looked back. I’m lucky to have found this passion early.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
This year I edited Grand-Daddy Day Care (2019), the reboot of Daddy Day Care, for Universal. I got to work with director Ron Oliver and actor Danny Trejo, and it meant a lot to me. It deals with what we do with our elders as time creeps up on us all. Sadly, we lost Ron’s mom while we were editing the film, so it took on extra special meaning for us both.

Lawless Range

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Lawless Range and The Producer. I produced and edited both projects with my dear friend and collaborator Sean McGinly: a modern-day Western and a behind-the-scenes look at a Hollywood pilot. They were very satisfying projects because there was no one to blame but ourselves.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My Meridian Sound system, the Internet and TV.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Yes, I love it. I have always set the tone in the edit bay with music. Especially during dailies – I like to put music on, sometimes film scores, to set the mood of what we are making.

Behind the Title: Post supervisor Chloe Blackwell

NAME: Chloe Blackwell

COMPANY: UK-based Click Post Production

CAN YOU DESCRIBE YOUR COMPANY?
I provide bespoke post solutions, which include consultancy and development courses for production companies. I’m also currently working on an online TV series full time. More on that later!

WHAT’S YOUR JOB TITLE?
Post Production Supervisor

WHAT DOES THAT ENTAIL?
Each job that I take on is quite different, so my role will evolve to suit each company’s needs.

Usually my job starts at the early stages of production, so I will meet with the editorial team to work out what they are looking to achieve visually. From this I can ascertain how their post will work most effectively, and work back from their delivery dates to put an edit and finishing schedule together.

For every shoot I will oversee the rushes being ingested and investigate any technical issues that crop up. Once the post production phase starts, I will be in charge of managing the offline. This includes ensuring editors are aware of deadlines and working with executives and/or directors and producers to ensure the smooth running of their show.

This also requires me to liaise with the post house, keeping them informed of production’s requirements and schedules, and troubleshooting any obstacles that inevitably crop up along the way.

I also deal directly with the broadcaster, ensuring delivery requirements are clear, ironing out any technical queries from both sides and ensuring the final masters are delivered in a timely manner. This also means that I have to be meticulous about quality control of the final product, as any errors can cause huge delays. As the post supervisor, managing the post production budget efficiently is vital, so I keep a constant eye on spending and keep the production team up to date with cost reports.

Alternatively, I also offer my services as a consultant if all a production needs is some initial support. I’m also in the process of setting up courses for production teams that will help them gain a better understanding of the new 4K HDR world, and how they can work to realistic timings and budgets.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Probably the number of decisions I have to make on a daily basis. There are so many different ways of doing things, from converting frame rates and working with archive material to creating the workflows for editorial to work with.

WHAT’S YOUR FAVORITE PART OF THE JOB?
I think I have the best job in the world! I am one of the very few people on any production that sees the show from early development, right through to delivery. It’s a very privileged position.

WHAT’S YOUR LEAST FAVORITE?
My role can be quite intensive, so there is usually a real lack of downtime.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
As I have quite a long commute, I find that first thing in the morning is my most productive time. From about 6am I have a few hours of uninterrupted work I can do to set my day up to run smoothly.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I would have joined the military!

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
As cheesy as it sounds, post production actually found me! I was working for a production company very early in my career and was going to be made redundant. Luckily, I was a valued member of the company and was redrafted into its post production team. At first I thought it was a disaster; however, with lots of help, I hit my stride and fell in love with the job.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
For the last three years I have been working on The Grand Tour for Amazon Prime.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
That’s a hard question as I have worked on so many.

But The Grand Tour has been the most technically challenging. It was the first-ever 4K HDR factual entertainment show, and on top of that it was all shot at 23.98, with elements shot as-live. It was one of those jobs where you couldn’t really ask people for advice because it just hadn’t been done.

However, I am also really proud of some of the documentaries I have made, including Born to be Different, Power and the Women’s World and VE Day.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My coffee machine, my toaster and the Avid Media Composer.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
All of them…I have to! Part of being in post is being aware of all the new technologies, shows and channels/online platforms out there. You have to keep ahead of the times.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Yes, I love music! I have an eclectic, wide-ranging taste, which means I have a million playlists on Spotify! I love finding new music and playing it for Jess (Jessica Redman, my post production coordinator). We are often shimmying around the office. It keeps the job light, especially during the most demanding days.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I am fortunate enough to be able to take my dog Mouse with me to work. She keeps me sane and keeps me calm, whilst also providing those I work with a little joy too!

I am also an obsessive reader, so any down time I get I am often found curled up under a blanket with a good book.

My passion for television really knows no bounds, so I watch TV a lot too! I try to watch at least the first episode of all new TV programs. I rarely get to go to the cinema, but when I do it’s such a treat to watch films on the big screen.

Encore adds colorist Andrea Chlebak, ups Genevieve Fontaine to director of production

Encore has added colorist Andrea Chlebak to its roster and promoted veteran post producer Genevieve Fontaine to director of production. Chlebak brings a multidisciplinary background in feature films, docu-series and commercials across a range of aesthetics. Fontaine has been a post producer since joining the Encore team in early 2010.

Chlebak’s credits include award-winning indies Mandy and Prospect, Neill Blomkamp features Elysium and Chappie, and the animated adaptation Kahlil Gibran’s “The Prophet.” Having worked primarily in the digital landscape, she brings experience as an artist, still photographer, film technician, editor and compositor, which is evident both in her work and in how she streamlines communication with directors and cinematographers in delivering their vision.

In her new role, Fontaine’s responsibilities shift toward ensuring organized, efficient and future-proof workflows. Fontaine began her career as a telecine and dailies producer at Riot before moving to Encore, where she managed post for up to 11 shows at a time, including Marvel’s The Defenders series for Netflix. She understands all the building blocks necessary to keep a facility running smoothly and has been instrumental in establishing Encore, a Deluxe company, as a leader in advanced formats, helping coordinate 4K, HDR and IMF-based workflows.

Main Image: (L-R) Genevieve Fontaine and Andrea Chlebak.

A Conversation: 3P Studio founder Haley Stibbard

Australia’s 3P Studio is a post house founded and led by artisan Haley Stibbard. The company’s portfolio of work includes commercials for brands such as Subway, Allianz and Isuzu Motor Company as well as iconic shows like Sesame Street. Stibbard’s path to opening her own post house was based on necessity.

After going on maternity leave to have her first child in 2013, she returned to her job at a content studio to find that her role had been made redundant. She was subsequently let go. Needing and wanting to work, she began freelancing as an editor — working seven days a week and never turning down a job. Eventually she realized that she couldn’t keep up with that type of schedule and took her fate into her own hands. She launched 3P Studio, one of Brisbane’s few women-led post facilities.

We reached out to Stibbard to ask about her love of post and her path to 3P Studio.

What made you want to get into post production? School?
I had a strong love of film, which I got from my late dad, Ray. He was a big film buff and would always come home from work when I was a kid with a shopping bag full of $2 movies from the video store and he would watch them. He particularly liked the crime stories and thrillers! So I definitely got my love of film and television from him.

We did not have any film courses at high school in the ‘90s, so the closest I could get was photography. Without a show reel it was hard to get a place at university in the college of art; a portfolio was a requirement and I didn’t have one. I remember I had to talk my way into the film program, and in the end I think they just got sick of me and let me into the course through the back door without a show reel — I can be very persistent when I want to be. I always had enjoyed editing and I was good at it, so in group tasks I was always chosen as the editor and then my love of post came from there.

What was your first job?
My very first job was quite funny, actually. I was working in both a shoe store and a supermarket at the time, and two post positions became available one day: an in-house editor for a big furniture chain and a production assistant for a large VFX company at Movie World on the Gold Coast. Anyone who knows me knows that I would be the worst PA in the world. So, luckily for that company’s director, I didn’t get the PA job and became the in-house editor for the furniture chain.

I’m glad that I took that job, as it taught me so much: how to work under pressure, how to use an Avid, how to work with deadlines, what a key number was, how to dispatch TVCs to the stations, how to be quick and accurate, and how to take constructive feedback.

I made every mistake known to man, including one weekend when I forgot to remove the 4×3 safe bars from a TVC and my boss saw it on TV. I ended up having to drive to the office, climb the locked fence to get in and pull the spot off air. So I’ve learned a lot of things the hard way, but my boss was a very patient and forgiving man, and 18 years later he is now a client of mine!

What job did you hold when you went out on maternity leave?
Before I left on maternity leave to have my son Dashiell, I was an editor for a small content company. I have always been a jack-of-all-trades and I took care of everything from offline to online, grading in Resolve, motion graphics in After Effects and general design. I loved my job and I loved the variety that it brought. Doing something different every day was very enjoyable.

After leaving that job, you started freelancing as an editor. What systems did you edit on at the time and what types of projects? How difficult a time was that for you? New baby, working all the time, etc.
I started freelancing when my son was just past seven months old. I had a mortgage and had just come off six months of unpaid maternity leave, so I needed to make a living and I needed to make it quickly. I also had the added pressure of looking after a young child under the age of one who still needed his mother.

So I started contacting advertising agencies and production companies that I thought might be interested in my skill set. I just took every job that I could get my hands on, as I was always worried that every job I took could potentially be my last for a while. I was lucky that I had an incredibly well-behaved baby! I never said “no” to a job.

As my client base started to grow, my clients would always book me since they knew that I would never say “no” (they know I still don’t say no!). It got to the point where I was working seven days a week. I worked all day when my son was in childcare and all night after he would go to bed. I would take the baby monitor downstairs where I worked out of my husband’s ‘man den.’

As my freelance business grew, I was so lucky that I had the most supportive husband in the world, who was doing everything for me: the washing, the cleaning, the cooking and bath time, all while holding down his own full-time job as an engineer. I wouldn’t have been able to do what I did for that period of time without his support and encouragement. This time really proved to be a huge stepping stone for 3P Studio.

Do you remember the moment you decided you would start your own business?
There wasn’t really a specific moment when I decided to start my own business. It was something that seemed to just naturally come together. The busier I became, the more opportunities came about, like having enough work through the door to build a space and hire staff. I have always been very strategic about the people I have brought on at 3P, and the timing with which they have come on board.

Can you walk us through that bear of a process?
At the start of 2016, I made the decision to get out of the house. My work life was starting to blend into my home life and I needed to have that separation. I worked out of a small office for 12 months, and about six months in I was able to purchase the office space that would become our studio today.

I went to work planning the fit-out for the next six months. The studio was an investment in the business, and I needed a place that my clients could also bring their clients to for approvals, screenings and collaboration on jobs, as well as just generally enjoying the space.

The office space was an empty white shell, but the beauty of coming into a blank canvas was that I was able to create a studio specifically built for post production. I was lucky in that I had worked in some of the best post houses in the country as an editor, and because this was a custom build I was able to take the best bits from all the places I had previously worked and put them into my studio without the restriction of existing walls.

I built up the walls, ripped down the ceilings and was able to design the edit suites and infrastructure, right down to designing and laying the cable runs myself, runs I knew would work for us down the line. Then we saved money and added more equipment to the studio bit by bit. It wasn’t 0 to 100 overnight; I had to work at the business development side of the company a lot, and I spent a lot of long days sitting by myself in those edit suites doing everything. Soon, word of mouth started to circulate and the business started to grow on the back of some nice jobs from my existing loyal clients.

What type of work do you do, and what gear do you call on?
3P Studio is a boutique studio that specializes in full-service post production; we also shoot content when required.

Our clients range anywhere from small content videos for the web all the way up to large commercial campaigns and everything in between.

There are currently six of us working full time in the studio, and we handle everything in-house from offline editing to VFX to videography and sound design. We work primarily in the Adobe Creative Suite for offline editing in Premiere, mixed with Maxon Cinema 4D/Autodesk Maya for 3D work, Autodesk Flame and Side Effects Houdini for online compositing and VFX, Blackmagic Resolve for color grading and Pro Tools HD for sound mixing. We use EditShare EFS shared storage nodes for collaborative working and sharing of content between the mix of creative platforms we use.

This year we have invested in a Red Digital Cinema camera as well as an EditShare XStream 200 EFS scale-out single-node server so we can become that one-stop shop for our clients. We have been able to create an amazing creative space for our clients to come and work with us, be it from the bespoke design of our editorial suites or the high level of client service we offer.

How did you build 3P Studio to be different from other studios you’ve worked at?
From a personal perspective, the culture that we have been able to build in the studio is unlike anywhere else I have worked, in that we genuinely work as a team and support each other. On the business side, we cater to clients of all sizes and budgets while offering uncompromising service and experience, whether they be large or small. Making sure they walk away feeling that they have had great value and exemplary service for their budget means that they will end up being customers of ours for life. This is the mantra that I have been able to grow the business on.

What is your hiring process like, and how do you protect employees who need to go out on maternity or family leave?
When I interview people to join 3P, attitude and willingness to learn are everything to me — hands down. You can be the most amazing operator on the planet, but if your attitude stinks then I’m really not interested. I’ve been incredibly lucky with the team that I have, and I have met them along the journey at exactly the right times. We have an amazing team culture, and as the company grows our success is shared.

I always make it clear that it’s swings and roundabouts and that family is always number one. I am there to support my team if they need me to be, not just inside of work but outside as well, and I receive the same support in return. We have flexible working hours, and I have team members with young families who, at times, are able to work both in the studio and from home so that they can be there for their kids when they need to be. This flexibility works fine for us. Happy team members make for a happy, productive workplace, and I like to think that 3P is forward-thinking in that respect.

Any tips for young women either breaking into the industry or in it that want to start a family but are scared it could cost them their job?
Well, for starters, we have laws in Australia that make it illegal for any woman in this country to be discriminated against for starting a family. 3P also supports the 18 weeks of paid maternity leave available to women heading out to start a family. I would love to see more female workers in post production, especially in operator roles. We aren’t just going to be the coffee and tea girls; we are directors, VFX artists, sound designers, editors and cinematographers — the future is female!

Any tips for anyone starting a new business?
Work hard, be nice to people and stay humble because you’re only as good as your last job.

Main Image: Haley Stibbard (second from left) with her team.

IBC 2018: Convergence and deep learning

By David Cox

In the 20 years I’ve been traveling to IBC, I’ve tried to seek out new technology, work practices and trends that could benefit my clients and help them be more competitive. One thing that is perennially exciting about this industry is the rapid pace of change. Certainly, from a post production point of view, there is a mini revolution every three years or so. In the past, those revolutions have increased image quality or the efficiency of making those images. The current revolution is about leveraging the power and flexibility of cloud computing. But those revolutions haven’t fundamentally changed what we do. The images might have gotten sharper, brighter and easier to produce, but TV is still TV. This year, though, there are some fascinating undercurrents that could herald a fundamental shift in the sort of content we create and how we create it.

Games and Media Collide
There is a new convergence on the horizon in our industry. A few years ago, all the talk was about the merging of telecommunications companies and broadcasters, as well as the joining of creative hardware and software for broadcast and film as both moved to digital.

The new convergence is between media content creation as we know it and the games industry. It was subtle, but technology from gaming was present in many applications around the halls of IBC 2018.

One of the drivers for this is a giant leap forward in the quality of realtime rendering by the two main game engine providers: Unreal and Unity. I program with Unity for interactive applications, and its new HDRP (High Definition Render Pipeline) allows for incredible realism, even when rendering fast enough for 60+ frames per second. In order to create such high-quality images, those game engines must start with reasonably detailed models. This is a departure from the past, where less detailed models were used for games than for film CGI shots in order to protect realtime performance. So, the first clear advantage created by the new realtime renderers is that a film and its inevitable related game can use the same or similar model data.

NCam

Being able to use the same scene data between final CGI and a realtime game engine allows for some interesting applications. Habib Zargarpour from Digital Monarch Media showed a system based on Unity that allows a camera operator to control a virtual camera in realtime within a complex CGI scene. The resulting camera moves feel significantly more real than if they had been keyframed by an animator. The camera operator chases high-speed action, jumps at surprises and reacts to unfolding scenes. The subtleties that these human reactions deliver via minor deviations in the movement of the camera can convey the mood of a scene as much as the design of the scene itself.

NCam was showing the possibilities of augmenting scenes with digital assets, using its system based on the Unreal game engine. The NCam system provides realtime tracking data to specify the position and angle of a freely moving physical camera. This data was being fed to an Unreal game engine, which was then adding in animated digital objects. They were also using an additional ultra-wide-angle camera to capture realtime lighting information from the scene, which was then passed back to Unreal to be used as a dynamic reflection and lighting map. This ensured that digitally added objects were lit by the physical lights in the real-world scene.
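To make that data flow concrete, here is a minimal Python sketch of how a per-frame tracking sample might be structured and applied to a virtual camera. The field names, sensor width and camera methods are illustrative assumptions on my part, not NCam’s actual protocol or Unreal’s API.

import math
from dataclasses import dataclass

@dataclass
class TrackingSample:
    # One frame of camera-tracking data; field names are invented for illustration.
    timecode: str           # e.g. "01:02:03:04"
    position: tuple         # (x, y, z) in meters, studio space
    rotation: tuple         # (pan, tilt, roll) in degrees
    focal_length_mm: float  # lens zoom, used to match the virtual field of view

def focal_to_hfov(focal_mm, sensor_width_mm=24.89):
    # Horizontal field of view for a given focal length (Super 35 width assumed).
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

class DemoCamera:
    # Stand-in for a game-engine camera; a real integration would call the
    # engine's transform and FOV setters once per frame.
    def update(self, s: TrackingSample):
        print(f"{s.timecode}: pos {s.position}, rot {s.rotation}, "
              f"FOV {focal_to_hfov(s.focal_length_mm):.1f} deg")

DemoCamera().update(TrackingSample("01:02:03:04", (1.2, 1.5, -3.0), (15.0, -5.0, 0.0), 35.0))

The lighting-map feedback loop works the same way in reverse: each frame, the wide-angle capture is pushed back into the engine as an environment texture so reflections track the physical set.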

Even a seemingly unrelated (but very enlightening) chat with StreamGuys president Kiriki Delany about all things related to content streaming still referenced gaming technology. Delany talked about their tests to build applications with Unity to provide streaming services in VR headsets.

Unity itself has further aspirations to move into storytelling rather than just gaming. The latest version of Unity features an editing timeline and color grading. This allows scenes to be built and animated, then played out through various virtual cameras to create a linear story. Since those scenes are being rendered in realtime, tweaks to scenes such as positions of objects, lights and material properties are instantly updated.

Game engines not only offer us new ways to create our content, but they are a pathway to create a new type of hybrid entertainment, which sits between a game and a film.

Deep Learning
Other undercurrents at IBC 2018 were the possibilities offered by machine learning and deep learning software. Essentially, a normal computer program is hard-wired to give a particular output for a given input. Machine learning allows an algorithm to compare its output to a set of data and adjust itself if the output is not correct. Deep learning extends that principle by using neural network structures to make a vast number of assessments of input data, then draw conclusions and predictions from that data.
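As a toy illustration of that learning loop (my own sketch, not anything shown at IBC), here is a single-parameter Python model that repeatedly compares its output to training data and nudges itself when it is wrong; deep learning stacks millions of such adjustable parameters into layered networks.

# A single weight "learns" y = 2x by comparing its output to the data
# and adjusting itself whenever the output is not correct.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, correct output) pairs
w = 0.0                                      # the model's only parameter
lr = 0.05                                    # learning rate

for epoch in range(200):
    for x, target in data:
        error = w * x - target  # compare the output to the data set...
        w -= lr * error * x     # ...and adjust toward the correct answer

print(f"learned weight: {w:.3f}")  # converges toward 2.0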

Real-world applications are already prevalent; in our industry, they largely relate to processing viewing metrics. For example, Netflix suggests what we might want to watch next by comparing our viewing habits to those of others with a similar viewing pattern.

But deep learning offers — indeed threatens — much more. Of course, it is understandable to think that, say, delivery drivers might be redundant in a world where autonomous vehicles rule, but surely creative jobs are safe, right? Think again!

IBM was showing how its Watson Studio has used deep learning to provide automated editing of highlights packages for sporting events. The process is relatively simple to comprehend, although considerably more complicated in practice. A DL algorithm is trained to scan a video file and “listen” for a cheering crowd. This finds the highlight moment. Another algorithm rewinds from that point to find the logical beginning of the moment, such as the pass forward or the beginning of the volley. Taking the score into account helps decide whether that highlight was pivotal to the outcome of the game. Joining all that up creates a highlights package without the services of an editor. This isn’t future stuff. This has been happening over the last year.
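Based purely on that description, the pipeline might be sketched along the following lines in Python; every function name and threshold here is invented for illustration, and none of this is IBM’s actual Watson code.

def crowd_excitement(audio_energy):
    # Index of the loudest audio moment; a stand-in for a trained cheer detector.
    return max(range(len(audio_energy)), key=audio_energy.__getitem__)

def find_play_start(peak, shot_changes, max_rewind=30):
    # Rewind from the peak to the earliest shot change within the window,
    # approximating "the pass forward, the beginning of the volley."
    earlier = [s for s in shot_changes if peak - max_rewind <= s <= peak]
    return min(earlier) if earlier else max(peak - max_rewind, 0)

def highlight_clip(audio_energy, shot_changes, score_changed):
    peak = crowd_excitement(audio_energy)
    start = find_play_start(peak, shot_changes)
    weight = 2.0 if score_changed else 1.0  # pivotal moments rank higher
    return {"in": start, "out": peak + 5, "rank": audio_energy[peak] * weight}

print(highlight_clip([0.1, 0.2, 0.9, 0.4], shot_changes=[0, 1], score_changed=True))
# {'in': 0, 'out': 7, 'rank': 1.8}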

BBC R&D was talking about their trials to have DL systems control cameras at sporting events, as they could be trained to follow the “two thirds” framing rule and to spot moments of excitement that justified close-ups.

In post production, manual tasks such as rotoscoping and color matching in color grading could be automated. Even styles for graphics, color and compositing could be “learned” from other projects.

It’s certainly possible to see that deep learning systems could provide a great deal of assistance in the creation of day-to-day media. Tasks that are based on repetitiveness or formula would be the obvious targets. The truth is, much of our industry is repetitive and formulaic. Investors prefer content that is more likely to be a hit, and this leads to replication over innovation.

So, are we heading for “Skynet” and need Arnold to save us? I thought it was very telling that IBM occupied the central stand position in Hall 7 — traditionally the home of the tech companies that have driven creativity in post. Clearly, IBM and its peers are staking their claim. I have no doubt that DL and ML will make massive changes to this industry in the years ahead. Creativity is probably, but not necessarily, the only defence for mere humans to keep a hand in.

That said, at IBC 2018 the most popular place for us mere humans to visit was a bar area called The Beach, where we largely drank Heineken. If the ultimate deep learning system were tasked to emulate media people, surely it would create digital alcohol and spend hours talking nonsense rather than try to take over the media world? So perhaps we have a few years left yet.


David Cox is a VFX compositor and colorist with 20-plus years of experience. He started his career with MPC and The Mill before forming his own London-based post facility. Cox recently created interactive projects with full body motion sensors and 4D/AR experiences.

Panavision, Sim, Saban Capital agree to merge

Saban Capital Acquisition Corp., a publicly traded special purpose acquisition company, Panavision and Sim Video International have agreed to combine their businesses to create a premier global provider of end-to-end production and post production services to the entertainment industry. Under the terms of the business combination agreement, Panavision and Sim will become wholly owned subsidiaries of Saban Capital Acquisition Corp. Upon completion, Saban Capital Acquisition Corp. will change its name to Panavision Holdings Inc. and is expected to continue to trade on the Nasdaq stock exchange. Kim Snyder, president and chief executive officer of Panavision, will serve as chairman and chief executive officer of the combined company. Bill Roberts, chief financial officer of Panavision, will continue in that role.

Panavision designs, manufactures and provides high-precision optics and camera technology for the entertainment industry and is a leading global provider of production equipment and services. Sim is a leading provider of production and post production solutions with facilities in Los Angeles, Vancouver, Atlanta, New York and Toronto.

“This acquisition will leverage the best of Panavision’s and Sim’s resources by providing comprehensive products and services to best address the ever-adapting needs of content creators globally,” says Snyder.

“We’re combining the talent and integrated services of Sim with two of the biggest names in the business, Panavision and Saban,” adds James Haggarty, president and CEO of Sim. “The resulting scale of the new combined enterprise will better serve our clients and help shape the content-creation landscape.”

The respective boards of directors of Saban Capital Acquisition Corp., Panavision and Sim have unanimously approved the merger with completion subject to Saban Capital Acquisition Corp. stockholder approval, certain regulatory approvals and other customary closing conditions. The parties expect that the process will be completed in the first quarter of 2019.

HPA Tech Retreat 2019 opens call for proposals

The Hollywood Professional Association has issued the call for proposals for the 2019 HPA Tech Retreat, the annual gathering of professionals from around the world who work at the intersection of technology and content creation. The main conference is determined by the proposals submitted during this process.

The HPA Tech Retreat comprises Tech Retreat Extra (TR-X), the Supersession, breakfast roundtables, an Innovation Zone and the main conference. Also open now are submissions for the breakfast roundtables.

Now in its 24th year, the HPA Tech Retreat will take place February 11-15, 2019 at the JW Marriott Desert Springs Resort & Spa in Palm Desert, California, near Palm Springs.

The main program presentations are set for Wednesday, February 13 through Friday, February 15. These presentations are strictly reserved for marketing-free content. Mark Schubin, who has programmed the Tech Retreat since its inception, notes that main program sessions can include a wide range of content. “We are looking for the most interesting, thought-provoking, challenging and important ideas, diving into almost anything that is related to moving images and associated sounds. That includes, but is not limited to: alternative content for cinema, AR, broadcast in the age of broadband, content protection, dynamic range, enhanced cinema, frame rate, global mastering, higher immersion, international law, joke generation, kernel control, loss recovery, media management, night vision, optical advances, plug-‘n’-play, queasiness in VR, robo-post, surround imagery, Terabyte thumb drives, UHD II, verification, wilderness production, x-band Internet access, yield strength of lighting trusses and zoological holography.”

It is a far-ranging and creative call to the most innovative thinkers exploring the most interesting ideas and work. Schubin concludes with his annual salvo: “Anything from scene to seen and gear to ear is fair game. So are haptic/tactile, olfactory and gustatory applications.”

Proposals, which are informal and can be as short as a few sentences, must be submitted by the would-be presenter. Submitters will be contacted if the topic is of interest. Presentations in the main program are typically 30 minutes long, including set-up and Q&A. The deadline to submit main program proposals is end of day, Friday, October 26, 2018. Submissions should be sent to tvmark@earthlink.net.

Breakfast roundtables take place Wednesday to Friday, beginning at 7:30am. Unlike the main program, moderator-led breakfast roundtables can include marketing information. Schubin comments, “Table moderators are free to teach, preach, inquire, ask, call-to-task, sell or do anything else that keeps conversation flowing for an hour.”

There is no vetting process for breakfast roundtables. All breakfast roundtable moderators must be registered for the retreat, and there is no retreat registration discount conveyed by moderating a breakfast roundtable. Proposals for breakfast roundtables must be submitted by their proposed moderators, and once the maximum number of tables is reached (32 per day) no more can be accepted.

Further details for the 2019 HPA Tech Retreat will be announced in the coming weeks, including TR-X focus, supersession topics and Innovation Zone details, as well as seminars and meetings held in advance of the Tech Retreat.

Roundtable Post tackles HFR, UHD and HDR image processing

If you’re involved in post production, especially episodic TV, documentaries and feature films, then it’s highly probable that High Frame Rate (HFR), Ultra High Definition (UHD) and High Dynamic Range (HDR) have come your way.

“On any single project, the combination of HFR, UHD and HDR image-processing can be a pretty demanding, cutting-edge technical challenge, but it’s even more exacting when particular specs and tight turnarounds are involved,” says Jack Jones, digital colorist and CTO of full-service boutique facility Roundtable Post Production.

Among the central London facility’s credits are online virals for brands including Kellogg’s, Lurpak, Rolex and Ford, music films for Above & Beyond and John Mellencamp, plus broadcast TV series and feature documentaries for ITV, BBC, Sky, Netflix, Amazon, Discovery, BFI, Channel 4, Showtime and film festivals worldwide. These include Sean McAllister’s A Northern Soul, Germaine Bloody Greer (BBC) and White Right: Meeting The Enemy (ITV Exposure/Netflix).

“Yes, you can render-out HFR/UHD/HDR deliverables from a variety of editing and grading systems, but there are not many that can handle the simultaneous combination of these formats, never mind the detailed delivery stipulations and crunching deadlines that often accompany such projects,” says Jones.

Rewinding to the start of 2017, Jones says, “Looking forward to the future landscape of post, the proliferation of formats, resolutions, frame rates and color spaces involved in modern screened entertainment seemed an inevitability for our business. We realized that we were going to need to tackle the impending scenario head-on. Having assessed the alternatives, we took the plunge and gambled on Colorfront Transkoder.”

Transkoder is a standalone, automated system for fast digital file conversion. Roundtable Post’s initial use of Colorfront Transkoder turned out to be the creation of encrypted DCP masters and worldwide deliverables of a variety of long-form projects, such as Nick Broomfield’s Whitney: Can I Be Me, Noah Media Group’s Bobby Robson: More Than a Manager, Peter Medak’s upcoming feature The Ghost of Peter Sellers, and the Colombian feature-documentary To End A War, directed by Marc Silver.

“We discovered from these experiences that, along with incredible quality in terms of image science, color transforms and codecs, Transkoder is fast,” says Jones. “For example, the deliverables for To End A War involved 10 different language versions, plus subtitles. It would have taken several days to complete these straight out of an Avid, but rendering in Transkoder took just four hours.”

More recently, Roundtable Post was faced with the task of delivering country-specific graphics packages, designed and created by production agency Noah Media Group, for use by FIFA rights holders and broadcasters during the 2018 World Cup.

The project involved delivering a mix of HFR, UHD, HDR and HD SDR formats, resulting in 240 bespoke animations, and the production of a mammoth 1,422 different deliverables. These included: 59.94p UHD HDR, 50p UHD HDR, 59.94p HD SDR, 50p HD SDR, 59.94i HD SDR and 50i HD SDR with a variety of clock, timecode, pre-roll, soundtrack, burn-in and metadata requirements as part of the overall specification. Furthermore, the job encompassed the final QC of all deliverables, and it had to be completed within a five-day work week.

“For a facility of our size, this was a significant job in terms of its scale and deadline,” says Jones. “Traditionally, projects like these would involve throwing a lot of people and time at them, and there’s always the chance of human error creeping in. Thankfully, we already had positive experiences with Transkoder, and were eager to see how we could harness its power.”

Using technical data from FIFA, Jones built an XML file containing timelines with all of the relevant timecode, clock, image metadata, WAV audio and file-naming information for the required deliverables. He also liaised with Colorfront’s R&D team, and was quickly provided with an initial set of Python script templates that would help to automate the various requirements of the job in Transkoder.
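As a rough idea of what that kind of batch automation looks like (a hypothetical sketch using Python’s standard library; the element names are invented, and this is not Roundtable Post’s schema or Colorfront’s scripting API), a deliverables list can be generated programmatically rather than built by hand:

import xml.etree.ElementTree as ET

FORMATS = ["59.94p UHD HDR", "50p UHD HDR", "59.94p HD SDR",
           "50p HD SDR", "59.94i HD SDR", "50i HD SDR"]

def build_deliverables(animation_ids, formats=FORMATS):
    # Emit one <job> entry per animation/format combination.
    root = ET.Element("deliverables")
    for anim in animation_ids:
        for fmt in formats:
            job = ET.SubElement(root, "job", name=f"{anim}_{fmt.replace(' ', '_')}")
            ET.SubElement(job, "format").text = fmt
            ET.SubElement(job, "preroll_sec").text = "10"  # example spec value
            ET.SubElement(job, "clock").text = "burn-in"   # example spec value
    return ET.ElementTree(root)

# Three animations x six formats = 18 jobs; scale the list, not the labor.
build_deliverables([f"ANIM{n:03d}" for n in range(1, 4)]).write("deliverables.xml")

Generating every permutation from one data file is what removes the human-error risk Jones describes when the deliverable count runs into the hundreds or thousands.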

Roundtable Post was able to complete the FIFA 2018 World Cup job, including the client-attend QC of the 1,422 different UHD HDR and HD SDR assets, in under three days.

The Meg: What does a giant shark sound like?

By Jennifer Walden

Warner Bros. Pictures’ The Meg has everything you’d want in a fun summer blockbuster. There are explosions, submarines, gargantuan prehistoric sharks and beaches full of unsuspecting swimmers. Along with the mayhem, there is comedy and suspense and jump-scares. Best of all, it sounds amazing in Dolby Atmos.

The team at E² Sound, led by supervising sound editors Erik Aadahl, Ethan Van der Ryn and Jason Jennings, created a soundscape that wraps around the audience like a giant squid around a submersible. (By the way, that squid vs. submersible scene is so fun for sound!)

L-R: Ethan Van der Ryn and Erik Aadahl.

We spoke to the E² Sound team about the details of their recording sessions for the film. They talk about how they approached the sound for the megalodons, how they used the Atmos surround field to put the audience underwater and much more.

Real sharks can’t make sounds, but Hollywood sharks do. How did director Jon Turteltaub want to approach the sound of the megalodon in his film?
Erik Aadahl: Before the film was even shot, we were chatting with producer Lorenzo di Bonaventura, and he said the most important thing in terms of sound for the megalodon was to sell the speed and power. Sharks don’t have any organs for making sound, but they are very large and powerful and are able to displace water. We used some artistic sonic license to create the quick sound of them moving around and displacing water. Of course, when they breach the surface, they have this giant mouth cavity that you can have a lot of fun with in terms of surging water and creating terrifying, guttural sounds out of that.

Jason Jennings: At one point, director Turteltaub did ask the question, “Would it be appropriate for The Meg to make a growl or roar?”

That opened up the door for us to explore that avenue. The megalodon shouldn’t make a growling or roaring sound, but there’s a lot that you can do with the sound of water being forced through the mouth or gills, whether you are above or below the water. We explored sounds that the megalodon could be making with its body. We were able to play with sounds that aren’t animal sounds but could sound animalistic with the right amount of twisting. For example, if you have the sound of a rock being moved slowly through the mud, and you process that a certain way, you can get a sound that’s almost vocal but isn’t an animal. It’s another type of organic sound that can evoke that idea.

Aadahl: One of my favorite things about the original Jaws was that when you didn’t see or hear Jaws it was more terrifying. It’s the unknown that’s so scary. One of my favorite scenes in The Meg was when you do not see or hear it, but because of this tracking device that they shot into its fin, they are able to track it using sonar pings. In that scene, one of the main characters is in this unbreakable shark enclosure just waiting out in the water for The Meg to show up. All you hear are these little pings that slowly start to speed up. To me, that’s one of the scariest scenes because it’s really playing with the unknown. Sharks are these very swift, silent, deadly killers, and the megalodon is this silent killer on steroids. So it’s this wonderful, cinematic moment that plays on the tension of the unknown — where is this megalodon? It’s really gratifying.

Since sharks are like the ninjas of the ocean (physically, they’re built for stealth), how do you use sound to help express the threat of the megalodon? How were you able to build the tension of an impending attack, or to enhance an attack?
Ethan Van der Ryn: It’s important to feel the power of this creature, so there was a lot of work put into feeling the effect that The Meg had on whatever it’s coming into contact with. It’s not so much about the sounds that are emitting directly from it (like vocalizations) but more about what it’s doing to the environment around it. So, if it’s passing by, you feel the weight and power of it passing by. When it attacks — like when it bites down on the window — you feel the incredible strength of its jaws. Or when it attacks the shark cage, it feels incredibly shocking because that sound is so terrifying and powerful. It becomes more about feeling the strength and power and aggressiveness of this creature through its movements and attacks.

Jennings: In terms of building tension leading up to an attack, it’s all about paring back all the elements beforehand. Before the attack, you’ll find that things get quiet and calmer and a little sparse. Then, all of a sudden, there’s this huge explosion of power. It’s all about clearing a space for the attack so that it means something.

The attack on the window in the underwater research station, how did you build that sequence? What were some of the ways you were able to express the awesomeness of this shark?
Aadahl: That’s a fun scene because you have the young daughter of a scientist on board this marine research facility located in the South China Sea and she’s wandered onto this observation deck. It’s sort of under construction and no one else is there. The girl is playing with this little toy — an iPad-controlled gyroscopic ball that’s rolling across the floor. That’s the featured sound of the scene.

You just hear this little ball skittering and rolling across the floor. It kind of reminds me of Danny’s tricycle from The Shining. It’s just so simple and quiet. The rhythm creates this atmosphere and lulls you into a solitary mood. When the shark shows up, you’re coming out of this trance. It’s definitely one of the big shock-scares of the movie.

Jennings: We pared back the sounds there so that when the attack happened it was powerful. Before the attack, the rolling of the ball and the tickety-tick of it going over the seams in the floor really does lull you into a sense of calm. Then, when you do see the shark, there’s this cool moment where the shark and the girl are having a staring contest. You don’t know who’s going to make the first move.

There’s also a perfect handshake there between sound design and music. The music is very sparse, just a little bit of violins to give you that shiver up your spine. Then, WHAM!, the sound of the attack just shakes the whole facility.

What about the sub-bass sounds in that scene?
Aadahl: You have the mass of this multi-ton creature slamming into the window, and you want to feel that in your gut. It has to be this visceral body experience. By the way, effects re-recording mixer Doug Hemphill is a master at using the subwoofer. So during the attack, in addition to the glass cracking and these giant teeth chomping into this thick plexiglass, there’s this low-end “whoomph” that just shakes the theater. It’s one of those moments where you want everyone in the theater to just jump out of their seats and fling their popcorn around.

To create that sound, we used a number of elements, including some recordings of glass breaking that we had done a while ago. My parents were replacing this 8’ x 12’ glass window in their house, and before they demolished the old one, I told them not to throw it out because I wanted to record it first.

So I mic’d it up with my “hammer mic,” which I’m very willing to beat up. It’s an Audio-Technica AT825, which has a fixed stereo polar pattern of 110 degrees, and it has a large diaphragm, so it captures a really nice low-end response. I did several bangs on the glass before finally smashing it with a sledgehammer. When you have a surface that big, you can get a super low-end response because the surface acts like a membrane. So that was one of the many elements that comprised that attack.

Jennings: Another custom-recorded element for that sound came from a recording session where we tried to simulate the sound of The Meg’s teeth on a plastic cylinder for the shark cage sequence later in the film. We found a good-sized plastic container that we filled with water and we put a hydrophone inside the container and put a contact mic on the outside. From that point, we proceeded to abuse that thing with handsaws and a hand rake — all sorts of objects that had sharp points, even sharp rocks. We got some great material from that session, sounds where you can feel the cracking nature of something sharp on plastic.

For another cool recording session, in the editorial building where we work, we set up all the sound systems to play the same material through all of the subwoofers at once. Then we placed microphones throughout the facility to record the response of the building to all of this low-end energy. So for that moment where the shark bites the window, we have this really great punching sound we recorded from the sound of all the subwoofers hitting the building at once. Then after the bite, the scene cuts to the rest of the crew who are up in a conference room. They start to hear these distant rumbling sounds of the facility as it’s shaking and rattling. We were able to generate a lot of material from that recording session to feel like it’s the actual sound of the building being shaken by extreme low-end.

L-R: Emma Present, Matt Cavanaugh and Jason (Jay) Jennings.

The film spends a fair amount of time underwater. How did you handle the sound of the underwater world?
Aadahl: Jay [Jennings] just put a new pool in his yard and that became the underwater Foley stage for the movie, so we had the hydrophones out there. In the film, there are these submersible vehicles that Jay did a lot of experimentation for, particularly for their underwater propeller swishes.

The thing about hydrophones is that you can’t just put them in water and expect there to be sound. Even if you are agitating the water, you often need air displacement underwater pushing over the mics to create that surge sound that we associate with being underwater. Over the years, we’ve done a lot of underwater sessions and we found that you need waves, or agitation, or you need to take a high-powered hose into the water and have it near the surface with the hydrophones to really get that classic, powerful water rush or water surge sound.

Jennings: We had six different hydrophones for this particular recording session. We had a pair of Aquarian Audio H2a hydrophones, a pair of JrF hydrophones and a pair of Ambient Recording ASF-1 hydrophones. These are all different quality mics — some are less expensive and some are extremely expensive, and you get a different frequency response from each pair.

Once we had the mics set up, we had several different props available to record. One of the most interesting was a high-powered drill that you would use to mix paint or sheetrock compound. Connected to the drill, we had a variety of paddle attachments because we were trying to create new source for all the underwater propellers for the submersibles, ships and jet skis — all of which we view from underneath the water. We recorded the sounds of these different attachments in the water churning back and forth. We recorded them above the water, below the water, close to the mic and further from the mic. We came up with an amazing palette of sounds that didn’t need any additional processing. We used them just as they were recorded.

We got a lot of use out of these recordings, particularly for the glider vehicles, which are these high-tech, electrically-propelled vehicles with two turbine cyclone propellers on the back. We had a lot of fun designing the sound of those vehicles using our custom recordings from the pool.

Aadahl: There was another hydrophone recording mission that the crew, including Jay, went on. They set out to capture the migration of humpback whales. One of our hydrophones got tangled up in the boat’s propeller because we had a captain who was overly enthusiastic to move to the next location. So there was one casualty in our artistic process.

Jennings: Actually, it was two hydrophones. But the best part is that we got the recording of that happening, so it wasn’t a total loss.

Aadahl: “Underwater” is a character in this movie. One of the early things that the director and the picture editor Steven Kemper mentioned was that they wanted to make a character out of the underwater environment. They really wanted to feel the difference between being underwater and above the water. There is a great scene with Jonas (Jason Statham) where he’s out in the water with a harpoon and he’s trying to shoot a tracking device into The Meg.

He’s floating on the water and it’s purely environmental sounds, with the gentle lap of water against his body. Then he ducks his head underwater to see what’s down there. We switch perspectives there and it’s really extreme. We have this deep underwater rumble, like a conch shell feeling. You really feel the contrast between above and below the water.

Van der Ryn: Whenever we go underwater in the movie, Turteltaub wanted the audience to feel extremely uncomfortable, like that was an alien place and you didn’t want to be down there. So anytime we are underwater the sound had to do that sonic shift to make the audience feel like something bad could happen at any time.

How did you make being underwater feel uncomfortable?
Aadahl: That’s an interesting question, because it’s very subjective. To me, the power of sound is that it can play with emotions in very subconscious and subliminal ways. In terms of underwater, we had many different flavors for what that underwater sound was.

In that scene with Jonas going above and below the water, it’s really about that frequency shift. You go into a deep rumble under the water, but it’s not loud. It’s quiet. But sometimes the scariest sounds are the quiet ones. We learned this from A Quiet Place recently and the same applies to The Meg for sure.

Van der Ryn: Whenever you go quiet, people get uneasy. It’s a cool shift because when you are above the water you see the ripples of the ocean all over the place. When working in 7.1 or the Dolby Atmos mix, you can take these little rolling waves and pan them from center to left or from the right front wall to the back speakers. You have all of this motion and it’s calming and peaceful. But as soon as you go under, all of that goes away and you don’t hear anything. It gets really quiet and that makes people uneasy. There’s this constant low-end tone and it sells pressure and it sells fear. It is very different from above the water.

Aadahl: Turteltaub described this feeling of pressure, so it’s something that’s almost below the threshold of hearing. It’s something you feel; this pressure pushing against you, and that’s something we can do with the subwoofer. In Atmos, all of the speakers around the theater are extended-frequency range so we can put those super-low frequencies into every speaker (including the overheads) and it translates in a way that it doesn’t in 7.1. In Atmos, you feel that pressure that Turteltaub talked a lot about.

The Meg is an action film, so there are shootings, explosions, ships getting smashed up and other mayhem. What was the most fun action scene for sound? Why?
Jennings: I like the scene in the submersible shark cage where Suyin (Bingbing Li) is waiting for the shark to arrive. This turns into a whole adventure of her getting thrashed around inside the cage. The boat that is holding the cable starts to get pulled along. That was fun to work on.

Also, I enjoyed the end of the film where Jonas and Suyin are in their underwater gliders and they are trying to lure The Meg to a place where they can trap and kill it. The gliders were very musical in nature. They had some great tonal qualities that made them fun to play with using Doppler shifts. The propeller sounds we recorded in the pool… we used those for when the gliders go by the camera. We hit them with these churning sounds, and there’s the sound of the bubbles shooting by the camera.

Aadahl: There’s a climactic scene in the film with hundreds of people on a beach and a megalodon in the water. What could go wrong? There’s one character inside a “zorb” ball — an inflatable hamster ball for humans that’s used for scrambling around on top of the water. At a certain point, this “zorb” ball pops and that was a sound that Turteltaub was obsessed with getting right.

We went through so many iterations of that sound. We wound up doing this extensive balloon-popping session on Stage 10 at Warner Bros., where we had enough room to inflate a 16-foot weather balloon. We popped a bunch of different balloons there, and we accidentally popped the weather balloon, but fortunately we were rolling and we got it. So a combination of those sounds created the “zorb” ball pop.

That scene was one of my favorites in the film because that’s where the shit hits the fan.

Van der Ryn: That’s a great moment. I revisited that scene to do something else, and when the zorb popped it made me jump back because I had forgotten how powerful a moment that is. It was a really fun and funny moment.

Aadahl: That’s what’s great about this movie. It has some serious action and really scary moments, but it’s also fun. There are some tongue-in-cheek moments that made it a pleasure to work on. We all had so much fun working on this film. Jon Turteltaub is also one of the funniest people that I’ve ever worked with. He’s totally obsessed with sound, and that made for an amazing sound design and sound mix experience. We’re so grateful to have worked on a movie that let us have so much fun.

What was the most challenging scene for sound? Was there one scene that evolved a lot?
Aadahl: There’s a rescue scene that takes place in the deepest part of the ocean, and the rescue is happening from this nuclear submarine. They’re trying to extract the survivors, and at one point there’s this sound from inside the submarine, and you don’t know what it is but it could be the teeth of a giant megalodon scraping against the hull. That sound, which takes place over this one long tracking shot, was one that the director focused on the most. We kept going back and forth and trying new things. Massaging this and swapping that out… it was a tricky sound.

Ultimately, it ended up being a combination of sounds. Jay and sound effects editor Matt Cavanaugh went out and recorded this huge, metal cargo crate container. They set up mics inside and took all sorts of different metal tools and did some scraping, stuttering, chittering and other friction sounds. We got all sorts of material from that session and that’s one of the main featured sounds there.

Jennings: Turteltaub at one point said he wanted it to sound like a shovel being dragged across the top of the submarine, and so we took him quite literally. We went to record that container on one of the hottest days of the year. We had to put Matt (Cavanaugh) inside and shut the door! So we did short takes.

I was on the roof dragging shovels, rakes, a garden hoe and other tools across the top. We generated a ton of great material from that.

As with every film we do, we don’t want to rely on stock sounds. Everything we put together for these movies is custom made for them.

What about the giant squid? How did you create its sounds?
Aadahl: I love the sound that Jay came up with for the suction cups on the squid’s tentacles as they’re popping on and off of the submersible.

Jennings: Yet another glorious recording session that we did for this movie. We parked a car in a quiet location here at WB, and we put microphones inside of the car — some stereo mics and some contact mics attached to the windshield. Then, we went outside the car with two or three different types of plungers and started plunging the windshield. Sometimes we used a dry plunger and sometimes we used a wet plunger. We had a wet plunger with dish soap on it to make it slippery and slurpie. We came up with some really cool material for the cups of this giant squid. So we would do a hard plunge onto the glass, and then pull it off. You can stutter the plunger across the glass to get a different flavor. Thankfully, we didn’t break any windows, although I wasn’t sure that we wouldn’t.

Aadahl: I didn’t donate my car for that recording session because I have broken my windshield recording water in the past!

Van der Ryn: In regards to perspective in that scene, when you’re outside the submersible, it’s a wide shot and you can see the arms of the squid flailing around. There we’re using the sound of water motion but when we go inside the submersible it’s like this sphere of plastic. In there, we used Atmos to make the audience really feel like those squid tentacles are wrapping around the theater. The little suction cup sounds are sticking and stuttering. When the squid pulls away, we could pinpoint each of those suction cups to a specific speaker in the theater and be very discrete about it.

Any final thoughts you’d like to share on the sound of The Meg?
Van der Ryn: I want to call out Ron Bartlett, the dialogue/music re-recording mixer and Doug Hemphill, the re-recording mixer on the effects. They did an amazing job of taking all the work done by all of the departments and forming it into this great-sounding track.

Aadahl: Our music composer, Harry Gregson-Williams, was pretty amazing too.

Pixelogic adds d-cinema, Dolby audio mixing theaters to Burbank facility

Pixelogic, which provides localization and distribution services, has opened post production content review and audio mixing theaters within its facility in Burbank. The new theaters extend the company’s end-to-end services to include theatrical screening of digital cinema packages as well as feature and episodic audio mixing in support of its foreign language dubbing business.

Pixelogic now operates a total of six projector-lit screening rooms within its facility. Each room was purpose-built from the ground up to include HDR picture and immersive sound technologies, including support for Dolby Atmos and DTS:X audio. The main theater is equipped with a Dolby Vision projection system and supports Dolby Atmos immersive audio. The facility will enable the creation of more theatrical content in Dolby Vision and Dolby Atmos, which consumers can experience at Dolby Cinema theaters, as well as in their homes and on the go. The four larger theaters are equipped with Avid S6 consoles in support of the company’s audio services. The latest 4D motion chairs are also available for testing and verification of 4D capabilities.

“The overall facility design enables rapid and seamless turnover of production environments that support Digital Cinema Package (DCP) screening, audio recording, audio mixing and a range of mastering and quality control services,” notes Andy Scade, SVP/GM of Pixelogic’s worldwide digital cinema services.

MSI’s new Intel Core i9 ultra-thin WS65 mobile workstation, curved monitors

MSI has introduced its new WS65 mobile workstation and announced the availability of its PS42 professional laptop and Optix MAG241C and MAG271C gaming monitors.

The WS65 mobile workstation features a chassis similar to that of the GS65 Stealth Thin, with attractive styling and 15.6-inch, ultra-thin bezel display. With up to Intel’s 8th Generation Core i9 processor and up to Nvidia Quadro P4200 graphics, the WS65 is up to 40 percent faster than the previous-generation model. Although it is designed for portability, the WS65 also incorporates an 82Whr battery for up to eight hours of battery life.

The WS65 features a 15.6-inch Full HD IPS display with 72 percent coverage of the NTSC color gamut. For storage, the workstation offers one PCI-e SSD / SATA combo and one PCI-e SSD. Ports include three USB 3.1 Type-A, one USB 3.1 Type-C, one HDMI 2.0, one mDP 1.4, one mic-in and a headphone out. The WS65 will be available this September, and it will bear the new elegant and minimalistic MSI workstation logo tailored to the business environment.

The PS42 notebook is the newest member of the MSI Prestige series. Measuring 0.63 inches thick, weighing 2.6 pounds and featuring a nearly bezel-free screen, the notebook offers high performance. The PS42 is powered by an Intel 8th Generation Core i7 processor and an Nvidia MX150 GPU and provides 10 hours of battery life, plus a Windows Hello Certified fingerprint sensor. It is now available at major e-tailers, starting at $899.

The Optix MAG271C and MAG241C feature a 144Hz curved VA LED display and a fast response time. The series also uses MSI’s Gaming On-Screen Display software to allow users to control monitor settings, including contrast ratio and brightness, from their Windows desktops. The software also supports hotkey options, so users can switch profiles while in-game or use the MSI remote display app on their Android phones. The MAG271C and MAG241C are now available on Amazon for $299.99 and $229.99, respectively.

Alkemy X joins forces with Quietman, adds CD Megan Oepen

Creative content studio Alkemy X has entered into a joint venture with long-time New York City studio Quietman. In addition, Alkemy X has brought on director/creative director Megan Oepen.

The Quietman deal will see founder and creative director Johnnie Semerad moving the operations of his company into Alkemy X, where both parties will share all creative talent, resources and capabilities.

“Quietman’s reputation of high-end, award-winning work is a tribute to Johnnie’s creative and entrepreneurial spirit,” says Justin B. Wineburgh, Alkemy X president/CEO. “Over the course of two decades, he grew and evolved Quietman from a fledgling VFX boutique into one of the most renowned production companies in advertising and branded content. By joining forces with Alkemy X, we’ll no doubt build on each other’s legacies collectively.”

Semerad co-founded Quietman in 1996 as a Flame-based visual effects company. Since then, it has expanded into the full gamut of production and post production services, producing more than 100 Super Bowl spots, and earning a Cannes Grand Prix, two Emmy Awards and other honors along the way.

“What I’ve learned over the years is that you have to constantly reinvest and reinvent, especially as clients increasingly demand start-to-finish projects,” says Semerad. “Our partnership with Alkemy X will elevate how we serve existing and future clients together, while bolstering our creative and technical resources to reach our potential as commercial filmmakers. The best part of this venture? I’ve always been listed with the Qs, but now, I’m with the As!”

Alkemy X is also teaming up with Oepen, an award-winning creative director and live-action director with 20 years of broadcast, sports and consumer brand campaign experience. Notable clients include Google, the NBA, MLB, PGA, NASCAR, Dove Beauty, Gatorade, Sprite, ESPN, Delta Air Lines, Home Depot, Regal Cinemas, Chick-Fil-A and Yahoo! Sports. Oepen was formerly the executive producer and director for Red Bull’s Non-Live/Long Format Productions group, and headed Under Armour’s Content House. She was also the creator behind Under Armour Originals.

Behind the Title: Trollbäck+Company’s David Edelstein

NAME: David Edelstein

COMPANY: Trollbäck+Company (@trollback)

CAN YOU DESCRIBE YOUR COMPANY?
We are a creative agency that believes in the power of communication, craft and collaboration.
Our mission is to promote innovation, create beauty and foster a lasting partnership. We believe that the brands of the future will thrive on the constant spirit of invention. We apply the same principle to our work, always evolving our practice and reaching across disciplines to produce unexpected, original results.

WHAT’S YOUR JOB TITLE?
Executive Director of Client Partnerships

WHAT DOES THAT ENTAIL?
I’m responsible for building on current client relationships and bringing in new ones. I work closely with the team on our strategic approach to presenting us to a wide array of clients.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I think you need to be in a position of doing business development to really understand that question. The goal is to land work that the company wants to do and balance that with the needs of running a business. It is not an easy task to juggle.

WHAT’S YOUR FAVORITE PART OF THE JOB?
I love working with a talented team, and being in a position to present a company with such a strong legacy.

WHAT’S YOUR LEAST FAVORITE?
Even after all these years, rejection still isn’t easy, but it’s something you deal with on a daily, sometimes hourly, basis.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I’m a morning person, so I find it’s the perfect time to reach out to people when they’re fresh — and before their day gets chaotic.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Are you trying to tell me something? (laughs) I actually think I’d be doing the same thing, but perhaps for a different industry. I truly enjoy the experience of developing relationships and the challenge of solving creative problems with others. I think it’s a valuable skill set that can be applied to other types of jobs.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
This career came about pretty organically for me. I had a traditional production background and grew up in LA. When I moved to New York, I wound up at Showtime as a producer and discovered motion graphics. When I left there, I was fortunate enough to launch a few small studios. Being an owner makes you the head of business development from the start. These experiences have certainly prepared me for where I’ve been and where I am today.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I’m only a few months in, but we are currently spearheading branding for a Fortune 500 company. Trollbäck is also coming off a fantastic title sequence and package for the final episode of the Motion Conference, which just took place in June.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
It’s tough to call out one particular project, but some career highlights have been a long relationship with Microsoft, as well as traveling the world with Marriott and Hilton.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Cell phone, computer/email and iPad.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Twitter, Facebook, LinkedIn and Instagram.

DO YOU LISTEN TO MUSIC WHILE YOU WORK? CARE TO SHARE YOUR FAVORITE MUSIC TO WORK TO?
I try to give different types of music a go, so Spotify works well for me. But, honestly, I’m still a Springsteen guy.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I go home to relax and then come back the next day and try to be positive and grateful. Repeat!

HP intros new entry-level HP Z lineup

HP is offering new entry-level workstations with their HP Z lineup, which is designed to help accelerate performance and secure pros’ workflows.

The HP Z2 Mini, HP Z2 Small Form Factor and HP Z2 Tower, as well as the HP EliteDesk 800 Workstation Edition, feature built-in end-to-end HP security services, providing protection from evolving malware threats with self-healing BIOS and an HP endpoint security controller. Users get protection from hardware-enforced security solutions, including HP Sure Start Gen4 and HP Sure Run, which help keep critical processes running, even if malware tries to stop them. Additionally, HP’s Manageability Kit Gen 2 manages multiple devices.

All HP Z2 workstations can now connect with Thunderbolt for fast device connections and offer an array of certifications for the apps pros are using in their day-to-day work lives. HP Performance Advisor is available to optimize software and drivers, and users can deploy Intel Xeon processors and ECC memory for added reliability. The customization, expandability, performance upgradeability and I/O options help future-proof HP Z workstation purchases.

Here are some details about the fourth-generation entry HP Z workstation family:

The HP Z2 Mini G4 workstation features what HP calls “next-level performance” in a small form factor (2.7 liters in total volume). Compared to the previous-generation HP Z2 Mini, it offers twice the graphics power. Users can choose either the Nvidia Quadro P600 or Nvidia Quadro P1000 GPU. In addition, there is the option for AMD Radeon Pro WX4150 graphics.

Thanks to its size, users can mount it under a desk, behind a display or in a rack — up to 56 HP Z2 Mini workstations will fit in a standard 42U rack with the custom rackmount bracket accessory. With its flexible I/O, users can configure the system for connectivity of legacy serial ports, as well as support for up to six displays for peripheral and display connectivity needs. The HP Z2 G4 Mini comes with six-core Intel Xeon processors.

The HP Z2 Small Form Factor (SFF) G4 workstation offers 50 percent more processing power than the previous generation in the exact same compact size. The six-core CPU provides significant performance boosts. The HP Z2 SFF takes customization to the next level with flexible I/O options that free up valuable PCIe slots, while providing customization for legacy or specialized equipment, and for changing display needs.

The HP Z2 G4 SFF ships with four PCIe slots and dual M.2 storage slots. Its flexible I/O option enables users to customize networking, I/O or display needs without taking up PCIe slots or adding external adapters.

The HP Z2 Tower G4 workstation is designed for complex workloads like rendering with up to Ultra 3D graphics and the latest Intel Core or Intel Xeon processors. The HP Z2 tower can handle demanding 3D projects with over 60 percent more graphics power than the previous generation. With high clock speeds, users can get full, unthrottled performance, even with heavy workloads.

The HP EliteDesk 800 Workstation Edition targets users who want to upgrade to a workstation-class desktop with an ISV-certified application experience.

Designed for 2D/3D design, it is also out-of-the box optimized for leading VR engines and features the Nvidia GeForce GTX 1080.

All four models are expected to be available later this month: the HP Z2 Mini starts at $799, the HP Z2 Small Form Factor at $749, the HP Z2 Tower at $769 and the HP EliteDesk 800 at $642, including Nvidia Quadro P400 graphics.

Sony creates sounds for Director X’s Superfly remake

Columbia Pictures’ Superfly is a reimagining of Gordon Parks Jr.’s classic 1972 blaxploitation film of the same name. Helmed by Director X and written by Alex Tse, this new version transports the story of Priest from Harlem to modern-day Atlanta.

Steven Ticknor

Superfly’s sound team from Sony Pictures Post Production Services — led by supervising sound editor Steven Ticknor, supervising sound editor and re-recording mixer Kevin O’Connell, re-recording mixer Greg Orloff and sound designer Tony Lamberti — was tasked with bringing the sonic elements of Priest’s world to life. That included everything from building soundscapes for Atlanta’s neighborhoods and nightclubs to supplying the sounds of fireworks, gun battles and car chases.

“Director X and Joel Silver — who produced the movie alongside hip-hop superstar Future, who also curated and produced the film’s soundtrack — wanted the film to have a big sound, as big and theatrical as possible,” says Ticknor. “The film is filled with fights and car chases, and we invested a lot of detail and creativity into each one to bring out their energy and emotion.”

One element that received special attention from the sound team was the Lexus LC500 that Priest (Trevor Jackson) drives in the film. As the sports car was brand new, no pre-recorded sounds were available, so Ticknor and Lamberti dispatched a recording crew and professional driver to the California desert to capture every aspect of its unique engine sounds, tire squeals, body mechanics and electronics. “Our job is to be authentic, so we couldn’t use a different Lexus,” Ticknor explains. “It had to be that car.”

In one of the film’s most thrilling scenes, Priest and the Lexus LC500 are involved in a high-speed chase with a Lamborghini and a Cadillac Escalade. Sound artists added to the excitement by preparing sounds for every screech, whine and gear shift made by the cars, as well as explosions and other events happening alongside them and movements made by the actors behind the wheels.

It’s all much larger than life, says Ticknor, but grounded in reality. “The richness of the sound is a result of all the elements that go into it, the way they are recorded, edited and mixed,” he explains. “We wanted to give each car its own identity, so when you cut from one car revving to another car revving, it sounds like they’re talking to each other. The audience may not be able to articulate it, but they feel the emotion.”

Fights received similarly detailed treatment. Lamberti points to an action sequence in a barber shop as one of several scenes rendered partially in extreme slow motion. “It starts off in realtime before gradually shifting to slo-mo through the finish,” he says. “We had fun slowing down sounds, and processing them in strange and interesting ways. In some instances, we used sounds that had no literal relation to what was happening on the screen but, when slowed down, added texture. Our aim was to support the visuals with the coolest possible sound.”

Re-recording mixing was accomplished in the 125-seat Anthony Quinn Theater on an Avid S6 console with O’Connell handling dialogue and music and Orloff tackling sound effects and Foley. Like its 1972 predecessor, which featured an iconic soundtrack from Curtis Mayfield, the new film employs music brilliantly. Atlanta-based rapper Future, who shares producer credit, assembled a soundtrack that features Young Thug, Lil Wayne, Miguel, H.E.R. and 21 Savage.

“We were fortunate to have, in Kevin and Greg, a pair of Academy Award-winning mixers who did a brilliant job in blending music, dialogue and sound effects,” says Ticknor. “The mix sessions were very collaborative, with a lot of experimentation to build intensity and make the movie feel bigger than life. Everyone was contributing ideas and challenging each other to make it better, and it all came together in the end.”

The score for YouTube Red’s Cobra Kai pays tribute to original Karate Kid

By Jennifer Walden

In the YouTube Red comedy series Cobra Kai, Daniel LaRusso (Ralph Macchio), the young hero of the Karate Kid movies, has grown up to be a prosperous car salesman, while his nemesis Johnny Lawrence (William Zabka) just can’t seem to shake that loser label he earned long ago. Johnny can’t hold down his handyman job. He lives alone in a dingy apartment, and his personality hasn’t benefited from maturity at all. He lives a very sad reality until one day he finds himself sticking up for a kid being bullied, and that redeeming bit of character makes you root for him. It’s an interesting dynamic that the series writers/showrunners have crafted, and it works.

L-R: Composers Leo Birenberg and Zach Robinson

Fans of the 1980s film franchise will appreciate the soundtrack of the new Cobra Kai series. Los Angeles-based composers Leo Birenberg and Zach Robinson were tasked with capturing the essence of both composer Bill Conti’s original film scores and the popular music tracks that also defined the sound of the films.

To find that Karate Kid essence, Birenberg and Robinson listened to the original films and identified what audiences were likely latching onto sonically. “We concluded that it was mostly a color palette connection that people have. They hear a certain type of orchestral music with a Japanese flute sound, and they hear ‘80s rock,” says Birenberg. “It’s that palette of sounds that people connect with more so than any particular melody or theme from the original movies.”

Even though Conti’s themes and melodies for Karate Kid don’t provide the strongest sonic link to the films, Birenberg and Robinson did incorporate a few of them into their tracks at appropriate moments to create a feeling of continuity between the films and the series. “For example, there were a couple of specific Japanese flute phrases that we redid. And we found a recurring motif of a simple pizzicato string melody,” explains Birenberg. “It’s so simple that it was easy to find moments to insert it into our cues. We thought that was a really cool way to tie everything together and make it feel like it is all part of the same universe.”

Birenberg and Robinson needed to write a wide range of music for the show, which can be heard en masse on the Cobra Kai OST. There are the ’80s rock tracks that take over for licensed songs by bands like Poison and The Alan Parsons Project. This direction, as heard on the tracks “Strike First” and “Quiver,” covered the score for Johnny’s character.

The composers also needed to write orchestral tracks that incorporated Eastern influences, like the Japanese flutes, to cover Daniel as a karate teacher and to comment on his memories of Miyagi. A great example of this style is called, fittingly, “Miyagi Memories.”

There’s a third direction that Birenberg and Robinson covered for the new Cobra Kai students. “Their sound is a mixture of modern EDM and dance music with the heavier ‘80s rock and metal aesthetics that we used for Johnny,” explains Robinson. “So it’s like Johnny is imbuing the new students with his musical values. This style is best represented in the track ‘Slither.’”

Birenberg and Robinson typically work as separate composers, but they’ve collaborated on several projects before Cobra Kai. What makes their collaborations so successful is that their workflows and musical aesthetics are intrinsically similar. Both use Steinberg’s Cubase as their main DAW, while running Ableton Live in ReWire mode. Both like to work with MIDI notes while composing, as opposed to recording and cutting audio tracks.

Says Birenberg, “We don’t like working with audio from the get-go because TV and film are such a notes-driven process. You’re not writing music as much as you are re-writing it to specification and creative input. You want to be able to easily change every aspect of a track without having to dial in the same guitar sound or re-record the toms that you recorded yesterday.”

Virtual Instruments
For Cobra Kai, they first created demo songs using MIDI and virtual instruments. Drums and percussion sounds came from XLN Audio’s Addictive Drums. Spectrasonics Trilian was used for bass lines and Keyscape and Omnisphere 2 provided many soft-synth and keyboard sounds. Virtual guitar sounds came from MusicLab’s RealStrat and RealLPC, Orange Tree, and Ilya Efimov virtual instrument libraries. The orchestral sections were created using Native Instruments Kontakt, with samples coming from companies such as Spitfire, Cinesamples, Cinematic Strings, and Orchestral Tools.

“Both Zach and I put a high premium on virtual instruments that are very playable,” reports Birenberg. “When you’re in this line of work, you have to work superfast and you don’t want a virtual instrument that you have to spend forever tweaking. You want to be able to just play it in so that you can write quickly.”

For the final tracks, they recorded live guitar, bass and drums on every episode, as well as Japanese flute and small percussion parts. For the season finale, they recorded a live orchestra. “But,” says Birenberg, “all the orchestra and some Japanese percussion you hear earlier in the series, for the most part, are virtual instruments.”

Live Musicians
For the live orchestra, Robinson says they wrote 35 minutes of music in six days and immediately sent that to get orchestrated and recorded across the world with the Prague Radio Symphony Orchestra. The composing team didn’t even have to leave Los Angeles. “They sent us a link to a private live stream so we could listen to the session as it was going on, and we typed notes to them as we were listening. It sounds crazy but it’s pretty common. We’ve done that on numerous projects and it always turns out great.”

When it comes to dividing up the episodes — deciding who should score what scenes — the composing team likes to “go with gut and enthusiasm,” explains Birenberg. “We would leave the spotting session with the showrunners, and usually each of us would have a few ideas for particular spots.”

Since they don’t work in the same studio, the composers would split up and start work on the sections they chose. Once they had an idea down, they’d record a quick video of the track playing back to picture and share that with the other composer. Then they would trade tracks so they each got an opportunity to add in parts. Birenberg says, “We did a lot of sending iPhone videos back and forth. If it sounds good over an iPhone video, then it probably sounds pretty good!”

Both composers have different and diverse musical backgrounds, so they both feel comfortable diving right in and scoring orchestral parts or writing bass lines, for instance. “For the scope of this show, we felt at home in every aspect of the score,” says Birenberg. “That’s how we knew this show was for both of us. This score covers a lot of ground musically, and that ground happened to fit things that we understand and are excited about.” Luckily, they’re both excited about ‘80s rock (particularly Robinson) because writing music in that style effectively isn’t easy. “You can’t fake it,” he says.

Recreating ‘80s Rock
A big part of capturing the magic of ‘80s rock happened in the mix. On the track “King Cobra,” mix engineer Sean O’Brien harnessed the ‘80s hair metal style by crafting a drum sound that evoked Mötley Crüe and Bon Jovi. “I wanted to make the drums as bombastic and ‘80s as possible, with a really snappy kick drum and big reverbs on the kick and snare,” says O’Brien.

Using Massey DRT, a drum sample replacement plug-in for Avid Pro Tools, he swapped out the live drum parts with drum samples. Then on the snare, he added a gated reverb using Valhalla VintageVerb. He also used Valhalla Room to add a short plate sound to thicken up the kick and snare drums.

To get the toms to match the cavernous punchiness of the kick and snare, O’Brien augmented the live toms with compression and EQ. “I chopped up the toms so there wasn’t any noise in between each hit and then I sent those to the nonlinear short reverbs in Valhalla Room,” he says. “Next, I did parallel compression using the Waves SSL E-Channel plug-in to really squash the tom hits so they’re big and in your face. With EQ, I added more top end than I normally would to help the toms compete with the other elements in the mix. You can make the close mics sound really crispy with those SSL EQs.”
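Parallel (or New York) compression, the technique O’Brien describes, is simple to illustrate in code. Below is a minimal numpy sketch of the general idea, not a model of his SSL plug-in chain: a heavily squashed copy of the drum signal is mixed back underneath the untouched dry signal, which raises the average level and density while the dry path preserves the transients.

```python
import numpy as np

def compress(x, threshold_db=-30.0, ratio=8.0):
    """Crude static compressor: reduce gain on samples above the threshold."""
    level_db = 20.0 * np.log10(np.abs(x) + 1e-12)
    over = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over * (1.0 - 1.0 / ratio)      # heavy reduction above threshold
    return x * 10.0 ** (gain_db / 20.0)

def parallel_compress(dry, wet_gain=0.5):
    """Mix a squashed copy back under the dry signal (New York style)."""
    return dry + wet_gain * compress(dry)

# Toy drum-like signal: decaying noise bursts ("hits") separated by silence
sr = 48000
hit = np.random.randn(sr // 8) * np.exp(-np.linspace(0.0, 8.0, sr // 8))
dry = np.zeros(sr)
for start in (0, sr // 4, sr // 2, 3 * sr // 4):
    dry[start:start + len(hit)] += hit

out = parallel_compress(dry)
# Peaks stay roughly put while RMS (perceived density) rises
print("dry peak/rms:", np.abs(dry).max(), np.sqrt((dry ** 2).mean()))
print("out peak/rms:", np.abs(out).max(), np.sqrt((out ** 2).mean()))
```

The `wet_gain` mix here stands in for the return fader an engineer would ride under the dry drum bus.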

Next, he bussed all the drum tracks to a group aux track, which had a Neve 33609 plug-in by UAD and a Waves C4 multi-band compressor “to control the whole drum kit after the reverbs were laid in to make sure those tracks fit in with the other instruments.”

Sean O’Brien

On “Slither,” O’Brien also focused on the drums, but since this track is more ‘80s dance than ‘80s rock, O’Brien says he was careful to emphasize the composers’ ‘80s drum machine sounds (rather than the live drum kit), because that is where the character of the track was coming from. “My job on this track was to enhance the electric drum sounds; to give the drum machine focus. I used UAD’s Neve 1081 plug-in on the electronic drum elements to brighten them up.”

“Slither” also features taiko drums, which make the track feel cinematic and big. O’Brien used Soundtoys Devil-Loc to make the taiko drums feel more aggressive, and added distortion using Decapitator from Soundtoys to help them cut through the other drums in the track. “I think the drums were the big thing that Zach [Robinson] and Leo [Birenberg] were looking to me for because the guitars and synths were already recorded the way the composers wanted them to sound.”

The Mix
Mix engineer Phil McGowan, who was responsible for mixing “Strike First,” agrees. He says, “The ‘80s sound for me was really based on drum sounds, effects and tape saturation. Most of the synth and guitar sounds that came from Zach and Leo were already very stylized so there wasn’t a whole lot to do there. Although I did use a Helios 69 EQ and Fairchild compressor on the bass along with a little Neve 1081 and Kramer PIE compression on the guitars, which are all models of gear that would have been used back then. I used some Lexicon 224 and EMT 250 on the synths, but otherwise there really wasn’t a whole lot of processing from me on those elements.”

Phil McGowan’s ‘Strike First’ Pro Tools session.

To get an ‘80s gated reverb sound for the snare and toms on “Strike First,” McGowan used an AMS RMX16 nonlinear reverb plug-in in Pro Tools. For bus processing, he mainly relied on a Pultec EQ, adding a bit of punch with the classic “Pultec Low End Trick” — which involves boosting and attenuating at the same frequency — plus adding a little bump at 8k for some extra snap. Next in line, he used an SSL G-Master buss compressor before going into UAD’s Studer A800 tape plug-in set to 456 tape at 30 ips and calibrated to +3 dB.
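The “Pultec Low End Trick” can be demonstrated numerically. The sketch below is my own illustration using standard RBJ-cookbook digital biquads, not a model of the Pultec’s passive circuit or of McGowan’s settings: because the boost (a low shelf) and the cut (a wider bell) have different curve shapes, dialing both in at the same frequency does not cancel out. The net response keeps a bump at the bottom with a dip just above it, the punch-plus-cleanup effect the trick is known for; the exact curves differ from the hardware, but the non-cancellation is the point.

```python
import numpy as np
from scipy.signal import freqz

fs = 48000.0  # sample rate for the digital approximation

def low_shelf(f0, gain_db, S=1.0):
    # RBJ-cookbook low-shelf biquad coefficients (b, a)
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / 2 * np.sqrt((A + 1 / A) * (1 / S - 1) + 2)
    cosw, sqA = np.cos(w0), np.sqrt(A)
    b = np.array([A * ((A + 1) - (A - 1) * cosw + 2 * sqA * alpha),
                  2 * A * ((A - 1) - (A + 1) * cosw),
                  A * ((A + 1) - (A - 1) * cosw - 2 * sqA * alpha)])
    a = np.array([(A + 1) + (A - 1) * cosw + 2 * sqA * alpha,
                  -2 * ((A - 1) + (A + 1) * cosw),
                  (A + 1) + (A - 1) * cosw - 2 * sqA * alpha])
    return b, a

def peaking(f0, gain_db, Q=0.7):
    # RBJ-cookbook peaking-EQ biquad coefficients (b, a)
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * Q)
    cosw = np.cos(w0)
    b = np.array([1 + alpha * A, -2 * cosw, 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * cosw, 1 - alpha / A])
    return b, a

# Boost and cut dialed in "at the same frequency," as in the trick
freqs = np.array([20, 40, 60, 120, 250, 500], dtype=float)
b1, a1 = low_shelf(60.0, +6.0)   # broad low-shelf boost
b2, a2 = peaking(60.0, -6.0)     # wider bell cut at the same setting
_, h1 = freqz(b1, a1, worN=freqs, fs=fs)
_, h2 = freqz(b2, a2, worN=freqs, fs=fs)
for f, g in zip(freqs, 20 * np.log10(np.abs(h1 * h2))):
    print(f"{f:5.0f} Hz: {g:+.2f} dB")  # bump at the bottom, dip above it
```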

“I did end up using some parallel compression using a Distressor plug-in by Empirical Labs, which was not around back then, but it’s my go-to parallel compressor and it sounded fine, so I left it in my template. I also used a little channel EQ from FabFilter Pro-Q2 and the Neve 88RS Channel Strip,” concludes McGowan.


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter @audiojeney.

Quick Chat: Technicolor’s new finishing artist, VP Pankaj Bajpai

By Randi Altman

Veteran colorist Pankaj Bajpai will be joining Technicolor’s Los Angeles studio in August as VP, finishing artist and business development. He comes to Technicolor from his long-tenured position at Encore.

Bajpai’s long list of television credits includes House of Cards, Sex and the City, Carnivàle, The Newsroom, True Detective, Justified, Fear the Walking Dead, Genius: Einstein and Picasso, Snowfall and many more. He brings with him a background in both film cinematography and digital post.

Bajpai joins Technicolor’s roster of episodic colorists in Los Angeles, which includes Sparkle, Tim Vincent, Tony Dustin, Tom Forletta, Roy Vasich and Doug Delaney.

“I’m thrilled to start a new chapter at such a vibrant time in our industry’s landscape,” says Bajpai on joining Technicolor. “With the support of Sherri Potter (Technicolor’s president of worldwide post production), and the team of artists and engineers at Technicolor, I’m excited to continue to push the boundaries of technology and creativity to bring our clients’ vision and passion to all screens, in all formats, for all to enjoy.”

We reached out to Bajpai to find out more:

Why was now the right time to make this change, especially after being at one place for so long?
Consumers’ relationship with content has been disrupted, the entertainment industry has shifted, and as a result the dynamics of post are changing dramatically. Lines are blurring between “feature” and “episodic” content — the quality of the story and the production, the craft and the expectations of all stakeholders are now almost universally the same for all pieces of content regardless of distribution platform. I believe Technicolor understands this dynamic shift and is supporting the singular demand for stunning content regardless of distribution “genre,” and that made it the right time for me to join.

How do you divide your time between your colorist duties and your biz dev duties?
I believe that the role of the colorist is no longer a singular duty. It is my responsibility to be the center of collaboration across the post process — from a client perspective, a craft perspective and a workflow perspective. We no longer live in a siloed industry with clear hand-offs. I must understand the demands that 4K, HDR and beyond place on workflows, the craft and the ever-tightening delivery deadlines.

I believe in being the catalyst for collaboration across the post process, uniting the technology and artistry to serve our clients’ visions. It’s not about wearing one hat at a time. It’s about taking my role as both artist and client ambassador seriously, ultimately ensuring that the experience is as flawless as possible, and the picture is stunning.

You are an artist first, but what do you get from doing the other parts as well?
We no longer work within independent processes. Being that center of collaboration that I referenced earlier influences my approach to color finishing as much as my role as an artist helps to bring perspective to the technology and operational demands of projects these days.

How does your background in cinematography inform your color work?
My work will always be informed by my clients, but my background in cinematography allows us to speak the same language — the language of lens and light, the language of photography. I find it is a very easy way of communicating visual ideas and gets us on the same page much faster. For instance, when a DP shares with me that they will be using a particular set of lenses and filters in combination with specific gels and lights, I’m able to visualize their creative intent quickly. Instinctively, we know what that image needs to be from the start without talking about it too much. Establishing such trust on demanding episodic shooting and finishing schedules is critical to stay true to my clients’ creative ideas.

Understanding and respecting the nuances of a cinematographer’s work in this way goes far in my ability to create a successful color finishing process in the end.

The world of color is thriving right now. How has the art changed since you started?
Art at its essence will always be about creative people seeing something come to life from within their own unique perspective. What has changed is the fact that the tools we now have at our disposal allow me as a finishing artist to create all new approaches to my craft. I can go deeper into an image and its color space now; it’s freeing and exciting because it allows for collaboration with cinematographers and directors on a continually deeper level.

What is the most exciting thing going on in color right now? HDR? Something else?
It really feels like the golden age of content across all platforms. Consumers’ expectations are understandably high across any type of content consumed in any environment or any screen. I think everyone involved on a show feels that and feels the excitement and continues to raise the bar for the quality of the storytelling, the craft and the overall consumer engagement. To be a contributor to work that is now easily seen globally is very exciting.

Has the new technology changed the way you work or is your creative process essentially the same?
Technology will continue to change, workflows will be impacted and, as an industry, we’ll always be looking to challenge what is possible. My creative process continues to be influenced by the innovative tools that I get to explore.

For instance, it’s vital for me to understand an array of new digital cameras and the distinctive images they are capable of producing. I frequently use my toolset for creative options that can be deployed right within those cameras. To be able to help customize images non-destructively from the beginning of the shoot and to collaborate with directors and cinematographers to aid storytelling with a unique visual style all the way to the finish, is hugely satisfying. For innovation in the creative process today, the sky is the limit.

Review: HP DreamColor Z31x studio display for cinema 4K

By Mike McCarthy

Not long ago, HP sent me their newest high-end monitor to review, and I was eager to dig in. The DreamColor Z31x studio display is a 31-inch true 4K color-critical reference monitor. It has many new features that set it apart from its predecessors, which I have examined and will present here in as much depth as I can.

It is challenging to communicate the nuances of color quality through writing or any other form on the Internet, as some things can only be truly appreciated firsthand. But I will attempt to communicate the experience of using the new DreamColor as best I can.

First, we will start with a little context…

Some DreamColor History
HP revolutionized the world of color-critical displays with the release of the first DreamColor in June 2008. The LP2480zx was a 24-inch 1920×1200 display that had built-in color processing with profiles for standard color spaces and the ability to calibrate it to refine those profiles as the monitor aged. It was not the first display with any of these capabilities, but the first one that was affordable, by at least an order of magnitude.

It became very popular in the film industry, both sitting on desks in post facilities — as it was designed — and out in the field as a live camera monitor, which it was not designed for. It had a true 10-bit IPS panel and the ability to reproduce incredible detail in the darks. It could only display 10-bit sources from the brand-new DisplayPort input or the HDMI port, and the color gamut remapping only worked for non-interlaced RGB sources.

So many people using the DreamColor as a “video monitor” instead of a “computer monitor” weren’t even using the color engine — they were just taking advantage of the high-quality panel. It wasn’t just the color engine but the whole package, including the price, that led to its overwhelming success. This was helped by the lack of better options, even at much higher price points, since this was the period after CRT production ended but before OLED panels had reached the market. This was similar to (and in the same timeframe as) Canon’s 5D Mark II revolutionizing the world of independent filmmaking with HDSLR video. The combination gave content creators amazing tools for moving into HD production at affordable price points.

It took six years for HP to release an update to the original model DreamColor in the form of the Z27x and Z24x. These had the same color engine but different panel technology. They never had the same impact on the industry as the original, because the panels didn’t “wow” people, and the competition was starting to catch up. Dell has PremierColor, and Samsung and BenQ offer color-accurate models as well. The Z27x could display 4K sources by scaling them to its native 2560×1440 resolution, while the Z24x’s resolution was decreased to 1920×1080 with a panel that was even less impressive.

Fast forward a few more years, and the Z24x was updated to Gen2, and the Z32x was released with UHD resolution. This was four times the resolution of the original DreamColor and at half the price. But with lots of competition in the market, I don’t think it has had the reach of the original DreamColor, and the industry has matured to the point where people aren’t hooking them to 4K cameras because there are other options better suited to that environment, specifically battery-powered OLED units.

DreamColor at 4K
Fast forward a bit and HP has released the Z31x DreamColor studio display. The big feature that this unit brings to the table is true cinema 4K resolution. The label 4K gets thrown around a lot these days, but most “4K” products are actually UHD resolution, at 3840×2160, instead of the full 4096×2160. This means that true 4K content is scaled to fit the UHD screen, or in the case of Sony TVs, cropped off the sides. When doing color critical work, you need to be able to see every pixel, with no scaling, which could hide issues. So the Z31x’s 4096×2160 native resolution will be an important feature for anyone working on modern feature films, from editing and VFX to grading and QC.
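Some quick arithmetic (mine, not HP’s marketing) makes the pixel-for-pixel argument concrete: putting a 4096-wide image on a 3840-wide panel forces a non-integer scale factor, so every source pixel gets resampled, and cropping instead throws away more than half a million pixels.

```python
# Why a UHD panel cannot show true cinema 4K pixel-for-pixel (illustrative math)
dci_4k = (4096, 2160)   # cinema 4K
uhd = (3840, 2160)      # typical "4K" consumer panel

scale = uhd[0] / dci_4k[0]
print(f"horizontal scale factor: {scale}")                  # 0.9375, non-integer
print(f"source pixels per display pixel: {1 / scale:.4f}")
cropped = (dci_4k[0] - uhd[0]) * dci_4k[1]
print(f"pixels discarded if cropped instead: {cropped:,}")  # 552,960
```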

The 10-bit 4K Panel
The true 10-bit IPS panel is the cornerstone of what makes a DreamColor such a good monitor. IPS monitor prices have fallen dramatically since the technology was first introduced over a decade ago; some of that is the natural progression of technology, but some of it has come at the expense of quality. Most displays offering 10-bit color accomplish it by flickering the pixels of an 8-bit panel to approximate the remaining gradations, a technique called frame rate control (FRC). Cheaper panels go as low as 6-bit color, using FRC to get them close to 8-bit. There are a variety of other ways to reduce cost as well, such as cheaper materials and lower-quality backlights.
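For readers unfamiliar with FRC, here is a minimal sketch of the idea, purely illustrative rather than any panel vendor’s actual algorithm: a 10-bit code that falls between two 8-bit levels is approximated by flickering between the two nearest 8-bit codes so that the average over many refreshes lands on the target.

```python
import numpy as np

def frc_approximate(level_10bit, frames=64):
    """Temporally dither a 10-bit level (0-1023) on an 8-bit panel (0-255).

    Illustrative only: alternates between the two nearest 8-bit codes so
    the average over `frames` refreshes approximates the 10-bit target.
    """
    target = level_10bit / 1023 * 255           # 10-bit value on the 8-bit scale
    lo, hi = int(np.floor(target)), int(np.ceil(target))
    frac = target - lo                          # fraction of frames spent on `hi`
    sequence = np.where(np.arange(frames) < round(frac * frames), hi, lo)
    np.random.shuffle(sequence)                 # real FRC orders this more carefully
    return sequence

seq = frc_approximate(513)                      # a level with no exact 8-bit code
print("8-bit codes shown:", np.unique(seq))
print("temporal average :", seq.mean(), "target:", 513 / 1023 * 255)
```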

HP claims that the underlying architecture of this panel returns to the quality of the original IPS panel designs, but then adds the technological advances developed since then, without cutting any corners in the process. In order to fully take advantage of the 10-bit panel, you need to feed it 10-bit source content, which is easier than it used to be but not a foregone conclusion. Make sure you select 10-bit output color in your GPU settings.

In addition to a true 10-bit color display, it also natively refreshes at the rate of the source image, from 48Hz to 60Hz, because displaying every frame at the right time is as important as displaying it in the right color. They say that the darker blacks are achieved by better crystal alignment in the LCD (Liquid Crystal Display) blocking out the backlight more fully. This also gives a wider viewing angle, since washing out the blacks is usually the main issue with off-axis viewing. I can move about 45 degrees off center, vertically or horizontally, without seeing any shift in the picture brightness or color. Past that I start to see the mid levels getting darker.
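The value of refreshing at the source rate is easy to show with a little cadence arithmetic of my own: at 48Hz, every frame of 24fps material is held for exactly two refreshes, while a fixed 60Hz panel has to alternate between two and three refreshes per frame (3:2-style pulldown), which reads as judder.

```python
import math

def refreshes_per_frame(source_fps, display_hz, frames=6):
    # How many panel refreshes each successive source frame receives
    ticks = display_hz / source_fps
    return [math.floor((i + 1) * ticks) - math.floor(i * ticks)
            for i in range(frames)]

print("24 fps @ 48 Hz:", refreshes_per_frame(24, 48))  # [2, 2, 2, 2, 2, 2] even cadence
print("24 fps @ 60 Hz:", refreshes_per_frame(24, 60))  # [2, 3, 2, 3, 2, 3] uneven (judder)
```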

Speaking of brighter and darker, the backlight gives the display a native brightness of 250 nits. That is more than twice the 100-nit reference level for SDR content, but this is not an HDR display. It can be adjusted anywhere from 48 to 250 nits, depending on the usage requirements and environment. It is not designed to be the brightest display available; it is aiming to be the most accurate.

Much effort was put into the front surface to get the proper balance of reducing glare and reflections as much as possible. I can’t independently verify some of their other claims without a microscope and more knowledge than I currently have, but I can easily see that the matte surface produces far fewer reflections and less glare from the surrounding environment than other monitors, allowing you to better see the image on the screen. That is one of the most apparent strengths of the monitor, obviously visible at first glance.

Color Calibration
The other new headline feature is an integrated colorimeter for display calibration and verification, located in the top of the bezel. It can swing down and measure the color parameters of the true 10-bit IPS panel to adjust the color space profiles, allowing the monitor to more accurately reproduce colors. This is a fully automatic feature, independent of any software or configuration on the host computer system. It can be controlled from the display’s menu interface, and the settings will persist between multiple systems. This can be used to create new color profiles, or optimize the included ones for DCI P3, BT.709, BT.2020, sRGB and Adobe RGB. It also includes some low-blue-light modes for use as an interface monitor, but this negates its color-accurate functionality. It can also input and output color profiles and all other configuration settings through USB and its network connection.

The integrated color processor also supports using external colorimeters and spectroradiometers to calibrate the display, and even allows the integrated XYZ colorimeter itself to be calibrated by those external devices. And this is all accomplished internally in the display, independent of using any software on the workstation side. The supported external devices currently include:
– Klein Instruments: K10, K10-A (colorimeters)
– Photo Research: PR-655, PR-670, PR-680, PR-730, PR-740, PR-788 (spectroradiometers)
– Konica Minolta: CA-310 (colorimeter)
– X-Rite: i1Pro 2 (spectrophotometer), i1Display (colorimeter)
– Colorimetry Research: CR-250 (spectroradiometer)

Inputs and Ports
There are five main display inputs on the monitor: two DisplayPort 1.2, two HDMI 2.0 and one DisplayPort over USB-C. All support HDCP and full 4K resolution at up to 60 frames per second. It also has a 1/8-inch audio jack and a variety of USB options. There are four USB 3.0 ports that are shared via KVM switching technology between the USB-C host connection and a separate USB-B port to a host system. These are controlled by another dedicated USB keyboard port, giving the monitor direct access to the keystrokes. There are two more USB ports that connect to the integrated DreamColor hardware engine, for connecting external calibration instruments, and for loading settings from USB devices.

My only complaint is that while the many USB ports are well labeled, the video ports are not. I can tell which ones are HDMI without the existing labels, but what I really need is to know which one the display views as HDMI1 and which is HDMI2. The Video Input Menu doesn’t tell you which inputs are active, which is another oversight, given all of the other features they added to ease the process of sharing the display between multiple inputs. So I recommend labeling them yourself.

Full-Screen Monitoring Features
I expect the Z31x will most frequently be used as a dedicated full-resolution playback monitor, and HP has developed a bunch of new features that are very useful and applicable for that use case. The Z31x can overlay mattes (with variable opacity) for Flat and Scope cinema aspect ratios (1.85 and 2.39). It also can display onscreen markers for those sizes, as well as 16×9 or 4×3, including action- and title-safe, with further options for center and thirds markers in various colors. The markers can be further customized with HP’s StudioCal.XML files. I created a preset that gives you 2.76:1 aspect ratio markers that you are welcome to download and use or modify. These customized XMLs are easy to create and are loaded automatically when you insert a USB stick containing them into the color engine port.
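The geometry behind those mattes and markers is straightforward to compute. The function below is my own arithmetic sketch, not anything from HP’s firmware: center the target aspect ratio on the 4096×2160 panel and matte whatever is left over. The last line reproduces the 2.76:1 case of the custom preset mentioned above.

```python
def matte_for(aspect, panel_w=4096, panel_h=2160):
    """Centered active area and matte bar sizes for a target aspect ratio."""
    panel_aspect = panel_w / panel_h            # ~1.896 on the Z31x
    if aspect >= panel_aspect:                  # wider than panel: letterbox
        w, h = panel_w, round(panel_w / aspect)
    else:                                       # narrower than panel: pillarbox
        w, h = round(panel_h * aspect), panel_h
    return {"active": (w, h),
            "bars": ((panel_w - w) // 2, (panel_h - h) // 2)}

for name, ar in (("Flat 1.85", 1.85), ("Scope 2.39", 2.39), ("2.76", 2.76)):
    print(name, matte_for(ar))
```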

The display also gives users full control over the picture scaling, and has a unique 2:1 pixel scaling for reviewing 2K and HD images at pixel-for-pixel accuracy. It also offers compensation for video levels and overscan and controls for de-interlacing, cadence detection, panel overdrive and blue-channel-only output. You can even control the function of each bezel button, and their color and brightness. These image control features will definitely be significant to professional users in the film and video space. Combined with the accurate reproduction of color, resolution and frame rate, this makes for an ideal display for monitoring nearly any film or video content at the highest level of precision.

Interface Display Features
Most people won’t be using this as an interface monitor, due to the price and because the existing Z32x should suffice when not dealing with film content at full resolution. Even more than the original DreamColor, I expect it will primarily be used as a dedicated full-screen playback monitor and users will have other displays for their user interface and controls. That said, HP has included some amazing interface and sharing functionality in the monitor, integrating a KVM switch for controlling two systems on any of the five available inputs. They also have picture-in-picture and split screen modes that are both usable and useful. HD or 2K input can be displayed at full resolution over any corner of the 4K master shot.

The split view supports two full-resolution 2048×2160 inputs side by side and from separate sources. That resolution has been added as a default preset for the OS to use in that mode, but it is probably only worth configuring for extended use. (You won’t be flipping between full screen and split very easily in that mode.) The integrated KVM is even more useful in these configurations. It can also scale any other input sizes in either mode but at a decrease in visual fidelity.

HP has included every option that I could imagine needing for sharing a display between two systems. The only problem is that I need that functionality on my “other” monitor for the application UI, not on my color critical review monitor. When sharing a monitor like this, I would just want to be able to switch between inputs easily to always view them at full screen and full resolution. On a related note, I would recommend using DisplayPort over HDMI anytime you have a choice between the two, as HDMI 2.0 is pickier about 18Gb cables, which can prevent you from sending RGB input, among other potential issues.

Other Functionality
The monitor has an RJ-45 port allowing it to be configured over the network. Normally, I would consider this to be overkill but with so many features to control and so many sub-menus to navigate through, this is actually more useful than it would be on any other display. I found myself wishing it came with a remote control as I was doing my various tests, until I realized the network configuration options would offer even better functionality than a remote control would have. I should have configured that feature first, as it would have made the rest of the tests much easier to execute. It offers simple HTTP access to the controls, with a variety of security options.

I also had some issues when using the monitor on a switched power outlet on my SmartUPS battery backup system, so I would recommend using an unswitched outlet whenever possible. The display goes to sleep automatically when the source feed is shut off, so power saving should be less of an issue than with other peripherals.

Pricing and Options
The DreamColor Z31x is expected to retail for $4,000 in the US market. If that is a bit out of your price range, the other option is the new Z27x G2 for half of that price. While I have not tested it myself, I have been assured that the newly updated 27-inch model has all of the same processing functionality, just in a smaller form-factor, with a lower-resolution panel. The 2560×1440 panel is still 10-bit, with all of the same color and frame rate options, just at a lower resolution. They even plan to support scaling 4K inputs in the next firmware update, similar to the original Z27x.

The new DreamColor studio displays are top-quality monitors, and probably the most accurate SDR monitors in their price range. It is worth noting that with a native brightness of 250 nits, this is not an HDR display. While HDR is an important consideration when selecting a forward-looking display solution, there is still a need for accurate monitoring in SDR, regardless of whether your content is HDR compatible. And the Z31x would be my first choice for monitoring full 4K images in SDR, regardless of the color space you are working in.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Behind the Title: Sim’s VP of Post LA, Greg Ciaccio

Name: Greg Ciaccio

Company: Sim

Can you describe your company?
We’re a full-service company providing studio space, lighting and grip, cameras, dailies and finishing in Los Angeles, New York, Toronto, Vancouver and Atlanta with outposts in New Mexico and Texas.

What’s your job title?
VP, Post Los Angeles

What does that entail?
Essentially, I’m the GM of our dailies, rentals and finishing businesses — the 2nd and 3rd floors of our building, formerly Kodak Cinesite. The first floor houses our camera rental business.

What would surprise people the most about what falls under that title?
I co-produce our SimLab industry events with Bill Russell in our camera department.

What’s your favorite part of the job?
Having camera, dailies, editorial and finishing under one roof — the workflows that tie them all together provide meaningful solutions for our clients.

What’s your least favorite?
Like most facility heads, business constraints. There aren’t many of them, which is great, but running any successful company relies on managing the magic.

What is your favorite time of the day?
The early mornings when I can power through management work so I can spend time with staff and clients.

If you didn’t have this job, what would you be doing instead?
Probably a post sound mixer. I teach post production management one night a week at CSUN, so that provides a fresh perspective on my role in the industry.

How early on did you know this would be your path?
I really started back in the 4th grade in lighting. I then ran and designed lighting in high school and college, moving into radio-TV-film halfway through. I then moved into production sound. The move from production to post came out of a desire for (fairly) regular hours and consistent employment.

Can you name some recent projects you have worked on?
TV series: Game of Thrones, The Gifted, Krypton, The Son, Madam Secretary, Jane the Virgin. On the feature dailies and DI side: Amy Poehler’s Wine Country.

We’re also posting Netflix’s Best Worst Weekend Ever in ACES (Academy Color Encoding System) in UHD/Dolby Vision HDR.

Game of Thrones

What is the project that you are most proud of?
Game of Thrones. The quality bar which HBO has set is evident in the look of the show. It’s so well-produced — the production design, cinematography, editing and visual effects are stunning.

Name three pieces of technology that you can’t live without.
My iPhone X, my Sony Z9D HDR TV and my Apple Watch.

What social media channels do you follow?
Instagram for DP and other creative photography interests; LinkedIn for general industry and influencer-driven news; Facebook for peripheral news and personal insights; and channels including ETCentric (USC ETC), ACES Central for ACES-related community info and Digital Cinema Society for industry events.

Do you listen to music while you work? Care to share your favorite music to work to?
I listen to Pandora. The Thievery Corporation station.

What do you do to de-stress from it all?
Getting out for lunch and walking when possible. I visit our staff and clients throughout the day. Morning yoga. And the music helps!

Understanding and partnering on HDR workflows

By Karen Moltenbrey

Every now and then a new format or technology comes along that has a profound effect on post production. Currently, that tech is high dynamic range, or HDR, which offers a heightened visual experience through a greater dynamic range of luminosity.

Michel Suissa

So why is HDR important to the industry? “That is a massive question to answer, but to make a pretty long story relatively short, it is by far one of the recent technologies to emerge with the greatest potential to change how images are affecting audiences,” says Michel Suissa, manager of professional solutions at The Studio–B&H. “Regardless of the market and the medium used to distribute programming, irrelevant to where and how these images are consumed, it is a clearly noticeable enhancement, and at the same time a real marketing gold mine for manufacturers as well as content producers, since a premium can be attached to offering HDR as a feature.”

And he should know. Suissa has been helping a multitude of post studios navigate the HDR waters in their quest for the equipment necessary to meet their high dynamic range needs.

Suissa started seeing a growing appetite for HDR roughly three years ago, both in the consumer and professional markets and at about the same time. “Three years ago, if someone had said they were creating HDR content, a very small percentage of the community would have known what they were talking about,” he notes. “Now, if you don’t know what HDR is and you’re in the industry, then you are probably behind the times.”

Nevertheless, HDR is demanding in terms of the knowledge one needs to create HDR content and distribute it, as well as make sure people can consume it in a way that’s satisfying, Suissa points out. “And there’s still a lot of technical requirements that people have to carefully navigate through because it is hardly trivial,” he says.

How does a company like B&H go about helping a post studio select the right tools for their individual workflow needs? “The basic yet critically important task is understanding their workflow, their existing tool set and what is expected of them in terms of delivery to their clients,” says Suissa.

To assist studios and content creators working in post, The Studio–B&H team follows a blueprint that’s based on engaging customers about the nature of the work they do, asking questions like: Which camera material do they work from? In which form is the original camera material used? What platform do they use for editing? What is the preferred application to master HDR images? What is the storage and network infrastructure? What are the master delivery specifications they must adhere to (what flavor of HDR)?

“People have the most difficulty understanding the nature of the workflow: Do the images need to be captured differently from a camera? Do they need to be ingested in the post system differently? Do they need to be viewed differently? Do they need to be formatted differently? Do they need to be mastered differently? All those things created a new set of specifications that people have to learn, and this is where it has changed the way people handle post production,” Suissa contends. “There’s a lot of intricacies, and you have to understand what it is you’re looking at in order to make sure you’re making the correct decisions — not just technically, but creatively as well.”

When adding an HDR workflow, studios typically approach B&H looking for equipment across their entire pipeline. However, Suissa states that similar parameters apply for HDR work as for other high-performance environments. People will continue to need decent workstations, powerful GPUs, professional storage for performance and increased capacity, and an excellent understanding of monitoring. “Other aspects of a traditional pipeline can sometimes remain in play, but it is truly a case-by-case analysis,” he says.

The most critical aspect of working with HDR is the viewing experience, Suissa says, so selecting an appropriate monitoring solution is vital — as is knowing the output specifications that will be used for final delivery of the content.

Without question, Suissa has seen an increase in the number of studios asking about HDR equipment of late. “Generally speaking, the demand by people wanting to at least understand what they need in order to deliver HDR content is growing, and that’s because the demand for content is growing,” he says.

Yes, there are compromises that studios are making in terms of HDR that are based on budget. Nevertheless, there is a tipping point that can lead to the rejection of a project if it is not up to HDR standards. In fact, Suissa foresees in the next six months or so the tightening of standards on the delivery side, whether for Amazon, Netflix or the networks, and the issuance of mandates by over-the-air distribution channels in order for content to be approved as HDR.

B&H/Light Iron Collaboration
Among the studios that have purchased HDR equipment from B&H is Light Iron, a Panavision company with six facilities spanning the US that offer a range of post solutions, including dailies and DI. According to Light Iron co-founder Katie Fellion, the number of their clients requesting HDR finishing has increased in the past year. She estimates that one out of every three clients is considering HDR finishing, and in some cases, they are doing so even if they don’t have distribution in place yet.

Suissa and Light Iron SVP of innovation Michael Cioni gradually began forging a fruitful collaboration during the last few years, partnering a number of times at various industry events. “At the same time, we doubled up on our relationship of providing technology to them,” Suissa adds, whether for demonstrations or for Light Iron’s commercial production environment.

Katie Fellion

For some time, Light Iron has been moving toward HDR, purchasing equipment from various vendors along the way. In fact, Light Iron was one of the very first vendors to become involved with HDR finishing when Amazon introduced HDR-10 mastering for the second season of one of its flagship shows, Transparent, in 2015.

“Shortly after Transparent, we had several theatrical releases that also began to remaster in both HDR-10 and Dolby Vision, but the requests were not necessarily the norm,” says Fellion. “Over the last three years, that has steadily changed, as more studios are selling content to platforms that offer HDR distribution. Now, we have several shows that started their Season 1 with a traditional HD finish, but then transitioned to 4K HDR finishes in order to accommodate these additional distribution platform requirements.”

Some of the more recent HDR-finished projects at Light Iron include Glow (Season 2) and Thirteen Reasons Why (Season 2) for Netflix, Uncle Drew for Lionsgate, Life Itself for Amazon, Baskets (Season 3) and Better Things (Season 2) for FX and Action Point for Paramount.

Without question, HDR is important to today’s finishing, but one cannot just step blindly into this new, highly detailed world. There are important factors to consider. For instance, the source requirements for HDR mastering — 4K 16-bit files — require more robust tools and storage. “A show that was previously shot and mastered in 2K or HD may now require three or four times the amount of storage in a 4K HDR workflow. Since older post facilities had been previously designed around a 2K/HD infrastructure, newer companies that had fewer issues with legacy infrastructure were able to adopt 4K HDR faster,” says Fellion. Light Iron was designed around a 4K+ infrastructure from day one, she adds, allowing the post house to much more easily integrate HDR at a time when other facilities were still transitioning from 2K to 4K.
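Fellion’s storage multiplier is easy to sanity-check with back-of-the-envelope math (my figures, not Light Iron’s). Raw pixel count quadruples from 2K to 4K, and the jump from 10-bit to 16-bit adds more on top; real-world growth lands closer to her three-to-four-times figure once file formats and compression are factored in.

```python
def frame_bytes(width, height, bits_per_channel, channels=3):
    # Uncompressed RGB frame size in bytes
    return width * height * channels * bits_per_channel / 8

two_k = frame_bytes(2048, 1080, 10)    # a 2K 10-bit master frame
four_k = frame_bytes(4096, 2160, 16)   # a 4K 16-bit HDR master frame

print(f"2K 10-bit frame: {two_k / 1e6:.1f} MB")    # ~8.3 MB
print(f"4K 16-bit frame: {four_k / 1e6:.1f} MB")   # ~53.1 MB
print(f"raw ratio: {four_k / two_k:.1f}x")         # ~6.4x uncompressed
```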

Nevertheless, this adoption required changes to the post house’s workflow. Fellion explains: “In a theatrical world, because HDR color is set in a much larger color gamut than P3, the technically correct way to master is to start with the HDR color first and then trim down for P3. However, since HDR theatrical exhibition is still in its infancy, there are not options for most feature films to monitor in a projected environment — which, in a feature workflow, is an expected part of the finishing process. As a result, we often use color-managed workflows that allow us to master first in a P3 theatrical projection environment and then to version for HDR as a secondary pass.”

Light Iron NY colorist Steven Bodner grading the music video Picture Day in HDR on a Sony BVM-X300.

In the episodic world, if a project is delivering in HDR, unless creative preference determines otherwise, Light Iron will typically start with the HDR version first and then trim down for the SDR Rec.709 versions.

For either, versioning and delivery have to be considered. For Dolby Vision, this starts with an analysis of the timeline to output an XML for the 709 derivative, explains Fellion of Light Iron’s workflow. And then from that 709 derivative, the colorist will review and tweak the XML values as necessary, sometimes going back to the HDR version and re-analyzing if a larger adjustment needs to be made for the Rec.709 version. For an HDR-10 workflow, this usually involves a different color pass and delivered file set, as well as analysis of the final HDR sequence, to create metadata values, she adds.

Needless to say, embracing HDR is not without challenges. Currently, HDR is only used in the final color process since there aren’t many workflows to support HDR throughout the dailies or editorial process, says Fellion. “This can certainly be a challenge to creatives who have spent the past few months staring at images in SDR only to have a different reaction when they first view them in HDR.” Also, in HDR there may be elements on screen that weren’t previously visible in SDR dailies or offline (such as outside a window or production cables under a table), which creates new VFX requirements in order to adjust those elements.

“As more options are developed for on-set monitoring — such as Light Iron’s HDR Video Village System — productions are given an opportunity to see HDR earlier in the process and make mental and physical adjustments to help accommodate for the final HDR picture,” Fellion says.

Having an HDR monitor on set can aid in flagging potential issues that might not be seen in SDR. Currently, however, for dailies and editorial, HDR monitoring is not really used, according to Fellion, who hopes to see that change in the future. Conversely, in the finishing world, “an HDR monitor capable of a minimum 1,000-nit display, such as the Sony [BVM] X300, as well as a consumer-grade HDR UHD TV for client reviews, are part of our standard tool set for mastering,” she notes.

In fact, several months ago, Light Iron purchased new high-end HDR mastering monitors from B&H. The studio also sourced AJA Hi5 4K Plus converter boxes from B&H for its HDR workflow.

And, no doubt, there will be additional HDR equipment needs in Light Iron’s future, as delivery of HDR content continues to ramp up. But there’s a hefty cost involved in moving to HDR. Depending on whether a facility’s DI systems already had the capacity to play back 4K 16-bit files — a key requirement for HDR mastering — the cost can range from a few thousand dollars for a consumer-grade monitor to tens of thousands for professional reference monitoring, DI system, storage and network upgrades, as well as licensing and training for the Dolby Vision platform, according to Fellion.

That is one reason why it’s important for suppliers and vendors to form relationships. But there are other reasons, too. “Those leading the charge [in HDR] are innovators and people you want to be associated with,” Suissa explains. “You learn a lot by associating yourself with professionals on the other side of things. We provide technology. We understand it. We learn it. But we also practice it differently than people who create content. The exchange of knowledge is critical, and it enables us to help our customers better understand the technology they are purchasing.”

Main Image: Netflix’s Glow


Karen Moltenbrey is a longtime technical writer with more than two decades of experience in segments of the CG and post industries.

Colorist Arianna Shining Star joins Apache

Santa Monica-based color and finishing boutique Apache has added colorist Arianna Shining Star to its roster. She is the studio’s first woman colorist.

Star’s commercial work includes spots and branded shorts for Apple, Nike, Porsche, Budweiser, Tommy Hilfiger, Spotify and Coca-Cola. Her music video credits include the MTV VMA-nominated videos Wild Thoughts for Rihanna and Justin Bieber’s visual album for Purpose. Her longform work includes newly released Netflix feature film Ibiza, a comedy co-produced by Adam McKay and Will Ferrell’s Gary Sanchez Productions.

After studying Cinematic Arts and Psychology at USC, Shining Star cut her teeth at Company 3 as an assistant colorist. She then worked as a Baselight specialist for FilmLight before joining Paramount Pictures, where she remastered feature films in HDR. She was then brought on as colorist at Velem to spearhead the post production department of Milk Studios.

“Arianna worked with us before, and we’ve always had our eye on her,” says managing partner LaRue Anderson. “She’s super-talented and a true go-getter who’s amassed an awesome body of work in a relatively short time.”

With Northern California roots, Arianna’s distinctive middle name (she goes by her first and middle names professionally) comes from her parents, who met at a Grateful Dead concert during a performance of the Jerry Garcia classic song, “Shining Star.” Something of a next-gen Dead Head herself, she admits to having seen the current iteration of the band over 30 times.

Her background and interest in psychology are clear as she explains what attracts her most to color grading: “It has the ability to elevate not only production value and overall aesthetic, but can help guide the viewers’ emotional journey through the piece,” Star says. “I love the opportunity to put the finishing touches on a piece, too. After countless people have poured their heart and soul into crafting a film, it’s an immense privilege to have the last creative touch.”

On adding the first woman colorist to the Apache roster, Anderson says it’s a testament to Star’s creative skills that she’s flourished in what’s largely a male-dominated category of post production. “There’s a lack of role models for women coming up in the creative ranks of color and visual effects,” she explains. “Women have to work hard to get on the playing field. Arianna is not only on the field, she owns the field. She’s established herself as a specialist who DPs and directors lean on for creative collaboration.”

“I want to be seen for the quality of my work and nothing else,” she says. “What makes me unique as a colorist is not my gender, but my aesthetic and approach to collaboration — my style runs the gamut from big and bold to soft and subtle.”

She cites her work on Ibiza as an example of this versatility. “Comedies typically play it safe with color, but from day one we sought to do something different and color outside the lines,” she says. “Director Alex Richanbach and cinematographer Danny Moder set me up with an incredibly diverse palette that allowed us to go bold and use color to further enhance the three different worlds seen in the film: New York, Barcelona and Ibiza. Narrative work really allows you to take your viewer on a journey with the color grade.”

At Apache, Star says she’s found a home where she can continue to learn the craft. “They’re true veterans who know the ins and outs of this wild industry and are incredible leaders,” she says of Anderson and her partners, Shane Reed and Steve Rodriguez. “And their three key core tenets drew me. One, we’re a creatively driven company. Two, we’re consistently re-evaluating the playbook and figuring out what works and what we can improve. And three, we truly operate like a family and support one another. We’ve got a crew of talented artists, and it’s a privilege to work alongside them.”

Color for Television Series

By Karen Maierhofer

Several years ago I was lucky enough to see Van Gogh’s original The Starry Night oil on canvas at a museum and was awestruck by how rich and vibrant it really was. I had fallen in love with the painting years before after seeing reproductions/reprints, which paled in comparison to the original’s striking colors and beauty. No matter how well done, the reproductions could never duplicate the colors and richness of the original masterpiece.

Just as in the art world, stories told via television are transformed through the use of color. Color grading and color correction help establish a signature look for a series, though that can, and often does, change from one episode to another — or from one scene to another — based on the mood the DP and director want to portray.

Here we delve into this part of the post process and follow a trio of colorists as they set the tone for three very different television series.

Black-ish
Black-ish is an ABC series about a successful African-American couple raising their five children in an affluent, predominantly white neighborhood. Dre, an advertising executive, is proud of his heritage but fears that culture is lost when it comes to his kids.

There is no struggle, however, when it comes to color grading the show, a job that fell to colorist Phil Azenzer of The Foundation in Burbank starting with this past season (Season 4).

The show is shot using an Arri Alexa camera. The dailies are then produced by the show’s in-house editor. The files, including the assembly master, are sent to Azenzer, who uses the raw camera files for his color grading, which is done using Blackmagic’s Resolve.

Azenzer starts a scene by rolling into the establishing shot and sets the look there because “you can see all light sources and their color temperatures,” he says. “I get a feel for the composition of the shot and the gradation of shadow to light. I see what light each of the actors is standing in or walking through, and then know how to balance the surrounding coverage.”

In his opinion, networks, for the most part, like their half-hour comedies to be well lit, more chromatic, with less shadow and contrast than an average one-hour drama, in order to create a more inviting, light feel (less somber). “And Black-ish is no different, although because of the subject matter, I think of Black-ish as more of a ‘dramedy,’ and there are scenes where we go for a more dramatic feel,” Azenzer explains.

Black-ish’s main characters are African-American, and the actors’ skin tones vary. “Black-ish creator Kenya Barris is very particular about the black skin tones of the actors, which can be challenging because some tones are more absorbent and others more reflective,” says Azenzer. “You have to have a great balance so everyone’s skin tone feels natural and falls where it’s supposed to.”

Phil Azenzer

Azenzer notes that the makeup department does an excellent job, so he doesn’t have to struggle as much with pulling out the bounce coming off the actors’ skin as a result of their chromatic clothes. He also credits DP Rob Sweeney (with whom he has worked on Six Feet Under and Entourage) with “a beautiful job of lighting that makes my life easier in that regard.”

While color grading the series, Azenzer avoids any yellow in skin tones, per Barris’s direction. “He likes the skin tones to look more natural, more like what they actually are,” he says. “So, basically, the directive was to veer away from yellow and keep it neutral to cool.”

While the colorist follows that direction in most scenes, he also considers the time of day the scene takes place when coloring. “So, if the call is for the shot to be warm, I let it go warm, but more so for the environment than the skin tones,” explains Azenzer.

Most of the show is shot on set, with few outdoor sequences. However, the scenes move around the house (kitchen, living room, bedrooms) as well as the ad agency where Dre works. “I have some preferred settings that I can usually use as a starting point because of the [general] consistency of the show’s lighting. So, I might ripple through a scene and then just tighten it up from there,” says Azenzer. “But my preference as a colorist is not to take shortcuts. I don’t like to plug something in from another episode because I don’t know if, in fact, the lighting is exactly the same. Therefore, I always start from scratch to get a feel for what was shot.”

For instance, shots that take place in Dre’s office play out at various points in the day, so that lighting changes more often.

The office setting contains overhead lighting directly above the conference table, like one would find in a typical conference room. It’s a diffused lighting that is more intense directly over the table and diminishes in intensity as it feathers out over the actors, so the actors are often moving in and out of varying intensities of light on that set. “It’s a matter of finding the right balance so they don’t get washed out and they don’t get [too much shadow] when they are sitting back from the table,” explains Azenzer. “That’s probably the most challenging location for me.”

Alas, things changed somewhat during the last few episodes of the season. Dre and his wife, Rainbow, hit a rough patch in their marriage and separate. Dre moves into a sleek, ultra-modern house in the canyon, with two-story ceilings and 20-foot-tall floor-to-ceiling windows — resulting in a new location for Azenzer. “It was filled with natural light, so the image was a little flat in those scenes and awash with light and a cool aura,” he describes. Azenzer adjusted for this by “putting in extra contrast, double saturation nodes, and keying certain colors to create more color separation, which helps create overall separation and depth of field. It was a fun episode.”

In the prior episode, the show toggles back and forth from flashbacks of Bow and Dre from happier times in their marriage to present day. Azenzer describes the flashbacks as saturated with extremely high contrast, “pushing the boundaries of what would be acceptable.” When the scene switched to present day, instead of the typical look, it was shot with the movie Blue Valentine in mind, as the characters discussed separating and possibly divorcing.

“Those scenes were shot and color corrected with a very cool, desaturated look. I would latch onto maybe one thing in the shot and pop color back into that. So, it would be almost grayish blue, and if there was a Granny Smith apple on the counter, I grabbed that and popped it, made it chromatic,” explains Azenzer. “And Dre’s red sweatshirt, which was desaturated and cool along with the rest of the scene, I went back in there and keyed that and popped the red back in. It was one of the more creative episodes we did.”
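That selective pop is, mechanically, a hue key layered over a global desaturation. As a rough illustration of the idea, here is a simplified Python/OpenCV sketch — not Azenzer’s actual Resolve node tree, and the frame path is hypothetical:

```python
import cv2
import numpy as np

# Load a frame (hypothetical path) and convert to HSV so hue can be keyed.
frame = cv2.imread("frame.png")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV).astype(np.float32)

# Build a soft mask around red hues (OpenCV hue runs 0-179;
# red wraps around, so key both ends of the range).
hue = hsv[:, :, 0]
sat = hsv[:, :, 1]
mask = (((hue < 10) | (hue > 170)) & (sat > 60)).astype(np.float32)
mask = cv2.GaussianBlur(mask, (31, 31), 0)  # feather the key edges

# Desaturate globally, then blend full saturation back inside the mask.
cool = hsv.copy()
cool[:, :, 1] *= 0.15                       # near-grayscale base look
cool[:, :, 1] = cool[:, :, 1] * (1 - mask) + hsv[:, :, 1] * mask

out = cv2.cvtColor(cool.astype(np.uint8), cv2.COLOR_HSV2BGR)
cv2.imwrite("frame_graded.png", out)
```

A colorist’s qualifier does the same job with far finer control over hue, saturation and luminance ranges, plus tracking and window isolation.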

When Azenzer first took over coloring the show, “everybody was involved,” he says. “I had a relationship with Rob Sweeney, but I was new to Kenya, the post team, and co-producer Tom Ragazzo, so it was very collaborative at the beginning to nail the look they were going for, what Kenya wanted. Now we are at the point where, when I finish an episode, I give Rob a heads-up and he’ll come over that day or whenever he can and bring lunch, and I play it back for him.”

It’s not as if the episodes are without change, though Azenzer estimates that 85 percent of the time Sweeney says, “‘Beautiful job,’ and is out the door.” When there are changes, they usually involve something nominal on just a shot or two. “We are never off-base to where we need to redo a scene. It’s usually something subjective, where he might ask me to add a Power Window to create a little shadow in a corner or create a light source that isn’t there.”

Azenzer enjoys working on Black-ish, particularly because of the close relationship he has with those working on the show. “They are all awesome, and we get along really well and collaborate well,” he says. Indeed, he has forged bonds with this new family of sorts on both a professional and personal level, and recently began working on Grown-ish, a spin-off of Black-ish that follows the family’s eldest daughter after she moves away to attend college.

The 100
Dan Judy, senior colorist at DigitalFilm Tree (DFT) in Hollywood, has been working on The CW’s The 100 since the pilot in 2014, and in that time has helped evolve it into a gritty-looking show. “It started off with more of an Eden-type environment and has progressed into a much grittier, less friendly and dangerous place to live,” he says.

The 100 is a post-apocalyptic science-fiction drama that centers on a group of juvenile offenders from aboard a failing space station who are sent to Earth following a nuclear apocalypse there nearly a century earlier. Their mission: to determine whether the devastated planet is habitable. But soon they encounter clans of humans who have survived the destruction.

“We have geographical locations that have a particular look to them, such as Polis (the capital of the coalition),” says Judy of the environment set atop rolling hills lush with vegetation. “In this past season, we have the Eden environment — where after the planet incurs all this devastation, the group finds an oasis of thriving foliage and animal life. Then, gradually, we started backing off the prettiness of Eden and making it less colorful, a little more contrasty, a little harsher.”

The series is shot in Vancouver by DP Michael Blundell. The dailies are handled by Bling Digital’s Vancouver facility, which applies color with the dailies cut. As an episode is cut, Bling then ships drives containing the camera master media and the edit decision list to DFT, which assembles the show with a clip-based approach, using the full-resolution camera masters as its base source.

“We aren’t doing a transcode of the media. We actually work directly, 100 percent of the time, from the client camera master,” says Judy, noting this approach eliminates the possibility of errors, such as dropouts or digital hits that can result from transcoding. “It also gives me handles on either end of a shot if it was trimmed.”
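Those handles are just a timecode offset applied when pulling media for the conform. A minimal sketch of the arithmetic, assuming 24fps and a 12-frame handle — both illustrative choices, not DFT’s actual settings:

```python
FPS = 24  # assumed frame rate for this sketch

def tc_to_frames(tc: str, fps: int = FPS) -> int:
    """Convert HH:MM:SS:FF timecode to an absolute frame count."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frames_to_tc(frames: int, fps: int = FPS) -> str:
    ss, ff = divmod(frames, fps)
    mm, ss = divmod(ss, 60)
    hh, mm = divmod(mm, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def source_range_with_handles(src_in: str, src_out: str, handles: int = 12):
    """Extend an EDL event's source range so the conform pulls extra
    frames on both ends, allowing later trims without re-ingest."""
    start = max(0, tc_to_frames(src_in) - handles)
    end = tc_to_frames(src_out) + handles
    return frames_to_tc(start), frames_to_tc(end)

# Example: a shot cut from 01:02:10:00 to 01:02:14:12 in the EDL.
print(source_range_with_handles("01:02:10:00", "01:02:14:12"))
# -> ('01:02:09:12', '01:02:15:00')
```

Working clip-based from the camera masters means those extra frames are simply there to grab; a transcoded pull would have baked the trim in.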

Dan Judy

Vancouver-based Blundell sets the palette, but he conveys his ideas and concepts to Tim Scanlan, director and supervising producer on the show, with whom Judy has a longstanding relationship — they worked together years before on Smallville. “Then Tim and I will sit down and spot the show, setting looks for the scenes, and after the spotting session, I will fill in the gaps to give it a consistent look,” says Judy. Although Scanlan is in nearby Santa Monica, due to LA’s traffic, he and Hollywood-based Judy collaborate remotely, to save valuable time.

“I can remote into [Scanlan’s] system and color correct with him in full resolution and in realtime,” explains Judy. “I can play back the reference file with the dailies color on it, and I can split-screen that with him in realtime if he wants to reference the dailies color for that particular scene.”

For coloring the show, Judy uses Blackmagic’s DaVinci Resolve, which is also used to conform the series. Using Resolve’s Project Management tools, the editors and colorists “can all work on the project and contribute to it live, in realtime, simultaneously,” Judy points out. “So, I can be color correcting at the same time the editor is building the show, and getting all of his updates in mere seconds.”

Scanlan uses a remote Resolve system with a monitor that is calibrated to Judy’s, “so what he is seeing on his end is an exact replica of what I’m seeing in my room,” Judy says.

One scene in The 100 that stands out for Judy occurs early in the Season 5 premiere, which finds Clarke Griffin, one of the prisoners, trapped in a wasteland. He explains: “We had several different evolutions of what that look was going to be. I gave them a few designs, and they gave me some notes. Before the show was cut, they gave me little snippets of scenes to look at, and I did test looks. They came back and decided to go with one of those test looks at first, and then as the show progressed, we decided, collaboratively, to redesign the look of the scene and go with more of a sepia tone.”

Much of The 100 is filmed outdoors, and as everyone knows, nature does not always cooperate during shoots. “They deal with a lot of different weather conditions in Vancouver, unlike LA. They’ll get rain in the middle of a scene. Suddenly, clouds appear, and you have shadows that didn’t exist before. So, when that’s the only footage you have, you need to make it all blend together,” explains Judy. “Another challenge is making these amazing-looking sets look more natural by shadowing off the edges of the frame with power windows and darkening parts of the frame so it looks like the natural environment.”

Judy points to the character Becca’s abandoned lab — an elaborate set from last year’s season — as a scene that stands out for him. “It was an amazing set, and in wide shots, we would shape that picture with power windows and use color levels and desaturation to darken it, and then color levels and saturation to brighten up other areas,” he says. “This would make the room look more cavernous than it was, even though it was large to begin with, to give it more scope and vastness. It also made the room look dramatic yet inviting at the same time.”

All in all, Judy describes The 100 as a very edgy, dramatic show. “There’s a lot going on. It’s not your standard television fare. It’s very creative,” he says. “Tim and I did a lot of color design on Smallville, and we’re carrying on that tradition in The 100. It’s more feature-esque, more theatrical, than most television shows. We add grain on the picture to give it texture; it’s almost imperceptible, but it gives a slightly different feel than other shows. It’s nice to be part of something where I’m not just copying color for a standardized, formulaic show. This series gives me the opportunity to be creative, which is awesome.”

Dear White People
Sometimes color grading decisions are fairly standard on television shows. Black and white, so to speak. Not so for the Netflix series Dear White People, a comedy-drama spin-off from the 2014 film of the same name, which follows students of color at a predominantly white Ivy League college as they navigate various forms of discrimination — racial and otherwise.

Helping achieve the desired look for the series fell to senior colorist Scott Gregory from NBCUniversal StudioPost. Starting with Season 1, day one, “the show’s creator, Justin Simien, DP Jeffrey Waldron, executive producer Yvette Lee Bowser and I huddled in my bay and experimented with different ‘overall’ looks for the show,” notes Gregory.

Simien then settled on the “feel” that is present throughout most of the series. Once he had locked a base look, the group then discussed how to use color to facilitate the storytelling. “We created looks for title cards, flashbacks, historical footage, locations and even specific characters,” Gregory says.

Using stills he had saved during those creative meetings as a guide, he then color corrects each show. Once the show is ready for review, the executive producers and DP provide notes — during the same session if schedules permit, or separately, as is often the case. If any of the creatives cannot be present, stills and color review files are uploaded for review via the Internet.

According to Gregory, his workflow starts after he receives a pre-conformed 4:4:4 MXF video assembled master (VAM) and an EDL supplied by online editor Ian Lamb. Gregory then performs a process pass on the VAM using Resolve, whereby he re-renders the VAM, applying grain and two Digital Film Tools (DFT) optical filters. This gives the Red camera footage a more weathered, filmic look. This processing, however, is not applied to the full-frame television show inserts to better separate them from the visual palette created for the show by Simien, Bowser and DPs Waldron and Topher Osborn.
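Grain passes like this are conceptually simple: weighted per-pixel noise, usually biased toward the midtones so blacks and clipped highlights stay clean. A rough numpy sketch of the idea — not the actual DFT filters, just the principle:

```python
import numpy as np

def add_grain(frame, strength=0.04, seed=None):
    """Overlay simple film-style grain on a float RGB image in [0, 1].

    Noise is weighted by a midtone curve so shadows and clipped
    highlights stay relatively clean, which is roughly how
    photochemical grain reads on print stock.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, 1.0, frame.shape[:2])[..., None]
    # Midtone weighting: peaks at 0.5 luminance, falls off at the ends.
    luma = frame.mean(axis=2, keepdims=True)
    weight = 4.0 * luma * (1.0 - luma)
    return np.clip(frame + strength * noise * weight, 0.0, 1.0)

# Usage on a stand-in frame:
frame = np.random.rand(2160, 3840, 3).astype(np.float32)
grainy = add_grain(frame, strength=0.04, seed=1)
```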

Scott Gregory

Once the VAM is processed, Gregory creates a timeline using the EDL, the processed VAM, and the temp audio, applies a one-light correction to all of the shots, and gets to work. As the color progresses, he drops in the visual effects, cleaned shots, composited elements, and some titles as they are delivered. Once the show is locked for color and VFX approval, he renders out a 3840×2160 UHD final 4:4:4 MXF color-timed master, which then goes back to the online editor for titling and delivery.

“Blue contaminated and lifted blacks, strong vignettes, film-grain emulation and warm, compressed filmic highlights are characteristics present in most of the show,” says Gregory. “We also created looks for Technicolor two-strip, sepia, black-and-white silent-era damaged print, and even an oversaturated, diffused, psychedelic drug trip scene.”

The looks for the flashback or “historical” sequences, usually somewhere in Act I, were created for the most part in Resolve. Many of these sequences or montages jump through different time periods. “I created a black-and-white damaged film look for the 1800s, Technicolor two-strip for the early 1900s, a faded-emulsion [Kodak] Ektachrome [film] look for the ’70s, and a more straightforward but chromatic look for the ’80s,” says Gregory.
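Of those looks, two-strip Technicolor is the most mechanical to approximate digitally: the historical process captured only red and green records, so a quick emulation synthesizes the missing blue from green. A crude sketch of one common shortcut — not Gregory’s actual recipe:

```python
import numpy as np

def two_strip(frame):
    """Crude two-strip Technicolor emulation on an RGB float image.

    The historical process captured red and green records only, so
    blue is synthesized from green, pushing the palette toward the
    familiar red/cyan two-strip look.
    """
    out = frame.copy()
    out[:, :, 2] = frame[:, :, 1]         # fake the missing blue record
    out[:, :, 1] = 0.85 * frame[:, :, 1]  # slightly duck green, to taste
    return np.clip(out, 0.0, 1.0)
```

A production-grade version would layer density curves, halation and registration errors on top of the channel substitution.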

Simien also wanted to use color “themes” for specific characters. This was reflected in not only the scenes that included the featured character for that particular show, but also in the title card at the head of the show. (The title card for each show has a unique color corresponding to the featured character of that episode.)

When coloring the series, Gregory inevitably encounters processing issues. “Using all the filters and VFX plug-ins that I do on this show and being in UHD resolution both eat up a lot of processing power. This slows down the software significantly, no matter what platform or GPUs are being used,” he says. In order to keep things up to speed, he decided to pre-render, or bake in, the grain and some of the filters that were to be used throughout each show.

“I then create a new timeline using the pre-rendered VAM and the EDL, and set a base correction,” Gregory explains. “This workflow frees up the hardware, so I can still get realtime playback, even with multiple color layers, composites and new effects plug-ins.”
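The bake-in he describes is the classic render-cache trade: pay for the heavy filters once, write an intermediate, and let playback read that intermediate instead of recomputing. A schematic Python sketch of the pattern — the file layout and heavy_filter placeholder are illustrative only:

```python
from pathlib import Path
import numpy as np

CACHE = Path("prerender_cache")
CACHE.mkdir(exist_ok=True)

def heavy_filter(frame):
    # Placeholder for the expensive grain/optical-filter pass.
    return np.clip(frame * 1.02, 0.0, 1.0)

def baked_frame(source_frames, idx):
    """Return frame idx with heavy effects applied, rendering it
    at most once; later playback hits the cached copy on disk."""
    cached = CACHE / f"{idx:06d}.npy"
    if cached.exists():
        return np.load(cached)
    result = heavy_filter(source_frames[idx])
    np.save(cached, result)
    return result

# First pass renders and caches; every pass after that is a disk read,
# leaving the hardware free for live color layers on top.
source = [np.random.rand(270, 480, 3).astype(np.float32) for _ in range(24)]
clip = [baked_frame(source, i) for i in range(len(source))]
```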

Gregory is hardly new to color grading, having a long list of credits, including television series, full-length movies and short films. And while working on Season 1 and the recently released Season 2 of Dear White People, he appreciated the collaborative environment. “Justin is obviously very creative and has a discerning eye. I have really enjoyed the collaborative space in which he, Yvette, Jeffrey and Topher like to work,” he says. “Justin likes to experiment and go big. He wants the artists he works with to be a part of the creative process, and I think he believes that in the end, his final product will benefit from it. It makes for good times in the color bay and a show we are all very proud of.”


Karen Maierhofer is a longtime technical writer with more than two decades of experience in segments of the CG and post industries.

Behind the Title: Deluxe Senior Finishing Editor Samantha Uber

NAME: Samantha Uber (@samanthauber)

COMPANY: Deluxe NY

CAN YOU DESCRIBE YOUR COMPANY?
Deluxe NY is the New York City branch of the classic film lab founded in 1915. Today, we are a huge multimedia international empire for all types of content creation and delivery. My favorite part of working for this company is that we manage to serve our clients in a personalized, boutique environment but with the support of a worldwide network of both technology and ideas.

WHAT’S YOUR JOB TITLE?
Senior Finishing Editor

CAN YOU EXPLAIN WHAT YOU DO?
I am a Flame finishing editor/VFX artist, and I come from an Avid online and offline editorial background. I also use Blackmagic Resolve, Adobe Premiere and Apple FCP for their various abilities for different projects. While I always fully finish (conform/online) episodic and film projects in Flame, I also always use a unique mix of those applications listed above for each project to get me to that point in the highest quality and most efficient way possible. I am very interested in the building of the computer I am working on, the specialized scripts to make data organized, the debayer/color science process and, of course, the actual editing and delivery of the project.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
In my job as a finishing editor, I am surprisingly super-involved in dailies, mainly because I know what will make the job easier on the finishing editor if certain metadata is retained and organized in dailies. Seeing how the metadata coming from the dailies process is actually implemented in finishing allows me to have a unique perspective, and I teach dailies techs about this to give them a better understanding of how their work is being used.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Everyone who knows me, knows my favorite thing is a reconform. I love them. They are challenging, like giant Tetris puzzles — my favorite game growing up was Tetris. I love getting in the zone for hours and hours, moving the pieces of the timeline around, relying on the metadata the Flame gives me to do it more efficiently, and sometimes, not even looking at the actual picture until the end.

WHAT’S YOUR LEAST FAVORITE?
For me, my least favorite thing is working on something that doesn’t challenge me. I like to constantly be thinking about ways to process new camera formats and new workflows, and understanding/being involved in the entire online process from start to finish. I love the “hard” jobs… the tough ones to figure out, even if that means I lose quite a bit of sleep (she laughs). There is always a limit to that, of course, but if I’m not involved in research and development on a project, I’m not happy. For this reason, I love working in episodic television the most because I can R&D a workflow and then use it and perfect it over time, all while building a close relationship with my clients and feeling ownership of my show.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
I’d say mid-afternoon and around 9pm. After the morning fires are put out and everything gets going, the middle of the afternoon gets a lot of work done. I also enjoy working around 9pm because the formal working day has pretty much ended and I can just zero in on a project and work quietly, without distractions.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I really love restoring antiques, whether it’s furniture or the 100-year-old Victorian home I live in. I am always working with my hands — either at work or at home — building, painting, grooming dogs, veggie-gardening, cooking, sculpting, etc. I appreciate the craftsmanship that went into antique pieces. I feel that type of work is lost in today’s disposable world.

What I do for films as a finishing editor is quite like the restoration work I do at home — taking something and realizing it to its full potential and giving it a new life. For these reasons I think I could possibly be an architect/designer, specializing in the mostly period-accurate restoration of antique homes. I still may end up doing this many years from now.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I knew very early on that I wanted to be a film editor of some sort. I was 16 years old when the film Moulin Rouge came out, and my best friend Michelle and I saw it in the theater. We both knew we wanted to do something technical and creative from that point. She became a computer engineer, and I became a senior finishing editor. I loved the editing and pacing of that film, how it was so much like the music videos I grew up watching, and I wanted to be able to tell a story with VFX and editing. I actually practiced on the Moulin Rouge DVD extras re-editing the scenes on the ISOs of the cameras they provided.

I was 16 when I applied to NYU’s Tisch School of the Arts. It was my only choice for college. I initially went for a summer between my junior and senior year of high school and continued after high school for three more years until I graduated. I was working as a freelance editor for students, working at MTV as a junior editor, and teaching Avid editing at NYU during that time — always working!

Moulin Rouge is still my favorite film, and my dream is to work with director Baz Luhrmann one day.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I have worked as senior finishing editor on Paterno, High Maintenance, Girls, Vinyl and Boardwalk Empire for HBO, The Knick for Cinemax, Blue Bloods for CBS, The Americans for FX, Jesus Christ Superstar for NBC and Mr. Robot for USA. I worked on the film All These Small Moments for the 2018 Tribeca Film Festival, as well as the films Beasts of No Nation and Moonrise Kingdom in recent years.

YOU HAVE WORKED ON ALL SORTS OF PROJECTS. DO YOU PUT ON A DIFFERENT HAT WHEN CUTTING FOR A SPECIFIC GENRE?
I certainly put on a different workflow hat for the different parts of my job. It actually feels like different jobs sometimes —  painting a visual effect, building a computer, making a finishing workflow, conforming a show, debayering footage, designing a dailies workflow, etc. I think that keeps it interesting; doing something different every day.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
The project I am most proud of is The Knick. I was involved in the process of creating the workflow of the show with Steven Soderbergh’s team for one year before it actually began. I believe it was the first show to use the Red Dragon camera at 5K, finishing at UHD. I worked intensely with the Red team to develop the software, color workflow and computer for debayering the footage.

I also worked closely with colorist Martin Zeichner and Steven’s team to retain the exact on-set look of color immediately and efficiently, while also giving them the full latitude of the Red format in the DI. The result was beautiful, and I really enjoyed the show. I felt like the plot of the show — innovation in the surgical field — was being mirrored in the innovation in the actual finishing of the show, which was super awesome!

CAN YOU TALK MORE ABOUT THE TOOLS YOU USE?
For all final finishing, I use Autodesk Flame. I am proficient in nearly all platforms, but to me, nothing is better than the unique timeline in Flame, where layers see each other and tracks do not. This allows you to have many versions of a cut in one timeline, and is ideal for finishing. Also, the VFX capability of the Flame is unparalleled in an editing system, and it allows me to start working on anything in moments at the client’s request. However, Avid will always be my favorite for metadata and database management, and I usually start every project with a peek at the metadata in the Avid, and frequently a full reorganization.

WHAT IS YOUR FAVORITE PLUGIN?
My favorite and most frequently used plugin is Re:Vision’s Twixtor, for the tons and tons of timewarps I do. This plugin helps me paint fewer frames than most. Close runners-up are Autodesk’s Autostabilize, which is actually highly customizable, and Furnace’s F-WireRemoval for all sorts of purposes.

ARE YOU OFTEN ASKED TO DO MORE THAN EDIT? 
Being a finishing editor means you are the last person to touch the project before it airs, so you are the last stop in everything. For that reason, I am often asked to do anything and everything in session — re-mix sound, creatively re-edit, give advice on VFX shots and deliverables, do VFX shots, make masters, QC masters. You name it and I do it in session. I think that’s what the job really entails: being able to give the client what they are looking for at the last possible moment, especially now that they are seeing the final product in high resolution and color corrected.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
I could not live without my iPhone, as it connects me to the outside world as well as my job. It’s like my whole life is on my phone. I could also not live without my Wacom tablet. Finishing an edit is a whole lot easier on a tablet. Also, my super-fast cylinder Mac, outfitted so that every application and high-resolution footage can be processed extremely quickly. I still do wish my Mac was square, however (she laughs), for more equipment compatibility, but I cannot complain about its high-speed processing ability. Engineering has kindly given me a Mac that I can play on and try new software, often before it is rolled into production throughout the facility. This keeps me in the know on new developments in our industry. This computer is totally separate from my super-powerful Linux Flame system.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Yes, this is a high-stress job! I feel very responsible for all of the people who have put their hard work into a project to make sure it is shown in its best light and everything is as perfect as possible on often-tight deadlines. After a project leaves my hands it goes to QC, and my final work is what they see and what airs.

Because everything I do is on computers, I try to spend as little time on a computer outside of work as possible. As I mentioned before, I live in a 100-year-old house that I am restoring myself. What is nice is that I feel like I’m using the same part of my brain as I do at my job; however, it is usually outdoors and involves physical labor. That is a great de-stressor from working on a computer in a windowless and darkened room all week.

I live far outside the city by the beach, and when I’m home, I’m really home and work seems a world away. I have two beautiful Afghan Hound sister dogs, Ginny and Trill, and a 1974 VW bus named Buddy. I honestly don’t like to rest. I always like to be working on my projects and pushing forward in my life, and I am just your typical Jersey girl at heart.

Sim and the ASC partner on educational events, more

During Cine Gear recently, Sim announced a 30-year sponsorship with the American Society of Cinematographers (ASC). Sim offers end-to-end solutions for creatives in film and television, and the ASC is a nonprofit focusing on the art of cinematography. As part of the relationship, the ASC Clubhouse courtyard will now be renamed Sim Plaza.

Sim and the ASC have worked together frequently on events that educate industry professionals on current technology and its application to their evolving craft. As part of this sponsorship, Sim will expand its involvement with the ASC Master Classes, SimLabs, and conferences and seminars in Hollywood and beyond.

During an official ceremony, a commemorative plaque was unveiled and embedded into the walkway of what is now Sim Plaza in Hollywood. Sim will also host a celebration of the ASC’s 100th anniversary in 2019 at Sim’s Hollywood location.

What else does this partnership entail?
• The two organizations will work together closely over the next 30 years on educational events for the cinematography community. Sim’s sponsorship will help fund society programs and events to educate industry professionals (both practicing and aspiring) on current technology and its application to the evolving craft.
• The ASC Master Class program, SimLabs and other conferences and seminars will continue over these 30 years, with Sim increasing its involvement. Sim is not telling the ASC what kind of initiatives they should be doing, but is rather lending a helping hand to drive visual storytelling forward. For example, they have already hosted ASC Master Class sessions in Toronto and Hollywood, sponsored the annual ASC BBQ for the last couple of years, and founder Rob Sim himself is an ASC associate member.

How will the partnership increase programming and resources to support the film and television community for the long term?
• It has a large focus on three things: financial resources, programming assistance and facility support.
• It will provide access and training with world-class technology in film and television.
• It will offer training directly from industry leaders in Hollywood and beyond.
• It will develop new programs for people who can’t attend ASC Master Class sessions, such as an online experience, which is something ASC and Sim are working on together.
• It will expand SimLabs beyond Hollywood, with the potential to bring it to Vancouver, Atlanta, New York and Toronto, creating new avenues for people who are associated with the ASC and who know they can call on Sim.
• It will bring volunteers. Sim has many volunteers on ASC committees, including the Motion Imaging Technology Council and its Lens committee.

Main Image: L-R: Sim President/CEO James Haggarty, Sim founder and ASC associate member Rob Sim, ASC events coordinator Patty Armacost and ASC president Kees van Oostrum.

Testing large format camera workflows

By Mike McCarthy

In the last few months, we have seen the release of the Red Monstro, Sony Venice, Arri Alexa LF and Canon C700 FF, all of which have larger or full-frame sensors. Full frame borrows DSLR terminology: a sensor equivalent to the entire 35mm film area, the way film was used horizontally in still cameras. All SLRs used to be full frame with 35mm film, so there was no need for the term until manufacturers started saving money on digital image sensors by making them smaller than 35mm film exposures. Super35mm motion picture cameras, on the other hand, ran the film vertically, resulting in a smaller exposure area per frame, but this was still much larger than most video imagers until the last decade, with 2/3-inch chips being considered premium imagers. The options have grown a lot since then.

L-R: 1st AC Ben Brady, DP Michael Svitak and Mike McCarthy on the monitor.

Most of the top-end cinema cameras released over the last few years have advertised their Super35mm sensors as a huge selling point, as that allows use of any existing S35 lens on the camera. These S35 cameras include the Epic, Helium and Gemini from Red, Sony’s F5 and F55, Panasonic’s VaricamLT, Arri’s Alexa and Canon’s C100-500. On the top end, 65mm cameras like the Alexa65 have sensors twice as wide as Super35 cameras, but very limited lens options to cover a sensor that large. Full frame falls somewhere in between and allows, among other things, use of any 35mm still film lenses. In the world of film, this was referred to as VistaVision, but the first widely used full-frame digital video camera was Canon’s 5D MkII, the first serious HDSLR. That format has recently surged in popularity, and thanks to this I had the opportunity to be involved in a test shoot with a number of these new cameras.

Keslow Camera was generous enough to give DP Michael Svitak and me access to pretty much all their full-frame cameras and lenses for the day in order to test the cameras, workflows and lens options for this new format. We also had the assistance of first AC Ben Brady to help us put all that gear to use, and Mike’s daughter Florendia as our model.

First off was the Red Monstro, which while technically not the full 24mm height of true full frame, uses the same size lenses due to the width of its 17×9 sensor. It offers the highest resolution of the group at 8K. It records compressed RAW to R3D files, with options for ProRes and DNxHR up to 4K, all saved to Red mags. Like the rest of the group, smaller portions of the sensor can be used at lower resolution to pair with smaller lenses. The Red Helium sensor has the same resolution but in a much smaller Super35 size, allowing a wider selection of lenses to be used. But larger pixels allow more light sensitivity, with individual pixels up to 5 microns wide on the Monstro and Dragon, compared to Helium’s 3.65-micron pixels.

Next up was Sony’s new Venice camera with a 6K full-frame sensor, allowing 4K S35 recording as well. It records XAVC to SxS cards or compressed RAW in the X-OCN format with the optional AXS-R7 external recorder, which we used. It is worth noting that both full-frame recording and integrated anamorphic support require additional special licenses from Sony, but Keslow provided us with a camera that had all of that functionality enabled. With a 36×24mm 6K sensor, the pixels are 5.9 microns, and footage shot at 4K in the S35 mode should be similar to shooting with the F55.

We unexpectedly had the opportunity to shoot on Arri’s new Alexa LF (Large Format) camera. At 4.5K, this had the lowest resolution, but that also means the largest sensor pixels at 8.25 microns, which can increase sensitivity. It records ArriRaw or ProRes to Codex XR capture drives with its integrated recorder.

Another new option is the Canon C700 FF, with a 5.9K full-frame sensor recording RAW, ProRes or XAVC to CFast cards or Codex drives. That gives it 6-micron pixels, similar to the Sony Venice. But we did not have the opportunity to test that camera this time around; maybe in the future.
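All of the pixel sizes quoted above fall out of one division: sensor width over horizontal photosite count. A quick sanity check in Python — the sensor widths are approximate published figures, so treat them as assumptions:

```python
# pitch (microns) = sensor width (mm) / horizontal pixels * 1000
cameras = {
    # name: (approx. sensor width in mm, horizontal pixels)
    "Red Monstro 8K": (40.96, 8192),
    "Red Helium 8K": (29.90, 8192),
    "Sony Venice 6K": (36.0, 6048),
    "Arri Alexa LF 4.5K": (36.70, 4448),
}
for name, (width_mm, px) in cameras.items():
    print(f"{name}: {width_mm / px * 1000:.2f} microns")
# Monstro ~5.00, Helium ~3.65, Venice ~5.95, Alexa LF ~8.25,
# matching the figures quoted above.
```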

One more factor in all of this is the rising popularity of anamorphic lenses. All of these cameras support modes that use the part of the sensor covered by anamorphic lenses and can desqueeze the image for live monitoring and preview. In the digital world, anamorphic essentially cuts your overall resolution in half, until the unlikely event that we start seeing anamorphic projectors or cameras with rectangular sensor pixels. But the prevailing attitude appears to be, “We have lots of extra resolution available so it doesn’t really matter if we lose some to anamorphic conversion.”
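The arithmetic behind that attitude is easy to show: desqueezing stretches pixels but does not add them, so the displayed width doubles while the sampled resolution stays put. A toy calculation, with illustrative capture dimensions:

```python
def desqueezed(width_px, height_px, squeeze=2.0):
    """Display dimensions after desqueezing an anamorphic capture.
    Horizontal detail is interpolated, not gained: the real
    horizontal sample count is still width_px."""
    return int(width_px * squeeze), height_px

# Example: a 4448x3096 large-format capture with a 2x anamorphic lens.
w, h = desqueezed(4448, 3096)
print(w, h)          # 8896 x 3096 displayed...
print(4448 * 3096)   # ...but only ~13.8 million pixels actually sampled.
```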

Post Production
So what does this mean for post? In theory, sensor size has no direct effect on the recorded files (beyond their content), but resolution does. We also have a number of new formats to support, and anamorphic images to handle during finishing.

Ever since I got my hands on one of Dell’s new UP3218K monitors with an 8K screen, I have been collecting 8K assets to display on it. When I first started discussing this shoot with DP Michael Svitak, I was primarily interested in getting some more 8K footage for testing new 8K monitors, editing systems and software as they got released. I was anticipating getting Red footage, which I knew I could play back and process using my existing software and hardware.

The other cameras and lens options were added as the plan expanded, and by the time we got to Keslow Camera, they had filled a room with lenses and gear for us to test with. I also had a Dell 8K display connected to my ingest system, and the new 4K DreamColor monitor as well. This allowed me to view the recorded footage in the highest resolution possible.

Most editing programs, including Premiere Pro and Resolve, can handle anamorphic footage without issue, but new camera formats can be a bigger challenge. Any RAW file requires info about the sensor pattern in order to debayer it properly, and new compression formats are even more work. Sony’s new compressed RAW format for Venice, called X-OCN, is supported in the newest 12.1 release of Premiere Pro, so I didn’t expect that to be a problem. Its other recording option is XAVC, which should work as well. The Alexa on the other hand uses ArriRaw files, which have been supported in Premiere for years, but each new camera shoots a slightly different “flavor” of the file based on the unique properties of that sensor. Shooting ProRes instead would virtually guarantee compatibility but at the expense of the RAW properties. (Maybe someday ProResRAW will offer the best of both worlds.) The Alexa also has the challenge of recording to Codex drives that can only be offloaded in OS X or Linux.

Once I had all of the files on my system, after using a MacBook Pro to offload the media cards, I tried to bring them into Premiere. The Red files came in just fine but didn’t play back smoothly over 1/4 resolution. They played smoothly in RedCineX with my Red Rocket-X enabled, and they export respectably fast in AME (a five-minute 8K anamorphic sequence to UHD H.265 in 10 minutes), but for some reason Premiere Pro isn’t able to get smooth playback when using the Red Rocket-X. Next I tried the X-OCN files from the Venice camera, which imported without issue. They played smoothly on my machine but looked like they were locked to half or quarter res, regardless of what settings I used, even in the exports. I am currently working with Adobe to get to the bottom of that, because they are able to play back my files at full quality, while all my systems have the same issue. Lastly, I tried to import the Arri files from the Alexa LF, but Adobe doesn’t support that new variation of ArriRaw yet. I would anticipate that will happen soon, since it shouldn’t be too difficult to add that new version to the existing support.
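For sources that standard tools can decode — ProRes or XAVC, though not ArriRaw or R3D, which still need Resolve, RedCineX or other vendor software — a mezzanine transcode like the one described below can also be scripted. A hedged sketch using ffmpeg’s DNxHR profiles from Python; the paths are hypothetical:

```python
import subprocess
from pathlib import Path

def to_dnxhr(src: Path, profile: str = "dnxhr_hqx") -> Path:
    """Transcode a clip to a DNxHR mezzanine in an MXF wrapper.
    dnxhr_hqx is the 10-bit profile; dnxhr_hq is its 8-bit sibling."""
    dst = src.with_suffix(".mxf")
    subprocess.run([
        "ffmpeg", "-y", "-i", str(src),
        "-c:v", "dnxhd", "-profile:v", profile,
        "-pix_fmt", "yuv422p10le",   # required for the 10-bit profile
        "-c:a", "pcm_s24le",
        str(dst),
    ], check=True)
    return dst

# Batch-convert a hypothetical folder of XAVC clips.
for clip in Path("venice_xavc").glob("*.mov"):
    to_dnxhr(clip)
```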

I ended up converting the files I needed to DNxHR in DaVinci Resolve so I could edit them in Premiere, and I put together a short video showing off the various lenses we tested with. Eventually, I need to learn how to use Resolve more efficiently, but the type of work I usually do lends itself to the way Premiere is designed — inter-cutting and nesting sequences with many different resolutions and aspect ratios. Here is a short clip demonstrating some of the lenses we tested with:

This is a web video, so even at UHD it is not meant to be an analysis of the RAW image quality, but instead a demonstration of the field of view and overall feel with various lenses and camera settings. The combination of the larger sensors and the anamorphic lenses leads to an extremely wide field of view. The table was only about 10 feet from the camera, and we could usually see all the way around it. We also discovered that when recording anamorphic on the Alexa LF, we were recording a wider image than was displaying on the monitor output. You can see in the frame grab below that the live display visible on the right side of the image isn’t showing the full content that got recorded, which is why we didn’t notice that we were recording with the wrong settings, with so much vignetting from the lens.

We only discovered this after the fact, from this shot, so we didn’t get the opportunity to track down the issue to see if it was the result of a setting in the camera or in the monitor. This is why we test things before a shoot, but we didn’t “test” before our camera test, so these things happen.

We learned a lot from the process, and hopefully some of those lessons are conveyed here. A big thanks to Brad Wilson and the rest of the guys at Keslow Camera for their gear and support of this adventure and, hopefully, it will help people better prepare to shoot and post with this new generation of cameras.

Main Image: DP Michael Svitak


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

ACES adds new companies to Logo Program 

Six new companies have been added to the ACES Logo Program. Membership in the program signals that the companies are committed to implementing ACES into their hardware and software products in conformance with program specifications for consistency and quality. ACES is the global standard for color management, digital image interchange and archiving.

“ACES has given us a solid framework to efficiently solve issues, along with options, to maintain creative control for our productions. It provides much needed transparency, connecting and standardizing the workflows from on-set, dailies, VFX, DI and archival,” says Victoria Alonso, EVP, Physical Production, Marvel Studios. “Standards are important — they help studios protect and monetize our films for years, and they also help create a healthy ecosystem of suppliers and professionals who can reliably work on sophisticated productions because they know the infrastructure beneath them is solid. Standards like ACES give us a common language for applications and pipelines to connect without compromising translations and misunderstandings, and to protect the creative workflow established by filmmakers. We see ACES as an important component in allowing the industry to innovate and work at the highest levels of our craft.”

The new ACES Logo Program partner companies are:
– Color Trix, makers of Color Finale, a color correction add-on to Final Cut Pro X.
– DJI, makers of drones, camera accessories and systems, including the Zenmuse X7, a Super35mm cinema-grade camera.
– In2Core, makers of the QTake video assist system.
– Lasergraphics, makers of film scanning systems, including the Director scanner, Director 10K scanner and ScanStation.
– Vision Research, makers of the Phantom line of high-speed cameras.
– WowWow Entertainment, makers of the IS Mini LUT Box.

These six companies join the existing manufacturers of cinema cameras, color correctors, monitors, on-set tools, animation and compositing software who are already part of ACES Product Partners.

Tom Dunlap, Gui Borchert lead Hecho Studios, formerly Hecho En 72

Hecho En 72 has expanded its North American operations to allow for continued growth in both Los Angeles and New York. It will now be known as Hecho Studios under the MDC Partners network of companies. Hecho is a content development and production company that originally grew as part of 72andSunny but is now a separate entity. While the two still work closely and share the same space in New York and LA, they operate independently.

As part of this growth, Tom Dunlap, who was previously chief production officer at 72andSunny, has assumed the position of managing director of the company. He will be responsible for driving all aspects of growth, operations and production.

Hecho Studios will be led creatively by executive creative director Gui Borchert, who served as group creative director at 72andSunny for the last four years, working on projects for Starbucks, Sonos and the LA Olympic bid.

“In a time where marketing takes any form, how brands make content is just as important as what they make,” says Dunlap. “We have built and are growing Hecho Studios to help brands amplify creative opportunities and modernize production and content creation to maximize quality, efficiency and impact,” says Borchert.

Hecho’s past work includes Sugar Coated, a short documentary made in partnership with 72U and featured in 18 film festivals; two Emmy nominations for its work on Google’s Year in Search; editorial, print and product design for the award-winning LA Original campaign; and the recent short parody film featuring Will Ferrell and Joel McHale for Stories of Almost Everyone, the latest exhibition at the Hammer Museum at UCLA.

Hecho’s production offerings include building models based on the client’s creative needs. These can range anywhere from a small tabletop product shoot to a large-scale narrative film, and span live-action, photography, stop-motion, time-lapse, narrative, documentary and both short and long form.

Hecho offers full-service audio post, specializing in commercial and branded content for broadcast TV, cinema, digital and radio. Their studios are equipped with two Avid S6 mixing consoles, a JBL M2 surround stage (7.1, 5.1) and spatial audio (VR) capabilities. Services include sound design, mixing, field recording, Foley, voice-over/ADR recording, Source-Connect/ISDN and music editorial and licensing.

Hecho’s post offerings include editorial, motion graphics and finishing. Their edit suites integrate all major editing platforms, including Adobe Premiere, Avid and Final Cut Pro. Their motion graphics team uses After Effects, Cinema 4D and Maya. Hecho has two Flame suites to address requests for clean-up, conform and VFX. They also use Blackmagic Resolve for color grading.

Main Image Caption: (L-R) Gui Borchert and Tom Dunlap.