Category Archives: post production

Quantum offers new F-Series NVMe storage arrays

During the NAB show, Quantum introduced its new F-Series NVMe storage arrays designed for performance, availability and reliability. Using non-volatile memory express (NVMe) Flash drives for ultra-fast reads and writes, the series supports massive parallel processing and is intended for studio editing, rendering and other performance-intensive workloads using large unstructured datasets.

Incorporating the latest Remote Direct Memory Access (RDMA) networking technology, the F-Series provides direct access between workstations and the NVMe storage devices, resulting in predictable and fast network performance. By combining these hardware features with the new Quantum Cloud Storage Platform and the StorNext file system, the F-Series offers end-to-end storage capabilities for post houses, broadcasters and others working in rich media environments, such as visual effects rendering.

The first product in the F-Series is the Quantum F2000, a 2U dual-node server with two hot-swappable compute canisters and up to 24 dual-ported NVMe drives. Each compute canister can access all 24 NVMe drives and includes processing power, memory and connectivity specifically designed for high performance and availability.

The F-Series is based on the Quantum Cloud Storage Platform, a software-defined block storage stack tuned specifically for video and video-like data. The platform eliminates data services unrelated to video while enhancing data protection, offering networking flexibility and providing block interfaces.

According to Quantum, the F-Series is as much as five times faster than traditional Flash storage/networking, delivering extremely low latency and hundreds of thousands of IOPS per chassis. The series allows users to reduce infrastructure costs by moving from Fibre Channel to Ethernet IP-based infrastructures. Additionally, users leveraging a large number of HDDs or SSDs to meet their performance requirements can gain back racks of data center space.
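
Some quick math puts the space-savings claim in perspective. The per-device figures below are generic industry ballparks assumed for illustration, not Quantum's published specs:

    # Rough drive counts needed to reach a target IOPS level.
    # Per-device figures are generic assumptions, not Quantum specs.
    TARGET_IOPS = 300_000    # "hundreds of thousands of IOPS per chassis"
    HDD_IOPS = 200           # typical 7,200rpm hard drive
    SATA_SSD_IOPS = 50_000   # typical SATA SSD

    print(TARGET_IOPS // HDD_IOPS, "HDDs needed")       # 1500 drives -> racks of space
    print(TARGET_IOPS // SATA_SSD_IOPS, "SSDs needed")  # 6 drives
    # The F2000 packs up to 24 NVMe drives into a single 2U chassis.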

The F-Series is the first product line based on the Quantum Cloud Storage Platform.

Facilis launches Hub Shared Storage line

Facilis Technology rolled out its new Hub Shared Storage line for media production workflows during the NAB show. Facilis Hub includes new hardware and an integrated disk-caching system for cloud and LTO backup and archive, designed to provide block-level virtualization and multi-connectivity performance.

“Hub Shared Storage is an all-new product based on our Hub Server that launched in 2017. It’s the answer to our customers’ requests for a more compact server chassis, lower-cost hybrid (SSD and HDD) options and integrated cloud and LTO archive features,” says Jim McKenna, VP of sales and marketing at Facilis. “We deliver all of this with new, more powerful hardware, new drive capacity options and a new look to both the system and software interface.”

The Facilis shared storage network allows both block-mode Fibre Channel and Ethernet connectivity simultaneously with the ability to connect through either method with the same permissions, user accounts and desktop appearance. This expands user access, connection resiliency and network permissions. The system can be configured as a direct-attached drive or segmented into various-sized volumes that carry individual permissions for read and write access.

Facilis Object Cloud
Object Cloud is an integrated disk-caching system for cloud and LTO backup and archive that includes up to 100TB of cloud storage for an annual fee. The Facilis Virtual Volume can display cloud, tape and spinning disk data in the same directory structure on the client desktop.

“A big problem for our customers is managing multiple interfaces for the various locations of their data. With Object Cloud, files in multiple locations reside in the same directory structure and are tracked by our FastTracker asset tracking in the same database as any active media asset,” says McKenna. “Object Cloud uses Object Storage technology to virtualize a Facilis volume with cloud and LTO locations. This gives access to files that exist entirely on disk, in the Cloud or on LTO, or even partially on disk and partially in the cloud.”

Every Facilis Hub Shared Storage server comes with unlimited seats in the Facilis FastTracker asset tracking application. The Object Cloud Software and Storage package is available for most Facilis servers running version 7.2 or higher.


Atomos’ new Shogun 7: HDR monitor, recorder, switcher

The new Atomos Shogun 7 is a seven-inch HDR monitor, recorder and switcher that offers an all-new 1500-nit, daylight-viewable, 1920×1200 panel with a 1,000,000:1 contrast ratio and 15+ stops of dynamic range displayed. It also offers ProRes RAW recording and realtime Dolby Vision output. Shogun 7 will be available in June 2019, priced at $1,499.

The Atomos screen uses a combination of advanced LED and LCD technologies that together offer deeper, better blacks that the company says rival OLED screens, “but with the much higher brightness and vivid color performance of top-end LCDs.”

A new 360-zone backlight is combined with this new screen technology and controlled by the Dynamic AtomHDR engine to show millions of shades of brightness and color. It allows Shogun 7 to display 15+ stops of real dynamic range on-screen. The panel, says Atomos, is also incredibly accurate, with ultra-wide color and 105% of DCI-P3 covered, allowing for the same on-screen dynamic range, palette of colors and shades that your camera sensor sees.
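
As a sanity check on those two headline numbers: display contrast converts to stops as a base-2 logarithm, so (assuming the quoted contrast ratio is measured in the usual way) the panel’s total contrast works out to

    \log_2(1{,}000{,}000) \approx 19.9 \text{ stops}

which comfortably accommodates the 15+ stops of scene dynamic range Atomos says the screen can display.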

Atomos and Dolby have teamed up to create Dolby Vision HDR “live” — a tool that allows you to see HDR live on-set and carry your creative intent from the camera through into HDR post. Dolby has optimized its target-display HDR processing algorithm, which Atomos runs inside the Shogun 7. It brings realtime automatic frame-by-frame analysis of the Log or RAW video and processes it for optimal HDR viewing on a Dolby Vision-capable TV or monitor over HDMI. Connect Shogun 7 to the Dolby Vision TV and AtomOS 10 automatically analyzes the image, queries the TV and applies the right color and brightness profiles for the maximum HDR experience on the display.

Shogun 7 records images up to 5.7Kp30, 4Kp120 or 2Kp240 slow motion from compatible cameras, in RAW/Log or HLG/PQ over SDI/HDMI. Footage is stored directly to AtomX SSDmini or approved off-the-shelf SATA SSD drives. There are recording options for Apple ProRes RAW and ProRes, Avid DNx and Adobe CinemaDNG RAW codecs. Shogun 7 has four SDI inputs plus an HDMI 2.0 input, with both 12G-SDI and HDMI 2.0 outputs. It can record ProRes RAW at up to 5.7Kp30, 4Kp120 DCI/UHD and 2Kp240 DCI/HD, depending on the camera’s capabilities. Also, 10-bit 4:2:2 ProRes or DNxHR recording is available up to 4Kp60 or 2Kp240. The four SDI inputs enable the connection of most quad-link, dual-link or single-link SDI cinema cameras. Pixels are preserved with data rates of up to 1.8Gb/s.
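
That 1.8Gb/s ceiling translates directly into media budgeting. A quick sketch, assuming the worst-case rate is sustained (actual rates vary with codec, resolution and frame rate):

    # Worst-case recording budget at the quoted 1.8Gb/s data rate.
    rate_bytes_per_sec = 1.8e9 / 8          # 225 MB/s
    ssd_bytes = 1e12                        # a 1TB SSD
    minutes = ssd_bytes / rate_bytes_per_sec / 60
    print(f"{rate_bytes_per_sec/1e6:.0f} MB/s -> {minutes:.0f} min per TB")  # ~74 min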

In terms of audio, Shogun 7 eliminates the need for a separate audio recorder. Users can add 48V stereo mics via an optional balanced XLR breakout cable, or select mic or line input levels, plus record up to 12 channels of 24/96 digital audio from HDMI or SDI. Monitoring selected stereo tracks is via the 3.5mm headphone jack. There are dedicated audio meters, gain controls and adjustments for frame delay.

Shogun 7 features the latest version of the AtomOS 10 touchscreen interface, first seen on the Ninja V. The new body of Shogun 7 has a Ninja V-like exterior with ARRI anti-rotation mounting points on the top and bottom of the unit to ensure secure mounting.

AtomOS 10 on Shogun 7 has the full range of monitoring tools, including Waveform, Vectorscope, False Color, Zebras, RGB parade, Focus peaking, Pixel-to-pixel magnification, Audio level meters and Blue only for noise analysis.

Shogun 7 can also be used as a portable touchscreen-controlled multi-camera switcher with asynchronous quad-ISO recording. Users can switch up to four 1080p60 SDI streams, record each plus the program output as a separate ISO, then deliver ready-for-edit recordings with marked cut-points in XML metadata straight to your NLE. The current Sumo19 HDR production monitor-recorder will also gain the same functionality in a free firmware update.

Switching is asynchronous, and genlock in and out allow connection to existing AV infrastructure. Once the recording is over, users can import the XML file into an NLE and the timeline populates with all the edits in place. XLR audio from a separate mixer or audio board is recorded within each ISO, alongside two embedded channels of digital audio from the original source. The program stream always records the analog audio feed as well as a second track that switches between the digital audio inputs to match the switched feed.
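
The announcement doesn’t spell out the XML schema, but the mechanism is easy to picture. A minimal sketch, using a hypothetical cut-point format (not the real AtomOS schema), of how marked cut-points can rebuild the switched timeline from the ISO recordings:

    import xml.etree.ElementTree as ET

    # Hypothetical cut-point log -- the real AtomOS schema isn't
    # published in the announcement; this only illustrates the idea.
    SAMPLE = """<switch-log fps="60">
      <cut source="iso1" frame="0"/>
      <cut source="iso3" frame="842"/>
      <cut source="iso2" frame="1510"/>
    </switch-log>"""

    root = ET.fromstring(SAMPLE)
    fps = float(root.get("fps"))
    cuts = [(c.get("source"), int(c.get("frame"))) for c in root.findall("cut")]

    # Each cut runs until the next one begins.
    for (src, start), (_, nxt) in zip(cuts, cuts[1:] + [(None, None)]):
        length = "to end" if nxt is None else f"{(nxt - start) / fps:.2f}s"
        print(f"{src}: starts at frame {start}, runs {length}")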


SymplyWorkspace: high-speed, multi-user SAN for smaller post houses

Symply has launched SymplyWorkspace, a SAN system that uses Quantum’s StorNext 6 to provide high-speed collaboration over Thunderbolt 3 for up to eight simultaneous Mac, Windows or Linux editors, with RAID protection for content safety.

SymplyWorkspace is designed for sharing content in realtime video production. The product features a compact desk-side design geared to smaller post houses, in-house creatives, ad agencies or any creative house needing an affordable high-speed sharing solution.

“With the high adoption rates of Thunderbolt in smaller post houses, with in-house creatives and with other content creators, connecting high-speed shared storage has been a hassle that requires expensive and bulky adapters and rack-mounted, hot and noisy storage, servers and switches,” explains Nick Warburton from Global Distribution, which owns Symply. “SymplyWorkspace allows Thunderbolt 3 clients to just plug into the desk-side system to ingest, edit, finish and deliver without ever moving content locally, even at 4K resolutions, with no adapters or racks needed.”

Based on the Quantum StorNext 6 sharing software, SymplyWorkspace allows users to connect up to eight laptops and workstations to the system and share video files, graphics and other data files instantly with no copying and without concerns for version control or duplicated files. A file server can also be attached to enable re-sharing of content to other users across Ethernet networks.

Symply has also addressed the short cable-length issues commonly cited with Thunderbolt. By using the latest Thunderbolt 3 optical cable technology from Corning, clients can be up to 50 feet away from SymplyWorkspace while maintaining full high-speed collaboration.

The complete SymplyWorkspace solution starts at $10,995 for 24TB of RAID-protected storage and four simultaneous Mac users. Four additional users (up to eight total) can be added at any time. The product is also available in configurations up to 288TB and supporting multiple 4K streams, with any combination of up to eight Mac, Windows or Linux users. It’s available now through worldwide resellers and joins the SymplyUltra line of workflow storage solutions for larger post and broadcast facilities.


Review: MZed.com’s Directing Color With Ollie Kenchington

By Brady Betzel

I am constantly looking to educate myself, no matter the source — or subject. Whether I’m learning how to make a transition in Adobe After Effects from an eSports editor on YouTube or watching Warren Eagles teach color correction in Blackmagic’s DaVinci Resolve on FXPHD.com, I’m always beefing up my skills. I even learn from bad tutorials — they teach you what not to do!

But when you come across a truly remarkable learning experience, it is only fair to share it with the rest of the world. Last year I saw an ad for an MZed.com course called “Directing Color With Ollie Kenchington” and was immediately interested. These days you can find pretty much any technical tutorial you can dream of on YouTube, but truly professional, higher education-like, theory-based education series are very hard to come by. Even the ones you pay for aren’t always worth their price of admission, which is a huge letdown.

Ollie sharing his wisdom.

Once I gained access to MZed.com I wanted to watch every educational series they had. From lighting techniques with ASC member Shane Hurlbut to the ARRI Amira Camera Primer, there are over 150 hours of education available from industry leaders. However, I found my way to Directing Color…

I am often asked if I think people should go to college or a film school. My answer? If you have the money and time, you should go to college followed by film school (or do both together, if the college offers it). Not only will you learn a craft, but you will most likely spend hundreds of hours studying and visualizing the theory behind it. For example, when someone asks me about the science behind camera lenses, I can confidently answer them thanks to my physics class on lenses and optics at California Lutheran University (yes, a shameless plug).

In my opinion, a two-, four- or even 10-year education allows me to live in the grey. I am comfortable arguing for both sides of a debate, as well as the options that are in between — the grey. I feel like my post-high school education really allowed me to recognize and thrive in the nuances of debate. It may leave me playing devil’s advocate a little too much, but it also lets me have civil and proactive discussions with others without being demeaning or nasty — something we are actively missing these days. So if living in the grey is for you, I really think a college education supplemented by online or film school education is valuable (assuming you decide, as I did, that the debt is worth it).

However, I know that is not an option for everyone since it can be very expensive — trust me, I know. I am almost done paying off my undergraduate fees while still paying off my graduate ones, which I am still two or three classes away from finishing. That being said, Directing Color With Ollie Kenchington is the only online education series I have seen so far that is on the same level as some of my higher education classes. Not only is the content beautifully shot and color corrected, but Ollie gives confident and accessible lessons on how color can be used to draw the viewer’s attention to multiple parts of the screen.

Ollie Kenchington is a UK-based filmmaker who runs Korro Films. From the trailer of his Directing Color series, you can immediately see the beauty of Ollie’s work and know that you will be in safe hands. (You can read more about his background here.)

The course raises the bar for online education and will elevate the audience’s idea of professional insight. The first module, “Creating a Palette,” covers the thinking behind creating a color palette for a small catering company. You may even want to start with the final bonus module, “Ox & Origin,” to get a look at what Ollie will be creating throughout the seven modules and roughly an hour and a half of content.

While Ollie goes over “looks,” the beauty of this course is that he walks through his internal thought process, including deciding on palettes based on color theory. He doesn’t just choose teal and orange because they look good; he builds his color palette from complementary colors.

Throughout the course Ollie covers some technical knowledge, including calibrating monitors and cameras, white balancing and shooting color charts to avoid having the wrong color balance in post. This is so important because if you skip these simple steps, your color correction session will be much harder. And wasting time fixing incorrect color balance takes time away from the fun of color grading. All of this is done through easily digestible modules that range from two to 20 minutes.
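
The reason those chart frames matter is that they turn white balancing in post into simple arithmetic. Here is a minimal sketch of the generic gray-card technique (not specific to Resolve or any other tool), assuming you can sample the chart’s gray patch:

    # Average RGB sampled from the gray patch of a shot color chart.
    r, g, b = 0.52, 0.45, 0.39   # example values with a warm tungsten cast

    # Scale red and blue so the patch reads neutral, anchored to green.
    red_gain, blue_gain = g / r, g / b
    print(f"apply gains: R x {red_gain:.2f}, G x 1.00, B x {blue_gain:.2f}")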

The modules include Creating a Palette; Perceiving Color; Calibrating Color; Color Management; Deconstructing Color 1 – 3 and the Bonus Module Ox & Origin.

Without giving away the entire content of Ollie’s course, my favorite modules are the on-set ones. Maybe it’s because I am not on set that often, but I found the “thinking out loud” about colors helpful. Knowing why reds represent blood, and raise your heart rate a little bit, is fascinating. He even goes through practical examples of color use in films such as Whiplash.

In the final “Deconstructing Color” modules, Ollie goes into a color bay (complete with practical candle backlighting) and dives into Blackmagic’s DaVinci Resolve. He takes the course full circle: because he took the time to set up proper lighting on set, even for a scene he had to rush through, he can now go into Resolve, add light to different sides of someone’s face and focus on other parts of his commercial.

Summing Up
I want to watch every tutorial MZed.com has to offer, from “Philip Bloom’s Cinematic Masterclass” to Ollie’s other course, “Mastering Color.” Unfortunately, as of this review, you would have to pay an additional fee to watch the “Mastering Color” series. It seems to be an unfortunate trend in online education to charge a subscription and then charge more when an extra-special class comes along, but this class will supposedly be released to standard subscribers in due time.

MZed.com has two subscription models: MZed Pro, which is $299 for one year of streaming the standard courses, and MZed Pro Premium for $399. This includes the standard courses for one year and the ability to choose one “Premium” course.

“Philip Bloom’s Cinematic Master Class” was the Premium course I was signed up for initially, but you can decide between this one and the “Mastering Color” course. You will not be disappointed regardless of which one you choose. Even their first course, “How to Photograph Everyone,” is chock-full of lighting and positioning instruction that can be applied to many aspects of videography.

I really was impressed with Directing Color With Ollie Kenchington, and if the other courses are this good, MZed.com will definitely become a permanent bookmark for me.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.


FilmLight offers additions to Baselight toolkit

FilmLight will be at NAB showing updates to its Baselight toolkit, including T-CAM v2. This is FilmLight’s new and improved color appearance model, which allows the user to render an image for all formats and device types with confidence in color.

It combines with the Truelight Scene Looks and ARRI Look Library, now implemented within the Baselight software. “T-CAM color handling with the updated Looks toolset produces a cleaner response compared to creative, camera-specific LUTs or film emulations,” says Andrea Chlebak, senior colorist at Deluxe’s Encore in Hollywood. “I know I can push the images for theatrical release in the creative grade and not worry about how that look will translate across the many deliverables.”

FilmLight has added what it calls “a new approach to color grading” with the addition of Texture Blend tools, which allow the colorist to apply any color grading operation dependent on image detail. This gives the colorist fine control over the interaction of color and texture.
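
FilmLight doesn’t publish the math behind Texture Blend, but the underlying idea of keying a grade off image detail can be sketched conceptually: estimate local detail, then use it as the blend weight between the original and graded images. A rough illustration of that general approach (an assumption, not FilmLight’s implementation):

    import numpy as np

    def detail_weighted_grade(image, graded, strength=4.0):
        """Blend a grade in proportion to local image detail.

        Conceptual sketch only -- not FilmLight's Texture Blend math.
        Detail is estimated as deviation from a 3x3 local average;
        image and graded are float arrays of shape HxWx3.
        """
        shifts = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
        blur = sum(np.roll(image, s, axis=(0, 1)) for s in shifts) / 9.0
        detail = np.abs(image - blur).mean(axis=-1, keepdims=True)
        weight = np.clip(detail * strength, 0.0, 1.0)
        return weight * graded + (1.0 - weight) * image  # grade lands on detail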

Other workflow improvements aimed at speeding the process include enhanced cache management; a new client view that displays a live web-based representation of a scene showing current frame and metadata; and multi-directory conform for a faster and more straightforward conform process.

The latest version of Baselight software also includes per-pixel alpha channels, eliminating the need for additional layer mattes when compositing VFX elements. Tight integration with VFX suppliers, including Foundry Nuke and Autodesk, means that new versions of sequences can be automatically detected, with the colorist able to switch quickly between versions within Baselight.


VFX house Rodeo FX acquires Rodeo Production

Visual effects studio Rodeo FX, whose high-profile projects include Dumbo, Aquaman and Bumblebee, has purchased Rodeo Production and added its roster of photographers and directors to its offerings.

The two companies, whose common name is just a coincidence, will continue to operate as distinct entities. Rodeo Production’s 10-year-old Montreal office will continue to manage photo and video production, but will now also offer Rodeo FX’s post production services and technical expertise.

In Toronto, Rodeo FX plans to open an Autodesk Flame editing suite in Rodeo Production’s studio and expand its Toronto roster of photographers and directors, with the goal of developing stronger production and post services for clients in the city’s advertising, television and film industries.

“This is a milestone in our already incredible history of growth and expansion,” says Sébastien Moreau, founder/president of Rodeo FX, which has offices in LA and Munich in addition to Montreal.

“I have always worked hard to give our artists the best possible opportunities, and this partnership was the logical next step,” says Rodeo Production’s founder Alexandra Saulnier. “I see this as a fusion of pure creativity and innovative technology. It’s the kind of synergy that Montreal has become famous for; it’s in our DNA.”

Rodeo Production clients include Ikea, Under Armour and Mitsubishi.


Xytech’s 2019 version: new UI, ability to personalize MediaPulse

Xytech, which makes facility management software for M&E, will launch the 2019 version of its MediaPulse system at NAB in Las Vegas.

MediaPulse Sky features an entirely new user interface with a faster, cleaner and more modern look. The UI now includes a responsive design appropriate for use in all browsers and on all devices, with performance improvements that include Xytech’s new Limitless Scrolling for instant search results regardless of the size of the result set.

A key part of the 2019 version update includes personalizing the MediaPulse experience for each user. “This update addresses a big shift in the marketplace, expanding our technology to foster automation through all users in our clients’ ecosystem,” says CEO Greg Dolan. “With the 2019 version of MediaPulse, every participant in an organization now has a personalized MediaPulse, allowing us to deliver the appropriate experience for each user tailored for their given role.”

In addition to the user experience upgrade, OpenID is supported through Sky, and the entire platform has now moved to a 64-bit architecture. Automated IMF distribution functionality, European Working Time Directive support, transmission module updates and hundreds of other new features are now available.

The 2019 version of MediaPulse will be available in June 2019.


Frame.io intros 10 new features for video collaboration

Frame.io, which makes a video review and collaboration platform, has introduced 10 new features that will improve how media professionals collaborate on video, from initial upload to final delivery. Top user-requested features now available in Frame.io include a new reel player presentation format, @mentions and support for multi-page PDFs.

Here are some details:
– Multi-page PDFs: Users can now collaborate on scripts and storyboards just like on video. Entire video projects, from the initial brief to the final deliverable, can now live in Frame.io.
– Enhanced version management: Users now have more control over how they manage versions. They can reorder or remove versions in one place.
– Private comments: For teams that routinely create a separate review link to gather internal feedback they don’t want clients to see. Internal team conversations can now be separated from client conversations, all within the same project.
– @mentions: Users can tag anyone on a project to quickly grab their attention when it’s needed most. Anytime someone is @mentioned, they’ll receive a notification, streamlining communication.
– Reel player: Drop all assets into a filmstrip format for easy viewing, complete with built-in autoplay.
– Archival storage (beta): Users can now free up more account storage by archiving projects. Original files will be archived but low-res preview files stay online and searchable. Users can still comment, compare and share Frame.io archived projects. Originals can be restored within a few hours.
– Updated review pages (beta): Frame.io review pages now include a simpler interface that makes it easier for clients to leave feedback with no login required.
– Redesigned iPhone app: Frame.io’s iOS app has a design update. Users will see a cleaner, improved app interface.
– Short links: No more long and clunky URLs for clients and collaborators. New shareable URLs use an f.io short link, making sharing significantly more user-friendly.
– Account switching: For users with multiple accounts, Frame.io now offers a simple way to navigate between them.

Red Ranger all-in-one camera system now available

Red Digital Cinema has made its new Red Ranger all-in-one camera system available to select Red authorized rental houses. Ranger includes Red’s cinematic full-frame 8K sensor Monstro in an all-in-one camera system, featuring three SDI outputs (two mirrored and one independent) allowing two different looks to be output simultaneously; wide-input voltage (11.5V to 32V); 24V and 12V power outs (two of each); one 12V P-Tap port; integrated 5-pin XLR stereo audio input (Line/Mic/+48V Selectable); as well as genlock, timecode, USB and control.

Ranger is capable of handling heavy-duty power sources and boasts a larger fan for quieter and more efficient temperature management. The system is currently shipping in a gold mount configuration, with a v-lock option available next month.

Ranger captures 8K RedCode RAW up to 60fps full-format, as well as Apple ProRes or Avid DNxHR formats at 4K up to 30fps and 2K up to 120fps. It can simultaneously record RedCode RAW plus Apple ProRes or Avid DNxHD or DNxHR at up to 300MB/s write speeds.
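
At those write speeds, the storage math is straightforward. A quick sketch, assuming the full 300MB/s is sustained (actual rates depend on format choices):

    # One hour of simultaneous RAW + proxy recording at the quoted rate.
    write_speed = 300e6            # bytes per second
    one_hour = write_speed * 3600  # seconds
    print(f"{one_hour / 1e12:.2f} TB per hour")  # ~1.08 TB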

To enable an end-to-end color management and post workflow, Red’s enhanced image processing pipeline (IPP2) is also included in the system.

Ranger ships complete, including:
• Production top handle
• PL mount with supporting shims
• Two 15mm LWS rod brackets
• Red Pro Touch 7.0-inch LCD with 9-inch arm and LCD/EVF cable
• LCD/EVF adaptor A and LCD/EVF adaptor D
• 24V AC power adaptor with 3-pin 24V XLR power cable
• Compatible Hex and Torx tools

Shooting, posting New Republic’s indie film, Sister Aimee

After a successful premiere at the Sundance Film Festival, New Republic Studios’ Sister Aimee screened at this month’s SXSW. The movie tells the story of an infamous American evangelist of the 1920s, Sister Aimee Semple McPherson, who gets caught up in her lover’s dreams of Mexico and finds herself on a road trip toward the border.

Sister Aimee shot at the newly renovated New Republic Studios near Austin, Texas, over two and a half weeks. “Their crew used our 2,400-square-foot Little Bear soundstage, our 3,000-square-foot Lone Wolf soundstage, our bullpen office space and numerous exterior locations in our backlot,” reports New Republic Studios president Mindy Raymond, adding that the Sister Aimee production also had access to two screening rooms with 5.1 surround sound, HDMI hookups to 4K monitors and theater-style leather chairs to watch dailies. The film also hit the road, shooting in the New Mexico desert.

L-R: Directors Samantha Buck, Marie Schlingmann at SXSW. Credit: Harrison Funk

Co-written and co-directed by Samantha Buck and Marie Schlingmann, the movie takes some creative license with the story of Aimee. “We don’t look for factual truth in Aimee’s journey,” they explain. “Instead we look for a more timeless truth that says something about female ambition, the female quest for immortality and, most of all, the struggle for women to control their own narratives. It becomes a story about storytelling itself.”

The film, shot by cinematographer Carlos Valdes-Lora in 3.2K ProRes 4444 XQ on an ARRI Alexa Mini, was posted at Dallas- and Austin-based Charlieuniformtango.

We reached out to the DP and the post team to find out more.

Carlos, why did you choose the package of the Alexa and Cooke Mini S4 Primes?
Carlos Valdes-Lora: In early conversations with the directors, we all agreed that we didn’t want Sister Aimee to feel like a traditional period movie. We didn’t want to use softening filters or vintage lenses. We aimed instead for clear images, deep focus and a rich color palette that remains grounded in the real world. We felt that this would lend the story a greater sense of immediacy and draw the viewer closer to the characters. Following that same thinking, we worked very extensively with the 25mm and 32mm, especially in closeups and medium closeups, emphasizing accessibility.

The Cooke Mini S4s are a beautiful and affordable set (relative to our other options). We like the way they give deep dimensionality and warmth to faces, and how they create a slightly lower-contrast image compared to the other modern lenses we looked at. They quickly became the right choice for us, striking the right balance between quality, size and value.

The Cookes paired with the Alexa Mini gave us a lightweight camera system with a very contained footprint, and we needed to stay fast and lean due to our compressed shooting schedule and often tight shooting quarters. The Chapman Cobra dolly was a big help in that regard as well.

What was the workflow to post like?
Charlieuniformtango producers Bettina Barrow, Katherine Harper, David Hartstein: Post took place primarily between Charlieuniformtango’s Dallas and Austin offices. Post strategizing started months before the shoot, and active post truly began when production began in July 2018.

Tango’s Evan Linton handled dailies brought in from the shoot, working alongside editor Katie Ennis out of Tango’s Austin studio, to begin assembling a rough cut as shooting continued. Ennis continued to cut at the studio through August with directors Schlingmann and Buck.

Editorial then moved back to the directors’ home state of New York to finish the cut for Sundance. (Editor Ennis, who four-walled out of Tango Austin for the first part of post, went to New York with the directors, working out of a rented space.)

VFX and audio work started early at Tango, with continuously updated timelines coming from editorial. The team worked to have certain locked shots finished for the Sundance submission while saving much of the cleanup and other CG-heavy shots for final picture lock.

Tango audio engineer Nick Patronella also tackled dialogue edit, sound design and mix for the submission out of the Dallas studio.

Can you talk about the VFX?
Barrow, Harper, Hartstein: The cut was locked in late November, and the heavy lifting really began. With delivery looming, Tango’s Flame artists Allen Robbins, Joey Waldrip, David Hannah, David Laird, Artie Peña and Zack Smith divided effects shots, which included environmental cleanup, period-specific cleanup, beauty work such as de-aging, crowd simulation, CG sign creation and more.

(L-R) Tango’s Artie Peña, Connor Adams, Allen Robbins in one of the studio’s Flame suites.

3D artist Connor Adams used Houdini, Mixamo and Maya to create CG elements and crowds, with final comps done in Nuke and sent to Flame for final color. Over 120 VFX shots were handled in total, and Flame was the go-to for effects. Color and much of the effects happened simultaneously. It was a nice workflow, as the project didn’t have major VFX needs that would have impacted color.

What about the color grade?
Barrow, Harper, Hartstein: Directors Buck and Schlingmann and DP Valdes-Lora worked with Tango colorist Allen Robbins to craft the final look of the film — with the color grade also done in Flame. The trio had prepped shooting for a Kodachrome-style look, especially for the exteriors, but really overall. They found important reference in selections of Robert Capa photographs.

Buck, Schlingmann and Valdes-Lora responded mostly to Kodachrome’s treatment of blues, browns, tans, greens and reds (while staying true to skin tone), but also to its gamma values, not being afraid of deep shadows and contrast wherever appropriate. Valdes-Lora wanted to avoid lighting and exposing to a custom LUT on set that reflected this kind of Kodachrome look, in case they wanted to change course during the process. With the help of Tango, however, they discovered that dialing back the Capa look grounded the film a little more and made the characters “feel” more accessible. The roots of the inspiration remained in the image, but a little more naturalism, a little more softness, served the story better.

Because of that, they monitored on set in Alexa 709, which Valdes-Lora felt still left enough exposure latitude. Production designer Jonathan Rudak (another regular collaborator with the directors) was on the same page during prep in terms of reflecting this Capa color style, and the practical team did what they could to make sure the set elements complemented the approach.

What about the audio post?
Barrow, Harper, Hartstein: With the effects and color almost complete, the team headed to Skywalker Ranch for a week of final dialogue edit, mix, sound design and Foley, led by Skywalker’s Danielle Dupre, Kim Foscato and E. Larry Oatfield. The team was also able to simultaneously approve color sections in Skywalker’s Stag Theater, allowing for an ultra-efficient schedule. With the final mix in hand, the film was mastered just after Christmas so that DCP production could begin.

Since a portion of the film was musical, how complex was the audio mix?
Skywalker sound mixer Dupre: The musical number was definitely one of the most challenging but rewarding scenes to design and mix. It was such a strong creative idea that played so deeply into the main character. The challenge was in striking a balance between tying it into the realism of the film while also leaning into the grandiosity of the musical to really sell the idea.

It was really fun to play with a combination of production dialogue and studio recordings to see how we could make it work. It was also really rewarding to create a soundscape that starts off minimally and simply and transitions to Broadway scale almost undetectably — one of the many exciting parts to working with creative and talented filmmakers.

What was the biggest challenge in post?
Barrow, Harper, Hartstein: Finishing a film in five to six weeks during the holidays was no easy feat. Luckily, we were able to have our directors hands-on for all final color, VFX and mix. Collaborating in the same room is always the best when you have no time to spare. We had a schedule where each day was accounted for — and we stuck to it almost down to the hour.


Vickie Sornsilp joins 1606 Studio as head of production

San Francisco-based 1606 Studio, formerly Made-SF, has hired veteran post producer Vickie Sornsilp as head of production. Sornsilp, whose background includes senior positions with One Union Recording and Beast Editorial, will oversee editorial and post finishing projects for the studio, which was launched last month by executive producer Jon Ettinger, editor/director Doug Walker and editors Brian Lagerhausen and Connor McDonald.

“Vickie represents what 1606 Studio is all about…family,” says Ettinger. “She trained under me at the beginning of her career and is now ready to take on the mantle of head of production. Our clients trust her to take care of business. I couldn’t be prouder to welcome her to our team.”

A graduate of San Francisco’s Academy of Art University, Sornsilp began her career as a copywriter with agency DDB. She got her start in post production in 2014 with Beast Editorial, where she produced work for such brands as Amazon, Clorox, Doritos, HP, Round Table Pizza, Mini Cooper, Toyota, Visa, Walmart and Yahoo! She joined One Union Recording as executive producer in 2018.

Sornsilp is excited to reunite with 1606 Studio’s founders. “It feels like coming home,” she says. “Jon, Doug, Brian and Connor are legends in the business and I look forward to doing more great work with them.”

Launched under the interim name Made-SF, the company is rebranding as 1606 Studio in anticipation of moving into permanent facilities in April at 1606 Stockton Street in San Francisco’s historic North Beach neighborhood. Currently undergoing a build-out, that site will feature five Adobe Premiere editorial suites, two motion graphics suites, and two Flame post finishing suites with room for further expansion.

“We want to underscore that we are a San Francisco-centric company,” explains Walker. “Service companies from outside the area have been moving into the city to take advantage of the boom in advertising and media production. We want to make it clear that we’re already here and grounded in the community.”

Signiant intros Jet SaaS solution for large, automated, fast file transfers

Signiant will be at NAB next month showing Jet, its new SaaS solution that makes it easy to automate and accelerate the transfer of large files between geographically dispersed locations. Targeted at simple “lights-out” use cases, Signiant Jet meets the growing need to replace scripted FTP with a faster, more reliable and more secure alternative.

Jet uses Signiant’s innovative SaaS platform, which also underpins the company’s Media Shuttle solution. Jet’s feature set and price point allow small- and mid-sized companies to easily automate system-to-system workflows, as well as recurring data exchange with partners.

Like all Signiant products, Jet uses a proprietary transport protocol that optimizes network performance for fast, reliable movement of large files under all network conditions. Coupled with enterprise-grade security and features tuned for media professionals, Signiant products are designed to enable the global flow of content, within and between companies, in a hybrid cloud world. The Signiant portfolio now comprises the following offerings:

• Manager+Agents – advanced enterprise software for complex networks and workflows
• Jet – SaaS solution for simple system-to-system automated file transfer
• Media Shuttle – SaaS solution that enables the sending and sharing of large files
• Flight – SaaS solution for transfers to and from AWS and/or Azure public cloud services

Media companies can deploy a single Signiant product to solve a specific problem or combine them for managing access to content that is located in various storage types worldwide. Signiant products interoperate with each other, as well as with third-party products in the media technology ecosystem.

Netflix hires Leon Silverman to enhance global post operation

By Adrian Pennington

Veteran post production executive Leon Silverman was pondering the future when Netflix came calling. The former president of Laser Pacific has spent the last decade building up Disney’s in-house digital post production wing as general manager, but he will now be taking on what is arguably one of the biggest jobs in the industry — director, post operations and creative services at Netflix.

“To tell you the truth, I wasn’t looking for a new job. I was looking to explore the next chapter of my life,” said Silverman, announcing the news at the HPA Tech Retreat last month.

“The fact is, if there is any organization or group of people anywhere that can bring content creators together with creative technology innovation in service of global storytelling, it is Netflix. This is a real opportunity to work closely with the creative community and with partners to create a future industry worthy of its past.”

That final point is telling. Indeed, Silverman’s move from one of the titans of Hollywood to the powerhouse of digital is symbolic of an industry passing the baton of innovation.

“In some ways, moving to Netflix is a culmination of everything I have been trying to achieve throughout my career,” says Silverman. “It’s about the intersection of technology and creativity, that nexus where art and science meet in order to innovate new forms of storytelling. Netflix has the resources, the vision and the talent to align these goals.”

L-R: Leon Silverman and Sean Cooney

Silverman will report to Sean Cooney, Netflix’s director of worldwide post production. During his keynote at the HPA Tech Retreat, Cooney introduced Silverman and his new role. He noted that the former president of the HPA (2008-2016) had built and run some of the most cutting-edge facilities on the planet.

“We know that there is work to be done on our part to better serve our talent,” says Cooney. “We were looking for someone with a deep understanding of the industry’s long and storied history of entertainment creation. Someone who knows the importance of working closely with creatives and has a vision for where things are going in the future.”

Netflix’s global post operation is centered in LA, where it employs the majority of its 250 staff and will oversee delivery of 1,000 original pieces of programming this year. But with regional content increasingly important to the growth of the organization, Cooney and Silverman’s tricky task is to streamline core functions like localization, QC, asset management and archive while increasing output from Asia, Latin America and Europe.

“One of the challenges is making sure that the talent we work with feel they are creatively supported even while we operate on such a large scale,” explains Cooney. “We want to continue to provide a boutique experience even as we expand.”

Netflix also recognizes the importance of its relationships with dozens of third-party post houses, freelance artists and tech vendors.

“Netflix has spent a lot of time cultivating deep relationships in the post community, but as we get more and more involved in upstream production we want to focus on reducing the friction between the creative side of production and the delivery side,” says Silverman. “We need to redesign our internal workflows to really try to take as much friction out of the process as possible.”

Netflix: Black Mirror – Bandersnatch

While this makes sense from a business point of view, there’s a creative intent too. Bandersnatch, the breakthrough interactive drama from the Black Mirror team, could not have been realized without close collaboration from editorial all the way to user interface design.

“We developed special technology to enable audience interaction but that had to work in concert with our engineering and product teams and with editorial and post teams,” says Cooney.

Silverman describes this collapse of the traditional role of post into the act of production itself as “Post Post.” It’s an industry-wide trend that will enable companies like Netflix to innovate new formats spanning film, TV and immersive media.

“We are at a time and a place where the very notion of a serial progression from content inception to production to editorial then finish to distribution is anachronistic,” says Silverman. “It’s not that post is dead, it’s just that ‘post’ is not ‘after’ anything as much as it has become the underlying fabric of content creation, production and distribution. There are some real opportunities to create a more expansive, elegant and global ability to enable storytellers of all kinds to make stories of all kinds — wherever they are.”


UK-based Adrian Pennington is a professional journalist and editor specializing in the production, the technology and the business of moving image media.

Posting director Darren Lynn Bousman’s horror film, St. Agatha

Atlanta’s Moonshine Post helped create a total post production pipeline — from dailies to finishing — for the film St. Agatha, directed by Darren Lynn Bousman (Saw II, Saw III, Saw IV, Repo! The Genetic Opera).

The project, from producers Seth and Sara Michaels, was co-edited by Moonshine’s Gerhardt Slawitschka and Patrick Perry and colored by Moonshine’s John Peterson.

St. Agatha is a horror film that shot in the town of Madison, Georgia. “The house we needed for the convent was perfect, as the area was one of the few places that had not burned down during the Civil War,” explains Seth Michaels. “It was our first time shooting in Atlanta, and the number one reason was because of the tax incentive. But we also knew Georgia had an infrastructure that could handle our production.”

What the producers didn’t know during production was that Moonshine Post could handle all aspects of post; the studio was initially brought in only for dailies. With the opportunity to do a producer’s cut, they returned to Moonshine Post.

Time and budget dictated everything, and Moonshine Post was able to offer two editors working in tandem to edit a final cut. “Why not cut in collaboration?” suggested Drew Sawyer, founder of Moonshine Post and executive producer. “It will cut the time in half, and you can explore different ideas faster.”

“We quite literally split the movie in half,” reports Perry, who, along with Slawitschka, cut on Adobe Premiere. “It’s a 90-minute film, and there was a clear break. It’s a little unusual, I will admit, but almost always when we are working on something, we don’t have a lot of time, so splitting it in half works.”

Patrick Perry

Gerhardt Slawitschka

“Since it was a producer’s cut, when it came to us it was in Premiere, and it didn’t make sense to switch over to Avid,” adds Slawitschka. “Patrick and I can use both interchangeably, but prefer Premiere; it offers a lot of flexibility.”

“The editors, Patrick and Gerhardt, were great,” says Sara Michaels. “They watched every single second of footage we had, so when we recut the movie, they knew exactly what we had and how to use it.”

“We have the same sensibilities,” explains Gerhardt. “On long-form projects we take a feature in tandem, maybe split it in half or in reels. Or, on a TV series, each of us take a few episodes, compare notes, and arrive at a ‘group mind,’ which is our language of how a project is working. On St. Agatha, Patrick and I took a bit of a risk and generated a four-page document of proposed thoughts and changes. Some very macro, some very micro.”

Colorist John Peterson, a partner at Moonshine Post, worked closely with the director on final color using Blackmagic’s Resolve. “From day one, the first looks we got from camera raw were beautiful.” Typically, projects shot in Atlanta ship back to a post house in a bigger city, “and maybe you see it and maybe you don’t. This one became a local win, we processed dailies, and it came back to us for a chance to finish it here,” he says.

Peterson liked working directly with the director on this film. “I enjoyed having him in session because he’s an artist. He knew what he was looking for. On the flashbacks, we played with a variety of looks to define which one we liked. We added a certain amount of film grain and stylistically for some scenes, we used heavy vignetting, and heavy keys with isolation windows. Darren is a director, but he also knows the terminology, which gave me the opportunity to take his words and put them on the screen for him. At the end of the week, we had a successful film.”

John Peterson

The recent expansion of Moonshine Post, which included partnerships with audio company Bare Knuckles Creative and visual effects company Crafty Apes, “was necessary, so we could take on the kind of movies and series we wanted to work with,” explains Sawyer. “But we were very careful about what we took and how we expanded.”

They recently secured two AMC series, along with projects from Netflix. “We are not trying to do all the post in town, but we want to foster and grow the post production scene here so that we can continue to win people’s trust and solidify the Atlanta market,” he says.

Uncork’d Entertainment’s St. Agatha was in theaters and became available on-demand starting February 8. Look for it on iTunes, Amazon, Google Play, Vudu, Fandango Now, Xbox, Dish Network and local cable providers.

Warner Bros. Studio Facilities ups Kim Waugh, hires Duke Lim

Warner Bros. Studio Facilities in Burbank has promoted long-time post exec Kim Waugh to executive VP, worldwide post production services. They have also hired Duke Lim to serve as VP, post production sound at the studio.

In his new role, Waugh will be reporting to Jon Gilbert, president, worldwide studio facilities, Warner Bros., and will continue to lead the post creative services senior management team, overseeing all marketing, sales, talent management, facilities and technical operations across all locations. Waugh has been instrumental in expanding the business beyond the studio’s Burbank-based headquarters, first to Soho, London, in 2012 with the acquisition of Warner Bros. De Lane Lea and then to New York with the 2015 acquisition of WB Sound in Manhattan.

The group supports all creative post production elements, ranging from sound mixing, editing and ADR to color correction and restoration, for Warner Bros.’ clients worldwide. Waugh’s creative services group features a vast array of award-winning artists, including the Oscar-nominated sound mixing team behind Warner Bros. Pictures’ A Star is Born.

Reporting to Waugh, Lim is responsible for overseeing the post sound creative services supporting Warner Bros.’ film and television clients on a day-to-day basis across the studio’s three facilities.

Duke Lim

Says Gilbert, “At all three of our locations, Kim has attracted award-winning creative talent who are sought out for Warner Bros. and third-party projects alike. Bringing in seasoned post executive Duke Lim will create an even stronger senior management team under Kim.”

Waugh most recently served as SVP, worldwide post production services, Warner Bros. Studio Facilities, a post he had held since 2007. In this position, he managed the post services senior management team, overseeing all talent, sales, facilities and operations on a day-to-day basis, with a primary focus on servicing all Warner Bros. Studios’ post sound clients. Prior to joining Warner Bros. as VP, post production services in 2004, Waugh worked at Ascent Media Creative Sound Services, where he served as SVP of sales and marketing, managing sales and marketing for the company’s worldwide divisional facilities. Prior to that, he spent more than 10 years at Soundelux, holding posts as president of Soundelux Vine Street Studios and Signet Soundelux Studios.

Lim has worked in the post production industry for more than 25 years, most recently at the Sony Sound Department, which he joined in 2014 to help expand the creative team and total number of mix stages. He began his career at Skywalker Sound South, serving in various positions until its acquisition by Todd-AO in 1995, when Lim was given the opportunity to move into operations and began managing the mixing facilities for both its Hollywood location and the Todd-AO West studio in Santa Monica.

Behind the Title: ATK PLN Technical Supervisor Jon Speer

NAME: Jon Speer

COMPANY: ATK PLN (@atkpln_studio) in Dallas

CAN YOU DESCRIBE YOUR COMPANY?
We are a strategic creative group that specializes in design and animation for commercials and short-form video productions.

WHAT’S YOUR JOB TITLE?
Technical Supervisor

WHAT DOES THAT ENTAIL?
In general, a technical supervisor is responsible for leading the technical director team and making sure that the pipeline enables our artists’ effort of fulfilling the client’s vision.

Day-to-day responsibilities include:
– Reviewing upcoming jobs and making sure we have the necessary hardware resources to complete them
– Working with our producers and VFX supervisors to bid and plan future work
– Working with our CG/VFX supervisors to develop and implement new technologies that make our pipeline more efficient
– When problems arise in production, I am there to determine the cause, find a solution and help implement the fix
– Developing junior technical directors so they can be effective in mitigating pipeline issues that crop up during production

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I would say the most surprising thing that falls under the title is the amount of people and personality management that you need to employ.

As a technical supervisor, you have to represent every single person’s different perspectives and goals. Making everyone from artists, producers, management and, most importantly, clients happy is a tough balancing act. That balancing act needs to be constantly evaluated to make sure you have both the short-term and long-term interests of the company, clients and artists in mind.

WHAT TOOLS DO YOU USE?
Maya, Houdini and Nuke are the main tools we support for shot production. We have our own internal tracking software that we also integrate with.

From text editors for coding, to content creation programs and even budgeting programs, I typically use it all.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Starting the next project. Each new project offers the chance for us to try out a new or revamped pipeline tool that we hope will make things that much better for our team. I love efficiencies, so getting to try new tools, whether they are internally or externally developed, is always fun.

WHAT’S YOUR LEAST FAVORITE?
I know it sounds cliché, but I don’t really have one. My entire job is based on figuring out why things don’t work or how they could work better. So when things are breaking or getting technically difficult, that is why I am here. If I had to pick one thing, I suppose it would be looking at spreadsheets of any kind.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Early morning when no one else is in. This is the time of day that I get to see what new tools are out there and try them. This is when I get to come up with the crazy ideas and plans for what we do next from a pipeline standpoint. Most of the rest of my day usually includes dealing with issues that crop up during production, or being in meetings.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I think I would have to give teaching a try. Having studied architecture in school, I always thought it would be fun to teach architectural history.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We just wrapped on a set of Lego spots for the new Lego Movie 2 film.

Fallout 76

We also did an E3 piece for Fallout 76 this year that was a lot of fun. We are currently helping out with a spot for the big game this year that has been a blast.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
I think I am most proud of our Lego spots we have created over the last three years. We have really experimented with pipeline on those spots. We saw a new technology out there — rendering in Octane — and decided to jump in head first. While it wasn’t the easiest thing to do, we forced ourselves to become even more efficient in all aspects of production.

NAME PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Houdini really makes the difficult things simple to do. I also love Nuke. It does what it does so well, and is amazingly fast and simple to program in.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Mainly I’ll listen to soundtracks when I am working; the lack of words is best when I am programming.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Golf is something I really enjoy on the weekends. However, like a lot of people, I find travel is easily the best way for me to hit the reset button.

HPA Tech Retreat 2019: An engineer’s perspective

By John Ferder

Each year, I look forward to attending the Hollywood Professional Association’s Tech Retreat, better known as the HPA Tech Retreat. Apart from escaping the New York winter, it gives me new perspectives, a chance to exchange ideas with friends and colleagues and explore the latest technical and creative information. As a broadcast engineer, I get a renewed sense of excitement and purpose.

Also, as secretary/treasurer of SMPTE, I find that the Board of Governors meetings, as well as the Strategy Day held each year before the Tech Retreat, energize me. This year, we invited a group of younger professionals to tell us what SMPTE could do to attract them to SMPTE and HPA, and what they needed from us as experienced professionals.

Their enthusiasm and honesty were refreshing and encouraging. We learned that while we have been trying to reach out to them, they have been looking for us to invite them into the Society. They have been looking for mentors and industry leaders to engage them one-on-one and introduce them to SMPTE and how it can be of value to them.

Presentations and Hot Topics
While it is true that the Hollywood motion picture community is behind producing this Tech Retreat, it is by no means limited to the film industry. There was plenty of content and information for those of us on the broadcast side to learn and incorporate into our workflows and future planning, including a presentation on the successor to SMPTE timecode. Peter Symes, formerly director of standards for SMPTE and a SMPTE Fellow, presented an update on the TLX Project and the development of what is to be SMPTE Standard ST2120, the Extensible Time Label.

This suite of standards will be built on the work already done in ST2059, which describes the use of the IEEE 1588 Precision Time Protocol to synchronize video equipment over an IP network. The Extensible Time Label will succeed, not replace, ST12, the analog timecode that we have used with great success for 50 years. As production moves increasingly toward IP networks, this work will produce a digital time labeling system that is as universal as ST12 timecode has been. Symes invited audience members to join the 32NF80 Technology Committee, which is developing and drafting the standard.

Phil Squyres

What were the hot topics this year? HDR, wide color gamut, AI/machine learning, IMF and next-generation workflows accounted for a large number of presentations. While this may seem to be the “same old, same old,” the amount of both technical and practical information presented this year was a real eye-opener to many of us.

Phil Squyres gave a talk on next-generation versus broadcast production workflows, revealing that completing a program episode for OTT distribution takes 2.2X or more the time and storage of completing it for broadcast. This echoed the observations of an earlier panel of colorists and post specialists for Netflix feature films, one of whom stated that instead of planning to complete post production two weeks prior to release, plan on completing five to six weeks prior in order to allow for the extra work of QC’ing both the HDR and SDR releases.

Artificial Intelligence and Machine Learning
Perhaps the most surprising presentation for me was given by Rival Theory, a company that generates AI personas based on real people’s memories, behaviors and mannerisms. They detailed the process by which they are creating a persona of Tony Robbins, the famous motivational speaker and an investor in Rival Theory. Robbins intends to have a life-like persona created to help people with life coaching and to continue his mission to end suffering throughout the world, even after he dies. In addition to demonstrating the multi-camera capture and rendering of his face while talking and displaying many emotions, they showed how Robbins’ speech was recorded and synthesized for the persona. A rendering of the completed persona was presented and was very impressive.

Many presentations focused on applications of AI and machine learning in existing production and post workflows. I appreciated that a number of the presenters stressed that their solutions were meant not to replace the human element in these workflows, but to instead apply AI/ML to the redundant and tedious tasks, not the creative ones. Jason Brahms of Video Gorillas brought that point home in his presentation on “AI Film Restoration at 12 Million Frames per Second,” as did Tim Converse of Adobe in “Leveraging AI in Post Production.”

Broadcasters panel

Panels and Roundtables
Matthew Goldman of MediaKind chaired the annual Broadcasters Panel, which included Del Parks (Sinclair), Dave Siegler (Cox Media Group), Skip Pizzi (NAB) and Richard Friedel (Fox). They discussed the further development and implementation of the ATSC 3.0 broadcast standard, including the Pearl Consortium initiative in Phoenix and other locations, the outlook for ATSC 3.0 tuner chips in future television receivers and the applications of the standard beyond over-the-air broadcasting, with an emphasis on data-casting services.

All of the members of the panel are strong proponents of the ATSC 3.0 standard, and more broadcasters are joining the evolution toward implementing it. I would have appreciated seeing someone of similar stature on the panel who is not quite so gung-ho on the standard, to discuss some of the challenges and difficulties that went unaddressed, so that we could get a balanced presentation. For example, there is no government mandate nor sponsorship for the move to ATSC 3.0 as there was for the move to ATSC 1.0, so what really motivates broadcasters to make this move? Have the effects of the broadcast spectrum re-packing on available bandwidth hurt the ability of broadcasters in all markets to accommodate both ATSC 3.0 and ATSC 1.0 channels?

I really enjoyed “Adapting to a COTS Hardware World,” moderated by Stan Moote of the IABM. Paul Stechly, president of Applied Electronics, noted that more and more end users are building their own in-house solutions, assisted by manufacturers moving away from proprietary applications to open APIs. Another insight panelists shared was that COTS no longer applies to data hubs and switches only. Today, that term can be extended to desktop computers and consumer televisions and video displays as well. More and more, production and post suites are incorporating these into their workflows and environments to test their finished productions on the equipment on which their audience would be viewing them.

Breakfast roundtables

Breakfast Roundtables, which were held on Wednesday, Thursday and Friday mornings, are among my conference “must attends.” Over breakfast, manufacturers and industry experts are given a table to present a topic for discussion by all the participants. The exchange of ideas and approaches benefits everyone at the tables and is a great wake-up exercise leading into the presentations. My favorite, and one of the most popular of the Tech Retreat, is on Friday when S. Merrill Weiss of the Merrill Weiss Group, as he has for many years, presents us with a list of about 12 topics to discuss. This year, his co-host was Karl Paulsen, CTO of Diversified Systems, and the conversations were lively indeed. Some of the topics we discussed were the costs of building a facility based on ST2110, the future of coaxial cable in the broadcast plant, security in modern IP networks and PTP, and the many issues in the evolution from ATSC 1.0 to ATSC 3.0.

As usual, extra people were trying to squeeze in at or around the table, which is always full. We didn’t address every topic, and we had to cut the discussions short or risk missing the first presentation of the day.

Final Thoughts
The HPA Tech Retreat’s presentations, panels and discussion forums are a continuing tool in my professional development. Attending this year reaffirmed and amplified my belief that this event should be on every broadcaster’s and content creator’s calendar. The presentations showed that the line between the motion picture and television communities is blurring further and that the techniques embraced by one community are also of benefit to the other.

The HPA Tech Retreat is still small enough to allow engaging conversations with speakers and industry professionals, who share their industry, technical and creative insights, issues and findings.


John Ferder is the principal engineer at John Ferder Engineer, currently Secretary/Treasurer of SMPTE, an SMPTE Fellow, and a member of IEEE. Contact him at john@johnferderengineer.com.

Review: HP’s double-hinged ZBook Studio x360 mobile workstation

By Mike McCarthy

I recently had the opportunity to test HP’s ZBook Studio x360 mobile workstation over the course of a few weeks. HP’s ZBook mobile workstation division has really been thinking outside the box lately, with the release of the ZBook X2 tablet, the HP Z-VR backpack-mounted system and now the ZBook Studio x360.

The ZBook Studio x360 is similar in design and functionality to HP’s other x360 models — the Pavilion, Spectre, Envy, ProBook and EliteBook x360 — in that the display is double-hinged. The keyboard can be folded all the way behind the screen, allowing it to be used similarly to a tablet, or placed in “tent” or “presentation” mode with the keyboard partially folded behind it. But the ZBook is clearly the top-end option of the systems available in that form factor. And it inherits all of the engineering from the rest of HP’s extensive product portfolio with regard to security, serviceability and interface.

Performance-wise, this Studio x360 model sits somewhere in the middle of HP’s extensive ZBook mobile workstation lineup. It is above the lightweight ZBook 14u, 15u and X2 tablet, with their low-voltage U-series CPUs, and the value-oriented 15v. It is similar to the more traditional clamshell ultrabook ZBook Studio, and it has less graphics power and RAM than the top-end ZBook 15 and 17.

It is distinguished from the ZBook Studio by its double-hinged 360-degree folding chassis and its touch and pen inking capability. It is larger than the ZBook X2, with more powerful internal hardware. This model is packed with processing power in the form of a 6-core 8th-generation Xeon processor, 32GB of RAM and an Nvidia Quadro P1000 GPU. The 15.6-inch UHD screen reaches 400 nits at full brightness and, of course, supports touch and pen input.

Configuration Options
The unit has a number of interesting configuration options, with two M.2 slots and a 2.5-inch bay allowing up to 6TB of internal storage, although most users will forgo the 2.5-inch SATA bay for the extended 96Wh battery. There is also the choice between a 4G WWAN card and a DreamColor display, giving users a wide selection of possible capabilities.

Because of the work I do, I am mostly interested in answering the question: “How small and light can I go and still get my work done effectively?” In order to answer that question, I am reviewing a system with most of the top-end options. I started with a 17-inch Lenovo P71 last year, then tried a large 15-inch PNY PrevailPro and now am trying out this much lighter 15-inch book. There is no compromise with the 6-core CPU, as it is the same one found in a 17-inch beast. So the biggest difference is in the GPU: the mobile Quadro P1000 has only 512 CUDA cores, about one-third the power of the Quadro P4000 I last tested. So VR is not going to work, but aside from heavy color grading, most video editing tasks should be supported. And 32GB of RAM should be enough for most users. I also installed a second NVMe drive, giving me a total of 2TB of storage.

Display
The 15.6-inch display is available in a number of different options, all supporting touch and digital pen input. The base-level full-HD screen can be upgraded to a Sure View screen, allowing the user to selectively narrow the viewing angle at the press of a key in order to increase their privacy. Next up is the beautiful 400-nit UHD screen that my unit came with. And the top option is a 600-nit DreamColor calibrated UHD panel. All of the options fully support touch and pen input.

Connectivity
The unit has dual Thunderbolt 3 ports supporting DisplayPort 1.3, as well as HDMI, dual USB 3.1 Type-A ports, an SDXC card slot and an audio jack. The main feature I am missing is an RJ-45 jack for Gigabit Ethernet. I get that there are trade-offs to be made in any configuration, but that is the one item I wish this unit had. On the flip side, with the release of affordable Thunderbolt-based 10GbE adapters, that is probably what I would pair with this unit if I were going to use it to edit assets stored on my network. So that is a solvable problem.

Serviceability
Unlike the heavier ZBook 15 and 17 models, it does not have a tool-less chassis, but that is an understandable compromise to reduce size and weight. I was able to remove the bottom cover with a single Torx screwdriver, giving me access to the RAM, the wireless cards and the M.2 slots, one of which I populated with a second NVMe drive for testing. The battery can also be replaced that way should the need arise, but the 96Wh long-life battery is fully covered by the system warranty, be that three or five years depending on your service level.

Security
There are a number of unique features that this model shares with many others in HP’s lineup. The UEFI-based HP Sure Start BIOS and pre-boot environment provide a host of options for enterprise-level IT management, and make it less likely that the boot process will get corrupted. HP Sure Click is a security mechanism that isolates each Chromium browser tab in its own virtual machine, protecting the rest of your system from any malware that it might otherwise be exposed to. Sure Run and Sure Recover are designed to prevent and recover from security failures that render the system unusable.

The HP Client Security Manager brings the controls for all of this functionality into one place and uses the system’s integrated fingerprint reader. HP WorkWise is a utility for integrating the laptop with one’s cell phone, allowing automatic system lock and unlock when the phone leaves or enters Bluetooth range, as well as phone notifications from the other “Sure” security applications.

Thunderbolt Dock
HP also supplied me with its new Thunderbolt dock. The single most important feature on that unit, from my perspective, is the Gigabit Ethernet port, since there isn’t one built into the laptop. It also adds two DisplayPorts and one VGA output and includes five more USB ports. I was able to connect my 8K display to the DisplayPort output, and it ran fine at 30Hz, as is to be expected from a single Thunderbolt connection. The dock should run anything smaller than that at 60Hz, including two 4K displays.

The dock also supports an optional audio module to facilitate better conference calls, with a built-in speaker, microphone and call buttons. It is a nice idea but a bit redundant since the laptop has a “world-facing” microphone for noise cancellation or group calling and even has “Collaboration Keys” for controlling calls built into the top of the keyboard. Apparently, HP sees this functionality totally replacing office phones.

I initially struggled to get the dock to work — aside from the DisplayPorts — but this was because I connected it before boot-up. Unlike docking stations from back in the day, Thunderbolt is fully hot-swappable, and the dock actually needs to be connected while the system is powered on the first time it is used, in order to trigger the dialog box that grants it low-level access to your computer — a security measure. Once I did that, it has worked seamlessly.

The two-part cable integrates a dedicated power port and Thunderbolt 3 connection, magnetically connected for simple usage while maintaining flexibility for future system compatibility. The system can receive power from the Thunderbolt port, but for maximum power and performance uses a 130W dedicated power plug as well, which appears to be standardized across much of HP’s line of business products.

Touchscreens and Pens
I had never seriously considered tablets or touchscreen solutions for my own work until one of HP’s reps showed me an early prototype of the ZBook X2 a few years ago. I initially dismissed it until he explained how much processing power they had packed into it. Only then did I recognize that HP had finally fulfilled two of my very different and long-standing requests in a way that I hadn’t envisioned. I had been asking the display team for a lightweight battery-powered DreamColor display, and I had been asking the mobile workstation team for a 12- or 14-inch Nvidia-powered model — this new device was both.

I didn’t end up reviewing the X2 during its initial release last year, although I plan to soon. But once the X2 shifted my thinking about tablet and touch-based tools, I saw this ZBook Studio x360 as an even more powerful implementation of that idea, in a slightly larger form factor. While I have used pens on other people’s systems in the past, usually when doing tech support for other editors, this is my first attempt to do real work with a pen instead of a mouse and keyboard.

One of the first obstacles I encountered was getting the pen to work at all. Unlike the EMR-based pens from Wacom tablets and the ZBook X2, the x360 uses an AES-based pen, which requires power and a Bluetooth connection to communicate with the system. I am not the only user to be confused by this solution, but I have been assured by HP that the lack of documentation and USB-C charging cable have been remedied in currently shipping systems.

It took me a while (and some online research) to figure out that there was a USB-C port hidden in the pen and that it needed to be charged and paired with the system. Once I did that, it functioned fine for me. The pen itself works great, with high precision, 4,096 levels of pressure sensitivity and tilt support. I am not much of a sketcher or painter, but I do a lot of work in Photoshop, either cleaning images up or creating facial expressions for my Character Animator puppets. The pen is a huge step up from the mouse for creating smooth curves and natural lines. And the various buttons worked well for me once I got used to them. But I don’t do a lot of work that benefits from pen support, and trying to adapt other tasks to pen-based input was more challenging than I anticipated.

The other challenge I encountered was with the pen holder, which fits into the SD card slot. The design is good and works better than I would have expected, but removing the original SD plug that protects the slot was far more difficult than it should have been. I assume the plug is necessary for the system to pass the 13 MIL-SPEC-type tests that HP runs all of its ZBooks through, but I probably won’t be wedging it back into that slot as long as I have the system.

Inking
I am not much of a tablet user yet, since this was my first foray into that form factor, and the system is a bit large and bulky when folded back into tablet mode. I have hit the power button by accident on multiple occasions, hibernating the system while I was trying to use it. This has primarily been an issue in tablet mode, because that is right where I hold it with my left hand by default. But the biggest limitation I encountered in tablet mode was recognizing just how frequently I use the keyboard during the course of my work. While Windows Inking does allow an onscreen keyboard to be brought up for text entry, functions like holding Alt for anchor-based resizing are especially challenging. I am curious to see whether some of these issues are alleviated on the X2 by the buttons built into the edge of its display. As long as I have easy access to Shift, Ctrl, Alt, C, V and a couple of others, I think I would be good to go, but it is one of those things you can’t know for sure until you try it yourself. And different people with varying habits and preferences might prefer different solutions to the same tasks. In my case, I have not found the optimal touch and inking experience yet.

Performance
I was curious to see what level of performance I would get from the Quadro P1000, as I usually use systems with far more GPU power. But I was impressed with how well it was able to handle the animating and editing of the 5K assets for my Grounds of Freedom animated series. I was even able to dynamically link between the various Adobe apps with a reasonable degree of interactive feedback. That is where you start to see a difference between this mobile system and a massive desktop workstation.

eGPU
Always looking for more power, I hooked up Sonnet’s Breakaway Box 550 with a variety of different Nvidia GPUs to accelerate the graphics performance of the system. The Quadro P6000 was the best option, as it used the same Quadro driver and Pascal architecture as the integrated P1000 GPU but greatly increased performance.

It allowed me to use my Lenovo Explorer WMR headset to edit 360 video in VR with Premiere Pro, and I was able to play back 8K DNxHR files at full resolution in Premiere to my Dell 8K LCD display. I was also able to watch 8K HEVC files smoothly in the Windows movie player. That is pretty impressive for a 15-inch convertible laptop; the 6-core Xeon processor pairs well with a desktop GPU, making this an ideal system for harnessing the workflow possibilities offered by eGPU solutions.

Media Export Benchmarks
I did extensive benchmark testing, measuring the export times of various media at different settings with different internal and external GPU options. The basic conclusion was that simple transcodes and conversions are currently not much faster with an eGPU, but once color correction and other effects are brought into the equation, increasing GPU power makes processing two to five times faster.

I also tested DCP exports with Quvis’ Wraptor plugin for AME and found the laptop took less than twice as long as my top-end desktop to make DCPs, which I consider to be a good thing. You can kick out a 4K movie trailer in under 10 minutes. And if you want to export a full feature film, I would recommend a desktop, but this will do it in a couple of hours.

Final Observations
The ZBook Studio x360 is a powerful machine and an optimal host for eGPU workflows. While it exceeded my performance expectations, I did not find the touch and ink solution to be optimal for my needs as I am a heavy keyboard user, even when doing artistic tasks. (To be clear, I haven’t found a better solution. This just doesn’t suitably replace my traditional mouse and keyboard approach to work.) So if buying one for myself, I would personally opt for the non-touch ZBook Studio model. But for anyone to whom inking is a critical part of their artistic workflow, who needs a powerful system on the go, this is a very capable model that doesn’t appear to have too many similar alternatives. It blends the power of the ZBook Studio with the inking experience of HP’s other x360 products.


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

SGO’s Mistika Ultima integrates AJA’s Kona 5

SGO has integrated AJA‘s Kona 5 audio and video I/O cards into its full finishing and workflow solution Mistika Ultima, providing simplified and optimized 8K production for broadcast clients.

The new Mistika Ultima 8K System provides a realtime finishing workflow for 8K full UHD at 60p, even with uncompressed formats. It comprises an AJA Kona 5 card with 12G-SDI I/O connectivity, Mistika Ultima software, an HP Z8 workstation, a high-performance SGO storage solution and other industry-standard hardware.

Kona 5 is a high-performance eight-lane PCIe 3.0 capture and output card featuring 12G-SDI I/O and HDMI 2.0 output. For OEM partners, the card is supported on AJA’s SDK for Mac OS, Windows and Linux, offering advanced features such as 8K and multi-channel 4K. Kona 5 is also compatible with creative tools such as Adobe Premiere Pro, Apple Final Cut Pro X and Avid Media Composer, via AJA’s proven Mac OS and Windows drivers and application plug-ins. Kona 5 enables simultaneous capture with signal passthrough when using 12G-SDI, and offers HDMI 2.0 output, as well as deep-color and multi-format HDR support.

Molinare hires Nigel Bennett as commercial director

Nigel Bennett will be joining London’s Molinare as commercial director. He was most recently at Pinewood Studios and starts in May. 

Bennett brings experience managing creative, technical and financial pressures within post production.

At Pinewood Studios, Bennett was the group director of creative services, a position he had held since 2014, in which he oversaw the opening of Pinewood Digital in Atlanta. Over a career spent in post, Bennett worked his way up from re-recording mixer through operations management across film, TV and games, and head of operations for digital content services, to his most recent role.

As a re-recording mixer at Shepperton Studios, he worked on a range of titles such as Nanny McPhee, Troy, Love Actually, Gosford Park and Last Orders. 

The London facility looks to build on the success of award-winning dramas Killing Eve and Bodyguard, the Primetime Emmy award-nominated Patrick Melrose, the documentary Three Identical Strangers and feature Mission: Impossible – Fallout, all from last year.

Sound designer Ash Knowlton joins Silver Sound

Emmy Award-winning NYC sound studio Silver Sound has added sound engineer Ash Knowlton to its roster. Knowlton is both a location sound recordist and sound designer, and on rare and glorious occasions she is DJ Hazyl. Knowlton has worked on film, television, and branded content for clients such as NBC, Cosmopolitan and Vice, among others.

“I know it might sound weird but for me, remixing music and designing sound occupy the same part of my brain. I love music, I love sound design — they are what make me happy. I guess that’s why I’m here,” she says.

Knowlton moved to Brooklyn from Albany when she was 18 years old. To this day, she considers making the move to NYC and surviving as one of her biggest accomplishments. One day, by chance, she ran into filmmaker John Zhao on the street and was cast on the spot as the lead for his feature film Alexandria Leaving. The experience opened Knowlton’s eyes to the wonders and complexity of the filmmaking process. She particularly fell in love with sound mixing and design.

Ten years later, with over seven independent feature films now under her belt, Knowlton is ready for the next 10 years as an industry professional.

Her tools of choice at Silver Sound are Reaper, Reason and Kontakt.

Main Photo Credit: David Choy

Method Studios adds Bill Tlusty as global head of production

Method Studios has brought veteran production executive and features VFX producer Bill Tlusty on board in the new role of global head of production. Reporting to EVP of global features VFX Erika Burton, Tlusty will oversee Method’s global feature film and episodic production operation, leading teams worldwide.

Tlusty’s career as both a VFX producer and executive spans two decades. Most recently, as an executive with Universal Pictures, he managed more than 30 features, including First Man and The Huntsman: Winter’s War. His new role marks a return to Method Studios, as he served as head of studio in Vancouver prior to his gig at Universal. Tlusty also spent eight years as a VFX producer and executive producer at Rhythm & Hues.

At Rhythm & Hues, he was lead executive on Snow White and the Huntsman and the VFX Oscar-winning Life of Pi. His other VFX producer credits include Night at the Museum: Battle of the Smithsonian, The Mummy: Tomb of the Dragon Emperor and Yogi Bear, and he served as production manager on Hulk and Peter Pan and as coordinator on A.I. Artificial Intelligence. Early in his career, Tlusty worked as a production assistant at American Zoetrope, working for its iconic filmmaker founders, Francis Ford Coppola and George Lucas. His VFX career began at Industrial Light & Magic, where he worked in several capacities on the Star Wars prequel trilogy, first as a VFX coordinator and later as production manager on the series. He is a member of the Producers Guild of America.

“Method has pursued intelligent growth, leveraging the strength across all of its studios, gaining presence in key regions and building on that to deliver high-quality work on a massive scale,” says Tlusty. “Coming from the client side, I understand how important it is to have the flexibility to grow as needed for projects.”

Tlusty is based in Los Angeles and will travel extensively among Method’s global studios.

Updated Quantum Xcellis targets robust video workflows

Quantum has updated its Xcellis storage environment, which allows users to ingest, edit, share and store media content. These new appliances, which are powered by the company’s StorNext platform, are based on a next-generation server architecture that includes dual eight-core Intel Xeon CPUs, 64GB of memory, SSD boot drives and dual 100Gb Ethernet or 32Gb Fibre Channel ports.

The enhanced CPU and 50% increase in RAM over the previous generation greatly improve StorNext metadata performance. These enhancements make tasks such as file auditing less time-intensive, support an even greater number of clients per node and enable the management of billions of files per node. Users operating in a dynamic application environment on storage nodes will also see performance improvements.

With the ability to provide cross-protocol locking for shared files across SAN, NFS and SMB, Xcellis targets organizations that have collaborative workflows and need to share content across both Fibre Channel and Ethernet.

Leveraging this next-generation hardware platform, StorNext will provide higher levels of streaming performance for video playback. Xcellis appliances provide a high-performance gateway for StorNext advanced data management software to integrate tiers of scalable on-premises and cloud-based storage. This end-to-end capability provides a cost-effective way to retain massive amounts of data.

StorNext offers a variety of features that ensure the data protection of valuable content over its entire lifecycle. Users can easily copy files to off-site tiers and take advantage of versioning to roll back to an earlier point in time (prior to a malware attack, for example), as well as set up automated replication for disaster recovery purposes — all of which is designed to protect digital assets.

Quantum’s latest Xcellis appliances are available now.

AICE Awards rebranded to AICP Post Awards

AICP has announced the Call for Entries for the AICP Post Awards, its revamped and rebranded competition for excellence in the post production arts. Formerly the AICE Awards, the competition’s categories have been re-imagined with a focus on recognizing standout examples of craft and technique in editing, audio, design, visual effects artistry and finishing. The AICP Post Awards are part of the AICP Awards suite of competitions, which also includes The AICP Show: The Art & Technique of the American Commercial and the AICP Next Awards, both of which are also currently accepting entries.

Among the changes for the AICP Post Awards this year is the opening of the competition to any entity involved in the creation of a piece of content, beyond the AICP membership — previously, the AICE Awards was a “members-only” competition.

For the full rundown on rules, categories, eligibility and fees, visit the AICP Post Awards entry portal. Deadline for entries is Thursday, February 8 at 11:59pm PST. Entrants can use the portal to cross-enter work between all three of the 2019 AICP competitions, including the AICP Show: The Art & Technique of the American Commercial and the AICP Next Awards.

Regarding categories, the competition has regrouped existing ones, introduced a range of new sections, expanded others and added an entirely new category for vertical video.

Danny Rosenbloom

“While we’ll continue to recognize editorial across a wide range of product, genre and technique categories, we now have a wider range of subcategories in areas like audio, visual effects and design, and color grading,” says Danny Rosenbloom, AICP’s VP of post and digital production.

“We saw this as an opportunity to make the Post Awards more reflective of the varied artists working across the spectrum of post production disciplines,” noted Matt Miller, president/CEO of AICP.  “Now that we’ve brought all this post production expertise into AICP, we want the Post Awards to be a real celebration of creative talent and achievement.”

A full list of AICP Post Awards categories now includes the following:

Editorial Categories
Automotive
Cause Marketing
Comedy
Dialogue
Monologue/Spoken Word
Docu-Style
Fashion/Beauty
Montage
Music Video
Storytelling
National Campaign
Regional Campaign

Audio Categories
Audio Mix
Sound Design With Composed Music
Sound Design Without Composed Music

Color Categories
Color :60
Color :30
Color Other Lengths
Color Music Video

Design, Visual Effects & Finishing Categories
Character Design & Animation
Typography Design & Animation
Graphic Design & Animation
End Tag
CGI
Compositing & Visual Effects
Vertical

In addition to its category winners and Best of Show honoree, the AICP Post Awards will continue to recognize Best of Region winners that represent the best work emanating from companies submitting within each AICP Chapter. These now encompass East, Florida, Midwest, Minnesota, Southeast, Southwest and West.

Industry vets open editorial, post studio Made-SF

Made-SF, a creative studio offering editorial and other services, has been launched by executive producer Jon Ettinger, editor/director Doug Walker and editors Brian Lagerhausen and Connor McDonald, all formerly of Beast Editorial. Along with creative editorial (Adobe Premiere), the company will provide motion graphic design (After Effects, Mocha), color correction and editorial finishing (likely Flame and Resolve). Eventually, it plans to add concept development, directing and production to its mix.

“Clients today are looking for creative partners who can help them across the entire production chain,” says Ettinger. “They need to tell stories and they have limited budgets available to tell them. We know how to do both, and we are gathering the resources to do so under one roof.”

Made is currently set up in interim quarters while completing construction of permanent studio space. The latter will be housed in a century-old structure in San Francisco’s North Beach neighborhood and will feature five editorial suites, two motion graphics suites, and two post production finishing suites with room for further expansion.

The four Made partners bring deep experience in traditional advertising and branded content, working both with agencies and directly with clients. Ettinger and Walker have worked together for more than 20 years and originally teamed up to launch FilmCore, San Francisco. Both joined Beast Editorial in 2012. Similarly, Lagerhausen and McDonald have been editing in the Bay Area for more than two decades. Collectively, their credits include work for agencies in San Francisco and nationwide. They’ve also helped to create content directly for Google, Facebook, LinkedIn, Salesforce and other corporate clients.

Made is indicative of a trend in which companies engaged in content development are adopting fluid business models to address a diversifying media landscape, and in which individual talent is no longer confined to a single job title. Walker, for example, has recently served as director on several projects, including a series of short films for Kelly Services, conceived by agency Erich & Kallman and produced by Caruso Co.

“People used to go to great pains to make a distinction about what they do,” Ettinger observes. “You were a director or an editor or a colorist. Today, those lines have blurred. We are taking advantage of that flattening out to offer clients a better way to create content.”

Main Image Caption: (L-R) Doug Walker, Brian Lagerhausen, Jon Ettinger and Connor McDonald.

Company 3 to open Hollywood studio, adds Roma colorist Steve Scott

Company 3 has added Steve Scott as EVP/senior finishing artist. His long list of credits includes Alfonso Cuarón’s Oscar-nominated Roma and Gravity; 19 Marvel features, including entries in the Avengers, Iron Man and Guardians of the Galaxy franchises; and many Academy Award-winning films, including The Jungle Book, Birdman or The Unexpected Virtue of Ignorance and The Revenant (the latter two took Oscars for director Alejandro Iñárritu and cinematographer Emmanuel Lubezki).

Roma

The addition of Scott comes at a time when Company 3 is completing work on a new location at 950 Lillian Way in Hollywood, the first phase of a planned much larger footprint in that area of Los Angeles. The new space will enable the company to significantly expand its capacity while providing the level of artistry and personalized service the industry expects from Company 3. It will also enable the company to serve more East Side and Valley-based clients.

“Steve is someone I’ve always wanted to work with and I am beyond thrilled that he has agreed to work with us at Company 3,” says CEO Stefan Sonnenfeld. “As we continue the process of re-imagining the entire concept of what ‘post production’ means creatively and technically, it makes perfect sense to welcome a leading innovator and brilliant artist to our team.”

Sonnenfeld and Scott will oversee every facet of this new boutique-style space to ensure it offers the same flexible experience clients have come to expect when working at Company 3. Scott, a devoted student of art and architecture, with extensive professional experience as a painter and architectural illustrator, says, “The opportunity to help design a new cutting-edge facility in my Hollywood hometown was too great to pass up.”

Scott will oversee a team of additional artists, offering filmmakers a significantly increased ability to augment and refine imagery as part of the finishing process.

“The industry is experiencing a renaissance of content,” says Sonnenfeld. “The old models of feature film vs. television, long- vs. short-form are changing rapidly. Workflows and delivery methods are undergoing revolutionary changes with more content, and innovative content, coming from a whole array of new sources. It’s a very exciting and challenging time and I think these major additions to our roster and infrastructure will go a long way towards our goal of continuing Company 3’s role as a major force in the industry.”

Main Image Credit: 2018 HPA Awards Ceremony/Ryan Miller/Capture Imaging

BlacKkKlansman director Spike Lee

By Iain Blair

Spike Lee has been on a roll recently. Last time we sat down for a talk, he’d just finished Chi-Raq, an impassioned rap reworking of Aristophanes’ “Lysistrata” set against a backdrop of Chicago gang violence. Since then, he’s directed various TV, documentary and video projects. And now his latest film, BlacKkKlansman, has been nominated for a host of Oscars, including Best Picture, Best Director, Best Adapted Screenplay, Best Film Editing, Best Original Score and Best Actor in a Supporting Role (Adam Driver).

Set in the early 1970s, the unlikely-but-true story details the exploits of Ron Stallworth (John David Washington), the first African-American detective to serve in the Colorado Springs Police Department. Determined to make a name for himself, Stallworth sets out on a dangerous mission: infiltrate and expose the Ku Klux Klan. The young detective soon recruits a more seasoned colleague, Flip Zimmerman (Adam Driver), into the undercover investigation. Together, they team up to take down the extremist hate group as the organization aims to sanitize its violent rhetoric to appeal to the mainstream. The film also stars Topher Grace as David Duke.

Behind the scenes, Lee reteamed with co-writer Kevin Willmott, longtime editor Barry Alexander Brown and composer Terence Blanchard, along with up-and-coming DP Chayse Irvin. I spoke with the always-entertaining Lee, who first burst onto the scene back in 1986 with She’s Gotta Have It, about making the film, his workflow and the Oscars.

Is it true Jordan Peele turned you onto this story?
Yeah, he called me out of the blue and gave me possibly the greatest six-word pitch in film history — “Black man infiltrates Ku Klux Klan.” I couldn’t resist it, not with that pitch.

Didn’t you think, “Wait, this is all too unbelievable, too Hollywood?”
Well, my first question was, “Is this actually true? Or is it a Dave Chappelle skit?” Jordan assured me it’s a true story and that Ron wrote a book about it. He sent me a script, and that’s where we began, but Kevin Willmott and I then totally rewrote it so we could include all the stuff like Charlottesville at the end.

Iain Blair and Spike Lee

Did you immediately decide to juxtapose the story’s period racial hatred with all the ripped-from-the-headlines news footage?
Pretty much, as the Charlottesville rally happened August 11, 2017 and we didn’t start shooting this until mid-September, so we could include all that. And then there was the terrible synagogue massacre, and all the pipe bombs. Hate crimes are really skyrocketing under this president.

Fair to say, it’s not just a film about America, though, but about what’s happening everywhere — the rise of neo-Nazism, racism, xenophobia and so on in Europe and other places?
I’m so glad you said that, as I’ve had to correct several people who want to just focus on America, as if this is just happening here. No, no, no! Look at the recent presidential elections in Brazil. This guy — oh my God! This is a global phenomenon, and the common denominator is fear. You fire up your base with fear tactics, and pinpoint your enemy — the bogeyman, the scapegoat — and today that is immigrants.

What were the main challenges in pulling it all together?
Any time you do a film, it’s so hard and challenging. I’ve been doing this for decades now, and it ain’t getting any easier. You have to tell the story the best way you can, given the time and money you have, and it has to be a team effort. I had a great team with me, and any time you do a period piece you have added challenges to get it looking right.

You assembled a great cast. What did John David Washington and Adam Driver bring to the main roles?
They brought the weight, the hammer! They had to do their thing and bring their characters head-to-head, so it’s like a great heavyweight fight, with neither one backing down. It’s like Inside Man with Denzel and Clive Owen.

It’s the first time you’ve worked with the Canadian DP Chayse Irvin, who mainly shot shorts before this. Can you talk about how you collaborated with him?
He’s young and innovative, and he shot a lot of Beyonce’s Lemonade long-form video. What we wanted to do was shoot on film, not digital. I talked about all the ‘70s films I grew up with, like French Connection and Dog Day Afternoon. So that was the look I was after. It had to match the period, but not be too nostalgic. While we wanted to make a period film, I also wanted it to feel and look contemporary, and really connect that era with the world we live in now. He really nailed it. Then my great editor, Barry Alexander Brown, came up with all the split-screen stuff, which is also very ‘70s and really captured that era.

How tough was the shoot?
Every shoot’s tough. It’s part of the job. But I love shooting, and we used a mix of practical locations and sets in Brooklyn and other places that doubled for Colorado Springs.

Where did you post?
Same as always, in Brooklyn, at my 40 Acres and a Mule office.

Do you like the post process?
I love it, because post is when you finally sit down and actually make your film. It’s a lot more relaxing than the shoot — and a lot of it is just me and the editor and the Avid. You’re shaping and molding it and finding your way, cutting and adding stuff, flopping scenes, and it never really follows the shooting script. It becomes its own thing in post.

Talk about editing with Barry Alexander Brown, the Brit who’s cut so many of your films. What were the big editing challenges?
The big one was finding the right balance between the humor and the very serious subject matter. They’re two very different tones, and then the humor comes from the premise, which is absurd in itself. It’s organic to the characters and the situations.

Talk about the importance of sound and music, and Terence Blanchard’s spare score that blends funk with classical.
He’s done a lot of my films, and has never been nominated for an Oscar — and he should have been. He’s a truly great composer, trumpeter and bandleader, and a big part of what I do in post. I try to give him some pointers that aren’t restrictive, and then let him do his thing. I always put as much emphasis on sound and music as I do on the acting, editing and cinematography. It’s hugely important, and once we have the score, we have a film.

I had a great sound team. Phil Stockton, who began with me back on School Daze, was the sound designer. David Boulton, Mike Russo and Howard London did the ADR mix, and my longtime mixer Tommy Fleischman was on it. We did it all at C5 in New York. We spent a long time on the mix, building it all up.

Where did you do the DI and how important is it to you?
At Company 3 with colorist Tom Poole, who’s so good. It’s very important but I’m in and out, as I know Tom and the DP are going to get the look I want.

Spike Lee on set.

Did the film turn out the way you hoped?
Here’s the thing. You try to do the best you can, and I can’t predict what the reaction will be. I made the film I wanted to make, and then I put it out in the world. It’s all about timing. This was made at the right time and was made with a lot of urgency. It’s a crazy world and it’s getting crazier by the minute.

How important are industry awards and nominations to you?
They’re very important in that they bring more attention, more awareness to a film like this. One of the blessings from the strong critical response to this has been a resurgence in looking at my earlier films again, some of which may have been overlooked, like Bamboozled and Summer of Sam.

Do you see progress in Hollywood in terms of diversity and inclusion?
There’s been movement, maybe not as fast as I’d like, but it’s slowly happening, so that’s good.

What’s next?
We just finished the second season of She’s Gotta Have It for Netflix, and I have some movie things cooking. I’m pretty busy.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Post in the cloud company BeBop adds three tech pros

BeBop Technology, a provider of secure software solutions for moving media workflows to the cloud, has added three executives to its management team: director of business development Michael Kammes, VP of product management Patrick Cooper and director of technical sales Nathaniel Bonini.

Michael Kammes joins BeBop from the media technology reseller and integrator Key Code Media, where he was director of technology. In his new position, he will leverage his experience with creative technology and tools providers to accelerate growth and provide strategic perspective across marketing, sales and partnerships. In addition to his experience as an integrator, Kammes brings more than 15 years of experience in technology consulting for the media and entertainment industry. His 5 Things web series breaks down technology and techniques. Kammes is a graduate of Columbia College in Chicago.

Cooper joins BeBop from Nokia, where he served as product manager and technical lead for tools and workflows. As part of Nokia’s Ozo camera team, he was instrumental in developing software and hardware products and designing workflows for virtual reality pros. Cooper also led film restoration and theatrical feature image processing projects at Lowry Digital and was a key contributor to the creation of its Academy Award-winning motion picture imaging technology. He is a graduate of the University of Southern California.

Bonini brings more than 30 years of experience as a technologist for cinema and broadcast. He joins BeBop from Meredith Corporation and Time Inc., where he served as director of video engineering. Throughout his career Bonini has provided crucial technology guidance as a digital cinema consultant, worked in numerous on-set and post roles, was director of integration for AbelCine and CTO for Madstone Films. He is a graduate of Rochester Institute of Technology.

BeBop’s cloud technology solutions include its flagship post production platform. It provides robust and secure virtualized desktops capable of processing-heavy tasks such as editing and visual effects, as well as “over the shoulder” collaboration, review and approval. Creatives can use industry-standard tools such as Adobe Creative Cloud on BeBop with their existing software licenses, and can collaborate, process images, render, review and approve, ingest, manage and deliver media files from anywhere in the world using any computer with a 20Mbps Internet connection.

Image Caption: Michael Kammes, Nathaniel Bonini, Patrick Cooper.

Catching up with Aquaman director James Wan

By Iain Blair

Director James Wan has become one of the biggest names in Hollywood thanks to the $1.5 billion-grossing Fast & Furious 7, as well as the Saw, Conjuring and Insidious films — three of the most successful horror franchises of the last decade.

Now the Malaysian-born, Australian-raised Wan, who also writes and produces, has taken on the challenge of bringing Aquaman and Atlantis to life. The origin story of half-surface dweller, half-Atlantean Arthur Curry stars Jason Momoa in the title role. Amber Heard plays Mera, a fierce warrior and Aquaman’s ally throughout his journey.

James Wan and Iain Blair

Additional cast includes Willem Dafoe as Vulko, counsel to the Atlantean throne; Patrick Wilson as Orm, the present King of Atlantis; Dolph Lundgren as Nereus, King of the Atlantean tribe Xebel; Yahya Abdul-Mateen II as the revenge-seeking Manta; and Nicole Kidman as Arthur’s mom, Atlanna.

Wan’s team behind the scenes included such collaborators as Oscar-nominated director of photography Don Burgess (Forrest Gump), his five-time editor Kirk Morri (The Conjuring), production designer Bill Brzeski (Iron Man 3), visual effects supervisor Kelvin McIlwain (Furious 7) and composer Rupert Gregson-Williams (Wonder Woman).

I spoke with the director about making the film, dealing with all the effects, and his workflow.

Aquaman is definitely not your usual superhero. What was the appeal of doing it? 
I didn’t grow up with Aquaman, but I grew up with other comic books, and I always was well aware of him as he’s iconic. A big part of the appeal for me was he’d never really been done before — not on the big screen and not really on TV. He’s never had the spotlight before. The other big clincher was this gave me the opportunity to do a world-creation film, to build a unique world we’ve never seen before. I loved the idea of creating this big fantasy world underwater.

What sort of film did you set out to make?
Something that was really faithful and respectful to the source material, as I loved the world of the comic book once I dove in. I realized how amazing this world is and how interesting Aquaman is. He’s bi-racial, half-Atlantean, half-human, and he feels he doesn’t really fit in anywhere at the start of the film. But by the end, he realizes he’s the best of both worlds and he embraces that. I loved that. I also loved the fact it takes place in the ocean so I could bring in issues like the environment and how we treat the sea, so I felt it had a lot of very cool things going for it — quite apart from all the great visuals I could picture.

Obviously, you never got the Jim Cameron post-Titanic memo — never, ever shoot in water.
(Laughs) I know, but to do this we unfortunately had to get really wet, as over two-thirds of the film is set underwater. The crazy irony of all this is that when people are underwater they don’t look wet. It’s only when you come out of the sea or pool that you’re glossy and dripping.

We did a lot of R&D early on, and decided that shooting underwater looking wet wasn’t the right look anyway, plus they’re superhuman and are able to move in water really fast, like fish, so we adopted the dry-for-wet technique. We used a lot of special rigs for the actors, along with bluescreen, and then combined all that with a ton of VFX for the hair and costumes. Hair is always a big problem underwater, as like clothing it behaves very differently, so we had to do a huge amount of work in post in those areas.

How early on did you start integrating post and all the VFX?
It’s that kind of movie where you have to start post and all the VFX almost before you start production. We did so much prep, just designing all the worlds and figuring out how they’d look, and how the actors would interact with them. We hired an army of very talented concept artists, and I worked very closely with my production designer Bill Brzeski, my DP Don Burgess and my visual effects supervisor Kelvin McIlwain. We went to work on creating the whole look and trying to figure out what we could shoot practically with the actors and stunt guys and what had to be done with VFX. And the VFX were crucial in dealing with the actors, too. If a body didn’t quite look right, they’d just replace them completely, and the only thing we’d keep was the face.

It almost sounds like making an animated film.
You’re right, as over 90% of it was VFX. I joke about it being an animated movie, but it’s not really a joke. It’s no different from, say, a Pixar movie.

Did you do a lot of previs?
A lot, with people like Third Floor, Day For Nite, Halon, Proof and others. We did a lot of storyboards too, as they are quicker if you want to change a camera angle, or whatever, on the fly. Then I’d hand them off to the previs guys and they’d build on those.

What were the main technical challenges in pulling it all together on the shoot?
We shot most of it Down Under, near Brisbane. We used all nine of Village Roadshow Studios’ soundstages, including the new Stage 9, as we had over 50 sets, including the Atlantis Throne Room and Coliseum. The hardest thing in terms of shooting it was just putting all the actors in the rigs for the dry-for-wet sequences; they’re very cumbersome and awkward, and the actors are also in these really outrageous costumes, and it can be quite painful at times for them. So you can’t have them up there too long. That was hard. Then we used a lot of newish technology, like virtual production, for scenes where the actors are, say, riding creatures underwater.

We’d have it hooked up to the cameras so you could frame a shot and actually see the whole environment and the creature the actor is supposed to be on — even though it’s just the actors and bluescreen and the creature is not there. And I could show the actors — look, you’re actually riding a giant shark — and also tell the camera operator to pan left or right. So it was invaluable in letting me adjust performance and camera setups as we shot, and all the actors got an idea of what they were doing and how the VFX would be added later in post. Designing the film was so much fun, but executing it was a pain.

The film was edited by Kirk Morri, who cut Furious 7, and worked with you on the Insidious and The Conjuring films. How did that work?
He wasn’t on set but he’d visit now and again, especially when we were shooting something crazy and it would be cool to actually see it. Then we’d send dailies and he’d start assembling, as we had so much bluescreen and VFX stuff to deal with. I’d hop in for an hour or so at the end of each day’s shoot to go over things as I’m very hands on — so much so that I can drive editors crazy, but Kirk puts up with all that.

I like to get a pretty solid cut from the start. I don’t do rough assemblies. I like to jump straight into the real cut, and that was so important on this because every shot is a VFX shot. So the sooner you can lock the shot, the better, and then the VFX teams can start their work. If you keep changing the cut, then you’ll never get your VFX shots done in time. So we’d put the scene together, then pass it to previs, so you don’t just have actors floating in a bluescreen, but they’re in Atlantis or wherever.

Where did you do the post?
We did most of it back in LA on the Warner lot.

Do you like the post process?
I absolutely love it, and it’s very important to my filmmaking style. For a start, I can never give up editing and tweaking all the VFX shots. They have to pull it away from me, and I’d say that my love of all the elements of the post process — editing, sound design, VFX, music — comes from my career in suspense movies. Getting all the pieces of post right is so crucial to the end result and success of any film. This post was creatively so much fun, but it was long and hard and exhausting.

James Wan

All the VFX must have been a huge challenge.
(Laughs) Yes, as there are over 2,500 VFX shots and we had everyone working on it — ILM, Scanline, Base, Method, MPC, Weta, Rodeo, Digital Domain, Luma — anyone who had a computer! Every shot had some VFX, even the bar scene where Arthur’s with his dad. That was a set, but the environment outside the window was all VFX.

What was the hardest VFX sequence to do?
The answer is, the whole movie. The trench sequence was hard, but Scanline did a great job. Anything underwater was tough, and then the big final battle was super-difficult, and ILM did all that.

Did the film turn out the way you hoped?
For the most part, but like most directors, I’m never fully satisfied.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Review: Picture Instruments’ plugin and app, Color Cone 2

By Brady Betzel

There are a lot of different ways to color correct an image. Typically, colorists will start by adjusting contrast and saturation, followed by adjusting the lift, gamma and gain (a.k.a. shadows, midtones and highlights). For video, waveforms and vectorscopes are great ways of measuring color values and are about the only way to get truly objective readings of the colors you are manipulating.

Whether you are in Blackmagic Resolve, Avid Media Composer, Adobe Premiere Pro, Apple FCP X or any other nonlinear editor or color correction app, you usually have similar color correction tools across apps — whether you color based on curves, wheels, sliders or even interactively on screen. So when I heard about the way that Picture Instruments Color Cone 2 color corrects — via a Cone (or really a bicone) — I was immediately intrigued.

Color Cone 2 is a standalone app but also, more importantly, a plugin for Adobe After Effects, Adobe Premiere Pro and FCP X. In this review I am focusing on the Premiere Pro plugin, but keep in mind that the standalone version works on still images and allows you to export 3dl or cube LUTs — a great way for a client to quickly see what type of result you can get from just a still image.

Used as a plugin for Adobe Premiere, Color Cone 2 is purely a color corrector. There are no contrast and saturation adjustments, just the ability to select a color and transform it. For instance, you can select a blue sky and adjust the hue, chrominance (saturation) and/or luminance of the resulting color inside of the Color Cone plugin.

To get started you apply the Color Cone 2 plugin to your clip — the plugin is located under Picture Instruments in the Effects tab. Then you click the little square icon in the effect editor panel to open up the Color Cone 2 interface. The interface contains the bicone image representation of the color correction, presets to set up a split-tone color map or a three-point color correct, and the radius slider to adjust the effect your correction has on surrounding color.

Once you are set on a look, you can jump out of the Color Cone interface and back into the effect editor inside of Premiere. There you can keyframe all of the parameters you adjusted in the Color Cone interface, which makes it easy to transition from uncorrected to corrected footage.

The Cone
The Cone itself is the most interesting part of this plugin. Think of the bicone as the 3D side view of a vectorscope. In other words, if the vectorscope view from a traditional scope is the top view, the bicone in Color Cone is the side view. Moving your target color from the top cone toward the bottom cone adjusts the luminance from light to dark. The intersection of the two cones is where saturation (or chrominance) lives, and moving from the center outwards increases saturation. When a color is selected using the eye dropper, you will see a square representing the source color selection, a circle representing the target color and an “x” with a line for reference on the middle section.

Additionally, there is a black circle on the saturation section in the middle that shows the boundaries of how far you can push your chrominance. There is also a light circle that represents the radius within which surrounding colors are affected. Effects can be layered on each video clip, and one instance of the plugin can handle five colors. If you need more than five, you can add another instance of the plugin to the same clip.
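Picture Instruments doesn’t document the math behind its Cone, but the geometry described above matches the classic HSL bicone, in which chroma naturally tapers to zero at pure black and pure white. Purely as an illustration, here is a minimal Python sketch of that assumed mapping (my assumption, not the plugin’s actual internals):

import colorsys

def bicone_coords(r, g, b):
    # Map an RGB color (0-1 floats) onto assumed bicone coordinates:
    # hue angle in degrees, chroma (distance from the luminance axis)
    # and lightness (vertical position between the two cone tips).
    mx, mn = max(r, g, b), min(r, g, b)
    lightness = (mx + mn) / 2.0
    chroma = mx - mn  # shrinks to 0 at pure black and pure white
    hue, _, _ = colorsys.rgb_to_hls(r, g, b)  # hue as a 0-1 fraction
    return hue * 360.0, chroma, lightness

# A saturated blue sits near the widest part of the bicone:
print(bicone_coords(0.1, 0.2, 0.9))

Viewed from above, hue and chroma are exactly what a vectorscope plots; the lightness axis is the “side view” the bicone adds.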

If you are looking to export 3dl or cube LUTs of your work, you will need to use the standalone Color Cone 2 app. The one caveat of the standalone app is that you can only apply color to still images. Once you do, you can export the LUT for use in any modern NLE or color correction app.
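For context, a cube LUT is just a plain-text lattice of RGB output values that the host app interpolates between, which is why an exported file loads in virtually any grading tool. As a hedged sketch (the function is hypothetical, but the header and line layout follow the common .cube convention), this Python snippet writes a small identity LUT:

def write_identity_cube(path, size=17):
    # Write an identity 3D LUT in the common .cube text format:
    # a LUT_3D_SIZE header, then one "R G B" line per lattice
    # point, with red varying fastest and blue slowest.
    n = size - 1
    with open(path, "w") as f:
        f.write("LUT_3D_SIZE %d\n" % size)
        for b in range(size):
            for g in range(size):
                for r in range(size):
                    f.write("%.6f %.6f %.6f\n" % (r / n, g / n, b / n))

write_identity_cube("identity.cube")

A real export from Color Cone 2 would carry the corrected color values in place of this identity lattice.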

Summing Up
To be honest, working in Color Cone 2 was a little weird for me. It’s not your usual color correction workflow, so I would need to sit with the plugin for a while to get used to its setup. That being said, it has some interesting components that I wish other color correction apps would use, such as the Cone view. The bicone is a phenomenal way to visualize color correction in realtime.

In my opinion, if Picture Instruments sold just the Cone as a color measurement tool to work in conjunction with Lumetri, they would have another solid product. Color Cone 2 offers a unique and interesting way to color correct in Premiere, acting as an advanced secondary corrector alongside the Lumetri color correction tools.

The Color Cone 2 standalone app and plugin cost $139 when purchased together, or $88 individually. In my opinion, video people should probably just stick to the plugin version. Check out Picture Instruments’ website for more info on Color Cone 2 as well as their other products. And check them out on Twitter @Pic_instruments.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Full-service creative agency Carousel opens in NYC

Carousel, a new creative agency helmed by Pete Kasko and Bernadette Quinn, has opened its doors in New York City. Billing itself as “a collaborative collective of creative talent,” Carousel is positioned to handle projects from television series to ad campaigns for brands, media companies and advertising agencies.

Clients such as PepsiCo’s Pepsi, Quaker and Lays brands; Victoria’s Secret; Interscope Records; A&E Network and The Skimm have all worked with the company.

Designed to provide full 360 capabilities, Carousel allows its brand partners to partake of all its services or pick and choose specific offerings including strategy, creative development, brand development, production, editorial, VFX/GFX, color, music and mix. Along with its client relationships, Carousel has also been the post production partner for agencies such as McGarryBowen, McCann, Publicis and Virtue.

“The industry is shifting in how the work is getting done. Everyone has to be faster and more adaptable to change without sacrificing the things that matter,” says Quinn. “Our goal is to combine brilliant, high-caliber people, seasoned in all aspects of the business, under one roof together with a shared vision of how to create better content in a more efficient way.”

Managing director Dee Tagert comments, “The name Carousel describes having a full set of capabilities from ideation to delivery so that agencies or brands can jump on at any point in their process. By having a small but complete agency team that can manage and execute everything from strategy, creative development and brand development to production and post, we can prove more effective and efficient than a traditional agency model.”

Danielle Russo, Dee Tagert, AnaLiza Alba Leen

AnaLiza Alba Leen comes on board Carousel as creative director with 15 years of global agency experience, and executive producer Danielle Russo brings 12 years of agency experience.
Tagert adds, “The industry has been drastically changing over the last few years. As clients’ hunger for content is driving everything at a much faster pace, it was completely logical to us to create a fully integrative company to be able to respond to our clients in a highly productive, successful manner.”

Carousel is currently working on several upcoming projects for clients including Victoria’s Secret, DNTL, Subway, US Army, Tazo Tea and Range Rover.

Main Image: Bernadette Quinn and Pete Kasko

Boxx adds new Apexx S-class workstations with 9th-gen Intel processors

Boxx Technologies is offering a new line of Apexx S-class workstations featuring the company’s flagship Apexx S3. Purpose-built for 3D design, CAD and motion media workflows requiring CPU frequencies suitable for lightly threaded apps, the compact Apexx S3 now features a 9th-generation, eight-core Intel Core i7 or i9 processor (professionally overclocked to 5.1GHz) to support more heavily threaded applications as well.

Designed to optimize Autodesk tools, Adobe Creative Cloud, Maxon Cinema 4D and other applications, the overclocked and liquid-cooled Apexx S3 sustains its 5.1GHz frequency across all cores. With increased storage and upgradability, as well as multiple Nvidia Quadro or AMD Radeon Pro graphics cards, S3 is also ideal for light GPU compute or virtual reality.

New to the S-class line is Apexx Enigma S3. Built to accelerate professional 3D applications, Enigma S3 is also configurable with 9th-generation, eight-core Intel Core i7/i9 processors overclocked to 5.1GHz and up to three professional GPUs, making it suitable for workflows that include significant GPU rendering or GPU compute work.

The compact Apexx S3 and Enigma S3 are joined by the Apexx S1. The S1 also features an overclocked, eight-core Intel Core i7 for 3D content creation, CAD design and motion media. With its ultra-compact chassis, the S1 is a good solution for limited desktop space, an open environment or workflows where a graphics card is used primarily for display.

Rounding out the S-class family is the Apexx S4, a rack-mount system designed for heavy rendering or GPU compute.

Technicolor welcomes colorists Trent Johnson and Andrew Francis

Technicolor in Los Angeles will be beefing up its color department in January with the addition of colorists Andrew Francis and Trent Johnson.

Francis joins Technicolor after spending the last three years building the digital intermediate department of Sixteen19 in New York. His recent credits include Second Act, Night School, Hereditary and Girls Trip. A trained fine artist, Francis has established a strong reputation for integrating bleeding-edge technology in support of the craft of color.

Johnson, a Technicolor alumnus, returns after stints as a digital colorist at MTI, Deluxe and Sony Colorworks. His recent credits include horror hits Slender Man and The Possession of Hannah Grace, as well as comedies Overboard and Ted 2.

Johnson will be using FilmLight’s Baselight and Resolve for his work, while Francis will toggle between Resolve, Baselight and Lustre, depending on the project.

Francis and Johnson join Technicolor LA’s roster, which includes Pankaj Bajpai, Tony Dustin, Doug Delaney, Jason Fabbro, recent HPA award-winner Maxine Gervais, Michael Hatzer, Roy Vasich, Tim Vincent, Sparkle and others.

Main Image: Trent Johnson and Andrew Francis

Rohde & Schwarz’s storage system R&S SpycerNode shipping

First shown at IBC 2018, Rohde & Schwarz’s new media storage system, R&S SpycerNode, is now available for purchase. The system is built on High Performance Computing (HPC), a combination of hardware, file system and RAID approach geared toward performance, scalability and redundancy. For redundancy, HPC employs software RAID technologies called erasure coding in combination with declustering, which increase performance and reduce rebuild times. System scalability is almost infinite, and expansion is possible during operation.

According to Rohde & Schwarz, in creating this new storage system its engineers looked at many of the key issues that impact media storage systems within high-performance video editing environments — from annoying maintenance requirements, such as defragging, to much more serious system failures, including dying disk drives.

R&S SpycerNode features Rohde & Schwarz’s device manager web application, which makes it much easier to set up and use Rohde & Schwarz solutions in an integrated fashion. Device manager helps reduce setup times and simplifies maintenance and service thanks to its intuitive web-based UI, operated through a single client.

To ensure data security, Rohde & Schwarz has introduced data protection systems based on erasure coding and declustering within the R&S SpycerNode. Erasure coding means that a data block is always written together with parity information, so the block can be reconstructed if part of it is lost.

Declustering is part of the data protection approach of HPC setups and replaces traditional RAID. It is software-based, and in contrast to a traditional RAID setup, the spare capacity is spread over all the disks rather than sitting on a dedicated spare disk. This decreases rebuild times and reduces the performance impact of a rebuild. There are also no RAID controller bottlenecks, which results in much higher IOPS (input/output operations per second). Importantly, declustering means there is no degradation of system performance over time.
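SpycerNode’s actual erasure coding is far more sophisticated than this, but the core idea can be shown with a toy single-parity stripe in Python; the function names are illustrative only:

from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def write_stripe(data_chunks):
    # Toy single-parity erasure code: N data chunks plus one XOR
    # parity chunk are written across N+1 "drives". Any single
    # lost chunk can be rebuilt from the survivors.
    return data_chunks + [reduce(xor, data_chunks)]

def rebuild(stripe, lost):
    # XOR-ing all surviving chunks recovers the missing one.
    return reduce(xor, [c for i, c in enumerate(stripe) if i != lost])

stripe = write_stripe([b"AAAA", b"BBBB", b"CCCC"])
stripe[1] = None                 # simulate a failed drive
print(rebuild(stripe, 1))        # prints b'BBBB'

Declustering then spreads those stripes, and the spare capacity, across every drive in the system, so a rebuild reads from all disks at once instead of hammering one dedicated spare.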

R&S SpycerNode comes in multiple 2U and 5U chassis designs, which are available with NL-SAS HDDs and SAS SSDs in different capacities. An additional 2U24 chassis design is a pure Flash system with main processor units and JBOD units. A main unit is always redundant, equipped with two appliance controllers (APs). Each AP features two 100Gb interfaces, resulting in four 100Gb interfaces per main unit.

The combination of different chassis systems makes R&S SpycerNode applicable to a very broad range of applications. The 2U system represents a compact, lightweight unit that works well within mobile productions as well as offering a very dense, high-speed storage device for on-premise applications. A larger 5U system offers sophisticated large-scale storage facilities on-premise within broadcast production centers and post facilities.

Storage for Post Studios

By Karen Moltenbrey

The post industry relies heavily on storage solutions, without question. Facilities are juggling a variety of tasks and multiple projects all at once, and deadlines are always looming. Thus, these studios need a storage solution that is fast and reliable. Each studio has different needs and searches to find the right system for its particular workflow. Luckily, pros have many storage options to choose from.

For this article, we spoke with two post houses about their storage solutions and why they are a good fit for each of their needs.

Sugar Studios LA
Sugar Studios LA is a one-stop-shop playground for filmmakers that offers a full range of post production services, including editorial, color, VFX, audio, production and finishing, with each department led by seasoned professionals. Its office suites in the Wiltern Theater Tower, in the center of LA, serve an impressive list of clients, from numerous independent film producers and distributors to Disney, Marvel, Sony, MGM, Universal, Showtime, Netflix, AMC, Mercedes-Benz, Ferrari and others.

Jijo Reed and Sting in one of their post suites.

With so much important data in play at one time, Sugar needs a robust, secure and reliable storage system. However, with diverse offerings come diverse requirements. For its online and color projects, Sugar uses a Symply SAN with 200TB of usable storage. The color workstations are connected via 10Gb Ethernet over Fibre with a 40Gb uplink to the network. For mass storage and offline work, the studio uses a MacOS server acting as a NAS, with 530TB of usable storage connected via a 40Gb network uplink. For Avid offline jobs, the facility has an Avid Nexis Pro with 40TB of storage, and for Avid Pro Tools collaboration, a Facilis TerraBlock with 40TB of usable storage.

“We can collaborate with any and all client stations working on the same or different media and sharing projects across multiple software platforms,” says Jijo Reed, owner/executive producer of Sugar. “No station is limited to what it can do, since every station has access to all media. Centralized storage is so important because not only does it allow collaboration, we always have access to all media and don’t have to fumble through drives. It is also RAID-protected, so we don’t have to be concerned with losing data.”

Prior to employing the centralized storage, Sugar had been using G-Technology’s G-RAID drives, changing over in late 2016. “Once our technical service advisor, Zach Moller, came on board, he began immediately to institute a storage network solution that was tailored to our workflow,” says Reed.

Reed, an award-winning director/producer, founded the company in 2012, using a laptop (running Final Cut Pro 7) and an external hard drive he had purchased on sale at Fry’s. His target base at the time was producers and writers needing sizzle trailers to pitch their projects — at a time when the term “sizzle trailer” was not part of the common vernacular. “I attended festivals to pitch my wares, producing over 15 sizzles the first year,” he says, “and it grew from there.”

Since Reed was creating sizzles for yet-to-be-made features, he was in “pole position” to handle the post for some of these independent films when they got funded. In 2015, he, along with his senior editor, Paul Buhl, turned their focus to feature post work, which was “more lucrative and less exhausting, but mostly, we wanted to tell stories – the whole story.” He rebranded and changed the name of the company from Sizzlepitch to Sugar Studios, and brought on a feature post producer, Chris Harrington. Reed invested heavily in the company, purchasing equipment and acquiring space. Soon, one bay became two, then three and so on. Currently, the company spans three full floors, including the penthouse of the Wiltern Theater Tower.

As Reed proudly points out, the studio space features 21 bays and workstations, two screening theaters, including a 25-seat color and mix DI stage with a Barco DP4K projector and Dolby Atmos configuration. “We are fully staffed, all under one roof, with editorial, full audio services, color correction/grading, VFX and a greenscreen cyclorama stage with on-site 4K cameras, grip and lighting,” he details. “But, it’s the people who make this work. Our passion is obvious to our clients.”

While Sugar was growing and expanding, so, too, was its mass storage solution. According to Zach Moller, it started with the NAS due to its low price and fast (10Gb) connection to every client machine. “The Symply SAN solution was needed because we required a high-bandwidth system for online and color playback that used Fibre Channel technology for the low latency and local drive configuration,” he says.

Moreover, the facility wanted flexibility with its SAN solution; it was very expensive to have every machine connected via Fibre Channel, “and frankly, we didn’t need that bandwidth,” Reed says. “Symply allowed us to have client machines choose whether they connected via Fibre Channel or 10Gb. If this wasn’t the case, we would have been in a pickle, having to purchase expansion chassis for every machine to open up additional PCI slots.” (The bulk of the machines at Sugar connect using the pre-existing 10Gb Ethernet over Fibre network, thus negating the need to use another PCI slot on a Fibre Channel card.)

American Dreamer

At Sugar, the camera masters and production audio are loaded directly to the NAS for mass storage. The group then archives the camera masters to LTO as a deep archive and additional backup. During LTO archival, the studio creates the dailies for the offline edit on either Avid Media Composer (where the MXFs are migrated to the Avid Nexis server) or Adobe Premiere (where the ProRes dailies continue to live on the NAS).

When adding visual effects, the artists render to the Symply SAN when preparing for the online, color and finishing.

The studio works with a wide range of codecs, some of which are extremely taxing on the systems. The SAN is ideal, especially for EXR image sequences, since each frame is so data-heavy and there can be 100,000 frames per folder. “This can only be accomplished with a premium storage solution: our SAN,” Reed says.

When the studio moved to the EXR format for the VFX on the American Dreamer feature film, for example, its original NAS solution over 10Gb didn’t have enough bandwidth for playback, which required roughly 1.2GB/sec. Once the studio upgraded the SAN solution with dual 16Gb Fibre Channel, it was able to play back uncompressed 4K EXR footage without the headache or frustration of stuttering.
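The numbers are easy to sanity-check. Assuming roughly 4K half-float RGB EXRs at 24fps (my assumptions; actual frame sizes vary with resolution, channel count and compression), a quick back-of-the-envelope in Python shows why a single 10Gb link falls short:

# Assumed: 4096x2160 frames, three 16-bit (half-float) channels, 24fps.
width, height, channels, bytes_per_sample, fps = 4096, 2160, 3, 2, 24

frame_bytes = width * height * channels * bytes_per_sample   # ~53 MB/frame
needed = frame_bytes * fps / 1e9                             # ~1.27 GB/sec

ten_gig_e = 10e9 / 8 / 1e9          # ~1.25 GB/sec, before protocol overhead
dual_16g_fc = 2 * 16e9 / 8 / 1e9    # ~4 GB/sec raw across both links

print("need %.2f GB/s; 10GbE ceiling %.2f GB/s" % (needed, ten_gig_e))

That roughly 1.27GB/sec requirement sits right at, and in practice above, 10GbE’s theoretical ceiling, which is why the dual 16Gb Fibre Channel upgrade eliminated the stuttering.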

“We have created an environment that caters to the creative process with a technical infrastructure that is superfast and solid. Filmmakers love us, and I couldn’t be prouder of my team for making this happen,” says Reed.

Mike Seabrooke

Postal
Established in 2015, Postal is a boutique creative studio that produces motion graphics, visual effects, animation, live action and editorial, with the vision of transcending all mediums — whether it’s short animations for social media or big-budget visual effects for broadcast. “As a studio, we love to experiment with different techniques. We feel strongly that the idea should always come first,” says Mike Seabrooke, producer at New York’s Postal.

To ensure that these ideas make it to the final stage of a project, the company uses a mixture of hard drives, LTO tapes and servers that house the content while the artists are working on projects, as well as for archival purposes. Specifically, the studio employs the EditShare Storage v.7 shared storage platform and EditShare Ark Tape for managing the LTO tape libraries that serve as nearline and offline backup. This is the system setup that Postal deployed initially when it started up a few years ago, and since then Postal has been continuously updating and expanding it based on its growth as a studio.

Let’s face it, hard drives always have the possibility of failing. But, failure is not something that Postal — or any other post house — can afford. That is why the studio keeps two instances per job on archive drives: a master and a backup. “Organized hard drives give us quick access to previous jobs if need be, which sometimes can be quite the lifesaver,” says Seabrooke.

 

Postal’s Nordstrom project.

LTO tapes, meanwhile, are used to back up the facility’s servers running EditShare v7 — which house Postal’s editorial jobs — on the off chance that something happens to that precious piece of hardware. “The recovery process isn’t the fastest, but the system is compact, self-contained and gives us peace of mind in case anything does go wrong,” Seabrooke explains.

In addition, the studio uses Retrospect backup and restore software for its working projects server. Seabrooke says, “We chose it because it offers a backup service that does not require much oversight.”

When Postal began shopping for a solution for its studio three years ago, reliability was at the top of its list. The facility needed a system it could rely on to back up its data, which would comprise the facility’s entire scope of work. Ease of use was also a concern, as was access. This decision prompted questions such as: Would we have to monitor it constantly? In what timeframe would we be able to access the data? Moreover, cost was yet another factor: Would the solution be effective without breaking our budget?

Postal’s solution indeed enabled the studio to check off every one of those boxes. “Our projects demand a system that we can count on, with the added benefit of quick retrieval,” Seabrooke says.

Throughout the studio’s production process, the artists are accessing project data on the servers. Then, once they complete the project, the data is transferred to the archival drives for backup. This frees up space on the company servers for new jobs, while providing access to the stored data if needed.

“Storage is so important in our work because it is our work. Starting over on a project is an outcome we cannot allow, so responsible storage is a necessity,” concludes Seabrooke.


Karen Moltenbrey is a long-time VFX and post production writer.

Post house Cinematic Media opens in Mexico City, targets film, TV

Mexico City is now home to Cinematic Media, a full-service post production finishing facility focused on television and cinema content. Located on the lot at Estudios GGM, the facility offers dailies, look development, editorial finishing, color grading and other services, and aims to capitalize on entertainment media production in Mexico and throughout Central and South America.

Scot Evans

In its first project, Cinematic Media provided finishing services for the second season of the Netflix series Ingobernable.

CEO Scot Evans brings more than 25 years of post experience and has managed large-scale post production operations in the United States, Mexico and Canada. His recent posts include executive VP at Technicolor PostWorks in New York, managing director of Technicolor in Vancouver and managing director of Moving Picture Company (MPC) in Mexico City.

“We’re excited about the future for entertainment production in Mexico,” says Evans. “Netflix opened the door and now Amazon is in Mexico. We expect film production to also grow. Through its geographic location, strong infrastructure and cinematic history, Mexico is well-positioned to become a strong producer of content for the world market.”

Cinematic Media has been built from the ground up with a workflow modeled after top-tier facilities in Hollywood and geared toward television and cinema finishing. Engineering design was supervised by John Stevens, whose four decades of post experience includes stints at Cinesite, Efilm, The Post Group, Encore Hollywood, MTI Film and, currently, the Foundation.

Resources include a DI theater with DaVinci Resolve, 4K projection and 7.1 surround sound, four color suites supporting 2K, 4K and HDR, multiple editorial finishing suites, and a Colorfront On-Set Dailies system. The facility also offers look development services to assist productions in creating end-to-end color pipelines, as well as quality control and deliverable services for streaming, broadcast and cinema. Plans to add visual effects services are in the works.

“We can handle six or seven series simultaneously,” says Evans. “There is a lot of redundancy built into our pipeline, making it incredibly efficient and virtually eliminating downtime. A lot of facilities in Hollywood would be envious of what we have here.”

Cinematic Media features high-speed connectivity via the private network Sohonet. It will be employed to share media with studios, producers and distributors around the globe securely and efficiently. It will also be used to facilitate remote collaboration with directors, cinematographers, editors, colorists and other production partners.

Evans cites as a further plus Cinematic Media’s location within Estudios GGM, which has six sound stages, production and editorial office space, grip and lighting resources and more. Producers can take projects from concept to the screen from within the confines of the site. “We can literally walk down a flight of stairs to support a project shooting on one of the stages,” he says. “Proximity is important. We expect many productions to locate their offices and editorial teams here.”

Managing director Arturo Sedano will oversee day-to-day operations. He has supervised post for thousands of hours of television and cinema content on behalf of studios and producers from around the globe, including Netflix, Telemundo, Sony Pictures, Viacom, Lionsgate, HBO, TV Azteca, Grupo Imagen and Fox.

Other key staff include senior colorist Ana Montaño, whose experience as a digital colorist spans facilities in Mexico City, Barcelona, London, Dublin and Rome, and producer/post supervisor Cyntia Navarro, previously with Lejana Films and Instituto Mexicano de Cinematografía (IMCINE). Navarro’s credits span episodic television, feature film and documentaries, and include projects for IFC Films, Canal Once, UPI, Discovery Channel, Netflix and Amazon.

Additional staff includes chief technology officer Oliver De Gante, previously with Ollin VFX, where his credits included the hit films Chappie, Her, Tron: Legacy and The Social Network, as well as the Netflix series House of Cards; technical director Gabriel Kerlegand, a workflow specialist and digital imaging technologist with 18 years of experience in cinema and television; and coordinator and senior conform editor Humberto Flores, formerly senior editor at Zenith Adventure Media.

Industry vets launch hybrid studio, Olio Creative

Colorist Marshall Plante, producer Natalie Westerfield and director/creative director Justin Purser founded hybrid studio Olio Creative, which has opened its doors in Venice, California.

Olio features vintage-style décor and an open floor plan, and the space is adaptable for freelancers, mobile artists and traveling talent, with two color suites and a suite set up to toggle between editorial and Flame work.

Marshall Plante is a well-known colorist who has built his career at shops such as Digital Magic, Riot, Syndicate and, most recently, Ntropic, where he headed up the color department. His commercial credits include Samsung, Audi, Olay, Nike, Honda and Budweiser, as well as direct-to-brand projects for Apple and Riot Games. Recently, the Nick Jr. Girls in Charge: Girl Power campaign he graded won an Emmy for Outstanding Daytime Promo Announcement Brand Image Campaign, and the Uber campaign he graded, Rolling With the Champion with LeBron James, won a bronze Cannes Lion.

Marshall’s long-time producer, Natalie Westerfield, has over 10 years of experience producing at companies including The Mill and Ntropic. As executive producer, Westerfield will provide oversight to guide all projects that come through Olio’s pipeline.

The third member of the team is director/creative director Justin Purser. As a director, Purser has worked at production companies A Band Apart and Anonymous Content. He was one of the original creators and directors behind Maker Studios (acquired by Walt Disney Corp.), which pioneered the multi-channel, YouTube-centric companies of today.

The three partners aim to bring an element of experimentation and collaboration to the post production field. “The ability to be chameleons within the industry keeps us open to fresh ideas,” says Purser. “Our motto is, ‘Try it. If it doesn’t work, pivot.’ And if we thrive in a new way of working, we’re going to share that with everyone. We want to not only make noise for ourselves, but for others in the same business.”

Quick Chat: Westwind Media president Doug Kent

By Dayna McCallum

Doug Kent has joined Westwind Media as president. The move is a homecoming of sorts for the audio post vet, who worked as a sound editor and supervisor at the facility when it opened its doors in 1997 (with Miles O’ Fun). He comes to Westwind after a long-tenured position at Technicolor.

While primarily known as an audio post facility, Burbank-based Westwind has grown into a three-acre campus comprising 10 buildings, which also house outposts for NBCUniversal and Technicolor, as well as media-focused companies Keywords Headquarters and Film Solutions.

We reached out to Kent to find out a little bit more about what is happening over at Westwind, why he made the move and changes he has seen in the industry.

Why was now the right time to make this change, especially after being at one place for so long?
Well, 17 years is a really long time to stay at one place in this day and age! I worked with an amazing team, but Westwind presented a very unique opportunity for me. John Bidasio (managing partner) and Sunder Ramani (president of Westwind Properties) approached me with the role of heading up Westwind and teaming with them in shaping the growth of their media campus. It was literally an offer I couldn’t refuse. Because of the campus size and versatility of the buildings, I have always considered Westwind to have amazing potential to be one of the premier post production boutique destinations in the LA area. I’m very excited to be part of that growth.

You’ve worked at studios and facilities of all sizes in your career. What do you see as the benefit of a boutique facility like Westwind?
After 30 years in the post audio business — which seems crazy to say out loud — moving to a boutique facility allows me more flexibility. It also lets me be personally involved with the delivery of all work to our customers. Because of our relationships with other facilities, we are able to offer services to our customers all over the Los Angeles area. It’s all about drive time on Waze!

What does your new position at Westwind involve?
The size of our business allows me to actively participate in every service we offer, from business development to capital expenditures, while also working with our management team’s growth strategy for the campus. Our value proposition, as a nimble post audio provider, focuses on our high-quality brick-and-mortar facility, while we continue to expand our editorial and mix talent, working with many of the best mix facilities and sound designers in the LA area. Luckily, I now get to have a hand in all of it.

Westwind recently renovated two stages. Did Dolby Atmos certification drive that decision?
Netflix, Apple and Amazon all use Atmos materials for their original programming. It was time to move forward. These immersive technologies have changed the way filmmakers shape the overall experience for the consumer. These new object-based technologies enhance our ability to embellish and manipulate the soundscape of each production, creating a visceral experience for the audience that is more exciting and dynamic.

How to Get Away With Murder

Can you talk specifically about the gear you are using on the stages?
Currently, Westwind runs entirely on a Dante network design. We have four dub stages, including both of the Atmos stages, outfitted with Dante interfaces. The signal path from our Avid Pro Tools source machines — all the way to the speakers — is entirely in Dante and the BSS Blu link network. The monitor switching and stage are controlled through custom-made panels designed in Harman’s Audio Architect. The Dante network allows us to route signals with complete flexibility across our network.

What about some of the projects you are currently working on?
We provide post sound services to the team at ShondaLand for all their productions, including Grey’s Anatomy, which is now in its 15th year, Station 19, How to Get Away With Murder and For the People. We are also involved in the streaming content market, working on titles for Amazon, YouTube Red and Netflix.

Looking forward, what changes in technology and the industry do you see having the most impact on audio post?
The role of post production sound has greatly increased as technology has advanced.  We have become an active part of the filmmaking process and have developed closer partnerships with the executive producers, showrunners and creative executives. Delivering great soundscapes to these filmmakers has become more critical as technology advances and audiences become more sophisticated.

The Atmos system creates an immersive audio experience for the listener and has become a foundation for future technology. The Atmos master contains all of the uncompressed audio and panning metadata, and can be updated by re-encoding whenever a new process is released. With streaming speeds becoming faster and storage becoming more easily available, home viewers will most likely soon be experiencing Atmos technology in their living room.

What haven’t I asked that is important?
Relationships are the most important part of any business and my favorite part of being in post production sound. I truly value my connections and deep friendships with film executives and studio owners all over the Los Angeles area, not to mention the incredible artists I’ve had the great pleasure of working with and claiming as friends. The technology is amazing, but the people are what make being in this business fulfilling and engaging.

We are in a remarkable time in film, but really an amazing time in what we still call “television.” There is growth and expansion and foundational change in every aspect of this industry. Being at Westwind gives me the flexibility and opportunity to be part of that change and to keep growing.

AI for M&E: Should you take the leap?

By Nick Gold

In Hollywood, the promise of artificial intelligence is all the rage. Who wouldn’t want a technology that promises an instant, automated solution to tedious, time-intensive problems? With artificial intelligence, anyone with abundant rich media assets can easily churn out more revenue or cut costs, while simplifying operations … or so we’re told.

If you attended IBC, you probably already heard the pitch: “It’s an ‘easy’ button that’s simple to add to the workflow and foolproof to operate, turning your massive amounts of uncategorized footage into metadata.”

But should you take the leap? Before you sign on the dotted line, take a closer look at the technology behind AI and what it can — and can’t — do for you.

First, it’s important to understand the bigger picture of artificial intelligence in today’s marketplace. Taking unstructured data and generating relevant metadata from it is something that other industries have been doing for some time. In fact, many of the tools we embrace today started off in other industries. But unlike banking, finance or healthcare, our industry prioritizes creativity, which is why we have always shied away from tools that automate. The idea that we can rely on the same technology as a hedge fund manager just doesn’t sit well with many people in our industry, and for good reason.

Nick Gold talks AI for a UCLA Annex panel.

In the media and entertainment industry, we’re looking for various types of metadata that could include a transcript of spoken words, important events within a period of time or information about the production (e.g., people, location, props), and currently there’s no single machine-learning algorithm that will solve for all these types of metadata parameters. For that reason, the best starting point is to define your problems and identify which machine learning tools may be able to solve them. Expecting to parse reams of untagged, uncategorized and unstructured media data is unrealistic until you know what you’re looking for.

What works for M&E?
AI has become pretty good at solving some specific problems for our industry. Speech-to-text is one of them: a generally accurate automated transcription saves real time compared to manual logging. However, it’s important to note that AI tools still have limitations. One AI tool, known as “sentiment analysis,” could theoretically identify the emotional undertones of the spoken word, but it first requires another tool to generate a transcript for analysis.
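To make that chaining concrete, here is a hypothetical Python sketch; transcribe() and score_sentiment() are stand-ins for whatever vendor engines you license, not real APIs:

def transcribe(audio_file):
    # Stand-in for a licensed speech-to-text engine.
    return "The demo went well. The client seemed unhappy with the budget."

def score_sentiment(sentence):
    # Stand-in for a sentiment engine; real engines return
    # probabilistic scores rather than hard labels.
    return "negative" if "unhappy" in sentence else "positive"

def analyze_clip(audio_file):
    # Sentiment analysis can only run once a transcript exists,
    # so the two engines must be chained in this order.
    transcript = transcribe(audio_file)
    return [(s, score_sentiment(s)) for s in transcript.split(". ")]

print(analyze_clip("interview_take3.wav"))

The point of the sketch is the dependency, not the stubs: the second engine consumes the first engine’s output, so any error in the transcript propagates downstream.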

But no matter how good the algorithms are, they won’t give you the qualitative data that a human observer would provide, such as the emotions expressed through body language. They won’t tell you the facial expressions of the people being spoken to, or the tone of voice, pacing and volume level of the speaker, or what is conveyed by a sarcastic tone or a wry expression. There are sentiment analysis engines that try to do this, but breaking down the components ensures the parameters you need will be addressed and solved.

Another task at which machine learning has progressed significantly is logo recognition. Certain engines are good at finding, for example, all the images with a Coke logo in 10,000 hours of video. That’s impressive and quite useful, but it’s another story if you want to also find footage of two people drinking what are clearly Coke-shaped bottles where the logo is obscured. That’s because machine-learning engines tend to have a narrow focus, which goes back to the need to define very specifically what you hope to get from it.

There are a bevy of algorithms and engines out there. If you license a service that will find a specific logo, then you haven’t solved your problem for finding objects that represent the product as well. Even with the right engine, you’ve got to think about how this information fits in your pipeline, and there are a lot of workflow questions to be explored.

Let’s say you’ve generated speech-to-text with audio media, but have you figured out how someone can search the results? There are several options. Sometimes vendors have their own front end for searching. Others may offer an export option from one engine into a MAM that you either already have on-premise or plan to purchase. There are also vendors that don’t provide machine learning themselves but act as a third-party service organizing the engines.

It’s important to remember that none of these AI solutions are accurate all the time. You might get a nudity detection filter, for example, but these vendors rely on probabilistic results. If having one nude image slip through is a huge problem for your company, then machine learning alone isn’t the right solution for you. It’s important to understand whether occasional inaccuracies will be acceptable or deal breakers for your company. Testing samples of your core content in different scenarios for which you need to solve becomes another crucial step. And many vendors are happy to test footage in their systems.

Although machine learning is still in its nascent stages, there is a lot of interest in learning how to make it work in the media workflow. It can do some magical things, but it’s not a magic “easy” button (yet, anyway). Exploring the options and understanding in detail what you need goes hand-in-hand with finding the right solution to integrate with your workflow.


Nick Gold is lead technologist for Baltimore’s Chesapeake Systems, which specializes in M&E workflows and solutions for the creation, distribution and preservation of content. Active in both SMPTE and the Association of Moving Image Archivists (AMIA), Gold speaks on a range of topics. He also co-hosts the Workflow Show Podcast.
 

Behind the Title: Pace Pictures owner Heath Ryan

NAME: Heath Ryan

COMPANY: Pace Pictures (@PacePictures)

CAN YOU DESCRIBE YOUR COMPANY?
We are a dailies-to-delivery post house, including audio mixing.

Pace’s Dolby Atmos stage.

WHAT’S YOUR JOB TITLE?
Owner and editor.

WHAT DOES THAT ENTAIL?
As owner, I need to make sure everyone is happy.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Psychology. I deal with a lot of producers, directors and artists that all have their own wants and needs. Sometimes what that entails is not strictly post production but managing personalities.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Editing. My company grew out of my love for editing. It’s the final draft of any film. In the over 30 years I have been editing, the power of what an editor can do has only grown.

WHAT’S YOUR LEAST FAVORITE?
Chasing unpaid invoices. It’s part of the job, but it’s not fun.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
Late, late in the evening when there are no other people around and you can get some real work done.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Not by design but through sheer single-mindedness, I have no other skill set but film production. My sense of direction is so bad that, even armed with the GPS supercomputer in my phone, driving for Uber is not an option.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I started making films in the single digit years. I won a few awards for my first short film in my teens and never looked back. I’m lucky to have found this passion early.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
This year I edited Grand-Daddy Day Care (2019), the reboot of Daddy Day Care, for Universal. I got to work with director Ron Oliver and actor Danny Trejo, and it meant a lot to me. It deals with what we do with our elders as time creeps up on us all. Sadly, we lost Ron’s mom while we were editing the film, so it took on extra special meaning for us both.

Lawless Range

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Lawless Range and The Producer. I produced and edited both projects with my dear friend and collaborator Sean McGinly. A modern-day Western and a behind-the-scenes of a Hollywood pilot. They were very satisfying projects because there was no one to blame but ourselves.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My Meridian Sound system, the Internet and TV.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Yes, I love it. I have always set the tone in the edit bay with music, especially during dailies. I like to put music on, sometimes film scores, to set the mood of what we are making.

Behind the Title: Post supervisor Chloe Blackwell

NAME: Chloe Blackwell

COMPANY: UK-based Click Post Production

CAN YOU DESCRIBE YOUR COMPANY?
I provide bespoke post solutions, which include consultancy and development courses for production companies. I’m also currently working on an online TV series full time. More on that later!

WHAT’S YOUR JOB TITLE?
Post Production Supervisor

WHAT DOES THAT ENTAIL?
Each job that I take on is quite different, so my role will evolve to suit each company’s needs.

Usually my job starts at the early stages of production, so I will meet with the editorial team to work out what they are looking to achieve visually. From this I can ascertain how their post will work most effectively, and work back from their delivery dates to put an edit and finishing schedule together.

For every shoot I will oversee the rushes being ingested and investigate any technical issues that crop up. Once the post production phase starts, I will be in charge of managing the offline. This includes ensuring editors are aware of deadlines and working with executives and/or directors and producers to ensure the smooth running of their show.

This also requires me to liaise with the post house, keeping them informed of production’s requirements and schedules, and troubleshooting any obstacles that inevitably crop up along the way.

I also deal directly with the broadcaster, ensuring delivery requirements are clear, ironing out any technical queries from both sides and ensuring the final masters are delivered in a timely manner. This also means that I have to be meticulous about quality control of the final product, as any errors can cause huge delays. As the post supervisor, managing the post production budget efficiently is vital, so I keep a constant eye on spending and keep the production team up to date with cost reports.

Alternatively, I also offer my services as a consultant, if all a production needs is some initial support. I’m also in the process of setting up courses for production teams that will help them gain a better understanding of the new 4K HDR world, and how they can work to realistic timings and budgets.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Probably the number of decisions I have to make on a daily basis. There are so many different ways of doing things, from converting frame rates and working with archive material to creating the workflows editorial will work with.

WHAT’S YOUR FAVORITE PART OF THE JOB?
I think I have the best job in the world! I am one of the very few people on any production that sees the show from early development, right through to delivery. It’s a very privileged position.

WHAT’S YOUR LEAST FAVORITE?
My role can be quite intensive, so there is usually a real lack of downtime.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
As I have quite a long commute, I find that first thing in the morning is my most productive time. From about 6am I have a few hours of uninterrupted work I can do to set my day up to run smoothly.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I would have joined the military!

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
As cheesy as it sounds, post production actually found me! I was working for a production company very early in my career, and I was going to be made redundant. Luckily, I was a valued member of the company and was re-drafted into their post production team. At first I thought it was a disaster, however with lots of help, I hit my stride and fell in love with the job.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
For the last three years I have been working on The Grand Tour for Amazon Prime.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
That’s a hard question as I have worked on so many.

But The Grand Tour has been the most technically challenging. It was the first ever 4K HDR factual entertainment show! Couple that with the fact that it was all shot at 23.98, with elements shot as live, and it was one of those jobs where you couldn’t really ask people for advice because it just hadn’t been done.

However, I am also really proud of some of the documentaries I have made, including Born to be Different, Power and the Women’s World and VE day.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My coffee machine, my toaster and the Avid Media Composer.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
All of them…I have to! Part of being in post is being aware of all the new technologies, shows and channels/online platforms out there. You have to keep ahead of the times.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Yes, I love music! I have an eclectic, wide-ranging taste, which means I have a million playlists on Spotify! I love finding new music and playing it for Jess (Jessica Redman, my post production coordinator). We are often shimmying around the office. It keeps the job light, especially during the most demanding days.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I am fortunate enough to be able to take my dog Mouse with me to work. She keeps me sane and keeps me calm, whilst also bringing a little joy to those I work with!

I am also an obsessive reader, so any down time I get I am often found curled up under a blanket with a good book.

My passion for television really knows no bounds, so I watch TV a lot too! I try to watch at least the first episode of all new TV programs. I rarely get to go to the cinema, but when I do it’s such a treat to watch films on the big screen.

Encore adds colorist Andrea Chlebak, ups Genevieve Fontaine to director of production

Encore has added colorist Andrea Chlebak to its roster and promoted veteran post producer Genevieve Fontaine to director of production. Chlebak brings a multidisciplinary background in feature films, docu-series and commercials across a range of aesthetics. Fontaine has been a post producer since joining the Encore team in early 2010.

Chlebak’s credits include award-winning indies Mandy and Prospect, Neill Blomkamp features Elysium and Chappie, and the animated adaptation Kahlil Gibran’s “The Prophet.” Having worked primarily in the digital landscape, her experience as an artist, still photographer, film technician, editor and compositor is evident in both her work and how she is able to streamline communication with directors and cinematographers in delivering their vision.

In her new role, Fontaine’s responsibilities shift toward ensuring organized, efficient and future-proof workflows. Fontaine began her career as a telecine and dailies producer at Riot before moving to Encore, where she managed post for up to 11 shows at a time, including Marvel’s The Defenders series for Netflix. She understands all the building blocks necessary to keep a facility running smoothly and has been instrumental in establishing Encore, a Deluxe company, as a leader in advanced formats, helping coordinate 4K, HDR and IMF-based workflows.

Main Image: (L-R) Genevieve Fontaine and Andrea Chlebak.

A Conversation: 3P Studio founder Haley Stibbard

Australia’s 3P Studio is a post house founded and led by artisan Haley Stibbard. The company’s portfolio of work includes commercials for brands such as Subway, Allianz and Isuzu Motor Company as well as iconic shows like Sesame Street. Stibbard’s path to opening her own post house was based on necessity.

After going on maternity leave to have her first child in 2013, she returned to her job at a content studio to find that her role had been made redundant. She was subsequently let go. Needing and wanting to work, she began freelancing as an editor — working seven days a week and never turning down a job. Eventually she realized that she couldn’t keep up with that type of schedule and took her fate into her own hands. She launched 3P Studio, one of Brisbane’s few women-led post facilities.

We reached out to Stibbard to ask about her love of post and her path to 3P Studio.

What made you want to get into post production? School?
I had a strong love of film, which I got from my late dad, Ray. He was a big film buff and would always come home from work when I was a kid with a shopping bag full of $2 movies from the video store and he would watch them. He particularly liked the crime stories and thrillers! So I definitely got my love of film and television from him.

We did not have any film courses at high school in the ‘90s, so the closest I could get was photography. Without a show reel it was hard to get a place at university in the college of art; a portfolio was a requirement and I didn’t have one. I remember I had to talk my way into the film program, and in the end I think they just got sick of me and let me into the course through the back door without a show reel — I can be very persistent when I want to be. I always had enjoyed editing and I was good at it, so in group tasks I was always chosen as the editor and then my love of post came from there.

What was your first job?
My very first job was quite funny, actually. I was working in both a shoe store and a supermarket at the time, when two post positions became available one day: an in-house editor role at a big furniture chain and a production assistant job at a large VFX company at Movie World on the Gold Coast. Anyone who knows me knows that I would be the worst PA in the world. So, luckily for that company director, I didn’t get the PA job and became the in-house editor for the furniture chain.

I’m glad that I took that job, as it taught me so much — how to work under pressure, how to use an Avid, how to work with deadlines, what a key number was, how to dispatch TVCs to the stations, how to be quick and accurate, and how to take constructive feedback.

I made every mistake known to man, including one weekend when I forgot to remove the 4×3 safe bars from a TVC and my boss saw it on TV. I ended up having to drive to the office, climb the locked fence to get in and pull the spot off air. I’ve learned a lot of things the hard way, but my boss was a very patient and forgiving man, and 18 years later he is now a client of mine!

What job did you hold when you went out on maternity leave?
Before I left on maternity leave to have my son Dashiell, I was an editor for a small content company. I have always been a jack-of-all-trades and I took care of everything from offline to online, grading in Resolve, motion graphics in After Effects and general design. I loved my job and I loved the variety that it brought. Doing something different every day was very enjoyable.

After leaving that job, you started freelancing as an editor. What systems did you edit on at the time and what types of projects? How difficult a time was that for you? New baby, working all the time, etc.
I started freelancing when my son was just past seven months old. I had a mortgage and had just come off six months of unpaid maternity leave, so I needed to make a living and I needed to make it quickly. I also had the added pressure of looking after a young child under the age of one who still needed his mother.

So I started contacting advertising agencies and production companies that I thought may be interested in my skill set. I just took every job that I could get my hands on, as I was always worried that every job that I took could potentially be my last for a while. I was lucky that I had an incredibly well-behaved baby! I never said “no” to a job.

As my client base started to grow, my clients would always book me since they knew that I would never say “no” (they know I still don’t say no!). It got to the point where I was working seven days a week. I worked all day when my son was in childcare and all night after he would go to bed. I would take the baby monitor downstairs where I worked out of my husband’s ‘man den.’

As my freelance business grew, I was so lucky that I had the most supportive husband in the world, who was doing everything for me: the washing, the cleaning, the cooking and bath time, as well as holding down his own full-time job as an engineer. I wouldn’t have been able to do what I did for that period of time without his support and encouragement. This time really proved to be a huge stepping stone for 3P Studio.

Do you remember the moment you decided you would start your own business?
There wasn’t really a specific moment when I decided to start my own business. It was something that seemed to just naturally come together. The busier I became, the more opportunities came about, like having enough work through the door to build a space and hire staff. I have always been very strategic about the people I have brought on at 3P, and about the timing of when they have come on board.

Can you walk us through that bear of a process?
At the start of 2016, I made the decision to get out of the house. My work life was starting to blend into my home life and I needed that separation. I worked out of a small office for 12 months, and about six months in I was able to purchase the office space that would become our studio today.

I went to work planning the fit-out for the next six months. The studio was an investment in the business, and I needed a place where my clients could also bring their clients for approvals, screenings and collaboration on jobs, as well as just generally enjoy the space.

The office space was an empty white shell, but the beauty of coming into a blank canvas was that I was able to create a studio that was specifically built for post production. I was lucky in that I had worked in some of the best post houses in the country as an editor, and this being a custom build I was able to take all the best bits out of all the places I had previously worked and put them into my studio without the restriction of existing walls.

I built up the walls, ripped down the ceilings and was able to design the edit suites and infrastructure, right down to laying the cable runs myself so that I knew they would work for us down the line. Then we saved money and added more equipment to the studio bit by bit. It wasn’t 0 to 100 overnight; I had to work hard at the business development side of the company, and I spent a lot of long days sitting by myself in those edit suites doing everything. Soon, word of mouth started to circulate and the business began to grow on the back of some nice jobs from my existing loyal clients.

What type of work do you do, and what gear do you call on?
3P Studio is a boutique studio that specializes in full-service post production; we also shoot content when required.

Our clients range anywhere from small content videos for the web all the way up to large commercial campaigns and everything in between.

There are currently six of us working full time in the studio, and we handle everything in-house, from offline editing to VFX to videography and sound design. We work primarily in the Adobe Creative Suite, with offline editing in Premiere, Maxon Cinema 4D and Autodesk Maya for 3D work, Autodesk Flame and Side Effects Houdini for online compositing and VFX, Blackmagic Resolve for color grading and Pro Tools HD for sound mixing. We use EditShare EFS shared storage nodes for collaborative working and sharing of content between this mix of creative platforms.

This year we have invested in a Red Digital Cinema camera as well as an EditShare XStream 200 EFS scale-out single-node server so we can become that one-stop shop for our clients. We have been able to create an amazing creative space for our clients to come and work with us, be it from the bespoke design of our editorial suites or the high level of client service we offer.

How did you build 3P Studio to be different from other studios you’ve worked at?
From a personal perspective, the culture that we have been able to build in the studio is unlike anywhere else I have worked, in that we genuinely work as a team and support each other. On the business side, we cater to clients of all sizes and budgets while offering uncompromising service and experience, whether the job is large or small. Making sure clients walk away feeling that they have had great value and exemplary service for their budget means that they will end up being customers of ours for life. That is the mantra I have grown the business on.

What is your hiring process like, and how do you protect employees who need to go out on maternity or family leave?
When I interview people to join 3P, attitude and willingness to learn are everything to me — hands down. You can be the most amazing operator on the planet, but if your attitude stinks then I’m really not interested. I’ve been incredibly lucky with the team that I have, and I have met them along the journey at exactly the right times. We have an amazing team culture, and as the company grows our success is shared.

I always make it clear that it’s swings and roundabouts and that family is always number one. I am there to support my team if they need me to be, not just inside of work but outside as well, and I receive the same support in return. We have flexible working hours, and I have team members with young families who, at times, are able to work both in the studio and from home so that they can be there for their kids when they need to be. This flexibility works fine for us. Happy team members make for a happy, productive workplace, and I like to think that 3P is forward-thinking in that respect.

Any tips for young women either breaking into the industry or in it that want to start a family but are scared it could cost them their job?
Well, for starters, we have laws in Australia that make it illegal for any woman in this country to be discriminated against for starting a family. 3P also supports the 18 weeks of paid maternity leave available to women heading out to start a family. I would love to see more women in post production, especially in operator roles. We aren’t just going to be the coffee and tea girls; we are directors, VFX artists, sound designers, editors and cinematographers — the future is female!

Any tips for anyone starting a new business?
Work hard, be nice to people and stay humble because you’re only as good as your last job.

Main Image: Haley Stibbard (second from left) with her team.

IBC 2018: Convergence and deep learning

By David Cox

In the 20 years I’ve been traveling to IBC, I’ve tried to seek out new technology, work practices and trends that could benefit my clients and help them be more competitive. One thing that is perennially exciting about this industry is the rapid pace of change. Certainly, from a post production point of view, there is a mini revolution every three years or so. In the past, those revolutions have increased image quality or the efficiency of making those images. The current revolution is to leverage the power and flexibility of cloud computing. But those revolutions haven’t fundamentally changed what we do. The images might have gotten sharper, brighter and easier to produce, but TV is still TV. This year, though, there are some fascinating undercurrents that could herald a fundamental shift in the sort of content we create and how we create it.

Games and Media Collide
There is a new convergence on the horizon in our industry. A few years ago, all the talk was about the merger of telecommunications companies and broadcasters, as well as the joining of creative hardware and software for broadcast and film as both moved to digital.

The new convergence is between media content creation as we know it and the games industry. It was subtle, but technology from gaming was present in many applications around the halls of IBC 2018.

One of the drivers for this is a giant leap forward in the quality of realtime rendering from the two main game engine providers: Unreal and Unity. I program with Unity for interactive applications, and its new HDSRP (HD Scriptable Render Pipeline) rendering allows for incredible realism, even when rendering fast enough for 60+ frames per second. To create such high-quality images, those game engines must start with reasonably detailed models. This is a departure from the past, when games used less detailed models than film CGI shots in order to preserve realtime performance. So the first clear advantage created by the new realtime renderers is that a film and its inevitable related game can use the same or similar model data.

NCam

Being able to use the same scene data between final CGI and a realtime game engine allows for some interesting applications. Habib Zargarpour from Digital Monarch Media showed a system based on Unity that allows a camera operator to control a virtual camera in realtime within a complex CGI scene. The resulting camera moves feel significantly more real than if they had been keyframed by an animator. The camera operator chases high-speed action, jumps at surprises and reacts to unfolding scenes. The subtleties that these human reactions deliver via minor deviations in the movement of the camera can convey the mood of a scene as much as the design of the scene itself.

NCam was showing the possibilities of augmenting scenes with digital assets, using its system based on the Unreal game engine. The NCam system provides realtime tracking data that specifies the position and angle of a freely moving physical camera. This data was fed to the Unreal engine, which then added in animated digital objects. NCam was also using an additional ultra-wide-angle camera to capture realtime lighting information from the scene, which was passed back to Unreal as a dynamic reflection and lighting map. This ensured that digitally added objects were lit by the physical lights in the real-world scene.
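
To make that data flow concrete, here is a minimal sketch of the per-frame loop such a setup implies. Every name in it is invented for illustration; it does not reproduce NCam's actual SDK or Unreal's API.

    # Hypothetical per-frame loop for a camera-tracked AR pipeline like the
    # one described above. All names are illustrative assumptions.
    def augment_frame(tracker, engine, light_probe_camera):
        pose = tracker.read_pose()              # position/angle of the real camera
        engine.set_virtual_camera(pose)         # align the CG camera to match
        env_map = light_probe_camera.capture()  # ultra-wide realtime lighting grab
        engine.set_environment_map(env_map)     # light CG objects like the set
        return engine.render_overlay()          # CG layer composited this frame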

Even a seemingly unrelated (but very enlightening) chat with StreamGuys president Kiriki Delany about all things related to content streaming still referenced gaming technology. Delany talked about their tests to build applications with Unity to provide streaming services in VR headsets.

Unity itself has further aspirations to move into storytelling rather than just gaming. The latest version of Unity features an editing timeline and color grading. This allows scenes to be built and animated, then played out through various virtual cameras to create a linear story. Since those scenes are being rendered in realtime, tweaks to scenes such as positions of objects, lights and material properties are instantly updated.

Game engines not only offer us new ways to create our content, but they are a pathway to create a new type of hybrid entertainment, which sits between a game and a film.

Deep Learning
Other undercurrents at IBC 2018 were the possibilities offered by machine learning and deep learning software. Essentially, a normal computer program is hard-wired to give a particular output for a given input. Machine learning allows an algorithm to compare its output to a set of data and adjust itself if the output is not correct. Deep learning extends that principle by using neural network structures to make a vast number of assessments of input data, then draw conclusions and predictions from that data.
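
As a toy illustration of that compare-and-adjust loop (not any specific product shown at IBC), a few lines of Python can fit a single parameter by repeated comparison and correction:

    # Fit y = w * x by gradient descent: produce an output, compare it to
    # the data, adjust the parameter. A minimal sketch of "learning."
    samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, correct output)

    w = 0.0              # the parameter the algorithm adjusts
    learning_rate = 0.05

    for epoch in range(200):
        for x, target in samples:
            prediction = w * x               # produce an output
            error = prediction - target     # compare it to the data
            w -= learning_rate * error * x  # adjust to reduce the error

    print(f"learned w = {w:.2f}")  # converges toward 2.0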

Real-world applications are already prevalent and, in our industry, largely relate to processing viewing metrics. For example, Netflix suggests what we might want to watch next by comparing our viewing habits to those of others with a similar viewing pattern.

But deep learning offers — indeed threatens — much more. Of course, it is understandable to think that, say, delivery drivers might be redundant in a world where autonomous vehicles rule, but surely creative jobs are safe, right? Think again!

IBM was showing how its Watson Studio has used deep learning to provide automated editing highlights packages for sporting events. The process is relatively simple to comprehend, although considerably more complicated in practice. A DL algorithm is trained to scan a video file and “listen” for a cheering crowd. This finds the highlight moment. Another algorithm rewinds back from that to find the logical beginning of that moment, such as the pass forward, the beginning of the volley etc. Taking the score into account helps decide whether that highlight was pivotal to the outcome of the game. Joining all that up creates a highlight package without the services of an editor. This isn’t future stuff. This has been happening over the last year.
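
For illustration only, that pipeline can be sketched in a few lines of Python. The helper functions below stand in for trained deep-learning models; their names and placeholder logic are assumptions, not IBM's API.

    from dataclasses import dataclass

    @dataclass
    class Clip:
        start: float    # seconds into the broadcast
        end: float
        weight: float   # estimated importance to the outcome

    # Placeholder detectors standing in for trained models, so the
    # pipeline runs end to end.
    def detect_crowd_cheers(audio_levels, threshold=0.8):
        return [t for t, level in enumerate(audio_levels) if level > threshold]

    def rewind_to_play_start(cheer_time, lookback=12.0):
        return max(0.0, cheer_time - lookback)  # e.g., back to the pass forward

    def importance_from_score(score_changes, t, window=5.0):
        return 1.0 + sum(1 for s in score_changes if abs(s - t) < window)

    def build_highlights(audio_levels, score_changes, max_clips=10):
        clips = [Clip(rewind_to_play_start(t), t,
                      importance_from_score(score_changes, t))
                 for t in detect_crowd_cheers(audio_levels)]
        top = sorted(clips, key=lambda c: c.weight, reverse=True)[:max_clips]
        return sorted(top, key=lambda c: c.start)  # play out in story order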

BBC R&D was talking about its trials to have DL systems control cameras at sporting events, as they can be trained to follow the rule-of-thirds framing convention and to spot moments of excitement that justify close-ups.

In post production, manual tasks such as rotoscoping and color matching in color grading could be automated. Even styles for graphics, color and compositing could be “learned” from other projects.

It’s certainly possible to see that deep learning systems could provide a great deal of assistance in the creation of day-to-day media. Tasks that are based on repetitiveness or formula would be the obvious targets. The truth is, much of our industry is repetitive and formulaic. Investors prefer content that is more likely to be a hit, and this leads to replication over innovation.

So, are we heading for “Skynet” and need Arnold to save us? I thought it was very telling that IBM occupied the central stand position in Hall 7 — traditionally the home of the tech companies that have driven creativity in post. Clearly, IBM and its peers are staking their claim. I have no doubt that DL and ML will make massive changes to this industry in the years ahead. Creativity is probably, but not necessarily, the only defence for mere humans to keep a hand in.

That said, at IBC 2018 the most popular place for us mere humans to visit was a bar area called The Beach, where we largely drank Heineken. If the ultimate deep learning system were tasked to emulate media people, surely it would create digital alcohol and spend hours talking nonsense, rather than try to take over the media world? So perhaps we have a few years left yet.


David Cox is a VFX compositor and colorist with 20-plus years of experience. He started his career with MPC and The Mill before forming his own London-based post facility. Cox recently created interactive projects with full body motion sensors and 4D/AR experiences.

Panavision, Sim, Saban Capital agree to merge

Saban Capital Acquisition Corp., a publicly traded special purpose acquisition company, Panavision and Sim Video International have agreed to combine their businesses to create a premier global provider of end-to-end production and post production services to the entertainment industry. Under the terms of the business combination agreement, Panavision and Sim will become wholly owned subsidiaries of Saban Capital Acquisition Corp. Upon completion, Saban Capital Acquisition Corp. will change its name to Panavision Holdings Inc. and is expected to continue to trade on the Nasdaq stock exchange. Kim Snyder, president and chief executive officer of Panavision, will serve as chairman and chief executive officer. Bill Roberts, chief financial officer of Panavision, will serve in that role for the combined company.

Panavision designs, manufactures and provides high-precision optics and camera technology for the entertainment industry and is a leading global provider of production equipment and services. Sim is a leading provider of production and post production solutions with facilities in Los Angeles, Vancouver, Atlanta, New York and Toronto.

“This acquisition will leverage the best of Panavision’s and Sim’s resources by providing comprehensive products and services to best address the ever-adapting needs of content creators globally,” says Snyder.

“We’re combining the talent and integrated services of Sim with two of the biggest names in the business, Panavision and Saban,” adds James Haggarty, president and CEO of Sim. “The resulting scale of the new combined enterprise will better serve our clients and help shape the content-creation landscape.”

The respective boards of directors of Saban Capital Acquisition Corp., Panavision and Sim have unanimously approved the merger with completion subject to Saban Capital Acquisition Corp. stockholder approval, certain regulatory approvals and other customary closing conditions. The parties expect that the process will be completed in the first quarter of 2019.

HPA Tech Retreat 2019 opens call for proposals

The Hollywood Professional Association has issued the call for proposals for the 2019 HPA Tech Retreat, the annual gathering of professionals from around the world who work at the intersection of technology and content creation. The main conference is determined by the proposals submitted during this process.

The HPA Tech Retreat comprises Tech Retreat Extra (TR-X), the Supersession, breakfast roundtables, an Innovation Zone and the main conference. Also open now are submissions for the breakfast roundtables.

Now in its 24th year, the HPA Tech Retreat will take place February 11-15, 2019 at the JW Marriott Desert Springs Resort & Spa in Palm Desert, California, near Palm Springs.

The main program presentations are set for Wednesday, February 13, through Friday, February 15. These presentations are strictly reserved for marketing-free content. Mark Schubin, who has programmed the Tech Retreat since its inception, notes that main program sessions can include a wide range of content. “We are looking for the most interesting, thought-provoking, challenging and important ideas, diving into almost anything that is related to moving images and associated sounds. That includes, but is not limited to: alternative content for cinema, AR, broadcast in the age of broadband, content protection, dynamic range, enhanced cinema, frame rate, global mastering, higher immersion, international law, joke generation, kernel control, loss recovery, media management, night vision, optical advances, plug-‘n’-play, queasiness in VR, robo-post, surround imagery, Terabyte thumb drives, UHD II, verification, wilderness production, x-band Internet access, yield strength of lighting trusses and zoological holography.”

It is a far-ranging and creative call to the most innovative thinkers exploring the most interesting ideas and work. He concludes with his annual salvo, “Anything from scene to seen and gear to ear is fair game. So are haptic/tactile, olfactory and gustatory applications.”

Proposals, which are informal and can be as short as a few sentences, must be submitted by the would-be presenter. Submitters will be contacted if the topic is of interest. Presentations in the main program are typically 30 minutes long, including set-up and Q&A. The deadline to submit main program proposals is end of day, Friday, October 26, 2018. Submissions should be sent to tvmark@earthlink.net.

Breakfast roundtables take place Wednesday to Friday, beginning at 7:30am. Unlike the main program, moderator-led breakfast roundtables can include marketing information. Schubin comments, “Table moderators are free to teach, preach, inquire, ask, call-to-task, sell or do anything else that keeps conversation flowing for an hour.”

There is no vetting process for breakfast roundtables. All breakfast roundtable moderators must be registered for the retreat, and there is no retreat registration discount conveyed by moderating a breakfast roundtable. Proposals for breakfast roundtables must be submitted by their proposed moderators, and once the maximum number of tables is reached (32 per day) no more can be accepted.

Further details for the 2019 HPA Tech Retreat will be announced in the coming weeks, including TR-X focus, supersession topics and Innovation Zone details, as well as seminars and meetings held in advance of the Tech Retreat.

Roundtable Post tackles HFR, UHD and HDR image processing

If you’re involved in post production, especially episodic TV, documentaries and feature films, then it’s highly probable that High Frame Rate (HFR), Ultra High Definition (UHD) and High Dynamic Range (HDR) have come your way.

“On any single project, the combination of HFR, UHD and HDR image-processing can be a pretty demanding, cutting-edge technical challenge, but it’s even more exacting when particular specs and tight turnarounds are involved,” says Jack Jones, digital colorist and CTO of full-service boutique facility Roundtable Post Production.

Among the central London facility’s credits are online virals for brands including Kellogg’s, Lurpak, Rolex and Ford, music films for Above & Beyond and John Mellencamp, plus broadcast TV series and feature documentaries for ITV, BBC, Sky, Netflix, Amazon, Discovery, BFI, Channel 4, Showtime and film festivals worldwide. These include Sean McAllister’s A Northern Soul, Germaine Bloody Greer (BBC) and White Right: Meeting The Enemy (ITV Exposure/Netflix).

“Yes, you can render-out HFR/UHD/HDR deliverables from a variety of editing and grading systems, but there are not many that can handle the simultaneous combination of these formats, never mind the detailed delivery stipulations and crunching deadlines that often accompany such projects,” says Jones.

Rewinding to the start of 2017, Jones says, “Looking forward to the future landscape of post, the proliferation of formats, resolutions, frame rates and color spaces involved in modern screened entertainment seemed an inevitability for our business. We realized that we were going to need to tackle the impending scenario head-on. Having assessed the alternatives, we took the plunge and gambled on Colorfront Transkoder.”

Transkoder is a standalone, automated system for fast digital file conversion. Roundtable Post’s initial use of Colorfront Transkoder turned out to be the creation of encrypted DCP masters and worldwide deliverables of a variety of long-form projects, such as Nick Broomfield’s Whitney: Can I Be Me, Noah Media Group’s Bobby Robson: More Than a Manager, Peter Medak’s upcoming feature The Ghost of Peter Sellers, and the Colombian feature-documentary To End A War, directed by Marc Silver.

“We discovered from these experiences that, along with incredible quality in terms of image science, color transforms and codecs, Transkoder is fast,” says Jones. “For example, the deliverables for To End A War involved 10 different language versions, plus subtitles. It would have taken several days to complete these straight out of an Avid, but rendering in Transkoder took just four hours.”

More recently, Roundtable Post was faced with the task of delivering country-specific graphics packages, designed and created by production agency Noah Media Group, for use by FIFA rights holders and broadcasters during the 2018 World Cup.

The project involved delivering a mix of HFR, UHD, HDR and HD SDR formats, resulting in 240 bespoke animations and the production of a mammoth 1,422 different deliverables. These included 59.94p UHD HDR, 50p UHD HDR, 59.94p HD SDR, 50p HD SDR, 59.94i HD SDR and 50i HD SDR, with a variety of clock, timecode, pre-roll, soundtrack, burn-in and metadata requirements as part of the overall specification. Furthermore, the job encompassed the final QC of all deliverables, and it had to be completed within a five-day work week.

“For a facility of our size, this was a significant job in terms of its scale and deadline,” says Jones. “Traditionally, projects like these would involve throwing a lot of people and time at them, and there’s always the chance of human error creeping in. Thankfully, we already had positive experiences with Transkoder, and were eager to see how we could harness its power.”

Using technical data from FIFA, Jones built an XML file containing timelines with all of the relevant timecode, clock, image metadata, WAV audio and file-naming information for the required deliverables. He also liaised with Colorfront’s R&D team and was quickly provided with an initial set of Python script templates that would help automate the various requirements of the job in Transkoder.
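
As an illustration of that kind of spec-driven automation, a few lines of standard-library Python can emit such an XML timeline file. The element and attribute names below are invented for the sketch, not Transkoder's actual schema, and the two entries are hypothetical stand-ins for the real deliverables list.

    import xml.etree.ElementTree as ET

    # Two of the many format variants mentioned above, reduced to a dict each.
    deliverables = [
        {"name": "WC2018_UHD_HDR_5994p", "rate": "59.94p", "format": "UHD HDR",
         "start_tc": "09:59:30:00", "audio": "mix_st.wav"},
        {"name": "WC2018_HD_SDR_50i", "rate": "50i", "format": "HD SDR",
         "start_tc": "09:59:30:00", "audio": "mix_st.wav"},
    ]

    root = ET.Element("timelines")
    for d in deliverables:
        tl = ET.SubElement(root, "timeline", name=d["name"], rate=d["rate"])
        ET.SubElement(tl, "video", format=d["format"])
        ET.SubElement(tl, "timecode", start=d["start_tc"])  # clock/pre-roll data
        ET.SubElement(tl, "audio", file=d["audio"])

    ET.ElementTree(root).write("deliverables.xml", encoding="utf-8",
                               xml_declaration=True)

Driving hundreds of renders from one generated spec like this is what turns a week of manual setup into a batch job, which is consistent with the three-day turnaround described below.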

Roundtable Post was able to complete the FIFA 2018 World Cup job, including the client-attend QC of the 1,422 different UHD HDR and HD SDR assets, in under three days.

The Meg: What does a giant shark sound like?

By Jennifer Walden

Warner Bros. Pictures’ The Meg has everything you’d want in a fun summer blockbuster. There are explosions, submarines, gargantuan prehistoric sharks and beaches full of unsuspecting swimmers. Along with the mayhem, there is comedy and suspense and jump-scares. Best of all, it sounds amazing in Dolby Atmos.

The team at E² Sound, led by supervising sound editors Erik Aadahl, Ethan Van der Ryn and Jason Jennings, created a soundscape that wraps around the audience like a giant squid around a submersible. (By the way, that squid vs. submersible scene is so fun for sound!)

L-R: Ethan Van der Ryn and Erik Aadahl.

We spoke to the E² Sound team about the details of their recording sessions for the film. They talk about how they approached the sound for the megalodons, how they used the Atmos surround field to put the audience underwater and much more.

Real sharks can’t make sounds, but Hollywood sharks do. How did director Jon Turteltaub want to approach the sound of the megalodon in his film?
Erik Aadahl: Before the film was even shot, we were chatting with producer Lorenzo di Bonaventura, and he said the most important thing in terms of sound for the megalodon was to sell the speed and power. Sharks don’t have any organs for making sound, but they are very large and powerful and are able to displace water. We used some artistic sonic license to create the quick sound of them moving around and displacing water. Of course, when they breach the surface, they have this giant mouth cavity that you can have a lot of fun with in terms of surging water and creating terrifying, guttural sounds out of that.

Jason Jennings: At one point, director Turteltaub did ask the question, “Would it be appropriate for The Meg to make a growl or roar?”

That opened up the door for us to explore that avenue. The megalodon shouldn’t make a growling or roaring sound, but there’s a lot that you can do with the sound of water being forced through the mouth or gills, whether you are above or below the water. We explored sounds that the megalodon could be making with its body. We were able to play with sounds that aren’t animal sounds but could sound animalistic with the right amount of twisting. For example, if you have the sound of a rock being moved slowly through the mud, and you process that a certain way, you can get a sound that’s almost vocal but isn’t an animal. It’s another type of organic sound that can evoke that idea.

Aadahl: One of my favorite things about the original Jaws was that when you didn’t see or hear Jaws it was more terrifying. It’s the unknown that’s so scary. One of my favorite scenes in The Meg was when you do not see or hear it, but because of this tracking device that they shot into its fin, they are able to track it using sonar pings. In that scene, one of the main characters is in this unbreakable shark enclosure just waiting out in the water for The Meg to show up. All you hear are these little pings that slowly start to speed up. To me, that’s one of the scariest scenes because it’s really playing with the unknown. Sharks are these very swift, silent, deadly killers, and the megalodon is this silent killer on steroids. So it’s this wonderful, cinematic moment that plays on the tension of the unknown — where is this megalodon? It’s really gratifying.

Since sharks are like the ninjas of the ocean (physically, they’re built for stealth), how do you use sound to help express the threat of the megalodon? How were you able to build the tension of an impending attack, or to enhance an attack?
Ethan Van der Ryn: It’s important to feel the power of this creature, so there was a lot of work put into feeling the effect that The Meg had on whatever it’s coming into contact with. It’s not so much about the sounds that are emitting directly from it (like vocalizations) but more about what it’s doing to the environment around it. So, if it’s passing by, you feel the weight and power of it passing by. When it attacks — like when it bites down on the window — you feel the incredible strength of its jaws. Or when it attacks the shark cage, it feels incredibly shocking because that sound is so terrifying and powerful. It becomes more about feeling the strength and power and aggressiveness of this creature through its movements and attacks.

Jennings: In terms of building tension leading up to an attack, it’s all about paring back all the elements beforehand. Before the attack, you’ll find that things get quiet and calmer and a little sparse. Then, all of a sudden, there’s this huge explosion of power. It’s all about clearing a space for the attack so that it means something.

The attack on the window in the underwater research station, how did you build that sequence? What were some of the ways you were able to express the awesomeness of this shark?
Aadahl: That’s a fun scene because you have the young daughter of a scientist on board this marine research facility located in the South China Sea and she’s wandered onto this observation deck. It’s sort of under construction and no one else is there. The girl is playing with this little toy — an iPad-controlled gyroscopic ball that’s rolling across the floor. That’s the featured sound of the scene.

You just hear this little ball skittering and rolling across the floor. It kind of reminds me of Danny’s tricycle from The Shining. It’s just so simple and quiet. The rhythm creates this atmosphere and lulls you into a solitary mood. When the shark shows up, you’re coming out of this trance. It’s definitely one of the big shock-scares of the movie.

Jennings: We pared back the sounds there so that when the attack happened it was powerful. Before the attack, the rolling of the ball and the tickety-tick of it going over the seams in the floor really does lull you into a sense of calm. Then, when you do see the shark, there’s this cool moment where the shark and the girl are having a staring contest. You don’t know who’s going to make the first move.

There’s also a perfect handshake there between sound design and music. The music is very sparse, just a little bit of violins to give you that shiver up your spine. Then, WHAM!, the sound of the attack just shakes the whole facility.

What about the sub-bass sounds in that scene?
Aadahl: You have the mass of this multi-ton creature slamming into the window, and you want to feel that in your gut. It has to be this visceral body experience. By the way, effects re-recording mixer Doug Hemphill is a master at using the subwoofer. So during the attack, in addition to the glass cracking and these giant teeth chomping into this thick plexiglass, there’s this low-end “whoomph” that just shakes the theater. It’s one of those moments where you want everyone in the theater to just jump out of their seats and fling their popcorn around.

To create that sound, we used a number of elements, including some recordings of glass breaking that we had done a while ago. My parents were replacing an 8’ x 12’ glass window in their house, and before they demolished the old one I told them not to throw it out because I wanted to record it first.

So I mic’d it up with my “hammer mic,” which I’m very willing to beat up. It’s an Audio-Technica AT825, which has a fixed stereo polar pattern of 110 degrees and a large diaphragm, so it captures a really nice low-end response. I did several bangs on the glass before finally smashing it with a sledgehammer. When you have a surface that big, you can get a super low-end response because the surface acts like a membrane. So that was one of the many elements that comprised that attack.

Jennings: Another custom-recorded element for that sound came from a recording session where we tried to simulate the sound of The Meg’s teeth on a plastic cylinder for the shark cage sequence later in the film. We found a good-sized plastic container that we filled with water and we put a hydrophone inside the container and put a contact mic on the outside. From that point, we proceeded to abuse that thing with handsaws and a hand rake — all sorts of objects that had sharp points, even sharp rocks. We got some great material from that session, sounds where you can feel the cracking nature of something sharp on plastic.

For another cool recording session, in the editorial building where we work, we set up all the sound systems to play the same material through all of the subwoofers at once. Then we placed microphones throughout the facility to record the response of the building to all of this low-end energy. So for that moment where the shark bites the window, we have this really great punching sound we recorded from the sound of all the subwoofers hitting the building at once. Then after the bite, the scene cuts to the rest of the crew who are up in a conference room. They start to hear these distant rumbling sounds of the facility as it’s shaking and rattling. We were able to generate a lot of material from that recording session to feel like it’s the actual sound of the building being shaken by extreme low-end.

L-R: Emma Present, Matt Cavanaugh and Jason (Jay) Jennings.

The film spends a fair amount of time underwater. How did you handle the sound of the underwater world?
Aadahl: Jay [Jennings] just put a new pool in his yard and that became the underwater Foley stage for the movie, so we had the hydrophones out there. In the film, there are these submersible vehicles that Jay did a lot of experimentation for, particularly for their underwater propeller swishes.

The thing about hydrophones is that you can’t just put them in water and expect there to be sound. Even if you are agitating the water, you often need air displacement underwater pushing over the mics to create that surge sound that we associate with being underwater. Over the years, we’ve done a lot of underwater sessions and we found that you need waves, or agitation, or you need to take a high-powered hose into the water and have it near the surface with the hydrophones to really get that classic, powerful water rush or water surge sound.

Jennings: We had six different hydrophones for this particular recording session. We had a pair of Aquarian Audio H2a hydrophones, a pair of JrF hydrophones and a pair of Ambient Recording ASF-1 hydrophones. These are all different quality mics — some are less expensive and some are extremely expensive, and you get a different frequency response from each pair.

Once we had the mics set up, we had several different props available to record. One of the most interesting was a high-powered drill that you would use to mix paint or sheetrock compound. Connected to the drill, we had a variety of paddle attachments because we were trying to create new source for all the underwater propellers for the submersibles, ships and jet skis — all of which we view from underneath the water. We recorded the sounds of these different attachments in the water churning back and forth. We recorded them above the water, below the water, close to the mic and further from the mic. We came up with an amazing palette of sounds that didn’t need any additional processing. We used them just as they were recorded.

We got a lot of use out of these recordings, particularly for the glider vehicles, which are these high-tech, electrically-propelled vehicles with two turbine cyclone propellers on the back. We had a lot of fun designing the sound of those vehicles using our custom recordings from the pool.

Aadahl: There was another hydrophone recording mission that the crew, including Jay, went on. They set out to capture the migration of humpback whales. One of our hydrophones got tangled up in the boat’s propeller because we had a captain who was overly enthusiastic to move to the next location. So there was one casualty in our artistic process.

Jennings: Actually, it was two hydrophones. But the best part is that we got the recording of that happening, so it wasn’t a total loss.

Aadahl: “Underwater” is a character in this movie. One of the early things that the director and the picture editor Steven Kemper mentioned was that they wanted to make a character out of the underwater environment. They really wanted to feel the difference between being underwater and above the water. There is a great scene with Jonas (Jason Statham) where he’s out in the water with a harpoon and he’s trying to shoot a tracking device into The Meg.

He’s floating on the water and it’s purely environmental sounds, with the gentle lap of water against his body. Then he ducks his head underwater to see what’s down there. We switch perspectives there and it’s really extreme. We have this deep underwater rumble, like a conch shell feeling. You really feel the contrast between above and below the water.

Van der Ryn: Whenever we go underwater in the movie, Turteltaub wanted the audience to feel extremely uncomfortable, like that was an alien place and you didn’t want to be down there. So anytime we are underwater the sound had to do that sonic shift to make the audience feel like something bad could happen at any time.

How did you make being underwater feel uncomfortable?
Aadahl: That’s an interesting question, because it’s very subjective. To me, the power of sound is that it can play with emotions in very subconscious and subliminal ways. In terms of underwater, we had many different flavors for what that underwater sound was.

In that scene with Jonas going above and below the water, it’s really about that frequency shift. You go into a deep rumble under the water, but it’s not loud. It’s quiet. But sometimes the scariest sounds are the quiet ones. We learned this from A Quiet Place recently and the same applies to The Meg for sure.

Van der Ryn: Whenever you go quiet, people get uneasy. It’s a cool shift because when you are above the water you see the ripples of the ocean all over the place. When working in 7.1 or the Dolby Atmos mix, you can take these little rolling waves and pan them from center to left or from the right front wall to the back speakers. You have all of this motion and it’s calming and peaceful. But as soon as you go under, all of that goes away and you don’t hear anything. It gets really quiet and that makes people uneasy. There’s this constant low-end tone and it sells pressure and it sells fear. It is very different from above the water.

Aadahl: Turteltaub described this feeling of pressure, so it’s something that’s almost below the threshold of hearing. It’s something you feel; this pressure pushing against you, and that’s something we can do with the subwoofer. In Atmos, all of the speakers around the theater are extended-frequency range so we can put those super-low frequencies into every speaker (including the overheads) and it translates in a way that it doesn’t in 7.1. In Atmos, you feel that pressure that Turteltaub talked a lot about.

The Meg is an action film, so there are shootings, explosions, ships getting smashed up and other mayhem. What was the most fun action scene for sound? Why?
Jennings: I like the scene in the submersible shark cage where Suyin (Bingbing Li) is waiting for the shark to arrive. This turns into a whole adventure of her getting thrashed around inside the cage. The boat that is holding the cable starts to get pulled along. That was fun to work on.

Also, I enjoyed the end of the film where Jonas and Suyin are in their underwater gliders and they are trying to lure The Meg to a place where they can trap and kill it. The gliders were very musical in nature. They had some great tonal qualities that made them fun to play with using Doppler shifts. The propeller sounds we recorded in the pool… we used those for when the gliders go by the camera. We hit them with these churning sounds, and there’s the sound of the bubbles shooting by the camera.

Aadahl: There’s a climactic scene in the film with hundreds of people on a beach and a megalodon in the water. What could go wrong? There’s one character inside a “zorb” ball — an inflatable hamster ball for humans that’s used for scrambling around on top of the water. At a certain point, this “zorb” ball pops and that was a sound that Turteltaub was obsessed with getting right.

We went through so many iterations of that sound. We wound up doing an extensive balloon-popping session on Stage 10 at Warner Bros., where we had enough room to inflate a 16-foot weather balloon. We popped a bunch of different balloons there, and we accidentally popped the weather balloon, but fortunately we were rolling and we got it. So a combination of those sounds created the “zorb” ball pop.

That scene was one of my favorites in the film because that’s where the shit hits the fan.

Van der Ryn: That’s a great moment. I revisited that to do something else in the scene, and when the zorb popped it made me jump back because I forgot how powerful a moment that is. It was a really fun, and funny moment.

Aadahl: That’s what’s great about this movie. It has some serious action and really scary moments, but it’s also fun. There are some tongue-in-cheek moments that made it a pleasure to work on. We all had so much fun working on this film. Jon Turteltaub is also one of the funniest people that I’ve ever worked with. He’s totally obsessed with sound, and that made for an amazing sound design and sound mix experience. We’re so grateful to have worked on a movie that let us have so much fun.

What was the most challenging scene for sound? Was there one scene that evolved a lot?
Aadahl: There’s a rescue scene that takes place in the deepest part of the ocean, and the rescue is happening from this nuclear submarine. They’re trying to extract the survivors, and at one point there’s this sound from inside the submarine, and you don’t know what it is but it could be the teeth of a giant megalodon scraping against the hull. That sound, which takes place over this one long tracking shot, was one that the director focused on the most. We kept going back and forth and trying new things. Massaging this and swapping that out… it was a tricky sound.

Ultimately, it ended up being a combination of sounds. Jay and sound effects editor Matt Cavanaugh went out and recorded this huge, metal cargo crate container. They set up mics inside and took all sorts of different metal tools and did some scraping, stuttering, chittering and other friction sounds. We got all sorts of material from that session and that’s one of the main featured sounds there.

Jennings: Turteltaub at one point said he wanted it to sound like a shovel being dragged across the top of the submarine, and so we took him quite literally. We went to record that container on one of the hottest days of the year. We had to put Matt (Cavanaugh) inside and shut the door! So we did short takes.

I was on the roof dragging shovels, rakes, a garden hoe and other tools across the top. We generated a ton of great material from that.

As with every film we do, we don’t want to rely on stock sounds. Everything we put together for these movies is custom made for them.

What about the giant squid? How did you create its sounds?
Aadahl: I love the sound that Jay came up with for the suction cups on the squid’s tentacles as they’re popping on and off of the submersible.

Jennings: Yet another glorious recording session that we did for this movie. We parked a car in a quiet location here at WB, and we put microphones inside of the car — some stereo mics and some contact mics attached to the windshield. Then, we went outside the car with two or three different types of plungers and started plunging the windshield. Sometimes we used a dry plunger and sometimes we used a wet plunger. We had a wet plunger with dish soap on it to make it slippery and slurpie. We came up with some really cool material for the cups of this giant squid. So we would do a hard plunge onto the glass, and then pull it off. You can stutter the plunger across the glass to get a different flavor. Thankfully, we didn’t break any windows, although I wasn’t sure that we wouldn’t.

Aadahl: I didn’t donate my car for that recording session because I have broken my windshield recording water in the past!

Van der Ryn: In regards to perspective in that scene, when you’re outside the submersible, it’s a wide shot and you can see the arms of the squid flailing around. There we’re using the sound of water motion but when we go inside the submersible it’s like this sphere of plastic. In there, we used Atmos to make the audience really feel like those squid tentacles are wrapping around the theater. The little suction cup sounds are sticking and stuttering. When the squid pulls away, we could pinpoint each of those suction cups to a specific speaker in the theater and be very discrete about it.

Any final thoughts you’d like to share on the sound of The Meg?
Van der Ryn: I want to call out Ron Bartlett, the dialogue/music re-recording mixer and Doug Hemphill, the re-recording mixer on the effects. They did an amazing job of taking all the work done by all of the departments and forming it into this great-sounding track.

Aadahl: Our music composer, Harry Gregson-Williams, was pretty amazing too.

Pixelogic adds d-cinema, Dolby audio mixing theaters to Burbank facility

Pixelogic, which provides localization and distribution services, has opened post production content review and audio mixing theaters within its facility in Burbank. The new theaters extend the company’s end-to-end services to include theatrical screening of digital cinema packages as well as feature and episodic audio mixing in support of its foreign language dubbing business.

Pixelogic now operates a total of six projector-lit screening rooms within its facility. Each room was purpose-built from the ground up to include HDR picture and immersive sound technologies, including support for Dolby Atmos and DTS:X audio. The main theater is equipped with a Dolby Vision projection system and supports Dolby Atmos immersive audio. The facility will enable the creation of more theatrical content in Dolby Vision and Dolby Atmos, which consumers can experience at Dolby Cinema theaters, as well as in their homes and on the go. The four larger theaters are equipped with Avid S6 consoles in support of the company’s audio services. The latest 4D motion chairs are also available for testing and verification of 4D capabilities.

“The overall facility design enables rapid and seamless turnover of production environments that support Digital Cinema Package (DCP) screening, audio recording, audio mixing and a range of mastering and quality control services,” notes Andy Scade, SVP/GM of Pixelogic’s worldwide digital cinema services.