
HPA Tech Retreat 2019: An engineer’s perspective

By John Ferder

Each year, I look forward to attending the Hollywood Professional Association’s Tech Retreat, better known as the HPA Tech Retreat. Apart from escaping the New York winter, it gives me new perspectives, a chance to exchange ideas with friends and colleagues and explore the latest technical and creative information. As a broadcast engineer, I get a renewed sense of excitement and purpose.

Also, as secretary/treasurer of SMPTE, the Board of Governors meetings as well as the Strategy Day held each year before the Tech Retreat energize me. This year, we invited a group of younger professionals to tell us what SMPTE could do to attract them to SMPTE and HPA, and what they needed from us as experienced professionals.

Their enthusiasm and honesty were refreshing and encouraging. We learned that while we have been trying to reach out to them, they have been looking for us to invite them into the Society. They have been looking for mentors and industry leaders to engage them one-on-one and introduce them to SMPTE and how it can be of value to them.

Presentations and Hot Topics
While it is true that the Hollywood motion picture community is behind producing this Tech Retreat, it is by no means limited to the film industry. There was plenty of content and information for those of us on the broadcast side to learn and incorporate into our workflows and future planning, including a presentation on the successor to SMPTE timecode. Peter Symes, formerly director of standards for SMPTE and a SMPTE Fellow, presented an update on the TLX Project and the development of what is to be SMPTE standard ST 2120, the Extensible Time Label.

This suite of standards will be built on the work already done in ST 2059, which describes the use of the IEEE 1588 Precision Time Protocol to synchronize video equipment over an IP network. The Extensible Time Label will succeed, not replace, ST 12, the analog timecode that we have used with great success for 50 years. As production moves increasingly toward IP networks, this work will produce a digital time labeling system as universal as ST 12 timecode has been. Symes invited audience members to join the 32NF-80 Technology Committee, which is developing and drafting the standard.
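
The link between PTP and timecode is easy to sketch: PTP distributes a continuous count of seconds since an epoch, and a frame-accurate label can be derived from it. The snippet below is only an illustration of that idea, not the normative ST 2059-1 derivation (which also handles drop-frame rates and epoch alignment); the function name and the integer-rate simplification are my own.

```python
def seconds_to_label(ptp_seconds: float, fps: int) -> str:
    """Derive an HH:MM:SS:FF label from a PTP-style seconds value.

    Illustrative only: integer frame rates, no drop-frame handling,
    unlike the full ST 2059-1 formula.
    """
    total_frames = int(ptp_seconds * fps)   # whole frames since the epoch
    ff = total_frames % fps                 # frame number within the second
    total_seconds = total_frames // fps
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = (total_seconds // 3600) % 24       # wrap at one day, as ST 12 does
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

print(seconds_to_label(3661.5, 25))  # 1 h, 1 min, 1 s and 12 frames into the day
```

Because every device derives the same label from the same shared PTP time, no separate timecode distribution is needed, which is the property TLX builds on.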

Phil Squyres

What were the hot topics this year? HDR, Wide Color Gamut, AI/machine learning, IMF and next-generation workflows had a large number of presentations. While this may seem to be the “same old, same old,” the amount of both technical and practical information presented this year was a real eye-opener to many of us.

Phil Squyres gave a talk comparing next-generation and broadcast production workflows, which revealed that completing a program episode for OTT distribution requires 2.2X or more the time and storage of completing one for broadcast. This echoed the observations of an earlier panel of colorists and post specialists for Netflix feature films, one of whom advised that instead of planning to complete post production two weeks prior to release, plan on completing it five to six weeks prior to allow for the extra QC work on both the HDR and SDR releases.

Artificial Intelligence and Machine Learning
Perhaps the most surprising presentation for me was given by Rival Theory, a company that generates AI personas based on real people’s memories, behaviors and mannerisms. They detailed the process by which they are creating a persona of Tony Robbins, famous motivational speaker and investor in Rival Theory. Robbins intends to have a life-like persona created to help people with life coaching and continue his mission to end suffering throughout the world, even after he dies. In addition to the demonstration of the multi-camera storing and rendering of his face while talking and displaying many emotions, they showed how Robbins’ speech was saved and synthesized for the persona. A rendering of the completed persona was presented and was very impressive.

Many presentations focused on applications of AI and machine learning in existing production and post workflows. I appreciated that a number of the presenters stressed that their solutions were meant not to replace the human element in these workflows, but to instead apply AI/ML to the redundant and tedious tasks, not the creative ones. Jason Brahms of Video Gorillas brought that point home in his presentation on “AI Film Restoration at 12 Million Frames per Second,” as did Tim Converse of Adobe in “Leveraging AI in Post Production.”

Broadcasters panel

Panels and Roundtables
Matthew Goldman of MediaKind chaired the annual Broadcasters Panel, which included Del Parks (Sinclair), Dave Siegler (Cox Media Group), Skip Pizzi (NAB) and Richard Friedel (Fox). They discussed the further development and implementation of the ATSC 3.0 broadcast standard, including the Pearl Consortium initiative in Phoenix and other locations, the outlook for ATSC 3.0 tuner chips in future television receivers and the applications of the standard beyond over-the-air broadcasting, with an emphasis on data-casting services.

All of the members of the panel are strong proponents of implementing the ATSC 3.0 standard, and more broadcasters are joining the evolution toward it. I would have appreciated the inclusion on the panel of someone of similar stature who is not quite so gung-ho on the standard, to discuss some of the unaddressed challenges and difficulties and give us a balanced presentation. For example, there is no government mandate or sponsorship for the move to ATSC 3.0 as there was for the move to ATSC 1.0, so what really motivates broadcasters to make this move? Have the effects of the broadcast spectrum re-packing on available bandwidth negatively affected the ability of broadcasters in all markets to accommodate both ATSC 3.0 and ATSC 1.0 channels?

I really enjoyed “Adapting to a COTS Hardware World,” moderated by Stan Moote of the IABM. Paul Stechly, president of Applied Electronics, noted that more and more end users are building their own in-house solutions, assisted by manufacturers moving away from proprietary applications to open APIs. Another insight panelists shared was that COTS no longer applies only to data hubs and switches. Today, the term extends to desktop computers and to consumer televisions and video displays as well. More and more, production and post suites are incorporating these into their workflows and environments to test finished productions on the equipment on which their audiences will view them.

Breakfast roundtables

Breakfast Roundtables, which were held on Wednesday, Thursday and Friday mornings, are among my conference “must attends.” Over breakfast, manufacturers and industry experts are given a table to present a topic for discussion by all the participants. The exchange of ideas and approaches benefits everyone at the tables and is a great wake-up exercise leading into the presentations. My favorite, and one of the most popular of the Tech Retreat, is on Friday when S. Merrill Weiss of the Merrill Weiss Group, as he has for many years, presents us with a list of about 12 topics to discuss. This year, his co-host was Karl Paulsen, CTO of Diversified Systems, and the conversations were lively indeed. Some of the topics we discussed were the costs of building a facility based on ST2110, the future of coaxial cable in the broadcast plant, security in modern IP networks and PTP, and the many issues in the evolution from ATSC 1.0 to ATSC 3.0.

As usual, the table was full, with a few extra people squeezing in around it. We didn’t address every topic, and we had to cut the discussions short or risk missing the first presentation of the day.

Final Thoughts
The HPA Tech Retreat’s presentations, panels and discussion forums are a continuing tool in my professional development. Attending this year reaffirmed and amplified my belief that this event should be on every broadcaster’s and content creator’s calendar. The presentations showed that the line between the motion picture and television communities is blurring further, and that techniques embraced by one community also benefit the other.

The HPA Tech Retreat is still small enough for engaging conversations with speakers and industry professionals, sharing their industry, technical, and creative insights, issues and findings.


John Ferder is the principal engineer at John Ferder Engineer, currently Secretary/Treasurer of SMPTE, an SMPTE Fellow, and a member of IEEE. Contact him at john@johnferderengineer.com.

HPA releases 2019 Tech Retreat program, includes eSports

The Hollywood Professional Association (HPA) has set the schedule for the 2019 HPA Tech Retreat, which runs February 11-15. The Tech Retreat, celebrating its 25th year, takes place over the course of a week at the JW Marriott Resort & Spa in Palm Desert, California.

The HPA Tech Retreat spans five days of sessions, technology demonstrations and events. During this week, important aspects of production, broadcast, post, distribution and related M&E trends are explored. One of the key differentiators of the Tech Retreat is its strict adherence to a non-commercial focus: marketing-oriented presentations are prohibited except at breakfast roundtables.

“Once again, we’ve received many more submissions than we could use,” says Mark Schubin, the Program Maestro of the HPA Tech Retreat. “To say this year’s were ‘compelling’ is an understatement. We could have programmed a few more days. Rejecting terrific submissions is always the hardest thing we have to do. I’m really looking forward to learning the latest on HDR, using artificial intelligence to restore old movies and machine learning to deal with grunt work, the Academy’s new software foundation, location-based entertainment with altered reality and much more.”

This year’s program is as follows:

Monday February 11: TR-X
eSports: Dropping the Mic on Center Stage
Separate registration required
A half day of targeted panels, speakers and interaction, TR-X will focus on the rapidly growing arena of eSports, with a keynote from Yvette Martinez, CEO – North America of eSports organizer and production company ESL North America.
Tuesday February 12: Supersession
Next-Gen Workflows and Infrastructure: From the Set to the Consumer

Wednesday February 13: Main Program Highlights
• Mark Schubin’s Technology Year in Review
• Washington Update (Jim Burger, Thompson Coburn LLP)
The highly anticipated review of legislation and its impact on our business from a leading Washington attorney.

• Deep Fakes (Moderated by Debra Kaufman, ETCentric; Panelists Marc Zorn, HBO; Ed Grogan, Department of Defense; Alex Zhukov, Video Gorillas)
It might seem nice to be able to use actors long dead, but the concept of “fake news” takes a terrifying new turn with deepfakes, the term that Wikipedia describes as a portmanteau of “deep learning” and “fake.” Although people have been manipulating images for centuries – long before the creation of Adobe Photoshop – the new AI-powered tools allow the creation of very convincing fake audio and video.

• The Netflix Media Database (Rohit Puri, Netflix)
An optimized user interface, meaningful personalized recommendations, efficient streaming and a high-quality catalog of content are the principal factors that define the Netflix end-user experience. A myriad of business workflows of varying complexities come together to realize this experience. Under the covers, they use computationally expensive computer vision, audio processing and natural language-processing based media analysis algorithms. These algorithms generate temporally and spatially dynamic metadata that is shared across the various use cases. The Netflix Media DataBase (NMDB) is a multi-tenant data system that is used to persist this deeply technical metadata about various media assets at Netflix and that enables querying it at scale. The “shared nothing” distributed database architecture allows NMDB to store large amounts of media timeline data, thus forming the backbone for various Netflix media processing systems.

• AI Film Restoration at 12 Million Frames per Second (Alex Zhukov, Video Gorillas)

• Is More Media Made for Subways Than for TV and Cinema? (and does it Make More $$$?) (Andy Quested, BBC)

• Broadcasters Panel (Moderator: Matthew Goldman, MediaKind)

• CES Review (Peter Putman, ROAM Consulting)
Pete Putman traveled to Las Vegas to see what’s new in the world of consumer electronics and returns to share his insights with the HPA Tech Retreat audience.

• 8K: Whoa! How’d We Get There So Quickly (Peter Putman, ROAM Consulting)

• Issues with HDR Home Video Deliverables for Features (Josh Pines, Technicolor)

• HDR “Mini” Session
• HDR Intro: Seth Hallen, Pixelogic
• Ambient Light Compensation for HDR Presentation: Don Eklund, Sony Pictures Entertainment
• HDR in Anime: Haruka Miyagawa, Netflix
• Pushing the Limits of Motion Appearance in HDR: Richard Miller, Pixelworks
• Downstream Image Presentation Management for Consumer Displays:
• Moderator: Michael Chambliss, International Cinematographers Guild
• Michael Keegan, Netflix
• Annie Chang, UHD Alliance
• Steven Poster, ASC, International Cinematographers Guild
• Toshi Ogura, Sony

• Solid Cinema Screens with Front Sound: Do They Work? (Julien Berry, Delair Studios)
Direct-view displays bring high image quality to the cinema but suffer from a low pixel fill factor that can lead to heavy moiré and aliasing patterns. Cinema projectors have a much better fill factor, which avoids most of those issues, even though some moiré effect can be produced by the screen perforations needed for the audio. With the advent of high contrast, EDR and soon HDR image quality in cinema, screen perforations impact the perceived brightness and contrast of the same image, though the effect has never been quantified, since some perforations had always been needed for cinema audio. With the advent of high-quality cinema audio systems, it is now possible to quantify this effect.

Thursday, February 14: Main Program Highlights

• A Study Comparing Synthetic Shutter and HFR for Judder Reduction (Ianik Beitzel and Aaron Kuder, ARRI and Stuttgart Media University (HdM))

• Using Drones and Photogrammetry Techniques to Create Detailed (High Resolution) Point Cloud Scenes (Eric Pohl, Singularity Imaging)
Drone aerial photography may be used to create multiple geotagged images that are processed to create a 3D point cloud set of a ground scene. The point cloud may be used for production previsualization or background creation for videogames or VR/AR new-media products.

• Remote and Mobile Production Panel (Moderator: Mark Chiolis, Mobile TV Group; Wolfgang Schram, PRG; Scott Rothenberg, NEP)
With a continuing appetite for content from viewers of all the major networks, as well as niche networks, streaming services, web, eGames/eSports and venue and concert-tour events, the battle is on to make it possible to watch almost every sporting and entertainment event that takes place, all live as it is happening. Key members of the remote and mobile community explore what’s new and what workflows are behind the content production and delivery in today’s fast-paced environments. Expect to hear about new REMI applications, IP workflows, AI, UHD/HDR, eGames, and eSports.

• IMSC 1.1: A Single Subtitle and Caption Format for the Entertainment Chain (Pierre-Anthony Lemieux, Sandflow Consulting (supported by MovieLabs); Dave Kneeland, Fox)
IMSC is a W3C standard for worldwide subtitles/captions, and the result of an international collaboration. The initial version of IMSC (IMSC 1) was published in 2016, and has been widely adopted, including by SMPTE, MPEG, ATSC and DVB. With the recent publication of IMSC 1.1, we now have the opportunity to converge on a single subtitle/caption format across the entire entertainment chain, from authoring to consumer devices. IMSC 1.1 improves on IMSC 1 with support for HDR, advanced Japanese language features, and stereoscopic 3D. Learn about IMSC’s history, capabilities, operational deployment, implementation experience, and roadmap — and how to get involved.
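
Since IMSC documents are a profile of TTML (XML), they can be generated and inspected with ordinary XML tooling. The sketch below builds and parses a single-caption document; the element names and namespace are standard TTML, but this is a simplified illustration, and the exact profile attributes required for IMSC 1.1 conformance should be taken from the W3C Recommendation, not from this snippet.

```python
import xml.etree.ElementTree as ET

TTML_NS = "http://www.w3.org/ns/ttml"

# A minimal TTML-style subtitle document with one timed caption.
# (Illustrative only; real IMSC 1.1 files carry additional profile
# and styling attributes defined by the W3C Recommendation.)
doc = f"""<?xml version="1.0"?>
<tt xmlns="{TTML_NS}" xml:lang="en">
  <body>
    <div>
      <p begin="00:00:01.000" end="00:00:03.000">Hello, world.</p>
    </div>
  </body>
</tt>"""

root = ET.fromstring(doc)
paragraphs = root.findall(f".//{{{TTML_NS}}}p")
print(paragraphs[0].text)  # prints the caption text: Hello, world.
```

Because the same XML document can flow from authoring tools through packaging (e.g., inside an IMF composition) to consumer devices, a single format like this is what lets the chain converge.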

• ACESNext and the Academy Digital Source Master: Extensions, Enhancements and a Standardized Deliverable (Andy Maltz, Academy of Motion Picture Arts & Sciences; Annie Chang, Universal Pictures)

• Mastering for Multiple Display and Surround Brightness Levels Using the Human Perceptual Model to Insure the Original Creative Intent Is Maintained (Bill Feightner, Colorfront)
Maintaining a consistent creative look across today’s many different cinema and home displays can be a big challenge, especially with the wide disparity in possible display brightness and contrast as well as in viewing environments, or surrounds. Even if it were possible to have individual creative sessions, maintaining creative consistency would be very difficult at best. By using knowledge of how the human visual system works (the perceptual model), processing source content to fit a given display’s brightness and surround can be applied automatically while maintaining the original creative intent with little to no trimming.

• Cloud: Where Are We Now? (Moderator: Erik Weaver, Western Digital)

• Digitizing Workflow – Leveraging Platforms for Success (Roger Vakharia, Salesforce)
While the business of content creation hasn’t changed much over time, the technology enabling processes around production, digital supply chain and marketing resource management among other areas have become increasingly complex. Enabling an agile, platform-based workflow can help in decreasing time and complexity but cost, scale and business sponsorship are often inhibitors in driving success.

Driving efficiency at scale can be daunting but many media leaders have taken the plunge to drive agility across their business process. Join this discussion to learn best practices, integrations, workflows and techniques that successful companies have used to drive simplicity and rigor around their workflow and business process.

• Leveraging Machine Learning in Image Processing (Rich Welsh, Sundog Media Toolkit)
How to use AI (ML and DL networks) to perform “creative” tasks that are boring and that humans spend time doing but don’t want to (working real-world examples included).

• Leveraging AI in Post Production: Keeping Up with Growing Demands for More Content (Van Bedient, Adobe)
Expectations for more and more content continue to increase — yet staffing remains the same or only marginally bigger. How can advancements from machine learning help content creators? AI can be an incredible boon to remove repetitive tasks and tedious steps allowing humans to concentrate on the creative; ultimately AI can provide the one currency creatives yearn for more than anything else: Time.

• Deploying Component-Based Workflows: Experiences from the Front Lines (Moderator: Pierre-Anthony Lemieux, Sandflow Consulting (supported by MovieLabs))
The content landscape is shifting, with an ever-expanding essence and metadata repertoire, viewing experiences, global content platforms and automated workflows. Component-based workflows and formats, such as the Interoperable Master Format (IMF) standard, are being deployed to meet the challenges brought by this shift. Come and join us for a first-hand account from those on the front lines.

• Content Rights, Royalties and Revenue Management via Blockchain (Adam Lesh, SingularDTV)
The blockchain entertainment economy: adding transparency, disintermediating the supply chain, and empowering content creators to own, manage and monetize their IP to create sustainable, personal and connected economies. As we all know, rights and revenue (including royalties, residuals, etc.) management is a major pain point for content creators in the entertainment industry.

Friday, February 15: Main Program Highlights

• Beyond SMPTE Time Code: The TLX Project (Peter Symes)
SMPTE Time Code, ST 12, was developed and standardized in the 1970s to support the emerging field of electronic editing. It has been, and continues to be, a robust standard; its application is almost universal in the media industry, and the standard has found use in other industries. However, ST 12 was developed using criteria and restrictions that are not appropriate today, and it has many shortcomings in today’s environment.

A new project in SMPTE, the Extensible Time Label (TLX) is gaining traction and appears to have the potential to meet a wide range of requirements. TLX is designed to be transport-agnostic and with a modern data structure.

• Blindsided: The Game-Changers We Might Not See Coming (Mark Harrison, Digital Production Partnership)
The world’s number one company for gaming revenue makes as much as Sony and Microsoft combined. It isn’t American or Japanese. Marketeers project that by 2019, video advertising on out-of-home displays will be as important as their spending on TV. Meanwhile, a single US tech giant could buy every franchise of the top five US sports leagues. From its off-shore reserves. And still have $50 billion change.

We all know consumers like OTT video. But that’s the least of it. There are trends in the digital economy that, if looked at globally, could have sudden, and profound, implications for the professional content creation industry. In this eye-widening presentation, Mark Harrison steps outside the western-centric, professional media industry perspective to join the technology, consumer and media dots and ask: what could blindside us if we don’t widen our point of view?

• Interactive Storytelling: Choose What Happens Next (Andy Schuler, Netflix)
Looking to experiment with nonlinear storytelling, Netflix launched its first interactive episodes in 2017. Both in children’s programming, the shows encouraged even the youngest of viewers to touch or click on their screens to control the trajectory of the story (think Choose Your Own Adventure books from the 1980s). How did Netflix overcome some of the more interesting technical challenges of the project (i.e., mastering, encoding, streaming), how was SMPTE IMF used to streamline the process, and why are more formalized mastering practices needed for future projects?

• HPA Engineering Excellence Award Winners (Moderator: Joachim Zell, EFILM, Chair HPA Engineering Excellence Awards; Joe Bogacz, Canon; Paul Saccone, Blackmagic Design; Lance Maurer, Cinnafilm; Michael Flathers, IBM; Dave Norman, Telestream).

Since their launch in 2008, the HPA Awards for Engineering Excellence have honored some of the most groundbreaking, innovative and impactful technologies. Spend a bit of time with a select group of winners and learn about their contributions to the way we work and the industry at large.

• The Navajo Strategic Digital Plan (John Willkie, Luxio)

• Adapting to a COTS Hardware World (Moderator: Stan Moote, IABM)
Transitioning to off-the-shelf hardware is one of the biggest topics on all sides of the industry, from manufacturers, software and service providers through to system integrators, facilities and users themselves. It’s also incredibly uncomfortable. Post production was an early adopter of specialized workstations (e.g. SGI), and has now embraced a further migration up the stack to COTS hardware and IP networks, whether bare metal, virtualized, hybrid or fully cloud based. As the industry deals with the global acceleration of formats, platforms and workflows, what are the limits of COTS hardware when software innovation is continually testing the limits of general-purpose CPUs, GPUs and network protocols? The discussion covers “hidden” issues in using COTS hardware, from the point of view of users and facility operators as well as manufacturers, service providers and systems integrators.

• Academy Software Foundation: Enabling Cross-Industry Collaboration for Open Source Projects (David Morin, Academy Software Foundation)
In August 2018, the Academy of Motion Picture Arts and Sciences and The Linux Foundation launched the Academy Software Foundation (ASWF) to provide a neutral forum for open source software developers in the motion picture and broader media industries to share resources and collaborate on technologies for image creation, visual effects, animation and sound. This presentation will explain why the Foundation was formed and how it plans to increase the quality and quantity of open source contributions by lowering the barrier to entry for developing and using open source software across the industry.