The Hollywood Professional Association (HPA) has announced the schedule for the 2019 HPA Tech Retreat, set for February 11-15. The Tech Retreat, which is celebrating its 25th year, takes place over the course of a week at the JW Marriott Resort & Spa in Palm Desert, California.
The HPA Tech Retreat spans five days of sessions, technology demonstrations and events. During this week, important aspects of production, broadcast, post, distribution and related M&E trends are explored. One of the key differentiators of the Tech Retreat is its strict adherence to a non-commercial focus: marketing-oriented presentations are prohibited except at breakfast roundtables.
“Once again, we’ve received many more submissions than we could use,” says Mark Schubin, the Program Maestro of the HPA Tech Retreat. “To say this year’s were ‘compelling’ is an understatement. We could have programmed a few more days. Rejecting terrific submissions is always the hardest thing we have to do. I’m really looking forward to learning the latest on HDR, using artificial intelligence to restore old movies and machine learning to deal with grunt work, the Academy’s new software foundation, location-based entertainment with altered reality and much more.”
This year’s program is as follows:
Monday February 11: TR-X
eSports: Dropping the Mic on Center Stage
Separate registration required
A half day of targeted panels, speakers and interaction, TR-X will focus on the rapidly growing arena of eSports, with a keynote from Yvette Martinez, CEO – North America of eSports organizer and production company ESL North America.
Tuesday February 12: Supersession
Next-Gen Workflows and Infrastructure: From the Set to the Consumer
Wednesday February 13: Main Program Highlights
• Mark Schubin’s Technology Year in Review
• Washington Update (Jim Burger, Thompson Coburn LLP)
The highly anticipated review of legislation and its impact on our business from a leading Washington attorney.
• Deep Fakes (Moderated by Debra Kaufman, ETCentric; Panelists Marc Zorn, HBO; Ed Grogan, Department of Defense; Alex Zhukov, Video Gorillas)
It might seem nice to be able to use actors long dead, but the concept of “fake news” takes a terrifying new turn with deepfakes, the term that Wikipedia describes as a portmanteau of “deep learning” and “fake.” Although people have been manipulating images for centuries – long before the creation of Adobe Photoshop – the new AI-powered tools allow the creation of very convincing fake audio and video.
• The Netflix Media Database (Rohit Puri, Netflix)
An optimized user interface, meaningful personalized recommendations, efficient streaming and a high-quality catalog of content are the principal factors that define the Netflix end-user experience. A myriad of business workflows of varying complexity come together to realize this experience. Under the covers, they use computationally expensive computer-vision, audio-processing and natural-language-processing media-analysis algorithms. These algorithms generate temporally and spatially dynamic metadata that is shared across the various use cases. The Netflix Media Database (NMDB) is a multi-tenant data system used to persist this deeply technical metadata about various media assets at Netflix, and it enables querying that metadata at scale. Its “shared nothing” distributed database architecture allows NMDB to store large amounts of media timeline data, forming the backbone for various Netflix media-processing systems.
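The “temporally dynamic metadata” the session describes can be pictured as analysis results tied to intervals on an asset’s media timeline. The sketch below is purely illustrative (the class and field names are invented, not the actual NMDB schema): it shows the kind of timeline-overlap query such a media database must answer at scale.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TimedLabel:
    """Hypothetical record: one analysis result on a media timeline."""
    start_ms: int      # interval start on the asset's timeline
    end_ms: int        # interval end
    label: str         # e.g. output of a computer-vision algorithm
    confidence: float

def query_interval(docs: List[TimedLabel], t0: int, t1: int) -> List[TimedLabel]:
    """Return every label whose interval overlaps the window [t0, t1)."""
    return [d for d in docs if d.start_ms < t1 and d.end_ms > t0]

docs = [
    TimedLabel(0, 4000, "opening-titles", 0.98),
    TimedLabel(4000, 9000, "dialogue", 0.91),
    TimedLabel(8500, 12000, "explosion", 0.87),
]
# Both "dialogue" and "explosion" overlap the 8-10 s window.
print([d.label for d in query_interval(docs, 8000, 10000)])
```

A production system would index these intervals (e.g. in a shared-nothing distributed store, as the talk describes) rather than scan a list, but the query semantics are the same.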
• AI Film Restoration at 12 Million Frames per Second (Alex Zhukov, Video Gorillas)
• Is More Media Made for Subways Than for TV and Cinema? (and does it Make More $$$?) (Andy Quested, BBC)
• Broadcasters Panel (Moderator: Matthew Goldman, MediaKind)
• CES Review (Peter Putman, ROAM Consulting)
Pete Putman traveled to Las Vegas to see what’s new in the world of consumer electronics and returns to share his insights with the HPA Tech Retreat audience.
• 8K: Whoa! How’d We Get There So Quickly? (Peter Putman, ROAM Consulting)
• Issues with HDR Home Video Deliverables for Features (Josh Pines, Technicolor)
• HDR “Mini” Session
• HDR Intro: Seth Hallen, Pixelogic
• Ambient Light Compensation for HDR Presentation: Don Eklund, Sony Pictures Entertainment
• HDR in Anime: Haruka Miyagawa, Netflix
• Pushing the Limits of Motion Appearance in HDR: Richard Miller, Pixelworks
• Downstream Image Presentation Management for Consumer Displays:
• Moderator: Michael Chambliss, International Cinematographers Guild
• Michael Keegan, Netflix
• Annie Chang, UHD Alliance
• Steven Poster, ASC, International Cinematographers Guild
• Toshi Ogura, Sony
• Solid Cinema Screens with Front Sound: Do They Work? (Julien Berry, Delair Studios)
Direct-view displays bring high image quality to the cinema but suffer from a low pixel fill factor that can lead to heavy moiré and aliasing patterns. Cinema projectors have a much better fill factor, which avoids most of those issues, though some moiré can still be produced by the screen perforations needed for the audio. With the advent of high-contrast, EDR and soon HDR image quality in cinema, screen perforations affect the perceived brightness and contrast of the same image, yet the effect has never been quantified, since perforations have always been needed for cinema audio. Now that high-quality cinema audio can be delivered without a perforated screen, it is possible to quantify this effect.
Thursday, February 14: Main Program Highlights
• A Study Comparing Synthetic Shutter and HFR for Judder Reduction (Ianik Beitzel and Aaron Kuder, ARRI and Stuttgart Media University (HdM))
• Using Drones and Photogrammetry Techniques to Create Detailed (High Resolution) Point Cloud Scenes (Eric Pohl, Singularity Imaging)
Drone aerial photography may be used to create multiple geotagged images that are processed to create a 3D point cloud set of a ground scene. The point cloud may be used for production previsualization or background creation for videogames or VR/AR new-media products.
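The geometric step at the heart of photogrammetry is back-projecting matched pixels, with depth recovered from multi-view matching, into 3D points. This is a minimal sketch, not a production pipeline: the pinhole-camera parameters (fx, fy, cx, cy) are assumed values for a 1280x720 frame, and a real drone survey would also transform each point by the geotagged camera pose.

```python
def pixel_to_point(u, v, depth, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0):
    """Back-project pixel (u, v) at a given depth into camera-frame 3D
    using the pinhole camera model."""
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return (x, y, depth)

# A few matched pixels with estimated depths become a (tiny) point cloud.
matches = [(640, 360, 50.0), (740, 360, 52.0), (640, 460, 51.0)]
cloud = [pixel_to_point(u, v, d) for (u, v, d) in matches]
print(cloud[0])  # the principal-point pixel maps straight down the optical axis
```

Repeating this over thousands of geotagged frames, then merging the per-camera points into one world frame, yields the dense point clouds used for previsualization or VR/AR backgrounds.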
• Remote and Mobile Production Panel (Moderator: Mark Chiolis, Mobile TV Group; Wolfgang Schram, PRG; Scott Rothenberg, NEP)
With a continuing appetite for content from viewers of all the major networks, as well as niche networks, streaming services, web, eGames/eSports and venue and concert-tour events, the battle is on to make it possible to watch almost every sporting and entertainment event that takes place, all live as it is happening. Key members of the remote and mobile community explore what’s new and what workflows are behind the content production and delivery in today’s fast-paced environments. Expect to hear about new REMI applications, IP workflows, AI, UHD/HDR, eGames, and eSports.
• IMSC 1.1: A Single Subtitle and Caption Format for the Entertainment Chain (Pierre-Anthony Lemieux, Sandflow Consulting (supported by MovieLabs); Dave Kneeland, Fox)
IMSC is a W3C standard for worldwide subtitles/captions, and the result of an international collaboration. The initial version of IMSC (IMSC 1) was published in 2016, and has been widely adopted, including by SMPTE, MPEG, ATSC and DVB. With the recent publication of IMSC 1.1, we now have the opportunity to converge on a single subtitle/caption format across the entire entertainment chain, from authoring to consumer devices. IMSC 1.1 improves on IMSC 1 with support for HDR, advanced Japanese language features, and stereoscopic 3D. Learn about IMSC’s history, capabilities, operational deployment, implementation experience, and roadmap — and how to get involved.
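IMSC is a profile of W3C TTML, an XML vocabulary, so a minimal subtitle document can be sketched with the Python standard library alone. The timing values and cue text below are illustrative, and a real IMSC 1.1 file would also declare styling and layout; this only shows the basic document shape.

```python
import xml.etree.ElementTree as ET

TTML = "http://www.w3.org/ns/ttml"                  # TTML namespace (per W3C)
XML = "http://www.w3.org/XML/1998/namespace"        # built-in xml: namespace
ET.register_namespace("", TTML)

# Root <tt> element with a language tag, then body/div/p as in TTML.
tt = ET.Element(f"{{{TTML}}}tt", {f"{{{XML}}}lang": "en"})
div = ET.SubElement(ET.SubElement(tt, f"{{{TTML}}}body"), f"{{{TTML}}}div")
p = ET.SubElement(div, f"{{{TTML}}}p",
                  {"begin": "00:00:01.000", "end": "00:00:03.500"})
p.text = "A subtitle cue, on screen for 2.5 seconds."

doc = ET.tostring(tt, encoding="unicode")
print(doc)
```

The same `<p begin end>` cue structure carries through authoring, mastering (e.g. inside an IMF package) and consumer delivery, which is what makes a single convergent format plausible.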
• ACESNext and the Academy Digital Source Master: Extensions, Enhancements and a Standardized Deliverable (Andy Maltz, Academy of Motion Picture Arts & Sciences; Annie Chang, Universal Pictures)
• Mastering for Multiple Display and Surround Brightness Levels Using the Human Perceptual Model to Ensure the Original Creative Intent Is Maintained (Bill Feightner, Colorfront)
Maintaining a consistent creative look across today’s many different cinema and home displays can be a big challenge, especially with the wide disparity in possible display brightness and contrast as well as in viewing environments, or surrounds. Even if individual creative sessions were possible, maintaining creative consistency would be difficult at best. By drawing on knowledge of how the human visual system works (the perceptual model), processing that fits source content to a given display’s brightness and surround can be applied automatically while maintaining the original creative intent with little to no trimming.
• Cloud: Where Are We Now? (Moderator: Erik Weaver, Western Digital)
• Digitizing Workflow – Leveraging Platforms for Success (Roger Vakharia, Salesforce)
While the business of content creation hasn’t changed much over time, the technologies enabling processes around production, the digital supply chain and marketing resource management, among other areas, have become increasingly complex. Enabling an agile, platform-based workflow can help decrease time and complexity, but cost, scale and business sponsorship are often inhibitors to success.
Driving efficiency at scale can be daunting but many media leaders have taken the plunge to drive agility across their business process. Join this discussion to learn best practices, integrations, workflows and techniques that successful companies have used to drive simplicity and rigor around their workflow and business process.
• Leveraging Machine Learning in Image Processing (Rich Welsh, Sundog Media Toolkit)
How to use AI (ML and DL networks) to perform “creative” tasks that are boring, the tasks humans spend time doing but would rather not (working real-world examples included).
• Leveraging AI in Post Production: Keeping Up with Growing Demands for More Content (Van Bedient, Adobe)
Expectations for more and more content continue to increase — yet staffing remains the same or only marginally bigger. How can advancements from machine learning help content creators? AI can be an incredible boon to remove repetitive tasks and tedious steps allowing humans to concentrate on the creative; ultimately AI can provide the one currency creatives yearn for more than anything else: Time.
• Deploying Component-Based Workflows: Experiences from the Front Lines (Moderator: Pierre-Anthony Lemieux, Sandflow Consulting (supported by MovieLabs))
The content landscape is shifting, with an ever-expanding essence and metadata repertoire, viewing experiences, global content platforms and automated workflows. Component-based workflows and formats, such as the Interoperable Master Format (IMF) standard, are being deployed to meet the challenges brought by this shift. Come and join us for a first-hand account from those on the front lines.
• Content Rights, Royalties and Revenue Management via Blockchain (Adam Lesh, SingularDTV)
The blockchain entertainment economy: adding transparency, disintermediating the supply chain, and empowering content creators to own, manage and monetize their IP to create sustainable, personal and connected economies. As we all know, rights and revenue (including royalties, residuals, etc.) management is a major pain point for content creators in the entertainment industry.
Friday, February 15: Main Program Highlights
• Beyond SMPTE Time Code: The TLX Project (Peter Symes)
SMPTE Time Code, ST 12, was developed and standardized in the 1970s to support the emerging field of electronic editing. It has been, and continues to be, a robust standard; its application is almost universal in the media industry, and the standard has found use in other industries. However, ST 12 was developed using criteria and restrictions that are not appropriate today, and it has many shortcomings in today’s environment.
A new SMPTE project, the Extensible Time Label (TLX), is gaining traction and appears to have the potential to meet a wide range of requirements. TLX is designed to be transport-agnostic and built around a modern data structure.
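To see the constraints TLX is meant to lift, it helps to look at classic ST 12 arithmetic: four two-digit fields (HH:MM:SS:FF), which among other things presumes an integer frame rate and caps addressing at 24 hours. A minimal non-drop-frame sketch:

```python
def frames_to_timecode(frame_count: int, fps: int) -> str:
    """Convert an absolute frame count to HH:MM:SS:FF (non-drop-frame).
    Each field is two digits, as in SMPTE ST 12; the hours field wraps at 24."""
    ff = frame_count % fps
    ss = (frame_count // fps) % 60
    mm = (frame_count // (fps * 60)) % 60
    hh = (frame_count // (fps * 3600)) % 24
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def timecode_to_frames(tc: str, fps: int) -> int:
    """Inverse conversion: HH:MM:SS:FF back to an absolute frame count."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

print(frames_to_timecode(90210, 30))  # -> 00:50:07:00
```

Drop-frame counting, fractional rates such as 29.97, and rates above 60 fps all require workarounds in this scheme, which is the kind of limitation a transport-agnostic, extensible label is intended to address.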
• Blindsided: The Game-Changers We Might Not See Coming (Mark Harrison, Digital Production Partnership)
The world’s number one company by gaming revenue makes as much as Sony and Microsoft combined. It isn’t American or Japanese. Marketers project that by 2019, spending on video advertising for out-of-home displays will be as important as spending on TV. Meanwhile, a single US tech giant could buy every franchise in the top five US sports leagues. From its off-shore reserves. And still have $50 billion in change.
We all know consumers like OTT video. But that’s the least of it. There are trends in the digital economy that, if looked at globally, could have sudden, and profound, implications for the professional content creation industry. In this eye-widening presentation, Mark Harrison steps outside the western-centric, professional media industry perspective to join the technology, consumer and media dots and ask: what could blindside us if we don’t widen our point of view?
• Interactive Storytelling: Choose What Happens Next (Andy Schuler, Netflix)
Looking to experiment with nonlinear storytelling, Netflix launched its first interactive episodes in 2017. Both were children’s titles, encouraging even the youngest viewers to touch or click on their screens to control the trajectory of the story (think Choose Your Own Adventure books from the 1980s). How did Netflix overcome some of the more interesting technical challenges of the project (i.e., mastering, encoding, streaming), how was SMPTE IMF used to streamline the process, and why are more formalized mastering practices needed for future projects?
• HPA Engineering Excellence Award Winners (Moderator: Joachim Zell, EFILM, Chair HPA Engineering Excellence Awards; Joe Bogacz, Canon; Paul Saccone, Blackmagic Design; Lance Maurer, Cinnafilm; Michael Flathers, IBM; Dave Norman, Telestream).
Since their launch in 2008, the HPA Awards for Engineering Excellence have honored some of the most groundbreaking, innovative and impactful technologies. Spend a bit of time with a select group of winners and their contributions to the way we work and the industry at large.
• The Navajo Strategic Digital Plan (John Willkie, Luxio)
• Adapting to a COTS Hardware World (Moderator: Stan Moote, IABM)
Transitioning to off-the-shelf hardware is one of the biggest topics on all sides of the industry, from manufacturers, software and service providers through to system integrators, facilities and users themselves. It’s also incredibly uncomfortable. Post production was an early adopter of specialized workstations (e.g. SGI), and has now embraced a further migration up the stack to COTS hardware and IP networks, whether bare metal, virtualized, hybrid or fully cloud based. As the industry deals with the global acceleration of formats, platforms and workflows, what are the limits of COTS hardware when software innovation is continually testing the limits of general-purpose CPUs, GPUs and network protocols? The panel covers “hidden” issues in using COTS hardware, from the point of view of users and facility operators as well as manufacturers, service providers and systems integrators.
• Academy Software Foundation: Enabling Cross-Industry Collaboration for Open Source Projects (David Morin, Academy Software Foundation)
In August 2018, the Academy of Motion Picture Arts and Sciences and The Linux Foundation launched the Academy Software Foundation (ASWF) to provide a neutral forum for open source software developers in the motion picture and broader media industries to share resources and collaborate on technologies for image creation, visual effects, animation and sound. This presentation will explain why the Foundation was formed and how it plans to increase the quality and quantity of open source contributions by lowering the barrier to entry for developing and using open source software across the industry.