
SMPTE ST 2110 enables IP workflows

By Tom Coughlin

At IBC2017 and this year’s SMPTE Technical Conference, IP-based workflows featured prominently in both interoperability demonstrations and conference sessions. Clearly, proprietary media networking will be supplanted by IP-based workflows. This will enable new equipment economies and open up new opportunities for using and repurposing media. IP workflows will also affect the way we store and use digital content, and thus the storage systems where that content lives.

SMPTE has just ratified the ST 2110 standards for IP transport in media workflows. The standards put video, audio and ancillary data into separate routable streams, as shown in the figure below. Uncompressed video streams are covered by ST 2110-20, PCM audio streams by ST 2110-30 and ancillary data by ST 2110-40. Other parts of the suite cover traffic shaping of uncompressed video (ST 2110-21) and AES3 transparent transport (ST 2110-31), while ST 2110-50 allows integration with the older ST 2022-6 specification, which covers legacy SDI over IP.
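
To make the separation concrete, here is a minimal Python sketch (not drawn from the standard documents) of one program described as three independently routable essence streams. The multicast addresses and port are hypothetical placeholders; the 90 kHz video and 48 kHz audio RTP clock rates are the values commonly associated with these stream types.

```python
from dataclasses import dataclass

# A sketch of how ST 2110 splits one program into separate routable
# essence streams. Addresses and port are hypothetical placeholders.

@dataclass
class EssenceStream:
    name: str            # human-readable label
    standard: str        # governing ST 2110 part
    multicast_addr: str  # hypothetical group address
    udp_port: int
    rtp_clock_hz: int    # RTP clock rate used for timestamps

program = [
    EssenceStream("video", "ST 2110-20", "239.1.1.1", 5004, 90_000),
    EssenceStream("audio", "ST 2110-30", "239.1.1.2", 5004, 48_000),
    EssenceStream("anc",   "ST 2110-40", "239.1.1.3", 5004, 90_000),
]

for s in program:
    # Each essence is its own RTP/UDP multicast stream and can be
    # routed, switched or subscribed to independently of the others.
    print(f"{s.name}: {s.standard} -> {s.multicast_addr}:{s.udp_port}")
```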

The separate streams carry timestamps, based on the ST 2059 timing standard, that allow them to be properly realigned when they are recombined. Each stream also contains metadata that tells the receiver how to interpret its contents. The uncompressed video stream supports images up to 32K x 32K, HDR and all common color systems and formats.
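
As a rough illustration of that alignment, and assuming simplified handling of the PTP epoch and the 32-bit timestamp wrap that a real receiver must manage, the sketch below converts RTP timestamps from two different clock rates onto one shared timeline:

```python
# Streams stamped against different RTP clock rates can be compared
# once converted to seconds, because all timestamps derive from the
# same PTP-based epoch under ST 2059. Wrap handling is simplified.

RTP_WRAP = 2 ** 32  # RTP timestamps are 32-bit and wrap around

def rtp_to_seconds(rtp_ts: int, clock_hz: int) -> float:
    """Convert an RTP timestamp to seconds on the shared timeline."""
    return (rtp_ts % RTP_WRAP) / clock_hz

# A video packet on the 90 kHz clock and an audio packet on the 48 kHz
# clock refer to the same instant if their converted times match.
video_time = rtp_to_seconds(2_700_000, 90_000)  # 30.0 s
audio_time = rtp_to_seconds(1_440_000, 48_000)  # 30.0 s
print(video_time == audio_time)  # True -> the streams are aligned
```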

The important thing about these IP standards is that they allow the use of conventional Ethernet cabling rather than special proprietary cables, which saves a lot of money on hardware. In addition, an IP-based workflow allows easy ingest into a core IP network and distribution of content using IP-based broadcast, telco, cable and broadband technologies, as well as satellite channels. As most consumers already have IP content access, these IP networks connect directly to consumer equipment. The image below, from an Avid presentation by Shailendra Mathur at SMPTE 2017, illustrates the workflow.

At IBC and the SMPTE 2017 Conference there were interoperability demonstrations. Although the IBC interop demo had many more participants, the SMPTE demo was still extensive. The photo below shows the SMPTE interoperability demonstration setup.

As many modern network storage systems, whether file- or object-based, use Ethernet connectivity, having the rest of the workflow on an IP network makes it easier to move data through the workflow to and from digital storage. Since access to cloud-based assets is also through IP-based networks, and these can feed CDNs and other distribution networks, on-premises and cloud storage interact through IP networks and can be used to support working storage, archives and content distribution libraries.

IP workflows combined with IP-based digital storage provide end-to-end processing and storage of data. This brings hardware economies and access to the large body of software built to manage and monitor IP flows, helping to optimize a media production and distribution system. Avoiding the overhead of converting from one type of network to another reduces overall system complexity and improves efficiency, resulting in faster projects and easier troubleshooting when problems arise.


Tom Coughlin is president of Coughlin Associates. He is the founder and organizer of the annual Storage Visions Conference as well as the Creative Storage Conference. He has also been the general chairman of the annual Flash Memory Summit.

Geena Davis Institute CEO to speak at SMPTE’s Women in Tech lunch

Madeline Di Nonno, CEO of the Geena Davis Institute on Gender in Media, will be speaking at the annual women in technology luncheon, presented by SMPTE and HPA Women in Post on October 23 and held in conjunction with the SMPTE 2017 Annual Technical Conference & Exhibition (SMPTE 2017).

Di Nonno will be in conversation with Kari Grubin, co-chair of HPA Women in Post. The luncheon will be held at The Vantage Room on the fifth level of the Hollywood & Highland Center in Hollywood.

The research performed by the Geena Davis Institute analyzes and tracks how women and girls are portrayed in media, and how negative gender stereotypes can influence cultural and social behaviors and beliefs. Di Nonno will share the latest findings from the Institute’s new machine learning research tool, GD-IQ: the Geena Davis Inclusion Quotient.

“What we see onscreen greatly influences our views on society,” says Di Nonno. “The gender disparity in media reinforces unconscious gender bias off screen, behind-the-scenes, and in the real world. According to our research, positive portrayals in media can inspire women and girls to pursue certain careers in STEM as well as furthering their education and leaving abusive relationships. Our mission is to change the media landscape to reflect our growing intersectionality in society. Our motto is ‘If you can see it, you can be it.’ Clearly, there is work to do, but I look forward to speaking with a group of women who are doing it.”

Di Nonno brings 30 years of international experience to her responsibilities at the Geena Davis Institute, where she leads strategic direction, research, education, advocacy, and financial and operational activities. Her past roles have included president/CEO of On the Scene Productions; executive positions for Anchor Bay Entertainment/Starz Media and EVP/GM for Nielsen EDI; SVP, Marketing Alliances and Digital Media at the Hallmark Channel; and VP, Universal Studios Home Video.

Di Nonno began her career at ABC Television Network in corporate publicity. In many of the organizations she has been part of, Di Nonno has led groundbreaking global initiatives in digital technology.

The Women in Technology luncheon is an annual event held in conjunction with the SMPTE Annual Technical Conference & Exhibition. Last year, Victoria Alonso, executive VP of physical production for Marvel Studios, was the featured speaker. Previous speakers also include Cheryl Boone Isaacs, Michelle Munson and Wendy Aylsworth.

Is television versioning about to go IMF?

By Andy Wilson

If you’ve worked in the post production industry for the last 20 years, you’ll have seen the exponential growth of feature film versioning. What was once a language track dub, a subtitled version or a country-specific compliance edit has grown into a versioning industry that has to feed a voracious array of territories, devices, platforms and formats — from airplane entertainment systems to iTunes deliveries.

Of course, this rise in movie versioning has been helped by the shift over the last 10 years to digital cinema and file-based working. In 2013, SMPTE ratified ST 2067-2, which created the Interoperable Master Format (IMF). IMF was designed to manage the complexity of storing high-quality master rushes inside a file structure flexible enough to generate multiple variants of a film, by constraining both what is included in each output and the desired output formats.

Like any workflow and format change, IMF has taken time to be adopted, but it is now becoming the preferred way to share high-quality file masters between media organizations. These masters are all delivered in the JPEG 2000 (J2K) codec to support cinema resolutions and playback technologies.

Technologists in the broadcast community have been monitoring the growth in popularity and flexibility of IMF, with its distinctive solution to the challenge of multiple versioning. Most broadcasters have moved away from tape-based playout and are instead using air-ready playout files. These are medium-bitrate files (50-100 Mb/s), derived from high-quality rushes, that can be used on playout servers to create broadcast streams. The most widespread of these is the native XDCAM file format, but it is fast being overtaken by the AS-11 format. AS-11 has proved very popular in the United Kingdom, where all major broadcasters switched to AS-11 UK DPP in 2014, and it is currently rolling out in the US via the AS-11 X8 and X9 variants. However, these remain air-ready playout files, output from the 600+ Mb/s ProRes and RAW files used in high-end productions. AS-11 brings some uniformity, but it doesn’t solve the versioning challenge.

Versioning is rapidly becoming as big an issue for high-end broadcast content as for feature films. Broadcasters are now seeing the sales lifecycle of some of their programs running for more than 10 years. The BBC’s Planet Earth is a great example of this, with dozens of versions being made over several years. So the need to keep high-quality files for re-versioning for new broadcast and online deliveries has become increasingly important. It is crucial for long-tail sales revenue, and productions are starting to invest in higher-resolution recordings for exactly this reason.

So, as the international high-end television market continues to grow, producers are having to look at ways to share much higher-quality assets than air-ready files. This is where IMF offers significant opportunity for efficiencies in the broadcast and wider media market, and why it has the attention of broadcasters such as the BBC and Sky. Major broadcasters like these have been working with global partners through the Digital Production Partnership (DPP) to help develop a new specification of IMF designed specifically for television and online mastering.

The DPP, in partnership with the North American Broadcasters Association (NABA) and the European Broadcasting Union (EBU), has been exploring the business requirements for a mastering format for broadcasting. The outcome of this work was published in June 2017, and can be downloaded here.

The work explored three different user requirements: program acquisitions (incoming), program sales (outgoing) and archive. The sales and acquisition of content can be significantly transformed by the ability to build new versions on the fly, via the Composition Playlist (CPL) and an Output Profile List (OPL). The ability to archive master rushes in a suitably high-quality package will be extremely valuable to broadcast archives. The addition of the ability to store ProRes as part of an IMF package is also welcome, as many broadcast archives are already full of ProRes material.
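
To illustrate the CPL idea, in which a version is an ordered list of pointers into stored track files rather than newly rendered essence, here is a simplified Python sketch. The element names are loosely modeled on ST 2067-3 and the asset IDs are hypothetical; this is not a schema-valid CPL.

```python
import xml.etree.ElementTree as ET

# A new version is described as references into existing master track
# files, so no essence is duplicated. Names and IDs are illustrative.

cpl = ET.Element("CompositionPlaylist")
ET.SubElement(cpl, "ContentTitle").text = "Example Episode (German TV version)"
segment = ET.SubElement(cpl, "SegmentList")

for asset_id, entry_point, duration in [
    ("urn:uuid:video-master-trackfile", 0, 1440),  # frames into the video master
    ("urn:uuid:german-dub-trackfile",   0, 1440),  # alternate audio track file
]:
    res = ET.SubElement(segment, "Resource")
    ET.SubElement(res, "TrackFileId").text = asset_id
    ET.SubElement(res, "EntryPoint").text = str(entry_point)
    ET.SubElement(res, "SourceDuration").text = str(duration)

print(ET.tostring(cpl, encoding="unicode"))
```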

The EBU-QC group has already started to look at how to manage program quality from a broadcast IMF package, and how technical assessments can be carried out during the outputting of materials, as well as on the component assets. This work paves the way for some innovative solutions to future QC checks, whether carried out locally in the post suite or in the cloud.

The DPP will be working with SMPTE and its partners to fast-track a constrained version of IMF, ready for use in the broadcast and online delivery market in the first half of 2018.

As OTT video services rely heavily on the ability to output multiple different versions of the source content, this new variant of IMF could play a particularly important role in automatic content versioning and automated processes for file creation and delivery to distribution platforms — not to mention in advertising, where commercials are often re-versioned for multiple territories and states.

The DPP’s work will include the ability to add ProRes- and H.264-derived materials into the IMF package, as well as the inclusion of delivery-specific metadata. The DPP is working to deliver proof-of-concept presentations for IBC 2017 and will host manufacturer and supplier briefing days and plugfests as work progresses on the draft version of the IMF specification. It is hoped that the work will be completed in time for the IMF specification for broadcast and online to be integrated into products by NAB 2018.

It’s exciting to think about how IMF and Internet-enabled production and distribution tools will work together as part of the architecture of the future content supply chain. This supply chain will enable media companies to respond more quickly and effectively to the ever-growing and changing demands of the consumer. The DPP sees this shift to more responsive operational design as the key to success for media suppliers in the years ahead.


Andy Wilson is head of business development at DPP.

SMPTE’s ETCA conference takes on OTT, cloud, AR/VR, more

SMPTE has shared program details for its Entertainment Technology in the Connected Age (ETCA) conference, taking place in Mountain View, California, May 8-9 at the Microsoft Silicon Valley Campus.

Called “Redefining the Entertainment Experience,” this year’s conference will explore emerging technologies’ impact on current and future delivery of compelling connected entertainment experiences.

Bob DeHaven, GM of worldwide communications & media at Microsoft Azure, will present the first conference keynote, titled “At the Edge: The Future of Entertainment Carriage.” The growth of on-demand programming and mobile applications, the proliferation of the cloud and the advent of the “Internet of things” demand that video content be available closer to the end user to improve both availability and the quality of the experience.

DeHaven will discuss the relationships taking shape to embrace these new requirements and will explore the roles network providers, content delivery networks (CDNs), network optimization technologies and cloud platforms will play in achieving the industry’s evolving needs.

Hanno Basse, chief technical officer at Twentieth Century Fox Film, will present “Next-Generation Entertainment: A View From the Fox.” Fox distributes content via multiple outlets, ranging from cinema to Blu-ray, over-the-top (OTT) and even VR. Basse will share his views on the technical challenges of enabling next-generation entertainment in a connected age and how Fox plans to address them.

The first conference session, “Rethinking Content Creation and Monetization in a Connected Age,” will focus on multiplatform production and monetization using the latest creation, analytics and search technologies. The session “Is There a JND in It for Me?” will take a second angle, exploring what new content creation, delivery and display technology innovations will mean for the viewer. Panelists will discuss the parameters required to achieve original artistic intent while maintaining a just noticeable difference (JND) quality level for the consumer viewing experience.

“Video Compression: What’s Beyond HEVC?” will explore emerging techniques and innovations, outlining evolving video coding techniques and their ability to handle new types of source material, including HDR and wide color gamut content, as well as video for VR/AR.

Moving from content creation and compression into delivery, “Linear Playout: From Cable to the Cloud” will discuss the current distribution landscape, looking at the consumer apps, smart TV apps, and content aggregators/curators that are enabling cord-cutters to watch linear television, as well as the new business models and opportunities shaping services and the consumer experience. The session will explore tools for digital ad insertion, audience measurement and monetization while considering the future of cloud workflows.

“Would the Internet Crash If Everyone Watched the Super Bowl Online?” will shift the discussion to live streaming, examining the technologies that enable today’s services as well as how technologies such as transparent caching, multicast streaming, peer-assisted delivery and User Datagram Protocol (UDP) streaming might enable live streaming at a traditional broadcast scale and beyond.
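
As a small taste of the plumbing behind multicast delivery, the sketch below uses only the Python standard library to join a multicast group and read datagrams. The group address and port are hypothetical, and a real service would layer RTP, forward error correction and access control on top.

```python
import socket
import struct

# Join a multicast group and read a few datagrams. One upstream copy
# of the stream serves every subscribed receiver, which is what makes
# broadcast-scale live delivery over IP plausible.

GROUP, PORT = "239.0.0.1", 5004  # hypothetical group address and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Ask the kernel to join the group on any local interface.
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

for _ in range(5):  # read a handful of packets, then exit
    packet, addr = sock.recvfrom(2048)
    print(f"{len(packet)} bytes from {addr}")
```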

“Adaptive Streaming Technology: Entertainment Plumbing for the Web” will focus specifically on innovative technologies and standards that will enable the industry to overcome the Internet’s inconsistent bandwidth and delivered quality.
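
The core adaptive-streaming decision can be sketched in a few lines: pick the highest rendition the measured throughput can sustain. The bitrate ladder and safety margin below are hypothetical, and real DASH/HLS players also weigh buffer occupancy and switching cost.

```python
# A toy bitrate-ladder selector. LADDER_KBPS is a hypothetical set of
# renditions; real players adapt continuously as throughput varies.

LADDER_KBPS = [400, 1200, 2500, 5000, 8000]

def pick_rung(throughput_kbps: float, safety: float = 0.8) -> int:
    """Return the highest rung at or below a safety margin of throughput."""
    budget = throughput_kbps * safety
    usable = [r for r in LADDER_KBPS if r <= budget]
    return usable[-1] if usable else LADDER_KBPS[0]

print(pick_rung(6500))  # -> 5000, the top rung within 80% of 6.5 Mb/s
```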

“IP and Thee: What’s New in 2017?” will delve into the upgrade to Internet Protocol infrastructure and the impact of next-generation systems such as the ATSC 3.0 digital television broadcast system, the Digital Video Broadcast (DVB) suite of internationally accepted open standards for digital television, and fifth-generation mobile networks (5G wireless) on Internet-delivered entertainment services.

Moving into the cloud, “Weather Forecast: Clouds and Partly Scattered Fog in Your Future” examines how local networking topologies, dubbed “the fog,” are complementing the cloud by enabling content delivery and streaming via less traditional — and often wireless — communication channels such as 5G.

“Giving Voice to Video Discovery” will highlight the ways in which voice is being added to pay television and OTT platforms to simplify searches.

In a session that explores new consumption models, “VR From Fiction to Fact” will examine current experimentation with VR technology, emerging use cases across mobile devices and high-end headsets, and strategies for addressing the technical demands of this immersive format.

You can register for the conference here.

Netflix's Stranger Things

AES LA Section & SMPTE Hollywood: Stranger Things sound

By Mel Lambert

The most recent joint AES/SMPTE meeting at the Sportsmen’s Lodge in Studio City showcased the talents of the post production crew that worked on the recent Netflix series Stranger Things at Technicolor’s facilities in Hollywood.

Over 160 attendees came to hear how supervising sound editor Brad North, sound designer Craig Henighan, sound effects editor Jordan Wilby, music editor David Klotz and dialog/music re-recording mixer Joe Barnett worked their magic on last year’s eight-episode Season 1. (Sadly, effects re-recording mixer Adam Jenkins was unable to attend the gathering.) Stranger Things, from co-creators Matt Duffer and Ross Duffer, is scheduled to return in mid-year for Season 2.

L-R: Jordan Wilby, Brad North, Craig Henighan, Joe Barnett, David Klotz and Mel Lambert. Photo Credit: Steve Harvey.

Attendees heard how the crew developed each show’s unique 5.1-channel soundtrack, from editorial through re-recording — including an ‘80s-style, synth-based music score, from Austin-based composers Kyle Dixon and Michael Stein, that is key to the show’s look and feel — courtesy of a full-range surround sound playback system supplied by Dolby Labs.

“We drew our inspiration — subconsciously, at least — from sci-fi films like Alien, The Thing and Predator,” Henighan explained. The designer also revealed how he developed a characteristic sound for the monster that appears in key scenes. “The basic sound is that of a seal,” he said. “But it wasn’t as simple as just using a seal vocal, although it did provide a hook — an identifiable sound around which I could center the rest of the monster sounds. It’s fantastic to take what is normally known as a nice, light, fun-loving sound and use it in a terrifying way!” Tim Prebble, a New Zealand-based sound designer and owner of the sound effects company Hiss and A Roar, offers a range of libraries, including SD003 Seal Vocals.

Gear used includes Avid Pro Tools DAWs — everybody works in the box — and an Avid 64-fader, dual-operator S6 console at the Technicolor Seward Stage. The composers use Apple Logic Pro to record and edit their AAF-format music files.


Mel Lambert is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

 

SMPTE: The convergence of toolsets for television and cinema

By Mel Lambert

While the annual SMPTE Technical Conferences normally put a strong focus on things visual, there is no denying that these gatherings offer a number of interesting sessions for sound pros from the production and post communities. According to Aimée Ricca, who oversees marketing and communications for SMPTE, pre-registration included “nearly 2,500 registered attendees hailing from all over the world.” This year’s conference, held at the Loews Hollywood Hotel and Ray Dolby Ballroom from October 24-27, also attracted more than 108 exhibitors in two exhibit halls.

Setting the stage for the 2016 celebration of SMPTE’s Centenary, opening keynotes addressed the dramatic changes that have occurred within the motion picture and TV industries during the past 100 years, particularly with the advent of multichannel immersive sound. The two co-speakers — SMPTE president Robert Seidel and filmmaker/innovator Doug Trumbull — chronicled the advances in audio playback since, respectively, the advent of TV broadcasting after WWII and the introduction of film soundtracks in 1927 with The Jazz Singer.

Robert Seidel

ATSC 3.0
Currently VP of CBS Engineering and Advanced Technology, with responsibility for TV technologies at CBS and the CW networks, Seidel headed the team that helped WRAL-HD, the CBS affiliate in Raleigh, North Carolina, become the first TV station to transmit HDTV, in July 1996. The transition included adding the ability to carry 5.1-channel sound using Advanced Television Systems Committee (ATSC) standards and Dolby AC-3 encoding.

The 45th Grammy Awards ceremony, broadcast by CBS in February 2003, marked the first scheduled HD broadcast with a 5.1 soundtrack. The emerging ATSC 3.0 standard reportedly will provide increased bandwidth efficiency and compression performance. The drawback is the lack of backward compatibility with current technologies, resulting in a need for new set-top boxes and TV receivers.

As Seidel explained, the upside for ATSC 3.0 will be immersive soundtracks, using either Dolby AC-4 or MPEG-H coding, together with audio objects that can carry alternate dialog and commentary tracks, plus other consumer features to be refined with companion 4K UHD, high dynamic range and high frame rate images. In June, WRAL-HD launched an experimental ATSC 3.0 channel carrying the station’s programming in 1080p with 4K segments, while in mid-summer South Korea adopted ATSC 3.0 and plans to begin broadcasts with immersive audio and object-based capabilities next February in anticipation of hosting the 2018 Winter Olympics. The 2016 World Series games between the Cleveland Indians and the Chicago Cubs marked the first live ATSC 3.0 broadcast of a major sporting event on experimental station Channel 31, with an immersive-audio simulcast on the Tribune Media-owned Fox affiliate WJW-TV.

Immersive audio will enable enhanced spatial resolution for 3D sound-source localization and therefore provide an increased sense of envelopment throughout the home listening environment, while audio “personalization” will include level control for dialog elements, alternate audio tracks, assistive services, other-language dialog and special commentaries. ATSC 3.0 also will support loudness normalization and contouring of dynamic range.
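
A toy illustration of that personalization, using placeholder signals rather than a real AC-4 or MPEG-H decoder: because dialog travels as its own object, the receiver can rescale it before the final render.

```python
import numpy as np

# Dialog is a separate object, not baked into the bed, so the receiver
# can apply a listener-chosen gain before rendering. Signals here are
# synthetic stand-ins for decoded object audio.

sr = 48_000
t = np.arange(sr) / sr
dialog = 0.3 * np.sin(2 * np.pi * 220 * t)  # placeholder dialog object
bed = 0.3 * np.sin(2 * np.pi * 80 * t)      # placeholder effects/music bed

def render(dialog_gain_db: float) -> np.ndarray:
    """Mix the dialog object at a listener-chosen level over the bed."""
    gain = 10 ** (dialog_gain_db / 20)
    return bed + gain * dialog

louder_dialog = render(+6.0)  # listener boosts dialog by 6 dB
```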

Doug Trumbull

Higher Frame Rates
With a wide range of experience in filmmaking and entertainment technologies, including visual effects supervision on 2001: A Space Odyssey, Close Encounters of the Third Kind, Star Trek: The Motion Picture and Blade Runner, Trumbull also directed Silent Running and Brainstorm, as well as special-venue offerings. He won an Academy Award for his Showscan process for high-speed 70mm cinematography, helped develop IMAX technologies and now runs Trumbull Studios, which is innovating a new MAGI process to offer 4K 3D at 120fps. High production costs and a lack of playback environments meant that Trumbull’s Showscan format never really got off the ground, which was “a crushing disappointment,” he conceded to the SMPTE audience.

Meanwhile, responding to falling box office receipts during the ’50s and ’60s, Hollywood added more audience draws, including large-screen presentations and surround sound, although the movie industry also began to rely on income from the TV community for broadcast rights to popular cinema releases.

As Seidel added, “The convergence of toolsets for both television and cinema — including 2K, 4K and eventually 8K — will lead to reduced costs and help create a global market [with] a significant income stream.” He also said that “cord cutting” — substituting services such as Amazon.com, Hulu, iTunes and Netflix for cable subscriptions — is bringing people back to over-the-air broadcasting.

Trumbull countered that TV will continue at 60fps “with a live texture that we like,” whereas film will retain its 24fps frame rate “that we have loved for years and which has a ‘movie texture.’ Higher frame rates for cinema, such as the 48fps Peter Jackson used for The Hobbit films, have too much of a TV look. Showscan at 120fps and a 360-degree shutter avoided that TV look, which is considered objectionable.” (Early reviews of director Ang Lee’s upcoming 3D film Billy Lynn’s Long Halftime Walk, which was shot in 4K at 120fps, have been critical of its video look and feel.)

Next-Gen Audio for Film and TV
During a series of “Advances in Audio Reproduction” conference sessions, chaired by Chris Witham, director of digital cinema technology at Walt Disney Studios, three presentations covered key design criteria for next-generation audio for TV and film. During his discussion called “Building the World’s Most Complex TV Network — A Test Bed for Broadcasting Immersive & Interactive Audio,” Robert Bleidt, GM of Fraunhofer USA’s audio and multimedia division, provided an overview of a complete end-to-end broadcast plant that was built to test various operational features developed by Fraunhofer, Technicolor and Qualcomm. These tests were used to evaluate an immersive/object-based audio system based on MPEG-H for use in Korea during planned ATSC 3.0 broadcasting.

“At the NAB Convention we demonstrated The MPEG Network,” Bleidt stated. “It is perhaps the most complex combination of broadcast audio content ever made in a single plant, involving 13 different formats.” This includes mono, stereo, 5.1-channel and other sources. “The network was designed to handle immersive audio in both channel- and HOA-based formats, using audio objects for interactivity. Live mixes from a simulated sports remote were connected to a network operations center, with distribution to affiliates, and then sent to a consumer living room, all using the MPEG-H audio system.”

Bleidt presented an overview of system and equipment design, together with details of a critical AMAU (audio monitoring and authoring unit) that will be used to mix immersive audio signals using existing broadcast consoles limited to 5.1-channel assignment and panning.

Dr. Jan Skoglund, who leads a team at Google developing audio signal processing solutions, addressed the subject of “Open-source Spatial Audio Compression for VR Content,” including the importance of providing realistic immersive audio experiences to accompany VR presentations and 360-degree 3D video.

“Ambisonics have reemerged as an important technique in providing immersive audio experiences,” Skoglund stated. “As an alternative to channel-based 3D sound, Ambisonics represent full-sphere sound, independent of loudspeaker location.” His fascinating presentation considered the ways in which open-source compression technologies can transport audio for various species of next-generation immersive media. Skoglund compared the efficacy of several open-source codecs for first-order Ambisonics, as well as the progress being made toward higher-order Ambisonics (HOA) for VR content delivered via the internet, including the enhanced experience HOA provides.
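
For readers new to the format, the sketch below encodes a mono source into first-order B-format under the AmbiX convention (ACN channel order, SN3D normalization); this speaker-independent representation is what the codecs under discussion would then compress. The choice of convention here is for illustration and was not specified in the talk.

```python
import numpy as np

# First-order Ambisonic encoding (AmbiX: ACN order W/Y/Z/X, SN3D).
# The four channels describe a full-sphere soundfield with no
# assumption about the listener's loudspeaker layout.

def encode_foa(mono: np.ndarray, azimuth: float, elevation: float) -> np.ndarray:
    """Encode a mono signal at (azimuth, elevation) in radians."""
    w = 1.0                                  # omnidirectional component
    y = np.sin(azimuth) * np.cos(elevation)  # left-right
    z = np.sin(elevation)                    # up-down
    x = np.cos(azimuth) * np.cos(elevation)  # front-back
    return np.stack([w * mono, y * mono, z * mono, x * mono])

src = np.random.randn(48_000)  # 1 s of placeholder audio at 48 kHz
bfmt = encode_foa(src, azimuth=np.pi / 4, elevation=0.0)  # 45° to the left
print(bfmt.shape)  # (4, 48000): one soundfield, decodable to any layout
```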

Finally, Paul Peace, who oversees loudspeaker development for cinema, retail and commercial applications at JBL Professional — and designed the Model 9350, 9300 and 9310 surround units — discussed “Loudspeaker Requirements in Object-Based Cinema,” including a valuable in-depth analysis of the acoustic delivery requirements in a typical movie theater that accommodates object-based formats.

Peace is proposing the use of a new metric for surround loudspeaker placement and selection when the layout relies on venue-specific immersive rendering engines for Dolby Atmos and Barco Auro-3D soundtracks, with object-based overhead and side-wall channels. “The metric is based on three foundational elements as mapped in a theater: frequency response, directionality and timing,” he explained. “Current set-up techniques are quite poor for a majority of seats in actual theaters.”

Peace also discussed the new loudspeaker requirements and layout criteria needed to ensure more consistent sound coverage throughout such venues, so that they can more accurately replay material re-recorded on typical dub stages, which are often smaller and of different width/length/height dimensions than most multiplex environments.


Mel Lambert, who also gets photo credit on pictures from the show, is principal of Content Creators, an LA-based copywriting and editorial service, and can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

 

New England SMPTE holding free session on UHD/HDR/HFR, more

The New England Section of SMPTE is holding a free day-long “New Technologies Boot Camp” that focuses on working with high resolution (UHD, 4K and beyond), high-dynamic-range (HDR) imaging and higher frame rates (HFR). In addition, they will discuss how to maintain resolution independence on screens of every size, as well as how to leverage IP and ATSC 3.0 for more efficient movement of this media content.

The boot camp will run from 9am to 9pm on May 19 at the Holiday Inn in Dedham, Massachusetts.

“These are exciting times for those of us working on the technical side of broadcasting, and the array of new formats and standards we’re facing can be a bit overwhelming,” says Martin P. Feldman, chair of SMPTE New England Section. “No one wants — or can afford — to be left behind. That’s why we’re gathering some of the industry’s foremost experts for a free boot camp designed to bring engineers up to speed on new technologies that enable more efficient creation and delivery of a better broadcast product.”

Boot camp presentations will include:

• “High-Dynamic-Range and Wide Color Gamut in Production and Distribution” by Hugo Gaggioni, chief technical officer at Sony Electronics.
• “4K/UHD/HFR/HDR — HEVC H.265 — ATSC 3.0” by Karl Kuhn of Tektronix.
• “Where Is 4K (UHD) Product Used Today — 4K Versus HFR — 4K and HFR Challenges” by Bruce Lane of Grass Valley.
• “Using MESH Networks” by Al Kornak of JVC Kenwood Corporation.
• “IP in Infrastructure-Building (Replacing HD-SDI Systems and Accommodating UHD)” by Paul Briscoe of Evertz Microsystems.
• “Scripted Versus Live Production Requirements” by Michael Bergeron of Panasonic.
• “The Transition from SDI to IP, Including IP Infrastructure and Monitoring” by John Shike of SAM (formerly Snell/Quantel).
• “8K, High-Dynamic-Range, OLED, Flexible Displays” by consultant Peter Putman.
• “HDR: The Great, the Okay, and the WTF” by Mark Schubin, engineer-in-charge at the Metropolitan Opera, Sesame Street and Great Performances (PBS).

The program will conclude with a panel discussion by the program’s presenters.

No RSVP is required, and both SMPTE members and non-members are welcome.

Leon Silverman steps down, Seth Hallen named new HPA president

In a crowded conference room in Indian Wells, California, during the HPA Tech Retreat, HPA founding president Leon Silverman literally handed the baton to long-time board member Seth Hallen. The organization has also taken on a new name, the Hollywood Professional Association. More on that later.

Hallen, who joined the HPA board in 2007, is SVP of Global Creative Services at Sony DADC New Media Solutions. Silverman, who helped found the organization, will continue to serve on the board of directors in the newly created role of past president.

“It is a distinct honor to continue the important work that Leon has undertaken for this organization, and I am clearly dedicated to making the next phase of HPA a great one,” said Hallen. “Enabling our industry to evolve by fueling our community with ideas, opportunity and recognition remains our goal. I look forward to working with our incredibly talented and dedicated board and continuing our collaboration with our colleagues at SMPTE, and the staff, volunteers and community that are the heart and soul of HPA, as we build upon the work of the past 14 years and look toward the future.”

The HPA, which is now part of SMPTE, also announced newly elected board members, including Craig German, SVP Studio Post at NBCUniversal Media; Jenni McCormick, executive director of American Cinema Editors (ACE); and Chuck Parker, CEO of Sohonet. Newly elected board member Bill Roberts, CFO of Panavision, will assume treasurer responsibilities as Phil Squyres steps down from the post he has held since HPA’s founding. Squyres will remain on the board.

Wendy Aylsworth, past president of SMPTE, was named SMPTE representative. Barbara Lange serves as executive director of SMPTE and HPA. The new Board members join Mark Chiolis, Carolyn Giardina, Vincent Maza, Kathleen Milnes, Loren Nielsen and VP Jerry Pierce on the HPA board.

In commenting on the new HPA name, executive director Lange noted, “The nature of the work and responsibilities that our community is engaged in has changed, and will continue to change. After carefully exploring how to address this growth, it became clear that Professional more accurately and inclusively identifies the creative talent, content holders and global infrastructure of services, as well as emerging processes and platforms. As an organization, we are dedicated to seeing beyond the horizon to the wider future, and bringing a wide array of individuals and companies into the organization. Our new name and identity makes that statement.”

Ncam hires industry vet Vincent Maza to head up LA office

Ncam, maker of camera tracking technology for augmented reality production and previs, has opened a new office in Los Angeles and has brought on Vincent Maza to run the operation.

Maza spent much of his career at Avid and as an HD engineer at Fletcher Chicago. More recently, he has been working with the professional imaging division of Dolby and with data transfer specialist Aspera. He is also a member of the board of directors of the HPA (Hollywood Post Alliance), now part of SMPTE. He will be in Indian Wells, California, next week representing Ncam at the HPA Tech Retreat.

“2016 is going to be a great year for augmented reality and we believe we will see a huge uptake in people using it to make television more engaging, more exciting and more challenging,” commented Maza. “Ncam’s camera tracking technology makes augmented reality a practical proposition, and I am very excited to be at the heart of it and supporting our US presence.”

Ncam’s tracking system captures all six degrees of freedom of camera movement: XYZ position in 3D space plus pan, tilt and roll. This means even handheld cameras can be precisely tracked with minimal latency.
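
As a purely hypothetical illustration of what per-frame six-degree-of-freedom tracking data looks like (the field names and units are illustrative, not Ncam’s actual API):

```python
from dataclasses import dataclass

# A 6-DoF camera pose: three translations plus three rotations.
# Field names and units are illustrative only.

@dataclass
class CameraPose:
    x: float     # position in metres
    y: float
    z: float
    pan: float   # rotations in degrees
    tilt: float
    roll: float

def delta(a: CameraPose, b: CameraPose) -> CameraPose:
    """Frame-to-frame pose change, e.g. to re-project a virtual set."""
    return CameraPose(b.x - a.x, b.y - a.y, b.z - a.z,
                      b.pan - a.pan, b.tilt - a.tilt, b.roll - a.roll)
```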

Broadcasters including CNN, ESPN, Fox Sports and the NFL have embraced augmented reality with Ncam. The same technology is used to provide directors and cinematographers with realtime visualization of effects shots. Recent movies using the technology include Avengers: Age of Ultron, Edge of Tomorrow and White House Down.

Filmmaker Howard Lukk is SMPTE’s new director of standards

Film director Howard Lukk has joined the Society of Motion Picture and Television Engineers (SMPTE) as its new director of standards. Over the next year, Lukk will transition into the position that has been held for the past eight years by Peter Symes, who is retiring.

Lukk is a writer and director at independent film production and management company Pannon Entertainment, where he has been working on short films and providing technical consulting and education for clients. His last short film, “Emma,” was shot and finished in high dynamic range (HDR).

In an earlier role as VP of production systems at The Walt Disney Studios, Lukk oversaw a team responsible for the engineering, installation and maintenance of on-lot and on-set feature film production and post systems. Responsible for helping to incorporate new technologies into the workflow, he assisted the studios’ transition from analog to digital workflows. Lukk also led theatrical production, post and distribution projects focused on digital capture, digital cinema, 3D stereo, file-based workflow, color management and archive.

During two years as director of media systems at Pixar, Lukk was responsible for managing both the audio/visual engineering and image mastering departments, as well as the work of maintaining the recording, projection and post systems and workflows supporting Pixar filmmakers. Before joining Pixar, he held his first role with The Walt Disney Studios: as VP of production technology for the studios, he focused on integrating a new digital cinema workflow throughout the company’s global operations.

Lukk’s early work with Disney built on his previous experience as director of technology at Digital Cinema Initiatives (DCI), where he was responsible for research and development, design and documentation of a digital cinema system specification and test plan. Earlier, as chief engineer at International Video Conversions, Lukk worked with engineering staff to design, build and maintain a high-end post facility specializing in digital cinema, telecine transfer, audio post and standards conversion work.

“I have always respected and valued SMPTE’s work in creating the standards that support interoperability in image, sound and metadata, and I am excited about becoming even more involved in this process,” says Lukk, who is also a SMPTE Fellow. “The many significant technological changes taking place in our industry give an immediacy to the Society’s efforts and open up unprecedented opportunities to make a meaningful impact on the future of media creation, delivery, and consumption.”