Cinnafilm 6.6.19

Category Archives: 4k

Apple intros long-awaited new Mac Pro and Pro Display XDR

By Barry Goch

The Apple Worldwide Developers Conference (WWDC19) kicked off on Monday with a keynote from Apple CEO Tim Cook, where he announced the eagerly awaited new Mac Pro and Pro Display XDR.

Tim Cook’s keynote

In recent years, many working in M&E felt as if Apple had moved away from supporting creative pros in this industry. There was the fumbled rollout of FCPX and then the “trash can” Mac Pro with its limited upgrade path. Well, our patience has finally paid off, and our faith in Apple has been restored. This week Apple delivered products beyond expectation.

This post pro, for one, is very happy that Apple is back making serious hardware for creative professionals. The tight integration of hardware and software, along with Apple’s build quality, makes its products unique in the market. Creatives love the confidence and freedom that come with working on Macs, and the tower footprint is back!

The computer itself is a more than worthy successor to the original Mac Pro tower design, and the complete opposite of the current trash-can-shaped Mac Pro, with its closed design and limited upgradeability. The new Mac Pro’s motherboard is mounted in a stainless steel space frame offering 360-degree access to the internals, which include 12 memory slots with up to 1.5TB of RAM capacity and eight PCIe slots, the most ever in a Mac and more than even the venerable Power Mac 9600. The innovative graphics architecture in the new Mac Pro is an expansion module, or MPX module, which allows the installation of two graphics cards tied together through the Infinity Fabric Link. This allows data transfers between the GPUs up to five times faster than over the PCIe bus.

Also new is the Apple Afterburner hardware accelerator card, a field-programmable gate array (FPGA) card for accelerating ProRes and ProRes RAW workflows. Afterburner supports playback of up to three streams of 8K ProRes RAW or up to 12 streams of 4K ProRes RAW. Because an FPGA can be reprogrammed with new instructions, the Afterburner card has a wealth of possibilities for future updates.

Plays Well With Others
Across the street from the San Jose Convention Center, where the keynote was held, Apple set up “The Studio” in the historic San Jose Civic. The venue was divided into areas of creative specialization: video, photography, music production, 3D and AR. It was really great to see complete workflows and to be able to interface with Apple creative pros. Oh, and Apple has announced support from third-party developers, such as Blackmagic, Avid, Adobe, Maxon’s Cinema 4D, Foundry, Red, Epic Games, Unity, Pixar and more.

Metal is Apple’s replacement for OpenCL/GL, a low-level language for interfacing with GPUs. Working closely with AMD, Apple has ensured that on the new Mac Pro, Resolve, OTOY Octane, Maxon Cinema 4D and Red will use native Metal rendering.

Blackmagic’s Grant Petty and Barry Goch at The Studio.

DaVinci Resolve is a color correction and online editing software for high-end film and television work. “It was the first professional software to adopt Metal and now, with the new Mac Pro and Afterburner, we’re seeing full-quality 8K performance in realtime with color correction and effects, something we could never dream of doing before,” explains Blackmagic CEO Grant Petty. “DaVinci Resolve running on the new Mac Pro is the fastest way to edit, grade and finish movies and TV shows.”

According to Avid’s director of product management for audio, Francois Quereuil, “Avid’s Pro Tools team is blown away by the unprecedented processing power of the new Mac Pro, and thanks to its internal expansion capabilities, up to six Pro Tools HDX cards can be installed within the system — a first for Avid’s flagship audio workstation. We’re now able to deliver never-before-seen performance and capabilities for audio production in a single system and deliver a platform that professional users in music and post have been eagerly awaiting.”

“Apple continues to innovate for video professionals,” reports Adobe’s VP of digital video and audio, Steven Warner. “With the power offered by the new Mac Pro, editors will be able to work with 8K without the need for any proxy workflows in a future release of Premiere Pro.”

And from Apple? Expect versions of FCPX and Logic to be available with the release of the new Mac Pro, and rest assured they will take full advantage of the new hardware.

The Cost
The price for a Mac Pro with an eight-core Xeon W processor, 32GB of RAM, an AMD Radeon Pro 580X GPU and a 256GB SSD is $5,999. A fully loaded version, with a 28-core Xeon processor, Afterburner, two MPX modules carrying AMD Radeon Pro Vega II Duo graphics cards and 4TB of internal SSD storage, will come in around $20,000, give or take. It will be available this fall.

Pro Display XDR
The new Pro Display XDR is amazing. I was invited into a calibrated viewing environment that also housed Dell, Eizo, Sony BVM-X300 and Sony BVM-HX310 HDR monitors. We were shown the typical extremely bright and colorful animal footage used for monitor demos. Personally, I would have preferred to see more shots of people from a TV show or feature, not the usual extreme footage used to show off how bright the monitor can get.

For example, it would have been cool to see the Jony Ive video that plays on the Apple site, describing the design of the Mac Pro and the monitor, shown on the monitor itself.

Anyway, the big hang-up with the monitor is the stand. A price tag of $1,000 is a lot for a monitor stand, especially relative to the price of the monitor itself. When the price of the stand was announced during the keynote, there was a loud gasp, which unfortunately dampened the excitement and momentum of the new releases. It, too, will be available in the fall.

Display Specs
This Retina 6K 32-inch (diagonal) display offers 6016×3384 pixels (20.4 million pixels) at 218 pixels per inch. Sustained brightness is 1,000 nits (full screen), with a 1,600-nit peak and a contrast ratio of 1,000,000:1. It works in the P3 wide color gamut with 10-bit depth for 1.073 billion colors. Available reference modes include HDR video (P3-ST 2084), Digital Cinema (P3-DCI), Digital Cinema (P3-D65) and HDTV video (BT.709-BT.1886). Supported HDR formats are HLG, HDR10 and Dolby Vision.
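Those headline numbers hang together arithmetically: the megapixel count is just width times height, and pixel density follows from the pixel diagonal over the physical diagonal. A quick sketch in Python (the 32-inch figure is a nominal diagonal, so the computed density lands a touch under the quoted 218 ppi):

```python
import math

def display_stats(w_px, h_px, diag_in):
    """Return (megapixels, pixels per inch) for a display."""
    mpix = w_px * h_px / 1e6
    ppi = math.hypot(w_px, h_px) / diag_in
    return mpix, ppi

# Pro Display XDR figures quoted above
mpix, ppi = display_stats(6016, 3384, 32)
print(f"{mpix:.1f} MP, ~{ppi:.0f} ppi")  # 20.4 MP; ppi comes out near 216
```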

Portrait mode

The Cost
The standard glass version is $4,999; the nano-texture anti-glare glass version is $5,999. As mentioned, the Pro Stand is $999 and the VESA mount adapter is $199, both sold separately. The display has a Thunderbolt 3 connection only.

Pros and Cons
Mac Pro Pros: Innovative design, expandability.
Cons: Lack of Nvidia support, no Afterburner acceleration for formats beyond ProRes and no optical audio output.

Pro Display XDR Pros: Ability to sustain 1,000 nits, beautiful design and execution.
Cons: Lack of Rec 2020 color space and ACES profile, plus the high cost of the display stand.

Summing Up
The Pro is back for Apple and for third-party apps like Avid and Resolve. I really can’t wait to get my hands on the new Mac Pro and Pro Display XDR and put them through their paces.


Barry Goch is a finishing artist at LA’s The Foundation as well as a UCLA Extension Instructor, Post Production. You can follow him on Twitter at @Gochya

Cobalt Digital’s card-based solution for 4K/HDR conversions

Cobalt Digital was at NAB showing card-based solutions for openGear frames targeting 4K and HDR workflows. Cobalt’s 9904-UDX-4K up/down/cross converter and image processor offers economical SDR-to-HDR and HDR-to-SDR conversion for 4K.

John Stevens, director of engineering at Burbank post house The Foundation, calls it “a Swiss Army knife” for a post facility.

The 9904-UDX-4K upconverts 12G/6G/3G/HD/SD to either UHD1 3840×2160 square division multiplex (SDM) or two-sample interleave (2SI) quad 3G-SDI-based formats, or it can output SMPTE ST 2082 12G-SDI for single-wire 4K transport. With both 12G-SDI and quad 3G-SDI inputs, the 9904-UDX-4K can downconvert 12G and quad UHD. The 9904-UDX-4K provides an HDMI 2.0 output for economical 4K video monitoring and offers numerous options, including SDR-to-HDR conversion and color correction.

The 9904-UDX-4K-IP model offers the same functionality as the SDI-based 9904-UDX-4K, plus dual 10GigE ports to support the emerging uncompressed video/audio/data-over-IP standards.

The 9904-UDX-4K-DSP model provides the same functionality as the 9904-UDX-4K and adds a DSP-based platform supporting multiple audio DSP options, including Dolby realtime loudness leveling (automatic loudness processing), Dolby E/D/D+ encode/decode and Linear Acoustic Upmax automatic upmixing. Embedded audio and metadata are properly delayed and re-embedded to match any video processing delay, with full adjustment available for audio/video offset.

The product’s high-density openGear design allows for up to five 9904-UDX-4K cards to be installed in one 2RU openGear frame. Card control/monitoring is available via the DashBoard user interface, integrated HTML5 web interface, SNMP or Cobalt’s RESTful-based Reflex protocol.

“I have been looking for a de-embedder that will work with SMPTE ST-2048 raster sizes — specifically 2048×1080 and 4096×2160,” explains Stevens. “The reason this is important is Netflix deliverables require these rasters. We use all embedded audio and I need to de-embed for monitoring. The same Cobalt Digital card will take almost every SDI input from quad link to 12G and output HDMI. There are other converters that will do some of the same things, but I haven’t seen anything that does what this product does.”


Quantum offers new F-Series NVMe storage arrays

During the NAB show, Quantum introduced its new F-Series NVMe storage arrays designed for performance, availability and reliability. Using non-volatile memory express (NVMe) Flash drives for ultra-fast reads and writes, the series supports massive parallel processing and is intended for studio editing, rendering and other performance-intensive workloads using large unstructured datasets.

Incorporating the latest Remote Direct Memory Access (RDMA) networking technology, the F-Series provides direct access between workstations and the NVMe storage devices, resulting in predictable and fast network performance. By combining these hardware features with the new Quantum Cloud Storage Platform and the StorNext file system, the F-Series offers end-to-end storage capabilities for post houses, broadcasters and others working in rich media environments, such as visual effects rendering.

The first product in the F-Series is the Quantum F2000, a 2U dual-node server with two hot-swappable compute canisters and up to 24 dual-ported NVMe drives. Each compute canister can access all 24 NVMe drives and includes processing power, memory and connectivity specifically designed for high performance and availability.

The F-Series is based on the Quantum Cloud Storage Platform, a software-defined block storage stack tuned specifically for video and video-like data. The platform eliminates data services unrelated to video while enhancing data protection, offering networking flexibility and providing block interfaces.

According to Quantum, the F-Series is as much as five times faster than traditional Flash storage/networking, delivering extremely low latency and hundreds of thousands of IOPS per chassis. The series allows users to reduce infrastructure costs by moving from Fibre Channel to Ethernet IP-based infrastructures. Additionally, users leveraging a large number of HDDs or SSDs to meet their performance requirements can gain back racks of data center space.

The F-Series is the first product line based on the Quantum Cloud Storage Platform.


Red Ranger all-in-one camera system now available

Red Digital Cinema has made its new Red Ranger all-in-one camera system available to select Red authorized rental houses. Ranger includes Red’s cinematic full-frame 8K sensor Monstro in an all-in-one camera system, featuring three SDI outputs (two mirrored and one independent) allowing two different looks to be output simultaneously; wide-input voltage (11.5V to 32V); 24V and 12V power outs (two of each); one 12V P-Tap port; integrated 5-pin XLR stereo audio input (Line/Mic/+48V Selectable); as well as genlock, timecode, USB and control.

Ranger is capable of handling heavy-duty power sources and boasts a larger fan for quieter and more efficient temperature management. The system is currently shipping in a gold mount configuration, with a v-lock option available next month.

Ranger captures 8K RedCode RAW up to 60fps full-format, as well as Apple ProRes or Avid DNxHR formats at 4K up to 30fps and 2K up to 120fps. It can simultaneously record RedCode RAW plus Apple ProRes or Avid DNxHD or DNxHR, at write speeds of up to 300MB/s.
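Those figures make it easy to estimate recording time per card. A minimal sketch, assuming a hypothetical 960GB magazine and the quoted 300MB/s ceiling (real-world capacities and sustained rates will differ):

```python
def record_minutes(capacity_gb, rate_mb_per_s):
    """Approximate recording time in minutes at a sustained data rate (decimal units)."""
    return capacity_gb * 1000 / rate_mb_per_s / 60

# Hypothetical 960GB mag at the Ranger's quoted 300MB/s maximum write speed
print(f"{record_minutes(960, 300):.0f} min")  # roughly 53 minutes
```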

To enable an end-to-end color management and post workflow, Red’s enhanced image processing pipeline (IPP2) is also included in the system.

Ranger ships complete, including:
• Production top handle
• PL mount with supporting shims
• Two 15mm LWS rod brackets
• Red Pro Touch 7.0-inch LCD with 9-inch arm and LCD/EVF cable
• LCD/EVF adaptor A and LCD/EVF adaptor D
• 24V AC power adaptor with 3-pin 24V XLR power cable
• Compatible Hex and Torx tools


Goldcrest adds 4K theater and colorist Marcy Robinson

Goldcrest Post in New York City has expanded its picture finishing services, adding veteran colorist Marcy Robinson and unveiling a new, state-of-the-art 4K theater that joins an existing theater and other digital intermediate rooms. The moves are part of a broader strategy to offer film and television productions packaged post services encompassing editorial, picture finishing and sound.

Robinson brings experience working in features, television, documentaries, commercials and music videos. She has recently been working as a freelance colorist, collaborating with directors Noah Baumbach and Ang Lee. Her background also includes 10 years at the creative boutique Box Services, best known for its work in fashion advertising.

Robinson, who was recruited to Goldcrest by Nat Jencks, the facility’s senior colorist, says she was attracted by the opportunity to work on a diversity of high-quality projects. Robinson’s first projects for Goldcrest include the Netflix documentary The Grass is Greener and an advertising campaign for Reebok.

Robinson started out in film photography and operated a custom color photographic print lab for 13 years. She became a digital colorist after joining Box Services in 2008. As a freelance colorist, her credits include the features Billy Lynn’s Long Halftime Walk, De Palma and Frances Ha, the HBO documentary Suited, commercials for Steve Madden, Dior and Prada, and music videos for Keith Urban and Madonna.

Goldcrest’s new 4K theater is set up for the dual purposes of feature film and HDR television mastering. Its technical features include a Blackmagic DaVinci Resolve Linux Advanced color correction and finishing system, a Barco 4K projector, a Screen Research projection screen and Dolby-calibrated 7.1 surround sound.


AJA ships HDR Image Analyzer developed with Colorfront

AJA is now shipping HDR Image Analyzer, a realtime HDR monitoring and analysis solution developed in partnership with Colorfront. HDR Image Analyzer features waveform, histogram and vectorscope monitoring and analysis of 4K/UltraHD/2K/HD, HDR and WCG content for broadcast and OTT production, post, QC and mastering.

Combining AJA’s video I/O with HDR analysis tools from Colorfront in a compact 1RU chassis, the HDR Image Analyzer features a toolset for monitoring and analyzing HDR formats, including Perceptual Quantizer (PQ) and Hybrid Log Gamma (HLG) for 4K/UltraHD workflows. The HDR Image Analyzer takes in up to 4K sources across 4x 3G-SDI inputs and loops the video out, allowing analysis at any point in the production workflow.

Additional feature highlights include:
– Support for display referred SDR (Rec.709), HDR ST 2084/PQ and HLG analysis
– Support for scene referred ARRI, Canon, Panasonic, Red and Sony camera color spaces
– Display and color processing look up table (LUT) support
– Automatic color space conversion based on the award-winning Colorfront Engine
– CIE graph, vectorscope, waveform and histogram support
– Nit levels and phase metering
– False color mode to easily spot out-of-gamut/out-of-brightness pixels
– Advanced out-of-gamut and out-of-brightness detection with error intolerance
– Data analyzer with pixel picker
– Line mode to focus a region of interest onto a single horizontal or vertical line
– File-based error logging with timecode
– Reference still store
– UltraHD UI for native-resolution picture display
– Up to 4K/UltraHD 60p over 4x 3G-SDI inputs, with loop out
– SDI auto signal detection
– Loop through output to broadcast monitors
– Three-year warranty

The HDR Image Analyzer is the second technology collaboration between AJA and Colorfront, following the integration of Colorfront Engine into AJA’s FS-HDR realtime HDR/WCG converter. Colorfront has exclusively licensed its Colorfront HDR Image Analyzer software to AJA for the HDR Image Analyzer.

The HDR Image Analyzer is available through AJA’s worldwide reseller network for $15,995.


Shutterstock Select: 4K footage shot on cinema cameras

Shutterstock has introduced a new tier of footage: Shutterstock Select. This collection of exclusive video clips includes far-ranging content — everything from everyday moments to blockbuster-worthy action scenes — all captured by industry pros using cinema-grade cameras. The Shutterstock Select video collection is available to download in both 4K and HD.

“As most filmmakers and cinematographers know, creating high-quality establishing shots is important to any film, but it is also very expensive,” says Jon Oringer, founder/CEO of Shutterstock.

This new tier offering features in-demand content categories, such as cinematic aerials, millennial adventure, gastronomy, action scenes and workplace scenes. The shots are filmed on high-end cinema cameras using cinema lenses.

According to Shutterstock’s director of creative video content, Kyle Trotter, a variety of cinema-grade equipment was used to create this collection. The Phantom Flex4K was used for super-slow-motion 1,000fps footage. Contributors also used Red’s latest sensor, the Monstro, shooting large format for some of the content. Additionally, they used the Shotover K1 and Cineflex (both helicopter camera rigs). In terms of lenses, Cooke S4s, ARRI/Zeiss Ultra Primes and Sigma Cine Primes were used. The footage was created with a particular focus on Hollywood-style camera movements, composition and acting.

Two of the contributors Shutterstock worked with and is highlighting in this collection are VIA Films’ Daniel Hurst and Aila Images’ Bevan Goldswain. “We aim to build the Shutterstock Select collection by working with more contributors who can provide the high-quality content we expect for this offering,” says Trotter. To learn more about contributing, check out Shutterstock’s FAQ.



Franz Kraus to advisory role at ARRI, Michael Neuhaeuser takes tech lead

The ARRI Group has named Dr. Michael Neuhaeuser as the new executive board member responsible for technology. He succeeds Professor Franz Kraus, who after more than 30 years at ARRI, joins the Supervisory Board and will continue to be closely associated with the company. Neuhaeuser starts September 1.

Kraus, who has led technology development at ARRI for the last few decades, played an essential role in the development of the Alexa digital camera system and in building the company’s early competence in multi-channel LED technology for ARRI lighting. During Kraus’ tenure, while he was responsible for research and development, the company was presented with nine Scientific and Technical Awards by the Academy of Motion Picture Arts and Sciences for its outstanding technical achievements.

In 2011, along with two colleagues, Kraus was honored with an Academy Award of Merit, an Oscar statuette, for the design and development of the digital film recorder, the ARRILASER.

Neuhaeuser, who is now responsible for technology at the ARRI Group, previously served as VP of automotive microcontroller development at Infineon Technologies in Munich. He studied electrical engineering at the Ruhr-University Bochum, Germany, and subsequently completed his doctorate in semiconductor devices. He brings with him 30 years of experience in the electronics industry.

Neuhaeuser started his industrial career at Siemens Semiconductor in Villach, Austria, and later took over development leadership at Micram Microelectronic in Bochum. He joined Infineon Technologies in 1998, where he held various management functions in Germany and abroad. Notably, he took responsibility for the digital cordless business in 2005 and, together with his team, developed the world’s first fully integrated DECT chip. In 2009, he was appointed VP/GM of Infineon Technologies Romania in Bucharest where, as country manager, he built up various local activities with more than 300 engineers. In 2012, he was asked to head up the automotive microcontroller development division, for which he and his team developed the highly successful Aurix product family, used in one of every two cars worldwide.

Main Image: L-R: Franz Kraus and Michael Neuhaeuser.


Lenovo intros 15-inch VR-ready ThinkPad P52

Lenovo’s new ThinkPad P52 is a 15-inch, VR-ready and ISV-certified mobile workstation featuring an Nvidia Quadro P3200 GPU. The all-new hexa-core Intel Xeon CPU doubles the memory capacity to 128GB and increases PCIe storage. Lenovo says the ThinkPad excels in animation and visual effects project storage, the creation of large models and datasets, and realtime playback.

“More and more, M&E artists have the need to create on the go,” reports Lenovo senior worldwide industry manager for M&E Rob Hoffmann. “Having desktop-like capabilities in a 15-inch mobile workstation allows artists to remain creative anytime, anywhere.”

The workstation targets traditional ISV workflows, as well as AR and VR content creation or deployment of mobile AI. Lenovo points to Virtalis, a VR and advanced visualization company, as an example of who might take advantage of the workstation.

“Our virtual reality solutions help clients better understand data and interact with it. Being able to take these solutions mobile with the ThinkPad P52 gives us expanded flexibility to bring the technology to life for clients in their unique environments,” says Steve Carpenter, head of solutions development for Virtalis. “The ThinkPad P52 powering our Virtalis Visionary Render software is perfect for engineering and design professionals looking for a portable solution to take their first steps into the endless possibilities of VR.”

The P52 also will feature a 4K UHD display with 400 nits of brightness, 100% Adobe RGB color gamut coverage and 10-bit color depth. There are dual USB-C Thunderbolt ports supporting the display of 8K video, allowing users to take advantage of the ThinkPad Thunderbolt Workstation Dock.

The ThinkPad P52 will be available later this month.

Testing large format camera workflows

By Mike McCarthy

In the last few months, we have seen the release of the Red Monstro, Sony Venice, Arri Alexa LF and Canon C700 FF, all of which have larger or full-frame sensors. “Full frame” comes from DSLR terminology, where it means a sensor equivalent to the entire 35mm film area, the way the film was used horizontally in still cameras. All SLRs used to be full frame with 35mm film, so there was no need for the term until manufacturers started saving money on digital image sensors by making them smaller than a 35mm film exposure. Super35mm motion picture cameras, on the other hand, ran the film vertically, resulting in a smaller exposure area per frame, but this was still much larger than most video imagers until the last decade, when 2/3-inch chips were considered premium imagers. The options have grown a lot since then.
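The size gap between these formats is easiest to feel as a crop factor, the ratio of sensor widths. A rough sketch using approximate widths (the numbers are illustrative assumptions; active areas vary by camera and recording mode):

```python
# Approximate sensor widths in mm (illustrative; exact active areas vary by camera)
SENSOR_WIDTH_MM = {
    "full frame": 36.0,  # 35mm still film used horizontally
    "Super35": 24.9,     # roughly the 4-perf motion picture gate
    "2/3-inch": 9.6,
}

def crop_factor(fmt, reference="full frame"):
    """Horizontal crop factor of a format relative to a reference format."""
    return SENSOR_WIDTH_MM[reference] / SENSOR_WIDTH_MM[fmt]

for fmt in SENSOR_WIDTH_MM:
    print(f"{fmt}: {crop_factor(fmt):.2f}x")  # Super35 is about a 1.45x crop of full frame
```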

L-R: 1st AC Ben Brady, DP Michael Svitak and Mike McCarthy on the monitor.

Most of the top-end cinema cameras released over the last few years have advertised their Super35mm sensors as a huge selling point, since that allows the use of any existing S35 lens on the camera. These S35 cameras include the Epic, Helium and Gemini from Red, Sony’s F5 and F55, Panasonic’s VariCam LT, Arri’s Alexa and Canon’s C100-500. On the top end, 65mm cameras like the Alexa 65 have sensors twice as wide as Super35 but very limited lens options to cover a sensor that large. Full frame falls somewhere in between and allows, among other things, the use of any 35mm still film lenses. In the world of film, this format was referred to as VistaVision, but the first widely used full-frame digital video camera was Canon’s 5D Mark II, the first serious HDSLR. The format has suddenly surged in popularity, and thanks to this I recently had the opportunity to be involved in a test shoot with a number of these new cameras.

Keslow Camera was generous enough to give DP Michael Svitak and myself access to pretty much all their full-frame cameras and lenses for the day in order to test the cameras, workflows and lens options for this new format. We also had the assistance of first AC Ben Brady to help us put all that gear to use, and Mike’s daughter Florendia as our model.

First off was the Red Monstro, which, while technically not the full 24mm height of true full frame, uses the same size lenses due to the width of its 17×9 sensor. It offers the highest resolution of the group at 8K. It records compressed RAW to R3D files, with options for ProRes and DNxHR up to 4K, all saved to Red mags. Like the rest of the group, smaller portions of the sensor can be used at lower resolutions to pair with smaller lenses. The Red Helium sensor has the same resolution but in a much smaller Super35 size, allowing a wider selection of lenses to be used. But larger pixels allow more light sensitivity, with individual pixels up to 5 microns wide on the Monstro and Dragon, compared to Helium’s 3.65-micron pixels.
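Those pixel sizes fall straight out of sensor width divided by horizontal photosite count. A quick check using Red's published sensor dimensions (taking 8192 horizontal photosites for both 8K sensors):

```python
def pixel_pitch_um(sensor_width_mm, h_pixels):
    """Pixel pitch in microns: sensor width divided by horizontal photosite count."""
    return sensor_width_mm * 1000 / h_pixels

print(round(pixel_pitch_um(40.96, 8192), 2))  # Monstro 8K VV: 5.0
print(round(pixel_pitch_um(29.90, 8192), 2))  # Helium 8K S35: 3.65
```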

Next up was Sony’s new Venice camera with a 6K full-frame sensor, which allows 4K S35 recording as well. It records XAVC to SxS cards or compressed RAW in the X-OCN format with the optional AXS-R7 external recorder, which we used. It is worth noting that both full-frame recording and integrated anamorphic support require additional licenses from Sony, but Keslow provided us with a camera that had all of that functionality enabled. With a 36×24mm 6K sensor, the pixels are 5.9 microns, and footage shot at 4K in the S35 mode should be similar to shooting with the F55.

We unexpectedly had the opportunity to shoot on Arri’s new Alexa LF (Large Format) camera. At 4.5K, it had the lowest resolution of the group, but that also means the largest sensor pixels, at 8.25 microns, which can increase sensitivity. It records ArriRaw or ProRes to Codex XR capture drives with its integrated recorder.

The other new option is the Canon C700 FF, with a 5.9K full-frame sensor recording RAW, ProRes or XAVC to CFast cards or Codex drives. That gives it 6-micron pixels, similar to the Sony Venice. But we did not have the opportunity to test that camera this time around; maybe in the future.

One more factor in all of this is the rising popularity of anamorphic lenses. All of these cameras support modes that use the part of the sensor covered by anamorphic lenses and can desqueeze the image for live monitoring and preview. In the digital world, anamorphic essentially cuts your overall resolution in half, until the unlikely event that we start seeing anamorphic projectors or cameras with rectangular sensor pixels. But the prevailing attitude appears to be, “We have lots of extra resolution available so it doesn’t really matter if we lose some to anamorphic conversion.”
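The half-resolution point is just geometry: a 2x anamorphic lens records a horizontally squeezed image, and desqueezing doubles the displayed width without adding any captured pixels. A small sketch with hypothetical recording dimensions:

```python
def desqueeze(rec_w, rec_h, squeeze=2.0):
    """Displayed dimensions after desqueezing an anamorphic recording."""
    return rec_w * squeeze, rec_h

# Hypothetical near-4:3 sensor crop shot through a 2x anamorphic lens
w, h = desqueeze(5120, 4320)
print(f"{w:.0f} x {h} -> {w / h:.2f}:1")  # 10240 x 4320, about 2.37:1
```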

Post Production
So what does this mean for post? In theory, sensor size has no direct effect on the recorded files (besides their content), but resolution does. We also have a number of new formats to deal with, and then we have to handle anamorphic images during finishing.

Ever since I got my hands on one of Dell’s new UP3218K monitors with an 8K screen, I have been collecting 8K assets to display on it. When I first started discussing this shoot with DP Michael Svitak, I was primarily interested in getting more 8K footage to test new 8K monitors, editing systems and software as they are released. I was anticipating getting Red footage, which I knew I could play back and process using my existing software and hardware.

The other cameras and lens options were added as the plan expanded, and by the time we got to Keslow Camera, they had filled a room with lenses and gear for us to test with. I also had a Dell 8K display connected to my ingest system, and the new 4K DreamColor monitor as well. This allowed me to view the recorded footage in the highest resolution possible.

Most editing programs, including Premiere Pro and Resolve, can handle anamorphic footage without issue, but new camera formats can be a bigger challenge. Any RAW file requires info about the sensor pattern in order to debayer it properly, and new compression formats are even more work. Sony’s new compressed RAW format for Venice, called X-OCN, is supported in the newest 12.1 release of Premiere Pro, so I didn’t expect that to be a problem. Its other recording option is XAVC, which should work as well. The Alexa on the other hand uses ArriRaw files, which have been supported in Premiere for years, but each new camera shoots a slightly different “flavor” of the file based on the unique properties of that sensor. Shooting ProRes instead would virtually guarantee compatibility but at the expense of the RAW properties. (Maybe someday ProResRAW will offer the best of both worlds.) The Alexa also has the challenge of recording to Codex drives that can only be offloaded in OS X or Linux.

Once I had all of the files on my system (after using a MacBook Pro to offload the media cards), I tried to bring them into Premiere. The Red files came in just fine but didn’t play back smoothly above 1/4 resolution. They played smoothly in RedCine-X with my Red Rocket-X enabled, and they export respectably fast in AME (a five-minute 8K anamorphic sequence to UHD H.265 in 10 minutes), but for some reason Premiere Pro isn’t able to get smooth playback when using the Red Rocket-X. Next, I tried the X-OCN files from the Venice camera, which imported without issue. They played smoothly on my machine but looked as if they were locked to half or quarter res, regardless of the settings I used, even in exports. I am currently working with Adobe to get to the bottom of that, because they are able to play back my files at full quality while all my systems have the same issue. Lastly, I tried to import the Arri files from the Alexa LF, but Adobe doesn’t yet support that new variation of ArriRaw. I anticipate that will happen soon, since it shouldn’t be too difficult to add the new version to the existing support.

I ended up converting the files I needed to DNxHR in DaVinci Resolve so I could edit them in Premiere, and I put together a short video showing off the various lenses we tested with. Eventually, I need to learn how to use Resolve more efficiently, but the type of work I usually do lends itself to the way Premiere is designed — inter-cutting and nesting sequences with many different resolutions and aspect ratios. Here is a short clip demonstrating some of the lenses we tested with:

This is a web video, so even at UHD it is not meant to be an analysis of the RAW image quality, but instead a demonstration of the field of view and overall feel with various lenses and camera settings. The combination of the larger sensors and the anamorphic lenses leads to an extremely wide field of view. The table was only about 10 feet from the camera, and we can usually see all the way around it. We also discovered that when recording anamorphic on the Alexa LF, we were recording a wider image than was displaying on the monitor output. You can see in the frame grab below that the live display visible on the right side of the image isn’t displaying the full content that got recorded, which is why we didn’t notice that we were recording with the wrong settings with so much vignetting from the lens.

We only discovered this after the fact, from this shot, so we didn’t get the opportunity to track down the issue to see if it was the result of a setting in the camera or in the monitor. This is why we test things before a shoot, but we didn’t “test” before our camera test, so these things happen.

We learned a lot from the process, and hopefully some of those lessons are conveyed here. A big thanks to Brad Wilson and the rest of the team at Keslow Camera for their gear and support of this adventure. Hopefully it will help people better prepare to shoot and post with this new generation of cameras.

Main Image: DP Michael Svitak


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Red simplifies camera lineup with one DSMC2 brain

Red Digital Cinema has modified its camera lineup to include one DSMC2 camera Brain with three sensor options — Monstro 8K VV, Helium 8K S35 and Gemini 5K S35. The single DSMC2 camera Brain delivers the same high-end frame rates and data rates regardless of the sensor chosen. In addition, this streamlined approach results in a price reduction compared to Red's previous camera lineup.

“We have been working to become more efficient, as well as align with strategic manufacturing partners to optimize our supply chain,” says Jarred Land, president of Red Digital Cinema. “As a result, I am happy to announce a simplification of our lineup with a single DSMC2 brain with multiple sensor options, as well as an overall reduction on our pricing.”

Red’s DSMC2 camera Brain is a modular system that allows users to configure a fully operational camera setup to meet their individual needs. Red offers a range of accessories, including display and control functionality, input/output modules, mounting equipment, and methods of powering the camera. The camera Brain is capable of up to 60fps at 8K, offers 300MB/s data transfer speeds and simultaneous recording of RedCode RAW and Apple ProRes or Avid DNxHD/HR.

The Red DSMC2 camera Brain and sensor options:
– DSMC2 with Monstro 8K VV offers cinematic full-frame lens coverage, produces ultra-detailed 35.4-megapixel stills and delivers 17+ stops of dynamic range, for $54,500.
– DSMC2 with Helium 8K S35 offers 16.5+ stops of dynamic range in a Super 35 frame, and is available now for $24,500.
– DSMC2 with Gemini 5K S35 uses dual sensitivity modes to provide creators with greater flexibility, offering a standard mode for well-lit conditions and a low-light mode for darker environments, priced at $19,500.

Red will begin to phase out new sales of its Epic-W and Weapon camera Brains immediately. In addition to the changes to the camera lineup, Red will also begin offering new upgrade paths for customers looking to move from older Red camera systems or from one sensor to another. The full range of upgrade options can be found here.

NAB First Thoughts: Fusion in Resolve, ProRes RAW, more

By Mike McCarthy

These are my notes from the first day I spent browsing the NAB Show floor this year in Las Vegas. When I walked into the South Lower Hall, Blackmagic was the first thing I saw. And, as usual, they had a number of new products this year. The headline item is the next version of DaVinci Resolve, which now integrates the functionality of their Fusion visual effects editor within the program. While I have never felt Resolve to be a very intuitive program for my own work, it is a solution I recommend to others who are on a tight budget, as it offers the most functionality for the price, especially in the free version.

Blackmagic Pocket Cinema Camera

The Blackmagic Pocket Cinema Camera 4K looks more like a "normal" MFT DSLR camera, although it is clearly designed for video instead of stills. Recording full 4K resolution in RAW or ProRes to SD or CFast cards, it has a mini-XLR input with phantom power and uses the same LP-E6 battery as my Canon DSLR. It uses the same camera software as the Ursa line of devices and includes a copy of Resolve Studio… for $1,300. If I were going to be shooting more live-action video anytime soon, this might make a decent replacement for my 70D, moving up to 4K and HDR workflows. I am not as familiar with the Panasonic cameras it competes with most closely in the Micro Four Thirds space.

Among other smaller items, Blackmagic's new UpDownCross HD MiniConverter will be useful outside of broadcast for manipulating HDMI signals from computers or devices that have less control over their outputs. (I am looking at you, Mac users.) For $155, it will help interface with projectors and other video equipment. At $65, the bi-directional MicroConverter will be a cheaper and simpler option for basic SDI support.

AMD Radeon

AMD was showing off 8K editing in Premiere Pro, the result of an optimization by Adobe that uses the 2TB of SSD storage on AMD's Radeon Pro SSG graphics card to cache rendered frames at full resolution for smooth playback. This change currently applies to only one graphics card, so it will be interesting to see whether Adobe did this because it expects more GPUs with integrated SSDs to hit the market in the future.

Sony is showing its Crystal LED display technology in the form of a massive ZRD video wall running incredible imagery. The clarity and brightness were truly breathtaking, but obviously my camera footage rendered for the web hardly captures the essence of what they were demonstrating.

Like nearly everyone else at the show, Sony is also pushing HDR in the form of Hybrid Log Gamma, which it is building into many of its products. It also had an array of its tiny RX0 cameras on display in this backpack rig from Radiant Images.

ProRes RAW
At a higher level, one of the most interesting things I have seen at the show is the release of ProRes RAW. While currently limited to external recorders connected to cameras from Sony, Panasonic and Canon, and only supported in FCP-X, it has the potential to dramatically change future workflows if it becomes more widely supported. Many people confuse RAW image recording with the log-gamma look or other low-contrast visual interpretations, but at its core RAW imaging is a single-channel image format paired with the particular Bayer color pattern of the sensor it was recorded with.

This decreases the amount of data to store (or compress) and gives access to the "source" before it has been processed for visual interpretation — that is, before debayering and before a gamma curve is added to reverse engineer the response of the human eye relative to mechanical light sensors. This provides more flexibility and processing options during post, and it reduces the amount of data to store even before the RAW data is compressed, if it is at all. There are lots of other compressed RAW formats available — R3D, CinemaDNG, Cineform RAW and Canon CRM files among them; the main thing ProRes actually brings to the picture is widespread acceptance and trust in the compression quality.
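To make the debayering step concrete, here is a minimal (and deliberately naive) sketch of reconstructing three color channels from a single-channel RGGB mosaic by neighborhood averaging. The function names and the RGGB layout are assumptions for illustration; real camera debayering is far more sophisticated and sensor-specific.

```python
import numpy as np

def conv3x3_same(x):
    """Sum over each pixel's 3x3 neighborhood (zero-padded at the borders)."""
    p = np.pad(x, 1)
    return sum(p[i:i + x.shape[0], j:j + x.shape[1]]
               for i in range(3) for j in range(3))

def debayer_bilinear(raw):
    """Naive bilinear demosaic of a single-channel RGGB Bayer mosaic.

    raw: 2D array where even rows alternate R,G and odd rows alternate G,B.
    Returns an H x W x 3 RGB image, showing how three channels are
    reconstructed from one channel of sensor data.
    """
    h, w = raw.shape
    red = np.zeros((h, w), bool); red[0::2, 0::2] = True
    blue = np.zeros((h, w), bool); blue[1::2, 1::2] = True
    green = ~(red | blue)
    rgb = np.zeros((h, w, 3))
    for ch, mask in enumerate((red, green, blue)):
        num = conv3x3_same(np.where(mask, raw, 0.0))  # sum of known samples
        den = conv3x3_same(mask.astype(float))        # count of known samples
        rgb[..., ch] = num / np.maximum(den, 1e-9)    # neighborhood average
    return rgb
```

Note that this triples the pixel count going from RAW to RGB, which is exactly why recording the pre-debayer mosaic saves so much data.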

None of those caught on as a widespread multi-vendor format, but ProRes RAW is already supported by systems from three competing camera vendors. And the applications of RAW imaging in producing HDR content make the timing of this release optimal for encouraging vendors to support it, as they know their customers are struggling to find simpler solutions to HDR production issues.

There is no technical reason that ProRes RAW couldn't be implemented on future Arri, Red or BMD cameras, which are all currently capable of recording ProRes and RAW data (but not yet the combination). And since RAW is inherently a playback-only format (you can't alter a RAW image without debayering it), I anticipate we will see support in other applications, unless Apple wants to sacrifice the format in an attempt to increase its NLE market share.

So it will be interesting to see what other companies and products support the format in the future, and hopefully it will make life easier for people shooting and producing HDR content.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Quantum’s StorNext 6 Release Now Shipping

The industry's ongoing shift to higher-resolution formats, its use of more cameras to capture footage and its embrace of additional distribution formats and platforms are putting pressure on storage infrastructure. For content creators and owners to take full advantage of their content, storage must not only deliver scalable performance and capacity but also ensure that media assets remain readily available to users and workflow applications. Quantum's new StorNext 6 is engineered to address these requirements.

StorNext 6 is now shipping with all newly purchased Xcellis offerings and is also available at no additional cost to current Xcellis users running StorNext 5 under existing support contracts.

Leveraging its extensive real-world 4K testing and a series of 4K reference architectures developed from test data, Quantum’s StorNext platform provides scalable storage that delivers high performance using less hardware than competing systems. StorNext 6 offers a new quality of service (QoS) feature that empowers facilities to further tune and optimize performance across all client workstations, and on a machine-by-machine basis, in a shared storage environment.

Using QoS to specify bandwidth allocation to individual workstations, a facility can guarantee that more demanding tasks, such as 4K playback or color correction, get the bandwidth they need to maintain the highest video quality. At the same time, QoS allows the facility to set parameters ensuring that less timely or demanding tasks do not consume an unnecessary amount of bandwidth. As a result, StorNext 6 users can take on work with higher-resolution content and easily optimize their storage resources to accommodate the high-performance demands of such projects.
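As a rough illustration of the kind of policy such QoS enables, here is a toy reservation-based allocator. This is not Quantum's actual algorithm; the function and client names are invented for the example. Each workstation is guaranteed its reservation, and leftover bandwidth is shared in proportion to unmet demand.

```python
def allocate_bandwidth(total_mbps, clients):
    """Toy shared-storage bandwidth allocator (illustrative only).

    clients: list of (name, reserved_mbps, demand_mbps) tuples.
    Each client is first granted min(reservation, demand); leftover
    bandwidth is then shared among clients with unmet demand, in
    proportion to how much demand remains.
    """
    grants = {name: min(res, dem) for name, res, dem in clients}
    leftover = total_mbps - sum(grants.values())
    unmet = {name: dem - grants[name]
             for name, res, dem in clients if dem > grants[name]}
    total_unmet = sum(unmet.values())
    for name, gap in unmet.items():
        if total_unmet > 0 and leftover > 0:
            grants[name] += min(gap, leftover * gap / total_unmet)
    return grants

# A 4K playback seat with a 600MB/s reservation keeps its guaranteed
# bandwidth even while a transcode node asks for another 500MB/s.
grants = allocate_bandwidth(1000, [("playback", 600, 800),
                                   ("transcode", 0, 500)])
```

The key property, as in the paragraph above, is that the demanding realtime task can never be starved by background work, while idle reserved bandwidth is not wasted.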

StorNext 6 includes a new feature called FlexSpace, which allows multiple instances of StorNext — and geographically distributed teams — located anywhere in the world to share a single archive repository, allowing collaboration with the same content. Users at different sites can store files in the shared archive, as well as browse and pull data from the repository. Because the movement of content can be fully automated according to policies, all users have access to the content they need without having it expressly shipped to them.

Shared archive options include both public cloud storage, on Amazon Web Services (AWS), Microsoft Azure or Google Cloud via StorNext's existing FlexTier capability, and private cloud storage based on Quantum's Lattus object storage or, through FlexTier, third-party object storage such as NetApp StorageGrid, IBM Cleversafe and Scality Ring. In addition to simplifying collaborative work, FlexSpace also makes it easy for multinational companies to establish protected off-site content storage.

FlexSync, which is new to StorNext 6, provides a fast, highly manageable and automated way to synchronize content between multiple StorNext systems. FlexSync supports one-to-one, one-to-many and many-to-one file replication scenarios and can be configured to operate at almost any level: specific files, specific folders or entire file systems. By leveraging enhancements in file system metadata monitoring, FlexSync recognizes changes instantly and can immediately begin reflecting those changes on another system. This approach avoids the need to lock the file systems to identify changes, reducing synchronization time from hours or days to minutes, or even seconds. As a result, users can also set policies that automatically trigger copies of files so that they are available at multiple sites, enabling different teams to access content quickly and easily whenever it's needed. In addition, by providing automatic replication across sites, FlexSync offers increased data protection.

StorNext 6 also gives users greater control and selectivity in maximizing their use of storage on an ROI basis. When archive policies call for storage across disk, tape and the cloud, StorNext makes a copy for each. A new copy expiration feature enables users to set additional rules determining when individual copies are removed from a particular storage tier. This approach makes it simpler to maintain data on the storage medium most appropriate and economical and, in turn, to free up space on more expensive storage. When one of several copies of a file is removed from storage, a complementary selectable retrieve function in StorNext 6 enables users to dictate which of the remaining copies is the first priority for retrieval. As a result, users can ensure that the file is retrieved from the most appropriate storage tier.

StorNext 6 offers valuable new capabilities for those facilities that subscribe to Motion Picture Association of America (MPAA) rules for content auditing and tracking. The platform can now track changes in files and provide reports on who changed a file, when the changes were made, what was changed and whether and to where a file was moved. With this knowledge, a facility can see exactly how its team handled specific files and also provide its clients with details about how files were managed during production.

As facilities begin to move to 4K production, they need a storage system that can be expanded for both performance and capacity in a non-disruptive manner. StorNext 6 provides for online stripe group management, allowing systems to have additional storage capacity added to existing stripe groups without having to go offline and disrupt critical workflows.

Another enhancement in StorNext 6 allows StorNext Storage Manager to automate archives in an environment with Mac clients, effectively eliminating the lengthy retrieve process previously required to access an archived directory containing offline files, which can number in the hundreds of thousands or even millions.

MammothHD shooting, offering 8K footage

By Randi Altman

Stock imagery house MammothHD has embraced 8K production, shooting studio, macro, aerial, landscape, wildlife and more. Clark Dunbar, owner of MammothHD, is shooting with the Red 8K VistaVision model. He's also getting 8K submissions from his network of shooters and producers around the world, who have been calling on the Red Helium 8K S35 and Epic-W models.

"8K is coming fast — from feature films to broadcast to specialty uses, such as signage and exhibits. The Rio Olympics were shot partially in 8K, and the 2020 Tokyo Olympics will be broadcast in 8K," says Dunbar. "Manufacturers of flat-screen TVs, monitors and projectors are moving to 8K, and prices are dropping, so there is a current clientele for 8K, and we see a growing move to 8K in the near future."

So why is it important to have 8K imagery while the path is still being paved? “Having an 8K master gives all the benefits of shooting in 8K, but also allows for a beautiful and better over-sampled down-rezing for 4K or lower. There is less noise (if any, and smaller noise/grain patterns) so it’s smoother and sharper and the new color space has incredible dynamic range. Also, shooting in RAW gives the advantages of working to any color grading post conforms you’d like, and with 8K original capture, if needed, there is a large canvas in which to re-frame.”
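Dunbar's point about oversampling is easy to demonstrate: averaging 2x2 pixel blocks when downscaling 8K to 4K combines four noisy samples per output pixel, which cuts random noise roughly in half. A quick sketch using a synthetic flat-gray patch rather than real camera data:

```python
import numpy as np

def downsample_2x(img):
    """Box-filter downscale to half resolution by averaging 2x2 blocks,
    as when deriving a 4K master from an 8K capture."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(0)
# Flat 50% gray patch with simulated Gaussian sensor noise (sigma = 0.05).
noisy = 0.5 + rng.normal(0.0, 0.05, size=(512, 512))
down = downsample_2x(noisy)
# Averaging 4 independent samples divides the noise sigma by about 2.
print(noisy.std(), down.std())
```

This is the statistical reason an oversampled 4K down-res looks smoother and sharper than native 4K capture of the same scene.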

He says another benefit for 8K is in post — with all those pixels — if you need to stabilize a shot “you have much more control and room for re-framing.”

In terms of lenses, which Dunbar says “are a critical part of the selection for each shot,” current VistaVision sessions have used Zeiss Otus, Zeiss Makro, Canon, Sigma and Nikon glass from 11mm to 600mm, including extension tubes for the macro work and 2X doublers for a few of the telephotos.

"Along with how the lighting conditions affect the intent of the shot, in the field we use everything from natural light (all times of day) with on-camera filtration (ND, grad ND, polarizers) to LED panels as supplements, through studio set-ups with a choice of light fixtures," explains Dunbar. "These range from flashlights, candles, LED panels from 2-x-3 inches to 1-x-2 foot panels, old tungsten units and light through the window. Having been shooting for almost 50 years, I like to use whatever tool is around that fits the need of the shot. If not, I figure out what will do from what's in the kit."

Dunbar not only shoots, he edits and colors as well. “My edit suite is kind of old. I have a MacPro (cylinder) with over a petabyte of online storage. I look forward to moving to the next-generation of Macs with Thunderbolt 3. On my current system, I rarely get to see the full 8K resolution. I can check files at 4K via the AJA io4K or the KiPro box to a 4K TV.

“As a stock footage house, other than our occasional demo reels, and a few custom-produced client show reels, we only work with single clips in review, selection and prepping for the MammothHD library and galleries,” he explains. “So as an edit suite, we don’t need a full bore throughput for 4K, much less 8K. Although at some point I’d love to have an 8K state-of-the-art system to see just what we’re actually capturing in realtime.”

Apps used in MammothHD’s Apple-based edit suite are Red’s RedCineX (the current beta build) using the new IPP2 pipeline, Apple’s Final Cut 7 and FCP X, Adobe’s Premiere, After Effects and Photoshop, and Blackmagic’s Resolve, along with QuickTime 7 Pro.

Working with these large 8K files has been a challenge, says Dunbar. “When selecting a single frame for export as a 16-bit tiff (via the RedCine-X application), the resulting tiff file in 8K is 200MB!”
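That 200MB figure checks out from first principles. Assuming the Monstro's full 8192x4320 VistaVision raster, the raw pixel data of a 16-bit RGB frame with no compression works out to roughly 202MiB:

```python
def uncompressed_frame_bytes(width, height, channels=3, bytes_per_sample=2):
    """Raw pixel-data size of one frame, e.g. a 16-bit RGB TIFF.
    File headers and any compression are ignored."""
    return width * height * channels * bytes_per_sample

# Red Monstro 8K VV full raster: 8192 x 4320
size = uncompressed_frame_bytes(8192, 4320)
print(size / 2**20)  # ~202 MiB, in line with the ~200MB TIFFs quoted above
```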

The majority of storage used at MammothHD is Promise Pegasus and G-Tech Thunderbolt and Thunderbolt 2 RAIDs, but the company has single disks, LTO tape and even some old SDLT media ranging from FireWire to eSata.

“Like moving to 4K a decade ago, once you see it it’s hard to go back to lower resolutions. I’m looking forward to expanding the MammothHD 8K galleries with more subjects and styles to fill the 8K markets.” Until then Dunbar also remains focused on 4K+ footage, which he says is his site’s specialty.

Designed for large file sizes, Facilis TerraBlock 7 ships

Facilis, makers of shared storage solutions for collaborative media production networks, is now shipping TerraBlock Version 7. The new Facilis Hub Server, a performance aggregator that can be added to new and existing TerraBlock systems, is also available now. Version 7 includes a new browser-based, mobile-compatible Web Console that delivers enhanced workflow and administration from any connected location.

With ever-increasing media file sizes and 4K, HDR and VR workflows continually putting pressure on facility infrastructure, the Facilis Hub Server is aimed at future-proofing customers’ current storage while offering new systems that can handle these types of files. The Facilis Hub Server uses a new architecture to optimize drive sets and increase the bandwidth available from standard TerraBlock storage systems. New customers will get customized Hub Server Stacks with enhanced system redundancy and data resiliency, plus near-linear scalability of bandwidth when expanding the network.

According to James McKenna, VP of marketing/pre-sales at Facilis, “The Facilis Hub Server gives current and new customers a way to take advantage of advanced bandwidth aggregation capabilities, without rendering their existing hardware obsolete.”

The company describes the Web Console as a modernized browser-based and mobile-compatible interface designed to increase the efficiency of administrative tasks and improve the end-user experience.

Easy client setup, upgraded remote volume management and a more integrated user database are among the additional improvements. The Web Console also supports Remote Volume Push for remotely mounting volumes on any client workstation.

Asset Tracking
As file counts and storage footprints continue to grow, organizations are realizing they need some type of asset tracking system to help them move and find files in their workflow. Many hesitate to invest in traditional MAM systems due to complexity, cost and potential workflow impact.

McKenna describes the FastTracker asset tracking software as the “right balance for many customers. Many administrators tell us they are hesitant to invest in traditional asset management systems because they worry it will change the way their editors work. Our FastTracker is included with every TerraBlock system. It’s simple but comprehensive, and doesn’t require users to overhaul their workflow.”

V7 is available immediately for eligible TerraBlock servers.

Check out our interview with McKenna during NAB:

Quick Chat: Endcrawl now supports 4K

By Randi Altman

Endcrawl, a web-based end credits service, is now supporting 4K. This rollout comes on the heels of an extensive testing period — Endcrawl ran 37 different 4K pilot projects on movies for Netflix, Sony and Filmnation.

Along with 4K support comes new pricing. All projects still start on a free-forever tier with 1K preview renders, and users can upgrade to 4K for $999 or 2K for $499.

We reached out to Endcrawl co-founder John “Pliny” Eremic to find out more about the upgrade to 4K.

You are now offering unlimited 4K renders in the cloud. Why was that an important thing to include in Endcrawl, and what does that mean for users?
We’ve seen a sharp rise in the demand for 4K and UHD finishes over the past 18 months. Some of this is driven by studios, like Netflix and Sony, but there’s plenty of call for 4K on the indie and short-form side as well.

Why cloud rendering?
Speed is a big reason. 4K renders usually turn around in less than an hour. 2K renders in half that time. You’d need a beefy rig to match that performance. Convenience is another reason. Offloading renders to the cloud eliminates a huge bottleneck. If you need that late-night clutch render it’s just a few clicks away. Your workflow isn’t tied to a single workstation somewhere… or to business hours.

That’s why we decided to make Endcrawl 100% cloud-based from day one. And, yes, I’d say that using SaaS tools in production is more or less completely normalized in 2017.

Endcrawl’s UI

Are renders really unlimited?
Yes, they are. Unlimited preview renders on the free tier. Unlimited 2K or 4K uncompressed for upgraded projects. We do reserve the right to cut off a project if someone is behaving abusively or just spamming the render engine for kicks.

Have you ever had to do that?
After more than 1,000 projects, this has come up exactly zero times.

Can you mention some films Endcrawl has been used on?
Moonlight, 10 Cloverfield Lane, Ava DuVernay’s 13th, Oliver Stone’s Snowden, Spike Lee’s Chi-Raq, Pride Prejudice & Zombies and Dirty Grandpa, and about 1,000 others.

What else should people know?
– It’s still really fast. 4K renders turn around in about an hour. That’s 60 minutes from clicking “render” until you (or your post house) see a download link to fresh, zipped DPX frames. I cannot overstate how much this comes in handy.
– File sizes are small. Even though a five-minute 4K sequence weighs in at around 250GB, those same frames zip up to just 2.2GB. That’s a compression ratio of more than 100:1. On a fast pipe, you’ll download that in minutes.
– All projects are 4K under the hood now. Even if you’re on a 1K or 2K tier, our engine initially typesets and rasterizes all renders in 4K.
– 4K is still tough on the desktop. Some applications start to run out of memory even on lengthy 2K credits sequences — to say nothing of 4K. Endcrawl eliminates those worries and adds collaboration, live preview and that speedy cloud render engine.
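Those file-size numbers are consistent with uncompressed 10-bit DPX math. Assuming a 4K DCI raster at 24fps (10-bit RGB DPX packs three 10-bit samples into each 32-bit word, so about 4 bytes per pixel, headers ignored), a five-minute sequence lands right around the quoted 250GB:

```python
def dpx_sequence_gb(width, height, fps, seconds, bytes_per_pixel=4):
    """Approximate size of an uncompressed 10-bit RGB DPX image sequence.
    10-bit RGB DPX stores roughly 4 bytes per pixel; headers ignored."""
    return width * height * bytes_per_pixel * fps * seconds / 1e9

size_gb = dpx_sequence_gb(4096, 2160, 24, 5 * 60)
ratio = size_gb / 2.2  # zipped size quoted above
print(round(size_gb), round(ratio))  # ~255GB and a ratio well over 100:1
```

Credits are mostly flat black with white text, which is why such frames zip down so dramatically.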

Canon targets HDR with EOS C200, C200B cinema cameras

Canon has grown its Cinema EOS line of pro cinema cameras with the EOS C200 and EOS C200B. These new offerings target filmmakers and TV productions. They offer two 4K video formats — Canon’s new Cinema RAW Light and MP4 — and are optimized for those interested in shooting HDR video.

Alongside a newly developed dual Digic DV6 image processing system, Canon’s Dual Pixel CMOS AF system and improved operability for pros, these new cameras are built for capturing 4K video across a variety of production applications.

Based on feedback from Cinema EOS users, these new offerings will be available in two configurations, while retaining the same core technologies within. The Canon EOS C200 is a production-ready solution that can be used right out of the box, accompanied by an LCD monitor, LCD attachment, camera grip and handle unit. The camera also features a 1.77 million-dot OLED electronic view finder (EVF). For users who need more versatility and the ability to craft custom setups tailored to their subject or environment, the C200B offers cinematographers the same camera without these accessories and the EVF to optimize shooting using a gimbal, drone or a variety of other configurations.

Canon’s Peter Marr was at Cine Gear demo-ing the new cameras.

New Features
Both cameras feature the same 8.85MP CMOS sensor that combines with a newly developed dual Digic DV6 image processing system to help process high-resolution image data and record video from full HD (1920×1080) and 2K (2048×1080) to 4K UHD (3840×2160) and 4K DCI (4096×2160). A core staple of the third-generation Cinema EOS system, this new processing platform offers wide-ranging expressive capabilities and improved operation when capturing high-quality HDR video.

The combination of the sensor and a newly developed processing system also allows for the support for two new 4K file formats designed to help optimize workflow and make 4K and HDR recording more accessible to filmmakers. Cinema RAW Light, available in 4K 60p/50p at 10-bit and 30p/25p/24p at 12-bit, allows users to record data internally to a CFast card by cutting data size to about one-third to one-fifth of a Cinema RAW file, without losing grading flexibility. Due to the reduced file size, users will appreciate rich dynamic range and easier post processing without sacrificing true 4K quality. Alongside recording to a CFast card, proxy data (MP4) can also be simultaneously recorded to an SD card for use in offline editing.

Additionally, filmmakers will be able to export 4K in MP4 format to SD media cards at 60/50/30/25/24p at 8-bit. Support for UHD recording allows for use in cinema and broadcasting applications, or in scenarios where long recording times are needed while still maintaining top image quality. The digital cinema cameras also offer slow-motion full-HD recording at up to 120fps.

The Canon EOS C200 and Canon EOS C200B feature Innovative Focus Control that assists with 4K shooting demanding precise focus, whether in single-operator or remote operation. According to Canon, its Dual Pixel CMOS AF technology helps expand the distance of the subject area to enable faster focus during 4K video recording. It also allows for highly accurate continuous AF and face-detection AF when using EF lenses. For 4K video opportunities that call for focus accuracy that can't be checked on an HD monitor, users can also take advantage of the LCD Monitor LM-V1 (supplied with the EOS C200 camera), which provides intuitive touch-focus support to help filmmakers achieve sophisticated focusing even as a single operator.

In addition to these features, the cameras offer:
• Oversampling HD processing: enhances sensitivity and helps minimize noise
• Wide DR Gamma: helps reduce overexposure by retaining continuity with a gamma curve
• ISO 100-102400 and 54dB gain: high quality in both low-sensitivity and low-light environments
• In-camera ND filter: internal ND unit allows cleaning of glass for easier maintenance
• ACESproxy support: delivers standardized color space in images, helping to improve efficiency
• Two SD card and one CFast card slots for internal recording
• Improved grip and Cinema-EOS-system-compatible attachment method
• Support for Canon Cine-Servo and EF cinema lenses

Editing and grading of the Cinema RAW Light video format will be supported in Blackmagic Resolve. Editing will also be possible in Avid Media Composer, using a Canon RAW plugin for Avid Media Access. This format can also be processed using the Canon application, Cinema RAW Development.

Adobe's Premiere Pro CC will also support the format by the end of 2017. Editing will also be possible in Apple's Final Cut Pro X in the second half of this year, using the Canon RAW Plugin for Final Cut Pro X.

The Canon EOS C200 and EOS C200B are scheduled to be available in August for estimated retail prices of $7,499 and $5,999, respectively. The EOS C200 comes equipped with additional accessories including the LM-V1 LCD monitor, LA-V1 LCD attachment, GR-V1 camera grip and HDU-2 handle unit. Available in September, these accessories will also be sold separately.

Pixelogic acquires Sony DADC NMS’ creative services unit

Pixelogic, a provider of localization and distribution services, has completed the acquisition of the creative services business unit of Sony DADC New Media Solutions, which specializes in 4K, UHD, HDR and IMF workflows for features and episodics. The move brings an expansion of Pixelogic’s significant services to the media and entertainment industry and provides additional capabilities, including experienced staff, proprietary technology and an extended footprint.

According to John Suh, co-president of Pixelogic, the acquisition “expands our team of expert media engineers and creative talent, extends our geographic reach by providing a fully established London operation and further adds to our capacity and capability within an expansive list of tools, technologies, formats and distribution solutions.”

Seth Hallen

Founded less than a year ago, Pixelogic currently employs over 240 people worldwide and is led by industry veterans Suh and Rob Seidel. While the company is headquartered in Burbank, California, it has additional operations in Culver City, California, as well as London and Cairo.

Sony DADC NMS Creative Services was under the direction of Seth Hallen, who joins Pixelogic as senior VP of business development and strategy. All Sony DADC NMS Creative Services staff, technology and operations are now part of Pixelogic. “Our business model is focused on the deep integration of localization and distribution services for movies and television products,” says Hallen. “This supply chain will require significant change in order to deliver global day and date releases with collapsed distribution windows, and by partnering closely with our customers we are setting out to innovate and help lead this change.”

Bluefish444 supports Adobe CC and 4K HDR with Epoch card

Bluefish444 Epoch video audio and data I/O cards now support the advanced 4K high dynamic range (HDR) workflows offered in the latest versions of the Adobe Creative Cloud.

Epoch SDI and HDMI solutions are suited for Adobe’s Premiere Pro CC, After Effects CC, Audition CC and other tools that are part of the Creative Cloud. With GPU-accelerated performance for emerging post workflows, including 4K HDR and video over IP, Adobe and Bluefish444 are providing a strong option for pros.

Bluefish444’s Adobe Mercury Transmit support for Adobe Creative Cloud brings improved performance in demanding workflows requiring realtime video I/O from UHD and 4K HDR sequences.

Bluefish444 Epoch video card support adds:
• HD/SD SDI input and output
• 4K/2K SDI input and output
• 12/10/8-bit SDI input and output
• 4K/2K/HD/SD HDMI preview
• Quad split 4K UHD SDI
• Two sample interleaved 4K UHD SDI
• 23, 24, 25, 29, 30fps video input and output
• 48, 50, 59, 60fps video input and output
• Dual-link 1.5Gbps SDI
• 3Gbps level A & B SDI
• Quad link 1.5Gbps and 3Gbps SDI
• AES digital audio
• Analog audio monitoring
• RS-422 machine control
• 12-bit video color space conversions

“Recent updates have enabled performance which was previously unachievable,” reports Tom Lithgow, product manager at Bluefish444. “Thanks to GPU acceleration, and [the] Adobe Mercury Transmit plug-in, Bluefish444 and Adobe users can be confident of smooth realtime video performance for UHD 4K 60fps and HDR content.”

JVC GY-LS300CH camera offering 4K 4:2:2 recording, 60p output

JVC has announced version 4.0 of the firmware for its GY-LS300CH 4KCAM Super 35 handheld camcorder. The new firmware increases color resolution to 4:2:2 (8-bit) for 4K recording at 24/25/30p onboard to SDXC media cards. In addition, the IP remote function now allows remote control and image viewing in 4K. When using 4K 4:2:2 recording mode, the video output from the HDMI/SDI terminals is HD.

The GY-LS300CH also now has the ability to output Ultra HD (3840 x 2160) video at 60/50p via its HDMI 2.0b port. Through JVC’s partnership with Atomos, the GY-LS300CH integrates with the new Ninja Inferno and Shogun Inferno monitor recorders, triggering recording from the camera’s start/stop operation. Plus, when the camera is set to J-Log1 gamma recording mode, the Atomos units will record the HDR footage and display it on their integrated, 7-inch monitors.

“The upgrades included in our Version 4.0 firmware provide performance enhancements for high raster recording and IP remote capability in 4K, adding even more content creation flexibility to the GY-LS300CH,” says Craig Yanagi, product marketing manager at JVC. “Seamless integration with the new Ninja Inferno will help deliver 60p to our customers and allow them to produce outstanding footage for a variety of 4K and UHD productions.”

Designed for cinematographers, documentarians and broadcast production departments, the GY-LS300CH features JVC’s 4K Super 35 CMOS sensor and a Micro Four Thirds (MFT) lens mount. With its “Variable Scan Mapping” technology, the GY-LS300CH adjusts the sensor to provide native support for MFT, PL, EF and other lenses, which connect to the camera via third-party adapters. Other features include Prime Zoom, which allows shooters using fixed-focal (prime) lenses to zoom in and out without loss of resolution or depth, and a built-in HD streaming engine with Wi-Fi and 4G LTE connectivity for live HD transmission directly to hardware decoders as well as JVCVideocloud, Facebook Live and other CDNs.

The Version 4.0 firmware upgrade is free of charge for all current GY-LS300CH owners and will be available in late May.

Eizo intros DCI-4K reference monitor for HDR workflows

Eizo will be at NAB next week demonstrating its ColorEdge Prominence CG3145 31.1-inch reference monitor, which offers DCI-4K resolution (4096×2160) for pro HDR post workflows.

Eizo says the ColorEdge Prominence CG3145 can display both very bright and very dark areas on the screen without sacrificing the integrity of either. The monitor achieves the 1,000cd/m² (typical) high brightness level needed to display HDR content, and a typical contrast ratio of 1,000,000:1 for displaying true blacks.

The ColorEdge Prominence CG3145 supports both the HLG (Hybrid Log-Gamma) and PQ (Perceptual Quantization) curves so post pros can rely on a monitor compliant with industry standards for HDR video.

The ColorEdge Prominence CG3145 supports various video formats, including an HDMI input compatible with 10-bit 4:2:2 at 50/60p. The DisplayPort input supports up to 10-bit 4:4:4 at 50/60p. Additional features include coverage of 98 percent of the DCI-P3 color space, smooth gradations via 10-bit display from a 24-bit LUT (look-up table) and an optional light-shielding hood.

The ColorEdge Prominence CG3145 will begin shipping in early 2018.

In addition to the new ColorEdge Prominence CG3145 HDR reference monitor, Eizo currently offers optional HLG and PQ curves for many of its current CG Series monitors. The optimized gamma curves render images to appear more true to how the human eye perceives the real world compared to SDR. This HDR gamma support is available as an option for ColorEdge CG318-4K, CG248-4K, CG277 and CG247X. Both gamma curves were standardized by the ITU as ITU-R BT.2100. In addition, the PQ curve was standardized by SMPTE as ST-2084.
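For readers curious about what the PQ curve actually does, here is a minimal Python sketch of the ST-2084 transfer function. The constants come straight from the SMPTE specification; the code is purely illustrative and is not anything Eizo ships.

```python
# PQ (SMPTE ST-2084) transfer function: maps a normalized code value
# E in [0, 1] to absolute display luminance in cd/m^2 (0 to 10,000 nits).
# Constants are taken directly from the ST-2084 specification.
M1 = 2610 / 16384          # 0.1593...
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_eotf(e: float) -> float:
    """Decode a PQ-encoded value to display luminance in nits."""
    ep = e ** (1 / M2)
    y = max(ep - C1, 0.0) / (C2 - C3 * ep)
    return 10000.0 * y ** (1 / M1)

def pq_inverse_eotf(nits: float) -> float:
    """Encode absolute luminance (nits) to a PQ code value in [0, 1]."""
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

print(round(pq_eotf(1.0)))        # 10000 (peak of the PQ range)
print(pq_inverse_eotf(100.0))     # ~0.51 (100-nit SDR reference white)
```

Notice how roughly half the code range is spent below 100 nits; that perceptual allocation of code values is what distinguishes PQ from a conventional SDR gamma.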

Comprimato plug-in manages Ultra HD, VR files within Premiere

Comprimato, makers of GPU-accelerated storage compression and video transcoding solutions, has launched Comprimato UltraPix. This video plug-in offers proxy-free, auto-setup workflows for Ultra HD, VR and more on hardware running Adobe Premiere Pro CC.

The challenge for post facilities finishing in 4K or 8K Ultra HD, or working on immersive 360 VR projects, is managing the massive amount of data. The files are large, requiring a lot of expensive storage; they can be slow and cumbersome to load, and achieving realtime editing performance is difficult.

Comprimato UltraPix addresses this by building on JPEG2000, a compression format that offers high image quality (including a mathematically lossless mode) and generates smaller versions of each frame as an inherent part of the compression process. Comprimato UltraPix delivers the file at a size that the user’s hardware can accommodate.
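To illustrate why JPEG2000 yields proxies essentially for free: its wavelet transform halves the frame in each dimension at every decomposition level, so a decoder can reconstruct any of a ladder of dyadic sizes from a single code stream, with no separate proxy files rendered. A small sketch of that ladder (the five-level count is a typical JPEG2000 default, not a documented UltraPix detail):

```python
import math

def jpeg2000_resolutions(width, height, levels=5):
    """List the frame sizes recoverable from one JPEG2000 code stream.

    Each wavelet decomposition level halves the image in both
    dimensions, so level n yields ceil(w / 2**n) x ceil(h / 2**n).
    """
    return [(math.ceil(width / 2**n), math.ceil(height / 2**n))
            for n in range(levels + 1)]

# A 4K DCI frame yields a ladder of progressively smaller decodes:
for w, h in jpeg2000_resolutions(4096, 2160):
    print(f"{w} x {h}")
# 4096 x 2160, 2048 x 1080, 1024 x 540, 512 x 270, 256 x 135, 128 x 68
```

An editor's hardware can then pick whichever rung of the ladder it can decode in realtime, which is the behavior the plug-in exposes inside Premiere Pro.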

Once Comprimato UltraPix is loaded on any hardware, it configures itself with auto-setup, requiring no specialist knowledge from the editor who continues to work in Premiere Pro CC exactly as normal. Any workflow can be boosted by Comprimato UltraPix, and the larger the files the greater the benefit.

Comprimato UltraPix is multi-platform video processing software that switches video resolution on the fly. It is a lightweight, downloadable video plug-in for OS X, Windows and Linux systems. Editors can switch between 4K, 8K, full HD, HD or lower resolutions without proxy-file rendering or transcoding.

“JPEG2000 is an open standard, recognized universally, and post production professionals will already be familiar with it as it is the image standard in DCP digital cinema files,” says Comprimato founder/CEO Jiří Matela. “What we have achieved is a unique implementation of JPEG2000 encoding and decoding in software, using the power of the CPU or GPU, which means we can embed it in realtime editing tools like Adobe Premiere Pro CC. It solves a real issue, simply and effectively.”

“Editors and post professionals need tools that integrate ‘under the hood’ so they can focus on content creation and not technology,” says Sue Skidmore, partner relations for Adobe. “Comprimato adds a great option for Adobe Premiere Pro users who need to work with high-resolution video files, including 360 VR material.”

Comprimato UltraPix plug-ins are currently available for Adobe Premiere Pro CC and Foundry Nuke and will be available on other post and VFX tools soon. You can download a free 30-day trial or buy Comprimato UltraPix for $99 a year.

Sony intros extended-life SSDs for 4K or higher-bitrate recording 

Sony is expanding its media lineup with the introduction of two new G Series professional solid-state drives in 960GB (SV-GS96) and 480GB (SV-GS48) capacities. Sony says that these SSDs were designed to meet the growing need for external video recording devices docked to camcorders or high-performance DSLRs.

The new SSDs are an option for compatible video recorders, offering videographers stable high-speed capabilities, a sense of security and a lower cost of ownership due to their longer life. Using Sony’s Error Correction Code technology, the 960GB G Series SSD achieves up to 2400TBW (Terabytes Written), while the 480GB drive can reach 1200TBW, resulting in less frequent replacement and increased ROI. 2400TBW translates to about 10 years of use for the SV-GS96, if data is fully written to the drive an average of five times per week.
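Sony's 10-year figure checks out with simple arithmetic, assuming the stated usage pattern of five full-capacity writes per week:

```python
# Back-of-the-envelope check of the endurance claim for the SV-GS96.
# Assumed usage pattern (Sony's own): the full 960GB drive is written
# an average of five times per week.
drive_capacity_tb = 0.96          # 960GB expressed in TB
endurance_tbw = 2400              # rated Terabytes Written
full_writes_per_week = 5

tb_written_per_year = drive_capacity_tb * full_writes_per_week * 52
years_of_service = endurance_tbw / tb_written_per_year
print(round(years_of_service, 1))  # 9.6, i.e. roughly 10 years
```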

According to Sony, the drives are also designed for ultra-fast, stable data writing. Sony G Series SSDs feature built-in technology that prevents sudden speed decreases while ensuring stable recording of high-bitrate 4K video without frame dropping. For example, used with an Atomos Shogun Inferno, G Series SSDs can stably record video in 4K 60p (ProRes 422 HQ) mode.

When paired with the necessary connection cables, the new G Series drives can be effortlessly removed from a recorder and connected to a computer for file downloading, making editing easier and faster with read speeds up to 550MB/s.

G Series SSDs also offer data protection technology that keeps content secure and intact, even if a sudden power failure occurs. Adding to the drive’s stability, a durable connector withstands repeated insertion and removal up to 3,000 times — or six times more tolerance than standard SATA connectors — even in harsh conditions.

Sony’s SSD G Series is expected to be available May 2017 at the suggested retail prices of $539 for SV-GS96 and $287 for SV-GS48.

Post vet Katie Hinsen now head of operations at NZ’s Department of Post

Katie Hinsen, who many of you may know as co-founder of the Blue Collar Post Collective, has moved back to her native New Zealand and has been named head of operations at Auckland’s Department of Post.

Most recently at New York City’s Light Iron, Hinsen comes from a technical and operations background, with credits on over 80 major productions. Over a 20-year career she has worked as an engineer, editor, VFX artist, stereoscopic 3D artist, colorist and finishing artist on commercials, documentaries, television, music videos, shorts and feature films. In addition to Light Iron, she has had stints at New Zealand’s Park Road Post Production and Goldcrest in New York.

Hinsen has throughout her career been involved in both production and R&D of new digital acquisition and distribution formats, including stereoscopic/autostereoscopic 3D, Red, HFR, HDR, 4K+ and DCP. Her expertise includes HDR, 4K and 8K workflows.

“I was looking for a company that had the forward-thinking agility to be able to grow in a rapidly changing industry. New Zealand punches well above its weight in talent and innovation, and now is the time to use this to expand our wider post production ecosystem,” says Hinsen.

“Department of Post is a company that has shown rapid growth and great success by taking risks, thinking outside the box, and collaborating across town, across the country and across the world. That’s a model I can work with, to help bring and retain more high-end work to Auckland’s post community. We’ve got an increasing number of large-scale productions choosing to shoot here. I want to give them a competitive reason to stay here through Post.”

Department of Post was started by James Brookes and James Gardner in 2008. They provide offline, online, color, audio and deliverables services to film and television productions, both local and international.

DP John Kelleran shoots Hotel Impossible

Director of photography John Kelleran shot season eight of the Travel Channel’s Hotel Impossible, a reality show in which struggling hotels receive an extensive makeover by veteran hotel operator and hospitality expert Anthony Melchiorri and team.

Kelleran, who has more than two decades experience shooting reality/documentary projects, called on Panasonic VariCam LT 4K cinema camcorders for this series.

Working for New York production company Atlas Media, Kelleran shot a dozen Hotel Impossible hour-long episodes in locations that include Palm Springs, Fire Island, Cape May, Cape Hatteras, Sandusky, Ohio, and Albany, New York. The production, which began last April and wrapped in December 2016, spent five days in each location.

Kelleran liked the VariCam LT’s dual native ISOs of 800/5000. “I tested ISO5000 by shooting in my own basement at night, and had my son illuminated only by a lighter and whatever light was coming through the small basement window, one foot candle at best. The footage showed spectacular light on the boy.”

Kelleran regularly deployed ISO5000 on each episode. “The crux of the show is chasing out problems in dark corners and corridors, which we were able to do like never before. The LT’s extreme low light handling allowed us to work in dark rooms with only motivated light sources like lamps and windows, and absolutely keep the honesty of the narrative.”

Atlas Media is handling the edit, using Avid Media Composer. “We gave post such a solid image that they had to spend very little time or money on color correction, but could rather devote resources to graphics, sound design and more,” concludes Kelleran.

Hollywood’s Digital Jungle moves to Santa Clarita

Digital Jungle, a long-time Hollywood-based post house, has moved its operations to a new facility in Santa Clarita, California, which has become a growing hub for production and post in the suburbs of Los Angeles. The new headquarters is now home to both Digital Jungle Post and its recent off-shoot Digital Jungle Pictures, a feature film development and production studio.

“I don’t mind saying, it was a bit of an experiment moving to Santa Clarita,” explains Digital Jungle president and chief creative Dennis Ho. “With so many filmmakers and productions working out here — including Disney/ABC Studios, Santa Clarita Studios and Universal Locations — this area has developed into a vast untapped market for post production professionals. I decided that now was a good time to tap into that opportunity.”

Digital Jungle’s new facility offers the full complement of digital workflow solutions for HD to 4K. The facility has multiple suites featuring Smoke, DaVinci Resolve, audio recording via Avid’s S6 console and Pro Tools, production offices, a conference area, a full kitchen and a client lounge.

Digital Jungle is well into the process of adding further capabilities with a new high-end luxury DI 4K theater and screening room, greenscreen stage, VFX bullpen, multiple edit bays and additional production offices as part of their phase two build-out.

Digital Jungle Post services include DI/color grading; VFX/motion graphics; audio recording/mixing and sound design; ADR and VO; HD to 4K deliverables for tape and data; DCI and DCDM; promo/bumper design and film/television title design.

Commenting on Digital Jungle Pictures, Ho says, “It was a natural step for me. I started my career by directing and producing promos and interstitials for network TV, studios and distributors. I think that our recent involvement in producing several independent films has enhanced our credibility on the post side. Filmmakers tend to feel more comfortable entrusting their post work to other filmmakers. One example is we recently completed audio post and DI for a new Hallmark film called Love at First Glance.”

In addition to Love at First Glance, Digital Jungle Productions’ recent projects include indie films Day of Days, A Better Place (available now on digital and DVD) and Broken Memories, which was screened at the Sedona Film Festival.

 


The colorful dimensions of Amazon’s Mozart in the Jungle

By Randi Altman

How do you describe Amazon’s Mozart in the Jungle? Well, in its most basic form it’s a comedy about the changing of the guard — or maestro — at the New York Philharmonic, and the musicians that make up that orchestra. When you dig deeper you get a behind-the-scenes look at the back-biting and crazy that goes on in the lives and heads of these gifted artists.

Timothy Vincent

Based on the novel Mozart in the Jungle: Sex, Drugs, and Classical Music by oboist Blair Tindall, the series — which won the Golden Globe last year and was nominated this year — has shot in a number of locations over its three seasons, including Mexico and Italy.

Since its inception, Mozart in the Jungle has been finishing in 4K and streaming in both SDR and HDR. We recently reached out to Technicolor’s senior color timer, Timothy Vincent, who has been on the show since the pilot, to find out more about the show’s color workflow.

Did Technicolor have to gear up infrastructure-wise for the show’s HDR workflow?
We were doing UHD 4K already and were just getting our HDR workflows worked out.

What is the workflow from offline to online to color?
The dailies are done in New York based on the Alexa K1S1 709 LUT. (Technicolor On-Location Services handled dailies out of Italy, and Technicolor PostWorks in New York.) After the offline and online, I get the offline reference made with the dailies so I can refer to it if I have a question about what was intended.

If someone was unsure about watching in HDR versus SDR, what would you tell them?
The emotional feel of both the SDR and the HDR is the same. That is always the goal in the HDR pass for Mozart. One of the experiences that is enhanced in the HDR is the depth of field and the three-dimensional quality you gain in the image. This really plays nicely with the feel in the landscapes of Italy, the stage performances where you feel more like you are in the audience, and the long streets of New York just to name a few.

When I’m grading the HDR version, I’m able to retain more highlight detail than I was in the SDR pass. For someone who has not yet been able to experience HDR, I would actually recommend that they watch an episode of the show in SDR first and then in HDR so they can see the difference between them. At that point they can choose what kind of viewing experience they want. I think that Mozart looks fantastic in both versions.

What about the “look” of the show? What kind of direction were you given?
We established the look of the show based on conversations and collaboration in my bay. It has always been a filmic look with soft blacks and yellow warm tones as the main palette for the show. Then we added in a fearlessness to take the story in and out of strong shadows. We shape the look of the show to guide the viewers to exactly the story that is being told and the emotions that we want them to feel. Color has always been used as one of the storytelling tools on the show. There is a realistic beauty to the show.

What was your creative partnership like with the show’s cinematographer, Tobias Datum?
I look forward to each episode and discovering what Tobias has given me as palette and mood for each scene. For Season 3 we picked up where we left off at the end of Season 2. We had established the look and feel of the show and only had to account for a large portion of Season 3 being shot in Italy, making sure to convey the different quality of light and the warmth and beauty of Italy. We did this by playing with natural warm skin tones and the contrast of light and shadow he was creating for the different moods and locations. The same can be said for the two episodes in Mexico in Season 2. I now know what Tobias likes and can make decisions I’m confident he will like.

From a director and cinematographer’s point of view, what kind of choices does HDR open up creatively?
It depends on if they want to maintain the same feel of the SDR or if they want to create a new feel. If they choose to go in a different direction, they can accentuate the contrast and color more with HDR. You can keep more low-light detail while being dark, and you can really create a separate feel to different parts of the show… like a dream sequence or something like that.

Any workflow tricks/tips/trouble spots within the workflow or is it a well-oiled machine at this point?
I have actually changed the way I grade my shows based on the evolution of this show. My end results are the same, but I learned how to build grades that translate to HDR much more easily and consistently.

Do you have a color assistant?
I have a couple of assistants that I work with who help me with prepping the show, getting proxies generated, color tracing and some color support.

What tools do you use — monitor, software, computer, scope, etc.?
I am working on Autodesk Lustre 2017 on an HP Z840, while monitoring on both a Panasonic CZ950 and a Sony X300. I work on Omnitek scopes off the downconverter to 2K. The show is shot on both Alexa XT and Alexa Mini, framing for 16×9. All finishing is done in 4K UHD for both SDR and HDR.

Anything you would like to add?
I would only say that everyone should be open to experiencing both SDR and HDR and giving themselves that opportunity to choose which they want to watch and when.

Digital Anarchy’s new video sharpening tool for FCP X, Premiere and AE

Digital Anarchy has released Samurai Sharpen for Video, a sharpening tool for artists using Apple FCP X, Adobe After Effects and Adobe Premiere Pro. Samurai Sharpen is edge-aware sharpening software that uses a number of intelligent algorithms, as well as GPU acceleration to give users working with HD and 4K footage precise image quality control.

Samurai Sharpen provides video editors and colorists the ability to sharpen the details they want and be highly selective of those areas they want untouched. This means areas like skin tones or dark, noisy background areas, which usually do not look better sharpened, can be protected and not sharpened.

The built-in masks within the software allow editors and colorists to apply specific focus on areas within the video for sharpening, while avoiding dark, shadow areas and bright, highlight areas. This allows for maintaining pristine image quality throughout the shot. The edge-aware algorithm provides another level of masking by restricting the sharpening to only the significant details you choose. For example, enhancing details of an actor’s eyes while preventing areas like skin from being affected.
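As a rough illustration of the general technique (not Digital Anarchy's actual algorithm, which is proprietary), edge-aware sharpening can be sketched as an unsharp mask whose strength is gated by a local edge-magnitude mask, so flat regions such as skin tones or noisy shadows are left mostly untouched while significant edges are enhanced:

```python
import numpy as np

def box_blur(img, radius=2):
    """Simple separable box blur, the low-pass for the unsharp mask."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)

def edge_aware_sharpen(img, amount=1.0, threshold=0.05):
    """Sharpen a grayscale image (values 0..1) only near strong edges.

    `threshold` is a hypothetical tuning parameter: gradients below it
    are treated as flat or noisy and receive no sharpening.
    """
    low = box_blur(img)
    detail = img - low                      # high-frequency detail layer
    gy, gx = np.gradient(img)
    edges = np.hypot(gx, gy)                # local edge magnitude
    mask = np.clip((edges - threshold) / threshold, 0.0, 1.0)  # soft gate
    return np.clip(img + amount * mask * detail, 0.0, 1.0)
```

The edge mask plays the role the article describes: it restricts the sharpening to significant details while protecting smooth areas from being crunched.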

While the software supports After Effects, Premiere Pro and Final Cut Pro X currently, support for Avid and OpenFX is coming soon. Samurai Sharpen is available now. It’s regularly priced at $129, but Digital Anarchy is offering it for $99 until November 15.

Quick Chat: Josh Haynie, Light Iron’s VP of US operations

Post services company Light Iron has named veteran post pro Josh Haynie VP of US operations, a newly created position. Based in Light Iron’s Hollywood facility, Haynie will be responsible for leveraging the company’s resources across Los Angeles, New York, New Orleans and future locations.

Haynie joins Light Iron after 13 years at Efilm, where, as managing director, he maintained direct responsibility for all aspects of the company’s operations, including EC3 (on-location services), facility dailies, trailers, digital intermediate, home video and restoration. He managed a team of 100-plus employees. Previously, Haynie held positions at Sunset Digital, Octane/Lightning Dubs and other production and post companies. Haynie is an associate member of the ASC and is also actively involved in the HPA, SMPTE, and VES.

“From the expansion of Light Iron’s episodic services and New York facilities to the development of the color science in the new Millennium DXL camera, it is clear that the integration of Panavision and Light Iron brings significant benefits to clients,” says Haynie.

He was kind enough to take time out of his schedule to answer some of our questions…

Your title hints at Light Iron opening up in new territories. Can you talk about this? What is happening in the industry that makes this the right move?
We want to be strategically located near the multiple Panavision locations. Productions and filmmakers need the expertise and familiarity of Light Iron resources in the region with the security and stability of a solid infrastructure. Projects often have splinter and multiple units in various locations, and they demand a workflow continuity in these disparate locations. We can help facilitate projects working in those various regions and offer unparalleled support and guidance.

What do you hope to accomplish in your first 6 to 12 months? What are your goals for Light Iron?
I want to learn from this very agile team of professionals and bring in operational and workflow options to the rapidly changing production/post production convergence we are all encountering. We have a very solid footing in LA, NY and NOLA. I want to ensure that each unit is working together using effective skills and technology to collaborate and allow filmmakers creative freedom. My goal is to help navigate this team through the traditional growth patterns as well as the unpredictable challenges that lie ahead in the emerging market.

You have a wealth of DI experience and knowledge. How has DI changed over the years?
The change depends on the elevation. From a very high level, it was the same simple process for many years: shoot, edit, scan, VFX, color — and our hero was always a film print. Flying lower, we have seen massive shifts in technology that have rewritten the playbooks. The DI really starts in the camera testing phase and begins to mature during the production photography stage. The importance of look setting, dailies and VFX collaboration take on a whole new meaning with each day of shooting.

The image data that is captured needs to be available for near-set cutting while VFX elements are being pulled within a few short days of photography. This image data needs to be light and nimble, albeit massive in file size and run time. The turnarounds are shrinking in the feature space exponentially. We are experiencing international collaboration on the finish and color of each project, and the final render dates are increasingly close to worldwide release dates. We are now seeing a tipping point like we encountered a few years back when we asked ourselves, “Is the hero a print or DCP?” Today, we are at the next hero question: DCP or HDR?

Do you have any advice for younger DI artists based on your history?
I think it is always good to learn from the past and understand how we got here. I would say younger artists need to aggressively educate themselves on workflow, technology and collaboration. Each craft in the journey has experienced rapid evolution in the last few years. There are many outlets to learn about the latest capture, edit, VFX, sound and distribution techniques being offered, and that research time needs to be on everyone’s daily task list. Seeking out emerging creative talent is critical at this stage as well. Every day a filmmaker is formulating a vision that is new to the world. We are fortunate here at Light Iron to work with these emerging filmmakers who share the same passion for taking that bold next step in storytelling.

Bluefish444 offering new range of I/O cards with Kronos

Bluefish444, makers of uncompressed 4K/2K/HD/SD video I/O cards for Windows, Mac OS X and Linux, has introduced the Kronos range of video and audio I/O cards. The new line extends the feature set of Bluefish444’s Epoch video cards, which support up to 4K 60 frame per second workflows. Kronos is developed for additional workflows requiring Ultra HD up to 8K, high frame rates up to 120fps, high dynamic range and video over IP.

With these capabilities, Bluefish444 cards are developed to support all areas of TV and feature film production, post, display and restoration, in addition to virtual reality and augmented reality.

Kronos adds video processing technologies including resolution scaling, interlacing and de-interlacing, hardware codec support, and SDI-to-IP and IP-to-SDI conversion, and continues to offer the 12-bit color space conversion and low-latency capabilities of Epoch.

“With the choice of HD BNC SD/HD/3G connectivity, or SFP+ connectivity enabling greater than 3G SDI and Video over IP across 10Gbps Ethernet, the Kronos range will suit today’s demanding requirements and cover emerging technologies as they mature,” says product manager Tom Lithgow. “4K ultra high definition, high frame rate and ultra-high frame rate video, high dynamic range, video over IP and hardware assisted processing are now available to OEM developers, professional content creators and system integrators with the Bluefish444 Kronos range.”

HDMI 2.0 I/O, additional SMPTE 2022 IP standards and emerging IP standards are earmarked for future support via firmware update.

Kronos will offer the choice of SDI I/O connectivity with the Kronos elektron featuring eight high-density BNC connectors capable of SD/HD/3G SDI. Each HD BNC connector is fully bi-directional enabling numerous configuration options, including eight input, eight output, or a mixture of SDI input and output connections.

The Kronos optikós offers future-proof connectivity with three SFP+ cages in addition to two HD BNC connectors for SD/HD/3G SDI I/O. The SFP+ cages on Kronos optikós provide a wide range of connectivity options, exposing greater than 3G SDI, IP connectivity across 10Gb Ethernet, and the flexibility to choose among numerous physical interfaces.

All Kronos cards will have an eight-lane Gen 3 PCIe interface and will provide access to high bandwidth UHD, high frame rate and high dynamic range video IO and processing across traditional SDI and also emerging IP standards such as SMPTE 2022.

Kronos specs include:
* SD SMPTE259M
* HD 1.5G SMPTE292M
* 3G (A+B) SMPTE424M
* ASI
* 4:2:2:4 / 4:4:4:4 SDI
* Single Link / Dual Link / Quad Link interfaces
* 12/10-bit SDI
* Full 4K frame buffer
* 3Gbps Bypass Relays
* 12-bit video processing pipeline
* 4 x 4 x 32bit Matrix
* MR2 Routing resources
* Hardware Keyer (2K/HD)
* Customizable and flexible pixel formats
* AES Audio Input / AES Audio Output
* LTC I/O
* RS-422
* Bi/Tri-level Genlock Input & Crosslocking
* Genlock loop through
* VANC complete access
* HANC Embedded Audio/Payload ID/Custom Packets/RP188
* MSA and Non-MSA compatible SFP+
* SMPTE 2022-6

The Kronos range will be available in Q4 2016 with pricing announced then.

Sony launches Z Series line of UHD TVs for 4K HDR

Sony recently introduced a line of UHD television displays that could be suitable as client monitors in post houses. Sony’s new Z series display technology — including the X930D and X940D — adopts Sony’s Backlight Master Drive backlight boosting technology, which expands brightness and contrast to better exploit 4K HDR. While testing will need to be done, rumor has it the monitors may comply with Ultra HD Alliance requirements, which would make them an excellent large-screen choice for the client experience.

To further enhance contrast, the Backlight Master Drive includes a dense LED structure, discrete lighting control and an optical design with a calibrated beam LED. Previously, local dimming was controlled by zones with several LEDs. The discrete LED control feature allows the Backlight Master Drive to dim and boost each LED individually for greater precision, contrast and realism.

Additionally, the Z series features a newly developed 4K image processor, the 4K HDR Processor X1 Extreme. Combined with Backlight Master Drive, the Z series features expanded contrast and more accurate color expression. The 4K Processor X1 Extreme incorporates three new technologies: an object-based HDR remaster, dual database processing and Super Bit Mapping 4K HDR. With these three technologies, 4K HDR Processor X1 Extreme reproduces a wide variety of content with immersive 4K HDR picture quality.

The Z series runs on Android TV with a Sony user interface that includes a new content bar with enhanced content navigation, voice search and a genre filtering function. Instead of selecting a program from several channels, users can select from favorite genres, such as sports, music and news.

Pricing is as follows:

  • XBR65Z9D, 65″ class (64.5″ diagonal), $6,999 MSRP, available summer 2016
  • XBR75Z9D, 75″ class (74.5″ diagonal), $9,999 MSRP, available summer 2016
  • XBR100Z9D, 100″ class (99.5″ diagonal), pricing and availability details to be announced later this year.

Storage Workflows for 4K and Beyond

Technicolor-Postworks and Deluxe Creative Services share their stories.

By Beth Marchant

Once upon a time, an editorial shop was a sneaker-net away from the other islands in the pipeline archipelago. That changed when the last phases of the digital revolution set many traditional editorial facilities into swift expansion mode to include more post production services under one roof.

The consolidating business environment in the post industry of the past several years then brought more of those expanded, overlapping divisions together. That’s a lot for any network to handle, let alone one containing some of the highest quality and most data-dense sound and pictures being created today. The networked storage systems connecting them all must be robust, efficient and realtime without fail, but also capable of expanding and contracting with the fluctuations of client requests, job sizes, acquisitions and, of course, evolving technology.

There’s a “relief valve” in the cloud and object storage, say facility CTOs minding the flow, but it’s still a delicate balance between local pooled and tiered storage and iron-clad cloud-based networks their clients will trust.

Technicolor-Postworks
Joe Beirne, CTO of Technicolor-PostWorks New York, is probably as familiar as one can be with complex nonlinear editorial workflows. A user of Avid’s earliest NLEs, an early adopter of networked editing and an immersive interactive filmmaker who experimented early with bluescreen footage, Beirne began his career as a technical advisor and producer for high-profile mixed-format feature documentaries, including Michael Moore’s Fahrenheit 9/11 and the last film in Godfrey Reggio’s KOYAANISQATSI trilogy.

Joe Beirne

In his 11 years as a technology strategist at Technicolor-PostWorks New York, Beirne has also become fluent in evolving color, DI and audio workflows for clients such as HBO, Lionsgate, Discovery and Amazon Studios. CTO since 2011, when PostWorks NY acquired the East Coast Technicolor facility and the color science that came with it, he now oversees the increasingly complicated ecosystem that moves and stores vast amounts of high-resolution footage and data while simultaneously holding those separate and variously intersecting workflows together.

As the first post facility in New York to handle petabyte levels of editorial-based storage, Technicolor-PostWorks learned early how to manage the data explosion unleashed by digital cameras and NLEs. “That’s not because we had a petabyte SAN or NAS or near-line storage,” explains Beirne. “But we had literally 25 to 30 Avid Unity systems that were all in aggregate at once. We had a lot of storage spread out over the campus of buildings that we ran on the traditional PostWorks editorial side of the business.”

The TV finishing and DI business that developed at PostWorks in 2005, when Beirne joined the company (he was previously a client), eventually necessitated a different route. “As we’ve grown, we’ve expanded out to tiered storage, as everyone is doing, and also to the cloud,” he says. “Like we’ve done with our creative platforms, we have channeled our different storage systems and subsystems to meet specific needs. But they all have a very promiscuous relationship with each other!”

TPW’s high-performance storage in its production network is a combination of local or semi-locally attached near-line storage tethered by several Quantum StorNext SANs, all of it air-gapped — or physically segregated — from the public Internet. “We’ve got multiple SANs in the main Technicolor mothership on Leroy Street with multiple metadata controllers,” says Beirne. “We’ve also got some client-specific storage, so we have a SAN that can be dedicated to a particular account. We did that for a particular client who has very restrictive policies about shared storage.”

TPW’s editorial media, for the most part, resides in Avid’s ISIS system and is in the process of transitioning to its software-defined replacement, Nexis. “We have hundreds of Avids, a few Adobe and even some Final Cut systems connected to that collection of Nexis and ISIS and Unity systems,” he says. “We’re currently testing the Nexis pipeline for our needs but, in general, we’re going to keep using this kind of storage for the foreseeable future. We have multiple storage servers that serve that part of our business.”

Beirne says most every project the facility touches is archived to LTO tape. “We have a little bit of disc-to-tape archiving going on for the same reasons everybody else does,” he adds. “And some SAN volume hot spots that are all SSD (solid state drives) or a hybrid.” The facility is also in the process of improving the bandwidth of its overall switching fabric, both on the Fibre Channel side and on the Ethernet side. “That means we’re moving to 32Gb and multiple 16Gb links,” he says. “We’re also exploring a 40Gb Ethernet backbone.”

Technicolor-Postworks 4K theater at their Leroy Street location.

This backbone, he adds, carries an exponential amount of data every day. “Now we have what are like two nested networks of storage at a lot of the artist workstations,” he explains. “That’s a complicating feature. It’s this big, kind of octopus, actually. Scratch that: it’s like two octopi on top of one another. That’s not even mentioning the baseband LAN network that interweaves this whole thing. They, of course, are now getting intermixed because we are also doing IT-based switching. The entire, complex ecosystem is evolving and everything that interacts with it is evolving right along with it.”

The cloud is providing some relief and handles multiple types of storage workflows across TPW’s various business units. “Different flavors of the commercial cloud, as well as our own private cloud, handle those different pools of storage outside our premises,” Beirne says. “We’re collaborating right now with an international account in another territory and we’re touching their storage envelope through the Azure cloud (Microsoft’s enterprise-grade cloud platform). Our Azure cloud and theirs touch and we push data from that storage back and forth between us. That particular collaboration happened because we both had an Azure instance, and those kinds of server-to-server transactions that occur entirely in the cloud work very well. We also had a relationship with one of the studios in which we made a similar connection through Amazon’s S3 cloud.”

Given the trepidations most studios still have about the cloud, Beirne admits there will always be some initial, instinctive mistrust from both clients and staff when you start moving any content away from computers that are not your own and you don’t control. “What made that first cloud solution work, and this is kind of goofy, is we used Aspera to move the data, even though it was between adjacent racks. But we took advantage of the high-bandwidth backbone to do it efficiently.”

Both TPW in New York and Technicolor in Los Angeles have since leveraged the cloud aggressively. “We have our own cloud that we built, and big Technicolor has a very substantial purpose-built cloud, as well as Technicolor Pulse, their new storage-related production service in the cloud. They also use object storage and have some even newer technology that will be launching shortly.”

The caveat to moving any storage-related workflow into the cloud is thorough and continual testing, says Beirne. “Do I have more concern for my clients’ media in the cloud than I do when sending my own tax forms electronically? Yeah, I probably do,” he says. “It’s a very, very high threshold that we need to pass. But that said, there’s quite a bit of low-impact support stuff that we can do on the cloud. Review and approval stuff has been happening in the cloud for some time.” As a result, the facility has seen an increase, like everyone else, in virtual client sessions, such as live color and live mix sessions from city to city or continent to continent. “To do that, we usually have a closed circuit that we open between two facilities and have calibrated displays on either end. And, we also use PIX and other normal dailies systems.”

“How we process and push this media around ultimately defines our business,” he concludes. “It’s increasingly bigger projects that are made more demanding from a computing point of view. And then spreading that out in a safe and effective way to where people want to access it, that’s the challenge we confront every single day. There’s this enormous tension between the desire to be mobile and open and computing everywhere and anywhere, with these incredibly powerful computer systems we now carry around in our pockets and the bandwidth of the content that we’re making, which is high frame rate, high resolution, high dynamic range and high everything. And with 8K — HDR and stereo wavefront data goes way beyond 8K and what the retina even sees — and 10-bit or more coming in the broadcast chain, it will be more of the same.” TPW is already doing 16-bit processing for all of its film projects and most of its television work. “That’s piles and piles and piles of data that also scales linearly. It’s never going to stop. And we have a VR lab here now, and there’s no end of the data when you start including everything in and outside of the frame. That’s what keeps me up at night.”

Deluxe Creative Services
Before becoming CTO at Deluxe Creative Services, Mike Chiado had a 15-year career as a color engineer and image scientist at Company 3, the grading and finishing powerhouse acquired by Deluxe in 2010. He now manages the pipelines of a commercial, television and film Creative Services division that encompasses not just dailies, editorial and color, but sound, VFX, 3D conversion, virtual reality, interactive design and restoration.

Mike Chiado

That’s a hugely data-heavy load to begin with, and as VR and 8K projects become more common, managing the data stored and coursing through DCS’ network will get even more demanding. Branded companies currently under the monster Deluxe umbrella include Beast, Company 3, DDP, Deluxe/Culver City, Deluxe VR, Editpool, Efilm, Encore, Flagstaff Studios, Iloura, Level 3, Method Studios, StageOne Sound, Stereo D, and Rushes.

“Actually, that’s nothing when you consider that all the delivery and media teams from Deluxe Delivery and Deluxe Digital Cinema are downstream of Creative Services,” says Chiado. “That’s a much bigger network and storage challenge at that level.” Still, the storage challenges of Chiado’s segment are routinely complicated by the twin monkey wrenches of the collaborative and computer kind that can unhinge any technology-driven art form.

“Each area of the business has its own specific problems that recur: television has its issues, commercial work has its issues and features have theirs. For us, commercials and features are more alike than you might think, partly due to the constantly changing visual effects but also due to shifting schedules. Television is much more regimented,” he says. “But sometimes we get hard drives in on a commercial or feature and we think, ‘Well, that’s not what we talked about at all!’”

Company 3’s file-based digital intermediate work quickly clarified Chiado’s technical priorities. “The thing that we learned early on is realtime playback is just so critical,” he says. “When we did our very first file-based DI job 13 years ago, we were so excited that we could display a certain resolution. OK, it was slipping a little bit from realtime, maybe we’ll get 22 frames a second, or 23, but then the director walked out after five minutes and said, ‘No. This won’t work.’ He couldn’t care less about the resolution because it was always about realtime and solid playback. Luckily, we learned our lesson pretty quickly and learned it well! In Deluxe Creative Services, that still is the number one priority.”

It’s also helped him cut through unnecessary sales pitches from storage vendors unfamiliar with Deluxe’s business. “When I talk to them, I say, ‘Don’t tell me about bit rates. I’m going to tell you a frame rate I want to hit and a resolution, and you tell me if we can hit it or not with your solution. I don’t want to argue bits; I want to tell you this is what I need to do and you’re going to tell me whether or not your storage can do that.’ The storage vendors that we’re going to bank our A-client work on better understand fundamentally what we need.”
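Chiado’s frame-rate-and-resolution framing maps directly onto a sustained-throughput number. As a rough, hypothetical sketch (assuming uncompressed 10-bit RGB frames, a worst case; the function and figures are illustrative, not Deluxe’s actual spec):

```python
def playback_throughput_mbs(width, height, fps, bits_per_pixel=30):
    """Sustained bandwidth (MB/s) needed to play uncompressed frames
    in realtime; 30 bits/pixel approximates 10-bit RGB."""
    bytes_per_frame = width * height * bits_per_pixel / 8
    return bytes_per_frame * fps / 1e6

# 4K DCI at 24fps needs roughly 796 MB/s sustained:
print(round(playback_throughput_mbs(4096, 2160, 24)))   # 796
# 8K at 24fps quadruples the pixel count:
print(round(playback_throughput_mbs(8192, 4320, 24)))   # 3185
```

Framed this way, a vendor either sustains roughly 800 MB/s per uncompressed 4K stream, with headroom, or the conversation is over.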

Because some of the Deluxe company brands share office space — Method and Company 3 moved into a 63,376-square-foot former warehouse in Santa Monica a few years ago — they have access to the same storage infrastructure. “But there are often volumes specially purpose-built for a particular job,” says Chiado. “In that way, we’ve created volumes focused on supporting 4K feature work and others set up specifically for CG desktop environments that are shared across 400 people in that one building. We also have similar business units in Company 3 and Efilm, so sometimes it makes sense that we would want, for artist or client reasons, to have somebody in a different location from where the data resides. For example, having the artist in Santa Monica and the director and DP in Hollywood is something we do regularly.”

Chiado says Deluxe has designed and built with network solution and storage solution providers a system “that suits our needs. But for the most part, we’re using off-the-shelf products for storage. The magic is how we tune them to be able to work with our systems.”

Those vendors include Quantum, DDN Storage and EMC’s Isilon network-attached storage. “For our most robust needs, like 4K feature workflows, we rely on DDN,” he says. “We’ve actually already done some 8K workflows. Crazy world we live in!” For long-term archiving, each Deluxe Creative Services location worldwide has an LTO tape robot library. “In some cases, we’ll have a near-line tier-two volume that stages it. And for the past few years, we’ve been using object storage in some locations to help with that.”

Although the entire group of Deluxe divisions and offices is linked by a robust 10GigE network that sometimes takes advantage of dark fiber (unused fiber-optic cable leased from larger communications companies), Chiado says the storage they use is all very specific to each business unit. “We’re moving stuff around all the time, but projects are pretty much residing in one spot or another,” he says. “Often, there are a thousand reasons why — it may be for tax incentives in a particular location, it may be for project-specific needs. Or it’s just that we’re talking about the London and LA locations.”

With one eye on the future and another on budgets, Chiado says pooled storage has helped DCS keep costs down while managing larger and larger subsets of data-heavy projects. “We are always on the lookout for ways to handle the next thing, like the arrival of 8K workflows, but we’ve gained huge, huge efficiencies from pooled storage,” he says. “So that’s the beauty of what we build, specific to each of our world locations. We move it around if we have to between locations but inside that location, everybody works with the content in one place. That right there was a major efficiency in our workflows.”

Beyond that, he says, how to handle 8K is still an open question. “We may have to make an island, and it’s been in testing so far, but we do everything we can to keep it in one place and leverage whatever technology is required for the job,” Chiado says. “We have isolated instances of SSDs (solid-state drives) but we don’t have large-scale deployment of SSDs yet. On the other end, we’re working with cloud vendors, too, to be able to maximize our investments.”

Although the company is still working through cloud security issues, Chiado says Deluxe is “actively engaging with cloud vendors because we aren’t convinced that our clients are going to be happy with the security protocols in place right now. The nature of the business is we are regularly involved with our clients and MPAA and have ongoing security audits. We also have a group within Deluxe that helps us maintain the best standards, but each show that comes in may have its own unique security needs. It’s a constant, evolving process. It’s been really difficult to get our heads and our clients’ heads around using the cloud for rendering, transcoding or for storage.”

Luckily, that’s starting to change. “We’re getting good traction now, with a few of the studios getting ready to greenlight cloud use and our own pipeline development to support it,” he adds. “They are hand in hand. But I think once we move over this hurdle, this is going to help the industry tremendously.”

Beyond those longer-term challenges, Chiado says the day-to-day demands of each division haven’t changed much. “Everybody always needs more storage, so we are constantly looking at ways to make that happen,” he says. “The better we can monitor our storage and make our in-house people feel comfortable moving stuff off near-line to tape and bring it back again, the better we can put the storage where we need it. But I’m very optimistic about the future, especially about having a relief valve in the cloud.”

Our main image is the shared 4K theater at Company 3 and Method.

Blending Ursa Mini and Red footage for Aston Martin spec spot

By Daniel Restuccio

When producer/director Jacob Steagall set out to make a spec commercial for Aston Martin, he chose to lens it on the Blackmagic Ursa Mini 4.6k and the Red Scarlet. He says the camera combo worked so seamlessly that he dares anyone to tell which shots are Blackmagic and which are Red.

L-R Blackmagic’s Moritz Fortmann and Shawn Carlson with Jacob Steagall and Scott Stevens.

“I had the idea of filming a spec commercial to generate new business,” says Steagall. He convinced the high-end car maker to lend him an Aston Martin 2016 V12 Vanquish for a weekend. “The intent was to make a nice product that could be on their website and also be a good-looking piece on the demo reel for my production company.”

Steagall immediately pulled together his production team, which consisted of co-director Jonathan Swecker and cinematographers Scott Stevens and Adam Pacheco. “The team and I collaborated on the vision for the spot, which was to be quick, clean and to the point, but we would also accentuate the luxury and sexiness of the car.”

“We had access to the new Blackmagic Ursa Mini 4.6k and an older Red Scarlet with the MX chip,” says Stevens. “I was really interested in seeing how both cameras performed.”

He set up the Ursa Mini to shoot ProRes HQ at Ultra HD (3840×2160) and the Scarlet at 8:1 compression at 4K (4096×2160). He used both Canon still camera primes and a 24-105mm zoom, switching them from camera to camera depending on the shot. “For some wide shots we set them up side by side,” explains Stevens. “We also would have one camera shooting the back of the car and the other camera shooting a close-up on the side.”
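The two recording formats imply quite different data rates. As a back-of-the-envelope sketch, assuming ProRes 422 HQ scales roughly linearly from Apple’s published figure of about 220 Mb/s for 1920×1080 at 29.97fps (the function and the linear-scaling assumption are illustrative):

```python
# Scale Apple's published ProRes 422 HQ rate for 1920x1080 @ 29.97fps
# (~220 Mb/s) by pixel count and frame rate. Real rates vary slightly
# with content, since ProRes is variable-bitrate around a target.
BASE_MBPS, BASE_PIXELS, BASE_FPS = 220, 1920 * 1080, 29.97

def prores_hq_mbps(width, height, fps):
    return BASE_MBPS * (width * height / BASE_PIXELS) * (fps / BASE_FPS)

uhd_rate = prores_hq_mbps(3840, 2160, 23.976)  # the Ursa Mini setting here
print(round(uhd_rate))                    # ~704 Mb/s
print(round(uhd_rate / 8 * 3600 / 1000))  # ~317 GB per hour of footage
```

Redcode at 8:1 compression lands well below that, which is one reason mixed-camera shoots like this one budget storage per format rather than per camera count.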

In addition to his shooting duties, Stevens also edited the spot, using Adobe Premiere, and exported the XML into Blackmagic Resolve Studio 12. Stevens notes that, in addition to loving cinematography, he’s also “really into” color correction. “Jacob (Steagall) and I liked the way the Red footage looked straight out of the camera in the RedGamma4 color space. I matched the Blackmagic footage to the Red footage to get a basic look.”

Blackmagic colorist Moritz Fortmann took Stevens’ basic color correction and finessed the grade even more. “The first step was to talk to Jacob and Scott and find out what they were envisioning, what feel and look they were going for. They had already established a look, so we saved a few stills as reference images to work off. The spot was shot on two different types of cameras, and in different formats. Step two was to analyze the characteristics of each camera and establish a color correction to match the two. Step three was to tweak and refine the look. We did what I would describe as a simple color grade, only relying on primaries, without using any Power Windows or keys.”

If you’re planning to shoot mixed footage, Fortmann suggests you use cameras with similar characteristics, matching resolution, dynamic range and format. “Shooting RAW and/or Log provides for the highest dynamic range,” he says. “The more ‘room’ a colorist has to make adjustments, the easier it will be to match mixed footage. When color correcting, the key is to make mixed footage look consistent. One camera may perform well in low light while another one does not. You’ll need to find that sweet spot that works for all of your footage, not just one camera.”

Daniel Restuccio is a writer and chair of the multimedia department at California Lutheran University.

Quick Chat: New president/GM Deluxe TV post services Dom Rom

Domenic Rom, a fixture in the New York post community for 30 years, has been promoted to president and GM of Deluxe TV Post Production Services. Rom was most recently managing director of Deluxe’s New York studio, which incorporates Encore/Company 3/Method. He will now be leading Deluxe’s global services for television, specifically, the Encore and Level 3 branded companies. He will be making the move to Los Angeles.

Rom’s resume is long. He joined DuArt Film Labs in 1984 as a colorist, working his way up to EVP of the company, running both its digital and film lab divisions. In 2000, he joined stock footage/production company Sekani (acquired by Corbis), helping to build the first fully digital content distribution network. In 2002, he founded The Lab at Moving Images, the first motion picture lab to open in New York in 25 years. It was acquired by PostWorks, which named Rom COO overseeing its Avid rentals, remote set-ups, audio mixing, color correction and editorial businesses. In 2010, Rom joined Technicolor NY as SVP of post production. When PostWorks NY acquired Technicolor NY, Rom again became COO of the now-larger company. He joined Deluxe in 2013 as GM of its New York operations.

“I love what I’m seeing today in the industry,” he says. “It has been said many times, but we’re truly in a golden age of television. The best entertainment in the world is coming from the networks and a whole new generation of original content creators. It’s exciting to be in a position to service that work. There are few, if any, companies that have invested in the research, technology and talent to the degree Deluxe has, to help clients take advantage of the latest advancements — whether it’s HDR, 4K, or whatever comes next, to create amazing new experiences for viewers.”

postPerspective reached out to Rom, as he was making his transition to the West Coast, to find out more about his new role and his move.

What does this position mean to you?
This position is the biggest honor and challenge of my career. I have always considered Encore and Level 3 to be the premier television facilities in the world, and to be responsible for them is amazing and daunting all at the same time. I am so looking forward to working with the facilities in Vancouver, Toronto, New York and London.

What do you expect/hope to accomplish in this new role?
To bring our worldwide teams even closer and grow the client relationships even stronger than they already are, because at the end of the day this is a total relationship business and probably my favorite part of the job.

How have you seen post and television change over the years?
I was talking about this with the staff out here the other day. I have seen the business go from film to 2-inch tape to 1-inch to D2 to D5 to HDCAM (more formats than I can remember) to nonlinear editing and digital acquisition — I could go on and on. Right now the quality and sheer amount of content coming from the studios, networks, cablenets and many, many new creators is both exciting and challenging. The fact that this business is constantly changing helps to keep me young.

How is today’s production and post technology helping make TV an even better experience for audiences?
In New York we just completed the first Dolby Vision project for an entire episodic television season (which I can’t name yet), and it looks beautiful. HDR opens up a whole new visual world to the artists and the audience.

Are you looking forward to living in Los Angeles?
I have always danced with the idea of living in LA throughout my career, and to do so this far in is interesting timing. My family and, most importantly, my brand new grandson are all on the east coast so I will maintain my roots there while spreading them out west as well.

Talking VR content with Phillip Moses of studio Rascali

Phillip Moses, head of VR content developer Rascali, has been working in visual effects for over 25 years. His resume boasts some big-name films, including Alice in Wonderland, Speed Racer and Spider-Man 3, just to name a few. Seven years ago he launched a small boutique visual effects studio, called The Resistance VFX, with VFX supervisor Jeff Goldman.

Two years ago, after getting a demo of an Oculus pre-release Dev Kit 2, Moses realized that “we were poised on the edge of not just a technological breakthrough, but what will ultimately be a new platform for consuming content. To me, this was a shift almost as big as the smartphone, and an exciting opportunity for content creators to begin creating in a whole new ecosystem.”

Phillip Moses

Shortly after that, his friends James Chung and Taehoon Oh launched Reload Studios, with the vision of creating the first independently-developed first-person shooter game, designed from the ground up for VR. “As one of the first companies formed around the premise of VR, they attracted quite a bit of interest in the non-gaming sector as well,” he explains. “Last year, they asked me to come aboard and direct their non-gaming division, Rascali. I saw this as a huge opportunity to do what I love best: explore, create and innovate.”

Rascali has been busy. They recently debuted trailers for their first episodic VR projects, Raven and The Storybox Project, on YouTube, Facebook/Oculus Video, Jaunt, Littlstar, Vrideo and Samsung MilkVR. Let’s find out more…

You recently directed two VR trailers. How is directing for VR different than directing for traditional platforms?
Directing for VR is a tricky beast and requires a lot of technical knowledge of the whole process that would not normally be required of directors. To be fair, today’s directors are a very savvy bunch, and most have a solid working knowledge of how visual effects are used in the process. However, for the way I have chosen to shoot the series, it requires the ability to have a pretty solid understanding of not just what can be done, but how to actually do it. To be able to previsualize the process and, ultimately, the end result in your head first is critical to being able to communicate that vision down the line.

Also, from a script and performance perspective, I think it’s important to start with a very important question of “Why VR?” And once you believe you have a compelling answer to that question, then you need to start thinking about how to use VR in your story.  Will you require interaction and participation from the viewer? Will you involve the viewer in any way? Or will you simply allow VR to serve as an additional element of presence and immersion for the viewer?

While you gain many things in VR, you also have to go into the process with a full knowledge of what you ultimately lose. The power of lenses, for example, to capture nuance and to frame an image to evoke an emotional response, is all but lost. You find yourself going back to exploring what works best in a real-world framing — almost like you are directing a play in an intimate theater.

What is the biggest challenge in the post workflow for VR?
Rendering! Everything we are producing for Raven is at 4K left eye, 4K right eye and 60fps. The rendering process alone guarantees that the process will take longer than you hoped. It also guarantees that you will need more data storage than you ever thought necessary.
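Those render targets add up quickly. A rough sketch with a purely hypothetical mezzanine bitrate per eye (the 500 Mb/s figure is an assumption for illustration, not Rascali’s actual pipeline):

```python
def stereo_vr_storage_gb(minutes, mbps_per_eye=500):
    """Rough storage for stereo VR deliverables: double the per-eye
    bitrate for left + right, then convert Mb/s to GB over the runtime.
    mbps_per_eye=500 is a hypothetical 4K/60fps mezzanine rate."""
    total_mbps = 2 * mbps_per_eye
    return total_mbps / 8 * 60 * minutes / 1000

# A 15-minute episode at 500 Mb/s per eye:
print(round(stereo_vr_storage_gb(15)))  # ~112 GB, before renders and versions
```

And that is only the finished deliverable; intermediate renders and revisions multiply it several times over, which is why storage outruns every estimate.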

But other than rendering, I find that the editorial process is also more challenging. With VR, those shots that you thought you were holding onto way too long are actually still too short, and it involves an elaborate process to conform everything for review in a headset between revisions. In many ways, it’s similar to the old process of making your edit decisions, then walking the print into the screening room. You forget how tedious the process can be.
By the way, I’m looking forward to integrating some realtime 360 review into the editorial process. Make it happen, Adobe/Avid!

These trailers are meant to generate interest from production partners to green light these as full episodic series. What is the intended length of each episode, and what’s the projected length of time from concept to completion for each episode of the all-CG Storybox, and live-action Raven?
Each one of these projects is designed for completely different audiences, so the answer is a bit different for each one. For Storybox, we are looking to keep each episode under five minutes, with the intention that it is a fairly easy-to-consume piece of content that is accessible to a broad spectrum of ages. We really hope to make the experiences fun, playful and surprising for the viewer, and to create a context for telling these stories that fuels the imagination of kids.

For Storybox, I believe that we can start delivering finished episodes before the end of the third quarter — with a full season representing 12 to 15 episodes. Raven, on the other hand, is a much more complex undertaking. While the VR market is being developed, we are betting on the core VR consumers to really want stories and experiences that range closer to 12 to 15 minutes in duration. We feel this is enough time to tell more complex stories, but still make each episode feel like a fantastic experience that they could not experience anywhere else. If green-lit tomorrow, I believe we would be looking at a four-month production schedule for the pilot episode.

Rascali is a division of Reload Studios, which is developing VR games. Is there a technology transfer of workflows and pipelines and shared best practices across production for entertainment content and games within the company?
Absolutely! While VR is a new technology, there is such a rich heritage of knowledge present at Reload Studios. For example, one question that VR directors are asking themselves is: “How can I direct my audience’s attention to action in ways that are organic and natural?” While this is a new question for film directors — who typically rely on camera to do this work for them — this is a question that the gaming community has been answering for years. Having some of the top designers in the game industry at our disposal is an invaluable asset.

That being said, Reload is much different than most independent game companies. One of their first hires was senior Disney animator Nik Ranieri. Our producing team is composed of top animation producers from Marvel and DC. We have a deep bench of people who give the whole company a very comprehensive knowledge of how content of all types is created.

What was the equipment set-up for the Raven VR shoot? Which camera was used? What tools were used in the post pipeline?
Much of the creative IP for Raven is very much in development, including designs, characters, etc. For this reason, we elected to construct a teaser that highlighted immersive VR vistas that you could expect in the world we are creating. This required us to lean very heavily on the visual effects / CG production process — the VFX pipeline included Autodesk 3ds Max, rendering in V-Ray, with some assistance from Nuke and even Softimage XSI. The entire project was edited in Adobe Premiere.

For our one live-action element, this was shot with a single Red camera, and then projected onto geometry for accurate stereo integration.

Where do you think the prevailing future of VR content is? Narrative, training, therapy, gaming, etc.?
I think your question represents the future of VR. Games, for sure, are going to be leading the charge, as this demographic is the only one on a large scale that will be purchasing the devices required to build a viable market. But much more than games, I’m excited to see growth in all of the areas you listed above, including, most significantly, education. Education could be a huge winner in the growing VR/AR ecosystem.

The reason I elected to join Rascali is to help provide solutions, and pave the way for them, in markets that mostly don’t yet exist. It’s exciting to be a part of a new industry that has the power to improve and benefit so many aspects of the global community.

New England SMPTE holding free session on UHD/HDR/HFR, more

The New England Section of SMPTE is holding a free day-long “New Technologies Boot Camp” that focuses on working with high resolution (UHD, 4K and beyond), high-dynamic-range (HDR) imaging and higher frame rates (HFR). In addition, they will discuss how to maintain resolution independence on screens of every size, as well as how to leverage IP and ATSC 3.0 for more efficient movement of this media content.

The boot camp will run from 9am to 9pm on May 19 at the Holiday Inn in Dedham, Massachusetts.

“These are exciting times for those of us working on the technical side of broadcasting, and the array of new formats and standards we’re facing can be a bit overwhelming,” says Martin P. Feldman, chair of SMPTE New England Section. “No one wants — or can afford — to be left behind. That’s why we’re gathering some of the industry’s foremost experts for a free boot camp designed to bring engineers up to speed on new technologies that enable more efficient creation and delivery of a better broadcast product.”

Boot camp presentations will include:

• “High-Dynamic-Range and Wide Color Gamut in Production and Distribution” by Hugo Gaggioni, chief technical officer at Sony Electronics.
• “4K/UHD/HFR/HDR — HEVC H.265 — ATSC 3.0” by Karl Kuhn of Tektronix.
• “Where Is 4K (UHD) Product Used Today — 4K Versus HFR — 4K and HFR Challenges” by Bruce Lane of Grass Valley.
• “Using MESH Networks” by Al Kornak of JVC Kenwood Corporation.
• “IP in Infrastructure-Building (Replacing HD-SDI Systems and Accommodating UHD)” by Paul Briscoe of Evertz Microsystems.
• “Scripted Versus Live Production Requirements” by Michael Bergeron of Panasonic.
• “The Transition from SDI to IP, Including IP Infrastructure and Monitoring” by John Shike of SAM (formerly Snell/Quantel).
• “8K, High-Dynamic-Range, OLED, Flexible Displays” by consultant Peter Putman.
• “HDR: The Great, the Okay, and the WTF” by Mark Schubin, engineer-in-charge at the Metropolitan Opera, Sesame Street and Great Performances (PBS).

The program will conclude with a panel discussion by the program’s presenters.

No RSVP is required, and both SMPTE members and non-members are welcome.

BenQ offering 4K UHD monitor for editing pros

For video editors looking for a new monitor, BenQ America has made available the PV3200PT IPS, which is purpose-built for post workflows. The 32-inch 4K Ultra HD display offers color precision via 10-bit, 100 percent sRGB color, which follows the Rec. 709 standard. Available now, the unit sells for $1,499.

The PV3200PT reproduces color tones with a Delta-E value of less than or equal to two and features a 14-bit 3D LUT to display an accurate color mixture for improved RGB color blending. By holding brightness and chromaticity deviation to less than 10 percent across the panel, the monitor offers a more consistent viewing experience. The monitor also supports simple hardware and software calibration, allowing users to adjust the unit’s image-processing chip without altering graphics card data.
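That Delta-E spec can be made concrete: the classic CIE76 Delta-E is simply the Euclidean distance between two colors in CIELAB space, so a value of two or less means the displayed color sits within a small radius of the reference. A minimal sketch (the color values here are invented for illustration, and a monitor’s published spec may use a newer Delta-E formula):

```python
import math

def delta_e_cie76(lab1, lab2):
    """Euclidean distance between two CIELAB colors (CIE76 Delta-E)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# A reference color and a measurement that is off by one unit on each axis.
reference = (50.0, 20.0, -10.0)   # (L*, a*, b*)
measured = (51.0, 21.0, -11.0)

de = delta_e_cie76(reference, measured)
print(round(de, 2))  # sqrt(3) ≈ 1.73 — inside a ΔE ≤ 2 tolerance
```

Differences below a Delta-E of roughly two are generally considered barely perceptible to a trained eye, which is why it is a common tolerance for grading displays.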

An OSD controller provides preset custom modes so users can easily switch between Rec. 709, EBU and SMPTE-C modes. The PV3200PT is part of BenQ’s Eye-Care models, which are designed to increase visual comfort while performing common computer tasks. While conventional screens flicker at a rate of 200 times per second, BenQ’s ZeroFlicker technology eliminates flickering at all brightness levels, which reduces eye fatigue and provides a more comfortable viewing experience during prolonged sessions of computer use. Further capabilities include ergonomic customization such as height, tilt, pivot and swivel adjustments.

Watch this space in coming weeks for a review of the product via video editor Brady Betzel.

UHD Alliance’s Victor Matsuda: updates from NAB 2016

Victor Matsuda from the UHD Alliance was at NAB 2016. The Alliance was formed about 15 months ago as 4K UHD products began exploding into the market. The goal of the Alliance was to establish certifications for these new products and for content. All of this is to ensure a quality experience for consumers, who will ultimately drive 4K/UHD adoption throughout the market.

Watch our video with Matsuda to find out more.

NAB 2016 from an EP’s perspective

By Tara Holmes

Almost two weeks ago, I found myself at NAB for the first time. I am the executive producer of color and finishing at Nice Shoes, a post production studio in New York City. I am not an engineer and I am not an artist, so why would an EP go to NAB? I went because one of my main goals for 2016 is to make sure the studio remains at the forefront of technology. While I feel that our engineering team and artists represent us well in that respect, I wanted to make sure that I, along with our producers, were fully educated on these emerging technologies.

One of our first priorities for NAB was to meet with top monitor manufacturers in the hope of landing on UHD HDR monitors that would meet our standards for professional client viewing. We came to the conclusion that the industry is not there yet, and we have more research to do before we upgrade our studio viewing environments.

Everyone with me was in agreement: they aren’t where they need to be. Most are only outputting around 400-800 nits and have luminance and contrast issues. None of this should stop the process of coloring for HDR. As the colorist’s master monitor, the Sony BVM-X300 OLED, which we are currently using, seems the ideal choice, as you can still work in traditional Rec. 709 as well as Rec. 2020 for HDR.

After checking out some monitors, we headed to the FilmLight booth to go over the 5.0 upgrades to Baselight. Our colorist Ron Sudul, along with Nice Shoes Creative Studio VFX supervisor Adrian Winter, sat with me and the FilmLight reps to discuss the upgrades, which included incredible new isolation tracking capabilities. These upgrades will reinvent what can be achieved in the color suite, from realtime comps to retouch being done in color. The possibilities are exciting.

I also spent time learning about the upgrades to FilmLight’s Flip, which is their on-set color hardware. The Flip lets you develop your color look on set, apply it during your edit process (with the Baselight plug-in for Avid) and refine it in final color, all without affecting your raw files. In addition to the Flip, they have developed software called Prelight that supports on-set look development and grading. I asked if these new technologies could enable us to do high-end things like sky replacements on set, and was told that the hardware within the Flip very well could.

We also visited our friends at DFT, manufacturer of the Scanity film scanner, to catch up and discuss the business of archiving. With Scanity, Nice Shoes can scan at 4K where other scanners only go up to 2K resolution. This is a vital tool, not only in preserving past materials, but in future-proofing for emerging formats when archiving scans from film.

VR
On Sunday evening, before the exhibits opened, we attended a panel on VR hosted by the Foundry. At this event we got to experience a few of the most talked-about VR projects, including Defrost, one of the first narrative VR films, from Grease director Randal Kleiser. He was on the panel along with moderator Morris May (CEO/founder, Specular Theory), Bryn Mooser (co-founder, RYOT), Tim Dillon (executive producer, MPC) and Jake Black (head of VR, Create Advertising).

The Foundry’s VR panel.

The panel inspired me to delve deeper into the VR world, and on Wednesday I spent most of my last day exploring the Virtual & Augmented Reality Pavilion. In addition to seeing the newest VR camera rig offerings and experiencing a live VR feed, as well as demo-ing the Samsung Gear, I explored viewing options for the color workflow. Some people I spoke to mentioned that multiple Oculus set-ups all attached to a single feed was the way to go for color workflow, but another option that we did a very preliminary exploration of was the “dome” possibility, which offers a focused 180-degree view for everyone involved to comment on the same section of a VR scene. This would enable all involved to be sure they are experiencing and viewing the same thing at the same time.

HDR Workflow
Another panel we attended was about HDR workflows. Nice Shoes has already had the opportunity to work on HDR material and has begun to develop workflows for this emerging medium. Most HDR deliverables are for episodic and long-form work for companies such as Netflix and Hulu. It may be some time before commercial clients request an HDR deliverable, but the workflows will be much the same, so the development being done now is extremely valuable.

My biggest takeaway was that there are still no set standards: there’s Dolby Vision vs. HDR10 vs. PQ vs. others. But it appears that everyone agrees standards are not needed right now. We need to get tools into the hands of the artists and figure out what works best; standards will come out of that. The good news is that we appear to be future-proofed against a change in standard. For the most part, every camera we are shooting on is capturing for HDR, so should standards change (say, from 1,000 nits to 10,000 nits), the footage and process are still there to go back in and color for the new request.

Summing Up
I truly believe my time at NAB has prepared me for the myriad questions that will come up throughout the year and will help us develop our workflows to evolve the creative process of post. I’ll be sure to be there again next year to prepare myself for the questions of 2017 and beyond.

Our Main Image: The view walking into the South Hall Lower at the LVCC.

Fusion launches at NAB 2016 with 4K OLED display

Industry veterans Carl J. Dempsey and Steve Farmer introduced their new company, Fusion, at NAB 2016. They also unveiled their first product — a 55-inch OLED 4K reference display system called the ORD-55.

The ORD-55 features independent-processing quad-mode operation (IPQ), in which four individual processors provide independent control of all channels, and uses a single-link 12G input. In quad mode, it provides four independent 27.5-inch FHD displays. The display can also be configured to show one large 4K picture with three smaller preview panes in FHD. The system features deep black levels, a super-wide 178-degree viewing angle, a 10-microsecond response time, a 100,000:1 contrast ratio, an ultra-wide color gamut with 1.07 billion colors and 12-bit color processing.

Dempsey and Farmer bring a ton of experience to Fusion. Dempsey, a 25-year industry vet, was most recently president and CEO of Wohler Technologies. Farmer brings 22 years in the broadcast industry to the new company, including a stint as director of engineering at Wohler. Starting as a design engineer, he then took on senior management roles in both engineering and product management.

In their new venture, Dempsey will serve as Fusion’s CEO, while Farmer will hold the CTO post.

Sony at NAB with new 4K OLED monitor, 4K, 8X Ultra HFR camera

At last year’s NAB, Sony introduced its first 4K OLED reference monitor for critical viewing — the BVM-X300. This year, Sony added a new monitor, the PVM-X550, a 55-inch OLED panel with 12-bit signal processing, perfect for client viewing. The Trimaster EL PVM-X550 supports HDR through various Electro-Optical Transfer Functions (EOTFs), such as S-Log3, SMPTE ST.2084 and Hybrid Log-Gamma, covering applications for both cinematography and broadcast. The PVM-X550 is a quad-view OLED monitor, which allows customized individual display settings across four distinct views in HD. It is equipped with the same signal-processing engine as the BVM-X300, providing a 12-bit output signal for picture accuracy and consistency. It also supports industry-standard color spaces, including the wider ITU-R BT.2020 for Ultra High Definition.

HFR Camera
At NAB 2016, Sony displayed their newest camera system: the HDC-4800 combines 4K resolution with enhanced high frame rate capabilities, capturing up to 8X at 4K, and 16X in full HD. “This camera system can do a lot of everything — very high frame rate, very high resolution,” said Rob Willox, marketing manager for content creation, Sony Electronics.

The HDC-4800 uses a new Super 35mm 4K CMOS sensor, supporting a wide color space (both BT.2020 and BT.709), and provides an industry-standard PL lens mount, giving the system the capability of using the highest quality cinematic lenses for clear and crisp high-resolution images. The new sensor brings the system into the cinematic family of Red and Alexa, making it well suited as a competitor to today’s modern, high-end cinematic digital solutions.

An added feature of the HDC-4800 is how it’s specifically designed to integrate with Sony’s companion system, the Sony HDC-4300, a 2/3-inch image sensor 4K/HD camera. Using matching colorimetry and deep-toolset camera adjustments, and with the ability to take advantage of existing build-up kits, remote control panels and master setup units, the two cameras can blend seamlessly.

Archive
Sony also showed the second generation of its Optical Disc Archive System, which adopts new high-capacity optical media rated for a 100-year shelf life, with double the transfer rate and double the capacity of a single cartridge, now 3.3TB. The Generation 2 Optical Disc Archive System also adds an eight-channel optical drive unit, doubling the read/write speeds of the previous generation and helping to meet the data needs of realtime 4K production.

Quick Chat: SGO CEO Miguel Angel Doncel

By Randi Altman

When I first happened upon Spanish company SGO, they were giving demos of their Mistika system on a small stand in the back of the post production hall at IBC. That was about eight years ago. Since then, the company has grown its Mistika DI finishing system, added a new product called Mamba FX, and brought them both to the US and beyond.

With NAB fast approaching, I thought I would check in with SGO CEO Miguel Angel Doncel to find out how the company began, where they are now and where they are going. I also checked in about some industry trends.

Can you talk about the genesis of your company and the Mistika product?
SGO was born out of a technically oriented mentality to find the best ways to use open architectures and systems to improve media content creation processes. That is not a challenging concept today, but it was an innovative view in 1993 when most of the equipment used in the industry was proprietary hardware. The idea of using computers to replace proprietary solutions was the reason SGO was founded.

It seems you guys were ahead of the curve in terms of one product that could do many things. Was that your goal from the outset?
Ten years ago, most of the manufacturers approached the industry with a set of different solutions to address different parts of the workflow; this gave us an opportunity to capitalize on improving the workflow, as disjointed solutions imply inefficient workflows due to their linearity/sequentiality.

We always thought that by improving the workflow, our technology would be able to play in all those arenas without having to change the tools. Making the workflow parallel saves time when a problem is detected: it avoids going backwards in the pipeline, and we can focus on moving forward.

I think after so many years, the industry is saying we were right, and all are going in that direction.

How is SGO addressing HDR?
We are excited about HDR, as it really improves the visual experience, but at the same time it is a big challenge to define a workflow that can work in both HDR and SDR in a smooth way. Our solution to that challenge is the four-dimensional grading that is implemented with our 4th ball. This allows the colorist to work not only in the three traditional dimensions — R, G and B — but also to work in the highlights as a parallel dimension.

What about VR?
VR pieces together all the requirements of the most demanding 3D with the requirements of 360. Considering what SGO already offers in stereo 3D production, we feel we are well positioned to provide a 360/VR solution. For that reason, we want to introduce a specific workflow for VR that helps customers to work on VR projects, addressing the most difficult requirements, such as discontinuities in the poles, or dealing with shapes.

The new VR mode we are preparing for Mistika 8.7 will be much more than a VR visualization tool. It will allow users to work in VR environments the same way they would work in a normal production, without having to worry about circles ending up as highly distorted ellipses and so forth.

What do you see as the most important trends happening in post and production currently?
The industry is evolving in many different directions at the moment — 8K realtime, 4K/UHD, HDR, HFR, dual-stream stereo/VR. These innovations improve and enhance the audience’s experience in many different ways. They are all interesting individually, but the most vital aspect for us is that they all have something in common: they all require a very smart way of dealing with increasing bandwidths. We believe that a variety of content will use different types of innovation relevant to the genre.

Where do you see things moving in the future?
I personally envision a lot more UHD, HDR and VR material in the near future. The technology is evolving in a direction that can really make the entertainment experience very special for audiences, leaving a lot of room to still evolve. An example is the Quantum Break game from Remedy Studios/Microsoft, where the actual users’ experience is part of the story. This is where things are headed.

I think the immersive aspect is the challenge and goal. The reason why we all exist in this industry is to make people enjoy what they see, and all these tools and formulas combined together form a great foundation on which to build realistic experiences.

Digging Deeper: NASA TV UHD executive producer Joel Marsden

It’s hard to deny the beauty of images of Earth captured from outer space. And NASA and partner Harmonic agree, boldly going where no one has gone before — creating NASA TV UHD, the first non-commercial consumer UHD channel in North America. Leveraging the resolution of ultra high definition, the channel gives viewers a front row seat to some gorgeous views captured from the International Space Station (ISS), other current NASA missions and remastered historical footage.

We recently reached out to Joel Marsden, executive producer of NASA TV UHD, to find out how this exciting new endeavor reached “liftoff.”

Joel Marsden


This was obviously a huge undertaking. How did you get started and how is the channel set up?
The new channel was launched with programming created from raw video footage and imagery supplied by NASA. Since that time, Harmonic has also shot and contributed 4K footage, including video of recent rocket launches. They provide the end-to-end UHD video delivery system and post production services while managing operations. It’s all hosted at a NASA facility managed by Encompass Digital Media in Atlanta, which is home to the agency’s satellite and NASA TV hubs.

Like the current NASA TV channels, and on the same transponder, NASA TV UHD is transmitted via the SES AMC-18C satellite, in the clear, with a North American footprint. The channel is delivered at 13.5Mbps, as compared with many of the UHD demo channels in the industry, which have required between 50 and 100 Mbps. NASA’s ability to minimize bandwidth use is based on a combination of encoding technology from Harmonic in conjunction with the next-generation H.265 HEVC compression algorithm.

Can you talk about how the footage was captured and how it got to you for post?
When the National Aeronautics and Space Act of 1958 was created, one of the legal requirements of NASA was to keep the public apprised of its work in the most efficient means possible and with the ultimate goal of bringing everyone on Earth as close as possible to being in space. Over the years, NASA has used imagery as the primary means of demonstration. The group in charge of these efforts, the NASA Imagery Experts Program, provides the public with a wide array of digital television, web video and still images based on the agency’s activities. Today, NASA’s broadcast offerings via NASA TV include an HD consumer channel, an HD media channel and an SD education channel.

In 2015, the agency introduced NASA TV UHD. Naturally, NASA archives provide remastered footage from historical missions and shots from NASA’s development and training processes, all of which are used for production of broadcast programming. In fact, before the agency launched NASA TV, it had already begun production of its own documentary series, based on footage collected during missions.

Just five or six years ago, NASA also began documenting major events in 4K resolution or higher. The agency has been using 6K Red Dragon digital cinema cameras for some time. NASA TV UHD video content is sourced from high-resolution images and video generated on the ISS, Hubble Space Telescope and other current NASA missions. The raw content files are then sent to Harmonic for post.

Can you walk us through the workflow?
Raw video files are mailed on physical discs or sent via FTP from a variety of NASA facilities to Harmonic’s post studio in San Jose and stored on the Harmonic MediaGrid system, which supports an edit-in-place workflow with Final Cut Pro and other third-party editing tools.

During the content processing phase, Harmonic uses Adobe After Effects to paint out dead pixels that result from the impact of cosmic radiation on camera sensors. They have built bad-pixel maps that they use in post production to remove the distracting white dots from the picture. The detail of UHD means that the footage also shows scratches on the windows of the ISS through which the camera is shooting, but these are left in for authenticity.
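The bad-pixel-map idea is simple to sketch: knowing which sensor sites are dead, you replace each one with an estimate from its valid neighbors. The snippet below is a minimal illustration of that kind of repair, not Harmonic’s actual After Effects process; the neighbor-averaging scheme and array shapes are assumptions for the example:

```python
import numpy as np

def repair_dead_pixels(frame, bad_pixel_map):
    """Replace each flagged pixel with the mean of its valid 8-neighbors.

    frame: 2D float array (single channel).
    bad_pixel_map: 2D bool array, True where the sensor site is dead/hot.
    """
    out = frame.copy()
    h, w = frame.shape
    for y, x in zip(*np.nonzero(bad_pixel_map)):
        neighbors = []
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy or dx) and 0 <= ny < h and 0 <= nx < w \
                        and not bad_pixel_map[ny, nx]:
                    neighbors.append(frame[ny, nx])
        out[y, x] = np.mean(neighbors)
    return out

frame = np.full((5, 5), 0.2)
frame[2, 2] = 1.0                      # a stuck-white pixel, e.g. radiation damage
bad = np.zeros((5, 5), dtype=bool)
bad[2, 2] = True

fixed = repair_dead_pixels(frame, bad)
print(fixed[2, 2])                     # the white dot is replaced by its surround
```

Because radiation damage accumulates at fixed sensor sites, a map like this can be reused across every frame shot on the same camera, which is what makes the approach practical for long-running ISS footage.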

Blackmagic DaVinci Resolve is used to color grade footage, and Maxon Cinema 4D Studio is used to create animations of images. Final Cut Pro X and Adobe Creative Suite are used to set the video to music and add text and graphics, along with the programming name, logo and branding.

Final programs are then transferred in HD back to the NASA teams for review, and in UHD to the Harmonic team in Atlanta to be loaded onto the Spectrum X for playout.

————

You can check out NASA TV’s offerings here.

Atomos brings 4K HDR monitors/recorders on set

Atomos, makers of the Shogun and Ninja on-set systems, has introduced new products targeting 4K HDR and offering brightness and detail simultaneously: the Shogun Flame and Ninja Flame field monitor/recorders.

The Atomos Flame Series features a calibrated 7-inch field monitor that displays 10 stops of the luminance detail of LOG with 10-bit HDR post production color accuracy. The AtomHDR engine resolves HDR brightness detail (dynamic range) with 10-bit color accuracy, resolving 64 times more color information than traditional 8-bit panels. For Rec. 709 standard-dynamic-range scenes, the 1500-nit brightness aids with outdoor shooting, as does the upgraded continuous power management system that allows users to shoot longer in the field. The Flame Series, which offers a cost-effective solution to the growing demand for HDR image capture and on-set viewing, also features pro 4K/HD Apple ProRes/DNxHR recording, playback and editing.

We threw a few questions at Atomos president Matt Cohen (who many of you know from Tekserve) regarding the new gear…

What’s the most important thing people should know about Atomos Flame Series?
Atomos Flame Series products empower realtime visualization of HDR on set and in post, using LOG from the camera and our 10-bit AtomHDR processing. We explain it here.

Are we really looking at 10 different stops? That’s a ton of bandwidth. How can you accomplish that?
It’s magic! Well, not really… it’s math. LOG is the trick: it basically takes the range and squishes it. That’s why a LOG signal looks all washed out — it has transformed the brightest parts into something representable with current technology, and the same with the darkest areas. So, basically, it makes the tonal changes in the image much more subtle, and those can then be interpreted back into the true levels they represent. We do that in realtime with AtomHDR.
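That “squishing” can be sketched with a generic log curve. The curve below is purely illustrative (it is not Atomos’ processing or any camera vendor’s actual LOG transfer function), but it shows how a wide linear range gets folded into a 0..1 code value and recovered again:

```python
import math

STOPS = 10          # total dynamic range represented, in stops
OFFSET = STOPS / 2  # put middle gray (linear 1.0) at code value 0.5

def log_encode(linear):
    """Squeeze a wide linear light value into a 0..1 code value
    (a generic illustrative curve, not any vendor's LOG formula)."""
    stops = math.log2(max(linear, 2 ** -OFFSET))   # clamp deep shadows
    return min(max((stops + OFFSET) / STOPS, 0.0), 1.0)

def log_decode(code):
    """Recover the linear value a code represents."""
    return 2 ** (code * STOPS - OFFSET)

# A 5-stop-over highlight and a 5-stop-under shadow both fit inside 0..1,
# which is exactly why the encoded picture looks flat and washed out.
for linear in (2 ** -5, 1.0, 2 ** 5):
    code = log_encode(linear)
    print(round(code, 3), round(log_decode(code), 4))
```

Equal steps in the code value correspond to equal ratios (stops) of scene light, which is what lets a 10-bit signal carry highlight and shadow detail that a linear 8-bit signal would clip.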

The Series works with all cameras, as long as they have a live video tap?
Our products have always been compatible with cameras that have a clean output, meaning there is no menu data overlaid or degradation to the output. We get the pristine image from the sensor before any of the compression or degradation that occurs recording internally to the camera.

In how many different ways can this be used on set? Focus, Composition, Color, etc?
All the Atomos monitor tools are still available. We have very accurate scopes and focus tools, and you can use this on set for all the aspects you describe. There are calibrated monitoring, focus and exposure tools, including waveform, vectorscope and RGB parade. We have graticule support and even support to de-squeeze anamorphic. We also feel confident the Flame tools will be very valuable in post production, enabling most systems to work with HDR and UWG (Ultra Wide Gamut) content.

Here are some key features of the Flame Series:
– AtomHDR monitors, which offer a dynamic range to match that of 10-bit camera LOG footage, provide the detail in highlights and shadows usually clipped on traditional monitors.

– Serves as an advanced field monitor even in non-HDR scenarios, with 1500-nit brightness for outdoor shooting, native full HD resolution and optional calibration to ensure natural LCD color drift can be corrected over time.

– Users can record directly from the sensor in 4K UHD (up to 30p) or record high-frame-rate HD (up to 120p).

– Along with recording the high pixel density of 4K, the Ninja and Shogun Flame also record higher resolution 10-bit color information and more precise yet efficient 4:2:2 color encoding.

– Recording to Apple ProRes and Avid DNxHR visually lossless, edit-ready codecs allows users to capture full individual frames like film, allowing for more flexibility and creativity in post.

– The Series features armor protection, a dual-battery hot-swappable continuous power system and included accessories, such as a new fast charger and snap-fast sun hood.

– Atomos’ hot-swappable dual battery system for continuous power is backed up with the included power accessories (2 x 4-cell batteries, D-Tap adaptor and fast battery charger).

– There are focus and exposure tools, 3D custom Looks, waveforms (LUMA and RGB) and vectorscopes.

– XLR audio via breakout cables is available for the Shogun Flame, or 3.5mm line-level input with audio delay, level adjustment and dedicated audio meters with channel selection for the Ninja Flame.

– The Flame Series supports affordable, readily available SSDs.

Shogun Flame and Ninja Flame are available for sale on March 28.

A glimpse at what Sony has in store for NAB

By Fergus Burnett

I visited Sony HQ in Manhattan for their pre-NAB Show press conference recently. In a board room with tiny muffins, mini bagels and a great view of New York, we sat pleasantly for a few hours to learn about the direction the company is taking in 2016.

Sony announced details for a slew of 4K- and HDR-capable broadcast cameras and workflow systems, all backwards-compatible with standard HD to ease the professional and consumer transition to Ultra HD.

As well as broadcast and motion picture, Sony’s Pro division has a finger in the corporate, healthcare, education and faith markets. They have been steadily pushing their new products and systems into universities, private companies, hospitals and every other kind of institution. Last year, they helped to fit out the very first 4K church.

I work as a DIT/dailies technician in the motion picture industry rather than broadcast, so many of these product announcements were outside my sphere of professional interest, but it was fascinating to gain an understanding of the immense scale and variety of markets that Sony is working in.

There were only a handful of new additions to the CineAlta (pictured) line: firmware updates for the F5 and F55, and a new 4K recording module. These two cameras have really endured in popularity since their introduction in 2012.

The new AXS-R7 recording module (right) offers a few improvements over its predecessor, the AXS-R5. It’s capable of full 4K up to 120fps and has a nifty 30-second cache capability, which is going to be really useful for shooting water droplets in slow motion. The AXS-R7 uses a new kind of high-speed media card that looks like a slightly smaller SxS — it’s called AXSM-S48. Sony is really on fire with these names!

A common and unfortunate problem when I am dealing with on-set dailies is sketchy card readers. This is something that ALL motion picture camera companies are guilty of producing. USB 3.0 is just not fast enough when copying huge chunks of critical camera data to multiple drives, and I’ve found the power connector on the current AXS card reader to be touchy on separate occasions with different readers, causing the card to eject in the midst of offloading. Though there are no details yet, I was assured that the AXSM-S48 reader would use a faster connection than USB 3.0. I certainly hope so; it’s a weak point in what is otherwise a fairly trouble-free camera ecosystem.

Looming at the top of the CineAlta lineup, the F65 is still Sony’s flagship camera for cinema production. Its specs were outrageous four years ago and still are, but it never became a common sight on film sets. The 8K resolution was mostly unnecessary even for top-tier productions. I inquired where Sony saw the F65 sitting among its competition, from Arri and Red, as well as their own F55 which has become a staple of TV drama.

Sony sees the F65 as their true cinema camera, ideally suited for projection on large screens. They admitted that while uptake of the camera was slow after its introduction, rentals have been increasing as more DPs gain experience with the camera, enjoying its low-light capabilities, color gamut and sheer physical bulk.

Sony manufactures a gigantic fleet of sensible, soberly named cameras for every conceivable purpose. They are very capable production tools, but it’s only a small part of Sony’s overall strategy.

With 4K HDR delivery fast becoming standard and expected, we are headed for a future world where pictures are more appealing than reality. From production to consumption, Sony could well be set to dominate that world. We already watch Sony-produced movies shot on Sony cameras playing on Sony screens, and we listen to Sony musicians on Sony stereos as we make our way to worship the God of sound and vision in a 4K church.

Enjoy NAB everyone!

Sony gives Rita Hayworth, Gene Kelly a 4K make-over in ‘Cover Girl’

Sony Pictures Entertainment has completed an all-new 4K restoration of Cover Girl, director Charles Vidor’s 1944 Technicolor musical that starred Rita Hayworth and Gene Kelly. The restoration, completed under the supervision of Sony’s Grover Crisp, premiered at the Museum of Modern Art in New York during To Save and Project, MoMA’s 13th international festival of film preservation.

Cover Girl was Columbia Pictures’ first big film shot in the Technicolor three-strip process. For the new 4K restoration, the team went back to the original three-strip nitrate camera negatives.

“There was a preservation initiative with this film in the 1990s that involved making some positive intermediate elements for video transfer, but our current process dictates that we source the most original materials possible to come up with the best visual result for our 4K workflow,” recalls Crisp, who is EVP of asset management, film restoration and digital mastering at Sony Pictures. “The technical capabilities that we have now allow us to digitally recombine the three separate black and white negatives to create a color image that is virtually free of the fuzzy registration issues inherent in the traditional analog work, in addition to the usual removal of scratches and other physical flaws in the film.”
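The digital recombination Crisp describes can be pictured as stacking three registered black-and-white separation scans into the red, green and blue channels of a single color frame. The sketch below is a minimal illustration of that idea, not Sony’s actual pipeline; the function name and toy data are hypothetical, and a real workflow would first align the three records to sub-pixel accuracy before stacking.

```python
import numpy as np

def combine_separations(red_sep, green_sep, blue_sep):
    """Stack three black-and-white separation scans into one color frame.

    Each input is a 2-D float array in [0, 1] holding a positive of the
    corresponding Technicolor separation record. This toy version assumes
    the three records are already perfectly registered; in practice the
    registration step is where the "fuzzy" fringing of analog recombination
    gets eliminated digitally.
    """
    return np.stack([red_sep, green_sep, blue_sep], axis=-1)

# Toy 2x2 "scans": a frame whose top-left pixel is pure red.
r = np.array([[1.0, 0.0], [0.0, 0.0]])
g = np.zeros((2, 2))
b = np.zeros((2, 2))
frame = combine_separations(r, g, b)
print(frame.shape)  # (2, 2, 3)
```

Because the three records are handled as independent arrays until the final stack, dirt and scratch repair can be run per-record before color ever exists, which is one reason sourcing the original negatives pays off.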

Crisp says they tried to stay as true to the Technicolor look as possible. “That specific kind of look is impossible to match exactly as it was in the original work from the 1940s and 1950s for a variety of reasons. With original sources for reference, however, it gives us a good target to aim for.”

The greater color range of the 4K digital workflow facilitated the recreation of a Technicolor look that is as authentic as possible, especially where original dye transfer prints were available as reference points.

In terms of challenges, Crisp says that aside from the usual number of torn frames, scratches and dirt embedded in the emulsion of the film, there is always the issue of color breathing when working with the three-strip Technicolor films. “It is an inconsistent problem and can be very difficult to address,” he explains. “Kevin Manbeck at MTI Film has developed algorithms to compensate and correct for this problem and that is a big advancement.”
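Color breathing shows up as slow, frame-to-frame drifts in the density of the three records, so the image seems to pulse in hue. MTI’s actual algorithms are proprietary; the sketch below only illustrates the general idea with a deliberately crude approach: normalize each frame’s per-channel mean back toward the clip-wide mean. The function name and the simple global-gain model are assumptions for illustration, not Manbeck’s method.

```python
import numpy as np

def correct_breathing(frames):
    """Flatten slow per-channel density drift across a clip.

    frames: float array of shape (T, H, W, 3) in [0, 1].
    Scales each frame's per-channel mean back to the clip-wide per-channel
    mean, so gradual drifts between the three records are suppressed.
    Production tools use far more sophisticated, spatially adaptive models.
    """
    target = frames.mean(axis=(0, 1, 2))          # clip-wide mean, per channel
    per_frame = frames.mean(axis=(1, 2))          # (T, 3) per-frame means
    gains = target / np.maximum(per_frame, 1e-6)  # per-frame channel gains
    return np.clip(frames * gains[:, None, None, :], 0.0, 1.0)
```

A global gain like this would also flatten legitimate lighting changes, which is exactly why breathing correction is hard: the algorithm must distinguish drift in the film element from change in the scene.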

The film was scanned at Cineric in New York City on their proprietary 4K wetgate scanner.

“Working with our colorist, Sheri Eisenberg, we strove to get the colors right, with deep blacks and vibrant reds,” says Crisp.

Eisenberg called on the FilmLight Baselight 8 for the color at Deluxe (formerly ColorWorks) in Culver City. “It is a very robust color correction system, and one that we have used for years on our work,” says Crisp. “The lion’s share of the image restoration was done at L’Immagine Ritrovata, a film restoration and conservation facility in Bologna, Italy. They use a variety of software for image cleanup, though much of this kind of work is manual. This means a lot of individuals sitting at digital workstations working on one frame at a time. At MTI Film, here in Los Angeles, some of the final image restoration was completed, mostly for the removal of gate hairs in numerous shots, something that is very difficult to achieve without leaving digital artifacts.”