Category Archives: 8K

NAB First Thoughts: Fusion in Resolve, ProRes RAW, more

By Mike McCarthy

These are my notes from the first day I spent browsing the NAB Show floor this year in Las Vegas. When I walked into the South Lower Hall, Blackmagic was the first thing I saw. And, as usual, they had a number of new products this year. The headline item is the next version of DaVinci Resolve, which now integrates the functionality of their Fusion visual effects editor within the program. While I have never felt Resolve to be a very intuitive program for my own work, it is a solution I recommend to others who are on a tight budget, as it offers the most functionality for the price, especially in the free version.

Blackmagic Pocket Cinema Camera

The Blackmagic Pocket Cinema Camera 4K looks more like a “normal” MFT DSLR camera, although it is clearly designed for video instead of stills. Recording full 4K resolution in RAW or ProRes to SD or CFast cards, it has a mini-XLR input with phantom power and uses the same LP-E6 battery as my Canon DSLR. It runs the same camera software as the Ursa line of devices and includes a copy of Resolve Studio… for $1,300. If I were going to be shooting more live-action video anytime soon, this might make a decent replacement for my 70D, moving up to 4K and HDR workflows. I am not as familiar with the Panasonic cameras it competes with most closely in the Micro Four Thirds space.

AMD Radeon

Among other smaller items, Blackmagic’s new UpDownCross HD MiniConverter will be useful outside of broadcast for manipulating HDMI signals from computers or devices that have less control over their outputs. (I am looking at you, Mac users.) For $155, it will help interface with projectors and other video equipment. At $65, the bi-directional MicroConverter will be a cheaper and simpler option for basic SDI support.

AMD was showing off 8K editing in Premiere Pro, the result of an optimization by Adobe that uses the 2TB SSD storage in AMD’s Radeon Pro SSG graphics card to cache rendered frames at full resolution for smooth playback. This change is currently only applicable to one graphics card, so it will be interesting to see if Adobe did this because it expects to see more GPUs with integrated SSDs hit the market in the future.

Sony is showing its Crystal LED (crystal light emitting diode) technology in the form of a massive ZRD video wall of incredible imagery. The clarity and brightness were truly breathtaking, but my photos, rendered for the web, hardly capture the essence of what they were demonstrating.

Like nearly everyone else at the show, Sony is also pushing HDR in the form of Hybrid Log Gamma, which they are building into many of their products. They also had an array of their tiny RX0 cameras on display with this backpack rig from Radiant Images.

ProRes RAW
At a higher level, one of the most interesting things I have seen at the show is the release of ProRes RAW. While currently limited to external recorders connected to cameras from Sony, Panasonic and Canon, and only supported in FCP-X, it has the potential to dramatically change future workflows if it becomes more widely supported. Many people confuse RAW image recording with the log gamma look or other low-contrast visual interpretations, but at its core RAW imaging is a single-channel image format paired with the particular Bayer color pattern of the sensor it was recorded with.

Storing a single channel decreases the amount of data to store (or compress) and gives access to the “source” image before it has been processed for visual interpretation — that is, before debayering and the addition of a gamma curve that reverse engineers the response pattern of the human eye, as compared to mechanical light sensors. This provides more flexibility and processing options during post. There are lots of other compressed RAW formats available — R3D, CinemaDNG, CineformRAW and Canon CRM files among them; the main thing ProRes actually brings to the table is widespread acceptance and trust in the compression quality.
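As a rough illustration of the data savings, here is a back-of-envelope comparison of a single-channel Bayer frame against its debayered RGB equivalent, before any compression. The 8K UHD resolution and 12-bit depth are assumed for the sake of the arithmetic, not taken from any particular camera's spec:

```python
# Rough illustration of why RAW (single-channel Bayer) data is smaller
# than debayered RGB, before any compression is applied. Resolution and
# bit depth are assumed for illustration only.
W, H = 7680, 4320      # 8K UHD frame (assumed)
BITS = 12              # per-sample bit depth (assumed)

raw_bits = W * H * BITS          # one Bayer sample per photosite
rgb_bits = W * H * 3 * BITS      # three full channels after debayering

print(f"RAW frame:     {raw_bits / 8 / 2**20:.1f} MiB")   # 47.5 MiB
print(f"Debayered RGB: {rgb_bits / 8 / 2**20:.1f} MiB")   # 142.4 MiB
```

Whatever the exact numbers, the single-channel RAW frame is a third the size of its RGB counterpart before the codec even gets involved.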

None of those caught on as a widespread multi-vendor format, but ProRes RAW is already supported by systems from three competing camera vendors. And the applications of RAW imaging in producing HDR content make the timing of this release optimal for encouraging vendors to support it, as they know their customers are struggling to find simpler solutions to HDR production issues.

There is no technical reason that ProRes RAW couldn’t be implemented on future Arri, Red or BMD cameras, which are all currently capable of recording ProRes and RAW data (though not yet the combination). And since RAW is inherently a playback-only format (you can’t alter a RAW image without debayering it), I anticipate we will see support in other applications, unless Apple wants to sacrifice the format in an attempt to increase NLE market share.

So it will be interesting to see what other companies and products support the format in the future, and hopefully it will make life easier for people shooting and producing HDR content.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

GTC embraces machine learning and AI

By Mike McCarthy

I had the opportunity to attend GTC 2018, Nvidia’s 9th annual technology conference in San Jose this week. GTC stands for GPU Technology Conference, and GPU stands for graphics processing unit, but graphics makes up a relatively small portion of the show at this point. The majority of the sessions and exhibitors are focused on machine learning and artificial intelligence.

And the majority of the graphics developments are centered around analyzing imagery, not generating it. Whether that is classifying photos on Pinterest or giving autonomous vehicles machine vision, it is based on the capability of computers to understand the content of an image. Now DriveSim, Nvidia’s new simulator for virtually testing autonomous drive software, dynamically creates imagery for the other system in the Constellation pair of servers to analyze and respond to, but that is entirely machine-to-machine imagery communication.

The main exception to this non-visual usage trend is Nvidia RTX, which allows raytracing to be rendered in realtime on GPUs. RTX can be used through Nvidia’s OptiX API, as well as Microsoft’s DirectX RayTracing API, and eventually through the open source Vulkan cross-platform graphics solution. It integrates with Nvidia’s AI Denoiser to use predictive rendering to further accelerate performance, and can be used in VR applications as well.

Nvidia RTX was first announced at the Game Developers Conference last week, but the first hardware to run it was just announced here at GTC, in the form of the new Quadro GV100. This $9,000 card replaces the existing Pascal-based GP100 with a Volta-based solution. It retains the same PCIe form factor, the quad DisplayPort 1.4 outputs and the NV-Link bridge to pair two cards at 200GB/s, but it jumps the GPU RAM per card from 16GB to 32GB of HBM2 memory. The GP100 was the first Quadro offering since the K6000 to support double-precision compute processing at full speed, and the increase from 3,584 to 5,120 CUDA cores should provide a 40% increase in performance, before you even look at the benefits of the 640 Tensor Cores.
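For the curious, the core-count arithmetic behind that performance estimate is straightforward, with the caveat that it ignores clock speeds and the new Tensor Cores:

```python
# CUDA core counts quoted for the Quadro GP100 and GV100.
gp100_cores = 3584
gv100_cores = 5120

uplift = gv100_cores / gp100_cores - 1
print(f"Core-count uplift: {uplift:.1%}")  # prints "Core-count uplift: 42.9%"
```

Core count alone suggests an uplift in the low 40s of percent, in the ballpark of the quoted figure; real-world gains will also depend on clocks and workload.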

Hopefully, we will see simpler versions of the Volta chip making their way into a broader array of more budget-conscious GPU options in the near future. The fact that the new Nvidia RTX technology is stated to require Volta architecture GPUs leads me to believe that they must be right on the horizon.

Nvidia also announced a new all-in-one GPU supercomputer — the DGX-2 supports twice as many Tesla V100 GPUs (16) with twice as much RAM each (32GB) compared to the existing DGX-1. This provides 81,920 CUDA cores addressing 512GB of HBM2 memory over a fabric of new NV-Link switches, as well as dual Xeon CPUs, InfiniBand or 100GbE connectivity, and 32TB of SSD storage. This $400K supercomputer is marketed as the world’s largest GPU.
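Those aggregate figures follow directly from the per-GPU numbers; a quick check:

```python
# Sanity check of the DGX-2 aggregate numbers quoted above.
gpus = 16
cores_per_gpu = 5120      # CUDA cores per Tesla V100
hbm2_per_gpu_gb = 32      # GB of HBM2 per GPU

total_cores = gpus * cores_per_gpu
total_hbm2_gb = gpus * hbm2_per_gpu_gb

print(total_cores, total_hbm2_gb)  # prints "81920 512"
```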

Nvidia and their partners had a number of cars and trucks on display throughout the show, showcasing various pieces of technology that are being developed to aid in the pursuit of autonomous vehicles.

Also on display in the category of “actually graphics related” was the new Max-Q version of the mobile Quadro P4000, which is integrated into PNY’s first mobile workstation, the Prevail Pro. Besides supporting professional VR applications, the HDMI and dual DisplayPort outputs allow a total of three external displays up to 4K each. It isn’t the smallest or lightest 15-inch laptop, but it is the only system under 17 inches I am aware of that supports the P4000, which is considered the minimum spec for professional VR implementation.

There are, of course, lots of other vendors exhibiting their products at GTC. I had the opportunity to watch 8K stereo 360 video playing off of a laptop with an external GPU. I also tried out the VRHero 5K Plus enterprise-level HMD, which brings the VR experience to a whole other level. Much more affordable is TP-Cast’s $300 wireless upgrade for Vive and Rift HMDs, the first of many untethered VR solutions. HTC has also recently announced the Vive Pro, which will be available in April for $800. It increases the resolution by a third in both dimensions, to 2880×1600 total, and moves from HDMI to DisplayPort 1.2 and USB-C. Besides VR products, there were also all sorts of robots in various forms on display.

Clearly the world of GPUs has extended far beyond the scope of accelerating computer graphics generation, and Nvidia is leading the way in bringing massive information processing to a variety of new and innovative applications. And if that leads us to hardware that can someday raytrace in realtime at 8K in VR, then I suppose everyone wins.




A Closer Look: Why 8K?

By Mike McCarthy

As we enter 2018, we find a variety of products arriving to market that support 8K imagery. The 2020 Olympics are slated to be broadcast in 8K, and while clearly we have a way to go, innovations are constantly being released that get us closer to making that a reality.

The first question that comes up when examining 8K video gear is, “Why 8K?” Obviously, it provides more resolution, but that is more of an answer to the how question than the why question. Many people will be using 8K imagery to create projects that are finished at 4K, giving them the benefits of oversampling or re-framing options. Others will use the full 8K resolution on high DPI displays. There is also the separate application of using 8K images in 360 video for viewing in VR headsets.

Red Monstro 8K

Similar technology may allow reduced-resolution extraction on the fly to track an object or person in a dedicated 1080p window within an 8K master shot, whether that is a race car or a basketball player. The benefit compared to tracking them with the camera is that these extractions can be generated for multiple objects simultaneously, allowing viewers to select their preferred perspective. So there are lots of uses for 8K imagery. Shooting 8K for finishing in 4K is not much different, from a workflow perspective, than shooting 5K or 6K, so we will focus on workflows and tools that actually result in an 8K finished product.

8K Production
The first thing you need for 8K video production is an 8K camera. There are a couple of options, the most popular ones being from Red. The Weapon 8K came out in 2016, followed by the smaller-sensor Helium 8K and the recently announced Monstro 8K. Panavision has the DXL, which by my understanding is really a derivation of the Red Dragon 8K sensor. Canon has been demoing an 8K camera for two years now, with no released product that I am aware of. Sony announced the UHC-8300, an 8K three-chip camera, at IBC 2017, but that is probably out of most people’s price range. Those are the only major options I am currently aware of, and the Helium 8K is the only one I have been able to shoot with and edit footage from.

Sony UHC-8300 8K

Moving 8K content around in realtime is a challenge. DisplayPort 1.3 supports 8K at 30p, with dual cables required for 60p. HDMI 2.1 will eventually allow devices to support 8K video on a single cable as well. (The HDMI 2.1 specification was just released at the end of November, so it will be a while before we see it implemented in products on the market. DisplayPort 1.4 exists in shipping products today — GPUs and Dell’s monitor — while HDMI 2.1 only exists on paper and in CES technology demos.) Another approach is to use multiple parallel channels of 12G SDI, similar to how quad 3G SDI can be used to transmit 4K data. It is more likely that by the time most facilities are pushing around lots of realtime 8K content, they will have moved to video over IP, using compression to move 8K streams on 10GbE networks or moving uncompressed 8K content on 40Gb or 100Gb networks.
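To put rough numbers on that bandwidth problem, here is a back-of-envelope calculation of uncompressed 8K bitrates, assuming 10-bit 4:2:2 sampling (an average of 20 bits per pixel); the results show why quad 12G SDI and 40Gb+ networks come into the picture:

```python
# Back-of-envelope bitrate for uncompressed 8K video, assuming
# 10-bit 4:2:2 chroma subsampling (Y plus half-rate Cb/Cr,
# averaging 20 bits per pixel). Figures are illustrative and
# ignore blanking and protocol overhead.
W, H = 7680, 4320
BITS_PER_PIXEL = 20

def gbps(fps):
    """Uncompressed video payload in gigabits per second."""
    return W * H * BITS_PER_PIXEL * fps / 1e9

for fps in (30, 60):
    print(f"8K {fps}p 4:2:2 10-bit: {gbps(fps):.1f} Gb/s")
# 8K 30p ≈ 19.9 Gb/s: already too much for a 10GbE link
# 8K 60p ≈ 39.8 Gb/s: needs quad 12G SDI or a 40Gb/100Gb network
```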

Software
The next step is the software part, which is in pretty good shape. Most high-end applications are already set for 8K, both because high-resolution images are already used as backplates and for other unique purposes, and because software is the easiest part of enabling higher resolutions. I have edited 8K files in Adobe Premiere Pro in a variety of flavors without issue. Both Avid Media Composer and Blackmagic Resolve claim to support 8K content. Codec-wise, there are already lots of options for storing 8K, including DNxHR, Cineform, JPEG2000 and HEVC/H.265, among many others.

Blackmagic DeckLink 8K Pro

The hardware to process those files in realtime is a much greater challenge, but we are just seeing the release of Intel’s next generation of high-end computing chips. The existing gear is just at the edge of functional at 8K, so I expect the new systems to make 8K editing and playback a reality at the upper end. Blackmagic has announced the DeckLink 8K Pro, a PCIe card with quad 12G SDI ports. I suspect that AJA’s new Io 4K Plus may support 8K at some point in the future, with quad bidirectional 12G SDI ports. Thunderbolt 3 is the main bandwidth limitation there, but it should do 4:2:2 at 24p or 30p. I am unaware of any display that can take that yet, but I am sure they are coming.

In regard to displays, the only one commercially available is Dell’s UP3218K monitor, running on dual DisplayPort 1.4 cables. It looks amazing, but you won’t be able to hook it up to your 8K camera for live preview very easily. An adapter is a theoretical possibility, but I haven’t heard of any being developed. Most 8K assets are being recorded for use in 4K projects, so output and display at 8K aren’t as big of a deal. Most people will have their needs met with existing 4K options, with the 8K content giving them the option to reframe their shots without losing resolution.

Dell UP3218K

Displaying 8K content at 4K is a much simpler proposition with current technology. Many codecs allow for half-resolution decode, which makes the playback requirements similar to 4K at full resolution. While my dual-processor desktop workstation can play back most any intermediate codec at half resolution for 4K preview, my laptop seems like a better test-bed for evaluating the fractional-resolution playback efficiency of various codecs at 8K, so that will be one of my next investigations.
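The arithmetic behind that equivalence is simple: halving each dimension of an 8K frame quarters the pixel count, landing exactly on UHD 4K:

```python
# Half-resolution decode of an 8K UHD frame yields exactly a 4K UHD
# frame's worth of pixels, since halving both dimensions quarters
# the pixel count.
full_8k = 7680 * 4320
half_8k = (7680 // 2) * (4320 // 2)
uhd_4k = 3840 * 2160

print(half_8k == uhd_4k)   # prints "True"
print(full_8k // half_8k)  # prints "4"
```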

Assuming you want to show your content at the full 8K, how do you deliver it to your viewers? H.264 files are hard-limited to 4K, but HEVC (or H.265) allows 8K files to be encoded and decoded at reasonable file sizes, and is hardware-accelerated on the newest GPU cards. So 8K HEVC playback should be possible on shipping mid- and high-end computers, provided that you have a display to see it on. 8K options will continue to grow as TV makers push to set apart their top-of-the-line models, and that will motivate development of the rest of the ecosystem to support them.




Blackmagic embraces 8K workflows with DeckLink 8K Pro

At InterBee in Japan, Blackmagic showed it believes in 8K workflows with the introduction of the DeckLink 8K Pro, a new high-performance capture and playback card featuring quad link 12G‑SDI to allow realtime high resolution 8K workflows.

This new DeckLink 8K Pro supports all film and video formats from SD all the way up to 8K DCI and 12‑bit RGB 4:4:4, plus it handles advanced color spaces such as Rec. 2020 for deeper color and higher dynamic range. DeckLink 8K Pro also handles 64 channels of audio, stereoscopic 3D, high frame rates and more.

DeckLink 8K Pro will be available in early January for US $645 from Blackmagic resellers worldwide. In addition, Blackmagic has also lowered the price of its DeckLink 4K Extreme 12G — to US $895.

The DeckLink 8K Pro digital cinema capture and playback card features four multi-rate 12G‑SDI connections and can work in all SD, HD, Ultra HD, 4K, 8K and 8K DCI formats. It’s also compatible with all existing pro SDI equipment. The 12G‑SDI connections are bi-directional, so they can be used to capture or play back quad-link 8K, or for the simultaneous capture and playback of single- or dual-link SDI sources.

According to Blackmagic, DeckLink 8K Pro’s 8K images have 16 times more pixels than a regular 1080 HD image, which lets you reframe or scale shots with high fidelity and precision.
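That “16 times more pixels” figure checks out for 8K UHD:

```python
# 8K UHD has exactly 16 times the pixels of a 1080 HD frame:
# four times the resolution in each dimension.
hd = 1920 * 1080
uhd_8k = 7680 * 4320

print(uhd_8k // hd)  # prints "16"
```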

DeckLink 8K Pro supports capture and playback of 8- or 10-bit YUV 4:2:2 video and 10- or 12‑bit RGB 4:4:4. Video can be captured as uncompressed or to industry standard broadcast quality ProRes and DNx files. DeckLink 8K Pro users can work at up to 60 frames per second in 8K and it supports stereoscopic 3D for all modes up to 4K DCI at 60 frames per second in 12‑bit RGB.

The advanced broadcast technology in DeckLink 8K Pro is built into an easy-to-install, eight-lane, third-generation PCI Express card for Mac, Windows and Linux workstations. Users get support for all legacy SD and HD formats, along with Ultra HD, DCI 4K, 8K and DCI 8K, as well as Rec. 601, 709 and 2020 color.

DeckLink 8K Pro is designed to work with the upcoming DaVinci Resolve 14.2 Studio for a seamless editing, color and audio post production workflow. In addition, DeckLink 8K Pro also works with other pro tools, such as Apple Final Cut Pro X, Avid Media Composer, Adobe’s Premiere Pro and After Effects, Avid Pro Tools, Foundry’s Nuke and more. There’s also a free software development kit so customers and OEMs can build their own custom solutions.

 


MammothHD shooting, offering 8K footage

By Randi Altman

Stock imagery house MammothHD has embraced 8K production, shooting studio, macro, aerial, landscape and wildlife footage and more. Clark Dunbar, owner of MammothHD, is shooting with the Red 8K VistaVision model. He’s also getting 8K submissions from his network of shooters and producers around the world, who have been calling on the Red Helium s35 and Epic-W models.

“8K is coming fast — from feature films to broadcast to specialty uses, such as signage and exhibits — the Rio Olympics were shot partially in 8K, and the 2020 Tokyo Olympics will be broadcast in 8K,” says Dunbar. “Manufacturers of flat-screen TVs, monitors and projectors are moving to 8K and prices are dropping, so there is a current clientele for 8K, and we see a growing move to 8K in the near future.”

So why is it important to have 8K imagery while the path is still being paved? “Having an 8K master gives all the benefits of shooting in 8K, but also allows for a beautiful and better over-sampled down-rezing for 4K or lower. There is less noise (if any, and smaller noise/grain patterns) so it’s smoother and sharper and the new color space has incredible dynamic range. Also, shooting in RAW gives the advantages of working to any color grading post conforms you’d like, and with 8K original capture, if needed, there is a large canvas in which to re-frame.”

He says another benefit for 8K is in post — with all those pixels — if you need to stabilize a shot “you have much more control and room for re-framing.”

In terms of lenses, which Dunbar says “are a critical part of the selection for each shot,” current VistaVision sessions have used Zeiss Otus, Zeiss Makro, Canon, Sigma and Nikon glass from 11mm to 600mm, including extension tubes for the macro work and 2X doublers for a few of the telephotos.

“Along with how the lighting conditions affect the intent of the shot, in the field we use from natural light (all times of day), along with on-camera filtration (ND, grad ND, polarizers) with LED panels as supplements to studio set-ups with a choice of light fixtures,” explains Dunbar. “These range from flashlights, candles, LED panels from 2-x-3 inches to 1-x-2 foot panels, old tungsten units and light through the window. Having been shooting for almost 50 years, I like to use whatever tool is around that fits the need of the shot. If not, I figure out what will do from what’s in the kit.”

Dunbar not only shoots, he edits and colors as well. “My edit suite is kind of old. I have a MacPro (cylinder) with over a petabyte of online storage. I look forward to moving to the next-generation of Macs with Thunderbolt 3. On my current system, I rarely get to see the full 8K resolution. I can check files at 4K via the AJA io4K or the KiPro box to a 4K TV.

“As a stock footage house, other than our occasional demo reels, and a few custom-produced client show reels, we only work with single clips in review, selection and prepping for the MammothHD library and galleries,” he explains. “So as an edit suite, we don’t need a full bore throughput for 4K, much less 8K. Although at some point I’d love to have an 8K state-of-the-art system to see just what we’re actually capturing in realtime.”

Apps used in MammothHD’s Apple-based edit suite are Red’s RedCine-X (the current beta build) using the new IPP2 pipeline, Apple’s Final Cut Pro 7 and FCP X, Adobe’s Premiere Pro, After Effects and Photoshop, and Blackmagic’s Resolve, along with QuickTime 7 Pro.

Working with these large 8K files has been a challenge, says Dunbar. “When selecting a single frame for export as a 16-bit tiff (via the RedCine-X application), the resulting tiff file in 8K is 200MB!”
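That file size is consistent with simple arithmetic on the frame dimensions. A quick sketch, assuming Red's full-width 8192×4320 8K VistaVision frame and three uncompressed 16-bit channels:

```python
# Size of an uncompressed 16-bit RGB frame at Red's 8K VistaVision
# resolution (8192 x 4320), ignoring TIFF headers and any compression.
W, H = 8192, 4320
CHANNELS = 3              # R, G, B
BYTES_PER_SAMPLE = 2      # 16-bit

size_bytes = W * H * CHANNELS * BYTES_PER_SAMPLE
print(f"{size_bytes / 1e6:.0f} MB")  # prints "212 MB"
```

Right in line with the roughly 200MB files Dunbar describes.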

The majority of storage used at MammothHD is Promise Pegasus and G-Tech Thunderbolt and Thunderbolt 2 RAIDs, but the company also has single disks, LTO tape and even some old SDLT media, with interfaces ranging from FireWire to eSATA.

“Like moving to 4K a decade ago, once you see it, it’s hard to go back to lower resolutions. I’m looking forward to expanding the MammothHD 8K galleries with more subjects and styles to fill the 8K markets.” Until then, Dunbar also remains focused on 4K+ footage, which he says is his site’s specialty.


2017 HPA Engineering Excellence Award winners

The HPA has announced the winners of the 2017 Engineering Excellence Award. Colorfront, Dolby, SGO and Red Digital Cinema will be awarded this year’s honor, which recognizes “outstanding technical and creative ingenuity in media, content production, finishing, distribution and/or archiving.”

The awards will be presented November 16, 2017 at the 12th annual HPA Awards show in Los Angeles.

The winners of the 2017 HPA Engineering Excellence Award are:

Colorfront Engine
An automatically managed, ACES-compliant color pipeline that brings plug-and-play simplicity to complex production requirements, Colorfront Engine ensures image integrity from on-set to the finished product.

Dolby Vision Post Production Tools
Dolby Vision Post Production Tools integrate into existing color grading workflows for both cinema and home deliverable grading, preserving more of what the camera originally captured and limiting creative trade-offs.

SGO’s Mistika VR
Mistika VR is SGO’s latest development and is an affordable VR-focused solution with realtime stitching capabilities using SGO’s optical flow technology.

Red’s Weapon 8K Vista Vision
Weapon with the Dragon 8K VV sensor delivers stunning resolution and image quality, and at 35 megapixels, 8K offers 17x more resolution than HD and over 4x more than 4K.
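The resolution multiples quoted for the sensor are easy to verify, assuming the commonly cited 8192×4320 dimensions for the Dragon 8K VV sensor:

```python
# The Dragon 8K VV sensor's 8192 x 4320 frame, compared to HD and 4K UHD.
vv_8k = 8192 * 4320       # ~35.4 megapixels
hd = 1920 * 1080
uhd_4k = 3840 * 2160

print(f"{vv_8k / 1e6:.1f} MP")           # prints "35.4 MP"
print(f"{vv_8k / hd:.1f}x HD")           # prints "17.1x HD"
print(f"{vv_8k / uhd_4k:.1f}x 4K UHD")   # prints "4.3x 4K UHD"
```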

In addition, honorable mentions will also be awarded to Canon USA for Critical Viewing Reference Displays and Eizo for the ColorEdge CG318-4K.

Joachim Zell, who chairs the committee for this award, said, “Entries for the Engineering Excellence Award were at one of the highest levels ever, on a par with last year’s record breaker, and we saw a variety of serious technologies. The HPA Engineering Excellence Award is meaningful to those who present, those who judge, and the industry. It sounds a bit cliché to say that we had a very tight outcome, and it was a really competitive field this year. Congratulations to the winners and to the nominees for another great year.”

The HPA Awards will also recognize excellence in 12 craft categories, covering color grading, editing, sound and visual effects, and Larry Chernoff will receive the 2017 HPA Lifetime Achievement award.


Designed for large file sizes, Facilis TerraBlock 7 ships

Facilis, makers of shared storage solutions for collaborative media production networks, is now shipping TerraBlock Version 7. The new Facilis Hub Server, a performance aggregator that can be added to new and existing TerraBlock systems, is also available now. Version 7 includes a new browser-based, mobile-compatible Web Console that delivers enhanced workflow and administration from any connected location.

With ever-increasing media file sizes and 4K, HDR and VR workflows continually putting pressure on facility infrastructure, the Facilis Hub Server is aimed at future-proofing customers’ current storage while offering new systems that can handle these types of files. The Facilis Hub Server uses a new architecture to optimize drive sets and increase the bandwidth available from standard TerraBlock storage systems. New customers will get customized Hub Server Stacks with enhanced system redundancy and data resiliency, plus near-linear scalability of bandwidth when expanding the network.

According to James McKenna, VP of marketing/pre-sales at Facilis, “The Facilis Hub Server gives current and new customers a way to take advantage of advanced bandwidth aggregation capabilities, without rendering their existing hardware obsolete.”

The company describes the Web Console as a modernized browser-based and mobile-compatible interface designed to increase the efficiency of administrative tasks and improve the end-user experience.

Easy client setup, upgraded remote volume management and a more integrated user database are among the additional improvements. The Web Console also supports Remote Volume Push to remotely mount volumes onto any client workstations.

Asset Tracking
As the number of files and storage continue to increase, organizations are realizing they need some type of asset tracking system to aid them in moving and finding files in their workflow. Many hesitate to invest in traditional MAM systems due to complexity, cost, and potential workflow impact.

McKenna describes the FastTracker asset tracking software as the “right balance for many customers. Many administrators tell us they are hesitant to invest in traditional asset management systems because they worry it will change the way their editors work. Our FastTracker is included with every TerraBlock system. It’s simple but comprehensive, and doesn’t require users to overhaul their workflow.”

V7 is available immediately for eligible TerraBlock servers.



Comprimato plug-in manages Ultra HD, VR files within Premiere

Comprimato, makers of GPU-accelerated storage compression and video transcoding solutions, has launched Comprimato UltraPix. This video plug-in offers proxy-free, auto-setup workflows for Ultra HD, VR and more on hardware running Adobe Premiere Pro CC.

The challenge for post facilities finishing in 4K or 8K Ultra HD, or working on immersive 360­ VR projects, is managing the massive amount of data. The files are large, requiring a lot of expensive storage, which can be slow and cumbersome to load, and achieving realtime editing performance is difficult.

Comprimato UltraPix addresses this by building on JPEG2000, a compression format that offers high image quality (including a mathematically lossless mode) and generates smaller versions of each frame as an inherent part of the compression process. Comprimato UltraPix then delivers each frame at a size the user’s hardware can accommodate.
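This behavior falls out of how JPEG2000's wavelet transform works: each decomposition level halves both dimensions, so a decoder can reconstruct a proportionally smaller image just by stopping early. A sketch of the resulting resolution pyramid (the level count here is illustrative, not UltraPix's actual configuration):

```python
# JPEG2000's wavelet transform produces a resolution pyramid: each
# decomposition level halves both dimensions, so a decoder can stop
# early to get a smaller image without a separate proxy file.
def pyramid(width, height, levels):
    """Return the image dimensions available at each decode level."""
    sizes = [(width, height)]
    for _ in range(levels):
        width = (width + 1) // 2
        height = (height + 1) // 2
        sizes.append((width, height))
    return sizes

for w, h in pyramid(7680, 4320, 3):
    print(f"{w} x {h}")
# prints "7680 x 4320", "3840 x 2160", "1920 x 1080", "960 x 540"
```

With three decomposition levels, an 8K UHD master offers full-HD and quarter-HD decodes essentially for free.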

Once Comprimato UltraPix is loaded on any hardware, it configures itself with auto-setup, requiring no specialist knowledge from the editor who continues to work in Premiere Pro CC exactly as normal. Any workflow can be boosted by Comprimato UltraPix, and the larger the files the greater the benefit.

Comprimato UltraPix is multi-platform video processing software for instant resolution switching in realtime. It is a lightweight, downloadable video plug-in for OS X, Windows and Linux systems. Editors can switch between 4K, 8K, full HD, HD or lower resolutions without proxy-file rendering or transcoding.

“JPEG2000 is an open standard, recognized universally, and post production professionals will already be familiar with it as it is the image standard in DCP digital cinema files,” says Comprimato founder/CEO Jiří Matela. “What we have achieved is a unique implementation of JPEG2000 encoding and decoding in software, using the power of the CPU or GPU, which means we can embed it in realtime editing tools like Adobe Premiere Pro CC. It solves a real issue, simply and effectively.”

“Editors and post professionals need tools that integrate ‘under the hood’ so they can focus on content creation and not technology,” says Sue Skidmore, partner relations for Adobe. “Comprimato adds a great option for Adobe Premiere Pro users who need to work with high-resolution video files, including 360 VR material.”

Comprimato UltraPix plug-ins are currently available for Adobe Premiere Pro CC and Foundry Nuke and will be available on other post and VFX tools soon. You can download a free 30-day trial or buy Comprimato UltraPix for $99 a year.


Post vet Katie Hinsen now head of operations at NZ’s Department of Post

Katie Hinsen, who many of you may know as co-founder of the Blue Collar Post Collective, has moved back to her native New Zealand and has been named head of operations at Auckland’s Department of Post.

Most recently at New York City’s Light Iron, Hinsen comes from a technical and operations background, with credits on over 80 major productions. Over a 20-year career she has worked as an engineer, editor, VFX artist, stereoscopic 3D artist, colorist and finishing artist on commercials, documentaries, television, music videos, shorts and feature films. In addition to Light Iron, she has had stints at New Zealand’s Park Road Post Production and Goldcrest in New York.

Hinsen has throughout her career been involved in both production and R&D of new digital acquisition and distribution formats, including stereoscopic/autostereoscopic 3D, Red, HFR, HDR, 4K+ and DCP. Her expertise includes HDR, 4K and 8K workflows.

“I was looking for a company that had the forward-thinking agility to be able to grow in a rapidly changing industry. New Zealand punches well above its weight in talent and innovation, and now is the time to use this to expand our wider post production ecosystem,” says Hinsen.

“Department of Post is a company that has shown rapid growth and great success by taking risks, thinking outside the box, and collaborating across town, across the country and across the world. That’s a model I can work with, to help bring and retain more high-end work to Auckland’s post community. We’ve got an increasing number of large-scale productions choosing to shoot here. I want to give them a competitive reason to stay here through Post.”

Department of Post was started by James Brookes and James Gardner in 2008. They provide offline, online, color, audio and deliverables services to film and television productions, both local and international.

Quick Chat: Josh Haynie, Light Iron’s VP of US operations

Post services company Light Iron has named veteran post pro Josh Haynie VP of US operations, a newly created position. Based in Light Iron’s Hollywood facility, Haynie will be responsible for leveraging the company’s resources across Los Angeles, New York, New Orleans and future locations.

Haynie joins Light Iron after 13 years at Efilm, where, as managing director, he maintained direct responsibility for all aspects of the company’s operations, including EC3 (on-location services), facility dailies, trailers, digital intermediate, home video and restoration. He managed a team of 100-plus employees. Previously, Haynie held positions at Sunset Digital, Octane/Lightning Dubs and other production and post companies. Haynie is an associate member of the ASC and is also actively involved in the HPA, SMPTE, and VES.

“From the expansion of Light Iron’s episodic services and New York facilities to the development of the color science in the new Millennium DXL camera, it is clear that the integration of Panavision and Light Iron brings significant benefits to clients,” says Haynie.

He was kind enough to take time out of his schedule to answer some of our questions…

Your title hints at Light Iron opening up in new territories. Can you talk about this? What is happening in the industry that makes this a logical move?
We want to be strategically located near the multiple Panavision locations. Productions and filmmakers need the expertise and familiarity of Light Iron resources in the region, with the security and stability of a solid infrastructure. Projects often have splinter and multiple units in various locations, and they demand workflow continuity across these disparate locations. We can help facilitate projects working in those various regions and offer unparalleled support and guidance.

What do you hope to accomplish in your first 6 to 12 months? What are your goals for Light Iron?
I want to learn from this very agile team of professionals and bring in operational and workflow options to the rapidly changing production/post production convergence we are all encountering. We have a very solid footing in LA, NY and NOLA. I want to ensure that each unit is working together using effective skills and technology to collaborate and allow filmmakers creative freedom. My goal is to help navigate this team through the traditional growth patterns as well as the unpredictable challenges that lie ahead in the emerging market.

You have a wealth of DI experience and knowledge. How has DI changed over the years?
The change depends on the elevation. From a very high level, it was the same simple process for many years: shoot, edit, scan, VFX, color — and our hero was always a film print. Flying lower, we have seen massive shifts in technology that have rewritten the playbooks. The DI really starts in the camera testing phase and begins to mature during the production photography stage. The importance of look setting, dailies and VFX collaboration takes on a whole new meaning with each day of shooting.

The image data that is captured needs to be available for near-set cutting while VFX elements are being pulled within a few short days of photography. This image data needs to be light and nimble, albeit massive in file size and run time. Turnarounds in the feature space are shrinking exponentially. We are experiencing international collaboration on the finish and color of each project, and the final render dates are increasingly close to worldwide release dates. We are now seeing a tipping point like we encountered a few years back when we asked ourselves, “Is the hero a print or DCP?” Today, we are at the next hero question: DCP or HDR?

Do you have any advice for younger DI artists based on your history?
I think it is always good to learn from the past and understand how we got here. I would say younger artists need to aggressively educate themselves on workflow, technology, and collaboration. Each craft in the journey has experienced rapid evolution in the last few years. There are many outlets to learn about the latest capture, edit, VFX, sound and distribution techniques being offered, and that research time needs to be on everyone’s daily task list. Seeking out emerging creative talent is critical learning at this stage as well. Every day a filmmaker is formulating a vision that is new to the world. We are fortunate here at Light Iron to work with these emerging filmmakers who share the same passion for taking that bold next step in storytelling.