Category Archives: 8K

Lenovo intros 15-inch VR-ready ThinkPad P52

Lenovo’s new ThinkPad P52 is a 15-inch, VR-ready and ISV-certified mobile workstation featuring an Nvidia Quadro P3200 GPU. The all-new hexa-core Intel Xeon CPU supports double the memory capacity, up to 128GB, and more PCIe storage. Lenovo says the ThinkPad excels at animation and visual effects project storage, the creation of large models and datasets, and realtime playback.

“More and more, M&E artists have the need to create on-the-go,” says Lenovo senior worldwide industry manager for M&E Rob Hoffmann. “Having desktop-like capabilities in a 15-inch mobile workstation allows artists to remain creative anytime, anywhere.”

The workstation targets traditional ISV workflows, as well as AR and VR content creation and the deployment of mobile AI. Lenovo points to Virtalis, a VR and advanced visualization company, as an example of the type of company that might take advantage of the workstation.

“Our virtual reality solutions help clients better understand data and interact with it. Being able to take these solutions mobile with the ThinkPad P52 gives us expanded flexibility to bring the technology to life for clients in their unique environments,” says Steve Carpenter, head of solutions development for Virtalis. “The ThinkPad P52 powering our Virtalis Visionary Render software is perfect for engineering and design professionals looking for a portable solution to take their first steps into the endless possibilities of VR.”

The P52 will also feature a 4K UHD display with 400 nits of brightness, 100% Adobe RGB color gamut and 10-bit color depth. There are dual USB-C Thunderbolt ports supporting the display of 8K video, allowing users to take advantage of the ThinkPad Thunderbolt Workstation Dock.

The ThinkPad P52 will be available later this month.

Testing large format camera workflows

By Mike McCarthy

In the last few months, we have seen the release of the Red Monstro, Sony Venice, Arri Alexa LF and Canon C700 FF, all of which have larger or full-frame sensors. Full frame is a term borrowed from DSLRs, referring to a sensor equivalent to the entire 35mm film area, the way film was used horizontally in still cameras. All SLRs used to be full frame with 35mm film, so there was no need for the term until manufacturers started saving money on digital image sensors by making them smaller than 35mm film exposures. Super35mm motion picture cameras, on the other hand, ran the film vertically, resulting in a smaller exposure area per frame, but this was still much larger than most video imagers until the last decade, when 2/3-inch chips were considered premium imagers. The options have grown a lot since then.

L-R: 1st AC Ben Brady, DP Michael Svitak and Mike McCarthy on the monitor.

Most of the top-end cinema cameras released over the last few years have advertised their Super35mm sensors as a huge selling point, as that allows use of any existing S35 lens on the camera. These S35 cameras include the Epic, Helium and Gemini from Red, Sony’s F5 and F55, Panasonic’s VariCam LT, Arri’s Alexa and Canon’s C100-500. On the top end, 65mm cameras like the Alexa 65 have sensors twice as wide as Super35 cameras, but very limited lens options to cover a sensor that large. Full frame falls somewhere in between and allows, among other things, use of any 35mm still film lenses. In the world of film, this was referred to as VistaVision, but the first widely used full-frame digital video camera was Canon’s 5D MkII, the first serious HDSLR. That format has surged in popularity recently, and thanks to this I had the opportunity to be involved in a test shoot with a number of these new cameras.

Keslow Camera was generous enough to give DP Michael Svitak and me access to pretty much all of their full-frame cameras and lenses for the day in order to test the cameras, workflows and lens options for this new format. We also had the assistance of first AC Ben Brady to help us put all that gear to use, and Mike’s daughter Florendia as our model.

First off was the Red Monstro, which, while technically not the full 24mm height of true full frame, uses the same size lenses due to the width of its 17×9 sensor. It offers the highest resolution of the group at 8K. It records compressed RAW to R3D files, with options for ProRes and DNxHR up to 4K, all saved to Red mags. Like the rest of the group, smaller portions of the sensor can be used at lower resolutions to pair with smaller lenses. The Red Helium sensor has the same resolution but in a much smaller Super35 size, allowing a wider selection of lenses to be used. But larger pixels allow more light sensitivity, with individual pixels up to 5 microns wide on the Monstro and Dragon, compared to Helium’s 3.65-micron pixels.
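
To put those pixel sizes in perspective, the light gathered per photosite scales with its area, so the Monstro’s larger pixels collect nearly twice the light of Helium’s. A quick back-of-the-envelope sketch (my own arithmetic, using Red’s published pixel pitches):

```python
# Light gathered per photosite scales with its area (all else being equal).
monstro_pitch_um = 5.0    # Monstro/Dragon pixel width, per Red's specs
helium_pitch_um = 3.65    # Helium pixel width

area_ratio = (monstro_pitch_um / helium_pitch_um) ** 2
print(f"Monstro photosite area vs. Helium: {area_ratio:.2f}x")  # ~1.88x, most of a stop
```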

Next up was Sony’s new Venice camera with a 6K full-frame sensor, which allows 4K S35 recording as well. It records XAVC to SxS cards or compressed RAW in the X-OCN format with the optional AXS-R7 external recorder, which we used. It is worth noting that both full-frame recording and integrated anamorphic support require additional special licenses from Sony, but Keslow provided us with a camera that had all of that functionality enabled. With a 36x24mm 6K sensor, the pixels are 5.9 microns, and footage shot at 4K in the S35 mode should be similar to shooting with the F55.

We unexpectedly had the opportunity to shoot on Arri’s new Alexa LF (Large Format) camera. At 4.5K, this had the lowest resolution, but that also means the largest sensor pixels at 8.25 microns, which can increase sensitivity. It records ArriRaw or ProRes to Codex XR capture drives with its integrated recorder.

One other new option is the Canon C700 FF, with a 5.9K full-frame sensor recording RAW, ProRes or XAVC to CFast cards or Codex drives. That gives it 6-micron pixels, similar to the Sony Venice. But we did not have the opportunity to test that camera this time around; maybe in the future.

One more factor in all of this is the rising popularity of anamorphic lenses. All of these cameras support modes that use the part of the sensor covered by anamorphic lenses and can desqueeze the image for live monitoring and preview. In the digital world, anamorphic essentially cuts your overall resolution in half, until the unlikely event that we start seeing anamorphic projectors or cameras with rectangular sensor pixels. But the prevailing attitude appears to be, “We have lots of extra resolution available so it doesn’t really matter if we lose some to anamorphic conversion.”
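
A rough sketch of that trade-off, using a 2x anamorphic capture on an 8K 17×9 raster (my own illustration; the actual active area varies by camera and mode):

```python
# Anamorphic resolution trade-off, illustrated. A 2x anamorphic lens uses a
# narrower region of the sensor, which is stretched 2x horizontally on
# desqueeze, so effective horizontal detail is roughly halved.
sensor_w, sensor_h = 8192, 4320   # full 17x9 raster (e.g. 8K VV)
squeeze = 2.0                     # anamorphic squeeze factor
target_aspect = 2.39              # delivery aspect ratio

capture_w = round(sensor_h * target_aspect / squeeze)
print(f"captured region:  {capture_w}x{sensor_h}")                    # 5162x4320
print(f"desqueezed image: {round(capture_w * squeeze)}x{sensor_h}")   # 10324x4320
print(f"unique horizontal samples: {capture_w} of {sensor_w}")
```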

Post Production
So what does this mean for post? In theory, sensor size has no direct effect on the recorded files (beyond their content), but resolution does. We also have a number of new formats to deal with, and anamorphic images to handle during finishing.

Ever since I got my hands on one of Dell’s new UP3218K monitors with an 8K screen, I have been collecting 8K assets to display on it. When I first started discussing this shoot with DP Michael Svitak, I was primarily interested in getting some more 8K footage to test out new 8K monitors, editing systems and software as they were released. I was anticipating getting Red footage, which I knew I could play back and process using my existing software and hardware.

The other cameras and lens options were added as the plan expanded, and by the time we got to Keslow Camera, they had filled a room with lenses and gear for us to test with. I also had a Dell 8K display connected to my ingest system, as well as the new 4K DreamColor monitor. This allowed me to view the recorded footage at the highest resolution possible.

Most editing programs, including Premiere Pro and Resolve, can handle anamorphic footage without issue, but new camera formats can be a bigger challenge. Any RAW file requires info about the sensor pattern in order to debayer it properly, and new compression formats are even more work. Sony’s new compressed RAW format for Venice, called X-OCN, is supported in the newest 12.1 release of Premiere Pro, so I didn’t expect that to be a problem. Its other recording option is XAVC, which should work as well. The Alexa, on the other hand, uses ArriRaw files, which have been supported in Premiere for years, but each new camera shoots a slightly different “flavor” of the file based on the unique properties of that sensor. Shooting ProRes instead would virtually guarantee compatibility, but at the expense of the RAW properties. (Maybe someday ProRes RAW will offer the best of both worlds.) The Alexa also has the challenge of recording to Codex drives that can only be offloaded in OS X or Linux.
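
To make the debayering point concrete, here is a deliberately naive demosaic of an RGGB mosaic in Python/numpy. This is a toy sketch for illustration only, not how any vendor’s SDK actually reconstructs the image:

```python
import numpy as np

def toy_debayer_rggb(raw: np.ndarray) -> np.ndarray:
    """Naive demosaic: collapse each 2x2 RGGB cell into one RGB pixel,
    averaging the two green samples. Real debayering interpolates each
    channel back to full resolution with sensor-specific weighting,
    which is why each new camera's RAW flavor needs explicit support."""
    r = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2].astype(np.float32)
    g2 = raw[1::2, 0::2].astype(np.float32)
    b = raw[1::2, 1::2]
    g = ((g1 + g2) / 2).astype(raw.dtype)
    return np.stack([r, g, b], axis=-1)

mosaic = np.random.randint(0, 65536, (4320, 8192), dtype=np.uint16)
print(toy_debayer_rggb(mosaic).shape)  # (2160, 4096, 3)
```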

Once I had all of the files on my system, after using a MacBook Pro to offload the media cards, I tried to bring them into Premiere. The Red files came in just fine but didn’t play back smoothly above 1/4 resolution. They played smoothly in RedCine-X with my Red Rocket-X enabled, and they export respectably fast in AME (a five-minute 8K anamorphic sequence to UHD H.265 in 10 minutes), but for some reason Premiere Pro isn’t able to get smooth playback when using the Red Rocket-X. Next I tried the X-OCN files from the Venice camera, which imported without issue. They played smoothly on my machine but looked like they were locked to half or quarter res, regardless of what settings I used, even in the exports. I am currently working with Adobe to get to the bottom of that, because they are able to play back my files at full quality, while all my systems have the same issue. Lastly, I tried to import the Arri files from the Alexa LF, but Adobe doesn’t support that new variation of ArriRaw yet. I would anticipate that will happen soon, since it shouldn’t be too difficult to add the new version to the existing support.

I ended up converting the files I needed to DNxHR in DaVinci Resolve so I could edit them in Premiere, and I put together a short video showing off the various lenses we tested with. Eventually, I need to learn how to use Resolve more efficiently, but the type of work I usually do lends itself to the way Premiere is designed — inter-cutting and nesting sequences with many different resolutions and aspect ratios. Here is a short clip demonstrating some of the lenses we tested with:

This is a web video, so even at UHD it is not meant to be an analysis of the RAW image quality, but instead a demonstration of the field of view and overall feel with various lenses and camera settings. The combination of the larger sensors and the anamorphic lenses leads to an extremely wide field of view. The table was only about 10 feet from the camera, and we could usually see all the way around it. We also discovered that when recording anamorphic on the Alexa LF, we were recording a wider image than was displayed on the monitor output. You can see in the frame grab below that the live display visible on the right side of the image isn’t showing the full content that got recorded, which is why we didn’t notice that we were recording with the wrong settings, with so much vignetting from the lens.

We only discovered this after the fact, from this shot, so we didn’t get the opportunity to track down the issue to see if it was the result of a setting in the camera or in the monitor. This is why we test things before a shoot, but we didn’t “test” before our camera test, so these things happen.

We learned a lot from the process, and hopefully some of those lessons are conveyed here. A big thanks to Brad Wilson and the rest of the guys at Keslow Camera for their gear and support of this adventure. Hopefully, it will help people better prepare to shoot and post with this new generation of cameras.

Main Image: DP Michael Svitak


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.


Red simplifies camera lineup with one DSMC2 brain

Red Digital Cinema has modified its camera lineup to include one DSMC2 camera Brain with three sensor options — Monstro 8K VV, Helium 8K S35 and Gemini 5K S35. The single DSMC2 camera Brain includes high-end frame rates and data rates regardless of the sensor chosen. In addition, this streamlined approach will result in a price reduction compared to Red’s previous camera lineup.

“We have been working to become more efficient, as well as align with strategic manufacturing partners to optimize our supply chain,” says Jarred Land, president of Red Digital Cinema. “As a result, I am happy to announce a simplification of our lineup with a single DSMC2 brain with multiple sensor options, as well as an overall reduction on our pricing.”

Red’s DSMC2 camera Brain is a modular system that allows users to configure a fully operational camera setup to meet their individual needs. Red offers a range of accessories, including display and control functionality, input/output modules, mounting equipment and methods of powering the camera. The camera Brain is capable of up to 60fps at 8K, offers 300MB/s data transfer speeds, and supports simultaneous recording of RedCode RAW and Apple ProRes or Avid DNxHD/HR.

The Red DSMC2 camera Brain and sensor options:
– DSMC2 with Monstro 8K VV offers cinematic full frame lens coverage, produces ultra-detailed 35.4 megapixel stills and offers 17+ stops of dynamic range for $54,500.
– DSMC2 with Helium 8K S35 offers 16.5+ stops of dynamic range in a Super 35 frame, and is available now for $24,500.
– DSMC2 with Gemini 5K S35 uses dual sensitivity modes to provide creators with greater flexibility, using standard mode for well-lit conditions or low-light mode for darker environments. It is priced at $19,500.

Red will begin to phase out new sales of its Epic-W and Weapon camera Brains starting immediately. In addition to the changes to the camera line-up, Red will also begin offering new upgrade paths for customers looking to move from older Red camera systems or from one sensor to another. The full range of upgrade options can be found here.

NAB First Thoughts: Fusion in Resolve, ProRes RAW, more

By Mike McCarthy

These are my notes from the first day I spent browsing the NAB Show floor this year in Las Vegas. When I walked into the South Lower Hall, Blackmagic was the first thing I saw. And, as usual, they had a number of new products this year. The headline item is the next version of DaVinci Resolve, which now integrates the functionality of their Fusion visual effects editor within the program. While I have never felt Resolve to be a very intuitive program for my own work, it is a solution I recommend to others who are on a tight budget, as it offers the most functionality for the price, especially in the free version.

Blackmagic Pocket Cinema Camera

The Blackmagic Pocket Cinema Camera 4K looks more like a “normal” MFT DSLR camera, although it is clearly designed for video instead of stills. Recording full 4K resolution in RAW or ProRes to SD or CFast cards, it has a mini-XLR input with phantom power and uses the same LP-E6 battery as my Canon DSLR. It uses the same camera software as the Ursa line of devices and includes a copy of Resolve Studio… for $1,300. If I were going to be shooting more live-action video anytime soon, this might make a decent replacement for my 70D, moving up to 4K and HDR workflows. I am not as familiar with the Panasonic cameras that it competes with most closely in the Micro Four Thirds space.

AMD Radeon

Among other smaller items, Blackmagic’s new UpDownCross HD MiniConverter will be useful outside of broadcast for manipulating HDMI signals from computers or devices that have less control over their outputs. (I am looking at you, Mac users.) For $155, it will help interface with projectors and other video equipment. At $65, the bi-directional MicroConverter will be a cheaper and simpler option for basic SDI support.

AMD was showing off 8K editing in Premiere Pro, the result of an optimization by Adobe that uses the 2TB SSD storage in AMD’s Radeon Pro SSG graphics card to cache rendered frames at full resolution for smooth playback. This change is currently only applicable to one graphics card, so it will be interesting to see if Adobe did this because it expects to see more GPUs with integrated SSDs hit the market in the future.

Sony is showing its Crystal LED (light emitting diode) display technology in the form of a massive ZRD video wall showing incredible imagery. The clarity and brightness were truly breathtaking, but obviously my photo of it, rendered to the web, hardly captures the essence of what they were demonstrating.

Like nearly everyone else at the show, Sony is also pushing HDR in the form of Hybrid Log Gamma, which they are building into many of their products. They also had an array of their tiny RX0 cameras on display with a backpack rig from Radiant Images.

ProRes RAW
At a higher level, one of the most interesting things I have seen at the show is the release of ProRes RAW. While currently limited to external recorders connected to cameras from Sony, Panasonic and Canon, and only supported in FCP-X, it has the potential to dramatically change future workflows if it becomes more widely supported. Many people confuse RAW image recording with the log gamma look, or other low-contrast visual interpretations, but at its core RAW imaging is a single-channel image format paired with a particular Bayer color pattern specific to the sensor it was recorded with.

This decreases the amount of data to store (or compress) and gives access to the “source” image before it has been processed to improve visual interpretation — in the form of debayering and adding a gamma curve to reverse engineer the response pattern of the human eye, as compared to mechanical light sensors. This provides more flexibility and processing options during post, on top of the storage savings. There are lots of other compressed RAW formats available; the only thing ProRes actually brings to the picture is widespread acceptance and trust in the compression quality. Existing compressed RAW formats include R3D, CinemaDNG, CineformRAW and Canon CRM files.
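
The storage argument is easy to see with some rough math (illustrative figures, before any compression is applied):

```python
# One sensor sample per pixel (RAW) vs. three debayered channels (RGB).
width, height, fps = 8192, 4320, 24
bits_per_sample = 16              # assumed container bit depth

raw_gbps = width * height * fps * bits_per_sample / 1e9
rgb_gbps = raw_gbps * 3           # R, G and B planes after debayering
print(f"uncompressed RAW: {raw_gbps:.1f} Gb/s")   # ~13.6 Gb/s
print(f"uncompressed RGB: {rgb_gbps:.1f} Gb/s")   # ~40.8 Gb/s
```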

None of those caught on as a widespread multi-vendor format, but ProRes RAW is already supported by systems from three competing camera vendors. And the applications of RAW imaging in producing HDR content make the timing of this release optimal to encourage vendors to support it, as they know their customers are struggling to find simpler solutions to HDR production issues.

There is no technical reason that ProRes RAW couldn’t be implemented on future Arri, Red or BMD cameras, which are all currently capable of recording ProRes and RAW data (but not the combination, yet). And since RAW is inherently a playback-only format (you can’t alter a RAW image without debayering it), I anticipate we will see support in other applications, unless Apple wants to sacrifice the format in an attempt to increase NLE market share.

So it will be interesting to see what other companies and products support the format in the future, and hopefully it will make life easier for people shooting and producing HDR content.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.


GTC embraces machine learning and AI

By Mike McCarthy

I had the opportunity to attend GTC 2018, Nvidia‘s 9th annual technology conference in San Jose this week. GTC stands for GPU Technology Conference, and GPU stands for graphics processing unit, but graphics makes up a relatively small portion of the show at this point. The majority of the sessions and exhibitors are focused on machine learning and artificial intelligence.

And the majority of the graphics developments are centered around analyzing imagery, not generating it. Whether that is classifying photos on Pinterest or giving autonomous vehicles machine vision, it is based on the capability of computers to understand the content of an image. Now DriveSim, Nvidia’s new simulator for virtually testing autonomous drive software, dynamically creates imagery for the other system in the Constellation pair of servers to analyze and respond to, but that is entirely machine-to-machine imagery communication.

The main exception to this non-visual usage trend is Nvidia RTX, which allows raytracing to be rendered in realtime on GPUs. RTX can be used through Nvidia’s OptiX API, as well as Microsoft’s DirectX RayTracing API, and eventually through the open source Vulkan cross-platform graphics solution. It integrates with Nvidia’s AI Denoiser to use predictive rendering to further accelerate performance, and can be used in VR applications as well.

Nvidia RTX was first announced at the Game Developers Conference last week, but the first hardware to run it was just announced here at GTC, in the form of the new Quadro GV100. This $9,000 card replaces the existing Pascal-based GP100 with a Volta-based solution. It retains the same PCIe form factor, the quad DisplayPort 1.4 outputs and the NV-Link bridge to pair two cards at 200GB/s, but it jumps the GPU RAM per card from 16GB to 32GB of HBM2 memory. The GP100 was the first Quadro offering since the K6000 to support double-precision compute processing at full speed, and the increase from 3,584 to 5,120 CUDA cores should provide a 40% increase in performance, before you even look at the benefits of the 640 Tensor Cores.

Hopefully, we will see simpler versions of the Volta chip making their way into a broader array of more budget-conscious GPU options in the near future. The fact that the new Nvidia RTX technology is stated to require Volta-architecture GPUs leads me to believe that they must be right on the horizon.

Nvidia also announced a new all-in-one GPU supercomputer: the DGX-2 supports twice as many Tesla V100 GPUs (16) with twice as much RAM each (32GB) compared to the existing DGX-1. This provides 81,920 CUDA cores addressing 512GB of HBM2 memory over a fabric of new NV-Link switches, as well as dual Xeon CPUs, InfiniBand or 100GbE connectivity, and 32TB of SSD storage. This $400K supercomputer is marketed as the world’s largest GPU.
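
Checking those figures with some quick arithmetic (naively assuming performance scales with core count, and setting the Tensor Cores aside):

```python
# GV100 vs. GP100: raw CUDA core scaling.
gp100_cores, gv100_cores = 3584, 5120
print(f"core increase: {gv100_cores / gp100_cores - 1:.0%}")   # ~43%

# DGX-2: 16 Tesla V100s, 5,120 cores and 32GB of HBM2 apiece.
print(f"DGX-2 CUDA cores: {16 * 5120}")   # 81920
print(f"DGX-2 GPU memory: {16 * 32} GB")  # 512 GB
```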

Nvidia and their partners had a number of cars and trucks on display throughout the show, showcasing various pieces of technology that are being developed to aid in the pursuit of autonomous vehicles.

Also on display in the category of “actually graphics related” was the new Max-Q version of the mobile Quadro P4000, which is integrated into PNY’s first mobile workstation, the Prevail Pro. Besides supporting professional VR applications, the HDMI and dual DisplayPort outputs allow a total of three external displays up to 4K each. It isn’t the smallest or lightest 15-inch laptop, but it is the only system under 17 inches I am aware of that supports the P4000, which is considered the minimum spec for professional VR implementation.

There are, of course, lots of other vendors exhibiting their products at GTC. I had the opportunity to watch 8K stereo 360 video playing off of a laptop with an external GPU. I also tried out the VRHero 5K Plus enterprise-level HMD, which brings the VR experience to a whole other level. Much more affordable is TPCast’s $300 wireless upgrade for Vive and Rift HMDs, the first of many untethered VR solutions. HTC has also recently announced the Vive Pro, which will be available in April for $800. It increases the resolution by 1/3 in both dimensions, to 2880×1600 total, and moves from HDMI to DisplayPort 1.2 and USB-C. Besides VR products, there were also all sorts of robots in various forms on display.

Clearly the world of GPUs has extended far beyond the scope of accelerating computer graphics generation, and Nvidia is leading the way in bringing massive information processing to a variety of new and innovative applications. And if that leads us to hardware that can someday raytrace in realtime at 8K in VR, then I suppose everyone wins.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.


A Closer Look: Why 8K?

By Mike McCarthy

As we enter 2018, we find a variety of products arriving to market that support 8K imagery. The 2020 Olympics are slated to be broadcast in 8K, and while clearly we have a way to go, innovations are constantly being released that get us closer to making that a reality.

The first question that comes up when examining 8K video gear is, “Why 8K?” Obviously, it provides more resolution, but that is more of an answer to the how question than the why question. Many people will be using 8K imagery to create projects that are finished at 4K, giving them the benefits of oversampling or re-framing options. Others will use the full 8K resolution on high DPI displays. There is also the separate application of using 8K images in 360 video for viewing in VR headsets.

Red Monstro 8K

Similar technology may allow reduced resolution extraction on-the-fly to track an object or person in a dedicated 1080p window from an 8K master shot, whether that is a race car or a basketball player. The benefit compared to tracking them with the camera is that these extractions can be generated for multiple objects simultaneously, allowing viewers to select their preferred perspective on the fly. So there are lots of uses for 8K imagery. Shooting 8K for finishing in 4K is not much different from a workflow perspective than shooting 5K or 6K, so we will focus on workflows and tools that actually result in an 8K finished product.
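
A minimal sketch of that extraction step (my own illustration; in practice the window center would come from an object tracker, with the crop resampled and encoded downstream):

```python
import numpy as np

def extract_1080p_window(frame: np.ndarray, cx: int, cy: int) -> np.ndarray:
    """Cut a 1080p window, centered on a tracked subject, out of an 8K
    master frame. Several windows can be cut from the same frame, one
    per subject, so each viewer can follow a different target."""
    w, h = 1920, 1080
    x = min(max(cx - w // 2, 0), frame.shape[1] - w)
    y = min(max(cy - h // 2, 0), frame.shape[0] - h)
    return frame[y:y + h, x:x + w]

frame = np.zeros((4320, 7680, 3), dtype=np.uint8)           # one 8K UHD frame
print(extract_1080p_window(frame, cx=5000, cy=2000).shape)  # (1080, 1920, 3)
```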

8K Production
The first thing you need for 8K video production is an 8K camera. There are a handful of options, the most popular ones being from Red. The Weapon 8K came out in 2016, followed by the smaller-sensor Helium 8K and the recently announced Monstro 8K. Panavision has the DXL, which by my understanding is really a derivation of the Red Dragon 8K sensor. Canon has been demoing an 8K camera for two years now, with no released product that I am aware of. Sony announced the UHC-8300, an 8K 3-chip camera, at IBC 2017, but that is probably out of most people’s price range. Those are the only major options I am currently aware of, and the Helium 8K is the only one I have been able to shoot with and edit footage from.

Sony UHC-8300 8K

Moving 8K content around in realtime is a challenge. DisplayPort 1.3 supports 8K at 30p, with dual cables being used for 60p. HDMI 2.1 will eventually allow devices to support 8K video on a single cable as well. (The HDMI 2.1 specification was just released at the end of November, so it will be a while before we see it implemented in products on the market. DisplayPort 1.4 exists today, in shipping GPUs and Dell’s monitor, while HDMI 2.1 only exists on paper and in CES technology demos.) Another approach is to use multiple parallel channels of 12G SDI, similar to how quad 3G SDI can be used to transmit 4K data. It is more likely that by the time most facilities are pushing around lots of realtime 8K content, they will have moved to video-over-IP, using compression to move 8K streams on 10GbE networks, or moving uncompressed 8K content on 40Gb or 100Gb networks.
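
Some rough data-rate arithmetic shows why (nominal link speeds; real links carry less once protocol overhead is counted):

```python
# Uncompressed 8K UHD data rates vs. the links mentioned above.
def video_gbps(w, h, fps, bits_per_pixel):
    return w * h * fps * bits_per_pixel / 1e9

for fps in (30, 60):                        # 10-bit 4:2:2 = 20 bits/pixel
    print(f"8K{fps}p 4:2:2 10-bit: {video_gbps(7680, 4320, fps, 20):.1f} Gb/s")
# -> ~19.9 Gb/s at 30p, ~39.8 Gb/s at 60p

for name, gb in [("10GbE", 10), ("Thunderbolt 3", 40),
                 ("quad 12G-SDI", 48), ("100GbE", 100)]:
    print(f"{name}: {gb} Gb/s nominal")
# 8K60 only fits on quad 12G-SDI or 100GbE with headroom to spare,
# which is why 8K over 10GbE implies compression.
```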

Software
The next step is the software part, which is in pretty good shape. Most high-end applications are already set for 8K, because high resolutions are already used for backplates and other unique purposes, and because software is the easiest part of supporting higher resolutions. I have edited 8K files in Adobe Premiere Pro in a variety of flavors without issue. Both Avid Media Composer and Blackmagic Resolve claim to support 8K content. Codec-wise, there are already lots of options for storing 8K, including DNxHR, Cineform, JPEG2000 and HEVC/H.265, among many others.

Blackmagic DeckLink 8K Pro

The hardware to process those files in realtime is a much greater challenge, but we are just seeing the release of Intel’s next generation of high-end computing chips. The existing gear is just at the edge of functional at 8K, so I expect the new systems to make 8K editing and playback a reality at the upper end. Blackmagic has announced the DeckLink 8K Pro, a PCIe card with quad 12G SDI ports. I suspect that AJA’s new Io 4K Plus, with quad bidirectional 12G SDI ports, may support 8K at some point in the future. Thunderbolt 3 is the main bandwidth limitation there, but it should do 4:2:2 at 24p or 30p. I am unaware of any display that can take that yet, but I am sure they are coming.

In regard to displays, the only one commercially available is Dell’s UP3218K monitor, running on dual DisplayPort 1.4 cables. It looks amazing, but you won’t be able to hook it up to your 8K camera for live preview very easily. An adapter is a theoretical possibility, but I haven’t heard of any being developed. Most 8K assets are being recorded to be used in 4K projects, so output and display at 8K aren’t as big of a deal. Most people will have their needs met with existing 4K options, with the 8K content giving them the option to reframe their shots without losing resolution.

Dell UP3218K

Displaying 8K content at 4K is a much simpler proposition with current technology. Many codecs allow for half-res decode, which makes the playback requirements similar to 4K at full resolution. While my dual-processor desktop workstation can play back most any intermediate codec at half resolution for 4K preview, my laptop seems like a better test-bed to evaluate the fractional-resolution playback efficiency of various codecs at 8K, so that will be one of my next investigations.
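
The pixel counts make the point, since decode cost scales roughly with the number of pixels decoded:

```python
# Pixels decoded per 8K frame at fractional resolutions.
w, h = 8192, 4320
for divisor, label in ((1, "full"), (2, "half"), (4, "quarter")):
    dw, dh = w // divisor, h // divisor
    print(f"{label:>7} res: {dw}x{dh} = {dw * dh / 1e6:.1f} MP")
# Half-res 8K (4096x2160) decodes the same pixel count as full-res 4K DCI.
```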

Assuming you want to show your content at the full 8K, how do you deliver it to your viewers? H.264 files are hard-limited to 4K, but HEVC (or H.265) allows 8K files to be encoded and decoded at reasonable file sizes, and is hardware-accelerated on the newest GPU cards. So 8K HEVC playback should be possible on shipping mid- and high-end computers, provided that you have a display to see it on. 8K options will continue to grow as TV makers push to set apart their top-of-the-line models, and that will motivate development of the rest of the ecosystem to support them.
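
As a concrete example, an 8K HEVC delivery encode can be done today with a libx265-enabled ffmpeg build. This is a hedged sketch; the filenames are placeholders and the settings are just reasonable starting points:

```python
import subprocess

# Encode an 8K master to HEVC for delivery. Practical H.264 levels top out
# around 4K, while HEVC's level 6.x tier covers 8192x4320.
subprocess.run([
    "ffmpeg", "-i", "master_8k.mov",     # hypothetical source file
    "-c:v", "libx265",                   # HEVC encoder
    "-preset", "slow", "-crf", "22",     # quality-targeted rate control
    "-pix_fmt", "yuv420p10le",           # 10-bit 4:2:0
    "-tag:v", "hvc1",                    # helps QuickTime/Apple playback
    "delivery_8k.mp4",
], check=True)
```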


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.


Blackmagic embraces 8K workflows with DeckLink 8K Pro

At InterBee in Japan, Blackmagic showed it believes in 8K workflows with the introduction of the DeckLink 8K Pro, a new high-performance capture and playback card featuring quad-link 12G‑SDI for realtime, high-resolution 8K work.

The new DeckLink 8K Pro supports all film and video formats from SD all the way up to 8K DCI in 12‑bit RGB 4:4:4, plus it handles advanced color spaces such as Rec. 2020 for deeper color and higher dynamic range. The DeckLink 8K Pro also handles 64 channels of audio, stereoscopic 3D, high frame rates and more.

DeckLink 8K Pro will be available in early January for US $645 from Blackmagic resellers worldwide. In addition, Blackmagic has also lowered the price of its DeckLink 4K Extreme 12G — to US $895.

The DeckLink 8K Pro digital cinema capture and playback card features four multi-rate 12G‑SDI connections and can work in all SD, HD, Ultra HD, 4K, 8K and 8K DCI formats. It’s also compatible with all existing pro SDI equipment. The 12G‑SDI connections are bi-directional, so they can be used to either capture or play back quad-link 8K, or for the simultaneous capture and playback of single- or dual-link SDI sources.

According to Blackmagic, DeckLink 8K Pro’s 8K images have 16 times more pixels than a regular 1080 HD image, which lets you reframe or scale shots with high fidelity and precision.
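
For reference, the 16x figure follows directly from the raster sizes (a one-line check):

```python
# 8K is 4x the width and 4x the height of 1080 HD: 4 * 4 = 16x the pixels.
print((7680 * 4320) // (1920 * 1080))   # 16
```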

DeckLink 8K Pro supports capture and playback of 8- or 10-bit YUV 4:2:2 video and 10- or 12‑bit RGB 4:4:4. Video can be captured as uncompressed or to industry standard broadcast quality ProRes and DNx files. DeckLink 8K Pro users can work at up to 60 frames per second in 8K and it supports stereoscopic 3D for all modes up to 4K DCI at 60 frames per second in 12‑bit RGB.

The advanced broadcast technology in DeckLink 8K Pro is built onto an easy-to-install, eight-lane, third-generation PCI Express card for Mac, Windows and Linux workstations. Users get support for all legacy SD and HD formats, along with Ultra HD, DCI 4K, 8K and DCI 8K, as well as Rec. 601, 709 and 2020 color.

DeckLink 8K Pro is designed to work with the upcoming DaVinci Resolve 14.2 Studio for a seamless editing, color and audio post production workflow. In addition, DeckLink 8K Pro also works with other pro tools, such as Apple Final Cut Pro X, Avid Media Composer, Adobe’s Premiere Pro and After Effects, Avid Pro Tools, Foundry’s Nuke and more. There’s also a free software development kit so customers and OEMs can build their own custom solutions.



MammothHD shooting, offering 8K footage

By Randi Altman

Stock imagery house MammothHD has embraced 8K production, shooting studio work, macros, aerials, landscapes, wildlife and more. Clark Dunbar, owner of MammothHD, is shooting with the Red 8K VistaVision model. He’s also getting 8K submissions from his network of shooters and producers around the world, who have been calling on the Red Helium S35 and Epic-W models.

“8K is coming fast — from feature films to broadcast to specialty uses, such as signage and exhibits. The Rio Olympics were shot partially in 8K, and the 2020 Tokyo Olympics will be broadcast in 8K,” says Dunbar. “Manufacturers of flat screens, monitors and projectors are moving to 8K, and prices are dropping, so there is a current clientele for 8K, and we see a growing move to 8K in the near future.”

So why is it important to have 8K imagery while the path is still being paved? “Having an 8K master gives all the benefits of shooting in 8K, but also allows for a beautiful, better over-sampled down-rez to 4K or lower. There is less noise, if any, and smaller noise/grain patterns, so it’s smoother and sharper, and the new color space has incredible dynamic range. Also, shooting in RAW gives the advantage of conforming to any color grading pipeline you’d like in post, and with 8K original capture there is a large canvas in which to re-frame, if needed.”

He says another benefit for 8K is in post — with all those pixels — if you need to stabilize a shot “you have much more control and room for re-framing.”

In terms of lenses, which Dunbar says “are a critical part of the selection for each shot,” current VistaVision sessions have used Zeiss Otus, Zeiss Makro, Canon, Sigma and Nikon glass from 11mm to 600mm, including extension tubes for the macro work and 2X doublers for a few of the telephotos.

“Along with how the lighting conditions affect the intent of the shot, in the field we use everything from natural light (all times of day) with on-camera filtration (ND, grad ND, polarizers) and LED panels as supplements, to studio set-ups with a choice of light fixtures,” explains Dunbar. “These range from flashlights, candles and LED panels from 2-x-3 inches to 1-x-2 feet, to old tungsten units and light through the window. Having been shooting for almost 50 years, I like to use whatever tool is around that fits the need of the shot. If not, I figure out what will do from what’s in the kit.”

Dunbar not only shoots, he edits and colors as well. “My edit suite is kind of old. I have a Mac Pro (cylinder) with over a petabyte of online storage. I look forward to moving to the next generation of Macs with Thunderbolt 3. On my current system, I rarely get to see the full 8K resolution. I can check files at 4K via the AJA Io 4K or the Ki Pro box out to a 4K TV.

“As a stock footage house, other than our occasional demo reels, and a few custom-produced client show reels, we only work with single clips in review, selection and prepping for the MammothHD library and galleries,” he explains. “So as an edit suite, we don’t need a full bore throughput for 4K, much less 8K. Although at some point I’d love to have an 8K state-of-the-art system to see just what we’re actually capturing in realtime.”

Apps used in MammothHD’s Apple-based edit suite are Red’s RedCine-X (the current beta build) using the new IPP2 pipeline, Apple’s Final Cut 7 and FCP X, Adobe’s Premiere, After Effects and Photoshop, and Blackmagic’s Resolve, along with QuickTime 7 Pro.

Working with these large 8K files has been a challenge, says Dunbar. “When selecting a single frame for export as a 16-bit TIFF (via the RedCine-X application), the resulting TIFF file in 8K is 200MB!”
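
That size is about what the raster math predicts for an uncompressed 16-bit RGB frame (a quick check, assuming Red’s 8K VV raster):

```python
# Uncompressed 16-bit RGB frame at the 8192x4320 raster.
width, height = 8192, 4320
channels, bytes_per_sample = 3, 2
print(f"{width * height * channels * bytes_per_sample / 1e6:.0f} MB")  # ~212 MB
```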

The majority of storage used at MammothHD is Promise Pegasus and G-Tech Thunderbolt and Thunderbolt 2 RAIDs, but the company has single disks, LTO tape and even some old SDLT media ranging from FireWire to eSata.

“Like moving to 4K a decade ago, once you see it, it’s hard to go back to lower resolutions. I’m looking forward to expanding the MammothHD 8K galleries with more subjects and styles to fill the 8K markets.” Until then, Dunbar also remains focused on 4K+ footage, which he says is his site’s specialty.


2017 HPA Engineering Excellence Award winners

The HPA has announced the winners of the 2017 Engineering Excellence Award. Colorfront, Dolby, SGO and Red Digital Cinema will be awarded this year’s honor, which recognizes “outstanding technical and creative ingenuity in media, content production, finishing, distribution and/or archiving.”

The awards will be presented November 16, 2017 at the 12th annual HPA Awards show in Los Angeles.

The winners of the 2017 HPA Engineering Excellence Award are:

Colorfront Engine
An automatically managed, ACES-compliant color pipeline that brings plug-and-play simplicity to complex production requirements, Colorfront Engine ensures image integrity from on-set to the finished product.

Dolby Vision Post Production Tools
Dolby Vision Post Production Tools integrate into existing color grading workflows for both cinema and home deliverable grading, preserving more of what the camera originally captured and limiting creative trade-offs.

SGO’s Mistika VR
Mistika VR is SGO’s latest development and is an affordable VR-focused solution with realtime stitching capabilities using SGO’s optical flow technology.

Red’s Weapon 8K Vista Vision
Weapon with the Dragon 8K VV sensor delivers stunning resolution and image quality, and at 35 megapixels, 8K offers 17x more resolution than HD and over 4x more than 4K.

Honorable mentions will also be awarded to Canon USA for its Critical Viewing Reference Displays and to Eizo for the ColorEdge CG318-4K.

Joachim Zell, who chairs the committee for this award, said, “Entries for the Engineering Excellence Award were at one of the highest levels ever, on a par with last year’s record breaker, and we saw a variety of serious technologies. The HPA Engineering Excellence Award is meaningful to those who present, those who judge, and the industry. It sounds a bit cliché to say that we had a very tight outcome, and it was a really competitive field this year. Congratulations to the winners and to the nominees for another great year.”

The HPA Awards will also recognize excellence in 12 craft categories, covering color grading, editing, sound and visual effects, and Larry Chernoff will receive the 2017 HPA Lifetime Achievement award.

Designed for large file sizes, Facilis TerraBlock 7 ships

Facilis, maker of shared storage solutions for collaborative media production networks, is now shipping TerraBlock Version 7. The new Facilis Hub Server, a performance aggregator that can be added to new and existing TerraBlock systems, is also available now. Version 7 includes a new browser-based, mobile-compatible Web Console that delivers enhanced workflow and administration from any connected location.

With ever-increasing media file sizes and 4K, HDR and VR workflows continually putting pressure on facility infrastructure, the Facilis Hub Server is aimed at future-proofing customers’ current storage while offering new systems that can handle these types of files. The Facilis Hub Server uses a new architecture to optimize drive sets and increase the bandwidth available from standard TerraBlock storage systems. New customers will get customized Hub Server Stacks with enhanced system redundancy and data resiliency, plus near-linear scalability of bandwidth when expanding the network.

According to James McKenna, VP of marketing/pre-sales at Facilis, “The Facilis Hub Server gives current and new customers a way to take advantage of advanced bandwidth aggregation capabilities, without rendering their existing hardware obsolete.”

The company describes the Web Console as a modernized browser-based and mobile-compatible interface designed to increase the efficiency of administrative tasks and improve the end-user experience.

Easy client setup, upgraded remote volume management and a more integrated user database are among the additional improvements. The Web Console also supports Remote Volume Push to remotely mount volumes onto any client workstation.

Asset Tracking
As the number of files and the amount of storage continue to increase, organizations are realizing they need some type of asset tracking system to aid them in moving and finding files in their workflow. Many hesitate to invest in traditional MAM systems due to complexity, cost and potential workflow impact.

McKenna describes the FastTracker asset tracking software as the “right balance for many customers. Many administrators tell us they are hesitant to invest in traditional asset management systems because they worry it will change the way their editors work. Our FastTracker is included with every TerraBlock system. It’s simple but comprehensive, and doesn’t require users to overhaul their workflow.”

V7 is available immediately for eligible TerraBlock servers.

Check out our interview with McKenna during NAB: