In another addition to the Nvidia RTX Studio line of laptops, the Nvidia Quadro RTX 6000 GPU will power the Asus ProArt StudioBook One, making it the first laptop to offer the Quadro RTX 6000 in a mobile solution so creatives can run complex workloads regardless of location.
The Quadro RTX 6000 within the ProArt StudioBook One gives creatives a high-end experience similar to that of a deskside workstation. The ProArt StudioBook One is able to handle massive datasets and accelerate compute-intensive workflows, such as creating 3D animations, rendering photoreal product designs, editing 8K video, visualizing volumetric geophysical datasets and conducting walk-throughs of photoreal building designs in VR.
RTX Studio systems, which integrate Nvidia Quadro RTX or GeForce RTX GPUs, offer advanced features — like realtime raytracing, AI and 8K Red video acceleration — to creative and technical professionals.
The Asus ProArt StudioBook One combines performance and portability with the power of Quadro RTX 6000 and features of the new Nvidia “ACE” reference design system, including:
• 24GB of ultra-fast GPU memory to tackle large scenes, models, datasets and complex multi-app workflows.
• Nvidia Turing architecture RT Cores and Tensor Cores to deliver realtime raytracing, advanced shading and AI-enhanced tools to accelerate professional workflows.
• Advanced thermal cooling solution featuring ultra-thin titanium vapor chambers.
• Enhanced Nvidia Optimus technology for seamless switching between the discrete and integrated graphics based on application use with no need to restart applications or reboot the system.
• Slim 300W high-density, high-efficiency power adapter for charging and power at half the size of traditional 300W power adapters.
• Professional 4K 120Hz Pantone-validated display with 100% Adobe RGB color coverage, color accuracy and factory calibration.
In other Nvidia-related news, Acer announced its latest additions to the ConceptD series of laptops, including the ConceptD Pro models featuring Quadro GPUs.
In addition to the Asus ProArt StudioBook One, Nvidia announced 11 additional RTX Studio laptops and desktops from Acer, Asus, HP and MSI, bringing the total number of RTX Studio systems to 39.
Colorfront, which makes on-set dailies and transcoding systems, has rolled out new 8K HDR capabilities and updates across its product lines. The company has also deepened its technology partnership with AJA and entered into a new collaboration with Pomfort to bring more efficient color and HDR management on-set.
Colorfront Transkoder is a post workflow tool for handling UHD, HDR camera, color and editorial/deliverables formats, with recent customers such as Sky, Pixelogic, The Picture Shop and Hulu. With a new HDR GUI, Colorfront’s Transkoder 2019 performs the realtime decompression/de-Bayer/playback of Red and Panavision DXL2 8K R3D material displayed on a Samsung 82-inch Q900R QLED 8K Smart TV in HDR and in full 8K resolution (7680 x 4320). The de-Bayering process is optimized through Nvidia GeForce RTX graphics cards with Turing GPU architecture (also available on Colorfront On-Set Dailies 2019), with 8K video output (up to 60p) using AJA Kona 5 video cards.
“8K TV sets are becoming bigger, as well as more affordable, and people are genuinely awestruck when they see 8K camera footage presented on an 8K HDR display,” said Aron Jaszberenyi, managing director, Colorfront. “We are actively working with several companies around the world originating 8K HDR content. Transkoder’s new 8K capabilities — across on-set, post and mastering — demonstrate that 8K HDR is perfectly accessible to an even wider range of content creators.”
Powered by a re-engineered version of Colorfront Engine and featuring the HDR GUI and 8K HDR workflow, Transkoder 2019 supports camera/editorial formats including Apple ProRes RAW, Blackmagic RAW, ARRI Alexa LF/Alexa Mini LF and Codex HDE (High Density Encoding).
Transkoder 2019’s mastering toolset has been further expanded to support Dolby Vision 4.0 as well as Dolby Atmos for the home with IMF and Immersive Audio Bitstream capabilities. The new Subtitle Engine 2.0 supports CineCanvas and IMSC 1.1 rendering for preservation of content, timing, layout and styling. Transkoder can now also package multiple subtitle language tracks into the timeline of an IMP. Further features support fast and efficient audio QC, including solo/mute of individual tracks on the timeline, and a new render strategy for IMF packages enabling independent audio and video rendering.
Colorfront also showed the latest versions of its On-Set Dailies and Express Dailies products for motion pictures and episodic TV production. On-Set Dailies and Express Dailies both now support ProRes RAW, Blackmagic RAW, ARRI Alexa LF/Alexa Mini LF and Codex HDE. As with Transkoder 2019, the new version of On-Set Dailies supports real-time 8K HDR workflows to support a set-to-post pipeline from HDR playback through QC and rendering of HDR deliverables.
In addition, AJA Video Systems has released v3.0 firmware for its FS-HDR realtime HDR/WCG converter and frame synchronizer. The update introduces enhanced coloring tools together with several other improvements for broadcast, on-set, post and pro AV HDR production developed by Colorfront.
A new, integrated Colorfront Engine Film Mode offers an ACES-based grading and look creation toolset with ASC Color Decision List (CDL) controls, built-in LOOK selection including film emulation looks, and variable Output Mastering Nit Levels for PQ, HLG Extended and P3 colorspace clamp.
Since launching in 2018, FS-HDR has been used on a wide range of TV and live outside broadcast productions, as well as motion pictures including Paramount Pictures’ Top Gun: Maverick, shot by Claudio Miranda, ASC.
Colorfront licensed its HDR analysis software to AJA for the AJA HDR Image Analyzer in 2018. A new version of the AJA HDR Image Analyzer is set for release in Q3 2019.
Finally, Colorfront and Pomfort have teamed up to integrate their respective HDR-capable on-set systems. This collaboration, harnessing Colorfront Engine, will include live CDL reading in ACES pipelines between Colorfront On-Set/Express Dailies and Pomfort LiveGrade Pro, giving motion picture productions better control of HDR images while simplifying their on-set color workflows and dailies processes.
In the last few months, we have seen the release of the Red Monstro, Sony Venice, Arri Alexa LF and Canon C700 FF, all of which have larger or full-frame sensors. Full frame comes from DSLR terminology, and refers to a sensor equivalent to the entire 35mm film area, the way the film was used horizontally in still cameras. All SLRs used to be full frame with 35mm film, so there was no need for the term until manufacturers started saving money on digital image sensors by making them smaller than 35mm film exposures. Super35mm motion picture cameras, on the other hand, ran the film vertically, resulting in a smaller exposure area per frame, but this was still much larger than most video imagers until the last decade, with 2/3-inch chips being considered premium imagers. The options have grown a lot since then.
L-R: 1st AC Ben Brady, DP Michael Svitak and Mike McCarthy on the monitor.
Most of the top-end cinema cameras released over the last few years have advertised their Super35mm sensors as a huge selling point, since that allows use of any existing S35 lens on the camera. These S35 cameras include the Epic, Helium and Gemini from Red, Sony’s F5 and F55, Panasonic’s VaricamLT, Arri’s Alexa and Canon’s C100-500. On the top end, 65mm cameras like the Alexa65 have sensors twice as wide as Super35 cameras, but very limited lens options to cover a sensor that large. Full frame falls somewhere in between and allows, among other things, use of any 35mm still film lenses. In the world of film, this format was referred to as VistaVision, but the first widely used full-frame digital video camera was Canon’s 5D MkII, the first serious HDSLR. That format has surged in popularity recently, and thanks to this I had the opportunity to be involved in a test shoot with a number of these new cameras.
Keslow Camera was generous enough to give DP Michael Svitak and myself access to pretty much all their full-frame cameras and lenses for the day in order to test the cameras, workflows and lens options for this new format. We also had the assistance of first AC Ben Brady to help us put all that gear to use, and Mike’s daughter Florendia as our model.
First off was the Red Monstro, which, while technically not the full 24mm height of true full frame, uses the same size lenses due to the width of its 17×9 sensor. It offers the highest resolution of the group at 8K. It records compressed RAW to R3D files, with options for ProRes and DNxHR up to 4K, all saved to Red mags. Like the rest of the group, smaller portions of the sensor can be used at lower resolution to pair with smaller lenses. The Red Helium sensor has the same resolution but in a much smaller Super35 size, allowing a wider selection of lenses to be used. But larger pixels allow more light sensitivity, with individual pixels up to 5 microns wide on the Monstro and Dragon, compared to Helium’s 3.65-micron pixels.
Next up was Sony’s new Venice camera with a 6K full-frame sensor, allowing 4K S35 recording as well. It records XAVC to SxS cards or compressed RAW in the X-OCN format with the optional AXS-R7 external recorder, which we used. It is worth noting that both full-frame recording and integrated anamorphic support require additional special licenses from Sony, but Keslow provided us with a camera that had all of that functionality enabled. With a 36x24mm 6K sensor, the pixels are 5.9 microns, and footage shot at 4K in the S35 mode should be similar to shooting with the F55.
We unexpectedly had the opportunity to shoot on Arri’s new AlexaLF (Large Format) camera. At 4.5K, this had the lowest resolution, but that also means the largest sensor pixels at 8.25 microns, which can increase sensitivity. It records ArriRaw or ProRes to Codex XR capture drives with its integrated recorder.
Another new option is the Canon C700 FF with a 5.9K full-frame sensor recording RAW, ProRes or XAVC to CFast cards or Codex drives. That gives it 6-micron pixels, similar to the Sony Venice. We did not have the opportunity to test that camera this time around, but perhaps we will in the future.
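The pixel sizes quoted above follow directly from sensor width divided by horizontal pixel count. A quick sanity check in Python, using the Venice's published 36mm sensor width and an assumed 6048-pixel horizontal count for its 6K mode (the exact pixel count is my assumption, not from the spec sheet):

```python
def pixel_pitch_microns(sensor_width_mm: float, horizontal_pixels: int) -> float:
    """Approximate pixel pitch: sensor width divided by horizontal pixel count."""
    return sensor_width_mm * 1000.0 / horizontal_pixels

# Sony Venice: 36mm-wide full-frame sensor, 6K (assumed 6048 px) across
print(round(pixel_pitch_microns(36.0, 6048), 2))  # 5.95, matching the quoted 5.9 microns
```

The same arithmetic recovers the other figures in the ballpark: a wider sensor or a lower resolution both mean larger pixels, which is why the 4.5K AlexaLF tops the group at 8.25 microns.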
One more factor in all of this is the rising popularity of anamorphic lenses. All of these cameras support modes that use the part of the sensor covered by anamorphic lenses and can desqueeze the image for live monitoring and preview. In the digital world, anamorphic essentially cuts your overall resolution in half, until the unlikely event that we start seeing anamorphic projectors or cameras with rectangular sensor pixels. But the prevailing attitude appears to be, “We have lots of extra resolution available so it doesn’t really matter if we lose some to anamorphic conversion.”
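The "cuts your resolution in half" point can be made concrete: a 2x anamorphic lens squeezes twice the horizontal field of view onto the same sensor width, so after desqueeze the displayed frame is twice as wide but still contains only the original number of unique horizontal samples. A minimal sketch of that arithmetic (illustrative numbers, not any particular camera's anamorphic mode):

```python
def desqueezed_frame(sensor_w: int, sensor_h: int, squeeze: float = 2.0):
    """Return (display width, display height, true horizontal samples) after desqueeze."""
    display_w = int(sensor_w * squeeze)
    # Desqueezing stretches the image; it cannot create new horizontal detail.
    return display_w, sensor_h, sensor_w

# A 4:3 sensor crop shot with a 2x anamorphic lens
w, h, true_w = desqueezed_frame(5760, 4320)
print(w, h, true_w)  # 11520 4320 5760: half the displayed width is interpolated
```

This is why the "we have resolution to spare" attitude holds up on an 8K sensor but would sting on a 4K one.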
So what does this mean for post? In theory, sensor size has no direct effect on the recorded files (besides their content), but resolution does. We also have a number of new formats to deal with, and then we have to handle anamorphic images during finishing.
Ever since I got my hands on one of Dell’s new UP3218K monitors with an 8K screen, I have been collecting 8K assets to display on it. When I first started discussing this shoot with DP Michael Svitak, I was primarily interested in getting more 8K footage for testing new 8K monitors, editing systems and software as they were released. I was anticipating getting Red footage, which I knew I could play back and process using my existing software and hardware.
The other cameras and lens options were added as the plan expanded, and by the time we got to Keslow Camera, they had filled a room with lenses and gear for us to test with. I also had a Dell 8K display connected to my ingest system, and the new 4K DreamColor monitor as well. This allowed me to view the recorded footage in the highest resolution possible.
Most editing programs, including Premiere Pro and Resolve, can handle anamorphic footage without issue, but new camera formats can be a bigger challenge. Any RAW file requires info about the sensor pattern in order to debayer it properly, and new compression formats are even more work. Sony’s new compressed RAW format for Venice, called X-OCN, is supported in the newest 12.1 release of Premiere Pro, so I didn’t expect that to be a problem. Its other recording option is XAVC, which should work as well. The Alexa, on the other hand, uses ArriRaw files, which have been supported in Premiere for years, but each new camera shoots a slightly different “flavor” of the file based on the unique properties of that sensor. Shooting ProRes instead would virtually guarantee compatibility, but at the expense of the RAW properties. (Maybe someday ProRes RAW will offer the best of both worlds.) The Alexa also has the challenge of recording to Codex drives that can only be offloaded in OS X or Linux.
Once I had all of the files on my system, after using a MacBook Pro to offload the media cards, I tried to bring them into Premiere. The Red files came in just fine but didn’t play back smoothly over 1/4 resolution. They played smoothly in RedCineX with my Red Rocket-X enabled, and they export respectably fast in AME (a five-minute 8K anamorphic sequence to UHD H.265 in 10 minutes), but for some reason Premiere Pro isn’t able to get smooth playback when using the Red Rocket-X. Next I tried the X-OCN files from the Venice camera, which imported without issue. They played smoothly on my machine but looked like they were locked to half or quarter res, regardless of what settings I used, even in the exports. I am currently working with Adobe to get to the bottom of that, because they are able to play back my files at full quality on their systems, while all of mine have the same issue. Lastly, I tried to import the Arri files from the AlexaLF, but Adobe doesn’t support that new variation of ArriRaw yet. I would anticipate that will happen soon, since it shouldn’t be too difficult to add that new version to the existing support.
I ended up converting the files I needed to DNxHR in DaVinci Resolve so I could edit them in Premiere. Eventually, I need to learn how to use Resolve more efficiently, but the type of work I usually do lends itself to the way Premiere is designed — inter-cutting and nesting sequences with many different resolutions and aspect ratios. Here is a short clip demonstrating some of the lenses we tested with:
This is a web video, so even at UHD it is not meant to be an analysis of the RAW image quality, but instead a demonstration of the field of view and overall feel with various lenses and camera settings. The combination of the larger sensors and the anamorphic lenses leads to an extremely wide field of view. The table was only about 10 feet from the camera, and we could usually see all the way around it. We also discovered that when recording anamorphic on the Alexa LF, we were recording a wider image than was displaying on the monitor output. You can see in the frame grab below that the live display visible on the right side of the image isn’t showing the full content that got recorded, which is why we didn’t notice that we were recording with the wrong settings, with so much vignetting from the lens.
We only discovered this after the fact, from this shot, so we didn’t get the opportunity to track down the issue to see if it was the result of a setting in the camera or in the monitor. This is why we test things before a shoot, but we didn’t “test” before our camera test, so these things happen.
We learned a lot from the process, and hopefully some of those lessons are conveyed here. A big thanks to Brad Wilson and the rest of the team at Keslow Camera for their gear and support of this adventure. Hopefully, it will help people better prepare to shoot and post with this new generation of cameras.
Main Image: DP Michael Svitak
Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.
As we enter 2018, we find a variety of products arriving to market that support 8K imagery. The 2020 Olympics are slated to be broadcast in 8K, and while clearly we have a way to go, innovations are constantly being released that get us closer to making that a reality.
The first question that comes up when examining 8K video gear is, “Why 8K?” Obviously, it provides more resolution, but that is more of an answer to the how question than the why question. Many people will be using 8K imagery to create projects that are finished at 4K, giving them the benefits of oversampling or re-framing options. Others will use the full 8K resolution on high DPI displays. There is also the separate application of using 8K images in 360 video for viewing in VR headsets.
Red Monstro 8K
Similar technology may allow reduced resolution extraction on-the-fly to track an object or person in a dedicated 1080p window from an 8K master shot, whether that is a race car or a basketball player. The benefit compared to tracking them with the camera is that these extractions can be generated for multiple objects simultaneously, allowing viewers to select their preferred perspective on the fly. So there are lots of uses for 8K imagery. Shooting 8K for finishing in 4K is not much different from a workflow perspective than shooting 5K or 6K, so we will focus on workflows and tools that actually result in an 8K finished product.
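The extraction described above is essentially a crop: a 1920x1080 window positioned around the tracked subject within the 8K raster, clamped so it never runs off the frame edge. A hypothetical sketch of the bounds arithmetic (the function and its names are mine for illustration, not any broadcaster's actual system):

```python
def extraction_window(cx, cy, frame_w=7680, frame_h=4320, win_w=1920, win_h=1080):
    """Top-left corner of a win_w x win_h crop centered on (cx, cy), clamped to the frame."""
    x = min(max(cx - win_w // 2, 0), frame_w - win_w)
    y = min(max(cy - win_h // 2, 0), frame_h - win_h)
    return x, y

# A subject near the top-left corner: the window clamps to the frame edge
print(extraction_window(100, 50))      # (0, 0)
# A subject mid-frame: the window centers on it
print(extraction_window(4000, 2000))   # (3040, 1460)
```

Because each extraction is just different crop coordinates against the same 8K master, many windows can be generated per frame with little extra cost, which is the advantage over physically panning a camera.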
The first thing you need for 8K video production is an 8K camera. There are a couple of options, the most popular ones being from Red. The Weapon 8K came out in 2016, followed by the smaller sensor Helium8K, and the recently announced Monstro8K. Panavision has the DXL, which by my understanding is really a derivation of the Red Dragon8K sensor. Canon has been demoing an 8K camera for two years now, with no released product that I am aware of. Sony announced the 8K 3-chip camera UHC-8300 at IBC 2017, but that is probably out of most people’s price range. Those are the only major options I am currently aware of, and the Helium8K is the only one I have been able to shoot with and edit footage from.
Sony UHC-8300 8K
Moving 8K content around in realtime is a challenge. DisplayPort 1.3 supports 8K at 30p, with dual cables being used for 60p. HDMI 2.1 will eventually allow devices to support 8K video on a single cable as well. (The HDMI 2.1 specification was just released at the end of November, so it will be a while before we see it implemented in products on the market. DisplayPort 1.4 exists today — GPUs, Dell monitor — while HDMI 2.1 only exists on paper and in CES technology demos.) Another approach is to use multiple parallel channels for 12G SDI, similar to how quad 3G SDI can be used to transmit 4K data. It is more likely that by the time most facilities are pushing around lots of realtime 8K content, they will have moved to video IP, and be using compression to move 8K streams on 10GbE networks, or moving uncompressed 8K content on 40Gb or 100Gb networks.
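The network figures above check out with simple arithmetic: an uncompressed 8K stream at 60p, 10-bit 4:2:2 (two samples per pixel on average), comes to roughly 40Gb/s before blanking or protocol overhead, which is why 40Gb and 100Gb links come up for uncompressed 8K:

```python
def uncompressed_gbps(w, h, fps, bit_depth, samples_per_pixel):
    """Raw video payload bandwidth in gigabits per second (ignores blanking/overhead)."""
    return w * h * fps * bit_depth * samples_per_pixel / 1e9

# 8K 60p, 10-bit 4:2:2: luma every pixel plus half-rate chroma = 2 samples/pixel
print(round(uncompressed_gbps(7680, 4320, 60, 10, 2), 1))  # 39.8
```

The same function shows why a single 12G-SDI link only carries 4K 60p and quad links are needed for 8K.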
The next step is the software part, which is in pretty good shape. Most high-end applications are already set for 8K, because high resolutions are already used as backplates and for other unique uses, and because software is the easiest part of allowing higher resolutions. I have edited 8K files in Adobe Premiere Pro in a variety of flavors without issue. Both Avid Media Composer and Blackmagic Resolve claim to support 8K content. Codec-wise, there are already lots of options for storing 8K, including DNxHR, Cineform, JPEG2000 and HEVC/H265, among many others.
Blackmagic DeckLink 8K Pro
The hardware to process those files in realtime is a much greater challenge, but we are just seeing the release of Intel’s next generation of high-end computing chips. The existing gear is just at the edge of functional at 8K, so I expect the new systems to make 8K editing and playback a reality at the upper end. Blackmagic has announced the DeckLink 8K Pro, a PCIe card with quad 12G SDI ports. I suspect that AJA’s new Io 4K Plus, with its quad bidirectional 12G SDI ports, may support 8K at some point in the future. Thunderbolt 3 is the main bandwidth limitation there, but it should handle 8K 4:2:2 at 24p or 30p. I am unaware of any display that can take that yet, but I am sure they are coming.
Regarding displays, the only one commercially available is Dell’s UP3218K monitor running on dual DisplayPort 1.4 cables. It looks amazing, but you won’t be able to hook it up to your 8K camera for live preview very easily. An adapter is a theoretical possibility, but I haven’t heard of any being developed. Most 8K assets are being recorded to be used in 4K projects, so the output and display at 8K aren’t as big of a deal. Most people will have their needs met with existing 4K options, with the 8K content giving them the option to reframe their shot without losing resolution.
Displaying 8K content at 4K is a much simpler proposition with current technology. Many codecs allow for half-res decode, which makes the playback requirements similar to 4K at full resolution. While my dual-processor desktop workstation can play back most any intermediate codec at half resolution for 4K preview, my laptop seems like a better test-bed to evaluate the fractional resolution playback efficiency of various codecs at 8K, so that will be one of my next investigations.
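The half-res trick works because decoding at half resolution in each dimension quarters the total pixel count, which lands an 8K stream exactly on UHD 4K:

```python
def half_res(w, h):
    """Dimensions after a half-resolution decode (halved in each dimension)."""
    return w // 2, h // 2

print(half_res(7680, 4320))  # (3840, 2160): 8K decoded at half res is exactly UHD
```

So a system that can comfortably play full-res 4K should, codec support permitting, handle half-res 8K with a similar load.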
Assuming you want to show your content at the full 8K, how do you deliver it to your viewers? H.264 files are hard-limited to 4K, but HEVC (or H.265) allows 8K files to be encoded and decoded at reasonable file sizes, and is hardware-accelerated on the newest GPU cards. So 8K HEVC playback should be possible on shipping mid- and high-end computers, provided that you have a display to see it on. 8K options will continue to grow as TV makers push to set apart their top-of-the-line models, and that will motivate development of the rest of the ecosystem to support them.
At InterBee in Japan, Blackmagic showed its commitment to 8K workflows with the introduction of the DeckLink 8K Pro, a new high-performance capture and playback card featuring quad-link 12G‑SDI to allow realtime high-resolution 8K workflows.
This new DeckLink 8K Pro supports all film and video formats from SD all the way up to 8K DCI at 12‑bit RGB 4:4:4, plus it handles advanced color spaces such as Rec. 2020 for deeper color and higher dynamic range. DeckLink 8K Pro also handles 64 channels of audio, stereoscopic 3D, high frame rates and more.
DeckLink 8K Pro will be available in early January for US $645 from Blackmagic resellers worldwide. In addition, Blackmagic has also lowered the price of its DeckLink 4K Extreme 12G — to US $895.
The DeckLink 8K Pro digital cinema capture and playback card features four multi-rate 12G‑SDI connections and can work in all SD, HD, Ultra HD, 4K, 8K and 8K DCI formats. It’s also compatible with all existing pro SDI equipment. The 12G‑SDI connections are bi-directional, so they can be used either to capture or play back quad-link 8K, or for the simultaneous capture and playback of single- or dual-link SDI sources.
According to Blackmagic, DeckLink 8K Pro’s 8K images have 16 times more pixels than a regular 1080 HD image, which lets you reframe or scale shots with high fidelity and precision.
DeckLink 8K Pro supports capture and playback of 8- or 10-bit YUV 4:2:2 video and 10- or 12‑bit RGB 4:4:4. Video can be captured as uncompressed or to industry standard broadcast quality ProRes and DNx files. DeckLink 8K Pro users can work at up to 60 frames per second in 8K and it supports stereoscopic 3D for all modes up to 4K DCI at 60 frames per second in 12‑bit RGB.
The advanced broadcast technology in DeckLink 8K Pro is built into an easy-to-install eight-lane third-generation PCI Express card for Mac, Windows and Linux workstations. Users get support for all legacy SD and HD formats, along with Ultra HD, DCI 4K, 8K and DCI 8K, as well as Rec. 601, 709 and 2020 color.
DeckLink 8K Pro is designed to work with the upcoming DaVinci Resolve 14.2 Studio for a seamless editing, color and audio post production workflow. In addition, DeckLink 8K Pro also works with other pro tools, such as Apple Final Cut Pro X, Avid Media Composer, Adobe’s Premiere Pro and After Effects, Avid Pro Tools, Foundry’s Nuke and more. There’s also a free software development kit so customers and OEMs can build their own custom solutions.
Stock imagery house MammothHD has embraced 8K production, shooting studio, macros, aerials, landscapes, wildlife and more. Clark Dunbar, owner of MammothHD, is shooting using the Red 8K VistaVision model. He’s also getting 8K submissions from his network of shooters and producers from around the world. They have been calling on the Red Helium s35 and Epic-W models.
“8K is coming fast —from feature films to broadcast to specialty uses, such as signage and exhibits — the Rio Olympics were shot partially in 8K, and the 2020 Tokyo Olympics will be broadcast in 8K,” says Dunbar. “TV and projector manufacturers of flat screens, monitors and projectors are moving to 8K and prices are dropping, so there is a current clientele for 8K, and we see a growing move to 8K in the near future.”
So why is it important to have 8K imagery while the path is still being paved? “Having an 8K master gives all the benefits of shooting in 8K, but also allows for a beautiful and better over-sampled down-rezing for 4K or lower. There is less noise (if any, and smaller noise/grain patterns) so it’s smoother and sharper and the new color space has incredible dynamic range. Also, shooting in RAW gives the advantages of working to any color grading post conforms you’d like, and with 8K original capture, if needed, there is a large canvas in which to re-frame.”
He says another benefit for 8K is in post — with all those pixels — if you need to stabilize a shot “you have much more control and room for re-framing.”
In terms of lenses, which Dunbar says “are a critical part of the selection for each shot,” current VistaVision sessions have used Zeiss Otus, Zeiss Makro, Canon, Sigma and Nikon glass from 11mm to 600mm, including extension tubes for the macro work and 2X doublers for a few of the telephotos.
“Along with how the lighting conditions affect the intent of the shot, in the field we use from natural light (all times of day), along with on-camera filtration (ND, grad ND, polarizers) with LED panels as supplements to studio set-ups with a choice of light fixtures,” explains Dunbar. “These range from flashlights, candles, LED panels from 2-x-3 inches to 1-x-2 foot panels, old tungsten units and light through the window. Having been shooting for almost 50 years, I like to use whatever tool is around that fits the need of the shot. If not, I figure out what will do from what’s in the kit.”
Dunbar not only shoots, he edits and colors as well. “My edit suite is kind of old. I have a MacPro (cylinder) with over a petabyte of online storage. I look forward to moving to the next-generation of Macs with Thunderbolt 3. On my current system, I rarely get to see the full 8K resolution. I can check files at 4K via the AJA io4K or the KiPro box to a 4K TV.
“As a stock footage house, other than our occasional demo reels, and a few custom-produced client show reels, we only work with single clips in review, selection and prepping for the MammothHD library and galleries,” he explains. “So as an edit suite, we don’t need a full bore throughput for 4K, much less 8K. Although at some point I’d love to have an 8K state-of-the-art system to see just what we’re actually capturing in realtime.”
Apps used in MammothHD’s Apple-based edit suite are Red’s RedCineX (the current beta build) using the new IPP2 pipeline, Apple’s Final Cut 7 and FCP X, Adobe’s Premiere, After Effects and Photoshop, and Blackmagic’s Resolve, along with QuickTime 7 Pro.
Working with these large 8K files has been a challenge, says Dunbar. “When selecting a single frame for export as a 16-bit tiff (via the RedCine-X application), the resulting tiff file in 8K is 200MB!”
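That 200MB figure is easy to verify: an 8K frame at 16 bits (2 bytes) per channel, three channels, uncompressed, works out to about 199MB before any TIFF header overhead:

```python
def frame_bytes(w, h, channels=3, bytes_per_channel=2):
    """Uncompressed frame size in bytes for the given dimensions and bit depth."""
    return w * h * channels * bytes_per_channel

mb = frame_bytes(7680, 4320) / 1e6
print(round(mb))  # 199: a 16-bit RGB 8K frame is roughly 200MB, as Dunbar says
```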
The majority of storage used at MammothHD is Promise Pegasus and G-Tech Thunderbolt and Thunderbolt 2 RAIDs, but the company has single disks, LTO tape and even some old SDLT media ranging from FireWire to eSata.
“Like moving to 4K a decade ago, once you see it it’s hard to go back to lower resolutions. I’m looking forward to expanding the MammothHD 8K galleries with more subjects and styles to fill the 8K markets.” Until then Dunbar also remains focused on 4K+ footage, which he says is his site’s specialty.
Comprimato, makers of GPU-accelerated storage compression and video transcoding solutions, has launched Comprimato UltraPix. This video plug-in offers proxy-free, auto-setup workflows for Ultra HD, VR and more on hardware running Adobe Premiere Pro CC.
The challenge for post facilities finishing in 4K or 8K Ultra HD, or working on immersive 360 VR projects, is managing the massive amount of data. The files are large, requiring a lot of expensive storage, which can be slow and cumbersome to load, and achieving realtime editing performance is difficult.
Comprimato UltraPix addresses this, building on JPEG2000, a compression format that offers high image quality (including mathematically lossless mode) to generate smaller versions of each frame as an inherent part of the compression process. Comprimato UltraPix delivers the file at a size that the user’s hardware can accommodate.
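The "smaller versions of each frame as an inherent part of the compression" comes from JPEG2000's wavelet transform: each decomposition level halves the image in both dimensions, so a decoder can stop at a lower level and get a proxy essentially for free. A sketch of that resolution ladder (the function is illustrative; UltraPix's internals are not public in this article):

```python
def jpeg2000_resolutions(w, h, levels):
    """Image sizes available from a JPEG2000 codestream with the given DWT levels."""
    sizes = [(w, h)]
    for _ in range(levels):
        w, h = (w + 1) // 2, (h + 1) // 2  # each wavelet level halves both dimensions
        sizes.append((w, h))
    return sizes

print(jpeg2000_resolutions(7680, 4320, 3))
# [(7680, 4320), (3840, 2160), (1920, 1080), (960, 540)]
```

Note how an 8K master yields UHD, full HD and quarter-HD proxies directly from the same codestream, which is the basis of the proxy-free workflow claim.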
Once Comprimato UltraPix is loaded on any hardware, it configures itself with auto-setup, requiring no specialist knowledge from the editor who continues to work in Premiere Pro CC exactly as normal. Any workflow can be boosted by Comprimato UltraPix, and the larger the files the greater the benefit.
Comprimato UltraPix is multi-platform video-processing software for instant resolution switching in realtime. It is a lightweight, downloadable video plug-in for OS X, Windows and Linux systems. Editors can switch between 4K, 8K, full HD, HD or lower resolutions without proxy-file rendering or transcoding.
“JPEG2000 is an open standard, recognized universally, and post production professionals will already be familiar with it as it is the image standard in DCP digital cinema files,” says Comprimato founder/CEO Jiří Matela. “What we have achieved is a unique implementation of JPEG2000 encoding and decoding in software, using the power of the CPU or GPU, which means we can embed it in realtime editing tools like Adobe Premiere Pro CC. It solves a real issue, simply and effectively.”
“Editors and post professionals need tools that integrate ‘under the hood’ so they can focus on content creation and not technology,” says Sue Skidmore, partner relations for Adobe. “Comprimato adds a great option for Adobe Premiere Pro users who need to work with high-resolution video files, including 360 VR material.”
Comprimato UltraPix plug-ins are currently available for Adobe Premiere Pro CC and Foundry Nuke and will be available on other post and VFX tools soon. You can download a free 30-day trial or buy Comprimato UltraPix for $99 a year.
Higher-resolution content is becoming the norm in today’s media workflows, but pixel count is not the only element that is changing. In addition to pixel density, the bit depth of the image, color gamut, frame rates and even the number of simultaneous streams of video will be important. At IBC 2015 in Amsterdam there was a clear picture of a future that includes UHD 4K and 8K video, as well as virtual reality, as the path to more immersive video and entertainment experiences.
NHK, a pioneer in 8K video hardware and infrastructure development, has given more details on its introduction of this higher-resolution format. It will start test broadcasts of its 8K technology in 2016, followed by significant satellite video transmission in 2018 and widespread deployment in 2020, in time for the Tokyo Olympic Games. The company is looking at using HEVC compression to put a 72Gb/s video stream with 22.2-channel audio into a 100Mb/s delivery channel.
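Squeezing a 72Gb/s stream into a 100Mb/s channel implies a compression ratio of roughly 720:1, which makes clear why an efficient codec like HEVC is essential to the plan:

```python
def compression_ratio(source_gbps, channel_mbps):
    """Compression ratio required to fit a raw stream into a delivery channel."""
    return source_gbps * 1000 / channel_mbps

print(compression_ratio(72, 100))  # 720.0: about 720:1 for NHK's 8K broadcast plan
```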
In the Technology Zone at the IBC there were displays of virtual reality, 8K video developments, (mostly by NHK), as well as multiple camera set-ups for creating virtual reality video and various ways to use panoramic video. Sphericam 2 is a Kickstarter-funded product that provides 60 frames per second 4K video capture for creating VR content. This six-camera device is compact and can be placed on a stick and used like a selfie camera to capture a 360-degree view.
At the 2015 Google Developers Conference, GoPro demonstrated a 360-degree camera rig (our main image) using 16 GoPro cameras to capture panoramic video. At the IBC, GoPro displayed a more compact 360 Hero six-camera rig for 3D video capture.
In the Technology Zone, Al Jazeera had an eight-camera rig for 4K video capture (made using a 3D printer) and was using software to create panoramic videos. There are many such videos on YouTube; when viewed on a smartphone with an accelerometer, the phone’s orientation creates a reference around which the viewer can look at the panoramic scene. The Kolor software actually provides a number of different ways to view the captured content.
Eight-camera rig at Al Jazeera stand.
While many viewing devices for VR video use special split-screen displays, or smartphones whose split-screen image and accelerometers give the sense of being surrounded by the viewed image (like Google Cardboard), there are other ways to create an immersive experience. As mentioned earlier, panoramic videos with a single (or split-screen) view are available on YouTube. There are also spherical display devices where the still or video image can be rotated by moving your hand across the sphere, like the one shown below.
Higher-resolution content is becoming mainstream, with 4K TVs set to be the majority of sets sold within the next few years. 8K video production, pioneered by NHK and others in Japan, could be the next 4K by the start of the next decade, driving even more realistic content capture along with higher bandwidth and higher storage capacity in post.
Multi-camera content is also growing in popularity to support virtual reality games and other applications. This growth is enabled by the proliferation of low cost, high-resolution cameras and sophisticated software that combine the video from these cameras to create a panoramic video and virtual reality experience.
The trends toward higher resolution, combined with a greater color gamut, higher frame rates and greater color depth, will transform video experiences by the next decade, leading to new requirements for storage, networking and processing in video production and display.