AJA is now shipping HDR Image Analyzer, a realtime HDR monitoring and analysis solution developed in partnership with Colorfront. HDR Image Analyzer features waveform, histogram and vectorscope monitoring and analysis of 4K/UltraHD/2K/HD, HDR and WCG content for broadcast and OTT production, post, QC and mastering.
Combining AJA’s video I/O with HDR analysis tools from Colorfront in a compact 1RU chassis, the HDR Image Analyzer features a toolset for monitoring and analyzing HDR formats, including Perceptual Quantizer (PQ) and Hybrid Log Gamma (HLG) for 4K/UltraHD workflows. The HDR Image Analyzer takes in up to 4K sources across 4x 3G-SDI inputs and loops the video out, allowing analysis at any point in the production workflow.
Additional feature highlights include:
– Support for display-referred SDR (Rec.709), HDR ST 2084/PQ and HLG analysis
– Support for scene-referred ARRI, Canon, Panasonic, Red and Sony camera color spaces
– Display and color processing look up table (LUT) support
– Automatic color space conversion based on the award-winning Colorfront Engine
– CIE graph, vectorscope, waveform and histogram support
– Nit levels and phase metering
– False color mode to easily spot out-of-gamut/out-of-brightness pixels
– Advanced out-of-gamut and out-of-brightness detection with error tolerance
– Data analyzer with pixel picker
– Line mode to focus a region of interest onto a single horizontal or vertical line
– File-based error logging with timecode
– Reference still store
– UltraHD UI for native-resolution picture display
– Up to 4K/UltraHD 60p over 4x 3G-SDI inputs, with loop out
– SDI auto signal detection
– Loop through output to broadcast monitors
– Three-year warranty
The HDR Image Analyzer is the second technology collaboration between AJA and Colorfront, following the integration of Colorfront Engine into AJA’s FS-HDR realtime HDR/WCG converter. Colorfront has exclusively licensed its Colorfront HDR Image Analyzer software to AJA for the HDR Image Analyzer.
The HDR Image Analyzer is available through AJA’s worldwide reseller network for $15,995.
Shutterstock has introduced a new tier of footage: Shutterstock Select. This collection of exclusive video clips includes far-ranging content — everything from everyday moments to blockbuster-worthy action scenes — all captured by industry pros using cinema-grade cameras. The Shutterstock Select video collection is available to download in both 4K and HD.
“As most filmmakers and cinematographers know, high-quality establishing shots are important to any film, but they are also very expensive to produce,” says Jon Oringer, founder/CEO of Shutterstock.
This new tier offering features in-demand content categories, such as cinematic aerials, millennial adventure, gastronomy, action scenes and workplace scenes. The shots are filmed on high-end cinema cameras using cinema lenses.
According to Shutterstock’s director, creative video content, Kyle Trotter, a variety of cinema-grade equipment was used to create this collection. The Phantom Flex4K was used for super slow motion 1000fps footage. Contributors also used Red’s latest sensor, the Monstro, on some shoots and as a result filmed large format for some of the content. Additionally, they used the Shotover K1 and Cineflex (both camera rigs for helicopters). In terms of lenses, the Cooke S4s, ARRI/Zeiss Ultra Primes and Sigma Cine Primes were used. This footage was created with a particular focus on Hollywood-style camera movements, composition and acting.
Two of the contributors Shutterstock worked with and is highlighting in this collection are VIA Films’ Daniel Hurst and Aila Images’ Bevan Goldswain. “We aim to build the Shutterstock Select collection by working with more contributors [who can provide] the high-quality content we expect for this offering,” says Trotter. To learn more about contributing, check out the company’s FAQ.
The ARRI Group has named Dr. Michael Neuhaeuser as the new executive board member responsible for technology. He succeeds Professor Franz Kraus, who after more than 30 years at ARRI, joins the Supervisory Board and will continue to be closely associated with the company. Neuhaeuser starts September 1.
Kraus, who has been leading tech development at ARRI for the last few decades, played an essential role in the development of the Alexa digital camera system and in building the company’s early competence in multi-channel LED technology for ARRI lighting. During Kraus’ tenure at ARRI, and while he was responsible for research and development, the company was presented with nine Scientific and Technical Awards by the Academy of Motion Picture Arts and Sciences for its outstanding technical achievements.
In 2011, along with two colleagues, Kraus was honored with an Academy Award of Merit, an Oscar statuette, for the design and development of the digital film recorder, the ARRILASER.
Neuhaeuser, who is now responsible for technology at the ARRI Group, previously served as VP of automotive microcontroller development at Infineon Technologies in Munich. He studied electrical engineering at the Ruhr-University Bochum, Germany, and subsequently completed his doctorate in semiconductor devices. He brings with him 30 years of experience in the electronics industry.
Neuhaeuser started his industrial career at Siemens Semiconductor in Villach, Austria, and also took over leadership development at Micram Microelectronic in Bochum. He joined Infineon Technologies in 1998, where he performed various management functions in Germany and abroad. Some of his notable accomplishments include leading the digital cordless business from 2005 and, together with his team, developing the world’s first fully integrated DECT chip. In 2009, he was appointed VP/GM at Infineon Technologies Romania in Bucharest where, as country manager, he built up various local activities with more than 300 engineers. In 2012, he was asked to head up the automotive microcontroller development division, for which he and his team developed the highly successful Aurix product family, which is used in every second car worldwide.
Main Image: L-R: Franz Kraus and Michael Neuhaeuser.
Lenovo’s new ThinkPad P52 is a 15-inch, VR-ready and ISV-certified mobile workstation featuring an Nvidia Quadro P3200 GPU. The all-new hexa-core Intel Xeon CPU is paired with doubled memory capacity of up to 128GB and increased PCIe storage. Lenovo says the ThinkPad excels in animation and visual effects project storage, the creation of large models and datasets, and realtime playback.
“More and more, M&E artists have the need to create on the go,” reports Lenovo senior worldwide industry manager for M&E Rob Hoffmann. “Having desktop-like capabilities in a 15-inch mobile workstation allows artists to remain creative anytime, anywhere.”
The workstation targets traditional ISV workflows, as well as AR and VR content creation or deployment of mobile AI. Lenovo points to Virtalis, a VR and advanced visualization company, as an example of who might take advantage of the workstation.
“Our virtual reality solutions help clients better understand data and interact with it. Being able to take these solutions mobile with the ThinkPad P52 gives us expanded flexibility to bring the technology to life for clients in their unique environments,” says Steve Carpenter, head of solutions development for Virtalis. “The ThinkPad P52 powering our Virtalis Visionary Render software is perfect for engineering and design professionals looking for a portable solution to take their first steps into the endless possibilities of VR.”
The P52 also features a 4K UHD display with 400 nits of brightness, 100% Adobe color gamut coverage and 10-bit color depth. There are dual USB-C Thunderbolt ports supporting the display of 8K video, allowing users to take advantage of the ThinkPad Thunderbolt Workstation Dock.
The ThinkPad P52 will be available later this month.
In the last few months, we have seen the release of the Red Monstro, Sony Venice, Arri Alexa LF and Canon C700 FF, all of which have larger or full-frame sensors. Full frame refers to the DSLR terminology, with full frame being equivalent to the entire 35mm film area — the way that it was used horizontally in still cameras. All SLRs used to be full frame with 35mm film, so there was no need for the term until manufacturers started saving money on digital image sensors by making them smaller than 35mm film exposures. Super35mm motion picture cameras on the other hand ran the film vertically, resulting in a smaller exposure area per frame, but this was still much larger than most video imagers until the last decade, with 2/3-inch chips being considered premium imagers. The options have grown a lot since then.
L-R: 1st AC Ben Brady, DP Michael Svitak and Mike McCarthy on the monitor.
Most of the top-end cinema cameras released over the last few years have advertised their Super35mm sensors as a huge selling point, as that allows use of any existing S35 lens on the camera. These S35 cameras include the Epic, Helium and Gemini from Red, Sony’s F5 and F55, Panasonic’s VaricamLT, Arri’s Alexa and Canon’s C100-500. On the top end, 65mm cameras like the Alexa65 have sensors twice as wide as Super35 cameras, but very limited lens options to cover a sensor that large. Full frame falls somewhere in between and allows, among other things, use of any 35mm still film lenses. In the world of film, this was referred to as VistaVision, but the first widely used full-frame digital video camera was Canon’s 5D MkII, the first serious HDSLR. That format has suddenly surged in popularity, and thanks to this I recently had the opportunity to be involved in a test shoot with a number of these new cameras.
Keslow Camera was generous enough to give DP Michael Svitak and myself access to pretty much all their full-frame cameras and lenses for the day in order to test the cameras, workflows and lens options for this new format. We also had the assistance of first AC Ben Brady to help us put all that gear to use, and Mike’s daughter Florendia as our model.
First off was the Red Monstro, which, while technically not the full 24mm height of true full frame, uses the same size lenses due to the width of its 17×9 sensor. It offers the highest resolution of the group at 8K. It records compressed RAW to R3D files, with options for ProRes and DNxHR up to 4K, all saved to Red mags. Like the rest of the group, smaller portions of the sensor can be used at lower resolution to pair with smaller lenses. The Red Helium sensor has the same resolution but in a much smaller Super35 size, allowing a wider selection of lenses to be used. But larger pixels allow more light sensitivity, with individual pixels up to 5 microns wide on the Monstro and Dragon, compared to Helium’s 3.65-micron pixels.
Next up was Sony’s new Venice camera with a 6K full-frame sensor, allowing 4K S35 recording as well. It records XAVC to SxS cards or compressed RAW in the X-OCN format with the optional AXS-R7 external recorder, which we used. It is worth noting that both full-frame recording and integrated anamorphic support require additional licenses from Sony, but Keslow provided us with a camera that had all of that functionality enabled. With a 36x24mm 6K sensor, the pixels are 5.9 microns, and footage shot at 4K in the S35 mode should be similar to shooting with the F55.
We unexpectedly had the opportunity to shoot on Arri’s new AlexaLF (Large Format) camera. At 4.5K, this had the lowest resolution, but that also means the largest sensor pixels at 8.25 microns, which can increase sensitivity. It records ArriRaw or ProRes to Codex XR capture drives with its integrated recorder.
Another new option is the Canon C700 FF, with a 5.9K full-frame sensor recording RAW, ProRes or XAVC to CFast cards or Codex drives. That gives it 6-micron pixels, similar to the Sony Venice. But we did not have the opportunity to test that camera this time around; maybe in the future.
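The pixel sizes quoted for these cameras follow directly from sensor width divided by horizontal photosite count. A quick sketch of that arithmetic, using approximate published sensor dimensions (treat the widths as illustrative figures, not official specs):

```python
# Pixel pitch = sensor width / horizontal photosite count.
# Sensor widths in mm are approximate published figures.
SENSORS = {
    "Red Monstro 8K VV":  (40.96, 8192),  # (width mm, horizontal pixels)
    "Red Helium 8K S35":  (29.90, 8192),
    "Sony Venice 6K FF":  (36.00, 6048),
    "Arri AlexaLF 4.5K":  (36.70, 4448),
}

for name, (width_mm, h_pixels) in SENSORS.items():
    pitch_um = width_mm / h_pixels * 1000  # mm -> microns
    print(f"{name}: {pitch_um:.2f} micron pixels")
```

Running this reproduces the figures in the text: roughly 5 microns for the Monstro, 3.65 for the Helium, 5.9 for the Venice and 8.25 for the AlexaLF, which is why the lowest-resolution camera in the group can also be the most light-sensitive.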
One more factor in all of this is the rising popularity of anamorphic lenses. All of these cameras support modes that use the part of the sensor covered by anamorphic lenses and can desqueeze the image for live monitoring and preview. In the digital world, anamorphic essentially cuts your overall resolution in half, until the unlikely event that we start seeing anamorphic projectors or cameras with rectangular sensor pixels. But the prevailing attitude appears to be, “We have lots of extra resolution available so it doesn’t really matter if we lose some to anamorphic conversion.”
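The "resolution cut in half" point can be made concrete: a 2x anamorphic lens squeezes twice the horizontal field of view onto the same photosites, so desqueezing stretches the image without adding real horizontal samples. A small sketch (the recording size is a hypothetical example, not a specific camera mode):

```python
def desqueeze(width, height, squeeze=2.0):
    """Desqueezing an anamorphic recording stretches the width by the
    squeeze factor; the count of real horizontal samples is unchanged."""
    return int(width * squeeze), height

# Hypothetical 4:3 sensor window recorded through a 2x anamorphic lens:
rec_w, rec_h = 4096, 3072
disp_w, disp_h = desqueeze(rec_w, rec_h)
print(disp_w, disp_h, f"{disp_w / disp_h:.2f}:1")  # 8192 x 3072, ~2.67:1

# The desqueezed frame is 8192 samples wide, but only 4096 of them are
# real -- half the horizontal resolution of a spherical 8K capture.
```

Rectangular sensor pixels or anamorphic projection would remove that penalty, which is exactly the unlikely event the paragraph above refers to.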
So what does this mean for post? In theory, sensor size has no direct effect on the recorded files (besides their content), but resolution does. We also have a number of new formats to deal with, and then we have to handle anamorphic images during finishing.
Ever since I got my hands on one of Dell’s new UP3218K monitors with an 8K screen, I have been collecting 8K assets to display on there. When I first started discussing this shoot with DP Michael Svitak, I was primarily interested in getting some more 8K footage to use to test out new 8K monitors, editing systems and software as it got released. I was anticipating getting Red footage, which I knew I could playback and process using my existing software and hardware.
The other cameras and lens options were added as the plan expanded, and by the time we got to Keslow Camera, they had filled a room with lenses and gear for us to test with. I also had a Dell 8K display connected to my ingest system, and the new 4K DreamColor monitor as well. This allowed me to view the recorded footage in the highest resolution possible.
Most editing programs, including Premiere Pro and Resolve, can handle anamorphic footage without issue, but new camera formats can be a bigger challenge. Any RAW file requires info about the sensor pattern in order to debayer it properly, and new compression formats are even more work. Sony’s new compressed RAW format for Venice, called X-OCN, is supported in the newest 12.1 release of Premiere Pro, so I didn’t expect that to be a problem. Its other recording option is XAVC, which should work as well. The Alexa on the other hand uses ArriRaw files, which have been supported in Premiere for years, but each new camera shoots a slightly different “flavor” of the file based on the unique properties of that sensor. Shooting ProRes instead would virtually guarantee compatibility but at the expense of the RAW properties. (Maybe someday ProResRAW will offer the best of both worlds.) The Alexa also has the challenge of recording to Codex drives that can only be offloaded in OS X or Linux.
Once I had all of the files on my system, after using a MacBook Pro to offload the media cards, I tried to bring them into Premiere. The Red files came in just fine but didn’t play back smoothly over 1/4 resolution. They played smoothly in RedCineX with my Red Rocket-X enabled, and they export respectably fast in AME (a five-minute 8K anamorphic sequence to UHD H.265 in 10 minutes), but for some reason Premiere Pro isn’t able to get smooth playback when using the Red Rocket-X. Next I tried the X-OCN files from the Venice camera, which imported without issue. They played smoothly on my machine but looked like they were locked to half or quarter res, regardless of what settings I used, even in the exports. I am currently working with Adobe to get to the bottom of that, because Adobe is able to play back my files at full quality while all of my systems have the same issue. Lastly, I tried to import the Arri files from the AlexaLF, but Adobe doesn’t support that new variation of ArriRaw yet. I would anticipate that will happen soon, since it shouldn’t be too difficult to add that new version to the existing support.
I ended up converting the files I needed to DNxHR in DaVinci Resolve so I could edit them in Premiere, and I put together a short video showing off the various lenses we tested with. Eventually, I need to learn how to use Resolve more efficiently, but the type of work I usually do lends itself to the way Premiere is designed — inter-cutting and nesting sequences with many different resolutions and aspect ratios. Here is a short clip demonstrating some of the lenses we tested with:
This is a web video, so even at UHD it is not meant to be an analysis of the RAW image quality, but instead a demonstration of the field of view and overall feel with various lenses and camera settings. The combination of the larger sensors and the anamorphic lenses leads to an extremely wide field of view. The table was only about 10 feet from the camera, and we could usually see all the way around it. We also discovered that when recording anamorphic on the Alexa LF, we were recording a wider image than was displaying on the monitor output. You can see in the frame grab below that the live display visible on the right side of the image isn’t showing the full content that got recorded, which is why we didn’t notice that we were recording with the wrong settings, with so much vignetting from the lens.
We only discovered this after the fact, from this shot, so we didn’t get the opportunity to track down the issue to see if it was the result of a setting in the camera or in the monitor. This is why we test things before a shoot, but we didn’t “test” before our camera test, so these things happen.
We learned a lot from the process, and hopefully some of those lessons are conveyed here. A big thanks to Brad Wilson and the rest of the guys at Keslow Camera for their gear and support of this adventure and, hopefully, it will help people better prepare to shoot and post with this new generation of cameras.
Main Image: DP Michael Svitak
Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.
Red Digital Cinema modified its camera lineup to include one DSMC2 camera Brain with three sensor options — Monstro 8K VV, Helium 8K S35 and Gemini 5K S35. The single DSMC2 camera Brain includes high-end frame rates and data rates regardless of the sensor chosen. In addition, this streamlined approach will result in a price reduction compared to Red’s previous camera line-up.
“We have been working to become more efficient, as well as align with strategic manufacturing partners to optimize our supply chain,” says Jarred Land, president of Red Digital Cinema. “As a result, I am happy to announce a simplification of our lineup with a single DSMC2 brain with multiple sensor options, as well as an overall reduction on our pricing.”
Red’s DSMC2 camera Brain is a modular system that allows users to configure a fully operational camera setup to meet their individual needs. Red offers a range of accessories, including display and control functionality, input/output modules, mounting equipment, and methods of powering the camera. The camera Brain is capable of up to 60fps at 8K, offers 300MB/s data transfer speeds and simultaneous recording of RedCode RAW and Apple ProRes or Avid DNxHD/HR.
The Red DSMC2 camera Brain and sensor options:
– DSMC2 with Monstro 8K VV offers cinematic full-frame lens coverage, ultra-detailed 35.4-megapixel stills and 17+ stops of dynamic range for $54,500.
– DSMC2 with Helium 8K S35 offers 16.5+ stops of dynamic range in a Super 35 frame, and is available now for $24,500.
– DSMC2 with Gemini 5K S35 uses dual sensitivity modes, giving creators a standard mode for well-lit conditions and a low-light mode for darker environments, and is priced at $19,500.
Red will begin to phase out new sales of its Epic-W and Weapon camera Brains starting immediately. In addition to the changes to the camera line-up, Red will also begin offering new upgrade paths for customers looking to move from older Red camera systems or from one sensor to another. The full range of upgrade options can be found here.
These are my notes from the first day I spent browsing the NAB Show floor this year in Las Vegas. When I walked into the South Lower Hall, Blackmagic was the first thing I saw. And, as usual, they had a number of new products this year. The headline item is the next version of DaVinci Resolve, which now integrates the functionality of their Fusion visual effects editor within the program. While I have never felt Resolve to be a very intuitive program for my own work, it is a solution I recommend to others who are on a tight budget, as it offers the most functionality for the price, especially in the free version.
Blackmagic Pocket Cinema Camera
The Blackmagic Pocket Cinema Camera 4K looks more like a “normal” MFT DSLR camera, although it is clearly designed for video instead of stills. Recording full 4K resolution in RAW or ProRes to SD or CFast cards, it has a mini-XLR input with phantom power and uses the same LP-E6 battery as my Canon DSLR. It uses the same camera software as the Ursa line of devices and includes a copy of Resolve Studio… for $1,300. If I were going to be shooting more live-action video anytime soon, this might make a decent replacement for my 70D, moving up to 4K and HDR workflows. I am not as familiar with the Panasonic cameras that it competes with most closely in the Micro Four Thirds space.
Among other smaller items, Blackmagic’s new UpDownCross HD MiniConverter will be useful outside of broadcast for manipulating HDMI signals from computers or devices that have less control over their outputs. (I am looking at you, Mac users.) For $155, it will help interface with projectors and other video equipment. At $65, the bi-directional MicroConverter will be a cheaper and simpler option for basic SDI support.
AMD was showing off 8K editing in Premiere Pro, the result of an optimization by Adobe that uses the 2TB SSD storage in AMD’s Radeon Pro SSG graphics card to cache rendered frames at full resolution for smooth playback. This change is currently only applicable to one graphics card, so it will be interesting to see if Adobe did this because it expects to see more GPUs with integrated SSDs hit the market in the future.
Sony was showing its Crystal LED display technology in the form of a massive ZRD video wall of incredible imagery. The clarity and brightness were truly breathtaking, but obviously my photo, rendered for the web, hardly captures the essence of what they were demonstrating.
Like nearly everyone else at the show, Sony is also pushing HDR in the form of Hybrid Log Gamma, which they are building into many of their products. They also had an array of their tiny RX0 cameras on display with this backpack rig from Radiant Images.
At a higher level, one of the most interesting things I have seen at the show is the release of ProRes RAW. While currently limited to external recorders connected to cameras from Sony, Panasonic and Canon, and only supported in FCP-X, it has the potential to dramatically change future workflows if it becomes more widely supported. Many people confuse RAW image recording with the log gamma look, or other low-contrast visual interpretations, but at its core RAW imaging is a single-channel image format paired with a particular Bayer color pattern specific to the sensor it was recorded with.
Storing a single channel per pixel decreases the amount of data to store, even before the RAW data is compressed (if it is at all), and gives access to the “source” image before it has been processed to improve visual interpretation — in the form of debayering and adding a gamma curve to reverse engineer the response pattern of the human eye, compared to mechanical light sensors. This provides more flexibility and processing options during post. There are lots of other compressed RAW formats available; the only thing ProRes actually brings to the picture is widespread acceptance and trust in the compression quality. Existing compressed RAW formats include R3D, CinemaDNG, CineformRAW and Canon CRM files.
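As a toy illustration of what “a single-channel image paired with a Bayer pattern” means in practice, here is a deliberately naive debayer that collapses each 2x2 RGGB cell into one RGB pixel. Real debayering interpolates a full-resolution value for every photosite; this sketch only shows the data layout:

```python
import numpy as np

def naive_debayer_rggb(mosaic):
    """Collapse each 2x2 RGGB cell of a single-channel Bayer mosaic into
    one RGB pixel, averaging the two green photosites. This produces a
    quarter-resolution image; real debayering interpolates instead of
    decimating, but the input layout is the same."""
    r  = mosaic[0::2, 0::2]   # red photosites
    g1 = mosaic[0::2, 1::2]   # first green photosite of each cell
    g2 = mosaic[1::2, 0::2]   # second green photosite
    b  = mosaic[1::2, 1::2]   # blue photosites
    return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)

# A 4x4 mosaic holds 16 single-channel samples -> a 2x2 RGB image.
mosaic = np.arange(16, dtype=float).reshape(4, 4)
rgb = naive_debayer_rggb(mosaic)
print(rgb.shape)  # (2, 2, 3)
```

The one-sample-per-photosite layout is also why the sensor-specific pattern matters: an app that doesn’t know a camera’s exact Bayer arrangement can’t reconstruct color from its RAW files, which is the compatibility problem described above.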
None of those caught on as a widespread multi-vendor format, but ProRes RAW is already supported by systems from three competing camera vendors. And the applications of RAW imaging in producing HDR content make the timing of this release optimal for encouraging vendors to support it, as they know their customers are struggling to find simpler solutions to HDR production issues.
There is no technical reason that ProRes RAW couldn’t be implemented on future Arri, Red or BMD cameras, which are all currently capable of recording ProRes and RAW data (but not the combination, yet). And since RAW is inherently a playback-only format (you can’t alter a RAW image without debayering it), I anticipate we will see support in other applications, unless Apple wants to sacrifice the format in an attempt to increase NLE market share.
So it will be interesting to see what other companies and products support the format in the future, and hopefully it will make life easier for people shooting and producing HDR content.
The industry’s ongoing shift to higher-resolution formats, its use of more cameras to capture footage and its embrace of additional distribution formats and platforms is putting pressure on storage infrastructure. For content creators and owners to take full advantage of their content, storage must not only deliver scalable performance and capacity but also ensure that media assets remain readily available to users and workflow applications. Quantum’s new StorNext 6 is engineered to address these requirements.
StorNext 6 is now shipping with all newly purchased Xcellis offerings and is also available at no additional cost to current Xcellis users running StorNext 5 under existing support contracts.
Leveraging its extensive real-world 4K testing and a series of 4K reference architectures developed from test data, Quantum’s StorNext platform provides scalable storage that delivers high performance using less hardware than competing systems. StorNext 6 offers a new quality of service (QoS) feature that empowers facilities to further tune and optimize performance across all client workstations, and on a machine-by-machine basis, in a shared storage environment.
Using QoS to specify bandwidth allocation to individual workstations, a facility can guarantee that more demanding tasks, such as 4K playback or color correction, get the bandwidth they need to maintain the highest video quality. At the same time, QoS allows the facility to set parameters ensuring that less timely or demanding tasks do not consume an unnecessary amount of bandwidth. As a result, StorNext 6 users can take on work with higher-resolution content and easily optimize their storage resources to accommodate the high-performance demands of such projects.
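The floor-and-cap idea behind that kind of bandwidth allocation can be sketched in a few lines. Everything below is purely illustrative, including the client names and numbers; it is not StorNext’s actual QoS implementation or API:

```python
def allocate_bandwidth(total, clients):
    """Toy QoS model: each client has a guaranteed floor, a cap and a
    demand (all in MB/s). Floors are honored first, then the leftover
    bandwidth is shared evenly among clients still under their limits."""
    alloc = {name: min(c["demand"], c["floor"]) for name, c in clients.items()}
    remaining = total - sum(alloc.values())
    active = [n for n in clients
              if alloc[n] < min(clients[n]["demand"], clients[n]["cap"])]
    while remaining > 1e-9 and active:
        share = remaining / len(active)
        remaining = 0.0
        still_active = []
        for n in active:
            limit = min(clients[n]["demand"], clients[n]["cap"])
            add = min(share, limit - alloc[n])
            alloc[n] += add
            remaining += share - add  # return any unused share to the pool
            if alloc[n] < limit - 1e-9:
                still_active.append(n)
        active = still_active
    return alloc

# Hypothetical shared-storage clients on a 1000 MB/s system:
print(allocate_bandwidth(1000, {
    "color_4k":   {"floor": 500, "cap": 800, "demand": 700},
    "proxy_edit": {"floor": 0,   "cap": 200, "demand": 400},
}))
```

In this toy run, the demanding 4K client is guaranteed its floor and ends up fully served, while the less critical client is held to its cap, which is the behavior the paragraph above describes.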
StorNext 6 includes a new feature called FlexSpace, which allows multiple instances of StorNext — and geographically distributed teams — located anywhere in the world to share a single archive repository, allowing collaboration with the same content. Users at different sites can store files in the shared archive, as well as browse and pull data from the repository. Because the movement of content can be fully automated according to policies, all users have access to the content they need without having it expressly shipped to them.
Shared archive options include public cloud storage on Amazon Web Services (AWS), Microsoft Azure or Google Cloud via StorNext’s existing FlexTier capability; private cloud storage based on Quantum’s Lattus object storage; or, through FlexTier, third-party object storage such as NetApp StorageGrid, IBM Cleversafe and Scality Ring. In addition to simplifying collaborative work, FlexSpace also makes it easy for multinational companies to establish protected off-site content storage.
FlexSync, which is new to StorNext 6, provides a fast, simple and highly automated way to synchronize content between multiple StorNext systems. FlexSync supports one-to-one, one-to-many and many-to-one file replication scenarios and can be configured to operate at almost any level: specific files, specific folders or entire file systems. By leveraging enhancements in file system metadata monitoring, FlexSync recognizes changes instantly and can immediately begin reflecting those changes on another system. This approach avoids the need to lock the file systems to identify changes, reducing synchronization time from hours or days to minutes, or even seconds. As a result, users can also set policies that automatically trigger copies of files so that they are available at multiple sites, enabling different teams to access content quickly and easily whenever it’s needed. In addition, by providing automatic replication across sites, FlexSync offers increased data protection.
StorNext 6 also gives users greater control and selectivity in maximizing their use of storage on an ROI basis. When archive policies call for storage across disk, tape and the cloud, StorNext makes a copy for each. A new copy expiration feature enables users to set additional rules determining when individual copies are removed from a particular storage tier. This approach makes it simpler to maintain data on the storage medium most appropriate and economical and, in turn, to free up space on more expensive storage. When one of several copies of a file is removed from storage, a complementary selectable retrieve function in StorNext 6 enables users to dictate which of the remaining copies is the first priority for retrieval. As a result, users can ensure that the file is retrieved from the most appropriate storage tier.
StorNext 6 offers valuable new capabilities for those facilities that subscribe to Motion Picture Association of America (MPAA) rules for content auditing and tracking. The platform can now track changes in files and provide reports on who changed a file, when the changes were made, what was changed and whether and to where a file was moved. With this knowledge, a facility can see exactly how its team handled specific files and also provide its clients with details about how files were managed during production.
As facilities begin to move to 4K production, they need a storage system that can be expanded for both performance and capacity in a non-disruptive manner. StorNext 6 provides for online stripe group management, allowing systems to have additional storage capacity added to existing stripe groups without having to go offline and disrupt critical workflows.
Another enhancement in StorNext 6 allows StorNext Storage Manager to automate archives in an environment with Mac clients, effectively eliminating the lengthy retrieve process previously required to access an archived directory that contains offline files which can number in the hundreds of thousands, or even millions.
Stock imagery house MammothHD has embraced 8K production, shooting studio work, macros, aerials, landscapes, wildlife and more. Clark Dunbar, owner of MammothHD, is shooting on the Red 8K VistaVision model. He’s also getting 8K submissions from his network of shooters and producers around the world, who have been calling on the Red Helium S35 and Epic-W models.
“8K is coming fast — from feature films to broadcast to specialty uses, such as signage and exhibits. The Rio Olympics were shot partially in 8K, and the 2020 Tokyo Olympics will be broadcast in 8K,” says Dunbar. “Manufacturers of flat screens, monitors and projectors are moving to 8K and prices are dropping, so there is a current clientele for 8K, and we see a growing move to 8K in the near future.”
So why is it important to have 8K imagery while the path is still being paved? “Having an 8K master gives all the benefits of shooting in 8K, but also allows for a beautiful and better over-sampled down-rezing for 4K or lower. There is less noise (if any, and smaller noise/grain patterns) so it’s smoother and sharper and the new color space has incredible dynamic range. Also, shooting in RAW gives the advantages of working to any color grading post conforms you’d like, and with 8K original capture, if needed, there is a large canvas in which to re-frame.”
He says another benefit for 8K is in post — with all those pixels — if you need to stabilize a shot “you have much more control and room for re-framing.”
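Dunbar's point about oversampled downscaling can be checked numerically: averaging each 2x2 block of pixels when downscaling 8K to 4K roughly halves the standard deviation of independent sensor noise. A minimal Python sketch, where the patch size and noise level are made-up stand-ins for a real frame:

```python
import random
import statistics

random.seed(42)
W, H = 64, 64   # small stand-in for an 8K frame
signal = 0.5    # flat gray patch
# Simulate per-pixel sensor noise with a std of 0.05 (hypothetical value).
noisy = [[signal + random.gauss(0, 0.05) for _ in range(W)] for _ in range(H)]

# 2x downscale by averaging 2x2 blocks -- the "oversampling" benefit.
down = [[(noisy[2*y][2*x] + noisy[2*y][2*x + 1] +
          noisy[2*y + 1][2*x] + noisy[2*y + 1][2*x + 1]) / 4
         for x in range(W // 2)] for y in range(H // 2)]

def flat(img):
    return [p for row in img for p in row]

print(statistics.stdev(flat(noisy)))  # ~0.05
print(statistics.stdev(flat(down)))   # ~0.025: noise roughly halved
```

Averaging four independent noisy samples divides the noise standard deviation by two, which is why an 8K master downscaled to 4K looks smoother than native 4K capture of the same scene.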
In terms of lenses, which Dunbar says “are a critical part of the selection for each shot,” current VistaVision sessions have used Zeiss Otus, Zeiss Makro, Canon, Sigma and Nikon glass from 11mm to 600mm, including extension tubes for the macro work and 2X doublers for a few of the telephotos.
“Along with how the lighting conditions affect the intent of the shot, in the field we use from natural light (all times of day), along with on-camera filtration (ND, grad ND, polarizers) with LED panels as supplements to studio set-ups with a choice of light fixtures,” explains Dunbar. “These range from flashlights, candles, LED panels from 2-x-3 inches to 1-x-2 foot panels, old tungsten units and light through the window. Having been shooting for almost 50 years, I like to use whatever tool is around that fits the need of the shot. If not, I figure out what will do from what’s in the kit.”
Dunbar not only shoots, he edits and colors as well. “My edit suite is kind of old. I have a MacPro (cylinder) with over a petabyte of online storage. I look forward to moving to the next generation of Macs with Thunderbolt 3. On my current system, I rarely get to see the full 8K resolution. I can check files at 4K via the AJA Io 4K or the Ki Pro box to a 4K TV.
“As a stock footage house, other than our occasional demo reels, and a few custom-produced client show reels, we only work with single clips in review, selection and prepping for the MammothHD library and galleries,” he explains. “So as an edit suite, we don’t need a full bore throughput for 4K, much less 8K. Although at some point I’d love to have an 8K state-of-the-art system to see just what we’re actually capturing in realtime.”
Apps used in MammothHD’s Apple-based edit suite are Red’s RedCine-X (the current beta build) using the new IPP2 pipeline; Apple’s Final Cut Pro 7 and FCP X; Adobe’s Premiere, After Effects and Photoshop; and Blackmagic’s Resolve, along with QuickTime 7 Pro.
Working with these large 8K files has been a challenge, says Dunbar. “When selecting a single frame for export as a 16-bit tiff (via the RedCine-X application), the resulting tiff file in 8K is 200MB!”
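That 200MB figure checks out with simple arithmetic: an uncompressed 16-bit RGB frame at Red's published 8K VistaVision resolution of 8192x4320 (an assumption here, since the article doesn't state the exact frame size) works out to roughly 202MiB before TIFF headers and metadata.

```python
width, height = 8192, 4320          # Red 8K VistaVision full-frame resolution
channels, bytes_per_sample = 3, 2   # RGB, 16 bits per channel
size_bytes = width * height * channels * bytes_per_sample
print(size_bytes)            # 212336640
print(size_bytes / 2**20)    # ~202 MiB of raw pixel data per frame
```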
The majority of storage used at MammothHD is Promise Pegasus and G-Tech Thunderbolt and Thunderbolt 2 RAIDs, but the company also has single disks, LTO tape and even some old SDLT media, with interfaces ranging from FireWire to eSATA.
“Like moving to 4K a decade ago, once you see it, it’s hard to go back to lower resolutions. I’m looking forward to expanding the MammothHD 8K galleries with more subjects and styles to fill the 8K markets.” Until then, Dunbar also remains focused on 4K+ footage, which he says is his site’s specialty.
Facilis, makers of shared storage solutions for collaborative media production networks, is now shipping TerraBlock Version 7. The new Facilis Hub Server, a performance aggregator that can be added to new and existing TerraBlock systems, is also available now. Version 7 includes a new browser-based, mobile-compatible Web Console that delivers enhanced workflow and administration from any connected location.
With ever-increasing media file sizes and 4K, HDR and VR workflows continually putting pressure on facility infrastructure, the Facilis Hub Server is aimed at future-proofing customers’ current storage while offering new systems that can handle these types of files. The Facilis Hub Server uses a new architecture to optimize drive sets and increase the bandwidth available from standard TerraBlock storage systems. New customers will get customized Hub Server Stacks with enhanced system redundancy and data resiliency, plus near-linear scalability of bandwidth when expanding the network.
According to James McKenna, VP of marketing/pre-sales at Facilis, “The Facilis Hub Server gives current and new customers a way to take advantage of advanced bandwidth aggregation capabilities, without rendering their existing hardware obsolete.”
The company describes the Web Console as a modernized browser-based and mobile-compatible interface designed to increase the efficiency of administrative tasks and improve the end-user experience.
Easy client setup, upgraded remote volume management and a more integrated user database are among the additional improvements. The Web Console also supports Remote Volume Push to remotely mount volumes onto any client workstations.
As file counts and storage volumes continue to grow, organizations are realizing they need some type of asset tracking system to help them move and find files in their workflows. Many hesitate to invest in traditional MAM systems due to complexity, cost and potential workflow impact.
McKenna describes the FastTracker asset tracking software as the “right balance for many customers. Many administrators tell us they are hesitant to invest in traditional asset management systems because they worry it will change the way their editors work. Our FastTracker is included with every TerraBlock system. It’s simple but comprehensive, and doesn’t require users to overhaul their workflow.”
V7 is available immediately for eligible TerraBlock servers.