Tag Archives: storage

Object Matrix and Arvato partner for managing digital archives

Object Matrix and Arvato Systems have partnered to help companies instantly access, manage, browse and edit clips from their digital archives.

By combining Arvato’s production asset management platform, VPMS EditMate, with MatrixStore, the media-focused object storage solution from Object Matrix, the companies report that organizations can significantly reduce the time needed to manage media workflows while making content easily discoverable. The integration makes it easy to unlock assets held in archive, enable creative collaboration and monetize archived assets.

MatrixStore is a media-focused private and hybrid cloud storage platform that provides instant access to all media assets. Built upon object-based storage technology, MatrixStore provides digital content governance through an integrated and automated storage platform supporting multiple media-based workflows while providing a secure and scalable solution.

VPMS EditMate is a toolkit built for managing and editing projects in a streamlined, intuitive and efficient manner, all from within Adobe Premiere Pro. From project creation and collecting media, to the export and storage of edited material, users benefit from a series of features designed to simplify the spectrum of tasks involved in a modern and collaborative editing environment.

Review: Samsung’s 970 EVO Plus 500GB NVMe M.2 SSD

By Brady Betzel

It seems that SSD drives are dropping in price by the hour. (This might be a slight exaggeration, but you understand what I mean.) Over the last year or so, prices have fallen significantly, including on high-speed NVMe SSD drives. One of those is the highly touted Samsung EVO Plus NVMe line.

In this review, I am going to go over Samsung’s 500GB version of the 970 EVO Plus NVMe M.2 SSD drive. The Samsung 970 EVO Plus NVMe M.2 SSD drive comes in four sizes — 250GB, 500GB, 1TB and 2TB — and retails (according to www.samsung.com) for $74.99, $119.99, $229.99 and $479.99, respectively. For what it’s worth, I really didn’t see much of a price difference on other sites I visited, namely Amazon.com and Best Buy.
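A quick way to compare those four capacities is cost per gigabyte. The sketch below uses the samsung.com list prices quoted above (with sizes in decimal gigabytes); it's a rough value comparison, not a recommendation.

```python
# Rough cost-per-gigabyte comparison of the 970 EVO Plus line,
# using the samsung.com list prices quoted in the review.
prices = {250: 74.99, 500: 119.99, 1000: 229.99, 2000: 479.99}

for size_gb, price in prices.items():
    print(f"{size_gb}GB: ${price / size_gb:.3f}/GB")

# Which capacity works out cheapest per gigabyte?
best = min(prices, key=lambda s: prices[s] / s)
print(f"Best value per gigabyte: {best}GB")
```

Run it and the 1TB model comes out cheapest per gigabyte, with the 500GB and 2TB models roughly tied just behind it.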

On paper, the EVO Plus line of drives can achieve speeds of up to 3,500MB/s read and 3,300MB/s write. Keep in mind that the lower the storage size, the lower the read/write speeds will be. For instance, the EVO Plus 250GB SSD can still reach up to 3,500MB/s in sequential reads, while its sequential writes drop to a maximum of 2,300MB/s. Comparatively, the “standard” EVO line gets 3,400MB/s to 3,500MB/s sequential reads and 1,500MB/s sequential writes on the 250GB EVO SSD. The 500GB version of the standard EVO costs just $89.99, but if you need more storage, you will have to pay more.

There is another SSD to compare the 970 EVO Plus to, and that is the 970 Pro, which only comes in 512GB and 1TB sizes — costing around $169.99 and $349.99, respectively. While the Pro version has similar read speeds to the Plus (up to 3,500MB/s read) and actually slower write speeds (up to 2,700MB/s), the real ticket to admission for the Samsung 970 Pro is its Terabytes Written (TBW) warranty. Samsung warranties the 970 line of drives for five years or a specified number of terabytes written, whichever comes first. Among the 500GB-class 970 drives, the “standard” and Plus cover 300TBW, while the Pro covers a whopping 600TBW.
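To put those TBW figures in perspective, here is a back-of-the-envelope sketch of the average daily write volume each rating allows across the five-year warranty window (decimal units, assuming writes are spread evenly):

```python
# Average GB/day you could write, on average, before exhausting
# a TBW rating over the warranty period (decimal units).
def daily_write_budget_gb(tbw: int, years: int = 5) -> float:
    return tbw * 1000 / (years * 365)

for model, tbw in [("970 EVO Plus 500GB", 300), ("970 Pro 512GB", 600)]:
    print(f"{model}: ~{daily_write_budget_gb(tbw):.0f} GB/day")
```

That works out to roughly 164GB of writes per day for the EVO Plus and about double that for the Pro — far more than most editorial workstations will sustain, which is why the EVO Plus warranty is adequate for many users.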

Samsung says its use of the latest V-NAND technology, in addition to its Phoenix controller, provides the highest speeds and power efficiency of the EVO NVMe drives. Essentially, V-NAND is a way to vertically stack memory instead of the previous method of stacking memory in a planar way. Stacking vertically allows for more memory in the same space in addition to longer life spans.

If you are like me and want both a good warranty (or, really, faith in the product) and blazing speeds, check out the Samsung 970 EVO Plus line of drives. It offers a great price point with almost all of the features of the Pro line. The 970 line of NVMe M.2 SSD drives uses the 2280 form factor (meaning 22mm x 80mm) with an M key interface. It’s important to understand which interface your SSD is compatible with: either M key or B key. Cards in the Samsung 970 EVO line are all M key. Most newer motherboards will have at least one, if not two, M.2 ports to plug drives into. You can also find PCIe adapters for under $20 or $30 on Amazon that will give you essentially the same read/write speeds. External USB 3.1 Gen 2 USB-C enclosures can also be found that offer an easier way of swapping drives when needed without having to open your case.

One really amazing way to use these newly lower-priced drives: when color correcting, editing and/or performing VFX miracles in apps like Adobe Premiere Pro or Blackmagic DaVinci Resolve, use NVMe drives only for cache, still stores, renders and/or optimized media. With the low cost of these NVMe M.2 drives, you might be able to include the price of one when charging a client and throw it on the shelf when done, complete with the project and media. Not only will you have a super-fast way to access the media, but, when using an external enclosure, you can easily swap another drive into the system.

Summing Up
In the end, the price points of the Samsung 970 EVO Plus NVMe M.2 drives are right in the sweet spot. There are, of course, competing drives that run a little bit cheaper, like the Western Digital Black SN750 NVMe SSDs (at around $99 for the 500GB model), but they come with slightly slower read/write speeds. So for my money, the Samsung 970 line of NVMe drives is a great combination of speed and value that can take your computer to the next level.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and The Shop. He is also a member of the Producers Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Whiskytree experiences growth, upgrades tools

Visual effects and content creation company Whiskytree has gone through a growth spurt that included a substantial increase in staff, a new physical space and new infrastructure.

Providing content for films, television, the web, apps, games, and VR and AR, Whiskytree’s team of artists, designers and technicians use applications such as Autodesk Maya, SideFX Houdini, Autodesk Arnold, Gaffer and Foundry Nuke on Linux — along with custom tools — to create computer graphics and visual effects.

To help manage its growth and the increase in data that came with it, Whiskytree recently installed Panasas ActiveStor. The platform is used to store and manage Whiskytree’s computer graphics and visual effects workflows, including data-intensive rendering and realtime collaboration using extremely large data sets for movies, commercials and advertising; work for realtime render engines and games; and augmented reality and virtual reality applications.

“We recently tripled our employee count in a single month while simultaneously finalizing the build-out of our new facility and network infrastructure, all while working on a 700-shot feature film project [The Captain],” says Jonathan Harb, chief executive officer and owner of Whiskytree. “Panasas not only delivered the scalable performance that we required during this critical period, but also delivered a high level of support and expertise. This allowed us to add artists at the rapid pace we needed with an easy-to-work-with solution that didn’t require fine-tuning to maintain and improve our workflow and capacity in an uninterrupted fashion. We literally moved from our old location on a Friday, then began work in our new facility the following Monday morning, with no production downtime. The company’s ‘set it and forget it’ appliance resulted in overall smooth operations, even under the trying circumstances.”

In the past, Whiskytree operated a multi-vendor storage solution that was complex and time consuming to administer, modify and troubleshoot. With the office relocation and rapid team expansion, Whiskytree didn’t have time to build a new custom solution or spend a lot of time tuning. It also needed storage that would grow as project and facility needs change.

Projects from the studio include Thor: Ragnarok, Monster Hunt 2, Bolden, Mother!, Star Wars: The Last Jedi, Downsizing, Warcraft and Rogue One: A Star Wars Story.

Sonnet adds new card and adapter to 10GbE line

Sonnet Technologies is offering the Solo10G SFP+ PCIe card and the Solo10G SFP+ Thunderbolt 3 Edition adapter, the latest products in the company’s line of 10 Gigabit Ethernet (10GbE) network adapters.

Solo10G SFP+ adapters add fast 10GbE network connectivity to a wide range of computers, enabling users to easily connect to 10GbE-enabled network infrastructure and storage systems via LC fiber optic cables (sold separately). Both products include a 10GBase-SR (short-range) SFP+ transceiver (the most commonly used optical transceiver), enabling 10Gb connectivity at distances up to 300 meters.
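To see what 10GbE buys you in practice, here is a rough best-case estimate of how long it takes to move a large media file over 1GbE versus 10GbE. This ignores protocol overhead and disk limits, so treat it as an upper bound on throughput, not a benchmark.

```python
# Best-case transfer time over a network link, ignoring protocol
# overhead and storage bottlenecks — an illustration, not a benchmark.
def transfer_seconds(size_gb: float, link_gbps: float) -> float:
    return size_gb * 8 / link_gbps  # 8 bits per byte

for link_gbps in (1, 10):
    secs = transfer_seconds(100, link_gbps)
    print(f"100 GB over {link_gbps}GbE: ~{secs:.0f} s")
```

A 100GB camera card that would occupy a 1GbE link for more than 13 minutes moves in under a minute and a half at 10GbE — the difference between waiting on ingest and getting straight to work.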

The Solo10G SFP+ PCIe card is a low-profile x4 PCIe 3.0 adapter card that offers Mac, Windows and Linux users an easy-to-install and easy-to-manage solution for adding 10GbE fiber network connectivity to computers with PCIe card slots. This card is also suited for use in a multi-slot Thunderbolt-to-PCIe card expansion system connected to a Mac. The Solo10G SFP+ Thunderbolt 3 Edition adapter is a compact, rugged, bus-powered, fanless Thunderbolt 3 adapter for Mac and Windows computers with Thunderbolt 3 ports.

Sonnet’s Solo10G SFP+ products offer Mac users a plug-and-play experience with no driver installation required; Windows and Linux users need only a simple driver installation. Both products are configured using operating system settings, so there’s no separate management program to install or run.

With its broad OS support and small form factor, the Solo10G SFP+ PCIe card allows companies to standardize on a single adapter and deploy it across platforms with ease. For users with Thunderbolt 3-equipped Mac and Windows computers, the Solo10G SFP+ Thunderbolt 3 Edition adapter is a simple external solution for adding 10GbE fiber network connectivity. From its replaceable captive cable to its bus-powered operation, the Thunderbolt 3 adapter is highly portable.

Solo10G SFP+ products were engineered with security features essential to today’s users. Incorporating encryption in hardware, the Sonnet network adapters are protected against malicious firmware modification. Any unauthorized attempt to modify the firmware to enable covert computer access renders them inoperable. These security features prevent the Solo10G SFP+ adapters from being reprogrammed, except by a manufacturer’s update using a secure encryption key.

Measuring a compact 3.1 inches wide by 4.9 inches deep by 1.1 inches tall — less than half the size of every other adapter in its class — the Solo10G SFP+ Thunderbolt 3 Edition adapter features an aluminum enclosure that effectively cools the circuitry and eliminates the need for a fan, enabling silent operation. Unlike every other 10GbE fiber Thunderbolt adapter available, Sonnet’s Solo10G SFP+ adapter requires no power adapter and instead is powered by the computer to which it’s connected.

The Solo10G SFP+ PCIe card and Solo10G SFP+ Thunderbolt 3 Edition adapter are available now for $149 and $249, respectively.

Atto’s FibreBridge now part of NetApp’s MetroCluster

Atto Technology has teamed with NetApp to offer the Atto FibreBridge 7600N as a key component in the MetroCluster continuous data availability solution. The FibreBridge 7600N storage controller enables synchronous site-to-site replication at distances up to 300km by providing low-latency 32Gb Fibre Channel connections to NetApp flash and disk systems while maintaining high resiliency. The FibreBridge 7600N supports up to 1.2 million IOPS and 6,400MB/s per controller.

NetApp MetroCluster enhances the built-in high availability and non-disruptive operations of NetApp systems with Ontap software, providing an additional layer of protection for the entire storage and host environment.

The Atto XstreamCore FC 7600 is a hardware protocol converter that connects 32Gb Fibre Channel ports to 12Gb SAS. It allows post and production houses to free up server resources normally used for handling storage activity and to distribute storage connections across up to 64 servers with less than four microseconds of latency. XstreamCore FC 7600 offers the flexibility needed for modern media production, allowing streaming of uncompressed HD, 4K and larger video, adding shared capabilities to direct-attached storage and remotely locating direct-attached disk or tape devices. This is a major advantage in workflow management, system architecting and the layout of production facilities.

FibreBridge 7600N is one of Atto’s XstreamCore storage controller products and part of the company’s broad portfolio of connectivity solutions, widely tested and certified for compatibility with all major operating systems and platforms.

SymplyWorkspace: high-speed, multi-user SAN for smaller post houses

Symply has launched SymplyWorkspace, a SAN system that uses Quantum’s StorNext 6 for high-speed collaboration over Thunderbolt 3 for up to eight simultaneous Mac, Windows, or Linux editors, with RAID protection for content safety.

SymplyWorkspace is designed for sharing content in realtime video production. The product features a compact desk-side design geared to smaller post houses, in-house creatives, ad agencies or any creative house needing an affordable high-speed sharing solution.

“With the high adoption rates of Thunderbolt in smaller post houses, with in-house creatives and with other content creators, connecting high-speed shared storage has been a hassle that requires expensive and bulky adapters and rack-mounted, hot and noisy storage, servers and switches,” explains Nick Warburton from Global Distribution, which owns Symply. “SymplyWorkspace allows Thunderbolt 3 clients to just plug into the desk-side system to ingest, edit, finish and deliver without ever moving content locally, even at 4K resolutions, with no adapters or racks needed.”

Based on the Quantum StorNext 6 sharing software, SymplyWorkspace allows users to connect up to eight laptops and workstations to the system and share video files, graphics and other data files instantly with no copying and without concerns for version control or duplicated files. A file server can also be attached to enable re-sharing of content to other users across Ethernet networks.

Symply has also addressed the short cable-length issues commonly cited with Thunderbolt. By using the latest Thunderbolt 3 optical cable technology from Corning, clients can be up to 50 feet away from SymplyWorkspace while maintaining full high-speed collaboration.

The complete SymplyWorkspace solution starts at $10,995 for 24TB of RAID-protected storage and four simultaneous Mac users. Four additional users (up to eight total) can be added at any time. The product is also available in configurations up to 288TB and supporting multiple 4K streams, with any combination of up to eight Mac, Windows or Linux users. It’s available now through worldwide resellers and joins the SymplyUltra line of workflow storage solutions for larger post and broadcast facilities.

Pixit Media adds David Sallak as CTO

Pixit Media, which provides data-driven storage platforms targeting M&E, has expanded its management team with the addition of CTO and member of the board David Sallak.

Most recently the VP of industry marketing at storage company Panasas, Sallak brings with him more than 15 years of experience. Prior to Panasas, Sallak served as CTO at EMC Isilon. Based in Chicago but working globally, he will be responsible for helping to grow Pixit Media.

Other key appointments for Pixit Media include Chris Horn as chief operating officer. He also joins the board. Greg Furmidge comes on as VP of global sales, and Chris Exton has been promoted to professional services manager.

With offices in Vista, California, London and Stuttgart, Pixit Media counts Warner Bros., Pixelogic, Framestore, Goldcrest, Encompass and Deluxe among its clients.

Updated Quantum Xcellis targets robust video workflows

Quantum has updated its Xcellis storage environment, which allows users to ingest, edit, share and store media content. These new appliances, which are powered by the company’s StorNext platform, are based on a next-generation server architecture that includes dual eight-core Intel Xeon CPUs, 64GB of memory, SSD boot drives and dual 100Gb Ethernet or 32Gb Fibre Channel ports.

The enhanced CPU and 50% increase in RAM over the previous generation greatly improve StorNext metadata performance. These enhancements make tasks such as file auditing less time-intensive, support an even greater number of clients per node and enable the management of billions of files per node. Users operating in a dynamic application environment on storage nodes will also see performance improvements.

With the ability to provide cross-protocol locking for shared files across SAN, NFS and SMB, Xcellis targets organizations that have collaborative workflows and need to share content across both Fibre Channel and Ethernet.

Leveraging this next-generation hardware platform, StorNext will provide higher levels of streaming performance for video playback. Xcellis appliances provide a high-performance gateway for StorNext advanced data management software to integrate tiers of scalable on-premise and cloud-based storage. This end-to-end capability provides a cost-effective solution to retain massive amounts of data.

StorNext offers a variety of features that ensure the protection of valuable content over its entire lifecycle. Users can easily copy files to off-site tiers, take advantage of versioning to roll back to an earlier point in time (prior to a malware attack, for example) and set up automated replication for disaster recovery purposes — all designed to protect digital assets.

Quantum’s latest Xcellis appliances are available now.

Storage for VFX Studios

By Karen Moltenbrey

Visual effects are dazzling — inviting eye candy, if you will. But when you mention the term “storage,” the wide eyes may turn into a stifled yawn from viewers of the amazing content. Not so for the makers of that content.

They know that the key to a successful project rests within the reliability of their storage solutions. Here, we look at two visual effects studios — both top players in television and feature film effects — as they discuss how data storage enables them to excel at their craft.

Zoic Studios
A Culver City-based visual effects facility, with shops in Vancouver and New York, Zoic Studios has been crafting visual effects for a host of television series since its founding in 2002, starting with Firefly. In addition to a full plate of episodics, Zoic also counts numerous feature films and spots to its credits.

Saker Klippsten

According to Saker Klippsten, CTO, the facility has used a range of storage solutions over the past 16 years from BlueArc (before it was acquired by Hitachi), DataDirect Networks and others, but now uses Dell EMC’s Isilon cluster file storage system for its current needs. “We’ve been a fan of theirs for quite a long time now. I think we were customer number two,” he says, “back when they were trying to break into the media and entertainment sector.”

Locally, the studio uses Intel NVMe drives for its workstations. NVMe, or non-volatile memory express, is an open logical device interface specification for accessing flash storage media attached via the PCI Express (PCIe) bus. Previously, Zoic had been using Samsung SSDs — 1TB and 2TB EVO drives — but in the past year and a half it began migrating to NVMe on the local workstations.

Zoic transitioned to the Isilon system in 2004-2005 because of the heavy usage its renderfarm was getting. “Renderfarms work 24/7 and don’t take breaks. Our storage was getting really beat up, and people were starting to complain that it was slow accessing the file system and affecting playback of their footage and media,” explains Klippsten. “We needed to find something that could scale out horizontally.”

At the time, however, file-level storage was pretty much all that was available — “you were limited to this sort of vertical pool of storage,” says Klippsten. “You might have a lot of storage behind it, but you were still limited at the spigot, at the top end. You couldn’t get the data out fast enough.” But Isilon broke through that barrier by creating a cluster storage system that allotted the scale horizontally, “so we could balance our load, our render nodes and our artists across a number of machines, and access and update in parallel at the same time,” he adds.

Klippsten believes that solution was a big breakthrough for a lot of users; nevertheless, it took some time for others to get onboard. “In the media and entertainment industry, everyone seemed to be locked into BlueArc or NetApp,” he notes. Not so with Zoic.

Fairly recently, some new players have come onto the market, including Qumulo, touted as a “next-generation NAS company” built around advanced, distributed software running on commodity hardware. “That’s another storage platform that we have looked at and tested,” says Klippsten, adding that Zoic even has a number of nodes from the vendor.

There are other open-source options out there as well. Recently, Red Hat began offering Gluster Storage, an open, software-defined storage platform for physical, virtual and cloud environments. “And now with NVMe, it’s eliminating a lot of these problems as well,” Klippsten says.

Back when Zoic selected Isilon, there were a number of major issues that affected the studio’s decision making. As Klippsten notes, they had just opened the Vancouver office and were transferring data back and forth. “How do we back up that data? How do we protect it? Storage snapshot technology didn’t really exist at the time,” he says. But, Isilon had a number of features that the studio liked, including SyncIQ, software for asynchronous replication of data. “It could push data between different Isilon clusters from a block level, in a more automated fashion. It was very convenient. It offered a lot of parameters, such as moving data by time of day and access frequency.”

SyncIQ enabled the studio to archive the data. And for dealing with interim changes, such as a mistakenly deleted file, Zoic found Isilon’s SnapshotIQ ideal for fast data recovery. Moreover, Isilon was one of the first to support Aspera, right on the Isilon cluster. “You didn’t have to run it on a separate machine. It was a huge benefit because we transfer a lot of secure, encrypted data between us and a lot of our clients,” notes Klippsten.

Netflix’s The Chilling Adventures of Sabrina

Within the pipeline, Zoic’s storage system sits at the core. It is used immediately as the studio ingests the media, whether it is downloaded or transferred from hard drives – terabytes upon terabytes of data. The data is then cleaned up and distributed to project folders for tasks assigned to the various artists. In essence, it acts as a holding tank for the main production storage as an artist begins working on those specific shots, Klippsten explains.

Aside from using the storage at the floor level, the studio also employs it at the archive level, for data recovery as well as material that might not be accessed for weeks. “We have sort of a tiered level of storage — high-performance and deep-archival storage,” he says.

And the system is invaluable, as Zoic is handling 400 to 500 shots a week. If you multiply that by the number of revisions and versions that take place during that time frame, it adds up to hundreds of terabytes weekly. “Per day, we transfer between LA, Vancouver and New York somewhere around 20TB to 30TB,” he estimates. “That number increases quite a bit because we do a lot of cloud rendering. So, we’re pushing a lot of data up to Google and back for cloud rendering, and all of that hits our Isilon storage.”
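Those daily volumes imply a substantial sustained network load. As a rough sketch (assuming the transfers are spread evenly over 24 hours, which real workloads never are), the average line rate needed looks like this:

```python
# Average line rate needed to move a daily transfer volume,
# assuming transfers are spread evenly over 24 hours — real
# traffic is burstier, so peak demand will be higher.
def sustained_gbps(tb_per_day: float) -> float:
    return tb_per_day * 1e12 * 8 / 86_400 / 1e9

for tb in (20, 30):
    print(f"{tb} TB/day ≈ {sustained_gbps(tb):.1f} Gb/s sustained")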

When Zoic was founded, it originally saw itself as a visual effects company, but at the end of the day, Klippsten says they’re really a technology company that makes pretty pictures. “We push data and move it around to its limits. We’re constantly coming up with new, creative ideas, trying to find partners that can help provide solutions collaboratively if we cannot create them ourselves. The shot cost is constantly being squeezed by studios, which want these shots done faster and cheaper. So, we have to make sure our artists are working faster, too.”

The Chilling Adventures of Sabrina

Recently, Zoic has been working on a TV project involving a good deal of water simulations and other sims in general — which rapidly generate a tremendous amount of data. Then the data is transferred between the LA and Vancouver facilities. Having storage capable of handling that was unheard of three years ago, Klippsten says. However, Zoic has managed to do so using Isilon along with some off-the-shelf Supermicro storage with NVMe drives, enabling its dynamics department to tackle this and other projects. “When doing full simulation, you need to get that sim in front of the clients as soon as possible so they can comment on it. Simulations take a long time — we’re doing 26GB/sec, which is crazy. It’s close to something in the high-performance computing realm.”

With all that considered, it is hardly surprising to hear Klippsten say that Zoic could not function without a solid storage solution. “It’s funny. When people talk about storage, they are always saying they don’t have enough of it. Even when you have a lot of storage, it’s always running at 99 percent full, and they wonder why you can’t just go out to Best Buy and purchase another hard drive. It doesn’t work that way!”

Milk VFX
Founded just five years ago, Milk VFX is an independent visual effects facility in the UK with locations in London and Cardiff, Wales. While Milk VFX may be young, it was founded by experienced and award-winning VFX supervisors and producers. And the awards have continued, including an Oscar (Ex Machina), an Emmy (Sherlock) and three BAFTAs, as the studio creates innovative and complex work for high-end television and feature films.

Benoit Leveau

With so much precious data, and a lot of it, the studio has to ensure that its work is secure and the storage system is keeping pace with the staff using it. When the studio was set up, it installed Pixit Media’s PixStor, a parallel file system with limitless storage, for its central storage solution. And, it has been growing with the company ever since. (Milk uses almost no local storage, except for media playback.)

“It was a carefully chosen solution due to its enterprise-level performance,” says Benoit Leveau, head of pipeline at Milk, about the decision to select PixStor. “It allowed us to expand when setting up our second studio in Cardiff and our rendering solutions in the cloud.”

When Milk was shopping for a storage offering while opening the studio, four things were forefront in their minds: speed, scalability, performance and reliability. Those were the functions the group wanted from its storage system — exactly the same four demands that the projects at the studios required.

“A final image requires gigabytes, sometimes terabytes, of data in the form of detailed models, high-resolution textures, animation files, particles and effects caches and so forth,” says Leveau. “We need to be able to review 4K image sequences in real time, so it’s really essential for daily operation.”
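Realtime 4K review is demanding because uncompressed image sequences consume bandwidth quickly. The sketch below estimates per-stream playback bandwidth; the frame geometry, bit depth and frame rate are illustrative assumptions, not Milk's actual delivery formats.

```python
# Uncompressed playback bandwidth for one image-sequence stream.
# The 4K DCI / 8-bit RGB / 24fps figures below are illustrative
# assumptions, not the studio's actual formats.
def stream_mb_per_s(width: int, height: int,
                    bytes_per_pixel: int, fps: int) -> float:
    return width * height * bytes_per_pixel * fps / 1e6

print(f"~{stream_mb_per_s(4096, 2160, 3, 24):.0f} MB/s per stream")
```

Even under these modest assumptions, a single stream needs well over 600MB/s from storage; 10-bit or 16-bit sources, higher frame rates or multiple simultaneous review sessions multiply that figure quickly, which is why parallel file systems like PixStor are sized for aggregate throughput rather than single-client speed.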

This year alone, Milk has completed a number of high-end visual effects sequences for feature films such as Adrift, serving as the principal vendor on this true story about a young couple lost at sea during one of the most catastrophic hurricanes in recorded history. The Milk team created all the major water and storm sequences, including bespoke 100-foot waves, all of which were rendered entirely in the cloud.

As Leveau points out, one of the shots in the film was more than 60TB, as it required complex ocean simulations. “We computed the ocean simulations on our local renderfarm, but the rendering was done in the cloud, and with this setup, we were able to access the data from everywhere almost transparently for the artists,” he explains.

Adrift

The studio also recently completed work on the blockbuster Fantastic Beasts sequel, The Crimes of Grindelwald.

For television, the studio created visual effects for an episode of the Netflix Altered Carbon sci-fi series, where people can live forever, as they digitally store their consciousness (stacks) and then download themselves into new bodies (sleeves). For the episode, the Milk crew created forest fires and the aftermath, as well as an alien planet and escape ship. For Origin, an action-thriller, the team generated 926 VFX shots in 4K for the 10-part series, spanning a wide range of work. Milk is also serving as the VFX vendor for Good Omens, a six-part horror/fantasy/drama series.

“For Origin, all the data had to be online for the duration of the four-month project. At the same time, we commenced work as the sole VFX vendor on the BBC/Amazon Good Omens series, which is now rapidly filling up our PixStor, hence the importance of scalability!” says Leveau.

Main Image: Origin via Milk VFX


Karen Moltenbrey is a veteran VFX and post writer.

Virtual Roundtable: Storage

By Randi Altman

The world of storage is ever changing and complicated. There are many flavors, each meant to match specific workflow needs. What matters most to users, beyond easily installed, easy-to-use systems that let them focus on the creative and not the tech? Scalability, speed, data protection, the cloud and the ability to handle ever-higher frame rates and resolutions — meaning larger and larger files. The good news is that the tools are growing to meet these needs. New technologies and software enhancements around NVMe are providing extremely low-latency connectivity that supports higher-performance workflows. Time will tell how that plays a part in day-to-day workflows.

For this virtual roundtable, we reached out to makers of storage and users of storage. Their questions differ a bit, but their answers often overlap. Enjoy.

Western Digital Global Director M&E Strategy & Market Development Erik Weaver

What is the biggest trend you’ve seen in the past year in terms of storage?
There are a couple that immediately come to mind. Both have to do with the massive amounts of data generated by the media and entertainment industry.

The first is the need to manage this data to understand what you have, where it resides and where it’s going. With multiple storage architectures in play — cloud, hybrid, legacy, remote, etc. — some may be out of your purview, making data management challenging. The key is abstraction: creating a unique identifier for every file everywhere so assets can be identified regardless of file name or location.

Some companies are already making progress using the C4 framework and the C4 ID system. With abstraction, you can apply rules so you always know where assets are located within these environments. It allows you to see all your assets and easily move them between storage tiers, if needed. Better data management will also help with analytics and AI/ML.

The second big trend, which we’ll talk about some more, is NVMe (and NVMe-over-Fabric) and the incredible speed and flexibility it provides. It has the ability to radically change the workflow for M&E, genuinely handling multiple 4K, 6K and 8K feeds and managing massive volumes of data. NVMe all-flash arrays such as our IntelliFlash N-Series product line, as opposed to traditional NAS, bring transfer rates to a whole new level. Using the NVMe protocol can deliver three to five times the performance of traditional flash technology and 20 times that of traditional NAS.

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
For AI, VR and machine learning, there’s a general trend toward using Flash on the front end and object storage on the back end. Our customers use ActiveScale object storage to scale up and out and store the primary dataset, then use an NVMe tier to process that data. You need a storage architecture large enough to capture all those datasets, then analyze them. This is driving an extreme amount of data.

Take, for example, VR. The move from simple 360 video into volumetric capture is analogous to what film used to be: it’s expensive. With film, you only have a limited number of takes and only so much storage, but with digital you capture everything, then fix it and post. The expansion in storage needs is outrageous and you need cost-effective storage that can scale.

As far as AI and ML, think about a popular Internet entertainment or streaming service. They’re running analytics looking at patterns of what customers are watching. They’re constantly growing and adapting in order to provide recommendations, 24×7. It would be tedious and downright infeasible for humans to track this.

All of this requires compute power and storage. And having the right balance of performance, storage economics and low TCO is critical. We’re helping many companies define that strategy today leveraging our family of IntelliFlash, ActiveScale, Ultrastar and G-Technology branded products.

WD’s IntelliFlash N-Series NVMe all-Flash array

Can you talk about NVMe?
NVMe is a game changer. NVMe, with extreme performance, low latencies and incredible throughput, is opening up new possibilities for the media workflow. NVMe can offer 5x the performance of traditional Flash at comparable prices and will be the foundation for next-generation workflows for production, gaming and VFX. It’s a radical change to traditional workflows today.

NVMe also lays the foundation for NVMe over fabric (NVMf). With that, it’s important to mention the difference between NVMe and NVMf.

Unlike SAS and SATA protocols that were designed for disk drives, NVMe was designed from the ground up for persistent Flash memory technologies and the massively parallel transfer capabilities of SSDs. As such, it delivers significant advantages including extreme performance, improved queuing, low-latency and the reduction of I/O stack overheads.

NVMf is a networked storage protocol that allows NVMe Flash storage to be disaggregated from the server and made widely available to concurrent applications and multiple compute resources. There is no limit to the number of servers or NVMf storage devices that can be shared. It promises to deliver the lowest end-to-end latency from application to storage while delivering agility and flexibility by sharing resources throughout the enterprise.

The bottom line is NVMe and NVMf are enablers for next-generation workflows that can give you a competitive edge in terms of efficiency, productivity and extracting the most value from your data.

What do you do in your products to help safeguard your users’ data?
As one of the largest storage companies in the world, we understand the value of data. Our goal is to deliver the highest quality storage solutions that deliver consistent performance, high-capacity and value to our customers. We design and manufacture storage solutions from silicon to systems. This vertical innovation gives us a unique advantage to fine-tune and optimize virtually any layer within the stack, including firmware, software, processing, interconnect, storage, mechanical and even manufacturing disciplines. This approach helps us deliver purpose-built products across all of our brands that provide the performance, reliability, total cost of ownership and sustainability demanded by our customers.

Users want more flexible workflows — storage in the cloud, on premise, etc. Are your offerings reflective of that?
We believe hybrid workflows are critical in today’s environment. M&E companies are increasingly leveraging a hybrid of on-premises and multi-cloud architectures. Core intellectual property (in the form of digital assets) is stored in private, secure storage, while they access multi-cloud vendors to render, run post workflows or take advantage of various tools and services such as AI.

Object storage in a private cloud configuration is enabling new capabilities by providing “warm” online access to petabyte-scale repositories that were previously stored on tape or other “cold” storage archives. Suddenly, with this hybrid approach, companies can access and retain all their assets, and create new content services, monetize opportunities or run analytics across a much larger dataset. Combined with the ability to use AI on audience viewing, demographic and geographic data, this allows companies to deliver high-value, tailored content and services on a global scale.

Final Thoughts?
We’re seeing a third dimension to the “digital dilemma.” The digital dilemma is not new and has been talked about before. The first dilemma is the physical device itself. No physical device lasts forever. Tape and media degradation happen over extended periods of time. You also need to think about the limitations of the device itself and whether it will become obsolete. The second is the age of the media format and its compatibility with modern operating systems, which can leave data unreadable. But the third thing that’s happening, and it’s quite serious, is that the experts who manage the libraries are “aging out” and nearing retirement. They’ve owned or worked on these infrastructures for generations and have tribal knowledge of what assets they have and where they’re stored, as well as of the fickle nature of the underlying hardware. Because of these factors, we strongly encourage companies to evaluate their archive strategy, or potentially risk losing enormous amounts of data.

Company 3 NY and Deluxe NY Data/IO Supervisor Hollie Grant

Company 3 specializes in DI, finishing and color correction, and Deluxe is an end-to-end post house working on projects from dailies through finishing.

Hollie Grant

How much data did you use/backup this year? How much more was that than the previous year? How much more data do you expect to use next year?
Over the past year, as a rough estimate, my team dealt with around 1.5 petabytes of data. The latter half of this year really ramped up storage-wise. We were cruising along with a normal increase in data per show until the last few months where we had an influx of UHD, 4K and even 6K jobs, which take up to quadruple the space of a “normal” HD or 2K project.

I don’t think we’ll see a decrease in this trend with the takeoff of 4K televisions as the baseline for consumers and with streaming becoming more popular than ever. OTT films and television have raised the bar for post production, expecting 4K source and native deliveries. Even smaller indie films that we would normally not think twice about space-wise are shooting and finishing 4K in the hopes that Netflix or Amazon will buy their film. This means that even projects that once were not a burden on our storage will have to be factored in differently going forward.
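The "up to quadruple the space" figure follows directly from pixel counts, as a quick sanity check shows:

```python
# A UHD frame carries four times the pixels of an HD frame, so at the
# same bit depth the uncompressed storage footprint scales the same way.
def pixels(width: int, height: int) -> int:
    return width * height

hd = pixels(1920, 1080)
uhd = pixels(3840, 2160)

print(uhd / hd)  # 4.0 -- UHD is exactly 4x HD
```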

Have you ever lost important data due to a hardware failure? Have you ever lost data due to an operator error? (Accidental overwrite, etc.)
Triple knock on wood! In my time here we have not lost any data due to an operator error. We follow strict procedures and create redundancy in our data, so if there is a hardware failure we don’t lose anything permanently. We have received hard drives or tapes that failed, but this far along in the digital age most people have more than one copy of their work, and if they don’t, a backup is the first thing I recommend.

Do you find access speed to be a limiting factor with your current storage solution?
We can reach read and write speeds of 1GB/sec on our SAN. We have a pretty fast configuration of disks. Of course, the more sessions you have trying to read or write on a volume, the harder it can be to get playback. That’s why we have around 2.5PB of storage across many volumes, so I can organize projects based on the bandwidth they will need and their schedules and we don’t have trouble with speed. This is one of the more challenging aspects of my day-to-day as the size of projects and their demand for larger frame playback increase.
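That kind of bandwidth budgeting comes down to simple arithmetic: how many uncompressed streams can one volume sustain? The figures below are illustrative assumptions, not Company 3's actual numbers.

```python
# Bytes per second for one uncompressed stream.
def stream_bytes_per_sec(width: int, height: int, bits_per_pixel: int, fps: int) -> int:
    return width * height * bits_per_pixel // 8 * fps

volume = 1_000_000_000  # assume ~1 GB/s of volume bandwidth

dci_4k = stream_bytes_per_sec(4096, 2160, 30, 24)  # 10-bit RGB 4K at 24fps
hd = stream_bytes_per_sec(1920, 1080, 30, 24)      # 10-bit RGB HD at 24fps

print(volume // dci_4k)  # 1 -- a single uncompressed 4K stream nearly saturates the volume
print(volume // hd)      # 5 -- the same volume can carry several HD streams
```

Numbers like these explain why projects get scheduled onto separate volumes by bandwidth need rather than by raw capacity alone.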

Showtime’s Escape From Dannemora – Co3 provided color grading and conform.

What percentage of your data’s value do you budget toward storage and data security?
I can’t speak to exact percentages, but storage upgrades are a large part of our yearly budget. There is always an ask for new disks in the funding for the year because every year we’re growing along with the size of the data for productions. Our production network infrastructure is designed around security regulations set forth by many studios and the MPAA. A lot of work goes into maintaining that and one of the most important things to us is keeping our clients’ data safe behind multiple “locks and keys.”

What trends do you see in storage?
I see the obvious trends in physical storage size decreasing while bandwidth and data size increases. Along those lines I’m sure we’ll see more movies being post produced with everything needed in “the cloud.” The frontrunners of cloud storage have larger, more secure and redundant forms of storing data, so I think it’s inevitable that we’ll move in that direction. It will also make collaboration much easier. You could have all camera-original material stored there, as well as any transcoded files that editorial and VFX will be working with. Using the cloud as a sort of near-line storage would free up the disks in post facilities to focus on only having online what the artists need while still being able to quickly access anything else. Some companies are already working in a manner similar to this, but I think it will start to be a more common solution moving forward.

creative.space‘s Nick Anderson

What is the biggest trend you’ve seen in the past year in terms of storage?
The biggest trend is NVMe storage. SSD prices are finally entering a range where they are forcing storage vendors to re-evaluate their architectures to take advantage of NVMe’s performance benefits.

Nick Anderson

Can you talk more about NVMe?
When it comes to NVMe, speed, price and form factor are three key things users need to understand. When it comes to speed, it blasts past the limitations of hard drive speeds to deliver 3GB/s per drive, which requires a faster connector (PCIe) to take advantage of. With parallel access and higher IOPS (input/output operations per second), NVMe drives can handle operations that would bring an HDD to its knees. When it comes to price, it is cheaper per GB than past iterations of SSD, making it a feasible alternative for tier one storage in many workflows. Finally, when it comes to form factor, it is smaller and requires less hardware bulk in a purpose-built system, so you can get more drives in a smaller amount of space at a lower cost. People I talk to are surprised to hear that they have been paying a premium to put fast SSDs into HDD form factors that choke their performance.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
This is something we have been thinking a lot about and we have some exciting stuff in the works that addresses this need that I can’t go into at this time. For now, we are working with our early adopters to solve these needs in ways that are practical to them, integrating custom software as needed. Moving forward we hope to bring an intuitive and seamless storage experience to the larger industry.

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
This gets down to a shift in what kind of data is being processed and how it can be accessed. When it comes to video, big media files and image sequences have driven the push for better performance. 360° video pushes storage performance further, past 4K into 8K, 12K, 16K and beyond. On the other hand, as CGI continues to become more photorealistic and we emerge from the “uncanny valley,” the performance need shifts from big data to small data in many cases as render engines are used instead of video or image files. Moving lots of small data is what these systems were originally designed for, so it will be a welcome shift for users.

When it comes to AI, our file system architectures and NVMe technology are making data easily accessible with less impact on performance. Apart from performance, we monitor thousands of metrics on the system that can be easily connected to your machine learning system of choice. We are still in the early days of this technology and its application to media production, so we are excited to see how customers take advantage of it.

What do you do in your products to help safeguard your users’ data?
From a data integrity perspective, every bit of data gets checksummed on copy and can be restored from that checksum if it gets corrupted. This means that the storage is self-healing with 100% data integrity once it is written to the disk.
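The checksum-on-write, verify-on-read idea can be sketched as follows. This is a toy illustration of the principle, not creative.space's actual implementation: every object is fingerprinted when written, and a corrupted read is healed from a redundant copy.

```python
import hashlib

class HealingStore:
    """Toy self-healing store: checksum on write, verify and repair on read."""

    def __init__(self):
        self.primary = {}  # primary copy of each object
        self.replica = {}  # redundant copy used for repair
        self.sums = {}     # checksum recorded at write time

    def write(self, key: str, data: bytes) -> None:
        self.primary[key] = data
        self.replica[key] = data
        self.sums[key] = hashlib.sha256(data).hexdigest()

    def read(self, key: str) -> bytes:
        data = self.primary[key]
        if hashlib.sha256(data).hexdigest() != self.sums[key]:
            # Corruption detected: restore the primary from the intact replica.
            data = self.replica[key]
            self.primary[key] = data
        return data

store = HealingStore()
store.write("shot_010.exr", b"original pixels")
store.primary["shot_010.exr"] = b"bit-rotted"  # simulate silent corruption
assert store.read("shot_010.exr") == b"original pixels"
```

Real systems apply this per block rather than per file and verify replicas against each other, but the read path is the same: detect a mismatch, repair from a known-good copy, return clean data.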

As far as safeguarding data from external threats, this is a complicated issue. There are many methods of securing a system, but for post production, performance can’t be compromised. For companies following MPAA recommendations, putting the storage behind physical security is often considered enough. Unfortunately, for many companies without an IT staff, this is where the security stops and the system is left open once you get access to the network. To solve this problem, we developed an LDAP user management system, built into our units, that provides that extra layer of software security at no additional charge. Storage access becomes user-based, so system activity can be monitored. As far as administering support, we designed an API gatekeeper to manage data to and from the database that is auditable and secure.

AlphaDogs‘ Terence Curren

Alpha Dogs is a full-service post house in Burbank, California. They provide color correction, graphic design, VFX, sound design and audio mixing.

How much data did you use/backup this year? How much more was that than the previous year? How much more data do you expect to use next year?
We are primarily a finishing house, so we use hundreds of TBs per year on our SAN. We work at higher resolutions, which means larger file sizes. When we have finished a job and delivered the master files, we archive to LTO and clear the project off the SAN. When we handle the offline on a project, obviously our storage needs rise exponentially. We do foresee those requirements rising substantially this year.

Terence Curren

Have you ever lost important data due to a hardware failure? Have you ever lost data due to an operator error? (Accidental overwrite, etc.)
We’ve been lucky in that area (knocking on wood) as our SANs are RAID-protected and we maintain a degree of redundancy. We have had clients’ transfer drives fail. We always recommend they deliver a copy of their media. In the early days of our SAN, which is the Facilis TerraBlock, one of our editors accidentally deleted a volume containing an ongoing project. Fortunately, Facilis engineers were able to recover the lost partition as it hadn’t been overwritten yet. That’s one of the things I really have appreciated about working with Facilis over the years — they have great technical support which is essential in our industry.

Do you find access speed to be a limiting factor with your current storage solution?
Not yet. As we get forced into heavily marketed but unnecessary formats like the coming 8K, we will have to scale to handle the bandwidth overload. I am sure the storage companies are all very excited about that prospect.

What percentage of your data’s value do you budget toward storage and data security?
Again, we don’t maintain long-term storage on projects so it’s not a large consideration in budgeting. Security is very important and one of the reasons our SANs are isolated from the outside world. Hopefully, this is an area in which easily accessible tools for network security become commoditized. Much like deadbolts and burglar alarms in housing, it is now a necessary evil.

What trends do you see in storage?
More storage and higher bandwidths, some of which is being aided by solid state storage, which is very expensive on our level of usage. The prices keep coming down on storage, yet it seems that the increased demand has caused our spending to remain fairly constant over the years.

Cinesite London‘s Chris Perschky

Perschky ensures that Cinesite’s constantly evolving infrastructure provides the technical backbone required for a visual effects facility. His team plans, installs and implements all manner of technology, in addition to providing technical support to the entire company.

Chris Perschky

How much data did you use/backup this year? How much more was that than the previous year? How much more data do you expect to use next year?
Depending on the demands of the project that we are working on we can generate terabytes of data every single day. We have become increasingly adept at separating out data we need to keep long-term from what we only require for a limited time, and our cleanup tends to be aggressive. This allows us to run pretty lean data sets when necessary.

I expect more 4K work to creep in next year and, as such, expect storage demands to increase accordingly.

Have you ever lost important data due to a hardware failure? Have you ever lost data due to an operator error? (Accidental overwrite, etc.)
Our thorough backup procedures mean that we have an offsite copy of all production data within a couple of hours of it being written. As such, when an artist has accidentally overwritten a file we are able to retrieve it from backup swiftly.

Do you find access speed to be a limiting factor with your current storage solution?
Only remotely, thereby requiring a caching solution.

What percentage of your data’s value do you budget toward storage and data security?
Due to the requirements of our clients, we do whatever is necessary to ensure the security of their IP and our work.

Cinesite also worked on Iron Spider for Avengers Infinity War ©2018 Marvel Studios

What trends do you see in storage?
The trendy answer is to move all storage to the cloud, but it is just too expensive. That said, the benefits of cloud storage are well documented, so we need some way of leveraging it. I see more hybrid on-prem and cloud solutions, providing the best of both worlds as demand requires. Full SSD solutions are still way too expensive for most of us, but multi-tier storage solutions will have a larger SSD cache tier as prices drop.

Panasas‘ RW Hawkins

What is the biggest trend you’ve seen in the past year in terms of storage?
The demand for more capacity certainly isn’t slowing down! New formats like ProRes RAW, HDR and stereoscopic images required for VR continue to push the need to scale storage capacity and performance. New Flash technologies address the speed, but not the capacity. As post production houses scale, they see that complexity increases dramatically. Trying to scale to petabytes with individual and limited file servers is a big part of the problem. Parallel file systems are playing a more important role, even in medium-sized shops.

RW Hawkins

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
VR (and, more generally, interactive content creation) is particularly interesting as it takes many of the aspects of VFX and interactive gaming and combines them with post. The VFX industry, for many years, has built batch-oriented pipelines running on multiple Linux boxes to solve many of their production problems. This same approach works well for interactive content production where the footage often needs to be pre-processed (stitched, warped, etc.) before editing. High speed, parallel filesystems are particularly well suited for this type of batch-based work.

The AI/ML space is red hot, and the applications seem boundless. Right now, much of the work is being done at a small scale where direct-attach, all-Flash storage boxes serve the need. As this technology is used on a larger scale, it will put demands on storage that can’t be met by direct-attached storage, so meeting those high IOP needs at scale is certainly something Panasas is looking at.

Can you talk about NVMe?
NVMe is an exciting technology, but not a panacea for all storage problems. While being very fast, and excellent at small operations, it is still very expensive, has small capacity and is difficult to scale to petabyte sizes. The next-generation Panasas ActiveStor Ultra platform uses NVMe for metadata while still leveraging spinning disk and SATA SSD. This hybrid approach, using each storage medium for what it does best, is something we have been doing for more than 10 years.
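The hybrid approach — each medium doing what it does best — boils down to a placement policy. Here is a toy sketch of that idea with illustrative thresholds, not Panasas' actual policy: metadata and small, hot objects land on NVMe, mid-sized files on SATA SSD, and large sequential media on spinning disk.

```python
# Toy tier-placement policy: route each object to the medium best
# suited to its access pattern. Thresholds are illustrative only.
def place(size_bytes: int, is_metadata: bool) -> str:
    if is_metadata:
        return "nvme"          # tiny, latency-sensitive, accessed constantly
    if size_bytes < 64 * 1024:
        return "sata-ssd"      # small files benefit from flash IOPS
    return "hdd"               # large media streams sequentially just fine

assert place(512, is_metadata=True) == "nvme"
assert place(4 * 1024, is_metadata=False) == "sata-ssd"
assert place(50 * 1024**2, is_metadata=False) == "hdd"
```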

What do you do in your products to help safeguard your users’ data?
Panasas uses object-based data protection with RAID 6+. This software-based erasure code protection, at the file level, provides the best scalable data protection. Only files affected by a particular hardware failure need to be rebuilt, and increasing the number of drives doesn’t increase the likelihood of losing data. In a sense, every file is individually protected. On the hardware side, all Panasas hardware provides non-volatile components, including cutting-edge NVDIMM technology to protect our customers’ data. The file system has been proven in the field. We wouldn’t have the high-profile customers we do if we didn’t provide superior performance as well as superior data protection.

Users want more flexible workflows — storage in the cloud, on-premises, etc. How are your offerings reflective of that?
While Panasas leverages an object storage backend, we provide our POSIX-compliant file system client called DirectFlow to allow standard file access to the namespace. Files and directories are the “lingua franca” of the storage world, allowing ultimate compatibility. It is very easy to interface between on-premises storage, remote DR storage and public cloud/REST storage using DirectFlow. Data flows freely and at high speed using standard tools, which makes the Panasas system an ideal scalable repository for data that will be used in a variety of pipelines.

Alkemy X‘s Dave Zeevalk

With studios in Philly, NYC, LA and Amsterdam, Alkemy X provides live-action, design, post, VFX and original content for spots, branded content and more.

Dave Zeevalk

How much data did you use/backup this year? How much more was that than the previous year? How much more data do you expect to use next year?
Each year, our VFX department generates nearly a petabyte of data, from simulation caches to rendered frames. This year, we have seen a significant increase in data usage as client expectations continue to grow and 4K resolution becomes more prominent in episodic television and feature film projects.

In order to use our 200TB server responsibly, we have created a solid system for preserving necessary data and clearing unnecessary files on a regular basis. Additionally, we are diligent in archiving final projects to our LTO tape systems and removing them from our production server.

Have you ever lost important data due to a hardware failure? Have you ever lost data due to an operator error? (Accidental overwrite, etc.)
Because of our data redundancy, through hourly snapshots and daily backups, we have avoided any data loss even with hardware failure. Although hardware does fail, with these snapshots and backups on a secondary server we are able to bring data back online extremely quickly in the case of a failure on our production server. Years ago, during a migration to Linux, a software issue completely wiped out our production server. Within two hours, we were able to migrate all data back from our snapshots and backups with no data loss.
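An hourly-snapshot/daily-backup rotation of that shape might look like the sketch below. The retention windows are illustrative assumptions, not Alkemy X's actual tooling: keep every hourly snapshot for a day, keep one daily snapshot for a month, prune the rest.

```python
from datetime import datetime, timedelta

# Toy retention policy: decide whether a snapshot taken at
# `snapshot_time` should still be kept as of `now`.
def keep(snapshot_time: datetime, now: datetime) -> bool:
    age = now - snapshot_time
    if age <= timedelta(days=1):
        return True                      # all hourlies survive 24 hours
    if age <= timedelta(days=30):
        return snapshot_time.hour == 0   # only the midnight snapshot survives
    return False                         # everything older is pruned

now = datetime(2019, 3, 1, 12, 0)
assert keep(datetime(2019, 3, 1, 3, 0), now)        # 9-hour-old hourly: kept
assert not keep(datetime(2019, 2, 27, 15, 0), now)  # 2-day-old hourly: pruned
assert keep(datetime(2019, 2, 27, 0, 0), now)       # 2-day-old daily: kept
```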

Do you find access speed to be a limiting factor with your current storage solution?
There are a few scenarios where we do experience some issues with access speed to the production server. We do a good amount of heavy simulation work, at times writing dozens of terabytes per hour. While at our peak, we have experienced some throttled speeds due to the amount of data being written to the server. Our VFX team also has a checkpoint system for simulation where raw data is saved to the server in parallel to the simulation cache. This allows us to restart a simulation mid-way through the process if a render node drops or fails the job. This raw data is extremely heavy, so while using checkpoints on heavy simulations, we also experience some slower than normal speeds.

What percentage of your data’s value do you budget toward storage and data security?
Our active production server houses 200TB of storage space. We have a secondary backup server with equivalent storage space, to which we store hourly snapshots and daily backups.

What trends do you see in storage?
With client expectations continuing to rise, and 4K (and higher at times) becoming more and more regular on jobs, the need for more storage space is ever increasing.

Quantum‘s Jamie Lerner

What is the biggest trend you’ve seen in the past year in terms of storage?
Although the digital transformation to higher resolution content in M&E has been taking place over the past several years, the interesting aspect is that the pace of change over the past 12 months is accelerating. Driving this trend is the mainstream adoption of 4K and high dynamic range (HDR) video, and the strong uptick in applications requiring 8K formats.

Jamie Lerner

Virtual reality and augmented reality applications are booming across the media and entertainment landscape; everywhere from broadcast news and gaming to episodic television. These high-resolution formats add data to streams that must be ingested at a much higher rate, consume more capacity once stored and require significantly more bandwidth when doing realtime editing. All of this translates into a significantly more demanding environment, which must be supported by the storage solution.

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
New technologies for producing stunning visual content are opening tremendous opportunities for studios, post houses, distributors, and other media organizations. Sophisticated next-generation cameras and multi-camera arrays enable organizations to capture more visual information, in greater detail than ever before. At the same time, innovative technologies for consuming media are enabling people to view and interact with visual content in a variety of new ways.

To capitalize on new opportunities and meet consumer expectations, many media organizations will need to bolster their storage infrastructure. They need storage solutions that offer scalable capacity to support new ingest sources that capture huge amounts of data, with the performance to edit and add value to this rich media.

Can you talk about NVMe?
The main benefit of NVMe storage is that it provides extremely low latency — therefore allowing users to seek content at very high speed — which is ideal for high stream counts and compressed 4K content workflows.

However, NVMe resources are expensive. Quantum addresses this issue head-on by leveraging NVMe over fabrics (NVMeoF) technology. With NVMeoF, multiple clients can use pooled NVMe storage devices across a network at local speeds and latencies. And when combined with our StorNext, all data is accessible by multiple clients in a global namespace, making this high-performance tier of storage much more cost-effective. Finally, Quantum is in early field trials of a new advancement that will allow customers to benefit even more from NVMe-enabled storage.

What do you do in your products to help safeguard your users’ data?
A storage system must be able to accommodate policies ranging from “throw it out when the job is done” to “keep it forever” and everything in between. The cost of storage demands control over where data lives and when, how many copies of the data exist and where those copies reside over time.

Xcellis scale-out storage powered by StorNext incorporates a broad range of features for data protection. This includes integrated features such as RAID, automated copying, versioning and data replication functionality, all included within our latest release of StorNext.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
Given the differences in size and scope of organizations across the media industry, production workflows are incredibly varied and often geographically dispersed. Within this context, flexibility becomes a paramount feature of any modern storage architecture.

We provide flexibility in a number of important ways for our customers. From the perspective of system architecture, and recognizing there is no one-size fits all solution, StorNext allows customers to configure storage with multiple media types that balance performance and capacity requirements across an entire end-to-end workflow. Second, and equally important for those companies that have a global workforce, is that our data replication software FlexSync allows for content to be rapidly distributed to production staff around the globe. And no matter what tier of storage the data resides on, FlexTier provides coordinated and unified access to the content within a single global namespace.

EditShare‘s Bill Thompson

What is the biggest trend you’ve seen in the past year in terms of storage?
In no particular order, the biggest trends for storage in the media and entertainment space are:
1. The need to handle higher and higher data rates associated with higher resolution and higher frame rate content. Across the industry, this is being addressed with Flash-based storage and the use of emerging technology like NVMe over “X” and 25/50/100G networking.

Bill Thompson

2. The ever-increasing concern about content security and content protection, backup and restoration solutions.

3. The request for more powerful analytics solutions to better manage storage resources.

4. The movement away from proprietary hardware/software storage solutions toward ones that are compatible with commodity hardware and/or virtual environments.

Can you talk about NVMe?
NVMe technology is very interesting and will clearly change the M&E landscape going forward. One of the challenges is that we are in the midst of changing standards, and we expect current PCIe-based NVMe components to be replaced by U.2/M.2 implementations. This migration will require important changes to storage platforms.

In the meantime, we offer non-NVMe Flash-based storage solutions whose performance and price points are equivalent to those claimed by early NVMe implementations.

What do you do in your products to help safeguard your users’ data?
EditShare has been in the forefront of user data protection for many years beginning with our introduction of disk-based and tape-based automated backup and restoration solutions.

We expanded the types of data protection schemes and provided easy-to-use management tools that allow users to tailor the type of redundant protection applied to directories and files. Similarly, we now provide ACL Media Spaces, which allow user privileges to be precisely tailored to the tasks at hand, providing only the rights needed to accomplish those tasks: nothing more, nothing less.

Most recently, we introduced EFS File Auditing, a content security solution that enables system administrators to understand “who did what to my content” and “when and how they did it.”

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
The EditShare file system is now available in variants that support EditShare hardware-based solutions and hybrid on-premises/cloud solutions. Our Flow automation platform enables users to migrate from on-premises high-speed EFS solutions to cloud-based solutions, such as Amazon S3 and Microsoft Azure, offering the best of both worlds.

Rohde & Schwarz‘s Dirk Thometzek

What is the biggest trend you’ve seen in the past year in terms of storage?
Consumer behavior is the most substantial change that the broadcast and media industry has experienced over the past years. Content is consumed on-demand. In order to stay competitive, content providers need to produce more content. Furthermore, to make the content more desirable, technologies such as UHD and HDR need to be adopted. This obviously has an impact on the amount of data being produced and stored.

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
In media and entertainment there has always been remarkable data growth over time, from the very first simple SCSI hard drives to huge network environments. Nowadays, however, that growth is approaching exponential. Considering that all media will be preserved for a very long time, the M&E storage market segment will keep on growing and innovating.

Looking at the amount of footage being produced, a big challenge is to find the appropriate data. Taking it a step further, there might be content that a producer wouldn’t even think of looking for, but that is relevant to the original metadata query. That is where machine learning and AI come into play. We are looking into automated content indexing with a minimum of human interaction, where the artificial intelligence learns autonomously and shares information with other databases. The real challenge here is to protect these intelligences from being compromised by unintended access to the information.

What do you do to help safeguard your users’ data?
In collaboration with our Rohde & Schwarz Cybersecurity division, we are offering complete and protected packages to our customers. It ranges from access restrictions on server rooms to encrypted data transfers. Cyber attacks are complex and opaque, but the security layer must be transparent and usable. In media, though, latency is just as critical, and latency is usually introduced with every security layer.

Can you talk about NVMe?
In order to bring the best value to the customer, we are constantly looking for improvements. The direct PCI communication of NVMe certainly brings a huge improvement in terms of latency since it completely eliminates the SCSI communication layer, so no protocol translation is necessary anymore. This results in much higher bandwidth and more IOPS.
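The link between lower latency and higher IOPS follows from simple queuing arithmetic (Little's law): throughput is roughly outstanding I/Os divided by per-I/O latency. The latencies below are illustrative assumptions, not vendor benchmark numbers:

```python
# Little's law for storage queues: IOPS ~= outstanding_ios / latency.
# The latency figures are illustrative only, not measured values.

def iops(queue_depth, latency_s):
    """Approximate throughput for a fixed number of outstanding I/Os."""
    return queue_depth / latency_s

QD = 32                  # outstanding I/Os kept in flight
sas_latency = 100e-6     # ~100 us path with a SCSI translation layer (assumed)
nvme_latency = 20e-6     # ~20 us direct PCIe path (assumed)

print(f"SCSI-layer path: {iops(QD, sas_latency):,.0f} IOPS")
print(f"NVMe path:       {iops(QD, nvme_latency):,.0f} IOPS")
```

With the same queue depth, cutting per-command latency by 5x yields roughly 5x the IOPS, which is why removing the protocol-translation layer matters so much.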

For internal data processing and databases, R&S SpycerNode NVMe is used, which really boosts its performance. Unfortunately, using this technology for bulk media data storage is currently not economical. We are dedicated to getting the best performance-to-cost ratio for the market, and since we have been developing video workstations and servers besides storage for decades now, we know how to get the best performance out of a drive — spinning or solid state.

Economically, it doesn’t seem sensible to build a system with the latest and greatest technology for a workflow when standards will do, just because it is possible. The real art of storage technology lies in a highly customized configuration according to the technical requirements of an application or workflow. R&S SpycerNode will evolve over time and technologies will be added to the family.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
Although hybrid workflows are highly desirable, it is quite important to understand the advantages and limits of this technology. High-bandwidth, low-latency wide-area network connections come at a significant cost. Without a suitable connection, an uncompressed 4K production does not seem feasible from a remote location, since uploading several terabytes to a co-location can take hours or even days, even if protocol acceleration is used. However, there are workflows, such as supplemental rendering or proxy editing, that do make sense to offload to a datacenter. R&S SpycerNode is ready to be an integral part of geographically scattered networks and the Spycer Storage family will grow.

Dell EMC‘s Tom Burns

What is the biggest trend you’ve seen in the past year in terms of storage?
The most important storage trend we’ve seen is an increasing need for access to shared content libraries accommodating global production teams. This is becoming an essential part of the production chain for feature films, episodic television, sports broadcasting and now e-sports. For example, teams in the UK and in California can share asset libraries for their file-based workflow via a common object store, whether on-prem or hybrid cloud. This means they don’t have to synchronize workflows using point-to-point transmissions from California to the UK, which can get expensive.

Achieving this requires seamless integration of on-premises file storage for high-throughput, low-latency workloads with object storage. The object storage can be in the public cloud, or you can have a hybrid private cloud for your media assets. A private or hybrid cloud allows production teams to distribute assets more efficiently and saves money versus using the public cloud for sharing content. If the production needs it to be there right now, they can still fire up Aspera, Signiant, File Catalyst or other point-to-point solutions and have prioritized content immediately available, while allowing your on-premises cloud to take care of the shared content libraries.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
Dell Technologies offers end-to-end storage solutions where customers can position the needle anywhere they want. Are you working purely in the cloud? Are you working purely on-prem? Or, like most people, are you working somewhere in the middle? We have a continuous spectrum of storage between high-throughput low-latency workloads and cloud-based object storage, plus distributed services to support the mix that meets your needs.

The most important thing that we’ve learned is that data is expensive to store, granted, but it’s even more expensive to move. Storing your assets in one place and having that path name never change, that’s been a hallmark of Isilon for 15 years. Now we’re extending that seamless file-to-object spectrum to a global scale, deploying Isilon in the cloud in addition to our ECS object store on premises.

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
AR, VR, AI and other emerging technologies offer new opportunities for media companies to change the way they tell and monetize their stories. However, due to the large amounts of data involved, many media organizations are challenged when they rely on storage systems that lack either scalability or performance to meet the needs of these new workflows.

Dell EMC’s file and object storage solutions help media companies cost effectively tier their content based upon access. This allows media organizations to use emerging technologies to improve how stories are told and monetize their content with the assistance of AI-generated metadata, without the challenges inherent in many traditional storage systems.

With artificial intelligence, for example, where it was once the job of interns to categorize content in projects that could span years, AI gives media companies the ability to analyze content in near-realtime and create large, easily searchable content libraries as the content is being migrated from existing tape libraries to object-based storage, or ingested for current projects. The metadata involved in this process includes brand recognition and player/actor identification, as well as speech-to-text, making it easy to determine logo placement for advertising analytics and to find footage for use in future movies or advertisements.

With Dell EMC storage, AI technologies can be brought to the data, removing the need to migrate or replicate data to direct-attach storage for analysis. Our solutions also offer the scalability to store the content for years using affordable archive nodes in Isilon or ECS object storage.

In terms of AR and VR, we are seeing video game companies using this technology to change the way players interact with their environments. Not only have they created a completely new genre with games such as Pokemon Go, they have figured out that audiences want nonlinear narratives told through realtime storytelling. Although AR and VR adoption has been slower for movies and TV compared to the video game industry, we can learn a lot from the successes of video game production and apply similar methodologies to movie and episodic productions in the future.

Can you talk about NVMe?
NVMe solutions are a small but exciting part of a much larger trend: workflows that fully exploit the levels of parallelism possible in modern converged architectures. As we look forward to 8K, 60fps and realtime production, the usage of PCIe bus bandwidth by compute, networking and storage resources will need to be much more balanced than it is today.

When we get into realtime productions, these “next-generation” architectures will involve new production methodologies such as realtime animation using game engines rather than camera-based acquisition of physically staged images. These realtime processes will take a lot of cooperation between hardware, software and networks to fully leverage the highly parallel, low-latency nature of converged infrastructure.

Dell Technologies is heavily invested in next-generation technologies that include NVMe cache drives, software-defined networking, virtualization and containerization that will allow our customers to continuously innovate together with the media industry’s leading ISVs.

What do you do in your products to help safeguard your users’ data?
Your content is your most precious capital asset and should be protected and maintained. If you invest in archiving and backing up your content with enterprise-quality tools, then your assets will continue to be available to generate revenue for you. However, archive and backup are just two pieces of data security that media organizations need to consider. They must also take active measures to deter data breaches and unauthorized access to data.

Protecting data at the edge, especially at the scale required for global collaboration, can be challenging. We simplify this process through services such as SecureWorks, which includes offerings like security management and orchestration, vulnerability management, security monitoring, advanced threat services and threat intelligence services.

Our storage products are packed with technologies to keep data safe from unexpected outages and unauthorized access, and to meet industry standards such as alignment to MPAA and TPN best practices for content security. For example, Isilon’s OneFS operating system includes SyncIQ snapshots, providing point-in-time backup that updates automatically and generates a list of restore points.

Isilon also supports role-based access control and integration with Active Directory, MIT Kerberos and LDAP, making it easy to manage account access. For production houses working on multiple customer projects, our storage also supports multi-tenancy and access zones, which means that clients requiring quarantined storage don’t have to share storage space with potential competitors.

Our on-prem object store, ECS, provides long-term, cost-effective object storage with support for globally distributed active archives. This helps our customers with global collaboration, but also provides inherent redundancy. The multi-site redundancy creates an excellent backup mechanism as the system will maintain consistency across all sites, plus automatic failure detection and self-recovery options built into the platform.

Scale Logic‘s Bob Herzan

What is the biggest trend you’ve seen in the past year in terms of storage?
There is and has been considerable buzz around cloud storage, object storage, AI and NVMe. Scale Logic recently conducted a private survey of its customer base to help answer this question. What we found is that none of those buzzwords can be considered a trend. We also found that our customers were migrating away from SAN and focusing on building infrastructure around high-performance, scalable NAS.

They felt on-premises LTO was still the most viable option for archiving, and finding a more efficient and cost-effective way to manage their data was their highest priority for the next couple of years. There are plenty of early adopters testing out the buzzwords in the industry, but the trend — in my opinion — is to maximize a stable platform with the best overall return on the investment.

End users are not focused so much on storage, but on how a company like ours can help them solve problems within their workflows where storage is an important component.

Can you talk more about NVMe?
NVMe provides an any-K solution with superior low-latency metadata performance, and it works with our scale-out file system. All of our products have had 100GbE drivers for almost two years, enabling mesh technologies with NVMe for networks as well. As costs come down, NVMe should start to become more mainstream this year. Our team is well versed in supporting NVMe and ready to help facilities research the price-to-performance of NVMe to see if it makes sense for their Genesis and HyperFS Scale Out systems.

With AI, VR and machine learning, our industry is even more dependent on storage. How are you addressing this?
We are continually refining and testing our best practices. Our focus on broadcast automation workflows over the years has already enabled our products for AI and machine learning. We are keeping up with the latest technologies, constantly testing in our lab with the latest in software and workflow tools and bringing in other hardware to work within the Genesis Platform.

What do you do in your products to help safeguard your users’ data?
This is a broad question that has different answers depending on which aspect of the Genesis Platform you may be talking about. Simply speaking, we can craft any number of data safeguard strategies and practices based on our customer needs, the current technology they are using and, most importantly, where they see their growth of capacity and data protection needs moving forward. Our safeguards range from enterprise-quality components, mirrored sets, RAID-6, RAID-7.3 and RAID N+M, and asynchronous data sync to a second instance, up to full HA with synchronous data sync to a second instance, virtual IP failover between multiple sites, and multi-tier DR and business continuity solutions.
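The trade-off among the protection schemes named above comes down to how much raw capacity each reserves for redundancy. A minimal sketch, with an assumed 12-bay shelf of 10TB drives (not Scale Logic's actual configurations):

```python
# Usable capacity under common protection schemes: mirroring, RAID-6,
# and generic N+M parity. Drive counts and sizes are assumptions.

def usable_tb(drives, drive_tb, parity_drives):
    """Capacity left after reserving `parity_drives` worth of redundancy."""
    return (drives - parity_drives) * drive_tb

drives, drive_tb = 12, 10
print("mirrored :", usable_tb(drives, drive_tb, drives // 2), "TB")  # half for copies
print("RAID-6   :", usable_tb(drives, drive_tb, 2), "TB")            # 2 parity drives
print("N+M (9+3):", usable_tb(drives, drive_tb, 3), "TB")            # 3 parity drives
```

Mirroring halves capacity but recovers fastest; RAID-6 and N+M keep more usable space while tolerating two or M drive failures respectively, which is why the choice depends on the customer's capacity growth and protection needs.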

In addition, the Genesis Platform’s 24×7 health monitoring service (HMS) communicates directly with installed products at customer sites, using the equipment serial number to track service outages, system temperature, power supply failure, data storage drive failure and dozens of other mission-critical status updates. This service is available to Scale Logic end users in all regions of the world and complies with enterprise-level security protocols by relying only on outgoing communication via a single port.

Users want more flexible workflows — storage in the cloud, on-premises. Are your offerings reflective of that?
Absolutely. This question defines our go-to-market strategy — it’s in our name and part of our day-to-day culture. Scale Logic takes a consultative role with its clients. We take our 30-plus years of experience and ask many questions. Based on the answers, we can give the customer several options. First off, many customers feel pressured to refresh their storage infrastructure before they’re ready. Scale Logic offers customized extended warranty coverage that takes the pressure off the client and allows them to review their options and then slowly implement the migration and process of taking new technology into production.

Also, our Genesis Platform has been designed to scale, meaning clients can start small and grow as their facility grows. We are not trying to force a single solution to our customers. We educate them on the various options to solve their workflow needs and allow them the luxury of choosing the solution that best meets both their short-term and long-term needs as well as their budget.

Facilis‘ Jim McKenna

What is the biggest trend you’ve seen in the past year in terms of storage?
Recently, I’ve found that conversations around storage inevitably end up highlighting some non-storage aspects of the product. Sort of the “storage and…” discussion where the technology behind the storage is secondary to targeted add-on functionality. Encoding, asset management and ingest are some of the ways that storage manufacturers are offering value-add to their customers.

It’s great that customers can now expect more from a shared storage product, but as infrastructure providers we should be most concerned with advancing the technology of the storage system. I’m all for added value — we offer tools ourselves that assist our customers in managing their workflow — but that can’t be the primary differentiator. A premium shared storage system will provide years of service through the deployment of many supporting products from various manufacturers, so I advise people to avoid being caught up in the value-add marketing from a storage vendor.

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
Our industry has always been dependent upon storage in the workflow, but now facilities need to manage large quantities of data efficiently, so it’s becoming more about scaled networks. In the traditional SAN environment, hard-wired Fibre Channel clients are the exclusive members of the production workgroup.

With scalable shared-storage through multiple connection options, everyone in the facility can be included in the collaboration on a project. This includes offload machines for encoding and rendering large HDR and VR content, and MAM systems with localized and cloud analysis of data. User accounts commonly grow into the triple digits when producers, schedulers and assistants all require secure access to the storage network.

Can you talk about NVMe?
Like any new technology, the outlook for NVMe is promising. Solid state architecture solves a lot of problems inherent in HDD-based systems — seek times, read speeds, noise and cooling, form factor, etc. If I had to guess a couple years ago, I would have thought that SATA SSDs would be included in the majority of systems sold by now; instead they’ve barely made a dent in the HDD-based unit sales in this market. Our customers are aware of new technology, but they also prioritize tried-and-true, field-tested product designs and value high capacity at a lower cost per GB.

Spinning HDD will still be the primary storage method in this market for years to come, although solid state has advantages as a helper technology for caching and direct access for high-bandwidth requirements.
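The "helper technology for caching" idea can be sketched as a small LRU read cache: solid state holds recently used blocks in front of the spinning disks. This is a purely illustrative model, not Facilis' implementation:

```python
# Minimal sketch of SSD as a read cache in front of HDD: an LRU cache
# holding hot blocks. Illustrative only, not any vendor's design.

from collections import OrderedDict

class SSDReadCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()   # block_id -> data, ordered by recency
        self.hits = self.misses = 0

    def read(self, block_id, hdd_read):
        if block_id in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_id)     # mark most-recently used
            return self.cache[block_id]
        self.misses += 1
        data = hdd_read(block_id)                # slow path: spinning disk
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)       # evict least-recently used
        return data

cache = SSDReadCache(capacity_blocks=2)
for b in [1, 2, 1, 3, 1]:                        # block 2 gets evicted
    cache.read(b, hdd_read=lambda blk: f"data{blk}")
print(cache.hits, cache.misses)                  # prints: 2 3
```

Repeated reads of hot blocks are served from Flash while cold data stays on high-capacity spinning disk, which is the economic argument the answer above makes.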

What do you do in your products to help safeguard your users’ data?
Integrity and security are priority features in a shared storage system. We go about security differently than most, and because of this our customers have more confidence in their solution. By using a system of permissions that operates at the volume level and hides the complexities of network ownership attributes, no network security training is required. Because it is simple to restrict data to only the necessary people, data integrity and privacy are improved.

In the case of data integrity during hardware failure, our software-defined data protection has been guarding our customers’ assets for over 13 years and is continually improved. With increasing drive sizes, time to complete a drive recovery is an important factor, as is system usability during the process.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
When data lifecycle is a concern of our customers, we consult on methods of building a storage hierarchy. There is no one-size-fits-all approach here, as every workflow, facility and engineering scope is different.

Tier 1 storage is our core product line, but we also have solutions for nearline (tier 2) and archive (tier 3). When the discussion turns to the cloud as a replacement for some of the traditional on-premises storage offerings, the complexity of the pricing structure, access model and interface becomes a gating factor. There are a lot of ways to effectively use the cloud, such as compute (AI, encoding, etc.), business continuity, workflow (WAN collaboration) or simple cold storage. These tools, when combined with a strong on-premises storage network, will enhance productivity and ensure on-time delivery of product.

mLogic’s co-founder/CEO Roger Mabon

What is the biggest trend you’ve seen in the past year in terms of storage?
In the M&E industry, high-resolution 4K/8K multi-camera shoots, stereoscopic VR and HDR video are commonplace and are contributing to the unprecedented amounts of data being generated in today’s media productions. This trend will continue as frame rates and resolutions increase and video professionals move to shoot in these new formats to future-proof their content.

With AI, VR and machine learning, etc., our industry is even more dependent on storage. Can you talk about that?
Absolutely. In this environment, content creators must deploy storage solutions that are high-capacity, high-performance and fault-tolerant. Furthermore, all of this content must be properly archived so it can be accessed well into the future. mLogic’s mission is to provide affordable RAID and LTO tape storage solutions that fit this critical need.

How are you addressing this?
The tsunami of data being produced in today’s shoots must be properly managed. First and foremost is the need to protect the original camera files (OCF). Our high-performance mSpeed Thunderbolt 3 RAID solutions are being deployed on-set to protect these OCF. mSpeed is a desktop RAID that features plug-and-play Thunderbolt connectivity, capacities up to 168TB and RAID-6 data protection. Once the OCF is transferred to mSpeed, camera cards can be wiped and put back into production.

The next step involves moving the OCF from the on-set RAID to LTO tape. Our portable mTape Thunderbolt 3 LTO solutions are used extensively by media pros to transfer OCF to LTO tape. LTO tape cartridges are shelf-stable for 30-plus years and cost around $10 per TB. That said, I find that many productions skip the LTO transfer and rely solely on single hard drives to store the OCF. This is a recipe for disaster, as hard drives sitting on a shelf have a lifespan of only three to five years. Companies working with the likes of Netflix are required to use LTO for this very reason. Completed projects should also be offloaded from hard drives and RAIDs to LTO tape. These hard drive systems can then be put back into action for the tasks they are designed for: editing, color correction, VFX, etc.
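The economics behind the LTO recommendation can be sketched with the article's ballpark figures: tape written once lasts the whole archive horizon, while shelved hard drives must be re-bought and re-copied every few years. The HDD price per TB below is an assumption for illustration:

```python
# Rough 30-year archive cost per TB: LTO written once versus shelved
# hard drives replaced as they age out. LTO figures follow the article;
# the HDD price and lifespan are illustrative assumptions.

def archive_cost_per_tb(media_cost_per_tb, lifespan_years, horizon_years=30):
    """Media cost over the horizon, re-buying each time the media ages out."""
    replacements = -(-horizon_years // lifespan_years)  # ceiling division
    return media_cost_per_tb * replacements

lto = archive_cost_per_tb(10, 30)   # ~$10/TB cartridges, 30-year shelf life
hdd = archive_cost_per_tb(25, 5)    # assumed ~$25/TB drives, ~5-year lifespan
print(f"LTO: ${lto}/TB over 30 years; HDD: ${hdd}/TB over 30 years")
```

Even ignoring the labor of periodically migrating data off aging drives, the replacement cycle alone makes shelved HDDs several times more expensive per archived TB.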

Can you talk about NVMe?
mLogic does not currently offer storage solutions that incorporate NVMe technology, but we do recognize numerous use cases for content creation applications. Intel is currently shipping an 8TB SSD with PCIe NVMe 3.1 x4 interface that can read/write data at 3000+ MB/second! Imagine a crazy fast and ruggedized NVMe shuttle drive for on-set dailies…

What do you do in your products to help safeguard your users data?

Our 8- and 12-drive mSpeed solutions feature hardware RAID data protection. mSpeed can be configured in multiple RAID levels, including RAID-6, which will protect the content stored on the unit even if two drives fail. Our mTape solutions are specifically designed to make it easy to offload media from spinning drives and archive the content to LTO tape for long-term data preservation.

Users want more flexible workflows — storage in the cloud, on premise, etc. Are your offerings reflective of that?
We recommend that you make two LTO archives of your content that are geographically separated in secure locations such as the post facility and the production facility. Our mTape Thunderbolt solutions accomplish this task.

In regards to the cloud, transferring terabytes upon terabytes of data takes an enormous amount of time and can be prohibitively expensive, especially when you need to retrieve the content. For now, cloud storage is reserved for productions with big pipes and big budgets.

OWC president Jennifer Soulé

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?

We’re constantly working to provide more capacity and faster performance.  For spinning disk solutions, we’re making sure that we’re offering the latest sizes in ever-increasing bays. Our ThunderBay line started as a four-bay, went to a six-bay and will grow to eight-bay in 2019. With 12TB drives, that’s 96TB in a pretty workable form factor. Of course, you also need performance, and that is where our SSD solutions come in as well as integrating the latest interfaces like Thunderbolt 3. For those with greater graphics needs, we also have our Helios FX external GPU box.

Can you talk about NVME?
With our Aura Pro X, Envoy Pro EX, Express 4M2 and ThunderBlade, we’re already into NVMe and don’t see that stopping. By the end of 2019, we expect virtually all of our external Flash-based solutions to be NVMe-based rather than SATA. As the cost of Flash goes down and performance and capacity go up, we expect broader adoption both as primary storage and in secondary cache setups. 2TB drive supply will stabilize, we should see 4TB drives, and PCIe Gen 4 will double bandwidth. Bigger, faster and cheaper is a pretty awesome combination.
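The "PCIe Gen 4 will double bandwidth" claim follows directly from the link math: Gen 3 signals at 8 GT/s and Gen 4 at 16 GT/s, both with 128b/130b encoding. A quick per-direction calculation for a typical x4 NVMe drive:

```python
# Per-direction PCIe bandwidth, showing why Gen 4 doubles Gen 3.
# Gen 3 runs 8 GT/s and Gen 4 runs 16 GT/s, both with 128b/130b encoding.

def pcie_gb_per_s(gt_per_s, lanes):
    """Usable GB/s per direction after 128b/130b encoding overhead."""
    return gt_per_s * (128 / 130) * lanes / 8   # 8 bits per byte

gen3_x4 = pcie_gb_per_s(8, 4)
gen4_x4 = pcie_gb_per_s(16, 4)
print(f"Gen3 x4: {gen3_x4:.2f} GB/s, Gen4 x4: {gen4_x4:.2f} GB/s")
```

Roughly 3.9 GB/s versus 7.9 GB/s per direction on four lanes, so the same M.2 form factor gets twice the ceiling simply by moving to the Gen 4 signaling rate.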

What do you do in your products to help safeguard your users data?
We focus more on providing products that are compatible with different encryption schemas than on building something in. As far as overall data protection, we’re always focused on providing the most reliable storage we can. We make sure our power supplies are rated above what is required, so insufficient power is never a factor. We test a multitude of drives in our enclosures to ensure we’re providing the best-performing drives.

For our RAID solutions, we do burn-in testing to make sure all the drives are solid. Our SoftRAID technology also provides in-depth drive health monitoring, so you know well in advance if a drive is failing. This is critical because many other SMART-based systems fail to detect bad drives, leading to subpar system performance and corrupted data. Of course, all the hardware and software technology we put into our drives doesn’t do much if people don’t back up their data, so we also work with our customers to find the right solution for their use case or workflow.

Users want more flexible workflows — storage in the cloud, on premise, etc. Are your offerings reflective of that?
I definitely think we hit on flexibility within the on-prem space by offering a full range of single- and multi-drive solutions, spinning disk and SSD options, and portable to rackmounted systems that can be fully set-up solutions or DIY, where you can use drives you might already have. You’ll have to stay tuned on the cloud part, but we do have plans to use the cloud to expand on the data protection our drives already offer.

Panasas’ new ActiveStor Ultra targets emerging apps: AI, VR

Panasas has introduced ActiveStor Ultra, the next generation of its high-performance computing storage solution, featuring PanFS 8, a plug-and-play, portable, parallel file system. ActiveStor Ultra offers up to 75GB/s per rack on industry-standard commodity hardware.

ActiveStor Ultra comes as a fully integrated plug-and-play appliance running PanFS 8 on industry-standard hardware. PanFS 8 is the completely re-engineered Panasas parallel file system, which now runs on Linux and features intelligent data placement across three tiers of media — metadata on non-volatile memory express (NVMe), small files on SSDs and large files on HDDs — resulting in optimized performance for all data types.

ActiveStor Ultra is designed to support the complex and varied data sets associated with traditional HPC workloads and emerging applications, such as artificial intelligence (AI), autonomous driving and virtual reality (VR). ActiveStor Ultra’s modular architecture and building-block design enables enterprises to start small and scale linearly. With dock-to-data in one hour, ActiveStor Ultra offers fast data access and virtually eliminates manual intervention to deliver the lowest total cost of ownership (TCO).

ActiveStor Ultra will be available early in the second half of 2019.

Symply offering StorNext 6-powered Thunderbolt 3 storage solution

Symply is at NAB New York providing tech previews of its SymplyWorkspace Thunderbolt 3-based SAN technology that uses Quantum’s StorNext 6.

SymplyWorkspace allows laptops and workstations equipped with Thunderbolt 3 to ingest, edit, finish and deliver media through a direct Thunderbolt 3 cable connection, with no adapter needed and without having to move content locally, even at 4K resolutions.

Based on StorNext 6 sharing software, users can connect up to eight laptops and workstations to the system and instantly share video, graphics and other data files using a standard Thunderbolt interface with no additional hardware or adapters.

While the company has not announced pricing it does expect to have systems for sale in Q4. The boxes are expected to start under $10,000 for 48TB and up to four users, making the system well-suited for users such as smaller post houses, companies with in-house creative teams and ad agencies.

Quantum upgrades Xcellis scale-out storage with StorNext 6.2, NVMe tech

Quantum has made enhancements to its Xcellis scale-out storage appliance portfolio with an upgrade to StorNext 6.2 and the introduction of NVMe storage. StorNext 6.2 bolsters performance for 4K and 8K video while enhancing integration with cloud-based workflows and global collaborative environments. NVMe storage significantly accelerates ingest and other aspects of media workflows.

Quantum’s Xcellis scale-out appliances provide high performance for increasingly demanding applications and higher resolution content. Adding NVMe storage to the Xcellis appliances offers ultra-fast performance: 22 GB/s single-client, uncached streaming bandwidth. Excelero’s NVMesh technology in combination with StorNext ensures all data is accessible by multiple clients in a global namespace, making it easy to access and cost-effective to share Flash-based resources.

Xcellis provides cross-protocol locking for shared access across SAN, NFS and SMB, helping users share content across both Fibre Channel and Ethernet.

With StorNext 6.2, Quantum now offers an S3 interface to Xcellis appliances, allowing them to serve as targets for applications designed to write to RESTful interfaces. This allows pros to use Xcellis as either a gateway to the cloud or as an S3 target for web-based applications.

Xcellis environments can now be managed with a new cloud monitoring tool that enables Quantum’s support team to monitor critical customer environmental factors, speed time to resolution and ultimately increase uptime. When combined with Xcellis Web Services — a suite of services that lets users set policies and adjust system configuration — overall system management is streamlined.

Available with StorNext 6.2, enhanced FlexSync replication capabilities enable users to create local or remote replicas of multitier file system content and metadata. With the ability to protect data for both high-performance systems and massive archives, users now have more flexibility to protect a single directory or an entire file system.

StorNext 6.2 lets administrators provide defined and enforceable quotas and implement quality of service levels for specific users, and it simplifies reporting of used storage capacity. These new features make it easier for administrators to manage large-scale media archives efficiently.

The new S3 interface and NVMe storage option are available today. The other StorNext features and capabilities will be available by December 2018.

 

mLogic at IBC with four new storage solutions

mLogic will be at partner booths during IBC showing four new products: the mSpeed Pro, mRack Pro, mShare MDC and mTape SAS.

The mLogic mSpeed Pro (pictured) is a 10-drive RAID system with an integrated LTO tape drive. This hybrid storage solution provides high-speed hard drive access to media for coloring, editing and VFX, while also providing extended, long-term archiving of content to LTO tape, which promises more than 30 years of media preservation.

mSpeed Pro supports multiple RAID levels, including RAID-6 for the ultimate in fault tolerance. It connects to any Linux, macOS, or Windows computer via a fast 40Gb/second Thunderbolt 3 port. The unit ships with the mLogic Linear Tape File System (LTFS) Utility, a simple drag-and-drop application that transfers media from the RAID to the LTO.

The mLogic mSpeed Pro will be available in 60TB, 80TB and 100TB configurations with an LTO-7 or LTO-8 tape drive. Pricing starts at $8,999.

The mRack Pro is a 2U rack-mountable archiving solution that features full-height LTO-8 drives and Thunderbolt 3 connectivity. Full-height (FH) LTO-8 drives offer numerous benefits over their half-height counterparts, including:
– Having larger motors that move media faster
– Working more optimally in LTFS (Linear Tape File System) environments
– Providing increased mechanical reliability
– Being a better choice for high-duty cycle workloads
– Having a lower operating temperature

The mRack Pro is available with one or two LTO-8 FH drives. Pricing starts at $7,999.

mLogic’s mShare is a metadata controller (MDC) with PCIe switch and embedded Storage Area Network (SAN) software, all integrated in a single compact rack-mount enclosure. Designed to work with mLogic’s mSAN Thunderbolt 3 RAID, the unit can be configured with Apple Xsan or Tiger Technology Tiger Store software. With mShare and mSAN, collaborative workgroups can be configured over Thunderbolt at a fraction of the cost of traditional SAN solutions. Pricing TBD.

Designed for archiving media in Linux and Windows environments, mTape SAS is a desktop LTO-7 or LTO-8 drive that ships bundled with a high-speed SAS PCIe adapter to install in host computers. The mTape SAS can also be bundled with Xendata Workstation 6 archiving software for Windows. Pricing starts at $3,399.

Review: Mobile Filmmaking with Filmic Pro, Gnarbox, LumaFusion

By Brady Betzel

There is a lot of what’s become known as mobile filmmaking being done with cell phones, such as the iPhone, Samsung Galaxy and even the Google Pixel. For this review, I will cover two apps and one hybrid hard drive/mobile media ingest station built specifically for this type of mobile production.

Recently, I’ve heard how great the latest mobile phone camera sensors are, and how those embracing mobile filmmaking are taking advantage of them in their workflows. Those workflows typically have one thing in common: Filmic Pro.

One of the more difficult parts of mobile filmmaking, whether you are using a GoPro, DSLR or your phone, is storage and transferring the media to a workable editing system. The Gnarbox, which is designed to help solve this issue, is in my opinion one of the best solutions for mobile workflows that I have seen.

Finally, editing your footage together in a professional nonlinear editor like Adobe Premiere Pro or Blackmagic’s Resolve takes some skills and dedication. Moreover, if you are doing a lot of family filmmaking (like me), you usually have to wait for the kids to go to sleep to start transferring and editing. However, with the iOS app LumaFusion — used simultaneously with the Gnarbox — you can transfer your GoPro, DSLR or other pro camera shots, while your actors are taking a break, allowing you to clear your memory cards or get started on a quick rough cut to send to executives that might be waiting off site.

Filmic Pro
First up is Filmic Pro V.6. Filmic Pro is an iOS and Android app that gives you fine-tuned control over your phone's camera, including live image analysis features, focus pulling and much more.

There are four very useful live analytic views you can enable at the top of the app: Zebra Stripes, Clipping, False Color and Focus Peaking. There is another awesome recording view that allows simultaneous focus and exposure adjustments, conveniently placed where you would naturally rest your thumbs. With the focus pulling feature you can even set start and end focus points that Filmic Pro will run for you — amazing!

There are many options under the hood of Filmic Pro, including the ability to record at almost any frame rate and aspect ratio, such as 9:16 vertical video (Instagram TV anyone?). You can also film at one frame rate, such as 120fps, and record at a more standard frame rate of 24fps, essentially processing your high-speed footage in the phone. Vertical video is one of those constant questions that arises when producing video for mobile viewing. If you don't want the app to automatically change to vertical video recording mode, you can set an orientation lock in the settings. When recording video there are three data rate options: Filmic Extreme, with 100Mb/s for any frame size 2K or higher and 50Mb/s for 1080p or lower; Filmic Quality, which limits the data rate to 35Mb/s (your phone's default data rate); and Economy, which you probably don't need to use.
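If you're wondering what those data rates mean for storage, here's a quick back-of-envelope calculator. This is my own sketch, not anything from Filmic: it uses decimal megabytes, and real files also carry audio and container overhead.

```python
def minutes_per_gigabyte(megabits_per_second: float) -> float:
    """How many minutes of video fit in 1GB at a given video bit rate."""
    megabytes_per_second = megabits_per_second / 8  # 8 bits per byte
    seconds = 1000 / megabytes_per_second           # 1GB = 1000MB (decimal)
    return seconds / 60

print(round(minutes_per_gigabyte(100), 1))  # -> 1.3  (Filmic Extreme at 2K+)
print(round(minutes_per_gigabyte(35), 1))   # -> 3.8  (Filmic Quality)
```

So Filmic Extreme fills a gigabyte about three times as fast as Filmic Quality, which is worth knowing before a long shoot on a nearly full phone.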

I have only touched on a few of the options inside of Filmic Pro. There are many more, including mic input selections, sample rate selections (including 48kHz), timelapse mode and, in my opinion, the most powerful feature, Log recording. Log recording inside of a mobile phone can unlock some unnoticed potential in your phone’s camera chip, allowing for a better ability to match color between cameras or expose details in shadows when doing color correction in post.

The only slightly bad news is that on top of the $14.99 price for the Filmic Pro app itself, you have to pay an additional $9.99 to gain access to the Log ability (labeled Cinematographer's Toolkit). In the end, $25 is a really, really small price to pay for the abilities that Filmic Pro unlocks. And while this won't turn your phone into an Arri Alexa or Red Helium (yet), you can raise your level of mobile cinematography quickly, and if you are using your phone for some B- or C-roll, Filmic Pro can help make your colorist happy, thanks to Log recording.

One feature that I couldn’t test because I do not own a DJI Osmo is that you can control the features on your iOS device from the Osmo itself, which is pretty intriguing. In addition, if you use any of the Moondog Labs anamorphic adapters, Filmic Pro can be programmed to de-squeeze the footage properly.

You can really dive in with Filmic Pro’s library of tutorials here.

Gnarbox 1.0
After running around with GoPro cameras strapped to your (or your dog’s) head all day, there will be some heavy post work to get it offloaded onto your computer system. And, typically, you will have much more than just one GoPro recording during the day. Maybe you took some still photos on your DSLR and phone, shot some drone footage and had GoPro on a chest mount.

As touched on earlier, the Gnarbox 1.0 is a stand-alone WiFi-enabled hard drive and media ingestion station that has SD, microSD, USB 3.0 and USB 2.0 ports to transfer media to the internal 128GB or 256GB Flash memory. You simply insert the memory cards or the camera's USB cable and connect to the Gnarbox via the app on your phone to begin working or transferring.

The Gnarbox 1.0 iOS and Android apps will open a wide range of files, but some specific files won't open, including ProRes, H.265 iPhone recordings and CinemaDNG. However, not all hope is lost: Gnarbox is offering the Gnarbox 2.0 for pre-order via Indiegogo. Version 2.0 will offer compatibility with file types such as ProRes, in addition to faster transfer times and app-free backups.

So while reading this review of the Gnarbox 1.0, keep Version 2 in the back of your mind, since it will likely contain many new features that you will want… if you can wait until the estimated delivery of January 2019.

Gnarbox 1.0 comes in two flavors: a 128GB version for $299.99, and the version I was sent to review, which is 256GB for $399.99. The price is a little steep, but the efficiency this product brings is worth the price of admission. Click here for all the lovely specs.

The drive itself is primarily made to be used with an iPhone or Android-based device, but it can be put into an external hard drive mode for use with a stand-alone computer. The Gnarbox 1.0 has a write speed of 132MB/s and a read speed of 92MB/s when attached to a computer in Mass Storage Mode via the USB 3.0 connection. I actually found myself switching modes a lot when transferring footage or photos back to my main system.
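To put those speeds in perspective, here's some rough math. This is my own sketch, and the card capacities are illustrative, not from Gnarbox's specs.

```python
WRITE_MBPS = 132  # stated write speed to the Gnarbox, MB/s
READ_MBPS = 92    # stated read speed from the Gnarbox, MB/s

def offload_minutes(card_gb: float, speed_mb_per_s: float) -> float:
    """Minutes to move card_gb gigabytes at a given MB/s (decimal units)."""
    return (card_gb * 1000) / speed_mb_per_s / 60

# A hypothetical 64GB card written to the Gnarbox: roughly 8 minutes
print(round(offload_minutes(64, WRITE_MBPS), 1))
# Reading the full 256GB drive back to a computer: roughly 46 minutes
print(round(offload_minutes(256, READ_MBPS), 1))
```

In other words, a typical card offloads in well under the length of a lunch break, which matches my experience in the field.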

It would be nice to have a way to switch to the external hard drive mode outside of the app, but it’s still pretty easy and takes only a few seconds. To connect your phone or tablet to the Gnarbox 1.0, you need to download the Gnarbox app from the App Store or Google Play Store. From there you can access content on your phone as well as on the Gnarbox when connected to it. In addition to the Gnarbox app, Gnarbox 1.0 can be used with Adobe Lightroom CC and the mobile NLE LumaFusion, which I will cover next in the review.

The reason I love the Gnarbox so much is how simply, efficiently and powerfully it accomplishes its task of storing media without a computer, allowing you to access, edit and export the media to share online without a lot of technical know-how. The one drawback to using cameras like GoPros is that it can take a lot of post processing power to get the videos onto your system and edited. With the Gnarbox, you just insert your microSD card into the Gnarbox, connect your phone via WiFi, edit your photos or footage, then export to your phone or the Gnarbox itself.

If you want to do a full backup of your memory card, you open the Gnarbox app, find the Connected Devices, select some or all of the clips and photos you want to back up to the Gnarbox and click Copy Files. The same screen will show you which files have and have not been backed up yet, so you don't do it multiple times.

When editing photos or video there are many options. If you are simply trimming down a video clip, stringing out a few clips for a highlight reel, adding some color correction and even some music, then the Gnarbox app is all you will need. With the Gnarbox 1.0, you can select resolution and bit rates. If you're reading this review you are probably familiar with how resolutions and bit rates work, so I won't bore you with those explanations. Gnarbox 1.0 allows for 4K, 2.7K, 1080p and 720p resolutions and bit rates of 65Mbps, 45Mbps, 30Mbps and 10Mbps.

My rule of thumb for social media is that resolution over 1080p doesn’t really apply to many people since most are watching it on their phone, and even with a high-end HDR, 4K, wide gamut… whatever, you really won’t see much difference. The real difference comes in bit rates. Spend your megabytes wisely and put all your eggs in the bit rate basket. The higher the bit rates the better quality your color will be and there will be less tearing or blockiness. In my opinion a higher bit rate 1080p video is worth more than a 4K video with a lower bit rate. It just doesn’t pay off. But, hey, you have the options.

Gnarbox has an awesome support site where you can find tutorial GIFs and writeups covering everything from powering on your Gnarbox to bitrates, like this one. They also have a great YouTube playlist that covers most topics with the Gnarbox, its app, and working with other apps like LumaFusion to get you started. Also, follow them on Instagram for some sweet shots they repost.

LumaFusion
Filmic Pro captures your video and the Gnarbox lets you lightly edit and consolidate your media, but you might need to go a little further in the editing than just simple trims. This is where LumaFusion comes in. At the moment, LumaFusion is an iOS-only app, but I've heard they might be working on an Android version. So for this review I tried to get my hands on an iPad and an iPad Pro, because this is where LumaFusion would sing. Alas, I had to settle for my wife's iPhone 7 Plus. This was actually a small blessing, because I was afraid the app would be way too small to use on a standard iPhone. To my surprise it was actually fine.

LumaFusion is an iOS-based nonlinear editor, much like Adobe Premiere or FCPX, but it only costs $19.99 in the App Store. I added LumaFusion to this review because of its tight integration with Gnarbox (by accessing the files directly on the Gnarbox for editing and output), but also because it has presets for Filmic Pro aspect ratios: 1.66:1, 17:9, 2.2:1, 2.39:1, 2.59:1. LumaFusion will also integrate with external drives like the Western Digital wireless SSD, as well as cloud services like Google Drive.

In the actual editing interface LumaFusion allows for advanced editing with titles, music, effects and color correction. It gives you three video and audio tracks to edit with, allowing for J and L cuts or transitions between clips. For an editor like me who is so used to Avid Media Composer that I want to slip and trim in every app, LumaFusion allows for slips, trims, insert edits, overwrite edits, audio track mixing, audio ducking to automatically set your music levels — depending on when dialogue occurs — audio panning, chroma key effects, slow and fast motion effects, titles with different fonts and much more.

There is a lot of versatility inside of LumaFusion, including the ability to export different frame rates between 18, 23.976, 24, 25, 29.97, 30, 48, 50, 59.94, 60, 120 and 240 fps. If you are dealing with 360-degree video, you can even enable the 360-degree metadata flag on export.

LumaFusion has a great reference manual that will fill you in on all the aspects of the app, and it's a good primer on other subjects like exporting. In addition, they have a YouTube playlist. Simply put, you can export for all sorts of social media platforms or even share over AirDrop between macOS and iOS devices. You can choose your export resolution, such as 1080p or UHD 4K (3840×2160), as well as your bit rate, and then you can select your codec, whether it be H.264 or H.265. You can also choose whether the container is an MP4 or MOV.

Obviously, some of these output settings will be dictated by the destination, such as YouTube, Instagram or maybe your NLE on your computer system. Bit rate is very important for color fidelity and overall picture quality. LumaFusion has a few settings on export, including 12Mbps, 24Mbps, 32Mbps and 50Mbps in 1080p, or 100Mbps if you are exporting UHD 4K (3840×2160).

LumaFusion is a great solution for someone who needs the fine tuning of a pro NLE on their iPad or iPhone. You can be on an exotic vacation without your laptop and still create intricately edited highlight reels.

Summing Up
In the end, technology is amazing! From the ultra-high-end camera app Filmic Pro to the amazing wireless media hub Gnarbox and even the iOS-based nonlinear editor LumaFusion, you can film, transfer and edit a professional-quality UHD 100Mbps clip without the need for a stand-alone computer.

If you really want to see some amazing footage being created using Filmic Pro you should follow Richard Lackey on all social media platforms. You can find more info on his website. He has some amazing imagery as well as tips on how to shoot more “cinematic” video using your iPhone with Filmic Pro.

The Gnarbox — one of my favorite tools reviewed over the years — serves a purpose and excels. I can’t wait to see how the Gnarbox 2.0 performs when it is released. If you own a GoPro or any type of camera and want a quick and slick way to centralize your media while you are on the road, then you need the Gnarbox.

LumaFusion will finish off your mobile filmmaking vision with titles, trimming and advanced edit options that will leave people wondering how you pulled off such a professional video from your phone or tablet.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

DigitalGlue’s Creative.Space optimized for Resolve workflows

DigitalGlue's Creative.Space, an on-premise managed storage (OPMS) service, has been optimized for Blackmagic DaVinci Resolve workflows, meeting the technical requirements for inclusion in Blackmagic's Configuration Guide. DigitalGlue is an equipment, integration and software development provider that also designs and implements solutions for complete turnkey content creation, post production and distribution.

According to DigitalGlue CEO/CTO Tim Anderson, each Creative.Space system is pre-loaded with a Resolve-optimized PostgreSQL database server, enabling users to simply create databases in Resolve using the same address they use to connect to their storage. In addition, users can schedule database backups with snapshots, ensuring that work is preserved in a timely and secure manner. Creative.Space also uses media-intelligent caching to move project data and assets into a "fast lane," allowing all collaborators to experience seamless performance.

“We brought a Creative.Space entry-level Auteur unit optimized with a DaVinci Resolve database to the Blackmagic training facility in Burbank,” explains Nick Anderson, Creative.Space product manager. “The Auteur was put through a series of rigorous testing processes and passed each with flying colors. Our Media Intelligent caching allowed the unit to provide full performance to 12 systems at a level that would normally require a much larger and more expensive system.”

Auteur was the first service in the Creative.Space platform to launch. Creative.Space targets collaborative workflows by optimizing the latest hardware and software for efficiency and increased productivity. Auteur starts at 120TB RAW capacity across 12 drives in a 24-bay 4RU chassis with open bays for rapid growth. Every system is custom-built to address each client’s unique needs. Entry level systems are designed for small to medium workgroups using compressed 4K, 6K and 8K workflows and can scale for 4K uncompressed workflows (including 4K OpenEXR) and large multi-user environments.

Avid adds to Nexis product line with Nexis|E5

The Nexis|E5 NL nearline storage solution from Avid is now available. The addition of this high-density on-premises solution to the Avid Nexis family allows Avid users to manage media across all their online, nearline and archive storage resources.

Avid Nexis|E5 NL includes a new web-based Nexis management console for managing, controlling and monitoring Nexis installations. Nexis|E5 NL can be easily accessed through MediaCentral|Cloud UX or Media Composer and also integrates with MediaCentral|Production Management, MediaCentral|Asset Management and MediaCentral|Editorial Management to help collaboration, with advanced features such as project and bin sharing. Extending the Nexis|FS (file system) to a secondary storage tier makes it easy to search for, find and import media, enabling users to locate content distributed throughout their operations more quickly.

Built for project parking, staging workflows and proxy archive, Avid reports that Nexis|E5 NL streamlines the workflow between active and non-active assets, allowing media organizations to park assets as well as completed projects on high-density nearline storage and keep them within easy reach for rediscovery and reuse.

Up to eight Nexis|E5 NL engines can be integrated as one virtualizable pool of storage, making content and associated projects and bins more accessible. In addition, other Avid Nexis Enterprise engines can be integrated into a single storage system that is partitioned for better archival organization.

Additional Nexis|E5 NL features include:
• It’s scalable from 480TB of storage to more than 7PB by connecting multiple Nexis|E5 NL engines together as a single nearline system for a highly scalable, lower-cost secondary tier of storage.
• It offers flexible storage infrastructure that can be provisioned with required capacity and fault-tolerance characteristics.
• Users can configure, control and monitor Nexis using the updated management console that looks and feels like a MediaCentral|Cloud UX application. Its dashboard provides an overview of the system’s performance, bandwidth and status, as well as access to quickly configure and manage workspaces, storage groups, user access, notifications and other functions. It offers the flexibility and security of HTML5 along with an interface design that enables mobile device support.

DigitalGlue’s Creative.Space intros all-Flash 1RU OPMS storage

Creative.Space, a division of DigitalGlue that provides on-premise managed storage (OPMS) as a service for production and post companies as well as broadcast networks, has added the Breathless system to its offerings. The product will make its debut at Cine Gear in LA next month.

The Breathless Next Generation Small Form Factor (NGSFF) media storage system offers 36 front-serviceable NVMe SSD bays in 1RU. It is designed for 4K, 6K and 8K uncompressed workflows using JPEG2000, DPX and multi-channel OpenEXR. 4TB NVMe SSDs are currently available, and a 16TB version arriving later this year will allow 576TB of Flash storage to fit in 1RU. Breathless delivers 10 million random read IOPS (input/output operations per second) of storage performance (up to 475,000 per drive).

Each of the 36 NGSFF SSD bays connects to the motherboard directly over PCIe to deliver maximum potential performance. With dual Intel Skylake-SP CPUs and 24 DDR4 DIMMs of memory, this system is perfect for I/O intensive local workloads, not just for high-end VFX, but also realtime analytics, database and OTT content delivery servers.

Breathless’ OPMS features 24/7 monitoring, technical support and next-day repairs for an all-inclusive, affordable fixed monthly rate of $2,495.00, based on a three-year contract (16TB of SSD).

Breathless is the second Creative.Space system to launch, joining Auteur, which offers 120TB RAW capacity across 12 drives in a 24-bay 4 RU chassis. Every system is custom-built to address each client’s needs. Entry level systems are designed for small to medium workgroups using compressed 4K, 6K and 8K workflows and can scale for 4K uncompressed workflows (including 4K OpenEXR) and large multi-user environments.

DigitalGlue, an equipment, integration and software development provider, also designs and implements turnkey solutions for content creation, post and distribution.

 

NAB 2018: My key takeaways

By Twain Richardson

I traveled to NAB this year to check out gear, software, technology and storage. Here are my top takeaways.

Promise Atlas S8+
First up is storage and the Promise Atlas S8+. The Promise Atlas S8+ is a network-attached storage solution for small groups that features easy and fast NAS connectivity over Thunderbolt 3 and 10Gb Ethernet.

The Thunderbolt 3 version of the Atlas S8+ offers two Thunderbolt 3 ports, four 1Gb Ethernet ports, five USB 3.0 ports and one HDMI output. The 10GBase-T version swaps in two 10Gb/s Ethernet ports for the Thunderbolt 3 connections. It can be configured up to 112TB. The unit comes empty, so you will have to buy hard drives for it. The Atlas S8+ will be available later this year.

Lumaforge

Lumaforge Jellyfish Tower
The Jellyfish is designed for one thing and one thing only: collaborative video workflow. That means high bandwidth, low latency and no dropped frames. It features a direct connection, and you don’t need a 10GbE switch.

The great thing about this unit is that it runs quiet, and I mean very quiet. You could place it under your desk and you wouldn’t hear it running. It comes with two 10GbE ports and one 1GbE port. It can be configured for more ports and goes up to 200TB. The unit starts at $27,000 and is available now.

G-Drive Mobile Pro SSD
The G-Drive Mobile Pro SSD is blazing-fast storage with data transfer rates of up to 2800MB/s. It was said that you could transfer as much as a terabyte of media in seven minutes or less. That’s fast. Very fast.
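That claim is easy to sanity-check with my own arithmetic, using decimal units:

```python
def transfer_minutes(size_gb: float, speed_mb_per_s: float) -> float:
    """Minutes to move size_gb gigabytes at a given MB/s."""
    return (size_gb * 1000) / speed_mb_per_s / 60

minutes = transfer_minutes(1000, 2800)  # 1TB (decimal) at the rated speed
print(round(minutes, 1))  # just under six minutes, so the claim holds up
```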

It provides up to three-meter drop protection and comes with a single Thunderbolt 3 port and is bus powered. It also features a 1000lb crush-proof rating, which makes it ideal for being used in the field. It will be available in May with a capacity of 500GB. 1TB and 2TB versions will be available later this year.

OWC Thunderblade
Designed to be rugged and dependable as well as blazing fast, the Thunderblade has a sleek design and comes with a custom-fit ballistic hard-shell case. With capacities of up to 8TB and data transfer rates of up to 2800MB/s, this unit is ideal for on-set workflows. The unit is not bus powered, but you can connect two ThunderBlades to reach speeds of up to 3800MB/s. Now that's fast.

OWC Thunderblade

It starts at $1,199 for the 1TB and is available now for purchase.

OWC Mercury Helios FX External Expansion Chassis
Add the power of a high-performance GPU to your Mac or PC via Thunderbolt 3. Performance is plug-and-play, and upgrades are easy. The unit is quiet and runs cool, making it a great addition to your environment.

It starts at $319 and is available now.

Flanders XM650U
This display is beautiful, absolutely beautiful.

The XM650U is a professional reference monitor designed for color-critical monitoring of 4K, UHD, and HD signals. It features the latest large-format OLED panel technology, offering outstanding black levels and overall picture performance. The monitor also features the ability to provide a realtime downscaled HD resolution output.

The FSI booth was showcasing the display playing HD, UHD, and UHD HDR content, which demonstrates how versatile the device is.

The monitor goes for $12,995 and is available for purchase now.

DaVinci Resolve 15
Version 15 is arguably the biggest update yet to Resolve. It combines editing, color correction, audio and now visual effects, all in one software tool, with the addition of Fusion. Other additions include ADR tools in Fairlight and a sound library. The color and edit pages have additions such as a LUT browser, shared grades, stacked timelines, closed captioning tools and more.

You can get DR15 for free — yes free — with some restrictions to the software and you can purchase DR15 Studio for $299. It’s available as a beta at the moment.

Those were my top takeaways from NAB 2018. It was a great show, and I look forward to NAB 2019.


Twain Richardson is a co-founder of Frame of Reference, a boutique post production company located on the beautiful island of Jamaica. Follow the studio and Twain on Twitter: @forpostprod @twainrichardson

Riding the digital storage bus at the HPA Tech Retreat

By Tom Coughlin

At the 2018 HPA Tech Retreat in Palm Desert there were many panels that spoke to the changing requirements for digital storage to support today’s diverse video workflows. While at the show, I happened to snap a picture of the Maxx Digital bus — these guys supply video storage and RAID. I liked this picture because it had the logos of a number of companies with digital storage products serving the media and entertainment industry. So, this blog will ride the storage bus to see where digital storage in M&E is going.

Director of photography Bill Bennett, ASC, and RealD senior scientist Tony Davis gave an interesting talk about why it can be beneficial to capture content at high frame rates, even if it will ultimately be shown at a much lower frame rate. They also offered some interesting statistics about Ang Lee's 2016 technically groundbreaking movie, Billy Lynn's Long Halftime Walk, which was shot in 3D at 4K resolution and 120 frames per second.

The image above is a slide from the talk describing the size of the data generated in creating this movie. Single Sony F65 frames with 6:1 compression were 5.2MB in size with 7.5TB of average footage per day over 49 days. They reported that 104-512GB cards were used to capture and transfer the content and the total raw negative size (including test materials) was 404TB. This was stored on 1.5PB of hard disk storage. The actual size of the racks used for storage and processing wasn’t all that big. The photo below shows the setup in Ang Lee’s apartment.
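Those figures hang together if you run the numbers. This is my own sketch; it assumes the 5.2MB figure is per eye and uses decimal units throughout.

```python
FRAME_MB = 5.2  # compressed Sony F65 frame size
FPS = 120
EYES = 2        # stereo 3D

capture_rate_mb_s = FRAME_MB * FPS * EYES  # ~1248MB/s while the camera rolls

daily_tb = 7.5
rolling_minutes = daily_tb * 1_000_000 / capture_rate_mb_s / 60
print(round(rolling_minutes))  # roughly 100 minutes of actual capture per day

shoot_tb = daily_tb * 49  # 367.5TB over 49 days, in line with the 404TB
print(shoot_tb)           # total once test material is included
```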

Bennett and Davis went on to describe the advantages of shooting at high frame rates. Shooting at high frame rates gives greater on-set flexibility, since no motion data is lost during shooting, so things can be fixed in post more easily. Even when content is shown at lower frame rates in order to get conventional cinematic aesthetics, a synthetic shutter can be created with a different motion sense in different parts of the frame, producing effective cinematic effects using models for particle motion, rotary motion and speed ramps.
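To illustrate the synthetic shutter idea in the simplest possible terms, here's a toy sketch that averages groups of 120fps frames down to 24fps. The real tools use weighted, per-region blends; this uniform mean is just the principle.

```python
def synthetic_shutter(frames, group=5):
    """Average consecutive groups of frames; each frame is a list of pixel values."""
    out = []
    for i in range(0, len(frames) - group + 1, group):
        chunk = frames[i:i + group]
        # Per-pixel mean across the group simulates a longer exposure
        out.append([sum(px) / group for px in zip(*chunk)])
    return out

# Ten 120fps "frames" of a bright dot stepping across four pixels
frames = [[255 if p == (f // 3) % 4 else 0 for p in range(4)] for f in range(10)]
blurred = synthetic_shutter(frames, group=5)  # two 24fps frames, motion smeared
print(len(blurred))  # -> 2
```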

During Gary Demos’s talk on Parametric Appearance Compensation he discussed the Academy Color Encoding System (ACES) implementation and testing. He presented an interesting slide on a single master HDR architecture shown below. A master will be an important element in an overall video workflow that can be part of an archival package, probably using the SMPTE (and now ISO) Archive eXchange Format (AXF) standard and also used in a SMPTE Interoperable Mastering Format (IMF) delivery package.

The Demo Area
At the HPA Retreat exhibits area we found several interesting storage items. Microsoft had on exhibit one of its Data Boxes, which allow shipping up to 100TB of data to its Azure cloud. The Microsoft Azure Data Box joins Amazon's Snowball and Google's similar bulk-ingest box. Like the AWS Snowball, the Azure Data Box includes an e-paper display that also functions as a shipping label. Microsoft did early testing of the Data Box with Oceaneering International, which performs offline sub-sea oil industry inspection and uploaded its data to Azure using Data Box.

ATTO was showing its Direct2GPU technology, which allows direct transfer from storage to GPU memory for video processing without needing to pass through a system CPU. ATTO is a manufacturer of HBAs and other connectivity solutions for moving data, and it is developing smarter connectors that can reduce overall system overhead.

Henry Gu's GIC company was showing its digital video processor with automatic QC and an IMF tool set, enabling conversion of any file type to IMF, transcoding to any file format and playback of all file types, including 4K/UHD. He was doing his demonstration using a DDN storage array (right).

Digital storage is a crucial element in modern professional media workflows. Digital storage enables higher frame rate, HDR video recording and processing to create a variety of display formats. Digital storage also enables uploading bulk content to the cloud and implementing QC and IMF processes. Even SMPTE standards for AXF, IMF and others are dependent upon digital storage and memory technology in order to make them useful. In a very real sense, in the M&E industry, we are all riding the digital storage bus.


Dr. Tom Coughlin, president of Coughlin Associates, is a storage analyst and consultant. Coughlin has six patents to his credit and is active with SNIA, SMPTE, IEEE and other pro organizations. Additionally, Coughlin is the founder and organizer of the annual Storage Visions Conference as well as the Creative Storage Conference.

Quantum’s Xcellis scale-out NAS targets IP workflows for M&E

Quantum is now offering a new Xcellis Scale-out NAS targeting data-heavy IP-based media workflows. Built off Quantum’s StorNext shared storage and data management platform, the multi-protocol, multi-client Xcellis Scale-out NAS system combines media and metadata management with high performance and scalability. Users can configure an Xcellis solution with both scale-out SAN and NAS to provide maximum flexibility.

“Media professionals have been looking for a solution that combines the performance and simplified scalability of a SAN with the cost efficiency and ease of use of NAS,” says Quantum’s Keith Lissak. “Quantum’s new Xcellis Scale-out NAS platform bridges that gap. By affordably delivering high performance, petabyte-level scalability and advanced capabilities such as integrated AI, Xcellis Scale-out NAS is [a great] solution for migrating to all-IP environments.”

Specific benefits of Xcellis Scale-out NAS include:
• Increased Productivity in All-IP Environments: It features a converged architecture that saves space and power, continuous scalability for simplified scaling of performance and capacity, and unified access to content.
• Cost-Effective Scaling of Performance and Capacity: One appliance provides 12 GB/sec per client. An Xcellis cluster can scale performance and capacity together or independently to reach hundreds of petabytes in capacity and more than a terabyte per second in performance. When deployed as part of a multitier StorNext infrastructure ― which can include object, tape and cloud storage ― Xcellis Scale-out NAS can cost as little as 1/10 that of an enterprise-only NAS solution with the same capacity.
• Lifecycle, Location and Cost Management: It's built on Quantum's StorNext software, which provides automatic tiering between flash, disk, tape, object storage and public cloud. Copies can be created for content distribution, collaboration, data protection and disaster recovery.
• Integrated Artificial Intelligence: Xcellis can integrate artificial intelligence (AI) capabilities to enable users to extract more value for their assets through the automated creation of metadata. The system can actively interrogate data across multiple axes to uncover events, objects, faces, words and sentiments, automatically generating new, custom metadata that unlocks additional possibilities for the use of stored assets.

Xcellis Scale-out NAS will be generally available this month with entry configurations and those leveraging tiering starting at under $100 per terabyte (raw).

Cloudian HyperFile for object-storage-based NAS

Newly introduced Cloudian HyperFile is an integrated NAS controller that provides SMB/NFS file services from on-premises Cloudian HyperStore object storage systems. Cloudian HyperFile targets enterprise network attached storage (NAS) customers, particularly those working in mission-critical, capacity-intensive applications that rely on file data. Media and entertainment is one of the main target markets for HyperFile.

Cloudian HyperFile incorporates snapshot, WORM, non-disruptive failover, scale-out performance, POSIX compliance and Active Directory integration. When combined with the limitless scalability of Cloudian HyperStore enterprise storage, organizations gain new on-premises options for managing all of their unstructured data.

Pricing for complete Cloudian HyperFile storage solutions, including on-premises disk-based storage, starts at less than half a cent per GB per month. To simplify implementation, Cloudian HyperFile incorporates a policy-based data migration engine that transfers files to Cloudian from existing NAS systems or from proprietary systems such as EMC Centera. IT managers select the attributes of the files to be migrated, and the data movement then proceeds as a background task with no service interruption.
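As a rough illustration of how attribute-based migration selection like this works (the attribute names and thresholds here are hypothetical, not Cloudian's actual interface), a policy might match files on size and age, and matching files would then be queued for background transfer:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class FileRecord:
    path: str
    size_bytes: int
    modified: datetime

def matches_policy(f: FileRecord, min_size: int, older_than_days: int) -> bool:
    """Select files whose attributes satisfy the migration policy."""
    age_cutoff = datetime.now() - timedelta(days=older_than_days)
    return f.size_bytes >= min_size and f.modified < age_cutoff

# Candidate files scanned from the existing NAS (illustrative data only).
candidates = [
    FileRecord("/nas/projects/show_a/cam01.mov", 50_000_000_000, datetime(2016, 3, 1)),
    FileRecord("/nas/projects/show_b/edit.prproj", 4_000_000, datetime.now()),
]

# Only the large, cold file is queued; the freshly modified project file stays put.
queue = [f for f in candidates if matches_policy(f, min_size=1_000_000, older_than_days=90)]
```

In a real engine the queue would feed a throttled background mover so production traffic sees no service interruption, which is the point of running migration as a background task.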

Cloudian HyperFile is available as an appliance or as a virtual machine. The HyperFile appliance is deployed as a node within a Cloudian cluster and includes active-passive nodes for rapid failover, fully redundant hardware for high-availability, and integrated caching for performance.

Cloudian is offering two software versions, HyperFile Basic and HyperFile Enterprise. A HyperFile Basic software license is included with Cloudian HyperStore at no additional charge and includes multi-protocol support, high-availability support and a management feature set. HyperFile Enterprise includes everything in HyperFile Basic, plus Snapshot, WORM, Geo-distribution, Global Namespace and File Versioning.

Pricing for complete on-premises, appliance-based solutions begins at half a cent per GB per month. Cloudian HyperFile is available now from Cloudian and from Cloudian reseller partners.

Storage Roundtable

Production, post, visual effects, VR… you can’t do it without a strong infrastructure. This infrastructure must include storage and products that work hand in hand with it.

This year we spoke to a sampling of those providing storage solutions — of all kinds — for media and entertainment, as well as a storage-agnostic company that helps get your large files from point A to point B safely and quickly.

We gathered questions from real-world users — things that they would ask of these product makers if they were sitting across from them.

Quantum’s Keith Lissak
What kind of storage do you offer, and who is the main user of that storage?
We offer a complete storage ecosystem based around our StorNext shared storage and data management solution, including Xcellis high-performance primary storage, Lattus object storage and Scalar archive and cloud. Our customers include broadcasters, production companies, post facilities, animation/VFX studios, NCAA and professional sports teams, ad agencies and Fortune 500 companies.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Xcellis features continuous scalability and can be sized to precisely fit current requirements and scaled to meet future demands simply by adding storage arrays. Capacity and performance can grow independently, and no additional accelerators or controllers are needed to reach petabyte scale.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
We don’t have exact numbers, but a growing number of our customers are using cloud storage. Our FlexTier cloud-access solution can be used with both public (AWS, Microsoft Azure and Google Cloud) and private (StorageGrid, CleverSafe, Scality) storage.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
We offer a range of StorNext 4K Reference Architecture configurations for handling demanding workflows, including 4K, 8K and VR. Our customers can choose systems with small or large form-factor HDDs, up to an all-flash SSD system able to handle 66 simultaneous 4K streams.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might users notice when connecting on these different platforms?
StorNext systems are OS-agnostic and can work with all Mac, Windows and Linux clients with no discernible difference.

Zerowait’s Rob Robinson
What kind of storage do you offer, and who is the main user of that storage?
Zerowait's SimplStor storage product line provides storage administrators with the scalable, flexible and reliable on-site storage needed for their growing storage requirements and workloads. SimplStor's platform can be configured to work in Linux or Windows environments, and we have several customers with multiple petabytes in their data centers. SimplStor systems have been used in VFX production for many years, and we also provide solutions for video creation and many other large data environments.

Additionally, Zerowait specializes in NetApp service, support and upgrades, and we have provided many companies in the media and VFX businesses with off-lease, transferable licensed NetApp storage solutions. Zerowait provides storage hardware, engineering and support for customers that need reliable, high-capacity storage. Our engineers support customers with private cloud storage as well as customers that offer public cloud storage on our storage platforms. We do not provide any public cloud services to our customers.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Our customers typically need on-site storage for processing speed and security. We have developed many techniques and monitoring solutions that we have incorporated into our service and hardware platforms. Our SimplStor and NetApp customers need storage infrastructures that scale into the multiple petabytes, and often require GigE, 10GigE or a NetApp FC connectivity solution. For customers that can't handle the bandwidth constraints of the public Internet to process their workloads, Zerowait has the engineering experience to help them get the most out of their on-premises storage.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Many of our customers use public cloud solutions for their non-proprietary data storage while using our SimplStor and NetApp hardware and support services for their proprietary, business-critical, high-speed and regulatory storage solutions where data security is required.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
SimplStor’s density and scalability make it perfect for use in HD and higher resolution environments. Our SimplStor platform is flexible and we can accommodate customers with special requests based on their unique workloads.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might users notice when connecting on these different platforms?
Zerowait’s NetApp and SimplStor platforms are compatible with both Linux (NFS) and Windows (CIFS) environments. OS X is supported in some applications. Every customer has a unique infrastructure and set of applications they are running. Customers will see differences in performance, but our flexibility allows us to customize a solution to maximize the throughput to meet workflow requirements.

Signiant’s Mike Nash
What kind of storage works with your solution, and who is the main user or users of that storage?
Signiant's Media Shuttle file transfer solution is storage agnostic, and for nearly 200,000 media pros worldwide it is the primary vehicle for sending and sharing large files. Media Shuttle doesn't provide storage, yet many users think of their data as being "in Media Shuttle." In reality, their files are located in whatever storage their IT department has designated. This might be the company's own on-premises storage, or it could be their AWS or Microsoft Azure cloud storage tenancy. Our users employ a Media Shuttle portal to send and share files; they don't have to think about where the files are stored.

How are you making sure your products are scalable so people can grow either their use or the bandwidth of their networks (or both)?
Media Shuttle is delivered as a cloud-native SaaS solution, so it can be up and running immediately for new customers, and it can scale up and down as demand changes. The servers that power the software are managed by our DevOps team and monitored 24×7 — and the infrastructure is auto-scaling and instantly available. Signiant does not charge for bandwidth, so customers can use our solutions with any size pipe at no additional cost. And while Media Shuttle can scale up to support the needs of the largest media companies, the SaaS delivery model also makes it accessible to even the smallest production and post facilities.

How many of the people buying your solutions are using them with cloud storage (i.e. AWS or Microsoft Azure)?
Cloud adoption within the M&E industry remains uneven, so it's no surprise that we see a mixed picture when we look at the storage choices our customers make. Since we first introduced the cloud storage option, we have seen constant month-over-month growth in the number of customers deploying portals with cloud storage. It's not yet at parity with on-prem storage, but the growth trends are clear.

On-premises content storage is very far from going away. We see many Media Shuttle customers taking a hybrid approach, with some portals using cloud storage and others using on-prem storage. It’s also interesting to note that when customers do choose cloud storage, we increasingly see them use both AWS and Azure.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
We can move any size of file. As media files continue to get bigger, the value of our solutions continues to rise. Legacy solutions such as FTP, which lack any file acceleration, will grind things to a halt if 4K, 8K, VR and other huge files need to be moved between locations. And consumer-oriented sharing services like Dropbox and Google Drive become non-starters with these types of files.

What platforms do your system connect to (e.g. Mac OS X, Windows, Linux), and what differences might end-users notice when connecting on these different platforms?
Media Shuttle is designed to work with a wide range of platforms. Users simply log in to portals using any web browser. In the background, a native application installed on the user's computer provides the acceleration functionality. This app works with Windows and Mac OS X systems.

On the IT side of things, no installed software is required for portals deployed with cloud storage. To connect Media Shuttle to on-premises storage, the IT team will run Signiant software on a computer in the customer’s network. This server-side software is available for Linux and Windows.

NetApp’s Jason Danielson
What kind of storage do you offer, and who is the main user of that storage?
NetApp has a wide portfolio of storage and data management products and services. We have four fundamentally different storage platforms — block, file, object and converged infrastructure. We use these platforms and our data fabric software to create a myriad of storage solutions that incorporate flash, disk and cloud storage.

1. NetApp E-Series block storage platform is used by leading shared file systems to create robust and high-bandwidth shared production storage systems. Boutique post houses, broadcast news operations and corporate video departments use these solutions for their production tier.
2. NetApp FAS network-attached file storage runs NetApp OnTap. This platform supports many thousands of applications for tens of thousands of customers in virtualized, private cloud and hybrid cloud environments. In media, this platform is designed for extreme random-access performance. It is used for rendering, transcoding, analytics, software development and the Internet-of-things pipelines.
3. NetApp StorageGrid Webscale object store manages content and data for back-up and active archive (or content repository) use cases. It scales to dozens of petabytes, billions of objects and currently 16 sites. Studios and national broadcast networks use this system and are currently moving content from tape robots and archive silos to a more accessible object tier.
4. NetApp SolidFire converged and hyper-converged platforms are used by cloud providers and enterprises running large private clouds that require quality of service across hundreds to thousands of applications. Global media enterprises appreciate the ease of scaling, the simplicity of QoS quota setting and the overall ease of maintenance for the largest-scale deployments.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
The four platforms mentioned above scale up and scale out to support well beyond the largest media operations in the world. So our challenge is not scalability for large environments but appropriate sizing for individual environments. We are careful to design storage and data management solutions that are appropriate to media operations’ individual needs.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Seven years ago, NetApp set out on a major initiative to build the data fabric. We are well on the path now with products designed specifically for hybrid cloud (a combination of private cloud and public cloud) workloads. While the uptake in media and entertainment is slower than in other industries, we now have hundreds of customers that use our storage in hybrid cloud workloads, from backup to burst compute.

We help customers that want to stay cloud-agnostic use AWS, Microsoft Azure, IBM Cloud and Google Cloud Platform flexibly, as project and pricing demands dictate. AWS, Microsoft Azure, IBM, Telstra and ASE, along with another hundred or so cloud storage providers, include NetApp storage and data management products in their service offerings.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
For higher-bandwidth (or higher-bitrate) video production, we'll generally architect a solution with our E-Series storage under either Quantum StorNext or PixitMedia PixStor. Since 2012, when the NetApp E5400 enabled the mainstream adoption of 4K workflows, the E-Series platform has seen three generations of upgrades, and the controllers are now more than 4x faster. The chassis has remained the same through these upgrades, so some customers have chosen to put the latest controllers into their existing chassis to improve bandwidth or to take advantage of faster network interconnects like 16Gb Fibre Channel.

Many post houses continue to run Fibre Channel to the workstation for these higher-bandwidth video formats, while others have moved to Ethernet (40 and 100 Gigabit). As flash (SSD) prices continue to drop, flash is starting to be used for video production in all-flash arrays or in hybrid configurations. We recently showed our new E570 all-flash array supporting NVM Express over Fabrics (NVMe-oF) technology, providing 21GB/s of bandwidth and 1 million IOPS with less than 100µs of latency. This technology is initially targeted at supercomputing use cases, and we will see whether it is adopted for UHD production workloads over the next couple of years.

What platforms do your system connect to (Mac OSx, Windows, Linux, etc.), and what differences might end-users notice when connecting on these different platforms?
NetApp maintains a compatibility matrix table that delineates our support of hundreds of client operating systems and networking devices. Specifically, we support Mac OS X, Windows and various Linux distributions. Bandwidth expectations differ between these three operating systems and Ethernet and Fibre Channel connectivity options, but rather than make a blanket statement about these, we prefer to talk with customers about their specific needs and legacy equipment considerations.

G-Technology’s Greg Crosby
What kind of storage do you offer, and who is the main user of that storage?
Western Digital’s G-Technology products provide high-performing and reliable storage solutions for end-to-end creative workflows, from capture and ingest to transfer and shuttle, all the way to editing and final production.

The G-Technology brand supports a wide range of users for both field and in-studio work, with solutions spanning portable handheld drives (often used to back up content on the go) all the way to in-studio drives offering capacities up to 144TB. We recognize that each creative has a unique workflow, and some embrace cloud-based products. We are proud to complement those cloud services, acting as a central location to store raw content or as a conduit feeding cloud features and capabilities.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Our line ranges from small portable and rugged drives to large, multi-bay RAID and NAS solutions, for all aspects of the media and entertainment industry. Integrating the latest interface technology, such as USB-C and Thunderbolt 3, our storage solutions can transfer files quickly.

We make it easy to take a ton of storage into the field. The G-Speed Shuttle XL drive is available in capacities up to 96TB, and an optional Pelican case, with handle, is available, making it easy to transport in the field and mitigating any concerns about running out of storage. We recently launched the G-Drive mobile SSD R-Series. This drive is built to withstand a three meter (nine foot) drop, and is able to endure accidental bumps or drops, given that it is a solid-state drive.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Many of our customers are using cloud-based solutions to complement their creative workflows. We find that most of our customers use our solutions as primary storage, or to easily transfer and shuttle their content, since the cloud is not an efficient way to move large amounts of data. We see cloud capabilities as a great way to share project files and low-resolution content, collaborate with others on projects, and distribute and share a variety of deliverables.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Today's camera technology enables capture not only at higher resolutions but also at higher frame rates with more dynamic imagery. We have solutions that can easily support multi-stream 4K, 8K and VR workflows or multi-layer photo and visual effects projects. G-Technology is well positioned to support these creative workflows as we integrate the latest technologies into our storage solutions. From small portable and rugged SSD drives to high-capacity, fast multi-drive RAID solutions with the latest Thunderbolt 3 and USB-C interface technology, we are ready to tackle a variety of creative endeavors.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.), and what differences might users notice when connecting on these different platforms?
Our complete portfolio of external storage solutions works for Mac and PC users alike. With native support for Apple Time Machine, these solutions are formatted for Mac OS out of the box, but can be easily reformatted for Windows users. G-Technology also has a number of strategic partnerships with technology vendors, including Apple, Atomos, Red Camera, Adobe and Intel.

Panasas’ David Sallak
What kind of storage do you offer, and who is the main user of that storage?
Panasas ActiveStor is an enterprise-class, easy-to-deploy parallel scale-out NAS (network-attached storage) solution that combines flash and SATA storage with a clustered file system, accessed via a high-availability client protocol driver with support for standard protocols.

The ActiveStor storage cluster consists of the ActiveStor Director (ASD-100) control engine, the ActiveStor Hybrid (ASH-100) storage enclosure, the PanFS parallel file system, and the DirectFlow parallel data access protocol for Linux and Mac OS.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
ActiveStor is engineered to scale easily. There are no specific architectural limits for how widely the ActiveStor system can scale out, and adding more workloads and more users is accomplished without system downtime. The latest release of ActiveStor can grow either storage or bandwidth needs in an environment that lets metadata responsiveness, data performance and data capacity scale independently.

For example, we quote capacity and performance numbers for a Panasas storage environment containing 200 ActiveStor Hybrid 100 storage node enclosures with five ActiveStor Director 100 units for file-system metadata management. This configuration results in a single 57PB namespace delivering 360GB/s of aggregate bandwidth and in excess of 2.6M IOPS.
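Simple arithmetic on those quoted aggregates gives the per-enclosure figures they imply (derived numbers, not vendor specifications):

```python
# Aggregate figures quoted for the 200-enclosure configuration.
enclosures = 200
total_capacity_pb = 57        # single namespace, in petabytes
total_bandwidth_gbps = 360    # aggregate bandwidth, in GB/s

# Per-enclosure capacity (in TB) and bandwidth (in GB/s).
capacity_per_enclosure_tb = total_capacity_pb * 1000 / enclosures
bandwidth_per_enclosure = total_bandwidth_gbps / enclosures

print(capacity_per_enclosure_tb)  # 285.0 TB per enclosure
print(bandwidth_per_enclosure)    # 1.8 GB/s per enclosure
```

In other words, each enclosure contributes roughly 285TB of capacity and 1.8GB/s of bandwidth to the aggregate, which is why capacity and bandwidth scale together as enclosures are added.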

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Panasas customers deploy workflows and workloads in ways that are well-suited to consistent on-site performance or availability requirements, while experimenting with remote infrastructure components such as storage and compute provided by cloud vendors. The majority of Panasas customers continue to explore the right ways to leverage cloud-based products in a cost-managed way that avoids surprises.

This means that workflow requirements for file-based storage continue to take precedence when processing real-time video assets, while customers also expect that storage vendors will support the ability to use Panasas in cloud environments where the benefits of a parallel clustered data architecture can exploit the agility of underlying cloud infrastructure without impacting expectations for availability and consistency of performance.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Panasas ActiveStor is engineered to deliver superior application responsiveness via our DirectFlow parallel protocol for applications working in compressed UHD, 4K and higher-resolution media formats. Compared to traditional file-based protocols such as NFS and SMB, DirectFlow provides better granular I/O feedback to applications, resulting in client application performance that aligns well with the compressed UHD, 4K and other extreme-resolution formats.

For uncompressed data, Panasas ActiveStor is designed to support large-scale rendering of these data formats via distributed compute grids such as render farms. The parallel DirectFlow protocol results in better utilization of CPU resources in render nodes when processing frame-based UHD, 4K and higher-resolution formats, resulting in less wall clock time to produce these formats.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might users notice when connecting on these different platforms?
Panasas ActiveStor supports macOS and Linux with our higher-performance DirectFlow parallel client software. We support all client platforms via NFS or SMB as well.

Users would notice that when connecting to Panasas ActiveStor via DirectFlow, the I/O experience is as if users were working with local media files on internal drives, compared to working with shared storage where normal protocol access may result in the slight delay associated with open network protocols.

Facilis’ Jim McKenna
What kind of storage do you offer, and who is the main user of that storage?
We have always focused on shared storage for the facility. It's high-speed attached storage, good for anyone who's cutting HD or 4K. Our workflow and management features really set us apart from basic network storage. We connect to the cloud through software that uses all the latest APIs.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Most of our large customers have been with us for several years, and many started pretty small. Our method of scalability is flexible in that you can decide to simply add expansion drives, add another server, or add a head unit that aggregates multiple servers. Each method increases bandwidth as well as capacity.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Many customers use the cloud, either through a corporate gateway or by uploading directly from the server. Many cloud service providers have ways of accessing the file locations from the facility desktops, so they can treat it like another hard drive. Alternatively, we can schedule, index and manage the uploads and downloads through our software.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Facilis is known for speed. We still support Fibre Channel when everyone else, it seems, has moved completely to Ethernet, because Fibre Channel provides better speeds for intense 4K-and-beyond workflows. We can handle UHD playback over 10Gb Ethernet, and up to 4K full-frame DPX at 60p through Fibre Channel on a single server enclosure.

What platforms do your systems connect to (e.g. Mac OS X, Windows, Linux, etc.)? And what differences might users notice when connecting on these different platforms?
We have a custom multi-platform shared file system, not NAS (network attached storage). Even though NAS may be compatible with multiple platforms by using multiple sharing methods, permissions and optimization across platforms is not easily manageable. With Facilis, the same volume, shared one way with one set of permissions, looks and acts native to every OS and even shows up as a local hard disk on the desktop. You can’t get any more cross-platform compatible than that.

SwiftStack’s Mario Blandini
What kind of storage do you offer, and who is the main user of that storage?
We offer hybrid cloud storage for media. SwiftStack is 100% software and runs on-premises atop the server hardware you already buy, using local capacity and/or capacity in public cloud buckets. Data is stored in cloud-native format, so there is no need for gateways, which do not scale. Our technology is used by broadcasters for active archive and OTT distribution, by digital animators for distributed transcoding and by mobile gaming/eSports companies for massive concurrency, among others.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
The SwiftStack software architecture separates access, storage and management, where each function can be run together or on separate hardware. Unlike storage hardware with the mix of bandwidth and capacity being fixed to the ports and drives within, SwiftStack makes it easy to scale the access tier for bandwidth independently from capacity in the storage tier by simply adding server nodes on the fly. On the storage side, capacity in public cloud buckets scales and is managed in the same single namespace.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Objectively, the use of capacity in public cloud providers like Amazon Web Services and Google Cloud Platform is still in its "early days" for many users. Customers in media, however, are on the leading edge of adoption, not only extending their on-premises environment to a public cloud in hybrid configurations, but also pursuing a second-source strategy across two public clouds. Two years ago adoption was less than 10%; today it is approaching 40%; and by 2020 it looks like the 80/20 rule will apply. Users actually do not care much how their data is stored, as long as their experience is as good as or better than it was before, and public clouds are great at delivering content to users.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Arguably, larger assets produced by a growing number of cameras and computers have driven the need to store those assets differently than in the past. A petabyte is the new terabyte in media storage. Banks have many IT admins, whereas media shops have few. SwiftStack offers the same consumption experience as public cloud, which is very different from on-premises solutions of the past. Licensing is based on the amount of data managed, not the total capacity deployed, so you pay as you grow. Whether you store four replicas or use erasure coding with 1.5X overhead, the price is the same.
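The overhead difference behind that pricing point is easy to quantify. A sketch of the raw-capacity math, assuming a hypothetical 10-data/5-parity erasure-coding layout for the 1.5X figure:

```python
def raw_capacity_needed(usable_tb: float, overhead: float) -> float:
    """Raw storage required to hold `usable_tb` of data at a given overhead factor."""
    return usable_tb * overhead

usable = 1000.0  # 1PB of usable data, expressed in TB

# Four full replicas mean 4x raw capacity.
replicas_4x = raw_capacity_needed(usable, 4.0)

# A 10-data/5-parity erasure-coding scheme stores 15 shards for every 10 of data: 1.5x.
erasure_10_5 = raw_capacity_needed(usable, (10 + 5) / 10)

print(replicas_4x)   # 4000.0 TB of raw disk
print(erasure_10_5)  # 1500.0 TB of raw disk
```

Under capacity-managed licensing both configurations cost the same to license (1PB managed), even though the replica layout consumes more than 2.5x the raw disk.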

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might end-users notice when connecting on these different platforms?
The great thing about cloud storage, whether it is on-premises or residing with your favorite IaaS providers like AWS and Google, is that the interface is HTTP. In other words, every smartphone, tablet, Chromebook and computer has an identical user experience. For classic applications on systems that do not support AWS S3 as an interface, users see the storage as a mount point or folder in their application, via either NFS or SMB. The best part is that it is a single namespace where data can come in as file, get transformed via object and be read either way, so the user experience does not need to change even though the data is stored in the most modern way.

Dell EMC’s Tom Burns
What kind of storage do you offer, and who is the main user of that storage?
At Dell EMC, we created two storage platforms for the media and entertainment industry: the Isilon scale-out NAS All-Flash, hybrid and archive platform to consolidate and simplify file-based workflows and the Dell EMC Elastic Cloud Storage (ECS), a scalable enterprise-grade private cloud solution that provides extremely high levels of storage efficiency, resiliency and simplicity designed for both traditional and next-generation workloads.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
In the media industry, change is inevitable. That’s why every Isilon system is built to rapidly and simply adapt by allowing the storage system to scale performance and capacity together, or independently, as more space or processing power is required. This allows you to scale your storage easily as your business needs dictate.

How many of the people buying your solutions are using them with another cloud-based product (e.g. Microsoft Azure)?
Over the past five years, Dell EMC media and entertainment customers have added more than 1.5 exabytes of Isilon and ECS data storage to simplify and accelerate their workflows.

Isilon’s cloud tiering software, CloudPools, provides policy-based automated tiering that lets you seamlessly integrate with cloud solutions as an additional storage tier for the Isilon cluster at your data center. This allows you to address rapid data growth and optimize data center storage resources by using the cloud as a highly economical storage tier with massive storage capacity.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
As technologies that enhance the viewing experience continue to emerge, including higher frame rates and resolutions, uncompressed 4K, UHD, high dynamic range (HDR) and wide color gamut (WCG), underlying storage infrastructures must effectively scale to keep up with expanding performance requirements.

Dell EMC recently launched the sixth generation of the Isilon platform, including our all-flash (F800), which brings the simplicity and scalability of NAS to uncompressed 4K workflows — something that up until now required expensive silos of storage or complex and inefficient push-pull workflows.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc)? And what differences might end-users notice when connecting on these different platforms?
With Dell EMC Isilon, you can streamline your storage infrastructure by consolidating file-based workflows and media assets, eliminating silos of storage. Isilon scale-out NAS includes integrated support for a wide range of industry-standard protocols, allowing the major operating systems to connect using the most suitable protocol for optimum performance and feature support, including IPv4 and IPv6, NFS, SMB, HTTP, FTP, OpenStack Swift-based object access for your cloud initiatives, and native Hadoop Distributed File System (HDFS).

The ECS software-defined cloud storage platform provides the ability to store, access, and manipulate unstructured data and is compatible with existing Amazon S3, OpenStack Swift APIs, EMC CAS and EMC Atmos APIs.

EditShare’s Lee Griffin
What kind of storage do you offer, and who is the main user of that storage?
Our storage platforms are tailored for collaborative media workflows and post production. They combine the advanced EFS (that's EditShare File System, in short) distributed file system with intelligent load balancing in a scalable, fault-tolerant architecture that offers cost-effective connectivity. Within our shared storage platforms, we take a unique approach to current cloud workflows: because the security and reliability of cloud-based technology still prohibit full migration to cloud storage for production, EditShare AirFlow uses EFS on-premises storage to provide secure access to media from anywhere in the world over a basic Internet connection. Our main users are creative post houses, broadcasters and large corporate companies.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Recently, we upgraded all our platforms to EFS and introduced two new single-node platforms, the EFS 200 and 300. These single-node platforms allow users to grow their storage while keeping a single namespace, which eliminates the management of multiple storage volumes. It also enables them to better plan for the future: when their facility requires more storage and bandwidth, they can simply add another node.

How many of the people buying your solutions are using them with another cloud-based product (e.g. Microsoft Azure)?
No production is in one location, so the ability to move media securely and back it up remains a high priority for our clients. From our Flow media asset management and via our automation module, we offer clients the option to back up their valuable content to services like Amazon S3.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
We have many clients working with UHD content who supply programming to broadcasters, film distributors and online subscription media providers. Our solutions are designed to work effortlessly with high-data-rate content, enabling bandwidth to expand with the addition of more EFS nodes to the intelligent storage pool. So our system is ready and working now for 4K content, and it is prepared for even higher data rates in the future.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might end-users notice when connecting on these different platforms?
EditShare supplies native EFS client drivers for all three platforms, allowing clients to pick and choose which platform they want to work on. Whether it is Autodesk Flame for VFX, Resolve for grading or our own Lightworks for editing on Linux, we don't mind. In fact, EFS offers a considerable bandwidth improvement when using our EFS drivers over the existing AFP and SMB protocols. Improved bandwidth and speed on all three platforms makes for happy clients!

And there are no differences when clients connect. We work with all three platforms the same way, offering a unified workflow to all creative machines, whether on Mac, Windows or Linux.

Scale Logic’s Bob Herzan
What kind of storage do you offer, and who is the main user of that storage?
Scale Logic has developed an ecosystem (Genesis Platform) that includes servers, networking, metadata controllers, single and dual-controller RAID products and purpose-built appliances.

We have three different file systems that allow us to use the storage mentioned above to build SAN, NAS, scale-out NAS, object storage and gateways for private and public cloud. We use a combination of disk, tape and Flash technology to build our tiers of storage that allows us to manage media content efficiently with the ability to scale seamlessly as our customers’ requirements change over time.

We work with customers that range from small to enterprise and everything in between. We have a global customer base that includes broadcasters, post production, VFX, corporate, sports and house of worship.

In addition to the Genesis Platform we have also certified three other tier 1 storage vendors to work under our HyperMDC SAN and scale-out NAS metadata controller (HPE, HDS and NetApp). These partnerships complete our ability to consult with any type of customer looking to deploy a media-centric workflow.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Great question, and it's actually built into the name and culture of our company. When we bring a solution to market, it has to scale seamlessly, and it needs to be logical given the customer's environment. We focus on being able to start small but scale any system into a high-availability solution with limited to no downtime. Our solutions can scale independently, whether clients are looking to add capacity, performance or redundancy.

For example, a customer looking to move to 4K uncompressed workflows could add a Genesis Unlimited as a new workspace focused on the 4K workflow, keeping all existing infrastructure in place alongside it, avoiding major adjustments to their facility’s workflow. As more and more projects move to 4K, the Unlimited can scale capacity, performance and the needed HA requirements with zero downtime.

Customers can then start to migrate their content from their legacy storage over to Unlimited, and then repurpose their legacy storage onto the HyperFS file system as second-tier storage. Finally, once we have moved the legacy storage onto the new file system, we are also more than happy to bring the legacy storage and networking hardware under our global support agreements.

How many of the people buying your solutions are using them with another cloud-based product (e.g. Microsoft Azure)?
Cloud adoption continues to ramp up in our industry, and we have many customers using cloud solutions for various aspects of their workflow. As it pertains to content creation, manipulation and long-term archive, however, we have not seen much adoption within our customer base. The economics just do not support the level of performance or capacity our clients demand.

However, private cloud or cloud-like configurations are becoming more mainstream for our larger customers. Working with on-premises storage while having DR (disaster recovery) replication offsite continues to be the best solution at this point for most of our clients.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Our solutions are built not only for the current resolutions but completely scalable to go beyond them. Many of our HD customers are now putting in UHD and 4K workspaces on the same equipment we installed three years ago. In addition to 4K we have been working with several companies in Asia that have been using our HyperFS file system and Genesis HyperMDC to build 8K workflows for the Olympics.

We have a number of solutions designed to meet our customers' requirements. Some are built with spinning disk, others with all-flash, and still others take a hybrid approach that seamlessly combines the technologies.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might end-users notice when connecting on these different platforms?
All of our solutions are designed to support Windows, Linux and Mac OS. However, how they support the various operating systems depends on the protocol (block or file) we are designing for the facility. If we are building a SAN that is strictly going to use block-level access (8/16/32Gbps Fibre Channel or 1/10/25/40/100Gbps iSCSI), we would use our HyperFS file system and universal client drivers across all operating systems. If our clients are also looking for network protocols in addition to the block-level clients, we can support SMB and NFS while still allowing access to the same folders and files over both block and file at the same time.

For customers that are not looking for block-level access, we would focus our design work around our Genesis NX or ZX product line. Both of these solutions are based on a NAS operating system and simply present themselves with the appropriate protocol over 1/10/25/40 or 100Gb. The Genesis ZX solution is actually a software-defined clustered NAS with enterprise feature sets such as unlimited snapshots, metro clustering and thin provisioning, and it will scale beyond 5 petabytes.

Sonnet Technologies‘ Greg LaPorte
What kind of storage do you offer, and who is the main user of that storage?
We offer a portable, bus-powered Thunderbolt 3 SSD storage device that fits in your hand. Primary users of this product include video editors and DITs who need a “scratch drive” fast enough to support editing 4K video at 60fps while on location or traveling.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
The Fusion Thunderbolt 3 PCIe Flash Drive is currently available with 1TB capacity. With data transfer of up to 2,600 MB/s supported, most users will not run out of bandwidth when using this device.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might end-users notice when connecting on these different platforms?
Computers with Thunderbolt 3 ports running either macOS Sierra or High Sierra, or Windows 10 are supported. The drive may be formatted to suit the user’s needs, with either an OS-specific format such as HFS+, or cross-platform format such as exFAT.

Post Supervisor: Planning an approach to storage solutions

By Lance Holte

Like virtually everything in post production, storage is an ever-changing technology. Camera resolutions and media bitrates are constantly growing, requiring higher storage bitrates and capacities. Productions are increasingly becoming more mobile, demanding storage solutions that can live in an equally mobile environment. Yesterday’s 4K cameras are being replaced by 8K cameras, and the trend does not look to be slowing down.

Yet, at the same time, productions still vary greatly in size, budget, workflow and schedule, which has necessitated more storage options for post production every year. As a post production supervisor, when deciding on a storage solution for a project or set of projects, I always try to have answers to a number of workflow questions.

Let’s start at the beginning with production questions.

What type of video compression is production planning on recording?
Obviously, more storage will be required if the project is recording to Arriraw rather than H.264.

What camera resolution and frame rate?
Once you know the bitrate from the video compression specs, you can calculate the data size on a per-hour basis. If you don’t feel like sitting down with a calculator or spreadsheet for a few minutes, there are numerous online data size calculators, but I particularly like AJA’s DataCalc application, which has tons of presets for cameras and video and audio formats.

How many cameras and how many hours per day is each camera likely to be recording?
Data size per hour, multiplied by hours per day, multiplied by shoot days, multiplied by number of cameras gives a total estimate of the storage required for the shoot. I usually add 10-20% to this estimate to be safe.
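As a sketch of that arithmetic, here is a small Python helper. The bitrate in the example is an assumption for illustration; pull real numbers from your camera's specs or a calculator like AJA's DataCalc.

```python
# Rough shoot-storage estimator following the formula above:
# data per hour x hours per day x shoot days x cameras, plus a safety margin.

def shoot_storage_tb(bitrate_mbps, hours_per_day, shoot_days, cameras, safety=0.15):
    """Return estimated storage in terabytes, padded by a safety margin."""
    gb_per_hour = bitrate_mbps / 8 * 3600 / 1000   # Mb/s -> GB per recorded hour
    total_gb = gb_per_hour * hours_per_day * shoot_days * cameras
    return total_gb * (1 + safety) / 1000          # GB -> TB

# Example: 4 cameras at an assumed ~734 Mb/s (ProRes 422 HQ UHD territory),
# 5 hours/day over a 20-day shoot, with a 15% buffer.
print(round(shoot_storage_tb(734, 5, 20, 4), 1), "TB")
```

Padding the result by 10-20% also leaves room for sound rolls, reference files and the inevitable extra takes.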

Let’s move on to post questions…

Is it an online/offline workflow?
The simplicity of editing online is awesome, and I’m holding out for the day when all projects can be edited with online media. In the meantime, most larger projects require online/offline editorial, so keep in mind the extra storage space for offline editorial proxies. The upside is that raw camera files can be stored on slower, more affordable (even archival) storage through editorial until the online process begins.

On numerous shows I’ve elected to keep the raw camera files on portable external RAID arrays (cloned and stored in different locations for safety) until picture lock. G-Tech, LaCie, OWC and Western Digital all make 48+ TB external arrays on which I’ve stored raw media during editorial. When you start the online process, copy the necessary media over to your faster online or grading/finishing storage, and finish the project with only the raw files that are used in the locked cut.

How many editorial staff need to be working on the project simultaneously?
On smaller projects that only require an editorial staff of two or three people who need to access the media at the same time, you may be able to get away with the editors and assistants network sharing a storage array, and working in different projects. I’ve done numerous smaller projects in which a couple editors connected to an external RAID (I’ve had great success with Proavio and QNAP arrays), which is plugged into one workstation and shares over the network. Of course, the network must have enough bandwidth for both machines to play back the media from the storage array, but that’s the case for any shared storage system.

For larger projects that employ five, 10 or more editors and staff, storage that is designed for team sharing is almost a certain requirement. Avid has opened up integrated shared storage to outside storage vendors over the past few years, but Avid’s Nexis solution still remains an excellent option. Aside from providing a solid solution for Media Composer and Symphony, Nexis can also be used with basically any other NLE, ranging from Adobe Premiere Pro to Blackmagic DaVinci Resolve to Final Cut Pro and others. The project-sharing abilities within the NLEs vary depending on the application, but the clear trend is moving toward multiple editors and post production personnel working simultaneously in the same project.

Does editorial need to be mobile?
Increasingly, editorial tends to begin near the start of physical production, which can mean editors need to be on or near set. This is a pretty simple question to answer, but it is worth keeping in mind so that a shoot doesn’t end up without enough storage in a place where additional storage isn’t easily available, or where the power requirements can’t be met. It’s also a good moment to plan simple things like the number of shuttle or transfer drives that may be needed to ship media back to home base.

Does the project need to be compartmentalized?
For example, should proxy media be on a separate volume or workspace from the raw media/VFX/music/etc.? Compartmentalization is good. It’s safe. Accidents happen, and it’s a pain if someone accidentally deletes everything on the VFX volume or workspace on the editorial storage array. But it can be catastrophic if everything is stored in the same place and they delete all the VFX, graphics, audio, proxy media, raw media, projects and exports.

Split up the project onto separate volumes, and only give write access to the necessary parties. The bigger the project and team, the bigger the risk for accidents, so err on the side of safety when planning storage organization.

Finally, we move to finishing, delivery and archive questions…

Will the project color and mix in-house? What are the delivery requirements? Resolution? Delivery format? Media and other files?
Color grading and finishing often require the fastest storage speeds of the whole pipeline. By this point, the project should be conformed back to the camera media, and the colorist is often working with high bitrate, high-resolution raw media or DPX sequences, EXRs or other heavy file types. (Of course, there are as many workflows as there are projects, many of which can be very light, but let’s consider the trend toward 4K-plus and the fact that raw media generally isn’t getting lighter.) On the bright side, while grading and finishing arrays need to be fast, they don’t need to be huge, since they won’t house all the raw media or editorial media — only what is used in the final cut.

I’m a fan of using an attached SAS or Thunderbolt array, which is capable of providing high bandwidth to one or two workstations. Anything over 20TB shouldn’t be necessary, since the media will be removed and archived as soon as the project is complete, ready for the next project. Arrays like the Areca ARC-5028T2 or Proavio EB800MS deliver read speeds of 2,000+ MB/s, which can play back 4K DPXs in real time.
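To see why those speeds matter, here is a back-of-envelope calculation, assuming a 10-bit RGB DPX packs each pixel into 32 bits so a 4K frame is roughly 4096 x 2160 x 4 bytes (header overhead ignored; figures are approximations for illustration):

```python
# Estimate the sustained bandwidth one uncompressed 4K DPX stream consumes.

def dpx_stream_mb_per_sec(width=4096, height=2160, bytes_per_pixel=4, fps=24):
    frame_mb = width * height * bytes_per_pixel / 1e6  # one frame in megabytes
    return frame_mb * fps

rate = dpx_stream_mb_per_sec()
print(f"~{rate:.0f} MB/s per 4K 10-bit DPX stream at 24fps")
print(f"A 2,000 MB/s array sustains about {int(2000 // rate)} such streams")
```

At roughly 850 MB/s per stream, even a fast array supports only a couple of simultaneous uncompressed 4K playbacks, which is why finishing storage is sized for speed rather than capacity.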

How should the project be archived?
There are a few follow-up questions to this one, like: Will the project need to be accessed with short notice in the future? LTO is a great long-term archival solution, but pulling large amounts of media off LTO tape isn’t exactly quick. For projects that I suspect will be reopened in the near future, I try to keep an external hard drive or RAID with the necessary media onsite. Sometimes it isn’t possible to keep all of the raw media onsite and quickly accessible, so keeping the editorial media and projects onsite is a good compromise. Offsite, in a controlled, safe, secure location, LTO-6 tapes house a copy of every file used on the project.

Post production technology changes with the blink of an eye, and storage is no exception. Once these questions have been answered, if you are spending any serious amount of money, get an opinion from someone who is intimately familiar with the cutting edge of post production storage. Emphasis on the “post production” part of that sentence, because video I/O is not the same as, say, a bank with the same storage size requirements. The more money devoted to your storage solutions, the more opinions you should seek. Not all storage is created equal, so be 100% positive that the storage you select is optimal for the project’s particular workflow and technical requirements.

There is more than one good storage solution for any workflow, but the first step is always answering as many storage- and workflow-related questions as possible to start taking steps down the right path. Storage decisions are perhaps one of the most complex technical parts of the post process, but like the rest of filmmaking, an exhaustive, thoughtful, and collaborative approach will almost always point in the right direction.

Main Image: G-Tech, QNAP, Avid and Western Digital all make a variety of storage solutions for large and small-scale post production workflows.


Lance Holte is an LA-based post production supervisor and producer. He has spoken and taught at such events as NAB, SMPTE, SIGGRAPH and Createasphere. You can email him at lance@lanceholte.com.

What you should ask when searching for storage

Looking to add storage to your post studio? Who isn’t these days? Jonathan Abrams, chief technical officer at New York City’s Nutmeg Creative, was kind enough to put together a list that can help all in their quest for the storage solution that best fits their needs.

Here are some questions that customers should ask a storage manufacturer.

What is your stream count at RAID-6?
The storage manufacturer should have stream count specifications available for both Avid DNx and Apple ProRes at varying frame rates and raster sizes. Use this information to help determine which product best fits your environment.
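As a rough sanity check on those specifications, you can divide usable array bandwidth by each codec's data rate. The bitrates below are approximate published figures for 1080p at 29.97fps and are assumptions for illustration; check the codec documentation for your exact raster and frame rate.

```python
# Naive stream-count ceiling: usable bandwidth / per-stream data rate.
# Bitrates (Mb/s) are approximate figures for 1080p29.97, for illustration only.

CODEC_MB_PER_SEC = {
    "Avid DNxHD 145": 145 / 8,      # ~18.1 MB/s
    "Apple ProRes 422": 147 / 8,    # ~18.4 MB/s
    "Apple ProRes 422 HQ": 220 / 8, # ~27.5 MB/s
}

def estimated_streams(array_mb_per_sec, codec):
    """Upper bound on simultaneous streams for a given usable bandwidth."""
    return int(array_mb_per_sec // CODEC_MB_PER_SEC[codec])

for codec in CODEC_MB_PER_SEC:
    print(codec, "->", estimated_streams(1000, codec), "streams at 1,000 MB/s")
```

Measured stream counts from the manufacturer will be lower than this ceiling, since RAID-6 parity, seek patterns and protocol overhead all eat into raw bandwidth, which is exactly why the question is worth asking.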

How do I connect my clients to your storage?  
Gigabit Ethernet (copper)? 10 Gigabit Ethernet (50-micron fiber)? Fibre Channel (FC)? These are listed in ascending order of cost and performance. Combined with the answer to the question above, this narrows down which product a storage manufacturer has that fits your environment.

Can I use whichever network switch I want to and know that it will work, or must I be using a particular model in order for you to be able to support my configuration and guarantee a baseline of performance?
If you are using a Mac with Thunderbolt ports, then you will need a network adapter, such as a Promise SANLink2 10G SFP+ for your shared storage connection. Also ask, “Can I use any Thunderbolt network adapter, or must I be using a particular model in order for you to be able to support my configuration and guarantee a baseline of performance?”

If you are an Avid Media Composer user, ask, “Does your storage present itself to Media Composer as if it was Avid shared storage?”
This will allow the first person who opens a Media Composer project to obtain a lock on a bin.  Other clients can open the same project, though they will not have write access to said bin.

What is covered by support? 
Make certain that both the hardware (chassis and everything inside it) and the software (client and server) are covered by support. This includes major version upgrades to the server and client software (e.g., v11 to v12). You do not want your storage manufacturer to announce a new software version at NAB 2018 and then find out that it’s not covered by your support contract and that the upgrade is a separate cost.

For how many years will you be able to replace all of the hardware parts?
Will the storage manufacturer replace any part within three years of your purchase, provided that you have an active support contract? Will they charge you less for support if they cannot replace failed components during that year’s support contract? A variation of this question is, “What is your business model?” If the storage manufacturer will only guarantee availability of all components for three years, then their business model is based upon you buying another server from them in three years. Are you prepared to be locked into that upgrade cycle?

Are you using custom components that I cannot source elsewhere?
If you continue using your storage beyond the date when the manufacturer can replace a failed part, is the failed part a custom part that was only sold to the manufacturer of your storage? Is the failed part one that you may be able to find used or refurbished and swap out yourself?

What is the penalty for not renewing support? Can I purchase support incidents on an as-needed basis?
How many as-needed incident purchases will it take before you realize, “We should have renewed support instead”? If you cannot purchase support on an as-needed basis, then you need to ask what the penalty for reinstating support is. This information helps you determine your risk tolerance and whether there is a date in the future when you can say, “We did not incur a financial loss with that risk.”

Main Image: Nutmeg Creative’s Jonathan Abrams with the company’s 80TB of EditShare storage and two spare drives. Photo Credit: Larry Closs

Storage in the Studio: Post Houses

By Karen Maierhofer

There are many pieces that go into post production, from conform, color, dubbing and editing to dailies and more. Depending on the project, a post house can be charged with one or two pieces of this complex puzzle, or even the entire workload. No matter the job, the tasks must be done on time and on budget. Unforeseen downtime is unacceptable.

That is why when it comes to choosing a storage solution, post houses are very particular. They need a setup that is secure, reliable and can scale. For them, one size simply does not fit all. They all want a solution that fits their particular needs and the needs of their clients.

Here, we look at three post facilities of various sizes and range of services, and the storage solutions that are a good fit for their business.

Liam Ford

Sim International
The New York City location of Sim has been in existence for over 20 years, operating under the former name Post Factory NY until about a month ago, when Sim rebranded it and seven other founding post companies as Sim International. Whether called by its new moniker or its previous one, the facility has grown to become a premier space in the city for offline editorial teams, as well as one of the top high-end finishing studios in town; the list of feature films and episodic shows that have been cut and finished at Sim is quite lengthy. And starting this past year, Sim has launched a boutique commercial finishing division.

According to senior VP of post engineering Liam Ford, the vast majority of the projects at the NYC facility are 4K, much of which is episodic work. “So, the need is for very high-capacity, very high-bandwidth storage,” Ford says. And because the studio is located in New York, where space is limited, that same storage must be as dense as possible.

For its finishing work, Sim New York is using a Quantum Xcellis SAN, a StorNext-based appliance system that can be specifically tuned for 4K media workflow. The system, which was installed approximately two years ago, runs on a 16Gb Fibre Channel network. Almost half a petabyte of storage fits into just a dozen rack units. Meanwhile, an Avid Nexis handles the facility’s offline work.

The Sim SAN serves as the primary playback system for all the editing rooms. While there are SSDs in some of the workstations for caching purposes, the scheduling demands of clients do not leave much time for staging material back and forth between volumes, according to Ford. So, everything gets loaded back to the SAN, and everything is played back from the SAN.

As Ford explains, content comes into the studio from a variety of sources, whether drives, tapes or Internet transfers, and all of that is loaded directly onto the SAN. An online editor then soft-imports all that material into his or her conform application and creates an edited, high-resolution sequence that is rendered back to the SAN. Once at the SAN, that edited sequence is available for a supervised playback session with the in-house colorists, finishing VFX artists and so forth.

“The point is, our SAN is the central hub through which all content at all stages of the finishing process flows,” Ford adds.

Before installing the Xcellis system, the facility had been using local workstation storage only, but the huge growth in the finishing division prompted the transition to the shared SAN file system. “There’s no way we could do the amount of work we now have, and with the flexibility our clients demand, using a local storage workflow,” says Ford.

When it became necessary for the change, there were not a lot of options that met Sim’s demands for high bandwidth and reliable streaming, Ford points out, as Quantum’s StorNext and SGI’s CXFS were the main shared file systems for the M&E space. Sim decided to go with Quantum because of the work the vendor has done in recent years toward improving the M&E experience as well as the ease of installing the new system.

Nevertheless, with the advent of 25Gb and 100Gb Ethernet, Sim has been closely monitoring the high-performance NAS space. “There are a couple of really good options out there right now, and I can see us seriously looking at those products in the near future as, at the very least, an augmentation to our existing Fibre Channel-based storage,” Ford says.

At Sim, editors deal with a significant amount of Camera Raw, DPX and OpenEXR data. “Depending on the project, we could find ourselves needing 1.5GB/sec or more of bandwidth for a single playback session, and that’s just for one show,” says Ford. “We typically have three or four [shows] playing off the SAN at any one time, so the bandwidth needs are huge!”

Master of None

And the editors’ needs continue to evolve, as does their need for storage. “We keep needing more storage, and we need it to be faster and faster. Just when storage technology finally got to the point that doing 10-bit 2K shows was pretty painless, everyone started asking for 16-bit 4K,” Ford points out.

Recently, Sim completed work on the feature American Made and the Netflix show Master of None, in addition to a number of other episodic projects. For these and other shows, the SAN acts as the central hub around which the color correction, online editing, visual effects and deliverables are created.

“The finishing portion of the post pipeline deals exclusively with the highest-quality content available. It used to be that we’d do our work directly from a film reel on a telecine, but those days are long past,” says Ford. “You simply can’t run an efficient finishing pipeline anymore without a lot of storage.”

DigitalFilm Tree
DigitalFilm Tree (DFT) opened its doors in 1999 and now occupies a 10,000-square-foot space in Universal City, California, offering full round-trip post services, including traditional color grading, conform, dailies and VFX, as well as post system rentals and consulting services.

While Universal City may be DFT's primary location, it has dozens of remote satellite systems — mini post houses for production companies and studios — around the world. Those remote post systems, along with the increase in camera resolution (Alexa, Raw, 4K), have multiplied DFT's storage needs and driven a sea change in the facility's storage solution.

According to CEO Ramy Katrib, most companies in the media and entertainment industry historically have used block storage, and DFT was no different. But four years ago, the company began looking at object storage, which is used by Silicon Valley companies, like Dropbox and AWS, to store large assets. After significant research, Katrib felt it was a good fit for DFT as well, believing it to be a more economical way to build petabytes of storage, compared to using proprietary block storage.

Ramy Katrib

“We were unique from most of the post houses in that respect,” says Katrib. “We were different from many of the other companies using object storage — they were tech, financial institutions, government agencies, health care; we were the rare one from M&E — but our need for extremely large, scalable and resilient storage was the same as theirs.”

DFT’s primary work centers around scripted television — an industry segment that continues to grow. “We do 15-plus television shows at any given time, and we encourage them to shoot whatever they like, at whatever resolution they desire,” says Katrib. “Most of the industry relies on LTO to back up camera raw materials. We do that too, but we also encourage productions to take advantage of our object storage, and we will store everything they shoot and not punish them for it. It is a rather Utopian workflow. We now give producers access to all their camera raw material. It is extremely effective for our clients.”

Over four years ago, DFT began using a cloud-based platform called OpenStack, which is open-source software that controls large pools of data, to build and design its own object storage system. “We have our own software developers and people who built our hardware, and we are able to adjust to the needs of our clients and the needs of our own workflow,” says Katrib.

DFT designs its custom PC- and Linux-based post systems, including chassis from Super Micro, CPUs from Intel and graphic cards from Nvidia. Storage is provided from a number of companies, including spinning-disc and SSD solutions from Seagate Technology and Western Digital.

DFT then deploys remote dailies systems worldwide, in proximity to where productions are shooting. Each day clients plug their production hard drives (containing all camera raw files) into DFT’s remote dailies system. From DFT’s facility, dailies technicians remotely produce editorial, viewing and promo dailies files, and transfer them to their destinations worldwide. All the while, the camera raw files are transported from the production location to DFT’s ProStack “massively scalable object storage.” In this case, “private cloud storage” consists of servers DFT designed that house all the camera raw materials, with management from DFT post professionals who support clients with access to and management of their files.

DFT provides color grading for Great News.

Recently, storage vendors such as Quantum and Avid have begun building and branding their own object storage solutions, not unlike what DFT has constructed at its Universal City locale. And the reason is simple: Object storage provides a clear advantage because of its reliability and low cost. “We looked at it because the storage we were paying for, proprietary block storage, was too expensive to house all the data our clients were generating. And resolutions are only going up. So, every year we needed more storage,” Katrib explains. “We needed a solution that could scale with the practical reality we were living.”

Then, about four years ago, when DFT started becoming a software company, one of the developers brought OpenStack to Katrib’s attention. “The open-source platform provided several storage solutions, networking capabilities and cloud compute capabilities for free,” he points out. Of course, the solution is not a panacea, as it requires a company to customize the offering for its own needs and even contribute back to the OpenStack community. But then again, that requirement enables DFT to evolve with the changing needs of its clients without waiting for a manufacturer to do it.

“It does not work out of the box like a solution from IBM, for instance. You have to develop around it,” Katrib says. “You have to have a lab mentality, designing your own hardware and software based on pain points in your own environment. And, sometimes it fails. But when you do it correctly, you realize it is an elegant solution.” However, there are vibrant communities, user groups and tech summits of those leveraging the technology who are willing to assist and collaborate.

DFT has evolved its object storage solution, extending its capabilities from an initial hundreds of terabytes, which is nothing to sneeze at, to hundreds of petabytes of storage. DFT also designs remote post systems and storage solutions for customers in remote locations around the world. And those remote locations can be as simple as a workstation running applications such as Blackmagic’s Resolve or Adobe After Effects and connected to object storage housing all the client’s camera raw material.

The key, Katrib notes, is to have great post and IT pros managing the projects and the system. “I can now place a remote post system with a calibrated 4K monitor and object storage housing the camera raw material, and I can bring the post process to you wherever you are, securely,” he adds. “From wherever you are, you can view the conform, color and effects, and sign off on the final timeline, as if you were at DFT.”

DFT posts American Housewife

In addition to the object storage, DFT is also using Facilis TerraBlock and Avid Nexis systems locally and on remote installs. The company uses those commercial solutions because they provide benefits, including storage performance and feature sets that optimize certain software applications. As Katrib points out, storage is not one flavor fits all, and different solutions work better for certain use cases. In DFT’s case, the commercial storage products provide performance for the playback of multiple 4K streams across the company’s color, VFX and conform departments, while its ProStack high-capacity object storage comes into play for storing the entirety of the files produced by its clients.

“Rather than retrieve files from an LTO tape, as most do when working on a TV series, with object storage, the files are readily available, saving hours in retrieval time,” says Katrib.

Currently, DFT is working on a number of television series, including Great News (color correction only) and Good Behavior (dailies only). For other shows, such as the Roseanne revival, NCIS: Los Angeles, American Housewife and more, it is performing full services such as visual effects, conform, color, dailies and dubbing. And in some instances, even equipment rental.

As the work expands, DFT is looking to extend upon its storage and remote post systems. “We want to have more remote systems where you can do color, conform, VFX, editorial, wherever you are, so the DP or producer can have a monitor in their office and partake in the post process that’s particular to them,” says Katrib. “That is what we are scaling as we speak.”

Broadway Video
Broadway Video is a global media and entertainment company that has been primarily engaged in post-production services for television, film, music, digital and commercial projects for the past four decades. Located in New York and Los Angeles, the facility offers one-stop tools and talent for editorial, audio, design, color grading, finishing and screening, as well as digital file storage, preparation, aggregation and delivery of digital content across multiple platforms.

Since its founding in 1979, Broadway Video has grown into an independent studio. During this timeframe, content has evolved greatly, especially in terms of resolution, to where 4K and HD content — including HDR and Atmos sound — is becoming the norm. “Staying current and dealing with those data speeds are necessary in order to work fluidly on a 4K project at 60p,” says Stacey Foster, president and managing director, Broadway Video Digital and Production. “The data requirements are pretty staggering for throughput and in terms of storage.”

Stacey Foster

This led Broadway Video to begin searching a year ago for a storage system that would meet its needs now and in the foreseeable future — in short, it needed a system that is scalable. Its solution: an all-flash Hitachi Vantara Virtual Storage Platform (VSP) G series. Although quite expensive, a flash-based system is “ridiculously powerful,” says Foster. “Technology is always marching forward, and flash-based systems are going to become the norm; they are already the norm at the high end.”

Foster has had a relationship with Hitachi spanning more than a decade and has witnessed the company’s growth into M&E from the medical and financial worlds where it has been firmly ensconced. According to Foster, Hitachi’s VSP series will enhance Broadway Video’s 4K offerings and transform internal operations by allowing quick turnaround and efficient, cost-effective production, post production and delivery of television shows and commercials. And the system offers workload scalability, allowing the company to expand and meet the changing needs of the digital media production industry.

“The systems we had were really not that capable of handling DPX files that were up to 50TB, and Hitachi’s VSP product has been handling them effortlessly,” says Foster. “I don’t think other [storage] manufacturers can say that.”

Foster explains that as Broadway Video continued to expand its support of the latest 4K content and technologies, it became clear that a more robust, optimized storage solution was needed as the company moved in this new direction. “It allows us to look at the future and create a foundation to build our post production and digital distribution services on,” Foster says.

Broadway Video’s work with Netflix sparked the need for a more robust system. Recently, Comedians in Cars Getting Coffee, an Embassy Row production, transitioned to Netflix, and one of the requirements from its new home was the move from 2K to 4K. “It was the perfect reason for us to put together a 4K end-to-end workflow that satisfies this client’s requirements for technical delivery,” Foster points out. “The bottleneck in color and DPX file delivery is completely lifted, and the post staff is able to work quickly and sometimes even faster than in real time when necessary to deliver the final product, with its very large files. And that is a real convenience for them.”

Broadway Video’s Hitachi Vantara Virtual Storage Platform G series.

As a full-service post company, Broadway Video in New York operates 10 production suites of Avids running Adobe Premiere and Blackmagic Resolve, as well as three full mixing suites. “We can have all our workstations simultaneously hit the [storage] system hard and not have the system slow down. That is where Hitachi’s VSP product has set itself apart,” Foster says.

For Comedians in Cars Getting Coffee, like many projects Broadway Video encounters, the cut is done in a lower-resolution Avid file. The 4K media is then imported into the Resolve platform, so the material is colored in its original format. In terms of storage, once the material is past the cutting stage, it is all stored on the Hitachi system. Once the project is completed, it is handed off on spinning disc for archival, though Foster foresees a limited future for spinning discs due to their inherently limited life span — “anything that spins breaks down,” he adds.

All the suites are fully HD-capable and are tied with shared SAN and ISIS storage; because work on most projects is shared between editing suites, there is little need to use local storage. Currently Broadway Video is still using its previous Avid ISIS products but is slowly transitioning to the Hitachi system only. Foster estimates that at this time next year, the transition will be complete, and the staff will no longer have to support the multiple systems. “The way the systems are set up right now, it’s just easier to cut on ISIS using the Avid workstations. But that will soon change,” he says.

Other advantages the Hitachi system provides are stability and uptime, which Foster maintains is “pretty much 100 percent guaranteed.” As he points out, there is no such thing as downtime in banking and medical, where Hitachi proved its mettle, and bringing that stability to the M&E industry “has been terrific.”

Of course, that is in addition to bandwidth and storage capacity, which is expandable. “There is no limit to the number of petabytes you can have attached,” notes Foster.

Considering that the majority of calls received by Broadway Video center on post work for 4K-based workflows, the new storage solution is a necessary technical addition to the facility’s other state-of-the-art equipment. “In the environment we work in, we spend more and more time on the creative side in terms of the picture cutting and sound mixing, and then it is a rush to get it out the door. If it takes you days to import, color correct, export and deliver — especially with the file sizes we are talking about – then having a fast system with the kind of throughput and bandwidth that is necessary really lifts the burden for the finishing team,” Foster says.

He continues: “The other day the engineers were telling me we were delivering 20 times faster using the Hitachi technology in the final cutting and coloring of a Jerry Seinfeld stand-up special we had done in 4K,” resulting in a DPX file that was about 50TB. “And that is pretty significant,” Foster adds.

Main Image: DigitalFilm Tree’s senior colorist Patrick Woodard.

Panasas intros faster, customizable storage solutions for M&E

Panasas has introduced three new products targeting those working in the media and entertainment world, a world that requires fast, customizable workflows with a path for growth.

Panasas’s ActiveStor is now capable of scaling capacity to 57PB and offering 360GB/s of bandwidth. According to the company, this system doubles metadata performance to cut data access time in half, scales performance and capacity independently and seamlessly adapts to new technology advancements.

The new ActiveStor Director 100 (ASD-100) control-plane engine and the new ActiveStor Hybrid 100 (ASH-100) configurable plug-and-play storage system allow users to design storage systems that meet their exact specifications and workflow requirements, as well as grow the system if needed.

For the first time, Panasas is offering a disaggregated Director Blade — the ASD-100, the brain of the Panasas storage system — to provide flexibility. Customers can now add any number of ASD-100s to drive exactly the level of metadata performance they need. With double the raw CPU power and RAM capacity of previous Director Blades, the ASD-100 offers double the performance on metadata-intensive workloads.

Based on industry-standard hardware, the ASD-100 manages metadata and the global namespace; it also acts as a gateway for standard data-access protocols such as NFS and SMB. The ASD-100 uses non-volatile dual in-line memory modules (NVDIMMs) to store metadata transaction logs, and Panasas is contributing its NVDIMM driver to the FreeBSD community.

The ASH-100 and ASD-100 rack

The ASH-100 hardware platform offers high-capacity HDDs (12TB) and SSDs (1.9TB) in a parallel hybrid storage system. A broad range of HDD and SSD capacities can be paired as needed to meet specific workflow needs. The ASH-100 can be configured with ASD-100s or can be delivered with integrated traditional ActiveStor Director Blades (DBs), depending on user requirements.

The latest version of this plug-and-play parallel file system features an updated FreeBSD operating foundation and a GUI that supports asynchronous “push” notification of system changes without user interaction.

Panasas’ updated DirectFlow parallel data access protocol offers a 15 percent improvement in throughput thanks to enhancements to memory allocation and readahead. All ActiveStor models will benefit from this performance increase after upgrading to the new release of PanFS.

Together, the ASD-100 and ASH-100, the updated PanFS 7.0 parallel file system and enhancements to the DirectFlow parallel data-access protocol offer these advantages:
Performance – Users can scale metadata performance, data bandwidth, and data capacity independently for faster time-to-results.
Flexibility – The ability to mix and match HDD and SSD configurations under a single global namespace enables users to best match the system performance to their workload requirements.
Productivity – The new ActiveStor solution doubles productivity by cutting data access time in half, regardless of the number of users.
Investment Protection – The solution is backward and forward compatible with the ActiveStor product portfolio.

The ASH-100 is shipping now. The ASD-100 and PanFS 7.0 will be available in Q1 2018.

Sonnet’s portable eGPU accelerates computer graphics

Sonnet has introduced a Thunderbolt-connected external GPU (eGPU) device called the eGFX Breakaway Puck, which is a portable, high-performance, all-in-one eGPU for Thunderbolt 3 computers. The Puck offers accelerated graphics and provides multi-display connectivity thanks to AMD’s Eyefinity technology. Users employing a Puck will experience boosted GPU acceleration when using professional video apps.

Sonnet is offering two Puck models: the eGFX Breakaway Puck Radeon RX 560 and eGFX Breakaway Puck Radeon RX 570. Each Puck model is 6 inches wide by 5.1 inches deep by 2 inches tall. Both feature one Thunderbolt 3 port, three DisplayPorts and one HDMI port to support up to four 4K displays in multi-monitor mode.

The Puck connects to a computer with a single Thunderbolt 3 cable and provides up to 45W of power to charge the computer. On the desktop, the Puck has a minimal footprint. With an optional VESA mounting bracket kit, the Puck can be attached to the back of a display or the arm of a multi-monitor stand, leaving a zero footprint on the desktop. The kit also includes a 0.5-meter cable to help reduce cable clutter.

The eGFX Breakaway Puck Radeon RX 560 sells for $449, and the eGFX Breakaway Puck Radeon RX 570 costs $599. The optional PuckCuff VESA Mounting Bracket Kit has an MSRP of $59. All models are immediately available.


ATTO XstreamCore for remote access to DAS and sharing of devices

ATTO, which provides storage, network connectivity and infrastructure solutions, is now offering XstreamCore storage controllers to add remote Fibre Channel or Ethernet connectivity to SATA optical disc and SAS LTO tape devices, including the newly available LTO-8 tape format. XstreamCore is designed to provide the benefits of remote access to direct attached storage (DAS) technologies as well as the connection of multiple devices to be shared by multiple clients.

XstreamCore is a rack-scale flash and capacity storage controller that bridges 12Gb and 6Gb SAS storage devices and SATA devices, sharing and remotely connecting them over Fibre Channel or Ethernet networks. XstreamCore includes exclusive ATTO-developed features: xCore data acceleration to rapidly move data; the eCore control engine, which adds services and management features to storage without affecting performance; and SpeedWrite, a tape and optical performance feature that significantly boosts write performance by efficiently managing commands between attached clients and tape and optical devices.

By making it possible to directly connect up to 16 SATA optical or SAS tape drives, with additional drives connected through SAS expanders, XstreamCore enables a lower cost of ownership versus native Ethernet or Fibre Channel tape devices. Fewer switch ports are required when using XstreamCore, and power, cooling, cabling and weight requirements can be better managed because the ATTO controller allows separation of racks of client servers, storage and archive devices. ATTO XstreamCore FC 7500 currently supports connectivity for SAS tape devices and SATA optical devices to Fibre Channel-connected servers or fabrics. ATTO Ethernet-to-SAS controllers will be available in early 2018 to support connectivity for these devices to Ethernet-connected servers or fabrics.


ATTO intros quad-port version of 32Gb Fibre line of HBAs

ATTO Technology has added a new quad-port host bus adapter (HBA) to its Fibre Channel portfolio. The ATTO Celerity 32Gb Gen 6 FC-324E HBA will enable companies to use their existing storage area network infrastructure and address the growing need for high-performing, scalable and secure storage. Celerity is intended to support exponential data growth in applications such as 4K/8K editing and high-performance computing and data warehousing, along with the proliferation of virtualized servers and flash arrays.

According to ATTO, the 32Gb HBAs support data throughput of 3,200 MB/s per channel, maximizing the number of virtual machines per physical server. With 16 PCIe bus connections and four 32 Gb/s Fibre Channel ports, the FC-324E eliminates the bottlenecks created by I/O data-intensive applications.

With data centers moving to all-flash arrays, there’s a need to drive greater performance to more solid-state drives. Having four Fibre Channel ports in a single PCIe slot ensures high-density connectivity at the highest available performance, up to 12.8 GB/s of aggregate throughput, making it well suited for environments that rely on next-generation, flash-based storage.

Celerity 32Gb HBAs also make it possible to increase the distance between servers and storage. Because they support more data in flight, users can extend their connection up to 10 kilometers without degrading throughput in demanding long-distance applications, such as a stretch cluster.

The ATTO Gen 6 line includes Celerity 32Gb and 16Gb HBAs in low-profile single-, dual- and now quad-port full-height versions. All versions are backward-compatible and take advantage of advancements in reliability and forward error correction to improve network performance and resiliency.

Celerity 32Gb Gen 6 quad-port FC-324E HBAs are available now.

Quantum targets smaller post houses with under $25K NAS storage

Quantum is now offering an entry-level NAS storage solution targeting post houses and corporate video departments. Xcellis Foundation is a high-performance, entry-level workflow storage system specifically designed to address the technical and budgetary requirements of small- to medium-sized studios.

Based on Quantum’s StorNext shared file system and data management platform, this new product offers enterprise-class Xcellis storage, including high performance and scalability, in a NAS appliance for under $25,000.

The 3U Xcellis Foundation system includes Quantum’s QXS disk storage chassis and Workflow Director appliance, which provides NAS connectivity and support for billions of files across up to 64 virtual file systems. Xcellis Foundation comes standard with 48TB of raw capacity, and users can upgrade to 72TB or 96TB. When the user is ready to scale the system, adding performance and capacity can be done cost-effectively and non-disruptively by simply connecting more storage. Connectivity is via dual 10 GbE or optional 40 GbE, and NAS protocol support is included with no per-seat licensing.

Here are some additional details about the new system:
• works with higher video resolutions, including 1080p and 4K, without introducing complexity or unnecessary cost to the workflow
• cost-effective IP connectivity over standard NAS protocols
• advanced data management capabilities that optimize performance and maximize capacity across different storage tiers while assuring that content is always in the right place at the right time
• seamless integration into a multi-tier storage infrastructure that includes flash, disk, nearline object storage, public cloud and tape archive
• the ability to scale up and scale out through readily extended capacity, connectivity and redundancy
• simple installation and setup via a web-based GUI

Quantum will be showing Xcellis Foundation at the upcoming IBCShow in Amsterdam, and the new appliance will be available through Quantum and its reseller partners later this month.

Speaking of resellers, here is what one — Nick Smith, director of technology at JB&A Distribution — had to say about the new system: “Xcellis Foundation gives our reseller community exactly what it’s been wanting ― a Quantum StorNext-powered shared storage solution designed specifically for smaller video production environments. [It combines] easy NAS connectivity, 4K-ready performance and simplified setup and management, all at a cost-effective price point.”

Mistika Ultima offering storage connectivity via ATTO HBAs

SGO has certified ATTO’s 12Gb ExpressSAS host bus adapters (HBAs) for use with its high-end post system, the Mistika Ultima. This new addition can help post teams to better manage large data transfers and offer support for realtime editing of uncompressed 4K video.

The latest addition to the ATTO ExpressSAS family, the 12Gb SAS/SATA HBA provides users with fast storage connectivity while allowing scalability for next-gen platforms and infrastructures. Optimized for extremely low latency and high-bandwidth data transfer, ExpressSAS HBAs offer a wide variety of port configurations and support for RAID-0, -1 and -1e.

“Projects that our customers are working on are becoming incredibly data-heavy, and the integration of ATTO products into a Mistika solution will help smooth and speed up data transfers, shortening production times,” said Miguel Angel Doncel, CEO of SGO.

Quantum’s StorNext 6 Release Now Shipping

The industry’s ongoing shift to higher-resolution formats, its use of more cameras to capture footage and its embrace of additional distribution formats and platforms is putting pressure on storage infrastructure. For content creators and owners to take full advantage of their content, storage must not only deliver scalable performance and capacity but also ensure that media assets remain readily available to users and workflow applications. Quantum’s new StorNext 6 is engineered to address these requirements.

StorNext 6 is now shipping with all newly purchased Xcellis offerings and is also available at no additional cost to current Xcellis users running StorNext 5 under existing support contracts.

Leveraging its extensive real-world 4K testing and a series of 4K reference architectures developed from test data, Quantum’s StorNext platform provides scalable storage that delivers high performance using less hardware than competing systems. StorNext 6 offers a new quality of service (QoS) feature that empowers facilities to further tune and optimize performance across all client workstations, and on a machine-by-machine basis, in a shared storage environment.

Using QoS to specify bandwidth allocation to individual workstations, a facility can guarantee that more demanding tasks, such as 4K playback or color correction, get the bandwidth they need to maintain the highest video quality. At the same time, QoS allows the facility to set parameters ensuring that less timely or demanding tasks do not consume an unnecessary amount of bandwidth. As a result, StorNext 6 users can take on work with higher-resolution content and easily optimize their storage resources to accommodate the high-performance demands of such projects.
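The kind of allocation QoS performs can be sketched as guaranteed minimums plus fair sharing of the surplus. The client names, numbers and sharing policy below are hypothetical illustrations, not StorNext's actual QoS algorithm:

```python
def allocate_bandwidth(total, clients):
    """Sketch of minimum-guarantee bandwidth allocation.
    clients: {name: {"min": guaranteed MB/s, "max": cap MB/s}}
    Each client first receives its guaranteed minimum; the surplus is
    then split evenly among clients still below their cap."""
    grants = {name: spec["min"] for name, spec in clients.items()}
    remaining = total - sum(grants.values())
    below_cap = [n for n in clients if grants[n] < clients[n]["max"]]
    while remaining > 1e-9 and below_cap:
        share = remaining / len(below_cap)
        for name in list(below_cap):
            extra = min(share, clients[name]["max"] - grants[name])
            grants[name] += extra
            remaining -= extra
            if grants[name] >= clients[name]["max"]:
                below_cap.remove(name)  # capped clients leave the pool
    return grants
```

With a 1,000MB/s pool, a color suite guaranteed 400MB/s (capped at 600) ends up with more headroom than a dailies station capped at 200, which mirrors the article's point: demanding tasks keep the bandwidth they need while lighter tasks cannot consume an unnecessary share.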

StorNext 6 includes a new feature called FlexSpace, which allows multiple instances of StorNext — and geographically distributed teams — located anywhere in the world to share a single archive repository, allowing collaboration with the same content. Users at different sites can store files in the shared archive, as well as browse and pull data from the repository. Because the movement of content can be fully automated according to policies, all users have access to the content they need without having it expressly shipped to them.

Shared archive options include public cloud storage on Amazon Web Services (AWS), Microsoft Azure or Google Cloud via StorNext’s existing FlexTier capability, as well as private cloud storage based on Quantum’s Lattus object storage or, through FlexTier, third-party object storage such as NetApp StorageGrid, IBM Cleversafe and Scality Ring. In addition to simplifying collaborative work, FlexSpace also makes it easy for multinational companies to establish protected off-site content storage.

FlexSync, which is new to StorNext 6, provides a fast, highly manageable and automated way to synchronize content between multiple StorNext systems. FlexSync supports one-to-one, one-to-many and many-to-one file replication scenarios and can be configured to operate at almost any level: specific files, specific folders or entire file systems. By leveraging enhancements in file system metadata monitoring, FlexSync recognizes changes instantly and can immediately begin reflecting those changes on another system. This approach avoids the need to lock the file systems to identify changes, reducing synchronization time from hours or days to minutes, or even seconds. As a result, users can also set policies that automatically trigger copies of files so that they are available at multiple sites, enabling different teams to access content quickly and easily whenever it’s needed. In addition, by providing automatic replication across sites, FlexSync offers increased data protection.
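Change-driven replication of this sort can be sketched with a simple modification-time scan. FlexSync itself relies on file system metadata monitoring rather than walking the tree, so treat this only as an illustration of the idea:

```python
import os
import shutil

def sync_changed(src, dst, state):
    """One-way sync sketch: copy files whose modification time changed
    since the last pass. `state` (relative path -> mtime) stands in for
    the metadata change log a real replication engine would maintain."""
    copied = []
    for root, _dirs, files in os.walk(src):
        for name in files:
            path = os.path.join(root, name)
            rel = os.path.relpath(path, src)
            mtime = os.path.getmtime(path)
            if state.get(rel) != mtime:          # new or changed file
                target = os.path.join(dst, rel)
                os.makedirs(os.path.dirname(target), exist_ok=True)
                shutil.copy2(path, target)       # copy data + metadata
                state[rel] = mtime
                copied.append(rel)
    return copied
```

A second pass over an unchanged tree copies nothing, which is the property that lets change-driven replication avoid re-shipping entire file systems on every cycle.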

StorNext 6 also gives users greater control and selectivity in maximizing their use of storage on an ROI basis. When archive policies call for storage across disk, tape and the cloud, StorNext makes a copy for each. A new copy expiration feature enables users to set additional rules determining when individual copies are removed from a particular storage tier. This approach makes it simpler to maintain data on the storage medium most appropriate and economical and, in turn, to free up space on more expensive storage. When one of several copies of a file is removed from storage, a complementary selectable retrieve function in StorNext 6 enables users to dictate which of the remaining copies is the first priority for retrieval. As a result, users can ensure that the file is retrieved from the most appropriate storage tier.
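Tier policies like these boil down to a small amount of bookkeeping per copy. A minimal sketch, with hypothetical tier names, expiry windows and priority numbers (not StorNext's actual policy format):

```python
import datetime as dt

# Hypothetical policy table: each tier holding a copy has an optional
# expiry window and a retrieval priority (lower number = fetched first).
POLICY = {
    "disk":  {"expire_after_days": 30,   "retrieve_priority": 1},
    "cloud": {"expire_after_days": 365,  "retrieve_priority": 2},
    "tape":  {"expire_after_days": None, "retrieve_priority": 3},
}

def live_copies(copies, today):
    """copies: {tier: date_written}. Drop copies past their tier's expiry."""
    out = {}
    for tier, written in copies.items():
        limit = POLICY[tier]["expire_after_days"]
        if limit is None or (today - written).days <= limit:
            out[tier] = written
    return out

def retrieval_source(copies, today):
    """Pick the highest-priority tier that still holds a live copy."""
    live = live_copies(copies, today)
    return min(live, key=lambda tier: POLICY[tier]["retrieve_priority"])
```

Once the 30-day disk copy ages out, a retrieval falls through to the next-priority surviving copy, which is the behavior the selectable retrieve function describes.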

StorNext 6 offers valuable new capabilities for those facilities that subscribe to Motion Picture Association of America (MPAA) rules for content auditing and tracking. The platform can now track changes in files and provide reports on who changed a file, when the changes were made, what was changed and whether and to where a file was moved. With this knowledge, a facility can see exactly how its team handled specific files and also provide its clients with details about how files were managed during production.

As facilities begin to move to 4K production, they need a storage system that can be expanded for both performance and capacity in a non-disruptive manner. StorNext 6 provides for online stripe group management, allowing systems to have additional storage capacity added to existing stripe groups without having to go offline and disrupt critical workflows.

Another enhancement in StorNext 6 allows StorNext Storage Manager to automate archives in an environment with Mac clients, effectively eliminating the lengthy retrieve process previously required to access an archived directory that contains offline files, which can number in the hundreds of thousands, or even millions.

Last Chance to Enter to Win an Amazon Echo… Take our Storage Survey Now!

If you’re working in post production, animation, VFX and/or VR/AR/360, please take our short survey and tell us what works (and what doesn’t work) for your day-to-day needs.

What do you need from a storage solution? Your opinion is important to us, so please complete the survey by Wednesday, March 8th.

We want to hear your thoughts… so click here to get started now!

Quantum shipping StorNext 5.4

Quantum has introduced StorNext 5.4, the latest release of their workflow storage platform, designed to bring efficiency and flexibility to media content management. StorNext 5.4 enhancements include the ability to integrate existing public cloud storage accounts and third-party object storage (private cloud) — including Amazon Web Services, Microsoft Azure, Google Cloud, NetApp StorageGRID, IBM Cleversafe and Scality Ring — as archive tiers in a StorNext-managed media environment. It also lets users deploy applications embedded within StorNext-powered Xcellis workflow storage appliances.

The release also extends StorNext Storage Manager, offering automated, policy-based movement of content into and out of users' existing public and private clouds while maintaining the visibility and access that StorNext provides. Storage Manager integrates public and private clouds — alongside primary disk and tape storage tiers — within a StorNext-managed environment, gives users and applications full access to media stored in the cloud without additional hardware or software, and extends versioning across sites and the cloud.

By enabling applications to run inside its Xcellis Workflow Director, the new Dynamic Application Environment (DAE) capability in StorNext 5.4 allows users to leverage a converged storage architecture, reducing the time, cost and complexity of deploying and maintaining applications.

StorNext 5.4 is currently shipping with all newly-purchased Xcellis, StorNext M-Series and StorNext Pro Solutions, as well as Artico archive appliances. It is available at no additional cost for StorNext 5 users under current support contracts.

Promise, Symply team up on Thunderbolt 3 RAID system

Storage solutions companies Promise Technology and Symply have launched Pegasus3 Symply Edition, the next generation of the Pegasus desktop RAID storage system. The new system combines 40Gb/s Thunderbolt 3 speed with Symply’s storage management suite.

According to both companies, Pegasus3 Symply Edition complements the new MacBook Pro — it's optimized for performance and content protection. The Pegasus3 Symply Edition offers the speed needed for creative pros generating high-resolution video and rich media content, as well as the safety and security of full-featured RAID protection.

The intuitive Symply software suite allows for easy setup, optimization and management. The dual Thunderbolt 3 ports provide fast connectivity and the ability to connect up to six daisy-chained devices on a single Thunderbolt 3 port while adding new management tools and support from Symply.

“As the Symply solution family grows, Pegasus3 Symply Edition will continue to be an important part in the larger, shared creative workflows built around Promise and Symply solutions,” said Alex Grossman, president and CEO, Symply.

The Pegasus3 Symply Edition is available in three models — Pegasus R4, Pegasus R6 and Pegasus R8 — delivering four-, six- and eight-drive configurations of RAID storage, respectively. Each system is ready to go “out of the box” for Mac users with a 1m 40Gb/s Active Thunderbolt 3 cable for easy, high-speed connectivity.

Every Pegasus3 Symply Edition will include Symply’s Always-Up-to-Date Mac OS management app. iOS and Apple Watch apps to monitor your Pegasus3 Symply Edition system remotely are coming soon. The Symply Management suite will support most earlier Pegasus systems. The Pegasus3 Symply Edition includes a full three-year warranty, tech support and 24/7 media and creative user support worldwide.

The Pegasus3 Symply Edition lineup will be available on the Apple online store, at select Apple retail stores and at resellers.

IBC 2016: VR and 8K will drive M&E storage demand

By Tom Coughlin

While attending the 2016 IBC show, I noticed some interesting trends, cool demos and new offerings. For example, while flying drones were missing, VR goggles were everywhere; IBM was showing 8K video editing using flash memory and magnetic tape; the IBC itself featured a fully IP-based video studio showing the path to future media production using lower-cost commodity hardware with software management; and, it became clear that digital technology is driving new entertainment experiences and will dictate the next generation of content distribution, including the growing trend to OTT channels.

In general, IBC 2016 featured the move to higher resolution and more immersive content. On display throughout the show was 360-degree video for virtual reality, as well as 4K and 8K workflows. Virtual reality and 8K are driving new levels of performance and storage demand, and these are just some of the ways that media and entertainment pros are increasing the size of video files. Nokia's Ozo was just one of several multi-camera content capture devices on display for 360-degree video.

Besides multi-camera capture technology and VR editing, the Future Tech Zone at IBC included even larger 360-degree video display spheres than at the 2015 event. These were from Puffer Fish (pictured right). The smaller-sized spherical display was touch-sensitive so you could move your hand across the surface and cause the display to move (sadly, I didn’t get to try the big sphere).

IBM had a demonstration of a 4K/8K video editing workflow using the IBM FlashSystem and IBM Enterprise tape storage technology, which was a collaboration between the IBM Tokyo Laboratory and IBM’s Storage Systems division. This work was done to support the move to 4K/8K broadcasts in Japan by 2018, with a broadcast satellite and delivery of 8K video streams of the 2020 Tokyo Olympic Games. The combination of flash memory storage for working content and tape for inactive content is referred to as FLAPE (flash and tAPE).

The graphic below shows a schematic of the 8K video workflow demonstration.

The argument for FLAPE appears to be that flash delivers the performance needed for editing 8K content, while magnetic tape provides low-cost storage for that content, which can require more than 18TB for an hour of raw footage (depending upon the sampling and frame rate). Note that magnetic tape is typically used for archiving video content, so this is a rather unusual application. The IBM demonstration, plus discussions with media and entertainment professionals at IBC, indicates that with the declining costs of flash memory and the performance demands of 8K, these workflows may finally drive increased demand for flash memory in post production.
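That 18TB-per-hour figure is straightforward to sanity-check. The sketch below assumes one plausible combination, 8K (7680x4320) with 10-bit 4:2:2 sampling (20 bits per pixel) at 60fps; these are illustrative parameters, not IBM's published test settings:

```python
def hourly_storage_tb(width, height, bits_per_pixel, fps):
    """Uncompressed storage for one hour of footage, in terabytes (10^12 bytes)."""
    bytes_per_frame = width * height * bits_per_pixel / 8
    return bytes_per_frame * fps * 3600 / 1e12

# 8K UHD, 10-bit 4:2:2 (20 bits/pixel), 60 fps
print(round(hourly_storage_tb(7680, 4320, 20, 60), 1))  # ~17.9 TB/hour
```

At 4:4:4 sampling or higher frame rates the figure climbs well past 18TB, which is presumably why the article hedges with "greater than."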

Avid was promoting their Nexis file system, the successor to ISIS. The company uses SSDs for metadata, but generally flash isn’t used for actual editing yet. They agreed that as flash costs drop, flash could find a role for higher resolution and richer media. Avid has embraced open source for their code and provides free APIs for their storage. The company sees a hybrid of on-site and cloud storage for many media and entertainment applications.

EditShare announced a significant update to its XStream EFS Shared Storage Platform (our main image). The update provides non-disruptive scaling to over 5PB with millions of assets in a single namespace. The system provides a distributed file system with multiple levels of hardware redundancy and reduced downtime. An EFS cluster can be configured with a mix of capacity and performance, with SSDs for high data rate content and SATA HDDs for cost-efficient, higher-capacity storage — 8TB HDDs have been qualified for the system. The latest release expands optimization support for file-per-frame media.

The IBC IP Interoperability Zone showed a complete IP-based studio (pictured right), produced with the cooperation of AIMS and the IABM. The zone brings to life the work of the JT-NM (the Joint Task Force on Networked Media, a combined initiative of AMWA, EBU, SMPTE and VSF) and the AES on a common roadmap for IP interoperability. Central to the IBC Feature Area was a live production studio, based on the technologies of the JT-NM roadmap, that Belgian broadcaster VRT has been using daily on-air all summer as part of the LiveIP Project, a collaboration between VRT, the European Broadcasting Union (EBU) and LiveIP's 12 technology partners.

Summing Up
IBC 2016 showed some clear trends to more immersive, richer content with the numerous displays of 360-degree and VR content and many demonstrations of 4K and even 8K workflows. Clearly, the trend is for higher-capacity, higher-performance workflows and storage systems that support this workflow. This is leading to a gradual move to use flash memory to support these workflows as the costs for flash go down. At the same time, the move to IP-based equipment will lead to lower-cost commodity hardware with software control.

Storage analyst Tom Coughlin is president of Coughlin Associates. He has over 30 years in the data storage industry and is the author of Digital Storage in Consumer Electronics: The Essential Guide. He also publishes the Digital Storage Technology Newsletter and the Digital Storage in Media and Entertainment Report.

Introducing a new section on our site: techToolbox

In our quest to bring even more information and resources to postPerspective, we have launched a new section called techToolbox — a repository of sorts, where you can find white papers, tutorials, videos and more from a variety of product makers.

To kick off our new section, we're focusing our first techToolbox on storage. Of all the technologies required for today's entertainment infrastructure, storage remains one of the most crucial. Without the ability to store data in an efficient and reliable fashion, everything breaks down.

In techToolbox: Storage, we highlight some of today’s key advances in storage technology, with each providing a technical breakdown of why they could be the solution to your needs.

Check it out here.

Archion’s new Omni Hybrid storage targets VR, VFX, animation

Archion Technologies has introduced the EditStor Omni Hybrid, a collaborative storage solution for virtual reality, visual effects, animation, motion graphics and post workflows.

In terms of performance, an Omni Hybrid with one expansion chassis offers 8000MB/second for 4K and other streaming demands, and over 600,000 IOPS for rendering and motion graphics. The product has been certified for Adobe After Effects, Autodesk's Maya/Flame/Lustre, The Foundry's Nuke and Modo, Assimilate Scratch and Blackmagic's Resolve and Fusion. The Omni Hybrid is scalable up to 1.5 petabytes and can be expanded without shutdown.
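For context, 8000MB/second can be translated into a rough count of concurrent uncompressed 4K streams. The parameters below (4K DPX-style frames at 4096x2160, 4 bytes per pixel, 24fps) are illustrative assumptions, not Archion's certified test settings:

```python
def max_streams(bandwidth_mb_s, width, height, bytes_per_pixel, fps):
    """How many uncompressed streams fit in the stated aggregate bandwidth."""
    stream_mb_s = width * height * bytes_per_pixel * fps / 1e6
    return int(bandwidth_mb_s // stream_mb_s)

# 10-bit RGB DPX is commonly packed into 32 bits (4 bytes) per pixel
print(max_streams(8000, 4096, 2160, 4, 24))  # -> 9 concurrent streams
```

Compressed mezzanine formats would, of course, multiply that stream count considerably.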

“We have Omni Hybrid in post production facilities that range from high-end TV and film to massive reality productions,” reports Archion CTO James Tucci. “They are all doing graphics and editorial work on one storage system.”

Grading & Compositing Storage: Northern Lights

Speed is key for artist Chris Hengeveld.

By Beth Marchant

For Flame artist Chris Hengeveld of Northern Lights in New York City, high-performance file-level storage and a Fibre Channel connection mean it’s never been easier for him to download original source footage and share reference files with editorial on another floor. But Hengeveld still does 80 percent of his work the old-fashioned way: off hand-delivered drives that come in with raw footage from production.

Chris Hengeveld

The bicoastal editorial and finishing facility Northern Lights — parent company to motion graphics house Mr. Wonderful, the audio facility SuperExploder and production boutique Bodega — has an enviably symbiotic relationship with its various divisions. “We’re a small company but can go where we need to go,” says colorist/compositor Hengeveld. “We also help each other out. I do a lot of compositing, and Mr. Wonderful might be able to help me out or an assistant editor here might help me with After Effects work. There’s a lot of spillover between the companies, and I think that’s why we stay busy.”

Hengeveld, who has been with Northern Lights for nine years, uses Flame Premium, Autodesk’s visual effects finishing bundle of Flame and Flare with grading software Lustre. “It lets me do everything from final color work, VFX and compositing to plain-old finishing to get it out of the box and onto the air,” he says. With Northern Lights’ TV-centric work now including a growing cache of Web content, Hengeveld must often grade and finish in parallel. “No matter how you send it out, chances are what you’ve done is going to make it to the Web in some way. We make sure that what we make look good on TV also looks good on the Web. It’s often just two different outputs. What looks good on broadcast you often have to goose a bit to get it to look good on the Web. Also, the audio specs are slightly different.”

Hengeveld provided compositing and color on this spot for Speedo.

Editorial workflows typically begin on the floor above Hengeveld in Avid, “and an increasing number, as time goes by, in Adobe Premiere,” he says. Editors are connected to media through a TerraBlock shared storage system from Facilis. “Each room works off a partition from the TerraBlock, though typically with files transcoded from the original footage,” he says. “There’s very little that gets translated from them to me, in terms of clip-based material. But we do have an Aurora RAID from Rorke (now Scale Logic) off which we run a HyperFS SAN — a very high-performance, file-level storage area network — that connects to all the rooms and lets us share material very easily.”

The Avids in editorial at Northern Lights are connected by Gigabit Ethernet, but Hengeveld's room is connected by Fibre. "I get very fast downloading of whatever I need. That system includes Mr. Wonderful, too, so we can share what we need to, when we need to. But I don't really share much of the Avid work except for reference files." For that, he goes back to raw camera footage. "I'd say about 80 percent of the time, I'm pulling that raw shoot material off of G-Technology drives. It's still sneaker-net on getting those source drives, and I don't think that's ever going to change," he says. "I sometimes get 6TB of footage in for certain jobs and you're not going to copy that all to a centrally located storage, especially when you'll end up using about a hundredth of that material."

The source drives are typically dupes from the production company, which more often than not is sister company Bodega. “These drives are not made for permanent storage,” he says. “These are transitional drives. But if you’re storing stuff that you want to access in five to six years, it’s really got to go to LTO or some other system.” It’s another reason he’s so committed to Flame and Lustre, he says. Both archive every project locally with its complete media, which can be then be easily dropped onto an LTO for safe long-term storage.

Time or money constraints can shift this basic workflow for Hengeveld, who sometimes receives a piece of a project from an editor that has been stripped of its color correction. “In that case, instead of loading in the raw material, I would load in the 15- or 30-second clip that they’ve created and work off of that. The downside with that is if the clip was shot with an adjustable format camera like a Red or Arri RAW, I lose that control. But at least, if they shoot it in Log-C, I still have the ability to have material that has a lot of latitude to work with. It’s not desirable, but for better stuff I almost always go back to the original source material and do a conform. But you sometimes are forced to make concessions, depending on how much time or budget the client has.”

A recent spot for IZOD, with color by Hengeveld.

Those same constraints, paired with advances in technology, also mean far fewer in-person client meetings. “So much of this stuff is being evaluated on their computer after I’ve done a grade or composite on it,” he says. “I guess they feel more trust with the companies they’re working with. And let’s be honest: when you get into these very detailed composites, it can be like watching paint dry. Yet, many times when I’m grading,  I love having a client here because I think the sum of two is always greater than one. I enjoy the interaction. I learn something and I get to know my client better, too. I find out more about their subjectivity and what they like. There’s a lot to be said for it.”

Hengeveld also knows that his clients can often be more efficient at their own offices, especially when handling multiple projects at once, influencing their preferences for virtual meetings. “That’s the reality. There’s good and bad about that trade off. But sometimes, nothing beats an in-person session.”

Our main image is from NBC’s Rokerthon.

Storage Workflows for 4K and Beyond

Technicolor-Postworks and Deluxe Creative Services share their stories.

By Beth Marchant

Once upon a time, an editorial shop was a sneaker-net away from the other islands in the pipeline archipelago. That changed when the last phases of the digital revolution set many traditional editorial facilities into swift expansion mode to include more post production services under one roof.

The consolidating business environment in the post industry of the past several years then brought more of those expanded, overlapping divisions together. That’s a lot for any network to handle, let alone one containing some of the highest quality and most data-dense sound and pictures being created today. The networked storage systems connecting them all must be robust, efficient and realtime without fail, but also capable of expanding and contracting with the fluctuations of client requests, job sizes, acquisitions and, of course, evolving technology.

There’s a “relief valve” in the cloud and object storage, say facility CTOs minding the flow, but it’s still a delicate balance between local pooled and tiered storage and iron-clad cloud-based networks their clients will trust.

Technicolor-PostWorks
Joe Beirne, CTO of Technicolor-PostWorks New York, is probably as familiar as one can be with complex nonlinear editorial workflows. A user of Avid’s earliest NLEs, an early adopter of networked editing and an immersive interactive filmmaker who experimented early with bluescreen footage, Beirne began his career as a technical advisor and producer for high-profile mixed-format feature documentaries, including Michael Moore’s Fahrenheit 9/11 and the last film in Godfrey Reggio’s KOYAANISQATSI trilogy.

Joe Beirne

In his 11 years as a technology strategist at Technicolor-PostWorks New York, Beirne has also become fluent in evolving color, DI and audio workflows for clients such as HBO, Lionsgate, Discovery and Amazon Studios. CTO since 2011, when PostWorks NY acquired the East Coast Technicolor facility and the color science that came with it, he now oversees the increasingly complicated ecosystem that moves and stores vast amounts of high-resolution footage and data while simultaneously holding those separate and variously intersecting workflows together.

As the first post facility in New York to handle petabyte levels of editorial-based storage, Technicolor-PostWorks learned early how to manage the data explosion unleashed by digital cameras and NLEs. “That’s not because we had a petabyte SAN or NAS or near-line storage,” explains Beirne. “But we had literally 25 to 30 Avid Unity systems that were all in aggregate at once. We had a lot of storage spread out over the campus of buildings that we ran on the traditional PostWorks editorial side of the business.”

The TV finishing and DI business that developed at PostWorks in 2005, when Beirne joined the company (he was previously a client), eventually necessitated a different route. “As we’ve grown, we’ve expanded out to tiered storage, as everyone is doing, and also to the cloud,” he says. “Like we’ve done with our creative platforms, we have channeled our different storage systems and subsystems to meet specific needs. But they all have a very promiscuous relationship with each other!”

TPW’s high-performance storage in its production network is a combination of local or semi-locally attached near-line storage tethered by several Quantum StorNext SANs, all of it air-gapped — or physically segregated — from the public Internet. “We’ve got multiple SANs in the main Technicolor mothership on Leroy Street with multiple metadata controllers,” says Beirne. “We’ve also got some client-specific storage, so we have a SAN that can be dedicated to a particular account. We did that for a particular client who has very restrictive policies about shared storage.”

TPW’s editorial media, for the most part, resides in Avid’s ISIS system and is in the process of transitioning to its software-defined replacement, Nexis. “We have hundreds of Avids, a few Adobe and even some Final Cut systems connected to that collection of Nexis and ISIS and Unity systems,” he says. “We’re currently testing the Nexis pipeline for our needs but, in general, we’re going to keep using this kind of storage for the foreseeable future. We have multiple storage servers that serve that part of our business.”

Beirne says most every project the facility touches is archived to LTO tape. “We have a little bit of disc-to-tape archiving going on for the same reasons everybody else does,” he adds. “And some SAN volume hot spots that are all SSD (solid state drives) or a hybrid.” The facility is also in the process of improving the bandwidth of its overall switching fabric, both on the Fibre Channel side and on the Ethernet side. “That means we’re moving to 32Gb and multiple 16Gb links,” he says. “We’re also exploring a 40Gb Ethernet backbone.”

Technicolor-Postworks 4K theater at their Leroy Street location.

This backbone, he adds, carries an exponential amount of data every day. “Now we have what are like two nested networks of storage at a lot of the artist workstations,” he explains. “That’s a complicating feature. It’s this big, kind of octopus, actually. Scratch that: it’s like two octopi on top of one another. That’s not even mentioning the baseband LAN network that interweaves this whole thing. They, of course, are now getting intermixed because we are also doing IT-based switching. The entire, complex ecosystem is evolving and everything that interacts with it is evolving right along with it.”

The cloud is providing some relief and handles multiple types of storage workflows across TPW’s various business units. “Different flavors of the commercial cloud, as well as our own private cloud, handle those different pools of storage outside our premises,” Beirne says. “We’re collaborating right now with an international account in another territory and we’re touching their storage envelope through the Azure cloud (Microsoft’s enterprise-grade cloud platform). Our Azure cloud and theirs touch and we push data from that storage back and forth between us. That particular collaboration happened because we both had an Azure instance, and those kinds of server-to-server transactions that occur entirely in the cloud work very well. We also had a relationship with one of the studios in which we made a similar connection through Amazon’s S3 cloud.”

Given the trepidations most studios still have about the cloud, Beirne admits there will always be some initial, instinctive mistrust from both clients and staff when you start moving any content away from computers that are not your own and you don’t control. “What made that first cloud solution work, and this is kind of goofy, is we used Aspera to move the data, even though it was between adjacent racks. But we took advantage of the high-bandwidth backbone to do it efficiently.”

Both TPW in New York and Technicolor in Los Angeles have since leveraged the cloud aggressively. “We have our own cloud that we built, and big Technicolor has a very substantial purpose-built cloud, as well as Technicolor Pulse, their new storage-related production service in the cloud. They also use object storage and have some even newer technology that will be launching shortly.”

The caveat to moving any storage-related workflow into the cloud is thorough and continual testing, says Beirne. “Do I have more concern for my clients’ media in the cloud than I do when sending my own tax forms electronically? Yea, I probably do,” he says. “It’s a very, very high threshold that we need to pass. But that said, there’s quite a bit of low-impact support stuff that we can do on the cloud. Review and approval stuff has been happening in the cloud for some time.” As a result, the facility has seen an increase, like everyone else, in virtual client sessions, like live color sessions and live mix sessions from city to city or continent to continent. “To do that, we usually have a closed circuit that we open between two facilities and have calibrated displays on either end. And, we also use PIX and other normal dailies systems.”

“How we process and push this media around ultimately defines our business,” he concludes. “It’s increasingly bigger projects that are made more demanding from a computing point of view. And then spreading that out in a safe and effective way to where people want to access it, that’s the challenge we confront every single day. There’s this enormous tension between the desire to be mobile and open and computing everywhere and anywhere, with these incredibly powerful computer systems we now carry around in our pockets and the bandwidth of the content that we’re making, which is high frame rate, high resolution, high dynamic range and high everything. And with 8K — HDR and stereo wavefront data goes way beyond 8K and what the retina even sees — and 10-bit or more coming in the broadcast chain, it will be more of the same.” TPW is already doing 16-bit processing for all of its film projects and most of its television work. “That’s piles and piles and piles of data that also scales linearly. It’s never going to stop. And we have a VR lab here now, and there’s no end of the data when you start including everything in and outside of the frame. That’s what keeps me up at night.”

Deluxe Creative Services
Before becoming CTO at Deluxe Creative Services, Mike Chiado had a 15-year career as a color engineer and image scientist at Company 3, the grading and finishing powerhouse acquired by Deluxe in 2010. He now manages the pipelines of a commercial, television and film Creative Services division that encompasses not just dailies, editorial and color, but sound, VFX, 3D conversion, virtual reality, interactive design and restoration.

Mike Chiado

That’s a hugely data-heavy load to begin with, and as VR and 8K projects become more common, managing the data stored and coursing through DCS’ network will get even more demanding. Branded companies currently under the monster Deluxe umbrella include Beast, Company 3, DDP, Deluxe/Culver City, Deluxe VR, Editpool, Efilm, Encore, Flagstaff Studios, Iloura, Level 3, Method Studios, StageOne Sound, Stereo D, and Rushes.

“Actually, that’s nothing when you consider that all the delivery and media teams from Deluxe Delivery and Deluxe Digital Cinema are downstream of Creative Services,” says Chiado. “That’s a much bigger network and storage challenge at that level.” Still, the storage challenges of Chiado’s segment are routinely complicated by the twin monkey wrenches of the collaborative and computer kind that can unhinge any technology-driven art form.

“Each area of the business has its own specific problems that recur: television has its issues, commercial work has its issues and features have theirs. For us, commercials and features are more alike than you might think, partly due to the constantly changing visual effects but also due to shifting schedules. Television is much more regimented,” he says. “But sometimes we get hard drives in on a commercial or feature and we think, ‘Well, that’s not what we talked about at all!’”

Company 3’s file-based digital intermediate work quickly clarified Chiado’s technical priorities. “The thing that we learned early on is realtime playback is just so critical,” he says. “When we did our very first file-based DI job 13 years ago, we were so excited that we could display a certain resolution. OK, it was slipping a little bit from realtime, maybe we’ll get 22 frames a second, or 23, but then the director walked out after five minutes and said, ‘No. This won’t work.’ He couldn’t care less about the resolution because it was only ever about realtime and solid playback. Luckily, we learned our lesson pretty quickly and learned it well! In Deluxe Creative Services, that still is the number one priority.”

It’s also helped him cut through unnecessary sales pitches from storage vendors unfamiliar with Deluxe’s business. “When I talk to them, I say, ‘Don’t tell me about bit rates. I’m going to tell you a frame rate I want to hit and a resolution, and you tell me if we can hit it or not with your solution. I don’t want to argue bits; I want to tell you this is what I need to do and you’re going to tell me whether or not your storage can do that.’ The storage vendors that we’re going to bank our A-client work on better understand fundamentally what we need.”

Because some of the Deluxe company brands share office space — Method and Company 3 moved into a 63,376-square-foot former warehouse in Santa Monica a few years ago — they have access to the same storage infrastructure. “But there are often volumes specially purpose-built for a particular job,” says Chiado. “In that way, we’ve created volumes focused on supporting 4K feature work and others set up specifically for CG desktop environments that are shared across 400 people in that one building. We also have similar business units in Company 3 and Efilm, so sometimes it makes sense that we would want, for artist or client reasons, to have somebody in a different location from where the data resides. For example, having the artist in Santa Monica and the director and DP in Hollywood is something we do regularly.”

Chiado says Deluxe has designed and built with network solution and storage solution providers a system “that suits our needs. But for the most part, we’re using off-the-shelf products for storage. The magic is how we tune them to be able to work with our systems.”

Those vendors include Quantum, DDN Storage and EMC’s network-attached storage Isilon. “For our most robust needs, like 4K feature workflows, we rely on DDN,” he says. “We’ve actually already done some 8K workflows. Crazy world we live in!” For long-term archiving, each Deluxe Creative Service location worldwide has an LTO-tape robot library. “In some cases, we’ll have a near-line tier two volume that stages it. And for the past few years, we’re using object storage in some locations to help with that.”

Although the entire group of Deluxe divisions and offices are linked by a robust 10GigE network that sometimes takes advantage of dark fiber, unused fiber optic cables leased from larger fiber-optic communications companies, Chiado says the storage they use is all very specific to each business unit. “We’re moving stuff around all the time but projects are pretty much residing in one spot or another,” he says. “Often, there are a thousand reasons why — it may be for tax incentives in a particular location, it may be for project-specific needs. Or it’s just that we’re talking about the London and LA locations.”

With one eye on the future and another on budgets, Chiado says pooled storage has helped DCS keep costs down while managing larger and larger subsets of data-heavy projects. “We are always on the lookout for ways to handle the next thing, like the arrival of 8K workflows, but we’ve gained huge, huge efficiencies from pooled storage,” he says. “So that’s the beauty of what we build, specific to each of our world locations. We move it around if we have to between locations but inside that location, everybody works with the content in one place. That right there was a major efficiency in our workflows.”

Beyond that, he says, how to handle 8K is still an open question. “We may have to make an island, and it’s been testing so far, but we do everything we can to keep it in one place and leverage whatever technology that’s required for the job,” Chiado says. “We have isolated instances of SSDs (solid-state drives) but we don’t have large-scale deployment of SSDs yet. On the other end, we’re working with cloud vendors, too, to be able to maximize our investments.”

Although the company is still working through cloud security issues, Chiado says Deluxe is “actively engaging with cloud vendors because we aren’t convinced that our clients are going to be happy with the security protocols in place right now. The nature of the business is we are regularly involved with our clients and MPAA and have ongoing security audits. We also have a group within Deluxe that helps us maintain the best standards, but each show that comes in may have its own unique security needs. It’s a constant, evolving process. It’s been really difficult to get our heads and our clients’ heads around using the cloud for rendering, transcoding or for storage.”

Luckily, that’s starting to change. “We’re getting good traction now, with a few of the studios getting ready to greenlight cloud use and our own pipeline development to support it,” he adds. “They are hand in hand. But I think once we move over this hurdle, this is going to help the industry tremendously.”

Beyond those longer-term challenges, Chiado says the day-to-day demands of each division haven’t changed much. “Everybody always needs more storage, so we are constantly looking at ways to make that happen,” he says. “The better we can monitor our storage and make our in-house people feel comfortable moving stuff off near-line to tape and bring it back again, the better we can put the storage where we need it. But I’m very optimistic about the future, especially about having a relief valve in the cloud.”

Our main image is the shared 4K theater at Company 3 and Method.

VFX Storage: The Molecule

Evolving to a virtual private local cloud?

By Beth Marchant

VFX artists, supervisors and technologists have long been on the cutting-edge of evolving post workflows. The networks built to move, manage, iterate, render and put every pixel into one breathtaking final place are the real super heroes here, and as New York’s The Molecule expands to meet the rising demand for prime-time visual effects, it pulls even more power from its evolving storage pipeline in and out of the cloud.

The Molecule CEO/CTO Chris Healer has a fondness for unusual workarounds. While studying film in college, he built a 16mm projector out of Legos and wrote a 3D graphics library for DOS. In his professional life, he swiftly transitioned from Web design to motion capture and 3D animation. He still wears many hats at his now bicoastal VFX and VR facility, The Molecule, which he founded in New York in 2005, including CEO, CTO, VFX supervisor, designer, software developer and scientist. In those intersecting capacities, Healer has created the company's renderfarm, developed and automated its workflow, linking and preview tools, and designed and built out its cloud-based compositing pipeline.

When the original New York office went into growth mode, Healer (pictured at his new, under-construction facility) turned to GPL Technologies, a VFX and post-focused digital media pipeline and data infrastructure developer, to help him build an entirely new network foundation for the new location the company will move to later this summer. “Up to this point, we’ve had the same system and we’ve asked GPL to come in and help us create a new one from scratch,” he says. “But any time you hire anyone to help with this kind of thing, you’ve really got to do your own research and figure out what makes sense for your artists, your workflows and, ultimately, your bottom line.”

The new facility will start with 65 seats and expand to more than 100 within the next year to 18 months. Current clients include the major networks, Showtime, HBO, AMC, Netflix and director/producer Doug Liman.

Netflix’s Unbreakable Kimmy Schmidt is just one of the shows The Molecule works on.

Healer’s experience as an artist, developer, supervisor and business owner has given him a seasoned perspective on how to develop VFX pipeline work. “There’s a huge disparity between what the conventional user wants to do, i.e. share data, and the much longer dialog you need to have to build a network. Connecting and sharing data is really just the beginning of a very long story that involves so many other factors: how many things are you connecting to? What type of connection do you have? How far away are you from what you’re connecting to? How much data are you moving, and it is all at once or a continuous stream? Users are so different, too.”

Complicating these questions, he says, is a facility's willingness to embrace new technology before it's been vetted in the market. "I generally resist the newest technologies," he says. "My instinct is that I would prefer an older system that's been tested for years upon years. You go to NAB and see all kinds of cool stuff that appears to be working the way it should. But it hasn't been tried in different kinds of circumstances, or it's being pitched to the broadcast industry and may not work well for VFX."

Making a Choice
Customer feedback convinced him to go with EMC's Isilon system, and the hardware has already been delivered to the new office. "We won't install it until construction is complete, but all the documentation is pointing in the right direction," he says. "Still, it's a bit of a risk until we get it up and running."

Last October, Dell announced it would acquire EMC in a deal that is set to close in mid-July. That should suit The Molecule just fine — most of its artists' computers are either Dell or HP machines running Nvidia graphics.

A traditional NAS configuration on a single GigE line can only do up to 100MB per second. "A 10GigE connection running NFS can, theoretically, do 10 times that," says Healer. "But 10GigE works slightly differently, like an LA freeway, where you don't change the speed limit but you change the number of lanes and the on- and off-ramp lights to keep the traffic flowing. It's not just a bigger gun for a bigger job, but more complexity in the whole system. Isilon seems to do that very well, and it's why we chose them."
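Healer's freeway analogy reduces to simple arithmetic. A rough sketch of how many streams fit on each link; the ~80% protocol efficiency and the ~30MB/s ProRes HD stream rate are assumptions for illustration, and real NFS throughput depends heavily on tuning:

```python
# Rough stream-count arithmetic behind the GigE/10GigE comparison.
# Efficiency and per-stream rate are illustrative assumptions.

def usable_mb_per_s(link_gbps, efficiency=0.8):
    """Approximate usable payload bandwidth of an Ethernet link in MB/s."""
    return link_gbps * 1000 / 8 * efficiency

def max_streams(link_gbps, stream_mb_per_s):
    """Whole constant-rate media streams that fit on the link."""
    return int(usable_mb_per_s(link_gbps) // stream_mb_per_s)

print(usable_mb_per_s(1))      # ~100 MB/s, matching the GigE figure above
print(max_streams(1, 30))      # GigE: 3 streams
print(max_streams(10, 30))     # 10GigE: 33 streams
```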

His company’s fast growth, Healer says, has “presented a lot of philosophical questions about disk and RAID redundancy, for example. If you lose a disk in RAID-5 you’re OK, but if two fail, you’re screwed. Clustered file systems like GlusterFS and OneFS, which Isilon uses, have a lot more redundancy built in so you could lose quite a lot of disks and still be fine. If your number is up and on that unlucky day you lost six disks, then you would have backup. But that still doesn’t answer what happens if you have a fire in your office or, more likely, there’s a fire elsewhere in the building and it causes the sprinklers to go off. Suddenly, the need for off-site storage is very important for us, so that’s where we are pushing into next.”

Healer homed in on several metrics to help him determine the right path. "The solutions we looked at had to have the following: DR, or disaster recovery, replication, scalability, off-site storage, undelete and versioning snapshots. And they don't exactly overlap. I talked to a guy just the other day at Rsync.net, which does cloud storage of off-site backups (not to be confused with the Unix command, though they are related). That's the direction we're headed. But VFX is just such a hard fit for any of these new data centers because they don't want to accept and sync 10TB of data per day."

A rendering of The Molecule NYC's new location.

His current goal is simply to sync material between the two offices. "The holy grail of that scenario is that neither office has the definitive master copy of the material and there is a floating cloud copy somewhere out there that both offices are drawing from," he says. "There's a process out there called 'sharding,' as in a shard of glass, that MongoDB and Scality and other systems use that says that the data is out there everywhere but is physically diverse. It's local but local against synchronization of its partners. This makes sense, but not if you're moving terabytes."

The model Healer is hoping to implement is to "basically offshore the whole company," he says. "We've been working for the past few months with a New York metro startup called Packet, which has a really unique concept of a virtual private local cloud. It's a mouthful, but it's where we need to be." If The Molecule is doing work in New York City, Healer points out, Packet is close enough that network transmissions are fast enough and "it's as if the machines were on our local network, which is amazing. It's huge. If the Amazon cloud data center is 500 miles away from your office, that drastically changes how well you can treat those machines as if they are local. I really like this movement of virtual private local that says, 'We're close by, we're very secure and we have more capacity than individual facilities could ever want.' But they are off-site, and the multiple other companies that use them are in their own discrete containers that never cross. Plus, you pay per use — basically per hour and per resource. In my ideal future world, we would have some rendering capacity in our office, some other rendering capacity at Packet and off-site storage at Rsync.net. If that works out, we could potentially virtualize the whole workflow and join our New York and LA offices and any other satellite office we want to set up in the future."
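The "sharding" idea Healer mentions can be sketched as deterministic hashing: each object key maps to one of several physically separate shards, so either office can compute an asset's home without a central lookup. The shard names and key format below are hypothetical:

```python
# Minimal sketch of hash-based sharding: an object key deterministically
# maps to one shard. Shard names and the key format are made up here.
import hashlib

SHARDS = ["nyc-01", "nyc-02", "la-01", "la-02"]

def shard_for(key):
    """Map an object key to one of the physically diverse shards."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Both offices compute the same home for the same asset.
print(shard_for("show_042/shot_0170/comp_v003.exr"))
```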

The VFX market, especially in New York, has certainly come into its own in recent years. “It’s great to be in an era when nearly every single frame of every single shot of both television and film is touched in some way by visual effects, and budgets are climbing back and the tax credits have brought a lot more VFX artists, companies and projects to town,” Healer says. “But we’re also heading toward a time when the actual brick-and-mortar space of an office may not be as critical as it is now, and that would be a huge boon for the visual effects industry and the resources we provide.”

Storage Roundtable

Manufacturers weigh in on trends, needs.

By Randi Altman

Storage is the backbone of today’s workflows, from set to post to archive. There are many types of storage offerings from many different companies, so how do you know what’s right for your needs?

In an effort to educate, we gathered questions from users in the field. “If you were sitting across a table from makers of storage, what would you ask?”

The following is a virtual roundtable featuring a diverse set of storage makers answering a variety of questions. We hope it’s helpful. If you have a question that you would like to ask of these companies, feel free to email me directly at randi@postPerspective.com and I will get them answered.

SCALE LOGIC’S BOB HERZAN
What are the top three requests you get from your post clients?
A post client’s primary concern is reliability. They want to be assured that the storage solution they are buying supports all of their applications and will provide the performance each application will need when they need it. The solution needs the ability to interact with MAM or PAM solutions and they need to be able to search and retrieve their assets and to future proof, scale and manage the storage in a tiered infrastructure.

Secondly, the client wants to be able to use their content in a way that makes sense. Assets need to be accessible to the stakeholders of a project, no matter how big or complex the storage ecosystem.

Finally, the client wants to see the options available to develop a long-term archiving process that can assure the long-term preservation of their finished assets. All three of these areas can be very daunting to our customers, and being able to wade through all of the technology options and make the right choices for each business is our specialty.

How should post users decide between SAN, NAS and object storage?
There are a number of factors to consider, including overall bandwidth, individual client bandwidth, project lifespan and overall storage requirements. Because high-speed online storage typically has the highest infrastructure costs, a tiered approach makes the most sense for many facilities, where SAN, NAS, cloud or object storage may all be used at the same time. In this case, the speed with which a user will need access to a project is directly related to the type of storage the project is stored on.

Scale Logic uses a consultative approach with our customers to architect a solution that will fit both their workflow and budget requirements. We look at the time it takes to accomplish a task, what risks, if any, are acceptable, the size of the assets and the obvious, but nonetheless, vital budgetary considerations. One of the best tools in our toolbox is our HyperFS file system, which allows customers the ability to choose any one of four tiers of storage solutions while allowing full scalability to incorporate SAN, NAS, cloud and object storage as they grow.

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
Above everything else, we want to tailor a solution to the needs of the clients. With our consultative approach, we look not only at the requirements to build the best solution for today, but also at the ability to grow and scale up to the needs of tomorrow. We look at scalability not just from the perspective of having more ability to do things, but of doing the most with what we have. While even our entry-level system is capable of doing 10 streams of 4K, it's equally, if not more, important to make sure that those streams are directed to the people who need them most while allowing other users access at lower resolutions.

GENESIS Unlimited

Our Advanced QoS can learn the I/O patterns/behavior for an application while admins can give those applications a “realtime” or “non-realtime” status. This means “non-realtime” applications auto-throttle down to allow realtime apps the bandwidth. Many popular applications come pre-learned, like Finder, Resolve, Premiere or Flame. In addition, admins can add their own apps.
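The realtime/non-realtime behavior described above can be sketched as a priority allocator: realtime applications are granted their full request first, and non-realtime applications auto-throttle to share whatever bandwidth remains. This is a toy model, not Scale Logic's actual QoS implementation:

```python
# Illustrative sketch of QoS auto-throttling: realtime apps are served in
# full first; non-realtime apps split the leftover bandwidth.
# Not the vendor's real algorithm.

def allocate(total_mb_s, requests):
    """requests maps app name -> (requested MB/s, is_realtime).

    Returns a dict of granted MB/s per app."""
    grants = {}
    remaining = total_mb_s
    # Serve realtime applications in full first (up to link capacity).
    for app, (want, realtime) in requests.items():
        if realtime:
            grants[app] = min(want, remaining)
            remaining -= grants[app]
    # Non-realtime applications share the leftover proportionally.
    nonrealtime = {a: w for a, (w, rt) in requests.items() if not rt}
    total_want = sum(nonrealtime.values())
    for app, want in nonrealtime.items():
        grants[app] = remaining * want / total_want if total_want else 0.0
    return grants

# Resolve (realtime) gets its full 800MB/s; Finder is throttled to the rest.
print(allocate(1000, {"Resolve": (800, True), "Finder": (400, False)}))
```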

What do you expect to see as the next trend relating to storage?
Storage always evolves. Whatever is next in post production storage is already in use elsewhere as we are a pretty risk-averse group, for obvious reasons. With that said, the adoption of Unified Storage Platforms and hybrid cloud workflows will be the next big thing for big media producers like post facilities. The need for local online and nearline storage must remain for realtime, resolution-intense processes and data movement between tiers, but the decision-making process and asset management is better served globally by increased shared access and creative input.

The entertainment industry has pushed the limits of storage for over 30 years with no end in sight. In addition, the ability to manage storage tiers and collaborate both on-prem and off will dictate the type of storage solutions our customers will need to invest in. The evolution of storage needs continues to be driven by the consumer: TVs and displays have moved to demanding 4K content from the producers. The increased success of small professional cameras allows more access to multi-camera shoots. However, as performance and capacity continue to grow for our customers, the challenge becomes managing large data farms effectively, efficiently and affordably. That is on the horizon in our future solution designs. Expensive, proprietary hardware will be a thing of the past and open, affordable storage will be the norm, with user-friendly and intuitive software developed to automate, simplify and monetize our customers' assets while maintaining industry compatibility.

SMALL TREE‘S CORKY SEEBER
How do your solutions work with clients’ existing storage? And who is your typical client?
There are many ways to have multiple storage solutions co-exist within the post house; most of these choices are driven by the intended use of the content and the size and budget of the customer. The ability to migrate content from one storage medium to another is key to allowing customers to take full advantage of our shared storage solutions.

Our goal is to provide simple solutions for the small to medium facilities, using Ethernet connectivity from clients to the server to keep costs down and make support of the storage less complicated. Ethernet connectivity also enables the ability to provide access to existing storage pools via Ethernet switches.

What steps have you taken to work with technologies outside of your own?
Today’s storage providers need to actively design their products to allow the post house to maximize the investment in their shared storage choice. Our custom software is open-sourced based, which allows greater flexibility to integrate with a wider range of technologies seamlessly.

Additionally, the actual communication between products from different companies can be a problem. Storage designs that allow the ability to use copper or optical Ethernet and Fibre Channel connectivity provide a wide range of options to ensure all aspects of the workflow can be supported from ingest to archive.

What challenges, if any, do larger drives represent?
Today’s denser drives, while providing more storage space within the same physical footprint, do have some characteristics that need to be factored in when making your storage solution decisions. Larger drives will take longer to configure and rebuild data sets once a failed disk occurs, and in some cases may be slightly slower than less dense disk drives. You may want to consider using different RAID protocols or even using software RAID protection rather than hardware RAID protection to minimize some of the challenges that the new, larger disk drives present.

When do you recommend NAS over SAN deployments?
This is an age-old question as both deployments have advantages. Typically, NAS deployments make more sense for smaller customers as they may require less networking infrastructure. If you can direct connect all of your clients to the storage and save the cost of a switch, why not do that?

SAN deployments make sense for larger customers who have such a large number of clients that making direct connections to the server is impractical or impossible: these require additional software to keep everything straight.

In the past, SAN deployments were viewed as the superior option, mostly due to Fibre Channel being faster than Ethernet. With the wide acceptance of 10GbE, there is a convergence of sorts, and NAS performance is no longer considered a weakness compared to SAN. Performance aside, a SAN deployment makes more sense for very large customers with hundreds of clients and multiple large storage pools that need to support universal access.

QUANTUM‘S JANET LAFLEUR
What are the top three requests that you get from post users?
1) Shared storage with both SAN and NAS access to collaborate more broadly across groups. For streaming high-resolution content to editorial workstations, there's nothing that can match the performance of shared SAN storage, but not all production team members need the power of SAN.

For example, animation and editorial workflows often share content. While editorial operations stream content from a SAN connection, a NAS gateway using a higher-speed IP protocol optimized for video (such as our StorNext DLC) can be used for rendering. By working with NAS, producers and other staff who primarily access proxies, images, scripts and other text documents can more easily access this content directly from their desktops. Our Xcellis workflow storage offers NAS access out of the box, so content can be shared over IP and over Fibre Channel SAN.

2) A starting point for smaller shops that scales smoothly. For a small shop with a handful of workstations, it can be hard to find a storage solution that fits into the budget now but doesn’t require a forklift upgrade later when the business grows. That’s one reason we built Xcellis workflow storage with a converged architecture that combines metadata storage and content storage. Xcellis provides a tighter footprint for smaller sites, but still can scale up for hundreds of users and multiple petabytes of content.

3) Simple setup and management of storage. No one wants to spend time deploying, managing and upgrading complex storage infrastructure, especially not post users who just want storage that supports their workflow. That’s why we are continuing to enhance StorNext Connect, which can not only identify problems before they affect users but also reduce the risk of downtime or degraded performance by eliminating error-prone manual tasks. We want our customers to be able to focus on content creation, not on managing storage.

How should post users decide between SAN, NAS and object storage?
Media workflows are complex, with unique requirements at each step. SAN, NAS and object storage all have qualities that make them ideal for specific workflow functions.

SAN: High-resolution, high-image-quality content production requires low-latency, high-performance storage that can stream 4K or greater — plus HDR, HFR content — to multiple workstations without dropping frames. Fibre Channel SANs are the only way to ensure performance for multi-streaming this content.

Object storage: For content libraries that are being actively monetized, object storage delivers the disk-level performance needed for transcoding and reuse. Object storage also scales beyond the petabyte level, and the self-balancing nature of its erasure code algorithms makes replacing aging disks with next-generation ones much simpler and faster than is possible with RAID systems.

NAS: High-performance IP-based connections are ideal for enabling render server farms to access content from shared storage. The simplicity of deploying NAS is also recommended for low-bandwidth functions such as review and approval, plus DVD authoring, closed captioning and subtitling.

With an integrated, complete storage infrastructure, such as those built with our StorNext platform, users can work with any or all of these technologies — as well as digital tape and cloud — and target the right storage for the right task.

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
So much depends on the configuration: how many spindles, how many controllers, etc. At NAB 2016, our StorNext Pro 4K demo system delivered eight to 10 streams of 4K 10-bit DPX with headroom to stream more. The solution included four RAID-6 arrays of 24 drives each with redundant Xcellis Workflow Directors for an 84TB usable capacity in a neat 10U rack.

The StorNext platform allows users to scale performance and capacity independently. The need for more capacity can be addressed with the simple addition of Xcellis storage expansion arrays. The need for more performance can be met with an upgrade of the Xcellis Workflow Director to support more concurrent file systems.

PANASAS‘ DAVID SALLAK
What are the top three storage-related requests/needs that you get from your post clients or potential post clients?
They want native support for Mac, high performance and a system that is easier to grow and manage than SAN.

When comparing shared storage product choices, what are the advantages of NAS over SAN? Does the easier administration of NAS compared to SAN factor into your choice of storage?
NAS is easier to manage than SAN. Scale-out NAS is easier to grow than SAN, and is designed for high availability. If scale-out NAS could be as fast as SAN, then SAN buyers would be very attracted to scale-out NAS.

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
As many streams as possible. Post users always need more performance for future projects and media formats, so storage should support a lot of streams of ProRes HD or DNxHD and be capable of handling uncompressed DPX formats that come from graphics departments.

What do you expect to see as the next trend relating to storage? The thing that’s going to push storage systems even further?
Large post production facilities need greater scalability, higher performance, easier use, and affordable pricing.

HGST‘s JEFF GREENWALD
What are the top three requests you get from your post clients or potential post clients?
They’re looking for better ways to develop cost efficiencies of their workflows. Secondly, they’re looking for ways to improve the performance of those workflows. Finally, they’re looking for ways to improve and enhance data delivery and availability.

How should post users decide between SAN, NAS and object storage?
There are four criteria that customers must evaluate in order to make trade-offs between the various storage technologies as well as storage tiers. Customers must evaluate the quantity of data, the frequency of access and the latency requirements of data delivery; finally, they must balance these three evaluations against their financial budgets.

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
In order to calculate quantity of video streams you must balance available bandwidth as well as file sizes and data delivery requirements toward the desired capacity. Also, jitter and data loss continue to shrink available bandwidth for retries and resends.
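That arithmetic can be sketched directly: derate the raw link bandwidth by the fraction lost to retries and resends, then divide by the per-stream rate. All figures below are illustrative:

```python
# Effective stream count after jitter/loss overhead. Raw bandwidth,
# per-stream rate and loss fraction are all illustrative figures.

def effective_streams(raw_mb_s, stream_mb_s, loss_fraction):
    """Whole streams that fit after retransmission overhead."""
    usable = raw_mb_s * (1.0 - loss_fraction)
    return int(usable // stream_mb_s)

# A 10,000MB/s array, ~850MB/s per 4K 10-bit DPX stream, 5% loss:
print(effective_streams(10000, 850, 0.05))  # 11
```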

What do you expect to see as the next trend relating to storage, and what will push storage even further?
There are two trends that will dramatically transform the storage industry. The first is storage analytics, and the second is new and innovative usage of automatic meta-tagging of file data.

New technologies like SMR, optical and DNA-based object storage have not yet proven to be technology disruptors in storage, therefore it is likely that storage technology advancements will be evolutionary as opposed to revolutionary in the next 10 years.

G-TECH‘S MICHAEL WILLIAMS
Who is using your gear in the post world? What types of pros?
Filmmakers, digital imaging technicians, editors, audio technicians and photographers all use our solutions. These are the pros that capture, store, transfer and edit motion pictures, indie films, TV shows, music, photography and more. We offer everything from rugged standalone portable drives to high-performance RAID solutions to high-capacity network storage for editing and collaboration.

You recently entered the world of NAS storage. Can you talk about the types of pros taking advantage of that tech?
Our NAS customers run the gamut from DITs to production coordinators to video editors and beyond. With camera technology advancing so rapidly, they are looking for storage solutions that can fit within the demanding workflows they encounter every day.

With respect to episodic, feature film, commercial or in-house video production, storage needs are rising faster than ever before and many IT staffs are shrinking, so we introduced the G-Rack 12 NAS platform. We are able to use HGST's new 10TB enterprise-class hard drives to deliver 120TB of raw storage in a 2RU platform, providing the required collaboration and performance.

We have also made sure that our NAS OS on the G-Rack 12 is designed to be easily administered by the DIT, video editor or someone else on the production staff and not necessarily a Linux IT tech.

Production teams need to work smarter — DITs, video editors, DPs and the like can do the video shoot, get the video ingested into a device and get the post team working on it much faster than in days past. We all know that time is money; this is why we entered the NAS market.

Any other new tech on the horizon that might affect how you make storage or a certain technology that might drive your storage in other directions?
The integration of G-Technology — along with SanDisk and HGST — into Western Digital is opening up doors in terms of new technologies. In addition to our current high-capacity, enterprise-class HDD-based offerings, SSD devices are now available to give us the opportunity to expand our offerings to a broader range of solutions.

G-RACK 12This, in addition to new external device interfaces, is paving the way for higher-performance storage solutions. At NAB this year, we demonstrated Thunderbolt 3 and USB-C solutions with higher-performance storage media and network connectivity. We are currently shipping the USB solutions and the technology demos we gave provide a glimpse into future solutions. In addition, we’re always on the lookout for new form factors and technologies that will make our storage solutions faster, more powerful, more reliable and affordable.

What kind of connections do your drives have, and if it’s Thunderbolt 2 or Thunderbolt 3, can they be daisy chained?
When we look at interfaces, as noted above, there’s a USB Type-C for the consumer market as well as Thunderbolt and 10Gb Ethernet for the professional market.

As far as daisy-chaining, yes. Thunderbolt is a very flexible interface, supporting up to six devices in a daisy chain, on a single port. Thunderbolt 3 is a very new interface that is gaining momentum, one that will not only support extremely high data transfer speeds (up to 2.7GB/s) but also supports up to two 4K displays. We should also not forget that there are still more than 200M devices supporting Thunderbolt 1 and 2 connections.

LACIE‘S GASPARD PLANTROU
How do your solutions work with clients' existing storage? And who are your typical M&E users?
With M&E workflows, it’s rare that users work with a single machine and storage solution. From capture to edit to final delivery, our customers’ data interacts with multiple machines, storage solutions and users. Many of our storage solutions feature multiple interfaces such as Thunderbolt, USB 3.0 or FireWire so they can be easily integrated into existing workflows and work seamlessly across the entire video production process.

Our Rugged features Thunderbolt and USB 3.0. That means it's guaranteed to work with any standard computer or storage scenario on the market. Plus, it's shock-, dust- and moisture-resistant, allowing it to handle being passed around set or shipped to a client. LaCie's typical M&E users are mid-size post production studios and independent filmmakers and editors looking for RAID solutions.

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
The new LaCie 12big Thunderbolt 3 pushes up to 2600MB/s and can handle three streams of 4K 10-bit DPX at 24fps (assuming one stream is 864MB/s). In addition, it offers up to 96TB of capacity for editing and storing large amounts of 4K footage.
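The quoted per-stream figure can be sanity-checked: 10-bit RGB DPX packs three 10-bit samples into a padded 32-bit word, i.e. 4 bytes per 4K DCI pixel. The raw math lands near the 864MB/s assumption, with the difference plausibly being container and filesystem overhead:

```python
# Sanity check of the 4K 10-bit DPX per-stream rate. Three 10-bit samples
# are stored in one padded 32-bit word, so each pixel costs 4 bytes.

BYTES_PER_PIXEL = 4            # 3 x 10 bits, padded to 32 bits
WIDTH, HEIGHT, FPS = 4096, 2160, 24

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
stream_mb_s = frame_bytes * FPS / 1_000_000
print(round(stream_mb_s))      # ~849 MB/s raw; the article assumes 864 MB/s

# Streams that fit within the 12big's quoted 2600 MB/s:
print(2600 // 864)             # 3, matching the article
```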

What steps have you taken to work with technologies outside of your own?
With video file sizes growing exponentially, it is more important than ever for us to deliver fast, high-capacity solutions. Recent examples of this include bringing the latest technologies from Intel — Thunderbolt 3 — into our line. We work with engineers from our parent company, Seagate, to incorporate the latest enterprise class core technology for speed and reliability. Plus, we always ensure our solutions are certified to work seamlessly on Mac and Windows.

NETAPP‘S JASON DANIELSON
What are the top three requests that you get from post users?
As a storage vendor, the first three requests we’re likely to get are around application integration, bandwidth and cost. Our storage systems support well over 100 different applications across a variety of workflows (VFX, HD broadcast post, uncompressed 4K finishing) in post houses of all sizes, from boutiques in Paris to behemoths in Hollywood.

Bandwidth is not an issue, but the bandwidth per dollar is always top of mind for post. So working with the post house to design a solution with suitable bandwidth at an acceptable price point is what we spend much of our time doing.

How should post users decide between SAN, NAS and object storage?
The decision to go with SAN versus NAS depends on the facility’s existing connectivity to the workstations. Our E-Series storage arrays support quite a few file systems. For SAN, our systems integrators usually use Quantum StorNext, but we also see Scale Logic’s HyperFS and Tiger Technology’s metaSAN being used.

For NAS, our systems integrators tend to use EditShare XStream EFS and IBM GPFS. While there are rumblings of a transition away from Fibre Channel-based SAN to Ethernet-based NAS, there are complexities and costs associated with tweaking a 10GigE client network.

The object storage question is a bit more nuanced. Object stores have been so heavily promoted by storage vendors that there are many misconceptions about their value. For most of the post houses we talk to, object storage isn’t the answer today. While we have one of the most feature-rich and mature object stores out there, even we say that object stores aren’t for everyone. The questions we ask are:

1) Do you have 10 million files or more? 2) Do you store over a petabyte? 3) Do you have a need for long-term retention? 4) Does your infrastructure need to support multisite production?

If the answer to any of those questions is “yes,” then you should at least investigate object storage. A high-end boutique with six editors is probably not in this realm. It is true that an object store represents a slightly lower-cost bucket for an active archive (content repository), but it comes at a workflow cost of introducing a second tier to the architecture, which needs to be managed by either archive management or media asset management software. Unless such a software system is already in place, then the cost of adding one will drive up the complexity and cost of the implementation. I don’t mean to sound negative about object stores. I am not. I think object stores will play a major role in active-archive content storage in the future. They are just not a good option for a high-bandwidth production tier today or, possibly, ever.
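Danielson's screening questions amount to an "any yes means investigate" rule. As a rough sketch (the thresholds come from his list; the function and its name are ours, purely illustrative):

```python
def should_investigate_object_storage(file_count: int, total_bytes: int,
                                      long_term_retention: bool,
                                      multisite: bool) -> bool:
    """Return True if any of the four screening questions is a 'yes'."""
    PETABYTE = 10 ** 15
    return (file_count >= 10_000_000
            or total_bytes > PETABYTE
            or long_term_retention
            or multisite)

# A six-editor boutique: a couple million files, tens of TB, single site.
print(should_investigate_object_storage(2_000_000, 50 * 10**12, False, False))  # False
```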

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
In order to answer that question, we would ask the post house: “How many streams do you want to play back?” Let’s say we’re talking about true 4K (4096×2160), as opposed to the several other resolutions that get called 4K. At 4:4:4, 10-bit, that works out to 33MB per frame, or 792MB per second at 24fps. We would typically use flash (SSDs) for 4K playback. Our 2RU 24-SSD storage array, the EF560, can do a little over 9GB per second. That amounts to 11 streams.
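The frame-size arithmetic can be reproduced: 10-bit RGB DPX packs three 10-bit samples into each 32-bit word, i.e. four bytes per pixel. A sketch (the 9.5GB/s array figure is our reading of "a little over 9GB per second"):

```python
WIDTH, HEIGHT, FPS = 4096, 2160, 24
BYTES_PER_PIXEL = 4  # 3 x 10-bit RGB samples packed into one 32-bit word

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
print(round(frame_bytes / 2**20, 2))    # 33.75 MiB/frame -- the "33MB" above

stream_bytes_per_s = frame_bytes * FPS  # ~810 MiB/s per stream

ARRAY_BYTES_PER_S = 9.5e9               # assumption: "a little over 9GB/s"
print(int(ARRAY_BYTES_PER_S // stream_bytes_per_s))  # 11 streams
```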

But that is only half the answer. This storage array is usually deployed under a parallel file system, which will aggregate the bandwidth of several arrays for shared editing purposes. A larger installation might have eight storage arrays — each with 18 SSDs (to balance bandwidth and cost) — and provide sustained video playback for 70 streams.

What do you expect to see as the next trend relating to storage? What’s going to push storage systems even further?
The introduction of larger, more cost-effective flash drives (SSDs) will have a drastic effect on storage architectures over the next three years. We are now shipping 15TB SSDs. That is a petabyte of extremely fast storage in six rack units. We think the future is flash production tiers in front of object-store active-archive tiers. This will eliminate the need for archive managers and tape libraries in most environments.

HARMONIC‘S ANDY WARMAN
What are the top three requests that you hear from your post clients or potential post clients?
The most common request is for sustained performance. This is an important aspect since you do not want performance to degrade due to the number of concurrent users, the quantity of content, how full the storage is, or the amount of time the storage has been in service.

Another aspect related to this is the ability to support high-write and -read bandwidth. Being able to offer equal amounts of read and write bandwidth can be very beneficial for editing and transcode workflows, versus solutions that have high-read bandwidth, but relatively low-write performance. Customers are also looking for good value for money. Generally, we would point to value coming from the aforementioned performance as well as cost-effective expansion.

You guys have a “media-aware” solution for post. Can you explain what that is and why you opted to go this way?
Media-aware storage refers to the ability to store different media types in the most effective manner for the file system. A MediaGrid storage system supports multiple block sizes, rather than a single block size for all media types. In this way, video assets, graphics, audio and project files can each use block sizes that make reading and writing data more efficient. This type of file I/O “tuning” provides additional performance gains for media access, meaning that video could use, say, 2MB blocks; graphics and audio, 512KB; and projects and other files, 128KB. Not only can different block sizes be used by different media types, they are also configurable, so UHD files could, say, use 8MB block sizes.
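In practice, this "media-aware" tuning is a per-media-type block-size lookup. A minimal sketch of the idea, using the example sizes quoted above (the table and fallback policy are ours, not Harmonic's actual implementation):

```python
KiB, MiB = 1024, 1024 * 1024

# Example block-size table; values match the sizes mentioned above.
BLOCK_SIZES = {
    "video":    2 * MiB,    # HD video assets
    "uhd":      8 * MiB,    # UHD files, configurable upward
    "graphics": 512 * KiB,
    "audio":    512 * KiB,
    "project":  128 * KiB,  # projects and other small files
}

def block_size_for(media_type: str) -> int:
    # Fall back to the smallest block for unclassified files (our choice).
    return BLOCK_SIZES.get(media_type, 128 * KiB)

print(block_size_for("video") // KiB)  # 2048
```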

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
MediaGrid has no practical capacity or bandwidth limit, so we can build a storage system that suits the customer’s needs. Sizing a system becomes a case of balancing bandwidth and capacity by selecting the appropriate number of drives and drive size(s). The system is built on SAS drives, with multiple fully redundant 10 Gigabit Ethernet connections to client workstations and attached devices, and redundant 12Gb SAS interconnects between storage expansion nodes. This means we have high-speed connectivity within the storage as well as out to clients.

As needs change, the system can be expanded online with all users maintaining full access. Bandwidth scales in a linear fashion, and because there is a single namespace in MediaGrid, the entire storage system can be treated as a single drive, or divided up and granted user-level rights to folders within the file system.

Performance is further enhanced by the use of parallel access to data throughout the storage system. The file system provides a map to where all media is stored or is to be stored on disk. Data is strategically placed across the whole storage system to provide the best throughput. Clients simultaneously read and write data through the 10 Gigabit network to all network attached storage nodes rather than data being funneled through a single node or data connection. The result is that performance is the same whether the storage system is 5% or 95% full.

What do you expect to see as the next trend relating to storage? What’s going to push storage systems even further?
The advent of UHD has driven demands on storage further as codecs and therefore data throughput and storage requirements have increased significantly. Faster and more readily accessible storage will continue to grow in importance as delivery platforms continue to expand and expectations for throughput of storage systems continue to grow. We will use whatever performance and storage capacity is available, so offering more of both is inevitable to feed our needs for creativity and storytelling.

JMR’s STEVE KATZ
What are the top three storage-related requests you get from post users?
The most requested is ease of installation and operation. The JMR Share is delivered with euroNAS OS on mirrored SSD boot disks, with enough processing power and memory to support efficient, high-volume workflows, and a perpetual license covering the amount of storage requested, from the 20TB minimum to the “unlimited” maximum. It’s intuitive to use and comfortable for anyone familiar with popular browsers.

Next is compatibility and interoperability with clients running various hardware, operating systems and applications.

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
This can all be calculated by usable bandwidth and data transfer rates, which as with any networked storage can be limited by the network itself. For those using a good 10GbE switch, the network limits data rates to 1250MB/s maximum, which can support more than 270 streams of DNxHD 36, but only one stream of 4K 10-bit “film” resolution. Our product can support ~1800MB/s in a single 16-disk appliance, but without a very robust network this can’t be achieved.
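Katz's numbers follow from the raw bit rates: DNxHD 36 is a nominal 36Mb/s, or 4.5MB/s. A quick check (the 864MB/s 4K figure matches the per-stream rate used elsewhere in this piece):

```python
NETWORK_MB_S = 1250       # 10GbE payload ceiling quoted above
DNXHD36_MB_S = 36 / 8     # DNxHD 36 = 36Mb/s nominal = 4.5MB/s
DPX_4K_MB_S = 864         # one 4K 10-bit "film" stream

print(int(NETWORK_MB_S // DNXHD36_MB_S))  # 277 -> "more than 270 streams"
print(int(NETWORK_MB_S // DPX_4K_MB_S))   # 1 stream
```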

When comparing shared storage product choices, what are the advantages of NAS over SAN, for example?
SAN actually has some advantages over NAS, but unless the user has Fibre Channel hardware installed, it might be a very costly option. The real advantage of NAS is that everyone already has an Ethernet network available that may be sufficient for video file server use. If not, it may be upgraded fairly inexpensively.

JMR Share comes standard with both GbE and 10GbE networking capability right out of the box, and has performance that will saturate 10GbE links; high-availability active/active failover is available as well as SAN Cluster (an extra cost option). The SAN Cluster is equipped with specialized SAN software as well as with 8Gb or 16Gb fibre channel host adapters installed, so it’s ready to go.

What do you expect to see as the next trend relating to storage? The thing that’s going to push storage systems even further?
Faster and lower cost, always! Going to higher speed network adapters, 12Gb SAS internal storage and even SSDs or NVMe drives, it seems the sky is the limit — or, actually, the networking is the limit. We already offer SAS SSDs in the Share as an option, and our higher-end dual-processor/dual-controller Share models (a bit higher cost) using NVMe drives can provide internal data transfer speeds exceeding what any network can support (even multiple 40Gb InfiniBand links). We are seeing a bit of a trend toward SSDs now that higher-capacity models at more reasonable cost, with reasonable endurance, are becoming available.

The State of Storage

Significant trends are afoot in media and entertainment storage.

By Tom Coughlin

Digital storage plays a significant role in the media and entertainment industry, and our specific demands are often very different from typical IT storage. We are dealing with performance requirements of realtime video in capture, editing and post, as well as distribution. On the other hand, the ever-growing archive of long-tail digital content and digitized historical analog content is swelling the demand for archives (both cold and warm) using tape, optical discs and hard drive arrays.

My company, Coughlin Associates, has conducted surveys of digital storage use by media and entertainment professionals since 2009. These results are used in our annual Digital Storage in Media and Entertainment Report. This article presents results from the 2016 survey and some material from the 222-page report to discuss the status of digital storage for professional media and entertainment.

Content Creation and Capture
Pro video cameras are undergoing rapid evolution, driven by higher-resolution content as well as multi-camera content capture, including stereoscopic and virtual reality. In addition, the physical storage media in professional cameras is evolving rapidly, as film and magnetic tape are displaced by the fast file access of hard disk drives and optical discs and by the ruggedness of flash-based solid-state storage.

The table below compares the results from the 2009, 2010, 2012, 2013, 2014 and 2015 surveys with those from 2016. Flash memory is the clear leader in pro video camera media, increasing from 19% in 2009 to 66% in 2015 before dipping to 54% in 2016, while magnetic tape shows a consistent decline over the same period.

Optical disc use between 2009 and 2016 bounced around between 7% and 17%. Film shows a general decline from 15% usage in 2009 to 2% in 2016. The trend with declining film use follows the trend toward completely digital workflows.

Note that about 60% of survey participants said that they used external storage devices to capture content from their cameras in 2016 (perhaps this is why the HDD percentages are so high). In 2016, 83% said that over 80% of their content is created in a digital format.

In 2016, 93.2% of the survey respondents said they reuse their recording media (compared to 89.9% in 2015, 93.3% in 2014, 84.5% in 2013, 86% in 2012, 79% in 2010 and 75% in 2009). In 2016, 75% of respondents said they archive their camera recording media (compared to 73.6% in 2015, 74.2% in 2014, 81.4% in 2013, 85% in 2012 and 77% in 2010).

Archiving the original recording media may be a practice in decline — especially with expensive reusable media such as flash memory cards. Copying content to tape, hard disk drives or separate flash storage allows the camera media to be reused.

Post Production
The size of content — and amount — has put strains on post network bandwidth and storage. This includes editing and other important operations. As much of this work may take place in smaller facilities, these companies may be doing much of their work on direct attached storage devices and they may share or archive this media in the cloud in order to avoid the infrastructure costs of running a data center.

The graph below shows that, among the 2016 survey participants, the use of shared network storage (such as SAN or NAS) generally increases, and the use of DAS decreases, as the number of people working in a post facility grows. The DAS storage in larger facilities may also be different from that used in smaller facilities.

DAS vs. shared storage by number of people in a post facility.

When participants were asked about their use of direct attached and network storage in digital editing and post, the survey showed the following summary statistics in 2016 (compared to earlier surveys):

– 74.5% had DAS
– 89.8% of these had more than 1 TB of DAS
– 10 to 50 TB was the most popular DAS size (27.5%)
– 17.4% of these had more than 50 TB of DAS storage
– 2.9% had more than 500 TB of DAS storage
– 68.1% had NAS or SAN
– 57.4% had 50 TB or more of network storage in 2016
– About 15% had more than 500 TB of NAS/SAN storage in 2016
– Many survey participants had considerable storage capacities in both DAS and NAS/SAN.

We asked whether survey participants used cloud-based storage for editing and post. In 2016, 23.0% of responding participants said yes, and 20.9% of respondents said they kept 1TB or more of their storage capacity in the cloud.

Content Distribution
Distribution of professional video content has many channels. Physical media can get content to digital cinemas or to consumers, or distribution can be done electronically using broadcast, cable or satellite transmission, the Internet or mobile phone networks.

The table below gives responses for the percentage of physical media used by the survey respondents for content distribution in 2016, 2015, 2014, 2013, 2012 and 2010. Note that these are averages across the survey population for each physical medium, and they should not be expected to add up to 100%. Digital tape, DVD discs, HDDs and flash memory are the most popular distribution formats.

Average percentage content on physical media for professional content distribution.

Following are survey observations for electronic content distribution, such as video on demand.

– The average number of hours on a central content delivery system was 2,174 hours in 2016.
– There was an average of 427 hours ingested monthly in 2016.
– In 2016, 38% of respondents had more than 5% of their content on edge servers.
– About 31% used flash memory on their edge servers in 2016.

Archiving and Preservation
Today, most new entertainment and media content is born digital, so it is natural that this content should be preserved in digital form. This requirement places new demands on format preservation for long-term digital archives as well as management and systematic format refreshes during the expected life of a digital archive.

In addition, the cost of analog content digitization and preservation in a digital format has gone down considerably, and many digitization projects are proceeding apace. The growth of digital content archiving will swell the amount of content available for repurposing and long-tail distribution. It will also swell the amount of storage and storage facilities required to store these long-term professional content archives.

Following are some observations from our 2016 survey on trends in digital archiving and content preservation.

– 41% had less than 2,000 hours of content in a long-term archive
– 56.9% archived all the content captured from their cameras
– 54.0% archived copies of content in all of their distribution formats
– 35.9% digitally archived all content captured from their dailies
– 31.3% digitally archived all content captured from rough cuts
– 36.5% digitally archived all content captured from their intermediaries
– 50.9% of the respondents said that their annual archive growth rate was less than 6% in 2016
– About 28.6% had less than 2,000 hours of unconverted analog content
– 16.7% of participants had over 5,000 hours of unconverted analog content
– About 52.5% of the survey respondents have an annual analog conversion rate of 2% or less
– The average rate of conversion is about 3.4% in 2016

Professional media and entertainment content was traditionally archived on film or analog videotapes. Today, the options available for archive media to store digital content depend upon the preferences and existing infrastructure of digital archive facilities. Figure 6 gives the percentage distribution of archive media used by the survey participants.

Percentage of digital long-term archives on various media

Some other observations from the archive and preservation section of the survey:

– About 42.6% never update their digital archives.
– About 76.2% used different storage for archiving and working storage.
– About 49.2% copied and replaced their digital long-term archives every 10 years or less.
– 38.1% said they would use a private or public cloud for archiving in 2016.

Conclusions
Higher resolution, higher frame rates, higher dynamic range, and stereoscopic and virtual reality video are all creating larger content files. This is driving the need for high-performance storage to work on this content and to provide fast delivery, which could push more creative work onto solid-state storage.

At the same time, cost-effective storage and management of completed work is driving the increased use of hard disk drives, magnetic tape and even optical storage for low-cost storage.

The price of storing content in the cloud has gone down so much that there are magnetic tape-based cloud storage offerings that are less expensive than building one’s own storage data center, at least for small- and moderate-sized facilities.

This trend is expected to grow the use of cloud storage in media and entertainment, especially for archiving, as shown in the figure below.

Growth of cloud storage in media and entertainment.


Dr. Tom Coughlin, president of Coughlin & Associates, is a storage analyst and consultant with over 30 years in the data storage industry. He is the founder and organizer of the Annual Storage Visions Conference as well as the Creative Storage Conference.

Review: Promise Technology’s Pegasus2 R2+ RAID

By Brady Betzel

Every day I see dozens of different hard drives — from serious RAIDs, like the Avid Nexis (formerly Isis), all the way down to single SSDs connected via Thunderbolt. My favorite drives seem to be the ones that connect easily, don’t have huge power supply bricks and offer RAID options, such as RAID-0/RAID-1. If you’ve been to an Apple Store lately, then you’ve probably run into the Promise Technology Pegasus2 R2+ line of products. The Pegasus2 R2+ is also featured under the storage tab on www.apple.com. I bring that up because being featured on that page makes you a serious contender.

The Pegasus line of products from Promise is often thought of as high-end and high-quality. I’ve never heard anyone say anything bad about Pegasus. From their eight-bay R8 systems all the way down to the R2, there are options to satisfy any hardware-based RAID need you have from 0 to 1, 5 or 6. Lucky for me, I was sent the R2+ RAID to review. I was immediately happy that it was a hardware-controlled RAID as opposed to a software-controlled RAID.

Pegasus2 R2+

Software RAIDs rely entirely on the host system to control the data structure, but I like my RAID to control itself. The Pegasus2 R2+ is a two-drive, hot-swappable RAID loaded with two 7200RPM 3TB Toshiba hard drives. In addition, there is a third bay — the Media Bay — on top that can be loaded with different pods. You can choose from an SSD reader, a CF/SD card reader or even an additional 1TB hard drive; it ships with the CF/SD card reader. Keep in mind these pods only work when the unit is connected via Thunderbolt 2 — under USB 3.0 they will not work. Something cool: when you pop out the interchangeable pods, they can connect via USB 3.0 separately from the RAID case.

In terms of looks, the Pegasus2 R2+ has a nice black finish, which will go well with any recent Mac Pros you might have lying around. It has a medium-to-small footprint — picture two medium-sized books stacked on top of each other (5.3 x 7.3 x 9.8 inches). It weighs about 13.5 pounds and while I did stuff it in my backpack and carry it around, you know it’s in there. The power cord is nice. I detest the power bricks that typically accompany RAID drives, laptops and anything that sucks a good amount of power. To my delight, Promise has incorporated the actual power supply inside of the RAID, leaving a simple power cable to attach. Thank You! Other than that you have either a USB 3.0 cable or a Thunderbolt 2 cable included in the box.

Running Tests
Out of the box, I plugged in the RAID and it spun to life. For this review, I used a Mac Pro running a 2.7GHz 12-core Xeon E5 with 64GB of DDR3 and an AMD FirePro D700 graphics card, so there should be very little bogging down the transfer pipes when running my tests. I decided to use the AJA System Test for disk-speed testing. I started with the drive in RAID-0 (optimized for speed; both drives work together, no safety net) because that is how it ships.

DiskSpeedTest Thunderbolt copy

Over Thunderbolt 2, I got around 390MB/sec read and 370MB/sec write speeds. Over USB 3.0, still configured in RAID-0, it ran at about 386MB/sec read/write. When I switched the RAID over to RAID-1 (made for safety, so if one drive is damaged you will most likely be able to rebuild your data when you replace it), I definitely saw the expected slowdown. Over Thunderbolt 2 and USB 3.0, I was getting around 180MB/sec write and 196MB/sec read. Don’t forget, the 6TB of usable space in RAID-0 becomes 3TB when configured in RAID-1.
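The capacity change is just RAID arithmetic: RAID-0 stripes across both drives, while RAID-1 mirrors one onto the other. A small sketch with the review unit's drive complement:

```python
DRIVES, DRIVE_TB = 2, 3  # the R2+ ships with two 3TB disks

def usable_tb(raid_level: int) -> int:
    if raid_level == 0:
        return DRIVES * DRIVE_TB  # striped: all capacity, no redundancy
    if raid_level == 1:
        return DRIVE_TB           # mirrored: one drive's worth, with safety
    raise ValueError("the R2+ supports RAID-0 and RAID-1 only")

print(usable_tb(0), usable_tb(1))  # 6 3
```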

On the front of the R2+ you have two lights that let you know the drive is plugged in via Thunderbolt 2 or USB 3.0. This actually came in handy, as I was looking to see how I plugged the drive in. Cool!

One thing I was very happy with was how simple Promise’s RAID configuration tool is to use. Not only does it give you stats such as drive temperature, drive health and even fan speed, it also lets you format the drives and designate RAID configurations. This alone would make me think of Promise first when deciding on a RAID to buy.

As a final test, I left my Pegasus2 R2+ configured in RAID-1 and pulled a drive out while transferring media to the RAID. The status light on the front changed from bright blue to amber and began to blink. Inside the Pegasus2 RAID configuration tool, an amber exclamation point appeared next to the RAID status, as expected. I left the drive alone so it could rebuild itself. Two hours later it was still running, so I left it alone overnight. I didn’t accurately time the rebuild, but by the time I came home the next night it was complete. I only had a few hundred gigabytes worth of data on it, but in the end it came back to life. Hooray!

General Thoughts
In the end, I really love the sleek black exterior, the lack of a huge power brick and the RAID configuration software. The additional Media Pods are a cool idea too. I like having a Thunderbolt 2 CF/SD card reader (or better yet, an SSD reader — think Red Mag) always ready to go, especially with the black-cylinder Mac Pro, which has no card readers built in.

I would really love to have seen what this could do when loaded with SSD drives, but since this review is about what comes with the Pegasus2 R2+, that’s what I’ve done.

Promise Technology has been around a long time and, in my experience, offers very reliable storage solutions. Keep in mind that the R2+ ships with the CF/SD card reader; the other pods can be purchased separately, though I couldn’t find anyone selling them online. When I was writing this review, the retail price of the Pegasus2 R2+ ranged from $749 to a little over $800. You get a two-year limited warranty, which covers all parts except the fan and power supply, which are only covered for one year (kind of a bummer). When returning the product for warranty work, you can opt to be sent a loaner, but a credit card is required in case you don’t return it; in that instance, you will be charged the retail price of the loaner. You can also opt to send yours in and wait for it to be replaced. Take note that you need a copy of the original receipt and the boxes for a return.

Summing Up
I really love the stability and elegance of the Pegasus line of RAID systems, and the Pegasus2 R2+ lives up to the beauty and name. If you are a small company or one-person band transferring, transcoding and editing media without the need for SSD speed or Thunderbolt 3 connection, this is the sleek RAID for you.

Brady Betzel is an online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com, and follow him on Twitter @allbetzroff. Brady was recently nominated for an Emmy for his work on Disney’s Unforgettable Christmas Celebration.

The cloud and production storage

By Tom Coughlin

The network of connected data centers known as “the cloud” is playing a greater role in many media and entertainment applications. This includes collaborative workflows and proxy viewing, rendering, content distribution and archiving. Cloud services can include processing power as well as various types of digital storage.

In the figure below, you will see Coughlin Associates’ projections (from the 2015 Digital Storage in Media and Entertainment Report) for the growth of professional media and entertainment cloud storage out to 2020. Note that archiving is the biggest projected market for media and entertainment cloud storage.

Media and Entertainment Cloud Storage Capacity Projections

At the 2016 NAB show, many companies offered cloud storage and online services for the media and entertainment industry. Some of these were cloud-only offerings and some were hybrid cloud services with some on-premises storage.

In this piece, we will review some of these cloud storage offerings and take a look at how to move content around the cloud, as well as between the cloud and on-premises storage. There were also some interesting object storage infrastructure implementations at the NAB show.

Archiving in the Cloud
Archive is the biggest application for cloud storage in media and entertainment and several companies have products geared toward these applications, some of them with magnetic tape storage in the cloud. Oracle’s DIVA content storage management software allows the integration of on-premises storage and the Oracle cloud. The recently announced DIVAnet 2.0 allows a converged infrastructure for rich media using single namespace access to DIVArchive on-premises sites and Oracle DIVA Cloud storage as a service.
The Oracle Archive Cloud using DIVA content management offers archive storage for about $0.001/GB/month. This equates to storing 1 petabyte (PB) of content for just $12,000 per year. That is less than the on-premises costs for many content archives. At this price, rich media content companies are considering trusting their long-term content archives to the cloud.
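Working backward from the per-petabyte annual figure gives the effective rate (a quick check, taking 1PB as 10^6 GB in decimal units):

```python
ANNUAL_COST_USD = 12_000  # quoted cost to keep 1PB archived for a year
PB_IN_GB = 1_000_000      # decimal units

rate_per_gb_month = ANNUAL_COST_USD / (PB_IN_GB * 12)
print(rate_per_gb_month)  # 0.001, i.e. $0.001/GB/month or $1/TB/month
```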

Fujifilm’s Dternity tape-based archive, which offers online access to your data and integrates with applications already in your workflow, had an exhibit at NAB again this year. IBM also offers tape storage in the cloud. In addition to archiving on tape, there are HDD cloud storage offerings as well. Major cloud providers such as Google, Microsoft Azure and AWS offer tape- and HDD-based cloud storage.

Quantum showcased its new Q-Cloud Vault long-term cloud storage service. Fully integrated with workflows powered by StorNext 5.3, Q-Cloud Vault will provide low-cost, Quantum-managed “cold storage” in the public cloud. Because StorNext 5.3 enables end-to-end encryption, users can leverage the cloud as part of their storage infrastructure to facilitate secure, cost-effective storage of their media content, both on-site and off-site.

In addition to supporting Q-Cloud Vault (pictured above), StorNext 5.3 gives users greater control and flexibility in optimizing their collaborative media workflows for maximum efficiency and productivity.

Cloud-Assisted Media Workflows
In addition to archive-focused cloud storage, some companies at NAB were talking about cloud and hybrid storage focused on non-archive applications.

Fast-paced growth and strong demand for scale-out storage clouds have propelled DDN’s WOS to one of the industry’s top solutions based on the number of objects in production and have fortified DDN’s position as a strong market leader in object storage. Continuing to fuel the pace of its object storage momentum, DDN also announced the availability of its latest WOS platform release. WOS is also an important component in the company’s MediaScaler Converged Media Workflow Storage Platform.

WOS possesses a combination of high-performance, flexible protection, multi-site capabilities and storage efficiencies that make it the perfect solution for a wide range of use cases, including active archive repositories, OpenStack Swift, data management, disaster recovery, content distribution, distributed collaboration workflows, enterprise content repositories, file sync and share, geospatial images, video surveillance, scale-out web and cloud services, and video post-production.

EMC, Pixspan, Aspera and Nvidia are bringing uncompressed 4K workflows to IT infrastructures, advancing digital media workflows with full resolution content over standard 10GbE networks. Customers can now achieve savings and performance increases of 50-80 percent in storage and bandwidth throughout the entire workflow — from on-set through post to final assets. Artists and facilities using creative applications for compositing, visual effects, DI and more can now work faster with camera raw, DPX, EXR, TIFF and Cineon files. Content can be safely stored on EMC’s Isilon scale-out NAS for shared collaborative access to project data in the data center, around the world, or to the cloud.

NetApp StorageGrid Webscale

At NAB, NetApp promoted new features in its StorageGrid Webscale (appliance or software-defined) object storage. The object store has been widely adopted by media sites and media cloud providers who are managing tens of billions of media objects. Now, a majority of the key MAM, file-delivery and archive systems have integrated with StorageGrid Webscale’s Amazon S3 object interface.

StorageGrid Webscale is a next-generation solution for multi-petabyte distributed content repositories. It provides erasure coding or, alternatively, automatic file copies to remote locations depending on the value of the media and the needs of the workflow.
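To make that tradeoff concrete, here is a minimal Python sketch (our illustration, not NetApp’s implementation) comparing the raw-capacity overhead of a k-data/m-parity erasure code against keeping full file copies; the 6+3 and 3-copy parameters are assumptions for the example:

```python
# Hypothetical comparison of storage overheads; the scheme parameters are
# assumptions for illustration, not StorageGrid Webscale defaults.
def erasure_overhead(k: int, m: int) -> float:
    """Raw bytes stored per logical byte for k data + m parity fragments."""
    return (k + m) / k

def replication_overhead(copies: int) -> float:
    """Raw bytes stored per logical byte when keeping full copies."""
    return float(copies)

# A 6+3 erasure code survives 3 lost fragments at 1.5x raw capacity;
# 3 full copies survive 2 lost copies but cost 3x raw capacity.
print(erasure_overhead(6, 3))   # 1.5
print(replication_overhead(3))  # 3.0
```

This is the usual capacity argument for erasure coding in large repositories, while simple copies keep access and rebuilds straightforward for higher-value media.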

Scality Ring storage scales linearly across multiple active sites and thousands of servers and can host an unlimited number of objects, providing high performance across a variety of workloads with file or object storage access. The company says the product enables organizations to build Exabyte-scale active archives and scalable content distribution systems, including network DVR/PVR. The product can be used to make a private storage cloud with file and object access and to provide customized web services.

Avere FlashCloud is a hybrid cloud and on-premise storage offering advertised as providing unlimited capacity scaling in the cloud and unlimited performance scaling at the edge, with up to 480TB of data on FXT Series Edge filers. Dynamic tiering of active data to the edge hides the latency of cloud storage, while NFS and SMB access provide file-based storage with a global namespace spanning public objects, private objects and NAS.

Avere’s FlashMove software transparently moves live online data to the cloud and between cloud providers, while FlashMirror replicates data to the cloud for disaster recovery. AES-256 encryption with FIPS 140-2 compliance provides data security, with on-premise encryption key management. It should be noted that Avere worked with Google to provide the storage cluster used to stream the video shown at NAB during the Lytro Cinema demonstration.

Moving and delivering in the cloud

SAN Solutions’ SAN Metro Media, an ultra-low-latency cloud for media, extends a customer’s studio to the cloud with SMM Ultra-Connect, a dedicated, secure, low-latency direct-connect circuit from the customer’s site to one of SAN Metro Media’s data centers in a metropolitan area. The SMM Ultra-Connect circuit can operate completely off the Internet and transport media at the bandwidth and latencies that large studio applications and workflows require.

Moving and Delivering Content in the Cloud
There are several companies offering data transport services to and from cloud services as well as from on-premise storage to the cloud and back.

At the NAB show, Aspera (a division of IBM) introduced FASPStream, a turnkey application-software line designed to enable live streaming of broadcast-quality video globally over commodity Internet networks, with glitch-free playout and negligible startup time, reducing the need for expensive and limited satellite-based backhaul, transport and distribution.

The FASPStream software uses the FASP bulk data protocol to transport live multicast, unicast UDP, TCP or other file-source video, providing timely arrival of live video and data independent of network round-trip delay and packet loss. The company says that less than five seconds of startup delay is required for 50Mbps video streams transported with 250ms round-trip latency and three percent packet loss. These properties are sufficient for 4K streaming between continents.
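As a rough sanity check on those numbers, the data a receiver accumulates during the startup delay is simply bitrate times delay. The back-of-envelope sketch below is our own arithmetic, not Aspera’s published methodology:

```python
# Hypothetical startup-buffer arithmetic for a live stream; the figures
# below come from the article's stated example (50Mbps, 5s delay).
def startup_buffer_bytes(bitrate_mbps: float, delay_s: float) -> float:
    """Bytes of video buffered during the startup delay."""
    return bitrate_mbps * 1_000_000 / 8 * delay_s

# A 5-second startup delay on a 50Mbps stream corresponds to about 31MB
# buffered at the receiver before playout begins.
print(startup_buffer_bytes(50, 5) / 1e6)  # 31.25 (MB)
```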

Aspera is part of a broader group of IBM acquisitions with a strong focus on the media and entertainment industry, including object storage provider Cleversafe.

Signiant announced the integration of its Manager+Agents product with the Avid Interplay | MAM system. Customers can now initiate accelerated file transfers from within Interplay | MAM, making it easier than ever to use the power of Signiant technology in support of global creative processes. Users can initiate and monitor Signiant file transfers via the Export capability within Avid Interplay | MAM.

FileCatalyst had some media and entertainment case studies, including involvement with NBC’s 2016 Rio Olympics preparation.

AWS Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud. With Snowball, you don’t need to write any code or purchase any hardware to transfer your data. Simply create a job in the AWS Management Console and a Snowball appliance will be automatically shipped to you. Once it arrives, attach the appliance to your local network, download and run the Snowball client to establish a connection, then use the client to select the file directories that you want to transfer to the appliance.

The client will then encrypt and transfer the files to the appliance at high speed. Once the transfer is complete and the appliance is ready to be returned, the E Ink shipping label will automatically update and you can track the job status via Amazon Simple Notification Service (SNS), text messages or directly in the Console.

EMC and Imagine Communications provide live channel playout with the Versio solution in an offering that pairs EMC’s converged VCE Vblock system with EMC’s Isilon scale-out NAS storage system. EMC’s technology and Versio, Imagine’s cloud-capable channel playout solution, enable broadcasters to securely fulfill channel playout across geographically dispersed networks and engage customers with content tailored to their respective operations. EMC also talked about cloud DVR solutions with Anevia.

Object Storage Infrastructure
A start-up company named Fixstars Solutions provides an innovative storage server (called Olive) with a dual-core CPU, FPGA, 512MB RAM, Gigabit Ethernet and up to 13TB of non-volatile flash memory in a 2.5-inch form factor. The company announced Ceph running on Olive, enabling high-performance, scalable storage systems at low cost that it believes can provide solutions for broadcasters, studios, cable providers and Internet delivery networks.

Dr. Tom Coughlin, president of Coughlin Associates, is a storage analyst and consultant with over 30 years in the data storage industry. He is the founder and organizer of the Annual Storage Visions Conference as well as the Creative Storage Conference. The 2016 Creative Storage Conference is June 23 in Culver City and features sessions and exhibits focused on the growing storage demands of HD, UltraHD, 4K and HDR film production and how they are affecting every stage of production.

Panasas intros DirectFlow for Mac

Storage company Panasas has unveiled DirectFlow for Mac, bringing the performance benefits of parallel I/O over Ethernet to the Mac platform and Apple OS X operating system. Until now, DirectFlow has only been available for Linux. DirectFlow is the parallel data access protocol designed by Panasas and offered as part of the integrated ActiveStor storage solution that incorporates the PanFS file system, as well as NFS and SMB protocols.

DirectFlow allows clients to access Panasas storage directly and in parallel, resulting in higher performance than what can be achieved with industry standard protocols, including NFS and SMB. DirectFlow for Mac allows production teams to ingest, process and deliver video in higher resolution, as well as consolidate their workflows under a single global namespace, all while working alongside DirectFlow for Linux users and other platforms using traditional file protocols. Users also have access to the latest Panasas data-protection techniques based on modern erasure-coding methods.

“When you double the performance of client applications accessing scale-out NAS, you double the productivity of all users,” says David Sallak, VP of products and solutions at Panasas. “This leads to higher quality outcomes because you have more time to perfect the product you are creating, while also reducing the cost of getting the job done.”

Creative Storage Conference is now soliciting presentations

No one can deny how important storage is in the day-to-day lives of post pros. From big studios and facilities to indie editors working with external drives, storage is an indispensable part of the way people work today. So if you want to talk storage and learn about the latest trends and product offerings, you might want to attend the Creative Storage Conference (CS 2016), celebrating its 10th anniversary this year.

CS 2016 is accepting submissions for presentations, speakers and panels now through May 1, 2016 at www.creativestorage.org. CS 2016 will bring together digital storage providers, equipment and software manufacturers, and professional media and entertainment end users to explore this year’s conference theme, “The Art of Storage.”

The 2016 event will be held Thursday, June 23 at the DoubleTree Hotel West Los Angeles in Culver City. “We have a great agenda planned with six sessions throughout the day and four keynote talks during the conference,” said Tom Coughlin, who is both the chairman and organizer of the event.

The Creative Storage Conference is put on by the Entertainment Storage Alliance and Coughlin Associates.

LaCie 5big Thunderbolt 2 upgraded with Seagate hard drives for 4K

LaCie, a Seagate brand, has made updates to its 5big Thunderbolt 2 pro five-disk storage solution. Now featuring Seagate’s 8TB enterprise class hard disks, the new LaCie 5big provides more capacity (40TB), reliability and an extended warranty. This product is targeting 4K video workflows. The LaCie 5big Thunderbolt 2 featuring these hard disks will be available this quarter in 40TB capacity for $3,999.

With Seagate’s new 8TB hard disks, the LaCie 5big offers a 33 percent capacity increase. These hard disks are designed to operate 24/7 — versus 8/5 operations for traditional hard drives — and can support 8,760 hours of operation per year.

The LaCie 5big with Enterprise Class drives comes with a five-year warranty that covers the drives, enclosure and spare parts. The enterprise class drives feature 256MB cache, 7200RPM and rack environment optimization, offering the ideal solution for handling aggressive workloads.

More Details
With this Thunderbolt 2 technology, the LaCie 5big offers sustained speeds of up to 1050MB/s. This is enough bandwidth to edit several video streams in native 4K resolution. This kind of speed allows those working in Apple Final Cut Pro or Adobe Premiere to get maximum quality from footage and see native 4K edits in realtime.
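To put that bandwidth in perspective, here is a rough capacity-planning sketch; the per-stream data rates below are our approximations for illustration, not LaCie or Apple figures:

```python
# Assumed, approximate per-stream data rates in MB/s (illustrative only).
ASSUMED_STREAM_MB_S = {
    "ProRes 422 HQ UHD 24p (approx.)": 90,        # roughly 700Mbps
    "Uncompressed 10-bit 4K 24p (approx.)": 850,  # 4096x2160, ~4 B/px, 24fps
}

def max_streams(link_mb_s: float, stream_mb_s: float) -> int:
    """Whole concurrent streams that fit in the sustained link bandwidth."""
    return int(link_mb_s // stream_mb_s)

# The 5big's advertised ~1050MB/s comfortably carries many compressed 4K
# streams, but only a single fully uncompressed 4K stream.
for name, rate in ASSUMED_STREAM_MB_S.items():
    print(f"{name}: {max_streams(1050, rate)} stream(s)")
```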

The LaCie 5big offers a range of RAID modes that allow users to tailor the product to their needs. Its hardware RAID delivers sustained performance, better flexibility and the ability to connect the product to another computer while keeping the RAID configuration.

RAID modes 5 and 6 protect data against one and two concurrent disk failures, respectively, while still providing the speed and capacity needed for pro workflows. This feature helps users who want to use a single storage product for both video editing and backup. In protected RAID modes, even in the case of a disk failure, the LaCie 5big’s hot-swappable disks mean zero data loss or downtime.

The LaCie 5big features two Thunderbolt 2 ports for daisy chaining. Pros can daisy chain up to six Thunderbolt devices to a computer via a single cable (included). Thunderbolt 2 is also backward compatible, using the same cables and connectors as first-gen Thunderbolt devices and computers. This allows pros to create a plug and play 4K video editing environment with increased capacity and speed.

The LaCie 5big’s cooling system consists of three key components: a heat-dissipating aluminum casing, a Noctua cooling fan and jumbo heat exhausts. The ultra-quiet Noctua NF-P12 fan pulls heat away from the internal components while producing little noise.

Quick Chat: Scale Logic’s Bob Herzan talks storage

Can you imagine for a second what it would be like without proper storage in our datacentric production and post world? Anarchy! To say it’s a big part of the puzzle would be an understatement.

At postPerspective, we cover storage news, technology and usage in a variety of ways, so recently we reached out to Scale Logic, which provides RAID, SAN and NAS, as well as other archiving solutions, to find out about their products and process.

Some of you might remember that Minneapolis-based Scale Logic Inc. grew out of what was once Rorke Data, which offered products to the post industry for almost 30 years. Scale Logic president/CEO Bob Herzan and about 20 former Rorke Data employees took that technology and experience and built on it, creating new products targeting the media and entertainment space.

Let’s find out more.

Genesis Unlimited is new-ish, but the Genesis platform has been around for how long?
Genesis’ predecessors were released in 2008 as a NUMA programming technology that uses multi-thread processing. The Genesis development team has been building software RAID solutions for the M&E market for eight years, so Genesis Unlimited is a third-generation product. The Genesis RX was introduced over three years ago, and the Genesis Unlimited was announced at NAB 2015.

Genesis Unlimited

Genesis Unlimited

What should pros know about your Unlimited and RX products?
HyperFS and the Genesis RAID software have been combined and sold as a licensed “SAN in a box” solution for five years now, but Unlimited licensing began in April 2015. The Genesis RX series features unlimited connectivity, which allows facilities to connect all their various systems to the central storage with the correct network speed.

Can you talk about how this helps post users specifically?
Imagine having your fixed SAN or NAS solution at your facility, then having the ability to invite freelance editors to come into your facility and simply be able to connect to your shared storage via Fibre or Ethernet and begin collaborating immediately — without the need to purchase additional hardware or software. The peace of mind this offers allows users to stop thinking about the technology and focus on the creative.

How does Unlimited differ from your other offerings?
Unlimited is tailored for reliability and cost, and it aims to solve connectivity, application compatibility, file system and data storage issues in one box. While all of our products are meant to scale well, Genesis Unlimited’s scalability is designed for independent scale-out in performance and capacity.

You say you target M&E. How is this system optimized for the workflows of post and VFX pros?
Rather than being adapted for M&E use, the system was built from the ground up with M&E in mind. Genesis has features like the HyperFS file system, which can optimize its stripe pattern for either GOPs or I-frames. The Realtime Initiator offers guaranteed throughput.

Post pros using our tools don’t want to worry about bandwidth control when it comes to mission-critical applications. For example, you might have an important customer reviewing your most recent edits while you review your full-resolution 4K output in another bay. Our users don’t want to worry about what other workloads are happening on the SAN — if the bandwidth is overtaxed it could cause poor playback. The Unlimited eliminates the guesswork by allowing your playout server to take priority on the available bandwidth while other workstations on the SAN share the leftover bandwidth.

Genesis products installed at 4 Max Post in Los Angeles.

Can you further explain the Realtime Initiator?
Genesis Unlimited can detect client machines connected through its connection ports via IQN or WWN, at which point it’s simple to input recognizable names, like “edit bay1” or “Mac1.” After naming the edit bays, you can toggle realtime on or off, which guarantees an amount of bandwidth for that machine and creates two pools of users: realtime and non-realtime. The non-realtime group can be capped with either a collective bandwidth ceiling it can’t exceed or a limit on IOPS.

There are many practical uses for this, such as ensuring uninterrupted use for a client coming in for review of a project or setting power editors to realtime while the facility’s ancillary stations are non-realtime.

The Realtime Initiator feature will also give a block-level client priority access to the storage to ensure that no matter what the current workload on the SAN is, the most important suite in a facility will be able to playback without dropping frames.
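Conceptually, the scheme described above behaves like a two-tier bandwidth allocator: realtime clients receive their guarantees first, and non-realtime clients split whatever remains, optionally under a collective cap. The sketch below is our illustration of that idea, not Scale Logic’s implementation; all names and numbers are hypothetical:

```python
# Hypothetical two-tier bandwidth allocator: guaranteed realtime reservations
# first, then an even split of the remainder for non-realtime clients.
def allocate(total_mb_s, realtime_guarantees, nonrealtime_clients, nr_cap=None):
    alloc = dict(realtime_guarantees)           # guaranteed reservations
    leftover = total_mb_s - sum(alloc.values()) # bandwidth left for non-RT pool
    if nr_cap is not None:
        leftover = min(leftover, nr_cap)        # collective non-RT ceiling
    share = leftover / len(nonrealtime_clients) if nonrealtime_clients else 0.0
    alloc.update({c: share for c in nonrealtime_clients})
    return alloc

# "edit bay1" is guaranteed 600MB/s; two other bays split the remainder.
print(allocate(1000, {"edit bay1": 600}, ["Mac1", "Mac2"]))
# {'edit bay1': 600, 'Mac1': 200.0, 'Mac2': 200.0}
```

Real storage QoS enforcement happens at the block layer and is considerably more involved; the point of the sketch is only the priority policy.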

How have you made this product secure in terms of data protection?
RAID-6 protection is standard on Genesis Unlimited, but it also offers RAID-7 protection — we have a hardened, secure Linux kernel and RAID-7.3 capability, which makes it very secure. The partial restore feature further exemplifies our focus on data security by degrading only a small portion of the disk when bad sectors are detected.

How does this relate in real-world workflows?
Well, for the most part, RAID systems themselves are very secure. When you have an issue with a RAID it is typically due to aging hard drives; as your system gets older, your drives begin to fail. RAID-5 allows one drive to fail, RAID-6 allows two and RAID-7 allows up to three before your facility starts to be at risk of losing data.
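The capacity cost of that extra protection is straightforward RAID arithmetic (generic, not specific to Genesis): each additional level of parity gives up roughly one drive’s worth of usable space per RAID group. A quick sketch, using an assumed 12-bay group of 2TB drives for illustration:

```python
# Generic usable-capacity vs. fault-tolerance arithmetic; the 12-bay,
# 2TB-drive configuration is an assumption for the example.
def raid_usable_tb(drives: int, drive_tb: float, parity_drives: int) -> float:
    """Usable capacity when parity consumes `parity_drives` drives' worth."""
    return (drives - parity_drives) * drive_tb

for level, parity in [("RAID-5", 1), ("RAID-6", 2), ("RAID-7.3", 3)]:
    usable = raid_usable_tb(12, 2, parity)
    print(f"{level}: {usable}TB usable, survives {parity} drive failure(s)")
```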

Scale Logic’s lab techs testing out product.

Creative-industry pros, for the most part, are not IT professionals and don’t necessarily take the same types of preventive maintenance measures that you see in IT. Finding ways to simplify the users’ experience and building in extra protection lets everyone sleep better at night.

This is a scalable system, but what’s the cost of entry? Can the smaller guys take advantage of Genesis Unlimited?
Yes, Genesis Unlimited is built with the collaborative work group in mind; this could be smaller boutique post houses, on-set production, broadcast and cable stations, houses of worship, as well as corporate facilities moving some of their marketing in-house.

These types of companies may not have the ability, or want, to purchase a dedicated Fibre Channel switch or metadata controller, but with Genesis Unlimited they can scale their solution as they grow. The 12-bay Genesis Unlimited starts at $24K using 2TB drives.

If you were a medium-sized VFX house working on commercials, what kind of system would you need and how does this benefit you?
We would recommend our 24- or 36-bay Genesis Unlimited, depending on their storage and bandwidth needs. We also offer a full line of traditional shared SAN solutions if the customer requires things like a dedicated metadata controller or high availability. These can either be used initially or migrated from a Genesis Unlimited, using existing hardware and licenses.

Do you have an advisory committee?
In relation to the HyperFS and our RX Series RAID storage (RX, RX2 and Unlimited) we are qualified for Adobe Anywhere, and certified with Blackmagic, NewTek, Telestream, FileCatalyst, Levels Beyond, Axle Video, Digital Vision, CatDV and others.

We also have developed an advisory board that meets four times a year. This board is made up of eight industry veterans who currently hold executive positions at some of the industry’s leading storage manufacturing companies. We believe this committee and its dedication to the media and entertainment market will not only help drive HyperFS and our solution sets, but will also help us develop more focused features that continue to build efficiency into our solution set.