
Storage for VFX Studios

By Karen Moltenbrey

Visual effects are dazzling — inviting eye candy, if you will. But mention the term “storage,” and viewers’ wide eyes may give way to a stifled yawn. Not so for the makers of that content.

They know that the key to a successful project rests on the reliability of their storage solutions. Here, we look at two visual effects studios — both top players in television and feature film effects — as they discuss how data storage enables them to excel at their craft.

Zoic Studios
A Culver City-based visual effects facility with shops in Vancouver and New York, Zoic Studios has been crafting visual effects for a host of television series since its founding in 2002, starting with Firefly. In addition to a full plate of episodics, Zoic also counts numerous feature films and spots among its credits.

Saker Klippsten

According to Saker Klippsten, CTO, the facility has used a range of storage solutions over the past 16 years from BlueArc (before it was acquired by Hitachi), DataDirect Networks and others, but now uses Dell EMC’s Isilon cluster file storage system for its current needs. “We’ve been a fan of theirs for quite a long time now. I think we were customer number two,” he says, “back when they were trying to break into the media and entertainment sector.”

Locally, the studio uses Intel NVMe drives in its workstations. NVMe, or non-volatile memory express, is an open logical device interface specification for accessing all-flash storage media attached via the PCI Express (PCIe) bus. Previously, Zoic had been using Samsung SSDs — 1TB and 2TB EVO drives — but in the past year and a half it began migrating the local workstations to NVMe.
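(For the curious: on a Linux workstation, NVMe attachment is easy to verify from the OS. The short Python sketch below walks the standard Linux sysfs layout to list NVMe controllers and their namespaces — the paths are generic Linux conventions, not anything specific to Zoic’s setup.)

```python
# Minimal sketch: enumerate NVMe controllers on a Linux workstation via sysfs.
# /sys/class/nvme is the standard Linux layout; nothing here is Zoic-specific.
from pathlib import Path

def list_nvme_devices():
    devices = []
    for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
        model = (ctrl / "model").read_text().strip()
        # Namespaces (nvme0n1, ...) are the block devices the artist actually sees.
        namespaces = [ns.name for ns in ctrl.glob(f"{ctrl.name}n*")]
        devices.append((ctrl.name, model, namespaces))
    return devices

if __name__ == "__main__":
    for name, model, namespaces in list_nvme_devices():
        print(f"{name}: {model} -> {', '.join(namespaces) or 'no namespaces'}")
```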

Zoic transitioned to the Isilon system in 2004-2005 because of the heavy usage its renderfarm was getting. “Renderfarms work 24/7 and don’t take breaks. Our storage was getting really beat up, and people were starting to complain that it was slow accessing the file system and affecting playback of their footage and media,” explains Klippsten. “We needed to find something that could scale out horizontally.”

At the time, however, file-level storage was pretty much all that was available — “you were limited to this sort of vertical pool of storage,” says Klippsten. “You might have a lot of storage behind it, but you were still limited at the spigot, at the top end. You couldn’t get the data out fast enough.” But Isilon broke through that barrier by creating a cluster storage system that allowed scaling horizontally, “so we could balance our load, our render nodes and our artists across a number of machines, and access and update in parallel at the same time,” he adds.

Klippsten believes that solution was a big breakthrough for a lot of users; nevertheless, it took some time for others to get onboard. “In the media and entertainment industry, everyone seemed to be locked into BlueArc or NetApp,” he notes. Not so with Zoic.

Fairly recently, some new players have come onto the market, including Qumulo, touted as a “next-generation NAS company” built around advanced, distributed software running on commodity hardware. “That’s another storage platform that we have looked at and tested,” says Klippsten, adding that Zoic even has a number of nodes from the vendor.

There are other open-source options out there as well. Recently, Red Hat began offering Gluster Storage, an open, software-defined storage platform for physical, virtual and cloud environments. “And now with NVMe, it’s eliminating a lot of these problems as well,” Klippsten says.

Back when Zoic selected Isilon, there were a number of major issues that affected the studio’s decision making. As Klippsten notes, they had just opened the Vancouver office and were transferring data back and forth. “How do we back up that data? How do we protect it? Storage snapshot technology didn’t really exist at the time,” he says. But, Isilon had a number of features that the studio liked, including SyncIQ, software for asynchronous replication of data. “It could push data between different Isilon clusters from a block level, in a more automated fashion. It was very convenient. It offered a lot of parameters, such as moving data by time of day and access frequency.”

SyncIQ enabled the studio to archive the data. And for dealing with interim changes, such as a mistakenly deleted file, Zoic found Isilon’s SnapshotIQ ideal for fast data recovery. Moreover, Isilon was one of the first to support Aspera, right on the Isilon cluster. “You didn’t have to run it on a separate machine. It was a huge benefit because we transfer a lot of secure, encrypted data between us and a lot of our clients,” notes Klippsten.

Netflix’s Chilling Adventures of Sabrina

Within the pipeline, Zoic’s storage system sits at the core. It is used immediately as the studio ingests the media, whether it is downloaded or transferred from hard drives – terabytes upon terabytes of data. The data is then cleaned up and distributed to project folders for tasks assigned to the various artists. In essence, it acts as a holding tank for the main production storage as an artist begins working on those specific shots, Klippsten explains.

Aside from using the storage at the floor level, the studio also employs it at the archive level, for data recovery as well as material that might not be accessed for weeks. “We have sort of a tiered level of storage — high-performance and deep-archival storage,” he says.

And the system is invaluable, as Zoic is handling 400 to 500 shots a week. If you multiply that by the number of revisions and versions that take place during that time frame, it adds up to hundreds of terabytes weekly. “Per day, we transfer between LA, Vancouver and New York somewhere around 20TB to 30TB,” he estimates. “That number increases quite a bit because we do a lot of cloud rendering. So, we’re pushing a lot of data up to Google and back for cloud rendering, and all of that hits our Isilon storage.”
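Some rough arithmetic shows how those figures compound into the totals Klippsten cites. In the sketch below, only the shot counts and the site-transfer estimate come from him; the per-version size and revision count are illustrative assumptions.

```python
# Back-of-the-envelope sketch of weekly shot data at a facility like Zoic.
# Per-version size and revision count are assumptions, not studio figures.
shots_per_week = 450          # midpoint of the 400-500 shots cited
versions_per_shot = 8         # revisions per shot during the week (assumed)
gb_per_version = 60           # frames plus media per version (assumed)

weekly_tb = shots_per_week * versions_per_shot * gb_per_version / 1000
print(f"~{weekly_tb:.0f} TB generated per week")  # ~216 TB: 'hundreds of terabytes'

daily_transfer_tb = 25        # midpoint of the 20-30TB LA/Vancouver/NY estimate
print(f"~{daily_transfer_tb * 7} TB moved between sites per week")
```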

When Zoic was founded, it originally saw itself as a visual effects company, but at the end of the day, Klippsten says they’re really a technology company that makes pretty pictures. “We push data and move it around to its limits. We’re constantly coming up with new, creative ideas, trying to find partners that can help provide solutions collaboratively if we cannot create them ourselves. The shot cost is constantly being squeezed by studios, which want these shots done faster and cheaper. So, we have to make sure our artists are working faster, too.”

Chilling Adventures of Sabrina

Recently, Zoic has been working on a TV project involving a good deal of water simulations and other sims in general — which rapidly generate a tremendous amount of data. Then the data is transferred between the LA and Vancouver facilities. Having storage capable of handling that was unheard of three years ago, Klippsten says. However, Zoic has managed to do so using Isilon along with some off-the-shelf Supermicro storage with NVMe drives, enabling its dynamics department to tackle this and other projects. “When doing full simulation, you need to get that sim in front of the clients as soon as possible so they can comment on it. Simulations take a long time — we’re doing 26GB/sec, which is crazy. It’s close to something in the high-performance computing realm.”

With all that considered, it is hardly surprising to hear Klippsten say that Zoic could not function without a solid storage solution. “It’s funny. When people talk about storage, they are always saying they don’t have enough of it. Even when you have a lot of storage, it’s always running at 99 percent full, and they wonder why you can’t just go out to Best Buy and purchase another hard drive. It doesn’t work that way!”

Milk VFX
Founded just five years ago, Milk VFX is an independent visual effects facility in the UK with locations in London and Cardiff, Wales. While Milk VFX may be young, it was founded by experienced and award-winning VFX supervisors and producers. And the awards have continued, including an Oscar (Ex Machina), an Emmy (Sherlock) and three BAFTAs, as the studio creates innovative and complex work for high-end television and feature films.

Benoit Leveau

With so much precious data, and a lot of it, the studio has to ensure that its work is secure and the storage system is keeping pace with the staff using it. When the studio was set up, it installed Pixit Media’s PixStor, a parallel file system with limitless storage, for its central storage solution. And, it has been growing with the company ever since. (Milk uses almost no local storage, except for media playback.)

“It was a carefully chosen solution due to its enterprise-level performance,” says Benoit Leveau, head of pipeline at Milk, about the decision to select PixStor. “It allowed us to expand when setting up our second studio in Cardiff and our rendering solutions in the cloud.”

When Milk was shopping for a storage offering while opening the studio, four things were at the forefront of their minds: speed, scalability, performance and reliability. Those were the functions the group wanted from its storage system — exactly the same four demands that its projects place on it.

“A final image requires gigabytes, sometimes terabytes, of data in the form of detailed models, high-resolution textures, animation files, particles and effects caches and so forth,” says Leveau. “We need to be able to review 4K image sequences in real time, so it’s really essential for daily operation.”

This year alone, Milk has completed a number of high-end visual effects sequences for feature films such as Adrift, on which it served as the principal vendor. For this true story about a young couple lost at sea during one of the most catastrophic hurricanes in recorded history, the Milk team created all the major water and storm sequences, including bespoke 100-foot waves, all of which were rendered entirely in the cloud.

As Leveau points out, one of the shots in the film was more than 60TB, as it required complex ocean simulations. “We computed the ocean simulations on our local renderfarm, but the rendering was done in the cloud, and with this setup, we were able to access the data from everywhere almost transparently for the artists,” he explains.

Adrift

The studio also recently completed work on the blockbuster Fantastic Beasts sequel, The Crimes of Grindelwald.

For television, the studio created visual effects for an episode of the Netflix Altered Carbon sci-fi series, where people can live forever, as they digitally store their consciousness (stacks) and then download themselves into new bodies (sleeves). For the episode, the Milk crew created forest fires and the aftermath, as well as an alien planet and escape ship. For Origin, an action-thriller, the team generated 926 VFX shots in 4K for the 10-part series, spanning a wide range of work. Milk is also serving as the VFX vendor for Good Omens, a six-part horror/fantasy/drama series.

“For Origin, all the data had to be online for the duration of the four-month project. At the same time, we commenced work as the sole VFX vendor on the BBC/Amazon Good Omens series, which is now rapidly filling up our PixStor, hence the importance of scalability!” says Leveau.

Main Image: Origin via Milk VFX


Karen Moltenbrey is a veteran VFX and post writer.

Virtual Roundtable: Storage

By Randi Altman

The world of storage is ever changing and complicated, with many flavors meant to match specific workflow needs. What matters most to users, beyond easily installed, easy-to-use systems that let them focus on the creative and not the tech? Scalability, speed, data protection, the cloud and the need to handle higher and higher frame rates at higher and higher resolutions — meaning larger and larger files. The good news is the tools are growing to meet these needs. New technologies and software enhancements around NVMe are providing extremely low-latency connectivity that supports higher-performance workflows. Time will tell how that plays a part in day-to-day workflows.

For this virtual roundtable, we reached out to makers of storage and users of storage. Their questions differ a bit, but their answers often overlap. Enjoy.

Company 3 NY and Deluxe NY Data/IO Supervisor Hollie Grant

Company 3 specializes in DI, finishing and color correction, and Deluxe is an end-to-end post house working on projects from dailies through finishing.

Hollie Grant

How much data did you use/back up this year? How much more was that than the previous year? How much more data do you expect to use next year?
Over the past year, as a rough estimate, my team dealt with around 1.5 petabytes of data. The latter half of this year really ramped up storage-wise. We were cruising along with a normal increase in data per show until the last few months where we had an influx of UHD, 4K and even 6K jobs, which take up to quadruple the space of a “normal” HD or 2K project.

I don’t think we’ll see a decrease in this trend with the takeoff of 4K televisions as the baseline for consumers and with streaming becoming more popular than ever. OTT films and television have raised the bar for post production, with 4K source and native deliveries now expected. Even smaller indie films that we would normally not think twice about space-wise are shooting and finishing 4K in the hopes that Netflix or Amazon will buy their film. This means that even projects that once were no burden on our storage will have to be factored in differently going forward.

Have you ever lost important data due to a hardware failure? Have you ever lost data due to an operator error? (Accidental overwrite, etc.)
Triple knock on wood! In my time here we have not lost any data due to an operator error. We follow strict procedures and create redundancy in our data, so if there is a hardware failure we don’t lose anything permanently. We have received hard drives or tapes that failed, but this far along in the digital age most people have more than one copy of their work, and if they don’t, a backup is the first thing I recommend.

Do you find access speed to be a limiting factor with your current storage solution?
We can reach read and write speeds of 1GB/sec on our SAN. We have a pretty fast configuration of disks. Of course, the more sessions you have trying to read or write on a volume, the harder it can be to get playback. That’s why we have around 2.5PB of storage across many volumes, so I can organize projects based on the bandwidth they will need and their schedules and we don’t have trouble with speed. This is one of the more challenging aspects of my day-to-day as the size of projects and their demand for larger frame playback increase.
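The bookkeeping Grant describes — spreading projects across volumes so playback never starves — can be pictured as a simple first-fit allocator. The sketch below is a toy illustration with invented volume names and GB/sec figures, not Company 3’s actual tooling.

```python
# Toy sketch: place projects on SAN volumes without oversubscribing bandwidth.
# Volume names and GB/s capacities are invented for illustration.
volumes = {"vol_a": 1.0, "vol_b": 1.0, "vol_c": 0.6}   # usable GB/s per volume
assigned = {name: [] for name in volumes}
headroom = dict(volumes)

def place(project, needed_gbps):
    best = max(headroom, key=headroom.get)   # volume with most spare bandwidth
    if headroom[best] < needed_gbps:
        raise RuntimeError(f"no volume can sustain {project} at {needed_gbps} GB/s")
    headroom[best] -= needed_gbps
    assigned[best].append(project)

place("feature_4k_grade", 0.9)   # 4K playback claims most of one volume
place("hd_episodic", 0.3)
place("dailies", 0.2)
print(assigned)
```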

Showtime’s Escape at Dannemora – Co3 provided color grading and conform.

What percentage of your data’s value do you budget toward storage and data security?
I can’t speak to exact percentages, but storage upgrades are a large part of our yearly budget. There is always an ask for new disks in the funding for the year because every year we’re growing along with the size of the data for productions. Our production network infrastructure is designed around security regulations set forth by many studios and the MPAA. A lot of work goes into maintaining that and one of the most important things to us is keeping our clients’ data safe behind multiple “locks and keys.”

What trends do you see in storage?
I see the obvious trends in physical storage size decreasing while bandwidth and data size increase. Along those lines, I’m sure we’ll see more movies being post-produced with everything needed in “the cloud.” The frontrunners of cloud storage have larger, more secure and redundant forms of storing data, so I think it’s inevitable that we’ll move in that direction. It will also make collaboration much easier. You could have all camera-original material stored there, as well as any transcoded files that editorial and VFX will be working with. Using the cloud as a sort of near-line storage would free up the disks in post facilities to focus on only having online what the artists need, while still being able to quickly access anything else. Some companies are already working in a manner similar to this, but I think it will start to be a more common solution moving forward.
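As a concrete picture of that near-line pattern, here is a hedged sketch that parks camera-original media in S3-style object storage and frees the online volume. The bucket name and paths are hypothetical, and S3 merely stands in for whichever cloud store a facility actually uses.

```python
# Hypothetical near-line sketch: push camera originals to object storage and
# keep only active material on the online SAN. Bucket and paths are invented.
import boto3
from pathlib import Path

s3 = boto3.client("s3")
BUCKET = "studio-nearline"   # hypothetical bucket

def push_to_nearline(local_dir: str, prefix: str):
    for f in Path(local_dir).rglob("*"):
        if f.is_file():
            key = f"{prefix}/{f.relative_to(local_dir)}"
            s3.upload_file(str(f), BUCKET, key)   # original goes to the cloud
            f.unlink()                            # frees space on the online volume

push_to_nearline("/san/show_x/camera_originals", "show_x/originals")
```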

creative.space‘s Nick Anderson

What is the biggest trend you’ve seen in the past year in terms of storage?
The biggest trend is NVMe storage. SSDs are finally entering a price range where they are forcing storage vendors to re-evaluate their architectures to take advantage of NVMe’s performance benefits.

Nick Anderson

Can you talk more about NVMe?
When it comes to NVMe, speed, price and form factor are three key things users need to understand. When it comes to speed, it blasts past the limitations of hard drive speeds to deliver 3GB/s per drive, which requires a faster connector (PCIe) to take advantage of. With parallel access and higher IOPS (input/output operations per second), NVMe drives can handle operations that would bring an HDD to its knees. When it comes to price, it is cheaper per GB than past iterations of SSD, making it a feasible alternative for tier one storage in many workflows. Finally, when it comes to form factor, it is smaller and requires less hardware bulk in a purpose-built system, so you can get more drives in a smaller amount of space at a lower cost. People I talk to are surprised to hear that they have been paying a premium to put fast SSDs into HDD form factors that choke their performance.
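A quick back-of-the-envelope comparison makes the speed point concrete. The NVMe figure below is the 3GB/s Anderson quotes; the HDD throughput and per-stream 4K bitrate are rough assumptions for illustration.

```python
# Rough stream-count comparison. Only the 3GB/s NVMe figure comes from the
# interview; the HDD and per-stream numbers are ballpark assumptions.
nvme_gbps = 3.0     # per NVMe drive (article figure)
hdd_gbps = 0.2      # sequential throughput of a single HDD (assumed)
stream_gbps = 1.1   # one 4K 10-bit DPX stream at 24fps, roughly (assumed)

print(f"NVMe drive: {nvme_gbps / stream_gbps:.1f} concurrent 4K streams")
print(f"Single HDD: {hdd_gbps / stream_gbps:.2f} concurrent 4K streams")
# Random IOPS diverge far more sharply than sequential throughput, which is
# what actually 'brings an HDD to its knees' under parallel access.
```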

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
This is something we have been thinking a lot about and we have some exciting stuff in the works that addresses this need that I can’t go into at this time. For now, we are working with our early adopters to solve these needs in ways that are practical to them, integrating custom software as needed. Moving forward we hope to bring an intuitive and seamless storage experience to the larger industry.

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
This gets down to a shift in what kind of data is being processed and how it can be accessed. When it comes to video, big media files and image sequences have driven the push for better performance. 360° video pushes storage performance further past 4K into 8K, 12K, 16K and beyond. On the other hand, as CGI continues to become more photorealistic and we emerge from the “uncanny valley,” the performance need shifts from big data to small data in many cases, as render engines are used instead of video or image files. Moving lots of small data is what these systems were originally designed for, so it will be a welcome shift for users.

When it comes to AI, our file system architectures and NVMe technology are making data easily accessible with less impact on performance. Apart from performance, we monitor thousands of metrics on the system that can be easily connected to your machine learning system of choice. We are still in the early days of this technology and its application to media production, so we are excited to see how customers take advantage of it.

What do you do in your products to help safeguard your users’ data?
From a data integrity perspective, every bit of data gets checksummed on copy and can be restored from that checksum if it gets corrupted. This means that the storage is self-healing, with 100% data integrity once data is written to disk.
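Conceptually, checksum-on-copy self-healing looks something like the sketch below — a generic illustration of the technique, not creative.space’s actual implementation.

```python
# Generic sketch of checksum-on-copy self-healing: record a digest at write
# time, verify on read, restore from a replica when the digest no longer
# matches. Not creative.space's implementation.
import hashlib
import shutil
from pathlib import Path

def checksum(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def copy_with_checksum(src: Path, dst: Path, ledger: dict):
    shutil.copy2(src, dst)
    ledger[str(dst)] = checksum(dst)   # digest recorded at copy time

def verify_and_heal(path: Path, replica: Path, ledger: dict):
    if checksum(path) != ledger[str(path)]:
        shutil.copy2(replica, path)    # silent corruption: restore the replica
        assert checksum(path) == ledger[str(path)], "replica is corrupt too"
```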

As far as safeguarding data from external threats, this is a complicated issue. There are many methods of securing a system, but for post production, performance can’t be compromised. For companies following MPAA recommendations, putting the storage behind physical security is often considered enough. Unfortunately, for many companies without an IT staff, this is where the security stops, and the system is left open once you get access to the network. To solve this problem, we developed an LDAP user management system, built into our units, that provides that extra layer of software security at no additional charge. Storage access becomes user-based, so system activity can be monitored. On the support side, we designed an API gatekeeper to manage data to and from the database that is auditable and secure.

AlphaDogs‘ Terence Curren

AlphaDogs is a full-service post house in Burbank, California, providing color correction, graphic design, VFX, sound design and audio mixing.

How much data did you use/back up this year? How much more was that than the previous year? How much more data do you expect to use next year?
We are primarily a finishing house, so we use hundreds of TBs per year on our SAN. We work at higher resolutions, which means larger file sizes. When we have finished a job and delivered the master files, we archive to LTO and clear the project off the SAN. When we handle the offline on a project, obviously our storage needs rise exponentially. We do foresee those requirements rising substantially this year.

Terence Curren

Have you ever lost important data due to a hardware failure? Have you ever lost data due to an operator error? (Accidental overwrite, etc.)
We’ve been lucky in that area (knocking on wood) as our SANs are RAID-protected and we maintain a degree of redundancy. We have had clients’ transfer drives fail. We always recommend they deliver a copy of their media. In the early days of our SAN, which is the Facilis TerraBlock, one of our editors accidentally deleted a volume containing an ongoing project. Fortunately, Facilis engineers were able to recover the lost partition as it hadn’t been overwritten yet. That’s one of the things I really have appreciated about working with Facilis over the years — they have great technical support which is essential in our industry.

Do you find access speed to be a limiting factor with your current storage solution?
Not yet. As we get forced into heavily marketed but unnecessary formats like the coming 8K, we will have to scale to handle the bandwidth overload. I am sure the storage companies are all very excited about that prospect.

What percentage of your data’s value do you budget toward storage and data security?
Again, we don’t maintain long-term storage on projects so it’s not a large consideration in budgeting. Security is very important and one of the reasons our SANs are isolated from the outside world. Hopefully, this is an area in which easily accessible tools for network security become commoditized. Much like deadbolts and burglar alarms in housing, it is now a necessary evil.

What trends do you see in storage?
More storage and higher bandwidths, some of which is being aided by solid state storage, which is very expensive on our level of usage. The prices keep coming down on storage, yet it seems that the increased demand has caused our spending to remain fairly constant over the years.

Cinesite London‘s Chris Perschky

Perschky ensures that Cinesite’s constantly evolving infrastructure provides the technical backbone required for a visual effects facility. His team plans, installs and implements all manner of technology, in addition to providing technical support to the entire company.

Chris Perschky

How much data did you use/back up this year? How much more was that than the previous year? How much more data do you expect to use next year?
Depending on the demands of the project that we are working on we can generate terabytes of data every single day. We have become increasingly adept at separating out data we need to keep long-term from what we only require for a limited time, and our cleanup tends to be aggressive. This allows us to run pretty lean data sets when necessary.

I expect more 4K work to creep in next year and, as such, expect storage demands to increase accordingly.

Have you ever lost important data due to a hardware failure? Have you ever lost data due to an operator error? (Accidental overwrite, etc.)
Our thorough backup procedures mean that we have an offsite copy of all production data within a couple of hours of it being written. As such, when an artist has accidentally overwritten a file we are able to retrieve it from backup swiftly.

Do you find access speed to be a limiting factor with your current storage solution?
Only remotely, thereby requiring a caching solution.

What percentage of your data’s value do you budget toward storage and data security?
Due to the requirements of our clients, we do whatever is necessary to ensure the security of their IP and our work.

Cinesite also worked on Iron Spider for Avengers: Infinity War ©2018 Marvel Studios

What trends do you see in storage?
The trendy answer is to move all storage to the cloud, but it is just too expensive. That said, the benefits of cloud storage are well documented, so we need some way of leveraging it. I see more hybrid on-prem and cloud solutions providing the best of both worlds as demand requires. Full SSD solutions are still way too expensive for most of us, but multi-tier storage solutions will have a larger SSD cache tier as prices drop.

Panasas‘ RW Hawkins

What is the biggest trend you’ve seen in the past year in terms of storage?
The demand for more capacity certainly isn’t slowing down! New formats like ProRes RAW, HDR and stereoscopic images required for VR continue to push the need to scale storage capacity and performance. New Flash technologies address the speed, but not the capacity. As post production houses scale, they see that complexity increases dramatically. Trying to scale to petabytes with individual and limited file servers is a big part of the problem. Parallel file systems are playing a more important role, even in medium-sized shops.

RW Hawkins

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
VR (and, more generally, interactive content creation) is particularly interesting as it takes many of the aspects of VFX and interactive gaming and combines them with post. The VFX industry, for many years, has built batch-oriented pipelines running on multiple Linux boxes to solve many of their production problems. This same approach works well for interactive content production where the footage often needs to be pre-processed (stitched, warped, etc.) before editing. High speed, parallel filesystems are particularly well suited for this type of batch-based work.

The AI/ML space is red hot, and the applications seem boundless. Right now, much of the work is being done at a small scale where direct-attach, all-Flash storage boxes serve the need. As this technology is used on a larger scale, it will put demands on storage that can’t be met by direct-attached storage, so meeting those high IOP needs at scale is certainly something Panasas is looking at.

Can you talk about NVMe?
NVMe is an exciting technology, but not a panacea for all storage problems. While very fast and excellent at small operations, it is still very expensive, has small capacity and is difficult to scale to petabyte sizes. The next-generation Panasas ActiveStor Ultra platform uses NVMe for metadata while still leveraging spinning disk and SATA SSD. This hybrid approach, using each storage medium for what it does best, is something we have been doing for more than 10 years.
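The hybrid idea reduces to a routing rule: latency-sensitive metadata to NVMe, small files to SATA SSD, large sequential media to spinning disk. The sketch below is a generic illustration with invented thresholds, not Panasas’ internal placement logic.

```python
# Generic tier-routing sketch. Thresholds and tier paths are invented for
# illustration; ActiveStor's real placement logic is internal to the product.
NVME_TIER, SSD_TIER, HDD_TIER = "/tier/nvme", "/tier/ssd", "/tier/hdd"

def choose_tier(payload_kind: str, size_bytes: int) -> str:
    if payload_kind == "metadata":
        return NVME_TIER               # directory entries, attributes: tiny, hot
    if size_bytes < 4 * 1024 * 1024:
        return SSD_TIER                # small files punish HDD seek times
    return HDD_TIER                    # large sequential media: cheapest per GB

print(choose_tier("metadata", 512))        # /tier/nvme
print(choose_tier("data", 50 * 10**9))     # /tier/hdd
```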

What do you do in your products to help safeguard your users’ data?
Panasas uses object-based data protection with RAID-6+. This software-based erasure code protection, at the file level, provides the best scalable data protection. Only files affected by a particular hardware failure need to be rebuilt, and increasing the number of drives doesn’t increase the likelihood of losing data. In a sense, every file is individually protected. On the hardware side, all Panasas hardware provides non-volatile components, including cutting-edge NVDIMM technology to protect our customers’ data. The file system has been proven in the field. We wouldn’t have the high-profile customers we do if we didn’t provide superior performance as well as superior data protection.

Users want more flexible workflows — storage in the cloud, on-premises, etc. How are your offerings reflective of that?
While Panasas leverages an object storage backend, we provide our POSIX-compliant file system client called DirectFlow to allow standard file access to the namespace. Files and directories are the “lingua franca” of the storage world, allowing ultimate compatibility. It is very easy to interface between on-premises storage, remote DR storage and public cloud/REST storage using DirectFlow. Data flows freely and at high speed using standard tools, which makes the Panasas system an ideal scalable repository for data that will be used in a variety of pipelines.

Alkemy X‘s Dave Zeevalk

With studios in Philly, NYC, LA and Amsterdam, Alkemy X provides live-action, design, post, VFX and original content for spots, branded content and more.

Dave Zeevalk

How much data did you use/back up this year? How much more was that than the previous year? How much more data do you expect to use next year?
Each year, our VFX department generates nearly a petabyte of data, from simulation caches to rendered frames. This year, we have seen a significant increase in data usage as client expectations continue to grow and 4K resolution becomes more prominent in episodic television and feature film projects.

In order to use our 200TB server responsibly, we have created a solid system for preserving necessary data and clearing unnecessary files on a regular basis. Additionally, we are diligent in archiving final projects to our LTO tape systems and removing them from our production server.

Have you ever lost important data due to a hardware failure? Have you ever lost data due to an operator error? (Accidental overwrite, etc.)
Because of our data redundancy, through hourly snapshots and daily backups, we have avoided any data loss even with hardware failure. Although hardware does fail, with these snapshots and backups on a secondary server we are able to bring data back online extremely quickly in the case of hardware failure on our production server. Years ago, during a migration to Linux, a software issue completely wiped out our production server. Within two hours, we were able to migrate all data back from our snapshots and backups to our production server with no data loss.

Do you find access speed to be a limiting factor with your current storage solution?
There are a few scenarios where we do experience some issues with access speed to the production server. We do a good amount of heavy simulation work, at times writing dozens of terabytes per hour. At our peak, we have experienced some throttled speeds due to the amount of data being written to the server. Our VFX team also has a checkpoint system for simulation, where raw data is saved to the server in parallel with the simulation cache. This allows us to restart a simulation midway through the process if a render node drops or fails the job. This raw data is extremely heavy, so while using checkpoints on heavy simulations, we also experience some slower than normal speeds.

What percentage of your data’s value do you budget toward storage and data security?
Our active production server houses 200TB of storage space. We have a secondary backup server with equivalent storage space, to which we store hourly snapshots and daily backups.

What trends do you see in storage?
With client expectations continuing to rise, and 4K (and higher at times) becoming more and more regular on jobs, the need for more storage space is ever increasing.

Quantum‘s Jamie Lerner

What is the biggest trend you’ve seen in the past year in terms of storage?
Although the digital transformation to higher resolution content in M&E has been taking place over the past several years, the interesting aspect is that the pace of change over the past 12 months is accelerating. Driving this trend is the mainstream adoption of 4K and high dynamic range (HDR) video, and the strong uptick in applications requiring 8K formats.

Jamie Lerner

Virtual reality and augmented reality applications are booming across the media and entertainment landscape; everywhere from broadcast news and gaming to episodic television. These high-resolution formats add data to streams that must be ingested at a much higher rate, consume more capacity once stored and require significantly more bandwidth when doing realtime editing. All of this translates into a significantly more demanding environment, which must be supported by the storage solution.

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
New technologies for producing stunning visual content are opening tremendous opportunities for studios, post houses, distributors, and other media organizations. Sophisticated next-generation cameras and multi-camera arrays enable organizations to capture more visual information, in greater detail than ever before. At the same time, innovative technologies for consuming media are enabling people to view and interact with visual content in a variety of new ways.

To capitalize on new opportunities and meet consumer expectations, many media organizations will need to bolster their storage infrastructure. They need storage solutions that offer scalable capacity to support new ingest sources that capture huge amounts of data, with the performance to edit and add value to this rich media.

Can you talk about NVMe?
The main benefit of NVMe storage is that it provides extremely low latency — therefore allowing users to seek content at very high speed — which is ideal for high stream counts and compressed 4K content workflows.

However, NVMe resources are expensive. Quantum addresses this issue head-on by leveraging NVMe over fabrics (NVMeoF) technology. With NVMeoF, multiple clients can use pooled NVMe storage devices across a network at local speeds and latencies. And when combined with our StorNext, all data is accessible by multiple clients in a global namespace, making this high-performance tier of storage much more cost-effective. Finally, Quantum is in early field trials of a new advancement that will allow customers to benefit even more from NVMe-enabled storage.

What do you do in your products to help safeguard your users’ data?
A storage system must be able to accommodate policies ranging from “throw it out when the job is done” to “keep it forever” and everything in between. The cost of storage demands control over where data lives and when, how many copies of the data exist and where those copies reside over time.

Xcellis scale-out storage powered by StorNext incorporates a broad range of features for data protection. This includes integrated features such as RAID, automated copying, versioning and data replication functionality, all included within our latest release of StorNext.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
Given the differences in size and scope of organizations across the media industry, production workflows are incredibly varied and often geographically dispersed. Within this context, flexibility becomes a paramount feature of any modern storage architecture.

We provide flexibility in a number of important ways for our customers. From the perspective of system architecture, and recognizing there is no one-size-fits-all solution, StorNext allows customers to configure storage with multiple media types that balance performance and capacity requirements across an entire end-to-end workflow. Second, and equally important for those companies that have a global workforce, our data replication software FlexSync allows content to be rapidly distributed to production staff around the globe. And no matter what tier of storage the data resides on, FlexTier provides coordinated and unified access to the content within a single global namespace.

EditShare‘s Bill Thompson

What is the biggest trend you’ve seen in the past year in terms of storage?
In no particular order, the biggest trends for storage in the media and entertainment space are:
1. The need to handle higher and higher data rates associated with higher resolution and higher frame rate content. Across the industry, this is being addressed with Flash-based storage and the use of emerging technology like NVMe over “X” and 25/50/100G networking.

Bill Thompson

2. The ever-increasing concern about content security and content protection, backup and restoration solutions.

3. The request for more powerful analytics solutions to better manage storage resources.

4. The movement away from proprietary hardware/software storage solutions toward ones that are compatible with commodity hardware and/or virtual environments.

Can you talk about NVMe?
NVMe technology is very interesting and will clearly change the M&E landscape going forward. One of the challenges is that we are in the midst of changing standards, and we expect current PCIe card-based NVMe components to be replaced by U.2/M.2 implementations. This migration will require important changes to storage platforms.

In the meantime, we offer non-NVMe Flash-based storage solutions whose performance and price points are equivalent to those claimed by early NVMe implementations.

What do you do in your products to help safeguard your users’ data?
EditShare has been at the forefront of user data protection for many years, beginning with our introduction of disk-based and tape-based automated backup and restoration solutions.

We expanded the types of data protection schemes and provided easy-to-use management tools that allow users to tailor the type of redundant protection applied to directories and files. Similarly, we now provide ACL Media Spaces, which allow user privileges to be precisely tailored to their tasks at hand; providing only the rights needed to accomplish their tasks, nothing more, nothing less.

Most recently, we introduced EFS File Auditing, a content security solution that enables system administrators to understand “who did what to my content” and “when and how they did it.”

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
The EditShare file system is now available in variants that support EditShare hardware-based solutions and hybrid on-premises/cloud solutions. Our Flow automation platform enables users to migrate from on-premises high-speed EFS solutions to cloud-based solutions, such as Amazon S3 and Microsoft Azure, offering the best of both worlds.

Rohde & Schwarz‘s Dirk Thometzek

What is the biggest trend you’ve seen in the past year in terms of storage?
Consumer behavior is the most substantial change that the broadcast and media industry has experienced over the past years. Content is consumed on-demand. In order to stay competitive, content providers need to produce more content. Furthermore, to make the content more desirable, technologies such as UHD and HDR need to be adopted. This obviously has an impact on the amount of data being produced and stored.

Dirk Thometzek

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
In media and entertainment there has always been a remarkable growth of data over time, from the very first simple SCSI hard drives to huge network environments. Nowadays, however, there is a tremendous growth approximating an exponential function. Considering all media will be preserved for a very long time, the M&E storage market segment will keep on growing and innovating.

Looking at the amount of footage being produced, a big challenge is finding the appropriate data. Taking it a step further, there might be content that a producer wouldn’t even think of looking for but that is relevant to the original metadata queried. That is where machine learning and AI come into play. We are looking into automated content indexing with the minimum amount of human interaction, where the artificial intelligence learns autonomously and shares information with other databases. The real challenge here is to protect these intelligences from being compromised by unintentional access to the information.

What do you do to help safeguard your users’ data?
In collaboration with our Rohde & Schwarz Cybersecurity division, we are offering complete and protected packages to our customers, ranging from access restrictions on server rooms to encrypted data transfers. Cyber attacks are complex and opaque, but the security layer must be transparent and usable. In media, though, latency is just as critical, and it is usually introduced with every security layer.

Can you talk about NVMe?
In order to bring the best value to the customer, we are constantly looking for improvements. The direct PCI communication of NVMe certainly brings a huge improvement in terms of latency since it completely eliminates the SCSI communication layer, so no protocol translation is necessary anymore. This results in much higher bandwidth and more IOPS.

For internal data processing and databases, R&S SpycerNode uses NVMe, which really boosts its performance. Unfortunately, using this technology for bulk media storage is not currently considered economically efficient. We are dedicated to getting the best performance-to-cost ratio for the market, and since we have been developing video workstations and servers besides storage for decades now, we know how to get the best performance out of a drive — spinning or solid state.

Economically, it doesn’t seem acceptable to build a system with the latest and greatest technology for a workflow when standards will do, just because it is possible. The real art of storage technology lies in a highly customized configuration according to the technical requirements of an application or workflow. R&S SpycerNode will evolve over time, and technologies will be added to the family.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
Although hybrid workflows are highly desirable, it is quite important to understand the advantages and limits of this technology. High-bandwidth and low-latency wide-area network connections involve certain economical aspects. Without the suitable connection, an uncompressed 4K production does not seem feasible from a remote location — uploading several terabytes to a co-location can take hours or even days to be transferred, even if protocol acceleration is used. However, there are workflows, such as supplemental rendering or proxy editing, that do make sense to offload to a datacenter. R&S SpycerNode is ready to be an integral part of geographically scattered networks and the Spycer Storage family will grow.

Dell EMC‘s Tom Burns

What is the biggest trend you’ve seen in the past year in terms of storage?
The most important storage trend we’ve seen is an increasing need for access to shared content libraries accommodating global production teams. This is becoming an essential part of the production chain for feature films, episodic television, sports broadcasting and now e-sports. For example, teams in the UK and in California can share asset libraries for their file-based workflow via a common object store, whether on-prem or hybrid cloud. This means they don’t have to synchronize workflows using point-to-point transmissions from California to the UK, which can get expensive.

Tom Burns

Achieving this requires seamless integration of on-premises file storage for the high-throughput, low-latency workloads with object storage. The object storage can be in the public cloud or you can have a hybrid private cloud for your media assets. A private or hybrid cloud allows production teams to distribute assets more efficiently and saves money versus using the public cloud for sharing content. If the production needs it to be there right now, they can still fire up Aspera, Signiant, File Catalyst or other point-to-point solutions and have prioritized content immediately available, while allowing your on-premises cloud to take care of the shared content libraries.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
Dell Technologies offers end-to-end storage solutions where customers can position the needle anywhere they want. Are you working purely in the cloud? Are you working purely on-prem? Or, like most people, are you working somewhere in the middle? We have a continuous spectrum of storage between high-throughput low-latency workloads and cloud-based object storage, plus distributed services to support the mix that meets your needs.

The most important thing that we’ve learned is that data is expensive to store, granted, but it’s even more expensive to move. Storing your assets in one place and having that path name never change, that’s been a hallmark of Isilon for 15 years. Now we’re extending that seamless file-to-object spectrum to a global scale, deploying Isilon in the cloud in addition to our ECS object store on premises.

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
AR, VR, AI and other emerging technologies offer new opportunities for media companies to change the way they tell and monetize their stories. However, due to the large amounts of data involved, many media organizations are challenged when they rely on storage systems that lack either scalability or performance to meet the needs of these new workflows.

Dell EMC’s file and object storage solutions help media companies cost effectively tier their content based upon access. This allows media organizations to use emerging technologies to improve how stories are told and monetize their content with the assistance of AI-generated metadata, without the challenges inherent in many traditional storage systems.

With artificial intelligence, for example, where it was once the job of interns to categorize content in projects that could span years, AI gives media companies the ability to analyze content in near-realtime and create large, easily searchable content libraries as the content is being migrated from existing tape libraries to object-based storage, or ingested for current projects. The metadata involved in this process includes brand recognition and player/actor identification, as well as speech-to-text, making it easy to determine logo placement for advertising analytics and to find footage for use in future movies or advertisements.
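At its simplest, the searchable library Burns describes is an inverted index over AI-generated tags. In the toy sketch below the tags are hard-coded; a real pipeline would source them from speech-to-text and logo-recognition models.

```python
# Toy inverted index over AI-generated tags. Assets and tags are invented;
# a real pipeline would generate tags with ML models at ingest time.
from collections import defaultdict

assets = {
    "game_0412.mxf": ["basketball", "brand:acme", "crowd"],
    "intv_0017.mxf": ["interview", "player:smith", "brand:acme"],
}

index = defaultdict(set)
for asset, tags in assets.items():
    for tag in tags:
        index[tag].add(asset)

# Every clip where a given logo was detected, for ad-placement analytics:
print(sorted(index["brand:acme"]))   # ['game_0412.mxf', 'intv_0017.mxf']
```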

With Dell EMC storage, AI technologies can be brought to the data, removing the need to migrate or replicate data to direct-attach storage for analysis. Our solutions also offer the scalability to store the content for years using affordable archive nodes in Isilon or ECS object storage.

In terms of AR and VR, we are seeing video game companies using this technology to change the way players interact with their environments. Not only have they created a completely new genre with games such as Pokémon Go, they have figured out that audiences want nonlinear narratives told through realtime storytelling. Although AR and VR adoption has been slower for movies and TV compared to the video game industry, we can learn a lot from the successes of video game production and apply similar methodologies to movie and episodic productions in the future.

Can you talk about NVMe?
NVMe solutions are a small but exciting part of a much larger trend: workflows that fully exploit the levels of parallelism possible in modern converged architectures. As we look forward to 8K, 60fps and realtime production, the usage of PCIe bus bandwidth by compute, networking and storage resources will need to be much more balanced than it is today.

When we get into realtime productions, these “next-generation” architectures will involve new production methodologies such as realtime animation using game engines rather than camera-based acquisition of physically staged images. These realtime processes will take a lot of cooperation between hardware, software and networks to fully leverage the highly parallel, low-latency nature of converged infrastructure.

Dell Technologies is heavily invested in next-generation technologies that include NVMe cache drives, software-defined networking, virtualization and containerization that will allow our customers to continuously innovate together with the media industry’s leading ISVs.

What do you do in your products to help safeguard your users’ data?
Your content is your most precious capital asset and should be protected and maintained. If you invest in archiving and backing up your content with enterprise-quality tools, then your assets will continue to be available to generate revenue for you. However, archive and backup are just two pieces of data security that media organizations need to consider. They must also take active measures to deter data breaches and unauthorized access to data.

Protecting data at the edge, especially at the scale required for global collaboration can be challenging. We simplify this process through services such as SecureWorks, which includes offerings like security management and orchestration, vulnerability management, security monitoring, advanced threat services and threat intelligence services.

Our storage products are packed with technologies to keep data safe from unexpected outages and unauthorized access, and to meet industry standards such as alignment with MPAA and TPN best practices for content security. For example, Isilon’s OneFS operating system includes SnapshotIQ snapshots, providing point-in-time backups that update automatically and generate a list of restore points.

Isilon also supports role-based access control and integration with Active Directory, MIT Kerberos and LDAP, making it easy to manage account access. For production houses working on multiple customer projects, our storage also supports multi-tenancy and access zones, which means that clients requiring quarantined storage don’t have to share storage space with potential competitors.

Our on-prem object store, ECS, provides long-term, cost-effective object storage with support for globally distributed active archives. This helps our customers with global collaboration, but also provides inherent redundancy. The multi-site redundancy creates an excellent backup mechanism as the system will maintain consistency across all sites, plus automatic failure detection and self-recovery options built into the platform.

Scale Logic‘s Bob Herzan

What is the biggest trend you’ve seen in the past year in terms of storage?
There is and has been considerable buzz around cloud storage, object storage, AI and NVMe. Scale Logic recently conducted a private survey of its customer base to help answer this question. What we found is that none of those buzzwords can be considered a trend. We also found that our customers were migrating away from SAN and focusing on building infrastructure around high-performance and scalable NAS.

Bob Herzan

They felt on-premises LTO was still the most viable option for archiving, and finding a more efficient and cost-effective way to manage their data was their highest priority for the next couple of years. There are plenty of early adopters testing out the buzzwords in the industry, but the trend — in my opinion — is to maximize a stable platform with the best overall return on the investment.

End users are not focused so much on storage, but on how a company like ours can help them solve problems within their workflows where storage is an important component.

Can you talk more about NVMe?
NVMe provides an any-K solution and superior low-latency metadata performance, and it works with our scale-out file system. All of our products have had 100GbE drivers for almost two years, enabling mesh technologies with NVMe for networks as well. As cost comes down, NVMe should start to become more mainstream this year — our team is well versed in supporting NVMe and ready to help facilities research the price-to-performance of NVMe to see if it makes sense for their Genesis and HyperFS Scale Out systems.

With AI, VR and machine learning, our industry is even more dependent on storage. How are you addressing this?
We are continually refining and testing our best practices. Our focus on broadcast automation workflows over the years has already enabled our products for AI and machine learning. We are keeping up with the latest technologies, constantly testing in our lab with the latest in software and workflow tools and bringing in other hardware to work within the Genesis Platform.

What do you do in your products to help safeguard your users’ data?
This is a broad question that has different answers depending on which aspect of the Genesis Platform you may be talking about. Simply speaking, we can craft any number of data safeguard strategies and practices based on our customer needs, the current technology they are using and, most importantly, where they see their capacity growth and data protection needs moving forward. Our safeguards range from enterprise-quality components, mirrored sets, RAID-6, RAID-7.3 and RAID N+M, and asynchronous data sync to a second instance, up to full HA with synchronous data sync to a second instance, virtual IP failover between multiple sites, and multi-tier DR and business continuity solutions.

In addition, the Genesis Platform’s 24×7 health monitoring service (HMS) communicates directly with installed products at customer sites, using the equipment serial number to track service outages, system temperature, power supply failure, data storage drive failure and dozens of other mission-critical status updates. This service is available to Scale Logic end users in all regions of the world and complies with enterprise-level security protocols by relying only on outgoing communication via a single port.

Users want more flexible workflows — storage in the cloud, on-premises. Are your offerings reflective of that?
Absolutely. This question defines our go-to-market strategy — it’s in our name and part of our day-to-day culture. Scale Logic takes a consultative role with its clients. We take our 30-plus years of experience and ask many questions. Based on the answers, we can give the customer several options. First off, many customers feel pressured to refresh their storage infrastructure before they’re ready. Scale Logic offers customized extended warranty coverage that takes the pressure off the client and allows them to review their options and then slowly implement the migration and process of taking new technology into production.

Also, our Genesis Platform has been designed to scale, meaning clients can start small and grow as their facility grows. We are not trying to force a single solution on our customers. We educate them on the various options to solve their workflow needs and allow them the luxury of choosing the solution that best meets both their short-term and long-term needs as well as their budget.

Facilis‘ Jim McKenna

What is the biggest trend you’ve seen in the past year in terms of storage?
Recently, I’ve found that conversations around storage inevitably end up highlighting some non-storage aspects of the product. Sort of the “storage and…” discussion where the technology behind the storage is secondary to targeted add-on functionality. Encoding, asset management and ingest are some of the ways that storage manufacturers are offering value-add to their customers.

Jim McKenna

It’s great that customers can now expect more from a shared storage product, but as infrastructure providers we should be most concerned with advancing the technology of the storage system. I’m all for added value — we offer tools ourselves that assist our customers in managing their workflow — but that can’t be the primary differentiator. A premium shared storage system will provide years of service through the deployment of many supporting products from various manufacturers, so I advise people to avoid getting caught up in the value-add marketing from a storage vendor.

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
Our industry has always been dependent upon storage in the workflow, but now facilities need to manage large quantities of data efficiently, so it’s becoming more about scaled networks. In the traditional SAN environment, hard-wired Fibre Channel clients are the exclusive members of the production workgroup.

With scalable shared-storage through multiple connection options, everyone in the facility can be included in the collaboration on a project. This includes offload machines for encoding and rendering large HDR and VR content, and MAM systems with localized and cloud analysis of data. User accounts commonly grow into the triple digits when producers, schedulers and assistants all require secure access to the storage network.

Can you talk about NVMe?
Like any new technology, the outlook for NVMe is promising. Solid state architecture solves a lot of problems inherent in HDD-based systems — seek times, read speeds, noise and cooling, form factor, etc. If you had asked me a couple of years ago, I would have guessed that SATA SSDs would be included in the majority of systems sold by now; instead, they’ve barely made a dent in HDD-based unit sales in this market. Our customers are aware of new technology, but they also prioritize tried-and-true, field-tested product designs and value high capacity at a lower cost per GB.

Spinning HDD will still be the primary storage method in this market for years to come, although solid state has advantages as a helper technology for caching and direct access for high-bandwidth requirements.

What do you do in your products to help safeguard your users’ data?
Integrity and security are priority features in a shared storage system. We go about our security differently than most, and because of this our customers have more confidence in their solution. By using a system of permissions that emanates from the volume level and is shielded from the complexities of network ownership attributes, network security training is not required. Because it is simple to secure data to only the necessary people, data integrity and privacy are increased.

In the case of data integrity during hardware failure, our software-defined data protection has been guarding our customers’ assets for over 13 years and is continually improved. With increasing drive sizes, time to completion of drive recovery is an important factor, as is system usability during the process.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
When data lifecycle is a concern of our customers, we consult on methods of building a storage hierarchy. There is no one-size-fits-all approach here, as every workflow, facility and engineering scope is different.

Tier 1 storage is our core product line, but we also have solutions for nearline (tier 2) and archive (tier 3). When the discussion turns to the cloud as a replacement for some of the traditional on-premises storage offerings, the complexity of the pricing structure, access model and interface becomes a gating factor. There are a lot of ways to effectively use the cloud, such as compute (AI, encoding, etc.), business continuity, workflow (WAN collaboration) or simple cold storage. These tools, when combined with a strong on-premises storage network, will enhance productivity and ensure on-time delivery of product.

mLogic’s co-founder/CEO Roger Mabon

What is the biggest trend you’ve seen in the past year in terms of storage?
In the M&E industry, high-resolution 4K/8K multi-camera shoots, stereoscopic VR and HDR video are commonplace and are contributing to the unprecedented amounts of data being generated in today’s media productions. This trend will continue as frame rates and resolutions increase and video professionals move to shoot in these new formats to future-proof their content.

Roger Mabon

With AI, VR and machine learning, etc., our industry is even more dependent on storage. Can you talk about that?
Absolutely. In this environment, content creators must deploy storage solutions that are high-capacity, high-performance and fault-tolerant. Furthermore, all of this content must be properly archived so it can be accessed well into the future. mLogic’s mission is to provide affordable RAID and LTO tape storage solutions that fit this critical need.

How are you addressing this?
The tsunami of data being produced in today’s shoots must be properly managed. First and foremost is the need to protect the original camera files (OCF). Our high-performance mSpeed Thunderbolt 3 RAID solutions are being deployed on-set to protect these OCF. mSpeed is a desktop RAID that features plug-and-play Thunderbolt connectivity, capacities up to 168TB and RAID-6 data protection. Once the OCF is transferred to mSpeed, camera cards can be wiped and put back into production.

The next step involves moving the OCF from the on-set RAID to LTO tape. Our portable mTape Thunderbolt 3 LTO solutions are used extensively by media pros to transfer OCF to LTO tape. LTO tape cartridges are shelf stable for 30+ years and cost around $10 per TB. That said, I find that many productions skip the LTO transfer and rely solely on single hard drives to store the OCF. This is a recipe for disaster, as hard drives sitting on a shelf have a lifespan of only three to five years. Companies working with the likes of Netflix are required to use LTO for this very reason. Completed projects should also be offloaded from hard drives and RAIDs to LTO tape. These hard drive systems can then be put back into action for the tasks they are designed for… editing, color correction, VFX, etc.
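The economics here are easy to sanity-check with Mabon’s roughly $10-per-TB figure for LTO cartridges. In the back-of-the-envelope sketch below, the hard drive price and the re-copy cycle implied by a three-to-five-year shelf life are our assumptions.

# Back-of-the-envelope archive cost: LTO cartridges vs. parked hard drives.
# $10/TB for LTO is from the article; the HDD figures are our assumptions.

LTO_PER_TB = 10.0      # article's figure; cartridges shelf-stable for 30+ years
HDD_PER_TB = 25.0      # assumed bare-drive cost
HDD_LIFE_YEARS = 5     # article: shelved drives last roughly three to five years

def archive_cost(tb: float, years: int) -> tuple:
    lto = tb * LTO_PER_TB                     # one-time purchase
    rebuys = -(-years // HDD_LIFE_YEARS)      # ceiling: re-copy to fresh drives
    hdd = tb * HDD_PER_TB * rebuys
    return lto, hdd

if __name__ == "__main__":
    lto, hdd = archive_cost(tb=100, years=30)
    print(f"100TB for 30 years: LTO ${lto:,.0f} vs. HDD ${hdd:,.0f}")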

Can you talk about NVMe?
mLogic does not currently offer storage solutions that incorporate NVMe technology, but we do recognize numerous use cases for content creation applications. Intel is currently shipping an 8TB SSD with PCIe NVMe 3.1 x4 interface that can read/write data at 3000+ MB/second! Imagine a crazy fast and ruggedized NVMe shuttle drive for on-set dailies…

What do you do in your products to help safeguard your users’ data?
Our 8- and 12-drive mSpeed solutions feature hardware RAID data protection. mSpeed can be configured in multiple RAID levels, including RAID-6, which will protect the content stored on the unit even if two drives fail. Our mTape solutions are specifically designed to make it easy to offload media from spinning drives and archive the content to LTO tape for long-term data preservation.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
We recommend that you make two LTO archives of your content that are geographically separated in secure locations such as the post facility and the production facility. Our mTape Thunderbolt solutions accomplish this task.

In regards to the cloud, transferring terabytes upon terabytes of data takes an enormous amount of time and can be prohibitively expensive, especially when you need to retrieve the content. For now, cloud storage is reserved for productions with big pipes and big budgets.

OWC president Jennifer Soulé

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
We’re constantly working to provide more capacity and faster performance.  For spinning disk solutions, we’re making sure that we’re offering the latest sizes in ever-increasing bays. Our ThunderBay line started as a four-bay, went to a six-bay and will grow to eight-bay in 2019. With 12TB drives, that’s 96TB in a pretty workable form factor. Of course, you also need performance, and that is where our SSD solutions come in as well as integrating the latest interfaces like Thunderbolt 3. For those with greater graphics needs, we also have our Helios FX external GPU box.

Can you talk about NVME?
With our Aura Pro X, Envoy Pro EX, Express 4M2 and ThunderBlade, we’re already into NVMe and don’t see that stopping. By the end of 2019, we expect virtually all of our external Flash-based solutions will be NVMe-based rather than SATA. As the cost of Flash goes down and performance and capacity go up, we expect broader adoption, both as primary storage and in secondary cache setups. The 2TB drive supply will stabilize, we should see 4TB drives, and PCIe Gen 4 will double bandwidth. Bigger, faster and cheaper is a pretty awesome combination.
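The PCIe Gen 4 point checks out arithmetically: each PCIe generation doubles the per-lane transfer rate, so a four-lane NVMe drive goes from roughly 3.9GB/s of theoretical bandwidth to roughly 7.9GB/s. A quick sketch using the commonly cited effective per-lane rates:

# Theoretical PCIe link bandwidth after 128b/130b encoding overhead.
# Per-lane figures are the commonly cited effective rates.
GBPS_PER_LANE = {3: 0.985, 4: 1.969}   # GB/s per lane

def link_bandwidth_gbs(gen: int, lanes: int) -> float:
    return GBPS_PER_LANE[gen] * lanes

for gen in (3, 4):
    print(f"PCIe Gen {gen} x4: {link_bandwidth_gbs(gen, 4):.1f} GB/s")
# Gen 3 x4 is about 3.9 GB/s (in line with today's 3,000+ MB/s NVMe drives);
# Gen 4 x4 is about 7.9 GB/s, i.e. double.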

What do you do in your products to help safeguard your users’ data?
We focus more on providing products that are compatible with different encryption schemas rather than building something in. As far as overall data protection, we’re always focused on providing the most reliable storage we can. We spec our power supplies above what is required so that insufficient power is never a factor, and we test a multitude of drives in our enclosures to ensure we’re providing the best-performing drives.

For our RAID solutions, we do burn-in testing to make sure all the drives are solid. Our SoftRAID technology also provides in-depth drive health monitoring, so you know well in advance if a drive is failing. This is critical because many other SMART-based systems fail to detect bad drives, leading to subpar system performance and corrupted data. Of course, all the hardware and software technology we put into our drives doesn’t do much if people don’t back up their data — so we also work with our customers to find the right solution for their use case or workflow.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
I definitely think we hit on flexibility within the on-premises space by offering a full range of single- and multi-drive solutions, spinning disk and SSD options, and portable to rackmounted units that can be fully set-up solutions or DIY, where you can use drives you might already have. You’ll have to stay tuned on the cloud part, but we do have plans to use the cloud to expand on the data protection our drives already offer.

Panasas’ new ActiveStor Ultra targets emerging apps: AI, VR

Panasas has introduced ActiveStor Ultra, the next generation of its high-performance computing storage solution, featuring PanFS 8, a plug-and-play, portable, parallel file system. ActiveStor Ultra offers up to 75GB/s per rack on industry-standard commodity hardware.

ActiveStor Ultra comes as a fully integrated plug-and-play appliance running PanFS 8 on industry-standard hardware. PanFS 8 is the completely re-engineered Panasas parallel file system, which now runs on Linux and features intelligent data placement across three tiers of media — metadata on non-volatile memory express (NVMe), small files on SSDs and large files on HDDs — resulting in optimized performance for all data types.
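That placement rule is simple enough to state as code. Below is a hedged sketch of the idea; the 64KB small-file cutoff is our illustrative assumption, not a published Panasas threshold.

# Illustrative data-placement policy in the spirit of PanFS 8's three tiers.
# The size threshold is an assumption for illustration, not Panasas's value.
from enum import Enum

class Tier(Enum):
    NVME = "nvme"   # metadata
    SSD = "ssd"     # small files
    HDD = "hdd"     # large files

SMALL_FILE_CUTOFF = 64 * 1024   # assumed 64KB boundary

def place(is_metadata: bool, size_bytes: int) -> Tier:
    if is_metadata:
        return Tier.NVME
    return Tier.SSD if size_bytes < SMALL_FILE_CUTOFF else Tier.HDD

assert place(True, 0) is Tier.NVME
assert place(False, 4_096) is Tier.SSD
assert place(False, 25_000_000) is Tier.HDD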

ActiveStor Ultra is designed to support the complex and varied data sets associated with traditional HPC workloads and emerging applications, such as artificial intelligence (AI), autonomous driving and virtual reality (VR). ActiveStor Ultra’s modular architecture and building-block design enables enterprises to start small and scale linearly. With dock-to-data in one hour, ActiveStor Ultra offers fast data access and virtually eliminates manual intervention to deliver the lowest total cost of ownership (TCO).

ActiveStor Ultra will be available early in the second half of 2019.

Symply offering StorNext 6-powered Thunderbolt 3 storage solution

Symply is at NAB New York providing tech previews of its SymplyWorkspace Thunderbolt 3-based SAN technology that uses Quantum’s StorNext 6.

SymplyWorkspace allows laptops and workstations equipped with Thunderbolt 3 to ingest, edit, finish and deliver media through a direct Thunderbolt 3 cable connection, with no adapter needed and without having to move content locally, even at 4K resolutions.

Based on StorNext 6 sharing software, users can connect up to eight laptops and workstations to the system and instantly share video, graphics and other data files using a standard Thunderbolt interface with no additional hardware or adapters.

While the company has not announced pricing, it does expect to have systems for sale in Q4. The boxes are expected to start under $10,000 for 48TB and up to four users, making the system well suited for smaller post houses, companies with in-house creative teams and ad agencies.

Quantum upgrades Xcellis scale-out storage with StorNext 6.2, NVMe tech

Quantum has made enhancements to its Xcellis scale-out storage appliance portfolio with an upgrade to StorNext 6.2 and the introduction of NVMe storage. StorNext 6.2 bolsters performance for 4K and 8K video while enhancing integration with cloud-based workflows and global collaborative environments. NVMe storage significantly accelerates ingest and other aspects of media workflows.

Quantum’s Xcellis scale-out appliances provide high performance for increasingly demanding applications and higher resolution content. Adding NVMe storage to the Xcellis appliances offers ultra-fast performance: 22 GB/s single-client, uncached streaming bandwidth. Excelero’s NVMesh technology in combination with StorNext ensures all data is accessible by multiple clients in a global namespace, making it easy to access and cost-effective to share Flash-based resources.

Xcellis provides cross-protocol locking for shared access across SAN, NFS and SMB, helping users share content across both Fibre Channel and Ethernet.

With StorNext 6.2, Quantum now offers an S3 interface to Xcellis appliances, allowing them to serve as targets for applications designed to write to RESTful interfaces. This allows pros to use Xcellis as either a gateway to the cloud or as an S3 target for web-based applications.
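Because the interface is standard S3, any S3-capable tool should be able to treat an Xcellis appliance like a bucket endpoint. Below is a hypothetical sketch with boto3; the endpoint address, bucket name and credentials are placeholders, not anything from Quantum’s documentation.

# Writing to an S3-compatible target with boto3. Endpoint, bucket and
# credentials are placeholders; this shows the standard S3 pattern only.
import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="https://xcellis.example.local:9000",  # hypothetical appliance address
    aws_access_key_id="ACCESS_KEY",                     # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
    config=Config(s3={"addressing_style": "path"}),     # path-style for non-AWS endpoints
)

# Push a finished deliverable; bucket and key are illustrative.
s3.upload_file("final_master.mov", "deliverables", "show01/final_master.mov")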

Xcellis environments can now be managed with a new cloud monitoring tool that enables Quantum’s support team to monitor critical customer environmental factors, speed time to resolution and ultimately increase uptime. When combined with Xcellis Web Services — a suite of services that lets users set policies and adjust system configuration — overall system management is streamlined.

Available with StorNext 6.2, enhanced FlexSync replication capabilities enable users to create local or remote replicas of multitier file system content and metadata. With the ability to protect data for both high-performance systems and massive archives, users now have more flexibility to protect a single directory or an entire file system.

StorNext 6.2 lets administrators provide defined and enforceable quotas and implement quality of service levels for specific users, and it simplifies reporting of used storage capacity. These new features make it easier for administrators to manage large-scale media archives efficiently.

The new S3 interface and NVMe storage option are available today. The other StorNext features and capabilities will be available by December 2018.

mLogic at IBC with four new storage solutions

mLogic will be at partner booths during IBC showing four new products: the mSpeed Pro, mRack Pro, mShare MDC and mTape SAS.

The mLogic mSpeed Pro (pictured) is a 10-drive RAID system with an integrated LTO tape drive. This hybrid storage solution provides high-speed hard drive access to media for coloring, editing and VFX, while also providing an extended, long-term archive of content to LTO tape, which promises 30+ years of media preservation.

mSpeed Pro supports multiple RAID levels, including RAID-6 for the ultimate in fault tolerance. It connects to any Linux, macOS, or Windows computer via a fast 40Gb/second Thunderbolt 3 port. The unit ships with the mLogic Linear Tape File System (LTFS) Utility, a simple drag-and-drop application that transfers media from the RAID to the LTO.
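Since LTFS presents the tape as an ordinary mounted volume, that drag-and-drop transfer amounts to a file copy. Below is a minimal sketch of the idea; the mount point is hypothetical, and this stands in for, rather than reproduces, mLogic’s LTFS Utility.

# Copying a finished project to an LTFS-mounted tape volume.
# The mount point is a placeholder; this is generic LTFS usage, not
# mLogic's utility itself.
import shutil
from pathlib import Path

TAPE_MOUNT = Path("/Volumes/LTFS_TAPE01")   # hypothetical macOS mount point

def archive(project_dir: str) -> None:
    src = Path(project_dir)
    dst = TAPE_MOUNT / src.name
    # LTFS is sequential under the hood: one large copy beats many small ones.
    shutil.copytree(src, dst)
    print(f"archived {src} -> {dst}")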

The mLogic mSpeed Pro will be available in 60, 80 and 100TB capacities with an LTO-7 or LTO-8 tape drive. Pricing starts at $8,999.

The mRack Pro is a 2U rack-mountable archiving solution that features full-height LTO-8 drives and Thunderbolt 3 connectivity. Full-height (FH) LTO-8 drives offer numerous benefits over their half-height counterparts, including:
– Having larger motors that move media faster
– Working more optimally in LTFS (Linear Tape File System) environments
– Providing increased mechanical reliability
– Being a better choice for high-duty cycle workloads
– Having a lower operating temperature

The mRack Pro is available with one or two LTO-8 FH drives. Pricing starts at $7,999.

mLogic’s mShare is a metadata controller (MDC) with PCIe switch and embedded Storage Area Network (SAN) software, all integrated in a single compact rack-mount enclosure. Designed to work with mLogic’s mSAN Thunderbolt 3 RAID, the unit can be configured with Apple Xsan or Tiger Technology Tiger Store software. With mShare and mSAN, collaborative workgroups can be configured over Thunderbolt at a fraction of the cost of traditional SAN solutions. Pricing TBD.

Designed for archiving media in Linux and Windows environments, mTape SAS is a desktop LTO-7 or LTO-8 drive that ships bundled with a high-speed SAS PCIe adapter to install in host computers. The mTape SAS can also be bundled with Xendata Workstation 6 archiving software for Windows. Pricing starts at $3,399.

Review: Mobile Filmmaking with Filmic Pro, Gnarbox, LumaFusion

By Brady Betzel

There is a lot of what’s become known as mobile filmmaking being done with cell phones, such as the iPhone, Samsung Galaxy and even the Google Pixel. For this review, I will cover two apps and one hybrid hard drive/mobile media ingest station built specifically for this type of mobile production.

Recently, I’ve heard how great the latest mobile phone camera sensors are, and how those embracing mobile filmmaking are taking advantage of them in their workflows. Those workflows typically have one thing in common: Filmic Pro.

One of the more difficult parts of mobile filmmaking, whether you are using a GoPro, DSLR or your phone, is storage and transferring the media to a workable editing system. The Gnarbox, which is designed to help solve this issue, is in my opinion one of the best solutions for mobile workflows that I have seen.

Finally, editing your footage together in a professional nonlinear editor like Adobe Premiere Pro or Blackmagic’s Resolve takes some skills and dedication. Moreover, if you are doing a lot of family filmmaking (like me), you usually have to wait for the kids to go to sleep to start transferring and editing. However, with the iOS app LumaFusion — used simultaneously with the Gnarbox — you can transfer your GoPro, DSLR or other pro camera shots while your actors are taking a break, allowing you to clear your memory cards or get started on a quick rough cut to send to executives who might be waiting off site.

Filmic Pro
First up is Filmic Pro V.6. Filmic Pro is an iOS and Android app that gives you fine-tuned control over your phone’s camera, including live image analysis features, focus pulling and much more.

There are four very useful live analytic views you can enable at the top of the app: Zebra Stripes, Clipping, False Color and Focus Peaking. There is another awesome recording view that allows simultaneous focus and exposure adjustments, conveniently placed where you would naturally rest your thumbs. With the focus pulling feature you can even set start and end focus points that Filmic Pro will run for you — amazing!

There are many options under the hood of Filmic Pro, including the ability to record at almost any frame rate and aspect ratio, such as 9:16 vertical video (Instagram TV anyone?). You can also film at one frame rate, such as 120fps, and record at a more standard frame rate like 24fps, essentially processing your high-speed footage in the phone. Vertical video is one of those constant questions that arises when producing video for mobile viewing. If you don’t want the app to automatically change to vertical video recording mode, you can set an orientation lock in the settings. When recording video there are several data rate options: Filmic Extreme, with 100Mb/s for any frame size 2K or higher and 50Mb/s for 1080p or lower; Filmic Quality, which limits the data rate to 35Mb/s (your phone’s default data rate); and Economy, which you probably don’t need to use.
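The math behind that trick is simple: footage captured at 120fps and conformed to 24fps plays back at one-fifth speed, so every second of action stretches to five. As a tiny sketch:

# Slow-motion factor when high-frame-rate capture is conformed to a
# standard delivery rate, as Filmic Pro does in-phone.
def slowmo(capture_fps: float, playback_fps: float, shot_seconds: float):
    factor = capture_fps / playback_fps
    return factor, shot_seconds * factor

factor, runtime = slowmo(capture_fps=120, playback_fps=24, shot_seconds=3)
print(f"{factor:.0f}x slow motion: a 3s action plays for {runtime:.0f}s")  # 5x, 15s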

I have only touched on a few of the options inside of Filmic Pro. There are many more, including mic input selections, sample rate selections (including 48kHz), timelapse mode and, in my opinion, the most powerful feature, Log recording. Log recording inside of a mobile phone can unlock some unnoticed potential in your phone’s camera chip, allowing for a better ability to match color between cameras or expose details in shadows when doing color correction in post.

The only slightly bad news is that on top of the $14.99 price for the Filmic Pro app itself, you have to pay an additional $9.99 to gain access to the Log ability (labeled Cinematographer’s Toolkit). In the end, $25 is a really, really, really small price to pay for the abilities that Filmic Pro unlocks. And while this won’t turn your phone into an Arri Alexa or Red Helium (yet), you can raise your level of mobile cinematography quickly, and if you are using your phone for some B- or C-roll, Filmic Pro can help make your colorist happy, thanks to Log recording.

One feature that I couldn’t test because I do not own a DJI Osmo is that you can control the features on your iOS device from the Osmo itself, which is pretty intriguing. In addition, if you use any of the Moondog Labs anamorphic adapters, Filmic Pro can be programmed to de-squeeze the footage properly.

You can really dive in with Filmic Pro’s library of tutorials here.

Gnarbox 1.0
After running around with GoPro cameras strapped to your (or your dog’s) head all day, there will be some heavy post work to get it offloaded onto your computer system. And, typically, you will have much more than just one GoPro recording during the day. Maybe you took some still photos on your DSLR and phone, shot some drone footage and had GoPro on a chest mount.

As touched on earlier, the Gnarbox 1.0 is a stand-alone WiFi-enabled hard drive and media ingestion station that has SD, microSD, USB 3.0 and USB 2.0 ports to transfer media to the internal 128GB or 256GB Flash memory. You simply insert the memory cards or the camera’s USB cable and connect to the Gnarbox via the app on your phone to begin working or transferring.

There are a bunch of files that will open using the Gnarbox 1.0 iOS and Android apps, but there are some specific files that won’t, including ProRes, H.265 iPhone recordings, CinemaDNG, etc. However, not all hope is lost: Gnarbox is offering the Gnarbox 2.0 via IndieGogo, and it can be pre-ordered now. Version 2.0 will offer compatibility with file types such as ProRes, in addition to having faster transfer times and app-free backups.

So while reading this review of the Gnarbox 1.0, keep Version 2 in the back of your mind, since it will likely contain many new features that you will want… if you can wait until the estimated delivery of January 2019.

Gnarbox 1.0 comes in two flavors: a 128GB version for $299.99, and the version I was sent to review, which is 256GB for $399.99. The price is a little steep, but the efficiency this product brings is worth the price of admission. Click here for all the lovely specs.

The drive itself is made to be used with an iPhone or Android-based device primarily, but it can be put into an external hard drive mode to be used with a stand-alone computer. The Gnarbox 1.0 has a write speed of 132MB/s and read speed of 92MB/s when attached to a computer in Mass Storage Mode via the USB 3.0 connection. I actually found myself switching modes a lot when transferring footage or photos back to my main system.

It would be nice to have a way to switch to the external hard drive mode outside of the app, but it’s still pretty easy and takes only a few seconds. To connect your phone or tablet to the Gnarbox 1.0, you need to download the Gnarbox app from the App Store or Google Play Store. From there you can access content on your phone as well as on the Gnarbox when connected to it. In addition to the Gnarbox app, Gnarbox 1.0 can be used with Adobe Lightroom CC and the mobile NLE LumaFusion, which I will cover next in the review.

The reason I love the Gnarbox so much is how simply, efficiently and powerfully it accomplishes its task of storing media without a computer, allowing you to access, edit and export the media to share online without a lot of technical know-how. The one drawback to using cameras like GoPros is that it can take a lot of post-processing power to get the videos onto your system and edited. With the Gnarbox, you just insert your microSD card into the Gnarbox, connect your phone via WiFi, edit your photos or footage, then export to your phone or the Gnarbox itself.

If you want to do a full backup of your memory card, you open the Gnarbox app, find the Connected Devices, select some or all of the clips and photos you want to back up to the Gnarbox and click Copy Files. The same screen will show you which files have and have not been backed up yet, so you don’t do it multiple times.

When editing photos or video there are many options. If you are simply trimming down a video clip, stringing out a few clips for a highlight reel, or adding some color correction and even some music, then the Gnarbox app is all you will need. With the Gnarbox 1.0, you can select resolution and bit rate. If you’re reading this review you are probably familiar with how resolutions and bit rates work, so I won’t bore you with those explanations. Gnarbox 1.0 allows for 4K, 2.7K, 1080p and 720p resolutions and bit rates of 65Mbps, 45Mbps, 30Mbps and 10Mbps.

My rule of thumb for social media is that resolution over 1080p doesn’t really apply to many people since most are watching it on their phone, and even with a high-end HDR, 4K, wide gamut… whatever, you really won’t see much difference. The real difference comes in bit rates. Spend your megabytes wisely and put all your eggs in the bit rate basket. The higher the bit rate, the better your color will hold up and the less tearing or blockiness you will see. In my opinion, a higher-bit-rate 1080p video is worth more than a 4K video with a lower bit rate. It just doesn’t pay off. But, hey, you have the options.
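If bit rate is where you spend your quality budget, it helps to know what each setting costs on the card. The arithmetic is just bit rate times duration divided by eight; here is a quick sketch using Gnarbox 1.0’s export options.

# File size from bit rate: megabits/s * seconds / 8 = megabytes.
def file_size_mb(mbps: float, seconds: float) -> float:
    return mbps * seconds / 8

for mbps in (10, 30, 45, 65):   # Gnarbox 1.0's export bit rates
    print(f"{mbps:>2}Mbps x 60s = {file_size_mb(mbps, 60):6.1f}MB")
# A minute at 65Mbps is under 500MB; bit rate, not resolution, drives the cost.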

Gnarbox has an awesome support site where you can find tutorial GIFs and writeups covering everything from powering on your Gnarbox to bitrates, like this one. They also have a great YouTube playlist that covers most topics with the Gnarbox, its app, and working with other apps like LumaFusion to get you started. Also, follow them on Instagram for some sweet shots they repost.

LumaFusion
With Filmic Pro you can capture your video, and with the Gnarbox you can lightly edit and consolidate your media, but you might need to go a little further than simple trims in the editing. This is where LumaFusion comes in. At the moment, LumaFusion is an iOS-only app, but I’ve heard they might be working on an Android version. So for this review I tried to get my hands on an iPad and an iPad Pro, because this is where LumaFusion would sing. Alas, I had to settle for my wife’s iPhone 7 Plus. This was actually a small blessing, because I was afraid the app would be way too small to use on a standard iPhone. To my surprise, it was actually fine.

LumaFusion is an iOS-based nonlinear editor, much like Adobe Premiere or FCPX, but it only costs $19.99 in the App Store. I added LumaFusion to this review because of its tight integration with Gnarbox (it accesses files directly on the Gnarbox for editing and output), but also because it has presets for Filmic Pro aspect ratios: 1.66:1, 17:9, 2.2:1, 2.39:1 and 2.59:1. LumaFusion will also integrate with external drives like the Western Digital wireless SSD, as well as cloud services like Google Drive.

In the actual editing interface LumaFusion allows for advanced editing with titles, music, effects and color correction. It gives you three video and audio tracks to edit with, allowing for J and L cuts or transitions between clips. For an editor like me who is so used to Avid Media Composer that I want to slip and trim in every app, LumaFusion allows for slips, trims, insert edits, overwrite edits, audio track mixing, audio ducking to automatically set your music levels — depending on when dialogue occurs — audio panning, chroma key effects, slow and fast motion effects, titles with different fonts and much more.

There is a lot of versatility inside of LumaFusion, including the ability to export different frame rates between 18, 23.976, 24, 25, 29.97, 30, 48, 50, 59.94, 60, 120 and 240 fps. If you are dealing with 360-degree video, you can even enable the 360-degree metadata flag on export.

LumaFusion has a great reference manual that will fill you in on all the aspects of the app, and it’s a good primer on other subjects like exporting. In addition, they have a YouTube playlist. Simply put, you can export for all sorts of social media platforms or even share over AirDrop between macOS and iOS devices. You can choose your export resolution, such as 1080p or UHD 4K (3840×2160), as well as your bit rate, and then you can select your codec, whether it be H.264 or H.265. You can also choose whether the container is MP4 or MOV.

Obviously, some of these output settings will be dictated by the destination, such as YouTube, Instagram or maybe the NLE on your computer system. Bit rate is very important for color fidelity and overall picture quality. LumaFusion has a few settings on export: 12Mbps, 24Mbps, 32Mbps and 50Mbps in 1080p, or 100Mbps if you are exporting UHD 4K (3840×2160).

LumaFusion is a great solution for someone who needs the fine tuning of a pro NLE on their iPad or iPhone. You can be on an exotic vacation without your laptop and still create intricately edited highlight reels.

Summing Up
In the end, technology is amazing! From the ultra-high-end camera app Filmic Pro to the amazing wireless media hub Gnarbox and even the iOS-based nonlinear editor LumaFusion, you can film, transfer and edit a professional-quality UHD 100Mbps clip without the need for a stand-alone computer.

If you really want to see some amazing footage being created using Filmic Pro you should follow Richard Lackey on all social media platforms. You can find more info on his website. He has some amazing imagery as well as tips on how to shoot more “cinematic” video using your iPhone with Filmic Pro.

The Gnarbox — one of my favorite tools reviewed over the years — serves a purpose and excels. I can’t wait to see how the Gnarbox 2.0 performs when it is released. If you own a GoPro or any type of camera and want a quick and slick way to centralize your media while you are on the road, then you need the Gnarbox.

LumaFusion will finish off your mobile filmmaking vision with titles, trimming and advanced edit options that will leave people wondering how you pulled off such a professional video from your phone or tablet.

Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

DigitalGlue’s Creative.Space optimized for Resolve workflows

DigitalGlue’s Creative.Space, an on-premise managed storage (OPMS) service, has been optimized for Blackmagic DaVinci Resolve workflows, meeting the technical requirements for inclusion in Blackmagic’s Configuration Guide. DigitalGlue is an equipment, integration and software development provider that also designs and implements solutions for complete turnkey content creation, post production and distribution.

According to DigitalGlue CEO/CTO Tim Anderson, each Creative.Space system is pre-loaded with a Resolve-optimized PostgreSQL database server, enabling users to simply create databases in Resolve using the same address they use to connect to their storage. In addition, users can schedule database backups with snapshots, ensuring that work is preserved in a timely and secure manner. Creative.Space also uses media-intelligent caching to move project data and assets into a “fast lane,” allowing all collaborators to experience seamless performance.
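Since Resolve’s shared project databases are ordinary PostgreSQL under the hood, a scheduled backup can be as plain as a timed pg_dump. Below is a hedged sketch; the host, database name and paths are placeholders, and this is generic PostgreSQL practice rather than DigitalGlue’s snapshot mechanism.

# Nightly pg_dump of a Resolve project database. Host, database and paths
# are placeholders; this is generic PostgreSQL practice, not DigitalGlue's
# snapshot mechanism.
import datetime
import subprocess

def backup_resolve_db(host: str, dbname: str, out_dir: str) -> str:
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M")
    out = f"{out_dir}/{dbname}_{stamp}.backup"
    subprocess.run(
        ["pg_dump", "-h", host, "-U", "postgres",
         "-Fc", "-f", out, dbname],          # -Fc: compressed custom format
        check=True,                          # assumes .pgpass handles the password
    )
    return out

# backup_resolve_db("creative-space.local", "resolve_projects", "/backups")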

“We brought a Creative.Space entry-level Auteur unit optimized with a DaVinci Resolve database to the Blackmagic training facility in Burbank,” explains Nick Anderson, Creative.Space product manager. “The Auteur was put through a series of rigorous testing processes and passed each with flying colors. Our Media Intelligent caching allowed the unit to provide full performance to 12 systems at a level that would normally require a much larger and more expensive system.”

Auteur was the first service in the Creative.Space platform to launch. Creative.Space targets collaborative workflows by optimizing the latest hardware and software for efficiency and increased productivity. Auteur starts at 120TB RAW capacity across 12 drives in a 24-bay 4RU chassis with open bays for rapid growth. Every system is custom-built to address each client’s unique needs. Entry level systems are designed for small to medium workgroups using compressed 4K, 6K and 8K workflows and can scale for 4K uncompressed workflows (including 4K OpenEXR) and large multi-user environments.

Avid adds to Nexis product line with Nexis|E5

The Nexis|E5 NL nearline storage solution from Avid is now available. The addition of this high-density on-premises solution to the Avid Nexis family allows Avid users to manage media across all their online, nearline and archive storage resources.

Avid Nexis|E5 NL includes a new web-based Nexis management console for managing, controlling and monitoring Nexis installations. Nexis|E5 NL can be easily accessed through MediaCentral|Cloud UX or Media Composer and also integrates with MediaCentral|Production Management, MediaCentral|Asset Management and MediaCentral|Editorial Management to help collaboration, with advanced features such as project and bin sharing. Extending the Nexis|FS (file system) to a secondary storage tier makes it easy to search for, find and import media, enabling users to locate content distributed throughout their operations more quickly.

Built for project parking, staging workflows and proxy archive, Avid reports that Nexis|E5 NL streamlines the workflow between active and non-active assets, allowing media organizations to park assets as well as completed projects on high-density nearline storage and keep them within easy reach for rediscovery and reuse.

Up to eight Nexis|E5 NL engines can be integrated as one virtualizable pool of storage, making content and associated projects and bins more accessible. In addition, other Avid Nexis Enterprise engines can be integrated into a single storage system that is partitioned for better archival organization.

Additional Nexis|E5 NL features include:
• It’s scalable from 480TB of storage to more than 7PB by connecting multiple Nexis|E5 NL engines together as a single nearline system for a highly scalable, lower-cost secondary tier of storage.
• It offers flexible storage infrastructure that can be provisioned with required capacity and fault-tolerance characteristics.
• Users can configure, control and monitor Nexis using the updated management console that looks and feels like a MediaCentral|Cloud UX application. Its dashboard provides an overview of the system’s performance, bandwidth and status, as well as access to quickly configure and manage workspaces, storage groups, user access, notifications and other functions. It offers the flexibility and security of HTML5 along with an interface design that enables mobile device support.

DigitalGlue’s Creative.Space intros all-Flash 1RU OPMS storage

Creative.Space, a division of DigitalGlue that provides on-premise managed storage (OPMS) as a service for production and post companies as well as broadcast networks, has added the Breathless system to its offerings. The product will make its debut at Cine Gear in LA next month.

The Breathless Next Generation Small Form Factor (NGSFF) media storage system offers 36 front-serviceable NVMe SSD bays in 1RU. It is designed for 4K, 6K and 8K uncompressed workflows using JPEG2000, DPX and multi-channel OpenEXR. 4TB NVMe SSDs are currently available, and a 16TB version will be available later this year, allowing 576TB of Flash storage to fit in 1RU. Breathless performs 10 million random read IOPS (input/output operations per second) of storage performance, up to 475,000 per drive.

Each of the 36 NGSFF SSD bays connects to the motherboard directly over PCIe to deliver maximum potential performance. With dual Intel Skylake-SP CPUs and 24 DDR4 DIMMs of memory, this system is perfect for I/O intensive local workloads, not just for high-end VFX, but also realtime analytics, database and OTT content delivery servers.

Breathless’ OPMS features 24/7 monitoring, technical support and next-day repairs for an all-inclusive, affordable fixed monthly rate of $2,495.00, based on a three-year contract (16TB of SSD).

Breathless is the second Creative.Space system to launch, joining Auteur, which offers 120TB RAW capacity across 12 drives in a 24-bay 4 RU chassis. Every system is custom-built to address each client’s needs. Entry level systems are designed for small to medium workgroups using compressed 4K, 6K and 8K workflows and can scale for 4K uncompressed workflows (including 4K OpenEXR) and large multi-user environments.

DigitalGlue, an equipment, integration and software development provider, also designs and implements turnkey solutions for content creation, post and distribution.