
Virtual Roundtable: Storage

By Randi Altman

The world of storage is ever-changing and complicated. There are many flavors, each meant to match specific workflow needs. What matters most to users, beyond easily installed, easy-to-use systems that let them focus on the creative and not the tech? Scalability, speed, data protection, the cloud and the need to handle higher and higher frame rates at higher resolutions — meaning larger and larger files. The good news is that the tools are growing to meet these needs. New technologies and software enhancements around NVMe are providing extremely low-latency connectivity that supports higher-performance workflows. Time will tell how that plays a part in day-to-day workflows.

For this virtual roundtable, we reached out to makers of storage and users of storage. Their questions differ a bit, but their answers often overlap. Enjoy.

Company 3 NY and Deluxe NY Data/IO Supervisor Hollie Grant

Company 3 specializes in DI, finishing and color correction, and Deluxe is an end-to-end post house working on projects from dailies through finishing.

Hollie Grant

How much data did you use/back up this year? How much more was that than the previous year? How much more data do you expect to use next year?
Over the past year, as a rough estimate, my team dealt with around 1.5 petabytes of data. The latter half of this year really ramped up storage-wise. We were cruising along with a normal increase in data per show until the last few months, when we had an influx of UHD, 4K and even 6K jobs, which take up to quadruple the space of a “normal” HD or 2K project.

I don’t think we’ll see a decrease in this trend with the takeoff of 4K televisions as the baseline for consumers and with streaming becoming more popular than ever. OTT films and television have raised the bar for post production, expecting 4K source and native deliveries. Even smaller indie films that we would normally not think twice about space-wise are shooting and finishing 4K in the hopes that Netflix or Amazon will buy their film. This means that even projects that once were not a burden on our storage will have to be factored in differently going forward.

Have you ever lost important data due to a hardware failure? Have you ever lost data due to an operator error? (Accidental overwrite, etc.)
Triple knock on wood! In my time here we have not lost any data due to an operator error. We follow strict procedures and create redundancy in our data, so if there is a hardware failure we don’t lose anything permanently. We have received hard drives or tapes that failed, but this far along in the digital age most people have more than one copy of their work, and if they don’t, a backup is the first thing I recommend.

Do you find access speed to be a limiting factor with your current storage solution?
We can reach read and write speeds of 1GB/s on our SAN. We have a pretty fast configuration of disks. Of course, the more sessions you have trying to read or write on a volume, the harder it can be to get playback. That’s why we have around 2.5PB of storage across many volumes, letting me organize projects based on the bandwidth they will need and their schedules so we don’t have trouble with speed. This is one of the more challenging aspects of my day-to-day as the size of projects and their demand for larger-frame playback increase.

Showtime’s Escape at Dannemora – Co3 provided color grading and conform.

What percentage of your data’s value do you budget toward storage and data security?
I can’t speak to exact percentages, but storage upgrades are a large part of our yearly budget. There is always an ask for new disks in the funding for the year because every year we’re growing along with the size of the data for productions. Our production network infrastructure is designed around security regulations set forth by many studios and the MPAA. A lot of work goes into maintaining that and one of the most important things to us is keeping our clients’ data safe behind multiple “locks and keys.”

What trends do you see in storage?
I see the obvious trends: physical storage size decreasing while bandwidth and data size increase. Along those lines, I’m sure we’ll see more movies being post produced with everything needed in “the cloud.” The frontrunners of cloud storage have larger, more secure and redundant forms of storing data, so I think it’s inevitable that we’ll move in that direction. It will also make collaboration much easier. You could have all camera-original material stored there, as well as any transcoded files that editorial and VFX will be working with. Using the cloud as a sort of near-line storage would free up the disks in post facilities to focus on only having online what the artists need while still being able to quickly access anything else. Some companies are already working in a manner similar to this, but I think it will start to be a more common solution moving forward.

creative.space’s Nick Anderson

What is the biggest trend you’ve seen in the past year in terms of storage?
The biggest trend is NVMe storage. SSDs are finally entering a price range where they are forcing storage vendors to re-evaluate their architectures to take advantage of their performance benefits.

Nick Anderson

Can you talk more about NVMe?
When it comes to NVMe, speed, price and form factor are the three key things users need to understand. When it comes to speed, it blasts past the limitations of hard drive speeds to deliver 3GB/s per drive, which requires a faster connector (PCIe) to take advantage of. With parallel access and higher IOPS (input/output operations per second), NVMe drives can handle operations that would bring an HDD to its knees. When it comes to price, it is cheaper per GB than past iterations of SSD, making it a feasible alternative for tier-one storage in many workflows. Finally, when it comes to form factor, it is smaller and requires less hardware bulk in a purpose-built system, so you can get more drives in a smaller amount of space at a lower cost. People I talk to are surprised to hear that they have been paying a premium to put fast SSDs into HDD form factors that choke their performance.
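
To put those figures in rough perspective, here is a back-of-the-envelope sketch in Python. The 3GB/s-per-drive NVMe figure is quoted above; the ~200MB/s hard drive figure and the 4K frame-size assumption (4096x2160, 10-bit RGB packed into 4 bytes per pixel, roughly a DPX frame) are illustrative assumptions, not vendor specs.

```python
# Back-of-the-envelope stream math. The ~3GB/s NVMe figure is quoted
# above; the ~200MB/s HDD figure and the 4K frame-size assumption
# (4096x2160, 10-bit RGB packed into 4 bytes/pixel, roughly DPX)
# are illustrative assumptions, not vendor specs.
BYTES_PER_FRAME = 4096 * 2160 * 4        # ~35.4MB per uncompressed 4K frame
FPS = 24
STREAM_RATE = BYTES_PER_FRAME * FPS      # ~850MB/s per playback stream

for drive, rate in [("NVMe SSD", 3_000_000_000), ("HDD", 200_000_000)]:
    print(f"{drive}: ~{rate / STREAM_RATE:.1f} uncompressed 4K streams")
# NVMe SSD: ~3.5 streams; HDD: ~0.2 (sequential best case, ignoring seeks)
```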

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
This is something we have been thinking a lot about, and we have some exciting stuff in the works that addresses this need, though I can’t go into it at this time. For now, we are working with our early adopters to solve these needs in ways that are practical to them, integrating custom software as needed. Moving forward we hope to bring an intuitive and seamless storage experience to the larger industry.

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
This gets down to a shift in what kind of data is being processed and how it can be accessed. When it comes to video, big media files and image sequences have driven the push for better performance. 360° video pushes storage performance further still, past 4K into 8K, 12K, 16K and beyond. On the other hand, as CGI continues to become more photorealistic and we emerge from the “uncanny valley,” the performance need shifts from big data to small data in many cases, as render engines are used instead of video or image files. Moving lots of small data is what these systems were originally designed for, so it will be a welcome shift for users.

When it comes to AI, our file system architectures and NVMe technology are making data easily accessible with less impact on performance. Apart from performance, we monitor thousands of metrics on the system that can be easily connected to your machine learning system of choice. We are still in the early days of this technology and its application to media production, so we are excited to see how customers take advantage of it.

What do you do in your products to help safeguard your users’ data?
From a data integrity perspective, every bit of data gets checksummed on copy and can be restored from that checksum if it gets corrupted. This means the storage is self-healing, with 100% data integrity once the data is written to disk.
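
For readers who want the flavor of that mechanism, here is a minimal checksum-on-copy sketch in Python. It assumes nothing about creative.space’s actual implementation; it simply hashes the source while streaming it to the destination, re-reads the destination to verify, and returns a digest that could later be used to detect silent corruption.

```python
# A minimal sketch of checksum-on-copy (not the vendor's actual code):
# hash the source as it streams to the destination, then re-read the
# destination to confirm the bits landed intact.
import hashlib

def copy_with_checksum(src_path, dst_path, chunk_size=1 << 20):
    src_hash = hashlib.sha256()
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while block := src.read(chunk_size):
            src_hash.update(block)
            dst.write(block)

    # Verify: re-read what was written and compare digests.
    dst_hash = hashlib.sha256()
    with open(dst_path, "rb") as dst:
        while block := dst.read(chunk_size):
            dst_hash.update(block)

    if dst_hash.digest() != src_hash.digest():
        raise IOError(f"checksum mismatch: {src_path} -> {dst_path}")
    return src_hash.hexdigest()  # persist this to detect later corruption
```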

As far as safeguarding data from external threats, this is a complicated issue. There are many methods of securing a system, but for post production, performance can’t be compromised. For companies following MPAA recommendations, putting the storage behind physical security is often considered enough. Unfortunately, for many companies without an IT staff, this is where the security stops, and the system is left open once you get access to the network. To solve this problem, we developed an LDAP user management system, built into our units, that provides that extra layer of software security at no additional charge. Storage access becomes user-based, so system activity can be monitored. As for administration and support, we designed an API gatekeeper that manages data to and from the database in a way that is auditable and secure.

AlphaDogs’ Terence Curren

Alpha Dogs is a full-service post house in Burbank, California. They provide color correction, graphic design, VFX, sound design and audio mixing.

How much data did you use/back up this year? How much more was that than the previous year? How much more data do you expect to use next year?
We are primarily a finishing house, so we use hundreds of TBs per year on our SAN. We work at higher resolutions, which means larger file sizes. When we have finished a job and delivered the master files, we archive to LTO and clear the project off the SAN. When we handle the offline on a project, obviously our storage needs rise exponentially. We do foresee those requirements rising substantially this year.

Terence Curren

Have you ever lost important data due to a hardware failure? Have you ever lost data due to an operator error? (Accidental overwrite, etc.)
We’ve been lucky in that area (knocking on wood), as our SANs are RAID-protected and we maintain a degree of redundancy. We have had clients’ transfer drives fail. We always recommend they deliver a copy of their media. In the early days of our SAN, which is the Facilis TerraBlock, one of our editors accidentally deleted a volume containing an ongoing project. Fortunately, Facilis engineers were able to recover the lost partition, as it hadn’t been overwritten yet. That’s one of the things I really have appreciated about working with Facilis over the years — they have great technical support, which is essential in our industry.

Do you find access speed to be a limiting factor with your current storage solution?
Not yet. As we get forced into heavily marketed but unnecessary formats like the coming 8K, we will have to scale to handle the bandwidth overload. I am sure the storage companies are all very excited about that prospect.

What percentage of your data’s value do you budget toward storage and data security?
Again, we don’t maintain long-term storage on projects so it’s not a large consideration in budgeting. Security is very important and one of the reasons our SANs are isolated from the outside world. Hopefully, this is an area in which easily accessible tools for network security become commoditized. Much like deadbolts and burglar alarms in housing, it is now a necessary evil.

What trends do you see in storage?
More storage and higher bandwidths, some of which is being aided by solid-state storage, which is very expensive at our level of usage. The prices keep coming down on storage, yet it seems that the increased demand has caused our spending to remain fairly constant over the years.

Cinesite London’s Chris Perschky

Perschky ensures that Cinesite’s constantly evolving infrastructure provides the technical backbone required for a visual effects facility. His team plans, installs and implements all manner of technology, in addition to providing technical support to the entire company.

Chris Perschky

How much data did you use/back up this year? How much more was that than the previous year? How much more data do you expect to use next year?
Depending on the demands of the project we are working on, we can generate terabytes of data every single day. We have become increasingly adept at separating the data we need to keep long-term from what we only require for a limited time, and our cleanup tends to be aggressive. This allows us to run pretty lean data sets when necessary.

I expect more 4K work to creep in next year and, as such, expect storage demands to increase accordingly.

Have you ever lost important data due to a hardware failure? Have you ever lost data due to an operator error? (Accidental overwrite, etc.)
Our thorough backup procedures mean that we have an offsite copy of all production data within a couple of hours of it being written. As such, when an artist has accidentally overwritten a file we are able to retrieve it from backup swiftly.

Do you find access speed to be a limiting factor with your current storage solution?
Only remotely, thereby requiring a caching solution.

What percentage of your data’s value do you budget toward storage and data security?
Due to the requirements of our clients, we do whatever is necessary to ensure the security of their IP and our work.

Cinesite also worked on Iron Spider for Avengers: Infinity War. ©2018 Marvel Studios

What trends do you see in storage?
The trendy answer is to move all storage to the cloud, but it is just too expensive. That said, the benefits of cloud storage are well documented, so we need some way of leveraging it. I see more hybrid on-prem and cloud solutions providing the best of both worlds as demand requires. Full-SSD solutions are still way too expensive for most of us, but multi-tier storage solutions will have a larger SSD cache tier as prices drop.

Panasas’ RW Hawkins

What is the biggest trend you’ve seen in the past year in terms of storage?
The demand for more capacity certainly isn’t slowing down! New formats like ProRes RAW, HDR and stereoscopic images required for VR continue to push the need to scale storage capacity and performance. New Flash technologies address the speed, but not the capacity. As post production houses scale, they see that complexity increases dramatically. Trying to scale to petabytes with individual and limited file servers is a big part of the problem. Parallel file systems are playing a more important role, even in medium-sized shops.

RW Hawkins

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
VR (and, more generally, interactive content creation) is particularly interesting as it takes many of the aspects of VFX and interactive gaming and combines them with post. The VFX industry, for many years, has built batch-oriented pipelines running on multiple Linux boxes to solve many of their production problems. This same approach works well for interactive content production where the footage often needs to be pre-processed (stitched, warped, etc.) before editing. High speed, parallel filesystems are particularly well suited for this type of batch-based work.

The AI/ML space is red hot, and the applications seem boundless. Right now, much of the work is being done at a small scale where direct-attach, all-Flash storage boxes serve the need. As this technology is used on a larger scale, it will put demands on storage that can’t be met by direct-attached storage, so meeting those high IOP needs at scale is certainly something Panasas is looking at.

Can you talk about NVMe?
NVMe is an exciting technology, but not a panacea for all storage problems. While being very fast, and excellent at small operations, it is still very expensive, has small capacity and is difficult to scale to petabyte sizes. The next-generation Panasas ActiveStor Ultra platform uses NVMe for metadata while still leveraging spinning disk and SATA SSD. This hybrid approach, using each storage medium for what it does best, is something we have been doing for more than 10 years.

What do you do in your products to help safeguard your users’ data?
Panasas uses object-based data protection with RAID-6+. This software-based erasure-code protection, at the file level, provides the best scalable data protection. Only files affected by a particular hardware failure need to be rebuilt, and increasing the number of drives doesn’t increase the likelihood of losing data. In a sense, every file is individually protected. On the hardware side, all Panasas hardware provides non-volatile components, including cutting-edge NVDIMM technology, to protect our customers’ data. The file system has been proven in the field. We wouldn’t have the high-profile customers we do if we didn’t provide superior performance as well as superior data protection.

Users want more flexible workflows — storage in the cloud, on-premises, etc. How are your offerings reflective of that?
While Panasas leverages an object storage backend, we provide our POSIX-compliant file system client called DirectFlow to allow standard file access to the namespace. Files and directories are the “lingua franca” of the storage world, allowing ultimate compatibility. It is very easy to interface between on-premises storage, remote DR storage and public cloud/REST storage using DirectFlow. Data flows freely and at high speed using standard tools, which makes the Panasas system an ideal scalable repository for data that will be used in a variety of pipelines.

Alkemy X’s Dave Zeevalk

With studios in Philly, NYC, LA and Amsterdam, Alkemy X provides live-action, design, post, VFX and original content for spots, branded content and more.

Dave Zeevalk

How much data did you use/back up this year? How much more was that than the previous year? How much more data do you expect to use next year?
Each year, our VFX department generates nearly a petabyte of data, from simulation caches to rendered frames. This year, we have seen a significant increase in data usage as client expectations continue to grow and 4K resolution becomes more prominent in episodic television and feature film projects.

In order to use our 200TB server responsibly, we have created a solid system for preserving necessary data and clearing unnecessary files on a regular basis. Additionally, we are diligent in archiving final projects to our LTO tape systems and removing them from our production server.

Have you ever lost important data due to a hardware failure? Have you ever lost data due to an operator error? (Accidental overwrite, etc.)

Because of our data redundancy, through hourly snapshots and daily backups, we have avoided any data loss even with hardware failure. Although hardware does fail, with these snapshots and backups on a secondary server we are able to bring data back online extremely quickly in the case of hardware failure on our production server. Years ago, while we were migrating to Linux, a software issue completely wiped out our production server. Within two hours, we were able to migrate all data back from our snapshots and backups to our production server with no data loss.

Do you find access speed to be a limiting factor with your current storage solution?
There are a few scenarios where we do experience some issues with access speed to the production server. We do a good amount of heavy simulation work, at times writing dozens of terabytes per hour. At our peak, we have experienced some throttled speeds due to the amount of data being written to the server. Our VFX team also has a checkpoint system for simulation, where raw data is saved to the server in parallel with the simulation cache. This allows us to restart a simulation midway through the process if a render node drops or fails the job. This raw data is extremely heavy, so while using checkpoints on heavy simulations we also experience some slower-than-normal speeds.

What percentage of your data’s value do you budget toward storage and data security?
Our active production server houses 200TB of storage space. We have a secondary backup server with equivalent storage space, to which we store hourly snapshots and daily backups.

What trends do you see in storage?
With client expectations continuing to rise, and 4K (and higher at times) becoming more and more regular on jobs, the need for more storage space is ever increasing.

Quantum’s Jamie Lerner

What is the biggest trend you’ve seen in the past year in terms of storage?
Although the digital transformation to higher resolution content in M&E has been taking place over the past several years, the interesting aspect is that the pace of change over the past 12 months is accelerating. Driving this trend is the mainstream adoption of 4K and high dynamic range (HDR) video, and the strong uptick in applications requiring 8K formats.

Jamie Lerner

Virtual reality and augmented reality applications are booming across the media and entertainment landscape; everywhere from broadcast news and gaming to episodic television. These high-resolution formats add data to streams that must be ingested at a much higher rate, consume more capacity once stored and require significantly more bandwidth when doing realtime editing. All of this translates into a significantly more demanding environment, which must be supported by the storage solution.

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
New technologies for producing stunning visual content are opening tremendous opportunities for studios, post houses, distributors, and other media organizations. Sophisticated next-generation cameras and multi-camera arrays enable organizations to capture more visual information, in greater detail than ever before. At the same time, innovative technologies for consuming media are enabling people to view and interact with visual content in a variety of new ways.

To capitalize on new opportunities and meet consumer expectations, many media organizations will need to bolster their storage infrastructure. They need storage solutions that offer scalable capacity to support new ingest sources that capture huge amounts of data, with the performance to edit and add value to this rich media.

Can you talk about NVMe?
The main benefit of NVMe storage is that it provides extremely low latency — therefore allowing users to seek content at very high speed — which is ideal for high stream counts and compressed 4K content workflows.

However, NVMe resources are expensive. Quantum addresses this issue head-on by leveraging NVMe over fabrics (NVMeoF) technology. With NVMeoF, multiple clients can use pooled NVMe storage devices across a network at local speeds and latencies. And when combined with our StorNext, all data is accessible by multiple clients in a global namespace, making this high-performance tier of storage much more cost-effective. Finally, Quantum is in early field trials of a new advancement that will allow customers to benefit even more from NVMe-enabled storage.

What do you do in your products to help safeguard your users’ data?
A storage system must be able to accommodate policies ranging from “throw it out when the job is done” to “keep it forever” and everything in between. The cost of storage demands control over where data lives and when, how many copies of the data exist and where those copies reside over time.

Xcellis scale-out storage powered by StorNext incorporates a broad range of features for data protection. This includes integrated features such as RAID, automated copying, versioning and data replication functionality, all included within our latest release of StorNext.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
Given the differences in size and scope of organizations across the media industry, production workflows are incredibly varied and often geographically dispersed. Within this context, flexibility becomes a paramount feature of any modern storage architecture.

We provide flexibility in a number of important ways for our customers. From the perspective of system architecture, and recognizing there is no one-size-fits-all solution, StorNext allows customers to configure storage with multiple media types that balance performance and capacity requirements across an entire end-to-end workflow. Second, and equally important for companies with a global workforce, our data replication software FlexSync allows content to be rapidly distributed to production staff around the globe. And no matter what tier of storage the data resides on, FlexTier provides coordinated and unified access to the content within a single global namespace.

EditShare’s Bill Thompson

What is the biggest trend you’ve seen in the past year in terms of storage?
In no particular order, the biggest trends for storage in the media and entertainment space are:
1. The need to handle higher and higher data rates associated with higher-resolution and higher-frame-rate content. Across the industry, this is being addressed with Flash-based storage and the use of emerging technology like NVMe over “X” and 25/50/100G networking.

Bill Thompson

2. The ever-increasing concern about content security and content protection, backup and restoration solutions.

3. The request for more powerful analytics solutions to better manage storage resources.

4. The movement away from proprietary hardware/software storage solutions toward ones that are compatible with commodity hardware and/or virtual environments.

Can you talk about NVMe?
NVMe technology is very interesting and will clearly change the M&E landscape going forward. One of the challenges is that we are in the midst of changing standards, and we expect current PCIe-based NVMe components to be replaced by U.2/M.2 implementations. This migration will require important changes to storage platforms.

In the meantime, we offer non-NVMe Flash-based storage solutions whose performance and price points are equivalent to those claimed by early NVMe implementations.

What do you do in your products to help safeguard your users’ data?
EditShare has been at the forefront of user data protection for many years, beginning with our introduction of disk-based and tape-based automated backup and restoration solutions.

We expanded the types of data protection schemes and provided easy-to-use management tools that allow users to tailor the type of redundant protection applied to directories and files. Similarly, we now provide ACL Media Spaces, which allow user privileges to be precisely tailored to the tasks at hand, providing only the rights needed to accomplish those tasks: nothing more, nothing less.

Most recently, we introduced EFS File Auditing, a content security solution that enables system administrators to understand “who did what to my content” and “when and how they did it.”

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
The EditShare file system is now available in variants that support EditShare hardware-based solutions and hybrid on-premises/cloud solutions. Our Flow automation platform enables users to migrate from on-premises high-speed EFS solutions to cloud-based solutions, such as Amazon S3 and Microsoft Azure, offering the best of both worlds.

Rohde & Schwarz’s Dirk Thometzek

What is the biggest trend you’ve seen in the past year in terms of storage?
Consumer behavior is the most substantial change that the broadcast and media industry has experienced over the past years. Content is consumed on-demand. In order to stay competitive, content providers need to produce more content. Furthermore, to make the content more desirable, technologies such as UHD and HDR need to be adopted. This obviously has an impact on the amount of data being produced and stored.

Dirk Thometzek

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
In media and entertainment there has always been remarkable growth of data over time, from the very first simple SCSI hard drives to huge network environments. Nowadays, however, that growth is tremendous, approximating an exponential function. Considering that all media will be preserved for a very long time, the M&E storage market segment will keep on growing and innovating.

Looking at the amount of footage being produced, a big challenge is finding the appropriate data. Taking it a step further, there might be content that a producer wouldn’t even think to look for but that is relevant to the original metadata query. That is where machine learning and AI come into play. We are looking into automated content indexing with a minimum of human interaction, where the artificial intelligence learns autonomously and shares information with other databases. The real challenge here is to protect these intelligences from being compromised by unintentional access to the information.

What do you do to help safeguard your users’ data?
In collaboration with our Rohde & Schwarz Cybersecurity division, we are offering complete and protected packages to our customers, ranging from access restrictions on server rooms to encrypted data transfers. Cyber attacks are complex and opaque, but the security layer must be transparent and usable. In media, though, latency is just as critical, and every security layer usually introduces some.

Can you talk about NVMe?
In order to bring the best value to the customer, we are constantly looking for improvements. The direct PCI communication of NVMe certainly brings a huge improvement in terms of latency since it completely eliminates the SCSI communication layer, so no protocol translation is necessary anymore. This results in much higher bandwidth and more IOPS.

For internal data processing and databases, R&S SpycerNode uses NVMe, which really boosts its performance. Unfortunately, using this technology for bulk media storage is currently not considered economical. We are dedicated to getting the best performance-to-cost ratio for the market, and since we have been developing video workstations and servers alongside storage for decades now, we know how to get the best performance out of a drive, spinning or solid state.

Economically, it doesn’t seem acceptable to build a system with the latest and greatest technology for a workflow when standards will do, just because it is possible. The real art of storage technology lies in a highly customized configuration according to the technical requirements of an application or workflow. R&S SpycerNode will evolve over time, and technologies will be added to the family.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
Although hybrid workflows are highly desirable, it is quite important to understand the advantages and limits of this technology. High-bandwidth, low-latency wide-area network connections involve certain economic aspects. Without a suitable connection, an uncompressed 4K production does not seem feasible from a remote location — uploading several terabytes to a co-location can take hours or even days, even if protocol acceleration is used. However, there are workflows, such as supplemental rendering or proxy editing, that do make sense to offload to a datacenter. R&S SpycerNode is ready to be an integral part of geographically scattered networks, and the Spycer Storage family will grow.

Dell EMC’s Tom Burns

What is the biggest trend you’ve seen in the past year in terms of storage?
The most important storage trend we’ve seen is an increasing need for access to shared content libraries accommodating global production teams. This is becoming an essential part of the production chain for feature films, episodic television, sports broadcasting and now e-sports. For example, teams in the UK and in California can share asset libraries for their file-based workflow via a common object store, whether on-prem or hybrid cloud. This means they don’t have to synchronize workflows using point-to-point transmissions from California to the UK, which can get expensive.

Tom Burns

Achieving this requires seamless integration of on-premises file storage for the high-throughput, low-latency workloads with object storage. The object storage can be in the public cloud, or you can have a hybrid private cloud for your media assets. A private or hybrid cloud allows production teams to distribute assets more efficiently and saves money versus using the public cloud for sharing content. If the production needs it to be there right now, they can still fire up Aspera, Signiant, File Catalyst or other point-to-point solutions and have prioritized content immediately available, while allowing your on-premises cloud to take care of the shared content libraries.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
Dell Technologies offers end-to-end storage solutions where customers can position the needle anywhere they want. Are you working purely in the cloud? Are you working purely on-prem? Or, like most people, are you working somewhere in the middle? We have a continuous spectrum of storage between high-throughput low-latency workloads and cloud-based object storage, plus distributed services to support the mix that meets your needs.

The most important thing that we’ve learned is that data is expensive to store, granted, but it’s even more expensive to move. Storing your assets in one place and having that path name never change, that’s been a hallmark of Isilon for 15 years. Now we’re extending that seamless file-to-object spectrum to a global scale, deploying Isilon in the cloud in addition to our ECS object store on premises.

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
AR, VR, AI and other emerging technologies offer new opportunities for media companies to change the way they tell and monetize their stories. However, due to the large amounts of data involved, many media organizations are challenged when they rely on storage systems that lack either scalability or performance to meet the needs of these new workflows.

Dell EMC’s file and object storage solutions help media companies cost-effectively tier their content based upon access. This allows media organizations to use emerging technologies to improve how stories are told and monetize their content with the assistance of AI-generated metadata, without the challenges inherent in many traditional storage systems.

Take artificial intelligence as an example: where it was once the job of interns to categorize content in projects that could span years, AI gives media companies the ability to analyze content in near-realtime and create large, easily searchable content libraries as the content is being migrated from existing tape libraries to object-based storage, or ingested for current projects. The metadata involved in this process includes brand recognition and player/actor identification, as well as speech-to-text, making it easy to determine logo placement for advertising analytics and to find footage for use in future movies or advertisements.

With Dell EMC storage, AI technologies can be brought to the data, removing the need to migrate or replicate data to direct-attach storage for analysis. Our solutions also offer the scalability to store the content for years using affordable archive nodes in Isilon or ECS object storage.

In terms of AR and VR, we are seeing video game companies using this technology to change the way players interact with their environments. Not only have they created a completely new genre with games such as Pokemon Go, they have figured out that audiences want nonlinear narratives told through realtime storytelling. Although AR and VR adoption has been slower for movies and TV compared to the video game industry, we can learn a lot from the successes of video game production and apply similar methodologies to movie and episodic productions in the future.

Can you talk about NVMe?
NVMe solutions are a small but exciting part of a much larger trend: workflows that fully exploit the levels of parallelism possible in modern converged architectures. As we look forward to 8K, 60fps and realtime production, the usage of PCIe bus bandwidth by compute, networking and storage resources will need to be much more balanced than it is today.

When we get into realtime productions, these “next-generation” architectures will involve new production methodologies such as realtime animation using game engines rather than camera-based acquisition of physically staged images. These realtime processes will take a lot of cooperation between hardware, software and networks to fully leverage the highly parallel, low-latency nature of converged infrastructure.

Dell Technologies is heavily invested in next-generation technologies that include NVMe cache drives, software-defined networking, virtualization and containerization that will allow our customers to continuously innovate together with the media industry’s leading ISVs.

What do you do in your products to help safeguard your users’ data?
Your content is your most precious capital asset and should be protected and maintained. If you invest in archiving and backing up your content with enterprise-quality tools, then your assets will continue to be available to generate revenue for you. However, archive and backup are just two pieces of data security that media organizations need to consider. They must also take active measures to deter data breaches and unauthorized access to data.

Protecting data at the edge, especially at the scale required for global collaboration, can be challenging. We simplify this process through services such as SecureWorks, which includes offerings like security management and orchestration, vulnerability management, security monitoring, advanced threat services and threat intelligence services.

Our storage products are packed with technologies to keep data safe from unexpected outages and unauthorized access, and to meet industry standards such as alignment to MPAA and TPN best practices for content security. For example, Isilon’s OneFS operating system includes SyncIQ snapshots, providing point-in-time backup that updates automatically and generates a list of restore points.

Isilon also supports role-based access control and integration with Active Directory, MIT Kerberos and LDAP, making it easy to manage account access. For production houses working on multiple customer projects, our storage also supports multi-tenancy and access zones, which means that clients requiring quarantined storage don’t have to share storage space with potential competitors.

Our on-prem object store, ECS, provides long-term, cost-effective object storage with support for globally distributed active archives. This helps our customers with global collaboration, but also provides inherent redundancy. The multi-site redundancy creates an excellent backup mechanism as the system will maintain consistency across all sites, plus automatic failure detection and self-recovery options built into the platform.

Scale Logic’s Bob Herzan

What is the biggest trend you’ve seen in the past year in terms of storage?
There is, and has been, considerable buzz around cloud storage, object storage, AI and NVMe. Scale Logic recently conducted a private survey of its customer base to help answer this question. What we found is that none of those buzzwords can be considered a trend. We also found that our customers were migrating away from SAN and focusing on building infrastructure around high-performance, scalable NAS.

Bob Herzan

They felt on-premises LTO was still the most viable option for archiving, and finding a more efficient and cost-effective way to manage their data was their highest priority for the next couple of years. There are plenty of early adopters testing out the buzzwords in the industry, but the trend — in my opinion — is to maximize a stable platform with the best overall return on the investment.

End users are not focused so much on storage, but on how a company like ours can help them solve problems within their workflows where storage is an important component.

Can you talk more about NVMe?
NVMe provides an any-K solution and superior low-latency metadata performance, and it works with our scale-out file system. All of our products have had 100GbE drivers for almost two years, enabling mesh technologies with NVMe for networks as well. As costs come down, NVMe should start to become more mainstream this year — our team is well versed in supporting NVMe and ready to help facilities research the price-to-performance of NVMe to see if it makes sense for their Genesis and HyperFS Scale Out systems.

With AI, VR and machine learning, our industry is even more dependent on storage. How are you addressing this?
We are continually refining and testing our best practices. Our focus on broadcast automation workflows over the years has already enabled our products for AI and machine learning. We are keeping up with the latest technologies, constantly testing in our lab with the latest in software and workflow tools and bringing in other hardware to work within the Genesis Platform.

What do you do in your products to help safeguard your users’ data?
This is a broad question that has different answers depending on which aspect of the Genesis Platform you may be talking about. Simply speaking, we can craft any number of data-safeguard strategies and practices based on our customers’ needs, the technology they currently use and, most importantly, where they see their capacity and data protection needs growing. Our safeguards range from enterprise-quality components, mirrored sets, RAID-6, RAID-7.3 and RAID N+M, through asynchronous data sync to a second instance, full HA with synchronous data sync to a second instance and virtual IP failover between multiple sites, up to multi-tier DR and business continuity solutions.

In addition, the Genesis Platform’s 24×7 health monitoring service (HMS) communicates directly with installed products at customer sites, using the equipment serial number to track service outages, system temperature, power supply failure, data storage drive failure and dozens of other mission-critical status updates. This service is available to Scale Logic end users in all regions of the world and complies with enterprise-level security protocols by relying only on outgoing communication via a single port.

Users want more flexible workflows — storage in the cloud, on-premises. Are your offerings reflective of that?
Absolutely. This question defines our go-to-market strategy — it’s in our name and part of our day-to-day culture. Scale Logic takes a consultative role with its clients. We take our 30-plus years of experience and ask many questions. Based on the answers, we can give the customer several options. First off, many customers feel pressured to refresh their storage infrastructure before they’re ready. Scale Logic offers customized extended warranty coverage that takes the pressure off the client and allows them to review their options and then slowly implement the migration and process of taking new technology into production.

Also, our Genesis Platform has been designed to scale, meaning clients can start small and grow as their facility grows. We are not trying to force a single solution on our customers. We educate them on the various options to solve their workflow needs and allow them the luxury of choosing the solution that best meets their short-term and long-term needs as well as their budget.

Facilis’ Jim McKenna

What is the biggest trend you’ve seen in the past year in terms of storage?
Recently, I’ve found that conversations around storage inevitably end up highlighting some non-storage aspects of the product. Sort of the “storage and…” discussion where the technology behind the storage is secondary to targeted add-on functionality. Encoding, asset management and ingest are some of the ways that storage manufacturers are offering value-add to their customers.

Jim McKenna

It’s great that customers can now expect more from a shared storage product, but as infrastructure providers we should be most concerned with advancing the technology of the storage system. I’m all for added value — we offer tools ourselves that assist our customers in managing their workflow — but that can’t be the primary differentiator. A premium shared storage system will provide years of service through the deployment of many supporting products from various manufacturers, so I advise people to avoid being caught up in the value-add marketing from a storage vendor.

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
Our industry has always been dependent upon storage in the workflow, but now facilities need to manage large quantities of data efficiently, so it’s becoming more about scaled networks. In the traditional SAN environment, hard-wired Fibre Channel clients are the exclusive members of the production workgroup.

With scalable shared-storage through multiple connection options, everyone in the facility can be included in the collaboration on a project. This includes offload machines for encoding and rendering large HDR and VR content, and MAM systems with localized and cloud analysis of data. User accounts commonly grow into the triple digits when producers, schedulers and assistants all require secure access to the storage network.

Can you talk about NVMe?
Like any new technology, the outlook for NVMe is promising. Solid-state architecture solves a lot of problems inherent in HDD-based systems — seek times, read speeds, noise and cooling, form factor, etc. If you had asked me a couple of years ago, I would have guessed that SATA SSDs would be included in the majority of systems sold by now; instead, they’ve barely made a dent in HDD-based unit sales in this market. Our customers are aware of new technology, but they also prioritize tried-and-true, field-tested product designs and value high capacity at a lower cost per GB.

Spinning HDD will still be the primary storage method in this market for years to come, although solid state has advantages as a helper technology for caching and direct access for high-bandwidth requirements.

What do you do in your products to help safeguard your users’ data?
Integrity and security are priority features in a shared storage system. We go about our security differently than most, and because of this our customers have more confidence in their solution. By using a system of permissions that operates at the volume level, shielded from the complexities of network ownership attributes, network security training is not required. Because of the simplicity of securing data to only the necessary people, data integrity and privacy are increased.

In the case of data integrity during hardware failure, our software-defined data protection has been guarding our customers’ assets for over 13 years and is continually improved. With increasing drive sizes, time to complete a drive recovery is an important factor, as is system usability during the process.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
When data lifecycle is a concern of our customers, we consult on methods of building a storage hierarchy. There is no one-size-fits-all approach here, as every workflow, facility and engineering scope is different.

Tier 1 storage is our core product line, but we also have solutions for nearline (tier 2) and archive (tier 3). When the discussion turns to the cloud as a replacement for some of the traditional on-premises storage offerings, the complexity of the pricing structure, access model and interface becomes a gating factor. There are a lot of ways to effectively use the cloud, such as compute (AI, encoding, etc.), business continuity, workflow (WAN collaboration) or simple cold storage. These tools, when combined with a strong on-premises storage network, will enhance productivity and ensure on-time delivery of product.

mLogic’s co-founder/CEO Roger Mabon

What is the biggest trend you’ve seen in the past year in terms of storage?
In the M&E industry, high-resolution 4K/8K multi-camera shoots, stereoscopic VR and HDR video are commonplace and are contributing to the unprecedented amounts of data being generated in today’s media productions. This trend will continue as frame rates and resolutions increase and video professionals move to shoot in these new formats to future-proof their content.

Roger Mabon

With AI, VR and machine learning, etc., our industry is even more dependent on storage. Can you talk about that?
Absolutely. In this environment, content creators must deploy storage solutions that are high-capacity, high-performance and fault-tolerant. Furthermore, all of this content must be properly archived so it can be accessed well into the future. mLogic’s mission is to provide affordable RAID and LTO tape storage solutions that fit this critical need.

How are you addressing this?
The tsunami of data being produced in today’s shoots must be properly managed. First and foremost is the need to protect the original camera files (OCF). Our high-performance mSpeed Thunderbolt 3 RAID solutions are being deployed on-set to protect these OCF. mSpeed is a desktop RAID that features plug-and-play Thunderbolt connectivity, capacities up to 168TB and RAID-6 data protection. Once the OCF are transferred to mSpeed, camera cards can be wiped and put back into production.

The next step involves moving the OCF from the on-set RAID to LTO tape. Our portable mTape Thunderbolt 3 LTO solutions are used extensively by media pros to transfer OCF to LTO tape. LTO tape cartridges are shelf-stable for 30-plus years and cost around $10 per TB. That said, I find that many productions skip the LTO transfer and rely solely on single hard drives to store the OCF. This is a recipe for disaster, as hard drives sitting on a shelf have a lifespan of only three to five years. Companies working with the likes of Netflix are required to use LTO for this very reason. Completed projects should also be offloaded from hard drives and RAIDs to LTO tape. These hard drive systems can then be put back into action for the tasks they are designed for: editing, color correction, VFX, etc.

Can you talk about NVMe?
mLogic does not currently offer storage solutions that incorporate NVMe technology, but we do recognize numerous use cases for content creation applications. Intel is currently shipping an 8TB SSD with PCIe NVMe 3.1 x4 interface that can read/write data at 3000+ MB/second! Imagine a crazy fast and ruggedized NVMe shuttle drive for on-set dailies…

What do you do in your products to help safeguard your users’ data?

Our 8- and 12-drive mSpeed solutions feature hardware RAID data protection. mSpeed can be configured in multiple RAID levels, including RAID-6, which will protect the content stored on the unit even if two drives fail. Our mTape solutions are specifically designed to make it easy to offload media from spinning drives and archive the content to LTO tape for long-term data preservation.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
We recommend that you make two LTO archives of your content that are geographically separated in secure locations such as the post facility and the production facility. Our mTape Thunderbolt solutions accomplish this task.

In regard to the cloud, transferring terabytes upon terabytes of data takes an enormous amount of time and can be prohibitively expensive, especially when you need to retrieve the content. For now, cloud storage is reserved for productions with big pipes and big budgets.

OWC president Jennifer Soulé

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?

We’re constantly working to provide more capacity and faster performance.  For spinning disk solutions, we’re making sure that we’re offering the latest sizes in ever-increasing bays. Our ThunderBay line started as a four-bay, went to a six-bay and will grow to eight-bay in 2019. With 12TB drives, that’s 96TB in a pretty workable form factor. Of course, you also need performance, and that is where our SSD solutions come in as well as integrating the latest interfaces like Thunderbolt 3. For those with greater graphics needs, we also have our Helios FX external GPU box.

Can you talk about NVME?
With our Aura Pro X, Envoy Pro EX, Express 4M2 and ThunderBlade, we’re already into NVMe and don’t see that stopping. By the end of 2019 we expect virtually all of our external Flash-based solutions to be NVMe-based rather than SATA. As the cost of Flash goes down and performance and capacity go up, we expect broader adoption both as primary storage and in secondary cache setups. The 2TB drive supply will stabilize, we should see 4TB drives, and PCIe Gen 4 will double bandwidth. Bigger, faster and cheaper is a pretty awesome combination.

What do you do in your products to help safeguard your users’ data?
We focus more on providing products that are compatible with different encryption schemas rather than building something in. As far as overall data protection, we’re always focused on providing the most reliable storage we can. We spec our power supplies above what is required so that insufficient power is never a factor, and we test a multitude of drives in our enclosures to ensure we’re providing the best-performing drives.

For our RAID solutions, we do burn-in testing to make sure all the drives are solid. Our SoftRAID technology also provides in-depth drive-health monitoring, so you know well in advance if a drive is failing. This is critical because many other SMART-based systems fail to detect bad drives, leading to subpar system performance and corrupted data. Of course, all the hardware and software technology we put into our drives doesn’t do much if people don’t back up their data, so we also work with our customers to find the right solution for their use case or workflow.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
I definitely think we hit on flexibility within the on-prem space by offering a full range of single- and multi-drive solutions, spinning disk and SSD options, and portable to rackmounted units that can be fully set-up solutions or DIY, where you can use drives you might already have. You’ll have to stay tuned on the cloud part, but we do have plans to use the cloud to expand on the data protection our drives already offer.

A Technologist’s Data Storage Primer

By Mike McCarthy

Storage means keeping all of the files for a particular project or workflow, but they may not all be stored in the same place — different types of data have different requirements, and different storage solutions have different strengths and features.

At a fundamental level, most digital data is stored on HDDs or SSDs. HDDs, or hard disk drives, are mechanical devices that store the data on a spinning magnetic surface and move read/write heads over that surface to access the data. They currently max out around 200MB/s and 5ms latency.

SSDs, or solid-state drives, involve no moving parts. SSDs can be built with a number of different architectures and interfaces, but most are based on the same basic Flash memory technology as the CF or SD card in your camera. Some SSDs are SATA drives that use the same interface and form factor as a spinning disk for easy replacement in existing HDD-compatible devices. These devices are limited to SATA’s bandwidth of 600MB/s. Other SSDs use the PCIe interface, either in full-sized PCIe cards or the smaller M.2 form factor. These have much higher potential bandwidths, up to 3000MB/s.

Currently, HDDs are much cheaper for storing large quantities of data but require some level of redundancy for security. SSDs can also fail, but it is a much rarer occurrence. Data recovery for either is very expensive. SSDs are usually cheaper for achieving high bandwidth, unless large capacities are also needed.
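
As a rough illustration of what those throughput numbers mean in practice, here is a small sketch using the ballpark sequential figures above; real-world results vary widely with file sizes and interface overhead.

```python
# Rough copy-time math using the ballpark sequential figures above.
RATES_MB_PER_S = {"HDD": 200, "SATA SSD": 600, "PCIe/NVMe SSD": 3000}

ONE_TB_IN_MB = 1_000_000  # decimal terabyte, as drive vendors count it
for drive, rate in RATES_MB_PER_S.items():
    minutes = ONE_TB_IN_MB / rate / 60
    print(f"{drive}: ~{minutes:.0f} min to move 1TB")
# HDD: ~83 min, SATA SSD: ~28 min, PCIe/NVMe SSD: ~6 min
```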

RAIDs
Traditionally, hard drives used in professional contexts are grouped together for higher speeds and better data security. These are called RAIDs, which stands for redundant array of independent disks. There are a variety of approaches to RAID design, and they differ significantly from one another.

RAID-0 or striping is technically not redundant, but every file is split across each disk, so each disk only has to retrieve its portion of a requested file. Since these happen in parallel, the result is usually faster than if a single disk had read the entire file, especially for larger files. But if one disk fails, every one of your files will be missing a part of its data, making the remaining partial information pretty useless. The more disks in the array, the higher the chances of one failing, so I rarely see striped arrays composed of more than four disks. It used to be popular to create striped arrays for high-speed access to restorable data, like backed-up source footage, or temp files, but now a single PCIe SSD is far faster, cheaper, smaller and more efficient in most cases.

Sugar

RAID-1 or mirroring is when all of the data is written to more than one drive. This limits the array's capacity to the size of the smallest source volume, but the data is very secure. There is no speed benefit on writes, since each drive must write all of the data, but reads can be distributed across the identical drives, with performance similar to RAID-0.

RAID-3, -5 and -6 try to achieve a balance between those benefits for larger arrays with more disks (minimum three). They all require more complicated controllers, so they are more expensive for the same levels of performance. RAID-3 stripes data across all but one drive, then calculates parity (odd/even) data from the data drives and stores it on the remaining drive. This allows the data from any single failed drive to be restored from the parity data. RAID-5 is similar, but the parity blocks alternate between drives depending on the data block, allowing reads to be shared across all disks, not just the "data drives."

So the capacity of a RAID-3 or RAID-5 array will be the minimum individual disk capacity times the number of disks minus one. RAID-6 is similar but stores two drives worth of parity data, which via some more advanced math than odd/even, allows it to restore the data even if two drives fail at the same time. RAID-6 capacity will be the minimum individual disk capacity times the number of disks minus two, and is usually only used on arrays with many disks. RAID-5 is the most popular option for most media storage arrays, although RAID-6 becomes more popular as the value of the data stored increases and the price of extra drives decreases over time.
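To make that parity math concrete, here is a minimal Python sketch (illustrative only, assuming identical drive sizes) of the usable-capacity rules described above:

```python
def usable_capacity_tb(level, disks, size_tb):
    """Usable capacity of an array of identical disks, by RAID level."""
    if level == 0:            # striping: no redundancy, full capacity
        return disks * size_tb
    if level == 1:            # mirroring: every drive holds a full copy
        return size_tb
    if level in (3, 5):       # one drive's worth of parity
        return (disks - 1) * size_tb
    if level == 6:            # two drives' worth of parity
        return (disks - 2) * size_tb
    raise ValueError("unsupported RAID level")

print(usable_capacity_tb(5, 4, 2))    # four 2TB disks at RAID-5 -> 6TB
print(usable_capacity_tb(6, 24, 12))  # 24 12TB disks at RAID-6 -> 264TB
```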

Storage Bandwidth
Digital data is stored as a series of ones and zeros, each of which is a bit. One byte is 8 bits, which frequently represents one letter of text, or one pixel of an image (8-bit single channel). Bits are frequently referenced in large quantities to measure data rates, while bytes are usually referenced when measuring stored data. I prefer to use bytes for both purposes, but it is important to know the difference. A Megabit (Mb) is one million bits, while a Megabyte (MB) is one million bytes, or 8 million bits. Similar to metric, Kilo is thousand, Mega is million, Giga is billion, and Tera is trillion. Anything beyond that you can learn as you go.

Networking speeds are measured in bits (Gigabits), but with headers and everything else, it is safer to divide by 10 when converting speed into bytes per second. Estimate 100MB/s for Gigabit, up to 1000MB/s on 10GbE and around 500MB/s for the new N-BaseT standard. Similarly, when transferring files over a 30Mb Internet connection, expect around 3MB/s, then multiply by 60 or 3,600 to get to minutes or hours (180MB/min or 10,800MB/hr in this case). So if you have to download a 10GB file on that connection, come back to check on it in an hour.
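That divide-by-10 rule is easy to express in code. A minimal sketch (the figures are this article's rough estimates, not guarantees):

```python
def mb_per_sec(link_mbps):
    """Divide megabits by 10, not 8, to allow for headers and overhead."""
    return link_mbps / 10

def transfer_hours(file_gb, link_mbps):
    """Rough wall-clock time to move a file of file_gb gigabytes."""
    return file_gb * 1000 / mb_per_sec(link_mbps) / 3600

print(mb_per_sec(1000))                  # Gigabit Ethernet: ~100 MB/s
print(mb_per_sec(30))                    # 30Mb Internet link: ~3 MB/s
print(round(transfer_hours(10, 30), 1))  # 10GB download: ~0.9 hours
```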

Magnopus

Because networking standards are measured in bits, and because networking is so important for sharing video files, many video file types are measured in bits as well. An 8Mb/s H.264 stream is 1MB per second. DNxHD36 is 36Mb/s (or 4.5MB/s when divided by eight), DV and HDV are 25Mb/s, DVCProHD is 100Mb/s, etc. Other compression types have variable bit rates depending on the content, but there are still average rates we can make calculations from. Any file's size divided by its duration will reveal its average data rate. It is important to make sure that your storage has the bandwidth to handle as many streams of video as you need, which will be that average data rate times the number of streams. So 10 streams of DNxHD36 will be 360Mb/s, or 45MB/s.
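Both of those rules fit in a couple of lines of Python; a minimal sketch:

```python
def avg_rate_mb_s(file_size_mb, duration_s):
    """A file's size divided by its duration gives its average data rate."""
    return file_size_mb / duration_s

def required_mb_s(stream_mbps, streams):
    """Total storage bandwidth: per-stream rate (Mb/s) times stream count."""
    return stream_mbps * streams / 8   # divide by 8 to convert to MB/s

print(required_mb_s(36, 10))   # 10 streams of DNxHD36 -> 45.0 MB/s
```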

The other issue to account for is IO requests and drive latency. Lots of small requests require not just high total transfer rates, but high IO performance as well. Hard drives can only fulfill around 100 individual requests per second, regardless of how big those requests are. So while a single drive can easily sustain a 45MB/s stream, satisfying 10 different sets of requests may keep it so busy bouncing between the demands that it can’t keep up. You may need a larger array, with a higher number of (potentially) smaller disks to keep up with the IO demands of multiple streams of data. Audio is worse in this regard in that you are dealing with lots of smaller individual files as your track count increases, even though the data rate is relatively low. SSDs are much better at handling larger numbers of individual requests, usually measured in the thousands or tens of thousands per second per drive.
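A rough feasibility check for IO load can be sketched the same way. The per-drive IOPS figures and the request size below are assumptions for illustration; real numbers depend on the drives, controller and access pattern:

```python
import math

HDD_IOPS = 100      # the estimate above for a single hard drive
SSD_IOPS = 50_000   # assumed: SSDs handle tens of thousands of requests/sec

def drives_for_io(streams, stream_mb_s, request_mb, drive_iops=HDD_IOPS):
    """Drives needed to service the request load, ignoring bandwidth."""
    requests_per_sec = streams * (stream_mb_s / request_mb)
    return math.ceil(requests_per_sec / drive_iops)

# 10 DNxHD36 streams read in 64KB chunks swamp a single hard drive:
print(drives_for_io(10, 4.5, 0.064))            # ~8 HDDs
print(drives_for_io(10, 4.5, 0.064, SSD_IOPS))  # 1 SSD
```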

Storage Capacity
Capacity, on the other hand, is simpler. Megabytes are usually the smallest increments of data that we have to worry about calculating. A media type's data rate (in MB/sec) times its duration (in seconds) will give you its expected file size. If you are planning to edit a feature film with 100 hours of offline content in DNxHD36, that is 3,600x100 seconds, times 4.5MB/s, equaling 1,620,000MB, 1,620GB or simply about 1.6TB. But you should add some headroom for unexpected needs, and a 2TB disk is only about 1.8TB once formatted, so it will just barely fit. It is probably worth sizing up to at least 3TB if you are planning to store your renders and exports there as well.
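That estimate is the same arithmetic in the other direction. A minimal sketch, using the numbers above:

```python
def footage_size_gb(rate_mb_s, hours):
    """Data rate (MB/s) times duration (converted to seconds) gives size."""
    return rate_mb_s * hours * 3600 / 1000

offline_gb = footage_size_gb(4.5, 100)   # 100 hours of DNxHD36
print(offline_gb)                        # 1620.0 GB, about 1.6TB

FORMATTED_2TB_GB = 1800                  # a "2TB" disk formats to ~1.8TB
print(offline_gb <= FORMATTED_2TB_GB)    # True, but with little headroom
```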

Once you have a storage solution of the required capacity there is still the issue of connecting it to your system. The most expensive options connect through the network to make them easier to share (although more is required for true shared storage), but that isn’t actually the fastest option or the cheapest. A large array can be connected over USB3 or Thunderbolt, or via the SATA or SAS protocol directly to an internal controller.

There are also options for Fibre Channel, which can allow sharing over a SAN, but this is becoming less popular as 10GbE becomes more affordable. Gigabit Ethernet and USB3 won't be fast enough to play back high-bandwidth files, but 10GbE, multichannel SAS, Fibre Channel and Thunderbolt can all handle almost anything up to uncompressed 4K.

Direct attached storage will always have the highest bandwidth and lowest latency, as it has the fewest steps between the stored files and the user. Using Thunderbolt or USB adds another controller and hop, Ethernet even more so.

Different Types of Project Data
Now that we know the options for storage, let's look at the data we anticipate needing to store. First off, we will have lots of source media (camera original files, transcoded editing dailies, or both). This is usually in the terabytes, but the data rates vary dramatically — from 1Mb/s H.264 files to 200Mb/s ProRes files to 2,400Mb/s Red files. The data rate for the files you are playing back, combined with the number of playback streams you expect to use, will determine the bandwidth you need from your storage system. These files are usually static, in that they don't get edited or written to in any way after creation.

The exceptions would be sidecar files like RMD and XML files, which require write access to the media volume. If a certain set of files is static, then as long as a backup of the source data exists, they don't need to be backed up on a regular basis and don't even necessarily need redundancy. That said, if restoring that data would be costly in terms of the time lost during the process, some level of redundancy is still recommended.

Another important set of files we will have is our project files, which actually record the “work” we do in our application. They contain instructions for manipulating our media files during playback or export. The files are usually relatively small, and are constantly changing as we use them. That means they need to be backed up on a regular basis. The more frequent the backups, the less work you lose when something goes wrong.

We will also have a variety of exports and intermediate renders over the course of the project. Whether they are flattened exports for upload and review, VFX files or other renders, these are a more dynamic set of files than our original source footage. And they are generated on our systems instead of being imported from somewhere else. These can usually be regenerated from their source projects if necessary, but the time and effort required usually make it worth investing in protecting or backing them up. In most workflows, these files don't change once they are created, which makes it easier to back them up if desired.

There will also be a variety of temp files generated by most editing or VFX programs. Some of these files need high-speed access for best application performance, but they rarely need to be protected or backed up because they can be automatically regenerated by the source applications on the fly if needed.

Choosing the Right Storage for Your Needs
Ok, so we have source footage, project files, exports and temp files that we need to find a place for. If you have a system or laptop with a single data volume, the answer is simple: It all goes on the C drive. But we can achieve far better performance if we have the storage infrastructure to break those files up onto different devices. Newer laptops frequently have both a small SSD and a larger hard disk. In that case we would want our source footage on the (larger) HDD, while the project files should go on the (safer) SSD.

Usually your temp file directories should be located on the SSD as well since it is faster, and your exports can go either place, preferably the SSD if they fit. If you have an external drive of source footage connected, you can back all files up there, but you should probably work from projects stored on the local system, playing back media from the external drive.

A professional workstation can have a variety of different storage options available. I have a system with two SSDs and two RAIDs, so I store my OS and software on one SSD, my projects and temp files on the other SSD, my source footage on one RAID and my exports on the other. I also back up my project folder to the exports RAID on a daily basis, since the SSDs have no redundancy.

Individual Storage Solution Case Studies
If you are natively editing a short film project shot on Red, then R3Ds can be 300MB/s. That is 1080GB/hour, so five hours of footage will be just over 5TB. It could be stored on a single 6TB external drive, but that won’t give you the bandwidth to play back in real-time (hard drives usually top out around 200MB/s).

Striping your data across two drives in one of those larger external units would probably provide the needed performance, but with that much data you are unlikely to have a backup elsewhere. So data security becomes more of a concern, leading us toward a RAID-5-based solution. A four-disk array of 2TB drives provides 6TB of usable storage at RAID-5 (4x2TB = 8TB raw capacity, minus 2TB of parity data, equals 6TB of usable storage capacity). Using an array of eight 1TB drives would provide higher performance and 7TB of space before formatting (8x1TB = 8TB raw capacity, minus 1TB of parity, because a single drive failure would only lose 1TB of data in this configuration), but it will cost more (an eight-port RAID controller, an eight-bay enclosure and two 1TB drives are usually more expensive than one 2TB drive).

Larger projects deal with much higher numbers. Another project has 200TB of Red footage that needs to be accessible on a single volume. A 24-bay enclosure with 12TB drives provides 288TB of space, minus two drives' worth of data for RAID-6 redundancy (288TB raw - [2x12TB for parity] = 264TB usable capacity), which will be more like 240TB of space available in Windows once it is formatted.
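Putting those case-study numbers together, a sketch like the following compares candidate arrays on both capacity and rough read bandwidth (assuming, per the estimates above, about 200MB/s per hard drive and reads striped across the data disks):

```python
HDD_MB_S = 200   # rough sustained throughput of one hard drive

def raid5_usable_tb(disks, size_tb):
    """RAID-5 gives up one drive's worth of capacity to parity."""
    return (disks - 1) * size_tb

def stripe_bandwidth_mb_s(data_disks):
    """Striped reads scale roughly with the number of data disks."""
    return data_disks * HDD_MB_S

# Four 2TB drives vs. eight 1TB drives for ~5.4TB of 300MB/s Red footage:
for disks, size_tb in [(4, 2), (8, 1)]:
    print(f"{disks} x {size_tb}TB: {raid5_usable_tb(disks, size_tb)}TB usable, "
          f"~{stripe_bandwidth_mb_s(disks - 1)}MB/s reads")
```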

Sharing Storage and Files With Others
As Ethernet networking technology has improved, the benefits of expensive SAN (storage area network) solutions over NAS (network attached storage) solutions have diminished. 10-Gigabit Ethernet (10GbE) transfers over 1GB of data a second and is relatively cheap to implement. NAS has the benefit of a single host system controlling the writes, usually with software included in the OS. This prevents data corruption and also isolates the client devices from the file system, allowing PC, Mac and Linux devices to all access the same files. This comes at the cost of slightly increased latency and occasionally lower total bandwidth, but the price and complexity of installation are far lower.

So now all but the largest facilities and most demanding workflows are being deployed with NAS-based shared storage solutions. This can be as simple as a main editing system with a large direct-attached array sharing its media with an assistant station over a direct 10GbE link, for about $50. This can be scaled up by adding a switch and connecting more users to it, but the more users sharing the data, the greater the impact on the host system and the lower the overall performance. Beyond three or four users, it becomes prudent to have a dedicated host system for the storage, for both performance and stability. Once you are buying a dedicated system, there are a variety of other functionalities offered by different vendors to improve performance and collaboration.

Bin Locking and Simultaneous Access
The main step to improve collaboration is to implement what is usually referred to as a “bin locking system.” Even with a top-end SAN solution and strict permissions controls there is still the possibility of users overwriting each other’s work, or at the very least branching the project into two versions that can’t easily be reconciled.

If two people are working on the same sequence at the same time, only one of their sets of changes is going to make it to the master copy of the file without some way of combining the changes (and solutions are being developed). But usually the way to avoid that is to break projects down into smaller pieces and make sure that no two people are ever working on the exact same part. This is accomplished by locking the part (or bin) of the project that a user is editing so that no one else may edit it at the same time. This usually requires some level of server functionality because it involves changes that are not always happening at the local machine.

Avid requires specific support from the storage host in order to enable that feature. Adobe, on the other hand, has implemented a simpler storage-based solution, which is effective but not infallible, and works on any shared storage device that offers users write access.
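Avid's and Adobe's implementations are proprietary, but the storage-based approach can be illustrated with a lock file. This is a hypothetical sketch of the general idea, not either vendor's actual mechanism, and atomic create-if-absent is not guaranteed on every network filesystem:

```python
import os

def try_lock_bin(bin_path, user):
    """Claim exclusive edit rights to a bin by atomically creating a
    lock file next to it on the shared volume. True on success."""
    lock_path = bin_path + ".lock"
    try:
        # O_CREAT | O_EXCL makes create-if-absent a single atomic step,
        # so two clients cannot both believe they hold the lock.
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.write(fd, user.encode())
        os.close(fd)
        return True
    except FileExistsError:
        return False    # another editor already holds this bin

def release_bin(bin_path):
    """Give the bin back by removing the lock file."""
    os.remove(bin_path + ".lock")
```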

A Note on iSCSI
iSCSI arrays offer some interesting possibilities for read-only data, like source footage, as iSCSI gives block-level access for maximum performance and runs on any network without expensive software. The only limit is that only one system can copy new media to the volume, and there must be a secure way to ensure the remaining systems have read-only access. Projects and exports must be stored elsewhere, but those files require much less capacity and bandwidth than source media. I have not had the opportunity to test out this hybrid SAN theory since I don’t have iSCSI appliances to test with.

A Note on Top-End Ethernet Options
40Gb Ethernet products have been available for a while, and we are now seeing 25Gb and 100Gb Ethernet products as well. 40Gb cards can be had quite cheaply, and I was tempted to use them for direct connect, hoping to see 4GB/s to share fast SSDs between systems. But 40Gb Ethernet is actually a trunk of four parallel 10Gb links, and each individual connection is limited to 10Gb. It is easy to share the 40Gb of aggregate bandwidth across 10 systems accessing a 40Gb storage host, but very challenging to get more than 10Gb to a single client system. Having extra lanes on the highway doesn't get you to work any faster if there are no other cars on the road; it only helps when there is lots of competing traffic.

25Gb Ethernet, on the other hand, will give you access to nearly 3GB/s for single connections, but as that is newer technology, the prices haven't come down yet ($500 instead of $50 for a 10GbE direct link). 100Gb Ethernet is four 25Gb links trunked together and is subject to the same aggregate limitations as 40Gb.

Main Image: Courtesy of Sugar Studios LA


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.


Storage for Post Studios

By Karen Moltenbrey

The post industry relies heavily on storage solutions, without question. Facilities are juggling a variety of tasks and multiple projects all at once, and deadlines are always looming. Thus, these studios need a storage solution that is fast and reliable. Each studio has different needs and searches to find the right system for its particular workflow. Luckily, there are many storage options for pros to choose from.

For this article, we spoke with two post houses about their storage solutions and why they are a good fit for each of their needs.

Sugar Studios LA
Sugar Studios LA is a one-stop-shop playground for filmmakers that offers a full range of post production services, including editorial, color, VFX, audio, production and finishing, with each department led by seasoned professionals. Its office suites in the Wiltern Theater Tower, in the center of LA, serve an impressive list of clients, from numerous independent film producers and distributors to Disney, Marvel, Sony, MGM, Universal, Showtime, Netflix, AMC, Mercedes-Benz, Ferrari and others.

Jijo Reed and Sting in one of their post suites.

With so much important data in play at one time, Sugar needs a robust, secure and reliable storage system. However, with diverse offerings come diverse requirements. For its online and color projects, Sugar uses a Symply SAN with 200TB of usable storage. The color workstations are connected via 10Gb Ethernet over Fibre with a 40Gb uplink to the network. For mass storage and offline work, the studio uses a MacOS server acting as a NAS, with 530TB of usable storage connected via a 40Gb network uplink. For Avid offline jobs, the facility has an Avid Nexis Pro with 40TB of storage, and for Avid Pro Tools collaboration, a Facilis TerraBlock with 40TB of usable storage.

“We can collaborate with any and all client stations working on the same or different media and sharing projects across multiple software platforms,” says Jijo Reed, owner/executive producer of Sugar. “No station is limited to what it can do, since every station has access to all media. Centralized storage is so important because not only does it allow collaboration, we always have access to all media and don’t have to fumble through drives. It is also RAID-protected, so we don’t have to be concerned with losing data.”

Prior to employing the centralized storage, Sugar had been using G-Technology’s G-RAID drives, changing over in late 2016. “Once our technical service advisor, Zach Moller, came on board, he began immediately to institute a storage network solution that was tailored to our workflow,” says Reed.

Reed, an award-winning director/producer, founded the company in 2012, using a laptop (running Final Cut Pro 7) and an external hard drive he had purchased on sale at Fry’s. His target base at the time was producers and writers needing sizzle trailers to pitch their projects — at a time when the term “sizzle trailer” was not part of the common vernacular. “I attended festivals to pitch my wares, producing over 15 sizzles the first year,” he says, “and it grew from there.”

Since Reed was creating sizzles for yet-to-be-made features, he was in “pole position” to handle the post for some of these independent films when they got funded. In 2015, he, along with his senior editor, Paul Buhl, turned their focus to feature post work, which was “more lucrative and less exhausting, but mostly, we wanted to tell stories – the whole story.” He rebranded and changed the name of the company from Sizzlepitch to Sugar Studios, and brought on a feature post producer, Chris Harrington. Reed invested heavily in the company, purchasing equipment and acquiring space. Soon, one bay became two, then three and so on. Currently, the company spans three full floors, including the penthouse of the Wiltern Theater Tower.

As Reed proudly points out, the studio space features 21 bays and workstations, two screening theaters, including a 25-seat color and mix DI stage with a Barco DP4K projector and Dolby Atmos configuration. “We are fully staffed, all under one roof, with editorial, full audio services, color correction/grading, VFX and a greenscreen cyclorama stage with on-site 4K cameras, grip and lighting,” he details. “But, it’s the people who make this work. Our passion is obvious to our clients.”

While Sugar was growing and expanding, so, too, was its mass storage solution. According to Zach Moller, it started with the NAS due to its low price and fast (10Gb) connection to every client machine. “The Symply SAN solution was needed because we required a high-bandwidth system for online and color playback that used Fibre Channel technology for the low latency and local drive configuration,” he says.

Moreover, the facility wanted flexibility with its SAN solution; it was very expensive to have every machine connected via Fibre Channel, “and frankly, we didn’t need that bandwidth,” Reed says. “Symply allowed us to have client machines choose whether they connected via Fibre Channel or 10Gb. If this wasn’t the case, we would have been in a pickle, having to purchase expansion chassis for every machine to open up additional PCI slots.” (The bulk of the machines at Sugar connect using the pre-existing 10Gb Ethernet over Fibre network, thus negating the need to use another PCI slot on a Fibre Channel card.)

American Dreamer

At Sugar, the camera masters and production audio are loaded directly to the NAS for mass storage. Then, the group archives the camera masters to LTO for deep archival, for an additional backup. During LTO archival, the studio creates the dailies for the offline edit on either Avid Media Composer (where the MXFs are migrated to the Avid Nexis server) or Adobe Premiere (where the ProRes dailies continue to live on the NAS).

When adding visual effects, the artists render to the Symply SAN when preparing for the online, color and finishing.

The studio works with a wide range of codecs, some of which are extremely taxing on the systems. And, the SAN is ideal, especially for the raster image files (EXRs), since each frame has such a high density — and there can be 100,000 frames per folder. “This can only be accomplished with a premium storage solution: our SAN,” Reed says.

When the studio moved to the EXR format for the VFX on the American Dreamer feature film, for example, its original NAS solution over 10Gb didn't have enough bandwidth for playback on its systems (1.2GB/sec). Once it upgraded the SAN solution with dual 16Gb Fibre Channel, the studio was able to play back uncompressed 4K EXR footage without the headache or frustration of stuttering.

“We have created an environment that caters to the creative process with a technical infrastructure that is superfast and solid. Filmmakers love us, and I couldn’t be prouder of my team for making this happen,” says Reed.

Mike Seabrooke

Postal
Established in 2015, Postal is a boutique creative studio that produces motion graphics, visual effects, animation, live action and editorial, with the vision of transcending all mediums — whether it’s short animations for social media or big-budget visual effects for broadcast. “As a studio, we love to experiment with different techniques. We feel strongly that the idea should always come first,” says Mike Seabrooke, producer at New York’s Postal.

To ensure that these ideas make it to the final stage of a project, the company uses a mixture of hard drives, LTO tapes and servers that house the content while the artists are working on projects, as well as for archival purposes. Specifically, the studio employs the EditShare Storage v.7 shared storage platform and EditShare Ark Tape for managing the LTO tape libraries that serve as nearline and offline backup. This is the system setup that Postal deployed initially when it started up a few years ago, and since then Postal has been continuously updating and expanding it based on its growth as a studio.

Let’s face it, hard drives always have the possibility of failing. But, failure is not something that Postal — or any other post house — can afford. That is why the studio keeps two instances per job on archive drives: a master and a backup. “Organized hard drives give us quick access to previous jobs if need be, which sometimes can be quite the lifesaver,” says Seabrooke.

 

Postal’s Nordstrom project.

LTO tapes, meanwhile, are used to back up the facility's servers running EditShare v7 — which house Postal's editorial jobs — on the off chance that something happens to that precious piece of hardware. "The recovery process isn't the fastest, but the system is compact, self-contained and gives us peace of mind in case anything does go wrong," Seabrooke explains.

In addition, the studio uses Retrospect backup and restore software for its working projects server. Seabrooke says, “We chose it because it offers a backup service that does not require much oversight.”

When Postal began shopping for a solution for its studio three years ago, reliability was at the top of its list. The facility needed a system it could rely on to back up its data, which would comprise the facility’s entire scope of work. Ease of use was also a concern, as was access. This decision prompted questions such as: Would we have to monitor it constantly? In what timeframe would we be able to access the data? Moreover, cost was yet another factor: Would the solution be effective without breaking our budget?

Postal’s solution indeed enabled them to check off every one of those boxes. “Our projects demand a system that we can count on, with the added benefit of quick retrieval,” Seabrooke says.

Throughout the studio’s production process, the artists are accessing project data on the servers. Then, once they complete the project, the data is transferred to the archival drives for backup. This frees up space on the company servers for new jobs, while providing access to the stored data if needed.

“Storage is so important in our work because it is our work. Starting over on a project is an outcome we cannot allow, so responsible storage is a necessity,” concludes Seabrooke.


Karen Moltenbrey is a long-time VFX and post production writer.


A VFX pro on avoiding the storage ‘space trap’

By Adam Stern

Twenty years is an eternity in any technology-dependent industry. Over the course of two-plus decades of visual effects facility ownership, changing standards, demands, capability upgrades and staff expansion have seen my company, Vancouver-based Artifex Studios, usher in several distinct eras of storage, each with its own challenges. As we’ve migrated to bigger and better systems, one lesson we’ve learned has proven critical to all aspects of our workflow.

Adam Stern

In the early days, Artifex used off-the-shelf hard drives and primitive RAIDs for our storage needs, which brought with it slow transfer speeds and far too much downtime when loading gigabytes of data on and off the system. We barely had any centralized storage, and depended on what was essentially a shared network of workstations — which our then-small VFX house could get away with. Even considering where we were then, which was sub-terabyte, this was a messy problem that needed solving.

We took our first steps into multi-TB NAS using off-the-shelf solutions from companies like Buffalo. This helped our looming storage space crunch but brought new issues, including frequent breakdowns that cost us vital time and lost iterations — even with plenty of space. I recall a particular feature film project we had to deliver right before Christmas. It almost killed us. Our NAS crashed and wouldn’t allow us to pull final shots, while throwing endless error messages our way. I found myself frantically hot-wiring spare drives to enable us to deliver to our client. We made it, but barely.

At that point it was clear a change was needed. We started using a solution that Annex Pro — a Vancouver-based VAR we'd been working with for years — helped put into place. The storage vendor behind that system was later bought and then went away completely.

Our senior FX TD, Stanislav Enilenis, who was also handling IT for us back then, worked with Annex to install the new system. According to Stan, "The switch allowed bandwidth for expansion. However, when we would be in high-production mode, bandwidth became an issue. While the system was an overall improvement over our first multi-terabyte NAS, we had issues. The company was bought out, so getting drives became problematic, parts became harder to source and there were system failures. When we hit top capacity, with the then-20-plus staff all grinding, the system would slow to a crawl and our artists spent more time waiting than working."

Artifex machine room.

As we transitioned from SD to HD, and then to 4K, our delivery requirements increased along with our rendering demands, causing severe bottlenecks in the established setup. We needed a better solution, but options were limited. We were potentially looking at a six-figure investment in a system not geared to M&E.

In 2014, Artifex was working on the TV series Continuum, which had fairly massive 3D requirements on an incredibly tight turnaround. It was time to make a change. After a number of discussions with Annex, we made the decision to move to an offering from a new company called Qumulo, which provided above-and-beyond service, training and setup. When we expanded into our new facility, Qumulo helped properly move the tech. Our new 48TB pipeline flowed freely and offered features we didn't previously have, and Qumulo was constantly adding new and requested updates.

Laila Arshid, our current IS manager, has found this to be particularly valuable. “In Qumulo’s dashboard I can see realtime analytics of everything in the system. If we have a slowdown, I can track it to specific workstations and address any issues. We can shut that workstation or render-node down or reroute files so the system stays fast.”

The main lesson we've learned throughout every storage system change or upgrade is this: It isn't just about having a lot of space. That's an easy trap to fall into, especially today when we're seeing skyrocketing demands from 4K+ workflows. You can have unlimited storage, but if you can't utilize it efficiently and at speed, your storage space becomes irrelevant.

In our industry, the number of iterations we can produce has a dramatic impact on the quality of work we’re able to provide, especially with today’s accelerated schedules. One less pass can mean work with less polish, which isn’t acceptable.

Artifex provided VFX for Faster Than Light

Looking forward, we’re researching extended storage on the cloud: an ever-expanding storage pool with the advantages of fast local infrastructure. We currently use GCP for burst rendering with Zync, along with nearline storage, which has been fantastic — but the next step will be integrating these services with our daily production processes. That brings a number of new challenges, including how to combine local and cloud-based rendering and storage in ways that are seamless to our team.

Constantly expanding storage requirements, along with maintaining the best possible speed and efficiency to allow for artist iterations, are the principal drivers for every infrastructure decision at our company — and should be a prime consideration for everyone in our industry.


Adam Stern is the founder of Vancouver, British Columbia’s Artifex. He says the studio’s main goal is to heighten the VFX experience, both artistically and technically, and collaborate globally with filmmakers to tell great stories.


Storage for Interactive, VR

By Karen Moltenbrey

Every vendor in the visual effects and post production industries relies on data storage. However, studios working on new media or hybrid projects, which generate far more content in general, not only need a reliable solution, they need one that can handle terabytes upon terabytes of data.

Here, two companies in the VR space discuss their need for storage solutions that serve their business requirements.

Lap Van Luu

Magnopus
Located in downtown Los Angeles, Magnopus creates VR and AR experiences. While a fairly new company — it was founded in 2013 — its staff has an extensive history in the VFX and games industries, with Academy Award winners among its founders. So there is no doubt that the group knows what it takes to create amazing content.

It also knows the necessity of a reliable storage solution, one that can handle the large amount of data generated by an AR or VR project. At Magnopus, the crew uses a custom-built solution leveraging Supermicro architecture. As Magnopus CTO Lap Van Luu points out, they are using an SSG-6048R-E1CR60N 4U chassis that the studio populates with two tiers of storage: the cache read-and-write layer is NVMe, while the second tier is SAS. Both are in a RAID-10 configuration, with 1TB of NVMe and 500TB of SAS raw storage.

“This setup allows us to scale to a larger workforce and meet the demands of our artists,” says Luu. “We leverage faster NVMe Flash and larger SAS for the bulk of our storage requirements.”

Before Magnopus, Luu worked at companies with all kinds of storage systems over the past 20 years, including those from NetApp, BlueArc and Isilon, as well as custom builds of ZFS, FreeNAS, Microsoft Windows Storage Spaces and Hadoop configurations. However, since Magnopus opened, it has only switched to a bigger and faster version of its original setup, starting with a custom Supermicro system with 400GB of SSD and 250TB of SAS in the same configuration.

“We went with this configuration because as we were moving more into realtime production than traditional VFX, the need for larger renderfarms and storage IO demands dropped dramatically,” says Luu. “We also knew that we wanted to leverage smart caching due to the cost of Flash storage dropping to a reasonable price point. It was the ideal situation to be in. We were starting a new company with a less-demanding infrastructure with newer technology that was cheaper, faster and better overall.”

Nevertheless, choosing a specific solution was not a decision that was made lightly. "When you move away from your premier storage solution providers, there is always a concern about scalability and reliability. When working in realtime production, re-rendering elements isn't a matter of hours or days, but rather seconds and minutes. It was important for us to have redundant backups. But with the cost savings on storage, we could easily get mirrored servers and still save a significant amount of money."

Luu knew the studio wanted to leverage Flash caching, so the big question was, how much Flash was necessary to meet the demands of their artists and processing farm? The processing farm was mainly used to generate textures and environments that were imported into a realtime engine, such as Unity or Unreal Engine. To this end, Magnopus had to find out who offered a caching solution that was as hands-off as possible and invisible to all the users. "LSI, now Avago, had a solution with the RAID controller called CacheCade, which dealt with all the caching," he says. "All you had to do was set up some preferences and the RAID controller would take care of the rest."

However, CacheCade had a 512GB size limit on the caching layer, so the studio had to do some testing to see if it would ever exceed that, and in a rare situation it did, says Luu. "But it was never a worry, because behind the flash cache was a 60-drive SAS RAID-10 configuration."

As Luu explains, when working with VFX, IOPS (IO operations per second) is always the biggest issue due to the heavy demand from certain types of applications. "VFX work and compositing can typically drive any storage solution to a grinding halt when you have a renderfarm taxing the production storage from your artists," he explains. However, realtime development IO demands are significantly less, since the assets are created in a DCC application but imported into a game engine, where processing occurs in realtime and locally. So storing all those traditional VFX elements is not necessary, and the overall capacity of storage dropped to one-tenth of what was required with VFX, Luu points out.

And since Magnopus has a Flash-based cache layer that is large enough to meet the company’s IO demands, it does not have to leverage localization to reduce the IO demand off the main production server; as a result, the user gets immediate server response. And, it means that all data within the pipeline resides on the company’s main production server — where the company starts and ends any project.

“Magnopus is a content-focused technology company,” Luu says. “All our assets and projects that we create are digital. Storage is extremely important because it is the lifeblood of everything we create. The storage server can be the difference between if a user can focus on creative content creation where the infrastructure is invisible or the frustration of constantly being blocked and delayed by hardware. Enabling everyone to work as efficiently as possible allows for the best results and products for our clients and customers.”

Light Sail VR
Light Sail VR is a Hollywood-based VR boutique that is a pioneer in cinematic virtual reality storytelling. Since its founding three years ago, the studio has been producing a range of interactive, 360- and 180-degree VR content, including original work and branded pieces for Google, ABC, GoPro and Paramount.

Matt Celia on set for Speak of the Devil.

Because Light Sail VR is a unique but small company, employees often have to wear a number of hats. For instance, co-founder Robert Watts is executive producer and handles many of the logistical issues. His partner, Matthew Celia, is creative director and handles more of the technical aspects of the business. So when it comes to managing the company's storage needs, Celia is the guy. And having a reliable system that keeps things running smoothly is paramount, as he is also juggling shoots and post production work. No one can afford delays in production and post, but for a small company they can be especially disastrous.

Light Sail VR does not simply dabble in VR; it is what the company does exclusively. Most of the projects thus far have been live action, though the group started its first game engine work this year. When the studio produced a piece with GoPro in the first year of its founding, it was on a sneakernet of G-Drives from G-Technology, “and I was going crazy!” says Celia. “VR is fantastic, but it’s very data-intensive. You can max out a computer’s processing very easily, and the render times are extraordinarily long. There’s a lot of shots to get through because every shot becomes a visual effects shot with either stitching, rotoscoping or compositing needed.”

He continues: “I told Robert [Watts] we needed to get a shared storage server so if I max out one computer while I’m working, I can just go to another computer and keep working, rather than wait eight to 10 hours for a render to finish.”

The Speak of the Devil shoot.

Celia had been dialed into the post world for some time. “Before diving into the world of VR, I was a Final Cut guy, and the LumaForge guys and [founder] Sam Mestman were people I always respected in the industry,” he says. So, Celia reached out to them with a cold call and explained that Light Sail VR was doing virtual reality, an uncharted, pioneering new thing, and was going to need a lot of storage — and needed it fast. “I told them, ‘We want to be hooked up to many computers, both Macs and PCs, and don’t want to deal with file structures and those types of things.’”

Celia points out that they are an independent and small boutique, so finding something that was cost effective and reliable was important. LumaForge responded with a solution called Jellyfish Mobile, geared for small teams and on-set work or portable office environments. “I think we got the 30TB NAS server that has four 10Gb Ethernet connections.” That enabled Light Sail VR to hook up the system to all its computers, “and it worked,” he adds. “I could work on one shot, hit render, and go to another computer and continue working on the next shot and hit render, then kind of ping-pong back and forth. It made our lives a lot easier.”

Light Sail VR has since graduated to the larger-capacity Jellyfish Rack system, which is a 160TB solution (expandable up to 1 petabyte).

The storage is located in Light Sail VR's main office and is hooked up to its computers. The filmmakers shoot in the field and, if on location, download the data to drives, which they transport back to the office and load onto the server. Then they transcode all the media to DNx. (VR is captured in H.264 format, which is not friendly for editing at such high-resolution frame sizes.)
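That transcode step can be scripted. As a hedged example (the file names are hypothetical, and the DNxHR profile is one reasonable choice among several), a Python wrapper around FFmpeg might look like this:

```python
import subprocess

def transcode_to_dnx(src, dst):
    """Transcode an H.264 capture to DNxHR in a QuickTime container
    so large-frame VR footage scrubs smoothly in the edit."""
    subprocess.run([
        "ffmpeg", "-i", src,
        "-c:v", "dnxhd", "-profile:v", "dnxhr_hq",  # DNxHR scales past 1080p
        "-c:a", "pcm_s16le",                        # uncompressed audio for editorial
        dst,
    ], check=True)

transcode_to_dnx("cam_360.mp4", "cam_360_dnxhr.mov")
```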

Currently, Celia is in New York, having just wrapped the 20th episode of original content for Refinery29, a media company focused on young women that produces editorial and video programming, live events and social, shareable content delivered across major social media platforms, and covers a variety of categories from style to politics and more. Eight of the episodes are currently in various stages of the post pipeline, due to come out later this year. “And having a solid storage server has been a godsend,” Celia says.

The studio backs up locally onto Seagate drives for archival purposes and sometimes employs G-Technology drives for on-set work. “We just got this new G-Tech SSD that’s 2TB. It’s been great for use on set because having an SSD and downloading all the cards while on set makes your wrap process so much faster,” Celia points out.

Lately, Light Sail VR is shooting a lot of VR-180, requiring two 64GB cards per camera — one for the right eye and one for the left eye. But when they are shooting with the Yi Halo next-gen 3D 360-degree Google Jump camera, they use 17 64GB cards. “That’s a lot of data,” says Celia. “You can have a really bad day if you have really bad drives.”

The studio’s previous solution operated via Thunderbolt 1 in a RAID-5. It only worked on a single machine and was not cross-platform. As the studio made the transition over to PC from Mac to take advantage of better hardware capable of supporting VR playback, that solution was just not practical. They also needed a solution that was plug and play, so they could just pop it into a 10Gb Ethernet connection — they did not want fiber, “which can get expensive.”

The Light Sail team.

“I just wanted something very simple that was cross-platform and could handle what we were doing, which is, by the way, 6K or 8K stereo at 60 frames per second – these workloads are larger than most feature films,” Celia says. “So, we needed a lot of storage. We needed it fast. We needed it to be shared.”

However, while Celia searched for a system, one thing became clear to him: The solutions were technical. “It seemed like I would have to be my own IT department.” And, that was just one more hat he did not want to have to wear. “At LumaForge, they are independent filmmakers. They understood what I was trying to do immediately, and were willing to go on that journey with us.”

Says Celia, "I always call hard drives or storage the underwear of the post production world, because it's the thing you hate spending a lot of money on, but you really need it to perform and work."

Main Image: Magnopus


Karen Moltenbrey is a long-time VFX and post writer.


Storage Roundtable

Production, post, visual effects, VR… you can’t do it without a strong infrastructure. This infrastructure must include storage and products that work hand in hand with it.

This year we spoke to a sampling of those providing storage solutions — of all kinds — for media and entertainment, as well as a storage-agnostic company that helps get your large files from point A to point B safely and quickly.

We gathered questions from real-world users — things that they would ask of these product makers if they were sitting across from them.

Quantum’s Keith Lissak
What kind of storage do you offer, and who is the main user of that storage?
We offer a complete storage ecosystem based around our StorNext shared storage and data management solution, including Xcellis high-performance primary storage, Lattus object storage and Scalar archive and cloud. Our customers include broadcasters, production companies, post facilities, animation/VFX studios, NCAA and professional sports teams, ad agencies and Fortune 500 companies.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Xcellis features continuous scalability and can be sized to precisely fit current requirements and scaled to meet future demands simply by adding storage arrays. Capacity and performance can grow independently, and no additional accelerators or controllers are needed to reach petabyte scale.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
We don’t have exact numbers, but a growing number of our customers are using cloud storage. Our FlexTier cloud-access solution can be used with both public (AWS, Microsoft Azure and Google Cloud) and private (StorageGrid, CleverSafe, Scality) storage.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
We offer a range of StorNext 4K Reference Architecture configurations for handling demanding workflows, including 4K, 8K and VR. Our customers can choose systems with small or large form-factor HDDs, up to an all-flash SSD system with the ability to handle 66 simultaneous 4K streams.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might users notice when connecting on these different platforms?
StorNext systems are OS-agnostic and can work with all Mac, Windows and Linux clients with no discernible difference.

Zerowait’s Rob Robinson
What kind of storage do you offer, and who is the main user of that storage?
Zerowait’s SimplStor storage product line provides storage administrators scalable, flexible and reliable on-site storage needed for their growing storage requirements and workloads. SimplStor’s platform can be configured to work in Linux or Windows environments and we have several customers with multiple petabytes in their data centers. SimplStor systems have been used in VFX production for many years and we also provide solutions for video creation and many other large data environments.

Additionally, Zerowait specializes in NetApp service, support and upgrades, and we have provided many companies in the media and VFX businesses with off-lease transferrable licensed NetApp storage solutions. Zerowait provides storage hardware, engineering and support for customers that need reliable and big storage. Our engineers support customers with private cloud storage and customers that offer public cloud storage on our storage platforms. We do not provide any public cloud services to our customers.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Our customers typically need on-site storage for processing speed and security. We have developed many techniques and monitoring solutions that we have incorporated into our service and hardware platforms. Our SimplStor and NetApp customers need storage infrastructures that scale into the multiple petabytes, and often require GigE, 10GigE or a NetApp FC connectivity solution. For customers that can't handle the bandwidth constraints of the public Internet to process their workloads, Zerowait has the engineering experience to help them get the most out of their on-premises storage.

How many of the people buying your solutions are using them with other cloud-based products (i.e. Microsoft Azure)?
Many of our customers use public cloud solutions for their non-proprietary data storage while using our SimplStor and NetApp hardware and support services for their proprietary, business-critical, high-speed and regulatory storage solutions where data security is required.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
SimplStor’s density and scalability make it perfect for use in HD and higher resolution environments. Our SimplStor platform is flexible and we can accommodate customers with special requests based on their unique workloads.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might users notice when connecting on these different platforms?
Zerowait’s NetApp and SimplStor platforms are compatible with both Linux (NFS) and Windows (CIFS) environments. OS X is supported in some applications. Every customer has a unique infrastructure and set of applications they are running. Customers will see differences in performance, but our flexibility allows us to customize a solution to maximize the throughput to meet workflow requirements.

Signiant’s Mike Nash
What kind of storage works with your solution, and who is the main user or users of that storage?
Signiant’s Media Shuttle file transfer solution is storage agnostic, and for nearly 200,000 media pros worldwide it is the primary vehicle for sending and sharing large files. Even though Media Shuttle doesn’t provide storage, and many users think of their data as “in Media Shuttle.” In reality, their files are located in whatever storage their IT department has designated. This might be the company’s own on-premises storage, or it could be their AWS or Microsoft Azure cloud storage tenancy. Our users employ a Media Shuttle portal to send and share files; they don’t have to think about where the files are stored.

How are you making sure your products are scalable so people can grow either their use or the bandwidth of their networks (or both)?
Media Shuttle is delivered as a cloud-native SaaS solution, so it can be up and running immediately for new customers, and it can scale up and down as demand changes. The servers that power the software are managed by our DevOps team and monitored 24×7 — and the infrastructure is auto-scaling and instantly available. Signiant does not charge for bandwidth, so customers can use our solutions with any size pipe at no additional cost. And while Media Shuttle can scale up to support the needs of the largest media companies, the SaaS delivery model also makes it accessible to even the smallest production and post facilities.

How many of the people buying your solutions are using them with cloud storage (i.e. AWS or Microsoft Azure)?
Cloud adoption within the M&E industry remains uneven, so it's no surprise that we see a mixed picture when we look at the storage choices our customers make. Since we first introduced the cloud storage option, there has been constant month-over-month growth in the number of customers deploying portals with cloud storage. It's not yet at parity with on-prem storage, but the growth trends are clear.

On-premises content storage is very far from going away. We see many Media Shuttle customers taking a hybrid approach, with some portals using cloud storage and others using on-prem storage. It’s also interesting to note that when customers do choose cloud storage, we increasingly see them use both AWS and Azure.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
We can move any size of file. As media files continue to get bigger, the value of our solutions continues to rise. Legacy solutions such as FTP, which lack any file acceleration, will grind things to a halt if 4K, 8K, VR and other huge files need to be moved between locations. And consumer-oriented sharing services like Dropbox and Google Drive become non-starters with these types of files.

What platforms do your system connect to (e.g. Mac OS X, Windows, Linux), and what differences might end-users notice when connecting on these different platforms?
Media Shuttle is designed to work with a wide range of platforms. Users simply log in to portals using any web browser. In the background, a native application installed on the user's personal computer provides the acceleration functionality. This app works with Windows and Mac OS X systems.

On the IT side of things, no installed software is required for portals deployed with cloud storage. To connect Media Shuttle to on-premises storage, the IT team will run Signiant software on a computer in the customer’s network. This server-side software is available for Linux and Windows.

NetApp’s Jason Danielson
What kind of storage do you offer, and who is the main user of that storage?
NetApp has a wide portfolio of storage and data management products and services. We have four fundamentally different storage platforms — block, file, object and converged infrastructure. We use these platforms and our data fabric software to create a myriad of storage solutions that incorporate flash, disk and cloud storage.

1. NetApp E-Series block storage platform is used by leading shared file systems to create robust and high-bandwidth shared production storage systems. Boutique post houses, broadcast news operations and corporate video departments use these solutions for their production tier.
2. NetApp FAS network-attached file storage runs NetApp OnTap. This platform supports many thousands of applications for tens of thousands of customers in virtualized, private cloud and hybrid cloud environments. In media, this platform is designed for extreme random-access performance. It is used for rendering, transcoding, analytics, software development and Internet-of-Things pipelines.
3. NetApp StorageGrid Webscale object store manages content and data for back-up and active archive (or content repository) use cases. It scales to dozens of petabytes, billions of objects and currently 16 sites. Studios and national broadcast networks use this system and are currently moving content from tape robots and archive silos to a more accessible object tier.
4. NetApp SolidFire converged and hyper-converged platforms are used by cloud providers and enterprises running large private clouds that need quality of service across hundreds to thousands of applications. Global media enterprises appreciate the ease of scaling, the simplicity of QoS quota setting and the overall ease of maintenance for the largest-scale deployments.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
The four platforms mentioned above scale up and scale out to support well beyond the largest media operations in the world. So our challenge is not scalability for large environments but appropriate sizing for individual environments. We are careful to design storage and data management solutions that are appropriate to media operations’ individual needs.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Seven years ago, NetApp set out on a major initiative to build the data fabric. We are well on the path now with products designed specifically for hybrid cloud (a combination of private cloud and public cloud) workloads. While the uptake in media and entertainment is slower than in other industries, we now have hundreds of customers that use our storage in hybrid cloud workloads, from backup to burst compute.

We help customers wanting to stay cloud-agnostic by using AWS, Microsoft Azure, IBM Cloud and Google Cloud Platform flexibly, as project and pricing demands dictate. AWS, Microsoft Azure, IBM, Telstra and ASE, along with another hundred or so cloud storage providers, include NetApp storage and data management products in their service offerings.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
For higher-bandwidth (higher-bitrate) video production, we’ll generally architect a solution with our E-Series storage under either Quantum StorNext or PixitMedia PixStor. Since 2012, when the NetApp E5400 enabled the mainstream adoption of 4K workflows, the E-Series platform has seen three generations of upgrades, and the controllers are now more than 4x faster. The chassis has remained the same through these upgrades, so some customers have chosen to put the latest controllers into their existing chassis to improve bandwidth or to use faster network interconnects like 16Gb Fibre Channel. Many post houses continue to run Fibre Channel to the workstation for these higher-bandwidth video formats, while others have moved to Ethernet (40 and 100 Gigabit). As flash (SSD) prices continue to drop, flash is starting to be used for video production in all-flash arrays or in hybrid configurations. We recently showed our new EF570 all-flash array supporting NVM Express over Fabrics (NVMe-oF) technology, providing 21GB/s of bandwidth and 1 million IOPS with less than 100µs of latency. This technology is initially targeted at supercomputing use cases, and we will see whether it is adopted for UHD production workloads over the next couple of years.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.), and what differences might end-users notice when connecting on these different platforms?
NetApp maintains a compatibility matrix table that delineates our support of hundreds of client operating systems and networking devices. Specifically, we support Mac OS X, Windows and various Linux distributions. Bandwidth expectations differ between these three operating systems and Ethernet and Fibre Channel connectivity options, but rather than make a blanket statement about these, we prefer to talk with customers about their specific needs and legacy equipment considerations.

G-Technology’s Greg Crosby
What kind of storage do you offer, and who is the main user of that storage?
Western Digital’s G-Technology products provide high-performing and reliable storage solutions for end-to-end creative workflows, from capture and ingest to transfer and shuttle, all the way to editing and final production.

The G-Technology brand supports a wide range of users for both field and in-studio work, with solutions that span portable handheld drives (oftentimes used to back up content on the go) all the way to in-studio drives offering capacities up to 144TB. We recognize that each creative has a unique workflow, and some embrace cloud-based products. We are proud to complement those cloud services, acting as a central location to store raw content or as a conduit to feed cloud features and capabilities.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Our line ranges from small portable and rugged drives to large multi-bay RAID and NAS solutions, covering all aspects of the media and entertainment industry. By integrating the latest interface technologies, such as USB-C and Thunderbolt 3, our storage solutions take full advantage of fast file transfers.

We make it easy to take a ton of storage into the field. The G-Speed Shuttle XL drive is available in capacities up to 96TB, and an optional Pelican case with handle makes it easy to transport, mitigating any concerns about running out of storage on location. We recently launched the G-Drive mobile SSD R-Series, which is built to withstand a three-meter (nearly 10-foot) drop and, being solid state, endures accidental bumps and knocks.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Many of our customers are using cloud-based solutions to complement their creative workflows. We find that most of our customers use our solutions as primary storage, or to easily transfer and shuttle their content, since the cloud is not an efficient way to move large amounts of data. We see cloud capabilities as a great way to share project files and low-resolution content, collaborate with others on projects, and distribute a variety of deliverables.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Today’s camera technology enables not only capture at higher resolutions but also higher frame rates and more dynamic imagery. We have solutions that can easily support multi-stream 4K, 8K and VR workflows, as well as multi-layer photo and visual effects projects. G-Technology is well positioned to support these creative workflows as we integrate the latest technologies into our storage solutions. From small portable and rugged SSDs to fast, high-capacity multi-drive RAID solutions with the latest Thunderbolt 3 and USB-C interfaces, we are ready to tackle a variety of creative endeavors.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.), and what differences might users notice when connecting on these different platforms?
Our complete portfolio of external storage solutions works for Mac and PC users alike. With native support for Apple Time Machine, these solutions are formatted for Mac OS out of the box but can easily be reformatted for Windows users. G-Technology also has a number of strategic technology partners, including Apple, Atomos, Red Camera, Adobe and Intel.

Panasas’ David Sallak
What kind of storage do you offer, and who is the main user of that storage?
Panasas ActiveStor is an enterprise-class easy-to-deploy parallel scale-out NAS (network-attached storage) that combines Flash and SATA storage with a clustered file system accessed via a high-availability client protocol driver with support for standard protocols.

The ActiveStor storage cluster consists of the ActiveStor Director (ASD-100) control engine, the ActiveStor Hybrid (ASH-100) storage enclosure, the PanFS parallel file system, and the DirectFlow parallel data access protocol for Linux and Mac OS.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
ActiveStor is engineered to scale easily. There are no specific architectural limits for how widely the ActiveStor system can scale out, and adding more workloads and more users is accomplished without system downtime. The latest release of ActiveStor can grow either storage or bandwidth needs in an environment that lets metadata responsiveness, data performance and data capacity scale independently.

For example, we quote capacity and performance numbers for a Panasas storage environment containing 200 ActiveStor Hybrid 100 storage-node enclosures with five ActiveStor Director 100 units for file system metadata management. This configuration results in a single 57PB namespace delivering 360GB/s of aggregate bandwidth and in excess of 2.6 million IOPS.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Panasas customers deploy workflows and workloads in ways that suit consistent on-site performance and availability requirements, while experimenting with remote infrastructure components, such as storage and compute provided by cloud vendors. The majority of Panasas customers are still exploring the right ways to leverage cloud-based products in a cost-managed way that avoids surprises.

This means that file-based storage requirements continue to take precedence when processing real-time video assets. At the same time, customers expect storage vendors to support Panasas in cloud environments, where a parallel clustered data architecture can exploit the agility of the underlying cloud infrastructure without compromising availability or consistency of performance.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Panasas ActiveStor is engineered to deliver superior application responsiveness via our DirectFlow parallel protocol for applications working in compressed UHD, 4K and higher-resolution media formats. Compared to traditional file-based protocols such as NFS and SMB, DirectFlow provides better granular I/O feedback to applications, resulting in client application performance that aligns well with the compressed UHD, 4K and other extreme-resolution formats.

For uncompressed data, Panasas ActiveStor is designed to support large-scale rendering of these data formats via distributed compute grids such as render farms. The parallel DirectFlow protocol results in better utilization of CPU resources in render nodes when processing frame-based UHD, 4K and higher-resolution formats, resulting in less wall clock time to produce these formats.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might users notice when connecting on these different platforms?
Panasas ActiveStor supports macOS and Linux with our higher-performance DirectFlow parallel client software. We support all client platforms via NFS or SMB as well.

Users notice that when connecting to Panasas ActiveStor via DirectFlow, the I/O experience is as if they were working with local media files on internal drives. With ordinary shared storage, standard protocol access can introduce the slight delay associated with open network protocols.

Facilis’ Jim McKenna
What kind of storage do you offer, and who is the main user of that storage?
We have always focused on shared storage for the facility. It’s high-speed attached storage, good for anyone who’s cutting HD or 4K. Our workflow and management features really set us apart from basic network storage. We also attach to the cloud through software that uses all the latest APIs.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Most of our large customers have been with us for several years, and many started pretty small. Our method of scalability is flexible in that you can decide to simply add expansion drives, add another server, or add a head unit that aggregates multiple servers. Each method increases bandwidth as well as capacity.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Many customers use cloud, either through a corporate gateway or directly uploaded from the server. Many cloud service providers have ways of accessing the file locations from the facility desktops, so they can treat it like another hard drive. Alternatively, we can schedule, index and manage the uploads and downloads through our software.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Facilis is known for our speed. We still support Fibre Channel when everyone else, it seems, has moved completely to Ethernet, because it provides better speeds for intense 4K and beyond workflows. We can handle UHD playback on 10Gb Ethernet, and up to 4K full frame DPX 60p through Fibre Channel on a single server enclosure.

What platforms do your systems connect to (e.g. Mac OS X, Windows, Linux, etc.)? And what differences might users notice when connecting on these different platforms?
We have a custom multi-platform shared file system, not NAS (network-attached storage). Even though NAS may be compatible with multiple platforms by using multiple sharing methods, permissions and optimization across platforms are not easily manageable. With Facilis, the same volume, shared one way with one set of permissions, looks and acts native to every OS and even shows up as a local hard disk on the desktop. You can’t get any more cross-platform compatible than that.

SwiftStack’s Mario Blandini
What kind of storage do you offer, and who is the main user of that storage?
We offer hybrid cloud storage for media. SwiftStack is 100% software and runs on-premises atop the server hardware you already buy, using local capacity and/or capacity in public cloud buckets. Data is stored in cloud-native format, so there is no need for gateways, which do not scale. Our technology is used by broadcasters for active archive and OTT distribution, by digital animators for distributed transcoding and by mobile gaming/eSports companies for massive concurrency, among others.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
The SwiftStack software architecture separates access, storage and management, where each function can be run together or on separate hardware. Unlike storage hardware with the mix of bandwidth and capacity being fixed to the ports and drives within, SwiftStack makes it easy to scale the access tier for bandwidth independently from capacity in the storage tier by simply adding server nodes on the fly. On the storage side, capacity in public cloud buckets scales and is managed in the same single namespace.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Objectively, use of capacity in public cloud providers like Amazon Web Services and Google Cloud Platform is still in its early days for many users. Customers in media, however, are on the leading edge of adoption, not only extending their on-premises environments to a public cloud, but also pursuing second-source strategies across two public clouds. Two years ago adoption was less than 10%; today it is approaching 40%; and by 2020 it looks like the 80/20 rule will apply. Users actually do not care much how their data is stored, as long as their experience is as good as or better than it was before, and public clouds are great at delivering content to users.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Arguably, larger assets produced by a growing number of cameras and computers have driven the need to store those assets differently than in the past. A petabyte is the new terabyte in media storage. Banks have many IT admins, whereas media shops have few. SwiftStack offers the same consumption experience as public cloud, which is very different from the on-premises solutions of the past. Licensing is based on the amount of data managed, not the total capacity deployed, so you pay as you grow. Whether you keep four replicas or use erasure coding at 1.5x overhead, the price is the same.
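
To make the replica-versus-erasure-coding point concrete, here is a hedged sketch of the math. The data size and the per-terabyte license price are invented for illustration; only the 4x and 1.5x protection overheads come from the description above:

```python
# Hedged illustration of protection overhead vs. licensed capacity under a
# pay-for-data-managed model. DATA_TB and the license price are invented;
# the 4x (four replicas) and 1.5x (erasure coding) overheads are from the text.

def raw_capacity_tb(data_tb: float, scheme: str) -> float:
    """Raw disk capacity needed to protect `data_tb` of user data."""
    overhead = {"replicas-4": 4.0, "erasure-1.5x": 1.5}
    return data_tb * overhead[scheme]

DATA_TB = 500                 # user data under management (assumed)
PRICE_PER_TB = 20             # hypothetical license price per managed TB

for scheme in ("replicas-4", "erasure-1.5x"):
    raw = raw_capacity_tb(DATA_TB, scheme)
    cost = DATA_TB * PRICE_PER_TB          # depends on data managed, not raw
    print(f"{scheme:>13}: {raw:,.0f}TB raw, ${cost:,} license")
```

The raw footprint differs by a factor of 2.67 between the two schemes, but the licensed (managed) data, and therefore the cost under this model, stays the same.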

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might end-users notice when connecting on these different platforms?
The great thing about cloud storage, whether it is on-premises or resides with your favorite IaaS provider like AWS or Google, is that the interface is HTTP. In other words, every smartphone, tablet, Chromebook and computer has an identical user experience. For classic applications on systems that do not support the AWS S3 interface, users see the storage as a mount point or folder in their application, via either NFS or SMB. The best part is that it is all a single namespace: data can come in as file, be transformed via object and be read either way, so the user experience does not need to change even though the data is stored in the most modern way.
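
Because the interface is plain HTTP with an S3-compatible API, any S3 client can talk to such a system. Below is a minimal sketch using Python’s boto3, where the endpoint URL, bucket name and credentials are placeholders rather than real SwiftStack values:

```python
# Minimal sketch: talking to an S3-compatible on-premises endpoint with boto3.
# Endpoint, bucket, object key and credentials are all placeholder values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.com",  # assumed on-prem S3 endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Upload a mezzanine file over HTTP(S)...
s3.upload_file("promo_v3.mov", "media-bucket", "masters/promo_v3.mov")

# ...and any HTTP/S3 client, from a workstation to a smartphone app,
# can fetch the same object the same way.
obj = s3.get_object(Bucket="media-bucket", Key="masters/promo_v3.mov")
print(obj["ContentLength"], "bytes")
```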

Dell EMC’s Tom Burns
What kind of storage do you offer, and who is the main user of that storage?
At Dell EMC, we created two storage platforms for the media and entertainment industry: the Isilon scale-out NAS All-Flash, hybrid and archive platform to consolidate and simplify file-based workflows and the Dell EMC Elastic Cloud Storage (ECS), a scalable enterprise-grade private cloud solution that provides extremely high levels of storage efficiency, resiliency and simplicity designed for both traditional and next-generation workloads.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
In the media industry, change is inevitable. That’s why every Isilon system is built to rapidly and simply adapt by allowing the storage system to scale performance and capacity together, or independently, as more space or processing power is required. This allows you to scale your storage easily as your business needs dictate.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Over the past five years, Dell EMC media and entertainment customers have added more than 1.5 exabytes of Isilon and ECS data storage to simplify and accelerate their workflows.

Isilon’s cloud tiering software, CloudPools, provides policy-based automated tiering that lets you seamlessly integrate with cloud solutions as an additional storage tier for the Isilon cluster at your data center. This allows you to address rapid data growth and optimize data center storage resources by using the cloud as a highly economical storage tier with massive storage capacity.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
As technologies that enhance the viewing experience continue to emerge, including higher frame rates and resolutions, uncompressed 4K, UHD, high dynamic range (HDR) and wide color gamut (WCG), underlying storage infrastructures must effectively scale to keep up with expanding performance requirements.

Dell EMC recently launched the sixth generation of the Isilon platform, including our all-flash (F800), which brings the simplicity and scalability of NAS to uncompressed 4K workflows — something that up until now required expensive silos of storage or complex and inefficient push-pull workflows.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc)? And what differences might end-users notice when connecting on these different platforms?
With Dell EMC Isilon, you can streamline your storage infrastructure by consolidating file-based workflows and media assets, eliminating silos of storage. Isilon scale-out NAS includes integrated support for a wide range of industry-standard protocols, letting each major operating system connect using the most suitable protocol for optimum performance and feature support: IPv4 and IPv6, NFS, SMB, HTTP, FTP, OpenStack Swift-based object access for your cloud initiatives, and native Hadoop Distributed File System (HDFS).

The ECS software-defined cloud storage platform provides the ability to store, access and manipulate unstructured data, and it is compatible with existing Amazon S3 and OpenStack Swift APIs, as well as EMC CAS and EMC Atmos APIs.

EditShare’s Lee Griffin
What kind of storage do you offer, and who is the main user of that storage?
Our storage platforms are tailored for collaborative media workflows and post production. They combine the advanced EFS (EditShare File System) distributed file system with intelligent load balancing in a scalable, fault-tolerant architecture that offers cost-effective connectivity. Within our shared storage platforms, we also have a unique take on current cloud workflows: with the security and reliability of cloud-based technology still prohibiting a full migration to cloud storage for production, EditShare AirFlow uses EFS on-premises storage to provide secure access to media from anywhere in the world with a basic Internet connection. Our main users are creative post houses, broadcasters and large corporate companies.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Recently, we upgraded all our platforms to EFS and introduced two new single-node platforms, the EFS 200 and 300. These single-node platforms allow users to grow their storage while keeping a single namespace, which eliminates the management of multiple storage volumes. It also enables them to better plan for the future: when their facility requires more storage and bandwidth, they can simply add another node.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
No production happens in one location, so the ability to move media securely, and to back it up, remains a high priority for our clients. From our Flow media asset management and via our automation module, we offer clients the option to back up their valuable content to places like Amazon S3 servers.

How does your system handle UHD, 4K and other higher-than HD resolutions?
We have many clients working with UHD content who are supplying programming to broadcasters, film distributors and online subscription media providers. Our solutions are designed to work effortlessly with high-data-rate content, with bandwidth expanding as more EFS nodes are added to the intelligent storage pool. So our system is ready and working now for 4K content and is future-proof for even higher data rates.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might end-users notice when connecting on these different platforms?
EditShare supplies native EFS client drivers for all three platforms, allowing clients to pick and choose which platform they want to work on. Whether it is an Autodesk Flame for VFX, a Resolve for grading or our own Lightworks for editing on Linux, we don’t mind. In fact, EFS offers a considerable bandwidth improvement over the existing AFP and SMB protocols when using our EFS drivers. Improved bandwidth and speed on all three platforms makes for happy clients!

And there are no differences when clients connect. We work with all three platforms the same way, offering a unified workflow to all creative machines, whether they run Mac OS, Windows or Linux.

Scale Logic’s Bob Herzan
What kind of storage do you offer, and who is the main user of that storage?
Scale Logic has developed an ecosystem (Genesis Platform) that includes servers, networking, metadata controllers, single and dual-controller RAID products and purpose-built appliances.

We have three different file systems that allow us to use the storage mentioned above to build SAN, NAS, scale-out NAS, object storage and gateways for private and public cloud. We use a combination of disk, tape and flash technology to build storage tiers that let us manage media content efficiently and scale seamlessly as our customers’ requirements change over time.

We work with customers that range from small to enterprise and everything in between. We have a global customer base that includes broadcasters, post production, VFX, corporate, sports and house of worship.

In addition to the Genesis Platform we have also certified three other tier 1 storage vendors to work under our HyperMDC SAN and scale-out NAS metadata controller (HPE, HDS and NetApp). These partnerships complete our ability to consult with any type of customer looking to deploy a media-centric workflow.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Great question, and it’s actually built into the name and culture of our company. When we bring a solution to market, it has to scale seamlessly, and it needs to be logical in the context of the customer’s environment. We focus on being able to start small but scale any system into a high-availability solution with limited to no downtime. Our solutions can scale independently, whether clients are looking to add capacity, performance or redundancy.

For example, a customer looking to move to 4K uncompressed workflows could add a Genesis Unlimited as a new workspace focused on the 4K workflow, keeping all existing infrastructure in place alongside it, avoiding major adjustments to their facility’s workflow. As more and more projects move to 4K, the Unlimited can scale capacity, performance and the needed HA requirements with zero downtime.

Customers can then start to migrate their content from their legacy storage over to Unlimited, and then repurpose the legacy storage onto the HyperFS file system as second-tier storage. Finally, once the legacy storage has been moved onto the new file system, we are more than happy to bring the legacy storage and networking hardware under our global support agreements.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Cloud adoption continues to ramp up in our industry, and we have many customers using cloud solutions for various aspects of their workflow. As it pertains to content creation, manipulation and long-term archive, however, we have not seen much adoption within our customer base. The economics simply do not support the level of performance or capacity our clients demand.

However, private cloud or cloud-like configurations are becoming more mainstream for our larger customers. Working with on-premise storage while having DR (disaster recovery) replication offsite continues to be the best solution at this point for most of our clients.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Our solutions are built not only for the current resolutions but completely scalable to go beyond them. Many of our HD customers are now putting in UHD and 4K workspaces on the same equipment we installed three years ago. In addition to 4K we have been working with several companies in Asia that have been using our HyperFS file system and Genesis HyperMDC to build 8K workflows for the Olympics.

We have a number of solutions designed to meet our customers’ requirements. Some are built with spinning disk, others with all-flash, and still others take a hybrid approach that seamlessly combines the two technologies.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might end-users notice when connecting on these different platforms?
All of our solutions are designed to support Windows, Linux and Mac OS. How they support the various operating systems, however, depends on the protocol (block or file) we are designing for the facility. If we are building a SAN strictly for block-level access (8/16/32Gb Fibre Channel or 1/10/25/40/100Gb iSCSI), we would use our HyperFS file system and universal client drivers across all operating systems. If clients are also looking for network protocols in addition to the block-level clients, we can support SMB and NFS while allowing access to block and file folders and files at the same time.

For customers that are not looking for block-level access, we focus our design work around our Genesis NX or ZX product lines. Both of these solutions are based on a NAS operating system and simply present themselves with the appropriate protocol over 1/10/25/40 or 100Gb. The Genesis ZX solution is actually a software-defined clustered NAS with enterprise feature sets, such as unlimited snapshots, metro clustering and thin provisioning, and it scales beyond 5PB.

Sonnet Technologies‘ Greg LaPorte
What kind of storage do you offer, and who is the main user of that storage?
We offer a portable, bus-powered Thunderbolt 3 SSD storage device that fits in your hand. Primary users of this product include video editors and DITs who need a “scratch drive” fast enough to support editing 4K video at 60fps while on location or traveling.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
The Fusion Thunderbolt 3 PCIe Flash Drive is currently available with 1TB capacity. With data transfer of up to 2,600 MB/s supported, most users will not run out of bandwidth when using this device.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might end-users notice when connecting on these different platforms?
Computers with Thunderbolt 3 ports running macOS Sierra or High Sierra, or Windows 10, are supported. The drive may be formatted to suit the user’s needs, either with an OS-specific format such as HFS+ or with a cross-platform format such as exFAT.


Post Supervisor: Planning an approach to storage solutions

By Lance Holte

Like virtually everything in post production, storage is an ever-changing technology. Camera resolutions and media bitrates are constantly growing, requiring higher storage bitrates and capacities. Productions are increasingly becoming more mobile, demanding storage solutions that can live in an equally mobile environment. Yesterday’s 4K cameras are being replaced by 8K cameras, and the trend does not look to be slowing down.

Yet, at the same time, productions still vary greatly in size, budget, workflow and schedule, which has necessitated more storage options for post production every year. As a post production supervisor, when deciding on a storage solution for a project or set of projects, I always try to have answers to a number of workflow questions.

Let’s start at the beginning with production questions.

What type of video compression is production planning on recording?
Obviously, more storage will be required if the project is recording to Arriraw rather than H.264.

What camera resolution and frame rate?
Once you know the bitrate from the video compression specs, you can calculate the data size on a per-hour basis. If you don’t feel like sitting down with a calculator or spreadsheet for a few minutes, there are numerous online data size calculators, but I particularly like AJA’s DataCalc application, which has tons of presets for cameras and video and audio formats.

How many cameras and how many hours per day is each camera likely to be recording?
Data size per hour, multiplied by hours per day, by shoot days and by the number of cameras gives a total estimate of the storage required for the shoot. I usually add 10-20% to this estimate to be safe.
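
In code, that back-of-the-envelope math looks like the sketch below. The 280MB/s figure is only an assumed example data rate; substitute your camera’s documented rate, or a preset from a calculator like AJA’s DataCalc:

```python
# Rough shoot-storage estimator. The camera data rate below is an assumed
# example, not a spec; look up the real figure for your camera and format.

def shoot_storage_tb(mb_per_sec: float, hours_per_day: float,
                     shoot_days: int, cameras: int,
                     safety_margin: float = 0.15) -> float:
    """Estimate total raw-media storage for a shoot, in TB, with headroom."""
    total_bytes = mb_per_sec * 1e6 * 3600 * hours_per_day * shoot_days * cameras
    return total_bytes * (1 + safety_margin) / 1e12

# Two cameras recording ~4 hours/day at an assumed 280MB/s over 25 shoot days:
print(f"{shoot_storage_tb(280, 4, 25, 2):.0f} TB")  # ~232 TB
```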

Let’s move on to post questions…

Is it an online/offline workflow?
The simplicity of editing online is awesome, and I’m holding out for the day when all projects can be edited with online media. In the meantime, most larger projects require online/offline editorial, so keep in mind the extra storage space for offline editorial proxies. The upside is that raw camera files can be stored on slower, more affordable (even archival) storage through editorial until the online process begins.

On numerous shows I’ve elected to keep the raw camera files on portable external RAID arrays (cloned and stored in different locations for safety) until picture lock. G-Tech, LaCie, OWC and Western Digital all make 48+ TB external arrays on which I’ve stored raw media during editorial. When you start the online process, copy the necessary media over to your faster online or grading/finishing storage, and finish the project with only the raw files that are used in the locked cut.
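
For the cloning step, one way to confirm that a backup array really matches the source is a checksum manifest. The sketch below is a generic illustration rather than a specific tool (dedicated checksum-verified copy utilities are common in dailies workflows), and the volume paths are placeholders:

```python
# Hedged sketch: hash every file on the source and clone volumes, then
# compare manifests. Paths are placeholders for illustration only.
import hashlib
from pathlib import Path

def manifest(root: str) -> dict:
    """Map each file's relative path to its SHA-256 digest."""
    digests = {}
    root_path = Path(root)
    for f in sorted(root_path.rglob("*")):
        if f.is_file():
            h = hashlib.sha256()
            with open(f, "rb") as fh:
                for chunk in iter(lambda: fh.read(1 << 20), b""):
                    h.update(chunk)
            digests[str(f.relative_to(root_path))] = h.hexdigest()
    return digests

src = manifest("/Volumes/RAID_A/raw")
dst = manifest("/Volumes/RAID_B/raw")
mismatches = {p for p in src if src[p] != dst.get(p)}
print("clone verified" if not mismatches else f"mismatched: {mismatches}")
```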

How many editorial staff need to be working on the project simultaneously?
On smaller projects that only require an editorial staff of two or three people who need to access the media at the same time, you may be able to get away with the editors and assistants network sharing a storage array, and working in different projects. I’ve done numerous smaller projects in which a couple editors connected to an external RAID (I’ve had great success with Proavio and QNAP arrays), which is plugged into one workstation and shares over the network. Of course, the network must have enough bandwidth for both machines to play back the media from the storage array, but that’s the case for any shared storage system.

For larger projects that employ five, 10 or more editors and staff, storage designed for team sharing is almost certainly a requirement. Avid has opened up integrated shared storage to outside storage vendors over the past few years, but Avid’s Nexis solution still remains an excellent option. Aside from providing a solid solution for Media Composer and Symphony, Nexis can also be used with basically any other NLE, ranging from Adobe Premiere Pro to Blackmagic DaVinci Resolve to Final Cut Pro and others. Project-sharing abilities within the NLEs vary depending on the application, but the clear trend is toward multiple editors and post production personnel working simultaneously in the same project.

Does editorial need to be mobile?
Increasingly, editorial tends to begin near the start of physical production, which can mean editors need to be on or near set. This is a pretty simple question to answer, but it is worth keeping in mind so that a shoot doesn’t end up without enough storage in a place where additional storage isn’t easily available, or where the power requirements can’t be met. It’s also a good moment to plan simple things like the number of shuttle or transfer drives that may be needed to ship media back to home base.

Does the project need to be compartmentalized?
For example, should proxy media be on a separate volume or workspace from the raw media/VFX/music/etc.? Compartmentalization is good. It’s safe. Accidents happen, and it’s a pain if someone accidentally deletes everything on the VFX volume or workspace on the editorial storage array. But it can be catastrophic if everything is stored in the same place and they delete all the VFX, graphics, audio, proxy media, raw media, projects and exports.

Split up the project onto separate volumes, and only give write access to the necessary parties. The bigger the project and team, the bigger the risk for accidents, so err on the side of safety when planning storage organization.

Finally, we move to finishing, delivery and archive questions…

Will the project color and mix in-house? What are the delivery requirements? Resolution? Delivery format? Media and other files?
Color grading and finishing often require the fastest storage speeds of the whole pipeline. By this point, the project should be conformed back to the camera media, and the colorist is often working with high bitrate, high-resolution raw media or DPX sequences, EXRs or other heavy file types. (Of course, there are as many workflows as there are projects, many of which can be very light, but let’s consider the trend toward 4K-plus and the fact that raw media generally isn’t getting lighter.) On the bright side, while grading and finishing arrays need to be fast, they don’t need to be huge, since they won’t house all the raw media or editorial media — only what is used in the final cut.

I’m a fan of using an attached SAS or Thunderbolt array, which is capable of providing high bandwidth to one or two workstations. Anything over 20TB shouldn’t be necessary, since the media will be removed and archived as soon as the project is complete, ready for the next project. Arrays like the Areca ARC-5028T2 or Proavio EB800MS give read speeds of 2,000+ MB/s, which can play back 4K DPXs in real time.
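
The arithmetic behind that real-time claim is worth seeing once. The sketch below assumes 10-bit RGB DPX, which packs three channels into four bytes per pixel, and ignores per-frame headers, so treat the results as ballpark figures:

```python
# Ballpark DPX playback bandwidth: 10-bit RGB DPX packs three channels into
# 4 bytes per pixel; file headers and filesystem overhead are ignored here.

def dpx_stream_mb_s(width: int, height: int, fps: float,
                    bytes_per_pixel: float = 4.0) -> float:
    """Sustained read rate needed for real-time playback, in MB/s."""
    return width * height * bytes_per_pixel * fps / 1e6

print(f"4K DCI 24p: {dpx_stream_mb_s(4096, 2160, 24):.0f} MB/s")  # ~849 MB/s
print(f"UHD 60p:    {dpx_stream_mb_s(3840, 2160, 60):.0f} MB/s")  # ~1991 MB/s
```

At 24fps, a 4K DPX stream sits comfortably under the 2,000MB/s mark; push to 60fps and you are already at the edge of what such an array can sustain.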

How should the project be archived?
There are a few follow-up questions to this one, like: Will the project need to be accessed with short notice in the future? LTO is a great long-term archival solution, but pulling large amounts of media off LTO tape isn’t exactly quick. For projects that I suspect will be reopened in the near future, I try to keep an external hard drive or RAID with the necessary media onsite. Sometimes it isn’t possible to keep all of the raw media onsite and quickly accessible, so keeping the editorial media and projects onsite is a good compromise. Offsite, in a controlled, safe, secure location, LTO-6 tapes house a copy of every file used on the project.

Post production technology changes with the blink of an eye, and storage is no exception. Once these questions have been answered, if you are spending any serious amount of money, get an opinion from someone who is intimately familiar with the cutting edge of post production storage. Emphasis on the “post production” part of that sentence, because video I/O is not the same as, say, a bank with the same storage size requirements. The more money devoted to your storage solutions, the more opinions you should seek. Not all storage is created equal, so be 100% positive that the storage you select is optimal for the project’s particular workflow and technical requirements.

There is more than one good storage solution for any workflow, but the first step is always answering as many storage- and workflow-related questions as possible to start taking steps down the right path. Storage decisions are perhaps one of the most complex technical parts of the post process, but like the rest of filmmaking, an exhaustive, thoughtful, and collaborative approach will almost always point in the right direction.

Main Image: G-Tech, QNAP, Avid and Western Digital all make a variety of storage solutions for large and small-scale post production workflows.


Lance Holte is an LA-based post production supervisor and producer. He has spoken and taught at such events as NAB, SMPTE, SIGGRAPH and Createasphere. You can email him at lance@lanceholte.com.


What you should ask when searching for storage

Looking to add storage to your post studio? Who isn’t these days? Jonathan Abrams, chief technical officer at New York City’s Nutmeg Creative, was kind enough to put together a list that can help all in their quest for the storage solution that best fits their needs.

Here are some questions that customers should ask a storage manufacturer.

What is your stream count at RAID-6?
The storage manufacturer should have stream count specifications available for both Avid DNx and Apple ProRes at varying frame rates and raster sizes. Use this information to help determine which product best fits your environment.
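
As a sanity check on any published figure, the conversion from sustained array bandwidth to stream count is simple division, as the hedged sketch below shows. The codec bitrates are the commonly published approximations for those formats; the 0.6 derating factor is purely an assumption standing in for the RAID-6 parity, rebuild and client-contention overhead that proper vendor specs measure:

```python
# Hedged back-of-envelope: converting an array's sustained read bandwidth
# into a rough stream count. Trust vendor-published stream counts over this;
# the 0.6 derating factor is an assumption for illustration only.

CODEC_MB_S = {
    "ProRes 422 HQ 1080p29.97": 27.5,   # ~220 Mb/s
    "Avid DNxHD 145 1080p29.97": 18.1,  # ~145 Mb/s
}

def rough_stream_count(array_mb_s: float, codec: str,
                       derate: float = 0.6) -> int:
    """Streams the array might sustain after derating for real-world load."""
    return int(array_mb_s * derate / CODEC_MB_S[codec])

for codec in CODEC_MB_S:
    print(codec, "->", rough_stream_count(1000, codec), "streams")
```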

How do I connect my clients to your storage?  
Gigabit Ethernet (copper)? 10 Gigabit Ethernet (50-micron fiber)? Fibre Channel (FC)? These are listed in ascending order of cost and performance. Combined with the answer to the question above, this narrows down which of the storage manufacturer’s products fits your environment.

Can I use whichever network switch I want to and know that it will work, or must I be using a particular model in order for you to be able to support my configuration and guarantee a baseline of performance?
If you are using a Mac with Thunderbolt ports, then you will need a network adapter, such as a Promise SANLink2 10G SFP+ for your shared storage connection. Also ask, “Can I use any Thunderbolt network adapter, or must I be using a particular model in order for you to be able to support my configuration and guarantee a baseline of performance?”

If you are an Avid Media Composer user, ask, “Does your storage present itself to Media Composer as if it was Avid shared storage?”
This will allow the first person who opens a Media Composer project to obtain a lock on a bin.  Other clients can open the same project, though they will not have write access to said bin.

What is covered by support? 
Make certain that both the hardware (the chassis and everything inside it) and the software (client and server) are covered by support. This includes major version upgrades to the server and client software (e.g., v11 to v12). You do not want your storage manufacturer to announce a new software version at NAB 2018 and then find out that the upgrade is not covered by your support contract and must be purchased separately.

For how many years will you be able to replace all of the hardware parts?
Will the storage manufacturer replace any part within three years of your purchase, provided you have an active support contract? Will they charge you less for support if they cannot replace failed components during that year’s support contract? A variation of this question is, “What is your business model?” If the storage manufacturer will only guarantee availability of all components for three years, then their business model is based on you buying another server from them in three years. Are you prepared to be locked into that upgrade cycle?

Are you using custom components that I cannot source elsewhere?
If you continue using your storage beyond the date when the manufacturer can replace a failed part, is the failed part a custom part that was only sold to the manufacturer of your storage? Is the failed part one that you may be able to find used or refurbished and swap out yourself?

What is the penalty for not renewing support? Can I purchase support incidents on an as-needed basis?
How many as-needed purchases would it take before you realize, “We should have renewed support instead”? If you cannot purchase support on an as-needed basis, then you need to ask what the penalty is for reinstating support. This information helps you determine your risk tolerance and whether there is a date in the future when you can say, “We did not incur a financial loss with that risk.”

Main Image: Nutmeg Creative’s Jonathan Abrams with the company’s 80TB of EditShare storage and two spare drives. Photo Credit: Larry Closs


Storage in the Studio: VFX Studios

By Karen Maierhofer

It takes talent and the right tools to generate visual effects of all kinds, whether it’s building breathtaking environments, creating amazing creatures or crafting lifelike characters cast in a major role for film, television, games or short-form projects.

Indeed, we are familiar with industry-leading content creation tools such as Autodesk’s Maya, Foundry’s Mari and more, which, when placed in the hands of creatives, result in pure digital magic. In fact, there is quite a bit of technological magic that occurs at visual effects facilities, including one kind in particular that may not have the inherent sparkle of modeling and animation tools but is just as integral to the visual effects process: storage. Storage solutions are the unsung heroes behind most projects, working behind the scenes to accommodate artists and keep their creative juices flowing.

Here we examine three VFX facilities and their use of various storage solutions and setups as they tackle projects large and small.

Framestore
Since it was founded in 1986, Framestore has placed its visual stamp on a plethora of Oscar-, Emmy- and British Academy Film Award-winning visual effects projects, including Harry Potter, Gravity and Guardians of the Galaxy. With increasingly more projects, Framestore expanded from its original UK location in London to North American locales such as Montreal, New York, Los Angeles and Chicago, handling films as well as immersive digital experiences and integrated advertisements for iconic brands, including Guinness, Geico, Coke and BMW.

Beren Lewis

As the company and its workload grew and expanded into other areas, including integrated advertising, so, too, did its storage needs. “Innovative changes, such as virtual-reality projects, brought on high demand for storage and top-tier performance,” says NYC-based Beren Lewis, CTO of advertising and applied technologies at Framestore. “The team is often required to swiftly accommodate multiple workflows, including stereoscopic 4K and VR.”

Without hesitation, Lewis believes storage is typically the most challenging aspect of technology within the VFX workflow. “If the storage isn’t working, then neither are the artists,” he points out. Furthermore, any issues with storage can potentially lead to massive financial implications for the company due to lost time and revenue.

According to Lewis, Framestore uses its storage solution, a Pixit Media PixStor General Parallel File System (GPFS) storage cluster on NetApp E-Series hardware, for all its project data. This covers backups to remote co-location sites, video preprocessing and decompression, disaster recovery preparation, and scalable, high-performance VFX, finishing and rendering workloads.

The studio moved all the integrated advertising teams over to the PixStor GPFS clusters this past spring. Currently, Framestore has five primary PixStor clusters on NetApp E-Series in use at its offices in London, New York, LA, Chicago and Montreal.

According to Lewis, Framestore partnered with Pixit Media and NetApp to take on increasingly complicated and resource-hungry VR projects. “This partnership has provided the global integrated advertising team with higher performance and nonstop access to data,” he says. “The Pixit Media PixStor software-defined scale-out storage solution running on NetApp E-Series systems brings fast, reliable data access for the integrated advertising division so the team can embrace performance and consistency across all five sites, take a cost-effective, simplified approach to disaster recovery and have a modular infrastructure to support multiple workflows and future expansion.”

BMW

Framestore selected its current solution after reviewing several major storage technologies. It was looking for a single namespace that was very stable, while providing great performance, but it also had to be scalable, Lewis notes. “The PixStor ticked all those boxes and provided the right balance between enterprise-grade hardware and support, and open-source standards,” he explains. “That balance allowed us to seamlessly integrate the PixStor into our network, while still maintaining many of the bespoke tools and services that we had developed in-house over the years, with minimum development time.”

In particular, the storage solution provides the required high performance so that the studio’s VFX, finishing and rendering workloads can all run “full-out with no negative effect on the finishing editors’ or graphic artists’ user experience,” Lewis says. “This is a game-changing capability for an industry that typically partitions off these three workloads to keep artists from having to halt operations. PixStor running on E-Series consolidates all three workloads onto a single IT infrastructure with streamlined end-to-end production of projects, which reduces both time to completion and operational costs, while both IT acquisition and maintenance costs are reduced.”

At Framestore, integrating storage into the workflow is simple. The first step after a project is green-lit is the establishment of a new file set on the PixStor GPFS cluster, where ingested footage and all the CG artist-generated project data will live. “The PixStor is at the heart of the integrated advertising storage workflow from start to finish,” Lewis says. Because the PixStor GPFS cluster serves as the primary storage for all integrated advertising project data, the division’s workstations, renderfarm, editing and finishing stations connect to the cluster for review, generation and storage of project content.

Prior to the move to PixStor/NetApp, Framestore had been using a number of different storage offerings. According to Lewis, they all suffered from the same issues in terms of scalability and degradation of performance under render load — and that load was getting heavier and more unpredictable with every project. “We needed a technology that scaled and allowed us to maintain a single namespace but not suffer from continuous slowdowns for artists due to renderfarm load during crunch times or project delivery.”

Geico

As Lewis explains, with the PixStor/NetApp solution, processing was running up to 270,000 IOPS (I/O operations per second), which was at least several times what Framestore’s previous infrastructure would have been able to handle in a single namespace. “Notably, the development workflow for a major theme-park ride was unhindered by all the VR preprocessing, while backups to remote co-location sites synched every two hours without compromising the artist, rendering or finishing workloads,” he says. “This provided a cost-effective, simplified approach to disaster recovery, and Framestore now has a fast, tightly integrated platform to support its expansion plans.”

To stay at the top of its game, Framestore is always reviewing new technologies, and storage is often part of that conversation. To this end, the studio plans to build on the success it has had with PixStor by expanding the storage to handle some additional editorial playback and render workloads using an all-Non-Volatile Memory Express (NVMe) flash tier. Other projects include a review of object storage technology for use as a long-term, off-premises storage target for archival data.

Without question, the industry’s visual demands are rapidly changing. Not long ago, Framestore could easily predict storage and render requirements for a typical project. But that is no longer the case, and the studio finds itself working in ever-increasing resolutions and frame rates. Whereas projects may have been as small as 3TB in the recent past, nowadays the studio regularly handles multiple projects of 300TB or larger. And the storage must be shared with other projects of varying sizes and scope.

“This new ‘unknowns’ element of our workflow puts many strains on all aspects of our pipeline, but especially the storage,” Lewis points out. “Knowing that our storage can cope with the load and can scale allows us to turn our attention to the other issues that these new types of projects bring to Framestore.”

As Lewis notes, working with high-resolution images and large renderfarms creates a unique set of challenges for any storage technology, challenges not seen in many other fields. VFX will often test a storage technology well beyond what other industries are capable of. “If there’s an issue or a break point, we will typically find it in spectacular fashion,” he adds.

Rising Sun Pictures
As a contributor to the design and execution of computer-generated effects on more than 100 feature films since its inception 22 years ago, Rising Sun Pictures (RSP) has pushed the technical bar many times over in film as well as television projects. Based in Adelaide, South Australia, RSP has built a top team of VFX artists who have tackled such box-office hits as Thor: Ragnarok, X-Men and Game of Thrones, as well as the Harry Potter and Hunger Games franchises.

Mark Day

Such demanding, high-level projects require demanding, high-level effects, which, in turn, demand a high-performance, reliable storage solution capable of handling varying data I/O profiles. “With more than 200 employees accessing and writing files in various formats, the need for a fast, reliable and scalable solution is paramount to business continuity,” says Mark Day, director of engineering at RSP.

Recently, RSP installed an Oracle ZS5 storage appliance to handle this important function. This high-performance, unified storage system provides NAS and SAN cloud-converged storage capabilities that enable on-premises storage to seamlessly access Oracle Public Cloud. Its advanced hardware and software architecture includes a multi-threading SMP storage operating system for running multiple workloads and advanced data services without performance degradation. The offering also caches data on DRAM or flash cache for optimal performance and efficiency, while keeping data safely stored on high-capacity SSD (solid state disk) or HDD (hard disk drive) storage.

Previously, the studio had been using a Dell EMC Isilon storage cluster with Avere caching appliances, and it still employs that solution for parts of its workflow.

When it came time to upgrade to handle RSP’s increased workload, the facility ran a proof of concept with multiple vendors in September 2016 and benchmarked their systems. Impressed with Oracle, RSP began installation in early 2017. According to Day, RSP liked the solution’s ability to support larger packet sizes — now up to 1MB. In addition, he says its “exceptional” analytics engine gives introspection into a render job.

“It has a very appealing [total cost of ownership], and it has caching right out of the box, removing the need for additional caching appliances,” says Day. Storage is at the center of RSP’s workflow, storing all the relevant information for every department — from live-action plates that are turned over from clients, scene setup files and multi-terabyte cache files to iterations of the final product. “All employees work off this storage, and it needs to accommodate the needs of multiple projects and deadlines with zero downtime,” Day adds.

Machine Room

“Visual effects scenes are getting more complex, and in turn, data sizes are increasing. Working in 4K quadruples file sizes and, therefore, impacts storage performance,” explains Day. “We needed a solution that could cope with these requirements and future trends in the industry.”

According to Day, the data RSP deals with is broad, from small setup files to terabyte geocache files. A one-minute 2K DPX sequence is 17GB for the final pass, while 4K is 68GB. “Keep in mind this is only the final pass; a single shot could include hundreds of passes for a heavy computer-generated sequence,” he points out.
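
Those figures are easy to sanity-check. The sketch below assumes 10-bit DPX at four bytes per pixel and full-aperture rasters, and it ignores file headers, which is why it lands slightly above the quoted numbers:

```python
# Sanity check on the per-minute figures, assuming 10-bit DPX at 4 bytes
# per pixel and full-aperture rasters; headers are ignored, hence the small
# gap from the quoted 17GB/68GB.

def dpx_minute_gb(width: int, height: int, fps: int = 24) -> float:
    """Size of one minute of DPX frames, in GB."""
    return width * height * 4 * fps * 60 / 1e9

print(f"2K (2048x1556): {dpx_minute_gb(2048, 1556):.0f} GB/min")  # ~18 GB/min
print(f"4K (4096x3112): {dpx_minute_gb(4096, 3112):.0f} GB/min")  # ~73 GB/min

# Hundreds of passes multiply accordingly, e.g. 300 passes of a 10s 4K shot:
print(f"{dpx_minute_gb(4096, 3112) / 6 * 300:.0f} GB")            # ~3,671 GB
```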

Thus, high-performance storage is important to the effective operation of a visual effects company like RSP. In fact, storage helps the artists stay on the creative edge by enabling them to iterate through the creative process of crafting a shot and a look. “Artists are required to iterate their creative process many times to perfect the look of a shot, and if they experience slowdowns when loading scenes, this can have a dramatic effect on how many iterations they can produce. And in turn, this affects employees’ efficiency and, ultimately, the profitability of the company,” says Day.

Thor: Ragnarok

Most recently, RSP used its new storage solution for work on the blockbuster Thor: Ragnarok, in particular for the Val’s Flashback sequence, which was extremely complex and involved extensive lighting and texture data, as well as high-frame-rate plates (sometimes more than 1,000fps across multiple live-action footage plates). “Before our storage refresh, early versions of this shot could take up to 24 hours to render on our server farm. Since installing our new storage, we have seen this drastically reduced to six hours. That’s a 4x improvement, which is a fantastic outcome,” says Day.

Outpost VFX
A full-service VFX studio for film, broadcast and commercials, Outpost VFX, based in Bournemouth, England, has been operational since late 2012. Since that time, the facility has been growing by leaps and bounds, taking on major projects, including Life, Nocturnal Animals, Jason Bourne and 47 Meters Down.

Paul Francis

Due to this fairly rapid expansion, Outpost VFX has seen the need for increased storage capacity. “As the company grows, and as resolution increases and HDR comes in, file sizes increase, and we need much more capacity to deal with that effectively,” says CTO Paul Francis.

When setting up the facility five years ago, the decision was made to go with PixStor from Pixit Media and Synology’s NAS for its storage solution. “It’s an industry-recognized solution that is extremely resilient to errors. It’s fast, robust and the team at Pixit provides excellent support, which is important to us,” says Francis.

Foremost, the solution had to provide high capacity and high speeds. “We need lots of simultaneous connections to avoid bottlenecks and ensure speedy delivery of data,” Francis adds. “This is the only one we’ve used, really. It has proved to be stable enough to support us through our growth over the last couple of years — growth that has included a physical office move and an increase in artist capacity to 80 seats.”

Outpost VFX mainly works with image data and project files for use with Autodesk’s Maya, Foundry’s Nuke, Side Effects’ Houdini and other VFX and animation tools. The challenge this presents is twofold, both large and small: concern for large file sizes, and problems the group can face with small files, such as metadata. Francis explains: “Sequentially loading small files can be time-consuming due to the current technology, so moving to something that can handle both of these areas will be of great benefit to us.”

Locally, artists use a mix of HDDs from a number of different manufacturers to store reference imagery and so forth — older-generation PCs have mostly Western Digital HDDs while newer PCs have generic SSDs. When replacing or upgrading equipment, Outpost VFX uses Samsung 900 Series SSDs, depending on the required performance and current market prices.

Life

Like many facilities, Outpost VFX is always weighing its options when it comes to finding the best solution for its current and future needs. Presently, it is looking at splitting up some of its storage solutions into smaller segments for greater resilience. “When you only have one storage solution and it fails, everything goes down. We’re looking to break our setup into smaller, faster solutions,” says Francis.

Additionally, security is a concern for Outpost VFX when it comes to its clients. According to Francis, certain shows need to be annexed, meaning the studio will need a separate storage solution outside of its main network to handle that data.

When Outpost VFX begins a job, the group ingests all the plates it needs to work on, and they reside in a new job folder created by production and assigned to a specific drive for active jobs. This folder then becomes the go-to for all assets, elements and shot iterations created throughout the production. For security purposes, these areas of the server are only visible to and accessible by artists, who in turn cannot access the Internet; this ensures that the files are “watertight and immune to leaks,” says Francis, adding that with PixStor, the studio is able to set up different partitions for different areas that artists can jump between easily.
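As a rough illustration of that ingest pattern, the sketch below creates a per-job folder tree on an active-jobs volume and locks it down so that only the owning group (the artists) can see inside. The mount point, subfolder layout and permission scheme are assumptions made for the example, not Outpost VFX’s actual configuration.

```python
import os
import shutil
from pathlib import Path

# Hypothetical mount point for the drive assigned to active jobs.
ACTIVE_JOBS = Path("/mnt/active_jobs")
# Hypothetical layout for assets, elements and shot iterations.
SUBDIRS = ("plates", "assets", "elements", "shots")

def create_job(job_name: str, plates_src: Path) -> Path:
    """Create a new job folder, ingest the plates, and restrict access.

    Mode 0o770 gives the owner and the artists' group full access while
    making the tree invisible to everyone else, approximating the
    "watertight" partitions described above.
    """
    job = ACTIVE_JOBS / job_name
    for sub in SUBDIRS:
        (job / sub).mkdir(parents=True, exist_ok=True)
        os.chmod(job / sub, 0o770)
    os.chmod(job, 0o770)

    # Ingest all incoming plates into the job's plates folder.
    for src in plates_src.iterdir():
        if src.is_file():
            shutil.copy2(src, job / "plates" / src.name)
    return job
```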

How important is storage to Outpost VFX? “Frankly, there’d be no operation without storage!” Francis says emphatically. “We deal with hundreds of terabytes of data in visual effects, so having high-capacity, reliable storage available to us at all times is absolutely essential to ensure a smooth and successful operation.”

47 Meters Down

Because the studio delivers visual effects across film, TV and commercials simultaneously, storage is an important factor no matter what the crew is working on. A recent film project like 47 Meters Down required the full gamut of visual effects work, as Outpost VFX was the sole vendor for the project. So, the studio needed the space and responsiveness of a storage system that enabled its team to deliver more than 420 shots, a number of which featured heavy 3D builds and multiple layers of render elements.

“We had only about 30 artists at that point, so having a stable solution that was easy for our team to navigate and use was crucial,” Francis points out.

Main Image: From Outpost VFX’s Domestos commercial out of agency MullenLowe London.

Storage in the Studio: Post Houses

By Karen Maierhofer

There are many pieces that go into post production, from conform, color, dubbing and editing to dailies and more. Depending on the project, a post house can be charged with one or two pieces of this complex puzzle, or even the entire workload. No matter the job, the tasks must be done on time and on budget. Unforeseen downtime is unacceptable.

That is why when it comes to choosing a storage solution, post houses are very particular. They need a setup that is secure, reliable and can scale. For them, one size simply does not fit all. They all want a solution that fits their particular needs and the needs of their clients.

Here, we look at three post facilities of various sizes and range of services, and the storage solutions that are a good fit for their business.

Liam Ford

Sim International
The New York City location of Sim has been in existence for over 20 years, operating under the former name Post Factory NY until about a month ago, when Sim rebranded it and its seven other founding post companies as Sim International. Whether known by its new moniker or its previous one, the facility has grown to become a premier space in the city for offline editorial teams, as well as one of the top high-end finishing studios in town; the list of feature films and episodic shows cut and finished at Sim is quite lengthy. And this past year, Sim launched a boutique commercial finishing division.

According to senior VP of post engineering Liam Ford, the vast majority of the projects at the NYC facility are 4K, much of which is episodic work. “So, the need is for very high-capacity, very high-bandwidth storage,” Ford says. And because the studio is located in New York, where space is limited, that same storage must be as dense as possible.

For its finishing work, Sim New York is using a Quantum Xcellis SAN, a StorNext-based appliance system that can be specifically tuned for 4K media workflow. The system, which was installed approximately two years ago, runs on a 16Gb Fibre Channel network. Almost half a petabyte of storage fits into just a dozen rack units. Meanwhile, an Avid Nexis handles the facility’s offline work.

The Sim SAN serves as the primary playback system for all the editing rooms. While there are SSDs in some of the workstations for caching purposes, the scheduling demands of clients do not leave much time for staging material back and forth between volumes, according to Ford. So, everything gets loaded back to the SAN, and everything is played back from the SAN.

As Ford explains, content comes into the studio from a variety of sources, whether drives, tapes or Internet transfers, and all of that is loaded directly onto the SAN. An online editor then soft-imports all that material into his or her conform application and creates an edited, high-resolution sequence that is rendered back to the SAN. Once at the SAN, that edited sequence is available for a supervised playback session with the in-house colorists, finishing VFX artists and so forth.

“The point is, our SAN is the central hub through which all content at all stages of the finishing process flows,” Ford adds.

Before installing the Xcellis system, the facility had been using local workstation storage only, but the huge growth in the finishing division prompted the transition to the shared SAN file system. “There’s no way we could do the amount of work we now have, and with the flexibility our clients demand, using a local storage workflow,” says Ford.

When the change became necessary, there were not many options that met Sim’s demands for high bandwidth and reliable streaming, Ford points out; Quantum’s StorNext and SGI’s CXFS were the main shared file systems for the M&E space. Sim decided to go with Quantum because of the work the vendor has done in recent years toward improving the M&E experience, as well as the ease of installing the new system.

Nevertheless, with the advent of 25Gb and 100Gb Ethernet, Sim has been closely monitoring the high-performance NAS space. “There are a couple of really good options out there right now, and I can see us seriously looking at those products in the near future as, at the very least, an augmentation to our existing Fibre Channel-based storage,” Ford says.

At Sim, editors deal with a significant amount of Camera Raw, DPX and OpenEXR data. “Depending on the project, we could find ourselves needing 1.5GB/sec or more of bandwidth for a single playback session, and that’s just for one show,” says Ford. “We typically have three or four [shows] playing off the SAN at any one time, so the bandwidth needs are huge!”
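Ford’s figure is easy to sanity-check with some back-of-the-envelope arithmetic. The sketch below assumes uncompressed 16-bit RGB 4K DPX frames at 24fps and ignores per-frame header overhead, so real-world numbers run somewhat higher:

```python
# Rough playback bandwidth for one uncompressed 16-bit RGB 4K DPX stream.
width, height = 4096, 2160          # 4K DCI frame
channels, bytes_per_channel = 3, 2  # RGB at 16 bits per channel
fps = 24

frame_bytes = width * height * channels * bytes_per_channel
stream_gbps = frame_bytes * fps / 1e9  # decimal GB/sec

print(f"frame size: {frame_bytes / 1e6:.1f} MB")    # ~53.1 MB
print(f"one stream: {stream_gbps:.2f} GB/sec")      # ~1.27 GB/sec
print(f"four shows: {4 * stream_gbps:.2f} GB/sec")  # ~5.10 GB/sec
```

With headers and grading layers on top, a single session edges past the 1.5GB/sec Ford cites, and three or four concurrent shows put the aggregate demand in the 5-6GB/sec range, in line with the bandwidth needs he describes.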

Master of None

And the editors’ needs continue to evolve, as does their need for storage. “We keep needing more storage, and we need it to be faster and faster. Just when storage technology finally got to the point that doing 10-bit 2K shows was pretty painless, everyone started asking for 16-bit 4K,” Ford points out.

Recently, Sim completed work on the feature American Made and the Netflix show Master of None, in addition to a number of other episodic projects. For these and other shows, the SAN acts as the central hub around which the color correction, online editing, visual effects and deliverables are created.

“The finishing portion of the post pipeline deals exclusively with the highest-quality content available. It used to be that we’d do our work directly from a film reel on a telecine, but those days are long past,” says Ford. “You simply can’t run an efficient finishing pipeline anymore without a lot of storage.”

DigitalFilm Tree
DigitalFilm Tree (DFT) opened its doors in 1999 and now occupies a 10,000-square-foot space in Universal City, California, offering full round-trip post services, including traditional color grading, conform, dailies and VFX, as well as post system rentals and consulting services.

While Universal City may be DFT’s primary location, it has dozens of remote satellite systems — mini post houses for production companies and studios — around the world. Those remote post systems, along with the increase in camera resolution (Alexa, Raw, 4K), have multiplied DFT’s storage needs and brought about a sea change in the facility’s storage solution.

According to CEO Ramy Katrib, most companies in the media and entertainment industry historically have used block storage, and DFT was no different. But four years ago, the company began looking at object storage, which Silicon Valley companies like Dropbox and AWS use to store large assets. After significant research, Katrib felt it was a good fit for DFT as well, believing it to be a more economical way to build petabytes of storage than proprietary block storage.

Ramy Katrib

“We were unique among post houses in that respect,” says Katrib. “We were different from many of the other companies using object storage — they were tech, financial institutions, government agencies, health care; we were the rare one from M&E — but our need for extremely large, scalable and resilient storage was the same as theirs.”

DFT’s primary work centers around scripted television — an industry segment that continues to grow. “We do 15-plus television shows at any given time, and we encourage them to shoot whatever they like, at whatever resolution they desire,” says Katrib. “Most of the industry relies on LTO to back up camera raw materials. We do that too, but we also encourage productions to take advantage of our object storage, and we will store everything they shoot and not punish them for it. It is a rather utopian workflow. We now give producers access to all their camera raw material. It is extremely effective for our clients.”

Over four years ago, DFT began using OpenStack, the open-source cloud platform that manages large pools of compute, storage and networking resources, to design and build its own object storage system. “We have our own software developers and people who built our hardware, and we are able to adjust to the needs of our clients and the needs of our own workflow,” says Katrib.

DFT designs its custom PC- and Linux-based post systems, using chassis from Super Micro, CPUs from Intel and graphics cards from Nvidia. Storage is provided by a number of companies, including spinning-disk and SSD solutions from Seagate Technology and Western Digital.

DFT then deploys remote dailies systems worldwide, in proximity to where productions are shooting. Each day, clients plug their production hard drives (containing all camera raw files) into DFT’s remote dailies system. From DFT’s facility, dailies technicians remotely produce editorial, viewing and promo dailies files and transfer them to their destinations worldwide. All the while, the camera raw files are transported from the production location to DFT’s ProStack “massively scalable object storage.” In this case, “private cloud storage” consists of servers DFT designed to house all the camera raw materials, managed by DFT post professionals who support clients with access to their files.
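ProStack itself is DFT’s own design, but because it is built on OpenStack, whose object store is the Swift component, the shape of that last step can be sketched with the standard python-swiftclient bindings. The endpoint, credentials and container naming below are hypothetical placeholders, not DFT’s actual configuration.

```python
from pathlib import Path
from swiftclient.client import Connection  # pip install python-swiftclient

# Hypothetical ProStack endpoint and credentials.
conn = Connection(
    authurl="https://prostack.example.com/auth/v1.0",
    user="dailies_tech",
    key="not-a-real-key",
    retries=5,
)

def archive_camera_raw(shoot_day: str, card_root: Path) -> None:
    """Push an entire camera card to object storage, one object per file.

    Object names preserve the card's directory layout so a later conform
    or color session can pull exactly the clips it needs.
    """
    container = f"camera-raw-{shoot_day}"
    conn.put_container(container)  # no-op if it already exists
    for path in sorted(card_root.rglob("*")):
        if path.is_file():
            with open(path, "rb") as f:
                conn.put_object(
                    container,
                    str(path.relative_to(card_root)),
                    contents=f,
                    content_type="application/octet-stream",
                )

# Example: archive_camera_raw("2018-03-14", Path("/mnt/camera_card"))
```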

DFT provides color grading for Great News.

Recently, storage vendors such as Quantum and Avid have begun building and branding their own object storage solutions, not unlike what DFT has constructed at its Universal City locale. The reason is simple: Object storage provides a clear advantage in reliability and cost. “We looked at it because the storage we were paying for, proprietary block storage, was too expensive to house all the data our clients were generating. And resolutions are only going up. So, every year we needed more storage,” Katrib explains. “We needed a solution that could scale with the practical reality we were living.”

About four years ago, when DFT started becoming a software company, one of its developers brought OpenStack to Katrib’s attention. “The open-source platform provided several storage solutions, networking capabilities and cloud compute capabilities for free,” he points out. Of course, the solution is not a panacea: it requires a company to customize the offering for its own needs and even contribute back to the OpenStack community. But then again, that requirement enables DFT to evolve with the changing needs of its clients without waiting for a manufacturer to do it.

“It does not work out of the box like a solution from IBM, for instance. You have to develop around it,” Katrib says. “You have to have a lab mentality, designing your own hardware and software based on pain points in your own environment. And, sometimes it fails. But when you do it correctly, you realize it is an elegant solution.” There are, however, vibrant communities, user groups and tech summits built around the technology, whose members are willing to assist and collaborate.

DFT has evolved its object storage solution, extending it from an initial few hundred terabytes — which is nothing to sneeze at — to hundreds of petabytes of storage. DFT also designs remote post systems and storage solutions for customers in remote locations around the world. And those remote locations can be as simple as a workstation running applications such as Blackmagic’s Resolve or Adobe After Effects, connected to object storage housing all the client’s camera raw material.

The key, Katrib notes, is to have great post and IT pros managing the projects and the system. “I can now place a remote post system with a calibrated 4K monitor and object storage housing the camera raw material, and I can bring the post process to you wherever you are, securely,” he adds. “From wherever you are, you can view the conform, color and effects, and sign off on the final timeline, as if you were at DFT.”

DFT posts American Housewife

In addition to the object storage, DFT is also using Facilis TerraBlock and Avid Nexis systems locally and on remote installs. The company uses those commercial solutions because they provide benefits, including storage performance and feature sets that optimize certain software applications. As Katrib points out, storage is not one flavor fits all, and different solutions work better for certain use cases. In DFT’s case, the commercial storage products provide the performance for playback of multiple 4K streams across the company’s color, VFX and conform departments, while its ProStack high-capacity object storage comes into play for storing the entirety of the files produced by its clients.

“Rather than retrieve files from an LTO tape, as most do when working on a TV series, with object storage, the files are readily available, saving hours in retrieval time,” says Katrib.

Currently, DFT is working on a number of television series, including Great News (color correction only) and Good Behavior (dailies only). For other shows, such as the Roseanne revival, NCIS: Los Angeles, American Housewife and more, it is performing full services, including visual effects, conform, color, dailies and dubbing, and in some instances even equipment rental.

As the work expands, DFT is looking to extend upon its storage and remote post systems. “We want to have more remote systems where you can do color, conform, VFX, editorial, wherever you are, so the DP or producer can have a monitor in their office and partake in the post process that’s particular to them,” says Katrib. “That is what we are scaling as we speak.”

Broadway Video
Broadway Video is a global media and entertainment company that has been primarily engaged in post-production services for television, film, music, digital and commercial projects for the past four decades. Located in New York and Los Angeles, the facility offers one-stop tools and talent for editorial, audio, design, color grading, finishing and screening, as well as digital file storage, preparation, aggregation and delivery of digital content across multiple platforms.

Since its founding in 1979, Broadway Video has grown into an independent studio. During this timeframe, content has evolved greatly, especially in terms of resolution, to where 4K and HD content — including HDR and Atmos sound — is becoming the norm. “Staying current and dealing with those data speeds are necessary in order to work fluidly on a 4K project at 60p,” says Stacey Foster, president and managing director, Broadway Video Digital and Production. “The data requirements are pretty staggering for throughput and in terms of storage.”

Stacey Foster

This led Broadway Video to begin searching a year ago for a storage system that would meet its needs now as well as in the foreseeable future; in short, it needed a system that is scalable. Its solution: an all-flash Hitachi Vantara Virtual Storage Platform (VSP) G series. Although quite expensive, a flash-based system is “ridiculously powerful,” says Foster. “Technology is always marching forward, and flash-based systems are going to become the norm; they are already the norm at the high end.”

Foster has had a relationship with Hitachi for more than a decade and has witnessed the company’s growth into M&E from the medical and financial worlds, where it has long been firmly ensconced. According to Foster, Hitachi’s VSP series will enhance Broadway Video’s 4K offerings and transform internal operations by allowing quick-turnaround, efficient and cost-effective production, post production and delivery of television shows and commercials. And the system offers workload scalability, allowing the company to expand and meet the changing needs of the digital media production industry.

“The systems we had were really not that capable of handling DPX files that were up to 50TB, and Hitachi’s VSP product has been handling them effortlessly,” says Foster. “I don’t think other [storage] manufacturers can say that.”

Foster explains that as Broadway Video continued to expand its support of the latest 4K content and technologies, it became clear that a more robust, optimized storage solution was needed as the company moved in this new direction. “It allows us to look at the future and create a foundation to build our post production and digital distribution services on,” Foster says.

Broadway Video’s work with Netflix sparked the need for a more robust system. Recently, Comedians in Cars Getting Coffee, an Embassy Row production, transitioned to Netflix, and one of the requirements from its new home was the move from 2K to 4K. “It was the perfect reason for us to put together a 4K end-to-end workflow that satisfies this client’s requirements for technical delivery,” Foster points out. “The bottleneck in color and DPX file delivery is completely lifted, and the post staff is able to work quickly and sometimes even faster than in real time when necessary to deliver the final product, with its very large files. And that is a real convenience for them.”

Broadway Video’s Hitachi Vantara Virtual Storage Platform G series.

As a full-service post company, Broadway Video in New York operates 10 production suites running Avid, Adobe Premiere and Blackmagic Resolve, as well as three full mixing suites. “We can have all our workstations simultaneously hit the [storage] system hard and not have the system slow down. That is where Hitachi’s VSP product has set itself apart,” Foster says.

For Comedians in Cars Getting Coffee, like many projects Broadway Video encounters, the cut is done in a lower-resolution Avid file. The 4K media is then imported into the Resolve platform, so the show is colored from its original material in its native format. In terms of storage, once the material is past the cutting stage, it is all stored on the Hitachi system. Once the project is completed, it is handed off on spinning disk for archiving, though Foster foresees a limited future for spinning disks, given their inherently limited life span — “anything that spins breaks down,” he adds.

All the suites are fully HD-capable and are tied to shared SAN and Avid ISIS storage; because work on most projects is shared between editing suites, there is little need for local storage. Currently, Broadway Video is still using its previous Avid ISIS products but is slowly transitioning to the Hitachi system only. Foster estimates that by this time next year the transition will be complete, and the staff will no longer have to support multiple systems. “The way the systems are set up right now, it’s just easier to cut on ISIS using the Avid workstations. But that will soon change,” he says.

Other advantages the Hitachi system provides are stability and uptime, which Foster maintains is “pretty much 100 percent guaranteed.” As he points out, there is no such thing as downtime in banking and medical, where Hitachi proved its mettle, and bringing that stability to the M&E industry “has been terrific.”

Of course, that is in addition to bandwidth and storage capacity, both of which are expandable. “There is no limit to the number of petabytes you can have attached,” notes Foster.

Considering that the majority of calls received by Broadway Video center on post work for 4K-based workflows, the new storage solution is a necessary technical addition to the facility’s other state-of-the-art equipment. “In the environment we work in, we spend more and more time on the creative side in terms of the picture cutting and sound mixing, and then it is a rush to get it out the door. If it takes you days to import, color correct, export and deliver — especially with the file sizes we are talking about — then having a fast system with the kind of throughput and bandwidth that is necessary really lifts the burden for the finishing team,” Foster says.

He continues: “The other day the engineers were telling me we were delivering 20 times faster using the Hitachi technology in the final cutting and coloring of a Jerry Seinfeld stand-up special we had done in 4K,” resulting in a DPX file that was about 50TB. “And that is pretty significant,” Foster adds.

Main Image: DigitalFilm Tree’s senior colorist Patrick Woodard.