
Storage Roundtable

Production, post, visual effects, VR… you can’t do it without a strong infrastructure. This infrastructure must include storage and products that work hand in hand with it.

This year we spoke to a sampling of those providing storage solutions — of all kinds — for media and entertainment, as well as a storage-agnostic company that helps get your large files from point A to point B safely and quickly.

We gathered questions from real-world users — things that they would ask of these product makers if they were sitting across from them.

Quantum’s Keith Lissak
What kind of storage do you offer, and who is the main user of that storage?
We offer a complete storage ecosystem based around our StorNext shared storage and data management solution, including Xcellis high-performance primary storage, Lattus object storage, and Scalar archive and cloud. Our customers include broadcasters, production companies, post facilities, animation/VFX studios, NCAA and professional sports teams, ad agencies and Fortune 500 companies.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Xcellis features continuous scalability and can be sized to precisely fit current requirements and scaled to meet future demands simply by adding storage arrays. Capacity and performance can grow independently, and no additional accelerators or controllers are needed to reach petabyte scale.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
We don’t have exact numbers, but a growing number of our customers are using cloud storage. Our FlexTier cloud-access solution can be used with both public (AWS, Microsoft Azure and Google Cloud) and private (StorageGrid, CleverSafe, Scality) storage.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
We offer a range of StorNext 4K Reference Architecture configurations for handling demanding workflows, including 4K, 8K and VR. Our customers can choose systems with small or large form-factor HDDs, up to an all-flash SSD system with the ability to handle 66 simultaneous 4K streams.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might users notice when connecting on these different platforms?
StorNext systems are OS-agnostic and can work with all Mac, Windows and Linux clients with no discernible difference.

Zerowait’s Rob Robinson
What kind of storage do you offer, and who is the main user of that storage?
Zerowait’s SimplStor storage product line provides storage administrators with the scalable, flexible and reliable on-site storage they need for their growing storage requirements and workloads. SimplStor’s platform can be configured to work in Linux or Windows environments, and we have several customers with multiple petabytes in their data centers. SimplStor systems have been used in VFX production for many years, and we also provide solutions for video creation and many other large data environments.

Additionally, Zerowait specializes in NetApp service, support and upgrades, and we have provided many companies in the media and VFX businesses with off-lease, transferable licensed NetApp storage solutions. Zerowait provides storage hardware, engineering and support for customers that need reliable, high-capacity storage. Our engineers support customers with private cloud storage as well as customers that offer public cloud storage on our storage platforms. We do not provide any public cloud services to our customers.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Our customers typically need on-site storage for processing speed and security. We have developed many techniques and monitoring solutions that we have incorporated into our service and hardware platforms. Our SimplStor and NetApp customers need storage infrastructures that scale into the multiple petabytes, and often require GigE, 10GigE or a NetApp FC connectivity solution. For customers that can’t handle the bandwidth constraints of the public Internet to process their workloads, Zerowait has the engineering experience to help them get the most out of their on-premises storage.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Many of our customers use public cloud solutions for their non-proprietary data storage while using our SimplStor and NetApp hardware and support services for their proprietary, business-critical, high-speed and regulatory storage solutions where data security is required.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
SimplStor’s density and scalability make it perfect for use in HD and higher resolution environments. Our SimplStor platform is flexible and we can accommodate customers with special requests based on their unique workloads.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might users notice when connecting on these different platforms?
Zerowait’s NetApp and SimplStor platforms are compatible with both Linux (NFS) and Windows (CIFS) environments. OS X is supported in some applications. Every customer has a unique infrastructure and set of applications they are running. Customers will see differences in performance, but our flexibility allows us to customize a solution to maximize the throughput to meet workflow requirements.

Signiant’s Mike Nash
What kind of storage works with your solution, and who is the main user or users of that storage?
Signiant’s Media Shuttle file transfer solution is storage agnostic, and for nearly 200,000 media pros worldwide it is the primary vehicle for sending and sharing large files. Media Shuttle doesn’t provide storage itself, yet many users think of their data as being “in Media Shuttle.” In reality, their files are located in whatever storage their IT department has designated. This might be the company’s own on-premises storage, or it could be their AWS or Microsoft Azure cloud storage tenancy. Our users employ a Media Shuttle portal to send and share files; they don’t have to think about where the files are stored.

How are you making sure your products are scalable so people can grow either their use or the bandwidth of their networks (or both)?
Media Shuttle is delivered as a cloud-native SaaS solution, so it can be up and running immediately for new customers, and it can scale up and down as demand changes. The servers that power the software are managed by our DevOps team and monitored 24×7 — and the infrastructure is auto-scaling and instantly available. Signiant does not charge for bandwidth, so customers can use our solutions with any size pipe at no additional cost. And while Media Shuttle can scale up to support the needs of the largest media companies, the SaaS delivery model also makes it accessible to even the smallest production and post facilities.

How many of the people buying your solutions are using them with cloud storage (i.e. AWS or Microsoft Azure)?
Cloud adoption within the M&E industry remains uneven, so it’s no surprise that we see a mixed picture when we look at the storage choices our customers make. Since we first introduced the cloud storage option, there has been constant month-over-month growth in the number of customers deploying portals with cloud storage. It’s not yet at parity with on-prem storage, but the growth trends are clear.

On-premises content storage is very far from going away. We see many Media Shuttle customers taking a hybrid approach, with some portals using cloud storage and others using on-prem storage. It’s also interesting to note that when customers do choose cloud storage, we increasingly see them use both AWS and Azure.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
We can move any size of file. As media files continue to get bigger, the value of our solutions continues to rise. Legacy solutions such as FTP, which lack any file acceleration, will grind things to a halt if 4K, 8K, VR and other huge files need to be moved between locations. And consumer-oriented sharing services like Dropbox and Google Drive become non-starters with these types of files.

What platforms does your system connect to (e.g. Mac OS X, Windows, Linux), and what differences might end-users notice when connecting on these different platforms?
Media Shuttle is designed to work with a wide range of platforms. Users simply log in to portals using any web browser. In the background, a native application installed on the user's personal computer provides the acceleration functionality. This app works with Windows and Mac OS X systems.

On the IT side of things, no installed software is required for portals deployed with cloud storage. To connect Media Shuttle to on-premises storage, the IT team will run Signiant software on a computer in the customer’s network. This server-side software is available for Linux and Windows.

NetApp’s Jason Danielson
What kind of storage do you offer, and who is the main user of that storage?
NetApp has a wide portfolio of storage and data management products and services. We have four fundamentally different storage platforms — block, file, object and converged infrastructure. We use these platforms and our data fabric software to create a myriad of storage solutions that incorporate flash, disk and cloud storage.

1. NetApp E-Series block storage platform is used by leading shared file systems to create robust and high-bandwidth shared production storage systems. Boutique post houses, broadcast news operations and corporate video departments use these solutions for their production tier.
2. NetApp FAS network-attached file storage runs NetApp ONTAP. This platform supports many thousands of applications for tens of thousands of customers in virtualized, private cloud and hybrid cloud environments. In media, this platform is designed for extreme random-access performance. It is used for rendering, transcoding, analytics, software development and Internet-of-Things pipelines.
3. NetApp StorageGrid Webscale object store manages content and data for back-up and active archive (or content repository) use cases. It scales to dozens of petabytes, billions of objects and currently 16 sites. Studios and national broadcast networks use this system and are currently moving content from tape robots and archive silos to a more accessible object tier.
4. NetApp SolidFire converged and hyper-converged platforms are used by cloud providers and enterprises running large private clouds that need quality of service across hundreds to thousands of applications. Global media enterprises appreciate the ease of scaling, the simplicity of QoS quota setting and the overall ease of maintenance for the largest-scale deployments.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
The four platforms mentioned above scale up and scale out to support the largest media operations in the world and beyond. So our challenge is not scalability for large environments but appropriate sizing for individual environments. We are careful to design storage and data management solutions that are appropriate to media operations’ individual needs.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Seven years ago, NetApp set out on a major initiative to build the data fabric. We are well on the path now with products designed specifically for hybrid cloud (a combination of private cloud and public cloud) workloads. While the uptake in media and entertainment is slower than in other industries, we now have hundreds of customers that use our storage in hybrid cloud workloads, from backup to burst compute.

We help customers wanting to stay cloud-agnostic by using AWS, Microsoft Azure, IBM Cloud and Google Cloud Platform flexibly, as project and pricing demands dictate. AWS, Microsoft Azure, IBM, Telstra and ASE, along with another hundred or so cloud storage providers, include NetApp storage and data management products in their service offerings.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
For higher-bandwidth (higher-bitrate) video production, we’ll generally architect a solution with our E-Series storage under either Quantum StorNext or PixitMedia PixStor. Since 2012, when the NetApp E5400 enabled the mainstream adoption of 4K workflows, the E-Series platform has seen three generations of upgrades, and the controllers are now more than 4x faster. The chassis has remained the same through these upgrades, so some customers have chosen to put the latest controllers into their existing chassis to improve bandwidth or to take advantage of faster network interconnects like 16Gb Fibre Channel. Many post houses continue to use Fibre Channel to the workstation for these higher-bandwidth video formats, while others have chosen to move to Ethernet (40 and 100 Gigabit). As flash (SSD) continues to drop in price, it is starting to be used for video production in all-flash arrays or in hybrid configurations. We recently showed our new EF570 all-flash array supporting NVM Express over Fabrics (NVMe-oF) technology, providing 21GB/s of bandwidth and 1 million IOPS with less than 100µs of latency. This technology is initially targeted at supercomputing use cases, and we will see if it is adopted over the next couple of years for UHD production workloads.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.), and what differences might end-users notice when connecting on these different platforms?
NetApp maintains a compatibility matrix table that delineates our support of hundreds of client operating systems and networking devices. Specifically, we support Mac OS X, Windows and various Linux distributions. Bandwidth expectations differ between these three operating systems and Ethernet and Fibre Channel connectivity options, but rather than make a blanket statement about these, we prefer to talk with customers about their specific needs and legacy equipment considerations.

G-Technology’s Greg Crosby
What kind of storage do you offer, and who is the main user of that storage?
Western Digital’s G-Technology products provide high-performing and reliable storage solutions for end-to-end creative workflows, from capture and ingest to transfer and shuttle, all the way to editing and final production.

The G-Technology brand supports a wide range of users for both field and in-studio work, with solutions that span a number of portable handheld drives — oftentimes used to back up content on the go — all the way to in-studio drives that offer capacities up to 144TB. We recognize that each creative has their own unique workflow, and some embrace the use of cloud-based products. We are proud to be a companion to those cloud services, whether as a central location to store raw content or as a conduit to feed cloud features and capabilities.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Our line ranges from small portable and rugged drives to large multi-bay RAID and NAS solutions, covering all aspects of the media and entertainment industry. By integrating the latest interface technologies, such as USB-C and Thunderbolt 3, our storage solutions take full advantage of fast file transfers.

We make it easy to take a ton of storage into the field. The G-Speed Shuttle XL drive is available in capacities up to 96TB, and an optional Pelican case with handle makes it easy to transport and mitigates any concerns about running out of storage. We recently launched the G-Drive mobile SSD R-Series. As a solid-state drive, it is built to withstand a three-meter (nine-foot) drop and to endure accidental bumps.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Many of our customers are using cloud-based solutions to complement their creative workflows. We find that most of our customers use our solutions as primary storage, or to easily transfer and shuttle their content, since the cloud is not an efficient way to move large amounts of data. We see cloud capabilities as a great way to share project files and low-resolution content, collaborate with others on projects, and distribute and share a variety of deliverables.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Today’s camera technology enables capture not only at higher resolutions but also at higher frame rates with more dynamic imagery. We have solutions that can easily support multi-stream 4K, 8K and VR workflows or multi-layer photo and visual effects projects. G-Technology is well positioned to support these creative workflows as we integrate the latest technologies into our storage solutions. From small portable and rugged SSDs to high-capacity, fast multi-drive RAID solutions with the latest Thunderbolt 3 and USB-C interface technology, we are ready to tackle a variety of creative endeavors.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.), and what differences might users notice when connecting on these different platforms?
Our complete portfolio of external storage solutions works for Mac and PC users alike. With native support for Apple Time Machine, these solutions are formatted for Mac OS out of the box but can be easily reformatted for Windows users. G-Technology also has a number of strategic partnerships with technology vendors, including Apple, Atomos, Red Camera, Adobe and Intel.

Panasas’ David Sallak
What kind of storage do you offer, and who is the main user of that storage?
Panasas ActiveStor is an enterprise-class, easy-to-deploy parallel scale-out NAS (network-attached storage) solution that combines flash and SATA storage with a clustered file system, accessed via a high-availability client protocol driver that also supports standard protocols.

The ActiveStor storage cluster consists of the ActiveStor Director (ASD-100) control engine, the ActiveStor Hybrid (ASH-100) storage enclosure, the PanFS parallel file system, and the DirectFlow parallel data access protocol for Linux and Mac OS.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
ActiveStor is engineered to scale easily. There are no specific architectural limits for how widely the ActiveStor system can scale out, and adding more workloads and more users is accomplished without system downtime. The latest release of ActiveStor can grow either storage or bandwidth needs in an environment that lets metadata responsiveness, data performance and data capacity scale independently.

For example, we quote capacity and performance numbers for a Panasas storage environment containing 200 ActiveStor Hybrid 100 storage node enclosures with five ActiveStor Director 100 units for file system metadata management. This configuration would result in a single 57PB namespace delivering 360GB/s of aggregate bandwidth and in excess of 2.6M IOPS.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Panasas customers deploy workflows and workloads in ways that are well-suited to consistent on-site performance or availability requirements, while experimenting with remote infrastructure components such as storage and compute provided by cloud vendors. The majority of Panasas customers continue to explore the right ways to leverage cloud-based products in a cost-managed way that avoids surprises.

This means that workflow requirements for file-based storage continue to take precedence when processing real-time video assets. At the same time, customers expect storage vendors to support the use of Panasas in cloud environments, where the benefits of a parallel clustered data architecture can exploit the agility of underlying cloud infrastructure without compromising expectations for availability and consistency of performance.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Panasas ActiveStor is engineered to deliver superior application responsiveness via our DirectFlow parallel protocol for applications working in compressed UHD, 4K and higher-resolution media formats. Compared to traditional file-based protocols such as NFS and SMB, DirectFlow provides better granular I/O feedback to applications, resulting in client application performance that aligns well with the compressed UHD, 4K and other extreme-resolution formats.

For uncompressed data, Panasas ActiveStor is designed to support large-scale rendering of these data formats via distributed compute grids such as render farms. The parallel DirectFlow protocol results in better utilization of CPU resources in render nodes when processing frame-based UHD, 4K and higher-resolution formats, resulting in less wall clock time to produce these formats.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might users notice when connecting on these different platforms?
Panasas ActiveStor supports macOS and Linux with our higher-performance DirectFlow parallel client software. We support all client platforms via NFS or SMB as well.

Users would notice that when connecting to Panasas ActiveStor via DirectFlow, the I/O experience is as if users were working with local media files on internal drives, compared to working with shared storage where normal protocol access may result in the slight delay associated with open network protocols.

Facilis’ Jim McKenna
What kind of storage do you offer, and who is the main user of that storage?
We have always focused on shared storage for the facility. It’s high-speed attached storage that is good for anyone who’s cutting HD or 4K. Our workflow and management features really set us apart from basic network storage. We also connect to the cloud through software that uses all the latest APIs.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Most of our large customers have been with us for several years, and many started pretty small. Our method of scalability is flexible in that you can decide to simply add expansion drives, add another server, or add a head unit that aggregates multiple servers. Each method increases bandwidth as well as capacity.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Many customers use cloud, either through a corporate gateway or directly uploaded from the server. Many cloud service providers have ways of accessing the file locations from the facility desktops, so they can treat it like another hard drive. Alternatively, we can schedule, index and manage the uploads and downloads through our software.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Facilis is known for speed. We still support Fibre Channel when everyone else, it seems, has moved completely to Ethernet, because it provides better speeds for intense 4K-and-beyond workflows. We can handle UHD playback over 10Gb Ethernet, and up to 4K full-frame DPX at 60p through Fibre Channel on a single server enclosure.

What platforms do your systems connect to (e.g. Mac OS X, Windows, Linux, etc.)? And what differences might users notice when connecting on these different platforms?
We have a custom multi-platform shared file system, not NAS (network attached storage). Even though NAS may be compatible with multiple platforms by using multiple sharing methods, permissions and optimization across platforms is not easily manageable. With Facilis, the same volume, shared one way with one set of permissions, looks and acts native to every OS and even shows up as a local hard disk on the desktop. You can’t get any more cross-platform compatible than that.

SwiftStack’s Mario Blandini
What kind of storage do you offer, and who is the main user of that storage?
We offer hybrid cloud storage for media. SwiftStack is 100% software and runs on-premises atop the server hardware you already buy, using local capacity and/or capacity in public cloud buckets. Data is stored in cloud-native format, so there is no need for gateways, which do not scale. Our technology is used by broadcasters for active archive and OTT distribution, by digital animators for distributed transcoding, and by mobile gaming/eSports companies for massive concurrency, among others.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
The SwiftStack software architecture separates access, storage and management, where each function can be run together or on separate hardware. Unlike storage hardware with the mix of bandwidth and capacity being fixed to the ports and drives within, SwiftStack makes it easy to scale the access tier for bandwidth independently from capacity in the storage tier by simply adding server nodes on the fly. On the storage side, capacity in public cloud buckets scales and is managed in the same single namespace.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Objectively, use of capacity in public cloud providers like Amazon Web Services and Google Cloud Platform is still in its early days for many users. Customers in media, however, are on the leading edge of adoption, not only extending their on-premises environments to a public cloud in a hybrid model, but also using a second-source strategy across two public clouds. Two years ago it was less than 10%, today it is approaching 40%, and by 2020 it looks like the 80/20 rule will likely apply. Users actually do not care much how their data is stored, as long as their user experience is as good or better than it was before, and public clouds are great at delivering content to users.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Arguably, larger assets produced by a growing number of cameras and computers have driven the need to store those assets differently than in the past. A petabyte is the new terabyte in media storage. Banks have many IT admins, whereas media shops have few. SwiftStack offers the same consumption experience as public cloud, which is very different from on-premises solutions of the past. Licensing is based on the amount of data managed, not the total capacity deployed, so you pay as you grow. Whether you save four replicas or use erasure coding with 1.5X overhead, the price is the same.
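To make that replication-versus-erasure-coding comparison concrete, here is a quick sketch of the raw capacity each protection scheme consumes for the same amount of managed data. The four-replica and 1.5X figures come from the answer above; the 100TB example size is arbitrary.

```python
# Raw capacity consumed under two protection schemes for the same managed data.
# The 100TB figure is arbitrary; the overhead factors come from the text above.
managed_tb = 100

replica_count = 4          # four full copies of every object
erasure_overhead = 1.5     # e.g. an 8+4-style erasure code (12 fragments for 8 of data)

print(f"4 replicas:     {managed_tb * replica_count} TB raw")       # 400 TB
print(f"erasure coding: {managed_tb * erasure_overhead} TB raw")    # 150.0 TB
```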

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might end-users notice when connecting on these different platforms?
The great thing about cloud storage, whether it is on-premises or resides with your favorite IaaS providers like AWS and Google, is that the interface is HTTP. In other words, every smartphone, tablet, Chromebook and computer has an identical user experience. For classic applications on systems that do not support AWS S3 as an interface, users see the storage as a mount point or folder in their application — either NFS or SMB. The best part is that it is a single namespace where data can come in as file, be transformed via object and be read either way, so the user experience does not need to change even though the data is stored in the most modern way.
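For applications that do speak S3, pointing an existing client library at an on-premises object store is typically just a matter of overriding the endpoint. Here is a minimal sketch using Python's boto3; the endpoint URL, credentials and bucket name are placeholders for whatever an S3-compatible deployment such as SwiftStack exposes.

```python
# Minimal sketch: talking to an S3-compatible, on-premises object store with boto3.
# The endpoint, credentials and bucket names below are placeholders, not real values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.local",    # your on-prem S3-compatible endpoint
    aws_access_key_id="ACCESS_KEY_PLACEHOLDER",
    aws_secret_access_key="SECRET_KEY_PLACEHOLDER",
)

# Upload a proxy file, then list the bucket -- the same calls an application
# would make against AWS S3, which is the point of an S3-compatible interface.
s3.upload_file("proxy_0001.mov", "dailies", "proxies/proxy_0001.mov")
for obj in s3.list_objects_v2(Bucket="dailies").get("Contents", []):
    print(obj["Key"], obj["Size"])
```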

Dell EMC’s Tom Burns
What kind of storage do you offer, and who is the main user of that storage?
At Dell EMC, we created two storage platforms for the media and entertainment industry: the Isilon scale-out NAS all-flash, hybrid and archive platform, which consolidates and simplifies file-based workflows, and Dell EMC Elastic Cloud Storage (ECS), a scalable enterprise-grade private cloud solution that provides extremely high levels of storage efficiency, resiliency and simplicity and is designed for both traditional and next-generation workloads.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
In the media industry, change is inevitable. That’s why every Isilon system is built to rapidly and simply adapt by allowing the storage system to scale performance and capacity together, or independently, as more space or processing power is required. This allows you to scale your storage easily as your business needs dictate.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Over the past five years, Dell EMC media and entertainment customers have added more than 1.5 exabytes of Isilon and ECS data storage to simplify and accelerate their workflows.

Isilon’s cloud tiering software, CloudPools, provides policy-based automated tiering that lets you seamlessly integrate with cloud solutions as an additional storage tier for the Isilon cluster at your data center. This allows you to address rapid data growth and optimize data center storage resources by using the cloud as a highly economical storage tier with massive storage capacity.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
As technologies that enhance the viewing experience continue to emerge, including higher frame rates and resolutions, uncompressed 4K, UHD, high dynamic range (HDR) and wide color gamut (WCG), underlying storage infrastructures must effectively scale to keep up with expanding performance requirements.

Dell EMC recently launched the sixth generation of the Isilon platform, including our all-flash (F800), which brings the simplicity and scalability of NAS to uncompressed 4K workflows — something that up until now required expensive silos of storage or complex and inefficient push-pull workflows.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc)? And what differences might end-users notice when connecting on these different platforms?
With Dell EMC Isilon, you can streamline your storage infrastructure by consolidating file-based workflows and media assets, eliminating silos of storage. Isilon scale-out NAS includes integrated support for a wide range of industry-standard protocols, allowing the major operating systems to connect using the most suitable protocol for optimum performance and feature support, including IPv4 and IPv6, NFS, SMB, HTTP, FTP, OpenStack Swift-based object access for your cloud initiatives, and native Hadoop Distributed File System (HDFS).

The ECS software-defined cloud storage platform provides the ability to store, access and manipulate unstructured data and is compatible with existing Amazon S3, OpenStack Swift, EMC CAS and EMC Atmos APIs.

EditShare’s Lee Griffin
What kind of storage do you offer, and who is the main user of that storage?
Our storage platforms are tailored for collaborative media workflows and post production. They combine the advanced EFS (EditShare File System) distributed file system with intelligent load balancing in a scalable, fault-tolerant architecture that offers cost-effective connectivity. Within our shared storage platforms, we have a unique take on current cloud workflows: with the current security and reliability of cloud-based technology prohibiting full migration to cloud storage for production, EditShare AirFlow uses EFS on-premises storage to provide secure access to media from anywhere in the world with a basic Internet connection. Our main users are creative post houses, broadcasters and large corporate companies.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Recently, we upgraded all our platforms to EFS and introduced two new single-node platforms, the EFS 200 and 300. These single-node platforms allow users to grow their storage while keeping a single namespace, which eliminates the management of multiple storage volumes. It also enables them to better plan for the future: when their facility requires more storage and bandwidth, they can simply add another node.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
No production happens in one location, so the ability to move media securely and back it up remains a high priority for our clients. From our Flow media asset management and via our automation module, we offer clients the option to back up their valuable content to places like Amazon S3 servers.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
We have many clients working with UHD content who are supplying programming content to broadcasters, film distributors and online subscription media providers. Our solutions are designed to work effortlessly with high-data-rate content, enabling bandwidth to expand with the addition of more EFS nodes to the intelligent storage pool. So our system is ready and working now for 4K content and is future-proofed for even higher data rates.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might end-users notice when connecting on these different platforms?
EditShare supplies native EFS client drivers for all three platforms, allowing clients to pick and choose which platform they want to work on. Whether it is Autodesk Flame for VFX, Resolve for grading or our own Lightworks for editing on Linux, we don’t mind. In fact, EFS offers a considerable bandwidth improvement when using our EFS drivers over the existing AFP and SMB protocols. Improved bandwidth and speed on all three platforms makes for happy clients!

And there are no differences when clients connect. We work with all three platforms the same way, offering a unified workflow to all creative machines, whether on Mac, Windows or Linux.

Scale Logic’s Bob Herzan
What kind of storage do you offer, and who is the main user of that storage?
Scale Logic has developed an ecosystem (Genesis Platform) that includes servers, networking, metadata controllers, single and dual-controller RAID products and purpose-built appliances.

We have three different file systems that allow us to use the storage mentioned above to build SAN, NAS, scale-out NAS, object storage and gateways for private and public cloud. We use a combination of disk, tape and Flash technology to build our tiers of storage that allows us to manage media content efficiently with the ability to scale seamlessly as our customers’ requirements change over time.

We work with customers that range from small to enterprise and everything in between. We have a global customer base that includes broadcasters, post production, VFX, corporate, sports and house of worship.

In addition to the Genesis Platform we have also certified three other tier 1 storage vendors to work under our HyperMDC SAN and scale-out NAS metadata controller (HPE, HDS and NetApp). These partnerships complete our ability to consult with any type of customer looking to deploy a media-centric workflow.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Great question, and it’s actually built into the name and culture of our company. When we bring a solution to market, it has to scale seamlessly, and it needs to be logical when taking the customer’s environment into consideration. We focus on being able to start small but scale any system into a high-availability solution with limited to no downtime. Our solutions can scale independently if clients are looking to add capacity, performance or redundancy.

For example, a customer looking to move to 4K uncompressed workflows could add a Genesis Unlimited as a new workspace focused on the 4K workflow, keeping all existing infrastructure in place alongside it, avoiding major adjustments to their facility’s workflow. As more and more projects move to 4K, the Unlimited can scale capacity, performance and the needed HA requirements with zero downtime.

Customers can then start to migrate their content from their legacy storage over to Unlimited, and then repurpose their legacy storage onto the HyperFS file system as second-tier storage. Finally, once we have moved the legacy storage onto the new file system, we are also more than happy to bring the legacy storage and networking hardware under our global support agreements.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Cloud adoption continues to ramp up in our industry, and we have many customers using cloud solutions for various aspects of their workflow. As it pertains to content creation, manipulation and long-term archive, we have not seen much adoption within our customer base. The economics just do not support the level of performance or capacity our clients demand.

However, private cloud or cloud-like configurations are becoming more mainstream for our larger customers. Working with on-premises storage while having DR (disaster recovery) replication offsite continues to be the best solution at this point for most of our clients.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Our solutions are built not only for the current resolutions but completely scalable to go beyond them. Many of our HD customers are now putting in UHD and 4K workspaces on the same equipment we installed three years ago. In addition to 4K we have been working with several companies in Asia that have been using our HyperFS file system and Genesis HyperMDC to build 8K workflows for the Olympics.

We have a number of solutions designed to meet our customers’ requirements. Some are built with spinning disk, others with all flash, and still others take a hybrid approach that seamlessly combines the technologies.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might end-users notice when connecting on these different platforms?
All of our solutions are designed to support Windows, Linux and Mac OS. However, how they support the various operating systems is based on the protocol (block or file) we are designing for the facility. If we are building a SAN that is strictly going to use block-level access (8/16/32Gb Fibre Channel or 1/10/25/40/100Gb iSCSI), we would use our HyperFS file system and universal client drivers across all operating systems. If our clients are also looking for network protocols in addition to the block-level clients, we can support SMB and NFS while allowing access to the same folders and files over both block and file at the same time.

For customers that are not looking for block-level access, we would focus our design work around our Genesis NX or ZX product lines. Both of these solutions are based on a NAS operating system and simply present themselves with the appropriate protocol over 1/10/25/40 or 100Gb. The Genesis ZX solution is actually a software-defined clustered NAS with enterprise feature sets such as unlimited snapshots, metro clustering and thin provisioning, and it will scale beyond 5 petabytes.

Sonnet Technologies’ Greg LaPorte
What kind of storage do you offer, and who is the main user of that storage?
We offer a portable, bus-powered Thunderbolt 3 SSD storage device that fits in your hand. Primary users of this product include video editors and DITs who need a “scratch drive” fast enough to support editing 4K video at 60fps while on location or traveling.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
The Fusion Thunderbolt 3 PCIe Flash Drive is currently available with 1TB capacity. With data transfer speeds of up to 2,600MB/s supported, most users will not run out of bandwidth when using this device.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might end-users notice when connecting on these different platforms?
Computers with Thunderbolt 3 ports running either macOS Sierra or High Sierra, or Windows 10, are supported. The drive may be formatted to suit the user’s needs, with either an OS-specific format such as HFS+ or a cross-platform format such as exFAT.

Post Supervisor: Planning an approach to storage solutions

By Lance Holte

Like virtually everything in post production, storage is an ever-changing technology. Camera resolutions and media bitrates are constantly growing, requiring higher storage bitrates and capacities. Productions are increasingly becoming more mobile, demanding storage solutions that can live in an equally mobile environment. Yesterday’s 4K cameras are being replaced by 8K cameras, and the trend does not look to be slowing down.

Yet, at the same time, productions still vary greatly in size, budget, workflow and schedule, which has necessitated more storage options for post production every year. As a post production supervisor, when deciding on a storage solution for a project or set of projects, I always try to have answers to a number of workflow questions.

Let’s start at the beginning with production questions.

What type of video compression is production planning on recording?
Obviously, more storage will be required if the project is recording to ARRIRAW rather than H.264.

What camera resolution and frame rate?
Once you know the bitrate from the video compression specs, you can calculate the data size on a per-hour basis. If you don’t feel like sitting down with a calculator or spreadsheet for a few minutes, there are numerous online data size calculators, but I particularly like AJA’s DataCalc application, which has tons of presets for cameras and video and audio formats.

How many cameras and how many hours per day is each camera likely to be recording?
Data size per hour, multiplied by hours per day, multiplied by shoot days, multiplied by number of cameras gives a total estimate of the storage required for the shoot. I usually add 10-20% to this estimate to be safe.
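For anyone who prefers scripting the math to punching it into a calculator, here is a minimal sketch of that estimate in Python. The bitrate, shoot length, camera count and contingency figures are hypothetical placeholders rather than recommendations; substitute the numbers from your own camera specs and schedule.

```python
# Rough shoot-storage estimate: codec bitrate -> GB per hour -> total TB for the shoot.
# Every input below is a hypothetical placeholder; substitute your own camera specs.

def shoot_storage_tb(bitrate_mbps, hours_per_day, shoot_days, cameras, contingency=0.15):
    """Return an estimated storage requirement in terabytes for a shoot."""
    gb_per_hour = bitrate_mbps / 8 * 3600 / 1000           # Mb/s -> GB per recorded hour
    total_gb = gb_per_hour * hours_per_day * shoot_days * cameras
    return total_gb * (1 + contingency) / 1000             # add safety margin, convert to TB

# Example: a hypothetical 700Mb/s codec, 2 cameras, 5 hours/day, 20 shoot days
print(f"{shoot_storage_tb(700, 5, 20, 2):.1f} TB")         # roughly 72 TB with a 15% margin
```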

Let’s move on to post questions…

Is it an online/offline workflow?
The simplicity of editing online is awesome, and I’m holding out for the day when all projects can be edited with online media. In the meantime, most larger projects require online/offline editorial, so keep in mind the extra storage space for offline editorial proxies. The upside is that raw camera files can be stored on slower, more affordable (even archival) storage through editorial until the online process begins.

On numerous shows I’ve elected to keep the raw camera files on portable external RAID arrays (cloned and stored in different locations for safety) until picture lock. G-Tech, LaCie, OWC and Western Digital all make 48+ TB external arrays on which I’ve stored raw media during editorial. When you start the online process, copy the necessary media over to your faster online or grading/finishing storage, and finish the project with only the raw files that are used in the locked cut.

How many editorial staff need to be working on the project simultaneously?
On smaller projects that only require an editorial staff of two or three people who need to access the media at the same time, you may be able to get away with the editors and assistants sharing a storage array over the network and working in different projects. I’ve done numerous smaller projects in which a couple of editors connected to an external RAID (I’ve had great success with Proavio and QNAP arrays) that is plugged into one workstation and shared over the network. Of course, the network must have enough bandwidth for both machines to play back the media from the storage array, but that’s the case for any shared storage system.

For larger projects that employ five, 10 or more editors and staff, storage that is designed for team sharing is almost certainly a requirement. Avid has opened up integrated shared storage to outside storage vendors over the past few years, but Avid’s Nexis solution still remains an excellent option. Aside from providing a solid solution for Media Composer and Symphony, Nexis can also be used with basically any other NLE, from Adobe Premiere Pro to Blackmagic DaVinci Resolve to Final Cut Pro and others. The project-sharing abilities within the NLEs vary depending on the application, but the clear trend is toward multiple editors and post production personnel working simultaneously in the same project.

Does editorial need to be mobile?
Increasingly, editorial tends to begin near the start of physical production, which can necessitate having editors on or near set. This is a pretty simple question to answer, but it is worth keeping in mind so that a shoot doesn’t end up without enough storage in a place where additional storage isn’t easily available — or where the power requirements can’t be met. It’s also a good moment to plan simple things like the number of shuttle or transfer drives that may be needed to ship media back to home base.

Does the project need to be compartmentalized?
For example, should proxy media be on a separate volume or workspace from the raw media/VFX/music/etc.? Compartmentalization is good. It’s safe. Accidents happen, and it’s a pain if someone accidentally deletes everything on the VFX volume or workspace on the editorial storage array. But it can be catastrophic if everything is stored in the same place and they delete all the VFX, graphics, audio, proxy media, raw media, projects and exports.

Split up the project onto separate volumes, and only give write access to the necessary parties. The bigger the project and team, the bigger the risk for accidents, so err on the side of safety when planning storage organization.

Finally, we move to finishing, delivery and archive questions…

Will the project color and mix in-house? What are the delivery requirements? Resolution? Delivery format? Media and other files?
Color grading and finishing often require the fastest storage speeds of the whole pipeline. By this point, the project should be conformed back to the camera media, and the colorist is often working with high bitrate, high-resolution raw media or DPX sequences, EXRs or other heavy file types. (Of course, there are as many workflows as there are projects, many of which can be very light, but let’s consider the trend toward 4K-plus and the fact that raw media generally isn’t getting lighter.) On the bright side, while grading and finishing arrays need to be fast, they don’t need to be huge, since they won’t house all the raw media or editorial media — only what is used in the final cut.

I’m a fan of using an attached SAS or Thunderbolt array, which is capable of providing high bandwidth to one or two workstations. Anything over 20TB shouldn’t be necessary, since the media will be removed and archived as soon as the project is complete, ready for the next project. Arrays like the Areca ARC-5028T2 or Proavio EB800MS deliver read speeds of 2,000+ MB/s, which can play back 4K DPX sequences in real time.
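As a rough sanity check on those numbers, the bandwidth needed for real-time DPX playback can be estimated from frame size and frame rate. The sketch below assumes 10-bit RGB DPX frames packed at 4 bytes per pixel and ignores file headers, so treat it as an approximation rather than a vendor specification.

```python
# Approximate sustained read bandwidth needed for real-time DPX playback.
# Assumes 10-bit RGB packed into 32 bits (4 bytes) per pixel and ignores file headers.

def dpx_playback_mb_s(width, height, fps, bytes_per_pixel=4):
    frame_mb = width * height * bytes_per_pixel / 1e6      # one frame, in megabytes
    return frame_mb * fps

print(f"{dpx_playback_mb_s(4096, 2160, 24):.0f} MB/s")     # 4K DCI at 24fps: ~850 MB/s
print(f"{dpx_playback_mb_s(3840, 2160, 60):.0f} MB/s")     # UHD at 60fps: ~1,990 MB/s
```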

How should the project be archived?
There are a few follow-up questions to this one, like: Will the project need to be accessed with short notice in the future? LTO is a great long-term archival solution, but pulling large amounts of media off LTO tape isn’t exactly quick. For projects that I suspect will be reopened in the near future, I try to keep an external hard drive or RAID with the necessary media onsite. Sometimes it isn’t possible to keep all of the raw media onsite and quickly accessible, so keeping the editorial media and projects onsite is a good compromise. Offsite, in a controlled, safe, secure location, LTO-6 tapes house a copy of every file used on the project.

Post production technology changes with the blink of an eye, and storage is no exception. Once these questions have been answered, if you are spending any serious amount of money, get an opinion from someone who is intimately familiar with the cutting edge of post production storage. Emphasis on the “post production” part of that sentence, because video I/O is not the same as, say, a bank with the same storage size requirements. The more money devoted to your storage solutions, the more opinions you should seek. Not all storage is created equal, so be 100% positive that the storage you select is optimal for the project’s particular workflow and technical requirements.

There is more than one good storage solution for any workflow, but the first step is always answering as many storage- and workflow-related questions as possible to start taking steps down the right path. Storage decisions are perhaps one of the most complex technical parts of the post process, but like the rest of filmmaking, an exhaustive, thoughtful, and collaborative approach will almost always point in the right direction.

Main Image: G-Tech, QNAP, Avid and Western Digital all make a variety of storage solutions for large and small-scale post production workflows.


Lance Holte is an LA-based post production supervisor and producer. He has spoken and taught at such events as NAB, SMPTE, SIGGRAPH and Createasphere. You can email him at lance@lanceholte.com.


What you should ask when searching for storage

Looking to add storage to your post studio? Who isn’t these days? Jonathan Abrams, chief technical officer at New York City’s Nutmeg Creative, was kind enough to put together a list that can help all in their quest for the storage solution that best fits their needs.

Here are some questions that customers should ask a storage manufacturer.

What is your stream count at RAID-6?
The storage manufacturer should have stream count specifications available for both Avid DNx and Apple ProRes at varying frame rates and raster sizes. Use this information to help determine which product best fits your environment.
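Stream counts ultimately come down to sustained array bandwidth divided by per-stream bitrate, which is why the codec, frame rate and raster size matter. A back-of-envelope version of that reasoning is sketched below; the bitrates and the array bandwidth figure are illustrative assumptions, not vendor specifications, so always defer to the manufacturer's published stream counts measured at RAID-6.

```python
# Back-of-envelope stream count: sustained array bandwidth / per-stream data rate.
# The codec bitrates and the array bandwidth figure are illustrative assumptions only.

codec_mbps = {                                 # nominal per-stream bitrates, Mb/s
    "Avid DNxHD 145 (1080p)": 145,
    "Apple ProRes 422 HQ (1080p25)": 184,
}

array_bandwidth_mb_s = 1600                    # hypothetical sustained RAID-6 read, MB/s

for codec, mbps in codec_mbps.items():
    streams = array_bandwidth_mb_s / (mbps / 8)   # convert Mb/s to MB/s, then divide
    print(f"{codec}: ~{int(streams)} streams")
```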

How do I connect my clients to your storage?
Gigabit Ethernet (copper)? 10 Gigabit Ethernet (50-micron fiber)? Fibre Channel (FC)? These are listed in ascending order of cost and performance. Combined with the answer to the question above, this narrows down which product a storage manufacturer has that fits your environment.

Can I use whichever network switch I want to and know that it will work, or must I be using a particular model in order for you to be able to support my configuration and guarantee a baseline of performance?
If you are using a Mac with Thunderbolt ports, then you will need a network adapter, such as a Promise SANLink2 10G SFP+ for your shared storage connection. Also ask, “Can I use any Thunderbolt network adapter, or must I be using a particular model in order for you to be able to support my configuration and guarantee a baseline of performance?”

If you are an Avid Media Composer user, ask, “Does your storage present itself to Media Composer as if it was Avid shared storage?”
This will allow the first person who opens a Media Composer project to obtain a lock on a bin.  Other clients can open the same project, though they will not have write access to said bin.

What is covered by support? 
Make certain that both the hardware (chassis and everything inside of it) and the software (client and server) are covered by support. This includes major version upgrades to the server and client software (i.e. v.11 to v.12). You do not want your storage manufacturer to announce a new software version at NAB 2018 and then find out that it’s not covered by your support contract. That upgrade is a separate cost.

For how many years will you be able to replace all of the hardware parts?
Will the storage manufacturer replace any part within three years of your purchase, provided that you have an active support contract? Will they charge you less for support if they cannot replace failed components during that year’s support contract? A variation of this question is, “What is your business model?” If the storage manufacturer will only guarantee availability of all components for three years, then their business model is based upon you buying another server from them in three years. Are you prepared to be locked into that upgrade cycle?

Are you using custom components that I cannot source elsewhere?
If you continue using your storage beyond the date when the manufacturer can replace a failed part, is the failed part a custom part that was only sold to the manufacturer of your storage? Is the failed part one that you may be able to find used or refurbished and swap out yourself?

What is the penalty for not renewing support? Can I purchase support incidents on an as-needed basis?
How many as-needed event purchases equate to you realizing, “We should have renewed support instead”? If you cannot purchase support on an as-needed basis, then you need to ask what the penalty for reinstating support is. This information helps you determine what your risk tolerance is and whether or not there is a date in the future when you can say, “We did not incur a financial loss with that risk.”
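One way to frame that risk-tolerance question is as a simple break-even calculation between renewing support and paying per incident. The figures below are placeholders purely for illustration; substitute the quotes you actually receive.

```python
# Break-even point between renewing an annual support contract and paying per incident.
# Both dollar figures are placeholders for illustration only; use your actual quotes.
annual_support_cost = 6000      # hypothetical yearly renewal price
per_incident_cost = 1500        # hypothetical as-needed incident price

break_even_incidents = annual_support_cost / per_incident_cost
print(f"Renewal pays for itself after {break_even_incidents:.1f} incidents per year")   # 4.0
```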

Main Image: Nutmeg Creative’s Jonathan Abrams with the company’s 80TB of EditShare storage and two spare drives. Photo Credit: Larry Closs


Storage in the Studio: VFX Studios

By Karen Maierhofer

It takes talent and the right tools to generate visual effects of all kinds, whether it’s building breathtaking environments, creating amazing creatures or crafting lifelike characters cast in a major role for film, television, games or short-form projects.

Indeed, we are familiar with industry-leading content creation tools such as Autodesk’s Maya, Foundry’s Mari and more, which, when placed into the hands of creatives, result in pure digital magic. In fact, there is quite a bit of technological magic that occurs at visual effects facilities, including one kind in particular that may not have the inherent sparkle of modeling and animation tools but is just as integral to the visual effects process: storage. Storage solutions are the unsung heroes behind most projects, working behind the scenes to accommodate artists and keep their creative juices flowing.

Here we examine three VFX facilities and their use of various storage solutions and setups as they tackle projects large and small.

Framestore
Since it was founded in 1986, Framestore has placed its visual stamp on a plethora of Oscar-, Emmy- and British Academy Film Award-winning visual effects projects, including Harry Potter, Gravity and Guardians of the Galaxy. With an increasing number of projects, Framestore expanded from its original London location to North American locales such as Montreal, New York, Los Angeles and Chicago, handling films as well as immersive digital experiences and integrated advertisements for iconic brands, including Guinness, Geico, Coke and BMW.

Beren Lewis

As the company and its workload grew and expanded into other areas, including integrated advertising, so, too, did its storage needs. “Innovative changes, such as virtual-reality projects, brought on high demand for storage and top-tier performance,” says NYC-based Beren Lewis, CTO of advertising and applied technologies at Framestore. “The team is often required to swiftly accommodate multiple workflows, including stereoscopic 4K and VR.”

Without hesitation, Lewis believes storage is typically the most challenging aspect of technology within the VFX workflow. “If the storage isn’t working, then neither are the artists,” he points out. Furthermore, any issues with storage can potentially lead to massive financial implications for the company due to lost time and revenue.

According to Lewis, Framestore uses its storage solution — a Pixit Media PixStor General Parallel File System (GPFS) storage cluster running on NetApp E-Series hardware — for all its project data. This includes backups to remote co-location sites, video preprocessing, decompression, disaster recovery preparation, scalability and high performance for VFX, finishing and rendering workloads.

The studio moved all of its integrated advertising teams over to the PixStor GPFS clusters this past spring. Currently, Framestore has five primary PixStor clusters using NetApp E-Series in use across its offices in London, New York, LA, Chicago and Montreal.

According to Lewis, Framestore partnered with Pixit Media and NetApp to take on increasingly complicated and resource-hungry VR projects. “This partnership has provided the global integrated advertising team with higher performance and nonstop access to data,” he says. “The Pixit Media PixStor software-defined scale-out storage solution running on NetApp E-Series systems brings fast, reliable data access for the integrated advertising division so the team can embrace performance and consistency across all five sites, take a cost-effective, simplified approach to disaster recovery and have a modular infrastructure to support multiple workflows and future expansion.”

BMW

Framestore selected its current solution after reviewing several major storage technologies. It was looking for a single namespace that was very stable, while providing great performance, but it also had to be scalable, Lewis notes. “The PixStor ticked all those boxes and provided the right balance between enterprise-grade hardware and support, and open-source standards,” he explains. “That balance allowed us to seamlessly integrate the PixStor into our network, while still maintaining many of the bespoke tools and services that we had developed in-house over the years, with minimum development time.”

In particular, the storage solution provides the required high performance so that the studio’s VFX, finishing and rendering workloads can all run “full-out with no negative effect on the finishing editors’ or graphic artists’ user experience,” Lewis says. “This is a game-changing capability for an industry that typically partitions off these three workloads to keep artists from having to halt operations. PixStor running on E-Series consolidates all three workloads onto a single IT infrastructure with streamlined end-to-end production of projects, which reduces both time to completion and operational costs, while both IT acquisition and maintenance costs are reduced.”

At Framestore, integrating storage into the workflow is simple. The first step after a project is green-lit is the establishment of a new file set on the PixStor GPFS cluster, where ingested footage and all the CG artist-generated project data will live. “The PixStor is at the heart of the integrated advertising storage workflow from start to finish,” Lewis says. Because the PixStor GPFS cluster serves as the primary storage for all integrated advertising project data, the division’s workstations, renderfarm, editing and finishing stations connect to the cluster for review, generation and storage of project content.

Prior to the move to PixStor/NetApp, Framestore had been using a number of different storage offerings. According to Lewis, they all suffered from the same issues in terms of scalability and degradation of performance under render load — and that load was getting heavier and more unpredictable with every project. “We needed a technology that scaled and allowed us to maintain a single namespace but not suffer from continuous slowdowns for artists due to renderfarm load during crunch times or project delivery.”

Geico

As Lewis explains, with the PixStor/NetApp solution, processing was running up to 270,000 IOPS (I/O operations per second), which was at least several times what Framestore’s previous infrastructure would have been able to handle in a single namespace. “Notably, the development workflow for a major theme-park ride was unhindered by all the VR preprocessing, while backups to remote co-location sites synched every two hours without compromising the artist, rendering or finishing workloads,” he says. “This provided a cost-effective, simplified approach to disaster recovery, and Framestore now has a fast, tightly integrated platform to support its expansion plans.”

To stay at the top of its game, Framestore is always reviewing new technologies, and storage is often part of that conversation. To this end, the studio plans to build on the success it has had with PixStor by expanding the storage to handle some additional editorial playback and render workloads using an all-Non-Volatile Memory Express (NVMe) flash tier. Other projects include a review of object storage technology for use as a long-term, off-premises storage target for archival data.

Without question, the industry’s visual demands are rapidly changing. Not long ago, Framestore could easily predict storage and render requirements for a typical project. But that is no longer the case, and the studio finds itself working in ever-increasing resolutions and frame rates. Whereas projects may have been as small as 3TB in the recent past, nowadays the studio regularly handles multiple projects of 300TB or larger. And the storage must be shared with other projects of varying sizes and scope.

“This new ‘unknowns’ element of our workflow puts many strains on all aspects of our pipeline, but especially the storage,” Lewis points out. “Knowing that our storage can cope with the load and can scale allows us to turn our attention to the other issues that these new types of projects bring to Framestore.”

As Lewis notes, working with high-resolution images and large renderfarms creates a unique set of challenges for storage technology that isn’t seen in many other fields. VFX work will often push any storage technology well beyond what other industries demand of it. “If there’s an issue or a break point, we will typically find it in spectacular fashion,” he adds.

Rising Sun Pictures
As a contributor to the design and execution of computer-generated effects on more than 100 feature films since its inception 22 years ago, Rising Sun Pictures (RSP) has raised the technical bar many times over in both film and television projects. Based in Adelaide, South Australia, RSP has built a top team of VFX artists who have tackled such high-profile projects as Thor: Ragnarok, X-Men and Game of Thrones, as well as the Harry Potter and Hunger Games franchises.

Mark Day

Such demanding, high-level projects require demanding, high-level effects, which, in turn, demand a high-performance, reliable storage solution capable of handling varying data I/O profiles. “With more than 200 employees accessing and writing files in various formats, the need for a fast, reliable and scalable solution is paramount to business continuity,” says Mark Day, director of engineering at RSP.

Recently, RSP installed an Oracle ZS5 storage appliance to handle this important function. This high-performance, unified storage system provides NAS and SAN cloud-converged storage capabilities that enable on-premises storage to seamlessly access Oracle Public Cloud. Its advanced hardware and software architecture includes a multi-threading SMP storage operating system for running multiple workloads and advanced data services without performance degradation. The offering also caches data on DRAM or flash cache for optimal performance and efficiency, while keeping data safely stored on high-capacity SSD (solid state disk) or HDD (hard disk drive) storage.

Previously, the studio had been using a Dell EMC Isilon storage cluster with Avere caching appliances, and the company is still employing that solution for parts of its workflow.

When it came time to upgrade to handle RSP’s increased workload, the facility ran a proof of concept with multiple vendors in September 2016 and benchmarked their systems. Impressed with Oracle, RSP began installation in early 2017. According to Day, RSP liked the solution’s ability to support larger packet sizes — now up to 1MB. In addition, he says its “exceptional” analytics engine provides insight into what a render job is doing.

“It has a very appealing [total cost of ownership], and it has caching right out of the box, removing the need for additional caching appliances,” says Day. Storage is at the center of RSP’s workflow, storing all the relevant information for every department — from live-action plates that are turned over from clients, scene setup files and multi-terabyte cache files to iterations of the final product. “All employees work off this storage, and it needs to accommodate the needs of multiple projects and deadlines with zero downtime,” Day adds.

Machine Room

“Visual effects scenes are getting more complex, and in turn, data sizes are increasing. Working in 4K quadruples file sizes and, therefore, impacts storage performance,” explains Day. “We needed a solution that could cope with these requirements and future trends in the industry.”

According to Day, the data RSP deals with is broad, from small setup files to terabyte geocache files. A one-minute 2K DPX sequence is 17GB for the final pass, while 4K is 68GB. “Keep in mind this is only the final pass; a single shot could include hundreds of passes for a heavy computer-generated sequence,” he points out.
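For a rough sense of the arithmetic behind those figures, here is a minimal back-of-the-envelope sketch in Python. It assumes 10-bit RGB DPX frames (packed into 4 bytes per pixel), full-aperture 2K/4K geometry and 24fps; these assumptions are ours, not RSP’s, and real sizes vary with frame geometry, bit depth and header overhead.

```python
# Back-of-the-envelope DPX sequence sizes: 10-bit RGB packs each pixel into 4 bytes.
# Resolutions, frame rate and bit depth are assumptions, not RSP's actual settings.
BYTES_PER_PIXEL = 4

def dpx_minute_gib(width, height, fps=24, seconds=60):
    frame_bytes = width * height * BYTES_PER_PIXEL   # ignores the small DPX header
    return frame_bytes * fps * seconds / 2**30       # size of one minute, in GiB

print(f"2K (2048x1556): {dpx_minute_gib(2048, 1556):.1f} GiB/min")   # ~17
print(f"4K (4096x3112): {dpx_minute_gib(4096, 3112):.1f} GiB/min")   # ~68
```

Under those assumptions the math lands close to Day’s 17GB and 68GB per minute, and that covers only a single pass of a single shot.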

Thus, high-performance storage is important to the effective operation of a visual effects company like RSP. In fact, storage helps the artists stay on the creative edge by enabling them to iterate through the creative process of crafting a shot and a look. “Artists are required to iterate their creative process many times to perfect the look of a shot, and if they experience slowdowns when loading scenes, this can have a dramatic effect on how many iterations they can produce. And in turn, this affects employees’ efficiency and, ultimately, the profitability of the company,” says Day.

Thor: Ragnarok

Most recently, RSP used its new storage solution for work on the blockbuster Thor: Ragnarok, in particular, for the Val’s Flashback sequence — which was extremely complex and involved extensive lighting and texture data, as well as high-frame-rate plates (sometimes more than 1,000fps for multiple live-action footage plates). “Before our storage refresh, early versions of this shot could take up to 24 hours to render on our server farm. Since installing our new storage, we have seen this drastically reduced to six hours, a 4x improvement, which is a fantastic outcome,” says Day.

Outpost VFX
A full-service VFX studio for film, broadcast and commercials, Outpost VFX, based in Bournemouth, England, has been operational since late 2012. Since that time, the facility has been growing by leaps and bounds, taking on major projects, including Life, Nocturnal Animals, Jason Bourne and 47 Meters Down.

Paul Francis

Due to this fairly rapid expansion, Outpost VFX has seen its storage capacity needs increase. “As the company grows and as resolution increases and HDR comes in, file sizes increase, and we need much more capacity to deal with that effectively,” says CTO Paul Francis.

When setting up the facility five years ago, the decision was made to go with PixStor from Pixit Media and Synology’s NAS for its storage solution. “It’s an industry-recognized solution that is extremely resilient to errors. It’s fast, robust and the team at Pixit provides excellent support, which is important to us,” says Francis.

Foremost, the solution had to provide high capacity and high speeds. “We need lots of simultaneous connections to avoid bottlenecks and ensure speedy delivery of data,” Francis adds. “This is the only one we’ve used, really. It has proved to be stable enough to support us through our growth over the last couple of years — growth that has included a physical office move and an increase in artist capacity to 80 seats.”

Outpost VFX mainly works with image data and project files for use with Autodesk’s Maya, Foundry’s Nuke, Side Effects’ Houdini and other VFX and animation tools. The challenge this presents is twofold: handling very large files, and coping with the huge numbers of small files, such as metadata, that those projects also generate. Francis explains: “Sequentially loading small files can be time-consuming due to the current technology, so moving to something that can handle both of these areas will be of great benefit to us.”

Locally, artists use a mix of HDDs from a number of different manufacturers to store reference imagery and so forth — older-generation PCs have mostly Western Digital HDDs while newer PCs have generic SSDs. When replacing or upgrading equipment, Outpost VFX uses Samsung 900 Series SSDs, depending on the required performance and current market prices.

Life

Like many facilities, Outpost VFX is always weighing its options when it comes to finding the best solution for its current and future needs. Presently, it is looking at splitting up some of its storage solutions into smaller segments for greater resilience. “When you only have one storage solution and it fails, everything goes down. We’re looking to break our setup into smaller, faster solutions,” says Francis.

Additionally, security is a concern for Outpost VFX when it comes to its clients. According to Francis, certain shows need to be annexed, meaning the studio will need a separate storage solution outside of its main network to handle that data.

When Outpost VFX begins a job, the group ingests all the plates it needs to work on, and they reside in a new job folder created by production and assigned to a specific drive for active jobs. This folder then becomes the go-to for all assets, elements and shot iterations created throughout the production. For security purposes, these areas of the server are only visible to and accessible by artists, who in turn cannot access the Internet; this ensures that the files are “watertight and immune to leaks,” says Francis, adding that with PixStor, the studio is able to set up different partitions for different areas that artists can jump between easily.

How important is storage to Outpost VFX? “Frankly, there’d be no operation without storage!” Francis says emphatically. “We deal with hundreds of terabytes of data in visual effects, so having high-capacity, reliable storage available to us at all times is absolutely essential to ensure a smooth and successful operation.”

47 Meters Down

Because the studio delivers visual effects across film, TV and commercials simultaneously, storage is an important factor no matter what the crew is working on. A recent film project like 47 Meters Down required the full gamut of visual effects work, as Outpost VFX was the sole vendor for the project. So, the studio needed the space and responsiveness of a storage system that enabled it to deliver more than 420 shots, a number of which featured heavy 3D builds and multiple layers of render elements.

“We had only about 30 artists at that point, so having a stable solution that was easy for our team to navigate and use was crucial,” Francis points out.

Main Image: From Outpost VFX’s Domestos commercial out of agency MullenLowe London.


Storage in the Studio: Post Houses

By Karen Maierhofer

There are many pieces that go into post production, from conform, color, dubbing and editing to dailies and more. Depending on the project, a post house can be charged with one or two pieces of this complex puzzle, or even the entire workload. No matter the job, the tasks must be done on time and on budget. Unforeseen downtime is unacceptable.

That is why when it comes to choosing a storage solution, post houses are very particular. They need a setup that is secure, reliable and can scale. For them, one size simply does not fit all. They all want a solution that fits their particular needs and the needs of their clients.

Here, we look at three post facilities of various sizes and range of services, and the storage solutions that are a good fit for their business.

Liam Ford

Sim International
The New York City location of Sim has been in existence for over 20 years, operating under the name Post Factory NY until about a month ago, when Sim rebranded it and its seven other founding post companies as Sim International. Under either moniker, the facility has grown into a premier space in the city for offline editorial teams as well as one of the top high-end finishing studios in town; the list of feature films and episodic shows that have been cut and finished at Sim is quite lengthy. This past year, Sim also launched a boutique commercial finishing division.

According to senior VP of post engineering Liam Ford, the vast majority of the projects at the NYC facility are 4K, much of it episodic work. “So, the need is for very high-capacity, very high-bandwidth storage,” Ford says. And because the studio is located in New York, where space is limited, that same storage must be as dense as possible.

For its finishing work, Sim New York is using a Quantum Xcellis SAN, a StorNext-based appliance system that can be specifically tuned for 4K media workflow. The system, which was installed approximately two years ago, runs on a 16Gb Fibre Channel network. Almost half a petabyte of storage fits into just a dozen rack units. Meanwhile, an Avid Nexis handles the facility’s offline work.

The Sim SAN serves as the primary playback system for all the editing rooms. While there are SSDs in some of the workstations for caching purposes, the scheduling demands of clients do not leave much time for staging material back and forth between volumes, according to Ford. So, everything gets loaded back to the SAN, and everything is played back from the SAN.

As Ford explains, content comes into the studio from a variety of sources, whether drives, tapes or Internet transfers, and all of that is loaded directly onto the SAN. An online editor then soft-imports all that material into his or her conform application and creates an edited, high-resolution sequence that is rendered back to the SAN. Once at the SAN, that edited sequence is available for a supervised playback session with the in-house colorists, finishing VFX artists and so forth.

“The point is, our SAN is the central hub through which all content at all stages of the finishing process flows,” Ford adds.

Before installing the Xcellis system, the facility had been using local workstation storage only, but the huge growth in the finishing division prompted the transition to the shared SAN file system. “There’s no way we could do the amount of work we now have, and with the flexibility our clients demand, using a local storage workflow,” says Ford.

When the change became necessary, there were not a lot of options that met Sim’s demands for high bandwidth and reliable streaming, Ford points out; Quantum’s StorNext and SGI’s CXFS were the main shared file systems for the M&E space. Sim went with Quantum because of the work the vendor has done in recent years to improve the M&E experience, as well as the ease of installing the new system.

Nevertheless, with the advent of 25Gb and 100Gb Ethernet, Sim has been closely monitoring the high-performance NAS space. “There are a couple of really good options out there right now, and I can see us seriously looking at those products in the near future as, at the very least, an augmentation to our existing Fibre Channel-based storage,” Ford says.

At Sim, editors deal with a significant amount of Camera Raw, DPX and OpenEXR data. “Depending on the project, we could find ourselves needing 1.5GB/sec or more of bandwidth for a single playback session, and that’s just for one show,” says Ford. “We typically have three or four [shows] playing off the SAN at any one time, so the bandwidth needs are huge!”
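Those bandwidth figures follow directly from frame size and frame rate. A quick sketch, assuming uncompressed 16-bit RGB frames at 4K DCI geometry and 24fps (our assumption, not Sim’s exact formats), shows why a single session approaches the number Ford cites and why several concurrent shows add up fast.

```python
# Rough playback bandwidth for an uncompressed image sequence.
# Frame geometry, bit depth and frame rate are assumptions, not Sim's settings.
def stream_gb_per_sec(width, height, bytes_per_channel=2, channels=3, fps=24):
    frame_bytes = width * height * channels * bytes_per_channel
    return frame_bytes * fps / 1e9

one_show = stream_gb_per_sec(4096, 2160)          # 16-bit 4K DCI at 24fps
print(f"one stream:  {one_show:.2f} GB/sec")      # ~1.3 GB/sec
print(f"four shows:  {one_show * 4:.1f} GB/sec")  # ~5.1 GB/sec aggregate
```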

Master of None

And the editors’ needs continue to evolve, as does their need for storage. “We keep needing more storage, and we need it to be faster and faster. Just when storage technology finally got to the point that doing 10-bit 2K shows was pretty painless, everyone started asking for 16-bit 4K,” Ford points out.

Recently, Sim completed work on the feature American Made and the Netflix show Master of None, in addition to a number of other episodic projects. For these and other shows, the SAN acts as the central hub around which the color correction, online editing, visual effects and deliverables are created.

“The finishing portion of the post pipeline deals exclusively with the highest-quality content available. It used to be that we’d do our work directly from a film reel on a telecine, but those days are long past,” says Ford. “You simply can’t run an efficient finishing pipeline anymore without a lot of storage.”

DigitalFilm Tree
DigitalFilm Tree (DFT) opened its doors in 1999 and now occupies a 10,000-square-foot space in Universal City, California, offering full round-trip post services, including traditional color grading, conform, dailies and VFX, as well as post system rentals and consulting services.

While Universal City may be DFT’s primary location, it has dozens of remote satellite systems (mini post houses for production companies and studios) around the world. Those remote post systems, along with the increase in camera resolution (Alexa, raw, 4K), have multiplied DFT’s storage needs and brought about a sea change in the facility’s storage solution.

According to CEO Ramy Katrib, most companies in the media and entertainment industry historically have used block storage, and DFT was no different. But four years ago, the company began looking at object storage, which is used by Silicon Valley companies, like Dropbox and AWS, to store large assets. After significant research, Katrib felt it was a good fit for DFT as well, believing it to be a more economical way to build petabytes of storage, compared to using proprietary block storage.

Ramy Katrib

“We were unique from most of the post houses in that respect,” says Katrib. “We were different from many of the other companies using object storage — they were tech, financial institutions, government agencies, health care; we were the rare one from M&E – but our need for extremely large, scalable and resilient storage was the same as theirs.”

DFT’s primary work centers around scripted television — an industry segment that continues to grow. “We do 15-plus television shows at any given time, and we encourage them to shoot whatever they like, at whatever resolution they desire,” says Katrib. “Most of the industry relies on LTO to back up camera raw materials. We do that too, but we also encourage productions to take advantage of our object storage, and we will store everything they shoot and not punish them for it. It is a rather Utopian workflow. We now give producers access to all their camera raw material. It is extremely effective for our clients.”

Over four years ago, DFT began using OpenStack, the open-source cloud platform that controls large pools of storage, compute and networking resources, to design and build its own object storage system. “We have our own software developers and people who built our hardware, and we are able to adjust to the needs of our clients and the needs of our own workflow,” says Katrib.

DFT designs its custom PC- and Linux-based post systems, including chassis from Super Micro, CPUs from Intel and graphics cards from Nvidia. Storage is provided by a number of companies, including spinning-disk and SSD solutions from Seagate Technology and Western Digital.

DFT then deploys remote dailies systems worldwide, in proximity to where productions are shooting. Each day clients plug their production hard drives (containing all camera raw files) into DFT’s remote dailies system. From DFT’s facility, dailies technicians remotely produce editorial, viewing and promo dailies files, and transfer them to their destinations worldwide. All the while, the camera raw files are transported from the production location to DFT’s ProStack “massively scalable object storage.” In this case, “private cloud storage” consists of servers DFT designed that house all the camera raw materials, with management from DFT post professionals who support clients with access to and management of their files.
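ProStack is DFT’s own design, so the snippet below is not its code; it is only a generic illustration of the object-storage pattern OpenStack exposes through Swift, using the python-swiftclient library. The endpoint, credentials, container and file paths are placeholders.

```python
# Hypothetical sketch: pushing camera raw clips into OpenStack Swift object storage.
# Endpoint, credentials, container and paths are placeholders, not DFT's ProStack.
from pathlib import Path
from swiftclient.client import Connection

conn = Connection(
    authurl="https://keystone.example.com/v3",   # placeholder identity endpoint
    user="dailies-tech",
    key="secret",
    auth_version="3",
    os_options={"project_name": "post",
                "user_domain_name": "Default",
                "project_domain_name": "Default"},
)

container = "show-cameraraw"                     # e.g. one container per show
conn.put_container(container)

# Upload every clip from the production drive for this camera roll.
for clip in sorted(Path("/mnt/production_drive/A001").glob("*.ari")):
    with open(clip, "rb") as fh:
        conn.put_object(container, f"A001/{clip.name}", contents=fh,
                        content_type="application/octet-stream")

# A remote post system can later pull any object back on demand.
headers, data = conn.get_object(container, "A001/A001_C001_0101A6.ari")
```

Because every clip is just an addressable object, a remote Resolve or After Effects seat only needs credentials and a network path to reach the same camera raw pool the facility uses.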

DFT provides color grading for Great News.

Recently, storage vendors such as Quantum and Avid have begun building and branding their own object storage solutions not unlike what DFT has constructed at its Universal City locale. And the reason is simple: object storage provides a clear advantage because of its reliability and low cost. “We looked at it because the storage we were paying for, proprietary block storage, was too expensive to house all the data our clients were generating. And resolutions are only going up. So, every year we needed more storage,” Katrib explains. “We needed a solution that could scale with the practical reality we were living.”

Then, about four years ago when DFT started becoming a software company, one of the developers brought OpenStack to Katrib’s attention. “The open-source platform provided several storage solutions, networking capabilities and cloud compute capabilities for free,” he points out. Of course, the solution is not a panacea, as it requires a company to customize the offering for its own needs and even contribute back to the OpenStack community. But then again, that requirement enables DFT to evolve to the changing needs of its clients without waiting for a manufacturer to do it.

“It does not work out of the box like a solution from IBM, for instance. You have to develop around it,” Katrib says. “You have to have a lab mentality, designing your own hardware and software based on pain points in your own environment. And, sometimes it fails. But when you do it correctly, you realize it is an elegant solution.” However, there are vibrant communities, user groups and tech summits of those leveraging the technology who are willing to assist and collaborate.

DFT has evolved its object storage solution, extending its capacity from an initial few hundred terabytes (nothing to sneeze at) to hundreds of petabytes of storage. DFT also designs remote post systems and storage solutions for customers in remote locations around the world. Those remote setups can be as simple as a workstation running applications such as Blackmagic’s Resolve or Adobe After Effects, connected to object storage housing all the client’s camera raw material.

The key, Katrib notes, is to have great post and IT pros managing the projects and the system. “I can now place a remote post system with a calibrated 4K monitor and object storage housing the camera raw material, and I can bring the post process to you wherever you are, securely,” he adds. “From wherever you are, you can view the conform, color and effects, and sign off on the final timeline, as if you were at DFT.”

DFT posts American Housewife

In addition to the object storage, DFT is also using Facilis TerraBlock and Avid Nexis systems locally and on remote installs. The company uses those commercial solutions because they provide benefits, including storage performance and feature sets that optimize certain software applications. As Katrib points out, storage is not one-flavor-fits-all, and different solutions work better for certain use cases. In DFT’s case, the commercial storage products provide performance for the playback of multiple 4K streams across the company’s color, VFX and conform departments, while its ProStack high-capacity object storage comes into play for storing the entirety of all files produced by its clients.

“Rather than retrieve files from an LTO tape, as most do when working on a TV series, with object storage, the files are readily available, saving hours in retrieval time,” says Katrib.

Currently, DFT is working on a number of television series, including Great News (color correction only) and Good Behavior (dailies only). For other shows, such as the Roseanne revival, NCIS: Los Angeles, American Housewife and more, it is performing full services such as visual effects, conform, color, dailies and dubbing. And in some instances, even equipment rental.

As the work expands, DFT is looking to extend upon its storage and remote post systems. “We want to have more remote systems where you can do color, conform, VFX, editorial, wherever you are, so the DP or producer can have a monitor in their office and partake in the post process that’s particular to them,” says Katrib. “That is what we are scaling as we speak.”

Broadway Video
Broadway Video is a global media and entertainment company that has been primarily engaged in post-production services for television, film, music, digital and commercial projects for the past four decades. Located in New York and Los Angeles, the facility offers one-stop tools and talent for editorial, audio, design, color grading, finishing and screening, as well as digital file storage, preparation, aggregation and delivery of digital content across multiple platforms.

Since its founding in 1979, Broadway Video has grown into an independent studio. During this timeframe, content has evolved greatly, especially in terms of resolution, to where 4K and HD content — including HDR and Atmos sound — is becoming the norm. “Staying current and dealing with those data speeds are necessary in order to work fluidly on a 4K project at 60p,” says Stacey Foster, president and managing director, Broadway Video Digital and Production. “The data requirements are pretty staggering for throughput and in terms of storage.”

Stacey Foster

This led Broadway Video to begin searching a year ago for a storage system that would meet its needs now as well as in the foreseeable future — in short, it also needed a system that is scalable. Their solution: an all-Flash Hitachi Vantara Virtual Storage Platform (VSP) G series. Although quite expensive, a flash-based system is “ridiculously powerful,” says Foster. “Technology is always marching forward, and Flash-based systems are going to become the norm; they are already the norm at the high end.”

Foster has had a relationship with Hitachi spanning more than a decade and has witnessed the company’s growth into M&E from the medical and financial worlds where it has long been firmly ensconced. According to Foster, Hitachi’s VSP series will enhance Broadway Video’s 4K offerings and transform internal operations by allowing quick turnaround, efficient and cost-effective production, post production and delivery of television shows and commercials. The system also offers workload scalability, allowing the company to expand and meet the changing needs of the digital media production industry.

“The systems we had were really not that capable of handling DPX files that were up to 50TB, and Hitachi’s VSP product has been handling them effortlessly,” says Foster. “I don’t think other [storage] manufacturers can say that.”

Foster explains that as Broadway Video continued to expand its support of the latest 4K content and technologies, it became clear that a more robust, optimized storage solution was needed as the company moved in this new direction. “It allows us to look at the future and create a foundation to build our post production and digital distribution services on,” Foster says.

Broadway Video’s projects with Netflix sparked the need for a more robust system. Recently, Comedians in Cars Getting Coffee, an Embassy Row production, transitioned to Netflix, and one of the requirements from its new home was the move from 2K to 4K. “It was the perfect reason for us to put together a 4K end-to-end workflow that satisfies this client’s requirements for technical delivery,” Foster points out. “The bottleneck in color and DPX file delivery is completely lifted, and the post staff is able to work quickly and sometimes even faster than in real time when necessary to deliver the final product, with its very large files. And that is a real convenience for them.”

Broadway Video’s Hitachi Vantara Virtual Storage Platform G series.

As a full-service post company, Broadway Video in New York operates 10 production suites of Avids running Adobe Premiere and Blackmagic Resolve, as well as three full mixing suites. “We can have all our workstations simultaneously hit the [storage] system hard and not have the system slow down. That is where Hitachi’s VSP product has set itself apart,” Foster says.

For Comedians in Cars Getting Coffee, like many projects Broadway Video encounters, the cut is in a lower-resolution Avid file. The 4K media is then imported into the Resolve platform, so the show is colored from the original material in its native format. In terms of storage, once the material is past the cutting stage, it is all stored on the Hitachi system. Once the project is completed, it is handed off on spinning disk for archival, though Foster foresees a limited future for spinning disks due to their inherently limited life span — “anything that spins breaks down,” he adds.

All the suites are fully HD-capable and are tied with shared SAN and ISIS storage; because work on most projects is shared between editing suites, there is little need to use local storage. Currently Broadway Video is still using its previous Avid ISIS products but is slowly transitioning to the Hitachi system only. Foster estimates that at this time next year, the transition will be complete, and the staff will no longer have to support the multiple systems. “The way the systems are set up right now, it’s just easier to cut on ISIS using the Avid workstations. But that will soon change,” he says.

Other advantages the Hitachi system provides are stability and uptime, which Foster maintains is “pretty much 100 percent guaranteed.” As he points out, there is no such thing as downtime in banking and medical, where Hitachi proved its mettle, and bringing that stability to the M&E industry “has been terrific.”

Of course, that is in addition to bandwidth and storage capacity, which is expandable. “There is no limit to the number of petabytes you can have attached,” notes Foster.

Considering that the majority of calls received by Broadway Video center on post work for 4K-based workflows, the new storage solution is a necessary technical addition to the facility’s other state-of-the-art equipment. “In the environment we work in, we spend more and more time on the creative side in terms of the picture cutting and sound mixing, and then it is a rush to get it out the door. If it takes you days to import, color correct, export and deliver — especially with the file sizes we are talking about – then having a fast system with the kind of throughput and bandwidth that is necessary really lifts the burden for the finishing team,” Foster says.

He continues: “The other day, the engineers were telling me we were delivering 20 times faster using the Hitachi technology in the final cutting and coloring of a Jerry Seinfeld stand-up special we had done in 4K,” which resulted in a DPX deliverable of about 50TB. “And that is pretty significant,” Foster adds.

Main Image: DigitalFilm Tree’s senior colorist Patrick Woodard.


Storage Workflows for 4K and Beyond

Technicolor-Postworks and Deluxe Creative Services share their stories.

By Beth Marchant

Once upon a time, an editorial shop was a sneaker-net away from the other islands in the pipeline archipelago. That changed when the last phases of the digital revolution set many traditional editorial facilities into swift expansion mode to include more post production services under one roof.

The consolidating business environment in the post industry of the past several years then brought more of those expanded, overlapping divisions together. That’s a lot for any network to handle, let alone one containing some of the highest quality and most data-dense sound and pictures being created today. The networked storage systems connecting them all must be robust, efficient and realtime without fail, but also capable of expanding and contracting with the fluctuations of client requests, job sizes, acquisitions and, of course, evolving technology.

There’s a “relief valve” in the cloud and object storage, say facility CTOs minding the flow, but it’s still a delicate balance between local pooled and tiered storage and iron-clad cloud-based networks their clients will trust.

Technicolor-Postworks
Joe Beirne, CTO of Technicolor-PostWorks New York, is probably as familiar as one can be with complex nonlinear editorial workflows. A user of Avid’s earliest NLEs, an early adopter of networked editing and an immersive interactive filmmaker who experimented early with bluescreen footage, Beirne began his career as a technical advisor and producer for high-profile mixed-format feature documentaries, including Michael Moore’s Fahrenheit 9/11 and the last film in Godfrey Reggio’s KOYAANISQATSI trilogy.

Joe Beirne

In his 11 years as a technology strategist at Technicolor-PostWorks New York, Beirne has also become fluent in evolving color, DI and audio workflows for clients such as HBO, Lionsgate, Discovery and Amazon Studios. CTO since 2011, when PostWorks NY acquired the East Coast Technicolor facility and the color science that came with it, he now oversees the increasingly complicated ecosystem that moves and stores vast amounts of high-resolution footage and data while simultaneously holding those separate and variously intersecting workflows together.

As the first post facility in New York to handle petabyte levels of editorial-based storage, Technicolor-PostWorks learned early how to manage the data explosion unleashed by digital cameras and NLEs. “That’s not because we had a petabyte SAN or NAS or near-line storage,” explains Beirne. “But we had literally 25 to 30 Avid Unity systems that were all in aggregate at once. We had a lot of storage spread out over the campus of buildings that we ran on the traditional PostWorks editorial side of the business.”

The TV finishing and DI business that developed at PostWorks in 2005, when Beirne joined the company (he was previously a client), eventually necessitated a different route. “As we’ve grown, we’ve expanded out to tiered storage, as everyone is doing, and also to the cloud,” he says. “Like we’ve done with our creative platforms, we have channeled our different storage systems and subsystems to meet specific needs. But they all have a very promiscuous relationship with each other!”

TPW’s high-performance storage in its production network is a combination of local or semi-locally attached near-line storage tethered by several Quantum StorNext SANs, all of it air-gapped, or physically segregated, from the public Internet. “We’ve got multiple SANs in the main Technicolor mothership on Leroy Street with multiple metadata controllers,” says Beirne. “We’ve also got some client-specific storage, so we have a SAN that can be dedicated to a particular account. We did that for a particular client who has very restrictive policies about shared storage.”

TPW’s editorial media, for the most part, resides in Avid’s ISIS system and is in the process of transitioning to its software-defined replacement, Nexis. “We have hundreds of Avids, a few Adobe and even some Final Cut systems connected to that collection of Nexis and ISIS and Unity systems,” he says. “We’re currently testing the Nexis pipeline for our needs but, in general, we’re going to keep using this kind of storage for the foreseeable future. We have multiple storage servers that serve that part of our business.”

Beirne says most every project the facility touches is archived to LTO tape. “We have a little bit of disc-to-tape archiving going on for the same reasons everybody else does,” he adds. “And some SAN volume hot spots that are all SSD (solid state drives) or a hybrid.” The facility is also in the process of improving the bandwidth of its overall switching fabric, both on the Fibre Channel side and on the Ethernet side. “That means we’re moving to 32Gb and multiple 16Gb links,” he says. “We’re also exploring a 40Gb Ethernet backbone.”

Technicolor-Postworks 4K theater at their Leroy Street location.

This backbone, he adds, carries an exponential amount of data every day. “Now we have what are like two nested networks of storage at a lot of the artist workstations,” he explains. “That’s a complicating feature. It’s this big, kind of octopus, actually. Scratch that: it’s like two octopi on top of one another. That’s not even mentioning the baseband LAN network that interweaves this whole thing. They, of course, are now getting intermixed because we are also doing IT-based switching. The entire, complex ecosystem is evolving and everything that interacts with it is evolving right along with it.”

The cloud is providing some relief and handles multiple types of storage workflows across TPW’s various business units. “Different flavors of the commercial cloud, as well as our own private cloud, handle those different pools of storage outside our premises,” Beirne says. “We’re collaborating right now with an international account in another territory and we’re touching their storage envelope through the Azure cloud (Microsoft’s enterprise-grade cloud platform). Our Azure cloud and theirs touch and we push data from that storage back and forth between us. That particular collaboration happened because we both had an Azure instance, and those kinds of server-to-server transactions that occur entirely in the cloud work very well. We also had a relationship with one of the studios in which we made a similar connection through Amazon’s S3 cloud.”
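Beirne doesn’t detail the mechanics beyond Aspera, but for a sense of what an Azure-to-Azure, server-side handoff can look like, here is a hypothetical sketch using Microsoft’s azure-storage-blob SDK. The account names, container names and SAS token are placeholders, not TPW’s setup.

```python
# Hypothetical sketch of a server-side copy between two Azure storage accounts
# using azure-storage-blob; accounts, containers and SAS token are placeholders.
from azure.storage.blob import BlobServiceClient

dest = BlobServiceClient(
    account_url="https://ourfacility.blob.core.windows.net",
    credential="<destination-account-key>",
)
dest_blob = dest.get_blob_client(container="incoming", blob="reel_07_v012.mxf")

# The partner facility shares a SAS URL for the source blob in its own account.
source_sas_url = ("https://partnerfacility.blob.core.windows.net/"
                  "outgoing/reel_07_v012.mxf?<sas-token>")

# The copy runs service-side between the two accounts, so no media has to
# transit either facility's on-premises network.
dest_blob.start_copy_from_url(source_sas_url)
print(dest_blob.get_blob_properties().copy.status)   # 'pending', then 'success'
```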

Given the trepidations most studios still have about the cloud, Beirne admits there will always be some initial, instinctive mistrust from both clients and staff when you start moving any content away from computers that are not your own and you don’t control. “What made that first cloud solution work, and this is kind of goofy, is we used Aspera to move the data, even though it was between adjacent racks. But we took advantage of the high-bandwidth backbone to do it efficiently.”

Both TPW in New York and Technicolor in Los Angeles have since leveraged the cloud aggressively. “We our own cloud that we built, and big Technicolor has a very substantial purpose-built cloud, as well as Technicolor Pulse, their new storage-related production service in the cloud. They also use object storage and have some even newer technology that will be launching shortly.”

The caveat to moving any storage-related workflow into the cloud is thorough and continual testing, says Beirne. “Do I have more concern for my clients’ media in the cloud than I do when sending my own tax forms electronically? Yeah, I probably do,” he says. “It’s a very, very high threshold that we need to pass. But that said, there’s quite a bit of low-impact support stuff that we can do on the cloud. Review and approval stuff has been happening in the cloud for some time.” As a result, the facility has seen an increase, like everyone else, in virtual client sessions, such as live color sessions and live mix sessions from city to city or continent to continent. “To do that, we usually have a closed circuit that we open between two facilities and have calibrated displays on either end. And, we also use PIX and other normal dailies systems.”

“How we process and push this media around ultimately defines our business,” he concludes. “It’s increasingly bigger projects that are made more demanding from a computing point of view. And then spreading that out in a safe and effective way to where people want to access it, that’s the challenge we confront every single day. There’s this enormous tension between the desire to be mobile and open and computing everywhere and anywhere, with these incredibly powerful computer systems we now carry around in our pockets and the bandwidth of the content that we’re making, which is high frame rate, high resolution, high dynamic range and high everything. And with 8K — HDR and stereo wavefront data goes way beyond 8K and what the retina even sees — and 10-bit or more coming in the broadcast chain, it will be more of the same.” TPW is already doing 16-bit processing for all of its film projects and most of its television work. “That’s piles and piles and piles of data that also scales linearly. It’s never going to stop. And we have a VR lab here now, and there’s no end of the data when you start including everything in and outside of the frame. That’s what keeps me up at night.”

Deluxe Creative Services
Before becoming CTO at Deluxe Creative Services, Mike Chiado had a 15-year career as a color engineer and image scientist at Company 3, the grading and finishing powerhouse acquired by Deluxe in 2010. He now manages the pipelines of a commercial, television and film Creative Services division that encompasses not just dailies, editorial and color, but sound, VFX, 3D conversion, virtual reality, interactive design and restoration.

Mike Chiado

That’s a hugely data-heavy load to begin with, and as VR and 8K projects become more common, managing the data stored and coursing through DCS’ network will get even more demanding. Branded companies currently under the monster Deluxe umbrella include Beast, Company 3, DDP, Deluxe/Culver City, Deluxe VR, Editpool, Efilm, Encore, Flagstaff Studios, Iloura, Level 3, Method Studios, StageOne Sound, Stereo D, and Rushes.

“Actually, that’s nothing when you consider that all the delivery and media teams from Deluxe Delivery and Deluxe Digital Cinema are downstream of Creative Services,” says Chiado. “That’s a much bigger network and storage challenge at that level.” Still, the storage challenges of Chiado’s segment are routinely complicated by the twin monkey wrenches of the collaborative and computer kind that can unhinge any technology-driven art form.

“Each area of the business has its own specific problems that recur: television has its issues, commercial work has its issues and features have theirs. For us, commercials and features are more alike than you might think, partly due to the constantly changing visual effects but also due to shifting schedules. Television is much more regimented,” he says. “But sometimes we get hard drives in on a commercial or feature and we think, ‘Well, that’s not what we talked about at all!’”

Company 3’s file-based digital intermediate work quickly clarified Chiado’s technical priorities. “The thing that we learned early on is realtime playback is just so critical,” he says. “When we did our very first file-based DI job 13 years ago, we were so excited that we could display a certain resolution. OK, it was slipping a little bit from realtime, maybe we’d get 22 frames a second, or 23, but then the director walked out after five minutes and said, ‘No. This won’t work.’ He couldn’t care less about the resolution because it was only ever about realtime and solid playback. Luckily, we learned our lesson pretty quickly and learned it well! In Deluxe Creative Services, that still is the number one priority.”

It’s also helped him cut through unnecessary sales pitches from storage vendors unfamiliar with Deluxe’s business. “When I talk to them, I say, ‘Don’t tell me about bit rates. I’m going to tell you a frame rate I want to hit and a resolution, and you tell me if we can hit it or not with your solution. I don’t want to argue bits; I want to tell you this is what I need to do and you’re going to tell me whether or not your storage can do that.’ The storage vendors that we’re going to bank our A-client work on better understand fundamentally what we need.”

Because some of the Deluxe company brands share office space — Method and Company 3 moved into a 63,376-square-foot former warehouse in Santa Monica a few years ago — they have access to the same storage infrastructure. “But there are often volumes specially purpose-built for a particular job,” says Chiado. “In that way, we’ve created volumes focused on supporting 4K feature work and others set up specifically for CG desktop environments that are shared across 400 people in that one building. We also have similar business units in Company 3 and Efilm, so sometimes it makes sense that we would want, for artist or client reasons, to have somebody in a different location from where the data resides. For example, having the artist in Santa Monica and the director and DP in Hollywood is something we do regularly.”

Chiado says Deluxe has designed and built with network solution and storage solution providers a system “that suits our needs. But for the most part, we’re using off-the-shelf products for storage. The magic is how we tune them to be able to work with our systems.”

Those vendors include Quantum, DDN Storage and EMC’s network-attached storage Isilon. “For our most robust needs, like 4K feature workflows, we rely on DDN,” he says. “We’ve actually already done some 8K workflows. Crazy world we live in!” For long-term archiving, each Deluxe Creative Service location worldwide has an LTO-tape robot library. “In some cases, we’ll have a near-line tier two volume that stages it. And for the past few years, we’re using object storage in some locations to help with that.”

Although the entire group of Deluxe divisions and offices are linked by a robust 10GigE network that sometimes takes advantage of dark fiber, unused fiber optic cables leased from larger fiber-optic communications companies, Chiado says the storage they use is all very specific to each business unit. “We’re moving stuff around all the time but projects are pretty much residing in one spot or another,” he says. “Often, there are a thousand reasons why — it may be for tax incentives in a particular location, it may be for project-specific needs. Or it’s just that we’re talking about the London and LA locations.”

With one eye on the future and another on budgets, Chiado says pooled storage has helped DCS keep costs down while managing larger and larger subsets of data-heavy projects. “We are always on the lookout for ways to handle the next thing, like the arrival of 8K workflows, but we’ve gained huge, huge efficiencies from pooled storage,” he says. “So that’s the beauty of what we build, specific to each of our world locations. We move it around if we have to between locations but inside that location, everybody works with the content in one place. That right there was a major efficiency in our workflows.”

Beyond that, he says, how to handle 8K is still an open question. “We may have to make an island, and it’s been testing so far, but we do everything we can to keep it in one place and leverage whatever technology that’s required for the job,” Chiado says. “We have isolated instances of SSDs (solid-state drives) but we don’t have large-scale deployment of SSDs yet. On the other end, we’re working with cloud vendors, too, to be able to maximize our investments.”

Although the company is still working through cloud security issues, Chiado says Deluxe is “actively engaging with cloud vendors because we aren’t convinced that our clients are going to be happy with the security protocols in place right now. The nature of the business is we are regularly involved with our clients and MPAA and have ongoing security audits. We also have a group within Deluxe that helps us maintain the best standards, but each show that comes in may have its own unique security needs. It’s a constant, evolving process. It’s been really difficult to get our heads and our clients’ heads around using the cloud for rendering, transcoding or for storage.”

Luckily, that’s starting to change. “We’re getting good traction now, with a few of the studios getting ready to greenlight cloud use and our own pipeline development to support it,” he adds. “They are hand in hand. But I think once we move over this hurdle, this is going to help the industry tremendously.”

Beyond those longer-term challenges, Chiado says the day-to-day demands of each division haven’t changed much. “Everybody always needs more storage, so we are constantly looking at ways to make that happen,” he says. “The better we can monitor our storage and make our in-house people feel comfortable moving stuff off near-line to tape and bring it back again, the better we can put the storage where we need it. But I’m very optimistic about the future, especially about having a relief valve in the cloud.”

Our main image is the shared 4K theater at Company 3 and Method.


VFX Storage: The Molecule

Evolving to a virtual private local cloud?

By Beth Marchant

VFX artists, supervisors and technologists have long been on the cutting-edge of evolving post workflows. The networks built to move, manage, iterate, render and put every pixel into one breathtaking final place are the real super heroes here, and as New York’s The Molecule expands to meet the rising demand for prime-time visual effects, it pulls even more power from its evolving storage pipeline in and out of the cloud.

The Molecule CEO/CTO Chris Healer has a fondness for unusual workarounds. While studying film in college, he built a 16mm projector out of Legos and wrote a 3D graphics library for DOS. In his professional life, he swiftly transitioned from Web design to motion capture and 3D animation. He still wears many hats at his now-bicoastal VFX and VR facility, The Molecule, which he founded in New York in 2005: CEO, CTO, VFX supervisor, designer, software developer and scientist. In those intersecting capacities, Healer has created the company’s renderfarm; developed and automated its workflow, linking and preview tools; and designed and built out its cloud-based compositing pipeline.

When the original New York office went into growth mode, Healer (pictured at his new, under-construction facility) turned to GPL Technologies, a VFX and post-focused digital media pipeline and data infrastructure developer, to help him build an entirely new network foundation for the new location the company will move to later this summer. “Up to this point, we’ve had the same system and we’ve asked GPL to come in and help us create a new one from scratch,” he says. “But any time you hire anyone to help with this kind of thing, you’ve really got to do your own research and figure out what makes sense for your artists, your workflows and, ultimately, your bottom line.”

The new facility will start with 65 seats and expand to more than 100 within the next year to 18 months. Current clients include the major networks, Showtime, HBO, AMC, Netflix and director/producer Doug Liman.

Netflix’s Unbreakable Kimmy Schmidt is just one of the shows The Molecule works on.

Healer’s experience as an artist, developer, supervisor and business owner has given him a seasoned perspective on how to develop VFX pipeline work. “There’s a huge disparity between what the conventional user wants to do, i.e. share data, and the much longer dialog you need to have to build a network. Connecting and sharing data is really just the beginning of a very long story that involves so many other factors: how many things are you connecting to? What type of connection do you have? How far away are you from what you’re connecting to? How much data are you moving, and is it all at once or a continuous stream? Users are so different, too.”

Complicating these questions, he says, is a facility’s willingness to embrace new technology before it’s been vetted in the market. “I generally resist the newest technologies,” he says. “My instinct is that I would prefer an older system that’s been tested for years upon years. You go to NAB and see all kinds of cool stuff that appears to be working the way it should. But it hasn’t been tried in different kinds of circumstances, or it’s being pitched to the broadcast industry and may not work well for VFX.”

Making a Choice
He was convinced by EMC’s Isilon system based on customer feedback, and the hardware has already been delivered to the new office. “We won’t install it until construction is complete, but all the documentation is pointing in the right direction,” he says. “Still, it’s a bit of a risk until we get it up and running.”

Last October, Dell announced it would acquire EMC in a deal that is set to close in mid-July. That should suit The Molecule just fine; most of its artists’ computers are either Dell or HP machines running Nvidia graphics.

A traditional NAS configuration on a single GigE line can only do up to 100MB per second. “A 10GigE connection running in NFS can, theoretically, do 10 times that,” says Healer. “But 10GigE works slightly differently, like an LA freeway, where you don’t change the speed limit but you change the number of lanes and the on and off ramp lights to keep the traffic flowing. It’s not just a bigger gun for a bigger job, but more complexity in the whole system. Isilon seems to do that very well and it’s why we chose them.”
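Healer’s freeway analogy is easy to put in numbers. The sketch below takes the article’s rough usable-throughput figures (about 100MB/sec on GigE, roughly ten times that over NFS on 10GigE) and an assumed uncompressed 10-bit 2K DPX stream at 24fps, which is our assumption rather than The Molecule’s actual format.

```python
# Realtime 2K DPX streams per link, using the article's rough throughput figures.
# The per-stream rate assumes uncompressed 10-bit 2K DPX (4 bytes/pixel) at 24fps.
STREAM_MB_S = 2048 * 1556 * 4 * 24 / 1e6          # ~306 MB/sec per stream

for link, usable_mb_s in [("GigE", 100), ("10GigE over NFS", 1000)]:
    streams = usable_mb_s / STREAM_MB_S
    print(f"{link}: {streams:.1f} realtime 2K streams")
# GigE: 0.3 (not even one stream); 10GigE: about 3.3 streams
```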

His company’s fast growth, Healer says, has “presented a lot of philosophical questions about disk and RAID redundancy, for example. If you lose a disk in RAID-5 you’re OK, but if two fail, you’re screwed. Clustered file systems like GlusterFS and OneFS, which Isilon uses, have a lot more redundancy built in so you could lose quite a lot of disks and still be fine. If your number is up and on that unlucky day you lost six disks, then you would have backup. But that still doesn’t answer what happens if you have a fire in your office or, more likely, there’s a fire elsewhere in the building and it causes the sprinklers to go off. Suddenly, the need for off-site storage is very important for us, so that’s where we are pushing into next.”
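OneFS and GlusterFS each have their own protection schemes, so the numbers below are only a generic illustration of the trade Healer describes: wider parity protection gives up a comparable slice of raw capacity but rides out far more simultaneous disk failures than RAID-5’s single parity disk.

```python
# Generic comparison of single-parity RAID vs. a wider erasure-coded layout.
# Disk counts and sizes are illustrative, not The Molecule's configuration.
def layout(data_disks, parity_disks, disk_tb=8):
    total = data_disks + parity_disks
    return {"disks": total,
            "survives_failures": parity_disks,
            "usable_tb": data_disks * disk_tb,
            "efficiency": round(data_disks / total, 2)}

print("RAID-5, 7+1:        ", layout(7, 1))    # survives 1 failure, 88% usable
print("Erasure code, 16+4: ", layout(16, 4))   # survives 4 failures, 80% usable
```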

Healer homed in on several metrics to help him determine the right path. “The solutions we looked at had to have the following: DR, or disaster recovery, replication, scalability, off-site storage, undelete and versioning snapshots. And they don’t exactly overlap. I talked to a guy just the other day at Rsync.net, which does cloud storage of off-site backups (not to be confused with the Unix command, though they are related). That’s the direction we’re headed. But VFX is just such a hard fit for any of these new data centers because they don’t want to accept and sync 10TB of data per day.”
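As a trivial illustration of the off-site piece (and of the Unix rsync that Healer distinguishes from the Rsync.net service), a nightly push could be as small as the script below; the source path and backup host are placeholders, and the 10TB-a-day sync volumes he mentions are exactly what makes this hard in practice.

```python
# Minimal nightly off-site sync sketch; source path and backup host are placeholders.
import subprocess

SRC = "/mnt/projects/"                         # trailing slash: sync the contents
DST = "molecule@backup.example.net:projects/"  # placeholder off-site target

subprocess.run(
    ["rsync", "-az", "--delete", "--partial",  # archive, compress, mirror deletions
     "--info=stats1", SRC, DST],
    check=True,
)
```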

A rendering of The Molecule NYC’s new location.

His current goal is simply to sync material between the two offices. “The holy grail of that scenario is that neither office has the definitive master copy of the material and there is a floating cloud copy somewhere out there that both offices are drawing from,” he says. “There’s a process out there called ‘sharding,’ as in a shard of glass, that MongoDB and Scality and other systems use that says that the data is out there everywhere but is physically diverse. It’s local but local against synchronization of its partners. This makes sense, but not if you’re moving terabytes.”

The model Healer is hoping to implement is to “basically offshore the whole company,” he says. “We’ve been working for the past few months with a New York metro startup called Packet, which has a really unique concept of a virtual private local cloud. It’s a mouthful but it’s where we need to be.” If The Molecule is doing work in New York City, Healer points out, Packet is close enough that network transmissions are fast enough and “it’s as if the machines were on our local network, which is amazing. It’s huge. If the Amazon cloud data center is 500 miles away from your office, that drastically changes how well you can treat those machines as if they are local. I really like this movement of virtual private local that says, ‘We’re close by, we’re very secure and we have more capacity than individual facilities could ever want.’ But they are off-site, and the multiple other companies that use them are in their own discrete containers that never cross. Plus, you pay per use — basically per hour and per resource. In my ideal future world, we would have some rendering capacity in our office, some other rendering capacity at Packet and off-site storage at Rsync.net. If that works out, we could potentially virtualize the whole workflow and join our New York and LA offices and any other satellite office we want to set up in the future.”

The VFX market, especially in New York, has certainly come into its own in recent years. “It’s great to be in an era when nearly every single frame of every single shot of both television and film is touched in some way by visual effects, and budgets are climbing back and the tax credits have brought a lot more VFX artists, companies and projects to town,” Healer says. “But we’re also heading toward a time when the actual brick-and-mortar space of an office may not be as critical as it is now, and that would be a huge boon for the visual effects industry and the resources we provide.”