Tag Archives: NetApp

NetApp targets M&E workflows with ASE Cloud

NetApp is collaborating with ASE to expand the company’s ASE Cloud to the US, providing flexible cloud access for media and entertainment companies that need high bandwidth to manage massive media files, often 4K and beyond.

ASE Cloud is built on NetApp StorageGrid Webscale, enabling ASE to offer object storage at an efficient cost per gigabyte. The cloud service enables companies to manage and control their data in a public cloud without data egress fees, all managed through a web portal.

StorageGrid Webscale is a scale-optimized data solution that maximizes control over rich content, enabling secure and fluid movement. The software-defined object storage solution, says NetApp, allows customers to determine where and how their data is stored, depending on where it is in the content lifecycle. The solution also protects customer data with layered erasure coding, which combines node-level and geo-distributed erasure coding to efficiently prevent data loss.

ASE Cloud, powered by NetApp, is now available in the US.

Storage Roundtable

Manufacturers weigh in on trends, needs.

By Randi Altman

Storage is the backbone of today’s workflows, from set to post to archive. There are many types of storage offerings from many different companies, so how do you know what’s right for your needs?

In an effort to educate, we gathered questions from users in the field, asking: “If you were sitting across a table from makers of storage, what would you ask?”

The following is a virtual roundtable featuring a diverse set of storage makers answering a variety of questions. We hope it’s helpful. If you have a question that you would like to ask of these companies, feel free to email me directly at randi@postPerspective.com and I will get them answered.

SCALE LOGIC’S BOB HERZAN
What are the top three requests you get from your post clients?
A post client’s primary concern is reliability. They want assurance that the storage solution they are buying supports all of their applications and will provide the performance each application needs, when it is needed. The solution also needs to interact with MAM or PAM solutions, let users search and retrieve their assets, and allow them to future-proof, scale and manage the storage in a tiered infrastructure.

Secondly, the client wants to be able to use their content in a way that makes sense. Assets need to be accessible to the stakeholders of a project, no matter how big or complex the storage ecosystem.

Finally, the client wants to see the options available to develop a long-term archiving process that can assure the long-term preservation of their finished assets. All three of these areas can be very daunting to our customers, and being able to wade through all of the technology options and make the right choices for each business is our specialty.

How should post users decide between SAN, NAS and object storage?
There are a number of factors to consider, including overall bandwidth, individual client bandwidth, project lifespan and overall storage requirements. Because high-speed online storage typically has the highest infrastructure costs, a tiered approach makes the most sense for many facilities, where SAN, NAS, cloud or object storage may all be used at the same time. In this case, the speed with which a user will need access to a project is directly related to the type of storage the project is stored on.

Scale Logic uses a consultative approach with our customers to architect a solution that will fit both their workflow and budget requirements. We look at the time it takes to accomplish a task, what risks, if any, are acceptable, the size of the assets and the obvious, but nonetheless vital, budgetary considerations. One of the best tools in our toolbox is our HyperFS file system, which gives customers the ability to choose any one of four tiers of storage solutions while allowing full scalability to incorporate SAN, NAS, cloud and object storage as they grow.

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
Above everything else, we want to tailor a solution to the needs of the client. With our consultative approach, we look not only at the requirements to build the best solution for today, but also at the ability to grow and scale up to the needs of tomorrow. We look at scalability not just from the perspective of having more ability to do things, but in doing the most with what we have. While even our entry-level system is capable of doing 10 streams of 4K, it’s equally, if not more, important to make sure that those streams are directed to the people who need them most while allowing other users access at lower resolutions.

Our Advanced QoS can learn the I/O patterns/behavior for an application while admins can give those applications a “realtime” or “non-realtime” status. This means “non-realtime” applications auto-throttle down to allow realtime apps the bandwidth. Many popular applications come pre-learned, like Finder, Resolve, Premiere or Flame. In addition, admins can add their own apps.

What do you expect to see as the next trend relating to storage?
Storage always evolves. Whatever is next in post production storage is already in use elsewhere as we are a pretty risk-averse group, for obvious reasons. With that said, the adoption of Unified Storage Platforms and hybrid cloud workflows will be the next big thing for big media producers like post facilities. The need for local online and nearline storage must remain for realtime, resolution-intense processes and data movement between tiers, but the decision-making process and asset management is better served globally by increased shared access and creative input.

The entertainment industry has pushed the limits of storage for over 30 years with no end in sight. In addition, the ability to manage storage tiers and collaborate both on-prem and off will dictate the type of storage solutions our customers will need to invest in. The evolution of storage needs continues to be driven by the consumer: TVs and displays have moved to demanding 4K content from the producers. The increased success of small professional cameras allows more access to multi-camera shoots. However, as performance and capacity continue to grow for our customers, the challenge comes down to managing large data farms effectively, efficiently and affordably. That is on the horizon in our future solution designs. Expensive, proprietary hardware will be a thing of the past and open, affordable storage will be the norm, with user-friendly and intuitive software developed to automate, simplify, and monetize our customer assets while maintaining industry compatibility.

SMALL TREE’S CORKY SEEBER
How do your solutions work with clients’ existing storage? And who is your typical client?
There are many ways to have multiple storage solutions co-exist within the post house, most of these choices are driven by the intended use of the content and the size and budget of the customer. The ability to migrate content from one storage medium to another is key to allowing customers to take full advantage of our shared storage solutions.

Our goal is to provide simple solutions for the small to medium facilities, using Ethernet connectivity from clients to the server to keep costs down and make support of the storage less complicated. Ethernet connectivity also enables the ability to provide access to existing storage pools via Ethernet switches.

What steps have you taken to work with technologies outside of your own?
Today’s storage providers need to actively design their products to allow the post house to maximize the investment in their shared storage choice. Our custom software is open-source based, which gives it greater flexibility to integrate seamlessly with a wider range of technologies.

Additionally, the actual communication between products from different companies can be a problem. Storage designs that allow the ability to use copper or optical Ethernet and Fibre Channel connectivity provide a wide range of options to ensure all aspects of the workflow can be supported from ingest to archive.

What challenges, if any, do larger drives represent?
Today’s denser drives, while providing more storage space within the same physical footprint, do have some characteristics that need to be factored in when making your storage solution decisions. Larger drives take longer to configure and to rebuild data sets after a disk fails, and in some cases may be slightly slower than less dense disk drives. You may want to consider using different RAID protocols, or even software RAID protection rather than hardware RAID protection, to minimize some of the challenges that the new, larger disk drives present.

When do you recommend NAS over SAN deployments?
This is an age-old question as both deployments have advantages. Typically, NAS deployments make more sense for smaller customers as they may require less networking infrastructure. If you can direct connect all of your clients to the storage and save the cost of a switch, why not do that?

SAN deployments make sense for larger customers who have such a large number of clients that making direct connections to the server is impractical or impossible: these require additional software to keep everything straight.

In the past, SAN deployments were viewed as the superior option, mostly due to Fibre Channel being faster than Ethernet. With the wide acceptance of 10GbE, there is a convergence of sorts, and NAS performance is no longer considered a weakness compared to SAN. Performance aside, a SAN deployment makes more sense for very large customers with hundreds of clients and multiple large storage pools that need to support universal access.

QUANTUM’S JANET LAFLEUR
What are the top three requests that you get from post users?
1) Shared storage with both SAN and NAS access to collaborate more broadly across groups. For streaming high-resolution content to editorial workstations, there’s nothing that can match the performance of shared SAN storage, but not all production team members need the power of SAN.

For example, animation and editorial workflows often share content. While editorial operations stream content from a SAN connection, a NAS gateway using a higher-speed IP protocol optimized for video (such as our StorNext DLC) can be used for rendering. By working with NAS, producers and other staff who primarily access proxies, images, scripts and other text documents can more easily access this content directly from their desktops. Our Xcellis workflow storage offers NAS access out of the box, so content can be shared over IP and over Fibre Channel SAN.

2) A starting point for smaller shops that scales smoothly. For a small shop with a handful of workstations, it can be hard to find a storage solution that fits into the budget now but doesn’t require a forklift upgrade later when the business grows. That’s one reason we built Xcellis workflow storage with a converged architecture that combines metadata storage and content storage. Xcellis provides a tighter footprint for smaller sites, but still can scale up for hundreds of users and multiple petabytes of content.

3) Simple setup and management of storage. No one wants to spend time deploying, managing and upgrading complex storage infrastructure, especially not post users who just want storage that supports their workflow. That’s why we are continuing to enhance StorNext Connect, which can not only identify problems before they affect users but also reduce the risk of downtime or degraded performance by eliminating error-prone manual tasks. We want our customers to be able to focus on content creation, not on managing storage.

How should post users decide between SAN, NAS and object storage?
Media workflows are complex, with unique requirements at each step. SAN, NAS and object storage all have qualities that make them ideal for specific workflow functions.

SAN: High-resolution, high-image-quality content production requires low-latency, high-performance storage that can stream 4K or greater — plus HDR, HFR content — to multiple workstations without dropping frames. Fibre Channel SANs are the only way to ensure performance for multi-streaming this content.

Object storage: For content libraries that are being actively monetized, object storage delivers the disk-level performance needed for transcoding and reuse. Object storage also scales beyond the petabyte level, and the self-balancing nature of its erasure code algorithms makes replacing aging disks with next-generation ones much simpler and faster than is possible with RAID systems.

NAS: High-performance IP-based connections are ideal for enabling render server farms to access content from shared storage. The simplicity of deploying NAS is also recommended for low-bandwidth functions such as review and approval, plus DVD authoring, closed captioning and subtitling.

With an integrated, complete storage infrastructure, such as those built with our StorNext platform, users can work with any or all of these technologies — as well as digital tape and cloud — and target the right storage for the right task.

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
So much depends on the configuration: how many spindles, how many controllers, etc. At NAB 2016, our StorNext Pro 4K demo system delivered eight to 10 streams of 4K 10-bit DPX with headroom to stream more. The solution included four RAID-6 arrays of 24 drives each with redundant Xcellis Workflow Directors for an 84TB usable capacity in a neat 10U rack.

The StorNext platform allows users to scale performance and capacity independently. The need for more capacity can be addressed with the simple addition of Xcellis storage expansion arrays. The need for more performance can be met with an upgrade of the Xcellis Workflow Director to support more concurrent file systems.

PANASAS’ DAVID SALLAK
What are the top three storage-related requests/needs that you get from your post clients or potential post clients?
They want native support for Mac, high performance and a system that is easier to grow and manage than SAN.

When comparing shared storage product choices, what are the advantages of NAS over SAN? Does the easier administration of NAS compared to SAN factor into your choice of storage?
NAS is easier to manage than SAN. Scale-out NAS is easier to grow than SAN, and is designed for high availability. If scale-out NAS could be as fast as SAN, then SAN buyers would be very attracted to scale-out NAS.

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
As many streams as possible. Post users always need more performance for future projects and media formats, so storage should support a lot of streams of ProRes HD or DNxHD and be capable of handling uncompressed DPX formats that come from graphics departments.

What do you expect to see as the next trend relating to storage? The thing that’s going to push storage systems even further?
Large post production facilities need greater scalability, higher performance, easier use, and affordable pricing.

HGST’S JEFF GREENWALD
What are the top three requests you get from your post clients or potential post clients?
They’re looking for better ways to develop cost efficiencies of their workflows. Secondly, they’re looking for ways to improve the performance of those workflows. Finally, they’re looking for ways to improve and enhance data delivery and availability.

How should post users decide between SAN, NAS and object storage?
There are four criteria that customers must evaluate in order to make trade-offs between the various storage technologies as well as storage tiers. Customers must evaluate the quantity of media data, as well as the frequency of access. They must evaluate the latency requirements of data delivery, and, finally, they must balance these three evaluations against their budgets.

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
In order to calculate the quantity of video streams, you must balance available bandwidth as well as file sizes and data delivery requirements against the desired capacity. Also, jitter and data loss shrink the available bandwidth, since retries and resends consume part of it.

What do you expect to see as the next trend relating to storage, and what will push storage even further?
There are two trends that will dramatically transform the storage industry. The first is storage analytics, and the second is new and innovative usage of automatic meta-tagging of file data.

New technologies like SMR, optical and DNA-based object storage have not yet proven to be technology disruptors in storage, therefore it is likely that storage technology advancements will be evolutionary as opposed to revolutionary in the next 10 years.

G-TECH’S MICHAEL WILLIAMS
Who is using your gear in the post world? What types of pros?
Filmmakers, digital imaging technicians, editors, audio technicians and photographers all use our solutions. These are the pros that capture, store, transfer and edit motion pictures, indie films, TV shows, music, photography and more. We offer everything from rugged standalone portable drives to high-performance RAID solutions to high-capacity network storage for editing and collaboration.

You recently entered the world of NAS storage. Can you talk about the types of pros taking advantage of that tech?
Our NAS customers run the gamut from DITs to production coordinators to video editors and beyond. With camera technology advancing so rapidly, they are looking for storage solutions that can fit within the demanding workflows they encounter every day.

With respect to episodic, feature film, commercials or in-house video production storage, needs are rising faster than ever before and many IT staffs are shrinking, so we introduced the G-Rack 12 NAS platform. We are able to use HGST’s new 10TB enterprise-class hard drives to deliver 120TB of raw storage in a 2RU platform, providing the required collaboration and performance.

We have also made sure that our NAS OS on the G-Rack 12 is designed to be easily administered by the DIT, video editor or someone else on the production staff and not necessarily a Linux IT tech.

Production teams need to work smarter — DITs, video editors, DPs and the like can do the video shoot, get the video ingested into a device and get the post team working on it much faster than in days past. We all know that time is money; this is why we entered the NAS market.

Any other new tech on the horizon that might affect how you make storage or a certain technology that might drive your storage in other directions?
The integration of G-Technology — along with SanDisk and HGST — into Western Digital is opening up doors in terms of new technologies. In addition to our current high-capacity, enterprise-class HDD-based offerings, SSD devices are now available to give us the opportunity to expand our offerings to a broader range of solutions.

This, in addition to new external device interfaces, is paving the way for higher-performance storage solutions. At NAB this year, we demonstrated Thunderbolt 3 and USB-C solutions with higher-performance storage media and network connectivity. We are currently shipping the USB solutions, and the technology demos we gave provide a glimpse into future solutions. In addition, we’re always on the lookout for new form factors and technologies that will make our storage solutions faster, more powerful, more reliable and affordable.

What kind of connections do your drives have, and if it’s Thunderbolt 2 or Thunderbolt 3, can they be daisy chained?
When we look at interfaces, as noted above, there’s a USB Type-C for the consumer market as well as Thunderbolt and 10Gb Ethernet for the professional market.

As far as daisy-chaining, yes. Thunderbolt is a very flexible interface, supporting up to six devices in a daisy chain on a single port. Thunderbolt 3 is a very new interface that is gaining momentum, one that not only supports extremely high data transfer speeds (up to 2.7GB/s) but also drives up to two 4K displays. We should also not forget that there are still more than 200M devices supporting Thunderbolt 1 and 2 connections.

LACIE’S GASPARD PLANTROU
How do your solutions work with clients’ existing storage? And who are your typical M&E users?
With M&E workflows, it’s rare that users work with a single machine and storage solution. From capture to edit to final delivery, our customers’ data interacts with multiple machines, storage solutions and users. Many of our storage solutions feature multiple interfaces such as Thunderbolt, USB 3.0 or FireWire so they can be easily integrated into existing workflows and work seamlessly across the entire video production process.

Our Rugged features Thunderbolt and USB 3.0. That means it’s guaranteed to work with any standard computer or storage scenario on the market. Plus it’s shock, dust and moisture-resistant, allowing it to handle being passed around set or shipped to a client. LaCie’s typical M&E users are mid-size post production studios and independent filmmakers and editors looking for RAID solutions.

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
The new LaCie 12big Thunderbolt 3 pushes up to 2600MB/s and can handle three streams of 4K 10-bit DPX at 24fps (assuming one stream is 864MB/s). In addition, the solution offers up to 96TB of capacity to edit and store large amounts of 4K footage.

What steps have you taken to work with technologies outside of your own?
With video file sizes growing exponentially, it is more important than ever for us to deliver fast, high-capacity solutions. Recent examples of this include bringing the latest technologies from Intel — Thunderbolt 3 — into our line. We work with engineers from our parent company, Seagate, to incorporate the latest enterprise class core technology for speed and reliability. Plus, we always ensure our solutions are certified to work seamlessly on Mac and Windows.

NETAPP’S JASON DANIELSON
What are the top three requests that you get from post users?
As a storage vendor, the first three requests we’re likely to get are around application integration, bandwidth and cost. Our storage systems support well over 100 different applications across a variety of workflows (VFX, HD broadcast post, uncompressed 4K finishing) in post houses of all sizes, from boutiques in Paris to behemoths in Hollywood.

Bandwidth is not an issue, but the bandwidth per dollar is always top of mind for post. So working with the post house to design a solution with suitable bandwidth at an acceptable price point is what we spend much of our time doing.

How should post users decide between SAN, NAS and object storage?
The decision to go with SAN versus NAS depends on the facility’s existing connectivity to the workstations. Our E-Series storage arrays support quite a few file systems. For SAN, our systems integrators usually use Quantum StorNext, but we also see Scale Logic’s HyperFS and Tiger Technology’s metaSAN being used.

For NAS, our systems integrators tend to use EditShare XStream EFS and IBM GPFS. While there are rumblings of a transition away from Fibre Channel-based SAN to Ethernet-based NAS, there are complexities and costs associated with tweaking a 10GigE client network.

The object storage question is a bit more nuanced. Object stores have been so heavily promoted by storage vendors that there are many misconceptions about their value. For most of the post houses we talk to, object storage isn’t the answer today. While we have one of the most feature-rich and mature object stores out there, even we say that object stores aren’t for everyone. The questions we ask are:

1) Do you have 10 million files or more? 2) Do you store over a petabyte? 3) Do you have a need for long-term retention? 4) Does your infrastructure need to support multisite production?

If the answer to any of those questions is “yes,” then you should at least investigate object storage. A high-end boutique with six editors is probably not in this realm. It is true that an object store represents a slightly lower-cost bucket for an active archive (content repository), but it comes at a workflow cost of introducing a second tier to the architecture, which needs to be managed by either archive management or media asset management software. Unless such a software system is already in place, then the cost of adding one will drive up the complexity and cost of the implementation. I don’t mean to sound negative about object stores. I am not. I think object stores will play a major role in active-archive content storage in the future. They are just not a good option for a high-bandwidth production tier today or, possibly, ever.

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
In order to answer that question, we would ask the post house: “How many streams do you want to play back?” Let’s say we’re talking about 4K (4096×2160), versus the several other resolutions that are called 4K. At 4:4:4, that works out to 33MB per frame, or 792MB per second. We would typically use flash (SSDs) for 4K playback. Our 2RU 24-SSD storage array, the EF560, can do a little over 9GB per second. That amounts to 11 streams.

But that is only half the answer. This storage array is usually deployed under a parallel file system, which will aggregate the bandwidth of several arrays for shared editing purposes. A larger installation might have eight storage arrays — each with 18 SSDs (to balance bandwidth and cost) — and provide sustained video playback for 70 streams.
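The arithmetic above can be sketched in a few lines. This is a minimal calculation, assuming the figures quoted here (4096×2160 RGB at 10 bits per channel, no packing overhead, decimal megabytes); the function names are illustrative, not any vendor's API:

```python
# Stream-count estimate for uncompressed 4K 10-bit DPX at 24fps.
# Assumes 4096x2160 RGB, 10 bits per channel, no packing overhead,
# and decimal (base-10) megabytes, matching the figures in the article.

def dpx_stream_bandwidth(width=4096, height=2160, bits_per_channel=10,
                         channels=3, fps=24):
    """Return the sustained bandwidth of one stream, in bytes per second."""
    frame_bytes = width * height * channels * bits_per_channel / 8
    return frame_bytes * fps

def max_streams(array_bandwidth_bytes_per_s, **kwargs):
    """Return how many whole streams fit within a given array bandwidth."""
    return int(array_bandwidth_bytes_per_s // dpx_stream_bandwidth(**kwargs))

stream = dpx_stream_bandwidth()
print(f"one stream: {stream / 1e6:.0f} MB/s")  # ~796 MB/s (rounding 33MB/frame gives the quoted 792)
print(max_streams(9e9))                        # a ~9GB/s array sustains 11 streams, as stated
```

The same function sizes the larger installation: eight arrays aggregated by a parallel file system multiply the usable bandwidth, which is how the 70-stream figure is reached.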

What do you expect to see as the next trend relating to storage? What’s going to push storage systems even further?
The introduction of larger, more cost-effective flash drives (SSDs) will have a drastic effect on storage architectures over the next three years. We are now shipping 15TB SSDs. That is a petabyte of extremely fast storage in six rack units. We think the future is flash production tiers in front of object-store active-archive tiers. This will eliminate the need for archive managers and tape libraries in most environments.

HARMONIC’S ANDY WARMAN
What are the top three requests that you hear from your post clients or potential post clients?
The most common request is for sustained performance. This is an important aspect since you do not want performance to degrade due to the number of concurrent users, the quantity of content, how full the storage is, or the amount of time the storage has been in service.

Another aspect related to this is the ability to support high-write and -read bandwidth. Being able to offer equal amounts of read and write bandwidth can be very beneficial for editing and transcode workflows, versus solutions that have high-read bandwidth, but relatively low-write performance. Customers are also looking for good value for money. Generally, we would point to value coming from the aforementioned performance as well as cost-effective expansion.

You guys have a “media-aware” solution for post. Can you explain what that is and why you opted to go this way?
Media-aware storage refers to the ability to store different media types in the most effective manner for the file system. A MediaGrid storage system supports multiple different block sizes, rather than a single block size for all media types. In this way, video assets, graphics and audio and project files can use different block sizes that make reading and writing data more efficient. This type of file I/O “tuning” provides some additional performance gains for media access, meaning that video could use, say, 2MB blocks, graphics and audio 512KB, and projects and other files 128KB. Not only can different block sizes be used by different media types, but they are also configurable so UHD files could, say, use 8MB block sizes.
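The tuning described above can be sketched as a simple lookup. This is an illustrative sketch only: the block sizes mirror the examples given in the answer, but the mapping and function names are hypothetical, not MediaGrid's actual configuration interface:

```python
# Illustrative per-media-type block sizing, following the examples above.
# The sizes come from the article; the names are hypothetical, not
# MediaGrid's actual configuration API.

KIB = 1024
MIB = 1024 * 1024

BLOCK_SIZES = {
    "video":    2 * MIB,    # large sequential reads/writes
    "graphics": 512 * KIB,
    "audio":    512 * KIB,
    "project":  128 * KIB,  # small, frequently updated files
    "uhd":      8 * MIB,    # block sizes are configurable upward for UHD
}

def block_size_for(media_type: str) -> int:
    """Return the configured block size, falling back to the smallest tier."""
    return BLOCK_SIZES.get(media_type, 128 * KIB)

print(block_size_for("video") // MIB)  # 2
```

The design point is that one file system need not force a single block size on every asset class; matching block size to typical I/O size reduces wasted reads and writes.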

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
The storage has no practical capacity or bandwidth limit, so we can build a storage system that suits the customer’s needs. Sizing a system becomes a case of balancing bandwidth and capacity by selecting the appropriate number of drives and drive size(s) to match specific needs. The system is built on SAS drives, with multiple fully redundant 10 Gigabit Ethernet connections to client workstations and attached devices, and redundant 12 Gigabit SAS interconnects between storage expansion nodes. This means we have high-speed connectivity within the storage as well as out to clients.

As needs change, the system can be expanded online with all users maintaining full access. Bandwidth scales in a linear fashion, and because there is a single namespace in MediaGrid, the entire storage system can be treated as a single drive, or divided up with user-level rights granted to folders within the file system.

Performance is further enhanced by the use of parallel access to data throughout the storage system. The file system provides a map to where all media is stored or is to be stored on disk. Data is strategically placed across the whole storage system to provide the best throughput. Clients simultaneously read and write data through the 10 Gigabit network to all network attached storage nodes rather than data being funneled through a single node or data connection. The result is that performance is the same whether the storage system is 5% or 95% full.

What do you expect to see as the next trend relating to storage? What’s going to push storage systems even further?
The advent of UHD has driven demands on storage further as codecs and therefore data throughput and storage requirements have increased significantly. Faster and more readily accessible storage will continue to grow in importance as delivery platforms continue to expand and expectations for throughput of storage systems continue to grow. We will use whatever performance and storage capacity is available, so offering more of both is inevitable to feed our needs for creativity and storytelling.

JMR’S STEVE KATZ
What are the top three storage-related requests you get from post users?
The most requested is ease of installation and operation. The JMR Share is delivered with euroNAS OS on mirrored SSD boot disks, with enough processing power and memory to support efficient, high-volume workflows, and a perpetual license to support the amount of storage requested, from the 20TB minimum to the “unlimited” maximum. It’s intuitive to use and comfortable for anyone familiar with popular browsers.

The second is compatibility and interoperability with clients using various hardware, operating systems and applications.

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
This can all be calculated by usable bandwidth and data transfer rates, which as with any networked storage can be limited by the network itself. For those using a good 10GbE switch, the network limits data rates to 1250MB/s maximum, which can support more than 270 streams of DNxHD 36, but only one stream of 4K 10-bit “film” resolution. Our product can support ~1800MB/s in a single 16-disk appliance, but without a very robust network this can’t be achieved.
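The network-limited arithmetic above can be checked with a short sketch. The assumptions match the quoted figures: an ideal 10GbE link carrying 1250MB/s with no protocol overhead, DNxHD 36 at 36Mb/s, and uncompressed 4K 10-bit at roughly 796MB/s:

```python
# Network-limited stream counts, using the figures quoted above.
# Assumes an ideal 10GbE link (1250 MB/s payload, no protocol overhead).

LINK_MB_S = 1250  # 10 Gb/s expressed in decimal megabytes per second

def streams_on_link(stream_mb_s, link_mb_s=LINK_MB_S):
    """Return how many whole streams a link can carry."""
    return int(link_mb_s // stream_mb_s)

dnxhd36_mb_s = 36 / 8                          # 36 Mb/s codec -> 4.5 MB/s
dpx_4k_mb_s = 4096 * 2160 * 30 / 8 * 24 / 1e6  # ~796 MB/s uncompressed

print(streams_on_link(dnxhd36_mb_s))  # 277 -> "more than 270 streams"
print(streams_on_link(dpx_4k_mb_s))   # 1 -> a single 4K 10-bit stream
```

As the answer notes, the appliance itself can sustain ~1800MB/s, so the 10GbE network, not the storage, is the bottleneck for uncompressed 4K.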

When comparing shared storage product choices, what are the advantages of NAS over SAN, for example?
SAN actually has some advantages over NAS, but unless the user has Fibre Channel hardware installed, it might be a very costly option. The real advantage of NAS is that everyone already has an Ethernet network available that may be sufficient for video file server use. If not, it may be upgraded fairly inexpensively.

JMR Share comes standard with both GbE and 10GbE networking capability right out of the box, and has performance that will saturate 10GbE links; high-availability active/active failover is available, as is SAN Cluster (an extra-cost option). The SAN Cluster comes with specialized SAN software and 8Gb or 16Gb Fibre Channel host adapters installed, so it's ready to go.

What do you expect to see as the next trend relating to storage? The thing that’s going to push storage systems even further?
Faster and lower cost, always! Going to higher speed network adapters, 12Gb SAS internal storage and even SSDs or NVMe drives, it seems the sky is the limit — or, actually, the networking is the limit. We already offer SAS SSDs in the Share as an option, and our higher-end dual-processor/dual-controller Share models (a bit higher cost) using NVMe drives can provide internal data transfer speeds exceeding what any network can support (even multiple 40Gb InfiniBand links). We are seeing a bit of a trend toward SSDs now that higher-capacity models at more reasonable cost, with reasonable endurance, are becoming available.

The cloud and production storage

By Tom Coughlin

The network of connected data centers known as “the cloud” is playing a greater role in many media and entertainment applications. This includes collaborative workflows and proxy viewing, rendering, content distribution and archiving. Cloud services can include processing power as well as various types of digital storage.

In the figure below, you will see Coughlin Associates’ projections (2015 Digital Storage in Media and Entertainment Report), for the growth of professional media and entertainment cloud storage out to 2020. Note that archiving is the biggest projected market for media and entertainment cloud storage.

Media and Entertainment Cloud Storage Capacity Projections

At the 2016 NAB show, there were many companies offering cloud storage and online services for the media and entertainment industry. Some of these were cloud-only offerings and some are hybrid cloud services with some on-premises storage.

In this piece, we will review some of these cloud storage offerings and take a look at how to move content around the cloud, as well as to and from the cloud and on-premise storage. There were also some interesting object storage infrastructure implementations that were of interest at the NAB show.

Archiving in the Cloud
Archive is the biggest application for cloud storage in media and entertainment and several companies have products geared toward these applications, some of them with magnetic tape storage in the cloud. Oracle’s DIVA content storage management software allows the integration of on-premises storage and the Oracle cloud. The recently announced DIVAnet 2.0 allows a converged infrastructure for rich media using single namespace access to DIVArchive on-premises sites and Oracle DIVA Cloud storage as a service.
The Oracle Archive Cloud using DIVA content management offers archive storage for roughly $0.001/GB/month, which equates to storing 1 petabyte (PB) of content for about $12,000 per year. That is less than the on-premises cost of many content archives. At this price, rich-media content companies are considering trusting their long-term content archives to the cloud.
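The relationship between per-gigabyte monthly pricing and annual petabyte cost is easy to sanity-check (using decimal units, as storage vendors typically price them):

```python
# Convert dollars/GB/month into dollars/PB/year.
GB_PER_PB = 1_000_000  # decimal petabyte, the convention in vendor pricing

def annual_cost_per_pb(usd_per_gb_month):
    return usd_per_gb_month * GB_PER_PB * 12

# $12,000/PB/year works out to a tenth of a cent per gigabyte per month:
print(annual_cost_per_pb(0.001))   # 12000.0
```

A full cent per gigabyte per month would instead come to $120,000 per petabyte-year, ten times the figure quoted here.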

Fujifilm’s Dternity tape-based archive, which offers online access to your data and integrates with applications already in your workflow, had an exhibit at NAB again this year. IBM also offers tape storage in the cloud. In addition to archiving on tape there are HDD cloud storage offerings as well. Major cloud companies such as Google, Azure and AWS offer tape and HDD- based cloud storage.

Quantum showcased its new Q-Cloud Vault long-term cloud storage service. Fully integrated with workflows powered by StorNext 5.3, Q-Cloud Vault provides low-cost, Quantum-managed "cold storage" in the public cloud. Because StorNext 5.3 enables end-to-end encryption, users can leverage the cloud as part of their storage infrastructure to facilitate secure, cost-effective storage of their media content, both on-site and off-site.

In addition to supporting Q-Cloud Vault, StorNext 5.3 gives users greater control and flexibility in optimizing their collaborative media workflows for maximum efficiency and productivity.

Cloud-Assisted Media Workflows
In addition to archive-focused cloud storage, some companies at NAB were talking about cloud and hybrid storage focused on non-archive applications.

Fast-paced growth and strong demand for scale-out storage clouds have propelled DDN’s WOS to one of the industry’s top solutions based on the number of objects in production and have fortified DDN’s position as a strong market leader in object storage. Continuing to fuel the pace of its object storage momentum, DDN also announced the availability of its latest WOS platform release. WOS is also an important component in the company’s MediaScaler Converged Media Workflow Storage Platform.

WOS combines high performance, flexible data protection, multi-site capabilities and storage efficiency, which the company says suits a wide range of use cases, including active archive repositories, OpenStack Swift, data management, disaster recovery, content distribution, distributed collaboration workflows, enterprise content repositories, file sync and share, geospatial images, video surveillance, scale-out web and cloud services, and video post production.

EMC, Pixspan, Aspera and Nvidia are bringing uncompressed 4K workflows to IT infrastructures, advancing digital media workflows with full resolution content over standard 10GbE networks. Customers can now achieve savings and performance increases of 50-80 percent in storage and bandwidth throughout the entire workflow — from on-set through post to final assets. Artists and facilities using creative applications for compositing, visual effects, DI and more can now work faster with camera raw, DPX, EXR, TIFF and Cineon files. Content can be safely stored on EMC’s Isilon scale-out NAS for shared collaborative access to project data in the data center, around the world, or to the cloud.

NetApp StorageGrid Webscale

At NAB, NetApp promoted new features in its StorageGrid Webscale (appliance or software-defined) object storage. The object store has been widely adopted by media sites and media cloud providers who are managing tens of billions of media objects. Now, a majority of the key MAM, file-delivery and archive systems have integrated with StorageGrid Webscale's Amazon S3 object interface.

StorageGrid Webscale is a next-generation solution for multi-petabyte distributed content repositories. It provides erasure coding or, alternatively, automatic file copies to remote locations depending on the value of the media and the needs of the workflow.
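The appeal of erasure coding over full copies for lower-value media is largely a matter of raw-capacity overhead. The sketch below uses a generic k+m scheme (k data fragments plus m parity fragments, any k of the k+m sufficient to rebuild); the 6+3 parameters are purely illustrative, not StorageGrid Webscale's actual configuration.

```python
# Raw-to-usable storage ratio for a k+m erasure-coding scheme,
# compared with keeping full replicas.
def storage_overhead(k, m):
    """Raw capacity consumed per unit of usable data with k data
    fragments and m parity fragments."""
    return (k + m) / k

def replica_overhead(copies):
    """Raw capacity consumed when keeping N full copies."""
    return float(copies)

print(storage_overhead(6, 3))   # 1.5  (tolerates loss of any 3 fragments)
print(replica_overhead(2))      # 2.0  (tolerates loss of 1 copy)
```

A 6+3 layout survives three lost fragments at 1.5x raw storage, where two full copies survive only one loss at 2.0x, which is why object stores tend to reserve replication for the content whose workflow demands it.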

Scality Ring storage scales linearly across multiple active sites and thousands of servers and can host an unlimited number of objects, providing high performance across a variety of workloads with file or object storage access. The company says the product enables organizations to build exabyte-scale active archives and scalable content distribution systems, including network DVR/PVR. The product can be used to build a private storage cloud with file and object access and to provide customized web services.

Avere FlashCloud is a hybrid cloud and on-premises storage offering, advertised as providing unlimited capacity scaling in the cloud and unlimited performance scaling at the edge, with up to 480TB of data on FXT Series Edge filers. The dynamic tiering of active data to the edge hides the latency of cloud storage, while NFS and SMB access provide file-based storage with a global namespace spanning public objects, private objects and NAS.

Avere’s FlashMove software transparently moves live online data to the cloud and between cloud providers. FlashMiror replicates data to the cloud for disaster recovery. AES-256 encryption with FIPS 140-2 compliance provides data security with on premise encryption key management. It should be noted that Avere worked with Google to provide the storage cluster used to stream the video showed at NAB during the Lytro Cinema demonstration.

SAN Solutions' SAN Metro Media, an ultra-low-latency cloud for media, extends a customer's studio to the cloud via its SMM Ultra-Connect, a dedicated, secure, low-latency direct-connect circuit from the customer's site to one of SAN Metro Media's data centers in a metropolitan area. The SMM Ultra-Connect circuit can operate entirely off the public Internet and transport media at the bandwidth and latencies that large studio applications and workflows require.

Moving and Delivering Content in the Cloud
There are several companies offering data transport services to and from cloud services, as well as from on-premises storage to the cloud and back.

At the NAB show, Aspera (a division of IBM) introduced FASPStream, a turnkey application-software line designed to enable live streaming of broadcast-quality video globally over commodity Internet networks with glitch-free playout and negligible startup time, reducing the need for expensive and limited satellite-based backhaul, transport and distribution.

The FASPStream software uses the FASP bulk data protocol to transport live multicast, unicast UDP, TCP or other file-source video, providing timely arrival of live video and data independent of network round-trip delay and packet loss. The company says that less than five seconds of startup delay is needed for 50Mbps video streams transported with 250ms round-trip latency and three percent packet loss, properties sufficient for 4K streaming between continents.

Aspera is part of a broader group of IBM acquisitions with a strong focus on the media and entertainment industry, including object storage provider Cleversafe.

Signiant announced the integration of its Manager+Agents product with the Avid Interplay | MAM system. Customers can now initiate accelerated file transfers from within Interplay | MAM, making it easier than ever to use the power of Signiant technology in support of global creative processes. Users can now initiate and monitor Signiant file transfers via the Export capability within Avid Interplay | MAM.

FileCatalyst had some media and entertainment case studies, including involvement with NBC’s 2016 Rio Olympics preparation.

AWS Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud. With Snowball, you don't need to write any code or purchase any hardware to transfer your data. Simply create a job in the AWS Management Console and a Snowball appliance will be automatically shipped to you. Once it arrives, attach the appliance to your local network, download and run the Snowball client to establish a connection, then use the client to select the file directories that you want to transfer to the appliance.

The client will then encrypt and transfer the files to the appliance at high speed. Once the transfer is complete and the appliance is ready to be returned, the E Ink shipping label will automatically update and you can track the job status via Amazon Simple Notification Service (SNS), text messages or directly in the Console.

EMC and Imagine Communications provide live channel playout with the Versio solution in an offering built on EMC's converged VCE Vblock system and EMC's Isilon scale-out NAS storage system. EMC's technology and Versio, Imagine's cloud-capable channel playout solution, enable broadcasters to securely deliver channel playout across geographically dispersed networks and engage customers with content tailored to their respective operations. EMC also talked about cloud DVR solutions with Anevia.

Object Storage Infrastructure
A start-up company named Fixstars Solutions provides an innovative storage server (called Olive) with a dual-core CPU, FPGA, 512MB RAM, Gigabit Ethernet and up to 13TB of non-volatile flash memory storage in a 2.5-inch form factor. The company announced Ceph running on Olive, building high-performance, scalable storage systems at low cost that it feels can provide solutions for broadcasters, studios, cable providers and Internet delivery networks.

Dr. Tom Coughlin, president of Coughlin Associates, is a storage analyst and consultant with over 30 years in the data storage industry. He is the founder and organizer of the Annual Storage Visions Conference as well as the Creative Storage Conference. The 2016 Creative Storage Conference takes place June 23 in Culver City, featuring sessions and exhibits focused on the growing storage demands of HD, UltraHD, 4K and HDR film production and how they affect every stage of production.

NetApp and Tekserve reach out to M&E community in NYC

NEW YORK — Over the last 30 days, NetApp has been meeting casually with users and potential users on both coasts. During SMPTE in Los Angeles, they hosted a dinner with CTO/engineers from a variety of large post houses and film studios.

Last week in New York City, they held a cocktail party in conjunction with Tekserve (www.tekserve.com) – the goal was to talk about technology and the needs of the user. Techie types from HBO, Showtime, CBS Sports, Al Jazeera America, Prime Focus and others came together over food and drinks.

Tekserve, a leader in media and entertainment technology that provides workflow solutions and services to companies, is now offering NetApp (www.netapp.com) solutions to the market. “NetApp has a well-deserved reputation for performance and reliability in enterprise storage. We are excited to introduce their new products to the post production and broadcast community in the New York area,” says Tekserve CTO Aaron Freimark.

Jason Danielson, media and entertainment solutions, at NetApp, says, “Tekserve complements NetApp’s media storage portfolio with their trusted expertise at building production SANs combined with their practice in large-scale mobile device roll-outs. We look forward to a successful partnership.”
