Tag Archives: Quantum

Quantum shipping StorNext 5.4

Quantum has introduced StorNext 5.4, the latest release of their workflow storage platform, designed to bring efficiency and flexibility to media content management. StorNext 5.4 enhancements include the ability to integrate existing public cloud storage accounts and third-party object storage (private cloud) — including Amazon Web Services, Microsoft Azure, Google Cloud, NetApp StorageGRID, IBM Cleversafe and Scality Ring — as archive tiers in a StorNext-managed media environment. It also lets users deploy applications embedded within StorNext-powered Xcellis workflow storage appliances.

StorNext 5.4 also extends StorNext Storage Manager, offering automated, policy-based movement of content into and out of users’ existing public and private clouds while maintaining the visibility and access that StorNext provides. It offers seamless integration of public and private clouds alongside primary disk and tape storage tiers within a StorNext-managed environment, full user and application access to media stored in the cloud without additional hardware or software, and extended versioning across sites and the cloud.

By enabling applications to run inside its Xcellis Workflow Director, the new Dynamic Application Environment (DAE) capability in StorNext 5.4 allows users to leverage a converged storage architecture, reducing the time, cost and complexity of deploying and maintaining applications.

StorNext 5.4 is currently shipping with all newly purchased Xcellis, StorNext M-Series and StorNext Pro Solutions, as well as Artico archive appliances. It is available at no additional cost for StorNext 5 users under current support contracts.

Storage Roundtable

Manufacturers weigh in on trends, needs.

By Randi Altman

Storage is the backbone of today’s workflows, from set to post to archive. There are many types of storage offerings from many different companies, so how do you know what’s right for your needs?

In an effort to educate, we gathered questions from users in the field, asking: “If you were sitting across a table from makers of storage, what would you ask?”

The following is a virtual roundtable featuring a diverse set of storage makers answering a variety of questions. We hope it’s helpful. If you have a question that you would like to ask of these companies, feel free to email me directly at randi@postPerspective.com and I will get them answered.

SCALE LOGIC’S BOB HERZAN
What are the top three requests you get from your post clients?
A post client’s primary concern is reliability. They want to be assured that the storage solution they are buying supports all of their applications and will deliver the performance each application needs, when it needs it. The solution also has to interact with MAM or PAM systems, let them search and retrieve their assets, and allow them to future-proof, scale and manage the storage in a tiered infrastructure.

Secondly, the client wants to be able to use their content in a way that makes sense. Assets need to be accessible to the stakeholders of a project, no matter how big or complex the storage ecosystem.

Finally, the client wants to see the options available to develop a long-term archiving process that can assure the long-term preservation of their finished assets. All three of these areas can be very daunting to our customers, and being able to wade through all of the technology options and make the right choices for each business is our specialty.

How should post users decide between SAN, NAS and object storage?
There are a number of factors to consider, including overall bandwidth, individual client bandwidth, project lifespan and overall storage requirements. Because high-speed online storage typically has the highest infrastructure costs, a tiered approach makes the most sense for many facilities, where SAN, NAS, cloud or object storage may all be used at the same time. In this case, the speed with which a user will need access to a project is directly related to the type of storage the project is stored on.

Scale Logic uses a consultative approach with our customers to architect a solution that will fit both their workflow and budget requirements. We look at the time it takes to accomplish a task, what risks, if any, are acceptable, the size of the assets and the obvious but nonetheless vital budgetary considerations. One of the best tools in our toolbox is our HyperFS file system, which gives customers the ability to choose any one of four tiers of storage solutions while allowing full scalability to incorporate SAN, NAS, cloud and object storage as they grow.

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
Above everything else, we want to tailor a solution to the needs of the client. With our consultative approach we look not only at the requirements for building the best solution for today, but also at the ability to grow and scale to the needs of tomorrow. We look at scalability not just from the perspective of having more ability to do things, but in doing the most with what we have. While even our entry-level system is capable of doing 10 streams of 4K, it’s equally, if not more, important to make sure that those streams are directed to the people who need them most while allowing other users access at lower resolutions.

Our Advanced QoS can learn the I/O patterns and behavior of an application, and admins can give those applications a “realtime” or “non-realtime” status. This means “non-realtime” applications auto-throttle down to give realtime apps the bandwidth they need. Many popular applications come pre-learned, like Finder, Resolve, Premiere or Flame, and admins can add their own apps.
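To make the throttling behavior concrete, here is a toy Python sketch of the general priority idea. It is illustrative only, not Scale Logic’s implementation:

```python
# Toy priority-based QoS: realtime applications are served first, and
# non-realtime applications split whatever bandwidth remains.

def allocate(total_mb_s, requests):
    """requests: list of (app, wanted_mb_s, is_realtime) tuples."""
    realtime = [(app, want) for app, want, rt in requests if rt]
    best_effort = [(app, want) for app, want, rt in requests if not rt]
    alloc, remaining = {}, total_mb_s
    for app, want in realtime:              # realtime gets priority
        alloc[app] = min(want, remaining)
        remaining -= alloc[app]
    total_want = sum(want for _, want in best_effort) or 1
    for app, want in best_effort:           # others auto-throttle
        alloc[app] = min(want, remaining * want / total_want)
    return alloc

print(allocate(2000, [("Resolve", 1200, True),
                      ("Premiere", 600, True),
                      ("Finder copy", 500, False)]))
# {'Resolve': 1200, 'Premiere': 600, 'Finder copy': 200.0}
```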

What do you expect to see as the next trend relating to storage?
Storage always evolves. Whatever is next in post production storage is already in use elsewhere as we are a pretty risk-averse group, for obvious reasons. With that said, the adoption of Unified Storage Platforms and hybrid cloud workflows will be the next big thing for big media producers like post facilities. The need for local online and nearline storage must remain for realtime, resolution-intense processes and data movement between tiers, but the decision-making process and asset management is better served globally by increased shared access and creative input.

The entertainment industry has pushed the limits of storage for over 30 years with no end in sight. In addition, the ability to manage storage tiers and collaborate both on-prem and off will dictate the type of storage solutions our customers will need to invest in. The evolution of storage needs continues to be driven by the consumer: TVs and displays have moved to demanding 4K content from producers, and the success of small professional cameras has made multi-camera shoots more accessible. As performance and capacity continue to grow for our customers, the challenge becomes managing large data farms effectively, efficiently and affordably, and that is where our future solution designs are headed. Expensive, proprietary hardware will be a thing of the past and open, affordable storage will be the norm, with user-friendly and intuitive software developed to automate, simplify and monetize our customers’ assets while maintaining industry compatibility.

SMALL TREE‘S CORKY SEEBER
How do your solutions work with clients’ existing storage? And who is your typical client?
There are many ways to have multiple storage solutions co-exist within the post house; most of these choices are driven by the intended use of the content and the size and budget of the customer. The ability to migrate content from one storage medium to another is key to allowing customers to take full advantage of our shared storage solutions.

Our goal is to provide simple solutions for the small to medium facilities, using Ethernet connectivity from clients to the server to keep costs down and make support of the storage less complicated. Ethernet connectivity also makes it possible to provide access to existing storage pools via Ethernet switches.

What steps have you taken to work with technologies outside of your own?
Today’s storage providers need to actively design their products to allow the post house to maximize the investment in their shared storage choice. Our custom software is based on open source, which allows greater flexibility to integrate seamlessly with a wider range of technologies.

Additionally, the actual communication between products from different companies can be a problem. Storage designs that allow the ability to use copper or optical Ethernet and Fibre Channel connectivity provide a wide range of options to ensure all aspects of the workflow can be supported from ingest to archive.

What challenges, if any, do larger drives represent?
Today’s denser drives, while providing more storage space within the same physical footprint, do have some characteristics that need to be factored into your storage decisions. Larger drives take longer to configure and to rebuild data sets after a disk failure, and in some cases may be slightly slower than less dense disk drives. You may want to consider different RAID protocols, or even software RAID protection rather than hardware RAID protection, to minimize some of the challenges that the new, larger disk drives present.
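The rebuild concern is easy to quantify. A rough sketch, assuming a sustained 100MB/s rebuild rate (real rebuilds are often slower while the array keeps serving clients):

```python
# Back-of-the-envelope rebuild time for a single failed drive.

def rebuild_hours(drive_tb, rate_mb_s=100):
    return drive_tb * 1e6 / rate_mb_s / 3600  # TB -> MB, seconds -> hours

for size_tb in (2, 6, 10):
    print(f"{size_tb}TB drive: {rebuild_hours(size_tb):.1f} hours")
# 2TB: 5.6h, 6TB: 16.7h, 10TB: 27.8h. Denser drives lengthen the
# window in which a second failure can cost data.
```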

When do you recommend NAS over SAN deployments?
This is an age-old question as both deployments have advantages. Typically, NAS deployments make more sense for smaller customers as they may require less networking infrastructure. If you can direct connect all of your clients to the storage and save the cost of a switch, why not do that?

SAN deployments make sense for larger customers who have so many clients that making direct connections to the server is impractical or impossible; these deployments require additional software to keep everything straight.

In the past, SAN deployments were viewed as the superior option, mostly due to Fibre Channel being faster than Ethernet. With the wide acceptance of 10GbE, there is a convergence of sorts, and NAS performance is no longer considered a weakness compared to SAN. Performance aside, a SAN deployment makes more sense for very large customers with hundreds of clients and multiple large storage pools that need to support universal access.

QUANTUM‘S JANET LAFLEUR
What are the top three requests that you get from post users?
1) Shared storage with both SAN and NAS access to collaborate more broadly across groups. For streaming high-resolution content to editorial workstations, there’s nothing that can match the performance of shared SAN storage, but not all production team members need the power of SAN.

For example, animation and editorial workflows often share content. While editorial operations stream content from a SAN connection, a NAS gateway using a higher-speed IP protocol optimized for video (such as our StorNext DLC) can be used for rendering. By working with NAS, producers and other staff who primarily access proxies, images, scripts and other text documents can more easily access this content directly from their desktops. Our Xcellis workflow storage offers NAS access out of the box, so content can be shared over IP and over Fibre Channel SAN.

2) A starting point for smaller shops that scales smoothly. For a small shop with a handful of workstations, it can be hard to find a storage solution that fits into the budget now but doesn’t require a forklift upgrade later when the business grows. That’s one reason we built Xcellis workflow storage with a converged architecture that combines metadata storage and content storage. Xcellis provides a tighter footprint for smaller sites, but still can scale up for hundreds of users and multiple petabytes of content.

3) Simple setup and management of storage. No one wants to spend time deploying, managing and upgrading complex storage infrastructure, especially not post users who just want storage that supports their workflow. That’s why we are continuing to enhance StorNext Connect, which can not only identify problems before they affect users but also reduce the risk of downtime or degraded performance by eliminating error-prone manual tasks. We want our customers to be able to focus on content creation, not on managing storage.

How should post users decide between SAN, NAS and object storage?
Media workflows are complex, with unique requirements at each step. SAN, NAS and object storage all have qualities that make them ideal for specific workflow functions.

SAN: High-resolution, high-image-quality content production requires low-latency, high-performance storage that can stream 4K or greater — plus HDR, HFR content — to multiple workstations without dropping frames. Fibre Channel SANs are the only way to ensure performance for multi-streaming this content.

Object storage: For content libraries that are being actively monetized, object storage delivers the disk-level performance needed for transcoding and reuse. Object storage also scales beyond the petabyte level, and the self-balancing nature of its erasure-code algorithms makes replacing aging disks with next-generation ones much simpler and faster than is possible with RAID systems (the arithmetic is sketched below).

NAS: High-performance IP-based connections are ideal for enabling render server farms to access content from shared storage. The simplicity of deploying NAS is also recommended for low-bandwidth functions such as review and approval, plus DVD authoring, closed captioning and subtitling.

With an integrated, complete storage infrastructure, such as those built with our StorNext platform, users can work with any or all of these technologies — as well as digital tape and cloud — and target the right storage for the right task.
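On the erasure-coding point above, a short sketch of the overhead arithmetic; the k+m schemes shown are examples, not any particular vendor’s defaults:

```python
# A k+m erasure code stores k data shards plus m coded shards, so any
# m shard losses are survivable at a raw overhead of (k+m)/k.

def erasure_profile(k, m):
    return {"raw_overhead": (k + m) / k, "losses_survived": m}

print(erasure_profile(8, 2))   # 1.25x overhead, survives any 2 losses
print(erasure_profile(16, 4))  # same 1.25x, but survives any 4 losses
# Shards are spread across many drives, so replacing an aging disk
# means re-coding only its shards, read in parallel from many peers,
# rather than rebuilding an entire RAID set.
```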

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
So much depends on the configuration: how many spindles, how many controllers, etc. At NAB 2016, our StorNext Pro 4K demo system delivered eight to 10 streams of 4K 10-bit DPX with headroom to stream more. The solution included four RAID-6 arrays of 24 drives each with redundant Xcellis Workflow Directors for an 84TB usable capacity in a neat 10U rack.
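As a sanity check on that demo configuration, here is how roughly 84TB usable can fall out of four 24-drive RAID-6 arrays. The 1TB drive size and 5 percent file-system overhead below are assumptions for illustration, not published specs:

```python
# RAID-6 keeps two parity drives per array, so 22 of 24 drives hold data.
arrays, drives_per_array, parity_drives, drive_tb = 4, 24, 2, 1.0
raw_data_tb = arrays * (drives_per_array - parity_drives) * drive_tb
print(raw_data_tb, round(raw_data_tb * 0.95, 1))  # 88.0 TB raw, ~83.6 TB usable
```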

The StorNext platform allows users to scale performance and capacity independently. The need for more capacity can be addressed with the simple addition of Xcellis storage expansion arrays. The need for more performance can be met with an upgrade of the Xcellis Workflow Director to support more concurrent file systems.

PANASAS‘ DAVID SALLAK
What are the top three storage-related requests/needs that you get from your post clients or potential post clients?
They want native support for Mac, high performance and a system that is easier to grow and manage than SAN.

When comparing shared storage product choices, what are the advantages of NAS over SAN? Does the easier administration of NAS compared to SAN factor into your choice of storage?
NAS is easier to manage than SAN. Scale-out NAS is easier to grow than SAN, and is designed for high availability. If scale-out NAS could be as fast as SAN, then SAN buyers would be very attracted to scale-out NAS.

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
As many streams as possible. Post users always need more performance for future projects and media formats, so storage should support a lot of streams of ProRes HD or DNxHD and be capable of handling uncompressed DPX formats that come from graphics departments.

What do you expect to see as the next trend relating to storage? The thing that’s going to push storage systems even further?
Large post production facilities need greater scalability, higher performance, easier use, and affordable pricing.

HGST‘S JEFF GREENWALD
What are the top three requests you get from your post clients or potential post clients?
They’re looking for better ways to develop cost efficiencies of their workflows. Secondly, they’re looking for ways to improve the performance of those workflows. Finally, they’re looking for ways to improve and enhance data delivery and availability.

How should post users decide between SAN, NAS and object storage?
There are four criteria that customers must evaluate in order to make trade-offs between the various storage technologies and storage tiers: the quantity of media data, the frequency of access, the latency requirements of data delivery and, finally, how the first three balance against their financial budgets.

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
To calculate the number of video streams, you must balance the available bandwidth against file sizes and data-delivery requirements for the desired capacity. Remember, too, that jitter and data loss shrink the available bandwidth by forcing retries and resends.
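That calculation is straightforward to sketch; the 0.8 derating factor below stands in for the protocol overhead, jitter and retry losses described above, and is an assumption rather than a measured figure:

```python
# Usable streams = derated link bandwidth divided by per-stream rate.

def max_streams(link_mb_s, stream_mb_s, efficiency=0.8):
    return int(link_mb_s * efficiency // stream_mb_s)

print(max_streams(1250, 792))  # 4K 10-bit DPX on a 10GbE link: 1 stream
print(max_streams(1250, 4.6))  # DNxHD 36 on the same link: ~217 streams
```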

What do you expect to see as the next trend relating to storage, and what will push storage even further?
There are two trends that will dramatically transform the storage industry. The first is storage analytics, and the second is new and innovative usage of automatic meta-tagging of file data.

New technologies like SMR, optical and DNA-based object storage have not yet proven to be technology disruptors in storage; therefore, it is likely that storage technology advancements will be evolutionary rather than revolutionary over the next 10 years.

G-TECH‘S MICHAEL WILLIAMS
Who is using your gear in the post world? What types of pros?
Filmmakers, digital imaging technicians, editors, audio technicians and photographers all use our solutions. These are the pros that capture, store, transfer and edit motion pictures, indie films, TV shows, music, photography and more. We offer everything from rugged standalone portable drives to high-performance RAID solutions to high-capacity network storage for editing and collaboration.

You recently entered the world of NAS storage. Can you talk about the types of pros taking advantage of that tech?
Our NAS customers run the gamut from DITs to production coordinators to video editors and beyond. With camera technology advancing so rapidly, they are looking for storage solutions that can fit within the demanding workflows they encounter every day.

Whether for episodic TV, feature films, commercials or in-house video production, storage needs are rising faster than ever before, and many IT staffs are shrinking, so we introduced the G-Rack 12 NAS platform. We are able to use HGST’s new 10TB enterprise-class hard drives to deliver 120TB of raw storage in a 2RU platform, providing the required collaboration and performance.

We have also made sure that our NAS OS on the G-Rack 12 is designed to be easily administered by the DIT, video editor or someone else on the production staff and not necessarily a Linux IT tech.

Production teams need to work smarter — DITs, video editors, DPs and the like can do the video shoot, get the video ingested into a device and get the post team working on it much faster than in days past. We all know that time is money; this is why we entered the NAS market.

Any other new tech on the horizon that might affect how you make storage or a certain technology that might drive your storage in other directions?
The integration of G-Technology — along with SanDisk and HGST — into Western Digital is opening up doors in terms of new technologies. In addition to our current high-capacity, enterprise-class HDD-based offerings, SSD devices are now available to give us the opportunity to expand our offerings to a broader range of solutions.

This, in addition to new external device interfaces, is paving the way for higher-performance storage solutions. At NAB this year, we demonstrated Thunderbolt 3 and USB-C solutions with higher-performance storage media and network connectivity. We are currently shipping the USB solutions and the technology demos we gave provide a glimpse into future solutions. In addition, we’re always on the lookout for new form factors and technologies that will make our storage solutions faster, more powerful, more reliable and affordable.

What kind of connections do your drives have, and if it’s Thunderbolt 2 or Thunderbolt 3, can they be daisy chained?
When we look at interfaces, as noted above, there’s USB Type-C for the consumer market, as well as Thunderbolt and 10Gb Ethernet for the professional market.

As far as daisy-chaining, yes. Thunderbolt is a very flexible interface, supporting up to six devices in a daisy chain on a single port. Thunderbolt 3 is a very new interface that is gaining momentum, one that not only supports extremely high data transfer speeds (up to 2.7GB/s) but also drives up to two 4K displays. We should also not forget that there are still more than 200M devices supporting Thunderbolt 1 and 2 connections.

LACIE‘S GASPARD PLANTROU
How do your solutions work with clients’ existing storage? And who are your typical M&E users?
With M&E workflows, it’s rare that users work with a single machine and storage solution. From capture to edit to final delivery, our customers’ data interacts with multiple machines, storage solutions and users. Many of our storage solutions feature multiple interfaces such as Thunderbolt, USB 3.0 or FireWire so they can be easily integrated into existing workflows and work seamlessly across the entire video production process.

Our Rugged features Thunderbolt and USB 3.0. That means it’s guaranteed to work with any standard computer or storage scenario on the market. Plus it’s shock, dust and moisture-resistant, allowing it to handle being passed around set or shipped to a client. LaCie’s typical M&E users are mid-size post production studios and independent filmmakers and editors looking for RAID solutions.

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
The new LaCie 12big Thunderbolt 3 pushes up to 2600MB/s and can handle three streams of 4K 10-bit DPX at 24fps (assuming one stream is 864MB/s). In addition, the unit offers 96TB of capacity to edit and hold tons of 4K footage.

What steps have you taken to work with technologies outside of your own?
With video file sizes growing exponentially, it is more important than ever for us to deliver fast, high-capacity solutions. Recent examples of this include bringing the latest technologies from Intel — Thunderbolt 3 — into our line. We work with engineers from our parent company, Seagate, to incorporate the latest enterprise class core technology for speed and reliability. Plus, we always ensure our solutions are certified to work seamlessly on Mac and Windows.

NETAPP‘S JASON DANIELSON
What are the top three requests that you get from post users?
As a storage vendor, the first three requests we’re likely to get are around application integration, bandwidth and cost. Our storage systems support well over 100 different applications across a variety of workflows (VFX, HD broadcast post, uncompressed 4K finishing) in post houses of all sizes, from boutiques in Paris to behemoths in Hollywood.

Bandwidth is not an issue, but the bandwidth per dollar is always top of mind for post. So working with the post house to design a solution with suitable bandwidth at an acceptable price point is what we spend much of our time doing.

How should post users decide between SAN, NAS and object storage?
The decision to go with SAN versus NAS depends on the facility’s existing connectivity to the workstations. Our E-Series storage arrays support quite a few file systems. For SAN, our systems integrators usually use Quantum StorNext, but we also see Scale Logic’s HyperFS and Tiger Technology’s metaSAN being used.

For NAS, our systems integrators tend to use EditShare XStream EFS and IBM GPFS. While there are rumblings of a transition away from Fibre Channel-based SAN to Ethernet-based NAS, there are complexities and costs associated with tweaking a 10GigE client network.

The object storage question is a bit more nuanced. Object stores have been so heavily promoted by storage vendors that there are many misconceptions about their value. For most of the post houses we talk to, object storage isn’t the answer today. While we have one of the most feature-rich and mature object stores out there, even we say that object stores aren’t for everyone. The questions we ask are:

1) Do you have 10 million files or more? 2) Do you store over a petabyte? 3) Do you have a need for long-term retention? 4) Does your infrastructure need to support multisite production?

If the answer to any of those questions is “yes,” then you should at least investigate object storage. A high-end boutique with six editors is probably not in this realm. It is true that an object store represents a slightly lower-cost bucket for an active archive (content repository), but it comes at a workflow cost of introducing a second tier to the architecture, which needs to be managed by either archive management or media asset management software. Unless such a software system is already in place, then the cost of adding one will drive up the complexity and cost of the implementation. I don’t mean to sound negative about object stores. I am not. I think object stores will play a major role in active-archive content storage in the future. They are just not a good option for a high-bandwidth production tier today or, possibly, ever.

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
In order to answer that question, we would ask the post house: “How many streams do you want to play back?” Let’s say we’re talking about true 4K (4096×2160), as opposed to the several other resolutions that are called 4K. At 4:4:4, that works out to 33MB per frame, or 792MB per second. We would typically use flash (SSDs) for 4K playback. Our 2RU 24-SSD storage array, the EF560, can do a little over 9GB per second. That amounts to 11 streams.
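A quick check of that arithmetic, assuming 10-bit RGB DPX packed at 4 bytes per pixel:

```python
width, height, fps = 4096, 2160, 24
frame_mb = width * height * 4 / 2**20     # ~33.8MB, rounded to 33 above
stream_mb_s = frame_mb * fps              # ~810MB/s, quoted as 792MB/s
print(f"{frame_mb:.1f} MB/frame, {stream_mb_s:.0f} MB/s per stream")
print(int(9000 / stream_mb_s), "streams from a 9GB/s EF560")  # 11
```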

But that is only half the answer. This storage array is usually deployed under a parallel file system, which will aggregate the bandwidth of several arrays for shared editing purposes. A larger installation might have eight storage arrays — each with 18 SSDs (to balance bandwidth and cost) — and provide sustained video playback for 70 streams.

What do you expect to see as the next trend relating to storage? What’s going to push storage systems even further?
The introduction of larger, more cost-effective flash drives (SSDs) will have a drastic effect on storage architectures over the next three years. We are now shipping 15TB SSDs. That is a petabyte of extremely fast storage in six rack units. We think the future is flash production tiers in front of object-store active-archive tiers. This will eliminate the need for archive managers and tape libraries in most environments.

HARMONIC‘S ANDY WARMAN
What are the top three requests that you hear from your post clients or potential post clients?
The most common request is for sustained performance. This is an important aspect since you do not want performance to degrade due to the number of concurrent users, the quantity of content, how full the storage is, or the amount of time the storage has been in service.

Another aspect related to this is the ability to support high-write and -read bandwidth. Being able to offer equal amounts of read and write bandwidth can be very beneficial for editing and transcode workflows, versus solutions that have high-read bandwidth, but relatively low-write performance. Customers are also looking for good value for money. Generally, we would point to value coming from the aforementioned performance as well as cost-effective expansion.

You guys have a “media-aware” solution for post. Can you explain what that is and why you opted to go this way?
Media-aware storage refers to the ability to store different media types in the most effective manner for the file system. A MediaGrid storage system supports multiple different block sizes, rather than a single block size for all media types. In this way, video assets, graphics and audio and project files can use different block sizes that make reading and writing data more efficient. This type of file I/O “tuning” provides some additional performance gains for media access, meaning that video could use, say, 2MB blocks, graphics and audio 512KB, and projects and other files 128KB. Not only can different block sizes be used by different media types, but they are also configurable so UHD files could, say, use 8MB block sizes.
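A minimal sketch of what such a media-type-to-block-size mapping might look like, using the sizes quoted above; the lookup itself is illustrative, not Harmonic’s actual API:

```python
KIB, MIB = 2**10, 2**20

BLOCK_SIZES = {
    "video":    2 * MIB,     # video essence
    "uhd":      8 * MIB,     # larger blocks for UHD files
    "graphics": 512 * KIB,
    "audio":    512 * KIB,
    "project":  128 * KIB,   # projects and other small files
}

def block_size_for(media_type):
    # Fall back to the smallest block size for unknown file types.
    return BLOCK_SIZES.get(media_type, 128 * KIB)

print(block_size_for("video"))  # 2097152 bytes (2MB)
```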

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
The storage has no practical capacity or bandwidth limit, so we can build a storage system that suits the customer’s needs. Sizing a system becomes a case of balancing bandwidth and capacity by selecting the appropriate number of drives and drive size(s) to match specific needs. The system is built on SAS drives, with multiple fully redundant 10 Gigabit Ethernet connections to client workstations and attached devices, and redundant 12 Gigabit SAS interconnects between storage expansion nodes. This means we have high-speed connectivity within the storage as well as out to clients.

As needs change, the system can be expanded online with all users maintaining full access. Bandwidth scales in a linear fashion, and because there is a single namespace in MediaGrid, the entire storage system can be treated as a single drive or divided up, with user-level rights granted to folders within the file system.

Performance is further enhanced by the use of parallel access to data throughout the storage system. The file system provides a map to where all media is stored or is to be stored on disk. Data is strategically placed across the whole storage system to provide the best throughput. Clients simultaneously read and write data through the 10 Gigabit network to all network attached storage nodes rather than data being funneled through a single node or data connection. The result is that performance is the same whether the storage system is 5% or 95% full.

What do you expect to see as the next trend relating to storage? What’s going to push storage systems even further?
The advent of UHD has driven demands on storage further as codecs and therefore data throughput and storage requirements have increased significantly. Faster and more readily accessible storage will continue to grow in importance as delivery platforms continue to expand and expectations for throughput of storage systems continue to grow. We will use whatever performance and storage capacity is available, so offering more of both is inevitable to feed our needs for creativity and storytelling.

JMR‘S STEVE KATZ
What are the top three storage-related requests you get from post users?
The most requested is ease of installation and operation. The JMR Share is delivered with euroNAS OS on mirrored SSD boot disks, with enough processing power and memory to support efficient, high-volume workflows and a perpetual license to support the amount of storage requested, from 20TB minimum to the “unlimited” maximum. It’s intuitive to use and comfortable for anyone familiar with using popular browsers.

Second is compatibility and interoperability with clients using various hardware, operating systems and applications.

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
This can all be calculated by usable bandwidth and data transfer rates, which as with any networked storage can be limited by the network itself. For those using a good 10GbE switch, the network limits data rates to 1250MB/s maximum, which can support more than 270 streams of DNxHD 36, but only one stream of 4K 10-bit “film” resolution. Our product can support ~1800MB/s in a single 16-disk appliance, but without a very robust network this can’t be achieved.

When comparing shared storage product choices, what are the advantages of NAS over SAN, for example?
SAN actually has some advantages over NAS, but unless the user has Fibre Channel hardware installed, it might be a very costly option. The real advantage of NAS is that everyone already has an Ethernet network available that may be sufficient for video file server use. If not, it may be upgraded fairly inexpensively.

JMR Share comes standard with both GbE and 10GbE networking capability right out of the box, and has performance that will saturate 10GbE links; high-availability active/active failover is available, as is SAN Cluster (an extra-cost option). The SAN Cluster is equipped with specialized SAN software as well as 8Gb or 16Gb Fibre Channel host adapters installed, so it’s ready to go.

What do you expect to see as the next trend relating to storage? The thing that’s going to push storage systems even further?
Faster and lower cost, always! Going to higher speed network adapters, 12Gb SAS internal storage and even SSDs or NVMe drives, it seems the sky is the limit — or, actually, the networking is the limit. We already offer SAS SSDs in the Share as an option, and our higher-end dual-processor/dual-controller Share models (a bit higher cost) using NVMe drives can provide internal data transfer speeds exceeding what any network can support (even multiple 40Gb InfiniBand links). We are seeing a bit of a trend toward SSDs now that higher-capacity models at more reasonable cost, with reasonable endurance, are becoming available.

Guru Studio on animated content, growth spurts and adaptability

Toronto’s Guru Studio creates characters and tells stories through animation. Its first original production, the Emmy-nominated Justin Time, has become a Netflix Original and a favorite among young children around the world. Another preschooler favorite is Paw Patrol, a creative partnership with Spin Master Entertainment.

Guru has several original programs in various stages of production, as well as other creative partnerships with the likes of Mattel and Nickelodeon. Guru’s programming airs around the world on Netflix, Disney Junior, Nickelodeon and other distribution channels.

Over the past few years, Guru has gone through a major growth spurt. While growth is welcome, it comes with its own set of challenges. To find out more about the studio, its content and recent growth, postPerspective reached out to the Guru team — EVP of content & strategy Mary Bredin, director of IT Jason Burnard and post production manager Chris Sandy — to learn about what led up to the growth, how they handled it and how they remain successful.

Paw Patrol

You create your own content — or collaborate with others — and provide your post services. Which came first? Post services or content?
Bredin: We’re an entertainment company specializing in animation. We do post production on our shows and others, so content has always come first, but we don’t offer post as a service. Until recently, post was left to the producers to handle, but the bigger we got the more we really needed someone to focus solely on post. So last year we brought in Chris Sandy to do just that. With companies like Netflix needing 20 different language versions, it was critical to have someone in-house to do as much of the post as possible.

Are there parts of post that you don’t handle in-house?
Sandy: We take post as far as we possibly can under our own roof, further than a live-action studio could because it’s animation. We do the color correction in-house so that we have full control over the final look. Essentially, we finalize the picture and then go outside of Guru for sound design and packaging.

Tell us more about the production and post services you offer and the tools you use.
Bredin: Our services are all about creating stories and characters for our clients. The most important thing is understanding the creative vision. We view the technology as the means to that end, the tools that help us do our best work. Having said that, we use some amazing tools and we have amazingly creative technical directors.

Burnard: We use a whole gamut of software, including Maya, Harmony, Photoshop and Premiere. Houdini is a new one for us and it’s working out well. We also use Shotgun for 3D productions to help track our assets in the database. We use all types of software in different combinations at various stages of a project.

Bredin: I often tell people who don’t understand how animation works that we’re a high-tech company because of what we do with software. We bend it, twist it, push it — because for the team to develop different and unique looks in a very saturated market, we have to be really innovative with the software and hardware. Besides that, our workload and staff quadrupled in just three years. Right now Guru has about 300 people in the studio, and we’re running four shows. So we have to be efficient in order to stay on track.

That’s tremendous growth in a short time. Presumably you had some growing pains. What sort of changes did you have to make?
Burnard: One example would be our storage infrastructure. We had a NAS environment before, which was suitable for our size at that time, but as we grew, NAS really started to slow us down. It couldn’t serve the files to the artists quickly enough. Then, in addition to that, we had the renderfarm also bidding for time on storage. As the production process moves along, the assets get larger and require more resources. The NAS solution would slow everything to a crawl — to the point where we had to schedule time for the artists versus time for the render during work hours.

We looked at a variety of storage solutions. After extensive research we decided to build our system using Quantum’s StorNext Pro 4K. Now, we have about 300 artists accessing the Quantum solution consistently during the day, along with another 70 render nodes. In the evenings, a large portion of the artist workstations become part of the renderfarm, totaling 200-plus render nodes. To do this, we split the storage into two parts. One is dedicated to the artists so they can quickly access the files, make changes, and put them back into storage. The other part maintains the farm and makes sure that content can be created quickly without impacting the artists’ work.

Was there any part of the storage change that was especially helpful for post?
Burnard: There’s a feature Quantum offers called DLC (Distributed LAN Client) that really comes in handy during post production, when the files are at their largest. Editors can access content very quickly and make changes on the shared storage across an IP connection without having to download files onto their local machines.

Sandy: It really streamlines our day because we can just work off the server.

Justin Time

What about the rest of the ecosystem? Can you give an example of your workflow on a recent show?
Burnard: I’ll talk about Justin Time. It was actually the first project we ran on the Quantum solution from start to finish. For each episode, artists start by creating 3D models and assets in Maya based on storyboards and Leica reels. Then the animation and background artists take the models and create the backgrounds and scenes, then basically start animation using Maya. Once done, the animated and rendered scenes are moved to the compositing teams.

Sandy: Up to this point, the picture is flat. When it moves to the comp stage, artists use Nuke to light it, give it texture and add effects.

Burnard: Incidentally, this is the point where NAS would have hit a wall, because the comp files are pretty large. We’re probably saving the production team about 20 hours every week by just that one improvement.

Sandy: Once the comp is done, they hand it off to me for post. I work with composers and a sound design company on sound effects, music and sweetening of the dialog. We bring it all together for a premix, then mix. After that, I screen it with the creative team one last time to make sure nothing was lost in rendering, and then it’s ready for the post house to do the final output for the broadcaster. They add the opening and end credits, master the tapes or files for broadcast, and perform technical QC to make sure it’s up to spec. Then we send it off to the broadcaster.

What’s next for Guru?
Bredin: We’re gearing up to start our next show, True and the Rainbow Kingdom, which is another Netflix Original. Meanwhile, we continue to work with our creative partners and develop our own new shows to sell to broadcasters. It’s the cycle of production, but we’re well equipped for the challenge.

IBC 2015: Adventures in archiving

By Tom Coughlin

Once you have your content and have completed that award-winning new project, Oscar-nominated film or brilliant and effective commercial, where does your data go? Today, video content can be reused and repurposed a number of times, producing a continuing revenue stream by providing viewing for many generations of people. That makes video archives valuable and also drives a shift from inactive to more active archives. This article looks at some of the archiving products on display at IBC 2015.

The figure to the right shows our estimate of revenue spent on various media and entertainment storage markets in 2014 (from the Digital Storage in Media and Entertainment Report from Coughlin Associates). Note that although almost 96 percent of all M&E storage capacity is used for archiving, about 45 percent of the spending is for archiving.

Quantum showcased its StorNext 5 shared storage architecture, which includes high-performance online storage, extended online storage and tape- and cloud-based archives. The company also highlighted StorNext Connect, a management and monitoring console that provides an at-a-glance dashboard of the entire StorNext environment. At IBC, Quantum introduced its Q-Cloud Archive, which extends StorNext workflow capabilities to the cloud, allowing end-to-end StorNext environments to leverage cloud storage fully with no additional hardware, separate applications or programming while maintaining full compatibility with existing software applications.

Quantum’s Storage Manager migrates data from online storage to its object-based Lattus store, allowing secure, long-term storage with greater durability than RAID and extremely high scalability. Content can be migrated from Lattus to tape archives or Q-Cloud archives automatically. In addition, Quantum’s Artico intelligent NAS archive appliance was on display, offering low-cost scale-out storage for active media archives that can scale to petabytes of content across HDDs, extended online storage, tape and cloud storage.

Also during IBC, the LTO Program Technology Provider Companies — HP, IBM and Quantum — announced the LTO-7 tape format that will be available in late 2015. The native capacity of this drive is 6TB, while 2.5:1 compression provides 15TB of storage with up to 750MB/s data rates. This product will provide over twice the capacity of the LTO-6 drive generation. The LTO roadmap goes out to a generation 10 product with up to 120TB of compressed content and about 48TB native capacity.
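Those figures are easy to verify; note that the 15TB capacity and 750MB/s rate both assume 2.5:1 compressible data:

```python
native_tb, compression, rate_mb_s = 6, 2.5, 750
compressed_tb = native_tb * compression               # 15TB
hours_to_fill = compressed_tb * 1e6 / rate_mb_s / 3600
print(compressed_tb, f"{hours_to_fill:.1f} h")        # 15.0TB, ~5.6h per tape
```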

LTO proponents said that tape has some advantages over hard disk drives for archiving, despite the difference in latency to access content. In particular, they said tape has an error rate two orders of magnitude lower than HDDs, providing more accurate recording and reading of content. Among the interesting LTO developments at IBC were the M-Logic Thunderbolt interface tape drives.

Tape can also be combined with high-capacity SATA HDDs to provide storage systems with performance approaching hard disk drive arrays and costs approaching magnetic tape libraries. Crossroads has teamed up with Fujifilm to provide NAS systems that combine HDDs and tape, including cloud storage built on the same combination. In fact, archiving is becoming one of the fastest-growing applications in the media and entertainment industry, according to the 2015 Digital Storage in Media and Entertainment Report from Coughlin Associates.

Oracle was also showing its tape storage systems with 8TB native storage capacity in a half-inch tape form factor. Oracle now includes Front Porch Digital with its cloud archiving platform as well as digital ingest solutions for older analog and digital format media.

Some companies also use flash memory as a content cache in order to match the high speeds of data transfers to and from a tape library system. Companies such as XenData provide LTO tape and optical disc libraries for media and entertainment customers. Spectra Logic has made a big push into HDD-based archiving, using shingled magnetic recording (SMR) 3.5-inch HDDs in their DPE storage system to provide unstructured storage costs as low as 9 cents/GB. This system can provide up to 7.4PB of raw capacity in a single rack with 1GB/s data rates. This sort of system is best for data that is seldom or never overwritten because of the use of SMR HDDs.

Sony was showing its 300GB Blu-ray optical WORM discs, although it was not clear whether the product would be shipping in storage cartridges in 2015. Archiving is a significant driver of M&E storage demand because all video eventually ends up in an archive. With more frequent access of archived content, the performance requirements of many archives are more demanding than in the past. This has led to the use of HDD-based archives and archives combining HDDs and magnetic tape. Even flash memory can play a role as a write and read cache in a tape-based system.

Dr. Tom Coughlin, president of Coughlin Associates, has over 35 years in the data storage industry. Coughlin is also the founder and organizer of the annual Storage Visions Conference, a partner to the International Consumer Electronics Show, as well as the Creative Storage Conference.

Public, Private, Hybrid Cloud: the basics and benefits

By Alex Grossman

The cloud is everywhere, and media facilities are constantly being inundated with messages about the benefits the cloud offers in every area, from production to delivery. While it is easy to locate information on how the cloud can be used for ingest, storage, post operations, transcoding, rendering, archive and, of course, delivery, many media facilities still have questions about the public, private and hybrid clouds, and how each of these cloud models can relate to their business. The following is a brief guide intended to answer these questions.

Public
Public cloud is the cloud as most people see it: a set of services hosted outside a facility and accessed through the Web, either securely through a gateway appliance or simply through a browser. The public nature of this cloud model does not mean that content from one person or one company can be accessed by another. It simply means that the same physical hardware is being shared by multiple users — a “multi-tenant” arrangement in which data from different users resides on one system. Through this approach, users get to take advantage of the scale of many servers and storage systems at the cloud facility. This scale can also improve accessibility and performance, which can be key considerations for many content creators.

Public cloud is the most versatile type of cloud, and it can offer a range of services, from hosted applications to “compute” capabilities. For media companies these services range from transcoding, rendering and animation to business services such as project tracking, billing and, in some cases, file sharing. (Box and Dropbox are good examples of file sharing enabled by public cloud.) Services may be generic or branded, and they are most often offered by a software vendor using a public cloud, or by the public cloud vendor itself. Public clouds are popular for content and asset storage, both for short-term transcode to delivery or project-in-process storage and for longer-term “third copy” off-site archive.

Public clouds can be very appealing due to the OPEX, pay-as-you-go nature of billing and the absence of capital expense for ongoing hardware purchase and refresh, but the downside is that public clouds remove some control over workflow. While most public cloud vendors today are large and financially stable, it remains important to choose carefully.

Moreover, taking advantage of public cloud is rarely easy. This path involves dealing with new vendors, and possibly with unfamiliar applications and hardware gateways, and there can be unexpected charges for simple operations such as retrieving data. Although content security concerns are mostly overblown, they nevertheless are a source of apprehension for many potential public cloud users. These uncertainties have a lot of media companies looking to private cloud.

Private
Private cloud can most simply be defined for media companies as a walled machine room environment with workflow compute and storage capabilities offering outside connectivity, while at the same time preventing outside intrusions into the facility.

A well-designed private cloud will allow facilities to extend most of their production and archive capabilities to remote users. The main difference between this approach and most current (non-cloud) storage and compute operations in a facility today is simply that a private cloud can isolate the current workflow from the outside world while extending a portion to remote users based on preferences and policies.

The idea of remote access is not confined to private cloud. It is possible to provide facility access to external users through normal networking protocols, but the private cloud takes this a step further through easier access for authorized users and greater security for others. The media facility remains in complete control of its content and assets. Furthermore, it can host its applications, and its content remains in its on-site storage, safe and secure, with no retrieval fees.

A facility that embraces private cloud cannot take advantage of the scale or pay-as-you-go benefits of public cloud. Even so, to provide greater accessibility and flexibility, some media companies have adopted a private cloud model as an extension of their online operations. Private cloud can effectively replace much of the current hardware used in post and archive operations today, making it a more cost-effective solution for many considering cloud benefits.

Hybrid
Hybrid cloud is an interesting proposition. In the enterprise IT world, hybrid cloud implementations are seen as a way to bridge private and public and realize the best of both worlds — lower OPEX for certain functions such as SaaS (software as a service) and the security of keeping valuable data back in their data centers.

For media professionals, hybrid cloud may have even greater benefits. Considering the changing delivery requirements facing the industry and the sheer volume of content being created and reviewed — and, of course, keeping in mind the value of re-monetization — hybrid cloud has exciting potential. A well-designed hybrid cloud can provide the benefits of public and private cloud while taking advantage of the cost savings and reduced complexity that come with maintaining on-premise end-to-end hardware. By sharing the load between the hardware at a facility and the massive scale of a public cloud, a media company can extend its workflow easily while controlling every stage — even on a project-by-project basis.

Choosing between public, private and hybrid cloud can be a daunting task. It is a decision that must start with understanding the unique needs and goals of the media company and its operations, and then involve careful mapping of the various vendors’ offerings — with cost considerations always in mind. In the end, a facility may choose neither public cloud, private cloud nor hybrid cloud, but then it may miss out on the many and growing benefits enabled by the cloud.

Alex Grossman, a cloud workflow expert, is VP, Media and Entertainment at Quantum. You can follow him on Twitter @activeguy.

Quantum at NAB with Artico intelligent NAS archive appliance

Quantum will be at NAB demonstrating its new Artico intelligent NAS archive appliance. Artico offers broadcast and post facilities using scale-out NAS systems an entry point for establishing media archives — outside of StorNext environments — that can scale to hold petabytes of content across Quantum’s Lattus extended online, Scalar tape and Q-Cloud Archive storage.

Powered by Quantum‘s StorNext 5 software and providing a simple NAS presentation, Artico incorporates StorNext Storage Manager policies to maximize storage efficiency. When used in combination with any StorNext-qualified media asset management (MAM) system, Artico can move files from online storage to longer-term archive with no user intervention, while maintaining full access to the files.

The system offers NAS connectivity and intelligent, policy-driven file movement across LTO tape, LTFS, object storage and cloud archive systems. Artico was designed to enable long-term protection of content through the ability to create multiple archive copies and with support for multiple archive systems.

StorNext Pro Foundation provides Xsan capability, integrated shared storage

Quantum is now offering StorNext Pro Foundation, a low-cost, integrated shared storage system designed specifically for smaller workgroups. It is built on the StorNext 5 collaboration and workflow software platform, offering the capabilities of Quantum’s StorNext Pro Solutions to a new audience of media pros in post and broadcast.

For those of you who enjoy, or enjoyed, working with Xsan, StorNext Pro Foundation provides full Xsan compatibility, enabling smaller workgroups — such as ad agencies and corporate video production departments that create significant amounts of video — to refresh or upgrade their Xsan storage environments.

According to Quantum, the underlying StorNext 5 platform ensures that as customers’ workflows evolve, the solution will provide the power to meet their needs for content ingest from multiple sources, collaborative content creation, on-time delivery in any format and content preservation for future monetization.

Current StorNext users can use StorNext Pro Foundation to add smaller workgroups for graphics, rendering and effects.

StorNext Pro Foundation comes in either 48TB or 96TB configurations, which support five and seven simultaneous Xsan/Windows/Linux SAN clients, respectively, including two Windows/Linux SAN clients as core system components. Either configuration can support up to four file systems (volumes) holding up to 100 million files; the 48TB configuration is upgradable to 96TB, and users have the option of purchasing a StorNext AEL500 tape archive with either configuration.

StorNext Pro Foundation includes one year of Quantum support with 30-minute phone response. It is available now through select resellers.

“StorNext Pro Foundation is a great solution for smaller workgroups,” reports Dave Van Hoy, president of Quantum reseller Advanced Systems Group. “It delivers the power of Quantum’s StorNext 5 software in a complete and Xsan-compatible system designed specifically for small workgroups, providing optimized performance, reliability and service.”

IBC Blog: The age of archives

By Tom Coughlin

With the flood of video content being created due to the ease and cost-effectiveness of digital video capture and production and the decreasing costs of storage, digital content saved for the long term will explode. Moving from today’s HD to 4K UHD and then to 8K UHD and at higher frame rates will require greater storage capacities and more sophisticated data management services.

During IBC 2014, Oracle announced that it was acquiring Front Porch Digital (FPD). Front Porch Digital adds storage management features that should complement current Oracle storage products. FPD has been using Oracle tape systems for some time in its largest archives. FPD was the champion and promoter of the AXF storage format, which has been adopted as a SMPTE standard and has become popular with many FPD customers.

Geoff Stedman on the evolving media storage landscape

Earlier this year, storage industry vet Geoff Stedman came on board as senior VP of Quantum’s StorNext solutions product portfolio. You might remember him from his many years at Omneon and then Harmonic after the company was acquired.

He spent seven years in total there before deciding to try something new: a role as VP of marketing for an enterprise IT storage company, Tintri. After two years the pull of the media and entertainment industry was just too much for him, and Stedman (pictured, above) returned.

Considering Stedman’s background and unique perspective — taking a break from an industry he knows so well and looking at it with a new set of eyes — we decided to pick his brain a bit about storage technology and where it’s headed into the future.
