Tag Archives: storage

Quantum shipping StorNext 5.4

Quantum has introduced StorNext 5.4, the latest release of its workflow storage platform, designed to bring efficiency and flexibility to media content management. StorNext 5.4 enhancements include the ability to integrate existing public cloud storage accounts and third-party object storage (private cloud) — including Amazon Web Services, Microsoft Azure, Google Cloud, NetApp StorageGRID, IBM Cleversafe and Scality Ring — as archive tiers in a StorNext-managed media environment. It also lets users deploy applications embedded within StorNext-powered Xcellis workflow storage appliances.

Quantum has also enhanced StorNext Storage Manager, which offers automated, policy-based movement of content into and out of users’ existing public and private clouds while maintaining the visibility and access that StorNext provides. Storage Manager integrates public and private clouds — alongside primary disk and tape storage tiers — within a StorNext-managed environment, gives users and applications full access to media stored in the cloud without additional hardware or software, and extends versioning across sites and the cloud.

By enabling applications to run inside its Xcellis Workflow Director, the new Dynamic Application Environment (DAE) capability in StorNext 5.4 allows users to leverage a converged storage architecture, reducing the time, cost and complexity of deploying and maintaining applications.

StorNext 5.4 is currently shipping with all newly-purchased Xcellis, StorNext M-Series and StorNext Pro Solutions, as well as Artico archive appliances. It is available at no additional cost for StorNext 5 users under current support contracts.

Promise, Symply team up on Thunderbolt 3 RAID system

Storage solutions companies Promise Technology and Symply have launched Pegasus3 Symply Edition, the next generation of the Pegasus desktop RAID storage system. The new system combines 40Gb/s Thunderbolt 3 speed with Symply’s storage management suite.

According to both companies, Pegasus3 Symply Edition complements the new MacBook Pro — it’s optimized for performance and content protection. The Pegasus3 Symply Edition offers the speed needed for creative pros generating high-resolution video and rich media content, along with the safety and security of full-featured RAID protection.

The intuitive Symply software suite allows for easy setup, optimization and management. The dual Thunderbolt 3 ports provide fast connectivity and the ability to connect up to six daisy-chained devices on a single Thunderbolt 3 port while adding new management tools and support from Symply.

“As the Symply solution family grows, Pegasus3 Symply Edition will continue to be an important part in the larger, shared creative workflows built around Promise and Symply solutions,” said Alex Grossman, president and CEO, Symply.

The Pegasus3 Symply Edition is available in three models — Pegasus R4, Pegasus R6 and Pegasus R8 — delivering four-, six- and eight-drive configurations of RAID storage, respectively. Each system is ready to go “out of the box” for Mac users with a 1m 40Gb/s Active Thunderbolt 3 cable for easy, high-speed connectivity.

Every Pegasus3 Symply Edition will include Symply’s Always-Up-to-Date Mac OS management app. iOS and Apple Watch apps to monitor your Pegasus3 Symply Edition system remotely are coming soon. The Symply Management suite will support most earlier Pegasus systems. The Pegasus3 Symply Edition includes a full three-year warranty, tech support and 24/7 media and creative user support worldwide.

The Pegasus3 Symply Edition lineup will be available on the Apple online store, at select Apple retail stores and at resellers.

IBC 2016: VR and 8K will drive M&E storage demand

By Tom Coughlin

While attending the 2016 IBC show, I noticed some interesting trends, cool demos and new offerings. For example, while flying drones were missing, VR goggles were everywhere; IBM was showing 8K video editing using flash memory and magnetic tape; the IBC itself featured a fully IP-based video studio showing the path to future media production using lower-cost commodity hardware with software management; and it became clear that digital technology is driving new entertainment experiences and will dictate the next generation of content distribution, including the growing trend to OTT channels.

In general, IBC 2016 featured the move to higher resolution and more immersive content. On display throughout the show was 360-degree video for virtual reality, as well as 4K and 8K workflows. Virtual reality and 8K are driving new levels of performance and storage demand, and these are just some of the ways that media and entertainment pros are increasing the size of video files. Nokia’s Ozo was just one of several multi-camera content capture devices on display for 360-degree video.

Besides multi-camera capture technology and VR editing, the Future Tech Zone at IBC included even larger 360-degree video display spheres than at the 2015 event. These were from Puffer Fish (pictured right). The smaller-sized spherical display was touch-sensitive so you could move your hand across the surface and cause the display to move (sadly, I didn’t get to try the big sphere).

IBM had a demonstration of a 4K/8K video editing workflow using the IBM FlashSystem and IBM Enterprise tape storage technology, a collaboration between the IBM Tokyo Laboratory and IBM’s Storage Systems division. This work was done to support the move to 4K/8K satellite broadcasts in Japan by 2018 and the delivery of 8K video streams of the 2020 Tokyo Olympic Games. The combination of flash memory storage for working content and tape for inactive content is referred to as FLAPE (flash and tAPE).

The graphic below shows a schematic of the 8K video workflow demonstration.

The argument for FLAPE appears to be that flash performance is needed for editing 8K content, while magnetic tape provides low-cost storage for the 8K content, which may require greater than 18TB for an hour of raw content (depending upon the sampling and frame rate). Note that magnetic tape is often used for archiving of video content, so this is a rather unusual application. The IBM demonstration, plus discussions with media and entertainment professionals at IBC, indicates that with the declining costs of flash memory and the performance demands of 8K, 8K workflows may finally drive increased demand for flash memory for post production.
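That 18TB-per-hour figure is easy to sanity-check with a little arithmetic. Below is a minimal Python sketch, assuming uncompressed 8K (7680x4320) with 10-bit 4:2:2 sampling (about 20 bits per pixel on average) at 60fps; as the article notes, the actual figure depends on the sampling structure and frame rate.

def hourly_terabytes(width, height, bits_per_pixel, fps):
    # Raw video generated per hour, in TB (10^12 bytes).
    bytes_per_frame = width * height * bits_per_pixel / 8
    return bytes_per_frame * fps * 3600 / 1e12

# 8K UHD (7680x4320), 10-bit 4:2:2 averages ~20 bits per pixel
print(f"{hourly_terabytes(7680, 4320, 20, 60):.1f} TB/hour")  # ~17.9

At roughly 18TB per hour, even a modest amount of footage quickly outgrows what an all-flash tier can affordably hold, which is the economic case for pairing flash with tape.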

Avid was promoting their Nexis file system, the successor to ISIS. The company uses SSDs for metadata, but generally flash isn’t used for actual editing yet. They agreed that as flash costs drop, flash could find a role for higher resolution and richer media. Avid has embraced open source for their code and provides free APIs for their storage. The company sees a hybrid of on-site and cloud storage for many media and entertainment applications.

EditShare announced a significant update to its XStream EFS Shared Storage Platform (our main image). The update provides non-disruptive scaling to over 5PB with millions of assets in a single namespace. The system provides a distributed file system with multiple levels of hardware redundancy and reduced downtime. An EFS cluster can be configured with a mix of capacity and performance, with SSDs for high-data-rate content and SATA HDDs for cost-efficient, higher-capacity storage — 8TB HDDs have been qualified for the system. The latest release expands optimization support for file-per-frame media.

The IBC IP Interoperability Zone showed a complete IP-based studio (pictured right), created with the cooperation of AIMS and the IABM. The zone brings to life the work of the JT-NM (the Joint Task Force on Networked Media, a combined initiative of AMWA, EBU, SMPTE and VSF) and the AES on a common roadmap for IP interoperability. Central to the IBC Feature Area was a live production studio, based on the technologies of the JT-NM roadmap, that Belgian broadcaster VRT has been using daily on-air all summer as part of the LiveIP Project, a collaboration between VRT, the European Broadcasting Union (EBU) and LiveIP’s 12 technology partners.

Summing Up
IBC 2016 showed some clear trends to more immersive, richer content with the numerous displays of 360-degree and VR content and many demonstrations of 4K and even 8K workflows. Clearly, the trend is for higher-capacity, higher-performance workflows and storage systems that support this workflow. This is leading to a gradual move to use flash memory to support these workflows as the costs for flash go down. At the same time, the move to IP-based equipment will lead to lower-cost commodity hardware with software control.

Storage analyst Tom Coughlin is president of Coughlin Associates. He has over 30 years in the data storage industry and is the author of Digital Storage in Consumer Electronics: The Essential Guide. He also publishes the Digital Storage Technology Newsletter and the Digital Storage in Media and Entertainment Report.

Introducing a new section on our site: techToolbox

In our quest to bring even more information and resources to postPerspective, we have launched a new section called techToolbox — a repository of sorts, where you can find white papers, tutorials, videos and more from a variety of product makers.

To kick off our new section, we’re focusing our first techToolbox on storage. Of all the technologies required for today’s entertainment infrastructure, storage remains one of the most crucial. Without the ability to store data in an efficient and reliable fashion, everything breaks down.

In techToolbox: Storage, we highlight some of today’s key advances in storage technology, with each providing a technical breakdown of why they could be the solution to your needs.

Check it out here.

Archion’s new Omni Hybrid storage targets VR, VFX, animation

Archion Technologies has introduced the EditStor Omni Hybrid, a collaborative storage solution for virtual reality, visual effects, animation, motion graphics and post workflows.

In terms of performance, an Omni Hybrid with one expansion chassis offers 8000MB/second for 4K and other streaming demands, and over 600,000 IOPS for rendering and motion graphics. The product has been certified for Adobe After Effects, Autodesk’s Maya/Flame/Lustre, The Foundry’s Nuke and Modo, Assimilate Scratch and Blackmagic’s Resolve and Fusion. The Omni Hybrid is scalable up to 1.5PB and can be expanded without shutdown.

“We have Omni Hybrid in post production facilities that range from high-end TV and film to massive reality productions,” reports Archion CTO James Tucci. “They are all doing graphics and editorial work on one storage system.”

Grading & Compositing Storage: Northern Lights

Speed is key for artist Chris Hengeveld.

By Beth Marchant

For Flame artist Chris Hengeveld of Northern Lights in New York City, high-performance file-level storage and a Fibre Channel connection mean it’s never been easier for him to download original source footage and share reference files with editorial on another floor. But Hengeveld still does 80 percent of his work the old-fashioned way: off hand-delivered drives that come in with raw footage from production.

Chris Hengeveld

The bicoastal editorial and finishing facility Northern Lights — parent company to motion graphics house Mr. Wonderful, the audio facility SuperExploder and production boutique Bodega — has an enviably symbiotic relationship with its various divisions. “We’re a small company but can go where we need to go,” says colorist/compositor Hengeveld. “We also help each other out. I do a lot of compositing, and Mr. Wonderful might be able to help me out or an assistant editor here might help me with After Effects work. There’s a lot of spillover between the companies, and I think that’s why we stay busy.”

Hengeveld, who has been with Northern Lights for nine years, uses Flame Premium, Autodesk’s visual effects finishing bundle of Flame and Flare with grading software Lustre. “It lets me do everything from final color work, VFX and compositing to plain-old finishing to get it out of the box and onto the air,” he says. With Northern Lights’ TV-centric work now including a growing cache of Web content, Hengeveld must often grade and finish in parallel. “No matter how you send it out, chances are what you’ve done is going to make it to the Web in some way. We make sure that what we make to look good on TV also looks good on the Web. It’s often just two different outputs. What looks good on broadcast you often have to goose a bit to get it to look good on the Web. Also, the audio specs are slightly different.”

Hengeveld provided compositing and color on this spot for Speedo.

Editorial workflows typically begin on the floor above Hengeveld in Avid, “and an increasing number, as time goes by, in Adobe Premiere,” he says. Editors are connected to media through a TerraBlock shared storage system from Facilis. “Each room works off a partition from the TerraBlock, though typically with files transcoded from the original footage,” he says. “There’s very little that gets translated from them to me, in terms of clip-based material. But we do have an Aurora RAID from Rorke (now Scale Logic) off which we run a HyperFS SAN — a very high-performance, file-level storage area network — that connects to all the rooms and lets us share material very easily.”

The Avids in editorial at Northern Lights are connected by Gigabit Ethernet, but Hengeveld’s room is connected by Fibre. “I get very fast downloading of whatever I need. That system includes Mr. Wonderful, too, so we can share what we need to, when we need to. But I don’t really share much of the Avid work except for reference files.” For that, he goes back to raw camera footage. “I’d say about 80 percent of the time, I’m pulling that raw shoot material off of G-Technology drives. It’s still sneaker-net on getting those source drives, and I don’t think that’s ever going to change,” he says. “I sometimes get 6TB of footage in for certain jobs and you’re not going to copy that all to a centrally located storage, especially when you’ll end up using about a hundredth of that material.”
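The arithmetic behind that sneaker-net choice is straightforward. Here is a rough Python sketch; the link speeds and the 70 percent sustained-efficiency derating are illustrative assumptions, not measurements from Northern Lights’ network.

def copy_hours(tb, link_gbps, efficiency=0.7):
    # Hours to move `tb` terabytes over a link of `link_gbps` Gb/s,
    # derated to an assumed 70% sustained efficiency.
    bytes_per_sec = link_gbps * 1e9 / 8 * efficiency
    return tb * 1e12 / bytes_per_sec / 3600

for label, gbps in [("GigE", 1), ("10GigE", 10), ("16Gb Fibre", 16)]:
    print(f"{label}: {copy_hours(6, gbps):.1f} hours for 6TB")
# GigE: ~19.0 hours; 10GigE: ~1.9; 16Gb Fibre: ~1.2

Even over Fibre, copying a full 6TB shuttle drive ties up a link for more than an hour, so mounting the drive locally and pulling only the shots you need wins.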

The source drives are typically dupes from the production company, which more often than not is sister company Bodega. “These drives are not made for permanent storage,” he says. “These are transitional drives. But if you’re storing stuff that you want to access in five to six years, it’s really got to go to LTO or some other system.” It’s another reason he’s so committed to Flame and Lustre, he says. Both archive every project locally with its complete media, which can then be easily dropped onto an LTO for safe long-term storage.

Time or money constraints can shift this basic workflow for Hengeveld, who sometimes receives a piece of a project from an editor that has been stripped of its color correction. “In that case, instead of loading in the raw material, I would load in the 15- or 30-second clip that they’ve created and work off of that. The downside with that is if the clip was shot with an adjustable format camera like a Red or Arri RAW, I lose that control. But at least, if they shoot it in Log-C, I still have the ability to have material that has a lot of latitude to work with. It’s not desirable, but for better stuff I almost always go back to the original source material and do a conform. But you sometimes are forced to make concessions, depending on how much time or budget the client has.”

A recent spot for IZOD, with color by Hengeveld.

Those same constraints, paired with advances in technology, also mean far fewer in-person client meetings. “So much of this stuff is being evaluated on their computer after I’ve done a grade or composite on it,” he says. “I guess they feel more trust with the companies they’re working with. And let’s be honest: when you get into these very detailed composites, it can be like watching paint dry. Yet, many times when I’m grading, I love having a client here because I think the sum of two is always greater than one. I enjoy the interaction. I learn something and I get to know my client better, too. I find out more about their subjectivity and what they like. There’s a lot to be said for it.”

Hengeveld also knows that his clients can often be more efficient at their own offices, especially when handling multiple projects at once, influencing their preferences for virtual meetings. “That’s the reality. There’s good and bad about that trade off. But sometimes, nothing beats an in-person session.”

Our main image is from NBC’s Rokerthon.

Storage Workflows for 4K and Beyond

Technicolor-Postworks and Deluxe Creative Services share their stories.

By Beth Marchant

Once upon a time, an editorial shop was a sneaker-net away from the other islands in the pipeline archipelago. That changed when the last phases of the digital revolution set many traditional editorial facilities into swift expansion mode to include more post production services under one roof.

The consolidating business environment in the post industry of the past several years then brought more of those expanded, overlapping divisions together. That’s a lot for any network to handle, let alone one containing some of the highest quality and most data-dense sound and pictures being created today. The networked storage systems connecting them all must be robust, efficient and realtime without fail, but also capable of expanding and contracting with the fluctuations of client requests, job sizes, acquisitions and, of course, evolving technology.

There’s a “relief valve” in the cloud and object storage, say facility CTOs minding the flow, but it’s still a delicate balance between local pooled and tiered storage and iron-clad cloud-based networks their clients will trust.

Technicolor-PostWorks
Joe Beirne, CTO of Technicolor-PostWorks New York, is probably as familiar as one can be with complex nonlinear editorial workflows. A user of Avid’s earliest NLEs, an early adopter of networked editing and an immersive interactive filmmaker who experimented early with bluescreen footage, Beirne began his career as a technical advisor and producer for high-profile mixed-format feature documentaries, including Michael Moore’s Fahrenheit 9/11 and the last film in Godfrey Reggio’s KOYAANISQATSI trilogy.

Joe Beirne

In his 11 years as a technology strategist at Technicolor-PostWorks New York, Beirne has also become fluent in evolving color, DI and audio workflows for clients such as HBO, Lionsgate, Discovery and Amazon Studios. CTO since 2011, when PostWorks NY acquired the East Coast Technicolor facility and the color science that came with it, he now oversees the increasingly complicated ecosystem that moves and stores vast amounts of high-resolution footage and data while simultaneously holding those separate and variously intersecting workflows together.

As the first post facility in New York to handle petabyte levels of editorial-based storage, Technicolor-PostWorks learned early how to manage the data explosion unleashed by digital cameras and NLEs. “That’s not because we had a petabyte SAN or NAS or near-line storage,” explains Beirne. “But we had literally 25 to 30 Avid Unity systems that were all in aggregate at once. We had a lot of storage spread out over the campus of buildings that we ran on the traditional PostWorks editorial side of the business.”

The TV finishing and DI business that developed at PostWorks in 2005, when Beirne joined the company (he was previously a client), eventually necessitated a different route. “As we’ve grown, we’ve expanded out to tiered storage, as everyone is doing, and also to the cloud,” he says. “Like we’ve done with our creative platforms, we have channeled our different storage systems and subsystems to meet specific needs. But they all have a very promiscuous relationship with each other!”

TPW’s high-performance storage in its production network is a combination of local or semi-locally attached near-line storage tethered by several Quantum StorNext SANs, all of it air-gapped — or physically segregated — from the public Internet. “We’ve got multiple SANs in the main Technicolor mothership on Leroy Street with multiple metadata controllers,” says Beirne. “We’ve also got some client-specific storage, so we have a SAN that can be dedicated to a particular account. We did that for a particular client who has very restrictive policies about shared storage.”

TPW’s editorial media, for the most part, resides in Avid’s ISIS system and is in the process of transitioning to its software-defined replacement, Nexis. “We have hundreds of Avids, a few Adobe and even some Final Cut systems connected to that collection of Nexis and ISIS and Unity systems,” he says. “We’re currently testing the Nexis pipeline for our needs but, in general, we’re going to keep using this kind of storage for the foreseeable future. We have multiple storage servers that serve that part of our business.”

Beirne says most every project the facility touches is archived to LTO tape. “We have a little bit of disc-to-tape archiving going on for the same reasons everybody else does,” he adds. “And some SAN volume hot spots that are all SSD (solid state drives) or a hybrid.” The facility is also in the process of improving the bandwidth of its overall switching fabric, both on the Fibre Channel side and on the Ethernet side. “That means we’re moving to 32Gb and multiple 16Gb links,” he says. “We’re also exploring a 40Gb Ethernet backbone.”

Technicolor-PostWorks’ 4K theater at its Leroy Street location.

This backbone, he adds, carries an exponential amount of data every day. “Now we have what are like two nested networks of storage at a lot of the artist workstations,” he explains. “That’s a complicating feature. It’s this big, kind of octopus, actually. Scratch that: it’s like two octopi on top of one another. That’s not even mentioning the baseband LAN network that interweaves this whole thing. They, of course, are now getting intermixed because we are also doing IT-based switching. The entire, complex ecosystem is evolving and everything that interacts with it is evolving right along with it.”

The cloud is providing some relief and handles multiple types of storage workflows across TPW’s various business units. “Different flavors of the commercial cloud, as well as our own private cloud, handle those different pools of storage outside our premises,” Beirne says. “We’re collaborating right now with an international account in another territory and we’re touching their storage envelope through the Azure cloud (Microsoft’s enterprise-grade cloud platform). Our Azure cloud and theirs touch and we push data from that storage back and forth between us. That particular collaboration happened because we both had an Azure instance, and those kinds of server-to-server transactions that occur entirely in the cloud work very well. We also had a relationship with one of the studios in which we made a similar connection through Amazon’s S3 cloud.”

Given the trepidations most studios still have about the cloud, Beirne admits there will always be some initial, instinctive mistrust from both clients and staff when you start moving any content away from computers that are not your own and you don’t control. “What made that first cloud solution work, and this is kind of goofy, is we used Aspera to move the data, even though it was between adjacent racks. But we took advantage of the high-bandwidth backbone to do it efficiently.”

Both TPW in New York and Technicolor in Los Angeles have since leveraged the cloud aggressively. “We have our own cloud that we built, and big Technicolor has a very substantial purpose-built cloud, as well as Technicolor Pulse, their new storage-related production service in the cloud. They also use object storage and have some even newer technology that will be launching shortly.”

The caveat to moving any storage-related workflow into the cloud is thorough and continual testing, says Beirne. “Do I have more concern for my clients’ media in the cloud than I do when sending my own tax forms electronically? Yeah, I probably do,” he says. “It’s a very, very high threshold that we need to pass. But that said, there’s quite a bit of low-impact support stuff that we can do on the cloud. Review and approval stuff has been happening in the cloud for some time.” As a result, the facility has seen an increase, like everyone else, in virtual client sessions, like live color sessions and live mix sessions from city to city or continent to continent. “To do that, we usually have a closed circuit that we open between two facilities and have calibrated displays on either end. And, we also use PIX and other normal dailies systems.”

“How we process and push this media around ultimately defines our business,” he concludes. “It’s increasingly bigger projects that are made more demanding from a computing point of view. And then spreading that out in a safe and effective way to where people want to access it, that’s the challenge we confront every single day. There’s this enormous tension between the desire to be mobile and open and computing everywhere and anywhere, with these incredibly powerful computer systems we now carry around in our pockets and the bandwidth of the content that we’re making, which is high frame rate, high resolution, high dynamic range and high everything. And with 8K — HDR and stereo wavefront data goes way beyond 8K and what the retina even sees — and 10-bit or more coming in the broadcast chain, it will be more of the same.” TPW is already doing 16-bit processing for all of its film projects and most of its television work. “That’s piles and piles and piles of data that also scales linearly. It’s never going to stop. And we have a VR lab here now, and there’s no end of the data when you start including everything in and outside of the frame. That’s what keeps me up at night.”

Deluxe Creative Services
Before becoming CTO at Deluxe Creative Services, Mike Chiado had a 15-year career as a color engineer and image scientist at Company 3, the grading and finishing powerhouse acquired by Deluxe in 2010. He now manages the pipelines of a commercial, television and film Creative Services division that encompasses not just dailies, editorial and color, but sound, VFX, 3D conversion, virtual reality, interactive design and restoration.

Mike Chiado

That’s a hugely data-heavy load to begin with, and as VR and 8K projects become more common, managing the data stored and coursing through DCS’ network will get even more demanding. Branded companies currently under the monster Deluxe umbrella include Beast, Company 3, DDP, Deluxe/Culver City, Deluxe VR, Editpool, Efilm, Encore, Flagstaff Studios, Iloura, Level 3, Method Studios, StageOne Sound, Stereo D, and Rushes.

“Actually, that’s nothing when you consider that all the delivery and media teams from Deluxe Delivery and Deluxe Digital Cinema are downstream of Creative Services,” says Chiado. “That’s a much bigger network and storage challenge at that level.” Still, the storage challenges of Chiado’s segment are routinely complicated by the twin monkey wrenches of the collaborative and computer kind that can unhinge any technology-driven art form.

“Each area of the business has its own specific problems that recur: television has its issues, commercial work has its issues and features have theirs. For us, commercials and features are more alike than you might think, partly due to the constantly changing visual effects but also due to shifting schedules. Television is much more regimented,” he says. “But sometimes we get hard drives in on a commercial or feature and we think, ‘Well, that’s not what we talked about at all!’”

Company 3’s file-based digital intermediate work quickly clarified Chiado’s technical priorities. “The thing that we learned early on is realtime playback is just so critical,” he says. “When we did our very first file-based DI job 13 years ago, we were so excited that we could display a certain resolution. OK, it was slipping a little bit from realtime, maybe we’ll get 22 frames a second, or 23, but then the director walked out after five minutes and said, ‘No. This won’t work.’ He couldn’t care less about the resolution because it was always about realtime, solid playback. Luckily, we learned our lesson pretty quickly and learned it well! In Deluxe Creative Services, that still is the number one priority.”

It’s also helped him cut through unnecessary sales pitches from storage vendors unfamiliar with Deluxe’s business. “When I talk to them, I say, ‘Don’t tell me about bit rates. I’m going to tell you a frame rate I want to hit and a resolution, and you tell me if we can hit it or not with your solution. I don’t want to argue bits; I want to tell you this is what I need to do and you’re going to tell me whether or not your storage can do that.’ The storage vendors that we’re going to bank our A-client work on better understand fundamentally what we need.”

Because some of the Deluxe company brands share office space — Method and Company 3 moved into a 63,376-square-foot former warehouse in Santa Monica a few years ago — they have access to the same storage infrastructure. “But there are often volumes specially purpose-built for a particular job,” says Chiado. “In that way, we’ve created volumes focused on supporting 4K feature work and others set up specifically for CG desktop environments that are shared across 400 people in that one building. We also have similar business units in Company 3 and Efilm, so sometimes it makes sense that we would want, for artist or client reasons, to have somebody in a different location from where the data resides. For example, having the artist in Santa Monica and the director and DP in Hollywood is something we do regularly.”

Chiado says Deluxe has designed and built, with network and storage solution providers, a system “that suits our needs. But for the most part, we’re using off-the-shelf products for storage. The magic is how we tune them to be able to work with our systems.”

Those vendors include Quantum, DDN Storage and EMC’s network-attached storage Isilon. “For our most robust needs, like 4K feature workflows, we rely on DDN,” he says. “We’ve actually already done some 8K workflows. Crazy world we live in!” For long-term archiving, each Deluxe Creative Service location worldwide has an LTO-tape robot library. “In some cases, we’ll have a near-line tier two volume that stages it. And for the past few years, we’re using object storage in some locations to help with that.”

Although the entire group of Deluxe divisions and offices are linked by a robust 10GigE network that sometimes takes advantage of dark fiber, unused fiber optic cables leased from larger fiber-optic communications companies, Chiado says the storage they use is all very specific to each business unit. “We’re moving stuff around all the time but projects are pretty much residing in one spot or another,” he says. “Often, there are a thousand reasons why — it may be for tax incentives in a particular location, it may be for project-specific needs. Or it’s just that we’re talking about the London and LA locations.”

With one eye on the future and another on budgets, Chiado says pooled storage has helped DCS keep costs down while managing larger and larger subsets of data-heavy projects. “We are always on the lookout for ways to handle the next thing, like the arrival of 8K workflows, but we’ve gained huge, huge efficiencies from pooled storage,” he says. “So that’s the beauty of what we build, specific to each of our world locations. We move it around if we have to between locations but inside that location, everybody works with the content in one place. That right there was a major efficiency in our workflows.”

Beyond that, he says, how to handle 8K is still an open question. “We may have to make an island — we’ve been testing so far — but we do everything we can to keep it in one place and leverage whatever technology is required for the job,” Chiado says. “We have isolated instances of SSDs (solid-state drives), but we don’t have large-scale deployment of SSDs yet. On the other end, we’re working with cloud vendors, too, to be able to maximize our investments.”

Although the company is still working through cloud security issues, Chiado says Deluxe is “actively engaging with cloud vendors because we aren’t convinced that our clients are going to be happy with the security protocols in place right now. The nature of the business is we are regularly involved with our clients and MPAA and have ongoing security audits. We also have a group within Deluxe that helps us maintain the best standards, but each show that comes in may have its own unique security needs. It’s a constant, evolving process. It’s been really difficult to get our heads and our clients’ heads around using the cloud for rendering, transcoding or for storage.”

Luckily, that’s starting to change. “We’re getting good traction now, with a few of the studios getting ready to greenlight cloud use and our own pipeline development to support it,” he adds. “They are hand in hand. But I think once we move over this hurdle, this is going to help the industry tremendously.”

Beyond those longer-term challenges, Chiado says the day-to-day demands of each division haven’t changed much. “Everybody always needs more storage, so we are constantly looking at ways to make that happen,” he says. “The better we can monitor our storage and make our in-house people feel comfortable moving stuff off near-line to tape and bring it back again, the better we can put the storage where we need it. But I’m very optimistic about the future, especially about having a relief valve in the cloud.”

Our main image is the shared 4K theater at Company 3 and Method.

VFX Storage: The Molecule

Evolving to a virtual private local cloud?

By Beth Marchant

VFX artists, supervisors and technologists have long been on the cutting edge of evolving post workflows. The networks built to move, manage, iterate, render and put every pixel into one breathtaking final place are the real superheroes here, and as New York’s The Molecule expands to meet the rising demand for prime-time visual effects, it pulls even more power from its evolving storage pipeline in and out of the cloud.

The Molecule CEO/CTO Chris Healer has a fondness for unusual workarounds. While studying film in college, he built a 16mm projector out of Legos and wrote a 3D graphics library for DOS. In his professional life, he swiftly transitioned from Web design to motion capture and 3D animation. He still wears many hats at his now bicoastal VFX and VR facility, The Molecule — which he founded in New York in 2005 — including CEO, CTO, VFX supervisor, designer, software developer and scientist. In those intersecting capacities, Healer has created the company’s renderfarm, developed and automated its workflow, linking and preview tools, and designed and built out its cloud-based compositing pipeline.

When the original New York office went into growth mode, Healer (pictured at his new, under-construction facility) turned to GPL Technologies, a VFX and post-focused digital media pipeline and data infrastructure developer, to help him build an entirely new network foundation for the new location the company will move to later this summer. “Up to this point, we’ve had the same system and we’ve asked GPL to come in and help us create a new one from scratch,” he says. “But any time you hire anyone to help with this kind of thing, you’ve really got to do your own research and figure out what makes sense for your artists, your workflows and, ultimately, your bottom line.”

The new facility will start with 65 seats and expand to more than 100 within the next year to 18 months. Current clients include the major networks, Showtime, HBO, AMC, Netflix and director/producer Doug Liman.

Netflix’s Unbreakable Kimmy Schmidt is just one of the shows The Molecule works on.

Healer’s experience as an artist, developer, supervisor and business owner has given him a seasoned perspective on how to develop VFX pipeline work. “There’s a huge disparity between what the conventional user wants to do, i.e. share data, and the much longer dialog you need to have to build a network. Connecting and sharing data is really just the beginning of a very long story that involves so many other factors: how many things are you connecting to? What type of connection do you have? How far away are you from what you’re connecting to? How much data are you moving, and is it all at once or a continuous stream? Users are so different, too.”

Complicating these questions, he says, is a facility’s willingness to embrace new technology before it’s been vetted in the market. “I generally resist the newest technologies,” he says. “My instinct is that I would prefer an older system that’s been tested for years upon years. You go to NAB and see all kinds of cool stuff that appears to be working the way it should. But it hasn’t been tried in different kinds of circumstances, or it’s being pitched to the broadcast industry and may not work well for VFX.”

Making a Choice
He was convinced by EMC’s Isilon system based on customer feedback, and the hardware has already been delivered to the new office. “We won’t install it until construction is complete, but all the documentation is pointing in the right direction,” he says. “Still, it’s a bit of a risk until we get it up and running.”

Last October, Dell announced it would acquire EMC in a deal that is set to close in mid-July. That should suit The Molecule just fine — most of its artists’ computers are either Dell or HP running Nvidia graphics.

A traditional NAS configuration on a single GigE line can only do up to 100MB per second. “A 10GigE connection running in NFS can, theoretically, do 10 times that,” says Healer. “But 10GigE works slightly differently, like an LA freeway, where you don’t change the speed limit but you change the number of lanes and the on- and off-ramp lights to keep the traffic flowing. It’s not just a bigger gun for a bigger job, but more complexity in the whole system. Isilon seems to do that very well, and it’s why we chose them.”

His company’s fast growth, Healer says, has “presented a lot of philosophical questions about disk and RAID redundancy, for example. If you lose a disk in RAID-5 you’re OK, but if two fail, you’re screwed. Clustered file systems like GlusterFS and OneFS, which Isilon uses, have a lot more redundancy built in so you could lose quite a lot of disks and still be fine. If your number is up and on that unlucky day you lost six disks, then you would have backup. But that still doesn’t answer what happens if you have a fire in your office or, more likely, there’s a fire elsewhere in the building and it causes the sprinklers to go off. Suddenly, the need for off-site storage is very important for us, so that’s where we are pushing into next.”
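The redundancy trade-off Healer describes can be put in numbers. Below is a minimal Python sketch comparing a single-parity RAID-5 group with a k-data/m-parity erasure-coded layout; the 7+1 and 16+4 geometries are illustrative examples, not The Molecule’s actual configuration.

def layout(data_disks, parity_disks):
    # Failures survivable and usable capacity for a k+m protection group.
    total = data_disks + parity_disks
    return {"survivable_failures": parity_disks,
            "usable_fraction": round(data_disks / total, 3)}

print("RAID-5 (7+1):", layout(7, 1))    # survives 1 failure, 87.5% usable
print("Erasure 16+4:", layout(16, 4))   # survives 4 failures, 80% usable

The erasure-coded layout gives up some usable capacity in exchange for surviving several simultaneous failures; as Healer notes, though, no amount of parity protects against a sprinkler going off over the whole rack.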

Healer homed in on several metrics to help him determine the right path. “The solutions we looked at had to have the following: DR, or disaster recovery, replication, scalability, off-site storage, undelete and versioning snapshots. And they don’t exactly overlap. I talked to a guy just the other day at Rsync.net, which does cloud storage of off-site backups (not to be confused with the Unix command, though they are related). That’s the direction we’re headed. But VFX is just such a hard fit for any of these new data centers because they don’t want to accept and sync 10TB of data per day.”

A rendering of The Molecule NYC’s new location.

His current goal is simply to sync material between the two offices. “The holy grail of that scenario is that neither office has the definitive master copy of the material and there is a floating cloud copy somewhere out there that both offices are drawing from,” he says. “There’s a process out there called ‘sharding,’ as in a shard of glass, that MongoDB and Scality and other systems use, which says that the data is out there everywhere but is physically diverse. It’s local, but kept in sync with its partners. This makes sense, but not if you’re moving terabytes.”

The model Healer is hoping to implement is to “basically offshore the whole company,” he says. “We’ve been working for the past few months with a New York metro startup called Packet, which has a really unique concept of a virtual private local cloud. It’s a mouthful, but it’s where we need to be.” If The Molecule is doing work in New York City, Healer points out, Packet is close enough that network transmissions are fast enough and “it’s as if the machines were on our local network, which is amazing. It’s huge. If the Amazon cloud data center is 500 miles away from your office, that drastically changes how well you can treat those machines as if they are local. I really like this movement of virtual private local that says, ‘We’re close by, we’re very secure and we have more capacity than individual facilities could ever want.’ But they are off-site, and the multiple other companies that use them are in their own discrete containers that never cross. Plus, you pay per use — basically per hour and per resource. In my ideal future world, we would have some rendering capacity in our office, some other rendering capacity at Packet and off-site storage at Rsync.net. If that works out, we could potentially virtualize the whole workflow and join our New York and LA offices and any other satellite office we want to set up in the future.”
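The “close enough to feel local” claim comes down to propagation delay. A quick Python sketch using the rule of thumb that light covers roughly 200km per millisecond in fiber; the distances are illustrative, and real round trips add switching and routing overhead on top.

FIBER_KM_PER_MS = 200.0  # light in glass covers ~200km per millisecond

def rtt_ms(km):
    # Best-case round-trip time over fiber, ignoring switching/routing.
    return 2 * km / FIBER_KM_PER_MS

print(f"Metro data center (~50km): {rtt_ms(50):.1f} ms RTT")  # ~0.5ms
print(f"500 miles (~805km): {rtt_ms(805):.1f} ms RTT")        # ~8.1ms

Sub-millisecond round trips make remote machines feel like LAN peers; at 8ms and beyond, chatty protocols and interactive sessions start to drag.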

The VFX market, especially in New York, has certainly come into its own in recent years. “It’s great to be in an era when nearly every single frame of every single shot of both television and film is touched in some way by visual effects, and budgets are climbing back and the tax credits have brought a lot more VFX artists, companies and projects to town,” Healer says. “But we’re also heading toward a time when the actual brick-and-mortar space of an office may not be as critical as it is now, and that would be a huge boon for the visual effects industry and the resources we provide.”

Storage Roundtable

Manufacturers weigh in on trends, needs.

By Randi Altman

Storage is the backbone of today’s workflows, from set to post to archive. There are many types of storage offerings from many different companies, so how do you know what’s right for your needs?

In an effort to educate, we gathered questions from users in the field. “If you were sitting across a table from makers of storage, what would you ask?”

The following is a virtual roundtable featuring a diverse set of storage makers answering a variety of questions. We hope it’s helpful. If you have a question that you would like to ask of these companies, feel free to email me directly at randi@postPerspective.com and I will get them answered.

SCALE LOGIC’S BOB HERZAN
What are the top three requests you get from your post clients?
A post client’s primary concern is reliability. They want to be assured that the storage solution they are buying supports all of their applications and will provide the performance each application needs, when it’s needed. The solution must be able to interact with MAM or PAM systems, let them search and retrieve their assets, and future-proof, scale and manage the storage in a tiered infrastructure.

Secondly, the client wants to be able to use their content in a way that makes sense. Assets need to be accessible to the stakeholders of a project, no matter how big or complex the storage ecosystem.

Finally, the client wants to see the options available to develop a long-term archiving process that can assure the long-term preservation of their finished assets. All three of these areas can be very daunting to our customers, and being able to wade through all of the technology options and make the right choices for each business is our specialty.

How should post users decide between SAN, NAS and object storage?
There are a number of factors to consider, including overall bandwidth, individual client bandwidth, project lifespan and overall storage requirements. Because high-speed online storage typically has the highest infrastructure costs, a tiered approach makes the most sense for many facilities, where SAN, NAS, cloud or object storage may all be used at the same time. In this case, the speed with which a user will need access to a project is directly related to the type of storage the project is stored on.

Scale Logic uses a consultative approach with our customers to architect a solution that will fit both their workflow and budget requirements. We look at the time it takes to accomplish a task, what risks, if any, are acceptable, the size of the assets and the obvious, but nonetheless, vital budgetary considerations. One of the best tools in our toolbox is our HyperFS file system, which allows customers the ability to choose any one of four tiers of storage solutions while allowing full scalability to incorporate SAN, NAS, cloud and object storage as they grow.

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
Above everything else we want to tailor a solution to the needs of the clients. With our consultative approach we take a look not only at the requirements to build the best solution for today, but also the ability to grow and scale up to the needs of tomorrow. We look at scalability not just from the perspective of having more ability to do things, but in doing the most with what we have. While even our entry level system is capable of doing 10 streams of 4K, it’s equally, if not more, important to make sure that those streams are directed to the people who need them most while allowing other users access at lower resolutions.

GENESIS Unlimited

Our Advanced QoS can learn the I/O patterns and behavior of an application, while admins can give applications a “realtime” or “non-realtime” status. This means “non-realtime” applications auto-throttle down to allow realtime apps the bandwidth. Many popular applications come pre-learned, like Finder, Resolve, Premiere or Flame. In addition, admins can add their own apps.
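Scale Logic doesn’t publish the internals of its QoS engine, but the behavior described — non-realtime applications automatically throttling down so realtime ones keep their bandwidth — can be sketched as a simple two-class allocator. Everything below (the names, the numbers and the allocation policy) is an illustration of that behavior, not actual HyperFS code.

def allocate(total_mbps, requests):
    # requests: list of (app, mbps_wanted, is_realtime) tuples.
    grants, remaining = {}, total_mbps
    # Realtime apps are served first, up to the total available bandwidth.
    for app, want, rt in requests:
        if rt:
            grants[app] = min(want, remaining)
            remaining -= grants[app]
    # Non-realtime apps share whatever is left, proportional to demand.
    nrt = [(app, want) for app, want, rt in requests if not rt]
    demand = sum(want for _, want in nrt) or 1
    for app, want in nrt:
        grants[app] = min(want, remaining * want / demand)
    return grants

print(allocate(3000, [("Resolve", 1700, True),
                      ("Premiere", 900, True),
                      ("Finder copy", 800, False)]))
# {'Resolve': 1700, 'Premiere': 900, 'Finder copy': 400.0}

A production implementation would also learn each application’s I/O pattern over time, as described above, rather than rely on static demand figures.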

What do you expect to see as the next trend relating to storage?
Storage always evolves. Whatever is next in post production storage is already in use elsewhere as we are a pretty risk-averse group, for obvious reasons. With that said, the adoption of Unified Storage Platforms and hybrid cloud workflows will be the next big thing for big media producers like post facilities. The need for local online and nearline storage must remain for realtime, resolution-intense processes and data movement between tiers, but the decision-making process and asset management is better served globally by increased shared access and creative input.

The entertainment industry has pushed the limits of storage for over 30 years with no end in sight. In addition, the ability to manage storage tiers and collaborate both on-prem and off will dictate the type of storage solutions our customers will need to invest in. The evolution of storage needs continues to be driven by the consumer: TVs and displays have moved to demanding 4K content from the producers. The increased success of the small professional cameras allows more access to multi-camera shoots. However, as performance and capacity continues to grow for our customers, it brings the complexity down to managing large data farms effectively, efficiently and affordably. That is on the horizon in our future solution designs. Expensive, proprietary hardware will be a thing of the past and open, affordable storage will be the norm, with user-friendly and intuitive software developed to automate, simplify, and monetize our customer assets while maintaining industry compatibility.

SMALL TREE‘S CORKY SEEBER
How do your solutions work with clients’ existing storage? And who is your typical client?
There are many ways to have multiple storage solutions co-exist within the post house; most of these choices are driven by the intended use of the content and the size and budget of the customer. The ability to migrate content from one storage medium to another is key to allowing customers to take full advantage of our shared storage solutions.

Our goal is to provide simple solutions for the small to medium facilities, using Ethernet connectivity from clients to the server to keep costs down and make support of the storage less complicated. Ethernet connectivity also enables the ability to provide access to existing storage pools via Ethernet switches.

What steps have you taken to work with technologies outside of your own?
Today’s storage providers need to actively design their products to allow the post house to maximize the investment in their shared storage choice. Our custom software is open-source based, which allows greater flexibility to integrate with a wider range of technologies seamlessly.

Additionally, the actual communication between products from different companies can be a problem. Storage designs that allow the ability to use copper or optical Ethernet and Fibre Channel connectivity provide a wide range of options to ensure all aspects of the workflow can be supported from ingest to archive.

What challenges, if any, do larger drives represent?
Today’s denser drives, while providing more storage space within the same physical footprint, do have some characteristics that need to be factored in when making your storage solution decisions. Larger drives will take longer to configure and rebuild data sets once a failed disk occurs, and in some cases may be slightly slower than less dense disk drives. You may want to consider using different RAID protocols or even using software RAID protection rather than hardware RAID protection to minimize some of the challenges that the new, larger disk drives present.
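The rebuild-time concern can be made concrete. A minimal Python sketch, assuming a sustained rebuild rate of 150MB/s; that figure is an illustrative assumption, and real rates vary with RAID level, controller and concurrent client load.

def rebuild_hours(drive_tb, rebuild_mbps=150):
    # Hours to reconstruct one failed drive at a fixed rebuild rate.
    return drive_tb * 1e12 / (rebuild_mbps * 1e6) / 3600

for tb in (2, 6, 10):
    print(f"{tb}TB drive: ~{rebuild_hours(tb):.0f} hours to rebuild")
# 2TB: ~4h, 6TB: ~11h, 10TB: ~19h

The longer a rebuild runs, the longer the array operates with reduced protection, which is one reason denser drives push buyers toward dual-parity RAID or the software protection schemes mentioned above.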

When do you recommend NAS over SAN deployments?
This is an age-old question as both deployments have advantages. Typically, NAS deployments make more sense for smaller customers as they may require less networking infrastructure. If you can direct connect all of your clients to the storage and save the cost of a switch, why not do that?

SAN deployments make sense for larger customers who have such a large number of clients that making direct connections to the server is impractical or impossible: these require additional software to keep everything straight.

In the past, SAN deployments were viewed as the superior option, mostly due to Fibre Channel being faster than Ethernet. With the wide acceptance of 10GbE, there is a convergence of sorts, and NAS performance is no longer considered a weakness compared to SAN. Performance aside, a SAN deployment makes more sense for very large customers with hundreds of clients and multiple large storage pools that need to support universal access.

QUANTUM‘S JANET LAFLEUR
What are the top three requests that you get from post users?
1) Shared storage with both SAN and NAS access to collaborate more broadly across groups. For streaming high-resolution content to editorial workstations, there’s nothing that can match the performance of shared SAN storage, but not all production team members need the power of SAN.

For example, animation and editorial workflows often share content. While editorial operations stream content from a SAN connection, a NAS gateway using a higher-speed IP protocol optimized for video (such as our StorNext DLC) can be used for rendering. By working with NAS, producers and other staff who primarily access proxies, images, scripts and other text documents can more easily access this content directly from their desktops. Our Xcellis workflow storage offers NAS access out of the box, so content can be shared over IP and over Fibre Channel SAN.

2) A starting point for smaller shops that scales smoothly. For a small shop with a handful of workstations, it can be hard to find a storage solution that fits into the budget now but doesn’t require a forklift upgrade later when the business grows. That’s one reason we built Xcellis workflow storage with a converged architecture that combines metadata storage and content storage. Xcellis provides a tighter footprint for smaller sites, but still can scale up for hundreds of users and multiple petabytes of content.

3) Simple setup and management of storage. No one wants to spend time deploying, managing and upgrading complex storage infrastructure, especially not post users who just want storage that supports their workflow. That’s why we are continuing to enhance StorNext Connect, which can not only identify problems before they affect users but also reduce the risk of downtime or degraded performance by eliminating error-prone manual tasks. We want our customers to be able to focus on content creation, not on managing storage.

How should post users decide between SAN, NAS and object storage?
Media workflows are complex, with unique requirements at each step. SAN, NAS and object storage all have qualities that make them ideal for specific workflow functions.

SAN: High-resolution, high-image-quality content production requires low-latency, high-performance storage that can stream 4K or greater — plus HDR, HFR content — to multiple workstations without dropping frames. Fibre Channel SANs are the only way to ensure performance for multi-streaming this content.

Object storage: For content libraries that are being actively monetized, object storage delivers the disk-level performance needed for transcoding and reuse. Object storage also scales beyond the petabyte level, and the self-balancing nature of its erasure code algorithms makes replacing aging disks with next-generation ones much simpler and faster than is possible with RAID systems.

NAS: High-performance IP-based connections are ideal for enabling render server farms to access content from shared storage. The simplicity of deploying NAS is also recommended for low-bandwidth functions such as review and approval, plus DVD authoring, closed captioning and subtitling.

With an integrated, complete storage infrastructure, such as those built with our StorNext platform, users can work with any or all of these technologies — as well as digital tape and cloud — and target the right storage for the right task.

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
So much depends on the configuration: how many spindles, how many controllers, etc. At NAB 2016, our StorNext Pro 4K demo system delivered eight to 10 streams of 4K 10-bit DPX with headroom to stream more. The solution included four RAID-6 arrays of 24 drives each with redundant Xcellis Workflow Directors for an 84TB usable capacity in a neat 10U rack.
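Those stream counts line up with the raw arithmetic. Below is a minimal Python sketch of the per-stream bandwidth for 4K 10-bit DPX at 24fps; DPX packs 10-bit RGB samples into 32-bit words, so each pixel occupies 4 bytes, and the DCI 4K (4096x2160) raster is an assumption.

def dpx_stream_mbps(width=4096, height=2160, fps=24, bytes_per_pixel=4):
    # MB/s to play one uncompressed 10-bit DPX stream in realtime.
    return width * height * bytes_per_pixel * fps / 1e6

per_stream = dpx_stream_mbps()  # ~849 MB/s
print(f"{per_stream:.0f} MB/s per stream; "
      f"{8 * per_stream / 1000:.2f} GB/s aggregate for 8 streams")

Eight streams works out to roughly 6.8GB/s of sustained reads, a plausible load for four 24-drive RAID-6 arrays behind redundant controllers.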

The StorNext platform allows users to scale performance and capacity independently. The need for more capacity can be addressed with the simple addition of Xcellis storage expansion arrays. The need for more performance can be met with an upgrade of the Xcellis Workflow Director to support more concurrent file systems.

PANASAS‘ DAVID SALLAK
What are the top three storage-related requests/needs that you get from your post clients or potential post clients?
They want native support for Mac, high performance and a system that is easier to grow and manage than SAN.

When comparing shared storage product choices, what are the advantages of NAS over SAN? Does the easier administration of NAS compared to SAN factor into your choice of storage?
NAS is easier to manage than SAN. Scale-out NAS is easier to grow than SAN, and is designed for high availability. If scale-out NAS could be as fast as SAN, then SAN buyers would be very attracted to scale-out NAS.

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
As many streams as possible. Post users always need more performance for future projects and media formats, so storage should support a lot of streams of ProRes HD or DNxHD and be capable of handling uncompressed DPX formats that come from graphics departments.

What do you expect to see as the next trend relating to storage? The thing that’s going to push storage systems even further?
Large post production facilities need greater scalability, higher performance, easier use, and affordable pricing.

HGST‘S JEFF GREENWALD
What are the top three requests you get from your post clients or potential post clients?
They’re looking for better ways to develop cost efficiencies of their workflows. Secondly, they’re looking for ways to improve the performance of those workflows. Finally, they’re looking for ways to improve and enhance data delivery and availability.

How should post users decide between SAN, NAS and object storage?
There are four criteria customers must weigh to make trade-offs between the various storage technologies and storage tiers: the quantity of media data, the frequency of access, the latency requirements of data delivery and, finally, how the first three balance against their financial budgets.

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
To calculate the number of video streams, you must balance available bandwidth against file sizes and data-delivery requirements for the desired capacity. Also, jitter and data loss continue to shrink available bandwidth through retries and resends.

What do you expect to see as the next trend relating to storage, and what will push storage even further?
There are two trends that will dramatically transform the storage industry. The first is storage analytics, and the second is new and innovative usage of automatic meta-tagging of file data.

New technologies like SMR, optical and DNA-based object storage have not yet proven to be disruptors in storage, so it is likely that storage technology advancements will be evolutionary rather than revolutionary over the next 10 years.

G-TECH’S MICHAEL WILLIAMS
Who is using your gear in the post world? What types of pros?
Filmmakers, digital imaging technicians, editors, audio technicians and photographers all use our solutions. These are the pros that capture, store, transfer and edit motion pictures, indie films, TV shows, music, photography and more. We offer everything from rugged standalone portable drives to high-performance RAID solutions to high-capacity network storage for editing and collaboration.

You recently entered the world of NAS storage. Can you talk about the types of pros taking advantage of that tech?
Our NAS customers run the gamut from DITs to production coordinators to video editors and beyond. With camera technology advancing so rapidly, they are looking for storage solutions that can fit within the demanding workflows they encounter every day.

With respect to episodic TV, feature films, commercials or in-house video production, storage needs are rising faster than ever before while many IT staffs are shrinking, so we introduced the G-Rack 12 NAS platform. We are able to use HGST’s new 10TB enterprise-class hard drives to deliver 120TB of raw storage in a 2RU platform, providing the required collaboration and performance.

We have also made sure that our NAS OS on the G-Rack 12 is designed to be easily administered by the DIT, video editor or someone else on the production staff and not necessarily a Linux IT tech.

Production teams need to work smarter — DITs, video editors, DPs and the like can do the video shoot, get the video ingested into a device and get the post team working on it much faster than in days past. We all know that time is money; this is why we entered the NAS market.

Any other new tech on the horizon that might affect how you make storage, or a technology that might drive your storage in other directions?
The integration of G-Technology — along with SanDisk and HGST — into Western Digital is opening up doors in terms of new technologies. In addition to our current high-capacity, enterprise-class HDD-based offerings, SSD devices are now available to give us the opportunity to expand our offerings to a broader range of solutions.

This, in addition to new external device interfaces, is paving the way for higher-performance storage solutions. At NAB this year, we demonstrated Thunderbolt 3 and USB-C solutions with higher-performance storage media and network connectivity. We are currently shipping the USB solutions, and the technology demos we gave provide a glimpse into future solutions. In addition, we’re always on the lookout for new form factors and technologies that will make our storage solutions faster, more powerful, more reliable and more affordable.

What kind of connections do your drives have, and if it’s Thunderbolt 2 or Thunderbolt 3, can they be daisy chained?
When we look at interfaces, as noted above, there’s USB Type-C for the consumer market, as well as Thunderbolt and 10Gb Ethernet for the professional market.

As far as daisy-chaining, yes. Thunderbolt is a very flexible interface, supporting up to six devices in a daisy chain on a single port. Thunderbolt 3 is a very new interface that is gaining momentum, one that not only supports extremely high data transfer speeds (up to 2.7GB/s) but also drives up to two 4K displays. We should also not forget that there are still more than 200 million devices supporting Thunderbolt 1 and 2 connections.

LACIE’S GASPARD PLANTROU
How do your solutions work with clients’ existing storage? And who are your typical M&E users?
With M&E workflows, it’s rare that users work with a single machine and storage solution. From capture to edit to final delivery, our customers’ data interacts with multiple machines, storage solutions and users. Many of our storage solutions feature multiple interfaces such as Thunderbolt, USB 3.0 or FireWire so they can be easily integrated into existing workflows and work seamlessly across the entire video production process.

Our Rugged features Thunderbolt and USB 3.0, which means it works with virtually any standard computer or storage scenario on the market. Plus, it’s shock-, dust- and moisture-resistant, allowing it to handle being passed around on set or shipped to a client. LaCie’s typical M&E users are mid-size post production studios and independent filmmakers and editors looking for RAID solutions.

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
The new LaCie 12big Thunderbolt 3 pushes up to 2600MB/s and can handle three streams of 4K 10-bit DPX at 24fps (assuming one stream is 864MB/s). In addition, the solution offers up to 96TB of capacity to edit and store large amounts of 4K footage.

What steps have you taken to work with technologies outside of your own?
With video file sizes growing exponentially, it is more important than ever for us to deliver fast, high-capacity solutions. Recent examples of this include bringing the latest technologies from Intel — Thunderbolt 3 — into our line. We work with engineers from our parent company, Seagate, to incorporate the latest enterprise-class core technology for speed and reliability. Plus, we always ensure our solutions are certified to work seamlessly on Mac and Windows.

NETAPP’S JASON DANIELSON
What are the top three requests that you get from post users?
As a storage vendor, the first three requests we’re likely to get are around application integration, bandwidth and cost. Our storage systems support well over 100 different applications across a variety of workflows (VFX, HD broadcast post, uncompressed 4K finishing) in post houses of all sizes, from boutiques in Paris to behemoths in Hollywood.

Bandwidth is not an issue, but the bandwidth per dollar is always top of mind for post. So working with the post house to design a solution with suitable bandwidth at an acceptable price point is what we spend much of our time doing.

How should post users decide between SAN, NAS and object storage?
The decision to go with SAN versus NAS depends on the facility’s existing connectivity to the workstations. Our E-Series storage arrays support quite a few file systems. For SAN, our systems integrators usually use Quantum StorNext, but we also see Scale Logic’s HyperFS and Tiger Technology’s metaSAN being used.

For NAS, our systems integrators tend to use EditShare XStream EFS and IBM GPFS. While there are rumblings of a transition away from Fibre Channel-based SAN to Ethernet-based NAS, there are complexities and costs associated with tweaking a 10GigE client network.

The object storage question is a bit more nuanced. Object stores have been so heavily promoted by storage vendors that there are many misconceptions about their value. For most of the post houses we talk to, object storage isn’t the answer today. While we have one of the most feature-rich and mature object stores out there, even we say that object stores aren’t for everyone. The questions we ask are:

1) Do you have 10 million files or more?
2) Do you store over a petabyte?
3) Do you have a need for long-term retention?
4) Does your infrastructure need to support multisite production?

If the answer to any of those questions is “yes,” then you should at least investigate object storage. A high-end boutique with six editors is probably not in this realm. It is true that an object store represents a slightly lower-cost bucket for an active archive (content repository), but it comes at a workflow cost of introducing a second tier to the architecture, which needs to be managed by either archive management or media asset management software. Unless such a software system is already in place, then the cost of adding one will drive up the complexity and cost of the implementation. I don’t mean to sound negative about object stores. I am not. I think object stores will play a major role in active-archive content storage in the future. They are just not a good option for a high-bandwidth production tier today or, possibly, ever.
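
As a compact way to capture that screen, here is a hypothetical helper; the function name and thresholds simply restate the four questions above and are not NetApp’s sizing methodology.

```python
# Hypothetical screen based on the four questions above -- illustrative,
# not NetApp's sizing methodology.
def should_evaluate_object_storage(files: int, capacity_pb: float,
                                   long_term_retention: bool,
                                   multisite: bool) -> bool:
    # A "yes" to any one question is the stated threshold for taking
    # a serious look at object storage.
    return (files >= 10_000_000 or capacity_pb > 1.0
            or long_term_retention or multisite)

# A high-end boutique with six editors likely answers "no" across the board.
print(should_evaluate_object_storage(
    files=2_000_000, capacity_pb=0.3,
    long_term_retention=False, multisite=False))   # False
```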

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
In order to answer that question, we would ask the post house: “How many streams do you want to play back?” Let’s say we’re talking about 4K (4096×2160, as opposed to the several other resolutions that get called 4K). At 4:4:4, that works out to 33MB per frame, or 792MB per second at 24fps. We would typically use flash (SSDs) for 4K playback. Our 2RU 24-SSD storage array, the EF560, can do a little over 9GB per second. That amounts to 11 streams.

But that is only half the answer. This storage array is usually deployed under a parallel file system, which will aggregate the bandwidth of several arrays for shared editing purposes. A larger installation might have eight storage arrays — each with 18 SSDs (to balance bandwidth and cost) — and provide sustained video playback for 70 streams.
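
As a sanity check, the arithmetic above is easy to reproduce; in this minimal Python sketch the per-array bandwidth figures are the examples quoted in the answer, not guaranteed specifications.

```python
# Reproduces the stream math above. Array bandwidths are the article's
# examples (EF560 ~9GB/s; eight 18-SSD arrays ~56GB/s), not guarantees.
def bytes_per_frame(width=4096, height=2160, bits_per_pixel=30):
    # 10-bit 4:4:4 RGB is 30 bits per pixel before packing.
    return width * height * bits_per_pixel / 8

def streams(bandwidth_mb_s, fps=24):
    per_stream = bytes_per_frame() * fps / 1e6  # ~796MB/s (text rounds to 792)
    return int(bandwidth_mb_s // per_stream)

print(streams(9_000))      # one EF560 -> 11 streams
print(streams(8 * 7_000))  # eight arrays under one file system -> 70 streams
```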

What do you expect to see as the next trend relating to storage? What’s going to push storage systems even further?
The introduction of larger, more cost-effective flash drives (SSDs) will have a drastic effect on storage architectures over the next three years. We are now shipping 15TB SSDs. That is a petabyte of extremely fast storage in six rack units. We think the future is flash production tiers in front of object-store active-archive tiers. This will eliminate the need for archive managers and tape libraries in most environments.

HARMONIC’S ANDY WARMAN
What are the top three requests that you hear from your post clients or potential post clients?
The most common request is for sustained performance. This is an important aspect since you do not want performance to degrade due to the number of concurrent users, the quantity of content, how full the storage is, or the amount of time the storage has been in service.

Another aspect related to this is the ability to support high-write and -read bandwidth. Being able to offer equal amounts of read and write bandwidth can be very beneficial for editing and transcode workflows, versus solutions that have high-read bandwidth, but relatively low-write performance. Customers are also looking for good value for money. Generally, we would point to value coming from the aforementioned performance as well as cost-effective expansion.

You guys have a “media-aware” solution for post. Can you explain what that is and why you opted to go this way?
Media-aware storage refers to the ability to store different media types in the most effective manner for the file system. A MediaGrid storage system supports multiple block sizes, rather than a single block size for all media types. In this way, video assets, graphics, audio and project files can use different block sizes that make reading and writing data more efficient. This type of file I/O “tuning” provides additional performance gains for media access, meaning that video could use, say, 2MB blocks; graphics and audio 512KB; and projects and other files 128KB. Not only can different block sizes be used by different media types, but they are also configurable, so UHD files could, say, use 8MB block sizes.
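
To make the idea concrete, here is a hypothetical policy table in Python; the sizes mirror the examples quoted above, but the names and the lookup function are illustrative, not MediaGrid’s actual configuration interface.

```python
# Illustrative media-aware block-size policy -- not MediaGrid's actual API.
# Sizes mirror the examples in the text above.
BLOCK_SIZE = {
    "video":     2 * 1024 * 1024,   # large sequential reads/writes
    "video_uhd": 8 * 1024 * 1024,   # configurable upward for UHD
    "graphics":  512 * 1024,
    "audio":     512 * 1024,
    "project":   128 * 1024,        # small, more random I/O
}

def block_size_for(asset_type):
    # Fall back to a mid-size block for unknown asset types.
    return BLOCK_SIZE.get(asset_type, 512 * 1024)

print(block_size_for("video"))   # 2097152 bytes (2MB)
```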

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
The storage has no practical capacity or bandwidth limit, so we can build a storage system that suits the customer’s needs. Sizing a system becomes a case of balancing bandwidth and storage capacity by selecting the appropriate number of drives and drive size(s) to match specific needs. The system is built on SAS drives, with multiple fully redundant 10 Gigabit Ethernet connections to client workstations and attached devices, and redundant 12Gb SAS interconnects between storage expansion nodes. This means we have high-speed connectivity within the storage as well as out to clients.
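
That balancing act can be sketched in a few lines; the per-drive throughput and capacity below are placeholder assumptions, not Harmonic figures.

```python
# Hypothetical sizing helper: take enough drives to meet whichever target
# (bandwidth or capacity) demands more. Per-drive figures are assumptions.
import math

def drives_needed(target_mb_s, target_tb, mb_s_per_drive=150, tb_per_drive=8):
    for_bandwidth = math.ceil(target_mb_s / mb_s_per_drive)
    for_capacity = math.ceil(target_tb / tb_per_drive)
    return max(for_bandwidth, for_capacity)

print(drives_needed(target_mb_s=4000, target_tb=500))   # -> 63 drives
```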

As needs change, the system can be expanded online with all users maintaining full access. Bandwidth scales in a linear fashion, and because there is a single namespace in MediaGrid, the entire storage system can be treated as a single drive, or divided up with user-level rights granted to folders within the file system.

Performance is further enhanced by the use of parallel access to data throughout the storage system. The file system provides a map to where all media is stored or is to be stored on disk. Data is strategically placed across the whole storage system to provide the best throughput. Clients simultaneously read and write data through the 10 Gigabit network to all network attached storage nodes rather than data being funneled through a single node or data connection. The result is that performance is the same whether the storage system is 5% or 95% full.
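
A toy placement sketch shows why throughput stays flat: if a file’s chunks are spread across every node, clients pull from all nodes in parallel rather than funneling through one. The round-robin layout below is a simplification for illustration, not Harmonic’s actual placement algorithm.

```python
# Simplified illustration of parallel placement -- not Harmonic's algorithm.
NODES = ["node1", "node2", "node3", "node4"]

def place_chunks(num_chunks):
    # Spread chunks across all nodes so reads and writes hit every node.
    return {i: NODES[i % len(NODES)] for i in range(num_chunks)}

print(place_chunks(6))
# {0: 'node1', 1: 'node2', 2: 'node3', 3: 'node4', 4: 'node1', 5: 'node2'}
```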

What do you expect to see as the next trend relating to storage? What’s going to push storage systems even further?
The advent of UHD has driven demands on storage further, as codec data rates, and therefore throughput and storage requirements, have increased significantly. Faster and more readily accessible storage will continue to grow in importance as delivery platforms expand and expectations for storage-system throughput continue to rise. We will use whatever performance and capacity are available, so offering more of both is inevitable to feed our needs for creativity and storytelling.

JMR’s STEVE KATZ
What are the top three storage-related requests you get from post users?
The most requested is ease of installation and operation. The JMR Share is delivered with euroNAS OS on mirrored SSD boot disks, with enough processing power and memory to support efficient, high-volume workflows, and a perpetual license to support the amount of storage requested, from a 20TB minimum to the “unlimited” maximum. It’s intuitive to use and comfortable for anyone familiar with popular browsers.

Also high on the list are compatibility and interoperability with clients using various hardware, operating systems and applications.

How many data streams of 4K 10-bit DPX at 24fps can your storage provide?
This can all be calculated from usable bandwidth and data transfer rates which, as with any networked storage, can be limited by the network itself. For those using a good 10GbE switch, the network limits data rates to 1250MB/s maximum, which can support more than 270 streams of DNxHD 36 but only one stream of 4K 10-bit “film” resolution. Our product can support ~1800MB/s in a single 16-disk appliance, but without a very robust network this can’t be achieved.
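
Those ceilings are easy to verify; in this quick sketch the per-stream rates are the figures quoted in this article (a 36Mb/s DNxHD 36 stream, roughly 864MB/s for 4K 10-bit DPX), treated as assumptions rather than measurements.

```python
# The network is the ceiling: streams per 10GbE link using the article's
# per-stream rates (assumed, not measured).
LINK_MB_S = 1250                  # 10GbE line rate in MB/s

STREAM_MB_S = {
    "DNxHD 36": 36 / 8,           # 36Mb/s codec -> 4.5MB/s
    "4K 10-bit DPX": 864,
}

for codec, rate in STREAM_MB_S.items():
    print(f"{codec}: {int(LINK_MB_S // rate)} stream(s)")
# DNxHD 36: 277 stream(s)  ("more than 270")
# 4K 10-bit DPX: 1 stream(s)
```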

When comparing shared storage product choices, what are the advantages of NAS over SAN, for example?
SAN actually has some advantages over NAS, but unless the user has Fibre Channel hardware installed, it might be a very costly option. The real advantage of NAS is that everyone already has an Ethernet network available that may be sufficient for video file server use. If not, it may be upgraded fairly inexpensively.

JMR Share comes standard with both GbE and 10GbE networking capability right out of the box, and has performance that will saturate 10GbE links; high-availability active/active failover is available, as is SAN Cluster (an extra-cost option). The SAN Cluster is equipped with specialized SAN software as well as 8Gb or 16Gb Fibre Channel host adapters, so it’s ready to go.

What do you expect to see as the next trend relating to storage? The thing that’s going to push storage systems even further?
Faster and lower cost, always! Going to higher-speed network adapters, 12Gb SAS internal storage and even SSDs or NVMe drives, it seems the sky is the limit — or, actually, the networking is the limit. We already offer SAS SSDs in the Share as an option, and our higher-end dual-processor/dual-controller Share models (at a bit higher cost) using NVMe drives can provide internal data transfer speeds exceeding what any network can support (even multiple 40Gb InfiniBand links). We are seeing a bit of a trend toward SSDs now that higher-capacity models with reasonable endurance are becoming available at more reasonable cost.

The State of Storage

Significant trends are afoot in media and entertainment storage.

By Tom Coughlin

Digital storage plays a significant role in the media and entertainment industry, and our specific demands are often very different from typical IT storage. We are dealing with performance requirements of realtime video in capture, editing and post, as well as distribution. On the other hand, the ever-growing archive of long-tail digital content and digitized historical analog content is swelling the demand for archives (both cold and warm) using tape, optical discs and hard drive arrays.

My company, Coughlin Associates, has conducted surveys of digital storage use by media and entertainment professionals since 2009. These results are used in our annual Digital Storage in Media and Entertainment Report. This article presents results from the 2016 survey and some material from the 222-page report to discuss the status of digital storage for professional media and entertainment.

Content Creation and Capture
Pro video cameras are undergoing rapid evolution, driven by higher-resolution content as well as multi-camera content capture, including stereoscopic and virtual reality. In addition, the physical storage media for professional cameras is evolving rapidly as film and digital magnetic tape are displaced by the convenient, fast file access of hard disk drives and optical discs and the ruggedness of flash-based solid-state storage.

The table below compares the results from the 2009, 2010, 2012, 2013, 2014 and 2015 surveys with those from 2016. Flash memory is the clear leader in pro video camera media, increasing from 19% in 2009 to 66% in 2015 and then down to 54% in 2016, while magnetic tape shows a consistent decline over the same period.

Optical disc use between 2009 and 2016 bounced around between 7% and 17%. Film shows a general decline, from 15% usage in 2009 to 2% in 2016. The decline in film use follows the move toward completely digital workflows.

Note that about 60% of survey participants said that they used external storage devices to capture content from their cameras in 2016 (perhaps this is why the HDD percentages are so high). In 2016, 83% said that over 80% of their content is created in a digital format.

In 2016, 93.2% of the survey respondents said they reuse their recording media (compared to 89.9% in 2015, 93.3% in 2014, 84.5% in 2013, 86% in 2012, 79% in 2010 and 75% in 2009). In 2016, 75% of respondents said they archive their camera recording media (compared to 73.6% in 2015, 74.2% in 2014, 81.4% in 2013, 85% in 2012 and 77% in 2010).

Archiving the original recording media may be a practice in decline, especially with expensive reusable media such as flash memory cards. Recording digitally on tape, hard disk drives or flash allows the media to be reused.

Post Production
The size and amount of content have put strains on post network bandwidth and storage, including for editing and other important operations. As much of this work may take place in smaller facilities, these companies may be doing much of their work on direct-attached storage (DAS) devices, and they may share or archive this media in the cloud to avoid the infrastructure costs of running a data center.

The graph below shows that, among the 2016 survey participants, use of shared network storage (such as SAN or NAS) generally increases, and use of DAS decreases, as the number of people working in a post facility grows. The DAS used in larger facilities may be different from that used in smaller facilities.

DAS vs. shared storage by number of people in a post facility.

When participants were asked about their use of direct attached and network storage in digital editing and post, the survey showed the following summary statistics in 2016 (compared to earlier surveys):

– 74.5% had DAS
– 89.8% of these had more than 1TB of DAS
– 10TB to 50TB was the most popular DAS size (27.5%)
– 17.4% of these had more than 50TB of DAS storage
– 2.9% had more than 500TB of DAS storage
– 68.1% had NAS or SAN
– 57.4% had 50TB or more of network storage in 2016
– About 15% had more than 500TB of NAS/SAN storage in 2016
– Many survey participants had considerable storage capacities in both DAS and NAS/SAN.

We asked whether survey participants used cloud-based storage for editing and post. In 2016, 23.0% of responding participants said yes, and 20.9% of respondents said that they had 1TB or more of their storage capacity in the cloud.

Content Distribution
Distribution of professional video content happens through many channels. It can use physical media to get content to digital cinemas or to consumers, or it can be done electronically using broadcast, cable or satellite transmission, the Internet or mobile phone networks.

The table below gives responses for the percentage of physical media used by the survey respondents for content distribution in 2016, 2015, 2014, 2013, 2012 and 2010. Note that these are averages across the survey population of the percentage reported for each physical medium, so they do not (and should not be expected to) add up to 100%. Digital tape, DVD discs, HDDs and flash memory are the most popular distribution formats.

Average percentage content on physical media for professional content distribution.

Following are survey observations for electronic content distribution, such as video on demand.

– The average number of hours on a central content delivery system was 2,174 hours in 2016.
– There was an average of 427 hours ingested monthly in 2016.
– In 2016, 38% of respondents had more than 5% of their content on edge servers.
– About 31% used flash memory on their edge servers in 2016.

Archiving and Preservation
Today, most new entertainment and media content is born digital, so it is natural that this content should be preserved in digital form. This requirement places new demands on format preservation for long-term digital archives as well as management and systematic format refreshes during the expected life of a digital archive.

In addition, the cost of analog content digitization and preservation in a digital format has gone down considerably, and many digitization projects are proceeding apace. The growth of digital content archiving will swell the amount of content available for repurposing and long-tail distribution. It will also swell the amount of storage and storage facilities required to store these long-term professional content archives.

Following are some observations from our 2016 survey on trends in digital archiving and content preservation.

– 41% had less than 2,000 hours of content in a long-term archive
– 56.9% archived all the content captured from their cameras
– 54.0% archived copies of content in all of their distribution formats
– 35.9% digitally archived all content captured from their dailies
– 31.3% digitally archived all content captured from rough cuts
– 36.5% digitally archived all content captured from their intermediates
– 50.9% of the respondents said that their annual archive growth rate was less than 6% in 2016
– About 28.6% had less than 2,000 hours of unconverted analog content
– 16.7% of participants had over 5,000 hours of unconverted analog content
– About 52.5% of the survey respondents have an annual analog conversion rate of 2% or less
– The average rate of conversion was about 3.4% in 2016

Professional media and entertainment content was traditionally archived on film or analog videotape. Today, the options available for archive media to store digital content depend upon the preferences and existing infrastructure of digital archive facilities. The figure below gives the percentage distribution of archive media used by the survey participants.

Percentage of digital long-term archives on various media

Some other observations from the archive and preservation section of the survey:

– About 42.6% never update their digital archives.
– About 76.2% used different storage for archiving and working storage.
– About 49.2% copied and replaced their digital long-term archives every 10 years or less.
– 38.1% said they would use a private or public cloud for archiving in 2016.

Conclusions
Higher resolution, higher frame rates, higher dynamic range and stereoscopic and virtual reality video are creating larger content files. This is driving the need for high-performance storage to work on this content and to provide fast delivery, which could push more creative work onto solid-state storage.

At the same time, cost-effective storage and management of completed work is driving the increased use of hard disk drives, magnetic tape and even optical media for low-cost storage.

The price of storing content in the cloud has gone down so much that there are magnetic tape-based cloud storage offerings that are less expensive than building one’s own storage data center, at least for small- and moderate-sized facilities.

This trend is expected to grow the use of cloud storage in media and entertainment, especially for archiving, as shown in the figure below.

Growth of cloud storage in media and entertainment.


Dr. Tom Coughlin, president of Coughlin & Associates, is a storage analyst and consultant with over 30 years in the data storage industry. He is the founder and organizer of the Annual Storage Visions Conference as well as the Creative Storage Conference.