
Quantum’s StorNext 6 targets high-res, scalable, global workflows

The industry’s ongoing shift to higher-resolution formats, its use of more cameras to capture footage and its embrace of additional distribution formats and platforms are putting pressure on storage infrastructure. For content creators and owners to take full advantage of their content, storage must not only deliver scalable performance and capacity but also ensure that media assets remain readily available to users and workflow applications. Quantum’s new StorNext 6 is engineered to address these requirements.

StorNext 6 will begin shipping with all newly purchased Xcellis and StorNext M-Series offerings, as well as Artico archive appliances, in early summer. It will be available at no additional cost for StorNext 5 users under current support contracts.

Leveraging its extensive real-world 4K testing and a series of 4K reference architectures developed from test data, Quantum’s StorNext platform provides scalable storage that delivers high performance using less hardware than competing systems. StorNext 6 offers a new quality of service (QoS) feature that empowers facilities to further tune and optimize performance across all client workstations, and on a machine-by-machine basis, in a shared storage environment.

Using QoS to specify bandwidth allocation to individual workstations, a facility can guarantee that more demanding tasks, such as 4K playback or color correction, get the bandwidth they need to maintain the highest video quality. At the same time, QoS allows the facility to set parameters ensuring that less timely or demanding tasks do not consume an unnecessary amount of bandwidth. As a result, StorNext 6 users can take on work with higher-resolution content and easily optimize their storage resources to accommodate the high-performance demands of such projects.
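
To make the QoS idea concrete, here is a minimal sketch of the kind of per-workstation bandwidth budgeting it enables; the client names, bandwidth figures and aggregate throughput below are hypothetical, and this is not StorNext configuration syntax.

```python
# Hypothetical illustration of per-workstation bandwidth budgeting, the kind of
# planning StorNext 6's QoS feature enables. Names and figures are made up;
# this is not StorNext configuration syntax.

SAN_AGGREGATE_MB_S = 4000  # assumed total throughput of the shared storage

# Per-client reservations: demanding tasks get guaranteed headroom,
# lighter tasks are capped so they cannot starve the rest.
reservations_mb_s = {
    "color-suite-1 (4K playback)": 1200,
    "color-suite-2 (4K playback)": 1200,
    "edit-bay-1 (offline editorial)": 200,
    "edit-bay-2 (offline editorial)": 200,
    "render-node (background transcode)": 400,
}

total = sum(reservations_mb_s.values())
for client, bw in reservations_mb_s.items():
    print(f"{client}: {bw} MB/s guaranteed")
print(f"Reserved {total} MB/s of {SAN_AGGREGATE_MB_S} MB/s")
if total > SAN_AGGREGATE_MB_S:
    print("Over-subscribed: demanding clients may drop frames.")
else:
    print(f"Headroom remaining: {SAN_AGGREGATE_MB_S - total} MB/s")
```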

StorNext 6 includes a new feature called FlexSpace, which allows multiple instances of StorNext located anywhere in the world to share a single archive repository, so that geographically distributed teams can collaborate on the same content. Users at different sites can store files in the shared archive, as well as browse and pull data from the repository. Because the movement of content can be fully automated according to policies, all users have access to the content they need without having it expressly shipped to them.

Shared archive options include public cloud storage on Amazon Web Services (AWS), Microsoft Azure or Google Cloud via StorNext’s existing FlexTier capability, as well as private cloud storage based on Quantum’s Lattus object storage or, through FlexTier, third-party object storage such as NetApp StorageGRID, IBM Cleversafe and Scality Ring. In addition to simplifying collaborative work, FlexSpace also makes it easy for multinational companies to establish protected off-site content storage.

FlexSync, which is new to StorNext 6, provides a fast, simple and highly automated way to synchronize content between multiple StorNext systems. FlexSync supports one-to-one, one-to-many and many-to-one file replication scenarios and can be configured to operate at almost any level: specific files, specific folders or entire file systems. By leveraging enhancements in file system metadata monitoring, FlexSync recognizes changes instantly and can immediately begin reflecting those changes on another system. This approach avoids the need to lock the file systems to identify changes, reducing synchronization time from hours or days to minutes, or even seconds. Users can also set policies that automatically trigger copies of files so that they are available at multiple sites, enabling different teams to access content quickly and easily whenever it’s needed. In addition, by providing automatic replication across sites, FlexSync offers increased data protection.
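
As a rough illustration of what one-to-one, one-to-many and many-to-one replication policies look like in practice, the sketch below models them as a simple policy table and shows which copy jobs a detected change would trigger; the site names, paths and policy format are invented and are not FlexSync syntax.

```python
# Hypothetical sketch of replication policies in the spirit of FlexSync's
# one-to-one, one-to-many and many-to-one scenarios. Site names, paths and the
# policy format are invented for illustration; this is not FlexSync syntax.

# Each policy maps a (source site, watched path) to the destinations that
# should receive copies when a change is detected on the source.
policies = [
    {"source": ("nyc", "/stornext/projects/showA"),           # one-to-one
     "targets": [("la", "/stornext/projects/showA")]},
    {"source": ("nyc", "/stornext/dailies"),                   # one-to-many
     "targets": [("la", "/stornext/dailies"),
                 ("london", "/stornext/dailies")]},
    {"source": ("la", "/stornext/archive/inbox"),              # many-to-one
     "targets": [("nyc", "/stornext/archive")]},
    {"source": ("london", "/stornext/archive/inbox"),
     "targets": [("nyc", "/stornext/archive")]},
]

def sync_jobs_for_change(site, path):
    """Return the copy jobs triggered when a file changes under a watched path."""
    jobs = []
    for policy in policies:
        src_site, src_path = policy["source"]
        if site == src_site and path.startswith(src_path):
            jobs.extend((site, path, target) for target in policy["targets"])
    return jobs

# A change detected via file system metadata monitoring on the NYC system:
for job in sync_jobs_for_change("nyc", "/stornext/dailies/day_012/clip_0001.mov"):
    print(job)
```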

StorNext 6 also gives users greater control and selectivity in maximizing their use of storage on an ROI basis. When archive policies call for storage across disk, tape and the cloud, StorNext makes a copy for each. A new copy expiration feature enables users to set additional rules determining when individual copies are removed from a particular storage tier. This approach makes it simpler to keep data on the most appropriate and economical storage medium and, in turn, to free up space on more expensive storage. When one of several copies of a file is removed from storage, a complementary selectable retrieve function in StorNext 6 enables users to dictate which of the remaining copies is the first priority for retrieval. As a result, users can ensure that the file is retrieved from the most appropriate storage tier.
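
The sketch below illustrates the interplay of copy expiration and selectable retrieve described above; the tier names, age limits and retrieval order are invented for illustration and are not StorNext policy syntax.

```python
# Hypothetical sketch of copy expiration and retrieval priority across tiers,
# in the spirit of StorNext 6's copy expiration and selectable retrieve
# features. Tier names, ages and rules are invented for illustration.

# Where copies of one archived file currently live, with their age in days.
copies = {"disk": 45, "tape": 45, "cloud": 45}

# Tier-specific expiration rules (None means keep the copy indefinitely)
# and the preferred order to retrieve from when the file is requested.
expiration_days = {"disk": 30, "cloud": 365, "tape": None}
retrieve_priority = ["disk", "cloud", "tape"]

# Expire copies whose tier-specific age limit has passed, but never the last copy.
for tier, limit in expiration_days.items():
    if limit is not None and tier in copies and copies[tier] > limit and len(copies) > 1:
        print(f"Expiring copy on {tier} (older than {limit} days)")
        del copies[tier]

# Retrieve from the highest-priority tier that still holds a copy.
source = next(t for t in retrieve_priority if t in copies)
print(f"Remaining copies: {sorted(copies)}; retrieving from: {source}")
```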

StorNext 6 offers valuable new capabilities for those facilities that subscribe to Motion Picture Association of America (MPAA) rules for content auditing and tracking. The platform can now track changes in files and provide reports on who changed a file, when the changes were made, what was changed and whether and to where a file was moved. With this knowledge, a facility can see exactly how its team handled specific files and also provide its clients with details about how files were managed during production.

As facilities begin to move to 4K production, they need a storage system that can be expanded for both performance and capacity in a non-disruptive manner. StorNext 6 provides online stripe group management, allowing additional storage capacity to be added to existing stripe groups without taking the system offline or disrupting critical workflows.

Another enhancement in StorNext 6 allows StorNext Storage Manager to automate archives in an environment with Mac clients, effectively eliminating the lengthy retrieve process previously required to access an archived directory containing offline files, which can number in the hundreds of thousands or even millions.

Last Chance to Enter to Win an Amazon Echo… Take our Storage Survey Now!

If you’re working in post production, animation, VFX and/or VR/AR/360, please take our short survey and tell us what works (and what doesn’t work) for your day-to-day needs.

What do you need from a storage solution? Your opinion is important to us, so please complete the survey by Wednesday, March 8th.

We want to hear your thoughts… so click here to get started now!


Quantum shipping StorNext 5.4

Quantum has introduced StorNext 5.4, the latest release of their workflow storage platform, designed to bring efficiency and flexibility to media content management. StorNext 5.4 enhancements include the ability to integrate existing public cloud storage accounts and third-party object storage (private cloud) — including Amazon Web Services, Microsoft Azure, Google Cloud, NetApp StorageGRID, IBM Cleversafe and Scality Ring — as archive tiers in a StorNext-managed media environment. It also lets users deploy applications embedded within StorNext-powered Xcellis workflow storage appliances.

Quantum has also enhanced StorNext Storage Manager, offering automated, policy-based movement of content into and out of users’ existing public and private clouds while maintaining the visibility and access that StorNext provides. It offers seamless integration for public and private clouds within a StorNext-managed environment alongside primary disk and tape storage tiers, full user and application access to media stored in the cloud without additional hardware or software, and extended versioning across sites and the cloud.

By enabling applications to run inside its Xcellis Workflow Director, the new Dynamic Application Environment (DAE) capability in StorNext 5.4 allows users to leverage a converged storage architecture, reducing the time, cost and complexity of deploying and maintaining applications.

StorNext 5.4 is currently shipping with all newly purchased Xcellis, StorNext M-Series and StorNext Pro Solutions, as well as Artico archive appliances. It is available at no additional cost for StorNext 5 users under current support contracts.

Promise, Symply team up on Thunderbolt 3 RAID system

Storage solutions companies Promise Technology and Symply have launched Pegasus3 Symply Edition, the next generation of the Pegasus desktop RAID storage system. The new system combines 40Gb/s Thunderbolt 3 speed with Symply’s storage management suite.

According to both companies, Pegasus3 Symply Edition complements the new MacBook Pro — it’s optimized for performance and content protection. The Pegasus3 Symply Edition offers the speed needed for creative pros generating high-resolution video and rich media content, as well as the safety and security of full-featured RAID protection.

The intuitive Symply software suite allows for easy setup, optimization and management. The dual Thunderbolt 3 ports provide fast connectivity and the ability to connect up to six daisy-chained devices on a single Thunderbolt 3 port while adding new management tools and support from Symply.

“As the Symply solution family grows, Pegasus3 Symply Edition will continue to be an important part in the larger, shared creative workflows built around Promise and Symply solutions,” said Alex Grossman, president and CEO, Symply.

The Pegasus3 Symply Edition is available in three models — Pegasus R4, Pegasus R6 and Pegasus R8 — delivering four-, six- and eight-drive configurations of RAID storage, respectively. Each system is ready to go “out of the box” for Mac users with a 1m 40Gb/s Active Thunderbolt 3 cable for easy, high-speed connectivity.

Every Pegasus3 Symply Edition will include Symply’s Always-Up-to-Date Mac OS management app. iOS and Apple Watch apps to monitor your Pegasus3 Symply Edition system remotely are coming soon. The Symply Management suite will support most earlier Pegasus systems. The Pegasus3 Symply Edition includes a full three-year warranty, tech support and 24/7 media and creative user support worldwide.

The Pegasus3 Symply Edition lineup will be available on the Apple online store, at select Apple retail stores and at resellers.

IBC 2016: VR and 8K will drive M&E storage demand

By Tom Coughlin

While attending the 2016 IBC show, I noticed some interesting trends, cool demos and new offerings. For example, while flying drones were missing, VR goggles were everywhere; IBM was showing 8K video editing using flash memory and magnetic tape; the IBC itself featured a fully IP-based video studio showing the path to future media production using lower-cost commodity hardware with software management; and, it became clear that digital technology is driving new entertainment experiences and will dictate the next generation of content distribution, including the growing trend to OTT channels.

In general, IBC 2016 featured the move to higher resolution and more immersive content. On display throughout the show was 360-degree video for virtual reality, as well as 4K and 8K workflows. Virtual reality and 8K are driving new levels of performance and storage demand, and these are just some of the ways that media and entertainment pros are increasing the size of video files. Nokia’s Ozo was just one of several multi-camera content capture devices on display for 360-degree video.

Besides multi-camera capture technology and VR editing, the Future Tech Zone at IBC included even larger 360-degree video display spheres than at the 2015 event. These were from Puffer Fish (pictured right). The smaller-sized spherical display was touch-sensitive so you could move your hand across the surface and cause the display to move (sadly, I didn’t get to try the big sphere).

IBM had a demonstration of a 4K/8K video editing workflow using the IBM FlashSystem and IBM Enterprise tape storage technology, which was a collaboration between the IBM Tokyo Laboratory and IBM’s Storage Systems division. This work was done to support the move to 4K/8K broadcasts in Japan by 2018 via broadcast satellite, including delivery of 8K video streams of the 2020 Tokyo Olympic Games. The combination of flash memory storage for working content and tape for inactive content is referred to as FLAPE (flash and tAPE).

The graphic below shows a schematic of the 8K video workflow demonstration.

The argument for FLAPE appears to be that flash performance is needed for editing 8K content, while magnetic tape provides low-cost storage for the 8K content, which may require greater than 18TB for an hour of raw content (depending upon the sampling and frame rate). Note that magnetic tape is often used for archiving of video content, so this is a rather unusual application. The IBM demonstration, plus discussions with media and entertainment professionals at IBC, indicates that with the declining costs of flash memory and the performance demands of 8K, 8K workflows may finally drive increased demand for flash memory for post production.
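
The 18TB-per-hour figure is easy to sanity-check. The arithmetic below assumes 8K (7680×4320) at 10-bit 4:2:2 and 60 frames per second; higher bit depths, 4:4:4 sampling or higher frame rates push the total past 18TB.

```python
# Back-of-the-envelope check of raw 8K storage demand. Assumes 7680x4320,
# 10-bit 4:2:2 (about 20 bits per pixel) at 60fps; actual productions vary.
width, height = 7680, 4320
bits_per_pixel = 20          # 10-bit luma plus 10 bits of chroma per pixel (4:2:2)
fps = 60

bytes_per_frame = width * height * bits_per_pixel / 8
gb_per_second = bytes_per_frame * fps / 1e9
tb_per_hour = gb_per_second * 3600 / 1000

print(f"{bytes_per_frame / 1e6:.1f} MB per frame")
print(f"{gb_per_second:.2f} GB/s sustained")
print(f"{tb_per_hour:.1f} TB per hour of raw content")
# Roughly 17.9TB/hour at these settings; 12-bit or 4:4:4 capture exceeds 18TB.
```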

Avid was promoting their Nexis file system, the successor to ISIS. The company uses SSDs for metadata, but generally flash isn’t used for actual editing yet. They agreed that as flash costs drop, flash could find a role for higher resolution and richer media. Avid has embraced open source for their code and provides free APIs for their storage. The company sees a hybrid of on-site and cloud storage for many media and entertainment applications.

EditShare announced a significant update to its XStream EFS Shared Storage Platform (our main image). The update provides non-disruptive scaling to over 5PB with millions of assets in a single namespace. The system provides a distributed file system with multiple levels of hardware redundancy and reduced downtime. An EFS cluster can be configured with a mix of capacity and performance, with SSDs for high-data-rate content and SATA HDDs for cost-efficient, higher-capacity storage — 8TB HDDs have been qualified for the system. The latest release expands optimization support for file-per-frame media.

The IBC IP Interoperability Zone showed a complete IP-based studio (pictured right), created with the cooperation of AIMS and the IABM. The zone brings to life the work of the JT-NM (the Joint Task Force on Networked Media, a combined initiative of AMWA, EBU, SMPTE and VSF) and the AES on a common roadmap for IP interoperability. Central to the IBC Feature Area was a live production studio, based on the technologies of the JT-NM roadmap, which Belgian broadcaster VRT has been using daily on-air all summer as part of the LiveIP Project, a collaboration between VRT, the European Broadcasting Union (EBU) and LiveIP’s 12 technology partners.

Summing Up
IBC 2016 showed some clear trends toward more immersive, richer content, with numerous displays of 360-degree and VR content and many demonstrations of 4K and even 8K workflows. Clearly, the trend is for higher-capacity, higher-performance workflows and the storage systems that support them. This is leading to a gradual move to flash memory to support these workflows as the costs for flash go down. At the same time, the move to IP-based equipment will lead to lower-cost commodity hardware with software control.

Storage analyst Tom Coughlin is president of Coughlin Associates. He has over 30 years in the data storage industry and is the author of Digital Storage in Consumer Electronics: The Essential Guide. He also publishes the Digital Storage Technology Newsletter and the Digital Storage in Media and Entertainment Report.

Introducing a new section on our site: techToolbox

In our quest to bring even more information and resources to postPerspective, we have launched a new section called techToolbox — a repository of sorts, where you can find white papers, tutorials, videos and more from a variety of product makers.

To kick off our new section, we’re focusing our first techToolbox on storage. Of all the technologies required for today’s entertainment infrastructure, storage remains one of the most crucial. Without the ability to store data in an efficient and reliable fashion, everything breaks down.

In techToolbox: Storage, we highlight some of today’s key advances in storage technology, with each providing a technical breakdown of why they could be the solution to your needs.

Check it out here.

Archion’s new Omni Hybrid storage targets VR, VFX, animation

Archion Technologies has introduced the EditStor Omni Hybrid, a collaborative storage solution for virtual reality, visual effects, animation, motion graphics and post workflows.

In terms of performance, an Omni Hybrid with one expansion chassis offers 8000MB/second for 4K and other streaming demands, and over 600,000 IOPS for rendering and motion graphics. The product has been certified for Adobe After Effects, Autodesk’s Maya/Flame/Lustre, The Foundry’s Nuke and Modo, Assimilate Scratch and Blackmagic’s Resolve and Fusion. The Omni Hybrid is scalable up to 1.5 petabytes and can be expanded without shutdown.
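
For a rough sense of what that aggregate figure buys, the sketch below estimates how many uncompressed 4K streams 8000MB/second could sustain; the per-stream rate is an assumption (4K DCI 10-bit DPX at 24fps), not an Archion specification.

```python
# Rough stream-count estimate from aggregate bandwidth. The per-stream figure
# assumes 4K DCI (4096x2160) 10-bit RGB DPX at 24fps, packed into 32 bits per
# pixel -- an assumption, not an Archion specification.
aggregate_mb_s = 8000
bytes_per_frame = 4096 * 2160 * 4            # 10-bit RGB in 32-bit DPX packing
stream_mb_s = bytes_per_frame * 24 / 1e6     # roughly 850 MB/s per stream

print(f"~{stream_mb_s:.0f} MB/s per uncompressed 4K DPX stream at 24fps")
print(f"~{aggregate_mb_s / stream_mb_s:.1f} simultaneous streams at {aggregate_mb_s} MB/s")
```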

“We have Omni Hybrid in post production facilities that range from high-end TV and film to massive reality productions,” reports Archion CTO James Tucci. “They are all doing graphics and editorial work on one storage system.”

Grading & Compositing Storage: Northern Lights

Speed is key for artist Chris Hengeveld.

By Beth Marchant

For Flame artist Chris Hengeveld of Northern Lights in New York City, high-performance file-level storage and a Fibre Channel connection mean it’s never been easier for him to download original source footage and share reference files with editorial on another floor. But Hengeveld still does 80 percent of his work the old-fashioned way: off hand-delivered drives that come in with raw footage from production.

Chris Hengeveld

The bicoastal editorial and finishing facility Northern Lights — parent company to motion graphics house Mr. Wonderful, the audio facility SuperExploder and production boutique Bodega — has an enviably symbiotic relationship with its various divisions. “We’re a small company but can go where we need to go,” says colorist/compositor Hengeveld. “We also help each other out. I do a lot of compositing, and Mr. Wonderful might be able to help me out or an assistant editor here might help me with After Effects work. There’s a lot of spillover between the companies, and I think that’s why we stay busy.”

Hengeveld, who has been with Northern Lights for nine years, uses Flame Premium, Autodesk’s visual effects finishing bundle of Flame and Flare with grading software Lustre. “It lets me do everything from final color work, VFX and compositing to plain-old finishing to get it out of the box and onto the air,” he says. With Northern Lights’ TV-centric work now including a growing cache of Web content, Hengeveld must often grade and finish in parallel. “No matter how you send it out, chances are what you’ve done is going to make it to the Web in some way. We make sure that what we make look good on TV also looks good on the Web. It’s often just two different outputs. What looks good on broadcast you often have to goose a bit to get it to look good on the Web. Also, the audio specs are slightly different.”

Hengeveld provided compositing and color on this spot for Speedo.

Editorial workflows typically begin on the floor above Hengeveld in Avid, “and an increasing number, as time goes by, in Adobe Premiere,” he says. Editors are connected to media through a TerraBlock shared storage system from Facilis. “Each room works off a partition from the TerraBlock, though typically with files transcoded from the original footage,” he says. “There’s very little that gets translated from them to me, in terms of clip-based material. But we do have an Aurora RAID from Rorke (now Scale Logic) off which we run a HyperFS SAN — a very high-performance, file-level storage area network — that connects to all the rooms and lets us share material very easily.”

The Avids in editorial at Northern Lights are connected by Gigabit Ethernet, but Hengeveld’s room is connected by Fibre. “I get very fast downloading of whatever I need. That system includes Mr. Wonderful, too, so we can share what we need to, when we need to. But I don’t really share much of the Avid work except for reference files.” For that, he goes back to raw camera footage. “I’d say about 80 percent of the time, I’m pulling that raw shoot material off of G-Technology drives. It’s still sneaker-net on getting those source drives, and I don’t think that’s ever going to change,” he says. “I sometimes get 6TB of footage in for certain jobs and you’re not going to copy that all to a centrally located storage, especially when you’ll end up using about a hundredth of that material.”

The source drives are typically dupes from the production company, which more often than not is sister company Bodega. “These drives are not made for permanent storage,” he says. “These are transitional drives. But if you’re storing stuff that you want to access in five to six years, it’s really got to go to LTO or some other system.” It’s another reason he’s so committed to Flame and Lustre, he says. Both archive every project locally with its complete media, which can then be easily dropped onto an LTO for safe long-term storage.

Time or money constraints can shift this basic workflow for Hengeveld, who sometimes receives a piece of a project from an editor that has been stripped of its color correction. “In that case, instead of loading in the raw material, I would load in the 15- or 30-second clip that they’ve created and work off of that. The downside with that is if the clip was shot with an adjustable format camera like a Red or Arri RAW, I lose that control. But at least, if they shoot it in Log-C, I still have the ability to have material that has a lot of latitude to work with. It’s not desirable, but for better stuff I almost always go back to the original source material and do a conform. But you sometimes are forced to make concessions, depending on how much time or budget the client has.”

A recent spot for IZod, with color by Hengeveld.

Those same constraints, paired with advances in technology, also mean far fewer in-person client meetings. “So much of this stuff is being evaluated on their computer after I’ve done a grade or composite on it,” he says. “I guess they feel more trust with the companies they’re working with. And let’s be honest: when you get into these very detailed composites, it can be like watching paint dry. Yet, many times when I’m grading, I love having a client here because I think the sum of two is always greater than one. I enjoy the interaction. I learn something and I get to know my client better, too. I find out more about their subjectivity and what they like. There’s a lot to be said for it.”

Hengeveld also knows that his clients can often be more efficient at their own offices, especially when handling multiple projects at once, influencing their preferences for virtual meetings. “That’s the reality. There’s good and bad about that trade off. But sometimes, nothing beats an in-person session.”

Our main image is from NBC’s Rokerthon.

Storage Workflows for 4K and Beyond

Technicolor-Postworks and Deluxe Creative Services share their stories.

By Beth Marchant

Once upon a time, an editorial shop was a sneaker-net away from the other islands in the pipeline archipelago. That changed when the last phases of the digital revolution set many traditional editorial facilities into swift expansion mode to include more post production services under one roof.

The consolidating business environment in the post industry of the past several years then brought more of those expanded, overlapping divisions together. That’s a lot for any network to handle, let alone one containing some of the highest quality and most data-dense sound and pictures being created today. The networked storage systems connecting them all must be robust, efficient and realtime without fail, but also capable of expanding and contracting with the fluctuations of client requests, job sizes, acquisitions and, of course, evolving technology.

There’s a “relief valve” in the cloud and object storage, say facility CTOs minding the flow, but it’s still a delicate balance between local pooled and tiered storage and iron-clad cloud-based networks their clients will trust.

Technicolor-PostWorks
Joe Beirne, CTO of Technicolor-PostWorks New York, is probably as familiar as one can be with complex nonlinear editorial workflows. A user of Avid’s earliest NLEs, an early adopter of networked editing and an immersive interactive filmmaker who experimented early with bluescreen footage, Beirne began his career as a technical advisor and producer for high-profile mixed-format feature documentaries, including Michael Moore’s Fahrenheit 9/11 and the last film in Godfrey Reggio’s KOYAANISQATSI trilogy.

Joe Beirne

In his 11 years as a technology strategist at Technicolor-PostWorks New York, Beirne has also become fluent in evolving color, DI and audio workflows for clients such as HBO, Lionsgate, Discovery and Amazon Studios. CTO since 2011, when PostWorks NY acquired the East Coast Technicolor facility and the color science that came with it, he now oversees the increasingly complicated ecosystem that moves and stores vast amounts of high-resolution footage and data while simultaneously holding those separate and variously intersecting workflows together.

As the first post facility in New York to handle petabyte levels of editorial-based storage, Technicolor-PostWorks learned early how to manage the data explosion unleashed by digital cameras and NLEs. “That’s not because we had a petabyte SAN or NAS or near-line storage,” explains Beirne. “But we had literally 25 to 30 Avid Unity systems that were all in aggregate at once. We had a lot of storage spread out over the campus of buildings that we ran on the traditional PostWorks editorial side of the business.”

The TV finishing and DI business that developed at PostWorks in 2005, when Beirne joined the company (he was previously a client), eventually necessitated a different route. “As we’ve grown, we’ve expanded out to tiered storage, as everyone is doing, and also to the cloud,” he says. “Like we’ve done with our creative platforms, we have channeled our different storage systems and subsystems to meet specific needs. But they all have a very promiscuous relationship with each other!”

TPW’s high-performance storage in its production network is a combination of local or semi-locally attached near-line storage tethered by several Quantum StorNext SANs, all of it air-gapped — or physically segregated — from the public Internet. “We’ve got multiple SANs in the main Technicolor mothership on Leroy Street with multiple metadata controllers,” says Beirne. “We’ve also got some client-specific storage, so we have a SAN that can be dedicated to a particular account. We did that for a particular client who has very restrictive policies about shared storage.”

TPW’s editorial media, for the most part, resides in Avid’s ISIS system and is in the process of transitioning to its software-defined replacement, Nexis. “We have hundreds of Avids, a few Adobe and even some Final Cut systems connected to that collection of Nexis and ISIS and Unity systems,” he says. “We’re currently testing the Nexis pipeline for our needs but, in general, we’re going to keep using this kind of storage for the foreseeable future. We have multiple storage servers that serve that part of our business.”

Beirne says most every project the facility touches is archived to LTO tape. “We have a little bit of disk-to-tape archiving going on for the same reasons everybody else does,” he adds. “And some SAN volume hot spots that are all SSD (solid state drives) or a hybrid.” The facility is also in the process of improving the bandwidth of its overall switching fabric, both on the Fibre Channel side and on the Ethernet side. “That means we’re moving to 32Gb and multiple 16Gb links,” he says. “We’re also exploring a 40Gb Ethernet backbone.”

Technicolor-PostWorks’ 4K theater at their Leroy Street location.

This backbone, he adds, carries an exponential amount of data every day. “Now we have what are like two nested networks of storage at a lot of the artist workstations,” he explains. “That’s a complicating feature. It’s this big, kind of octopus, actually. Scratch that: it’s like two octopi on top of one another. That’s not even mentioning the baseband LAN network that interweaves this whole thing. They, of course, are now getting intermixed because we are also doing IT-based switching. The entire, complex ecosystem is evolving and everything that interacts with it is evolving right along with it.”

The cloud is providing some relief and handles multiple types of storage workflows across TPW’s various business units. “Different flavors of the commercial cloud, as well as our own private cloud, handle those different pools of storage outside our premises,” Beirne says. “We’re collaborating right now with an international account in another territory and we’re touching their storage envelope through the Azure cloud (Microsoft’s enterprise-grade cloud platform). Our Azure cloud and theirs touch and we push data from that storage back and forth between us. That particular collaboration happened because we both had an Azure instance, and those kinds of server-to-server transactions that occur entirely in the cloud work very well. We also had a relationship with one of the studios in which we made a similar connection through Amazon’s S3 cloud.”

Given the trepidations most studios still have about the cloud, Beirne admits there will always be some initial, instinctive mistrust from both clients and staff when you start moving any content away from computers that are not your own and you don’t control. “What made that first cloud solution work, and this is kind of goofy, is we used Aspera to move the data, even though it was between adjacent racks. But we took advantage of the high-bandwidth backbone to do it efficiently.”

Both TPW in New York and Technicolor in Los Angeles have since leveraged the cloud aggressively. “We have our own cloud that we built, and big Technicolor has a very substantial purpose-built cloud, as well as Technicolor Pulse, their new storage-related production service in the cloud. They also use object storage and have some even newer technology that will be launching shortly.”

The caveat to moving any storage-related workflow into the cloud is thorough and continual testing, says Beirne. “Do I have more concern for my clients’ media in the cloud than I do when sending my own tax forms electronically? Yea, I probably do,” he says. “It’s a very, very high threshold that we need to pass. But that said, there’s quite a bit of low-impact support stuff that we can do on the cloud. Review and approval stuff has been happening in the cloud for some time.” As a result, the facility has seen an increase, like everyone else, in virtual client sessions, like live color sessions and live mix sessions from city to city or continent to continent. “To do that, we usually have a closed circuit that we open between two facilities and have calibrated displays on either end. And, we also use PIX and other normal dailies systems.”

“How we process and push this media around ultimately defines our business,” he concludes. “It’s increasingly bigger projects that are made more demanding from a computing point of view. And then spreading that out in a safe and effective way to where people want to access it, that’s the challenge we confront every single day. There’s this enormous tension between the desire to be mobile and open and computing everywhere and anywhere, with these incredibly powerful computer systems we now carry around in our pockets and the bandwidth of the content that we’re making, which is high frame rate, high resolution, high dynamic range and high everything. And with 8K — HDR and stereo wavefront data goes way beyond 8K and what the retina even sees — and 10-bit or more coming in the broadcast chain, it will be more of the same.” TPW is already doing 16-bit processing for all of its film projects and most of its television work. “That’s piles and piles and piles of data that also scales linearly. It’s never going to stop. And we have a VR lab here now, and there’s no end of the data when you start including everything in and outside of the frame. That’s what keeps me up at night.”

Deluxe Creative Services
Before becoming CTO at Deluxe Creative Services, Mike Chiado had a 15-year career as a color engineer and image scientist at Company 3, the grading and finishing powerhouse acquired by Deluxe in 2010. He now manages the pipelines of a commercial, television and film Creative Services division that encompasses not just dailies, editorial and color, but sound, VFX, 3D conversion, virtual reality, interactive design and restoration.

Mike Chiado

That’s a hugely data-heavy load to begin with, and as VR and 8K projects become more common, managing the data stored and coursing through DCS’ network will get even more demanding. Branded companies currently under the monster Deluxe umbrella include Beast, Company 3, DDP, Deluxe/Culver City, Deluxe VR, Editpool, Efilm, Encore, Flagstaff Studios, Iloura, Level 3, Method Studios, StageOne Sound, Stereo D, and Rushes.

“Actually, that’s nothing when you consider that all the delivery and media teams from Deluxe Delivery and Deluxe Digital Cinema are downstream of Creative Services,” says Chiado. “That’s a much bigger network and storage challenge at that level.” Still, the storage challenges of Chiado’s segment are routinely complicated by the twin monkey wrenches of the collaborative and computer kind that can unhinge any technology-driven art form.

“Each area of the business has its own specific problems that recur: television has its issues, commercial work has its issues and features have theirs. For us, commercials and features are more alike than you might think, partly due to the constantly changing visual effects but also due to shifting schedules. Television is much more regimented,” he says. “But sometimes we get hard drives in on a commercial or feature and we think, ‘Well, that’s not what we talked about at all!’”

Company 3’s file-based digital intermediate work quickly clarified Chiado’s technical priorities. “The thing that we learned early on is realtime playback is just so critical,” he says. “When we did our very first file-based DI job 13 years ago, we were so excited that we could display a certain resolution. OK, it was slipping a little bit from realtime, maybe we’ll get 22 frames a second, or 23, but then the director walked out after five minutes and said, ‘No. This won’t work.’ He couldn’t care less about the resolution because it was always about realtime and solid playback. Luckily, we learned our lesson pretty quickly and learned it well! In Deluxe Creative Services, that still is the number one priority.”

It’s also helped him cut through unnecessary sales pitches from storage vendors unfamiliar with Deluxe’s business. “When I talk to them, I say, ‘Don’t tell me about bit rates. I’m going to tell you a frame rate I want to hit and a resolution, and you tell me if we can hit it or not with your solution. I don’t want to argue bits; I want to tell you this is what I need to do and you’re going to tell me whether or not your storage can do that.’ The storage vendors that we’re going to bank our A-client work on better understand fundamentally what we need.”

Because some of the Deluxe company brands share office space — Method and Company 3 moved into a 63,376-square-foot former warehouse in Santa Monica a few years ago — they have access to the same storage infrastructure. “But there are often volumes specially purpose-built for a particular job,” says Chiado. “In that way, we’ve created volumes focused on supporting 4K feature work and others set up specifically for CG desktop environments that are shared across 400 people in that one building. We also have similar business units in Company 3 and Efilm, so sometimes it makes sense that we would want, for artist or client reasons, to have somebody in a different location from where the data resides. For example, having the artist in Santa Monica and the director and DP in Hollywood is something we do regularly.”

Chiado says Deluxe has designed and built with network solution and storage solution providers a system “that suits our needs. But for the most part, we’re using off-the-shelf products for storage. The magic is how we tune them to be able to work with our systems.”

Those vendors include Quantum, DDN Storage and EMC’s network-attached storage Isilon. “For our most robust needs, like 4K feature workflows, we rely on DDN,” he says. “We’ve actually already done some 8K workflows. Crazy world we live in!” For long-term archiving, each Deluxe Creative Service location worldwide has an LTO-tape robot library. “In some cases, we’ll have a near-line tier two volume that stages it. And for the past few years, we’re using object storage in some locations to help with that.”

Although the entire group of Deluxe divisions and offices are linked by a robust 10GigE network that sometimes takes advantage of dark fiber, unused fiber optic cables leased from larger fiber-optic communications companies, Chiado says the storage they use is all very specific to each business unit. “We’re moving stuff around all the time but projects are pretty much residing in one spot or another,” he says. “Often, there are a thousand reasons why — it may be for tax incentives in a particular location, it may be for project-specific needs. Or it’s just that we’re talking about the London and LA locations.”

With one eye on the future and another on budgets, Chiado says pooled storage has helped DCS keep costs down while managing larger and larger subsets of data-heavy projects. “We are always on the lookout for ways to handle the next thing, like the arrival of 8K workflows, but we’ve gained huge, huge efficiencies from pooled storage,” he says. “So that’s the beauty of what we build, specific to each of our world locations. We move it around if we have to between locations but inside that location, everybody works with the content in one place. That right there was a major efficiency in our workflows.”

Beyond that, he says, how to handle 8K is still an open question. “We may have to make an island, and it’s been testing so far, but we do everything we can to keep it in one place and leverage whatever technology that’s required for the job,” Chiado says. “We have isolated instances of SSDs (solid-state drives) but we don’t have large-scale deployment of SSDs yet. On the other end, we’re working with cloud vendors, too, to be able to maximize our investments.”

Although the company is still working through cloud security issues, Chiado says Deluxe is “actively engaging with cloud vendors because we aren’t convinced that our clients are going to be happy with the security protocols in place right now. The nature of the business is we are regularly involved with our clients and MPAA and have ongoing security audits. We also have a group within Deluxe that helps us maintain the best standards, but each show that comes in may have its own unique security needs. It’s a constant, evolving process. It’s been really difficult to get our heads and our clients’ heads around using the cloud for rendering, transcoding or for storage.”

Luckily, that’s starting to change. “We’re getting good traction now, with a few of the studios getting ready to greenlight cloud use and our own pipeline development to support it,” he adds. “They are hand in hand. But I think once we move over this hurdle, this is going to help the industry tremendously.”

Beyond those longer-term challenges, Chiado says the day-to-day demands of each division haven’t changed much. “Everybody always needs more storage, so we are constantly looking at ways to make that happen,” he says. “The better we can monitor our storage and make our in-house people feel comfortable moving stuff off near-line to tape and bring it back again, the better we can put the storage where we need it. But I’m very optimistic about the future, especially about having a relief valve in the cloud.”

Our main image is the shared 4K theater at Company 3 and Method.

VFX Storage: The Molecule

Evolving to a virtual private local cloud?

By Beth Marchant

VFX artists, supervisors and technologists have long been on the cutting-edge of evolving post workflows. The networks built to move, manage, iterate, render and put every pixel into one breathtaking final place are the real super heroes here, and as New York’s The Molecule expands to meet the rising demand for prime-time visual effects, it pulls even more power from its evolving storage pipeline in and out of the cloud.

The Molecule CEO/CTO Chris Healer has a fondness for unusual workarounds. While studying film in college, he built a 16mm projector out of Legos and wrote a 3D graphics library for DOS. In his professional life, he swiftly transitioned from Web design to motion capture and 3D animation. He still wears many hats at his now bicoastal VFX and VR facility, The Molecule — which he founded in New York in 2005 — including CEO, CTO, VFX supervisor, designer, software developer and scientist. In those intersecting capacities, Healer has created the company’s renderfarm, developed and automated its workflow, linking and preview tools and designed and built out its cloud-based compositing pipeline.

When the original New York office went into growth mode, Healer (pictured at his new, under-construction facility) turned to GPL Technologies, a VFX and post-focused digital media pipeline and data infrastructure developer, to help him build an entirely new network foundation for the new location the company will move to later this summer. “Up to this point, we’ve had the same system and we’ve asked GPL to come in and help us create a new one from scratch,” he says. “But any time you hire anyone to help with this kind of thing, you’ve really got to do your own research and figure out what makes sense for your artists, your workflows and, ultimately, your bottom line.”

The new facility will start with 65 seats and expand to more than 100 within the next year to 18 months. Current clients include the major networks, Showtime, HBO, AMC, Netflix and director/producer Doug Liman.

Netflix’s Unbreakable Kimmy Schmidt is just one of the shows The Molecule works on.

Healer’s experience as an artist, developer, supervisor and business owner has given him a seasoned perspective on how to develop VFX pipeline work. “There’s a huge disparity between what the conventional user wants to do, i.e. share data, and the much longer dialog you need to have to build a network. Connecting and sharing data is really just the beginning of a very long story that involves so many other factors: how many things are you connecting to? What type of connection do you have? How far away are you from what you’re connecting to? How much data are you moving, and is it all at once or a continuous stream? Users are so different, too.”

Complicating these questions, he says, is a facility’s willingness to embrace new technology before it’s been vetted in the market. “I generally resist the newest technologies,” he says. “My instinct is that I would prefer an older system that’s been tested for years upon years. You go to NAB and see all kinds of cool stuff that appears to be working the way it should. But it hasn’t been tried in different kinds of circumstances, or it’s being pitched to the broadcast industry and may not work well for VFX.”

Making a Choice
He was convinced by EMC’s Isilon system based on customer feedback, and the hardware has already been delivered to the new office. “We won’t install it until construction is complete, but all the documentation is pointing in the right direction,” he says. “Still, it’s a bit of a risk until we get it up and running.”

Last October, Dell announced it would acquire EMC in a deal that is set to close in mid-July. That should suit The Molecule just fine — most of its artists’ computers are either Dell or HP running Nvidia graphics.

A traditional NAS configuration on a single GigE line can only do up to 100MB per second. “A 10GigE connection running in NFS can, theoretically, do 10 times that,” says Healer. “But 10GigE works slightly differently, like an LA freeway, where you don’t change the speed limit but you change the number of lanes and the on and off ramp lights to keep the traffic flowing. It’s not just a bigger gun for a bigger job, but more complexity in the whole system. Isilon seems to do that very well and it’s why we chose them.”
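
Healer’s numbers line up with simple arithmetic: 1Gb/s is 125MB/s before protocol overhead, which is why a single GigE client tops out around 100MB/s in practice, and 10GigE scales the raw figure by ten. The overhead estimate in the sketch below is assumed, not measured.

```python
# Sanity check of the quoted line rates. A GigE link moves 125MB/s before
# protocol overhead; 10GigE moves ten times that. The efficiency factor is an
# assumed allowance for NFS/TCP overhead, not a measured value.
def usable_mb_s(gigabits_per_second, efficiency=0.8):
    raw = gigabits_per_second * 1000 / 8
    return raw, raw * efficiency

for link_gbps in (1, 10):
    raw, usable = usable_mb_s(link_gbps)
    print(f"{link_gbps} GigE: {raw:.0f} MB/s raw, ~{usable:.0f} MB/s usable")
```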

His company’s fast growth, Healer says, has “presented a lot of philosophical questions about disk and RAID redundancy, for example. If you lose a disk in RAID-5 you’re OK, but if two fail, you’re screwed. Clustered file systems like GlusterFS and OneFS, which Isilon uses, have a lot more redundancy built in so you could lose quite a lot of disks and still be fine. If your number is up and on that unlucky day you lost six disks, then you would have backup. But that still doesn’t answer what happens if you have a fire in your office or, more likely, there’s a fire elsewhere in the building and it causes the sprinklers to go off. Suddenly, the need for off-site storage is very important for us, so that’s where we are pushing into next.”
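
A quick illustration of that failure-tolerance point: RAID-5 survives a single failed disk, while clustered file systems with N+M protection survive M failures per protection group. The protection levels below are example configurations, not claims about any specific Isilon or Gluster deployment.

```python
# Illustration of the failure-tolerance point. RAID-5 survives one failed
# disk; clustered file systems with N+M erasure coding survive M failures per
# protection group. The layouts below are example configurations only.
layouts = {
    "RAID-5 (single parity)": 1,
    "RAID-6 (double parity)": 2,
    "Erasure-coded cluster, N+2": 2,
    "Erasure-coded cluster, N+4": 4,
}

failed_disks = 3  # the "unlucky day" scenario
for layout, tolerated in layouts.items():
    status = "data intact" if failed_disks <= tolerated else "data loss"
    print(f"{layout}: tolerates {tolerated} failure(s) -> {status} after {failed_disks}")
```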

Healer homed in on several metrics to help him determine the right path. “The solutions we looked at had to have the following: DR, or disaster recovery, replication, scalability, off-site storage, undelete and versioning snapshots. And they don’t exactly overlap. I talked to a guy just the other day at Rsync.net, which does cloud storage of off-site backups (not to be confused with the Unix command, though they are related). That’s the direction we’re headed. But VFX is just such a hard fit for any of these new data centers because they don’t want to accept and sync 10TB of data per day.”

A rendering of The Molecule NYC's new location.

His current goal is simply to sync material between the two offices. “The holy grail of that scenario is that neither office has the definitive master copy of the material and there is a floating cloud copy somewhere out there that both offices are drawing from,” he says. “There’s a process out there called ‘sharding,’ as in a shard of glass, that MongoDB and Scality and other systems use that says that the data is out there everywhere but is physically diverse. It’s local but local against synchronization of its partners. This makes sense, but not if you’re moving terabytes.”
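
Here is a minimal sketch of the sharding idea Healer describes, in which pieces of a data set are deterministically spread across physical locations so that each site only holds and synchronizes its own share; the site names and chunking scheme are invented, and real systems such as MongoDB and Scality implement this far more elaborately.

```python
# Minimal illustration of sharding: chunks of a data set are assigned to
# physical locations by hashing their identifiers, so each site holds (and
# synchronizes) only its share. Site names and the chunking scheme are
# invented; MongoDB and Scality implement this far more elaborately.
import hashlib

sites = ["nyc", "la", "packet-cloud"]

def shard_for(chunk_id):
    """Deterministically assign a chunk to a site by hashing its identifier."""
    digest = int(hashlib.sha256(chunk_id.encode()).hexdigest(), 16)
    return sites[digest % len(sites)]

for i in range(6):
    chunk = f"shotA/frame_{i:04d}.exr"
    print(f"{chunk} -> stored at {shard_for(chunk)}")
```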

The model Healer is hoping to implement is to “basically offshore the whole company,” he says. “We’ve been working for the past few months with a New York metro startup called Packet, which has a really unique concept of a virtual private local cloud. It’s a mouthful but it’s where we need to be.” If The Molecule is doing work in New York City, Healer points out, Packet is close enough that network transmissions are fast enough and “it’s as if the machines were on our local network, which is amazing. It’s huge. If the Amazon cloud data center is 500 miles away from your office, that drastically changes how well you can treat those machines as if they are local. I really like this movement of virtual private local that says, ‘We’re close by, we’re very secure and we have more capacity than individual facilities could ever want.’ But they are off-site and the multiple other companies that use them are in their own discrete containers that never cross. Plus, you pay per use — basically per hour and per resource. In my ideal future world, we would have some rendering capacity in our office, some other rendering capacity at Packet and off-site storage at Rsync.net. If that works out, we could potentially virtualize the whole workflow and join our New York and LA office and any other satellite office we want to set up in the future.”

The VFX market, especially in New York, has certainly come into its own in recent years. “It’s great to be in an era when nearly every single frame of every single shot of both television and film is touched in some way by visual effects, and budgets are climbing back and the tax credits have brought a lot more VFX artists, companies and projects to town,” Healer says. “But we’re also heading toward a time when the actual brick-and-mortar space of an office may not be as critical as it is now, and that would be a huge boon for the visual effects industry and the resources we provide.”