Category Archives: Storage

Object Matrix and Arvato partner for managing digital archives

Object Matrix and Arvato Systems have partnered to help companies instantly access, manage, browse and edit clips from their digital archives.

Using Arvato’s production asset management platform, VPMS EditMate, along with Object Matrix’s media-focused object storage solution, MatrixStore, the companies report that organizations can significantly reduce the time needed to manage media workflows while making content easily discoverable. The integration makes it easy to unlock assets held in archive, enable creative collaboration and monetize archived assets.

MatrixStore is a media-focused private and hybrid cloud storage platform that provides instant access to all media assets. Built upon object-based storage technology, MatrixStore provides digital content governance through an integrated and automated storage platform supporting multiple media-based workflows while providing a secure and scalable solution.

VPMS EditMate is a toolkit built for managing and editing projects in a streamlined, intuitive and efficient manner, all from within Adobe Premiere Pro. From project creation and collecting media, to the export and storage of edited material, users benefit from a series of features designed to simplify the spectrum of tasks involved in a modern and collaborative editing environment.

Review: Samsung’s 970 EVO Plus 500GB NVMe M.2 SSD

By Brady Betzel

It seems that SSD drives are dropping in price by the hour. (This might be a slight exaggeration, but you understand what I mean.) Over the last year or so, prices have fallen significantly, including on high-speed NVMe SSD drives. One of those is the highly touted Samsung 970 EVO Plus NVMe line.

In this review, I am going to go over Samsung’s 500GB version of the 970 EVO Plus NVMe M.2 SSD drive. The Samsung 970 EVO Plus NVMe M.2 SSD comes in four sizes — 250GB, 500GB, 1TB and 2TB — and retails (according to www.samsung.com) for $74.99, $119.99, $229.99 and $479.99, respectively. For what it’s worth, I really didn’t see much of a price difference on the other sites I visited, namely Amazon.com and Best Buy.

On paper, the EVO Plus line of drives can achieve speeds of up to 3,500MB/s read and 3,300MB/s write. Keep in mind that the smaller the capacity, the lower the read/write speeds will be. For instance, the EVO Plus 250GB SSD can still hit up to 3,500MB/s in sequential reads, while its sequential writes dwindle to a maximum of 2,300MB/s. Comparatively, the “standard” EVO line gets 3,400MB/s to 3,500MB/s sequential reads and 1,500MB/s sequential writes on the 250GB EVO SSD. The 500GB version of the standard EVO costs just $89.99, but if you need more capacity, you will have to pay more.

There is another SSD to compare the 970 EVO Plus to, and that is the 970 Pro, which only comes in 512GB and 1TB sizes — costing around $169.99 and $349.99, respectively. While the Pro version has similar read speeds to the Plus (up to 3,500MB/s) and actually slower write speeds (up to 2,700MB/s), the real price of admission for the Samsung 970 Pro is its Terabytes Written (TBW) endurance rating. Samsung warranties the 970 line of drives for five years or the rated Terabytes Written, whichever comes first. In the 500GB class of 970 drives, the “standard” and Plus models are rated for 300TBW, while the Pro covers a whopping 600TBW.
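
To put those endurance ratings in perspective, here is a quick back-of-the-envelope calculation (my own illustration, not Samsung’s math): spreading the rated TBW evenly across the five-year warranty shows how much you could write per day before endurance, rather than time, ends the coverage.

```python
# Rough illustration (not Samsung's math): average daily writes allowed
# before the TBW rating, rather than the five-year clock, ends the warranty.
def daily_write_budget_gb(tbw_terabytes, warranty_years=5):
    days = warranty_years * 365
    return tbw_terabytes * 1000 / days  # decimal GB per day

for name, tbw in [("970 EVO / EVO Plus 500GB", 300), ("970 Pro 512GB", 600)]:
    print(f"{name}: ~{daily_write_budget_gb(tbw):.0f}GB of writes per day, every day")
# 970 EVO / EVO Plus 500GB: ~164GB of writes per day, every day
# 970 Pro 512GB: ~329GB of writes per day, every day
```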

Samsung says its use of the latest V-NAND technology, in addition to its Phoenix controller, provides the EVO NVMe drives’ high speeds and power efficiency. Essentially, V-NAND is a way to stack memory cells vertically instead of the previous method of laying them out in a planar way. Stacking vertically allows for more memory in the same space, in addition to longer life spans. You can read more about the Phoenix controller here.

If you are like me and want both a good warranty (or, really, faith in the product) and blazing speeds, check out the Samsung 970 EVO Plus line of drives. It’s a great price point with almost all of the features of the Pro line. The 970 line of NVMe M.2 SSDs uses the 2280 form factor (meaning 22mm x 80mm) with an M key interface. It’s important to understand which interface your slot is compatible with: M key or B key. Cards in the Samsung 970 EVO line are all M key. Most newer motherboards will have at least one, if not two, M.2 slots to plug drives into. You can also find PCIe adapters for $20 or $30 on Amazon that will give you essentially the same read/write speeds. External USB 3.1 Gen 2 USB-C enclosures can also be found; they make it easier to swap drives when needed without having to open your case.

One really useful way to put these newly lower-priced drives to work: When color correcting, editing and/or performing VFX miracles in apps like Adobe Premiere Pro or Blackmagic DaVinci Resolve, use NVMe drives just for cache, still stores, renders and/or optimized media. With the low cost of these NVMe M.2 drives, you might be able to include the price of one when billing a client and throw it on the shelf when done, complete with the project and media. Not only will you have a super-fast way to access the media, but with an external enclosure you can easily swap another drive into the system.

Summing Up
In the end, the price points of the Samsung 970 EVO Plus NVMe M.2 drives are right in the sweet spot. There are, of course, competing drives that run a little cheaper, like the Western Digital Black SN750 NVMe SSDs (around $99 for the 500GB model), but they come with slightly slower read/write speeds. So for my money, the Samsung 970 line of NVMe drives is a great combination of speed and value that can take your computer to the next level.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and The Shop. He is also a member of the Producers Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Wildlife DP Steve Lumpkin on the road and looking for speed

For more than a decade, Steve Lumpkin has been traveling to the Republic of Botswana to capture and celebrate the country’s diverse and protected wildlife population. A cinematographer and still photographer with Under Prairies Skies Photography, Lumpkin will spend a total of 65 days this year filming in the bush for his current project, Endless Treasures of Botswana.

Steve Lumpkin

It’s a labor of love that comes through in his stunning photographs, whether they depict a proud and healthy lioness washed with early-morning sunlight, an indolent leopard draped over a tree branch or a herd of elephants traversing a brilliant green meadow. The big cats hold a special place in Lumpkin’s heart, and documenting Botswana’s largest pride of lions is central to the project’s mission.

“Our team stands witness to the greatest conservation of the natural world on the planet. Botswana has the will and the courage to protect all things wild,” he explains. “I wanted to fund a not-for-profit effort to create both still images and films that would showcase The Republic of Botswana’s success in protecting these vulnerable species. In return, the government granted me a two-year filming permit to bring back emotional, true tales from the bush.”

Lumpkin recently graduated to shooting 4K video in the bush in Apple ProRes RAW, using a Sony FS5 camera and an Atomos Inferno recorder. He brings the raw footage back to his US studio for post, working in Apple Final Cut Pro on an iMac 5K and employing a variety of tools, including Color Grading Central and Neat Video.

Leopard

Until recently, Lumpkin was hitting a performance snag when transferring files from his QNAP TBS-882T NAS storage system to his iMac Pro. “I was only getting read speeds of about 100MB/sec over Thunderbolt, so editing 4K footage was painful,” he says. “At the time, I was transitioning to ProRes RAW, and I knew I needed a big performance kick.”

On the recommendation of Bob Zelin, video engineering consultant and owner of Rescue 1, Lumpkin installed Sonnet’s Solo10G Thunderbolt 3 adapter. The Solo10G uses the 10GbE standard to connect computers via Ethernet cables to high-speed infrastructure and storage systems. “Instantly, I jumped to a transfer rate of more than 880MB per second, a nearly tenfold throughput increase,” he says. “The system just screams now – the Solo10G has accelerated every piece of my workflow, from ingest to 4K editing to rendering and output.”

“So many colleagues I know are struggling with this exact problem — they need to work with huge files and they’ve got these big storage arrays, but their Thunderbolt 2 or 3 connections alone just aren’t cutting it.”

With Lumpkin, everything comes down to the wildlife. He appreciates any tools that help streamline his ability to tell the story of the country and its tremendous success in protecting threatened species. “The work we’re doing on behalf of Botswana is really what it’s all about — in 10 or 15 years, that country might be the only place on the planet where some of these animals still exist.

“Botswana has the largest herd of elephants in Africa and the largest group of wild dogs, of which there are only about 6,000 left,” says Lumpkin. “Products like Sonnet’s Solo10G, Final Cut, the Sony FS5 camera and Atomos Inferno, among others, help our team celebrate Botswana’s recognition as the conservation leader of Africa.”


Whiskytree experiences growth, upgrades tools

Visual effects and content creation company Whiskytree has gone through a growth spurt that included a substantial increase in staff, a new physical space and new infrastructure.

Providing content for films, television, the web, apps, games and VR or AR, Whiskytree’s team of artists, designers and technicians uses applications such as Autodesk Maya, Side Effects Houdini, Autodesk Arnold, Gaffer and Foundry Nuke on Linux — along with custom tools — to create computer graphics and visual effects.

To help manage its growth and the increase in data that came with it, Whiskytree recently installed Panasas ActiveStor. The platform is used to store and manage Whiskytree’s computer graphics and visual effects workflows, including data-intensive rendering and realtime collaboration using extremely large data sets for movies, commercials and advertising; work for realtime render engines and games; and augmented reality and virtual reality applications.

“We recently tripled our employee count in a single month while simultaneously finalizing the build-out of our new facility and network infrastructure, all while working on a 700-shot feature film project [The Captain],” says Jonathan Harb, chief executive officer and owner of Whiskytree. “Panasas not only delivered the scalable performance that we required during this critical period, but also delivered a high level of support and expertise. This allowed us to add artists at the rapid pace we needed with an easy-to-work-with solution that didn’t require fine-tuning to maintain and improve our workflow and capacity in an uninterrupted fashion. We literally moved from our old location on a Friday, then began work in our new facility the following Monday morning, with no production downtime. The company’s ‘set it and forget it’ appliance resulted in overall smooth operations, even under the trying circumstances.”

In the past, Whiskytree operated a multi-vendor storage solution that was complex and time consuming to administer, modify and troubleshoot. With the office relocation and rapid team expansion, Whiskytree didn’t have time to build a new custom solution or spend a lot of time tuning. It also needed storage that would grow as project and facility needs change.

Projects from the studio include Thor: Ragnarok, Monster Hunt 2, Bolden, Mother, Star Wars: The Last Jedi, Downsizing, Warcraft and Rogue One: A Star Wars Story.


Facilis, ATTO partner on 25Gb adapters for new Macs

Facilis, which makes high-performance shared storage solutions, has partnered with ATTO Technology to integrate ATTO’s new ThunderLink NS 3252 Thunderbolt 3-to-25GbE adapter within the Facilis Hub shared storage platform. The solution provides flexible, scalable, high-bandwidth connectivity for Apple’s new Mac Pro, iMac Pro and Mac mini.

At IBC in Amsterdam, Facilis will demonstrate 4K and 8K editing workflows featuring its Hub shared storage platform with ATTO Celerity 32Gb and 16Gb Fibre Channel HBAs and FastFrame 25Gb Ethernet. In addition, Facilis servers include 10GbE optical and copper ATTO HBAs as well as ATTO 12Gb SAS internal and external interface cards. These technologies allow Facilis to create powerful solutions that fulfill a diverse set of customer connectivity needs and workflow demands.

Facilis has been beta testing the soon-to-be released ATTO 360 tuning, monitoring and analytics application, an Ethernet network optimization tool designed for creative professionals looking to unlock the potential of ATTO FastFrame and ThunderLink adapters.

“We’re very happy to expand our longstanding partnership with Facilis,” says ATTO CEO Jeff Lowe. “The new Facilis Hub shared storage platform is a powerful storage solution for media professionals working in compressed and uncompressed high-resolution video finishing formats utilizing Ethernet, Fibre Channel or both.”

At the IBC show, Facilis will also show the newly shipped Facilis Hub shared storage system and previews of version 8.0 of the Hub management software. Built as an entirely new platform, Facilis Hub represents the evolution of the Facilis shared file system, with the block-level virtualization and multi-connectivity performance required for demanding media production workflows. Version 7.2 of the Facilis system software and FastTracker 3.0 are available now and included in all Hub systems.


Building a massive editing storage setup on a budget

By Mike McCarthy

This year, I oversaw the editing process for a large international film production. This involved setting up a collaborative editing facility in the US, at Vasquez Saloon, with a large amount of high-speed storage for the source footage. While there was “only” 6.5TB of offline DNxHR files, they shot around 150TB of Red footage that we needed to have available for onsite VFX, conform, etc. Once we finished the edit, we were actually using 40TB of that footage in the cut, which we needed at another location for further remote collaboration. So I was in the market for some large storage solutions.

Our last few projects have been small enough to fit on eight-bay desktop eSAS arrays, which are quiet and relatively cheap. (Act of Valor was on a 24TB array of 3TB drives in 2010, while 6 Below was on 64TB arrays of 8TB drives.) Now that 12TB drives are available, those arrays can go to 96TB, but we needed more capacity than that. And with that much data on each spindle, you lose more capacity to maintain redundancy, with RAID-6 dropping the 96TB of raw space to 72TB usable.

Large numbers of smaller drives offer better performance and more efficient redundancy, as well as being cheaper per TB, at least for the drives themselves. But once you get into large rack-mounted arrays, they are much louder and need to be located farther from the creative space, requiring different interconnects than direct-attached SAS. My initial quotes were for a 24x 8TB solution offering 192TB of raw storage; after RAID-6 and formatting overhead, that left us with 160 usable terabytes of space for around $15K.

I was in the process of ordering one of those from ProAvio when the company folded last Thanksgiving, resetting my acquisition process. I was looking into building one myself, with a SAS storage chassis and bare drives, when I stumbled across refurbished servers on eBay. There are numerous companies selling used servers that include storage chassis, backplanes and RAID cards for less than the case alone costs new.

The added benefit is that these include a fully functioning Xeon-level computer system as well. At the very least, this allows you to share the storage over a 10GbE network, and in our case we were also able to use it as a render node and eventually a user workstation. That solution worked well enough that we will be using similar systems for future artist stations, even without that type of storage requirement. I have set up two separate systems so far, for different needs, and learned a lot in the process. I thought I would share some of those details here.

Why use refurbished systems for top-end work? Most of the CPU advances in the last few years have come in the form of increased core counts and energy efficiency. This means that in lightly threaded applications, CPUs from a few years ago will perform nearly as well as brand-new ones. And previous-generation DDR3 RAM is much cheaper than DDR4. PCIe 3.0 has been around for many generations, but older systems won’t have Thunderbolt 3 and may not even have USB 3. USB 3 can be added with an expansion card, but Thunderbolt will require a current-generation system. The other primary limitation is finding systems that have drivers for running Windows 10, since these systems are usually designed for Linux and Windows Server. Make sure you verify the motherboard will support Windows 10 before you make a selection. (Unfortunately, Windows 7 is finally dying, with no support from Microsoft or current application releases.)

Workstations and servers are closely related at the hardware level, but have a few design differences. They use the same chipsets and Xeon processors, but servers are designed for remote administration in racks while workstations are designed to be quieter towers with more graphics capability. But servers can be used for workstation tasks with a few modifications, and used servers can be acquired very cheaply. Also, servers frequently have the infrastructure for large drive arrays, while workstations are usually designed to connect to separate storage for larger datasets.

Recognizing these facts, I set out to build a large repository for my 150TB of Red footage on a system that could also run my Adobe applications and process the data. While 8TB drives are currently the optimal size for storing the most data at the lowest total price, that will change over time. And 150TB of data required more than 16 drives, so I focused on 4U systems with 24 drive bays. Starting with 192TB of raw storage, minus two drives for RAID-6 (16TB) and roughly 10% for the way Windows reports capacity, that leaves me with 160TB of storage space reported in Windows.
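
As a sanity check on that arithmetic (my own sketch, not a vendor sizing tool), the numbers work out as follows: RAID-6 reserves two drives’ worth of capacity for parity, and the remaining decimal terabytes shrink by roughly 10% once Windows reports them in binary units.

```python
# Sketch of the capacity math for the 24-bay build described above.
drives, size_tb = 24, 8

raw_tb = drives * size_tb                 # 192TB raw
usable_tb = (drives - 2) * size_tb        # RAID-6 keeps two drives for parity: 176TB
reported_tib = usable_tb * 1e12 / 2**40   # what Windows reports in binary units: ~160

print(raw_tb, usable_tb, round(reported_tib))  # 192 176 160

# Same math for the eight-bay 12TB example mentioned earlier: 72TB usable
print((8 - 2) * 12)
```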

4U chassis also allow for full-height PCIe cards, which is important for modern GPUs. Finding support for full-height PCIe slots is probably the biggest challenge in selecting a chassis, as most server cards are low profile. A 1U chassis can fit a dual-slot GPU if it’s designed to accept one horizontally, but cooling may be an issue for workstation cards. A 2U chassis has the same issue, so you must have a 3U or 4U chassis to install full-height PCIe cards vertically, and the extra space will help with cooling and acoustics as well.

Dell and HP offer options as well, but I went with Supermicro since its design fit my needs best. I got a 4U chassis with a 24-port pass-through SAS backplane for maximum storage performance and an X9DRi-LNF4+ motherboard that was supposed to support Windows 7 and Windows 10. The pass-through backplane gives full-speed access to 24 drives over six quad-channel SFF-8643 ports, but requires a 24-port RAID card and more cables. The other option is a port-multiplying backplane, which has a single or dual SFF-8643 connection to the RAID card. This allows for further expansion at the expense of potential complexity and latency. And 12G SAS is 1.5GB/s per lane, so in theory a single SFF-8643 cable can pass up to 6GB/s, which should be as much as most RAID controllers can handle anyway.

The system cost about $2K, plus $5K for the 24 drives, which is less than half of what I was looking at paying for a standalone external SAS array, and it included a full computer with 20 CPU cores and 128GB of RAM. I considered it a bit of a risk, as I had never done something at that scale and there was no warranty, but we decided the cost savings made it worth a try. It wasn’t without its challenges, but it is definitely a viable solution for a certain type of customer. (One with more skills than money.)

Putting it to Use
The machine ran loud, as was to be expected with 24 drives and five fans, but it was installed in a machine room with our rackmount UPS and network switches, so the noise wasn’t a problem. I ran 30-foot USB and HDMI cables to the user station in the next room and frequently controlled it via VNC. I added an Nvidia Pascal-based Quadro card, a 10GbE card and a USB 3 card, as well as a SATA SSD for the OS in an optional 2.5-inch drive tray. Once I got the array set up and initialized, it benchmarked at over 3,000MB/s transfer rates. This was far more than I needed for Red files, but I won’t turn down excess speed for future use with uncompressed 8K frames or 40GbE network connections.
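
If you want to spot-check an array the same way, here is a minimal sequential-throughput sketch in Python (my own example, not the benchmark used here). It times a large streaming write with an fsync at the end; a read test done the same way will be optimistic unless the test file is larger than system RAM or the OS cache is cleared first.

```python
import os
import time

def sequential_write_mb_s(path, total_gb=16, block_mb=64):
    """Time a large streaming write and return a rough sequential MB/s figure."""
    block = os.urandom(block_mb * 1024 * 1024)
    blocks = (total_gb * 1024) // block_mb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually reached the array
    return (blocks * block_mb) / (time.perf_counter() - start)

if __name__ == "__main__":
    # Point this at the RAID volume being tested (hypothetical path), then delete the file.
    print(f"Sequential write: ~{sequential_write_mb_s(r'E:\bench.tmp'):.0f} MB/s")
```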

Initially, I had trouble with Windows 10. I was getting bluescreen ACPI BIOS errors on boot, but Windows 7 worked flawlessly. I used Win7 for a month, but I knew I would need to move to Win10 within the year and was looking at building more systems, so I needed to confirm that Win10 could work successfully. I eventually determined that it was Windows Update — which has always been the bane of my existence when using Win10 — that was causing the problem. It was automatically updating one of the chipset drivers to a version that prevented the system from booting. The only solution was to prevent Win10 from accessing the Internet until after the current driver was successfully installed. The only way to disable Windows Update during install is to totally disconnect the system from the network. Once I did that, everything worked great, and I ordered another system.
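
If disconnecting from the network isn’t practical, one alternative worth knowing about (noted here as a suggestion, not the method used on this build) is the Windows Update policy that skips driver packages entirely. A minimal sketch, assuming an elevated Python prompt on an edition of Windows 10 that honors Update policies:

```python
# Sketch (Windows only, run from an elevated prompt): set the documented
# "Do not include drivers with Windows Updates" policy in the registry.
# Assumption: your Windows edition honors Windows Update group policies.
import winreg

key_path = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "ExcludeWUDriversInQualityUpdate", 0,
                      winreg.REG_DWORD, 1)
print("Windows Update will skip driver packages after a policy refresh or reboot.")
```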

The second time I didn’t need as much storage, so I went with a 16-bay 3U chassis… which was a mistake. It ran hotter and louder with less case space, and it doesn’t fit GPUs with top-mounted power plugs or full-sized CPU coolers. So regardless of how many drive bays you need, I recommend buying a 24-bay 4U system for the space it gives you. (The Supermicro 36-bay systems look the same from the front but have less space available, since the extra 12 bays in the rear constrain the motherboard much like a 2U case.) The extra space also gives you more options for cards and cooling solutions.

I also tried an NVMe drive in a PCIe slot, and while it works, booting from it is not an option without modding the BIOS, which I was not about to experiment with. So I installed the OS on a SATA SSD again and was able to adapt it to one of the 16 standard drive bays, as I only needed eight of them for my 64TB array. This system had a pass-through backplane with 16 single-port SATA connectors, which is much messier than the SFF-8643 connectors. But it works, and it’s simpler to mix the drives between the RAID card and the motherboard, which is a plus.

When I received the unit, it was FAR louder than the previously ordered 4U one, for a number of reasons. It had 800W power supplies — instead of the 920W-SQ (Super-quiet) ones in my first one — and the smaller case had different airflow limitations. I needed this one to be quieter than the first system, as I was going to be running it next to my desk instead of in a machine room. So I set about redesigning the cooling system, which was the source of 90% of the noise. I got the power supplies replaced with 920SQ ones, although the 700W ones are supposed to be quiet as well, and much cheaper.

I replaced the five 80mm 5,000RPM jet-engine system fans with Noctua 1,800RPM fans, which made the system quiet but didn’t provide enough airflow for the passively cooled CPUs. I then ordered two large CPU coolers with horizontally mounted 92mm fans to cool the Xeon chips, replacing the default passive heatsinks that use case airflow for cooling. I also installed a 40mm x 20mm fan on the RAID card, which had been overheating even with the default jet-engine-sounding fans. Once I had those eight Noctua fans installed, the system was whisper quiet and could render at 100% CPU usage without throttling or overheating. So I was able to build a system with 16 cores and 128GB of RAM for about $1,500, not counting the 64TB of storage, which doubles that price, and the GPU, which I already had. (Although a GTX 1660 can be had for $300 and would be a good fit in that budget range.) The first one I built had 20 cores at 3GHz and 128GB of RAM for about $2,000, plus $5,000 for the 192TB of storage. I was originally looking at paying twice that for just the 192TB external arrays, so by comparison this was half the cost with a high-end computer tossed in as a bonus.

Looking Ahead
The things I plan to do differently in the future include:
– Always getting the 4U chassis for maximum flexibility.
– Making sure to get quiet power supplies ($50 to $150).
– Budgeting to replace all the fans and CPU coolers if noise is going to be an issue ($200).

But at the end of the day, you should be able to get a powerful dual-socket system ready to support massive storage volumes for around $2,000. This solution makes the most sense when you need large-capacity storage as well as the editing system. Otherwise, some of what you are paying for goes to waste.


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.


Review: LaCie’s mobile, high-speed 1TB SSD

By Brady Betzel

With the flood of internal and external hard drives hitting the market at relatively low prices, it is sometimes hard to wade through the swamp and find the drive that is right for your workflow. In terms of external drives, do you need a RAID? USB-C? Is Thunderbolt 3 the same as USB-C? Should I save money and go with a spinning drive? Are spinning drives even cheaper than SSD drives these days? All of these questions are valid and, hopefully, I will answer them.

For this review, I’m taking a look at the LaCie Mobile SSD, which comes in three versions: 500GB, 1TB and 2TB, costing around $129.95, $219.95 and $399.95, respectively. According to LaCie’s website, the Mobile SSD drives are exclusive to Apple, but with some searching on Amazon you can find all three available as well, and at lower prices than those I’ve mentioned. The 1TB version I am seeing for $152.95 on Amazon is sold through LaCie, so I assume the warranty still holds up.

I was sent the 1TB version of the LaCie Mobile SSD for review and testing. Along with the drive itself, you get two connection cables: a (USB 3.0-speed) USB-A to USB-C cable, as well as a (USB 3.1 Gen 2-speed) USB-C to USB-C cable. For clarity, USB-C is the type of connection — the oval-like shape and technology used to transfer data. While USB-C devices will work on Thunderbolt 3 ports, Thunderbolt 3-only devices will not work on USB-C-only ports. Yes, that is super-confusing considering they look the same. But in the real world, Thunderbolt 3 is more macOS-based while USB-C is more Windows-based. You can find the occasional Thunderbolt 3 connection on Windows-based PCs, but you are more likely to find USB-C. That being said, the LaCie Mobile SSD is compatible with USB-C and Thunderbolt 3, as well as USB 3.0. Keep in mind you will not get the high transfer speeds with the USB-A to USB-C cable; you will only get them with the (USB 3.1 Gen 2) USB-C to USB-C cable. The drive comes formatted as exFAT, which is immediately compatible with both macOS and Windows.

So, are spinning drives worth the cheaper price? In my opinion, no. Spinning drives are more fragile when moved around a lot, and they transfer at much slower speeds. Advertised speeds run from about 130MB/s for spinning drives to 540MB/s for SSDs, so today what amounts to roughly $100 more buys a significant speed increase.
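
As a rough illustration of what that gap means in practice (my own arithmetic, using the advertised rates above), here is how long moving a 500GB project would take at each speed:

```python
# Rough transfer-time comparison at the advertised sequential rates above.
def transfer_minutes(size_gb, rate_mb_per_s):
    return size_gb * 1000 / rate_mb_per_s / 60

for label, rate in [("a spinning drive at 130MB/s", 130), ("an SSD at 540MB/s", 540)]:
    print(f"500GB over {label}: ~{transfer_minutes(500, rate):.0f} minutes")
# 500GB over a spinning drive at 130MB/s: ~64 minutes
# 500GB over an SSD at 540MB/s: ~15 minutes
```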

A very valuable part of the LaCie Mobile SSD purchase is the limited three-year warranty and three years of data recovery services for free. No matter how your data becomes corrupted, Seagate (LaCie’s parent company since 2012) will try to recover it. Each product is eligible for one in-lab data recovery attempt, which can be turned around in as little as two days, depending on the type of recovery. The recovered media is then sent back to you on a storage device and is also available from a cloud-based account hosted online for 60 days. This is a great feature to have included in the price.

The drive itself is small, measuring approximately .35” x 3” x 3.8” and weighing only .22 lbs. The outside has sharp lines, much in the vein of a faceted diamond. It feels solid and great to carry. The aluminum enclosure comes in a space gray that is about the same color as a MacBook Pro.

Transfer Speeds
Alright, let’s get to the nitty-gritty: transfer speeds. I tested the LaCie Mobile SSD on both a Windows-based PC with USB-C and an iMac Pro with Thunderbolt 3/USB-C. On the Windows PC, I initially connected the drive to a port on the front of my system and was only getting around 150MB/s write speeds (about the speed of USB 3.0). Immediately, I knew something was wrong, so I connected to a USB-C port on a PCIe card in the rear of my PC. On that port I was getting 440.9MB/s write and 516.3MB/s read speeds. Moral of the story: make sure your USB-C ports aren’t charge-only, or USB-C connectors simply running at USB 3.0 speeds.

On the iMac Pro, I was getting write speeds of 487.2MB/s and read speeds of 523.9MB/s. This is definitely on par with the correct Windows PC transfer speeds. The retail packaging on the LaCie Mobile SSD states a 540MB/s speed (it doesn’t differentiate between read and write), but much like the miles-per-gallon figures on car sales brochures, you have to take those numbers with a few grains of salt. And while I have previously tested drives (not from LaCie) that would initially transfer at a high rate and then drop down, the LaCie Mobile SSD sustained its high transfer rates.

Summing Up
In the end, the size and design of the LaCie Mobile SSD will be one of the larger factors in determining whether you buy this drive. It’s small. Like, real small, but it feels sturdy. I don’t think anyone can dispute that the LaCie Rugged drives (the ones encased in orange rubber) are a staple of the post industry. I really wish LaCie had kept that tradition and added a tiny little orange rubberized edge. Not only does it feel safer for some reason, but it is a trademark that immediately says, “I’m a professional.”

Besides the appearance, the $152.95 price tag for a 1TB SSD that can easily fit into your shirt pocket without being noticed is pretty reasonable. At $219.95, I might say keep looking around. In addition, if you aren’t already an Adobe Creative Cloud subscriber, you will get a free 30-day trial (normally seven days) included with the purchase.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.


Review: Western Digital’s Blue SN500 NVMe SSD

By Brady Betzel

Since we began the transition from the old standard SATA 3.5-inch hard drives to SSDs, multimedia-focused computer users have seen a dramatic uptick in read and write speeds. The only issue has been price. You can still find a 3.5-inch brick drive, ranging in size from 2TB to 4TB, for under $200 (maybe closer to $100), but if you upgraded to an SSD over the past five years, you were looking at a huge jump in price. Hundreds, if not thousands, of dollars. These days you are looking at just a couple of hundred for 1TB and even less for a 256GB or 512GB SSD.

Western Digital hopes you’ll think of NVMe SSD drives as more of an automatic purchase than a luxury with the Western Digital Blue SN500 NVMe M.2 2280 line of SSD drives.

Before you get started, you will need a somewhat modern computer with an NVMe M.2-compatible motherboard (also referred to as a PCIe Gen 3 interface). This NVMe SSD is a “B+M key” configuration, so you will need to make sure your slot is compatible. Once you confirm that your motherboard is compatible, you can start shopping around. The Western Digital Blue series has always been the budget-friendly tier of hard drives, and Western Digital also offers the next level up: the Black series. In terms of NVMe M.2 SSDs, the Western Digital Blue drives are budget-friendly, but they also use two fewer PCIe lanes, which results in slower read/write speeds. The Black series uses up to four PCIe lanes and adds a heat sink to dissipate heat. But for this review, I am focusing on the Blue series and how it performs.

On paper, the Western Digital Blue SN500 NVMe SSD is available in 250GB or 500GB sizes, measures approximately 80mm long and uses the M.2 2280 form factor with a PCIe Gen 3 interface of up to two lanes. Technically, the 500GB drive can achieve up to 1,700MB/s read and 1,450MB/s write speeds, while the 250GB can achieve up to 1,700MB/s read and 1,300MB/s write speeds.
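
A quick aside on why the lane count matters (my own arithmetic, not from Western Digital’s spec sheet): each PCIe 3.0 lane carries roughly 985MB/s of payload after encoding overhead, so a two-lane drive has a hard ceiling just under 2GB/s, while four-lane drives like the 970 EVO Plus reviewed earlier can approach 4GB/s.

```python
# Why lane count matters: PCIe 3.0 runs at 8 GT/s per lane with 128b/130b
# encoding, which works out to roughly 985MB/s of payload per lane.
lane_mb_s = 8e9 * (128 / 130) / 8 / 1e6

for lanes in (2, 4):
    print(f"x{lanes}: ~{lane_mb_s * lanes:,.0f}MB/s theoretical ceiling")
# x2: ~1,969MB/s (the SN500's 1,700MB/s reads sit under this)
# x4: ~3,938MB/s (room for the ~3,500MB/s four-lane drives reviewed earlier)
```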

As of this review, the 250GB version sells for $53.99, while the 500GB version sells for $75.99. You can find specs on the Western Digital website and learn more about the Black series as well.

One of the coolest things about these NVMe drives is that they come standard with a five-year limited warranty (or until the max endurance limit is reached, whichever comes first). The max endurance (aka TBW — terabytes written) for the 250GB SSD is 150TB, while the max endurance for the 500GB version is 300TB. Both versions have an MTTF (mean time to failure) of 1.75 million hours.

In addition, the drive uses an in-house controller and 3D NAND logic. Those terms might sound like nonsense, but the in-house controller is what tells the NVMe drive what to do and when to do it (it’s essentially a dedicated processor), while 3D NAND is a way of cramming more memory into smaller spaces. Instead of adding more memory on the same plane along the x- or y-axis, manufacturers achieve more storage space by stacking layers vertically on top — or on the z-axis.

Testing Read and Write Speeds
Keep in mind that I ran these tests on a Windows-based PC. Doing a straight file transfer, I was getting about 1GB/s. When using Crystal Disk Mark, I would get a burst of speed at the top, slow down a little and then mellow out. Using a 4GB sample, my speeds were:
“Seq Q32T1” – Read: 1749.5 MB/s – Write: 1456.6 MB/s
“4KiB Q8T8” – Read: 1020.4 MB/s – Write: 1039.9 MB/s
“4KiB Q32T1” – Read: 732.5 MB/s – Write: 676.5 MB/s
“4KiB Q1T1” – Read: 35.77 MB/s – Write: 185.5 MB/s

If you would like to read exactly what these tests entail, check out the Crystal Disk Mark info page. In the AJA System Test I saw a little drop-off, but with a 4GB test file size, I got an initial read speed of 1457MB/s and a write speed of 1210MB/s, which seems to fall more in line with what Western Digital is touting. The second time I ran the AJA System Test, I got a read speed of 1458MB/s and a write speed of 883MB/s. I wanted a third opinion, so I ran the Blackmagic Design Disk Speed Test (you’ll have to install drivers for a Blackmagic card, like the UltraStudio 4K). On my first run, I got a read speed of 1359.6MB/s and a write speed of 1305.8MB/s. On my second run, I got a read speed of 1340.5MB/s and a write speed of 968.3MB/s. My read numbers were generally above 1300MB/s, and my write numbers varied between 800MB/s and 1000MB/s. Not terrible for a sub-$100 drive.

Summing Up
In the end, the Western Digital Blue SN500 NVMe SSD is an amazing value at under $100, and hopefully we will get expanded sizes in the future. The drive is a B+M key configuration, so when you are looking at compatibility, make sure to check which key your PCIe card, external drive case or motherboard supports. It is typically M or B+M key, but I found a PCIe card that supported both. If you need more space and speed than the WD Blue series can offer, check out Western Digital’s Black series of NVMe SSDs.

The sticker price starts to go up significantly when you hit the 1TB or 2TB marks — $279.99 and $529.99, respectively (with the heat sink attachment). If you stick to the 500GB version, you are looking at a more modest price tag.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.


Sonnet adds new card and adapter to 10GbE line

Sonnet Technologies is offering the Solo10G SFP+ PCIe card and the Solo10G SFP+ Thunderbolt 3 Edition adapter, the latest products in the company’s line of 10 Gigabit Ethernet (10GbE) network adapters.

Solo10G SFP+ adapters add fast 10GbE network connectivity to a wide range of computers, enabling users to easily connect to 10GbE-enabled network infrastructure and storage systems via LC fiber optic cables (sold separately). Both products include a 10GBase-SR (short-range) SFP+ transceiver (the most commonly used optical transceiver), enabling 10Gb connectivity at distances up to 300 meters.

The Solo10G SFP+ PCIe card is a low-profile x4 PCIe 3.0 adapter card that offers Mac, Windows and Linux users an easy-to-install and easy-to-manage solution for adding 10GbE fiber network connectivity to computers with PCIe card slots. This card is also suited for use in a multi-slot Thunderbolt-to-PCIe card expansion system connected to a Mac. The Solo10G SFP+ Thunderbolt 3 Edition adapter is a compact, rugged, bus-powered, fanless Thunderbolt 3 adapter for Mac and Windows computers with Thunderbolt 3 ports.

Sonnet’s Solo10G SFP+ products offer Mac users a plug-and-play experience with no driver installation required; Windows and Linux use requires only a simple driver installation. Both products are configured using operating system settings, so there’s no separate management program to install or run.

With its broad OS support and small form factor, the Solo10G SFP+ PCIe card allows companies to standardize on a single adapter and deploy it across platforms with ease. For users with Thunderbolt 3-equipped Mac and Windows computers, the Solo10G SFP+ Thunderbolt 3 Edition adapter is a simple external solution for adding 10GbE fiber network connectivity. From its replaceable captive cable to its bus-powered operation, the Thunderbolt 3 adapter is highly portable.

Solo10G SFP+ products were engineered with security features essential to today’s users. Incorporating encryption in hardware, the Sonnet network adapters are protected against malicious firmware modification. Any unauthorized attempt to modify the firmware to enable covert computer access renders them inoperable. These security features prevent the Solo10G SFP+ adapters from being reprogrammed, except by a manufacturer’s update using a secure encryption key.

Measuring a compact 3.1 inches wide by 4.9 inches deep by 1.1 inches tall — less than half the size of every other adapter in its class — the Solo10G SFP+ Thunderbolt 3 Edition adapter features an aluminum enclosure that effectively cools the circuitry and eliminates the need for a fan, enabling silent operation. Unlike every other 10GbE fiber Thunderbolt adapter available, Sonnet’s Solo10G SFP+ adapter requires no power adapter and instead is powered by the computer to which it’s connected.

The Solo10G SFP+ PCIe card and Solo10G SFP+ Thunderbolt 3 Edition adapter are available now for $149 and $249, respectively.

Atto’s FibreBridge now part of NetApp’s MetroCluster

Atto Technology has teamed with NetApp to offer Atto FibreBridge 7600N as a key component in the MetroCluster continuous data availability solution. Atto FibreBridge 7600N storage controller enables synchronous site-to-site replication up to 300km by providing low latency 32Gb Fibre Channel connections to NetApp flash and disk systems while maintaining high resiliency. FibreBridge 7600N supports up to 1.2 million IOPS and 6,400MB/s per controller.

NetApp MetroCluster enhances the built-in high availability and non-disruptive operations of NetApp systems with Ontap software, providing an additional layer of protection for the entire storage and host environment.

The Atto XstreamCore FC 7600 is a hardware protocol converter that connects 32Gb Fibre Channel ports to 12Gb SAS. It allows post and production houses to free up server resources normally used for handling storage activity and to distribute storage connections across up to 64 servers with less than four microseconds of latency. XstreamCore FC 7600 offers the flexibility needed for modern media production, allowing streaming of uncompressed HD, 4K and larger video, adding shared capabilities to direct-attached storage and remotely locating direct-attached disk or tape devices. This is a major advantage in workflow management, system architecting and the layout of production facilities.

FibreBridge 7600N is one of Atto’s XstreamCore storage controller products, part of Atto’s broad portfolio of connectivity solutions, which are widely tested and certified for compatibility with all operating systems and platforms.

NAB 2019: Storage for M&E workflows

By Tom Coughlin

Storage is a vital element in modern post production, since that’s where the video content lives. Let’s look at trends in media post production storage and products shown at the 2019 NAB show. First let’s look at general post production storage architectures and storage trends.

My company produces the yearly “Digital Storage in Media and Entertainment Report,” so we keep an eye on storage all year round. The image to the right is a schematic from our 2018 report — it shows a nonlinear editing station with optional connections to shared online (or realtime) storage via a SAN or NAS (or even a cloud-based object storage system) and a host bus adapter (HBA or xGbE card). I hope this gives you some good background for what’s to come.

Our 2018 report also includes data from our annual Digital Storage in Media and Entertainment Professional Survey. The report shows that annual storage capacity demand is expected to exceed 110 exabytes by 2023. In 2018, 48% of responding survey participants said that they used cloud-based storage for editing and post production, and 56% said that they have 1TB or more of storage capacity in the cloud. In 2018, Internet distribution was the most popular way to view proxies.

All of this proves that M&E pros will continue to use multiple types of digital storage to enable their workflows, with significant growth in the use of cloud storage for collaborative and field projects. With that in mind, let’s dig into some of the storage offerings that were on display at NAB 2019.

Workflow Storage
Dell Technologies said that significant developments in its work with VMware unlock the value of virtualization for applications and tools to automate many critical M&E workflows and operations. Dell EMC and VMware said that they are about to unveil the recipe book for making virtualization a reality for the M&E industry.

Qumulo announced an expansion of its cloud-native file storage offerings. The company introduced two new products — CloudStudio and CloudContinuity — as well as support for Qumulo’s cloud-native, distributed hybrid file system on the Google Cloud Platform (GCP). Qumulo has partnered with Google to support Qumulo’s hybrid cloud file system on GCP and on the Google Cloud Platform Marketplace. Enterprises will be able to take advantage of the elastic compute resources, operational agility, and advanced services that Google’s public cloud offers. With the addition of the Google Cloud Platform, Qumulo is able to provide multi-cloud platform support, making it easy for users to store, manage and access their data, workloads and applications in both Amazon Web Services (AWS) and GCP. Qumulo also enables data replication between clouds for migration or multi-copy requirements.

M&E companies of any size can scale production into the public cloud with CloudStudio, which securely moves traditionally on-prem workspaces, including desktops, applications and data, to the public cloud on both the AWS and GCP platforms. Qumulo’s file storage software is the same whether on-prem or in the cloud, making the transition seamless and easy and eliminating the need to reconfigure applications or retrain users.

CloudContinuity enables users to automatically replicate their data from an on-prem Qumulo cluster to a Qumulo instance running in the cloud. Should a primary on-prem storage system experience a catastrophic failure, customers can redirect users and applications to the Qumulo cloud, where they will have access to all of their data immediately. CloudContinuity also enables quick, automated fail-back to an on-prem cluster in disaster recovery scenarios.

Quantum announced its VS-Series, designed for surveillance and industrial IoT applications. The VS-Series is available in a broad range of server choices, suitable for deployments with fewer than 10 cameras up to the largest environments with thousands of cameras. Using the VS-Series, security pros can efficiently record and store surveillance footage and run an entire security infrastructure on a single platform.

Quantum’s VS-Series architecture is based on the Quantum Cloud Storage Platform (CSP), a new software-defined storage platform specifically designed for storing machine- and sensor-generated data. Like storage technologies used in the cloud, the Quantum CSP is software-defined and can be deployed on bare metal, as a virtual machine or as part of a hyperconverged infrastructure. Unlike other software-defined storage technologies, the Quantum CSP was designed specifically for video and other forms of high-resolution content — engineered for extremely low latency, maximizing the streaming performance of large files to storage.

The Quantum Cloud Storage Platform allows high-speed video recording with optimal camera density and can host and run certified VMS management applications, recording servers and other building control servers on a single platform.

Quantum says that the VS-Series product line is being offered in a variety of deployment options, including software-only, mini-tower and 1U, 2U and 4U hyperconverged servers.

Key VS-Series attributes:
– Supports high camera density and software architecture that enables users to run their entire security infrastructure on a single hyperconverged platform.
– Offers a software-defined platform with the broadest range of deployment options. Many appliances can scale out for more cameras or scale up for increased retention.
– Comes pre-installed with certified VMS applications and can be installed and configured in minutes.
– Offers a fault-tolerant design to minimize hardware and software issues, which is meant to virtually eliminate downtime.

Quantum was also showing its R-3000 at NAB. This box was designed for in-vehicle data capture for developing driver-assistance and autonomous driving systems. The NAS box includes storage modules of 60TB using HDDs and 23TB or 46TB using SSDs. It works off 12-volt power and features two 10GbE ports.

Arrow Distribution bundled NetApp storage appliances with Axle AI software. The three solutions offered are the VM100, VM200 and VM400 with 100TB, 200TB and 400TB, respectively, with 10GbE network interfaces and NetApp’s FAS architecture. Each configuration also includes an Intel-based application server running a five-user version of Axle AI 2019. The software includes a browser front-end that allows multiple users to tag, catalog and search their media files, as well as a range of AI-driven options for automatically cataloging and discovering specific visual and audio attributes within those files.

Avid Nexis|Cloudspaces

Avid Nexis|Cloudspaces is a storage as a service (SaaS) offering for post, news and sports teams, enabling them to store and park media and projects not currently in production in the cloud, leveraging Microsoft Azure. This frees up local Nexis storage space for production work. The company is offering all Avid Nexis users a limited-time free offer of 2TB of Microsoft Azure storage that is auto-provisioned for easy setup and can scale as needed. Avid Nexis manages these Cloudspaces alongside local workspaces, allowing unified content management.

DDP was showing a rack with hybrid SSD/HDD storage that the company says provides 24/7/365 reliable operation with zero interruptions and a transparent failover setup. DDP has redesigned its GUI to provide faster operation and easier use.

Facilis displayed its new Hub shared storage line developed specifically for media production workflows. Built as an entirely new platform, Facilis Hub represents the evolution of the Facilis shared file system with the block-level virtualization and multi-connectivity performance required in shared creative environments. This solution offers both block-mode Fibre Channel and Ethernet connectivity simultaneously, allowing connection through either method with the same permissions, user accounts and desktop appearance.

Facilis’ Object Cloud is an integrated disk-caching system for cloud and LTO backup and archive that includes up to 100TB of cloud storage for one low yearly cost. A native Facilis virtual volume can display cloud, tape and spinning disk data in the same directory structure, on the client desktop. Every Facilis Hub shared storage server comes with unlimited seats of the Facilis FastTracker asset tracking application. The Object Cloud software and storage package is available for most Facilis servers running version 7.2 or higher.

Facilis also shared specific product updates. The Facilis 8 has 1GB/s data rates through standard dual-port 10GbE and options for 40GbE and Fibre Channel connectivity, with 32TB, 48TB and 64TB capacities. The Facilis Hub 16 model offers 2GB/s speeds with 16 HDDs and 64TB, 96TB and 128TB capacities. The company’s Hub 16 Hybrid and SSD models integrate SSDs into a high-capacity HDD-based storage system, offering performance of 3GB/s and 4GB/s. With two or more Hub 16 or Hub 32 servers attached through 32Gb Fibre Channel controllers, Facilis Hub One configurations can be fully redundant, with multi-server bandwidth aggregated into a single point of network connectivity. The Hub One starts at 128TB and scales to 1PB.

Pixit Media announced the launch of PixStor 5, the latest version of its leading scale-out, data-driven storage platform. According to the company, “PixStor 5 is an enterprise-class scale-out NAS platform delivering guaranteed 99% performance for all types of workflow and a single global namespace across multiple storage tiers — from on-prem to the cloud.”

New PixStor 5 highlights include:

– Secure container services – This new feature offers multi-tenancy from a single storage fabric. PixStor 5 enables creative studios to deploy secure media environments without crippling productivity and creativity and aligns with TPN security accreditation standards to attract A-list clients.
– Cloud workflow flexibility — PixStor 5 expands your workflows cost-effectively into the cloud with fully automated seamless deployment to cloud marketplaces, enabling hybrid workflows for burst render and cloud-first workflows for global collaboration. PixStor 5 will soon be available in the Google Cloud Platform Marketplace, followed shortly by AWS and Azure.
– Enhanced search capabilities — Using machine learning and artificial intelligence cloud-based tools to drive powerful media indexing and search capabilities, users can perform fast, easy and accurate content searches across their entire global namespace.
– Deep granular analytics – With single-pane-of-glass management and user-friendly dashboards, PixStor 5 allows a holistic view of the entire filesystem and delivers business-relevant metrics to reinforce storage strategies.

GB Labs launched new software, features and updates to its FastNAS and Space, Echo and Vault ranges at NAB. The Space, Echo and Vault ranges got intelligent new software features, including the Mosaic asset organizer and the latest Analytics Center, along with brand-new Core.4 and Core.4 Lite software. The new Core software is also now included in the FastNAS product range.

GB Labs

Mosaic software, which already features on the FastNAS range, could be compared to a MAM. It is an asset organizer that can automatically scour all in-built metadata and integrate with AI tagging systems to give users the power to find what they’re looking for without having to manually enter any metadata.

Analytics Center gives users visibility into their network so they can see how they’re using their data, offering a better understanding of individual or system-wide use, along with suggestions on how to optimize their systems more quickly and at a lower cost.

The new Core.4 software for both ranges builds on GB Labs’ current Core.3 OS, offering a high-performance custom OS designed specifically to serve media files. It provides stable performance for every user and gets the best from the least amount of disk, which saves power.

EditShare’s flagship EFS enterprise scale-out storage solution was on display. It was developed for large-scale media organizations and supports hundreds of users simultaneously, with embedded tools for sharing media and collaborating across departments, across sites and around the world.

EditShare was showcasing advancements in its EFS File Auditing technology, the industry’s only realtime auditing platform designed to manage, monitor and secure your media from inception to delivery. EFS File Auditing keeps track of all digital assets and captures every digital footprint that a file takes throughout its life cycle, including copying, modifying and deleting of any content within a project.

Storbyte introduced its eco-friendly SBJ-496 at the 2019 NAB show. According to the company, this product is a new design in high-capacity disk systems for long-term management of digital media content with enterprise-class availability and data services. Ideal for large archive libraries, the SBJ-496 requires little to no electricity to maintain data, and its environmentally friendly green design allows unrestricted air flow, generates minimal heat and saves on cooling expenses.

EcoFlash SBS-448

The new EcoFlash SBS-448, for digital content creation and streaming, is an efficient solid-state storage array that can deliver over 20GB of data per second. According to Storbyte, the EcoFlash SBS-448 consumes less than half the electrical power and produces far less heat, and its patented design extends its lifespan significantly, resulting in a total operating cost per terabyte that the company says is 300-500% lower.

NGD Systems was showing its computational storage products with several system partners at NAB, including at the EchoStreams booth for its 1U platforms. NGD said that its M.2 and upcoming EDSFF form factors can be used in dense, performance-optimized solutions within the EchoStreams 1U server and canister system. In addition to providing data analytics and realtime analysis capture, the combination of NGD Systems products and EchoStreams 1U platforms allows for deployment at the extreme edge for onsite video acquisition and post processing.

OpenDrives was showcasing its Atlas software platform and product family of shared storage solutions. Its NAB demo was built on a single Summit system, including the NVMe-powered OmniDrive media accelerator, to significantly boost editorial, transcoding, color grading and visual effects shared workflows. OpenDrives is moving to a 2U form factor in its manufacturing, streamlining systems without sacrificing performance.

iX Systems said that its TrueNAS enterprise storage appliances deliver a perfect range of features and scalability for next-gen M&E workflows. AIC had an exhibit showing several enterprise storage systems, including some with NGD Systems computational storage SSDs. Promise Technology said that its VTrak NAS has been optimized for video application environments. Sony was offering PCIe SSD data storage servers. Other companies showing workflow storage products included Asustor, elements, PAC Storage and Rocstor.

Conclusions
The media and entertainment industry has unique requirements for storage to support modern digital workflows. A number of large and small companies have come up with a variety of local and cloud-based approaches to provide storage for post production applications. The NAB show is one of the world’s largest forums for such products and a great place to learn about what the digital storage and memory industry has to offer media and entertainment professionals.


Tom Coughlin, president of Coughlin Associates, is a digital storage analyst and business/technology consultant. He is active with SMPTE, SNIA and the IEEE (he is president of IEEE-USA and active in CES, where he is chairman of the Future Directions Committee), as well as other professional organizations.

Quantum offers new F-Series NVMe storage arrays

During the NAB show, Quantum introduced its new F-Series NVMe storage arrays designed for performance, availability and reliability. Using non-volatile memory express (NVMe) Flash drives for ultra-fast reads and writes, the series supports massive parallel processing and is intended for studio editing, rendering and other performance-intensive workloads using large unstructured datasets.

Incorporating the latest Remote Direct Memory Access (RDMA) networking technology, the F-Series provides direct access between workstations and the NVMe storage devices, resulting in predictable and fast network performance. By combining these hardware features with the new Quantum Cloud Storage Platform and the StorNext file system, the F-Series offers end-to-end storage capabilities for post houses, broadcasters and others working in rich media environments, such as visual effects rendering.

The first product in the F-Series is the Quantum F2000, a 2U dual-node server with two hot-swappable compute canisters and up to 24 dual-ported NVMe drives. Each compute canister can access all 24 NVMe drives and includes processing power, memory and connectivity specifically designed for high performance and availability.

The F-Series is based on the Quantum Cloud Storage Platform, a software-defined block storage stack tuned specifically for video and video-like data. The platform eliminates data services unrelated to video while enhancing data protection, offering networking flexibility and providing block interfaces.

According to Quantum, the F-Series is as much as five times faster than traditional Flash storage/networking, delivering extremely low latency and hundreds of thousands of IOPS per chassis. The series allows users to reduce infrastructure costs by moving from Fibre Channel to Ethernet IP-based infrastructures. Additionally, users leveraging a large number of HDDs or SSDs to meet their performance requirements can gain back racks of data center space.
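For a rough sense of how headline IOPS and throughput figures relate, here is a minimal back-of-the-envelope sketch. The IOPS and I/O-size values below are illustrative assumptions, not Quantum benchmark results; aggregate throughput is roughly IOPS multiplied by I/O size.

```python
# Illustrative only: relate an IOPS figure to aggregate throughput for a
# given I/O size. These numbers are assumptions, not Quantum benchmarks.

def throughput_gb_s(iops: int, io_size_kb: int) -> float:
    """Aggregate throughput (GB/s) implied by an IOPS figure at a fixed I/O size."""
    return iops * io_size_kb * 1024 / 1e9

print(throughput_gb_s(300_000, 4))    # ~1.2 GB/s at small 4KB random I/O
print(throughput_gb_s(300_000, 64))   # ~19.7 GB/s at larger 64KB streaming-style I/O
```

The same IOPS budget translates into very different video bandwidth depending on I/O size, which is one reason low-latency NVMe arrays are attractive for frame-based media that mixes large streaming reads with metadata-heavy small operations.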

The F-Series is the first product line based on the Quantum Cloud Storage Platform.

Facilis Launches Hub shared storage line

Facilis Technology rolled out its new Hub Shared Storage line for media production workflows during the NAB show. Facilis Hub includes new hardware and an integrated disk-caching system for cloud and LTO backup and archive designed to provide block-level virtualization and multi-connectivity performance.

“Hub Shared Storage is an all-new product based on our Hub Server that launched in 2017. It’s the answer to our customers’ requests for a more compact server chassis, lower-cost hybrid (SSD and HDD) options and integrated cloud and LTO archive features,” says Jim McKenna, VP of sales and marketing at Facilis. “We deliver all of this with new, more powerful hardware, new drive capacity options and a new look to both the system and software interface.”

The Facilis shared storage network allows both block-mode Fibre Channel and Ethernet connectivity simultaneously with the ability to connect through either method with the same permissions, user accounts and desktop appearance. This expands user access, connection resiliency and network permissions. The system can be configured as a direct-attached drive or segmented into various-sized volumes that carry individual permissions for read and write access.

Facilis Object Cloud
Object Cloud is an integrated disk-caching system for cloud and LTO backup and archive that includes up to 100TB of cloud storage for an annual fee. The Facilis Virtual Volume can display cloud, tape and spinning disk data in the same directory structure on the client desktop.

“A big problem for our customers is managing multiple interfaces for the various locations of their data. With Object Cloud, files in multiple locations reside in the same directory structure and are tracked by our FastTracker asset tracking in the same database as any active media asset,” says McKenna. “Object Cloud uses object storage technology to virtualize a Facilis volume with cloud and LTO locations. This gives access to files that exist entirely on disk, in the cloud or on LTO, or even partially on disk and partially in the cloud.”
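The “one directory, many locations” idea McKenna describes can be pictured as a catalog that maps a single virtual path to whichever copies exist on disk, in the cloud or on LTO. The sketch below is purely hypothetical; the field names, tiers and locator strings are invented for illustration and are not the Facilis Object Cloud or FastTracker data model.

```python
# Hypothetical sketch of a virtualized volume catalog: one virtual path,
# several physical locations. Names and locator formats are invented.

from dataclasses import dataclass, field

@dataclass
class AssetRecord:
    virtual_path: str                              # what the artist sees on the desktop
    locations: dict = field(default_factory=dict)  # tier -> physical locator

catalog = {
    "/ProjectX/shots/sh010_comp_v03.mov": AssetRecord(
        "/ProjectX/shots/sh010_comp_v03.mov",
        {"disk": "raid:/vol2/sh010_comp_v03.mov",
         "cloud": "s3://studio-archive/ProjectX/sh010_comp_v03.mov",
         "lto": "tape:A00123/000457"},
    )
}

def resolve(path: str, preferred=("disk", "cloud", "lto")) -> str:
    """Return the fastest available copy of a file shown in the virtual volume."""
    record = catalog[path]
    for tier in preferred:
        if tier in record.locations:
            return record.locations[tier]
    raise FileNotFoundError(path)

print(resolve("/ProjectX/shots/sh010_comp_v03.mov"))   # prefers the on-disk copy
```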

Every Facilis Hub Shared Storage server comes with unlimited seats in the Facilis FastTracker asset tracking application. The Object Cloud Software and Storage package is available for most Facilis servers running version 7.2 or higher.

SymplyWorkspace: high-speed, multi-user SAN for smaller post houses

Symply has launched SymplyWorkspace, a SAN system that uses Quantum’s StorNext 6 for high-speed collaboration over Thunderbolt 3 for up to eight simultaneous Mac, Windows, or Linux editors, with RAID protection for content safety.

SymplyWorkspace is designed for sharing content in realtime video production. The product features a compact desk-side design geared to smaller post houses, in-house creatives, ad agencies or any creative house needing an affordable high-speed sharing solution.

“With the high adoption rates of Thunderbolt in smaller post houses, with in-house creatives and with other content creators, connecting high-speed shared storage has been a hassle that requires expensive and bulky adapters and rack-mounted, hot and noisy storage, servers and switches,” explains Nick Warburton from Global Distribution, which owns Symply. “SymplyWorkspace allows Thunderbolt 3 clients to just plug into the desk-side system to ingest, edit, finish and deliver without ever moving content locally, even at 4K resolutions, with no adapters or racks needed.”

Based on the Quantum StorNext 6 sharing software, SymplyWorkspace allows users to connect up to eight laptops and workstations to the system and share video files, graphics and other data files instantly with no copying and without concerns for version control or duplicated files. A file server can also be attached to enable re-sharing of content to other users across Ethernet networks.

Symply has also addressed the short cable-length issues commonly cited with Thunderbolt. By using the latest Thunderbolt 3 optical cable technology from Corning, clients can be up to 50 feet away from SymplyWorkspace while maintaining full high-speed collaboration.

The complete SymplyWorkspace solution starts at $10,995 for 24TB of RAID-protected storage and four simultaneous Mac users. Four additional users (up to eight total) can be added at any time. The product is also available in configurations up to 288TB and supporting multiple 4K streams, with any combination of up to eight Mac, Windows or Linux users. It’s available now through worldwide resellers and joins the SymplyUltra line of workflow storage solutions for larger post and broadcast facilities.

Western Digital adds NVMe to its WD Blue solid state drive

Western Digital has added an NVMe model to its WD Blue solid state drive (SSD) portfolio. The WD Blue SN500 NVMe SSD offers three times the performance of its SATA counterpart and is optimized for multitasking and resource-heavy applications, providing near-instant access to files and programs.

Using the scalable in-house SSD architecture of the WD Black SN750 NVMe SSD, the new WD Blue SN500 NVMe SSD is also built on Western Digital’s 3D NAND technology, firmware and controller, and delivers sequential read and write speeds of up to 1,700MB/s and 1,450MB/s, respectively (for the 500GB model), with power consumption as low as 2.7W.

Targeting evolving workflows, the WD Blue SN500 NVMe SSD delivers higher sustained write performance than SATA drives and other technologies on the market today, giving users a performance edge.

“Content transitioning from 4K and 8K means it’s a perfect time for video and photo editors, content creators, heavy data users and PC enthusiasts to transition from SATA to NVMe,” says Eyal Bek, VP, data center and client computing, Western Digital. “The WD Blue SN500 NVMe SSD will enable customers to build high-performance laptops and PCs with fast speeds and enough capacity in a reliable, rugged and slim form factor.”

The WD Blue SN500 NVMe SSD will be available in 250GB and 500GB capacities in a single-sided M.2 2280 PCIe Gen3 x2 form factor. Pricing is $54.99 USD for 250GB (model WDS250G1B0C) and $77.99 USD for 500GB (model WDS500G1B0C).

Pixit Media adds David Sallak as CTO

Pixit Media, which provides data-driven storage platforms targeting M&E, has expanded its management team with the addition of CTO and member of the board David Sallak.

Most recently the VP of industry marketing at storage company Panasas, Sallak brings with him more than 15 years of experience. Prior to Panasas, Sallak served as CTO at EMC Isilon. Based in Chicago but working globally, he will be responsible for helping to grow Pixit Media.

Other key appointments for Pixit Media include Chris Horn as chief operating officer. He also joins the board. Greg Furmidge comes on as VP of global sales, and Chris Exton has been promoted to professional services manager.

With offices in Vista, California, as well as London and Stuttgart, Pixit Media’s clients include Warner Bros., Pixelogic, Framestore, Goldcrest, Encompass and Deluxe.

Updated Quantum Xcellis targets robust video workflows

Quantum has updated its Xcellis storage environment, which allows users to ingest, edit, share and store media content. These new appliances, which are powered by the company’s StorNext platform, are based on a next-generation server architecture that includes dual eight-core Intel Xeon CPUs, 64GB of memory, SSD boot drives and dual 100Gb Ethernet or 32Gb Fibre Channel ports.

The enhanced CPU and 50% increase in RAM over the previous generation greatly improve StorNext metadata performance. These enhancements make tasks such as file auditing less time-intensive, support an even greater number of clients per node and enable the management of billions of files per node. Users operating in a dynamic application environment on storage nodes will also see performance improvements.

With the ability to provide cross-protocol locking for shared files across SAN, NFS and SMB, Xcellis targets organizations that have collaborative workflows and need to share content across both Fibre Channel and Ethernet.

Leveraging this next-generation hardware platform, StorNext will provide higher levels of streaming performance for video playback. Xcellis appliances provide a high-performance gateway for StorNext advanced data management software to integrate tiers of scalable on-premise and cloud-based storage. This end-to-end capability provides a cost-effective solution to retain massive amounts of data.

StorNext offers a variety of features that ensure data protection of valuable content over its entire life cycle. Users can easily copy files to off-site tiers and take advantage of versioning to roll back to an earlier point in time (prior to a malware attack, for example), as well as set up automated replication for disaster recovery purposes — all of which is designed to protect digital assets.

Quantum’s latest Xcellis appliances are available now.

XenData offering LTO tape-to-cloud migration service

So you have safely archived your content on sturdy, cost-effective LTO tape, but now what? According to XenData, a provider of high-capacity data storage solutions based on hybrid cloud, data tape and optical cartridges, while LTO tape cartridges have a shelf life of 30 years, after several years there is a need to migrate the cartridge contents to current-technology media to avoid maintaining old generations of tape drives and systems.

To serve this need, XenData has launched a service to migrate files archived on LTO tape cartridges to cloud storage. XenData archive storage systems have built-in migration capabilities that make it easy for users to seamlessly migrate their files from old generations of LTO to either the latest LTO formats or to the cloud.

The service uses XenData’s migration technologies to transfer content from LTO to the cloud. Supported data formats include LTFS, Cache-A TAR, XenData TAR and Front Porch DIVA proprietary formats. Contents may be categorized and reorganized before being migrated to AWS, Microsoft Azure or Wasabi public clouds. Alternatively, contents may be migrated to the latest LTO formats: either 6TB LTO-7 or 12TB LTO-8 cartridges.

According to XenData CEO Dr. Phil Storey, “Our service is aimed at organizations involved in creative media that have files stored on LTO cartridges and they want to make the content easily accessible. It is especially relevant to users that have old LTO cartridges written using legacy systems and they want to stop maintaining their old hardware. Often they are fearful that the old hardware will stop working and they are then going to lose access to their content. In other cases, the equipment has actually stopped working and they have already lost access to their archived files.”

This service is available now.

Rohde & Schwarz’s storage system R&S SpycerNode shipping

First shown at IBC 2018, Rohde & Schwarz’s new media storage system, R&S SpycerNode, is now available for purchase. This new storage system uses High Performance Computing (HPC), a term that refers to the system’s performance, scalability and redundancy. HPC is a combination of hardware, file system and RAID approach. HPC employs redundancy using software RAID technologies called erasure coding in combination with declustering to increase performance and reduce rebuild times. Also, system scalability is almost infinite and expansion is possible during operation.

According to Rohde & Schwarz, in creating this new storage system its engineers looked at many of the key issues that impact media storage systems within high-performance video editing environments — from annoying maintenance requirements, such as defragging, to much more serious system failures, including dying disk drives.

R&S SpycerNode features Rohde & Schwarz’s device manager web application, which makes it much easier to set up and use Rohde & Schwarz solutions in an integrated fashion. Device manager helps reduce setup times and simplifies maintenance and service thanks to its intuitive web-based UI, operated through a single client.

To ensure data security, Rohde & Schwarz has introduced data protection systems based on erasure coding and declustering within the R&S SpycerNode. Erasure coding means that a data block is always written including parity.

Declustering is a part of the data protection approach of HPC setups (formerly known as RAID). It is software based, and in comparison to a traditional RAID setup the spare disk is spread over all other disks and is not a dedicated disk. This will decrease rebuild times and reduce performance impact. Also, there are no limitations with the RAID controller, which results in much higher IOPS (input/output operations per second). Importantly, there is no impact on system performance over time due to declustering.
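To make the parity idea concrete, here is a minimal single-parity sketch of erasure coding: data blocks are written together with a parity block, and any one lost block can be rebuilt from the survivors. Real declustered systems such as R&S SpycerNode use stronger codes spread across many drives; this only illustrates the principle.

```python
# Minimal single-parity (RAID-5-style) illustration of erasure coding.
# Production systems use stronger, declustered codes; this shows the idea.

def xor_blocks(blocks):
    """XOR equal-length blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data_blocks = [b"AAAA", b"BBBB", b"CCCC"]   # one stripe of data blocks
parity = xor_blocks(data_blocks)            # parity written alongside the data

# Simulate losing one block (a failed drive) and rebuilding it from the rest
lost = 1
survivors = [b for i, b in enumerate(data_blocks) if i != lost]
rebuilt = xor_blocks(survivors + [parity])
assert rebuilt == data_blocks[lost]
print("rebuilt block:", rebuilt)            # b'BBBB'
```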

R&S SpycerNode comes in multiple 2U and 5U chassis designs, which are available with NL-SAS HDDs and SAS SSDs in different capacities. An additional 2U24 chassis design is a pure Flash system with main processor units and JBOD units. A main unit is always redundant, equipped with two appliance controllers (APs). Each AP features two 100Gb interfaces, resulting in four 100Gb interfaces per main unit.

The combination of different chassis systems makes R&S SpycerNode applicable to a very broad range of applications. The 2U system represents a compact, lightweight unit that works well within mobile productions as well as offering a very dense, high-speed storage device for on-premise applications. A larger 5U system offers sophisticated large-scale storage facilities on-premise within broadcast production centers and post facilities.

Storage for VFX Studios

By Karen Moltenbrey

Visual effects are dazzling — inviting eye candy, if you will. But when you mention the term “storage,” the wide eyes may turn into a stifled yawn from viewers of the amazing content. Not so for the makers of that content.

They know that the key to a successful project rests within the reliability of their storage solutions. Here, we look at two visual effects studios — both top players in television and feature film effects — as they discuss how data storage enables them to excel at their craft.

Zoic Studios
A Culver City-based visual effects facility, with shops in Vancouver and New York, Zoic Studios has been crafting visual effects for a host of television series since its founding in 2002, starting with Firefly. In addition to a full plate of episodics, Zoic also counts numerous feature films and spots to its credits.

Saker Klippsten

According to Saker Klippsten, CTO, the facility has used a range of storage solutions over the past 16 years from BlueArc (before it was acquired by Hitachi), DataDirect Networks and others, but now uses Dell EMC’s Isilon cluster file storage system for its current needs. “We’ve been a fan of theirs for quite a long time now. I think we were customer number two,” he says, “back when they were trying to break into the media and entertainment sector.”

Locally, the studio uses Intel NVMe drives for its workstations. NVMe, or non-volatile memory express, is an open logical device interface specification for accessing all-flash storage media attached via the PCI Express (PCIe) bus. Previously, Zoic had been using Samsung SSDs (1TB and 2TB EVO drives), but in the past year and a half it began migrating to NVMe on the local workstations.

Zoic transitioned to the Isilon system in 2004-2005 because of the heavy usage its renderfarm was getting. “Renderfarms work 24/7 and don’t take breaks. Our storage was getting really beat up, and people were starting to complain that it was slow accessing the file system and affecting playback of their footage and media,” explains Klippsten. “We needed to find something that could scale out horizontally.”

At the time, however, file-level storage was pretty much all that was available — “you were limited to this sort of vertical pool of storage,” says Klippsten. “You might have a lot of storage behind it, but you were still limited at the spigot, at the top end. You couldn’t get the data out fast enough.” But Isilon broke through that barrier by creating a cluster storage system that allowed scaling horizontally, “so we could balance our load, our render nodes and our artists across a number of machines, and access and update in parallel at the same time,” he adds.

Klippsten believes that solution was a big breakthrough for a lot of users; nevertheless, it took some time for others to get onboard. “In the media and entertainment industry, everyone seemed to be locked into BlueArc or NetApp,” he notes. Not so with Zoic.

Fairly recently, some new players have come onto the market, including Qumulo, touted as a “next-generation NAS company” built around advanced, distributed software running on commodity hardware. “That’s another storage platform that we have looked at and tested,” says Klippsten, adding that Zoic even has a number of nodes from the vendor.

There are other open-source options out there as well. Recently, Red Hat began offering Gluster Storage, an open, software-defined storage platform for physical, virtual and cloud environments. “And now with NVMe, it’s eliminating a lot of these problems as well,” Klippsten says.

Back when Zoic selected Isilon, there were a number of major issues that affected the studio’s decision making. As Klippsten notes, they had just opened the Vancouver office and were transferring data back and forth. “How do we back up that data? How do we protect it? Storage snapshot technology didn’t really exist at the time,” he says. But, Isilon had a number of features that the studio liked, including SyncIQ, software for asynchronous replication of data. “It could push data between different Isilon clusters from a block level, in a more automated fashion. It was very convenient. It offered a lot of parameters, such as moving data by time of day and access frequency.”

SyncIQ enabled the studio to archive the data. And for dealing with interim changes, such as a mistakenly deleted file, Zoic found Isilon’s SnapshotIQ ideal for fast data recovery. Moreover, Isilon was one of the first to support Aspera, right on the Isilon cluster. “You didn’t have to run it on a separate machine. It was a huge benefit because we transfer a lot of secure, encrypted data between us and a lot of our clients,” notes Klippsten.
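The kind of policy-driven replication Klippsten describes (moving data by time of day and by how recently it was accessed) can be thought of as a small policy object that the storage system evaluates. The sketch below is a conceptual illustration only; it is not the OneFS/SyncIQ API, and all field names are invented.

```python
# Conceptual sketch of a replication policy; not the actual SyncIQ API.

from dataclasses import dataclass

@dataclass
class ReplicationPolicy:
    source_path: str        # directory tree to replicate
    target_cluster: str     # remote cluster to push to
    start_hour: int         # only replicate during an off-peak window
    end_hour: int
    min_idle_days: int      # only push files not accessed recently

policy = ReplicationPolicy(
    source_path="/ifs/projects/showA",
    target_cluster="cluster-vancouver",
    start_hour=22,
    end_hour=6,
    min_idle_days=2,
)
print(policy)
```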

Netflix’s The Chilling Adventures of Sabrina

Within the pipeline, Zoic’s storage system sits at the core. It is used immediately as the studio ingests the media, whether it is downloaded or transferred from hard drives – terabytes upon terabytes of data. The data is then cleaned up and distributed to project folders for tasks assigned to the various artists. In essence, it acts as a holding tank for the main production storage as an artist begins working on those specific shots, Klippsten explains.

Aside from using the storage at the floor level, the studio also employs it at the archive level, for data recovery as well as material that might not be accessed for weeks. “We have sort of a tiered level of storage — high-performance and deep-archival storage,” he says.

And the system is invaluable, as Zoic is handling 400 to 500 shots a week. If you multiply that by the number of revisions and versions that take place during that time frame, it adds up to hundreds of terabytes weekly. “Per day, we transfer between LA, Vancouver and New York somewhere around 20TB to 30TB,” he estimates. “That number increases quite a bit because we do a lot of cloud rendering. So, we’re pushing a lot of data up to Google and back for cloud rendering, and all of that hits our Isilon storage.”
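Klippsten’s 20TB to 30TB per day between sites implies a substantial sustained WAN load. A quick back-of-the-envelope calculation (decimal terabytes, spread evenly across 24 hours):

```python
# Rough arithmetic: sustained bandwidth implied by a daily inter-site transfer load.

def sustained_gbit_per_s(tb_per_day: float) -> float:
    """Average Gb/s needed to move tb_per_day terabytes in 24 hours."""
    return tb_per_day * 1e12 * 8 / 86_400 / 1e9

for tb in (20, 30):
    print(f"{tb} TB/day ≈ {sustained_gbit_per_s(tb):.1f} Gb/s sustained")
# 20 TB/day ≈ 1.9 Gb/s; 30 TB/day ≈ 2.8 Gb/s, before burstiness or protocol overhead
```

In practice transfers are bursty, so peak link capacity has to sit well above those averages.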

When Zoic was founded, it originally saw itself as a visual effects company, but at the end of the day, Klippsten says they’re really a technology company that makes pretty pictures. “We push data and move it around to its limits. We’re constantly coming up with new, creative ideas, trying to find partners that can help provide solutions collaboratively if we cannot create them ourselves. The shot cost is constantly being squeezed by studios, which want these shots done faster and cheaper. So, we have to make sure our artists are working faster, too.”

The Chilling Adventures of Sabrina

Recently, Zoic has been working on a TV project involving a good deal of water simulations and other sims in general — which rapidly generate a tremendous amount of data. Then the data is transferred between the LA and Vancouver facilities. Having storage capable of handling that was unheard of three years ago, Klippsten says. However, Zoic has managed to do so using Isilon along with some off-the-shelf Supermicro storage with NVMe drives, enabling its dynamics department to tackle this and other projects. “When doing full simulation, you need to get that sim in front of the clients as soon as possible so they can comment on it. Simulations take a long time — we’re doing 26GB/sec, which is crazy. It’s close to something in the high-performance computing realm.”

With all that considered, it is hardly surprising to hear Klippsten say that Zoic could not function without a solid storage solution. “It’s funny. When people talk about storage, they are always saying they don’t have enough of it. Even when you have a lot of storage, it’s always running at 99 percent full, and they wonder why you can’t just go out to Best Buy and purchase another hard drive. It doesn’t work that way!”

Milk VFX
Founded just five years ago, Milk VFX is an independent visual effects facility in the UK with locations in London and Cardiff, Wales. While Milk VFX may be young, it was founded by experienced and award-winning VFX supervisors and producers. And the awards have continued, including an Oscar (Ex Machina), an Emmy (Sherlock) and three BAFTAs, as the studio creates innovative and complex work for high-end television and feature films.

Benoit Leveau

With so much precious data, and a lot of it, the studio has to ensure that its work is secure and the storage system is keeping pace with the staff using it. When the studio was set up, it installed Pixit Media’s PixStor, a parallel file system with limitless storage, for its central storage solution. And, it has been growing with the company ever since. (Milk uses almost no local storage, except for media playback.)

“It was a carefully chosen solution due to its enterprise-level performance,” says Benoit Leveau, head of pipeline at Milk, about the decision to select PixStor. “It allowed us to expand when setting up our second studio in Cardiff and our rendering solutions in the cloud.”

When Milk was shopping for storage while opening the studio, four things were at the forefront of the team’s minds: speed, scalability, performance and reliability. Those were the functions the group wanted from its storage system — exactly the same four demands that the studio’s projects required.

“A final image requires gigabytes, sometimes terabytes, of data in the form of detailed models, high-resolution textures, animation files, particles and effects caches and so forth,” says Leveau. “We need to be able to review 4K image sequences in real time, so it’s really essential for daily operation.”
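As a sense of scale for “reviewing 4K image sequences in real time,” here is a rough calculation using an uncompressed 10-bit 4K DPX-style frame (about 4 bytes per pixel once packed into 32-bit words). The frame geometry and rate are illustrative assumptions, not a statement of Milk’s actual delivery specs.

```python
# Rough math for realtime playback of an uncompressed 4K image sequence.
# Frame geometry and packing are illustrative assumptions.

def playback_mb_s(width=4096, height=2160, bytes_per_pixel=4, fps=24) -> float:
    """Sustained read rate (MB/s) needed to play one uncompressed stream."""
    return width * height * bytes_per_pixel * fps / 1e6

print(f"single 4K stream: ~{playback_mb_s():.0f} MB/s")   # ~850 MB/s
```

Several simultaneous review sessions plus renders hitting the same storage quickly push requirements into multiple gigabytes per second, which is why parallel file systems are favored for this work.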

This year alone, Milk has completed a number of high-end visual effects sequences for feature films such as Adrift, serving as the principal vendor on this true story about a young couple lost at sea during one of the most catastrophic hurricanes in recorded history. The Milk team created all the major water and storm sequences, including bespoke 100-foot waves, all of which were rendered entirely in the cloud.

As Leveau points out, one of the shots in the film was more than 60TB, as it required complex ocean simulations. “We computed the ocean simulations on our local renderfarm, but the rendering was done in the cloud, and with this setup, we were able to access the data from everywhere almost transparently for the artists,” he explains.

Adrift

The studio also recently completed work on the blockbuster Fantastic Beasts sequel, The Crimes of Grindelwald.

For television, the studio created visual effects for an episode of the Netflix Altered Carbon sci-fi series, where people can live forever, as they digitally store their consciousness (stacks) and then download themselves into new bodies (sleeves). For the episode, the Milk crew created forest fires and the aftermath, as well as an alien planet and escape ship. For Origin, an action-thriller, the team generated 926 VFX shots in 4K for the 10-part series, spanning a wide range of work. Milk is also serving as the VFX vendor for Good Omens, a six-part horror/fantasy/drama series.

“For Origin, all the data had to be online for the duration of the four-month project. At the same time, we commenced work as the sole VFX vendor on the BBC/Amazon Good Omens series, which is now rapidly filling up our PixStor, hence the importance of scalability!” says Leveau.

Main Image: Origin via Milk VFX


Karen Moltenbrey is a veteran VFX and post writer.

Virtual Roundtable: Storage

By Randi Altman

The world of storage is ever changing and complicated. There are many flavors meant to match specific workflow needs. What matters most to users? Beyond easily installed, easy-to-use systems that let them focus on the creative and not the tech, they want scalability, speed, data protection, the cloud and the ability to handle higher and higher frame rates at higher resolutions, which means larger and larger files. The good news is the tools are growing to meet these needs. New technologies and software enhancements around NVMe are providing extremely low-latency connectivity that supports higher-performance workflows. Time will tell how that plays a part in day-to-day workflows.

For this virtual roundtable, we reached out to makers of storage and users of storage. Their questions differ a bit, but their answers often overlap. Enjoy.

Western Digital Global Director M&E Strategy & Market Development Erik Weaver

What is the biggest trend you’ve seen in the past year in terms of storage?
There’s a couple that immediately come to mind. Both have to do with the massive amounts of data generated by the media and entertainment industry.

The first is the need to manage this data to understand what you have, where it resides and where it’s going. With multiple storage architectures in play (cloud, hybrid, legacy, remote, etc.), some may be out of your purview, making data management challenging. The key is abstraction: creating a unique identifier for every file everywhere so assets can be identified regardless of file name or location.

Some companies are already making progress using the C4 framework and the C4 ID system. With abstraction, you can apply rules so you always know where assets are located within these environments. It allows you to see all your assets and easily move them between storage tiers, if needed. Better data management will also help with analytics and AI/ML.
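The abstraction Weaver describes boils down to deriving an identifier from a file’s contents rather than its name or path, so the same asset resolves to the same ID wherever it lives. The actual C4 ID specification defines its own hashing and encoding; the sketch below only shows the general content-addressing idea using a plain SHA-256 hash.

```python
# Simplified illustration of content-derived asset IDs (not the C4 ID spec,
# which defines its own hashing and encoding).

import hashlib

def content_id(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file's contents so identical copies get identical IDs."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Two copies of the same clip on different tiers (or with different file
# names) yield the same ID, so a catalog can track one asset, many locations.
```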

The second big trend, which we’ll talk about some more, is NVMe (and NVMe-over-Fabric) and the incredible speed and flexibility it provides. It has the ability to radically change the workflow for M&E to genuinely handle multiple 4K, 6K and 8K feeds and manage massive volumes of data. NVMe all-Flash arrays such as our IntelliFlash N-Series product line, as opposed to traditional NAS, bring transfer rates to a whole new level. Using the NVMe protocol can deliver three to five times faster performance than traditional flash technology and 20 times faster than traditional NAS.

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
For AI, VR and machine learning, there’s a general trend toward using Flash on the front end and object storage on the back end. Our customers use ActiveScale object storage to scale up and out and store the primary dataset, then use an NVMe tier to process that data. You need a storage architecture large enough to capture all those datasets, then analyze them. This is driving an extreme amount of data.

Take, for example, VR. The move from simple 360 video into volumetric capture is analogous to what film used to be: it’s expensive. With film, you only have a limited number of takes and only so much storage, but with digital you capture everything, then fix it in post. The expansion in storage needs is outrageous, and you need cost-effective storage that can scale.

As far as AI and ML, think about a popular Internet entertainment or streaming service. They’re running analytics looking at patterns of what customers are watching. They’re constantly growing and adapting in order to provide recommendations, 24×7. It would be tedious and downright unfeasible for humans to track this.

All of this requires compute power and storage. And having the right balance of performance, storage economics and low TCO is critical. We’re helping many companies define that strategy today leveraging our family of IntelliFlash, ActiveScale, Ultrastar and G-Technology branded products.

WD’s IntelliFlash N-Series NVMe all-Flash array

Can you talk about NVMe?
NVMe is a game changer. NVMe, with extreme performance, low latencies and incredible throughput is opening up new possibilities for the media workflow. NVMe can offer 5x the performance of traditional Flash at comparable prices and will be the foundation for next-generation workflows for production, gaming and VFX. It’s a radical change to traditional workflows today.

NVMe also lays the foundation for NVMe over fabric (NVMf). With that, it’s important to mention the difference between NVMe and NVMf.

Unlike SAS and SATA protocols that were designed for disk drives, NVMe was designed from the ground up for persistent Flash memory technologies and the massively parallel transfer capabilities of SSDs. As such, it delivers significant advantages including extreme performance, improved queuing, low-latency and the reduction of I/O stack overheads.

NVMf is a networked storage protocol that allows NVMe Flash storage to be disaggregated from the server and made widely available to concurrent applications and multiple compute resources. There is no limit to the number of servers or NVMf storage devices that can be shared. It promises to deliver the lowest end-to-end latency from application to storage while delivering agility and flexibility by sharing resources throughout the enterprise.

The bottom line is NVMe and NVMf are enablers for next-generation workflows that can give you a competitive edge in terms of efficiency, productivity and extracting the most value from your data.

What do you do in your products to help safeguard your users’ data?
As one of the largest storage companies in the world, we understand the value of data. Our goal is to deliver the highest quality storage solutions that deliver consistent performance, high-capacity and value to our customers. We design and manufacture storage solutions from silicon to systems. This vertical innovation gives us a unique advantage to fine-tune and optimize virtually any layer within the stack, including firmware, software, processing, interconnect, storage, mechanical and even manufacturing disciplines. This approach helps us deliver purpose-built products across all of our brands that provide the performance, reliability, total cost of ownership and sustainability demanded by our customers.

Users want more flexible workflows — storage in the cloud, on premise, etc. Are your offerings reflective of that?
We believe hybrid workflows are critical in today’s environment. M&E companies are increasingly leveraging a hybrid of on-premises and multi-cloud architectures. Core intellectual property (in the form of digital assets) is stored in private, secure storage, while they access multi-cloud vendors to render, run post workflows or take advantage of various tools and services such as AI.

Object storage in a private cloud configuration is enabling new capabilities by providing “warm” online access to petabyte-scale repositories that were previously stored on tape or other “cold” storage archives. Suddenly, with this hybrid approach, companies can access and retain all their assets, and create new content services, monetize opportunities or run analytics across a much larger dataset. Combined with the ability to use AI for audience viewing, demographic and geographic data allows companies to deliver high-value, tailored content and services on a global scale.

Final Thoughts?
We’re seeing a third dimension to the “digital dilemma.” The digital dilemma is not new and has been talked about before. The first dilemma is the physical device itself. No physical device lasts forever. Tape and media degradation happen over extended periods of time. You also need to think about the limitations of the device itself and whether it will become obsolete. The second is the age of the media format and its compatibility with modern operating systems, which can leave data unreadable. But the third thing that’s happening, and it’s quite serious, is that the experts who manage the libraries are “aging out” and nearing retirement. They’ve owned or worked on these infrastructures for generations and have this tribal knowledge of what assets they have and where they’re stored, as well as the fickle nature of the underlying hardware. Because of these factors, we strongly encourage companies to evaluate their archive strategy or potentially risk losing enormous amounts of data.

Company 3 NY and Deluxe NY Data/IO Supervisor Hollie Grant

Company 3 specializes in DI, finishing and color correction, and Deluxe is an end-to-end post house working on projects from dailies through finishing.

Hollie Grant

How much data did you use/backup this year? How much more was that than the previous year? How much more data do you expect to use next year?
Over the past year, as a rough estimate, my team dealt with around 1.5 petabytes of data. The latter half of this year really ramped up storage-wise. We were cruising along with a normal increase in data per show until the last few months where we had an influx of UHD, 4K and even 6K jobs, which take up to quadruple the space of a “normal” HD or 2K project.

I don’t think we’ll see a decrease in this trend, with 4K televisions taking off as the baseline for consumers and with streaming becoming more popular than ever. OTT films and television have raised the bar for post production, expecting 4K source and native deliveries. Even smaller indie films that we would normally not think twice about space-wise are shooting and finishing in 4K in the hopes that Netflix or Amazon will buy their film. This means that even projects that once were not a burden on our storage will have to be factored in differently going forward.

Have you ever lost important data due to a hardware failure? Have you ever lost data due to an operator error? (Accidental overwrite, etc.)
Triple knock on wood! In my time here we have not lost any data due to an operator error. We follow strict procedures and create redundancy in our data, so if there is a hardware failure we don’t lose anything permanently. We have received hard drives or tapes that failed, but this far along in the digital age most people have more than one copy of their work, and if they don’t, a backup is the first thing I recommend.

Do you find access speed to be a limiting factor with your current storage solution?
We can reach read and write speeds of 1GB/sec on our SAN. We have a pretty fast configuration of disks. Of course, the more sessions you have trying to read or write on a volume, the harder it can be to get playback. That’s why we have around 2.5PB of storage across many volumes, so I can organize projects based on the bandwidth they will need and their schedules and we don’t have trouble with speed. This is one of the more challenging aspects of my day-to-day as the size of projects and their demand for larger frame playback increase.

Showtime’s Escape at Dannemora – Co3 provided color grading and conform.

What percentage of your data’s value do you budget toward storage and data security?
I can’t speak to exact percentages, but storage upgrades are a large part of our yearly budget. There is always an ask for new disks in the funding for the year because every year we’re growing along with the size of the data for productions. Our production network infrastructure is designed around security regulations set forth by many studios and the MPAA. A lot of work goes into maintaining that and one of the most important things to us is keeping our clients’ data safe behind multiple “locks and keys.”

What trends do you see in storage?
I see the obvious trends in physical storage size decreasing while bandwidth and data size increases. Along those lines I’m sure we’ll see more movies being post produced with everything needed in “the cloud.” The frontrunners of cloud storage have larger, more secure and redundant forms of storing data, so I think it’s inevitable that we’ll move in that direction. It will also make collaboration much easier. You could have all camera-original material stored there, as well as any transcoded files that editorial and VFX will be working with. Using the cloud as a sort of near-line storage would free up the disks in post facilities to focus on only having online what the artists need while still being able to quickly access anything else. Some companies are already working in a manner similar to this, but I think it will start to be a more common solution moving forward.

creative.space‘s Nick Anderson

What is the biggest trend you’ve seen in the past year in terms of storage?
The biggest trend is NVMe storage. SSDs are finally entering a range where they are forcing storage vendors to re-evaluate their architectures to take advantage of NVMe’s performance benefits.

Nick Anderson

Can you talk more about NVMe?
When it comes to NVMe, speed, price and form factor are three key things users need to understand. When it comes to speed, it blasts past the limitations of hard drive speeds to deliver 3GB/s per drive, which requires a faster connector (PCIe) to take advantage of. With parallel access and higher IOPS (input/output operations per second), NVMe drives can handle operations that would bring an HDD to its knees. When it comes to price, it is cheaper per GB than past iterations of SSD, making it a feasible alternative for tier-one storage in many workflows. Finally, when it comes to form factor, it is smaller and requires less hardware bulk in a purpose-built system, so you can get more drives in a smaller amount of space at a lower cost. People I talk to are surprised to hear that they have been paying a premium to put fast SSDs into HDD form factors that choke their performance.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
This is something we have been thinking a lot about and we have some exciting stuff in the works that addresses this need that I can’t go into at this time. For now, we are working with our early adopters to solve these needs in ways that are practical to them, integrating custom software as needed. Moving forward we hope to bring an intuitive and seamless storage experience to the larger industry.

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
This gets down to a shift in what kind of data is being processed and how it can be accessed. When it comes to video, big media files and image sequences have driven the push for better performance. 360° video pushes storage performance requirements further past 4K into 8K, 12K, 16K and beyond. On the other hand, as CGI continues to become more photorealistic and we emerge from the “uncanny valley,” the performance need shifts from big data to small data in many cases as render engines are used instead of video or image files. Moving lots of small data is what these systems were originally designed for, so it will be a welcome shift for users.

When it comes to AI, our file system architectures and NVMe technology are making data easily accessible with less impact on performance. Apart from performance, we monitor thousands of metrics on the system that can be easily connected to your machine learning system of choice. We are still in the early days of this technology and its application to media production, so we are excited to see how customers take advantage of it.

What do you do in your products to help safeguard your users’ data?
From a data integrity perspective, every bit of data gets checksummed on copy, and corrupted data can be detected and restored using that checksum. This means the storage is self-healing, with 100% data integrity once data is written to the disk.
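Checksum-on-write with self-healing generally works by storing a checksum alongside each piece of data when it is written, verifying it on read and repairing from a redundant copy when a mismatch is found. The sketch below illustrates that loop at the file level; it is a conceptual example, not creative.space’s implementation, which lives inside the underlying file system.

```python
# Conceptual checksum-and-heal loop, illustrated at the file level.
# Real systems do this per block inside the file system.

import hashlib
import shutil

def write_with_checksum(src: str, dst: str) -> None:
    """Copy a file and record a checksum of what was written."""
    shutil.copyfile(src, dst)
    digest = hashlib.sha256(open(dst, "rb").read()).hexdigest()
    with open(dst + ".sha256", "w") as f:
        f.write(digest)

def verify_and_heal(path: str, replica: str) -> bool:
    """Return True if the file is intact; otherwise repair it from a replica."""
    stored = open(path + ".sha256").read().strip()
    actual = hashlib.sha256(open(path, "rb").read()).hexdigest()
    if actual != stored:                   # silent corruption detected
        shutil.copyfile(replica, path)     # heal from the redundant copy
        return False
    return True
```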

As far as safeguarding data from external threats, this is a complicated issue. There are many methods of securing a system, but for post production, performance can’t be compromised. For companies following MPAA recommendations, putting the storage behind physical security is often considered enough. Unfortunately, for many companies without an IT staff, this is where the security stops, and the system is left open once you get access to the network. To solve this problem, we developed an LDAP user management system that is built into our units and provides that extra layer of software security at no additional charge. Storage access becomes user-based, so system activity can be monitored. As far as administering support, we designed an API gatekeeper to manage data to and from the database that is auditable and secure.

AlphaDogs‘ Terence Curren

Alpha Dogs is a full-service post house in Burbank, California. They provide color correction, graphic design, VFX, sound design and audio mixing.

How much data did you use/backup this year? How much more was that than the previous year? How much more data do you expect to use next year?
We are primarily a finishing house, so we use hundreds of TBs per year on our SAN. We work at higher resolutions, which means larger file sizes. When we have finished a job and delivered the master files, we archive to LTO and clear the project off the SAN. When we handle the offline on a project, obviously our storage needs rise exponentially. We do foresee those requirements rising substantially this year.

Terence Curren

Have you ever lost important data due to a hardware failure? Have you ever lost data due to an operator error? (Accidental overwrite, etc.)
We’ve been lucky in that area (knocking on wood) as our SANs are RAID-protected and we maintain a degree of redundancy. We have had clients’ transfer drives fail. We always recommend they deliver a copy of their media. In the early days of our SAN, which is the Facilis TerraBlock, one of our editors accidentally deleted a volume containing an ongoing project. Fortunately, Facilis engineers were able to recover the lost partition as it hadn’t been overwritten yet. That’s one of the things I really have appreciated about working with Facilis over the years — they have great technical support which is essential in our industry.

Do you find access speed to be a limiting factor with your current storage solution?
Not yet. As we get forced into heavily marketed but unnecessary formats like the coming 8K, we will have to scale to handle the bandwidth overload. I am sure the storage companies are all very excited about that prospect.

What percentage of your data’s value do you budget toward storage and data security?
Again, we don’t maintain long-term storage on projects so it’s not a large consideration in budgeting. Security is very important and one of the reasons our SANs are isolated from the outside world. Hopefully, this is an area in which easily accessible tools for network security become commoditized. Much like deadbolts and burglar alarms in housing, it is now a necessary evil.

What trends do you see in storage?
More storage and higher bandwidths, some of which is being aided by solid state storage, which is very expensive on our level of usage. The prices keep coming down on storage, yet it seems that the increased demand has caused our spending to remain fairly constant over the years.

Cinesite London‘s Chris Perschky

Perschky ensures that Cinesite’s constantly evolving infrastructure provides the technical backbone required for a visual effects facility. His team plans, installs and implements all manner of technology, in addition to providing technical support to the entire company.

Chris Perschky

How much data did you use/backup this year? How much more was that than the previous year? How much more data do you expect to use next year?
Depending on the demands of the project that we are working on we can generate terabytes of data every single day. We have become increasingly adept at separating out data we need to keep long-term from what we only require for a limited time, and our cleanup tends to be aggressive. This allows us to run pretty lean data sets when necessary.

I expect more 4K work to creep in next year and, as such, expect storage demands to increase accordingly.

Have you ever lost important data due to a hardware failure? Have you ever lost data due to an operator error? (Accidental overwrite, etc.)
Our thorough backup procedures mean that we have an offsite copy of all production data within a couple of hours of it being written. As such, when an artist has accidentally overwritten a file we are able to retrieve it from backup swiftly.

Do you find access speed to be a limiting factor with your current storage solution?
Only remotely, thereby requiring a caching solution.

What percentage of your data’s value do you budget toward storage and data security?
Due to the requirements of our clients, we do whatever is necessary to ensure the security of their IP and our work.

Cinesite also worked on Iron Spider for Avengers: Infinity War ©2018 Marvel Studios

What trends do you see in storage?
The trendy answer is to move all storage to the cloud, but it is just too expensive. That said, the benefits of cloud storage are well documented, so we need some way of leveraging it. I see more hybrid on-prem and cloud solutions providing the best of both worlds as demand requires. Full SSD solutions are still way too expensive for most of us, but multi-tier storage solutions will have a larger SSD cache tier as prices drop.

Panasas‘ RW Hawkins

What is the biggest trend you’ve seen in the past year in terms of storage?
The demand for more capacity certainly isn’t slowing down! New formats like ProRes RAW, HDR and stereoscopic images required for VR continue to push the need to scale storage capacity and performance. New Flash technologies address the speed, but not the capacity. As post production houses scale, they see that complexity increases dramatically. Trying to scale to petabytes with individual and limited file servers is a big part of the problem. Parallel file systems are playing a more important role, even in medium-sized shops.

RW Hawkins

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
VR (and, more generally, interactive content creation) is particularly interesting as it takes many of the aspects of VFX and interactive gaming and combines them with post. The VFX industry, for many years, has built batch-oriented pipelines running on multiple Linux boxes to solve many of their production problems. This same approach works well for interactive content production where the footage often needs to be pre-processed (stitched, warped, etc.) before editing. High speed, parallel filesystems are particularly well suited for this type of batch-based work.

The AI/ML space is red hot, and the applications seem boundless. Right now, much of the work is being done at a small scale where direct-attach, all-Flash storage boxes serve the need. As this technology is used on a larger scale, it will put demands on storage that can’t be met by direct-attached storage, so meeting those high-IOPS needs at scale is certainly something Panasas is looking at.

Can you talk about NVMe?
NVMe is an exciting technology, but not a panacea for all storage problems. While being very fast, and excellent at small operations, it is still very expensive, has small capacity and is difficult to scale to petabyte sizes. The next-generation Panasas ActiveStor Ultra platform uses NVMe for metadata while still leveraging spinning disk and SATA SSD. This hybrid approach, using each storage medium for what it does best, is something we have been doing for more than 10 years.

What do you do in your products to help safeguard your users’ data?
Panasas uses object-based data protection with RAID 6+. This software-based erasure-code protection, at the file level, provides the best scalable data protection. Only files affected by a particular hardware failure need to be rebuilt, and increasing the number of drives doesn’t increase the likelihood of losing data. In a sense, every file is individually protected. On the hardware side, all Panasas hardware provides non-volatile components, including cutting-edge NVDIMM technology, to protect our customers’ data. The file system has been proven in the field. We wouldn’t have the high-profile customers we do if we didn’t provide superior performance as well as superior data protection.

Users want more flexible workflows — storage in the cloud, on-premises, etc. How are your offerings reflective of that?
While Panasas leverages an object storage backend, we provide our POSIX-compliant file system client called DirectFlow to allow standard file access to the namespace. Files and directories are the “lingua franca” of the storage world, allowing ultimate compatibility. It is very easy to interface between on-premises storage, remote DR storage and public cloud/REST storage using DirectFlow. Data flows freely and at high speed using standard tools, which makes the Panasas system an ideal scalable repository for data that will be used in a variety of pipelines.

Alkemy X‘s Dave Zeevalk

With studios in Philly, NYC, LA and Amsterdam, Alkemy X provides live-action, design, post, VFX and original content for spots, branded content and more.

Dave Zeevalk

How much data did you use/backup this year? How much more was that than the previous year? How much more data do you expect to use next year?
Each year, our VFX department generates nearly a petabyte of data, from simulation caches to rendered frames. This year, we have seen a significant increase in data usage as client expectations continue to grow and 4K resolution becomes more prominent in episodic television and feature film projects.

In order to use our 200TB server responsibly, we have created a solid system for preserving necessary data and clearing unnecessary files on a regular basis. Additionally, we are diligent about archiving final projects to our LTO tape systems and removing them from our production server.

Have you ever lost important data due to a hardware failure? Have you ever lost data due to an operator error? (Accidental overwrite, etc.)

Because of our data redundancy, through hourly snapshots and daily backups, we have avoided any data loss even with hardware failure. Although hardware does fail, with these snapshots and backups on a secondary server, we are able to bring data back online extremely quickly in the case of hardware failure on our production server. Years ago, while migrating to Linux, a software issue completely wiped out our production server. Within two hours, we were able to migrate all data back from our snapshots and backups to our production server with no data loss.

Do you find access speed to be a limiting factor with your current storage solution?
There are a few scenarios where we do experience some issues with access speed to the production server. We do a good amount of heavy simulation work, at times writing dozens of terabytes per hour. While at our peak, we have experienced some throttled speeds due to the amount of data being written to the server. Our VFX team also has a checkpoint system for simulation where raw data is saved to the server in parallel to the simulation cache. This allows us to restart a simulation mid-way through the process if a render node drops or fails the job. This raw data is extremely heavy, so while using checkpoints on heavy simulations, we also experience some slower than normal speeds.

What percentage of your data’s value do you budget toward storage and data security?
Our active production server houses 200TB of storage space. We have a secondary backup server with equivalent storage space, to which we store hourly snapshots and daily backups.

What trends do you see in storage?
With client expectations continuing to rise, and 4K (and higher at times) becoming more and more regular on jobs, the need for more storage space is ever increasing.

Quantum‘s Jamie Lerner

What is the biggest trend you’ve seen in the past year in terms of storage?
Although the digital transformation to higher resolution content in M&E has been taking place over the past several years, the interesting aspect is that the pace of change over the past 12 months is accelerating. Driving this trend is the mainstream adoption of 4K and high dynamic range (HDR) video, and the strong uptick in applications requiring 8K formats.

Jamie Lerner

Virtual reality and augmented reality applications are booming across the media and entertainment landscape; everywhere from broadcast news and gaming to episodic television. These high-resolution formats add data to streams that must be ingested at a much higher rate, consume more capacity once stored and require significantly more bandwidth when doing realtime editing. All of this translates into a significantly more demanding environment, which must be supported by the storage solution.

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
New technologies for producing stunning visual content are opening tremendous opportunities for studios, post houses, distributors, and other media organizations. Sophisticated next-generation cameras and multi-camera arrays enable organizations to capture more visual information, in greater detail than ever before. At the same time, innovative technologies for consuming media are enabling people to view and interact with visual content in a variety of new ways.

To capitalize on new opportunities and meet consumer expectations, many media organizations will need to bolster their storage infrastructure. They need storage solutions that offer scalable capacity to support new ingest sources that capture huge amounts of data, with the performance to edit and add value to this rich media.

Can you talk about NVMe?
The main benefit of NVMe storage is that it provides extremely low latency — therefore allowing users to seek content at very high speed — which is ideal for high stream counts and compressed 4K content workflows.

However, NVMe resources are expensive. Quantum addresses this issue head-on by leveraging NVMe over fabrics (NVMeoF) technology. With NVMeoF, multiple clients can use pooled NVMe storage devices across a network at local speeds and latencies. And when combined with our StorNext, all data is accessible by multiple clients in a global namespace, making this high-performance tier of storage much more cost-effective. Finally, Quantum is in early field trials of a new advancement that will allow customers to benefit even more from NVMe-enabled storage.

What do you do in your products to help safeguard your users’ data?
A storage system must be able to accommodate policies ranging from “throw it out when the job is done” to “keep it forever” and everything in between. The cost of storage demands control over where data lives and when, how many copies of the data exist and where those copies reside over time.

Xcellis scale-out storage powered by StorNext incorporates a broad range of features for data protection. This includes integrated features such as RAID, automated copying, versioning and data replication functionality, all included within our latest release of StorNext.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
Given the differences in size and scope of organizations across the media industry, production workflows are incredibly varied and often geographically dispersed. Within this context, flexibility becomes a paramount feature of any modern storage architecture.

We provide flexibility in a number of important ways for our customers. From the perspective of system architecture, and recognizing there is no one-size-fits-all solution, StorNext allows customers to configure storage with multiple media types that balance performance and capacity requirements across an entire end-to-end workflow. Second, and equally important for those companies that have a global workforce, is that our data replication software FlexSync allows for content to be rapidly distributed to production staff around the globe. And no matter what tier of storage the data resides on, FlexTier provides coordinated and unified access to the content within a single global namespace.

EditShare‘s Bill Thompson

What is the biggest trend you’ve seen in the past year in terms of storage?
In no particular order, the biggest trends for storage in the media and entertainment space are:
1. The need to handle higher and higher data rates associated with higher-resolution and higher-frame-rate content. Across the industry, this is being addressed with Flash-based storage and the use of emerging technology like NVMe over “X” and 25/50/100G networking.

Bill Thompson

2. The ever-increasing concern about content security and content protection, backup and restoration solutions.

3. The request for more powerful analytics solutions to better manage storage resources.

4. The movement away from proprietary hardware/software storage solutions toward ones that are compatible with commodity hardware and/or virtual environments.

Can you talk about NVMe?
NVMe technology is very interesting and will clearly change the M&E landscape going forward. One of the challenges is that we are in the midst of changing standards, and we expect current PCIe-based NVMe components to be replaced by U.2/M.2 implementations. This migration will require important changes to storage platforms.

In the meantime, we offer non-NVMe Flash-based storage solutions whose performance and price points are equivalent to those claimed by early NVMe implementations.

What do you do in your products to help safeguard your users’ data?
EditShare has been at the forefront of user data protection for many years, beginning with our introduction of disk-based and tape-based automated backup and restoration solutions.

We expanded the types of data protection schemes and provided easy-to-use management tools that allow users to tailor the type of redundant protection applied to directories and files. Similarly, we now provide ACL Media Spaces, which allow user privileges to be precisely tailored to their tasks at hand; providing only the rights needed to accomplish their tasks, nothing more, nothing less.

Most recently, we introduced EFS File Auditing, a content security solution that enables system administrators to understand “who did what to my content” and “when and how they did it.”

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
The EditShare file system is now available in variants that support EditShare hardware-based solutions and hybrid on-premise/cloud solutions. Our Flow automation platform enables users to migrate from on-premise high-speed EFS solutions to cloud-based solutions, such as Amazon S3 and Microsoft Azure, offering the best of both worlds.

Rohde & Schwarz‘s Dirk Thometzek

What is the biggest trend you’ve seen in the past year in terms of storage?
Consumer behavior is the most substantial change that the broadcast and media industry has experienced over the past years. Content is consumed on-demand. In order to stay competitive, content providers need to produce more content. Furthermore, to make the content more desirable, technologies such as UHD and HDR need to be adopted. This obviously has an impact on the amount of data being produced and stored.

Dirk Thometzek

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
In media and entertainment there has always been remarkable growth in data over time, from the very first simple SCSI hard drives to huge network environments. Nowadays, however, that growth is tremendous, approximating an exponential curve. Considering that all media will be preserved for a very long time, the M&E storage market segment will keep on growing and innovating.

Looking at the amount of footage being produced, a big challenge is to find the appropriate data. Taking it a step further, there might be content that a producer wouldn’t even think of looking for, but that is relevant to the original metadata query. That is where machine learning and AI come into play. We are looking into automated content indexing with the minimum amount of human interaction, where the artificial intelligence learns autonomously and shares information with other databases. The real challenge here is to protect these intelligences from being compromised by unintentional access to the information.

What do you do to help safeguard your users’ data?
In collaboration with our Rohde & Schwarz Cybersecurity division, we are offering complete and protected packages to our customers, ranging from access restrictions on server rooms to encrypted data transfers. Cyber attacks are complex and opaque, but the security layer must be transparent and usable. In media, though, latency is just as critical, and every security layer usually introduces some.

Can you talk about NVMe?
In order to bring the best value to the customer, we are constantly looking for improvements. The direct PCI communication of NVMe certainly brings a huge improvement in terms of latency since it completely eliminates the SCSI communication layer, so no protocol translation is necessary anymore. This results in much higher bandwidth and more IOPS.

For internal data processing and databases, R&S SpycerNode uses NVMe, which really boosts its performance. Unfortunately, the economics of using this technology for media data storage are not yet favorable. We are dedicated to getting the best performance-to-cost ratio for the market, and since we have been developing video workstations and servers alongside storage for decades now, we know how to get the best performance out of a drive — spinning or solid state.

Economically, it doesn’t make sense to build a system with the latest and greatest technology for a workflow when standard technology will do, just because it is possible. The real art of storage technology lies in a highly customized configuration according to the technical requirements of an application or workflow. R&S SpycerNode will evolve over time and technologies will be added to the family.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
Although hybrid workflows are highly desirable, it is quite important to understand the advantages and limits of this technology. High-bandwidth, low-latency wide-area network connections come at a significant cost. Without a suitable connection, an uncompressed 4K production does not seem feasible from a remote location — uploading several terabytes to a co-location facility can take hours or even days, even if protocol acceleration is used. However, there are workflows, such as supplemental rendering or proxy editing, that do make sense to offload to a datacenter. R&S SpycerNode is ready to be an integral part of geographically scattered networks, and the Spycer Storage family will grow.

Dell EMC‘s Tom Burns

What is the biggest trend you’ve seen in the past year in terms of storage?
The most important storage trend we’ve seen is an increasing need for access to shared content libraries accommodating global production teams. This is becoming an essential part of the production chain for feature films, episodic television, sports broadcasting and now e-sports. For example, teams in the UK and in California can share asset libraries for their file-based workflow via a common object store, whether on-prem or hybrid cloud. This means they don’t have to synchronize workflows using point-to-point transmissions from California to the UK, which can get expensive.

Tom Burns

Achieving this requires seamless integration of on-premises file storage for the high-throughput, low-latency workloads with object storage. The object storage can be in the public cloud or you can have a hybrid private cloud for your media assets. A private or hybrid cloud allows production teams to distribute assets more efficiently and saves money, versus using the public cloud for sharing content. If the production needs it to be there right now, they can still fire up Aspera, Signiant, File Catalyst or other point-to-point solutions and have prioritized content immediately available, while allowing your on-premise cloud to take care of the shared content libraries.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
Dell Technologies offers end-to-end storage solutions where customers can position the needle anywhere they want. Are you working purely in the cloud? Are you working purely on-prem? Or, like most people, are you working somewhere in the middle? We have a continuous spectrum of storage between high-throughput low-latency workloads and cloud-based object storage, plus distributed services to support the mix that meets your needs.

The most important thing that we’ve learned is that data is expensive to store, granted, but it’s even more expensive to move. Storing your assets in one place and having that path name never change, that’s been a hallmark of Isilon for 15 years. Now we’re extending that seamless file-to-object spectrum to a global scale, deploying Isilon in the cloud in addition to our ECS object store on premises.

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
AR, VR, AI and other emerging technologies offer new opportunities for media companies to change the way they tell and monetize their stories. However, due to the large amounts of data involved, many media organizations are challenged when they rely on storage systems that lack either scalability or performance to meet the needs of these new workflows.

Dell EMC’s file and object storage solutions help media companies cost effectively tier their content based upon access. This allows media organizations to use emerging technologies to improve how stories are told and monetize their content with the assistance of AI-generated metadata, without the challenges inherent in many traditional storage systems.

With artificial intelligence, for example, where it was once the job of interns to categorize content in projects that could span years, AI gives media companies the ability to analyze content in near-realtime and create large, easily searchable content libraries as the content is being migrated from existing tape libraries to object-based storage, or ingested for current projects. The metadata involved in this process includes brand recognition and player/actor identification, as well as speech-to-text, making it easy to determine logo placement for advertising analytics and to find footage for use in future movies or advertisements.

With Dell EMC storage, AI technologies can be brought to the data, removing the need to migrate or replicate data to direct-attach storage for analysis. Our solutions also offer the scalability to store the content for years using affordable archive nodes in Isilon or ECS object storage.

In terms of AR and VR, we are seeing video game companies using this technology to change the way players interact with their environments. Not only have they created a completely new genre with games such as Pokemon Go, they have figured out that audiences want nonlinear narratives told through realtime storytelling. Although AR and VR adoption has been slower for movies and TV compared to the video game industry, we can learn a lot from the successes of video game production and apply similar methodologies to movie and episodic productions in the future.

Can you talk about NVMe?
NVMe solutions are a small but exciting part of a much larger trend: workflows that fully exploit the levels of parallelism possible in modern converged architectures. As we look forward to 8K, 60fps and realtime production, the usage of PCIe bus bandwidth by compute, networking and storage resources will need to be much more balanced than it is today.

When we get into realtime productions, these “next-generation” architectures will involve new production methodologies such as realtime animation using game engines rather than camera-based acquisition of physically staged images. These realtime processes will take a lot of cooperation between hardware, software and networks to fully leverage the highly parallel, low-latency nature of converged infrastructure.

Dell Technologies is heavily invested in next-generation technologies that include NVMe cache drives, software-defined networking, virtualization and containerization that will allow our customers to continuously innovate together with the media industry’s leading ISVs.

What do you do in your products to help safeguard your users’ data?
Your content is your most precious capital asset and should be protected and maintained. If you invest in archiving and backing up your content with enterprise-quality tools, then your assets will continue to be available to generate revenue for you. However, archive and backup are just two pieces of data security that media organizations need to consider. They must also take active measures to deter data breaches and unauthorized access to data.

Protecting data at the edge, especially at the scale required for global collaboration, can be challenging. We simplify this process through services such as SecureWorks, which includes offerings like security management and orchestration, vulnerability management, security monitoring, advanced threat services and threat intelligence services.

Our storage products are packed with technologies to keep data safe from unexpected outages and unauthorized access, and to meet industry standards such as alignment to MPAA and TPN best practices for content security. For example, Isilon’s OneFS operating system includes SyncIQ snapshots, providing point-in-time backup that updates automatically and generates a list of restore points.

Isilon also supports role-based access control and integration with Active Directory, MIT Kerberos and LDAP, making it easy to manage account access. For production houses working on multiple customer projects, our storage also supports multi-tenancy and access zones, which means that clients requiring quarantined storage don’t have to share storage space with potential competitors.

Our on-prem object store, ECS, provides long-term, cost-effective object storage with support for globally distributed active archives. This helps our customers with global collaboration, but also provides inherent redundancy. The multi-site redundancy creates an excellent backup mechanism as the system will maintain consistency across all sites, plus automatic failure detection and self-recovery options built into the platform.

Scale Logic‘s Bob Herzan

What is the biggest trend you’ve seen in the past year in terms of storage?
There is and has been considerable buzz around cloud storage, object storage, AI and NVMe. Scale Logic recently conducted a private survey of its customer base to help answer this question. What we found is that none of those buzzwords can be considered a trend. We also found that our customers were migrating away from SAN and focusing on building infrastructure around high-performance, scalable NAS.

Bob Herzan

They felt on-premises LTO was still the most viable option for archiving, and finding a more efficient and cost-effective way to manage their data was their highest priority for the next couple of years. There are plenty of early adopters testing out the buzzwords in the industry, but the trend — in my opinion — is to maximize a stable platform with the best overall return on the investment.

End users are not focused so much on storage, but on how a company like ours can help them solve problems within their workflows where storage is an important component.

Can you talk more about NVMe?
NVMe provides an any-K solution and superior metadata low-latency performance and works with our scale-out file system. All of our products have had 100GbE drivers for almost two years, enabling mesh technologies with NVMe for networks as well. As cost comes down, NVMe should start to become more mainstream this year — our team is well versed in supporting NVMe and ready to help facilities research the price-to-performance of NVMe to see if it makes sense for their Genesis and HyperFS Scale Out system.

With AI, VR and machine learning, our industry is even more dependent on storage. How are you addressing this?
We are continually refining and testing our best practices. Our focus on broadcast automation workflows over the years has already enabled our products for AI and machine learning. We are keeping up with the latest technologies, constantly testing in our lab with the latest in software and workflow tools and bringing in other hardware to work within the Genesis Platform.

What do you do in your products to help safeguard your users’ data?
This is a broad question that has different answers depending on which aspect of the Genesis Platform you may be talking about. Simply speaking, we can craft any number of data safeguard strategies and practices based on our customer needs, the current technology they are using and, most importantly, where they see their capacity growth and data protection needs moving forward. Our safeguards range from enterprise-quality components, mirrored sets, RAID-6, RAID-7.3 and RAID N+M to asynchronous data sync to a second instance, full HA with synchronous data sync to a second instance, virtual IP failover between multiple sites, and multi-tier DR and business continuity solutions.

In addition, the Genesis Platform’s 24×7 health monitoring service (HMS) communicates directly with installed products at customer sites, using the equipment serial number to track service outages, system temperature, power supply failure, data storage drive failure and dozens of other mission-critical status updates. This service is available to Scale Logic end users in all regions of the world and complies with enterprise-level security protocols by relying only on outgoing communication via a single port.

Users want more flexible workflows — storage in the cloud, on-premises. Are your offerings reflective of that?
Absolutely. This question defines our go-to-market strategy — it’s in our name and part of our day-to-day culture. Scale Logic takes a consultative role with its clients. We take our 30-plus years of experience and ask many questions. Based on the answers, we can give the customer several options. First off, many customers feel pressured to refresh their storage infrastructure before they’re ready. Scale Logic offers customized extended warranty coverage that takes the pressure off the client and allows them to review their options and then slowly implement the migration and process of taking new technology into production.

Also, our Genesis Platform has been designed to scale, meaning clients can start small and grow as their facility grows. We are not trying to force a single solution to our customers. We educate them on the various options to solve their workflow needs and allow them the luxury of choosing the solution that best meets both their short-term and long-term needs as well as their budget.

Facilis‘ Jim McKenna

What is the biggest trend you’ve seen in the past year in terms of storage?
Recently, I’ve found that conversations around storage inevitably end up highlighting some non-storage aspects of the product. Sort of the “storage and…” discussion where the technology behind the storage is secondary to targeted add-on functionality. Encoding, asset management and ingest are some of the ways that storage manufacturers are offering value-add to their customers.

Jim McKenna

It’s great that customers can now expect more from a shared storage product, but as infrastructure providers we should be most concerned with advancing the technology of the storage system. I’m all for added value — we offer tools ourselves that assist our customers in managing their workflow — but that can’t be the primary differentiator. A premium shared storage system will provide years of service through the deployment of many supporting products from various manufacturers, so I advise people to avoid getting caught up in the value-add marketing from a storage vendor.

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
Our industry has always been dependent upon storage in the workflow, but now facilities need to manage large quantities of data efficiently, so it’s becoming more about scaled networks. In the traditional SAN environment, hard-wired Fibre Channel clients are the exclusive members of the production workgroup.

With scalable shared-storage through multiple connection options, everyone in the facility can be included in the collaboration on a project. This includes offload machines for encoding and rendering large HDR and VR content, and MAM systems with localized and cloud analysis of data. User accounts commonly grow into the triple digits when producers, schedulers and assistants all require secure access to the storage network.

Can you talk about NVMe?
Like any new technology, the outlook for NVMe is promising. Solid state architecture solves a lot of problems inherent in HDD-based systems — seek times, read speeds, noise and cooling, form factor, etc. If I had to guess a couple years ago, I would have thought that SATA SSDs would be included in the majority of systems sold by now; instead they’ve barely made a dent in the HDD-based unit sales in this market. Our customers are aware of new technology, but they also prioritize tried-and-true, field-tested product designs and value high capacity at a lower cost per GB.

Spinning HDD will still be the primary storage method in this market for years to come, although solid state has advantages as a helper technology for caching and direct access for high-bandwidth requirements.

What do you do in your products to help safeguard your users’ data?
Integrity and security are priority features in a shared storage system. We go about our security differently than most, and because of this our customers have more confidence in their solution. Because permissions emanate from the volume level and are abstracted from the complexities of network ownership attributes, no network security training is required. This simplicity of securing data to only the necessary people increases data integrity and privacy.

In the case of data integrity during hardware failure, our software-defined data protection has been guarding our customers’ assets for over 13 years and is continually improved. With increasing drive sizes, the time to complete a drive recovery is an important factor, as is system usability during the process.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
When data lifecycle is a concern of our customers, we consult on methods of building a storage hierarchy. There is no one-size-fits-all approach here, as every workflow, facility and engineering scope is different.

Tier 1 storage is our core product line, but we also have solutions for nearline (tier 2) and archive (tier 3). When the discussion turns to the cloud as a replacement for some of the traditional on-premises storage offerings, the complexity of the pricing structure, access model and interface becomes a gating factor. There are a lot of ways to effectively use the cloud, such as compute (AI, encoding, etc.), business continuity, workflow (WAN collaboration) or simple cold storage. These tools, when combined with a strong on-premises storage network, will enhance productivity and ensure on-time delivery of product.

mLogic’s co-founder/CEO Roger Mabon

What is the biggest trend you’ve seen in the past year in terms of storage?
In the M&E industry, high-resolution 4K/8K multi-camera shoots, stereoscopic VR and HDR video are commonplace and are contributing to the unprecedented amounts of data being generated in today’s media productions. This trend will continue as frame rates and resolutions increase and video professionals move to shoot in these new formats to future-proof their content.

Roger Mabon

With AI, VR and machine learning, etc., our industry is even more dependent on storage. Can you talk about that?
Absolutely. In this environment, content creators must deploy storage solutions that are high-capacity, high-performance and fault-tolerant. Furthermore, all of this content must be properly archived so it can be accessed well into the future. mLogic’s mission is to provide affordable RAID and LTO tape storage solutions that fit this critical need.

How are you addressing this?
The tsunami of data being produced in today’s shoots must be properly managed. First and foremost is the need to protect the original camera files (OCF). Our high-performance mSpeed Thunderbolt 3 RAID solutions are being deployed on-set to protect these OCF. mSpeed is a desktop RAID that features plug-and-play Thunderbolt connectivity, capacities up to 168TB and RAID-6 data protection. Once the OCF is transferred to mSpeed, camera cards can be wiped and put back into production.

The next step involves moving the OCF from the on-set RAID to LTO tape. Our portable mTape Thunderbolt 3 LTO solutions are used extensively by media pros to transfer OCF to LTO tape. LTO tape cartridges are shelf-stable for 30+ years and cost around $10 per TB. That said, I find that many productions skip the LTO transfer and rely solely on single hard drives to store the OCF. This is a recipe for disaster, as hard drives sitting on a shelf have a lifespan of only three to five years. Companies working with the likes of Netflix are required to use LTO for this very reason. Completed projects should also be offloaded from hard drives and RAIDs to LTO tape. These hard drive systems can then be put back into action for the tasks they are designed for: editing, color correction, VFX, etc.

Can you talk about NVMe?
mLogic does not currently offer storage solutions that incorporate NVMe technology, but we do recognize numerous use cases for content creation applications. Intel is currently shipping an 8TB SSD with PCIe NVMe 3.1 x4 interface that can read/write data at 3000+ MB/second! Imagine a crazy fast and ruggedized NVMe shuttle drive for on-set dailies…

What do you do in your products to help safeguard your users’ data?
Our 8- and 12-drive mSpeed solutions feature hardware RAID data protection. mSpeed can be configured in multiple RAID levels, including RAID-6, which will protect the content stored on the unit even if two drives fail. Our mTape solutions are specifically designed to make it easy to offload media from spinning drives and archive the content to LTO tape for long-term data preservation.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
We recommend that you make two LTO archives of your content that are geographically separated in secure locations such as the post facility and the production facility. Our mTape Thunderbolt solutions accomplish this task.

With regard to the cloud, transferring terabytes upon terabytes of data takes an enormous amount of time and can be prohibitively expensive, especially when you need to retrieve the content. For now, cloud storage is reserved for productions with big pipes and big budgets.

OWC president Jennifer Soulé

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
We’re constantly working to provide more capacity and faster performance.  For spinning disk solutions, we’re making sure that we’re offering the latest sizes in ever-increasing bays. Our ThunderBay line started as a four-bay, went to a six-bay and will grow to eight-bay in 2019. With 12TB drives, that’s 96TB in a pretty workable form factor. Of course, you also need performance, and that is where our SSD solutions come in as well as integrating the latest interfaces like Thunderbolt 3. For those with greater graphics needs, we also have our Helios FX external GPU box.

Can you talk about NVMe?
With our Aura Pro X, Envoy Pro EX, Express 4M2 and ThunderBlade, we’re already into NVMe and don’t see that stopping. By the end of 2019, we expect virtually all of our external Flash-based solutions will be NVMe-based rather than SATA. As the cost of Flash goes down and performance and capacity go up, we expect broader adoption, both as primary storage and in secondary cache setups. The 2TB drive supply will stabilize, we should see 4TB drives, and PCIe Gen 4 will double the available bandwidth. Bigger, faster and cheaper is a pretty awesome combination.

What do you do in your products to help safeguard your users’ data?
We focus more on providing products that are compatible with different encryption schemas rather than building something in. As far as overall data protection, we’re always focused on providing the most reliable storage we can. We spec our power supplies above what is required so that insufficient power is never a factor. We test a multitude of drives in our enclosures to ensure we’re providing the best-performing drives.

For our RAID solutions, we do burn-in testing to make sure all the drives are solid. Our SoftRAID technology also provides in-depth drive health monitoring, so you know well in advance if a drive is failing. This is critical because many other SMART-based systems fail to detect bad drives, leading to subpar system performance and corrupted data. Of course, all the hardware and software technology we put into our drives doesn’t do much if people don’t back up their data — so we also work with our customers to find the right solution for their use case or workflow.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
I definitely think we hit on flexibility within the on-prem space by offering a full range of single- and multi-drive solutions, spinning disk and SSD options, and portable to rackmounted units that can be fully set-up solutions or DIY, where you can use drives you might already have. You’ll have to stay tuned on the cloud part, but we do have plans to use the cloud to expand on the data protection our drives already offer.

A Technologist’s Data Storage Primer

By Mike McCarthy

Storage is the concept of keeping all of the files for a particular project or workflow, but they may not all be stored in the same place — different types of data have different requirements and different storage solutions have different strengths and features.

At a fundamental level, most digital data is stored on HDDs or SSDs. HDDs, or hard disk drives, are mechanical devices that store the data on a spinning magnetic surface and move read/write heads over that surface to access the data. They currently max out around 200MB/s and 5ms latency.

SSDs, or solid-state drives, involve no moving parts. SSDs can be built with a number of different architectures and interfaces, but most are based on the same basic Flash memory technology as the CF or SD card in your camera. Some SSDs are SATA drives that use the same interface and form factor as a spinning disk for easy replacement in existing HDD-compatible devices. These devices are limited to SATA’s bandwidth of 600MB/s. Other SSDs use the PCIe interface, either in full-sized PCIe cards or the smaller M.2 form factor. These have much higher potential bandwidths, up to 3000MB/s.

Currently, HDDs are much cheaper for storing large quantities of data but require some level of redundancy for security. SSDs can also fail, but it is a much rarer occurrence. Data recovery for either is very expensive. SSDs are usually cheaper for achieving high bandwidth, unless large capacities are also needed.

RAIDs
Traditionally, hard drives used in professional contexts are grouped together for higher speeds and better data security. These are called RAIDs, which stands for redundant array of independent disks. There are a variety of different approaches to RAID design that are very different from one another.

RAID-0 or striping is technically not redundant, but every file is split across each disk, so each disk only has to retrieve its portion of a requested file. Since these happen in parallel, the result is usually faster than if a single disk had read the entire file, especially for larger files. But if one disk fails, every one of your files will be missing a part of its data, making the remaining partial information pretty useless. The more disks in the array, the higher the chances of one failing, so I rarely see striped arrays composed of more than four disks. It used to be popular to create striped arrays for high-speed access to restorable data, like backed-up source footage, or temp files, but now a single PCIe SSD is far faster, cheaper, smaller and more efficient in most cases.

Sugar

RAID-1 or mirroring is when all of the data is written to more than one drive. This limits the array’s capacity to the size of the smallest source volume, but the data is very secure. There is no speed benefit to writes since each drive must write all of the data, but reads can be distributed across the identical drives with similar performance as RAID-0.

RAID-3, -5 and -6 try to achieve a balance between those benefits for larger arrays with more disks (minimum three). They all require more complicated controllers, so they are more expensive for the same levels of performance. RAID-3 stripes data across all but one drive, calculates parity (odd/even) data from the data drives and stores it on the remaining drive. This allows the data from any single failed drive to be restored, based on the parity data. RAID-5 is similar, but the parity volume alternates depending on the block, allowing reads to be shared across all disks, not just the “data drives.”

So the capacity of a RAID-3 or RAID-5 array will be the minimum individual disk capacity times the number of disks minus one. RAID-6 is similar but stores two drives worth of parity data, which via some more advanced math than odd/even, allows it to restore the data even if two drives fail at the same time. RAID-6 capacity will be the minimum individual disk capacity times the number of disks minus two, and is usually only used on arrays with many disks. RAID-5 is the most popular option for most media storage arrays, although RAID-6 becomes more popular as the value of the data stored increases and the price of extra drives decreases over time.
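
As a rough illustration, those capacity rules reduce to simple arithmetic. Here is a minimal sketch in Python, assuming identical drives and ignoring filesystem formatting overhead (which, as noted later, trims the reported capacity further):

```python
def raid_usable_capacity_tb(level, drive_tb, drive_count):
    """Rough usable capacity, in TB, of an array of identical drives.

    Ignores filesystem formatting overhead (a 2TB disk formats
    to roughly 1.8TB).
    """
    if level == 0:                     # striping: no redundancy at all
        return drive_tb * drive_count
    if level == 1:                     # mirroring: capacity of one drive
        return drive_tb
    if level in (3, 5):                # one drive's worth of parity
        return drive_tb * (drive_count - 1)
    if level == 6:                     # two drives' worth of parity
        return drive_tb * (drive_count - 2)
    raise ValueError("unsupported RAID level")

print(raid_usable_capacity_tb(5, 2, 4))    # 6 TB (four 2TB drives, RAID-5)
print(raid_usable_capacity_tb(6, 12, 24))  # 264 TB (24 x 12TB drives, RAID-6)
```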

Storage Bandwidth
Digital data is stored as a series of ones and zeros, each of which is a bit. One byte is 8 bits, which frequently represents one letter of text, or one pixel of an image (8-bit single channel). Bits are frequently referenced in large quantities to measure data rates, while bytes are usually referenced when measuring stored data. I prefer to use bytes for both purposes, but it is important to know the difference. A Megabit (Mb) is one million bits, while a Megabyte (MB) is one million bytes, or 8 million bits. Similar to metric, Kilo is thousand, Mega is million, Giga is billion, and Tera is trillion. Anything beyond that you can learn as you go.

Networking speeds are measured in bits (Gigabits), but with headers and everything else, it is safer to divide by 10 when converting speed into bytes per second. Estimate 100MB/s for Gigabit, up to 1000MB/s on 10GbE, and around 500MB/s for the new NBase-T standard. Similarly, when transferring files over a 30Mb Internet connection, expect around 3MB/s, then multiply by 60 or 3,600 to get to minutes or hours (180MB/min or 9,600MB/hr in this case). So if you have to download a 10GB file on that connection, come back to check on it in an hour.
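
That divide-by-ten rule of thumb translates directly into quick estimates. A minimal sketch (real-world throughput varies with protocol overhead and the drives on either end):

```python
def usable_mb_per_sec(link_mbps):
    """Rule of thumb: divide a link's line rate in megabits by ~10 for MB/s."""
    return link_mbps / 10

def transfer_hours(file_gb, link_mbps):
    """Approximate transfer time for a file over a given connection."""
    seconds = file_gb * 1000 / usable_mb_per_sec(link_mbps)
    return seconds / 3600

print(usable_mb_per_sec(1000))           # ~100 MB/s on Gigabit Ethernet
print(usable_mb_per_sec(10000))          # ~1000 MB/s on 10GbE
print(round(transfer_hours(10, 30), 1))  # ~0.9 hours for 10GB over a 30Mb line
```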

Magnopus

Because networking standards are measured in bits, and because networking is so important for sharing video files, many video file types are measured in bits as well. An 8Mb H.264 stream is 1MB per second. DNxHD36 is 36Mb/s (or 4.5MB/s when divided by eight), DV and HDV are 25Mb, DVCProHD is 100Mb, etc. Other compression types have variable bit rates depending on the content, but there are still average rates we can make calculations from. Any file’s size divided by its duration will reveal its average data rate. It is important to make sure that your storage has the bandwidth to handle as many streams of video as you need, which will be that average data rate times the number of streams. So 10 streams of DNxHD36 will be 360Mb or 45MB/s.
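
Those conversions are easy to script. A small sketch using the figures above (substitute any codec’s average bit rate):

```python
def average_rate_mb_per_sec(file_size_mb, duration_sec):
    """A file's average data rate is simply its size divided by its duration."""
    return file_size_mb / duration_sec

def required_bandwidth_mb_per_sec(bitrate_mbps, stream_count):
    """Aggregate storage bandwidth needed for simultaneous playback streams."""
    return bitrate_mbps / 8 * stream_count  # divide by 8: megabits to megabytes

print(required_bandwidth_mb_per_sec(8, 1))    # 1.0 MB/s for one 8Mb H.264 stream
print(required_bandwidth_mb_per_sec(36, 10))  # 45.0 MB/s for 10 DNxHD36 streams
```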

The other issue to account for is IO requests and drive latency. Lots of small requests require not just high total transfer rates, but high IO performance as well. Hard drives can only fulfill around 100 individual requests per second, regardless of how big those requests are. So while a single drive can easily sustain a 45MB/s stream, satisfying 10 different sets of requests may keep it so busy bouncing between the demands that it can’t keep up. You may need a larger array, with a higher number of (potentially) smaller disks to keep up with the IO demands of multiple streams of data. Audio is worse in this regard in that you are dealing with lots of smaller individual files as your track count increases, even though the data rate is relatively low. SSDs are much better at handling larger numbers of individual requests, usually measured in the thousands or tens of thousands per second per drive.
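
To size an array for both constraints, the same arithmetic can be applied to request rates. A rough sketch using the ballpark HDD figures above (about 200MB/s and 100 requests per second per drive); the per-stream request rate is an illustrative assumption that depends on file and block sizes:

```python
import math

def hdd_count_needed(stream_count, stream_mb_per_sec, requests_per_stream,
                     drive_mb_per_sec=200, drive_iops=100):
    """Minimum drives to satisfy both the bandwidth and the IO-request demand."""
    by_bandwidth = math.ceil(stream_count * stream_mb_per_sec / drive_mb_per_sec)
    by_iops = math.ceil(stream_count * requests_per_stream / drive_iops)
    return max(by_bandwidth, by_iops)

# 10 DNxHD36 streams (4.5MB/s each), each issuing ~25 read requests per second:
print(hdd_count_needed(10, 4.5, 25))  # 3 drives: IO requests, not bandwidth, dominate
```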

Storage Capacity
Capacity on the other hand is simpler. Megabytes are usually the smallest increments of data that we have to worry about calculating. A media type’s data rate (in MB/sec) times its duration (in seconds) will give you its expected file size. If you are planning to edit a feature film with 100 hours of offline content in DNxHD36, that is 3600×100 seconds, times 4.5MB/s, equaling 1620000MB, 1620GB, or simply about 1.6TB. But you should add some headroom for unexpected needs, and then a 2TB disk is about 1.8TB when formatted, so it will just barely fit. It is probably worth sizing up to at least 3TB if you are planning to store your renders and exports on there as well.
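
That calculation generalizes to any codec and running time. A quick sketch (the 25% headroom is an assumption, not a rule):

```python
def project_size_tb(hours_of_media, rate_mb_per_sec, headroom=1.25):
    """Estimated project storage in TB, with some padding for renders and extras."""
    size_mb = hours_of_media * 3600 * rate_mb_per_sec
    return size_mb / 1_000_000 * headroom

# 100 hours of DNxHD36 offline media at 4.5MB/s:
print(round(project_size_tb(100, 4.5, headroom=1.0), 2))  # 1.62 TB raw
print(round(project_size_tb(100, 4.5), 2))                # ~2 TB with headroom
```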

Once you have a storage solution of the required capacity there is still the issue of connecting it to your system. The most expensive options connect through the network to make them easier to share (although more is required for true shared storage), but that isn’t actually the fastest option or the cheapest. A large array can be connected over USB3 or Thunderbolt, or via the SATA or SAS protocol directly to an internal controller.

There are also options for Fibre Channel, which can allow sharing over a SAN, but this is becoming less popular as 10GbE becomes more affordable. Gigabit Ethernet and USB3 won’t be fast enough to play back high-bandwidth files, but 10GbE, multichannel SAS, Fibre Channel and Thunderbolt can all handle almost anything up to uncompressed 4K.

Direct attached storage will always have the highest bandwidth and lowest latency, as it has the fewest steps between the stored files and the user. Using Thunderbolt or USB adds another controller and hop, Ethernet even more so.

Different Types of Project Data
Now that we know the options for storage, let’s look at the data we anticipate needing to store. First off we will have lots of video footage of source media (either camera original files, transcoded editing dailies, or both). This is usually in the Terabytes, but the data rates vary dramatically — from 1Mb H.264 files to 200Mb ProRes files to 2400Mb Red files. The data rate for the files you are playing back, combined with the number of playback streams you expect to use, will determine the bandwidth you need from your storage system. These files are usually static in that they don’t get edited or written to in any way after creation.

The exceptions would be sidecar files like RMD and XML files, which will require write access to the media volume. If a certain set of files is static, as long as a backup of the source data exists, they don’t need to be backed up on a regular basis and don’t even necessarily need redundancy. Although if the cost of restoring that data would be high, in regards to lost time during that process, some level of redundancy is still recommended.

Another important set of files we will have is our project files, which actually record the “work” we do in our application. They contain instructions for manipulating our media files during playback or export. The files are usually relatively small, and are constantly changing as we use them. That means they need to be backed up on a regular basis. The more frequent the backups, the less work you lose when something goes wrong.

We will also have a variety of exports and intermediate renders over the course of the project. Whether they are flattened exports for upload and review, VFX files or other renders, these are a more dynamic set of files than our original source footage. And they are generated on our systems instead of being imported from somewhere else. These can usually be regenerated from their source projects, if necessary, but the time and effort required usually makes it worth it to invest in protecting or backing them up. In most workflows, these files don’t change once they are created, which makes it easier to back them up if desired.

There will also be a variety of temp files generated by most editing or VFX programs. Some of these files need high-speed access for best application performance, but they rarely need to be protected or backed up because they can be automatically regenerated by the source applications on the fly if needed.

Choosing the Right Storage for Your Needs
Ok, so we have source footage, project files, exports and temp files that we need to find a place for. If you have a system or laptop with a single data volume, the answer is simple: It all goes on the C drive. But we can achieve far better performance if we have the storage infrastructure to break those files up onto different devices. Newer laptops frequently have both a small SSD and a larger hard disk. In that case we would want our source footage on the (larger) HDD, while the project files should go on the (safer) SSD.

Usually your temp file directories should be located on the SSD as well since it is faster, and your exports can go either place, preferably the SSD if they fit. If you have an external drive of source footage connected, you can back all files up there, but you should probably work from projects stored on the local system, playing back media from the external drive.

A professional workstation can have a variety of different storage options available. I have a system with two SSDs and two RAIDs, so I store my OS and software on one SSD, my projects and temp files on the other SSD, my source footage on one RAID and my exports on the other. I also back up my project folder to the exports RAID on a daily basis, since the SSDs have no redundancy.
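
That daily project backup can be as simple as a scheduled copy script. A minimal sketch with hypothetical drive letters and folder names (any sync utility or scheduler could do the same job):

```python
import shutil
from datetime import date
from pathlib import Path

# Hypothetical locations: a fast but non-redundant project SSD,
# and a RAID-protected exports volume used for backups.
PROJECTS = Path("D:/Projects")
BACKUP_ROOT = Path("F:/Exports/ProjectBackups")

def backup_projects():
    """Copy the whole project folder into a dated folder on the redundant RAID."""
    target = BACKUP_ROOT / date.today().isoformat()
    shutil.copytree(PROJECTS, target)  # raises if today's backup already exists
    return target

if __name__ == "__main__":
    print("Backed up to", backup_projects())
```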

Individual Store Solution Case Study Examples
If you are natively editing a short film project shot on Red, then R3Ds can be 300MB/s. That is 1080GB/hour, so five hours of footage will be just over 5TB. It could be stored on a single 6TB external drive, but that won’t give you the bandwidth to play back in real-time (hard drives usually top out around 200MB/s).

Striping your data across two drives in one of those larger external enclosures would probably provide the needed performance, but with that much data you are unlikely to have a backup elsewhere. So data security becomes more of a concern, leading us toward a RAID-5-based solution. A four-disk array of 2TB drives provides 6TB of usable storage at RAID-5 (4x2TB = 8TB raw capacity, minus 2TB of parity data, equals 6TB of usable storage capacity). Using an array of eight 1TB drives would provide higher performance and 7TB of space before formatting (8x1TB = 8TB raw capacity, minus 1TB of parity, because a single drive failure would only lose 1TB of data in this configuration), but will cost more: an eight-port RAID controller and eight-bay enclosure cost more than four-bay equivalents, and two 1TB drives are usually more expensive than one 2TB drive.

Larger projects deal with much higher numbers. Another project has 200TB of Red footage that needs to be accessible on a single volume. A 24-bay enclosure with 12TB drives provides 288TB of space, minus two drives’ worth of data for RAID-6 redundancy (288TB raw minus [2 x 12TB for parity] = 264TB usable capacity), which will be more like 240TB of space available in Windows once it is formatted.
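
Both case studies come down to the same back-of-the-envelope math. A quick sketch of the numbers quoted above (capacities are before filesystem formatting, which trims them further):

```python
def footage_tb(hours, rate_mb_per_sec):
    """Raw footage size in TB from an average data rate and shoot length."""
    return hours * 3600 * rate_mb_per_sec / 1_000_000

# Short film: five hours of 300MB/s R3D footage.
print(round(footage_tb(5, 300), 1))  # ~5.4 TB, just over 5TB

# RAID-5 options for that footage (usable = raw minus one drive of parity):
print(4 * 2 - 2)                     # 6 TB from four 2TB drives
print(8 * 1 - 1)                     # 7 TB from eight 1TB drives

# Larger project: 24 x 12TB at RAID-6 (raw minus two drives of parity).
print(24 * 12 - 2 * 12)              # 264 TB usable before formatting
```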

Sharing Storage and Files With Others
As Ethernet networking technology has improved, the benefits of expensive SAN (storage area network) solutions over NAS (network attached storage) solutions have diminished. 10Gigabit Ethernet (10GbE) transfers over 1GB of data a second and is relatively cheap to implement. NAS has the benefit of a single host system controlling the writes, usually with software included in the OS. This prevents data corruption and also isolates the client devices from the file system, allowing PC, Mac and Linux devices to all access the same files. This comes at the cost of slightly increased latency and occasionally lower total bandwidth, but the prices and complexity of installation are far lower.

So now all but the largest facilities and most demanding workflows are being deployed with NAS-based shared storage solutions. This can be as simple as a main editing system with a large direct-attached array sharing its media with an assistant station over a direct 10GbE link, for about $50. This can be scaled up by adding a switch and connecting more users to it, but the more users sharing the data, the greater the impact on the host system and the lower the overall performance. Beyond three or four users, it becomes prudent to have a dedicated host system for the storage, for both performance and stability. Once you are buying a dedicated system, there are a variety of other functionalities offered by different vendors to improve performance and collaboration.

Bin Locking and Simultaneous Access
The main step to improve collaboration is to implement what is usually referred to as a “bin locking system.” Even with a top-end SAN solution and strict permissions controls there is still the possibility of users overwriting each other’s work, or at the very least branching the project into two versions that can’t easily be reconciled.

If two people are working on the same sequence at the same time, only one of their sets of changes is going to make it to the master copy of the file without some way of combining the changes (and solutions are being developed). But usually the way to avoid that is to break projects down into smaller pieces and make sure that no two people are ever working on the exact same part. This is accomplished by locking the part (or bin) of the project that a user is editing so that no one else may edit it at the same time. This usually requires some level of server functionality because it involves changes that are not always happening at the local machine.

Avid requires specific support for that from the storage host in order for it to enable that feature. Adobe on the other hand has implemented a simpler storage-based solution, which is effective but not infallible, that works on any shared storage device that offers users write access.

A Note on iSCSI
iSCSI arrays offer some interesting possibilities for read-only data, like source footage, as iSCSI gives block-level access for maximum performance and runs on any network without expensive software. The only limit is that only one system can copy new media to the volume, and there must be a secure way to ensure the remaining systems have read-only access. Projects and exports must be stored elsewhere, but those files require much less capacity and bandwidth than source media. I have not had the opportunity to test out this hybrid SAN theory since I don’t have iSCSI appliances to test with.

A Note on Top-End Ethernet Options
40Gb Ethernet products have been available for a while, and we are now seeing 25Gb and 100Gb Ethernet products as well. 40Gb cards can be had quite cheaply, and I was tempted to use them for direct connect, hoping to see 4GB/s to share fast SSDs between systems. But 40Gb Ethernet is actually a trunk of four parallel 10Gb links, and each individual connection is limited to 10Gb. It is easy to share the 40Gb of aggregate bandwidth across 10 systems accessing a 40Gb storage host, but very challenging to get more than 10Gb to a single client system. Having extra lanes on the highway doesn’t get you to work any faster if there are no other cars on the road; it only helps when there is lots of competing traffic.

25Gb Ethernet on the other hand will give you access to nearly 3GB/s for single connections, but as that is newer technology, the prices haven’t come down yet ($500 instead of $50 for a 10GbE direct link). 100Gb Ethernet is four 25Gb links trunked together, and subject to the same aggregate limitations as 40Gb.

Main Image: Courtesy of Sugar Studios LA


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Storage for Post Studios

By Karen Moltenbrey

The post industry relies heavily on storage solutions, without question. Facilities are juggling a variety of tasks and multiple projects all at once. And deadlines are always looming. Thus, these studios need a storage solution that is fast and reliable. Each studio has different needs and searches to find the right system to fit its particular workflow. Luckily, there are many storage options for pros to choose from.

For this article, we spoke with two post houses about their storage solutions and why they are a good fit for each of their needs.

Sugar Studios LA
Sugar Studios LA is a one-stop-shop playground for filmmakers that offers a full range of post production services, including editorial, color, VFX, audio, production and finishing, with each department led by seasoned professionals. Its office suites in the Wiltern Theater Tower, in the center of LA, serve an impressive list of clients, from numerous independent film producers and distributors to Disney, Marvel, Sony, MGM, Universal, Showtime, Netflix, AMC, Mercedes-Benz, Ferrari and others.

Jijo Reed and Sting in one of their post suites.

With so much important data in play at one time, Sugar needs a robust, secure and reliable storage system. However, with diverse offerings come diverse requirements. For its online and color projects, Sugar uses a Symply SAN with 200TB of usable storage. The color workstations are connected via 10Gb Ethernet over Fibre with a 40Gb uplink to the network. For mass storage and offline work, the studio uses a MacOS server acting as a NAS, with 530TB of usable storage connected via a 40Gb network uplink. For Avid offline jobs, the facility has an Avid Nexis Pro with 40TB of storage, and for Avid Pro Tools collaboration, a Facilis TerraBlock with 40TB of usable storage.

“We can collaborate with any and all client stations working on the same or different media and sharing projects across multiple software platforms,” says Jijo Reed, owner/executive producer of Sugar. “No station is limited to what it can do, since every station has access to all media. Centralized storage is so important because not only does it allow collaboration, we always have access to all media and don’t have to fumble through drives. It is also RAID-protected, so we don’t have to be concerned with losing data.”

Prior to employing the centralized storage, Sugar had been using G-Technology’s G-RAID drives, changing over in late 2016. “Once our technical service advisor, Zach Moller, came on board, he began immediately to institute a storage network solution that was tailored to our workflow,” says Reed.

Reed, an award-winning director/producer, founded the company in 2012, using a laptop (running Final Cut Pro 7) and an external hard drive he had purchased on sale at Fry’s. His target base at the time was producers and writers needing sizzle trailers to pitch their projects — at a time when the term “sizzle trailer” was not part of the common vernacular. “I attended festivals to pitch my wares, producing over 15 sizzles the first year,” he says, “and it grew from there.”

Since Reed was creating sizzles for yet-to-be-made features, he was in “pole position” to handle the post for some of these independent films when they got funded. In 2015, he, along with his senior editor, Paul Buhl, turned their focus to feature post work, which was “more lucrative and less exhausting, but mostly, we wanted to tell stories – the whole story.” He rebranded and changed the name of the company from Sizzlepitch to Sugar Studios, and brought on a feature post producer, Chris Harrington. Reed invested heavily in the company, purchasing equipment and acquiring space. Soon, one bay became two, then three and so on. Currently, the company spans three full floors, including the penthouse of the Wiltern Theater Tower.

As Reed proudly points out, the studio space features 21 bays and workstations, two screening theaters, including a 25-seat color and mix DI stage with a Barco DP4K projector and Dolby Atmos configuration. “We are fully staffed, all under one roof, with editorial, full audio services, color correction/grading, VFX and a greenscreen cyclorama stage with on-site 4K cameras, grip and lighting,” he details. “But, it’s the people who make this work. Our passion is obvious to our clients.”

While Sugar was growing and expanding, so, too, was its mass storage solution. According to Zach Moller, it started with the NAS due to its low price and fast (10Gb) connection to every client machine. “The Symply SAN solution was needed because we required a high-bandwidth system for online and color playback that used Fibre Channel technology for the low latency and local drive configuration,” he says.

Moreover, the facility wanted flexibility with its SAN solution; it was very expensive to have every machine connected via Fibre Channel, “and frankly, we didn’t need that bandwidth,” Reed says. “Symply allowed us to have client machines choose whether they connected via Fibre Channel or 10Gb. If this wasn’t the case, we would have been in a pickle, having to purchase expansion chassis for every machine to open up additional PCI slots.” (The bulk of the machines at Sugar connect using the pre-existing 10Gb Ethernet-over-fiber network, thus negating the need to give up another PCIe slot for a Fibre Channel card.)

American Dreamer

At Sugar, the camera masters and production audio are loaded directly to the NAS for mass storage. The group then archives the camera masters to LTO as an additional, deep-archive layer of backup. During the LTO archive pass, the studio creates the dailies for the offline edit in either Avid Media Composer (where the MXFs are migrated to the Avid Nexis server) or Adobe Premiere (where the ProRes dailies continue to live on the NAS).

When adding visual effects, the artists render to the Symply SAN when preparing for the online, color and finishing.

The studio works with a wide range of codecs, some of which are extremely taxing on the systems. And, the SAN is ideal, especially for the raster image files (EXRs), since each frame has such a high density — and there can be 100,000 frames per folder. “This can only be accomplished with a premium storage solution: our SAN,” Reed says.

When the studio moved to EXR files for the VFX on the American Dreamer feature film, for example, its original NAS over 10Gb didn’t have enough bandwidth for playback on its systems (1.2GB/sec). Once it upgraded to the SAN with dual 16Gb Fibre Channel, the team was able to play back uncompressed 4K EXR footage without the headache or frustration of stuttering.

“We have created an environment that caters to the creative process with a technical infrastructure that is superfast and solid. Filmmakers love us, and I couldn’t be prouder of my team for making this happen,” says Reed.

Mike Seabrooke

Postal
Established in 2015, Postal is a boutique creative studio that produces motion graphics, visual effects, animation, live action and editorial, with the vision of transcending all mediums — whether it’s short animations for social media or big-budget visual effects for broadcast. “As a studio, we love to experiment with different techniques. We feel strongly that the idea should always come first,” says Mike Seabrooke, producer at New York’s Postal.

To ensure that these ideas make it to the final stage of a project, the company uses a mixture of hard drives, LTO tapes and servers that house the content while the artists are working on projects, as well as for archival purposes. Specifically, the studio employs the EditShare Storage v.7 shared storage platform and EditShare Ark Tape for managing the LTO tape libraries that serve as nearline and offline backup. Postal deployed this setup when it started up a few years ago and has been updating and expanding it ever since as the studio has grown.

Let’s face it, hard drives always have the possibility of failing. But, failure is not something that Postal — or any other post house — can afford. That is why the studio keeps two instances per job on archive drives: a master and a backup. “Organized hard drives give us quick access to previous jobs if need be, which sometimes can be quite the lifesaver,” says Seabrooke.


Postal’s Nordstrom project.

LTO tapes, meanwhile, are used to back up the facility’s servers running EditShare v7 — which house Postal’s editorial jobs — on the off chance that something happens to that precious piece of hardware. “The recovery process isn’t the fastest, but the system is compact, self-contained and gives us peace of mind in case anything does go wrong,” Seabrooke explains.

In addition, the studio uses Retrospect backup and restore software for its working projects server. Seabrooke says, “We chose it because it offers a backup service that does not require much oversight.”

When Postal began shopping for a solution for its studio three years ago, reliability was at the top of its list. The facility needed a system it could rely on to back up its data, which would comprise the facility’s entire scope of work. Ease of use was also a concern, as was access. This decision prompted questions such as: Would we have to monitor it constantly? In what timeframe would we be able to access the data? Moreover, cost was yet another factor: Would the solution be effective without breaking our budget?

Postal’s solution indeed enabled them to check off every one of those boxes. “Our projects demand a system that we can count on, with the added benefit of quick retrieval,” Seabrooke says.

Throughout the studio’s production process, the artists are accessing project data on the servers. Then, once they complete the project, the data is transferred to the archival drives for backup. This frees up space on the company servers for new jobs, while providing access to the stored data if needed.

“Storage is so important in our work because it is our work. Starting over on a project is an outcome we cannot allow, so responsible storage is a necessity,” concludes Seabrooke.


Karen Moltenbrey is a long-time VFX and post production writer.

A VFX pro on avoiding the storage ‘space trap’

By Adam Stern

Twenty years is an eternity in any technology-dependent industry. Over the course of two-plus decades of visual effects facility ownership, changing standards, demands, capability upgrades and staff expansion have seen my company, Vancouver-based Artifex Studios, usher in several distinct eras of storage, each with its own challenges. As we’ve migrated to bigger and better systems, one lesson we’ve learned has proven critical to all aspects of our workflow.

Adam Stern

In the early days, Artifex used off-the-shelf hard drives and primitive RAIDs for our storage needs, which brought with it slow transfer speeds and far too much downtime when loading gigabytes of data on and off the system. We barely had any centralized storage, and depended on what was essentially a shared network of workstations — which our then-small VFX house could get away with. Even considering where we were then, which was sub-terabyte, this was a messy problem that needed solving.

We took our first steps into multi-TB NAS using off-the-shelf solutions from companies like Buffalo. This helped our looming storage space crunch but brought new issues, including frequent breakdowns that cost us vital time and lost iterations — even with plenty of space. I recall a particular feature film project we had to deliver right before Christmas. It almost killed us. Our NAS crashed and wouldn’t allow us to pull final shots, while throwing endless error messages our way. I found myself frantically hot-wiring spare drives to enable us to deliver to our client. We made it, but barely.

At that point it was clear a change was needed. We started using a solution that Annex Pro — a Vancouver-based VAR we’d been working with for years — helped put into place. The vendor behind that system was later bought and then went away completely.

Our senior FX TD, Stanislav Enilenis, who was also handling IT for us back then, worked with Annex to install the new system. According to Stan, “the switch allowed bandwidth for expansion. However, when we would be in high-production mode, bandwidth became an issue. While the system was an overall improvement from our first multi-terabyte NAS, we had issues. The company was bought out, so getting drives became problematic, parts became harder to source and there were system failures. When we hit top capacity with the-then 20-plus staff all grinding, the system would slow to a crawl and our artists spent more time waiting than working.”

Artifex machine room.

As we transitioned from SD to HD, and then to 4K, our delivery requirements increased along with our rendering demands, causing severe bottlenecks in the established setup. We needed a better solution, but options were limited. We were potentially looking at a six-figure investment in a system not geared toward M&E.

In 2014, Artifex was working on the TV series Continuum, which had fairly massive 3D requirements on an incredibly tight turnaround. It was time to make a change. After a number of discussions with Annex, we made the decision to move to an offering from a new company called Qumulo, which provided above-and-beyond service, training and setup. When we expanded into our new facility, Qumulo helped properly move the tech. Our new 48TB pipeline flowed freely and offered features we didn’t previously have, and Qumulo was constantly adding new and requested updates.

Laila Arshid, our current IS manager, has found this to be particularly valuable. “In Qumulo’s dashboard I can see realtime analytics of everything in the system. If we have a slowdown, I can track it to specific workstations and address any issues. We can shut that workstation or render-node down or reroute files so the system stays fast.”

The main lesson we’ve learned throughout every storage system change or upgrade is this: It isn’t just about having a lot of space. That’s an easy trap to fall into, especially today when we’re seeing skyrocketing demands from 4K+ workflows. You can have unlimited storage, but if you can’t utilize it efficiently and at speed, your storage space becomes irrelevant.

In our industry, the number of iterations we can produce has a dramatic impact on the quality of work we’re able to provide, especially with today’s accelerated schedules. One less pass can mean work with less polish, which isn’t acceptable.

Artifex provided VFX for Faster Than Light

Looking forward, we’re researching extended storage on the cloud: an ever-expanding storage pool with the advantages of fast local infrastructure. We currently use GCP for burst rendering with Zync, along with nearline storage, which has been fantastic — but the next step will be integrating these services with our daily production processes. That brings a number of new challenges, including how to combine local and cloud-based rendering and storage in ways that are seamless to our team.

Constantly expanding storage requirements, along with maintaining the best possible speed and efficiency to allow for artist iterations, are the principal drivers for every infrastructure decision at our company — and should be a prime consideration for everyone in our industry.


Adam Stern is the founder of Vancouver, British Columbia’s Artifex. He says the studio’s main goal is to heighten the VFX experience, both artistically and technically, and collaborate globally with filmmakers to tell great stories.

Storage for Interactive, VR

By Karen Moltenbrey

Every vendor in the visual effects and post production industries relies on data storage. However, studios working on new media or hybrid projects, which generate far more content in general, not only need a reliable solution, they need one that can handle terabytes upon terabytes of data.

Here, two companies in the VR space discuss their needs for a storage solution that serves their business requirements.

Lap Van Luu

Magnopus
Located in downtown Los Angeles, Magnopus creates amazing VR and AR experiences. While a fairly new company — it was founded in 2013 — its staff has an extensive history in the VFX and games industries, with Academy Award winners among its founders. So, there is no doubt that the group knows what it takes to create amazing content.

It also knows the necessity of a reliable storage solution, one that can handle the large amounts of data generated by an AR or VR project. At Magnopus, the crew uses a custom-built solution leveraging Supermicro architecture. As Magnopus CTO Lap Van Luu points out, they are using an SSG-6048R-E1CR60N 4U chassis that the studio populates with two tiers of storage: the read-and-write cache layer is NVMe, while the second tier is SAS. Both are in a RAID-10 configuration, with 1TB of NVMe and 500TB of SAS raw storage.

“This setup allows us to scale to a larger workforce and meet the demands of our artists,” says Luu. “We leverage faster NVMe Flash and larger SAS for the bulk of our storage requirements.”

Before Magnopus, Luu worked at companies with all kinds of storage systems over the past 20 years, including those from NetApp, BlueArc and Isilon, as well as custom builds of ZFS, FreeNAS, Microsoft Windows Storage Spaces and Hadoop configurations. However, since Magnopus opened, it has only switched to a bigger and faster version of its original setup, starting with a custom Supermicro system with 400GB of SSD and 250TB of SAS in the same configuration.

“We went with this configuration because as we were moving more into realtime production than traditional VFX, the need for larger renderfarms and storage IO demands dropped dramatically,” says Luu. “We also knew that we wanted to leverage smart caching due to the cost of Flash storage dropping to a reasonable price point. It was the ideal situation to be in. We were starting a new company with a less-demanding infrastructure with newer technology that was cheaper, faster and better overall.”

Nevertheless, choosing a specific solution was not a decision that was made lightly. “When you move away from your premier storage solution providers, there is always a concern for scalability and reliability. When working in realtime production, the concern to re-render elements wasn’t a factor of hours or days, but rather seconds and minutes. It was important for us to have redundant backups. But for the cost saving on storage, we could easily get mirrored servers and still be saving a significant amount of money.”

Luu knew the studio wanted to leverage Flash caching, so the big question was: How much Flash was necessary to meet the demands of their artists and processing farm? The processing farm was mainly used to generate textures and environments that were imported over to a realtime engine, such as Unity or Unreal Engine. To this end, Magnopus had to find out who offered a solution for caching that was as hands-off as possible and was invisible to all the users. “LSI, now Avago, had a solution with the RAID controller called CacheCade, which dealt with all the caching,” he says. “All you had to do was set up some preferences and the RAID controller would take care of the rest.”

However, CacheCade had a 512GB size limit on the caching layer, so the studio had to do some testing to see if it would ever exceed that, and in a rare situation it did, says Luu. “But it was never a worry because behind the Flash cache was a 60-drive SAS RAID-10 configuration.”

As Luu explains, when working with VFX, IOPS (IO operations per second) is always the biggest issue due to the heavy demand from certain types of applications. “VFX work and compositing can typically drive any storage solution to a grinding halt when you have a renderfarm taxing the production storage from your artists,” he explains. However, realtime development IO demands are significantly less since the assets are created in a DCC application but imported into a game engine, where processing occurs in realtime and locally. So storing all those traditional VFX elements is not necessary, and the overall storage capacity dropped to one-tenth of what was required for VFX, Luu points out.

And since Magnopus has a Flash-based cache layer that is large enough to meet the company’s IO demands, it does not have to leverage localization to reduce the IO demand off the main production server; as a result, the user gets immediate server response. And, it means that all data within the pipeline resides on the company’s main production server — where the company starts and ends any project.

“Magnopus is a content-focused technology company,” Luu says. “All our assets and projects that we create are digital. Storage is extremely important because it is the lifeblood of everything we create. The storage server can be the difference between a user focusing on creative content creation, where the infrastructure is invisible, and the frustration of constantly being blocked and delayed by hardware. Enabling everyone to work as efficiently as possible allows for the best results and products for our clients and customers.”

Light Sail VR
Light Sail VR is a Hollywood-based VR boutique that is a pioneer in cinematic virtual reality storytelling. Since its founding three years ago, the studio has been producing a range of interactive, 360- and 180-degree VR content, including original work and branded pieces for Google, ABC, GoPro and Paramount.

Matt Celia on set for Speak of the Devil.

Because Light Sail VR is a unique but small company, employees often have to wear a number of hats. For instance, co-founder Robert Watts is executive producer and handles many of the logistical issues. His partner, Matthew Celia, is creative director and handles more of the technical aspects of the business. So when it comes to managing the company’s storage needs, Celia is the guy. And, having a reliable system that keeps things running smoothly is paramount, as he is also juggling shoots and post-production work. No one can afford delays in production and post, but for a small company, it can be especially disastrous.

Light Sail VR does not simply dabble in VR; it is what the company does exclusively. Most of the projects thus far have been live action, though the group started its first game engine work this year. When the studio produced a piece with GoPro in the first year of its founding, it was on a sneakernet of G-Drives from G-Technology, “and I was going crazy!” says Celia. “VR is fantastic, but it’s very data-intensive. You can max out a computer’s processing very easily, and the render times are extraordinarily long. There’s a lot of shots to get through because every shot becomes a visual effects shot with either stitching, rotoscoping or compositing needed.”

He continues: “I told Robert [Watts] we needed to get a shared storage server so if I max out one computer while I’m working, I can just go to another computer and keep working, rather than wait eight to 10 hours for a render to finish.”

The Speak of the Devil shoot.

Celia had been dialed into the post world for some time. “Before diving into the world of VR, I was a Final Cut guy, and the LumaForge guys and [founder] Sam Mestman were people I always respected in the industry,” he says. So, Celia reached out to them with a cold call and explained that Light Sail VR was doing virtual reality, an uncharted, pioneering new thing, and was going to need a lot of storage — and needed it fast. “I told them, ‘We want to be hooked up to many computers, both Macs and PCs, and don’t want to deal with file structures and those types of things.’”

Celia points out that they are an independent and small boutique, so finding something that was cost effective and reliable was important. LumaForge responded with a solution called Jellyfish Mobile, geared for small teams and on-set work or portable office environments. “I think we got the 30TB NAS server that has four 10Gb Ethernet connections.” That enabled Light Sail VR to hook up the system to all its computers, “and it worked,” he adds. “I could work on one shot, hit render, and go to another computer and continue working on the next shot and hit render, then kind of ping-pong back and forth. It made our lives a lot easier.”

Light Sail VR has since graduated to the larger-capacity Jellyfish Rack system, which is a 160TB solution (expandable up to 1 petabyte).

The storage is located in Light Sail VR’s main office and is hooked up to its computers. The filmmakers shoot in the field and, if on location, download the data to drives, which they transport back to the office and load onto the server. Then they transcode all the media to Avid DNx. (VR is captured in H.264, a delivery codec that is not friendly for editing, especially at these high resolutions and frame sizes.)
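
The article doesn’t detail the transcode step itself, but for readers who want to replicate something similar, a batch pass from H.264 originals to DNxHR can be scripted around ffmpeg. Below is a minimal sketch, assuming ffmpeg is installed; the folder paths and the DNxHR HQ profile are illustrative choices, not Light Sail VR’s actual settings.

# Hypothetical batch transcode of H.264 camera originals to DNxHR for editing.
# Assumes ffmpeg is on the PATH; paths and the DNxHR profile are illustrative only.
import subprocess
from pathlib import Path

SOURCE = Path("/Volumes/server/ingest")        # H.264 camera originals (.mp4)
DEST = Path("/Volumes/server/dailies_dnx")     # edit-friendly DNxHR masters
DEST.mkdir(parents=True, exist_ok=True)

for clip in sorted(SOURCE.glob("*.mp4")):
    out = DEST / (clip.stem + ".mov")
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-c:v", "dnxhd", "-profile:v", "dnxhr_hq",   # DNxHR HQ via ffmpeg's dnxhd encoder
        "-pix_fmt", "yuv422p",
        "-c:a", "pcm_s16le",                         # uncompressed audio for the edit
        str(out),
    ], check=True)

Rewrapping into DNxHR trades disk space for a much lighter decode load, which is exactly the trade-off Celia describes.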

Currently, Celia is in New York, having just wrapped the 20th episode of original content for Refinery29, a media company focused on young women. Refinery29 produces editorial and video programming, live events and shareable social content across the major social platforms, covering categories from style to politics and more. Eight of the episodes are in various stages of the post pipeline, due to come out later this year. “And having a solid storage server has been a godsend,” Celia says.

The studio backs up locally onto Seagate drives for archival purposes and sometimes employs G-Technology drives for on-set work. “We just got this new G-Tech SSD that’s 2TB. It’s been great for use on set because having an SSD and downloading all the cards while on set makes your wrap process so much faster,” Celia points out.

Lately, Light Sail VR has been shooting a lot of VR-180, which requires two 64GB cards per camera — one for the right eye and one for the left. But when the team shoots with the Yi Halo, the next-gen 3D 360-degree Google Jump camera, it uses 17 64GB cards. “That’s a lot of data,” says Celia. “You can have a really bad day if you have really bad drives.”
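
A quick back-of-the-envelope script puts numbers on that. The card counts come from the paragraph above, while the sustained offload speed is an assumed figure for illustration only, not one measured by Light Sail VR.

# Per-setup card data using the counts quoted above; offload rate is an assumption.
vr180_gb = 2 * 64       # two 64GB cards per VR-180 camera
yi_halo_gb = 17 * 64    # 17 x 64GB cards for the Yi Halo rig
offload_mb_s = 450      # assumed sustained offload speed to an on-set SSD (MB/s)

for label, gb in (("VR-180 camera", vr180_gb), ("Yi Halo rig", yi_halo_gb)):
    minutes = gb * 1000 / offload_mb_s / 60
    print(f"{label}: {gb} GB per pass, ~{minutes:.0f} min to offload at {offload_mb_s} MB/s")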

The studio’s previous solution operated via Thunderbolt 1 in a RAID-5. It only worked on a single machine and was not cross-platform. As the studio made the transition over to PC from Mac to take advantage of better hardware capable of supporting VR playback, that solution was just not practical. They also needed a solution that was plug and play, so they could just pop it into a 10Gb Ethernet connection — they did not want fiber, “which can get expensive.”

The Light Sail team.

“I just wanted something very simple that was cross-platform and could handle what we were doing, which is, by the way, 6K or 8K stereo at 60 frames per second – these workloads are larger than most feature films,” Celia says. “So, we needed a lot of storage. We needed it fast. We needed it to be shared.”

However, while Celia searched for a system, one thing became clear to him: The solutions were technical. “It seemed like I would have to be my own IT department.” And, that was just one more hat he did not want to have to wear. “At LumaForge, they are independent filmmakers. They understood what I was trying to do immediately, and were willing to go on that journey with us.”

Says Celia, “I always call hard drives or storage the underwear of the post production world because it’s the thing you hate spending a lot of money on, but you really need it to perform and work.”

Main Image: Magnopus


Karen Moltenbrey is a long-time VFX and post writer.

StorageDNA, SNS, Backblaze, Spectra Logic partner on LTO migration bundles

StorageDNA is partnering with Studio Network Solutions (SNS), Backblaze and Spectra Logic to offer smart migration bundles that allow users to move content from archives stored on aging LTO tapes to spinning disk, cloud or newer-generation LTO tape technology. Built around StorageDNA’s intelligent archive engine, the bundled software/hardware solutions include DNAevolution software and either a Spectra Stack automated tape library; SNS disk storage with AI autotagging and the ShareBrowser file and asset manager; or Backblaze B2 cloud storage with B2 Fireball for rapid ingest of large data sets.

“Over the past five to 10 years, companies in the media and entertainment space have accumulated very large archives on older LTO-5 and LTO-6 tapes,” says Tridib Chakravarty (tC), president/CEO at StorageDNA. “With newer generations of LTO tape technology, and disk and cloud storage options becoming more affordable and attractive, companies are looking to move their data off LTO-5 or LTO-6 archives.”

The smart migration bundles offered by StorageDNA and its partners give users a way to pull data out of LTO-5 and LTO-6 archives and put it on a medium that guarantees ongoing access. Options with AI-based intelligence, metadata harvesting capabilities and accelerated transport mechanisms help users to better understand, access and leverage the content being stored.

StorageDNA’s DNAevolution intelligent archive software, newly released in Version 4.8, handles IT-centric backup and archive of folders, as well as Avid and Adobe projects within conventional media workflows. In addition to extensive metadata intelligence, the v4.8 software features smart migration capabilities, with enhanced scanning of new data and faster data transfers.

The EVO shared storage server from SNS combines a highly configurable storage array with an extensive workflow toolset for post and broadcast workflows. Every EVO system features unlimited media asset management, file automation tools, AI-powered asset tagging and enhanced integration with Adobe Premiere Pro, Apple Final Cut Pro X, Blackmagic Resolve 15, Avid Media Composer and other creative apps. EVO enables users to consolidate multiple tiers of storage into a single organized database and to edit 4K, 6K and 8K media directly from the server.

B2 Cloud Storage from Backblaze offers unlimited free uploads, and all data is instantly available for download via API, CLI or web browser with low download fees. According to Backblaze, priced at $5 per terabyte per month, B2 Cloud Storage is one-fourth the cost of Amazon’s S3. In addition, Backblaze offers the B2 Fireball, a rapid ingest service that simplifies the migration of massive data sets.

Designed to be easily installed, expanded and managed, the Spectra Stack automated tape library from Spectra Logic is rated at a 100 percent duty cycle, meaning it is one of few stackable libraries built to perform in a 24/7 environment. Scalable from 10 to 560 tape slots and from one to 42 tape drives, a Spectra Stack library enables users to store more than 6.7PB (16.7PB compressed) of data. StorageDNA has partnered with Spectra Logic to provide a Spectra Stack 80-slot, two-drive, scalable LTO solution, along with a preconfigured server with DNAevolution v4.8 software preinstalled. In addition, Spectra Logic can include free LTO-6 loaner drives for use during the migration process.

The new smart migration bundles from StorageDNA, SNS, Backblaze and Spectra Logic are available now from the companies’ distribution partners.

StorageDNA is also offering a rental option designed for companies that aren’t prepared to invest in further LTO hardware. For about $900 a month, customers can rent an eight-slot, one-drive LTO-8 library and software to move data off existing LTO-5 and LTO-6 tapes.

Review: G-Tech’s G-Speed Shuttle using a Windows PC

By Barry Goch

When I was asked to review the G-Technology G-Speed Shuttle SSD drive, I was very excited. I’ve always had great experiences with G-Tech and was eager to try out this product with my MSI 17.3-inch GT73VR Titan PC laptop… and this is where the story gets interesting.

I’ve been a Mac fan for years. I’ve owned Macs going back to the Mac Classic in the ‘90s. But a couple of years ago I reached a tipping point. My 17-inch MacBook Pro didn’t have the horsepower to support VR video, and I was looking to upgrade to a new Mac. But when I dug deeper into specifications and performance, specifically the industry-leading GPUs I could harness for Adobe Premiere’s VR capabilities, I bought the MSI Titan VR instead because it shipped with the Nvidia GTX1070 graphics card.

The laptop is a beast and has all the power and portability I needed but couldn’t find in a Mac laptop at the time. I wanted to give you my Mac-to-PC background before we jump in, because to be clear: The G-Speed Shuttle SSD will provide the best performance when used with Thunderbolt 3 Macs. That doesn’t mean it won’t be great on a PC; it just won’t be as good as when used on a Mac.

G-Tech makes the PC configuration software easy to find on its website… and easy to use. I did find, though, that I could only configure the drive as NTFS with RAID-5 on the PC. But I was also able to speed test the G-Speed Shuttle SSD as a Mac-formatted drive on the PC using MacDrive, which enables Mac drive formatting and mounting on Windows.

We actually reached out to G-Tech, which is a Western Digital brand, about the Mac vs. PC equation. This is what Matthew Bennion, director of product line management at G-Technology said: “Western Digital is committed to providing high-speed, reliable storage solutions to both PC and Mac power users. G Utilities, formatted for Windows computers, is constantly being added to more of our products, including most recently our G-Speed Shuttle products. The addition of G Utilities makes our full portfolio Windows-friendly.”

Digging In
The packaging of the G-Speed Shuttle SSD is very clean and well laid out. There is a parts box that has the Thunderbolt cable, power cable and instructions. Underneath the perfectly formed plastic box insert, wrapped in a plastic bag, was the drive itself. The drive has a lightweight polycarbonate chassis. I was surprised how light it was when I pulled it out of the box.

There are four drive bays, each with an SSD. The first things I noticed were the unit’s weight and sound — it’s very lightweight for so much storage, and it’s very quiet with no spinning disks. SSDs run quieter and cooler and use less power than traditional spinning disks. I think this would be a perfect companion for a DIT looking for a fast, lightweight and low-power-consumption RAID for doing dailies.

I used the drive with Red RAW files inside of Resolve and RedCine-X. I set up a transcode project to make Avid offline files that the G-Speed Shuttle SSD handled muscularly. I left the laptop running overnight working on the files on more than one occasion and didn’t have any issues with the drive at all.

The main shortcoming of using the G-Speed Shuttle with a PC setup is the inability to create Apple ProRes QuickTime files. I’ve become accustomed to working with ProRes files created with my Blackmagic Ursa Mini camera, and PCs read those files fine. If you’re delivering to YouTube or Vimeo, it’s not a big deal. It is a bit of an obstacle if you need to deliver ProRes. For this review, I worked around this by rendering out a DPX sequence to the Mac-formatted G-Speed Shuttle SSD drive in Resolve (I also used Premiere) and made ProRes files using Autodesk Flame on my venerable 17-inch MacBook Pro. The Flame is the clear winner in quality of file delivery. So, yes, not being able to write ProRes is a pain, but there are ways around it. And, again, if you’re delivering just for the Web, it’s no big deal.

The Speed
My main finding involves the speed of the drive on a PC. In its marketing material for the drive, G-Tech advertises a speed of 2,880MB/sec with Thunderbolt 3. Using the AJA speed test, I was able to get 1,590MB/sec — a speed more comparable with Thunderbolt 2. Perhaps it had something to do with the G-Tech PC drive configuration program, which would only let me set up the drive as RAID-5 and not the faster RAID-0 (or RAID-1). I also ran speed tests on the Mac-formatted G-Speed Shuttle SSD and found similar speeds. I am certain that if I had a newer Thunderbolt 3 Mac, I would have gotten speeds closer to the advertised Mac specifications.

Summing Up
Overall, I really liked the G-Speed Shuttle SSD. It looks cool on the desk, it’s lightweight and very quiet. I wish I didn’t have to give it back!

And the cost? It’s 16TB for $7499.95, and 8TB for $4999.95.


Barry Goch is a Finishing Artist at The Foundation and a Post Production Instructor at UCLA Extension. You can follow him on Twitter at @gochya.

Panasas’ new ActiveStor Ultra targets emerging apps: AI, VR

Panasas has introduced ActiveStor Ultra, the next generation of its high-performance computing storage solution, featuring PanFS 8, a plug-and-play, portable, parallel file system. ActiveStor Ultra offers up to 75GB/s per rack on industry-standard commodity hardware.

ActiveStor Ultra comes as a fully integrated plug-and-play appliance running PanFS 8 on industry-standard hardware. PanFS 8 is the completely re-engineered Panasas parallel file system, which now runs on Linux and features intelligent data placement across three tiers of media — metadata on non-volatile memory express (NVMe), small files on SSDs and large files on HDDs — resulting in optimized performance for all data types.

ActiveStor Ultra is designed to support the complex and varied data sets associated with traditional HPC workloads and emerging applications, such as artificial intelligence (AI), autonomous driving and virtual reality (VR). ActiveStor Ultra’s modular architecture and building-block design enables enterprises to start small and scale linearly. With dock-to-data in one hour, ActiveStor Ultra offers fast data access and virtually eliminates manual intervention to deliver the lowest total cost of ownership (TCO).

ActiveStor Ultra will be available early in the second half of 2019.

Symply offering StorNext 6-powered Thunderbolt 3 storage solution

Symply is at NAB New York providing tech previews of its SymplyWorkspace Thunderbolt 3-based SAN technology that uses Quantum’s StorNext 6.

SymplyWorkspace allows laptops and workstations equipped with Thunderbolt 3 to ingest, edit, finish and deliver media through a direct Thunderbolt 3 cable connection, with no adapter needed and without having to move content locally, even at 4K resolutions.

Based on StorNext 6 sharing software, the system lets users connect up to eight laptops and workstations and instantly share video, graphics and other data files over a standard Thunderbolt interface with no additional hardware or adapters.

While the company has not announced pricing, it does expect to have systems for sale in Q4. The boxes are expected to start under $10,000 for 48TB and up to four users, making the system well-suited for users such as smaller post houses, companies with in-house creative teams and ad agencies.

Review: Sonnet Fusion PCIe 1TB and G-Drive Mobile Pro 500GB

By Brady Betzel

There are a lot of external Thunderbolt 3 SSD drives out in the wild these days, and they aren’t cheap. However, with a high price comes blazingly fast speeds. I was asked to review two very similar Thunderbolt 3 external SSD drives, so why not pit them against each other? Surprisingly (at least surprising to me), there are a couple of questions that you will want the answers to: Is there thermal throttling that will lower the read/write speeds when transferring large files for a sustained amount of time? Does it run so hot that it may burn you when touched?

I’ll answer these questions and a few others over the next few paragraphs, but in the end would I recommend buying a Thunderbolt 3 SSD? Yes, they are very, very fast. Especially when working with higher resolution multimedia files in apps like Premiere, Resolve, Pro Tools and many other data-intensive applications.

Sonnet Fusion Thunderbolt 3 PCIe Flash Drive
Up first (only because I received it first) is the Sonnet Fusion external SSD. I was sent the drive in a non-retail box, so I can’t attest to how it will arrive when you buy it in a retail setting, but the drive itself feels great. Like many other Sonnet products, the Fusion drive is hefty — and not in an overweight way. It feels like you are getting your money’s worth. Unlike the rubberized exterior of the popular LaCie Rugged drives, the Sonnet Fusion is essentially an aluminum heat sink wrapped around a powerful 1TB, Gen 3 M.2 PCIe, Toshiba RVD400-M22280 solid state drive. It’s sturdy and feels like you could drop it without receiving more damage than a little dent.

Attached to the drive is Sonnet’s “captive” Thunderbolt 3 cable, which I assume means the cable is attached to the external drive casing but can be removed without disassembling the case. I think more cable integrations should be called captive; it’s a great description. Anyway… the Thunderbolt 3 cable can be replaced or removed by removing the four small screws underneath the Fusion. It’s attached to a female Thunderbolt 3 port inside of the casing. In addition to the “captive” attachment, I really wish Sonnet had integrated a way to wrap the cable around the drive, much like the LaCie Rugged drives. This would really help with transporting the drive and not worrying about the cable. It’s only a small annoyance, but since I’ve been spoiled by nice cable attachments I kind of expect it, especially with drives at this price. The Sonnet Fusion retails for $899 through stores like B&H, although I found it on Amazon.com for $799. Not cheap for an external drive, but in my opinion it is worth it.

The Sonnet Fusion is fast, like really fast, as in the fastest external drive I have tested. Sonnet claims a read speed of up to 2,600MB/s and a write speed of up to 1,600MB/s. The only caveat is that you must make sure your computer’s Thunderbolt 3 port is running x4 PCIe Gen 3 (four PCIe lanes) as opposed to x2 PCIe Gen 3 (only two lanes). If your port only has two lanes, your write speed will be limited to around 1,400MB/s instead of the advertised 1,600MB/s. You can find more tech specs on Sonnet’s site. In addition, you can find out if your computer has the PCIe lanes to run the Fusion at full speed here.
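
To see why lane count matters, remember that a PCIe 3.0 lane carries roughly 985MB/s of payload after line encoding. The sketch below is back-of-the-envelope only; real links lose a little more to Thunderbolt and protocol overhead.

# A PCIe 3.0 lane runs at 8 GT/s with 128b/130b encoding, roughly 985 MB/s of payload.
LANE_MB_S = 985

for lanes in (2, 4):
    print(f"x{lanes} PCIe 3.0 link: ~{lanes * LANE_MB_S} MB/s ceiling before overhead")

# A x2 link tops out just under 2,000 MB/s before overhead, which is why the full
# 2,600 MB/s read spec needs a x4 connection behind the Thunderbolt 3 port.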

When testing the Sonnet Fusion I was lucky enough to have a few systems at my disposal: a 2018 iMac Pro, a 2018 Intel i9 MacBook Pro and an Intel i9 Puget Systems Genesis I with Thunderbolt 3 ports. All the systems provided similar results, which was nice to see. Using the AJA System Test, I adjusted the settings to 3840×2160, 4GB and ProRes 4444. I used one reading for an example image for this review, but they were generally the same every time I ran the test. I was getting around 1,372MB/s write speed and 2,200MB/s read speed. When transferring files at the Finder level I was consistently getting about 1GB/s write speeds, but it’s possible I was being limited by the read speed of the internal source SSD! Incredible. For real-world numbers, I was able to transfer about 750GB in under five minutes. Again, incredible speeds.

The key to the Sonnet Fusion SSD and what makes it a step above the competition is its enclosure acting as a heat sink in its 2.8×4.1×1.25-inch form factor. While this means there are no fans adding noise, it does mean that the drive can get extremely hot to the touch, which can be an issue if you need to pack it up and go, or if you put it in your pocket (be careful!). This also means that with great heat dissipation comes less thermal throttling, which can slow down transfer speeds when a drive is used over longer periods of time. This can be a real problem in some drives. Also keep in mind that this drive is bus powered, and Sonnet’s instruction manual specifically states that it will not work with a Thunderbolt 2 adapter. The Sonnet Fusion comes with a one-year warranty that you can read about at this link.

G-Drive Mobile Pro SSD 500GB
Like the Sonnet Fusion, the G-Drive Mobile Pro SSD is a Thunderbolt 3 connected external hard drive that touts very high sustained transfer speeds of up to 2800MB/s (read speed). The G-Drive is physically lighter than the Sonnet, and is cheaper coming in at about 79 cents per GB or 68 cents if you purchase the 1TB version of the G-Drive — as compared to the Sonnet Fusion’s 88 cents per GB. So is this a “get what you pay for” scenario? I think so. The 500GB version costs $399.95 while the 1TB version retails for $699.95. A full $100 cheaper than the Sonnet Fusion.

The G-Drive Mobile Pro has a slender profile that matches what you expect an external hard drive to look like. It measures 4.41 x 3.15 x 0.67 inches and weighs just 0.45 pounds. The exterior is attractive — the drive is surrounded by blackish/dark grey rubberized plastic with silver plastic end caps. There are slits in the top and bottom of the case to dissipate heat, or maybe just to show off the internal electric-blue aluminum heatsink. The Thunderbolt 3 connection is on the rear of the housing for easy access, with a status LED on the front. The cord is not attached to the drive, so there is a large chance of it being misplaced. Again, I really wish manufacturers would think about cable storage and placement on these drives — LaCie Rugged drives have this nailed, and I hope others follow suit.

Included with the G-Drive Mobile Pro is a 0.5-meter Thunderbolt 3 cable. It comes with a five-year limited warranty, described on an included pamphlet that may just feature the tiniest font possible. The warranty ensures that the product is free from defects in materials and workmanship, with some exclusions, including non-commercial use. In addition, the retail box shows off a couple of key specifics, including “durable, shock resistant SSD,” while the G-Technology website boasts of three-meter drop protection (onto a carpeted concrete floor) as well as a 1,000-pound crush-proof rating. I’m not sure whether this is covered by the warranty, but since there really aren’t moving parts in an SSD, I don’t see why it wouldn’t hold up. An additional proclamation is that you can edit multi-stream 8K footage at full frame rate. This may technically be true in a read-only state, but you would need a supercomputer with multiple high-end GPUs to actually work with media of that size. So take that with a grain of salt — not just with this drive but with any.

So on to the actual nuts and bolts of the G-Drive Mobile Pro SSD. The drive looks good on the outside and, like any bus-powered drive, is immediately recognized by macOS over a direct Thunderbolt 3 connection. If you are using Windows, you will have to format the drive before you can use it; G-Technology has an app to make that easy.

When doing real-world file transfers I was consistently getting around 1GB/s. So, the G-Drive Mobile Pro SSD is blazing fast. I was transferring 200GB of files in under two minutes.

Summing Up
In the end, if you haven’t seen the speed difference coming from a USB 3.0 or Thunderbolt 2 drive, you must try Thunderbolt 3. If you have Thunderbolt 3 ports and are using old Thunderbolt 2 drives, now is the time to upgrade. Not only can you use either of these drives like an internal drive, but if you are a Resolve colorist or a Premiere editor you can use these as your render cache or render drive. Not only will this speed up your coloring and editing, but you may even start to notice fewer errors and crashes since the pipes are open.

Personally, I love the Sonnet Fusion drive and the G-Drive Mobile Pro. If price is your main focus then obviously the G-Drive Mobile Pro is where you need to look. However, if a high-end look with some heft is your main interest, I think the Sonnet Fusion is an art piece you can have on your desktop.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

The benefits of LTO

By Mike McCarthy

LTO stands for Linear Tape Open, and was initially developed nearly 20 years ago as an “open” format technology that allows manufacturing by any vendor that wishes to license the technology. It records any digital files onto half-inch magnetic tapes, stored in square single reel cartridges. The capacity started at 100GB and has increased by a factor of two nearly every generation; the most recent LTO-8 cartridges store 12TB of uncompressed data.
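
For reference, the commonly published native (uncompressed) capacities by generation bear out that rough doubling. The short script below simply tabulates the nominal specs; real-world usable capacity is somewhat lower, as discussed further down.

# Nominal native (uncompressed) capacities per LTO generation, in TB.
native_tb = {1: 0.1, 2: 0.2, 3: 0.4, 4: 0.8, 5: 1.5, 6: 2.5, 7: 6.0, 8: 12.0}

for gen, tb in sorted(native_tb.items()):
    prev = native_tb.get(gen - 1)
    step = f"x{tb / prev:.1f} over LTO-{gen - 1}" if prev else "first generation"
    print(f"LTO-{gen}: {tb:>5.1f} TB native ({step})")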

If you want to find out more about LTO, you should check out the LTO Consortium, which is made up of Hewlett Packard Enterprise, IBM and Quantum, although there are other companies that make LTO drives and tape cartridges. You might be familiar with its LTO Ultrium logo.

‘Tapeless’ Workflows
LTO initially targeted server markets, but with the introduction of “tapeless workflows” in the media and entertainment industry, a need arose for long-term media storage. Since the first P2 cards and SxS sticks were too expensive for single-write use, they were designed to be reused repeatedly once their contents had been offloaded to hard drives. But hard drives are not ideal for long-term data storage, and insurance and bonding companies wanted their clients to have alternate data archiving solutions.

So, by the time the Red One and Canon 5D were flooding post facilities with CF cards, LTO had become the default archive solution for most high-budget productions. But this approach was not without limitations and pitfalls. The LTO archiving solutions being marketed at the time were designed around the Unix tar format for storing files, while most media work is done on Windows and Mac OS X. Various approaches were taken by different storage vendors to provide LTO capabilities to M&E customers. Some were network appliances running Linux under the hood, while others wrote drivers and software to access the media from OS X or, in one case, Windows. Then there was the issue that tar isn’t a self-describing file system, so you needed a separate application to keep track of what was on each tape in your library. All of these aspects cost lots of money, so the initial investment was steep, even though tape cartridges themselves offered the cheapest marginal cost per GB of any storage medium.

LTFS
Linear Tape File System (LTFS) was first introduced with LTO-5 and was intended to make LTO tapes easier to use and interchange between systems. A separate partition on the tape stores the index of data in XML and other associated metadata. It was intended to be platform independent, although it took a while for reliable drivers and software to be developed for use in Windows and OS X.

At this point, LTFS-formatted tapes in LTO tape drives operate very similarly to old 3.5-inch floppy drives. You insert a cartridge, it makes some funny noises, and then after a minute it asks you to format a new tape, or it displays the current contents of the tape as a new drive letter. If you drag files into that drive, it will start copying the data to the tape, and you can hear it grinding away. The biggest difference is that when you hit eject, it will take the computer a minute or two to rewind the tape, write the updated index to the first partition and then eject the cartridge for you. Otherwise it is a seamless drag and drop, just like any other removable data storage device.
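
Because a mounted LTFS tape behaves like any other volume, that “drag and drop” can also be scripted for unattended archiving. Here is a minimal sketch, assuming the tape has already been formatted and mounted with the drive vendor’s LTFS utility; the mount point and source path are hypothetical.

# Copies a finished project folder onto an already-mounted LTFS tape, then does a
# simple size check before ejecting. Assumes the tape was formatted and mounted
# elsewhere (with the vendor's LTFS tool); /mnt/ltfs and the source are hypothetical.
import shutil
from pathlib import Path

SOURCE = Path("/media/projects/feature_final")
TAPE = Path("/mnt/ltfs/feature_final")

shutil.copytree(SOURCE, TAPE)  # large sequential writes suit tape well

def tree_bytes(root: Path) -> int:
    return sum(f.stat().st_size for f in root.rglob("*") if f.is_file())

assert tree_bytes(SOURCE) == tree_bytes(TAPE), "size mismatch, do not eject yet"
print("Copy complete; eject through the LTFS software so the index is rewritten.")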

LTO Drives
All you need in order to use LTO in your media workflow — for archive or data transfer — is an LTO drive. I bought one last year on Amazon for $1,600, which was a bit of a risk considering that I didn’t know if I was going to be able to get it to work on my Windows 7 desktop. As far as I know, all tape drives are SAS devices, although you can buy ones that have adapted the SAS interface to Thunderbolt or Fibre Channel.

Most professional workstations have integrated SAS controllers, so internal LTO drives fit into a 5.25-inch bay and can just connect to those, or any SAS card. External LTO drives usually use Small Form Factor cables (SFF-8088) to connect to the host device. Internal SAS ports can be easily adapted to SFF-8088 ports, or a dedicated eSAS PCIe card can be installed in the system.

Capacity & Compression
How much data do LTO tapes hold? This depends on the generation… and the compression options. The higher capacity advertised on any LTO product assumes a significant level of data compression, which may be achievable with uncompressed media files (DPX, TIFF, ARRI, etc.) The lower value advertised is the uncompressed data capacity, which is the more accurate estimate of how much data it will store. This level of compression is achieved using two different approaches, eliminating redundant data segments and eliminating the space between files. LTO was originally designed for backing up lots of tiny files on data servers, like credit card transactions or text data, and those compression approaches don’t always apply well to large continuous blocks of unique data found in encoded video.

Using data compression on media files which are already stored in a compressed codec doesn’t save much space (there is little redundancy in the data, and few gaps between individual files).

Uncompressed frame sequences, on the other hand, can definitely benefit from LTO’s hardware data compression. Regardless of compression, I wouldn’t count on using the full capacity of each cartridge. Due to the way the drives are formatted, and the way storage vendors measure data, I have only been able to copy 2.2TB of data from Windows onto my 2.5TB LTO-6 cartridges. So keep that in mind when estimating real-world capacity, like with any other data storage medium.
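
When planning a job, it is safer to budget cartridges from that real-world figure than from the label. A rough sketch, using a derating factor derived from the 2.2TB-of-2.5TB experience above (your own factor will vary by OS, formatting and file mix):

import math

# Estimate cartridges for a job, derating the labeled capacity. The 0.88 factor
# simply reflects the ~2.2 TB of usable space seen on a 2.5 TB LTO-6 cartridge.
def tapes_needed(dataset_tb: float, label_tb: float, usable: float = 0.88) -> int:
    return math.ceil(dataset_tb / (label_tb * usable))

print(tapes_needed(40, 2.5))    # a 40 TB camera archive on LTO-6
print(tapes_needed(250, 12.0))  # a 250 TB job on LTO-8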

Choosing the ‘Right’ Version to Use
So which generation of LTO is the best option? That depends on how much data you are trying to store. Since most media files that need to be archived these days are compressed, either as camera source footage or final deliverables, I will be calculating based on the uncompressed capacities. VFX houses using DPX frames, or vendors using DCDMs might benefit from calculating based on the compressed capacities.

Prices are always changing, especially for the drives, but these are the numbers as of summer 2018. On the lowest end, we have LTO-5 drives available online for $600-$800, which will probably store 1-1.2TB of data on a $15 tape. So if you have less than 10TB of data to back up at a time, that might be a cost-effective option. Any version lower than LTO-5 doesn’t support the partitioning required for LTFS, and is too small to be useful in modern workflows anyway.

As I mentioned earlier, I spent $1,600 on an LTO-6 drive last year, and while that price is still about the same, LTO-7 and LTO-8 drives have come down in cost since then. My LTO-6 drive stores about 2.2TB of data per $23 tape. That allowed me to back up 40TB of Red footage onto 20 tapes in 90 hours, or an entire week. Now I am looking at using the same drive to ingest 250TB of footage from a production in China, but that would take well over a month, so LTO-6 is not the right solution for that project. But the finished deliverables will probably be a similar 10TB set of DPX and TIFF files, so LTO-6 will still be relevant for that application.
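
Extrapolating from those measured numbers shows why the drive generation matters for the bigger job. The sketch below only counts continuous write time; tape swaps, verification passes and normal working hours stretch the calendar time further.

# Extrapolate from the measured LTO-6 run above: roughly 40 TB in 90 hours.
measured_tb, measured_hours = 40, 90
tb_per_hour = measured_tb / measured_hours   # ~0.44 TB/h, or ~123 MB/s sustained

job_tb = 250
hours = job_tb / tb_per_hour
print(f"{job_tb} TB at that pace: ~{hours:.0f} hours, ~{hours / 24:.0f} days of nonstop writing")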

I see prices as low as $2,200 for LTO-7 drives, so they aren’t much more expensive than LTO-6 drives at this point, but the 6TB tapes are. LTO-7 switched to a different tape material, which increased the price of the media. At $63 a cartridge, they come to just over $10 per TB, which is actually higher than the cost per terabyte of the two previous generations.

LTO-8 drives are available for as low as $2,600, and store up to 12TB on a single $160 tape. LTO-8 drives can also write up to 9TB onto a properly formatted LTO-7 tape in a scheme called “LTO-7 Type M.” This is probably the cheapest cost-per-TB approach at the moment, since 9TB on a $63 tape works out to $7/TB.
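
Putting the cartridge prices quoted above side by side makes the per-terabyte math explicit. The figures use the summer 2018 prices from this article and nominal native capacities; real-world usable space runs a little lower, as noted earlier.

# Cost per native terabyte for the cartridge options quoted above (summer 2018 prices).
options = {
    "LTO-5":        (15.0, 1.5),    # $ per tape, native TB
    "LTO-6":        (23.0, 2.5),
    "LTO-7":        (63.0, 6.0),
    "LTO-7 Type M": (63.0, 9.0),    # LTO-7 cartridge initialized as Type M in an LTO-8 drive
    "LTO-8":        (160.0, 12.0),
}

for name, (price, tb) in options.items():
    print(f"{name:13s} ${price / tb:5.2f} per TB")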

Compatibility Between Generations
One other consideration is backwards compatibility. What will it take to read your tapes back in the future? The standard for LTO has been that drives can write the previous generation tapes and read tapes from two generations back.

So if you invested in an LTO-2 drive and have tons of tapes, they will still work when you upgrade to an LTO-4 drive. You can then copy them to newer cartridges with the same hardware at a 4:1 ratio since the capacity will have doubled twice. The designers probably figured that after two generations (about five years) most data will have been restored at some point, or be irrelevant (the difference between backups and archives).

If you need your media archived longer than that, it would probably be wise to transfer it to fresh media of a newer generation to ensure it is readable in the future. The other issue is interchange, if you are using LTO cartridges to move data from one place to another: for both parties to read and write the tape, it must match the generation of the older drive, and the two drives must be within one generation of each other. If I want to send data to someone who has an LTO-5 drive, I have to use an LTO-5 tape, but I can copy the data to the tape with my LTO-6 drive (and be subject to the LTO-5 capacity and performance limits). If they then sent that LTO-5 tape to someone with an LTO-7 drive, they would be able to read the data, but wouldn’t be able to write to the tape. The only exception to this is that the LTO-8 drives won’t read LTO-6 tapes (of course, because I have a bunch of LTO-6 tapes now, right?).

So for my next 250TB project, I have to choose between a new LTO-7 drive with backwards compatibility to my existing gear or an LTO-8 drive that can fit 50% more data on a $63 cartridge, and use the more expensive 12TB ones as well. Owning both LTO-6 and LTO-8 drives would allow me to read or write to any LTFS cartridge (until LTO-9 is released), but the two drives couldn’t exchange tapes with each other.

Automated Backup Software & Media Management
I have just been using HPE’s free StoreOpen Utility to operate my internal LTO drive and track what files I copy to which tapes. There are obviously much more expensive LTO-based products, both in hardware with robotic tape libraries and in software with media and asset management programs and automated file backup solutions.

I am really just exploring the minimum investment that needs to be made to take advantage of the benefits of LTO tape, for manually archiving your media files and backing up your projects. The possibilities are endless, but the threshold to start using LTO is much lower than it used to be, especially with the release of LTFS support.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Quantum upgrades Xcellis scale-out storage with StorNext 6.2, NVMe tech

Quantum has made enhancements to its Xcellis scale-out storage appliance portfolio with an upgrade to StorNext 6.2 and the introduction of NVMe storage. StorNext 6.2 bolsters performance for 4K and 8K video while enhancing integration with cloud-based workflows and global collaborative environments. NVMe storage significantly accelerates ingest and other aspects of media workflows.

Quantum’s Xcellis scale-out appliances provide high performance for increasingly demanding applications and higher resolution content. Adding NVMe storage to the Xcellis appliances offers ultra-fast performance: 22 GB/s single-client, uncached streaming bandwidth. Excelero’s NVMesh technology in combination with StorNext ensures all data is accessible by multiple clients in a global namespace, making it easy to access and cost-effective to share Flash-based resources.

Xcellis provides cross-protocol locking for shared access across SAN, NFS and SMB, helping users share content across both Fibre Channel and Ethernet.

With StorNext 6.2, Quantum now offers an S3 interface to Xcellis appliances, allowing them to serve as targets for applications designed to write to RESTful interfaces. This allows pros to use Xcellis as either a gateway to the cloud or as an S3 target for web-based applications.

Xcellis environments can now be managed with a new cloud monitoring tool that enables Quantum’s support team to monitor critical customer environmental factors, speed time to resolution and ultimately increase uptime. When combined with Xcellis Web Services — a suite of services that lets users set policies and adjust system configuration — overall system management is streamlined.

Available with StorNext 6.2, enhanced FlexSync replication capabilities enable users to create local or remote replicas of multitier file system content and metadata. With the ability to protect data for both high-performance systems and massive archives, users now have more flexibility to protect a single directory or an entire file system.

StorNext 6.2 lets administrators provide defined and enforceable quotas and implement quality of service levels for specific users, and it simplifies reporting of used storage capacity. These new features make it easier for administrators to manage large-scale media archives efficiently.

The new S3 interface and NVMe storage option are available today. The other StorNext features and capabilities will be available by December 2018.


mLogic at IBC with four new storage solutions

mLogic will be at partner booths during IBC showing four new products: the mSpeed Pro, mRack Pro, mShare MDC and mTape SAS.

The mLogic mSpeed Pro (pictured) is a 10-drive RAID system with integrated LTO tape. This hybrid storage solution provides high-speed hard drive access to media for coloring, editing and VFX, while also providing extended, long-term archiving of content to LTO tape, which promises more than 30 years of media preservation.

mSpeed Pro supports multiple RAID levels, including RAID-6 for the ultimate in fault tolerance. It connects to any Linux, macOS, or Windows computer via a fast 40Gb/second Thunderbolt 3 port. The unit ships with the mLogic Linear Tape File System (LTFS) Utility, a simple drag-and-drop application that transfers media from the RAID to the LTO.

The mLogic mSpeed Pro will be available in 60TB, 80TB and 100TB capacities with an LTO-7 or LTO-8 tape drive. Pricing starts at $8,999.

The mRack Pro is a 2U rack-mountable archiving solution that features full-height LTO-8 drives and Thunderbolt 3 connectivity. Full-height (FH) LTO-8 drives offer numerous benefits over their half-height counterparts, including:
– Having larger motors that move media faster
– Working more optimally in LTFS (Linear Tape File System) environments
– Providing increased mechanical reliability
– Being a better choice for high-duty cycle workloads
– Having a lower operating temperature

The mRack Pro is available with one or two LTO-8 FH drives. Pricing starts at $7,999.

mLogic’s mShare is a metadata controller (MDC) with PCIe switch and embedded Storage Area Network (SAN) software, all integrated in a single compact rack-mount enclosure. Designed to work with mLogic’s mSAN Thunderbolt 3 RAID, the unit can be configured with Apple Xsan or Tiger Technology Tiger Store software. With mShare and mSAN, collaborative workgroups can be configured over Thunderbolt at a fraction of the cost of traditional SAN solutions. Pricing TBD.

Designed for archiving media in Linux and Windows environments, mTape SAS is a desktop LTO-7 or LTO-8 drive that ships bundled with a high-speed SAS PCIe adapter to install in host computers. The mTape SAS can also be bundled with XenData Workstation 6 archiving software for Windows. Pricing starts at $3,399.

Review: Mobile Filmmaking with Filmic Pro, Gnarbox, LumaFusion

By Brady Betzel

There is a lot of what’s become known as mobile filmmaking being done with cell phones, such as the iPhone, Samsung Galaxy and even the Google Pixel. For this review, I will cover two apps and one hybrid hard drive/mobile media ingest station built specifically for this type of mobile production.

Recently, I’ve heard how great the latest mobile phone camera sensors are, and how those embracing mobile filmmaking are taking advantage of them in their workflows. Those workflows typically have one thing in common: Filmic Pro.

One of the more difficult parts of mobile filmmaking, whether you are using a GoPro, DSLR or your phone, is storage and transferring the media to a workable editing system. The Gnarbox, which is designed to help solve this issue, is in my opinion one of the best solutions for mobile workflows that I have seen.

Finally, editing your footage together in a professional nonlinear editor like Adobe Premiere Pro or Blackmagic’s Resolve takes some skills and dedication. Moreover, if you are doing a lot of family filmmaking (like me), you usually have to wait for the kids to go to sleep to start transferring and editing. However, with the iOS app LumaFusion — used alongside the Gnarbox — you can transfer your GoPro, DSLR or other pro camera shots while your actors are taking a break, allowing you to clear your memory cards or get started on a quick rough cut to send to executives who might be waiting off site.

Filmic Pro
First up is Filmic Pro V.6. Filmic Pro is an iOS and Android app that gives you fine-tuned control over your phone’s camera, including live image analysis features, focus pulling and much more.

There are four very useful live analytic views you can enable at the top of the app: Zebra Stripes, Clipping, False Color and Focus Peaking. There is another awesome recording view that allows simultaneous focus and exposure adjustments, conveniently placed where you would naturally rest your thumbs. With the focus pulling feature you can even set start and end focus points that Filmic Pro will run for you — amazing!

There are many options under the hood of Filmic Pro, including the ability to record at almost any frame rate and aspect ratio, such as 9:16 vertical video (Instagram TV anyone?). You can also film at one frame rate, such as 120fps, and record at a more standard frame rate of 24fps, essentially processing your high-speed footage in the phone. Vertical video is one of those constant questions that arises when producing video for mobile viewing. If you don’t want the app to automatically change to vertical video recording mode, you can set an orientation lock in the settings. When recording video there are several data rate options: Filmic Extreme, with 100Mb/s for any frame size 2K or higher and 50Mb/s for 1080p or lower; Filmic Quality, which limits the data rate to 35Mb/s (your phone’s default data rate); and Economy, which you probably don’t need to use.
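To put those data rates in storage terms, here is a quick back-of-the-envelope sketch in Python; audio and container overhead are ignored, so treat these as ballpark figures:

# Rough storage math for Filmic Pro's data rate options.
def gb_per_hour(megabits_per_second):
    # Mb/s -> MB/s -> MB per hour -> GB per hour
    return megabits_per_second / 8 * 3600 / 1000

for label, rate in [("Filmic Extreme, 2K or higher", 100),
                    ("Filmic Extreme, 1080p or lower", 50),
                    ("Filmic Quality", 35)]:
    print(f"{label}: ~{gb_per_hour(rate):.0f}GB per hour of footage")

At the top setting, roughly 45GB per hour, a phone’s internal storage fills up quickly, which is part of why an offload device like the Gnarbox (covered next) earns its place in the kit.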

I have only touched on a few of the options inside of Filmic Pro. There are many more, including mic input selections, sample rate selections (including 48kHz), timelapse mode and, in my opinion, the most powerful feature, Log recording. Log recording inside of a mobile phone can unlock some unnoticed potential in your phone’s camera chip, allowing for a better ability to match color between cameras or expose details in shadows when doing color correction in post.

The only slightly bad news is that on top of the $14.99 price for the Filmic Pro app itself, to gain access to the Log ability (labeled Cinematographer’s Toolkit) you have to pay an additional $9.99. In the end, $25 is a really, really, really small price to pay for the abilities that Filmic Pro unlocks for you. And while this won’t turn your phone into an Arri Alexa or Red Helium (yet), you can raise your level of mobile cinematography quickly, and if you are using your phone for some B- or C-roll, Filmic Pro can help make your colorist happy, thanks to Log recording.

One feature that I couldn’t test because I do not own a DJI Osmo is that you can control the features on your iOS device from the Osmo itself, which is pretty intriguing. In addition, if you use any of the Moondog Labs anamorphic adapters, Filmic Pro can be programmed to de-squeeze the footage properly.

You can really dive in with Filmic Pro’s library of tutorials here.

Gnarbox 1.0
After running around with GoPro cameras strapped to your (or your dog’s) head all day, there will be some heavy post work to get it offloaded onto your computer system. And, typically, you will have much more than just one GoPro recording during the day. Maybe you took some still photos on your DSLR and phone, shot some drone footage and had GoPro on a chest mount.

As touched on earlier, the Gnarbox 1.0 is a stand-alone WiFi-enabled hard drive and media ingestion station that has SD, microSD, USB 3.0 and USB 2.0 ports to transfer media to the internal 128GB or 256GB Flash memory. You simply insert the memory cards or the camera’s USB cable and connect to the Gnarbox via the app on your phone to begin working or transferring.

There are a bunch of files that will open using the Gnarbox 1.0 iOS and Android apps, but there are some specific files that won’t open, including ProRes, H.265 iPhone recordings, CinemaDNG, etc. However, not all hope is lost. Gnarbox is offering the Gnarbox 2.0 for pre-order via Indiegogo. Version 2.0 will offer compatibility with file types such as ProRes, in addition to having faster transfer times and app-free backups.

So while reading this review of the Gnarbox 1.0, keep Version 2 in the back of your mind, since it will likely contain many new features that you will want… if you can wait until the estimated delivery of January 2019.

Gnarbox 1.0 comes in two flavors: a 128GB version for $299.99, and the version I was sent to review, which is 256GB for $399.99. The price is a little steep, but the efficiency this product brings is worth the price of admission. Click here for all the lovely specs.

The drive itself is made to be used primarily with an iPhone or Android-based device, but it can be put into an external hard drive mode for use with a stand-alone computer. The Gnarbox 1.0 has a write speed of 132MB/s and read speed of 92MB/s when attached to a computer in Mass Storage Mode via the USB 3.0 connection. I actually found myself switching modes a lot when transferring footage or photos back to my main system.

It would be nice to have a way to switch to the external hard drive mode outside of the app, but it’s still pretty easy and takes only a few seconds. To connect your phone or tablet to the Gnarbox 1.0, you need to download the Gnarbox app from the App Store or Google Play Store. From there you can access content on your phone as well as on the Gnarbox when connected to it. In addition to the Gnarbox app, Gnarbox 1.0 can be used with Adobe Lightroom CC and the mobile NLE LumaFusion, which I will cover next in the review.

The reason I love the Gnarbox so much is how simply, efficiently and powerfully it accomplishes its task of storing media without a computer, allowing you to access, edit and export the media to share online without a lot of technical know-how. The one drawback to using cameras like GoPros is that it can take a lot of post processing power to get the videos onto your system and edited. With the Gnarbox, you just insert your microSD card into the Gnarbox, connect your phone via WiFi, edit your photos or footage, then export to your phone or the Gnarbox itself.

If you want to do a full backup of your memory card, you open the Gnarbox app, find the Connected Devices, select some or all of the clips and photos you want to back up to the Gnarbox and click Copy Files. The same screen will show you which files have and have not been backed up yet so you don’t do it multiple times.

When editing photos or video there are many options. If you are simply trimming down a video clip, stringing out a few clips for a highlight reel, adding some color correction and even some music, then the Gnarbox app is all you will need. With the Gnarbox 1.0, you can select resolution and bit rates. If you’re reading this review you are probably familiar with how resolutions and bit rates work, so I won’t bore you with those explanations. Gnarbox 1.0 allows for 4K, 2.7K, 1080p and 720p resolutions and bit rates of 65Mbps, 45Mbps, 30Mbps and 10Mbps.

My rule of thumb for social media is that resolution over 1080p doesn’t really apply to many people since most are watching it on their phone, and even with a high-end HDR, 4K, wide gamut… whatever, you really won’t see much difference. The real difference comes in bit rates. Spend your megabytes wisely and put all your eggs in the bit rate basket. The higher the bit rates the better quality your color will be and there will be less tearing or blockiness. In my opinion a higher bit rate 1080p video is worth more than a 4K video with a lower bit rate. It just doesn’t pay off. But, hey, you have the options.

Gnarbox has an awesome support site where you can find tutorial GIFs and writeups covering everything from powering on your Gnarbox to bitrates, like this one. They also have a great YouTube playlist that covers most topics with the Gnarbox, its app, and working with other apps like LumaFusion to get you started. Also, follow them on Instagram for some sweet shots they repost.

LumaFusion
Filmic Pro captures your video and the Gnarbox lets you lightly edit and consolidate your media, but you might need to go a little further in the editing than just simple trims. This is where LumaFusion comes in. At the moment, LumaFusion is an iOS-only app, but I’ve heard they might be working on an Android version. So for this review I tried to get my hands on an iPad and an iPad Pro because this is where LumaFusion would sing. Alas, I had to settle for my wife’s iPhone 7 Plus. This was actually a small blessing, because I was afraid the app would be way too small to use on a standard iPhone. To my surprise it was actually fine.

LumaFusion is an iOS-based nonlinear editor, much like Adobe Premiere or FCPX, but it only costs $19.99 in the App Store. I added LumaFusion to this review because of its tight integration with Gnarbox (by accessing the files directly on the Gnarbox for editing and output), but also because it has presets for Filmic Pro aspect ratios: 1.66:1, 17:9, 2.2:1, 2.39:1, 2.59:1. LumaFusion will also integrate with external drives like the Western Digital wireless SSD, as well as cloud services like Google Drive.

In the actual editing interface LumaFusion allows for advanced editing with titles, music, effects and color correction. It gives you three video and audio tracks to edit with, allowing for J and L cuts or transitions between clips. For an editor like me who is so used to Avid Media Composer that I want to slip and trim in every app, LumaFusion allows for slips, trims, insert edits, overwrite edits, audio track mixing, audio ducking to automatically set your music levels — depending on when dialogue occurs — audio panning, chroma key effects, slow and fast motion effects, titles with different fonts and much more.

There is a lot of versatility inside of LumaFusion, including the ability to export different frame rates between 18, 23.976, 24, 25, 29.97, 30, 48, 50, 59.94, 60, 120 and 240 fps. If you are dealing with 360-degree video, you can even enable the 360-degree metadata flag on export.

LumaFusion has a great reference manual that will fill you in on all the aspects of the app, and it’s a good primer on other subjects like exporting. In addition, they have a YouTube playlist. Simply put, you can export for all sorts of social media platforms or even share over AirDrop between Mac OS and iOS devices. You can choose your export resolution, such as 1080p or UHD 4K (3840×2160), as well as your bit rate, and then you can select your codec, whether it be H.264 or H.265. You can also choose whether the container is an MP4 or a MOV.

Obviously, some of these output settings will be dictated by the destination, such as YouTube, Instagram or maybe your NLE on your computer system. Bit rate is very important for color fidelity and overall picture quality. LumaFusion has a few settings on export, including 12Mbps, 24Mbps, 32Mbps and 50Mbps in 1080p, or 100Mbps if you are exporting UHD 4K (3840×2160).

LumaFusion is a great solution for someone who needs the fine tuning of a pro NLE on their iPad or iPhone. You can be on an exotic vacation without your laptop and still create intricately edited highlight reels.

Summing Up
In the end, technology is amazing! From the ultra-high-end camera app Filmic Pro to the amazing wireless media hub Gnarbox and even the iOS-based nonlinear editor LumaFusion, you can film, transfer and edit a professional-quality UHD 100Mbps clip without the need for a stand-alone computer.

If you really want to see some amazing footage being created using Filmic Pro you should follow Richard Lackey on all social media platforms. You can find more info on his website. He has some amazing imagery as well as tips on how to shoot more “cinematic” video using your iPhone with Filmic Pro.

The Gnarbox — one of my favorite tools reviewed over the years — serves a purpose and excels. I can’t wait to see how the Gnarbox 2.0 performs when it is released. If you own a GoPro or any type of camera and want a quick and slick way to centralize your media while you are on the road, then you need the Gnarbox.

LumaFusion will finish off your mobile filmmaking vision with titles, trimming and advanced edit options that will leave people wondering how you pulled off such a professional video from your phone or tablet.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Dell EMC’s ‘Ready Solutions for AI’ now available

Dell EMC has made available its new Ready Solutions for AI, with specialized designs for Machine Learning with Hadoop and Deep Learning with Nvidia.

Dell EMC Ready Solutions for AI eliminate the need for organizations to individually source and piece together their own solutions. They offer a Dell EMC-designed and validated set of best-of-breed technologies for software — including AI frameworks and libraries — with compute, networking and storage. Dell EMC’s portfolio of services includes consulting, deployment, support and education.

Dell EMC’s Data Science Provisioning Portal offers an intuitive GUI that provides self-service access to hardware resources and a comprehensive set of AI libraries and frameworks, such as Caffe and TensorFlow. This reduces the steps it takes to configure a data scientist’s workspace to five clicks. Ready Solutions for AI’s distributed, scalable architecture offers the capacity and throughput of Dell EMC Isilon’s All-Flash scale-out design, which can improve model accuracy with fast access to larger data sets.

Dell EMC Ready Solutions for AI: Deep Learning with Nvidia solutions are built around Dell EMC PowerEdge servers with Nvidia Tesla V100 Tensor Core GPUs. Key features include Dell EMC PowerEdge R740xd and C4140 servers with four Nvidia Tesla V100 SXM2 Tensor Core GPUs; Dell EMC Isilon F800 All-Flash Scale-out NAS storage; and Bright Cluster Manager for Data Science in combination with the Dell EMC Data Science Provisioning Portal.

Dell EMC Ready Solutions for AI: Machine Learning with Hadoop includes an optimized solution stack, along with data science and framework optimization to get up and running quickly, and it allows expansion of existing Hadoop environments for machine learning.

Key features include Dell EMC PowerEdge R640 and R740xd servers; Cloudera Data Science Workbench for self-service data science for the enterprise; the Apache Spark open source unified data analytics engine; and the Dell EMC Data Science Provisioning Engine, which provides preconfigured containers that give data scientists access to the Intel BigDL distributed deep learning library on the Spark framework.

New Dell EMC Consulting services are available to help customers implement and operationalize the Ready Solution technologies and AI libraries, and scale their data engineering and data science capabilities. Dell EMC Education Services offers courses and certifications on data science and advanced analytics and workshops on machine learning in collaboration with Nvidia.

DigitalGlue’s Creative.Space optimized for Resolve workflows

DigitalGlue’s Creative.Space, an on-premise managed storage (OPMS) service, has been optimized for Blackmagic DaVinci Resolve workflows, meeting the technical requirements for inclusion in Blackmagic’s Configuration Guide. DigitalGlue is an equipment, integration and software development provider that also designs and implements complete turnkey solutions for content creation, post production and distribution.

According to DigitalGlue CEO/CTO Tim Anderson, each Creative.Space system is pre-loaded with a Resolve-optimized PostgreSQL database server, enabling users to simply create databases in Resolve using the same address they use to connect to their storage. In addition, users can schedule database backups with snapshots, ensuring that work is preserved regularly and securely. Creative.Space also uses media intelligent caching to move project data and assets into a “fast lane,” allowing all collaborators to experience seamless performance.

“We brought a Creative.Space entry-level Auteur unit optimized with a DaVinci Resolve database to the Blackmagic training facility in Burbank,” explains Nick Anderson, Creative.Space product manager. “The Auteur was put through a series of rigorous testing processes and passed each with flying colors. Our Media Intelligent caching allowed the unit to provide full performance to 12 systems at a level that would normally require a much larger and more expensive system.”

Auteur was the first service in the Creative.Space platform to launch. Creative.Space targets collaborative workflows by optimizing the latest hardware and software for efficiency and increased productivity. Auteur starts at 120TB RAW capacity across 12 drives in a 24-bay 4RU chassis with open bays for rapid growth. Every system is custom-built to address each client’s unique needs. Entry level systems are designed for small to medium workgroups using compressed 4K, 6K and 8K workflows and can scale for 4K uncompressed workflows (including 4K OpenEXR) and large multi-user environments.

Avid adds to Nexis product line with Nexis|E5

The Nexis|E5 NL nearline storage solution from Avid is now available. The addition of this high-density on-premises solution to the Avid Nexis family allows Avid users to manage media across all their online, nearline and archive storage resources.

Avid Nexis|E5 NL includes a new web-based Nexis management console for managing, controlling and monitoring Nexis installations. Nexis|E5 NL can be easily accessed through MediaCentral|Cloud UX or Media Composer and also integrates with MediaCentral|Production Management, MediaCentral|Asset Management and MediaCentral|Editorial Management to help collaboration, with advanced features such as project and bin sharing. Extending the Nexis|FS (file system) to a secondary storage tier makes it easy to search for, find and import media, enabling users to locate content distributed throughout their operations more quickly.

Avid reports that Nexis|E5 NL, built for project parking, staging workflows and proxy archive, streamlines the workflow between active and non-active assets, allowing media organizations to park assets as well as completed projects on high-density nearline storage and keep them within easy reach for rediscovery and reuse.

Up to eight Nexis|E5 NL engines can be integrated as one virtualizable pool of storage, making content and associated projects and bins more accessible. In addition, other Avid Nexis Enterprise engines can be integrated into a single storage system that is partitioned for better archival organization.

Additional Nexis|E5 NL features include:
• It’s scalable from 480TB of storage to more than 7PB by connecting multiple Nexis|E5 NL engines together as a single nearline system for a highly scalable, lower-cost secondary tier of storage.
• It offers flexible storage infrastructure that can be provisioned with required capacity and fault-tolerance characteristics.
• Users can configure, control and monitor Nexis using the updated management console that looks and feels like a MediaCentral|Cloud UX application. Its dashboard provides an overview of the system’s performance, bandwidth and status, as well as access to quickly configure and manage workspaces, storage groups, user access, notifications and other functions. It offers the flexibility and security of HTML5 along with an interface design that enables mobile device support.

DigitalGlue’s Creative.Space intros all-Flash 1RU OPMS storage

Creative.Space, a division of DigitalGlue that provides on-premise managed storage (OPMS) as a service for production and post companies as well as broadcast networks, has added the Breathless system to its offerings. The product will make its debut at Cine Gear in LA next month.

The Breathless Next Generation Small Form Factor (NGSFF) media storage system offers 36 front-serviceable NVMe SSD bays in 1RU. It is designed for 4K, 6K and 8K uncompressed workflows using JPEG2000, DPX and multi-channel OpenEXR. 4TB NVMe SSDs are currently available, and a 16TB version will be available later this year, allowing 576TB of Flash storage to fit in 1RU. Breathless delivers 10 million random read IOPS (Input/Output Operations per Second) of storage performance (up to 475,000 per drive).
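A little arithmetic shows how those figures relate; this is a rough sanity check, not vendor-published math:

# Quick sanity math on the Breathless density and IOPS figures quoted above.
bays, future_ssd_tb = 36, 16
print(f"{bays * future_ssd_tb}TB of flash per 1RU once 16TB NVMe SSDs ship")  # 576TB

per_drive_iops, system_iops = 475_000, 10_000_000
raw_ceiling = bays * per_drive_iops  # ~17.1M IOPS if every drive ran flat out
print(f"Drive-level ceiling: {raw_ceiling / 1e6:.1f}M IOPS; the 10M system figure "
      f"is about {system_iops / raw_ceiling:.0%} of that, with the remainder "
      "presumably absorbed by controller, network and file system overhead.")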

Each of the 36 NGSFF SSD bays connects to the motherboard directly over PCIe to deliver maximum potential performance. With dual Intel Skylake-SP CPUs and 24 DDR4 DIMMs of memory, this system is perfect for I/O-intensive local workloads, not just high-end VFX but also realtime analytics, database and OTT content delivery servers.

Breathless’ OPMS features 24/7 monitoring, technical support and next-day repairs for an all-inclusive, affordable fixed monthly rate of $2,495.00, based on a three-year contract (16TB of SSD).

Breathless is the second Creative.Space system to launch, joining Auteur, which offers 120TB RAW capacity across 12 drives in a 24-bay 4 RU chassis. Every system is custom-built to address each client’s needs. Entry level systems are designed for small to medium workgroups using compressed 4K, 6K and 8K workflows and can scale for 4K uncompressed workflows (including 4K OpenEXR) and large multi-user environments.

DigitalGlue, an equipment, integration and software development provider, also designs and implements turnkey solutions for content creation, post and distribution.

 

NAB: Imagine Products and StorageDNA enhance LTO and LTFS

By Jonathan S. Abrams

That’s right. We are still talking NAB. There was a lot to cover!

So, the first appointment I booked for NAB Show 2018, both in terms of my show schedule (10am Monday) and the vendors I was in contact with, was with StorageDNA’s Jeff Krueger, VP of worldwide sales. Weeks later, I found out that StorageDNA was collaborating with Imagine Products on myLTOdna, so I extended my appointment. Doug Hynes, senior director of business development for StorageDNA, and Michelle Maddox, marketing director of Imagine Products, joined me to discuss what they had ready for the show.

The introduction of LTFS during NAB 2010 allowed LTO tape to be accessed as if it were a hard drive. Since LTO tape is linear, executing multiple operations at once and treating it like a hard drive results in performance falling off a cliff. It can also cause the drive to engage in shoeshining, or shuttling of the tape back and forth over the same section.

Imagine Products’ main screen.

Eight years later, these performance and operation issues have been addressed by StorageDNA’s creation of HyperTape, which is their enhanced Linear Tape File System that is part of Imagine Products’ myLTOdna application. My first question was “Is HyperTape yet another tape format?” Fortunately for me and other users, the answer is “No.”

What is HyperTape? It is a workflow powered by dnaLTFS. The word “enhanced” in the description of HyperTape as an enhanced Linear Tape File System refers to middleware in their myLTOdna application for Mac OS. Three commands can be executed to put an LTO drive into read-only, write-only or training mode. Putting the LTO drive into an “only” mode allows it to achieve up to 300MB/s of throughput. This is where the Hyper in HyperTape comes from. These modes can also be engaged from the command line.

Training mode allows for analyzing the files stored on an LTO tape and then storing that information in a Random Access Database (RAD). The creation of the RAD can be automated using Imagine Products’ PrimeTranscoder. Otherwise, each file on the tape must be opened in order to train myLTOdna and create a RAD.

As for shoeshining, or shuttling of the tape back-and-forth over the same section, this is avoided by intelligently writing files to LTO tape. This intelligence is proprietary and is built into the back-end of the software. The result is that you can load a clip in Avid’s Media Composer, Blackmagic’s DaVinci Resolve or Adobe’s Premiere Pro and then load a subclip from that content into your project. You still should not load a clip from tape and just press play. Remember, this is LTO tape you are reading from.
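StorageDNA’s write logic is proprietary, but the general principle behind avoiding shoeshining is well understood: order reads and writes by their position on tape so the drive streams in one direction instead of shuttling. A generic Python illustration of that idea (not StorageDNA’s implementation) might look like this:

# Generic illustration only -- not StorageDNA's proprietary logic. Restoring
# files in on-tape order lets an LTO drive stream forward in a single pass
# instead of shuttling back and forth between scattered offsets.
def plan_restore(requests):
    """requests: list of (filename, start_block) pairs from a tape catalog (RAD-style index)."""
    return sorted(requests, key=lambda item: item[1])  # read in tape order

wanted = [("B012_C004.mov", 918_400), ("A001_C001.mov", 2_048), ("A003_C017.mov", 455_100)]
for name, block in plan_restore(wanted):
    print(f"restore {name} starting at block {block}")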

The target customer for myLTOdna is a DIT with camera masters who wants to reduce how much time it takes to back up their footage. Previously, DITs would transfer the camera card’s contents to a hard drive using an application such as Imagine Products’ ShotPut Pro. Once the footage had been transferred to a hard drive, it could then be transferred to LTO tape. Using myLTOdna in write-only mode allows a DIT to bypass the hard drive and go straight from the camera card to an LTO tape. Because the target customer is already using ShotPut Pro, the UI for myLTOdna was designed to be comfortable and not difficult to use or understand.

The licensing for dnaLTFS is tied to the serial number of an LTO drive. StorageDNA’s Krueger explained that “dnaLTFS is the drive license that works with stand-alone Mac LTO drives today.” Purchasing a license for dnaLTFS allows the user to later upgrade to StorageDNA’s DNAevolution M Series product if they need automation and scheduling features, without having to purchase another drive license if the same LTO drive is used.

Krueger went on to say, “We will have (dnaLTFS) integrated into our DNAevolution product in the future.” DNAevolution’s cost of entry is $5,000. A single LTO drive license starts at $1,250. Licensing is perpetual, and updates are available without a support contract. myLTOdna, like ShotPut Pro and PrimeTranscoder, is a one-time purchase (perpetual license). It will phone home on first launch. Remote support is available for $250 per year.

I also envision myLTOdna being useful outside of the DIT market. Indeed, this was the thinking when the collaboration between Imagine Products and StorageDNA began. If you do not mind doing manual work and want to keep your costs low, myLTOdna is for you. If you later need automation and can budget for the efficiencies that you get with it, then DNAevolution is what you can upgrade to.


Jonathan S. Abrams is the Chief Technical Engineer at Nutmeg, a creative marketing, production and post resource, located in New York City.

Display maker TVLogic buys portable backup storage company Nexto DI

TVLogic, a designer and manufacturer of LCD and OLED high-definition displays, has acquired Nexto DI, a provider of portable field backup storage for digital cameras. Both companies are located in South Korea.

Nexto DI products use the company’s patented “X-copy” technology, while M-copy (copying to multiple drives simultaneously), according to the company, guarantees 100% data safety, even in worst-case circumstances.

We reached out to TVLogic’s Denny An to find out more…

Why did it make sense for TVLogic to acquire Nexto DI?
TVLogic develops and manufactures broadcast and pro monitors that work in concert with other equipment. Because we compete on a global scale with large organizations that supply other products, such as cameras, switchers and more in addition to monitors, we realized we had to extend our offerings to better serve our customers and stay competitive. After a thorough search for companies that provide complementary products we found the perfect technology partner with Nexto DI.

We will continue our efforts to become a comprehensive broadcast and professional equipment company by searching for products and companies that can create synergy with our monitor technology.

How do you feel this fits in with what you already provide for the industry?
TVLogic has over 90 distributors and service networks around the world that can now also promote, sell and provide the same great-quality service for the Nexto DI product line. Although data backup is the main feature of the Nexto DI products, they also support image and video preview features. We’re confident that the combined technologies of TVLogic and Nexto DI will result in new monitor products with built-in recording features in the near future.

High-performance flash storage at NAB 2018

By Tom Coughlin

After years of watching the development of flash memory-based storage for media and entertainment applications, especially for post, it finally appears that these products are getting some traction. This is driven by the decreasing cost of flash memory and also the increase in 4K up to 16K workflows with high frame rates and multi-camera video projects. The performance demanded for working storage to support multiple UHD raw video streams makes high performance storage attractive. Examples of 8K workflows were everywhere at the 2018 NAB show.

Flash memory is the clear leader in professional video camera media, growing from 19% in 2009 to 66% in 2015, before registering 54% in 2016 and 59% in 2017. The 2017 media and entertainment professional survey results are shown below.

Flash memory capacity used in M&E applications is believed to have been about 3.1% in 2016, but will be larger in coming years. Overall, revenues for flash memory in M&E should increase by more than 50% in the next few years as flash prices go down and it becomes a more standard primary storage for many applications.

At the 2018 NAB Show, and the NAB ShowStoppers, there were several products geared for this market and in discussion with vendors it appears that there is some real traction for solid state memory for some post applications, in addition to cameras and content distribution. This includes solid-state storage systems built with SAS, SATA and the newer NVMe interface. Let’s look at some of these products and developments.

Flash-Based Storage Systems
Excelero reports that its NVMe software-defined block storage solution with its low-latency and high-bandwidth improves the interactive editing process and enables customers to stream high-resolution video without dropping frames. Technicolor has said that it achieved 99.8% of local NVMe storage server performance across the network in an initial use of Excelero’s NVMesh. Below is the layout of the Pixit Media Excelero demonstration for 8K+ workflows at the NAB show.

“The IT infrastructure required to feed dozens of workstations of 4K files at 24fps is mindboggling — and that doesn’t even consider what storage demands we’ll face with 8K or even 16K formats,” says Amir Bemanian, engineering director at Technicolor. “It’s imperative that we can scale to future film standards today. Now, with innovations like the shared NVMe storage such as Excelero provides, Technicolor can enjoy a hardware-agnostic approach, enabling flexibility for tomorrow while not sacrificing performance.”

Excelero was showcasing 16K post production workflows with the Quantum StorNext storage and data management platform and Intel on the Technicolor project, and at the Mellanox booth with its 100Gb Ethernet switch.

Storbyte, a company based in Washington, DC, was showing its Eco Flash servers at the NAB show. The product featured hot-swappable and accessible flash storage bays and redundant hot-swappable server controllers. It uses the company’s Hydra Dispersed Algorithmic Modeling (HDAM), which allows Storbyte to avoid a flash translation layer, garbage collection and dirty block management, resulting in less performance overhead. The company’s Data Remapping Accelerator Core (DRACO) is said to offer up to a 4X performance increase over conventional flash architectures; it can maintain peak performance even at 100% drive capacity and over the drive’s life, eliminating the write cliff and other problems that flash memory is subject to.

DDN was showing its ExaScaler DGX solution, which combines a DDN ExaScaler ES14KX high-performance all-flash array with a single Nvidia DGX-1 GPU server (initially announced at the 2018 GPU Technology Conference). Performance of the combination achieved up to 33GB/s of throughput. The company was touting this combination to accelerate machine learning, reducing the load times of large datasets to seconds for faster training. According to DDN, the combination also allows massive ingest rates and cost-effective capacity scaling, and it achieved more than 250,000 random read 4K IOPS. In addition to HDD-based storage, DDN offers hybrid HDD/SSD as well as all-flash array products. The new DDN SFA200NV all-flash platform was also on display at the 2018 NAB show.

Dell EMC was showing its Isilon F800 all-flash scale-out NAS for creative applications. According to the company, the Isilon all-flash array gives visual effects artists and editors the power to work with multiple streams of uncompressed, full-aperture 4K material, enabling collaborative, global post and VFX pipelines for episodic and feature projects.

 

Dell EMC said this allows a true scale-out architecture with high concurrency and super-fast all-flash network-attached storage with low latency for high-throughput and random-access workloads. The company was demonstrating 4K editing of uncompressed DPX files with Adobe Premiere using a shared Isilon F800 all-flash array. They were also showing 4K and UHD workflows with Blackmagic’s DaVinci Resolve.

NetApp had a focus on solid-state storage for media workflows in its “Lunch and Learn” sessions, co-hosted by Advanced Systems Group (ASG). The sessions discussed how NVMe and Storage Class Memory (SCM) are reshaping the storage industry. NetApp provides SSD-based E-Series products that are used in the media and entertainment industry.

Promise Technology had its own NVMe SSD-based products. The company had data sheets on two NVMe fabric products. One was an HA storage appliance in a 2RU form factor (NVF-9000) with 24 NVMe drive slots and 100GbE ports, offering up to 15M IOPS and 40GB/s throughput along with many other enterprise features. The company said that its fabric allows servers to connect to a pool of storage nodes as if they had local NVMe SSDs. Promise’s NVMe Intelligent Storage is a 1U appliance (NVF-7000) with multiple 100GbE connectors offering up to 5M IOPS and 20GB/s throughput. Both products offer RAID redundancy and end-to-end RDMA memory access.

Qumulo was showing its Qumulo P-Series NVMe all-flash solution. The P-Series combines Qumulo File Fabric (QF2) software with high-speed NVMe, Intel Skylake SP processors, high-bandwidth Intel SSDs and 100GbE networking. It offers 16GB/s in a minimum four-node configuration (4GB/s per node). The P-Series nodes come in 23TB and 92TB sizes. According to Qumulo, QF2 provides realtime visibility and control regardless of the size of the file system, realtime capacity quotas, continuous replication, support for both SMB and NFS protocols, complete programmability with a REST API and fast rebuild times. Qumulo says the P-Series can run on-premise or in the cloud and can create a data fabric that interconnects every QF2 cluster, whether it is all-flash, hybrid SSD/HDD or running on EC2 instances in AWS.

AIC was at the show with its J2024-04 2U 24-bay NVMe all-flash array using a Broadcom PCIe switch. The product includes dual hot-swap redundant 1.3kW power supplies. AIC was also showing this AFA providing a storage software fabric platform with EXTEN smart NICs using Broadcom chips, as well as an NVMe JBOF.

Companies such as LumaForge were showing various hierarchical storage options, including flash memory, as shown in the image below.

Some other solid-state products included the use of two SATA SSDs for performance improvements in the SoftIron HyperDrive Ceph-based object storage appliance. Scale Logic has a hybrid SSD SAN/NAS product called Genesis Unlimited, which can support multiple 4K streams with a combination of HDDs and SSDs. Another NVMe offering was the RAIDIX NVMEXP software RAID engine for building NVMe-based arrays, offering 4M IOPS and 30GB/s per 1U and supporting RAID levels 5, 6 and 7.3. Nexsan has all-flash versions of its Unity storage products. Pure Storage had a small booth in the back of the lower South Hall showing its flash array products. Spectra Logic was showing new developments in its flash-based BlackPearl product, but we will cover that in another blog.

External Flash Storage Products
Other World Computing (OWC) was showing its solid-state and HDD-based products. They had a line-up of Thunderbolt 3 storage products, including the ThunderBlade and the Envoy Pro EX (VE) with Thunderbolt 3. The ThunderBlade uses a combination of M.2 SSDs to achieve transfer speeds up to 2.8 GB/s read and 2.45 GB/s write (pretty symmetrical R/W) with 1TB to 8TB storage capacity. It is fanless and has a dimmable LED so it won’t interfere with production work. OWC’s mobile bus-powered SSD product, Envoy Pro EX (VE) with Thunderbolt 3 provides sustained data rates up to 2.6 GB/s read and 1.6 GB/s write. This small 1TB to 2TB drive can be carried in a backpack or coat pocket.

Western Digital and Seagate were also showing external SSD drives. Below is the G-Drive Mobile SSD-R, introduced in late 2017.

Memory Cards and SSDs
Samsung was at NAB showing its 860 EVO 2.5-inch SATA SSDs. These provide up to 4TB capacity and 550MB/s sequential read and 520MB/s sequential write speeds for media workstation applications. The product was also shown being used in all-flash arrays, as seen below.

ProGrade was showing its line of professional memory cards for high-end digital cameras. These included a CFexpress 1.0 memory card with 1TB capacity and 1.4GB/s read data transfer speed, as well as burst write speed greater than 1GB/s. This new CompactFlash Association standard is a successor to both the CFast and XQD formats. The card uses two lanes of PCIe, includes NVMe support and is interoperable with the XQD form factor. ProGrade also announced its V90 premium line of SDXC UHS-II memory cards with sustained read speeds of up to 250MB/s and sustained write speeds up to 200MB/s.

2018 Creative Storage Conference
For those who love storage, the 12th Annual Creative Storage Conference (CS 2018) will be held on June 7 at the Double Tree Hotel West Los Angeles in Culver City. This event brings together digital storage providers, equipment and software manufacturers and professional media and entertainment end users to explore the conference theme: “Enabling Immersive Content: Storage Takes Off.”

Also, my company, Coughlin Associates, is conducting a survey of digital storage requirements and practices for media and entertainment professionals, with results presented at the 2018 Creative Storage Conference. M&E professionals can participate in the survey through this link. Those who complete the survey, with their contact information, will receive a free full pass to the conference.

Our main image: Seagate products in an editing session, including products in a Pelican case for field work. 


Tom Coughlin is president of Coughlin Associates, a digital storage analyst and technology consultant. He has over 35 years in the data storage industry. He is also the founder of the Annual Storage Visions Conference and the Creative Storage Conference.

 

NAB 2018: My key takeaways

By Twain Richardson

I traveled to NAB this year to check out gear, software, technology and storage. Here are my top takeaways.

Promise Atlas S8+
First up is storage and the Promise Atlas S8+. The Promise Atlas S8+ is a network attached storage solution for small groups that features easy and fast NAS connectivity over Thunderbolt 3 and 10Gb Ethernet.

The Thunderbolt 3 version of the Atlas S8+ offers two Thunderbolt 3 ports, four 1Gb Ethernet ports, five USB 3.0 ports and one HDMI output. The 10GBase-T version swaps in two 10Gb Ethernet ports for the Thunderbolt 3 connections. It can be configured up to 112TB. The unit comes empty, and you will have to buy hard drives for it. The Atlas S8+ will be available later this year.

Lumaforge

Lumaforge Jellyfish Tower
The Jellyfish is designed for one thing and one thing only: collaborative video workflow. That means high bandwidth, low latency and no dropped frames. It features a direct connection, and you don’t need a 10GbE switch.

The great thing about this unit is that it runs quiet, and I mean very quiet. You could place it under your desk and you wouldn’t hear it running. It comes with two 10GbE ports and one 1GbE port. It can be configured for more ports and goes up to 200TB. The unit starts at $27,000 and is available now.

G-Drive Mobile Pro SSD
The G-Drive Mobile Pro SSD is blazing-fast storage with data transfer rates of up to 2800MB/s. It was said that you could transfer as much as a terabyte of media in seven minutes or less. That’s fast. Very fast.
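The math backs that claim up. At the quoted rate, a full terabyte moves in roughly six minutes:

# Checking the "terabyte in seven minutes or less" claim at 2,800MB/s.
capacity_mb = 1_000_000          # 1TB expressed in decimal megabytes
rate_mb_per_s = 2_800            # quoted transfer rate
minutes = capacity_mb / rate_mb_per_s / 60
print(f"~{minutes:.1f} minutes to move 1TB at {rate_mb_per_s}MB/s")  # ~6.0 minutes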

It provides up to three-meter drop protection, comes with a single Thunderbolt 3 port and is bus-powered. It also features a 1,000lb crush-proof rating, which makes it ideal for use in the field. It will be available in May with a capacity of 500GB; 1TB and 2TB versions will be available later this year.

OWC Thunderblade
Designed to be dependable as well as blazing fast, the ThunderBlade has a rugged and sleek design, and it comes with a custom-fit ballistic hard-shell case. With capacities of up to 8TB and data transfer rates of up to 2800MB/s, this unit is ideal for on-set workflows. The unit is not bus-powered, but you can connect two ThunderBlades to reach speeds of up to 3800MB/s. Now that’s fast.

OWC Thunderblade

It starts at $1,199 for the 1TB and is available now for purchase.

OWC Mercury Helios FX External Expansion Chassis
Add the power of a high-performance GPU to your Mac or PC via Thunderbolt 3. Performance is plug-and-play, and upgrades are easy. The unit is quiet and runs cool, making it a great addition to your environment.

It starts at $319 and is available now.

Flanders XM650U
This display is beautiful, absolutely beautiful.

The XM650U is a professional reference monitor designed for color-critical monitoring of 4K, UHD, and HD signals. It features the latest large-format OLED panel technology, offering outstanding black levels and overall picture performance. The monitor also features the ability to provide a realtime downscaled HD resolution output.

The FSI booth was showcasing the display playing HD, UHD, and UHD HDR content, which demonstrates how versatile the device is.

The monitor goes for $12,995 and is available for purchase now.

DaVinci Resolve 15
What could arguably be the biggest update yet to Resolve is version 15. It combines editing, color correction, audio and now visual effects in one software tool with the addition of Fusion. Other additions include ADR tools in Fairlight and a sound library. The color and edit pages have additions such as a LUT browser, shared grades, stacked timelines, closed captioning tools and more.

You can get DR15 for free — yes free — with some restrictions to the software and you can purchase DR15 Studio for $299. It’s available as a beta at the moment.

Those were my top takeaways from NAB 2018. It was a great show, and I look forward to NAB 2019.


Twain Richardson is a co-founder of Frame of Reference, a boutique post production company located on the beautiful island of Jamaica. Follow the studio and Twain on Twitter: @forpostprod @twainrichardson

Riding the digital storage bus at the HPA Tech Retreat

By Tom Coughlin

At the 2018 HPA Tech Retreat in Palm Desert there were many panels that spoke to the changing requirements for digital storage to support today’s diverse video workflows. While at the show, I happened to snap a picture of the Maxx Digital bus — these guys supply video storage and RAID. I liked this picture because it had the logos of a number of companies with digital storage products serving the media and entertainment industry. So, this blog will ride the storage bus to see where digital storage in M&E is going.

Director of photography Bill Bennett, ASC, and RealD senior scientist Tony Davis gave an interesting talk about why it can be beneficial to capture content at high frame rates, even if it will ultimately be shown at a much lower frame rate. They also offered some interesting statistics about Ang Lee’s 2016 technically groundbreaking movie, Billy Lynn’s Long Halftime Walk, which was shot in 3D at 4K resolution and 120 frames per second.

The image above is a slide from the talk describing the size of the data generated in creating this movie. Single Sony F65 frames with 6:1 compression were 5.2MB in size, with 7.5TB of average footage per day over 49 days. They reported that 104 512GB cards were used to capture and transfer the content, and the total raw negative size (including test materials) was 404TB. This was stored on 1.5PB of hard disk storage. The actual size of the racks used for storage and processing wasn’t all that big. The photo below shows the setup in Ang Lee’s apartment.
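Those figures hang together if you run the numbers. The sketch below is an approximation based on the talk’s quoted values, assuming two F65 bodies for the stereo rig:

# Back-of-the-envelope check on the Billy Lynn figures quoted above.
frame_mb, fps = 5.2, 120
per_camera_mb_s = frame_mb * fps                 # data rate per F65 body while rolling
stereo_gb_s = 2 * per_camera_mb_s / 1000         # assume two cameras for the 3D rig
print(f"~{per_camera_mb_s:.0f}MB/s per camera, ~{stereo_gb_s:.2f}GB/s for the stereo rig")

days, avg_tb_per_day, total_raw_tb = 49, 7.5, 404
shot_tb = days * avg_tb_per_day
print(f"{days} days x {avg_tb_per_day}TB/day = {shot_tb:.0f}TB, in line with the "
      f"{total_raw_tb}TB total raw negative once test materials are added")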

Bennett and Davis went on to describe the advantages of shooting at high frame rates. Shooting at high frame rates gives greater on-set flexibility, since no motion data is lost during shooting, so things can be fixed in post more easily. Even when the content is shown at a lower frame rate in order to get conventional cinematic aesthetics, a synthetic shutter can be created with a different motion sense in different parts of the frame, producing effective cinematic effects using models for particle motion, rotary motion and speed ramps.

During Gary Demos’s talk on Parametric Appearance Compensation he discussed the Academy Color Encoding System (ACES) implementation and testing. He presented an interesting slide on a single master HDR architecture shown below. A master will be an important element in an overall video workflow that can be part of an archival package, probably using the SMPTE (and now ISO) Archive eXchange Format (AXF) standard and also used in a SMPTE Interoperable Mastering Format (IMF) delivery package.

The Demo Area
At the HPA Retreat exhibits area we found several interesting storage items. Microsoft had on exhibit one of its Data Boxes, which allow shipping up to 100TB of data to its Azure cloud. The Microsoft Azure Data Box joins Amazon’s Snowball and Google’s similar bulk-ingest box. Like the AWS Snowball, the Azure Data Box includes an e-paper display that also functions as a shipping label. Microsoft did early testing of the Data Box with Oceaneering International, which performs offline sub-sea oil industry inspections and uploaded its data to Azure using the Data Box.

ATTO was showing its Direct2GPU technology, which allows direct transfer from storage to GPU memory for video processing without needing to pass through the system CPU. ATTO is a manufacturer of HBAs and other connectivity solutions for moving data, and it is developing smarter connectors that can reduce overall system overhead.

Henry Gu’s company GIC was showing its digital video processor with automatic QC and an IMF tool set, enabling conversion of any file type to IMF, transcoding to any file format and playback of all file types, including 4K/UHD. He was doing his demonstration using a DDN storage array (right).

Digital storage is a crucial element in modern professional media workflows. Digital storage enables higher frame rate, HDR video recording and processing to create a variety of display formats. Digital storage also enables uploading bulk content to the cloud and implementing QC and IMF processes. Even SMPTE standards for AXF, IMF and others are dependent upon digital storage and memory technology in order to make them useful. In a very real sense, in the M&E industry, we are all riding the digital storage bus.


Dr. Tom Coughlin, president of Coughlin Associates, is a storage analyst and consultant. Coughlin has six patents to his credit and is active with SNIA, SMPTE, IEEE and other pro organizations. Additionally, Coughlin is the founder and organizer of the annual Storage Visions Conference as well as the Creative Storage Conference.

AJA intros new 2TB Pak 2000 SSD recording media

AJA has expanded its line of Pak SSD media with the new 2TB Pak 2000 for Ki Pro Ultra and Ki Pro Ultra Plus recording and playback systems. The company also announced new ordering options for the entire Pak drive family, including HFS+ formatting for Mac OS users and exFAT for PC and universal use.

“With productions embracing high resolution, high frame rate and multi-cam workflows, media storage is a key concern. Pak 2000 introduces a high-capacity recording option at a lower cost per GB,” says AJA president Nick Rashby. “Our new HFS+ and exFAT options give customers greater flexibility with formatting upon ordering that fits their workflow demands.”

Pak 2000 offers the longer recording capacity required for documentaries, news, sports programming and live events, making it suitable for multi-camera HD workflows with the Ki Pro Ultra Plus’s multi-channel HD recording capabilities. The high-capacity drive can hold more than four hours of 4K/UltraHD ProRes (HQ), three hours of ProRes 4444 at 30p, and up to two hours of ProRes (HQ) or 90 minutes of ProRes 4444 at 60p.

Users can get double that length with two Pak drives and rollover support in Ki Pro Ultra and Ki Pro Ultra Plus.
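A rough way to estimate record time for any codec is capacity divided by data rate; real-world capacities come in a bit lower once file system and container overhead are accounted for, so treat this as an upper bound. The data rate in the example is an arbitrary illustration, not an AJA or Apple figure:

# Simple record-time estimate: capacity divided by codec data rate.
def record_hours(capacity_tb, data_rate_mbit_s):
    bits = capacity_tb * 1e12 * 8                 # decimal TB -> bits
    return bits / (data_rate_mbit_s * 1e6) / 3600

# Example with an assumed 1,000Mb/s recording format on a 2TB Pak drive:
print(f"~{record_hours(2, 1000):.1f} hours")      # ~4.4 hours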

The Pak 2000, and all Pak SSD media modules are now available for order in the following formats and prices:
– Pak 2000-R0 2TB HFS+: $1,795
– Pak 2000-X0 2TB exFAT: $1,795
– Pak 1000-R0 1TB HFS+: $1,495
– Pak 1000-X0 1TB exFAT: $1,495
– Pak 512-R1 512GB HFS+: $995
– Pak 512-X1 512GB exFAT: $995
– Pak 256-R1 256GB HFS+: $495
– Pak 256-X1 256GB exFAT: $495

(R models are formatted HFS+; X models are formatted exFAT.)

The challenges of creating a shared storage ‘spec’

By James McKenna

The specification — used in a bid, tender, RFQ or simply to provide vendors with a starting point — has been the source of frustration for many a sales engineer. Not because we wish that we could provide all the features that are listed, but because we can’t help but wonder what the author of those specs was thinking.

Creating a spec should be like designing your ideal product on paper and asking a vendor to come as close as they can to that ideal. Unlike most other forms of shopping, you avoid the sales process until the salesperson knows exactly what you want. This is good in some ways, but very limiting in others.

I dislike analogies with the auto industry because cars are personal and subjective, but in this way, you can see the difference in spec versus evaluation and research. Imagine writing down all the things you want in a car and showing up at the dealership looking for a match. You want power, beauty, technology, sports-car handling and room for five?

Your chances of finding the exact car you want are slim, unless you’re willing to compromise or adjust your budget. The same goes for facility shared storage. Many customers get hung up on the details and refuse to prioritize important aspects, like usability and sustainability, and as a result end up looking at quotes that are two to three times their cost expectations for systems that don’t perform the day-to-day work any better (and often perform worse).

There are three ways to design a specification:

Based On Your Workflow
By far, this is the best method and will result in the easiest path to getting what you want. Go ahead and plan for years down the road and challenge the vendors to keep up with your trajectory. Keep it grounded in what you believe is important to your business. This should include data security, usable administration and efficient management. Lay out your needs for backup strategy and how you’d like that to be automated, and be sure to prioritize these requests so the vendor can focus on what’s most important to you.

Be sure to clearly state the applications you’ll be using, what they will be requiring from the storage and how you expect them to work with the storage. The highest priority and true test of a successful shared storage deployment is: Can you work reliably and consistently to generate revenue? These are my favorite types of specs.
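As a concrete illustration of the workflow-based approach, a spec can translate seats, streams and codecs into a sustained bandwidth target before any vendor is contacted. The seat counts and per-stream rates below are placeholder assumptions for the example, not recommendations:

# Illustrative only: turning a workflow description into a rough shared
# storage bandwidth requirement. All numbers are example assumptions.
workflow = {
    # role: (seats, concurrent streams per seat, MB/s per stream)
    "offline edit (proxy)":             (8, 2, 12),
    "online/finishing (ProRes HQ UHD)": (2, 2, 90),
    "color (uncompressed 4K)":          (1, 1, 1200),
}

sustained = sum(seats * streams * rate for seats, streams, rate in workflow.values())
headroom = 1.5   # margin for ingest, renders and peak demand
print(f"Sustained requirement: ~{sustained}MB/s; "
      f"spec target with headroom: ~{sustained * headroom:.0f}MB/s")

Priorities still matter more than raw numbers, but a figure like this gives every vendor the same starting line.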

Based On Committee
Some facilities are the victim of their own size or budget. When there’s an active presence from the IT department, or the dollar amounts get too high, it’s not just up to the creative folks to select the right product. The committee can include consultants, system administrators, finance and production management, and everyone wants to justify their existence at the table. People with experience in enterprise storage and “big iron” systems will lean on their past knowledge and add terms like “Five-9s uptime,” “No SPOF,” “single namespace,” “multi-path” and “magic quadrant.”

In the enterprise storage world these would be important, but they don’t force vendors to take responsibility for prioritizing the interactions between the creative applications and the storage, and the usability and sustainability of a solution in the long term. The performance necessary to smoothly deliver a 4K program master, on time and on budget, might not even be considered. I see these types of specifications and I know that there will be a rude awakening when the quotes are distributed, usually leading to some modifications of the spec.

Based On A Product
The most limiting way to design a spec is by copying the feature list of a single product to create your requirements. I should mention that I have helped our customers do this on some occasions, so I’m guilty here. When a customer really knows the market and wants to avoid being bid an inferior product, this can be justified. However, you had better complete your research beforehand, because there may be something out there that could change your opinion, and you don’t want to find out about it after you’re locked into the status quo. If you choose to do this but want to stay on the lookout for another option, simply prioritize the features list by what’s most important to you.

If you really like something about your storage, prioritize that and see if another vendor has something similar. When I respond to these bid specs, I always provide details on our solution and how we can achieve better results than the one that is obviously being requested. Sometimes it works, sometimes not, but at least now they’re educated.

The primary frustration with specifications that miss the mark is the waste of money and time. Enterprise storage features come with enterprise storage complexity and enterprise storage price tags. This requires training, or reliance upon the IT staff to manage (or in some cases completely control) the network for you. Cost savings in the infrastructure can be repurposed for revenue-generating workstations, and artists can be employed instead of full-time techs. There’s a reason that scrappy, grassroots facilities produce faster growth while larger facilities tend to stagnate: they focus on generating content, invest only where needed and scale the storage as the bigger jobs and larger formats arrive.

Stick with a company that makes the process easy and ensures that you’ll never be without a support person that knows your daily grind.


James McKenna is VP of marketing and sales at shared storage company Facilis.

Review: G-Tech’s G-Speed Shuttle XL with EV Series Adapters, Thunderbolt 3

By Brady Betzel

As video recording file sizes continue to creep up, so do our hard drive capacity, RAID and bandwidth needs. These days we are seeing a huge amount of recorded media needing to be offloaded, edited and color graded in the field. G-Technology has stepped up to bat with its 24TB G-Speed Shuttle XL with EV Series adapters, Thunderbolt 3 edition — a portable, durable, adaptable and hardware-RAID powered storage solution.

In terms of physical appearance, the G-Speed Shuttle XL is just shy of 10x7x16 inches and weighs in at around 25 pounds. It’s not exactly lightweight, but with spinning hard drives it’s what I would expect. To get a lighter RAID you would need SSD drives, but those would most likely triple the price. The case itself is actually pretty rugged; it seems like it could withstand a minimal amount of production abuse, but again, since it isn’t an SSD RAID you take on some risk, because spinning disk drives are more vulnerable to shocks and drops.

The exterior is made out of a hard plastic, and it would have been nice to have a rubberized feel on at least the handle of the Shuttle XL — similar to G-Tech’s Rugged line of drives — but it still feels solid. To open the Shuttle XL, there is an easy-to-access switch on the front. There is a lock switch that took me a few wiggles to work correctly, but I would have loved to see a key lock to add a little more security, since this RAID will most likely house important data. When closing the front door, the slide lock wouldn’t fully close unless I pushed hard a second time; if I didn’t do that, the door would open by itself. On the back are the two Thunderbolt 3 ports, a power cable plug and a Kensington Lock slot. On the inside of this particular Shuttle XL are six 4TB 7200 RPM enterprise-class HGST (Western Digital) drives configured as RAID-5 by default, plus two EV Series bay adapters.

Since I was on a Windows-based PC, I had to download the formatting utility and reformat the drives, since they come formatted for Mac OS by default. The EV adapters allow for quick connection of memory products like Atomos Master Caddy drives, CFast 2.0 or even Red Mini Mags. This gives you a fast connection on set for transferring and backing up your media without extra card readers. The HGST hard drives are enterprise class, which very simply means that the drives are rated to run 24 hours a day, seven days a week, 365 days a year more reliably than standard hard drives. They should do as advertised, but if they don’t, there is a five-year limited warranty backing up this product. Basically, if the RAID or its drives fail due to craftsmanship errors, G-Technology will replace or repair it. Keep in mind that you are responsible for shipping the item back, and with the heavy weight of the RAID it may be costly if it goes bad. They will not cover accidental damage or misuse. Another caveat to G-Technology’s limited warranty is that they will not cover commercial use, so if you plan to use the G-Speed on a commercial shoot, you might not be covered. You should contact G-Technology’s support to check whether your use will be covered: 1.888.426.5214 if you are in North or South America, including Canada.

The Shuttle XL comes preconfigured as RAID-5 for Mac OS, but it can also be formatted as RAID-0, -1, -6, -10 and -50 using the G-Speed Studio utility. Here is a quick RAID primer in case you forget the differences:

– RAID-0: All drives are striped together and used as one large drive. This gives you the fastest RAID performance, but no redundancy — lose one drive and you lose everything.
– RAID-1: Total drive space is halved because each drive is mirrored to an identical drive in the RAID, so if one drive goes out you will not lose your data, and the mirror will be rebuilt once the bad drive is replaced. The speed is slower than RAID-0.
– RAID-5: Needs at least three drives and spreads parity across the drives to create a safety net if a drive fails. The plus side is that it is faster than RAID-1 and still includes that safety net, which is why G-Technology ships this drive in this configuration. You will have about 80% of the disk space usable. The downside is that if a drive goes out, you will have degraded speed until the RAID fully rebuilds itself once the bad drive is replaced.
– RAID-6: Works similarly to RAID-5 but has two safety nets (a.k.a. parity blocks). One drawback of RAID-5 is that if a second drive goes out while the RAID is rebuilding, all data can be lost permanently. RAID-6 adds another safety net so that if two drives go out you can still rebuild your RAID. The downside is that only about two-thirds of your storage is usable.
– RAID-10: Requires at least four drives and cuts your usable capacity in half. The upside is that if a drive goes out, you will not have degraded speed during the RAID rebuilding process, which depending on the data involved can take multiple days or longer. RAID-10 essentially mirrors a striped RAID.
– RAID-50: Can be thought of as RAID-5 + 0 and needs a minimum of six drives. It’s two RAID-5 arrays striped together, and in each of those arrays you lose one drive’s worth of usable space. The good news is that you can have a drive in each array fail while minimizing total RAID loss, unlike RAID-5 alone.

Those RAID formatting options are a lot to think about, and frankly I have to look them up about every year or so. If you didn’t read all about RAID formatting, then you may want to stick with RAID-5, which gives you a nice combo of safety vs. speed. If you are a risk taker, back up your data regularly or can survive a total RAID failure, RAID-0 might be more your style. But in the end, keep in mind that no matter how good your equipment is or how high a level of RAID protection you have, you can always suffer a total RAID failure, and backups are always important.
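
To put those trade-offs in perspective for this particular unit, here is a minimal sketch (Python, assuming the six 4TB drives in the review unit and a two-group layout for RAID-50) that estimates usable capacity per RAID level; the decimal-to-binary conversion is also why 24TB of raw disk shows up as roughly 21.8TB below.

```python
# Rough usable-capacity estimates for the six 4TB drives in this unit.
# Assumes RAID-50 is built as two three-drive RAID-5 groups; these are
# approximations, not vendor specifications. Values are binary (TiB)
# but labeled TB to match the figures reported in this review.

DRIVES = 6
DRIVE_TB = 4                       # decimal terabytes (4 x 10^12 bytes)
PER_DRIVE = DRIVE_TB * 1e12 / 2**40  # one drive expressed in TiB (~3.64)

usable = {
    "RAID-0":  DRIVES * PER_DRIVE,        # striping only, no redundancy
    "RAID-1":  DRIVES / 2 * PER_DRIVE,    # every drive mirrored
    "RAID-5":  (DRIVES - 1) * PER_DRIVE,  # one drive's worth of parity
    "RAID-6":  (DRIVES - 2) * PER_DRIVE,  # two parity blocks
    "RAID-10": DRIVES / 2 * PER_DRIVE,    # mirrored stripes
    "RAID-50": (DRIVES - 2) * PER_DRIVE,  # one parity drive per RAID-5 group
}

for level, tb in usable.items():
    print(f"{level}: {tb:.2f} TB usable of {DRIVES * PER_DRIVE:.2f} TB raw")
```

These estimates line up with the RAID capacities reported in the benchmark results that follow.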

I tested the 24TB Shuttle XL in each of the available RAID configurations on an HP ZBook Studio G4 with a Thunderbolt 3 interface using the AJA System Test utility. Each test used the 4GB test file size, 3840×2160 resolution and DNxHR 444 codec, since I typically use RAIDs when editing video or motion-based projects, which tend to have larger file sizes. As a caveat, when I lowered the test file size to 1GB the speeds increased tremendously; for instance, in RAID-0 the read/write speeds were 953MB/s and 2,274MB/s, compared to those below. Here are my results, including the total size of the RAID:

RAID-0: 21.83TB – Read: 960 MB/s – Write: 1451 MB/s
RAID-1: 10.91TB – Read: 439 MB/s – Write: 639 MB/s
RAID-5: 18.19TB – Read: 950 MB/s – Write: 1171 MB/s ***
RAID-6: 14.55TB – Read: 683 MB/s – Write: 750 MB/s
RAID-10: 10.91TB – Read: 506 MB/s – Write: 667 MB/s
RAID-50: 14.55TB – Read: 614 MB/s – Write: 933 MB/s

***I pulled a drive while working live in RAID-5, and while the read/write speeds degraded to 326MB/s read and 858MB/s write, the unit continued to function while the RAID rebuilt itself.

For some reference, I also ran the AJA System Test on my local hard drive and got 1,606MB/s read and 1,108MB/s write, which is pretty fast, so the Shuttle XL is doing well. I also wanted to test real-world copy speed, so using 50GB of CinemaDNG files, here are the results:

RAID-0: 2 mins. 7 seconds (~403 MB/s)
RAID-1: 3 mins. 35 seconds (~238 MB/s)
RAID-5: 2 mins. 49 seconds (~303 MB/s)
RAID-6: 3 mins. 6 seconds (~275 MB/s)
RAID-10: 3 mins. 17 seconds (~260 MB/s)
RAID-50: 2 mins. 45 seconds (~310 MB/s)
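
The parenthetical throughput figures are simply the payload size divided by the elapsed copy time. Here is a quick sketch of that arithmetic, assuming a flat 50GB decimal payload; the slightly higher numbers reported above suggest the actual set of CinemaDNG files was a bit larger than an even 50GB.

```python
# Effective copy throughput: payload size divided by elapsed time.
# Assumes a 50GB payload in decimal gigabytes (50,000 MB); small
# differences from the figures above come from the exact payload size
# and rounding.

PAYLOAD_MB = 50 * 1000  # 50GB of CinemaDNG files

copy_times = {            # (minutes, seconds) per RAID level
    "RAID-0":  (2, 7),
    "RAID-1":  (3, 35),
    "RAID-5":  (2, 49),
    "RAID-6":  (3, 6),
    "RAID-10": (3, 17),
    "RAID-50": (2, 45),
}

for level, (m, s) in copy_times.items():
    seconds = m * 60 + s
    print(f"{level}: ~{PAYLOAD_MB / seconds:.0f} MB/s over {seconds}s")
```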

While these results are pretty self-explanatory, the RAID-5 numbers are especially impressive, considering a drive can give out and you can still be running at a high bandwidth. If you ran the tests continuously for a day you would probably see some variation in the averages, maybe a little higher than what you see here. Technically, G-Technology states that the Shuttle XL can reach a maximum transfer rate of up to 1,500MB/s — which I came close to hitting. This is surprising, since these specs are usually a little like miles-per-gallon ratings on new cars, but not in this case. I really appreciate that accuracy and think it will go a long way with consumers. In terms of other apps, I wasn’t running anything other than the AJA System Test, but I also didn’t shut down anything in the background, so it is possible some background apps affected these transfer numbers slightly; either way, you should see similar results.

Summing Up
In the end, the G-Technology G-Speed Shuttle XL with EV Series Adapters Thunderbolt 3 Edition is a great choice for a RAID that might need to stand up to a little abuse in the field. The 24TB version of the Shuttle XL that connects using Thunderbolt 3 has a retail price of $2,799.95 with EV adapters being sold separately for between $99.95 and $199.95. The Shuttle XL is available from 24TB all the way up to 72TB, which will cost you $7,699.95.

If you like the idea of multiple RAID options, including ones that require more than four drives, the Shuttle XL has a decent price, great build quality that should last you for years thanks to its enterprise-class hard drives, and high bandwidth — the only thing better would be to load it with SSDs, but that could cost another $10,000 and would call for another review. Check out the Shuttle XL at G-Technology’s website.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Quantum’s Xcellis scale-out NAS targets IP workflows for M&E

Quantum is now offering a new Xcellis Scale-out NAS targeting data-heavy IP-based media workflows. Built off Quantum’s StorNext shared storage and data management platform, the multi-protocol, multi-client Xcellis Scale-out NAS system combines media and metadata management with high performance and scalability. Users can configure an Xcellis solution with both scale-out SAN and NAS to provide maximum flexibility.

“Media professionals have been looking for a solution that combines the performance and simplified scalability of a SAN with the cost efficiency and ease of use of NAS,” says Quantum’s Keith Lissak. “Quantum’s new Xcellis Scale-out NAS platform bridges that gap. By affordably delivering high performance, petabyte-level scalability and advanced capabilities such as integrated AI, Xcellis Scale-out NAS is [a great] solution for migrating to all-IP environments.”

Specific benefits of Xcellis Scale-out NAS include:
• Increased Productivity in All-IP Environments: It features a converged architecture that saves space and power, continuous scalability for simplified scaling of performance and capacity and unified access to content.
• Cost-Effective Scaling of Performance and Capacity: One appliance provides 12 GB/sec per client. An Xcellis cluster can scale performance and capacity together or independently to reach hundreds of petabytes in capacity and more than a terabyte per second in performance. When deployed as part of a multitier StorNext infrastructure ― which can include object, tape and cloud storage ― Xcellis Scale-out NAS can cost as little as 1/10 that of an enterprise-only NAS solution with the same capacity.
• Lifecycle, Location and Cost Management: It’s built off of Quantum’s StorNext software, which provides automatic tiering between flash, disk, tape, object storage and public cloud. Copies can be created for content distribution, collaboration, data protection and disaster recovery.
• Integrated Artificial Intelligence: Xcellis can integrate artificial intelligence (AI) capabilities to enable users to extract more value for their assets through the automated creation of metadata. The system can actively interrogate data across multiple axes to uncover events, objects, faces, words and sentiments, automatically generating new, custom metadata that unlocks additional possibilities for the use of stored assets.

Xcellis Scale-out NAS will be generally available this month with entry configurations and those leveraging tiering starting at under $100 per terabyte (raw).

Cloudian HyperFile for object-storage-based NAS

Newly introduced Cloudian HyperFile is an integrated NAS controller that provides SMB/NFS file services from on-premises Cloudian HyperStore object storage systems. Cloudian HyperFile targets enterprise network attached storage (NAS) customers, particularly those working in mission-critical, capacity-intensive applications that employ file data. Media and entertainment is one of the main target markets for HyperFile.

Cloudian HyperFile incorporates snapshot, WORM, non-disruptive failover, scale-out performance, POSIX compliance and Active Directory integration. When combined with the limitless scalability of Cloudian HyperStore enterprise storage, organizations gain new on-premises options for managing all of their unstructured data.

Pricing for complete Cloudian HyperFile storage solutions, including on-premises disk-based storage, starts at less than 1/2 cent per GB per month. To simplify implementation, Cloudian HyperFile incorporates a policy-based data migration engine that transfers files to Cloudian from existing NAS systems, or from proprietary systems such as EMC Centera. IT managers select the attributes for files to be migrated, and the data movement then proceeds as a background task with no service interruption.

Cloudian HyperFile is available as an appliance or as a virtual machine. The HyperFile appliance is deployed as a node within a Cloudian cluster and includes active-passive nodes for rapid failover, fully redundant hardware for high-availability, and integrated caching for performance.

Cloudian is offering two software versions, HyperFile Basic and HyperFile Enterprise. A HyperFile Basic software license is included with Cloudian HyperStore at no additional charge and includes multi-protocol support, high-availability support and a management feature set. HyperFile Enterprise includes everything in HyperFile Basic, plus Snapshot, WORM, Geo-distribution, Global Namespace and File Versioning.

Pricing for complete on-premises, appliance-based solutions begins at ½ cent per GB per month. Cloudian HyperFile is available now from Cloudian and from Cloudian reseller partners.

Storage Roundtable

Production, post, visual effects, VR… you can’t do it without a strong infrastructure. This infrastructure must include storage and products that work hand in hand with it.

This year we spoke to a sampling of those providing storage solutions — of all kinds — for media and entertainment, as well as a storage-agnostic company that helps get your large files from point A to point B safely and quickly.

We gathered questions from real-world users — things that they would ask of these product makers if they were sitting across from them.

Quantum’s Keith Lissak
What kind of storage do you offer, and who is the main user of that storage?
We offer a complete storage ecosystem based around our StorNext shared storage and data management solution, including Xcellis high-performance primary storage, Lattus object storage and Scalar archive and cloud. Our customers include broadcasters, production companies, post facilities, animation/VFX studios, NCAA and professional sports teams, ad agencies and Fortune 500 companies.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Xcellis features continuous scalability and can be sized to precisely fit current requirements and scaled to meet future demands simply by adding storage arrays. Capacity and performance can grow independently, and no additional accelerators or controllers are needed to reach petabyte scale.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
We don’t have exact numbers, but a growing number of our customers are using cloud storage. Our FlexTier cloud-access solution can be used with both public (AWS, Microsoft Azure and Google Cloud) and private (StorageGrid, CleverSafe, Scality) storage.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
We offer a range of StorNext 4K Reference Architecture configurations for handling demanding workflows, including 4K, 8K and VR. Our customers can choose systems with small or large form-factor HDDs, up to an all-flash SSD system with the ability to handle 66 simultaneous 4K streams.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might users notice when connecting on these different platforms?
StorNext systems are OS-agnostic and can work with all Mac, Windows and Linux clients with no discernible difference.

Zerowait’s Rob Robinson
What kind of storage do you offer, and who is the main user of that storage?
Zerowait’s SimplStor storage product line provides storage administrators scalable, flexible and reliable on-site storage needed for their growing storage requirements and workloads. SimplStor’s platform can be configured to work in Linux or Windows environments and we have several customers with multiple petabytes in their data centers. SimplStor systems have been used in VFX production for many years and we also provide solutions for video creation and many other large data environments.

Additionally, Zerowait specializes in NetApp service, support and upgrades, and we have provided many companies in the media and VFX businesses with off-lease transferrable licensed NetApp storage solutions. Zerowait provides storage hardware, engineering and support for customers that need reliable and big storage. Our engineers support customers with private cloud storage and customers that offer public cloud storage on our storage platforms. We do not provide any public cloud services to our customers.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Our customers typically need on-site storage for processing speed and security. We have developed many techniques and monitoring solutions that we have incorporated into our service and hardware platforms. Our SimplStor and NetApp customers need storage infrastructures that scale into the multiple petabytes, and often require GigE, 10GigE or a NetApp FC connectivity solution. For customers that can’t handle the bandwidth constraints of the public Internet to process their workloads, Zerowait has the engineering experience to help them get the most out of their on-premises storage.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Many of our customers use public cloud solutions for their non-proprietary data storage while using our SimplStor and NetApp hardware and support services for their proprietary, business-critical, high-speed and regulatory storage solutions where data security is required.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
SimplStor’s density and scalability make it perfect for use in HD and higher resolution environments. Our SimplStor platform is flexible and we can accommodate customers with special requests based on their unique workloads.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might users notice when connecting on these different platforms?
Zerowait’s NetApp and SimplStor platforms are compatible with both Linux (NFS) and Windows (CIFS) environments. OS X is supported in some applications. Every customer has a unique infrastructure and set of applications they are running. Customers will see differences in performance, but our flexibility allows us to customize a solution to maximize the throughput to meet workflow requirements.

Signiant’s Mike Nash
What kind of storage works with your solution, and who is the main user or users of that storage?
Signiant’s Media Shuttle file transfer solution is storage agnostic, and for nearly 200,000 media pros worldwide it is the primary vehicle for sending and sharing large files. Even though Media Shuttle doesn’t provide storage, many users think of their data as being “in Media Shuttle.” In reality, their files are located in whatever storage their IT department has designated. This might be the company’s own on-premises storage, or it could be their AWS or Microsoft Azure cloud storage tenancy. Our users employ a Media Shuttle portal to send and share files; they don’t have to think about where the files are stored.

How are you making sure your products are scalable so people can grow either their use or the bandwidth of their networks (or both)?
Media Shuttle is delivered as a cloud-native SaaS solution, so it can be up and running immediately for new customers, and it can scale up and down as demand changes. The servers that power the software are managed by our DevOps team and monitored 24×7 — and the infrastructure is auto-scaling and instantly available. Signiant does not charge for bandwidth, so customers can use our solutions with any size pipe at no additional cost. And while Media Shuttle can scale up to support the needs of the largest media companies, the SaaS delivery model also makes it accessible to even the smallest production and post facilities.

How many of the people buying your solutions are using them with cloud storage (i.e. AWS or Microsoft Azure)?
Cloud adoption within the M&E industry remains uneven, so it’s no surprise that we see a mixed picture when we look at the storage choices our customers make. Since we first introduced the cloud storage option, there has been constant month-over-month growth in the number of customers deploying portals with cloud storage. It’s not yet at parity with on-prem storage, but the growth trends are clear.

On-premises content storage is very far from going away. We see many Media Shuttle customers taking a hybrid approach, with some portals using cloud storage and others using on-prem storage. It’s also interesting to note that when customers do choose cloud storage, we increasingly see them use both AWS and Azure.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
We can move any size of file. As media files continue to get bigger, the value of our solutions continues to rise. Legacy solutions such as FTP, which lack any file acceleration, will grind things to a halt if 4K, 8K, VR and other huge files need to be moved between locations. And consumer-oriented sharing services like Dropbox and Google Drive become non-starters with these types of files.

What platforms do your system connect to (e.g. Mac OS X, Windows, Linux), and what differences might end-users notice when connecting on these different platforms?
Media Shuttle is designed to work with a wide range of platforms. Users simply log in to portals using any web browser. In the background, a native application installed on the user’s personal computer provides the acceleration functionality. This app works with Windows and Mac OS X systems.

On the IT side of things, no installed software is required for portals deployed with cloud storage. To connect Media Shuttle to on-premises storage, the IT team will run Signiant software on a computer in the customer’s network. This server-side software is available for Linux and Windows.

NetApp’s Jason Danielson
What kind of storage do you offer, and who is the main user of that storage?
NetApp has a wide portfolio of storage and data management products and services. We have four fundamentally different storage platforms — block, file, object and converged infrastructure. We use these platforms and our data fabric software to create a myriad of storage solutions that incorporate flash, disk and cloud storage.

1. NetApp E-Series block storage platform is used by leading shared file systems to create robust and high-bandwidth shared production storage systems. Boutique post houses, broadcast news operations and corporate video departments use these solutions for their production tier.
2. NetApp FAS network-attached file storage runs NetApp OnTap. This platform supports many thousands of applications for tens of thousands of customers in virtualized, private cloud and hybrid cloud environments. In media, this platform is designed for extreme random-access performance. It is used for rendering, transcoding, analytics, software development and Internet-of-Things pipelines.
3. NetApp StorageGrid Webscale object store manages content and data for back-up and active archive (or content repository) use cases. It scales to dozens of petabytes, billions of objects and currently 16 sites. Studios and national broadcast networks use this system and are currently moving content from tape robots and archive silos to a more accessible object tier.
4. NetApp SolidFire converged and hyper-converged platforms are used by cloud providers and enterprises running large private clouds for quality-of-service across hundreds to thousands of applications. Global media enterprises appreciate the ease of scaling, the simplicity of QoS quota setting and the overall maintenance for the largest-scale deployments.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
The four platforms mentioned above scale up and scale out to support well beyond the largest media operations in the world. So our challenge is not scalability for large environments but appropriate sizing for individual environments. We are careful to design storage and data management solutions that are appropriate to media operations’ individual needs.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Seven years ago, NetApp set out on a major initiative to build the data fabric. We are well on the path now with products designed specifically for hybrid cloud (a combination of private cloud and public cloud) workloads. While the uptake in media and entertainment is slower than in other industries, we now have hundreds of customers that use our storage in hybrid cloud workloads, from backup to burst compute.

We help customers wanting to stay cloud-agnostic by using AWS, Microsoft Azure, IBM Cloud and Google Cloud Platform flexibly, as project and pricing demands. AWS, Microsoft Azure, IBM, Telstra and ASE, along with another hundred or so cloud storage providers, include NetApp storage and data management products in their service offerings.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
For higher-bandwidth (or bitrate) video production, we’ll generally architect a solution with our E-Series storage under either Quantum StorNext or PixitMedia PixStor. Since 2012, when the NetApp E5400 enabled the mainstream adoption of 4K workflows, the E-Series platform has seen three generations of upgrades, and the controllers are now more than 4x faster. The chassis has remained the same through these upgrades, so some customers have chosen to put the latest controllers into their existing chassis to improve bandwidth or to utilize faster network interconnects like 16Gb Fibre Channel. Many post houses continue to use Fibre Channel to the workstation for these higher-bandwidth video formats, while others have chosen to move to Ethernet (40 and 100 Gigabit). As flash (SSDs) continues to drop in price, it is starting to be used for video production in all-flash arrays or in hybrid configurations. We recently showed our new E570 all-flash array supporting NVM Express over Fabrics (NVMe-oF) technology, providing 21GB/s of bandwidth and 1 million IOPS with less than 100µs of latency. This technology is initially targeted at supercomputing use cases, and we will see if it is adopted over the next couple of years for UHD production workloads.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.), and what differences might end-users notice when connecting on these different platforms?
NetApp maintains a compatibility matrix table that delineates our support of hundreds of client operating systems and networking devices. Specifically, we support Mac OS X, Windows and various Linux distributions. Bandwidth expectations differ between these three operating systems and Ethernet and Fibre Channel connectivity options, but rather than make a blanket statement about these, we prefer to talk with customers about their specific needs and legacy equipment considerations.

G-Technology’s Greg Crosby
What kind of storage do you offer, and who is the main user of that storage?
Western Digital’s G-Technology products provide high-performing and reliable storage solutions for end-to-end creative workflows, from capture and ingest to transfer and shuttle, all the way to editing and final production.

The G-Technology brand supports a wide range of users for both field and in-studio work, with solutions that span a number of portable handheld drives — which are oftentimes used to back up content on the go — all the way to in-studio drives that offer capacities up to 144TB. We recognize that each creative has their own unique workflow, and some embrace the use of cloud-based products. We are proud to be a companion to those cloud services, whether as a central location to store raw content or as a conduit to feed cloud features and capabilities.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Our line ranges from small portable and rugged drives to large, multi-bay RAID and NAS solutions, for all aspects of the media and entertainment industry. Integrating the latest interface technology such as USB-C or Thunderbolt 3, our storage solutions will take advantage of the ability to quickly transfer files.

We make it easy to take a ton of storage into the field. The G-Speed Shuttle XL drive is available in capacities up to 96TB, and an optional Pelican case with handle makes it easy to transport in the field while mitigating any concerns about running out of storage. We recently launched the G-Drive mobile SSD R-Series. Built to withstand a three-meter (nine-foot) drop, this drive is able to endure accidental bumps or drops, given that it is a solid-state drive.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Many of our customers are using cloud-based solutions to complement their creative workflows. We find that most of our customers use our solutions as primary storage, or to easily transfer and shuttle their content, since the cloud is not an efficient way to move large amounts of data. We see cloud capabilities as a great way to share project files and low-resolution content, collaborate with others on projects, and distribute and share a variety of deliverables.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Today’s camera technology enables not only capture at higher resolutions but also higher frame rates with more dynamic imagery. We have solutions that can easily support multi-stream 4K, 8K and VR workflows or multi-layer photo and visual effects projects. G-Technology is well positioned to support these creative workflows as we integrate the latest technologies into our storage solutions. From small portable and rugged SSD drives to high-capacity, fast multi-drive RAID solutions with the latest Thunderbolt 3 and USB-C interface technology, we are ready to tackle a variety of creative endeavors.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.), and what differences might users notice when connecting on these different platforms?
Our complete portfolio of external storage solutions works for Mac and PC users alike. With native support for Apple Time Machine, these solutions are formatted for Mac OS out of the box, but can be easily reformatted for Windows users. G-Technology also has a number of strategic technology partners, including Apple, Atomos, Red Camera, Adobe and Intel.

Panasas’ David Sallak
What kind of storage do you offer, and who is the main user of that storage?
Panasas ActiveStor is an enterprise-class easy-to-deploy parallel scale-out NAS (network-attached storage) that combines Flash and SATA storage with a clustered file system accessed via a high-availability client protocol driver with support for standard protocols.

The ActiveStor storage cluster consists of the ActiveStor Director (ASD-100) control engine, the ActiveStor Hybrid (ASH-100) storage enclosure, the PanFS parallel file system, and the DirectFlow parallel data access protocol for Linux and Mac OS.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
ActiveStor is engineered to scale easily. There are no specific architectural limits for how widely the ActiveStor system can scale out, and adding more workloads and more users is accomplished without system downtime. The latest release of ActiveStor can grow either storage or bandwidth needs in an environment that lets metadata responsiveness, data performance and data capacity scale independently.

For example, we quote capacity and performance numbers for a Panasas storage environment containing 200 ActiveStor Hybrid 100 storage node enclosures with five ActiveStor Director 100 units for filesystem metadata management. This configuration would result in a single 57PB namespace delivering 360GB/s of aggregate bandwidth and in excess of 2.6M IOPS.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Panasas customers deploy workflows and workloads in ways that are well-suited to consistent on-site performance or availability requirements, while experimenting with remote infrastructure components such as storage and compute provided by cloud vendors. The majority of Panasas customers continue to explore the right ways to leverage cloud-based products in a cost-managed way that avoids surprises.

This means that workflow requirements for file-based storage continue to take precedence when processing real-time video assets, while customers also expect that storage vendors will support the ability to use Panasas in cloud environments where the benefits of a parallel clustered data architecture can exploit the agility of underlying cloud infrastructure without impacting expectations for availability and consistency of performance.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Panasas ActiveStor is engineered to deliver superior application responsiveness via our DirectFlow parallel protocol for applications working in compressed UHD, 4K and higher-resolution media formats. Compared to traditional file-based protocols such as NFS and SMB, DirectFlow provides better granular I/O feedback to applications, resulting in client application performance that aligns well with the compressed UHD, 4K and other extreme-resolution formats.

For uncompressed data, Panasas ActiveStor is designed to support large-scale rendering of these data formats via distributed compute grids such as render farms. The parallel DirectFlow protocol results in better utilization of CPU resources in render nodes when processing frame-based UHD, 4K and higher-resolution formats, resulting in less wall clock time to produce these formats.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might users notice when connecting on these different platforms?
Panasas ActiveStor supports macOS and Linux with our higher-performance DirectFlow parallel client software. We support all client platforms via NFS or SMB as well.

Users would notice that when connecting to Panasas ActiveStor via DirectFlow, the I/O experience is as if users were working with local media files on internal drives, compared to working with shared storage where normal protocol access may result in the slight delay associated with open network protocols.

Facilis’ Jim McKenna
What kind of storage do you offer, and who is the main user of that storage?
We have always focused on shared storage for the facility. It’s high-speed attached storage and good for anyone who’s cutting HD or 4K. Our workflow and management features really make us different than basic network storage. We have attachment to the cloud through software that uses all the latest APIs.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Most of our large customers have been with us for several years, and many started pretty small. Our method of scalability is flexible in that you can decide to simply add expansion drives, add another server, or add a head unit that aggregates multiple servers. Each method increases bandwidth as well as capacity.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Many customers use cloud, either through a corporate gateway or directly uploaded from the server. Many cloud service providers have ways of accessing the file locations from the facility desktops, so they can treat it like another hard drive. Alternatively, we can schedule, index and manage the uploads and downloads through our software.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Facilis is known for our speed. We still support Fibre Channel when everyone else, it seems, has moved completely to Ethernet, because it provides better speeds for intense 4K and beyond workflows. We can handle UHD playback on 10Gb Ethernet, and up to 4K full frame DPX 60p through Fibre Channel on a single server enclosure.

What platforms do your systems connect to (e.g. Mac OS X, Windows, Linux, etc.)? And what differences might users notice when connecting on these different platforms?
We have a custom multi-platform shared file system, not NAS (network attached storage). Even though NAS may be compatible with multiple platforms by using multiple sharing methods, permissions and optimization across platforms is not easily manageable. With Facilis, the same volume, shared one way with one set of permissions, looks and acts native to every OS and even shows up as a local hard disk on the desktop. You can’t get any more cross-platform compatible than that.

SwiftStack’s Mario Blandini
What kind of storage do you offer, and who is the main user of that storage?
We offer hybrid cloud storage for media. SwiftStack is 100% software and runs on-premises atop the server hardware you already buy using local capacity and/or capacity in public cloud buckets. Data is stored in cloud-native format, so no need for gateways, which do not scale. Our technology is used by broadcasters for active archive and OTT distribution, digital animators for distributed transcoding and mobile gaming/eSports for massive concurrency among others.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
The SwiftStack software architecture separates access, storage and management, where each function can be run together or on separate hardware. Unlike storage hardware with the mix of bandwidth and capacity being fixed to the ports and drives within, SwiftStack makes it easy to scale the access tier for bandwidth independently from capacity in the storage tier by simply adding server nodes on the fly. On the storage side, capacity in public cloud buckets scales and is managed in the same single namespace.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Objectively, use of capacity in public cloud providers like Amazon Web Services and Google Cloud Platform is still “early days” for many users. Customers in media however are on the leading edge of adoption, not only for hybrid cloud extending their on-premises environment to a public cloud, but also using a second source strategy across two public clouds. Two years ago it was less than 10%, today it is approaching 40%, and by 2020 it looks like the 80/20 rule will likely apply. Users actually do not care much how their data is stored, as long as their user experience is as good or better than it was before, and public clouds are great at delivering content to users.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Arguably, larger assets produced by a growing number of cameras and computers have driven the need to store those assets differently than in the past. A petabyte is the new terabyte in media storage. Banks have many IT admins, whereas media shops have few. SwiftStack has the same consumption experience as public cloud, which is very different from on-premises solutions of the past. Licensing is based on the amount of data managed, not the total capacity deployed, so you pay as you grow. If you save four replicas or use erasure coding for 1.5X overhead, the price is the same.
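
To illustrate that overhead comparison, here is a minimal sketch; the 8+4 erasure-coding layout and the usable-capacity figure are assumptions chosen only because they produce the quoted 1.5X overhead, not details from SwiftStack.

```python
# Protection overhead: raw capacity required for the same usable data.
# Assumes 4 full replicas vs. a hypothetical 8+4 erasure-coded layout,
# which works out to the 1.5x overhead quoted above.

USABLE_TB = 100  # hypothetical amount of content to protect

replicas = 4
replica_raw = USABLE_TB * replicas                    # 400 TB of raw disk

data_shards, parity_shards = 8, 4
ec_overhead = (data_shards + parity_shards) / data_shards  # 1.5x
ec_raw = USABLE_TB * ec_overhead                      # 150 TB of raw disk

print(f"4 replicas: {replica_raw} TB raw for {USABLE_TB} TB usable")
print(f"8+4 erasure coding: {ec_raw:.0f} TB raw for {USABLE_TB} TB usable")
```

Under the licensing model described above, both layouts would be priced the same, because only the data under management is counted, not the raw capacity behind it.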

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might end-users notice when connecting on these different platforms?
The great thing about cloud storage, whether it is on-premises or residing with your favorite IaaS providers like AWS and Google, is that the interface is HTTP. In other words, every smartphone, tablet, Chromebook and computer has an identical user experience. For classic applications on systems that do not support AWS S3 as an interface, users see the storage as a mount point or folder in their application — either NFS or SMB. The best part is that it is a single namespace where data can come in as file, get transformed via object and get read either way, so the user experience does not need to change even though the data is stored in the most modern way.

Dell EMC’s Tom Burns
What kind of storage do you offer, and who is the main user of that storage?
At Dell EMC, we created two storage platforms for the media and entertainment industry: the Isilon scale-out NAS All-Flash, hybrid and archive platform to consolidate and simplify file-based workflows and the Dell EMC Elastic Cloud Storage (ECS), a scalable enterprise-grade private cloud solution that provides extremely high levels of storage efficiency, resiliency and simplicity designed for both traditional and next-generation workloads.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
In the media industry, change is inevitable. That’s why every Isilon system is built to rapidly and simply adapt by allowing the storage system to scale performance and capacity together, or independently, as more space or processing power is required. This allows you to scale your storage easily as your business needs dictate.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Over the past five years, Dell EMC media and entertainment customers have added more than 1.5 exabytes of Isilon and ECS data storage to simplify and accelerate their workflows.

Isilon’s cloud tiering software, CloudPools, provides policy-based automated tiering that lets you seamlessly integrate with cloud solutions as an additional storage tier for the Isilon cluster at your data center. This allows you to address rapid data growth and optimize data center storage resources by using the cloud as a highly economical storage tier with massive storage capacity.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
As technologies that enhance the viewing experience continue to emerge, including higher frame rates and resolutions, uncompressed 4K, UHD, high dynamic range (HDR) and wide color gamut (WCG), underlying storage infrastructures must effectively scale to keep up with expanding performance requirements.

Dell EMC recently launched the sixth generation of the Isilon platform, including our all-flash (F800), which brings the simplicity and scalability of NAS to uncompressed 4K workflows — something that up until now required expensive silos of storage or complex and inefficient push-pull workflows.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc)? And what differences might end-users notice when connecting on these different platforms?
With Dell EMC Isilon, you can streamline your storage infrastructure by consolidating file-based workflows and media assets, eliminating silos of storage. Isilon scale-out NAS includes integrated support for a wide range of industry-standard protocols, allowing the major operating systems to connect using the most suitable protocol for optimum performance and feature support, including IPv4 and IPv6, NFS, SMB, HTTP, FTP, OpenStack Swift-based object access for your cloud initiatives and native Hadoop Distributed File System (HDFS).

The ECS software-defined cloud storage platform provides the ability to store, access, and manipulate unstructured data and is compatible with existing Amazon S3, OpenStack Swift APIs, EMC CAS and EMC Atmos APIs.

EditShare’s Lee Griffin
What kind of storage do you offer, and who is the main user of that storage?
Our storage platforms are tailored for collaborative media workflows and post production. They combine the advanced EFS (that’s EditShare File System, for short) distributed file system with intelligent load balancing in a scalable, fault-tolerant architecture that offers cost-effective connectivity. We also have a unique take on current cloud workflows: with the current security and reliability of cloud-based technology prohibiting full migration to cloud storage for production, EditShare AirFlow uses EFS on-premise storage to provide secure access to media from anywhere in the world with a basic Internet connection. Our main users are creative post houses, broadcasters and large corporate companies.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Recently, we upgraded all our platforms to EFS and introduced two new single-node platforms, the EFS 200 and 300. These single-node platforms allow users to grow their storage while keeping a single namespace, which eliminates the management of multiple storage volumes. It also enables them to better plan for the future: when their facility requires more storage and bandwidth, they can simply add another node.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
No production is in one location, so the ability to move media securely and back up is still a high priority to our clients. From our Flow media asset management and via our automation module, we offer clients the option to backup their valuable content to places like Amazon S3 servers.

How does your system handle UHD, 4K and other higher-than HD resolutions?
We have many clients working with UHD content who are supplying programming to broadcasters, film distributors and online subscription media providers. Our solutions are designed to work effortlessly with high-data-rate content, enabling the bandwidth to expand with the addition of more EFS nodes to the intelligent storage pool. So our system is ready and working now for 4K content, and is future-proof for even higher data rates down the road.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might end-users notice when connecting on these different platforms?
EditShare supplies native EFS client drivers for all three platforms, allowing clients to pick and choose which platform they want to work on. Whether it is Autodesk Flame for VFX, Resolve for grading or our own Lightworks for editing on Linux, we don’t mind. In fact, EFS offers a considerable bandwidth improvement when using our EFS drivers over the existing AFP and SMB protocols. Improved bandwidth and speed on all three platforms makes for happy clients!

And there are no differences when clients connect. We work with all three platforms the same way, offering a unified workflow to all creative machines, whether on Mac, Windows or Linux.

Scale Logic’s Bob Herzan
What kind of storage do you offer, and who is the main user of that storage?
Scale Logic has developed an ecosystem (Genesis Platform) that includes servers, networking, metadata controllers, single and dual-controller RAID products and purpose-built appliances.

We have three different file systems that allow us to use the storage mentioned above to build SAN, NAS, scale-out NAS, object storage and gateways for private and public cloud. We use a combination of disk, tape and Flash technology to build our tiers of storage that allows us to manage media content efficiently with the ability to scale seamlessly as our customers’ requirements change over time.

We work with customers that range from small to enterprise and everything in between. We have a global customer base that includes broadcasters, post production, VFX, corporate, sports and house of worship.

In addition to the Genesis Platform we have also certified three other tier 1 storage vendors to work under our HyperMDC SAN and scale-out NAS metadata controller (HPE, HDS and NetApp). These partnerships complete our ability to consult with any type of customer looking to deploy a media-centric workflow.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Great question, and it’s actually built into the name and culture of our company. When we bring a solution to market, it has to scale seamlessly, and it needs to be logical when taking the customer’s environment into consideration. We focus on being able to start small but scale any system into a high-availability solution with limited to no downtime. Our solutions can scale independently if clients are looking to add capacity, performance or redundancy.

For example, a customer looking to move to 4K uncompressed workflows could add a Genesis Unlimited as a new workspace focused on the 4K workflow, keeping all existing infrastructure in place alongside it, avoiding major adjustments to their facility’s workflow. As more and more projects move to 4K, the Unlimited can scale capacity, performance and the needed HA requirements with zero downtime.

Customers can then start to migrate their content from their legacy storage over to Unlimited and repurpose that legacy storage onto the HyperFS file system as second-tier storage. Finally, once we have moved the legacy storage onto the new file system, we are also more than happy to bring the legacy storage and networking hardware under our global support agreements.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Cloud adoption continues to ramp up for our industry, and we have many customers using cloud solutions for various aspects of their workflow. As it pertains to content creation, manipulation and long-term archive, we have not seen much adoption within our customer base. The economics just do not support the level of performance or capacity our clients demand.

However, private cloud or cloud-like configurations are becoming more mainstream for our larger customers. Working with on-premise storage while having DR (disaster recovery) replication offsite continues to be the best solution at this point for most of our clients.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Our solutions are built not only for the current resolutions but completely scalable to go beyond them. Many of our HD customers are now putting in UHD and 4K workspaces on the same equipment we installed three years ago. In addition to 4K we have been working with several companies in Asia that have been using our HyperFS file system and Genesis HyperMDC to build 8K workflows for the Olympics.

We have a number of solutions designed to meet our customers’ requirements. Some are built with spinning disk, others with all flash, and still others take a hybrid approach to seamlessly combine the technologies.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might end-users notice when connecting on these different platforms?
All of our solutions are designed to support Windows, Linux and Mac OS. However, how they support the various operating systems is based on the protocol (block or file) we are designing for the facility. If we are building a SAN that is strictly going to be block-level access (8/16/32Gbps Fibre Channel or 1/10/25/40/100Gbps iSCSI), we would use our HyperFS file system and universal client drivers across all operating systems. If our clients are also looking for network protocols in addition to the block-level clients, we can support SMB and NFS while allowing access to block and file folders and files at the same time.

For customers that are not looking for block-level access, we would focus our design work around our Genesis NX or ZX product lines. Both of these solutions are based on a NAS operating system and simply present themselves with the appropriate protocol over 1/10/25/40 or 100Gb. The Genesis ZX solution is actually a software-defined clustered NAS with enterprise feature sets such as unlimited snapshots, metro clustering and thin provisioning, and it will scale beyond 5 petabytes.

Sonnet Technologies‘ Greg LaPorte
What kind of storage do you offer, and who is the main user of that storage?
We offer a portable, bus-powered Thunderbolt 3 SSD storage device that fits in your hand. Primary users of this product include video editors and DITs who need a “scratch drive” fast enough to support editing 4K video at 60fps while on location or traveling.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
The Fusion Thunderbolt 3 PCIe Flash Drive is currently available with 1TB capacity. With data transfer of up to 2,600 MB/s supported, most users will not run out of bandwidth when using this device.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might end-users notice when connecting on these different platforms?
Computers with Thunderbolt 3 ports running either macOS Sierra or High Sierra, or Windows 10 are supported. The drive may be formatted to suit the user’s needs, with either an OS-specific format such as HFS+, or cross-platform format such as exFAT.

Post Supervisor: Planning an approach to storage solutions

By Lance Holte

Like virtually everything in post production, storage is an ever-changing technology. Camera resolutions and media bitrates are constantly growing, requiring higher storage bitrates and capacities. Productions are increasingly becoming more mobile, demanding storage solutions that can live in an equally mobile environment. Yesterday’s 4K cameras are being replaced by 8K cameras, and the trend does not look to be slowing down.

Yet, at the same time, productions still vary greatly in size, budget, workflow and schedule, which has necessitated more storage options for post production every year. As a post production supervisor, when deciding on a storage solution for a project or set of projects, I always try to have answers to a number of workflow questions.

Let’s start at the beginning with production questions.

What type of video compression is production planning on recording?
Obviously, more storage will be required if the project is recording to ARRIRAW rather than H.264.

What camera resolution and frame rate?
Once you know the bitrate from the video compression specs, you can calculate the data size on a per-hour basis. If you don’t feel like sitting down with a calculator or spreadsheet for a few minutes, there are numerous online data size calculators, but I particularly like AJA’s DataCalc application, which has tons of presets for cameras and video and audio formats.
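If you would rather script the math than open a calculator, the arithmetic is simple. Below is a minimal Python sketch; the 400Mb/s bitrate is a hypothetical example, not a specific camera format.

    def hours_to_gigabytes(bitrate_mbps: float, hours: float = 1.0) -> float:
        """Convert a recording bitrate in megabits per second to gigabytes recorded."""
        megabits = bitrate_mbps * 3600 * hours   # 3,600 seconds per hour
        return megabits / 8 / 1000               # megabits -> megabytes -> gigabytes

    # Hypothetical example: a 400Mb/s UHD codec fills roughly 180GB per hour.
    print(round(hours_to_gigabytes(400), 1))     # -> 180.0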

How many cameras and how many hours per day is each camera likely to be recording?
Data size per hour, multiplied by hours per day, multiplied by shoot days, multiplied by number of cameras gives a total estimate of the storage required for the shoot. I usually add 10-20% to this estimate to be safe.
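Building on the sketch above, the full estimate with a safety margin might look like this (all figures are hypothetical placeholders):

    def shoot_storage_tb(gb_per_hour: float, hours_per_day: float,
                         shoot_days: int, cameras: int,
                         safety_margin: float = 0.15) -> float:
        """Total shoot storage in TB, padded by a safety margin (default 15%)."""
        total_gb = gb_per_hour * hours_per_day * shoot_days * cameras
        return total_gb * (1 + safety_margin) / 1000   # GB -> TB

    # Hypothetical example: 180GB/hr, 6 hours/day, 20 shoot days, 2 cameras ~= 49.7TB.
    print(round(shoot_storage_tb(180, 6, 20, 2), 1))   # -> 49.7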

Let’s move on to post questions…

Is it an online/offline workflow?
The simplicity of editing online is awesome, and I’m holding out for the day when all projects can be edited with online media. In the meantime, most larger projects require online/offline editorial, so keep in mind the extra storage space for offline editorial proxies. The upside is that raw camera files can be stored on slower, more affordable (even archival) storage through editorial until the online process begins.

On numerous shows I’ve elected to keep the raw camera files on portable external RAID arrays (cloned and stored in different locations for safety) until picture lock. G-Tech, LaCie, OWC and Western Digital all make 48+ TB external arrays on which I’ve stored raw media during editorial. When you start the online process, copy the necessary media over to your faster online or grading/finishing storage, and finish the project with only the raw files that are used in the locked cut.

How many editorial staff need to be working on the project simultaneously?
On smaller projects that only require an editorial staff of two or three people who need to access the media at the same time, you may be able to get away with the editors and assistants sharing a storage array over the network and working in different projects. I’ve done numerous smaller projects in which a couple of editors connected to an external RAID (I’ve had great success with Proavio and QNAP arrays) that is plugged into one workstation and shared over the network. Of course, the network must have enough bandwidth for both machines to play back the media from the storage array, but that’s the case for any shared storage system.
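As a rough sanity check on that bandwidth question, aggregate demand is simply streams multiplied by bitrate, padded for overhead. The proxy bitrate and editor count below are hypothetical.

    def aggregate_playback_mbps(stream_mbps: float, streams: int,
                                overhead: float = 0.25) -> float:
        """Aggregate network bandwidth needed, padded for protocol and seek overhead."""
        return stream_mbps * streams * (1 + overhead)

    # Hypothetical example: three editors playing 200Mb/s proxies need ~750Mb/s,
    # which leaves little headroom on a single 1GbE link; 10GbE is far more comfortable.
    print(aggregate_playback_mbps(200, 3))   # -> 750.0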

For larger projects that employ five, 10 or more editors and staff, storage that is designed for team sharing is almost a certain requirement. Avid has opened up integrated shared storage to outside storage vendors over the past few years, but Avid’s Nexis solution remains an excellent option. Aside from providing a solid solution for Media Composer and Symphony, Nexis can also be used with basically any other NLE, ranging from Adobe Premiere Pro to Blackmagic DaVinci Resolve to Final Cut Pro and others. The project-sharing abilities within the NLEs vary depending on the application, but the clear trend is toward multiple editors and post production personnel working simultaneously in the same project.

Does editorial need to be mobile?
Editorial increasingly tends to begin near the start of physical production, which can mean editors need to be on or near set. This is a pretty simple question to answer, but it is worth keeping in mind so that a shoot doesn’t end up without enough storage in a place where additional storage isn’t easily available, or where the power requirements can’t be met. It’s also a good moment to plan simple things like the number of shuttle or transfer drives that may be needed to ship media back to home base.

Does the project need to be compartmentalized?
For example, should proxy media be on a separate volume or workspace from the raw media/VFX/music/etc.? Compartmentalization is good. It’s safe. Accidents happen, and it’s a pain if someone accidentally deletes everything on the VFX volume or workspace on the editorial storage array. But it can be catastrophic if everything is stored in the same place and they delete all the VFX, graphics, audio, proxy media, raw media, projects and exports.

Split up the project onto separate volumes, and only give write access to the necessary parties. The bigger the project and team, the bigger the risk for accidents, so err on the side of safety when planning storage organization.
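As one minimal illustration of restricting write access (the path is hypothetical, and most shared storage platforms manage permissions through their own user and group tools rather than plain POSIX modes), a finished VFX folder could be made read-only for everyone but its owner:

    import os
    import stat

    # Hypothetical mount point for the VFX workspace on the editorial array.
    vfx_dir = "/Volumes/EditorialArray/VFX"

    # Owner keeps read/write/execute; group and others get read/traverse only.
    os.chmod(vfx_dir, stat.S_IRWXU | stat.S_IRGRP | stat.S_IXGRP | stat.S_IROTH | stat.S_IXOTH)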

Finally, we move to finishing, delivery and archive questions…

Will the project color and mix in-house? What are the delivery requirements? Resolution? Delivery format? Media and other files?
Color grading and finishing often require the fastest storage speeds in the whole pipeline. By this point, the project should be conformed back to the camera media, and the colorist is often working with high-bitrate, high-resolution raw media or DPX sequences, EXRs or other heavy file types. (Of course, there are as many workflows as there are projects, many of which can be very light, but let’s consider the trend toward 4K-plus and the fact that raw media generally isn’t getting lighter.) On the bright side, while grading and finishing arrays need to be fast, they don’t need to be huge, since they won’t house all the raw media or editorial media — only what is used in the final cut.

I’m a fan of using an attached SAS or Thunderbolt array, which is capable of providing high bandwidth to one or two workstations. Anything over 20TB shouldn’t be necessary, since the media will be removed and archived as soon as the project is complete, ready for the next project. Arrays like the Areca ARC-5028T2 or Proavio EB800MS deliver read speeds of 2,000+ MB/s, which can play back 4K DPX sequences in real time.
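To put that 2,000+ MB/s figure in context, here is a rough back-of-the-envelope estimate of real-time 4K DPX playback bandwidth; packing and header overhead vary by format, so treat it as an approximation rather than a spec.

    def dpx_stream_mb_per_sec(width: int, height: int, fps: float,
                              bytes_per_pixel: float = 4.0) -> float:
        """10-bit RGB DPX commonly packs three 10-bit samples per 32-bit word (~4 bytes/pixel)."""
        frame_mb = width * height * bytes_per_pixel / 1_000_000
        return frame_mb * fps

    # Hypothetical example: 4096x2160 at 24fps needs roughly 850MB/s of sustained reads,
    # so a 2,000+ MB/s array has headroom for higher frame rates or multiple streams.
    print(round(dpx_stream_mb_per_sec(4096, 2160, 24)))   # -> 849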

How should the project be archived?
There are a few follow-up questions to this one, like: Will the project need to be accessed with short notice in the future? LTO is a great long-term archival solution, but pulling large amounts of media off LTO tape isn’t exactly quick. For projects that I suspect will be reopened in the near future, I try to keep an external hard drive or RAID with the necessary media onsite. Sometimes it isn’t possible to keep all of the raw media onsite and quickly accessible, so keeping the editorial media and projects onsite is a good compromise. Offsite, in a controlled, safe, secure location, LTO-6 tapes house a copy of every file used on the project.

Post production technology changes in the blink of an eye, and storage is no exception. Once these questions have been answered, if you are spending any serious amount of money, get an opinion from someone who is intimately familiar with the cutting edge of post production storage. Emphasis on the “post production” part of that sentence, because video I/O is not the same as, say, a bank with the same storage size requirements. The more money devoted to your storage solution, the more opinions you should seek. Not all storage is created equal, so be 100% positive that the storage you select is optimal for the project’s particular workflow and technical requirements.

There is more than one good storage solution for any workflow, but the first step is always answering as many storage- and workflow-related questions as possible to start down the right path. Storage decisions are among the most complex technical parts of the post process, but like the rest of filmmaking, an exhaustive, thoughtful and collaborative approach will almost always point in the right direction.

Main Image: G-Tech, QNAP, Avid and Western Digital all make a variety of storage solutions for large and small-scale post production workflows.


Lance Holte is an LA-based post production supervisor and producer. He has spoken and taught at such events as NAB, SMPTE, SIGGRAPH and Createasphere. You can email him at lance@lanceholte.com.