
The challenges of creating a shared storage ‘spec’

By James McKenna

The specification — used in a bid, tender, RFQ or simply to provide vendors with a starting point — has been the source of frustration for many a sales engineer. Not because we wish that we could provide all the features that are listed, but because we can’t help but wonder what the author of those specs was thinking.

Creating a spec should be like designing your ideal product on paper and asking a vendor to come as close as they can to that ideal. Unlike most other forms of shopping, you avoid the sales process until the salesperson knows exactly what you want. This is good in some ways, but very limiting in others.

I dislike analogies with the auto industry because cars are personal and subjective, but in this case the analogy shows the difference between buying from a spec and buying after evaluation and research. Imagine writing down all the things you want in a car and showing up at the dealership looking for a match. You want power, beauty, technology, sports-car handling and room for five?

Your chances of finding the exact car you want are slim, unless you’re willing to compromise or adjust your budget. The same goes for facility shared storage. Many customers get hung up on the details and refuse to prioritize important aspects, like usability and sustainability, and as a result end up looking at quotes that are two to three times their cost expectations for systems that don’t perform the day-to-day work any better (and often perform worse).

There are three ways to design a specification:

Based On Your Workflow
By far, this is the best method and will result in the easiest path to getting what you want. Go ahead and plan for years down the road and challenge the vendors to keep up with your trajectory. Keep it grounded in what you believe is important to your business. This should include data security, usable administration and efficient management. Lay out your needs for backup strategy and how you’d like that to be automated, and be sure to prioritize these requests so the vendor can focus on what’s most important to you.

Be sure to clearly state the applications you’ll be using, what they will be requiring from the storage and how you expect them to work with the storage. The highest priority and true test of a successful shared storage deployment is: Can you work reliably and consistently to generate revenue? These are my favorite types of specs.

Based On Committee
Some facilities are the victim of their own size or budget. When there’s an active presence from the IT department, or the dollar amounts get too high, it’s not just up to the creative folks to select the right product. The committee can include consultants, system administrators, finance and production management, and everyone wants to justify their existence at the table. People with experience in enterprise storage and “big iron” systems will lean on their past knowledge and add terms like “Five-9s uptime,” “No SPOF,” “single namespace,” “multi-path” and “magic quadrant.”

In the enterprise storage world these would be important, but they don’t force vendors to take responsibility for prioritizing the interactions between the creative applications and the storage, and the usability and sustainability of a solution in the long term. The performance necessary to smoothly deliver a 4K program master, on time and on budget, might not even be considered. I see these types of specifications and I know that there will be a rude awakening when the quotes are distributed, usually leading to some modifications of the spec.

Based On A Product
The most limiting way to design a spec is by copying the feature list of a single product to create your requirements. I should mention that I have helped our customers to do this on some occasions, so I’m guilty here. When a customer really knows the market, and wants to avoid being bid an inferior product, this can be justified. However, you had better have completed your research beforehand, because there may be something out there that could change your opinion, and you don’t want to find out about it after you’re locked into the status quo. If you choose to do this but want to stay on the lookout for another option, simply prioritize the feature list by what’s most important to you.

If you really like something about your storage, prioritize that and see if another vendor has something similar. When I respond to these bid specs, I always provide details on our solution and how we can achieve better results than the one that is obviously being requested. Sometimes it works, sometimes not, but at least now they’re educated.

The primary frustration with specifications that miss the mark is the waste of money and time. Enterprise storage features come with enterprise storage complexity and enterprise storage price tags. This requires training or reliance upon the IT staff to manage, or in some cases completely control the network for you. Cost savings in the infrastructure can be repurposed to revenue-generating workstations and artists can be employed instead of full-time techs. There’s a reason that scrappy, grassroots facilities produce faster growth and larger facilities tend to stagnate. They focus on generating content, invest only where needed and scale the storage as the bigger jobs and larger formats arrive.

Stick with a company that makes the process easy and ensures that you’ll never be without a support person that knows your daily grind.


James McKenna is VP of marketing and sales at shared storage company Facilis.

DigitalFilm Tree’s Ramy Katrib talks trends and keynoting BMD conference

By Randi Altman

Blackmagic, which makes tools for all parts of the production and post workflow, is holding its very first Blackmagic Design Conference and Expo, produced with FMC and NAB Show. This three-day event takes place on February 11-13 in Los Angeles. The event includes a paid conference featuring over 35 sessions, as well as a free expo on February 12, which includes special guests, speakers and production and post companies.

Ramy Katrib, founder and CEO of Hollywood-based post house and software development company DigitalFilm Tree, is the keynote speaker for the conference. FotoKem DI colorist Walter Volpatto and color scientist Joseph Slomka will be keynoting the free expo on the 12th.

We reached out to Katrib to find out what he’ll be focusing on in his keynote, as well as pick his brains about technology and trends.

Can you talk about the theme of your keynote?
Resolve has grown mightily over the past few years, and is the foundation of DigitalFilm Tree’s post finishing efforts. I’ll discuss how Resolve is becoming an essential post tool. And with Resolve 14, folks who are coloring, editing, conforming and doing VFX and audio work are now collaborating on the same timeline, and that is a huge development for TV, film and every media industry creative and technician.

Why was it important for you to keynote this event?
DaVinci was part of my life when I was a colorist 25 years ago, and today BMD is relevant to me while I run my own post company, DigitalFilm Tree. On a personal note, I’ve known Grant Petty since 1999 and work with many folks at BMD who develop Resolve and the hardware products we use, like I/O cards and Teranex converters. This relationship involves us sharing our post production pain points and workflow suggestions, while BMD has provided very relevant software and hardware solutions.

Can you give us a sample of something you might talk about?
I’m looking forward to providing an overview of how Resolve is now part of our color, VFX, editorial, conform and deliverables effort, while having artists provide micro demos on stage.

You alluded to the addition of collaboration in Resolve. How important is this for users?
Resolve 14’s new collaboration tools are a huge development for the post industry, specifically in this golden age of TV where binge delivery of multiple episodes at the same time is commonplace. As the complexity of production and post increases, greater collaboration across multiple disciplines is a refreshing turn — it allows multiple artists and technicians to work in one timeline instead of 10 timelines and round tripping across multiple applications.

Blackmagic has ramped up their NLE offerings with Resolve 14. Do you see more and more editors embracing this tool for editing?
Absolutely. It always takes a little time to ramp up in professional communities. It reminds me of when the editors on Scrubs used Final Cut Pro for the first time and that ushered FCP into the TV arena. We’re already working with scripted TV editors who are in the process of transitioning to Resolve. Also, DigitalFilm Tree’s editors are now using Resolve for creative editing.

What about the Fairlight audio offerings within? Will you guys take advantage of that in any way? Do you see others embracing it?
For simple audio work, like mapping audio tracks and creating multi-mixes for 5.1 and 7.1 delivery, we are taking advantage of Fairlight and the audio functionality within Resolve. We’re not an audio house, yet it’s great to have a tool like this for convenience and workflow efficiency.

What trends did you see in 2017 and where do you think things will land in 2018?
Last year was about the acceptance of cloud-based production and post process. This year is about the wider use of cloud-based production and post process. In short, what used to be file-based workflows will give way to cloud-based solutions and products.

postPerspective readers can get $50 off registration for the Blackmagic Design Conference & Expo by using code POST18.


Made in NY’s free post training program continues in 2018

New York City’s post production industry continues to grow thanks to the creation of New York State’s Post Production Film Tax Credit, which was established in 2010. Since then, over 1,000 productions have applied for the credit, creating almost a million new jobs.

“While this creates more pathways for New York City residents to get into the industry, there is evidence that this growth is not equally distributed among women and people of color. In response to this need, the NYC Mayor’s Office of Media and Entertainment decided to create the Made in New York Post Production Training Program, which built on the success of the Made in New York PA Training Program, which for the last 11 years has trained over 700 production assistants for work on TV and film sets,” explains Ryan Penny, program director of the Made In NY Post Production Training Program.

The Post Production Training Program seeks to diversify New York’s post industry by training low-income and unemployed New Yorkers in the basics of editing, animation and visual effects. Created in partnership with the Blue Collar Post Collective, BRIC Media Arts and Borough of Manhattan Community College, the course is free to participants and consists of a five-week, full-time skills training and job placement program administered by workforce development non-profit Brooklyn Workforce Innovations.

Trainees take part in classroom training covering the history and theory of post production, as well as technical training in Avid Media Composer, Adobe Premiere, After Effects and Photoshop, and Foundry’s Nuke. “Upon successful completion of the training, our staff will work with graduates to identify job opportunities for a period of two years,” says Penny.

Ryan Penny, far left, with the most recent graduating class.

Launched in June 2017, the Made in New York Post Production Training Program graduated its second cycle of trainees in January 2018 and is now busy establishing partnerships with New York City post houses and productions who are interested in hiring graduates of the program as post PAs, receptionists, client service representatives, media management technicians and more.

“Employers can expect entry-level employees who are passionate about post and hungry to continue learning on the job,” reports Penny. “As an added incentive, the city has created a work-based learning program specifically for MiNY Post graduates, which allows qualified employers to be reimbursed for up to 80% of the first 280 hours of a trainee’s wages. This results in a win-win for employers and employees alike.”

The Made in New York Post Production Training Program will be conducting further cycles throughout the year, beginning with Cycle 3 planned for spring 2018. More information on the program and how to hire program graduates can be found here.


Sim Post LA beefs up with Greg Ciaccio and Paul Chapman

It’s always nice when good things happen to good people. Recently, long-time industry post pros Greg Ciaccio and Paul Chapman joined Sim Post LA — Greg as VP of post and Paul as VP of engineering and technology.

postPerspective has known both Greg and Paul for years and often calls on them to pick their brains about technology, so having them end up working together warms our hearts.

Sim Post is a division of Sim, which provides end-to-end solutions for TV and feature film production and post production in LA, Vancouver, Toronto, New York and Atlanta.

“I’ll be working with the operations, sales, technology and finance teams to ensure tight integration between departments — always in the service of our clients,” reports Ciaccio. “Our ability to offer end-to-end services is a great advantage in the industry. I’ve admired the work produced by the talented group at Sim Post LA (formerly Chainsaw and Bling), and now I’m pleased to be a part of the team.”

Ciaccio’s resume includes executive operations management positions for creative service divisions at Ascent, Technicolor and Deluxe, where he led product development teams. He also serves as chair of the ASC Motion Imaging Technology Council’s Workflow Committee, currently focused on ACES education and enlightenment, and is a member of the UHD/HDR Committee and the Joint ASC/ICG/VES/PGA VR Committee.

Chapman, a Fellow of SMPTE, has held executive technology and engineering positions over the last 30 years, including his long-time role at FotoKem, as well as stints at Unitel Video and others. His skillset includes expertise in storage and networking infrastructure, facility engineering and operations.

“Sim has a lot of potential, and when the opportunity was presented to lead their engineering and technology departments, it really intrigued me,” says Chapman. “The LA facility itself is well constructed from the ground up. I’m looking forward to working with the creative and technical teams across the organization to enhance our technical operations, foster innovation and elevate performance for our clients.”

Greg and Paul are based at Sim’s operations in Hollywood.

Main Caption: (L-R) Greg Ciaccio and Paul Chapman


Industry mainstay Click3X purchased by Industrial Color Studios

Established New York City post house Click3X has been bought by Industrial Color Studios. Click3X is a 25-year-old facility that specializes in new media formats such as VR, AR, CGI and live streaming. Industrial Color Studios is a visual content production company. Founded in 1992, Industrial Color’s services range from full image capture and e-commerce photography to production support and post services, including creative editorial, color grading and CG.

With offices in New York and LA, Industrial Color has developed its own proprietary systems to support online digital asset management for video editing and high-speed file transfers for its clients working in broadcast and print media. The company is an end-to-end visual content production provider, partnering with top brands, agencies and creative professionals to accelerate multi-channel creative content.

Click3X was founded in 1993 by Peter Corbett, co-founder of numerous companies specializing in both traditional and emerging forms of media.  These include Media Circus (a digital production and web design company), IllusionFusion, Full Blue, ClickFire Media, Reason2Be, Sound Lounge and Heard City. A long-time member of the DGA as a commercial film director, Corbett emigrated to the US from Australia to pursue a career as a commercial director and, shortly thereafter, segued into integrated media and mixed media, becoming one of the first established film directors to do so.

Projects produced at Click3X have been honored with the industry’s top awards, including Cannes Lions, Clios, Andy Awards and others. Click3X also was presented with the Crystal Apple Award, presented by the New York City Mayor’s Office of Media and Entertainment, in recognition of its contributions to the city’s media landscape.

Corbett will remain in place at Click3X and eventually the companies will share the ICS space on 6th Avenue in NYC.

“We’ve seen a growing need for video production capabilities and have been in the market for a partner that would not only enhance our video offering, but one that provided a truly integrated and complementary suite of services,” says Steve Kalalian, CEO of Industrial Color Studios. “And Click3X was the ideal fit. While the industry continues to evolve at lightning speed, I’ve long admired Click3X as a company that’s consistently been on the cutting edge of technology as it pertains to creative film, digital video and new media solutions. Our respective companies share a passion for creativity and innovation, and I’m incredibly excited to share this unique new offering with our clients.”

“When Steve and I first entered into talks to align on the state of our clients’ future, we were immediately on the same page,” says Corbett, president of Click3X. “We share a vision for creating compelling content in all formats. As complementary production providers, we will now have the exciting opportunity not only to collaborate on a robust and highly regarded client roster, but also to expand the company’s creative and new media capabilities, using over 200,000 square feet of state-of-the-art facilities in New York, Los Angeles and Philadelphia.”

The added capabilities Click3X gives Industrial Color in video production and new media mirror its growth in the field of e-commerce photography and image capture. The company has recently opened a new 30,000 square-foot studio in downtown Los Angeles designed to produce high-volume, high-quality product photography for advertisers. That studio complements the company’s existing e-commerce photography hub in Philadelphia.

Main Image: (L-R) Peter Corbett and Steve Kalalian


FotoKem posts Star Wars: The Last Jedi

Burbank-based post house FotoKem provided creative and technical services for the Disney/Lucasfilm movie Star Wars: The Last Jedi. The facility built advanced solutions that supported the creative team from production to dailies to color grade. Services included a customized workflow for dailies, editorial and VFX support, conform and a color pipeline that incorporated all camera formats (film and file-based).

The long-established post house worked directly with director Rian Johnson; DP Steve Yedlin, ASC; producer Ram Bergman; Lucasfilm head of post Pippa Anderson; and Lucasfilm director of post Mike Blanchard.

FotoKem was brought on prior to the beginning of principal photography and designed an intricate workflow tailored to accommodate the goals of production. A remote post facility was assembled near-set in London where film technician Simone Appleby operated two real-time Scanity film scanners, digitizing up to 15,000 feet a day of 35mm footage at full-aperture 4K resolution. Supported by a highly secure network, FotoKem NextLab systems ingested the digitized film and file-based camera footage, providing “scan once instant-access” to everything, and creating a singular workflow for every unit’s footage. By the end of production over one petabyte of data was managed by NextLab. This allowed the filmmakers, visual effects teams, editors and studio access to securely and easily share large volumes of assets for any part of the workflow.
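Those figures are easy to sanity-check with a little arithmetic. The sketch below is ours, not FotoKem’s, and assumes standard 4-perf 35mm (16 frames per foot) and roughly 50MB per full-aperture 4K 10-bit DPX frame:

```python
# Rough estimate of daily scan volume for the workflow described above.
# Assumptions (ours, not FotoKem's): 4-perf 35mm at 16 frames per foot,
# and ~50 MB per full-aperture 4K 10-bit DPX frame.

FRAMES_PER_FOOT = 16          # standard 4-perf 35mm
FEET_PER_DAY = 15_000         # figure quoted in the article
MB_PER_FRAME = 50             # approx. 4096 x 3112 pixels, 4 bytes per pixel

frames_per_day = FEET_PER_DAY * FRAMES_PER_FOOT
tb_per_day = frames_per_day * MB_PER_FRAME / 1_000_000

print(f"{frames_per_day:,} frames/day, roughly {tb_per_day:.1f} TB/day of scans")
# => 240,000 frames/day, roughly 12.0 TB/day before any file-based camera
#    footage -- a rate at which a petabyte accumulates over a long shoot.
```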

“I worked with FotoKem previously and knew their capabilities. This project clearly required a high level of support to handle global locations with multiple units and production partners,” says Bergman. “We had a lot of requirements at this scale to create a consistent workflow for all the teams using the footage, from production viewing dailies to the specific editorial deliverables, visual effects plates, marketing and finishing, with no delays or security concerns.”

Before shooting began, Yedlin worked with FotoKem’s film and digital lab to create specialized scanner profiles and custom Look Up Tables (LUTs). FotoKem implemented the algorithms devised by Yedlin into their NextLab software to obtain a seamless match between digital footage and film scans. Yedlin also received full-resolution stills, which served as a communication funnel for color and quality control checks. This color workflow was devised in collaboration with FotoKem color scientist Joseph Slomka, and executed by NextLab software developer Eric Cameron and dailies colorist Jon Rocke, who were on site throughout the entire production.
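For readers less familiar with LUTs, the sketch below shows the basic idea: image values are mapped through a lookup table with interpolation to impose a look. It is a generic Python illustration, assuming a simple per-channel 1D LUT and a made-up S-curve; it is not Yedlin’s algorithm or the NextLab implementation.

```python
import numpy as np

def apply_1d_lut(image, lut):
    """Map each channel of a float image (values in 0..1) through a 1D LUT.

    `lut` holds N output values sampled at evenly spaced inputs;
    np.interp linearly interpolates between those samples.
    """
    xs = np.linspace(0.0, 1.0, lut.shape[0])
    return np.interp(image, xs, lut)

# Hypothetical example: a gentle S-curve "look" baked into a 33-point LUT.
xs = np.linspace(0.0, 1.0, 33)
s_curve = 0.5 + 0.5 * np.tanh(4.0 * (xs - 0.5)) / np.tanh(2.0)

frame = np.random.rand(1080, 1920, 3).astype(np.float32)  # stand-in for a scan
graded = apply_1d_lut(frame, s_curve)
print(graded.shape, graded.min(), graded.max())
```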

“As cinematographers, we work hard to create looks, and FotoKem made it possible for me to take control of each step in the process and know exactly what was happening,” says Yedlin. “The color science support I received made true image control a realized concept.”

Calibrated 4K monitoring via the Sony X300 and the high-availability SAN on site, managed by NextLab, enabled a real-time workflow for dailies. Visual effects and editorial teams had instant access, via a high-density NAS, to full-fidelity footage during and after production for all VFX pulls and conform pulls. The NAS acted as a backup for all source content and was live throughout production. Through the system’s interface, they could procure footage, pull shots as needed, and maintain exact color and metadata integration between any step.

For the color grade, FotoKem colorist Walter Volpatto used Blackmagic Resolve to fine-tune raw images, as well as those from ILM, with Johnson and Yedlin using the color and imaging pipeline established from day one. FotoKem also set up remote grading suites at Skywalker Sound and Disney so the teams could work during the sound mix, and later while grading for HDR and other specialty theatrical deliverables. They used a Barco 4K projector for final finishing.

“The film emulation LUT that Steve (Yedlin) created carried nuances he wanted in the final image and he was mindful of this while shooting, lighting both the film and digital scenes so that minimal manipulation was required in the color grade,” Volpatto explains. “Steve’s mastery of lighting for both formats, as well as his extensive understanding of color science, helped to make the blended footage look more cohesive.”

Volpatto also oversaw the HDR pass and IMAX versions. Ultimately, multiple deliverables were created by FotoKem including standard DCP, HDR10, Dolby Vision, HLG, 3D (in standard, stereo Dolby and 2D Dolby HDR) and home video formats. FotoKem worked with IMAX to align the color science pipeline with their Xenon and laser DCPs and 15-perf 70mm prints as well.

“It’s not every day that we would ship scanners to remote locations and integrate a real-time post environment that would rival many permanent installations,” concludes Mike Brodersen, FotoKem’s chief strategy officer.


Behind the Title: Frame of Reference CEO/Chief Creative Twain Richardson

NAME: Twain Richardson

COMPANY: Kingston, Jamaica-based Frame of Reference (@forpostprod)

CAN YOU DESCRIBE YOUR COMPANY?
Frame of Reference is a boutique post production company specializing in TV commercials, digital content, music videos and films.

WHAT’S YOUR JOB TITLE?
CEO and chief creative, but also head cook and bottle washer. At the moment we are a small team, so my roles overlap.

WHAT DOES THAT ENTAIL?
Working on some projects. I’ll jump in and help the team edit or do some color. I’m also making sure clients and employees are happy.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
That it’s fun, or I find it fun. It makes life interesting.

WHAT HAVE YOU LEARNED OVER THE YEARS ABOUT RUNNING A BUSINESS?
It’s hard, very hard. There are always new and improved challenges that keep you up at night. Also, you have to be reliable, and being reliable means that you meet deadlines or answer the phone when a client calls.

WHAT TOOLS DO YOU USE?
We use Adobe Premiere for editing and Blackmagic Resolve for color work.

A LOT OF IT MUST BE ABOUT TRYING TO KEEP EMPLOYEES AND CLIENTS HAPPY. HOW DO YOU BALANCE THAT?
I find that one of the most impactful rules is to remember what it felt like to be an employee, and to always listen to your staff concerns. I think I am blessed with the perfect team so keeping employees happy is not too hard at Frame of Reference. Once employees are happy, then we can make and maintain the happiness of our clients.

WHAT’S YOUR FAVORITE PART OF THE JOB?
A happy client.

WHAT’S YOUR LEAST FAVORITE?
I don’t have a least favorite. There are days that I don’t like, of course, but I know that’s a part of running a business so I push on through.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I’m on Twitter and Instagram. I like Twitter for the conversations that you can engage in. The #postchat is a great hashtag to follow and a way to meet other post professionals.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
The moment I wake up. There is no greater feeling than opening your eyes, taking your first deep breath of the day and realizing that you’re alive.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I relax. This could mean reading a book, and fortunately we are located in Jamaica where the beach is a stone’s throw away.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Growing up I wanted to be a pilot or a civil engineer, but I can’t picture myself doing something else. I love post production and running a business.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We recently did a TV commercial for the beer company Red Stripe, and a music video for international artist Tres, titled Looking for Love.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My MacBook Pro, my phone and my mechanical watch.


AICP and AICE to merge January 1

The AICP and AICE are celebrating the New Year in a very special way — they are merging into one organization. These two associations represent companies that produce and finish the majority of advertising and marketing content in the moving image. Post merger, AICP and AICE will function as a single association under the AICP brand. They will promote and advocate for independent production and post companies when it comes to producing brand communications for advertising agencies, advertisers and media companies.

The merger comes after months of careful deliberations on the part of each association’s respective boards and final votes of approval by their memberships. Under the newly merged association’s structure, executive director of AICE Rachelle Madden will assume the title of VP, post production and digital production affairs of AICP. She will report to president/CEO of AICP Matt Miller. Madden is now tasked with taking the lead on AICP’s post production offerings, including position papers, best practices, roundtables, town halls and other educational programs. She will also lead a post production council, which is being formed to advise the AICP National Board on post matters.

Former AICE members will be eligible to join the General Member Production companies of AICP, with access to all benefits starting in 2018. These include: Participation in the Producers’ Health Benefits Plan (PHBP); the AICP Legal Initiative (which provides legal advice on contracts with agencies and advertisers); and access to position papers, guidelines and other tools as they relate to business affairs and employment issues. Other member benefits include access to attend meetings, roundtables, town halls and seminars, as well as receiving the AICP newsletter, member discounts on services and a listing in the AICP membership directory on the AICP website.

All AICP offerings — including its AICP Week Base Camp for thought leadership — will reflect the expanded membership to include topics and issues pertaining to post production. Previously created AICE documents, position papers and forms will now live on aicp.com.

The AICP was founded in 1972 to protect the interests of independent commercial producers, crafting guidelines and best practices in an effort to help its members run their businesses more effectively. Through its AICP Awards, the organization celebrates creativity and craft in marketing communications.

AICE was founded in 1998 when three independent groups representing editing companies in Chicago, Los Angeles and New York formed a national association to discuss issues and undertake initiatives affecting post production on a broader scale. In addition to editing, the full range of post production disciplines, including color correction, visual effects, audio mixing and music and sound design are represented.

From AICP’s perspective, says Miller, merging the two organizations has benefits for members of both groups. “As we grow more closely allied, it makes more sense than ever for the organizations to have a unified voice in the industry,” he notes. He points out that there are numerous companies that are members of both organizations, reflecting the blurring of the lines between production and post that’s been occurring as media platforms, technologies and client needs have changed.

For Madden, AICE’s members will be joining an organization that provides them with a firm footing in terms of resources, programs, benefits and initiatives. “There are many reasons why we moved forward on this merger, and most of them involve amplifying the voice of the post production industry by combining our interests and advocacy with those of AICP members. We now become part of a much larger group, which gives us a strength in numbers we didn’t have before while adding critical post production perspectives to key discussions about business practices and industry trends.”

Main Image: Matt Miller and Rachelle Madden


Storage Roundtable

Production, post, visual effects, VR… you can’t do it without a strong infrastructure. This infrastructure must include storage and products that work hand in hand with it.

This year we spoke to a sampling of those providing storage solutions — of all kinds — for media and entertainment, as well as a storage-agnostic company that helps get your large files from point A to point B safely and quickly.

We gathered questions from real-world users — things that they would ask of these product makers if they were sitting across from them.

Quantum’s Keith Lissak
What kind of storage do you offer, and who is the main user of that storage?
We offer a complete storage ecosystem based around our StorNext shared storage and data management solution, including Xcellis high-performance primary storage, Lattus object storage and Scalar archive and cloud. Our customers include broadcasters, production companies, post facilities, animation/VFX studios, NCAA and professional sports teams, ad agencies and Fortune 500 companies.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Xcellis features continuous scalability and can be sized to precisely fit current requirements and scaled to meet future demands simply by adding storage arrays. Capacity and performance can grow independently, and no additional accelerators or controllers are needed to reach petabyte scale.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
We don’t have exact numbers, but a growing number of our customers are using cloud storage. Our FlexTier cloud-access solution can be used with both public (AWS, Microsoft Azure and Google Cloud) and private (StorageGrid, CleverSafe, Scality) storage.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
We offer a range of StorNext 4K Reference Architecture configurations for handling demanding workflows, including 4K, 8K and VR. Our customers can choose systems with small or large form-factor HDDs, up to an all-flash SSD system with the ability to handle 66 simultaneous 4K streams.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might users notice when connecting on these different platforms?
StorNext systems are OS-agnostic and can work with all Mac, Windows and Linux clients with no discernible difference.

Zerowait’s Rob Robinson
What kind of storage do you offer, and who is the main user of that storage?
Zerowait’s SimplStor storage product line provides storage administrators with the scalable, flexible and reliable on-site storage needed for their growing requirements and workloads. SimplStor’s platform can be configured to work in Linux or Windows environments, and we have several customers with multiple petabytes in their data centers. SimplStor systems have been used in VFX production for many years, and we also provide solutions for video creation and many other large data environments.

Additionally, Zerowait specializes in NetApp service, support and upgrades, and we have provided many companies in the media and VFX businesses with off-lease transferrable licensed NetApp storage solutions. Zerowait provides storage hardware, engineering and support for customers that need reliable and big storage. Our engineers support customers with private cloud storage and customers that offer public cloud storage on our storage platforms. We do not provide any public cloud services to our customers.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Our customers typically need on-site storage for processing speed and security. We have developed many techniques and monitoring solutions that we have incorporated into our service and hardware platforms. Our SimplStor and NetApp customers need storage infrastructures that scale into the multiple petabytes, and often require GigE, 10GigE or a NetApp FC connectivity solution. For customers that can’t handle the bandwidth constraints of the public Internet to process their workloads, Zerowait has the engineering experience to help our customers get the most of their on-premises storage.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Many of our customers use public cloud solutions for their non-proprietary data storage while using our SimplStor and NetApp hardware and support services for their proprietary, business-critical, high-speed and regulatory storage solutions where data security is required.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
SimplStor’s density and scalability make it perfect for use in HD and higher resolution environments. Our SimplStor platform is flexible and we can accommodate customers with special requests based on their unique workloads.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might users notice when connecting on these different platforms?
Zerowait’s NetApp and SimplStor platforms are compatible with both Linux (NFS) and Windows (CIFS) environments. OS X is supported in some applications. Every customer has a unique infrastructure and set of applications they are running. Customers will see differences in performance, but our flexibility allows us to customize a solution to maximize the throughput to meet workflow requirements.

Signiant’s Mike Nash
What kind of storage works with your solution, and who is the main user or users of that storage?
Signiant’s Media Shuttle file transfer solution is storage agnostic, and for nearly 200,000 media pros worldwide it is the primary vehicle for sending and sharing large files. Even though Media Shuttle doesn’t provide storage, many users think of their data as “in Media Shuttle.” In reality, their files are located in whatever storage their IT department has designated. This might be the company’s own on-premises storage, or it could be their AWS or Microsoft Azure cloud storage tenancy. Our users employ a Media Shuttle portal to send and share files; they don’t have to think about where the files are stored.

How are you making sure your products are scalable so people can grow either their use or the bandwidth of their networks (or both)?
Media Shuttle is delivered as a cloud-native SaaS solution, so it can be up and running immediately for new customers, and it can scale up and down as demand changes. The servers that power the software are managed by our DevOps team and monitored 24×7 — and the infrastructure is auto-scaling and instantly available. Signiant does not charge for bandwidth, so customers can use our solutions with any size pipe at no additional cost. And while Media Shuttle can scale up to support the needs of the largest media companies, the SaaS delivery model also makes it accessible to even the smallest production and post facilities.

How many of the people buying your solutions are using them with cloud storage (i.e. AWS or Microsoft Azure)?
Cloud adoption within the M&E industry remains uneven, so it’s no surprise that we see a mixed picture when we look at the storage choices our customers make. Since we first introduced the cloud storage option, there has been a constant month-over-month growth in the number of customers deploying portals with cloud storage. It’s not yet in parity with on-prem storage, but the growth trends are clear.

On-premises content storage is very far from going away. We see many Media Shuttle customers taking a hybrid approach, with some portals using cloud storage and others using on-prem storage. It’s also interesting to note that when customers do choose cloud storage, we increasingly see them use both AWS and Azure.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
We can move any size of file. As media files continue to get bigger, the value of our solutions continues to rise. Legacy solutions such as FTP, which lack any file acceleration, will grind things to a halt if 4K, 8K, VR and other huge files need to be moved between locations. And consumer-oriented sharing services like Dropbox and Google Drive become non-starters with these types of files.

What platforms do your systems connect to (e.g. Mac OS X, Windows, Linux), and what differences might end-users notice when connecting on these different platforms?
Media Shuttle is designed to work with a wide range of platforms. Users simply log in to portals using any web browser. In the background, a native application installed on the user’s personal computer provides the acceleration functionality. This app works with Windows and Mac OS X systems.

On the IT side of things, no installed software is required for portals deployed with cloud storage. To connect Media Shuttle to on-premises storage, the IT team will run Signiant software on a computer in the customer’s network. This server-side software is available for Linux and Windows.

NetApp’s Jason Danielson
What kind of storage do you offer, and who is the main user of that storage?
NetApp has a wide portfolio of storage and data management products and services. We have four fundamentally different storage platforms — block, file, object and converged infrastructure. We use these platforms and our data fabric software to create a myriad of storage solutions that incorporate flash, disk and cloud storage.

1. NetApp E-Series block storage platform is used by leading shared file systems to create robust and high-bandwidth shared production storage systems. Boutique post houses, broadcast news operations and corporate video departments use these solutions for their production tier.
2. NetApp FAS network-attached file storage runs NetApp OnTap. This platform supports many thousands of applications for tens of thousands of customers in virtualized, private cloud and hybrid cloud environments. In media, this platform is designed for extreme random-access performance. It is used for rendering, transcoding, analytics, software development and the Internet-of-things pipelines.
3. NetApp StorageGrid Webscale object store manages content and data for back-up and active archive (or content repository) use cases. It scales to dozens of petabytes, billions of objects and currently 16 sites. Studios and national broadcast networks use this system and are currently moving content from tape robots and archive silos to a more accessible object tier.
4. NetApp SolidFire converged and hyper-converged platforms are used by cloud providers and enterprises running large private clouds for quality-of-service across hundreds to thousands of applications. Global media enterprises appreciate the ease of scaling, simplicity of QOS quota setting and overall maintenance for largest scale deployments.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
The four platforms mentioned above scale up and scale out to support well beyond the largest media operations in the world. So our challenge is not scalability for large environments but appropriate sizing for individual environments. We are careful to design storage and data management solutions that are appropriate to media operations’ individual needs.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Seven years ago, NetApp set out on a major initiative to build the data fabric. We are well on the path now with products designed specifically for hybrid cloud (a combination of private cloud and public cloud) workloads. While the uptake in media and entertainment is slower than in other industries, we now have hundreds of customers that use our storage in hybrid cloud workloads, from backup to burst compute.

We help customers wanting to stay cloud-agnostic by using AWS, Microsoft Azure, IBM Cloud and Google Cloud Platform flexibly and as the project and pricing demands. AWS, Microsoft Azure, IBM, Telstra and ASE, along with another hundred or so cloud storage providers, include NetApp storage and data management products in their service offerings.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
For higher-bandwidth, or higher-bitrate, video production we’ll generally architect a solution with our E-Series storage under either Quantum StorNext or PixitMedia PixStor. Since 2012, when the NetApp E5400 enabled the mainstream adoption of 4K workflows, the E-Series platform has seen three generations of upgrades and the controllers are now more than 4x faster. The chassis has remained the same through these upgrades, so some customers have chosen to put the latest controllers into these chassis to improve bandwidth or to use faster network interconnects like 16Gb Fibre Channel. Many post houses continue to use Fibre Channel to the workstation for these higher-bandwidth video formats, while others have chosen to move to Ethernet (40 and 100 Gigabit). As flash (SSDs) continues to drop in price, it is starting to be used for video production in all-flash arrays or in hybrid configurations. We recently showed our new E570 all-flash array supporting NVM Express over Fabrics (NVMe-oF) technology, providing 21GB/s of bandwidth and 1 million IOPS with less than 100µs of latency. This technology is initially targeted at supercomputing use cases, and we will see if it is adopted over the next couple of years for UHD production workloads.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.), and what differences might end-users notice when connecting on these different platforms?
NetApp maintains a compatibility matrix table that delineates our support of hundreds of client operating systems and networking devices. Specifically, we support Mac OS X, Windows and various Linux distributions. Bandwidth expectations differ between these three operating systems and Ethernet and Fibre Channel connectivity options, but rather than make a blanket statement about these, we prefer to talk with customers about their specific needs and legacy equipment considerations.

G-Technology’s Greg Crosby
What kind of storage do you offer, and who is the main user of that storage?
Western Digital’s G-Technology products provide high-performing and reliable storage solutions for end-to-end creative workflows, from capture and ingest to transfer and shuttle, all the way to editing and final production.

The G-Technology brand supports a wide range of users for both field and in-studio work, with solutions that span a number of portable handheld drives — which are oftentimes used to back up content on the go — all the way to in-studio drives that offer capacities up to 144TB. We recognize that each creative has their own unique workflow and some embrace the use of cloud-based products. We are proud to be companions to those cloud services as a central location to store raw content or a conduit to feed cloud features and capabilities.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Our line ranges from small portable and rugged drives to large, multi-bay RAID and NAS solutions, for all aspects of the media and entertainment industry. Integrating the latest interface technology such as USB-C or Thunderbolt 3, our storage solutions will take advantage of the ability to quickly transfer files.

We make it easy to take a ton of storage into the field. The G-Speed Shuttle XL drive is available in capacities up to 96TB, and an optional Pelican case with handle makes it easy to transport in the field and mitigates any concerns about running out of storage. We recently launched the G-Drive mobile SSD R-Series. This drive is built to withstand a three-meter (nine-foot) drop and is able to endure accidental bumps or drops, given that it is a solid-state drive.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Many of our customers are using cloud-based solutions to complement their creative workflows. We find that most of our customers use our solutions as the primary storage, or to easily transfer and shuttle their content, since the cloud is not an efficient way to move large amounts of data. We see cloud capabilities as a great way to share project files and low-resolution content, or to collaborate with others on projects, as well as to distribute and share a variety of deliverables.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Today’s camera technology enables not only capture at higher resolutions but also higher frame rates with more dynamic imagery. We have solutions that can easily support multi-stream 4K, 8K and VR workflows or multi-layer photo and visual effects projects. G-Technology is well positioned to support these creative workflows as we integrate the latest technologies into our storage solutions. From small portable and rugged SSD drives to high-capacity, fast multi-drive RAID solutions with the latest Thunderbolt 3 and USB-C interface technology, we are ready to tackle a variety of creative endeavors.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.), and what differences might users notice when connecting on these different platforms?
Our complete portfolio of external storage solutions work for Mac and PC users alike. With native support for Apple Time Machine, these solutions are formatted for Mac OS out of the box, but can be easily reformatted for Windows users. G-Technology also has a number of strategic partners with technology vendors, including Apple, Atomos, Red Camera, Adobe and Intel.

Panasas’ David Sallak
What kind of storage do you offer, and who is the main user of that storage?
Panasas ActiveStor is an enterprise-class easy-to-deploy parallel scale-out NAS (network-attached storage) that combines Flash and SATA storage with a clustered file system accessed via a high-availability client protocol driver with support for standard protocols.

The ActiveStor storage cluster consists of the ActiveStor Director (ASD-100) control engine, the ActiveStor Hybrid (ASH-100) storage enclosure, the PanFS parallel file system, and the DirectFlow parallel data access protocol for Linux and Mac OS.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
ActiveStor is engineered to scale easily. There are no specific architectural limits for how widely the ActiveStor system can scale out, and adding more workloads and more users is accomplished without system downtime. The latest release of ActiveStor can grow either storage or bandwidth needs in an environment that lets metadata responsiveness, data performance and data capacity scale independently.

For example, we quote capacity and performance numbers for a Panasas storage environment containing 200 ActiveStor Hybrid 100 storage node enclosures with 5 ActiveStor Director 100 units for filesystem metadata management. This configuration would result in a single 57PB namespace delivering 360GB/s of aggregate bandwidth and in excess of 2.6M IOPS.
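Dividing those aggregate numbers back out gives a feel for the per-enclosure contribution. This is our arithmetic, using only the figures quoted above:

```python
# Back-of-the-envelope check on the quoted 200-enclosure configuration.
enclosures = 200
total_capacity_pb = 57
total_bandwidth_gbs = 360        # GB/s aggregate
total_iops = 2_600_000

print(f"capacity per enclosure : {total_capacity_pb * 1000 / enclosures:.0f} TB")
print(f"bandwidth per enclosure: {total_bandwidth_gbs / enclosures:.1f} GB/s")
print(f"IOPS per enclosure     : {total_iops / enclosures:,.0f}")
# => roughly 285 TB, 1.8 GB/s and 13,000 IOPS per ASH-100 enclosure
```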

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Panasas customers deploy workflows and workloads in ways that are well-suited to consistent on-site performance or availability requirements, while experimenting with remote infrastructure components such as storage and compute provided by cloud vendors. The majority of Panasas customers continue to explore the right ways to leverage cloud-based products in a cost-managed way that avoids surprises.

This means that workflow requirements for file-based storage continue to take precedence when processing real-time video assets, while customers also expect that storage vendors will support the ability to use Panasas in cloud environments where the benefits of a parallel clustered data architecture can exploit the agility of underlying cloud infrastructure without impacting expectations for availability and consistency of performance.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Panasas ActiveStor is engineered to deliver superior application responsiveness via our DirectFlow parallel protocol for applications working in compressed UHD, 4K and higher-resolution media formats. Compared to traditional file-based protocols such as NFS and SMB, DirectFlow provides better granular I/O feedback to applications, resulting in client application performance that aligns well with the compressed UHD, 4K and other extreme-resolution formats.

For uncompressed data, Panasas ActiveStor is designed to support large-scale rendering of these data formats via distributed compute grids such as render farms. The parallel DirectFlow protocol results in better utilization of CPU resources in render nodes when processing frame-based UHD, 4K and higher-resolution formats, resulting in less wall clock time to produce these formats.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might users notice when connecting on these different platforms?
Panasas ActiveStor supports macOS and Linux with our higher-performance DirectFlow parallel client software. We support all client platforms via NFS or SMB as well.

Users would notice that when connecting to Panasas ActiveStor via DirectFlow, the I/O experience is as if users were working with local media files on internal drives, compared to working with shared storage where normal protocol access may result in the slight delay associated with open network protocols.

Facilis’ Jim McKenna
What kind of storage do you offer, and who is the main user of that storage?
We have always focused on shared storage for the facility. It’s high-speed attached storage and good for anyone who’s cutting HD or 4K. Our workflow and management features really make us different than basic network storage. We have attachment to the cloud through software that uses all the latest APIs.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Most of our large customers have been with us for several years, and many started pretty small. Our method of scalability is flexible in that you can decide to simply add expansion drives, add another server, or add a head unit that aggregates multiple servers. Each method increases bandwidth as well as capacity.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Many customers use cloud, either through a corporate gateway or directly uploaded from the server. Many cloud service providers have ways of accessing the file locations from the facility desktops, so they can treat it like another hard drive. Alternatively, we can schedule, index and manage the uploads and downloads through our software.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Facilis is known for our speed. We still support Fibre Channel when everyone else, it seems, has moved completely to Ethernet, because it provides better speeds for intense 4K and beyond workflows. We can handle UHD playback on 10Gb Ethernet, and up to 4K full frame DPX 60p through Fibre Channel on a single server enclosure.
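The bandwidth arithmetic behind that claim is worth spelling out. Assuming full-aperture 4K DPX at 4096 x 3112 with 10-bit RGB packed into 4 bytes per pixel (a common DPX packing, and our assumption rather than a Facilis figure), a 60p stream needs roughly 3GB/s, well beyond what a single 10Gb Ethernet link can carry:

```python
# Why full-frame 4K DPX at 60p outruns a single 10GbE link.
# Assumptions: full-aperture 4K (4096 x 3112), 10-bit RGB packed into
# 4 bytes per pixel -- typical for DPX, not a figure quoted by Facilis.

width, height = 4096, 3112
bytes_per_pixel = 4
fps = 60

frame_mb = width * height * bytes_per_pixel / 1e6
stream_gbs = frame_mb * fps / 1000
ten_gbe_gbs = 10e9 / 8 / 1e9      # 10 Gb/s expressed in GB/s, before overhead

print(f"per frame : {frame_mb:.0f} MB")
print(f"per stream: {stream_gbs:.2f} GB/s at {fps}p")
print(f"10GbE raw : {ten_gbe_gbs:.2f} GB/s")
# => ~51 MB/frame and ~3.1 GB/s per stream vs ~1.25 GB/s on 10GbE
```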

What platforms do your systems connect to (e.g. Mac OS X, Windows, Linux, etc.)? And what differences might users notice when connecting on these different platforms?
We have a custom multi-platform shared file system, not NAS (network attached storage). Even though NAS may be compatible with multiple platforms by using multiple sharing methods, permissions and optimization across platforms is not easily manageable. With Facilis, the same volume, shared one way with one set of permissions, looks and acts native to every OS and even shows up as a local hard disk on the desktop. You can’t get any more cross-platform compatible than that.

SwiftStack’s Mario Blandini
What kind of storage do you offer, and who is the main user of that storage?
We offer hybrid cloud storage for media. SwiftStack is 100% software and runs on-premises atop the server hardware you already buy, using local capacity and/or capacity in public cloud buckets. Data is stored in cloud-native format, so there is no need for gateways, which do not scale. Our technology is used by broadcasters for active archive and OTT distribution, by digital animators for distributed transcoding, and by mobile gaming/eSports companies for massive concurrency, among others.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
The SwiftStack software architecture separates access, storage and management, where each function can be run together or on separate hardware. Unlike storage hardware with the mix of bandwidth and capacity being fixed to the ports and drives within, SwiftStack makes it easy to scale the access tier for bandwidth independently from capacity in the storage tier by simply adding server nodes on the fly. On the storage side, capacity in public cloud buckets scales and is managed in the same single namespace.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Objectively, use of capacity in public cloud providers like Amazon Web Services and Google Cloud Platform is still “early days” for many users. Customers in media however are on the leading edge of adoption, not only for hybrid cloud extending their on-premises environment to a public cloud, but also using a second source strategy across two public clouds. Two years ago it was less than 10%, today it is approaching 40%, and by 2020 it looks like the 80/20 rule will likely apply. Users actually do not care much how their data is stored, as long as their user experience is as good or better than it was before, and public clouds are great at delivering content to users.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Arguably, larger assets produced by a growing number of cameras and computers have driven the need to store those assets differently than in the past. A petabyte is the new terabyte in media storage. Banks have many IT admins, where media shops have few. SwiftStack has the same consumption experience as public cloud, which is very different than on-premises solutions of the past. Licensing is based on the amount of data managed, not the total capacity deployed, so you pay-as-you-grow. If you save four replicas or use erasure coding for 1.5X overhead, the price is the same.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might end-users notice when connecting on these different platforms?
The great thing about cloud storage, whether it is on-premises or residing with your favorite IaaS providers like AWS and Google, is that the interface is HTTP. In other words, every smartphone, tablet, Chromebook and computer has an identical user experience. For classic applications on systems that do not support AWS S3 as an interface, users see the storage as a mount point or folder in their application, via either NFS or SMB. The best part is that it is a single namespace where data can come in as file, get transformed via object and get read either way, so the user experience does not need to change even though the data is stored in the most modern way.
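For readers who want to see what “the interface is HTTP” looks like in practice, here is a minimal Python sketch using the generic boto3 S3 client. The endpoint, bucket and credentials are hypothetical, and this is standard S3-compatible usage rather than anything vendor-specific.

import boto3

# Hypothetical endpoint and bucket; any S3-compatible object store,
# on-premises or public cloud, is addressed the same way over HTTP.
s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Upload a mezzanine file as an object...
s3.upload_file("promo_v03.mov", "dailies", "promo/promo_v03.mov")

# ...and read it back from any machine with HTTP access to the namespace.
s3.download_file("dailies", "promo/promo_v03.mov", "promo_v03_copy.mov")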

Dell EMC’s Tom Burns
What kind of storage do you offer, and who is the main user of that storage?
At Dell EMC, we created two storage platforms for the media and entertainment industry: the Isilon scale-out NAS All-Flash, hybrid and archive platform, which consolidates and simplifies file-based workflows, and Dell EMC Elastic Cloud Storage (ECS), a scalable enterprise-grade private cloud solution that provides extremely high levels of storage efficiency, resiliency and simplicity, designed for both traditional and next-generation workloads.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
In the media industry, change is inevitable. That’s why every Isilon system is built to rapidly and simply adapt by allowing the storage system to scale performance and capacity together, or independently, as more space or processing power is required. This allows you to scale your storage easily as your business needs dictate.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Over the past five years, Dell EMC media and entertainment customers have added more than 1.5 exabytes of Isilon and ECS data storage to simplify and accelerate their workflows.

Isilon’s cloud tiering software, CloudPools, provides policy-based automated tiering that lets you seamlessly integrate with cloud solutions as an additional storage tier for the Isilon cluster at your data center. This allows you to address rapid data growth and optimize data center storage resources by using the cloud as a highly economical storage tier with massive storage capacity.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
As technologies that enhance the viewing experience continue to emerge, including higher frame rates and resolutions, uncompressed 4K, UHD, high dynamic range (HDR) and wide color gamut (WCG), underlying storage infrastructures must effectively scale to keep up with expanding performance requirements.

Dell EMC recently launched the sixth generation of the Isilon platform, including our all-flash (F800), which brings the simplicity and scalability of NAS to uncompressed 4K workflows — something that up until now required expensive silos of storage or complex and inefficient push-pull workflows.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc)? And what differences might end-users notice when connecting on these different platforms?
With Dell EMC Isilon, you can streamline your storage infrastructure by consolidating file-based workflows and media assets, eliminating silos of storage. Isilon scale-out NAS includes integrated support for a wide range of industry-standard protocols, allowing the major operating systems to connect using the most suitable protocol for optimum performance and feature support, including IPv4 and IPv6, NFS, SMB, HTTP, FTP, OpenStack Swift-based object access for your cloud initiatives and native Hadoop Distributed File System (HDFS).

The ECS software-defined cloud storage platform provides the ability to store, access, and manipulate unstructured data and is compatible with existing Amazon S3, OpenStack Swift APIs, EMC CAS and EMC Atmos APIs.

EditShare’s Lee Griffin
What kind of storage do you offer, and who is the main user of that storage?
Our storage platforms are tailored for collaborative media workflows and post production. They combine the advanced EFS (that’s EditShare File System, for short) distributed file system with intelligent load balancing in a scalable, fault-tolerant architecture that offers cost-effective connectivity. Within our shared storage platforms, we also have a unique take on current cloud workflows: because the current security and reliability of cloud-based technology prohibit full migration to cloud storage for production, EditShare AirFlow uses EFS on-premise storage to provide secure access to media from anywhere in the world with a basic Internet connection. Our main users are creative post houses, broadcasters and large corporate companies.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Recently, we upgraded all our platforms to EFS and introduced two new single-node platforms, the EFS 200 and 300. These single-node platforms allow users to grow their storage while keeping a single namespace, which eliminates managing multiple storage volumes. It also lets them plan for the future: when their facility requires more storage and bandwidth, they can simply add another node.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
No production is in one location, so the ability to move media securely and back it up is still a high priority for our clients. From our Flow media asset management, via our automation module, we offer clients the option to back up their valuable content to places like Amazon S3 servers.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
We have many clients working with UHD content who are supplying programming to broadcasters, film distributors and online subscription media providers. Our solutions are designed to work effortlessly with high-data-rate content, with bandwidth that expands as more EFS nodes are added to the intelligent storage pool. So our system is ready and working now for 4K content and is future-proof for even higher data rates.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might end-users notice when connecting on these different platforms?
EditShare supplies native EFS client drivers for all three platforms, allowing clients to pick and choose which platform they want to work on. Whether it is Autodesk Flame for VFX, Resolve for grading or our own Lightworks for editing on Linux, we don’t mind. In fact, EFS offers a considerable bandwidth improvement when using our EFS drivers over the existing AFP and SMB protocols. Improved bandwidth and speed on all three platforms makes for happy clients!

And there are no differences when clients connect. We work with all three platforms the same way, offering a unified workflow to all creative machines, whether on Mac, Windows or Linux.

Scale Logic’s Bob Herzan
What kind of storage do you offer, and who is the main user of that storage?
Scale Logic has developed an ecosystem (Genesis Platform) that includes servers, networking, metadata controllers, single and dual-controller RAID products and purpose-built appliances.

We have three different file systems that allow us to use the storage mentioned above to build SAN, NAS, scale-out NAS, object storage and gateways for private and public cloud. We use a combination of disk, tape and Flash technology to build our tiers of storage that allows us to manage media content efficiently with the ability to scale seamlessly as our customers’ requirements change over time.

We work with customers that range from small to enterprise and everything in between. We have a global customer base that includes broadcasters, post production, VFX, corporate, sports and house of worship.

In addition to the Genesis Platform we have also certified three other tier 1 storage vendors to work under our HyperMDC SAN and scale-out NAS metadata controller (HPE, HDS and NetApp). These partnerships complete our ability to consult with any type of customer looking to deploy a media-centric workflow.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
Great question, and it’s actually built into the name and culture of our company. When we bring a solution to market, it has to scale seamlessly, and it needs to be logical when taking the customer’s environment into consideration. We focus on being able to start small but scale any system into a high-availability solution with limited to no downtime. Our solutions can scale independently if clients are looking to add capacity, performance or redundancy.

For example, a customer looking to move to 4K uncompressed workflows could add a Genesis Unlimited as a new workspace focused on the 4K workflow, keeping all existing infrastructure in place alongside it, avoiding major adjustments to their facility’s workflow. As more and more projects move to 4K, the Unlimited can scale capacity, performance and the needed HA requirements with zero downtime.

Customers can then start to migrate their content from their legacy storage over to Unlimited and then repurpose their legacy storage onto the HyperFS file system as second-tier storage. Finally, once we have moved the legacy storage onto the new file system, we are also more than happy to bring the legacy storage and networking hardware under our global support agreements.

How many of the people buying your solutions are using them with another cloud-based product (i.e. Microsoft Azure)?
Cloud adoption continues to ramp up in our industry, and we have many customers using cloud solutions for various aspects of their workflow. As it pertains to content creation, manipulation and long-term archive, we have not seen much adoption within our customer base. The economics just do not support the level of performance or capacity our clients demand.

However, private cloud or cloud-like configurations are becoming more mainstream for our larger customers. Working with on-premise storage while having DR (disaster recovery) replication offsite continues to be the best solution at this point for most of our clients.

How does your system handle UHD, 4K and other higher-than-HD resolutions?
Our solutions are built not only for the current resolutions but completely scalable to go beyond them. Many of our HD customers are now putting in UHD and 4K workspaces on the same equipment we installed three years ago. In addition to 4K we have been working with several companies in Asia that have been using our HyperFS file system and Genesis HyperMDC to build 8K workflows for the Olympics.

We have a number of solutions designed to meet our customers’ requirements. Some are built with spinning disk, others with all-flash, and still others take a hybrid approach that seamlessly combines the technologies.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might end-users notice when connecting on these different platforms?
All of our solutions are designed to support Windows, Linux and Mac OS. However, how they support the various operating systems depends on the protocol (block or file) we are designing for the facility. If we are building a SAN that is strictly block-level access (8/16/32Gbps Fibre Channel or 1/10/25/40/100Gbps iSCSI), we would use our HyperFS file system and universal client drivers across all operating systems. If our clients are also looking for network protocols in addition to the block-level clients, we can support SMB and NFS while allowing access to the same folders and files over block and file at the same time.

For customers that are not looking for block-level access, we would focus our design work around our Genesis NX or ZX product line. Both of these solutions are based on a NAS operating system and simply present themselves with the appropriate protocol over 1/10/25/40 or 100Gb. The Genesis ZX solution is a software-defined clustered NAS with enterprise feature sets such as unlimited snapshots, metro clustering and thin provisioning, and it will scale to over 5 petabytes.

Sonnet Technologies’ Greg LaPorte
What kind of storage do you offer, and who is the main user of that storage?
We offer a portable, bus-powered Thunderbolt 3 SSD storage device that fits in your hand. Primary users of this product include video editors and DITs who need a “scratch drive” fast enough to support editing 4K video at 60fps while on location or traveling.

How are you making sure your products are scalable so people can grow either their storage or bandwidth needs (or both)?
The Fusion Thunderbolt 3 PCIe Flash Drive is currently available with 1TB capacity. With data transfer of up to 2,600 MB/s supported, most users will not run out of bandwidth when using this device.

What platforms do your systems connect to (Mac OS X, Windows, Linux, etc.)? And what differences might end-users notice when connecting on these different platforms?
Computers with Thunderbolt 3 ports running macOS Sierra or High Sierra, or Windows 10, are supported. The drive may be formatted to suit the user’s needs, with either an OS-specific format such as HFS+ or a cross-platform format such as exFAT.

Post Supervisor: Planning an approach to storage solutions

By Lance Holte

Like virtually everything in post production, storage is an ever-changing technology. Camera resolutions and media bitrates are constantly growing, requiring higher storage bitrates and capacities. Productions are increasingly becoming more mobile, demanding storage solutions that can live in an equally mobile environment. Yesterday’s 4K cameras are being replaced by 8K cameras, and the trend does not look to be slowing down.

Yet, at the same time, productions still vary greatly in size, budget, workflow and schedule, which has necessitated more storage options for post production every year. As a post production supervisor, when deciding on a storage solution for a project or set of projects, I always try to have answers to a number of workflow questions.

Let’s start at the beginning with production questions.

What type of video compression is production planning on recording?
Obviously, more storage will be required if the project is recording to Arriraw rather than H.264.

What camera resolution and frame rate?
Once you know the bitrate from the video compression specs, you can calculate the data size on a per-hour basis. If you don’t feel like sitting down with a calculator or spreadsheet for a few minutes, there are numerous online data size calculators, but I particularly like AJA’s DataCalc application, which has tons of presets for cameras and video and audio formats.
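If you would rather script it than reach for a calculator, the arithmetic is straightforward. The sketch below uses a placeholder bitrate, so substitute the figure from your camera or codec spec sheet.

# Convert a recording bitrate into storage per hour.
# The bitrate here is a placeholder, not a specific camera or codec figure.
bitrate_mbps = 1000               # plug in the spec-sheet value for your codec
seconds_per_hour = 3600

gb_per_hour = bitrate_mbps / 8 / 1000 * seconds_per_hour
print(round(gb_per_hour), "GB per camera-hour")   # 450GB/hour at 1,000Mbps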

How many cameras and how many hours per day is each camera likely to be recording?
Data size per hour, multiplied by hours per day, multiplied by shoot days, multiplied by number of cameras gives a total estimate of the storage required for the shoot. I usually add 10-20% to this estimate to be safe.
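Continuing the sketch above, the full shoot estimate with a safety margin looks like this; all inputs are illustrative placeholders, not recommendations for any particular production.

# Total shoot estimate: per-hour size x hours/day x shoot days x cameras,
# plus a cushion in the 10-20% range. All inputs are placeholders.
gb_per_hour = 450          # from the bitrate calculation above
hours_per_day = 4          # recorded hours per camera per day
shoot_days = 20
cameras = 2
safety_margin = 0.15       # the 10-20% cushion mentioned above

raw_total_tb = gb_per_hour * hours_per_day * shoot_days * cameras / 1000
estimate_tb = raw_total_tb * (1 + safety_margin)
print(round(raw_total_tb, 1), "TB raw,", round(estimate_tb, 1), "TB with margin")
# prints: 72.0 TB raw, 82.8 TB with margin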

Let’s move on to post questions…

Is it an online/offline workflow?
The simplicity of editing online is awesome, and I’m holding out for the day when all projects can be edited with online media. In the meantime, most larger projects require online/offline editorial, so keep in mind the extra storage space for offline editorial proxies. The upside is that raw camera files can be stored on slower, more affordable (even archival) storage through editorial until the online process begins.

On numerous shows I’ve elected to keep the raw camera files on portable external RAID arrays (cloned and stored in different locations for safety) until picture lock. G-Tech, LaCie, OWC and Western Digital all make 48+ TB external arrays on which I’ve stored raw media during editorial. When you start the online process, copy the necessary media over to your faster online or grading/finishing storage, and finish the project with only the raw files that are used in the locked cut.

How many editorial staff need to be working on the project simultaneously?
On smaller projects that only require an editorial staff of two or three people who need to access the media at the same time, you may be able to get away with the editors and assistants network sharing a storage array and working in different projects. I’ve done numerous smaller projects in which a couple of editors connected to an external RAID (I’ve had great success with Proavio and QNAP arrays) that is plugged into one workstation and shared over the network. Of course, the network must have enough bandwidth for both machines to play back the media from the storage array, but that’s the case for any shared storage system.

For larger projects that employ five, 10 or more editors and staff, storage that is designed for team sharing is almost a certain requirement. Avid has opened up integrated shared storage to outside storage vendors over the past few years, but Avid’s Nexis solution remains an excellent option. Aside from providing a solid solution for Media Composer and Symphony, Nexis can also be used with basically any other NLE, ranging from Adobe Premiere Pro to Blackmagic DaVinci Resolve to Final Cut Pro and others. The project-sharing abilities within the NLEs vary depending on the application, but the clear trend is toward multiple editors and post production personnel working simultaneously in the same project.

Does editorial need to be mobile?
Increasingly, editorial tends to begin near the start of physical production, which can require editors to be on or near set. This is a pretty simple question to answer, but it is worth keeping in mind so that a shoot doesn’t end up without enough storage in a place where additional storage isn’t easily available, or where the power requirements can’t be met. It’s also a good moment to plan simple things like the number of shuttle or transfer drives that may be needed to ship media back to home base.

Does the project need to be compartmentalized?
For example, should proxy media be on a separate volume or workspace from the raw media/VFX/music/etc.? Compartmentalization is good. It’s safe. Accidents happen, and it’s a pain if someone accidentally deletes everything on the VFX volume or workspace on the editorial storage array. But it can be catastrophic if everything is stored in the same place and they delete all the VFX, graphics, audio, proxy media, raw media, projects and exports.

Split up the project onto separate volumes, and only give write access to the necessary parties. The bigger the project and team, the bigger the risk for accidents, so err on the side of safety when planning storage organization.
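As a purely illustrative way to write such a plan down before handing it to whoever administers the storage, here is a small sketch; the workspace and group names are hypothetical and will vary by facility.

# Hypothetical workspace plan: separate volumes, with write access granted
# only where each department needs it. Names are purely illustrative.
workspaces = {
    "raw_media":   {"write": ["data_management"],   "read": ["editorial", "vfx", "color"]},
    "proxy_media": {"write": ["assistant_editors"], "read": ["editorial"]},
    "vfx":         {"write": ["vfx"],               "read": ["editorial", "color"]},
    "audio":       {"write": ["sound"],             "read": ["editorial"]},
    "exports":     {"write": ["editorial"],         "read": ["production"]},
}

for name, acl in workspaces.items():
    print(f"{name}: write={acl['write']} read={acl['read']}")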

Finally, we move to finishing, delivery and archive questions…

Will the project be colored and mixed in-house? What are the delivery requirements? Resolution? Delivery format? Media and other files?
Color grading and finishing often require the fastest storage speeds of the whole pipeline. By this point, the project should be conformed back to the camera media, and the colorist is often working with high bitrate, high-resolution raw media or DPX sequences, EXRs or other heavy file types. (Of course, there are as many workflows as there are projects, many of which can be very light, but let’s consider the trend toward 4K-plus and the fact that raw media generally isn’t getting lighter.) On the bright side, while grading and finishing arrays need to be fast, they don’t need to be huge, since they won’t house all the raw media or editorial media — only what is used in the final cut.

I’m a fan of using an attached SAS or Thunderbolt array, which is capable of providing high bandwidth to one or two workstations. Anything over 20TB shouldn’t be necessary, since the media will be removed and archived as soon as the project is complete, ready for the next project. Arrays like the Areca ARC-5028T2 or Proavio EB800MS give read speeds of 2,000+ MB/s, which can play back 4K DPXs in real time.
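A rough sanity check of that figure: a 10-bit RGB DPX frame packs each pixel into roughly 4 bytes, so the arithmetic below (illustrative, ignoring per-frame headers) shows why 2,000+ MB/s leaves comfortable headroom for a 4K 24fps stream.

# Approximate bandwidth needed for real-time 4K DPX playback.
# 10-bit RGB DPX stores each pixel in about 4 bytes.
width, height = 4096, 2160
bytes_per_pixel = 4
fps = 24

frame_mb = width * height * bytes_per_pixel / 1e6     # about 35MB per frame
required_mb_per_sec = frame_mb * fps                   # about 850MB/s
print(round(frame_mb, 1), "MB/frame ->", round(required_mb_per_sec), "MB/s")
# A 2,000+ MB/s array leaves plenty of headroom for this stream.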

How should the project be archived?
There are a few follow-up questions to this one, like: Will the project need to be accessed with short notice in the future? LTO is a great long-term archival solution, but pulling large amounts of media off LTO tape isn’t exactly quick. For projects that I suspect will be reopened in the near future, I try to keep an external hard drive or RAID with the necessary media onsite. Sometimes it isn’t possible to keep all of the raw media onsite and quickly accessible, so keeping the editorial media and projects onsite is a good compromise. Offsite, in a controlled, safe, secure location, LTO-6 tapes house a copy of every file used on the project.

Post production technology changes with the blink of an eye, and storage is no exception. Once these questions have been answered, if you are spending any serious amount of money, get an opinion from someone who is intimately familiar with the cutting edge of post production storage. Emphasis on the “post production” part of that sentence, because video I/O is not the same as, say, a bank with the same storage size requirements. The more money devoted to your storage solutions, the more opinions you should seek. Not all storage is created equal, so be 100% positive that the storage you select is optimal for the project’s particular workflow and technical requirements.

There is more than one good storage solution for any workflow, but the first step is always answering as many storage- and workflow-related questions as possible to start taking steps down the right path. Storage decisions are perhaps one of the most complex technical parts of the post process, but like the rest of filmmaking, an exhaustive, thoughtful, and collaborative approach will almost always point in the right direction.

Main Image: G-Tech, QNAP, Avid and Western Digital all make a variety of storage solutions for large and small-scale post production workflows.


Lance Holte is an LA-based post production supervisor and producer. He has spoken and taught at such events as NAB, SMPTE, SIGGRAPH and Createasphere. You can email him at lance@lanceholte.com.