
Cloud workflow vets launch M&E cloud migration company

A team of cloud workflow experts has launched Fusion Workflows, a new media-workflow design company that helps M&E companies migrate their infrastructures to scalable cloud platforms.

The company says that moving to cloud-based workflows is an opportunity to address the inefficiencies of current systems. However, many companies struggle to understand what a cloud migration involves, what it costs and how it will affect the demands on their workforce, processes and tools. Fusion Workflows aims to remedy this by providing clients with a customized “Workflow Migration Guide,” which acts as a design blueprint for rebuilding their operations on scalable cloud infrastructure and software-defined processes.

Mark Turner

Fusion takes a holistic approach, working with clients from inception through deployment. It begins with a comprehensive analysis of workflow processes and customizes business operations during the migration. The work continues post-migration with training and onboarding, new software, security and documentation. This one-stop-shop approach is designed to keep internal teams and systems working in sync and without interruption.

“The COVID crisis has forced media companies to create temporary hacks and interim cloud workflows but also exposed the need for them to develop a long-term cloud migration vision,” says Mark Turner, Fusion Workflows’ managing partner. “Every company now needs a plan to effectively operate their business without ties to physical locations, on-premises storage or hardware processing. At Fusion we look forward to helping companies design their own cloud migrations.”

Fusion’s team comprises domain experts from the US and Europe, who have designed and implemented cloud-based workflows and created first-in-market re-engineering standards. Fusion’s team has worked across all media industries including major movie and TV production, visual FX, animation, sports, live broadcast, digital cinema, music and OTT streaming.

In addition to Turner, who co-authored the 2019 MovieLabs paper “The Evolution of Media Creation: A 10-Year Vision for the Future of Media Production, Post and Creative Technologies,” the team includes ITV vet Emma Clifford, OTT engineer Andrew Ioannou, Lionsgate vet Thomas Hughes, former chief digital strategy officer at Sony Pictures Mitch Singer, recent Technicolor data systems engineer Daryll Strauss, former Sony Pictures CTO Spencer Stephens, Autodesk vet Chris Vienneau and ETC’s Erik Weaver.

Facilis updates storage, MAM offerings for remote workflows

Targeting those who are working remotely, Facilis has released v.8 of its Hub shared storage system, v.3.5 of its FastTracker media asset management software as well as Object Cloud updates. Facilis has made these available immediately and free of charge for any eligible existing user with a current software support contract.

Hub v.8 includes:
• Bandwidth Priority delivers full throughput to all workstations during normal operation but prioritizes selected workstations to maintain greater throughput when the server enters a high-load condition. The priority setting is dynamic and takes effect on client performance within seconds of being applied.
• SSD and HDD tiering offers dedicated speed for projects needing SSD-level performance, while maintaining a perpetual HDD-based mirror.
• Software-defined Multi-disk Parity can be enabled for up to four drive failures per drive group on a per-virtual-volume basis. This allows owners of aging systems to better protect their assets from data loss due to drive failure.

Facilis FastTracker MAM software features file movement profiles, duplicate file reporting and a secure directory browse interface. FastTracker can now flush and pre-fetch files and folders from cloud and LTO locations through the Object Cloud feature, while reporting the status of archived media. With proxy encoding of indexed assets to an Object Cloud location, FastTracker offers compressed versions of facility media to editors working in the field.

According to the company’s COO, Shane Rodbourn, this release represents over two years of development.

HPA Tech Retreat: Cloud workflows in the desert

By Tom Coughlin

At the 2020 HPA Retreat, attendees witnessed an active production of the short The Lost Lederhosen. This film used the Unreal gaming engine to provide impressive graphical details, along with several cameras and an ACES workflow, with much production work done in the cloud. Many of the companies and studios participating in the retreat played a role in the film’s production, and the shooting and post were part of the ongoing presentations and panels on the first official day of the conference. Tuesday’s sessions ended with Joachim Zell from Efilm and Josh Pines from Technicolor showing the completed video.

Shooting The Lost Lederhosen – director Steve Shaw is at the far right.

As you can imagine, several digital storage products were needed for The Lost Lederhosen. In checking out the production rig in the back of the conference room, I saw some G-Tech modular storage units and was told that there was an Isilon storage system on the other side of the wall — a giveaway because of the noise from the fans in the system. In one of the sessions on that first day, it was reported that 5TB of total footage was shot, with 500GB left after conforming using Avid Media Composer with AAF. Editing was done in the cloud with 30TB of Avid Nexis storage online. During dailies, the AWS CLI was used to push files to S3 as a common storage location. Pixmover from Pixspan was used to move data to and from LA, along with AWS S3 storage in the San Francisco Bay Area.
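
For readers who have not scripted that step themselves, here is a minimal sketch of what pushing a day's dailies to a shared S3 location can look like. It is a generic illustration in Python using boto3; the bucket, prefix and local path are invented placeholders, and the production itself reportedly used the AWS CLI rather than a custom script.

```python
# Minimal sketch: push a folder of dailies to a shared S3 location with boto3.
# The bucket, prefix and local path below are hypothetical placeholders.
import os
import boto3

s3 = boto3.client("s3")  # credentials are read from the environment or ~/.aws

def push_dailies(local_dir: str, bucket: str, prefix: str) -> None:
    """Upload every file under local_dir to s3://bucket/prefix/, preserving relative paths."""
    for root, _, files in os.walk(local_dir):
        for name in files:
            path = os.path.join(root, name)
            key = f"{prefix}/{os.path.relpath(path, local_dir)}"
            s3.upload_file(path, bucket, key)
            print(f"uploaded {path} -> s3://{bucket}/{key}")

if __name__ == "__main__":
    push_dailies("/mnt/dailies/day01", "example-dailies-bucket", "lost-lederhosen/day01")
```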

Colorfront supported the cloud-based live production of the HPA video and demonstrated its 2020 Express Dailies, which was used for all the dailies and deliverables, as well as Transkoder, which was used for all the VFX pulls. Frame.io was used to move content from the cameras to the cloud. A Mac Pro was feeding dual Apple 32-inch Retina Pro XDR displays showing 6K HDR content. Colorfront was also displaying Transkoder 2020 running on a Supermicro workstation with four Nvidia GeForce RTX 2080 Ti GPUs and an AJA Kona 5 video card, outputting to an 85-inch Sony Z9G HDR monitor and an AJA HDR Image Analyzer 12G for video analytic monitoring.

Metadata for video content was an important element in the HPA presentations, including the ASC MHL (media hash list), which hashes files and folders in a standardized way and records essential file metadata in a human-readable XML format. The ASC MHL is used from data capture and offloading through backup and archiving, and it is an important element in restoring content. The ASC MHL is available on GitHub (https://github.com/ascmitc/mhl) and is still a work in progress.
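
To make the idea concrete, here is a simplified, hypothetical sketch of what a media hash list does: it walks a folder, hashes every file and records each hash alongside basic file metadata in a human-readable XML manifest that can be re-verified later. This is not the actual ASC MHL schema or tooling (see the GitHub repository above); it only illustrates the underlying concept.

```python
# Illustrative sketch of the media-hash-list idea: hash every file under a folder
# and record the hash plus basic metadata in human-readable XML. This is NOT the
# real ASC MHL schema; see https://github.com/ascmitc/mhl for the actual spec.
import hashlib
import os
import xml.etree.ElementTree as ET

def file_hash(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in 1MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(media_dir: str, out_path: str) -> None:
    """Walk media_dir and write an XML manifest of relative path, size and hash."""
    root = ET.Element("hashlist")
    for dirpath, _, files in os.walk(media_dir):
        for name in sorted(files):
            full = os.path.join(dirpath, name)
            entry = ET.SubElement(root, "hash")
            ET.SubElement(entry, "path").text = os.path.relpath(full, media_dir)
            ET.SubElement(entry, "size").text = str(os.path.getsize(full))
            ET.SubElement(entry, "sha256").text = file_hash(full)
    ET.ElementTree(root).write(out_path, encoding="utf-8", xml_declaration=True)

if __name__ == "__main__":
    write_manifest("/mnt/camera_cards/A001", "A001_hashlist.xml")
```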

The following day, Tech Retreat main conference producer Mark Schubin said that film hasn’t died yet and that Kodak had received orders from Disney, NBCUniversal, Paramount, Sony and Warner Bros. for motion picture film stock. He talked about what might be the world’s smallest camera, a small endoscopy image chip with 200×200 resolution. And he mentioned Microsoft’s Project Silica proof of concept — a 7.5cm x 7.5cm glass plate storing the 75.6GB Superman movie — as a possible long-term storage medium.

MovieLabs
The MovieLabs white paper released in August 2019, “The Evolution of Media Creation,” was referenced in several talks during the HPA retreat. The paper, created in cooperation with the major film studios, suggests a path to the future of moviemaking, and that path is in the cloud. You can read it here: https://movielabs.com/production-technology

During the SMPTE 2110 IP update, it was said that most new video trucks for the UK’s NEP are built for 2110 IP compliance. There are a total of 12 IP-enabled trucks, six IP control rooms and multiple IP flypacks (backpack IP video gear). In a panel organized by the Digital Production Partnership, the DPP’s Mark Harrison gave a presentation that included information on on-site and cloud storage for M&E applications. He spoke about the DPP’s 2020 report and its 10 case studies of M&E companies that have adopted cloud-led production for different reasons. We will look at the digital storage needs for three of these case studies.

It was reported that COPA90 is doing high-volume global content management with a cloud production hub and AI, using the Veritone Digital Media Hub and IBM Cloud Storage.

France TV is doing fast turnaround of high-end drama using cloud-based metadata enrichment with AWS, Azure, a private cloud and local storage before going into Avid Nexis storage, Avid Interplay and Media Composer.

UK’s Jellyfish Pictures is reportedly doing secure distributed high-volume virtualized production using Azure public cloud and a private cloud with PixStor storage.

Distributed Content Delivery
Eluvio’s Michelle Munson gave an update on the company’s distributed content delivery service, and during a demo at the company’s booth, she told me that Eluvio’s approach keeps the master copy for distribution in cold storage, with the published serviceable content inherently streamable. By reusing distributed parts of content within the network, storage requirements shrink considerably. In effect, the fabric replaces a hot storage tier, reducing higher-performance storage and network bandwidth requirements.

In her presentation, Munson said that Eluvio eliminates the need for cloud microservices for content distribution. The blockchain-based network system provides an inherent security model that makes it possible to serve audiences directly over the public internet, enabling a content fabric. This is not a cloud or a CDN, but rather a data distribution and storage protocol. Rendering is done at the consumer endpoint, allowing consumers to play content just in time with low latency, and monetization happens through secure transactions. MGM is deploying Eluvio’s technology for worldwide content distribution, and some other major media players are also working with the technology.

Renard Jenkins

There are five key principles in the Eluvio content fabric. First, there is no movement of the master copy; a mezzanine copy is used for all servicing. Second, a file-based interface is used for upload and download with underlying objects. Third, streaming and servicing are accomplished from the source in a JIT manner. Fourth, it uses a trustless encryption model over open networks, and fifth, access control and rights management are built in.

Best Practices for Cloud-Based Workflows
MediAnswers’ Chris Lennon and PBS’ Renard Jenkins (who subsequently started work as VP, content transmission, at WarnerMedia) spoke about the right way to do cloud-based workflows, which included local as well as cloud content copies. They gave three principles for survival. First, IT is not IP, and a network should be designed around media use and minimizing packet loss. Second, build or find cloud-native solutions rather than “lift and shift.” Third, linear workflows lead to nonlinear problems.

Universal and the Cloud
Universal’s Annie Chang spoke about tools for the next generation of production, including the use of cloud-based tools such as temporary production storage and an active archive for production assets. She went on to detail future cloud workflows wherein content goes from the camera directly to the cloud (or, if on film, from a digital intermediate post house to the cloud). Editing, dailies distribution and EDL are all done in the cloud, as is final archiving.

Chang said that the move to a mostly cloud-based workflow is already starting at Universal. She reported that DreamWorks Animation (DWA) has built a cloud-native platform that creates workspaces for its artists. Assets are related to each other, and workflows can be kicked off through microservices. She wondered if Universal could repurpose the DWA platform for live-action, VFX assets and workflows.

Universal

Chang discussed an experiment wherein Universal took one shot from Fast & Furious Presents: Hobbs & Shaw (including reference photos, LIDAR scans and camera raw) and demonstrated a VFX pull on premises at DWA while also testing in a public cloud. When Universal ran the content from the cloud and showed it to Universal VFX execs and the VFX producer from Hobbs & Shaw, Chang was told that this was something they had wanted for a decade. Universal is developing the platform this year and plans to test it on a full production in 2021. The company has 10 concurrent projects and is coordinating with multiple industry efforts, including ACES, USC ETC and MovieLabs.

ACES
There was much discussion of the next developments for ACES (Academy Color Encoding System), particularly the implementation of ACES 1.2 and the development of ACES 2.0. A panel at the retreat suggested that practical problems with image matching in the current version of ACES could be solved by using AMF (ACES Metadata File). But some image-matching problems are not ACES-related; they relate instead to the source of the image and what sort of format is used for comparison. ACES 2.0 development is underway and plans to address these and other issues with the current version of ACES.

Storage
The digital storage exhibitors at the HPA Retreat included Cloudian (local object storage), which demonstrated integration with AWS, Azure, Google and other cloud storage services. Quantum had an exhibit that focused on its media and storage solutions, such as the StorNext Workflow Storage Platform, F-Series NVMe storage, Xcellis high-performance workflow storage appliances and its object storage and tape archive solutions. (Note that Quantum recently acquired Western Digital’s ActiveScale object storage.)

Racktop was advertising its Brickstor all-flash or hybrid HDD/SSD CyberConverged data storage offering, which supports FIPS 140-2 and AES-256 for encryption and compliance. Rohde & Schwarz was demoing IMF-based workflows with its Spycer Node media storage.

Rohde & Schwarz

Scale Logic featured its Atavium data management and orchestration solution. According to the product literature, data entering Atavium is identified, tagged and classified and can be searched via metadata or tags whether the data is on premises or in the cloud. Tasks can be automated using a combination of metadata and tags, while a set of APIs, the scheduler and application integration determine the placement of data to reflect the needs of the workflow. Local storage includes nearline HDDs as well as NVMe flash, and DRAM is used for read-ahead cache. The system works with Spectra Logic’s Black Pearl and integrates with asset management systems.

Seagate Technology was showing storage products, including its Lyve Drive Shuttle for physical data delivery using e-ink and protective cases for shipping storage devices. The company had flyers out on its Seagate Exos modular storage for capacity and the Seagate Nytro modular storage for performance. Pixit Media was partnering with Seagate on its software-defined storage solution.

StorageDNA was showing its analytics-driven data management platform (DNAfabric) that provides data visibility services, including storage capacity and cost as well as data mobility services. Tiger Technology was showing its Tiger Bridge and shared an exhibit space with Nexsan NAS products. Western Digital was showing various G-Tech products, including its G-Speed Shuttle storage systems as well as desktop and mobile HDD and SSD storage devices.


Tom Coughlin is a digital storage analyst and business and technology consultant. His firm, Coughlin Associates, consults and publishes books and market and technology reports (such as the annual Digital Storage in Media and Entertainment Report). He is currently working on his 2020 Digital Storage in Media and Entertainment Survey; feel free to participate: https://www.surveymonkey.com/r/MWXL22N

Video Coverage: HPA Tech Retreat’s making of The Lost Lederhosen

By Randi Altman

At the HPA Tech Retreat in Rancho Mirage, California, the Supersession was a little different this year. Under the leadership of Joachim (JZ) Zell — who you might know from his day job as VP of technology at EFILM — the Supersession focused on the making of the short film, The Lost Lederhosen, in “near realtime,” in the desert. And postPerspective was there, camera in hand, to interview a few of the folks involved.

Watch our video coverage here.

While production for the film began a month before the Retreat — with Steve Shaw, ASC, directing and DP Roy H. Wagner Jr., ASC, lending his cinematography talents — some scenes were shot the morning of the session with data transfer taking place during lunch and post production in the afternoon. Peter Moss, ASC, and Sam Nicholson, ASC, also provided their time and expertise. After an active day of production, cloud-based post and extreme collaboration, the Supersession ended with the first-ever screening of The Lost Lederhosen, the story of Helga and her friend Hans making their way to Los Angeles, Zell and the HBA (Hollywood Beer Alliance). Check out HPA’s trailer here.

From acquisition to post (and with the use of multiple camera formats, frame rates and lenses), the film’s crew was made up of volunteers, including creatives and technologists from companies such as AWS, Colorfront, Frame.io, Avid, Blackmagic, Red, Panavision, Zeiss, FilmLight, SGO, Stargate, Unreal Engine, Sohonet and many more. One of the film’s goals was to use the cloud as much as possible in order to test out that particular workflow. While there were some minor hiccups along the way, the film got made — at the HPA Tech Retreat — and these industry pros got smarter about working in the cloud, something that will be increasingly employed going forward.

While we were only able to chat with a handful of those pros involved, like any movie, the list of credits and thank-yous is too extensive to mention here — there were dozens of individuals and companies who donated their services and time to make this possible.

Watch our video coverage here.

(A big thank you and shout out to Twain Richardson for editing our videos.)

Main Image Caption: AWS’ Jack Wenzinger and EFILM’s Joachim Zell

Frame.io platform is now on the iPad

Frame.io’s review and collaboration platform is now available on the iPad. The company acknowledges there are times when watching dailies, evaluating VFX shots or getting a better sense of composition and color benefits from a larger display than a smartphone can offer. Enter Frame.io for iPad. They also point to the tablet’s high-resolution screen, which allows users to view “true-to-life” color.

New features include a split view, which lets users keep Frame.io in view on one side of the iPad screen while using apps like Final Draft, Slack or FaceTime on the other. There is also the ability to draw detailed annotations with Apple Pencil. Users can fine-tune stills or moving images, create illustrations, or work in Photoshop and import assets into Frame.io right on the iPad.

In addition to this latest product news, Frame.io recently received $50 million in Series C funding led by Insight Partners. The company’s co-founders and brain trust say they will use this money to keep enhancing their product and further embrace cloud-based workflows.

In talking about these types of workflows, Frame.io co-founder Emery Wells says, “It’s not so much about how the Frame.io platform will continue to develop for the cloud, but how the development of the cloud will enable new platform capabilities.” He points to 5G as an example. “We’ve watched many industries go through the cloud adoption curve, and the filmmaking industry is at the precipice of that taking off. We just haven’t been able to get data into the cloud; that’s been the big blocker. Cameras can produce anywhere from 2TB to 5TB of footage per camera, per day — that is big data. Now that gigabit connections are becoming commonplace, we can start moving the data into the cloud.”

Wells and company believe the video creation process is finally adopting the cloud. “The virtualization of post is upon us,” he reports. “There will be intense security requirements, but every signal has shown we’ve turned a corner. We’re right at the start of that adoption curve.”

L-R: Michael Cioni and Emery Wells

He points to the recent hire of post veteran Michael Cioni, who joined the Frame.io team to oversee a new initiative the company is calling “camera-to-cloud.” Wells says the goal is to integrate Frame.io right into the camera, allowing you to shoot from camera into Frame.io — into the cloud. “This transforms the way people will work because we’re already integrated into the creative tools, which means you’re shooting right into the editing tools — Premiere, Final Cut, Resolve — making it truly camera-to-cutting room.”

Cioni agrees, but realizes there is a learning curve. “Looking back at how other industry-changing technologies have evolved into Hollywood mainstream practices, we can predict the biggest challenge will likely be in the behavior changes required to join our plan. Blanketed resistance to change can be one of society’s worst characteristics, and we expect there will be a large amount of anxiety about relying more and more on the cloud and having less and less on-prem infrastructure.”

Cioni says the company has already begun to examine the value of education so they can work with the community to help reduce anxiety by listening to their concerns. “In order to do that, it’s imperative that we are able to engage in healthy conversations about what leaning more on the cloud means to each company, each department and each individual. We agree with the MovieLabs white paper, but the opportunity lies not only within who is willing to invest in cloud infrastructure, but who is willing to work with the community and strategically deploy each new iteration all within the right timing.”

Quick Chat: Frame.io’s new global SVP of innovation, Michael Cioni

By Randi Altman

Production and post specialist Michael Cioni, whom many of you might know from his years at Light Iron and Panavision, has joined Frame.io as global SVP of innovation. He will lead a new LA-based division of Frame.io that is focused on continued investment into cloud-enabled workflows for films and episodics — specifically, automated camera-to-cutting room technology.

Frame.io has been 100 percent cloud-based since the company was formed, according to founder Emery Wells. “We started seeding new workflows around dailies, collaborative review and realtime integration with NLEs for parallel work and approvals. Now, with Michael, we’re building Frame.io for the new frontier of cloud-enabled professional workflows. Frame.io will leverage machine learning and a combination of software and hardware in a way that will truly revolutionize collaboration.”

Quoted in a Frame.io release that went out today, Cioni says, “A robust camera-to-cloud approach means filmmakers will have greater access to their work, greater control of their content, and greater speed with which to make key decisions. Our new roadmap will dramatically reduce the time it takes to get original camera negative into the hands of editors. Directors, cinematographers, post houses, DITs and editors will all be able to work with recorded images in real time, regardless of location.”

We reached out to Cioni with some questions about Frame.io and the cloud.

Why was now the right time for you to move on from Light Iron — which you helped to establish — and Panavision to join Frame.io?
After 10 years at Light Iron and over four at Panavision, I have been very fortunate to spend large portions of my career focused on both post and production. Being at both these groups gave me more access to the unique challenges our industry collaborators face, especially with more productions operating on global schedules. Light Iron and Panavision equipped me with the ideal training to explore something entirely new that couples production and post together in an entirely new way. Frame.io is the right foundation for this change.

What will your day-to-day look like at the company?
I will be based in LA and helping build out Frame.io’s newest division in Los Angeles. I will also be traveling regularly to New York to work directly with the engineers and security teams on our roadmap development. This is great for me because I loved living in New York when we opened up Light Iron NY, but I also love working in LA, where so many post and production infrastructures call home.

Frame.io was founded by post pros. Why is it so important for the company to continue that tradition with your hire?
I find that the key to success in any industry is largely dependent on how deep your knowledge well goes. Even though we in media and entertainment serve the world through creative means, the filmmaking process is inherently complex and inherently technical. It always has been.

The best technologies are the ones that are invisible and let the creative process flow without thought about the technology behind what is happening in your mind. Frame.io CEO Emery Wells and I have a profound respect for post production because we were both entrepreneurs and experts in the post space. Anyone who has built or operated a post facility (big or small) knows that post is a hub linking together nearly all workflow components for both creative and technical team members.

Because post lives at the core of Emery and myself, Frame.io will always be grounded in the professional workflow space, which enables us to better evolve our technology into markets of every type and scale.

Your roadmap seems in line with the MovieLabs white paper on the future of production, which is cloud-based. Can you address that?
MovieLabs is arguably the best representation of a technological roadmap for the media and entertainment industry. I was thrilled to see an early copy because it parallels a similar vision I have been exploring since 2013. I believe MovieLabs paints an accurate picture of the great things we are going to be able to do using cloud and machine learning technology, but it also demonstrates how many challenges there are before we can enjoy all the benefits. Frame.io not only supports the conclusions of the MovieLabs white paper, we have already begun deploying solutions to bring a new virtual creative world to reality.

Main Image: (L-R) Michael Cioni and Emery Wells

Autodesk cloud-enabled tools now work with BeBop post platform

Autodesk has enabled use of its software in the cloud — including 3ds Max, Arnold, Flame and Maya — and BeBop Technology will deploy the tools on its cloud-based post platform. The BeBop platform enables processing-heavy post projects, such as visual effects and editing, to run in the cloud on powerful and highly secure virtualized desktops. Creatives can process, render, manage and deliver media files from anywhere on BeBop using any computer and an Internet connection as slow as 20Mbps.

The ongoing deployment of Autodesk software on the BeBop platform mirrors the ways BeBop and Adobe work closely together to optimize the experience of Adobe Creative Cloud subscribers. Adobe applications have been available natively on BeBop since April 2018.

Autodesk software users will now also gain access to BeBop Rocket Uploader, which enables ingestion of large media files at incredibly high speeds for a predictable monthly fee with no volume limits. Additionally, BeBop Over the Shoulder (OTS) enables secure and affordable remote collaboration, review and approval sessions in real-time. BeBop runs on all of the major public clouds, including Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.

Embracing production in the cloud

By Igor Boshoer

Cloud technology is set to revolutionize film production. That is, if studios can be convinced. But because this century-old industry is reluctant to change, these new technologies and promising innovations are being adopted at a slower pace.

Tried-and-true production methods are steadily becoming outdated. Cloud platforms bring innovation and accessibility to both small and large studios. In the not-so-distant future, what is now merely a competitive edge will become industry-standard practice. But until then, some studios are apprehensive, and the reasons are mostly myths.

The Need for Transition
Core video production applications, computing, storage and other IT services are moving to the cloud at a rapid pace. A variety of industries and businesses — not just film — are being challenged by new customer expectations, which are heavily influenced by consumer applications powered by the cloud.

In visual effects, film and XR, application vendors such as Autodesk, Avere and Aspera are all updating their software to support these cloud workflows. Studios are recognizing that more focus should be placed on creating high-quality content and far less on in-house software development and infrastructure maintenance. But to grow the top line and stand apart from the competition, it’s imperative for our industry to be proactive and re-imagine the workflow. Cloud providers can innovate at a much faster pace than a studio can internally.

In the grand scheme of things, the industry wants to make studio operations more efficient, cost-effective and quantifiable to better serve their customers. And by taking advantage of cloud-based services, studios can increase agility, while decreasing their cost and risk.

Common Misconceptions of the Cloud
Many believe the cloud to be insecure. But there are many very successful and thriving cloud-based startups, even in the finance and healthcare industries. Our industry’s MPAA requirements are far less stringent than healthcare’s HIPAA compliance. On the contrary, cloud providers typically offer far stronger security than a studio’s own internal security measures.

Some studios are reluctant because transferring massive amounts of data into a cloud platform can prove challenging. But there are ways to speed up these transfers, including the use of caching and custom UDP-based transport protocols. While this reluctance is understandable, the challenge is entirely manageable.

Studios also assume that cloud technology is expensive. It can be. However, when you truly break down the costs of maintaining infrastructure — adding internal storage, hardware setup, multi-year equipment leases, not to mention the ongoing support team — the in-house approach, in fact, proves more expensive. While the cloud appears costly, it actually saves money and lets you quantify the cost of production. Moreover, studios can scale resources as production demands fluctuate, instead of relying on the typical static, in-house model.

How the Cloud Better Serves Customers
While some are still apprehensive about cloud-based integration, studios that have shifted their production pipelines to cloud-based platforms — and embraced them — are finding success. The cloud can serve customers in a variety of ways. It can deliver a richer, more consistent and personalized experience for a studio’s content creators, as well as offer a collaborative community.

The latest digital technologies are guaranteed to reshape the economics, production and distribution of the entertainment industry. But to stay on their game and remain competitive, studios must adapt to these new Internet and computing technologies.

If our industry is willing to push itself through these myths and preconceived assumptions, cloud technology can indeed revolutionize film production. When that begins to happen, more and more studios will adopt this competitive edge, and it will make for an exciting shift.


Igor Boshoer is a media technologist with feature film VFX credits, including The Revenant and The Wolf of Wall Street. His experience building studio technology inspired his company, Linc, a studio platform as a service. He also hosts the monthly media technology meetup Filmologic in the Bay Area.

Public, Private, Hybrid Cloud: the basics and benefits

By Alex Grossman

The cloud is everywhere, and media facilities are constantly being inundated with messages about the benefits the cloud offers in every area, from production to delivery. While it is easy to locate information on how the cloud can be used for ingest, storage, post operations, transcoding, rendering, archive and, of course, delivery, many media facilities still have questions about the public, private and hybrid clouds, and how each of these cloud models can relate to their business. The following is a brief guide intended to answer these questions.

Public
Public cloud is the cloud as most people see it: a set of services hosted outside a facility and accessed through the Web, either securely through a gateway appliance or simply through a browser. The public nature of this cloud model does not mean that content from one person or one company can be accessed by another. It simply means that the same physical hardware is being shared by multiple users — a “multi-tenant” arrangement in which data from different users resides on one system. Through this approach, users get to take advantage of the scale of many servers and storage systems at the cloud facility. This scale can also improve accessibility and performance, which can be key considerations for many content creators.

Public cloud is the most versatile type of cloud, and it can offer a range of services, from hosted applications to “compute” capabilities. For media companies these services range from transcoding, rendering and animation to business services such as project tracking, billing and, in some cases, file sharing. (Box and Dropbox are good examples of file sharing enabled by public cloud.) Services may be generic or branded, and they are most often offered by a software vendor using a public cloud, or by the public cloud vendor itself. Public clouds are popular for content and asset storage, both for short-term transcode to delivery or project-in-process storage and for longer-term “third copy” off-site archive.

Public clouds can be very appealing due to the OPEX, pay-as-you-go nature of billing and the lack of capital expense for ongoing hardware purchases and refreshes, but the downside is that public clouds remove control over the workflow. While most public cloud vendors today are large and financially stable, it remains important to choose carefully.

Moreover, taking advantage of public cloud is rarely easy. This path involves dealing with new vendors, and possibly with unfamiliar applications and hardware gateways, and there can be unexpected charges for simple operations such as retrieving data. Although content security concerns are mostly overblown, they nevertheless are a source of apprehension for many potential public cloud users. These uncertainties have a lot of media companies looking to private cloud.

Private
Private cloud can most simply be defined for media companies as a walled machine room environment with workflow compute and storage capabilities offering outside connectivity, while at the same time preventing outside intrusions into the facility.

A well-designed private cloud will allow facilities to extend most of their production and archive capabilities to remote users. The main difference between this approach and most current (non-cloud) storage and compute operations in a facility today is simply that a private cloud can isolate the current workflow from the outside world while extending a portion of it to remote users based on preferences and policies.

The idea of remote access is not confined to private cloud. It is possible to provide facility access to external users through normal networking protocols, but the private cloud takes this a step further through easier access for authorized users and greater security for others. The media facility remains in complete control of its content and assets. Furthermore, it can host its applications, and its content remains in its on-site storage, safe and secure, with no retrieval fees.

A facility that embraces private cloud cannot take advantage of the scale or pay-as-you-go benefits of public cloud. Still, in order to provide greater accessibility and flexibility, some media companies have adopted a private cloud model as an extension of their online operations. Private cloud can effectively replace much of the hardware used today in post and archive operations, so it is a more cost-effective solution for many considering cloud benefits.

Hybrid
Hybrid cloud is an interesting proposition. In the enterprise IT world, hybrid cloud implementations are seen as a way to bridge private and public and realize the best of both worlds — lower OPEX for certain functions such as SaaS (software as a service) and the security of keeping valuable data in the company’s own data centers.

For media professionals, hybrid cloud may have even greater benefits. Considering the changing delivery requirements facing the industry and the sheer volume of content being created and reviewed — and, of course, keeping in mind the value of re-monetization — hybrid cloud has exciting potential. A well-designed hybrid cloud can provide the benefits of public and private cloud while reducing the cost and complexity of maintaining end-to-end hardware on premises. By sharing the load between the hardware at a facility and the massive scale of a public cloud, a media company can extend its workflow easily while controlling every stage — even on a project-by-project basis.
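
As a rough illustration of what controlling every stage on a project-by-project basis can look like in code, the toy sketch below decides, per asset, whether it should live on facility storage or in a public cloud tier. The fields, thresholds and tier names are assumptions invented for the example, not any vendor's implementation.

```python
# Toy sketch of a hybrid-cloud placement policy: decide, per asset, whether it
# belongs on facility storage or in a public cloud tier. The tier names and
# thresholds are illustrative assumptions, not any vendor's implementation.
from dataclasses import dataclass

@dataclass
class Asset:
    project: str
    size_gb: float
    days_since_last_access: int
    in_active_post: bool  # still being worked on at the facility?

def placement(asset: Asset) -> str:
    """Return 'on-prem', 'public-cloud-archive' or 'public-cloud-standard'."""
    if asset.in_active_post:
        # Active work stays local for performance and control.
        return "on-prem"
    if asset.days_since_last_access > 90:
        # Cold content moves to an inexpensive public cloud archive tier.
        return "public-cloud-archive"
    # Idle but still-warm content can sit in a standard cloud tier for review access.
    return "public-cloud-standard"

if __name__ == "__main__":
    spot = Asset("spot_rework", 2.4, days_since_last_access=120, in_active_post=False)
    print(placement(spot))  # -> public-cloud-archive
```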

Choosing between public, private and hybrid cloud can be a daunting task. It is a decision that must start with understanding the unique needs and goals of the media company and its operations, and then involve careful mapping of the various vendors’ solution offerings — with cost considerations always in mind. In the end, a facility may choose neither public, private nor hybrid cloud, but then it may miss out on the many and growing benefits enabled by the cloud.

Alex Grossman, a cloud workflow expert, is VP, Media and Entertainment at Quantum. You can follow him on Twitter @activeguy.

Forbidden to demo Forscene’s virtualized post workflow at IBC

Forbidden Technologies, makers of the editing software Forscene, will be at IBC in Amsterdam showing its end-to-end virtualized workflow for posting and distributing video content. The hardware-independent solution is enabled by Forscene’s integration with the Microsoft Azure cloud-computing platform.

The workflow sees the Forscene ingest server running as a virtual machine on the Microsoft Azure platform to transcode and ingest live broadcast streams into Forscene accounts seconds behind the live feed. Video editors can then create subclips or full highlights packages using Forscene’s NLE from anywhere. Once the edit is complete, they can drop the sequence back onto Azure for faster-than-realtime conforming and distribution.

IBC attendees can experience the virtualized workflow by competing in a simulated car race, editing the race footage in Forscene and then sharing the video on social media — without needing any Forscene hardware.

Thinkbox at SIGGRAPH with cloud-enabled pipelines

Thinkbox Software showed off some new features from the upcoming release of Deadline, a high-volume compute management solution that enables studios to link any combination of local and cloud-based resources into one logical pipeline. Currently entering beta testing, Deadline 8 will introduce on-demand metered licensing that works with both local and cloud-based farms alongside existing permanent and temporary licenses.

Deadline 8 also adds a Proxy Server application that allows users to securely connect to and interact with remote and cloud-based render farms over public Internet without the need for a virtual private network (VPN). The new version includes an updated user interface with enhanced interactivity and a new sandboxed Python environment to facilitate additional rendering and event stability.

Currently, Deadline supports cloud rendering on Amazon EC2, Google Cloud Platform, Microsoft Azure and OpenStack, among others.

Leading up to SIGGRAPH 2015, Thinkbox released Deadline Cloud Wizard for Google Cloud Platform and a beta version for Amazon’s Elastic Compute Cloud (EC2). These wizards turn the configuration of cloud-based render farms running Deadline into a one-time automated task completed in less than 30 minutes. Similarly, Deadline was made available on the Microsoft Azure Marketplace for one-click access to the software.

Thinkbox also recently launched the beta for Deadline 7.2. New features include Quick Draft options to create movies from rendered images or perform file conversions without having to create template scripts. Integration with NIM Labs’ NIM pipeline management tool enables the creation of NIM renders and automates the uploading of thumbnails and movies to NIM when jobs complete. Enhanced support for SideFX Software’s Houdini introduces Jigsaw region rendering, HServer interactive rendering and HQueue simulation node slicing.

Answering questions about ‘the cloud’

By Randi Altman

The cloud is everywhere. Workflows, as well as companies making tools for those workflows, are popping up all over, but still some post pros are dubious. What exactly is the cloud? How will it help beyond regular workflows? How does it keep my assets secure? Those are just some questions being thrown around by those who have yet to make the transition.

We thought reaching out to a company that capitalized on the cloud early and from a post production perspective might be a good way to get some of these questions answered.

David Peto owned London-based post house Unit Post Production until 2009 when he started Aframe, a cloud platform that enables teams and organizations to collaborate, organize and move media. He designed the product from a user’s perspective. Let’s find out more about the cloud and its benefits for post pros…

David Peto

Some people don’t have a clear understanding of the cloud. Can you help them out?
In its simplest definition, the cloud is a combination of software and services that run on Internet-accessible servers rather than on local computers.

Can you describe how your company uses the cloud?
Aframe was built as a cloud platform from the very beginning. We recognized that people were shipping hard drives all around the world, making unnecessary copies and versions of their media, losing comments and other metadata, and generally spending too much time just waiting to work with media. Using the cloud gives organizations a central repository — a one-to-many point of distribution — that enables more people to access media and do their work regardless of where they may be located and what time of day it is.

There are private and public clouds. Can you describe the differences? What are the benefits of each?
Public clouds are generally owned and operated by a third party such as Amazon Web Services, Rackspace or Microsoft Azure. They are provisioned for use by many different types of users — banks, pharmaceutical companies or content distribution networks featuring the latest grumpy cat video!

Private clouds are owned by one company for the specific use of its employees and partners, and generally have very high security standards and limited accessibility.

Which does Aframe use, and why?
Aframe sits somewhere in between being a public and private cloud, which we think offers benefits from both types. What we offer is sometimes referred to as a Vendor Cloud. Like a public cloud offering, Aframe is globally available and accessible to all, but has been purpose-built for a specific task: handling large and complex media files.

Like private clouds, we offer very tight security, greater flexibility and features tailored to users. We also own and operate all of the equipment in the datacenters and do not outsource any portion of our infrastructure to third parties.

For content creators and owners dealing with large, often complex, high-resolution media files, dedicated processes and services are required to enable post workflows. Far beyond simply storing content is the need to automatically transcode to different formats and extract and add descriptive metadata, while also providing a method to review and approve assets for all stakeholders on any device.

How do you educate people who have concerns about data security?
The best way is to explain the different security areas that must be considered. First, there is file transfer security: anytime your media is moving to or from the cloud, it should be encrypted in transit. Our media in transit is encrypted with extended-validation 256-bit SSL encryption at all times. This means that our corporate identity and place of business have been verified by SSL certification.

Application security means we use 256-bit SSL encryption at all times in the browser. Like banking online, your browser will display a green box that shows a verified connection when you are logged in. For server security inside our datacenters, access is protected by powerful 2048-bit RSA encryption keys, and only senior members of the Aframe team can access them.

For physical security and backups of the data itself, we firmly believe that it is not safe unless it is verified to be stored redundantly in at least two geographically separate locations. Our customers rest easy knowing that their files are backed up hundreds of miles apart at opposite ends of the country.

Why should a post organization consider a cloud-based solution? What are the advantages?
Most post houses, regardless of their size, are experiencing the headaches I’ve mentioned with regard to teams working in different locations, having to FedEx media and having workflow silos where not everyone has access to the types of files they need. Only certain people need high-resolution files; others are happy to view proxies.

Being able to add timecode accurate notes and comments for the editor or producer is critical, but so are automatically transcoding, uploading and downloading files, as your workflow requires. All of these are necessary to get the maximum productivity to hit ever-shrinking deadlines and budgets. In the end, people should be concentrating on the creative aspects of their jobs, not the mundane moving of files to different departments and other stakeholders.

Can you give any examples of how post facilities are using Aframe?
There are many workflows being used by our post customers today. Some users upload dailies to the cloud so that stakeholders can view and log comments and even embed complete transcriptions. Others are using Aframe as a central repository where team members in offices across town or across countries can collaborate and get access to the latest footage. All metadata is indexed and preserved so that searching for just the right shot is effortless. Avid, Final Cut or Adobe editors benefit by seeing all comments and feedback when they transfer the metadata into the edit.

How is Aframe different from something like Amazon S3’s cloud offering?
That’s a great question. Amazon is a true cloud solution. It’s big, and in use by countless organizations every day. Unfortunately, that’s exactly where it can fall short for companies working with high-resolution files and broadcast masters. Amazon was built to serve many masters and as such, they have a different business model that can be quite costly for content creators.

Amazon charges for uploads and downloads; Aframe does not. That can make moving a 500GB show master very expensive. With Amazon, it’s not as clear where your media is stored and where the backup of that media is stored. Finally, Amazon shares its bandwidth across a huge cross-section of customers in a way that Aframe does not. It’s just not built for the media and entertainment industry.

A lot of people say the Internet is not fast enough to support upload/download of full-res video for any meaningful post workflows, how do you answer that?
That’s a completely legitimate question, because it is true that your experience in the cloud is only as good as your Internet connection. However, we have optimized our file transfer protocols to be the fastest in the industry. Our transfer speeds are 15x faster than FTP, for example. There are many users working from home or from coffee shops who are quite successfully viewing media and making comments over 3G or 4G connections with as little as 5Mbps.

Obviously, you’ll want more than that if you are uploading dailies, but the good news is that Internet speeds are increasing exponentially every year and most post organizations have very capable connections in place today.

At NAB, Aframe showed a collaboration between Aframe and Adobe Anywhere. Can you talk about that?
Since the beginning of Aframe, I’ve dreamed of true, no-download cloud editing. I’ve seen a lot of people fail for various reasons. However, four years ago, Adobe showed me its Anywhere product, which allowed full-resolution material to be streamed down a standard Internet connection so you could edit in Adobe Premiere Pro on your laptop just as if you were back at the facility. I was blown away at the possibility, because I could imagine hosting Anywhere in Aframe’s cloud platform, allowing a full-broadcast-quality, no-download edit in the cloud using Adobe Premiere on your laptop.

That’s exactly what we showed at this year’s NAB. It’s pretty amazing to see someone upload material into the cloud and never download it again, through the entire post process, until someone actually sees it.

This is significant for our industry because it means that you could now be editing from the office, home, the beach, the local café… anywhere really. It brings editing into the modern world and unchains the editor from all the storage and big iron workstations when you need to be someplace else!

Main image: Stock Photo

Quantum intros Q-Cloud storage solutions

Quantum has been an early adopter when it comes to cloud-based storage. Recently, the company took that to a new level, offering three new products that integrate the cloud into multi-tier, hybrid storage architectures for data workloads.

The Quantum Q-Cloud solutions allow users to leverage Quantum’s intelligent data management software to store data in the cloud when it makes the most sense for a given workflow or application – essentially tailoring the approach to a user’s needs.

The new Q-Cloud Archive and Q-Cloud Vault incorporate the power of the public cloud as an off-site tier within a Quantum StorNext 5 workflow environment, while Q-Cloud Protect for AWS enables customers using Quantum’s DXi deduplication appliances to replicate data to the Amazon Web Services (AWS) cloud. With all three offerings, users can get the benefits of the cloud without making changes to existing applications or processes.

Taking into account both the benefits and challenges of the cloud, Quantum says their approach is to enable customers to combine public cloud storage with on-premises storage in a multi-tier, intelligently managed, application-centric architecture. So instead of treating the cloud as a passive repository, they integrate the cloud as an active tier in a hybrid storage infrastructure driven by application requirements.

Key features include:
• Full integration as a tier in StorNext 5-managed workflows, with automated, policy-based data movement
• No additional hardware, separate applications or programming needed
• Agility in addressing changing workload demands
• Reliability of global public cloud infrastructure
• Straightforward pricing and billing directly from Quantum
• Secure, encrypted data in transit and at rest

StorNext users can also employ Q-Cloud Archive and Q-Cloud Vault in conjunction with Quantum Lattus extended online storage, moving data from a Lattus object storage technology-based private cloud on their premises to Q-Cloud.

Offered as a subscription service through the Amazon Marketplace, Q-Cloud Protect for AWS enables users to replicate data from either a physical or virtual DXi appliance on premises to a virtual DXi instance in the AWS cloud.

Key features include:
• Easy access to the Amazon public cloud and its benefits, with no changes to applications or processes
• Cost savings enabled by the most efficient method of data deduplication

Q-Cloud Archive is available immediately in the Americas, EMEA and APAC, while Q-Cloud Vault will be available in the second half of this year. Q-Cloud Protect for AWS will be available next quarter. You can see an overview of the products here.

Nvidia next-gen GPUs focus on speed, the cloud and mobile

At the SIGGRAPH show in Vancouver, Nvidia showed its next generation of Quadro GPUs, which it describes as “an enterprise-grade visual computing platform” that offers 40 percent faster performance. With these cards, Nvidia is focusing on 4K, the cloud, mobile and collaboration, including remote rendering. Pricing is expected to remain the same.

“The next generation of Quadro GPUs not only dramatically increases graphics and compute performance to handle huge data sets, it extends the concept of visual computing from a graphics card in a workstation to a connected environment,” said Jeff Brown, VP of Professional Visualization at Nvidia. “The new Quadro line-up lets users interact with their designs or data locally on a workstation, remotely on a mobile device or in tandem with cloud-based services.”

A shot from Nvidia’s New York press event on the set of ABC’s The Chew.

Framestore’s CTO, Steve MacPherson, who spoke at an Nvidia press event recently in New York City, says, “From increased efficiency to new workflow models, the results we’ve achieved with the latest Quadro GPUs are fundamental to our future.”

Framestore, which provides VFX and graphics for major feature films such as Guardians of the Galaxy and Edge of Tomorrow, has been doing a lot of work recently in integrated advertising, including virtual reality, something that MacPherson calls “a blending between the physical and CG worlds.”

He points to Nvidia’s Quadro cards as a key part of the studio’s workflow. “Nvidia Quadro is an essential component to helping us keep our edge, and gives us the reassurance of knowing the graphics technology our artists rely on has been developed specifically for professional users with the highest standards of reliability and compatibility.”

Adobe’s Dennis Radke was at the New York event, talking about how these new cards accelerate Adobe Creative Cloud and allow Premiere Pro to take in 4K Red Raw files without a Red Rocket card.

Also in New York, Nvidia showed remote collaboration workflows using a Google tablet running Autodesk Maya. Check out the company’s blog on the topic.

So to sum up, the new generation of Quadro GPUs — the K5200, K4200, K2200, K620 and K420 — allows users to interact with data sets or designs up to twice the size handled by previous generations; remotely interact with graphics applications from a Quadro-based workstation from essentially any device, including PCs, Macs and tablets; run major applications, such as Adobe Creative Cloud and the Autodesk Design Suite — on average 40 percent faster than with previous Quadro cards; and switch easily from local GPU rendering to cloud-based offerings using Nvidia Iray rendering.


Review: MediaSilo with Premiere/Prelude integration

By Brady Betzel

A few months ago I reviewed a cloud-based asset management and collaboration platform called Aframe. After that review hit, many people emailed asking how I “really” felt about it, and would it work in different production scenarios.

Competitors also reached out asking if I could review their product too. My Forbidden Technologies review is already up on postPerspective.com, and I recently took MediaSilo up on their offer and ran some of my own tests. I discovered that we really are going to be working in the cloud in the near future — this is not just a fad.

While these products do differ in their offerings, it’s clear there is a race to supremacy in the cloud wars. Forbidden Technologies offers Forscene, a Web-based NLE, cloud storage and…

New Scratch 8 supports native ProRes encoding on Windows, cloud workflows

Santa Clara, California — Assimilate has three bits of news for users today, but let’s start off with this: Scratch and Scratch Lab customers can encode Apple ProRes files on Microsoft Windows 7/8-based PCs.

So that means Windows users get native ProRes 422, HQ, LT, proxy and 4444 encoding…

Aframe names David Frasco as VP of sales, North America

New York — Aframe, makers of a cloud video production system with capabilities in collaboration, review and approval, archive and tagging, has named long-time media and entertainment industry vet David Frasco as its new VP of North American sales, based in New York City.

Formerly with Avid, Frasco has over 30 years of experience in editing, post production and building out marketing and sales teams for solutions that have transformed content creation and production workflows. For the past 14 years, Frasco served as Avid’s director of enterprise accounts, assisting clients in deploying file-based workflow and production systems. He was instrumental in the debut of game-changing solutions for the past four Olympic Games.

In earlier positions Frasco also was a product manager with expertise in the company’s graphics and editorial solutions. Prior to joining Avid, he was director of marketing for Chyron, a post production product manager for Sony Broadcast, and a freelance video editor handling both online and offline finishing for broadcast TV, corporate video and advertising spots.

As more broadcasters look at cloud solutions, Aframe’s (www.aframe.com) appointment of Frasco will help the company tap into the innovative professional video communities in New York and Los Angeles that are eager to leverage new and smarter ways to create and distribute content.

Aframe will be at NAB next month with the next generation of its cloud video platform, Aframe 3.0, including a new desktop app offering automated media movement workflows, custom transcoding technology, an HTML5 player for precision control ahead of the edit and asset management features that save immense time and hassle in broadcast video production. Aframe 3.0 makes news, sports and media & entertainment professionals more productive and helps them leverage existing on-premise assets more efficiently, with the peace of mind that assets are fully under control and secure.

New features include:
– A desktop app with smart upload technology that enables Aframe users to save time and hassle in uploading and distributing files. The app automatically detects the best connectivity settings before sending the clips to Aframe, where they are automatically transcoded to a viewing copy. Users can then configure the app to automatically push the clips to other users’ desktops for maximum convenience.

– Automatic transcoding features, including transcoding to a house format or custom transcoding per clip, which mean less effort spent transcoding everything that will be used and less expense on transcoding tools.

– Expanded access controls and refined permission features that provide a “mission control” administrator view, enabling easier management across cost centers and more fluid asset management. With more adaptable user settings and a granular permission structure, Aframe provides added peace of mind against potentially damaging incidents.

– A new HTML5 player that is faster, frame-accurate and universally compatible, allowing faster footage review even on smartphones and tablets and expediting the organization of vast amounts of video.

Brady Betzel recently reviewed the Aframe system for postPerspective. Click here to see the piece:  https://postperspective.com/review-aframe-cloud-platform.