
Visual Effects Roundtable

By Randi Altman

With Siggraph 2019 in our not-too-distant rearview mirror, we thought it was a good time to reach out to visual effects experts to talk about trends. Everyone has had a bit of time to digest what they saw. Users are thinking about what new tools and technologies might help their current and future workflows, and manufacturers are thinking about how their products will incorporate these new technologies.

We provided these experts with questions relating to realtime raytracing, the use of game engines in visual effects workflows, easier ways to share files and more.

Ben Looram, partner/owner, Chapeau Studios
Chapeau Studios provides production, VFX/animation, design and creative IP development (both for digital content and technology) for all screens.

What film inspired you to work in VFX?
Ray Harryhausen's Jason and the Argonauts, which I watched on TV when I was seven. The skeleton-fighting scene has been visually burned into my memory ever since. Later in life, I watched an artist compositing some tough bluescreen shots on a Quantel Henry in 1997, and I instantly knew that was going to be in my future.

What trends have you been seeing? USD? Rendering in the cloud? What do you feel is important?
Double the content for half the cost seems to be the industry’s direction lately. This is coming from new in-house/client-direct agencies that sometimes don’t know what they don’t know … so we help guide/teach them where it’s OK to trim budgets or dedicate more funds for creative.

Are game engines affecting how you work, or how you will work in the future?
Yes. On-device rendering and the subtle shifts in video fidelity turned our attention toward game engine technology a couple of years ago. As soon as game engines start to look less canned and have accurate depth of field and parallax, we'll start to integrate more of those tools into our workflow.

Right now we have a handful of projects in the forecast where we will be using realtime game engine outputs as backgrounds on set instead of shooting greenscreen.

What about realtime raytracing? How will that affect VFX and the way you work?
We just finished an R&D project with Intel's new raytracing engine, OSPRay, for Siggraph. The ability to work on a massive scale with last-minute creative flexibility was my main takeaway. This will allow our team to support our clients' swift changes in direction with ease on global launches. I see this ingredient as really exciting for our creative tech devs moving into 2020. Proof-of-concept iterations will be finaled faster, and we've seen efficiencies in lighting, rendering and compositing effort.

How have ML/AI affected your workflows, if at all?
None to date, but we’ve been making suggestions for new tools that will make our compositing and color correction process more efficient.

The Uncanny Valley. Where are we now?
Still uncanny. Even with well-done virtual avatar influencers on Instagram like Lil Miquela, we’re still caught with that eerie feeling of close-to-visually-correct with a “meh” filter.

Apple

Can you name some recent projects?
The Rookie’s Guide to the NFL. This was a fun hybrid project where we mixed CG character design with realtime rendering voice activation. We created an avatar named Matthew for the NFL’s Amazon Alexa Skills store that answers your football questions in real time.

Microsoft AI: Carlsberg and Snow Leopard. We designed Microsoft’s visual language of AI on multiple campaigns.

Apple Trade In campaign: Our team concepted, shot and created an in-store video wall activation and on-all-device screen saver for Apple’s iPhone Trade In Program.

 

Mac Moore, CEO, Conductor
Conductor is a secure cloud-based platform that enables VFX, VR/AR and animation studios to seamlessly offload rendering and simulation workloads to the public cloud.

What are some of today’s VFX trends? Is cloud playing an even larger role?
Cloud is absolutely a growing trend. I think for many years the inherent complexity and perceived cost of cloud has limited adoption in VFX, but there’s been a marked acceleration in the past 12 months.

Two years ago at Siggraph, I was explaining the value of elastic compute and how it perfectly aligns with the elastic requirements that define our project-based industry; this year there was a much more pragmatic approach to cloud, and many of the people I spoke with are either using the cloud or planning to use it in the near future. Studios have seen referenceable success, both technically and financially, with cloud adoption and are now defining cloud’s role in their pipeline for fear of being left behind. Having a cloud-enabled pipeline is really a game changer; it is leveling the field and allowing artistic talent to be the differentiation, rather than the size of the studio’s wallet (and its ability to purchase a massive render farm).

How are game engines changing how VFX are done? Is this for everyone or just a select few?
Game engines for VFX have definitely attracted interest lately and show a lot of promise in certain verticals like virtual production. There’s more work to be done in terms of out-of-the-box usability, but great strides have been made in the past couple years. I also think various open source initiatives and the inherent collaboration those initiatives foster will help move VFX workflows forward.

Will realtime raytracing play a role in how your tool works?
There’s a need for managing the “last mile,” even in realtime raytracing, which is where Conductor would come in. We’ve been discussing realtime assist scenarios with a number of studios, such as pre-baking light maps and similar applications, where we’d perform some of the heavy lifting before assets are integrated in the realtime environment. There are certainly benefits on both sides, so we’ll likely land in some hybrid best practice using realtime and traditional rendering in the near future.

How do ML/AI and AR/VR play a role in your tool? Are you supporting OpenXR 1.0? What about Pixar’s USD?
Machine learning and artificial intelligence are critical for our next evolutionary phase at Conductor. To date we’ve run over 250 million core-hours on the platform, and for each of those hours, we have a wealth of anonymous metadata about render behavior, such as the software run, duration, type of machine, etc.

Conductor

For our next phase, we’re focused on delivering intelligent rendering akin to ride-share app pricing; the goal is to provide producers with an upfront cost estimate before they submit the job, so they have a fixed price that they can leverage for their bids. There is also a rich set of analytics that we can mine, and those analytics are proving invaluable for studios in the planning phase of a project. We’re working with data science experts now to help us deliver this insight to our broader customer base.
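The ride-share-style pricing Moore describes boils down to turning render metadata into a fixed upfront quote. A minimal sketch of that arithmetic follows; all field names, rates and the function itself are hypothetical illustrations, not Conductor's actual API:

```python
# Toy upfront render-cost estimator in the spirit of ride-share pricing.
# All rates, names and fields here are hypothetical, not Conductor's API.

def estimate_job_cost(frames, core_hours_per_frame, cores_per_machine,
                      rate_per_core_hour):
    """Return a fixed upfront quote a producer could fold into a bid."""
    total_core_hours = frames * core_hours_per_frame
    machine_hours = total_core_hours / cores_per_machine
    return {
        "core_hours": total_core_hours,
        "machine_hours": machine_hours,
        "cost_usd": round(total_core_hours * rate_per_core_hour, 2),
    }

# e.g. 240 frames at 2 core-hours each, 16-core machines, $0.05 per core-hour
quote = estimate_job_cost(240, 2.0, 16, 0.05)
# quote["cost_usd"] == 24.0
```

In practice the per-frame core-hour figure would itself be predicted from historical render behavior (software, duration, machine type), which is where the metadata from those 250 million core-hours comes in.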

The AR/VR front presents a unique challenge for cloud, due to the large size and variety of the datasets involved. The rendering of these workloads is less about compute cycles and more about scene assembly, so we're determining how we can deliver more of a whole product for this market in particular.

OpenXR and USD are certainly helping with industry best practices and compatibility, which build recipes for repeatable success, and Conductor is collaborating on creating those guidelines for success when it comes to cloud computing with those standards.

What is next on the horizon for VFX?
Cloud, open source and realtime technologies are all disrupting VFX norms and are converging in a way that’s driving an overall democratization of the industry. Gone are the days when you need a pile of cash and a big brick-and-mortar building to house all of your tech and talent.

Streaming services and new mediums, along with a sky-high quality bar, have increased the pool of available VFX work, which is attracting new talent. Many of these new entrants are bootstrapping their businesses with cloud, standards-based approaches and geographically dispersed artistic talent.

Conductor recently became a fully virtual company for this reason. I hire based on expertise, not location, and today’s technology allows us to collaborate as if we are in the same building.

 

Aruna Inversin, creative director/VFX supervisor, Digital Domain 
Digital Domain has provided visual effects and technology for hundreds of motion pictures, commercials, video games, music videos and virtual reality experiences. It also livestreams events in 360-degree virtual reality, creates “virtual humans” for use in films and live events, and develops interactive content, among other things.

What film inspired you to work in VFX?
RoboCop in 1987. The combination of practical effects, miniatures and visual effects inspired me to start learning about what some call “The Invisible Art.”

What trends have you been seeing? What do you feel is important?
There has been a large focus on realtime rendering and virtual production and using them to help increase the throughput and workflow of visual effects. While realtime rendering does indeed increase throughput, there is now a greater onus on filmmakers to plan their creative ideas and assets before they can be rendered. It is no longer truly post production; we are back in the realm of preproduction, using post tools and realtime tools to help define how a story is created and eventually filmed.

USD and cloud rendering are also important components, which give many different VFX facilities the ability to manage their resources effectively. Another trend that has been building for a while and has since gained more traction is the availability of ACES and a more unified color space from the Academy. This allows quicker throughput between all facilities.

Are game engines affecting how you work or how you will work in the future?
As my primary focus is in new media and experiential entertainment at Digital Domain, I already use game engines (cinematic engines, realtime engines) for the majority of my deliverables. I also use our traditional visual effects pipeline; we have created a pipeline that flows from our traditional cinematic workflow directly into our realtime workflow, speeding up the development process of asset creation and shot creation.

What about realtime raytracing? How will that affect VFX and the way you work?
The ability to use Nvidia’s RTX and raytracing increases the physicality and realistic approximations of virtual worlds, which is really exciting for the future of cinematic storytelling in realtime narratives. I think we are just seeing the beginnings of how RTX can help VFX.

How have AR/VR and AI/ML affected your workflows, if at all?
Augmented reality has occasionally been a client deliverable for us, but we are not using it heavily in our VFX pipeline. Machine learning, on the other hand, allows us to continually improve our digital humans projects, providing quicker turnaround with higher fidelity than competitors.

The Uncanny Valley. Where are we now?
There is no more uncanny valley. We have the ability to create a digital human with the nuance expected! The only limitation is time and resources.

Can you name some recent projects?
I am currently working on a Time project but I cannot speak too much about it just yet. I am also heavily involved in creating digital humans for realtime projects for a number of game companies that wish to push the boundaries of storytelling in realtime. All these projects have a release date of 2020 or 2021.

 

Matt Allard, strategic alliances lead, M&E, Dell Precision Workstations
Dell Precision workstations feature the latest processors and graphics technology and target those working in the editing studio or at a drafting table, at the office or on location.

What are some of today’s VFX trends?
We’re seeing a number of trends in VFX at the moment — from 4K mastering from even higher-resolution acquisition formats and an increase in HDR content to game engines taking a larger role on set in VFX-heavy productions. Of course, we are also seeing rising expectations for more visual sophistication, complexity and film-level VFX, even in TV post (for example, Game of Thrones).

Will realtime raytracing play a role in how your tools work?
We expect that Dell customers will embrace realtime and hardware-accelerated raytracing as creative, cost-saving and time-saving tools. With the availability of Nvidia Quadro RTX across the Dell Precision portfolio, including on our 7000 series mobile workstations, customers can realize these benefits now to deliver better content wherever a production takes them in the world.

Large-scale studio users will not only benefit from the freedom to create the highest-quality content faster, but they'll likely see an overall impact on their energy consumption as they assess the move from CPU rendering, which dominates studio data centers today. Moving toward GPU and hybrid CPU/GPU rendering approaches can offer equal or better rendering output with less energy consumption.

How are game engines changing how VFX are done? Is this for everyone or just a select few?
Game engines have made their way into VFX-intensive productions to deliver in-context views of the VFX during the practical shoot. With increasing quality driven by realtime raytracing, game engines have the potential to drive a master-quality VFX shot on set, helping to minimize the need to “fix it in post.”

What is next on the horizon for VFX?
The industry is at the beginning of a new era as artificial intelligence and machine learning techniques are brought to bear on VFX workflows. Analytical and repetitive tasks are already being targeted by major software applications to accelerate or eliminate cumbersome elements in the workflow. And as with most new technologies, it can result in improved creative output and/or cost savings. It really is an exciting time for VFX workflows!

Ongoing performance improvements to the computing infrastructure will continue to accelerate and democratize the highest-resolution workflows. Now more than ever, small shops and independents can access the computing power, tools and techniques that were previously available only to top-end studios. Additionally, virtualization techniques will allow flexible means to maximize the utilization and proliferation of workstation technology.

 

Carl Flygare, manager, Quadro Marketing, PNY
PNY provides tools for realtime raytracing, augmented reality and virtual reality with the goal of advancing VFX workflow creativity and productivity. PNY is Nvidia's Quadro channel partner throughout North America, Latin America, Europe and India.

How will realtime raytracing play a role in workflows?
Budgets are getting tighter, timelines are contracting, and audience expectations are increasing. This sounds like a perfect storm, in the bad sense of the term, but with the right tools, it is actually an opportunity.

Realtime raytracing, based on Nvidia’s RTX technology and support from leading ISVs, enables VFX shops to fit into these new realities while delivering brilliant work. Whiteboarding a VFX workflow is a complex task, so let’s break it down by categories. In preproduction, specifically previz, realtime raytracing will let VFX artists present far more realistic and compelling concepts much earlier in the creative process than ever before.

This extends to the next phase, asset creation and character animation, in which models can incorporate essentially lifelike nuance, including fur, cloth, hair or feathers – or something else altogether! Shot layout, blocking, animation, simulation, lighting and, of course, rendering all benefit from additional iterations, nuanced design and the creative possibilities that realtime raytracing can express and realize. Even finishing, particularly compositing, can benefit. Given the applicable scope of realtime raytracing, it will essentially remake VFX workflows and overall film pipelines, and Quadro RTX series products are the go-to tools enabling this revolution.

How are game engines changing how VFX is done? Is this for everyone or just a select few?
Variety had a great article on this last May. ILM substituted realtime rendering and five 4K laser projectors for a greenscreen shot during a sequence from Solo: A Star Wars Story. This allowed the actors to perform in context — in this case, a hyperspace jump — but also allowed cinematographers to capture arresting reflections of the jump effect in the actors' eyes. Think of it as “practical digital effects” created during shots, not added later in post. The benefits are significant enough that the entire VFX ecosystem, from high-end shops and major studios to independent producers, is using realtime production tools to rethink how movies and TV shows happen while extending their vision to realize previously unrealizable concepts or projects.

Project Sol

How do ML and AR play a role in your tool? And are you supporting OpenXR 1.0? What about Pixar’s USD?
Those are three separate but somewhat interrelated questions! ML (machine learning) and AI (artificial intelligence) can contribute by rapidly denoising raytraced images in far less time than would be required by letting a given raytracing algorithm run to conclusion. Nvidia enables AI denoising in OptiX 5.0 and is working with a broad array of leading ISVs to bring ML/AI-enhanced realtime raytracing techniques into the mainstream.

OpenXR 1.0 was released at Siggraph 2019. Nvidia (among others) is supporting this open, royalty-free and cross-platform standard for VR/AR. With the launch of Quadro RTX, Nvidia is now providing VR-enhancing technologies such as variable rate shading, content adaptive shading and foveated rendering (among others). This provides access to the best of both worlds: open standards and the most advanced GPU platform on which to build actual implementations.

Pixar and Nvidia have collaborated to make Pixar’s USD (Universal Scene Description) and Nvidia’s complementary MDL (Materials Definition Language) software open source in an effort to catalyze the rapid development of cinematic quality realtime raytracing for M&E applications.

Project Sol

What is next on the horizon for VFX?
To satisfy the insatiable desire of VFX professionals, and audiences, to explore edge-of-the-envelope VFX, the industry will increasingly turn to realtime raytracing based on the actual behavior of light and real materials, increasingly sophisticated shader technology, and new mediums like VR and AR to explore new creative possibilities and entertainment experiences.

AI, specifically DNNs (deep neural networks) of various types, will automate many repetitive VFX workflow tasks, allowing creative visionaries and artists to focus on realizing formerly impossible digital storytelling techniques.

One obvious need is increasing the resolution at which VFX shots are rendered. We're in a 4K world, but many films are finished at 2K, primarily because of VFX. 8K is unleashing the abilities (and changing the economics) of cinematography, so expect increasingly powerful realtime rendering solutions, such as Quadro RTX (and successor products when they come to market), along with amazing advances in AI, to allow the VFX community to innovate in tandem.

 

Chris Healer, CEO/CTO/VFX supervisor, The Molecule 
Founded in 2005, The Molecule creates bespoke VFX imagery for clients worldwide. Over 80 artists, producers, technicians and administrative support staff collaborate at its New York City and Los Angeles studios.

What film or show inspired you to work in VFX?
I have to admit, The Matrix was a big one for me.

Are game engines affecting how you work or how you will work?
Game engines are coming, but the talent pool is a challenge and the bridge is hard to cross; a realtime artist doesn't have the same mindset as a traditional VFX artist. The last small percentage of completion on a shot can invalidate any gains from working in a game engine.

What about realtime raytracing?
I am amazed at this technology, and as a result bought stock in Nvidia, but the software has to get there. It’s a long game, for sure!

How have AR/VR and ML/AI affected your workflows?
I think artists are thinking more about how images work and how to generate them. There is still value in a plain-old four-cornered 16:9 rectangle that you can make the most beautiful image inside of.

AR, VR, ML, etc., are not that, to be sure. I think VR got skipped over in all the hype. There's way more to explore in VR, and that will inform AR tremendously. It is going to take a few more turns to find a real home for all this.

What trends have you been seeing? Cloud workflows? What else?
Everyone is rendering in the cloud. The biggest problem I see now is lack of a UBL model that is global enough to democratize it. UBL = usage-based licensing. I would love to be able to render while paying by the second or minute at large or small scales. I would love for Houdini or Arnold to be rentable on a Satoshi level … that would be awesome! Unfortunately, it is each software vendor that needs to provide this, which is a lot to organize.

The Uncanny Valley. Where are we now?
We saw in the recent Avengers film that Mark Ruffalo was in it. Or was he? I totally respect the Uncanny Valley, but within the complexity and context of VFX, this is not my battle. Others have to sort this one out, and I commend the artists who are working on it. Deepfakes are amazing.

Can you name some recent projects?
We worked on Fosse/Verdon, but more recent stuff, I can’t … sorry. Let’s just say I have a lot of processors running right now.

 

Matt Bach and William George, lab technicians, Puget Systems 
Puget Systems specializes in high-performance custom-built computers — emphasizing each customer’s specific workflow.

Matt Bach

William George

What are some of today’s VFX trends?
Matt Bach: There are so many advances going on right now that it is really hard to identify specific trends. However, one of the most interesting to us is the back and forth between local and cloud rendering.

Cloud rendering has been progressing for quite a few years and is a great way to get a nice burst in rendering performance when you are in a crunch. However, there have been significant improvements in GPU-based rendering with technology like Nvidia OptiX. Because of these advances, you no longer have to spend a fortune to have a local render farm, and even a relatively small investment in hardware can often move the production bottleneck away from rendering to other parts of the workflow. Of course, this technology should make its way to the cloud at some point, but as long as these types of advances keep happening, the cloud is going to continue playing catch-up.

A few other trends that we are keeping our eyes on are the growing use of game engines, motion capture suits and realtime markerless facial tracking in VFX pipelines.

Realtime raytracing is becoming more prevalent in VFX. What impact does realtime raytracing have on system hardware, and what do VFX artists need to be thinking about when optimizing their systems?
William George: Most realtime raytracing requires specialized computer hardware, specifically video cards with dedicated raytracing functionality. Raytracing can be done on the CPU and/or normal video cards as well, which is what render engines have done for years, but not quickly enough for realtime applications. Nvidia is the only game in town at the moment for hardware raytracing on video cards with its RTX series.

Nvidia’s raytracing technology is available on its consumer (GeForce) and professional (Quadro) RTX lines, but which one to use depends on your specific needs. Quadro cards are specifically made for this kind of work, with higher reliability and more VRAM, which allows for the rendering of more complex scenes … but they also cost a lot more. GeForce, on the other hand, is more geared toward consumer markets, but the “bang for your buck” is incredibly high, allowing you to get several times the performance for the same cost.

In between these two is the Titan RTX, which offers very good performance and VRAM for its price, but due to its fan layout, it should only be used as a single card (or at most in pairs, if used in a computer chassis with lots of airflow).

Another thing to consider is that if you plan on using multiple GPUs (which is often the case for rendering), the size of the computer chassis itself has to be fairly large in order to fit all the cards, power supply, and additional cooling needed to keep everything going.

How are game engines changing or impacting VFX workflows?
Bach: Game engines have been used for previsualization for a while, but we are starting to see them being used further and further down the VFX pipeline. In fact, there are already several instances where renders directly captured from game engines, like Unity or Unreal, are being used in the final film or animation.

This is getting into speculation, but I believe that as the quality of what game engines can produce continues to improve, it is going to drastically shake up VFX workflows. The fact that you can make changes in real time, as well as use motion capture and facial tracking, is going to dramatically reduce the amount of time necessary to produce a highly polished final product. Game engines likely won’t completely replace more traditional rendering for quite a while (if ever), but it is going to be significant enough that I would encourage VFX artists to at least familiarize themselves with the popular engines like Unity or Unreal.

What impact do you see ML/AI and AR/VR playing for your customers?
We are seeing a lot of work being done for machine learning and AI, but a lot of it is still on the development side of things. We are starting to get a taste of what is possible with things like Deepfakes, but there is still so much that could be done. I think it is too early to really tell how this will affect VFX in the long term, but it is going to be exciting to see.

AR and VR are cool technologies, but it seems like they have yet to really take off, in part because designing for them takes a different way of thinking than traditional media, but also in part because there isn’t one major platform that’s an overwhelming standard. Hopefully, that is something that gets addressed over time, because once creative folks really get a handle on how to use the unique capabilities of AR/VR to their fullest, I think a lot of neat stories will be told.

What is next on the horizon for VFX?
Bach: The sky is really the limit due to how fast technology and techniques are changing, but I think there are two things in particular that are going to be very interesting to see how they play out.

First, we are hitting a point where ethics (“With great power comes great responsibility” and all that) is a serious concern. With how easy it is to create highly convincing Deepfakes of celebrities or other individuals, even for someone who has never used machine learning before, I believe that there is the potential for backlash from the general public. At the moment, every use of this type of technology has been for entertainment or otherwise legitimate purposes, but the potential to use it for harm is too significant to ignore.

Something else I believe we will start to see is “VFX for the masses,” similar to how video editing used to be a purely specialized skill, but now anyone with a camera can create and produce content on social platforms like YouTube. Advances in game engines, facial/body tracking for animated characters and other technologies that remove a number of skills and hardware barriers for relatively simple content are going to mean that more and more people with no formal training will take on simple VFX work. This isn’t going to impact the professional VFX industry by a significant degree, but I think it might spawn a number of interesting techniques or styles that might make their way up to the professional level.

 

Paul Ghezzo, creative director, Technicolor Visual Effects
Technicolor and its family of VFX brands provide visual effects services tailored to each project’s needs.

What film inspired you to work in VFX?
At a pretty young age, I fell in love with Star Wars: Episode IV – A New Hope and learned about the movie magic that was developed to make those incredible visuals come to life.

What trends have you been seeing? USD? Rendering in the cloud? What do you feel is important?
USD will help structure some of what we currently do, and cloud rendering is an incredible source to use when needed. I see both of them maturing and being around for years to come.

As for other trends, I see new methods of photogrammetry and HDRI photography/videography providing datasets for digital environment creation and for capturing lighting content; performance capture (smart 2D tracking and manipulation, or 3D volumetric capture) for ease of performance manipulation or layout; and even post camera work. New simulation engines are creating incredible and dynamic sims in a fraction of the time, and all of this is coming together through video cards that streamline the creation of the end product. In many ways it might reinvent what can be done, but it might take a few cutting-edge shows to embrace and perfect the recipe and show its true value.

Production cameras tethered to digital environments for live set extensions are also coming of age, and with realtime rendering becoming a viable option, I can imagine that it will only be a matter of time before LED walls become the new greenscreen. Can you imagine a live-action set extension that parallaxes, distorts and is exposed in the same way as its real-life foreground? How about adding explosions, bullet hits or even an armada of spaceships landing in the BG, all on cue? I imagine this will happen in short order. Exciting times.

Are game engines affecting how you work or how you will work in the future?
Game engines have affected how we work. The speed and quality they offer are undoubtedly game-changing, but they don't always create the desired elements and AOVs that are typically needed in TV/film production.

They are also creating a level of competition that is spurring other render engines to be competitive and provide a similar or better solution. I can imagine that our future will use Unreal/Unity engines for fast turnaround productions like previz and stylized content, as well as for visualizing virtual environments and digital sets as realtime set extensions and a lot more.

Snowfall

What about realtime raytracing? How will that affect VFX and the way you work?
GPU rendering has single-handedly changed how we render and what we render with. A handful of GPUs and a GPU-accelerated render engine can equal or surpass a CPU farm that’s several times larger and much more expensive. In VFX, iterations equal quality, and if multiple iterations can be completed in a fraction of the time — and with production time usually being finite — then GPU-accelerated rendering equates to higher quality in the time given.

There are a lot of hidden variables in that equation (changes of direction, level of talent, work ethics, hardware/software limitations, etc.), but simply put: if you can hit the notes as fast as they are given, rather than waiting hours for a render farm to churn out a product, then more iterations can be produced, allowing for a higher-quality product in the time given.
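The time-versus-iterations tradeoff described above reduces to simple arithmetic: in a fixed schedule, iteration count is the time budget divided by the cost of one review cycle. A minimal sketch, with every number purely illustrative:

```python
# Iterations in a fixed schedule: time budget divided by the cost of one
# review cycle (render time plus artist time). All numbers are illustrative.

def iterations_in_budget(hours_available, render_hours, artist_hours):
    cycle = render_hours + artist_hours
    return int(hours_available // cycle)

# Same 40-hour window; assume a CPU-farm iteration renders in 8 hours and a
# GPU iteration in 1 hour, with 2 hours of artist time either way.
cpu_iters = iterations_in_budget(40, 8, 2)  # 4 iterations
gpu_iters = iterations_in_budget(40, 1, 2)  # 13 iterations
```

The point is that shrinking render time compounds: each hour saved per cycle is multiplied by every note-and-revise round the schedule allows.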

How have AR or ML affected your workflows, if at all?
ML and AR haven’t significantly affected our current workflows yet … but I believe they will very soon.

One aspect of AR/VR/MR that we occasionally use in TV/film production is previzing environments, props and vehicles, which lets everyone in production and on set/location see what the greenscreen will be replaced with, enabling greater communication and understanding among the directors, DPs, gaffers, stunt teams, SFX and talent. I can imagine that AR/VR/MR will only become more popular as a preproduction tool, allowing productions to front-load and approve all aspects of production way before the camera is loaded and the clock is running on cast and crew.
Machine learning is on the cusp of general usage, but it currently seems to be used by productions with lengthy schedules that can benefit from development teams building those toolsets. There are tasks that ML will undoubtedly revolutionize, but it hasn’t affected our workflows yet.

The Uncanny Valley. Where are we now?
Making the impossible possible … That *is* what we do in VFX. Looking at everything from Digital Emily in 2011 to Thanos and Hulk in Avengers: Endgame, we’ve seen what can be done, and the Uncanny Valley will likely remain, but only on productions that can’t afford the time or cost of flawless execution.

Can you name some recent projects?
Big Little Lies, Dead to Me, NOS4A2, True Detective, Veep, This Is Us, Snowfall, The Loudest Voice, and Avengers: Endgame.

 

James Knight, virtual production director, AMD 
AMD is a semiconductor company that develops computer processors and related technologies for M&E as well as other markets. Its tools include Ryzen and Threadripper.

What are some of today’s VFX trends?
Well, certainly the exploration for “better, faster, cheaper” keeps going. Faster rendering, so our community can accomplish more iterations in a much shorter amount of time, seems to be something I’ve heard the whole time I’ve been in the business.

I’d surely say the virtual production movement (or on-set visualization) is gaining steam, finally. I work with almost all the major studios in my role, and all of them, at a minimum, have speeding up post and blending it with production on their radar; many have virtual production departments.

How are game engines changing how VFX are done? Is this for everyone or just a select few?
I would say game engines are where most of the innovation comes from these days. Think about Unreal, for example. Epic created Fortnite, and the revenue from that must be astonishing, and they’re not going to sit on their hands. The feature film and TV post/VFX business benefits from gaming consumers’ demand to see higher-resolution, more photorealistic images in real time. That gets passed on to our community by eliminating guesswork on set when framing partial or completely CG shots.

It should be for everyone or most, because the realtime and post production time savings are rather large. I think many still have a personal preference for what they’re used to. And that’s not wrong, if it works for them, obviously that’s fine. I just think that even in 2019, use of game engines is still new to some … which is why it’s not completely ubiquitous.

How do ML or AR play a role in your tool? Are you supporting OpenXR 1.0? What about Pixar’s USD?
Well, it’s more the reverse. With our new Rome and Threadripper CPUs, we’re powering AR. Yes, we are supporting OpenXR 1.0.

What is next on the horizon for VFX?
Well, the demand for VFX is increasing, not the opposite, so the pursuit of faster photographic reality is perpetually in play. That’s good job security for me at a CPU/GPU company, as we have a way to go to properly bridge the Uncanny Valley completely, for example.

I’d love to say lower-cost CG is part of the future, but then look at the budgets of major features — they’re not exactly falling. The dance of Moore’s law will forever be in effect more than likely, with momentary huge leaps in compute power — like with Rome and Threadripper — catching amazement for a period. Then, when someone sees the new, expanded size of their sandpit, they then fill that and go, “I now know what I’d do if it was just a bit bigger.”

I am invested in and fascinated by the future of VFX, but I think it goes hand in hand with great storytelling. If we don’t have great stories, then directing and artistry innovations don’t properly get noticed. Look at the top 20 highest-grossing films in history … they’re all fantasy. We all want to be taken away from our daily lives and immersed in a beautiful, realistic, VFX-intense fictional world for 90 minutes, so we’ll be forever pushing the boundaries of rigging, texturing, shading, simulations, etc. To put my finger on exactly what’s next, I’d say I happen to know of a few amazing things that are coming, but sadly, I’m not at liberty to say right now.

 

Michel Suissa, managing director of pro solutions, The Studio-B&H 
The Studio-B&H provides hands-on experience to high-end professionals. Its Technology Center is a fully operational studio with an extensive display of high-end products and state-of-the-art workflows.

What are some of today’s VFX trends?
AI, ML, neural networks (GANs) and realtime environments.

Will realtime raytracing play a role in how the tools you provide work?
It already does with most relevant applications in the market.

How are game engines changing how VFX are done? Is this for everyone or just a select few?
The ubiquity of realtime game engines is becoming more mainstream with every passing year. It is becoming fairly accessible to a number of disciplines within different market targets.

What is next on the horizon for VFX?
New pipeline architectures that will rely on different implementations (traditional and AI/ML/NN) and mixed infrastructures (local and cloud-based).

What trends have you been seeing? USD? Rendering in the cloud? What do you feel is important?
AI, ML and realtime environments. New cloud toolsets. Prominence of neural networks and GANs. Proliferation of convincing “deepfakes” as a proof of concept for the use of generative networks as resources for VFX creation.

What about realtime raytracing? How will that affect VFX workflows?
RTX is changing how most people see their work being done. It is also changing expectations about what it takes to create and render CG images.

The Uncanny Valley. Where are we now?
AI and machine learning will help us get there. Perfection still remains too costly. The amount of time and resources required to create something convincing is prohibitive for the large majority of the budgets.

 

Marc Côté, CEO, Real by Fake 
Real by Fake services include preproduction planning, visual effects, post production and tax-incentive financing.

What film or show inspired you to work in VFX?
George Lucas’ Star Wars and Indiana Jones (Raiders of the Lost Ark). I was a kid when I saw Star Wars, and it brought me to another universe. It was so inspiring even though I was too young to understand what the movie was about: the robots in the desert and the spaceships flying around. It looked real; it looked great. I was like, “Wow, this is amazing.”

Indiana Jones because it was a great adventure; we really visited those worlds. I was super-impressed by the action and by the way it was done. It was mostly practical effects, not really visual effects. Later on I realized that in Star Wars, they were using robots (motion control systems) to shoot the spaceships. And as a kid, I was very interested in robots. I said, “Wow, this is great!” So I thought maybe I could use my skills and what I love and combine it with film. That’s the way it started.

What trends have you been seeing? What do you feel is important?
The trend right now is using realtime rendering engines. It’s coming on pretty strong. The game companies who build engines like Unity or Unreal are offering a good product.

It’s a bit of a hack to use these tools for rendering or in production at this point. They’re great for previz, and they’re great for generating realtime environments and realtime playback. But having the capacity to change or modify imagery with the director during the finishing process is still not easy. Still, it’s a very promising trend.

Rendering in the cloud gives you a very rapid capacity, but I think it’s very expensive. You also have to download and upload 4K images, so you need a very big internet pipe. So I still believe in local rendering — either with CPUs or GPUs. But cloud rendering can be useful for very tight deadlines or for small companies that want to achieve something that’s impossible to do with the infrastructure they have.

My hope is that AI will minimize repetition in visual effects. For example, in keying. We key multiple sections of the body, but we get keying errors in plotting or transparency or in the edges, and they are all a bit different, so you have to use multiple keys. AI would be useful to define which key you need to use for every section and do it automatically and in parallel. AI could be an amazing tool to be able to make objects disappear by just selecting them.

Pixar’s USD is interesting. The question is: Will the industry take it as a standard? It’s like anything else. Kodak invented DPX, and it became the standard through time. Now we are using EXR. We have different software, and having exchange between them will be great. We’ll see. We have FBX, which is a really good standard right now. It was built by Filmbox, a Montreal company that was acquired by Autodesk. So we’ll see. The demand and the companies who build the software — they will be the ones who take it up or not. A big company like Pixar has the advantage of other companies using it.
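Part of USD’s appeal as an exchange format is that a scene description is layerable, human-readable text that any compliant package can open. As a purely illustrative sketch (the prim names here are invented), a minimal hand-authored USD ASCII file looks like this:

```usda
#usda 1.0
(
    defaultPrim = "set"
)

def Xform "set"
{
    def Sphere "ball"
    {
        double radius = 2.0
        double3 xformOp:translate = (0, 2, 0)
        uniform token[] xformOpOrder = ["xformOp:translate"]
    }
}
```

Because the format is plain text with a published schema, one application can add an override layer on top of another application’s file without destroying the original, which is exactly the kind of interchange the industry wanted from FBX.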

The last trend is remote access. The internet is now allowing us to connect cross-country, like from LA to Montreal or Atlanta. We have a sophisticated remote infrastructure, and we do very high-quality remote sessions with artists who work from disparate locations. It’s very secure and very seamless.

What about realtime raytracing? How will that affect VFX and the way you work?
I think we have pretty good raytracing compared to what we had two years ago. I think it’s a question of performance, and of making it user-friendly in the application so it’s easy to light with natural lighting. To not have to fake the rebounds so you can get two or three rebounds. I think it’s coming along very well and quickly.

Sharp Objects

So what about things like AI/ML or AR/VR? Have those things changed anything in the way movies and TV shows are being made?
My feeling right now is that we are getting into an era where I don’t think you’ll have enough visual effects companies to cover the demand.

Every show has visual effects. It can be a complete character, like a Transformer, or a movie from the Marvel Universe where the entire film is CG. Or it can be the huge number of invisible effects that are starting to appear in virtually every show. You need capacity to get all this done.

AI can help minimize repetition so artists can work more on the art and what is being created. This will accelerate and give us the capacity to respond to what’s being demanded of us. They want a faster cheaper product, and they want the quality to be as high as a movie.

The only scenario where we are looking at using AR is when we are filming. For example, you need a good camera track in real time, and then you want to be able to quickly add a CGI environment around the actors so the director can make the right decisions about the background or the interactive characters in the scene. The actors won’t see it unless they have a monitor or a pair of glasses or something that can show them the result.

So AR is a tool to be able to make faster decisions when you’re on set shooting. This is what we’ve been working on for a long time: bringing post production and preproduction together. To have an engineering department who designs and conceptualizes and creates everything that needs to be done before shooting.

The Uncanny Valley. Where are we now?
In terms of the environment, I think we’re pretty much there. We can create an environment that nobody will know is fake. Respectfully, I think our company Real by Fake is pretty good at doing it.

In terms of characters, I think we’re still not there. I think the game industry is helping a lot to push this. I think we’re on the verge of having characters look as close as possible to live actors, but if you’re in a closeup, it still feels fake. For mid-ground and long shots, it’s fine. You can make sure nobody will know. But I don’t think we’ve crossed the valley just yet.

Can you name some recent projects?
Big Little Lies and Sharp Objects for HBO, Black Summer for Netflix
and Brian Banks, an indie feature.

 

Jeremy Smith, CTO, Jellyfish Pictures
Jellyfish Pictures provides a range of services including VFX for feature film, high-end TV and episodic animated kids’ TV series and visual development for projects spanning multiple genres.

What film or show inspired you to work in VFX?
Forrest Gump really opened my eyes to how VFX could support filmmaking. Seeing Tom Hanks interact with historic footage (e.g., John F. Kennedy) was something that really grabbed my attention, and I remember thinking, “Wow … that is really cool.”

What trends have you been seeing? What do you feel is important?
The use of cloud technology is really empowering “digital transformation” within the animation and VFX industry. The result of this is that there are new opportunities that simply wouldn’t have been possible otherwise.

Jellyfish Pictures uses burst rendering into the cloud, extending our capacity and enabling us to take on more work. In addition to cloud rendering, Jellyfish Pictures were early adopters of virtual workstations, and, especially after Siggraph this year, it is apparent that this is the future for VFX and animation.

Virtual workstations promote a flexible and scalable way of working, with global reach for talent. This is incredibly important for studios to remain competitive in today’s market. As well as the cloud, formats such as USD are making it easier to exchange data with others, which allows us to work in a more collaborative environment.

It’s important for the industry to pay attention to these, and similar, trends, as they will have a massive impact on how productions are carried out going forward.

Are game engines affecting how you work, or how you will work in the future?

Game engines are offering ways to enhance certain parts of the workflow. We see a lot of value in the previz stage of the production. This allows artists to iterate very quickly and helps move shots onto the next stage of production.

What about realtime raytracing? How will that affect VFX and the way you work?
The realtime raytracing from Nvidia (as well as GPU compute in general) offers artists a new way to iterate and help create content. However, with recent advancements in CPU compute, we can see that “traditional” workloads aren’t going to be displaced. The RTX solution is another tool that can be used to assist in the creation of content.

How have AR/VR and ML/AI affected your workflows, if at all?
Machine learning has the power to really assist certain workloads. For example, it’s possible to use machine learning to assist a video editor by cataloging speech in a certain clip. When a director says, “find the spot where the actor says ‘X,’” we can go directly to that point in time on the timeline.
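The idea can be sketched in a few lines, assuming a speech-to-text pass has already produced timestamped transcript segments (the segment data and function name below are invented for illustration; a real tool would get them from an ML transcription service):

```python
# Each segment pairs a timecode (in seconds) with the transcribed words.
transcript = [
    (12.4, "we should head back to the house"),
    (57.9, "find the detonator before midnight"),
    (83.2, "I never said it was midnight"),
]

def find_spots(segments, phrase):
    """Return the timecodes of every segment containing the phrase."""
    phrase = phrase.lower()
    return [t for t, words in segments if phrase in words.lower()]

# Jump the editor's playhead to wherever the actor says "midnight".
print(find_spots(transcript, "midnight"))  # [57.9, 83.2]
```

Once the expensive ML step (speech recognition) has run, answering the director’s question is just a cheap text search over its output.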

In addition, ML can be used to mine existing file servers that contain vast amounts of unstructured data. When mining this “dark data,” an organization may find a lot of great additional value in existing content, which machine learning can uncover.

The Uncanny Valley. Where are we now?
With recent advancements in technology, the Uncanny Valley is narrowing; however, it is still there. We see more digital humans in cinema than ever before (a digital Peter Cushing played a main character in Rogue One: A Star Wars Story), and I fully expect to see more advances as time goes on.

Can you name some recent projects?
Our latest credits include Solo: A Star Wars Story, Captive State, The Innocents, Black Mirror, Dennis & Gnasher: Unleashed! and Floogals Seasons 1 through 3.

 

Andy Brown, creative director, Jogger 
Jogger Studios is a boutique visual effects studio with offices in London, New York and LA. With capabilities in color grading, compositing and animation, Jogger works on a variety of projects, from TV commercials and music videos to projections for live concerts.

What inspired you to work in VFX?
First of all, my sixth form English project was writing treatments for music videos to songs that I really liked. You could do anything you wanted to for this project, and I wanted to create pictures using words. I never actually made any of them, but it planted the seed of working with visual images. Soon after that I went to university in Birmingham in the UK. I studied communications and cultural studies there, and as part of the course, we visited the BBC Studios at Pebble Mill. We visited one of the new edit suites, where they were putting together a story on the inquiry into the Handsworth riots in Birmingham. It struck me how these two people, the journalist and the editor, could shape the story and tell it however they saw fit. That’s what got me interested on a critical level in the editorial process. The practical interest in putting pictures together developed from that experience and all the opportunities that opened up when I started work at MPC after leaving university.

What trends have you been seeing? What do you feel is important?
Remote workstations and cloud rendering are all really interesting. It’s giving us more opportunities to work with clients across the world using our resources in LA, SF, Austin, NYC and London. I love the concept of a centralized remote machine room that runs all of your software for all of your offices and allows you scaled rendering in an efficient and seamless manner. The key part of that sentence is seamless. We’re doing remote grading and editing across our offices so we can share resources and personnel, giving the clients the best experience that we can without the carbon footprint.

Are game engines affecting how you work or how you will work in the future?
Game engines are having a tremendous effect on the entire media and entertainment industry, from conception to delivery. Walking around Siggraph last month, seeing what was not only possible but practical and available today using gaming engines, was fascinating. It’s hard to predict industry trends, but the technology felt like it will change everything. The possibilities on set look great, too, so I’m sure it will mean a merging of production and post production in many instances.

What about realtime raytracing? How will that affect VFX and the way you work?
Faster workflows and less time waiting for something to render have got to be good news. It gives you more time to experiment and refine things.

Chico for Wendy’s

How have AR/VR or ML/AI affected your workflows, if at all?
Machine learning is making its way into new software releases, and the tools are useful. Anything that makes it easier to get where you need to go on a shot is welcome. AR, not so much. I viewed the new Mac Pro sitting on my kitchen work surface through my phone the other day, but it didn’t make me want to buy it any more or less. It feels more like something that we can take technology from rather than something that I want to see in my work.

I’d like 3D camera tracking and facial tracking to be realtime on my box, for example. That would be a huge time-saver in set extensions and beauty work. Anything that makes getting a perfect key easier would also be great.

The Uncanny Valley. Where are we now?
It always used to be “Don’t believe anything you read.” Now it’s, “Don’t believe anything you see.” I used to struggle to see the point of an artificial human, except for resurrecting dead actors, but now I realize the ultimate aim is suppression of the human race and the destruction of democracy by multimillionaire despots and their robot underlings.

Can you name some recent projects?
I’ve started prepping for the apocalypse, so it’s hard to remember individual jobs, but there’s been the usual kind of stuff — beauty, set extensions, fast food, Muppets, greenscreen, squirrels, adding logos, removing logos, titles, grading, finishing, versioning, removing rigs, Frankensteining, animating, removing weeds, cleaning runways, making tenders into wings, split screens, roto, grading, polishing cars, removing camera reflections, stabilizing, tracking, adding seatbelts, moving seatbelts, adding photos, removing pictures and building petrol stations. You know, the usual.

 

James David Hattin, founder/creative director, VFX Legion 
Based in Burbank and British Columbia, VFX Legion specializes in providing episodic shows and feature films with an efficient approach to creating high-quality visual effects.

What film or show inspired you to work in VFX?
Star Wars was my ultimate source of inspiration for doing visual effects. Many of the effects in the movies didn’t make sense to me as a six-year-old, but I knew that this was the next best thing to magic. Visual effects create a wondrous world where everyday people can become superheroes, leaders of a resistance or rulers of a 5th-century dynasty. Watching X-wings fly over the surface of a space station the size of a small moon was exquisite. I also learned, much later on, that the visual effects we couldn’t see were as important as those we could.

I had already been steeped in visual effects with Star Trek — phasers, spaceships and futuristic transporters. Models hung from wires on a moon base convinced me that we could survive on the moon as it broke free from orbit. All of this fueled my budding imagination. Exploring computer technology and creating alternate realities through CGI and digitally enhanced solutions has been my passion for over a quarter of a century.

What trends have you been seeing? What do you feel is important?
More and more of the work is going to happen inside a cloud structure. That is definitely something being pushed very heavily by the tech giants, like Google and Amazon, that rule our world. There is no Moore’s law for computers anymore; the price and power we see out of computers are almost plateauing. The technology is now in the world of optimizing algorithms or rendering with video cards. It’s about getting bigger, better effects out more efficiently. Some companies are opting to run their entire operations in the cloud or in co-located server locations. This can theoretically free up workers to be in different locations around the world, provided they have solid, low-latency, high-speed internet.

When Legion was founded in 2013, the best way around cloud costs was to have on-premises servers and workstations that supported global connectivity. It was a cost control issue that has benefitted the company to this day, enabling us to bring a global collective of artists and clients into our fold in a controlled and secure way. Legion works in what we consider a “private cloud,” eschewing the costs of egress from large providers and working directly with on-premises solutions.

Are game engines affecting how you work or how you will work in the future?
Game engines are perfect for previsualization in large, involved scenes. We create a lot of environments and invisible effects. For the larger bluescreen shoots, we can build out our sets in Unreal Engine, previsualizing how the scene will play for the director or DP. This helps get everyone on the same page when it comes to how a particular sequence is going to be filmed. It’s a technique that also helps the CG team focus on adding details to the areas of a set that we know will be seen. When the schedule is tight, the assets are camera-ready by the time the cut comes to us.

What about realtime raytracing via Nvidia’s RTX? How will that affect VFX and the way you work?
The type of visual effects that we create for feature films and television shows involves a lot of layers and technology that provides efficient, comprehensive compositing solutions. Many of the video card rendering engines, like OctaneRender, Redshift and V-Ray RT, are limited when it comes to what they can create with layers. They often have issues with getting what is called a “back to beauty,” in which the sum of the render passes equals the final render. However, the workarounds we’ve developed enable us to achieve the quality we need. Realtime raytracing introduces a fantastic technology that will someday make it an ideal fit for our needs. We’re keeping an eye out for it as it evolves and becomes more robust.
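The “back to beauty” property is simple per-pixel arithmetic: the lighting AOVs should sum to the beauty render, and an engine “has issues” when they don’t. A toy check under that assumption (the three passes, pixel values and function name here are all invented for illustration, with one channel value per pixel):

```python
# Hypothetical per-pixel values for three lighting passes (AOVs).
passes = {
    "diffuse":  [0.30, 0.10, 0.05],
    "specular": [0.12, 0.40, 0.02],
    "emission": [0.00, 0.00, 0.50],
}
beauty = [0.42, 0.50, 0.57]  # the final ("beauty") render

def back_to_beauty(aovs, beauty, tol=1e-4):
    """True if the summed passes rebuild the beauty render per pixel."""
    for i, b in enumerate(beauty):
        total = sum(p[i] for p in aovs.values())
        if abs(total - b) > tol:
            return False
    return True

print(back_to_beauty(passes, beauty))  # True
```

When this check fails, a compositor can no longer regrade individual passes and trust that they will recombine into the approved final image, which is why the property matters so much in layered compositing.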

How have AR/VR or ML/AI affected your workflows, if at all?
AR has been in the wings of the industry for a while, but there’s nothing specific we would take advantage of. Machine learning has been introduced a number of times to solve various problems. It’s a pretty exciting time for these things. One of our partner contacts, who left to join Facebook, was keen to try a number of machine learning tricks for a couple of projects that might have come through, but we didn’t get to put them to the test. There’s an enormous amount of power to be had in machine learning, and I think we are going to see big changes in that field over the next five years, and in how it affects all of post production.

The Uncanny Valley. Where are we now?
Climbing up the other side, not quite at the summit for daily use. As long as the character isn’t a full normal human, it’s almost indistinguishable from reality.

Can you name some recent projects?
We create visual effects on an ongoing basis for a variety of television shows that include How to Get Away with Murder, DC’s Legends of Tomorrow, Madam Secretary and The Food That Built America. Our team is also called upon to craft VFX for a mix of movies, from the groundbreaking feature film Hardcore Henry to recently released films such as Ma, SuperFly and After.

MAIN IMAGE: Good Morning Football via Chapeau Studios.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

DP Chat: Peaky Blinders’ Si Bell ramps up the realism for Season 5

By Randi Altman

UK-based cinematographer Si Bell is known for his work on the critically acclaimed feature films Electricity (2015), In Darkness (2019) and Tiger Raid (2016), as well as high-profile TV shows such as Fortitude, Hard Sun, Britannia and Ripper Street. He is currently working on the new Steven Knight drama special, A Christmas Carol.

Si Bell

He also shot the new season of Peaky Blinders, which begins airing on BBC One on August 25 and then makes its way to Netflix on October 4. Peaky Blinders takes place in Birmingham, England, not long after World War I, and follows the Shelby family and its mafia-like business. The show is often dark, brutally violent and completely compelling. It stars Cillian Murphy as Thomas Shelby.

We recently reached out to Bell to ask him about his work on this current season of the edgy crime drama, followed by a look at his career in cinematography.

Tell us about Peaky Blinders Season 5. How early did you get involved in planning for the season? What direction did the showrunners give you about the look they wanted this season?
I got involved pretty early on and ended up having over 10 weeks prep, which is a long time for a TV show. I worked closely with Anthony Byrne, our director, whom I know very well. As the scripts came in, we began to discuss and plan how we were going to tackle the story.

I met with the showrunners early on as well, and they really loved the work Anthony and I had done in the past together on the movie In Darkness and on Ripper Street. Anthony is a very visual director and they trusted us both, so that was really amazing. They wanted us to do Peaky but also to bring our own style and way of working to the table. We were massive fans of the show and had big respect for what the previous directors and cinematographers had done. We knew we had big shoes to fill!

How would you describe the look?
I would describe the Peaky Blinders look as very stylized and larger than life. Lighting-wise, it’s known for beams of light, smoke and atmosphere, and an almost theatrical look with overcranked camera moves and speed ramps. I wanted to push some realism into the show and not make things quite as theatrical this season, yet still keep that Peaky vibe. Tommy (Cillian Murphy) is battling with himself and his own demons more than anyone else in our story.

I wanted to try and show this with the lighting and the camera style. We also tried to use more developing shots in certain scenes to put the audience right in the center of the action and create a sense of visceral realism. We tried to motivate every decision based on how to tell the story in the best and most powerful way, to bring out the emotional aspects and really connect with the audience.

How did you work with the directors and colorist to achieve the intended look?
I used my DIT James Shovlar to create a look on set for the offline edit and we used that as a starting point for the grade. Then Anthony and I worked with grader Paul Staples at Deluxe in London, whom we had worked with on Ripper Street, and from the reference grade Paul created the finished look. Paul really understood where we wanted to take it, and I’m really pleased with how it turned out. We didn’t want it to feel too pushed but we still wanted it to look like Peaky Blinders.

Where was it shot, and how long was the shoot?
We shot around the northwest of England. We were based mainly in Manchester where we built a number of sets, including the Garrison, Houses of Parliament and Shelby HQ. We also shot in Birmingham, Liverpool, Rochdale and Bradford. We shot 16 five-day weeks in total.

How did you go about choosing the right camera and lenses for this project?
We had to shoot 4K, so the standard ARRI Alexa was off the table. A friend of mine, Sam McCurdy, BSC, had mentioned he had been shooting on the new Red Monstro and said he was really blown away by the images. I tested it and thought it was perfect for us. We coupled that with Cooke Anamorphic lenses and delivered in a 2:1 ratio.

Can you describe the lighting?
The lighting is a big part of Peaky Blinders, and it had to be right. My gaffer Oliver Whickman and I used our prep time to draw up detailed lighting plans, which included all of our machine and rigging requirements. We had 91 different lighting diagrams, and because we were scouting and planning the whole six episodes, it was very important that everything had to be written down in a clear, accurate way that could be passed on to our rigging crews.

We were scouting in September 2018, but some of the locations we weren’t shooting until January 2019 and we weren’t going to come back to them because we were so busy shooting. Oliver used the Shot Designer app to make the plans and we made printed books for the rigging gaffer and our best boy Alan Millar. It was certainly the most technically difficult job I have ever done in terms of planning, but everything went very smoothly.

Are there any scenes that you are particularly proud of or found most challenging?
There were many challenging scenes and sets. I’m really pleased how the opening sequence in Chinatown turned out. Also, there’s a big sequence set around a ballet, and I loved how that came together. I thought the design was great, with all the practicals that our designer Nicole Northridge installed in the set. There’s so much in this series, it’s hard to mention one thing.

I’m very proud of all our team. Everyone worked so hard and put so much into it, and I really think it shows. My camera operator Andrew Fletcher, focus puller Tom Finch and key grip Paul Kemp provided exceptional talent to the project. Not only are they great friends, they are the best of the best at what they do and I’m very proud of everything they did on Peaky.

Now let’s dig into some general DP questions. How did you become interested in cinematography?
I used to make skate videos, and then I studied photography in college and started to get interested in the idea of making films. I studied film production at university, and then started to work as a camera trainee once I left. At first I thought I wanted to be a director and made some short films, but after training under some great DPs — Sam McCurdy, BSC, and Lol Crawley, BSC — I realized that’s what I wanted to do, so I started shooting as much as I could and went from there.

What inspires you artistically? And how do you simultaneously stay on top of advancing technology?
I am inspired by watching movies or TV with great stories. I’m also inspired by working with talented people, great directors, great producers and people with a great passion for what they do. Peaky Blinders was massively inspiring as we got to work with some of the greatest actors of our age who are at the top of their game. Working at that level, you need to up your game and that also was massively inspiring.

I always stay on top of new technology by going to trade shows and reading trade magazines.

What new technology has changed the way you work?
I think the camera getting smaller has been the biggest change, as we can use drones, Trinity rigs and other gimbals to move the camera in ways we could never even have dreamed of five years ago.

What are some of your best practices you try to follow on each job?
I always try to bring all my own crew if I can. We have a tight team and it’s so much easier if I can bring all of my guys onto a job as we all have a shorthand with each other. Additionally, I always do detailed lighting diagrams with my gaffer and put in lots of prep and time into the planning of the lighting so we can move quickly and adapt on the day. I also try to build a good relationship with the director as much as I can before shooting.

Explain your ideal collaboration with the director or showrunner when starting a new project.
For me it’s ideal when you work with someone who wants to hear your ideas and bounces off you creatively. It should be a collaboration, and you should be able to talk openly about ideas and feel like you’re valued. That connection is very important — sometimes you click, and sometimes you don’t — it’s about chemistry.

What’s your go-to gear? Things you can’t live without?
Things change depending on the show, but I love a Technocrane and a good remote head. If the show has the budget, they are such brilliant tools to move a camera and find the shot quickly.

On Peaky Blinders we used the ARRI Trinity camera stabilizer quite a lot, which is especially great if you have operator Andrew Fletcher, who is a master!


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Editing Roundtable

By Randi Altman

The world of the editor has changed over the years as a result of new technology, the types of projects they are being asked to cut (looking at you social media) and the various deliverables they must create. Are deadlines still getting tighter and are budgets still getting smaller? The answer is yes, but some editors are adapting to the trends, and companies that make products for editors are helping by making the tools more flexible and efficient so pros can get to where they need to be.

We posed questions to various editors working in TV, short form and indies, who do a variety of jobs, as well as to those making the tools they use on a daily basis. Enjoy.

Cut+Run Editor/Partner Pete Koob

What trends do you see in commercial editing? Good or bad?
I remember 10 years ago a “colleague,” who was an interactive producer at the time, told me rather haughtily that I’d be out of work in a few years when all advertising became interactive and lived online. Nothing could have been further from the truth, of course, and I think editors everywhere have found that the viewer migration from TV to online has yielded an even greater need for content.

The 30-second spot still exists, both online and on TV, but the opportunities for brands to tell more in-depth stories across a wide range of media platforms mean that there’s a much more diverse breadth of work for editors, both in terms of format and style.

For better or worse, we’ve also seen every human being with a phone become their own personal brand manager with a highly cultivated and highly saturated digital presence. I think this development has had a big impact on the types of stories we’re telling in advertising and how we’re telling them. The genre of “docu-style” editing is evolving in a very exciting way as more and more companies are looking to find real people whose personal journeys embody their brands. Some of the most impressive editorial work I see these days is a fusion of styles — music video, fashion, documentary — all being brought to bear on telling these real stories, but doing it in a way that elevates them above the noise of the daily social media feed.

Selecting the subjects in a way that feels authentic — and not just like a brand co-opting someone’s personal struggle — is essential, but when done well, there are some incredibly inspirational and emotional stories to be told. And as a father of a young girl, it’s been great to show my daughter all the empowering stories of women being told right now, especially when they’re done with such a fresh and exciting visual language.

What is it about commercial editing that attracted you and keeps attracting you?
Probably the thing that keeps me most engaged with commercial editing is the variety and volume of projects throughout the year. Cutting commercials means you’re on to the next one before you’ve really finished the last.

The work feels fresh when I’m constantly collaborating with different people every few weeks on a diverse range of projects. Even if I’m cutting with the same directors, agencies or clients, the cast of characters always rotates to some degree, and that keeps me on my toes. Every project has its own unique challenges, and that compels me to constantly find new ways to tell stories. It’s hard for me to get bored with my work when the work is always changing.

Conoco’s Picnic spot

Can you talk about challenges specific to short-form editing?
I think the most obvious challenge for the commercial editor is time. Being able to tell a story efficiently and poignantly in a 60-, 30-, 15- or even six-second window reveals the spot editor’s unique talent. Sometimes that time limit can be a blessing, but more often than not, the idea on the page warrants a bigger canvas than the few seconds allotted.

It’s always satisfying to feel as if I’ve found an elegant editorial solution to telling the story in a concise manner, even if that means re-imagining the concept slightly. It’s a true testament to the power of editing and one that is specific to editing commercials.

How have social media campaigns changed the way you edit, if at all?
Social media hasn’t changed the way I edit, but it has certainly changed my involvement in the campaign as a whole. At its worst, the social media component is an afterthought, where editors are asked to just slap together a quick six-second cutdown or reformat a spot to fit into a square framing for Instagram. At its best, the editor is brought into the brainstorming process and has a hand in determining how the footage can be used inventively to disperse the creative into different media slots. One of the biggest assets of an editor on any project is his or her knowledge of the material, and being able to leverage that knowledge to shape the campaign across all platforms is incredibly rewarding.

Phillips 76 “Jean and Gene”

What system do you edit on, and what else other than editing are you asked to supply?
We edit primarily on Avid Media Composer. I still believe that nothing else can compete when it comes to project sharing, and as a company it allows for the smoothest means of collaboration between offices around the world. That being said, clients continue to expect more and more polish from the offline process, and we are always pushing our capabilities in motion graphics and visual effects in After Effects and color finessing in Blackmagic DaVinci Resolve.

What projects have you worked on recently?
I’ve been working on some bigger campaigns that consist of a larger number of spots. Two campaigns that come to mind are a seven-spot TV campaign for Phillips 76 gas stations and 13 short online films for Subaru. It’s fun to step back and look at how they all fit together, and sometimes you make different decisions about an individual spot based on how it sits in the larger group.

The “Jean and Gene” spots for 76 were particularly fun because it’s the same two characters who you follow across several stories, and it almost feels like a mini TV series exploring their life.

Earlier in the year I worked on a Conoco campaign, featuring the spots Picnic, First Contact and River, via Carmichael Lynch.

Red Digital Cinema Post and Workflow Specialist Dan Duran

How do you see the line between production and post blurring?
Both post and on-set production are evolving together. There has always been a fine line between them, but as technology grows more capable and more affordable, you're seeing tools that previously would have been used only in post bleed onto the set.

One of my favorite trends is seeing color-managed workflows on location. With full color-control pipelines feeding calibrated SDR and HDR monitors, you get a much more accurate representation of what the final image will look like. I've also seen growth in virtual production, where you're able to see realtime CGI and environments on set, directly through the camera, while shooting.

What are the biggest trends you’ve been facing in product development?
Everyone is always looking for the highest image quality at the best price point. As sensor technology advances, we’re seeing users ask for more and more out of the camera. Higher sensitivity, faster frame rates, more dynamic range and a digital RAW that allows them to effortlessly shape the images into a very specific creative look that they’re trying to achieve for their show. 8K provides a huge canvas to work with, offering flexibility in what they are trying to capture.

Smaller cameras can easily adapt to a whole new array of support accessories to achieve shots in ways that weren't always possible. Along with the camera/sensor revolution, Red has seen a lot of new cinema lenses emerge, each adding its own character to the image as it hits the photosites.

What trends do you see from editors these days? What enables their success?
I’ve seen post production really take advantage of modern tech to help improve and innovate new workflows. Being able to view higher resolutions, process footage faster and play back off a laptop shows how far hardware has come.

We have been working more with partners to help give pros the post tools they need to be more efficient. As an example, Red recently teamed up with Nvidia to not only get realtime full resolution 8K playback on laptops, but also allow for accelerated renders and transcode times much faster than before. Companies collaborating to take advantage of new tech will enable creative success.

AlphaDogs Owner/Editor Terence Curren

What trends do you see in editing? Good or bad?
There is a lot of content being created across a wide range of outlets and formats, from theatrical blockbusters and high-end TV shows all the way down to one-minute videos for Instagram. That’s positive for people desiring to use their editing skills to do a lot of storytelling. The flip side is that with so much content being created, the dollars to pay editors get stretched much thinner. Barring high-end content creation, the overall pay rates for editors have been going down.

The cost of content capture is a tiny fraction of what it was back in the film days. The good part of that is there is a greater likelihood that the shot you need was actually captured. The downside is that without the extreme expense of shooting associated with film, we’ve lost the disciplines of rehearsing scenes thoroughly, only shooting while the scene is being performed, only printing circled takes, etc. That, combined with reduced post schedules, means for the most part editors just don’t have the time to screen all the footage captured.

The commoditization of the toolsets (some editing systems are actually free), combined with the plethora of training materials readily available on the internet and in most schools, means that video storytelling is now a skill available to everyone. This means that the next great editors won’t face the barriers to entry that past generations experienced, but it also means there’s a much larger field of editors to choose from. The rules of supply and demand tell us that increased availability of, and competition in, a service reduces its cost. Traditionally, many editors have been able to make upper-middle-class livings in our industry, and I don’t see as much of that going forward.

To sum it up, it’s a great time to become an editor, as there’s plenty of work and therefore lots of opportunity. But along with that, the days of making a higher-end living as an editor are waning.

What is it about editing that attracted you and keeps attracting you?
I am a storyteller at heart. The position of editor is, in my opinion, matched only by the director and writer in responsibility for the structural part of telling the story. The writer has to invent the actual story out of whole cloth. The director has to play traffic cop with a cornucopia of moving pieces, under a very tight schedule, while trying to maintain the vision for the pieces of the story needed to deliver the final product. The editor takes all those pieces and gives the story its final rewrite for the audience to, hopefully, enjoy.

Night Walk

As with writing, there are plenty of rules to guide an editor through the process. Those rules, combined with experience, make the basic job almost mechanical much of the time. But there is a magic thing that happens when the muse strikes and I am inspired to piece shots together in some way that just perfectly speaks to the audience. Being such an important part of the storytelling process is uniquely rewarding for a storyteller like me.

Can you talk about challenges specific to short-form editing versus long-form?
Long-form editing is a test of your ability to maintain a fresh perspective of your story to keep the pacing correct. If you’ve been editing a project for weeks or months at a time, you know the story and all the pieces inside out. That can make it difficult to realize you might be giving too much information or not enough to the audience. Probably the most important skill for long form is the ability to watch a cut you’ve been working on for a long time and see it as a first-time viewer. I don’t know how others handle it, but for me there is a mental process that just blanks out the past when I want to take a critical fresh viewing.

Short form brings the challenge of being ruthless. You need to eliminate every frame of unnecessary material without sacrificing the message. While the editors don’t need to keep their focus for weeks or months, they have the challenge of getting as much information into that short time as possible without overwhelming the audience. It’s a lot like sprinting versus running a marathon. It exercises a different creative muscle that also enjoys an immediate reward.

Lafayette Escadrille

I can’t say I prefer either one over the other, but I would be bored if I didn’t get to do both over time, as they bring different disciplines and rewards.

How have social media campaigns changed the way you edit, if at all? Can you talk about the variety of deliverables and how that affects things?
Well, there is the horrible vertical framing trend, but that appears to be waning, thankfully. Seriously, though, the Instagram “one minute” limit forces us all to become commercial editors. Trying to tell the story in as short a timeframe as possible, knowing it will probably be viewed on a phone in a bright and noisy environment is a new challenge for seasoned editors.

There is a big difference between having a captive audience in a theater or at home in front of the TV and having a scattered audience whose attention you are trying to hold exclusively amid all the distractions. This seems to require more overt attention-grabbing tricks, and it’s unfortunate that storytelling has come to this point.

As for deliverables, they are constantly evolving, which means each project can bring all new requirements. We really have to work backward from the deliverables now. In other words, one of our first questions now is, “Where is this going?” That way we can plan the appropriate workflows from the start.

What system do you edit on and what else other than editing are you asked to supply?
I primarily edit on Media Composer, as it’s the industry standard in my world. As an editor, I can learn to use any tool; I have cut with Premiere and FCP. Knowing where to make the edit is far more important than knowing how to make it.

When I started editing in the film days, we just cut picture and dialogue. There were other editors for sound beyond the basic location-recorded sound. There were labs from which you ordered something as simple as a dissolve or a fade to black. There were color timers at the film lab who handled the look of the film. There were negative cutters that conformed the final master. There were VFX houses that handled anything that wasn’t actually shot.

Now, every editor has all the tools at hand to do all those tasks themselves. While this is helpful in keeping costs down and not slowing the process, it requires editors to be a jack-of-all-trades. However, what typically follows that term is “and master of none.”

Night Walk

One of the main advantages of separate people handling different parts of the process is that they could become really good at their particular art. Experience is the best teacher, and you learn more doing the same thing every day than occasionally doing it. I’ve met a few editors over the years who truly are masters of multiple skills, but they are few and far between.

Using myself as an example, if the client wants some creatively designed show open, I am not the best person for that. Can I create something? Yes. Can I use After Effects? Yes, to a minor degree. Am I the best person for that job? No. It is not what I have trained myself to do over my career. There is a different skill set involved in deciding where to make a cut versus how to create a heavily layered, graphically designed show open. If that is what I had dedicated my career to doing, then I would probably be really good at it, but I wouldn’t be as good at knowing where to make the edit.

What projects have gone through the studio recently?
We work on a lot of projects at AlphaDogs. The bulk of our work is modest-budget features, documentaries and unscripted TV shows. Recent examples include a documentary on World War I fighter pilots called The Lafayette Escadrille and an action-thriller starring Eric Roberts and Mickey Rourke called Night Walk.

Unfortunately for me I have become so focused on running the company that I haven’t been personally working on the creative side as much as I would like. While keeping a post house running in the current business climate is its own challenge, I don’t particularly find it as rewarding as “being in the chair.”

That feeling is offset by looking back at all the careers I have helped launch through our internship program and by offering entry-level employment. I’ve also tried hard to help editors over the years through venues like online user groups and, of course, our own Editors’ Lounge events and videos. So I guess that even running a post house can be rewarding in its own way.

Luma Touch Co-Founder/Lead Designer Terri Morgan

Have there been any talks among NLE providers about an open timeline? Being able to go between Avid, Resolve or Adobe with one file like an AAF or XML?
Because every edit system uses its own editing paradigms (think Premiere versus FCP X), creating an open exchange is challenging. However, there is an interesting effort by Pixar, OpenTimelineIO (https://github.com/PixarAnimationStudios/OpenTimelineIO), which includes adapters that bridge the structural differences between editing systems. There are also efforts toward standards in effects and color correction. The core editing functionality in LumaFusion is built to allow easy conversion in and out of different formats, so adapting to new standards will not be challenging in most cases.
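The interchange idea behind a project like OpenTimelineIO can be sketched with nothing more than an editor-neutral data structure. This toy example uses plain JSON and made-up clip names rather than the OTIO library's actual schema; it only illustrates the concept of a neutral timeline that per-NLE adapters would translate to and from formats like AAF or FCP XML:

```python
import json

# A hypothetical, editor-neutral timeline: each clip records its source
# media and its in/out points in frames. OpenTimelineIO formalizes this
# same idea with a richer schema plus adapters for individual NLEs.
timeline = {
    "name": "Spot_v3",
    "rate": 24,
    "tracks": [
        {
            "kind": "video",
            "clips": [
                {"media": "A001_C002.mov", "in": 120, "out": 264},
                {"media": "A001_C005.mov", "in": 48, "out": 172},
            ],
        }
    ],
}

# Serializing to a neutral text format is what makes the cut portable;
# an adapter's job is translating this form to and from each NLE.
text = json.dumps(timeline, indent=2)
rebuilt = json.loads(text)

# The round trip loses nothing: clip durations come back identically.
durations = [c["out"] - c["in"] for t in rebuilt["tracks"] for c in t["clips"]]
print(durations)  # [144, 124]
```

The hard part, as noted above, is not the container but the paradigm mismatch: concepts like FCP X's connected clips have no one-to-one counterpart in a track-based timeline, which is exactly what the adapters must negotiate.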

With AI becoming a popular idea and term, at what point does it stop? Is there a line where AI won’t go?
Looking at AI strictly as it relates to video editing, we can see that its power is incrementally increasing, and automatically generated movies are getting better. But while a neural network might be able to put together a coherent story, and even mimic a series of edits to match a professional style, it will still be cookie-cutter in nature, rather than being an artistic individual endeavor.

What we understand from our customers — and from our own experience — is that people get profound joy from being the storyteller or the moviemaker. And we understand that automatic editing does not provide the creative/ownership satisfaction that you get from crafting your own movie. You only have to make one automatic movie to learn this fact.

It is also clear that movie viewers feel a lack of connection or even annoyance when watching an automatically generated movie. You get the same feeling when you pay for parking at an automated machine, and the machine says, “Thank you, have a nice day.”

Here is a question from one of our readers: There are many advancements in technology coming in NLEs. Are those updates coming too fast and at an undesirable cost?
It is a constant challenge to maintain quality while improving a product. We use software practices like Agile, engage in usability tests and employ the most robust testing possible to minimize the effects of any changes in LumaFusion.

In the case of LumaFusion, we are consistently adding new features that support more powerful mobile video editing and features that support the growing and changing world around us. In fact, if we stopped developing so rapidly, the app would simply stop working with the latest operating system or wouldn’t be able to deliver solutions for the latest trends and workflows.

To put it all in perspective, I like to remind myself of the amount of effort it took to edit video 20 years ago compared to how much more efficient and fun it is to edit a video now. It gives me reason to forgive the constant changes in technology and software, and reason to embrace new workflows and methodologies.

Will we ever be at a point where an offline/online workflow will be completely gone?
Years ago, the difference in image quality provided a clear separation between offline and online. But today, online is differentiated by the ability to edit with dozens of tracks, specialized workflows, specific codecs, high-end effects and color. Even more importantly, online editing typically uses the specialized skills that a professional editor brings to a project.

Since you can now edit a complex timeline with six tracks of 4K video with audio and another six tracks of audio, basic color correction and multilayered titles straight from an iPad, for many projects you might find it unnecessary to move to an online situation. But there will always be times that you need more advanced features or the skills of a professional editor. Since not everybody wants to understand the complex world of post production, it is our challenge at Luma Touch to make more of these high-end features available without greatly limiting who can successfully use the product.

What are the trends you’re seeing in customer base from high-end post facility vs. independent editor/contractor?
High-end post facilities tend to have stationary workstations that employ skilled editor/operators. The professionals that find LumaFusion to be a valuable tool in their bag are often those who are responsible for the entire production and post production, including independent producers, journalists and high-end professionals who want the flexibility of starting to edit while on location or while traveling.

What are the biggest trends you’ve been seeing in product development?
In general, moving away from lengthy periods of development without user feedback. Moving toward getting feedback from users early and often is an Agile-based practice that really makes a difference in product development and greatly increases the joy that our team gets from developing LumaFusion. There’s nothing more satisfying than talking to real users and responding to their needs.

New development tools, languages and technologies are always welcome. At WWDC this year, Apple announced it would make it easier for third-party developers to port their iOS apps over to the desktop with Project Catalyst. This will likely be a viable option for LumaFusion.

You come from a high-end editing background, with deep experience editing at the workstation level. When you decided to branch off and do something on your own, why did you choose mobile?
Mobile offered a solution to some of the longest running wishes in professional video editing: to be liberated from the confines of an edit suite, to be able to start editing on location, to have a closer relationship to the production of the story in order to avoid the “fix it in post” mentality, and to take your editing suite with you anywhere.

It was only after starting to develop for mobile that we fully understood one of the most appealing benefits. Editing on an iPad or iPhone encourages experimentation, not only because you have your system with you when you have a good idea, but also because you experience a more direct relationship to your media when using the touch interface; it feels more natural and immersive. And experimentation equals creativity. From my own experience I know that the more you edit, the better you get at it. These are benefits that everyone can enjoy whether they are a professional or a novice.

Hecho Studios Editor Grant Lewis

What trends do you see in commercial editing? Good or bad?
Commercials are trending away from traditional, large-budget cinematic pieces to smaller, faster, budget-conscious ones. You’re starting to see it now more and more as big brands shy away from big commercial spectacles and pivot toward a more direct reflection of the culture itself.

Last year’s #CODNation work for the latest installment of the Call of Duty franchise exemplifies this by forgoing a traditional live-action cinematic trailer in favor of a larger number of game-capture, meme-like films. This pivot away from more dialogue-driven narrative structures is changing what we think of as a commercial. For better or worse, I see commercial editing leaning more into the fast-paced, campy nature of meme culture.

What is it about commercial editing that attracted you and keeps attracting you?
What excites me most about commercial editing is that it runs the gamut of the editorial genre. Sometimes commercials are a music video; sometimes they are dramatic anthems; other times they are simple comedy sketches. Commercials have the flexibility to exist as a multitude of narrative genres, and that’s what keeps me attracted to commercial editing.

Can you talk about challenges specific to short form versus long form?
The most challenging thing about short-form editing is finding time for breath. In a 30-second piece, where do you find a moment of pause? There’s always so much information being packed into smaller timeframes; the real challenge is editing at a sprint, but still having it feel dynamic and articulate.

How have social media campaigns changed the way you edit, if at all? Can you talk about the variety of deliverables and how that affects things?
All campaigns will either live on social media or have specific social components now. I think the biggest thing that has changed is being tasked with telling a compelling narrative in 10 or even five or six seconds. Now, the 60-second and 90-second anthem film has to be able to work in six seconds as well. It is challenging to boil concepts down to just a few seconds and still maintain a sense of story.

#CODNation

The range of deliverable aspect ratios editors are asked to produce is also a growing challenge. Unless a campaign is shot strictly for social, the DP probably shot for a traditional 16×9 framing. That means the editor is tasked with reframing all social content to work in all the different deliverable formats, acting almost as the DP for social in the post process. Shorter deliverables and a multitude of aspect ratios have become another layer of editing and demand a whole new editorial lens through which to view and process the project.
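The center-cut reframing described above is, at its core, simple crop arithmetic. A minimal sketch follows; the function name and frame sizes are illustrative, and in practice editors keyframe the crop to follow the action rather than locking it to center:

```python
def social_crop(src_w, src_h, target_ar):
    """Largest centered crop of a source frame that matches a target
    aspect ratio (width / height). Returns (x, y, w, h) in pixels."""
    crop_w = src_w
    crop_h = round(crop_w / target_ar)
    if crop_h > src_h:  # target is narrower/taller than the source: clamp height
        crop_h = src_h
        crop_w = round(crop_h * target_ar)
    x = (src_w - crop_w) // 2
    y = (src_h - crop_h) // 2
    return x, y, crop_w, crop_h

# A UHD 16x9 master reframed for a square 1:1 slot and a vertical 9:16 story:
print(social_crop(3840, 2160, 1.0))      # (840, 0, 2160, 2160)
print(social_crop(3840, 2160, 9 / 16))   # (1312, 0, 1215, 2160)
```

Note how much of the original frame a 9:16 crop discards (here, roughly two-thirds of the width), which is why reframing is a creative decision and not just a render setting.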

What system do you edit on and what else other than editing are you asked to supply?
I currently cut in Adobe Premiere Pro. I’m often asked to supply graphics and motion graphics elements for offline cuts as well. That means being comfortable with the whole Adobe suite of tools, including Photoshop and After Effects. From typesetting to motion tracking, editors are now asked to be well-versed in all tangential aspects of editorial.

What projects have you worked on recently?
I cut the launch film for Razer’s new Respawn energy drink. I also cut Toms Shoes’ most recent campaign, “Stand For Tomorrow.”

EditShare Head of Marketing Lee Griffin

What are the biggest trends you’ve been seeing in product development?
We see the need to produce more video content — and produce it faster than ever before — for social media channels. This means producing video in non-broadcast standards/formats and, more specifically, producing square video. To accommodate, editing tools need to offer user-defined options for manipulating size and aspect ratio.

What changes have you seen in terms of the way editors work and use your tools?
There are two distinct changes: One, productions are working with editors regardless of their location. Two, there is a wider level of participation in the content creation process.

In the past, the editor was physically located at the facility and was responsible for assembling, editing and finishing projects. However, with the growing demand for content production, directors and producers need options to tap into a much larger pool of talent, regardless of their location.

EditShare AirFlow and Flow Story enable editors to work remotely from any location. So today, we frequently see editors who use our Flow editorial tools working in different states and even on different continents.

With AI becoming a popular idea and term, at what point does it stop?
I think AI is quite exciting for the industry, and we do see its potential to significantly advance productions. However, AI is still in its infancy with regard to the content creation market. So from our point of view, the road to AI and its limits are yet to be defined. But we do have our own roadmap strategy for AI and will showcase some offerings integrated within our collaborative solutions at IBC 2019.

Will we ever be at a point where an offline/online workflow will be completely gone?
It depends on the production. Offline/online workflows are here to stay in the higher-end production environment. However, for fast turnaround productions, such as news, sports and programs (for example, soap operas and reality TV), there is no need for offline/online workflows.

What are the trends you’re seeing in your customer base, from high-end post facilities to independent editors? How is that informing your decisions on products and pricing?
With the increase in the number of productions thanks to OTTs, high-end post facilities are tapping into independent editors more and more to manage the workload. Often the independent editor is remote, requiring the facility to have a media management foundation that can facilitate collaboration beyond the facility walls.

So we are seeing a fundamental shift in how facilities are structuring their media operations to support remote collaborations. The ability to expand and contract — with the same level of security they have within the facility — is paramount in architecting their “next-generation” infrastructure.

What do you see as untapped potential customer bases that didn’t exist 10 to 20 years ago, and how do you plan on attracting and nurturing them? What new markets are you seeing?
We are seeing major growth beyond the borders of the media and entertainment industry in many markets. From banks to real estate agencies to insurance companies, video has become one of the main ways for them to communicate to their media-savvy clientele.

While EditShare solutions were initially designed to support traditional broadcast deliverables, we have evolved them to accommodate these new customers. And today, these customers want simplicity coupled with speed. Our development methodology puts this at the forefront of our core products.

Puget Systems Senior Labs Technician Matt Bach

Have there been any talks between NLE providers about an open timeline — essentially being able to go between Avid, Resolve or Adobe with one file, like an AAF or XML?
I have not heard anything on this topic from any developers, so keep in mind that this is pure conjecture, but the pessimistic side of me doesn’t see an “open timeline” being something that will happen anytime soon.

If you look at what many of the NLE developers are doing, they are moving more and more toward a pipeline that is completely contained within their ecosystem. Adobe has been pushing Dynamic Link in recent years in order to make it easier to move between Premiere Pro and After Effects. Blackmagic is going even a step further by integrating editing, color, VFX and audio all within DaVinci Resolve.

These examples are both great advancements that can really improve your workflow efficiency, but they are being done in order to keep the user within their specific ecosystem. As great as an open timeline would be, it seems to be counter to what Adobe, Blackmagic, and others are actively pursuing. We can still hold out hope, however!

With AI becoming a popular idea and term, at what point does it stop?
There are definitely limitations to what AI is capable of, but that line is moving year by year. For the foreseeable future, AI is going to take on a lot of the tedious tasks: tagging footage, content-aware fill, shot matching, image enhancement and the like. These are all perfect use cases for artificial intelligence, and many (like content-aware fill) are already being implemented in the software we have available right now.

The creative side is where AI is going to take the longest time to become useful. I’m not sure if there is a point where AI will stop from a technical standpoint, but I personally believe that even if AI was perfect, there is value in the fact that an actual person made something. That may mean that the masses of videos that get published will be made by AI (or perhaps simply AI-assisted), but just like furniture, food, or even workstations, there will always be a market for high-quality items crafted by human hands.

I think the main thing to keep in mind with AI is that it is just a tool. Moving from black and white to color, or from film to digital, was something that, at the time, people thought was going to destroy the industry. In reality, however, they ended up being a huge boon. Yes, AI will change how some jobs are approached — and may even eliminate some job roles entirely — but in the end, a computer is never going to be as creative and inventive as a real person.

There are many advancements in technology coming in NLEs seemingly daily, are those updates coming too fast and at an undesirable cost?
I agree that this is a problem right now, but it isn’t limited to just NLEs. We see the same thing all the time in other industries, and it even occurs on the hardware side, where a new product is launched simply because it can be, not because there is an actual need for it.

The best thing you can do as an end-user is to provide feedback to the companies about what you actually want. Don’t just sit on those bugs, report them! Want a feature? Most companies have a feature request forum that you can post on.

In the end, these companies are doing what they believe will bring them the most users. If they think a flashy new feature will do it, that is what they will spend money on. But if they see a demand for less flashy, but more useful, improvements, they will make that a priority.

Will we ever be at a point where an offline/online workflow will be completely gone?
Unless we hit some point where camera technology stops advancing, I don’t think offline editing is ever going to fully go away. It is amazing what modern workstations can handle from a pure processing standpoint, but even if the systems themselves could handle online editing, you also need to have the storage infrastructure that can keep up. With the move from HD to 4K, and now to 8K, that is a lot of moving parts that need to come together in order to eliminate offline editing entirely.

With that said, I do feel like offline editing is going to be used less and less. We are starting to hit the point that people feel their footage is higher quality than they need without having to be on the bleeding edge. We can edit 4K ProRes or even Red RAW footage pretty easily with the technology that is currently available, and for most people that is more than enough for what they are going to need for the foreseeable future.

What are the trends you’re seeing in customer base from high-end post facility vs. independent editor, and how is that informing your decisions on products and pricing?
From a workstation side, there really is not too much of a difference beyond the fact that high-end post facilities tend to have larger budgets that allow them to get higher-end machines. Technology is becoming so accessible that even hobbyist YouTubers often end up getting workstations from us that are very similar to what high-end professionals use.

The biggest differences typically revolve not around the pure power or performance of the system itself, but rather around how it interfaces with the other tools the editor is using. Things like whether the system has 10Gb (or fiber) networking, or whether it needs a video monitoring card to connect to a color-calibrated display, are often what set them apart.

What are the biggest trends you’ve been seeing in product development?
In general, the two big things that have come up over and over in recent years are GPU acceleration and artificial intelligence. GPU acceleration is a pretty straightforward advancement that lets software developers get a lot more performance out of a system for color correction, noise reduction and other tasks that are very well suited to running on a GPU.

Artificial intelligence is a completely different beast. We do quite a bit of work with people who are on the forefront of AI and machine learning, and it is going to have a large impact on post production in the near future. It has been a topic at conferences like NAB for several years, but with platforms like Adobe Sensei starting to take off, it is going to become more important.

However, I do feel that AI is going to be more of an enabling technology than one that replaces jobs. Yes, people are using AI to do crazy things like cut trailers without any human input, but I don’t think that will be its primary use anytime in the near future. Assisting with shot matching, tagging footage, noise reduction and image enhancement is where it is going to be truly useful.

What do you see as untapped potential customer bases that didn’t exist 10-20 years ago, and how do you plan on attracting and nurturing them? What new markets are you seeing?
I don’t know if there are any customer bases that are completely untapped, but I do believe that there is going to be more overlap between industries in the next few years. One example is how much realtime raytracing has improved recently, which is spurring the use of video game engines in film. This has been done for previsualization for quite a while, but the quality is getting so good that there are some films already out that include footage straight from the game engine.

For us on the workstation side, we regularly work with customers doing post and customers who are game developers, so we already have the skills and technical knowledge to make this work. The biggest challenge is really on the communication side. Both groups have their own set of jargon and general language, so we often find ourselves having to be the “translator” when a post house is looking at integrating realtime visualization in their workflow.

This exact scenario is also likely to happen with VR/AR as well.

Lucky Post Editor Marc Stone

What trends do you see in commercial editing?
I’m seeing an increase in client awareness of the mobility of editing. It’s freeing knowing you can take the craft with you as needed, and for clients, it can save the ever-precious commodity of time. Mobility means we can be an even greater resource to our clients with a flexible approach.

I love editing at Lucky Post, but I’m happy to edit anywhere I am needed — be it on set or on location. I especially welcome it if it means you can have face-to-face interaction with the agency team or the project’s director.

What is it about commercial editing that attracted you and keeps attracting you?
The fact that I can work on many projects throughout the year, with a variety of genres, is really appealing. Cars, comedy, emotional PSAs — each has a unique creative challenge, and I welcome the opportunity to experience different styles and creative teams. I also love putting visuals together with music, and that’s a big part of what I do in a 30- or 60-second spot… or even in a two-minute branded piece. That just wouldn’t be possible, to the same extent, in features or television.

Can you talk about challenges specific to short-form editing?
The biggest challenge is telling a story in 30 seconds. To communicate emotion and a sense of character and get people to care, all within a very short period of time. People outside of our industry are often surprised to hear that editors take hours and hours of footage and hone it down to a minute or less. The key is to make each moment count and to help make the piece something special.

Ram’s The Promise spot

How have social media campaigns changed the way you edit, if at all?
It hasn’t changed the way I edit, but it does allow some flexibility. Length isn’t constrained in the same way as broadcast, and you can conceive of things in a different way in part because of the engagement approach and goals. Social campaigns allow agencies to be more experimental with ideas, which can lead to some bold and exciting projects.

What system do you edit on, and what else other than editing are you asked to supply?
For years I worked on Avid Media Composer, and at Lucky Post I work in Adobe Premiere. As part of my editing process, I often weave sound design and music into the offline so I can feel if the edit is truly working. What I also like to do, when the opportunity presents, is to be able to meet with the agency creatives before the shoot to discuss style and mood ahead of time.

What projects have you worked on recently?
Over the last six months, I have worked on projects for Tazo, Ram and GameStop, and I am about to start a PSA for the Salvation Army. It gets back to the variety I spoke about earlier and the opportunity to work on interesting projects with great people.

Billboard Video Post Supervisor/Editor Zack Wolder

What trends do you see in editing? Good or bad?
I’m noticing a lot of glitch transitions and RGB splits being used. Much flashier edits, probably for social content to quickly grab the viewer’s attention.

Can you talk about challenges specific to short-form editing versus long-form?
With short-form editing, the main goal is to squeeze the most amount of useful information into a short period of time while not overloading the viewer. How do you fit an hour-long conversation into a three-minute clip while hitting all the important talking points and not overloading the viewer? With long-form editing, the goal is to keep viewers’ attention over a long period of time while always surprising them with new and exciting info.

What is it about editing that attracted you and keeps attracting you?
I loved the fact that I could manipulate time. That hooked me right away. The fact that I could take a moment that lasts only a few seconds and drag it out for a few minutes was incredible.

Can you talk about the variety of deliverables for social media and how that affects things?
Social media formats have made me think differently about framing a shot or designing logos. Almost all the videos I create start in the standard 16×9 framing but will eventually be delivered as a vertical. All graphics and transitions I build need to easily work in a vertical frame. Working in a 4K space and shooting in 4K helps tremendously.
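The crop math behind that 16×9-to-vertical workflow is simple enough to sketch. The helper below is purely illustrative (it isn’t tied to any particular NLE, and real reframes are usually keyframed to follow the action rather than locked to center):

```python
def vertical_crop(src_w: int, src_h: int, aspect_w: int = 9, aspect_h: int = 16):
    """Return (crop_width, x_offset) for a full-height, centered
    vertical crop of a wider source frame. Illustrative only."""
    crop_w = src_h * aspect_w // aspect_h
    x_offset = (src_w - crop_w) // 2
    return crop_w, x_offset

# A UHD 16x9 frame (3840x2160) yields a 1215-px-wide 9:16 window --
# comfortably more than the 1080 px a 1080x1920 vertical delivery needs,
# which is why shooting and working in 4K helps so much.
print(vertical_crop(3840, 2160))  # -> (1215, 1312)
```

An HD 1920×1080 source, by contrast, leaves only about 607 px of width for the same crop, which is why vertical deliverables cut from HD masters tend to look soft.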

Rainn Wilson and Billie Eilish

What system do you edit on, and what else other than editing are you asked to supply?
I edit in Adobe Premiere Pro. I’m constantly asked to supply design ideas and mockups for logos and branding and then to animate those ideas.

What projects have you worked on recently?
Recently, I edited a video that featured Rainn Wilson — who played Dwight Schrute on The Office — quizzing singer Billie Eilish, who is a big-time fan of the show.

Main Image: AlphaDogs editor Herrianne Catolos


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

DP Chat: Brandon Trost on the Ted Bundy film Extremely Wicked

By Randi Altman

To say that cinematographer Brandon Trost was born to work in the entertainment industry might not be hyperbole. This fourth-generation Angeleno has family roots in the industry — from his dad who did visual/physical effects, to his great uncle, actor Victor French (Little House on the Prairie).

Channeling his innate creativity, Trost studied cinematography at The Los Angeles Film School. His career kicked into high gear after winning the Best Cinematography award at the Newport Beach Film Festival for He Was a Quiet Man.

He has collaborated with Seth Rogen on several films, including The Interview, Neighbors and Neighbors 2: Sorority Rising, The Night Before and This Is the End. Additional credits include The Diary of a Teenage Girl, The Disaster Artist and Can You Ever Forgive Me? His most recent project — now streaming on Netflix — is Extremely Wicked, Shockingly Evil and Vile, the story of serial killer Ted Bundy (Zac Efron), this time told from his girlfriend’s perspective.

We reached out to Trost to find out about his process and his work on Extremely Wicked.

You’ve worked on a range of interesting projects from different genres. What attracts you to a story?
A movie can be told 100 different ways, so I ask myself where a movie can go — what’s the potential for doing something different? Especially if it is a genre I haven’t done. I really love jumping around.

And, of course, it all starts with the script and who the filmmakers are on a project — and synergy among us all during the interview process.

Tell us about Extremely Wicked, Shockingly Evil and Vile. How would you describe the general look of the film?
It’s a period movie first and foremost, but we wanted to elevate the production value as much as possible – on a tight budget. The director, Joe Berlinger, is a prolific documentarian. He really wanted to preserve his documentary sensibilities but with a cinematic, nostalgic quality to our approach. A lot of the film is shot handheld because we wanted to create an intimate portrait of the scenario, as horrifying as it is!

How did you go about choosing the right camera and lenses to achieve the look?
I chose Alexa Mini because of its size — I knew I’d be operating a lot, and Joe wanted a lot handheld. I also wanted to be able to make decisions on the fly and follow the actors as they tell this story. We had two cameras and mounted them with Panavision C Series anamorphics. I love these lenses. Each one has a specific characteristic. Plus, they are the same lenses of the era (made in 1968 and upgraded for today’s cameras), which matches the 1970s period we are depicting on screen.

Is there a challenging scene that you are particularly proud of how it turned out?
There is an extensive sequence covering the Miami trial, which was the first trial ever televised. It was a phenomenon back then, and we wanted to capture some of that energy. We were strapped for time, and lighting was built into the courtroom set. We also used a courtroom location that was augmented to mimic the set. We had so many pages to shoot, so I chose not to bring in any additional lights.

Plus, the execution was challenging. With so many long courtroom scenes back to back, we didn’t want it to feel monotonous. With the cameras and lighting set up, I could stand in the courtroom with the freedom to follow a character. I was like an invisible fly on the wall. That helped get us through all the material and infused some energy into the shots.

The sequence ends with Ted Bundy’s statement after firing all his lawyers and ultimately representing himself. We did that shot as a slow zoom, capturing this emotional, impactful speech — even though he’s lying! We zoomed all the way to just Zac’s eyes. His performance was so great, and the results are very satisfying, knowing we could have used twice as many days to shoot these scenes.

I’m glad I had the freedom to make bold choices, and that closing zoom is the only time we broke from shooting handheld. It has a very ‘70s, voyeuristic feel.

How did you become interested in cinematography?
As a kid, I always thought I’d do effects like my dad, but he saw my creative side and encouraged me to explore it. When I went to film school, I learned I had a knack for cinematography. I loved movies, and coming from a family who has worked in all sectors of the industry for four generations, I grew up with film.
Finding a frame feels innate to me, but it’s taken a lot of practice to get to where I am now.

What inspires you artistically?
I love the challenge of finding the right image to tell the story and using the right light to achieve that image. As a crew, we all have a different job, but we are all building the same house. We all bring a piece of ourselves to what we do, and it becomes like solving a puzzle to tell the director’s story and create it collaboratively with everyone. Imagery can be so powerful; you can use it to push a scene and evoke a feeling, whether it’s loneliness, strength, optimism or sadness. Camera and lens choices, movement, lighting… it all feeds into completing the puzzle.

I also find cinematography to be very instinctive. If I design a rulebook with the director early on a film, I know it’s just the foundation, something to build from. I like to be reactive – and lean into what feels right in the moment.

How do you stay on top of advancing tools that serve your vision?
I read industry mags, but also through the DITs on set, or the camera houses. I get shown new things and how they work. Or I’ll ask if they have heard about something. This builds my awareness for understanding fundamentals of the tool in case I want to use it.

What are some of your best practices or rules you try to follow on each job?
I’m a big lens guy. For me, the lenses make the movie, and I’m loving using vintage glass. Cameras are being designed with more and more resolution, and I’m always trying to add an analog softness. With every advancement in sharpness and noise reduction, I’m usually trying to take the electric edge off. I rely on lenses to help do that — or I’ll “stress” the camera at a higher ISO or do something in post with texture and grain. I’m usually trying to tear the image apart a little bit.

Panavision has even taken old lenses and customized them optically for me to create a more “shattered” look when it was right for the story.

And everything could go out the window if it serves the purpose of the story. It’s important as a DP to leave your artistic baggage behind if the story guides you to do something different. The story dictates how I work, and, as a DP, I have to be flexible in my approach. That’s what makes this work fun!

Has any recent or new technology changed the way you work?
The tool I use the most is my iPhone. I’ve got the Artemis app with the Director’s Viewfinder and the Cinescope app for adjusting aspect ratios, etc. I haven’t held a light meter in years.



Quick Chat: Lord Danger takes on VFX-heavy Devil May Cry 5 spot

By Randi Altman

Visual effects for spots have become more and more sophisticated, and the recent Capcom trailer promoting the availability of its game Devil May Cry 5 is a perfect example.

 The Mike Diva-directed Something Greater starts off like it might be a commercial for an anti-depressant with images of a woman cooking dinner for some guests, people working at a construction site, a bored guy trimming hedges… but suddenly each of our “Everyday Joes” turns into a warrior fighting baddies in a video game.

Josh Shadid

The hedge trimmer’s right arm turns into a futuristic weapon, the construction worker evokes a panther to fight a monster, and the lady cooking is seen with guns a blazin’ in both hands. When she runs out of ammo, and to the dismay of her dinner guests, her arms turn into giant saws. 

Lord Danger’s team worked closely with Capcom USA to create this over-the-top experience, and they provided everything from production to VFX to post, including sound and music.

We reached out to Lord Danger founder/EP Josh Shadid to learn more about their collaboration with Capcom, as well as their workflow.

How much direction did you get from Capcom? What was their brief to you?
Capcom’s fight-games director of brand marketing, Charlene Ingram, came to us with a simple request — make a memorable TV commercial that did not use gameplay footage but still illustrated the intensity and epic-ness of the DMC series.

What was it shot on and why?
We shot on both the Arri Alexa Mini and the Phantom Flex4K using Zeiss Super Speed MKII prime lenses, thanks to our friends at Antagonist Camera, and a Technodolly motion control crane arm. We used the Phantom on the Technodolly to capture the high-speed shots. We used that setup to speed-ramp through character actions, while maintaining 4K resolution for post in both the garden and kitchen transformations.

We used the Alexa Mini on the rest of the spot. It’s our preferred camera for most of our shoots because we love the combination of its size and image quality. The Technodolly allowed us to create frame-accurate, repeatable camera movements around the characters so we could seamlessly stitch together multiple shots as one. We also needed to cue the fight choreography to sync up with our camera positions.

You had a VFX supervisor on set. Can you give an example of how that was beneficial?
We did have a VFX supervisor on site for this production. Our usual VFX supervisor is one of our lead animators — having him on site to work with means we’re often starting elements in our post production workflow while we’re still shooting.

Assuming some of it was greenscreen?
We shot elements of the construction site and gardening scene on greenscreen. We used pop-ups to film these elements on set so we could mimic camera moves and lighting perfectly. We also took photogrammetry scans of our characters to help rebuild parts of their bodies during transition moments, and to emulate flying without requiring wire work — which would have been difficult to control outside during windy and rainy weather.

Can you talk about some of the more challenging VFX?
The shot of the gardener jumping into the air while the camera spins around him twice was particularly difficult. The camera starts on a 45-degree frontal, swings behind him and then returns to a 45-degree frontal once he’s in the air.

We had to digitally recreate the entire street, so we used the technocrane at the highest position possible to capture data from a slow pan across the neighborhood in order to rebuild the world. We also had to shoot this scene in several pieces and stitch it together. Since we didn’t use wire work to suspend the character, we also had to recreate the lower half of his body in 3D to achieve a natural looking jump position. That with the combination of the CG weapon elements made for a challenging composite — but in the end, it turned out really dramatic (and pretty cool).

Were any of the assets provided by Capcom? All created from scratch?
We were provided with the character and weapons models from Capcom — but these were in-game assets, and if you’ve played the game you’ll see that the environments are often dark and moody, so the textures and shaders really didn’t apply to a real-world scenario.

Our character modeling team had to recreate and re-interpret what these characters and weapons would look like in the real world — and they had to nail it — because game culture wouldn’t forgive a poor interpretation of these iconic elements. So far the feedback has been pretty darn good.

In what ways did being the production company and the VFX house on the project help?
The separation of creative from production and post production is an outdated model. The time it takes to bring each team up to speed, to manage the communication of ideas between creatives and to ensure there is a cohesive vision from start to finish, increases both the costs and the time it takes to deliver a final project.

We shot and delivered all of Devil May Cry’s Something Greater in four weeks total, all in-house. We find that working as the production company and VFX house reduces the ratio of managers per creative significantly, putting more of the money into the final product.



Virtual Roundtable: Storage

By Randi Altman

The world of storage is ever-changing and complicated. There are many flavors meant to match specific workflow needs. What matters most to users? Beyond easily installed, easy-to-use systems that let them focus on the creative and not the tech: scalability, speed, data protection, the cloud and the ability to handle higher and higher frame rates and resolutions — meaning larger and larger files. The good news is that the tools are growing to meet these needs. New technologies and software enhancements around NVMe are providing extremely low-latency connectivity that supports higher-performance workflows. Time will tell how that plays a part in day-to-day workflows.

For this virtual roundtable, we reached out to makers of storage and users of storage. Their questions differ a bit, but their answers often overlap. Enjoy.

Western Digital Global Director M&E Strategy & Market Development Erik Weaver

What is the biggest trend you’ve seen in the past year in terms of storage?
There are a couple that immediately come to mind. Both have to do with the massive amounts of data generated by the media and entertainment industry.

The first is the need to manage this data to understand what you have, where it resides and where it’s going. With multiple storage architectures in play — cloud, hybrid, legacy, remote, etc. — some may be out of your purview, making data management challenging. The key is abstraction: creating a unique identifier for every file everywhere so assets can be identified regardless of file name or location.

Some companies are already making progress using the C4 framework and the C4 ID system. With abstraction, you can apply rules so you always know where assets are located within these environments. It allows you to see all your assets and easily move them between storage tiers, if needed. Better data management will also help with analytics and AI/ML.
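The core idea is content addressing: derive the ID from the bytes themselves, so the same asset resolves to the same identifier wherever it lives. Below is a minimal Python sketch of that idea — a simplification, not an implementation of the actual C4 spec (which uses SHA-512 with a base58 encoding; this version keeps a plain hex digest):

```python
import hashlib

def asset_id(path: str, chunk_size: int = 1 << 20) -> str:
    """Location-independent asset ID derived from file contents.

    Simplified illustration of the content-addressing idea behind
    frameworks like C4: renamed or relocated copies of the same media
    file hash to the same identifier, so rules and lookups can follow
    the asset across storage tiers.
    """
    h = hashlib.sha512()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):  # stream; don't load whole file
            h.update(chunk)
    return "id-" + h.hexdigest()
```

Because the ID depends only on content, a copy of a shot on a remote NAS, in a cloud bucket and on an LTO restore all map to one identifier, regardless of what each copy is named.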

The second big trend, which we’ll talk about some more, is NVMe (and NVMe-over-Fabric) and the incredible speed and flexibility it provides. It has the ability to radically change the workflow for M&E to genuinely handle multiple 4K, 6K and 8K feeds and manage massive volumes of data. NVMe all-Flash arrays such as our IntelliFlash N-Series product line, as opposed to traditional NAS, bring transfer rates to a whole new level. Using the NVMe protocol can deliver three to five times faster performance than traditional flash technology and 20 times faster than traditional NAS.
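Some back-of-the-envelope arithmetic shows why those transfer rates matter for multi-stream UHD work (the frame sizes, bit depths and rates below are illustrative examples, not vendor figures):

```python
def stream_mb_per_s(width: int, height: int,
                    bits_per_pixel: int = 30, fps: int = 24) -> float:
    """Approximate uncompressed data rate of one video stream in MB/s.
    bits_per_pixel=30 assumes 10-bit RGB; illustrative figures only."""
    return width * height * bits_per_pixel * fps / 8 / 1e6

# One uncompressed 10-bit UHD (3840x2160) stream at 24fps is ~746 MB/s,
# and an 8K stream is roughly 4x that -- so a single ~1 GB/s link
# saturates quickly, which is where multi-GB/s NVMe arrays earn their keep.
```

Compressed camera formats are far lighter, of course, but the same math explains why juggling several 4K-to-8K feeds at once pushes past what a traditional NAS can sustain.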

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
For AI, VR and machine learning, there’s a general trend toward using Flash on the front end and object storage on the back end. Our customers use ActiveScale object storage to scale up and out and store the primary dataset, then use an NVMe tier to process that data. You need a storage architecture large enough to capture all those datasets, then analyze them. This is driving an extreme amount of data.

Take, for example, VR. The move from simple 360 video into volumetric capture is analogous to what film used to be: it’s expensive. With film, you only have a limited number of takes and only so much storage, but with digital you capture everything, then fix it in post. The expansion in storage needs is outrageous, and you need cost-effective storage that can scale.

As far as AI and ML, think about a popular Internet entertainment or streaming service. They’re running analytics looking at patterns of what customers are watching. They’re constantly growing and adapting in order to provide recommendations, 24×7. It would be tedious and downright unfeasible for humans to track this.

All of this requires compute power and storage. And having the right balance of performance, storage economics and low TCO is critical. We’re helping many companies define that strategy today leveraging our family of IntelliFlash, ActiveScale, Ultrastar and G-Technology branded products.

WD’s IntelliFlash N-Series NVMe all-Flash array

Can you talk about NVMe?
NVMe is a game changer. NVMe, with extreme performance, low latencies and incredible throughput, is opening up new possibilities for the media workflow. NVMe can offer 5x the performance of traditional Flash at comparable prices and will be the foundation for next-generation workflows for production, gaming and VFX. It’s a radical change to traditional workflows today.

NVMe also lays the foundation for NVMe over fabric (NVMf). With that, it’s important to mention the difference between NVMe and NVMf.

Unlike SAS and SATA protocols that were designed for disk drives, NVMe was designed from the ground up for persistent Flash memory technologies and the massively parallel transfer capabilities of SSDs. As such, it delivers significant advantages including extreme performance, improved queuing, low-latency and the reduction of I/O stack overheads.

NVMf is a networked storage protocol that allows NVMe Flash storage to be disaggregated from the server and made widely available to concurrent applications and multiple compute resources. There is no limit to the number of servers or NVMf storage devices that can be shared. It promises to deliver the lowest end-to-end latency from application to storage while delivering agility and flexibility by sharing resources throughout the enterprise.

The bottom line is NVMe and NVMf are enablers for next-generation workflows that can give you a competitive edge in terms of efficiency, productivity and extracting the most value from your data.

What do you do in your products to help safeguard your users’ data?
As one of the largest storage companies in the world, we understand the value of data. Our goal is to deliver the highest quality storage solutions that deliver consistent performance, high-capacity and value to our customers. We design and manufacture storage solutions from silicon to systems. This vertical innovation gives us a unique advantage to fine-tune and optimize virtually any layer within the stack, including firmware, software, processing, interconnect, storage, mechanical and even manufacturing disciplines. This approach helps us deliver purpose-built products across all of our brands that provide the performance, reliability, total cost of ownership and sustainability demanded by our customers.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
We believe hybrid workflows are critical in today’s environment. M&E companies are increasingly leveraging a hybrid of on-premises and multi-cloud architectures. Core intellectual property (in the form of digital assets) is stored in private, secure storage, while they access multi-cloud vendors to render, run post workflows or take advantage of various tools and services such as AI.

Object storage in a private cloud configuration is enabling new capabilities by providing "warm" online access to petabyte-scale repositories that were previously stored on tape or other "cold" storage archives. Suddenly, with this hybrid approach, companies can access and retain all their assets, and create new content services, monetize opportunities or run analytics across a much larger dataset. Combined with the ability to use AI on audience viewing, demographic and geographic data, this allows companies to deliver high-value, tailored content and services on a global scale.

Final Thoughts?
We’re seeing a third dimension to the “digital dilemma.” The digital dilemma is not new and has been talked about before. The first dilemma is the physical device itself. No physical device lasts forever. Tape and media degradation happen over extended periods of time. You also need to think about the limitation of the device itself and will it become obsolete? The second is the age of the media format and compatibility with modern operating systems, leaving data possibly unreadable. But the third thing that’s happening, and it’s quite serious, is that the experts who manage the libraries are “aging out” and nearing retirement. They’ve owned or worked on these infrastructures for generations and have this tribal knowledge of what assets they have and where they’re stored as well as the fickle nature of the underlying hardware. Because of these factors, we strongly encourage that companies evaluate their archive strategy, or potentially risk losing enormous amounts of data.

Company 3 NY and Deluxe NY Data/IO Supervisor Hollie Grant

Company 3 specializes in DI, finishing and color correction, and Deluxe is an end-to-end post house working on projects from dailies through finishing.

Hollie Grant

How much data did you use/backup this year? How much more was that than the previous year? How much more data do you expect to use next year?
Over the past year, as a rough estimate, my team dealt with around 1.5 petabytes of data. The latter half of this year really ramped up storage-wise. We were cruising along with a normal increase in data per show until the last few months where we had an influx of UHD, 4K and even 6K jobs, which take up to quadruple the space of a “normal” HD or 2K project.

I don’t think we’ll see a decrease in this trend with the take off of 4K televisions as the baseline for consumers and with streaming becoming more popular than ever. OTT films and television have raised the bar for post production, expecting 4K source and native deliveries. Even smaller indie films that we would normally not think twice about space-wise are shooting and finishing 4K in the hopes that Netflix or Amazon will buy their film. This means that even for the projects that once were not a burden on our storage will have to be factored in differently going forward.

Have you ever lost important data due to a hardware failure? Have you ever lost data due to an operator error? (Accidental overwrite, etc.)
Triple knock on wood! In my time here we have not lost any data due to an operator error. We follow strict procedures and create redundancy in our data, so if there is a hardware failure we don’t lose anything permanently. We have received hard drives or tapes that failed, but this far along in the digital age most people have more than one copy of their work, and if they don’t, a backup is the first thing I recommend.

Do you find access speed to be a limiting factor with your current storage solution?
We can reach read and write speeds of 1GB/s on our SAN. We have a pretty fast configuration of disks. Of course, the more sessions you have trying to read or write on a volume, the harder it can be to get playback. That's why we have around 2.5PB of storage across many volumes, so I can organize projects based on the bandwidth they will need and their schedules, and we don't have trouble with speed. This is one of the more challenging aspects of my day-to-day as the size of projects and their demand for larger frame playback increase.

Showtime’s Escape From Dannemora – Co3 provided color grading and conform.

What percentage of your data’s value do you budget toward storage and data security?
I can’t speak to exact percentages, but storage upgrades are a large part of our yearly budget. There is always an ask for new disks in the funding for the year because every year we’re growing along with the size of the data for productions. Our production network infrastructure is designed around security regulations set forth by many studios and the MPAA. A lot of work goes into maintaining that and one of the most important things to us is keeping our clients’ data safe behind multiple “locks and keys.”

What trends do you see in storage?
I see the obvious trends in physical storage size decreasing while bandwidth and data size increases. Along those lines I’m sure we’ll see more movies being post produced with everything needed in “the cloud.” The frontrunners of cloud storage have larger, more secure and redundant forms of storing data, so I think it’s inevitable that we’ll move in that direction. It will also make collaboration much easier. You could have all camera-original material stored there, as well as any transcoded files that editorial and VFX will be working with. Using the cloud as a sort of near-line storage would free up the disks in post facilities to focus on only having online what the artists need while still being able to quickly access anything else. Some companies are already working in a manner similar to this, but I think it will start to be a more common solution moving forward.

creative.space's Nick Anderson

What is the biggest trend you’ve seen in the past year in terms of storage?
The biggest trend is NVMe storage. SSDs are finally entering a range where they are forcing storage vendors to re-evaluate their architectures to take advantage of its performance benefits.

Nick Anderson

Can you talk more about NVMe?
When it comes to NVMe, speed, price and form factor are three key things users need to understand. When it comes to speed, it blasts past the limitations of hard drive speeds to deliver 3GB/s per drive, which requires a faster connector (PCIe) to take advantage of. With parallel access and higher IOPS (input/output operations per second), NVMe drives can handle operations that would bring an HDD to its knees. When it comes to price, it is cheaper per GB than past iterations of SSD, making it a feasible alternative for tier one storage in many workflows. Finally, when it comes to form factor, it is smaller and requires less hardware bulk in a purpose-built system, so you can get more drives in a smaller amount of space at a lower cost. People I talk to are surprised to hear that they have been paying a premium to put fast SSDs into HDD form factors that choke their performance.
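As a back-of-the-envelope illustration of what that 3GB/s figure buys in practice (the numbers below assume uncompressed 10-bit RGB packed into 32-bit words, a common DPX layout, and are not a benchmark of any product):

```python
def stream_bandwidth(width, height, bytes_per_pixel, fps):
    """Sustained bytes/second needed to play one uncompressed stream."""
    return width * height * bytes_per_pixel * fps

# 4K DCI frame, 10-bit RGB packed in 32-bit words: 4 bytes per pixel
one_4k = stream_bandwidth(4096, 2160, 4, 24)  # ~0.85 GB/s per stream

nvme_drive = 3_000_000_000  # ~3 GB/s per NVMe drive, per the figure above
sata_ssd = 550_000_000      # ~550 MB/s, roughly the SATA interface ceiling

print(nvme_drive // one_4k)  # 3 concurrent uncompressed 4K streams
print(sata_ssd // one_4k)    # 0 -- a SATA SSD can't sustain even one
```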

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
This is something we have been thinking a lot about and we have some exciting stuff in the works that addresses this need that I can’t go into at this time. For now, we are working with our early adopters to solve these needs in ways that are practical to them, integrating custom software as needed. Moving forward we hope to bring an intuitive and seamless storage experience to the larger industry.

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
This gets down to a shift in what kind of data is being processed and how it can be accessed. When it comes to video, big media files and image sequences have driven the push for better performance. 360° video pushes storage performance further, past 4K into 8K, 12K, 16K and beyond. On the other hand, as CGI continues to become more photorealistic and we emerge from the "uncanny valley," the performance need shifts from big data to small data in many cases as render engines are used instead of video or image files. Moving lots of small data is what these systems were originally designed for, so it will be a welcome shift for users.

When it comes to AI, our file system architectures and NVMe technology are making data easily accessible with less impact on performance. Apart from performance, we monitor thousands of metrics on the system that can be easily connected to your machine learning system of choice. We are still in the early days of this technology and its application to media production, so we are excited to see how customers take advantage of it.

What do you do in your products to help safeguard your users’ data?
From a data integrity perspective, every bit of data gets checksummed on copy and can be restored from that checksum if it gets corrupted. This means that the storage is self-healing, with 100% data integrity once it is written to disk.
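The mechanics of checksum-on-write with self-healing reads can be sketched in a few lines. This is a simplified, file-level illustration of the general technique (real file systems do it per block, with parity or replicas managed internally), not creative.space's actual implementation:

```python
import hashlib

def write_with_checksum(store, mirror, name, data):
    """Record the data plus its SHA-256 digest, and keep a redundant copy."""
    digest = hashlib.sha256(data).hexdigest()
    store[name] = (data, digest)
    mirror[name] = data

def read_self_healing(store, mirror, name):
    """Verify data against its checksum on every read; if it no longer
    matches, restore the good copy from the redundant mirror."""
    data, digest = store[name]
    if hashlib.sha256(data).hexdigest() != digest:
        data = mirror[name]            # heal from the redundant copy
        store[name] = (data, digest)
    return data

store, mirror = {}, {}
write_with_checksum(store, mirror, "frame_0001.exr", b"pixels...")
store["frame_0001.exr"] = (b"bit rot!", store["frame_0001.exr"][1])  # simulate corruption
print(read_self_healing(store, mirror, "frame_0001.exr"))  # b'pixels...'
```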

As far as safeguarding data from external threats, this is a complicated issue. There are many methods of securing a system, but for post production, performance can't be compromised. For companies following MPAA recommendations, putting the storage behind physical security is often considered enough. Unfortunately, for many companies without an IT staff, this is where the security stops and the system is left open once you get access to the network. To solve this problem, we developed an LDAP user management system that is built into our units and provides that extra layer of software security at no additional charge. Storage access becomes user-based, so system activity can be monitored. As far as administering support, we designed an API gatekeeper to manage data to and from the database that is auditable and secure.

AlphaDogs' Terence Curren

Alpha Dogs is a full-service post house in Burbank, California. They provide color correction, graphic design, VFX, sound design and audio mixing.

How much data did you use/backup this year? How much more was that than the previous year? How much more data do you expect to use next year?
We are primarily a finishing house, so we use hundreds of TBs per year on our SAN. We work at higher resolutions, which means larger file sizes. When we have finished a job and delivered the master files, we archive to LTO and clear the project off the SAN. When we handle the offline on a project, obviously our storage needs rise exponentially. We do foresee those requirements rising substantially this year.

Terence Curren

Have you ever lost important data due to a hardware failure? Have you ever lost data due to an operator error? (Accidental overwrite, etc.)
We’ve been lucky in that area (knocking on wood) as our SANs are RAID-protected and we maintain a degree of redundancy. We have had clients’ transfer drives fail. We always recommend they deliver a copy of their media. In the early days of our SAN, which is the Facilis TerraBlock, one of our editors accidentally deleted a volume containing an ongoing project. Fortunately, Facilis engineers were able to recover the lost partition as it hadn’t been overwritten yet. That’s one of the things I really have appreciated about working with Facilis over the years — they have great technical support which is essential in our industry.

Do you find access speed to be a limiting factor with your current storage solution?
Not yet. As we get forced into heavily marketed but unnecessary formats like the coming 8K, we will have to scale to handle the bandwidth overload. I am sure the storage companies are all very excited about that prospect.

What percentage of your data’s value do you budget toward storage and data security?
Again, we don’t maintain long-term storage on projects so it’s not a large consideration in budgeting. Security is very important and one of the reasons our SANs are isolated from the outside world. Hopefully, this is an area in which easily accessible tools for network security become commoditized. Much like deadbolts and burglar alarms in housing, it is now a necessary evil.

What trends do you see in storage?
More storage and higher bandwidths, some of which is being aided by solid state storage, which is very expensive on our level of usage. The prices keep coming down on storage, yet it seems that the increased demand has caused our spending to remain fairly constant over the years.

Cinesite London's Chris Perschky

Perschky ensures that Cinesite’s constantly evolving infrastructure provides the technical backbone required for a visual effects facility. His team plans, installs and implements all manner of technology, in addition to providing technical support to the entire company.

Chris Perschky

How much data did you use/backup this year? How much more was that than the previous year? How much more data do you expect to use next year?
Depending on the demands of the project that we are working on we can generate terabytes of data every single day. We have become increasingly adept at separating out data we need to keep long-term from what we only require for a limited time, and our cleanup tends to be aggressive. This allows us to run pretty lean data sets when necessary.

I expect more 4K work to creep in next year and, as such, expect storage demands to increase accordingly.

Have you ever lost important data due to a hardware failure? Have you ever lost data due to an operator error? (Accidental overwrite, etc.)
Our thorough backup procedures mean that we have an offsite copy of all production data within a couple of hours of it being written. As such, when an artist has accidentally overwritten a file we are able to retrieve it from backup swiftly.

Do you find access speed to be a limiting factor with your current storage solution?
Only remotely, thereby requiring a caching solution.

What percentage of your data’s value do you budget toward storage and data security?
Due to the requirements of our clients, we do whatever is necessary to ensure the security of their IP and our work.

Cinesite also worked on Iron Spider for Avengers Infinity War ©2018 Marvel Studios

What trends do you see in storage?
The trendy answer is to move all storage to the cloud, but it is just too expensive. That said, the benefits of cloud storage are well documented, so we need some way of leveraging it. I see more hybrid on-prem and cloud solutions providing the best of both worlds as demand requires. Full SSD solutions are still way too expensive for most of us, but multi-tier storage solutions will have a larger SSD cache tier as prices drop.

Panasas' RW Hawkins

What is the biggest trend you’ve seen in the past year in terms of storage?
The demand for more capacity certainly isn’t slowing down! New formats like ProRes RAW, HDR and stereoscopic images required for VR continue to push the need to scale storage capacity and performance. New Flash technologies address the speed, but not the capacity. As post production houses scale, they see that complexity increases dramatically. Trying to scale to petabytes with individual and limited file servers is a big part of the problem. Parallel file systems are playing a more important role, even in medium-sized shops.

RW Hawkins

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
VR (and, more generally, interactive content creation) is particularly interesting as it takes many of the aspects of VFX and interactive gaming and combines them with post. The VFX industry, for many years, has built batch-oriented pipelines running on multiple Linux boxes to solve many of their production problems. This same approach works well for interactive content production where the footage often needs to be pre-processed (stitched, warped, etc.) before editing. High speed, parallel filesystems are particularly well suited for this type of batch-based work.

The AI/ML space is red hot, and the applications seem boundless. Right now, much of the work is being done at a small scale where direct-attach, all-Flash storage boxes serve the need. As this technology is used on a larger scale, it will put demands on storage that can’t be met by direct-attached storage, so meeting those high IOP needs at scale is certainly something Panasas is looking at.

Can you talk about NVMe?
NVMe is an exciting technology, but not a panacea for all storage problems. While being very fast, and excellent at small operations, it is still very expensive, has small capacity and is difficult to scale to petabyte sizes. The next-generation Panasas ActiveStor Ultra platform uses NVMe for metadata while still leveraging spinning disk and SATA SSD. This hybrid approach, using each storage medium for what it does best, is something we have been doing for more than 10 years.

What do you do in your products to help safeguard your users’ data?
Panasas uses object-based data protection with RAID 6+. This software-based erasure code protection, at the file level, provides the best scalable data protection. Only files affected by a particular hardware failure need to be rebuilt, and increasing the number of drives doesn't increase the likelihood of losing data. In a sense, every file is individually protected. On the hardware side, all Panasas hardware provides non-volatile components, including cutting-edge NVDIMM technology to protect our customers' data. The file system has been proven in the field. We wouldn't have the high-profile customers we do if we didn't provide superior performance as well as superior data protection.
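To make the per-file erasure-coding idea concrete, here is a deliberately minimal single-parity sketch. Panasas' RAID 6+ codes tolerate multiple simultaneous failures; plain XOR parity shows the core reconstruction principle with one:

```python
def xor_parity(shards):
    """Compute a parity shard: the XOR of all data shards, byte by byte."""
    parity = bytearray(len(shards[0]))
    for shard in shards:
        for i, b in enumerate(shard):
            parity[i] ^= b
    return bytes(parity)

def reconstruct(surviving, parity):
    """Rebuild the single missing shard from the survivors plus parity."""
    return xor_parity(surviving + [parity])

shards = [b"AAAA", b"BBBB", b"CCCC"]        # one file, striped into 3 shards
parity = xor_parity(shards)
lost = shards.pop(1)                        # a drive holding shard 1 fails
print(reconstruct(shards, parity) == lost)  # True
```

Because parity is computed per file, only the files whose shards sat on the failed drive need this reconstruction, which is exactly the rebuild-time advantage described above.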

Users want more flexible workflows — storage in the cloud, on-premises, etc. How are your offerings reflective of that?
While Panasas leverages an object storage backend, we provide our POSIX-compliant file system client called DirectFlow to allow standard file access to the namespace. Files and directories are the “lingua franca” of the storage world, allowing ultimate compatibility. It is very easy to interface between on-premises storage, remote DR storage and public cloud/REST storage using DirectFlow. Data flows freely and at high speed using standard tools, which makes the Panasas system an ideal scalable repository for data that will be used in a variety of pipelines.

Alkemy X's Dave Zeevalk

With studios in Philly, NYC, LA and Amsterdam, Alkemy X provides live-action, design, post, VFX and original content for spots, branded content and more.

Dave Zeevalk

How much data did you use/backup this year? How much more was that than the previous year? How much more data do you expect to use next year?
Each year, our VFX department generates nearly a petabyte of data, from simulation caches to rendered frames. This year, we have seen a significant increase in data usage as client expectations continue to grow and 4K resolution becomes more prominent in episodic television and feature film projects.

In order to use our 200TB server responsibly, we have created a solid system for preserving necessary data and clearing unnecessary files on a regular basis. Additionally, we are diligent about archiving finished projects to our LTO tape systems and removing them from our production server.

Have you ever lost important data due to a hardware failure? Have you ever lost data due to an operator error? (Accidental overwrite, etc.)
Because of our data redundancy, through hourly snapshots and daily backups, we have avoided any data loss even when hardware fails. With snapshots and backups on a secondary server, we are able to bring data back online extremely quickly if our production server suffers a hardware failure. Years ago, while migrating to Linux, a software issue completely wiped out our production server. Within two hours, we were able to migrate all data back from our snapshots and backups with no data loss.
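A rotation like the one described above, hourly snapshots thinning out into daily backups, reduces to a small retention policy. The function below is a generic sketch with illustrative retention counts, not Alkemy X's actual tooling:

```python
from datetime import datetime, timedelta

def snapshots_to_keep(snaps, hourly=24, daily=7):
    """Keep the newest `hourly` snapshots, plus the newest snapshot
    from each of the most recent `daily` distinct calendar days."""
    snaps = sorted(snaps, reverse=True)
    keep = set(snaps[:hourly])
    seen_days = []
    for s in snaps:
        if s.date() not in seen_days:
            seen_days.append(s.date())
            if len(seen_days) <= daily:
                keep.add(s)  # newest snapshot of that day
    return keep

# Ten days of hourly snapshots: 240 in all, 30 survive pruning
snaps = [datetime(2019, 1, 1) + timedelta(hours=h) for h in range(240)]
print(len(snapshots_to_keep(snaps)))  # 30 (24 hourlies + 6 older dailies)
```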

Do you find access speed to be a limiting factor with your current storage solution?
There are a few scenarios where we do experience some issues with access speed to the production server. We do a good amount of heavy simulation work, at times writing dozens of terabytes per hour. At peak times, we have experienced some throttled speeds due to the amount of data being written to the server. Our VFX team also has a checkpoint system for simulation, where raw data is saved to the server in parallel to the simulation cache. This allows us to restart a simulation midway through the process if a render node drops or fails the job. This raw data is extremely heavy, so while using checkpoints on heavy simulations we also experience some slower-than-normal speeds.

What percentage of your data’s value do you budget toward storage and data security?
Our active production server houses 200TB of storage space. We have a secondary backup server with equivalent storage space, to which we write hourly snapshots and daily backups.

What trends do you see in storage?
With client expectations continuing to rise, and 4K (and higher at times) becoming more and more regular on jobs, the need for more storage space is ever increasing.

Quantum's Jamie Lerner

What is the biggest trend you’ve seen in the past year in terms of storage?
Although the digital transformation to higher resolution content in M&E has been taking place over the past several years, the interesting aspect is that the pace of change over the past 12 months is accelerating. Driving this trend is the mainstream adoption of 4K and high dynamic range (HDR) video, and the strong uptick in applications requiring 8K formats.

Jamie Lerner

Virtual reality and augmented reality applications are booming across the media and entertainment landscape; everywhere from broadcast news and gaming to episodic television. These high-resolution formats add data to streams that must be ingested at a much higher rate, consume more capacity once stored and require significantly more bandwidth when doing realtime editing. All of this translates into a significantly more demanding environment, which must be supported by the storage solution.

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
New technologies for producing stunning visual content are opening tremendous opportunities for studios, post houses, distributors, and other media organizations. Sophisticated next-generation cameras and multi-camera arrays enable organizations to capture more visual information, in greater detail than ever before. At the same time, innovative technologies for consuming media are enabling people to view and interact with visual content in a variety of new ways.

To capitalize on new opportunities and meet consumer expectations, many media organizations will need to bolster their storage infrastructure. They need storage solutions that offer scalable capacity to support new ingest sources that capture huge amounts of data, with the performance to edit and add value to this rich media.

Can you talk about NVMe?
The main benefit of NVMe storage is that it provides extremely low latency — therefore allowing users to seek content at very high speed — which is ideal for high stream counts and compressed 4K content workflows.

However, NVMe resources are expensive. Quantum addresses this issue head-on by leveraging NVMe over fabrics (NVMeoF) technology. With NVMeoF, multiple clients can use pooled NVMe storage devices across a network at local speeds and latencies. And when combined with our StorNext, all data is accessible by multiple clients in a global namespace, making this high-performance tier of storage much more cost-effective. Finally, Quantum is in early field trials of a new advancement that will allow customers to benefit even more from NVMe-enabled storage.

What do you do in your products to help safeguard your users’ data?
A storage system must be able to accommodate policies ranging from “throw it out when the job is done” to “keep it forever” and everything in between. The cost of storage demands control over where data lives and when, how many copies of the data exist and where those copies reside over time.

Xcellis scale-out storage powered by StorNext incorporates a broad range of features for data protection. This includes integrated features such as RAID, automated copying, versioning and data replication functionality, all included within our latest release of StorNext.
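The policy spectrum described above, from "throw it out when the job is done" to "keep it forever," boils down to rules evaluated against an asset's age. The sketch below is generic policy logic with made-up names and thresholds, not StorNext configuration syntax:

```python
from datetime import date

# Illustrative policies: after `hot_days` an asset leaves primary storage;
# `expire_days=None` means "keep it forever".
POLICIES = {
    "dailies": {"hot_days": 30, "expire_days": 180},
    "masters": {"hot_days": 90, "expire_days": None},
}

def placement(kind, created, today):
    """Decide where an asset should live based on its age and its policy."""
    policy = POLICIES[kind]
    age = (today - created).days
    if policy["expire_days"] is not None and age > policy["expire_days"]:
        return "delete"
    return "primary" if age <= policy["hot_days"] else "archive"

today = date(2019, 6, 1)
print(placement("dailies", date(2019, 5, 20), today))  # primary
print(placement("dailies", date(2018, 1, 1), today))   # delete
print(placement("masters", date(2018, 1, 1), today))   # archive
```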

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
Given the differences in size and scope of organizations across the media industry, production workflows are incredibly varied and often geographically dispersed. Within this context, flexibility becomes a paramount feature of any modern storage architecture.

We provide flexibility in a number of important ways for our customers. From the perspective of system architecture, and recognizing there is no one-size-fits-all solution, StorNext allows customers to configure storage with multiple media types that balance performance and capacity requirements across an entire end-to-end workflow. Second, and equally important for those companies that have a global workforce, our data replication software FlexSync allows content to be rapidly distributed to production staff around the globe. And no matter what tier of storage the data resides on, FlexTier provides coordinated and unified access to the content within a single global namespace.

EditShare's Bill Thompson

What is the biggest trend you’ve seen in the past year in terms of storage?
In no particular order, the biggest trends for storage in the media and entertainment space are:
1. The need to handle higher and higher data rates associated with higher resolution and higher frame rate content. Across the industry, this is being addressed with Flash-based storage and the use of emerging technology like NVMe over "X" and 25/50/100G networking.

Bill Thompson

2. The ever-increasing concern about content security and content protection, backup and restoration solutions.

3. The request for more powerful analytics solutions to better manage storage resources.

4. The movement away from proprietary hardware/software storage solutions toward ones that are compatible with commodity hardware and/or virtual environments.

Can you talk about NVMe?
NVMe technology is very interesting and will clearly change the M&E landscape going forward. One of the challenges is that we are in the midst of changing standards, and we expect current PCIe-based NVMe components to be replaced by U.2/M.2 implementations. This migration will require important changes to storage platforms.

In the meantime, we offer non-NVMe Flash-based storage solutions whose performance and price points are equivalent to those claimed by early NVMe implementations.

What do you do in your products to help safeguard your users’ data?
EditShare has been at the forefront of user data protection for many years, beginning with our introduction of disk-based and tape-based automated backup and restoration solutions.

We expanded the types of data protection schemes and provided easy-to-use management tools that allow users to tailor the type of redundant protection applied to directories and files. Similarly, we now provide ACL Media Spaces, which allow user privileges to be precisely tailored to the tasks at hand, providing only the rights needed to accomplish those tasks: nothing more, nothing less.

Most recently, we introduced EFS File Auditing, a content security solution that enables system administrators to understand "who did what to my content" and "when and how they did it."

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
The EditShare file system is now available in variants that support EditShare hardware-based solutions and hybrid on-premise/cloud solutions. Our Flow automation platform enables users to migrate from on-premise high-speed EFS solutions to cloud-based solutions, such as Amazon S3 and Microsoft Azure, offering the best of both worlds.

Rohde & Schwarz's Dirk Thometzek

What is the biggest trend you’ve seen in the past year in terms of storage?
Consumer behavior is the most substantial change that the broadcast and media industry has experienced over the past years. Content is consumed on-demand. In order to stay competitive, content providers need to produce more content. Furthermore, to make the content more desirable, technologies such as UHD and HDR need to be adopted. This obviously has an impact on the amount of data being produced and stored.

Dirk Thometzek

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
In media and entertainment there has always been a remarkable growth of data over time, from the very first simple SCSI hard drives to huge network environments. Nowadays, however, there is a tremendous growth approximating an exponential function. Considering all media will be preserved for a very long time, the M&E storage market segment will keep on growing and innovating.

Looking at the amount of footage being produced, a big challenge is finding the appropriate data. Taking it a step further, there might be content that a producer wouldn't even think of looking for, but that is related to the original metadata queried. That is where machine learning and AI come into play. We are looking into automated content indexing with the minimum amount of human interaction, where the artificial intelligence learns autonomously and shares information with other databases. The real challenge here is to protect these intelligences from being compromised by unintentional access to the information.

What do you do to help safeguard your users’ data?
In collaboration with our Rohde & Schwarz Cybersecurity division, we offer complete, protected packages to our customers, ranging from access restrictions on server rooms to encrypted data transfers. Cyber attacks are complex and opaque, but the security layer must be transparent and usable. In media, though, latency is just as critical, and every security layer usually introduces some.

Can you talk about NVMe?
In order to bring the best value to the customer, we are constantly looking for improvements. The direct PCI communication of NVMe certainly brings a huge improvement in terms of latency since it completely eliminates the SCSI communication layer, so no protocol translation is necessary anymore. This results in much higher bandwidth and more IOPS.

For internal data processing and databases, R&S SpycerNode uses NVMe, which really boosts its performance. Unfortunately, using this technology for media data storage is currently not considered economically efficient. We are dedicated to getting the best performance-to-cost ratio for the market, and since we have been developing video workstations and servers alongside storage for decades now, we know how to get the best performance out of a drive, spinning or solid state.

Economically, it doesn’t seem acceptable to build a system with the latest and greatest technology for a workflow when standards will do, just because it is possible. The real art of storage technology lies in a highly customized configuration according to the technical requirements of an application or workflow. R&S SpycerNode will evolve over time and technologies will be added to the family.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
Although hybrid workflows are highly desirable, it is important to understand the advantages and limits of the technology. High-bandwidth, low-latency wide-area network connections come at a significant cost. Without a suitable connection, an uncompressed 4K production from a remote location does not seem feasible — uploading several terabytes to a colocation site can take hours or even days, even if protocol acceleration is used. However, there are workflows, such as supplemental rendering or proxy editing, that do make sense to offload to a datacenter. R&S SpycerNode is ready to be an integral part of geographically scattered networks, and the Spycer Storage family will grow.
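The "hours or even days" claim is easy to sanity-check with back-of-the-envelope math. This is an illustrative sketch only — the link speeds and the 80% efficiency factor are assumptions, not R&S figures:

```python
def transfer_hours(terabytes: float, link_gbps: float, efficiency: float = 0.8) -> float:
    """Estimate wall-clock hours to move `terabytes` over a WAN link.

    `efficiency` approximates protocol overhead; acceleration tools
    (Aspera, Signiant, etc.) change this figure considerably.
    """
    bits = terabytes * 8 * 1000**4              # decimal TB -> bits
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 3600

# 10TB over 1Gb/s: ~27.8 hours; over 10Gb/s: ~2.8 hours.
print(round(transfer_hours(10, 1), 1), round(transfer_hours(10, 10), 1))
```

At 1Gb/s, even a well-tuned link needs more than a day for 10TB, which is why only render or proxy workloads tend to be offloaded.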

Dell EMC’s Tom Burns

What is the biggest trend you’ve seen in the past year in terms of storage?
The most important storage trend we’ve seen is an increasing need for access to shared content libraries accommodating global production teams. This is becoming an essential part of the production chain for feature films, episodic television, sports broadcasting and now e-sports. For example, teams in the UK and in California can share asset libraries for their file-based workflow via a common object store, whether on-prem or hybrid cloud. This means they don’t have to synchronize workflows using point-to-point transmissions from California to the UK, which can get expensive.

Tom Burns

Achieving this requires seamless integration of on-premises file storage for high-throughput, low-latency workloads with object storage. The object storage can be in the public cloud, or you can have a hybrid private cloud for your media assets. A private or hybrid cloud allows production teams to distribute assets more efficiently and saves money versus using the public cloud for sharing content. If the production needs content to be there right now, it can still fire up Aspera, Signiant, File Catalyst or other point-to-point solutions to make prioritized content immediately available, while the on-premises cloud takes care of the shared content libraries.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
Dell Technologies offers end-to-end storage solutions where customers can position the needle anywhere they want. Are you working purely in the cloud? Are you working purely on-prem? Or, like most people, are you working somewhere in the middle? We have a continuous spectrum of storage between high-throughput low-latency workloads and cloud-based object storage, plus distributed services to support the mix that meets your needs.

The most important thing that we’ve learned is that data is expensive to store, granted, but it’s even more expensive to move. Storing your assets in one place and having that path name never change, that’s been a hallmark of Isilon for 15 years. Now we’re extending that seamless file-to-object spectrum to a global scale, deploying Isilon in the cloud in addition to our ECS object store on premises.

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
AR, VR, AI and other emerging technologies offer new opportunities for media companies to change the way they tell and monetize their stories. However, due to the large amounts of data involved, many media organizations are challenged when they rely on storage systems that lack either scalability or performance to meet the needs of these new workflows.

Dell EMC’s file and object storage solutions help media companies cost effectively tier their content based upon access. This allows media organizations to use emerging technologies to improve how stories are told and monetize their content with the assistance of AI-generated metadata, without the challenges inherent in many traditional storage systems.

Where it was once the job of interns to categorize content in projects that could span years, AI gives media companies the ability to analyze content in near-realtime and create large, easily searchable content libraries as content is migrated from existing tape libraries to object-based storage, or ingested for current projects. The metadata involved includes brand recognition and player/actor identification, as well as speech-to-text, making it easy to determine logo placement for advertising analytics and to find footage for use in future movies or advertisements.

With Dell EMC storage, AI technologies can be brought to the data, removing the need to migrate or replicate data to direct-attach storage for analysis. Our solutions also offer the scalability to store the content for years using affordable archive nodes in Isilon or ECS object storage.
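As a toy illustration of what AI-generated metadata buys you once it lands in a searchable index — the clip names and tags below are invented, not from any Dell EMC product:

```python
from collections import defaultdict

def build_tag_index(assets):
    """Invert an asset -> tags mapping into tag -> set of assets."""
    index = defaultdict(set)
    for asset, tags in assets.items():
        for tag in tags:
            index[tag.lower()].add(asset)
    return index

# Hypothetical AI-generated tags for three sports clips.
library = {
    "clip_001.mxf": ["touchdown", "logo: acme cola", "crowd"],
    "clip_002.mxf": ["interview", "speech-to-text: postgame"],
    "clip_003.mxf": ["touchdown", "slow motion"],
}
index = build_tag_index(library)
print(sorted(index["touchdown"]))  # ['clip_001.mxf', 'clip_003.mxf']
```

A real system would hold millions of entries in a database, but the lookup principle — tag once at ingest, query instantly forever — is the same.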

In terms of AR and VR, we are seeing video game companies using this technology to change the way players interact with their environments. Not only have they created a completely new genre with games such as Pokemon Go, they have figured out that audiences want nonlinear narratives told through realtime storytelling. Although AR and VR adoption has been slower for movies and TV compared to the video game industry, we can learn a lot from the successes of video game production and apply similar methodologies to movie and episodic productions in the future.

Can you talk about NVMe?
NVMe solutions are a small but exciting part of a much larger trend: workflows that fully exploit the levels of parallelism possible in modern converged architectures. As we look forward to 8K, 60fps and realtime production, the usage of PCIe bus bandwidth by compute, networking and storage resources will need to be much more balanced than it is today.

When we get into realtime productions, these “next-generation” architectures will involve new production methodologies such as realtime animation using game engines rather than camera-based acquisition of physically staged images. These realtime processes will take a lot of cooperation between hardware, software and networks to fully leverage the highly parallel, low-latency nature of converged infrastructure.

Dell Technologies is heavily invested in next-generation technologies that include NVMe cache drives, software-defined networking, virtualization and containerization that will allow our customers to continuously innovate together with the media industry’s leading ISVs.

What do you do in your products to help safeguard your users’ data?
Your content is your most precious capital asset and should be protected and maintained. If you invest in archiving and backing up your content with enterprise-quality tools, then your assets will continue to be available to generate revenue for you. However, archive and backup are just two pieces of data security that media organizations need to consider. They must also take active measures to deter data breaches and unauthorized access to data.

Protecting data at the edge, especially at the scale required for global collaboration, can be challenging. We simplify this process through services such as SecureWorks, which includes offerings like security management and orchestration, vulnerability management, security monitoring, advanced threat services and threat intelligence services.

Our storage products are packed with technologies to keep data safe from unexpected outages and unauthorized access, and to meet industry standards such as alignment to MPAA and TPN best practices for content security. For example, Isilon’s OneFS operating system includes SyncIQ snapshots, providing point-in-time backup that updates automatically and generates a list of restore points.

Isilon also supports role-based access control and integration with Active Directory, MIT Kerberos and LDAP, making it easy to manage account access. For production houses working on multiple customer projects, our storage also supports multi-tenancy and access zones, which means that clients requiring quarantined storage don’t have to share storage space with potential competitors.

Our on-prem object store, ECS, provides long-term, cost-effective object storage with support for globally distributed active archives. This helps our customers with global collaboration, but also provides inherent redundancy. The multi-site redundancy creates an excellent backup mechanism as the system will maintain consistency across all sites, plus automatic failure detection and self-recovery options built into the platform.

Scale Logic’s Bob Herzan

What is the biggest trend you’ve seen in the past year in terms of storage?
There is and has been considerable buzz around cloud storage, object storage, AI and NVMe. Scale Logic recently conducted a private survey of its customer base to help answer this question. What we found is that none of those buzzwords can be considered a trend. We also found that our customers were migrating away from SAN and focusing on building infrastructure around high-performance, scalable NAS.

Bob Herzan

They felt on-premises LTO was still the most viable option for archiving, and finding a more efficient and cost-effective way to manage their data was their highest priority for the next couple of years. There are plenty of early adopters testing out the buzzwords in the industry, but the trend — in my opinion — is to maximize a stable platform with the best overall return on the investment.

End users are not focused so much on storage, but on how a company like ours can help them solve problems within their workflows where storage is an important component.

Can you talk more about NVMe?
NVMe provides an any-K solution with superior low-latency metadata performance, and it works with our scale-out file system. All of our products have had 100GbE drivers for almost two years, enabling mesh technologies with NVMe for networks as well. As costs come down, NVMe should start to become more mainstream this year — our team is well versed in supporting NVMe and ready to help facilities research its price-to-performance to see if it makes sense for their Genesis and HyperFS Scale Out systems.

With AI, VR and machine learning, our industry is even more dependent on storage. How are you addressing this?
We are continually refining and testing our best practices. Our focus on broadcast automation workflows over the years has already enabled our products for AI and machine learning. We are keeping up with the latest technologies, constantly testing in our lab with the latest in software and workflow tools and bringing in other hardware to work within the Genesis Platform.

What do you do in your products to help safeguard your users’ data?
This is a broad question that has different answers depending on which aspect of the Genesis Platform you are talking about. Simply speaking, we can craft any number of data safeguard strategies based on a customer’s needs, the technology they currently use and, most importantly, where they see their capacity and data protection needs growing. Our safeguards range from the simple — enterprise-quality components, mirrored sets, RAID-6, RAID-7.3 and RAID N+M, asynchronous data sync to a second instance — to full HA with synchronous data sync to a second instance, virtual IP failover between multiple sites, and multi-tier DR and business continuity solutions.

In addition, the Genesis Platform’s 24×7 health monitoring service (HMS) communicates directly with installed products at customer sites, using the equipment serial number to track service outages, system temperature, power supply failure, data storage drive failure and dozens of other mission-critical status updates. This service is available to Scale Logic end users in all regions of the world and complies with enterprise-level security protocols by relying only on outgoing communication via a single port.
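The parity schemes mentioned above all trade raw capacity for failure tolerance. A minimal sketch of that arithmetic (drive counts and sizes here are hypothetical examples, not Genesis configurations):

```python
def usable_tb(drives, drive_tb, parity_drives):
    """Usable capacity of a parity RAID set: total minus parity overhead.

    RAID-6 dedicates two drives' worth of parity; RAID N+M generalizes
    this to M, tolerating M simultaneous drive failures.
    """
    if parity_drives >= drives:
        raise ValueError("parity drives must be fewer than total drives")
    return (drives - parity_drives) * drive_tb

# 16 x 12TB drives: RAID-6 leaves 168TB usable; N+M with M=3 leaves 156TB.
print(usable_tb(16, 12, 2), usable_tb(16, 12, 3))  # 168 156
```

The choice of M is exactly the capacity-versus-protection dial a consultative vendor tunes per workflow.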

Users want more flexible workflows — storage in the cloud, on-premises. Are your offerings reflective of that?
Absolutely. This question defines our go-to-market strategy — it’s in our name and part of our day-to-day culture. Scale Logic takes a consultative role with its clients. We take our 30-plus years of experience and ask many questions. Based on the answers, we can give the customer several options. First off, many customers feel pressured to refresh their storage infrastructure before they’re ready. Scale Logic offers customized extended warranty coverage that takes the pressure off the client and allows them to review their options and then slowly implement the migration and process of taking new technology into production.

Also, our Genesis Platform has been designed to scale, meaning clients can start small and grow as their facility grows. We are not trying to force a single solution to our customers. We educate them on the various options to solve their workflow needs and allow them the luxury of choosing the solution that best meets both their short-term and long-term needs as well as their budget.

Facilis’ Jim McKenna

What is the biggest trend you’ve seen in the past year in terms of storage?
Recently, I’ve found that conversations around storage inevitably end up highlighting some non-storage aspects of the product. Sort of the “storage and…” discussion where the technology behind the storage is secondary to targeted add-on functionality. Encoding, asset management and ingest are some of the ways that storage manufacturers are offering value-add to their customers.

Jim McKenna

It’s great that customers can now expect more from a shared storage product, but as infrastructure providers we should be most concerned with advancing the technology of the storage system itself. I’m all for added value — we offer tools ourselves that help our customers manage their workflows — but it can’t be the primary differentiator. A premium shared storage system will provide years of service across many supporting products from various manufacturers, so I advise people to avoid getting caught up in a storage vendor’s value-add marketing.

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
Our industry has always been dependent upon storage in the workflow, but now facilities need to manage large quantities of data efficiently, so it’s becoming more about scaled networks. In the traditional SAN environment, hard-wired Fibre Channel clients are the exclusive members of the production workgroup.

With scalable shared-storage through multiple connection options, everyone in the facility can be included in the collaboration on a project. This includes offload machines for encoding and rendering large HDR and VR content, and MAM systems with localized and cloud analysis of data. User accounts commonly grow into the triple digits when producers, schedulers and assistants all require secure access to the storage network.

Can you talk about NVMe?
Like any new technology, the outlook for NVMe is promising. Solid state architecture solves a lot of problems inherent in HDD-based systems — seek times, read speeds, noise and cooling, form factor, etc. A couple of years ago, I would have guessed that SATA SSDs would be in the majority of systems sold by now; instead, they’ve barely made a dent in HDD-based unit sales in this market. Our customers are aware of new technology, but they also prioritize tried-and-true, field-tested product designs and value high capacity at a lower cost per GB.

Spinning HDD will still be the primary storage method in this market for years to come, although solid state has advantages as a helper technology for caching and direct access for high-bandwidth requirements.

What do you do in your products to help safeguard your users’ data?
Integrity and security are priority features in a shared storage system. We go about security differently than most, and because of this our customers have more confidence in their solution. Our system of permissions operates at the volume level and hides the complexities of network ownership attributes, so no network security training is required. Because it is so simple to restrict data to only the necessary people, data integrity and privacy are increased.

In the case of data integrity during hardware failure, our software-defined data protection has been guarding our customers’ assets for over 13 years and is continually improved. With increasing drive sizes, time to complete a drive recovery is an important factor, as is system usability during the process.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
When data lifecycle is a concern of our customers, we consult on methods of building a storage hierarchy. There is no one-size-fits-all approach here, as every workflow, facility and engineering scope is different.

Tier 1 storage is our core product line, but we also have solutions for nearline (tier 2) and archive (tier 3). When the discussion turns to the cloud as a replacement for some of the traditional on-premises storage offerings, the complexity of the pricing structure, access model and interface becomes a gating factor. There are a lot of ways to effectively use the cloud, such as compute (AI, encoding, etc.), business continuity, workflow (WAN collaboration) or simple cold storage. These tools, when combined with a strong on-premises storage network, will enhance productivity and ensure on-time delivery of product.

mLogic’s co-founder/CEO Roger Mabon

What is the biggest trend you’ve seen in the past year in terms of storage?
In the M&E industry, high-resolution 4K/8K multi-camera shoots, stereoscopic VR and HDR video are commonplace and are contributing to the unprecedented amounts of data being generated in today’s media productions. This trend will continue as frame rates and resolutions increase and video professionals move to shoot in these new formats to future-proof their content.

Roger Mabon

With AI, VR and machine learning, etc., our industry is even more dependent on storage. Can you talk about that?
Absolutely. In this environment, content creators must deploy storage solutions that are high-capacity, high-performance and fault-tolerant. Furthermore, all of this content must be properly archived so it can be accessed well into the future. mLogic’s mission is to provide affordable RAID and LTO tape storage solutions that fit this critical need.

How are you addressing this?
The tsunami of data being produced in today’s shoots must be properly managed. First and foremost is the need to protect the original camera files (OCF). Our high-performance mSpeed Thunderbolt 3 RAID solutions are being deployed on-set to protect these OCF. mSpeed is a desktop RAID that features plug-and-play Thunderbolt connectivity, capacities up to 168TB and RAID-6 data protection. Once the OCF is transferred to mSpeed, camera cards can be wiped and put back into production.

The next step involves moving the OCF from the on-set RAID to LTO tape. Our portable mTape Thunderbolt 3 LTO solutions are used extensively by media pros to transfer OCF to LTO tape. LTO tape cartridges are shelf stable for 30+ years and cost around $10 per TB. That said, I find that many productions skip the LTO transfer and rely solely on single hard drives to store the OCF. This is a recipe for disaster, as hard drives sitting on a shelf have a lifespan of only three to five years. Companies working with the likes of Netflix are required to use LTO for this very reason. Completed projects should also be offloaded from hard drives and RAIDs to LTO tape. These hard drive systems can then be put back into action for the tasks they are designed for: editing, color correction, VFX, etc.

Can you talk about NVMe?
mLogic does not currently offer storage solutions that incorporate NVMe technology, but we do recognize numerous use cases for content creation applications. Intel is currently shipping an 8TB SSD with PCIe NVMe 3.1 x4 interface that can read/write data at 3000+ MB/second! Imagine a crazy fast and ruggedized NVMe shuttle drive for on-set dailies…
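To put that 3,000+ MB/s figure in context, here is a rough sketch of on-set offload times; the throughput numbers are ballpark assumptions rather than benchmarks of any particular drive:

```python
def offload_minutes(card_gb, mb_per_s):
    """Minutes to copy a camera card at a sustained sequential throughput."""
    return card_gb * 1000 / mb_per_s / 60

# 512GB card: ~2.8 min at NVMe speeds (3000 MB/s) vs ~17.1 min at SATA SSD speeds (500 MB/s).
print(round(offload_minutes(512, 3000), 1), round(offload_minutes(512, 500), 1))
```

On a multi-camera shoot cycling dozens of cards per day, that roughly 6x difference is what makes an NVMe shuttle drive attractive for dailies.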

What do you do in your products to help safeguard your users’ data?
Our 8- and 12-drive mSpeed solutions feature hardware RAID data protection. mSpeed can be configured in multiple RAID levels, including RAID-6, which will protect the content stored on the unit even if two drives fail. Our mTape solutions are specifically designed to make it easy to offload media from spinning drives and archive the content to LTO tape for long-term data preservation.

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
We recommend that you make two LTO archives of your content that are geographically separated in secure locations such as the post facility and the production facility. Our mTape Thunderbolt solutions accomplish this task.

As for the cloud, transferring terabytes upon terabytes of data takes an enormous amount of time and can be prohibitively expensive, especially when you need to retrieve the content. For now, cloud storage is reserved for productions with big pipes and big budgets.

OWC president Jennifer Soulé

With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
We’re constantly working to provide more capacity and faster performance. For spinning disk solutions, we’re making sure we offer the latest sizes in ever-increasing bays. Our ThunderBay line started as a four-bay, went to a six-bay and will grow to eight-bay in 2019. With 12TB drives, that’s 96TB in a pretty workable form factor. Of course, you also need performance, and that is where our SSD solutions come in, as well as integrating the latest interfaces like Thunderbolt 3. For those with greater graphics needs, we also have our Helios FX external GPU box.

Can you talk about NVMe?
With our Aura Pro X, Envoy Pro EX, Express 4M2 and ThunderBlade, we’re already into NVMe and don’t see that stopping. By the end of 2019, we expect virtually all of our external flash-based solutions to be NVMe-based rather than SATA. As the cost of flash goes down and performance and capacity go up, we expect broader adoption both as primary storage and in secondary cache setups. The 2TB drive supply will stabilize, we should see 4TB drives, and PCIe Gen 4 will double bandwidth. Bigger, faster and cheaper is a pretty awesome combination.

What do you do in your products to help safeguard your users’ data?
We focus more on providing products that are compatible with different encryption schemas than on building something in. As far as overall data protection, we’re always focused on providing the most reliable storage we can. We make sure our power supplies are rated above what is required so insufficient power is never a factor. We test a multitude of drives in our enclosures to ensure we’re providing the best-performing drives.

For our RAID solutions, we do burn-in testing to make sure all the drives are solid. Our SoftRAID technology also provides in-depth drive health monitoring, so you know well in advance if a drive is failing. This is critical because many other SMART-based systems fail to detect bad drives, leading to subpar system performance and corrupted data. Of course, all the hardware and software technology we put into our drives doesn’t do much if people don’t back up their data — so we also work with our customers to find the right solution for their use case or workflow.
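SoftRAID's monitoring is proprietary, but the general idea of a programmatic drive-health check can be sketched against standard `smartctl -H` output. The report text below is canned for illustration; a real script would capture it from the tool:

```python
def parse_smart_health(report):
    """Return True if a smartctl health report says the drive passed.

    Looks for the standard overall-health line printed by `smartctl -H`.
    """
    for line in report.splitlines():
        if "overall-health self-assessment test result" in line:
            return line.rstrip().endswith("PASSED")
    raise ValueError("no health line found in report")

# Canned sample of `smartctl -H` output for illustration.
sample = """\
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
"""
print(parse_smart_health(sample))  # True
```

Note that this overall pass/fail flag is exactly the coarse signal the interview criticizes; per-attribute monitoring (reallocated sectors, pending sectors) catches degradation far earlier.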

Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
I definitely think we hit on flexibility within the on-prem space by offering a full range of single- and multi-drive solutions, spinning disk and SSD options, and portable to rackmounted units that can be fully set-up solutions or DIY, where you can use drives you might already have. You’ll have to stay tuned on the cloud part, but we do have plans to use the cloud to expand on the data protection our drives already offer.

Smile wins Grand Prize for best music video at Showdown 8

By Randi Altman

Silver Sound’s Showdown 8 Music Video Festival took place this week at the Brooklyn Bowl, highlighting bands, showcasing music videos and naming the winner of their Best Music Video contest.

I’m proud to say that this was my fourth year as a judge of the contest and happy to report that my number one pick took home the top prize. Joe Staehly’s Smile, for artist Jay Pray, features an older woman revisiting places from her past, bringing with her a film projector that plays images of herself and friends — including one special boy — when they were young. Finally, in present day, she visits an older man in a medical facility. He clearly doesn’t recognize her and is visibly uncomfortable. The woman then turns off the lights and turns on projectors that fill the room with images of their past.

For this effort, Staehly took home the Grand Prize for the video he shot on Red Epic Dragon and Super 8 film. Smile brought the audience, and this judge, to tears. That’s right, a music video did that.

Twenty-three-year-old Staehly is a Philadelphia-based cinematographer and director at Set in Motion. Staehly, who also edited the piece, is the youngest grand prize winner in Showdown history.

Each year, 21 music videos and four bands compete for the Grand Prize, worth over $10,000: Silver Sound will produce a music video for the winner. Staehly will be collaborating on this music video with artist Gabrielle Sterbenz.

Created eight years ago by the talents behind NYC audio post house Silver Sound, Showdown shows no sign of slowing down. “Music videos are an oft-overlooked medium that I personally find very exciting,” reports Silver Sound partner/festival director Cory Choy. “Music video directors take risks, both narratively and technically, that other filmmakers, who have to worry about dialogue, aren’t willing to take. It’s a challenge, but it’s also incredibly freeing and exciting to experience two stories simultaneously — the story that the music is telling, and the story that the movie is telling. The way these stories interact and resonate with each other… that’s what music videos are about.”

postPerspective welcomes industry vet Dayna McCallum as publisher

By Randi Altman

I’m very happy to share with you that postPerspective (what you are reading right now!) has brought industry veteran Dayna McCallum on as publisher. She’s going to have a lot on her plate, including supporting our website, newsletter and events, developing some exciting new services, and helping expand our reach in the industry.

Since I founded postPerspective just under three years ago, we have interviewed many pros, covered a lot of trade shows, hosted a variety of events, and gone Behind the Title with a host of artists. And now it’s time for us to take the next step forward. I have spent my entire adult life in this industry and cannot imagine doing anything else, and I know that Dayna feels the same way. There is no person better suited or more qualified to help postPerspective grow in ways that will further our reach and increase our ability to cover and partner with our community. I am very excited to have her on board.

For the past 15 years, Dayna has worked with many of the biggest names in post and entertainment tech. Most recently, she served as Senior Strategist for ignite strategic communications, where she helped guide the strategic marketing planning for some of the industry’s leading companies. Before that, Dayna was the VP of Global Marketing and Communications for Ascent Media’s Creative Services Group (now Deluxe), where she oversaw all marketing strategy and public relations outreach for their collective of creative post production companies, including Company 3, Encore, Encore VFX and Todd-Soundelux. Her work with the company spanned the episodic television, feature film, commercial, DVD and entertainment technology markets in both the US and the UK. Prior to her nine years with Ascent, Dayna was involved with the launch of Westwind Media in Burbank.

Dayna’s roots in the industry are deep, and she is actively involved with the HPA and serves as Co-Chair of the HPA Awards. “As things continue to evolve rapidly in the post industry, and as we converge more and more with production and distribution, we need a strong voice and a closely knit community,” she noted. “This is a very eventful period, and postPerspective is at the forefront of the coverage, sitting right at the crossroads where technology and artistry converge. We have a lot of plans in store as we look to grow and expand the content and services offered by our publishing entity, and I’m very excited and honored to be a part of it.”

And you will see the two of us wandering the halls of NAB when we aren’t at our booth (SL8826) shooting interviews with industry pros and product makers…

Quick Chat: VFX Legion’s James Hattin on his visual effects collective

By Randi Altman

While VFX Legion does have a brick-and-mortar location in Burbank, California, their team of 50 visual effects artists is spread around the world. Started in 2012 by co-founder and VFX supervisor James Hattin and six others who were weary of the old VFX house model — including large overhead and long hours away from family — the virtual studio was set up to allow artists to work where they live, instead of having to move to where the work is.

VFX Legion has provided visual effects for television shows like Scandal and How to Get Away With Murder, as well as feature films such as Insidious: Chapter 3, Jem and the Holograms, and Sinister 2. We recently reached out to Hattin to find out more about his collective and how they make sure their remote collaboration workflow is buttoned up.

Sinister 2

Can you talk about the work/services you provide?
VFX Legion is a full-service visual effects facility that provides on-set supervision, tracking, match move, animation, 3D, dynamics and compositing. We favor the compositing side of the work because we have so many skilled compositors on the team. However, we have talent all over the world for dynamics, lighting and animation as well.

You co-founded VFX Legion as a collective?
Legion was started by myself and six equal partners. We are mostly artists and production people, and that has been the key to our early success — the partners alone could deliver a significant amount of work. Early on, Legion was designed to be a co-op, wherein everyone who worked for the company would have a vested interest in getting projects done profitably. However, in researching how that could be done on a legal and business level, we found that we were going to have to change the industry one step at a time. A fully remote workflow was enough to get VFX Legion off the ground. We will have to wait for that change to take hold industry-wide before we move to hundreds of “owners.”

You have an official office, but you have artists working all over the world. Why did you guys opt to do that as opposed to expanding in Burbank?
The brick-and-mortar office is for management and supervision. We have an expandable team that handles everything from I/O to producing and supervising the artists around the world. We could expand this facility to house artists, but the goal of the company was to find the best artists around the world — not to open offices all over the world. We want people to be able to work wherever they want to live. We don’t mandate that they come into the office and work 9 to 5. Artists get to work on their own schedule in their own offices and personal spaces. It’s the new way of giving talent their lives back. VFX can be insanely demanding on the people who work in the industry.

What are the benefits?
The benefits are that artists take control over their lives. They can work all night if they are night owls. They can walk the dog or go out to eat with their families and not be chained to a desk in one of the most expensive cities in the world — which is where all VFX hubs are based. It takes a certain kind of artist, with a certain level of experience, to manage themselves in this atmosphere. Those who manage it can live quite well working full time for Legion on projects.

Are there any negatives?
If the artist isn't the kind of person who can start and finish something, can't manage their time very well, or doesn't communicate well, this can be very challenging. We've had a few artists bow out over the last few years because they simply weren't cut out for the type of work that we do. Self-management is very important to this pipeline, and if someone isn't up to it, it can be frustrating.

What kind of software do you use for your VFX work?
We use Nuke and Maya, along with Redshift and V-Ray for rendering. We also call on After Effects, Mocha, Zoom, Aspera and Shotgun.

With people spread around the world, how do you communicate and review and approve projects? Can you walk us through a typical workflow, starting with how early you get involved on a project?
On many projects, we start at the very beginning. We are there for production meetings and help drive the visual effects workflow so that it is easier to deal with in post. Once we are done on set, we work with the editorial staff to manage shot turnovers and ingesting plates into our system. Once we have plates in our system, we assign the work out to the artists who are a good fit for the work that needs to be done.

Jem and the Holograms

We let them know what the budget is for the shot, and they can accept or refuse the work. Once an artist has kicked off, they start sending shots through Shotgun for review by a supervisor in-house in Burbank. We generally look at the Shotgun media first to see if the basics are in place. If that looks good, we download the QuickTime from Shotgun. When that is approved, we pull the synced DPX frames and evaluate them through a QC process to make sure they meet the quality standards we have as a company.
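One part of the frame-level QC Hattin describes, confirming that a pulled DPX sequence is actually complete before final delivery, can be sketched as a short script. This is a hypothetical illustration (the function name and the `shot_task_version.frame.dpx` naming convention are assumptions for the example), not Legion's actual pipeline code:

```python
import re
from collections import defaultdict

def find_missing_frames(filenames):
    """Group DPX-style filenames (e.g. 'sh010_comp_v002.1001.dpx') by
    sequence and report any frame numbers missing from each range."""
    pattern = re.compile(r"^(?P<base>.+)\.(?P<frame>\d+)\.dpx$")
    frames = defaultdict(list)
    for name in filenames:
        m = pattern.match(name)
        if m:
            frames[m.group("base")].append(int(m.group("frame")))
    gaps = {}
    for base, nums in frames.items():
        nums.sort()
        # Any frame in [first, last] that isn't on disk is a gap.
        missing = sorted(set(range(nums[0], nums[-1] + 1)) - set(nums))
        if missing:
            gaps[base] = missing
    return gaps
```

A check like this would run after the DPX pull and before the deeper visual QC pass, so a dropped frame from a sync or transfer error is caught mechanically rather than by eye.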

There are a lot of moving parts, and that is why we have a team of trained coordinators, project managers and producers here in Burbank, to make sure that we facilitate all the work and track all the progress.

Can you talk about some recent projects?
We have been working on Scandal and How to Get Away With Murder for ABC Television. There are a number of challenges working on shows like this. The schedule can be very tight and we are tasked with updating many older elements from previous vendors and previous seasons.

This can also be a lot of fun because we get a chance to make sure the effects look as good as possible, while slowly updating each of the assets to be a little more "Legion-like." That can mean little secondary animations that weren't there originally, or a change of season in a set extension. It is all very exciting and fast-paced.

——–

For more on VFX Legion, check out James Hattin’s LinkedIn blog here.

NAB: Nvidia’s Andrew Page

Las Vegas — Nvidia’s Andrew Page came by the postPerspective booth during the NAB Show to discuss GPU acceleration. He even brought a prop: an Nvidia Quadro K6000 flagship board, which offers enhanced performance power for graphics and computing acceleration.

Page talks about partnerships with companies like Adobe, Blackmagic, Quantel and others. Nvidia’s cards can be used in desk-side workstations, mobile workstations and even in the cloud with companies like VMware.


iOgrapher to partner with postPerspective at NAB 2014

San Marino, California – iOgrapher is partnering with postPerspective to deliver NAB 2014 news, along with vendor and influencer interviews using Apple iPads configured with the iOgrapher mobile media cases.

Throughout the course of NAB 2014, postPerspective will be conducting interviews not only at its booth, but from the NAB 2014 show floor as well.


Day One at the IBC conference

By Simon Ray
Head of Operations and Engineering
Goldcrest London

We got to the show around 4pm.

Already managed to see about 4 stands in the 2 hours we were there. At this rate we will have finished the whole show by sometime in December.

Had a great demo on Resolve, but Blackmagic don't seem to be too interested in flogging it as a high-end colour corrector: it has just a small area tucked away in the corner of their huge booth and no real way of arranging personal demos.

Quick look at the new Avid S6 console. It looks shiny, and we have a demo arranged for Sunday. I am hoping for a 'paradigm shift', or at least a whoop and a high-five.

Nice post-6pm beers on the Quantel stand.

The thoughts and opinions here don’t necessarily reflect those of postPerspective.

Meet The Artist: Sandra Dow

Behind the Title…

The 20-year vet of The Mill couldn’t live without a kettle, DVR and an encoder… oh, and she likes sleep!

Sandra

NAME: Sandra Dow

COMPANY: The Mill (@millvfx) in Los Angeles

CAN YOU DESCRIBE YOUR COMPANY?
The Mill in Los Angeles has been producing visual effects content and imagery for commercials and films since 2007.

WHAT’S YOUR JOB TITLE? 
Head of MCR (Master Control Room)

WHAT DOES THAT ENTAIL?
The Machine Room is responsible for everything that enters or leaves The Mill. I like to think we are the heart of The Mill; without us, the building couldn't run. It might not be the most glamorous job, but we make sure that all the months/days/hours of creative hard work actually reach your TV or computer screen looking the best they possibly can, and on time.


DS is dead, long live DS


By Barry Goch
@gochya

LOS ANGELES — When the news came of DS’s demise I was in shock. It’s as if a dear friend who’s been sick for a while finally passes. I knew it was inevitable, but that doesn’t take away the pain. Who am I to speak of pain when it’s just a software application, right? Well, for the folks that got it, and unfortunately there aren’t enough of us, it’s a huge loss. Not just professionally, but on a deeper level.
"…the passion comes from being so impressed with an all-encompassing bit of software." – Tony Quinsee-Jover, HDHeaven.com