Editing Roundtable

By Randi Altman

The world of the editor has changed over the years as a result of new technology, the types of projects they are being asked to cut (looking at you, social media) and the various deliverables they must create. Are deadlines still getting tighter and budgets still getting smaller? The answer is yes, but some editors are adapting to the trends, and companies that make products for editors are helping by making the tools more flexible and efficient so pros can get to where they need to be.

We posed questions to various editors working in TV, short form and indies, who do a variety of jobs, as well as to those making the tools they use on a daily basis. Enjoy.

Cut+Run Editor/Partner Pete Koob

What trends do you see in commercial editing? Good or bad?
I remember 10 years ago a “colleague,” who was an interactive producer at the time, told me rather haughtily that I’d be out of work in a few years when all advertising became interactive and lived online. Nothing could have been further from the truth, of course, and I think editors everywhere have found that the viewer migration from TV to online has yielded an even greater need for content.

The 30-second spot still exists, both online and on TV, but the opportunities for brands to tell more in-depth stories across a wide range of media platforms mean that there’s a much more diverse breadth of work for editors, both in terms of format and style.

For better or worse, we’ve also seen every human being with a phone become their own personal brand manager with a highly cultivated and highly saturated digital presence. I think this development has had a big impact on the types of stories we’re telling in advertising and how we’re telling them. The genre of “docu-style” editing is evolving in a very exciting way as more and more companies are looking to find real people whose personal journeys embody their brands. Some of the most impressive editorial work I see these days is a fusion of styles — music video, fashion, documentary — all being brought to bear on telling these real stories, but doing it in a way that elevates them above the noise of the daily social media feed.

Selecting the subjects in a way that feels authentic — and not just like a brand co-opting someone’s personal struggle — is essential, but when done well, there are some incredibly inspirational and emotional stories to be told. And as a father of a young girl, it’s been great to show my daughter all the empowering stories of women being told right now, especially when they’re done with such a fresh and exciting visual language.

What is it about commercial editing that attracted you and keeps attracting you?
Probably the thing that keeps me most engaged with commercial editing is the variety and volume of projects throughout the year. Cutting commercials means you’re on to the next one before you’ve really finished the last.

The work feels fresh when I’m constantly collaborating with different people every few weeks on a diverse range of projects. Even if I’m cutting with the same directors, agencies or clients, the cast of characters always rotates to some degree, and that keeps me on my toes. Every project has its own unique challenges, and that compels me to constantly find new ways to tell stories. It’s hard for me to get bored with my work when the work is always changing.

Conoco’s Picnic spot

Can you talk about challenges specific to short-form editing?
I think the most obvious challenge for the commercial editor is time. Being able to tell a story efficiently and poignantly in a 60-, 30-, 15- or even six-second window reveals the spot editor’s unique talent. Sometimes that time limit can be a blessing, but more often than not, the idea on the page warrants a bigger canvas than the few seconds allotted.

It’s always satisfying to feel as if I’ve found an elegant editorial solution to telling the story in a concise manner, even if that means re-imagining the concept slightly. It’s a true testament to the power of editing and one that is specific to editing commercials.

How have social media campaigns changed the way you edit, if at all?
Social media hasn’t changed the way I edit, but it has certainly changed my involvement in the campaign as a whole. At its worst, the social media component is an afterthought, where editors are asked to just slap together a quick six-second cutdown or reformat a spot to fit into a square framing for Instagram. At its best, the editor is brought into the brainstorming process and has a hand in determining how the footage can be used inventively to disperse the creative into different media slots. One of the biggest assets of an editor on any project is his or her knowledge of the material, and being able to leverage that knowledge to shape the campaign across all platforms is incredibly rewarding.

Phillips 76 “Jean and Gene”

What system do you edit on, and what else other than editing are you asked to supply?
We edit primarily on Avid Media Composer. I still believe that nothing else can compete when it comes to project sharing, and as a company it allows for the smoothest means of collaboration between offices around the world. That being said, clients continue to expect more and more polish from the offline process, and we are always pushing our capabilities in motion graphics and visual effects in After Effects and color finessing in Blackmagic DaVinci Resolve.

What projects have you worked on recently?
I’ve been working on some bigger campaigns that consist of a larger number of spots. Two campaigns that come to mind are a seven-spot TV campaign for Phillips 76 gas stations and 13 short online films for Subaru. It’s fun to step back and look at how they all fit together, and sometimes you make different decisions about an individual spot based on how it sits in the larger group.

The “Jean and Gene” spots for 76 were particularly fun because it’s the same two characters who you follow across several stories, and it almost feels like a mini TV series exploring their life.

Earlier in the year I worked on a Conoco campaign via Carmichael Lynch, featuring the spots Picnic, First Contact and River.

Red Digital Cinema Post and Workflow Specialist Dan Duran

How do you see the line between production and post blurring?
Both post and on set production are evolving with each other. There has always been a fine line between them, but as tech grows and becomes more affordable, you’re seeing tools that previously would have been used only in post bleed onto set.

One of my favorite trends is seeing color-managed workflows on location. With full color-control pipelines and calibrated SDR and HDR monitors, you get a much more accurate representation of what the final image will look like. I’ve also seen growth in virtual production, where you’re able to see realtime CGI and environments on set, directly through the camera, while shooting.

What are the biggest trends you’ve been facing in product development?
Everyone is always looking for the highest image quality at the best price point. As sensor technology advances, we’re seeing users ask for more and more out of the camera. Higher sensitivity, faster frame rates, more dynamic range and a digital RAW that allows them to effortlessly shape the images into a very specific creative look that they’re trying to achieve for their show. 8K provides a huge canvas to work with, offering flexibility in what they are trying to capture.

Smaller cameras are able to easily adapt to a whole new range of support accessories to achieve shots in ways that weren’t always possible. Along with the camera/sensor revolution, Red has seen a lot of new cinema lenses emerge, each adding its own character to the image as it hits the photo sites.

What trends do you see from editors these days? What enables their success?
I’ve seen post production really take advantage of modern tech to help improve and innovate new workflows. Being able to view higher resolutions, process footage faster and play back off of a laptop shows how far hardware has come.

We have been working more with partners to help give pros the post tools they need to be more efficient. As an example, Red recently teamed up with Nvidia to not only get realtime full resolution 8K playback on laptops, but also allow for accelerated renders and transcode times much faster than before. Companies collaborating to take advantage of new tech will enable creative success.

AlphaDogs Owner/Editor Terence Curren

What trends do you see in editing? Good or bad.
There is a lot of content being created across a wide range of outlets and formats, from theatrical blockbusters and high-end TV shows all the way down to one-minute videos for Instagram. That’s positive for people desiring to use their editing skills to do a lot of storytelling. The flip side is that with so much content being created, the dollars to pay editors get stretched much thinner. Outside of high-end content creation, the overall pay rates for editors have been going down.

The cost of content capture is a tiny fraction of what it was back in the film days. The good part of that is there is a greater likelihood that the shot you need was actually captured. The downside is that without the extreme expense of shooting associated with film, we’ve lost the disciplines of rehearsing scenes thoroughly, only shooting while the scene is being performed, only printing circled takes, etc. That, combined with reduced post schedules, means for the most part editors just don’t have the time to screen all the footage captured.

The commoditization of the toolsets (some editing systems are actually free), combined with the plethora of training materials readily available on the Internet and in most schools, means that video storytelling is now a skill available to everyone. This means that the next great editors won’t be faced with the barriers to entry that past generations experienced, but it also means that there’s a much larger field of editors to choose from. The rules of supply and demand tell us that increased availability of, and competition in, a service reduces its cost. Traditionally, many editors have been able to make upper-middle-class livings in our industry, and I don’t see as much of that going forward.

To sum it up, it’s a great time to become an editor, as there’s plenty of work and therefore lots of opportunity. But along with that, the days of making a higher-end living as an editor are waning.

What is it about editing that attracted you and keeps attracting you?
I am a storyteller at heart. The position of editor is, in my opinion, matched only by the director and writer in responsibility for the structural part of telling the story. The writer has to invent the actual story out of whole cloth. The director has to play traffic cop with a cornucopia of moving pieces under a very tight schedule while trying to maintain the vision and capture the pieces of the story necessary to deliver the final product. The editor takes all those pieces and gives the story its final rewrite for the audience to, hopefully, enjoy.

Night Walk

As with writing, there are plenty of rules to guide an editor through the process. Those rules, combined with experience, make the basic job almost mechanical much of the time. But there is a magic thing that happens when the muse strikes and I am inspired to piece shots together in some way that just perfectly speaks to the audience. Being such an important part of the storytelling process is uniquely rewarding for a storyteller like me.

Can you talk about challenges specific to short-form editing versus long-form?
Long-form editing is a test of your ability to maintain a fresh perspective of your story to keep the pacing correct. If you’ve been editing a project for weeks or months at a time, you know the story and all the pieces inside out. That can make it difficult to realize you might be giving too much information or not enough to the audience. Probably the most important skill for long form is the ability to watch a cut you’ve been working on for a long time and see it as a first-time viewer. I don’t know how others handle it, but for me there is a mental process that just blanks out the past when I want to take a critical fresh viewing.

Short form brings the challenge of being ruthless. You need to eliminate every frame of unnecessary material without sacrificing the message. While the editors don’t need to keep their focus for weeks or months, they have the challenge of getting as much information into that short time as possible without overwhelming the audience. It’s a lot like sprinting versus running a marathon. It exercises a different creative muscle that also enjoys an immediate reward.

Lafayette Escadrille

I can’t say I prefer either one over the other, but I would be bored if I didn’t get to do both over time, as they bring different disciplines and rewards.

How have social media campaigns changed the way you edit, if at all? Can you talk about the variety of deliverables and how that affects things?
Well, there is the horrible vertical framing trend, but that appears to be waning, thankfully. Seriously, though, the Instagram “one minute” limit forces us all to become commercial editors. Trying to tell the story in as short a timeframe as possible, knowing it will probably be viewed on a phone in a bright and noisy environment is a new challenge for seasoned editors.

There is a big difference between having a captive audience in a theater or at home in front of the TV and having a scattered audience whose attention you are trying to hold exclusively amid all the distractions. This seems to require more overt attention-grabbing tricks, and it’s unfortunate that storytelling has come to this point.

As for deliverables, they are constantly evolving, which means each project can bring all new requirements. We really have to work backward from the deliverables now. In other words, one of our first questions now is, “Where is this going?” That way we can plan the appropriate workflows from the start.

What system do you edit on and what else other than editing are you asked to supply?
I primarily edit on Media Composer, as it’s the industry standard in my world. As an editor, I can learn to use any tool; I have cut with Premiere and FCP. Knowing where to make the edit is far more important than knowing how to make the edit.

When I started editing in the film days, we just cut picture and dialogue. There were other editors for sound beyond the basic location-recorded sound. There were labs from which you ordered something as simple as a dissolve or a fade to black. There were color timers at the film lab who handled the look of the film. There were negative cutters that conformed the final master. There were VFX houses that handled anything that wasn’t actually shot.

Now, every editor has all the tools at hand to do all those tasks themselves. While this is helpful in keeping costs down and not slowing the process, it requires editors to be a jack-of-all-trades. However, what typically follows that term is “and master of none.”

Night Walk

One of the main advantages of separate people handling different parts of the process is that they could become really good at their particular art. Experience is the best teacher, and you learn more doing the same thing every day than occasionally doing it. I’ve met a few editors over the years that truly are masters in multiple skills, but they are few and far between.

Using myself as an example, if the client wants some creatively designed show open, I am not the best person for that. Can I create something? Yes. Can I use After Effects? Yes, to a minor degree. Am I the best person for that job? No. It is not what I have trained myself to do over my career. There is a different skill set involved in deciding where to make a cut versus how to create a heavily layered, graphically designed show open. If that is what I had dedicated my career to doing, then I would probably be really good at it, but I wouldn’t be as good at knowing where to make the edit.

What projects have gone through the studio recently?
We work on a lot of projects at AlphaDogs. The bulk of our work is on modest-budget features, documentaries and unscripted TV shows. Recent examples include a documentary on World War I fighter pilots called The Lafayette Escadrille and an action-thriller starring Eric Roberts and Mickey Rourke called Night Walk.

Unfortunately for me I have become so focused on running the company that I haven’t been personally working on the creative side as much as I would like. While keeping a post house running in the current business climate is its own challenge, I don’t particularly find it as rewarding as “being in the chair.”

That feeling is offset by looking back at all the careers I have helped launch through our internship program and by offering entry-level employment. I’ve also tried hard to help editors over the years through venues like online user groups and, of course, our own Editors’ Lounge events and videos. So I guess that even running a post house can be rewarding in its own way.

Luma Touch Co-Founder/Lead Designer Terri Morgan

Have there been any talks among NLE providers about an open timeline? Being able to go between Avid, Resolve or Adobe with one file like an AAF or XML?
Because every edit system uses its own editing paradigms (think Premiere versus FCP X), creating an open exchange is challenging. However, there is an interesting effort by Pixar (https://github.com/PixarAnimationStudios/OpenTimelineIO) that includes adapters to bridge the structural differences among a wide range of editors. There are also efforts toward standards for effects and color correction. The core editing functionality in LumaFusion is built to allow easy conversion in and out of different formats, so adapting to new standards will not be challenging in most cases.
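
For readers curious about that effort, here is a minimal sketch of OpenTimelineIO's Python API: read a cut exported from one NLE and write it back out in another interchange format. This assumes the opentimelineio package (plus the adapter matching your source format) is installed; the filenames are placeholders and have nothing to do with LumaFusion's own pipeline.

```python
import opentimelineio as otio

# Read the timeline via whichever adapter is registered for the extension
# (an FCP 7 XML export in this hypothetical example).
timeline = otio.adapters.read_from_file("my_cut.xml")

print(timeline.name, "-", len(timeline.video_tracks()), "video track(s)")

# Walk every clip and report its name and duration.
for clip in timeline.each_clip():
    print(clip.name, clip.duration())

# Write the same structure back out as a native .otio file; swapping the
# extension targets any other installed adapter instead.
otio.adapters.write_to_file(timeline, "my_cut.otio")
```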

With AI becoming a popular idea and term, at what point does it stop? Is there a line where AI won’t go?
Looking at AI strictly as it relates to video editing, we can see that its power is incrementally increasing, and automatically generated movies are getting better. But while a neural network might be able to put together a coherent story, and even mimic a series of edits to match a professional style, it will still be cookie-cutter in nature, rather than being an artistic individual endeavor.

What we understand from our customers — and from our own experience — is that people get profound joy from being the storyteller or the moviemaker. And we understand that automatic editing does not provide the creative/ownership satisfaction that you get from crafting your own movie. You only have to make one automatic movie to learn this fact.

It is also clear that movie viewers feel a lack of connection or even annoyance when watching an automatically generated movie. You get the same feeling when you pay for parking at an automated machine, and the machine says, “Thank you, have a nice day.”

Here is a question from one of our readers: There are many advancements in technology coming in NLEs. Are those updates coming too fast and at an undesirable cost?
It is a constant challenge to maintain quality while improving a product. We use software practices like Agile, engage in usability tests and employ testing as robust as possible to minimize the effects of any changes in LumaFusion.

In the case of LumaFusion, we are consistently adding new features that support more powerful mobile video editing and features that support the growing and changing world around us. In fact, if we stopped developing so rapidly, the app would simply stop working with the latest operating system or wouldn’t be able to deliver solutions for the latest trends and workflows.

To put it all in perspective, I like to remind myself of the amount of effort it took to edit video 20 years ago compared to how much more efficient and fun it is to edit a video now. It gives me reason to forgive the constant changes in technology and software, and reason to embrace new workflows and methodologies.

Will we ever be at a point where an offline/online workflow will be completely gone?
Years ago, the difference in image quality provided a clear separation between offline and online. But today, online is differentiated by the ability to edit with dozens of tracks, specialized workflows, specific codecs, high-end effects and color. Even more importantly, online editing typically uses the specialized skills that a professional editor brings to a project.

Since you can now edit a complex timeline with six tracks of 4K video with audio and another six tracks of audio, basic color correction and multilayered titles straight from an iPad, for many projects you might find it unnecessary to move to an online situation. But there will always be times that you need more advanced features or the skills of a professional editor. Since not everybody wants to understand the complex world of post production, it is our challenge at Luma Touch to make more of these high-end features available without greatly limiting who can successfully use the product.

What are the trends you’re seeing in customer base from high-end post facility vs. independent editor/contractor?
High-end post facilities tend to have stationary workstations that employ skilled editor/operators. The professionals that find LumaFusion to be a valuable tool in their bag are often those who are responsible for the entire production and post production, including independent producers, journalists and high-end professionals who want the flexibility of starting to edit while on location or while traveling.

What are the biggest trends you’ve been seeing in product development?
In general, moving away from lengthy periods of development without user feedback. Moving toward getting feedback from users early and often is an Agile-based practice that really makes a difference in product development and greatly increases the joy that our team gets from developing LumaFusion. There’s nothing more satisfying than talking to real users and responding to their needs.

New development tools, languages and technologies are always welcome. At WWDC this year, Apple announced it would make it easier for third-party developers to port their iOS apps over to the desktop with Project Catalyst. This will likely be a viable option for LumaFusion.

You come from a high-end editing background, with deep experience editing at the workstation level. When you decided to branch off and do something on your own, why did you choose mobile?
Mobile offered a solution to some of the longest running wishes in professional video editing: to be liberated from the confines of an edit suite, to be able to start editing on location, to have a closer relationship to the production of the story in order to avoid the “fix it in post” mentality, and to take your editing suite with you anywhere.

It was only after starting to develop for mobile that we fully understood one of the most appealing benefits. Editing on an iPad or iPhone encourages experimentation, not only because you have your system with you when you have a good idea, but also because you experience a more direct relationship to your media when using the touch interface; it feels more natural and immersive. And experimentation equals creativity. From my own experience I know that the more you edit, the better you get at it. These are benefits that everyone can enjoy whether they are a professional or a novice.

Hecho Studios Editor Grant Lewis

What trends do you see in commercial editing? Good or bad.
Commercials are trending away from traditional, large-budget cinematic pieces to smaller, faster, budget-conscious ones. You’re starting to see it now more and more as big brands shy away from big commercial spectacles and pivot toward a more direct reflection of the culture itself.

Last year’s #CODNation work for the latest installment of the Call of Duty franchise exemplifies this by forgoing a traditional live-action cinematic trailer in favor of a larger number of game-capture, meme-like films. This pivot away from more dialogue-driven narrative structures is changing what we think of as a commercial. For better or worse, I see commercial editing leaning more into the fast-paced, campy nature of meme culture.

What is it about commercial editing that attracted you and keeps attracting you?
What excites me most about commercial editing is that it runs the gamut of the editorial genre. Sometimes commercials are a music video; sometimes they are dramatic anthems; other times they are simple comedy sketches. Commercials have the flexibility to exist as a multitude of narrative genres, and that’s what keeps me attracted to commercial editing.

Can you talk about challenges specific to short form versus long form?
The most challenging thing about short-form editing is finding time for breath. In a 30-second piece, where do you find a moment of pause? There’s always so much information being packed into smaller timeframes; the real challenge is editing at a sprint, but still having it feel dynamic and articulate.

How have social media campaigns changed the way you edit, if at all? Can you talk about the variety of deliverables and how that affects things?
All campaigns will either live on social media or have specific social components now. I think the biggest thing that has changed is being tasked with telling a compelling narrative in 10 or even five or six seconds. Now, the 60-second and 90-second anthem film has to be able to work in six seconds as well. It is challenging to boil concepts down to just a few seconds and still maintain a sense of story.

#CODNation

The variety of deliverable aspect ratios editors are asked to produce is also a growing challenge. Unless a campaign is strictly shot for social, the DP probably shot for a traditional 16×9 framing. That means the editor is tasked with reframing all social content to work in all the different deliverable formats. This makes the editor act almost as the DP for social in the post process. Shorter deliverables and a multitude of aspect ratios have just become another layer of editing and demand a whole new editorial lens through which to view and process the project.

What system do you edit on and what else other than editing are you asked to supply?
I currently cut in Adobe Premiere Pro. I’m often asked to supply graphics and motion graphic elements for offline cuts as well. That means being comfortable with the whole Adobe suite of tools, including Photoshop and After Effects. From typesetting to motion tracking, editors are now asked to be well-versed in all tangential aspects of editorial.

What projects have you worked on recently?
I cut the launch film for Razer’s new Respawn energy drink. I also cut Toms Shoes’ most recent campaign, “Stand For Tomorrow.”

EditShare Head of Marketing Lee Griffin

What are the biggest trends you’ve been seeing in product development?
We see the need to produce more video content — and produce it faster than ever before — for social media channels. This means producing video in non-broadcast standards/formats and, more specifically, producing square video. To accommodate, editing tools need to offer user-defined options for manipulating size and aspect ratio.

What changes have you seen in terms of the way editors work and use your tools?
There are two distinct changes: One, productions are working with editors regardless of their location. Two, there is a wider level of participation in the content creation process.

In the past, the editor was physically located at the facility and was responsible for assembling, editing and finishing projects. However, with the growing demand for content production, directors and producers need options to tap into a much larger pool of talent, regardless of their location.

EditShare AirFlow and Flow Story enable editors to work remotely from any location. So today, we frequently see editors who use our Flow editorial tools working in different states and even on different continents.

With AI becoming a popular idea and term, at what point does it stop?
I think AI is quite exciting for the industry, and we do see its potential to significantly advance productions. However, AI is still in its infancy with regards to the content creation market. So from our point of view, the road to AI and its limits are yet to be defined. But we do have our own roadmap strategy for AI and will showcase some offerings integrated within our collaborative solutions at IBC 2019.

Will we ever be at a point where an offline/online workflow will be completely gone?
It depends on the production. Offline/online workflows are here to stay in the higher-end production environment. However, for fast turnaround productions, such as news, sports and programs (for example, soap operas and reality TV), there is no need for offline/online workflows.

What are the trends you’re seeing in your customer base, from high-end post facility vs. independent editor? How is that informing your decisions on products and pricing?
With the increase in the number of productions thanks to OTTs, high-end post facilities are tapping into independent editors more and more to manage the workload. Often the independent editor is remote, requiring the facility to have a media management foundation that can facilitate collaboration beyond the facility walls.

So we are seeing a fundamental shift in how facilities are structuring their media operations to support remote collaborations. The ability to expand and contract — with the same level of security they have within the facility — is paramount in architecting their “next-generation” infrastructure.

What do you see as untapped potential customer bases that didn’t exist 10 to 20 years ago, and how do you plan on attracting and nurturing them? What new markets are you seeing?
We are seeing major growth beyond the borders of the media and entertainment industry in many markets. From banks to real estate agencies to insurance companies, video has become one of the main ways for them to communicate to their media-savvy clientele.

While EditShare solutions were initially designed to support traditional broadcast deliverables, we have evolved them to accommodate these new customers. And today, these customers want simplicity coupled with speed. Our development methodology puts this at the forefront of our core products.

Puget Systems Senior Labs Technician Matt Bach

Have there been any talks between NLE providers about an open timeline? Essentially, being able to go between Avid, Resolve or Adobe with one file, like an AAF or XML?
I have not heard anything on this topic from any developers, so keep in mind that this is pure conjecture, but the pessimistic side of me doesn’t see an “open timeline” being something that will happen anytime soon.

If you look at what many of the NLE developers are doing, they are moving more and more toward a pipeline that is completely contained within their ecosystem. Adobe has been pushing Dynamic Link in recent years in order to make it easier to move between Premiere Pro and After Effects. Blackmagic is going even a step further by integrating editing, color, VFX and audio all within DaVinci Resolve.

These examples are both great advancements that can really improve your workflow efficiency, but they are being done in order to keep the user within their specific ecosystem. As great as an open timeline would be, it seems to be counter to what Adobe, Blackmagic, and others are actively pursuing. We can still hold out hope, however!

With AI becoming a popular idea and term, at what point does it stop?
There are definitely limitations to what AI is capable of, but that line is moving year by year. For the foreseeable future, AI is going to take on a lot of the tedious tasks like tagging of footage, content-aware fill, shot matching, image enhancement and other similar tasks. These are all perfect use cases for artificial intelligence, and many (like content-aware fill) are already being implemented in the software we have available right now.

The creative side is where AI is going to take the longest time to become useful. I’m not sure if there is a point where AI will stop from a technical standpoint, but I personally believe that even if AI was perfect, there is value in the fact that an actual person made something. That may mean that the masses of videos that get published will be made by AI (or perhaps simply AI-assisted), but just like furniture, food, or even workstations, there will always be a market for high-quality items crafted by human hands.

I think the main thing to keep in mind with AI is that it is just a tool. Moving from black and white to color, or from film to digital, was something that, at the time, people thought was going to destroy the industry. In reality, however, those shifts ended up being a huge boon. Yes, AI will change how some jobs are approached — and may even eliminate some job roles entirely — but in the end, a computer is never going to be as creative and inventive as a real person.

There are many advancements in technology coming to NLEs seemingly daily. Are those updates coming too fast and at an undesirable cost?
I agree that this is a problem right now, but it isn’t limited to just NLEs. We see the same thing all the time in other industries, and it even occurs on the hardware side where a new product will be launched simply because they could, not because there is an actual need for it.

The best thing you can do as an end-user is to provide feedback to the companies about what you actually want. Don’t just sit on those bugs, report them! Want a feature? Most companies have a feature request forum that you can post on.

In the end, these companies are doing what they believe will bring them the most users. If they think a flashy new feature will do it, that is what they will spend money on. But if they see a demand for less flashy, but more useful, improvements, they will make that a priority.

Will we ever be at a point where an offline/online workflow will be completely gone?
Unless we hit some point where camera technology stops advancing, I don’t think offline editing is ever going to fully go away. It is amazing what modern workstations can handle from a pure processing standpoint, but even if the systems themselves could handle online editing, you also need to have the storage infrastructure that can keep up. With the move from HD to 4K, and now to 8K, that is a lot of moving parts that need to come together in order to eliminate offline editing entirely.

With that said, I do feel like offline editing is going to be used less and less. We are starting to hit the point that people feel their footage is higher quality than they need without having to be on the bleeding edge. We can edit 4K ProRes or even Red RAW footage pretty easily with the technology that is currently available, and for most people that is more than enough for what they are going to need for the foreseeable future.

What are the trends you’re seeing in customer base from high-end post facility vs. independent editor, and how is that informing your decisions on products and pricing?
From a workstation side, there really is not too much of a difference beyond the fact that high-end post facilities tend to have larger budgets that allow them to get higher-end machines. Technology is becoming so accessible that even hobbyist YouTubers often end up getting workstations from us that are very similar to what high-end professionals use.

The biggest differences typically revolve not around the pure power or performance of the system itself, but around how it interfaces with the other tools the editor is using. Things like whether the system has 10GbE (or fiber) networking, or whether it needs a video monitoring card in order to connect to a color-calibrated display, are often what set systems apart.

What are the biggest trends you’ve been seeing in product development?
In general, the two big things that have come up over and over in recent years are GPU acceleration and artificial intelligence. GPU acceleration is a pretty straightforward advancement that lets software developers get a lot more performance out of a system for tasks like color correction, noise reduction and other work that is very well suited to running on a GPU.

Artificial intelligence is a completely different beast. We do quite a bit of work with people who are at the forefront of AI and machine learning, and it is going to have a large impact on post production in the near future. It has been a topic at conferences like NAB for several years, but with platforms like Adobe Sensei starting to take off, it is going to become more important.

However, I do feel that AI is going to be more of an enabling technology than one that replaces jobs. Yes, people are using AI to do crazy things like cut trailers without any human input, but I don’t think that is going to be the primary use of it anytime in the near future. Where it will be truly useful is in things like assisting with shot matching, tagging footage, noise reduction and image enhancement.

What do you see as untapped potential customer bases that didn’t exist 10-20 years ago, and how do you plan on attracting and nurturing them? What new markets are you seeing?
I don’t know if there are any customer bases that are completely untapped, but I do believe that there is going to be more overlap between industries in the next few years. One example is how much realtime raytracing has improved recently, which is spurring the use of video game engines in film. This has been done for previsualization for quite a while, but the quality is getting so good that there are some films already out that include footage straight from the game engine.

For us on the workstation side, we regularly work with customers doing post and customers who are game developers, so we already have the skills and technical knowledge to make this work. The biggest challenge is really on the communication side. Both groups have their own set of jargon and general language, so we often find ourselves having to be the “translator” when a post house is looking at integrating realtime visualization in their workflow.

This exact scenario is also likely to happen with VR/AR as well.

Lucky Post Editor Marc Stone

What trends do you see in commercial editing?
I’m seeing an increase in client awareness of the mobility of editing. It’s freeing knowing you can take the craft with you as needed, and for clients, it can save the ever-precious commodity of time. Mobility means we can be an even greater resource to our clients with a flexible approach.

I love editing at Lucky Post, but I’m happy to edit anywhere I am needed — be it on set or on location. I especially welcome it if it means you can have face-to-face interaction with the agency team or the project’s director.

What is it about commercial editing that attracted you and keeps attracting you?
The fact that I can work on many projects throughout the year, with a variety of genres, is really appealing. Cars, comedy, emotional PSAs — each has a unique creative challenge, and I welcome the opportunity to experience different styles and creative teams. I also love putting visuals together with music, and that’s a big part of what I do in a 30- or 60-second spot… or even in a two-minute branded piece. That just wouldn’t be possible, to the same extent, in features or television.

Can you talk about challenges specific to short-form editing?
The biggest challenge is telling a story in 30 seconds. To communicate emotion and a sense of character and get people to care, all within a very short period of time. People outside of our industry are often surprised to hear that editors take hours and hours of footage and hone it down to a minute or less. The key is to make each moment count and to help make the piece something special.

Ram’s The Promise spot

How have social media campaigns changed the way you edit, if at all?
It hasn’t changed the way I edit, but it does allow some flexibility. Length isn’t constrained in the same way as broadcast, and you can conceive of things in a different way in part because of the engagement approach and goals. Social campaigns allow agencies to be more experimental with ideas, which can lead to some bold and exciting projects.

What system do you edit on, and what else other than editing are you asked to supply?
For years I worked on Avid Media Composer, and at Lucky Post I work in Adobe Premiere. As part of my editing process, I often weave sound design and music into the offline so I can feel if the edit is truly working. What I also like to do, when the opportunity presents, is to be able to meet with the agency creatives before the shoot to discuss style and mood ahead of time.

What projects have you worked on recently?
Over the last six months, I have worked on projects for Tazo, Ram and GameStop, and I am about to start a PSA for the Salvation Army. It gets back to the variety I spoke about earlier and the opportunity to work on interesting projects with great people.

Billboard Video Post Supervisor/Editor Zack Wolder

What trends do you see in editing? Good or bad?
I’m noticing a lot of glitch transitions and RGB splits being used. Much flashier edits, probably for social content to quickly grab the viewer’s attention.

Can you talk about challenges specific to short-form editing versus long-form?
With short-form editing, the main goal is to squeeze the most amount of useful information into a short period of time while not overloading the viewer. How do you fit an hour-long conversation into a three-minute clip while hitting all the important talking points and not overloading the viewer? With long-form editing, the goal is to keep viewers’ attention over a long period of time while always surprising them with new and exciting info.

What is it about editing that attracted you and keeps attracting you?
I loved the fact that I could manipulate time. That hooked me right away. The fact that I could take a moment that lasts only a few seconds and drag it out for a few minutes was incredible.

Can you talk about the variety of deliverables for social media and how that affects things?
Social media formats have made me think differently about framing a shot or designing logos. Almost all the videos I create start in the standard 16×9 framing but will eventually be delivered as a vertical. All graphics and transitions I build need to easily work in a vertical frame. Working in a 4K space and shooting in 4K helps tremendously.
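
As a rough illustration of why working in 4K helps, here is a small arithmetic sketch (purely illustrative, not anything from Billboard’s pipeline) of how much of a 16×9 UHD frame survives when cropping to square and vertical deliverables.

```python
SRC_W, SRC_H = 3840, 2160  # 16x9 UHD source frame

def crop_window(ratio_w: int, ratio_h: int, src_w: int = SRC_W, src_h: int = SRC_H):
    """Largest crop of the source frame that matches the target aspect ratio."""
    target = ratio_w / ratio_h
    if target < src_w / src_h:
        # Target is narrower than the source: keep full height, crop the sides.
        return round(src_h * target), src_h
    # Target is wider than the source: keep full width, crop top and bottom.
    return src_w, round(src_w / target)

print(crop_window(1, 1))   # (2160, 2160) -- square crop
print(crop_window(9, 16))  # (1215, 2160) -- vertical crop, still wider than a 1080x1920 delivery
```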

Rainn Wilson and Billie Eilish

What system do you edit on, and what else other than editing are you asked to supply?
I edit in Adobe Premiere Pro. I’m constantly asked to supply design ideas and mockups for logos and branding and then to animate those ideas.

What projects have you worked on recently?
Recently, I edited a video that featured Rainn Wilson — who played Dwight Schrute on The Office — quizzing singer Billie Eilish, who is a big-time fan of the show.

Main Image: AlphaDogs editor Herrianne Catolos


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Behind the Title: Exile editor Lorin Askill

Name: Lorin Askill

Company: Exile (@exileedit)

Can you describe your company?
Exile is an editorial and finishing house based in NYC and LA. I am based in New York.

What’s your job title?
Editor

What does that entail?
I take moving images, sound and other raw materials and arrange them in time to create shape and meaning and, ultimately, tell stories. I always loved the Tarkovsky book title, Sculpting in Time. I like to think that is what I do.

What would surprise people the most about what falls under that title?
Probably how much of an all-encompassing creative process it is. As well as editing picture, I source and edit sounds, I experiment with music, I create rough comps and block compositions for VFX, I play with color and place titles. At its best, editing is not only finding the best pieces of footage and ordering them to tell a story; the editor is crafting the whole visual-aural world that will be carried through to the finished piece.

Hyundai

What’s your favorite part of the job?
I love watching the first cut! When you’re excited about a project, you’ve found the gems and assembled your favorite pieces, solved some challenging problems and fudged together some tricky stunt or effects moments (and it’s already working!). Then you put a piece of music under it (which you know you can’t actually use), and you feel like it has a good shape and runs from start to finish — usually very over length. It’s so much fun getting to this stage, then sitting back, turning the volume up, pressing play and watching it all together for the first time!

What’s your least favorite part?
My least favorite part is then going through and destroying that first cut with boring realities like running length and client requirements… JOKING. I also love the process of tightening and honing a cut to hit all the right notes and achieve the ultimate vision. But there is nothing like watching the first assembly of a project you love.

What is your most productive time of the day?
Probably first thing when I’ve got fresh eyes and I’m solving problems that seemed impossible the day before. Also the very end of the day when you’re in a little delirious zone and you’re really immersed and engrossed. When I’m cutting a music video, I like to pull up the project late at night and give myself the freedom to play because your brain is definitely functioning in a different way, and sometimes it’s really creative.

If you didn’t have this job, what would you be doing instead?
I think I’d photograph landscapes and spread environmental awareness while having food pop-ups in my garden.

Why did you choose this profession? How early on did you know this would be your path?
Ever since I got my first iMac in high school and started speeding up, slowing down and reversing footage in iMovie, I was addicted to it. I was manipulating time and creating stories with images and sound, and it felt like a beautiful combination of visual art and music, both of which I loved and studied. When I realized I could make a living being creative, and hopefully one day make movies, it seemed like a no-brainer.

Sia

Can you name some recent projects you have worked on?
Most recently I’ve been editing a passion project. It’s a short film directed by my brother, a proof-of-concept for a film we’ve been writing together for a long time. Before that, I was working on a bunch of commercial projects while also cutting musical sequences for a feature film directed by Sia.

What do you use to edit?
I grew up on iMovie and then Final Cut Pro. Now I use Adobe Premiere Pro and find it does exactly what I need it to do.

Name a few pieces of technology you can’t live without.
I hate to say my phone, but it’s undeniable. My laptop for edits on the run. Good headphones. My Hasselblad from the ‘60s.

What do you do to de-stress from it all?
I get into nature whenever possible, and I cook.


Adobe’s new Content-Aware fill in AE is magic, plus other CC updates

By Brady Betzel

NAB is just under a week away, and we are here to share some of Adobe’s latest Creative Cloud offerings. There are a few updates worth mentioning, such as a freeform Project panel in Premiere Pro, AI-driven Auto Ducking for ambience in Audition and the addition of a Twitch extension for Character Animator. But, in my opinion, the After Effects updates are what this year’s release will be remembered for.


Content Aware: Here is the before and after. Our main image is the mask.

There is a new expression editor in After Effects, so those of us who are old pseudo-website designers can now feel at home with highlighting, line numbers and more. There are also performance improvements, such as faster project loading times and new deBayering support for Metal on macOS. But the first-prize ribbon goes to the Content-Aware Fill for video, powered by Adobe Sensei, the company’s AI technology. It’s one of those voodoo features that will blow you away when you use it. If you have ever used Mocha Pro by BorisFX, then you have had access to a similar tool known as Object Removal. Essentially, you draw around the object you want to remove, such as a camera shadow or boom mic, hit the magic button and your object will be removed with a new background in its place. This will save users hours of manual work.

Freeform Project panel in Premiere.

Here are some details on other new features:

● Freeform Project panel in Premiere Pro — Arrange assets visually and save layouts for shot selects, production tasks, brainstorming story ideas and assembly edits.
● Rulers and Guides — Work with familiar Adobe design tools inside Premiere Pro, making it easier to align titling, animate effects and ensure consistency across deliverables.
● Punch and Roll in Audition — The new feature provides efficient production workflows in both Waveform and Multitrack for long-form recording, including voiceover and audiobook creation.
● Twitch live-streaming triggers with the Character Animator extension — Surprise viewers as audiences engage with characters in realtime, with on-the-fly costume changes, impromptu dance moves and signature gestures and poses — a new way to interact, and even monetize, using Bits to trigger actions.
● Auto Ducking for ambient sound in Audition and Premiere Pro — Also powered by Adobe Sensei, Auto Ducking now allows for dynamic adjustments to ambient sounds against spoken dialog. Keyframed adjustments can be manually fine-tuned to retain creative control over a mix.
● Adobe Stock — Now offers 10 million professional-quality, curated, royalty-free HD and 4K video clips and Motion Graphics templates from leading agencies and independent editors to use for editorial content, establishing shots or filling gaps in a project.
● Premiere Rush — Introduced late last year, Rush offers a mobile-to-desktop workflow integrated with Premiere Pro for on-the-go editing and video assembly. Built-in camera functionality in Premiere Rush helps you take pro-quality video on your mobile devices.

The new features for Adobe Creative Cloud are now available with the latest version of Creative Cloud.


Arvato to launch VPMS MediaEditor NLE at NAB

First seen as a technology preview at IBC 2018, Arvato’s MediaEditor is a browser-based desktop editor aimed at journalistic editing and content preparation workflows. MediaEditor projects can be easily exported and published in various formats, including square and vertical video, or can be opened in Adobe Premiere with VPMS EditMate for craft editing.

MediaEditor, which features a familiar editing interface, offers simple drag-and-drop transitions and effects, as well as basic color correction. Users can also record voiceovers directly into a sequence, and the system enables automatic mixing of audio tracks for quicker turnaround. Arvato will add motion graphics for captioning and pre-generated graphics in an upcoming version of MediaEditor.

MediaEditor is a part of Arvato Systems’ Video Production Management Suite (VPMS) enterprise MAM solution. Like other products in the suite, it can be independently deployed and scaled, or combined with other products for workflows across the media enterprise. MediaEditor can also be used with Vidispine-based systems, and VPMS and Vidispine clients can access their material through MediaEditor whether on-premises or via the cloud. MediaEditor takes advantage of the advanced VPMS streaming technology, allowing users to work anywhere with high-quality, responsive video playback, even on lower-speed connections.


InSync intros frame rate converter plug-in for Mac-based Premiere users

InSync Technology’s FrameFormer motion compensated frame rate converter now is available as a plug-in for Adobe Premiere Pro users working on Macs. Simplifying and accelerating deployment through automated settings, FrameFormer provides conversion for all types of content from sub-QCIF up to 8K and beyond.

“Frame rate conversion is an essential requirement for monetizing content domestically and internationally, as well as for integrating mixed frame rate footage into a production,” says Paola Hobson, managing director of InSync Technology. “A high-quality motion compensated standards converter is the only solution for these applications, and we’re adding to our solutions for Mac users with our new FrameFormer plug-in for Adobe Premiere Pro for macOS.”

The FrameFormer Adobe Premiere Pro Mac plug-in complements InSync’s plug-ins for Final Cut Pro (Mac) and Adobe Premiere Pro (Windows), quickly and conveniently meeting any frame rate and format conversion requirements. Integrated seamlessly into Adobe Premiere Pro, the plug-in offers a simple user interface that allows users to select the required conversion and to preview in-progress results via on-screen thumbnails.

“In repurposing different frame rate material for integration into your media projects, attention to detail makes all the difference,” added Hobson. “Picture quality must be preserved at every step because even the smallest error introduced early in the process will propagate, resulting in highly visible defects down the line. Now our family of FrameFormer plug-ins gives Adobe Premiere Pro users working on both Mac and Windows systems confidence in the results of their frame rate conversion processes.”

FrameFormer is available in a standard edition that provides conversions for content up to HD resolution, with presets for common conversions, and in a professional edition that provides conversions for content up to UHD and beyond.


New codec, workflow options via Red, Nvidia and Adobe

By Mike McCarthy

There were two announcements last week that will impact post production workflows. The first was the launch of Red’s new SDK, which leverages Nvidia’s GPU-accelerated CUDA framework to deliver realtime playback of 8K Red footage. I’ll get to the other news shortly. Nvidia was demonstrating an early version of this technology at Adobe Max in October, and I have been looking forward to this development since I am about to start post on a feature film shot on the Red Monstro camera. This should effectively render the RedRocket accelerator cards obsolete, replacing them with cheaper, multipurpose hardware that can also accelerate other computational tasks.

While accelerating playback of 8K content at full resolution requires a top-end RTX series card from Nvidia (Quadro RTX 6000, Titan RTX or GeForce RTX 2080Ti), the technology is not dependent on RTX’s new architecture (RT and Tensor cores), allowing earlier generation hardware to accelerate smooth playback at smaller frame sizes. Lots of existing Red footage is shot at 4K and 6K, and playback of these files will be accelerated on widely deployed legacy products from previous generations of Nvidia GPU architecture. It will still be a while before this functionality is in the hands of end users, because now Adobe, Apple, Blackmagic and other software vendors have to integrate the new SDK functionality into their individual applications. But hopefully we will see those updates hitting the market soon (targeting late Q1 of 2019).

Encoding ProRes on Windows via Adobe apps
The other significant update, which is already available to users as of this week, is Adobe’s addition of ProRes encoding support in its video apps on Windows. Developed by Apple, ProRes encoding has been available on Mac for a long time, and ProRes decoding and playback have been available on Windows for over 10 years. But creating ProRes files on Windows has always been a challenge. Fixing this was less a technical challenge than a political one, as Apple owns the codec and it is not technically a standard. So while there were some hacks available at various points during that time, Apple has severely restricted the official encoding options available on Windows… until now.

With the 13.0.2 release of Premiere Pro and Media Encoder, as well as the newest update to After Effects, Adobe users on Windows systems can now create ProRes files in whatever flavor they happen to need. This is especially useful since many places require delivery of final products in the ProRes format. In this case, the new export support is obviously a win all the way around.

Adobe Premiere

Now users have yet another codec option for all of their intermediate files, prompting another look at the question: Which codec is best for your workflow? With this release, Adobe users have at least three major options for high-quality intermediate codecs: Cineform, DNxHR and now ProRes. I am limiting the scope to integrated cross-platform codecs supporting 10-bit color depth, variable levels of image compression and customizable frame sizes. Here is a quick overview of the strengths and weaknesses of each option:

ProRes
ProRes was created by Apple over 10 years ago and has become the de facto standard throughout the industry, despite the fact that it is entirely owned by Apple. ProRes is now fully cross-platform compatible, has options for both YUV and RGB color and has six variations, all of which support at least 10-bit color depth. The variable bit rate compression scheme scales well with content complexity, so encoding black or static images doesn't require as much space as full-motion video. It also supports alpha channels with compression, but only in the 444 variants of the codec.

Recent tests on my Windows 10 workstation showed ProRes taking 3x to 5x as much CPU power to play back as similar DNxHR or Cineform files, especially as frame sizes get larger. The codec supports 8K frame sizes, but playback will require much more processing power. I can't even play back UHD files in ProRes 444 at full resolution, while the Cineform and DNxHR files have no problem, even at 444. This is less of a concern if you are only working at 1080p.

Whatever the 1080p file sizes are for your chosen codec, multiply them by four for UHD content (and by 16 for 8K content).

Cineform
Cineform, which has been available since 2004, was acquired by GoPro in 2011. GoPro has licensed the codec to Adobe (among other vendors), and it is available as "GoPro Cineform" in the AVI or QuickTime sections of the Adobe export window. Cineform is a wavelet compression codec, with 10-bit YUV and 12-bit RGB variants, which, like ProRes, supports compressed alpha channels in the RGB variant. The five levels of encoding quality are selected separately from the format, so higher levels of compression are available for 4444 content compared to the limited options available in the other codecs.

It usually plays back extremely efficiently on Windows, but my recent tests show that encoding to the format is much slower than it used to be. And while it has some level of support outside of Adobe applications, it is not as universally recognized as ProRes or DNxHD.

DNxHD
DNxHD was created by Avid for compressed HD playback and has since been extended to DNxHR (high resolution). It is a fixed bit rate codec, with each variant having a locked multiplier based on resolution and frame rate. This makes it easy to calculate storage needs but wastes space for files that are black or contain a lot of static content. It is available in MXF and MOV wrappers and has five levels of quality. The top option is 444 RGB, and all variants support alpha channels in MOV, but only uncompressed, which takes a lot of space. For whatever reason, Adobe has greatly optimized DNxHR playback in Premiere Pro across all variants, in both MXF and MOV wrappers. On my project 6Below, I was able to get 6K 444 files to play back, with lots of effects, without dropping frames. Encodes to and from DNxHR are faster in Adobe apps as well.

So for most PC-based Adobe users, DNxHR-LB (low bandwidth) is probably the best codec to use for intermediate work. We are using it to offline my current project, with 2.2K DNxHR-LB MOV files. People with heavy Mac interchange may lean toward ProRes, but plan on higher CPU specs to get the same level of application performance.
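Because DNxHR variants encode at a locked data rate (and ProRes at a predictable target rate), storage planning is simple arithmetic. Here is a minimal sketch; the 220 Mb/s figure is only an illustrative, assumed bitrate in the ballpark of a high-quality 1080p intermediate flavor, so check the published data-rate tables for your exact codec, frame size and frame rate:

```python
def file_size_gb(bitrate_mbps: float, duration_minutes: float) -> float:
    """Approximate file size in gigabytes for a (roughly) constant-bitrate codec."""
    megabits = bitrate_mbps * duration_minutes * 60
    return megabits / 8 / 1000  # Mb -> MB -> GB (decimal units)

# One hour of 1080p footage at an assumed 220 Mb/s intermediate bitrate
hd = file_size_gb(220, 60)
print(f"1080p: ~{hd:.0f} GB per hour")
print(f"UHD:   ~{hd * 4:.0f} GB per hour (4x the pixels)")
print(f"8K:    ~{hd * 16:.0f} GB per hour (16x the pixels)")
```

The UHD and 8K lines simply apply the multipliers mentioned above, since data rate scales with frame size for these codecs.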


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.


Adobe Max 2018: Creative Cloud updates and more

By Mike McCarthy

I attended my first Adobe Max last week in Los Angeles. This huge conference takes over the LA Convention Center and overflows into the surrounding venues. It began on Monday morning with a two-and-a-half-hour keynote outlining the developments and features being released in the newest updates to Adobe's Creative Cloud. This was followed by all sorts of smaller sessions and training labs for attendees to dig deeper into the new capabilities of the various tools and applications.

The South Hall was filled with booths from various hardware and software partners, with more available than any one person could possibly take in. Tuesday started off with some early morning hands-on labs, followed by a second keynote presentation about creative and career development. I got a front row seat to hear five different people, who are successful in their creative fields — including director Ron Howard — discuss their approach to work and life. The rest of the day was so packed with various briefings, meetings and interviews that I didn’t get to actually attend any of the classroom sessions.

By Wednesday, the event was beginning to wind down, but there was still a plethora of sessions and other options for attendees to split their time. I presented the workflow for my most recent project Grounds of Freedom at Nvidia’s booth in the community pavilion, and spent the rest of the time connecting with other hardware and software partners who had a presence there.

Adobe released updates for most of its creative applications concurrent with the event. Many of the most relevant updates to the video tools were previously announced at IBC in Amsterdam last month, so I won’t repeat those, but there are still a few new video ones, as well as many that are broader in scope in regards to media as a whole.

Adobe Premiere Rush
The biggest video-centric announcement is Adobe Premiere Rush, which offers simplified video editing workflows for mobile devices and PCs.  Currently releasing on iOS and Windows, with Android to follow in the future, it is a cloud-enabled application, with the option to offload much of the processing from the user device. Rush projects can be moved into Premiere Pro for finishing once you are back on the desktop.  It will also integrate with Team Projects for greater collaboration in larger organizations. It is free to start using, but most functionality will be limited to subscription users.

Let’s keep in mind that I am a finishing editor for feature films, so my first question (as a Razr-M user) was, “Who wants to edit video on their phone?” But what if the user shot the video on their phone? I don’t do that, but many people do, so I know this will be a valuable tool. This has me thinking about my own mentality toward video. I think if I was a sculptor I would be sculpting stone, while many people are sculpting with clay or silly putty. Because of that I would have trouble sculpting in clay and see little value in tools that are only able to sculpt clay. But there is probably benefit to being well versed in both.

I would have no trouble showing my son’s first-year video compilation to a prospective employer because it is just that good — I don’t make anything less than that. But there was no second-year video, even though I have the footage because that level of work takes way too much time. So I need to break free from that mentality, and get better at producing content that is “sufficient to tell a story” without being “technically and artistically flawless.” Learning to use Adobe Rush might be a good way for me to take a step in that direction. As a result, we may eventually see more videos in my articles as well. The current ones took me way too long to produce, but Adobe Rush should allow me to create content in a much shorter timeframe, if I am willing to compromise a bit on the precision and control offered by Premiere Pro and After Effects.

Rush allows up to four layers of video, with various effects and 32-bit Lumetri color controls, as well as AI-based audio filtering for noise reduction and de-reverb and lots of preset motion graphics templates for titling and such.  It should allow simple videos to be edited relatively easily, with good looking results, then shared directly to YouTube, Facebook and other platforms. While it doesn’t fit into my current workflow, I may need to create an entirely new “flow” for my personal videos. This seems like an interesting place to start, once they release an Android version and I get a new phone.

Photoshop Updates
There is a new version of Photoshop released nearly every year, and most of the time I can't tell the difference between the new and the old. This year's differences will probably be a lot more apparent to most users after a few minutes of use. The Undo command now works like it does in other apps instead of being limited to toggling the last action. Transform operates very differently, in that proportional transform is now the default behavior instead of requiring users to hold Shift every time they scale. The anchor point can be hidden to prevent people from moving the anchor instead of the image, and the "commit changes" step at the end has been removed. These are all positive improvements, in my opinion, that might take a bit of getting used to for seasoned pros.

There is also a new Framing Tool, which allows you to scale or crop any layer to a defined resolution. Maybe I am the only one, but I frequently find myself creating new documents in Photoshop just so I can drag a new layer, preset to the resolution I need, back into my current document. For example, I need a 200x300px box in the middle of my HD frame — how else do you do that currently? This Framing Tool should fill that hole in the feature set, giving more precise control over layer and object sizes and positions, as well as providing easily adjustable, non-destructive masking.

They also showed off a very impressive AI-based auto selection of the subject or background. It creates a standard selection that can be manually modified anywhere the initial attempt didn't give you what you were looking for. Being someone who gives software demos, I don't trust prepared demonstrations, so I wanted to try it for myself with a real-world asset. I opened one of the source photos from my animation project and clicked the "Select Subject" button with no further input. The result needed some cleanup at the bottom, and refinement in the newly revamped "Select & Mask" tool, but this is a huge improvement over what I had to do on hundreds of layers earlier this year. They also demonstrated a similar feature they are working on for video footage in Tuesday night's Sneak previews. Named "Project Fast Mask," it automatically propagates masks of moving objects through video frames and, while not released yet, it looks promising. Combined with the content-aware background fill for video that Jason Levine demonstrated in After Effects during the opening keynote, basic VFX work is going to get a lot easier.

There are also some smaller changes to the UI, allowing math expressions in the numerical value fields and making it easier to differentiate similarly named layers by showing the beginning and end of the name if it gets abbreviated.  They also added a function to distribute layers spatially based on the space between them, which accounts for their varying sizes, compared to the current solution which just evenly distributes based on their reference anchor point.

In other news, Photoshop is coming to iPad, and while that doesn’t affect me personally, I can see how this could be a big deal for some people. They have offered various trimmed down Photoshop editing applications for iOS in the past, but this new release is supposed to be based on the same underlying code as the desktop version and will eventually replicate all functionality, once they finish adapting the UI for touchscreens.

New Apps
Adobe also showed off Project Gemini, a sketch and painting tool for iPad that sits somewhere between Photoshop and Illustrator (hence the name, I assume). This doesn't have much direct application to video workflows, aside from the ability to record time-lapses of a sketch, which should make it easier to create the "whiteboard illustration" videos that are becoming more popular.

Project Aero is a tool for creating AR experiences, and I can envision Premiere and After Effects being critical pieces in the puzzle for creating the visual assets that Aero will be placing into the augmented reality space.  This one is the hardest for me to fully conceptualize. I know Adobe is creating a lot of supporting infrastructure behind the scenes to enable the delivery of AR content in the future, but I haven’t yet been able to wrap my mind around a vision of what that future will be like.  VR I get, but AR is more complicated because of its interface with the real world and due to the variety of forms in which it can be experienced by users.  Similar to how web design is complicated by the need to support people on various browsers and cell phones, AR needs to support a variety of use cases and delivery platforms.  But Adobe is working on the tools to make that a reality, and Project Aero is the first public step in that larger process.

Community Pavilion
Adobe’s partner companies in the Community Pavilion were showing off a number of new products.  Dell has a new 49″ IPS monitor, the U4919DW, which is basically the resolution and desktop space of two 27-inch QHD displays without the seam (5120×1440 to be exact). HP was displaying their recently released ZBook Studio x360 convertible laptop workstation, (which I will be posting a review of soon), as well as their Zbook X2 tablet and the rest of their Z workstations.  NVidia was exhibiting their new Turing-based cards with 8K Red decoding acceleration, ray tracing in Adobe Dimension and other GPU accelerated tasks.  AMD was demoing 4K Red playback on a MacBookPro with an eGPU solution, and CPU based ray-tracing on their Ryzen systems.  The other booths spanned the gamut from GoPro cameras and server storage devices to paper stock products for designers.  I even won a Thunderbolt 3 docking station at Intel’s booth. (Although in the next drawing they gave away a brand new Dell Precision 5530 2-in-1 convertible laptop workstation.)   Microsoft also garnered quite a bit of attention when they gave away 30 MS Surface tablets near the end of the show.  There was lots to see and learn everywhere I looked.

The Significance of MAX
Adobe Max is quite a significant event, especially now that I have been in the industry long enough to start to see the evolution of certain trends — things are not as static as we may expect. I have attended NAB for the last 12 years, and the focus of that show has shifted significantly away from my primary professional focus. (There were no Red, Nvidia or Apple booths, among many other changes.) This was the first year I had the thought, "I should have gone to Sundance," and a number of other people I know had the same impression. Adobe Max is going through a similar shift, although I have been a little slower to catch on to that change. It has been happening for over ten years, but the show has grown dramatically in size and significance recently. If I still lived in LA, I probably would have started attending sooner, but it was hardly on my radar until three weeks ago. Now that I have seen it in person, I probably won't miss it in the future.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.


Adobe launches Premiere Rush CC for social video

At the Adobe Max Creativity Conference, Adobe introduced Adobe Premiere Rush CC, the first all-in-one video editing app for social media creators that simplifies video creation and sharing on platforms such as YouTube and Instagram.

Designed specifically for online video creators, Premiere Rush CC integrates capture, intuitive editing, simplified color, audio and motion graphics with seamless publishing to leading social platforms, such as YouTube and Instagram, all in one easy-to-use solution.

With Premiere Rush CC, content creators do not have to be video, color or audio experts to publish professional-quality videos. Premiere Rush CC harnesses the power of Premiere Pro CC and After Effects CC; offers built-in access to professionally designed Motion Graphics templates in Adobe Stock to get started quickly; and features a Sensei-powered, one-click auto-duck feature to adjust music and normalize sound. It also allows access anywhere, enabling users to create compelling video projects — optimized for social distribution — on one device and publish from another with a consistent user experience across desktop and mobile.

Premiere Rush CC is available now on Windows and macOS and via the iOS App Store. (Google Play store availability is coming in 2019.) Adobe offers a variety of pricing plans tailored for customers’ needs:

• Premiere Rush CC is available for $9.99/month to individuals, $19.99/month to teams and $29.99/month to enterprise customers. Premiere Rush CC is also included as part of All Apps, Student and Premiere Pro CC single app plans and comes with 100 GB of CC storage. Additional storage options, up to 10 TB, are also available for purchase.

• Premiere Rush CC Starter Plan: Available for free, the Starter Plan gives customers access to all Premiere Rush CC features, use of desktop and mobile apps and the ability to create an unlimited number of projects and export up to three projects for free.


Review: Blackmagic’s eGPU and Intel i9 MacBook Pro 2018

By Brady Betzel

Blackmagic’s eGPU is worth the $699 price tag. You can buy it from Apple’s website, where it is being sold exclusively for the time being. Wait? What? You wanted some actual evidence as to why you should buy the BMD eGPU?

Ok, here you go…

MacBook Pro With Intel i9
First, I want to go over the latest Apple MacBook Pro, which was released (or really just updated) this past July. With some controversial fanfare, the 2018 MacBook Pro can now be purchased with the blazingly fast Intel i9, 2.6GHz (Turbo Boost up to 4.3GHz) six-core processor. In addition, you can configure it with up to 32GB of 2400MHz DDR4 onboard memory, the Radeon Pro 560x GPU with 4GB of GDDR5 memory and even a 4TB SSD storage drive. It has four Thunderbolt 3 ports and, for some reason, a headphone jack. Apple is also touting its improved butterfly keyboard switches as well as its True Tone display technology. If you want to read more about that glossy info, head over to Apple's site.

The 2018 MacBook Pro is a beast. I am a big advocate for the ability to upgrade and repair computers, so Apple’s venture to create what is essentially a leased computer ecosystem that needs to be upgraded every year or two usually puts a bad taste in my mouth.

However, the latest MacBook Pros are really amazing… and really expensive. The top-of-the-line MacBook Pro I was provided for this review would cost $6,699! Yikes! If I were buying one myself, I would skip the $2,000 upgrade from the 2TB SSD to the 4TB, and it would still cost $4,699. But I suppose that's not a terrible price for such an intense processor (albeit not technically a workstation-class one).

Overall, the MacBook Pro is a workhorse that I put through its video editing and color correcting paces using three of the top four professional nonlinear editors: Adobe Premiere, Apple FCP X and Blackmagic's Resolve 15 (the official release). More on those results in a bit, but for now, I'll just say a few things: I love how light and thin it is. I don't like how hot it can get. I love how fast it charges. I don't like how fast it loses charge when doing things like transcoding or exporting clips. A 15-minute export can drain the battery by more than 40%, while playing Spotify for eight hours will hardly drain it at all (maybe 20%).

Blackmagic’s eGPU with Radeon Pro 580 GPU
One of the more surprising releases from Blackmagic has been this eGPU offering. I would never have guessed they would have gone into this area, and certainly would never have guessed they would have gone with a Radeon card, but here we are.

Once you step back from the initial "why in the hell wouldn't they let it be user-replaceable, and also not brand dependent?" shock, it actually makes sense. If you are a macOS user, you can probably already do a lot in terms of external GPU power. When you buy a new iMac, iMac Pro or MacBook Pro, you are expecting it to work, full stop.

However, if you are a DIT or colorist that is more mobile than that sweet million-dollar color bay you dream of, you need more. This is where the BMD eGPU falls nicely into place. You plug it in and instantly see it populate in the menu bar. In addition, the eGPU acts as a dock with four USB 3 ports, two Thunderbolt 3 ports and an HDMI port. The MacBook Pro will charge off of the eGPU as well, which eliminates the need for your charger at your docking point.

On the go, the most decked-out MacBook Pro can hold its own. So it's no surprise that FCP X runs remarkably fast… faster than everything else. However, you have to be invested in the FCP X workflow and paradigm, and while I'm not there yet, maybe the future will prove me wrong. Recently I saw someone on Twitter who had developed an online collaboration workflow for it, so people are clearly excited about it.

Anyway, many of the nonlinear editors I work with can also play on the MacBook Pro, even with 4K Red, ARRI and, especially, ProRes footage. Keep in mind though, with the 2K, 4K, or whatever K footage, you will need to set the debayer to around “half good” if you want a fluid timeline. Even with the 4GB Radeon 560x I couldn’t quite play realtime 4K footage without some sort of compromise in quality.

But with the Blackmagic eGPU, I significantly improved my playback capabilities — and not just in Resolve 15. I did try plugging the eGPU into a Windows 10 PC I was reviewing at the same time; it was recognized, but I couldn't get all the drivers sorted out. So it's possible it will work in Windows, but I couldn't get it there.

Before I get to the Resolve testing, I did some benchmarking with Cinebench R15 and Unigine's Valley Benchmark 1.0:
• Cinebench R15 without the eGPU: OpenGL 99.21fps (99.5% reference match), CPU 947cb, CPU (single core) 190cb, MP ratio 5.00x
• Cinebench R15 with the eGPU: OpenGL 60.26fps (99.5% reference match), CPU 1057cb, CPU (single core) 186cb, MP ratio 5.69x
• Valley Benchmark without the eGPU: 21.3fps, score of 890 (minimum 12.4fps/maximum 36.2fps)
• Valley Benchmark with the eGPU: 25.6fps, score of 1073 (minimum 19.2fps/maximum 37.1fps)

Resolve 15 Test
I based all of my tests on a similar (although not exact for the different editing applications) 10-minute timeline, 23.98fps, 3840×2160, 4K and 8K RAW Red footage (R3D files) and Alexa (.ari and ProRes444XQ) UHD footage, all with edit page resizes, simple color correction and intermittent sharpening and temporal noise reduction (three frames, better, medium, 10, 10 and 5).

Playback: Without the eGPU, I couldn't play the 23.98fps 4K Red R3D files unless they were set to half-res. With the eGPU, I could play back at full res in realtime (this is what I was talking about in sentence one of this review). The ARRI footage would play at full res, but only at between 1fps and 7fps. The 8K Red footage would play in realtime when set to quarter-res.

One of the most reassuring things I noticed when watching my Activity Monitor's GPU history readout was that Resolve uses both GPUs at once. Not all of the apps did.

Resolve 15 Export Tests
In the following tests, I disabled all cache or optimized media options, including Performance Mode.

Test 1: H.264 (23.98fps, UHD, auto quality, no frame reordering, force highest-quality debayer/resizes, encoding profile Main)
a. Without eGPU (Radeon Pro 560x): 22 minutes, 16 seconds
b. With BMD eGPU (Radeon Pro 580): 16 minutes and 21 seconds

Test 2: H.265 (10-bit, 23.98/UHD, auto quality, no frame reordering, force highest-quality debayer/resizes)
a. Without eGPU: stopped rendering after 10 frames
b. With BMD eGPU: same result

Test 3: ProRes4444 at 23.98/UHD
a. Without eGPU: 27 min and 29 seconds
b. With BMD eGPU: 22 minutes and 57 seconds

Test 4: Edit page cache – Smart User Cache enabled at ProResHQ
a. Without eGPU: 17 minutes and 28 seconds
b. With BMD eGPU: 12 minutes and 22 seconds

Adobe Premiere Pro v.12.1.2
I performed similar testing in Adobe Premiere Pro using a 10-minute timeline at 23.98fps, 3840×2160, 4K and 8K RAW Red footage (R3D files) and Alexa (DNxHR SQ 8-bit) UHD footage, all with Effect Control tab resizes and simple Lumetri color correction, including sharpening and intermittent denoise (16) under the HSL Secondary tab in Lumetri applied to shadows only.

To ensure your eGPU will be used inside of Adobe Premiere, you must use Metal as your renderer. To enable it, go to File > Project Settings > General and change the renderer to Mercury Playback Engine GPU Acceleration (Metal); OpenCL will only use the internal GPU for processing.

Premiere did not handle the high-resolution media as aptly as Resolve had, but the eGPU did help a little. However, I really wanted to test the export power with the added eGPU horsepower. I almost always send my Premiere sequences to Adobe Media Encoder to do the processing, so that is where my exports were processed.

Adobe Media Encoder
Test 1: H.264 (No render used during exports: 23.98/UHD, 80Mb/s, software encoding doesn’t allow for profile setup)
a. OpenCL with no eGPU: about 140 minutes (sorry, I had to chase the kids around and couldn't watch this snail crawl)
b. Metal with no eGPU: about 137 minutes (chased the kids around again, and couldn't watch this snail crawl, either)
c. OpenCL with eGPU: won't work; Metal only
d. Metal with eGPU: one hour

Test 2: H.265
a. Without eGPU: failed (interesting result)
b. With eGPU: 40 minutes

Test 3: ProRes4444
a. Without eGPU: three hours
b. With eGPU: one hour and 14 minutes

FCP X
FCP X is an interesting editing app, and it is blazing fast at handling ProRes media. As I mentioned earlier, it hasn't been in my world too much, but that isn't because I don't like it; it's because professionally I haven't run into it. I love the idea of roles and would really love to see that play out in other NLEs. However, my results speak for themselves.

One caveat to using the eGPU in FCP X is that you must force it to work inside of the NLE. At first, I couldn't get it to work; the Activity Monitor would show no activity on the eGPU. However, thanks to a Twitter post, James Wells (@9voltDC) pointed me to a Terminal-based workaround that lets you force FCP X to use the eGPU. It took a few tries, but I did get it to work, and funny enough, I saw times when all three GPUs were being used inside of FCP X, which was pretty good to see. This is one of those use-at-your-own-risk things, but it worked for me and is pretty slick… if you are OK with using Terminal commands. This also lets you force the eGPU onto other apps like Cinebench.

Anyway, here are my results with the BMD eGPU exporting from FCP X:

Test 1: H.264
a. Without eGPU: eight minutes
b. With eGPU: eight minutes and 30 seconds

Test 2: H.265: Not an option

Test 3: ProRes4444
a. Without eGPU: nine minutes
b. With eGPU: six minutes and 30 seconds
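For a quick sense of scale, the times above can be turned into rough speedup factors. Below is a small sketch using the approximate times reported in this review; the helper function is purely illustrative and not part of any of these applications:

```python
def speedup(minutes_without: float, minutes_with: float) -> float:
    """Rough speedup factor: export time without the eGPU divided by time with it."""
    return minutes_without / minutes_with

# Approximate export times (in minutes) reported above, without vs. with the BMD eGPU
tests = {
    "Resolve H.264 UHD":        (22 + 16 / 60, 16 + 21 / 60),
    "Resolve ProRes4444":       (27 + 29 / 60, 22 + 57 / 60),
    "Media Encoder H.264 UHD":  (137, 60),
    "Media Encoder ProRes4444": (180, 74),
    "FCP X ProRes4444":         (9, 6.5),
}
for name, (without_egpu, with_egpu) in tests.items():
    print(f"{name}: {speedup(without_egpu, with_egpu):.1f}x faster with the eGPU")
```

By that math, the eGPU roughly doubles Media Encoder's export speed, delivers a 1.2x to 1.4x speedup on the Resolve and FCP X ProRes exports, and (as the FCP X H.264 numbers show) is no help for jobs the internal GPU already handles comfortably.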

Summing Up
In the end, the Blackmagic eGPU with Radeon Pro 580 GPU is a must buy if you use your MacBook Pro with Resolve 15. There are other options out there though, like the Razer Core v2 or the Akitio Node Pro.

From this review I can tell you that the Blackmagic eGPU is silent, even when processing 8K Red raw footage while the MacBook Pro's own fans are going at full speed, and it just works. Plug it in and you are running: no settings, no drivers, no cards to install… it just runs. And sometimes, when I have three little boys running around my house, I just want that peace of mind; I want things to just work, like the Blackmagic eGPU does.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Adobe updates Creative Cloud

By Brady Betzel

You know it's almost fall when pumpkin spice lattes are back and Adobe announces its annual updates. At this year's IBC, Adobe had a variety of updates to its Creative Cloud line of apps, from more info on its new editing platform Project Rush to the addition of Characterizer to Character Animator. There are a lot of updates, so I'm going to focus on a select few that I think really stand out.

Project Rush

I use Adobe Premiere quite a lot these days; it's quick and relatively easy to use and will work with pretty much every codec in the universe. In addition, the Dynamic Link between Adobe Premiere Pro and Adobe After Effects is an indispensable feature in my world.

With the 2018 fall updates, Adobe Premiere will be closer to a color tool like Blackmagic's Resolve thanks to the addition of new hue saturation curves in the Lumetri Color toolset. In Resolve, these are some of the most important aspects of the color corrector, and I think that will be the same for Premiere. From Hue vs. Sat, which can help isolate a specific color and desaturate it, to Hue vs. Luma, which can help add or subtract brightness values from specific hues and hue ranges — these new tools further Premiere's venture into true professional color correction. The new curves will also be available inside of After Effects.

After Effects features many updates, but my favorites are the ability to access depth matte data of 3D elements and the addition of the new JavaScript engine for building expressions.

There is one update that runs across both Premiere and After Effects that seems to be a sleeper update. The improvements to motion graphics templates, if implemented correctly, could be a time and creativity saver for both artists and editors.

AI
Adobe, like many other companies, seems to be diving heavily into the "AI" pool, which is amazing, but… with great power comes great responsibility. I realize others might not feel this way, but sometimes I don't want all the work done for me. With new features like Auto Lip Sync and Color Match, editors and creators of all kinds should not lose the forest for the trees. I'm not telling people to ignore these features; I'm asking that they put a few minutes into discovering how the color of a shot was matched, so they can fix it if something goes wrong. You don't want to be the editor who says, "Premiere did it," and has no good solution when something goes wrong.

What Else?
I would love to see Adobe take a stab at digging up the bones of SpeedGrade and integrating that into the Premiere Pro world as a new tab. Call it Lumetri Grade, or whatever? A page with a more traditional colorist layout and clip organization would go a long way.

In the end, there are plenty of other updates to Adobe’s 2018 Creative Cloud apps, and you can read their blog to find out about other updates.

DP Rick Ray: Traveling the world capturing stock images

By Randi Altman

It takes a special kind of human to travel the world, putting himself in harm’s way to collect hard-to-find stock imagery, but Rick Ray thrives on this way of life. This Adobe Stock contributor has a long history as a documentary filmmaker and a resume that includes 10 Questions for the Dalai Lama (2006), Letters Home from the South China Seas: Adventures in Singapore & Borneo (1989) and Letters Home from Iceland (1990).

Let’s find out more about what makes Ray tick.

As a DP, are you just collecting footage to sell or are you working on films, docs and series as well?
I used to be a documentary filmmaker and have about 24 published titles in travel and biography, including 10 Questions for the Dalai Lama and the TV series Raising the Bamboo Curtain with Martin Sheen. However, I found that unless you are Ken Burns or Michael Moore, making a living in the world of documentary films can be very difficult. It wasn't until I realized that individual shots taken from my films and used in other productions were earning me more income than the whole film itself that I understood how potentially lucrative and valuable your footage can be when it is repurposed as stock.

That said, I still hire myself out as a DP on many Hollywood and independent films whenever possible. I also try to retain the stock rights for these assignments whenever possible.

A Bedouin man in Jordan.

How often are you on the road, and how do you pick your next place to shoot?
I travel for about three to four months each year now. Lately, I travel to places that interest me from a beauty or cultural perspective, whether or not they may be of maximal commercial potential. The stock footage world is inundated with great shots of Paris, London or Tokyo. It’s very hard for your footage to be noticed in such a crowded field of content. For that reason, lesser known locations of the world are attractive to me because there is less good footage of those places.

I also enjoy the challenges of traveling and filming in less comfortable places in the world, something I suppose I inherited from my days as a 25-year-old backpacking and hitchhiking around the world.

Are you typically given topics to capture — filling a need — or just shooting what interests you?
Mostly what interests me, but also I see a need for many topics of political relevance, and this also informs my shooting itinerary.

For example, immigration is in the news intensively these days, so I recently drove the border wall from Tijuana to the New Mexico border capturing imagery of it. It's not a place I'd normally go for a shoot, but it proved to be very interesting, and that footage licenses all the time.

Rick Ray

Do you shoot alone?
Yes, normally. Sometimes I go with one other person, but that’s it. To be an efficient and effective stock shooter, you are not a “film crew” per se. You are not hauling huge amounts of gear around. There are no “grips,” and no “craft services.” In stock shooting around the world, as I define it, I am a low-key casual observer making beautiful images with low-key gear and minimal disruption to life in the countries I visit. If you are a crew of three or more, you become a group unto yourself, and it’s much more difficult to interact and experience the places you are visiting.

What do you typically capture with camera-wise? What format? Do you convert footage or let Adobe Stock do that?
I travel with two small (but excellent) Sony 4K Handycams (FDR-AX100), two drones, a DJI Osmo handheld steady-grip, an Edelkrone slider kit and two lightweight tripods. Believe it or not, these can all fit into one standard large suitcase. I shoot in XDCAM 4K and then convert it to Apple ProRes in post. Adobe Stock does not convert my clips for me; I deliver them ready to be ordered.

You edit on Adobe Premiere. Why is that the right system for you, and do you edit your footage before submitting? How does that Adobe Stock process work?
I used to work in Final Cut Pro 7 and Final Cut Pro X, but I switched to Adobe Premiere Pro after struggling with FCP X. As for "editing," it doesn't really play a part in stock footage submission. There is no editing, as we are almost always dealing with single clips. I do grade, color correct, stabilize and de-noise many clips before I export them. I believe in having the clips look great before they are submitted. They have to compete with thousands of other clips on the site, and mine need to jump out at you and make you want to use them. Adobe allows users to submit content directly from Premiere to Adobe Stock, but since I deal in large volumes of clips, I don't generally use this approach. I send in a drive with a spreadsheet of data when a batch of clips is done.

A firefighter looks back as a building collapses during the Thomas Fire in Ventura, California.

What are the challenges of this type of shooting?
Well, you are 100% responsible for the success or failure of the mission. There is no one to blame but yourself. Since you are mostly traveling low-key and without a lot of protection, it's very important to have a "fixer" or driver in difficult countries. You might get arrested or have all of your equipment stolen by corrupt customs authorities in a country like Macedonia, as happened to me. It happens! You have to roll with the good and the bad, ask forgiveness rather than permission and be happy for the amazing footage you do manage to get.

You left a pretty traditional job to travel the world. What spurred that decision, and do you ever see yourself back at a more 9-to-5  type of existence?
Never! I have figured out the perfect retirement plan for myself. Every day I can check my sales from anywhere in the world, and on most days the revenue more than justifies the cost of the travel! And it’s all a tax write-off. Who has benefits like that?

A word of warning, though — this is not for everyone. You have to be ok with the idea of spending money to build a portfolio before you see significant revenue in return. It can take time and you may not be as lucky as I have been. But for those who are self-motivated and have a knack for cinematography and travel, this is a perfect career.

Can you name some projects that feature your work?
Very often this takes me by surprise since I often don’t know exactly how my footage is used. More often than not, I’m watching CNN, a TV show or a movie and I see my footage. It’s always a surprise and makes me laugh. I’ve seen my work on the Daily Show, Colbert, CNN, in commercials for everything from pharmaceuticals to Viking Cruises, in political campaign ads for people I agree and disagree with, and in music videos for Neil Young, Bruce Springsteen, Coldplay and Roger Waters.

Fire burns along the road near a village in the Palestinian territories.

Shooting on the road must be interesting. Can you share a story with us?
There have been quite a few. I have had my gear stolen in Israel (twice) and in Thailand, and my gear was confiscated by corrupt customs authorities in Macedonia, as I mentioned earlier. I have been jailed by Ethiopian police for not having a valid filming permit, which it turned out was not actually necessary. Once a proper bribe was arranged, they changed out of their police uniforms into native costume and performed as tour guides and cultural emissaries for me.

In India, I was on a train to the Kumbh Mela, which was stopped by a riot and burned. I escaped with minor injuries. I was also accosted by communist revolutionaries in Bihar, India. Rather than be a victim, I got out of the car and filmed it, and the leader and his generals then reviewed the footage and decided to do it over. After five takes of them running down the road and past the camera, the leader finally approved the take, and I was left unharmed.

I’ve been in Syria and Lebanon and felt truly threatened by violence. I’ve been chased by Somali bandits at night in a van in Northern Kenya. Buy me a beer sometime, I’ll tell you more.

LACPUG hosting FCP and Premiere creator Randy Ubillos

The Los Angeles Creative Pro User Group (LACPUG) is celebrating its 18th anniversary on June 27 by presenting the official debut of Bradley Olsen’s Off the Tracks, a documentary about Final Cut Pro X. Also on the night’s agenda is a trip down memory lane with Randy Ubillos, the creator of Final Cut Pro, Adobe Premiere, Aperture, iMovie 08 and Final Cut Pro X.

The event will take place at the Gallery Theater in Hollywood. Start time is 6:45pm. Scheduled to be in the audience and perhaps on stage, depending on availability, will be members of the original FCP team: Michael Wohl, Tim Serda and Paul Saccone. Also on hand will be Ramy Katrib of DigitalFilm Tree and editor and digital consultant Dan Fort. “Many other invites to the ‘superstars’ of the digital revolution and FCP have been sent out,” says Michael Horton, founder and head of LACPUG.

The night will also include food and drinks, time for questions and the group’s “World Famous Raffle.”
Tickets are on sale now on the LACPUG website for $10 each, plus a ticket fee of $2.24.

The Los Angeles Creative Pro User Group, formerly the LA Final Cut Pro User Group, was established in June of 2000 and hosts a membership of over 6,000 worldwide.

Adobe intros updates to Creative Cloud, including Team Projects

Later this year, Adobe will be offering new capabilities within its Adobe Creative Cloud video tools and services. This includes updates for VR/360, animation, motion graphics, editing, collaboration and Adobe Stock. Many of these features are powered by Adobe Sensei, the company’s artificial intelligence and machine learning framework. Adobe will preview these advancements at IBC.

The new capabilities coming later this year to Adobe Creative Cloud for video include:
• Access to motion graphics templates in Adobe Stock and through Creative Cloud Libraries, as well as usability improvements to the Essential Graphics panel in Premiere Pro, including responsive design options for preserving the spatial and temporal integrity of graphics as they are adjusted.
• Character Animator 1.0 with changes to core and custom animation functions, such as pose-to-pose blending, new physics behaviors and visual puppet controls. Adobe Sensei will help improve lip-sync capability by accurately matching mouth shape with spoken sounds.
• Virtual reality video creation with a dedicated viewing environment in Premiere Pro. Editors can experience the deeply engaging qualities of the content, review their timeline and use keyboard-driven editing for trimming and markers while wearing the same VR head-mounted displays as their audience. In addition, audio can be determined by orientation or position and exported as ambisonics audio for VR-enabled platforms such as YouTube and Facebook. VR effects and transitions are now native and accelerated via the Mercury Playback Engine.
• Improved collaborative workflows with Team Projects on the Local Area Network with managed access features that allow users to lock bins and provide read-only access to others. Formerly in beta, the release of Team Projects will offer smoother workflows hosted in Creative Cloud and the ability to more easily manage versions with auto-save history.
• Flexible session organization, multi-take workflows and continuous playback while editing in Adobe Audition. Powered by Adobe Sensei, auto-ducking has been added to the Essential Sound panel, automatically adjusting levels by type: dialogue, background sound or music.

Integration with Adobe Stock
Adobe Stock now offers over 90 million assets, including photos, illustrations and vectors. Customers have access to over 4 million HD and 4K Adobe Stock video clips directly within their Creative Cloud video workflows and can now search and scrub assets in Premiere Pro.

Coming to this new release are hundreds of professionally-created motion graphics templates for Adobe Stock, available later this year. Additionally, motion graphic artists will be able to sell Motion Graphic templates for Premiere Pro through Adobe Stock. Earlier this year, Adobe added editorial and premium collections from Reuters, USA Today Sports, Stocksy and 500px.

Jimmy Helm upped to editor at The Colonie

The Colonie, the Chicago-based editorial, visual effects and motion graphics shop, has promoted Jimmy Helm to editor. Helm has honed his craft over the past seven years, working with The Colonie’s senior editors on a wide range of projects. Most recently, he has been managing ongoing social media work with Facebook and conceptualizing and editing short format ads. Some clients he has collaborated with include Lyft, Dos Equis, Capital One, Heineken and Microsoft. He works on both Avid Media Composer and Adobe Premiere.

A filmmaking major at Columbia College Chicago, Helm applied for an internship at The Colonie in 2010. Six months later he was offered a full-time position as an assistant editor, working alongside veteran cutter Tom Pastorelle on commercials for McDonald’s, Kellogg’s, Quaker and Wrangler. During this time, Helm edited numerous projects on his own, including broadcast commercials for Centrum and Kay Jewelers.

“Tom is incredible to work with,” says Helm. “Not only is he a great editor but a great person. He shared his editorial methods and taught me the importance of bringing your instinctual creativity to the process. I feel fortunate to have had him as a mentor.”

In 2014, Helm was promoted to senior assistant editor and continued to hone his editing skills while taking on a leadership role.

“My passion for visual storytelling began when I was young,” says Helm “Growing up in Memphis, I spent a great deal of time watching classic films by great directors. I realize now that I was doing more than watching — I was studying their techniques and, particularly, their editing styles. When you’re editing a scene, there’s something addictive about the rhythm you create and the drama you build. I love that I get to do it every day.”

Helm joins The Colonie’s editorial team, comprised of Joe Clear, Keith Kristinat, Pastorelle and Brian Salazar, along with editors and partners Bob Ackerman and Brian Sepanik.

Adobe acquires Mettle’s SkyBox tools for 360/VR editing, VFX

Adobe has acquired all SkyBox technology from Mettle, a developer of 360-degree and virtual reality software. As more media and entertainment companies embrace 360/VR, there is a need for seamless, end-to-end workflows for this new and immersive medium.

The SkyBox toolset is designed exclusively for post production in Adobe Premiere Pro CC and Adobe After Effects CC and complements Adobe Creative Cloud's existing 360/VR cinematic production technology. Adobe will integrate SkyBox plugin functionality natively into future releases of Premiere Pro and After Effects.

To further strengthen Adobe’s leadership in 360-degree and virtual reality, Mettle co-founder Chris Bobotis will join Adobe, bringing more than 25 years of production experience to his new role.

“We believe making virtual reality content should be as easy as possible for creators. The acquisition of Mettle SkyBox technology allows us to deliver a more highly integrated VR editing and effects experience to the film and video community,” says Steven Warner, VP of digital video and audio at Adobe. “Editing in 360/VR requires specialized technology, and as such, this is a critical area of investment for Adobe, and we’re thrilled Chris Bobotis has joined us to help lead the charge forward.”

“Our relationship started with Adobe in 2010 when we created FreeForm for After Effects, and has been evolving ever since. This is the next big step in our partnership,” says Bobotis, now director, professional video at Adobe. “I’ve always believed in developing software for artists, by artists, and I’m looking forward to bringing new technology and integration that will empower creators with the digital tools they need to bring their creative vision to life.”

Introduced in April 2015, SkyBox was the first plugin to leverage Mettle’s proprietary 3DNAE technology, and its success quickly led to additional development of 360/VR plugins for Premiere Pro and After Effects.

Today, Mettle’s plugins have been adopted by companies such as The New York Times, CNN, HBO, Google, YouTube, Discovery VR, DreamWorks TV, National Geographic, Washington Post, Apple and Facebook, as well as independent filmmakers and YouTubers.

Bluefish444 supports Adobe CC and 4K HDR with Epoch card

Bluefish444 Epoch video audio and data I/O cards now support the advanced 4K high dynamic range (HDR) workflows offered in the latest versions of the Adobe Creative Cloud.

Epoch SDI and HDMI solutions are suited for Adobe’s Premiere Pro CC, After Effects CC, Audition CC and other tools that are part of the Creative Cloud. With GPU-accelerated performance for emerging post workflows, including 4K HDR and video over IP, Adobe and Bluefish444 are providing a strong option for pros.

Bluefish444’s Adobe Mercury Transmit support for Adobe Creative Cloud brings improved performance in demanding workflows requiring realtime video I/O from UHD and 4K HDR sequences.

Bluefish444 Epoch video card support adds:
• HD/SD SDI input and output
• 4K/2K SDI input and output
• 12/10/8-bit SDI input and output
• 4K/2K/HD/SD HDMI preview
• Quad split 4K UHD SDI
• Two sample interleaved 4K UHD SDI
• 23, 24, 25, 29, 30fps video input and output
• 48, 50, 59, 60fps video input and output
• Dual-link 1.5Gbps SDI
• 3Gbps level A & B SDI
• Quad link 1.5Gbps and 3Gbps SDI
• AES digital audio
• Analog audio monitoring
• RS-422 machine control
• 12-bit video color space conversions

“Recent updates have enabled performance which was previously unachievable,” reports Tom Lithgow, product manager at Bluefish444. “Thanks to GPU acceleration, and [the] Adobe Mercury Transmit plug-in, Bluefish444 and Adobe users can be confident of smooth realtime video performance for UHD 4K 60fps and HDR content.”

WWE adds iPads, iPhones to production workflow

By Nick Mattingly

Creating TV-style productions is a big operation: lots of equipment, lots of people and lots of time. World Wrestling Entertainment (WWE) is an entertainment company and the largest professional wrestling organization in the world. Since its inception, it has amassed a global audience of over 36 million.

Each year, WWE televises over 100 events via its SmackDown, WWE Raw and Pay-Per-View events. That doesn’t include the hundreds of arena shows that the organization books in venues around the world.

“Putting this show on in one day is no small feat. Our shows begins to load-in typically around 4:00am and everything must be up and ready for production by 2:00pm,” explained Nick Smith, WWE’s director of remote IT and broadcast engineering. “We travel everything from the lighting, PA, screens, backstage sets, television production facilities, generators and satellite transmission facilities, down to catering. Everyone [on our team] knows precisely what to do and how to get it done.”

Now the WWE is experimenting with a new format for the some 300 events it hosts that are currently not captured on video. The goal? To see if using Switcher Studio with a few iPhones and iPads can achieve TV-style results. A key part of the testing has been defining a workflow that uses mobile devices while meeting WWE's high standard of quality. One of the first requirements was moving beyond a four-camera setup. As a result, the Switcher Studio team produced a special version of Switcher that allows unlimited sources; the only limitation is network bandwidth.

Adding more cameras was an untested challenge. To help prevent bottlenecks over the local network, we lowered the resolution and bitrate on the preview video feeds, and we hardwired the primary iPad used for switching using Apple dongles. Using the "Director Mode" function in Switcher Studio, WWE then triggered a recording on all devices.
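That bandwidth limitation is easy to ballpark. Here is a minimal sketch with hypothetical numbers (the per-feed preview bitrate and camera count below are assumptions for illustration, not figures from WWE or Switcher Studio):

```python
def preview_load_mbps(num_sources: int, preview_bitrate_mbps: float) -> float:
    """Aggregate WiFi throughput the switching iPad must receive for live preview feeds."""
    return num_sources * preview_bitrate_mbps

# Hypothetical example: five iPhone sources, each sending a reduced 4 Mb/s preview stream
print(preview_load_mbps(5, 4.0), "Mb/s of sustained throughput, before any control traffic")
```

Keeping the preview feeds small while recording full-quality ISOs locally on each device is what keeps a many-camera setup from overwhelming the venue's WiFi.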

For the first test using Switcher Studio, the WWE had a director and operator at the main iPad. The video from the iPad was output to an external TV monitor using Apple’s AirPlay. This workflow allowed the director to see a live video feed from all sources. They were also able to talk with the camera crew and “direct” the operator when to cut to each camera.

The WWE crew had three camera operators from their TV productions to run iPhones in and around the ring. To ensure the devices had enough power to make it through the four-hour-long event, iPhones were attached to batteries. Meanwhile, two camera operators captured wide shots of the ring. Another camera operator captured performer entrances and crowd reaction shots.

WWE set up a local WiFi network for the event to wirelessly sync the cameras. The operator made edits in realtime to generate a line cut. After the event, the line cut and an ISO from each angle were sent to the WWE post team in the United Kingdom.

Moving forward, we plan to make further improvements to the post workflow, which will be especially helpful for editors using tools like Adobe Premiere or Avid Media Composer.

If future tests prove successful, WWE could use this new mobile setup to provide more content to its fans, building new revenue streams along the way.


Nick Mattingly is the CEO/co-founder of Switcher Studio. He has over 10 years of experience in video streaming, online monetization and new technologies. 

A glimpse at what was new at NAB

By Lance Holte

I made the trek out to Las Vegas last week for the annual NAB show to take in the latest in post production technology, discuss new trends and products and get lost in a sea of exhibits. With over 1,700 exhibitors, it’s impossible to see everything (especially in the two days I was there), but here are a handful of notable things that caught my eye.

Blackmagic DaVinci Resolve Studio 14: While the “non-studio” version is still free, it’s hard to beat the $299 license for the full version of Resolve. As 4K and 3D media becomes increasingly prevalent, and with the release of their micro and mini panels, Resolve can be a very affordable solution for editors, mobile colorists and DITs.

The new editorial and audio tools are particularly appealing to someone like me, who is often more hands-on on the editorial side than the grading side of post. In that regard, the new tracking features look to provide extra ease of use for quick and simple grades. I also love that Blackmagic has gotten rid of the dongles, which removes the hassle of tracking numerous dongles in a post environment where systems and rooms are swapped regularly. Oh, and there's bin, clip and timeline locking for collaborative workflows, which easily pushes Resolve into the competition for an end-to-end post solution.

Adobe Premiere CC 2017 with After Effects and Audition: Adobe Premiere is typically my editorial application of choice, and the increased integration of AE and Audition promises to make an end-to-end Creative Cloud workflow even smoother. I've been hoping for a revamp of Premiere's title tool for a while, and the Essential Graphics panel/new Title Tool appears to greatly increase and streamline Premiere's motion graphics capabilities — especially for someone who does almost all of his graphics work in After Effects and Photoshop. The more integrated the various applications can be, the better, and Adobe has been pushing that aspect for some time now.

On the audio side, Premiere’s Essential Sound Panel tools for volume matching, organization, cleanup and other effects without going directly into Audition (or exporting for ProTools, etc.) will be really helpful, especially for smaller projects and offline mixes. And as a last note, the new Camera Shake Deblur effect in After Effects is fantastic.

Dell UltraSharp 4K HDR Monitor — There were a lot of great looking HDR monitors at the show, but I liked that this one fell in the middle of the pack in terms of price point ($2K), with solid specs (1000 nits, 97.7% of P3, and 76.9% of Rec. 2020) and a reasonable size (27 inches). Seems like a good editorial or VFX display solution, though the price might be pushing budgetary constraints for smaller post houses. I wish it was DCI 4K instead of UHD and a little more affordable, but that will hopefully come with time.

On that note, I really like HP’s DreamColor Z31x Studio Display. It’s not HDR, but it’s 99% of the P3 colorspace, and it’s DCI 4K — as well as 2K, by multiplying every pixel at 2K resolution into exactly 4 pixels — so there’s no odd-numbered scaling and sharpening required. Also, I like working with large monitors, especially at high resolutions. It offers automated (and schedulable) color calibration, though I’d love to see a non-automated display in the future if it could bring the price down. I could see the HP monitor as a great alternative to using more expensive HDR displays for the majority of workstations at many post houses.

As another side note, Flanders Scientific’s OLED 55-inch HDR display was among the most beautiful I’ve ever seen, but with numerous built-in interfaces and scaling capabilities, it’s likely to come at a higher price.

Canon 4K600STZ 4K HDR laser projector — This looks to be a great projection solution for small screening rooms or large editorial bays. It offers huge 4096×2400 resolution, is fairly small and compact, and apparently has very few restraints when it comes to projection angle, which would be nice for a theatrical edit bay (or a really nice home theater). The laser light source is also attractive because it will be low maintenance. At $63K, it’s at the more affordable end of 4K projector pricing.

Mettle 360 Degree/VR Depth plug-ins: I haven’t worked with a ton of 360-degree media, but I have dealt with the challenges of doing depth-related effects in a traditional single-camera space, so the fact that Mettle is doing depth-of-field effects, dolly effects and depth volumetric effects with 360-degree/VR content is pretty incredible. Plus, their plug-ins are designed to integrate with Premiere and After Effects, which is good news for an Adobe power user. I believe they’re still going to be in beta for a while, but I’m very curious to see how their plug-ins play out.

Finally, in terms of purely interesting tech, Sony’s Bravia 4K acoustic surface TVs are pretty wild. Their displays are OLED, so they look great, and the fact that the screen vibrates to create sound instead of having separate speakers or an attached speaker bar is awfully cool. Even at very close viewing, the screen doesn’t appear to move, though it can clearly be felt vibrating when touched. A vibrating acoustic surface raises some questions about mounting, so it may not be perfect for every environment, but interesting nonetheless.


Lance Holte is an LA-based post production supervisor and producer. He has spoken and taught at such events as NAB, SMPTE, SIGGRAPH and Createasphere. You can email him at lance@lanceholte.com.

Exceptional Minds: Autistic students learn VFX, work on major feature films

After graduation, these artists have been working on projects for Marvel, Disney, Fox and HBO.

By Randi Altman

With an estimated 1 in 68 children in the US being born with some sort of autism spectrum disorder, according to the Centers for Disease Control’s Autism and Developmental Disabilities Monitoring (ADDM) Network, I think it’s fair to say that most people have been touched in some way by a child on the spectrum.

As a parent of a teenager with autism, I can attest to the fact that one of our biggest worries, the thing that keeps us up at night, is the question of independence. Will he be able to make a living? Will there be an employer who can see beyond his deficits to his gifts and exploit those gifts in the best possible way?

Enter Exceptional Minds, a school in Los Angeles that teaches young adults with autism how to create visual effects and animation while working as part of a team. This program recognizes how bright these young people are and how focused they can be, surrounds them with the right teachers and behavioral therapists, puts the right tools in their hands and lets them fly.

The school, which also has a VFX and animation studio that employs its graduates, was started in 2011 by a group of parents who have children on the spectrum. “They were looking for work opportunities for their kids, and quickly discovered they couldn’t find any. So they decided to start Exceptional Minds and prepare them for careers in animation and visual effects,” explains Susan Zwerman, the studio executive producer at Exceptional Minds and a long-time VFX producer whose credits include Broken Arrow, Alien Resurrection, Men of Honor, Around the World in 80 Days and The Guardian.

Since the program began, these young people have had the opportunity to work on some very high-profile films and TV programs. Recent credits include Game of Thrones, The Fate of the Furious and Doctor Strange, which was nominated for an Oscar for visual effects this year.

We reached out to Zwerman to find out more about this school, its studio and how they help young people with autism find a path to independence.

The school came first and then the studio?
Yes. We started training them for visual effects and animation and then the conversation turned to, “What do they do when they graduate?” That led to the idea to start a visual effects studio. I came on board two years ago to organize and set it up. It’s located downstairs from the school.

How do you pick who is suitable for the program?
We can only take 10 students each year, and unfortunately, there is a waiting list because we are the only program of its kind anywhere. Our educators and teachers have a review process for assessing a student’s ability to work in this area. You know, not everybody can function working on a computer for six or eight hours. There are different levels of the spectrum. So the higher-functioning and medium-functioning students are more suited for this work, which takes a lot of focus.

Students are vetted by our teachers and behavioral specialists, who take into account the student’s ability, as well as their enthusiasm for visual effects and animation — it’s very intense, and they have to be motivated.

Susie Zwerman (in back row, red hair) with artists in the Exceptional Minds studio.

I know that kids on the spectrum aren’t necessarily social butterflies. How do you teach them to work as a team?
Oh, that’s a really good question. We have what’s called our Work Readiness program. They practice interviewing, they practice working as a team, and they learn about appearance, attitude, organization and how to problem-solve in the workplace.

A lot of it is about working in a team and developing their social skills. That’s something we really stress in terms of the behavioral curriculum.

Can you describe how the school works?
It’s a three-year program. In the first year, they learn about the principles of design and using programs like Adobe’s Flash and Photoshop. In Flash, they study 2D animation and in Photoshop they learn how to do backgrounds for their animation work.

During year two, they learn how to work in a production pipeline. They are given a project that the class works on together, and they learn how to edit in Adobe Premiere Pro and composite in Adobe After Effects.

In the third year, they develop their skills in 3D via Autodesk Maya and compositing with The Foundry’s Nuke. They learn the way we work in the studio and our pipeline, as well as how to prepare their portfolios for the workplace. At the end of three years, each student completes their training with a demo reel and a resume of their work.

Who helps with the reels and resumes?
Their teachers supervise that process and help them with editing and picking the best pieces for their reel. Having a reel is important for many reasons. While many students will work in our studio for a year after graduation, I was able to place some directly into the work environment because their talent was so good… and their reel was so good.

What is the transition like from school to studio?
They graduate in June and we transition many of them to the studio, where they learn about deadlines and get paid for their work. Here, many experience independence for the first time. We do a lot of 2D-type visual effects clean-up work. We give them shots to work on and test them for the first month to see how they are doing. That’s when we decide if they need more training.

The visual effects side of the studio deals with paint work, wire and rod removal and tracker or marker removals — simple composites — plus a lot of rotoscoping and some greenscreen keying. We also do end title credits for the major movies.

We just opened the animation side of the studio in 2016, so it’s still in the beginning stages, but we’re doing 2D animation. We are not a 3D studio… yet! The 2D work we’ve done includes music videos, websites, PowerPoint presentations and some pieces for the LA Zoo. We are gearing up for major projects.

How many work in the studio?
Right now, we have about 15 artists at workstations in our current studio. Some of them will be placed at outside studios, and part of our strategic planning is figuring out how much we want to expand over the next five years.

Thanks to your VFX background, you have many existing relationships with the major studios. Can you talk about how that has benefitted Exceptional Minds?
We have had so much support from the studios; they really want to help us get work for the artists. We started out with Fox, then Disney and then HBO for television. Marvel Studios is one of our biggest fans. Marvel’s Victoria Alonso is a big supporter, so much so that we gave her our Ed Asner Award last June.

Once we started to do tracker marker removals and end title credits for Marvel, it opened doors. People say, “Well, if you work for Marvel, you could work for us.” She has been instrumental in our success.

What were the Fox and Marvel projects?
Our very first client was Fox and we did tracker removals for Dawn of the Planet of the Apes — that was about three years ago. Marvel happened about two years ago and our first job for them was on Avengers: Age of Ultron.

What are some of the other projects Exceptional Minds has worked on?
We worked on Doctor Strange, providing tracker marker removals and end credits. We worked on Ant-Man, Captain America: Civil War, Pete’s Dragon, Alvin & the Chipmunks: The Road Chip and X-Men: Apocalypse.

Thanks to HBO’s Holly Schiffer we did a lot of Game of Thrones work. She has also been a huge supporter of ours.

It’s remarkable how far you guys have come in a short amount of time. Can you talk about how you ended up at Exceptional Minds?
I used to be a DGA production manager/location manager and then segued into visual effects as a freelance VFX producer for all the major studios. About three years ago, my best friend Yudi Bennett, who is one of the founders of Exceptional Minds, convinced me to leave my career and come here to help set up the studio. I was also tasked with producing, scheduling and budgeting the work that comes into the studio. For me, personally, this has been a spiritual journey. I have had such a good career in the industry, and this is my way of giving back.

So some of these kids move on to other places?
After they have worked in the studio for about a year, or sometimes longer, I look to have them placed at an outside studio. Some of them will stay here at our studio because they may not have the social skills to work on the outside.

Five graduates have been placed so far, and they are working full time at various production studios and visual effects facilities in Los Angeles. We have also had graduates in internships at Cartoon Network and Nickelodeon.

One student is at Marvel, and others are at Stargate Studios, Mr. Wolf and New Edit. To be able to place our artists on the outside is our ultimate goal. We love to place them because it’s sort of life changing. For example, one of the first students we placed, Kevin, is at Stargate. He moved out of his parents’ apartment, he is traveling by himself to and from the studio, he is getting raises and he is moving up as a rotoscope artist.

What is the tuition like?
Students pay about 50 percent and we fundraise the other 50 percent. We also have scholarships for those that can’t afford it. We have to raise a lot of money to support the efforts of the school and studio.

Do companies donate gear?
When we first started, Adobe donated software. That’s how we were able to fund the school before the studio was up and running. Now we’re on an educational plan with them where we pay the minimum. Autodesk and The Foundry also give us discounts or try to donate licenses to us. In terms of hardware, we have been working with Melrose Mac, who is giving us discounts on computers for the school and studio.


Check out the Exceptional Minds website for more info.

Comprimato plug-in manages Ultra HD, VR files within Premiere

Comprimato, makers of GPU-accelerated storage compression and video transcoding solutions, has launched Comprimato UltraPix. This video plug-in offers proxy-free, auto-setup workflows for Ultra HD, VR and more on hardware running Adobe Premiere Pro CC.

The challenge for post facilities finishing in 4K or 8K Ultra HD, or working on immersive 360 VR projects, is managing the massive amount of data. The files are large, requiring a lot of expensive storage, which can be slow and cumbersome to load, and achieving realtime editing performance is difficult.

Comprimato UltraPix addresses this by building on JPEG2000, a compression format that offers high image quality (including a mathematically lossless mode) and generates smaller versions of each frame as an inherent part of the compression process. Comprimato UltraPix then delivers each frame at a size the user’s hardware can accommodate.
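That multi-resolution behavior comes from JPEG2000’s wavelet structure: a decoder can discard resolution levels and reconstruct a proxy-sized image directly from the full-quality file. As a rough, hypothetical illustration (this is not Comprimato’s code; it assumes Python’s Pillow built with OpenJPEG support, and the filename is made up), reduced-resolution decoding looks like this:

```python
from PIL import Image

# Hypothetical JPEG 2000 frame rendered from a 3840x2160 master
im = Image.open("frame_0001.jp2")

# Ask the decoder to discard two wavelet resolution levels before decoding,
# yielding a quarter-width, quarter-height image from the same file --
# no separate proxy media required.
im.reduce = 2
im.load()

print(im.size)  # e.g. (960, 540) for a 3840x2160 source
```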

Once Comprimato UltraPix is loaded on any hardware, it configures itself with auto-setup, requiring no specialist knowledge from the editor, who continues to work in Premiere Pro CC exactly as normal. Any workflow can be boosted by Comprimato UltraPix, and the larger the files, the greater the benefit.

Comprimato UltraPix is multi-platform video processing software for instant resolution switching in realtime. It is a lightweight, downloadable video plug-in for OS X, Windows and Linux systems. Editors can switch between 4K, 8K, full HD, HD or lower resolutions without proxy-file rendering or transcoding.

“JPEG2000 is an open standard, recognized universally, and post production professionals will already be familiar with it as it is the image standard in DCP digital cinema files,” says Comprimato founder/CEO Jiří Matela. “What we have achieved is a unique implementation of JPEG2000 encoding and decoding in software, using the power of the CPU or GPU, which means we can embed it in realtime editing tools like Adobe Premiere Pro CC. It solves a real issue, simply and effectively.”

“Editors and post professionals need tools that integrate ‘under the hood’ so they can focus on content creation and not technology,” says Sue Skidmore, partner relations for Adobe. “Comprimato adds a great option for Adobe Premiere Pro users who need to work with high-resolution video files, including 360 VR material.”

Comprimato UltraPix plug-ins are currently available for Adobe Premiere Pro CC and Foundry Nuke and will be available on other post and VFX tools soon. You can download a free 30-day trial or buy Comprimato UltraPix for $99 a year.

Frame.io 2.0 offers 100 new features, improvements for collaboration

Frame.io, developers of the video review and collaboration platform for content creators, has unveiled Frame.io 2.0, an upgrade offering over 100 new features and improvements. This new version features new client Review Pages, which expand content review and sharing. In addition, the new release offers deeper workflow integration with Final Cut Pro X and Avid Media Composer, plus a completely re-engineered player.

“Frame.io 2 is based on everything we’ve learned from our customers over the past two years and includes our most-requested features,” says Emery Wells, CEO of Frame.io.

Just as internal teams can collaborate using Frame.io’s comprehensive annotation and feedback tools, clients can now provide detailed feedback on projects with Review Pages, which is designed to make the sharing experience simple, with no log-in required.

Review Pages give clients the same commenting ability as collaborators, without exposing them to the full Frame.io interface. Settings are highly configurable to meet specific customer needs, including workflow controls (approvals), security (password protection, setting expiration date) and communication (including a personalized message for the client).

The Review Pages workflow simplifies the exchange of ideas, consolidating feedback in a succinct manner. For those using Adobe Premiere or After Effects, those thoughts flow directly into the timeline, where you can immediately take action and upload a new version. Client Review Pages are also now available in the Frame.io iOS app, allowing collaboration via iPhones and iPads.

Exporting and importing comments and annotations into Final Cut Pro X and Media Composer has gotten easier with the upgraded, free desktop companion app, which allows users to open downloaded comment files and bring them into the editor as markers. There is now no need to toggle between Frame.io and the NLE.

Users can also now copy and paste comments from one version to another. The information is exportable in a variety of formats, whether that’s a PDF containing a thumbnail, timecode, comment, annotation and completion status that can be shared and reviewed with the team or as a .csv or .xml file containing tons of additional data for further processing.

Also new to Frame.io 2.0 is a SMPTE-compliant source timecode display that works with both non-drop and drop-frame timecode. Users can now download proxies straight from Frame.io.
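Drop-frame is purely a labeling convention: 29.97fps video runs slightly slower than a true 30fps, so certain frame numbers are skipped to keep the displayed timecode in step with wall-clock time. The sketch below is a generic Python illustration of that counting rule, not Frame.io’s code:

```python
# Minimal sketch of SMPTE drop-frame timecode at 29.97fps (illustrative only).
# Frame numbers 0 and 1 are skipped at the start of every minute,
# except minutes divisible by 10.
def frames_to_dropframe(frame_number, nominal_fps=30):
    drop = 2                                   # frame numbers dropped per minute
    per_min = nominal_fps * 60 - drop          # 1798 counted frames per minute
    per_10min = nominal_fps * 600 - drop * 9   # 17982 counted frames per 10 minutes

    tens, rem = divmod(frame_number, per_10min)
    if rem > drop:
        frame_number += drop * 9 * tens + drop * ((rem - drop) // per_min)
    else:
        frame_number += drop * 9 * tens

    ff = frame_number % nominal_fps
    ss = frame_number // nominal_fps % 60
    mm = frame_number // (nominal_fps * 60) % 60
    hh = frame_number // (nominal_fps * 3600) % 24
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"  # ';' marks drop-frame

print(frames_to_dropframe(1800))  # 00:01:00;02 -- frames ;00 and ;01 were skipped
```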

The Frame.io 2.0 player page now offers better navigation, efficiency and accountability. New “comment heads” let artists see at a glance who left a comment and where, so they can quickly find and prioritize feedback on any given project. Users can also preview the next comment, saving them time when one comment affects another.

The new looping feature, targeting motion and VFX artists, lets users watch the same short clip on loop. You can even select a range within a clip to really dive in deep. Frame.io 2.0’s asset slider makes it easy to navigate between assets from the player page.

The new Frame.io 2.0 dashboard has been redesigned for speed and simplicity. Users can manage collaborators for any given project from the new collaborator panel, where adding an entire team to a project takes one click. A simple search in the project search bar makes it easy to bring up a project. The breadcrumb navigation bar tracks every move deeper into a sub-sub-subfolder, helping artists stay oriented when getting lost in their work. The new list view option with mini-scrub gives users a bird’s-eye view of everything happening in Frame.io 2.0.

Copying and moving assets between projects takes up no additional storage, even when users make thousands of copies of a clip or project. Frame.io 2.0 also now offers the ability to publish direct to Vimeo, with full control over publishing options, so pros can create the description and set privacy permissions, right then and there.

Aardman creates short film, struts its stuff

By Randi Altman

All creative studios strive for creative ways to show off their talent and offerings, and London-based Aardman is no exception. Famous for its stop-motion animation work (remember the Wallace and Gromit films?), this studio now provides so much more, including live-action, CG, 2D animation and character creation.

Danny Capozzi

To help hammer home all of its offerings, and in hopes of breaking that stop-motion stereotype, Aardman has created a satirical short film called Visualize This, which depicts a conference call between a production company and an advertising agency and gives the studio a chance to show off the range of solutions it can provide for clients. Each time the fictional client suggests something, that visual pops up on the screen, whether it’s adding graffiti to a snail’s shell, creating textured type or making a giant monster out of CG cardboard boxes.

We reached out to Aardman’s Danny Capozzi, who directed the short, to find out more about this project and the studio in general.

How did the idea for this short come about?
I felt that the idea of making a film based on a conference call was something that would resonate with a lot of people in any creative industry. The continuous spitballing of ideas and suggestions made a great platform to demonstrate a lot of different styles that Aardman and I can produce. Aardman is well known for its high level of stop-motion/Claymation work, but we do CGI, live action and 2D just as well. We also create brand new ways of animating by combining styles and techniques.

Why was now the right time to do this?
I think we are living in a time of uncertainty, and this film really expresses that. We do a lot of procrastinating. We have the luxury to change our minds, our tastes and our styles every two minutes. With so much choice at our fingertips, we can no longer make quick decisions and stick to them. There’s always that sense of “I love this… it’s perfect, but what if there’s something better?” I think Visualize This sums it up.

You guys work with agencies and directly with brands — how would you break that up percentage wise?
The large majority of our advertising work still comes through agencies, although we are increasingly doing one-off projects for clients who seek us out for our storytelling and characters. It’s hard to give a percentage on it because the one-offs vary so much in size that they can skew the numbers and give the wrong impression. More often than not, they aren’t advertising projects either and tend to fall into the realm of short films for organizations, which can be either charities, museums or visitor attractions, or even mass participation arts projects and events.

Can you talk about making the short? Your workflow?
When I first pitched the idea to our executive producer Heather Wright, she immediately loved it. After a bit of tweaking on the script and the pace of the dialogue, we soon went into production. The film was made during some downtime from commercial productions and took about 14 weeks on and off over several months.

What tools did you call on?
We used a large variety of techniques: CGI, stop-motion, 2D, live action, timelapse photography and greenscreen. Compositing and CG were done in Maya, Houdini and Nuke. We used HDRI (high dynamic range images). We also used Adobe’s After Effects, Premiere, Photoshop and Illustrator, along with clay sculpting, model making and blood, sweat and, of course, some tears.

What was the most complicated shot?
The glossy black oil shot. This could have been done in CGI with a very good team of modelers and lighters and compositors, but I wanted to achieve this in-camera.

Firstly, I secretly stole some of my son Vinny’s toys away to Aardman’s model-making workshop and spray painted them black. Sorry Vinny! I hot glued the black toys onto a black board (huge mistake!), you’ll see why later. Then I cleared Asda out of cheap cooking oil — 72 litres of the greasy stuff. I mixed it with black oil paint and poured it into a casket.

We then rigged the board of toys to a motion control rig. This would act as the winch to raise the toys out of the black oily soup. Another motion control was rigged to do the panning shot with the camera attached to it. This way we get a nice up and across motion in-camera.

We lowered the board of toys into the black soup and the cables that held it up sagged and released the board of toys. Noooooo! I watched them sink. Then to add insult to injury, the hot glue gave way and the toys floated up. How do you glue something to an oily surface?? You don’t! You use screws. After much tinkering it was ready to be submerged again. After a couple of passes, it worked. I just love the way the natural glossy highlights move over the objects. All well worth doing in-camera for real, and so much more rewarding.

What sort of response has it received?
I’m delighted. It has really travelled since we launched a couple of weeks ago, and it’s fantastic to keep seeing it pop up in my news feed on various social media sites! I think we are at over 20,000 YouTube views and 40,000-odd views on Facebook.

Editor Eddie Ringer joins NYC’s Wax

Wax, an editorial house based in NYC, has added film and commercial editor Eddie Ringer. Ringer comes to Wax from Wildchild + Bonch in New York. Prior to that, he spent over eight years at Sausalito-based agency Butler Shine Stern + Partners (BSSP), where he edited and directed advertising projects spanning broadcast commercials, viral campaigns and branded content.

Ringer says he calls on his agency background for his editing work. “Working on the agency side I saw firsthand the tremendous amount of thought and hard work that goes into creating a campaign. I take this into consideration on every project. It focuses me. The baton has been passed, and it’s my responsibility to make sure the collective vision is carried through to the end.”

In addition to his agency experience, Ringer enjoys the way sound design can dictate the flow of the edit and stresses the importance of balancing the creative part with the commerce side of things and understanding why it works. “At the end of the day,” he notes, “we’re trying to connect with an audience to sell a product and a brand.”

Ringer’s first job with Wax was a new spot for ITV London promoting the horse-racing channel. It features momentum edits, hard cuts, energy and, of course, lots of sound design.

His tool of choice is Adobe Premiere Pro. “I made the switch to Premiere about four years ago and never looked back. I find the functionality more intuitive than other NLEs I’ve used in the past,” he says.

Review: The HP Z1G3 All-in-One workstation

By Brady Betzel

I’ll admit it. I’ve always been impressed with HP’s workstations — from the Z840 to the ZBook mobile workstation and now the HP Z1G3 All-in-One. Yes, I know, the HP line of workstations is not cheap. In fact, you can save quite a bit of money building your own system, but you will probably have tons of headaches unless you are very confident in your computer-building skills. And if you don’t mind standing in the return line at Fry’s Electronics.

HP spends tons of time and money on ISV certifications for their workstations. ISV certification stands for Independent Software Vendor certification. In plain English it means that HP spends a lot of time and money making sure the hardware inside of your workstation works with the software you use. For an industry pro that means apps like Adobe’s Premiere Pro and After Effects, Avid Media Composer, Autodesk products like 3DS Max and many others.

For this review, I tested apps like Avid Media Composer, FilmLight’s Baselight for Media Composer color correction plug-in, Adobe Premiere Pro, Adobe Media Encoder and Adobe After Effects, as well as Blackmagic’s Resolve 12.5.2, which chewed through basic color correction. In terms of testing time, I typically keep a review computer system for a couple of months, but with this workstation I really wanted to test it as thoroughly as possible — I’ve had the workstation for three months and counting, and I’ve been running the system through all the appropriate paces.

I always love to review workstations like the HP Z1G3 because of the raw power they possess. While HP sent me one of the top-of-the-line Z1G3 configurations, which retails for a list price of $3,486, they have a pretty reasonable starting price at $1,349. From Intel i3, i5 and i7 configurations all the way up to the almighty Intel Xeon — the HP Z1G3 can be customized to fit into your workflow, whether you just need to check your email or color correct video from your GoPro.

Here are the specs that make up the HP Z1G3 All-in-One workstation I received:

● 23.6-inch UHD/4K non-glare and non-touch display (3840×2160)
● Intel Xeon E3-1270 v5 CPU, 3.6GHz (4 Cores / 8 Threads)
● 64GB DDR4 SODIMM 2133MHz (4 x 16GB)
● Nvidia Quadro M2000M graphics (4GB)
● Two Z Turbo drives (512GB, PCIe M.2)
● Wireless keyboard and mouse
● Two Thunderbolt 3/USB 3.1 ports
● USB charging port
● Media card reader
● DisplayPort out

As I mentioned earlier, I tested the Z1G3 with many different apps, but recently I’ve been diving deeper into color correction, and luckily for my testing this fits right in. A few of the most strenuous real-world tests for computer systems are running 3D modeling apps like Maxon Cinema 4D and color correction suites like Resolve. Of course, apps like After Effects are great tests as well, but adding nodes on nodes on nodes in Resolve will really tax your CPU, as well as your GPU.

One thing that can really set high-end systems like the Z1G3 apart is the delay (or lack of it) when using a precision color correction panel like Tangent’s Element or Ripple. Sometimes you will move one of the color wheel balls and half a second later the color wheel moves on screen. I tried adding a few clips and nodes on the timeline and, when using the panels, I noticed no discernible delay (at least not more than I would expect). While this isn’t a scientific test, it is crucial for folks looking to plug in external devices.

For more scientific tests, I stuck to apps like Cinebench from Maxon, AJA’s System Test and Blackmagic’s Disk Speed Test. In Cinebench, the Z1G3 ranked at the top of the list when compared to similar systems. In AJA’s System Test, I tested the read/write speed of the non-OS drive (basically the editing or cache drive). It sustained around 1520MB/s read and 1490MB/s write. I say around because I couldn’t get the AJA app to display the entire read/write numbers because of the high-resolution scaling in Windows; I tried scaling down to 1920×1080, but no luck. In Blackmagic’s Disk Speed Test, I was running at 1560MB/s read and 1497.3MB/s write. The drive that I ran this test on is HP’s version of the M.2 PCIe SSD powered by Samsung, more affectionately known by HP as a Z Turbo drive. The only thing better at the moment would be a bunch of these drives arranged in a RAID-0 configuration. Luckily, you can do that through the Thunderbolt 3 port with some spare SSDs you have lying around.

Almost daily I ran Premiere Pro CC, Media Encoder and Resolve Studio 12.5.2. I was really happy with the performance in Premiere. When working with QuickTimes in inter-frame codecs like H.264 and AVC-HD (non-edit-friendly codecs), I was able to work without too much stuttering in the timeline. When I used intra-frame codecs like ProRes HQ from a Blackmagic Pocket Cinema Camera, Premiere worked great. I even jumped into Adobe’s Lumetri color tools while using Tangent’s Ripple external color correction panel, and it worked with little discernible delay. I did notice that Premiere had a little more delay when using the external color correction panel than Media Composer and Resolve, but that seemed to be more of a software problem rather than a workstation problem.

One of my favorite parts about using a system with an Nvidia graphics card, especially a Quadro card like the M2000M, is the ability to encode multiple versions of a file at once. Once I was done editing some timelapses in Premiere, I exported using Media Encoder. I would apply three presets I made: one square 600×600 H.264 for Instagram, one 3840×2160 H.264 for YouTube and an Animated GIF at 480×360 for Twitter. Once I told Media Encoder to encode, it ran all three exports concurrently — a really awesome feature. With the Nvidia Quadro card installed, it really sped along the export.
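For readers who script deliverables outside of Media Encoder, a rough equivalent of that three-preset queue might look like the following Python sketch. It assumes ffmpeg is installed; the filenames and encode settings are hypothetical and are not the presets used in this review:

```python
import subprocess

SRC = "timelapse_master.mov"  # hypothetical source clip

jobs = [
    # (output file, video/audio options)
    # A real square preset would crop before scaling; this just illustrates the idea.
    ("instagram_600x600.mp4", ["-vf", "scale=600:600,setsar=1", "-c:v", "libx264", "-crf", "20", "-an"]),
    ("youtube_2160p.mp4",     ["-vf", "scale=3840:2160", "-c:v", "libx264", "-crf", "18", "-c:a", "aac"]),
    ("twitter_480x360.gif",   ["-vf", "fps=12,scale=480:360"]),
]

# Kick off all three encodes at once, much like queuing three presets
# in Media Encoder and letting them run concurrently.
procs = [subprocess.Popen(["ffmpeg", "-y", "-i", SRC, *opts, out]) for out, opts in jobs]
for p in procs:
    p.wait()
```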

Media Composer
Another app I wanted to test was Media Composer 8.6.3. Overall Media Composer ran great except for the high-resolution display. As I’ve said in previous reviews, this isn’t really the fault of HP, but more of the software manufacturers who haven’t updated their interfaces to adapt to the latest UHD displays. I had filmed a little hike I took with my five-year-old. I gave him a GoPro while I had my own. Once we got the footage back home, I imported it into Media Composer, grouped the footage and edited it using the multi-cam edit workflow.

Simply put, the multi-camera split was on the left and the clip I had in the sequence was playing simultaneously on the right. Before I grouped the footage into a multi-group, I transcoded the H.264s into DNxHD 175, an intra-frame, edit-friendly codec. The transcode was nearly realtime, so it took 60 minutes to transcode a 60-minute H.264 — which is not bad. In the end, I was able to edit the two-camera multi-group at 1920×1080 resolution with only minor hiccups. Occasionally, I would get caught in fast forward for a few extra seconds when J-K-L editing, but nothing that made me want to throw my keyboard or mouse against the wall.

Once done editing, I installed the FilmLight color correction plug-in for Media Composer. I had a really awesome experience coloring using Baselight in Media Composer on the Z1G3. I didn’t have any slowdowns, and the relationship between using the color correction panel and Baselight was smooth.

Resolve
The last app I tested with HP’s Z1G3 All-in-One workstation was Blackmagic’s Resolve 12.5.2. Much like my other tests, I concentrated on color correction with the Tangent Ripple and the Element-Vs iOS app. I had four or five nodes going in the color correction page before I started to see a slowdown. I was using the native H.264 and ProRes HQ files from the cameras, so I didn’t make it easy for Resolve, but it still worked. Once I added a little sharpening to my clips, the HP Z1G3 really had to kick into gear. I heard the faint hum of fans, which up until this point hadn’t kicked in. This is also where the system started to slow down and become sluggish.

Summing Up
The Z1G3 is one of my favorite workstations, period. A while ago, I reviewed the previous All-in-One workstation from HP, the Z1G2, and at the time it was my favorite. One of my few complaints was that, while it was easy to fix, it was very heavy and bulky. When I opened the Z1G3 box, I immediately noticed how much lighter and streamlined the design was. It almost felt like they took away 50 percent of the bulk, which is something I really appreciate. I can tell that one of the main focuses with the Z1G3 was minimizing its footprint and weight, while increasing the power. HP really knocked it out of the park.

One of the only things I wish were different on the Z1G3 I tested is the graphics card. While the Nvidia Quadro M2000M is a great graphics card, it is a “mobile” version of a Quadro, which has 128 fewer CUDA cores and 26GB/s less bandwidth than its desktop equivalent, the M2000. I would love the option of a full-sized Quadro instead of the mobile version, but I also understand that power consumption and the form factor would go up, so maybe I give HP a pass here.

In the end, I know everyone reading this review is saying to themselves, “I love my iMac, so why would I want the HP Z1G3?” If you are a die-hard Apple user, or you just saw the new Microsoft Surface Studio announcement, then it might be a hard sell, but I love both Windows- and Mac OS-based systems, and the Z1G3 is awesome. What’s even more awesome is that it is easily upgradeable. I took off the back cover, and with a simple switch I could have added a 2.5-inch hard drive or two in under a minute. If you are looking for a powerful new workstation that stands up to Resolve and Premiere Pro CC, the HP Z1G3 is for you.


Brady Betzel is an online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Earlier this year, Brady was nominated for an Emmy for his work on Disney’s Unforgettable Christmas Celebration.

Frame.io’s Emery Wells talks about Premiere Pro collaboration

In this business, collaboration is key. And without a strong system in place, chaos ensues, things get missed and time and money is wasted. The makers of Frame.io, a video review, sharing and collaboration platform, know that first hand, having been hands-on post pros. Recently, they came out with a realtime tool for Adobe Premiere Pro, aptly named Frame.io for Premiere Pro. The product includes the entire Frame.io application, redesigned and re-engineered for Adobe’s panel architecture.

This product, they say, is based on decades of real-world experience sitting in an editor’s chair. Features of the product include a shared cloud bin that multiple editors can work from; one-click import and export of sequences, project files and entire bins; realtime comments directly in the Premiere timeline with no marker syncing; auto versioning for rapid iteration on creative ideas; comment marker syncing for when you do not have an internet connection; and synced playback in Frame.io and your Premiere timeline.

To find out more, we reached out to Frame.io co-founder and CTO Emery Wells (right).

With this latest offering, you target a specific product. Does this mean you will be customizing your tool for specific editing platforms going forward?
Frame.io is a web application that can be accessed from any browser. It is not NLE specific and you can upload media that’s come from anywhere. We’ve started to build (and will continue to build) tools that help bridge the gap between the creative desktop apps like Premiere and Frame.io. Each tool we integrate comes with its own unique set of challenges and capabilities. Premiere’s extension model was compatible with Frame.io’s architecture so we were able to bring the entire feature set directly inside Premiere.

How is this product different from others out there?
It’s much more significant than an integration. It’s an entire realtime collaboration layer for Adobe Premiere and is transformative to the way video gets made. The Premiere integration has already been in the hands of companies like BuzzFeed, where they have 200 producers cranking out 175 videos/week. That is an absolutely maddening pace. Frame.io and our latest Premiere integration brings efficiency to that process.

Can multiple editors work simultaneously, like Avid?
No. It’s not a replacement for an Avid set-up like that.

What are the costs?
The Premiere extension comes standard with any Frame.io plan, including our free plan.

What speed of Internet is required?
We recommend a minimum of 10 megabits per second, which is fairly accessible on any broadband connection these days.

How easy to use is it, really?
It’s as easy as the Frame.io web application itself. Anyone can get up to speed in 10-15 minutes of poking around.

What do you think is the most important thing users should know about this product?
We’re solving real problems based on real experience. We built the tool we wanted as editors ourselves. Frame.io for Premiere Pro really allows you to go home at a decent hour instead of waiting around for a render at 10pm. We automate the render, upload and notification. You don’t have to pull your hair out trying to stay organized just to move a project forward.

Bandito Brothers: picking tools that fit their workflow

Los Angeles-based post, production and distribution company Bandito Brothers is known for its work on feature films such as Need for Speed, Act of Valor and Dust to Glory. They provide a variety of services — from shooting to post to visual effects — for spots, TV, films and other types of projects.

Lance Holte in the company’s broadcast color bay, working on DaVinci Resolve 12.

They are also known in our world for their Adobe-based workflows, using Premiere and After Effects in particular. But that’s not all they are. Recently, Bandito invested in Avid’s new ISIS | 1000 shared storage system to help them work more collaboratively with very large and difficult-to-play files across all editing applications. The system — part of the Avid MediaCentral Platform — allows Bandito’s creative teams to collaborate efficiently regardless of which editing application they use.

“We’ve been using Media Composer since 2009, although our workflows and infrastructure have always been built around Premiere,” explains Lance Holte, senior director of post production, Bandito Brothers. “We tend to use Media Composer for offline editorial on projects that require more than a few editors/assistants to be working in the same project since Avid bin-locking in one project is a lot simpler than breaking a feature into 200 different scene-based Premiere projects.

“That said, almost every project we cut in Avid is conformed and finished in Premiere, and many projects — that only require two or three editors/assistants, or require a really quick turnaround time, or have a lot of After Effects-specific VFX work — are cut in Premiere. The major reason that we’ve partnered with Avid on their new shared storage is because it works really well with the Adobe suite and can handle a number of different editorial workflows.”

Bandito’s Mix Stage and Edit 4.

He says the ISIS | 1000 gives them the collaborative power to share projects across a wide range of tools and staff, and to complete projects in less time. “The fact that it’s software-agnostic means everyone can use the right tools for the job, and we don’t need to have several different servers with different projects and workflows,” says Holte.

Bandito Brothers’ ISIS|1000 system is accessible from three separate buildings at its Los Angeles campus — for music, post production and finishing. Editors can access plates being worked on by its on-site visual effects company, or copy over an AAF or OMF file for the sound team to open in Avid Pro Tools in their shared workspace.

“Bandito uses Pro Tools for mixing, which also makes the ISIS | 1000 handy, since we can quickly move media between mix and editorial anywhere across the campus,” concludes Holte.

Currently, Bandito Brothers is working on a documentary called EDM, as well as commercial campaigns for Audi, Budweiser and Red Bull.

Tutorial: Using Trim editing in Premiere Pro CC 2015

By Sean “Premiere Bro” Schools

Premiere Pro CC 2015 brought more to editors than awesome color grading tools and magical transitions. The new release also brought several enhancements to Premiere Pro’s trimming capabilities.

If you’re a Premiere Pro editor who has never edited in Trim Mode, CC 2015 is the time and version to start. This post highlights three new trim features along with many tips for maximizing the efficiency of Trim Mode editing in Premiere Pro.

1. Trim and Nudge Share Shortcut
Trim and Nudge can use the same keyboard shortcut. Premiere Pro blog: https://blogs.adobe.com/premierepro/2015/06/premiere-pro-cc-2015.html.

Shortcut sharing sounds like chaos: two editing functions — Trim and Nudge — battling it out underneath the keyboard for priority. But it’s not as scary as it sounds. Premiere Pro will perform a Trim when an edit point is selected and will perform a Nudge when a clip is selected. It’s actually profoundly intuitive, and it’s a feature that will soon be taken for granted.

By enabling Trim and Nudge to share the same keyboard shortcuts, Premiere Pro consolidates valuable keystrokes by giving them twice the capability. Obviously, only the Trim function of the shared shortcut applies while in Trim Mode. This tutorial shows how to map Trim commands to the default Nudge keyboard shortcuts: https://youtu.be/iEsWIE7hx9I.


2. Revert Trim Session
A Revert Trim Session button can be added to the Program Monitor to enable an edit point to be returned to its original position before Trim Mode was entered — Premiere Pro blog: https://blogs.adobe.com/premierepro/2015/06/premiere-pro-cc-2015.html. (Note: Revert Trim Session is also a keyboard shortcut.)

Simply put, Revert Trim Session undoes successive trim edits made in Trim Mode. The ability to return an edit point to its original place, prior to changes, with one click will make Trim Mode more appealing to many Premiere Pro editors. The Revert Trim Session feature is also particularly intriguing because it introduces a new trimming term: “Trim Session.” Although it’s logical to assume that Trim Session refers to all trim activity within Trim Mode, there’s no official documentation for this functionality. It may be reading too much between the lines, but it’s as if Adobe is using this language to suggest an enhanced trim editing workflow. More on that in a future post. Learn how to set up Revert Trim Session in this tutorial: https://youtu.be/yQb7a2ilgCM.

3. Loop Playback ‘Live Trimming’
In loop playback Trim mode in the program monitor, the I and O buttons can be used to adjust the position of the edit point on the fly — Premiere Pro blog: https://blogs.adobe.com/premierepro/2015/06/premiere-pro-cc-2015.html.

We’ll coin this feature “Live Trimming” until a more official term is given by Adobe. It’s similar to “J-K-L Dynamic Trimming” (which still works in CC 2015) but it’s uniquely different in that making an edit does not require playback to stop.

While playback is looping in Trim Mode, pressing “I” and “O” will set a new in and out point (based on the current trim type) for the outgoing or incoming clips. When an edit is made, loop playback will reset on the new edit point and further editing can continue.

In a way, Live Trimming feels similar to multicam switching in being able to watch playback and make an edit when it feels right. This new functionality within Trim Mode gives Premiere Pro editors a more dynamic and interactive trim editing experience. Watch this tutorial to see Live Trimming in action: https://youtu.be/FXe-mjxR5ko.

Key Point Recap
The following tips will increase the speed and efficiency of trim editing in Premiere Pro CC 2015:
• Assign keyboard shortcuts to each of the “Select Nearest Edit Point…” commands. This will allow you to jump to the nearest edit point with a specific trim type, instead of having to select the edit point and then Toggle Trim Type (Ctrl+T).

• In Trim Mode, select your trim type before you begin loop playback. Playback must be stopped to change trim type.
• Try first using “I” and “O” Live Trimming to trim the edit point to where it feels right. Then, continue to finesse using the Trim keyboard shortcuts.
• Cmd+Z will undo the last trim edit without exiting trim mode or interrupting loop playback.
• Assign keyboard shortcuts to each of the “Toggle Target Video…” commands. This will allow you to make trim edits to clips on specific video tracks. Do the same for all the “Toggle Target Audio” commands.


Coming Soon to this space: a post defining Trim Session, including two feature requests, and how it is a unique trim editing workflow.

Premiere Bro is the alias for Sean Schools. Sean is the video editor for JK Design. He is a Full Sail University graduate who did time in Brooklyn. You can email Sean at premierebro@gmail.com, and follow him on Twitter @premierebro. You will also find this blog on his website www.premierebro.com.