
NAB: Adobe’s spring updates for Creative Cloud

By Brady Betzel

Adobe has had a tradition of releasing Creative Cloud updates prior to NAB, and this year is no different. The company has been focused on improving existing workflows and adding new features, some based on Adobe’s Sensei technology, along with further VR enhancements.

In this release, Adobe has announced a handful of Premiere Pro CC updates. While I personally don’t think that they are game changing, many users will appreciate the direction Adobe is going. If you are color correcting, Adobe has added the Shot Match function that allows you to match color between two shots. Powered by Adobe’s Sensei technology, Shot Match analyzes one image and tries to apply the same look to another image. Included in this update is the long-requested split screen to compare before and after color corrections.

Motion graphic templates have been improved with new adjustments like 2D position, rotation and scale. Automatic audio ducking has been included in this release as well. You can find this feature in the Essential Sound panel, and once applied it will essentially dip the music in your scene based on dialogue waveforms that you identify.

Still inside of Adobe Premiere Pro CC, but also applicable in After Effects, is Adobe’s enhanced Immersive Environment. This update is for people who use VR headsets to edit and/or process VFX. Team Projects workflows have been updated with better version tracking and indicators of who is using bins and sequences in realtime.

New Timecode Panel
Overall, while these updates are helpful, none are barn burners. The one that does have me excited is the new Timecode Panel — it’s the biggest new update to the Premiere Pro CC app. For years now, editors have been clamoring for more than just one timecode view. You can view sequence timecodes, source media timecodes from the clips on the different video layers in your timeline, and you can even view the same sequence timecode in a different frame rate (great for editing those 23.98 shows to a 29.97/59.94 clock!). And one of my unexpected favorites is the clip name in the timecode window.

I was testing this feature in a pre-release version of Premiere Pro, and it was a little wonky. First, I couldn’t dock the timecode window. While I could add lines and access the different menus, my changes wouldn’t apply to the row I had selected. In addition, I could only right-click and try to change the first row of contents, but it would choose a random row to change. I am assuming the final release has this all fixed. If the wonkiness gets ironed out, this is a phenomenal (and necessary) addition to Premiere Pro.

Codecs, Master Property, Puppet Tool, more
There have been some codec compatibility updates as well, specifically Sony X-OCN raw (Venice), Canon Cinema Raw Light (C200) and Red IPP2.

After Effects CC has also been updated with Master Property controls. Adobe said it best in its announcement: “Add layer properties, such as position, color or text, in the Essential Graphics panel and control them in the parent composition’s timeline. Use Master Property to push individual values to all versions of the composition or pull selected changes back to the master.”

The Puppet Tool has been given some love with a new Advanced Puppet Engine, offering improved mesh and starch workflows for animating static objects. The Add Grain, Remove Grain and Match Grain effects are now multi-threaded, and enhanced disk caching and project management improvements have been added as well.

My favorite update for After Effects CC is the addition of data-driven graphics. You can drop a CSV or JSON data file and pick-whip data to layer properties to control them. In addition, you can drag and drop data right onto your comp to use the actual numerical value. Data-driven graphics is a definite game changer for After Effects.
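
To give a sense of what such a data file might look like, here is a minimal sketch in Python that writes a hypothetical CSV; the column names and values are invented for illustration and are not from Adobe’s sample files. A file like this could be dropped into an After Effects project and its fields pick-whipped to layer properties.

```python
import csv

# Hypothetical weekly stats to drive a data-driven motion graphic.
# Column names and values are invented for illustration only.
rows = [
    {"week": "Week 1", "viewers": 1200, "shares": 85},
    {"week": "Week 2", "viewers": 1850, "shares": 140},
    {"week": "Week 3", "viewers": 2430, "shares": 210},
]

with open("weekly_stats.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["week", "viewers", "shares"])
    writer.writeheader()    # the header row supplies the field names you would pick-whip
    writer.writerows(rows)  # each row becomes a data point the comp can read
```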

Audition
While Adobe Audition is an audio mixing application, it has some updates that will directly help anyone looking to mix their edit in Audition. In the past, to get audio to a mixing program like Audition, Pro Tools or Fairlight you would have to export an AAF (or, if you are old like me, possibly an OMF). In the latest Audition update you can simply open your Premiere Pro projects directly in Audition, re-link video and audio and begin mixing.

I asked Adobe whether you could go back and forth between Audition and Premiere, but it seems like it is a one-way trip. They must be expecting you to export individual audio stems once done in Audition for final output. In the future, I would love to see back-and-forth capabilities between apps like Premiere Pro and Audition, much like the Fairlight tab in Blackmagic’s Resolve. There are some other updates, like larger tracks and under-the-hood improvements, which you can find more info about at https://theblog.adobe.com/creative-cloud/.

Adobe Character Animator has some cool additions, like overall character-building improvements, but I am not too involved with Character Animator, so you should definitely read about things like the Trigger improvements on Adobe’s blog.

Summing Up
In the end, it is great to see Adobe moving forward on updates to its Creative Cloud video offerings. Data-driven animation inside of After Effects is a game-changer. Shot color matching in Premiere Pro is a nice step toward a professional color correction application. Importing Premiere Pro projects directly into Audition is definitely a workflow improvement.

I do have a wishlist though: I would love for Premiere Pro to concentrate on tried-and-true solutions before adding fancy updates like audio ducking. For example, I often hear people complain about how hard it is to export a QuickTime out of Premiere with either stereo or mono/discrete tracks. You need to set up the sequence correctly from the jump, adjust the pan on the tracks, as well as adjust the audio settings and export settings. Doesn’t sound streamlined to me.

In addition, while shot color matching is great, let’s get an Adobe SpeedGrade-style view tab into Premiere Pro so it works like a professional color correction app… maybe Lumetri Pro? I know if the color correction setup was improved I would be way more apt to stay inside of Premiere Pro to finish something instead of going to an app like Resolve.

Finally, consolidating and transcoding used clips with handles is hit or miss inside of Premiere Pro. Can we get a rock-solid consolidate and transcode feature? Regardless of these few negatives, Premiere Pro is an industry staple and it works very well.

Check out Adobe’s NAB 2018 update video playlist for details on each and every update.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Review: Digital Anarchy’s Transcriptive plugin for Adobe Premiere

By Brady Betzel

One of the most time consuming parts of editing can be dealing with the pre-post, including organizing scripts and transcriptions of interviews. In the past, I have used and loved Avid’s ScriptSync and PhraseFind. These days, with people becoming more comfortable with other NLEs such as Adobe Premiere Pro, Apple FCP X and Blackmagic Resolve, there is a need for similar technology inside those apps, and that is where Digital Anarchy’s Transcriptive plugin comes in.

Transcriptive is a Windows- and Mac OS-compatible plugin for Premiere Pro CC 2015.3 and above. It allows the editor to have a sequence or multiple clips transcribed in the cloud by either IBM Watson or Speechmatics, with the resulting script downloaded to your system and synced to the clips and sequences, all for a price. From there you can search for specific words, sort by speaker (including labeling each speaker) or just follow along with an interview’s transcript.

Avid’s ScriptSync is an invaluable plugin, in my opinion, when working on shows with interviews, especially when combining multiple responses into one cohesive answer being covered by b-roll — often referred to as a Frankenbite. Transcriptive comes close to Avid’s ScriptSync within Premiere Pro, but has a few differences, and is priced at $299, plus the per-minute cost of transcription.

A Deeper Look
Within Premiere, Transcriptive lives under the Window menu > Extensions > Transcriptive. To get access to the online AI transcription services you will obviously need an Internet connection, as well as an account with Speechmatics and/or IBM’s Watson. You’ll really want to follow along with the manual, which can be found here. It walks you step by step through setting up the Transcriptive plugin.

It is a little convoluted to get it all set up, but once you do you are ready to upload a clip and get transcribing. IBM’s Watson will get you going with 1,000 free minutes of transcription a month, and from there it goes from $.02/minute down to $.01/minute, depending on how much you need transcribed. If you need additional languages transcribed it will be up-charged $.03/minute. Speechmatics is another transcription service that runs roughly $.08 a minute (I say roughly because the price is in pounds and has fluctuated in the past) and it will go down if you do more than 1,000 minutes a month.
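
As a quick sanity check on those rates, here is a small Python sketch that estimates a monthly Watson bill from the free-minute allowance and per-minute pricing quoted above; the flat rate after the free tier is a simplifying assumption, since the real pricing steps down with volume.

```python
def watson_monthly_cost(minutes, free_minutes=1000, rate_per_minute=0.02):
    """Rough monthly cost estimate using the figures quoted above.

    Assumes a flat $.02/minute after the free allowance; in reality the
    rate steps down toward $.01/minute at higher volumes.
    """
    billable = max(0, minutes - free_minutes)
    return billable * rate_per_minute

# Example: 40 hours of interviews in a month
print(f"${watson_monthly_cost(40 * 60):.2f}")  # (2400 - 1000) * 0.02 = $28.00
```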

Your first question should be why there is such a disparity in price, and in this instance you get what you pay for. If you aren’t as strict on accuracy, then Watson is for you — it doesn’t quite get everything correct and can sometimes fail to see when a new person is talking, even on a very clear recording. Speechmatics was faster during my testing and more accurate. If free is a good price for you, then Watson might do the job, and you should try it first. But in my opinion, Speechmatics is where you need to be.

When editing interviews, accuracy is extremely important, especially when searching specific key words, and this is where Speechmatics came through. Neither service has complete accuracy, and if something is wrong you can’t kick it back like you could a traditional, human-based transcription service.

The Test
To test Transcriptive I downloaded a CNN interview between Anderson Cooper and Hillary Clinton, which in theory should have perfect audio. Even with “perfect audio,” Watson had some trouble when one person would talk over the other. Speechmatics seemed to label each person correctly when they spoke, and I would guess it missed only about 5% of the words, so about 95% accurate — Watson seemed to be about 70% accurate.
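
Those percentages are my informal impressions rather than a formal word-error-rate measurement, but if you have a trusted reference transcript you can get a ballpark figure yourself. The sketch below uses Python’s difflib for a naive word-level comparison; it is only an approximation, not a proper WER calculation that separates substitutions, insertions and deletions.

```python
from difflib import SequenceMatcher

def rough_accuracy(reference: str, hypothesis: str) -> float:
    """Approximate word-level accuracy as the share of reference words matched."""
    ref_words = reference.lower().split()
    hyp_words = hypothesis.lower().split()
    matcher = SequenceMatcher(None, ref_words, hyp_words)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / max(1, len(ref_words))

# Invented example strings, standing in for a human transcript and an AI one
ref = "thank you very much for taking the time to talk with us today"
hyp = "thank you very much for taking the time to talk with us to day"
print(f"{rough_accuracy(ref, hyp):.0%}")  # 92% for this tiny example
```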

To get your file to these services, you can send your media from a sequence, multiple clips or a folder of clips. I tend to favor a specific folder of clips to transcribe, as it forces some organization and my OCD assistant editor brain feels a little more at home.

As alluded to earlier, Transcriptive runs as an extension inside of Premiere Pro. Inside Premiere, you have to have the Transcriptive window active when doing edits or simply playing down a clip; otherwise you will be affecting the timeline (meaning if you hit undo you will be undoing your timeline work, so be careful). Transcriptions also load differently depending on whether you are working with clips or sequences. If you transcribe individual clips using the Batch Files command, the transcription will be loaded into the infamous Speech Analysis field of the file’s metadata. In this instance you can then search in the metadata field instead of the Transcriptive window.

One feature I really like is the ability to export a transcript as markers to be placed on the timeline. In addition, you can export many different closed captioning file types such as SMPTE-TT (XML file), which can be used inside of Premiere with its built-in caption integration. SRT and VTT are captioning file types to be uploaded alongside your video to services like YouTube, and JSON files allow you to send transcripts to other machines using the Transcriptive plugin. Besides searching inside of Transcriptive for any lines or speakers you want, you can also edit the transcript. This can be extremely useful if two speakers are combined or if there are some missed words that need to be corrected.
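
If you ever need to get at that data outside the plugin, the caption formats are plain text and easy to parse. Here is a minimal Python sketch that reads a simple SRT file into (start time, text) pairs, which could then be turned into timeline markers by whatever tool you like; it assumes a well-formed SRT and ignores styling.

```python
import re

SRT_TIME = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def parse_srt(path):
    """Return (start_seconds, text) pairs from a simple, well-formed SRT file."""
    entries = []
    with open(path, encoding="utf-8") as f:
        blocks = f.read().strip().split("\n\n")
    for block in blocks:
        lines = block.strip().splitlines()
        if len(lines) < 3:
            continue  # skip malformed or empty blocks
        match = SRT_TIME.search(lines[1])  # e.g. "00:00:05,000 --> 00:00:08,500"
        if not match:
            continue
        h, m, s, ms = map(int, match.groups())
        start = h * 3600 + m * 60 + s + ms / 1000
        entries.append((start, " ".join(lines[2:])))
    return entries

# "interview.srt" is a placeholder name for a caption file exported from Transcriptive
for start, text in parse_srt("interview.srt"):
    print(f"{start:8.2f}s  {text}")
```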

To really explain how Transcriptive works, it is easiest to compare it to Avid’s ScriptSync. If you have used ScriptSync and then give Transcriptive a try, you will likely notice some features that Transcriptive desperately needs in order to be the powerhouse that ScriptSync is — but Transcriptive has the added ability to upload your files and process them in the cloud.

ScriptSync allows the editor or assistant editor to take a bunch of transcriptions and line them up so that, for example, every clip from a particular person lives in one transcription file that can be searched and edited from. In addition, there is a physical representation of the transcriptions that can be organized in bins and accessed separately from the clips. These functions would be a huge upgrade to Transcriptive in the future, especially for editors who work on unscripted or documentary projects with multiple interviews from the same people. If you use an external transcription file and want to align it with clips you already have in the system, you must use (and pay for) Speechmatics, which for a lower price per minute will align the two files.

Updates Are Coming
After I had finished my initial review, Jim Tierney, president of Digital Anarchy, was kind enough to email me about some updates that were coming to Transcriptive as well as a really handy transcription workflow that I had missed my first time around.

He mentioned that they are working on a Power Search function that will allow for a search of all clips and transcripts inside the project. A window will then show all the search results and can be clicked on to open the corresponding clips in the source window or sequence in the record window. Once that update rolls in, Transcriptive will be much more powerful and easier to use.

The one thing that will be hard to differentiate is multiple interviews from multiple people: for instance, if I wanted to limit the search to only my interviews and to a specific phrase. In the future, a way to Power Search a select folder of clips or sequences would be a great way to search isolated clips or sequences, at least easier than searching all clips and sequences.

The other tidbit Jim mentioned was using YouTube’s built-in transcriptions on your own videos. Before you watch the tutorial, keep in mind that this process isn’t flawless. While you can upload your video to YouTube in private mode, the uploading part may still turn away a few people who have security concerns. In addition, you will need to export a low-res proxy version of your clip to upload, which can take time.

If you have the time, or have an assistant editor with time, this process through YouTube might be your saving grace. My two cents: with some upfront bookkeeping like tape naming, plus corrections after transcribing, this could be one of the best solutions if you aren’t worried about security.

Regardless, check out the tutorial if you want a way to get supposedly very accurate transcriptions via YouTube’s transcriber. In the end it will produce a VTT transcription file that you import back into Transcriptive, where you will need to either leave it alone or spend time adjusting it, since VTT files do not allow for punctuation. The main benefit of the VTT file from YouTube is that the timecode is carried back into Transcriptive, so each word can be clicked on and the video will line up to it.

Summing Up
All in all, there are only a few options when working with transcriptions inside of Premiere. Transcriptive did a good job at what it did: uploading my file to one of the transcription services, acquiring the transcript and aligning the clip to the timecoded transcript with identifying markers for speakers that can be changed if needed. Once the Power Search gets ironed out and put into a proper release, Transcriptive will get even closer to being the transcription powerhouse you need for Premiere editing.

If you work with tons of interviews or just want clips transcribed for easy search you should definitely download Digital Anarchy’s Transcriptive demo and give it a whirl.

You can also find a ton of good video tutorials on their site. Keep in mind that the Transcriptive plugin runs $299 and some free transcription is available through IBM’s Watson, but if you want very accurate transcriptions you will need to pay for Speechmatics, or you can try YouTube’s built-in transcription service, which charges nothing.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Editing 360 Video in VR (Part 2)

By Mike McCarthy

In the last article I wrote on this topic, I looked at the options for shooting 360-degree video footage, and what it takes to get footage recorded on a Gear 360 ready to review and edit on a VR-enabled system. The remaining steps in the workflow will be similar regardless of which camera you are using.

Previewing your work is important, so if you have a VR headset you will want to make sure it is installed and functioning with your editing software. I will be basing this article on using an Oculus Rift to view my work in Adobe Premiere Pro 11.1.2 on a Thinkpad P71 with an Nvidia Quadro P5000 GPU. Premiere requires an extra set of plugins to interface with the Rift headset. Adobe acquired Mettle’s Skybox VR Player plugin back in June and has made it available to Creative Cloud users upon request, which you can do here.

Skybox VR player

Skybox can project the Adobe UI to the Rift, as well as the output, so you could leave the headset on when making adjustments, but I have not found that to be as useful as I had hoped. Another option is to use the GoPro VR Player plugin to send the Adobe Transmit output to the Rift, which can be downloaded for free here (use the 3.0 version or above). I found this to have slightly better playback performance, but fewer options (no UI projection, for example). Adobe is expected to integrate much of this functionality into the next release of Premiere, which should remove the need for most of the current plugins and increase the overall functionality.

Once our VR editing system is ready to go, we need to look at the footage we have. In the case of the Gear 360, the dual spherical image file recorded by the camera is not directly usable in most applications and needs to be processed to generate a single equirectangular projection, stitching the images from both cameras into a single continuous view.

There are a number of ways to do this. One option is to use the application Samsung packages with the camera: Action Director 360. You can download the original version here, but you will need the activation code that came with the camera in order to use it. Upon import, the software automatically processes the original stills and video into equirectangular 2:1 H.264 files. Instead of exporting from that application, I pull the temp files that it generates on media import and use them in Premiere. By default, they should be located in C:\Users\[Username]\Documents\CyberLink\ActionDirector\1.0\360. While this is the simplest solution for PC users, it introduces an extra transcoding step to H.264 (after the initial H.265 recording), and I frequently encountered an issue where there was a black hexagon in the middle of the stitched image.
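
If you go the temp-file route, a small script can save some digging. The sketch below copies the stitched files out of that default CyberLink folder into a project media directory before importing them into Premiere; the username and destination path are placeholders, and the .mp4 extension is an assumption about how Action Director names its temp files.

```python
import shutil
from pathlib import Path

# Default Action Director temp location mentioned above; adjust the username.
SOURCE = Path(r"C:\Users\YOUR_USERNAME\Documents\CyberLink\ActionDirector\1.0\360")
DEST = Path(r"D:\Projects\Gear360\stitched")  # hypothetical project media folder

DEST.mkdir(parents=True, exist_ok=True)
for clip in SOURCE.glob("*.mp4"):   # assumes the stitched temp files are .mp4
    target = DEST / clip.name
    if not target.exists():         # skip clips that were already collected
        shutil.copy2(clip, target)
        print(f"Copied {clip.name}")
```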

Action Director

Activating Automatic Angle Compensation in the Preferences->Editing panel gets around this bug, while also trying to stabilize your footage to some degree. I later discovered that Samsung had released a separate version 2 of Action Director, available for Windows or Mac, which solves this issue. But I couldn’t get the stitched files to work directly in the Adobe apps, so I had to export them, which added yet another layer of video compression. You will need the Samsung activation code that came with the Gear 360 to use either version, and both versions took twice as long to stitch a clip as its run time on my P71 laptop.

An option that gives you more control over the stitching process is to do it in After Effects. Adobe’s recent acquisition of Mettle’s SkyBox VR toolset makes this much easier, but it is still a process. Currently you have to manually request and install your copy of the plugins as a Creative Cloud subscriber. There are three separate installers, and while this stitching process only requires Skybox Suite AE, I would install both the AE and Premiere Pro versions for use in later steps, as well as the Skybox VR player if you have an HMD to preview with. Once you have them installed, you can use the Skybox Converter effect in After Effects to convert from the Gear 360’s fisheye files to the equirectangular assets that Premiere requires for editing VR.

Unfortunately, Samsung’s format is not one of the default conversions supported by the effect, so it requires a little more creativity. The two sensor images have to be cropped into separate comps, with the plugin applied to each of them. Setting the input to fisheye and the output to equirectangular for each image will give the desired distortion. A feathered mask applied to the circle adjusts the seam, and the overlap can be adjusted with the FOV and re-orient camera values.

Since this can be challenging to setup, I have posted an AE template that is already configured for footage from the Gear 360. The included directions should be easy to follow, and the projection, overlap and stitch can be further tweaked by adjusting the position, rotation and mask settings in the sub-comps, and the re-orientation values in the Skybox Converter effects. Hopefully, once you find the correct adjustments for your individual camera, they should remain the same for all of your footage, unless you want to mask around an object crossing the stitch boundary. More info on those types of fixes can be found here. It took me five minutes to export 60 seconds of 360 video using this approach, and there is no stabilization or other automatic image analysis.

Video Stitch Studio

Orah makes Video-Stitch Studio, which is a similar product but with a slightly different feature set and approach. One limitation I couldn’t find a way around is that the program expects the various fisheye source images to be in separate files, and unlike Kolor Autopano Video Pro (AVP) I couldn’t get the source cropping tool to work without rendering the dual fisheye images into separate square video source files. There should be a way to avoid that step, but I couldn’t find one. (You can use the crop effect to remove 1920 pixels on one side or the other to make the conversions in Media Encoder relatively quickly.) Splitting the source file and rendering separate fisheye spheres adds a workflow step and render time, and my one-minute clip took 11 minutes to export. This is a slower option, which might be significant if you have hours of footage to process instead of minutes.
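
For that splitting step, a command-line crop is another option if you happen to have ffmpeg installed. The sketch below assumes the dual-fisheye master is 3840x1920 with the two lenses side by side; the filenames are placeholders.

```python
import subprocess

SOURCE = "gear360_dual_fisheye.mp4"  # hypothetical 3840x1920 dual-fisheye master

# crop=width:height:x:y (left lens starts at x=0, right lens at x=1920)
crops = {"left_fisheye.mp4": "crop=1920:1920:0:0",
         "right_fisheye.mp4": "crop=1920:1920:1920:0"}

for out_name, crop_filter in crops.items():
    subprocess.run([
        "ffmpeg", "-i", SOURCE,
        "-vf", crop_filter,
        "-c:v", "libx264", "-crf", "18",  # near-lossless intermediate
        "-an",                            # audio can come from the original master later
        out_name,
    ], check=True)
```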

Clearly, there are a variety of ways to get your raw footage stitched for editing. The results vary greatly between the different programs, so I made a video to compare the different stitching options on the same source clip. My first attempt was with a locked-off shot in the park, but that shot was too simple to show the differences, and it didn’t allow for comparison of the stabilization options available in some of the programs. So I shot some footage from a moving vehicle to see how well the motion and shake would be handled by the various programs. The result is now available on YouTube, fading between each of the five labeled options over the course of the minute-long clip. I would categorize this as testing how well the various applications can handle non-ideal source footage, which happens a lot in the real world.

I didn’t feel that any of the stitching options were perfect solutions, so hopefully we will see further developments in that regard in the future. You may want to explore them yourself to determine which one best meets your needs. Once your footage is correctly mapped to equirectangular projection, ideally in a 2:1 aspect ratio, and the projects are rendered and exported (I recommend Cineform or DNxHR), you are ready to edit your processed footage.

Launch Premiere Pro and import your footage as you normally would. If you are using the Skybox Player plugin, turn on Adobe Transmit with the HMD selected as the only dedicated output (in the Skybox VR configuration window, I recommend setting the hot corner to top left, to avoid accidentally hitting the start menu, desktop hide or application close buttons during preview). In the playback monitor, you may want to right-click the wrench icon and select Enable VR to preview a pan-able perspective of the video, instead of the entire distorted equirectangular source frame. You can cut, trim and stack your footage as usual, and apply color corrections and other non-geometry-based effects.

In version 11.1.2 of Premiere, there is basically one VR effect (VR Projection), which allows you to rotate the video sphere along all three axes. If you have the Skybox Suite for Premiere installed, you will have some extra VR effects. The Skybox Rotate Sphere effect is basically the same. You can add titles and graphics and use the Skybox Project 2D effect to project them into the sphere where you want. Skybox also includes other effects for blurring and sharpening the spherical video, as well as denoise and glow. If you have Kolor AVP installed, that adds two new effects as well. GoPro VR Horizon is similar to the other sphere-rotation effects, but allows you to drag the image around in the monitor window to rotate it, instead of manually adjusting the axis values, so it is faster and more intuitive. The GoPro VR Reframe effect is applied to equirectangular footage to extract a flat perspective from within it. The field of view can be adjusted and rotated around all three axes.

Most of the effects are pretty easy to figure out, but Skybox Project 2D may require some experimentation to get the desired results. Avoid placing objects near the edges of the 2D frame that you apply it to, to keep them facing toward the viewer. The rotate projection values control where the object is placed relative to the viewer. The rotate source values rotate the object at the location it is projected to. Personally, I think they should be placed in the reverse order in the effects panel.

Encoding the final output is not difficult: just send it to Adobe Media Encoder using either the H.264 or H.265 format. Make sure the “Video is VR” box is checked at the bottom of the Video Settings pane, and in this case that the frame layout is set to monoscopic. There are presets for some of the common frame sizes, but I would recommend lowering the bitrates, at least if you are using Gear 360 footage. Also, if you have ambisonic audio, set the channels to 4.0 in the Audio pane.
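
If you end up encoding outside of Media Encoder, a rough command-line equivalent looks something like the sketch below; the bitrate is an arbitrary example, and note that this encode alone does not flag the file as VR, so the 360 metadata still has to be added separately (see the next paragraph).

```python
import subprocess

# Filenames are placeholders; the bitrate is an arbitrary example for 360 footage.
subprocess.run([
    "ffmpeg", "-i", "edit_master.mov",
    "-c:v", "libx264", "-b:v", "30M",  # H.264; swap in libx265 for H.265
    "-pix_fmt", "yuv420p",             # broad player compatibility
    "-c:a", "aac", "-b:a", "384k",
    "360_upload.mp4",
], check=True)
# The file still needs spherical/360 metadata injected before upload,
# unless the platform or your export tool adds it for you.
```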

Once the video is encoded, you can upload it directly to Facebook. If you want to upload to YouTube, exports from AME with the VR box checked should work fine, but for videos from other sources you will need to modify the metadata with this app here. Once your video is uploaded to YouTube, you can embed it on any webpage that supports 2D web videos. And YouTube videos can be streamed directly to your Rift headset using the free DeoVR video player.

That should give you a 360-video production workflow from start to finish. I will post more updated articles as new software tools are developed, and as I get new 360 cameras with which to test and experiment.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Michael Kammes’ 5 Things – Video editing software

By Randi Altman

Technologist Michael Kammes is back with a new episode of 5 Things, which focuses on simplifying film, TV and media technology. The web series answers, according to Kammes, the “five burning tech questions” people might have about technologies and workflows in the media creation space. This episode tackles professional video editing software being used (or not used) in Hollywood.

Why is now the time to address this segment of the industry? “The market for NLEs is now more crowded than it has been in over 20 years,” explains Kammes. “Not since the dawn of modern NLEs have there been this many questions over what tools should be used. In addition, the massive price drop of NLEs, coupled with the pricing shift (monthly/yearly, as opposed to outright) has created more confusion in the market.”

In his video, Kammes focuses on Avid Media Composer, Adobe Premiere, Apple Final Cut Pro, Lightworks, Blackmagic Resolve and others.

Considering its history and use on some major motion pictures (such as The Wolf of Wall Street), why hasn’t Lightworks made more strides in the Hollywood community? “I think Lightworks has had massive product development and marketing issues,” shares Kammes. “I rarely see the product pushed online, at user groups or in forums. EditShare, the parent company of Lightworks, also deals heavily in storage, so one can only assume the marketing dollars are being spent on larger-ticket items like professional and enterprise storage over a desktop application.”

What about Resolve, considering its updated NLE tools and the acquisition of audio company Fairlight? Should we expect to see more Resolve being used as a traditional NLE? “I think in Hollywood, adoption will be very, very slow for creative editorial, and unless something drastic happens to Avid and Adobe, Resolve will remain in the minority. For dailies, transcodes or grading, I can see it only getting bigger, but I don’t see larger facilities adopting Resolve for creative editorial. Outside of Hollywood, I see it gaining more traction. Those outlets have more flexibility to pivot and try different tools without the locked-in TV and feature film machine in Hollywood.”

Check it out:

Jimmy Helm upped to editor at The Colonie

The Colonie, the Chicago-based editorial, visual effects and motion graphics shop, has promoted Jimmy Helm to editor. Helm has honed his craft over the past seven years, working with The Colonie’s senior editors on a wide range of projects. Most recently, he has been managing ongoing social media work with Facebook and conceptualizing and editing short format ads. Some clients he has collaborated with include Lyft, Dos Equis, Capital One, Heineken and Microsoft. He works on both Avid Media Composer and Adobe Premiere.

A filmmaking major at Columbia College Chicago, Helm applied for an internship at The Colonie in 2010. Six months later he was offered a full-time position as an assistant editor, working alongside veteran cutter Tom Pastorelle on commercials for McDonald’s, Kellogg’s, Quaker and Wrangler. During this time, Helm edited numerous projects on his own, including broadcast commercials for Centrum and Kay Jewelers.

“Tom is incredible to work with,” says Helm. “Not only is he a great editor but a great person. He shared his editorial methods and taught me the importance of bringing your instinctual creativity to the process. I feel fortunate to have had him as a mentor.”

In 2014, Helm was promoted to senior assistant editor and continued to hone his editing skills while taking on a leadership role.

“My passion for visual storytelling began when I was young,” says Helm. “Growing up in Memphis, I spent a great deal of time watching classic films by great directors. I realize now that I was doing more than watching — I was studying their techniques and, particularly, their editing styles. When you’re editing a scene, there’s something addictive about the rhythm you create and the drama you build. I love that I get to do it every day.”

Helm joins The Colonie’s editorial team, which is composed of Joe Clear, Keith Kristinat, Pastorelle and Brian Salazar, along with editors and partners Bob Ackerman and Brian Sepanik.

Quick Chat: Lucky Post’s Sai Selvarajan on editing Don’t Fear The Fin

Costa, maker of polarized sunglasses, has teamed up with Ocearch, a group of explorers and scientists dedicated to generating data on the movement, biology and health of sharks, in order to educate people on how saving the sharks will save our oceans. In a 2.5-minute video, three shark attack survivors — Mike Coots, Paul de Gelder and Lisa Mondy — explain why they are now on a quest to help save the very thing that attacked them, took their limbs and almost their lives.

The video, edited by Lucky Post’s Sai Selvarajan for agency McGarrah Jessee and Rabbit Foot Studios, tells the viewer that the number of sharks killed by long-lining, illegal fishing and the shark finning trade exceeds the number of human shark attacks by millions. And as go the sharks, so go our oceans.

For editor Selvarajan, the goal was to strike a balance between the intimate stories and the global message, drawing on striking footage filmed in Hawaii’s surf mecca, the North Shore. “Stories inside stories,” describes Selvarajan, who reveres the subjects’ dedication to saving the misunderstood creatures despite their life-changing encounters.

We spoke with the Texas-based editor to find out more about this project.

How early on did you become involved in the project?
I got a call when the project was greenlit, and Jeff Bednarz, the creative head at Rabbit Foot, walked me through the concept. He wanted to showcase the whole teamwork aspect of Costa, Ocearch and the shark survivors all coming together and using their skills to save sharks.

Did working on Don’t Fear The Fin change your perception of sharks?
Yes, it did. Before working on the project I had no idea that sharks were in trouble. After working on Don’t Fear the Fin, I’m totally for shark conservation, and I admire anyone who is out there fighting for the species.

What equipment did you use for the edit?
Adobe Premiere on a Mac tower.

What were the biggest creative challenges?
The biggest creative challenge was how to tell the shark survivors’ stories and then the shark’s story, and then Ocearch/Costa’s mission story. It was stories inside stories, which made it very dense and challenging to cut into a three-minute story. I had to do justice to all the stories and weave them into each other. The footage was gorgeous, but there had to be a sense of gravity to it all, so I used pacing and score to give us that gravity.

What do you think of the fact that sharks are not shown much in the film?
We made a conscious effort to show sharks and people in the same shot. The biggest misconception is that sharks are these big man-eating monsters. Seeing people diving with the sharks tied them to our story and the mission of the project.

What’s your biggest fear, and how would/can you overcome it?
Snakes are my biggest fear. I’m not sure how I’ll ever overcome it. I respect snakes and keep a safe distance. Living in Texas, I’ve read up on which ones are poisonous, so I know which ones to stay away from. But if I came across a rat snake in the wild, I’m sure I’d jump 20 feet in the air.

Check out the full video below…

Adobe acquires Mettle’s SkyBox tools for 360/VR editing, VFX

Adobe has acquired all SkyBox technology from Mettle, a developer of 360-degree and virtual reality software. As more media and entertainment companies embrace 360/VR, there is a need for seamless, end-to-end workflows for this new and immersive medium.

The SkyBox toolset is designed exclusively for post production in Adobe Premiere Pro CC and Adobe After Effects CC and complements Adobe Creative Cloud’s existing 360/VR cinematic production technology. Adobe will integrate SkyBox plugin functionality natively into future releases of Premiere Pro and After Effects.

To further strengthen Adobe’s leadership in 360-degree and virtual reality, Mettle co-founder Chris Bobotis will join Adobe, bringing more than 25 years of production experience to his new role.

“We believe making virtual reality content should be as easy as possible for creators. The acquisition of Mettle SkyBox technology allows us to deliver a more highly integrated VR editing and effects experience to the film and video community,” says Steven Warner, VP of digital video and audio at Adobe. “Editing in 360/VR requires specialized technology, and as such, this is a critical area of investment for Adobe, and we’re thrilled Chris Bobotis has joined us to help lead the charge forward.”

“Our relationship started with Adobe in 2010 when we created FreeForm for After Effects, and has been evolving ever since. This is the next big step in our partnership,” says Bobotis, now director, professional video at Adobe. “I’ve always believed in developing software for artists, by artists, and I’m looking forward to bringing new technology and integration that will empower creators with the digital tools they need to bring their creative vision to life.”

Introduced in April 2015, SkyBox was the first plugin to leverage Mettle’s proprietary 3DNAE technology, and its success quickly led to additional development of 360/VR plugins for Premiere Pro and After Effects.

Today, Mettle’s plugins have been adopted by companies such as The New York Times, CNN, HBO, Google, YouTube, Discovery VR, DreamWorks TV, National Geographic, Washington Post, Apple and Facebook, as well as independent filmmakers and YouTubers.

Comprimato plug-in manages Ultra HD, VR files within Premiere

Comprimato, makers of GPU-accelerated storage compression and video transcoding solutions, has launched Comprimato UltraPix. This video plug-in offers proxy-free, auto-setup workflows for Ultra HD, VR and more on hardware running Adobe Premiere Pro CC.

The challenge for post facilities finishing in 4K or 8K Ultra HD, or working on immersive 360 VR projects, is managing the massive amount of data. The files are large, requiring a lot of expensive storage, which can be slow and cumbersome to load, and achieving realtime editing performance is difficult.

Comprimato UltraPix addresses this by building on JPEG2000, a compression format that offers high image quality (including a mathematically lossless mode) and generates smaller versions of each frame as an inherent part of the compression process. Comprimato UltraPix then delivers the file at a size that the user’s hardware can accommodate.
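
That resolution scalability comes from JPEG2000’s wavelet decomposition: each decomposition level stores a version of the frame at half the width and height of the level above it, so a decoder can stop early and pull a smaller picture without touching the rest of the data. The sketch below illustrates how those levels map out for a 4K frame; the level count is just an illustrative assumption.

```python
def jpeg2000_resolution_levels(width, height, levels):
    """List the frame sizes available from a JPEG2000 wavelet pyramid.

    Each decomposition level halves both dimensions (rounding up), which is
    what lets a decoder pull a smaller proxy straight from the full-quality file.
    """
    sizes = [(width, height)]
    for _ in range(levels):
        width = max(1, (width + 1) // 2)
        height = max(1, (height + 1) // 2)
        sizes.append((width, height))
    return sizes

for w, h in jpeg2000_resolution_levels(4096, 2160, levels=4):
    print(f"{w} x {h}")
# 4096 x 2160, 2048 x 1080, 1024 x 540, 512 x 270, 256 x 135
```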

Once Comprimato UltraPix is loaded on any hardware, it configures itself with auto-setup, requiring no specialist knowledge from the editor who continues to work in Premiere Pro CC exactly as normal. Any workflow can be boosted by Comprimato UltraPix, and the larger the files the greater the benefit.

Comprimato UltraPix is a multi-platform video processing software for instant video resolution in realtime. It is a lightweight, downloadable video plug-in for OS X, Windows and Linux systems. Editors can switch between 4K, 8K, full HD, HD or lower resolutions without proxy-file rendering or transcoding.

“JPEG2000 is an open standard, recognized universally, and post production professionals will already be familiar with it as it is the image standard in DCP digital cinema files,” says Comprimato founder/CEO Jiří Matela. “What we have achieved is a unique implementation of JPEG2000 encoding and decoding in software, using the power of the CPU or GPU, which means we can embed it in realtime editing tools like Adobe Premiere Pro CC. It solves a real issue, simply and effectively.”

“Editors and post professionals need tools that integrate ‘under the hood’ so they can focus on content creation and not technology,” says Sue Skidmore, partner relations for Adobe. “Comprimato adds a great option for Adobe Premiere Pro users who need to work with high-resolution video files, including 360 VR material.”

Comprimato UltraPix plug-ins are currently available for Adobe Premiere Pro CC and Foundry Nuke and will be available on other post and VFX tools soon. You can download a free 30-day trial or buy Comprimato UltraPix for $99 a year.

Frame.io 2.0 offers 100 new features, improvements for collaboration

Frame.io, developers of the video review and collaboration platform for content creators, has unveiled Frame.io 2.0, an upgrade offering over 100 new features and improvements. This new version features new client Review Pages, which expand content review and sharing. In addition, the new release offers deeper workflow integration with Final Cut Pro X and Avid Media Composer, plus a completely re-engineered player.

“Frame.io 2 is based on everything we’ve learned from our customers over the past two years and includes our most-requested features,” says Emery Wells, CEO of Frame.io.

Just as internal teams can collaborate using Frame.io’s comprehensive annotation and feedback tools, clients can now provide detailed feedback on projects with Review Pages, which is designed to make the sharing experience simple, with no log-in required.

Review Pages give clients the same commenting ability as collaborators, without exposing them to the full Frame.io interface. Settings are highly configurable to meet specific customer needs, including workflow controls (approvals), security (password protection, setting expiration date) and communication (including a personalized message for the client).

The Review Pages workflow simplifies the exchange of ideas, consolidating feedback in a succinct manner. For those using Adobe Premiere or After Effects, that feedback flows directly into the timeline, where you can immediately take action and upload a new version. Client Review Pages are also now available in the Frame.io iOS app, allowing collaboration via iPhones and iPads.

Exporting and importing comments and annotations into Final Cut Pro X and Media Composer has gotten easier with the upgraded, free desktop companion app, which allows users to open downloaded comment files and bring them into the editor as markers. There is now no need to toggle between Frame.io and the NLE.

Users can also now copy and paste comments from one version to another. The information is exportable in a variety of formats, whether that’s a PDF containing a thumbnail, timecode, comment, annotation and completion status that can be shared and reviewed with the team, or a .csv or .xml file containing tons of additional data for further processing.

Also new to Frame.io 2.0 is a SMPTE-compliant source timecode display that works with both non-drop and drop-frame timecode. Users can now download proxies straight from Frame.io.

The Frame.io 2.0 player page now offers better navigation, efficiency and accountability. New “comment heads” allow artists to visually see who left a comment and where so they can quickly find and prioritize feedback on any given project. Users can also preview the next comment, saving them time when one comment affects another.

The new looping feature, targeting motion and VFX artists, lets users watch the same short clip on loop. You can even select a range within a clip to really dive in deep. Frame.io 2.0’s asset slider makes it easy to navigate between assets from the player page.

The new Frame.io 2.0 dashboard has been redesigned for speed and simplicity. Users can manage collaborators for any given project from the new collaborator panel, where adding an entire team to a project takes one click. A simple search in the project search bar makes it easy to bring up a project. The breadcrumb navigation bar tracks every move deeper into a sub-sub-subfolder, helping artists stay oriented when getting lost in their work. The new list view option with mini-scrub gives users a bird’s-eye view of everything happening in Frame.io 2.0.

Copying and moving assets between projects takes up no additional storage, even when users make thousands of copies of a clip or project. Frame.io 2.0 also now offers the ability to publish direct to Vimeo, with full control over publishing options, so pros can create the description and set privacy permissions, right then and there.

Review: Nvidia’s new Pascal-based Quadro cards

By Mike McCarthy

Nvidia has announced a number of new professional graphic cards, filling out their entire Quadro line-up with models based on their newest Pascal architecture. At the absolute top end, there is the new Quadro GP100, which is a PCIe card implementation of their supercomputer chip. It has similar 32-bit (graphics) processing power to the existing Quadro P6000, but adds 16-bit (AI) and 64-bit (simulation). It is intended to combine compute and visualization capabilities into a single solution. It has 16GB of new HBM2 (High Bandwidth Memory) and two cards can be paired together with NVLink at 80GB/sec to share a total of 32GB between them.

This powerhouse is followed by the existing P6000 and P5000 announced last July. The next addition to the line-up is the single-slot VR-ready Quadro P4000. With 1,792 CUDA cores running at 1200MHz, it should outperform a previous-generation M5000 for less than half the price. It is similar to its predecessor the M4000 in having 8GB RAM, four DisplayPort connectors, and running on a single six-pin power connector. The new P2000 follows next with 1024 cores at 1076MHz and 5GB of RAM, giving it similar performance to the K5000, which is nothing to scoff at. The P1000, P600 and P400 are all low-profile cards with Mini-DisplayPort connectors.

All of these cards run on PCIe Gen3 x16 and use DisplayPort 1.4, which adds support for HDR and DSC. They all support 4Kp60 output, with the higher-end cards allowing 5K and 4Kp120 displays. Nvidia continues to push forward on high-resolution displays, allowing up to 32 synchronized displays to be connected to a single system, provided you have enough slots for eight Quadro P4000 cards and two Quadro Sync II boards.

Nvidia also announced a number of Pascal-based mobile Quadro GPUs last month, with the mobile P4000 having roughly comparable specifications to the desktop version. But you can read the paper specs for the new cards elsewhere on the Internet. More importantly, I have had the opportunity to test out some of these new cards over the last few weeks, to get a feel for how they operate in the real world.

DisplayPorts

Testing
I was able to run tests and benchmarks with the P6000, P4000 and P2000 against my current M6000 for comparison. All of these tests were done on a top-end Dell 7910 workstation, with a variety of display outputs, primarily using Adobe Premiere Pro, since I am a video editor after all.

I ran a full battery of benchmark tests on each of the cards using Premiere Pro 2017. I measured both playback performance and encoding speed, monitoring CPU and GPU use, as well as power usage throughout the tests. I had HD, 4K, and 6K source assets to pull from, and tested monitoring with an HD projector, a 4K LCD and a 6K array of TVs. I had assets that were RAW R3D files, compressed MOVs and DPX sequences. I wanted to see how each of the cards would perform at various levels of production quality and measure the differences between them to help editors and visual artists determine which option would best meet the needs of their individual workflow.

I started with the intuitive expectation that the P2000 would be sufficient for most HD work, but that a P4000 would be required to effectively handle 4K. I also assumed that a top-end card would be required to playback 6K files and split the image between my three Barco Escape formatted displays. And I was totally wrong.

Except when using the higher-end options within Premiere’s Lumetri-based color corrector, all of the cards were fully capable of every editing task I threw at them. To be fair, the P6000 usually renders out files about 30 percent faster than the P2000, but that is a minimal difference compared to the difference in cost. Even the P2000 was able to play back my uncompressed 6K assets onto my array of Barco Escape displays without issue. It was only when I started making heavy color changes in Lumetri that I began to observe any performance differences at all.

Lumetri

Color correction is an inherently parallel, graphics-related computing task, so this is where GPU processing really shines. Premiere’s Lumetri color tools are based on SpeedGrade’s original CUDA processing engine, and it can really harness the power of the higher-end cards. The P2000 can make basic corrections to 6K footage, but it is possible to max out the P6000 with HD footage if I adjust enough different parameters. Fortunately, most people aren’t looking for more stylized footage than the 300 had, so in this case, my original assumptions seem to be accurate. The P2000 can handle reasonable corrections to HD footage, the P4000 is probably a good choice for VR and 4K footage, while the P6000 is the right tool for the job if you plan to do a lot of heavy color tweaking or are working on massive frame sizes.

The other way I expected to be able to measure a difference between the cards would be in playback while rendering in Adobe Media Encoder. By default, Media Encoder pauses exports during timeline playback, but this behavior can be disabled by reopening Premiere after queuing your encode. Even with careful planning to avoid reading from the same disks as the encoder was accessing from, I was unable to get significantly better playback performance from the P6000 compared to the P2000. This says more about the software than it says about the cards.

P6000

The largest difference I was able to consistently measure across the board was power usage, with each card averaging about 30 watts more as I stepped up from the P2000 to the P4000 to the P6000. But they are all far more efficient than the previous M6000, which frequently sucked up an extra 100 watts in the same tests. While “watts” may not be a benchmark most editors worry too much about, among other things it does equate to money for electricity. Lower wattage also means less cooling is needed, which results in quieter systems that can be kept closer to the editor without distracting from the creative process or interfering with audio editing. It also allows these new cards to be installed in smaller systems with smaller power supplies, using up fewer power connectors. My HP Z420 workstation only has one six-pin PCIe power plug, so the P4000 is the ideal GPU solution for that system.
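
To put those wattage differences in rough dollar terms, here is a quick back-of-the-envelope calculation; the hours of use and the electricity rate are assumptions, not measured values.

```python
def annual_energy_cost(extra_watts, hours_per_day=8, days_per_year=250,
                       dollars_per_kwh=0.15):
    """Yearly electricity cost of drawing `extra_watts` more during work hours."""
    kwh = extra_watts * hours_per_day * days_per_year / 1000
    return kwh * dollars_per_kwh

# ~30W per step up the Quadro line, ~100W saved versus the old M6000
print(f"${annual_energy_cost(30):.2f}")   # $9.00 per year
print(f"${annual_energy_cost(100):.2f}")  # $30.00 per year
```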

Summing Up
It appears that we have once again reached a point where hardware processing capabilities have surpassed the software’s capacity to use them, at least within Premiere Pro. This leads to the cards performing relatively similarly to one another in most of my tests, but true 3D applications might reveal much greater differences in their performance. Further optimization of the CUDA implementation in Premiere Pro might also lead to better use of these higher-end GPUs in the future.


Mike McCarthy is an online editor and workflow consultant with 10 years of experience on feature films and commercials. He has been on the forefront of pioneering new solutions for tapeless workflows, DSLR filmmaking and now multiscreen and surround video experiences. If you want to see more specific details about performance numbers and benchmark tests for these Nvidia cards, check out techwithmikefirst.com.