Tag Archives: Adobe Premiere

Review: Blackmagic’s eGPU and Intel i9 MacBook Pro 2018

By Brady Betzel

Blackmagic’s eGPU is worth the $699 price tag. You can buy it from Apple’s website, where it is being sold exclusively for the time being. Wait? What? You wanted some actual evidence as to why you should buy the BMD eGPU?

Ok, here you go…

MacBook Pro With Intel i9
First, I want to go over the latest Apple MacBook Pro, which was released (or really just updated) this past July. With some controversial fanfare, the 2018 MacBook Pro can now be purchased with the blazingly fast Intel i9, 2.6GHz (Turbo Boost up to 4.3GHz) six-core processor. In addition, you can add up to 32GB of 2400MHz DDR4 onboard memory, a Radeon Pro 560X GPU with 4GB of GDDR5 memory and even a 4TB SSD storage drive. It has four Thunderbolt 3 ports and, for some reason, a headphone jack. Apple is also touting its improved butterfly keyboard switches as well as its True Tone display technology. If you want to read more about that glossy info, head over to Apple’s site.

The 2018 MacBook Pro is a beast. I am a big advocate for the ability to upgrade and repair computers, so Apple’s venture to create what is essentially a leased computer ecosystem that needs to be upgraded every year or two usually puts a bad taste in my mouth.

However, the latest MacBook Pros are really amazing… and really expensive. The top-of-the-line MacBook Pro I was provided with for this review would cost $6,699! Yikes! If I were buying one myself, I would purchase everything but the $2,000 upgrade from the 2TB SSD drive to the 4TB, and it would still cost $4,699. But I suppose that’s not a terrible price for such an intense processor (albeit not technically workstation-class).

Overall, the MacBook Pro is a workhorse that I put through its video editing and color correcting paces using three of the top four professional nonlinear editors: Adobe Premiere, Apple FCP X and Blackmagic’s Resolve 15 (the official release). More on those results in a bit, but for now, I’ll just say a few things: I love how light and thin it is. I don’t like how hot it can get. I love how fast it charges. I don’t like how fast it loses charge when doing things like transcoding or exporting clips. A 15-minute export can drain the battery by over 40%, while playing Spotify for eight hours will hardly drain it at all (maybe 20%).

Blackmagic’s eGPU with Radeon Pro 580 GPU
One of the more surprising releases from Blackmagic has been this eGPU offering. I would never have guessed they would have gone into this area, and certainly would never have guessed they would have gone with a Radeon card, but here we are.

Once you step back from the initial “Why in the hell wouldn’t they let it be user-replaceable and also not brand dependent” shock, it actually makes sense. If you are a macOS user, you probably can already do a lot in terms of external GPU power. When you buy a new iMac, iMac Pro or MacBook Pro, you are expecting it to work, full stop.

However, if you are a DIT or colorist who is more mobile than that sweet million-dollar color bay you dream of, you need more. This is where the BMD eGPU falls nicely into place. You plug it in and instantly see it populate in the menu bar. In addition, the eGPU acts as a dock with four USB 3 ports, two Thunderbolt 3 ports and an HDMI port. The MacBook Pro will charge off of the eGPU as well, which eliminates the need for your charger at your docking point.

On the go, the most decked out MacBook Pro can hold its own. So it’s no surprise that FCP X runs remarkably fast… faster than everything else. However, you have to be invested in an FCP X workflow and paradigm — and while I’m not there yet, maybe the future will prove me wrong. Recently, I saw someone on Twitter who had developed an online collaboration workflow for it, so people are clearly excited about it.

Anyway, many of the nonlinear editors I work with can also handle playback on the MacBook Pro, even with 4K Red, ARRI and, especially, ProRes footage. Keep in mind though, with the 2K, 4K, or whatever K footage, you will need to set the debayer to around “half good” if you want a fluid timeline. Even with the 4GB Radeon 560X I couldn’t quite play realtime 4K footage without some sort of compromise in quality.

But with the Blackmagic eGPU, I significantly improved my playback capabilities — and not just in Resolve 15. I did try plugging the eGPU into a Windows 10 PC I was reviewing at the same time, and it was recognized, but I couldn’t get all the drivers sorted out. So it’s possible it will work in Windows, but I couldn’t get it there.

Before I get to the Resolve testing, I did some benchmarking. First I ran Cinebench R15 without the eGPU attached and got the following scores: OpenGL 99.21fps, reference match 99.5%, CPU 947cb, CPU (single core) 190cb and MP ratio of 5.00x. With the eGPU attached: OpenGL 60.26fps, reference match 99.5%, CPU 1057cb, CPU (single core) 186cb and MP ratio of 5.69x. Then I ran Unigine’s Valley Benchmark 1.0 without the eGPU, which got 21.3fps and a score of 890 (minimum 12.4fps/maximum 36.2fps). With the eGPU it got 25.6fps and a score of 1073 (minimum 19.2fps/maximum 37.1fps).

Resolve 15 Test
I based all of my tests on a similar (although not exact for the different editing applications) 10-minute timeline, 23.98fps, 3840×2160, 4K and 8K RAW Red footage (R3D files) and Alexa (.ari and ProRes444XQ) UHD footage, all with edit page resizes, simple color correction and intermittent sharpening and temporal noise reduction (three frames, better, medium, 10, 10 and 5).

Playback: Without the eGPU I couldn’t play 23.98fps, 4K Red R3D without being set to half-res. With the eGPU I could play back at full-res in realtime (this is what I was talking about in sentence one of this review). The ARRI footage would play at full res, but only between 1fps and 7fps. The 8K Red footage would play in realtime when set to quarter-res.

One of the most reassuring things I noticed when watching my Activity Monitor’s GPU history readout was that Resolve uses both GPUs at once. Not all of the apps did.

Resolve 15 Export Tests
In the following tests, I disabled all cache or optimized media options, including Performance Mode.

Test 1: H.264 (23.98fps, UHD, auto quality, no frame reordering, force highest-quality debayer/resizes, encoding profile Main)
a. Without eGPU (Radeon Pro 560x): 22 minutes, 16 seconds
b. With BMD eGPU (Radeon Pro 580): 16 minutes and 21 seconds

Test 2: H.265 (10-bit, 23.98/UHD, auto quality, no frame reordering, force highest-quality debayer/resizes)
a. Without eGPU: stopped rendering after 10 frames
b. With BMD eGPU: same result

Test 3: ProRes4444 (23.98/UHD)
a. Without eGPU: 27 minutes and 29 seconds
b. With BMD eGPU: 22 minutes and 57 seconds

Test 4: Edit page cache (Smart User Cache enabled at ProResHQ)
a. Without eGPU: 17 minutes and 28 seconds
b. With BMD eGPU: 12 minutes and 22 seconds
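To put those gains in perspective, here is a minimal Python sketch (not part of the testing itself, just arithmetic on the times above) that converts the export results into percentage time savings:

# Export times from the Resolve tests above, as (without eGPU, with eGPU) in seconds.
tests = {
    "H.264 UHD":        (22 * 60 + 16, 16 * 60 + 21),
    "ProRes4444 UHD":   (27 * 60 + 29, 22 * 60 + 57),
    "ProResHQ cache":   (17 * 60 + 28, 12 * 60 + 22),
}

for name, (internal_only, with_egpu) in tests.items():
    saved = internal_only - with_egpu
    pct = 100 * saved / internal_only
    print(f"{name}: {saved // 60}m {saved % 60}s faster ({pct:.0f}% less time)")

Roughly speaking, that works out to somewhere between 15% and 30% less export time with the eGPU attached.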

Adobe Premiere Pro v.12.1.2
I performed similar testing in Adobe Premiere Pro using a 10-minute timeline at 23.98fps, 3840×2160, 4K and 8K RAW Red footage (R3D files) and Alexa (DNxHR SQ 8-bit) UHD footage, all with Effect Control tab resizes and simple Lumetri color correction, including sharpening and intermittent denoise (16) under the HSL Secondary tab in Lumetri applied to shadows only.

In order to ensure your eGPU will be used inside of Adobe Premiere, you must use Metal as your renderer. To enable it, go to File > Project Settings > General and change the renderer to Mercury Playback Engine GPU Acceleration (Metal); OpenCL will only use the internal GPU for processing.

Premiere did not handle the high-resolution media as aptly as Resolve had, but it did help a little. However, I really wanted to test the export power with the added eGPU horsepower. I almost always send my Premiere sequences to Adobe Media Encoder to do the processing, so that is where my exports were processed.

Adobe Media Encoder
Test 1: H.264 (no renders used during export; 23.98/UHD, 80Mb/s; software encoding doesn’t allow for profile setup)
a. OpenCL with no eGPU: about 140 minutes (sorry, I had to chase the kids around and couldn’t watch this snail crawl)
b. Metal with no eGPU: about 137 minutes (chased the kids around again and couldn’t watch this snail crawl, either)
c. OpenCL with eGPU: won’t work; Metal only
d. Metal with eGPU: one hour

Test 2: H.265
a. Without eGPU: failed (interesting result)
b. With eGPU: 40 minutes

Test 3: ProRes4444
a. Without eGPU: three hours
b. With eGPU: one hour and 14 minutes

FCP X
FCP X is an interesting editing app, and it is blazing fast at handling ProRes media. As I mentioned earlier, it hasn’t been in my world too much, but that isn’t because I don’t like it. It’s because professionally I haven’t run into it. I love the idea of roles, and would really love to see that play out in other NLEs. However, my results speak for themselves.

One caveat to using the eGPU in FCP X is that you must force it to work inside of the NLE. At first, I couldn’t get it to work. The Activity Monitor would show no activity on the eGPU. However, thanks to a Twitter post, James Wells (@9voltDC) sent me to this, which allows you to force FCP X to use the eGPU. It took a few tries but I did get it to work, and funnily enough I saw times when all three GPUs were being used inside of FCP X, which was pretty good to see. This is one of those use-at-your-own-risk things, but it worked for me and is pretty slick… if you are OK with using Terminal commands. This also allows you to force the eGPU onto other apps like Cinebench.

Anyway, here are my results with the BMD eGPU exporting from FCP X:

Test 1: H.264
a. Without eGPU: eight minutes
b. With eGPU: eight minutes and 30 seconds

Test 2: H.265: Not an option

Test 3: ProRes4444
a. Without eGPU: nine minutes
b. With eGPU: six minutes and 30 seconds

Summing Up
In the end, the Blackmagic eGPU with Radeon Pro 580 GPU is a must buy if you use your MacBook Pro with Resolve 15. There are other options out there though, like the Razer Core v2 or the Akitio Node Pro.

From this review I can tell you that the Blackmagic eGPU is silent even when processing 8K Red RAW footage (even when the MacBook Pro fans are going at full speed), and it just works. Plug it in and you are running, no settings, no drivers, no cards to install… it just runs. And sometimes when I have three little boys running around my house, I just want that peace of mind and I want things to just work like the Blackmagic eGPU.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Adobe updates Creative Cloud

By Brady Betzel

You know it’s almost fall when pumpkin spice lattes are back and Adobe announces its annual updates. At this year’s IBC, Adobe had a variety of updates to its Creative Cloud line of apps. From more info on its new editing platform Project Rush to the addition of Characterizer to Character Animator — there are a lot of updates, so I’m going to focus on a select few that I think really stand out.

Project Rush

I use Adobe Premiere quite a lot these days; it’s quick and relatively easy to use and will work with pretty much every codec in the universe. In addition, the Dynamic Link between Adobe Premiere Pro and Adobe After Effects is an indispensable feature in my world.

With the 2018 fall updates, Adobe Premiere will be closer to a color tool like Blackmagic’s Resolve with the addition of new hue saturation curves in the Lumetri Color toolset. In Resolve these are some of the most important aspects of the color corrector, and I think that will be the same for Premiere. From Hue vs. Sat, which can help isolate a specific color and desaturate it, to Hue vs. Luma, which can help add or subtract brightness values from specific hues and hue ranges — these new color correcting tools further Premiere’s venture into true professional color correction. These new curves will also be available inside of After Effects.

After Effects features many updates, but my favorites are the ability to access depth matte data of 3D elements and the addition of the new JavaScript engine for building expressions.

There is one update that runs across both Premiere and After Effects that seems to be a sleeper update. The improvements to motion graphics templates, if implemented correctly, could be a time and creativity saver for both artists and editors.

AI
Adobe, like many other companies, seems to be diving heavily into the “AI” pool, which is amazing, but… with great power comes great responsibility. I realize others might not feel this way, but sometimes I don’t want all the work done for me. With new features like Auto Lip Sync and Color Match, editors and creators of all kinds should not lose the forest for the trees. I’m not telling people to ignore these features, but asking that they put a few minutes into discovering how the color of a shot was matched, so they can fix it if something goes wrong. You don’t want to be the editor who says, “Premiere did it” and not have a great solution when something does go wrong.

What Else?
I would love to see Adobe take a stab at digging up the bones of SpeedGrade and integrating that into the Premiere Pro world as a new tab. Call it Lumetri Grade, or whatever? A page with a more traditional colorist layout and clip organization would go a long way.

In the end, there are plenty of other updates to Adobe’s 2018 Creative Cloud apps, and you can read their blog to find out about other updates.

LACPUG hosting FCP and Premiere creator Randy Ubillos

The Los Angeles Creative Pro User Group (LACPUG) is celebrating its 18th anniversary on June 27 by presenting the official debut of Bradley Olsen’s Off the Tracks, a documentary about Final Cut Pro X. Also on the night’s agenda is a trip down memory lane with Randy Ubillos, the creator of Final Cut Pro, Adobe Premiere, Aperture, iMovie 08 and Final Cut Pro X.

The event will take place at the Gallery Theater in Hollywood. Start time is 6:45pm. Scheduled to be in the audience and perhaps on stage, depending on availability, will be members of the original FCP team: Michael Wohl, Tim Serda and Paul Saccone. Also on hand will be Ramy Katrib of DigitalFilm Tree and editor and digital consultant Dan Fort. “Many other invites to the ‘superstars’ of the digital revolution and FCP have been sent out,” says Michael Horton, founder and head of LACPUG.

The night will also include food and drinks, time for questions and the group’s “World Famous Raffle.”
Tickets are on sale now on the LACPUG website for $10 each, plus a ticket fee of $2.24.

The Los Angeles Creative Pro User Group, formerly the LA Final Cut Pro User Group, was established in June of 2000 and hosts a membership of over 6,000 worldwide.

Review: The PNY PrevailPro mobile workstation

By Mike McCarthy

PNY, a company best known in the media and entertainment industry as the manufacturer of Nvidia’s Quadro line of professional graphics cards, is now offering a powerful mobile workstation. While PNY makes a variety of other products, mostly centered around memory and graphics cards, the PrevailPro is their first move into offering complete systems.

Let’s take a look at what’s inside. The PrevailPro is based on Intel’s 7th generation Core i7 7700HQ Quad-Core Hyperthreaded CPU, running at 2.8-3.8GHz. It has an HM175 chipset and 32GB of dual-channel DDR4 RAM. At less than ¾-inch thick and 4.8 pounds, it also has an SD card slot, fingerprint reader, five USB ports, Gigabit Ethernet, Intel 8265 WiFi, and audio I/O. It might not be the lightest 15-inch laptop, but it is one of the most powerful. At 107 cubic inches, it has half the volume of my 17-inch Lenovo P71.

The model I am reviewing is their top option, with a 512GB NVMe SSD, as well as a 2TB HDD for storage. The display is a 15.6-inch UHD panel, driven by the headline feature, a Quadro P4000 GPU in Max-Q configuration. With 1792 CUDA cores, and 8GB of GDDR memory, the GPU retains 80% of the power of the desktop version of the P4000, at 4.4 TFlops. Someone I showed the system to joked that it was a PNY Quadro graphics card with a screen, which isn’t necessarily inaccurate. The Nvidia Pascal-based Quadro P4000 Max-Q GPU is the key unique feature of the product, being the only system I am aware of in its class — 15-inch workstations — with that much graphics horsepower.

Display Connectivity
This top-end PrevailPro system is ProVR certified by Nvidia and comes with a full complement of ports, offering more display options than any other system its size. It can drive three external 4K displays plus its attached UHD panel, an 8K monitor at 60Hz or anything in between. I originally requested to review this unit when it was announced last fall because I was working on a number of Barco Escape three-screen cinema projects. The system’s set of display outputs would allow me to natively drive the three TVs or projectors required for live editing and playback at a theater, without having to lug my full-sized workstation to the site. This is less of an issue now that the Escape format has been discontinued, but there are many other applications that involve multi-screen content creation, usually related to advertising as opposed to cinema.

I had also been looking for a more portable device to drive my 8K monitor — I wanted to do some on-set tests, reviewing footage from 8K cameras, without dragging my 50-pound workstation around with me — and even my 17-inch P71 didn’t support it. Its DisplayPort connection is limited to Version 1.2, due to being attached to the Intel side of the hybrid graphics system. Dell’s Precision mobile workstations can drive their 8K display at 30Hz, but none of the other major manufacturers have implemented DisplayPort 1.3, favoring the power savings of using Intel’s 1.2 port in the chipset. The PrevailPro by comparison has dual mini-DisplayPort 1.3 ports, connected directly to the Nvidia GPU, which can be used together to drive an 8K monitor at 60Hz for the ultimate high-res viewing experience. It also has an HDMI 2.0 port supporting 4Kp60 with HDCP to connect your 4K TV.
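To see why the dual DisplayPort 1.3 ports matter, a little back-of-the-envelope math (my own approximation, assuming 8-bit RGB and ignoring blanking overhead) shows that 8Kp60 simply doesn’t fit down a single DisplayPort 1.2 link:

# Rough pixel-data bandwidth for 8K at 60Hz, 8-bit RGB (ignores blanking, so the
# real-world requirement is a bit higher).
width, height, fps, bits_per_pixel = 7680, 4320, 60, 24
required_gbps = width * height * fps * bits_per_pixel / 1e9

dp12_payload_gbps = 17.28  # approx. usable payload of one DisplayPort 1.2 link
dp13_payload_gbps = 25.92  # approx. usable payload of one DisplayPort 1.3 link

print(f"8Kp60 needs roughly {required_gbps:.1f} Gb/s")          # ~47.8 Gb/s
print(f"One DP 1.2 link: ~{dp12_payload_gbps} Gb/s")            # not enough
print(f"Two DP 1.3 links: ~{2 * dp13_payload_gbps:.2f} Gb/s")   # enough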

It can connect three external displays, or a fourth with MST if you turn off the integrated panel. The one feature that is missing is Thunderbolt, which may be related to the DisplayPort issue. (Thunderbolt 3 was officially limited to DisplayPort 1.2.) This doesn’t affect me personally, and USB 3.1 has much of the same functionality, but it will be an issue for many users in the M&E space — it limits the system’s flexibility.

User Experience
The integrated display is a UHD LCD panel with a matte finish. It seems middle of the line. There is nothing wrong with it, and it appears to be accurate, but it doesn’t really pop the way some nicer displays do, possibly due to the blacks not being as dark as they could be.

The audio performance is not too impressive either. The speakers located at the top of the keyboard aren’t very loud, even at maximum volume, and they occasionally crackle a bit. This is probably the system’s most serious deficiency, although a decent pair of headphones can improve that experience significantly. The keyboard is well laid out and felt natural to use, and the trackpad worked great for me. Switching between laptops frequently, I sometimes have difficulty adjusting to changes in the function and arrow key positioning, but everything was where my fingers expected it to be.

Performance-wise, I am not comparing it to other 15-inch laptops, because I don’t have any to test it against, and that is not the point of this article. The users who need this kind of performance have previously been limited to 17-inch systems, and this one might allow them to lighten their load — more portable without sacrificing much performance. I will be comparing it to my 17-inch and 13-inch laptops for context, as well as to my 20-core Dell workstation.

Storage Performance
First off, with synthetic benchmarks, the SSD reports 1400MB/s write and 2000MB/s read performance, but the write is throttled to half of that over sustained periods. This is slower than some new SSDs, but probably sufficient because without Thunderbolt there is no way to feed the system data any faster than that. (USB 3.1 tops out around 800MB/s in the real world.)

The read speed allowed me to play back 6K DPX files in Adobe Premiere, and that is nothing to scoff at. The HDD tops out at 125MB/s, as should be expected for a 2.5-inch SATA drive, so it will perform just like any other system. The spinning disk seems out of place in a device like this, where a second M.2 slot would have allowed the same capacity at higher speeds, with size and power savings.

Here are its Cinebench scores, compared to my other systems:
System: OpenGL / CPU
PNY PrevailPro (P4000): 109.94 / 738
Lenovo P71 (P5000): 153.34 / 859
Dell 7910 Desktop (P6000): 179.98 / 3060
Aorus X3 Plus (GF870): 47.00 / 520

The P4000 is a VR-certified solution, so I hooked up my Lenovo Explorer HMD and tried editing some 360 video in Premiere Pro 12.1. Everything works as expected, and I was able to get my GoPro Fusion footage to play back 3Kp60 at full resolution, and 5Kp30 at half resolution. Playing back exported clips in WMR worked in full resolution, even at 5K.

8K Playback
One of the unique features of this system is its support for an 8K display. Now, that makes for an awfully nice UI monitor, but most people buying it to drive an 8K display will probably want to view 8K content on it. To that end, 8K playback was one of the first things I tested. Within Premiere, DNxHR-LB files were the only ones I could get to play without dropping frames at full resolution, and even then only when they were scope aspect ratio. The fewer pixels to process due to the letterboxing works in its favor. None of the other options would play back at full resolution, which defeats the purpose of an 8K display. The Windows 10 media player did play back 8K HEVC files at full resolution without issue, due to the hardware decoder on the Quadro GPU, which explicitly supports 8K playback. So that is probably the best way to experience 8K media on a system like this.

Now obviously 8K is pushing our luck with a laptop in the first place. My 6K Red files play back at quarter res, and most of my other 4K and 6K test assets play smoothly. I rendered a complex 5K comp in Adobe After Effects, and at 28 minutes, it was four minutes slower than my larger 17-inch system, and twice as fast as my 13-inch gaming notebook. Encoding a 10-minute file in DCP-O-Matic took 47 minutes in 2K, and 189 minutes in 4K, which is 15% slower than my 17-inch laptop.

Conclusion
The new 15-inch PrevailPro is not as fast as my huge 17-inch P71, as to be expected, but it is close in most tests, and many users would never notice the difference. It supports 8K monitors and takes up half the space in my bag. It blows my 13-inch gaming notebook out of the water and does many media tasks just as fast as my desktop workstation. It seems like an ideal choice for a power user who needs strong graphics performance but doesn’t want to lug around a 17-inch monster of a system.

The steps to improve it would be the addition of Thunderbolt support, better speakers and an upgrade to Intel’s new 8th Gen CPUs. If I were still working on multi-screen theatrical projects, this would be the perfect system for taking my projects with me — the same goes if I were working in VR more. I believe the configuration I tested has an MSRP of $4,500, but I found it online for around $4,100. So it is clearly not the cheap option, but it is one of the most powerful 15-inch laptops available, especially if your processing needs are GPU intense. It is a well-balanced solution for demanding users who need performance but want to limit size and weight.

Update: September 27, 2018
I have had the opportunity to use the PrevailPro as my primary workstation while on the road for the last three months, and I have been very happy with the performance. The Wi-Fi range and battery life are significantly better than my previous system, although I wouldn’t bank on more than two hours of serious media editing work before needing to plug in.

I was able to process 7K R3D test shoot files for my next project in Adobe Media Encoder, and it converts them in full quality at around a quarter of realtime, so four minutes to convert one minute of footage, which is fast enough for my mobile needs. (So it could theoretically export six hours of dailies per day, but I wouldn’t usually recommend using a laptop for that kind of processing.) It renders my edited 5K project assets to H.264 faster than realtime, and the UHD screen has been great for all of my Photoshop work.
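For anyone curious how that dailies estimate works out, the math is simple. Here is my own sanity check, assuming the machine converts around the clock:

conversion_speed = 0.25      # roughly quarter-realtime, as measured above
machine_hours_per_day = 24   # assumes the laptop encodes continuously

footage_hours = machine_hours_per_day * conversion_speed
print(f"About {footage_hours:.0f} hours of 7K footage converted per day")  # ~6 hours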


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

NAB: Adobe’s spring updates for Creative Cloud

By Brady Betzel

Adobe has had a tradition of releasing Creative Cloud updates prior to NAB, and this year is no different. The company has been focused on improving existing workflows and adding new features, some based on Adobe’s Sensei technology, as well as enhanced VR capabilities.

In this release, Adobe has announced a handful of Premiere Pro CC updates. While I personally don’t think that they are game changing, many users will appreciate the direction Adobe is going. If you are color correcting, Adobe has added the Shot Match function that allows you to match color between two shots. Powered by Adobe’s Sensei technology, Shot Match analyzes one image and tries to apply the same look to another image. Included in this update is the long-requested split screen to compare before and after color corrections.

Motion graphic templates have been improved with new adjustments like 2D position, rotation and scale. Automatic audio ducking has been included in this release as well. You can find this feature in the Essential Sound panel, and once applied it will essentially dip the music in your scene based on dialogue waveforms that you identify.

Still inside of Adobe Premiere Pro CC, but also applicable in After Effects, is Adobe’s enhanced Immersive Environment. This update is for people who use VR headsets to edit and/or process VFX. Team Project workflows have been updated with better version tracking and indicators of who is using bins and sequences in realtime.

New Timecode Panel
Overall, while these updates are helpful, none are barn burners. The thing that does have me excited is the new Timecode Panel — it’s the biggest new update to the Premiere Pro CC app. For years now, editors have been clamoring for more than just one timecode view. You can view sequence timecodes, source media timecodes from the clips on the different video layers in your timeline, and you can even view the same sequence timecode in a different frame rate (great for editing those 23.98 shows to a 29.97/59.94 clock!). And one of my unexpected favorites is the clip name in the timecode window.

I was testing this feature in a pre-release version of Premiere Pro, and it was a little wonky. First, I couldn’t dock the timecode window. While I could add lines and access the different menus, my changes wouldn’t apply to the row I had selected. In addition, I could only right-click and try to change the first row of contents, but it would choose a random row to change. I am assuming the final release has this all fixed. If the wonkiness gets ironed out, this is a phenomenal (and necessary) addition to Premiere Pro.

Codecs, Master Property, Puppet Tool, more
There have been some codec compatibility updates, specifically Raw Sony X-OCN (Venice), Canon Cinema Raw Light (C200) and Red IPP2.

After Effects CC has also been updated with Master Property controls. Adobe said it best during their announcement: “Add layer properties, such as position, color or text, in the Essential Graphics panel and control them in the parent composition’s timeline. Use Master Property to push individual values to all versions of the composition or pull selected changes back to the master.”

The Puppet Tool has been given some love with a new Advanced Puppet Engine, which improves mesh and starch workflows for animating static objects. Beyond the Add Grain, Remove Grain and Match Grain effects being made multi-threaded, enhanced disk caching and project management improvements have been added.

My favorite update for After Effects CC is the addition of data-driven graphics. You can drop a CSV or JSON data file and pick-whip data to layer properties to control them. In addition, you can drag and drop data right onto your comp to use the actual numerical value. Data-driven graphics is a definite game changer for After Effects.
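To give a sense of what feeds that feature, here is a hypothetical data file written with a few lines of Python. The column names and values are invented for illustration; any CSV with a header row can be dropped into After Effects and pick-whipped the same way.

import csv

# Invented example data for a simple scoreboard-style motion graphic.
rows = [
    {"team": "Home", "score": 3, "possession": 0.62},
    {"team": "Away", "score": 1, "possession": 0.38},
]

with open("scoreboard.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["team", "score", "possession"])
    writer.writeheader()
    writer.writerows(rows)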

Audition
While Adobe Audition is an audio mixing application, it has some updates that will directly help anyone looking to mix their edit in Audition. In the past, to get audio to a mixing program like Audition, Pro Tools or Fairlight, you would have to export an AAF (or, if you are old like me, possibly an OMF). In the latest Audition update you can simply open your Premiere Pro projects directly into Audition, re-link video and audio and begin mixing.

I asked Adobe whether you could go back and forth between Audition and Premiere, but it seems like it is a one-way trip. They must be expecting you to export individual audio stems once done in Audition for final output. In the future, I would love to see back-and-forth capabilities between apps like Premiere Pro and Audition, much like the Fairlight tab in Blackmagic’s Resolve. There are some other updates, like larger tracks and under-the-hood improvements, which you can read more about at https://theblog.adobe.com/creative-cloud/.

Adobe Character Animator has some cool additions, like overall character-building improvements, but I am not too involved with Character Animator, so you should definitely read about things like the Trigger Improvements on their blog.

Summing Up
In the end, it is great to see Adobe moving forward on updates to its Creative Cloud video offerings. Data-driven animation inside of After Effects is a game-changer. Shot color matching in Premiere Pro is a nice step toward a professional color correction application. Importing Premiere Pro projects directly into Audition is definitely a workflow improvement.

I do have a wishlist though: I would love for Premiere Pro to concentrate on tried-and-true solutions before adding fancy updates like audio ducking. For example, I often hear people complain about how hard it is to export a QuickTime out of Premiere with either stereo or mono/discrete tracks. You need to set up the sequence correctly from the jump, adjust the pan on the tracks, as well as adjust the audio settings and export settings. Doesn’t sound streamlined to me.

In addition, while shot color matching is great, let’s get an Adobe SpeedGrade-style view tab into Premiere Pro so it works like a professional color correction app… maybe Lumetri Pro? I know if the color correction setup was improved I would be way more apt to stay inside of Premiere Pro to finish something instead of going to an app like Resolve.

Finally, consolidating and transcoding used clips with handles is hit or miss inside of Premiere Pro. Can we get a rock-solid consolidate and transcode feature inside of Premiere Pro? Regardless of some of the few negatives, Premiere Pro is an industry staple and it works very well.

Check out Adobe’s NAB 2018 update video playlist for details on each and every update.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Review: Digital Anarchy’s Transcriptive plugin for Adobe Premiere

By Brady Betzel

One of the most time-consuming parts of editing can be dealing with the pre-post, including organizing scripts and transcriptions of interviews. In the past, I have used and loved Avid’s ScriptSync and Phrase Find. These days, with people becoming more comfortable with other NLEs such as Adobe Premiere Pro, Apple FCP X and Blackmagic Resolve, there is a need for similar technology inside those apps, and that is where Digital Anarchy’s Transcriptive plugin comes in.

Transcriptive is a Windows- and macOS-compatible plugin for Premiere Pro CC 2015.3 and above. It allows the editor to have a sequence or multiple clips transcribed in the cloud, for a per-minute price, by either IBM Watson or Speechmatics, with the resulting script downloaded to your system and kept in sync with the clips and sequences. From there you can search for specific words, sort by speaker (including labeling each one) or just follow along with the transcript as an interview plays.

Avid’s ScriptSync is an invaluable plugin, in my opinion, when working on shows with interviews, especially when combining multiple responses into one cohesive answer being covered by b-roll — often referred to as a Frankenbite. Transcriptive comes close to Avid’s ScriptSync within Premiere Pro, but has a few differences, and is priced at $299, plus the per-minute cost of transcription.

A Deeper Look
Within Premiere, Transcriptive lives under Window > Extensions > Transcriptive. To get access to the online AI transcription services you will obviously need an Internet connection, as well as an account with Speechmatics and/or IBM’s Watson. You’ll really want to follow along with the manual, which can be found here. It walks you step by step through setting up the Transcriptive plugin.

It is a little convoluted to get it all set up, but once you do you are ready to upload a clip and get transcribing. IBM’s Watson will get you going with 1,000 free minutes of transcription a month, and from there it goes from $.02/minute down to $.01/minute, depending on how much you need transcribed. If you need additional languages transcribed, it will be up-charged $.03/minute. Speechmatics is another transcription service that runs roughly $.08 a minute (I say roughly because the price is in pounds and has fluctuated in the past), and it will go down if you do more than 1,000 minutes a month.
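If you are trying to budget a project, a quick calculator built from the approximate rates above (my own sketch; actual tiers and currency conversion will vary) looks something like this:

def watson_cost(minutes, rate_after_free=0.02, free_minutes=1000):
    # Watson: first 1,000 minutes per month free, then roughly $0.02/minute.
    return max(0, minutes - free_minutes) * rate_after_free

def speechmatics_cost(minutes, rate=0.08):
    # Speechmatics: roughly $0.08/minute (quoted in pounds, so this fluctuates).
    return minutes * rate

for m in (500, 2000, 5000):
    print(f"{m} min -> Watson ~${watson_cost(m):.2f}, Speechmatics ~${speechmatics_cost(m):.2f}")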

Your first question should be why the disparity in price, and in this instance you get what you pay for. If you aren’t as strict on accuracy, then Watson is for you — it doesn’t quite get everything correct and can sometimes fail to see when a new person is talking, even on a very clear recording. Speechmatics was faster during my testing and more accurate. If free is a good price for you then Watson might do the job, and you should try it first. But in my opinion Speechmatics is where you need to be.

When editing interviews, accuracy is extremely important, especially when searching specific key words, and this is where Speechmatics came through. Neither service has complete accuracy, and if something is wrong you can’t kick it back like you could a traditional, human-based transcription service.

The Test
To test Transcriptive I downloaded a CNN interview between Anderson Cooper and Hillary Clinton, which in theory should have perfect audio. Even with “perfect audio,” Watson had some trouble when one person would talk over the other. Speechmatics seemed to get each person labeled correctly when they spoke, and I would guess it missed only about 5% of the words, so roughly 95% accurate; Watson seemed to be about 70% accurate.

To get your files to these services you will send your media from a sequence, multiple clips or a folder of clips. I tend to favor a specific folder of clips to transcribe, as it forces some organization and my OCD assistant editor brain feels a little more at home.

As a plugin, Transcriptive is an extension inside of Premiere Pro, as alluded to earlier. Inside Premiere you have to have the Transcriptive window active when doing edits or simply playing down a clip; otherwise you will be affecting the timeline (meaning if you hit undo you will be undoing your timeline work, so be careful). When working with transcriptions between clips and sequences, your transcription will load differently. If you transcribe individual clips using the Batch Files command, the transcription will be loaded into the infamous Speech Analysis field of the file’s metadata. In this instance you can now search in the metadata field instead of the Transcriptive window.

One feature I really like is the ability to export a transcript as markers to be placed on the timeline. In addition, you can export many different closed captioning file types such as SMPTE-TT (XML file), which can be used inside of Premiere with its built-in caption integration. SRT and VTT are captioning file types to be uploaded alongside your video to services like YouTube, and JSON files allow you to send transcripts to other machines using the Transcriptive plugin. Besides searching inside of Transcriptive for any lines or speakers you want, you can also edit the transcript. This can be extremely useful if two speakers are combined or if there are some missed words that need to be corrected.

To really explain how Transcriptive works, it is easiest to compare it to Avid’s ScriptSync. If you have used Avid’s ScriptSync and then gave Transcriptive a try, you likely noticed some features that Transcriptive desperately needs in order to be the powerhouse that ScriptSync is — but Transcriptive has the added ability to upload your files and process them in the cloud.

ScriptSync allows the editor or assistant editor to take a bunch of transcriptions, line them up and then, for example, have every clip from a particular person in one transcription file that can be searched or edited from. In addition, there is a physical representation of the transcriptions that can be organized in bins and accessed separately from the clips. These functions would be a huge upgrade to Transcriptive in the future, especially for editors who work on unscripted or documentary projects with multiple interviews from the same people. If you use an external transcription file and want to align it with clips you have in the system, you must use (and pay for) Speechmatics, which, for a lower per-minute price, will align the two files.

Updates Are Coming
After I had finished my initial review, Jim Tierney, president of Digital Anarchy, was kind enough to email me about some updates that were coming to Transcriptive as well as a really handy transcription workflow that I had missed my first time around.

He mentioned that they are working on a Power Search function that will allow for a search of all clips and transcripts inside the project. A window will then show all the search results and can be clicked on to open the corresponding clips in the source window or sequence in the record window. Once that update rolls in, Transcriptive will be much more powerful and easier to use.

The only thing that will be hard to differentiate is when you have multiple interviews from multiple people; for instance, if I wanted to limit the search to only my interviews and to a specific phrase. In the future, a way to Power Search a select folder of clips or sequences may be a great way to search isolated clips or sequences, at least easier than searching all clips and sequences.

The other tidbit Jim mentioned was using YouTube’s built-in transcriptions in your own videos. Before you watch the tutorial keep in mind that this process isn’t flawless. While you can upload your video to YouTube in private mode, the uploading part may still turn away a few people who have security concerns. In addition, you will need to export a low-res proxy version of your clip to transcode, which can take time.

If you have the time, or have an assistant editor with time, this process through YouTube might be your saving grace. My two cents is that with some upfront bookkeeping like tape naming, and after transcribing corrections, this could be one of the best solutions if you aren’t worried about security.

Regardless, check out the tutorial if you want a way to get supposedly very accurate transcriptions via YouTube’s transcriber. In the end it will produce a VTT transcription file that you import back into Transcriptive, where you will need to either leave it alone or spend time adjusting it, since VTT files will not allow for punctuation. The main benefit of the VTT file from YouTube is that the timecode is carried back into Transcriptive, enabling each word to be clicked on so the video lines up to it.
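If you want to spot-check one of those VTT files before bringing it back into Transcriptive, the format is plain text and easy to read programmatically. This is a rough sketch of my own (not part of the plugin) that pulls each cue’s timecodes and text:

import re

CUE = re.compile(r"(\d{2}:\d{2}:\d{2}\.\d{3}) --> (\d{2}:\d{2}:\d{2}\.\d{3})")

def parse_vtt(path):
    # Returns a list of (start, end, text) tuples from a WebVTT caption file.
    cues, current = [], None
    for line in open(path, encoding="utf-8"):
        line = line.strip()
        match = CUE.match(line)
        if match:
            current = [match.group(1), match.group(2), ""]
            cues.append(current)
        elif current and line and line != "WEBVTT" and not line.isdigit():
            current[2] = (current[2] + " " + line).strip()
    return [tuple(c) for c in cues]

# Example: for start, end, text in parse_vtt("interview.vtt"): print(start, text)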

Summing Up
All in all, there are only a few options when working with transcriptions inside of Premiere. Transcriptive did a good job at what it did: uploading my file to one of the transcription services, acquiring the transcript and aligning the clip to the timecoded transcript with identifying markers for speakers that can be changed if needed. Once the Power Search gets ironed out and put into a proper release, Transcriptive will get even closer to being the transcription powerhouse you need for Premiere editing.

If you work with tons of interviews or just want clips transcribed for easy search you should definitely download Digital Anarchy’s Transcriptive demo and give it a whirl.

You can also find a ton of good video tutorials on their site. Keep in mind that the Transcriptive plugin runs $299 and you have some free transcriptions available to you through IBM’s Watson, but if you want very accurate transcriptions you will need to pay for Speechmatics or you can try YouTube’s built-in transcription service that charges nothing.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Editing 360 Video in VR (Part 2)

By Mike McCarthy

In the last article I wrote on this topic, I looked at the options for shooting 360-degree video footage, and what it takes to get footage recorded on a Gear 360 ready to review and edit on a VR-enabled system. The remaining steps in the workflow will be similar regardless of which camera you are using.

Previewing your work is important, so if you have a VR headset you will want to make sure it is installed and functioning with your editing software. I will be basing this article on using an Oculus Rift to view my work in Adobe Premiere Pro 11.1.2 on a Thinkpad P71 with an Nvidia Quadro P5000 GPU. Premiere requires an extra set of plugins to interface with the Rift headset. Adobe acquired Mettle’s Skybox VR Player plugin back in June, and has made it available to Creative Cloud users upon request, which you can do here.

Skybox VR player

Skybox can project the Adobe UI to the Rift, as well as the output, so you could leave the headset on when making adjustments, but I have not found that to be as useful as I had hoped. Another option is to use the GoPro VR Player plugin to send the Adobe Transmit output to the Rift, which can be downloaded for free here (use the 3.0 version or above). I found this to have slightly better playback performance, but fewer options (no UI projection, for example). Adobe is expected to integrate much of this functionality into the next release of Premiere, which should remove the need for most of the current plugins and increase the overall functionality.

Once our VR editing system is ready to go, we need to look at the footage we have. In the case of the Gear 360, the dual spherical image file recorded by the camera is not directly usable in most applications and needs to be processed to generate a single equirectangular projection, stitching the images from both cameras into a single continuous view.

There are a number of ways to do this. One option is to use the application Samsung packages with the camera: Action Director 360. You can download the original version here, but you will need the activation code that came with the camera in order to use it. Upon import, the software automatically processes the original stills and video into equirectangular 2:1 H.264 files. Instead of exporting from that application, I pull the temp files that it generates on media import and use them in Premiere; by default they should be located in C:\Users\[Username]\Documents\CyberLink\ActionDirector\1.0\360. While this is the simplest solution for PC users, it introduces an extra transcoding step to H.264 (after the initial H.265 recording), and I frequently encountered an issue where there was a black hexagon in the middle of the stitched image.

Action Director

Activating Automatic Angle Compensation in the Preferences->Editing panel gets around this bug, while trying to stabilize your footage to some degree. I later discovered that Samsung had released a separate Version 2 of Action Director available for Windows or Mac, which solves this issue. But I couldn’t get the stitched files to work directly in the Adobe apps, so I had to export them, which was yet another layer of video compression. You will need a Samsung activation code that came with the Gear 360 to use any of the versions, and both versions took twice as long to stitch a clip as its run time on my P71 laptop.
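If you do go the temp-file route mentioned above, a few lines of Python (my own convenience script, assuming the default install location noted earlier) will list what Action Director has already stitched so you can pull it straight into Premiere:

from pathlib import Path

# Default Action Director working folder on Windows, per the path above.
temp_dir = Path.home() / "Documents" / "CyberLink" / "ActionDirector" / "1.0" / "360"

for clip in sorted(temp_dir.glob("*.mp4")):
    size_mb = clip.stat().st_size / 1e6
    print(f"{clip.name}  {size_mb:.0f} MB")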

An option that gives you more control over the stitching process is to do it in After Effects. Adobe’s recent acquisition of Mettle’s SkyBox VR toolset makes this much easier, but it is still a process. Currently you have to manually request and install your copy of the plugins as a Creative Cloud subscriber. There are three separate installers, and while this stitching process only requires Skybox Suite AE, I would install both the AE and Premiere Pro versions for use in later steps, as well as the Skybox VR player if you have an HMD to preview with. Once you have them installed, you can use the Skybox Converter effect in After Effects to convert from the Gear 360’s fisheye files to the equirectangular assets that Premiere requires for editing VR.

Unfortunately, Samsung’s format is not one of the default conversions supported by the effect, so it requires a little more creativity. The two sensor images have to be cropped into separate comps, with the plugin applied to each of them. Setting the input to fisheye and the output to equirectangular for each image will give the desired distortion. A feathered mask applied to the circle adjusts the seam, and the overlap can be adjusted with the FOV and re-orient camera values.

Since this can be challenging to set up, I have posted an AE template that is already configured for footage from the Gear 360. The included directions should be easy to follow, and the projection, overlap and stitch can be further tweaked by adjusting the position, rotation and mask settings in the sub-comps, and the re-orientation values in the Skybox Converter effects. Hopefully, once you find the correct adjustments for your individual camera, they should remain the same for all of your footage, unless you want to mask around an object crossing the stitch boundary. More info on those types of fixes can be found here. It took me five minutes to export 60 seconds of 360 video using this approach, and there is no stabilization or other automatic image analysis.

Video Stitch Studio

Orah makes Video-Stitch Studio, which is a similar product but with a slightly different feature set and approach. One limitation I couldn’t find a way around is that the program expects the various fisheye source images to be in separate files, and unlike AVP I couldn’t get the source cropping tool to work without rendering the dual fisheye images into separate square video source files. There should be a way to avoid that step, but I couldn’t find one. (You can use the crop effect to remove 1920 pixels on one side or the other to make the conversions in Media Encoder relatively quickly.) Splitting the source file and rendering separate fisheye spheres adds a workflow step and render time, and my one-minute clip took 11 minutes to export. This is a slower option, which might be significant if you have hours of footage to process instead of minutes.

Clearly, there are a variety of ways to get your raw footage stitched for editing. The results vary greatly between the different programs, so I made a video to compare the different stitching options on the same source clip. My first attempt was with a locked-off shot in the park, but that shot was too simple to see the differences, and it didn’t allow for comparison of the stabilization options available in some of the programs. I shot some footage from a moving vehicle to see how well the motion and shake would be handled by the various programs. The result is now available on YouTube, fading between each of the five labeled options over the course of the minute-long clip. I would categorize this as testing how well the various applications can handle non-ideal source footage, which happens a lot in the real world.

I didn’t feel that any of the stitching options were perfect solutions, so hopefully we will see further developments in that regard in the future. You may want to explore them yourself to determine which one best meets your needs. Once your footage is correctly mapped to equirectangular projection, ideally in a 2:1 aspect ratio, and the projects are rendered and exported (I recommend Cineform or DNxHR), you are ready to edit your processed footage.

Launch Premiere Pro and import your footage as you normally would. If you are using the Skybox Player plugin, turn on Adobe Transmit with the HMD selected as the only dedicated output (in the Skybox VR configuration window, I recommend setting the hot corner to top left, to avoid accidentally hitting the start menu, desktop hide or application close buttons during preview). In the playback monitor, you may want to right click the wrench icon and select Enable VR to preview a pan-able perspective of the video, instead of the entire distorted equirectangular source frame. You can cut, trim and stack your footage as usual, and apply color corrections and other non-geometry-based effects.

In version 11.1.2 of Premiere, there is basically one VR effect (VR Projection), which allows you to rotate the video sphere along all three axes. If you have the Skybox Suite for Premiere installed, you will have some extra VR effects. The Skybox Rotate Sphere effect is basically the same. You can add titles and graphics and use the Skybox Project 2D effect to project them into the sphere where you want. Skybox also includes other effects for blurring and sharpening the spherical video, as well as denoise and glow. If you have Kolor AVP installed, that adds two new effects as well. GoPro VR Horizon is similar to the other sphere rotation ones, but allows you to drag the image around in the monitor window to rotate it, instead of manually adjusting the axis values, so it is faster and more intuitive. The GoPro VR Reframe effect is applied to equirectangular footage to extract a flat perspective from within it. The field of view can be adjusted and rotated around all three axes.

Most of the effects are pretty easy to figure out, but Skybox Project 2D may require some experimentation to get the desired results. Avoid placing objects near the edges of the 2D frame that you apply it to, to keep them facing toward the viewer. The rotate projection values control where the object is placed relative to the viewer. The rotate source values rotate the object at the location it is projected to. Personally, I think they should be placed in the reverse order in the effects panel.

Encoding the final output is not difficult; just send it to Adobe Media Encoder using either H.264 or H.265 formats. Make sure the “Video is VR” box is checked at the bottom of the Video Settings pane, and in this case that the frame layout is set to monoscopic. There are presets for some of the common frame sizes, but I would recommend lowering the bitrates, at least if you are using Gear 360 footage. Also, if you have ambisonic audio, set the channels to 4.0 in the audio pane.

Once the video is encoded, you can upload it directly to Facebook. If you want to upload to YouTube, exports from AME with the VR box checked should work fine, but for videos from other sources you will need to modify the metadata with this app here.  Once your video is uploaded to YouTube, you can embed it on any webpage that supports 2D web videos. And YouTube videos can be streamed directly to your Rift headset using the free DeoVR video player.

That should give you a 360-video production workflow from start to finish. I will post more updated articles as new software tools are developed, and as I get new 360 cameras with which to test and experiment.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Michael Kammes’ 5 Things – Video editing software

By Randi Altman

Technologist Michael Kammes is back with a new episode of 5 Things, which focuses on simplifying film, TV and media technology. The web series answers, according to Kammes, the “five burning tech questions” people might have about technologies and workflows in the media creation space. This episode tackles professional video editing software being used (or not used) in Hollywood.

Why is now the time to address this segment of the industry? “The market for NLEs is now more crowded than it has been in over 20 years,” explains Kammes. “Not since the dawn of modern NLEs have there been this many questions over what tools should be used. In addition, the massive price drop of NLEs, coupled with the pricing shift (monthly/yearly, as opposed to outright) has created more confusion in the market.”

In his video, Kammes focuses on Avid Media Composer, Adobe Premiere, Apple Final Cut Pro, Lightworks, Blackmagic Resolve and others.

Considering its history and use on some major motion pictures (such as The Wolf of Wall Street), why hasn’t Lightworks made more strides in the Hollywood community? “I think Lightworks has had massive product development and marketing issues,” shares Kammes. “I rarely see the product pushed online, at user groups or in forums. EditShare, the parent company of LightWorks, also deals heavily in storage, so one can only assume the marketing dollars are being spent on larger ticket items like professional and enterprise storage over a desktop application.”

What about Resolve, considering its updated NLE tools and the acquisition of audio company Fairlight? Should we expect to see more Resolve being used as a traditional NLE? “I think in Hollywood, adoption will be very, very slow for creative editorial, and unless something drastic happens to Avid and Adobe, Resolve will remain in the minority. For dailies, transcodes or grading, I can see it only getting bigger, but I don’t see larger facilities adopting Resolve for creative editorial. Outside of Hollywood, I see it gaining more traction. Those outlets have more flexibility to pivot and try different tools without the locked-in TV and feature film machine in Hollywood.”

Check it out:

Jimmy Helm upped to editor at The Colonie

The Colonie, the Chicago-based editorial, visual effects and motion graphics shop, has promoted Jimmy Helm to editor. Helm has honed his craft over the past seven years, working with The Colonie’s senior editors on a wide range of projects. Most recently, he has been managing ongoing social media work with Facebook and conceptualizing and editing short format ads. Some clients he has collaborated with include Lyft, Dos Equis, Capital One, Heineken and Microsoft. He works on both Avid Media Composer and Adobe Premiere.

A filmmaking major at Columbia College Chicago, Helm applied for an internship at The Colonie in 2010. Six months later he was offered a full-time position as an assistant editor, working alongside veteran cutter Tom Pastorelle on commercials for McDonald’s, Kellogg’s, Quaker and Wrangler. During this time, Helm edited numerous projects on his own, including broadcast commercials for Centrum and Kay Jewelers.

“Tom is incredible to work with,” says Helm. “Not only is he a great editor but a great person. He shared his editorial methods and taught me the importance of bringing your instinctual creativity to the process. I feel fortunate to have had him as a mentor.”

In 2014, Helm was promoted to senior assistant editor and continued to hone his editing skills while taking on a leadership role.

“My passion for visual storytelling began when I was young,” says Helm. “Growing up in Memphis, I spent a great deal of time watching classic films by great directors. I realize now that I was doing more than watching — I was studying their techniques and, particularly, their editing styles. When you’re editing a scene, there’s something addictive about the rhythm you create and the drama you build. I love that I get to do it every day.”

Helm joins The Colonie’s editorial team, comprised of Joe Clear, Keith Kristinat, Pastorelle and Brian Salazar, along with editors and partners Bob Ackerman and Brian Sepanik.

 

 

Quick Chat: Lucky Post’s Sai Selvarajan on editing Don’t Fear The Fin

Costa, maker of polarized sunglasses, has teamed up with Ocearch, a group of explorers and scientists dedicated to generating data on the movement, biology and health of sharks, in order to educate people on how saving the sharks will save our oceans. In a 2.5-minute video, three shark attack survivors — Mike Coots, Paul de Gelder and Lisa Mondy — explain why they are now on a quest to help save the very thing that attacked them, took their limbs and almost their lives.

The video, edited by Lucky Post’s Sai Selvarajan for agency McGarrah Jessee and Rabbit Food Studios, tells the viewer that the number of sharks killed by long-lining, illegal fishing and the shark finning trade exceeds human shark attacks by millions. And as go the sharks, so go our oceans.

For editor Selvarajan, the goal was to strike a balance with the intimate stories and the global message, from striking footage filmed in Hawaii’s surf mecca, the North Shore. “Stories inside stories,” describes Selvarajan, who reveres the subjects’ dedication to saving the misunderstood creatures, despite having their life-changing encounters.

We spoke with the Texas-based editor to find out more about this project.

How early on did you become involved in the project?
I got a call when the project was greenlit, and Jeff Bednarz, the creative head at Rabbit Foot, walked me through the concept. He wanted to showcase the whole teamwork aspect of Costa, Ocearch and shark survivors all coming together and using their skills to save sharks.

Did working on Don’t Fear The Fin change your perception of sharks?
Yes, it did. Before working on the project I had no idea that sharks were in trouble. After working on Don’t Fear the Fin, I’m totally for shark conservation, and I admire anyone who is out there fighting for the species.

What equipment did you use for the edit?
Adobe Premiere on a Mac tower.

What were the biggest creative challenges?
The biggest creative challenge was how to tell the shark survivors’ stories and then the shark’s story, and then Ocearch/Costa’s mission story. It was stories inside stories, which made it very dense and challenging to cut into a three-minute story. I had to do justice to all the stories and weave them into each other. The footage was gorgeous, but there had to be a sense of gravity to it all, so I used pacing and score to give us that gravity.

What do you think of the fact that sharks are not shown much in the film?
We made a conscious effort to show sharks and people in the same shot. The biggest misconception is that sharks are these big man-eating monsters. Seeing people diving with the sharks tied them to our story and the mission of the project.

What’s your biggest fear, and how would/can you overcome it?
Snakes are my biggest fear. I’m not sure how I’ll ever overcome it. I respect snakes and keep a safe distance. Living in Texas, I’ve read up on which ones are poisonous, so I know which ones to stay away from. But if I came across a rat snake in the wild, I’m sure I’d jump 20 feet in the air.

Check out the full video below…