Quick Chat: AI-based audio mastering

Antoine Rotondo is an audio engineer by trade who has been in the business for the past 17 years. Throughout his career he’s worked in audio across music, film and broadcast, focusing on sound reproduction. After completing college studies in sound design, undergraduate studies in music and music technology, as well as graduate studies in sound recording at McGill University in Montreal, Rotondo went on to work in recording, mixing, producing and mastering.

He is currently an audio engineer at Landr.com, which has released Landr Audio Mastering for Video, a tool that brings AI-based audio mastering to professional video editors inside Adobe Premiere Pro CC.

As an audio engineer, how do you feel about AI tools that shortcut the mastering process?
Well, first, there's a myth that AI and machines can't possibly make valid decisions in the creative process in a consistent way. There's actually a huge intersection between artistic intentions and technical solutions where we find many patterns: people tend to agree and go about things very similarly, often unknowingly. We've been building technology around that.

Truth be told, there are many tasks in audio mastering that are repetitive and that people don't necessarily like spending a lot of time on, tasks such as leveling dialogue, music and background elements across multiple segments, or dealing with noise. Everyone's job gets easier when those tasks become automated.

I see innovation in AI-driven audio mastering as a way to make creators more productive and efficient — not to replace them. It’s now more accessible than ever for amateur and aspiring producers and musicians to learn about mastering and have the resources to professionally polish their work. I think the same will apply to videographers.

What’s the key to making video content sound great?
Great sound quality is effortless and sounds as natural as possible. It’s about creating an experience that keeps the viewer engaged and entertained. It’s also about great communication — delivering a message to your audience and even conveying your artistic vision — all this to impact your audience in the way you intended.

More specifically, audio shouldn’t unintentionally sound muffled, distorted, noisy or erratic. Dialogue and music should shine through. Viewers should never need to change the volume or rewind the content to play something back during the program.

When would you want to hire an audio mastering engineer, and when could a project rely solely on an AI engine for audio mastering?
Mastering engineers are especially important for extremely intricate artistic projects that require direct communication with a producer or artist, including long-form narrative, feature films, television series and also TV commercials. Any project with conceptual sound design will almost always require an engineer to perfect the final master.

Users can truly benefit from AI-driven mastering in short-form, nonfiction projects that require clean dialogue, reduced background noise and overall leveling. Quick-turnaround projects can also use AI mastering to elevate the audio to a more professional level, even when deadlines are tight. AI mastering can now insert itself into the offline creation process, where multiple revisions of a project are sent back and forth, making great sound accessible throughout the entire production cycle.

The other thing to consider is that AI mastering is a great option for video editors who don’t have technical audio expertise themselves, and where lower budgets translate into them having to work on their own. These editors could purchase purpose-built mastering plugins, but they don’t necessarily have the time to learn how to really take advantage of these tools. And even if they did have the time, some would prefer to focus more on all the other aspects of the work that they have to juggle.

iOgrapher now offering Multi Case for Android and iOS phones

iOgrapher has debuted the iOgrapher Multi Case rig for mobile filmmaking. It's the company's first non-iOS offering. An early pioneer of mobile media filmmaking cases for iOS devices, iOgrapher is now targeting mobile filmmakers with a flexible design that supports recent-model iOS and Android phones of all sizes.

The iOgrapher Multi Case features:

• Slide-in design for a strong and secure fit
• The ability to attach lighting and mics for higher quality mobile video production
• Flexible mount options for any standard tripod in landscape or portrait mode
• ¼-20 screw mounts on the handles to attach accessories
• Standard protective cases for your phone can be used — filmmakers no longer need to remove protective cases to use the iOgrapher Multi Case
• It works with Moment Lenses. Users do not need to remove Moment Lens cases or lenses to use the iOgrapher Multi Case
• The Multi Case is designed to work with iPhone 6 and later models, and has been tested to work with popular Samsung, Google Pixel, LG and Motorola phones.

With the launch of the Multi Case, iOgrapher is introducing a new design. The capabilities and mounting options have evolved as a result of customer reviews and feedback, as well as real-world use cases from professional broadcasters, filmmakers, pro-sport coaches and training facilities.

The iOgrapher Multi Case is available for pre-order and is priced at $79. It will ship at the end of November.

Crazy Rich Asians editor Myron Kerstein

By Amy Leland

When the buzz started in anticipation of the premiere of Crazy Rich Asians, there was a lot of speculation about whether audiences would fill the theaters for the first all-Asian cast in an American studio film since 1993's The Joy Luck Club, or whether audiences still wanted to see a romantic comedy, a genre that seemed to be falling out of favor.

The answer to both questions was a resounding, “Yes!” The film grossed $35 million during its opening weekend, against a $30 million budget. It continued going strong in its second weekend, making another $28 million, the highest Labor Day weekend box office in more than a decade. It was the biggest opening weekend for a rom-com in three years, and it is the most successful studio rom-com in nine years. All of this success can be explained pretty simply — it's a fun movie with a well-told story.

Not long ago, I had the great fun of sitting down with one of its storytellers, editor Myron Kerstein, to discuss this Jon M. Chu-directed film as well as Kerstein’s career as an editor.

How did you get started as an editor?
I was a fine arts major in college and stumbled upon photography, filmmaking, painting and printmaking. I really just wanted to make art of any kind. Once I started doing more short films in college, I found a knack for editing.

When I first moved to New York, I needed to make a living, so I became a PA, and I worked on a series called TV Nation, one of Michael Moore's first shows. It was political satire. There was a production period, and then slowly the editors needed help in the post department. I gravitated toward these alchemists, these amazing people who were making things out of nothing. I really started to move toward post through that experience.

I also hustled quite a bit with all of those editors, and they started to hire me after that job. Slowly but surely I had a network of people who wanted to hire me again. That’s how I really started, and I really began to love it. I thought, what an amazing process to read these stories and look at how much power and influence an editor has in the filmmaking process.

I was not an assistant for too long, because I got to cut a film called Black & White. Then I quickly began doing edits for other indies, one being a film called Raising Victor Vargas, and another film called Garden State. That was my big hit in the indie world, and slowly that led to more studio films, and then to Crazy Rich Asians.

Myron Kerstein and Crazy Rich Asians actor Henry Golding.

Your first break was on a television show that was nothing like feature films. How did you ultimately move toward cutting feature films?
I had a real attraction to documentary filmmaking, but my heart wanted to make narrative features. I think once you put that out in the universe, then those jobs start coming to you. I then stumbled upon my mentor, Jim Lyons, who cut all of Todd Haynes’s movies for years. When I worked on Velvet Goldmine as an assistant editor, I knew this was where I really needed to be. This was a film with music that was trying to say something, and was also very subversive. Jim and Todd were these amazing filmmakers that were just shining examples of the things I wanted to make in the future.

Any other filmmakers or editors whose work influenced you as you were starting out?
In addition to Todd Haynes, directors like Gus Van Sant and John Hughes. When I was first watching films, I didn't really understand what editors did, so at the same time I was influenced by Spielberg, or somebody like George Romero. It was only later that I realized there were editors who made these things. Ang Lee and his editor Tim Squyres were like gods to me. I really wanted to work on one of Ang's crews very badly, but everyone wanted to work with him. I was working at the same facilities where Ang was cutting, and I was literally sneaking into his edit rooms. I would be working on another film, and I would just kind of peek my head in and see what they were doing and that kind of thing.

How did Crazy Rich Asians come about for you?
Brad Simpson, who was a post supervisor on Velvet Goldmine back in the ‘90s when I was the assistant editor, is a producer on this film. Flash forward 20 years and I stumbled upon this script through agents. I read it and I was like, “I really want to be a part of this, and Brad’s the producer on this thing? Let me reach out to him.” He said, “I think you might be the right fit for this.” It was pretty nerve-wracking because I’d never worked with Jon before. Jon was a pretty experienced filmmaker, and he’d worked with a lot of editors. I just knew that if I could be part of the process, we could make something special.

My first interview with Jon was a Skype interview. He was in Malaysia already prepping for the film. In those interviews it's very difficult not to look or sound weird. I just spoke from the heart, and said this is what I think makes me special. These are the ways I can try to influence a film and be part of the process. Luckily, between that interview and Brad's recommendation, I got the job.

Myron Kerstein and director Jon Chu.

When did you begin your work on the film?
I basically started the first week of filming and joined them in Malaysia and Singapore for the whole shoot. It was a pretty amazing experience being out there in two Muslim countries — two Westernized Muslim countries that were filled with some of the friendliest people I've ever met. It was an almost entirely local crew, a couple of assistant editors, and me. Sometimes I feel like it might not be the best thing for an editor to be around set too much, but in this case it was good for me to see the setting they were trying to portray… and feel the humidity, the steaminess and the romance of Singapore, which is both alien and beautiful at the same time.

What was your collaboration like with Jon Chu?
It was just an organic process, where my DNA started to become infused with Jon’s. The good thing about my going to Malaysia and Singapore was we got to work together early. One thing that doesn’t happen often anymore is a director who actually screens dailies in a theater. Jon would do that every weekend. We would watch dailies, and he would say what he liked and didn’t like, or more just general impressions of his footage. That allowed me to get into his head a bit.

At the same time I was also cutting scenes. At the end of every day’s screening, we would sit down together. He gave me a lot of freedom, but at the same time was there to give me his first impressions of what I was doing. I think we were able to build some trust really early.

Because of the film’s overwhelming success, this has opened doors for other Asian-led projects.
Isn’t that the most satisfying thing in the world? You hope to define your career by moments like this, but rarely get that chance. I watched this film, right when it was released, which was on my birthday. I ended up sitting next to this young Asian boy and his mom. This kid was just giggling and weeping throughout the movie. To have an interaction with a kid like that, who may have never seen someone like himself represented on the screen was pretty outstanding.

Music was such an important part of this film. The soundtrack is so crucial to moments in the film that it almost felt like a musical. Were you editing scenes with specific songs in mind, or did you edit first and then come back and add music?
Jon gave me a playlist very early on of music he was interested in. A lot of the songs sounded like they were from the 1920s — almost big band tunes. Right then I knew the film could have more of a classy Asian-Gatsby quality to it. Then as we were working on the film together, we started trying out these more modern tunes. I think the producers might have thought we were crazy at one point. You’re asking the audience to go down these different roads with you, and that can sometimes work really well, or sometimes can be a train wreck.

But as much as I love working with music, when I assemble I don't cut with any music in mind. I try not to use it as a crutch. Oftentimes you cut something with music, either with a song in your head or with a song laid in as a music bed. But if you can't tell a story visually without a song to help drive it, then I think you're fooling yourself.

I really find that my joy of putting in music happens after I assemble, and then I enjoy experimenting. That Coldplay song at the end of the film, for example… We were really struggling with how to end our movie. We had a bunch of different dialogue scenes that were strung together, but we didn’t feel like it was building up to some kind of climax. I figured out the structure and then cut it like any other scene without any music. Then Jon pitched a couple songs. Ironically enough I had an experience with Coldplay from the opening of Garden State. I liked the idea of this full circle in my own career with Coldplay at the end of a romantic comedy that starred an all-Asian cast. And it really felt like it was the right fit.

The graphic design was fascinating, especially in the early scene with Rachel and Nick on their date that kicks off all of the text messages. Is that something that was storyboarded early, or was that something you all figured out in the edit and in post?
Jon did have a very loose six-page storyboard of how we would get from the beginning of this to the end. The storyboard was nothing compared to what we ended up doing. When I first assembled my footage, I stitched together a two-minute sequence of just split screens of people reacting to other people. Some of that footage is in the movie, but it was just a loose sketch. Jon liked it, but it didn’t represent what he imagined this sequence to be. To some extent he had wondered whether we even needed the sequence.

Jon and I discussed it and said, “Let’s give this a shot. Let’s find the best graphics company out there.” We ended up landing with this company called Aspect, led by John Berkowitz. He and his team of artists worked with us to slowly craft this sequence over months. Beginning with, “How do we get the first text bubble to the second person? What do those text bubbles look like? How do they travel?” Then they gave us 20 different options to see how those two elements would work together. Then we asked, “How do we start expanding outward? What information are we conveying? What is the text bubble saying?” It was like this slowly choreographed dance that we ended up putting together over the course of months.

They would make these little Disney-esque pops. We really loved that. That kind of made it feel like we were back in old Hollywood for a second. At the same time we had these modern devices with text bubbles. As far as the tone was concerned, we tried percussion, just drumming, and other old scores. Then we landed on John Williams' score from the film 1941, and that gave us the idea that maybe some old-school big band jazz might go really well in this. Our composer Brian Tyler saw it, and said, “I think I can make this even zanier and crazier.”

How do you work with your assistants?
Assistants are crucial as far as getting through the whole process. I actually had two sets of assistants; John To and David Zimmerman were on the first half in Malaysia and Singapore. I found John through my buddy Tom Cross, who edits for Damien Chazelle. I wanted somebody who could help me with the challenges of getting through places like Malaysia and Singapore, because if you're looking for help for your Avid, or trying to get dailies from Malaysia to America, you're kind of on your own. Warner Bros. was great and supportive, and they gave us all the technical help. But it's not like they can fly somebody out within an hour if something goes wrong.

On the post side I ended up using Melissa Remenarich-Aperlo, and she was outstanding. In the post process I needed somebody to hold down the fort and keep me organized, and also somebody for me to bounce ideas off of. I’m a big proponent of using my assistants creatively. Melissa ended up cutting the big fashion montage. I really struggled with that sequence because I felt strongly like this might be a trope that this film didn’t need. That was the debate with a lot of them. Which romantic comedy tropes should we have in this movie? Jon was like, “It’s wish fulfillment. We really need this. I know we’ve seen it a thousand times, but we need this scene.”

I said let’s try something different. Let’s try inter-cutting the wedding arrival with the montage, and let’s try to make it one big story to get us from us not knowing what she’s going to show up in to her arrival. Both of those sequences were fine on their own, but it didn’t feel like either one of them was doing anything interesting. It just felt like we were eating up time, and we needed to get to the wedding, and we had a lot of story to tell. Once we inter-cut them we knew this was the right choice. As Jon said, you need these moments in the film where you can just sit back and take a breath, smile for a minute and get ready for the drama that starts. Melissa did a great job on that sequence.

Do you have any advice for somebody who’s just starting out and really wants to edit feature films?
I would tell them to start cutting. Cut anything they can. If they don't have the software, they can cut on iMovie on their iPhone. Then they should reach out to people like me and create a network. And keep doing that until people say yes. Don't be afraid to reach out to people.

Also don’t be afraid to be an assistant editor. As much as they want to cut, as they should, they also need to learn the process of editing from others. Be willing to stick with it, even if that means years of doing it. I think you’d be surprised how much you learn over the course of time with good editors. I feel like it’s a long bridge. I’ve been doing this for 20 years, and it took a long time to get here, but perseverance goes a long way in this field. You just have to really know you want to do it and keep doing it.


Amy Leland is a film director and editor. Her short film, “Echoes”, is now available on Amazon Video. She also has a feature documentary in post, a feature screenplay in development, and a new doc in pre-production. She is an editor for CBS Sports Network and recently edited the feature “Sundown.” You can follow Amy on social media on Twitter at @amy_leland and Instagram at @la_directora.

Satore Tech tackles post for Philharmonia Orchestra’s latest VR film

The Philharmonia Orchestra in London debuted its latest VR experience at Royal Festival Hall alongside the opening two concerts of the Philharmonia’s new season. Satore Tech completed VR stitching for the Mahler 3: Live From London film. This is the first project completed by Satore Tech since it was launched in June of this year.

The VR experience placed users at the heart of the Orchestra during the final 10 minutes of Mahler's Third Symphony, which was filmed live in October 2017. The stitching project was completed by creative technologist/SFX/VR expert Sergio Ochoa, who leads Satore Tech. The company used SGO Mistika technology to post the project, which Ochoa helped to develop during his time at that company — he was creative technologist and CEO of SGO's French division.

Luke Ritchie, head of innovation and partnerships at the Philharmonia Orchestra, says, “We’ve been working with VR since 2015; it’s a fantastic technology to connect new audiences with the Orchestra in an entirely new way. VR allows you to sit at the heart of the Orchestra, and our VR experiences can transform audiences’ preconceptions of orchestral performance — whether they’re new to classical music or are a die-hard fan.”

It was a technically demanding project for Satore Tech to stitch together, as the concert was filmed live, in 360 degrees, with no retakes, using Google's latest Jump Odyssey VR camera. This meant that Ochoa was working with four to five different depth layers at any one time. The amount of fast movement also meant the footage needed to be upscaled from 4K to 8K to ensure it was suitable for the VR platform.

“The guiding principle for Satore Tech is we aspire to constantly push the boundaries, both in terms of what we produce and the technologies we develop to achieve that vision,” explains Ochoa. “It was challenging given the issues that arise with any live recording, but the ambition and complexity are what make it such a suitable initial project for us.”

Satore Tech’s next project is currently in development in Mexico, using experimental volumetric capture techniques with some of the world’s most famous dancers. It is slated for release early next year.


Company 3 adds television colorist Jeremy Sawyer 

Company 3 in Santa Monica has beefed up its team of colorists with Jeremy Sawyer (Hulu’s The First, Showtime’s I’m Dying Up Here). He will be working on the studio’s expanding slate of TV projects — they currently have more than 20 series in the facility, including Lost in Space (Netflix), Insecure (HBO) and Jack Ryan (Amazon).

For Sawyer, who has also worked on The Walking Dead (AMC), this move brings him back to Company 3, where he had worked as an assistant and then colorist and where he learned a great deal about his craft from CEO/founder Stefan Sonnenfeld.

He returns to the company following a tenure at Light Iron, and was at MTI before that. Prior to his initial stint at Company 3, Sawyer worked at the now-defunct Syndicate. He started his career at Finish Post in his native Boston.

“We’re very excited to welcome Jeremy back,” Sonnenfeld says. “He is an excellent artist and he has a keen understanding of the unique challenges involved in coloring episodic programming. He’s a perfect addition to our team, especially as demand for top-notch TV post continues to explode.”

Sawyer will continue his work on the third season of Netflix series Easy, for which he’s colored every episode to date.

Review: Sonnet Fusion PCIe 1TB and G-Drive Mobile Pro 500GB

By Brady Betzel

There are a lot of external Thunderbolt 3 SSD drives out in the wild these days, and they aren't cheap. However, with a high price come blazingly fast speeds. I was asked to review two very similar Thunderbolt 3 external SSD drives, so why not pit them against each other? Surprisingly (at least to me), there are a couple of questions you will want the answers to: Is there thermal throttling that will lower the read/write speeds when transferring large files for a sustained amount of time? Does the drive run so hot that it might burn you when touched?

I’ll answer these questions and a few others over the next few paragraphs, but in the end would I recommend buying a Thunderbolt 3 SSD? Yes, they are very, very fast. Especially when working with higher resolution multimedia files in apps like Premiere, Resolve, Pro Tools and many other data-intensive applications.

Sonnet Fusion Thunderbolt 3 PCIe Flash Drive
Up first (only because I received it first) is the Sonnet Fusion external SSD. I was sent the drive in a non-retail box, so I can't attest to how it will arrive when you buy it in a retail setting, but the drive itself feels great. Like many other Sonnet products, the Fusion drive is hefty — and not in an overweight way. It feels like you are getting your money's worth. Unlike the rubberized exterior of the popular LaCie Rugged drives, the Sonnet Fusion is essentially an aluminum heat sink wrapped around a powerful 1TB, Gen 3 M.2 PCIe, Toshiba RVD400-M22280 solid state drive. It's sturdy and feels like it could survive a drop with nothing worse than a little dent.

Attached to the drive is Sonnet's “captive” Thunderbolt 3 cable, which I assume means the cable is attached to the external drive casing but can be removed without disassembling the case. I think more cable integrations should be called captive; it's a great description. Anyway… the Thunderbolt 3 cable can be removed or replaced by taking out the four small screws underneath the Fusion. It's attached to a female Thunderbolt 3 port inside the casing. I really wish Sonnet had integrated a way to wrap the cable around the drive, much like the LaCie Rugged drives do, in addition to the “captive” attachment. This would really help with transporting the drive and not worrying about the cable. It's only a small annoyance, but since I've been spoiled by nice cable attachments I kind of expect it, especially on drives with a price tag like this. The Sonnet Fusion retails for $899 through stores like B&H, although I found it on Amazon.com for $799. Not cheap for an external drive, but in my opinion it is worth it.

The Sonnet Fusion is fast, like really fast, as in the fastest external drive I have tested. Sonnet claims a read speed of up to 2600MB/s and a write speed of up to 1600MB/s. The only caveat is that you must make sure your computer's Thunderbolt 3 port is running x4 PCIe Gen 3 (four PCIe lanes) as opposed to x2 PCIe Gen 3 (only two PCIe lanes). If your port only runs x2, your write speed will be limited to around 1400MB/s rather than the promised 1600MB/s. You can find more tech specs on Sonnet's site. In addition, you can find out if your computer has the PCIe lanes to run the Fusion at full speed here.
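As a rough back-of-the-envelope check, assuming the commonly cited figure of roughly 985MB/s of usable bandwidth per PCIe Gen 3 lane (before protocol overhead, so treat these as approximations), the lane count sets the ceiling:

    x4 PCIe Gen 3: 4 × ~985MB/s ≈ 3940MB/s raw, comfortably above the 1600MB/s write rating
    x2 PCIe Gen 3: 2 × ~985MB/s ≈ 1970MB/s raw, which after overhead lands near the ~1400MB/s figure above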

When testing the Sonnet Fusion I was lucky enough to have a few systems at my disposal: a 2018 iMac Pro, a 2018 Intel i9 MacBook Pro and an Intel i9 Puget Systems Genesis I with Thunderbolt 3 ports. All the systems provided similar results, which was nice to see. Using the AJA System Test, I adjusted the settings to 3840×2160, 4GB and ProRes 4444. I used one reading for the example image in this review, but the results were generally the same every time I ran the test. I was getting around 1372MB/s write and 2200MB/s read speeds. When transferring files at the Finder level I was consistently getting about 1GB/s write speeds, but it's possible I was being limited by the speed of the internal SSD! Incredible. For real-world numbers, I was able to transfer about 750GB in under five minutes. Again, incredible speeds.

The key to the Sonnet Fusion SSD, and what makes it a step above the competition, is its enclosure acting as a heat sink in a 2.8×4.1×1.25-inch form factor. While this means there are no fans adding noise, it does mean that the drive can get extremely hot to the touch, which can be an issue if you need to pack it up and go, or if you put it in your pocket (be careful!). It also means that with great heat dissipation comes less thermal throttling, which can slow down transfer speeds when using a drive over longer periods of time; this can be a real problem in some drives. Also keep in mind that this drive is bus-powered, and Sonnet's instruction manual specifically states that it will not work with a Thunderbolt 2 adapter. The Sonnet Fusion comes with a one-year warranty that you can read about at this link.

G-Drive Mobile Pro SSD 500GB
Like the Sonnet Fusion, the G-Drive Mobile Pro SSD is a Thunderbolt 3-connected external hard drive that touts very high sustained transfer speeds of up to 2800MB/s (read speed). The G-Drive is physically lighter than the Sonnet and cheaper, coming in at about 79 cents per GB, or 68 cents per GB if you purchase the 1TB version, compared to the Sonnet Fusion's 88 cents per GB. So is this a “get what you pay for” scenario? I think so. The 500GB version costs $399.95, while the 1TB version retails for $699.95, a full $100 cheaper than the Sonnet Fusion.

The G-Drive Mobile Pro has a slender profile that matches what you'd expect an external hard drive to look like. It measures 4.41 x 3.15 x 0.67 inches and weighs just 0.45 pounds. The exterior is attractive — the drive is surrounded by a blackish/dark grey rubberized plastic with silver plastic end caps. There are slits in the top and bottom of the case to dissipate heat, or maybe just to show off the internal electric-blue aluminum heatsink. The Thunderbolt 3 connection is on the rear of the housing for easy connection, with a status LED on the front. The cord is not attached to the drive, so there's a good chance of it being misplaced. Again, I really wish manufacturers would think about cable storage and placement on these drives — LaCie Rugged drives have this nailed, and I hope others follow suit.

Included with the G-Drive Mobile Pro is a 0.5-meter Thunderbolt 3 cable. It comes with a five-year limited warranty described in an included pamphlet that just may feature the tiniest font possible. The warranty ensures that the product is free from defects in materials and workmanship, with some exclusions, including non-commercial use. In addition, the retail box shows off a couple of key specifics, including “durable, shock-resistant SSD,” while the G-Technology website boasts of three-meter drop protection (onto a carpeted concrete floor) as well as a 1,000-pound crush-proof rating. I'm not sure if this is covered by the warranty or not, but since there really aren't moving parts in an SSD, I don't see why this wouldn't hold up. An additional proclamation is that you can edit multi-stream 8K footage at full frame rate. This may technically be true in a read-only state, but you would need a supercomputer with multiple high-end GPUs to actually work with media of this size. So take that with a grain of salt — not just on this drive but on any.

So on to the actual nuts and bolts of the G-Drive Mobile Pro SSD. The drive looks good on the outside and is immediately recognized over a direct Thunderbolt 3 connection by any macOS system (like all bus-powered drives). If you are using Windows, you will have to format the drive before you can use it; G-Technology has an app to make that easy.

When doing real-world file transfers I was consistently getting around 1GB/s transfer speeds. So yes, the G-Drive Mobile Pro SSD is blazing fast. I was transferring 200GB of files in under two minutes.

Summing Up
In the end, if you haven't seen the speed difference coming from a USB 3.0 or Thunderbolt 2 drive, you must try Thunderbolt 3. If you have Thunderbolt 3 ports and are using old Thunderbolt 2 drives, now is the time to upgrade. Not only can you use either of these drives like an internal drive, but if you are a Resolve colorist or a Premiere editor you can use them as your render cache or render drive. Not only will this speed up your coloring and editing, but you may even start to notice fewer errors and crashes since the pipes are open.

Personally, I love the Sonnet Fusion drive and the G-Drive Mobile Pro. If price is your main focus then obviously the G-Drive Mobile Pro is where you need to look. However, if a high-end look with some heft is your main interest, I think the Sonnet Fusion is an art piece you can have on your desktop.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

30 Ninja’s Julina Tatlock to keynote SMPTE 2018, will focus on emerging tech

30 Ninjas CEO Julina Tatlock, an award-winning writer-producer, virtual reality director and social TV specialist, will present the keynote address at the SMPTE 2018 conference, which takes place October 22-25 in downtown Los Angeles. Tatlock's keynote will take place October 23 at 9am, immediately following the SMPTE annual general membership meeting.

Tatlock specializes in producing and directing VR, creating social media and web-based narrative games for movies and broadcast, as well as collaborating with developers on integrating new tech intellectual property into interactive stories.

During her keynote, she will discuss the ways that content creation and entertainment production can leverage emerging technologies. Tatlock will also address topics such as how best to evaluate what might be the next popular entertainment technology and platform, as well as how to write, direct and build for technology and platforms that don’t exist yet.

Tatlock’s 30 Ninjas, is an award-winning immersive-entertainment company she founded along with director Doug Liman (Bourne Identity, Mr. & Mrs. Smith, Edge of Tomorrow, American Made). 30 Ninjas creates original narratives and experiences in new technologies such as virtual reality, augmented reality and mixed reality and location-based entertainment for clients such as Warner Bros., USA Network, Universal Cable Productions and Harper Collins.

Tatlock also is the executive producer and director of episodes three and four of the six-part VR miniseries “Invisible,” with production partners Condé Nast Entertainment, Jaunt VR and Samsung.

Before founding 30 Ninjas, she spent eight years at Oxygen Media, where she was VP of programming strategy. In an earlier role with Martha Stewart Living Omnimedia, Tatlock wrote and produced more than 100 of NBC’s Martha Stewart Living morning show segments.

Registration is open for both SMPTE 2018 and for the SMPTE 2018 Symposium, an all-day session that will precede the technical conference and exhibition on Oct. 22. Pre-registration pricing is available through Oct. 13. Further details are available at smpte2018.org.

AI for M&E: Should you take the leap?

By Nick Gold

In Hollywood, the promise of artificial intelligence is all the rage. Who wouldn’t want a technology that adds the magic of AI to smarter computers for an instant solution to tedious, time-intensive problems? With artificial intelligence, anyone with abundant rich media assets can easily churn out more revenue or cut costs, while simplifying operations … or so we’re told.

If you attended IBC, you probably already heard the pitch: “It’s an ‘easy’ button that’s simple to add to the workflow and foolproof to operate, turning your massive amounts of uncategorized footage into metadata.”

But should you take the leap? Before you sign on the dotted line, take a closer look at the technology behind AI and what it can — and can’t — do for you.

First, it’s important to understand the bigger picture of artificial intelligence in today’s marketplace. Taking unstructured data and generating relevant metadata from it is something that other industries have been doing for some time. In fact, many of the tools we embrace today started off in other industries. But unlike banking, finance or healthcare, our industry prioritizes creativity, which is why we have always shied away from tools that automate. The idea that we can rely on the same technology as a hedge fund manager just doesn’t sit well with many people in our industry, and for good reason.

Nick Gold talks AI for a UCLA Annex panel.

In the media and entertainment industry, we’re looking for various types of metadata that could include a transcript of spoken words, important events within a period of time or information about the production (e.g., people, location, props), and currently there’s no single machine-learning algorithm that will solve for all these types of metadata parameters. For that reason, the best starting point is to define your problems and identify which machine learning tools may be able to solve them. Expecting to parse reams of untagged, uncategorized and unstructured media data is unrealistic until you know what you’re looking for.

What works for M&E?
AI has become pretty good at solving some specific problems for our industry. Speech-to-text is one of them. With AI, extracting data from a generally accurate transcription offers an automated solution that saves time. However, it’s important to note that AI tools still have limitations. An AI tool, known as “sentiment analysis,” could theoretically look for the emotional undertones described in spoken word, but it first requires another tool to generate a transcript for analysis.
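To make that dependency concrete, here is a minimal sketch of how such a two-stage pipeline chains together. The function names and the keyword-counting "analysis" are hypothetical placeholders, not any particular vendor's API:

    def transcribe(audio_path):
        # Placeholder: a real implementation would call a speech-to-text
        # engine here and return its transcript as plain text.
        return "I absolutely loved the screening last night."

    def analyze_sentiment(transcript):
        # Placeholder: real sentiment engines return far richer output; this
        # toy version just counts a few positive and negative keywords.
        text = transcript.lower()
        positive = sum(word in text for word in ("loved", "great", "happy"))
        negative = sum(word in text for word in ("hated", "awful", "sad"))
        return "positive" if positive >= negative else "negative"

    # The chain: sentiment analysis can only run once a transcript exists.
    print(analyze_sentiment(transcribe("interview_reel.wav")))

The point is the ordering, not the code: the second engine consumes the first engine's output, so the quality of your sentiment results is capped by the quality of your transcripts.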

But no matter how good the algorithms are, they won't give you the qualitative data that a human observer would provide, such as the emotions expressed through body language. They won't tell you the facial expressions of the people being spoken to, the tone of voice, pacing and volume level of the speaker, or what is conveyed by a sarcastic tone or a wry expression. There are sentiment analysis engines that try to do this, but breaking the problem down into its components ensures the parameters you actually need will be addressed and solved.

Another task at which machine learning has progressed significantly is logo recognition. Certain engines are good at finding, for example, all the images with a Coke logo in 10,000 hours of video. That's impressive and quite useful, but it's another story if you want to also find footage of two people drinking from what are clearly Coke-shaped bottles where the logo is obscured. That's because machine-learning engines tend to have a narrow focus, which goes back to the need to define very specifically what you hope to get from them.

There is a bevy of algorithms and engines out there. If you license a service that finds a specific logo, you still haven't solved the problem of finding objects that represent the product when the logo isn't visible. Even with the right engine, you've got to think about how this information fits into your pipeline, and there are a lot of workflow questions to be explored.

Let’s say you’ve generated speech-to-text with audio media, but have you figured out how someone can search the results? There are several options. Sometimes vendors have their own front end for searching. Others may offer an export option from one engine into a MAM that you either already have on-premise or plan to purchase. There are also vendors that don’t provide machine learning themselves but act as a third-party service organizing the engines.

It’s important to remember that none of these AI solutions are accurate all the time. You might get a nudity detection filter, for example, but these vendors rely on probabilistic results. If having one nude image slip through is a huge problem for your company, then machine learning alone isn’t the right solution for you. It’s important to understand whether occasional inaccuracies will be acceptable or deal breakers for your company. Testing samples of your core content in different scenarios for which you need to solve becomes another crucial step. And many vendors are happy to test footage in their systems.

Although machine learning is still in its nascent stages, there is a lot of interest in learning how to make it work in the media workflow. It can do some magical things, but it’s not a magic “easy” button (yet, anyway). Exploring the options and understanding in detail what you need goes hand-in-hand with finding the right solution to integrate with your workflow.


Nick Gold is lead technologist for Baltimore’s Chesapeake Systems, which specializes in M&E workflows and solutions for the creation, distribution and preservation of content. Active in both SMPTE and the Association of Moving Image Archivists (AMIA), Gold speaks on a range of topics. He also co-hosts the Workflow Show Podcast.
 

Behind the Title: Pace Pictures owner Heath Ryan

NAME: Heath Ryan

COMPANY: Pace Pictures (@PacePictures)

CAN YOU DESCRIBE YOUR COMPANY?
We are a dailies-to-delivery post house, including audio mixing.

Pace’s Dolby Atmos stage.

WHAT’S YOUR JOB TITLE?
Owner and editor.

WHAT DOES THAT ENTAIL?
As owner, I need to make sure everyone is happy.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Psychology. I deal with a lot of producers, directors and artists who all have their own wants and needs. Sometimes what that entails is not strictly post production but managing personalities.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Editing. My company grew out of my love for editing. It’s the final draft of any film. In the over 30 years I have been editing, the power of what an editor can do has only grown.

WHAT’S YOUR LEAST FAVORITE?
Chasing unpaid invoices. It’s part of the job, but it’s not fun.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
Late, late in the evening when there are no other people around and you can get some real work done.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Not by design but through sheer single-mindedness, I have no other skill set but film production. My sense of direction is so bad that, even armed with the GPS supercomputer in my phone, being an Uber driver is not an option.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I started making films in my single-digit years. I won a few awards for my first short film in my teens and never looked back. I'm lucky to have found this passion early.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
This year I edited Grand-Daddy Day Care (2019), the reboot of Daddy Day Care, for Universal. I got to work with director Ron Oliver and actor Danny Trejo, and it meant a lot to me. It deals with what we do with our elders as time creeps up on us all. Sadly, we lost Ron's mom while we were editing the film, so it took on extra special meaning for us both.

Lawless Range

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Lawless Range and The Producer. I produced and edited both projects with my dear friend and collaborator Sean McGinly. One is a modern-day Western, the other a behind-the-scenes look at a Hollywood pilot. They were very satisfying projects because there was no one to blame but ourselves.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My Meridian Sound system, the Internet and TV.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Yes, I love it. I have always set the tone in the edit bay with music. Especially during dailies – I like to put music on, sometimes film scores, to set the mood of what we are making.

Cinesite London promotes Caroline Garrett to head of VFX

Cinesite in London has upped Caroline Garrett to head of its London VFX studio. She will oversee day-to-day operations at the independent VFX facility. Garrett will work with her colleagues at Cinesite Montreal, Cinesite Vancouver and partner facilities Image Engine and Trixter to increase Cinesite's global capacity for feature films, broadcast and streaming shows.

Garrett started her career in 1998 as an artist at the Magic Camera Company in Shepperton Film Studios. In 2002 she co-founded the previsualization company Fuzzygoat, working closely with directors at the start of the creative process. In 2009 she joined Cinesite, taking on the position of CG manager and overseeing the management of the 3D department as well as serving as both producer and executive producer. Prior to her most recent promotion, she was head of production, overseeing all aspects of production for the London studio.

With 20 years of industry experience and her own background as an animator and CG artist, Garrett understands the rigors that artists face while solving technical and creative challenges. Since joining Cinesite in 2009, she has worked as senior production executive on high-profile features, including Harry Potter and the Deathly Hallows, Skyfall, World War Z, The Revenant, Fantastic Beasts and Where to Find Them and, most recently, Avengers: Infinity War.

Garrett is the second woman to be appointed to a head of studio role within the Cinesite group, following Tara Kemes' appointment as GM of Cinesite's Vancouver animation studio earlier this year.