
Talking localization with Deluxe’s Chris Reynolds

In a world where cable networks and streaming services have made global content the norm, localization work is more important than ever. Deluxe’s global localization team, for example, provides content creators with transcription, scripting, translation, audio description, subtitling and dubbing services. The team is made up of 1,300 full-time employees and a distributed workforce of over 6,000 translators, scripting editors, AD writers and linguistic quality experts who cover more than 75 languages.

Chris Reynolds

We reached out to Chris Reynolds, Deluxe’s SVP/GM of worldwide localization, to find out more.

Can you talk about dubbing, which is a big part of this puzzle?
We use Deluxe-owned studios across the globe, along with an extensive partner network of more than 350 dubbing studios. We also have technology partners that we call on for automated language detection, conform, transcription and translation tools.

What technology do you use for these services?
Our localization solution is part of Deluxe One, Deluxe’s cloud-based platform. It combines cloud-based automation with integrated web applications for our workforce, helping content creators and distributors localize content in order to reach audiences.

You seem to have a giant well of talent to pull from.
We’ve been building up our workforce for over 15 years. Today’s translations and audio mixes have to be culturally adapted so that content reflects the creative and emotional intent of writers, directors and actors. We want the content to resonate and the audience to react appropriately.

How did Deluxe build this network?
Deluxe launched its localization group over 15 years ago, and from the beginning we believed that a global workforce is needed to meet the demands of global audiences, giving them access to high-quality localized content quickly and affordably.

Because our localization platform and services have been developed to support Deluxe’s end-to-end media supply chain offering, we know how to provide quality results across multiple release windows.

We continue to refine our services to simplify reuse of localized assets across theatrical, broadcast and streaming platforms. The build-up of our distributed workforce was intentional and based on ensuring that we’re recruiting talent whose quality of work supports these goals. We match our people to the content and workflows that properly leverage their skill sets.

Can you talk about your workflow/supply chain? What tools do you call on?
We’ve been widening our use of automation and AI technologies. The goal is always to speed up our processes while maintaining pristine quality. This means expanding our use of automated speech recognition (ASR) and machine translation (MT), as well as implementing automated QC, conversion, conform, compare and task assignment features to streamline our localization supply chain. The integration of these technologies into our cloud-based localization platform has been a significant focus for us.
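
As a rough illustration of how such a chain fits together, here is a minimal Python sketch of automated stages (ASR, machine translation, automated QC) run in sequence. All of the names here are invented for illustration; this is not Deluxe One’s actual API.

```python
# Hypothetical localization pipeline: each automated stage is a plain function
# so stages can be chained and human review inserted where needed.
from dataclasses import dataclass, field

@dataclass
class Job:
    title: str
    source_lang: str
    target_lang: str
    artifacts: dict = field(default_factory=dict)

def asr(job: Job) -> Job:
    # Automated speech recognition produces a rough transcript.
    job.artifacts["transcript"] = f"[ASR transcript of {job.title}]"
    return job

def machine_translate(job: Job) -> Job:
    # First-pass machine translation of the transcript.
    job.artifacts["draft"] = f"[{job.target_lang} MT draft of transcript]"
    return job

def automated_qc(job: Job) -> Job:
    # Automated checks (timing, formatting, missing lines) before human review.
    job.artifacts["qc_report"] = "[automated QC report]"
    return job

def run(job: Job) -> Job:
    for stage in (asr, machine_translate, automated_qc):
        job = stage(job)
    return job

job = run(Job("Example Feature", source_lang="en", target_lang="pl"))
print(sorted(job.artifacts))        # ['draft', 'qc_report', 'transcript']
```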

Is IMF part of that workflow?
IMF is absolutely a part of the workflow. In fact, its adoption is being driven by the rapid growth of localized international versions for over-the-top (OTT), video-on-demand (VOD) and subscription video-on-demand (SVOD) services. Deluxe has been using component-based localization workflows since their inception, and componentization is the core concept IMF uses to simplify versioning.
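
To make the component idea concrete, here is a small hypothetical sketch: one shared picture asset is combined with per-language audio and subtitle components to form each territory’s version, much as an IMF composition playlist (CPL) references components rather than duplicating the essence. The asset and function names below are made up for illustration.

```python
# Simplified stand-in for IMF component versioning: localized versions
# reference shared assets instead of carrying their own copies.
shared_assets = {
    "video_ov": "feature_video.mxf",     # original-version picture
    "audio_en": "audio_en_5.1.mxf",
    "audio_fr": "audio_fr_5.1.mxf",
    "subs_de": "subs_de_imsc1.xml",
}

def composition(video, audio, subtitles=None):
    """A toy analogue of a CPL: a version is just a set of references."""
    cpl = {"video": shared_assets[video], "audio": shared_assets[audio]}
    if subtitles:
        cpl["subtitles"] = shared_assets[subtitles]
    return cpl

# Every version reuses the same picture; only language components differ.
versions = {
    "en-US": composition("video_ov", "audio_en"),
    "fr-FR": composition("video_ov", "audio_fr"),
    "de-DE": composition("video_ov", "audio_en", "subs_de"),
}
print(versions["de-DE"])
```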

Is the workflow automated?
To an extent … we add new technology to our workflow to make things more efficient, and these technologies are not meant as a replacement for our talent. Automation frees those artists from the more manual tasks and allows them to focus on the more creative aspects of localization.

By using automation in our workflows, we have been able to take on additional projects and explore new areas in M&E localization. We will continue to use workflow automation and AI/ML in our work.

Can you talk about transcription and how you handle that process?
Transcription is a critical part of the localization process and is a step that demands the highest possible quality. Whether we’re creating a script, delivering live or prerecorded captions, or creating an English template for subsequent translations, the initial transcription must be accurate.

Our teams use ASR to help speed up the process, but because the expectation is so high and many transcription tasks also require annotation that current AI technologies can’t deliver, our human workforce must review, qualify, amend and adapt the ASR output.
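
As a hypothetical illustration of that human-in-the-loop step, a review tool might keep the raw ASR text alongside the reviewer’s amended text and the annotations (speaker labels, non-verbal cues) that ASR alone can’t supply. This record layout is assumed, not Deluxe’s.

```python
# One transcript segment as a review tool might track it: machine output,
# human-amended text and annotations the ASR pass cannot provide.
from dataclasses import dataclass, field

@dataclass
class Segment:
    start: float                    # seconds
    end: float
    asr_text: str                   # raw machine output
    reviewed_text: str = ""         # human-amended version
    annotations: list = field(default_factory=list)

seg = Segment(12.4, 15.1, asr_text="i cant believe its here")
seg.reviewed_text = "I can't believe it's here."
seg.annotations += ["speaker: MARIA", "[whispered]"]
print(seg)
```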

All of our transcription work undergoes a secondary QA at some point. Sometimes the initial deliverable is immediate, as is the case with live captions, but even then, revisions are often made during secondary key-outs or before the file is delivered for subsequent downstream use.

What are some of the biggest challenges for localization?
The rise in original content production and global distribution, and the need to localize that content faster than ever, is probably the biggest general challenge. We also continue to see new competitors entering an already crowded market.

And it’s not just competitors — customers are challenging our industry standards too, with some bringing localization in-house. To accommodate this change, we’re always adapting and refining workflows to fit what our customers need, and we’re always checking in with them to make sure our teams can anticipate change and create solutions that solve challenges before they impact the rest of the supply chain.

What are some projects that you’ve worked on recently?
Some examples are Star Wars: The Rise of Skywalker, The Mandalorian, The Irishman, Joker, Marriage Story and The Marvelous Mrs. Maisel.

Finally, taking into account the COVID-19 crisis, I imagine that worldwide content will be needed even more. How will this affect your part of the process?
The demand for in-home entertainment continues to climb, mainly driven by an uptick in OTT and gaming in light of these unprecedented events. We are working with creators, media owners and platforms to provide localization services that can help respond to this recent influx in the global distribution of films and series.

Unfortunately, because several productions and dubbing studios around the world have had to shut down, there will be delays getting new content out. We’re working closely with our customers to complete as much work as we can during this time so that everyone can ramp up quickly once things start back up.

We’re also seeing big increases in catalog content orders for streaming platforms. Our teams are helping by providing large-scale subtitle and audio conforms, creating any new subtitles as needed, and creating dubbed audio versions for those languages that are not affected by studio closures.

Localization: Removing language barriers on global content

By Jennifer Walden

Foreign films aren’t just for cinephiles anymore. Streaming platforms are serving up international content to the masses. There are incredible series — like Netflix’s Spanish series Money Heist, the Danish series The Rain and the German series Dark — that would otherwise have been unknown to American audiences. The same holds true for American content reaching foreign audiences. For instance, the Starz series American Gods is available in French. Great stories are always worth sharing, and language shouldn’t be the barrier that holds back the flood of global entertainment.

Now I know there are purists who feel a film or show should be experienced in its original language, but admit it, sometimes you just don’t feel like reading subtitles. (Or, if you do, you can certainly watch those aforementioned shows with subtitles and hear the original language.) So you pop on the audio for your preferred language and settle in.

Chris Carey in the Burbank studio

Dubbing used to be a poorly lipsynced affair, with bad voiceovers that didn’t fit the characters on screen at all. Not so anymore. In fact, dubbing has evolved so much that it’s earned a new moniker — localization. The growing offering of globally produced content has dramatically increased the demand for localization. And as they say, practice makes perfect… or better, anyway.

Two major localization providers — BTI Studios and Iyuno Media Group — have recently joined forces under the Iyuno brand, now headquartered in London. Together, they have 40 studio facilities in 30 countries and support 82 languages, according to Chris Carey, Iyuno’s chief revenue officer and managing director of the Americas.

Those are impressive numbers. But what does this mean for the localization end result?

Iyuno is able to localize audio locally: the localization for a specific market happens in that market. This means the language is current. The actors aren’t just fluent; they’re native speakers. “Dialects change really fast. Slang changes. Colloquialisms change. These things are changing all the time, and if you’re not in the market with the target audience you can miss a lot of things that a good geographically diverse network of performers can give you,” says Carey.

Language expertise doesn’t end with actor performance. There are also the scripts and subtitles to think about. Localization isn’t a straight translation. There’s the process of script adaptation in which words are chosen based on meaning (of course) but also on syllable count in order to match lipsync as closely as possible. It’s a feat that requires language fluency and creativity.

BTI France

“If you think about the Eastern languages, and the European and Eastern European languages, they use a lot of consonants and syllables to make a simple English word. So we’re rewriting the script to use a different word that means the same thing but will fit better with the actor on-screen. So when the actor says the line in Polish and it comes out of what appears to be the mouth of the American actor on-screen, the lipsync is better,” explains Carey.
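
A toy sketch of that fitting problem, assuming a crude vowel-group heuristic for counting syllables: given several candidate translations with the same meaning, prefer the one whose estimated syllable count is closest to the original line. Real adaptation is a creative, human job; this only illustrates the counting idea.

```python
import re

def estimate_syllables(text):
    # Rough heuristic: count runs of vowels as syllable nuclei.
    return len(re.findall(r"[aeiouyąęó]+", text.lower()))

def best_fit(original, candidates):
    target = estimate_syllables(original)
    return min(candidates, key=lambda c: abs(estimate_syllables(c) - target))

line = "I need your help."                               # ~4 syllables
candidates = ["Potrzebuję twojej pomocy.", "Pomóż mi."]  # ~9 vs. ~3
print(best_fit(line, candidates))                        # "Pomóż mi."
```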

Iyuno doesn’t just do translations — dubbing and subtitles — to and from English. Of the 82 languages it covers, it can translate any one into any other. This process requires a network of global linguists and a cloud-based infrastructure that can support tons of video streaming and asset sharing — including the “dubbing script” that’s been adapted into the destination language.

The magic of localization is 49% script adaptation, 49% dialogue editing and 2% processing in Avid Pro Tools, like time shifting and time compression/expansion to finesse the sync. “You’re looking at the actors on screen and watching their lip movement and trying to adjust this different language to come out of their mouth as close as possible,” says Carey. “There isn’t an automated-fit sound tool that would apply for localization. The actor, the director and the engineer are in the studio together working on the sync, adjusting the lines and editing the takes.”
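
The Pro Tools work Carey describes is done by hand, but the underlying operation, time compression/expansion that preserves pitch, can be sketched with an off-the-shelf phase vocoder such as librosa’s. The file names below are placeholders.

```python
# Pitch-preserving time compression of a dubbed line, assuming librosa and
# soundfile are installed and "dubbed_line.wav" is a placeholder recording.
import librosa
import soundfile as sf

y, sr = librosa.load("dubbed_line.wav", sr=None)

# The take runs slightly long for the on-screen mouth movement: a rate above
# 1.0 shortens the duration while the phase vocoder preserves the pitch.
shorter = librosa.effects.time_stretch(y, rate=1.05)

sf.write("dubbed_line_fit.wav", shorter, sr)
```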

As the voice record session is happening, “sometimes the actor will suggest a better way to say a line, too, and they’ll do an ‘as recorded script,’” says Carey. “They’ll make red lines and markups to the script, and all of that workflow we have managed into our technology platform, so we can deliver back to the customer the finished dub, the mix, and the ‘as recorded script’ with all of the adaptations and modifications that we had done.”

Darkest Hour is just one of the many titles they’ve worked on.

Iyuno’s technology platform (its cloud-based collaboration infrastructure) is custom-built. It can be modified and updated as needed to improve the workflow. “That backend platform does all the script management and file asset management; we are getting the workflow very efficient. We break all the scripts down into line counts by actor, so he/she can do the entire session’s worth of lines throughout that show. Then we’ll bring in the next actor to do it,” says Carey.
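
That breakdown step is easy to picture in a few lines of Python: group the dialogue by character so each performer can record all of their lines in one session. The sample script data below is invented.

```python
# Group a script's dialogue lines by character for session planning.
from collections import defaultdict

script = [
    ("SHADOW", "Where are we going?"),
    ("WEDNESDAY", "Somewhere important."),
    ("SHADOW", "That's not an answer."),
    ("LAURA", "It never is."),
]

lines_by_actor = defaultdict(list)
for character, line in script:
    lines_by_actor[character].append(line)

# Line counts per actor drive how long each recording session needs to be.
for character, lines in lines_by_actor.items():
    print(f"{character}: {len(lines)} line(s)")
```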

Pro Tools is the de facto DAW for all the studios in the Iyuno Media Group. Having one DAW as the standard makes it easy to share sessions between facilities. When it comes to mic selection, Carey says the studios’ engineers make those choices based on what’s best for each project. He adds, “And then factor in the acoustic space, which can impart a character to the sound in a variety of different ways. We use good studios that we built with great acoustic properties and use great miking techniques to create a sound that is natural and sounds like the original production.”

Iyuno is looking to improve the localization process even further by building up a searchable database of actors’ voices. “We’re looking at a bit more sophisticated science around waveform analysis. You can do a Fourier transform on the audio to get a spectral analysis of somebody’s voice. We’re looking at how to do that to build a sound-alike library so that when we have a show, we can listen to the actor we are trying to replace and find actors in our database that have a voice match for that. Then we can pull those actors in to do a casting test,” says Carey.
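
A back-of-the-envelope version of that idea, assuming plain FFT magnitude spectra and cosine similarity in place of the richer features a production system would use:

```python
# Rank candidate voices by how closely their average magnitude spectrum
# matches a target voice. Random noise stands in for real recordings.
import numpy as np

def spectral_profile(samples, frame=2048):
    # Average magnitude spectrum over fixed-size frames.
    n = len(samples) // frame
    frames = samples[: n * frame].reshape(n, frame)
    return np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0)

def similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
target = spectral_profile(rng.standard_normal(48000))
candidates = {name: spectral_profile(rng.standard_normal(48000))
              for name in ("voice_a", "voice_b", "voice_c")}

ranked = sorted(candidates, key=lambda k: similarity(target, candidates[k]),
                reverse=True)
print("Closest match:", ranked[0])
```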

Subtitles
As for subtitles, Iyuno is moving toward a machine-assisted workflow. According to Carey, Iyuno is inputting data on language pairs (source and destination) into software that trains on that combination. Once it “learns” how to do those translations, the software will provide a first pass “in a pretty automated fashion, quite faster than a human would have done that. Then a human QCs it to make sure the words are right, makes some corrections, corrects intentions that weren’t literal and need to be adjusted,” he says. “So we’re bringing a lot of advancement in with AI and machine learning to the subtitling world. We expect that to continue to move pretty dramatically toward an all-machine-based workflow.”
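
Schematically, that machine-first pass might look like the sketch below: an MT stub drafts each cue, a human post-edit overrides it where needed, and the resulting (source, draft, final) triples could feed future training. The translate() function is a stand-in, not a real MT API.

```python
# Machine-first subtitling with a human QC pass; post-edits are retained
# as potential training data for the language pair.
def translate(text, src, dst):
    return f"[{dst} draft of: {text}]"      # stand-in for a trained MT model

human_corrections = {"Where is he?": "¿Dónde está?"}   # reviewer's fixes

training_triples = []
for cue in ["Where is he?", "He left an hour ago."]:
    draft = translate(cue, "en", "es")
    final = human_corrections.get(cue, draft)          # human QC pass
    training_triples.append((cue, draft, final))       # feeds future training

print(training_triples[0])
```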

But will machines eventually replace human actors on the performance side? Carey asks, “When were you moved by Google assistant, Alexa or Siri talking to you? I reckon we have another few turns of the technology crank before we can have a machine produce a really good emotional performance with a synthesized voice. It’s not there yet. We’re not going to have that too soon, but I think it’ll come eventually.”

Main Image: Starz’s American Gods – a localization client.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Rohde & Schwarz upgrades Clipster for IMF, adds Venice 4K

Rohde & Schwarz DVS has added key functions to its R&S Clipster IMF mastering workflow, including IMF-compliant closed captions and subtitles, composition playlist markers and forensic watermarking. The additional functions are aimed at making R&S Clipster’s IMF workflow a complete solution for UHD, 3D and Rec. 2020 mastering.

In the new version, the timeline can be used as usual to arrange all the content, including captions and subtitles. An integrated IMF delivery tool guides users through all the necessary steps to ensure the package they have created is IMF-compliant. Users can also play back UHD IMF content in real time for visual quality control. R&S Clipster now also supports NexGuard forensic watermarking, which lets content owners and their vendors protect prerelease content up to UHD and 4K.

Meanwhile, Rohde & Schwarz has expanded its R&S Venice ingest and production server product line with the new R&S Venice 4K server. With the new server, TV studios can set up file-based studio production workflows in 4K that, according to the company, can be as fast as similar HD workflows. R&S Venice 4K allows direct recording in 4K without any time-consuming stitching processes. At the same time, the material is converted to HD-SDI and saved as a file. This parallel generation of both HD and 4K content provides TV studios with a feasible transition option until content is broadcast entirely in 4K.