Cloud-based machine translation has many use cases, the most common being translating documents and phrases for letters and e-mails. Among AI services, machine translation is generally more accurate than speech-to-text in video workflows. In practice, however, the average video producer may find it difficult to have a subtitle or caption file translated automatically the way an e-mail or a Word document can be. Subtitling and closed captioning for video must meet specific quality and accessibility guidelines. Special terms, names, and sound cues that also need translation can leave users scratching their heads and doing a lot of cutting and pasting into a machine translation engine like Google Translate just to finish a simple subtitling project. At SyncWords, we address these issues with a variety of automated tools.
Common subtitle file formats that pose a challenge
Some subtitle formats, such as SRT, WebVTT, and IMSC/TTML, are simple text files that can be opened in any word processor for editing. More advanced formats like EBU-STL, SCC, and MCC, however, are not ready to be translated as-is: they require a software tool that can parse their data and timing into something human readable. Once the text is extracted, a user can begin the machine translation process. Even with the simpler formats like SRT, a word processor is not always ideal for translation and editing.
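To illustrate why even the "simple" formats need parsing before translation, here is a minimal sketch of reading SRT cues into timed text. It assumes a well-formed file; real-world SRT files can deviate from the spec, and production tools handle far more edge cases.

```python
import re

def parse_srt(text):
    """Parse SRT content into (start, end, text) cues, times in seconds.

    Minimal sketch: assumes well-formed blocks of
    index / timing line / text lines separated by blank lines.
    """
    cues = []
    for block in re.split(r"\n\s*\n", text.strip()):
        lines = block.splitlines()
        if len(lines) < 3:
            continue
        m = re.match(
            r"(\d+):(\d+):(\d+)[,.](\d+)\s*-->\s*(\d+):(\d+):(\d+)[,.](\d+)",
            lines[1],
        )
        if not m:
            continue
        h1, m1, s1, ms1, h2, m2, s2, ms2 = map(int, m.groups())
        start = h1 * 3600 + m1 * 60 + s1 + ms1 / 1000
        end = h2 * 3600 + m2 * 60 + s2 + ms2 / 1000
        cues.append((start, end, "\n".join(lines[2:])))
    return cues
```

Once the cues are in this form, the text can be sent to a translation engine while the timing is kept aside for reassembly.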
Timing and text segmentation
In video workflows, working with the subtitle text is only half the battle. A video editor, or even a seasoned captioner, must consider timing and text segmentation when applying machine translation. After all, the final product is text on top of video, and too much text appearing at once makes for poor viewing. Timing is just as important: translated text must preserve the original timing so it does not distract viewers who are also trying to follow the video content. Ideally, AI automation should time the text at the word level to determine where the machine translation fits into the video most effectively.
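As a rough illustration of the segmentation problem, the sketch below redistributes a translated sentence across the original cues in proportion to each cue's duration. This is a naive stand-in for the word-level AI timing described above (real alignment uses speech timing, not proportional splitting), and the function name is hypothetical.

```python
def refit_translation(cues, translated_text):
    """Redistribute translated words across existing cues.

    `cues` is a list of (start, end, source_text) tuples; returns new
    cues with the same timing but the translated text split across them
    in proportion to cue duration. Naive sketch, not word-level alignment.
    """
    words = translated_text.split()
    total = sum(end - start for start, end, _ in cues)
    out, i = [], 0
    for n, (start, end, _) in enumerate(cues):
        if n == len(cues) - 1:
            take = len(words) - i  # last cue absorbs any remainder
        else:
            take = round(len(words) * (end - start) / total)
        out.append((start, end, " ".join(words[i:i + take])))
        i += take
    return out
```

Even this crude approach shows why timing metadata must travel with the text: a translation that is longer or shorter than the source still has to fit the same on-screen windows.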
In Canada and the United States, there are strict government guidelines for subtitle and closed captioning accessibility. Among them is a speed, or words-per-minute (WPM), limit that must be respected for closed captioning to be accessible. This means that in some instances a sentence must be condensed so it is easier to read when the dialogue is too fast: if there is too much text on screen and only two seconds to read it, the viewer will not have enough time. The same applies to a translated subtitle project. The machine translation workflow must preserve that timing to keep videos properly accessible for viewers.
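The WPM check itself is simple arithmetic: words in a cue divided by its duration in minutes. The sketch below flags cues that exceed a limit; the 200 WPM default is purely illustrative, as actual limits depend on the broadcaster's or regulator's guidelines.

```python
def flag_too_fast(cues, max_wpm=200):
    """Flag cues whose reading speed exceeds a words-per-minute limit.

    `cues` is a list of (start, end, text) tuples with times in seconds.
    Returns (start, end, wpm) for each cue over the limit. The default
    limit is illustrative, not a regulatory value.
    """
    flagged = []
    for start, end, text in cues:
        duration = max(end - start, 1e-6)  # guard against zero-length cues
        wpm = len(text.split()) / (duration / 60)
        if wpm > max_wpm:
            flagged.append((start, end, round(wpm)))
    return flagged
```

A translated cue that passes this check in the source language can easily fail it after translation, since many languages expand in word count, which is why condensation has to be revisited after machine translation.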
Quality assurance of the translated text
A translator may be needed to proofread the machine-translated text to make sure it is both accurate and meets the guidelines of professional subtitling and closed captioning workflows. Without the proper software tools, this can turn into a very manual and labor-intensive editing task. In addition, these tools must be available online, since many translators work remotely as contractors. Leveraging AI tools for automation and timing makes a translator's job much easier and provides a clean workflow for users who are new to subtitle translation. Ultimately, machine translation can help translate subtitles and closed captions, but it is only one piece of the puzzle in delivering a proper video program to international audiences.