How it works, how it hurts, and how Meridian Linguistics can help
As technology has automated so much of everyday life, it is natural to assume that translation, too, should be handled by computers. When scientists first confronted this task in the late fifties and early sixties, however, they quickly realized that in order to automate translation they first had to understand language itself – and it became clear just how little they knew about how we communicate.
The task they had initially underestimated was to become the holy grail of artificial intelligence. Half a century after those initial efforts, it remains clear to anyone who has used Google Translate that we are not quite there yet. Ray Kurzweil, the futurist and artificial intelligence expert, has predicted that even if a computer could pass the Turing test by 2029 (which he believes is likely), we would still not be able to rely completely on computers for translation, because of the boundless shades of historical and personal context that inform human language.
Despite these longstanding challenges, many automated translation systems remain in use within the translation industry. These systems apply rule-based or statistical models, built from available translated data, to produce translations of variable quality, which human translators then post-edit in an attempt to fashion fluent, coherent, and accurate text. This process is called PEMT (post-editing of machine translation). In practice, the process, in which translators are incentivized to edit garbled text into fluent-sounding results as quickly as possible, is incredibly tedious. Worse, the source text often receives only a cursory review, which can lead to serious translation errors. While this method has been used with some success where budgets are very tight and quality matters less, we do not recommend it to most of our clients.
For some unfortunate examples of machine translation, see what happened to Taco Bell when it re-entered the Japanese market in April 2015.
There are, however, ways to leverage computational tools for greater efficiency in translation. At Meridian Linguistics, we use Computer-Aided Translation tools (CAT tools) such as SDL Trados and MemoQ. Rather than automatically translating large blocks of text from unrelated data or unrefined rule systems, these tools streamline the translation process while keeping the translator responsible for final quality checks. They accomplish this by maintaining databases of specific terms, segments, and contexts, each entry confirmed by a human translator before it is used in a translation. These tools let machines do what machines do best — tasks such as formatting, layout, number localization, and consistency checks — while relying completely on the linguistic expertise of the human translator.
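At its core, the database a CAT tool maintains is a translation memory: a store of source segments paired with their human-confirmed translations. The sketch below is a deliberately simplified model (the class and method names are ours, not the API of Trados or MemoQ, and real tools add fuzzy matching, context, and terminology layers), but it illustrates the key principle that nothing enters the memory without a translator's confirmation:

```python
# Minimal sketch of a translation memory (TM). Names and structure are
# illustrative only, not the API of any real CAT tool.

class TranslationMemory:
    def __init__(self):
        # Maps a source-language segment to its human-confirmed translation.
        self._entries = {}

    def confirm(self, source, target):
        """Store a segment pair only once a human translator confirms it."""
        self._entries[source.strip()] = target

    def lookup(self, source):
        """Return the confirmed translation for an exact match, or None."""
        return self._entries.get(source.strip())


tm = TranslationMemory()
tm.confirm("Click here to continue.", "Cliquez ici pour continuer.")
print(tm.lookup("Click here to continue."))  # reuses the confirmed translation
print(tm.lookup("Click here to cancel."))    # None: a human must translate this
```

When a segment recurs later in the text (or in a future project), the tool offers the stored translation instead of asking a machine to guess, which is also what makes the repetition savings described below possible.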
CAT tools also measure the amount of repetition in any given text. A stack of transcripts totaling 50,000 words, for example, may actually use only the same 1,000 words, repeated 50 times and interspersed with the occasional unique term. Because repeated segments take far less time to translate, CAT tools let us pass substantial time and cost savings along to our clients.
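The repetition count behind such a quote can be sketched in a few lines. This toy version simply tallies exact duplicate segments (the function name and the dictionary it returns are our own invention; real CAT tools segment text more carefully and also count fuzzy matches):

```python
# Illustrative repetition analysis, similar in spirit to the statistics a
# CAT tool reports before a job is quoted. Exact-match counting only.
from collections import Counter

def repetition_report(segments):
    """Count how many segments are repeats of an earlier segment."""
    counts = Counter(s.strip() for s in segments if s.strip())
    total = sum(counts.values())
    unique = len(counts)
    # Repeated segments can reuse an existing confirmed translation.
    return {"total": total, "unique": unique, "repeated": total - unique}


transcript = [
    "Please state your name.",
    "Please state your name.",
    "I object, your honor.",
    "Please state your name.",
]
print(repetition_report(transcript))  # {'total': 4, 'unique': 2, 'repeated': 2}
```

In this toy transcript, half the segments need no fresh translation at all; on a 50,000-word job with heavy repetition, the same arithmetic is where the savings come from.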
Click here to ask us more about how we leverage computational linguistics to offer higher quality translations at a lower cost.