How it works, how it hurts, and how Meridian Linguistics can help

As technology has automated so much of our everyday lives, it is natural to assume that translation, too, should be handled by computers. When scientists first began to confront this task in the late fifties and early sixties, however, they quickly realized that to automate translation they first had to understand language itself – and it soon became clear just how little they knew about how we communicate.

The task they had initially underestimated was to become the holy grail of artificial intelligence. Half a century after those initial efforts, it remains clear to anyone who has used Google Translate that we are not quite there yet. Ray Kurzweil, the futurist and artificial intelligence expert, has predicted that even if a computer could pass the Turing test by 2029 (which he believes is likely), we would still not be able to rely completely on computers for translation, because of the boundless shades of historical and personal context that inform human language.

On the #becauselinguistics blog: what can Google’s AlphaGo tell us about the future of machine translation?

Despite these longstanding issues, many automated translation systems are in use within the translation industry. They rely on rule-based, statistical, or, more recently, neural network approaches built from available translated data to produce output of variable quality, which human translators then post-edit to fashion fluent, coherent, and accurate text. This process is called PEMT (Post-Editing of Machine Translation).
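To make the idea concrete, here is a minimal sketch of the machine half of a PEMT workflow, using the open-source Hugging Face transformers library and a publicly available Helsinki-NLP neural model. The model name and sample sentences are purely illustrative, and this is not the engine behind any particular commercial service; it simply shows how a neural system generates the raw draft that a human post-editor then revises.

```python
# Minimal illustration of the "machine translation" half of PEMT:
# a pretrained neural model produces a raw draft, which a human
# post-editor would then revise for fluency, coherence, and accuracy.
from transformers import MarianMTModel, MarianTokenizer

# Publicly available English-to-French model (illustrative choice only).
model_name = "Helsinki-NLP/opus-mt-en-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

segments = [
    "Your order has shipped and should arrive within five business days.",
    "Please contact our support team if you have any questions.",
]

# Generate raw machine translations for each segment.
batch = tokenizer(segments, return_tensors="pt", padding=True)
outputs = model.generate(**batch)
drafts = tokenizer.batch_decode(outputs, skip_special_tokens=True)

for source, draft in zip(segments, drafts):
    print(f"SOURCE: {source}")
    print(f"MT DRAFT (to be post-edited): {draft}")
```

Everything after the final print statement is where the human comes in: the post-editor compares each draft against the source and corrects errors of meaning, terminology, and tone that the machine cannot reliably catch on its own.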

For high-volume projects where human translation would be cost-inefficient, we recommend asking about our machine translation options, offered in partnership with Systran and SmartCat. These systems use state-of-the-art machine translation and can even be customized for your purpose, whether it be communications, e-commerce, or e-discovery.

To