in Language Technology, Translation

Machine Translations Will Improve the Industry’s Efficiency, But Can’t Replace Humans Yet

When Google Translate was introduced 10 years ago, the program ran on Phrase-Based Machine Translation (PBMT). This algorithm would split a sentence into words and phrases, then translate each component individually. The end result was often a fairly poor translation that lacked contextual nuance. With the introduction of the Google Neural Machine Translation (GNMT) system this past September, Google radically improved its program's translation capabilities by treating every sentence as a single unit. Twelve languages are already supported by GNMT, and the company promises to add more in the coming months.

While the initial publicity surrounding the release of GNMT was extremely positive, experts in the field of MT have since given a more measured review of the technology. Their verdict is that GNMT still has a long way to go before it supersedes human translation.

Strides Made by GNMT

NMT consists of artificial neural networks that are modeled on the neural pathways in the human brain. Much like a human brain deciphering language, the new GNMT system analyzes the meaning of each word in a sentence by considering all of the words that came before it. Whereas PBMT would analyze every word or phrase independently of the others in the sentence, GNMT evaluates the meaning of the entire sentence by the time it reaches the last word. The result is a much clearer translation than those obtained from PBMT. This is illustrated in the table below, which compares GNMT's translation of a sentence from traditional Chinese to English with those of PBMT and a human translator.
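To make the difference concrete, here is a deliberately simplified sketch (not Google's actual system, and with a made-up two-entry "phrase table" and "sentence model") contrasting PBMT-style per-word lookup with whole-sentence translation:

```python
# Toy illustration of why translating each word independently (PBMT-style)
# loses context that a whole-sentence model (NMT-style) captures.
# Both tables below are hypothetical stand-ins, not real system data.

# A PBMT-style phrase table: each source word maps to a target word in isolation.
PHRASE_TABLE = {
    "ich": "I",
    "habe": "have",
    "hunger": "hunger",
}

# A stand-in for what an NMT model learns: the output depends on the whole input.
SENTENCE_MODEL = {
    "ich habe hunger": "I am hungry",
}

def translate_pbmt(sentence: str) -> str:
    """Translate each word independently, PBMT-style."""
    return " ".join(PHRASE_TABLE.get(w, w) for w in sentence.lower().split())

def translate_nmt(sentence: str) -> str:
    """Translate the sentence as a single unit, NMT-style."""
    return SENTENCE_MODEL.get(sentence.lower(), sentence)

print(translate_pbmt("Ich habe Hunger"))  # "I have hunger" -- literal and awkward
print(translate_nmt("Ich habe Hunger"))   # "I am hungry"   -- idiomatic
```

The per-word version produces the stilted "I have hunger" because no single lookup can see that German expresses hunger with *haben* where English uses *to be*; a model that conditions on the whole sentence can.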


A Comparison of PBMT, GNMT, and Human Translation (Source: Google)

Another major advance made by NMT is that it can translate between language pairs it has not been trained on. Take, for example, English, Japanese, and Korean. The system was trained and tested on English⇄Japanese and English⇄Korean translation. While it had never been trained to translate Japanese⇄Korean, by analyzing millions of examples from the known language pairs it was able to produce relatively accurate translations. Through 3D geometric modeling of the network's internal representations, Google's research team was able to show that the system grouped sentences by meaning in this experiment, which they took as evidence of an interlingua within the network. This demonstration has been heralded as a first of its kind in MT, and is projected to significantly reduce the architecture required to support multilingual translation.

Impact on the Translation Industry

Daniel Marcu has said that NMT will increase translation rates, decrease post-editing requirements, and provide smarter tools for translators to use. But while Google's NMT has been hailed as a significant leap for MT, it is mostly suited to public use rather than to the private sector. As a result, experts developing alternative MT models are still concentrating on building systems fit for use by companies and professional translators.

Furthermore, advances in Machine Translation cannot compensate for machines' inability to keep abreast of changes in industry terminology and popular culture. So while machines are becoming more and more adept at translation, knowing what to put into them (and knowing how to refine the output so that it's as relevant as the morning news) remains a major hurdle. This is where Localization Service Providers (LSPs) come in. LSPs, such as CSOFT International, don't just translate content; they also help a company create country- and market-specific material that fits each culture's needs.