With news this week that Google has improved some of the key linguistic functions of its search engine's machine learning algorithms, including a spell check upgrade that Google's head of search considers a bigger leap on its own than the previous five years of progress, this is an interesting moment in the brief history of language-related AI to reflect on how things have advanced within the language services industry, particularly in neural machine translation (NMT).

Much has changed in the few years since NMT first made waves with its potential to revolutionize localization, down to the very questions people ask about it. Three years ago, for instance, one significant concern was how neural MT might impact global language diversity. While concerns about technology's unintended consequences for minority languages are still very much in circulation, the more discussed ethical issue today is AI's inclusivity within language groups, as algorithms are now known to generate statistical in-groups and out-groups along demographic lines. Gender bias in translation is a familiar example: systems translating from gender-neutral languages have tended to default to male pronouns for high-status professions. That these problems have not manifested in crises specific to the translation industry partly reflects the relatively moderate rollout of these capabilities, as well as just how large a performance gap remains between raw machine translation and human-in-the-loop models even several years on. Indeed, the sheer diversity of populations now interacting in the globalized world economy may pose a greater challenge to NMT's effectiveness than NMT poses to global diversity, as the limitations of a machine-centric model of translation have come into clearer focus.

In terms of performance, one of the major things that has changed for the better is our understanding of how NMT works and how to improve on raw machine translation. As in its infancy, NMT today still requires intensive work from human linguists, not only to iron out any linguistic flaws that arise but also to verify that the model itself is performing correctly. Whereas conventional translation is about rendering meaning with the correct grammar, down to conjugations and declensions, machine translation post-editing (MTPE) also involves verifying the mechanism behind those choices, which is arguably the harder task. To reduce the burden on human linguists and engineers, focus has shifted toward the crucial element of MT training, whereby neural translation models are fine-tuned on linguistic datasets known to contain accurate translations in the relevant subject matter areas, as sketched in the example below.
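For readers curious what this training step can look like in practice, here is a minimal sketch in Python using the open-source Hugging Face transformers and datasets libraries to adapt a public Helsinki-NLP baseline model to a small, hypothetical in-domain corpus. The model name, corpus, and hyperparameters are illustrative assumptions only, not a description of any particular production pipeline.

```python
from datasets import Dataset
from transformers import (
    DataCollatorForSeq2Seq,
    MarianMTModel,
    MarianTokenizer,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

# Hypothetical in-domain corpus: vetted English->German segment pairs.
# In practice this would come from a translation memory or curated bitext.
corpus = {
    "en": [
        "The patient presented with acute abdominal pain.",
        "Administer the dose twice daily with food.",
    ],
    "de": [
        "Der Patient stellte sich mit akuten Bauchschmerzen vor.",
        "Die Dosis zweimal täglich zu den Mahlzeiten verabreichen.",
    ],
}

model_name = "Helsinki-NLP/opus-mt-en-de"  # public general-domain baseline
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

def preprocess(batch):
    # Tokenize source sentences, and reference translations as training labels.
    features = tokenizer(batch["en"], truncation=True, max_length=128)
    labels = tokenizer(text_target=batch["de"], truncation=True, max_length=128)
    features["labels"] = labels["input_ids"]
    return features

train_data = Dataset.from_dict(corpus).map(preprocess, batched=True)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="domain-mt", num_train_epochs=1),
    train_dataset=train_data,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()  # nudges the general-domain model toward the domain corpus
```

The design point this illustrates is that domain training starts from an existing general model rather than from scratch; human-vetted segment pairs do the work of steering it toward the terminology and phrasing of a specific subject area.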

As NMT practices continue to advance amid steady enthusiasm across our industry, it is worth bearing in mind that the natural language processing (NLP) capabilities now entering a 'golden era' will at some point reach practical applications specific to localization. When that happens, a paradigm shift will likely get underway as human linguists become less central to quality assurance. First, however, we may see a significant uptick in the market for machine translation solutions as the world economy continues to weather crisis. Because NMT is first and foremost a driver of cost-effectiveness and efficiency for high-volume translations, it is often the preferable method when budget concerns are decisive. NMT may not look quite as cutting-edge as it once did, but it is a more mature technology with an established role in localization strategy.

With a global network of linguists, subject matter experts, and engineers trained in the latest best practices for machine translation and linguistic review, CSOFT International can help companies realize cost-effective solutions that meet all of their translation requirements for entering new markets. You can learn more about our translation technologies and MTPE services at csoftintl.com!