in All Things Localization

With both Google and Meta vying to lead the vanguard of AI translation, an ironic question presents itself: does ‘universal’ translation mean capturing live interpretation across 1000 languages, or across 200?

Earlier this year, we looked at how Meta is pursuing the theoretical grail of AI machine translation: a global translation model that can instantly convert any source language into any target language, with a focus on speech-to-speech translation to drive global participation in its ventures into the early metaverse. With the product tentatively titled a ‘universal speech translator’, it hardly seems a stretch of branding to claim that a go-between for some 200 individual languages deserves the name. The trouble is that, months on, another Silicon Valley tech giant is now challenging that claim with a venture of its own: Google’s ‘1000 Languages Initiative’, an effort to develop an AI large language model (LLM) that converses fluently in more languages than most people can name.

Globally, an estimated 7000+ languages exist to translate between, yet only around 23 of them account for half of all the world’s communications. How far down the long tail of less-spoken tongues and dialects does it make sense to extend AI solutions? Given the computational costs, the point at which the investment ceases to be practical depends largely on the intended use case. For Meta’s purposes, a popular metaverse is unlikely to lack a user base with 200 of the world’s most common (and rarer) languages covered on demand. That is also a relatively forgiving scenario: fleeting interactions in a diversionary setting may gain or lose appeal over mistranslations, but they are unlikely to harm users. The same cannot be said for contexts like medical translation, where an undetected mistake can make the difference between life and death. Moreover, language access is critical to research and development in fields like medicine, where the diversity of patient populations reached in clinical trials directly impacts the viability of pharmaceuticals in global markets. The same holds for information services like Google’s own, where delivering technologies into new markets depends on linguistic equivalency, and where the analytics of a diverse user base carry real value.

Unfortunately, as one report asserts, “Large language models…aren’t good enough for pharma or finance.” Automation drives innovation in translation as in other fields, but the real-world limitations of bleeding-edge translation tools confine them to heavily supervised contexts, if they can assist at all. Even as technology eases access between languages, human linguists remain critical to the effective use of translation technologies like machine translation engines, a fact that headline-grabbing developments from Big Tech can easily obscure. The futuristic dream of instant communication with anyone may genuinely be under development, but wherever accurate, consistent results are a must, the foreseeable future of translation remains machine-human.
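To make that machine-human workflow concrete, here is a minimal sketch in Python of how a localization pipeline might route machine-translated segments: output above a quality-estimation threshold is auto-approved, while everything below it is flagged for a human linguist to post-edit. The `Segment` fields, the threshold value, and the confidence scores are illustrative assumptions, not any specific vendor’s or engine’s API.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Segment:
    source: str        # original text
    mt_output: str     # machine translation
    confidence: float  # hypothetical quality-estimation score in [0.0, 1.0]

def route_segments(
    segments: List[Segment], threshold: float = 0.85
) -> Tuple[List[Segment], List[Segment]]:
    """Split MT output into auto-approved segments and segments
    flagged for human post-editing, based on a confidence threshold."""
    approved = [s for s in segments if s.confidence >= threshold]
    for_review = [s for s in segments if s.confidence < threshold]
    return approved, for_review

# Example: a lower-confidence medical segment gets flagged for review.
batch = [
    Segment("Take one tablet daily.",
            "Nehmen Sie täglich eine Tablette ein.", 0.97),
    Segment("Do not exceed the stated dose.",
            "Überschreiten Sie nicht die angegebene Dosis.", 0.72),
]
ok, review = route_segments(batch)
print(f"{len(ok)} auto-approved, {len(review)} sent to human linguists")
```

In high-stakes domains like pharma, the threshold would simply be raised, or routing skipped entirely, so that every segment passes through human review.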

For companies navigating complex industries with growing multilingual communication needs, CSOFT International offers technology-driven, human-enhanced localization solutions in over 250 languages. To learn more, visit us at csoftintl.com.
