In the weeks since we last surveyed the evolving realm of language AI, machine translation continues to advance, with innovative trends in localization gaining industry focus this month, while startling experimental breakthroughs are again challenging assumptions about what this class of technologies can achieve – now without even preliminary training, and across languages it has never translated between.
Researchers from MIT, McGill, and Cornell this week announced a new linguistic model that departs from the usual pattern of proving that it can solve linguistic problems, instead proving that it knows how and why to do so. Specifically, the group worked to create an algorithm that outputs a program for the correct grammar – i.e., the set of rules – governing a body of language it reviews without prior knowledge. Rather than feeding the algorithm massive amounts of monolingual data to train with, this effort aimed to simulate human thinking by giving the algorithm the assumption that there is a correct way to understand a set of much smaller but closely related datasets. Moreover, it challenged it to do so with datasets in different languages, none of which it had learned in any prior capacity. In learning to do so, the model proved able to identify similarities between languages that it could carry over from one to the next, effectively saving itself the effort of comprehensively learning each individually. As MIT News reports, “The researchers also tried pre-programming the model with some knowledge it ‘should’ have learned if it was taking a linguistics course, and showed that it could solve all problems better.” In short, language AI has the potential not only to improve its mastery of languages one by one, but to learn human language more generally, in such a way that it can apply global insights to reason more accurately about specific tasks. It also does so better on its own than we can teach it to.
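To give a flavor of what “outputting a program for the correct grammar” means, here is a deliberately simplified sketch – not the researchers’ actual model – in which a program searches tiny word-pair datasets for a single rewrite rule that explains them all. The data and the `infer_suffix_rule` function are illustrative assumptions, not part of the published work:

```python
def infer_suffix_rule(pairs):
    """Given (base, inflected) word pairs, return the one suffix s such that
    base + s == inflected for every pair, or None if no single suffix fits."""
    rule = None
    for base, inflected in pairs:
        if not inflected.startswith(base):
            return None  # this pair cannot be explained by a suffix rule
        suffix = inflected[len(base):]
        if rule is None:
            rule = suffix          # first hypothesis
        elif rule != suffix:
            return None            # data contradicts the hypothesis
    return rule

# Tiny, closely related datasets in different languages (toy examples):
english = [("cat", "cats"), ("dog", "dogs")]
spanish = [("gato", "gatos"), ("libro", "libros")]

for name, data in [("English", english), ("Spanish", spanish)]:
    print(name, "pluralization rule: add", repr(infer_suffix_rule(data)))
```

Even this toy version hints at the cross-lingual payoff described above: both datasets yield the same rule, so a hypothesis confirmed in one language becomes a cheap starting point in the next, rather than something relearned from scratch.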
Within practical applications of machine translation for the localization world, Slator has featured a number of interesting MT-related developments ahead of the Association for Machine Translation in the Americas’ 2022 gathering. Highlights such as Cairo-based researchers asking, “Can the NMT neural network translate into and from a language it has never seen before?” underscore how the aforementioned breakthroughs in experimental AI correlate with real priorities in localization, such as extending MT’s command of high-resource languages to better encompass low-resource languages. Slator also points to the importance of MT applications like machine interpreting and machine dubbing, echoing themes from CSOFT’s new white paper on streaming entertainment localization, which you can now download here. With further news including the Singapore government’s formal commitment to the use of MT in multilingual communications, innovation not only in the fundamentals of this technology but also in its real-world applications is certainly gaining traction in 2022.
To learn more about all of CSOFT’s technology-driven translation and localization solutions in 250+ languages, or to get in touch about your next translation project, visit us at csoftintl.com!