
Modern language AI is advancing at an unprecedented pace, as innovation and research drive developers to create state-of-the-art language models that inch ever closer to natural human speech patterns and cognitive reasoning. In our ongoing discussion of language AI and natural language processing (NLP), we have looked at how these technologies have expanded in capability and how they continue to be integrated into global markets and industries. At the same time, developers are actively creating smaller models that are far more cost-effective and efficient than some of the enormous existing models. So, just how do these smaller models compete with the larger ones in understanding and processing language?

This week, advancements in NLP AI have again brought developers at the oft-featured DeepMind one step closer to creating language models that could generate intelligent conversation indistinguishable from that of a human across all programmed tasks. DeepMind’s latest NLP model, dubbed Gopher, comes in at 280 billion parameters, making it a much smaller computational model than giants like the Megatron-Turing model and other language AI systems that generally rely on sheer size and power to best one another. Specifically, Gopher stands out in reading comprehension, fact checking, and the detection of bias in language, areas important not only to the accuracy but also to the ethical quality of NLP communications in challenging, subjective knowledge domains. Even more importantly, the model is said to roughly halve the accuracy gap between the famed GPT-3 model and human performance on these tasks, marking a significant stride toward automating the communication of expert knowledge.


Almost simultaneously with the comparatively small Gopher model, DeepMind also introduced the still smaller RETRO model, a 7-billion-parameter model developed on a select collection of high-quality datasets spanning ten languages. Like its larger counterpart, RETRO showed improvements in tasks relating to detecting biases in language and answering specific questions. What is different about RETRO, though, is that it can learn more rapidly and from smaller datasets precisely tailored to specific knowledge areas. Specifically, RETRO uses an external memory – analogous to a cheat sheet – to quickly formulate familiar, coherent responses with a minimum of computational strain. In short, rather than being a know-it-all, RETRO is a “can-find-it-all” when needed. Between Gopher and RETRO, DeepMind is advancing an approach to NLP that depends not on a single supreme algorithm that can process anything in language, but on a model that knows enough in general and can retrieve additional help when a prompt is too challenging. All of this makes for a language AI that is cheaper to train and more computationally efficient than larger models, while still able to compete with and even outperform them.
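
For readers curious what that retrieval step looks like in practice, the sketch below illustrates the general retrieval-augmented idea in plain Python: a query is matched against a small “external memory” of passages, and the best matches are prepended to the prompt before generation. This is only a toy illustration under loose assumptions – RETRO’s actual design interleaves neighbors retrieved from a trillion-token database into the transformer via cross-attention – and the names used here (external_memory, retrieve, answer) are hypothetical, not DeepMind’s implementation.

```python
from collections import Counter
import math

# Toy "external memory": a handful of reference passages standing in for the
# massive retrieval database RETRO actually consults. All names and data here
# are hypothetical, for illustration only.
external_memory = [
    "Gopher is a 280-billion-parameter language model introduced by DeepMind.",
    "RETRO pairs a 7-billion-parameter model with a large retrieval database.",
    "Retrieval-augmented models look up relevant text instead of memorizing it.",
]

def bag_of_words(text: str) -> Counter:
    """Very crude text representation: lowercase word counts."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, memory: list[str], k: int = 2) -> list[str]:
    """Return the k passages most similar to the query -- the 'cheat sheet'."""
    q = bag_of_words(query)
    ranked = sorted(memory, key=lambda p: cosine_similarity(q, bag_of_words(p)), reverse=True)
    return ranked[:k]

def answer(query: str) -> str:
    """Prepend retrieved passages to the prompt before handing it to a
    (here, imaginary) small generator model."""
    context = "\n".join(retrieve(query, external_memory))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return prompt  # a real system would pass this prompt on to the generator

if __name__ == "__main__":
    print(answer("How many parameters does RETRO have?"))
```

The payoff mirrors the point above: the generator itself can stay small, because facts live in the lookup store rather than in the model’s parameters.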

As we have highlighted in previous posts, these experimental advances in language AI have remarkable parallels in language services. Most notably, machine translation post-editing (MTPE) applies the same fundamental strategy that DeepMind is leveraging in RETRO and Gopher: allocating scarce resources to the functions and uses that most require their attention. In MTPE, it is expert human linguists, rather than computational resources, whose effort must be conserved, and doing so effectively means involving them in precisely the processes that machine translation engines struggle with on their own. As AI continues to advance in industries beyond language services, it is validating for LSPs like CSOFT, which apply these innovations to our own sphere of AI and language technology, to see the most powerful AI models competing for the same edge that distinguishes the best translation solutions: accuracy, nuance, and ethicality. From ensuring the quality of clinical documents and supporting patient recruitment for clinical trials to delivering accurate, functional translations for high-volume documentation needs, CSOFT excels at designing the right combination of automated and human-tailored, certified translation services.
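
To make the parallel concrete, here is a minimal, hypothetical sketch of that routing logic in Python: machine-translated segments are scored, and only the low-confidence ones are queued for human post-editing. The quality_estimate heuristic, the threshold, and the function names are placeholders of our own, not CSOFT’s actual workflow or any particular MT engine’s interface.

```python
def quality_estimate(source: str, mt_output: str) -> float:
    """Placeholder quality-estimation score in [0, 1]; a real pipeline would
    use a trained QE model rather than this crude length-ratio heuristic."""
    ratio = len(mt_output) / max(len(source), 1)
    return max(0.0, 1.0 - abs(1.0 - ratio))

def route_segment(source: str, mt_output: str, threshold: float = 0.8) -> str:
    """Send only segments the engine likely struggled with to human linguists."""
    if quality_estimate(source, mt_output) >= threshold:
        return "publish machine translation as-is"
    return "queue for human post-editing"

print(route_segment("Der Patient wurde entlassen.", "The patient was discharged."))
```

However the scoring step is implemented, the allocation principle is the same one RETRO embodies: reserve the scarce, expert resource for the cases that genuinely need it.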


To learn more, visit us at csoftintl.com!
