All Things Localization

Two weeks ago, in our ongoing discussion of advancements in AI language capabilities, we looked at Nvidia’s announcement of the world’s most powerful natural language generation (NLG) model to date, surpassing GPT-3 as the latest high-powered text generation model. This week, Nvidia made the decision to offer that language model, the Megatron 530B, to enterprises for the purpose of training other AI. While developments like these see experimental NLG models finding new and innovative deployments in their primary markets, the question of how they will grow and expand into global markets has long hinged on the fact that such models depend heavily on the quality of the linguistic datasets available for training. Now, alongside advances demonstrating their sheer power, news that Inspur AI Research has advanced Yuan 1.0, a novel NLG AI trained specifically for the Chinese language on an expanded set of data points, demonstrates a growing focus on meeting that challenge.

Although NLG remains an experimental class of language model with obvious hurdles lying in its path of development, this week’s news furthers our understanding of the applications and impacts this technology will have on the advancement of language AI in general. As we covered, Nvidia’s NLG model was trained on English-language datasets comprising hundreds of billions of data points and demonstrated its ability to efficiently carry out a variety of NLG functions, ranging from text generation to reading comprehension. The Megatron 530B is representative of the complexity involved in training AI of this caliber, especially given the growing demand for high-quality datasets. As the original article noted, Nvidia’s vice president of Applied Deep Learning Research said that “building large language models for new languages and domains is likely the largest supercomputing application yet, and now these capabilities are within reach of the world’s enterprises.” Clearly, this decision serves to further the training capabilities of businesses that face many of the challenges inherent in machine learning for language AI.

Related: Learning Without Analogies: Why Linguistic AI Struggles Where Human Linguists Excel

Coinciding with the release of Megatron 530B, Inspur AI Research’s release of Yuan 1.0 not only expands our understanding of how NLG technology is being applied in different markets but also indicates the ways sophisticated language AI is being developed for different language environments. Modeled after OpenAI’s GPT-3, Yuan 1.0 marks the first Chinese-language equivalent of the famed NLG model that has dominated headlines since 2020. Although this represents a major stride in the advancement of Chinese-language AI, researchers highlighted the challenge of finding the high-quality Chinese training texts and datasets from which language models learn. This suggests that powerful language models such as Nvidia’s will require intensive, context-specific development for markets outside the currently English-dominated industry.

As developers in the field of linguistic AI continue to generate powerful NLG models and apply them in key areas such as machine learning, the challenge of providing the support needed to advance this technology on a global scale comes into sharper focus. From supporting technology providers in delivering these novel products across borders to delivering cutting-edge, technology-driven translations, CSOFT remains committed to ensuring successful communications for a changing global landscape in over 250 languages. Learn more at csoftintl.com!
