in All Things Localization

A lot has been happening in the month since our last look at the state of language AI, but many of the core themes we explored in July continue to circulate in the news. On the question of how AI learns, which we took up in our look at the significance of analogies, a study DeepMind published this past week argues that reinforcement learning alone is enough to train the kind of artificial general intelligence (AGI) that would, in theory, be able to achieve anything people can. Titled simply ‘Reward is Enough’, the paper considers how the learning systems behind natural language processing (NLP) and other deep neural networks (DNNs) mirror the evolutionary mechanisms that incentivized intelligence in humans, arguing that rewarding successful outcomes can by itself grow an AI’s understanding of its environment ever closer to completeness.
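To make that idea concrete, the toy sketch below (our own illustration, not anything from the DeepMind paper) shows tabular Q-learning on a hypothetical five-state corridor: the agent is given nothing but a scalar reward for reaching the final state, yet its value estimates gradually come to reflect the structure of its environment.

```python
# Illustrative sketch of reward-driven learning (tabular Q-learning) on a
# hypothetical 5-state corridor; the only learning signal is a scalar reward.
import random

N_STATES = 5          # states 0..4; reaching state 4 yields reward 1
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q[state][action_index] starts at zero; all "knowledge" comes from reward alone.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy choice: mostly exploit what reward has taught so far.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] >= Q[state][1] else 1
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Standard Q-learning update driven purely by the reward signal.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

print([round(max(q), 2) for q in Q])  # values rise toward the rewarded state
```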

On a similar note, researchers at Carnegie Mellon this week published a study charting a path forward for NLP using prompt-based learning, an approach that reformulates tasks as natural-language prompts for pretrained models and can potentially improve the data efficiency of model development by a factor of 100 without the use of labeled data. According to reports, this approach relies heavily on the strategic work of human specialists, who craft linguistic prompts that in effect guide a machine’s learning in the right direction, departing both from the manual tailoring of trained models and from the sheer volume of data and labeling otherwise needed to improve machine learning outcomes. Beyond being an example of language AI advancing, the power of specific wordings to unlock the learning capabilities of machines is itself an affirmation of the crucial role that human language specialists play in everything from machine translation post-editing (MTPE) to the development of ever stronger NLP mechanisms, even as AI advances headlong.
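As a rough illustration of what a prompt can do, the sketch below (assuming the Hugging Face transformers library and a BERT-style model, and not drawn from the CMU study itself) classifies sentiment with no labeled training examples: a human-written template and two “verbalizer” words steer a pretrained masked language model toward a decision.

```python
# Minimal sketch of prompt-based classification; the template and verbalizer
# words are illustrative assumptions, not the CMU authors' code.
from transformers import pipeline

# A masked language model scores candidate words for the [MASK] slot.
fill = pipeline("fill-mask", model="bert-base-uncased")

# The prompt template and verbalizer are the strategic linguistic choices the
# article describes: carefully chosen wording guides the model's prediction.
TEMPLATE = "{review} Overall, the movie was [MASK]."
VERBALIZER = {"good": "positive", "bad": "negative"}

def classify(review: str) -> str:
    prompt = TEMPLATE.format(review=review)
    # Restrict scoring to the two verbalizer tokens and pick the higher one.
    scores = fill(prompt, targets=list(VERBALIZER))
    best = max(scores, key=lambda r: r["score"])
    return VERBALIZER[best["token_str"].strip()]

print(classify("The plot dragged and the acting felt wooden."))  # likely "negative"
```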


All of this coincides almost exactly with the odd news that an AI has, for the first time, been formally credited as an inventor on a granted patent. Known as DABUS, or ‘device for the autonomous bootstrapping of unified sentience’, the AI was named as the inventor of a novel form of packaging it was deployed to design based on fractal geometry, with recognition won not only in South Africa but also in Australia, despite the refusal of courts in other countries. It is difficult to imagine being sued for infringement by an artificial intelligence, and, if reports are accurate, the absurdity, if not the fright, of such a legal precedent has not been well received in the expert legal community. Nevertheless, these developments do highlight the fact that when issued a challenge in its domain, AI increasingly delivers something beyond the power of its own inventors to conceive without it.

As AI continues to influence communications across industries, it is language AI that best epitomizes the remarkable advances aligning this novel form of intelligence with a human understanding of the world, whether the focus is on ensuring AI has common sense or on giving it the ability to imagine. From supporting technology providers in delivering these novel products across borders to delivering cutting-edge, technology-driven translations, CSOFT remains committed to ensuring successful communications for a changing global landscape in over 250 languages. Learn more at csoftintl.com!
