
Those who follow the CSOFT Communications blog are familiar with the myriad interesting problems born from language AI models trained on the most abundant data resource available: what people say online. What people provide in textual volume, they do not necessarily match in thoughtful consideration, and while the gains for machine learning have been nothing short of astonishing, the gaps also continue to prove newsworthy. For all that our data reveals, a transcript of humanity’s activity falls short of a comprehensive education for algorithms that many hope will one day learn the spontaneous thought processes and linguistic reasoning that GPT-3, for example, can only mimic through pattern recognition. One party very much interested in improving this is Meta, with its stated aim of developing not only monolingual artificial intelligence, but capabilities as ambitious as real-time AI translation for its metaverse developers to deploy wherever conducive to interactivity among participants in things like online gaming (a development noted in CSOFT’s recent metaverse localization white paper). This week, reports that Meta is shifting its focus away from adding parameters to boost the brawn of natural language processing (NLP) and toward modeling the mind’s innate capacity for language highlight how the toughest nuances of communication to capture in, for example, machine translation are not simply missing data, but complex relationships between discrete regions of the brain and the words they encounter. By studying how the brain responds to various phrases as a person recognizes them, Meta’s researchers are gaining clearer insights into what people do with their neural networks that machines still cannot, and how to better approximate this with language AI.


To pursue a deeper understanding of how brains use language, Meta relied on novel research on human subjects as well as known findings about linguistic algorithms as premises for exploration. In monitoring how different brain regions responded to linguistic prompts, researchers placed people into the same scenario as a typical NLP algorithm, studying how neural nodes activated one word at a time in anticipation of the possibilities that might follow it. The analogy the researchers provide likens the difference between algorithms and humans to the difference between fishing for the rest of a sentence and looking forward to hearing it, specifically with the example, “Once upon a…”. Whereas a machine can determine “time” as the most likely next word in the sequence, people simply leapfrog this determination, rushing into the mental ambience of a story being told and musing instead on what lies beyond this temporal indicator. Where children see dragons and castles, AI sees a statistical likelihood to carry a sentence forward. Ultimately, Meta hopes that by understanding how the physical structures of the mind conduct this process, it can better configure algorithms to experience and generate language in a similar fashion.
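The kind of next-word prediction described above is easy to demonstrate in code. The sketch below is not Meta’s model; it is a minimal illustration using the publicly available GPT-2 weights through Hugging Face’s transformers library (an assumption of this example, not something named in the article), showing how a causal language model scores candidate continuations of “Once upon a…”:

```python
# Minimal sketch of next-word prediction with a causal language model.
# Assumes the `torch` and `transformers` packages are installed; GPT-2 is
# used here only as a convenient public stand-in for the models discussed.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Once upon a"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits shape: (batch=1, sequence_length, vocab_size)
    logits = model(**inputs).logits

# The logits at the final position score every candidate next token.
next_token_logits = logits[0, -1]
top = torch.topk(next_token_logits, k=5)

for token_id, score in zip(top.indices, top.values):
    print(f"{tokenizer.decode([int(token_id)])!r}: {score.item():.2f}")
```

Run against GPT-2, “ time” typically tops the ranking: precisely the statistical determination that the paragraph above contrasts with a human listener’s imaginative leap into the story.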

As always, the persistent limitations of experimental language AI highlight the virtues of better charted territories like machine translation post-editing, which actively addresses gaps in what machines are capable of through the efforts of human linguists. Rather than seeing just the flaws in its outputs, human linguists and translation experts leverage an eye for what is going on in language to correct inaccuracies and iron out awkward phrasings. Ultimately, by understanding what makes sense for a context – subject matter expertise – linguists are able to apply the imaginative component of language to the degree needed to amend the coarser statistical work of neural machine translation.


Learn more about CSOFT’s AI-powered cross-border communication solutions at csoftintl.com!
