in All Things Localization

What do human linguists have that linguistic AI doesn’t? No longer a question just for the localization industry and machine translation, how well AI can replace people in communications is now a shifting, often sensationalized story in the world at large. A long view of the field suggests that the true answer to what AI lacks is far more elusive than any one type of computational power, yet the search for a specific mechanism matters not only to computer science but also to psychology and biology, as discovery continues to reveal more about the underlying workings of consciousness. Now, as Quanta Magazine reports, one expert who has dedicated decades to AI research is proposing a far more concise explanation for what makes organic intelligence so special by comparison: the ability to form analogies.

For reasons experts still debate, machine learning models are remarkably stunted in their ability to generalize from information learned for a specific purpose, making them ‘one-trick ponies’ of sorts: as disappointing in the search for artificial general intelligence as they are astonishing in their respective domains. Infants, by comparison, make coherent leaps across concepts as they begin to learn about the world, and they continue that learning perpetually. Linguistic AI models like GPT-3, trained on vast volumes of real-world language data, are getting exceedingly good at sounding like us, but what people can do and infer through language is still a world apart from anything a computer has achieved. According to Portland State University computer scientist Melanie Mitchell, the difference is anything but a question of computational power. Rather, AI simply lacks our ability to form analogies between the unfamiliar and the familiar. It also lacks our instinctive drive to do so, and it is anyone’s guess how that could be programmed. To AI, for example, a six does not look like an upside-down nine, no matter how many times it processes numerals. It simply learns to see ‘6’ and learns to see ‘9’. There is nothing vexing, or even momentarily interesting, about the ways the two could be confused. AI sees no analogy of shape between two numerals it is trying to understand as symbols for known data values.
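To see that rigidity in miniature, consider the sketch below. It is purely illustrative (in Python, assuming scikit-learn and NumPy are available; neither Mitchell nor the article prescribes any code): a classifier trained on upright handwritten digits learns ‘6’ and ‘9’ as unrelated pixel patterns, so when the test images are turned upside down, one would expect its accuracy to fall sharply, because nothing in the model registers that a rotated six now resembles a nine.

```python
# A minimal sketch of "brittleness": a digit classifier trained only on
# upright digits has no analogical notion of shape, so rotating the test
# images breaks it. (Illustrative only; not from the article.)
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                             # 8x8 grayscale digit images
X = digits.images.reshape(len(digits.images), -1)  # flatten to 64 features
X_train, X_test, y_train, y_test = train_test_split(
    X, digits.target, test_size=0.3, random_state=0
)

clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print("accuracy on upright digits:", round(clf.score(X_test, y_test), 3))

# Rotate every test image 180 degrees: to a human, a 6 now reads as a 9,
# but the model just maps the unfamiliar pixels to whichever class is nearest.
X_rotated = np.array([np.rot90(img.reshape(8, 8), 2).ravel() for img in X_test])
print("accuracy on rotated digits:", round(clf.score(X_rotated, y_test), 3))
```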


Few in language services could fail to note how important the concept of analogy is to the work of human translators. Across languages and cultures, equivalent expressions often share no literal concepts at all, and there is frequently no predefined way to translate between them; qualified linguists nevertheless make the connection and convey both literal and figurative meaning. As it turns out, that ability remains highly elusive to linguistic AI. As Mitchell points out, it seems unlikely that simply growing the volume of data a model trains on will eventually supply anything like the interested understanding humans bring. She describes a “brittleness” in AI’s knowledge, where the lack of analogical thinking is a missing cement that would otherwise join fragmentary data points. In machine translation, that same brittleness is a recognized quality risk that can undermine the consistency and accuracy of outputs when the technology is misapplied, and one that LSPs weigh carefully in deciding how to leverage technology to ease the work of linguists. For now, that approach appears to be not only the wisest use of current technology, but also as advanced as anything at the cutting edge of a linguistic AI landscape that seems to affirm, time and again, the irreplaceable importance of human linguists in global communications.
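To put the idiom problem in concrete terms, here is a toy sketch (a hypothetical word-for-word glossary, not how any production MT engine works): mapping tokens one by one across languages produces word salad exactly where a human linguist would make the analogical leap to an equivalent expression.

```python
# A toy word-for-word "translator" (hypothetical mini-glossary, illustrative
# only): literal token mapping carries no figurative meaning across languages.
GLOSSARY_EN_DE = {
    "it's": "es ist", "raining": "regnet", "cats": "Katzen",
    "and": "und", "dogs": "Hunde",
}

def gloss_translate(sentence: str) -> str:
    """Map each token through the glossary; unknown tokens are flagged."""
    return " ".join(GLOSSARY_EN_DE.get(tok, f"<{tok}?>")
                    for tok in sentence.lower().split())

print(gloss_translate("it's raining cats and dogs"))
# -> "es ist regnet Katzen und Hunde": grammatically broken and literally
# absurd in German. A human translator instead maps the idiom to an analogous
# expression such as "es regnet in Strömen" ("it's raining in streams").
```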

To learn more about CSOFT’s translation-driven technology processes and global communication solutions, please visit us at csoftintl.com! 
