Since its release in the early days of the pandemic, OpenAI’s GPT-3, language AI’s first-ever autonomous dialogue agent, has likely generated more controversy than it has text or speech. From a language services perspective, where machine translation is a constantly advancing form of artificial intelligence, this astonishing algorithm’s every achievement has served as a reminder of why human moderation is needed to ensure that AI-generated outputs are not only convincing but also accurate, consistent, culturally sensitive, and true to the values a brand or enterprise adheres to.

Despite a working knowledge of language trained on millions of linguistic data points from across the internet, GPT-3 emerges with essentially none of these abilities of its own. Rather, it tends to parrot the worst of what people say, to speak with bias and a false sense of authority where it knows nothing factual, and to obscure the root causes of its gaffes. Little can be said about where any one thing it says comes from, except that it reflects the right and wrong assumptions of a vast pool of human linguistic data at large. Much has been done to tailor and tweak this powerful tool for use in specific settings, with considerable success, but the verdict stands: GPT-3 is deeply flawed.

Now, headlines announcing a set of tools dubbed “GPT-3.5” are corroborating rumors that OpenAI is preparing a new GPT-4 model as the end result of targeted efforts to remedy the worst aspects of its terminally troubled predecessor. Much as human linguists facilitate the training of machine translation engines by supplying correct translations to replace flawed ones, one aspect of the effort appears to be setting the algorithm up to invite human rebuttals in dialogues where it generates one side of the conversation. Specifically, reports of the ChatGPT function suggest an analogy to crowd-sourced training and labeling: public users of the model can push back on answers they don’t like, simultaneously advancing the model’s machine learning and empowering human users to interact with it on their own terms. While the ultimate goal is a model that never says anything it shouldn’t, one additional feature is a reminder of how far language AI has yet to advance to achieve that: “guardrails” that prevent the algorithm from engaging in discussions of subject matter it has historically had problems with, such as by refusing to make jokes about ethnicity.
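For readers curious what such a feedback loop might look like in practice, the following is a minimal Python sketch of the general pattern only: a dialogue system checks an incoming prompt against a crude guardrail, returns a reply, and stores any user pushback as preference data for later training. Every name in it (generate_reply, BLOCKED_TOPICS, FeedbackStore) is a hypothetical illustration; OpenAI’s actual implementation is not public, and real guardrails are trained classifiers rather than keyword lists.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical stand-in for a real dialogue model; a production system
# would call an actual large language model here.
def generate_reply(prompt: str) -> str:
    return f"(model reply to: {prompt})"

# Naive guardrail: refuse prompts touching topics the model has
# historically handled badly. Purely illustrative -- real systems use
# trained classifiers, not keyword lists.
BLOCKED_TOPICS = {"ethnicity"}

def guardrail_ok(prompt: str) -> bool:
    return not any(topic in prompt.lower() for topic in BLOCKED_TOPICS)

@dataclass
class FeedbackRecord:
    prompt: str
    reply: str
    rating: int  # +1 = user approves, -1 = user pushes back

@dataclass
class FeedbackStore:
    records: List[FeedbackRecord] = field(default_factory=list)

    def log(self, prompt: str, reply: str, rating: int) -> None:
        self.records.append(FeedbackRecord(prompt, reply, rating))

    def preference_data(self) -> List[FeedbackRecord]:
        # Records like these are the raw material for human-feedback
        # fine-tuning; the training step itself is out of scope here.
        return [r for r in self.records if r.rating != 0]

if __name__ == "__main__":
    store = FeedbackStore()
    prompt = "Translate this slogan into German."
    if guardrail_ok(prompt):
        reply = generate_reply(prompt)
        store.log(prompt, reply, rating=-1)  # the user pushes back
    print(f"{len(store.preference_data())} preference record(s) collected")
```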

With language AI advancing in such rapid strides, it is important to note the crucial role that human linguists, translators, and interpreters play in ensuring that technology-driven communication solutions deliver the quality and accuracy that real-world scenarios demand. As Technology Review notes, quoting one subject matter expert, “Fine tuning of human feedback won’t solve the problem of factuality,” alluding to the need for AI that can not only speak a language but also leverage the broader web of today’s information technology to reason through problems, look things up, and report back with total fidelity. When the world’s most powerful language algorithm remains so far from perfect, it is easy to see why translation engines like Google’s are a stepping stone to quality multilingual communications rather than a standalone solution, except in the hands of qualified linguists with local language familiarity and domain knowledge.
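One way to picture that gap is the difference between generating a fluent answer and reporting only what can be traced to a verified source. The toy Python sketch below makes the distinction concrete under stated assumptions: KNOWLEDGE_BASE, look_up, and answer_with_sources are hypothetical stand-ins for the search and retrieval infrastructure a truly factual language AI would need.

```python
from typing import Dict, Optional

# Toy stand-in for the broader web of information systems an AI might
# query; a real system would use search APIs or a vector database.
KNOWLEDGE_BASE: Dict[str, str] = {
    "capital of france": "Paris",
}

def look_up(query: str) -> Optional[str]:
    return KNOWLEDGE_BASE.get(query.strip().lower())

def answer_with_sources(query: str) -> str:
    fact = look_up(query)
    if fact is None:
        # Declining to answer beats a fluent but unfounded guess --
        # the failure mode fluent generators are prone to.
        return "No verified source found; deferring to a human expert."
    return f"{fact} (retrieved from knowledge base)"

if __name__ == "__main__":
    print(answer_with_sources("Capital of France"))   # grounded answer
    print(answer_with_sources("GDP of Atlantis"))     # safe refusal
```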

To learn more about CSOFT’s technology-driven translations in over 250 languages, visit us at csoftintl.com!

[dqr_code size="120" bgcolor="#fff"]