As machine translation (MT) continues to advance year on year, it is a fine time to take note of the kinds of breakthroughs we can be glad it is not making.
As the recent wave of breakthroughs in language AI continues to evolve, text generation models like GPT-3 are drawing renewed attention this week as recognition grows of the unintended consequences of teaching human language to machines on vast, unfiltered real-world datasets. Now confirmed to reflect the biases watchers feared when it debuted a year ago, the experimental AI behind some remarkably human-like chatbots is under scrutiny for its capacity to channel the darker side of what people say online, the very language data it was trained on. And because sending an AI into a naïve study of real online forums returns such an unflattering reflection of us, the debate has turned not to whether AI will learn to repeat our worst tendencies, but to what to do about it.
From a language services perspective, on the other hand, all of this validates the model that localization providers like CSOFT have advanced for combining the best of human and AI capabilities in a separate but clearly parallel context. If a computer can only be as good with language as its training allows, a reality LSPs have worked with for years by not expecting perfection, then it is far better to augment machine learning gradually with curated training inputs than to puzzle over how to contain an autonomous AI after it has learned slurs, bigotry, and the like and begun repeating them naïvely. Training, the key to optimizing translation engines for a given context, is difficult to inject in reverse once a model has been left to its own learning. Hence the problem of limiting the damage, akin to quality assurance in a translation setting, now vexes developers who placed the machine itself at the apex of the equation.
Where many once asked whether machine translation would replace human translators, it increasingly looks like even superhuman language AI sorely needs a human moderator's influence. Just as importantly, the holy grail of MT has never been accuracy in the sense of some proximity to 100 percent that machines can approach. Even two human linguists can disagree over which of two perfectly accurate translations is more appropriate, compelling, or current for a target market, and their ability to do so is a vital link in the machine translation post-editing (MTPE) workflow. The entire premise of MTPE is that MT remains a subordinate tool for linguists rather than a superhuman replacement for them, ready-made for the kind of human oversight that ensures not only accuracy and basic readability but also appropriateness, the very thing the other end of the language AI spectrum is grappling with.
For companies seeking scalable, fast, high-quality translation to support their global growth, CSOFT's full range of technology-driven localization solutions offers the advantages of both machine translation and human linguists, from general quality analysis to in-country review. To learn more, visit us at csoftintl.com!