
When language AI talks, should it have to explain itself?

Those keeping up with the latest applications of natural language processing (NLP) and natural language generation (NLG) may have noticed a subtle shift in the ongoing ethics discussion surrounding algorithms like GPT-3, which has long dwelt on whether AI will channel our more malevolent tendencies and what, morally, engineers should be doing about it. As consensus forms that AI can behave just as badly as many of the actual people whose language data it learns from, there is growing resignation to the fact that data-trained models become even less knowable in their fundamental workings once experts intervene to steer them closer to the desired output, becoming, as one commentator previously put it, a kind of black box. If AI cannot be stopped from saying things we would rather it not say without making it even more opaque, how can or should it be held to account for its decisions, for instance when it speaks in the guise of helpful expertise? While some are finding sophisticated reasons to argue that it should not, many are adamant that there should be no AI otherwise. With opposing voices taking sides, a resolution may be nowhere in sight, but interesting parallels to machine translation and MTPE models, the original use case for language AI, are emerging to underscore the vital importance of human accountability in areas such as language services.

In some ways, the growing debate around 'explainable AI' is not especially focused on linguistic applications, despite how often the field is cited as needing it. As Lifewire reports this week, the FEC has proposed that AI models applied in financial markets may be subject to investigation if their creators do not equip them with features that explain how they work; the same report notes that some of the best examples of companies applying AI profitably involved applications that did not just deliver a desired result, but plainly stated insights they had grasped and people had not. In short, the ubiquity of AI is generating demand for the kinds of explanations it arrived without, whether as a business problem or as a matter of enforcing existing policy, and largely regardless of the overarching ethical concerns.
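What such 'features to explain how they work' might look like varies by model, but for simple models the explanation can be exact rather than approximate. The following is a minimal sketch, assuming a scikit-learn logistic regression standing in for a proprietary scoring model; the feature names and the explain() helper are hypothetical, for illustration only:

```python
# Minimal sketch of an "explain yourself" feature for a linear model.
# Assumes scikit-learn; model and feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "account_age"]  # hypothetical inputs
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(sample: np.ndarray) -> list[tuple[str, float]]:
    """Return each feature's additive contribution to the decision score.

    For a linear model, coefficient * value decomposes the logit exactly,
    so this 'explanation' is faithful by construction.
    """
    contributions = model.coef_[0] * sample
    return sorted(zip(feature_names, contributions),
                  key=lambda kv: abs(kv[1]), reverse=True)

sample = X[0]
print("prediction:", model.predict(sample.reshape(1, -1))[0])
for name, contribution in explain(sample):
    print(f"  {name}: {contribution:+.2f}")
```

For deep models like GPT-3, no comparably exact decomposition exists, which is precisely why the regulatory demand for explanation is harder to satisfy there.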


For all of the discussion around language AI, its best-established real-world use case, translation services, has always had to avoid unexplained results, precisely because clients seeking localization solutions raise these same types of concerns. When brands are invested in their content, the machine translation training and translation memory functionalities that language service providers deliver help offset the problematic machine learning outputs that routinely result from raw Google Translate output, for example. Meanwhile, the human reviewers who work directly with the engines handling specific translation projects work carefully to ensure that what a client gains, not only at the text level but also in terms of efficiency, is reflected transparently. Through this lens, the struggle to achieve an ethical profile for GPT-3 looks more like a problem of general AI than of language-related applications, a field already far more advanced and rooted in principles of transparency by virtue of the stakeholders involved.
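Translation memory is a concrete example of this built-in traceability: every suggestion can be tied back to a stored, human-approved segment and a match score that a reviewer can audit. Below is a minimal sketch, assuming a simple in-memory store and edit-distance similarity; the lookup() helper and threshold are illustrative, not any particular TM product's API:

```python
# Minimal sketch of a translation memory (TM) fuzzy lookup.
# Illustrative only; real TM tools use richer segmentation and scoring.
from difflib import SequenceMatcher

# Each entry pairs a source segment with its human-approved translation.
tm: dict[str, str] = {
    "Click Save to apply your changes.":
        "Klicken Sie auf Speichern, um Ihre Änderungen zu übernehmen.",
    "The file could not be found.":
        "Die Datei wurde nicht gefunden.",
}

def lookup(segment: str, threshold: float = 0.75) -> tuple[str, str, float] | None:
    """Return the best (source, translation, score) match above the threshold.

    The score makes the suggestion auditable: a reviewer can see exactly
    which approved segment the output came from and how close the match is.
    """
    best_source = max(tm, key=lambda s: SequenceMatcher(None, segment, s).ratio())
    score = SequenceMatcher(None, segment, best_source).ratio()
    return (best_source, tm[best_source], score) if score >= threshold else None

match = lookup("Click Save to apply the changes.")
if match:
    source, translation, score = match
    print(f"{score:.0%} match against: {source!r}\n-> {translation}")
```

Unlike a black-box generation, a fuzzy match below the threshold simply falls through to a human translator, which is one reason accountability has never been optional in this corner of language AI.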

For companies growing in global markets, localization through technology-driven translations and language services is vital to ensuring success across languages. To learn more about CSOFT's solutions, please visit us at csoftintl.com.
