
When trying to understand the distinction between technology-driven translation solutions like machine translation (MT) and neural machine translation (NMT), it can be tempting to reach for simpler analogies from beyond the realm of technology. Increasingly, though, parsing the respective advantages of these solutions requires an awareness of distinctions within machine learning and artificial intelligence that will only grow more important to navigate as innovation continues to multiply the possibilities within localization and beyond.

To some extent, technological parallels for understanding these technologies and the nature of their performance have existed for some time. When IBM’s chess engine Deep Blue made history by beating reigning world champion Garry Kasparov in 1997, it was doing essentially what basic forms of MT continue to do today: applying a vast, preprogrammed knowledge of patterns to select a likely response to a challenge. Jumping ahead two decades, the way DeepMind’s AlphaZero chess AI routinely defeats Stockfish, the strongest of the conventional engines in Deep Blue’s tradition of programmed evaluation and brute-force search, is far more akin to what NMT can offer for translation services. By applying a deep learning model that learned to play chess not through statistical training but through playing itself, over and over, with no more foundational programming than the rules of chess, AlphaZero has produced games that make perfect sense yet that no human would think of. While that is more than what NMT can do (or anything we would want it to do), NMT comes close to replicating the generation of an original thought in language such as a human translator might have, targeting meaning itself rather than a likely equivalent assembled from programmed building blocks. Where one has answers to choose from, the other is equipped with a mechanism that lets it reason toward an original formulation. Adding the “N” to “MT” is not a small addition, but a significant shift of paradigm.
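
To make that contrast concrete, here is a minimal sketch in Python. The toy phrase table is invented for illustration, and the neural model shown (Helsinki-NLP/opus-mt-en-fr, a public checkpoint on Hugging Face) simply stands in for NMT in general; neither represents any particular production system.

```python
# Statistical / phrase-based MT in miniature: pick the highest-scoring stored
# equivalent for each known phrase. The table below is a toy example.
phrase_table = {
    "good morning": [("bonjour", 0.92), ("bon matin", 0.41)],
    "thank you": [("merci", 0.97), ("je vous remercie", 0.55)],
}

def phrase_based_mt(phrase: str) -> str:
    candidates = phrase_table.get(phrase.lower())
    if not candidates:
        return phrase  # nothing stored, so nothing to assemble from
    return max(candidates, key=lambda pair: pair[1])[0]

print(phrase_based_mt("Good morning"))  # -> bonjour

# Neural MT: a learned encoder-decoder generates the target sentence token by
# token rather than choosing among stored answers.
# (Requires: pip install transformers sentencepiece torch)
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")
print(translator("Good morning")[0]["translation_text"])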


Comparing their respective strengths, some might wonder why MT persists when NMT is already available. In short, the answer depends on how much, and what kind of, an investment a company wants to make in the content localization process, as well as the specific scenarios and industries its needs correspond to. For example, while both solutions are geared toward high-volume translation output, NMT is better at capturing the subtler nuances of language that can escape coarser MT models. Both are amenable to human linguistic review, but where human linguists and their command of nuance and culture play a crucial role in quality assurance (i.e., machine translation post-editing, or MTPE), non-neural MT is often the preferable model for the initial pass, as it presents greater cost savings up front.
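
Purely as an illustration of that decision logic, here is a hypothetical sketch; the factors and return values are invented for the example and do not reflect any actual pricing model or workflow.

```python
# A hypothetical sketch of how a localization workflow might be chosen.
# All factors here are invented for illustration.

def choose_workflow(nuance_sensitive: bool, human_review_planned: bool) -> str:
    if nuance_sensitive and not human_review_planned:
        # No post-editing step to catch subtleties, so favor NMT's
        # stronger handling of nuance.
        return "NMT"
    if human_review_planned:
        # Post-editors will restore nuance, so cheaper basic MT often
        # suffices for the initial pass (the MTPE model).
        return "MT + human post-editing (MTPE)"
    return "MT"

print(choose_workflow(nuance_sensitive=True, human_review_planned=False))  # NMT
print(choose_workflow(nuance_sensitive=True, human_review_planned=True))   # MTPE
```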

Returning to the example of AlphaZero, a linguistic counterpart to what it has achieved in the world of games is rapidly emerging in AI. The branch of machine learning known as natural language processing (NLP) has recently made headlines with successive breakthroughs that are enabling chatbots and other non-human agents to engage people in language as never before. With models like GPT-3 showing an uncanny ability to converse, experimental AI is rapidly toppling expectations for what can be automated with the help of smarter AI agents. With these possibilities come new requirements for AI model testing and development that place enormous emphasis on the training data fed to machines, where the quality and integrity of datasets are of the utmost importance to successful outcomes. Linguistic data is just one form of data AI firms require, but also one of the most ubiquitous. As developers seek advances beyond granular refinements that yield marginal performance gains, the emphasis is shifting toward giving AI more of the domain knowledge and flexibility with new information that people have. In short, there is a push to give machine learning a knack for the kind of subject-matter expertise top human linguists bring to their work, and data is at the core of enabling that.
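
To give a sense of what the quality and integrity of a dataset mean in practice, here is a hedged sketch of the kind of basic hygiene filters commonly applied to parallel (source/target) training data before it reaches a model. The thresholds are illustrative assumptions rather than fixed industry standards.

```python
# Basic hygiene filters for a parallel corpus: drop empty segments, exact
# duplicates, over-long outliers, and likely misalignments. Thresholds are
# illustrative assumptions only.

def clean_parallel_corpus(pairs, max_len_ratio=2.5, max_tokens=200):
    seen = set()
    kept = []
    for src, tgt in pairs:
        src, tgt = src.strip(), tgt.strip()
        if not src or not tgt:
            continue  # empty segment
        if (src, tgt) in seen:
            continue  # exact duplicate
        n_src, n_tgt = len(src.split()), len(tgt.split())
        if n_src > max_tokens or n_tgt > max_tokens:
            continue  # over-long outlier
        if max(n_src, n_tgt) / max(1, min(n_src, n_tgt)) > max_len_ratio:
            continue  # length mismatch suggests a misalignment
        seen.add((src, tgt))
        kept.append((src, tgt))
    return kept

raw = [
    ("Thank you.", "Merci."),
    ("Thank you.", "Merci."),  # duplicate
    ("Good morning.", ""),     # empty target
    ("Hi.", "Bonjour, merci beaucoup de votre attention à tous."),  # misaligned
]
print(clean_parallel_corpus(raw))  # -> [('Thank you.', 'Merci.')]
```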


Beyond the efficiency and cost savings clients gain, the quality of the real-world linguistic data that can be gleaned from large-scale localization projects is rapidly gaining emphasis. With measures in place to anonymize data as needed, LSPs can furnish validated linguistic data with measurable value for training machines toward a genuinely nuanced understanding of language, rather than educated guesswork.
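
As one illustration of what such anonymization might involve, here is a minimal sketch of pattern-based redaction. The patterns are assumptions made for the example and are far from exhaustive; real pipelines would typically add techniques such as named-entity recognition, plus human review.

```python
# Redact common identifiers from a segment before it is used as training
# data. Patterns here are illustrative, not comprehensive.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "<PHONE>"),
]

def anonymize(segment: str) -> str:
    for pattern, placeholder in REDACTIONS:
        segment = pattern.sub(placeholder, segment)
    return segment

print(anonymize("Contact Jane at jane.doe@example.com or +1 (555) 010-2030."))
# -> Contact Jane at <EMAIL> or <PHONE>.
```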

From MT and MTPE localization solutions to consulting and value-added services, CSOFT International helps companies deliver their products and services across languages and borders and enhance their performance in markets worldwide. Learn more about our technology-driven solutions and best practices at csoftintl.com!

