At this point, there is no real question as to whether Earth’s language diversity is dwindling; it clearly is. The open question is how much of this decline is driven by increasingly powerful machine translation (MT) tools, such as neural machine translation (NMT), and how much by natural processes.
Pop culture references have always evaded dictionaries for some time after their initial adoption, and now, given the voracity and speed with which the internet consumes and discards new slang and references, machines can’t quite keep up. Ayan has pointed to “odd spellings, hashtags, urban slang, dialects, hybrid words, and emoticons” as the major hurdles for NMT.
Although Alibaba Cloud’s $254 million in revenue still trails Amazon Web Services’ $11 billion, Alibaba Cloud is betting on a near future in which it stands beside Microsoft’s Azure and AWS to dominate the cloud market as one of the “3 As.”
What do WordPress, Linux, and Firefox all have in common? All of these successful projects are the result of crowdsourced contributions. As the internet continues to connect us, collaborating on projects has become easier than ever. Even in the localization industry, crowdsourced translation solutions are helping to make translation services available to everyone, and the advent of new technologies has produced several distinct approaches to collaborative translation projects. Let’s take a look at three of the most popular models of crowdsourced translation.
Translation memory is an important tool in the modern translator’s toolkit, and one that is currently the focus of a great deal of discussion in the translation and localization community. Simply put, a translation memory is a shared database that stores previously translated segments as source–target pairs and continually grows as its users work, so that matching or similar segments never have to be translated from scratch twice.
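To make the idea concrete, here is a minimal sketch of how a translation memory behaves: each confirmed translation is stored as a source–target pair, and new source segments are checked for exact matches or close “fuzzy” matches. The class name, the similarity threshold, and the use of a simple string-similarity ratio are illustrative assumptions, not a description of any particular commercial tool.

```python
from difflib import SequenceMatcher

class TranslationMemory:
    """A toy translation memory: stores source->target segment pairs
    and returns exact or fuzzy matches for new source segments.
    (Illustrative sketch; real TM tools use more sophisticated matching.)"""

    def __init__(self):
        self.segments = {}  # source segment -> stored translation

    def add(self, source, target):
        # Each confirmed translation updates the shared store,
        # which is how the memory "continually grows" as users work.
        self.segments[source] = target

    def lookup(self, source, threshold=0.75):
        # Exact match: reuse the stored translation directly.
        if source in self.segments:
            return self.segments[source], 1.0
        # Fuzzy match: find the closest stored segment; if it clears
        # the (assumed) similarity threshold, offer its translation.
        best_src, best_score = None, 0.0
        for stored in self.segments:
            score = SequenceMatcher(None, source, stored).ratio()
            if score > best_score:
                best_src, best_score = stored, score
        if best_score >= threshold:
            return self.segments[best_src], best_score
        return None, best_score

tm = TranslationMemory()
tm.add("Save your changes", "Enregistrez vos modifications")
# A near-identical segment still gets a useful fuzzy match.
translation, score = tm.lookup("Save your change")
```

In practice, this reuse is what makes translation memory valuable: repeated or lightly edited segments are filled in automatically, and translators only review the differences.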