Advances in technology are constantly changing the way we live, and translation is no exception. Gone are the days of flipping through an old ‘Spanish to English’ dictionary, looking up every word in a sentence, and trying to form something that vaguely resembles a translation. Now it is far more common to simply copy and paste the sentence into your favored machine translation service and instantly receive a faster, and often more accurate, result. These developments will continue to reshape the future of translation, not just through improving technology but also through the emergence of crowdsourcing platforms, where bilinguals around the world contribute freely by translating pieces of text.
Translation memory is an important tool in the modern translator’s toolkit, and one that is currently the focus of a great deal of discussion in the translation and localization community. Simply put, a translation memory is a shared database that stores previously translated segments and is continually updated as its users work, so that past translations can be reused whenever the same or similar text appears again.
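The core idea can be sketched in a few lines of code. Below is a minimal, illustrative translation memory in Python: it stores source-segment/translation pairs as the translator confirms them, and on lookup it returns either an exact match or the closest "fuzzy" match above a similarity threshold. The class name, method names, and the 0.8 threshold are all assumptions for illustration, not any particular tool's API.

```python
import difflib

class TranslationMemory:
    """Minimal sketch of a translation memory (illustrative, not a real tool's API):
    a store of source-segment -> translation pairs that grows as the translator works."""

    def __init__(self):
        self._entries = {}  # source segment -> stored translation

    def add(self, source, translation):
        # Each confirmed translation is saved for future reuse.
        self._entries[source] = translation

    def lookup(self, source, threshold=0.8):
        # Exact match first; otherwise return the closest "fuzzy" match
        # whose similarity score clears the threshold.
        if source in self._entries:
            return self._entries[source], 1.0
        best, best_score = None, 0.0
        for stored in self._entries:
            score = difflib.SequenceMatcher(None, source, stored).ratio()
            if score > best_score:
                best, best_score = stored, score
        if best is not None and best_score >= threshold:
            return self._entries[best], best_score
        return None, 0.0

tm = TranslationMemory()
tm.add("Save your changes.", "Guarda tus cambios.")
# A near-identical new segment retrieves the stored translation.
match, score = tm.lookup("Save your change.")
```

Real translation memory systems work on the same principle at scale, with segmentation rules, metadata, and far more sophisticated fuzzy-matching than the simple character-ratio comparison used here.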
Even when human societies first tentatively interacted with each other, there were instances of interpretation and translation. Trade, diplomacy, and the urge to better understand other viewpoints were important early engines behind the refinement of translation practices, each driving the evolution of different methods.
Machine translation as a concept has come a long way since it was first tested over 60 years ago. It was originally proposed as a method for the US government to monitor Russian activities after World War II; it has since developed into something that has transformed the translation industry entirely.