
Tex Texin is an internationalization and localization expert and consultant who provides business and software globalization services. He is a popular speaker at conferences around the world and is recognized as an industry thought leader as well as a noteworthy contributor to standards and open source products. Tex is the author of the popular http://www.I18nGuy.com and can be reached through his consulting company, http://www.XenCraft.com.

I first met Tex Texin in a shuttle bus on the way to the World Expo in Shanghai. To be honest, I was at first a little intimidated by Tex—through no fault of his own. It wasn’t because of how he looked, per se, because he’s someone who’s quick to smile, eager to join in discussions, and who gives the all-around impression of a pretty genial guy.

Rather, it was because Tex exudes a palpable aura of intelligence. The first impression he gives you is one of a professorial man from whom you could learn a lot—and in front of whom you’d be scared of saying something stupid, which is something I’m inclined to do.

Regardless of first impressions, Tex was quick to put me at ease one night as we sat down for an interview in an empty restaurant next to the so-called “German Bar” at our hotel. Our peers were in the German Bar having, erm, very important late-night business meetings (during which there wasn’t the slightest bit of alcohol or any alcohol-induced shenanigans, I can assure you) while Tex and I talked shop.

In your presentation at CSOFT’s 2010 Operations Summit, you started out by sharing with us a little perspective on changes in computing technology, especially in terms of the initial development of GUI. This isn’t really the focus of the interview, but we were just curious: how do you feel about “old school” computing versus more modern, GUI-oriented functionality? Did I detect a sense of nostalgia?

Tex: Are you really going to age me in front of your audience like this?

We’re not aging you, Tex. You’re like… a fine wine. You get better with—well, you know. In any case, we would really like to hear your perspective on things.

Tex: Well, in that case… I have to say that prior to GUI, using command lines did have its own set of benefits. That is, as long as you were knowledgeable about computers and could use them effectively, command lines enabled you to type ahead, regardless of the function currently being performed or your position on the screen. You didn’t need to wait for a response, and that made for very efficient use of your time. It also gave you think time when you needed it, as you weren’t waiting and watching for screen responses. You commanded the computer rather than interacting with it.

GUI, on the other hand, limits the execution of commands. You’re stuck waiting to react to what’s visually in front of you. That’s not to say that it isn’t useful, though.

I remember when Windows 3.0 first came out. I wasn’t convinced of its usefulness until I saw my daughter, who was three years old at the time, navigating Microsoft Paint like a pro. She knew that the circle tool (represented, obviously, by a circle) could make circles; that the line tool could make lines; that she could draw with the pencil tool, and so on and so forth.

And that’s really when the value of GUI dawned on me: even though it had its inefficiencies—like making you wait for various windows to open, etc., before you could perform certain actions—GUI did enable users who were ignorant of the system to use it and perform certain tasks that might have otherwise been outside their realm of technical know-how. It brought about the age of casual, non-technical computer use, which is really quite amazing.

Sometimes developers take it too far, though, with sexy and glitzy user interfaces that, although dynamic, aren’t helping the user. In some operating systems, you move your cursor with the intention of performing a simple function, but the functions keep moving around, or icons switch places based on the system’s perceived notions of your usage habits, etc. You end up clicking on something totally different from what you intended. That can be frustrating and even catastrophic.

Well put. So it sounds like you’re saying that, despite its inefficiencies, the development of GUI is, on the whole, a positive phenomenon.

Tex: Oh, most definitely. I like to think of it in terms of photography. A manual camera in the hands of a professional photographer is like a command-based system in the hands of a computer expert. You can control everything: aperture, shutter speed, and exposure. That lets you determine precisely how your picture’s going to turn out.

Modern digital cameras, in comparison, are a lot like GUI. All of the fancy functionality is still there, but it has been automated and presented in a digestible fashion, so that almost anyone can take pretty decent photographs. Expert photographers, in turn, can tweak the settings a bit to take more manual control over the device. However, there is a limit to how far the automation can be overridden, so the ability to have total control is lost. That introduces some inefficiencies, but there’s no arguing that enabling larger groups of people to use a device efficiently is a positive thing.

Not only that, but increased usage challenges experts to expand the range of capabilities and perform even more sophisticated and complex tasks than before. And that really was the point of my keynote presentation: that technological innovation and trends come in waves, and you can either ride them to reach greater heights or, well… sink.

Precisely. Thanks for keeping us on track, Tex. Speaking of trends, as a world-renowned internationalization and localization consultant at XenCraft, what are some of the trends, both technological and otherwise, that you’ve witnessed recently in the localization industry?

Tex: Not surprisingly, I’ve noticed that most of my clients are concerned with content reuse—that is, designing text to be better leveraged for translation purposes. In fact, I think that organizations put a little bit too much emphasis on this, but let me explain.

I’d like to start off by saying that there can be a considerable amount of money saved through content leverage. Just as you heard in the great presentations by ITT and AMD, a lot of IT companies benefit tremendously from content reuse because their products’ documents are oftentimes very similar, with only small changes in specifications and a few details.

For example, for software Help menus, 10% reuse is already pretty decent. For marketing documents—if you’ve got boilerplates and templates in place—30% is a good amount of leverage. To achieve these levels of reusability, companies invest a tremendous amount of resources in tools that help control source authoring. And they have their reasons: Yes, by using certain tools, you can make language more efficient for translation purposes. This increases its leveragability and thereby helps companies to save both money and time.

This comes with its own set of costs, however. As we become effective at reusing text and eliminating variations for the sake of efficient translation, we need to ensure that our text is clear, comprehensive, and independent of the surrounding context. There is a benefit to making a point in more than one way. If one statement doesn’t address the point in all aspects, the second statement may compensate.

Unfortunately, many companies produce poor documentation to begin with, even with redundancies in place. That is why a lot of people hit Google to find answers to their problems after getting nowhere with the software’s built-in Help menu and documentation.

Again, don’t get me wrong: there is value in content leverage and controlled authoring, but the process becomes counterproductive when you focus too much on counting words. I argue that an organization would be far better off putting more energy into establishing better pricing structures, enhancing accuracy through stronger fact-checking, and improving their internationalization processes.

So can you offer any specific examples of where companies engaging in localization might better spend their time and money?

Tex: Well, take Google and other Web-based companies. They rely on analytics to track user behavior. By focusing on the types of user behavior that can be tracked, they follow trends and get immediate quantitative feedback on what works and what doesn’t. So instead of perfecting, say, the word-count of an explanation for a certain function, they throw the explanation online and track its effectiveness.

From there, they can use these statistics to give feedback to their authors and translators and to inform their processes, thus improving the content—and their approach to content—for future use. Focusing more on user experience through analytics will give organizations a lot more bang for their buck in the long run.
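
[Author’s note: As a rough illustration of the feedback loop Tex is describing, here is a small sketch of my own, not anything Tex or Google has published. The variant names and the “helpful” signal are invented for illustration.]

```python
# Hypothetical sketch: track which wording of a help text actually helps users.
# Variant names and the "helpful" vote are invented for illustration.

from collections import defaultdict

impressions = defaultdict(int)    # times each variant was shown
helpful_votes = defaultdict(int)  # times users marked it as solving their problem

def record(variant: str, was_helpful: bool) -> None:
    impressions[variant] += 1
    if was_helpful:
        helpful_votes[variant] += 1

def effectiveness(variant: str) -> float:
    """Share of impressions that users found helpful."""
    shown = impressions[variant]
    return helpful_votes[variant] / shown if shown else 0.0

# Simulated traffic for two wordings of the same explanation.
for vote in (True, False, True, True):
    record("export_help_v1", vote)
for vote in (False, False, True):
    record("export_help_v2", vote)

print(round(effectiveness("export_help_v1"), 2))  # 0.75
print(round(effectiveness("export_help_v2"), 2))  # 0.33
```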

Definitely a worthwhile suggestion. I know that companies like Apple devote a lot of resources to user experience, and with the success of the iPad and iPhone 4, the return on their investment is certainly apparent. So moving forward, in your presentation you also touched on crowdsourcing as a hot trend in the localization industry. How do you feel about the cloud and crowdsourcing as it applies to translation?

Tex: First of all, let’s define the terms. The cloud refers to a method of scalably sharing resources so they are available on demand.

Essentially, the cloud provides the resource pool that enables people to interact and collaborate in a more dynamic fashion. Crowdsourcing, on the other hand, is a form of design or development involving a large number of collaborators. It is also called community-based development. Crowdsourcing in localization refers to the collaborative development and translation of content.

Thanks for the clarification. So does crowdsourcing have its place in translation? Or should it have a place in translation?

Tex: Oh yes. Crowdsourcing definitely has its place. I understand that a lot of translators get upset over the idea that someone who is potentially not skilled, or who is not a professional translator, is providing translations for public use, and for free—essentially cheapening their service offering. But sometimes there aren’t translators with the appropriate expertise for a given project.

In my presentation, I gave the example of an online fantasy game. Say a company is looking to localize their game for several different markets. Because it’s a new game based on an entirely fictional world, there are a lot of different terms for beasts and various other actors and their actions, all neologisms that were created just for the game. This means that there is no precedent for translation into other languages. In a situation like this, who can claim expertise in translating the game’s content?

No one can. A collaborative translation performed by the players themselves might, in fact, be more useful and culturally appropriate than one performed by a professional translator who, more likely than not, is translating the game’s content out of context. Beyond that, some emerging markets simply don’t have enough professional translators available. In this case, relying on crowd translation is an effective option.

Naturally, for regulated markets, such as life sciences or financial applications, crowdsourcing isn’t the best option, and professionals with subject-matter expertise are required. But crowdsourcing certainly does have its applications.

So if an organization were looking into crowdsourcing their translations, what should they watch out for? What are some of the functions or processes that need to be in place for crowdsourcing to work?

Tex: First and foremost, any platform aimed at facilitating crowd translations should have basic internationalization in place to avoid the common pitfalls to which all software localization is vulnerable. For example, it should support numbered placeholders in strings to represent program variables, so that they can be reordered to fit the grammar of the target language, as well as alternative text options to match gender, case, plurality, and so on, as needed.

[Author’s note: My eyes briefly crossed at this point.]
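
[Author’s note: For readers who would like to see what Tex means, here is a minimal sketch, mine rather than his, of numbered placeholders that a translator can reorder, plus a crude per-language plural rule. The message catalog, keys, and language codes are hypothetical.]

```python
# Minimal illustration of reorderable numbered placeholders and plural forms.
# The message catalog, keys, and the target language "xx" are hypothetical.

MESSAGES = {
    # English puts the count before the item name...
    "en": {"items_found": "{0} {1} found in {2}"},
    # ...while a hypothetical target language reorders the same placeholders.
    "xx": {"items_found": "{2}: {1} {0} gevonden"},
}

# Per-language plural rules; real systems (e.g. CLDR/ICU) define many more categories.
PLURAL_FORMS = {
    "en": lambda n: "item" if n == 1 else "items",
    "xx": lambda n: "ding" if n == 1 else "dingen",
}

def render(lang: str, key: str, count: int, location: str) -> str:
    """Fill numbered placeholders; translators may reorder {0}, {1}, {2} freely."""
    template = MESSAGES[lang][key]
    noun = PLURAL_FORMS[lang](count)
    return template.format(count, noun, location)

print(render("en", "items_found", 3, "Inventory"))    # -> "3 items found in Inventory"
print(render("xx", "items_found", 1, "Inventaris"))   # -> "Inventaris: ding 1 gevonden"
```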

Beyond that, it’s important that there is a vehicle to see translations in context—that is, if a group of users are collaboratively working on a translation for the aforementioned fantasy game, then they need to be able to view their translations in the context of the game.

It’s also important to plan to support crowd translations as community efforts, which is to say, ongoing endeavors that require effort and resources. Contrary to what many people think, crowdsourcing does not come free. There are several ways of supporting it. Most notably, it’s essential to integrate a reward system that encourages people to actively participate. There needs to be some sort of promotion system—some sort of value or recognition for participating in the collaborative translation effort.

In addition to a reward system, there are other ways of building the community. For one, a collaborative platform needs to support the management of discussion and debates. Leaving dispute resolution up to the community can be divisive, so there needs to be moderation and specific modules within the software to host, manage and resolve any disputes that arise.

Also, there needs to be a mature system of checks and balances to ensure the quality of crowd translations. There also needs to be strong version control, so that if somebody makes inappropriate changes, there is a way to restore data and roll back to previous versions.
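
[Author’s note: As an illustration of the roll-back Tex mentions, here is a small sketch of mine, not a real platform’s API, of per-string version history with a way to restore an earlier translation.]

```python
# Hypothetical sketch of per-string version history with roll-back,
# illustrating the kind of safety net described above; not a real product's API.

class TranslationHistory:
    def __init__(self, source_text: str):
        self.source_text = source_text
        self.versions = []  # list of (author, translated_text), oldest first

    def submit(self, author: str, translated_text: str) -> int:
        """Record a new translation version and return its version number."""
        self.versions.append((author, translated_text))
        return len(self.versions)

    def current(self) -> str:
        return self.versions[-1][1] if self.versions else self.source_text

    def roll_back(self, to_version: int) -> str:
        """Discard versions newer than `to_version`, e.g. after an inappropriate edit."""
        self.versions = self.versions[:to_version]
        return self.current()

history = TranslationHistory("Dragon's Lair")
history.submit("player_a", "Antre du Dragon")
history.submit("troll_42", "BUY GOLD NOW!!!")   # inappropriate change
history.roll_back(1)
print(history.current())  # -> "Antre du Dragon"
```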

Naturally, every member of the community should have his or her say, but there needs to be a process for user management and permissions to keep the collaborative process in working order. It helps to move responsibility into the crowd. Active users with a track record for quality and clear communications can be given increased responsibilities. However, the roles and the extent of their responsibilities must be carefully defined and understood.

Last but not least, it is clear from recent failures of cloud computing networks that it is absolutely essential to have an alternative to the cloud, so that if the cloud fails at a critical time in a project, work can continue by traditional or other means. This means the cloud data is backed up and available for use in an alternative, non-cloud-based system.

Those are some of the requirements to consider for crowd translations.

Excellent. Thanks for being so specific, Tex, and thank you very much for taking the time to do this interview. Tex is on vacation here, folks, so it means a lot to us.

So I think we’re good to go… wanna go get a beer?

Tex: No, thanks. I’m fine with a Diet Coke.

You sure? It’s on us….

Tex: I’m sure.

Hmph. We’ll get you tipsy one of these days, you rogue. Any last words before we head off to the infamous German Bar?

Tex: Kowabunga, dude!

Check out our website here for our translation and localization solutions.

 
