As a user experience (UX) professional, I can see how allowing users to translate their own content can be part of a compelling engagement strategy, and within that context I would have thought the entire user experience should be in the user’s language, not just part of it.
So, then, why is it that, while we constantly read that Facebook is available in 65, 70, 80, whatever number of languages, its help is available in fewer than 10? Here is what Irish language (Gaeilge) users see under Help:
Is it because:
a) The Facebook crowdsourcing translation tool doesn’t allow the help strings to be translated?
b) Facebook users don’t want to translate help because they don’t like or need it, or doing so just ain’t cool (or easy) anyway?
c) There’s a whole bunch of places out there populated by people way way smarter than others and they don’t need help in their own language?
As a localization professional working to a budget, I was sometimes faced with the prospect of presiding over a localization plan where help or documentation was not included and was left in English. (In fact, Facebook doesn’t seem to give users who switch to a language with no help translation the option to read the help in English instead.) I wondered: if this approach was acceptable, why was the help written in English in the first place?
For me, partial localization is fine if the market and the user experience accept it, though it’s clear that for some cultures it is a negative experience.
But what’s going on with community translation of user assistance like help?
Answers to the organizers of the next localization or UX conference, anywhere, please.
A new application for the iPhone allows you to take a picture of some text and then translate it. Not only that, with text-to-speech technology, it will pronounce the text as well.
With translation capability from English into 16 languages and pronunciation in FIGS (French, Italian, German, Spanish) plus Portuguese, Pic Translator seems aimed at the monolingual American who ends up at a restaurant with indecipherable entrées. An interesting app, and at a cost of 99 cents, it seems worth a shot!
A feature offered by most CAT tools is the possibility of creating project memories from larger memories. Typically, to create a project memory, the PM analyzes the files to translate against a master memory, and the CAT tool includes in the project memory only the segments that would be useful as fuzzy matches. The translator thus receives only the part of the translation memory that provides fuzzy matches and 100% matches.
There are several different reasons to do this: from the need to give the translator smaller files, to the requirement of not sending out a full translation memory because of the risk of disclosing some sensitive or proprietary information.
Whatever the reason for creating limited project memories, end customers, translation companies and project managers often overlook something important: a project memory is, by definition, an incomplete memory. This harms translation by limiting the usefulness of concordance searches. A term already translated in a segment that is not similar enough to any new segment to qualify as a fuzzy match will not be found by a concordance search on a project memory, whereas that very segment would be found if the same search were run on the master memory. Not finding an already-translated term because of the limitations of project memories affects the quality and consistency of the translation.
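The concordance gap described above is easy to demonstrate. Here is a minimal sketch, assuming a crude string-similarity score as a stand-in for a CAT tool’s fuzzy-match algorithm and an invented 75% threshold (the function names and sample sentences are mine, not from any real tool):

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough segment similarity, standing in for a CAT tool's fuzzy-match score."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def build_project_memory(master_tm, new_segments, threshold=0.75):
    """Keep only master-TM entries that fuzzy-match at least one segment to translate."""
    return {
        src: tgt
        for src, tgt in master_tm.items()
        if any(similarity(src, seg) >= threshold for seg in new_segments)
    }

def concordance(tm, term):
    """Find source segments containing a term, with their translations."""
    return {src: tgt for src, tgt in tm.items() if term.lower() in src.lower()}

master_tm = {
    "Click the Save button to store your changes.":
        "Cliquez sur le bouton Enregistrer pour stocker vos modifications.",
    "The firewall blocks unauthorized traffic.":
        "Le pare-feu bloque le trafic non autorisé.",
}
new_segments = [
    "Click the Save button to keep your changes.",
    "Configure the firewall settings for your network.",
]

project_tm = build_project_memory(master_tm, new_segments)

# The firewall sentence is not a fuzzy match for any new segment, so a
# concordance search for "firewall" succeeds on the master memory but
# comes up empty on the project memory.
print(concordance(master_tm, "firewall"))   # one hit
print(concordance(project_tm, "firewall"))  # {}
```

The already-translated rendering of “firewall” exists in the master memory, but the extraction step has filtered out the only segment that contains it, so the translator working against the project memory never sees it.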
The customer or the translation company may still decide that security reasons outweigh the quality disadvantages of using project memories, but the choice should be deliberate, not something arrived at by chance out of not trusting the translator.
A guest column on the book by the author mentions a few examples, such as to live like a maggot in bacon (to live in luxury: German) and to strike the 400 blows (to sow wild oats: French). The last one may not be so “novel” to fans of French cinema, who will undoubtedly recall a French New Wave flick by the same name (Les quatre cents coups). Apparently, there are two bands that have discovered this delightful idiom as well.
The column also discusses the use of idiom in language (which seems nearly unavoidable, at least for the native speaker) so check it out if you’re interested in anything beyond literal one-to-one language.
Facebook submitted a patent in December 2008 to the U.S. Patent & Trademark Office for its “Translations” application that allowed it to go from 0 localized versions to 16 in less than 6 months. It’s now up to 60 and counting.
This month, two interesting developments in the area of collaborative translation:
Facebook applies for patent for Community Translation on a Social Network. If you have translated on the Facebook Translation platform, like I have, you know that the tool works very well. The only limitation of community translation, when it is voluntary, is that larger chunks of text never get translated.
Swedish newspapers reported yesterday that Dan Brown’s first new novel since “The Da Vinci Code” will be translated by six translators. The objective is to limit piracy and to prevent impatient fans from buying the English version of the book, by expediting the publication of the Swedish translation.
What’s the relevance of these stories?
Collaborative translation or community translation is taking hold as a valid process for commercial projects. The usual contention, that to achieve consistency it is better to have as few translators as possible working on a project, is trumped by the commercial imperative: it is better to have a good translation, even in the literary world, delivered on time than a perfect translation that arrives too late to market.
Fernanda Pivano, a legendary translator who first brought to Italian readers so many great American writers and poets (from Edgar Lee Masters to William Burroughs, from Hemingway to Bob Dylan) died today in Milan.
I never had the privilege of knowing her in person, but it was thanks to her translations that I first read many American writers.
Last week I was in Québec City for the ATA-TCD Conference, which was superbly organized by Rina Ne’eman and Grant Hamilton.
The biggest takeaway of the event was the last presentation of the last day: A panel presentation by Don Shin, from 1-Stop Translation, and Rocío Txabarriaga, from Common Sense Advisory, named “The Future of the Translation Industry: MT, TM, Open Source, Crowdsourcing: Where’s It All Headed? And What Should You Do to Prepare?”
For his part, Don compiled some of the major efforts under way in those areas, but what I liked most was his depiction of what the desktop of a translator working in a collaborative manner might look like.
The key point is that the translator is in control. At the top, you have the source text. Right below it, you have the translator’s TM, the project’s TM, and a Machine Translation of the segment. And below that, the translated segment.
It is up to the translator to choose which of these sources she is going to use. On the right-hand panel, you have access to terminology and a chat window for asking other people working on the same project for help in real time.
Finally, on the bottom right, there is a fare meter that shows how much money the translator is making on the project. Whether this is a motivator or a demotivator depends on the rate the translator is getting.
Another panel discussing Translation Management Technologies, moderated by Duncan Shaw, failed to address what all the LSPs in the room were looking for: Interoperability. What I heard LSPs saying is that they want to let their translators work with any tool that they prefer (Trados, SDL, MemoQ, Across, whatever) and not to require them to have different tools for different projects.
What the industry seems to want, and the technology providers seem unable to deliver, is a standard format for translation memories that does not get corrupted when you move from one tool to another. Like a comma-delimited file that can be opened in Excel, Lotus, MySQL or Oracle without any data loss.
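An exchange standard along these lines already exists on paper: TMX, the XML-based Translation Memory eXchange format; the complaint is about how lossy tool support is in practice. As a minimal sketch of what a lossless round trip looks like, here is a toy writer and reader for a TMX-style file (element names follow TMX 1.4, but the helper functions and the stripped-down header are my own simplification, not a conformant implementation):

```python
import xml.etree.ElementTree as ET

def write_tmx(path, pairs, src_lang="en", tgt_lang="fr"):
    """Serialize (source, target) segment pairs as a minimal TMX document."""
    tmx = ET.Element("tmx", version="1.4")
    ET.SubElement(tmx, "header", {
        "srclang": src_lang, "adminlang": "en", "datatype": "plaintext",
        "segtype": "sentence", "o-tmf": "demo",
        "creationtool": "demo", "creationtoolversion": "0.1",
    })
    body = ET.SubElement(tmx, "body")
    for src, tgt in pairs:
        tu = ET.SubElement(body, "tu")  # one translation unit per pair
        for lang, text in ((src_lang, src), (tgt_lang, tgt)):
            tuv = ET.SubElement(tu, "tuv", {"xml:lang": lang})
            ET.SubElement(tuv, "seg").text = text
    ET.ElementTree(tmx).write(path, encoding="utf-8", xml_declaration=True)

def read_tmx(path):
    """Recover the (source, target) pairs from a TMX file."""
    root = ET.parse(path).getroot()
    return [tuple(tuv.find("seg").text for tuv in tu.iter("tuv"))
            for tu in root.iter("tu")]

pairs = [("Hello world", "Bonjour le monde"),
         ("Save your changes", "Enregistrez vos modifications")]
write_tmx("demo.tmx", pairs)
assert read_tmx("demo.tmx") == pairs  # lossless round trip
```

Real TMX carries much more than this (inline formatting tags, attributes, notes), and it is precisely those extras that tools tend to mangle on import and export.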
TM-Europe 2009 will be held on October 1st and 2nd in Warsaw. This year, the conference theme is Quality and Terminology Management, and Business Terms and Conditions for Translation and Localization Services.
In the 2008 TM-Global Translation and Localization Market Survey, customers and providers alike reported that consistent high quality was the number one factor they take into account when managing processes or selecting a vendor, yet they had problems pinpointing how quality is defined, measured and manifested.
The conference schedule covers an interesting range of topics, including:
a workshop on translation and localization technology (Daniel Goldschmidt and Jost Zetzsche)
a discussion between customers and translation companies on how they do business together
a panel and several presentations on terminology management
a panel and presentation on quality management
a presentation on different approaches to selling translation in the US and Europe (Dave Smith of Lingua-Lynx)
a post-conference workshop on Selling Translation
TM-Europe is the annual conference of the Polish Association of Translation Agencies (PSBT) and is organised by PSBT and TM-Global.
For more information on the conference visit www.tm-europe.org.
Translators are only human, and human translators introduce errors every day… that’s why we have Quality Assurance processes in the first place! Auto-propagating translations before QA carries tremendous risk.
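The risk is easy to see in a few lines. This is a toy sketch of auto-propagation, not any particular tool’s behavior; the function and the sample segments are invented for illustration:

```python
def auto_propagate(segments, confirmed):
    """Copy each confirmed translation to every identical source segment.

    Segments with no confirmed translation are left untranslated (None).
    """
    memory = dict(confirmed)  # source -> confirmed target
    return [memory.get(src) for src in segments]

# A repeated UI string, plus one unrelated segment.
segments = ["Press OK to continue."] * 4 + ["Press Cancel to abort."]

# The translator confirms the repeated segment once, with a typo:
# a trailing comma instead of a period.
confirmed = {"Press OK to continue.": "Appuyez sur OK pour continuer,"}

out = auto_propagate(segments, confirmed)
# The single typo has been silently replicated into four segments
# before QA ever sees any of them.
print(out.count("Appuyez sur OK pour continuer,"))  # 4
```

One mistyped translation, confirmed once, becomes four identical errors; run propagation after QA (or re-run QA after propagation) and the multiplication works for you instead of against you.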
Many translation blogs start with tentative steps, unsure of where they are going, only to find their feet with practice and time. David, on the other hand, hit the ground running.
Technological jargon and the abbreviations used in text messages pose a new threat to clear language, the Plain English Campaign has warned on its 30th anniversary.
The organisation says incomprehensible instruction manuals and the ‘text speak’ associated with mobile phones and the internet can be as hard to understand as the legal language of ‘small print’.
Chrissie Maher, the veteran campaigner who began the war on waffle on this day in 1979, said the increasing acceptance of street slang could prevent younger generations from benefiting from clearer communication.
The 71-year-old said: “Youngsters have their own jargon and that’s all very well in its place but if they aren’t taught plain English it will hold them back when it comes to applying for jobs, signing hospital forms or applying for credit in a shop.
“Technology has brought benefits but also a lot of jargon and poor language that is not easily understood. With mobile phones it is so easy to slip back into text language and then suddenly you have used ‘woz’ instead of ‘was’ in a formal letter without even realising.”
Research shows three-quarters of school pupils believe it is acceptable to use abbreviations such as ‘lol’ in academic assignments, and exam boards including the Scottish Qualifications Authority have admitted answers containing text message language are given some marks as long as they are correct (…).
This week, Google launched its new platform for translation projects, the Google Translator Toolkit. The tool is designed for translators and is similar to translation memory (TM) tools available in the market — such as Across, Déjà Vu, Trados, and Wordfast — and integrates Google Translate’s statistical machine translation.
As we have been discussing in Common Sense Advisory’s research, and in recent industry gatherings, this is the long-needed revolution in an industry that has been trying to “out-Trados” Trados, or trying to increase the productivity of processes and pump up technology that is old and cumbersome. Google Translator Toolkit incorporates all the collaboration features of current technology in an elegant way and enables translators to regain control of the process.
Even though it is still a bare bones solution, it will attract early adopters. Hardcore TM users, on the other hand, will likely shun the new technology.
It is still early to predict the impact of this launch, but we expect that the following will happen:
TM tools will develop interfaces that will read/write Google TMs and Google MT if they want to stay in the market.
Pre-translation and post-editing will become standard practices, even for the most recalcitrant translators.
Discussions about the intellectual property of translation memories will become irrelevant, with a negative impact on efforts like TM Marketplace and the TAUS TDA initiative.
From the Google Translator Toolkit website, we also learn that:
It supports 47 languages.
Translations and glossaries each have a maximum size of 1MB.
Documents can be uploaded in most common file formats.
Translation memories have a maximum size of 50MB per upload.
Google Translator Toolkit is free, but in the future, Google plans to charge users whose translations exceed high-volume thresholds.
Google Translator Toolkit is not perfect. There are valid concerns about using it, along with the predictable resistance to change by those tied to the existing model. However, Google has already changed our behavior in the way we look for information. Now, it is launching a platform that has the potential to revolutionize the translation process, especially if combined with Google Wave, which is expected to be launched soon.
The role of the language services industry is to evolve from this stage. Alea jacta est (the die is cast)!
Global software product development and globalization services are converging. The globalization services that make it possible for companies to sell and support their products and services outside of their home markets – internationalization engineering, software localization, website globalization, international QA & testing – are moving upstream, as more and more software development functions are outsourced.
The current economic downturn may have slowed the transition from U.S. GAAP (Generally Accepted Accounting Principles) to IFRS (International Financial Reporting Standards), but Paul Munter, Audit Partner with KPMG, explains why and how U.S. companies need to put this issue back on their financial radar screens. You can meet Paul in person at LISA@Berkeley on August 5, when he addresses The Potential for IFRS Adoption in the U.S.
SDL International made a technology announcement recently that will be of interest to any of you who are focused on streamlining the way you create, distribute, manage and pay for your global content, whether that be in the form of websites, marketing and sales materials, product documentation, support information, legal documents – whatever. We asked Andrew Draheim, one of the most experienced, hands-on content globalization consultants, to put the SDL announcement in perspective.
NOTE: Neither Andrew nor LISA has had access to software or any specifications, so all comments below are based on publicly available information from SDL.
What did Facebook have to say at the LISA Forum Asia 2009 in Taipei, after announcing that it had reached 200 million users – 70% of whom are outside of the U.S.? Did you know that Google is currently failing in all non-English search markets in Asia? And why is Thomas Friedman (of “the world is flat” fame) flat out wrong?