Optimizing LQA: on sampling translations for quality and alternatives to screenshot reviews

Source: Moravia Blog
Story flagged by: RominaZ

This post tackles a set of questions related to sampling translations and to alternatives to screenshot reviews for assessing software quality in context.

Question: Sample checks are good, but what do you recommend for achieving confidence in the finalized product? Samples must always be treated with caution: a translator may showcase their best effort in the sample while the rest of the translation is of poor quality.

Sample checks are only effective if they are done on randomly selected elements (strings, segments, or other parts) of the localized content.

The best practice is to review 100% of the content; if this is not possible (e.g., due to content size and/or turnaround times), pick the sections to be checked from the whole deliverable submitted by the translators, so that the translators never know in advance what will be sampled. When doing so, we recommend taking into account how visible and important the checked content is to end users.

At the same time, the localized content needs to be assessed in context: that is, as logical sequences of strings, complete sections of help or documentation, and so on, rather than as isolated fragments.
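As a minimal sketch of this approach, assuming the deliverable is available as a flat list of segments (the names sample_blocks, block_size, and sample_ratio are illustrative, not from any tool mentioned here), random contiguous blocks can be drawn so that reviewers see logical sequences rather than isolated strings:

    import random

    def sample_blocks(segments, block_size=10, sample_ratio=0.05, seed=None):
        """Pick random contiguous blocks of segments for LQA review.

        Contiguous blocks preserve enough context (logical string
        sequences, help sections) to judge translations in context,
        while random starting positions keep the selection
        unpredictable to translators.
        """
        rng = random.Random(seed)
        # Possible starting positions for a full-size block.
        starts = range(max(1, len(segments) - block_size + 1))
        # Number of blocks needed to cover roughly sample_ratio of the content.
        n_blocks = max(1, round(len(segments) * sample_ratio / block_size))
        chosen = sorted(rng.sample(starts, min(n_blocks, len(starts))))
        return [segments[s:s + block_size] for s in chosen]

    # Example: review roughly 5% of a 1,000-segment deliverable in blocks of 10.
    deliverable = [f"segment {i}" for i in range(1000)]
    for block in sample_blocks(deliverable, block_size=10, sample_ratio=0.05):
        print(block[0], "...", block[-1])  # hand each block to a reviewer

Overlapping blocks are possible in this sketch; deduplicating starting positions or weighting the draw toward highly visible content would be natural refinements.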

It is important to define the QA (i.e., sampling) strategy upfront, since different project types require different sampling frequencies and scopes.
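To illustrate, such an upfront strategy could be captured as a simple lookup table; the project types, ratios, and frequencies below are assumptions made for the sketch, not recommendations from the post:

    # Illustrative sampling strategy by project type (all values are
    # assumptions for this sketch).
    SAMPLING_STRATEGY = {
        "ui_strings":    {"sample_ratio": 0.20, "frequency": "every delivery"},
        "help_content":  {"sample_ratio": 0.10, "frequency": "every delivery"},
        "documentation": {"sample_ratio": 0.05, "frequency": "per milestone"},
        "marketing":     {"sample_ratio": 1.00, "frequency": "every delivery"},  # review 100%
    }

    def plan_for(project_type):
        """Look up the pre-defined sampling scope and frequency."""
        return SAMPLING_STRATEGY[project_type]

    print(plan_for("ui_strings"))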

Question: Do you see an alternative to screenshot reviews for assessing the in-context quality of software?

Ideally, in-context references are available at the time of translation: strings are accompanied by links to screenshots or pre-release builds, separate tools for viewing context are provided, or translators have remote access to the build.

One alternative is standard linguistic testing to verify UI translation quality in context. Beyond screenshot reviews done by linguists recruited from the translation team, we have recently seen scenarios where the screenshot review was done by a community of potential end users selected by the software producer.

The community's feedback, provided according to a pre-defined scenario, is moderated by a linguist appointed by the LSP and triaged either by the LSP or by the producer. Such a screenshot review should be scheduled as soon as the first usable set of screenshots of the localized content is available.

A much more efficient alternative to a community screenshot review early in the localization cycle is a pre-release build review executed according to the same scenario.

Adobe is one company leading the way in this regard with its Adobe Localized Prerelease Program, which uses volunteer testers from around the world to test localized pre-release versions of several Adobe products, feeding the end-user perspective into the localization cycle early on. More on the process and results can be found in the Localization World 2011 Barcelona presentation "A6: International User Outreach and Prerelease Program at Adobe Systems."

Needless to say, a screenshot or build review early in the localization cycle reduces the number of bugs reported during the standard in-context linguistic and functional testing that follows. It also improves quality at source for ongoing, long-term, or batch-based localization projects, since localizers can rely on the translations already in the file being correct in the given context.

See: Moravia Blog

Thanks to @LinguaGreca on Twitter
