For several years, the field of quality checking tools has been largely stagnant, marked only by incremental updates to established products. Recently, TAUS’ Dynamic Quality Framework (DQF) and the EU’s Multidimensional Quality Metrics (MQM) have set the stage for new developments in quality assessment, thanks to their fresh methods and push for standardization. In this blog, we’ll review three new market entrants that are hoping to shake up this area. But let’s start with an overview of the types of tools out there:
- Automatic quality checkers. These tools use pattern recognition and other language technology approaches to flag potential problems, such as broken or missing links, inconsistent terminology, and missing content. They help linguists find and fix these issues during production to ensure quality.
- Quality assessment scorecards. Many LSPs use spreadsheet-based tools or simple software applications to count errors in translations and generate quality scores. They use the figures these produce to decide whether target texts meet thresholds for acceptance. The classic example of such a system is the now-defunct LISA QA Model, but most CAT tools have some basic functions in this area.
Both of these approaches serve their purpose and benefit LSPs and their clients alike, but three companies are bringing fresh energy to an area that has been something of a language technology backwater. In CSA Research’s briefings with the developers of these tools, we saw encouraging signs that quality assessment is taking off again.