When a test is administered, ideally a testee who scores well on certain questions will also score well on other questions of similar domain and difficulty.

When this is not the case, the discrepancy could be due to chance, but it could also point to a bad question: one that is ambiguous, inappropriate, or a poor measure of testee ability.
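One common way to operationalize this idea is an item-total correlation: a question whose scores correlate poorly with testees' performance on the rest of the test is a candidate for review. The sketch below is purely illustrative, assuming a simple score matrix and an arbitrary threshold; it is not a description of our actual scoring pipeline.

```python
# Sketch: flag potentially weak questions via item-total correlation.
# The data layout and the 0.2 threshold are illustrative assumptions.
from statistics import correlation  # available in Python 3.10+

def flag_weak_items(scores: list[list[float]], threshold: float = 0.2) -> list[int]:
    """scores[t][q] is testee t's score on question q.
    Returns indices of questions whose scores correlate weakly with
    each testee's total on the remaining questions."""
    num_items = len(scores[0])
    weak = []
    for q in range(num_items):
        item = [row[q] for row in scores]                 # scores on question q
        rest = [sum(row) - row[q] for row in scores]      # total excluding question q
        if correlation(item, rest) < threshold:
            weak.append(q)
    return weak
```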

If the tests are scored by independent evaluators, discrepancies can also arise from variations in evaluator criteria, or even from differences in evaluator motivation, thoroughness, or incentives.

Most translators have had this experience: an agency asks you to take a translation test, usually a very short passage to translate. Your translation then goes to another translator for evaluation.

Yet when you are told you did not pass the test and ask to see the feedback, you realize either that a) the evaluator is not a capable translator, or b) the evaluator was overzealous with stylistic corrections, perhaps because they did not want to share future work opportunities with you.

We designed our tests to counteract this phenomenon. Our tests break translation ability down into distinct, objectively scorable skills that are easier to measure in isolation than when combined in the full translation process. These skill scores are then compared against how the same criteria are met in a translation passage, which also appears in the test. When large discrepancies arise, the test and the evaluator are flagged, and the test is sent to additional evaluators until we are confident that the testee has received a fair and standardized evaluation.
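As a rough illustration of the flagging step only: the skill names, the 10-point scale, and the threshold below are assumptions made for the sketch, not the certificate's actual criteria or parameters.

```python
# Sketch of the discrepancy check: compare per-skill scores against the
# evaluator's scores for the same criteria in the translation passage.
# Skill names, the threshold, and the scale are illustrative assumptions.

DISCREPANCY_THRESHOLD = 2.0  # assumed: largest acceptable gap on a 10-point scale

def needs_more_evaluators(skill_scores: dict[str, float],
                          passage_scores: dict[str, float]) -> bool:
    """Return True if any skill score diverges from the corresponding
    passage score by more than the threshold, in which case the test
    and evaluator would be flagged for additional review."""
    return any(
        abs(skill_scores[skill] - passage_scores.get(skill, skill_scores[skill]))
        > DISCREPANCY_THRESHOLD
        for skill in skill_scores
    )

# Example: the terminology and grammar scores agree, but the style scores
# diverge sharply, so the test would be routed to another evaluator.
print(needs_more_evaluators(
    {"terminology": 8.5, "grammar": 9.0, "style": 8.0},
    {"terminology": 8.0, "grammar": 8.5, "style": 4.5},
))  # True
```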

Our test is the first, and to date the only, translation test that follows this method.

Our rigorous test design procedures are why language service providers (LSPs) consistently rate linguists holding the Meridian Certificate of Translation Ability as reliably high-quality and deserving of higher per-word and hourly rates.