Intento's latest State of Machine Translation Report is even more objective, thanks to e2f

How can you make machine translation sound more natural? And how do you know whether suppliers use machines when you're paying for human translation? 

Today, we kick off a new blog series sharing how e2f helps the world's leading localization suppliers, global brands, and data scientists answer those perennial questions.

Keep reading to learn more, and be sure to subscribe to our blog!

Intento's annual report ranking machine translation suppliers is essential reading across the localization industry. For its most recent report, released today, Intento tapped e2f to make the findings even more objective.

Naturally, Intento aspires to eliminate any chance that a supplier could gain an unfair advantage by training an engine on the same dataset used to evaluate suppliers.

But to measure how closely machine translations resemble human translation, organizations must be certain that MT output is compared against a benchmark composed exclusively of human translations.

That's why Intento partnered with e2f to build an original golden dataset of human translations - one without any trace of machine translation or post-editing to degrade the naturalness of the benchmark strings.

Planning for human quality

Intento supplied e2f with collections of text strings representing key domain-specific use cases for machine translation: education, finance, healthcare, hospitality, legal, entertainment, general, IT, and colloquial speech.

For each of Intento's 11 target languages and 9 domains, e2f selected native translators with expert-level qualifications and high hand-translation quality scores from previous projects in similar domains. For reviews, e2f selected linguists with native proficiency in each target language, expertise in editing and proofreading across multiple domains, and a strong performance record as reviewers.

Next, e2f proofread Intento’s strings for compliance with proper English grammar, spelling, and punctuation, and supplied files to translators via e2f’s Translation, Editing, and Proofreading (TEP) platform.

Proprietary MT Detection

On an hourly basis, as linguists worked within the TEP platform, e2f’s unique MT Detection tool compared their translations with those generated by all the leading machine translation engines. Using e2f’s proprietary algorithm, the tool assessed the probability that any given string contained machine-translated and/or post-edited content (MTPE).

Strings whose MTPE probability exceeded e2f’s threshold were routed to reviewers, who analyzed them for naturalness. If the reviewer agreed the string resembled MTPE, e2f reminded the translator not to use machine translation and requested a fresh translation. If the MT Detection tool flagged the same translator’s work a second time, e2f reassigned the work to another translator with the necessary credentials.
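e2f's detection algorithm is proprietary, but the core idea described above can be illustrated with a toy sketch: compare each submitted translation against candidate outputs from MT engines, and flag any string whose similarity exceeds a threshold. Everything below (the function name, the use of `difflib` similarity, and the 0.9 threshold) is an illustrative assumption, not e2f's actual implementation.

```python
from difflib import SequenceMatcher

def mtpe_similarity(submitted: str, engine_outputs: list[str]) -> float:
    """Return the highest similarity ratio between a submitted translation
    and any candidate MT engine output (0.0 = no overlap, 1.0 = identical)."""
    return max(
        (SequenceMatcher(None, submitted.lower(), mt.lower()).ratio()
         for mt in engine_outputs),
        default=0.0,
    )

# Illustrative threshold only; a real system would tune this per language pair.
THRESHOLD = 0.9

submitted = "Le chat est assis sur le tapis."
engine_outputs = [
    "Le chat est assis sur le tapis.",
    "Le chat s'assoit sur le tapis.",
]

score = mtpe_similarity(submitted, engine_outputs)
if score > THRESHOLD:
    print(f"Flag for human review (similarity {score:.2f})")
```

In practice, surface similarity alone is a weak signal (a short, simple sentence has only so many correct translations), which is why flagged strings go to a human reviewer rather than being rejected automatically.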

As linguists reworked their translations, e2f's MT Detection tool continued assessing strings, ensuring the final golden dataset bears no traces of MTPE. e2f then ran quality assurance reports on capitalization, punctuation, spelling, numbers, spaces, and typos. Reviewers implemented the necessary changes and proofread the dataset prior to final delivery.
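Automated QA checks of the kind listed above can be sketched in a few lines. The checks below (number consistency, whitespace hygiene, sentence-final punctuation) are generic examples of such reports, not e2f's actual QA tooling, and the regex deliberately ignores locale differences in number formatting (e.g. 1,000 vs 1.000), which a production check would need to handle.

```python
import re

def qa_check(source: str, target: str) -> list[str]:
    """Run simple automated QA checks on a source/target string pair."""
    issues = []

    # Numbers appearing in the source should reappear in the target.
    src_nums = re.findall(r"\d+(?:[.,]\d+)?", source)
    tgt_nums = re.findall(r"\d+(?:[.,]\d+)?", target)
    for num in src_nums:
        if num not in tgt_nums:
            issues.append(f"number '{num}' missing from target")

    # Whitespace hygiene: no double spaces, no leading/trailing spaces.
    if "  " in target or target != target.strip():
        issues.append("extra whitespace in target")

    # If the source ends a sentence, the target should too.
    if source.rstrip()[-1:] in ".!?" and target.rstrip()[-1:] not in ".!?…":
        issues.append("missing sentence-final punctuation")

    return issues
```

For example, `qa_check("Pay 42 dollars.", "Payez 42 dollars.")` returns an empty list, while dropping the number or the final period from the target would each add an issue to the report.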

Results you can count on

Backed by e2f's golden dataset, Intento's supplier rankings are all the more trustworthy - and they may contain a few surprises. Whether you source translations, manage suppliers, or build conversational AI solutions, you'll want to read Intento's “The State of Machine Translation Report 2022,” available for download here.

Follow our blog to learn how e2f can help you improve your own supplier evaluation, verify translations, or train machines for natural-sounding conversational AI. Or contact sales directly for assistance setting up your project.
