Abstract
We present the first challenge set and evaluation protocol for the analysis of gender
bias in machine translation (MT). Our approach uses two recent coreference resolution
datasets composed of English sentences that
cast participants into non-stereotypical gender
roles (e.g., “The doctor asked the nurse to help
her in the operation”). We devise an automatic
gender bias evaluation method for eight target languages with grammatical gender, based
on morphological analysis (e.g., the use of female inflection for the word “doctor”). Our
analyses show that four popular industrial MT
systems and two recent state-of-the-art academic MT models are significantly prone to
gender-biased translation errors for all tested
target languages. Our data and code are publicly available at https://github.com/
gabrielStanovsky/mt_gender