Two papers by NYU CUSP Class of 2019 Graduate students were accepted by the Association for Computational Linguistics (ACL) 2019 Student Research Workshop!
Yusu Qian and Urwa Muaz’s paper “Reducing Gender Bias in Word-Level Language Models Using a Gender-Equalizing Loss Function” (with additional authors Ben Zhang and Jae Won Hyun) will be presented at the conference, which will be held in Florence, Italy in July.
This paper proposes a novel method to address gender bias in neural language models. We introduce a penalty term to the objective function of the language model to penalize discrimination against gender. This method is simple and intuitive, and can easily be incorporated into any text generation model. The proposed model's performance was evaluated using multiple fairness metrics as well as perplexity, showing that this method, when trained with counterfactual data augmentation, outperforms other techniques in the literature.
Yusu Qian will also present the proposal “Gender Stereotypes Differ between Male and Female Writings” at the 2019 ACL Student Research Workshop.