TY - JOUR
AU - Shah, Deven
AU - Schwartz, H. Andrew
AU - Hovy, Dirk
TI - Predictive Biases in Natural Language Processing Models: A Conceptual Framework and Overview
AB - An increasing number of natural language processing papers address the effect of bias on predictions, introducing mitigation techniques at different parts of the standard NLP pipeline (data and models). However, these works have been conducted individually, without a unifying framework to organize efforts within the field. This situation leads to repetitive approaches, and focuses overly on bias symptoms/effects, rather than on their origins, which could limit the development of effective
JF - Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
DO - 10.18653/v1/2020.acl-main.468
DA - 2020-01-01
UR - https://www.deepdyve.com/lp/unpaywall/predictive-biases-in-natural-language-processing-models-a-conceptual-ax3x0fGbzi
DP - DeepDyve
ER -