Automation, Algorithms, and Bias, from Settler Colonialism through the Future of Auditing: Algorithms & AI
On April 16, 2021, the Digital Scholarship Studio & Network (DSSN) hosted a symposium exploring automation, algorithms, and bias. The speakers were Sarah Montoya, Cathy O’Neil, and Ewa Plonowska Ziarek.
Last Updated: Apr 17, 2024 11:42 AM
Articles
- Equality of Opportunity in Supervised Learning: We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy.
- Measurement and Fairness: We propose measurement modeling from the quantitative social sciences as a framework for understanding fairness in computational systems. Computational systems often involve unobservable theoretical constructs, such as socioeconomic status, teacher effectiveness, and risk of recidivism. Such constructs cannot be measured directly and must instead be inferred from measurements of observable properties (and other unobservable theoretical constructs) thought to be related to them -- i.e., operationalized via a measurement model. This process, which necessarily involves making assumptions, introduces the potential for mismatches between the theoretical understanding of the construct purported to be measured and its operationalization. We argue that many of the harms discussed in the literature on fairness in computational systems are direct results of such mismatches. We show how some of these harms could have been anticipated and, in some cases, mitigated if viewed through the lens of measurement modeling. To do this, we contribute fairness-oriented conceptualizations of construct reliability and construct validity that unite traditions from political science, education, and psychology and provide a set of tools for making explicit and testing assumptions about constructs and their operationalizations. We then turn to fairness itself, an essentially contested construct that has different theoretical understandings in different contexts. We argue that this contestedness underlies recent debates about fairness definitions: although these debates appear to be about different operationalizations, they are, in fact, debates about different theoretical understandings of fairness. We show how measurement modeling can provide a framework for getting to the core of these debates.
- Major Universities Are Using Race as a “High Impact Predictor” of Student Success: Students, professors, and education experts worry that this practice is pushing Black students in particular out of math and science.
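The equality-of-opportunity criterion described in the first article above can be checked concretely: a binary predictor satisfies equal opportunity when its true-positive rate is the same across groups defined by the sensitive attribute. The sketch below, with illustrative toy data (not from the paper), computes that gap; the function names are our own, not from the authors' code.

```python
def true_positive_rate(y_true, y_pred):
    """TPR = fraction of actual positives (y = 1) that the model predicts as 1."""
    positives = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(positives) / len(positives) if positives else 0.0

def equal_opportunity_gap(y_true, y_pred, group):
    """Largest difference in TPR between any two groups; 0 means equal opportunity holds."""
    tprs = []
    for g in set(group):
        idx = [i for i, v in enumerate(group) if v == g]
        tprs.append(true_positive_rate([y_true[i] for i in idx],
                                       [y_pred[i] for i in idx]))
    return max(tprs) - min(tprs)

# Toy example: groups "a" and "b" each contain qualified applicants (y = 1),
# but the predictor recovers them at different rates (0.5 vs. 1.0).
y_true = [1, 1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 1, 1]
group  = ["a", "a", "a", "b", "b", "b"]
print(equal_opportunity_gap(y_true, y_pred, group))  # 0.5
```

The paper's contribution goes further: given such a gap, it shows how to post-process any learned predictor to remove it, shifting the cost of poor classification onto the decision maker.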
Websites
- Stanford Institute for Human-Centered Artificial Intelligence 2019–2020 Annual Report: Find out how HAI is advancing AI research, education, policy, and practice to improve the human condition.
- Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies. This is the final version of a report commissioned by the Administrative Conference of the United States to study how agencies acquire artificial intelligence (AI) systems and oversee their use. It includes a rigorous canvass of AI use at the 142 most significant federal departments, agencies, and sub-agencies; a series of in-depth but accessible case studies of specific AI applications at seven leading agencies covering a range of governance tasks; and a set of cross-cutting analyses of the institutional, legal, and policy challenges raised by agency use of AI.