Automation, Algorithms, and Bias, from Settler Colonialism through the Future of Auditing: Algorithms & AI
Articles
- Equality of Opportunity in Supervised Learning: We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy.
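The core idea of the abstract above, adjusting an already-trained predictor so that qualified individuals in each group are accepted at the same rate, can be illustrated with a minimal sketch. This is not the paper's algorithm, just a toy post-processing step on synthetic data: all variable names and the score model are invented here, and the sketch simply searches for a per-group decision threshold that brings each group's true positive rate close to a common target.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data standing in for a learned predictor's scores,
# a binary target y, and a binary sensitive attribute a.
n = 10_000
a = rng.integers(0, 2, n)                    # group membership (illustrative)
y = rng.integers(0, 2, n)                    # true outcome
scores = np.clip(0.5 * y + 0.25 * a + rng.normal(0, 0.2, n), 0, 1)

def tpr(scores, y, thresh):
    """True positive rate of thresholded predictions among y == 1."""
    pos = y == 1
    return np.mean(scores[pos] >= thresh)

# Choose a per-group threshold so each group's true positive rate
# lands as close as possible to one shared target rate -- the
# "equal opportunity" idea of matching TPRs across groups.
target_tpr = 0.8
thresholds = {}
for g in (0, 1):
    mask = a == g
    grid = np.linspace(0, 1, 1001)
    rates = np.array([tpr(scores[mask], y[mask], t) for t in grid])
    thresholds[g] = grid[int(np.argmin(np.abs(rates - target_tpr)))]

for g, t in thresholds.items():
    rate = tpr(scores[a == g], y[a == g], t)
    print(f"group {g}: threshold {t:.3f}, TPR {rate:.3f}")
```

Because the groups receive different score distributions here, the equal-opportunity adjustment naturally selects different thresholds for each, which is the sense in which the paper shifts the cost of a poor classifier onto the decision maker rather than the disadvantaged group.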
- Measurement and Fairness: We propose measurement modeling from the quantitative social sciences as a framework for understanding fairness in computational systems. Computational systems often involve unobservable theoretical constructs, such as socioeconomic status, teacher effectiveness, and risk of recidivism. Such constructs cannot be measured directly and must instead be inferred from measurements of observable properties (and other unobservable theoretical constructs) thought to be related to them -- i.e., operationalized via a measurement model. This process, which necessarily involves making assumptions, introduces the potential for mismatches between the theoretical understanding of the construct purported to be measured and its operationalization. We argue that many of the harms discussed in the literature on fairness in computational systems are direct results of such mismatches. We show how some of these harms could have been anticipated and, in some cases, mitigated if viewed through the lens of measurement modeling. To do this, we contribute fairness-oriented conceptualizations of construct reliability and construct validity that unite traditions from political science, education, and psychology and provide a set of tools for making explicit and testing assumptions about constructs and their operationalizations. We then turn to fairness itself, an essentially contested construct that has different theoretical understandings in different contexts. We argue that this contestedness underlies recent debates about fairness definitions: although these debates appear to be about different operationalizations, they are, in fact, debates about different theoretical understandings of fairness. We show how measurement modeling can provide a framework for getting to the core of these debates.
- Major Universities Are Using Race as a “High Impact Predictor” of Student Success: Students, professors, and education experts worry that this practice is pushing Black students in particular out of math and science.
Websites
- Stanford Institute for Human-Centered Artificial Intelligence 2019-2020 Annual Report: Find out how HAI is advancing AI research, education, policy, and practice to improve the human condition.
- Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies. This is the final version of a report commissioned by the Administrative Conference of the United States to study how agencies acquire artificial intelligence (AI) systems and oversee their use. It includes a rigorous canvass of AI use at the 142 most significant federal departments, agencies, and sub-agencies; a series of in-depth but accessible case studies of specific AI applications at seven leading agencies covering a range of governance tasks; and a set of cross-cutting analyses of the institutional, legal, and policy challenges raised by agency use of AI.
Books
Ethics of Artificial Intelligence
As Artificial Intelligence (AI) technologies rapidly progress, questions about the ethics of AI, in both the near term and the long term, become more pressing than ever. This volume features seventeen original essays by prominent AI scientists and philosophers and represents state-of-the-art thinking in this fast-growing field.
See specifically: Cathy O’Neil and Hanna Gunn, "Near-Term Artificial Intelligence and the Ethical Matrix"
Weapons of Math Destruction
We live in the age of the algorithm. Increasingly, the decisions that affect our lives...are being made not by humans, but by machines. In theory, this should lead to greater fairness: Everyone is judged according to the same rules.
But as mathematician and data scientist Cathy O'Neil reveals, the mathematical models being used today are unregulated and uncontestable, even when they're wrong. Most troubling, they reinforce discrimination--propping up the lucky, punishing the downtrodden, and undermining our democracy in the process.
Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor
Eubanks investigates the impacts of data mining, policy algorithms, and predictive risk models on poor and working-class people in America. She shows how automated systems, rather than humans, control which neighborhoods get policed, which families attain needed resources, and who is investigated for fraud. While we all live under this new regime of data, the most invasive and punitive systems are aimed at the poor.
Data Feminism
In Data Feminism, Catherine D'Ignazio and Lauren Klein present a new way of thinking about data science and data ethics--one that is informed by intersectional feminist thought. Illustrating data feminism in action, D'Ignazio and Klein show how challenges to the male/female binary can help challenge other hierarchical (and empirically wrong) classification systems. Data Feminism offers strategies for data scientists seeking to learn how feminism can help them work toward justice, and for feminists who want to focus their efforts on the growing field of data science. But Data Feminism is about much more than gender. It is about power, about who has it and who doesn't, and about how those differentials of power can be challenged and changed.