Friday, December 6, 2019; 4:00 – 6:00 PM
GCASL, 238 Thompson St, Room 288
New York, NY, 10012
Abstract: It is intuitively plausible to think that persons who are adversely affected by the use of machine learning systems have, amongst other things, a right to normative justification: a right to be given good reasons why the deployment of machine learning systems for a particular task is morally permissible, all things considered. But what are the normative foundations of this right, and what does justification in this context entail? One central challenge is that machine learning systems are (partially) opaque. This may constitute a hard explainability constraint: in some cases, it may be impossible to explain why particular outcomes occur, whether a given ML system is doing what it is supposed to be doing, and whether it is doing so in the way it is supposed to.
If explanation (and thus, explainability) is necessary—albeit not sufficient—for normative justification in this context, what does this imply for the right to justification? One possible response is to adopt a conditional account of the right to normative justification: persons have a right to justification only if explanation is possible. But the conditional solution seems intuitively unappealing because it lets potentially culpable agents off the hook rather easily.
Alternatively, one might reject the higher-order claim that justification requires explaining why and how particular outcomes occur in the context of machine learning systems. Several possible views are compatible with that move: on the one hand, one might replace the explanation requirement with a significantly weaker one, such as ex post transparency. Yet this option is not without normative costs. On the other hand, one might argue that persons have a right to justification irrespective of the presence of a hard explainability constraint. This move implies a demanding view of justification—but is it too demanding?
A third possible strategy is to argue that there is both a right to explanation and an independent right to normative justification. This view differs from the first two strategies, neither of which is committed to the view that there is a right to explanation. If persons have a right to explanation which is not derivative of the right to justification, this raises the question of whether explanation without justification is morally valuable. This is not obvious: we need to investigate which kinds of morally weighty interests, if any, justify the conferral of the right to explanation. Aside from articulating the independent normative sources of the right to explanation, defenders of this view would need to show who bears the correlative duty of explanation, since the set of agents who bear this duty might be significantly different from the set of agents who bear a duty of justification. This paper systematically assesses the normative implications of these three views.
Speaker Bio: Dr. Annette Zimmermann is a political philosopher working on the ethics of algorithmic decision-making, machine learning, and artificial intelligence. Dr. Zimmermann has additional research interests in moral philosophy (the ethics of risk and uncertainty) and legal philosophy (the philosophy of punishment), as well as the philosophy of science (models, explanation, abstraction).
In the context of her current research project “The Algorithmic Is Political”, she is focusing on the ways in which disproportionate distributions of risk and uncertainty associated with the use of emerging technologies, such as those arising from algorithmic bias and opacity, impact democratic values like equality and justice.
At Princeton, she is based at the Center for Human Values and at the Center for Information Technology Policy. Dr. Zimmermann holds a DPhil (PhD) and MPhil from the University of Oxford (Nuffield College and St Cross College), as well as a BA from the Freie Universität Berlin. She has held visiting positions at the Australian National University, Yale University, and SciencesPo Paris.