On April 26, 2019, the NYU Center for Bioethics will host a workshop on “The Ethics of Artificial Intelligence and Healthcare”.
Artificial intelligence (AI) systems have the potential to dramatically improve health outcomes for patients by analyzing enormous amounts of data, identifying patterns, and predicting results. To succeed, though, these systems need access to personal and group health data, and they rely on complex algorithms that are difficult, and sometimes impossible, to understand. This creates a potential conflict with current ethical standards for the treatment of patients, which emphasize fairness, consent, and privacy. In this workshop hosted by the NYU Center for Bioethics, leading AI scientists, philosophers, and bioethicists will explore the following issues:
a) Bias in machine learning algorithms has led to discrimination in areas such as criminal sentencing, predictive policing, and hiring decisions. While healthcare poses some similar dangers, it often presents very different challenges. For example, data from clinical trials can discriminate against women, racial minorities, and the elderly because these groups are less likely to participate in trials, potentially leading AI algorithms to privilege treatment options optimized for specific demographics. Given these risks, what are some ways to minimize harmful algorithmic biases specifically in healthcare?
b) In a number of contexts, such as criminal sentencing, the predictive capacity of AI is not the only metric for success; we must also know how a decision was made in order to be able to explain it. But while explainability may be required in such contexts, is it necessary for gaining the informed consent of patients? If it is, does the explanation have to include the causal relationship between symptoms and diagnosis/treatment, or can the connection be merely correlational? Does the explanation have to be intelligible to patients, to doctors, or only to machine learning technicians? If certain diagnostic or treatment tools cannot meet the required standard of explanation, should we revert to more explainable but less predictive machine learning tools?
c) As with other applications of AI, realizing the benefits of these technologies in healthcare requires access to vast quantities of data, raising concerns about data privacy. Respecting patients' privacy remains one of the core values in healthcare. At the same time, as AI systems continue to develop, we may need to balance privacy concerns against novel medical innovations that can potentially save many lives. Can machine learning help us achieve both objectives? Or must there inevitably be a tradeoff?
Space is limited. Please RSVP to express interest. Priority will be given to those who have demonstrated an interest in the area. Confirmations will follow in the coming weeks.
Speakers and panelists will include:
Glenn Cohen (Harvard), Tina Eliassi-Rad (Northeastern), S. Matthew Liao (NYU), Alex John London (Carnegie Mellon), Francesca Rossi (IBM), Walter Sinnott-Armstrong (Duke), Effy Vayena (ETH Zurich), Serena Yeung (Stanford)
Click below to see speaker abstracts and bios.