As science, technology, and medicine advance, society will confront new ethical dilemmas at the nexus of public health policy and individual choice. The Master of Arts in Bioethics at the College of Global Public Health provides a strong philosophical foundation for navigating these urgent questions.
Time: 3:00 – 4:30 pm
708 Broadway, Room 801
This talk takes up several thorny questions: (1) can and should machines learn morality? (2) can AI be truly safe without a basic sense of ethics and morals? (3) might so-called “AI alignment” be at odds with value pluralism? and (4) is it even possible for philosophers to collaborate meaningfully with scientists on AI? What makes these questions especially challenging are the paradoxical aspects of AI’s (in)capabilities and the significant power implications of AI, not to mention the ongoing debate about morality among humanity at large.
To ground these questions, I will begin by briefly introducing Delphi, an experimental framework based on deep neural networks trained to (pretend to) reason about descriptive ethical judgments, e.g., that “helping a friend” is generally good, while “helping a friend spread fake news” is not.
I’ll then share the lessons learned and present a range of follow-up efforts, including Value Kaleidoscope, a new computational attempt at modeling the pluralistic values, rights, and duties that are often intertwined in real-life situations (e.g., “lying to a friend to protect their feelings,” where honesty and friendship are at odds).
Yejin Choi is the Wissner-Slivka Professor and a MacArthur Fellow at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. She is also a senior director at AI2 overseeing the Mosaic project, and a Distinguished Research Fellow at the Institute for Ethics in AI at the University of Oxford. Her research investigates whether (and how) AI systems can learn commonsense knowledge and reasoning, whether machines can (and should) learn moral reasoning, and various other problems in NLP, AI, and computer vision, including neuro-symbolic integration, language grounding with vision and interaction, and AI for social good. She is a co-recipient of two Test of Time Awards (at ACL 2021 and ICCV 2021), seven Best/Outstanding Paper Awards (at ACL 2023, NAACL 2022, ICML 2022, NeurIPS 2021, AAAI 2019, and ICCV 2013), the Borg Early Career Award (BECA) in 2018, the inaugural Alexa Prize Challenge in 2017, and IEEE AI’s 10 to Watch in 2016.