Time: 3:00 – 4:30 pm
708 Broadway, Room 801
This talk aims to toss around several thorny questions, such as (1) can and should machines learn morality, (2) can AI be truly safe without a basic sense of ethics and morals, (3) might so-called "AI alignment" be at odds with "value pluralism", and (4) is it even possible for philosophers to meaningfully collaborate with scientists on AI. What makes these questions especially challenging are the paradoxical aspects of AI's (in)capabilities and the significant power implications of AI, not to mention the ongoing debate about morality among humanity at large.
To ground these questions, I will start by briefly introducing Delphi, an experimental framework based on deep neural networks trained to (pretend to) reason about descriptive ethical judgments, e.g., that "helping a friend" is generally good, while "helping a friend spread fake news" is not.
I'll then share the lessons learned and present a range of follow-up efforts, including Value Kaleidoscope, a new computational attempt to model the pluralistic values, rights, and duties that are often intertwined in real-life situations (e.g., "lying to a friend to protect their feelings," where honesty and friendship are at odds).
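As a rough illustration of the input-to-judgment interface a Delphi-style model exposes, here is a minimal sketch using the Hugging Face transformers library. This is not the authors' released code, and "your-delphi-style-checkpoint" is a placeholder for whatever sequence-to-sequence checkpoint one has available; the actual Delphi demo is served as a web interface.

```python
# Minimal sketch: query a Delphi-style seq2seq model for a descriptive
# ethical judgment about a free-form action description.
# NOTE: "your-delphi-style-checkpoint" is a hypothetical placeholder,
# not a real published model ID.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("your-delphi-style-checkpoint")
model = AutoModelForSeq2SeqLM.from_pretrained("your-delphi-style-checkpoint")

for action in ["helping a friend", "helping a friend spread fake news"]:
    # Encode the action description and generate a short textual judgment.
    inputs = tokenizer(action, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=16)
    judgment = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    print(f"{action!r} -> {judgment}")
```

The point of the sketch is only that such systems map an everyday action phrase to a short normative label, which is precisely what makes the pluralism question above so pointed: a single generated label flattens the competing values (honesty vs. friendship) that Value Kaleidoscope tries to model explicitly.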
Questions?
Contact us at bioethics@nyu.edu