The NYU Alignment Research Group

The NYU Alignment Research Group (ARG) is a group of researchers doing empirical work with language models, aiming to address longer-term concerns about the impacts of deploying highly capable AI systems. See our introductory post for more on what this initiative is about and why we started it.

We overlap with and work closely alongside a variety of groups at NYU, including Machine Learning for Language (ML2), CILVR, the Center for Data Science, and the Department of Linguistics.

For talks, publications, and funding, see individual researcher pages.
