How do listeners perceive structure in music?
How can we model these perceptions computationally?
To answer these questions, I take an interdisciplinary approach, drawing on methods from music theory, cognitive psychology, and computer science. My research focuses primarily on the real-time aspects of music listening, in particular how emergent phenomena such as tonality and musical tension are perceived. I also develop computer applications, grounded in cognitive models, that facilitate musical creativity.
I am currently Associate Professor and Associate Director of Music Technology at New York University in the Department of Music and Performing Arts Professions, Steinhardt School. I am also a member of the NYU Music and Audio Research Lab (MARL) and the Max Planck/NYU Center for Language, Music, and Emotion (CLaME), and I co-founded the Northeast Music Cognition Group with colleagues at NYU and Yale. From 2017 to 2018, I was a Radcliffe Fellow in Computer Science at Harvard University.
Contact
Dept. of Music and Performing Arts Professions
35 W. 4th St. Suite 1077
New York, NY 10012
E-mail: mfarbood@nyu.edu
Phone: +1.212.992.7680