Decoding Time for Music vs. Speech
This talk explores the decoding time of music vs. speech, as well as the possible role of brain rhythms in processing musical structure. The design of the study described here is based on an experiment by Ghitza and Greenberg (2009) that explored the role of cortical oscillations in speech perception by inserting silences into time-compressed speech and measuring the error rate for word identification. With no silences added, the word-identification error rate exceeded 50%. However, when silences (up to 160 ms) were inserted between every 40 ms segment of audio, performance improved. The inserted silences essentially provided the "necessary" decoding time. The current work explores whether there is an analogous decoding time for music, and whether the timescales for music and speech are similar.
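To make the stimulus manipulation concrete, below is a minimal sketch of how silences can be interleaved between fixed-length segments of an audio signal. It assumes a NumPy array of samples and a known sampling rate; the function name and parameters are illustrative, not the stimulus-generation code used in the original study.

```python
import numpy as np

def insert_silences(signal, fs, segment_ms=40, gap_ms=80):
    """Interleave silent gaps between fixed-length segments of an audio signal.

    signal     : 1-D NumPy array of samples (e.g., time-compressed speech or music)
    fs         : sampling rate in Hz
    segment_ms : duration of each audio segment kept intact (40 ms in the
                 Ghitza & Greenberg design)
    gap_ms     : duration of the silence inserted after each segment (varied
                 up to 160 ms in the original study)
    """
    seg_len = int(round(fs * segment_ms / 1000))
    gap = np.zeros(int(round(fs * gap_ms / 1000)), dtype=signal.dtype)

    pieces = []
    for start in range(0, len(signal), seg_len):
        pieces.append(signal[start:start + seg_len])  # 40 ms chunk of audio
        pieces.append(gap)                            # silent gap: added "decoding time"
    return np.concatenate(pieces)

# Hypothetical usage: compressed_speech is a 1-D array of time-compressed audio
# stretched = insert_silences(compressed_speech, fs=44100, gap_ms=80)
```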