Multiparty Conversation across the Lifespan
In everyday life, we adjust our language based on who we’re speaking with and how much we know about each other. We create shared terms to quickly refer to things we frequently discuss, like calling a local coffee shop ‘ER’ with friends but ‘Espresso Royale on Goodwin Street’ to strangers. This process of adjusting our speech based on our audience is known as Audience Design (Clark, 1996).
We study how this process scales up in conversations involving more than two people who have varying levels of knowledge. We also explore when and why this ability to engage in effective audience design declines across the lifespan, and what factors help older adults maintain successful communication.
Interaction between Memory and Language
To make conversations smooth and meaningful, we need to remember what the other person knows. Our research looks at how memory and language work together in everyday conversations. We study how people remember what they’ve talked about while they’re speaking and listening.
We also study how people with memory problems, like those with amnesia or dementia, communicate. People with amnesia have significant memory loss due to damage to a part of the brain called the hippocampus. The hippocampus is important for creating new memories, so when it’s damaged, it might be hard for someone with amnesia to keep track of what’s been talked about in a conversation. One question we’re exploring is whether people with amnesia can still remember what different conversation partners know and adjust their language accordingly. We also work with people who have neurodegenerative diseases, like Alzheimer’s or Parkinson’s, to understand how their memory and thinking problems affect their communication.
Contextual Effects in Language
Lexical differentiation refers to speakers’ tendency to elaborate their referring expressions with modifiers, e.g., “the striped shirt”, when a different exemplar from the same category has previously been mentioned. Although lexical differentiation has been replicated many times in production, listeners show no evidence of it in behavioral studies (e.g., eye-gaze; Yoon & Brown-Schmidt, 2014). I revisit lexical differentiation in both production and comprehension by testing (in)appropriately differentiated expressions while recording participants’ electrical brain activity.
Disfluency Processing
Speakers are often disfluent (~6 disfluencies per 100 words in spontaneous speech). Disfluencies (e.g., “um” or “uh”) carry no linguistic information, yet listeners actively process them and use them to predict upcoming words. It is well established that young adults expect something hard to label, or a discourse-new referent, following a speaker’s disfluency (Arnold et al., 2004). They also readily cancel this expectation when there is a clear reason for the speaker’s disfluency (e.g., when a naive partner has just joined the conversation; Yoon & Brown-Schmidt, 2014). I’m interested in how children process disfluency, particularly when they interact with multiple partners who have different knowledge from one another. My research shows that 4-year-old children flexibly process disfluency with respect to the current partner’s knowledge state, rather than relying on their own egocentric knowledge.