Abstracts

Session A1, Beat & Meter 1

9:30-10:15 AM in KC802

A1-1: Recent experience effects in complex rhythm processing

Carson G Miller Rigoli*(1), Sarah C Creel(1)
1:University of California, San Diego

In this submission, we present evidence that the performance of complex rhythms is influenced by recent rhythmic experience. When asked to produce or tap along with rhythms composed of multiple intervals, listeners typically distort those rhythms in systematic ways. Distorted rhythms frequently appear to be ‘normalized’ – though not fully – in the direction of ‘attractor’ rhythms with simple integer interval ratios such as 1:1 or 1:2. Two recent studies have demonstrated that the specific patterns of rhythmic distortion vary partly with the music-cultural environment of listeners. This invites the conclusion that ‘attractor’ rhythms may emerge with long-term experience. Here, we extend those findings by asking whether the performance of complex rhythms depends also on much more recent rhythmic experience. To answer this, we tested timing distortion for a range of two-interval rhythms after controlled exposure to priming rhythms. In a priming task, non-musicians (N = 64) tapped along with a single rhythm for a total of 250 cycles. The specific interval ratio of the rhythm presented to any participant was 1:1, 4:5, 4:7, or 1:2. Participants then synchronized with five new rhythms presented in counterbalanced, mixed blocks (ratios: 6:7, 3:4, 2:3, 3:5, 6:11). The average degree, and in some cases direction, of distortion for these five test rhythms was significantly modulated by the priming rhythm. We additionally found a trial-by-trial carryover effect; deviation from average rhythm distortion on any test trial was significantly predicted by performance on immediately preceding test trials. These results support the proposal that complex rhythm performance is sensitive to local rhythmic context extending tens of seconds and tens of minutes into the past. As a result, current models which explain complex rhythm processing in terms of (even locally) static representations must be revisited or augmented to account for contextual adaptation and change on developmental and short-term timescales.
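
As an illustration of the trial-by-trial carryover analysis described above — not the authors' code, and using simulated distortion values rather than the study's data — one can regress each test trial's deviation from a participant's mean distortion on the deviation of the immediately preceding trial:

import numpy as np

rng = np.random.default_rng(0)

# Illustrative per-trial distortion values for one participant,
# with an artificial lag-1 dependency built in.
n_trials = 100
deviation = np.empty(n_trials)
deviation[0] = rng.normal(0, 0.05)
for t in range(1, n_trials):
    deviation[t] = 0.4 * deviation[t - 1] + rng.normal(0, 0.05)

# Deviation from the participant's own average distortion
centered = deviation - deviation.mean()

# Lag-1 carryover: regress trial t on trial t-1
x, y = centered[:-1], centered[1:]
slope, intercept = np.polyfit(x, y, 1)
r = np.corrcoef(x, y)[0, 1]
print(f"lag-1 slope = {slope:.3f}, r = {r:.3f}")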

Subjects: Beat, rhythm, and meter, Cross-cultural comparisons/non-Western music; Music and development; Music and movement

When: 9:30 AM in KC802 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

A1-2: Recurrent timing nets for rhythmic expectancy

Peter A Cariani(1)
1:Boston University

Recurrent timing nets (RTNs) are neural networks that operate on temporally-coded inputs to produce temporally-coded outputs. The simplest neural representation of rhythm is direct temporal coding, i.e. temporal patterns of spikes associated with event onsets, and the simplest form of temporal pattern memory is a delay line. RTNs consist of arrays of delay loops (single- or multi-synaptic delay paths) with different recurrence times and adaptive, facilitating/depressing coincidence detectors that compare the delayed signal event patterns with incoming ones (Cariani, “Temporal Codes, Timing Nets, and Music Perception”, JNMR, 30(2):107-135, 2001; Cariani, “Temporal memory traces as anticipatory mechanisms”, Nadin, ed., Anticipation in Medicine, pp. 105-136, Springer, 2017). Results of recent RTN simulations will be presented and discussed. Each delay loop functions roughly as an adaptive comb filter whose loop gain increases when the delayed and current signals are highly correlated and decreases when these are only weakly so. Because incoming temporal patterns propagate through the delay loops to present the pattern again at their characteristic delay, the loops can function as complex pattern-oscillators. The representation of the rhythmic pattern expectancy is the sum of all of the arriving adaptively-weighted recurring signals at any moment. The processing resembles adaptive (nonlinear) running-autocorrelation/comb-filter signal processing and analysis. Such RTNs can potentially account for rhythmic pattern induction (buildup of a groove, the expectation of an exact repetition of a pattern of events) and metrical pattern induction (a regular temporal expectancy “frame” of accented/unaccented events irrespective of exact repetition). The nets build up temporal pattern expectancies to the extent that the event patterns are correlated with themselves, and adaptively adjust to the duration of repeated event sequences. When patterns regularly repeat, the pattern builds up in delay loop(s) whose recurrence times equal the (fundamental) period of the pattern and its multiples. When there are no repeated event patterns, expectancies revert to individual event probabilities. A complex, latency-based pulse pattern code is proposed that incorporates onset timings and event attributes (accent, pitch, timbre, loudness, duration), such that RTNs can also chunk events on the basis of patterns of these features in addition to onset timings.
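
As a toy sketch of the mechanism described above — not Cariani's implementation — the following builds a small bank of delay loops acting as adaptive comb filters: each loop's gain grows when the delayed signal coincides with incoming events and leaks otherwise, and the summed, gain-weighted recirculating signals form the expectancy trace. The delays, update rule, and constants are placeholder assumptions.

import numpy as np

def rtn_expectancy(onsets, delays, rate=0.05, leak=0.01):
    """Toy recurrent-timing net: each delay loop is an adaptive comb filter.

    onsets : 1-D array of 0/1 event markers on a fixed time grid
    delays : candidate loop recurrence times, in samples
    Returns the summed, gain-weighted recirculating signal (expectancy trace).
    """
    n = len(onsets)
    gains = np.zeros(len(delays))        # loop gain per delay loop
    loops = np.zeros((len(delays), n))   # signal circulating in each loop
    expectancy = np.zeros(n)

    for t in range(n):
        for i, d in enumerate(delays):
            delayed = loops[i, t - d] if t >= d else 0.0
            # Coincidence detection: gain grows when the delayed and current
            # signals agree, and leaks otherwise; clamp keeps each loop stable.
            gains[i] += rate * delayed * onsets[t] - leak * gains[i]
            gains[i] = min(max(gains[i], 0.0), 0.9)
            # Re-inject the incoming pattern plus the weighted recirculating signal.
            loops[i, t] = onsets[t] + gains[i] * delayed
            expectancy[t] += gains[i] * delayed
    return expectancy

# A pattern repeating every 8 samples builds expectancy in the 8-sample loop,
# while the 6- and 12-sample loops remain weak.
pattern = np.tile([1, 0, 0, 1, 0, 1, 0, 0], 20).astype(float)
trace = rtn_expectancy(pattern, delays=[6, 8, 12])
print(trace[-8:])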

Subjects: Beat, rhythm, and meter, Computational approach; Memory; Neuroscientific approach

When: 9:45 AM in KC802 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

A1-3: Children synchronize their finger taps to rhythms through iterated reproduction

Karli Nave*(1), Nori Jacoby(2), Jessica Mussio(1), Erin Hannon(1), Chantal Carrilo(3), Laurel Trainor(3)
1:University of Nevada, Las Vegas, 2:Max Planck Institute for Empirical Aesthetics, 3:McMaster University

Rhythm is ubiquitous in human communication. The ability to speak with a native accent or play music depends on listeners’ ability to perceive, reproduce and synchronize with rhythmic patterns. Previous research has shown that listeners assimilate rhythmic patterns towards familiar structures or priors, and these assimilation patterns vary by culture. In this study, we investigate whether children also assimilate rhythms to culture-specific structures, as previously shown with adults. North American children ages 6-11 years completed a perception task (song rating game) and a production task (interactive tapping game). In the perception task, children listened to a tiger play his favorite songs (simple: 4/4 meter or complex: 7/8 meter) and rated how well subsequent animals (variations on the original song) matched the tiger’s song. All children performed above chance, and children over 7 years of age showed greater sensitivity to rhythmic disruptions of culturally familiar simple-meter than unfamiliar complex-meter songs. In the production task, the children helped an astronomer communicate with aliens by tapping in synchrony with rhythms sent to Earth from outer space. On each iteration within a 5-iteration block, the child attempted to synchronize with the rhythm that they produced on the previous iteration. If children have robust culture-specific biases, then we expected that over successive iterations, their tapping would converge on those ratios preferred by North American adults in prior research. Tapping data collection is still underway, but results to date suggest that children’s iterated tapping does converge on integer ratios preferred by North American adults, although convergence patterns are much less stable in children. After data collection is complete, we plan to compare performance on both tasks to understand how enculturation processes unfold in perception and production.

Subjects: Beat, rhythm, and meter, Music and development

When: 10:00 AM in KC802 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

Session A2, Crossing Cultures

9:30-10:15 AM in KC905/907

A2-1: The Stories Music Tells: Cross-Cultural Narratives for Wordless Music

Elizabeth Margulis*(1), Patrick Wong(2), Natalie Phillips(3), Rhimmon Simchy-Gross(1), Gabrielle Kindig(3), Devin McAuley(3)
1:University of Arkansas, 2:Chinese University of Hong Kong, 3:Michigan State University

Despite ample evidence to the contrary, people still sometimes speak about music as a universal language. One way music can convey meaning is through seeming to tell a story, despite the lack of any words or lyrics. This project aimed to investigate the role of culture in these narrative experiences of instrumental music. Participants were tested at a research site in the US and in a rural village in Guizhou, China. Participants at the China site spoke Kam, a tone language independent of the Sino-Tibetan family (Ramsey, 1989) that possesses no widely used written form. Participants at the US site had little experience with the kind of Chinese media essential to building up sound-pattern associations, and vice versa for participants at the China site. Participants at both sites listened to 128 excerpts of wordless Chinese and Western music. Responses included open-ended descriptions of any stories they imagined while listening to each excerpt. While considerable topical consensus emerged among participants at each site for individual excerpts, this consensus extended across cultures in only a small minority of excerpts, such as excerpts that evoked a military parade. In many cases, what read as funereal to participants at one site read as happy to participants at the other, or what one group experienced as evocative of murder and paranoia sounded like a happy outing with friends to the other. As a manipulation check, the experiment was repeated at a second US site. Despite substantial geographic distance between the two US site locations, consensus themes remained similar, suggesting that even within broadly defined cultures (e.g. college students in the US), wordless music can evoke specific storylines. Moreover, the differences between responses at the US and China sites reinforce the notion that within-cultural associations drive narrative responses to music more than intrinsic structural aspects.

Subjects: Language and speech, Cross-cultural comparisons/non-Western music

When: 9:30 AM in KC905/907 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

A2-2: Timbre’s role in communicating emotions between performers and listeners from Western art music and Chinese music cultures

Lena Heng(1)
1:McGill University

Timbre has been identified by music perception scholars as an important component in the communication of emotions in music. While its function as a carrier of perceptually useful information about sound source mechanics has been established, studies of whether and how it functions as a carrier of information for communicating emotions in music are still in their infancy. If timbre functions as a carrier of emotion content in music, how it is used may vary across different musical traditions and has to be learned by musicians. To investigate whether there is a difference in the use of timbre by musicians of different musical traditions, two parallel groups of performers (three each on erhu, violin, pipa, guitar, dizi, flute) from the Chinese and Western art music traditions (n = 18) were recorded as they performed excerpts to express different emotions (happy, sad, angry, and neutral). Four groups of listeners (trained in Chinese music from Singapore, Western art music from Singapore and from Montreal, and Western nonmusicians from Montreal; n = 30 per group) listened to the recorded excerpts and classified the emotions the performers intended to express. Listeners trained in the two musical cultures differed significantly in identifying the performers’ intended emotions. Certain aspects of timbre seem to provide relevant information for musical communication. Finally, the recorded excerpts were analyzed to determine acoustic aspects that are correlated with timbre characteristics. Analysis revealed consistent differences in log attack times, frequency and amplitude modulations, spectral centroid, spectral flux, and spectral spread, suggesting purposeful manipulations of temporal, spectral, and spectrotemporal properties of the sounds by performers in their expression of emotions in music. These aspects of timbre appear to not only function as carriers of information for musical communication but also seem to be implicated in the learning of emotion expression in music that differs across musical traditions.

Subjects: Emotion, Cross-cultural comparisons/non-Western music; Timbre

When: 9:45 AM in KC905/907 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

A2-3: Similar acoustic events lead to strong emotional responses in music across cultures

Eleonora J Beier*(1), Petr Janata(1), Justin Hulbert(2), Fernanda Ferreira(1)
1:University of California, Davis, 2:Bard College

Music is a fundamental human trait playing an important role in all cultures. As music around the world presents great variability but also many shared characteristics, the extent to which listeners can infer the meaning of music from different cultures is debated. Listeners can form melodic expectations and infer the mood of music with unfamiliar tonal systems. However, emotion recognition in music is not necessarily equivalent to felt emotion. Thus, it is not yet clear whether music can induce felt emotional responses in listeners unfamiliar with its structure and cultural context, and what shared musical elements would elicit these emotions. We address these questions by measuring felt emotional responses, in the form of chills, in music from familiar and unfamiliar cultures. Stimuli consisted of musical excerpts from Western classical, traditional Chinese, and Hindustani classical music. Scrambled versions of these excerpts – presenting disrupted musical structure – acted as a control. Sixty-two participants, divided into three groups based on self-reported familiarity with each style, listened to all excerpts in pseudo-random order. Times of chills were recorded by button presses, and their magnitudes were measured as peaks in skin conductance. Results show that while scrambled music elicited significantly fewer chills than non-scrambled music, there were no effects of participant group and musical style; thus, participants felt strong emotional responses even in music from an unfamiliar tonal system. Acoustic analyses of our stimuli reveal that, for all styles, the timing of chills correlated with sudden peaks in amplitude envelope, brightness and roughness, while these correlations were not found in the scrambled pieces. Overall, our results indicate that listeners feel strong emotional responses to unfamiliar musical styles from other cultures, and that the same unexpected acoustic events induce these emotions cross-culturally.

Subjects: Cross-cultural comparisons/non-Western music, Aesthetics / preference; Emotion; Expectation

When: 10:00 AM in KC905/907 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

Session A3, Aging

9:30-10:15 AM in KC909

A3-1: Psychological Mechanisms underlying musical emotions in dementia

Gonçalo T Barradas(1)
1:Uppsala Universitet

Music is often described as an effective way to reach patients with dementia. Literature suggests that patients are able to identify musical emotions in the middle stage of the disease when familiar music is played. Yet, little is known about how progressive impairment of psychological mechanisms involved in the mediation between music and emotion might affect how listeners suffering from dementia respond emotionally to music. The aim of this study was to answer whether listeners suffering from dementia show different patterns of emotional reactions to music from controls by manipulating four psychological mechanisms involved in the mediation between music and felt emotions. In this study, several listeners (65–90 years old) took part in an experiment which compared elderly individuals diagnosed with dementia (Alzheimer’s and frontotemporal dementia) with healthy elderly controls. The participants listened to music stimuli designed to target specific psychological mechanisms (brain stem reflex, contagion, episodic memory, and musical expectancy), and were asked to rate felt emotions. Because self-report of felt emotions could be biased in individuals suffering from dementia, and underlying mechanisms cannot be observed, psychophysiological measures were obtained (skin conductance level and facial electromyography). Other variables assessed participants’ cognitive abilities (Mini Mental Test) and depression level (Geriatric Depression Scale). Based on previous studies, we made predictions about how dementia could affect each mechanism. Initial results will be presented, including an approach that may help determine future music interventions and rehabilitation with dementia patients.

Subjects: Emotion, Health and well-being

When: 9:30 AM in KC909 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

A3-2: Group singing improves psychosocial wellbeing in older adults

Arla Good*(1), Alexander Pachete(1), Gunter Kreutz(2), Alexandra Fiocco(1), Fran Copelli(1), Frank Russo(1)
1:Ryerson University, 2:University of Oldenburg

Many older adults, especially those living with age-related diseases such as Parkinson’s, face formidable challenges to psychosocial wellbeing. Foremost among these are the loneliness and depression that may arise following a diagnosis. Increasingly, older adults are discovering group singing as a meaningful social activity that may address challenges to psychosocial wellbeing. The research presented here is part of the SingWell project, an international research study aimed at investigating the potential for group singing to support psychosocial wellbeing in older adults living with various age-related diseases. Another aim of this project is to clarify the biological underpinnings of these benefits, including the effects of group singing on cortisol, a stress-related hormone. In the current study, we assess the impact of group singing on the psychosocial wellbeing of older adults in two newly established choirs: one group consisting of older adults living with Parkinson’s disease and the other consisting of healthy aging older adults. In a pre-post design, participants were asked to complete a brief questionnaire assessing their current positive and negative affect and perceived social connectedness, as well as provide a saliva sample used to assess cortisol levels before and after group singing. The data show a positive shift in mood and an increased level of social connection following a single session of group singing. Moreover, a preliminary analysis of the salivary assay reveals a decrease in cortisol levels following group singing. These findings provide new insights regarding the positive impact of group singing on psychosocial wellbeing in older adults.

Subjects: Health and well-being, Music therapy

When: 9:45 AM in KC909 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

A3-3: Effects of short-term choir participation on speech-in-noise perception and auditory processing in older adults with hearing loss

Ella Dubinsky*(1), Gabriel Nespoli(1), Emily A Wood(1), Frank Russo(1)
1:Ryerson University

Hearing loss, which most adults will experience to some degree as they age, has been associated with social isolation and reduced quality of life in aging adults. Although hearing aids target aspects of peripheral hearing loss, persistent perceptual deficits are widely reported. One prevalent example is the loss of the ability to perceive speech in noise, which severely impacts quality of life and often persists in spite of peripheral remediation. Musicianship has been shown to enhance aspects of auditory processing, but has rarely been studied as a short-term intervention for improving these abilities in older adults. Experiment 1 investigated whether ten weeks of choir participation could improve aspects of auditory processing in older adults; measures included speech-in-noise perception (SIN), pitch discrimination (FDL), and the frequency following response to brief auditory stimuli (FFR). Choir participants and an age- and audiometry-matched do-nothing control group underwent the same battery of pre- and post-testing auditory and EEG assessments. The choir singing group demonstrated improvements across auditory measures over the ten-week period, while the control group did not, supporting the hypothesis that speech-in-noise perceptual outcomes can be supported by group singing. Experiment 2 investigated whether these effects may generalize to hearing-aided older adults, through a three-arm clinical trial in which hearing-aided participants were randomly assigned to one of three 14-week interventions: a) group choral singing; b) group music listening classes; or c) a waitlist do-nothing control group. The interventions were designed to elucidate the benefits of music production (singing) versus perceptual experience (listening). Participants underwent pre- and post-testing assessments of SIN, FDL, and FFR (Experiment 1), as well as measures of beat perception (BAT), spectrotemporal sensitivity (STMS), emotional identification (RAVDESS), and auditory working memory (digit span; WAIS). Preliminary findings inform the use of musical interventions to target aspects of auditory processing in hearing-aided older adults.

Subjects: Health and well-being, Music and hearing

When: 10:00 AM in KC909 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

Session A4, Ensemble Performance 1

9:30-10:15 AM in KC914

A4-1: Role of ears, heads, and eyes in vocal duet performance

Caroline Palmer*(1), Frances Spidle(1), Erik Koopmans(1), Peter Schubert(1)
1:McGill University

How do singers’ head movements change in ensemble performance? Amateur vocalists sang a familiar melody in Solo and Duet conditions. They sang the duets at a metronome-cued pace in Unison (same pitches at same time) and Round (pitches at a delay from one another) and in 2 Visual conditions: Inward-facing (full view of partner) and Outward-facing (no view of partner). Each person performed all Duet conditions as Leader (who kept the cued tempo) and as Follower. Motion capture markers on a headband measured each participant’s side-to-side (azimuth, in degrees) and up-and-down (elevation, in degrees) head motion. Pairs with larger tempo differences in (tempo-uncued) Solo performances showed larger asynchronies in Duet performances. Outward-facing conditions yielded slightly larger tone asynchronies than Inward-facing conditions. Vocalists’ side-to-side head movements were more variable in Round than in Unison conditions. Additionally, in the Inward-facing condition, the Follower turned away from the Leader more in Round than in Unison. Head elevation measures also indicated greater variability for the singers in Round than in Unison conditions. To test whether the up-down head movements were related to the rhythmic structure, FFT analyses were performed to examine the power present at 4 frequencies related to the hierarchical meter. Maximal power for 30 of 32 participants was found at the frequency corresponding to one head movement cycle every 4 beats (one measure). Power at the 4-beat frequency increased in Duet performances relative to Solo performances. Power was greater in Round than in Unison conditions, and greater for the Follower than the Leader. Comparisons of singers’ head elevation with asynchronies indicated that the Follower’s power at the 4-beat level, but not the Leader’s, predicted asynchrony. These findings suggest that singers’ head movements increased in correspondence to the metrical structure, more so in difficult conditions specific to coordinating in time with a partner.
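
A minimal sketch — with simulated data and placeholder sampling rate and tempo, not the study's motion-capture pipeline — of how power at frequencies tied to the metrical hierarchy (e.g., one head-movement cycle per 4-beat measure) can be read off an FFT of a head-elevation trace:

import numpy as np

rng = np.random.default_rng(0)
fs = 100.0            # motion-capture sampling rate (Hz), placeholder
beat_period = 0.5     # seconds per beat (120 bpm), placeholder
t = np.arange(0, 30, 1 / fs)

# Illustrative head-elevation trace: one slow cycle per 4-beat measure plus noise
measure_freq = 1 / (4 * beat_period)
elevation = 2.0 * np.sin(2 * np.pi * measure_freq * t) + rng.normal(0, 1, t.size)

spectrum = np.fft.rfft(elevation - elevation.mean())
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
power = np.abs(spectrum) ** 2

# Power at frequencies corresponding to 1-, 2-, 4-, and 8-beat movement cycles
for beats in (1, 2, 4, 8):
    f = 1 / (beats * beat_period)
    idx = np.argmin(np.abs(freqs - f))
    print(f"{beats}-beat cycle ({f:.2f} Hz): power = {power[idx]:.1f}")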

Subjects: Performance, Beat, rhythm, and meter

When: 9:30 AM in KC914 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

A4-2: Individual Musician’s Spontaneous Performance Rates Affect Interpersonal Synchrony in Joint Musical Performance: A Dynamical Systems Model

Adrian S Roman*(1), Iran R Roman(2)
1:University of California, Davis, 2:Stanford University

When two musicians play together, they coordinate the timing of their actions, resulting in interpersonal synchronization. Behavioral research shows that interpersonal synchronization is affected by individual spontaneous rates of movement, such as spontaneous performance rates (SPRs), measured by asking musicians to play a simple melody at their own spontaneous tempo. Specifically, the greater the discrepancy between two musicians’ SPRs, the greater the asynchronies that occur during joint duet performance. Interestingly, an individual’s SPR remains stable even after experiencing a joint performance. These results suggest two separate phenomena: (1) short-term tempo adaptation during joint performance and (2) SPR restoration afterwards. Here we aim to characterize SPRs and interpersonal synchronization with oscillatory dynamical systems, which have been used to model periodicity of human behavior in musical tasks. To explain an individual agent’s behavior, we first use an oscillator with a fixed spontaneous cycling rate (SCR). This oscillator is configured to be driven by external periodic stimuli and align its frequency with the frequency of the stimuli. We made this frequency alignment mechanism rather resilient to mimic the robustness of SPR, allowing the oscillator to return to its SCR when it is no longer stimulated. Using two of these oscillators, we simulated joint duet performance and examined whether asynchronies are systematically influenced by the amount of difference between the two SCRs. In our simulation, a pair of oscillators drove each other in two conditions: (1) matching and (2) mismatching SCRs. The results from the simulations replicated behavioral data, showing greater asynchrony for the “mismatching” condition. Our results show that the relationship between SPRs and interpersonal synchronization can be explained with dynamical system models. The present study offers a way to explain individual differences in human adaptive behavior computationally.
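
The following is a minimal sketch of this class of model, not the authors' implementation: two phase oscillators couple to one another, adapt their frequencies toward the partner, and are pulled elastically back toward their own spontaneous cycling rate, so a larger SCR mismatch yields a larger average phase asynchrony. The coupling constants and specific update equations are illustrative assumptions.

import numpy as np

def duet(scr1, scr2, coupling=2.0, adapt=1.0, elastic=0.5,
         dt=0.01, duration=60.0):
    """Two coupled phase oscillators with adaptive frequencies.

    Each oscillator's frequency drifts toward its partner (adaptation) but is
    pulled back toward its spontaneous cycling rate, SCR (elasticity).
    Returns the mean absolute phase asynchrony over the run.
    """
    n = int(duration / dt)
    phase = np.zeros(2)
    freq = np.array([scr1, scr2], dtype=float)   # current frequencies (Hz)
    scr = freq.copy()                            # spontaneous cycling rates
    async_sum = 0.0

    for _ in range(n):
        diff = phase[::-1] - phase               # partner phase minus own phase
        dphase = 2 * np.pi * freq + coupling * np.sin(diff)
        dfreq = adapt * np.sin(diff) - elastic * (freq - scr)
        phase += dphase * dt
        freq += dfreq * dt
        # Wrapped phase difference as the momentary asynchrony
        async_sum += abs(np.angle(np.exp(1j * (phase[0] - phase[1]))))
    return async_sum / n

print("matching SCRs:   ", round(duet(2.0, 2.0), 3))
print("mismatching SCRs:", round(duet(2.0, 2.4), 3))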

Subjects: Beat, rhythm, and meter, Computational Modeling

When: 9:45 AM in KC914 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

A4-3: Balancing self and other during live orchestral performance as reflected by neural alpha oscillations

Justin Christensen*(1), Lauren Slavik(2), Jennifer Nicol(1), Janeen Loehr(1)
1:University of Saskatchewan, 2:University of Alberta

Musical ensemble performance demands continuous adaptation and alignment to allow for the expression of shared communicative goals. Previous findings suggest that high levels of behavioral entrainment are linked to self-other integration, while low levels of entrainment are linked to self-other segregation. In this study, we were interested in replicating these findings in a naturalistic context. We collected EEG data simultaneously from 4 performer participants (concertmaster, section 1st, principal 2nd, and section 2nd) situated within a 60-member professional orchestra during a rehearsal performance of Derek Charke’s Elan. Prior to EEG analysis, the score was analyzed and divided into 5 sections linked to the differing coordination strategies that performers were hypothesized to use for the contrasting musical sections. EEG alpha enhancement and suppression were examined over centro-parietal electrodes. In support of our hypotheses, alpha suppression was observed most during the unison sections, suggesting that the highest level of self-other integration occurred during these sections. The divisi polyphonic section exhibited the strongest alpha enhancement, suggesting an inhibition of behavioral entrainment to others during this section. A canon section exhibited a leader-follower dynamic, whereby the players in the leader role showed alpha enhancement, while those in imitation showed alpha suppression. Our findings, generated in the naturalistic setting of the concert hall, support previous findings generated in a laboratory setting. Our findings also support previous work showing that ensemble performance requires performers to anticipate and adapt to each other’s actions through self-other integration when pursuing shared musical goals, and that performers may need to use multiple entrainment or integration strategies to best reach these goals.

Subjects: Neuroscientific approach, Joint agency

When: 10:00 AM in KC914 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

Session B1, Beat & Meter 2

10:15-11:00 AM in KC802

B1-1: The Production of the “Pocket”: Beats as Domains in a Corpus of Drum Grooves

Fred Hosken(1)
1:Northwestern University

This paper takes a perceptually-informed, dynamic view of “the beat,” arguing that beats are durational spans—or, more colloquially, “pockets”—within a flexible metric reference structure, rather than, as in the traditional view, isochronously-spaced “durationless point[s] in time” (Lerdahl & Jackendoff, 1983). I use the MIRtoolbox to analyze a corpus of 4/4 bars (N = 3,645) of recorded performances by four top session drummers to investigate how the beat is performed. I report the findings in two ways: first as a “traditional” study into microtiming deviations from a fixed, isochronous metric structure of idealized beats, finding statistically-significant differences between how individual drummers nuance the placement of snare and bass drums; and second, taking an alternative stance that focuses on the data’s distribution, investigating the variance, skew, and kurtosis to describe shaped pockets of time within which the beat exists. With this information, a drummer may then be described as having, for example, a “tight” pocket or a “lopsided” pocket. These two possible analytical approaches to performance data illustrate my larger argument that the contradictory status of our understanding of the role of microtiming in the groove experience (summarized in Levitin et al., 2018) is the result of an epistemological grounding in theories of rhythm and meter that focus on written music—an idealized abstraction with fixed points—rather than the lived experience of sounding music. To redress this, I propose a perceptually-informed theory of musical time that does not focus on micro-deviations from idealized time points, but instead conceives of domains of time—“pockets.” The shaped sense of how the beat is produced by drummers that I develop here aligns with Danielsen’s listener-centered theoretical ideas of perceptual “beat-bins” or “extended beats” (2010, 2018), ideas that draw on Jones’s Dynamic Attending Theory (e.g. Jones, 1976) and will be pursued further in later studies.
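
As a simple illustration of the distribution-focused approach described above — with simulated deviations, not the corpus data — per-beat onset deviations can be summarized as a shaped “pocket” by their mean, variance (spread), skew (lopsidedness), and kurtosis (peakedness):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Illustrative microtiming data: snare onsets relative to the nominal beat
# (milliseconds; negative = ahead of the beat), pooled for one hypothetical drummer.
snare_dev_ms = (rng.normal(loc=-4.0, scale=8.0, size=3645)
                + rng.exponential(scale=3.0, size=3645))   # slight rightward tail

pocket = {
    "mean": np.mean(snare_dev_ms),
    "sd": np.std(snare_dev_ms, ddof=1),
    "skew": stats.skew(snare_dev_ms),
    "kurtosis": stats.kurtosis(snare_dev_ms),   # excess kurtosis
}
for name, value in pocket.items():
    print(f"{name:>8}: {value:6.2f}")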

Subjects: Beat, rhythm, and meter, Corpus analysis/studies; Music information retrieval

When: 10:15 AM in KC802 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

B1-2: The Search for the Tactus: A Statistical Investigation of Metric Hierarchies in Popular and Classical Music

Nathaniel Condit-Schultz(1)
1:Georgia Institute of Technology

Musical meter comprises a hierarchy of beats related (predominantly) by ratios of 2/1. Variation in the absolute tempo of this hierarchy complicates comparative analysis: For instance, are 400ms quarter-notes (150bpm) in one piece comparable to 600ms quarter-notes or 300ms eighth-notes (100bpm) in another? This problem is relevant to music composition, performance, and transcription (de Clercq, 2016), as well as music cognition: in the abstract, levels in a metric hierarchy might seem functionally interchangeable, but different beats are actually perceptually quite distinct; in particular, one metric level holds a perceptually privileged position as the “tactus” which is counted/conducted (London, 2004, pp. 31-32). The subjectivity of tactus perception (Martens, 2011) is a direct parallel to the issue of notational ambiguity. I hypothesize that the perceptual distinction between metric levels reflects, and/or engenders, systematic differences in the rhythms articulated at different levels. For example, in 4/4 Germanic music from the Essen corpus, onsets are equally prevalent on the fourth and the third quarter-note of each measure, but much less common on the fourth eighth-note relative to the third eighth-note. My aim is to identify such rhythmic distinctions between different metric levels—distinctions which can then serve as objective bases for tactus identification and inter-opus metric analysis. I will utilize three musical datasets: trios and quartets by Corelli, Mozart, and Haydn from the Musedata repository; the Musical Corpus of Flow (Condit-Schultz, 2016); and transcriptions of 124 melodies from the McGill Billboard Dataset (Gauvin et al., 2017). For each piece in each corpus, I will model zeroth/first-order distributions of onsets for every possible interpretation of the tactus. Through Bayesian modeling, I will identify the most parsimonious interpretation(s) of tactus and metric hierarchy in the corpus. I will then investigate these models’ parameters to identify any intuitive qualitative patterns that emerge from the analysis.
Condit-Schultz, N. (2016). MCFlow: A Digital Corpus of Rap Flow. Doctoral thesis, Ohio State University.
de Clercq, T. (2016). Measuring a Measure: Absolute Time as a Factor for Determining Bar Lengths and Meter in Pop/Rock Music. Music Theory Online, 22(3).
Gauvin, H. L., Condit-Schultz, N., & Arthur, C. (2017). Supplementing Melody, Lyrics, and Acoustic Information to the McGill Billboard Database. Conference of the International Alliance of Digital Humanities Organizations.
London, J. (2004). Hearing in Time: Psychological Aspects of Musical Meter. Oxford University Press.
Martens, P. A. (2011). The Ambiguous Tactus: Tempo, Subdivision Benefit, and Three Listener Strategies. Music Perception, 28(5), 433-448.
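
A hedged sketch of the core idea in the abstract above — scoring candidate tactus interpretations by how unevenly onsets distribute across positions in the implied cycle. The zeroth-order log-likelihood scoring here is a simplified stand-in for the Bayesian models described, and the onset data are illustrative:

import numpy as np
from collections import Counter

# Illustrative onset times in quarter-note units across a few 4/4 bars
onsets = np.array([0, 1, 1.5, 2, 3, 4, 5, 5.5, 6, 7, 8, 9, 9.5, 10, 11])

def position_loglik(onsets, cycle, positions_per_cycle=8):
    """Zeroth-order model: log-likelihood of onsets given their position within
    a cycle of `cycle` quarter notes, discretized into `positions_per_cycle` bins."""
    pos = np.round((onsets % cycle) / cycle * positions_per_cycle).astype(int)
    pos %= positions_per_cycle
    counts = Counter(pos)
    probs = np.array([counts[p] + 0.5 for p in range(positions_per_cycle)])
    probs /= probs.sum()                     # smoothed position distribution
    return float(np.log(probs[pos]).sum())

# Compare candidate tactus/cycle interpretations (cycle lengths in quarter notes)
for cycle in (2, 4, 8):
    print(f"cycle = {cycle} quarters: log-likelihood = "
          f"{position_loglik(onsets, cycle):.2f}")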

Subjects: Beat, rhythm, and meter, Computational approach; Corpus analysis/studies; Music information retrieval; Music theory

When: 10:30 AM in KC802 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

B1-3: Tracking the Beat: A Historical Analysis of Drum Beats in Anglo-American Popular Music

Seth T Holland*(1), Nathaniel Condit-Schultz(1)
1:Georgia Institute of Technology

Drum beats have been an essential element of Anglo-American popular music since the late 19th century, especially in rock, hip-hop, and related styles since the 1950s. Unfortunately, little theoretical or empirical research concerning drum beats or their aesthetic/perceptual role in music has been reported. Our project is the first broad comparative analysis of drum beats in popular music. Taking a historical perspective, we explore the evolution of drum parts over time, in particular how standard acoustic drum kit sounds have become mapped to diverse synthetic percussion sounds since the 1980s. Our dataset is a stratified sample of music from the Billboard Hot 100 charts, ten songs randomly selected from each chart year 1958–2018. We identify and encode eight measures of drumming from the first two major sections of each piece (typically, the verse and chorus). Tempo and formal section identification were guided by de Clercq’s (2016, 2017) discussions of each subject. We discuss the theoretical/methodological difficulties of drum music research, comparing various operational definitions of drum “beat” and different approaches to beat annotation and analysis. Drumming in popular music typically involves multiple percussion instruments, and there can be varying degrees of independence between the patterns played on different instruments, a situation which significantly complicates the comparative analysis of rhythmic patterns in drum beats—e.g., when do we consider the kick drum and snare as separate patterns and when do we consider them as a single holistic stream? Our annotation scheme, encoded in the humdrum syntax, allows us to divide instruments flexibly into one or more, non-exclusive, “voices” (i.e., perceptual streams). This issue of drum part “streaming” is a fundamentally perceptual question, and we hope our theoretical and computational work will be an important step towards developing hypotheses and stimuli for behavioral psychological research concerning drum beats.

Subjects: Beat, rhythm, and meter, Corpus analysis/studies; Music information retrieval

When: 10:45 AM in KC802 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

Session B2, Timbre 1

10:15-11:00 AM in KC905/907

B2-1: The Screaming Strings of the Silver Screen: Signaling Fear Using an Acoustic Feature of Human Screams

Caitlyn Trevor*(1), David Huron(1), Larry Feth(1), Luc Arnal(2)
1:Ohio State University, 2:Université de Genève

Film music scholars and reviewers have sometimes noted the scream-like quality of music used for scary scenes, such as “The Knife” cue from Herrmann and Hitchcock’s Psycho (1960). But do these scary film soundtrack excerpts actually sound like human screams? Screams have a unique acoustic feature. Specifically, they occupy a niche range of the modulation power spectrum (MPS) (Arnal, 2015). The MPS is a 2D Fast Fourier transform of a spectrogram that exhibits both temporal and spectral power modulations. Screams occupy the region between 30 and 150 Hz (called the “roughness” region). It could be that “roughness” is a universal cue for danger and may be present in scream-like music. To investigate, we compared the mean MPS amplitudes of the roughness regions for scream-like scary film music and human screams. The scream-like music database consists of second-long excerpts sampled from horror movie soundtracks. The database of human screams consists of performed screams recorded by researchers at UZH. Using MATLAB, the MPS of each excerpt was measured. Then, the mean amplitude in the roughness range was taken. These mean amplitudes were compared between the scream-like musical excerpts and the recordings of human screams using SPSS. Our hypothesis was that there would be no significant difference between the mean MPS amplitudes of the two populations. In support of our hypothesis, a Wilcoxon Signed-Ranks Test indicates that there is no significant difference between the average MPS amplitudes for human screams (M = 1.06, SD = .04) and those of scream-like musical excerpts (M = 1.12, SD = .18), Z = 1.44, p = .23. The results support our main theory that these scary film excerpts mimic the acoustics of actual human screams and may help dispel the notion that ‘scary’ musical devices have emerged via enculturation alone.
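
The study's analysis was done in MATLAB; purely as an illustration of the pipeline described — spectrogram, 2-D FFT (the MPS), and mean amplitude in the 30–150 Hz temporal-modulation (“roughness”) band — here is a small Python sketch whose window settings and test signals are placeholders, not those of the study:

import numpy as np
from scipy import signal

def roughness_mps(x, fs, lo=30.0, hi=150.0):
    """Mean modulation-power-spectrum amplitude in the 30-150 Hz
    temporal-modulation band of a 1-D audio signal."""
    # Log-magnitude spectrogram (frequency x time)
    f, t, sxx = signal.spectrogram(x, fs=fs, nperseg=256, noverlap=224)
    logspec = np.log(sxx + 1e-12)

    # 2-D FFT of the spectrogram = modulation power spectrum
    mps = np.abs(np.fft.fftshift(np.fft.fft2(logspec)))
    # Temporal-modulation axis: one column per spectrogram frame
    frame_rate = 1.0 / (t[1] - t[0])
    tmod = np.fft.fftshift(np.fft.fftfreq(logspec.shape[1], d=1.0 / frame_rate))

    band = (np.abs(tmod) >= lo) & (np.abs(tmod) <= hi)
    return mps[:, band].mean()

# Illustrative comparison: a 70 Hz amplitude-modulated ("rough") tone
# versus an unmodulated tone.
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
rough = (1 + np.sin(2 * np.pi * 70 * t)) * np.sin(2 * np.pi * 600 * t)
plain = np.sin(2 * np.pi * 600 * t)
print("rough tone:", round(roughness_mps(rough, fs), 3))
print("plain tone:", round(roughness_mps(plain, fs), 3))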

Subjects: Emotion, Composition and improvisation; Corpus analysis/studies; Evolutionary perspectives; Film / moving image

When: 10:15 AM in KC905/907 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

B2-2: Preferences and emotional responses to film music using orchestral and/or synthesized sounds

Renee Timmers*(1), Richard Ashley(2)
1:University of Sheffield, 2:Northwestern University

It is increasingly common practice to ‘mock-up’ musical performances in the creation of film music sound tracks. Recordings of performances are enhanced to create a full orchestral effect, or the full orchestral version is synthesized without the need for musicians to perform individual parts. This study investigated whether listeners’ experiences are affected by this practice. Do listeners have a preference for live over synthesized music or vice versa? Are affective responses influenced by whether live or synthesized music is presented? Through collaboration with partners from the film music industry, we were able to use originally composed film music, recorded and synthesized in accordance with the highest standards of the US film industry. Four musical excerpts in three versions (live, synthesized and mixed) were presented to participants. Participants reported their subjective responses after each musical excerpt. Additionally, in phase 2, they gave their rank order of the versions of each excerpt. No main effect of version was found, in contrast to a strong main effect of musical excerpt on subjective responses (F(15,11)=33.628, p<.001). These results were qualified by a significant interaction between excerpt and version for suitability as film music (F(4.941, 118.588)=2.844, p = .012) and willingness to spend (F(4.469, 107.261)=2.409, p = .047). The effect of version was different depending on the musical excerpt. Only excerpts with solo instruments were disadvantaged by the use of synthesized music, while full orchestral excerpts were not. In fact, the results of phase 2 showed that live orchestral performances obtained the lowest overall ranking. These results show the high standard of current practices of synthesizing full orchestral scores. They should, however, be interpreted with care, as the listening experience in this experimental setup was limited compared to extended live listening over high-fidelity speakers in a movie theatre.

Subjects: Film / moving image, Aesthetics / preference; Emotion

When: 10:30 AM in KC905/907 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

B2-3: Investigating the role of timbre on melodic alarm recognizability

Sharmila Sreetharan*(1), Cameron Anderson(1), Joseph Schlesinger(2), Mike Schutz(1)
1:McMaster University, 2:Vanderbilt University Medical Center

International Electrotechnical Commission (IEC) 60601-1-8 alarms employ short melodic sequences with flat amplitude envelopes (i.e., amplitude invariant), lacking the rich acoustic structure associated with musical instruments. Conversely, most musical instruments produce sounds with percussive amplitude envelopes (i.e., exponentially decaying sounds). IEC alarm recognizability suffers due to a lack of heterogeneity between alarm sequences. In an attempt to increase heterogeneity without changing the melodic sequences, we manipulated the timbre of these sequences. Specifically, we wanted to see the effect of timbres with contrasting amplitude envelopes (i.e., a timbre with a flat vs. percussive amplitude envelope) on IEC alarm melody recognition. Previous melody recognition studies have demonstrated timbre’s critical role in facilitating our ability to recall musical melodies. Yet, the stimuli used in these experiments have predominantly consisted of short excerpts from popular music. It is unclear if this role of timbre persists for very short, three-note melodies (i.e., IEC alarms). In this experiment, participants listened to three-note melodies (i.e., IEC alarms) performed with a timbre designed according to the IEC standard (i.e., a timbre modeled after the flat amplitude envelope shape) and with a synthesized xylophone timbre (i.e., a timbre best representing a percussive amplitude envelope shape). Participants listened to and rated a series of short melodies in the IEC or xylophone timbres across two aesthetic dimensions (to increase exposure), with instructions to focus on the melodies and ignore the timbre. After a short retention period, participants rated the familiarity of IEC alarm melodies and a selectively-randomized set of melodies across both IEC and xylophone timbres. Participants also completed a two-alternative forced choice annoyance task to assess which tone (from pairs of tones) was perceived to be more annoying. Participants showed superior recognition when the melody was presented in the same timbre at test as at encoding (i.e., the encoding specificity principle) for three-note melodies. We found a marginal effect of timbre; recognition for xylophone melodies was slightly better than recognition for IEC melodies. The annoyance task revealed that participants, independent of timbre during exposure, found IEC alarms to be more annoying than the xylophone alarms, consistent with previous work indicating flat envelopes are perceived as more annoying than percussive alarms. These results highlight the importance of timbre consideration in melodic alarms.

Subjects: Timbre, Memory

When: 10:45 AM in KC905/907 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

Session B3, Having Vision

10:15-11:00 AM in KC909

B3-1: Follow that beat: Using visual rhythm to regulate attention and plan eye-movements

Melissa Brandon(1)
1:Bridgewater State University

The assumption that audition is for temporal processing while vision is for spatial processing fueled research on the superiority of auditory over visual rhythm processing. However, recent studies utilizing motion to create visual rhythms found successful rhythm discrimination for infants and tapping synchronization on par with auditory rhythms in adults. The field of entrainment to environmental rhythms provides support for vision’s role in both intentional and unintentional coordination. Visual tracking increases the coherence between the movement and environmental stimuli. Together these findings indicate vision does process temporal information and use it to plan synchronous movements. But does vision make use of rhythm to regulate attention and make predictions? The role of auditory rhythm for attention regulation has long been researched, but could visual rhythm also serve as an attention regulator for planning attentional shifts and anticipatory eye-movements? Data from an eye tracking study with 8-month-olds and adults supports this idea. Participants watched a series of Sesame Street characters appearing with apparent motion at prescribed intervals (rhythmic or random timing) in the same spatial pattern. Infants reacted significantly faster in the rhythmic than in the random condition (F(1,30)=7.30, p=0.01) during the beginning of the silent study. Adults’ anticipatory looks were closer to the onset of the characters in the rhythmic condition than in the random condition (F(1,30)=7.55, p=0.01) when a distracting object was present. Adults used the visual rhythm to regulate attention, deciding when to disengage from the distracting object to anticipate the next character’s appearance. This eye tracking paradigm has been extended to an audio-visual version (tweeting bird) with a contingent looking component. The bird tweets if the participant visually anticipates its appearance. Findings from the contingent audio-visual experiment will be discussed with a focus on participants’ motivation to anticipate. Pilot data suggest that participants intentionally do not anticipate once they discover that their anticipatory looks control the sound.

Subjects: Beat, rhythm, and meter, Audiovisual / crossmodal; Expectation

When: 10:15 AM in KC909 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

B3-2: Effect of Audio-Visual Asynchrony on a Simple Performance Task by Instrumental Musicians

Taina Lorenz*(1), Steven Morrison(1)
1:University of Washington

Performances in an ensemble context require musicians to navigate a complex environment of auditory and visual information while playing in synchrony with others. These multiple streams of information may not always be synchronous or congruent, requiring musicians to adapt to changing conditions to maintain a cohesive performance. Common performance practice regards the conductor—and, thus, visual information—as the critical source of pulse information. However, laboratory studies of time-keeping have identified auditory signals as more salient stimuli. The purpose of this study was to examine pulse alignment among performers facing increasingly asynchronous auditory and visual information. Musicians (N=53) who were current members of large instrumental ensembles participated in the study. Participants watched video of a conductor outlining a 4/4 pattern while also hearing a multi-voiced instrumental ensemble soundtrack, and were asked to tap the pulse on a tablet-based pad. Each of nine examples was presented in one of three experimental formats: control (steady audio and video), audio (ensemble) accelerating/video (conductor beat pattern) decelerating, and video accelerating/audio decelerating. Rate of pulse change was +/- 7.5% with initial tempos of 108, 127, and 146 bpm. We timestamped audio beats, video beats, and response taps and calculated inter-onset intervals (IOIs) for each. Data consisted of deviations (ms) from a consistent IOI (steady pulse). In the asynchronous conditions, participants broadly adhered to one of the two streams of information (auditory or visual) rather than to a steady rate of pulse. More responses correlated positively with visual streams than with auditory streams; correlations between participant responses and stimuli tended to be stronger when the preferred modality slowed. Results suggest that, when discrepancy exists, musicians determine a preferred source of pulse information. Consistent with prevailing pedagogical practice but contrary to less contextualized time-keeping tasks, visual information provided by the ensemble tended to be a more dominant source of pulse information.
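
As a small illustration of the dependent measure described above — with simulated tap times and placeholder tempi, not the study's data — tap IOIs can be compared against a steady reference pulse and correlated with the IOIs of the auditory and visual streams:

import numpy as np

rng = np.random.default_rng(2)

# Illustrative IOIs (seconds): audio decelerates, video accelerates,
# and the taps loosely follow the video stream.
n = 40
audio_ioi = np.linspace(0.555, 0.600, n)        # slowing ensemble
video_ioi = np.linspace(0.555, 0.515, n)        # accelerating conductor
tap_ioi = video_ioi + rng.normal(0, 0.01, n)    # noisy tracking of the video

steady = np.full(n, 0.555)                      # ~108 bpm reference pulse
deviation_ms = (tap_ioi - steady) * 1000        # deviation from steady pulse

print("mean |deviation| from steady pulse: "
      f"{np.mean(np.abs(deviation_ms)):.1f} ms")
print("r(taps, video IOIs) =", round(np.corrcoef(tap_ioi, video_ioi)[0, 1], 2))
print("r(taps, audio IOIs) =", round(np.corrcoef(tap_ioi, audio_ioi)[0, 1], 2))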

Subjects: Audiovisual / crossmodal, Beat, rhythm, and meter; Music and movement; Performance

When: 10:30 AM in KC909 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

B3-3: Make your space: An investigation on effects of different musical training on perception of space

Yong Jeon Cheong*(1), Udo Will(1)
1:Ohio State University

Space emerges from perception and bodily action. In music performance, we make use of our vocal apparatus in singing and our limbs in instrument-playing, each of which may lead to different spatial experience. In this study, we focus on two constituents of spatial experience: 1) hand-centered specificity and 2) audio-tactile integration. Research questions include whether non-musicians, instrumentalists, and vocalists respond differently to audio-tactile inputs near and on the hands. We conducted 1) simple reaction time and 2) temporal order judgement experiments for crossed and uncrossed arms conditions with instrumentalists, vocalists, and non-musicians. For experiment 1, auditory-only, tactile-only and simultaneous audio-tactile stimuli were presented near and on the hands. Subjects were asked to respond as soon as possible when they detected any signal. For experiment 2, brief auditory and tactile stimuli were delivered in pairs with various onset asynchronies. Subjects were asked to judge whether the sound or the touch was presented first while their reaction time and accuracy were measured. In experiment 1, significant differences among participant groups were found for the tactile-only condition. A race model inequality test suggests that instrumentalists’ faster reaction time to audio-tactile stimuli is due to co-activation of both sensory channels at the early stage of perception. In experiment 2, smaller stimulus onset asynchronies led to more incorrect responses and slower reaction times. Musicians reacted faster than non-musicians. Instrumentalists responded more accurately than the other groups. Instrumentalists showed the smallest absolute and difference thresholds. Non-musicians showed significant differences in difference threshold between crossed and uncrossed arms. Previous studies have shown that musical training changes multisensory integration with audio-visual stimulation. This study provides evidence for modulatory effects of musical training on multisensory processing in audio-tactile coupling. Furthermore, different types of musical training change the multisensorially established hand-centered space.
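
For readers unfamiliar with the race model inequality mentioned above, here is a minimal sketch of its logic (in the spirit of Miller's classic test), using simulated reaction times rather than the study's data: the empirical CDF of audio-tactile RTs is compared against the sum of the two unimodal CDFs, and values above that bound indicate co-activation rather than a race between independent channels.

import numpy as np

rng = np.random.default_rng(3)

# Illustrative reaction times (ms) for one participant
rt_audio = rng.normal(230, 30, 200)
rt_tactile = rng.normal(240, 30, 200)
rt_bimodal = rng.normal(195, 25, 200)     # faster than either unimodal condition

def ecdf(sample, grid):
    """Empirical cumulative distribution function evaluated on a time grid."""
    return np.searchsorted(np.sort(sample), grid, side="right") / len(sample)

grid = np.arange(120, 320, 5)              # probe times (ms)
bound = np.minimum(ecdf(rt_audio, grid) + ecdf(rt_tactile, grid), 1.0)
violation = ecdf(rt_bimodal, grid) - bound  # > 0 violates the race model

print("max violation:", round(violation.max(), 3))
print("race model violated at some t:", bool((violation > 0).any()))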

Subjects: Audiovisual / crossmodal, Spatial perception

When: 10:45 AM in KC909 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

Session B4, Ensemble Performance 2: Improvisation

10:15-11:00 AM in KC914

B4-1: The Neural Substrates of High-Quality Improvisations among Jazz Guitarists

David S Rosen(1)
1:Stockton University

While the link between flow, expertise, and creativity is often assumed, there is a dearth of evidence supporting this claim. Flow is the mental state one enters when fully immersed in an activity, accompanied by the loss of reflective self-consciousness and the merging of action and awareness. The neurocognitive mechanisms of flow are poorly understood; however, this work aligns with theories of flow which suggest that it may be characterized by transient hypofrontality, an inhibition of executive systems as implicit, automatic, Type 1 processes are engaged. Similar processes and mechanisms have been proposed for expert-level jazz improvisation. Here, we examine the neural basis of high-quality jazz improvisations, flow (the phenomenological state), and expertise. Jazz guitarists (N = 32) improvised to novel chord sequences while 64-channel EEG was recorded. Jazz experts rated each improvisation for creativity, technical proficiency and aesthetic appeal. Behaviorally, hierarchical regression models revealed that musicians’ flow scores and expertise significantly predicted the quality ratings of the improvisations. Using SPM12 for EEG, we investigated the significant clusters of electrophysiological activity for the significant behavioral factors of high- and low-flow and expertise. Broadly, high-flow was characterized by increased alpha and beta-band activity in right-posterior cortices, while low-flow displayed increased left-frontal activity. For expertise, high-expertise was characterized by increased beta and gamma-activity in left-posterior and central cortices, yet the low-expertise condition revealed a highly significant cluster of high-frequency activity bilaterally in frontal regions. Taken together, we interpreted these findings as support for the transient hypofrontality hypothesis, whereby the inhibition of executive, top-down control and enhanced recruitment of posterior, associative, bottom-up processes underlies flow, creativity, and domain-expertise in jazz improvisation. Additionally, new analyses of the data, which are underway, will be presented, including: k-fold cross validation for EEG components and model selection, EEG and Flow interactions (neural signatures for when a novice or expert experiences flow), and the neural substrates of high- vs. low-quality improvisations.

Subjects: Composition and improvisation, Musical expertise

When: 10:15 AM in KC914 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

B4-2: Live coding helps distinguish between propositional and embodied improvisation

Andrew Goldman(1)
1:University of Western Ontario

Improvisation is not a single kind of cognitive process. In this theoretical paper, I advance a typological distinction between “propositional” and “embodied” improvisation. To explicate this distinction, I will consider the case of live coding, a musical performance practice in which performers write algorithmic instructions for computers in live performance (Magnusson, 2014). Live coding exemplifies propositional improvisation according to three criteria: the performer’s physical actions are temporally dissociated from the resultant sounds, the relationship between human movement and the content of auditory feedback is highly variable, and decisions are made at discrete points in time. Embodied improvisation, by contrast, links movement and sound in real-time, with systematic feedback, and decisions can be made continuously (see Figure 1). This distinction motivates reconsidering how to design cognitive experiments. Bespoke electronic instruments could be constructed that vary according to the temporal synchrony and systematicity of the auditory feedback. As musicians perform on these, neural measures (e.g., EEG) could characterize the effect of this instrumental variation on auditory-motor functional connectivity (cf. Bangert & Altenmüller 2013) as well as on sensitivity to altered auditory feedback (cf. Pfordresher 2006; Lutz et al. 2013). Behavioral studies could adapt the diatonicity and pattern variety measures used by Goldman (2013) and Norgaard et al. (2016) respectively, which both showed differences in structural characteristics of improvisations as a function of different performance conditions (see Figure 2). The main theoretical upshot is that differences in the embodied nature of technical interfaces fundamentally change how musical ideas are cognitively generated (cf. De Souza, 2017), and this difference likely has observable neural and musical correlates. I conclude by considering how this distinction is theoretically relevant to analyzing improvisation outside of musical contexts (Lewis & Piekut, 2016).

Subjects: Composition and improvisation, Embodied cognition

When: 10:30 AM in KC914 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

B4-3: An fMRI study of the brain networks involved in jazz improvisation in a naturalistic setting.

Karl G Helmer*(1), Ronny Preciado(1), Richard Falco(2), Frederick Bianchi(2)
1:Massachusetts General Hospital, 2:Worcester Polytechnic Institute

The exploration of creativity and its correlation with underlying brain function has been an expanding research area and stands to reveal the relevant neural substrates. Self-generated thought has been linked to creative behavior (Jung, 2013), but recent work (Beaty, 2015) has pointed towards a role for both attention- and executive-control networks. Evidence is accumulating that task design, rather than actual network behavior, may explain the “task-negative”–“task-positive” view of self-generated thought versus attention/executive control. In this study, we use a naturalistic paradigm of jazz performance: Eight professional jazz musicians were placed in a 3.0T magnet with a keyboard and asked to perform runs consisting of 1) melodic embellishment, 2) performance of a contrafact, and 3) improvisation, all based on the jazz standard “All the Things You Are”. Data were processed by standard fMRI data-processing techniques and cleaned for motion using ICA-AROMA (FSL). Standard General Linear Model (GLM) fitting of the timecourse of each voxel to the task paradigm was performed for each run, with permutation testing used for the final group analysis. Spatial Independent Component Analysis (sICA, MELODIC, FSL) was also performed on the improvisation-only timecourses. sICA groups voxels with related signal timecourses into spatial component maps and showed clear evidence for the presence of the default-mode network (DMN, active during self-generated thought), the dorsal attention network (AN), and the executive-control network (ECN) during the improvisatory periods. Interestingly, the ventral AN was anti-correlated with the dorsal AN during improvisation, showing that participants were deeply focused on performance. GLM statistical contrasts between task conditions show the effects of varying attentional load, and the sICA results suggest a complex interplay between focused attention, monitoring, and self-generated thoughts during improvisation. We propose a model of improvisation that incorporates these results.

Subjects: Neuroscientific approach, Composition and improvisation

When: 10:45 AM in KC914 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

Session C1, The Voice 1

11:30-12:15 PM in KC802

C1-1: From opera to pop: Do we all like the same voices?

Pauline Larrouy-Maestri*(1), Edward Vessel(2), Camila Bruder(2), Susan Rogers(3), David Poeppel(4)
1:Max-Planck-Institute for Empirical Aesthetics, 2:Max Planck Institute for Empirical Aesthetics, 3:Berklee College of Music, 4:New York University

Listeners converge in their judgments when evaluating the technical quality of singing performances (Larrouy-Maestri, Morsomme, Magis, & Poeppel, 2017). What about listeners’ preferences? Do we like the same singers? In the visual domain, the degree of shared preference varies considerably depending on the nature of the stimuli, with higher inter-individual agreement for natural kind stimuli such as faces or landscapes compared to artifacts such as architecture and artwork (Vessel, Maurer, Denker, & Starr, 2018). In the case of singing performances, which make use of a ‘natural instrument,’ the human vocal tract itself, one could expect high inter-individual agreement as well. In order to examine singing preferences for contrasting musical material, we recorded 17 trained opera singers performing Rachmaninoff’s Vocalise, Op. 34, No. 14, and 17 trained pop singers performing McFerrin’s “Don’t Worry, Be Happy”. Eight versions of the Vocalise were evaluated in a paired-comparisons design by 38 participants with various degrees of musical expertise (26 women, Mage = 34.45 years), twice (retest one week later). Preliminary results show that listeners are consistent in their preferences when listening to opera voices (r(test-retest) = 0.73, p < .01) and highlight a surprisingly high inter-individual agreement (“mean-minus-one” correlation MM1 = 0.78, close to the MM1 for natural visual stimuli, Vessel et al., 2018). Testing with pop voices and subsequent acoustic analyses of the contrasting singing material seek to clarify the mechanisms underpinning listeners’ appreciation of singing voices and thus pave the way to a direct comparison between aesthetic domains. Larrouy-Maestri, P., Morsomme, D., Magis, D., & Poeppel, D. (2017). Lay listeners can evaluate the pitch accuracy of operatic voices. Music Perception, 34(4), 489-495. doi:10.1525/MP.2017.34.4.489 Vessel, E. A., Maurer, N., Denker, A. H., & Starr, G. G. (2018). Stronger shared taste for natural aesthetic domains than for artifacts of human culture. Cognition, 179, 121-131. doi:10.1016/j.cognition.2018.06.009
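
The “mean-minus-one” (MM1) agreement statistic cited above correlates each listener’s preference scores with the mean of all other listeners’ scores and averages the result. Here is a minimal sketch of that computation on synthetic ratings; the data and exact rating procedure are assumptions, not the study’s materials.

```python
# Sketch of a "mean-minus-one" (MM1) agreement score on synthetic preference data.
import numpy as np

rng = np.random.default_rng(1)
n_raters, n_singers = 38, 8
shared = rng.normal(0, 1, n_singers)                           # common taste component
ratings = shared + rng.normal(0, 0.7, (n_raters, n_singers))   # plus rater-specific noise

def mean_minus_one(r):
    """Average correlation of each rater with the mean of all other raters."""
    scores = []
    for i in range(r.shape[0]):
        others = np.delete(r, i, axis=0).mean(axis=0)
        scores.append(np.corrcoef(r[i], others)[0, 1])
    return float(np.mean(scores))

print(f"MM1 = {mean_minus_one(ratings):.2f}")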

Subjects: Aesthetics / preference, Perception, Evaluation, Singing

When: 11:30 AM in KC802 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

C1-2: The Roles of Pitch Imagery and Pitch Short-term Memory in Vocal Pitch Imitation

Emma B Greenspon*(1), Peter Pfordresher(2)
1:University at Buffalo, 2:University at Buffalo, SUNY

Vocal imitation is an early emerging behavior that has a critical role in the development of both language and music ability. Although the majority of people engage in vocal imitation during their lifetime, there is a great deal of variability in people’s vocal imitation accuracy, particularly with respect to vocal pitch imitation. One potential cause of this variability is individual differences in sensorimotor mapping between the auditory system and vocal motor system. The present research addresses how this mapping may be associated with auditory representations in working memory, including the veridicality of auditory imagery and capacity of auditory short-term memory (STM). In a study involving 216 monolingual English-speaking undergraduate participants, we addressed the degree to which imagery and STM tasks relate to pitch imitation ability. We used an adaptation of the Pitch Imagery Arrow Task (Gelding, Thompson & Johnson, 2015) as well as a novel phonological imagery task in order to measure pitch and verbal imagery, respectively. We measured pitch and verbal STM span using Williamson & Stewart’s (2010) adaptive pitch and digit span tasks. We found that pitch imagery scores, pitch span, pitch discrimination thresholds, and music experience were unique predictors of pitch imitation ability when controlling for all other variables, but verbal measures were not. These results suggest that pitch imitation relies on both auditory imagery and auditory STM, and that these processes recruit pitch-specific resources that are at least partially separate from resources involved in verbal processing.

Subjects: Performance, Memory; Music and language; Pitch

When: 11:45 AM in KC802 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

C1-3: TBA

na(1)
1:na

NA

Subjects: na, na

When: 12:00 PM in KC802 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

Session C2, Timbre 2

11:30-12:15 PM in KC905/907

C2-1: Color and Tone Color: Audio-visual Crossmodal Correspondences with Musical Instrument Timbre

Lindsey E Reymore(1)
1:Ohio State University

Crossmodal correspondences—widely-shared expectations for mapping experiences across sense domains—manifest in our everyday language. For example, musical timbres may be “bright,” “dark,” or “warm.” Empirical literature in psychology is consistent with the theory that many of these metaphors play out in perception as well as language (Marks, 2013). The current studies investigate musical timbre as a color-evocative dimension of sound. Specifically, our aim is to disentangle crossmodal correspondences between timbres and colors by using perceptual ratings to predict participants’ choices when they are asked to match timbres to colors. The words used by participants to rate timbres were derived from previous research on the perceptual dimensions of timbre as well as empirical literature in crossmodality. In each of three studies, participants used headphones and an iPad to listen to stimuli and match the timbres of various musical instruments with colors. Results from the first experiment (n=106) support the hypothesis that lighter colors are associated with timbres that are rated as higher, smaller, brighter, and happier, while darker colors are associated with timbres that are rated as lower, bigger, darker, and sadder. Data collection is underway for studies 2 and 3, with results forthcoming. Study 2 features an expanded experimental interface that allows us to simultaneously test hypotheses about lightness, hue, and saturation. Study 3 aims to untangle the influences of instrument timbre and pitch height by comparing responses to stimuli across the ranges of different keyboard instruments. Our findings bring insight to the cognitive science of metaphor and audiovisual perception. Increased understanding of latent timbre-color correspondences is relevant for composition and analysis of music visualization, a multimedia genre in which music is intentionally paired or co-created with color, shape, and movement. Additionally, our correlational results from unipolar scales lead us to recommend the avoidance of bipolar scales in timbre rating tasks. Marks, Lawrence. (2013). “Weak Synesthesia in Perception and Language.” In Julia Simner and Edward Hubbard (Eds.), Oxford Handbook of Synesthesia. Oxford University Press.

Subjects: Audiovisual / crossmodal, Timbre

When: 11:30 AM in KC905/907 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

C2-2: Spectrotemporal modulation timbre cues in musical dynamics

Charalampos Saitis*(1), Luca Marinelli(2), Athanasios Lykartsis(2), Stefan Weinzierl(2)
1:Centre for Digital Music, Queen Mary, University of London, 2:Audio Communication Group, TU Berlin

Timbre is often described as a complex set of sound features that are not accounted for by pitch, loudness, duration, spatial location, and the acoustic environment. Musical dynamics refers to the perceived or intended loudness of a played note, instructed in music notation as piano or forte (soft or loud) with different dynamic gradations between and beyond. Recent research has shown that even if no loudness cues are available, listeners can still quite reliably identify the intended dynamic strength of a performed sound by relying on timbral features. More recently, acoustical analyses across an extensive set of anechoic recordings of orchestral instrument notes played at pianissimo (pp) and fortissimo (ff) showed that attack slope, spectral skewness, and spectral flatness together explained 72% of the variance in dynamic strength across all instruments, and 89% with an instrument-specific model. Here, we further investigate the role of timbre in musical dynamics, focusing specifically on the contribution of spectral and temporal modulations. Loudness-normalized modulation power spectra (MPS) were used as input representation for a convolutional neural network (CNN). Through visualization of the pp and ff saliency maps of the CNN it was possible to identify discriminant regions of the MPS and define a novel task-specific scalar audio descriptor. A linear discriminant analysis with 10-fold cross-validation using this new MPS-based descriptor on the entire dataset performed better than using the two spectral descriptors (27% error rate reduction). Overall, audio descriptors based on different regions of the MPS could serve as sound representation for machine listening applications, as well as to better delineate the acoustic ingredients of different aspects of auditory perception.
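
The classification step described here, linear discriminant analysis with 10-fold cross-validation on a scalar audio descriptor, can be sketched as follows. The features below are synthetic placeholders, not the actual MPS-based descriptor or the study’s dataset.

```python
# Sketch of pp/ff classification with LDA and 10-fold cross-validation.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(2)
n = 400
y = np.repeat([0, 1], n // 2)                  # 0 = pianissimo, 1 = fortissimo
descriptor = y * 1.2 + rng.normal(0, 1, n)     # stand-in for the MPS-based descriptor
X = descriptor.reshape(-1, 1)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=cv)
print(f"10-fold accuracy: {acc.mean():.2f} (error rate {1 - acc.mean():.2f})")
```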

Subjects: Timbre, Loudness; Music information retrieval; Psychoacoustics

When: 11:45 AM in KC905/907 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

C2-3: A Reinvestigation of the Source Dilemma Hypothesis

Douglas A Kowalewski*(1), Ronald S Friedman(1), Stan Zavoyskiy(1), Trammell Neill(1)
1:University at Albany, SUNY

Bonin, Trainor, Belyk, and Andrews (2016) proposed a novel way in which basic processes of auditory perception may influence affective responses to music. According to their source dilemma hypothesis (SDH), the relative fluency of a particular aspect of musical processing—the parsing of the music into separate audio streams—is hedonically marked: Successful stream segregation elicits pleasant affective experience whereas unsuccessful segregation results in unpleasant affective experience, thereby contributing to (dis)preference for a musical stimulus. We conducted a large-scale constructive replication of one of their studies (Bonin et al., 2016; Exp. 2), the results of which were ostensibly consistent with the SDH, yet which also suffered from methodological limitations that ultimately called its support for the hypothesis into doubt. Specifically, we asked participants to indicate their preferences for same- versus mixed-timbre versions of several polyphonic melodies. For some participants, the mixed-timbre versions were produced with a piano timbre in the lower voice and a trumpet timbre in the upper voice, whereas for others, the mixed-timbre versions were produced with a xylophone rather than a trumpet timbre in the upper voice. In addition, for some participants, all melodies were composed of parallel minor ninths (inharmonic condition), whereas for others, they were composed of parallel octaves (harmonic condition). Participants who listened to inharmonic stimuli, relative to those who listened to harmonic stimuli, preferred the mixed-timbre variants, but this effect was moderated by timbre (it was stronger in the xylophone condition than in the trumpet condition). While these results support the SDH, they suggest that its effects are not as absolute as originally posited by Bonin et al. (2016). In particular, source dilemma effects were shown to be moderated by timbre, which suggests that future work should further investigate the effects of timbre (and of other musical elements) on the predictions of the SDH.

Subjects: Aesthetics / preference, Emotion; Harmony and tonality; Timbre

When: 12:00 PM in KC905/907 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

Session C3, Personal Music Listening 1

11:30-12:15 PM in KC909

C3-1: Discrete Emotions Emerge from Violation of Musical Expectancies and Contextual Information

Julian Céspedes-Guevara*(1), Kelly Sierra(2), Steven Vargas(2)
1:Department of Psychological Studies, Universidad Icesi, 2:Universidad Icesi

Several theories have proposed that violation of musical expectations induces affective responses in listeners. However, empirical evidence is unclear about the type of affect that this mechanism induces: whereas some researchers have found that violation of harmonic expectations induces changes in arousal and/or valence, others have found that it induces discrete emotions such as irritation and anxiety. The present experiment tested a constructionist hypothesis that explains these disagreements in past research by distinguishing the contribution of the musical expectancy mechanism from the contribution of contextual information in the induction of affective responses in listeners. Stimuli consisted of specially-composed musical pieces, which either confirmed or violated listeners’ harmonic expectations, and were not expressive of any particular emotion. These pieces were presented to participants in three conditions: music-only, music paired with a short horror film, or music paired with a short sentimental film. The expectation-violating moment of each piece was paired with the films so that it coincided with a surprising moment in each film’s narrative. Participants (n=30) were asked to report experienced core affect (valence / arousal) and induced discrete emotions, and their skin conductance levels and facial expressions were measured. It was predicted that participants’ core affect (i.e., valence and arousal) would be higher in the violated-expectation condition than in the confirmed-expectation condition; that the music-only condition would not be associated with any discrete emotion; and that the music-film pairings would be associated with discrete emotional experiences of induced fear or induced tenderness, respectively. Self-report and skin-conductance data support these hypotheses. These results are interpreted as supportive of the theory that listening to music activates automatic perceptual processes (such as the musical expectancy mechanism) that induce fluctuations in the listener’s core affect, and that the activation of associative mechanisms transforms these fluctuations of core affect into a variety of emotional experiences.

Subjects: Emotion, Expectation

When: 11:30 AM in KC909 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

C3-2: Musical Taste and Identity: Favorite Songs May Provide Cues About Personal Characteristics of the Listener

Meagan Curtis*(1), Sarah Brothers(1)
1:Purchase College, SUNY

Musical preference is often used as a signal to attract like-minded peers and for related impression-management functions. These uses of music suggest that one’s preference communicates information about the listener, such as personality characteristics or other facets of identity. The current study examined whether listeners can make accurate inferences about an individual based on that individual’s preferred music. Twenty-eight Spotify users were asked to provide a link to a personalized playlist of the 100 songs that they listened to the most in 2017. Spotify creates these playlists for their users annually, based on each user’s listening history. We opted to use these Spotify playlists as indicators of preference instead of other forms of self-report so as to avoid giving participants an opportunity to engage in deliberate impression management. Participants were also asked to complete the Ten Item Personality Inventory (TIPI) and several demographics questions. To test the ability of observers to decode information about the participants from their playlists, we created one playlist for each of the 28 participants that contained the top 10 songs from their Spotify playlist. We then asked nine observers to sample each of the 10-song-playlists and try to determine the age, gender, ethnicity, and personality of the playlist owner. The observers were able to decode extroversion and openness-to-experience with some success. Gender, age range, and ethnicity were correctly predicted at rates that were significantly above chance. Years of musical training was analyzed as a potential explanatory variable for the decoding performance of individual observers, but it was not a significant predictor of overall accuracy or the accuracy with which any one factor was decoded. Future research will attempt to determine the musical cues that communicate information about listeners as well as factors that may influence decoding success.

Subjects: Aesthetics / preference,

When: 11:45 AM in KC909 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

C3-3: Personal music listening for emotion regulation: Distinguishing primary from secondary motives

Elizabeth E Kinghorn(1)
1:University of Western Ontario

Emotion regulation is often cited as a primary motivation for engaging with music. However, the prevalence of emotional objectives and the various factors influencing music use motivations are not entirely clear. Differences in methodology have highlighted areas of ambiguity: Results from recall studies have emphasized emotional motivations as most prevalent overall, while studies employing Experience-Sampling Methodology suggest that this is largely context-dependent, with initial mood being especially critical. Furthermore, prior research has traditionally asked participants to report only primary motivations, which may oversimplify a more nuanced phenomenon. Research discriminating among primary and secondary motives for listening may provide much needed clarity. This study aimed to examine primary and secondary motives for music listening, the influence of initial mood on listening motivations, and the emotional effects associated with these motives. Undergraduate student participants used the MuPsych mobile ESM smartphone application for a two-week period, providing information about motivations and current mood during episodes of music listening. Analysis of pilot data (N = 45) found that participants chose a primary emotional motivation for listening in approximately 20% of all listening episodes. When secondary motives were accounted for, however, the total percentage of listening episodes in which an emotional motivation was at play rose to 75%. Although prior studies have found that emotional motives are reported more frequently when participants are in a negative initial mood, there was no significant difference in this sample. Further analyses with larger datasets will look to support these findings and assess the emotional effects of listening with various aims in mind. Distinguishing between primary and secondary motives has implications for gaining a more nuanced understanding of motivations for music use and the influence of personal and contextual variables. This further information may also allow us to examine the level of awareness at which these motivations operate.

Subjects: Emotion, Music Use Motivations

When: 12:00 PM in KC909 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

Session C4, Symposium: LIVELab Part 1

11:30-12:15 PM in KC914

C4-1: Coordination during music making among musicians and audiences: Studies in realistic settings using the LIVELab

Laurel Trainor*(1), Andrew Chang(1), Haley Kragness(1), Daniel Bosnyak(1), Elger Baraku(1), Molly Henry(2), Daniel Cameron(3), Dana Swarbrick(1), Jessica Grahn(4), Dobri Dotov(1), Ian Bruce(1), Larissa Taylor(1), Ranil Sonnadara(1)
1:McMaster University, 2:Max Planck Institute for Empirical Aesthetics, 3:Brain and Mind Institute, University of Western Ontario, 4:University of Western Ontario

The majority of scientific studies of music are on individuals in ecologically impoverished contexts. However, throughout human history, music making and listening typically involved groups of people interacting. Recent studies suggest the social consequences of experiencing music with others can be profound, including increased helping behavior in infants following synchronous bouncing to music, physiological benefits of choir singing, and increased cooperation between adults. Yet we understand little of how musicians coordinate nonverbally in real time to create a common musical expression, or how experiencing live music in an audience differs from listening alone. New technologies and signal processing advances now make it possible to study group dynamics at both behavioral and brain levels. The LIVELab (Large Interactive Virtual Environment) is a fully functioning 100 seat concert hall in which EEG and motion capture, among other responses, can be measured simultaneously in musicians and audience members during live musical performances. In this symposium, four papers are presented. In each, data collected in the LIVELab addresses a different aspect of social coordination during performances. First, Chang et al. show that predictive communication among musicians is reflected in body sway. Second, Dotov et al. show that four novice drummers can self-organize non-verbally to play in synchrony, and that group performance as a whole is better than individuals drumming alone. Third, Henry et al. use hyper EEG scanning to show that social networks among audience members are enhanced during live compared to recorded concerts. Finally, Taylor et al. present work on improving social benefits of live concerts for people with hearing aids by developing better hearing aid music programs and using loop technology to deliver sounds directly to hearing aids through their telecoils. Together the papers contribute new understanding of the complex social interactions involved in making and experiencing music.

Subjects: Music and movement, Physiological measurement

When: 11:30 AM in KC914 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

C4-2: Body sway reflects interpersonal coordination among musicians

Andrew Chang(1), Haley Kragness(1), S Livingstone(2), Daniel Bosnyak(1), Elger Baraku(1), Laurel Trainor(1)
1:McMaster University, 2:NA

Interpersonal coordination is essential for daily life; however, the mechanisms are not well understood, partially due to trade-offs between ecological validity and experimental control. In addition, dependent variables that can index bidirectional predictive coordination across time are needed. We have conducted a series of studies to investigate whether Granger causality analyses of body sway (measured by motion capture) can be used to quantify the magnitude and directionality of interpersonal coordination between pairs of musicians performing in ensembles, an ecologically valid context for interpersonal coordination. In Study 1, we investigated whether the total body sway coupling within a string quartet reflected the rated quality of musical performances, and whether the coupling revealed leader-follower relationships when roles were confidentially assigned under experimental control. Results confirmed that the coupling magnitudes of leader-to-follower directional relationships were higher than follower-to-leader or follower-to-follower relationships, and the total coupling magnitude was positively associated with ratings of performance success. In Study 2, we investigated whether body sway coupling additionally reflects joint emotional expression, a quality that is essential for joint action in aesthetic tasks such as ensemble music playing. Results showed that the total coupling magnitude was higher when pieces were played with expression than without expression, and this coupling magnitude was positively associated with self- and judge-rated emotional intensity. Studies underway are using Granger causality to investigate the relation between predictive coordination as measured in body sway and as measured directly from the sound output of the different instruments in an ensemble. Together, these studies demonstrate that body sway dynamically reflects many aspects of interpersonal coordination, including coordination quality, leadership, and emotional expression. These findings extend our understanding of joint music performance, and the directional measurement used here can be harnessed to investigate, for example, romantic attraction and social interactions in people with dementia.
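
The directional coupling measure described here, Granger causality between two performers’ body-sway time series, can be sketched with statsmodels. The signals, lag order, and AR structure below are illustrative assumptions on synthetic sway data, not the authors’ motion-capture pipeline.

```python
# Sketch of directional body-sway coupling via Granger causality (synthetic data).
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
n = 1000

# Leader's anterior-posterior sway: a smooth, stationary AR(1) signal.
a = np.zeros(n)
for i in range(1, n):
    a[i] = 0.95 * a[i - 1] + rng.normal(0, 1)

# Follower tracks the leader with a 5-sample lag plus noise.
b = 0.8 * np.roll(a, 5) + rng.normal(0, 0.5, n)
b[:5] = 0.0

# Columns are [effect, cause]: test whether past values of A improve prediction of B.
data = np.column_stack([b, a])
result = grangercausalitytests(data, maxlag=8, verbose=False)
f_stat, p_value = result[8][0]["ssr_ftest"][:2]
print(f"A -> B Granger causality at lag 8: F = {f_stat:.1f}, p = {p_value:.2g}")
```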

Subjects: Music and movement, Physiological measurement

When: 11:45 AM in KC914 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

C4-3: Emergent coordination dynamics in quartets of synchronized drummers differ qualitatively from those of dyads

Dobri Dotov(1), Daniel Bosnyak(1), Laurel Trainor(1)
1:McMaster University

Sensorimotor studies of social interaction and joint action traditionally focus on dyads. However, music often involves larger groups. From a mechanistic perspective, the accumulation of variability, delays, and N-way “mirroring systems” from each member should hinder performance in larger groups, but this is not the case. We used 4-person drumming circles to study emergent group dynamics in novice drummers. Each participant played alone (Solo) and in a group of 4, from which we analyzed both individual performance (Individuals-in-Group) and performance of the group as a whole (Group). Participants were required to maintain a steady tempo and synchronize with each other when in the group. Different trials started at different initial tempos. Beat onset times were collected from each individual. Group-level beats were approximated as points of peak acoustic energy of the group. The time-series of inter-beat intervals for each condition were analyzed using lagged auto-correlations and the parameters of a drift-diffusion model. Furthermore, interactions among pairs of Individuals-in-Group were examined using lag-0 and lag-1 cross-correlations. The lag-1 auto-correlations of Solo, Group, and Individuals-in-Group time series were negative, suggestive of self-correction. Importantly, cross-correlations between pairs during group performance were positive at both lag-0 and lag-1, suggestive of a dynamic which allowed individuals to anticipate each other. This is in contrast with previous dyad studies, which typically report positive lag-1 and negative lag-0 cross-correlations indicative of a mutually reactive inter-personal dynamic. Furthermore, most group performances contained at least one participant with consistently positive asynchronies (a lagger) and one with consistently negative asynchronies (a leader) relative to the group. Finally, attractor strength was highest in the Group time series. Thus, emergent 4-person dynamics differ qualitatively from 2-person dynamics. Arguably, the central moment (Group timing) acts as stabilizing feedback and allows mutual anticipation and consistent leaders and laggers to emerge.
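
The lagged correlation measures used here, lag-1 autocorrelation of a drummer’s inter-beat intervals (IBIs) and lag-0/lag-1 cross-correlations between two drummers’ IBI series, can be sketched as follows on synthetic IBIs; the numbers are illustrative, not the study’s data.

```python
# Sketch of lag-1 autocorrelation and lag-0 / lag-1 cross-correlation of IBI series.
import numpy as np

rng = np.random.default_rng(4)
n = 200
ibi_a = 500 + rng.normal(0, 15, n)                          # ms, drummer A
ibi_b = 500 + 0.5 * (ibi_a - 500) + rng.normal(0, 15, n)    # drummer B tracks A

def lagged_corr(x, y, lag):
    """Correlation between x[t] and y[t + lag]."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

print(f"lag-1 autocorrelation (A):   {lagged_corr(ibi_a, ibi_a, 1):+.2f}")
print(f"lag-0 cross-correlation A-B: {lagged_corr(ibi_a, ibi_b, 0):+.2f}")
print(f"lag-1 cross-correlation A-B: {lagged_corr(ibi_a, ibi_b, 1):+.2f}")
```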

Subjects: Music and movement, Physiological measurement

When: 12:00 PM in KC914 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

Session D1, The Voice 2

12:15-1:00 PM in KC802

D1-1: Enhanced memory for vocal music does not involve the motor system

Michael Weiss*(1), Isabelle Peretz(2)
1:BRAMS, University of Montreal, 2:University of Montreal

Vocal melodies sung without lyrics (la la) are remembered better than instrumental melodies. What causes the advantage? One possibility is that vocal music elicits subvocal imitation, which could promote inner vocal rehearsal or lead to enhanced motoric representations of a melody. Distracting the vocal motor system during encoding should then reduce the memory advantage for vocal melodies. In Study 1, participants (n=38, 28 female, M=21.9±3.0 years) carried out movements of the mouth (i.e., chew gum) or hand (i.e., squeeze a beanbag) while listening to 24 unfamiliar folk melodies (half vocal, half piano). In a subsequent memory test, they rated the same melodies and 24 timbre-matched foils from ‘1–Definitely New’ to ‘7–Definitely Old’. Ratings were converted to area under the receiver operating characteristic curve (i.e., AUC scores; chance = 0.5, perfect = 1.0) separately by timbre. A mixed-model ANOVA (timbre x group) showed an advantage for vocal melodies (M=.801±.107) over piano melodies (M=.752±.120), F(1, 36)=8.20, p=.006, with no effect of group, F<1, and no interaction, F<1. In other words, the manipulation did not affect the magnitude of the voice advantage. The mouth movements may have failed, however, to adequately interrupt the vocal motor system, because chewing is nonmusical and does not involve vocalization. Study 2 repeated the design from Study 1, except participants (n=57, 45 female, M=23.5±4.1 years) carried out motor activities related to singing. Half vocalized (i.e., humming continuously) and half silently articulated (i.e., la la) during encoding. Once again, a mixed-model ANOVA (timbre x group) showed a significant advantage for vocal melodies (M=.760±.125) over piano melodies (M=.694±.129), F(1, 55)=7.03, p=.010, with no effect of group, F<1, and no interaction, F<1. These results challenge the notion that the motor system drives enhanced memory for vocal melodies. Instead, the voice may enhance memory processes due to its biological salience.
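
The memory measure described here converts 1–7 old/new confidence ratings into an area under the ROC curve (AUC), computed separately per timbre. Below is a minimal sketch of that conversion on synthetic ratings; it is not the study’s data or analysis code.

```python
# Sketch of converting old/new confidence ratings into an AUC score.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n_old, n_new = 24, 24
is_old = np.concatenate([np.ones(n_old), np.zeros(n_new)])   # 1 = studied melody

# Studied melodies tend to receive higher "old" ratings than timbre-matched foils.
ratings = np.concatenate([
    np.clip(np.round(rng.normal(5.0, 1.2, n_old)), 1, 7),
    np.clip(np.round(rng.normal(3.5, 1.2, n_new)), 1, 7),
])
print(f"AUC = {roc_auc_score(is_old, ratings):.2f}  (chance = 0.5)")
```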

Subjects: Memory, Timbre

When: 12:15 PM in KC802 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

D1-2: The perception of scoops in judgments of singing performances

Pauline Larrouy-Maestri*(1), Shi En Gloria Huan(2), Peter Pfordresher(2)
1:Max-Planck-Institute for Empirical Aesthetics, 2:University at Buffalo, SUNY

Singers rarely perform steady notes but produce small pitch changes (scoops) at the start and end of tones. Recent research confirms that scoops matter when evaluating the technical quality of a performance (Larrouy-Maestri & Pfordresher, 2018). Specifically, listeners’ ratings of pitch accuracy are affected by the presence of small dynamic changes to pitch at the start and end of tones. However, listeners commonly attend to music performances to appreciate them rather than to judge their technical correctness. To examine the effect of the task on pitch processing, participants were asked to listen to 4-tone melodies in which the third tone was manipulated with respect to its center region (either correct, 50 cents sharp, or 50 cents flat) as well as the presence of a scoop at the start and/or the end of the tone. Participants listened to pairs of performances that reflected different pitch conditions and compared them with respect to aesthetics (i.e., preference) or technical (i.e., pitch accuracy) merit. Results confirm the previous finding about the perceptual relevance of scoops when judging the correctness of sung performances. Moreover, we show that the influence of scoops on listeners’ ratings is not limited to technical judgments but is also visible in listeners’ preferences. While technical and aesthetic judgments are highly similar, listeners seem to be more tolerant when focusing on their aesthetic preferences than when judging the correctness of a melody. Interestingly, scoops that affect the melody, by disrupting the continuity between consecutive tones, seem to be appreciated – whereas they are not considered correct. The effect of task on ratings and mechanisms underlying scoop perception opens up new approaches for understanding pitch processing in the context of dynamic auditory sequences. Larrouy-Maestri, P., & Pfordresher, P. Q. (2018). Pitch perception in music: Do scoops matter? Journal of Experimental Psychology: Human Perception and Performance, 44(10), 1523-1541.

Subjects: Aesthetics / preference, Pitch; Psychoacoustics

When: 12:30 PM in KC802 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

D1-3: Simultaneous dual-plane, real-time magnetic resonance imaging videos of the vocal tract in advanced trombone players show a close coupling of movements measured in different planes

Matthias Heyne*(1), Peter Iltis(2), Jens Frahm(3), Dirk Voit(3), Arun Joseph(3), Lian Atlas(2)
1:Boston University, Sargent College of Health & Rehabilitation Sciences, Boston, MA, 2:Gordon College, 3:Biomedical NMR, Max-Planck-Institute for Biophysical Chemistry, Göttingen

Motivation: While a small number of studies using various imaging methods have examined vocal tract movements during brass instrument performance in the sagittal plane (side view), almost no research exists on what these movements might look like in the coronal plane (frontal view). Furthermore, it is unclear how closely the movements observed in either dimension might be correlated. In this presentation, we will show results of analyses based on the first real-time magnetic resonance imaging (RT-MRI) videos of the vocal tract recorded simultaneously in both sagittal and coronal planes in 5 advanced trombone players who performed the same musical exercise. Methodology: Dual-slice RT-MRI acquisitions were implemented in a frame-interleaved manner on a Siemens 3 Tesla system with 20 millisecond acquisitions per frame to achieve two interleaved videos at a rate of 25 frames per second. Tongue movements along profile lines manually placed on top of the MRI videos recorded in both orientations were extracted using a customized MATLAB toolkit and quantified to determine the degree of temporal synchronicity across movements observed in both planes. Results: Across all subjects, our analyses revealed a precise coupling of the vertical movements of the dorsal tongue surface (viewed from the sagittal perspective), with changes in the vertical dimensions of the air channel formed between the dorsal tongue surface and the hard palate (measured along three parallel profile lines overlaid onto the coronal images). The cross-correlation between movements measured in the sagittal and coronal planes was quite strong (mean R = 0.967) and tongue position was closely tied to changes in pitch. Implications: Our results show that multiple planes should be considered when investigating vocal tract movements in brass instrument performers and we hope that the availability of this novel imaging technique will encourage similar research on other wind instruments.
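
The coupling measure reported here is a cross-correlation between movement signals extracted from the two imaging planes. The sketch below computes a normalized cross-correlation over a range of lags for two synthetic signals standing in for sagittal tongue height and coronal air-channel height; the study’s actual analysis used a custom MATLAB toolkit, and all signal parameters here are assumptions.

```python
# Sketch of lagged cross-correlation between two movement time series.
import numpy as np

fs = 25                                    # frames per second per plane
t = np.arange(0, 20, 1 / fs)               # 20 s of video
rng = np.random.default_rng(6)
sagittal = np.sin(2 * np.pi * 0.4 * t) + 0.1 * rng.normal(size=t.size)
coronal = np.sin(2 * np.pi * 0.4 * (t - 0.04)) + 0.1 * rng.normal(size=t.size)

def xcorr_at_lag(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag] (lag in frames)."""
    if lag > 0:
        return np.corrcoef(x[:-lag], y[lag:])[0, 1]
    if lag < 0:
        return np.corrcoef(x[-lag:], y[:lag])[0, 1]
    return np.corrcoef(x, y)[0, 1]

lags = range(-10, 11)
r = [xcorr_at_lag(sagittal, coronal, k) for k in lags]
best_r, best_lag = max(zip(r, lags))
print(f"peak r = {best_r:.3f} at lag {best_lag} frames ({best_lag / fs * 1000:.0f} ms)")
```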

Subjects: Performance, Language and speech; Music education/pedagogy/learning; Musical expertise; Physiological measurement

When: 12:45 PM in KC802 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

Session D2, The Listener

12:15-1:00 PM in KC905/907

D2-1: Hearing water temperature: A case study in the development of extracting meaning from sound

Tanushree Agrawal*(1), Michelle Lee(1), Amanda Calcetas(1), Danielle Clarke(1), Naomi Lin(1), Adena Schachner(1)
1:University of California, San Diego

Music perception involves extracting meaning from sound. How does this occur? Ecological approaches suggest a role of everyday acoustic experience (Clarke, 2005): Without conscious thought, listeners link events in the world to sounds they hear. We study one surprising example: Adults can judge the temperature of water simply from hearing it being poured (Velasco et al., 2013). How do these nuanced perceptual skills develop? Ecological theories predict that extensive auditory experience is required (Gaver, 1993); others suggest they are present in infancy (Spelke, 1979). We ask: Can children judge temperature from sound, and how early in childhood does this occur? N=113 children aged 3-12 years were tested (M=5.83y, 46 female). At test, participants heard two sound clips of water pouring (in randomized order), and identified which sounded like hot vs. cold water. Acoustic stimuli were professionally-recorded sounds of hot and cold water being poured into identical cups. Pre-test trials established that all children could identify (a) images of hot/cold things, and (b) familiar sounds. We found evidence of developmental change: Participant age significantly predicted accuracy in judging water temperature from sound (χ2(1)=12.91, p<0.001, logistic regression). Preschool-aged children performed at chance (4-year-olds: 40.5% correct, n=37; p=0.32; 5-year-olds: 54.5%; n=33; p=0.73, binomial tests); in contrast, 85% of older children answered correctly (29 of 34 children aged 6+). Thus, the ability to hear water temperature may not be present in early childhood, instead developing over the first six years of life. This suggests that our ability to extract nuanced meaning from sound may depend on extensive acoustic experience. Future work (with adults and children) aims to understand the nature and extent of the experience required to extract meaning from both musical and ecological sounds.
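
The two statistics reported here, a logistic regression of accuracy on age and binomial tests of each age group against chance, can be sketched as follows on synthetic trial-level data. The group definitions, trial counts, and effect size are illustrative assumptions, not the authors’ analysis code.

```python
# Sketch of age -> accuracy logistic regression and a binomial test against chance.
import numpy as np
from scipy.stats import binomtest
import statsmodels.api as sm

rng = np.random.default_rng(7)
n_children, trials_per_child = 113, 2
age = rng.uniform(3, 12, n_children)
p_correct = 1 / (1 + np.exp(-(age - 5.5)))          # accuracy improves with age (assumed)
age_long = np.repeat(age, trials_per_child)
correct = rng.binomial(1, np.repeat(p_correct, trials_per_child))

# Logistic regression: does age predict trial accuracy?
model = sm.Logit(correct, sm.add_constant(age_long)).fit(disp=False)
print(f"age coefficient = {model.params[1]:.2f}, p = {model.pvalues[1]:.3g}")

# Binomial test: is one age group above chance (50%)?
mask = (age_long >= 4) & (age_long < 5)
k, n = int(correct[mask].sum()), int(mask.sum())
print(f"4-year-olds: {k}/{n} correct, p = {binomtest(k, n, 0.5).pvalue:.2f}")
```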

Subjects: Audiovisual / crossmodal, Music and development

When: 12:15 PM in KC905/907 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

D2-2: The aesthetic experience of live concerts reflected in psychophysiological reactions

Julia Merrill(1)
1:Max Planck Institute for Empirical Aesthetics

During three classical concerts with the same program, psychophysiological data and self-reports on aesthetic experience were recorded from 90 participants in total. A professional ensemble performed string quintets by Beethoven, Brett Dean (a contemporary composer) and Brahms. Self-reports were required after each movement, consisting of questions on liking, absorption (N=7) and evoked aesthetic feelings (N=14). Measurements consisted of facial EMG (zygomaticus major muscle), electrodermal activity (EDA phasic and tonic), respiration rate and heart rate variability (HRV) as well as asymmetry (decelerating and accelerating contributions to heart rate, HRA). Altered experiences such as absorption and ‘dissociation’ (i.e., forgetting about time and surroundings) were reflected in phasic and tonic EDA, as well as in HRV. While a relaxed state is reflected in parasympathetic activity (HF), sympathetic activity increases (LF) in an attentional state, such as focusing on the music and being absorbed by it. HRA revealed greater contributions of decelerations in heart rate in states of absorption. This means the measurements were able to dissociate a state of attention from states of relaxation and distraction. Feelings of energy and power but also amazement and enchantment were related to (phasic) EDA, and sensibility and melancholy to HRV, revealing a stronger physical engagement. Liking ratings were related to a mix of activations such as more accelerations in heart rate and parasympathetic activity and only slight sympathetic activity. EMG was only marginally related to liking and was involved in ‘being moved’ and feelings of melancholy, suggesting that this muscle’s activity may not be limited to signaling positive valence in the context of aesthetic judgments. Overall, this study extends our knowledge on the aesthetic experience of music in ecologically valid situations and on how concepts like absorption and aesthetic feelings are reflected in bodily reactions.

Subjects: Aesthetics / preference, Physiological measurement

When: 12:30 PM in KC905/907 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

D2-3: Perspectival Listening: Analysis of Acousmatic music via an Embodiment Approach

Hubert Ho(1)
1:Northeastern University

The embodied cognition (EC) paradigm allows for a broadening of what a musical listener’s perspective means. Studies pointing towards an EC worldview have demonstrated expert dancers displaying increased activity in the premotor cortex (Decety and Grèzes 2006), activation in the supplementary motor area when listening to a melody (Zatorre et al. 1996), hearing of musical chords in familiar timbres inviting a motor mimetic effect (Drost et al. 2007), and activation of motor areas among both tappers and non-tappers in a rhythmic entrainment task (Chen et al. 2008). With musical motion as a common thread among these studies, Cox (2016) motivates the development of a 2×2 matrix of four listener Perspectives, the dimensions of which represent Motion/Stasis and Interiority/Exteriority. A moving observer with an interior perspective views Music as a landscape in which to move, without realization of a larger musical form. A stationary observer with an interior perspective observes Music itself as in motion, moving to and from the listener. A moving observer with an exterior perspective also sees Music as a landscape, but pinpoints an “avatar” which can be observed through the musical landscape. A stationary observer with an exterior perspective sees musical events as passing by. Traditional music analytical work (e.g. score-based analysis) has utilized only this fourth Perspective. This paper uses the 2×2 perspectival matrix to analyze examples from the acousmatic literature more broadly, drawing on multiple Perspectives. This music, disseminated via loudspeakers, is typically performed without immediate visual recourse to an agent producing the original instrumental or source sound. The audio signal processing and synthesis typically utilized in composing this music complicate ecological recognition (Schaeffer 1966). Musical examples are drawn from Yuasa, Harrison, Subotnick, Cage, and Rataj. This work supplements other theoretical work on perspective, agency, instrumentality, and embodiment done on instrumental music (Palfy 2015, Lewin 1987).

Subjects: Music theory, Embodied cognition

When: 12:45 PM in KC905/907 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

Session D3, Personal Music Listening 2: Ethics

12:15-1:00 PM in KC909

D3-1: Effects of violent music on psychophysiological desensitisation to real-life acts of violence

Kirk N Olsen*(1), Wayne Warburton(2), Merrick Powell(2), Bill Thompson(2)
1:Macquarie Univeristy, 2:Macquarie University

Exposure to violent media such as video games can desensitise individuals to real-life acts of violence. It is unknown whether exposure to music with violent themes (‘violent music’) leads to similar outcomes. We asked: (1) can exposure to music with violent lyrics desensitise listeners to subsequent exposure to real-life acts of violence; and (2) does the genre of musical accompaniment make a difference (e.g., aggressive extreme metal vs. non-aggressive rap)? A two-phase experiment included 80 non-fans of violent music who listened to 16 minutes of music (4×4-minute excerpts; listening phase) and then viewed a 5-minute video of real-life acts of violence (e.g., murder by shooting, stabbing, or beating; viewing phase). Four experimental groups (n=20 per group) were established based on a 2×2 factorial design within the listening phase. The first independent variable was ‘lyrics’ (violent, non-violent) and the second was ‘music genre’ (aggressive extreme metal, non-aggressive rap). Dependent measures were recorded at three time-points: (1) before the listening phase; (2) after the listening phase/before the viewing phase; (3) after the viewing phase. They included measures of positive/negative affect, mood, stress, hostility, and skin conductance as an index of physiological arousal. It was hypothesised that participants would respond negatively after the listening phase and after the viewing phase. However, if short-term exposure to violent lyrics desensitises listeners to real-life acts of violence, we hypothesised that participants exposed to violent lyrics would show attenuated negative responses after the viewing phase, relative to those who were exposed to music with non-violent lyrics. As predicted, all groups responded negatively after the listening phase and viewing phase: negative affect, stress, and hostility significantly increased, whereas positive affect and mood significantly decreased. After viewing real-life acts of violence in the viewing phase, those who were previously exposed to violent lyrics – regardless of music genre – reported significantly attenuated changes in positive affect, negative affect, and mood, as well as a decrease in physiological arousal over the duration of the viewing phase. These results were replicated at an even greater magnitude for violent music fans (n=20). Thus, we report evidence of desensitisation to real-life acts of violence after short-term (non-fans) and long-term (fans) exposure to music with violent lyrics.

Subjects: Emotion, Health and well-being; Physiological measurement

When: 12:15 PM in KC909 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

D3-2: The moral consequences of music: Cognitive bases of the link between music and prosocial behavior

Tanushree Agrawal*(1), Josh Rottman(2), Adena Schachner(3)
1:UCSD, 2:Franklin & Marshall College, 3:University of California, San Diego

Why does music increase prosocial behavior (Clarke et al., 2015)? We propose a novel hypothesis: Music may influence *moral* judgments. Specifically, evidence of others’ ability to enjoy music may signal their greater capacity for conscious experience (joy, pain), which is known to drive moral harm decisions (Gray et al., 2012). Does knowing that a person/animal is musical make us believe it is more wrong to harm them? Exp1. N=100 participants (31 female; M=34.79y, SD=10.71y; tested online) saw nine characters, including two critical matched pairs: Two human and two animal (monkey) individuals, one described as musical, and the other described without mentioning music (matched for length/style). Five neutral characters (frog, dog, baby, robot, “you”) reduced demand effects and ensured measure validity. On each trial (of 36), participants saw two characters and selected which would be more painful for them to harm. As predicted, participants chose the musical animal as more painful to harm than the non-musical matched animal (75/100; p<0.001, binomial test), and showed a similar trending effect with humans (60/100; p=0.057). Exp2. We replicated our findings (N=150), with one change to test whether results were driven by musicality itself, or the individual’s uniqueness (unique individuals may be more valuable). We thus presented different *species* of monkeys (all musical or non-musical, as a species). Again, participants chose the musical animal species (98/150; p<0.001) and the musical human individual (94/150; p=0.002) as more painful to harm than their matched non-musical counterparts. These results show for the first time that musicality influences moral judgments: Simply knowing that a person/animal is musical made participants judge them more painful to harm. This provides a powerful new account of how music impacts conflict and prosociality.
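
The statistical tests reported here are binomial tests of forced-choice counts against a 50% chance rate. A minimal sketch, using the counts given in the abstract and assuming two-sided tests with scipy:

```python
# Binomial tests of the reported forced-choice counts against chance (p = 0.5).
from scipy.stats import binomtest

for label, k, n in [("Exp. 1, animals", 75, 100),
                    ("Exp. 1, humans", 60, 100),
                    ("Exp. 2, animal species", 98, 150),
                    ("Exp. 2, humans", 94, 150)]:
    p = binomtest(k, n, p=0.5).pvalue
    print(f"{label}: {k}/{n} chose the musical character, two-sided p = {p:.3g}")
```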

Subjects: Music and society, Evolutionary perspectives

When: 12:30 PM in KC909 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

D3-3: Emotional, cognitive, and social functions and outcomes of violent music

Merrick Powell(1), Kirk N Olsen*(1), Bill Thompson(1)
1:Macquarie University

Whilst violent music such as extreme metal and violent rap is often blamed for eliciting violent and antisocial behaviours, a growing body of evidence suggests that violent music may have the capacity to offer positive emotional, cognitive, and social functions and outcomes for its fans. The present study aimed to investigate: (1) emotional response characteristics of violent extreme metal, violent rap, and non-violent classical music fans as they listen to extreme metal, violent rap, and classical music, respectively; (2) similarities and differences in the cognitive and social functions that music serves between the three fan groups; and (3) the role of passion as a predictor of positive and negative emotional, cognitive, and social functions and outcomes. Fans of violent extreme metal (n=46), violent rap (n=49), and non-violent classical music (n=50) completed questions measuring the cognitive (self-reflection, self-regulation) and social (social bonding) functions of, and passion toward, their respective genre of music. Participants then listened to four one-minute excerpts of their respective genre and rated ten emotional responses to each excerpt. As predicted, the top five emotions reported by all fan groups were positive emotions, with empowerment and joy the highest rated across all groups. However, the magnitude of positive emotions was significantly lower and negative emotions significantly higher for violent music fans, relative to classical fans. Fans of violent music utilised their specified music for positive functions to a similar or sometimes greater extent than classical fans. Finally, harmonious passion for music systematically predicted positive functions and emotional outcomes in violent extreme metal and classical fans, but not violent rap fans. The emotional, cognitive, and social impact of violent music for its fans is discussed.

Subjects: Health and well-being, Emotion; Music and society

When: 12:45 PM in KC909 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

Session D4, Symposium: LIVELab Part 2

12:15-1:00 PM in KC914

D4-1: Hyper EEG scanning of audience members reveals social neural networks during listening to live music

Molly Henry(1), Daniel Cameron(2), Dana Swarbrick(3), Daniel Bosnyak(3), Laurel Trainor(3), Jessica Grahn(4)
1:Max Planck Institute for Empirical Aesthetics, 2:Brain and Mind Institute, University of Western Ontario, 3:McMaster University, 4:University of Western Ontario

Attending concerts is enjoyable for a number of reasons: live music affords a qualitatively different experience than listening to a recording. Another important contributor to the enjoyment of a concert—at least anecdotally—is bonding with others who are enjoying the same musical experience. The current study considered the possibility that a live musical experience, i.e., the presence of live performers as well as an audience, might change the way brain rhythms synchronize across audience members, reflecting audience members’ musical and affiliative experiences. We collected electroencephalography (EEG) data in three realistic social contexts in the LIVELab research concert hall. First, EEG was measured simultaneously from 20 audience members (in a larger crowd of approximately 80 people) while they observed a live musical performance. Second, EEG was measured from 20 audience members (in a larger crowd of approximately 80 people) while they watched a recording of the first concert on a movie screen and with audio identical to the live concert. Finally, EEG was measured from 20 participants in small groups of 2 participants seated apart (tested in 10 separate sessions) while they observed the recorded musical performance. Thus, we manipulated the presence of the performers while keeping audience context fixed, and we manipulated the presence of other audience members while keeping the recorded performance fixed. Network-connectivity analyses on delta-band EEG data treated individual audience members as nodes in social neural networks. Social neural networks were more densely connected when the performance was live, regardless of audience presence, and the degree to which an audience member was connected with others predicted their feelings of connection to the performers. Thus, the presence of live performers at a concert leads to increased synchronization of audience members’ brain rhythms selectively at rates that are associated with feeling and moving along with a musical beat.
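
One way to picture the network-connectivity analysis described here: audience members are nodes, and an edge links two members whose delta-band synchrony exceeds some threshold, so a denser graph indicates a more collectively synchronized audience. The sketch below illustrates only that general idea on a synthetic synchrony matrix; the threshold, the synchrony metric, and the graph construction are assumptions rather than the authors’ method.

```python
# Sketch of a thresholded "social neural network" from pairwise synchrony values.
import numpy as np
import networkx as nx

rng = np.random.default_rng(8)
n_audience = 20

# Synthetic pairwise delta-band synchrony between audience members.
sync = rng.uniform(0, 1, (n_audience, n_audience))
sync = (sync + sync.T) / 2          # make the matrix symmetric
np.fill_diagonal(sync, 0)

# Keep an edge wherever synchrony exceeds an (assumed) threshold.
threshold = 0.7
G = nx.from_numpy_array((sync > threshold).astype(int))

print(f"network density: {nx.density(G):.2f}")
print(f"mean degree: {np.mean([d for _, d in G.degree()]):.1f}")
```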

Subjects: Music and movement, Physiological measurement

When: 12:15 PM in KC914 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

D4-2: Improving audience experiences for people with hearing aids at live music concerts

Larissa Taylor(1), Daniel Bosnyak(1), Ranil Sonnadara(1), Laurel Trainor(1), Ian Bruce(1)
1:McMaster University

For hard of hearing individuals to benefit from the social interaction of a live musical experience as audience members, it is essential to deliver amplification through hearing aids that makes the music audible and clear. Unfortunately, hearing aids that work reasonably well for speech often provide poor quality for live music. Taking advantage of the LIVELab, a unique facility for examining the interaction between technology and live music performance, we are examining the benefits of assistive listening technologies and changes to hearing aid (HA) processing at live music performances. In Experiment 1, HA users were able to test several assistive listening systems using their own HAs during a string quartet concert in the LIVELab. Excerpts were repeated with different reverberation settings (using LIVELab’s Meyer sound system). The majority of listeners who used the telecoil on their HAs – enabling us to deliver additional processed sound from microphones on stage directly into their HAs – reported that it improved music listening. The results and recordings from this study were used to develop an improved music program setting on a HA, with gains based on the spectral differences between speech and live music. In a second experiment, seven participants listened to a jazz quartet in the LIVELab and rated three programs: default conversation-in-quiet, default music, and the new music program. Overall, participants who reported preferences preferred the modified music program. We are now conducting a larger-scale version of these studies in collaboration with the Hamilton Philharmonic Orchestra – testing both the improved HA music program and the assistive loop technology for delivering enhanced sound to the HA – during a full orchestral concert in May 2019. Improving live music experiences for hard of hearing individuals has the potential to reduce social isolation and increase the benefits of experiencing music with others.

Subjects: Music and movement, Physiological measurement

When: 12:30 PM in KC914 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

Session E1, Ensemble Performance 3: Synchronization

2:30-3:30 PM in KC802

E1-1: Inter-brain synchrony in a piano trio: Mobile EEG evidence

Anna V Kasdan*(1), Georgios Michalareas(2), Jess Rowland(3), Ido Davidesco(3), David Poeppel(3), Suzanne Dikker(4)
1:Vanderbilt University, 2:Max Planck Institute for Empirical Aesthetics, 3:New York University, 4:New York University and Utrecht University

Performing chamber music involves high degrees of coordination among members, who often report feeling strongly and uniquely connected to one another. Prior research suggests that brain-to-brain synchrony across individuals is a marker for social interactions, likely driven by shared attentional mechanisms (Dikker et al., 2017). We thus expect that successful ensemble performance is reflected in brain-to-brain synchrony between musicians (Müller et al., 2013). However, the role of musical listening in successful interactions between members of a musical ensemble has not been explored, particularly self-listening and listening to professional groups, both tools actively employed by musicians. Using portable electroencephalography (EEG), we asked whether brain-to-brain synchrony between members of a piano trio differed when listening to a recording of themselves (self condition) versus a professional recording of the same piece (professional condition). EEG was recorded from a piano trio at a summer chamber music festival using wireless, 14-electrode EMOTIV headsets on three separate days. Before the EEG recordings, the trio was audio-recorded playing their selected piece. EEG data were recorded simultaneously from all three musicians. Brain-to-brain synchrony was measured by computing pairwise coherence (1-20 Hz) per electrode per condition per day after extracting the envelope of the signal using the Hilbert transform. Preliminary results suggest that brain-to-brain synchrony levels across conditions were greater by the third recording day, when musicians had more experience practicing together. Inter-brain synchronization did not differ appreciably between self and professional conditions. When comparing the audio recording and brain signal directly in each instrumentalist, the data suggest a brain-music correlation in the beta frequency band (13-30 Hz) in the self condition, indicating a possible role of rhythmic prediction in processing musical information related to self-performance. The results elucidate the challenges involved in field-based EEG research and suggest new methodologies for understanding musical performance.
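
The synchrony pipeline described above, extracting each channel’s amplitude envelope with the Hilbert transform and then computing pairwise coherence over 1-20 Hz, can be sketched as follows. The sampling rate, window length, and synthetic signals are assumptions; this is not the study’s analysis code.

```python
# Sketch of Hilbert-envelope extraction followed by pairwise coherence (1-20 Hz).
import numpy as np
from scipy.signal import hilbert, coherence

fs = 128                                   # EMOTIV sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(9)

# Two synthetic "EEG" channels sharing a slow amplitude modulation.
mod = 0.5 * (1 + np.sin(2 * np.pi * 2 * t))
eeg_a = mod * np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)
eeg_b = mod * np.sin(2 * np.pi * 10 * t + 1.0) + 0.3 * rng.normal(size=t.size)

# Amplitude envelopes via the Hilbert transform, then magnitude-squared coherence.
env_a = np.abs(hilbert(eeg_a))
env_b = np.abs(hilbert(eeg_b))
f, cxy = coherence(env_a, env_b, fs=fs, nperseg=4 * fs)

band = (f >= 1) & (f <= 20)
print(f"mean coherence, 1-20 Hz: {cxy[band].mean():.2f}")
```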

Subjects: Neuroscientific approach, Performance

When: 2:30 PM in KC802 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

E1-2: Joint synchrony, temporal variability and performance rates

Pauline Tranchant*(1), Eleonore Scholler(1), Caroline Palmer(1)
1:McGill University

Music performance relies on an individual’s ability to produce regular motor sequences and, for joint performance, on temporal coordination of motor actions between interacting partners. Recent studies show that musicians are less variable when they perform or tap at their spontaneous tempo than at other tempi (Zamm et al., 2018), a finding that may reflect the stability of movement at endogenous (natural) frequencies. We investigated the relationship between temporal variability during Solo and Duet rhythmic performance. Participants tapped at a spontaneous regular rate, and also tapped the rhythm of familiar melodies on a force-sensitive resistor that generated a tone with each tap. During Solo performance, participants tapped the melody at a regular spontaneous production rate (SPR). During the Duet stage, participants tapped the melody in time with their partner, following a metronome cue of eight beats set to the tempo of each participant’s SPR. Preliminary results with 12 participants indicate a significant correlation between the variability (CV) of the spontaneous motor tempo and the variability of the SPR. In addition, there was a significant relationship between participants’ Solo and Duet tapping variability when controlling for the partner’s Solo variability. The mean asynchrony during Duet performance was related to the difference between partners’ Solo SPRs. The partner whose SPR determined the Duet performance rate also determined the direction of the asynchrony; when that partner’s solo rate was faster than their partner’s, the asynchrony was negative. Together, these findings suggest that the success of joint coordination is related to the temporal stability of the individual partners.
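
The variability and asynchrony measures referred to above can be illustrated with the short sketch below; the tap times are simulated placeholders, and the CV definition (standard deviation of inter-tap intervals divided by their mean) is the conventional one rather than the authors' exact pipeline.

```python
import numpy as np

def cv_of_intervals(tap_times):
    iti = np.diff(tap_times)                 # inter-tap intervals in seconds
    return iti.std(ddof=1) / iti.mean()      # coefficient of variation (CV)

def mean_asynchrony(taps_a, taps_b):
    n = min(len(taps_a), len(taps_b))
    # Negative values indicate that partner A taps ahead of partner B on average
    return np.mean(np.asarray(taps_a[:n]) - np.asarray(taps_b[:n]))

# Placeholder tap trains: one partner near 0.50 s per tap, the other near 0.52 s
taps_a = np.cumsum(np.random.default_rng(0).normal(0.50, 0.02, 60))
taps_b = np.cumsum(np.random.default_rng(1).normal(0.52, 0.03, 60))
print(cv_of_intervals(taps_a), mean_asynchrony(taps_a, taps_b))
```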

Subjects: Beat, rhythm, and meter, Performance

When: 2:45 PM in KC802 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

E1-3: Using a bidirectional delay-coupled dynamical model to understand synchronization in joint music performance

Alexander P Demos*(1), Hamed Layeghi(2), Marcelo Wanderley(2), Caroline Palmer(2)
1:University of Illinois at Chicago, 2:McGill University

In joint music performance, performers can remain synchronized with their partner by changing the degree to which they anticipate the timing of their partner’s actions, allowing their partner more or less leadership in real time, and by changing their tempo. To understand how performers use these parameters to remain synchronized, we applied a nonlinear delay-coupled model (designed to examine predictive timing) to pairs of pianists performing duets during perturbation tasks. We employed a system of coupled differential equations with three free parameters (delay, coupling, and tempo change) for each individual in a duet pair. Model fits for each individual were compared between auditory feedback manipulation conditions in which sounded feedback was randomly removed and returned (in the same way for both duet partners) from the parts performed by one or both partners, to force a change in leadership. In the baseline conditions (with no feedback manipulation), the model suggested that the person playing the lower voice (accompaniment) was more strongly coupled to, and anticipated, the person playing the upper voice (melody) more than the reverse. When auditory feedback from the upper voice (melody) was removed, the lower voice could not couple to it and showed a significant decrease in coupling and anticipation relative to baseline, while the melody did not differ from baseline in coupling or anticipation. A similar but inverse pattern was observed when auditory feedback from the lower voice was removed. Results showed that performers change all three parameters (delay, coupling, and tempo change) to maintain synchronization with their partner. Model simulations likewise showed that all three parameters were important in maintaining synchrony.
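
To make the roles of the delay, coupling, and tempo parameters concrete, the sketch below simulates a generic pair of delay-coupled phase oscillators; the equations, parameter values, and variable names are illustrative assumptions and are not the authors' model or fitting procedure.

```python
import numpy as np

# Two performers, each a phase oscillator: omega_i sets the preferred tempo,
# k_i the coupling strength, and tau_i the delay with which performer i
# "hears" the partner's phase.

def simulate(omega, k, tau, dt=0.01, t_max=60.0):
    n_steps = int(t_max / dt)
    max_lag = int(max(tau) / dt) + 1
    theta = np.zeros((n_steps + max_lag, 2))      # phase history in radians
    lag = [int(t / dt) for t in tau]              # each delay in samples

    for t in range(max_lag, n_steps + max_lag - 1):
        for i, j in ((0, 1), (1, 0)):
            partner = theta[t - lag[i], j]        # partner's phase, tau_i seconds ago
            dtheta = omega[i] + k[i] * np.sin(partner - theta[t, i])
            theta[t + 1, i] = theta[t, i] + dt * dtheta
    return theta[max_lag:]

# Example: performer 1 (e.g., accompaniment) coupled more strongly than performer 0
phases = simulate(omega=[2 * np.pi * 2.0, 2 * np.pi * 1.95],   # ~120 vs ~117 beats/min
                  k=[0.5, 2.0], tau=[0.05, 0.15])
asynchrony = np.sin(phases[:, 0] - phases[:, 1])               # proxy for phase asynchrony
```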

Subjects: Computational approach

When: 3:00 PM in KC802 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

E1-4: Quantifying Coordination in Improvising Piano Duos

Matthew Setzler*(1), Robert Goldstone(1)
1:Indiana University

Music produced by ensembles reflects underlying patterns of interaction. Previous work has demonstrated that in composed musical settings, an ensemble’s ability to coordinate depends on mutual coupling, with co-performers influencing one another in ongoing feedback loops. How does the presence of mutual coupling influence the music jointly produced in improvised settings? Here we study coordination in dyads of professional jazz pianists. Participants’ musical output was recorded in one of two conditions: a coupled condition, in which two pianists improvised together as they typically would, and an uncoupled condition, in which a single pianist improvised along with a “ghost partner” – a recording of another pianist taken from a previous coupled trial. The conditions were identical except that in coupled trials subjects were mutually coupled to one another, whereas there was only unidirectional influence in uncoupled trials. Analysis of note onset timing in the MIDI recordings revealed two ways in which the coordinated rhythmic activity of participants differed significantly as a function of condition. First, musicians synchronized more effectively when mutually coupled, with smaller asynchronies between near-simultaneous onsets. Second, mutual coupling resulted in greater cross-correlation of co-performers’ onset density – the number of notes played per unit time. A lagged cross-correlation revealed that the onset density of ghost partners (in uncoupled trials) was more correlated with future values of onset density in the live musician than vice versa, reflecting the underlying leader-follower structure of the condition. These objective effects were paralleled by significant differences in participants’ subjective experiences. Despite being blind to condition, participants rated coupled trials as higher in quality and as characterized by greater ease of coordination. This work provides the first controlled investigation of quantitatively measured coordination patterns in freely interacting jazz musicians.
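
A minimal sketch of the onset-density cross-correlation analysis described above is given below; the bin width, lag range, and onset times are illustrative assumptions rather than the study's parameters.

```python
import numpy as np

def onset_density(onset_times, duration, bin_s=1.0):
    # Count note onsets per time bin (e.g., onsets extracted from MIDI, in seconds)
    bins = np.arange(0.0, duration + bin_s, bin_s)
    counts, _ = np.histogram(onset_times, bins=bins)
    return counts.astype(float)

def lagged_xcorr(x, y, max_lag=8):
    # Pearson correlation of x(t) with y(t + lag), for lags in bins
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            a, b = x[-lag:], y[:lag]
        elif lag > 0:
            a, b = x[:-lag], y[lag:]
        else:
            a, b = x, y
        out[lag] = np.corrcoef(a, b)[0, 1]
    return out

# Placeholder onset times for a live pianist and a "ghost" recording
rng = np.random.default_rng(0)
live = np.sort(rng.uniform(0, 120, 400))
ghost = np.sort(rng.uniform(0, 120, 380))
xc = lagged_xcorr(onset_density(live, 120), onset_density(ghost, 120))
# If the ghost's density predicts the live player's future density better than
# the reverse, correlations should be asymmetric across positive vs. negative lags.
```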

Subjects: Composition and improvisation, joint action

When: 3:15 PM in KC802 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

Session E2, Harmony 1: Expectation

2:30-3:30 PM in KC905/907

E2-1: Model-based fMRI reveals modulation of reward network activity to predictions in tonal harmony

Vincent KM Cheung*(1), Peter Harrison(2), Lars Meyer(1), Marcus Pearce(2), John-Dylan Haynes(3), Stefan Koelsch(4)
1:Max Planck Institute for Human Cognitive and Brain Sciences, 2:Queen Mary University of London, 3:Bernstein Center for Computational Neuroscience, 4:University of Bergen

Prediction plays an important role in music cognition, as our affective experience of music is said to be shaped by our ability to make probabilistic inferences about forthcoming musical structures. Although recent studies have suggested the involvement of the mesolimbic reward network in musical pleasure, the interaction between prediction and the reward network has remained speculative. This study therefore aimed to identify the neural correlates of predictions about tonal harmony during music listening. We used the Information Dynamics Of Music (IDyOM) model to analyse the statistical structure of chord sequences in the McGill Billboard corpus as a model of stylistic syntax. We derived the information content and entropy of each chord given the preceding context, which respectively simulate the perceived surprise and the predictive uncertainty of a music listener generating probabilistic predictions about harmonic structure. Forty subjects listened to chord sequences whilst their brain activity was recorded in a 3T MRI scanner. Data were analysed with a mixed-effects regression model. Chord entropy modulated metabolic activity in the caudate, nucleus accumbens, amygdala, and orbitofrontal cortex. Chord information content further modulated activity in the inferior frontal gyrus. Both entropy and information content modulated activity in bilateral auditory cortex. Our results demonstrate that statistical properties of musical syntactic structure modulate the reward network. We suggest that musical reward may emerge from the interaction between predictive processes in the brain and acquired statistical regularities of syntactic structure as music unfolds over time. Our findings extend previous work showing the involvement of the mesolimbic reward network and auditory cortex in encoding musical reward value.
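
To make the two IDyOM-derived quantities concrete, the sketch below computes information content and entropy from a predictive distribution over a chord alphabet; the toy distribution is an assumption for illustration only, not output from IDyOM or the Billboard corpus.

```python
import numpy as np

def information_content(p_next, observed):
    # Surprise of the chord that actually occurs: -log2 p(chord | context)
    return -np.log2(p_next[observed])

def entropy(p_next):
    # Predictive uncertainty before the chord is heard, in bits
    p = np.array(list(p_next.values()))
    return -np.sum(p * np.log2(p))

p_next = {"I": 0.55, "IV": 0.20, "V": 0.15, "vi": 0.10}   # toy p(chord | context)
ic = information_content(p_next, "vi")   # an improbable chord: ~3.32 bits of surprise
h = entropy(p_next)                       # uncertainty of the whole distribution
```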

Subjects: Neuroscientific approach, Corpus analysis/studies

When: 2:30 PM in KC905/907 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

E2-2: Can musical training change the perception of dissonance? A study about broken harmonic expectations

Carlota Pagès*(1), Juan M Toro(2)
1:Center for Brain and Cognition, Universitat Pompeu Fabra, 2:Universitat Pompeu Fabra & ICREA

In the present study we explore how musical training shapes brain responses to different degrees of violation of harmonic expectancies. In Western tonal music, patterns of tension and release are essential to composition. Musical tension leads to an expectation of resolution that can be broken in many ways. Previous research has shown that tonal-syntactic violations are usually perceived as erroneous and elicit specific neural responses such as the early right-anterior negativity (ERAN). However, little is known about the relationship between musical unexpectedness, sensory dissonance, and the effect of musical training. The main aim of the present study is to determine whether different degrees of musical violation are processed differently after long-term musical training compared with day-to-day exposure. To this end, we recorded ERPs from musicians and non-musicians while they passively listened to chord progressions with irregular endings that included mild (Neapolitan chords) and strong violations (dissonant clusters). We found that, irrespective of training, all violations elicited the ERAN. However, the ERAN for dissonant endings was larger in musicians than in non-musicians. More importantly, our results showed an early sensitivity to the degree of violation only in musically trained participants: musicians showed a larger ERAN for strong than for mild violations. This suggests that, after long-term musical training, the degree of dissonance of a musical ending can determine its degree of fit with the context. Musicians might process dissonant irregularities as less expected than tonal-syntactic irregularities, thus triggering a larger ERAN. We also observed that violations elicited a P3 in musicians, suggesting that they might be salient enough to attract musicians’ attention. We propose that musical training modulates sensitivity to different degrees of violation of the harmonic context.

Subjects: Harmony and tonality, Expectation; Musical expertise; Neuroscientific approach

When: 2:45 PM in KC905/907 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

E2-3: Harmonic Attraction: Flexible Local and Global Processing

Carol L Krumhansl(1)
1:Cornell University

Background: Two experiments were conducted to assess the relative contributions of local and global harmonic influences. A chord may be perceived relative to its immediately preceding or following chord (local influence) or relative to chords separated by some distance on the musical surface (global influence). Such long-distance dependencies have been represented, for example, by the trees in the theories of Lerdahl and Jackendoff (1983) and Lerdahl (2001). Experimental design: Trials began with a I-IV-V-I progression (in major), which was then followed by a pair of chords. The chords were all of the major and minor chords built on the seven scale degrees, plus the diminished chord in the key. The chord tones were sounded as octave complexes spanning four octaves. Musically trained listeners rated how well the second chord of the pair (C2) followed the first (C1), a measure of harmonic attraction. Results: Chord pair ratings were influenced by the global tonality established by the context, by the tonality of C1 and, to a lesser extent, by the tonality of C2. In addition, chord pairs received higher ratings when they shared more tones, when their roots were close on the circle of fifths, and when C2 was a major chord. Individuals differed in the relative weight given to local and global influences. At one extreme was a group whose responses depended almost entirely on the tonality of the context. At the other extreme was a group whose responses showed strong local effects, particularly of the tonality of C1. Implications: Individuals showed differentiated yet patterned response strategies in how they interpreted and weighted local and global variables in ratings of harmonic attraction. The findings support models that postulate both event-to-event processing and influences of chords separated in time.

Subjects: Harmony and tonality, Music theory

When: 3:00 PM in KC905/907 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

E2-4: Style impacts listeners’ tonal-harmonic representation of Western music

Dominique T Vuvan*(1), Bryn Hughes(2)
1:Skidmore College & International Laboratory for Brain, Music, and Sound Research, 2:The University of Lethbridge

In the years since Krumhansl and Kessler’s (1982) pioneering work on Western tonal hierarchies, scholars have continued to refine these quantitative models by leveraging corpus data (Temperley 2009) as well as by differentiating representations that were previously assumed to be homogeneous, such as minor tonal hierarchies (Vuvan et al. 2011). One aspect of pitch organization that has received increased attention in recent years is musical style. In particular, corpus studies have shown that popular styles such as rock have a different pitch distribution than the common-practice structure described by Krumhansl and Kessler (De Clercq and Temperley 2011; Temperley 2018). Research in cognitive neuroscience has also interrogated whether training in a particular musical style changes one’s neurocognitive representation of tonal-harmonic relations (Bianco et al. 2018; Przysinda et al. 2017; Tervaniemi et al. 2016), but relatively less work has investigated the potential flexibility of tonal-harmonic representation within subjects. We report on a series of experiments using a priming paradigm that cued listeners to musical style (rock vs. classical). Listener behaviour was measured via goodness-of-fit ratings of cadences and probe tones, as well as accuracy and response time of a tuning judgment. Across six experiments, data converged to suggest that listeners’ tonal organization and harmonic expectations were influenced significantly by musical style, such that listeners had significantly less differentiated tonal hierarchies and harmonic expectations in rock than in classical contexts. Additionally, the density of style cues in the prime mattered, such that differences between rock and classical contexts were augmented in experiments where participants received multiple style cues as compared to experiments where they received a single style cue. Our findings suggest that tonal-harmonic perception in Western musical contexts is not governed by a monolithic system. Rather, different musical styles engender differing structural representations.

Subjects: Harmony and tonality, Expectation

When: 3:15 PM in KC905/907 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

Session E3, Facial Emotion

2:30-3:30 PM in KC909

E3-1: Evaluation of Facial, Musical and Prosody Emotion Recognition in Patients with Parkinson’s Disease

Shantala Hegde*(1), Babina Asem Asem(1), Abhishek Lenka(1), Mariamma Philip(1), Pramod Kumar Pal(1)
1:National Institute of Mental Health and Neuro Sciences

A plethora of studies have examined cognitive deficits in Parkinson’s disease (PD). However, despite the known involvement of subcortical structures such as the basal ganglia and amygdala, emotion recognition deficits in PD have not been studied extensively. Most studies have examined emotion recognition in a single domain, namely facial emotion recognition; only a handful have examined emotion recognition deficits in speech prosody and music. This study examined deficits in emotion recognition (in the facial, prosodic, and musical domains) and their relation to cognitive function in patients with PD. Patients with PD (n=32) and matched healthy controls (HC, n=32) comprised the sample. Neuropsychological tests measuring attention and executive functions were administered. The NIMHANS Emotion Perception Test and Musical Emotion Test were used to assess emotion recognition from facial, prosodic, and musical expression. Chi-square tests, Student’s t-tests, Pearson’s product-moment correlations, Mann-Whitney U tests, z-scores of cumulative proportions (Van der Waerden’s formula), and multivariate analyses were used to analyse the data. Compared to HC, the PD group showed significant deficits in focused attention, verbal and visual working memory, and response inhibition. In addition to cognitive deficits, the PD group showed significant impairment in the perception and discrimination of emotions: they had deficits in recognizing and discriminating facial and prosodic emotions, and in recognizing happiness in the musical domain. Misidentification of emotion was higher for fearful and angry musical stimuli than for happy or sad musical stimuli. Performance on the emotion perception and discrimination tasks correlated with several of the cognitive measures. These findings add to our understanding of non-motor symptoms in PD and can inform the development of evidence-based music interventions targeting these incapacitating non-motor symptoms.

Subjects: Emotion, Music and Emotion in clinical conditions

When: 2:30 PM in KC909 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

E3-2: Recognizing Facial Emotion during Shared Music Listening Experiences in Individuals with Autism Spectrum Disorders

Lucas J Hess*(1), Peter A Martens(1), Hannah Percival(1), David Sears(1)
1:Texas Tech University

The use of audiovisual multimodal stimuli combining music and videos of actors has shown promising results for improving facial emotion recognition (FER) in neurotypical populations, especially for the emotion of fear. People with Autism Spectrum Disorder (ASD) experience selective impairment of facial emotion recognition for fear. This study investigates the possibility of using music to improve FER in individuals with ASD. We composed five 20-second piano pieces as auditory stimuli, each designed to convey one of five specific emotional states (sadness, fear, anger, happiness, calmness). We then created video recordings of four “actors” who were coached to respond facially to the intended emotion of the musical stimuli. Participants viewed the video recordings while listening to the same music as the person in the recording (matched condition), different music from the person in the recording (mismatched condition), white noise, or silence. Participants then identified the emotion that the actor was expressing and rated the perceived valence and physiological arousal of the actor’s expression on a circumplex model. Data collection is ongoing, but preliminary results suggest that individuals with ASD were more accurate at FER for fear and anger in the matched condition than in the silence and mismatched conditions. Accuracy was lower in the mismatched condition than in the matched and silence conditions. Arousal ratings were also more extreme in the matched condition than in the other conditions. This result fits our general hypothesis that matched music can improve FER, possibly by making a perceived emotion more extreme, and speaks to the shared emotional states that underlie social bonding. This research has implications for improving facial emotion recognition not only for people on the autism spectrum, but also for those coping with PTSD-related issues.

Subjects: Emotion, Audiovisual / crossmodal

When: 2:45 PM in KC909 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

E3-3: Priming effects of speech and song on facial emotion recognition: A comparative study between individuals with congenital amusia and high autistic traits

Yik Nam Florence Leung*(1), Can Zhou(2), Cunmei Jiang(2), Fang Liu(1)
1:University of Reading, 2:Shanghai Normal University

Sensitivity to subtle pitch changes plays a fundamental role in decoding emotional meaning in speech prosody and music. Congenital amusia (CA) and autism spectrum disorder (ASD) are neurodevelopmental disorders with distinctive pitch processing profiles. While hyposensitivity to pitch variation in CA relates to mild impairments in processing prosodic and musical emotions, hypersensitivity to pitch variation in ASD benefits the processing of musical emotions and plays a seemingly compensatory role in processing emotional prosody. Using a cross-modal affective priming paradigm, this study investigated whether and how auditory emotional cues guide facial emotion recognition in individuals with CA and with high autistic traits within a multimodal context. Preliminary data were obtained from 14 individuals with high autistic traits (HAT), 13 with low autistic traits (LAT), 12 with CA, and 11 typically developing (TD) control participants, who identified emotions in faces or face-like objects (targets) after hearing a spoken or sung word (prime) with either a congruent or an incongruent emotion. Participants also completed baseline tasks that involved simple recognition of the emotions in faces, face-like objects, speech, and song in the absence of priming. At baseline, all groups performed comparably except the CA group, who were less accurate at identifying emotions from objects, speech, and song. In the priming tasks, the LAT group showed better recognition of emotions in faces and objects (targets) when the emotions of the spoken/sung words (primes) were congruent, whereas the HAT group failed to show such priming effects. In the CA group, compared with the TD group, these priming effects were found only for the recognition of emotions in objects, not in faces. These findings reveal that CA and HAT individuals process emotions differently across domains: pitch seems to play a compensatory role in a cross-modal manner for CA individuals but in a within-modal manner for HAT individuals.

Subjects: Emotion, Pitch; Processing disorders

When: 3:00 PM in KC909 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

E3-4: The Effects of Real-Time Emotions and Music on Emotion Regulation During a Reading Comprehension Task

Matthew Moreno*(1), Earl Woodruff(1)
1:University of Toronto

Motivation: Emotions are an integral part of learning, especially achievement emotions (Pekrun, Frenzel, Goetz & Perry, 2007), the emotions most closely related to learning capacity and performance. The effects of background music during learning tasks have been explored (Husain, Thompson & Schellenberg, 2002; Thompson, Schellenberg & Letnic, 2011), but there is an absence of literature on the real-time emotions that characterize the effects of music on learners and performance. The research questions were: 1) Are there differences in learners’ emotional expressions as a result of listening to music while completing a reading comprehension task? 2) How might emotional expressions impact learners’ performance while completing a reading comprehension task? Methodology: The task involved a reading comprehension test from the Nelson-Denny (Form G). While participants completed the task, emotional expression was measured using real-time facial expression monitoring software (Emotient, iMotions). Participants completed both non-music and music conditions during the study, using a selection of Western-style art music. Participants were Grade 7 and 8 students at two elementary schools in Canada. Results: There were statistically significant differences between anger, contempt, and frustration during the reading and testing phases. To explore these emotions further, Pearson correlations indicated a strong, statistically significant positive correlation between anger and frustration in both the reading and testing phases of the music condition. Participants also scored significantly higher in the music condition compared to the non-music condition. Implications: The significant differences that were found provide preliminary insight into learners’ emotional experiences and into how music may work as a tool to modulate performance and engagement in a learning task. More work is needed to explore music as an affective tool, as well as the capabilities of real-time emotion expression software in a learning setting.

Subjects: Emotion, Memory

When: 3:15 PM in KC909 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

Session E4, Symposium: Music Training and Executive Function

2:30-3:30 PM in KC914

E4-1: Symposium (integrated special session): Music Training and Executive Functions

Franziska Degé(1)
1:Max Planck Institute for Empirical Aesthetics

Recently, the impact of music training on executive functions has increasingly become the focus of research, which comes as no surprise, because active music making relies on selective attention, set shifting, inhibition, and monitoring, all of which are executive functions. Indeed, empirical evidence suggests that associations between music training and executive functions, as well as influences of music training on executive functions, exist. The proposed session consists of state-of-the-art research on the influence of music training on executive functions in 4- to 12-year-old children. For preschoolers (abstract 1) and 6- to 7-year-old children (abstract 2), we report positive effects of randomized controlled musical interventions on executive functions (when compared to trained and no-treatment control groups). Data from 9- to 12-year-old children will provide insight into the structure of associations between music training and cognition and the role of executive functions (abstract 3). Finally, critical thoughts and evaluations of past research and the development of best practice routines in research on music training and executive functions (abstract 4) will complement the session. The data presented will provide a solid foundation to critically evaluate the nature of the effects of music training in children: We will compare different forms of music training (conservatory music lessons vs. multimodal music program) as well as different age groups (preschoolers to 12-year-olds) and analyze differences in outcome. Regarding structure, we will discuss how music training might affect cognitive abilities, and to what extent executive functions can account for this. Concerning best practice, we will discuss the current state of the literature and identify blind spots to outline directions for solid, informative future research. Informed by the presented data and discussions on effects of musical training, we will finally reflect on the practical significance of musical training for executive functions.

Subjects: Music and development, music training and cognitive abilities

When: 2:30 PM in KC914 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

E4-1: Multimodal Music Training on Executive Functions in Preschool Children: A Randomized Controlled Trial

Jennifer A Bugos(1)
1:University of South Florida

Musical training involves sensorimotor integration and, when coupled with critical thinking in creative tasks, requires executive control. Sustained musical activity strengthens the brain’s attentional system, with potential cognitive transfer to multiple cognitive and learning domains. While many studies in early childhood music examine the effects of single-instrument training (e.g., private instruction or singing programs) on cognitive performance in young children (Bilharz, Bruhn, & Olson, 1999; Rauscher & Zupan, 2000), few studies examine the effects of a multimodal music training program. The purpose of this research was to evaluate the effects of a 10-week multimodal music program comprising creative improvisation, gross motor training, and vocal development on children’s executive functions. One hundred fifteen 4- to 6-year-old children were randomly assigned to music training, Lego training (active control), or a no-treatment control group. Eighty-four children completed the study. Training groups received 10 weeks of instruction (45 minutes, twice weekly). The multimodal music program’s lessons integrated vocal exercises, bimanual patterning, and melodic/rhythmic improvisation. Participants completed standardized measures of executive functions pre- and post-training. A repeated-measures ANOVA showed increased accuracy for the music group on measures of processing speed compared to controls. Music training in early childhood may contribute to enhanced executive functions. Music education programs should include more complex, musically integrated activities (gross motor training, improvisation, vocal development), even for our youngest children, as these can provide opportunities for rich musical learning and may assist in maintaining attention, an important skill for learning in all domains.

Subjects: Music and development, music training and cognitive abilities

When: 2:30 PM in KC914 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

E4-2: The effect of music lessons on executive functions and IQ in 6- to 7-year old children

Ulrike Frischen(1), Gudrun Schwarzer(1), Franziska Degé(2)
1:Justus-Liebig-University Giessen, 2:Max Planck Institute for Empirical Aesthetics

Studies show positive associations between music lessons and executive functions, as well as between music lessons and IQ. Due to the correlational design of most of these studies, they do not allow causal interpretations. Therefore, we used an experimental design with two control groups to investigate whether music lessons enhance executive functions and IQ in 6- to 7-year-old children. [P] Primary school children (N = 94) aged 6 to 7 years (M = 6.6 years, SD = .41 years) were randomly assigned to a music group, an arts group, or a waiting control group. Lessons took place once a week for 45 minutes over 8 months. Dependent measures were assessed at pre- and posttest. Executive functions such as selective attention, inhibition, flexibility, and planning were assessed with the NEPSY-II. Visual-spatial working memory was tested using the AGTB 5-12. We administered the WISC-IV to assess full-scale IQ. Parents’ education, family income, and personality served as control variables. [P] Mixed ANOVAs showed significant group x test time interactions for selective attention, inhibition, visual-spatial working memory, and full-scale IQ (p ≤ .05). Post-hoc analyses revealed that only the music group significantly improved in selective attention and visual-spatial working memory from pre- to posttest (p < .01). All groups improved in inhibition, but the music group outperformed the control groups (p < .05). Concerning IQ, the music and the arts groups improved (p < .001), whereas the waiting control group did not. Analyses of the posttest showed that the music group outperformed the arts group (p < .05). [P] The findings confirm results from previous studies showing that music lessons are associated with executive functions and IQ. Our study is one of the first well-controlled experimental studies allowing the conclusion that music lessons cause an increase in executive functions and IQ in 6- to 7-year-old children.

Subjects: Music and development, music training and cognitive abilities

When: 2:45 PM in KC914 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

E4-3: The association between music lessons and specific cognitive abilities in 9- to 12-year-old children: the mediating role of executive functions

Gudrun Schwarzer(1), Franziska Degé(2)
1:Justus-Liebig-University Giessen, 2:Max Planck Institute for Empirical Aesthetics

Studies show positive associations between music lessons and general as well as specific cognitive abilities. These findings raise questions about the processes by which music lessons and cognitive abilities are connected. Previous studies have provided evidence that the association between music lessons and general cognitive abilities is mediated by executive functions. It is, however, unclear to what extent associations between music lessons and specific cognitive abilities are mediated by executive functions. Therefore, our study investigated whether associations between music lessons and specific cognitive abilities (phonological awareness, mathematical abilities) are mediated by executive functions. We also intended to replicate the mediating role of executive functions in associations between music lessons and general cognitive abilities (IQ, academic achievement). [P] We tested 30 children (16 girls) aged 9 to 12 years (M = 10 years, 10 months; SD = 1 year, 1 month). We assessed socioeconomic status (control variable) and amount of music lessons (predictor). As mediators, the executive functions inhibition, working memory, set shifting, planning, fluency, and selective attention were measured. As criterion variables, phonological awareness, mathematical abilities, IQ, and academic achievement were tested. [P] Regression models demonstrated that set shifting mediated the association between music lessons and phonological awareness, F(2,26) = 3.60, p = .04, the association between music lessons and IQ, F(2,26) = 4.57, p = .02, as well as (partially) the association between music lessons and academic achievement, F(2,24) = 6.98, p = .01. Working memory mediated the association between music lessons and phonological awareness, F(2,26) = 3.26, p = .05, as well as the association between music lessons and mathematical abilities, F(2,25) = 3.76, p = .04. [P] The results suggest that executive functions (set shifting, working memory) play a mediating role not only in associations between music lessons and general cognitive abilities (IQ, academic achievement), but also in associations between music lessons and specific cognitive abilities (phonological awareness, mathematical abilities).
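
The mediation analyses reported above can be illustrated with a standard three-regression mediation check; the sketch below uses simulated placeholder data and generic variable names, and is not the authors' analysis or dataset.

```python
import numpy as np

def ols_coefs(y, *predictors):
    # Ordinary least squares with an intercept; returns [intercept, b1, b2, ...]
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

rng = np.random.default_rng(0)
music_lessons = rng.normal(size=200)                        # hypothetical predictor
set_shifting = 0.5 * music_lessons + rng.normal(size=200)   # hypothetical mediator
phon_awareness = 0.6 * set_shifting + rng.normal(size=200)  # hypothetical criterion

a = ols_coefs(set_shifting, music_lessons)[1]                       # predictor -> mediator
total = ols_coefs(phon_awareness, music_lessons)[1]                 # total effect
direct = ols_coefs(phon_awareness, music_lessons, set_shifting)[1]  # effect controlling for mediator
indirect = total - direct   # portion of the effect carried by the mediator
```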

Subjects: Music and development, music training and cognitive abilities

When: 3:00 PM in KC914 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

E4-4: Best practices for investigating transfer effects from musical training

Robert Slevc(1)
1:University of Maryland

Recent years have seen an exciting growth in studies testing the impact of musical training on a variety of non-musical outcomes (e.g., on executive function, or EF). As is true in any field, the early stages of research on musical experience and EFs have mostly relied on small samples and relatively feasible designs, rather than “gold-standard” double-blind, placebo-controlled, randomized trials. In addition, the existing research has examined different populations (children, college students, older adults, etc.) using different types of interventions (perceptual training, instrumental training, etc.), making it difficult to draw firm conclusions. The current literature does, however, provide preliminary evidence and well-motivated theoretical claims that can now be tested using more rigorous (albeit more costly) methods. This component of the special session will focus on “best practices” for future work assessing potential transfer effects from musical training, using musical training and EFs as a specific example. Drawing on work on musical training and on discussions in the larger literature on cognitive training, we will discuss a number of theoretical and methodological issues and how they can be addressed. These include the importance of developing specific hypotheses about the mechanisms of transfer (which inform the choice of populations, type of training intervention, and control groups), avoiding potential placebo effects and experimenter biases, managing participant attrition, assessing the outcome construct(s) with multiple measures, and determining the statistical power necessary to detect effects of interest. We will also discuss the importance of complete reporting (e.g., of null effects) and the benefits of preregistration (as is now mandatory in clinical research). This sort of design, combined with insights from past work, will allow for well-motivated research that can assess causal relationships between musical training and EFs.

Subjects: Music and development, music training and cognitive abilities

When: 3:15 PM in KC914 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

Session F1, Aesthetic Responses

3:45-4:45 PM in KC802

F1-1: Musical chills: Effects of stimulus properties, stylistic preference and familiarity

Rémi de Fleurian*(1), Marcus Pearce(1)
1:Queen Mary University of London

Musical chills give a convenient window onto what makes music pleasurable because they are widespread, memorable, and observable. Changes in dynamics, texture, melody, harmony, rhythm, and instrumentation have been linked to chills, but few studies have looked at the causal influence of such factors. More specifically, it is unclear whether chills can be felt when listening to any piece of music, or whether they require a specific combination of stimulus-driven properties. Potential effects of stylistic preference and familiarity have also been proposed, but sparsely explored so far. In the present study, 93 songs were taken from a previous survey in which 221 participants reported songs during which they often experience chills. Each song was then matched with three similarly popular songs by the same artist. Participants took an online test in which they listened to randomly selected 15-second excerpts of 40 songs and their associated matches, and rated each on familiarity and liking for its genre, resulting in an individual set of 12 unfamiliar songs per participant containing three songs for each combination of song provenance (survey or matched) and liking for the genre (liked or disliked). Participants listened to the 12 songs in two lab sessions, separated by a two-week longitudinal phase away from the lab, during which they listened to the full set of songs another eight times. In each lab session, piloerection was measured using a wearable optical device, and participants continuously reported the occurrence of chills and of intensely pleasurable moments using button presses. Preliminary results, at the time of writing, suggest that the probability of experiencing chills, intensely pleasurable moments, or piloerection is higher for songs in liked genres. Experiencing pleasurable moments is also more likely for songs from the survey dataset, but other effects of song provenance and familiarity are less clear.

Subjects: Aesthetics / preference, Emotion; Expectation; Physiological measurement

When: 3:45 PM in KC802 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

F1-2: What Causes Musical Chills? Testing Theories of Auditory Looming and Fear

Scott Bannister(1)
1:Department of Music, Durham University, United Kingdom

Previous research on musical chills correlates the response with musical characteristics, including increases in acoustic intensity and spectral brightness; these relationships have been linked to fear (Huron, 2006) and auditory looming (Ghazanfar et al., 2002). However, no attempt to causally manipulate these features currently exists. This study manipulated intensity and brightness in chills excerpts to assess effects on the chills experience. Participants (N = 40) listened to versions of two previously identified chills excerpts (Bannister & Eerola, 2018): Glósóli by Sigur Rós, characterised by a crescendo (implicating auditory looming), and Ancestral by Steven Wilson, characterised by an expressive guitar solo (no clear implication of looming processes). Versions included the original, versions with increased or decreased intensity (by 6 dBA), and versions with increased or decreased brightness (by 6 dBA in frequencies above 2000 Hz). Participants listened to five of these ten versions (two from one piece, three from the other) and pressed a button to report chills experiences. Skin conductance was used to validate button presses, identifying significant increases in the signal concurrent with button presses compared to baseline measures. Skin conductance amplitudes, averaged across the three seconds after button presses, were used to index arousal levels during chills. Results show that, across conditions, increased intensity in Glósóli resulted in significantly more reports of chills than any other manipulation (z = 2.21, p = .02). No further differences were found in chills frequency or skin conductance amplitudes. This study is the first attempt to causally manipulate psychoacoustic features linked to musical chills, testing theories of fear and auditory looming. Importantly, increasing acoustic intensity may increase the incidence of chills across listeners, supporting the fear and looming hypotheses, but the effect appears to depend on emphasising underlying structures linked to fear (the crescendo), as opposed to expressive voices and virtuosic performance possibly linked to contagion and social processes (the guitar solo). REFERENCES: Bannister, S., & Eerola, T. (2018). Suppressing the chills: Effects of musical manipulation on the chills response. Frontiers in Psychology, 9: 2046. doi: 10.3389/fpsyg.2018.02046. Ghazanfar, A., Neuhoff, J., & Logothetis, N. (2002). Auditory looming perception in rhesus monkeys. Proceedings of the National Academy of Sciences, 99, 15755-15757. doi: 10.1073/pnas.242469699. Huron, D. (2006). Sweet anticipation: Music and the psychology of expectation. Cambridge, MA: The MIT Press.

Subjects: Emotion, Aesthetics / preference; Evolutionary perspectives; Expectation; Physiological measurement; Psychoacoustics

When: 4:00 PM in KC802 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

F1-3: Melancholy versus Grief: Has research on musical “sadness” conflated two different affective states?

Lindsay Warrenburg(1)
1:Ohio State University

Psychological research on human crying suggests the existence of two different yet complementary states, melancholy and grief (Vingerhoets & Cornelius, 2012). These emotions have separate motivations and physiological characteristics. When characterizing nominally “sad” music, listeners appear to offer a wide range of descriptions. Could this large variance in responses be a consequence of the failure to distinguish melancholy from grief? The current work addresses this distinction by examining listeners’ perceptions of “sad” music in a series of five studies. Three judges listened to 62 passages of “sad” music and classified them as melancholic or grieving. The first study asked listeners with superior aural skills to rate structural parameters of these melancholic and grieving passages (e.g., harsh timbres, narrow pitch intervals) on 7-point unipolar scales in order to examine the musical differences between these “sad” states; the results suggest that different musical parameters characterize melancholy and grief music (R2 = 81.8%). The other four studies asked listeners to rate perceived emotions (Study 2; n = 49) and experienced emotions (Study 4; n = 57) for melancholic and grieving passages. Results are consistent with the hypothesis that listeners can distinguish musical grief from musical melancholy (p < 0.05) and that the two stimulus types give rise to different emotions (p < 0.05). Notably, grief music is related to feelings of crying, death/loss, and transcendence, whereas melancholy music is related to feelings of reflection, depression, and relaxation. Both the perceived- and induced-emotion findings were replicated using different experimental designs (Study 3, n = 57; Study 5, n = 81). These studies have implications for refining the umbrella concept of “sadness” in music research. The results are consistent with the idea that musical “sadness” consists of more than one emotional state.

Subjects: Emotion, Harmony and tonality; Music and society; Music theory

When: 4:15 PM in KC802 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

F1-4: On the Enjoyment of Sad Music: Pleasurable Compassion Theory and the Role of Trait Empathy

David Huron*(1), Jonna K Vuoskoski(2)
1:Ohio State University, 2:University of Oslo

Why do people enjoy listening to nominally sad music? In the first instance, only about half the population reports enjoying sad music (Garrido & Schubert, 2011; Taruffi & Koelsch, 2014). Such individual variability suggests that culture, experience, and/or personal trait factors play a decisive role in the seemingly paradoxical phenomenon of sad-music enjoyment. Recent experiments implicate trait empathy. Specifically, those listeners who most enjoy sad music typically score high on “empathetic concern” (or compassion), with nominal “personal distress” (or commiseration) (Eerola et al., 2016; Kawakami & Katahira, 2015; Sattmann & Parncutt, 2018; Vuoskoski & Eerola, 2017). That is, when encountering sadness-related stimuli, sad-music lovers are more likely to experience pity or compassion rather than an emotional contagion of evoked sadness or commiseration. The authors review literature implicating compassion as a positively valenced affect. Neuroimaging studies show that altruistic thoughts alone are sufficient to activate regions of the medial forebrain pleasure circuit (Harbaugh, Mayr & Burghart, 2007; Izuma, Saito, & Sadato, 2008). Since compassion is a precursor affect intended to motivate altruistic behaviors, compassion must also be positively valenced. In this regard, the pleasure of compassion conforms to classic research on dopamine function, where, over time, dopamine rewards shift from consummatory to anticipatory behaviors (Berridge & Robinson, 1998; Gebauer et al., 2012; Weiss et al., 1993). Overall, Pleasurable Compassion Theory suggests that sad-music lovers experience only moderate levels of “I feel your pain” but high levels of “I feel sympathy for you.” If compassion is a positively valenced affect, then high levels of sympathy, pity, or compassion will produce a broadly pleasurable experience. Finally, Pleasurable Compassion Theory is shown to avoid a number of classic pitfalls identified by aesthetic philosophers in accounting for the paradox of negative emotions in the arts (e.g., Levinson, 2013).

Subjects: Aesthetics / preference, Emotion; Neuroscientific approach

When: 4:30 PM in KC802 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

Session F2, Development 1

3:45-4:45 PM in KC905/907

F2-1: Musical Instrument Practice Predicts White Matter Microstructure and Cognitive Abilities in Childhood

Psyche Loui(1)
1:Northeastern

Musical training has been associated with advantages in cognitive measures of IQ and verbal ability, as well as with neural measures including white matter microstructural properties in the corpus callosum (CC) and the superior longitudinal fasciculus (SLF). We hypothesized that children with musical training would show greater integrity and coherence in the SLF and CC. One hundred children aged 7.9 to 9.9 years (mean age 8.7) were surveyed about their musical activities, completed neuropsychological testing of general cognitive abilities, and underwent diffusion tensor imaging (DTI) as part of a larger study. Children who played a musical instrument for more than 0.5 hours per week (n = 34) had higher scores on verbal and intellectual ability (standardized scores from the Woodcock-Johnson Tests of Cognitive Abilities), higher axial diffusivity (AD) in the left SLF, and marginally higher fractional anisotropy (FA) in the right SLF than those who did not play a musical instrument (n = 66). Furthermore, the intensity of musical practice, quantified as the number of hours of music practice per week, was correlated with AD in the left SLF. These results are not explained by age, sex, socio-economic status, or physical fitness of the participants, and they suggest that the relationship between musical practice and intellectual ability is related to the coherence of axonal fibers in white matter pathways of the auditory-motor system. The findings suggest that musical training may be a means of improving cognitive and brain health during development.

Subjects: Musical expertise, Cross-domain effects; Language and speech; Music and development; Neuroscientific approach; Physiological measurement

When: 3:45 PM in KC905/907 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

F2-2: Effects of Music Training on Inhibitory Control and Associated Neural Networks in School-Aged Children: A Longitudinal Study

Sarah L Hennessy*(1), Matthew Sachs(1), Beatriz Ilari(1), Assal Habibi(1)
1:University of Southern California

Inhibitory control, the ability to suppress a dominant response, has been shown to predict academic and career success, wellbeing, and health. Playing music engages sensorimotor processes and draws on cognitive capacities including inhibition and task switching. While music training has been shown to improve certain cognitive and language skills, its impact on inhibitory control remains inconclusive. As part of an ongoing 5-year longitudinal study, we investigated the effects of music training on inhibitory control at the behavioral and brain levels in children (starting at age 6) from underserved communities. Children involved in music training were compared with children involved in sports training and with children not involved in a systematic after-school program. Inhibitory control was measured using delayed gratification, Flanker, and Color-Word Stroop tasks, which were performed both inside and outside of an MRI scanner. There were no differences in performance on any of the tasks among the groups at baseline. In the delayed gratification task, beginning after three years of training, the music group chose a larger, delayed reward in place of a smaller, immediate reward more often than the control group. In the Flanker task, music-trained children performed with higher accuracy than the control group, and with shorter reaction times than the sports group, after four years of training. There were no differences between groups on behavioral measures of the Color-Word Stroop task at any time point. However, after two years, the music group showed greater bilateral activation in the pre-SMA/SMA, ACC, IFG, and insula during the Color-Word Stroop task compared to the control group, but not compared to the sports group. After four years, these brain differences were no longer observed. The results suggest that systematic extracurricular training, particularly music-based training, can accelerate the development of inhibitory control and related brain networks in school-age children.

Subjects: Music and development, Cross-domain effects; Physiological measurement

When: 4:00 PM in KC905/907 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

F2-3: It’s all in your head: A meta-analysis of the effects of music training on cognitive measures in schoolchildren

Patrick Cooper(1)
1:University of South Florida

The utility of music training in schools has received much attention in the United States, with the pendulum of advocacy swinging back and forth between aesthetic and extramusical benefits as the key argument for music’s place in the core curriculum. While some scholars remain tied to advocating music training for autotelic purposes, others remain convinced that there are extramusical benefits to music training which, if found, may be used as mortar to reify music education’s status in the core curriculum of American schools. The purpose of this study was to conduct a meta-analysis estimating the overall mean effect of music training on cognitive measures in schoolchildren. While some studies showed large effects, studies with active control groups often yielded smaller effect sizes or non-significant results; several moderators were therefore examined to provide a more accurate picture of the overall mean effect. The random-effects meta-analysis showed a small-to-medium overall effect (N = 5612, k = 100, g = .28). Moderator analysis showed no clear advantage for one area of cognitive function (verbal, g = .28) over another (non-verbal, g = .28), and results did not differ by geographical locale or type of music intervention. When compared to active control groups, music training yielded more improvement on a range of cognitive measures (g = .21); however, moderators related to methodological quality rendered the findings weaker (g = .08). Overall, the results suggest that music training may be a positive cognitive intervention for schoolchildren; however, clear advantages of music training over other cognitive interventions were less empirically supported.
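
The pooling described above can be illustrated with a standard random-effects (DerSimonian-Laird) combination of Hedges' g values; the sketch below uses invented placeholder study data, not the meta-analytic dataset, and is not the author's analysis code.

```python
import numpy as np

def hedges_g(m1, s1, n1, m2, s2, n2):
    # Standardized mean difference with small-sample correction, plus its approximate variance
    sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)
    g = j * (m1 - m2) / sp
    v = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, v

def random_effects(gs, vs):
    # DerSimonian-Laird estimate of between-study variance, then inverse-variance pooling
    w = 1 / vs
    g_fixed = np.sum(w * gs) / np.sum(w)
    q = np.sum(w * (gs - g_fixed) ** 2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(gs) - 1)) / c)
    w_star = 1 / (vs + tau2)
    return np.sum(w_star * gs) / np.sum(w_star)

# Three hypothetical studies (means, SDs, group sizes are placeholders)
gs, vs = zip(hedges_g(105, 15, 40, 100, 15, 40),
             hedges_g(52, 10, 30, 50, 10, 30),
             hedges_g(0.6, 1.0, 60, 0.5, 1.0, 60))
pooled_g = random_effects(np.array(gs), np.array(vs))
```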

Subjects: Physiological measurement, Health and well-being; Memory; Neuroscientific approach

When: 4:15 PM in KC905/907 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

F2-4: Do young children synchronize better with music or a metronome?

Sean Hutchins(1)
1:The Royal Conservatory

Keeping the beat is a fundamental musical activity and one of the most important skills taught in early childhood music education. Despite this, little is known about the factors that affect children’s ability to perceive and produce a steady beat. One factor of practical importance to both researchers and pedagogues is the role of musical information. There is a widespread belief among early childhood music educators that tapping to real music is easier than tapping to a metronome. Real music may make beat-keeping easier because it provides a richer source of information to draw upon; on the other hand, this extra information may confuse a child with note onsets that are not aligned with the underlying beat. In this study, we attempt to dissociate these competing accounts. We measured beat-keeping in 54 young children (ages 3–6) tapping along with either a real music excerpt or a metronomic click track at the same tempo, across three different tempi. We measured error and variability during both a synchronization and a continuation phase. The results showed that, during the synchronization phase, there was no main effect of stimulus condition on produced tempo, but there was a significant main effect of condition on the variability of the children’s tapping. Contrary to music teachers’ expectations, children were more variable when synchronizing with real music than with the metronome. We also found significantly more accurate tapping for medium-tempo stimuli (~120 bpm) compared with slow or fast stimuli, as well as more accurate tapping for older than for younger children. Together, these results show that beat production in young children may be aided by simpler stimuli, and suggest that the ability to integrate more complex sources of musical information may be a skill that needs to be developed.

Subjects: Music and development, Beat, rhythm, and meter

When: 4:30 PM in KC905/907 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

Session F3, Modeling the Brain

3:45-4:45 PM in KC909

F3-1: Neural selectivity for music, speech, and song in human auditory cortex

Samuel V Norman-Haignere*(1), Jenelle Feather(2), Peter Brunner(3), Anthony Ritaccio(3), Josh McDermott(2), Gerwin Schalk(3), Nancy Kanwisher(2)
1:Columbia University, 2:Massachusetts Institute of Technology, 3:Albany Medical College, Wadsworth Center, SUNY

Music is ubiquitous across human cultures. How is it represented in the brain? fMRI studies have reported that cortical responses to music overlap with responses to speech and language, suggesting that music co-opts mechanisms adapted for other functions. However, we have previously found that when fMRI voxel responses are modeled as the weighted sum of responses from multiple neural populations, distinct selectivities for music and speech emerge. This finding suggests that the apparent overlap reported in prior studies is due to the coarse nature of fMRI, which blurs responses from nearby neural populations. To test this hypothesis, we measured cortical responses to a diverse set of natural sounds with human electrocorticography (ECoG), which has high spatial and temporal resolution, and which can sample from relatively large regions of the cortex. We observed clear selectivity for speech in some electrodes, and for music in others, validating our prior fMRI findings. Unexpectedly, we also observed electrodes that responded primarily to music with vocals (i.e. singing), and whose response could not be explained as purely a sum of music and speech selectivity. All category-selective responses developed quickly (within 200 to 500 ms of stimulus onset), and could not be explained by standard acoustic features. Music and song-selective responses were most prominent in anterior regions of the superior temporal gyrus (although a more posterior music-selective response was also observed), while speech selectivity was most prominent in the middle STG. These findings reveal that music and speech have distinct representations in the brain, but also that music itself is processed via multiple neural populations, one specific to the analysis of singing.
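The component-modeling idea referenced above, in which each voxel's (or electrode's) response is modeled as a weighted sum of a few underlying response profiles, can be sketched with an off-the-shelf non-negative matrix factorization; this is a generic stand-in for the authors' own decomposition method, and the data matrix, component count, and settings are placeholders.

```python
import numpy as np
from sklearn.decomposition import NMF

# Placeholder responses: sounds x voxels, constrained to be non-negative
n_sounds, n_voxels, n_components = 165, 500, 6
responses = np.abs(np.random.default_rng(0).normal(size=(n_sounds, n_voxels)))

model = NMF(n_components=n_components, init="nndsvda", max_iter=500)
component_profiles = model.fit_transform(responses)   # sounds x components
voxel_weights = model.components_                      # components x voxels
# Selectivity (e.g., for music, speech, or song) can then be examined in the
# component response profiles rather than in raw, spatially mixed voxel responses.
```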

Subjects: Neuroscientific approach

When: 3:45 PM in KC909 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

F3-2: Statistical context sensitivity of ERP components in an unattended tone sequence

Tamar I Regev*(1), Geffen Markusfeld(1), Israel Nelken(1), Leon Deouell(1)
1:The Hebrew University of Jerusalem

Everyday auditory streams, such as in music and speech, contain relevant statistical features in terms of frequency distributions over multiple timescales. Here we demonstrate sensitivity of event-related potential (ERP) components to the distribution of auditory frequencies, over multiple timescales, in an unattended sequence of tones. In three EEG experiments, 81 participants (21 musicians in Experiment 1, 27 musicians in Experiment 2, and 33 non-musicians in Experiment 3) were instructed to ignore sequences of pure tones presented through headphones while viewing a silent film. The sequences comprised five equiprobable notes. The notes were distributed across four octaves in Experiments 1 and 2, while in Experiment 3 this range varied across three conditions: large, medium, and small (4, 2, or 1 octaves, respectively). We found that the amplitude of the N1 component – a negative deflection in the EEG signal about 100 milliseconds after tone onset – was sensitive to the absolute distance between the current tone’s frequency and the mean frequency of the tones in the sequence: the farther the tone’s frequency was from the mean frequency, the larger the evoked N1 amplitude. In contrast to the N1, the later P2 component – a positive deflection peaking about 200 milliseconds after tone onset – showed a temporally local sensitivity to the interval between the current and the previous tone’s frequencies, and a weaker sensitivity to the sequence mean frequency. We propose a simple biophysical model of adapting neurons with wide frequency tuning curves and multiple adaptation time constants to explain these results. Using the model, we show that the P2 has a stronger dependency on the frequency spread of the tones in the sequence than the N1. Our results provide electrophysiological evidence for pre-attentive, simultaneous monitoring of distributions of sound features at multiple timescales in the human auditory cortex.
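
A toy caricature of the proposed mechanism (not the authors’ biophysical model): broadly tuned frequency channels with a slow adaptation variable respond less to tones near the sequence mean, because those channels are driven, and hence adapted, by more of the notes. All parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Five equiprobable notes spread over four octaves (log-frequency, semitones).
notes = np.array([0.0, 12.0, 24.0, 36.0, 48.0])
sequence = rng.choice(notes, size=2000)

channels = np.linspace(-24, 72, 200)      # tonotopic channel centers (semitones)
sigma = 12.0                              # broad tuning width
resources = np.ones_like(channels)        # 1 = fully recovered, 0 = fully adapted
tau = 20.0                                # recovery time constant (in tone slots)
depletion = 0.5                           # fraction of resources used per response

response_by_note = {n: [] for n in notes}
for tone in sequence:
    tuning = np.exp(-0.5 * ((channels - tone) / sigma) ** 2)
    response = np.sum(tuning * resources)           # N1-like population response
    response_by_note[tone].append(response)
    resources -= depletion * tuning * resources     # adapt the driven channels
    resources += (1.0 - resources) / tau            # slow exponential recovery

mean_freq = notes.mean()
for n in notes:
    print(f"distance from mean {abs(n - mean_freq):4.1f} st: "
          f"mean response {np.mean(response_by_note[n][50:]):.1f}")
```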

Subjects: Neuroscientific approach, Pitch

When: 4:00 PM in KC909 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

F3-3: Maurice Ravel’s Sonatine and Computational Models of the Midbrain: A Case Study of Discriminability

Braden Maxwell(1)
1:University of Rochester

Discrimination between auditory stimuli is fundamental to both the perception of speech and the enjoyment of music. For language, the auditory system must distinguish phonemes to construct meaning. For music, the auditory system must preserve differences between pitches, pitch collections, timbres, and many other musical features. The mechanisms by which these differences are represented in the auditory system are not fully understood, although this question is a foundational concern of hearing science (John et al., 2018; Carney et al., 2015; Allen et al., 2017). This study used computational models of auditory neurons developed by Zilany et al. (2014) and Mao et al. (2013) to explore the role that amplitude-modulation (AM)-sensitive cells in the midbrain may play in enhancing or preserving differences between musical stimuli. The stimuli were 280 time segments from a recording of the second movement of Ravel’s Sonatine. Stimuli were chosen for ecological validity; several musical parameters were in play simultaneously, including musical interval content, register, timbre, and performance decisions such as pedaling and dynamic levels. Discrimination of these stimuli was evaluated using a population d-prime measure on response rates of model auditory nerve fibers and midbrain cells. Results suggested that midbrain AM-sensitivity may enhance discriminability by a factor of 10 relative to auditory nerve rate representations. Further investigation of the model results revealed insights into how the musical parameters listed above may be represented by these cells and how perceptual similarity relationships may shift during early stages of auditory processing. Connections between midbrain discriminability and music theory and analysis were also identified.
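
One generic way to compute a population d-prime from model firing rates, assuming independent units, is sketched below with synthetic data; the study’s exact implementation may differ.

```python
import numpy as np

def population_dprime(rates_a, rates_b):
    """Discriminability of two stimuli from model-neuron rate responses.

    rates_a, rates_b: arrays of shape (n_trials, n_units), e.g. model auditory
    nerve fibers or midbrain cells. Per-unit d' values are combined under an
    independence assumption: d'_pop = sqrt(sum_i d'_i ** 2).
    """
    mean_a, mean_b = rates_a.mean(0), rates_b.mean(0)
    var_a, var_b = rates_a.var(0, ddof=1), rates_b.var(0, ddof=1)
    pooled_sd = np.sqrt(0.5 * (var_a + var_b)) + 1e-12
    d_unit = (mean_a - mean_b) / pooled_sd
    return np.sqrt(np.sum(d_unit ** 2))

# Hypothetical rate responses to two short segments of the recording.
rng = np.random.default_rng(0)
seg1 = rng.poisson(lam=np.linspace(20, 60, 40), size=(25, 40)).astype(float)
seg2 = rng.poisson(lam=np.linspace(25, 55, 40), size=(25, 40)).astype(float)
print(population_dprime(seg1, seg2))
```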

Subjects: Computational approach, Harmony and tonality; Music theory; Neuroscientific approach; Pitch; Psychoacoustics; Timbre

When: 4:15 PM in KC909 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

F3-4: Tracking musical tension properties in naturalistic listening conditions: decoding intracranial EEG signal

Claire Pelofi*(1), Clare Clingain(1), Marc Scott(1), Daniele Schon(2), Morwaread Farbood(1)
1:New York University, 2:Institut de Neurosciences des Systems

Music, just like language, transmits a highly complex set of information that contains structural layers ranging from low-level acoustical features such as pitch and timbre to higher-level information such as musical tension. Tension results from a combination of disparate musical and auditory features and is an important aspect of how listeners experience music. Yet the way different layers of information are intertwined and encoded in the brain to convey tension dynamics remains unknown. New tools specifically designed to track ongoing information in the neural signal using canonical correlation analysis (CCA) and multivariate temporal response functions (mTRF) have recently been developed. This study tackles the encoding of tension and release dynamics in the intracranial EEG (iEEG) signal, taking advantage of these new signal-analysis techniques. iEEG data were collected from patients with pharmacoresistant epilepsy while they listened to 50 Western tonal polyphonic musical excerpts of 20 seconds each with distinct tension levels. We fit a CCA model to the neural signal to examine how similar the neural signal and the audio envelope are (De Cheveigné et al., 2018). More specifically, we obtained a projection of both signals into a space that extracts similarities between them. A profile of continuous correlations was then computed from these projections: a higher correlation score corresponded to a better fit between the two signals. By contrasting the correlation scores for each excerpt, we were able to determine which features drew the most listener attention. A heuristic model of musical tension that combined weighted acoustic parameters which contribute to musical tension (Farbood, 2012) was used along with multivariate temporal response function (mTRF) analysis (Di Liberto et al., 2015) to observe the extent to which tension can help predict neural responses to music stimuli.
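
A minimal sketch of the CCA step described above, assuming hypothetical iEEG and envelope data and using scikit-learn’s generic CCA rather than the specific pipeline of De Cheveigné et al. (2018):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Hypothetical data for one 20 s excerpt: iEEG (n_samples x n_channels) and a
# stimulus representation (here just the audio envelope, n_samples x 1).
fs = 100                                   # both signals resampled to 100 Hz
rng = np.random.default_rng(0)
envelope = np.abs(rng.normal(size=(20 * fs, 1)))
ieeg = envelope @ rng.normal(size=(1, 64)) + rng.normal(size=(20 * fs, 64))

cca = CCA(n_components=1)
cca.fit(ieeg, envelope)
x_proj, y_proj = cca.transform(ieeg, envelope)

# Correlation between the two projections = fit between brain and stimulus.
r = np.corrcoef(x_proj[:, 0], y_proj[:, 0])[0, 1]
print(f"canonical correlation for this excerpt: {r:.2f}")
```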

Subjects: Computational approach, Expectation; Harmony and tonality; Music information retrieval; Music theory; Psychoacoustics

When: 4:30 PM in KC909 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

Session F4, Symposium: Music-Evoked Autobiographical Memories

3:45-4:45 PM in KC914

F4-1: Music-evoked autobiographical memories: Current methods and perspectives

Kelly Jakubowski*(1), Amy Belfi(2), Petr Janata(3), Amee Baird(4)
1:Durham University, 2:Missouri University of Science and Technology, 3:University of California, Davis, 4:Macquarie University

Listening to music can bring back vivid and emotional memories from across the lifespan. Since the seminal studies on this topic a decade ago, there has been an exponential increase in interest in music-evoked autobiographical memories (MEAMs). Such research plays a key role in theoretical accounts of the mechanisms by which music induces emotions, and provides critical evidence for assessing the validity of claims about the “power of music.” Research on MEAMs may also be of practical relevance in informing the development of music-based interventions for people with memory disorders. This symposium brings together a body of research that represents the full scope of methods currently being employed to investigate MEAMs: laboratory experiments, questionnaire/diary studies, neuroimaging experiments, and neuropsychological approaches. Bringing these diverse approaches together allows for a more comprehensive understanding of MEAMs than any one approach alone; the combination of these methods allows us to examine the MEAM experience at the levels of behavior, situation, and physiology. The symposium will also demonstrate how these different approaches can complement one another in many ways—for instance, by integrating accounts from neuroimaging studies of MEAMs in healthy individuals with evidence on MEAMs in people with neurological conditions, and comparing the subjective experience of MEAMs in the laboratory to MEAMs in everyday life. The overall goals are to give a thorough account of the state-of-the-art in MEAMs research and to explore the role that different methodologies can play in answering crucial questions on how music can serve as a cue for lifetime memories, the extent to which music may have privileged access to certain aspects of memories over other perceptual cues, and the conditions under which MEAMs can be spared in the presence of brain injury or disease.

Subjects: Memory, Emotion

When: 3:45 PM in KC914 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

F4-1: A comparison of methods for analyzing music-evoked autobiographical memories

Amy Belfi(1), Elena Bai(1), Daniel B Vatterott(1)
1:Department of Psychological Science, Missouri University of Science and Technology

The study of music-evoked autobiographical memories (MEAMs) has grown substantially in recent years. As the field progresses, diverse analysis methods are being used to assess various characteristics of MEAMs. The goal of the present study is to evaluate three methods for analyzing autobiographical memory data (i.e., naturalistic text), to identify whether they can accurately distinguish between MEAMs and image-evoked memories. Participants (N=20) listened to popular music and viewed images of famous persons. After each stimulus, participants were asked whether the cue evoked an autobiographical memory. If so, participants verbally recalled the memory. Memory descriptions were transcribed and analyzed using the following three methods: the Autobiographical Interview (AI; Levine et al., 2002), Linguistic Inquiry and Word Count (LIWC; Pennebaker et al., 2015), and the Evaluative Lexicon (EL; Rocklage et al., 2018). We trained three logistic regression models (one for each analysis method) to differentiate between memories evoked by music and faces. The models trained on LIWC and AI (but not EL) exhibited significantly above chance accuracy. The LIWC analysis revealed that MEAMs contained greater ‘authenticity’ (i.e., were more personal) and auditory perceptual details, while face-evoked memories contained greater visual perceptual details. The AI analysis revealed that MEAMs had a greater proportion of episodic details while the face-evoked memories had a greater proportion of semantic details. The EL, which primarily focuses on the affective valence of a text, failed to significantly predict whether memories were evoked by music or faces, suggesting similar emotional content across memory types. This demonstrates that such analysis schemes provide unique and complementary information about cued autobiographical memories, and that MEAMs are distinct from memories evoked by visual cues.
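
The classification step can be sketched generically as follows, with a hypothetical feature matrix standing in for the LIWC/AI/EL scores; this is an illustration, not the authors’ code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per memory description, columns are
# scores from one text-analysis scheme (e.g., LIWC categories such as
# 'authentic', 'hear', 'see'); labels: 1 = music-evoked, 0 = face-evoked.
rng = np.random.default_rng(0)
n_memories = 120
features = rng.normal(size=(n_memories, 10))
labels = rng.integers(0, 2, size=n_memories)
features[labels == 1, 0] += 1.0            # inject one separable dimension

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, features, labels, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.2f} (chance = 0.50)")
```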

Subjects: Memory, Emotion

When: 3:45 PM in KC914 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

F4-2: Music-evoked autobiographical memories in everyday life

Kelly Jakubowski(1), Anita Ghosh(1), Amy Belfi(2)
1:Department of Music, Durham University, UK, 2:Department of Psychological Science, Missouri University of Science and Technology

Previous research on music-evoked autobiographical memories (MEAMs) has focused on cueing MEAMs in laboratory experiments. Such approaches lack ecological validity, and the use of experimenter-selected (typically pop) music may limit the range of MEAMs that are cued. We present two studies designed to capture natural experiences of MEAMs—an online survey and a diary study—to test the feasibility of these methods and compare our results to previous lab-based approaches. In the survey, a representative sample of UK participants (N=800, quota sampled on age, gender, and income) reported the most recent MEAM they could recall, for comparison to the most recent autobiographical memory they could recall as cued by watching TV. In the diary study, participants (N=31) recorded details of MEAMs as they occurred in daily life for 7 days. In both studies, we captured details about the music (e.g., title, familiarity, listening setting) and the autobiographical memories (e.g., content, age of memory, vividness, emotions). In both studies, MEAMs were cued by a range of musical genres (pop, rock, classical, soundtracks, etc.), most typically featured friends or significant others, elicited a predominance of positive or mixed emotions (happiness, nostalgia), and were rated as more involuntarily than deliberately recalled. In comparison to TV-cued memories, MEAMs were rated as more vivid, of greater life significance, and accompanied by greater reliving and stronger emotional responses (in particular, positive emotions such as happiness and love). This was despite the fact that MEAMs and TV-cued memories did not differ significantly in terms of self-reported recency of recall or age of the memory. These studies represent new methodological approaches for capturing naturally occurring MEAMs. Several results have confirmed previous findings, such as the predominance of positive emotions and uniquely vivid nature of MEAMs, suggesting that lab experiments do capture similar aspects to the everyday MEAM experience.

Subjects: Memory, Emotion

When: 4:00 PM in KC914 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

F4-3: Locating music-evoked autobiographical memories in the brain

Petr Janata(1)
1:University of California, Davis

Music, by virtue of its capacity to evoke vivid autobiographical remembering experiences, affords an exciting avenue by which to probe the neural processes underlying such experiences and potentially even understand how autobiographical memory content is organized in the brain. In this talk I will first review strategies for neuroimaging experiments that examine average music-evoked autobiographical remembering processes in population samples as a function of the specific task that a participant is asked to perform. I will then describe a case study illustrating an approach to studying the distribution of autobiographical memories in the brains of individual participants that aims to capture the structure of autobiographical knowledge at the level of the individual: the person’s “neurobiography.” A female participant provided autobiographical memory reports for over 200 different pieces of music drawn from 8 distinct periods of her life. She then underwent over 6 hours of functional magnetic resonance imaging (fMRI) scanning while listening to 30-second excerpts from over 150 of these pieces. The fMRI portion of the experiment was replicated exactly one year later, yielding replication data for 107 music excerpts. Replicable song-specific brain activation patterns were identified using general linear modeling. In almost all cases, the song-specific responses encompassed multiple brain areas of both hemispheres, primarily in the superior lateral temporal lobe and throughout the frontal lobe. While specific locations (voxels) in the auditory and premotor cortices exhibited replicable per-song responses for multiple songs, responses in the lateral prefrontal cortex were characterized by slightly different spatial distributions for different songs. Such response distributions are to be expected within a brain area that is generally involved in semantic retrieval, but where specific semantic information is spatially distributed. Music may serve as an exquisitely sensitive probe of the cerebral organization of a person’s autobiographical knowledge.

Subjects: Memory, Emotion

When: 4:15 PM in KC914 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

F4-4: Music-evoked autobiographical memories in people with neurological conditions

Amee Baird(1)
1:Macquarie University

People with brain injury and disease such as dementia often have impaired memory functions, including difficulty retrieving autobiographical or personal memories. Observations of preserved music-evoked autobiographical memories (MEAMs) in these populations have led to the suggestion that music may be more effective than other stimuli at evoking autobiographical memories, but few studies have compared music with other familiar stimuli. To explore this issue, a series of three studies characterised MEAMs compared with verbal and visual evoked autobiographical memories in people with (1) severe acquired brain injury (ABI, n=5), (2) Alzheimer’s Dementia (AD, n=10) and (3) Behavioural variant Frontotemporal Dementia (Bv-FTD, n=6). In an attempt to match and date stimuli across different time periods, famous songs and photos of famous events were used, in addition to verbal cues of the Autobiographical Memory Interview (AMI), a standard measure of autobiographical memory. In a case series of people with severe ABI, music was more efficient at evoking memories than the verbal cues of the AMI in the majority of cases (3/4). One ABI case with impaired pitch perception had no MEAMs. In people with AD or Bv-FTD, there was no difference in the frequency of memories evoked by music and photo stimuli. In those with AD, however, the frequency of MEAMs was in keeping with healthy elderly people, while the frequency of photo-evoked memories (PEAMs) was significantly reduced. In contrast, people with Bv-FTD showed reduced frequency and specificity of both MEAMs and PEAMs compared with healthy people and those with AD, consistent with the known reduced autobiographical memory function in people with this type of dementia, and the integral role of medial frontal regions in the retrieval of MEAMs. Overall, these findings suggest that the mnemonic power of music is relatively resistant to some, but not all, types of neurological disorders.

Subjects: Memory, Emotion

When: 4:30 PM in KC914 on Mon Aug 5 – Day 1
Return to Day Schedule.
Return to Full Schedule.

Session G1, Beat & Meter 3: Time

9:30-10:15 AM in KC802

G1-1: Motown, Disco, and Drumming: The Effects of Beat Salience and Song Memory on Tempo Perception

Justin London(1)
1:Carleton College

Our tempo memory is highly accurate (Levitin & Cook, 1996; Jakubowski et al., 2015), as are absolute judgments of tempo in the range of 80-140 BPM (Madison & Paulin, 2010; Gratton et al., 2016). London et al. (2016) found a conflict between remembered tempo and absolute tempo judgment, which they called the “tempo anchoring effect” (TAE). Three experiments further probed the TAE. Exp1 (a replication of London et al., 2016) used pairs of Motown songs at core tempos of 105, 115, and 125 BPM, which were then time-stretched to produce stimuli spanning the 100-130 BPM range in 5 BPM increments; time-stretching alters tempo without changing pitch. Exp2 used the same stimulus design but replaced the Motown stimuli with six disco songs, and Exp3 used looped drum patterns. Exps 2 and 3 systematically increased beat salience while reducing other cues (melody, harmony). Exp2 and Exp3 also included blocks of unaltered stimuli. Stimuli were presented in a different random order for each participant, and the task was to rate each stimulus on a 7-point scale (1=slowest; 7=fastest). The TAE was replicated in Exp1, reduced in Exp2, and absent in Exp3. In Exp2 and Exp3, tempo judgments for unaltered stimuli corresponded to their BPM rates. Thus the TAE is negatively correlated with beat strength/clarity and with the presence of melodic and harmonic cues. While BPM is usually regarded as the dominant cue for musical tempo (but see Drake, Gros, & Penel, 1999; Boltz, 2011; London, 2011; Elowsson & Friberg, 2013), the TAE shows that other musical parameters play into our judgments of musical tempo. The TAE also depends upon tempo memory for distinct musical performances, showing that real-world tempo judgments involve both remembered and stimulus-driven components.

Subjects: Beat, rhythm, and meter, Memory

When: 9:30 AM in KC802 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

G1-2: Timing is Everything… or is it? Effects of Timing Style and Timing Reference on Drum-Kit Sound in Groove Performance

Guilherme S Câmara*(1), Anne Danielsen(1), Kristian Nymoen(1)
1:University of Oslo

This study tests the hypothesis that, in addition to expressive onset timing, sound parameters such as timbre, intensity, and duration are fundamental to the production and perception of timing styles in groove-based music. 20 professional drummers performed a single “back-beat” pattern on a drum-kit (hi-hat, snare, kick) in three different timing styles: a) Laid-back, b) Pushed, and c) On-beat, relative to an isochronous timing reference (96 BPM) presented with: i) woodblock sounds (metronome), and ii) guitar/bass (instrumental backing track). Onset location and three descriptors – duration, sound-pressure level (SPL), and spectral centroid (SC) – previously shown to affect the perceived timing of events were extracted from the recorded audio of each drum’s individual strokes. Repeated-measures ANOVAs were conducted with Style (Laid-back, On-beat, and Pushed) and Reference (Metronome and Instrumental) as independent variables, and mean stroke Onset, Duration, SC, and SPL as dependent variables. All differences reported are significant at p<0.05. As expected, onset location corresponded to the instructed timing Styles for all instruments. A significant interaction was also found: pairwise comparisons revealed earlier mean onset for the metronome reference in the Laid-back and On-beat pairs, but later in Pushed (kick/hi-hat only). For the sound descriptors, there were main effects of Style: on SPL for snare (Laid-back louder than On-beat) and hi-hat (Pushed louder than On-beat); on duration for snare (Laid-back longer than On-beat); and on SC for kick (Laid-back and Pushed higher than On-beat). There were no main effects of Reference on the sound descriptors and no interaction. The results confirm previous research on the snare drum (Danielsen et al., 2015) and further showed that, in full drum-kit performance, drummers also produced systematic differences in the duration, SPL, and/or SC of strokes played with the different timing styles. This suggests that sound-envelope parameters are important in communicating the intended timing of events in groove-based music, and will be discussed in light of findings from “P-center” (perceptual center) studies (Danielsen et al., 2019; Villing, 2010) and timing-sound interaction studies (Goebl & Parncutt, 2002; Tekman, 2002). The difference in mean onset between the metronome and instrumental backing track can be related to the phenomenon of “negative mean asynchrony” from the tapping literature (Repp, 2005).
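
Rough, generic versions of the three sound descriptors (SPL, spectral centroid, duration) can be computed from a stroke’s audio as sketched below; the study’s actual extraction procedures may differ, and the test signal is synthetic.

```python
import numpy as np

def stroke_descriptors(x, fs, floor_db=-60.0):
    """Crude per-stroke descriptors from a mono audio segment x (one drum hit)."""
    x = np.asarray(x, dtype=float)
    # Sound-pressure-level proxy (dB relative to full scale)
    rms = np.sqrt(np.mean(x ** 2)) + 1e-12
    spl = 20.0 * np.log10(rms)
    # Spectral centroid (Hz): amplitude-weighted mean frequency
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    centroid = np.sum(freqs * mag) / (np.sum(mag) + 1e-12)
    # Duration: time until the waveform decays below a floor relative to its peak
    env_db = 20.0 * np.log10(np.abs(x) / (np.max(np.abs(x)) + 1e-12) + 1e-12)
    above = np.nonzero(env_db > floor_db)[0]
    duration = (above[-1] - above[0]) / fs if above.size else 0.0
    return spl, centroid, duration

# Hypothetical snare-like burst: noise with an exponential decay.
fs = 44100
t = np.arange(0, 0.4, 1.0 / fs)
stroke = np.random.default_rng(0).normal(size=t.size) * np.exp(-t / 0.05)
print(stroke_descriptors(stroke, fs))
```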

Subjects: Musicology, Beat, rhythm, and meter; Music information retrieval; Musical expertise; Performance

When: 9:45 AM in KC802 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

G1-3: Time and Timelessness in 20th-Century Music: An Experimental Study

Jason Noble*(1), Stephen McAdams(1), Tanor Bonin(1)
1:McGill University

Contemporary musical discourse frequently invokes concepts of temporal dilation, contraction, and even suspension in timelessness, but how music can be “timeless” even as it unfolds “in real time” remains obscure. Induction of musical timelessness (the listener’s subjective experience of time is suspended while listening to music) is often conflated with perception of musical timelessness (the listener interprets music as expressing suspended time). Our experiment compares induction/perception of musical timelessness with (1) music’s temporal organization and (2) subjective listening behaviors. 38 participants heard 20 excerpts of 20th-century music featuring unorthodox temporal organization (e.g., extreme event duration, extreme repetition, absence of pulse) while using a joystick to continuously indicate temporal acceleration, deceleration, or normativity. Simultaneously, participants used a trigger to indicate whether the music was “in time” or “timeless.” Following each excerpt, they rated the strengths of their overall senses of time and timelessness during the previous task. Participants were divided into a perception group (evaluating what the music expressed) and an induction group (evaluating what the music made them feel). Additionally, participants completed the “Absorption in Music Scale” (Sandstrom and Russo, 2013) to indicate their propensity for absorption in music. Graphical representations of participants’ joystick and trigger activities indicate coherent relations between participants’ senses of time/timelessness and musical properties. High-absorption participants reported significantly stronger senses of timelessness than low-absorption participants (mean 3.14/5 vs. 2.72/5, p<.001) but statistically equivalent senses of time (3.16 vs. 3.3, p=.164). Perception-group participants reported significantly weaker senses of time (3.04 vs. 3.39, p<.001) and significantly stronger senses of timelessness (3.22 vs. 2.62, p<.001) than induction-group participants. These results suggest that the sense of timelessness in music is related to musical properties, to the perception-induction distinction, and to subjective propensity for absorption in music. These findings will be further explicated through detailed statistical and acoustical analyses, currently underway.

Subjects: Cross-domain effects, Embodied cognition; Music and language; Music theory

When: 10:00 AM in KC802 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

Session G2, Harmony 2

9:30-10:15 AM in KC905/907

G2-1: Harmonicity and Consonance Within an Unconventional Tuning System

Ronald S Friedman(1)
1:University at Albany, SUNY

Recent research has suggested that consonance ratings of isolated chords are strongly correlated with their harmonicity, the extent to which the partials present in their component tones collectively produce a complex sound wave approximating a harmonic series. According to the vocal similarity hypothesis (VSH; Bowling et al., 2017), harmonicity may be associated with the pleasantness of a chord because it is a distinguishing characteristic of human vocalizations. Given the vital communicative role of vocal stimuli, our perceptual systems have evolved an innate, generalized preference for relatively harmonic, and therefore more speechlike, environmental sounds. This suggests that even if individuals were exposed to chords that do not exist within the musical systems to which they have been enculturated, they should be inclined to evaluate such chords more favorably when these are higher in harmonicity. To empirically test this hypothesis, we conducted a study using stimuli generated from a highly unconventional scale, the just tempered Bohlen-Pierce (BP) scale. This non-octave-repeating scale divides a tritave (representing the span of an octave plus a major fifth) into 13 intervals based on odd integer frequency ratios. Participants were randomly presented with and asked to rate the consonance of every possible BP dyad and triad within a tritave. Chords were presented in either a piano or a clarinet timbre. The harmonicities of each chord were computed using algorithms devised by Bowling and his colleagues (2017). Consistent with the VSH, results revealed significant positive correlations between harmonicity and consonance ratings for both unconventionally-tuned dyads and triads, irrespective of timbre. Notably, these correlations were found to be most robust within a range of harmonicity typical of conventional chords. Although the present findings do not conclusively support the VSH, they do lend credence to the proposition and thereby advance the longstanding debate regarding the origins of musical consonance.
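
As an illustration of the harmonicity-consonance correlation (not the Bowling et al., 2017 algorithm used in the study), the sketch below applies a simpler periodicity-based proxy in the spirit of Gill & Purves (2009) to a handful of illustrative odd-integer dyad ratios and correlates it with hypothetical ratings.

```python
from fractions import Fraction
import numpy as np
from scipy.stats import pearsonr

def harmonicity_proxy(ratio: Fraction) -> float:
    """Fraction of harmonics of the implied common fundamental that coincide
    with a harmonic of either tone (a Gill & Purves-style proxy)."""
    a, b = ratio.numerator, ratio.denominator
    return (a + b - 1) / (a * b)

# A few just-tuned dyads built from odd-integer ratios (illustrative only,
# not the full set of Bohlen-Pierce dyads used in the experiment).
dyads = [Fraction(3, 1), Fraction(5, 3), Fraction(7, 3), Fraction(7, 5),
         Fraction(9, 5), Fraction(9, 7), Fraction(25, 9)]
harmonicity = np.array([harmonicity_proxy(r) for r in dyads])

# Hypothetical mean consonance ratings for the same dyads (1-7 scale).
ratings = np.array([6.1, 5.4, 4.6, 4.9, 4.0, 3.8, 2.9])

r, p = pearsonr(harmonicity, ratings)
print(f"harmonicity-consonance correlation: r = {r:.2f}, p = {p:.3f}")
```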

Subjects: Aesthetics / preference, Psychoacoustics

When: 9:30 AM in KC905/907 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

G2-2: Identifying prototypical harmonic progressions across (tertian) styles

David Sears*(1), David Forrest(1)
1:Texas Tech University

An extensive body of research shows that certain harmonies within the tonal system are perceived as more final and serve as better continuations than others (e.g., Bharucha & Krumhansl, 1983), leading some to suggest that listeners with exposure to tonal music might learn and remember the syntactic principles associated with tonal harmony (e.g., Patel, 2008). Nevertheless, much of this research restricts the ‘tonal’ purview to only one period in music history, namely that of the so-called common practice (1600–1910). Thus, few studies have considered whether the syntactic progressions remembered by listeners might differ from one style to another, a claim we call the style specificity hypothesis (SSH). To examine the SSH for tonal harmony, this study identifies characteristic three-chord progressions in three annotated corpora: the Annotated Beethoven Corpus (ABC) (Neuwirth et al., 2018), which consists of all Beethoven string quartets (70 movements); the McGill Billboard Corpus (Burgoyne et al. 2011), which consists of 740 songs selected from the Billboard “Hot 100” (1958–1991); and the Rolling Stone Corpus (de Clercq & Temperley, 2011), which consists of 200 songs from Rolling Stone magazine’s list of the 500 greatest songs of all time. To facilitate comparisons across corpora, we first convert all annotations into Roman numeral and relative-root representations (Quinn, 2010). We then compute statistical measures that rank the progression types in each corpus relative to the other corpora (Damerau, 1993). Together, these methods identify progression types common to all corpora, as well as characteristic types for the classical and pop/rock corpora, suggesting that knowledge of tonal music might reflect a plurality of potentially overlapping tonal systems, each governed by a given repertory or style period, and each depending on the previous experiences of a given individual listener.
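
The core corpus operation (counting overlapping three-chord progression types and ranking them by how characteristic they are of one corpus relative to another) can be sketched as follows with toy annotations; the add-one smoothing ratio below merely stands in for the cited statistical measures.

```python
from collections import Counter

# Hypothetical relative-root annotations for two tiny 'corpora'; the real
# corpora (ABC, Billboard, Rolling Stone) are far larger and richer.
classical = [["I", "IV", "V", "I", "ii", "V", "I"],
             ["I", "vi", "ii", "V", "I"]]
pop_rock = [["I", "bVII", "IV", "I", "bVII", "IV", "I"],
            ["vi", "IV", "I", "V", "vi", "IV", "I", "V"]]

def trigram_counts(corpus):
    """Count overlapping three-chord progression types across all pieces."""
    counts = Counter()
    for piece in corpus:
        counts.update(zip(piece, piece[1:], piece[2:]))
    return counts

cc, pc = trigram_counts(classical), trigram_counts(pop_rock)

# Rank progression types by how characteristic they are of one corpus
# relative to the other.
for trigram in sorted(set(cc) | set(pc)):
    ratio = (cc[trigram] + 1) / (pc[trigram] + 1)
    if ratio >= 3 or ratio <= 1 / 3:
        print("-".join(trigram), f"classical/pop ratio = {ratio:.2f}")
```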

Subjects: Corpus analysis/studies, Harmony and tonality

When: 9:45 AM in KC905/907 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

G2-3: Harmonic Grammar, Chord Frequency, and Database Structure

Christopher W White*(1), Emily Schwitzgebel(2)
1:University of Massachusetts Amherst, 2:Uni

Harmonic function and syntax, especially in Western tonal music, have become an increasing focus in corpus and cognitive research (Jacoby, Tishby, and Tymoczko 2015; White and Quinn 2018). This paper contributes to this conversation with three arguments: 1) that the most frequent chords within a corpus necessarily provide the pillars of a tonal grammar, 2) that the remaining harmonies are defined in relation to those pillars, and 3) that this dynamic challenges us to think of harmonic grammars as a way of organizing different types of data rather than as a language-style syntax. Corpus analysts (e.g., Krumhansl 1990, Huron 2006) have demonstrated that tonic harmonies are most frequent within distributions, while theorists like Zanette (2006) show such events to follow power-law distributions. Using various models and corpora (Hidden Markov and minimum-entropy models; Classical, pop/rock, and early-modern guitar corpora), I show that this recurrent characteristic privileges the two or three most-frequent chords when creating grammatical categories, with the remaining chords and categories then situated in relation to these most-frequent structures. This creates two classes of harmonic function: categories created around frequency versus those created in relation to those most-frequent categories. I connect this distinction to database theory (Chen 1976), and then present the results of an experiment testing the salience of this distinction. Here, trained musicians were presented with a series of diatonic triads under one of two conditions: the triads were either randomly generated based on (unigram) chord frequency or on chord-progression statistics (bigrams). However, the chord symbols had been randomly ciphered into shapes (e.g., tonic triads become circles, supertonics become squares, etc.). The participants then deciphered which shape was associated with which diatonic triad. Participants were significantly successful at identifying the most-frequent tonic and dominant chords regardless of condition, but only identified “predominant” (IV, ii, vi) chords in the contextually (bigram) generated condition.
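
A sketch of the two stimulus-generation conditions (unigram vs. bigram), with made-up probabilities and an arbitrary shape cipher:

```python
import numpy as np

chords = ["I", "ii", "iii", "IV", "V", "vi", "viio"]
shapes = dict(zip(chords, "circle square triangle star hexagon cross ring".split()))

# Hypothetical corpus statistics (each row sums to 1): unigram frequencies and
# bigram transition probabilities heavily favoring tonic and dominant.
unigram = np.array([0.30, 0.10, 0.04, 0.14, 0.26, 0.10, 0.06])
bigram = np.array([
    [0.10, 0.15, 0.05, 0.25, 0.30, 0.10, 0.05],   # from I
    [0.05, 0.05, 0.05, 0.05, 0.60, 0.05, 0.15],   # from ii
    [0.05, 0.10, 0.05, 0.30, 0.10, 0.35, 0.05],   # from iii
    [0.25, 0.10, 0.05, 0.05, 0.40, 0.05, 0.10],   # from IV
    [0.55, 0.05, 0.05, 0.05, 0.05, 0.20, 0.05],   # from V
    [0.10, 0.25, 0.05, 0.30, 0.20, 0.05, 0.05],   # from vi
    [0.70, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05],   # from viio
])

rng = np.random.default_rng(0)

def unigram_sequence(n):
    """Triads drawn independently from overall chord frequencies."""
    return [chords[i] for i in rng.choice(len(chords), size=n, p=unigram)]

def bigram_sequence(n, start="I"):
    """Triads generated from chord-progression (transition) statistics."""
    seq, i = [start], chords.index(start)
    for _ in range(n - 1):
        i = rng.choice(len(chords), p=bigram[i])
        seq.append(chords[i])
    return seq

# Present the same sequences with chord symbols ciphered into shapes.
print([shapes[c] for c in unigram_sequence(8)])
print([shapes[c] for c in bigram_sequence(8)])
```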

Subjects: Music theory, Music and language

When: 10:00 AM in KC905/907 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

Session G3, Neuroscience 1

9:30-10:15 AM in KC909

G3-1: Prevalence of BDNF polymorphism in musicians: Evidence for compensatory motor learning strategies in music?

Tara L Henechowicz*(1), Joyce L Chen(1), Leonardo G Cohen(2), Michael Thaut(1)
1:University of Toronto, 2:NIH/NINDS

PURPOSE. The study compared the prevalence of the Val66Met BDNF SNP polymorphism (rs6265) in musicians in professional training (N=41) to an ethnically matched general population sample from the 1000 Human Genome Project (N=424). The polymorphism has a typical prevalence of 25-30% and is associated with decreased production of brain-derived neurotrophic factor, a protein critical for synaptic plasticity (Egan et al., 2003). Thus, polymorphism carriers show deficits in motor learning and corticospinal system plasticity (Joundi et al., 2012; Kleim et al., 2006). One might predict that musicians have a reduced prevalence compared to the general population due to the high motor-skill demands of music. METHODOLOGY. DNA was extracted from saliva samples and genotyped for the SNP rs6265 (BDNF; Val66Met). RESULTS. Genotypic and allelic frequencies were not significantly different between groups. Genotypic Frequency: G/G 62.74% Controls vs 58.54% Musicians; A/G 33.25% Controls vs 39.02% Musicians; A/A 4.01% Controls vs 2.44% Musicians (p=0.76). Allelic Frequency: G Allele 79.36% Controls vs 78.05% Musicians; A Allele 20.64% Controls vs 21.95% Musicians (p=0.90). There were no significant age differences among musicians. However, Met carriers had 3.3 more years of primary instrument training on average (p<0.05). CONCLUSION. Presence of the polymorphism did not bias against high-end motor skill learning in music. Characteristics of music-motor learning may compensate for genotype predisposition. Greater primary instrument training in Met carriers may represent a possible compensatory difference. IMPLICATIONS. Since the polymorphism is associated with decreased rates of stroke recovery (Kim et al., 2016), these data may have relevance for clinical translations of music-based training to stroke rehabilitation. Future research directions include investigating the effects of music training and music-based therapies on BDNF in healthy musicians, non-musicians, and clinical populations. Although few investigations (N=3) have examined the effect of music on BDNF in humans, rat and mouse studies (N=11) show a possible effect of music on BDNF, primarily in the hippocampus.

Subjects: Neuroscientific approach, Music and movement; Music therapy

When: 9:30 AM in KC909 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

G3-2: Enhanced subcortical responses of musicians to sounds presented on metrically strong beats

Kyung Myun Lee(1)
1:Korea Advanced Institute of Science and Technology

Dynamic attending theory suggests that metrically guided attention facilitates the auditory perception of sounds presented on strong beats. Whereas enhanced cortical auditory processing at times of high metric strength was evidenced by heightened N1 and P2 peaks, our previous experiment showed that the subcortical responses of 15 non-musicians were not different between metrically strong and weak positions. To examine how music training changes the effect of the metrical hierarchy on the early auditory processing of sounds, this study measured the auditory brainstem responses (ABRs) (Skoe & Kraus, 2010) of 15 musicians to the four different beats of the quadruple meter. In order to prime the quadruple meter, a sinusoidal four-tone sequence composed of A7 (3520Hz), A6 (1760Hz), A6 (1760Hz), and A6 (1760Hz) (500ms IOI) was repeatedly played, while a short speech sound, /da/, was simultaneously presented every 500ms. Musicians showed significantly faster onset latencies for the first beat and larger onset-peak amplitudes for the third beat. However, the consistency of brainstem responses (Tierney and Kraus, 2013) did not differ between musicians and non-musicians. Both groups showed more consistent and less variable brainstem responses to /da/ presented on the first beat. This result indicates that early auditory processing is influenced by the metrical hierarchy of sounds and that music training enhances this metrical modulation at the subcortical level. Skoe, E., & Kraus, N. (2010). Auditory brainstem response to complex sounds: a tutorial. Ear and Hearing, 31(3), 302. Tierney, A., & Kraus, N. (2013). The ability to move to a beat is linked to the consistency of neural responses to sound. Journal of Neuroscience, 33(38), 14981–14988.

Subjects: Neuroscientific approach, Beat, rhythm, and meter

When: 9:45 AM in KC909 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

G3-3: Neural time-frequency characteristics of auditory and visual rhythm entrainment

Daniel C Comstock*(1), Ramesh Balasubramaniam(1)
1:University of California, Merced

Human ability to entrain to rhythms is dependent on the sensory modality of the rhythm, as well as the context. This may be explained by research suggesting the existence of modality and context-specific timing systems in the brain. Using time-frequency measures, we can attempt to tease apart the specificity or overlapping nature of these timing systems. We report findings showing time-frequency characteristics of rhythm entrainment to auditory and visual rhythms during tapping and passive listening/viewing using EEG in human participants. We found entrainment specific to visual rhythms arising from posterior regions of the brain in the beta band that predicts visual rhythm onset. We also found theta band oscillations stemming from the motor cortex for both auditory and visual rhythms. When comparing tapping to non-tapping trials we found specific shifts in the frequencies of entrainment only for auditory rhythms. Taken together, these findings suggest separate but overlapping timing systems for auditory and visual rhythm entrainment, with more context-dependent specialized systems for auditory rhythms. This specialization may help explain the dominance of the auditory system over the visual system when it comes to rhythm perception and sensorimotor synchronization.

Subjects: Neuroscientific approach, Beat, rhythm, and meter

When: 10:00 AM in KC909 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

Session G4, Effects of Music Training

9:30-10:15 AM in KC914

G4-1: Auditory processing abilities in formally trained and self-taught musicians

Benjamin Zendel*(1), Emily Alexander(1)
1:Memorial University of Newfoundland

Musicians have enhanced auditory processing abilities compared to non-musicians. These benefits have been observed using a variety of auditory processing tasks that range from simple pitch change detection to the ability to understand speech in loud background noise. Often behavioral enhancements are paralleled by modulations in brain structure or function. While much of this work has been cross-sectional, the few longitudinal studies suggest that these benefits are at least partially due to music-training-related neuroplasticity. In most of these studies, musicians are defined by having trained formally. One important question is whether formal music training matters. To investigate this possibility, three groups of participants were recruited: Formally-trained Musicians, who received training through a conservatory or by private lessons; Self-taught Musicians, who learned to play music through informal methods such as books, videos, or by ear; and Nonmusicians, who had no or minimal formal or informal music training. Auditory processing abilities were assessed across four tasks: 1. the ability to automatically detect small pitch changes, measured by the mismatch negativity (MMN); 2. the ability to automatically detect an out-of-key note, measured by the early right anterior negativity (ERAN); 3. the ability to consciously become aware of an out-of-key note, as measured by performance accuracy and the P600; 4. the ability to understand speech-in-noise, using the QuickSIN test. Across all tasks, Formally-trained Musicians performed better than Nonmusicians, replicating previous findings. Self-taught Musicians' ability to understand speech-in-noise was comparable to that of the Formally-trained Musicians. At the same time, the MMN evoked by a small pitch deviant in Self-taught Musicians was similar to that of Nonmusicians. For the tonal judgment task, Self-taught Musicians performed better than Nonmusicians, but not as well as the Formally-trained Musicians. These results suggest that training format impacts the auditory processing advantages observed in musicians.

Subjects: Musical expertise, Neuroscientific approach

When: 9:30 AM in KC914 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

G4-2: Musical training and decision making ability: A resting-state amplitude of low frequency fluctuations (ALFF) study

Jiancheng Hou*(1), Qinghua He(2), Chuansheng Chen(3), Qi Dong(4), Vivek Prabhakaran(5)
1:University of Wisconsin-Madison, 2:Faculty of Psychology, Southwest University, 3:Department of Psychology and Social Behavior, University of California, 4:State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, 5:School of Medicine and Public Health, University of Wisconsin-Madison

Background: The Iowa Gambling Task (IGT) has been widely used to assess decision-making ability1,2,3. Our previous behavioral study showed that individuals with musical training performed better on the IGT than those without musical training4. The current study aimed to examine the neural correlates of musical training and decision making with neuroimaging techniques. Method: Two groups with and without musical training (n=56 for each group) completed the IGT (Table 1) and an MRI scan. The fMRI data were collected on a 3T Siemens scanner with the following parameters: TR/TE/θ=2000ms/25ms/90°, FOV=192×192mm, matrix=64×64, slice thickness=3mm. Data Processing and Analysis of Brain Imaging (DPABI) was used for data preprocessing5, and the amplitude of low-frequency fluctuation (ALFF) was analyzed with RESTplus6. The correlation between ALFF and IGT score, and the ALFF difference between the two groups, were computed using RESTplus. Multiple-comparison correction was p<.05, 1000 simulations, cluster size>90 (2430mm3). Results: The musical training group had significantly higher scores on the first 40 trials (IGT1, decisions under ambiguity) and the last 60 trials (IGT2, decisions under risk) than the group without musical training (Table 1). ALFF results were as follows: (1) in the training group, correlations between IGT1 and the temporal lobe, and between IGT2 and temporal/frontal/limbic/cerebellar regions, were significant; (2) in the no-training group, correlations between IGT1 and frontal/cerebellar regions, and between IGT2 and the limbic lobe/sub-lobar regions, were significant (Table 2 and Figure 1); (3) ALFF differences between the two groups were located in frontal/temporal/parietal/cerebellar regions (Table 3 and Figure 2). Conclusion: For the musical training group, semantic function is involved in decisions under ambiguity, while visual perception/emotion/memory/reward/motor control are involved in decisions under risk. For the no-training group, emotion/memory/reward/motor control are involved under ambiguity, and memory/reward are involved under risk. Differences in attention/sensorimotor function possibly reflect characteristics of IGT performance shaped by musical training. These results suggest disparate cognitive strategies during decision making between individuals with and without musical training.
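
For reference, ALFF is conventionally the mean spectral amplitude of a voxel time series within a low-frequency band (roughly 0.01–0.08 Hz; Zang et al., 2007). A minimal sketch of that computation on a synthetic voxel, not the RESTplus pipeline itself:

```python
import numpy as np

def alff(timeseries, tr, band=(0.01, 0.08)):
    """Amplitude of low-frequency fluctuation for one voxel time series.

    timeseries: 1-D array of BOLD samples; tr: repetition time in seconds.
    Returns the mean spectral amplitude within the low-frequency band.
    """
    x = np.asarray(timeseries, dtype=float)
    x = x - x.mean()
    amp = np.abs(np.fft.rfft(x)) / x.size
    freqs = np.fft.rfftfreq(x.size, d=tr)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return amp[in_band].mean()

# Hypothetical voxel: 240 volumes at TR = 2 s with a slow 0.03 Hz fluctuation.
tr, n = 2.0, 240
t = np.arange(n) * tr
voxel = 1.5 * np.sin(2 * np.pi * 0.03 * t) + np.random.default_rng(0).normal(0, 1, n)
print(alff(voxel, tr))
```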

Subjects: Neuroscientific approach, Music training/learning

When: 9:45 AM in KC914 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

G4-3: Musical Training and Emotion: Does Experience Affect Perception?

Aimee E Battcock*(1), Mike Schutz(1)
1:McMaster University

Previous work exploring differences between musicians and non-musicians indicates conflicting evidence as to whether musical training provides perceptual advantages for emotions conveyed in music (Bigand, Vieillard, Madurell, Marozeau, & Dacquet, 2005; Castro & Lima, 2014). These experiments often focus on emotion recognition accuracy or grouping/categorization tasks, or utilize experimentally manipulated stimuli to investigate differences in perceived emotion. In our study, we explore how individuals with musical training (>7 years of formal lessons) perceive emotion using three cues (attack rate, mode, and pitch height). We examine the influence of these cues in two experiments comparing 1) emotional judgements of musical excerpts cut to be eight musical measures in length (typically representative of a musical phrase) and 2) musical excerpts cut to end in the same mode as they start. For each experiment, thirty participants rated perceived emotion in 48 excerpts of Bach’s Well-Tempered Clavier (WTC). Furthermore, we plan to compare results with previous work on untrained participants (Battcock & Schutz, under review) to examine whether individuals with formal musical training utilize cues differently than untrained ones. We employed multiple linear regression and commonality analyses to assess the contributions of the selected cues and determine which ones predict valence and arousal ratings. Preliminary results indicate that pitch height is not a significant predictor of valence ratings for musically trained participants, in contrast to our findings with untrained participants. In addition, commonality analyses show that mode is more predictive of valence ratings for musically trained participants than for untrained participants. This suggests that mode is a stronger cue for participants with formal musical training, and aligns with developmental work demonstrating that sensitivity to mode is more dependent on learning/experience than timing cues. Our results thus far imply that individuals with musical training use cues differently to decode emotional information compared to untrained participants. We will discuss this further and elaborate on the implications of musical expertise for perceived emotion.
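
The unique-contribution part of a commonality analysis with three cues can be sketched as follows on synthetic data (all-subsets R², then unique effect of a cue = full R² minus the R² of the other two predictors); the full commonality decomposition additionally partitions the shared components.

```python
import numpy as np
from itertools import combinations

def r_squared(X, y):
    """R^2 of an OLS fit of y on the columns of X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

# Hypothetical per-excerpt data: three cues and mean valence ratings.
rng = np.random.default_rng(0)
n = 48
attack_rate = rng.normal(size=n)
mode = rng.integers(0, 2, size=n).astype(float)      # 0 = minor, 1 = major
pitch_height = rng.normal(size=n)
valence = 0.3 * attack_rate + 0.8 * mode + 0.05 * pitch_height + rng.normal(0, 0.5, n)

cues = {"attack rate": attack_rate, "mode": mode, "pitch height": pitch_height}
names = list(cues)
R2 = {subset: r_squared(np.column_stack([cues[c] for c in subset]), valence)
      for k in (1, 2, 3) for subset in combinations(names, k)}

full = R2[tuple(names)]
for name in names:
    others = tuple(c for c in names if c != name)
    print(f"unique contribution of {name}: {full - R2[others]:.3f}")
print(f"total R^2: {full:.3f}")
```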

Subjects: Emotion, Perception

When: 10:00 AM in KC914 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

Session H1, Beat & Meter 4: Processing

10:15-11:00 AM in KC802

H1-1: A neurocomputational model of beat-based temporal processing

Jonathan J Cannon*(1), Ani Patel(2)
1:Meridian Academy, 2:Tufts University

Beat processing is a fundamental aspect of music cognition involving precise, periodic temporal predictions. Humans can quickly entrain their movements to a beat and maintain this entrainment in the face of complex rhythms, omitted beats, and tempo changes. The basal ganglia and motor planning regions (including supplementary motor area (SMA)), which play a critical role in movement initiation, have been shown to be involved in beat-based processing. However, it is unclear what roles they play and how they interact. Building upon the basal ganglia and motor cortical modeling literature, the authors propose a “two-timer” model of sub-second beat-based temporal processing with two distinct circuits. In the first circuit, SMA (possibly in conjunction with cerebellum) measures absolute time between perceived beats and transmits tempo estimates to putamen by inducing persistent activity in frontal or prefrontal cortex. In the second circuit, tempo signals from putamen set the speed of a relative timekeeper in SMA. When this timekeeper reaches a certain point, a beat is anticipated and/or imagined, which cues a timer reset and disinhibition of motor activity (facilitating synchronized movements) via the basal ganglia’s hyper-direct pathway. The first circuit is responsible for period correction and the initial stages of synchronization, while the second is responsible for phase correction and beat continuation. A tonic dopaminergic signal is modulated by the accuracy of predictions and moderates competition between the two circuits. This model reproduces certain data on period correction (Repp & Keller 2004) and offers a possible explanation for entrainment of beta rhythms by the beat. It also provides a possible mechanism for some timing-related aspects of Parkinson’s disease, including the efficacy of a metronomic pulse in alleviating freezing of gait. This “two-timer model” represents a first step in building biologically-realistic models of beat-based temporal processing, and makes several testable predictions.
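
The authors’ model is neural, but at the algorithmic level its division of labor echoes the familiar two-process account of synchronization, in which phase correction and period correction are applied separately on each beat (e.g., Repp & Keller, 2004). A minimal sketch of that generic scheme, with made-up gain parameters, is given below; it is not the proposed neurocomputational model.

```python
import numpy as np

def two_process_sync(stimulus_onsets, alpha=0.5, beta=0.2, init_period=0.6):
    """Track a beat with separate phase (alpha) and period (beta) correction.

    Returns predicted beat times. A generic error-correction scheme, not a
    neural implementation.
    """
    period = init_period
    last_pred = stimulus_onsets[0]            # assume the first beat is given
    last_error = 0.0
    predictions = [last_pred]
    for onset in stimulus_onsets[1:]:
        pred = last_pred - alpha * last_error + period   # phase-corrected next beat
        predictions.append(pred)
        last_error = pred - onset             # asynchrony of this prediction
        period -= beta * last_error           # period correction (tempo estimate)
        last_pred = pred
    return np.array(predictions)

# A metronome at 600 ms that abruptly speeds up to 500 ms (tempo change).
onsets = np.concatenate([np.arange(0, 6, 0.6), 6 + np.arange(0, 5, 0.5)])
preds = two_process_sync(onsets)
print(np.round(preds - onsets, 3))            # asynchronies shrink back toward zero
```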

Subjects: Beat, rhythm, and meter, Computational approach; Music and movement; Neuroscientific approach

When: 10:15 AM in KC802 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

H1-2: Differential Effects of Internal and External Cues on Gait Kinematics in Parkinson Disease

Elinor C Harrison*(1), Adam P Horin(1), Gammon Earhart(1), Peter Myers(1), Marie McNeely(2), Kerri Rawson(1), Ellen N Sutter(3)
1:Washington University in St. Louis, 2:Unfold Productions, LLC, 3:University of Minnesota

Objective: Internal cueing techniques (e.g., singing or mental singing) may provide a useful alternative to external cueing (e.g., listening to music) by eliminating the need to match an external source. Mental singing, in particular, can improve spatiotemporal features such as gait variability for people with Parkinson disease (PD), but the effects on gait kinematics are largely unknown. Methods: Thirty-five participants (60% male) were tested ‘on’ medication during walking trials in four conditions (UNCUED, MUSIC, SING, MENTAL). Song tempo was adjusted to 110% of each participant’s preferred cadence. Three repeated measures MANOVAs assessed differences between gait characteristics (velocity, cadence, and stride length), gait variabilities (coefficients of variation), and joint ranges of motion (ROM; hip, knee, ankle). A sub-analysis assessed differences between responders (i.e., those who increased gait velocity by > 0.06m/s) and non-responders using separate RM-MANOVAS. α<.05. Results: Between-condition differences in gait characteristics (F(9, 306)=7.328, p<.001) revealed that all cues increased velocity, cadence, and stride length from UNCUED (all p<.029). Gait variability effects (F(9, 306)=2.223, p<.001) revealed improved stride time variability (p=.003) in MENTAL and SING than MUSIC (all p<.013). Spatiotemporal gait changes were reflected in increased ROM (F(3, 102)=11.647, p<.001) at the hip joint during cueing compared to UNCUED (all p<.003). Responders (n=23) increased velocity, cadence, stride length, and hip ROM during all cued conditions as compared to UNCUED (all p<.025). Non-responders worsened stride length variability during MUSIC (p<.01). Conclusion: This study provides evidence that both internal and external cues can induce immediate kinematic improvements in PD gait, but that some people may benefit substantially more than others. Internal cues may hold potential to improve both gait speed and variability, which may contribute to overall stability for people with PD. More work is warranted to determine what factors contribute to likelihood of responding positively to internal cues.

Subjects: Music and movement, Beat, rhythm, and meter

When: 10:30 AM in KC802 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

H1-3: Feeling the Beat: A neural and behavioural investigation into vibrotactile beat perception

Sean A Gilmore*(1), Phuong-Nghi T Pham(1), Frank Russo(1)
1:Ryerson University

Prior research on beat perception has revealed an apparent auditory advantage. This has been assessed behaviorally with sensorimotor synchronization paradigms involving tapping to the beat. While the majority of studies comparing modality effects have focused on visual presentations of rhythms, a small number of recent studies have also considered vibrotactile presentations. In the case of a metronomic rhythm, comparable levels of sensorimotor synchronization have been observed between vibrotactile and auditory rhythms. Another means of assessing beat perception is with magneto/electro-encephalography (M/EEG). The extent to which low-frequency neural oscillations entrain their phase to the frequency of the beat has been used as a neural index of beat perception. No research to date has examined neural oscillations in the context of vibrotactile rhythms. In the current study, neural entrainment and sensorimotor synchronization (SMS) were assessed for rhythms presented in auditory, vibrotactile, and multimodal conditions. The rhythms varied in their complexity: metronomic (i.e., isochronous beats) or simple (i.e., isochronous beats with metrical subdivisions). SMS was assessed using a tapping task and indexed by tapping variability; neural entrainment was assessed using EEG in a passive listening task. Behavioral and neural data were analyzed using multi-level modeling and a priori planned contrasts. Behavioral results revealed that multimodal SMS was superior to auditory SMS and vibrotactile SMS. These modality effects were moderated by rhythmic complexity. For metronomic rhythms, auditory SMS was comparable to vibrotactile SMS, but for simple rhythms, auditory SMS was superior to vibrotactile SMS. EEG results showed that multimodal neural entrainment was marginally better than auditory neural entrainment, which was in turn marginally better than vibrotactile neural entrainment. These results replicate prior behavioral work and provide new electrophysiological evidence for the auditory advantage in beat perception. Moreover, they suggest that multimodal rhythms may enhance temporal acuity in beat-based processing.
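
A common way to index the phase-locking of low-frequency oscillations to the beat is inter-trial phase coherence at the beat frequency; a minimal sketch on synthetic EEG, not necessarily the authors’ pipeline:

```python
import numpy as np

def itc_at_frequency(trials, fs, freq):
    """Inter-trial phase coherence at one frequency.

    trials: array (n_trials, n_samples) of EEG from one channel/condition.
    Returns |mean of unit phase vectors| across trials
    (0 = no phase locking, 1 = perfect phase locking).
    """
    n = trials.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - freq))
    phases = np.angle(np.fft.rfft(trials, axis=1)[:, k])
    return np.abs(np.mean(np.exp(1j * phases)))

# Hypothetical: 40 trials of 4 s EEG at 250 Hz with a weak 2 Hz (beat) component.
fs, beat = 250, 2.0
t = np.arange(0, 4, 1.0 / fs)
rng = np.random.default_rng(0)
trials = 0.3 * np.sin(2 * np.pi * beat * t) + rng.normal(size=(40, t.size))
print(itc_at_frequency(trials, fs, beat))
```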

Subjects: Beat, rhythm, and meter, Audiovisual / crossmodal

When: 10:45 AM in KC802 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

Session H2, Learning

10:15-11:00 AM in KC905/907

H2-1: What is happening in a student’s mind when they perform melodic dictation?

David J Baker(1)
1:Louisiana State University

Teaching melodic dictation (the process of hearing a melody and then notating it) involves instructing students on what and where to direct their attention in order to improve their abilities. As students’ experience increases, they are able to memorize larger chunks of music and can dictate melodies they once found difficult. But what is happening in the student’s mind over the course of aural skills instruction that allows for this growth? This research puts forward a computational, cognitive model of melodic dictation with the goal of positing a falsifiable theory of how students improve at melodic dictation. The model is based on research from both cognitive psychology (Cowan, 2011) and computational musicology (Pearce, 2018) and incorporates relevant theoretical aspects such as working memory (Chenette, 2019; VanHandel et al., 2011) and the structure of the melody itself. The model consists of three main modules: Prior Knowledge, Selective Attention, and Transcription. First, the model is trained on a corpus of melodies using a computational model of auditory cognition (Pearce, 2018) that derives measures of expectancy based on prior listening experience. Second, the melody is “heard” by the computer and the incoming music is chunked based on the information content of the melody. Third, the model searches for a match within Prior Knowledge and, if one is found, the contents of Selective Attention are successfully notated. If not, the model truncates the chunk and recursively repeats the process. The model outputs a difficulty rating of the melody relative to the Prior Knowledge and also makes several testable predictions about how melodies are learned. Presenting a computational model additionally makes every ontological commitment explicit, thus making it completely amenable to criticism. This research directly addresses the recurring call (Butler, 1997; Klonoski, 2006; Karpinski, 2000) to bridge the chasm between research in music cognition and music theory pedagogy.
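
A deliberately simplified caricature of the chunking step: per-note information content from a previously trained expectancy model (e.g., IDyOM; Pearce, 2018) is accumulated until a working-memory capacity limit is exceeded, at which point a new chunk begins. The IC values and capacity below are hypothetical, and this is not the authors’ full model.

```python
def chunk_by_information_content(notes, ic, capacity=8.0):
    """Greedy caricature of information-content-based chunking.

    notes: note labels; ic: per-note information content in bits (surprisal)
    from a previously trained expectancy model. A new chunk is opened whenever
    the running total of IC would exceed the working-memory capacity.
    """
    chunks, current, load = [], [], 0.0
    for note, bits in zip(notes, ic):
        if current and load + bits > capacity:
            chunks.append(current)
            current, load = [], 0.0
        current.append(note)
        load += bits
    if current:
        chunks.append(current)
    return chunks

# Hypothetical melody: a low-IC scalar opening, then a surprising leap.
notes = ["C4", "D4", "E4", "F4", "G4", "E5", "D5", "C5"]
ic = [1.2, 1.0, 0.9, 1.1, 1.3, 4.8, 2.0, 1.5]
print(chunk_by_information_content(notes, ic))
# -> the familiar opening fits in one chunk; the leap forces new, shorter chunks
```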

Subjects: Computational approach, Music education/pedagogy/learning

When: 10:15 AM in KC905/907 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

H2-2: Mediating effect of cognitive load in song learning with visually presented lyrics

Yo-Jung Han(1)
1:University of Maryland

A previous study showed that seeing the lyrics while learning a difficult song was beneficial for non-music majors: they better recalled the pitches and rhythm of the learned song when seeing the lyrics compared to not seeing the lyrics. The study assumed that seeing the lyrics allows the learner to process the verbal information dually (aurally and visually); the dual processing of verbal information might reduce cognitive load in the aural channel, leaving more capacity to process musical information. Thus, the present study aimed to ascertain whether seeing the lyrics while learning a difficult song aurally induces less cognitive load in learners compared to not seeing the lyrics, leading to better recall accuracy of the learned song. Thirty-six non-music majors individually learned two songs through prerecorded aural instruction. For one song they saw the lyrics, and for the other song they did not. The presentation order of instructional conditions and songs was counterbalanced. Participants' recall accuracy was measured for lyrics, pitches, and rhythm. Participants' aural cognitive load was measured through reaction times in a simple auditory monitoring task under a dual-task paradigm; a faster reaction indicates more free cognitive capacity. Results showed that instructional condition affected cognitive load but not recall accuracy: when seeing the lyrics, participants' reaction times were faster compared to when not seeing the lyrics, suggesting that seeing the lyrics induced lower cognitive load. A path analysis revealed a mediating effect of cognitive load for lyrics and rhythm, suggesting that seeing the lyrics indirectly increases recall accuracy of lyrics and rhythm through its positive effect on cognitive load. Given limited instructional time, several strategies should be considered to prevent learners from experiencing cognitive overload while learning a difficult song aurally. Showing the lyrics of the song could be one such strategy, at least for non-music majors.

Subjects: Music education/pedagogy/learning, Music information retrieval

When: 10:30 AM in KC905/907 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

H2-3: Learning and memory for tonal and atonal melodies in exceptional musicians

Michael Weiss*(1), Isabelle Peretz(2)
1:BRAMS, University of Montreal, 2:University of Montreal

Child prodigies can perform complex musical pieces with adultlike competency. Anecdotally, they learn quickly too. The current study asks whether adolescents and adults who showed prodigious talent in childhood continue to learn and remember melodies better than their peers. Moreover, we consider whether their ability to learn quickly is limited to familiar systems of music (i.e., tonal). Data collection is ongoing. The current sample includes 14 former prodigies (5 female, M=23.6±7.7, range=13.7–35.0 years) who had achieved prominence in childhood (e.g., won a major performance, garnered media attention, etc.). The control group includes 20 musicians (7 female, M=26.4±6.8, range=14.1–35.6 years). The groups do not differ in age, IQ, or cumulative hours of deliberate practice. Participants completed a 2-day melody learning task with two tonal and two atonal melodies (28 notes each). On the first day, participants listened to each melody and sang it back, with ten attempts per melody. On the second day, participants were asked to recall the melodies from memory. Following recall, participants continued the learning task (i.e., listen / sing back) with five additional attempts. Trials were scored using an edit distance algorithm that compared the contour of the sung rendition to the contour of the intended melody. Analysis of the learning task compared average performance by group (prodigy, musician), tonality (tonal, atonal), and day (day 1, day 2). A mixed-model ANOVA showed a three-way interaction. For tonal melodies, both groups improved similarly from day 1 to day 2, and prodigies outperformed musicians overall. For atonal melodies, both groups had similar performance on day 1, and performance improved on day 2 for prodigies only. Analysis of the recall task is ongoing. These preliminary results suggest that former prodigies are better learners of both familiar (i.e., tonal) and unfamiliar (i.e., atonal) styles of music.
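The abstract reports contour-based scoring with an edit distance algorithm but does not give the implementation; the following is a minimal sketch under assumed conventions (contour coded as the sign of successive pitch intervals, standard Levenshtein distance), not the authors' scoring code.

def contour(pitches):
    # code a melody as the signs of successive intervals: +1 up, -1 down, 0 repeat
    return [(b > a) - (b < a) for a, b in zip(pitches, pitches[1:])]

def edit_distance(x, y):
    # standard Levenshtein distance via dynamic programming
    d = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i in range(len(x) + 1):
        d[i][0] = i
    for j in range(len(y) + 1):
        d[0][j] = j
    for i in range(1, len(x) + 1):
        for j in range(1, len(y) + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (x[i - 1] != y[j - 1]))
    return d[len(x)][len(y)]

sung = [60, 62, 64, 62, 60]      # MIDI pitches of a sung rendition (toy example)
target = [60, 62, 64, 65, 64]    # intended melody (toy example)
score = edit_distance(contour(sung), contour(target))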

Subjects: Memory, Musical expertise

When: 10:45 AM in KC905/907 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

Session H3, Neuroscience 2

10:15-11:00 AM in KC909

H3-1: The neural representation of pitch – height versus chroma

Tamar I Regev*(1), Israel Nelken(1), Leon Deouell(1)
1:The Hebrew University of Jerusalem

The perceptual organization of pitch is frequently described as helical, with a monotonic dimension of pitch height and a circular dimension of pitch chroma, accounting for the repeating structure of the octave. Although the neural representation of pitch height is widely studied, the way in which pitch chroma representation is manifested in neural activity is currently debated. We tested the automaticity of pitch chroma processing using the MMN, an ERP component indexing automatic detection of deviations from auditory regularity. Musicians were trained to classify pure or complex tones across four octaves, based on chroma: C versus G (21 participants, Experiment 1) or C versus F# (27, Experiment 2). Next, they were passively exposed to MMN protocols designed to test automatic detection of height and chroma deviations. Finally, in an "attend chroma" block, participants had to detect the chroma deviants in a sequence similar to the passive MMN sequence. The chroma deviant tones were accurately detected in the training and attend chroma parts for both pure and complex tones, with slightly better performance for complex tones. However, in the passive blocks, a significant MMN was found only for height deviations and complex tone chroma deviations, but not for pure tone chroma deviations, even for perfect performers in the active tasks. These results indicate that, although height is represented preattentively, chroma is not. Processing the musical dimension of chroma may require higher cognitive processes, such as attention and working memory.

Subjects: Pitch, Neuroscientific approach

When: 10:15 AM in KC909 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

H3-2: Source analysis of the frequency following response to pitch-shifted stimuli with high-density EEG

Karl D Lerud*(1), Ed Large(1)
1:University of Connecticut

Pitch is a perceptual rather than physical phenomenon, important for spoken language use, musical communication, and other aspects of everyday life. The frequency of a pitch has a complex, often non-obvious relationship with the various sounds that evoke it. Particular auditory stimuli can be designed to probe this relationship. It is also possible to measure responses to auditory stimuli from the brain and periphery to ask what the neural correlates of pitch percepts are. Chief among the more widely used techniques is the frequency following response (FFR). The FFR is an electroencephalographic (EEG) response to periodic auditory stimuli, measured from one or more active electrodes on the scalp. A simple and direct question about the EEG correlates of pitch perception is whether the pitch frequency is always present in the spectrum of the FFR, even when the stimulus spectrum does not contain the pitch frequency. This question has been asked a handful of times in the literature with mixed results, and more recent conclusions have been in the negative. Whether or not the FFR contains the pitch frequency itself is debated, but what is agreed upon is that it contains nonlinearities not present in the stimuli, including correlates of the amplitude envelope of the stimulus; however, these nonlinearities remain undercharacterized. Part of the reason that the FFR and its relationship to auditory stimuli are not fully understood is that the FFR is a composite response reflecting multiple neural and peripheral generators, and their contributions to the scalp-recorded FFR vary in ill-understood ways depending on the electrode montage and the stimulus. The FFR is typically assumed to be generated in the auditory brainstem, and there has been evidence both for and against a cortical contribution. Here we used an exacting and collaborative methodology to address these questions. Novel stimuli were designed to tease apart direct biological correlates of pitch and amplitude envelope. FFRs were recorded with high-density EEG nets, in contrast to a typical FFR setup containing only a single active electrode. Additionally, structural MRI scans were obtained for each participant to constrain a source localization algorithm. The results of this localization shed light on the generating mechanisms of the FFR, including both cortical and subcortical sources. This is the first study to use EEG and MRI in this way with respect to the FFR.
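As a minimal, illustrative sketch (not the authors' source-analysis pipeline), one simple way to ask whether energy at the pitch frequency appears in a scalp-recorded FFR is to compare spectral power in a narrow band around the (possibly missing) F0 with total power; the bandwidth and input format here are assumptions.

import numpy as np

def f0_energy_ratio(ffr, sfreq, f0_hz, bw_hz=5.0):
    """Fraction of FFR spectral power within +/- bw_hz of the pitch frequency.
    ffr: 1-D averaged FFR waveform; sfreq: sampling rate in Hz."""
    freqs = np.fft.rfftfreq(ffr.size, d=1.0 / sfreq)
    power = np.abs(np.fft.rfft(ffr)) ** 2
    band = (freqs >= f0_hz - bw_hz) & (freqs <= f0_hz + bw_hz)
    return power[band].sum() / power.sum()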

Subjects: Pitch, Neuroscientific approach

When: 10:30 AM in KC909 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

H3-3: Tracking the building blocks of pitch perception in auditory cortex

Ellie B Abrams(1)
1:New York University

While there is a general consensus that fundamental frequency, spectral content, and musical context contribute to pitch perception, it is currently unclear which aspects of perceived musical pitch are neurally encoded during early auditory processing. To investigate this, we recorded brain responses to two types of tones: i) pure tones of fundamental frequency only (F0); ii) complex tones of five partials (integer multiples of, but not including, F0). Participants listened to musical tone sequences, ranging from 220-624Hz (the notes of the A, C, and Eb major scales), while magneto-encephalography (MEG) was recorded. Although the two tone-types have non-overlapping spectral content, they are perceived as the same pitch, thus creating an orthogonal relationship between the sensory input and perceptual output. Multivariate analyses were used to decode frequency and tone-type from the activity across MEG sensors. High decoding accuracy across time would suggest that these features are in fact encoded in the spatial pattern of neural responses to musical pitch. We found that a classifier trained at ~50 ms after the onset of the tone could accurately decode whether a listener was presented with a pure or complex tone based on the spatial pattern of activity. At 100ms, we could decode F0 from both tone-types, even for complex tones for which F0 was absent. Further, we were able to use a classifier trained on pure tones to accurately predict the frequency of the complex tones, suggesting that the missing fundamental is restored at this latency. From 200-300ms, tone-type decoding accuracy increased, and the F0 spatial pattern no longer generalised from one tone-type to the other. In sum, separable response components seem to track the spectral content of musical tones as well as the present, or restored, fundamental frequency. Overall this suggests that central aspects of musical pitch perception are indeed encoded in early auditory neural responses.
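The abstract describes time-resolved multivariate decoding of tone-type and F0 from the spatial pattern across MEG sensors. The sketch below illustrates one common way to do this with scikit-learn, assuming an epochs array of shape (n_trials, n_sensors, n_times); the classifier choice and variable names are assumptions, not the authors' pipeline.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def decode_over_time(epochs, labels, cv=5):
    """Decode a label (e.g., pure vs. complex tone) from the spatial pattern
    across sensors, independently at each time sample.
    epochs: (n_trials, n_sensors, n_times); labels: (n_trials,)"""
    n_times = epochs.shape[2]
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = np.empty(n_times)
    for t in range(n_times):
        X = epochs[:, :, t]                      # spatial pattern at this latency
        scores[t] = cross_val_score(clf, X, labels, cv=cv).mean()
    return scores                                # decoding accuracy as a function of time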

Subjects: Pitch, Computational approach; Neuroscientific approach; Psychoacoustics

When: 10:45 AM in KC909 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

Session H4, Absolutes

10:15-11:00 AM in KC914

H4-1: Implicit Learning, Cultural Encoding, and the ‘Heightened Tonal Memory’ Model of Absolute Pitch Ability

Suyin Mak*(1), Betsy Marvin(2)
1:Chinese University of Hong Kong, 2:Eastman School of Music

The received definition of absolute pitch (AP) is the ability to identify and reproduce specific pitches without any external reference. While widely accepted, this definition assumes that AP perception is clear-cut and invariable, without acknowledgement that both the fixed-pitch template and the tuning/temperament standards to which the pitch names refer are culturally constructed rather than universal. Our paper reexamines this assumption through qualitative analysis of interview data acquired from Hong Kong. We interviewed 20 AP musicians to collect first-hand accounts of their AP abilities and musical training. All participants scored above 85% on a 48-item pitch identification test (Mean = 97.1% correct). From the full data set, we present select cases that illustrate how AP abilities may be impacted by musical experiences. Our findings reveal that, while most informants demonstrate strong preference for A440 tuning and impaired AP processing when listening to music involving non-440 tuning and unequal temperament (such as traditional Chinese music, Baroque music using historical temperament, or European orchestras tuning to A442-446), a significant number are also able to adjust their AP ability contextually. Two cases are especially noteworthy: one informant whose first exposure to music was via learning the saxophone, a transposing instrument, reported acquiring “AP in the key of E-flat” before transitioning to “AP in standard concert pitch,” while another reported the ability to adjust her AP when playing in ensembles adopting non-440 tuning. With reference to the heightened tonal memory (‘HTM’) model of AP ability posited by Ross, Gore, and Marks (2005), we hypothesize that AP ability may be culturally encoded, and that pitch standards may be implicitly learned, reinforced or fine-tuned in response to musicians’ daily exposure to fixed-pitch instruments, ensemble tunings, and recordings that tacitly establish a standard. Keywords: absolute pitch, implicit learning, cultural encoding, non-440 tuning, unequal temperament

Subjects: Pitch, Cross-cultural comparisons/non-Western music

When: 10:15 AM in KC914 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

H4-2: Robust absolute pitch representations in the general population: Evidence from popular melodies

Stephen C Van Hedger*(1), Shannon Heald(2), Howard Nusbaum(2)
1:Western University, 2:University of Chicago

Background: Many listeners can judge when a familiar music recording has been shifted in pitch, suggesting that good pitch memory is widespread (e.g., Schellenberg & Trehub, 2003). In the present experiments, we test whether this pitch memory generalizes across instrumental timbre, which is a hallmark of genuine absolute pitch (AP). If non-AP listeners can generalize across timbre in making absolute pitch judgments of familiar melodies, this would suggest a more abstract representation of absolute pitch in long-term memory. Method: On each trial, participants judged which of two novel versions of a popular melody (20 in total) was correct. The correct melody was presented in the same key as the exemplar (original) recording, while the incorrect “foil” melody was shifted up or down by either one or two semitones. Experiment 1 used “covers” of the original recordings found on an online video sharing website. Experiment 2 used simplified MIDI piano versions of the familiar melodies. Results: In Experiment 1, participants (n = 28) were correct 61.9% of the time (59.7% of the time when the foil was one semitone removed, 64.3% of the time when the foil was two semitones removed). In Experiment 2, participants (n = 27) were correct 57.4% of the time (56.7% when the foil was one semitone removed, 58.2% of the time when the foil was two semitones removed). All of these accuracy levels were significantly above the chance estimate of 50%. Conclusions: These results demonstrate that listeners without AP have associated familiar melodies with a specific key. In other words, listeners’ absolute pitch judgments of a melody are not inherently tied to an episodic, “echoic” representation (i.e., the original recording), thus suggesting a more general representation of absolute pitch in everyday listeners that parallels what is observed in the phenomenon of genuine AP.

Subjects: Pitch, Memory

When: 10:30 AM in KC914 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

H4-3: Absolute Memory for Loudness

Daniel J Levitin(1)
1:McGill University

Our prior work suggests absolute memory for pitch and tempo may be relatively widespread amongst the general population. We previously asked participants to spontaneously sing the tones of popular songs, and found 40% of subjects produced the correct tone on at least one trial, 12% on both, and 81% came within two semitones on at least one trial (Levitin, 1994). We later found the majority of people tested could sing popular songs from memory with only an 8% difference from the original tempo (Levitin & Cook, 1996). Here, we wondered to what extent, if any, listeners might be able to demonstrate absolute memory for another musical parameter, loudness. Participants (n = 15) listened to a one-minute excerpt of a song of their choice three times per day for nine consecutive weekdays at a randomly assigned listening level between 60 and 85 dB SPL (training phase). On day ten, participants reproduced their loudness level by adjusting a volume dial from 0 dB. We found a high correlation between participants' recalled and trained loudness levels, r = .90 (M = 2.7 dB, SD = 2.7). Musicians and non-musicians could accurately produce the trained loudness level from memory, and reproduced levels were within the just noticeable difference range for loudness. Our results complement previous studies of absolute memory for pitch and tempo, collectively suggesting relatively stable absolute representations of sounds along certain auditory dimensions, and that learning and recall may benefit from training and repeated exposure. These results also provide evidence against theories claiming auditory imagery does not retain loudness information. The general trend of participants' errors, although not significant, suggests they generated an internal representation of loudness level subject to the perceptual rules of external stimuli.

Subjects: Memory, Tempo

When: 10:45 AM in KC914 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

Session I1, Memory

11:30-12:15 PM in KC802

I1-1: Music lessons and verbal memory: Mechanism underlying this association in children and adults

Franziska Degé*(1), Tina Roeske(1), Gudrun Schwarzer(2), Melanie Wald-Fuhrmann(1)
1:Max Planck Institute for Empirical Aesthetics, 2:Justus-Liebig-University Giessen

Positive associations between music lessons and verbal memory have been demonstrated. Two candidate mechanisms for this association are an enhanced articulatory rehearsal mechanism and/or an improved central executive in musically trained participants. We conducted three studies to investigate these hypotheses. Study 1 compared musically trained and untrained children (Study 2: adults) in verbal memory and reading fluency, to measure the effectiveness of articulatory rehearsal. Study 3 investigated the role of the central executive in associations between music lessons and verbal memory in adults. In Study 1, we tested the verbal memory of 32 children (age 10-12; 16 musicians) with a list-learning paradigm under two conditions, normal vs. articulatory suppression (speaking a word after each stimulus), and measured reading fluency. Musically trained children had higher reading fluency (p = .01) and remembered significantly more words in the normal condition (p = .043). However, this difference disappeared in the articulatory suppression condition (p = .37). In Study 2, we tested 30 students (age 18-35; 15 musicians) with the same study design and obtained similar results: the adult musicians outperformed the non-musicians with respect to reading fluency (p = .04) and verbal memory in the normal condition (p = .036), whereas no difference was found in the suppression condition (p = .42). In Study 3, we assessed the verbal memory of 30 students (age 18-31; 15 musicians) with a list-learning paradigm under two conditions: normal vs. second task (tapping). We found that musically trained adults had significantly better verbal memory under the normal condition (p = .03), and that this advantage disappeared in the second-task condition (p = .24). The results confirmed the hypotheses that the articulatory rehearsal mechanism (better reading fluency and no differences in the suppression condition) as well as the central executive (no differences in the second-task condition) contribute to better verbal memory in musically trained children and adults.

Subjects: Memory, Music and development

When: 11:30 AM in KC802 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

I1-2: From Melody to Memory: Contribution of Surface Features to Nonadjacent Key Relationships

Joanna Spyra*(1), Matthew H Woolhouse(1)
1:McMaster University

Temporally nonadjacent key relationships are ubiquitous in music: a passage, beginning in the tonic, may temporarily modulate before returning to the home key. However, the degree to which these structural relationships are perceived is uncertain. Previous experiments, using homophonic chord progressions (Woolhouse et al., 2016) and/or stimuli relatively devoid of surface features (Farbood, 2016), indicated that memory for a key can remain active for up to 20 s after modulation. Moreover, Spyra et al. (2018) showed that melodic and rhythmic complexity enhances key-memory preservation within nonadjacent musical contexts. The current study sought to expand upon this research by presenting stimuli in which musically rich progressions (i.e., containing melody and rhythmic activity) were systematically separated from each other by intervening keys of varying duration. The experiment tested the effects of these manipulations on global key perception through harmonic closure. Stimuli consisted of three parts: X1 (key-establishing sequence; tonic), Y (second key; modulation), and X2 (probe cadential sequence in the key of X1). Surface features were operationalized as the addition of melodic figuration and rhythmic activity to standard Western tonal chord progressions. Y was modified to last between 6 and 36 seconds (8 to 48 chords), well beyond the 20 s time limit of previous studies. Participants were asked to rate, on a 7-point scale, the goodness of completion of X2. The main effect of intervening key duration was significant (F(5,92) = 38, p < 0.01); as the duration of the intervening key section increased, the average rating of goodness of completion decreased. This provides further evidence that surface musical features strengthen the perception of temporally nonadjacent key relationships. These results and possible mechanisms by which surface musical features affect nonadjacent key relationships will be presented at the conference. Farbood, M. (2016). Memory of a tonal center after modulation. Music Perception, 34(1), 71-93. Spyra, J., & Woolhouse, M. H. (2018). Effect of melody and rhythm on the perception of nonadjacent harmonic relationships. In Parncutt, R., & Sattmann, S. (Eds.), Proceedings of ICMPC15/ESCOM10, 421-425. Graz, Austria: Centre for Systematic Musicology, University of Graz. Woolhouse, M. H., Cross, I., & Horton, T. (2016). Perception of non-adjacent tonic-key relationships. Psychology of Music, 44(4), 802-815.

Subjects: Memory, Composition and improvisation; Expectation; Harmony and tonality; Music information retrieval; Music

When: 11:45 AM in KC802 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

I1-3: Associations between Music Perception Skills and Episodic Musical Memory

Gladys Heng*(1), Nur Diyanah Abdul Wahab(1), Annabel Chen(1)
1:Nanyang Technological University

Introduction: Despite interest in associations between musical competency and non-musical domains, such as memory, the multi-dimensional nature of musical competency has been largely understudied, given that most studies have drawn distinctions only between musicians and non-musicians. Therefore, the current study aims to investigate the relationship between music perception skills (consisting of melody, tuning, tempo, and accent) and episodic musical memory. Methods: 51 healthy, right-handed young adults (29M 27F, age: M = 22.5 years) with a range of musical experiences completed a music recognition task adapted from the Mnemonic Similarity Task [1]. In the implicit encoding phase, participants were instructed to provide emotion ratings as they listened to eight 45-second instrumental excerpts. In the recognition phase, participants heard various 6-to-8-second clips and were asked to classify them as "Old" (sounds exactly the same as in the implicit encoding phase), "New" (not heard before), or "Similar" ("Old" clips transposed a major third higher, with all other featural contrasts kept constant). Episodic musical memory capacity was represented by two indices: (1) general recognition performance, calculated by subtracting "Old" false alarm rates from hit rates, and (2) lure discrimination, calculated by subtracting "Similar" false alarm rates from hit rates. The lure discrimination index thus reflects the orthogonalization of similar inputs, a more precise mechanism underlying episodic memory. Music perception skills were assessed using the Mini-Profile of Music Perception Skills [2]. Results: Simple linear regression showed that overall music perception skill predicted recognition performance (b = .35, t(49) = 2.62, p < .05) but not lure discrimination (b = .14, t(49) = 0.95, p = .35). Instead, multiple regression revealed that melody and tempo perception skills predicted lure discrimination (b = .38, t(46) = 2.46, p < .05; b = -.39, t(46) = -2.31, p < .05), suggesting that specific musical skills are differentially related to precision memory. Discussion: The current findings highlight distinctions among music perception skills that could be used to enhance musical learning and memory in the context of music education. In particular, training the ability to perceive changes in melody could improve musical memory. References: [1] Stark, S., Stevenson, R., Wu, C., Rutledge, S., & Stark, C. (2015). Stability of age-related deficits in the mnemonic similarity task across task variations. Behavioral Neuroscience, 129(3), 257-268. http://dx.doi.org/10.1037/bne0000055 [2] Law, L., & Zentner, M. (2012). Assessing musical abilities objectively: Construction and validation of the Profile of Music Perception Skills. PLoS ONE, 7(12), e52508. http://dx.doi.org/10.1371/journal.pone.0052508
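As a worked sketch of the two indices, following the Mnemonic Similarity Task convention that false alarms are taken from responses to genuinely new clips (an assumption here, since the abstract gives only the subtractions), with made-up illustrative rates:

def recognition_index(p_old_given_old, p_old_given_new):
    # "Old" hit rate minus the "Old" false-alarm rate to genuinely new clips
    return p_old_given_old - p_old_given_new

def lure_discrimination_index(p_similar_given_lure, p_similar_given_new):
    # correct "Similar" responses to transposed lures minus "Similar" false alarms to new clips
    return p_similar_given_lure - p_similar_given_new

print(recognition_index(0.80, 0.15))          # illustrative rates; prints approximately 0.65
print(lure_discrimination_index(0.55, 0.30))  # illustrative rates; prints approximately 0.25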

Subjects: Memory, Musical expertise

When: 12:00 PM in KC802 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

Session I2, Melody 1: Topography

11:30-12:15 PM in KC905/907

I2-1: Wayfinding in tonal pitch space

Richard Ashley(1)
1:Northwestern University

The cognitive representation of tonal pitch relationships is central to many theories of music. Many studies model such representations through geometric methods or analyses, whether experimentally (Krumhansl 1990 and thereafter) or theoretically (Lerdahl 2001; Tymoczko 2009, 2011). In such approaches, the distance between pitch events (notes, chords, or tonal regions) is measured behaviorally through methods such as probe tones (Krumhansl, op. cit.) or continuous tension ratings (Lerdahl & Krumhansl 2007; Farbood 2012) and theoretically through mathematical models, often complex and multivariate (Farbood, Lerdahl, Tymoczko). Other approaches deal with structural units such as phrases (Vallieres et al. 2009) and cadences (Sears et al. 2018) and their contributions to real-time comprehension of musical form and process. Intrigued by research in spatial cognition, here we investigate tonal cognition as wayfinding (cf. Harrison 1996, Arthur 2018). We use an approach motivated by scalar predicates in linguistics, in which musical events are categorized as "arrived," "almost there," or "on the way," thereby privileging local saliences such as cadences (i.e., local minima in tension calculations and judgments). Our model employs commonly noted features distinguishing cadence formulas: scale-degree tendencies and resolutions, voice contours, and meter. This approach identifies the local minimum tension points found in other approaches as points of arrival, with their immediately preceding events being "almost there" and others "on the way." Using selections from the Well-Tempered Clavier, we are currently testing this model's predictions experimentally using judgments obtained by probing listeners at targeted locations during continuous real-time listening. Early data support listeners' use of salient tonal, voice-leading, and rhythmic cues in "finding their way" toward tonal goals, with proximity to a goal as the primary metric of tonal distance; we are particularly interested in how meter, not an emphasis in other models, influences these judgments.

Subjects: Harmony and tonality, Music theory

When: 11:30 AM in KC905/907 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

I2-2: For tonics, turn left and go high: Spatial mappings of tonal stability

Zohar Eitan*(1), Neta Maimon(1), Dominique Lamy(1)
1:Tel Aviv University

In music discourse, tonal relationships are ubiquitously represented in spatial terms: keys are proximate to or distant from each other; the tonic pulls the leading tone; closural progressions (cadences, from cadere, to fall) descend. While music theorists often suggest that such representations are abstracted from our bodily interactions with physical space, little is known about the ways listeners actually associate tonality and spatial features. Here we investigate, using both explicit and implicit measures, how musicians and nonmusicians associate tonal stability with vertical and horizontal spatial position. Forty participants (20 musicians) took part in each of three experiments. In the explicit test (Exp. 1), participants heard a tonality-establishing context followed by a probe tone, and assigned each probe to a subjectively appropriate location on a two-dimensional grid. In the implicit association test (IAT), auditory stimuli (tonally stable or unstable sequences) and visual stimuli (Exp. 2: high/low circle; Exp. 3: left/right circle) were pre-assigned to the same response keys, in two combinations: hypothetically congruent (e.g., tonally stable/low; unstable/high) and incongruent (e.g., stable/high; unstable/low). Faster and more accurate responses were hypothesized for congruent combinations. Results (Exp. 1, 2; Exp. 3 is in progress) indicate that stable tones are associated with spatially higher (contrary to conventional music discourse) and left-hand positions. The vertical effect applied equally to musicians and nonmusicians (in both explicit and implicit measures); the horizontal effect was stronger for musicians. The findings indicate that while listeners indeed associate tonal stability with physical space, this tonal space differs substantially from those proposed by music theorists. We suggest that separate sources underlie the vertical and horizontal mappings of tonal stability. Horizontal mappings may stem from the orientation of musical keyboards, where bass notes, commonly associated with stability, are positioned to the left. Vertical mappings are mediated through the "higher is better" conceptual metaphor: tonally stable tones are "better," hence higher.

Subjects: Audiovisual / crossmodal, Harmony and tonality

When: 11:45 AM in KC905/907 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

I2-3: What tone-scramble experiments reveal

Charles Chubb*(1), Tyler Dean(1), Solena Mednicoff(1), Joselyn Ho(1), Sebastian C Waz(1), Christopher Douthitt(2), Kyle Comishen(3), Scott A Adler(3)
1:University of California, Irvine, 2:Princeton University, 3:York University

Tone-scrambles are randomly ordered sequences of pure tones. In tone-scramble experiments, listeners strive (with feedback) to classify the tone-scramble presented on each trial according to the notes it contains. Experiments using tone-scrambles have yielded surprising results. First, 70% of listeners perform near chance at classifying tone-scrambles comprising equal proportions of G5, B5, D6 and G6 (i.e., notes of a G-major triad) vs G5, Bb5, D6 and G6 (G-minor triad); the other 30% are near perfect (Chubb et al., 2013). Second, this result holds regardless of the rate (tones per second) at which tone-scrambles are presented (Mednicoff et al., 2018). Third, low-performing listeners in the major-minor tone-scramble task perform poorly in other tone-scramble tasks requiring sensitivity to scale variations relative to a fixed tonic (Dean & Chubb, 2016). Fourth, musical training is not sufficient for skill in tone-scramble tasks. Fifth, six-month-old infants show the same distribution of performance as adults in the major-minor tone-scramble task (Comishen, Chubb & Adler, SMPC, 2019) suggesting that tone-scramble sensitivity arises very early in life. This presentation explores the implications of these findings. 1. We will argue that low-performing listeners in tone-scramble tasks may well be able to extract the scale-defined qualities of actual music as effectively as high-performing listeners. 2. We will present (as a working hypothesis) a model proposing that the qualities that high-performing listeners are able to extract from tone-scrambles are analogous to colors. Under this model, the histogram of intervals relative to whatever note is established as the tonic in the tone-scramble is analogous to the spectrum of a light, and the quality evoked by the tone-scramble is coded by activations produced in some set of neural mechanisms (analogous to retinal cone-classes) that are differentially sensitive to different intervals relative to the tonic.
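The working hypothesis above likens the histogram of intervals relative to the established tonic to the spectrum of a light. A minimal sketch of that representation is shown below; the MIDI encoding and the G-major/G-minor example notes are assumptions for illustration, not the authors' model code.

from collections import Counter

def interval_histogram(midi_notes, tonic_midi):
    """Proportion of tones at each interval (0-11 semitones) above the tonic,
    the 'spectrum'-like representation proposed in the model."""
    intervals = [(n - tonic_midi) % 12 for n in midi_notes]
    counts = Counter(intervals)
    return {iv: counts.get(iv, 0) / len(midi_notes) for iv in range(12)}

# G-major vs. G-minor tone-scrambles built from G5, B5/Bb5, D6, G6 (MIDI 79, 83/82, 86, 91)
major_hist = interval_histogram([79, 83, 86, 91] * 8, tonic_midi=79)
minor_hist = interval_histogram([79, 82, 86, 91] * 8, tonic_midi=79)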

Subjects: Harmony and tonality, Music and development

When: 12:00 PM in KC905/907 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

Session I3, Embodiment

11:30-12:15 PM in KC909

I3-1: Adolescents’ drumming as emotion embodiment

Suvi H Saarikallio*(1), Birgitta Burger(2), Geoff Luck(1), Laura Hakula(1), Linnea Vallius(1)
1:University of Jyväskylä, 2:University of Jyvaskyla

Music is a forum for embodied emotional expression. Musical movement conveys felt and perceived emotions; even young children can transfer musical emotions into body movement. Adolescents typically find music emotionally important, while at the same time, they go through changes in body image, not always feeling comfortable in embodied expressions. This study investigated whether music serves as an emotion embodiment for adolescents. We studied whether there would be differences in body movements during djembe playing between five intended basic emotion expressions. We further investigated whether adolescents’ general emotion regulation tendencies would relate to the observed differences in the embodied expressions. 61 adolescents joined the study, but due to incomplete data the final sample consisted of 47 participants (34 girls, mean age 14 years; S.D. 0.44). Adolescents were instructed to play djembe drums to create improvisatory expressions for joy, sadness, anger, fear, and tenderness. Body movements were motion captured (using Qualisys ProReflex), and features for hand speed, acceleration and jerk as well as the drum area used were extracted. General emotion regulation tendencies for reappraisal and suppression were measured through the Emotion Regulation Questionnaire (ERQ). ANOVAs showed significant differences between emotions in several movement features. Most distinctive differences were based on the activation level dividing emotions into high arousal (joy, anger, fear) and low arousal (sadness, tenderness). Correlations and linear regressions showed that the general emotion regulation style Reappraisal was positively connected to congruent expression of sadness and tenderness, as indicated by lower hand speed, jerk and use of drum area. The results demonstrate that already through a relatively basic musical act of djembe drumming, adolescents can express emotions bodily through musical movement. Findings further suggest that musical emotion expression skills may depend on general emotion skills. Findings are relevant for both education and therapy.
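The kinematic features mentioned (hand speed, acceleration, jerk) can be illustrated generically as successive numerical derivatives of 3-D marker positions. The sketch below assumes a fixed sampling rate and NumPy input, and is not the authors' feature-extraction pipeline.

import numpy as np

def movement_features(positions, sfreq):
    """positions: (n_frames, 3) hand-marker coordinates in metres;
    sfreq: motion-capture sampling rate in Hz.
    Returns mean speed, mean acceleration magnitude, and mean jerk magnitude."""
    dt = 1.0 / sfreq
    velocity = np.gradient(positions, dt, axis=0)    # m/s
    accel = np.gradient(velocity, dt, axis=0)        # m/s^2
    jerk = np.gradient(accel, dt, axis=0)            # m/s^3
    speed = np.linalg.norm(velocity, axis=1)
    return (speed.mean(),
            np.linalg.norm(accel, axis=1).mean(),
            np.linalg.norm(jerk, axis=1).mean())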

Subjects: Emotion, Embodied cognition; Music and development; Music and movement

When: 11:30 AM in KC909 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

I3-2: Performer-Generated Aspects of Musical Structure in Rock and Pop Music

Nicholas Shea*(1), Leo Glowacki(1), Daniel Shanahan(1)
1:Ohio State University

Musical communication has been characterized as an inference of musical structure (e.g., form, harmony) from audible surface-level features (e.g., pitch, rhythm), a process Temperley (2004) calls "communicative pressure." The focus of musical structure is most often harmony, which theoretically governs form and syntax in a given style (Lerdahl & Jackendoff, 1983; Nobile, 2016; White & Quinn, 2018). However, embodied music cognition research and first-hand accounts by songwriters suggest that performers are also key agents in conveying and generating musical structure (De Souza, 2017; Sudnow, 1978). In a style such as rock, where songwriters often have limited musical training and composition frequently occurs at the instrument, an ecologically based theory of affordances (Gibson, 1986) suggests that the physical constraints of instruments (Parncutt et al., 1997) may have more structural determinacy than a given harmonic progression. This study investigates the relationship between the cognitive and physical aspects of music-making in a corpus of fully scored rock songs, as part of a broader effort to do the same for other styles, while also providing a methodology for adding notational specificity to existing popular music corpus studies. Here, a Cartesian distance model tracks topographical distance on the guitar and keyboard to coordinate performative shifts with metric and harmonic transitions. These models are then applied to scores (n = 75) of crowdsourced transcriptions. We expect that (1) performers prefer to maximize affordance when playing chord progressions, irrespective of root motion, and (2) shifts in instrumental affordances correspond with songs' formal boundaries. An ongoing coordinated behavioral study also uses motion capture technology to track how performers generate musical texture in the context of style as they play corpus-derived harmonic progressions, with the assumption that (3) "pop" chord progressions are more easily executed on the piano, while "rock" chord progressions are more easily executed on the guitar.
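As a minimal sketch of what a Cartesian distance between successive chord shapes might look like on a guitar fretboard, each fretted note is treated as a (string, fret) coordinate; the coordinate scheme, the note pairing, and the example shapes are assumptions for illustration, not the authors' model.

import math

def chord_distance(shape_a, shape_b):
    """Mean Euclidean distance between corresponding fretted notes of two chord
    shapes, each given as a list of (string, fret) coordinates (low to high).
    If the shapes differ in length, only the overlapping notes are compared."""
    pairs = zip(shape_a, shape_b)
    return sum(math.dist(a, b) for a, b in pairs) / min(len(shape_a), len(shape_b))

# open G major -> open C major (standard-tuning shapes, open strings omitted)
g_major = [(6, 3), (5, 2), (1, 3)]
c_major = [(5, 3), (4, 2), (2, 1)]
print(chord_distance(g_major, c_major))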

Subjects: Composition and improvisation, Computational approach; Corpus analysis/studies; Embodied cognition; Harmony and tonality; Music and

When: 11:45 AM in KC909 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

I3-3: Motion Patterns of Feet’s Movements and Metrical Structure in Electronic Music’s Dance Style

María Marchiano*(1), Isabel Cecilia Martinez(1)
1:Laboratorio para el Estudio de la Experiencia Musical, Universidad Nacional de La Plata

Motivation. In some dance styles, musical meter is encoded in the dancer's space: some parts of the body pass through the same spatial points on the same beats (Naveda and Leman, 2010), producing a metrically aligned motion pattern (MP). In electronic dance music (EDM), arm, chest, and head movements seem to be highly spontaneous yet still aligned with musical meter (Marchiano and Martínez, 2018). In this study we aim to extend the analysis to EDM foot movements to see whether and how MPs are metrically aligned. Methodology. Stimulus: audiovisual recording of an EDM party in La Plata City, Argentina. Analysis: microgenetic observational analysis of 27 minutes of the foot movements of 31 people, aiming to describe motion regularities and modes of synchronization with the metrical levels of the music. Results. All subjects' motions showed only two looped MPs, both defined by entrainment to the beat at the footstep level: (i) a stationary 2 (1-1) beat cycle (strong-weak, right-left foot alternation); and (ii) a 4 (2-2) beat cycle (strong/weak right, then strong/weak left foot alternation) with foot displacement on the horizontal axis. Implications. EDM dancers embody the metrical structure of the music through the spatiotemporal location of their feet. The presence of MPs in a dance style that is not formally taught but nevertheless develops at parties attests to the social, non-verbal instantiation of musical features through embodied alignment with music. References. Marchiano, M. and Martínez, I. C. (2018). Expressive alignment with timbre. Proceedings of ICMPC15/ESCOM10 (272-278). Graz, Sydney, Montreal and La Plata. Naveda, L. and Leman, M. (2010). The spatiotemporal representation of dance and music gestures using Topological Gesture Analysis (TGA). Music Perception, 28(1), 93-111.

Subjects: Music and movement, Beat, rhythm, and meter; Embodied cognition

When: 12:00 PM in KC909 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

Session I4, Symposium: Open Science Part 1

11:30-12:15 PM in KC914

I4-1: SMPC Symposium on Open Science, Part 1: The Open Science Process

Dominique T Vuvan*(1), David J Baker(2), Haley Kragness(3), Psyche Loui(4), Finn Upham(5), Robert Slevc(6)
1:Skidmore College & International Laboratory for Brain, Music, and Sound Research, 2:Louisiana State University, 3:McMaster University, 4:Northeastern, 5:New York University, 6:University of Maryland

The goal of this symposium is to facilitate discussion of open scientific practices in the SMPC community, and to provide templates and tools to facilitate their adoption. We believe that such an effort will advance our field by fostering a rigorous and collaborative spirit of inquiry in SMPC. The first part of this symposium will present the principles of open science and the practices that arise from the application of these principles. We will explore the full life cycle of an open science project, describing examples of pre-registration, data sharing, and open access publication from real projects conducted by members of SMPC. This will be followed by a 15-minute Q&A discussion on integrating open science practices into the research process. Presentation slides, links, and other materials pertaining to this symposium can be accessed at https://osf.io/9bvue/.

Subjects: Open Science,

When: 11:30 AM in KC914 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

I4-1: Pre-registration

Dominique T Vuvan(1)
1:Skidmore College & International Laboratory for Brain, Music, and Sound Research

Study pre-registration is one of the core practices of open science. In this subsection I will start by defining pre-registration and describing the ecosystem of tools that support its implementation. Next, I will motivate the use of pre-registration on both scientific and pedagogical grounds. Scientifically, pre-registration produces better quality science by encouraging practices such as thoughtful design and analysis planning and the separation of confirmatory and exploratory analyses, as well as acting as a safeguard against avoidable inferential errors (Type I errors in particular). Pedagogically, pre-registration is an important process for training students, in particular because it makes explicit best practices for theory building and study design that trainees usually learn implicitly and unsystematically. Additionally, pre-registration has the effect of increasing student independence, and providing a template that can be iterated upon for later stages of the project (i.e., writing), as well as across multiple projects. Finally, I will delineate the steps through which a PI and a student can collaborate to pre-register a study on the Open Science Framework using an actual in-process experiment in my own lab.

Subjects: Open Science,

When: 11:30 AM in KC914 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

I4-2: Open data

Psyche Loui(1)
1:Northeastern

Digital data are being generated at an exponential rate, with the volume of data doubling every two years (Voytek, 2016). Since conventional publications are constrained by limitations in space, and by the pressures of the review process placed on editors and reviewers alike, the vast majority of raw data being collected are currently not made available via the journal publishers. This creates an opaque system wherein findings can be difficult to replicate and interpret. In contrast, open data has two key benefits, 1) improving reproducibility, and 2) enabling extension and meta-analysis. I will demonstrate two projects from my lab that have used two platforms for data sharing: figshare and neurovault, which have a low barrier for adoption and enable the sharing of behavioral and neuroimaging data respectively. I will also suggest some useful practices for fostering a community that values and supports open data, while circumventing challenges that might arise from institutional review or privacy concerns.

Subjects: Open Science,

When: 11:45 AM in KC914 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

I4-3: Open access and self-archiving publications

Haley Kragness(1)
1:McMaster University

Due to the high cost of accessing research articles, students, clinicians, and educators often do not have access to the results of publicly-funded work. While the existence of full- and hybrid-Open Access journals partially addresses this issue, researchers may be unable or unwilling to publish in such formats. This subsection will demonstrate best practices for researchers to make their research output publicly available and compliant with agency and institution open access policies via legal self-archiving. We will discuss the societal and individual benefits to making research output openly available in this way.

Subjects: Open Science,

When: 12:00 PM in KC914 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

Session J1, Music Training 2: Language

12:15-1:00 PM in KC802

J1-1: Speech Interval Preference: Does Musical Training Impact Linguistic Pitch Perception?

Natalie Miller(1)
1:The University of Texas at Austin

In this study, brass musicians and non-musicians rate English sentences on a five-point “naturalness” scale to determine the interpretation of pitch within a linguistic context. Participants heard nine questions and nine statements, following common English pitch contours for interrogative and declarative sentences. The interrogative sentences feature a Mid-Low-High prosodic curve, while the declarative sentences follow a Mid-High-Low curve (Fox 2000). The stimuli varied in terms of the amount of pitch change between the highest and lowest pitches of the phrase, henceforth labeled the speech interval. Speech intervals were selected based on the musical interval correlates of the linguistic frequencies; the nine stimuli for declaratives featured 2, 4, 6, 8, 10, 12, 14, 16, and 18 semitones, while the nine stimuli for interrogatives featured 4, 6, 8, 10, 12, 14, 16, 18, and 20 semitones. The subject pool consisted of 24 brass musicians with 5 or more years of experience, and 24 non-musicians with little to no musical training. All subjects were provided a specific context for the sentence, in which the sentence would appear without additional inflection (such as sarcasm or surprise). Participants then listened to each stimulus nine times in a randomized order, and were tasked with rating each utterance on a 5-point scale for “naturalness,” with 1 being completely unnatural and 5 being completely natural. Preliminary results indicate that musicians and non-musicians have similar preferences for the “most natural” speech intervals in both the declaratives and interrogatives. The declarative sentences feature a peak in naturalness for all intervals larger than 8 semitones, while the interrogative sentences feature highest natural ratings at 8-14 semitones. Further analysis will investigate common scoring patterns within participants. The lack of significant difference in preference between musicians and non-musicians is noteworthy as it suggests that pitch processing occurs separately for linguistic and musical contexts.
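For reference, the semitone spans described above follow the standard equal-tempered conversion between two frequencies; the function below is a generic formula for that conversion, not code or stimulus values from this study.

import math

def semitones(f_low_hz, f_high_hz):
    # equal-tempered semitone distance between the lowest and highest pitch of a phrase
    return 12 * math.log2(f_high_hz / f_low_hz)

print(round(semitones(120.0, 240.0), 2))   # frequencies an octave apart = 12.0 semitones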

Subjects: Music and language, Cross-domain effects

When: 12:15 PM in KC802 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

J1-2: Finding Common Time: Sensitivity to the Beat in Culturally Familiar and Unfamiliar Music is Related to Speech Segmentation Ability

Jessica E Nave-Blodgett*(1), Joel Snyder(1), Erin Hannon(1)
1:University of Nevada, Las Vegas

For listeners to comprehend speech and music, they must parse a continuous sound stream into meaningful units such as syllables or notes, words or melodies, and sentences or musical phrases, respectively. Meter is a key temporal structure in music (and to some extent, speech) consisting of periodic points in time when a listener hears emphasis and taps or claps along. We asked if the ability to perceive meter in a piece of music is related to the ability to accurately segment natural speech, and how an individual’s cultural and linguistic background influences their speech segmentation and meter perception abilities in familiar and unfamiliar exemplars. English-speaking college students from the U.S. performed several tasks: a meter perception task, a tapping task, and a natural-language speech segmentation task. In the meter perception task, participants listened to culturally familiar (U.S.) and unfamiliar (Turkish) music paired with metronomes that matched or mismatched the meter. Listeners rated the fit of the metronomes to the music. In a separate block, participants tapped to the same music as in the meter perception task, without the aid of a metronome. In the speech segmentation task, participants identified target words embedded in spoken sentences in familiar (English) and unfamiliar (Turkish) languages. Overall, participants performed better for culturally familiar stimuli. They were more sensitive to the beat, were more accurate at tapping to the beat of culturally familiar music, and were faster and more accurate at segmenting words from familiar than unfamiliar speech. Across tasks, participants who were more accurate beat perceivers with culturally familiar and unfamiliar music in the meter perception task were more accurate at the speech segmentation task in familiar and unfamiliar languages. Individuals who were more accurate at tapping to the beat of culturally unfamiliar music were also more accurate in the unfamiliar-language speech segmentation condition.

Subjects: Music and language, Beat, rhythm, and meter

When: 12:30 PM in KC802 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

J1-3: Iconic associations between vowel acoustics and musical patterns

Gertraud Fenk-Oczlon(1)
1:Alpen-Adria-Universität

Recent research demonstrates strong relationships between vowels and music (e.g., Kolinsky et al. 2009; Fenk-Oczlon 2017). Vowels play a decisive role in generating the sound or sonority of syllables, the main vehicles for transporting prosodic information in speech and singing. Vowels show all the core components of music, i.e., timbre, intrinsic pitch, intensity, and duration. Previous studies found non-arbitrary associations between vowel pitch and musical pitch in non-lexical/meaningless syllables (Fenk-Oczlon & Fenk 2009): in songs containing strings of meaningless syllables, the vowels are connected to melodic direction in close correspondence to their intrinsic pitch or the frequency of the second formant F2. This paper focuses on vowel intrinsic duration and its representation in music. It is generally assumed that open (low) vowels like [a ɔ o] have a higher intrinsic duration than close (high) vowels like [i y], and that there is a positive correlation between the first formant F1 and duration (e.g., Peterson & Lehiste 1960). Hypothesis: In songs containing meaningless syllables, syllables with open vowels like [a ɔ o] should be favored for long notes. Method: This assumption was tested on all Alpine yodels (n = 20) in Pommer's collection from 1906. All half notes, which represent the longest relative note values in our sample, and all dotted notes were counted and assigned to their respective syllables. Result: 75% of all half notes (n = 193) as well as 75% of all dotted notes (n = 356) were linked with syllables like [ha hɔ ra rɔ jɔ dɔ] containing open (low) vowels. Discussion: It will be argued that the iconic associations between vowel acoustics (intrinsic pitch, duration) and music, which become apparent in the singing of meaningless syllables, where "the pressures of sense are relaxed to those of sound" (Butler 2015), might strengthen the idea of a musical protolanguage (e.g., Fitch 2006).

Subjects: Music and language, Evolutionary perspectives; Language and speech

When: 12:45 PM in KC802 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

Session J2, Methodology

12:15-1:00 PM in KC905/907

J2-1: Embodying Expectation: An Expansion of Predictive Coding Approaches to Musical Agency

Bree K Guerra(1)
1:University of Texas at Austin

Predictive coding theory frames agency, understood as a sense of personal ownership of one's actions, as a match between anticipated and encountered sensory feedback during movement. Leman (2016) adapts this approach to music by proposing that the experience of musical agency results from a listener's predictive synchronization between (actual or virtual) movement and sound. This perspective, however, poses musical agency as an interaction with a passive virtual environment, forming a one-to-one relationship between intended agential movement and sonic output. I show that expanding Leman's framework to encompass intramusical expectations recasts musical agency as a negotiation with an active virtual environment by effectively recreating the predictive coding structure for initiating action, which involves the projection of a difference between one's present and anticipated sensory state by engaging higher layers of prediction. Represented most simply in the embodied experience of musical tension as force, this connection provides a theoretical conceptualization of the capacity of musical agency to evoke more interactive and complex encounters with a virtual world. Applying this idea to musical expression, I explore the role of expectations in constructing narrative or dramatic understandings of music, which point to music's ability to engage with situational aspects of emotion. Drawing on examples from Schubert and Brahms that illustrate Margulis's (2005) three types of expectation-based tension, I present how possible dramatic understandings of these transformations of expectations reflect the way a listener's agential, embodied orientation toward the future can function as a kind of perception of a virtual world, and further construct an embodied experience of plot-like development through time. Overall, this perspective posits a direction for expanding the purview of musical agency's role in musical emotional expression beyond a focus solely on the body (imitating expressive gestures and behaviors), and suggests a convergence between embodied music cognition and traditional research on musical expectation.

Subjects: Expectation, Embodied cognition; Emotion; Music and movement; Music theory

When: 12:15 PM in KC905/907 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

J2-2: Implicit Tonal Effects in Music Processing

Olivia M Podolak*(1), Mark Schmuckler(1), Dominique T Vuvan(2)
1:University of Toronto Scarborough, 2:Skidmore College

Control of stimulus confounds is an ever-present and ever-important aspect of experimental design; such control is as critical in music cognition work as it is in any other area of research. Typically, researchers concern themselves with such control on a local level, ensuring that individual stimuli contain only the properties they intend for them to represent. Significantly less attention, however, is paid to stimulus properties in the aggregate, aspects that, although not present in individual stimuli, can nevertheless become emergent properties of the stimulus set viewed as a whole. The current paper describes two case studies of such emergent stimulus effects, drawn from widely different areas of music cognition research: performance of two-note dyads, and listeners' memory for expected versus unexpected tones in melodic contexts. Both contexts demonstrated an impact of emergent tonal effects, with tonal frameworks induced by the abstraction of structure across individual stimulus materials that were explicitly intended to avoid instantiating a tonal framework. Interestingly, these emergent tonal effects were found to structure participants' behavior in the two studies, modifying planning of the performances of the note dyads in the first case study, and influencing memory for individual tones in atonal melodies in the second case study. As such, these examples demonstrate how such properties can exist across otherwise tonally neutral stimuli in experiments, and how they can influence participants' responses in subtle ways.

Subjects: Pitch, Tonality

When: 12:30 PM in KC905/907 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

J2-3: Meta-analysis of the prevalence of hypothesis testing in corpus studies

Joshua Albrecht(1)
1:The University of Mary Hardin-Baylor

Motivation Many methodological pitfalls can occur in music corpus studies employing big data, foremost among them multiple tests and spurious correlations. One of the primary strategies for avoiding these pitfalls is to test a priori hypotheses, but this remains a rare methodological choice. This meta-study examines methodology in 292 published corpus articles from 1972-2017, testing the hypotheses that A) corpus studies have become more popular over time, and that B) hypothesis testing has become more common over time. Methodology/Dataset Of 3,630 articles published in JNMR, Psychology of Music, Psychomusicology, Musicae Scientiae, and EMR, 292 were deemed to be corpus studies. These studies were examined for hypothesis tests and whether an a priori hypothesis was identified. Results There has been significant growth in corpus studies as a percentage of total output from 1972-2017 (F(1,3628)=57.62, p < .0001), consistent with hypothesis A. Among corpus studies, if model building and testing is considered a hypothesis test, then hypothesis testing has increased over time (F(1,290)=7.30, p = .007). Nevertheless, hypothesis testing remains in use less than 50% of the time. Also, many of these ‘tests’ are of models built from similar or reserved datasets, and so are not properly a priori. Implications By not guarding against multiple tests and the possibility of spurious correlations, the field may be in danger of the type of crisis currently facing the social sciences. This author encourages his readers to integrate a priori hypothesis testing more fully into their methodological designs.
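
As a rough illustration of the kind of trend test reported above, the sketch below regresses a binary "is a corpus study" indicator on publication year; the data here are synthetic placeholders (not the coded corpus), and the original analysis used an F-test rather than the logistic model shown.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Synthetic stand-in for the 3,630 coded articles: publication year and a
# 0/1 flag for "is a corpus study" (hypothetical data, not the actual corpus).
year = rng.integers(1972, 2018, size=3630)
p = 1 / (1 + np.exp(-(-70 + 0.034 * year)))    # prevalence rising over time
is_corpus = rng.binomial(1, p)

# Trend test: does the proportion of corpus studies grow with year?
X = sm.add_constant((year - year.min()).astype(float))
fit = sm.GLM(is_corpus, X, family=sm.families.Binomial()).fit()
print(fit.summary().tables[1])                 # slope estimate and its p-value
```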

Subjects: Corpus analysis/studies, Meta-analysis

When: 12:45 PM in KC905/907 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

Session J3, Music Therapy

12:15-1:00 PM in KC909

J3-1: Dance for enhancing motor and cognitive skills in children with cerebellar developmental anomalies

Valentin Begel*(1), Asaf Bachrach(2), Simone Dalla Bella(3), Julien Laroche(2), Sylvain Clément(1), Arnaud Delval(4), Audrey Riquet(4), Delphine Dellacherie(1)
1:Université de Lille, 2:Centre national de la recherche scientifique, 3:University of Montreal, 4:CHU Lille

Cerebellar developmental anomalies are rare dysfunctions of the cerebellum that affect motor and cognitive skills. To date, there have been very few attempts to devise behavioral therapies for remediation of these disorders. The cerebellum plays an important role in temporal cognition, including sensorimotor synchronization, which is critical for motor and cognitive development. It has been shown that dance training has a positive impact on the cerebellum, the basal ganglia, and the cerebral cortex, structures that form the sensorimotor neuronal circuitry. Dance is a motivating activity that involves full-body synchronization with music and/or partners. It has been used as a physical therapy in neurological conditions and its positive impact on motor and cognitive domains is well documented. However, the possibility of using dance as a training protocol to improve motor and cognitive functions in children with developmental cerebellar anomalies has not been assessed so far. In this small-scale study we investigated whether dance can reduce motor and cognitive difficulties associated with cerebellar dysfunctions. Seven children (aged 7-11) with cerebellar developmental anomalies participated in a 2-month dance training protocol (3h/week). A test-retest design with multiple baselines was used to assess children’s perceptual and sensorimotor rhythmic abilities, as well as motor and cognitive skills. The training led to improvements in motor tasks (reduced variability in paced tapping), in balance tasks, and in executive functioning (flexibility). The beneficial effects of the dance training were visible at the individual level in almost all participants. Notably, gains were maintained two months after the intervention. Contrary to our hypothesis, no improvements were visible in other variables (e.g., phase coupling in synchronization, inhibition). This has important implications for the understanding of sensorimotor synchronization and the mechanisms sustaining sensorimotor training. In sum, these findings pave the way to innovative intervention strategies for children with neurodevelopmental disorders based on dance.

Subjects: Music therapy, Music and movement; Music education/pedagogy/learning

When: 12:15 PM in KC909 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

J3-2: Parent-Child Integrated Music Program for Preschoolers with ASD: Feasibility and Preliminary Efficacy

Miriam Lense(1), Sara Beck*(2), Adam Summers(3), Rita Pfeiffer(4), Christina Liu(1), Nicole Diaz(4), Nia Goodman(4), Megan Lynch(4)
1:Vanderbilt University Medical Center, 2:Randolph College, 3:Belmont University, 4:Vanderbilt University

Musical activities, a natural type of parent-child and peer play, may provide a good platform for supporting social interaction in children with autism spectrum disorder (ASD) because they are motivating and provide a predictable context to scaffold social engagement (Lense & Camarata, 2018). Musical experiences are associated with prosocial behaviors in typically developing (TD) individuals (Kirschner & Tomasello, 2010), while music therapy may support social communication development in ASD (Kim et al., 2008). We used a mixed methods approach to examine the feasibility and preliminary efficacy of an integrated parent-child music program designed to support social engagement and foster community integration. Fourteen families of preschoolers with ASD and 14 TD preschoolers participated in the 10-week program. Fidelity to the program curriculum was high (>98%), as was family attendance (>80% of sessions). On a program evaluation survey, caregivers reported interest in participating in additional music classes (4.7 (0.7) out of 5) and in recommending the program to others (4.9 (0.4)). Behavior coding revealed that children with ASD were less actively engaged and more unengaged during class than TD children (p’s<0.01), but both groups increased active engagement in class over time (p=0.01). Children with ASD demonstrated significant increases in imitation skills (23.4% (19.1%), p<0.001, d=1.196) and gestures/actions repertoire (4.5 (4.5), p=0.002, d=1.014) over the program. Parents reported significant decreases in parenting stress in the ASD group (p=.008, d=0.9) and the TD group (p=0.05, d=0.6). Interviews highlighted that important aspects of the program for all parents included forming connections within and across families, increased understanding of child development and ASD, and learning parenting skills through musical activities. This pilot study suggests that parent-child music classes may provide a potential vehicle for scaffolding specific social communication goals for children with ASD while also providing a context for community integration and connection.

Subjects: Music therapy, Music and development

When: 12:30 PM in KC909 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

J3-3: What Makes a Music Therapist? An Examination of Therapist Behaviors

Kimberly Sena Moore*(1), Deanna Hanson-Abromeit(2)
1:University of Miami, 2:University of Kansas

Background The music therapy profession is defined in large part by the implementation of music interventions to address medical, developmental, psychological, and wellness needs. However, it is not the only profession to do so. Systematic examination of how a trained music therapist (MT) facilitates complex music interventions would help identify and operationalize behaviors essential to the implementation and outcomes of the intervention, and clarify interventionist scope of practice. Aims We report a portion of a larger study exploring the fidelity of a complex music intervention, Musical Contour Regulation Facilitation (MCRF). Our purpose is to identify and examine MT behaviors exhibited during the delivery of the MCRF intervention. Method This project was a retrospective microanalysis of videotaped footage from four weeks of MCRF sessions facilitated May through July 2014. From a representative sample of 30% of sessions (n = 12), we randomly selected one per week for analysis. Three research assistants independently coded the videos, tracking the frequency of verbal, nonverbal, and musical behaviors in 15-second intervals (interrater agreement = 80.6%). We conducted a series of one-way between-subjects ANOVAs to examine differences in MT behaviors (verbal, nonverbal, musical) based on intervention component (neutral, high, and low arousal). Results Results showed the MT exhibited an average of 694.75 behaviors per 13-minute session, grouped into five verbal, four nonverbal, and seven musical behaviors. We found significant main effects for two nonverbal and four musical behaviors; post-hoc comparisons indicated these occurred more frequently during high arousal components than during low arousal components (and, in some cases, neutral ones). Conclusions Findings suggest the MT continually adjusted her behaviors during intervention sessions. This study provides a systematic approach to categorizing MT behaviors exhibited when delivering a complex music intervention. Results lay a foundation for examining music intervention implementation and may inform clinical training and practice standards of professional MTs.

Subjects: Music therapy, Music intervention development

When: 12:45 PM in KC909 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

Session J4, Symposium: Open Science Part 2

12:15-1:00 PM in KC914

J4-1: SMPC Symposium on Open Science, Part 2: Open Science Ecosystem

Dominique T Vuvan*(1), David J Baker(2), Haley Kragness(3), Psyche Loui(4), Finn Upham(5), Robert Slevc(6)
1:Skidmore College & International Laboratory for Brain, Music, and Sound Research, 2:Louisiana State University, 3:McMaster University, 4:Northeastern, 5:New York University, 6:University of Maryland

The goal of this symposium is to facilitate discussion of open scientific practices in the SMPC community. The second part of this symposium will discuss the opportunities and impacts of working within an open science ecosystem. Specifically, we will describe what an open science culture looks like, address common arguments for and against open science, and then explore how to work with open source code and the effects of open scientific practices on academic careers in music science. Part 2 will end with a discussion of what music perception and cognition researchers need to develop a robust open science culture and how organisations such as SMPC might facilitate it. Presentation slides, links, and other materials pertaining to this symposium can be accessed at https://osf.io/9bvue/.

Subjects: Open Science,

When: 12:15 PM in KC914 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

J4-1: The open science ecosystem

Finn Upham(1)
1:New York University

The ideals of open science are laudable, but the work of learning relevant skills and sharing materials presents substantial hurdles for many researchers. In communities with established open practices, these challenges are more readily met. Members learn relevant skills earlier in their academic development and, once there is enough material available, the products of open science contribute to new projects at all stages of the research cycle. This sharing culture encourages researchers to collaborate and to draw more directly on past work, fostering discussion of methodology and meta-analysis. However, interdisciplinary communities such as Music Science face additional difficulties, as individual researchers vary greatly in priorities and backgrounds. More of our work is without direct precedent, or rather without accessible precedent, and fewer peers share both our interests and our research strategies. This presentation considers how open science practices can also help address these community needs by raising awareness of related work and fostering better understanding of projects across disciplines.

Subjects: Open Science,

When: 12:15 PM in KC914 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

J4-2: Open source code

David J Baker(1)
1:Louisiana State University

One of the key features of any scientific discovery is the ability to reproduce the finding. This could mean either reproducing the analyses from another researcher’s data or reproducing the entire experiment under a different set of conditions. One major obstacle to reproducing either of these is access to the author’s original materials. This section of the presentation will demonstrate how using open source software for both experimental design and data analysis can lead to more stable, shareable, and repeatable findings in music psychology. We will introduce resources on how researchers can get started learning to use open source code, as well as the benefits that using open source code provides, such as multi-site data collection, experimental version control, the ability to collect diverse samples, and the archiving of experimental designs.

Subjects: Open Science,

When: 12:30 PM in KC914 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

J4-3: Impacts of open science

Robert Slevc(1)
1:University of Maryland

One reason for the relatively slow adoption of open science practices (so far) may be that researchers are unsure how using open science practices will impact their careers. For example, one may worry that preregistration and open data/code could reduce the flexibility to publish interesting exploratory findings or lead to research being ‘scooped’, or that tenure committees may not appreciate work published open access, outside of the traditional journals in the field. Here, we will briefly describe evidence that open science practices are, in contrast, associated with a variety of career benefits, including increased citations, media exposure, and job and funding opportunities. We will also discuss how to frame one’s work in a research statement and CV to highlight the use, and value, of one’s open science practices.

Subjects: Open Science,

When: 12:45 PM in KC914 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

Session K1, Social Interventions

2:30-3:00 PM in KC802

K1-1: Parental views of participation in music programs and children’s socio-emotional skills and personality: A longitudinal report

Beatriz Ilari*(1), Priscilla Perez(1), Alison Wood(1), Assal Habibi(1)
1:University of Southern California

In this paper, we report on a study that tracked parental views of the socio-emotional skills and personality of children who were involved in an intensive community-based music program and two comparison groups: intensive sports programs, and control (no participation in extracurricular [EC] activities). All parents came from underserved urban communities in Los Angeles, were primarily of Latino ethnicity, and had children aged 6-7 at the beginning of the study. Parents were interviewed yearly on home musical experiences, reasons for enrolling/re-enrolling children in music and sports programs, and consequences of children’s participation. They were also asked to complete the BASC-BESS II report of children’s socioemotional skills and the TIPI personality measure in select years. Although home musical backgrounds were quite similar across participating families, interview data suggested a shift in motivations for enrolling children in music programs, which ranged from personal and social reasons to occupational ones. Parents also spoke about changes to family life (in both negative and positive ways), describing changes to children’s musicality and personal and social skills as a consequence of children’s participation in music. In terms of quantitative measures, at the beginning of the study and prior to entering the programs, there were no significant group differences for the BASC-BESS and TIPI measures. After 4 years of participation in music, sports, or no EC activity, parents of children involved in the two EC activities rated their children higher on the emotional stability personality trait, lower on aggression, and lower on hyperactivity when compared to children not involved in EC activities. These data will be presented at the conference with implications for future research and practice.

Subjects: Music and development, Music education/pedagogy/learning

When: 2:30 PM in KC802 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

K1-2: A new view on classical music listeners: Consumer habits and the influence of professional music review

Elena Alessandri*(1), Antonio Baldassarre(1), Olivier Senn(1), Katrin Szamatulski(1), Victoria J Williamson(2)
1:Lucerne University of Applied Sciences and Arts, 2:Department of Music, University of Sheffield

In the classical music world, new technologies and communication media have radically changed the way we purchase, consume, and discuss music. Classical music critique has a long history of providing listeners with guidance on what to listen to or buy. What is the relevance of critique in the modern classical music market? Aims To document classical music listeners’ purchasing and listening habits with a focus on their engagement with professional music review. Method Online survey (English/German) for regular listeners of classical music recordings. The survey covered listening and purchasing habits and the use of opinion sources, including music criticism. Using multiple logistic regression, we tested demographics and listening and purchasing habits (59 variables) as predictors of consumers’ engagement with music critique and created a model of seven predictors that explains 35.2% of the deviance in the outcome variable. Results 1,200 classical music listeners (mean age ~44 years, range 17-85, from 62 countries) completed the survey. Participants’ musical sophistication index (Gold-MSI) was consistent with the population average (85.05). 54% used CDs regularly, making them roughly as popular as digital files and YouTube (56%). Word-of-mouth was the most often used opinion source, although review was considered the most useful. 62% of listeners had recently engaged with professional music review. Predictors for review consumption were: higher musical engagement and education, older age, preference for extended evaluations and traditional sources like newspapers, lower use of streaming services, and higher inclination to purchase music. Conclusions Contrary to the stereotypical view, participants were not elite or trained/experienced listeners; they consumed music in a variety of ways and used a range of opinion sources. Professional review is still popular; more so among older, musically educated listeners than among younger streaming users. Professional critique still plays an important role, although it needs to connect to a new generation of classical listeners.
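
The abstract above reports a seven-predictor logistic model explaining 35.2% of the deviance; the sketch below shows how such a "deviance explained" figure is typically computed for a logistic regression. The data and predictor names are invented for illustration and are not the survey’s variables.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Hypothetical respondents: a few illustrative predictors of whether someone
# recently engaged with professional music review (values are synthetic).
n = 1200
X = np.column_stack([
    rng.normal(size=n),          # musical engagement / education score
    rng.integers(17, 86, n),     # age in years
    rng.binomial(1, 0.5, n),     # prefers traditional sources (e.g., newspapers)
])
logits = -2 + 0.8 * X[:, 0] + 0.02 * X[:, 1] + 0.7 * X[:, 2]
engaged = rng.binomial(1, 1 / (1 + np.exp(-logits)))

fit = sm.GLM(engaged, sm.add_constant(X), family=sm.families.Binomial()).fit()

# Proportion of deviance explained: 1 - residual deviance / null deviance.
print(f"deviance explained: {1 - fit.deviance / fit.null_deviance:.1%}")
```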

Subjects: Music and society, Musicology

When: 2:45 PM in KC802 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

Session K2, Form 1

2:30-3:00 PM in KC905/907

K2-1: Acoustic cues for emotion distinguish classical sonatas and rondos

Jonathan De Souza*(1), Adam Roy(1), Andrew Goldman(1)
1:University of Western Ontario

Musical form is often understood as a large-scale repetition pattern. For example, rondo forms are defined by a recurring refrain that alternates with episodes. Contrasting episodes reduce inattention, functioning as dishabituation stimuli; meanwhile, the increasingly familiar refrain enhances processing fluency and aesthetic pleasure (Huron, 2013; Margulis, 2014; see also, Reber, Schwarz & Winkielman, 2004). In this account, the refrain primarily engenders veridical expectations (not schematic ones), the formal structure is theoretically independent of content, and rondo and sonata movements “almost certainly do not evoke different listening schemas” (Huron, 2006, p. 208). Yet historically, rondos were identified with a particular mood: “gay, lively, and brilliant” (Czerny, 1848, p. 81; see also, Cole, 1964). Listeners, then, might develop schemas for sonata and rondo movements via stylistic or affective features. In a corpus analysis, we examined paired sonata and rondo movements from 180 instrumental works, composed between 1770 and 1799. Rondos had significantly higher average pitch height and higher average attack rate. These results are consistent with prior research on acoustic cues for happiness in music and speech (Schutz, 2017). There were also significant differences related to meter and dynamics. We then conducted experiments involving 20 participants with at least 5 years of musical training (Exp. 1) and 20 participants with less than 6 months of musical training (Exp. 2). In both experiments, participants listened to 120 15-second audio clips, taken from the beginnings of movements in our corpus. After a training phase, they attempted to categorize the excerpts (2AFC task). D-prime scores were significantly higher than chance levels for both groups. In post-experiment questionnaires, participants identified musical traits that were highlighted in our corpus study, and they reported that rondos sounded happier than sonata movements. This suggests that classical formal types can be indirectly recognized, because of distinct stylistic and affective tendencies.
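
For readers unfamiliar with the sensitivity measure mentioned above, the sketch below shows one standard way to compute d-prime from a listener’s hit and false-alarm counts (treating "rondo" as the signal category); the counts are hypothetical, and the authors’ exact scoring procedure may differ.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index from a 2x2 response table, with a log-linear
    correction so proportions of exactly 0 or 1 do not yield infinite z-scores."""
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(h) - norm.ppf(f)

# Hypothetical counts for one listener over 60 rondo and 60 sonata excerpts:
print(d_prime(hits=45, misses=15, false_alarms=20, correct_rejections=40))
```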

Subjects: Music theory, Corpus analysis/studies; Emotion; Musicology

When: 2:30 PM in KC905/907 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

K2-2: Music and categorical thought: Evidence from perception of form

Richard Ashley(1)
1:Northwestern University

The nature of conceptual and categorical thought is a core topic in cognitive science writ broadly, but has received less direct consideration in music cognition, with a few notable exceptions (cf. Zbikowski 2004). This presents a contrast with music cognition’s cousin discipline, music theory, where categorization is a central concern (cf. Gjerdingen 2007, Hepokoski and Darcy 2006, and many other studies), and where appeal to a particular musical structure, either specific or abstract, as “prototypical” or “archetypal” is commonplace. This study engages one aspect of musical structure, musical form, from the standpoint of theories of categorization, using a variety of data related to judgments of form, based on a large corpus of popular music. The corpus under consideration is made up of the top 10 songs on Billboard magazine’s end-of-year charts for 60 years (1956-2016), thus totalling 600 pieces. The current study was motivated by the evident variability in analysts’ and listeners’ formal descriptions of songs. This variability argues for categorization of musical forms as probabilistic and further raises the question of the roles of prototypes and exemplars in such processes. One proposal for a prototypical musical form is “rotation” form (Hepokoski and Darcy 2006), in which a series of musical units is repeated, perhaps in varied form (e.g. ABCABxC). Our prior results from two experiments, one on listeners’ perception of form in pop songs heard for the first time, and a second on judgments of the “goodness” of pop song forms compared to one another, support variants of a two-rotation structure as prototypical for pop songs. We are now carrying out an additional experiment to see if typicality judgments of specific songs shed more light on the roles of prototypes vs. exemplars in such judgments.

Subjects: Music theory, Memory

When: 2:45 PM in KC905/907 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

Session K3, Medical Interventions

2:30-3:00 PM in KC909

K3-1: The Effect of Acetaminophen on Music, Speech, and Natural Sounds

Lindsay Warrenburg(1)
1:Ohio State University

The capacity of listeners to perceive or experience emotions in response to music, speech, and natural sounds depends on many factors, including dispositional traits, empathy, and enculturation. Emotional responses are also known to be mediated by pharmacological factors, including both legal and illegal drugs. Existing research has established that acetaminophen, a common over-the-counter pain medication, blunts emotional responses to visual stimuli (e.g., Durso, Luttrell, & Way, 2015). The current study extends this research by examining possible effects of acetaminophen on both perceived and felt responses to emotionally charged sound stimuli. Additionally, it tests whether the effects of acetaminophen are specific to particular emotions (e.g., sadness, fear) or whether acetaminophen blunts emotional responses in general. Finally, the study tests whether acetaminophen has similar or differential effects on three categories of sound: music, speech, and natural sounds. The experiment employs a randomized, double-blind, parallel-group, placebo-controlled design. Participants are randomly assigned to ingest acetaminophen or a placebo. Then, the listeners are asked to complete two experimental blocks regarding musical and non-musical sounds. The first block asks participants to judge the extent to which a sound conveys a certain affect (on a Likert scale). The second block aims to examine a listener’s emotional responses to sound stimuli (also on a Likert scale). The study is currently in progress and has tested 184 participants of a planned cohort of 200. Given that some 50 million Americans take acetaminophen each week, final results consistent with existing research on acetaminophen’s emotional blunting would suggest that future studies in music and emotion should consider controlling for the pharmacological state of participants.

Subjects: Emotion, Pharmacology

When: 2:30 PM in KC909 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

K3-2: The influence of listening to music during caesarean sections on patients’ anxiety levels

Nora Schaal*(1), Philip Hepp(2)
1:Heinrich Heine University, 2:Clinic for Gynecology and Obstetrics, HELIOS University Hospital Wuppertal

Several studies have shown that music interventions before and during surgery can lead to reduced anxiety and pain levels in patients. The present study investigated the effect of a music intervention during caesarean section on subjective (State Trait Anxiety Inventory, visual analogue scale) and objective (cortisol, amylase, heart rate, blood pressure) measures of anxiety and stress perceived before, during, and after the caesarean on the day of surgery. The patients (N = 304) were randomly allocated to the experimental group, listening to music in the operating room, or to the control group, undergoing the procedure without music. The analysis revealed lower levels of subjective anxiety at the end of the surgery in the experimental group compared to controls. The objective parameters showed significant differences between the groups in salivary cortisol increase from admission to skin suture, as well as in systolic blood pressure and heart rate, indicating lower stress and anxiety levels in the music group. These results suggest that listening to music during a caesarean section leads to a reduction in anxiety and stress levels. Music during caesareans seems to be an easily implementable and effective way of reducing the stress and anxiety of the expectant mother.

Subjects: Health and well-being, Music therapy

When: 2:45 PM in KC909 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

Session K4, Melody 2

2:30-3:00 PM in KC914

K4-1: Automatic comparison of global children’s and adult songs supports a sensorimotor hypothesis of scale origin

Shoichiro Sato*(1), Shinya Fujii(1), Patrick E Savage(1)
1:Keio University

Human music throughout the world uses diverse scales, but there are also some commonalities shared throughout much of the world’s scales (Savage et al., 2015, PNAS). Are there musical laws or biological constraints that underlie this diversity? Since the time of Pythagoras, theories of the origin of scales have been based on perception of small-integer interval ratios (Bowling & Purves, 2015, PNAS). However, these theories are generally based on tunable instruments, and some are skeptical as to whether they apply to vocal song, recognized as the most ancient and universal instrument of human music. One alternative theory is that scales arise not from perceptual constraints regarding integer ratios but instead from production constraints on how precisely the voice can generate pitches (Pfordresher & Brown, 2017, J. Cognitive Psychology). To investigate this theoretical dispute, we conducted an automatic comparative analysis of 100 matched children’s and adult songs (n = 50 songs each) from around the world using automated pitch analysis (Six et al., 2013, J. New Music Research). We found that children’s songs tend to have more imprecise tuning and fewer scale degrees than adult songs (precision: t = 2.0, p = .009; scale degrees: t = 1.7, p = .007), consistent with motor constraints due to their earlier developmental stage. On the other hand, we also found that both adult and children’s songs throughout the world share some common tuning intervals, such as the perfect 4th (4:3) and perfect 5th (3:2), consistent with sensory theories of integer-ratio preference. These results suggest that some universal aspects of musical scales may be caused by a combination of sensory and motor mechanisms, which we unify into a new sensorimotor hypothesis.
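
As an illustration of the interval comparisons described above (not the authors’ analysis pipeline, which used automated pitch extraction), the sketch below measures a sung interval in cents and reports its deviation from the small-integer ratios 4:3 and 3:2; the frequencies are hypothetical.

```python
import math

def cents(ratio):
    """Size of a frequency ratio in cents (1200 * log2 of the ratio)."""
    return 1200 * math.log2(ratio)

# Reference small-integer ratios mentioned above.
targets = {"perfect 4th (4:3)": 4 / 3, "perfect 5th (3:2)": 3 / 2}

# A hypothetical sung interval: two pitch estimates in Hz.
f_low, f_high = 220.0, 332.0
sung = cents(f_high / f_low)

for name, ratio in targets.items():
    print(f"{name}: deviation = {sung - cents(ratio):+.1f} cents")
```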

Subjects: Cross-cultural comparisons/non-Western music, Computational approach; Corpus analysis/studies; Harmony and tonality; Music information retrieval

When: 2:30 PM in KC914 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

K4-2: A contextual constraint approach to studying melodic expectation: behavioral, computational, and neural studies

Allison R Fogel*(1), Emily Morgan(2), Gina Kuperberg(1), Ani Patel(1)
1:Tufts University, 2:University of California, Davis

Melodic expectation is a core topic in music cognition, and has been explored using behavioral, computational, and neural methods. However, the topic has mainly been studied in one way, by manipulating the expectancy of a target note without controlling the prior melodic context. Inspired by research on prediction in language processing, we have developed a new approach to studying melodic expectation based on controlling the extent to which a melodic context constrains expectations to one particular continuation (the “contextual constraint” approach). Using this method, we can develop melodic contexts that either do or do not lead to a strong prediction for a certain note to come next, and quantify the degree to which different notes are expected using the melodic cloze probability paradigm (Fogel et al. 2015). This approach is valuable for studying numerous aspects of melodic expectancy. In this talk we illustrate three applications of this approach. First, we show how the approach has been used to explore the powerful role of implied harmony in governing melodic expectations. Second, we show how the approach has been used to test two state-of-the-art computational models of melodic expectancy, Temperley’s Probabilistic Model of Melody Perception and Pearce’s Information Dynamics of Music (IDyOM) model. Third, we show how the approach has been applied to study the neural correlates of melodic expectation in a novel way, by exploring ERP effects of violating predictions with in-key target notes. When these notes are compared to the same notes in non-constraining melodies (where they do not violate any predictions), we observe an anterior positivity that strongly resembles the ERP effect seen in language studies of comparable prediction violations. We discuss further uses of the contextual constraint approach and offer our stimuli and experimental scripts to researchers interested in pursuing this approach.
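
For readers unfamiliar with the melodic cloze procedure referenced above, a cloze probability is simply the proportion of participants who continue a given melodic context with a particular note. The sketch below computes it from invented responses; the note names and counts are placeholders, not the study’s data.

```python
from collections import Counter

# Hypothetical continuations sung by 30 participants after one melodic context.
continuations = ["C5"] * 18 + ["E5"] * 7 + ["G4"] * 5

counts = Counter(continuations)
n = len(continuations)
cloze = {note: k / n for note, k in counts.items()}
print(cloze)  # e.g., C5 has a cloze probability of 0.6 in this toy example
```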

Subjects: Expectation, Language and speech; Neuroscientific approach

When: 2:45 PM in KC914 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

Session L1, Mental Representations

3:00-3:30 PM in KC802

L1-1: Ratios that attract the mind: A hidden resemblance between the perception of pitch and rhythm

Ani Patel*(1), Nathaniel J Zuk(2), Grant Steinhauer(1)
1:Tufts University, 2:Trinity College Dublin

Since Pythagoras there has been a fascination with the relationship between the consonance of a pitch interval and the simplicity of the ratio of its two pitches. Interestingly, simple frequency ratios are important not only for pitch but also for rhythm processing. For example, when Western listeners imitate rhythms they tend to adjust the timing of events such that inter-onset intervals are in simple integer ratios. This hints at intriguing resemblances between pitch and rhythm processing in the mind. Here we search for such resemblances by asking whether frequency ratios regarded as consonant when realized as pitch intervals are heard as pleasing when realized as polyrhythms, while ratios regarded as dissonant pitch intervals are heard as unpleasant. We created polyrhythms from the frequency ratios of all 12 pitch intervals in Western tonal music and had listeners rate their perceived pleasantness. Such polyrhythms consisted of two metronomes whose tempi had the same ratio as Western pitch intervals in just intonation (e.g., 3:2, 9:8). We tested a condition in which the faster metronome in each polyrhythm was set to 120 BPM and a condition in which the rate of each polyrhythm was adjusted to a cycle duration of 2 seconds. We found that in the latter condition (in which all polyrhythms fit into echoic memory) the pleasantness ratings of polyrhythms were significantly related to the ratings of their corresponding pitch intervals. Using a biologically realistic computational model of subcortical auditory processing (Zuk et al., 2018), we find that the rhythm preferences we observe are related to the degree of synchronization of subcortical neural activity to beat-related frequencies in the polyrhythmic stimuli. We discuss how our findings bear on the question of shared vs. distinct principles governing musical pitch and rhythm processing in the mind.
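
To make the stimulus construction concrete, the sketch below generates one cycle of onset times for a two-metronome polyrhythm whose tempi stand in a given frequency ratio, scaled to the 2-second cycle condition described above; it is a minimal illustration, not the authors’ stimulus code.

```python
from fractions import Fraction

def polyrhythm_onsets(ratio, cycle_s=2.0):
    """Onset times (in seconds) for two metronomes whose tempi stand in
    `ratio`, scaled so one full polyrhythmic cycle lasts `cycle_s` seconds.
    Returns (faster_onsets, slower_onsets) for a single cycle."""
    fast_beats, slow_beats = ratio.numerator, ratio.denominator
    fast = [i * cycle_s / fast_beats for i in range(fast_beats)]
    slow = [i * cycle_s / slow_beats for i in range(slow_beats)]
    return fast, slow

# The 3:2 polyrhythm (the ratio of a just perfect fifth) within a 2-second cycle:
fast, slow = polyrhythm_onsets(Fraction(3, 2))
print(fast)  # [0.0, 0.666..., 1.333...]
print(slow)  # [0.0, 1.0]
```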

Subjects: Beat, rhythm, and meter, Neuroscientific approach

When: 3:00 PM in KC802 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

L1-2: Music Stimulus-Encoding-Model Reconstruction for Validation of Cognitive Representations in fMRI

Michael A Casey*(1), Jefferey Mentch(2)
1:Dartmouth College, 2:Massachusetts Institute of Technology

Recent research shows that visual features corresponding to subjects’ perception of images and movies can be predicted and reconstructed from fMRI via stimulus-encoding models. We present the first evidence, to our knowledge, that listeners were able to discriminate between stimulus-encoding-model reconstructions of a target stimulus and null-encoding-model reconstructions. To support this claim, we conducted a forced-choice listening test in which 250 stimulus-encoding-model and null-encoding-model reconstructions were compared to corresponding target stimuli (baseline=50%, acc=60.8%, p=0.00038). We argue that such listening tests of stimulus- and null-encoding-model reconstructions provide a robust framework for mapping hypotheses about cognitive music representations via fMRI. Stimulus-encoding models represent musical information in fMRI data as high-dimensional (100+) feature spaces (i.e. neural representational spaces) with dimensions corresponding to voxel locations and values corresponding to voxel activations. Stimulus features representing harmony and timbre were used in the current study. Machine learning models predict the voxel patterns in the representational spaces that correspond to the stimulus features. Once a model is trained, we use it to predict response patterns to a large corpus of novel audio clips, called priors. For a held-out fMRI-stimulus pair, corresponding to a music perception fMRI task, we match the acquired brain image to the corpus of predicted brain images. Stimulus reconstruction selects the prior audio clips for which the target stimulus’ acquired brain image most closely matches their stimulus-encoding-model or null-encoding-model predicted brain images. Between-subject hyperalignment allows models trained on one group of subjects to be used for reconstructing stimuli for held-out subjects. Results show that hyperalignment yields common models of music cognition, and significantly improves model performance with respect to both within-subject and anatomically-aligned encoding models. To encourage further development of these new methods for probing music cognition, the code, stimuli, and high-resolution fMRI data have been released via the OpenfMRI initiative.
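
The sketch below illustrates the general logic of stimulus-encoding-model reconstruction described above, using synthetic data and an off-the-shelf ridge regression in place of the study’s actual features, models, and hyperalignment step; it is an assumption-laden toy, not the released code.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy stand-ins: stimulus features (e.g., harmony/timbre descriptors) for
# training clips, and the corresponding voxel responses.
X_train = rng.normal(size=(200, 20))                     # clips x features
W_true = rng.normal(size=(20, 100))                      # hidden mapping
Y_train = X_train @ W_true + rng.normal(scale=0.5, size=(200, 100))

# 1) Fit the stimulus-encoding model: predict voxel patterns from features.
encoder = Ridge(alpha=1.0).fit(X_train, Y_train)

# 2) Predict brain responses for a corpus of "prior" audio clips.
X_priors = rng.normal(size=(500, 20))
Y_priors_pred = encoder.predict(X_priors)

# 3) Reconstruction: for a held-out acquired brain image, select the prior
#    clip whose predicted pattern matches it most closely (here, by correlation).
x_target = rng.normal(size=(1, 20))
y_acquired = x_target @ W_true + rng.normal(scale=0.5, size=(1, 100))
corrs = [np.corrcoef(y_acquired.ravel(), yp)[0, 1] for yp in Y_priors_pred]
print("selected prior clip:", int(np.argmax(corrs)))
```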

Subjects: Neuroscientific approach, Music information retrieval

When: 3:15 PM in KC802 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

Session L2, Form 2: Closure

3:00-3:30 PM in KC905/907

L2-1: Neurophysiological tracking of musical phrases in Bach

Xiangbin Teng(1), Pauline Larrouy-Maestri*(2), David Poeppel(3)
1:Max Planck Institute for Empirical Aesthetics, 2:Max-Planck-Institute for Empirical Aesthetics, 3:New York University

Music, like speech, can be considered a continuous stream of sounds organized in hierarchical structures. Human listeners parse continuous speech into linguistic units of phrases and sentences. Inspired by EEG and MEG studies on speech parsing, this study examines the neural signatures of the parsing of musical structures. Twenty-five participants listened to Bach chorales while undergoing EEG recording. Eleven selected pieces were manipulated mainly to remove the temporal cues (fermatas) indicating musical phrases, so that participants principally had to rely on the harmonic structure to parse the musical structures. To further control acoustic confounds, two additional control conditions were implemented: 1) local reversal: disturbance of the musical phrases while keeping the harmonic structure of the phrase cadences; 2) global reversal: temporal reversal of the music pieces. Music pieces were synthesized with piano sounds and presented at three different tempi (66, 75, and 85 bpm). Employing advanced EEG component analysis and machine learning techniques, we show that the temporal response function of the first beat of each phrase was significantly higher in power in the original pieces than in the control conditions, which suggests that the brain can indeed rely on harmonic structures alone to identify the beginning of each phrase and hence parse continuous music streams. Note that the neural responses to acoustic properties of music, quantified either by acoustic-neural coherence or by acoustic reconstruction performance from EEG signals, were comparable between the structural manipulations. Moreover, we replicated previous findings on the positive correlation between formal training in music and neural entrainment to music, which indicates that music training probably specifically improves auditory processing of acoustic properties in music. The data demonstrate that the brain extracts hierarchical musical structures online and segments continuous music streams into units with ‘musical’ meanings.
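
As background on the temporal response function (TRF) analysis mentioned above, the sketch below estimates a TRF by regularized lagged regression on synthetic data; the authors’ actual pipeline (EEG component analysis plus machine learning) is more elaborate, so treat this only as a conceptual illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 64                       # sampling rate (Hz) of the stimulus/EEG features
n = fs * 120                  # two minutes of synthetic data

# Toy stimulus feature (note-onset impulses) and a fake EEG channel that
# responds to it with some latency, plus noise.
stim = (rng.random(n) < 0.02).astype(float)
kernel = np.exp(-np.arange(fs) / 10.0) * np.sin(np.arange(fs) / 4.0)
eeg = np.convolve(stim, kernel)[:n] + rng.normal(scale=0.5, size=n)

# Build a design matrix of stimulus lags (0 to ~1 s) and solve ridge regression.
lags = np.arange(fs)
X = np.column_stack([np.roll(stim, lag) for lag in lags])
X[:lags.max()] = 0            # discard samples contaminated by wrap-around
lam = 1.0
trf = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)
print(trf[:5])                # estimated response at the first few lags
```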

Subjects: Neuroscientific approach, Beat, rhythm, and meter; Expectation; Music and language; Physiological measurement

When: 3:00 PM in KC905/907 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

L2-2: Melodic Prototypes as Cues in the Perception of Tonal Cadences: A Corpus Study

Ben Duane(1)
1:Washington University in St. Louis

Musicians and non-musicians alike can recognize authentic and half cadences (Bigand & Parncutt, 1999; Boltz, 1989; Rosner & Narmour, 1992; Sears, Caplin, & McAdams, 2014), and they may do so partly by recognizing prototypical pitch sequences. Music theorists have long argued that cadences tend to be paired with one or more common sequences of scale degrees, such as ^3-^2-^1 in the melody of authentic cadences, or ^4-#^4-^5 in the bass line of half cadences (Caplin, 1998; Gjerdingen, 2007). Research on statistical learning, moreover, suggests that listeners would be able to learn such prototypes through repeated exposure (Creel, Newport, & Aslin, 2004). This study uses corpus analysis to assess how beneficial such scale-degree prototypes would be in perceiving authentic and half cadences. Two corpora—one containing Classical string quartets, the other containing Galant instructional duets known as solfeggi—were analyzed by two musical experts, who independently identified all authentic and half cadences. Scale-degree prototypes were identified using two computational frameworks—n-gram models and profile hidden Markov models (PHMMs)—and the output of these models was used to attempt to identify authentic and half cadences in the corpus. The results suggest a strong correspondence between cadences and prototypical scale-degree sequences in the string quartets. With the n-grams, cadences were correctly identified up to 78.1% of the time, depending on the model’s parameters. With the PHMMs, these rates reached 44.5%, which exceeded chance. In the solfeggi, however, neither model attained above-chance performance. Follow-up analyses of the n-gram distributions suggest that the models fail on the solfeggi because these pieces employ common melodic prototypes at all times, whereas the string quartets employ them only at cadences—a finding that harks back to Caplin’s (1998) distinction between “conventional” cadential material and “characteristic” non-cadential material.
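
The sketch below gives a minimal flavor of scale-degree n-gram scoring of candidate phrase endings; the trigram counts are invented, the smoothing choice is an assumption, and the study’s actual n-gram and PHMM implementations are certainly richer.

```python
from collections import Counter

# Hypothetical training data: melodic scale-degree trigrams observed at
# expert-labeled authentic cadences (counts are invented for illustration).
cadential_trigrams = [("3", "2", "1")] * 40 + [("2", "2", "1")] * 10 + [("8", "7", "8")] * 5
counts = Counter(cadential_trigrams)
total = sum(counts.values())

def trigram_prob(trigram, alpha=1.0, vocab_size=50):
    """Add-alpha smoothed probability of a scale-degree trigram under the
    toy cadential model (vocab_size is an assumed trigram inventory size)."""
    return (counts[trigram] + alpha) / (total + alpha * vocab_size)

# Score two candidate endings: a prototypical ^3-^2-^1 close vs. another trigram.
print(trigram_prob(("3", "2", "1")))   # relatively high -> looks cadential
print(trigram_prob(("5", "6", "4")))   # relatively low  -> looks non-cadential
```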

Subjects: Corpus analysis/studies, Computational approach; Expectation; Harmony and tonality; Music information retrieval; Music theory

When: 3:15 PM in KC905/907 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

Session L3, Music in the Hands

3:00-3:30 PM in KC909

L3-1: Finger Kinematics During the First Days of Playing a Wind Instrument

Laura Stambaugh(1)
1:Georgia Southern

“Finger kinematics” refers to finger movements. For example, previous research with expert clarinetists revealed that finger acceleration was correlated with temporal evenness, and that fingers moved more quickly at key release than at key press (Palmer et al., 2009). Multiple studies using motion capture technology have examined the movements of expert musicians, though little is known about the kinematics of novice musicians. Therefore, the purpose of this study was to examine finger kinematics, performance accuracy, and temporal evenness during the earliest stage of learning to play a wind instrument: the first eight days. Participants (N = 8) completed up to eight private music lessons. They learned to play seven pitches on an electronic wind controller, which is similar to a soprano saxophone. Each day, an Optotrak Certus motion capture system recorded finger movements. This optical system has spatial resolution as fine as 0.1 mm in the X, Y, and Z planes at 200 frames per second. In addition, instrument playing was digitally recorded with Logic Pro. Four test music exercises were played each day at slow, moderate, and fast tempos. These were designed to highlight single finger movements and finger coordination. The dependent variables for musical performance were pitch accuracy and evenness. Preliminary results indicate that more errors were made when participants lifted their fingers (53% error) than when they put fingers down (18% error). Also, the number of performance errors decreased with practice. Repeated measures ANOVAs will examine day-to-day temporal evenness, and Pearson correlations will determine relationships between evenness and tempo. The kinematic dependent variables were peak acceleration during key press and key release, and peak finger height. I will discuss expert/novice differences, the potential role of the Sensory Accumulator Model (Aschersleben et al., 2001) in novice performance, and the role of feed-forward planning in novice finger kinematics.
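
To make the kinematic dependent variables concrete, the sketch below derives peak acceleration and peak finger height from a single synthetic position trace sampled at 200 frames per second by numerical differentiation; it is an illustration under assumed data, not the study’s processing code.

```python
import numpy as np

fs = 200                                      # motion-capture frame rate (Hz)
t = np.arange(0, 1, 1 / fs)

# Synthetic vertical finger-position trace (mm): one lift-and-return movement.
z = 10 * np.exp(-((t - 0.5) ** 2) / 0.005)

velocity = np.gradient(z, 1 / fs)             # mm/s
acceleration = np.gradient(velocity, 1 / fs)  # mm/s^2

print("peak finger height (mm):", z.max())
print("peak |acceleration| (mm/s^2):", np.abs(acceleration).max())
```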

Subjects: Physiological measurement, Music education/pedagogy/learning

When: 3:00 PM in KC909 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

L3-2: Hand Shape Familiarity Affects Guitarists’ Perception of Sonic Congruence

Keith Phillips*(1), Andrew Goldman(2), Tyreek Jackson(3)
1:Royal Northern College of Music, 2:University of Western Ontario, 3:St. John’s University

Motivation: Musical performance depends on the anticipation of the perceptual consequences of motor behavior. Altered auditory feedback (AAF) has previously been used to investigate auditory-motor coupling, but studies to date have predominantly used MIDI piano in experimental tasks. Methodology: In the present study, we extend the AAF paradigm to the guitar, which differs from the piano both motorically and in its pitch-to-place mapping, allowing further investigation into the nature of this coupling. Guitarists (n = 21) played chords on a MIDI guitar in response to tablature diagrams. Some of the chords used familiar hand shapes while others used unfamiliar hand shapes. In half of the trials, one of the notes in the heard chord was artificially altered. Participants judged whether the feedback was altered or not, responding as quickly and accurately as possible by pressing one of two buttons on a footswitch. To assess the familiarity of the stimuli, participants ranked the familiarity of the chord shapes and the hand shapes of the stimuli. Results and implications: Judgement of sonic congruence was faster when the chord and hand shape were familiar, and when feedback was congruent, though there was no interaction between these factors. Our findings suggest that guitarists’ auditory-motor coupling is heterogeneous with respect to their technique, given that some chords had stronger perception-action coupling than others. We discuss implications of these findings with regard to forward models and embodiment.

Subjects: Embodied cognition, Music and movement

When: 3:15 PM in KC909 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

Session L4, Beat & Meter 5: Non-Human Perspectives

3:00-3:30 PM in KC914

L4-1: Rhythmic discrimination in a non-vocal learner

Alexandre Celma Miralles*(1), Juan M Toro(2)
1:Universitat Pompeu Fabra, 2:Universitat Pompeu Fabra & ICREA

In the present study we explore the biological bases of rhythm across species, using a non-vocal learning model (Rattus norvegicus). Our first experiment focuses on a universal feature of rhythm: the isochronous beat. We trained rats to discriminate regular from irregular sequences of sounds using a go/no-go paradigm (Celma-Miralles and Toro, 2018). The training items were forty sequences of regular (i.e. isochronous) and irregular (i.e. non-isochronous) sounds presented at four tempi. After training, we tested the animals with 20 novel sequences at two new tempi and observed that the rats gave more nose-poking responses for the regular sequences than for the irregular ones. This difference in the number of times that the rats introduced their noses into the “feeder hole” after each stimulus suggests that they can detect isochrony and generalize this temporal feature to novel sequences at tempi not presented during training. Our second experiment focuses on the rhythm of a melody. The animals were familiarized with an excerpt of the “Happy Birthday” song while they received food. After the familiarization phase, we tested the rats with three items: the same familiar excerpt, the rhythm of the excerpt in a single tone (i.e. isotonic), and a version of the excerpt maintaining the melodic intervals but scrambling the rhythmic values. Compared to the isotonic rhythm, rats gave fewer responses for the familiar items and more responses for the rhythmically-scrambled melodies. These results show that rats discriminated a familiar melody from its rhythmically-scrambled versions. Together, both experiments suggest that the capacity (i) to detect temporal regularities in sound sequences and (ii) to discriminate rhythmic patterns in a familiar melody may be widespread in the animal kingdom and not restricted to vocal learning species.

Subjects: Beat, rhythm, and meter, Evolutionary perspectives

When: 3:00 PM in KC914 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

L4-2: Nuancing the beat: Distinguishing beat perception from isochrony perception

Henkjan Honing(1)
1:University of Amsterdam

Beat perception, defined as the detection of a regular pulse in a varying auditory signal, is considered a fundamental human trait that, arguably, played a decisive role in the origins of music/ality [1]. However, theorists continue to be divided on the issue of whether this ability is innate or learned. Winkler et al. (2009) [2] provided the first evidence in support of the view that beat perception is innate (or at least spontaneously developing). More recently, Bouwer et al. (2014) pointed out that the paradigm used needs additional controls to be certain that any effects (or the lack thereof) are due to beat perception, and not, for instance, a result of pattern matching, acoustic variability, or sequential learning. To replicate the results of Winkler et al. (2009) and to compare them to two recent studies using a novel paradigm in humans (Bouwer et al., 2016 [3]) and non-human primates (i.e. macaques; Honing et al., 2018 [4]), the latter paradigm is currently being used in a pilot with newborns at the Institute of Cognitive Neuroscience and Psychology, Budapest (MTA). The lecture will discuss some preliminary data and compare these with the existing results from human adults and macaques. [1] https://mitpress.mit.edu/books/origins-musicality [2] https://doi.org/10.1073/pnas.0809035106 [3] https://doi.org/10.1016/j.neuropsychologia.2016.02.018 [4] https://doi.org/10.3389/fnins.2018.00475

Subjects: Beat, rhythm, and meter, Cross-species comparisons

When: 3:15 PM in KC914 on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

Session M1, Symposium: Everyday Music in Infancy

9:30-10:30 AM in KC802

M1-1: Everyday music in infancy

Jennifer K Mendoza*(1), Caitlin Fausey(1)
1:University of Oregon

Robust models of the power of music to shape early learning and well-being demand quantitative estimates of foundational dimensions of early music experience. In this symposium, we present four recent efforts to dramatically improve the evidence base about this early musical ecology across contexts and cultures. This suite of work reveals the dimensional structure of music at home, using new survey instruments (Paper 1) and day-long audio recordings (Papers 2 and 3). We quantify the extent to which everyday musical experience in infancy is live, vocal, instrumental, and recorded, and also show how this ecology links to vocal production, vocabulary, and grammar skills. Surveys with desirable psychometric properties as well as small, wearable recorders now permit these empirical insights that scale beyond the lab. We speculate that insights based on the everyday lives of infants provide helpful foundations for music therapy interventions implemented in everyday settings. Here, we articulate one such framework for an urgent public health challenge currently facing therapists and families (Paper 4). Collectively, these papers share exciting insights about periods of development, cultures, and analytic targets that are under-represented in traditional inquiries into human music perception and cognition. We will also share corpora, manuals, and code (as allowable; e.g., figshare, HomeBank, and Open Science Framework) to spur future collaborative discoveries.

Subjects: Music and development, Music and language

When: 9:30 AM in KC802 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

M1-1: Play it again, mama: Music at home as a scaffolding to language development?

Nina Politimou(1), Lauren Stewart(2), Daniel Müllensiefen(3), Mirco Fasolo(4), Giuliana Genovese(5), Aspa Papadimitriou(2), Nora Schaal(6), Catherine Smith(7), Fabia Franco(1)
1:Middlesex University London, 2:Goldsmiths University of London, 3:Goldsmiths, 4:Chieti-Pescara University, 5:Milan-Bicocca University, 6:Heinrich-Heine-Universität Düsseldorf, 7:Goldsmiths University of London

The influence of the home musical environment on developmental outcomes is still largely unknown, although research has elucidated important effects of formal musical training on linguistic skills. Based on our previous study suggesting associations between the home musical environment and the development of key language areas in young pre-schoolers, we sought to develop and validate an instrument for the systematic assessment of the home musical environment for children under 5 (Music@Home Infant and Preschool Questionnaires), based on two online surveys with n = 500 for the Infant-Q and n = 560 for the Preschool-Q. Factor analytical and confirmatory methods were used to identify different dimensions comprising the home musical environment of infants and pre-schoolers and to consolidate the factor structure of the new instrument. Convergent and divergent validity and internal and test-retest reliability of the questionnaire were established. The Music@Home has been adapted into German and Italian, and preliminary results of a validation study in the German population show good psychometric properties. Subsequent studies established significant associations between Music@Home scores and language development in different age groups and cohorts. So far, an enriched home musical environment was associated with better grammar skills in 4-6-year-old children (n = 48) and with better word comprehension scores and use of communicative gestures in 8-12-month-old infants (n = 64) in the UK. Similarly, in a longitudinal study with Italian infants (n = 32), home musical enrichment at 4-10 months predicted higher word comprehension and production scores at 16-17 months; a corresponding longitudinal project is currently ongoing in the UK. The combined findings of the present project provide important insights for the use of music in the home environment as a vehicle for supporting language development. They also contribute toward a comprehensive account of the relationship between musical experience and language acquisition starting from early development.

Subjects: Music and development, Music and language

When: 9:30 AM in KC802 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

M1-2: The content and timing of music in infants’ home environments

Jennifer K Mendoza(1), Caitlin Fausey(1)
1:University of Oregon

Infants acculturate to their soundscape over the first year of life, yet theories of how they do so rarely make contact with details about the sounds available in everyday life. Here, we report on a ubiquitous early ecology in which foundational skills get built: music. We captured daylong audio recordings from 35 infants ages 6-12 months at home. We fully double-coded 467 hours of everyday sounds for music and its features, voices, and tunes. Across these recordings, we identified 4,798 music bouts that cumulated to 42 hours of music. Analyses of this first-of-its-kind corpus revealed three properties of infants’ everyday musical ecology. First, infants encountered vocal music in over half, and instrumental music in over three-quarters, of everyday music. Live sources generated one-third, and recorded sources three-quarters, of everyday music. Second, infants did not encounter each individual tune and voice in their day equally often. Instead, the most available identity cumulated to many more seconds of the day than would be expected under a uniform distribution. Third, rather than occurring regularly or randomly across the day, clusters of music bouts happened close together in time and were separated by longer periods without music, yielding a bursty rhythm at the daily timescale. These properties of the everyday musical ecology in human infancy are different from what is discoverable in environments highly constrained by context (e.g., laboratories) and time (e.g., minutes rather than hours). Our findings provide quantitative anchors about properties of the musical input available in infants’ everyday environments and guide hypotheses about how everyday music experience shapes infants’ learning. We will present and discuss a model in which the specific profiles of musical content as well as the daily temporal dynamics work together with other indices of high-quality experiences (e.g., live, vocal, and infant-directed) to potentiate infants’ remarkable early auditory learning.

Subjects: Music and development, Music and language

When: 9:45 AM in KC802 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

M1-3: Music in the lives of American and Tanzanian infants and toddlers: A daylong sampling

Lucia Benetti(1), Eugenia Costa-Giomi(1)
1:The Ohio State University

Daylong recordings of infants’ home environments are a rich source of data for understanding infants’ real-life music experiences, music learning, and musical development. In this talk we present two studies based on such data. [P] In the first study, we identified and analyzed instances in which a 15-month-old American infant imitated music that he had heard previously that day. He vocalized melodies he had heard sung by his parents and played by a toy hours earlier and engaged in imitative behavior with his father. In the second study, we compared the characteristics of the music environments of infants and toddlers from the United States (n = 12) and Tanzania (n = 8) (age range: 5 months–3;2 years). The preliminary results of the analyses of a subset of a corpus of 114 daylong recordings and parental interviews show similarities and differences in the music experiences of children between and within the two countries. For example: (1) Except for one American child, all children heard live singing; (2) Tanzanian children were exposed to a greater variety of sound sources, including a larger number of different singing voices; (3) Tanzanian children were exposed to more background noise and experienced fewer periods of silence; (4) Tanzanian children were exposed to less music made specifically for children. [P] These two studies show that extended, naturalistic recordings of infants’ music environments allow for a detailed analysis of young children’s early music production and experiences and the investigation of the connections between their environment and musical development. The value of our findings for the understanding of developmental music trajectories and processes of music learning early in life is discussed.

Subjects: Music and development, Music and language

When: 10:00 AM in KC802 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

M1-4: Theoretical modeling of a music intervention to decrease symptoms of neonatal abstinence syndrome in NICU hospitalized infants

Deanna Hanson-Abromeit(1)
1:University of Kansas

Maternal use of opioid-based drugs during pregnancy is a growing concern because infants with intrauterine exposure demonstrate withdrawal symptoms, labeled Neonatal Abstinence Syndrome (NAS). Indicators of NAS include extreme irritability, tremors, excessive crying, poor suck reflex, and difficult, irregular sleep patterns, which interfere with infant feeding, sleeping, emotional regulation and social interaction. Currently, the primary intervention is the carefully monitored tapering of opioids, such as morphine or methadone, to minimize negative developmental outcomes. The long-term consequences of opioid exposure for infant development are unclear but almost certainly negative; thus, non-pharmacological interventions are desired. Existing music intervention studies for NAS show benefits but use a generalized approach without a specified theoretical framework to define how music functions as an essential component of change for NAS symptoms. Therefore, development of a music intervention specific to infants with NAS is warranted. [P] This project is an ongoing Phase I intervention study to support the conceptualization and theoretical modeling of a clinically responsive music intervention for NAS. First, the existing literature is informing a theoretical framework to guide key music characteristics and essential intervention components. It is hypothesized that vocal improvisational singing that matches the music characteristics (e.g. tempo, rhythm, dynamics) to the intensity of the infant’s behaviors may interfere with withdrawal symptoms and reduce agitation behaviors by activating the limbic system and dopamine production. Therefore, music may neurologically substitute for the drug of exposure, resulting in decreased onset and duration of distress behaviors. Development of the theoretical framework will be followed by three clinical case studies of hospitalized newborns experiencing NAS. Analysis of the case studies will support the mapping of the theory-informed intervention and ensure clinical appropriateness. This presentation will illustrate the theoretical framework for modeling the intervention and hypothesized neurological mechanisms as the foundational guide for future clinical case studies.

Subjects: Music and development, Music and language

When: 10:15 AM in KC802 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

Session M2, Beat & Meter 6: Syncopation

9:30-10:30 AM in KC905/907

M2-1: Assessments of statistical measures of syncopation: Two approaches

Noah R Fram(1)
1:Stanford University

We seek a measure of the perceived “strength” of a syncopation. Existing metrics to statistically assess the syncopation of a given rhythmic sequence consider various relationships between a rhythm and the metrical grid, thus capturing different combinations of complexity and syncopation. However, as most assessments of these measures with respect to perceptual data are accuracy comparisons, precise distinctions among these metrics have yet to be directly determined. In the first part of this study, a sample from the space of rhythms consisting of 32 isochronous onsets was assessed for syncopation under a common-time meter at different sixteenth-note offsets, using a collection of commonly applied measures. An exploratory factor analysis was run on these data, producing three primary factors, the first of which appears to be insensitive to offset. The weighted note-to-beat distance (WNBD), off-beatness, and Keith’s measure all loaded primarily onto the first factor, while Longuet-Higgins and Lee’s and Toussaint’s measures loaded onto either the second or third, depending on the offset. These data suggest either that syncopation measures not derived from a metric hierarchy are confounded by the overall complexity of the rhythm, or that these two groups of measures are capturing different perceptual implementations of syncopation derived from distinct musical traditions. In the second part, perceptual data were collected from a population of United States residents and applied to a structural equation model (SEM) representing the theorized relationship among complexity, syncopation, and the same measures of syncopation. In addition, the perceptual syncopation ratings were correlated with the two factor groups produced in the initial analysis. These procedures allow us to assess the suggested relationship between these measures of syncopation and perceptual ratings while controlling for the effects of musical engagement, musical training, and musical exposure.
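
For readers unfamiliar with these metrics, the sketch below shows one common simplified reading of the Longuet-Higgins & Lee measure on a binary onset grid; the metrical weight template and the example rhythm are illustrative assumptions, not the stimuli or exact implementation used in the study.

```python
# Illustrative sketch of one common simplified reading of the
# Longuet-Higgins & Lee (1984) syncopation measure on a binary onset grid.
# The 16-position metrical weight template assumes a 4/4 bar subdivided
# into sixteenth notes; neither the template nor the example rhythm is
# taken from the study itself.

WEIGHTS_4_4_16 = [0, -4, -3, -4, -2, -4, -3, -4,
                  -1, -4, -3, -4, -2, -4, -3, -4]

def lhl_syncopation(onsets, weights=WEIGHTS_4_4_16):
    """Sum, over note onsets, of the positive weight difference between the
    strongest silent position before the next onset and the onset's own
    position. The pattern is treated as cyclic."""
    n = len(onsets)
    score = 0
    for i, hit in enumerate(onsets):
        if not hit:
            continue
        j = (i + 1) % n
        best_rest = None
        while not onsets[j] and j != i:
            best_rest = weights[j] if best_rest is None else max(best_rest, weights[j])
            j = (j + 1) % n
        if best_rest is not None and best_rest > weights[i]:
            score += best_rest - weights[i]
    return score

# Example: onsets on positions 0, 3, and 6 of a 16-position bar
print(lhl_syncopation([1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]))
```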

Subjects: Beat, rhythm, and meter, Computational approach

When: 9:30 AM in KC905/907 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

M2-2: Modeling Syncopation: Beyond Onset Pattern

David Temperley(1)
1:Eastman School of Music

Most existing models of syncopation define it entirely in terms of the pattern of note-onsets in relation to the meter. Most famously, Longuet-Higgins & Lee (1984) defined a syncopation as a note on a weak beat with no note on the following stronger beat, with the “strength” of the syncopation depending on the metrical levels of the beats involved. This is logical, under the usual understanding of syncopation as something that conflicts with the underlying meter: a note on a weak beat with nothing following is “long” and therefore accented. However, syncopation could potentially involve numerous other sources of accent besides length. Here I present a corpus study that highlights the inadequacy of models of syncopation based only on onset pattern. The study focuses on a particular pattern that I call “second-position syncopation”: accenting of the second eighth-note position in a half-note unit. This is a characteristic feature of popular music around 1900 (e.g. ragtime), and there has been much speculation about its origins. In my corpus study, I count second-position syncopations in six corpora of 19th-century songs (English, French, German, Italian, Euro-American, and African-American). I count them in two ways, one defined purely in terms of onset pattern, the other considering other musical factors—the pitch interval to the syncopated note (ascending vs. descending and step vs. leap), text-setting (syllabic vs. melismatic), and the sheer duration of the note (quarter-note or eighth-note plus rest). When second-position syncopations are defined purely in terms of onset pattern, they are common in English, American, and Italian music; when other features are considered, however, they are frequent only in the English and American corpora. This shows that an improved model of syncopation can shed light on an important historical issue.
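
As a concrete illustration of the onset-pattern-only definition (the feature-based count described above additionally weighs pitch interval, text-setting, and duration, which are not modeled here), a minimal sketch follows; the eighth-note grid resolution and the example bar are assumptions.

```python
# Sketch of counting "second-position syncopations" defined purely by onset
# pattern: an onset on the second eighth note of a half-note unit with no
# onset on the following, stronger quarter-note beat. The eighth-note grid
# (8 positions per 4/4 bar) and the example melody are assumptions for
# illustration, not the corpus encoding used in the study.

def count_second_position_syncopations(onsets, positions_per_bar=8):
    """onsets: binary list at the eighth-note level. Each 4/4 bar contains
    two half-note units starting at positions 0 and 4; the target pattern is
    an onset at unit position 1 with no onset at unit position 2."""
    count = 0
    half_unit = positions_per_bar // 2
    for bar_start in range(0, len(onsets) - positions_per_bar + 1, positions_per_bar):
        for unit_start in (bar_start, bar_start + half_unit):
            if onsets[unit_start + 1] and not onsets[unit_start + 2]:
                count += 1
    return count

# One 4/4 bar with onsets on beat 1, the "and" of 1 (syncopated), and beat 3
print(count_second_position_syncopations([1, 1, 0, 0, 1, 0, 0, 0]))  # -> 1
```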

Subjects: Beat, rhythm, and meter, Corpus analysis/studies; Cross-cultural comparisons/non-Western music; Music theory

When: 9:45 AM in KC905/907 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

M2-3: The relation between groove and syncopation is intricate – not any pattern will do

George Sioros*(1), Guy Madison(2), Diogo Cocharro(3), Fabien Gouyon(3)
1:University of Oslo, 2:Umeå University, Department of Psychology, 3:INESC-TEC

Groove is the pleasurable sensation of wanting to move to music. A series of studies has attempted to understand the function of this phenomenon by examining its relation to physical properties of the sound signal and found, among other things, that groove increases at optimal levels of syncopation. Here, we tested whether the amount of syncopation is the critical factor, rather than the specific pattern of notes that are syncopated. To this end, our algorithm transformed ten short funk and rock loops consisting of drums, bass and keyboards to 1) remove the original syncopation, and 2) introduce various amounts of new syncopation: 0%, 25% (a similar amount to the original), 50%, and 70%. All the examples were produced using professional sound samples and were rated by 27 listeners. The ratings were highest for the original versions, next highest for the 0% and 25% versions, and lowest for the 50% and 70% versions, with statistically significant differences between the original and all other versions, and between the 25% version and the 50% and 70% versions, but not between the 0% and 25% versions. Apparently, our algorithm failed to recreate the groove of the original music. Comparing the original and algorithmic syncopation we found: (1) the algorithmic syncopation is relatively uniformly distributed across the instruments, while the original versions have less syncopated drums, with almost no syncopated hi-hats and the back-beat snare never syncopated; (2) the original syncopation forms more and longer cross-rhythmic or metrically shifted patterns, as often encountered in the funk style; (3) the original and algorithmic versions differ in the micro-timing alignment of sounds. In conclusion, groove is greatly increased by syncopation, although not necessarily by syncopation per se: the results point to several structural factors that may be important, can be further tested, and add to our understanding of the functional properties that underlie the sensation of groove.

Subjects: Beat, rhythm, and meter, Expectation; Musicology

When: 10:00 AM in KC905/907 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

M2-4: Neural Resonance to Syncopated Rhythms: Model Predictions and Experimental Tests

Ed Large*(1), Yi Wei(1), Charles S Wasserman(1)
1:University of Connecticut

We examined neural and perceptual responses to rhythms with no spectral energy at the frequency people perceive as the basic beat, or pulse. First, we constructed a dynamical systems model of interactions between oscillatory auditory and motor networks, assuming a mature nervous system with connections reflecting Western metrical structures. The model predicted that for “missing pulse” rhythms, strong oscillations would emerge in the motor planning network at the missing pulse frequency. In Experiment 1, we measured pulse synchronization using a behavioral task, and steady state evoked potentials (SS-EPs) using 32-channel cortical EEG, in healthy adult musicians. We observed 1) strong pulse-frequency SS-EPs to isochronous and missing pulse rhythms, but not to a random control; 2) strong coherence between neural SS-EP responses and model-predicted auditory and motor SS-EPs; and 3) different pulse-frequency topographies for missing pulse rhythms (versus isochronous and random rhythms). These results suggest different neural generators of the pulse frequency for isochronous and missing pulse rhythms. In Experiment 2, we replicated the results of Experiment 1 using high-density 256-channel EEG. We also collected MRI images to enable localization of neural responses. Our results support the theory that pulse perception occurs as the result of emergent population oscillations in multiple interacting brain regions. We refine and parameterize the model based on these observations.
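
The defining property of a “missing pulse” rhythm can be checked directly from the stimulus; the sketch below, with an arbitrary example rhythm, tempo, and sampling rate (none taken from the experiments), inspects the amplitude spectrum of a rhythm’s onset envelope at a candidate pulse frequency.

```python
# Sketch: checking for energy at a candidate pulse frequency in the
# amplitude spectrum of a rhythm's onset envelope. The example rhythm,
# grid interval, and sampling rate are illustrative assumptions; they are
# not the "missing pulse" stimuli used in the experiments.
import numpy as np

def onset_envelope(onset_pattern, ioi_s=0.2, fs=200, n_cycles=16):
    """Binary onset pattern on an isochronous grid -> impulse train."""
    samples_per_step = int(round(ioi_s * fs))
    env = np.zeros(len(onset_pattern) * samples_per_step * n_cycles)
    for c in range(n_cycles):
        for i, hit in enumerate(onset_pattern):
            if hit:
                env[(c * len(onset_pattern) + i) * samples_per_step] = 1.0
    return env, fs

def amplitude_at(env, fs, freq_hz):
    """Amplitude of the spectrum at (the bin nearest to) freq_hz."""
    spectrum = np.abs(np.fft.rfft(env)) / len(env)
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq_hz))]

env, fs = onset_envelope([1, 0, 1, 1, 0, 1, 1, 0])
pulse_hz = 1.25  # hypothetical pulse: one beat per 0.8 s (every 4 grid steps)
print(amplitude_at(env, fs, pulse_hz))
```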

Subjects: Beat, rhythm, and meter, Computational approach; Embodied cognition; Expectation; Neuroscientific approach

When: 10:15 AM in KC905/907 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

Session M3, Speech

9:30-10:30 AM in KC909

M3-1: Do Elements of Musicians’ Speech Prosody Influence Their Created Vocal Melodies?

Alissandra Reed(1)
1:Eastman School of Music

Researchers have often investigated parallels between linguistic prosody and musical melodies (Patel & Daniele, 2003; Patel, Iverson & Rosenberg, 2006; Han et al., 2011; McGowan & Levitt, 2011; Temperley, 2017). Their projects have compared generalized measures of languages (such as nPVI or contoural complexity) to those of musical corpora. Most have taken for granted connections between speech prosody and texted music. Aiming to help disentangle some claims of causality between prosodic tendencies and melodic tendencies, this study seeks to draw discrete connections between the rhythmic and contoural characteristics of the way subjects speak a sentence and the way they choose to sing it. Ten musicians are recorded speaking fourteen sentences and subsequently singing them on improvised melodies. Using spectrograms, similarities in rhythm (syllable timing) and contour (intonation) are measured between a subject’s spoken and sung versions of each sentence. To measure similarities in rhythm, syllables are coded into an ordered vector as longer than (+), shorter than (–), or the same (0) as their preceding syllable. This measure is also taken for syllabic contour, coding each syllable as higher than, lower than, or identical in pitch to the preceding syllable. For both measures, the spoken vectors are compared syllable-for-syllable with the sung vectors, averaging the absolute difference between each syllable. Similarity measures for all sentences are compared with the average similarity expected by chance (0.89) using a one-sample t-test. Initial results suggest that musicians will speak and sing a sentence with similar inter-syllabic timing (0.6, sd=0.12, p=0.0001) but will not speak and sing a sentence with similar inter-syllabic contour (0.85, sd=0.15). Further tests compare these within-subjects similarity measures to those between subjects. To bolster findings on contour, a between-subjects similarity test will be run on reduced contours (Marvin & Laprade, 1987) for each spoken and sung sentence.
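
A minimal sketch of the coding and similarity computation described above appears below; the tolerance threshold and the example syllable durations are assumptions for illustration.

```python
# Minimal sketch of the inter-syllabic coding and similarity measure:
# each syllable is coded +1, -1, or 0 relative to its predecessor
# (longer/shorter/same for rhythm; higher/lower/same for contour), and the
# spoken and sung vectors are compared by the mean absolute difference.
# The tolerance threshold and example durations are assumed values.

def code_relative(values, tol=0.0):
    """Code each element as +1, -1, or 0 relative to the preceding one."""
    codes = []
    for prev, curr in zip(values, values[1:]):
        if curr > prev + tol:
            codes.append(1)
        elif curr < prev - tol:
            codes.append(-1)
        else:
            codes.append(0)
    return codes

def dissimilarity(spoken, sung):
    """Mean absolute difference between the two code vectors (0 = identical)."""
    assert len(spoken) == len(sung)
    return sum(abs(a - b) for a, b in zip(spoken, sung)) / len(spoken)

# Hypothetical syllable durations (s) for one sentence, spoken vs. sung
spoken_codes = code_relative([0.21, 0.18, 0.25, 0.25, 0.40])
sung_codes = code_relative([0.30, 0.24, 0.38, 0.36, 0.61])
print(dissimilarity(spoken_codes, sung_codes))
```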

Subjects: Music and language, Language and speech

When: 9:30 AM in KC909 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

M3-2: Parsing ungrammatical sentences leads to preference for non-congruent musical pieces

Mythili Menon*(1), Drew Colcher(1)
1:Wichita State University

Effects of language on musical processing have received little attention. We investigated whether participants are less sensitive to “errors” in chord progression after they are exposed to a sentence in which the subject and verb mismatch in number. Specifically, we investigated (i) whether comprehension of subject-verb agreement errors transferred to dissonance of musical chords played on guitar, and (ii) whether comprehension of subject-verb matches transferred to congruence in musical chords. We used a 2 x 2 design with a) agreement match (match/mismatch) and b) ambiguous number (singular/plural) as factors. We provided participants (N = 36; 20 targets, 20 fillers) with subject-verb agreement sentences: (1) [The key] to [the cabinets] [was rusty from many years of disuse] (Match condition); (2) [The key] to [the cabinets] [were rusty from many years of disuse] (Mismatch condition); (3) [The screen] of [the phone] [was cracked from top to bottom] (AmbigSingular condition); (4) [The mugs] on [the shelves] [were still wet from being washed] (AmbigPlural condition). We created musical stimuli in the major diatonic scale in two keys – G and C – as well as their relative minor keys. They consisted of a series of 7 chords, the last acting as a coda that refers back to a previous note. The musical stimuli were constructed to mirror the ambiguity in each condition. Participants read a sentence in one of the conditions and answered a comprehension question, following which they heard two musical targets. They were asked to choose which target they preferred. We find an overlap between syntactic processing and musical processing, with a main effect of agreement match (p < 0.03). Our results show that after reading a grammatical subject-verb agreement sentence and answering a comprehension question, participants show a preference for the musically congruent chord progression. However, when they read a mismatched (ungrammatical) sentence, their preference shifts significantly.

Subjects: Music and language, Cross-domain effects

When: 9:45 AM in KC909 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

M3-3: Is turn prediction accuracy across language and music dependent on the idiosyncrasies of one’s own experience?

Nina Fisher*(1), Lauren Hadley(2), Martin Pickering(1)
1:The University of Edinburgh, 2:The University of Edinburgh

According to prediction-by-simulation, people’s ability to predict an observed action is based on their experience of producing that action. Hence during an interaction, the accuracy of predicting a partner should vary according to how similarly that partner produces actions to oneself. Support for this theory comes from music performance, for which it appears that pianists are more accurate at making predictions about their own output than the output of others (Keller et al., 2007). Here we investigated whether people use their own production experience to predict the speech/music that they hear. We specifically tested whether they more accurately predict the timing of their own utterances and musical performances than those recorded from other people, and whether predictions of other people are affected by similarity to one’s own production style. Over two sessions three months apart, we recorded 30 pianists playing 120 melodic phrases on a MIDI keyboard and reciting 120 utterances. Three months later, we presented the pianists with their own recordings, recordings of a participant independently rated as similar to them, and recordings of a participant rated dissimilar to them. Participants predicted the end of the melodic phrases and spoken utterances, responding either by pressing a button or by producing a verbal/musical response. Results suggest no significant differences between responses to recordings made six months and three months prior to testing, which suggests that predictions were not reliant on memory of production. Analyses suggest that people were more accurate at predicting their own turn-ends than turn-ends produced by the similar and dissimilar participants. These findings suggest that prediction accuracy may be dependent on the idiosyncrasies of one’s own experience, providing support for the prediction-by-simulation account of turn-taking. Furthermore, it appears that this finding spans both language and music, suggesting that mechanisms involved in turn-end prediction may cross these domains.

Subjects: Music and language, Language and speech

When: 10:00 AM in KC909 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

M3-4: Spontaneous tempo in music and speech production: Domain-specific tuning of endogenous oscillations?

Peter Pfordresher*(1), Emma B Greenspon(1), Amy Friedman(2), Caroline Palmer(2)
1:University at Buffalo, SUNY, 2:McGill University

Recent evidence suggests that individuals are highly consistent in the spontaneous rate at which they produce music or engage in rhythmic tapping, and that these spontaneous tempos influence coordination across performers (e.g., Zamm, Wellman, & Palmer, 2016). Such results suggest that performance timing is driven by an endogenous oscillator characterized by a natural (spontaneous) frequency. Performers adapt this endogenous rhythm to stimulus-specific frequencies during a performance. We report two experiments that addressed whether speech production is also guided by endogenous oscillations, and if so, whether these oscillations are tuned to frequencies similar to those in music tasks. In Experiment 1, monolingual English-speaking pianists produced 13-syllable sentences organized into two phrases and performed 16-note melodies. In Experiment 2, English-French bilingual pianists produced 8-syllable English phrases and performed similar melodies to those in Experiment 1. Participants in both experiments produced all sequences at a self-selected comfortable rate. For both experiments, individuals were highly consistent in their spontaneous tempo for different melodies, as well as in speech production. Thus, the timing of speech, like music, may be based in part on an endogenous oscillator. Correlations of spontaneous timing across music and speech production, however, were weak and did not reach statistical significance. These results suggest that music and speech may rely on endogenous oscillators that are tuned to different natural frequencies, potentially based on constraints that reflect communicative pressures.

Subjects: Music and language, Beat, rhythm, and meter; Cross-domain effects; Performance

When: 10:15 AM in KC909 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

Session M4, Symposium: Musical Expression in the Eye of the Beholder

9:30-10:30 AM in KC914

M4-1: Musical expression in the eye of the beholder: Relating movement features to perception

Jonna K Vuoskoski*(1), Birgitta Burger(2), Marc Thompson(2), Petri Toiviainen(2)
1:University of Oslo, 2:University of Jyväskylä

In musical performance, emotional expression emerges from the interplay between the structural features of the music (i.e., the composition) and the expressive efforts of the performer. Previous research has shown that the body movements and gestures of the performer constitute an important source of expressive information, successfully communicating different expressive intentions to audiences. However, it is still unknown whether visual kinematic information about performers’ movements could also modulate perceived emotions. [P] This study aimed to investigate the relative contributions of auditory and visual cues to the communication of emotion in musical performance. A pianist and a violinist performed four short musical passages (composed to express sadness, happiness, threat, and peacefulness; Vieillard et al., 2008), each with four different emotional expressions: sad, happy, angry, and deadpan. The musicians’ movements were tracked using optical motion capture. [P] A total of 90 participants took part in three perceptual experiments, where they rated perceived emotions using four scales. There were four rating conditions: audio-only, video-only (with point-light animations generated from motion capture data), audiovisual, and time-warped audiovisual. In the time-warped condition, motion capture animations from all four expressive conditions were combined and synchronized with the audio of the deadpan performances. [P] Repeated-measures ANOVAs revealed that participants could accurately recognize emotional expressive intentions based on visual information alone. In the audio-only condition, Type of Composition (mean effect size = .40; generalized eta-squared; Bakeman, 2005) accounted for more variance than Expressive Intention (mean effect size = .17) in participants’ emotion ratings. In the audiovisual condition, the difference between mean effect sizes was reduced (Type of Composition = .36; Expressive Intention = .20), indicating that visual kinematic information enhanced the perceptual salience of expressive intentions. The time-warped audiovisual condition (where animations with different expressive intentions were paired with deadpan audio) also revealed that visual information could modulate perceived emotions.

Subjects: Music and movement, Audiovisual / crossmodal; Embodied cognition; Emotion

When: 9:30 AM in KC914 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

M4-1: The contribution of visual and auditory cues to the perception of emotion in musical performance

Jonna K Vuoskoski(1), Marc Thompson(2)
1:RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, Department of Musicology & Department of Psychology, University of Oslo, 2:University of Jyväskylä

In musical performance, the gestures and mannerisms of a musician can have a profound impact on the observer’s experience of the music. In this presentation, we investigate to what degree this holds true when presenting participants with silent videos of abstracted movement (stick figure animations derived from motion capture data). [P] A pianist and a violinist individually performed four pieces (composed to express sadness, happiness, threat, and peacefulness; Vieillard et al., 2008), each with four different emotional expressions: sad, happy, angry and deadpan. The 32 performances were tracked using optical motion capture. Subsequently, participants (piano group: n = 31; violin group: n = 34) viewed the performances as stick-figure animations and provided ratings of perceived happiness, anger, sadness, and tenderness for each performance. [P] From the 3D motion capture data for each performance, we computed variables based on the movements of the head, torso, shoulders, arms and hands. We computed the average velocity, acceleration and jerk in each direction (x, y, z) as well as the norms of the vectors. We compared these features with the averaged perceptual ratings. [P] Correlation analyses revealed that participants were likely to rate a performer’s intention high on happiness and anger when the movements were high in activity (e.g. quick movements, many changes in direction), while the performances were rated high on sadness and tenderness when the movements were low in activity. For both the pianist and the violinist, no single part of the body stood out as correlating more highly with the perceptual ratings than others. [P] The results support past research showing that the communication of emotional intentions in a musical performance is possible even when viewing the performances without sound and in an abstracted setting. They also indicate that the visual aspects of a performance are experienced as a single gestalt (as opposed to paying attention to individual parts of the body).
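
A minimal sketch of this kind of kinematic feature extraction (framewise velocity, acceleration, and jerk of one marker, summarized by their mean norms) follows; the sampling rate, marker choice, synthetic trajectory, and the omitted smoothing step are assumptions, not the study’s pipeline.

```python
# Sketch of kinematic feature extraction from 3D position data: framewise
# velocity, acceleration, and jerk via successive finite differences,
# summarized by the mean norm. Sampling rate, marker, and the synthetic
# trajectory are illustrative assumptions.
import numpy as np

def kinematic_features(positions, fs=120.0):
    """positions: (n_frames, 3) array of x, y, z marker coordinates (m).
    Returns mean norms of velocity (m/s), acceleration (m/s^2), jerk (m/s^3)."""
    vel = np.gradient(positions, 1.0 / fs, axis=0)
    acc = np.gradient(vel, 1.0 / fs, axis=0)
    jerk = np.gradient(acc, 1.0 / fs, axis=0)
    return {name: np.linalg.norm(arr, axis=1).mean()
            for name, arr in (("velocity", vel), ("acceleration", acc), ("jerk", jerk))}

# Hypothetical head-marker trajectory (n_frames x 3)
rng = np.random.default_rng(0)
head = np.cumsum(rng.normal(scale=0.001, size=(600, 3)), axis=0)
print(kinematic_features(head))
```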

Subjects: Music and movement, Audiovisual / crossmodal; Embodied cognition; Emotion

When: 9:30 AM in KC914 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

M4-2: Everything but the sound: Investigating the relationships between movement features and perceptual ratings of silent music performances

Marc Thompson(1), Jonna K Vuoskoski(2)
1:University of Jyväskylä, 2:RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, Department of Musicology & Department of Psychology, University of Oslo

Subjects: Music and movement, Audiovisual / crossmodal; Embodied cognition; Emotion

When: 9:45 AM in KC914 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

M4-3: Relationships between movement characteristics and perception of emotions in dance

Birgitta Burger(1), Petri Toiviainen(2)
1:Finnish Centre for Interdisciplinary Music Research, Department of Music, Art and Culture Studies, University of Jyväskylä, 2:University of Jyväskylä

Conveying emotions through movement is a crucial aspect of music performance and dance. In order to investigate how and which movement characteristics support emotion perception in quasi-spontaneous music-induced movements, this study aims to link perceptual emotion ratings with computationally extracted movement characteristics. [P] Stick-figure animations were created from motion capture recordings of four dancers moving to four musical stimuli representing happiness, anger, sadness, and tenderness. These 16 animations (20 seconds each) were subsequently presented as silent video clips to 80 observers, who were asked to rate the emotional content perceived in each performance according to the four aforementioned emotions on 7-point scales. Ratings were averaged across observers. From the motion capture data, 48 movement features related to velocity, acceleration, and jerk of head, hips, hands, and feet were extracted and averaged across dancers. [P] Correlational analysis suggested distinct relationships between the emotion ratings and the movement features. Happiness ratings correlated strongly with features related to feet and hips, anger ratings with head-related features (all positive correlations), while sadness ratings showed strong negative correlations with hip-related features (all |r(16)| > .74, p < .001). Tenderness did not reveal strong correlations (possibly due to systematic confusions, the tender stimuli being perceived as happy or sad instead). Subsequently, speed of head, hips, hands, and feet were used in step-wise regression models to predict the emotion ratings. Foot speed predicted happiness (R2 = .761, β = .872, p < .001) as well as sadness ratings (R2 = .402, β = -.634, p < .01), while speed of head and feet predicted anger (R2 = .712, β = .821/.489, p < .001/.01). Tenderness failed to be predicted successfully. Finally, multidimensional scaling of the correlation data located the four discrete emotions in opposite corners of the space, resembling the locations of these emotions in Russell’s dimensional model and Thayer’s emotion model. These results suggest that observers perceive emotions in dance movements in systematic ways that show clear links to movement characteristics.
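
As an illustration of the step-wise approach, a hedged sketch of forward selection by R2 gain on synthetic limb-speed features follows; the entry criterion, data, and feature names are placeholders, not the study’s procedure or results.

```python
# Illustrative sketch of forward step-wise regression of averaged emotion
# ratings on limb-speed features. The entry criterion (minimum R^2 gain),
# the synthetic data, and the feature names are assumptions for
# illustration only.
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    return 1.0 - resid.var() / y.var()

def forward_stepwise(features, y, min_gain=0.05):
    """Greedily add the feature giving the largest R^2 gain above min_gain."""
    selected, remaining, best_r2 = [], dict(features), 0.0
    while remaining:
        gains = {}
        for name in remaining:
            cols = [features[n] for n in selected + [name]]
            gains[name] = r_squared(np.column_stack(cols), y) - best_r2
        name, gain = max(gains.items(), key=lambda kv: kv[1])
        if gain < min_gain:
            break
        selected.append(name)
        best_r2 += gain
        del remaining[name]
    return selected, best_r2

rng = np.random.default_rng(1)
speeds = {part: rng.random(16) for part in ("head", "hips", "hands", "feet")}
happiness = 0.9 * speeds["feet"] + 0.1 * rng.random(16)  # toy ratings
print(forward_stepwise(speeds, happiness))
```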

Subjects: Music and movement, Audiovisual / crossmodal; Embodied cognition; Emotion

When: 10:00 AM in KC914 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

M4-4: Kinematics of perceived dyadic interaction in music-induced movement

Petri Toiviainen(1), Martín Hartmann(2), Tasos Mavrolampados(2), Emma Allingham(2), Emily Carlson(2), Birgitta Burger(2)
1:University of Jyväskylä, 2:Finnish Centre for Interdisciplinary Music Research, Department of Music, Art and Culture Studies, University of Jyväskylä

We studied the relationship between music-induced movement and perceptual ratings of similarity and interaction of pairs dancing to music. We hypothesized that dancers’ movements tend to be perceived as more similar when they exhibit spatially and temporally comparable movement patterns, and as more interactive when they spatially orient more towards each other. We ran an experiment in which dyads were asked to move freely to music excerpts while their movements were recorded using optical motion capture. Subsequently, we presented stick-figure animations of dyads to observers in two separate perceptual experiments, where the dyads’ level of interaction and similarity of movement were rated. In the first experiment (n=33), the movements of 12 female dyads (paired according to their self-reported trait empathy) were rated, and in the second experiment (n=50) 35 same-gender dyads were rated. [P] Mean perceptual ratings were analyzed with regard to three approaches for quantifying synchrony: temporal coupling, spatial coupling, and torso orientation. Temporal and spatial coupling measures were obtained by applying dynamic Partial Least Squares (PLS) on the movement data in order to deal with the multidimensionality, inter-subject variance, and nonstationarity of the movements. Correlations and partial correlations across dyads were computed between each estimate and the perceptual ratings. A systematic exploration showed that torso orientation is a strong predictor of perceived movement interaction even after controlling for other features, whereas temporal and spatial coupling are better predictors of perceived similarity. Our results suggest that, compared to similarity, interaction could be better predicted from features based upon just a few markers. In contrast, prediction of similarity seems to require the use of data-driven features focusing on full-body synchrony between dancers.
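
One plausible way to operationalize mutual torso orientation from a few markers is sketched below; the marker choice, the 2D simplification, and the sign conventions are assumptions and not necessarily those used in the study.

```python
# Sketch of one way to quantify torso orientation toward a partner: the
# angle between a dancer's facing normal (derived from two hip markers,
# projected onto the horizontal plane) and the direction to the partner.
# Marker choice and conventions are assumptions for illustration.
import numpy as np

def facing_normal(left_hip, right_hip):
    """Horizontal unit vector perpendicular to the hip line (2D, x-y plane)."""
    d = right_hip - left_hip
    normal = np.array([-d[1], d[0]])
    return normal / np.linalg.norm(normal)

def orientation_angle(left_hip_a, right_hip_a, centre_b):
    """Angle (rad) between dancer A's facing normal and the direction to B;
    0 means A faces B directly (up to the chosen sign convention)."""
    centre_a = (left_hip_a + right_hip_a) / 2
    to_partner = centre_b - centre_a
    to_partner = to_partner / np.linalg.norm(to_partner)
    cosang = np.clip(np.dot(facing_normal(left_hip_a, right_hip_a), to_partner), -1, 1)
    return float(np.arccos(cosang))

# Hypothetical single-frame 2D hip positions (metres)
print(orientation_angle(np.array([0.0, 0.1]), np.array([0.0, -0.1]), np.array([1.5, 0.0])))
```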

Subjects: Music and movement, Audiovisual / crossmodal; Embodied cognition; Emotion

When: 10:15 AM in KC914 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

Session N1, Perceived Emotion 1

2:30-3:30 PM in KC802

N1-1: The influence of interpretative choices on conveyed musical emotions

Aimee E Battcock*(1), Mike Schutz(1)
1:McMaster University

Why are audiences enthralled by one pianist’s interpretation of a well-worn sonata, yet turned off by another’s? How do juries select winners from different performances of the same piece? The answer lies in different interpretations of the music. Interpretation is an individualistic process based on a performer’s musical intentions (Palmer, 1997), which affect the emotional content conveyed. Previous research has examined the relations between cues and perceived emotion; however, it is still unclear how interpretative differences influence the listener’s understanding of conveyed emotion in music. In this series of exploratory experiments, we investigated differences in listeners’ perceived emotion for various interpretations of pieces by JS Bach. For each study, we exposed thirty non-musician participants to 48 excerpts of the Well-Tempered Clavier, performed by one of seven pianists. After each excerpt, participants rated perceived emotion on scales of valence and arousal. Our results indicate notable differences in the emotional responses to different interpretations of the same piece. Additionally, building on our past approaches, we used multiple regression and commonality analysis to examine how listeners use select cues (attack rate, pitch height and mode) across various musical interpretations. Overall, we found similar trends across interpretations in the relationship between cues and perceived valence and arousal, with variation in the relative weight of attack rate across performers. Comparing the fit of our three-cue model for listener ratings of emotion across performers, we find that the predictive value (R2) of the cues ranged from 51-78% for arousal ratings and 76-82% for valence ratings. Furthermore, we compared mean ratings of valence and arousal for each piece across performers, demonstrating variability in perception based on performer interpretation. These results demonstrate that performers’ interpretative decisions lead to differences in listeners’ perceived emotional experiences. We will discuss how these results inform our understanding of the role of performer interpretation in emotional communication in complex musical passages.
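
For readers less familiar with commonality analysis, the sketch below shows how the unique contribution of each of three cues can be obtained from the R2 of predictor subsets (the full partition adds the shared components); the data are synthetic placeholders, not the study’s ratings.

```python
# Minimal sketch of the subset-R^2 bookkeeping behind commonality analysis
# for a three-cue regression (attack rate, pitch height, mode): the unique
# contribution of a cue is R^2(all cues) - R^2(all cues except that cue).
# The data below are synthetic placeholders.
import numpy as np
from itertools import combinations

def r2(X, y):
    Xd = np.column_stack([np.ones(len(y))] + list(X))
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    return 1.0 - resid.var() / y.var()

def subset_r2(cues, y):
    """R^2 for every non-empty subset of cue names."""
    names = list(cues)
    return {frozenset(s): r2([cues[n] for n in s], y)
            for k in range(1, len(names) + 1)
            for s in combinations(names, k)}

def unique_commonality(cues, y):
    """Unique contribution of each cue: R^2(all) - R^2(all without that cue)."""
    table = subset_r2(cues, y)
    full = table[frozenset(cues)]
    return {n: full - table[frozenset(set(cues) - {n})] for n in cues}

rng = np.random.default_rng(2)
cues = {"attack_rate": rng.random(48),
        "pitch_height": rng.random(48),
        "mode": rng.integers(0, 2, 48).astype(float)}
arousal = 0.6 * cues["attack_rate"] + 0.2 * cues["pitch_height"] + 0.1 * rng.random(48)
print(unique_commonality(cues, arousal))
```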

Subjects: Emotion, Perception

When: 2:30 PM in KC802 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

N1-2: Live jazz audience members with greater perspective-taking ability more accurately identify musically expressed emotion

Omer Leshem*(1), Michael F Schober(1)
1:The New School

Do audience members with greater cognitive empathy (perspective-taking ability) more accurately identify an emotion that a jazz improviser intends to express during a performance? And are audience members with greater affective empathy more likely to feel the same emotions as the performer? This study explored these questions in a full-length solo improvised concert by pianist and Grammy nominee Andy Milne held at the Glass Box Theater, home of leading NYC jazz venue “The Stone.” Cognitive and affective reactions of audience members who were willing to participate (and were thus reimbursed their ticket price after the concert; n = 23) were measured as non-intrusively as possible. The method involved instructing the performer (who was unaware of what the manipulation would be) mid-performance with a paper note asking him to “perform a 3-5-minute improvised piece with the intention of conveying sadness.” Immediately afterwards, participants and the performer responded to a paper-and-pencil questionnaire from a first envelope under their seat. Participants first described the emotion the performer had intended to express in their own words, and then selected the intended emotion from a list. They also reported the emotions they had felt while listening using Izard’s Differential Emotions Scale (DES). At the end of the concert, participants answered demographic questions and filled out Davis’ Interpersonal Reactivity Index (IRI). Findings demonstrate that audience members with greater perspective-taking ability (n = 16) were more likely to accurately identify sadness as the expressed emotion, and less likely to overlap in felt emotion with the performer (who did not report feeling sad). Audience members who accurately selected “sadness” reported feeling marginally sadder than those who did not. Results replicate findings from solo lab studies in a concert setting, and demonstrate the viability of exploring empathy and collective cognition in an improvised live performance.

Subjects: Performance, Emotion

When: 2:45 PM in KC802 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

N1-3: Music influences the appreciation of contemporary art work

Bruna De Oliveira(1), Giulia Ventorim(1), Claudia Feitosa-Santana(2), Patricia Maria Vanzella(1)
1:Federal University of ABC, 2:Fundação Dom Cabral

Music evokes emotions, changing the activation of related structures in the brain. In addition, research shows that background music can influence art appreciation. However, little is known about the musical features that may cause this effect. Two possible factors are the emotional valence and arousal conveyed by the music, but this has not been tested before. In this study, we investigate whether different kinds of emotions conveyed by music influence the aesthetic experience. We selected four music excerpts of about 60 seconds that were tested in a previous study, each one corresponding to one of the quadrants in the Circumplex Model of emotions (positive or negative valence, and high or low arousal). The 142 participants of this study were visitors of the MAC USP (Museum of Contemporary Art, University of São Paulo, Brazil). All participants observed the same painting (“Composição Clara”, 1942, Wassily Kandinsky) while listening on headphones to one of the four music excerpts or in silence, resulting in one control and four experimental groups. A between-group design was adopted. During the first 40 seconds, participants listened to instructions about the experiment while the excerpt was played in the background. The music excerpt was then played in a loop during the observation period, until the participant decided to stop appreciating the painting. A questionnaire was completed immediately after the observation. We found a relationship between the emotional valence attributed to the art work and group condition. The results showed no effect of arousal. Experimental groups who listened to music of positive valence, but not negative, showed a more positive valence response towards the art work compared to the control group. This finding suggests that the emotion conveyed by music can play an important role in the aesthetic experience of a visual art work, thus contributing to discussions in neuroaesthetics and related fields.

Subjects: Aesthetics / preference, Emotion

When: 3:00 PM in KC802 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

N1-4: Tonics laugh, chromatics cry: children associate tonal hierarchy with emotional valence

Assaf Suberry(1), Zohar Eitan(2)
1:Levinsky College, 2:Tel Aviv University

Western music is largely governed by tonality, a system regulating continuity and closure in melodic and harmonic progressions. Converging measures have established the psychological reality of tonality as a cognitive schema generating specific expectations for continuity or closure. In children, such expectations gradually develop between ages 4-11. Importantly, while in adults deviations from tonal expectations have been associated with evoked and perceived emotion, little is known about such effects in children, who are assumed to rely mainly on basic psychoacoustic cues such as tempo, loudness, or pitch height to interpret musical emotion. Here we examine whether children associate realizations and violations of implied tonal closure with emotional valence, whether such associations are age- or gender-dependent, and whether they interact with other musical dimensions (instrumental timbre, pitch height). 52 children, aged 7 and 11, listened to stimuli composed of a chord progression implying closure followed by a probe tone. Probes could realize the closure implication (tonic note), violate it mildly (another diatonic scale-degree) or extremely (a chromatic, out-of-key note). Three timbres (piano, guitar, woodwinds) and three pitch levels were used. Stimuli were described to participants as exchanges between two children (chords, probe); for each stimulus, participants chose one of two emojis, suggesting positive or negative emotions, as representing the 2nd child’s response. Logistic regression, with emoji selection as the dependent variable, indicates a robust effect of tonal closure (F=45.29, p<.001), with no related interactions. Pitch and timbre effects were weaker, and interacted with age, gender, and each other. Findings suggest that tonality, a higher-level cognitive schema, affects children’s perception of emotion in music earlier, more robustly, and more stably than pitch and timbre, basic dimensions of auditory perception. Children reliably associate degrees of tonal closure and stability with levels of emotional valence, applying an implicit understanding of musical syntax to their interpretation of musical affect.

Subjects: Harmony and tonality, Emotion; Music and development

When: 3:15 PM in KC802 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

Session N2, Modeling Performance

2:30-3:30 PM in KC905/907

N2-1: Variations on a theme of eye-hand span: An integrated perspective on sight-reading skills

Yeoeun Lim*(1), Joel Popkin(2), Suk Won Yi(1)
1:Seoul National University, 2:University of Massachusetts Medical School

Sight-reading has been studied in terms of chronological processes, proficiencies, and variable factors, but integrated perspectives on the components of the sight-reading procedure have been less discussed. The present study explores the process of sight-reading in terms of three domains (musical, physiological, and behavioral) and the interrelationships among them. The domain indicators are musical complexity and playing tempo (musical domain), eye-hand span, i.e., the distance between a performer’s fixation and execution of a note (physiological domain), and performance accuracy (behavioral domain). A total of thirty professional pianists played four musical pieces with two different complexities (simple and complex) and tempi (slow and fast) during sight-reading. We measured the participants’ eye-hand span (in beats, seconds, and notes) and evaluated their performance accuracy with a dynamic time warping algorithm. We investigated correlations between eye-hand span and performance accuracy, as well as the influence of the musical variables on both. Interestingly, we found that eye-hand span did not simply covary with sight-reading accuracy. Rather, the relationship between eye-hand span and performance accuracy varied according to the difficulty of the sight-reading task. For a relatively easy task, for example, a longer eye-hand span was advantageous for a more accurate performance. In contrast, for a relatively difficult task, a shorter eye-hand span became the better strategy for an accurate performance. Our results suggest that eye-hand span is associated with performers’ perception of difficulty rather than their competence. Proficient sight-readers thus seem to be particularly skilled at adjusting their eye-hand span instead of always keeping it large.
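
A minimal sketch of the dynamic time warping idea used for scoring performance accuracy follows, here aligning a played pitch sequence with the score; the note representation and cost function are simplifying assumptions rather than the study’s exact procedure.

```python
# Minimal dynamic time warping (DTW) sketch: align a played note sequence
# with the score by classic dynamic programming. Representing notes as
# MIDI pitches and using an absolute-difference cost are simplifying
# assumptions for illustration.

def dtw_distance(seq_a, seq_b, cost=lambda a, b: abs(a - b)):
    """Classic O(len_a * len_b) dynamic-programming DTW distance."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = cost(seq_a[i - 1], seq_b[j - 1])
            D[i][j] = c + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

score_pitches = [60, 62, 64, 65, 67, 65, 64, 62]
played_pitches = [60, 62, 64, 64, 65, 67, 64, 62]  # one repeated, one missing note
print(dtw_distance(score_pitches, played_pitches))
```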

Subjects: Performance, Musical expertise

When: 2:30 PM in KC905/907 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

N2-2: Synchronization and Desynchronization in the Performance of Steve Reich’s Drumming: A Dynamical Systems Perspective

Ji Chul Kim*(1), Mike Schutz(2)
1:University of Connecticut, 2:McMaster University

Phasing is a compositional process in which identical phrases played on multiple instruments move in and out of phase. In Steve Reich’s Drumming (1971), percussionists play a rhythmic pattern in unison, then gradually shift out of synchronization until they reach a non-unison, interlocking pattern. The score calls for one of the musicians to slightly increase tempo (moving part) while the others hold a constant tempo (steady part). Previously, two of the world’s leading percussionists (Bob Becker and Russell Hartenberger) recorded this process in the naturalistic environment of the LIVE Lab at McMaster University. An analysis of the performance data showed that, rather than directly diverging, both musicians sped up and slowed down together throughout the process, contrary to their own impressions and intentions (https://maplelab.net/reich/). Here we provide a dynamical systems perspective on the non-monotonic trajectories in the performance data. Stable patterns of synchronization, such as unison and interlocking patterns, are attractors in interpersonal coordination. Phasing can be considered as traveling through a phase space with multiple attractors. The acceleration of both parts observed when they began to desynchronize (by accelerating the moving part) indicates an attraction back to synchronization. The deceleration observed when they approach a stable interlocking pattern reflects a relaxation towards a new attractor. We modeled the attractor dynamics in phasing performance with two coupled oscillators, one with a fixed natural frequency (steady part) and the other with an adaptive frequency controlled by the targeted phase difference (moving part). The instantaneous (actual) frequencies of the oscillators showed a pattern of acceleration and deceleration as found in the human performance data. Thus, the model successfully captured the dynamical interaction between the artistic intention of phasing and the obligatory tendency of interpersonal synchronization.
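
A loose sketch of this kind of two-oscillator arrangement appears below: a steady phase oscillator with a fixed natural frequency coupled to a moving oscillator whose frequency slowly adapts to track an advancing target phase difference. The equations and parameter values are illustrative assumptions and should not be read as the authors’ published model.

```python
# Loose sketch (Euler integration) of two coupled phase oscillators: a
# "steady" oscillator with a fixed natural frequency and a "moving"
# oscillator whose frequency adapts to track a slowly advancing target
# phase lead. Equations and parameters are illustrative assumptions, not
# the authors' published model.
import numpy as np

def simulate(T=240.0, dt=0.01, f_steady=2.0, K=0.6, eps=0.05):
    n = int(T / dt)
    th_s = th_m = 0.0                      # phases (rad)
    w_s = 2 * np.pi * f_steady             # fixed natural frequency (rad/s)
    w_m = w_s                              # adaptive frequency of moving part
    inst_freqs = np.zeros((n, 2))
    for i in range(n):
        target = 2 * np.pi * (i * dt / T)  # target phase lead ramps 0 -> 2*pi
        err = np.sin(th_s + target - th_m)
        dth_s = w_s + K * np.sin(th_m - th_s)   # steady part is still coupled
        dth_m = w_m + K * err                   # moving part chases the target
        w_m += eps * err * dt                   # slow frequency adaptation
        inst_freqs[i] = [dth_s / (2 * np.pi), dth_m / (2 * np.pi)]
        th_s += dth_s * dt
        th_m += dth_m * dt
    return inst_freqs

freqs = simulate()
print(freqs[::3000])  # instantaneous frequencies (Hz) sampled every 30 s
```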

Subjects: Performance, Beat, rhythm, and meter

When: 2:45 PM in KC905/907 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

N2-3: Measuring Intra- and Inter-Brain Dynamics during Joint Rhythmic Tasks

Rebecca Scheurich*(1), Alexander P Demos(2), Anna Zamm(1), Brian Mathias(1), Caroline Palmer(1)
1:McGill University, 2:University of Illinois at Chicago

Research has identified neural oscillations that entrain to auditory rhythms and support rhythmic behaviors (Nozaradan, 2014; Zamm, Debener, Bauer, Bleichner, Demos, & Palmer, 2018). We tested a novel implementation of Recurrence Quantification Analysis (RQA) for characterizing dynamics of cortical oscillatory activity measured with electroencephalography (EEG) within (intra-brain) and across (inter-brain) individuals during a joint tapping task. Eight participants tapped with a confederate partner in two rhythm conditions while EEG was recorded. The participant tapped at the same frequency across conditions; the confederate tapped at either half or twice the frequency of the participant, forming tapping ratios of 1:2 and 4:2 (confederate:participant). Stability of neural oscillations at the participant’s (constant) tapping frequency was examined across rhythm conditions. Data from a central and left-lateralized electrode (C1) were chosen as input to auto- and cross-recurrence analyses because this electrode yielded maximal power across tapping frequencies, tapping ratios, and partners. Auto- and cross-recurrence outcomes revealed greater intra- and inter-brain stability of oscillatory activity (more and longer returns to previous states within individuals and shared states across individuals) at the participant’s tapping frequency for participants and the confederate in the 1:2 condition (when the participant’s tapping frequency was the dominant or most often heard frequency). Recurrence plots revealed recurring regularities at approximately the period of the participant’s tapping frequency, as well as changes in period and phase relationships between partners’ brains over time. These findings suggest that RQA is a viable method for measuring intra- and inter-brain dynamics of joint rhythmic behaviors.
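
The core of auto-recurrence quantification can be sketched in a few lines: threshold a self-distance matrix of a time series and summarize it (here, by the recurrence rate). The synthetic signal, the radius, and the omitted embedding step are assumptions rather than the study’s settings.

```python
# Sketch of auto-recurrence quantification: build a thresholded
# self-distance matrix from a time series and compute the recurrence rate.
# The synthetic signal and radius are illustrative assumptions.
import numpy as np

def recurrence_matrix(signal, radius):
    """Boolean matrix R[i, j] = True when |x_i - x_j| < radius."""
    x = np.asarray(signal, dtype=float)
    return np.abs(x[:, None] - x[None, :]) < radius

def recurrence_rate(R):
    """Proportion of recurrent points, excluding the main diagonal."""
    n = R.shape[0]
    return (R.sum() - n) / (n * n - n)

# Hypothetical oscillatory amplitude envelope at the tapping frequency
t = np.linspace(0, 10, 1000)
signal = np.sin(2 * np.pi * 2.0 * t) + 0.1 * np.random.default_rng(3).normal(size=t.size)
R = recurrence_matrix(signal, radius=0.2)
print(round(recurrence_rate(R), 3))
```

Cross-recurrence between two participants’ signals is obtained analogously by comparing x_i with y_j instead of x_i with x_j.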

Subjects: Neuroscientific approach, Computational approach

When: 3:00 PM in KC905/907 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

N2-4: A Dynamic Model of Polyrhythmic Bimanual Coordination: Hebbian Plasticity and Long-Term Retention of Personal Styles

Ji Chul Kim*(1), Se-Woong Park(2), Dagmar Sternad(2), Ed Large(1)
1:University of Connecticut, 2:Northeastern University

Bimanual coordination is an essential skill required for playing musical instruments. Dynamic properties of bimanual movements in a 1:1, metrical (n:1) or polyrhythmic (n:m) fashion have been studied extensively in both the experimental and the modeling literature. Until recently, however, the processes involved in learning such motor skills have received relatively little attention. Here, we demonstrate that a coupled oscillator model with Hebbian plasticity can simulate multiple learning processes that were experimentally shown in different time scales with individually developed patterns. The model was motivated by previous experimental studies by Park and colleagues (2013, 2015) that tracked the acquisition of multifrequency (3:1 and 3:2) bimanual coordination over a 2-month practice period. The studies showed that different aspects of coordination, such as frequency ratio, inter-manual interference, and relative phase, progressed on multiple time scales and that individual participants not only developed their idiosyncratic patterns, but also retained this pattern for 6 months and up to 8 years. We developed a dynamical model consisting of two adaptive-frequency oscillators connected via plastic coupling. The multiple time scales documented in the human data were modeled with different rates of frequency adaptation and coupling plasticity. By adapting the oscillators’ natural frequencies, the system first converged to the targeted frequency ratio. The subsequent strengthening of the coupling for synchronization in the target ratio (3:1 or 3:2) relative to the coupling for the default 1:1 synchronization led to the slower change, corresponding to the slow reduction of inter-manual interference. Due to the neutral stability of the plastic coupling phase, the relative phase between the two oscillators converged to different values for different initial conditions. This may explain the variety of individual styles. This model readily generalizes to other polyrhythmic ratios and can explain their relative difficulties. Implications for music instruction and practice are discussed.

Subjects: Music and movement, Beat, rhythm, and meter

When: 3:15 PM in KC905/907 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

Session N3, Dance

2:30-3:30 PM in KC909

N3-1: How music moves us: The influence of salient acoustic features on continuous movements

Birgitta Burger*(1), Henkjan Honing(2), Benjamin Schultz(3)
1:University of Jyväskylä, 2:University of Amsterdam, 3:Maastricht University

Motivation: Dynamic changes in certain acoustic features are perceived as salient; they attract bottom-up attention and evoke subtle motor responses. We hypothesize that the motor system resonates with salient acoustic features, leading to overt movement. We used motion capture to examine continuous relationships between the movements of different effectors and salient acoustic features such as spectral features and harmonic change. Methodology: Participants (N=40) were recorded individually and instructed to move to thirty musical excerpts in a way that felt natural, and to dance if desired. Excerpts covered a range of genres and discrete musical features and were presented in a random order. Each pairing of effectors and acoustic features within a song was subjected to dynamic time warping, and the cross-correlation coefficient at lag zero was obtained. The Fisher-transformed cross-correlation coefficients were analyzed using a linear mixed-effects model with fixed factors Effector (3: head, hands, and feet) and Feature (5: acoustic intensity, spectral centroid, inharmonicity, onset strength, and spectral flux), and random factors Song nested within Participant. Results: For all effectors, significant correlations were demonstrated for vertical movements but not across the horizontal plane. Three different types of salience were identified: 1) foot movements corresponded to onset strength and spectral flux, which both reflect onsets and beats (beat salience); 2) hand movements corresponded to changes in acoustic intensity and spectral centroid, which represent continuous changes in amplitude and energy (dynamic salience); and 3) head movements related to changes in inharmonicity (harmonic salience). Implications: Salient acoustic features correspond to vertical motion during spontaneous dance, suggesting that such features may guide movement. Besides corresponding to the beat, these features relate to dynamic and harmonic change in the music. The results provide insights into how bottom-up motor responses to dynamic patterns of salience might be embodied and how motor programs of quasi-spontaneous dance movements develop.
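
A sketch of the per-song, per-effector statistic described above (the lag-zero cross-correlation and its Fisher transform) follows; the time-warping/alignment step is omitted and the series are synthetic, so this illustrates only the statistic itself.

```python
# Sketch of the per-song, per-effector statistic: Pearson correlation at
# lag zero between a movement series and an acoustic feature series,
# followed by the Fisher z-transform. The alignment/warping step is
# omitted and the series are synthetic placeholders.
import numpy as np

def lag_zero_corr(movement, feature):
    """Pearson correlation at lag zero between two equal-length series."""
    m = (movement - movement.mean()) / movement.std()
    f = (feature - feature.mean()) / feature.std()
    return float(np.mean(m * f))

def fisher_z(r):
    return np.arctanh(r)

rng = np.random.default_rng(4)
vertical_foot_velocity = rng.normal(size=600)
onset_strength = 0.5 * vertical_foot_velocity + rng.normal(size=600)
r = lag_zero_corr(vertical_foot_velocity, onset_strength)
print(r, fisher_z(r))
```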

Subjects: Music and movement, Beat, rhythm, and meter; Embodied cognition; Harmony and tonality; Music information retrieval; Timbre

When: 2:30 PM in KC909 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

N3-2: Multimodal Emotion Associations in Music and Dance

Lindsay Warrenburg*(1), Lindsey E Reymore(1), Daniel Shanahan(1)
1:Ohio State University

Dance and music have been associated with emotional responses in audience members. Predictors of these emotions have included structural features of the music and specific movements of dance (e.g., Juslin, 2000; Walk & Homan, 1984). This study focuses on three negatively-valenced emotions: grief, melancholy, and fear. Previous research has suggested that grief may act as an overt, social emotion, while fear and melancholy may act as covert, self-directed emotions (Huron, 2015). That is, grief may function to solicit assistance from others, whereas fear and melancholy may function to improve one’s own situational prospects. We hypothesize that there are more prosocial interactions in dance expressing grief compared to melancholy and fear. Four members of a professional dance troupe were recorded dancing together, without and with music, in response to prompts of melancholy, grief, and fear. In three ongoing studies, we investigate how viewers perceive emotions in dance, music, and multimodal (dance and music) performances. In the first study, we code how much the dancers touched each other during responses to each prompt. In the second study, we test the idea that viewers perceive more sociality among the dancers in grief prompts than in melancholy and fear prompts. Finally, we perform a content analysis of interviews with the dancers, which may suggest that they intended to be more prosocial while expressing grief as compared to melancholy and fear. We compare the results of the dance studies with Warrenburg and Huron’s (forthcoming) finding that grief music leads to more feelings that require social responses than melancholy music does. The aim is to provide support for the idea that both grieving music and dance lead to perceptions of sociality, consistent with the idea that grief may function as an ethological signal, whereas melancholy and fear may act as ethological cues.

Subjects: Emotion, Cross-domain effects; Music and movement; Performance

When: 2:45 PM in KC909 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

N3-3: Small-Group Interactions with Music and Others in Social Dance

María Marchiano*(1), Isabel Cecilia Martinez(1)
1:Laboratorio para el Estudio de la Experiencia Musical, Universidad Nacional de La Plata

Motivation. In musical environments, large groups of people synchronize on intra-personal, inter-personal, and inter-group levels (Clayton, 2013). Motion behaviour with music has been studied at the personal, dyadic (Carlson et al., 2018), and large-group levels in dance conditions. We aim to study personal and interpersonal small-group interactions with music in an electronic dance music (EDM) social context. Methodology. Stimulus: audiovisual recording of an EDM party in La Plata City, Argentina. Musical analysis: musicological analysis of form. Movement analysis: observation of a 4:26-minute Tech House track, with analysis of interactive behaviour and microanalysis of the leg motion patterns of 15 dancers. Results. Small-group interaction: people group together in circles (2-5 people each). Each group shows two kinetic behaviours: (i) a shared 2-beat/4-beat leg motion pattern in entrainment with the musical metre; and (ii) a dyadic dance-together behaviour inside the small group with momentary inter-personal movement synchronisation, prompted by intentional body contact or mutual gazes. Personal behaviour: most people change their leg movement at the beginnings of the main themes (signalled mainly by the salience of the bassline and the high-pitched percussion, among other timbral-textural features). Implications. Even though dancers’ movements show a common kinetic behaviour, and a personal, intentional kinetic alignment with musical features such as the metric and timbral-textural changes of the EDM style, interactions within small groups tend to be reduced to dyads, suggesting a transcendence of early traces of communicative musicality in adult social life. References. Carlson, E., Burger, B. and Toiviainen, P. (2018). Dance Like Someone is Watching. Music and Science, 1, 1-16. Clayton, M. (2013). Entrainment, Ethnography and Musical Interaction. In: M. Clayton, B. Dueck, and L. Leante (Eds.). Experience and Meaning in Music Performance. Oxford, UK: Oxford University Press.

Subjects: Embodied cognition, Music and movement

When: 3:00 PM in KC909 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

N3-4: How auditory cues travel in Argentine tango: Behavioral and perceptual evidence from the dancers to the viewers

Olivia Xin Wen*(1), Birgitta Burger(2), Joshua S Bamford(3), Vivian Zayas(1), Petri Toiviainen(4)
1:Cornell University, 2:University of Jyvaskyla, 3:Finnish Centre for Interdisciplinary Music Research, University of Jyväskylä, 4:University of Jyväskylä

Motivation: Argentine tango is a social dance that is closely tied to tango music, with its popularity rising around the world. The dance has been studied primarily in terms of dynamic coordination (Kimmel & Preuschl, 2016), interpersonal synchrony (Sevdalis & Keller, 2011), and neural activity during body movement (Brown, Martinez, & Parsons, 2005; Karpati, Giacosa, Foster, Penhune, & Hyde, 2015), giving little attention to the role of music in the dance. Two experiments explored how auditory cues are processed perceptually and behaviorally in Argentine tango. Methodology: Experiment 1 motion-captured 9 pairs of dancers from Finland using the silent disco paradigm in four auditory conditions (leader-follower): Music-Music, Music-Beat, Music-Silence, and Beat-Music. Experiment 2 presented the silent point-light videos from each auditory condition in Experiment 1 to 30 individual dancers from the U.S., who rank-ordered the videos based on the physical synchrony between the Finnish dancers. Results: Mixed-model analysis revealed that in Experiment 1, the condition in which the leader heard only beats resulted in less physical and perceived synchrony compared to conditions in which the leader heard the music. When the leader heard the music, the follower’s audio affected perceived synchrony for both the leader and the follower, but not their physical synchrony. In Experiment 2, the viewers’ rank ordering, which was based solely on the dancers’ physical movements, corresponded to the leader’s auditory conditions but not the follower’s. Implications: The findings suggest that the leader and the follower jointly dance to the leader’s auditory cues while inhibiting moving to the follower’s cues. This mechanism contributes to the interpersonal synchrony between tango dancers and their feeling of being in sync with each other. In addition, the process also works in reverse: viewers were able to identify the corresponding auditory condition of the leader based on observed interpersonal synchrony.

Subjects: Music and movement, Beat, rhythm, and meter

When: 3:15 PM in KC909 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

Session N4, Symposium: The ACTOR Project Part 1

2:30-3:30 PM in KC914

N4-1: Interdisciplinary Studies in Orchestration and Timbre: The ACTOR Project (2-part symposium proposal, SMPC 2019)

Jason Noble*(1), Kit V Soden(2), Stephen McAdams(1), Robert Hasegawa(1), Julie Delisle(1), Zachary Wallmark(3), Manda Fischer(4), Caroline Traube(5), Victor Cordero(6), Carmine-Emanuele Cella(7), Lawrence Marks(8), Étienne Thoret(1), Max Henry(1), Meghan Goodchild(9)
1:McGill University, 2:McGill University, CIRMMT, 3:Southern Methodist University, 4:University of Toronto, 5:Université de Montréal, 6:Haute école de musique Genève – Neuchâtel, 7:University of California, Berkeley, 8:Yale University, 9:Queen’s University

Orchestration and timbre have traditionally been relegated to secondary roles in music theory and analysis, due in part to their complex multidimensional natures and the technical challenges of studying them quantitatively. Scholarship in recent years, empowered by methodological and technological advances, has begun to embrace orchestration and timbre as focal areas of research. The ACTOR project (Analysis, Creation, & Teaching of Orchestration; https://www.actorproject.org/) represents a significant milestone, uniting researchers from around the world in interdisciplinary research on orchestration and timbre. The proposed two-part symposium presents research from ACTOR members (professors, post-doc researchers, and graduate students), organized around two broad themes: Analyzing Musical Timbre and Orchestration (Part 1), and Applying Musical Timbre and Orchestration (Part 2). [P] This first ACTOR session showcases various perceptually and cognitively driven approaches to analysis of orchestration and timbre, featuring contributions from the fields of acoustics, music theory, and music perception and cognition. The first presentation demonstrates how instrumental sounds can be compared using acoustical descriptors. The second coins the terms metatimbre and paratimbre to account for microvariations and complexes of closely related timbres, for example the range of timbres produced by a single musical instrument. The third reports perceptual experiments on the role of timbre in perceptual segregation in orchestral music. The fourth demonstrates how orchestration can be analyzed from the standpoint of auditory grouping principles, articulating an important component of the larger ACTOR project of developing a psychologically grounded theory of orchestration.

Subjects: Timbre, Audiovisual / crossmodal; Composition and improvisation; Computational approach; Corpus analysis/stu

When: 2:30 PM in KC914 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

N4-1: Playing techniques and timbre spaces: Comparing instrumental sounds with acoustical descriptors

Julie Delisle(1)
1:McGill University

Introduced by Grey in his work on timbre perception (1975), the concept of timbre space aims to compare instrumental sounds and to place them in a multidimensional space whose dimensions are correlated with perceptual aspects. This concept has been adopted by several researchers, mainly to compare timbres obtained from different instruments. But for a given musical instrument, it is possible to choose between several playing techniques and gestural parameters, leading to the production of a variety of sound colors. While it remains easy to provide a technical description of sound production modes, it is much more difficult to describe them acoustically and perceptually. [P] The aim of this paper is to present a methodology for the acoustical analysis and comparison of instrumental sounds using computational tools, thus creating acoustical timbre spaces. Here, the case of the flute will be presented. A selection of pre-recorded sounds, representative of various sound production modes, was used for this experiment. The sound instances were grouped into categories according to the following classification: long tones (continuous excitation), short tones (instantaneous excitation), and percussive techniques. For standard tones, the use of dynamics and of two different timbre “colors”, labeled as bright and round, was also explored. [P] The samples were analyzed through the extraction of 46 acoustical descriptors using the Timbre Toolbox (Peeters et al. 2011). A principal component analysis then allowed us to identify the descriptors that are most important for distinguishing between instrumental sounds. For each category, acoustical timbre spaces were obtained using graphical representations of the sound instances according to the PCA dimensions. [P] Results showed that acoustical descriptors related to timbre brightness (spectral centroid, roll-off, and spectral slope) and to temporal aspects were among the most important, along with the amount of spectral energy, spectral flux, spectral irregularity, and noisiness.
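
For readers who want to reproduce this kind of descriptor-space construction, the following is a minimal Python sketch using scikit-learn. The descriptor matrix and descriptor names are random placeholders standing in for values that would be exported from the Timbre Toolbox, and the number of components is arbitrary; this is an illustration of the general approach, not the author's analysis code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical descriptor matrix: one row per flute sample, one column per
# acoustical descriptor (e.g., spectral centroid, roll-off, spectral flux...).
rng = np.random.default_rng(1)
descriptors = rng.normal(size=(120, 46))          # 120 samples x 46 descriptors
descriptor_names = [f"desc_{i}" for i in range(46)]

# Standardize so descriptors on different scales contribute equally,
# then project the samples into a low-dimensional "acoustical timbre space".
X = StandardScaler().fit_transform(descriptors)
pca = PCA(n_components=3)
coords = pca.fit_transform(X)                     # coordinates in timbre space

print("variance explained:", pca.explained_variance_ratio_)

# Descriptors with the largest absolute loadings on each component indicate
# which acoustical dimensions best separate the sound-production modes.
for k, component in enumerate(pca.components_):
    top = np.argsort(np.abs(component))[::-1][:5]
    print(f"PC{k + 1}:", [descriptor_names[i] for i in top])
```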

Subjects: Timbre, Audiovisual / crossmodal; Composition and improvisation; Computational approach; Corpus analysis/stu

When: 2:30 PM in KC914 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

N4-2: On relationships of timbral properties of instruments across sections and families, and how to group them accordingly

Kit V Soden(1), Victor Cordero(2)
1:McGill University, CIRMMT, 2:Haute école de musique Genève – Neuchâtel

The use of the word timbre itself has been insufficient to describe both the macro archetype of the instrument and the micro variations of sound colour, since timbre describes multiple perceptual qualities. McAdams and Goodchild (2017) elucidate the fact that any given instrument has, in fact, hundreds of individual timbres: “a specific clarinet played with a given fingering (pitch) at a given playing effort (dynamic) with a particular articulation and embouchure configuration produces a note that has a distinct timbre” (p. 129). Composers are therefore not just working with “the clarinet timbre”; they are working with hundreds if not thousands of clarinet timbres. Brant (2009) attempts a classification based on archetypes of sound. Certain ranges and dynamics from various instruments correspond to a given “prototype timbre”; e.g., Brant’s “Wind-Group I: ‘Flute’ timbre” (p. 56) lists the available tone-qualities from the flute family, clarinet family, bassoon (in the high range only), horn (with fiber mute only), strings (harmonics only), and pipe organ (flute stops only). On a similar topic, Blatter’s (1997) instrument substitution list gives us another insight into timbral associations. [P] We propose the terms metatimbre and metatimbre-class as generalized updates of Brant’s “prototype” and Blatter’s “substitution” models, as well as the term paratimbre to represent the relationship between similar timbres that group together to form a metatimbre. [P] By using the term metatimbre, we attempt to differentiate between one specific micro variation of sound-colour (timbre) and the amalgam of the totality of one instrument’s timbral possibilities, or a grouping of related timbres across instruments (metatimbre). Paratimbre represents closely related timbres: two timbres that differ only slightly from one another, thus having a close paratimbral relationship. The instruments belonging to the same metatimbre-class have the same perceptual function and can be exchanged without altering the perceptual effect, as they are interchangeable.

Subjects: Timbre, Audiovisual / crossmodal; Composition and improvisation; Computational approach; Corpus analysis/stu

When: 2:45 PM in KC914 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

N4-3: The role of timbre in perceptual segregation in orchestral music

Manda Fischer(1), Kit V Soden(2), Stephen McAdams(3)
1:University of Toronto, 2:McGill University, CIRMMT, 3:McGill University

Differences in timbre (sound quality) serve as powerful cues for perceiving separate streams in music. Composers may intuitively capitalize on perceptual heuristics when selecting certain instrument combinations. To date, orchestration treatises have been based on musical examples selected by skill and intuition. This research aims to develop a psychological theory of orchestration by connecting orchestration practice to underlying perceptual grouping principles. [P] We select musical excerpts containing two streams, determined by music analysts, and consisting of different instrument families (woodwind, brass, string, or other). For each excerpt, we ask both musicians and nonmusicians to rate how perceptually segregated the two streams are. We find that heterogeneous instrument combinations yield greater perceptual segregation than homogeneous ones. Two follow-up experiments validate these findings and also suggest that (1) the degree of segregation within each stream does not directly predict global segregation between streams, but that (2) instrument combinations do. Specifically, by re-orchestrating the excerpts to string instruments only, we show that overall perceived segregation across excerpts is lower when timbral differences between streams are reduced. [P] Acoustic and score-based covariates are included in all statistical models to further contextualize the findings. The results show an effect of instrument family, as well as part-crossing, consonance, and rhythmic factors. Acoustic analyses reveal that spectral variation, spectral flatness, and spectral skew may be particularly important dimensions of timbre for the perceptual segregation of real music. Taken together, these findings provide a psychological basis for understanding how composers use timbral cues to shape listeners’ perceptions. In addition, this study provides empirical evidence that timbral differences can be operationalized in terms of instrument family combinations. This may serve as a high-level tool that composers use to shape listeners’ perceptions.

Subjects: Timbre, Audiovisual / crossmodal; Composition and improvisation; Computational approach; Corpus analysis/stu

When: 3:00 PM in KC914 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

N4-4: Orchestration analysis from the standpoint of auditory grouping principles

Stephen McAdams(1), Meghan Goodchild(2), Kit V Soden(3)
1:McGill University, 2:Queen’s University, 3:McGill University, CIRMMT

Most of the music we enjoy uses the musical qualities of different instruments to create specific perceptual, expressive, and emotional effects that composers sculpt over time. Timbre is the auditory attribute that distinguishes different instruments. Research on timbre perception has demonstrated that it is multifaceted and contributes in many ways to the perceptual organization of musical structures. The art of structuring music with timbre is traditionally called orchestration. A survey of orchestration treatises reveals the dearth of underlying theory, in sharp contrast to other traditional areas such as harmony and counterpoint, which have long theoretical traditions. We seek to develop a theoretical ground for orchestration practice starting with the structuring role that timbre can play in music. Many facets of musical structuring are achieved by auditory scene analysis, the perceptual grouping processes that: 1) fuse different acoustic components into events (e.g., instrumental blend), 2) integrate events into one or more auditory streams or other sequential groupings (e.g., surface textures or orchestral layers), 3) segment groups of events into motifs, phrases, and sections (e.g., antiphonal contrasts, section boundaries), and 4) form larger-scale units encompassing changes in orchestration that are extended over time (e.g., orchestral gestures). We propose a new taxonomy of orchestral effects based on these grouping processes and informed by orchestration techniques used by composers when structuring their music and alluded to in orchestration treatises. The roles that timbre plays in the manifestation of these principles in orchestration practice, and the insight it can provide for composers, music scholars, and music psychologists, will be considered as a point of departure for music analysis, with the aim of developing elements of a perceptually based theory of orchestration.

Subjects: Timbre, Audiovisual / crossmodal; Composition and improvisation; Computational approach; Corpus analysis/stu

When: 3:15 PM in KC914 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

Session O1, Perceived Emotion 2

3:45-4:45 PM in KC802

O1-1: Interrogating Reasons for Inter-rater Disagreement in Time-varying Music Emotion Perception

Simin Yang*(1), Mathieu Barthet(2), Elaine Chew(3)
1:Centre for Digital Music, Queen Mary University of London, 2:QMUL, 3:Centre for Digital Music, Queen Mary University of London, UK

Music perception studies show that the same music can communicate to listeners ranges of emotions that vary over time. Emotion recognition methods typically average human annotations or discard inconsistent ones. Here we set out to explore disagreement in perceived music emotion at multiple time scales, and to ask which auditory, demographic, or personality factors may explain differences across listeners. In order to better understand the dynamic aspects of emotional response to music, we collected time-varying emotion ratings (valence, related to pleasantness, and arousal, related to excitation) for a complex classical music piece in both live concert and controlled lab conditions. In the controlled lab condition, listeners provided explanations for their ratings retrospectively for seven pre-selected segments after rating the whole piece. Measures of personality (TIPI), music preference (STOMP), and musical sophistication (Gold-MSI) were also collected. We assessed the agreement between participants’ emotion ratings using the intra-class correlation coefficient (ICC) and inter-rater valence and arousal distances computed over time. Results show that the level of agreement varies significantly according to the character of each segment and salient passages within the segments. Several segments yield disagreement, with non-significant ICCs and high inter-rater valence and arousal distances. Overall, in line with other studies, stronger agreement was found for arousal than for valence ratings. Thematic analysis of participants’ explanations revealed that disagreement in arousal could be related to listeners attending to different musical features such as loudness, tempo, pitch, and instrument/timbre. Finally, analyses of variance (ANOVAs) at the second-to-second level showed statistically significant differences in both arousal and valence ratings associated with listeners’ openness to new experiences, preference for classical or pop music, cultural background, and gender.
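
As an illustration of the agreement measure, here is a minimal Python sketch computing ICCs with the pingouin package on a hypothetical long-format table of time-varying arousal ratings; the column names, simulated data, and choice of library are assumptions for illustration, not the authors' code.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Hypothetical long-format ratings: one arousal value per listener ("rater")
# and per one-second time window ("target") within a segment.
rng = np.random.default_rng(2)
windows, listeners = 30, 20
df = pd.DataFrame({
    "window": np.repeat(np.arange(windows), listeners),
    "listener": np.tile([f"L{i}" for i in range(listeners)], windows),
    "arousal": rng.normal(size=windows * listeners),
})

# The ICC quantifies how consistently listeners track the same time-varying
# profile within a segment; low or non-significant ICCs flag segments where
# listeners disagree.
icc = pg.intraclass_corr(data=df, targets="window", raters="listener",
                         ratings="arousal")
print(icc[["Type", "ICC", "F", "pval", "CI95%"]])
```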

Subjects: Emotion, Psychoacoustics

When: 3:45 PM in KC802 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

O1-2: Deconstruction of Perceived Emotional Expression in Music

Annaliese Micallef Grimaud(1)
1:Durham University

This paper focusses on the relationship between music and perceived emotions, specifically looking at how altering the structure and expressive cues of a musical composition affects the perceived emotional expression. This study identifies two issues in previous studies: familiarity bias in the musical examples used, and retaining the ecological validity of music while it is being altered to communicate different emotional expressions. To tackle both issues simultaneously, short musical pieces were specially composed, ones that can be regarded as ‘real music’ and hold ecological validity even when manipulated. Subsequently, an analysis-by-synthesis approach is taken, where participants personally alter features of these pre-composed musical pieces using a specifically-created computer interface. In this study, 42 participants were instructed to alter 6 parameters (tempo, mode, articulation, pitch, dynamics, and brightness) of 7 musical examples in order to convey a variety of emotional expressions (anger, sadness, fear, joy, surprise, calm, and power) for each musical piece. The values of the 6 parameters for each combination of musical piece and emotion were recorded. Results indicate a significant main effect of emotion and of target musical piece for each emotion. The mean parameter values for each emotion are mostly consistent with the past literature; however, distinct differences have also been identified in the way parameter combinations are used to convey emotions, which may shed new light on past studies. The analysed results will make it possible to discover how the structural parameters and expressive cues of a musical composition help to alter emotion perception in music, and will contribute to a better understanding of how music communicates emotions, with significant implications for musicology and also for applied fields such as music information retrieval, music therapy, music streaming services, and other fields where music is used to create immersive experiences.

Subjects: Psychoacoustics, Emotion

When: 4:00 PM in KC802 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

O1-3: Predicting emotion ratings for music versus sound using psychoacoustic features

David Sears*(1), Akbar Siami Namin(2), Keith Jones(1)
1:Texas Tech University, 2:Computer Science Department, Texas Tech University

Psychoacoustic features like loudness, timbre, and attack time contribute to the experience of emotions during listening (Eerola et al., 2009). Loud sounds, for example, tend to indicate physically large and/or proximal sources, thereby eliciting the perception of threat or danger (Guillaume, 2001). Auditory alarms also sometimes consist of high-frequency or dissonant sounds that minimize masking in noisy environments, and so tend to be perceived as more salient (McAdams & Drake, 2002). And yet despite the publication of numerous emotion data sets over the past two decades, little has been done to compare the role(s) these features might play in listening conditions involving either music or sounds. To address this issue, we present the findings from a meta-study that predicts the arousal and valence ratings from five data sets. The music corpus consists of an equal number of excerpts from the Soundtracks (Eerola & Vuoskoski, 2011), Experimental Music (Fan, Tatar, et al., 2017), Natural History of Song (Mehr et al., 2018), and EmoMusic data sets (Soleymani et al., 2013) (N=400; Duration <= 30 s). The sounds corpus consists of four semantic categories of sounds from the Emo-Soundscapes data set: human, nature, mechanical, and indicator (Fan, Thorogood, et al., 2017) (N=400; Duration = 6 s). To model the perceived emotion ratings from each corpus, we estimated partial least squares (PLS) regression models that predicted the normalized arousal and valence ratings using a large set of (psycho)acoustic features extracted with the MIRToolbox (Lartillot & Toiviainen, 2007). Both models produced significantly higher fits for the sounds corpus. Features associated with the computation of harmonic or speech-like sounds also played a larger role in predicting the sounds corpus, suggesting that the top-down identification of valenced sources (e.g., giggling infant) is more important for emotion perception in sounds vs music.
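
To illustrate the modelling step described above, the sketch below fits a partial least squares regression with scikit-learn on a random placeholder feature matrix; the feature count, component count, and cross-validation scheme are assumptions, and in the actual study the predictors were (psycho)acoustic features extracted with MIRToolbox.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# Hypothetical design: one row per excerpt, one column per acoustic feature;
# y holds the normalized arousal (or valence) ratings for the corpus.
rng = np.random.default_rng(3)
X = rng.normal(size=(400, 60))      # 400 excerpts x 60 acoustic features
y = rng.normal(size=400)            # normalized arousal ratings

# PLS projects the correlated features onto a few latent components chosen
# to covary maximally with the ratings, then regresses on those components.
pls = PLSRegression(n_components=5)
r2 = cross_val_score(pls, X, y, cv=10, scoring="r2")
print("mean cross-validated R^2:", r2.mean())

pls.fit(X, y)
# Coefficient magnitudes indicate which features carry the most weight.
top_features = np.argsort(np.abs(pls.coef_.ravel()))[::-1][:10]
print("most influential feature indices:", top_features)
```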

Subjects: Emotion, Psychoacoustics

When: 4:15 PM in KC802 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

O1-4: Are musical emotions different from emotions experienced in everyday life?

Diana Kayser*(1), Hauke Egermann(1)
1:University of York

Motivation: Utilitarian emotions have been in the foreground of research on experienced emotions in music. However, Scherer (2004) suggests that music evokes a wider scope of emotions, including aesthetic and epistemic emotions that lack the activation of the physiological reaction component due to their lack of associated behavioral tendencies. In this study we wanted to see whether self-reported aesthetic emotions evoked by music are accompanied by physiological changes normally associated with utilitarian emotions. We tested whether distinct facial expressions of emotion and physiological changes predicted participants’ subsequent ratings on the Aesthetic Emotions Scale (AESTHEMOS), which was developed by Schindler, Hosoya and Menninghaus (2017) to assess experienced emotions in an aesthetic context such as listening to music. Methodology: In a laboratory experiment, 39 participants (14 males, mean age 28 years, range 19-61 years) listened to 15 excerpts of film music alone, via headphones. We measured galvanic skin response, movement energy, and heart rate, and took video recordings of participants’ faces. Facial expressions of emotion were subsequently classified using automated face analysis software. Participants were asked to retrospectively rate their felt emotions on the AESTHEMOS. Results: Results indicate that the AESTHEMOS factors negative emotions, aesthetic emotions, animation, sadness, and nostalgia/relaxation could be predicted by changes in GSR, HRV, movement energy, and distinct facial expressions. Implications: Results show that some aesthetic emotions are accompanied by embodied components and can be predicted by various physiological changes normally associated with utilitarian emotions. We therefore conclude that physiological changes in various body activation parameters influence retrospective ratings of aesthetic experience. These findings therefore question the simple dichotomy between utilitarian embodied and aesthetic non-embodied emotions.

Subjects: Emotion, Embodied cognition

When: 4:30 PM in KC802 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

Session O2, Expert Performance

3:45-4:45 PM in KC905/907

O2-1: The relationship between motion patterns, performance precision, and expertise in a single-handed drumming task

Bryony Buck*(1), Gerard Breaden Madden(1), Scott Beveridge(2), Hans-Christian Jabusch(1)
1:Institute of Musicians’ Medicine University of Music Carl Maria von Weber, 2:Institute of High Performance Computing – Social & Cognitive Computing Department Agency for Science, Technology and Research

Motivation: Accurate high-speed drumming actions require performers to demonstrate high levels of motor control. Economic motor strategies incorporating whip-like movements have been shown to relate to greater levels of drumming expertise. Maintaining the temporal accuracy of successive strikes is likely to become more challenging at extreme tempi. Employing such strategies potentially facilitates greater precision when playing at high speeds. Aim: The current study examines movement strategies in single-handed drumming tasks with respect to tempo, experience, and expertise. The effects of movement strategies on task performance characteristics (timing variability, striking-velocity variability) are examined. Method: Expert and amateur drummers (ED, AD; n = 22) were recorded using 3D motion capture while performing single-handed drumming tasks at five tempi (80, 160, 240, 320, 400 hits per minute; HPM). Expertise-related variables were assessed using a questionnaire. Participant groups were matched for age and differed in experience and cumulative life practice time. The trajectories of markers placed over the metacarpophalangeal joint II, wrist, elbow, and shoulder were analysed along the axis perpendicular to the drum pad. Specifically, movement patterns of adjacent segments (hand, forearm, upper arm) were assessed with respect to performance parameters, tempo, and expertise criteria. Results: Clustered groups of in-phase and antiphase movements were observed across all adjacent segments, varying with tempo and expertise. In EDs’ high-tempo hand movements, antiphase patterns were generated more often than in-phase patterns (chi-squared tests, p < 0.01). Permutation ANOVAs revealed effects of tempo and expertise on performance parameters; for example, striking-velocity variability was lower in ED than in AD at 320 HPM (Mann-Whitney U = 16, p < 0.05). Conclusions/Implications: Antiphase hand movements occurred only at higher tempi and were observed most prominently in ED. Tempo-dependent antiphase movement strategies were less apparent in the forearm and upper-arm segments. These movement strategies are indicative of increased performance precision. The advantages of these movement patterns have implications for percussion pedagogy.
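
One common way to quantify in-phase versus antiphase coordination between adjacent segments is the relative Hilbert phase of their vertical trajectories. The short Python sketch below illustrates this idea on synthetic sinusoidal traces; the sampling rate, signals, and phase criterion are assumptions, and this is not a description of the authors' clustering and statistical pipeline summarized above.

```python
import numpy as np
from scipy.signal import hilbert

def relative_phase(seg_a, seg_b):
    """Mean relative phase (degrees) between the vertical trajectories of two
    adjacent segments; ~0 deg = in-phase, ~180 deg = antiphase."""
    phase_a = np.angle(hilbert(seg_a - seg_a.mean()))
    phase_b = np.angle(hilbert(seg_b - seg_b.mean()))
    dphi = np.angle(np.exp(1j * (phase_a - phase_b)))   # wrap to [-pi, pi]
    mean_dphi = np.angle(np.mean(np.exp(1j * dphi)))    # circular mean
    return np.degrees(abs(mean_dphi))

# Hypothetical vertical position traces (hand vs. forearm) sampled at 240 Hz.
t = np.arange(0, 5, 1 / 240)
hand = np.sin(2 * np.pi * 4 * t)              # 4 Hz oscillation (~240 HPM)
forearm = np.sin(2 * np.pi * 4 * t + np.pi)   # shifted by half a cycle

phi = relative_phase(hand, forearm)
print("antiphase" if phi > 90 else "in-phase", f"({phi:.0f} deg)")
```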

Subjects: Music and movement, Music education/pedagogy/learning; Musical expertise; Performance; Physiological measurement

When: 3:45 PM in KC905/907 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

O2-2: Does ‘Almost too serious’ mean ‘Almost too metrical?’ Two (of many) ways to perform the 2/8 meter in Robert Schumann’s ‘Fast zu ernst’, from ‘Kinderszenen’, op.15

Ira L Braus(1)
1:Hartt School/University of Hartford

Robert Schumann’s “Kinderszenen,” op.15, is a cycle of thirteen piano pieces that relive childhood through adult sensibilities. A composer obsessed with metrical dissonance, Schumann used this technique in every piece of the cycle. (Metrical dissonance is deliberate misalignment of 1+n metrical layers in a compositional texture; layers are recurrent isochronous groupings of x-cardinality in the texture that may be either integrally or non-integrally proportional.) The tenth piece, “Fast zu ernst” (‘Almost too serious’) stands out in this regard, since it has elicited a broad spectrum of metrical interpretations by pianists such as Martha Argerich, Walter Gieseking, and Vladimir Horowitz. While it will be impractical to compare the 100+ recordings of this piece along this dimension, we’ll juxtapose two that demonstrate the extremes on its metrical performance spectrum. The critical variable here is “displacement dissonance,” that is, the degree to which the performer allows the piece’s continuous (and notated) syncopation to displace the metrical (bar line) accent. The two recordings of interest exploit rubato, meaning flexibility of tempo associated with the performance practice of nineteenth-century European piano music. The extent to which such rubato engages displacement dissonance varies conspicuously across the two performances. At the end of my talk, I entertain the possibility that the title of the piece alludes ironically to the continuous disconnect between the metrical and non-metrical rhythmic layers of the piece.

Subjects: Performance, Expectation; Music theory

When: 4:00 PM in KC905/907 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

O2-3: Expressivity and creativity in expert musical performance: A case study of two elite cellists

Stacey Davis(1)
1:University of Texas at San Antonio

Previous research indicates that expert performance is characterized by both precision and flexibility, with artists able to consistently execute their interpretation of a piece across multiple performances while also exhibiting creativity and spontaneity during each individual rendition. Such studies typically compare repeated performances of a single piece within controlled laboratory settings or across short periods of time (e.g., Clarke, 1995; Chaffin, Lemieux, & Chen, 2007). The current study adds to this body of research by comparing professionally recorded performances that span many years and represent different stages of an artist’s career, thereby examining creativity as a product of both intuitive and deliberate interpretative choices. Data was collected from commercial recordings of Bach’s cello suites by two elite performers, Yo-Yo Ma and Pieter Wispelwey, each of whom has recorded the entire collection three times. Recordings were made on both modern and period instruments between 1983 and 2018, which span the ages of 28-62 and 28-50 respectively. Comparisons between movements and recordings were made after using Sonic Visualiser software to measure expressive timing, dynamics, articulation, and vibrato. Data was also collected from publicly-available interviews with both cellists, which supplement the quantitative measurements with personal insights about differing expressive intentions over time and across recordings. Results confirm the partnership between precision and flexibility that is indicative of expert performance. The location of expressive nuances relative to musical structure is typically consistent across recordings, with creative differences occurring in the frequency and magnitude of those nuances. Variations in timing and dynamics are often more pronounced or dramatic over time, perhaps reflecting the impact of increased musical maturity on expressivity and creativity. Some movements also differ in overall tempo and articulation, with a tendency toward slower tempos in the later recordings.

Subjects: Performance, Music theory; Musical expertise

When: 4:15 PM in KC905/907 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

O2-4: Violinists employ more expressive gesture around musical resolutions: a motion capture study

Aditya Chander*(1), Madeline Huberth(1), Stacey Davis(2), Samantha Silverstein(3), Takako Fujioka(3)
1:Stanford University, 2:University of Texas at San Antonio, 3:Center for Computer Research in Music and Acoustics, Stanford University

Studies have shown that performers use expressive physical motion to embody musical features. When such motion is non-essential to playing an instrument, it is termed ancillary or non-technical motion. Previous work on non-technical motion at phrase boundaries has focused primarily on pianists and wind players, measured at specific body parts. The present study examines violinists’ whole-body motion around phrase boundaries pertaining to musical resolution. Seven violinists performed a transcription of the Allemande from Bach’s Partita for Solo Flute, BWV 1013, which consists entirely of isochronous sixteenth-notes (except at the boundary concluding the first reprise). Motion was recorded using 12 infrared cameras, with markers on the major joints of the whole body, bow, and violin. We simultaneously recorded the audio from each performance, then used note onsets extracted from the audio to analyse the motion position data using the MoCap Toolbox. Based on a music-theoretic analysis, we identified four structurally important cadences of the piece and examined surrounding segments that build up to and follow each of those cadences. Additionally, the cadences were characterised based on whether the resolution overlaps with the beginning of the subsequent musical phrase. Our hypothesis was that violinists would use more non-technical motion around moments of musical resolution than at transition passages. The primary non-technical principal component of motion, left-to-right whole body swaying, showed significant main effects of build-up versus resolution and no-overlap versus overlap (p < 0.05 in each case). The two-way interaction was also significant: the contrast between non-technical motion during build-up and resolution was exaggerated when no-overlap occurred. This extends previous findings on phrasing by highlighting the embodied nature of resolution towards the conclusion of a group of overlapping musical phrases.

Subjects: Embodied cognition, Audiovisual / crossmodal; Computational approach; Music and movement; Music theory; Performance

When: 4:30 PM in KC905/907 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

Session O3, Development 2

3:45-4:45 PM in KC909

O3-1: Infants Mismatch Response to Omitted Sounds

David Prete(1)
1:McMaster University

The mismatch negativity (MMN) is an event-related potential elicited by infrequent changes (deviants) to a repeating sequence of sounds (standards), such as a change in pitch. In infancy, the MMN has been elicited by deviations in many sound features, including pitch and intensity. Traditionally, the MMN in infants has been interpreted as reflecting prediction error between the expected standard sound and the unexpected deviant sound (i.e., the predictive coding hypothesis). However, the MMN may instead reflect suppressed neural firing to the frequent sounds that returns to baseline when the deviant sound is presented (i.e., the neural adaptation hypothesis). Eliciting the MMN in response to an unexpectedly omitted sound would support the predictive coding hypothesis, as there would be no release from adaptation in the absence of a physical stimulus. Though omissions have been tested in adults, few studies have investigated how infants respond to omissions. To test these hypotheses, we collected electroencephalography (EEG) data from adults and 6-month-old infants during an auditory oddball paradigm. We presented a sequence of piano tones consisting of a standard (C4, 236 Hz), a pitch deviant (F4, 351 Hz), and an omission deviant. Deviants occurred randomly within the sequence and comprised 20% of all trials (10% pitch deviants and 10% omissions). The MMN was elicited by pitch and omission deviants in both age groups; however, the MMN to omissions was smaller in adults than in infants. Furthermore, pitch deviants elicited a larger MMN than omission deviants. Overall, our data support the predictive coding hypothesis in both adults and infants, indicating that early in development the brain is actively trying to predict incoming stimuli and updating these predictions when there is an error. Future studies could use omission deviants to explore how infants process complex patterns and how early in development predictive coding occurs.
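
For concreteness, the sketch below generates an oddball trial sequence with the proportions described (80% standards, 10% pitch deviants, 10% omissions). The trial count, seed, and labels are arbitrary illustrations of the paradigm, not the authors' stimulus-presentation code.

```python
import numpy as np

def make_oddball_sequence(n_trials=500, p_pitch=0.10, p_omit=0.10, seed=0):
    """Pseudo-random oddball sequence with the proportions described:
    'standard' (C4 piano tone), 'pitch' deviant (F4), or 'omission'
    (a silent gap of the same duration)."""
    rng = np.random.default_rng(seed)
    return rng.choice(
        ["standard", "pitch", "omission"],
        size=n_trials,
        p=[1.0 - p_pitch - p_omit, p_pitch, p_omit],
    )

seq = make_oddball_sequence()
labels, counts = np.unique(seq, return_counts=True)
print(dict(zip(labels, counts / len(seq))))   # roughly 0.8 / 0.1 / 0.1
```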

Subjects: Music and development, Expectation; Neuroscientific approach

When: 3:45 PM in KC909 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

O3-2: Analysis of infant vocalisations in a structured context of music classes

Helga R Gudmundsdottir(1)
1:University of Iceland

Studies on infants are mostly conducted in the controlled environment of a laboratory. In the past, such studies have revealed remarkable abilities of infants in terms of music perception. Infants are considered passive knowers of music and musical elements; hence, their production abilities in music are much less studied than their perception of music. Recent studies on singing in toddlers suggest that their renditions of songs from their culture preserve melodic and rhythmic patterns quite well, making their songs easily recognisable to adults. This raises questions about the developmental period leading up to toddlerhood: how does an infant practice vocal skills in a musical context before gaining the ability to sing songs? Such infant production of music is not easily studied in a laboratory and requires a semi-controlled situation where music is practiced in an environment familiar to the infants. The present study is based on recordings of 8-month-old infants in music classes with their parents, attending ten 45-minute classes of infant-directed music lessons with a fixed sequence of musical activities. Vocalisations were analysed for category type (positive, negative, imitative, etc.), intensity, and duration. The validity of the categories was tested with non-expert adult listeners. The types of vocalisations were linked with the types of ongoing activities in order to establish whether vocalisations occurred randomly or followed patterns. The results suggest that different musical activities had significantly different effects on the type and intensity of infant vocalisations. Some activities induced exclamatory vocalisations while other activities elicited imitative elements. The imitative elements could be indications of early singing attempts. An interesting finding was the occurrence of silence as an attentive signal during particular activities. Studies on infants in semi-structured conditions are an important method of eliciting data on musical behaviour and development in the very early stages of development.

Subjects: Music and development, Music education/pedagogy/learning

When: 4:00 PM in KC909 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

O3-3: Auditory and Auditory-Motor Timing Deficits in Children with Developmental Coordination Disorder

Chantal Carrillo*(1), Andrew Chang(1), Yao-Chuen Li(2), Jennifer Chan(3), John Cairney(3), Laurel Trainor(1)
1:McMaster University, 2:China Medical University, 3:University of Toronto

Developmental coordination disorder (DCD) is a neurodevelopmental disorder involving deficits in motor coordination, affecting 5-6% of children. Children with DCD show deficits in visual-motor and motor timing, but auditory timing has not been well studied despite its importance for speech and music. Given previous research showing that motor areas are involved in auditory time perception, we hypothesized that children with DCD would also have impaired auditory timing perception. Our first study measured discrimination thresholds for duration timing, rhythm timing, and pitch (control task). We found that children with DCD aged 6-7 (n = 20) have larger discrimination thresholds for duration (p = 0.009) and rhythm-based timing (p = 0.012), but not for pitch, compared to typically developing (TD) children (n = 27). We also found that electrophysiological responses (MMN or P3a) to occasional changes in duration or rhythm timing are delayed in children with DCD compared to typical controls (n = 27 DCD, 27 TD). Our second study explores whether auditory rhythmic stimuli can help children (6-7 years) with DCD to execute rhythmic motor movements. We are testing motor entrainment with the following tapping tasks: maintaining steady tapping alone; tapping with a metronome (at 400, 550, and 700 ms inter-onset intervals); continuation tapping (maintaining tapping after the metronome stops); and tapping to the beat of musical excerpts (Beat Alignment Test; 400-600 ms inter-onset intervals). We hypothesize that tapping in children with DCD will be more variable in both phase and tempo compared to TD children, but that the differences between groups will be diminished when an auditory stimulus is present, that is, when tapping with a metronome or to musical excerpts. Data collection for the second study is ongoing. The results are important for informing whether auditory-motor training may confer additional benefit for children with DCD compared to conventional interventions based on motor function.

Subjects: Beat, rhythm, and meter, Music and movement

When: 4:15 PM in KC909 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

O3-4: Beat Perception in Children with Specific Language Impairment and Typical Developing Peers: an EEG Investigation

Leyao Yu*(1), Anna Kasdan(1), Olivia Boorom(2), Devin McAuley(3), Reyna Gordon(2)
1:Vanderbilt University, 2:Vanderbilt University Medical Center, 3:Michigan State University

According to Dynamic Attending Theory (DAT), neurons entrained to auditory stimuli exhibit neural oscillations aligned with rhythmic patterns in music and generate temporal expectancies for future events. Dynamic attending studies have been done with typically developed adults, but little is known about the brain entrainment process in children, especially children with Specific Language Impairment (SLI). SLI is a communication disorder characterized by difficulties with acquiring grammar and vocabulary. Children with SLI have deficits in rhythm perception, along with lexical and grammatical impairments. This study investigated differences in brain responses between children with SLI and their typically developing (TD) peers under the DAT framework in order to identify additional evidence for robust, automatic brain entrainment to rhythmic stimuli. Participants were children with SLI (N = 14, mean age = 6.64 years) and TD children (N = 66, mean age = 6.67 years). Using electroencephalography, the study measured passive brain responses evoked by two different rhythmic patterns. Cluster-based permutation tests were applied in the ERP and time-frequency domains. The TD group had two clusters indicating differences between conditions, but the SLI group had one negative cluster with shorter latency, indicating a relatively weak sensitivity to metrical structure (TD: positive cluster 0.062 – 0.200 s, p < .001, negative cluster 0.206 – 0.398 s, p < .001; SLI: negative cluster 0.252 – 0.396 s, p < .001). Both evoked beta and gamma activities showed early entrainment to tones. Moreover, the asymmetry of the evoked beta activity provided evidence for its contribution to metrical interpretation. Overall, these results support the DAT framework and provide a neural basis for passive beat perception in children with typical and atypical language development, opening a new way to investigate the relationship between neural responses and behavioral measures.
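
Cluster-based permutation testing of this kind is commonly run with MNE-Python; the sketch below shows the general approach on simulated single-channel trial data. The data shapes, effect size, default threshold, and use of MNE are assumptions for illustration, not a description of the authors' pipeline.

```python
import numpy as np
from mne.stats import permutation_cluster_test

# Hypothetical evoked data: trials x time samples for each condition
# (e.g., responses to the two rhythmic patterns) in one group of children.
rng = np.random.default_rng(4)
n_times = 300                               # e.g., 600 ms at 500 Hz
cond_a = rng.normal(size=(60, n_times))
cond_b = rng.normal(size=(60, n_times))
cond_b[:, 100:180] += 0.4                   # simulated condition difference

# The test finds clusters of adjacent time points where the conditions differ,
# then evaluates each cluster against a permutation null distribution,
# controlling for multiple comparisons across time.
F_obs, clusters, cluster_pv, _ = permutation_cluster_test(
    [cond_a, cond_b], n_permutations=1000, tail=1, seed=4, out_type="mask"
)
for mask, p in zip(clusters, cluster_pv):
    if p < 0.05:
        samples = np.flatnonzero(mask)
        print(f"cluster from sample {samples[0]} to {samples[-1]}, p = {p:.3f}")
```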

Subjects: Beat, rhythm, and meter, Expectation; Music and language; Neuroscientific approach

When: 4:30 PM in KC909 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

Session O4, Symposium: The ACTOR Project Part 2

3:45-4:45 PM in KC914

O4-1: Interdisciplinary Studies in Orchestration and Timbre: The ACTOR Project — Part 2: Applying Musical Timbre and Orchestration

Caroline Traube(1), Zachary Wallmark(2), Lawrence Marks(3), Robert Hasegawa(4), Étienne Thoret(4), Max Henry(4)
1:Université de Montréal, 2:Southern Methodist University, 3:Yale University, 4:McGill University

Orchestration and timbre have traditionally been relegated to secondary roles in music theory and analysis, due in part to their complex multidimensional natures and the technical challenges of studying them quantitatively. Scholarship in recent years, empowered by methodological and technological advances, has begun to embrace orchestration and timbre as focal areas of research. The ACTOR project (Analysis, Creation, & Teaching of Orchestration; https://www.actorproject.org/) represents a significant milestone, uniting researchers from around the world in interdisciplinary research on orchestration and timbre. The proposed two-part symposium presents research from ACTOR members (professors, post-doc researchers, and graduate students), organized around two broad themes: Analyzing Musical Timbre and Orchestration (Part 1), and Applying Musical Timbre and Orchestration (Part 2). [P] This second ACTOR session discusses compositional and performative applications of musical timbre, as well as cross-domain, multimodal, and metaphorical interpretations of musical timbre and orchestration. The first presentation analyzes multimodal aspects of perception of piano timbre. The second describes cognitive linguistic bases for cross-modal associations between visual and timbral brightness, with supporting evidence from perceptual studies. The third describes applications of timbral and emergent acoustical phenomena in a contemporary composition, Pascale Criton’s Wander Steps (2018). The fourth discusses metaphorical associations in contemporary “sound-based” music, drawing on perceptual data and acoustical analyses.

Subjects: Timbre, Audiovisual / crossmodal; Composition and improvisation; Computational approach; Corpus analysis/stu

When: 3:45 PM in KC914 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

O4-1: Multimodal production and perception of piano timbre

Caroline Traube(1), Felipe Verdugo(1), Justine Pelletier(1)
1:Université de Montréal

When pianists reach a high level of expertise, they demonstrate the ability to modulate piano timbre in very subtle ways. This rich timbre space, verbalized through a vast lexicon including descriptors such as round, thin, metallic, warm, glassy, velvety and shimmering, is often opposed to a scientific perspective which reduces the control of piano timbre to the speed of attack as the only significant parameter, thereby excluding the possibility that piano timbre could be varied independently of intensity. In this paper, based on several studies conducted on piano timbre production and perception, we propose to investigate the different factors that explain the complexity of this multimodal and experiential phenomenon. First, we examine how piano timbre is constrained by the various physical interactions within the complex chain made up of the key, action mechanism, hammer, string and soundboard. In particular, keys can be depressed to different depths, which varies the amount of impact noise of the key against the keybed, and the profile of the key descent can determine the timing and amplitude of the oscillatory flexion of the hammer shank. Piano timbre can also be considered as resulting from the complex interaction of performance parameters such as timing, dynamics and articulation. The use of pedaling expands timbral possibilities, as it can be applied at different depths (e.g. half-pedaling) and in a complex interaction with timing and articulation. Perception of piano timbre calls on several other sensory modalities in different ways, in both top-down and bottom-up sensorimotor and cognitive processes. While playing, auditory perception is integrated with vision, touch and proprioception. Depending on the character of the piece, pianists inhabit piano timbre with poetic meaning (e.g. thinking of glass or ice), which in turn influences their playing, as the whole musculoskeletal chain, from feet to fingers, is involved in sound production.

Subjects: Timbre, Audiovisual / crossmodal; Composition and improvisation; Computational approach; Corpus analysis/stu

When: 3:45 PM in KC914 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

O4-2: “Bright” timbres modulate visual brightness discrimination

Zachary Wallmark(1), Lawrence Marks(2)
1:Southern Methodist University, 2:Yale University

Timbre is often conceptualized crossmodally in reference to our sense of vision. Despite the ubiquity of adjectives such as “bright” and “dark” in the timbre discourse, however, little is known about the cognitive linguistic bases for this semantic overlap. In recent work, Wallmark (2019) demonstrated mild semantic crosstalk between timbre perception and crossmodal adjectives using a modified Stroop task. This suggests that task-irrelevant timbral dimensions may interfere with semantic processing when timbral and semantic frames are misaligned (e.g., hearing a “bright” sound but seeing the word DARK). [P] In this paper we explore a related question: Can timbre modulate visual perception? Participants (N = 140) with a diverse range of musical backgrounds were shown a gray baseline square; then they were presented with a timbre prime consisting of either a “bright” or “dark” tone. Next, a target square was presented that either varied slightly from the baseline (darker or brighter) or was the same. Participants were told that the difference between baseline and target was very small, but still discriminable by most people. A forced-choice response required that they identify the target as either darker or brighter than baseline. Results suggest that participants were significantly more likely to see the identical target square as brighter than the baseline after being primed with “bright” timbres. A follow-up logistic regression suggested that this brightness response bias was exclusive to trained musicians. We interpret these findings through the lens of the semantic coding hypothesis (Martino & Marks, 2000), which suggests that automatic synesthetic audio-visual correspondences may interact with learned semantic representations. We close by sharing results from a follow-up experiment examining the effects of timbral primes on visual texture perception, and discuss the implications of these findings for our understanding of timbre, orchestration, and musical affect.

Subjects: Timbre, Audiovisual / crossmodal; Composition and improvisation; Computational approach; Corpus analysis/stu

When: 4:00 PM in KC914 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

O4-3: Timbre, interference effects, and room acoustics in Pascale Criton’s Wander Steps

Robert Hasegawa(1)
1:McGill University

Contemporary compositions often present unique challenges to music theory and analysis, demanding an interdisciplinary approach incorporating tools from music cognition, psychoacoustics, and acoustics. One such challenging work is Wander Steps by the French composer Pascale Criton (b. 1954), written in 2018 for the Duo XAMP and their unique microtonal accordions. [P] In Wander Steps, a minimally notated score gives rise to a rich and detailed sonic result. The notated rate of change is very slow, with written notes often lasting a minute or more. However, the real musical drama unfolds in the timbral variations and unusual interference effects that arise between the two accordions. Criton’s score favours the emergence of complex acoustical phenomena such as combination tones, near-unison beating, and phasing effects. [P] These phenomena are delicate, and must be carefully adjusted by ear during performance. Tiny changes in dynamics or intonation can drastically shift the resultant interference effects. The work also draws attention to the acoustics of the space in which it is performed. The performers are encouraged to adopt an “eco-sensitive” interpretation, reacting flexibly to the response of the concert hall. Wander Steps is thus not only a musical composition, but also a real-time experimental investigation of the hall’s acoustics. [P] Traditional analytical tools, based on a conception of the note as a discrete, quantifiable unit, break down when confronted with works based on timbre and emergent acoustic phenomena. My analysis of Wander Steps draws on spectrograms and audio descriptors to examine the sonic effects central to the work’s impact. The role of concert hall acoustics is explored through the comparison of multiple recordings in different spaces. To close, I categorize the compositional strategies underlying Wander Steps in a guide for composers and performers interested in exploring similar acoustic phenomena.

Subjects: Timbre, Audiovisual / crossmodal; Composition and improvisation; Computational approach; Corpus analysis/stu

When: 4:15 PM in KC914 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

O4-4: Metaphorical Associations in Sound-Based Music as Mappings between Acoustical Properties and Semantic Domains

Jason Noble(1), Étienne Thoret(1), Max Henry(1)
1:McGill University

Contemporary “sound-based” music presents rich opportunities for semantic study: in attenuating note-based organization, such music shifts the nexus of association from conventional musical units such as melodies onto properties such as timbre and texture. Metaphorical associations in such cases may consist in acoustical-to-semantic cross-domain mappings explicable through acoustical analysis. [P] In a perceptual experiment, 38 participants rated 40 excerpts of contemporary music along 23 semantic scales, e.g., rating the extent to which the music suggested “machinery,” or seemed “kaleidoscopic.” Significant consistency was observed between participants in associating these semantic domains with musical excerpts. Excerpts that were rated highly for the same semantic scales tended to have similar musical properties (e.g., “kaleidoscopic” excerpts tended to be timbrally heterogeneous and internally dynamic). More precise descriptions of the stimuli were sought through acoustical analyses. [P] Summary statistics of classical acoustical timbre descriptors were computed and correlated with ratings through multiple linear regressions, but these explained only a small proportion of the variance. An alternate, data-driven approach was adopted to fit mathematical distances between neuromimetic representations such as Spectro-Temporal Receptive Fields to those between human ratings, yielding robust correlations between the two. However, it remains unclear what acoustical information contained in this representation contributes to the simulation of human ratings. A complementary approach was provided by a model of cochlear and mid-level processing of sound textures based on summary statistics. The model allows for resynthesis of a given texture based on its measured statistics, confirming that the model captures the relevant acoustical information. However, it has proven to be more effective for some kinds of musical textures than others, a subject of ongoing investigation. [P] Through initial analyses with classical audio descriptors followed by alternative approaches with neuromimetic models, we have made first steps towards understanding how subjects map acoustically rich, sound-based musical textures onto semantic domains.

Subjects: Timbre, Audiovisual / crossmodal; Composition and improvisation; Computational approach; Corpus analysis/studies

When: 4:30 PM in KC914 on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

Poster session P1

3:30-4:45 PM in Rosenthal Pavilion

P1-1: Implicit learning of tetrachords in an atonal context

Jenine L Brown*(1), Nathan Cornelius(1)
1:Peabody Conservatory of Music – Johns Hopkins University

This study is the first to test implicit learning of tetrachords in an atonal familiarization phase. Music students at a conservatory (N=72) were divided into four groups of 18 participants apiece, each group hearing a familiarization phase predominated by a different tetrachordal set-class: [0167], [0268], [0148], or [0257]. Two of these set-classes are consonant according to Huron’s consonance index ([0257] and [0148]), whereas two are dissonant ([0167] and [0268]). Two contain the salient semitone, whereas two do not. The music heard during familiarization was a Bartók composition containing 32 instances of [0167] out of the 42 total tetrachords in the accompaniment. To create the other familiarization phases, the work was recomposed, replacing [0167] with one of the other tetrachordal set-classes. Before familiarization, participants rated 34 tetrachords on how often each tetrachord occurred in music heard throughout their lifetime; after familiarization, they rated how often it occurred in familiarization. All 29 tetrachordal set-classes were rated in these pre- and post-familiarization tests. Tetrachordal stimuli were Shepard tones, played as simultaneities to minimize effects of register, contour, and chordal inversion. A mixed ANOVA suggests that the most frequent tetrachord from familiarization was learned. It was rated significantly higher after familiarization than tetrachords occurring less often in familiarization as well as tetrachords that never occurred (F(1.494,101.576)=31.656, p<.001). There was no between-subjects effect, meaning that participants learned the tetrachordal motive no matter which tetrachord they heard frequently in familiarization (F(3,68)=.188, p=.904). Analyses of difference-scores echo these findings. Findings also suggest that intervals more novel to tonal music made the biggest impact on post-familiarization ratings; this has pedagogical implications for post-tonal music theory classes. This is the first study to suggest that listeners can implicitly learn frequent tetrachords in an atonal context, regardless of the tetrachord’s consonance, novelty, and intervallic construction.
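A minimal sketch of a mixed-design ANOVA of the kind described above, with simulated ratings (the numbers are invented): the within-subject factor is how often a tetrachord type occurred in familiarization, the between-subject factor is the familiarization group.

```python
# Illustrative sketch with hypothetical data, not the study's ratings.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
subjects, groups = 72, ["0167", "0268", "0148", "0257"]
rows = []
for s in range(subjects):
    group = groups[s % 4]
    for freq in ["frequent", "infrequent", "absent"]:
        boost = {"frequent": 1.0, "infrequent": 0.3, "absent": 0.0}[freq]
        rows.append({"subject": s, "group": group, "frequency": freq,
                     "rating": rng.normal(loc=3 + boost, scale=1.0)})
df = pd.DataFrame(rows)

aov = pg.mixed_anova(data=df, dv="rating", within="frequency",
                     subject="subject", between="group")
print(aov[["Source", "F", "p-unc"]])
```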

Subjects: Harmony and tonality, atonal

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-3: Introducing the Melody Annotated String Quartet (MASQ) dataset

Sarah A Sauvé(1)
1:Memorial University of Newfoundland

Melody extraction, a common task in the music information retrieval (MIR) community, consists of identifying the melody in a polyphonic music context. Melody is defined as “the sequence of monophonic pitches that a listener might sing or hum when asked to reproduce a polyphonic piece of music” (Salamon, Gomez, Ellis, & Richard, 2014). The melody extraction task consists of: (1) identifying the appropriate pitch from all possible pitches at any given moment and (2) identifying whether a melody is present, referred to as voicing. An important aspect of MIR melody extraction research is having appropriate ground truth to evaluate algorithms; only two datasets are known to include instrumental music where the melody may move between instruments (Bittner et al., 2014; Bosch, Marxer, & Gómez, 2016). Melody Annotated String Quartets (MASQ) is a new instrumental music ground truth dataset providing melody annotations for string quartets, a genre not yet represented. Thus far, seven Mozart and fourteen Haydn string quartet movements have been annotated by three listeners each; the data are available at https://github.com/sarahsauve/MASQDataset. An analysis of annotation disagreements between listeners revealed disagreements in an average of 25.8% of measures per movement, and that the two most common types of disagreement were voicing and competing saliency – high voice (labelled by the author), accounting for 47.7% and 34.5% of disagreements respectively. The prominence of voicing disagreements in these annotations highlights the importance of voicing in melody extraction. On the other hand, the prominence of the competing saliency – high voice category demonstrates the high-voice superiority effect, whereby perception is drawn to the highest voice regardless of whether or not it contains thematic material. This type of data and analysis offers the opportunity for more refined melody extraction algorithms capable of taking into account the possibility of differing perceptions of melody.
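A hypothetical sketch of the disagreement analysis described above; the column names and annotation layout below are assumptions made for illustration, not the actual MASQ file format.

```python
# Hypothetical sketch: percentage of measures on which annotators disagree.
import pandas as pd

# One row per (measure, annotator), with the instrument each annotator labelled as melody.
ann = pd.DataFrame({
    "measure":   [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "annotator": ["A", "B", "C"] * 3,
    "melody":    ["vln1", "vln1", "vln1", "vln1", "vln2", "vln1", "none", "vln1", "none"],
})

def measure_disagrees(labels: pd.Series) -> bool:
    # A measure counts as a disagreement if the three labels are not identical.
    return labels.nunique() > 1

per_measure = ann.groupby("measure")["melody"].apply(measure_disagrees)
print(f"Disagreement in {100 * per_measure.mean():.1f}% of measures")
```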

Subjects: Music information retrieval, Corpus analysis/studies

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-5: Validation of a Paired-Comparison Speech-In-Noise Test Against the HINT Test: Effects of Musical Training and Musical Aptitude on Auditory Filtering Abilities

Betsy Marvin*(1), Hannah Dick(1), Charles Babb(2), Anne Luebke(2)
1:Eastman School of Music, 2:University of Rochester

The ability to understand speech in the presence of background noise is a critical skill, and this skill is often assessed using the Hearing-in-Noise Test (HINT), which requires a verbal response. We were interested in designing a test similar to HINT that did not require a verbal response. We were also interested in whether there is a ‘musician’ advantage in HINT results, with “musicianship” parsed by music major, musical training, musical experience, and/or musical aptitude. To explore these relationships, we recruited 46 adults (18-30 yrs; 25 music majors) with normal audiometric thresholds whose first language was English. After recruitment, participants completed a musical experience/sophistication survey, underwent musical aptitude testing using Gordon’s Advanced Measures of Music Audiation (AMMA), and we tested their mid-level DPOAEs (0.5 to 8 kHz) to assess any underlying peripheral impairments. We then assessed speech-in-noise ability by administering the HINT adaptively, as well as our new paired-comparison HINT test, which uses same-different (S/D) discrimination at differing SNR levels to determine S/D HINT thresholds. The S/D HINT consists of sets of sentence pairs, where “different” pairs included one changed word that rhymed with the original. Sentence sets were balanced for word order, grammatical function, word frequency of the rhyming word in an American English corpus, and a phonemic distance metric. We found no significant differences in DPOAE amplitudes between music majors and other majors. We also found no significant relationships between the adaptive HINT score and other measures of musicianship (major, years of training, age at which training began, and AMMA score). Our S/D HINT test exhibited validity in terms of d′ and criterion c, and was correlated with both the adaptive HINT and AMMA scores. One possible explanation for the lack of a ‘musician’ advantage between our music-major and other-major groups is that the groups were very similar with respect to age, years of musical study, and the age at which they began music study, and differed only in musical aptitude and weekly hours of playing and practicing; this suggests that multiple parameters may influence the ‘musician’ advantage. Moreover, we have validated our S/D HINT test against the adaptive HINT test, enabling assessments of speech-in-noise perception in minimally-verbal populations. The S/D HINT offers a test that is easier and less expensive to administer, which may encourage its use earlier in the diagnostic process.
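For readers unfamiliar with the signal-detection measures mentioned above, here is a minimal sketch of computing d′ and criterion c from same/different response counts. The counts are invented, and the loglinear correction is a common convention rather than necessarily the authors’ choice.

```python
# Minimal sketch: d-prime and criterion c for a same/different task, with made-up counts.
from scipy.stats import norm

def dprime_and_c(hits, misses, fas, crs):
    # Loglinear correction avoids infinite z-scores at 0% or 100% rates.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (fas + 0.5) / (fas + crs + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_hit - z_fa, -0.5 * (z_hit + z_fa)

d, c = dprime_and_c(hits=42, misses=8, fas=12, crs=38)
print(f"d' = {d:.2f}, criterion c = {c:.2f}")
```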

Subjects: Music and language, Musical expertise

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-7: Why We Can’t Understand the Lyrics: A Multimodal Analysis of the Perception of Sung Language

David Wolfson(1)
1:Hunter College

Condit-Schultz and Huron (2015) found that under 75% of lyrics are intelligible, on average, across the twelve genres they studied, resulting in either lyric substitutions (“mondegreens”) or in lyrics not being decoded into words at all. While some of this phenomenon can be ascribed to the condition of the acoustic signal before it reaches the auditory cortex (e.g. poor diction, unfriendly acoustics, a vocal-backward recorded mix), some of it is inherent in the linguistic and musical content of the signal itself: some combinations of words and music are simply more difficult to parse into lyrics than others. I propose an interdisciplinary model drawing on concepts of surprisal and expectation from psycholinguistics, similarity of articulatory features from phonology, and the limits of working memory from cognitive science, as well as recent musicological research on the intelligibility of sung language. This model uses information about a song’s phonological, lexical and semantic content—as combined with its musical content—to analyze the likelihood that a sung word or phrase will be decoded correctly in its performed or recorded context. This approach, in addition to opening a new window onto the common experience of mishearing lyrics, could be of practical use to composers and songwriters interested in increasing the intelligibility of their output.

Subjects: Music and language, Composition and improvisation

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-9: The Effect of Temperament System on Makam Recognition Performance: A Cross-Cultural Comparison

Firat Altun*(1), Hauke Egermann(1)
1:University of York

Background: Previous research on recognition and cross-cultural cognition shows that listeners’ musical expectancies are biased by culture, and that listeners draw on in-culture schemata to organise unfamiliar out-culture music (Demorest et al., 2016). However, despite being the foundation of pitch relations and sets, the effect of temperament systems on recognition performance has not yet been evaluated comprehensively.

Aims: This study aims to explore the effect of temperament systems on subjective set recognition performance (Snyder, 2000:45). We evaluate how recognition performance develops for two specific Turkish makams after participation in ear-training classes conducted in either a familiar or an unfamiliar temperament. A makam is a melodic texture consisting of progressions, directionality, tonal and temporary centres, and cadences; it is often compared with the concept of a scale. We compare the Turkish temperament system of 24 unequally divided intervals (OT) with the 12-tone equal temperament system (ET).

Method: We recruited 30 music students in the UK and another 30 music students in Turkey. In a pre-experiment, all participants listened to 10 excerpts of 5 different makams in ET and OT in random order and were asked to choose the name of each makam. Participants were then randomly assigned to one of two four-week ear-training classes that focused on the theoretical features of the Karcigar and Huzzam makams. In a mid-experiment (after two weeks), all participants listened to the same excerpts presented in the pre-experiment and completed the makam recognition task again. Subsequently, in a post-experiment (after four weeks), all participants listened to the same excerpts plus an additional four excerpts they had not heard before.

Results: Analyses will show whether the four-week course leads to an increase in makam recognition rates. Furthermore, we will assess whether presenting the class in a culturally familiar temperament system leads to different recognition rates compared to a class presented in a culturally unfamiliar temperament.

Conclusions: These results will show whether temperament systems are a critical mediator of recognition performance.

References: Demorest, S. M., Morrison, S. J., Nguyen, V. Q., & Bodnar, E. N. (2016). The influence of contextual cues on cultural bias in music memory. Music Perception: An Interdisciplinary Journal, 33(5), 590-600. Snyder, B. (2000). Music and memory: An introduction. MIT Press.

Keywords: Temperament System, Recognition Performance, Cross-Cultural Comparison

Subjects: Memory, Cross-cultural comparisons/non-Western music

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-11: A Multi-Modal Investigation of Woodwind Articulation Performance

Laura Stambaugh*(1), Carolyn Bryan(2)
1:Georgia Southern, 2:Georgia Southern University

Musicians often circle markings in their printed music, ostensibly because they may otherwise forget to perform them. Likewise, music teachers frequently need to remind their students to pay attention to dynamic, articulation, and other markings. Despite the ubiquity of articulation markings in printed music, researchers have largely overlooked this topic. The purpose of this study was to examine the effects of visual, kinesthetic, visual-kinesthetic, and aural-kinesthetic modes of practice on performance accuracy of articulation markings. We pilot-tested all study procedures and materials before data collection began. Participants were university student flutists, clarinetists, and saxophonists (N = 50) from three colleges in the southern United States. After completing a working memory screening, each participant completed a practice trial, a control trial, and then four experimental trials in a counterbalanced order. In each condition, participants had two minutes to practice an eight-measure exercise that included a variety of articulation markings. At the end of two minutes of practice, participants made two test recordings of the exercise. In the control condition, the music appeared in black ink, as is customary. In the visual condition, the articulations were printed in colored ink. In the kinesthetic condition, participants traced over the black articulation markings with a black pen. In the visual-kinesthetic condition, participants traced over the black articulation markings with a yellow highlighter. In the aural-kinesthetic condition, participants said the articulations out loud before starting to play the exercise. Approximately 24 hours after completing the first study session, participants returned for retention testing. Results of this study are forthcoming. The dependent variables are performance accuracy and temporal evenness at acquisition and retention. Repeated measures ANCOVAs will evaluate within-participant differences among mode conditions and between-condition differences. For implications, we will consider how mode of interaction may affect learning and performance for common music markings.

Subjects: Performance, Music education/pedagogy/learning

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-13: Auditory categorical learning is shaped by inherent musical listening skills

Kelsey Mankel*(1), Gavin Bidelman(1)
1:University of Memphis

Humans organize diverse, continuous stimuli in the environment, including speech and music sounds, into categories that share perceptual similarities. The neural mechanisms underlying categorical learning, including when and where in the brain categories are formed, remain undetermined. Additionally, while music expertise enhances speech perception and sound-to-meaning learning, it is unclear whether innate musicality (in the absence of formal music training) influences categorical learning of unfamiliar sounds. To address these questions, we trained nonmusicians to identify musical pitch intervals (minor and major 3rd dyads) in a short-term learning task (15-20 min). A separate continuum (minor to major 6ths) served as a control set to assess perceptual learning and transfer effects. Identification training was highly effective, as most individuals scored >80-90% on interval labeling by post-test. Psychometric curves for the trained continuum were steeper post-training relative to the untrained stimulus set, indicative of stronger categorization performance. Although smaller, post-training gains for the untrained intervals suggested subtle transfer of perceptual learning. These findings demonstrate that feedback training was more critical for establishing perceptual categories than mere exposure. Category learning was then compared with performance on a test of receptive musicality (Profile of Music Perception Skills; PROMS). Individuals who possessed naturally higher musicality (better PROMS scores) showed enhanced tone categorization, with higher accuracy and faster response times, compared to those with lower musicality. Our results have implications for understanding individual differences in categorical perception and learning by demonstrating that certain listeners with inherently superior auditory skills are better primed to map sounds to meaning.
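A minimal sketch of fitting logistic psychometric functions to identification responses along a continuum, in the spirit of the pre/post slope comparison described above. The response proportions are invented.

```python
# Illustrative sketch with made-up data: steeper slope = sharper category boundary.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

steps = np.arange(1, 8)                                  # continuum steps (hypothetical)
p_major_pre = np.array([.10, .15, .30, .50, .70, .82, .90])
p_major_post = np.array([.05, .08, .15, .50, .88, .95, .97])

(x0_pre, k_pre), _ = curve_fit(logistic, steps, p_major_pre, p0=[4, 1])
(x0_post, k_post), _ = curve_fit(logistic, steps, p_major_post, p0=[4, 1])
print(f"Slope pre = {k_pre:.2f}, post = {k_post:.2f}")
```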

Subjects: Music and language, Language and speech; Music education/pedagogy/learning; Psychoacoustics

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-15: College musicians’ psychophysiological responses to music performance anxiety assessed as an ensemble

Kate L Schwarz*(1), Martin Norgaard(1)
1:Georgia State University

The purpose of this research project was to analyze Music Performance Anxiety (MPA) among members of a college-level large performance ensemble. MPA, otherwise known as stage fright, is a debilitating problem for many musicians. Addressing this issue during musicians’ formative years could facilitate later successful performance and teaching careers. Here we investigated whether musicians in an ensemble setting experience MPA in similar ways. Sixty-one percent (N=21) of the members of the ensemble received and completed the Kenny Music Performance Anxiety Inventory. In addition, 7 participants wore a heart-rate activity tracker during a dress rehearsal and a performance of the same music. Analysis of these data showed that the seven members experienced changes in heart rate at different times in relation to musical events. Some of these changes were seen only during the performance, ruling out the possibility that the changes were due to the physical demands of movement and breathing. Heart-rate changes were also related to experience level, suggesting that students with less musical experience suffer more from MPA. Survey results indicated that MPA is a serious concern among our sample. This research may provide information that can help music programs and faculty design educational experiences for music students that attenuate performance anxiety.

Subjects: Performance, Physiological measurement

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-17: The Famous Melodies Stimulus Set: Development and normative data

Amy Belfi*(1), Kaelyn Kacirek(1)
1:Missouri University of Science and Technology

Famous musical melodies are frequently used to assess cognitive abilities such as memory or lexical retrieval, often in studies of patients with dementia, aphasia, or other neurological disorders. While such melodies (for example, “Over the Rainbow” or “Rudolph the Red-Nosed Reindeer”) are commonly used, the specific melodies chosen as stimuli tend to vary widely across studies. The goal of the present work was to create a standardized stimulus set of famous musical melodies (similar to standardized sets of images, such as the International Affective Picture System). First, 100 online participants were surveyed to obtain a list of possible melodies to include. After identifying the most frequently named melodies, a final set of 109 melodies was created. Melodies represented a variety of genres: children’s music (n=20), religious (n=6), patriotic (n=12), classical (n=9), movie/tv themes (n=18), pop (n=18), Christmas (n=17), and “other” (n=11) for those that did not clearly fit into the above categories. Next, normative ratings were collected from an additional 250 online participants on the following variables: age of acquisition (“When did you first learn this melody?”), familiarity (“How familiar is this melody?”), emotional valence (“How negative or positive is this melody?”), emotional arousal (“How relaxing or stimulating is this melody?”), and naming (“What is the name of this melody?”). In addition to providing normative ratings for each stimulus, we investigated the relationships between these variables: Valence and arousal were positively correlated, while age of acquisition was negatively correlated with both familiarity and naming. Intraclass correlation coefficients indicated high interrater reliability for the variables measured here. Overall, these results will provide researchers with a standardized and openly available set of musical stimuli (with normative data from a US-based sample) to be used in future work.

Subjects: Not Listed, Emotion

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-19: The relationship between small music ensemble and empathy: A cross-sectional study

Jeoung Yeoun Han(1), Eun Cho(2)
1:Pai Chai University, 2:University of California, Riverside

Small music ensembles represent a unique form of human social activity, involving a profound level of social and emotional interaction. In small ensemble contexts, musicians engage in group creative processes, collaboratively solve musical problems, and take part in performance that is deeply interactive. A rich literature has suggested that continuous participation in playing music in groups may boost the capacity for empathy. In line with this view, a previous study (Cho, 2018) explored the relationship between American music students’ small ensemble experience and their empathy skills and found a close association between levels of small ensemble participation and empathy—specifically, those who participated in small ensembles more often had higher levels of empathy skills. In an attempt to replicate and extend the previous study, the present study examined the relationship between small ensemble experience and empathy skills among the Korean music student population. Undergraduate music performance majors in South Korea (N=188) voluntarily completed an online survey that included questions about their background and their participation in and attitudes toward small ensembles. They also completed a self-assessment questionnaire measuring their dispositional empathy levels and personality. Preliminary results showed that, overall, Korean students scored lower on the empathy measure, which echoes the relatively lower empathy scores among students with Asian ethnic backgrounds in the previous study. In addition, consistent with the previous finding, a close association between primary area of study and empathy was found, with non-classical music majors (i.e., popular music, jazz) showing higher levels of empathy than classical music majors. Additionally, linear regression analysis indicated that students’ attitudes toward small ensemble significantly predicted their empathy skills. A full analysis of the data will be presented at the conference, along with implications for music psychology research.

Subjects: Cross-cultural comparisons/non-Western music, Music education/pedagogy/learning

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-21: Infants’ processing of ambiguous rhythm patterns: Can they maintain metrical interpretations not given directly in the stimulus?

Erica Flaten*(1), Laurel Trainor(1)
1:McMaster University

Previous studies have shown that adults can be primed to hear an ambiguous rhythm pattern with one metrical interpretation or another, reflected in EEG oscillation responses showing more energy at frequencies corresponding to their metrical interpretation. In a previous study we presented 7-month-olds with a 6-beat repeating ambiguous rhythm containing energy at the beat level (3 Hz) as well as at groupings of 2 (1.5 Hz) and 3 (1 Hz) beats. We found EEG oscillation responses at all three frequencies in the stimulus: beat, duple, and triple levels. Here we investigate whether infants can be primed to perceive one metrical interpretation versus another. 6-month-olds hear an ambiguous 6-beat repeated pattern with no accents. Half the infants are primed to hear the pattern as 3 groups of two beats, and half as 2 groups of three beats, by inserting 4 repetitions of the pattern with accents added on every second or on every third beat, respectively, after every 20 repetitions of the ambiguous unaccented pattern. The 20 repetitions of the unaccented ambiguous pattern are identical, regardless of the prime stimulus. In adults, there is some suggestion that such priming effects depend on attending to the stimuli. To focus infants’ attention on the auditory rhythm in general, infants see a visual ball stimulus increase suddenly in size on the first beat of every repetition of the 6-beat pattern (a strong beat for both duple and triple interpretations). Data collection is ongoing. Measuring EEG during presentation of the ambiguous rhythm, we expect increased energy at neural oscillation frequencies corresponding to the duple versus triple meter, depending on which was primed. This study will inform whether infants are able to hold endogenous metrical interpretations of rhythm patterns that are not directly given in the stimulus.
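A minimal sketch of the frequency-tagging logic described above, using a simulated signal: spectral amplitude of an EEG channel is read off at the beat (3 Hz), duple (1.5 Hz), and triple (1 Hz) frequencies. The sampling rate and duration are assumptions, not the study’s recording parameters.

```python
# Minimal sketch with a simulated signal; real analyses would average epochs and
# subtract neighbouring-bin noise before comparing conditions.
import numpy as np

fs, dur = 250.0, 120.0                      # sampling rate (Hz) and duration (s), assumed
t = np.arange(0, dur, 1 / fs)
eeg = (np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 1.5 * t)
       + 0.3 * np.random.default_rng(2).normal(size=t.size))

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

for target in (1.0, 1.5, 3.0):
    idx = np.argmin(np.abs(freqs - target))   # nearest frequency bin
    print(f"Amplitude at {target:.1f} Hz: {spectrum[idx]:.3f}")
```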

Subjects: Beat, rhythm, and meter, Audiovisual / crossmodal; Music and development; Neuroscientific approach

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-23: Revisiting timbral brightness perception

Charalampos Saitis*(1), Kai Siedenburg(2), Christoph Reuter(3)
1:Centre for Digital Music, Queen Mary, University of London, 2:Department of Medical Physics and Acoustics, Carl von Ossietzky University of Oldenburg, 3:Institute of Musicology, University of Vienna

Brightness has long been shown to play a major role in timbre perception, but relatively little is known about the specific acoustic and cognitive factors that affect brightness ratings of musical instrument sounds. Previous work indicated that sound source categories influence general timbre dissimilarity ratings. To examine whether source categories also exert an effect on brightness ratings of timbre, we collected brightness dissimilarity ratings of 14 orchestral instrument tones from 40 musically experienced listeners and modeled the data using a partial least-squares regression with audio descriptors of timbre as regressors. Adding predictors derived from sound source categories did not improve the model fit, indicating that timbral brightness is informed mainly by continuously varying properties of the acoustic signal. A multidimensional scaling analysis suggested at least two salient cues: spectral energy distribution, and attack time and/or asynchrony in the rise of harmonics. This finding seems to challenge the typical approach of seeking acoustical correlates of brightness in the spectral envelope of the steady-state portion of sounds. To further investigate these aspects of timbral brightness perception, a new group of 40 musically experienced listeners will perform MUSHRA-like brightness ratings of an expanded set of 24 orchestral instrument notes. The goal is to obtain a perceptual scaling of the attribute across a larger set of sounds to help delineate the acoustic ingredients of this important aspect of timbre perception. Preliminary results indicate that between sounds with very close spectral centroid values but different attack times, those with faster attacks tend to be perceived as brighter. Overall, these experiments will help clarify the relation between two salient dimensions of timbre: onset and spectral energy distribution.
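A minimal sketch of the modelling approach named above, partial least-squares regression from audio descriptors to brightness-derived coordinates, using placeholder data rather than the study’s descriptors or ratings.

```python
# Illustrative sketch with placeholder data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
n_tones, n_descriptors = 14, 8
X = rng.normal(size=(n_tones, n_descriptors))   # e.g., spectral centroid, attack time, ...
y = rng.normal(size=(n_tones, 1))               # brightness coordinate per tone

pls = PLSRegression(n_components=2).fit(X, y)
print(f"Variance in brightness explained (in-sample R^2): {pls.score(X, y):.2f}")
```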

Subjects: Timbre, Psychoacoustics

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-25: Item Difficulty and Performance Accuracy on Interval Identification and Melodic Dictation Tasks

Bryan Nichols*(1), D Gregory Springer(2)
1:Penn State University, 2:Florida State University

Previous research has indicated that jazz musicians outperformed classical musicians on one of six working memory span tasks (Author, 2018). In that study, participants were asked to recall the last pitch from a series of triads by playing them serially on a piano and, in comparison, by notating them on a staff. When participants used the piano, they were better able to recall the last pitches, but without the assistance of the piano, neither jazz nor classical musicians performed well. Performance was unexpectedly low on this dictation task, which led us to explore which series of tasks may provide easy, medium, and difficult levels of item difficulty, and to investigate the relationships between aural interval identification and melodic dictation. The purpose of this study was to investigate the ability of student musicians to correctly identify short pitch spans after a brief tonicization (hearing a tonic-dominant dyad played twice). We also examined relationships between interval identification and melodic dictation. College musicians with classical or jazz backgrounds completed an interval identification test (Author, in press) and a series of new melodic dictation tasks based on those used by Author (2018). Current results (N = 9, anticipated N = 24) indicate a moderate correlation between interval identification and melodic dictation (r = .538). Jazz musicians answered more items correctly than classical musicians on the interval identification and melodic dictation tasks, but Mann-Whitney U tests indicated that these differences were not statistically significant (p = .714 and .095, respectively). Difficulty indices ranged from .22 to 1.00 across interval items and from .33 to .89 across melodic dictation items, reflecting a battery of items ranging from “very easy” to “very difficult” (Allen & Yen, 2001). Implications of these results and suggestions for future research will be discussed.
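A brief sketch of the classical item-difficulty index (proportion correct per item) and a Mann-Whitney U comparison of two groups, using hypothetical responses rather than the study’s data.

```python
# Illustrative sketch with hypothetical scores.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(4)
responses = rng.integers(0, 2, size=(9, 12))        # 9 participants x 12 items, 1 = correct
difficulty = responses.mean(axis=0)                  # 0 = very difficult, 1 = very easy
print("Item difficulty indices:", np.round(difficulty, 2))

jazz_scores = np.array([8, 10, 9, 11])               # total items correct, hypothetical
classical_scores = np.array([7, 8, 6, 9, 8])
u, p = mannwhitneyu(jazz_scores, classical_scores, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}")
```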

Subjects: Pitch, Memory; Musical expertise

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-27: Schematic Differences Between Two Performances of Woody Guthrie’s “This Land Is Your Land”

Alfred W Cramer(1)
1:Pomona College

Music theorists and linguists have lately foregrounded a distinction between generalized, unified, widely applicable schematic models (such as the hierarchical pitch-space model of tonal harmony) and specific, non-reductive, idiomatic schematic constructions (for example, as in compositional learning as described by Gjerdingen). The former has received much empirical study, but the latter has not. Nevertheless, many music analysts believe the differences between the two models may be musically quite significant. In order to probe such differences, this study investigates possible causes of significant prosodic differences between two influential performances of Woody Guthrie’s “This Land Is Your Land,” which has become almost an alternative American national anthem, often sung with an approach like that of Pete Seeger, whose 1957 recording is one of the performances studied here. Guthrie’s first recording of the song (1944) is the other. In these recordings, the instrumental accompaniments are such that with minimal signal processing it is possible to obtain a reasonably accurate graph of the vocal intensities, pitches, and durations used by each singer. Analysis of these graphs suggests that Seeger’s recording is consistent with generalized schemas: dynamic emphases coincide with pitches that have greater tension within the schema of tonal pitch space, and these in turn place focus primarily on salient words, in keeping with a generalized association between intonational prominence and grammatical focus. In contrast, Guthrie’s approach is more closely aligned with constructions: the emphases often correspond to those in the melodies from which he borrowed in fashioning the tune of “This Land,” and they tend to accentuate boundaries of linguistic idioms. These differences between the two performances might be expected, given the biographies of Guthrie and Seeger. Still, this study is just an early step toward the empirical study of the perception of construction schemas in music.
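A hypothetical sketch of extracting the vocal pitch contour and intensity curve that graphs of the kind described above require, using librosa; the file name is a placeholder, and this is not the author’s actual signal-processing procedure.

```python
# Hypothetical sketch: fundamental-frequency contour (pYIN) plus an RMS intensity curve.
import librosa
import numpy as np

y, sr = librosa.load("this_land_excerpt.wav", sr=None, mono=True)  # placeholder file

f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr)
rms = librosa.feature.rms(y=y)[0]
times = librosa.times_like(f0, sr=sr)

print(f"Analyzed {times[-1]:.1f} s; median sung F0: {np.nanmedian(f0):.1f} Hz; "
      f"mean RMS: {rms.mean():.4f}")
```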

Subjects: Music theory, Harmony and tonality; Language and speech; Music and language; Music and society; Musicology; Perfor

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-29: The Importance of Utilizing Emotional Granularity in Music and Emotion Research

Lindsay Warrenburg(1)
1:Ohio State University

Music, perhaps more than any other art form, is able to influence moods and affect behavior. Innumerable accounts describe music eliciting feelings of nostalgia, transcendence, and other seemingly ineffable emotions. In the scientific study of music and emotion, however, approximately five music-induced emotions have been studied in depth: happiness, sadness, fear, anger, and tenderness (Juslin, 2013; Warrenburg & Léveillé Gauvin, submitted). Although these emotions are certainly important and can be expressed and elicited through music listening, a pertinent question is whether these five words accurately capture all affective states related to music. I argue that in order to better understand emotional responses to musical stimuli, we must change the way we use emotional terminology and examine emotional behaviors. Drawing on recent psychological research on emotional granularity (Barrett, 2004), this research will be the first to examine how differences in musical structure can result in subtle shades of emotion, such as melancholy versus grief. An experiment consistent with this idea is reviewed, in which participants were asked to respond to nominally-sad excerpts with more emotionally-granular terms, such as melancholy and grief. The results are consistent with the idea that listeners are able to use these emotionally-granular terms to identify sub-groups of music previously unrecognized in the music and emotion literature. By using more emotionally-granular terms, we aim to alleviate the problem of semantic underdetermination in music and emotion research. I further suggest that some of the inconclusive results from previous meta-analyses may be due to the inconsistent use of emotion terms throughout the music community. As music is often used to change people’s emotions, my research holds implications for future music psychological experiments, media outlets, and commercial sources.

Subjects: Emotion, Music and language; Music theory; Musical expertise

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-31: Bimodal Distribution of Performance in Discriminating Major/Minor Modes in 6-Month-Old Infants

Kyle Comishen(1), Charles Chubb(2), Scott A Adler(1)
1:York University, 2:University of California, Irvine

In the “3-task,” on each trial, the listener hears a single “tone-scramble” (a rapid, randomly ordered sequence of 65-ms pure tones) and strives (with feedback) to classify it as major vs. minor. All tone-scrambles include 8 G5’s, 8 D6’s, and 8 G6’s to establish G as the tonic (with dominant D). In addition, “major” (“minor”) tone-scrambles include 8 B5’s (Bb5’s)—degree 3 of the G major (minor) scale. In adults, the 3-task yields a dramatic bimodal distribution in performance: 70% of listeners perform near chance; the other 30% are near perfect (Chubb et al., 2013). The present study was designed to investigate if the discriminatory capacity underlying performance in the 3-task is present in the early months of life. Six-month-old infants’ ability to discriminate major vs. minor tone-scrambles was investigated using the Visual Expectation Cueing Paradigm (Baker, Tse, Gerhardstein, & Adler, 2008). In this paradigm, one of two cues, A or B, is randomly presented on each trial; however, A (B) reliably predicts the presentation of a target a few seconds later on one side (the other side) of a screen. The percentage of anticipatory saccades made by the infant to the target location (vs. the opposite location) is measured. In this study, cues A and B were major and minor tone-scrambles paired with the same central visual stimulus. Only infants that can discriminate major vs. minor tone-scrambles will be able to correctly anticipate the target’s location above chance performance. Results revealed a bimodal distribution strikingly similar to that shown by adults in the 3-task: 7 out of 20 infants showed near-perfect target-anticipation; the other 13 were near-random. These findings indicate that the perceptual capacity enabling performance in the 3-task either develops during the very first months of life or is biologically determined.

Subjects: Music and development, Harmony and tonality

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-33: Testing the innateness of low-pitch timing superiority

Haley Kragness*(1), Laura K Cirelli(2)
1:McMaster University, 2:University of Toronto Scarborough

In music, melodic information is often assigned to high voices and timing information assigned to low voices. These compositional tendencies might be attributable to human processing biases. Specifically, melodic information is better processed when it is carried in high compared to low pitches and the reverse pattern is observed for rhythmic information. Evidence that these biases reflect innate predispositions comes from modeling cochlear dynamics, as well as from developmental work showing very young infants’ brains respond more to pitch deviants in high-pitch contexts than low-pitch contexts. However, no such developmental evidence has been reported for low-pitch timing superiority. In the present study, we use a preferential looking paradigm to investigate the effect of pitch height on infants’ perception of audiovisual rhythmic synchrony. Eight- to 12-month-old infants are seated facing a single screen with two videos displayed side by side. Each video depicts a finger tapping on a surface of a table, one at a rate of 430ms and the other at 600ms. Simultaneously, a series of sine tones plays at either a 430ms or 600ms inter-onset interval, with the auditory stream phase-aligned (synchronous) with the tempo-matched finger. Across trials, the sine tones play at either 1236.8 Hz (high pitch) or 130 Hz (low pitch). Infants’ gaze to the synchronous and asynchronous videos across a trial is recorded using an eye tracker. We expect infants to preferentially gaze at the synchronous video, especially at the beginning of each trial. We hypothesize that this preference will be enhanced in the low-pitch versus high-pitch conditions. Results will offer insight into the developmental origins of auditory scene analysis, as well as implications for early audiovisual time perception.

Subjects: Music and development, Audiovisual / crossmodal; Pitch

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-35: Music Emotion and Pupillary Responses to Timbre: Analyzing Orchestral Sounds Through Arousal/Valence and Verbal Attributes

Ivan Eiji Simurra(1)
1:University of ABC

Music is one of the artistic practices capable of eliciting human emotions. Additionally, affective states and human emotions can be studied from the perspective of electrophysiological signals retrieved by direct measurements such as pupil dilation. Variation in pupil dilation occurs over very short periods and can be recorded by regular cameras, which makes data acquisition more convenient and less invasive. This study presents a listening test exploring timbre characteristics of orchestral instruments, relating pupillary responses to induced emotional states represented by means of the circumplex model of affect. The experimental data were gathered via an active response, in which participants rated all of the auditory stimuli using arousal and valence scales, and a passive response, in which pupillometry was measured during listening. Stimuli were designed to cover contemporary music techniques primarily used to create new sounds and textures, focusing on the blending of diverse timbres. A total of 33 stimuli were selected, each with a duration of 5 s. Several statistical techniques were employed for the analysis of the verbal attributes and pupillometry data, such as correlation matrices, PCA, and factor analysis. Results suggest that pupillary responses can be triggered by the slightest variations in a myriad of available orchestral settings and instrumental techniques. The study presented here also strengthens the observation that there is a noticeable relationship between pupil diameter and auditory stimuli, and that this relationship is affected by the audio content, as evidenced by the differences observed among Valence/Arousal groups, which were modulated by variations in timbre, instrumental techniques, and orchestral settings. Our results indicate that pupil dilation responses can be interpreted in terms of the Valence/Arousal groups, in the sense that some groups (e.g. Low.Valence↔Low.Arousal) can be distinguished by comparing the time window during listening to the stimulus with the timespan after listening.
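A minimal sketch of one of the analysis steps named above, PCA over per-stimulus features, using random placeholder data in place of the study’s pupillometry and rating measures.

```python
# Illustrative sketch with random placeholder data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n_stimuli, n_features = 33, 6      # e.g., peak dilation, latency, mean valence, arousal, ...
X = rng.normal(size=(n_stimuli, n_features))

X_std = StandardScaler().fit_transform(X)   # standardize before PCA
pca = PCA(n_components=3).fit(X_std)
print("Explained variance ratios:", np.round(pca.explained_variance_ratio_, 2))
```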

Subjects: Timbre, Emotion

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-37: Hemispheric differences in the role of the parietal cortex in auditory beat perception.

Jessica Ross(1), Shannon Proksch*(2), John Iversen(3), Ramesh Balasubramaniam(2)
1:Harvard Medical School, 2:University of California, Merced, 3:University of California, San Diego

Previous work with transcranial magnetic stimulation (TMS) demonstrated a causal relationship between dorsal auditory stream function and musical beat perception. Specifically, continuous theta-burst stimulation (cTBS) to a left posterior parietal cortex (PPC) target, applied to disrupt dorsal stream network activity, interfered with accurate detection of shifts of beat-phase, but did not interfere with absolute interval timing discrimination or detection of changes in musical tempo. Here we present pilot data (N=18) toward a target sample of N=35. We examined whether the right PPC, which is implicated in many aspects of spatial cognition and pitch transformation, is also causally involved in beat-based musical timing perception. Additionally, we examined whether there are differences between left and right hemisphere PPC involvement in beat-based timing perception. We compared the perceptual effects of downregulating the left versus right PPC in 18 participants to discover hemispheric differences in absolute and beat-based musical timing perception. Three aspects of timing perception were investigated: 1) discrete interval timing discrimination, as well as two facets of relative beat-based musical timing—detection of alterations to 2) tempo (sped up/slowed down) and 3) shifts in phase (forward/back). Participants were tested pre- and post-stimulation using a psychoacoustic test of sub-second interval discrimination and the Adaptive Beat Alignment Test (A-BAT) subtests. Preliminary data suggest a role for the left PPC in detecting shifts in phase and in interval discrimination, but not in tempo detection. The data also suggest a possible role of the right PPC in interval discrimination. We discuss these trends in the context of hemispheric and functional differences across the parietal lobes and the Action Simulation for Auditory Prediction (ASAP) hypothesis.

Subjects: Neuroscientific approach, Motor Control / Timing

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-39: Nature of Young Adults’ Music Engagement and its Therapeutic Implications

Durgesh K Upadhyay(1)
1:Department of Psychology, Mahatma Gandhi Kashi Vidyapith

This study aimed to achieve four objectives: (1) to examine the basic underlying dimensions of the functions of music and of the rasas (emotions) being perceived (via factor analysis of the Function of Music Scale (FMS) and the Music Emotion Scale (MES), both developed by the researcher); (2) to explore inter-correlations among personality factors, music listening styles, music listening types (active and passive), and the dimensions of the Music Preference Scale (MPS), FMS, and MES; (3) to establish differences, based on demographics (e.g. music background, age, and gender), in all the variables of interest (if any); and (4) to discuss implications of these findings in health and therapeutic contexts. A sample of 229 young adults (131 male, 98 female; mean age = 22.4 years) completed measures of the above constructs, and data were analysed via factor analysis, correlations, one-way ANOVA, post hoc tests, and independent-samples t-tests. Sukhātmaka and Dukhātmaka were the two factors that emerged from the 11 rasas. For the functions served through music, two factors emerged, namely mood-based and memory-based functions. Significant correlations among dimensions of the different scales, and differences based on participants’ music background, could be established. Lastly, a ‘Music Engagement Model (MEM) and its Therapeutic Implications for Young Adults’ is proposed and discussed.

Subjects: Aesthetics / preference, Emotion; Health and well-being; Music and movement; Music therapy

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-41: Stimulating linguistic competences through singing. An experimental study with adult migrants

Lea M Siekmann*(1), Vera Busse(2), Gunter Kreutz(1)
1:University of Oldenburg, 2:University of Vechta

Singing is often proposed as an efficient strategy for language learning. Few empirical studies, however, have addressed this claim systematically. Here we investigate the effects of singing on learning specific grammatical phenomena in adult migrants who are in the process of learning the language of instruction in their host country (Germany). In this ongoing project, the participants are randomly assigned to one of three “listen-and-repeat” learning conditions: covert speaking, overt speaking, or singing. Dependent measures include written and verbal cloze test scores at baseline, post intervention and after a twenty-minute retention interval. Mood changes, educational background, musical ability, and basic cognitive skills are controlled for and included in the analyses. Data collection is expected to be completed in March 2019 and results will be presented at the conference.

Subjects: Music and language, Language and speech; Memory

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-43: Vowel Perception in Congenital Amusia

Jasmin Pfeifer*(1), Silke Hamann(2)
1:Heinrich-Heine-University, 2:University of Amsterdam

Background: Congenital amusia is a disorder that negatively influences pitch and rhythm perception (Peretz et al. 2002). While amusia had been reported to affect only the musical domain (Ayotte et al. 2002), studies have shown that amusics also have impaired intonation perception (Patel et al. 2008). In the present study we aimed to show that amusia also has an influence on linguistically relevant cues other than pitch by investigating vowel perception. We assessed amusics’ behavioral and electrophysiological responses to vowel changes.

Method: We tested 11 congenital amusics, diagnosed with the Montreal Battery of Evaluation of Amusia (Peretz et al. 2003), and 11 controls matched for age, gender, education and musical training. All participants were right-handed, had normal hearing and had German as their native language. Our stimuli were four isolated synthetic vowels, /ɛ/, /ɛ:/, /e/ and /e:/, created by Klatt synthesis in Praat (Boersma & Weenink 2016), varying in either duration or spectral properties, based on the properties of natural German vowels. For the behavioral study, we employed an ABX task and the stimuli were presented with an inter-stimulus interval (ISI) of either 0.2 s or 1.2 s. For the EEG study, the stimuli were presented in a multi-deviant oddball paradigm in 4 blocks. In each block, one vowel was the standard and occurred 85% of the time, while the other three vowels served as deviants, each occurring 5% of the time. This resulted in 16 event-related potentials (ERPs) per participant: 4 standards and 12 deviants.

Results: For the behavioral data, we calculated a linear mixed model with subject as a random effect. We found main effects of group, t(20) = 2.26, p = 0.035, ISI, t(2436) = 5.73, p < .0001, and cue, t(2436) = 4.60, p < .0001. Amusics performed worse than controls, the short ISI was harder overall, and duration cues were harder overall than formant cues. We used a linear mixed model for the MMN data as well. We found a significant main effect of group, t(323.7) = -2.45, p = 0.024, with amusics (M = -2.68) overall having a smaller MMN than controls (M = -3.37). In addition, we found a main effect of cue, t(2351.8) = -6.05, p < .0001.

Conclusions: Our study shows that amusia affects not only pitch perception in language but also vowel perception, and therefore has more far-reaching consequences for speech perception than previously assumed. Not only was the behavior of amusics shown to be affected; we also found differences in the MMN, reflecting differences in early auditory change detection.
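A minimal sketch of a linear mixed model with subject as a random effect, in the spirit of the behavioural analysis described above; the simulated data and effect sizes below are invented for illustration.

```python
# Illustrative sketch with simulated accuracy data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
rows = []
for subj in range(22):
    group = "amusic" if subj < 11 else "control"
    for isi in ("short", "long"):
        for cue in ("duration", "formant"):
            base = 0.70 if group == "amusic" else 0.80
            acc = base + (0.05 if isi == "long" else 0.0) + (0.05 if cue == "formant" else 0.0)
            rows.append({"subject": subj, "group": group, "isi": isi, "cue": cue,
                         "accuracy": acc + rng.normal(scale=0.05)})
df = pd.DataFrame(rows)

# Fixed effects of group, ISI, and cue; random intercept per subject.
result = smf.mixedlm("accuracy ~ group + isi + cue", data=df, groups=df["subject"]).fit()
print(result.summary())
```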

Subjects: Processing disorders, Expectation; Language and speech; Music and language; Neuroscientific approach; Pitch

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-45: How Undergraduates Engage with Music Cognition: A Content Analysis of Students’ Experiment Proposals

D Gregory Springer*(1), Amanda L Schlegel(2)
1:Florida State University, 2:University of South Carolina, School of Music

The pedagogy of music cognition is of prime interest to psychologists, musicians, and other researchers who teach music cognition. At a previous SMPC meeting, Springer (2017) provided recommendations on best practices for online music cognition courses that are designed for students of all majors. This proposal is a follow-up to this earlier discussion with an analysis of students’ culminating projects that were completed over two semesters. Two cohorts of undergraduate students (N = 48) enrolled in an online music cognition course submitted an experiment proposal that involved music. The course was an approved general education course, so students of all majors were enrolled. We analyzed these proposals for the following elements to gain a greater understanding of how students engage with music cognition: the musical behavior involved, the independent and dependent variables, the content area from the course that best aligned with the proposal, and the journals that participants cited. Results indicated that listening was by far the most common musical behavior mentioned (n = 40), followed by learning music (n = 3) and active music therapy (n = 2). (Other behaviors were only listed by single students.) Music was most often used as a between-subjects independent variable (music vs. non-music condition) and was rarely used as a dependent variable. The most common course topic areas were “Music and Human Health” (n = 32), “Perception of Beat and Rhythm” (n = 12), and “Musical Preferences” (n = 8). The most common journals cited were Psychology of Music (n = 18), Journal of Music Therapy (n = 11), and PLOS ONE (n = 5), but there was notable variety in journal selections. These results give music cognition instructors an understanding of the transfers that students make between music cognition and their own lives, as well as recommendations for future course improvement.

Subjects: Music education/pedagogy/learning, Musical expertise

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-47: The Role of Bilingualism in Rhythm Perception and Grammar Development

Courtney K Rooker*(1), Reyna Gordon(2), Tonya Bergeson(1)
1:Butler University, 2:Vanderbilt University Medical Center

Within the past few decades, researchers and scholars have been bridging the gap between music and language. Most studies have been conducted with monolingual English speakers. Thus, there is still a great deal to be learned about the relationship between music and language in children who have exposure to more than one language. The present study investigates the effect of dual language exposure on the relationship between rhythm and grammar. Twenty-three typically-developing 5- to 7-year-olds who were monolingual English speakers or had dual language exposure in English/Spanish, English/French, or English/Mandarin were given standardized assessments of rhythm perception and language development. The chosen languages were selected strategically to investigate whether linguistic rhythm (i.e., syllable- versus stress-timed language) affects the relationship between rhythm and grammar skills. The current hypothesis is that the correlation between rhythm and grammar skills will hold for children with dual language exposure, and might be stronger because of their enhanced attention to elements of sound (Krizman et al., 2012). Furthermore, individuals who have dual language experience in two differently timed languages may have better rhythm discrimination skills because they have the ability to switch between differently timed linguistic rhythms. Analyses revealed a significant positive correlation between rhythm perception and early language skills for children with dual language exposure in two differently timed languages (syllable- versus stress-timed): the English + Spanish and English + French groups (r=0.528, p<.05). The similarly timed, stress-timed language group (English only and English + Mandarin) showed a significant positive correlation between rhythm perception and IQ (r=0.648; p<.05). Therefore, children who have dual language exposure in two differently timed languages show stronger relationships between rhythm discrimination skills and grammar skills. Correlational analyses for our second rhythm measure, the Beat-Based Advantage assessment, are still in progress.

Subjects: Music and language, Beat, rhythm, and meter; Language and speech; Music and development

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-49: Pattern Discovery using Melodic-Harmonic Reductions of Bach Chorales

Jonathan E Verbeten*(1), David Sears(1)
1:Texas Tech University

Tonal music is hierarchically organized such that certain events – be they notes, chords, or longer contrapuntal patterns – are more permanent (or stable) in memory and facilitate processing during perception (Krumhansl, 1990; Lerdahl & Jackendoff, 1983; Patel, 2008). According to studies of sequence memory, for example, listeners form non-contiguous associations for the most salient events in tonal melodies despite considerable surface embellishments (Deutsch, 1980). Music theorists typically account for this hierarchical organization by developing analytical methods that can reduce the musical surface to its most important melodic and/or harmonic events (e.g., Schenkerian analysis, Roman numeral analysis, etc.). Nevertheless, current research has yet to provide a data set of melodic-harmonic reductions that will allow researchers to generate hypotheses about (1) the sorts of contrapuntal patterns that might characterize a given style period; and (2) the degree to which listeners perceive these patterns in multi-voiced textures. To resolve this issue, we present a corpus of melodic-harmonic reductions of 100 chorales composed by Johann Sebastian Bach. For each chorale, we imported the corresponding kern file (kern.ccarh.org) and then created a homorhythmic reduction that excluded non-chord tones and consonant skips from each voice, as well as non-syntactic simultaneities across all voices that did not correspond with the harmonic rhythm. We also encoded key and Roman numeral annotations as a separate spine using a variant of the Perl-compatible regular expression syntax developed by Neuwirth et al. (2018). If accepted, the data set, which consists of both the original and reduced kern files for each chorale, will be stored in an online, open-access repository. For illustrative purposes, we also identified the most recurrent two- and three-event melodic-harmonic complexes using scale-degrees for the melody and Roman numerals for the harmony. Statistical association measures revealed characteristic patterns like the compound cadence.
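A minimal sketch of the association-measure step mentioned above: pointwise mutual information over two-event (bigram) melodic-harmonic complexes. The toy event list pairs a melodic scale degree with a Roman numeral and is not drawn from the corpus itself.

```python
# Illustrative sketch: bigram counts and pointwise mutual information (PMI).
import math
from collections import Counter

# Toy sequence of (scale degree, Roman numeral) events.
events = [("3", "I"), ("2", "V"), ("1", "I"), ("4", "ii"), ("3", "V"),
          ("2", "V"), ("1", "I"), ("7", "V"), ("1", "I"), ("2", "V"), ("1", "I")]

unigrams = Counter(events)
bigrams = Counter(zip(events, events[1:]))
n_uni, n_bi = sum(unigrams.values()), sum(bigrams.values())

def pmi(bigram):
    a, b = bigram
    p_ab = bigrams[bigram] / n_bi
    p_a, p_b = unigrams[a] / n_uni, unigrams[b] / n_uni
    return math.log2(p_ab / (p_a * p_b))

for bg, count in bigrams.most_common(3):
    print(bg, f"count={count}", f"PMI={pmi(bg):.2f}")
```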

Subjects: Corpus analysis/studies, Harmony and tonality

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-51: Preference and Perceived Complexity for Rhythms in Isolation and Embedded in Real-World Music

Jay Appaji*(1), Blair Kaneshiro(2)
1:Southern Methodist University, Dallas, Texas USA, 2:Stanford University

Beat processing is a central topic in music cognition and can be studied through various response modalities. Here we analyze behavioral ratings of musical and rhythmic stimuli from various rhythm families. We aim to probe beat processing in the context of non-Western musical traditions, and to begin bridging the gap between the simple, synthesized excerpts often used in perceptual research and the complex, real-world music actually experienced in everyday life. Stimuli were 30-second excerpts from the Bollywood genre. Excerpts were drawn from 4 rhythm families – Even, Syncopated, Polyrhythm, Swing – with 1 synthesized rhythm and 3 real-world excerpts per family (16 total). Participants listened to excerpts while dense-array EEG (not analyzed here) was recorded and, after each trial, rated enjoyment, perceived rhythmic complexity, and ease of finding the beat on a scale of 1-9. We collected 30 trials for each stimulus across 5 participants. We analyzed the ratings using repeated-measures ANOVA with fixed effects of stimulus type (simple or real-world) and rhythm family and random effects of stimulus and participant. Statistical tests indicate that stimulus type significantly impacted participant ratings of enjoyment, perceived rhythmic complexity, and ease of finding the beat (all p<0.001), with simple stimuli producing lower enjoyment and perceived rhythmic complexity, and greater ease of finding the beat. Rhythm family significantly impacted perceived rhythmic complexity (p<0.001), was only marginally significant for ease of finding the beat (p=0.08), and did not significantly impact enjoyment (p=0.25). To conclude, rhythm families common to Bollywood music vary in perceived complexity while still providing a detectable beat and may therefore be useful for studying beat processing. Importantly, listener experience was strongly impacted by the ecological validity of the stimuli, pointing to the value of real-world stimuli. As a next step, we will relate the behavioral results to an analysis of the cortical data.
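
As an illustration of the analysis described above, the sketch below shows one way a model with fixed effects of stimulus type and rhythm family and random effects of participant and stimulus could be specified in Python; the data frame and column names are hypothetical, and the authors' actual analysis may have been run differently or in other software.

```python
# Minimal sketch (hypothetical column names): ratings modeled with fixed
# effects of stimulus type and rhythm family, plus crossed random effects
# for participant and stimulus via variance components.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ratings.csv")  # columns: participant, stimulus,
                                 # stim_type, family, enjoyment, ...
df["all"] = 1                    # single group so both factors can be crossed

model = smf.mixedlm(
    "enjoyment ~ C(stim_type) * C(family)",
    data=df,
    groups="all",
    vc_formula={"participant": "0 + C(participant)",   # random effect: participant
                "stimulus": "0 + C(stimulus)"},         # random effect: stimulus
)
print(model.fit().summary())
```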

Subjects: Beat, rhythm, and meter, Computational approach; Music information retrieval; Neuroscientific approach; Psychoacoustics

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-53: Sound pattern recognition: a comparative approach

Paola Crespo-Bojorque*(1), Alexandre Celma Miralles(1), Juan M Toro(2)
1:Universitat Pompeu Fabra, 2:Universitat Pompeu Fabra & ICREA

Humans recognize musical patterns regardless of changes in salient features such as pitch, timbre and tempo. That is, humans can identify a specific melody independently of whether its frequency is shifted up or down, it is played on a piano or a violin, or its tempo is faster or slower. The present study explores how these three musical parameters (pitch, timbre and tempo) are processed by a non-vocal learner species distant from humans. We ran a melody recognition task. The animals (Long-Evans rats) were familiarized with the “Happy Birthday” tune over 20 sessions. After familiarization, we presented novel test items. These included changes in pitch (higher and lower frequencies), timbre (string [violin] and woodwind [piccolo] instruments) and tempo (faster and slower speeds). We observed no differences in responding between the familiar stimuli and the test stimuli that included changes in pitch and tempo. This suggests that the rats recognized the familiarized acoustic sequence independently of the manipulations in frequency and tempo. Interestingly, when timbre was modified, the animals responded significantly more to the familiar version of the stimuli than to the test stimuli. That is, melody recognition was affected by modifications in timbre. This study provides insights regarding how other species, distant from humans, rely on certain aspects of musical sequences to recognize a sound pattern.

Subjects: Evolutionary perspectives

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-55: Psychoacoustic Etudes: The Composer as Cognitionist

Ira L Braus(1)
1:Hartt School/University of Hartford

During the last millennium, the discipline of music has traced a circle from science to art to science. Music began as a branch of mathematics during the Middle Ages and morphed into an artistic medium during the Renaissance. It persisted as such until the late nineteenth century, when it spurred the beginnings of cognitive psychology through the work of Carl Stumpf and his students. Since then, music theory, history, performance and psychology have moved towards convergence, building on the work of Guido Adler, Erich von Hornbostel, and Charles Seeger. Interestingly, composers as early as Joseph Haydn (1732-1809) made cognition-related phenomena a conscious part of their creativity, acknowledging these phenomena musically AND verbally in their scores. In my talk, I’ll present examples illustrative of this idea, such as: (1) Haydn’s sensitivity to repetition blindness in rapidly iterating a short rhythmic figure (1797); (2) Carl Loewe’s inferential (psychoauditive) projection of partials from a bass tone on the piano, to effect tonic resolution of a dominant chord preceding it (1832); (3) Alban Berg’s uncanny orchestration and verbal description of Shepard tones, avant la lettre, as a device of musical pictorialism (1925); and (4) Elliott Carter’s use of interleaved melodies as a technique of thematic variation (1951). This trend persisted into the twentieth century and beyond, with Gerard Grisey’s synthesis of spectral and chronometric rhythm. In sum, this paper shows how cognitive factors inform compositional processes trans-historically, often foretelling artistically what is later validated scientifically.

Subjects: Musicology, Aesthetics / preference; Beat, rhythm, and meter; Corpus analysis/studies; Loudness; Music theory; P

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-57: Children’s Facial Affect on Singing Tasks: Results of Imitated and Improvised Vocal Responses

Jennifer A Bugos*(1), Darlene DeMarie(1), Miranda Torres(1), Ayo Gbadamosi(1), Sydney Andersen(1)
1:University of South Florida

Improvisation in many school music programs has been met with mixed reactions due to limited exposure among music educators, preservice teachers, and adolescent music students (Wehr-Flowers, 2006; Alexander, 2012). However, little is known about young children’s emotional responses to vocal improvisation in early childhood. The purpose of this study was to examine facial affect in children who sang their favorite song, a folk song, and an improvised vocal response. Forty children (4-6 years) were recruited to complete three singing tasks based on the AIRS singing measure. Criteria for research participation included being between the ages of 4 and 6 years, having no previous musical training, and not currently reading or performing music. Both informed parent consent and child assent were obtained in accordance with the Institutional Review Board’s policies. All participants completed three singing tasks from the AIRS TBSS, which included singing a favorite song, a traditional folk song, and an improvised musical response. Pitch and video analysis included processing through PRAAT to yield the frequency of each participant’s ceiling note, floor note, first note sung, and last note sung, as well as overall pitch accuracy. FaceReader was used to evaluate facial affect with a child model. Data analysis is currently underway. Preliminary results showed that participants displayed happier affect for the improvisatory task than for the Brother John task, which elicited surprised or sad affect. Overall, participants showed happier affect when singing their favorite song. In addition, we hypothesized that participants would demonstrate more errors in pitch accuracy for Brother John and the improvisatory task than for their favorite song. Results obtained from this study may help us understand how emotional content influences vocal performance in young children. Findings will inform creative tasks related to programs in music education, music psychology, and early childhood development.

Subjects: Emotion, Music education/pedagogy/learning; Physiological measurement

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-59: A randomized controlled study to examine the effects of music training on mathematical achievements and working memory performances

Ingo Roden(1)
1:Carl von Ossietzky University Oldenburg

The present experimental study examined the effects of music and math training on mathematical skills and visuospatial working memory capacity in kindergarten children. For this purpose, N = 54 children (mean age: 5.46 years; SD = .29) were randomly assigned to three groups. Children in the music group (n = 18) received weekly 60-min sessions of music training over a period of eight weeks, whereas children in the math group (n = 18) received the same amount of training focusing on basic mathematical skills, such as numeracy, quantity comparison and counting objects. The third group of children (n = 18) served as waiting controls. The groups were matched for sex, age, IQ and previous music experience at baseline. Pre-post intervention measurements revealed a significant group × time interaction, showing that children in both the music and math groups significantly improved their early numeracy skills, whereas children in the control group did not. No significant differences between groups were observed for visuospatial working memory performance. These results confirm and extend previous findings on transfer effects of music training on mathematical abilities and visuospatial working memory capacity. They show that music and math interventions are similarly effective in enhancing children’s mathematical skills. More research is necessary to establish whether cognitive transfer effects arising from music interventions might facilitate children’s transition from kindergarten to first grade.

Subjects: Cross-domain effects, Memory

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-61: American Listeners Perceive Culturally Unfamiliar Music as Faster than Culturally Familiar Music, Regardless of Actual Tempo

Jared W Leslie(1), Jessica E Nave-Blodgett*(1), Erin Hannon(1)
1:University of Nevada, Las Vegas

Listeners perceive foreign speech as spoken faster than native speech even if there is no difference in the rate of sound, a phenomenon called the ‘Gabbling Foreigner Illusion’. Similarly, studies have found that people tap at lower (faster) metrical levels to culturally unfamiliar music than to familiar music. Culture-specific experience may be necessary for listeners to perceive larger structures that unfold at a slower rate (sentences or phrases); without it, listeners may focus on the rapidly changing surface of speech or music, giving rise to the illusory impression that unfamiliar speech or music is faster. We conducted two studies to ask whether listeners perceive the tempo of culturally unfamiliar and familiar music differently, by presenting English-speaking listeners from the USA with wordless excerpts of commercial pop music from multiple cultures (West African, American/British, Indian, Turkish, and Latin American). Participants heard pairs of musical excerpts and indicated whether the tempo of the second clip was slower, the same, or faster than the first clip. In one experimental condition, participants made ratings after listening passively, and in the other condition they made ratings after tapping along to each excerpt. In both conditions, listeners’ tempo ratings were more accurate when there was no culture change between the clips in a pair. When presented with a clip of culturally familiar American/British music paired with a clip from another, unfamiliar musical culture, participants always rated the American/British clip as slower, regardless of the actual relative tempo of the clips. These data suggest that, at least for listeners from the United States, cultural familiarity (or lack thereof) influences our perception of relative tempo in music.

Subjects: Cross-cultural comparisons/non-Western music, Beat, rhythm, and meter

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-63: The influence of rhythmic and sequential structure on classifying major vs. minor tone-scrambles

Joselyn Ho*(1), Charles Chubb(1)
1:University of California, Irvine

Most listeners (70%) struggle to classify randomly-ordered, major vs. minor tone sequences (Chubb et al., 2013). This study investigated whether introducing simple rhythmic and/or sequential structure to these stimuli could heighten sensitivity in such tasks. Listeners participated in 8 tasks. Stimuli were tone-scrambles, which are sequences of pure tones comprising equal numbers of a target note T plus notes G5, D6, and G6. In “3-task” (“4-task”) variants, T was either Bb or B (C or Db). On each trial in a given task, the subject heard a single stimulus and strove (with feedback) to guess T. In “fast” task-variants, each stimulus contained twenty, 65ms tones. In “random” task-variants, the tones were randomly ordered. In the “FR” (fast-random) variant, stimuli were presented in an unbroken stream. In the “FRwR” (fast-random-with-rests) variant, a 130ms rest was inserted after each successive block of 4 tones. In the “FCwR” (fast-cyclic-with-rests) variant, the first four tones included one each of G5, D6, G6 and T, and this sequence repeated 5 times, each time isolated by rests. In the “Slow” variant, each stimulus comprised 1 each of the notes G5, D6, G6 and T played at 325ms/tone in random order. Results and Conclusions: In the 4-task, performance was ordered from best to worst as follows: FRwR > FR > FCwR > Slow, and all differences were significant. The 3-task variants followed the same pattern, but the difference was significant for only the Slow variant. Post-hoc analysis revealed that the suppressed performance in both the Slow and FCwR task-variants is due to a powerful bias inclining listeners to respond “major” (“minor”) if the 4-note sequence defining the stimulus ends on a high (low) note. Importantly, the current results indicate that inserting regular rests into random stimuli can heighten sensitivity to the target note difference.
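
To make the stimulus construction concrete, the sketch below generates an FRwR-style tone-scramble following the parameters stated above (twenty 65 ms tones, five of each note, with a 130 ms rest after every block of four); the exact amplitude envelope and the register of the target note are assumptions for illustration, not taken from the paper.

```python
# Minimal sketch (assumed ramp and target register): build an FRwR-style
# tone-scramble -- twenty 65 ms pure tones in random order with a
# 130 ms rest inserted after each successive block of four tones.
import numpy as np

SR = 44100                        # sample rate (Hz)
TONE_DUR, REST_DUR = 0.065, 0.130
FREQS = {"G5": 783.99, "D6": 1174.66, "G6": 1567.98, "B5": 987.77}  # T = B here (assumed octave)

def tone(freq, dur=TONE_DUR, sr=SR):
    t = np.arange(int(sr * dur)) / sr
    ramp = np.minimum(1.0, np.minimum(t, dur - t) / 0.005)  # 5 ms on/off ramp (assumption)
    return np.sin(2 * np.pi * freq * t) * ramp

rng = np.random.default_rng(0)
notes = rng.permutation(["G5", "D6", "G6", "B5"] * 5)        # five of each note, random order
rest = np.zeros(int(SR * REST_DUR))

chunks = []
for i, name in enumerate(notes, start=1):
    chunks.append(tone(FREQS[name]))
    if i % 4 == 0:
        chunks.append(rest)       # rest after every block of 4 tones
stimulus = np.concatenate(chunks)
```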

Subjects: Pitch, Beat, rhythm, and meter

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-65: A Corpus-based Listening Experiment: Evaluating Probability Versus Chord-Distance Models of Harmonic Surprise

Claire Arthur*(1), Alejandra Silcott(1)
1:Georgia Institute of Technology

Several models of harmonic expectancy have relied on chord “distances” (i.e., circle of fifths) to predict “goodness of fit” (e.g., Krumhansl et al., 1982; Tillman et al., 2008). While several models of melodic expectancy have examined probability distributions drawn from real melodies, there have been no evaluations comparing statistical models of harmonic probability with chord-distance models. A (logical) assumption is that these models will be correlated. While differences between theoretical and empirical models may not be significant across broad harmonic “expectancy categories” (e.g., “expected”, “somewhat unexpected,” “very unexpected”), we hypothesize that for finer-grained distinctions within the same category the models will differ (e.g., is I-bII more surprising than I-bIII?). In this study, we compare chord-distance models with chord transition probabilities estimated from a large corpus of popular music data (e.g., McGill Billboard dataset; DeClercq & Temperley’s rock-pop corpus) against listener judgements, to see which model best accounts for perceptual and emotional effects of harmonic “surprise.” Several studies have demonstrated perceptual, physiological, and neurological effects related to the degree of harmonic expectancy, with (theoretically) less probable events eliciting stronger reactions (e.g., Janata, 1995; Steinbeis, 2006). However, there are many “flavors” of surprising musical events, and we lack understanding of their qualia and the musical contexts that give rise to them. Thus, this study focuses primarily on highly surprising harmonic musical events. We describe a pilot and main experiment, where listeners rated the amount of harmonic “surprise” for different estimated harmonic expectancy values. Additionally, the pilot experiment tested the effects of “lead in time” and artificial vs. ecologically valid stimuli for their role in generating harmonic surprise. In the main study, listeners also described surprising harmonic events by checking one or more descriptive words obtained from the results of a pilot study (e.g., nostalgic, triumphant, ominous, etc.).
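
The sketch below illustrates, with toy data rather than the corpora named above, how corpus-based chord-transition probabilities can be turned into a surprise estimate (surprisal, in bits), the statistical counterpart to a chord-distance model; the smoothing choice and Roman-numeral sequences are assumptions for illustration.

```python
# Minimal sketch (toy data): estimate chord-transition probabilities from
# Roman-numeral sequences and express harmonic surprise as surprisal (-log2 p).
from collections import Counter, defaultdict
from math import log2

songs = [  # hypothetical Roman-numeral annotations
    ["I", "V", "vi", "IV", "I", "V", "I"],
    ["I", "IV", "V", "I", "bII", "I"],
    ["vi", "IV", "I", "V", "vi"],
]

counts = defaultdict(Counter)
for chords in songs:
    for prev, nxt in zip(chords, chords[1:]):
        counts[prev][nxt] += 1

def surprisal(prev, nxt, alpha=1.0):
    """Surprisal in bits of nxt given prev, with add-alpha smoothing."""
    vocab = {c for chords in songs for c in chords}
    total = sum(counts[prev].values()) + alpha * len(vocab)
    p = (counts[prev][nxt] + alpha) / total
    return -log2(p)

print(surprisal("I", "V"))    # common transition -> lower surprisal
print(surprisal("I", "bII"))  # rare transition   -> higher surprisal
```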

Subjects: Expectation, Corpus analysis/studies; Emotion; Harmony and tonality; Music information retrieval; Music theory; M

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-67: Steady State Evoked Potentials Reflect Context-Induced Perception of Musical Beat in an Ambiguous Rhythm

Karli Nave*(1), Erin Hannon(1), Joel Snyder(1)
1:University of Nevada, Las Vegas

Synchronous movement to music is often effortless, yet relatively little is understood about the mechanisms that underlie this ability. Previous research has shown that the intended beat of a rhythmic stimulus can be reflected in periodic neural activity. However, it is difficult to disentangle whether these periodic neural responses reflect perception of musical rhythm or simply stimulus-driven encoding of the acoustic signal. We used electroencephalography (EEG) to investigate whether steady state evoked potentials (SSEPs, the electrocortical activity from a population of neurons resonating at the frequency of a periodic stimulus) arising from auditory cortex reflect beat perception when the physical information in the stimulus is ambiguous and supports two possible beat patterns. Participants listened to a musical excerpt that strongly supported a particular beat pattern (context phase), followed by an ambiguous rhythm consistent with either beat pattern (ambiguous phase). During the final probe phase, listeners indicated whether or not a superimposed drum matched the beat of the ambiguous rhythm. Accurate performance required that participants perceive the beat in the musical excerpt and also maintain that percept throughout the ambiguous rhythm, despite having no surface evidence to reinforce that perception exclusively. Participants gave higher match ratings to probes that matched the beat of the context than to probes that did not match the beat of the context. SSEPs during the ambiguous phase had higher amplitudes at frequencies corresponding to the beat of the preceding context. Finally, trial-by-trial analyses revealed that the amplitude of the beat-related SSEPs was predictive of whether or not participants correctly perceived the beat. These findings support the idea that SSEPs arising from auditory cortex reflect perception of musical rhythm and not just stimulus encoding of temporal features.
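
As a rough illustration of how a beat-related SSEP amplitude can be quantified, the sketch below measures spectral amplitude at a candidate beat frequency relative to neighboring frequency bins; the sampling rate, example frequencies, and variable names are assumptions for illustration and do not come from the authors' pipeline.

```python
# Minimal sketch (hypothetical parameters): quantify a beat-related SSEP as
# spectral amplitude at a candidate beat frequency minus the local noise floor,
# for a single-channel EEG segment from the ambiguous phase.
import numpy as np

def ssep_amplitude(eeg, sr, freq, n_neighbors=4):
    """Amplitude at `freq` minus the mean of nearby bins (skipping the two adjacent bins)."""
    spectrum = np.abs(np.fft.rfft(eeg)) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / sr)
    idx = np.argmin(np.abs(freqs - freq))
    neighbors = np.r_[idx - n_neighbors:idx - 1, idx + 2:idx + n_neighbors + 1]
    return spectrum[idx] - spectrum[neighbors].mean()

# e.g., compare the two candidate beat frequencies of the ambiguous rhythm
# (example values only):
# amp_duple  = ssep_amplitude(trial_eeg, sr=512, freq=1.25)
# amp_triple = ssep_amplitude(trial_eeg, sr=512, freq=0.83)
```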

Subjects: Beat, rhythm, and meter, Neuroscientific approach

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-69: Dancers’ Auditory Perception of Microtiming Deviations Within Drum Grooves

Benjamin Guerrero(1)
1:Eastman School of Music

The purpose of this experimental study is to investigate the differences in auditory rhythmic sensitivity between dancers, percussionists, and participants inexperienced in formal musical or dancing instruction. The auditory discrimination test will involve participants listening to a series of paired drum grooves from various styles of music and determining if the pairings are the same or different. The altered drum grooves will only differ in swing ratio. Factors under investigation include microtiming deviations within drum grooves (10ms, 20ms, and 40ms), musical style (Latin, jazz, and hip-hop), and experience in formal musical or dancing instruction. The research questions addressed in this study include: 1. How does dancers’ auditory perception of microtiming deviations in drum grooves in various styles of music differ from percussionists and participants inexperienced in formal musical or dancing instruction? 2. To what extent does dancing experience in a specific musical style affect the temporal resolution of the participant? Music cognition researchers have documented the neurological relationship between movement and musical rhythms (Bengtsson et al., 2009; Grahn & Brett, 2007; Leow, Parrott, & Grahn, 2014; Loehr & Palmer, 2009; Thaut, 2009; Thaut, Trimarchi, & Parsons, 2014; Trainor et al., 2009; Witek et al., 2014). It has also been established that percussionists have a higher rhythmic sensitivity than nonmusicians (Davies, Madison, Silva, & Gouyon, 2013; Rammsayer & Altenmüller, 2006; Rammsayer, Buttkus, & Altenmüller, 2012). Many popular musical genres have established grooves or feels, and experienced dancers internalize those grooves through movement. This study aims to produce correlative data that can provide insight for music educators when teaching music of different styles in their classroom. This study may help provide evidence supporting the use of popular music, multicultural music, and movement or dancing in the classroom so that students receive a well-rounded music education.

Subjects: Beat, rhythm, and meter, Music and movement

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-71: Changed Appreciation of Novel Interpretations after Focused Training in a Specific Historical Performance Practice

Song Hui Chon*(1), Tom Beghin(2)
1:Belmont University, 2:Orpheus Institute

1. Background and Aims: There are various degrees of nuanced interpretation possible on historical pianos. A question arises whether an experienced performer’s appreciation of novel approaches can be altered after a brief period of intense training. 2. Method: A short segment from Beethoven’s famous “Waldstein” Sonata Op. 53 (1803) was selected, along with another from Steibelt’s Grande sonate Op. 64 (1805) to contrast with the former. Three pedaling approaches were applied to each excerpt, according to the degree of nuance applied. Six stimuli were recorded on an 1803 Erard replica, a type of instrument on which both pieces were composed. Nine participants were recruited at the 2018 Summer Academy of the Orpheus Institute in Ghent, Belgium. The experiment was conducted twice, before and after a ten-day period. Each trial presented two approaches to one excerpt. Participants indicated which of the two sounded more “appropriate, successful, or convincing.” 3. Results: A three-way repeated-measures analysis of variance was performed on the number of preferences. The three independent variables were EXCERPT, SESSION, and PAIR. Both SESSION and PAIR were significant (F(1,8) = 7.699, p < .05 for SESSION; F(2,16) = 4.000, p < .05 for PAIR) but not EXCERPT. No interactions were significant. 4. Discussion: Despite interpersonal differences, the results show that on average the intense training did change one’s taste in nuanced pedaling, away from the usual approach utilizing only the damper pedal, further challenging modern-day conventions. This finding is noteworthy considering that before the training started all except one participant were unfamiliar with this specific fortepiano and the French performance practices of the period, despite their significant experience playing on historical instruments. It will be all the more interesting to repeat the training and experiment with modern pianists who are less familiar or unfamiliar with historically informed practice.

Subjects: Aesthetics / preference, Musical expertise

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-73: Does Musical Training Protect Against Auditory Distractions?

Katherine M Vukovics*(1), Emily Elliott(1), Yiqing Ma(1), David J Baker(1)
1:Louisiana State University

Auditory stimuli can impair performance, and this has often been studied in the context of serial recall tasks. For example, Elliott and Cowan (2005) found that although the presence of irrelevant sounds did have a significant negative effect on performance in serial recall tasks, the level of susceptibility to the irrelevant sounds was independent of the level of memory span. However, they did find a reliable range of susceptibility, suggesting the existence of possible mediating variables. One such variable may be experience with music, and to explore this hypothesis, we assessed both musical ability and musical sophistication, similar to the approach in prior work (Baker et al., in press). In this preregistered study, we examine the relationship between individual differences in working memory capacity, the size of the irrelevant sound effect, musical sophistication and musical ability. Will an individual’s level of musical sophistication influence the magnitude of the disruption caused by irrelevant sounds? To measure working memory capacity (WMC), participants completed one block each of the Operation Span, Symmetry Span, and Rotation Span Tasks (Foster et al., 2005). The irrelevant sound effect (ISE) was measured by performance in a serial recall task under three different conditions: steady-state tones, changing-state tones, and silence. Musical sophistication was measured by the self-report items of the Gold-MSI (Müllensiefen et al., 2014) and musical abilities were measured by the new, adaptive versions of the melodic discrimination and beat alignment tests (Harrison & Müllensiefen, 2017; 2018). Participants were recruited from the School of Music and the Department of Psychology, to ensure a broad sample of individual differences in musical experiences. The analysis approach includes examining correlations between the WMC and ISE tasks and employing a mediational analysis of WMC predicting the size of the ISE, with musical sophistication scores from the Gold-MSI as the mediator.
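
To clarify what such a mediational analysis involves, here is a minimal classic three-regression sketch in Python; the file name and column names are hypothetical, and in practice bootstrapped confidence intervals for the indirect effect would be preferred over this bare decomposition.

```python
# Minimal sketch (hypothetical column names): does musical sophistication
# mediate the relation between working memory capacity (WMC) and the size
# of the irrelevant sound effect (ISE)?
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("participants.csv")   # columns: wmc, ise, gold_msi

a = smf.ols("gold_msi ~ wmc", data=df).fit()        # path a: X -> M
c = smf.ols("ise ~ wmc", data=df).fit()             # total effect c
b = smf.ols("ise ~ wmc + gold_msi", data=df).fit()  # paths b and c'

indirect = a.params["wmc"] * b.params["gold_msi"]   # a * b (indirect effect)
print("total c:", c.params["wmc"],
      "direct c':", b.params["wmc"],
      "indirect a*b:", indirect)
```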

Subjects: Memory, Musical expertise

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-75: An EEG Study of Speech and Music Processing in Children with Autism Spectrum Disorder

Sylvie Goldman*(1), Joseph Isler(1), Natasha Yamane(1), Sophia Wyne(2), Michael Myers(2), Nim Tottenham(3)
1:Columbia University Medical Center, 2:Columbia University Medical Center, 3:Columbia University

Music and language in autism spectrum disorder (ASD) have been examined as clinical markers; however, few studies have systematically investigated their neurological underpinnings. Longitudinal studies and interventions emphasize speech acquisition and often include music therapy, yet the underlying neurophysiological mechanisms of speech versus music processing remain unknown. The majority of children with ASD present cognitive, emotional, and language impairments, with a large proportion remaining nonverbal. Despite these deficits, most children with ASD show preserved musicality, evidenced by heightened positive affect. Electroencephalography (EEG) is a valuable measure of atypical electrocortical activity in ASD. Existing work demonstrates an overlap among brain regions involved in speech, music, and emotional processing. However, few studies have examined EEG activity underlying these interactions. In light of these findings, the goal of this study was to use EEG to analyze neural responses to matched musical and spoken stimuli in children aged 4 to 6 years. Typically developing (TD) children and sex- and age-matched children with ASD with varying levels of cognitive functioning were recruited. Children were tested in a child-friendly pediatric neurology clinic by psychologists with experience in autism. Children sitting on their mothers’ laps were presented with low-impact visual stimuli while listening to two counterbalanced sets of four spoken and sung familiar children’s songs. Testing was performed with custom stimulus presentation software and a portable 128-electrode EEG acquisition system. Preliminary results confirm the feasibility of our protocol in young children and identify different profiles of EEG power spectra within and between the two groups. Children with ASD had greater variability in their log-transformed spectral power compared to the TD group, pointing to a potential, novel electrocortical signature of ASD. Our work will further address brain connectivity and contribute to our understanding of the effect of music on neuronal processing in order to design targeted music-based interventions.

Subjects: Music and development, Neuroscientific approach

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-77: Cortical thickness and beat processing ability in patients with schizophrenia

Karin Matsushita*(1), Ryosuke Tarumi(1), Yoshihiro Noda(1), Shiori Honda(1), Ryo Ochi(1), Natsumi Nomiyama(1), Sakiko Tsugawa(1), Patrick E Savage(1), Shinichiro Nakajima(1), Masaru Mimura(1), Shinya Fujii(1)
1:Keio University

Background: People with amusia show cortical thinning in the medial orbital frontal gyrus, anterior cingulate gyrus, and inferior temporal gyrus. It has been noted that two thirds of patients with schizophrenia show symptoms of amusia and that they show abnormalities of cortical thickness in the frontal and temporal lobes. However, the link between beat processing ability and cortical thickness remains unclear in schizophrenia. Thus, we investigated the relationship between beat processing ability and cortical thickness in patients with schizophrenia. Methods: Fifty-five patients with schizophrenia and 25 age- and sex-matched healthy controls participated in this study. We used the Harvard Beat Assessment Test (H-BAT), the Positive and Negative Syndrome Scale (PANSS), and the Simpson-Angus Scale (SAS) to assess beat-processing ability, clinical severity, and extrapyramidal symptoms, respectively. Participants were scanned with a GE 3.0T MRI scanner and cortical thickness was evaluated with the FreeSurfer software. Pearson’s correlation coefficients were calculated between cortical thickness and H-BAT scores in the control group. For the patient group, partial correlation coefficients were calculated controlling for PANSS and SAS scores and daily antipsychotic dose. Results: The cortical thickness of the medial orbitofrontal cortex was negatively correlated with H-BAT scores in the patient group (r=-0.45, p<0.001), and positively correlated in the control group (r=0.46, p=0.02). No other region showed these correlations. Conclusion: Cortical thinning in the medial orbitofrontal cortex may relate to impaired beat-processing ability in patients with schizophrenia. Increased cortical thickness may also explain individual differences in beat processing ability in healthy controls. Future research is needed to interpret this relationship. This research will ultimately contribute to the elucidation of the relationship between the symptoms of amusia and the pathophysiology of schizophrenia.

Subjects: Health and well-being, Schizophrenia

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P1-79: “Donut” Studies as a Simplified Paradigm for Music Cognition Research

Christopher W White(1)
1:University of Massachusetts Amherst

This paper argues for a simplified experimental paradigm especially suited to behavioral research performed in music performance schools. I suggest that since potential participants are plentiful and accessible but unlikely to commit to a traditional subject pool, and the risk associated with statistical errors is low, short studies using less-formal infrastructures and simpler statistical methods are better equipped to leverage the resources of a music school. Because of the focus on practicing, participation in ensembles, and the large slate of required music classes, undergraduate performance majors spend a lot of time within the walls of their institution’s music building. While this should make recruitment for experimental participation relatively easy, I have found it difficult to recruit these students into a subject pool because of their extended time commitments. However, because these students’ schedules do afford plenty of short breaks within the music building, I have found it easier to recruit participation in 3-5 minute studies incentivized by a small snack – for instance, a donut. These “donut studies” naturally require simpler experimental design: instead of an extended battery of tests, these designs only accommodate a handful of tasks. While participation increases, the limited responses per participant can hurt a study’s statistical power and can admit confounds. I argue that, instead of relying on complicated, subtle, and powerful statistical methods, these designs are best suited to simpler approaches, e.g., chi-square tests and binomial distributions. Of course, sacrificing statistical power and admitting confounds allows for potential error. However, I argue that as a “low-risk field” – our studies pose minimal risk to humans (Huron 1999) – we should prioritize producing provocative research over curating and policing potential errors. Simplified models and accessible designs also allow for more outreach to and collaboration with our colleagues in music theory, musicology, and performance.
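
As an illustration of the simple statistics the paradigm favors, the sketch below runs a binomial test on a two-alternative task and a chi-square test on response counts; the numbers are toy values, not data from the paper.

```python
# Minimal sketch (toy numbers): the kind of simple tests suited to short,
# low-risk "donut studies".
from scipy.stats import binomtest, chisquare

# 38 of 60 participants chose the "expected" option in a two-alternative task.
print(binomtest(38, n=60, p=0.5).pvalue)

# Observed counts across three response categories vs. a uniform expectation.
print(chisquare([28, 18, 14], f_exp=[20, 20, 20]).pvalue)
```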

Subjects: Not Listed, Experimental Design

When: 3:30-4:45 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

Poster session P2

4:45-6:00 PM in Rosenthal Pavilion

P2-2: Toward an Understanding of Amotivation and Role of Social Support in Music Education

Hyesoo Yoo(1)
1:Virginia Tech

The aim of this study was to investigate associations among student perceptions of teachers’ inadequate support, amotivation subtypes, and intentions for future music participation. Based on Self-Determination Theory (SDT), Legault et al. (2006) developed a taxonomy of amotivation from multifaceted perspectives: (a) Deficient ability beliefs (low perceived competence); (b) Deficient effort beliefs (lack of desire to exert effort); (c) Insufficient values (devaluing academic tasks); and (d) Unappealing characteristics of the tasks (lack of interest in class participation). Participants were 480 students from eight elementary schools in Eastern New York. Participants completed a multi-part questionnaire that consisted of (a) the Amotivation Music Inventory (AMI), (b) the Interpersonal Behavior Scale (Shen, Li, Sun & Rukavina, 2010), and (c) intention for future music education participation (Shen, Li, Sun & Rukavina, 2010). Participants responded to each item on a 7-point scale ranging from 1 (very unlikely) to 7 (very likely). Structural equation modeling analysis revealed that student perceptions of teachers’ inadequate support in autonomy, competence, and relatedness were associated with different subtypes of amotivation. For example, lack of competence support was the strongest predictor of all subtypes of amotivation. Lack of relatedness support was significantly related to insufficient values, deficient effort beliefs, and unappealing task characteristics. Furthermore, unappealing characteristics of school tasks demonstrated the strongest association with students’ intention to participate in future music education. That is, when students lack interest in music, there is a greater chance they will not take music classes in the future. The findings indicate that lack of support from teachers may act as a significant factor in students’ amotivation and their intentions for future music participation. The multidimensional nature of amotivation should be identified and instructionally addressed to design effective motivational strategies to enhance students’ involvement.

Subjects: Music education/pedagogy/learning, psychological process

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-4: Music, social engagement, and empathic decision making

Aaron Colverson(1)
1:University of Florida

Music and empathy are core components of social engagement. However, how music, empathy, and social engagement are linked is not clear. Similar and adjacent functional brain systems are required for the production and understanding of music, the processing of emotion, and engagement in social behavior. Activity in these brain systems is often reflected in autonomic features, including the dynamic behavior of the parasympathetic and sympathetic nervous systems. The degree of engagement with music, or the autonomic response to music, may influence empathic decision making, and this engagement may be reflected in the behavior of the autonomic nervous system. Thus, the current experiment was designed to address these relationships. Healthy undergraduate students (N = 60) of the University of Florida participated in Cyberball, a task sensitive to differences in empathic decision making, while listening to and not listening to different types of music. Results indicated that there was no effect of music condition on autonomic function and, further, no interaction between the empathic decision-making results and the decline in sympathetic nervous system activity. Future work will address these incongruencies by adapting the Cyberball task and music settings to involve more engaging and dynamic activity over the entire experiment.

Subjects: Physiological measurement, Aesthetics / preference; Cross-cultural comparisons/non-Western music; Cross-domain effects; Emotion

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-6: The Sound of Music: Stimulus Features that Differentiate Organized Sound Sequence Categories

Elizabeth Phillips(1)
1:UNR

Studies from cognitive science have shown that there are many shared neural mechanisms for processing music, language, and organized environmental sounds (OES). However, the pathway utilized to process each category of sound sequences is nonetheless distinct from the others. What is still unknown is how the brain differentiates sound sequences to be processed by the ‘proper’ pathway. What quantitative features of a sound sequence are necessary and sufficient to categorically differentiate it? This study examined candidate properties of music, language, and OES which the brain could potentially utilize to categorize sound sequences for differential processing. 632 existing audio tracks from around the world were sampled from online databases to represent either music, urban OES, natural OES, or language. They were analyzed frame-by-frame for 393 features using MIRtoolbox (a quantitative audio analysis software package), and the average value across all frames (for each feature) was determined for each track. A factor analysis showed that the measured variables relied on four underlying factors: a Spectral Factor, a Tonal Factor, an Energy Factor, and a Flux Factor. A multivariate analysis of variance was used to compare the effect each variable had on each category. Almost every variable tested had a significant main effect, and a Tukey post-hoc test also identified multiple significant interactions. The amount of spectral kurtosis was useful for differentiating music and only music; the amplitude of note onsets and the overall mode (major vs minor) could only differentiate language; and the duration of the attack phase and the second Mel-frequency cepstral coefficient could only differentiate natural OES. These results point to significant correlations between the category of a sound sequence and its average values for these variables, and thus to information the brain could potentially exploit to distinguish music from other sound categories.
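
For readers who do not use MATLAB, here is a rough Python analogue (librosa + scikit-learn) of the pipeline described above: frame-wise features averaged per track, followed by a factor analysis across tracks. The feature subset, directory path, and number of factors are assumptions for illustration; the study itself used MIRtoolbox's 393 features.

```python
# Minimal sketch (assumed feature subset and file locations): per-track
# averaged frame-wise features, then a factor analysis across tracks.
import glob
import numpy as np
import librosa
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

def track_features(path):
    """Frame-wise features averaged over the whole track (one row per track)."""
    y, sr = librosa.load(path, sr=None, mono=True)
    feats = np.vstack([
        librosa.feature.spectral_centroid(y=y, sr=sr),
        librosa.feature.spectral_flatness(y=y),
        librosa.feature.rms(y=y),
        librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13),
    ])
    return feats.mean(axis=1)

paths = sorted(glob.glob("tracks/*.wav"))   # placeholder corpus location
X = StandardScaler().fit_transform(np.array([track_features(p) for p in paths]))

fa = FactorAnalysis(n_components=4, random_state=0)  # four underlying factors
scores = fa.fit_transform(X)    # per-track factor scores
loadings = fa.components_       # how each feature loads on each factor
```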

Subjects: Music information retrieval, Computational approach; Psychoacoustics

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-8: Musical syntax: can tonal functions elicit metrical structure?

Alexandre Celma Miralles*(1), Carlota Pagès(2), Juan M Toro(3)
1:Universitat Pompeu Fabra, 2:Center for Brain and Cognition, 3:Universitat Pompeu Fabra & ICREA

Music is hierarchically structured in the rhythmic and the harmonic domains. On the one hand, the beat is organized in metrical patterns that cyclically alternate strong and weak positions. Interestingly, neural populations can entrain to the underlying beat of rhythms as well as to their hierarchical metrical structures (Nozaradan, 2014). On the other hand, the melodies and chords of Western tonal music are organized following a hierarchy of stability. The combination of stable and unstable chords generates patterns of tension and resolution (i.e. dominant-tonic cadences) that naturally group around the tonal center. In the present study, we aim to elucidate whether the hierarchy of tonal syntax can boost metrical structures. To this end, we presented participants with sequences of chord progressions and analyzed the EEG recordings following a frequency-tagging approach. The harmonic progressions followed either a ternary or a quaternary metrical structure, and all chords were presented at a constant rate of 3Hz. We used tonic-subdominant-dominant progressions to elicit a ternary structure and tonic-submediant-subdominant-dominant progressions to elicit a quaternary structure. To assess the effect of tonal hierarchy, we designed two conditions: the first alternated chords in root position with first inversions, and the second alternated root positions with functionally-equivalent chords. These conditions were compared to their respective control conditions, in which the chords of each progression were shuffled in a pseudo-random manner. Preliminary EEG analyses revealed clear peaks at the frequency of chord appearance (the beat) for all conditions and controls. Neural entrainment at the frequencies of the ternary and quaternary meters appeared for the second condition, suggesting that the alternation of analogous functional degrees elicited metrical groupings. Entrainment at the metrical frequencies did not appear in the frequency spectra of the controls. These findings suggest that tonal syntax could elicit neural entrainment to metrical structures.
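
To make the frequency-tagging logic explicit, the small sketch below derives the frequencies at which metrical entrainment would be expected given the stated 3 Hz chord rate; this is simple arithmetic implied by the design, not an analysis script from the study.

```python
# Minimal sketch: chords appear at 3 Hz, so a three-chord progression
# (tonic-subdominant-dominant) recurs at 3/3 = 1.00 Hz and a four-chord
# progression (tonic-submediant-subdominant-dominant) at 3/4 = 0.75 Hz.
# Neural entrainment to meter would appear as spectral peaks at these values.
chord_rate_hz = 3.0
ternary_meter_hz = chord_rate_hz / 3     # 1.00 Hz
quaternary_meter_hz = chord_rate_hz / 4  # 0.75 Hz
print(ternary_meter_hz, quaternary_meter_hz)
```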

Subjects: Harmony and tonality, Beat, rhythm, and meter; Cross-domain effects; Neuroscientific approach; Physiological measurement

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-10: Fundamentally different? Variations between musicians and non-musicians in a pitch discrimination task

Lauren H Vomberg*(1), John Vokey(2), Scott Allen(1)
1:University of Lethbridge, 2:University of Queensland

Musicians are better able to discriminate changes in pitch than non-musicians, particularly when they are judging tones lacking the fundamental frequency. Understanding the processes by which musicians complete this task may allow us to provide support for non-musicians, helping them to obtain musician-level accuracy. Participants were asked to judge whether a test tone (presented second) was higher or lower than a reference tone (presented first). We have previously found that musicians are likely to spontaneously hum while completing this task, and when specifically asked to hum out loud, their performance is more accurate. To investigate which aspects of humming are important, we tested musicians and non-musicians across four conditions, three designed to directly control humming: no specific instructions (so they could hum if they chose to), being specifically asked to hum, and speeded response (leaving no time to hum). The fourth condition attempted to replicate the subtle muscular feedback obtained from humming in an embodied manner. Rather than simply pressing one button for up and another for down (as in the other conditions), participants responded by moving a vertical slider up or down on the computer screen to the extent to which they thought the tones differed. Results indicated similar responses regardless of condition, with musician accuracy being significantly higher than non-musician accuracy in identifying the direction of the tones.

Subjects: Pitch, Embodied cognition

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-12: Rejuvenating the Memory of Elderly People through Music: A Case Study of Elderly People’s Homes in Lagos, Nigeria

Florence E Nweke(1)
1:Department of Creative Arts, Faculty of Arts, University of Lagos, Nigeria

In Nigeria, life expectancy is very low: according to World Health Organization data published in 2018, the average life expectancy of Nigerians is 55.2 years. Those who manage to advance in age are celebrated and well respected. Nigerian culture and tradition accord great respect to elderly people, but the advent of westernization in modern-day Nigeria has disrupted the traditional family setup in which the elderly are cared for by their family members. Westernization has brought about the elderly being sent to old people’s homes, with the attendant issue of being dislocated from their extended family structures; life there is characterised by boredom and alienation from society. This study, through a participant observation method, used music as a viable tool for aiding memory recall and alleviating stress and boredom in the elderly. The study engaged the elderly people in Regina Mundi Home for the Elderly, Mushin-Lagos, in musical performances using traditional and popular tunes. The result was startling: the elderly participants suddenly sprang back to life, shuffling their feet to the musical tunes, and in the twinkling of an eye selected participants showed positive facial emotions. The study included a group music performance activity, as well as singing in responsorial style in a musical performance organized by the study involving 15 elderly people and the research group; by the end of the study, social functioning and feelings of belonging were actualised among the elderly people. The study engaged the respondents in several musical activities that required them to move their bodies, perceive music, and respond to music. The respondents’ reactions to music were videotaped and recorded for proper documentation. The study found that respondents who were 80 years of age and above, and who had been somewhat alienated from society, suddenly regained their memories, began sharing their views about the music, became energized, and asked for more music. The implication of the study is that involvement in music can improve memory recall and spatial and body concepts; music aids memory recall and promotes physical activity that strengthens metabolism, thereby reducing boredom to the barest minimum.

Subjects: Music therapy, Music and society

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-14: Involuntary Musical Imagery Characteristics Across the Adult Lifespan

Georgia Floridou*(1), Victoria J Williamson(2), Daniel Müllensiefen(3)
1:University of Sheffield, 2:Department of Music, University of Sheffield, 3:Goldsmiths

Research on involuntary musical imagery (INMI or “earworm”, i.e. the spontaneous and repeated experience of music in the mind) has provided evidence regarding its characteristics and situational antecedents, intra-musical features, and individual differences, such as personality and musicality. However, it remains unclear how basic INMI characteristics such as frequency, vividness, and valence might change across the adult lifespan. Our recent research in this area has found age-related changes in INMI frequency and vividness, but not in valence. However, that study used retrospective self-reports; an important question following from this research is whether these findings can be validated by daily-life measures that are timed to the moment INMI occurs. The main aim of the present study is to investigate the relationship between aging and INMI characteristics such as frequency, vividness, and valence across the lifespan using a digital smartphone diary thought-sampling method. An additional aim is to investigate the role of factors such as musical training and engagement, and attentional resources, in any age-related changes. Based on insights from previous studies, we predict that INMI frequency and vividness will decline with increasing age but that there will be no relationship with valence. Furthermore, we expect that musical training and engagement as well as attentional control will account for any of the observed relationships between age and INMI frequency. Data collection is ongoing at present. Young (18-34 years) and older (65-85 years) adult participants are contacted over 3 days, randomly 7 times each day, to report on their INMI experiences and characteristics. The results will provide the first demonstration of digital thought-sampling methodology in the study of aging and INMI in daily life and will shed new light on theories and research in cognitive aging and INMI, as well as related forms of involuntary cognition such as mind wandering and semantic memories.

Subjects: Memory, cognitive aging

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-16: Interaction between music genre and musical training during reading comprehension

Dominique T Vuvan*(1), Helen Gray-Bauer(2)
1:Skidmore College & International Laboratory for Brain, Music, and Sound Research, 2:Skidmore College

Incidental music has been shown to impair cognitive performance. This impairment has been exhibited in both musicians and non-musicians (Patston & Tippet, 2011; Yang et al., 2008). Additionally, it has been shown that incidental sound, including background voices and lyrical music, has a detrimental effect on one’s reading ability (Vasilev et al., 2018). The current study investigates whether one’s musical background alters the extent to which incidental music decreases reading comprehension. We compared the reading comprehension abilities of classical musicians, jazz musicians, and non-musicians while they listened to classical music, jazz music, or pink noise. We hypothesized that there would be a main effect of musical background, such that trained musicians would perform better in all conditions compared to non-musicians. Furthermore, we hypothesized an interaction between musical background and listening condition, such that jazz musicians would perform worst when listening to jazz music and classical musicians would perform worst when listening to classical music. Current data suggest that participants performed best while listening to classical music, but provide inconclusive evidence for our hypothesized effects. This study will give insight into how musical training and experience can affect one’s interaction with the environment.

Subjects: Musical expertise, Music education/pedagogy/learning

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-18: Evaluation of Bimanual Coordination: Enhanced Synchronization and Accuracy in Music

Adrian Iordache*(1), Jennifer A Bugos(1)
1:University of South Florida

Musical training may impact bimanual coordination (Haslinger et al. 2004; Martins et al. 2018); yet there are few measures that account for dual-task processing in motor coordination. The Bimanual Coordination Task (BMCT), a temporal sequential coordination task, is based upon information-processing theory and dynamic systems theory. Information-processing theory accounts for working memory demands, which include retention and recall of the metrical pulse. Error correction in the BMCT accounts for the uncontrolled variability in dynamic systems theory (Repp 2005; Van der Steen & Keller 2013). The BMCT was shown to have strong reliability in a large sample of older adults (r =.81). The purpose of this study was to evaluate differences between young adult musicians and non-musicians on a fine and gross motor bimanual coordination task. Fifty-four participants (27 musicians, 27 non-musicians) are being recruited from a major research university in the Southeastern United States. The BMCT includes two subtests that measure fine motor and gross motor bimanual coordination abilities in terms of pattern accuracy, hand synchronization, and timing control. Preliminary data revealed that musicians outperformed non-musicians in both synchronization and accuracy on the fine motor subtest (t = -3.09, p=.014) and the gross motor subtest (t = -5.09, p=.0001). Data collection will be complete prior to presentation. The BMCT may be used to differentiate fine and gross motor coordination, allowing changes in motor coordination to be dissected based upon instrumentation and training programs. This measurement tool can be used to assess motor deficiencies in clinical as well as healthy populations. Bimanual coordination assessments can elucidate the role of music training in coordination, learning, and development.

Subjects: Physiological measurement, Beat, rhythm, and meter; Memory; Music and development; Music education/pedagogy/learning

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-20: High intellectual abilities might not be necessary for early and exceptional musical talent

Chanel Marion-St-Onge*(1), Megha Sharda(1), Margot Charignon(1), Isabelle Peretz(1)
1:University of Montreal

Musical prodigies are exceptional musicians who attain outstanding achievement in music before adolescence. It has been suggested that musically gifted individuals may have certain behavioural, cognitive or neural predispositions that allow them to learn faster and stand out from their peers. In the current study, our goal was to test whether such predispositions manifest in the form of higher IQ or working memory, which have previously been associated with musical prodigiousness (Ruthsatz, Ruthsatz-Stephens and Ruthsatz, 2014). Because musical experience is known to have an effect on cognitive abilities, we sought to determine whether musical prodigies differ in intellectual abilities from musicians with similar musical experience. Our sample consisted of 20 musical prodigies who received special recognition for their talent at a young age (e.g. by winning national competitions or appearing in the media; 6 females, age 25.45 ± 9.28 years) and 20 control musicians (7 females, age 24.49 ± 6.74 years). The groups were matched in terms of age, age of onset of musical training, years of musical experience and years of regular education. On a standardized IQ test (WAIS-IV), the musical prodigies obtained a global IQ (M = 115.06, SD = 13.48) comparable to controls (M = 113.11, SD = 15.24; t(35) = .411, p = .683). Prodigies did not show significantly higher working memory (WAIS-IV WM index, M = 107.25, SD = 13.54) compared to controls (M = 103.95, SD = 22.80; t(37) = .553, p = .583). Considering these results, it seems that outstanding musical talent does not require exceptional intellectual abilities. Other intrinsic factors might have allowed the prodigies to express their talent early in life. These predispositions might lie in more domain-specific skills, such as sensorimotor learning, rather than in general intellectual ability or working memory.

Subjects: Musical expertise

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-22: Reduced pain while listening to music is influenced by music attribute preferences

Krzysztof Basiński*(1), Agata Zdun-Ryżewska(1), Mikołaj Majkowicz(2)
1:Medical University of Gdańsk, 2:Pomeranian University in Słupsk

Music-induced analgesia (MIA) is a well-documented phenomenon in which listening to music influences pain perception. Surprisingly little research has been performed to determine what characteristics of music are optimal for MIA. Here we used the model of music attribute preferences proposed by Greenberg et al. (2016) to study the relationships between an individual’s preferences for music attributes and the amount of analgesia provided by music. The model proposes three dimensions of music attribute preferences: arousal, valence and depth. N = 60 participants underwent experimental pain stimulation (using the cold pressor task) while listening to a variety of short musical excerpts from different genres. Results of previous studies were used to choose excerpts that scored high on each of the three music attribute dimensions while being novel to the participants. Results showed significantly lower pain scores for music scoring high on arousal (p < .01) and depth (p < .05) in comparison to a noise condition. Regression analysis showed a significant effect of individual preference on pain in the valence condition (R2 = .108, beta = .213, p < .05) but not in the arousal and depth conditions. These results suggest that individual preferences for music attributes play a significant role in MIA. Future studies on music and pain should control for individual music preferences. The results of this study may also contribute to the development of novel evidence-based therapies for chronic pain. This is especially relevant in the context of recommendation algorithms used by music streaming services.

Subjects: Health and well-being, Aesthetics / preference

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-24: Tapping to your own beat: experimental setup for exploring subjective tacti distribution and pulse clarity

Martin A Miguel*(1), Mariano Sigman(2), Diego Fernandez Slezak(1)
1:LIAA, DC, UBA, 2:LNI, UTDT

Pulse clarity and rhythm complexity are two aspects of a rhythm that have been related to mechanisms that generate affect from music. Experimental setups that measure these concepts have used questionnaires, sensitivity tests to changes in a rhythm, and, most commonly, measures of precision in synchronization and reproduction tasks. In the last two cases, participants are required to produce a specific rhythm by tapping along with either a defined metronome or a rhythmic pattern. These setups do not provide information about the subjective tactus experienced by the participants. For example, we cannot know whether various tacti interpretations are possible or whether the induced beat changes throughout the passage. In this work we propose a new experimental setup that asks participants to tap to the beat subjectively induced by a rhythmic passage, as if emulating the informal experience of listening to a song and tapping along. The setup allows participants to select whichever beat they find most organic and even change it while listening to the stimuli. We tested whether the setup allows looking into pulse clarity by asking participants to tap the beat of 30 rhythmic passages and rate the difficulty of the task for each trial (N=27). The passages were mostly non-isochronous rhythms with varying levels of complexity selected from Povel and Essens (1985) and Fitch and Rosenfeld (2007). We calculated a pulse clarity score by measuring each participant’s isochrony with their own pulse. We also defined a more general pulse clarity measure by inspecting the coherence between subjects. Both metrics correlated significantly with the subjective tapping difficulty reported in the questionnaire. This novel setup opens a window to study subjective tacti and their distribution across a passage, e.g. rhythmic passages with high agreement or examples where many tacti are possible.
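
As a rough, hedged illustration of how tapping-based scores of this kind can be computed (not the authors' implementation; the scoring formulas, parameters, and function names below are assumptions), a per-trial isochrony measure and a between-subject agreement measure might look like this:

```python
import numpy as np

def pulse_clarity_from_taps(tap_times):
    """Toy per-trial pulse-clarity score: 1 minus the coefficient of variation of the
    inter-tap intervals, so perfectly isochronous tapping scores 1.0 (illustrative only)."""
    itis = np.diff(np.sort(np.asarray(tap_times, dtype=float)))
    return 1.0 - itis.std(ddof=1) / itis.mean()

def between_subject_coherence(tap_times_per_subject, bin_s=0.05, duration_s=None):
    """Toy between-subject measure: pool everyone's taps into small time bins and
    report how concentrated the pooled distribution is (1 = all taps in one bin)."""
    all_taps = np.concatenate([np.asarray(t, dtype=float) for t in tap_times_per_subject])
    if duration_s is None:
        duration_s = all_taps.max()
    counts, _ = np.histogram(all_taps, bins=np.arange(0.0, duration_s + bin_s, bin_s))
    p = counts / counts.sum()
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum()
    return 1.0 - entropy / np.log2(len(counts))
```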

Subjects: Physiological measurement, Beat, rhythm, and meter; Music information retrieval

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-26: Measuring musical expectation using reaction time

Joshua Albrecht*(1), Juan Pablo Correa-Ortega(2)
1:The University of Mary Hardin-Baylor, 2:The Autonomous University of Aguascalientes, Mexico

Motivation: According to information theory (Shannon 1948), unexpected signals convey more information and should take longer to process (Levy 2008), resulting in slower reaction times (Bartlette 2007). We hypothesize that listeners will have more cognitive capacity, and will therefore react more quickly to extramusical tasks, at cadences and in more familiar music. Methodology: For each trial, participants first hear a test alarm selected randomly from a list of 16 alarms and then listen to a musical excerpt. During playback, 1-2 test and 1-2 distractor alarms play; participants must press a button as quickly as possible after the test sound. Our experiment tests two populations, college music students in Texas and central Mexico, and three styles of music: Bluegrass (familiar to Texans but not Mexicans), Son Huasteco (the reverse), and electroacoustic music (familiar to neither). Alarm sounds are balanced so that half the recordings play test alarms at cadences and the other half are randomized, resulting in a 2 x 3 balanced design with three musical styles and two alarm locations. We hypothesize reversed reaction-time effects across the two populations for the culturally familiar styles (but similar effects in both populations for electroacoustic music), and faster reactions at cadences (magnified by cultural familiarity). Results: Data collection is underway for Mexican participants, so cultural comparisons cannot yet be drawn. For Texas participants, mean reaction times did not differ significantly among the three styles (electroacoustic = 610 ms, Bluegrass = 600 ms, Son = 598 ms), but reactions were significantly faster for the two ‘tonal’ styles combined than for electroacoustic music (p = .03). Within the tonal styles, reaction times at cadences were not statistically different from those at other locations (p = .9). Implications: These negative results are interesting, suggesting that reaction time may not be a useful indirect measure of expectation. However, the tonal styles may be similar enough that they trigger the same sets of expectations. We are quite interested to compare differences between the two cultural populations.

Subjects: Expectation, Cross-cultural comparisons/non-Western music

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-28: Valence Specific Emotional Perception of Music in Individuals with Autism Spectrum Disorder

Hannah Bachmann(1), Lindsay Warrenburg*(1), Daniel Shanahan(1)
1:Ohio State University

Neurotypical individuals and those with autism spectrum disorder (ASD) have been observed to exhibit subtle differences in emotional perception (e.g., Golan, Baron-Cohen, Hill, & Rutherford, 2007). Although studies have documented these differences with verbal and visual stimuli, few have examined the effect of music on emotional perception in individuals with ASD. Additionally, some studies comparing music-related affect and valence suggest that individuals with ASD experience emotions differently than age- and gender-matched controls (Kopec, Hillier, & Frye, 2014). The current study recruits participants with ASD and a neurotypical control group. The participants listen to musical excerpts that have been previously shown to represent grief, melancholy, happiness, and tenderness (e.g., Warrenburg & Huron, forthcoming). Listeners are then asked to determine which emotion(s) they perceived in the musical stimuli. The two variables of interest in this study are (1) the reaction time from the moment the question is presented until the participant makes a decision about the emotion(s) represented in the excerpts and (2) the accuracy of the emotion selections compared to previous research. Student’s t-tests compare the ASD and non-ASD participant groups on these two measures. Finally, we compare the two participant groups with respect to the four emotions represented, in order to determine whether there are differential effects across negatively- and positively-valenced emotions. The study is currently ongoing, with an expected cohort of 150 participants.

Subjects: Processing disorders, Emotion; Music and society; Music theory; Music therapy

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-30: The Effect of a Drumming-to-Speech Intervention on Prosody Perception in Preschoolers with Cochlear Implants: An Exploratory Study

Jessica MacLean(1)
1:Frost School of Music, University of Miami

Children who utilize cochlear implants (CIs) often have trouble detecting prosody, an element of speech that uses variations in timing, pitch, and dynamics to communicate meaning. Without recognizing prosody, they can miss conversational elements (e.g., sarcasm) and may not communicate effectively with others. Children with CIs match peers in measures of rhythm perception, but fall behind in pitch perception. Research suggests that improvements in speech rhythm perception can lead to improvements in prosody perception. In this exploratory study I examined the effect of a novel Drumming-to-Speech (DTS) intervention that facilitates practice in identifying stressed syllables in speech to improve prosody perception in children with CIs. In addition, I explored the impact of the intervention on music perception and examined relationships between demographic variables (e.g. hearing age) and synchronization ability with intervention outcomes. Twelve preschoolers with CIs completed the DTS intervention, which included four weeks of individual music therapy sessions and at-home practice. Sessions incorporated drumming to stressed syllables in speech and rhymes, as well as practice synchronizing to speech and drumming. Participants completed assessments of music and prosody perception pre- and post-intervention. I conducted a series of nonparametric related-samples Wilcoxon signed-rank tests to assess intervention efficacy, as well as a series of Spearman’s rank correlations to examine relationships between demographic and synchronization variables and intervention outcomes. While participants did not improve in linguistic prosody perception, they did show gains in affective prosody perception, though these did not reach significance. In addition, participants improved significantly in synchronization variability at slower tempos (more pertinent for speech perception), as well as in rhythm and melody perception. Overall, results indicate potential for the DTS intervention to improve affective prosody perception and identification of speech rhythm. Clinical implications and recommendations for intervention modifications and research in this area will also be addressed.

Subjects: Music therapy, Language and speech

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-32: Biases, Stereotypes, and Prejudices against Artificial Intelligence Music Composition

Jisang Ahn*(1), Kyungho Kim(2)
1:Bellarmine College Preparatory , 2:SK Hynix Memory Solutions

Background: Certain cutting-edge artificial intelligence systems are able to compose music that is indistinguishable to many (at least by ear) from music composed by human musicians, yet many seem to believe in the stereotype that A.I. music composition must be bland, monotonous, and odd. Aim: This research investigated people’s potential biases against A.I. musical composition and the possibility of those biases being mitigated. Method and Results: The study conducted three sets of tests, each asking 150 participants which of two similar pieces they believed was better composed: “Etude-Tableau in E flat Op.33, No.7” by Rachmaninoff or “Suite (in the style of Rachmaninoff)” by Experiments in Musical Intelligence, an A.I. developed by David Cope. These two pieces were chosen intentionally because of their similarity in overall genre and composer style in order to prevent the participants from favoring a certain piece out of personal musical genre preferences. Participants in the first test, who were not aware that one of the pieces was written by an A.I., tended to judge the two pieces evenly, with about 43% favoring the A.I. piece, 42% favoring the human piece, and 15% rating them equally. Meanwhile, participants in the second test, who were truthfully told which piece was written by the A.I., generally disfavored the A.I. composition, with only about 19% favoring the A.I. piece, 49% favoring the human piece, and 32% rating them equally. Furthermore, participants in the third test, who were intentionally misled that the A.I. piece was written by a person and that the human piece was written by an A.I., overall preferred the piece they believed was the human composition, which was, in reality, written by the A.I. Only about 13% favored what they were told was the A.I. piece (in reality human), while 52% favored what they believed was the human piece (in reality A.I.), and 35% rated them equally. In addition, before listening to and comparing the two pieces, 49% of the participants who were asked whether A.I. can compose great music despite being unable to feel emotions stated that A.I. indeed cannot compose music as great as people because it cannot feel emotions. However, after listening to and comparing the two pieces, 41% of those specific participants were apparently willing to make an exception to that stereotype by favoring the A.I. piece. Conclusion: This research observed that a considerable portion of people indeed hold negative biases against A.I. musical composition. The results concurrently suggest that while people possess preconceptions against A.I. composition, many can set that bias aside and judge the music objectively in certain cases. In the future, if people develop a more neutral mindset towards A.I. music, such music could diversify and become integrated into musical culture.

Subjects: Computer music, Psychology

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-34: There’s more to timbre than musical instruments: a meta-analysis of timbre semantics in singing voice quality perception

Charalampos Saitis*(1), Johanna Devaney(2)
1:Centre for Digital Music, Queen Mary, University of London, 2:Brooklyn College

Imagine listening to the famous soprano Maria Callas (1923–1977) singing the aria “Vissi d’arte” from Puccini’s Tosca. How would you describe the quality of her voice? When describing the timbre of musical sounds, listeners use descriptions such as bright, heavy, round, and rough, among others. In 1890, Stumpf theorized that this diverse vocabulary can be summarized, on the basis of semantic proximities, by three pairs of opposites: dark–bright, soft–rough, and full–empty. Empirical findings across many semantic differential studies from the late 1950s until today have generally confirmed that these are the salient dimensions of timbre semantics. However, most prior work has considered only orchestral instruments, with relatively little attention given to sung tones. At the same time, research on the perception of singing voice quality has primarily focused on verbal attributes associated with phonation type, voice classification, vocal register, vowel intelligibility, and vibrato. Descriptions like pressed, soprano, falsetto, hoarse, or wobble, albeit in themselves a type of timbre semantics, are essentially sound source identifiers acting as semantic descriptors. It remains an open question as to whether the timbral attributes of sung tones, that is verbal attributes that bear no source associations, can be described adequately on the basis of the bright-rough-full semantic space. We present a meta-analysis of previous research on verbal attributes of singing voice timbre that covers not only pedagogical texts but also work from music cognition, psychoacoustics, music information retrieval, musicology, and ethnomusicology. The meta-analysis lays the groundwork for a semantic differential study of sung sounds, providing a more appropriate lexicon on which to draw than simply using verbal scales from related work on instrumental timbre. The meta-analysis will be complemented by a psycholinguistic analysis of free verbalizations provided by singing teachers in a listening test and an acoustic analysis of the tested stimuli.

Subjects: Timbre, Singing

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-36: Development of Musical Skills in Underprivileged Children Enrolled in a Community-Based Music Training Program

Assal Habibi*(1), Priscilla Perez(1), Beatriz Ilari(2)
1:University of Southern California, 2:USC

Longitudinal research on the development of musical skills in underprivileged children engaged in community-based group music training programs is scarce. As part of an ongoing 5-year longitudinal study, this study investigated the development of pitch and rhythmic discrimination and beat perception in children from underserved neighborhoods in Los Angeles. Children in an El Sistema-inspired program were compared to children in community-based sports training programs and children not involved in any systematic extra-curricular program over the course of four years. Assessments were conducted once prior to training and annually thereafter. There were no differences in musical abilities among the groups prior to training. However, after 3 years of training, children from the music program performed significantly better, compared to both other groups, on pitch and rhythm discrimination tasks as measured by Gordon’s Intermediate Measures of Music Audiation. After 4 years, children from the sports-training group performed equivalently to the music group, but differences between the music and control groups remained and were more pronounced in the rhythm discrimination task. Additionally, children with music training performed significantly better on a beat perception task compared to children involved in sports training and children not involved in any systematic training. These findings suggest that participation in music programs accelerates the development of pitch and rhythmic discrimination abilities and improves beat perception. The development of these skills in childhood is critical not only for music training but also for language and communication skills.

Subjects: Music and development

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-38: Examining the role of the motor system in the vocal memory advantage

Emily A Wood*(1), Frank Russo(1)
1:Ryerson University

A series of studies by Weiss and colleagues has demonstrated a memory advantage for vocal melodies over those produced by other instruments (piano, banjo, marimba). In the current study, we investigate whether the source of this vocal memory advantage is preferential engagement of the motor system under vocal conditions relative to instrumental conditions. This type of preferential engagement may lead to a vocal-motor memory trace that supports the representation of vocal melodies. If this interpretation is correct, then the vocal memory advantage should be interrupted by introducing motor interference during encoding of melodies. We accomplish this by having participants engage in articulatory suppression—isochronous production of a task-irrelevant word or syllable—while listening to melodies. The act of articulatory suppression should interfere with vocal-motor activity that spontaneously arises during vocal melody listening. In the first phase of the experiment, participants listened to 24 unfamiliar folk melodies presented in a vocal or piano timbre, which were encoded during listen-only or articulatory suppression conditions. In the second phase of the experiment, participants heard the original 24 melodies presented amongst 24 foils and judged whether melodies were old or new. Our preliminary results have replicated the vocal memory advantage in the listen-only condition. In addition, we find that the vocal memory advantage is eliminated in the articulatory suppression condition. Future work will include an active control condition, wherein participants are asked to tap isochronously. The intent of this active control is to control for the cognitive interference arising from a secondary task while avoiding the disruption of spontaneous vocal-motor activity thought to underpin the vocal memory advantage. We anticipate that the results of the study will have implications for the development of theory concerning the role of sensorimotor integration in the perception and representation of music.

Subjects: Music information retrieval, Embodied cognition

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-40: Memory for Harmony in Popular Music

Ivan E Jimenez*(1), Tuire Kuusi(1), Christopher Doll(2)
1:Sibelius Academy, UNIARTS Helsinki, 2:Rutgers University

[Author] (2017) recently identified 77 different chord progressions commonly found in North American and British popular music and proposed that these chord progressions can be stored in long-term memory in the form of harmonic schemata that allow listeners to hear them as stereotypical chord progressions. To investigate listeners’ ability to realize that they have previously heard a chord progression, we asked 231 listeners with various levels of musical training to rate their confidence on whether or not they had previously heard six diatonic four-chord progressions. To control for the effect of extra-harmonic features such as timbre and tempo, we instantiated the chord progressions in a way that resembled the piano of a famous song and controlled for participants’ familiarity with that song and whether they had played its chords. We found that ratings correlated with the frequency of occurrence of the progressions in hooktheory.com for the two groups of participants who had played an instrument for at least one year (players who had not played the reference piece, r(6)=.846, p=.034; players who had played the reference piece, r(6)=.924, p=.009), and to a lesser extent for the other participants (r(6)=.689, p=.130). Additionally, all “players” were more confident than the other participants about knowing songs that use more common chord progressions; thought of specific songs more often; and tended to mention songs that better matched the stimuli in harmonic terms. However, there was no effect associated with how long participants had played an instrument or with the type of instrument. Our research supports the notion that both musical training and extra-harmonic features affect listeners’ ability to realize whether they have previously heard a chord progression. In our presentation we will discuss our findings in more detail as well as their implications for our general understanding of memory for harmony.

Subjects: Harmony and tonality, Memory; Musical expertise

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-42: Stability ratings in novel, microtonal scales

Gareth Hearne(1)
1:The MARCS Institutes

Since 1979, probe tone experiments have provided insight into the cognition of tonality in music. Participants are first played context-setting stimuli, after which a probe tone is sounded and participants are asked to rate how well it “fits” the context. In previous experiments we found that ratings of fit did not differ significantly overall from ratings of stability for tones and triads probed after the uniformly, randomly distributed sounding of notes from 5 familiar and 4 unfamiliar scales as context, and that these ratings may be predicted by the spectral pitch class similarity (SPCS) of the probe to the context scales. In the current project, an exploratory analysis of sequential data from these experiments provides evidence for spectral pitch class similarity as a measure of a psycho-acoustic influence on the cognition of harmonic tonality, distinct from both short-term and long-term statistical learning. In an additional pair of experiments we test the perceived stability of probe tones and triads after a uniformly, randomly distributed sounding of notes from 8 novel scales of 22-tone Equal Temperament, diminishing the possible effect of long-term statistical learning. So far we find for this set of scales that SPCS is a much weaker predictor of stability ratings than for the previously tested scales. It seems that though SPCS describes a measurable psycho-acoustic effect, familiarity with the musical material may be required for the employment of such a mechanism in the cognition of tonality.
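
For readers unfamiliar with the measure, the sketch below is a heavily simplified, hedged illustration of a spectral pitch-class similarity computation, in the spirit of published SPCS models but not the exact model or parameters used in this work; the harmonic weighting, smoothing width, and function names are assumptions. Pitch classes are supplied in cents so that 22-tone equal temperament scales can be represented directly.

```python
import numpy as np

def spectral_pc_vector(pc_cents, n_harmonics=12, rolloff=0.75, sigma=6.0, bins=1200):
    """Build a smoothed pitch-class spectrum: each tone contributes its first
    n_harmonics, folded onto a 1200-bin (1-cent) pitch-class axis and smeared with
    a Gaussian to model pitch uncertainty. Parameters are illustrative defaults."""
    vec = np.zeros(bins)
    grid = np.arange(bins)
    for pc in np.atleast_1d(pc_cents):
        for h in range(1, n_harmonics + 1):
            loc = (pc + 1200.0 * np.log2(h)) % bins
            dist = np.minimum(np.abs(grid - loc), bins - np.abs(grid - loc))
            vec += (rolloff ** (h - 1)) * np.exp(-0.5 * (dist / sigma) ** 2)
    return vec

def spcs(probe_cents, context_cents):
    """Cosine similarity between the spectral pitch-class vectors of a probe
    (tone or triad, in cents) and a context scale (in cents)."""
    a = spectral_pc_vector(probe_cents)
    b = spectral_pc_vector(context_cents)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical usage: fit of one 22-TET step to a 7-note subset of 22-TET.
# spcs(7 * 1200 / 22, [i * 1200 / 22 for i in (0, 4, 7, 9, 13, 16, 20)])
```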

Subjects: Harmony and tonality, Psychoacoustics

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-44: Cognitive Coupling Between Stress and Meter

Alissandra Reed*(1), Braden Maxwell(2), David Temperley(1)
1:Eastman School of Music, 2:University of Rochester

It has been widely observed that there is a tendency in vocal music for stressed syllables to be aligned with strong beats of the meter. Corpus evidence from several different genres shows that composers respect this principle. This suggests that stress and meter are cognitively coupled in some way—served by interdependent mental representations, or perhaps even by a single representation. However, this phenomenon has never been tested experimentally. In our experiment, participants (music students) heard a constant 4/4 drum track, along with a series of piano melodies. Immediately following each melody, they sang it back with words. The melodies all consisted of eight eighth-notes, either “trochaic” (starting on the downbeat) or “iambic” (starting on the eighth-note beat just before the downbeat). The words were lines from 19th-century poems, either trochaic (TELL me NOT in MOURN-ful NUM-bers) or iambic (be-SIDE the LAKE be-NEATH the TREES). The stress pattern of the words could be either matched or mismatched with the meter of the melody. The vocal performances were recorded and evaluated by three independent judges (not including the authors of the study) for fluency and accuracy, with regard to words, pitch, and rhythm. (The judges did not hear the piano melodies or the drum track.) Melodies were sung more accurately when text meter and melodic meter were matched, either both iambic or both trochaic. Apparently, subjects found it difficult to sing an iambic text to a trochaic melody, or vice versa. This demonstrates the cognitive interdependence between musical meter and syllabic stress. The experiment also revealed several other interesting results, including a preference for trochaic over iambic melodies, and a slight (non-significant) preference for iambic over trochaic texts.

Subjects: Beat, rhythm, and meter, Music and language; Performance

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-46: Neural correlates of beat tracking in Williams Syndrome

Anna Kasdan*(1), Miriam Lense(2), Reyna Gordon(2)
1:Vanderbilt University, 2:Vanderbilt University Medical Center

Williams syndrome (WS) is a neurodevelopmental disorder characterized by hypersociability, heightened auditory sensitivities, and strong musical interests despite variable musical skills. Individuals with WS exhibit variability in musical beat perception, and this is associated with individual differences in social communication (Lense & Dykens, 2016). We sought to investigate the neural basis of beat tracking (important for both musical and social interactions) in these individuals. Using EEG, we tested 28 individuals with WS and 15 age-matched controls in a dynamic attending paradigm in which participants passively listened to musical rhythms with accents on either the first (condition 1) or second (condition 2) tone of the pattern, leading to distinct beat percepts. Individuals with WS and controls showed strong evoked activity in the gamma (31-55 Hz) frequency band in response to physically accented beats; these responses were time-locked at similar latencies from beat onset in both conditions (condition 1: WS, 0-136 ms, p<0.001; controls, 0-90 ms, p=0.003; condition 2: WS, 190-316 ms, p=0.002; controls, 204-298 ms, p=0.039). Additionally, significant beta (13-30 Hz) activity was found for the WS and control groups in both conditions (condition 1: WS, 0-188 ms, p<0.001; controls, 0-142 ms, p=0.005; condition 2: WS, 196-436 ms, p<0.001; controls, 168-388 ms, p<0.001). This is in line with previous research showing that meter perception driven by physical and perceived accents in tone sequences modulates beta and gamma activity in ERF brain responses in adults (Iversen et al., 2009). Individuals with WS additionally exhibited significant alpha (8-12 Hz) activity (condition 1: 0-228 ms, p<0.001; condition 2: 258-514 ms, p=0.004). Overall, brain activity was more widely distributed across the scalp for the WS group compared to controls, and results are consistent with increased attention to auditory stimuli in WS. Future analyses will explore individual differences in evoked brain activity in relation to IQ and social communication scores within WS.
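
As a hedged sketch of what "evoked band activity" can look like computationally (a generic approach, not the authors' EEG pipeline; the filter order, band edges, and names are assumptions), one common strategy is to average the epochs first and then extract the band-limited envelope:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def evoked_band_envelope(epochs, fs, band):
    """Evoked (time-locked) band activity: average trials first, band-pass the
    evoked waveform (e.g., band=(31, 55) for gamma), and take the Hilbert envelope.
    epochs: array of shape (n_trials, n_channels, n_samples); fs in Hz."""
    evoked = np.mean(epochs, axis=0)                      # (channels, samples)
    lo, hi = band[0] / (fs / 2.0), band[1] / (fs / 2.0)   # normalized band edges
    b, a = butter(4, [lo, hi], btype="band")
    filtered = filtfilt(b, a, evoked, axis=-1)
    return np.abs(hilbert(filtered, axis=-1))             # envelope per channel/sample
```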

Subjects: Beat, rhythm, and meter, Neuroscientific approach

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-48: Developing an avian model for human rhythm perception

Andrew Rouse*(1), Ani Patel(1), Mimi Kao(1)
1:Tufts University

Every culture has some form of music with a beat: a regularly occurring perceived pulse to which people can entrain movements. Beat perception is a predictive process that is thought to involve the motor system (even when not moving) and has interesting connections to disorders ranging from Parkinson’s disease to dyslexia. An animal model would allow investigation of the neural mechanisms underlying beat processing. The zebra finch is a promising candidate with excellent auditory discrimination and strong auditory-motor connections, including recurrent connections between premotor and auditory regions. Recently, it was shown that a zebra finch’s ability to predict timing in a partner’s vocalizations depends on signals from forebrain motor areas (Benichov et al. 2016). Our first step in evaluating zebra finches as a model for studying beat perception is to identify whether they can categorize auditory patterns based on temporal regularity. Previous work found that zebra finches can distinguish isochronous from irregularly-timed sequences of tones but do not generalize this ability to stimuli at novel tempi (Van der Aa et al. 2015; ten Cate et al. 2016). We developed an automated operant conditioning system to test rhythm perception, using a go/no-go paradigm with stimuli made of zebra finch song elements. Consistent with prior studies, preliminary data show that zebra finches (n=5 of 8) can learn to discriminate between isochronous and irregular sequences at rates ranging from 120 ms to 180 ms inter-onset intervals. Moreover, 4 of 4 birds generalized this discrimination to novel tempi within the trained range. The successful generalization suggests that zebra finches can categorize patterns based on regularity, and therefore may be an appropriate model for understanding human rhythm perception. We plan to investigate the role of auditory-motor connections in rhythm perception and temporal prediction via direct manipulation of song motor areas.

Subjects: Beat, rhythm, and meter, Neuroscientific approach

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-50: The Reliability of iOS Application of the Harvard Beat Assessment Test: Consistency between Different Versions of iPad Devices

Rei Konno*(1), Gottfried Schlaug(2), Patrick E Savage(1), Shinya Fujii(1)
1:Keio University, 2:Harvard University

The Harvard Beat Assessment Test (H-BAT) is a battery of tests to assess individual differences in the abilities to perceive, produce, and synchronize with a musical beat [Fujii & Schlaug, 2013]. However, the original version of the H-BAT is difficult to distribute because it requires particular hardware setups. Thus, we developed an iOS application of the H-BAT and reported its reliability previously [Konno et al., International Conference on Music Information Retrieval, 2018]. However, the number of participants was limited in the previous report, and the reliability across different iOS devices had not been tested. Here, we increased the number of participants and compared results across different iOS devices to further test the reliability of the H-BAT iOS application. The H-BAT was run on iPad Pro 9.7 inch and 10.5 inch devices (iOS 11 and iOS 10, respectively), and data from 21 participants were recorded. Ten participants were tested with the iPad 9.7 inch while eleven participants were tested with the iPad 10.5 inch. For each iPad, tapping was recorded on both the iOS and Matlab setups. The reliability between the iOS and Matlab setups was evaluated with intra-class correlation coefficients (ICC) and the Bland-Altman method. As for the inter-tap interval measure, the ICC was very high between the iOS and Matlab setups on both iPad devices (iPad Pro 9.7 inch, ICC = 0.996~0.997; iPad Pro 10.5 inch, ICC = 0.999~0.999, respectively). We also confirmed that the asynchrony measure (i.e., the synchronization error between a tap and a beat) was highly correlated between the iOS and Matlab setups on both iPad devices (iPad Pro 9.7 inch, ICC = 0.995~0.995; iPad Pro 10.5 inch, ICC = 0.997~0.997, respectively). These results suggest that the H-BAT iOS application can be used reliably across different iPad devices.
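
As a hedged illustration of the two agreement analyses named here (a generic sketch, not the authors' code; the specific ICC form, the 1.96 multiplier, and the variable names are assumptions), paired iOS/Matlab measurements could be compared as follows:

```python
import numpy as np

def icc_2_1(x):
    """Two-way random-effects, absolute-agreement, single-measure ICC(2,1).
    x: array of shape (n_items, n_setups), e.g., iOS and Matlab inter-tap intervals."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ms_r = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between items
    ms_c = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # between setups
    ss_e = ((x - grand) ** 2).sum() - ms_r * (n - 1) - ms_c * (k - 1)
    ms_e = ss_e / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement setups."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias, sd = d.mean(), d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```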

Subjects: Beat, rhythm, and meter

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-52: The role of subvocalization in the mental transformation of melodies

Anna Honan(1), Tim Pruitt(1), Emma B Greenspon(1), Peter Pfordresher(2)
1:University at Buffalo, SUNY, 2:University at Buffalo

Mental imagery is thought to be an important process in the vocal imitation of pitch. Auditory imagery may play a critical role in this process by forming intermediary representations that guide the mapping between perception and action (Pfordresher & Halpern, 2013; Pfordresher, Halpern, & Greenspon, 2015). Several studies have demonstrated that during periods of auditory imagery, individuals tend to engage in subvocalization (Brodsky et al., 2008; Smith, Wilson, & Reisberg, 1995). Additionally, surface electromyography (sEMG) studies measuring orofacial and laryngeal muscles have shown that individuals subvocalize when engaged in auditory imagery of short melodies prior to singing (Pruitt, Halpern, & Pfordresher, 2019). The current research project aims to contribute to this body of work by replicating a study by Greenspon, Pfordresher, and Halpern (2017) while incorporating sEMG measures during auditory imagery. Greenspon and colleagues (2017) used a melodic transformation task, analogous to Shepard and Metzler’s (1971) classic object rotation task. Participants sang melodies either as exact repetitions or after a mental transformation: transposition of key, reversal of serial order, or serially shifting the starting position. Reproduction of mental transformations is highly challenging due to demands on auditory imagery. The current study addresses whether sEMG activity during mental transformations reflects the difficulty of these tasks. Data collection is ongoing; preliminary results suggest that participants subvocalize more during transformation conditions compared to untransformed repetitions. This observation suggests that mental rehearsal of a melody can also involve the rehearsal of movements used for reproduction, and that engagement in movement may increase for tasks with high cognitive load (e.g., mental transformations). Such findings lend further evidence to the notion that mental imagery for pitch incorporates both auditory and motor processes (Pfordresher, Halpern, & Greenspon, 2015).
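
As a hedged sketch of how subvocalization intensity might be quantified from sEMG (illustrative only; the window length and function name are assumptions, not the lab's processing pipeline):

```python
import numpy as np

def semg_rms_envelope(signal, fs, win_s=0.05):
    """Moving RMS of a mean-centered sEMG trace; comparing the mean envelope across
    task conditions gives a simple index of subvocalization intensity."""
    win = max(1, int(win_s * fs))
    squared = np.square(np.asarray(signal, dtype=float) - np.mean(signal))
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(squared, kernel, mode="same"))
```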

Subjects: Performance, Embodied cognition; Pitch

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-54: The Frequency Facilitation Hypothesis

David J Baker(1)
1:Louisiana State University

Music is made up of many small, repeated patterns (Margulis, 2014). Research in music perception demonstrates that these patterns are learned implicitly (Rohrmeier & Rebuschat, 2012), and importantly, they are related to a listener’s sense of musical anticipation (Huron, 2006). While research on patterns and music has established how this kind of repetition is related to expectancy and melodic segmentation (Pearce, 2018), how these implicitly learned patterns affect a listener’s load on memory has not been explored to the same extent. This research presents a novel theory of musical memory that links music’s repetitive structure to the limits of working memory. We first draw from research in cognitive psychology that hypothesizes that more predictable events are less taxing on memory. Given research in music perception based on the statistical learning hypothesis and the probabilistic prediction hypothesis (Pearce, 2018), we posit that more predictable musical events are less taxing on memory as a result of more efficient processing, and we present empirical evidence to corroborate these claims. To demonstrate this, we present both evidence from a newly encoded corpus of over 750 sight-singing melodies and a pilot experiment (N = 15+). We use a within-subjects design with a musical series recall task with trained musicians. The paper tests the hypothesis that an n-gram’s frequency distribution in a corpus is related to its load on memory when quantified using the information content measures derived from the IDyOM computational model of auditory cognition (Pearce, 2005). Using a series of mixed-effects models, we fit multiple models to our data, comparing measures of information content with the number of notes and other computationally derived features (Müllensiefen, 2009).
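
A minimal sketch of the link between corpus n-gram statistics and per-note information content is given below. It is a toy stand-in for IDyOM (maximum-likelihood trigram counts with a crude unigram back-off), with hypothetical function and variable names, not the model actually used in the study.

```python
from collections import Counter
import math

def ngram_information_content(corpus, melody, n=3):
    """Per-note information content IC = -log2 P(note | preceding n-1 notes),
    estimated from raw n-gram counts over a corpus of note sequences, with a
    simple add-one unigram back-off when a context/continuation is unseen."""
    ngrams, contexts, unigrams = Counter(), Counter(), Counter()
    for seq in corpus:
        unigrams.update(seq)
        for i in range(len(seq) - n + 1):
            gram = tuple(seq[i:i + n])
            ngrams[gram] += 1
            contexts[gram[:-1]] += 1
    total = sum(unigrams.values())
    ics = []
    for i in range(n - 1, len(melody)):
        gram = tuple(melody[i - n + 1:i + 1])
        if ngrams[gram] > 0:
            p = ngrams[gram] / contexts[gram[:-1]]
        else:  # back off to a smoothed unigram probability
            p = (unigrams[melody[i]] + 1) / (total + len(unigrams))
        ics.append(-math.log2(p))
    return ics

# Hypothetical usage with scale-degree sequences:
# ngram_information_content([["1", "2", "3", "1"], ["1", "2", "3", "5"]], ["1", "2", "3"])
```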

Subjects: Memory, Pitch

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-56: The effects of music and mental singing on gait and finger tapping variability in healthy adults and people with Parkinson disease

Adam P Horin*(1), Elinor C Harrison(1), Kerri Rawson(1), Gammon Earhart(1)
1:Washington University in St. Louis

Background: Parkinson disease (PD) is a neurodegenerative movement disorder characterized by gait deficits. Rhythmic auditory cueing, where an individual synchronizes their steps to the beat, has been widely studied as a means of enhancing gait. Externally-generated cues (e.g. music) and self-generated cues (e.g. singing) similarly increase velocity and stride length. However, externally-generated cues can increase gait variability, whereas self-generated cues do not. These different effects may be attributed to differences in the neural pathways involved. Externally-generated cues may employ compensatory neural pathways bypassing the basal ganglia; however, the neural pathways involved in self-generated cues are unknown. Future studies utilizing neuroimaging to compare externally-generated to self-generated cues are needed, and finger tapping is a commonly used proxy for gait. As such, the present study investigated whether gait and tapping respond similarly to externally-generated and self-generated cues. The primary outcome was cadence coefficient of variation (CV) for gait (inter-step interval) and finger tapping (inter-tap interval). Methods: Healthy older controls (n=21) and people with PD (n=21) performed uncued gait and tapping. Participants then performed gait and tapping in two cued conditions: movement to music and movement while mentally singing, with cues set to 100% of uncued cadence. Results: There were no significant main effects of group on CV for gait (F(1,40)=1.42, p=.24) or tapping (F(1,40)=0.39, p=.54). There were significant main effects of condition on CV for gait (F(1.93,77.26)=4.04, p=.02) and tapping (F(1.96,78.21)=4.80, p=.01). There were no interaction effects. Movement variability in gait and tapping was greater with music, but not with mental singing, compared to the uncued condition. Conclusions: These results show that rhythmic auditory cueing affects the variability of gait and finger tapping similarly; both movements responded differently to externally-generated than to self-generated cues. Therefore, finger tapping may serve as an adequate proxy for gait in future neuroimaging studies of responses to different types of auditory cueing.
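
For clarity, the primary outcome reduces to a simple computation; the sketch below (illustrative only, not the authors' scripts) shows the coefficient of variation applied identically to inter-step and inter-tap intervals:

```python
import numpy as np

def coefficient_of_variation(event_times):
    """CV (%) of the intervals between successive events (heel strikes or taps):
    100 * SD / mean of the inter-event intervals."""
    intervals = np.diff(np.sort(np.asarray(event_times, dtype=float)))
    return 100.0 * intervals.std(ddof=1) / intervals.mean()
```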

Subjects: Music and movement, Beat, rhythm, and meter

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-58: Can Music Induce Interbrain Synchronization in Clinical Settings?

Kyurim Kang*(1), Michael Thaut(1), Tom Chau(2)
1:University of Toronto, 2:Holland Bloorview Kids Rehabilitation Hospital

Music arouses emotions and is also one of the most powerful tools to bring people together. Music listening or playing can help to build social interaction. Findings have indicated that physiological indicators can align spontaneously and contemporaneously between people during social interaction. EEG research has shown this also for music-based interaction: interbrain synchronization emerges while performers in music ensembles interact musically with each other. In such contexts, music drives a mutually calibrated state of emotional and physiological ‘synchronization’. Theories have described these states as an expression of empathy. Physiological signals of synchronization have been observed in heart rate between mother and child, electrodermal activity between patient and therapist, or higher-order brain networks between speaker and listener. It has also been suggested that empathy states may facilitate group cohesion and cooperative behavior. Research investigating ‘physiological empathy’ in clinical neurodevelopmental or neurorehabilitation settings is very limited. However, an appraisal of the literature would suggest that music may be a potent language to drive interpersonal synchronization socio-emotionally and physiologically. Furthermore, in clinical populations, especially those with severe disabilities, verbal communication may be too limited to express mutual empathy in words. Therefore, new research that monitors interpersonal, physiological synchronization in clinical settings could be important to objectively assess the extent to which participants in a therapy context (e.g., clients, caregiver, therapist) become connected in empathic interaction, and whether the intentions and actions of clients with limited or no verbal capabilities are communicated and comprehended in a way that elicits an empathic, synchronized brain response in their caregivers. If such physiological markers can emerge in clinical settings, then music-based interventions may be among the most effective tools to elicit them. Data from a pilot study and a current study will be presented to show the rationale, experimental design, and preliminary evidence for clinical interbrain synchronization.

Subjects: Physiological measurement, social interaction

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-60: When unfamiliar music becomes familiar: Perceptual and neural responses in a probe-tone paradigm

Anja-X Cui*(1), Nikolaus F Troje(2), Lola L Cuddy(1)
1:Queen’s University, 2:York University

Listeners are keenly aware of statistical regularities embedded in music (Kuhn & Dienes, 2005), an awareness or knowledge that develops with cultural exposure (Lantz, Cuddy, & Kim, 2014). However, such knowledge may also be acquired within the timespan of a lab study (Loui, Wessel, & Hudson Kam, 2010). Here, our goal was to uncover potential neural correlates of this acquisition. We measured perceptual and neural responses during a probe-tone task that required listeners to learn an unfamiliar pitch distribution during a 30-min exposure phase. Forty participants gave ratings to probe tones following a melodic context before and after exposure to the to-be-learned distribution. Probe tones were categorized by whether they occurred in the probe-tone context and whether they occurred during exposure. While participants gave probe-tone ratings, we recorded their EEG data using 128-electrode EGI Hydrocel Geodesic Sensor Nets. Probe-tone ratings were influenced not only by the local tone distribution heard in the probe-tone context but also by the tone distribution of the entire music genre. After exposure, tones occurring only during exposure received higher ratings than those which never occurred in the genre. In previous work we have shown that participants’ brain activity in the time window of 380 to 450 ms after probe tone onset, associated with the P3b component, captures participants’ long-term knowledge about musical regularities. We thus expected a closer correspondence of this component to probe-tone ratings after exposure. However, it more closely corresponded to probe-tone ratings before exposure. Taken together, our results suggest that participants are able to gain knowledge about musical regularities after short exposure. Neural correlates of long-term knowledge begin to emerge after a short timespan. Subsequent research should aim to measure the longevity of this knowledge, and consider the implications of our results for the interpretation of the P3b component.
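
As a hedged illustration of the ERP quantification described (a generic mean-amplitude computation over the reported 380-450 ms window; the epoch layout and names below are assumptions, not the authors' pipeline):

```python
import numpy as np

def mean_window_amplitude(epochs, fs, epoch_start_s, window=(0.380, 0.450)):
    """Mean evoked amplitude per channel in a post-stimulus window (e.g., the P3b
    range). epochs: (n_trials, n_channels, n_samples); epoch_start_s is the time of
    the first sample relative to probe-tone onset (e.g., -0.2 for a 200 ms baseline)."""
    evoked = epochs.mean(axis=0)                          # (channels, samples)
    i0 = int(round((window[0] - epoch_start_s) * fs))
    i1 = int(round((window[1] - epoch_start_s) * fs))
    return evoked[:, i0:i1].mean(axis=1)                  # one value per channel
```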

Subjects: Pitch, Neuroscientific approach

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-62: The effect of arts integration instruction on cognitive flexibility and creativity with middle school students

Martin Norgaard*(1), Christy Todd(2)
1:Georgia State University, 2:Rising Starr Middle School

Often links between arts education and creativity are studied separately for individual arts disciplines. Here we investigated creativity and cognitive flexibility with students in an arts integration program that included a number of disciplines including media, film, animation, visual arts, and performing arts. Common to all discipline-specific activities was the creation of novel and appropriate outcomes, which aligns with the traditional definition of a creative product. Students (N=98) in a middle school in which a new arts integration program is being piloted completed pre- and post-tests. During the eight months between testing, one group (ArtsFull, n=39) participated in all activities of the arts integration program, a second group (ArtsPart, n=16) only participated in some of the program, and a control group (Control, n=43) did not participate. The tests were computerized versions of the Wisconsin Card Sorting Task, which measures cognitive flexibility; the classic Stroop word color task, which measures inhibition; and the Remote Associates Test (RAT) and the Alternate Uses (AU) test, both of which measure creativity. Independent of discipline-specific activities, all students in the ArtsFull group were placed on academic teams that utilized arts integration, developed e-portfolios, and completed year-long capstone projects in students’ interest areas with the guidance of industry mentors. Though random assignment was not possible in the current context, results of the pre-test showed no significant group differences in any of the outcome measures. In the pre-test data, we found significant correlations between the measure of cognitive flexibility (percentage of perseverative errors) and the AU total score (r = .39, p < .001), and between scores for the AU and the RAT (r = .38, p < .001). We hypothesize that students in the ArtsFull group will score higher than the control group on creativity measures after arts integration instruction. Full results will be available by the time of the proposed presentation.

Subjects: Cross-domain effects, Music education/pedagogy/learning

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-64: Rhythmic priming improves grammar processing in children with and without Specific Language Impairment

Eniko Ladanyi*(1), Agnes Lukacs(2), Judit Gervain(3)
1:Vanderbilt University Medical Center, 2:Budapest University of Technology and Economics, 3:Universite Paris Descartes

According to recent evidence (Chern et al., 2018; Bedoin et al., 2016), performance on a grammaticality judgement task improves if children are presented with a regular vs. an irregular rhythm or environmental noise immediately before the linguistic stimuli. The phenomenon is referred to as rhythmic priming and has been shown in English- and French-speaking children. The generality of rhythmic priming, however, is not yet well understood, either across languages or across cognitive domains. Motivated by these results, our first aim was to test whether Hungarian-speaking children with and without Specific Language Impairment (SLI) show the same effect at 5-7 years of age. We also wanted to investigate whether the effect is specific to grammar or whether regular rhythm also improves performance on (a) a picture naming task, a linguistic task which involves no grammar, and (b) a non-verbal Stroop task, a non-linguistic task. According to our results, children showed significantly better performance following exposure to a regular rhythm vs. an irregular rhythm/silence in the grammaticality judgment task, but rhythm did not have any effect in the case of the picture naming and non-verbal Stroop tasks. These results suggest that rhythmic priming improves grammar processing in Hungarian similarly to English and French, supporting the generality of rhythmic priming across languages. The phenomenon was found to be specific to the grammaticality judgement task, indicating shared mechanisms between rhythm and grammar processing. References: Bedoin, N., Brisseau, L., Molinier, P., Roch, D., & Tillmann, B. (2016). Temporally Regular Musical Primes Facilitate Subsequent Syntax Processing in Children with Specific Language Impairment. Frontiers in Neuroscience, 10, 245. Chern, A., Tillmann, B., Vaughan, C., & Gordon, R. L. (2018). New evidence of a rhythmic priming effect that enhances grammaticality judgments in children. Journal of Experimental Child Psychology, 173, 371-379.

Subjects: Language and speech, Beat, rhythm, and meter

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-66: Marches, not Pastorals: The Influence of Contextual Information and Topics on Narrative Experiences of Music

Janet Bourne*(1), Sami Alsalloom(1), Tim Bausch(1), Heather Cardoz de la Torre(1), Michelle Dalarossa(1), Tommy Kan(1), Annie Lai(1), Gregory Moreno(1), Jishing Yu(1)
1:University of California, Santa Barbara

Recent experiments demonstrate that listeners imagine narratives and associations while listening to music (Margulis, 2017; Herbert & Dibben, 2018). However, these experiments do not consider the influence of contextual information or topics (familiar styles with conventional extra-musical associations: waltzes, marches, etc.). Our aim was to investigate how contextual knowledge and different topics (march or pastoral) influence when participants narratively engage with music. Three groups of participants (N=75) were told different contextual information: one thought they were listening to film music, another Western art music (WAM), and another no context. After hearing each excerpt, participants were asked several questions, including if they imagined a story while listening. If “yes,” then they described the story. We used a 3x2x2 mixed design where context (film music vs. WAM vs. no context) was a between-subjects factor while topic (march vs. pastoral) and mode of topic (major vs. minor) were within-subjects factors. We hypothesized that participants would report more narratives in the film music context and no difference between marches and pastorals. A 3x2x2 ANOVA revealed no statistically significant differences in percentage of “yes” responses to the story question based on context (F = 1.75, p = 0.18) or mode (F = 3.27, p = 0.075). However, there was a significant main effect of topic (F = 12.37, p < 0.001, ηp² = 0.145). Participants imagined narratives significantly more often for marches than pastorals. To analyze participants’ reported narratives, we used a computerized language analysis technique called the Meaning Extraction Method (Boyd, 2017), which statistically groups frequent words into common themes. Inter-subjective agreement was high, with consistent themes, often inspired by multimedia. Despite musicological assumptions, results indicate that listeners experience music narratively more for some topics than others. This study has implications for the process of listening and meaning construction, specifically the relationship between context and musical features, understanding current associations of topics and their relationship to narrative, and considering modes of listening.
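
For readers unfamiliar with the Meaning Extraction Method, the sketch below is a loose, simplified analogue (binary word-by-narrative coding followed by unrotated PCA; the published method involves additional preprocessing and rotation). Function names and thresholds are assumptions, not the authors' analysis.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import CountVectorizer

def extract_themes(narratives, n_themes=5, min_doc_freq=0.05, top_k=8):
    """Code which frequent content words appear in each narrative (binary),
    factor the resulting word-by-narrative matrix, and return the words loading
    most strongly on each component as a rough 'theme'."""
    vec = CountVectorizer(stop_words="english", min_df=min_doc_freq, binary=True)
    X = vec.fit_transform(narratives).toarray().astype(float)
    words = np.array(vec.get_feature_names_out())
    pca = PCA(n_components=n_themes)
    pca.fit(X)
    return [list(words[np.argsort(np.abs(comp))[::-1][:top_k]])
            for comp in pca.components_]
```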

Subjects: Aesthetics / preference, Meaning

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-68: The effect of tempo on learning performance and real-time emotions of adolescents in a learning task

Matthew Moreno*(1), Earl Woodruff(1)
1:University of Toronto

Motivation: Research has indicated that emotions are an integral part of the learning process (D’Mello & Graesser, 2012). The literature (Husain, Thompson and Schellenberg, 2002; Thompson, Schellenberg and Husain, 2016) has examined the impact of musical tempo on performance and arousal. Further research is needed examining real-time emotions and how such emotional stimuli interact with cognitive processing. Research questions: 1) What differences may exist in the emotional expressions of learners who have or have not listened to music of contrasting tempi while completing a comprehension task? 2) Are there differences in the performance results of learners who have or have not listened to music of contrasting tempi while completing a comprehension task? Methodology: Participants were first-year undergraduate students (n=74) at a research university in Canada. In this repeated measures study, participants were randomly placed into one of three conditions: 1) no music (control), 2) slow music (110 bpm) and 3) fast music (150 bpm). Participants were asked to read passages from the comprehension component of the Nelson-Denny Form H (Brown, Fishco & Hanna, 1993) followed by accompanying comprehension questions. During the trial, participants’ faces were recorded with iMotions Emotient facial-expression software measuring 19 Action Units (AUs) and probability scores for 9 emotions. Results: Comparing mean test scores across the three conditions, post-hoc tests indicated significant differences (p=0.005) between the fast-music and no-music conditions, while differences between the slow-music and no-music conditions were not significant. Participants in the fast-music condition displayed significantly different (p=0.005) levels of joy, fear and contempt in comparison to the no-music and slow-music conditions. Implications: These preliminary results provide empirical evidence for the use of facial recognition technology to identify the emotional states of learners. Ongoing research will continue to provide data on the emotions involved in learning and on how music may engage, sustain or enhance emotions to optimize success while learning.

Subjects: Emotion, Beat, rhythm, and meter; Physiological measurement

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-70: The effects of group singing on pain threshold and beta-endorphins in older adults with and without Parkinson’s disease

Alexander Pachete*(1), Arla Good(1), Fran Copelli(1), Frank Russo(1)
1:Ryerson University

Social well-being is often compromised in older adults due to a confluence of factors. These include social isolation and loneliness, which can arise from retirement, separation from family, cessation of driving, and the death of loved ones. Social isolation and loneliness are especially prevalent in those diagnosed with age-related diseases, such as Parkinson’s. Over the past decade, numerous research groups have found support for the idea that synchronous movement, such as that which occurs in group singing, may foster social bonding. Other studies have found that group singing leads to increases in pain thresholds, even after accounting for analgesic effects associated with cardiovascular activity. Dunbar and colleagues have suggested that the increases in pain thresholds may be due to the release of the hormone, beta-endorphin. To the best of our knowledge, this intriguing sociobiological explanation for increases in pain thresholds following group singing has not yet been tested using hormonal assays. In the current study, we tested analgesic effects of group singing in two groups of older adults. One of the groups is a Parkinson’s choir, and the other is a healthy older adult choir. The research presented here is part of the SingWell project, an international research study investigating group singing in older adults from a biopsychosocial perspective. We conducted a pain threshold test using a dolorimeter and obtained a saliva sample immediately before the choir session began and immediately after the choir session ended. Results revealed the expected post-singing increase in pain thresholds for both choirs. Beta-endorphin assays will be used to assess sociobiological explanations of the analgesic effects of group singing.

Subjects: Health and well-being, Music and movement; Not Listed; Physiological measurement

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-72: Shared variance in contextual auditory discrimination ability and accuracy of instrumental music performance

Bob Duke(1), Sarah Allen*(2), Lani Hamilton(3), Carla Cash(4), Amy Simmons(1)
1:The University of Texas at Austin, 2:Southern Methodist University, 3:University of Missouri- Kansas City, 4:Texas Tech University

Previous investigations have systematically examined myriad aspects of musicians’ perceptual abilities, although only more recently in authentic musical contexts (e.g., Hamilton et al., 2018). The current study expands our group’s previous findings concerning the relationship between levels of auditory discrimination in contextualized, artist-level music making and accuracy in music performance. We have demonstrated in previous work that instrumental musicians’ ability to hear small differences in pitch, timing, and loudness in recorded performances of artist-level playing is significantly correlated with musicians’ own instrumental performance ability, as defined by musicians’ educational attainment (Study 1) and by teachers’ rankings of their overall skills as performers (Study 2). Rather than determining precise difference thresholds in isolated dimensions of sound, our test assesses musicians’ discrimination abilities regarding the types of variations that typically occur in the performance of music. The test has been shown to have acceptable internal reliability (KR20 = .57) and high concurrent validity (r = .80). To further interrogate this relationship, we are testing individual musicians (N = 40) using the same artist-level auditory discrimination test together with targeted tests of instrumental performance accuracy (Study 3). Extant investigations of music performance skills and much of instrumental music pedagogy focus primarily on the various components of motor production and less on the development of auditory skills that ultimately guide the incremental refinement and updating of procedural memories. Our research suggests a need to reconsider the role of auditory discrimination in the development of performance skills, recognizing the extent to which the perceived discrepancies between musical intentions (efference copy) and performance outcomes (afferent feedback) are the central components of music learning. The current study further examines the extent to which the acquisition of highly refined performance skills is “in the ears” as much as it is in the hands.

Subjects: Performance, Music and movement; Music education/pedagogy/learning; Musical expertise

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-76: The Influence of Familiarity on Beat Perception and Oscillatory Entrainment

Joshua Hoddinott*(1), Molly Henry(2), Jessica Grahn(3)
1:Western University, 2:Max Planck Institute for Empirical Aesthetics, 3:University of Western Ontario

Humans often spontaneously synchronize movements to a perceived underlying pulse, or beat, in music. Beat perception may be indexed by the synchronization of neural oscillations to the beat, marked by increases in electrical amplitude at the same frequency as the beat in electroencephalography (EEG) signals (Nozaradan, Peretz, & Mouraux, 2012). Neural synchronization to the beat appears stronger for strong-beat than non-beat rhythms (Tal et al., 2017), and has been hypothesized to underlie the generation of an internal representation of the beat. However, because we are exposed disproportionately to strong-beat rhythms (e.g., in most music) in the daily environment, comparisons of neural responses to strong-beat and non-beat rhythms may be confounded by relative differences in familiarity. Thus, in this study we disentangled beat-related and familiarity-related effects by comparing EEG responses during the perception of strong-beat and non-beat rhythms that were either novel or familiar. First, we recorded EEG to a set of strong-beat and non-beat rhythms. Then, subjects were familiarized with half of the rhythms over 4 behavioural sessions by listening to and tapping along with the stimuli. Finally, EEG to the full set of rhythms (half now familiar, half still unfamiliar) was recorded post-familiarization. Preliminary data show changes in EEG amplitude at beat-related frequencies between pre- and post-familiarization, suggesting that oscillatory entrainment is influenced by stimulus familiarity. Further analyses will characterize whether the contributions of familiarity are similar for strong-beat and non-beat rhythms. Grahn & Brett (2007). J. Cognitive Neurosci, 19, 893-906. Nozaradan, Peretz, & Mouraux (2012). J. Neurosci, 32, 17572-17581. Schiffer & Schubotz (2011). Front Hum Neurosci, 5, 1-12. Tal, Large, Rabinovitch, Wei, Schroeder, Poeppel, & Golumbic (2017). J. Neurosci, 37, 6331-6341.

Subjects: Neuroscientific approach, Beat, rhythm, and meter

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-78: It Looks Like It Sounds: Transcribing Young Children’s Music Vocalizations

Kathleen K Arrasmith(1)
1:University of South Carolina

With the intention of increasing understanding of young children’s music vocalizations, the purpose of this study was to investigate and describe transcription and analysis techniques. Music notation creates a visual representation of aural patterns and allows for visual analysis of an aural phenomenon. The purposes of transcription include description, enhancing analysis, creating performance material, generating notation techniques and styles, and learning about and learning from music notation. Young children make a variety of music vocalizations, but few researchers incorporate detailed transcriptions to aid their analysis and to augment readers’ understandings and interpretations. I selected four short video-recorded excerpts of music engagement sessions based on variations in established music contexts, illustrations of music development stages, differences in and abundance of music vocalizations, and representations of social music interaction. Participants included children between 4 months and 3 years old. I primarily used my own aural skills and music theory training to create each transcription and only occasionally employed sound analysis software to illuminate specific difficult passages. I engaged in four stages of transcription: preparation, which included repeated listening and discriminating extraneous noise from music vocalizations; initial transcription, which included rough sketches using shorthand notation techniques; intermediary drafts, which included working toward saturation of visual sound representation, adjusting traditional Western notation practices, and creating new notation practices; and the final transcription, which included hand-drawing clean, simple, and accurate notations of young children’s music vocalizations. Seeing young children’s transcribed music vocalizations may enhance musicians’ and music development specialists’ ability to hear specific, individual music vocalizations. Adding detailed transcriptions to articles may aid readers’ ability to understand, analyze, and audiate descriptions of young children’s music vocalizations. Data from high-quality transcriptions may contribute to the understanding of young children’s music vocalizations, music development, and social music interactions.

Subjects: Music and development, Music theory

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

P2-80: Differences Between Melodic and Harmonic Consonance Preferences in Westerners Suggest Influence of Exposure Statistics

Nori Jacoby*(1), Malinda McPherson(2), Marion Cousineau(3), Claire Pelofi(4), Josh McDermott(5)
1:Max Planck Institute for Empirical Aesthetics, 2:Harvard University, 3:University of Montreal, 4:New York University, 5:Massachusetts Institute of Technology

For Westerners, combinations of musical notes vary in their pleasantness, or consonance, for reasons that remain debated. Research on consonance has focused on concurrent notes (forming ‘harmonic’ pitch intervals), but successive notes (‘melodic’ intervals) are also considered consonant or dissonant. The relationship between harmonic and melodic consonance is potentially diagnostic because they have distinct occurrence statistics in Western music: some intervals that are common melodically are rare harmonically. We measured pleasantness ratings of concurrent and sequential note pairs in 82 participants with varied degrees of musical expertise (28 musicians, 27 non-musicians, and 27 participants with varying degrees of musicianship). These ratings differed substantially for small pitch intervals, which were heard as unpleasant when presented harmonically, but much less so when presented melodically. Empirical distributions of each type of interval in Western music, calculated from five large corpora ranging from Barlow and Morgenstern’s collection of classical musical themes to a collection of over 17,000 MIDI pop songs from the Lakh dataset (Raffel, 2016), qualitatively mirrored the differences in their ratings. The results are consistent with the idea that exposure statistics determine consonance preferences, and demonstrate that even in Westerners, such preferences are not determined exclusively by similarity with the harmonic series.
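
As a rough illustration of how such occurrence statistics can be tabulated, the sketch below counts melodic intervals (successive notes within a part) and harmonic intervals (temporally overlapping notes) across a folder of MIDI files. It is a minimal sketch assuming the pretty_midi library and an illustrative corpus path; it is not the authors’ pipeline, and the overlap criterion and pitch-class reduction are simplifying assumptions.

```python
# Sketch: tabulating melodic vs. harmonic interval distributions from a MIDI corpus.
# Assumes the pretty_midi library; the corpus path and criteria are illustrative only.
from collections import Counter
import glob
import pretty_midi

melodic, harmonic = Counter(), Counter()

for path in glob.glob("corpus/*.mid"):          # illustrative corpus location
    try:
        midi = pretty_midi.PrettyMIDI(path)
    except Exception:
        continue                                # skip unreadable files
    for inst in midi.instruments:
        if inst.is_drum:
            continue
        notes = sorted(inst.notes, key=lambda n: n.start)
        # Melodic intervals: pitch distance between successive notes within a part.
        for a, b in zip(notes, notes[1:]):
            melodic[abs(b.pitch - a.pitch) % 12] += 1
        # Harmonic intervals: pitch distance between temporally overlapping notes.
        for i, a in enumerate(notes):
            for b in notes[i + 1:]:
                if b.start >= a.end:
                    break
                harmonic[abs(b.pitch - a.pitch) % 12] += 1

total_m, total_h = sum(melodic.values()), sum(harmonic.values())
if total_m and total_h:
    for ivl in range(12):
        print(ivl, round(melodic[ivl] / total_m, 3), round(harmonic[ivl] / total_h, 3))
```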

Subjects: Aesthetics / preference, Corpus analysis/studies; Harmony and tonality; Pitch

When: 4:45-6:00 PM on Tue Aug 6 – Day 2
Return to Day Schedule.
Return to Full Schedule.

Poster session P3

10:30-11:45 AM in Rosenthal

P3-1: Learning by singing: results from intervention studies in language education

Vera Busse*(1), Ingo Roden(2), Gunter Kreutz(3)
1:University of Vechta, 2:Carl von Ossietzky University Oldenburg, 3:University of Oldenburg

As rhythmic organisation may be beneficial for the phonological encoding of linguistic materials, several studies have begun to explore the benefits of singing for language education. Initial results show positive effects on word acquisition and pronunciation. Little is known, however, about the extent to which singing can support emergent literacy skills and grammar acquisition. This paper reports on a series of intervention studies with a pre-, post-, and follow-up design investigating the effect of singing on language learning. Learning progress was measured via language tests and cued song recall; students’ cognitive abilities were assessed via an intelligence test. In the first study (within-subject design), recently migrated primary school children (N = 35) received three 40-minute sessions in which all students learnt the lyrics of two German songs through alternating teaching modalities (singing and speaking). While the two teaching modalities did not show differential effects on cued recall of song lyrics, children significantly improved their language knowledge; it can be assumed that a positive motivational effect of singing carried over to the speaking modality. Effect sizes indicate substantial progress, which was sustained over the retention interval. Importantly, students also showed progress on tasks targeting the transfer of grammatical skills. In a second intervention with third-graders learning English (N = 60), we therefore used separate sessions for singing and speaking (between-subject design; singers vs. speakers vs. control) and measured affect after each session. Singers showed greater learning than speakers and children in the control condition. Moreover, singers also showed higher positive affect scores in the first two lessons as compared to speakers. Taken together, the studies indicate that singing can be a valuable supplement to grammar instruction irrespective of initial language proficiency. Younger children, who tend to have shorter attention spans and more difficulties with explicit grammar instruction, may particularly benefit from singing.

Subjects: Language and speech, Music and language

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-3: Rhythmic timing in music and speech: Evidence for shared resources.

Rhimmon Simchy-Gross*(1), Elizabeth Margulis(1)
1:University of Arkansas

Neural oscillations synchronize with rhythmic events in speech (e.g., stressed syllables in stress-timed languages; Luo & Poeppel, 2007) and music (e.g., on-beat tones; Snyder & Large, 2005). This synchronization decreases perceptual thresholds to temporally predictable events (Lawrance et al., 2014), improves task performance (Ellis & Jones, 2010), and enables speech intelligibility (Peelle & Davis, 2012). Despite implications of music-language transfer effects for improving language outcomes (Gordon et al., 2015), proposals that shared neural and cognitive resources underlie music and speech rhythm perception (e.g., Tierney & Kraus, 2014) are not yet substantiated. We aimed to explore this potential overlap in the present research. We tested whether music-induced oscillations affect speech tempo perception, and vice versa. In each of 108 trials, we presented a prime sequence (seven repetitions of either a metric speech utterance or an analogous musical phrase) followed by a standard-comparison pair (either two identical speech utterances or two identical musical phrases). Twenty participants judged whether the comparison was slower than, faster than, or the same tempo as the standard. We manipulated whether the prime stimulus was slower than, faster than, or the same tempo as the standard. Tempo discrimination accuracy was higher when the standard tempo was the same as, compared to slower or faster than, the prime stimulus tempo. These findings support the shared-resources view more than the independent-resources view; if independent resources processed speech and music rhythm, then there should be no differences between any of the prime-tempo conditions in the between-domain groups. These findings have the potential to illuminate mechanisms underlying music-language transfer effects showing improvements in verbal memory (Chan et al., 1998), speech-in-noise perception (Strait et al., 2012), and reading ability in children and adults (Tierney & Kraus, 2013).

Subjects: Music and language, Beat, rhythm, and meter

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-5: The impact of aging on neurophysiological entrainment to a metronome

Sarah A Sauvé*(1), Emily Bolt(1), Sylvie Nozaradan(2), David Fleming(1), Benjamin Zendel(1)
1:Memorial University of Newfoundland, 2:Université catholique de Louvain

Neural entrainment is an automatic process whereby neural oscillations synchronize with environmental events. Resonance theory proposes that the synchronicity of neural populations oscillating at the beat frequency leads to beat perception. Recent support for this theory comes from Nozaradan and colleagues, who showed that recorded steady-state evoked potentials (SSEPs) contained amplitude peaks at 2.4 Hz when listeners were hearing a sound pattern containing a 2.4 Hz beat. The SSEPs also showed peak amplitudes at 1.2 Hz for listeners imagining a binary meter or at 0.8 Hz and 1.6 Hz for listeners imagining a ternary meter, where meter refers to beat groupings. In order to examine the impact of aging on neural entrainment to musical rhythms, we presented older and younger adults with a stream of isochronous tone pips at a slow (2.5 Hz) and fast (5 Hz) rate while monitoring electrical brain activity. SSEPs had strong peaks at 2.5 Hz and 5 Hz for the slow and fast conditions, respectively, reflecting an encoding of the stimulus rate. The first three harmonics of each stimulus rate were also analyzed. In younger adults, there was a reduction in the amplitude of the neural oscillation from the stimulus rate to the third harmonic. This harmonic-related amplitude reduction was attenuated in older adults. That is, the difference between the stimulus frequency and the highest analyzed harmonic was smaller in older adults compared to younger adults. The lack of age-related reduction in SSEP amplitude for the harmonics may be due to an age-related reduction in inhibitory activity that would normally reduce the strength of these neural oscillations relative to the stimulus rate.
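
For readers unfamiliar with this kind of frequency tagging, the sketch below illustrates the core measurement step described here: extracting spectral amplitude at the stimulus rate and its first few harmonics from a trial-averaged EEG signal. It is a minimal sketch assuming a single-channel NumPy array and synthetic data; the authors’ actual preprocessing, channel selection, and noise-correction steps are not specified in the abstract.

```python
# Sketch: SSEP amplitude at the stimulus rate and its harmonics from an averaged EEG trace.
# Assumes a 1-D NumPy array (trial-averaged, single channel); parameters are illustrative.
import numpy as np

def ssep_amplitudes(eeg, srate, stim_freq, n_harmonics=3):
    """Return spectral amplitude at stim_freq and its harmonics (same units as eeg)."""
    n = len(eeg)
    spectrum = np.abs(np.fft.rfft(eeg)) * 2 / n        # single-sided amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / srate)
    targets = [stim_freq * k for k in range(1, n_harmonics + 2)]   # e.g. f, 2f, 3f, 4f
    return {f: spectrum[np.argmin(np.abs(freqs - f))] for f in targets}

# Example with synthetic data: a 2.5 Hz "response" plus noise, sampled at 512 Hz for 60 s.
srate = 512
t = np.arange(0, 60, 1.0 / srate)
eeg = np.sin(2 * np.pi * 2.5 * t) + 0.5 * np.random.randn(len(t))
print(ssep_amplitudes(eeg, srate, stim_freq=2.5))
```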

Subjects: Beat, rhythm, and meter, Aging

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-7: Brain activity and network dynamics during singing an opera aria

Shoji Tanaka(1)
1:Sophia University

Singing an aria requires highly integrated cognitive and emotional control. However, how brain regions and networks contribute to such control is unknown. This study aims to characterize brain activity and the dynamics of brain networks during the singing of an aria. Electroencephalograms (EEG) were recorded using a 32-channel wireless EEG device while vocalists were singing an aria, such as “Va! Laisse couler mes larmes” from the opera Werther by Massenet. The acquired data were analyzed using MATLAB, EEGLAB, and LORETA to estimate power spectral density, coherence, time-frequency characteristics, and source localization. To characterize EEG during singing, this study also recorded EEG during imagined singing, watching a video clip, listening to music, and resting. The power spectral density profile during singing is characterized by lower theta/alpha power, relatively higher beta/gamma power, and the highest delta-band power at most of the electrodes. The source localization analysis showed higher activity in the prefrontal cortex and visual cortex during most of the singing period. The coherence analysis showed higher coherence between electrode pairs in the central and parietal brain regions, suggesting that singing requires intimate network communication in the sensorimotor and parietal regions of the brain. In conclusion, the recorded EEG reflects the cognitive and emotional control required for singing an aria. To the best of the author’s knowledge, this study is the first to extract such characteristics of EEG during the singing of an opera aria.

Subjects: Neuroscientific approach, Cross-domain effects; Emotion; Memory; Musical expertise; Performance

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-9: Musical deficits in Schizophrenia and its relation with cognitive functions and emotion recognition

Shantala Hegde*(1), Nisha Chandrashekaran(1), Ganesan Venkatasubramanian(1)
1:National Institute of Mental Health and Neuro Sciences

Schizophrenia (SZ) is characterised by positive and negative symptoms and by deficits in cognition and emotion processing. Few studies have examined musical deficits in this population. Studying musical deficits is important for understanding the likely association between cognition, emotion, and psychopathology. The present study was designed to examine musical deficits and their relation to cognitive functions, emotion recognition, and psychopathology in a sample of 26 patients with SZ (ICD-10 criteria) and gender-, age-, and education-matched healthy controls. Assessments included the ‘Scale for Assessment of Positive & Negative Symptoms’; the ‘WHO Disability Assessment Schedule’; a battery of neuropsychological tests for cognitive deficits; the ‘Montreal Battery of Evaluation of Amusia’, ‘Beat Alignment Test’, and ‘Seashore Rhythm Test’ for musical deficits; and the ‘Tool for Recognition of Emotion in Psychiatric Disorders’ for emotion recognition deficits. Data were analysed using chi-square tests, Student’s t-tests, Pearson’s product-moment correlations, and stepwise linear regression analyses. Patients with SZ had global deficits in cognitive functioning (with greater deficits in verbal and visual learning and memory) and significant under-identification of emotions. The groups differed significantly on rhythm perception (rhythmic contour) and musical incidental memory. Musical abilities, multiple cognitive functions, and emotion recognition were highly correlated. Rhythm perception predicted verbal working memory; music memory predicted verbal memory; and melodic and rhythm perception predicted emotion recognition.

Subjects: Cross-domain effects, Music and Cognition in Clinical Conditions

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-11: Singing to learn: How melodic content affects encoding and retrieval

Rachel M Thompson*(1), James Mantell(1)
1:St. Mary’s College of Maryland

Musical pitch information may facilitate encoding and retrieval of linguistic content. We performed a replication and extension of an experiment by Ludke, Ferreira, and Overy (2014). They showed that participants were better at producing Hungarian phrases that were learned with melodic contours compared to non-melodic versions. We extended their work such that participants in the song-learning condition were randomly assigned to spoken recall or sung recall so that we could determine whether the melodic advantage is specific to the encoding or recall phase of learning. Thirty participants learned 20 German phrases in a sung or spoken modality using a paired-associate paradigm. We assessed performance across three measures: a multiple-choice pre/posttest, a phrase production test, and a delayed recall conversation task. If the melodic advantage is specific to recall, participants in the singing-singing condition should outperform participants in the singing-speaking condition. However, if the melodic advantage is specific to encoding, participants in the singing conditions should perform equivalently. Finally, if there is no effect of melodic exposure on encoding or recall, the speaking condition should outperform the singing conditions. We conducted three one-way between-subjects ANOVAs with three levels of learning-recall condition (speaking-speaking, singing-singing, and singing-speaking) with planned comparisons, and we compared effect sizes across conditions. Although a repeated-measures t-test revealed improvement from pretest to posttest, there were no statistically significant differences among the conditions. However, when we eliminated 12 participants because of their high performance on the pretest, planned analyses revealed a difference between the singing-singing condition and the speaking condition on the multiple-choice posttest only. Our experiment replicates prior work by supporting a song benefit for language learning. It expands the literature by showing that the melodic benefit may require singing during both encoding and retrieval. However, since the results were mixed, we suggest avenues for future work on music’s putative facilitative effects in language learning.

Subjects: Music and language, Memory; Pitch

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-13: The mnemonic effect of songs after stroke and the underlying cognitive and neural mechanisms

Vera Leo*(1), AJ Sihvonen(1), T Linnavalli(1), M Tervaniemi(1), M Laine(2), S Soinila(3), T Sarkamo(1)
1:University of Helsinki, 2:Åbo Akademi University, 3:University of Turku

Sung melody provides a mnemonic cue that can enhance the acquisition of novel verbal material in healthy subjects. Recent evidence suggests that stroke patients, too, especially those with mild aphasia, can learn and recall novel narrative stories better when they are presented in sung rather than spoken format. Extending this finding, the present study explored the cognitive mechanisms underlying this effect by determining whether learning and recall of novel sung vs. spoken stories show a differential pattern of serial position effects (SPEs) and chunking effects in non-aphasic and aphasic stroke patients (N = 31) studied 6 months post-stroke. The structural neural correlates of these effects were also explored using voxel-based morphometry (VBM) and deterministic tractography (DT) analyses of structural MRI data. Non-aphasic patients showed more stable recall with reduced SPEs in the sung than in the spoken task, which was coupled with greater volume and integrity (indicated by fractional anisotropy, FA) of the left arcuate fasciculus. This indicates that the cues provided by the musical structure facilitate covert rehearsal of the material in verbal working memory mediated by the left dorsal pathway (AF), resulting in more even recall performance for the sung vs. spoken story. The aphasic patients, in turn, benefited from the repetitive melody and rhythm of song by showing a larger recency effect (better recall of the last vs. middle part of the story) and enhanced chunking (larger units of correctly recalled consecutive items) in the sung than in the spoken task. Neurally, the sung-over-spoken recency effect in aphasic patients was coupled with greater grey matter volume in a bilateral network of temporal, frontal, and parietal regions and also greater volume of the right inferior fronto-occipital fasciculus (IFOF). These results provide novel cognitive and neurobiological insight into how a repetitive sung melody can function as a verbal mnemonic aid after stroke.

Subjects: Memory, Neuroscientific approach

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-15: Acoustic Characteristics used to Differentiate Speech from Song and Individual Factors that Impact their Effectiveness

Xin Qi(1)
1:Western University Brain and Mind Institute

There are many acoustic differences between speech and song, such as frequency range, average fundamental frequency, pitch stability, and rhythmic regularity. Previous studies have shown that musical and linguistic knowledge is recruited differently, but no studies have addressed what acoustic features people use to differentiate between speech and song. Our experiment is designed to determine what acoustic characteristics are used to distinguish speech from song, and to elucidate whether individual factors, such as musical training and language background, have an effect on these characteristics. In Experiment 1, participants were asked to rank 15 acoustic characteristics according to their importance in differentiating between speech and song. Results showed that melody, beat, and rhythmic regularity were ranked significantly higher (χ2 = 92.69, p < 0.001) than the other characteristics, but these three characteristics did not differ significantly in their relative rankings. Building on these results, Experiment 2 will have participants categorize sentences as speech or song while we parametrically manipulate the rhythmic and melodic characteristics on a continuum from speech to song. We anticipate that increasing pitch stability and rhythmic regularity will result in a greater proportion of song responses with higher ratings of confidence. Individual differences will also likely affect the proportion of song responses. We anticipate that musically trained participants will show greater sensitivity to pitch and rhythm changes, and that tonal language users will be less likely to hear changes in pitch stability as sounding musical, due to their experience with pitch changes as linguistically meaningful. Results from this study will provide insight into the specific cognitive processes used to differentiate music and language, as well as the acoustic differences between them and how those differences affect the way a sound is perceived. Supervisors: Christina M. Vanden Bosch der Nederlanden and Jessica A. Grahn

Subjects: Language and speech, Music and language

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-17: A continuous model of pulse clarity: towards inspecting affect through expectations in time

Martin A Miguel*(1), Mariano Sigman(2), Diego Fernandez Slezak(3)
1:LIAA, DC, UBA; ICC, CONICET, 2:LNI, UTDT, 3:LIAA, DC, UBA

Music has a unique capacity to evoke emotions. A particularly interesting one is tension. Tension arises in situations of dissonance and uncertainty that yearn for resolution. This is a kind of affect that develops in time, given that it depends on expectations about what will happen next. Specifically for rhythms, two concepts have been explored that relate to the comfort and understanding of music: pulse clarity and rhythm complexity. Several computational models have been introduced to analyze these concepts. In most cases their analysis is static (i.e., the full passage is studied) and does not consider how they evolve in time. We present THT, a novel beat tracking model that, given the onset times of a rhythmic passage, provides continuous information about which tacti are most plausible and how salient they are. It works by tracking multiple tactus hypotheses over time and providing a score designed to reflect confidence in the tactus. In this work we set out to evaluate the output of the THT model as a proxy for pulse clarity. The mean of the continuous tactus confidence curve was taken as the model’s pulse clarity score, and we performed a beat tapping experiment to evaluate our metric. The experiment consisted of asking participants (N = 27) to tap the subjective beat while listening to 30 rhythmic passages. After each trial they were asked about the task’s difficulty as a subjective measure of clarity. We also calculated within-subject tapping clarity as an empirical measurement. The proposed computational metric correlated significantly with both the subjective and objective measures (Spearman’s r < -0.77, p < 0.001). Comparison with other models yielded similar results (Lartillot 2008; Fitch 2007; Povel 1985). This positive result allows us to inspect music emotions that arise from changes in rhythm perception.
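
A minimal sketch of the evaluation step described above: the pulse-clarity score is taken as the mean of a continuous tactus-confidence curve and correlated (Spearman) with per-passage subjective difficulty. The THT model itself is not reproduced here; `confidence_curves` and the ratings below are illustrative placeholders, not the study’s data.

```python
# Sketch: pulse clarity as the mean of a tactus-confidence curve, correlated with
# subjective difficulty ratings. Placeholder data stand in for THT output and ratings.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_passages = 30
confidence_curves = [rng.random(200) for _ in range(n_passages)]   # stand-in for THT output
difficulty_ratings = rng.integers(1, 8, size=n_passages)           # stand-in for mean ratings

# Pulse clarity score: mean of the continuous tactus-confidence curve per passage.
clarity_scores = np.array([curve.mean() for curve in confidence_curves])

rho, p = spearmanr(clarity_scores, difficulty_ratings)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```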

Subjects: Music information retrieval, Beat, rhythm, and meter; Computational approach; Emotion

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-19: Childhood Music Training Induces Change in Brain Structure: Results from Longitudinal and Cross-sectional Studies

Assal Habibi*(1), Katrina Heine(1), Hanna Damasio(1)
1:University of Southern California

Playing a musical instrument is a complex multisensory experience requiring several skills, including reading and translating abstract musical notation into fine and coordinated muscle movements in order to produce a sound. Mastering this rich and demanding process requires regular and intense practice, often from a young age, and the combination of such demands is likely to influence the differential development, maintenance, and operation of certain brain structures. Evidence has been accumulating to suggest that music training is associated with structural brain differences in adults. Auditory structures are among the most consistently implicated areas of change, and changes in the morphometry of auditory regions have been shown to relate to the learning and mastering of musical skills. The group comparisons in many of these studies, however, are cross-sectional; therefore, it is not possible to conclude whether the anatomical differences result from pre-existing traits, lengthy musical training, or an interaction of the two. It is also not clear whether such changes might be found in children undergoing music training, given that brain development is a dynamic process of change shaped by genetics and experience. We used MRI in two studies to investigate the course of neuroanatomical changes related to music training in children. In Study 1, we cross-sectionally compared a group of 15 child musicians (ages 9-11) to 15 matched non-musicians. We calculated cortical thickness in three regions of interest in each hemisphere: (1) Heschl’s gyrus, (2) the anterior superior temporal gyrus, and (3) the posterior superior temporal gyrus. We found thicker cortex in the right posterior superior temporal gyrus and in the left Heschl’s gyrus in the children who had music training compared to those who did not. Additionally, in the music group, music proficiency was correlated with cortical thickness in the right posterior superior temporal gyrus. In Study 2, we compared changes in cortical thickness in the same regions of interest as in Study 1, in a group of 12 children involved in a systematic music program and another group of 11 children without music training, but here with a longitudinal design. In this case, all children were evaluated at the beginning of the study (before the start of music training for the music group), when they were ages 6-7, and four years later, when they were 10-11 years old. Although all children showed some degree of cortical thickness reduction in all ROIs, as expected in healthy development, we found a significantly smaller reduction of cortical thickness specifically in the right posterior segment of the superior temporal gyrus in children who had music training. Together, our results provide evidence that music training induces regional macrostructural brain changes in school-age children and that these changes are most pronounced in the right auditory association areas.

Subjects: Music and development, Neuroscientific approach

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-21: Timbre ordering and timbre networks

Roger T Dean*(1), Yvonne Leung(2), Felix Dobrowohl(3)
1:The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, 2:University of New South Wales, 3:MARCS Institutes

We tested the hypothesis that simple changes in task condition encourage untrained listeners to form different relations between 30 short complex sounds. Such an ability seems essential if timbral motifs are to be effective compositional elements in sound-based music. Participants were asked to order and group five types of sounds that were either directly sourced (instrumental, environmental, or electronic) or digitally transformed. In Task 1, participants ordered the sounds (represented by initially randomly positioned, uninformative labelled icons) along a horizontal line on a computer screen, by means of repeated listening, such that the sequence was coherent for them. Then the icons were again randomized in screen position, and undistinguished boxes were provided in a row at the bottom of the screen. Four boxes were presented rather than the five implied by the sound curation, providing conditional pressure. The second task was to place sounds that the participant felt belonged together into chosen boxes. Any number of boxes could be used, with any number of items in each. Then the items from the first box re-appeared randomly placed on the screen (as the participant was informed), and the second task continued by ordering them on a horizontal line as in Task 1. Each box’s contents were presented successively in this way until Task 2 was complete. A range of comparisons of the orderings implied by Tasks 1 and 2 tested our hypothesis, including comparisons of the sub-orders (box orders) with the single order of Task 1, and comparisons of the restricted Markov chain representation most likely for each of the tasks. These comparisons demonstrate that participants indeed chose different ordering relationships in the two tasks. Current work to be presented visualizes the relationships in the form of exponential random graph model (ERGM) networks, which may be useful for further assessment of generative mechanisms based on audio features.
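
One simple way to compare the orderings from the two tasks, in the spirit of the Markov-chain comparison mentioned above, is to build first-order transition matrices from each ordering and measure how much they differ. The sketch below is illustrative only: the labels and orderings are made up, and the study’s restricted Markov-chain fitting and ERGM analyses are not reproduced.

```python
# Sketch: comparing two orderings of the same sound set via first-order transition matrices.
# Labels and orderings are illustrative placeholders.
import numpy as np

def transition_matrix(order, labels):
    """Row-normalised matrix of observed label-to-label transitions in one ordering."""
    idx = {lab: i for i, lab in enumerate(labels)}
    mat = np.zeros((len(labels), len(labels)))
    for a, b in zip(order, order[1:]):
        mat[idx[a], idx[b]] += 1
    for i, row_sum in enumerate(mat.sum(axis=1)):
        if row_sum > 0:
            mat[i] /= row_sum
    return mat

labels = ["s1", "s2", "s3", "s4", "s5"]
task1_order = ["s1", "s3", "s2", "s5", "s4"]          # single ordering from task 1
task2_order = ["s3", "s1", "s2", "s5", "s4"]          # concatenated box orderings from task 2

t1 = transition_matrix(task1_order, labels)
t2 = transition_matrix(task2_order, labels)
print("Mean absolute difference between transition matrices:", np.abs(t1 - t2).mean())
```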

Subjects: Timbre, Aesthetics / preference; Composition and improvisation

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-23: Melodic similarity in music copyright law: An experimental investigation

Sho Oishi*(1), Rei Konno(1), Charles Cronin(2), Daniel Müllensiefen(3), Quentin Atkinson(4), Shinya Fujii(1), Patrick E Savage(1)
1:Keio University, 2:George Washington University Law School, 3:Goldsmiths, 4:University of Auckland

Music copyright lawsuits often result in multimillion-dollar settlements, yet there are few objective guidelines for applying copyright law in infringement claims involving musical works. Recent research has attempted to develop objective methods based on automated melodic similarity algorithms (Müllensiefen & Pendzich, 2009, Musicae Scientiae; Savage et al., 2018, Proc. Folk Music Analysis), but there remains almost no perceptual data on the role of melodic similarity in music copyright decisions (Lund, 2011, Virginia Sports and Entertainment Law Journal). We conducted a pilot experiment (n = 19 participants) collecting perceptual judgments of copyright infringement for 13 copyright cases from the Music Copyright Infringement Resource database (mcir.edu) involving musical similarity for which we could prepare both full-audio and melody-only (MIDI) versions. Due to the historical emphasis in legal opinions on melody as the key criterion for deciding infringement (Fishman, 2018, Harvard Law Review), we predicted that listening to melody-only versions would result in perceptual judgments that more closely matched actual past legal decisions. Surprisingly, however, the reverse was true: participant judgments more closely matched past decisions when listening to full audio including non-melodic factors such as timbre, instrumentation, and lyrics (paired t = 2.2, p = 0.049). By SMPC, we plan to expand this pilot study to include more copyright cases. Our findings have important practical implications regarding whether jury members should be allowed to listen to full audio recordings during copyright cases – a point of major contention during recent high-profile cases such as the dispute involving Blurred Lines.
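
A minimal sketch of the paired comparison reported above, assuming one agreement-with-verdict proportion per case in each listening condition; the values are placeholders, not the study’s data.

```python
# Sketch: paired t-test comparing agreement with past verdicts under full-audio vs.
# melody-only presentation, with one proportion per case. Values are placeholders.
import numpy as np
from scipy.stats import ttest_rel

full_audio = np.array([0.80, 0.70, 0.90, 0.60, 0.75, 0.85, 0.70,
                       0.65, 0.90, 0.80, 0.70, 0.75, 0.85])
melody_only = np.array([0.70, 0.60, 0.80, 0.55, 0.70, 0.80, 0.60,
                        0.60, 0.85, 0.70, 0.65, 0.70, 0.75])

t, p = ttest_rel(full_audio, melody_only)
print(f"paired t({len(full_audio) - 1}) = {t:.2f}, p = {p:.3f}")
```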

Subjects: Music and society, Composition and improvisation; Pitch

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-25: Auditory Attentional Blink and Musical Expertise

Merve Akca(1)
1:University of Oslo

Background. Attending to goal-relevant information can leave us metaphorically ‘blind’ or ‘deaf’ to the next relevant information while searching among distractors. The attentional blink (AB; Raymond, Shapiro, & Arnell, 1992) refers to the phenomenon whereby, of two targets presented in close temporal proximity, people often fail to report the second after identifying the first correctly. Although there is evidence that certain visual stimuli relating to one’s area of expertise can be less susceptible to AB effects, it remains unexplored whether the dynamics of temporal selective attention vary with expertise and object type in the auditory modality. Methods. Using an auditory version of the attentional blink paradigm, the present study investigates how musical expertise shapes the deployment of attention in time for different auditory targets. In this paradigm, expert cellists and non-musician participants were asked to first identify a target sound, and then to detect instrumental timbres (cello and organ tones) and a human voice as the second target in a rapid auditory stream. Results. The preliminary results showed that expert cellists outperformed non-musicians in their overall accuracy of target identification and detection. Furthermore, these results also indicated a significant main effect of second-target type, F(2, 76) = 3.234, p < .05, reflecting that human voices may be less susceptible to the attentional blink than cello and organ tones in both groups. Discussion. The results will be discussed in terms of the perceptual salience of voices, as well as the interaction of perceptual expertise, attention, and working memory. Conclusion. Results from this study may have the potential to extend our understanding of selective auditory attention in relation to musical and perceptual expertise.

Subjects: Musical expertise, Music cognition

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-27: That syncing feeling: Physiological arousal in response to observed social synchrony

Haley Kragness*(1), Laura K Cirelli(1)
1:University of Toronto Scarborough

Arguably, one important function music plays in human culture is to encourage social bonding through shared movement experiences. Music and dancing are often key elements in activities where social bonding and emotional connection are a shared goal, such as religious gatherings, sporting events, parties, and weddings. Previous studies have shown that moving in synchrony with others enhances prosocial attitudes and affiliative behaviors. Similarly, observers attribute more social closeness to people moving synchronously together than people moving asynchronously, and these effects can be observed even in infants. The mechanisms by which synchrony modulates these attributions are not well understood. In the present study, we ask whether viewing synchronous activities increases physiological arousal as measured by skin conductance (SC), and whether group size (large versus small) impacts this effect. University undergraduates view a series of YouTube videos depicting people moving either (1) in or out of synchrony with each other and (2) in a large or small group context. Participants’ SC is measured throughout the viewing. To analyze SC, we will measure tonic SC level changes over time and count phasic SC responses while viewing each video type. We expect that participants will experience elevated SC levels and more frequent SC responses when viewing synchronous behavior versus asynchronous behavior, and that this effect will be further enhanced by the presence of a large versus a small group. Results will have implications for understanding the mechanisms by which synchrony modulates social expectations and behavior, and may provide evidence to explain the emotional intensity of live music events.

Subjects: Music and movement, Physiological measurement

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-29: Catching the Theme: Aligning Musical Analogs in a Classical Theme and Variation

Nicholas B Swett(1)
1:University of Sheffield

Recent work by Bourne and Chun (2017) suggests that musical themes are relational schemata: categories that are learned through structure alignment and analogy. Both intentional and implicit analogical comparisons can lead to rapid learning of complex relational concepts in domains like math and engineering (Gentner et al., 2016). The analogy literature emphasizes certain principles of exemplar presentation that facilitate this transfer of knowledge, among them Concreteness Fading, in which a concrete or relatable example of a relational concept is followed by increasingly abstract representations (Goldstone & Son, 2005). Do we build knowledge through analogy when listening to a Classical Theme and Variations? Much music theory literature on the form implies that analogies are involved, and these pieces do feature the systematic, one-to-one alignments that are crucial for analogy formation (Bourne, 2015). Concreteness Fading may also be at play in typical Variation works, where a recognizable Theme is followed by increasingly abstract renditions of its harmonic template. In an experiment, participants of varying musical backgrounds will listen to three variations from either Mozart K. 352 or Beethoven WoO 72 in two different orders. After this training phase, they will report expectancy violations in another variation and complete a puzzle task, assembling yet another variation from scrambled excerpts (Deliege, 1996). These tasks will clarify whether exposure to a few variations leads to transfer of a large-scale Thematic Schema, and whether, as the principle of Concreteness Fading would suggest, the order of the variations presented in that training task impacts this transfer. Data collection is ongoing (N = 6; 40 expected). Results will be discussed in relation to the cognitive science literature on analogy in learning (Goldwater & Schalk, 2016), theoretical and historical perspectives on Theme and Variations (Ivanovitch, 2010), and work on the relative importance of surface features and deep-structural features in music listening and similarity judgments (Deliege, 2007).

Subjects: Music education/pedagogy/learning, Expectation; Harmony and tonality; Music theory; Musical expertise

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-31: Musical Texture as an inducer of cross-modal associations: synaesthesia cases

Svetlana Rudenko(1)
1:Trinity College Dublin

Background: Musical texture is the DNA of a musical composition, combining elements such as rhythm, melody/accompaniment, or polyphonic organization (in relation to piano compositions). B. Galyev points out that cross-modal associations are a normal part of musical thinking. In the case of synaesthesia, a condition in which the senses blend, musical sounds produce sensory feedback from additional percepts, such as the tactile, visual, or olfactory cortex. Chromesthesia synaesthetes, for example Messiaen, “see” colours in response to pitch. Aims: To view music analysis from the perspective of cross-modal perception, characteristic of synaesthetes, and to help performers become aware of possible auditory and tactile visualisations of musical texture. Main contribution: Although various music analyses discuss harmonic structure, the style of the composer, epoch, or genre, there is very little discussion of cross-modal associations induced by elements of musical texture. The paper offers insights into the system of music analysis based on archetypes of musical texture applied to Scriabin’s piano sonata form by S. Garcia. S. Rudenko hypothesises that this system of music analysis is useful for mapping archetypes of musical texture in order to create a cross-modal-association narrative of a composition. The paper discusses how musical texture can be viewed from a different perspective in three cases: (1) Scriabin and music analysis based on archetypes of musical texture as audibly and visually recognisable gestures; (2) 4D visualisation of musical texture as a model of musical-space synaesthesia; (3) artworks on music by artist-synaesthetes. Results/Conclusions: An effective model of music analysis based on archetypes of musical texture is proposed for mapping cross-modal associations. Structural visualisation of musical texture, based on a musical-space synaesthesia perception model, could be useful as a time-management aid. Paintings on music by artist-synaesthetes broaden our conception of imagery and of how sound can be perceived and viewed in additional sensory modalities.

Subjects: Audiovisual / crossmodal, Music education/pedagogy/learning

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-33: The Effect of Musical Play on Interactions Between Children with ASD and their Parents

Olivia Boorom*(1), Meredith Watson(1), Rongyu Xin(2), Valerie Munoz(1), Miriam Lense(1)
1:Vanderbilt University Medical Center, 2:Vanderbilt University

Although a growing body of research looks to harness music for social communication development in children with autism spectrum disorder (ASD), there is limited research investigating behavioral processes by which music may impact social engagement in ASD. Shared engagement and parent physical and verbal responsiveness to children’s focus during play are associated with children’s social and language development (Gulsrud et al., 2016). Musical play may support interactions because it is familiar, reinforcing, and predictable, which may help children attend to activities while also providing parents with an accessible way to be responsive to their child (Lense & Camarata, 2018). However, musical play may also impede interactions due to its sensory and repetitive components. We examined whether use of musical play/toys during parent-child play is related to parental responsiveness. Ten parent-child dyads of preschoolers with ASD were video recorded for ten-minute play sessions that included both musical and non-musical toys. Videos were coded using a five-second partial interval schema for children’s attentional leads and corresponding parental physical toy play or verbal responses. Parents’ and children’s engagement in musical play (e.g., playing musical instrument) and use of musical toys for non-musical play (e.g., building with drums) were also coded for each interval. Overall, parents showed similar responsiveness to children’s musical play/musical toy leads (63.0%±16.6%) versus non-musical leads (55.4%±17.0%) (W=14, p = 0.19) but this differed by type of responses. In response to children’s musical vs. non-musical leads, parents provided significantly more physical play responses (p=0.037) and significantly fewer verbal responses (p=0.014). Follow-up analyses will address parents’ use of musical play in their responses. Results have implications for incorporation of music into therapy. Future research should examine links between parent-child musical play and non-verbal vs. verbal communication skills.

Subjects: Music and development, Music and Autism Spectrum Disorders

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-35: The power of music surpasses the power of suggestion: No effect of titles on imaginative music listening

Naomi Benecasa(1)
1:University of Sheffield

Music discourse is rife with cross-modal correspondences for musical stimuli (Zbikowski, 2008; Eitan & Timmers, 2010). Music is not only spoken about in terms of other domains, it is also conceptualized cross-modally (Casasanto, Phillips, & Boroditsky, 2003). This study examines whether cross-modal verbal prompts, in the form of titles, can affect the prevalence of imaginative listening, and whether such listening correlates with increased enjoyment. The influence of verbal information on listening has been examined through studies on program notes, showing mixed effects on enjoyment (Margulis, 2010; Bennett & Ginsborg, 2018). Titles alone may be both succinct and elaborative enough to evoke imaginative listening, per research on titles of artwork and meaningful experience (Millis, 2001). In an empirical study, 48 adult participants with minimal training in classical music were presented with 12 recorded excerpts of classical music. Each minute-long excerpt was presented in one of three title conditions: absent, a mere number; descriptive, a formal-analytical title; and elaborative, the original, cross-modal title. After each excerpt, participants were asked, “What, if anything, does the music bring to mind?”; they also reported on six-point Likert scales measuring Interest, Liking, Affect, and Familiarity. Open-ended responses were coded according to cross-modal content, and self-report ratings were parametrically tested for any correlation with title condition and cross-modal response data. Interestingly, no significant differences were found across title conditions for the participants’ responses on any parameter. Further, there is little evidence to suggest a prevalence of cross-modal listening for those receiving a cross-modal title; participants were equally imaginative in the absent-title condition. This lack of effect may be due to experimental limitations and to the participants’ inherent preferences. Future research will concern the interaction of personality correlates with imaginative listening. Bennett, D., & Ginsborg, J. (2018). Audience reactions to the program notes of unfamiliar music. Psychology of Music, 46(4), 588-605. Casasanto, D., Phillips, W., & Boroditsky, L. (2003). Do we think about music in terms of space? Metaphoric representation of musical pitch. Proceedings of the Annual Meeting of the Cognitive Science Society, 25. Eitan, Z., & Timmers, R. (2010). Beethoven’s last piano sonata and those who follow crocodiles: Cross-domain mappings of auditory pitch in a musical context. Cognition, 114(3), 405-422. Margulis, E. H. (2010). When program notes don’t help: Music descriptions and enjoyment. Psychology of Music, 38(3), 285-302. Millis, K. (2001). Making meaning brings pleasure: The influence of titles on aesthetic experiences. Emotion, 1(3), 320-329. Zbikowski, L. (2008). Metaphor and music. In R. W. Gibbs (Ed.), The Cambridge handbook of metaphor and thought (pp. 502-524). Cambridge University Press.

Subjects: Cross-domain effects, Music listening

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-37: The Contributions of Auditory and Visual Cues to Social Rhythmic Entrainment

Youjia Wang*(1), Michael Z Burchesky(2), Miriam Lense(2)
1:Vanderbilt University, 2:Vanderbilt University Medical Center

Rhythm and timing play an important role in social interactions. Predictable rhythmic behaviors – such as those provided during speech and singing – serve to entrain attention to socially adaptive information beginning in infancy. When infants as young as two months of age view audio/visual (a/v) stimuli of infant-directed singing, they increase gaze to the eyes of the singer, a fundamental marker of social engagement, during the strong rhythmic beats of the singing (Lense & Jones, 2017). However, it is unknown which cues contribute to this entrainment. During the course of a typical interaction, rhythm is specified crossmodally: for example, people use coordinated rhythmic speech, gestures, facial expressions, and movements. During singing, visual cues co-occur with auditory structure, and both infants and adults use visual cues to attend to and recognize singers (Trehub et al., 2013, 2015). In the current study, we aim to replicate infant patterns of entrainment in an adult sample. We additionally manipulate the availability of auditory and visual information to parse the contributions of these cues. Adults (n = 25) watched videos of singing while eye-tracking data were collected. We examined whether entrainment was modulated by the rhythmic structure in original a/v stimuli, visual-only stimuli (audio removed), and auditory-preserved, visually degraded stimuli (via visual blurring and visual noise). Peri-stimulus time histograms revealed that adults’ attention was rhythmically time-locked, with increased attention to the eyes of the singer during the strong beats of singing, as is seen in infancy. Comparisons of entrainment across original and manipulated versions showed rhythmically time-locked increases in eye-looking of similar magnitude across all conditions. Results suggest that social rhythmic attention is a robust and persistent behavior. Future studies will further probe multisensory rhythmic timing cues and the salience of their contributions to social visual entrainment.
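
The sketch below illustrates one way to compute a peri-stimulus time histogram of eyes-looking around strong-beat onsets, assuming a boolean gaze-on-eyes trace sampled at a fixed rate and a list of beat times. The sampling rate, window, and synthetic data are illustrative assumptions, not the study’s parameters.

```python
# Sketch: peri-stimulus time histogram (PSTH) of "gaze on eyes" samples around strong beats.
# Sampling rate, window, and data below are illustrative placeholders.
import numpy as np

def eyes_psth(gaze_on_eyes, srate, beat_times, window=0.5, n_bins=20):
    """Mean proportion of eyes-looking in bins spanning +/- window (s) around each beat."""
    edges = np.linspace(-window, window, n_bins + 1)
    half = int(round(window * srate))
    per_beat = []
    for beat in beat_times:
        centre = int(round(beat * srate))
        if centre - half < 0 or centre + half > len(gaze_on_eyes):
            continue  # skip beats too close to the edges of the recording
        segment = gaze_on_eyes[centre - half:centre + half].astype(float)
        seg_times = np.linspace(-window, window, len(segment), endpoint=False)
        per_beat.append([segment[(seg_times >= lo) & (seg_times < hi)].mean()
                         for lo, hi in zip(edges[:-1], edges[1:])])
    return edges, np.mean(per_beat, axis=0)

# Example with synthetic data: 60 s of gaze samples at 60 Hz, strong beats every 0.75 s.
srate = 60
gaze = np.random.rand(60 * srate) < 0.4     # True wherever gaze falls on the eyes
beats = np.arange(1.0, 59.0, 0.75)
edges, psth = eyes_psth(gaze, srate, beats)
print(np.round(psth, 2))
```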

Subjects: Beat, rhythm, and meter, Rhythmic Entrainment

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-39: Effects of Genre Tag Complexity on Popular Music Enjoyment

Lauren M Shepherd*(1), Elizabeth Margulis(1)
1:University of Arkansas

The popular online streaming platform Spotify has added over 1000 genre tags in the last two years. Although numerous artists and composition competitions claim to seek projects that “transcend the traditional notion of genre,” the industry has only added more complex and mystifying genre labels. This dichotomy between artists and industry ignores the effects these labels have on consumers. Do more complex genre tags enhance the listening experience for the average consumer by providing additional information about what they are about to hear? The current research examines the effects of the granularity of genre tags on popular music perception by identifying whether more complex genre tags increase enjoyment and understanding of popular music excerpts. Participants heard four 20-second excerpts of popular music from four broad genre categories (pop, country, rap/hip-hop, and rock), as defined in Gjerdingen & Perrott (2008) and Mace et al. (2011). Excerpts were presented simultaneously with two or three corresponding broad genre category tags or nuanced subgenre category tags in a randomized order. Participants used Likert-type scales to rate how well the genre tags matched the excerpt with which they were presented and how much they enjoyed the excerpt, and were asked to self-label each excerpt with a genre tag. Results showed that participants rated excerpts presented with broad genre categories higher than those presented with subgenre categories for both matching (F(1, 2109.67) = 19.07, p < .001) and enjoyment (F(1, 2109.38) = 56.47, p < .001). Additionally, participants did not self-label any of the excerpts with genre categories that were not previously attached to the respective stimuli. These results have practical implications for how music producers market popular music, since the broad genre categories were preferred and appear to convey sufficient expectations for popular music. An influence of genre experience and preference was present, which is also explored.

Subjects: Music and language, Popular music

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-41: Does cold stimulation enhance musical frisson? Effect of cold stimulation on perceptual rating of consonant and dissonant intervals

Yuri Ishikawa*(1), Patrick E Savage(1), Masashi Nakatani(1), Shinya Fujii(1)
1:Keio University

Listening to music sometimes induces the sensation of frisson, a pleasant tingling feeling accompanied by raised body hairs and gooseflesh. Some believe that we first feel an emotion and that a physiological response occurs subsequently. By contrast, the James-Lange theory proposes that the physiological change is primary and the emotion is then experienced. In light of the James-Lange theory, we hypothesized that providing cold stimulation to the skin while listening to sounds may change our sound perception. Here we tested this hypothesis by providing a cold stimulus to the skin while participants listened to consonant and dissonant intervals. Ten healthy students participated in the pilot study. We used a perfect fifth and a minor second. Each of the chords had two different root notes in order to counterbalance the pitch. The cold stimulation was provided to the mastoid using a cooling device. As a sham control condition, we touched the skin with another cooling device while its power was turned off. Participants rated the degree of frisson and pleasantness using a Visual Analog Scale. We performed a two-way repeated measures analysis of variance (ANOVA) with the factors of musical interval (consonant/dissonant) and stimulus condition (cold/control). The ANOVA showed no significant interaction between musical interval and stimulus condition. For the frisson rating, the main effect of stimulus condition was significant (F(1,9) = 10.744, p < 0.01, η2 = 0.544), showing that the cold stimulus increased the frisson rating. The main effect of interval was not significant (F(1,9) = 3.011, p = 0.117, η2 = 0.251). For the pleasantness rating, there was no significant main effect of stimulus condition (F(1,9) = 0.003, p = 0.957, η2 < 0.001). There was a significant main effect of interval (F(1,9) = 18.116, p < 0.01, η2 = 0.668), showing that the perfect fifth was perceived as more pleasant than the minor second. These results suggest that cold stimulation may increase subjective frisson ratings but not affect pleasantness ratings while listening to intervals. Our finding partially supports the James-Lange theory of emotion.
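
A minimal sketch of the 2 x 2 repeated-measures ANOVA described above (musical interval x stimulus condition on frisson ratings), using statsmodels’ AnovaRM on long-format data. The participant IDs and ratings are random placeholders, not the study’s measurements.

```python
# Sketch: 2 (interval: consonant/dissonant) x 2 (stimulus: cold/control) repeated-measures
# ANOVA on frisson ratings, in long format. Data are random placeholders.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
rows = []
for subj in range(1, 11):                       # N = 10 participants
    for interval in ["consonant", "dissonant"]:
        for stimulus in ["cold", "control"]:
            rows.append({"subject": subj,
                         "interval": interval,
                         "stimulus": stimulus,
                         "frisson": rng.uniform(0, 100)})    # VAS rating placeholder
df = pd.DataFrame(rows)

# One observation per participant per cell, as AnovaRM requires.
result = AnovaRM(df, depvar="frisson", subject="subject",
                 within=["interval", "stimulus"]).fit()
print(result)
```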

Subjects: Emotion, Neuroscientific approach; Psychoacoustics

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-43: The perception of musical structure: a comparative approach

Paola Crespo-Bojorque*(1), Juan M Toro(2)
1:Universitat Pompeu Fabra, 2:Universitat Pompeu Fabra & ICREA

The ability to process hierarchical structures is central to the development of higher human capacities such as language and music. The musical system makes use of perceptually discrete elements (chords) organized in a hierarchical manner. Musical sequences are not created by a random arrangement of elements. Instead, syntactic combinational principles operate at different levels, such as in the creation of chords, chord progressions and keys. Comparative research has shown that basic abilities involved in music processing might be the result of general acoustic biases not specific to the human species. That is, music might arise from simpler pre-existing systems that evolved for other tasks. Whether hierarchical processing is one of these pre-existing abilities is a critical question for exploring the evolutionary origins of human cognitive skills. The present work addresses the perception of musical hierarchical structure and tonality from a comparative perspective. We ran experiments with animals on the discrimination of structured from unstructured melodies implemented in a single tonality (Experiment 1) and in multiple tonalities (Experiment 2). Structured melodies were excerpts of Mozart’s sonatas. Unstructured melodies were the result of the recombination of fragments of different sonatas. Our results demonstrate that both human participants and non-human animals (rats) successfully discriminated melodies based on their structure when there were no changes in tonality. That is, they were able to tell apart structured from unstructured melodies. Interestingly, when tonality changes were included, the musical structure discrimination capacity of human participants was enhanced whereas that of animals was diminished. Together, the results point towards similarities and differences across species. The fact that animals were able to discriminate musical structure suggests that some of the mechanisms involved in hierarchical structure processing might arise from biological constraints present in other species. However, the differential effect of tonality changes across species suggests species-specific adaptations in the use of fundamental frequency information.

Subjects: Evolutionary perspectives

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-45: Synchronization to vibrotactile rhythms in Deaf individuals

Phuong-Nghi T Pham*(1), Sean A Gilmore(1), Frank Russo(1)
1:Ryerson University

It is well documented that we perceive and synchronize better to rhythms presented in the auditory modality than in other modalities. The vocal learning hypothesis suggests that this auditory advantage stems from auditory-motor connections that support vocal learning. In the current study, we assess beat perception in a population of Deaf participants. Although these participants are expected to have impoverished auditory experience, we expect, on the basis of prior research, that the auditory cortex will be recruited during the perception of vibrotactile rhythms. As such, it is possible that Deaf participants will experience vibrotactile presentations of rhythm in a manner that is similar to how hearing participants experience auditory rhythms. Recent work in the lab has investigated beat perception in hearing participants through EEG (passive listening) and sensorimotor synchronization (tapping) paradigms. This study follows up by presenting isochronous and non-isochronous rhythms to Deaf participants in the vibrotactile modality (a vibrating backpack). The non-isochronous rhythms are composed of house beats, to replicate the stimuli with which participants may have previously engaged in a club environment. Hearing participants will be presented with both vibrotactile and auditory stimuli. Data collection for this study is currently in progress.

Subjects: Beat, rhythm, and meter, Evolutionary perspectives

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-47: ERP Components of Attentional Control in Anxious Musicians

Sarah ER Lade*(1), Laurel Trainor(1), Daniel Bosnyak(1), Dave Thompson(1)
1:McMaster University

Music Performance Anxiety (MPA) results in debilitating anxiety surrounding musical performances and negatively impacts performance quality due to overly heightened arousal. A likely underlying cause of this lowered performance quality is impaired attentional control processes, particularly in anterior cingulate cortex (ACC) and dorsolateral prefrontal cortex (DL-PFC). While individuals with clinical anxiety show abnormalities in ACC and DL-PFC function, the specific effects of music performance anxiety on attentional control are unknown. Abnormalities in attentional processes can be measured with EEG while participants complete a cognitive task. The Go/No-Go task is a validated method for examining abnormalities in the attentional system, particularly the monitoring of action-response processes. During a Go/No-Go task, participants must selectively respond or withhold responding to particular letter combinations on a computer screen. Response inhibition can be measured by examining frontal ERP components; both the N200 (N2) and P300 (P3) are enhanced when a desired response is withheld. The error-related negativity (ERN) component is associated with increased attentional effort during error detection. In the present study, high-level classical pianists (RCM Grade 10+) were separated into high- and low-performance-anxiety groups according to a standardized questionnaire (Perf-AIM). Both groups performed twice, once in an empty auditorium (Jury Absent – JA) and once for a musical jury (Jury Present – JP). After performing, EEG was measured during a Go/No-Go task. We predict that high-anxiety (compared to low-anxiety) participants will have larger N2s and ERNs, but smaller P3s, indicating worse attentional functioning. We also predict that high-anxiety participants will show larger-amplitude N2, ERN and P3 following the JP than the JA performance (indicating an increase in attentional difficulties), whereas low-anxiety participants will show no significant differences across these conditions. Pilot analyses indicate that our Go/No-Go task elicits the desired ERP components from both groups; additional data collection is ongoing to compare between groups and conditions.

Subjects: Neuroscientific approach, Performance

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-49: Towards an Understanding of Musical Expressions: A functionalistic Approach

Kework Kalustian(1)
1:Max Planck Institute for Empirical Aesthetics

Traditionally, the Leitmotifs of Richard Wagner’s ‘Ring’ cycle are understood according to the following principle: information in musical sound events (e.g., melody) stands for extra-musical entities. Consequently, if a unit of musical sound events (i.e., a musical expression) regularly refers to a thing such as a ‘sword’ within a context such as Wagner’s ‘Ring’ cycle, the so-called ‘sword’ motif is established as a Leitmotif and semantically labeled accordingly. Hence, Leitmotifs are considered musical and semantic representations of extra-musical entities. In my view, this strategy is reliable as long as we are aiming at the explanatory factors behind common-sense meanings of linguistic utterances about certain musical expressions. However, once we aim to investigate mental representations of musical expressions, we need to distinguish these two layers sharply from each other. Likewise, we need to conceptually disentangle the musical expressions of the Leitmotifs from their attached semantic labels (cf. enactivism). To do so, I propose a conceptual approach for characterizing and predicting recipients’ mental states towards musical expressions/Leitmotifs according to the different ways recipients can be directed at one and the same musical expression/Leitmotif. By means of this approach I suggest a dynamic stage model in which different varieties of intentional directedness towards musical expressions are conceptually integrated, ranging gradually from basic to complex cognitive styles of music listening. Because these different ways of intentional directedness fulfill different functions in conceiving of musical expressions, the approach is ultimately a functional one. With some caveats in mind, I conclude by adapting the conceptual dynamic stage model to empirical purposes, positing a path diagram for a structural equation model based on recent findings within the scope of empirical music aesthetics, which might also be interesting for further research on musical expression.

Subjects: Musicology, Aesthetics / preference; Music and language

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-51: A New Roadmap for Research in Neurologic Music Therapy Regarding Individuals with Autism Spectrum Disorders

Nicole Richard*(1), Michael Thaut(1)
1:University of Toronto

The therapeutic effects of music for people with Autism Spectrum Disorders (ASD) have gained increasing recognition in recent years. This review will elucidate why more fundamental research is needed regarding how music influences the cerebellum and long-range functional connectivity in the brain, in order to improve techniques for using music as therapy in this population. Recent research has identified motor and attention deficits in people with ASD that correlate with, and may even underlie, the social, communicative, and behavioural symptoms typically associated with the diagnosis. Neuroimaging studies indicate that differences in cerebellar volume, such as in right cerebellar Crus I/II, are correlated with the motor, cognitive, social, and behavioural hallmarks of ASD. In addition, long-range connectivity in the brain, including cerebellar-cortical loops, has been found to be altered in people with ASD compared to typically developing (TD) individuals. Interestingly, engaging in music making and listening activates and affects cerebellar areas, including some that overlap with areas affected by ASD. Furthermore, because music can effect change across multiple brain areas, it has been found to affect functional connectivity in the brain in a positive manner for typically developing people, as well as for individuals with ASD in some recent studies. There is thus much potential for the role of music in addressing underlying neurodevelopmental factors in ASD; however, more research is needed. This theoretical paper will outline new research directions to help identify how music can affect cerebellar and connectivity issues in ASD that relate to motor and attentional factors, which in turn seem to underlie the classic ASD symptoms. Such research would provide a stronger foundation for future clinical studies that may solidify neurologic music therapy strategies to directly and positively impact those with ASD.

Subjects: Music therapy, Health and well-being; Music and development; Neuroscientific approach

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-53: Quantifying Karnāṭaka: Raga Knowledge on Expectations of Melodic Conformity

Neerjah Skantharajah*(1), Matthew H Woolhouse(1)
1:McMaster University

Interpretations of melodic conformity, i.e., the goodness of fit of notes within a linear sequence, are shaped by veridical and schematic knowledge. Previous studies have found similarities in schematic knowledge between Western musicians and non-musicians (Krumhansl, 1990). Arguably, this can be accounted for by the relatively constrained nature of Western tonality: the more-or-less exclusive use of two modes enables non-musicians to learn Western tonality schematically through enculturation, including in infancy. In contrast to Western tonality, South Indian classical music has a relatively complex theoretical framework, employing over 100 unique ragas, which are similar to scales. Given this complexity, the current study attempted to uncover the presence or absence of schematic knowledge within Carnatic and Western music using four groups: Carnatic performers and listeners, and Western performers and listeners. Two sets of novel melodic stimuli were created, one for Carnatic participants and the other for Western participants. The stimuli were further subdivided into those conforming to the raga/mode and those that were non-conforming due to the presence of a single wrong note, i.e., an out-of-scale tone. Stimuli were 6-second sung recordings containing 16 notes at 80 bpm. Participants were asked whether the stimuli contained a wrong note and answered using a 5-point scale ranging from “definitely” to “definitely not.” We hypothesized that (1) Carnatic performers would have significantly more correct responses than Carnatic listeners, and (2) there would be relatively little difference between Western performers and listeners. Results showed that performers, regardless of genre, performed better on the task than listeners in the same group. This refutes our second hypothesis and suggests that within both Western and Carnatic contexts, schematic knowledge for melodic conformity is possessed only by performers, i.e., experts. Krumhansl, C. L. (1990). Cognitive foundations of musical pitch. New York: Oxford University Press.

Subjects: Cross-cultural comparisons/non-Western music, Expectation; Harmony and tonality; Music information retrieval; Music theory; Musical expertise; Per

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-55: Synchronization abilities correlate with performance on a melodic intonation therapy task and reading fluency

Yi Wei*(1), Ed Large(1)
1:University of Connecticut

Melodic intonation therapy (MIT) has a long history of application for patients with non-fluent aphasia. The fundamental technique involves tapping to the onsets of syllables while speaking/singing; we refer to this as the MIT task. Research has also shown impairment of rhythmic synchronization in many clinical populations with language-related deficits, such as individuals with aphasia and dyslexia. In this study, we explored the relationship between rhythmic synchronization ability, performance on the MIT task, and reading fluency and comprehension in healthy English- and Mandarin-speaking adults. We assessed rhythmic synchronization by asking subjects to synchronize taps with a metronome that exhibited occasional tempo and phase perturbations. We used three different base tempi (2 Hz, 2.5 Hz, and 3 Hz), and manipulated direction (negative and positive) and size (8%, 15% and 25%) in the phase and tempo perturbation conditions. Subjects were instructed to synchronize taps to every tone in the rhythmic stimuli as accurately as possible. Rhythmic synchronization performance was assessed by phase variability immediately following the perturbation. We assessed the ability to perform the MIT task by asking subjects to synchronize taps to the onset of each syllable they produced while reading sentences as naturally as possible. Performance on the MIT task was measured by the variability with which subjects synchronized taps to syllable onsets. Finally, language skills were measured using reading fluency and comprehension assessments for both native English and Mandarin speakers. We observed that participants’ ability to synchronize with a perturbed metronome correlated strongly with performance on the MIT task, and synchronization performance also correlated strongly with language fluency scores. Both findings generalized across English and Mandarin speakers. Implications for developing intervention and rehabilitation methods based on rhythmic synchronization training are discussed.
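
The abstract does not specify exactly how phase variability was computed; as one plausible reading, the minimal Python sketch below (variable and function names are hypothetical) measures the circular variability of tap timing relative to metronome onsets.

    import numpy as np

    def phase_variability(tap_times, onset_times):
        # Relative phase of each tap: offset from the nearest metronome onset,
        # expressed as a fraction of the nominal inter-onset interval (IOI)
        # and mapped to an angle on the circle.
        tap_times = np.asarray(tap_times, dtype=float)
        onset_times = np.asarray(onset_times, dtype=float)
        ioi = np.median(np.diff(onset_times))
        idx = np.clip(np.searchsorted(onset_times, tap_times), 1, len(onset_times) - 1)
        prev_closer = np.abs(tap_times - onset_times[idx - 1]) < np.abs(tap_times - onset_times[idx])
        nearest = np.where(prev_closer, onset_times[idx - 1], onset_times[idx])
        phases = 2 * np.pi * (tap_times - nearest) / ioi
        # Circular variance: 0 = perfectly consistent phase, 1 = no synchronization.
        return 1.0 - np.abs(np.mean(np.exp(1j * phases)))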

Subjects: Music and language, Music therapy

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-57: Influence of rhythm and beat priming on receptive grammar task

Singyi Yen(1), David Bendoly(1), Matthew Heard(1), Yune S Lee(1)
1:Ohio State University

A growing body of evidence has demonstrated the influence of rhythm and beat on language processes. For example, regular beat priming facilitates grammaticality judgments (e.g., a morpho-syntactic judgment task) in children with developmental language disorder and in patients with Parkinson's disease. This study explores these observations in two similar experiments that used either musical rhythm sequences or binaural beat frequencies, respectively. In the first experiment, the priming effect of rhythms (simple and complex) was compared to baseline conditions (a constant tone and ambient environmental sounds) on a subsequent language task. The second experiment compared the priming effect of binaural beats (beta and gamma frequencies) to baseline conditions (a constant tone and silence). In both experiments, young adults (ages 18-39) performed the same grammar task involving syntactic re-analysis of short spoken sentences constructed with either object- or subject-relative (OR/SR) clauses. Subjects were asked to identify the gender of the agent in the sentences, a task which is more difficult in OR than in SR sentences. To further increase task difficulty, half of the sentences also included a multi-talker babble track which degraded their acoustic quality. The first experiment (N=22, 14 female) found no significant main effect of any of the rhythmic primes. In contrast, the second experiment (N=18, 15 female) yielded a significant main effect of beta (z-score: 3.3; p<0.001), but not gamma, binaural beats. In sum, the rhythm priming effect did not generalize to a different type of language grammar task in the young adult cohort. However, our finding of a beta priming effect is consistent with literature suggesting that beta-frequency stimulation enhances auditory language performance, which could be due to entrainment of neural oscillations in the basal ganglia, a core sensorimotor area for sequencing and analyzing language structures. A further study is underway to corroborate the current initial data.

Subjects: Music and language, Beat, rhythm, and meter

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-59: Towards a Historical Perception of Music: An Empirical Study of a Galant Schema

Sammy Gardner(1)
1:University of North Texas

Vasili Byros, in his dissertation, references a critique of Beethoven’s Third Symphony by Friedrich Rochlitz, who claims that the symphony should have modulated to the key of G minor during measures 6-9. This hearing stands in opposition to more recent hearings, such as Heinrich Schenker’s, that view this passage in E-flat major. A central thesis of Byros’ dissertation is that “schemata provide access to historical modes of listening today.” This raises the question: can a modern listener reconstruct a historical perception of music? Schema theory has attempted to provide an answer, arguing that one can reconstruct a historically situated music perception. What problematizes this notion is that people today do not live in eighteenth-century culture; how, then, could one possibly understand music in its original culture? This paper explores the process of understanding the le-sol-fi-sol schema identified by Byros in his dissertation. I set up an experiment that tests how musicians hear the le-sol-fi-sol schema over a corpus of music and gauges their expectation as the schema moves towards a cadence. I then deny their cadential expectations and track the results. I hypothesize that when a key-defining schema, such as the le-sol-fi-sol, is primed for an expectation and that expectation is denied, one can better access a historical mode of music perception by experiencing the denied expectation the same way a contemporaneous listener, like Rochlitz, would have. My experimental results are consistent with this hypothesis, showing that participants were able to build up the historically accurate expectation for the le-sol-fi-sol schema over a large enough corpus of music. Further, I found that when I denied their new-found expectation, participants reported experiencing denied expectations that are perhaps similar to those of an eighteenth-century listener.

Subjects: Expectation, Composition and improvisation; Corpus analysis/studies; Memory; Music theory

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-61: The Effects of Musical Improvisation Instruction on Visual and Auditory Statistical Learning

Martin Norgaard*(1), Joanne A Deocampo(1), Christopher Conway(2)
1:Georgia State University, 2:Boys Town National Research Hospital

This research investigates the effects of musical improvisation training on visual and auditory statistical learning in early adolescence. Improvisation training involves manipulating musical elements in real time within physical and stylistic constraints. This training may have specific cognitive benefits not identified in previous research on far-transfer effects of music instruction. In particular, as improvisers create new sequences of notes, they must follow syntactic rules, a central element of statistical learning that may enhance abilities in other domains where sequences are created in real time. Twelve students (N=12) enrolled in a university-sponsored after-school jazz instruction program for adolescents took part in the study. Participants completed electroencephalography (EEG) measures collected during several statistical learning (SL) tasks assessing the learning of both adjacent and nonadjacent dependencies in visual and auditory input streams. Learning was assessed both before and after four months of improvisation training. A late-positivity event-related potential (ERP) effect was elicited for stimuli that predict the target stimulus based on the sequential dependencies embedded in the input stream. Percent change scores will be calculated for the ERP and reaction time data for each task, subtracting pre-training values from post-training values and dividing by the pre-training value. Improvisation achievement was measured before and after training with an improvisation continuation task evaluated both through expert ratings and using information-theoretic measures. We will calculate percent change scores between the two time points for the achievement measures as well. We hypothesize that greater improvement on the SL tasks will correlate with greater gains in improvisation achievement scores. Pre-training data collection is complete and contains a wide distribution of scores on the initial improvisation achievement test. Full analysis of the behavioral and EEG data from both pre- and post-training will be available by the time of the poster presentation.
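
As a worked example of the percent-change score described above (a sketch only; the function name and example values are hypothetical):

    def percent_change(pre: float, post: float) -> float:
        # (post - pre) / pre, expressed as a percentage, as described in the abstract.
        return 100.0 * (post - pre) / pre

    # e.g., a measure that rises from 2.0 before training to 2.5 after is a +25% change.
    print(percent_change(2.0, 2.5))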

Subjects: Composition and improvisation, Music education/pedagogy/learning

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-63: Tablet version of the Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA)

Mélody Blais(1), Naeem Komeilipoor(2), Camille Gaillard(2), Hugo Laflamme(2), Melissa Kadi(2), Agnès Zagala(2), Simon Rigoulot(3), Sonja A Kotz(4), Simone Dalla Bella(5)
1:BRAMS, 2:BRAMS, University of Montreal, 3:BRAMS, University of Montreal & Université du Québec à Trois Rivières, 4:BRAMS, University of Maastricht & Max Planck Institute for Human Cognitive and Brain Sciences, 5:University of Montreal

Perceptual and sensorimotor timing skills can be fully assessed with the Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA; Dalla Bella et al., 2017). The battery is a reliable tool for evaluating timing and rhythm skills (Bégel et al., 2018), with high sensitivity to individual differences. We present a recent implementation of BAASTA as an app on a tablet device. Using a mobile device ensures portability of the battery while maintaining excellent temporal accuracy in recording performance in the perceptual and motor tests. BAASTA includes 9 tasks (four perceptual and five motor). The perceptual tasks are duration discrimination, anisochrony detection (with tones and music), and a version of the Beat Alignment Test. The production tasks involve unpaced tapping, paced tapping (with tones and music), synchronization-continuation, and adaptive tapping. Normative data obtained with the tablet version of BAASTA in a group of 40 healthy non-musicians are presented, and profiles of perceptual and sensorimotor timing skills are detected using machine-learning techniques. The relation between the identified timing and rhythm profiles in non-musicians and general cognitive functions (working memory, executive functions) is discussed. These results pave the way to establishing thresholds for identifying timing and rhythm capacities in the general population and in affected populations. References: Dalla Bella, S., Farrugia, N., Benoit, C. E., Begel, V., Verga, L., Harding, E., & Kotz, S. A. (2017). BAASTA: Battery for the Assessment of Auditory Sensorimotor and Timing Abilities. Behavior Research Methods, 49(3), 1128-1145. Bégel, V., Verga, L., Benoit, C. E., Kotz, S. A., & Dalla Bella, S. (2018). Test-retest reliability of the Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA). Annals of Physical and Rehabilitation Medicine.

Subjects: Beat, rhythm, and meter, Music and movement

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-65: Songbooks Increase Parent-Child Social Interactions in Preschoolers with and without ASD

Talia Liu(1), Danielle Dai(1), Benjamin Schultz(2), Christina Liu(1), Olivia Boorom*(1), Miriam Lense(1)
1:Vanderbilt University Medical Center, 2:Maastricht University

Providing natural opportunities that scaffold interpersonal engagement is important for supporting social interactions for children with ASD. The familiar, predictable, and reinforcing context of musical activities may provide a platform for the development of social interaction skills. For example, children with ASD showed increased eye gaze and turn-taking with therapists during music therapy versus play therapy (Kim et al., 2008). Beyond impacts on children’s behavior, musical activities may also support children’s interaction partners in providing opportunities for, and being receptive to, moments of validated social engagement. We assessed the impact of a musical context on child and parent behavior during book-sharing interactions. Thirteen children with ASD (10 male, M = 3.93 years) and sixteen typically developing (TD) children (10 male, M = 2.98 years) were videotaped during a 5-minute picture book and a 5-minute songbook activity with their parents. A five-second partial-interval coding schema (Klimenko, 2007) assessed parents’ and children’s visual attention towards the books and their partner, and frame-by-frame interpersonal movement activity was extracted from the videos. Across dyads, children demonstrated greater sustained attention to songbooks than picture books (F(1,27) = 8.36, p = 0.007), with the effect evident in both children with ASD and TD children. Parents showed greater gaze toward their child during songbooks than during picture books (F(1,27) = 22.36, p < 0.0001). This effect was evident for parents of both TD and ASD children. Preliminary analyses of interpersonal movement coordination suggest a pattern in which, for TD dyads, children Granger-caused parent movement during the picture book activity versus bidirectional Granger causality during the songbook activity. Music activities such as songbooks may provide an accessible context for supporting both parent and child engagement. Implications for incorporation of musical activities into natural therapeutic contexts and different levels of measurement will be discussed.
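
The Granger-causality analysis of the movement time series could, under common assumptions, be sketched as follows in Python (the array names and the use of statsmodels are illustrative, not the authors' actual pipeline):

    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    def child_drives_parent(parent_motion, child_motion, maxlag=10):
        # statsmodels tests whether the SECOND column helps predict the FIRST,
        # so the putative cause (child movement) goes in column 2.
        data = np.column_stack([parent_motion, child_motion])
        results = grangercausalitytests(data, maxlag=maxlag)
        # Return the F-test p-value at each lag; small p-values are consistent
        # with child movement Granger-causing parent movement.
        return {lag: res[0]["ssr_ftest"][1] for lag, res in results.items()}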

Subjects: Music and development, Music therapy

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-67: Heartbeat entrainment: A physiological role for empathy in the act of music listening?

Michael Winters*(1), Bruce Walker(1), Grace Leslie(1)
1:Georgia Institute of Technology

Empathy plays a key role in our ability to experience emotions in music (Egermann & McAdams, 2013; Clarke et al., 2015), but this has yet to be explained on a physiological level. We propose a mechanism wherein feelings of empathy are mediated by the sound of another’s heartbeat, and test this theory by measuring entrainment between the heartbeat of the listener and the simulated heartbeat of another person. We recorded participants’ (N = 32) electrocardiograms (ECG) while they listened to 20-second simulated heartbeat sounds of various speeds representing the pulse of an imagined person. The participants then rated that person’s emotional state by choosing one of four emotions from the “Reading the Mind in the Eyes” task (Baron-Cohen et al., 2001). After making a selection, participants reported their own transient empathetic states by rating how well they were able to “feel what the other was feeling” on a 7-point scale. At the end of the study, participants completed four validated survey instruments that measured their dispositional empathy. We found a significant effect of the speed of the heartbeat stimulus on the heartbeat of the listener (p = 0.03), and medium to high ratings of “feeling what the other was feeling” (μ = 4.3, skew = -0.47). Future work will examine two other physiological measures in these data for effects of dispositional and situational empathy: the degree to which the listener’s heartbeat entrained to the imagined partner’s heartbeat, and the heartbeat-evoked potential (HEP), an index of interoceptive processing. This work seeks to explain a physiological role for empathy in the act of music listening.

Subjects: Emotion, Empathy

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-69: Investigating the Role of Amplitude Envelope Manipulation on Melodic Alarm Recognition in a Divided Attention Task

Sharmila Sreetharan*(1), Rebecca Benjamin(1), Joseph Schlesinger(2), Mike Schutz(1)
1:McMaster University, 2:Vanderbilt University Medical Center

Clinicians working in intensive care units (ICUs) are required to attend and rapidly respond to multiple stimuli across various modalities (e.g., hearing alarms going off, reading patient files, etc.). Identification of the melodic alarms used in the ICU (i.e., International Electrotechnical Commission (IEC) 60601-1-8) is poor, especially when completed in tandem with other cognitively demanding tasks. One aspect that may aid alarm identification is manipulation of the amplitude envelope (i.e., changes in amplitude over time) of tones within an alarm’s melodic sequences. Currently, IEC alarms employ flat amplitude envelopes, or amplitude-invariant tones, but previous studies have demonstrated that incorporating percussive amplitude envelopes (i.e., exponentially decaying sounds characteristic of impact sounds) aids associative memory and reduces perceived annoyance. Our current experiment explores the effect of amplitude envelope manipulation on divided attention; specifically, we examine the effects of amplitude envelope on IEC alarm recognition under varying cognitive loads. Participants completed an audiovisual delayed matching-to-sample task involving the simultaneous priming of an auditory stimulus (an IEC alarm varying in amplitude envelope type) and a visual stimulus (serial presentation of a letter string varying in length: three or seven letters). After the primes, short auditory and visual masks were presented. Finally, either an auditory or a visual stimulus (i.e., the target) was presented, and participants decided whether the prime and the target differed in a two-alternative forced-choice task. Our results show that percussive alarms yield higher recognition accuracy than flat alarms under high cognitive load, without affecting response time. This suggests that percussive alarms reduce cognitive load, allowing attentional resources to be dedicated to other tasks. This work complements previous research conducted by our group demonstrating that percussive alarms are perceived as less annoying than flat alarms. Together, these findings suggest that manipulating amplitude envelope offers a cost-efficient solution to reducing alarm annoyance and improving alarm recognizability.

Subjects: Audiovisual / crossmodal, Memory

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-71: Jazz and Raga: A hierarchical temporal structure comparison

Butovens Médé*(1), Ramesh Balasubramaniam(1), Christopher Kello(1)
1:University of California, Merced

Previous studies have found the underlying temporal structure exhibited in conversation to be similar to that of jazz music. An explanation for this phenomenon is that the temporal patterns in conversation and jazz are similar. Although culturally and perceptually different, jazz and Indian raga share many commonalities. In this study, we compared the hierarchical temporal structure (HTS) of jazz and Indian raga recordings using Allan Factor (AF) analysis. This method converts sound energy into temporally spaced events that correspond to peak amplitudes in the sound. AF analysis quantifies the amount of event clustering across a range of timescales to measure HTS. We analyzed 60 different recordings (20 per genre) of Hindustani Raga, Carnatic Raga and American Jazz. Each genre was subdivided into instrumental-only and vocal-plus-instrumental recordings (10 recordings per subcategory). We also analyzed 10 conversation recordings from the Buckeye Corpus as a reference set. Overall, Hindustani and Carnatic Raga were found to have nested clustering of events very similar to that of jazz music and conversational speech. The highest similarity was seen between instrumental Carnatic Raga and instrumental Jazz, which displayed nearly identical HTS across all timescales measured. Vocal Raga and vocal Jazz both had increased nested clustering at the longer timescales compared to their instrumental counterparts. These results suggest that 1) spontaneous performances, whether in music or speech, may contain features that give rise to a specific HTS “signature” that is culturally and modally independent, and 2) the addition or removal of the voice influences HTS.
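
As a rough sketch of the Allan Factor computation described above (assuming event times have already been extracted from amplitude peaks; the function and variable names are illustrative):

    import numpy as np

    def allan_factor(event_times, window_sizes):
        # AF(T) = E[(N_{i+1} - N_i)^2] / (2 * E[N_i]), where N_i counts the events
        # falling in the i-th non-overlapping window of duration T.
        event_times = np.asarray(event_times, dtype=float)
        duration = event_times.max()
        af = {}
        for T in window_sizes:
            edges = np.arange(0.0, duration + T, T)
            counts, _ = np.histogram(event_times, bins=edges)
            af[T] = np.mean(np.diff(counts) ** 2) / (2.0 * np.mean(counts))
            # AF near 1 indicates Poisson-like (unclustered) events; AF rising with T
            # indicates nested clustering across timescales.
        return af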

Subjects: Cross-cultural comparisons/non-Western music, Computational approach

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-73: The beat processing abnormality in patients with treatment-resistant schizophrenia

Shiori Honda*(1), Ryosuke Tarumi(1), Yoshihiro Noda(1), Karin Matsushita(1), Natsumi Nomiyama(1), Ryo Ochi(1), Sakiko Tsugawa(1), Patrick E Savage(1), Shinichiro Nakajima(1), Masaru Mimura(1), Shinya Fujii(1)
1:Keio University

Background: Amusia is often found in patients with schizophrenia. Although previous studies have suggested a close link between amusia and cognitive dysfunction, the relationship between beat deafness and cognitive dysfunction in schizophrenia has not yet been fully elucidated. Patients with schizophrenia can be subdivided into those with treatment-resistant schizophrenia (TRS) and non-TRS based on antipsychotic treatment response. However, no study has investigated differences in beat-processing ability between TRS and non-TRS. Thus, we aimed to investigate the relationship between beat deafness and cognitive dysfunction in patients with schizophrenia. Methods: Fifty-eight patients with schizophrenia (27 TRS and 31 non-TRS) and thirty healthy controls (HC) participated in this study. To assess beat-processing ability and cognitive performance, we used the Harvard Beat Assessment Test (H-BAT) and the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS), respectively. The H-BAT has subtests assessing beat perception and production abilities. We performed a two-way analysis of variance and post-hoc analyses on the H-BAT measures, with beat processing [perception/production] as the within-subject factor and group [TRS/non-TRS/HC] as the between-subject factor. We conducted partial correlation analyses between the difference between perception and production abilities and cognitive performance, controlling for severity of extrapyramidal impairment as well as chlorpromazine-equivalent dose. Results: The ANOVA showed a significant interaction between beat-processing ability [perception/production] and group [TRS/non-TRS/HC] (F2,64=4.84, p=0.016). Compared with HC, the TRS group had lower beat perception and production ability, while the non-TRS group had lower beat perception ability (F2,84=6.45, p=0.002). There was a correlation between beat-processing ability and RBANS scores in patients with non-TRS (r=0.55, p=0.003). Conclusions: There may be a significant difference in beat-processing ability between TRS and non-TRS, suggesting a close link between beat processing and the pathophysiology of antipsychotic response in patients with schizophrenia. Future studies are needed to clarify the neural underpinnings.

Subjects: Health and well-being, schizophrenia

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-75: Effects of Attentional Focus to Modeled Pitch and Timbre on Pitch Accuracy Among Collegiate Wind Instrumentalists: A Pilot Study

Amanda L Schlegel*(1), D Gregory Springer(2), Ann Harrington(3)
1:University of South Carolina, School of Music , 2:Florida State University, 3:Ball State University

Results from a number of studies (Krumhansl & Iverson, 1992; Melara & Marks, 1990) indicate that pitch and timbre interact, suggesting that attending to one may affect performance or perception of the other. Advanced collegiate musicians have identified various focus of attention (FOA) strategies that they typically use when making judgements about pitch accuracy in performance, suggesting that varying one's focus of attention is common in pitch-matching tasks (Author, 2016). One of the identified strategies was attending to timbral inconsistencies between the provided stimulus and the participant's own performance. The purpose of this study was to determine the effects of focusing attention on the pitch or the timbre of a recorded model on collegiate wind players' pitch accuracy. Collegiate flute majors (approximate N = 15) will warm up and tune their instruments and then be given 30 seconds to study a four-measure excerpt from William Schuman's Chester. After initially playing the excerpt, participants will listen to a recording of an expert flutist performing the Chester incipit. While listening, participants will be instructed to focus their attention either on the pitch across the entire excerpt or on the performer's timbre across the excerpt. Participants will experience both focus conditions and will perform the excerpt after each one. The order of focus conditions will be balanced across participants. Participants will also answer four questions about what they noticed in each condition to better understand which aspects they focused on. Three target pitches will be analyzed per performance, resulting in nine tones for analysis per participant. We will calculate the deviation (in cents) of each performed pitch from the stimulus tone (from the recording) to which it was matched. This cent deviation value (expressed in absolute value) will be the dependent variable. Data collection is in progress.
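
A minimal sketch of the cent-deviation measure described above (the function name and example values are hypothetical):

    import math

    def cent_deviation(f_performed_hz: float, f_reference_hz: float) -> float:
        # 1200 * log2(performed / reference): positive = sharp, negative = flat.
        return 1200.0 * math.log2(f_performed_hz / f_reference_hz)

    # e.g., a performed tone at 443 Hz against a 440 Hz reference is about +11.8 cents;
    # the study analyzes the absolute value of this deviation.
    print(abs(cent_deviation(443.0, 440.0)))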

Subjects: Pitch, attention

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-77: Effects of Internal and External Focus of Attention on Pitch Accuracy Among College Wind Instrumentalists

Amanda L Schlegel*(1), William Melven(2)
1:University of South Carolina, School of Music , 2:University of South Carolina

An external focus of attention (FOA) has been shown to positively affect perception of trained singers' tone quality (Atkins, 2016) and motor skill control (Duke, Cash, and Allen, 2011; Mornell and Wulf, 2019), but to result in more errors in woodwind performance scenarios (Stambaugh, 2017). Atkins (2018) investigated specific aspects of vocal tone (resonance, intonation, and timbre) as a consequence of varying participants' focus of attention, and differences were observed between internal and external focus of attention conditions. No study has examined the effects of varying one's focus of attention on the components of instrumental tone. The purpose of this study is to determine the effects of internal and external focus of attention on the pitch accuracy of selected tones performed within the context of ascending and descending Bb-concert scales by college trumpet players. Participants will play these scales under varying FOA strategies. In the internal FOA condition, participants will be instructed to "play the scale, as notated, with warm air." In the external FOA condition, participants will be instructed to "play the scale, as notated, with a warm sound." In the neutral FOA condition, participants will be instructed to "play the scale, as notated." Ascending scales will begin on concert Bb3 (written C4) and end on concert Bb4 (written C5). Descending scales will begin on concert Bb4 (written C5) and end on concert Bb3 (written C4). Participants will play the scales six times (two scales across three FOA conditions). The first and last tone in each condition will be analyzed, for a total of 12 tones per participant. Mean cent deviation from equal-temperament standards for Bb3 and Bb4 will serve as the dependent variable. Data collection is in progress.

Subjects: Pitch, attention

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-79: Exploring the Structure of German Folksong

Andrew W Brinkman(1)
1:Ohio State University

In an attempt to uncover the relationship between music and cultural groups, scholars such as Ling (1997) and Eerola et al. (2001) have found that the defining musics of larger cultural groups (e.g., national groups typically defined by geopolitical divisions) contain features that make them distinctly different from one another. While these findings are interesting on their own, other related work suggests that differences in musical features are present even at the subnational or regional scale. Although scholarly interest in the field of cultural music analysis (or "folk music," as it is more commonly encountered) and its unique qualities has increased considerably over the past decade, more work in Europe is necessary. This study addresses the above problem in two parts: by identifying influential musical features of German folksong, and by providing a working definition of German folksong structure based on features present across regional divisions. The researcher systematically examined some 6,000 German folksongs from the Essen Folksong Collection (Schaffrath, 1995) using the Humdrum Toolkit (Huron, 1994), looking for, and tallying instances of, nearly 20 musical features. These included features such as overall pitch content, specific note-to-note transitions, and measurements of rhythmic variability. The researcher then tested for correlations between these features and longitudinal coordinates spanning North and South Germany in order to determine whether some features might be unique to specific regions. Some of the most prevalent features include the presence of large leap intervals (r = .12, p = .02), scale-degree transitions outside of the expected norm (r = -.42, p = .01), and varying levels of tonal stability (r = .11, p < .001). As a whole, the results suggest that German folksong contains some highly interesting pitch content and that some differences in feature content exist between regional divisions.

Subjects: Music theory, Computational approach; Corpus analysis/studies; Cross-cultural comparisons/non-Western music; Harmo

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-81: Single, double, and triple finger tapping performance of professional hand percussionists

Kazuaki Honda*(1), Patrick E Savage(1), Shinya Fujii(1)
1:Keio University

Background: The manual skills of professional musicians can be regarded as a model for investigating human motor skill acquisition after prolonged practice. A previous study showed that pianists were able to perform single-finger tapping faster than non-pianists, especially with the ring finger (Aoki et al., Motor Control, 2005). Double-finger tapping by pianists was also faster than that of non-pianists. Although the tapping skills of drummers have been investigated previously (Fujii et al., Neurosci Lett, 2009), the finger coordination skill of hand percussionists is still largely unknown. In this study, we investigated the finger tapping skill of professional hand percussionists (Darbuka players). Methods: Eight professional percussionists and eight amateur percussionists participated in this study. All participants were right-handed. They were asked to perform finger tapping as fast as possible for 12 seconds. There were eleven finger tapping tasks: four single-finger tapping tasks (i.e., each of the left-index, left-ring, right-index, and right-ring fingers), six double-finger tapping tasks (i.e., all combinations of the above four fingers), and one triple-finger tapping task (i.e., right-index, left-index, and left-ring finger coordination). Each task was repeated three times. The mean tapping frequency was calculated for each trial and averaged over the three trials. Results: For the single-finger tapping tasks, there was no significant difference between the professional and amateur percussionists. For the double- and triple-finger tapping tasks, professional percussionists performed faster than amateur percussionists (P < 0.05). Conclusion: The professional percussionists showed faster performance in the double- and triple-finger tapping tasks but not in the single-finger tapping tasks compared with the amateur percussionists. The results suggest that professional percussionists have acquired organized finger coordination skills through prolonged practice.

Subjects: Performance, Musical expertise

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P3-83: Pivot chords as harmonic garden paths: Cognitive revision from key change

Sami Alsalloom*(1), Tim Bausch(1), Tommy Kan(1), Kyle Douglas(1), Gregory Moreno(1), Harini Pathak(1), Heather Cardoz de la Torre(1), Michelle McKee(1), Janet Bourne(1)
1:University of California, Santa Barbara

The close connection between music and language allows us to examine the boundaries between two cognitive domains. Western tonal harmony has long been examined using linguistic frameworks (Lerdahl and Jackendoff, 1983), and the extent to which this comparison holds has been the subject of much investigation. Experiments by Slevc, Reitman and Okada (2013) showed that unexpected harmonic movement caused a strain on cognitive control that was compounded by a verbal Stroop task. Vuong and Martin (2013) found that the verbal Stroop effect is correlated with the revision process caused by a linguistic garden path, whereas a non-verbal Stroop effect is not. It is therefore promising to know that a verbal Stroop effect causes interference in musical processing. A missing link in the conversation is whether a revision process occurs with pivot chords. While a pivot chord would be less jarring than the harmonic movement used in the experiments by Slevc et al. (2013), such a finding would corroborate the claim that listeners sense musical grammar and must revise their sense of tonality when the key changes unexpectedly at a point of ambiguity. This study uses a 3 (Color Stroop: Congruent, Incongruent, Neutral) x 3 (Modulation: None, Pivot, Direct) x 2 (Final Cadence: Tonic or Non-Tonic) design. Participants solved Stroop tasks while listening to 72 chorales, and accuracy and reaction time were measured. In a pilot experiment, the findings of Slevc et al. (2013) were replicated, showing that unexpected harmonic movement causes a significant latency in the Stroop task. In chorales containing no modulation or a direct modulation, the difference in Stroop task performance was significant. Surprisingly, in chorales containing a pivot chord, the effect of a non-tonic ending did not significantly differ from that of a tonic ending. Performance at the realization of the pivot chord was tested in a follow-up experiment. The results and conclusions regarding the pivot chord will be discussed in detail in the poster.

Subjects: Music and language, Harmony and tonality

When: 10:30-11:45 AM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

Poster session P4

11:45 AM-1:00 PM in Rosenthal

P4-2: Musical Movement Quality and Psychomotor Development in Preschool Children

Michał Kierzkowski*(1), Katarzyna Kierzkowska(1)
1:The Stanislaw Moniuszko Academy of Music in Gdansk

BACKGROUND The literature reveals correlations between the individual movement style of the human being and variables such as personality, emotional empathy, depression, anxiety level and social relations. The most promising theoretical model of a deepened description of movement seems to be the Laban/Bartenieff movement analysis. The results of numerous studies based on that framework suggest that the quality of movement measured under musical conditions can be a vital factor in diagnosing adults and a valuable indicator of the psychomotor functioning of children. AIMS It seems reasonable to assume that there are relations between musical movement quality and numerous spheres of children’s psychophysical functioning. Detailed musical movement indicators of individual dimensions during music performance, such as body awareness, intensity and creativity, can be reflected in the general psychomotor development of the child. METHOD A correlational study was conducted. The randomly selected participants (N=30) were examined according to the Music and Movement Dimension Measuring Scale of R. Laban, the Education Readiness Scale for 5-year-olds and the Set of Diagnosing Methods for Psychomotor Development in 5- and 6-year-old Children. Musical movement quality was examined using musical examples, which were the base and inspiration to perform movement tasks. RESULTS The exploratory research indicates strong correlations between musical movement and selected spheres of a child’s psychophysical functioning as well as characteristic movement patterns in relation to a child’s specific activities, such as graphomotor and self-care skills, verbal-linguistic functions, spatial orientation and control of emotions. CONCLUSION The research is a source of information about the relations between musical movement and general development skills in preschool-aged children. It can provide important guidance for an experienced observer in pedagogical practice. Awareness of the described correlation can enable a more elaborate approach to musical movement development, cognition and education.

Subjects: Music and movement, Music and development

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-4: Music and Visual Images: A Study of Selected Paintings of Bolaji Ogunwo

Florence E Nweke*(1), Bolaji Ogunwo(1)
1:Department of Creative Arts, Faculty of Arts, University of Lagos, Nigeria

When viewing an artwork, we tend to engage only with our eyes; yet for a musician or an audience in the creative arts space, much more could be attained if those images were also experienced through sound. An adage from the Igbo culture of southeastern Nigeria affirms: 'I want to go and see the music.' This suggests that music can be seen through the visual senses, which in turn implies the possibility of seeing music and hearing pictures. This study externalized the musical inclinations inherent in selected paintings of Bolaji Ogunwo: the body of music associated with the paintings, and the ways in which movement and rhythm characterize their form and content. Respondents were asked to interpret the paintings using musical imagination; what went on in respondents' minds while viewing these paintings musically was recorded and analysed. The paintings included "Change", "Nation Building", "One Love", "No Lower than Here", "Sound of Victory", and "Arise O Compatriot". Some of the findings, as imagined by students, included the following for "One Love": a serene linear melody rendered softly on the flute, like a lullaby, with soft piano accompaniment; the painting was heard as clearly depicting peace, quietness and beauty. After this contact with Ogunwo's paintings, respondents were ushered into the realm of musical imagination, and their responses were collected through structured questionnaires and a series of interviews. Thirty students whose participation met the requirements of an introductory music psychology course in the Department (Music Unit) participated. Each participant received 10 different paintings representing different happenings in society. If there is anything the selected paintings mirror, it is not the painted images themselves but the musical experiences (imaginations) of the respondents, which were documented; these transcend the content of whatever has been painted. It was evident that both the paintings and the responses from the music students depict unity within the arts. The implication of the study is that the arts express life: music can be used as a means of expression for the visual arts, and this could foster harmony among art practitioners and breed creative exchange between these fields of art.

Subjects: Harmony and tonality, Emotion; Not Listed

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-6: Music rhythm processing reflected in the autonomic nervous system

Tian Zhao(1)
1:University of Washington

Humans start responding to music before birth, and these earliest responses have been observed in the autonomic nervous system (ANS) by examining cardiac activity (Kisilevsky et al., 2004). Research has also demonstrated repeatedly that music therapy in the NICU can benefit infants in many ways, including ANS function (Standley, 2012). One particular measure, heart rate variability (HRV), has received increased attention as a way to reflect aspects of ANS function, and it has also been linked to psychological factors such as cognitive and emotional skills (Laborde, Mosley & Thayer, 2017). However, whether and how the ANS responds to specific aspects of music (e.g., rhythm) has not been examined. In this study, we aimed to systematically examine how the ANS, as reflected by HRV, responds to different musical rhythms. Fifteen adults with varying music training backgrounds participated in the study. Their electrocardiogram (ECG) was recorded while they sat quietly in a sound-treated booth and listened to three conditions of sounds (5 minutes each): 1) randomly timed notes, 2) duple-metered beats (i.e., strong-weak-strong-weak), and 3) triple-metered beats (i.e., strong-weak-weak), in addition to a baseline measurement. HRV for each condition was calculated as the root mean square of successive differences (RMSSD) in the intervals between peaks in cardiac activity (i.e., QRS complexes). We compared the ΔHRV (i.e., change from baseline) across the three conditions using a repeated-measures ANOVA while controlling for participants' music background. Results revealed a significant effect of condition: ΔHRV was significantly more negative in the triple-meter condition than in the random condition, whereas there was no difference between the duple and random conditions. In addition, baseline HRV was not correlated with years of music training. Interpretation and implications of the results will be discussed in relation to the theoretical backgrounds of both HRV and rhythm-processing research.
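
A minimal sketch of the RMSSD computation described above (assuming R-R intervals have already been extracted from the detected QRS peaks; the function and variable names are illustrative):

    import numpy as np

    def rmssd(rr_intervals_ms):
        # Root mean square of successive differences between R-R intervals (ms).
        rr = np.asarray(rr_intervals_ms, dtype=float)
        return np.sqrt(np.mean(np.diff(rr) ** 2))

    def delta_hrv(condition_rr_ms, baseline_rr_ms):
        # Change from baseline, as used for the condition comparison in the abstract.
        return rmssd(condition_rr_ms) - rmssd(baseline_rr_ms)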

Subjects: Beat, rhythm, and meter, Physiological measurement

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-8: Singers’ Gaze Fixation While Performing with a Conductor: A Pilot Study

Steven M Demorest*(1), Adam White(1)
1:Northwestern University

Background Choral conducting has been found to affect chorister movement and muscle tension by manipulating the conductor’s left hand (Fuelberth, 2004), preparatory gestures (Manternach, 2012), or facial lip-rounding (Daugherty & Brunkan, 2013). Yet conducting textbooks tend to focus only on hand gestures as the primary vehicle for musical communication (Durrant, 2003; Neuen, 2003). It is unclear where singers look to get information from a conductor. Purpose The purpose of this pilot study was to track where singers look to receive information from a conductor during a performance. The following research questions were addressed: (a) Where do choristers look when singing from memory with a conductor? (b) Does their visual focus change depending on the musical instructions given? The findings have implications for our understanding of the role of visual information in music performance, the nature of non-verbal communication in a musical setting, and conducting pedagogy. Method Participants (n=8) stood nine feet in front of a life-sized projected video stimulus. Following fitting with the eye-tracking apparatus (Pupil-Labs, rev 021, 120Hz monocular IR camera with dark pupil tracking) and calibration using screen marker calibration, participants were asked to sing from memory short excerpts (30 seconds) from three randomly ordered songs (America the Beautiful, Danny Boy, and Shenandoah) while following the video conductor stimulus. Participants were given (a) no instruction (control), (b) an instruction to sing expressively (expressive), and (c) an instruction to sing with rhythmic accuracy (rhythmically accurate). Song selection and the expressive and rhythmically accurate conditions were randomized for order. The eye-tracking apparatus was re-calibrated following each sung excerpt. Conductor surfaces (face, left hand, and right hand) were defined using surface trackers. Data were collected with a MacBook Air using open-source Pupil-Capture software. Results and Implications A 3x3 repeated-measures ANOVA was conducted to compare the proportion of frames on the three conductor surfaces in each condition. Significant differences were found by conductor surface, F(2, 14) = 5.730, p = .045, with face (M=.570, SD= .294) and right hand (M=.316, SD= .231) receiving a greater proportion of frames than left hand (M=.114, SD= .078). No significant differences were found by condition or song. Results suggest choristers received the majority of their information by gazing at the conductor’s face regardless of condition. Data collection is ongoing and we plan to measure at least ten more participants by the conference. References available upon request.

Subjects: Audiovisual / crossmodal, Music and movement; Physiological measurement

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-10: Music to facilitate sleep: Do musical characteristics matter?

Renee Timmers*(1), Tim Metcalfe(1), Franziska Goltz(2), Maan van de Werken(3)
1:University of Sheffield, 2:Radboud University Nijmegen, 3:BrainTrain2020 Ltd.

Music is frequently used as a self-aid to support sleep, whether using self-selected or commercially available sleep music. Empirical studies investigating the effectiveness of music to assist sleep show varying but promising results, including improvements in sleep measures, such as sleep onset latency and sleep quality, in music compared to control conditions. This study continues this line of research by investigating what musical characteristics may contribute to a sleep-facilitating effect. The study consisted of two phases. In Phase 1, common characteristics were analysed for over 25,000 songs that had references to ‘sleep’, ‘insomnia’, etc. The results were used to generate three types of music: 1) ‘sleep typical music’, 2) ‘sleep atypical music’, and 3) ‘sleep typical music’ including a number of enhancements developed in collaboration with the company SleepCogni. In Phase 2, the effect of using each of the three types of music was tested on various sleep measures, using a between-participants design. Two samples, of university staff and students respectively, were asked to listen to the assigned music on three consecutive nights and report their sleep quality the next morning. The results of Phase 1 showed systematic differences between sleep music and UK Top 10 popular music in the use of mode, event density, pulse salience, and spectral energy. The results of Phase 2 confirmed more positive evaluations of the ‘sleep typical’ music (music 1 & 3) than the ‘sleep atypical’ music (music 2) in terms of liking, tempo, relaxation, and being helpful for sleep. The effects on sleep measures were variable, showing a benefit of the type of music specially designed for this study (music 3); however, this benefit was not significant in both samples. This study demonstrates specific characteristics of sleep music, and indicates ways to further develop properties to optimise sleep facilitation.

Subjects: Health and well-being, Psychoacoustics

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-12: Specialized high-level processing of speech and music revealed with EEG

Nathaniel J Zuk*(1), Emily Teoh(1), Edmund Lalor(2)
1:Trinity College Dublin, 2:University of Rochester

While there are clear spatial differences in neural activity for speech and music (Norman-Haignere et al., 2015, Neuron, 88:1281-1296), the temporal responses are not well understood, and it is not clear if the temporal responses are unique for speech and music. We hypothesized that neural responses measured with electroencephalography (EEG) may capture unique and discriminable responses to speech and music stimuli resulting from high-level processing. Subjects listened to 30 different two-second-long sounds, including speech, music, and other environmental sounds. Using linear discriminant analysis to classify the two-second EEG responses to each sound, we found that the speech and music sounds, in addition to impact sounds, produced higher classification accuracies than all other environmental sounds. Separately, we repeated this experiment using model-matched versions of the speech, music, and impact sounds by resynthesizing the sounds using a model of low-level processing with identical spectrotemporal statistics to the originals (McDermott & Simoncelli, 2011, Neuron, 71:926-940). Model-matched impact sounds were classified identically to their original counterparts, showing that the EEG responses were dominated by the processing of low-level statistics. In contrast, model-matched music and speech sounds were classified worse than the originals. While classification of speech and music was best between 200-400 ms of the EEG response, music classification was significantly better than the classification of model-matched music sporadically throughout the two-second stimulus. Our study demonstrates that EEG captures temporally unique responses to speech and music more strongly than other environmental sounds. Furthermore, the unique responses are dominated by high-level processing in the brain. These results highlight the importance of using naturalistic sounds when using EEG to study the neural processing of speech and music in humans.
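
The classification step described above can be sketched as a generic cross-validated linear discriminant analysis; the data below are simulated, and the feature dimensions, trial counts, and preprocessing are assumptions for illustration rather than the authors' pipeline.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # hypothetical data: one feature vector per trial (e.g., a flattened channel-by-time EEG response)
    X = rng.standard_normal((300, 128))          # 300 trials x 128 features
    y = np.repeat(np.arange(30), 10)             # labels: which of the 30 sounds was heard (10 trials each)

    clf = LinearDiscriminantAnalysis()
    accuracy = cross_val_score(clf, X, y, cv=5).mean()   # cross-validated classification accuracy
    print(round(accuracy, 3))                            # ~chance (1/30) for purely random data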

Subjects: Neuroscientific approach, Computational approach; Language and speech

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-14: Pop melodies have become more repetitive throughout the Billboard era

Joshua Albrecht(1)
1:The University of Mary Hardin-Baylor

Motivation Using the Lempel-Ziv algorithm (1977), Morris (2017) demonstrated that pop song lyrics have become increasingly repetitive over time, as measured by text compression ratio, and that the top 10 songs were more repetitive than the rest of the dataset. Symbolic music encodes musical data as text, which permits compressing the information using the same method. This study tests the hypothesis that pop song melodies have similarly become increasingly repetitive over time, as measured by compression ratio, and that top-10 melodies are more repetitive than less popular songs. Methodology/Dataset This study tests its hypothesis using a new dataset: MIDI encodings of the top 20 Billboard year-end popular songs from 1956-2018 scraped from the web. Using Humdrum, melodies are extracted from complex textures, turned into text strings, and subjected to the Lempel-Ziv algorithm, resulting in a percentage of the file compressed for each song. Results Data have been collected but are still being curated; therefore, no results can be reported yet. However, I hypothesize that compression percentage will increase as a function of year, and that top-10 songs will demonstrate a greater compression percentage than the songs in the 10-20 range. Implications Positive results could indicate a growing homogeneity of melodic expression over time. On the contrary, negative results may indicate a transfer of complexity from lyric to melody.
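
As an illustration of the compressibility measure, the sketch below computes a compression ratio for a melody encoded as a text string. It uses zlib's LZ77-based DEFLATE as a stand-in for the authors' exact Lempel-Ziv implementation, and the pitch:duration token format is hypothetical.

    import zlib

    def compression_ratio(token_string: str) -> float:
        """Proportion of the encoding removed by LZ77-based compression (higher = more repetitive)."""
        raw = token_string.encode("utf-8")
        compressed = zlib.compress(raw, 9)
        return 1.0 - len(compressed) / len(raw)

    # hypothetical melodies encoded as text strings of pitch:duration tokens
    repetitive = "C4:8 D4:8 E4:4 C4:8 D4:8 E4:4 G4:4 E4:4 " * 8
    varied     = "C4:8 D4:8 E4:4 F4:8 G4:8 A4:4 B4:4 C5:2 D5:8 B4:8 G4:4 E4:8 F4:8 D4:4 C4:2 " * 2

    # the repetitive melody compresses further, i.e. has the higher ratio
    print(round(compression_ratio(repetitive), 2), round(compression_ratio(varied), 2))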

Subjects: Corpus analysis/studies, Computational approach

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-16: Aesthetic responses to microtonal intervals

Meng-Jou Ho*(1), Rei Konno(1), James Tomokane(1), Josh McDermott(2), Nao Tokui(1), Shinya Fujii(1), Patrick E Savage(1)
1:Keio University, 2:Massachusetts Institute of Technology

Musical pitches are one of the core building blocks of almost all of the world’s music (Savage et al., 2015, PNAS). In many cultures, different relationships between pitches are associated with different aesthetic responses. For example, in most Western music, consonant intervals based on simple integer ratios (e.g., octaves, perfect fifths) are perceived as pleasant while dissonant intervals based on more complex ratios (e.g., 2nds, 7ths) are perceived as unpleasant (McDermott et al., 2010, Current Biology). However, there remains limited experimental data for intervals outside of the standard 12-note Western chromatic scale. Here we present experimental data on aesthetic responses to all possible interval dyads within a single octave based on a quartertone (50-cent) 24-note equal-tempered scale, including both melodic and harmonic intervals. Pilot data from four Japanese participants suggest that for both melodic and harmonic intervals, microtonal intervals are perceived as less pleasant than chromatic intervals (melodic: t = 2.5, p = .02; harmonic: t = 3.9, p = .0008) and that aesthetic ratings are correlated with the degree of harmonic similarity between overtone spectra (melodic: r = .68, p = .0002; harmonic: r = .42, p = .04). These results provide tentative support for the role of general psychoacoustic harmonicity principles in shaping aesthetic preferences to music, with the caveat that these pilot data come from a small sample with extensive exposure to Western musical systems. By SMPC we plan to collect additional data from Japanese participants both with and without extensive training in Japanese traditional music to investigate the role of musical experience in shaping aesthetic responses to musical intervals.

Subjects: Harmony and tonality, Aesthetics / preference; Cross-cultural comparisons/non-Western music; Pitch

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-18: Music and cooperation: Disentangling causal mechanisms

Momoka Yamauchi*(1), Miri Hamaguchi(2), Aya Kato(2), Yoichi Kitayama(2), Shinya Fujii(2), Patrick E Savage(2)
1:Keio University, 2:Keio University

Throughout human history, music has been used to boost the cohesiveness of groups. From sacred songs in religious rituals, to marching in armies, to dancing at rock concerts, music has been used to bring a sense of unity to diverse groups. Several previous experimental studies have suggested that rhythmic synchronization is the key mechanism by which music facilitates cooperation (Wiltermuth & Heath, 2009, Psychological Science; Reddish et al., 2013, PLOS ONE; Mogan et al., 2017, J. Experimental Social Psychology). However, these experiments were not preregistered, had small effects, and did not dissociate the effects of rhythm, melody, lyrics, and movement. Thus, if and how music has a causal effect on cooperation remains poorly understood. We performed a (non-preregistered) pilot study tentatively confirming that music (singing “Twinkle Twinkle Little Star” in pairs) can cause a slight increase in cooperation as measured through an economic task, relative to a null control (n = 103 participants, t = 2.1, p = .03). By SMPC we plan to perform additional pilot experiments using a more nuanced paradigm comparing a null control against the sequential addition of 1) lyrics, 2) rhythm, 3) melody, 4) movement, and 5) large groups. We predict that each of these factors will result in a sequential increase in cooperation, but that the increase will be strongest for rhythm. Based on these pilot experiments and feedback from SMPC we plan to conduct a full preregistered experiment to disentangle the multidimensional mechanisms by which music may facilitate cooperation.

Subjects: Cross-domain effects, Beat, rhythm, and meter; Music and movement; Music and society

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-20: Protest songs’ framing and their effect on empathy

Naomi Ziv(1)
1:College of Management – Academic Studies

Motivation Popular music is created within social contexts and reacts to ongoing events. In societies involved in ongoing conflict, protest songs express messages opposing national policies or acts. One of their aims is to elicit emotions, influence attitudes, and move people towards action. However, the call for change may be framed positively (peace songs, emphasizing intergroup similarities) or negatively (anti-war songs, criticizing the ingroup). Framing has been shown to influence emotions and attitudes. The aim of the present research was to examine whether positive or negative framing of protest messages through songs may influence emotions and empathy towards an outgroup member. Methodology Two online studies were conducted in Israel. In Study 1, 139 participants heard either a “positive” or “negative” protest song while viewing its lyrics and rated the positive and negative emotions it elicited. They then read a fictitious story concerning a Palestinian woman in an Israeli blockade who wishes to cross into Israel in order to visit a sick relative but does not have the required documents. Political orientation and empathy towards the woman were measured. Study 2 repeated the procedure with 111 participants, but participants were only exposed to the songs’ lyrics. Results of both studies show that framing influences emotions (with positive framing leading to more positive emotions and vice versa). Study 1 shows that, beyond political orientation, emotions and framing interact in predicting empathy, with negative emotions significantly adding to explained variance only for the negative-frame song. In Study 2, only political orientation predicted empathy. Implications The research suggests two conclusions: first, that the effect of protest songs is not solely explicable by the lyrics, and that their being expressed through music has a significant effect; second, that to the extent that songs elicit emotions, only negative emotions elicited through negative framing may affect feelings towards outgroup members.

Subjects: Music and society, Emotion

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-22: How Electrical Muscle Stimulation Assists in Rapid Drumming Training

Reo Anzai*(1), Rei Konno(1), Kazuaki Honda(1), Patrick E Savage(1), Pedro Lopes(2), Shinya Fujii(1)
1:Keio University, 2:University of Chicago

Expert drummers exhibit virtuoso skills such as rapid tapping with a drumstick. As a reference, the winner of a contest to find the world’s fastest drummer (WFD) performed a record of about 1200 taps with the hands in a single minute [Fujii et al., Neurosci. Lett., 2009], which equals 10 Hz movement with each hand over the 60 s. Acquiring such very rapid drumming skill is laborious for untrained individuals and beginner-level drummers. While previous studies showed that augmented feedback modalities (such as auditory feedback [Fujii et al., Front. Neurosci., 2016]) can boost the motor learning experience, we believe that an active approach, i.e., one that directly stimulates the learner’s muscles, can provide even more benefits. As such, we turned to Electrical Muscle Stimulation (EMS) as a means to assist in learning very rapid drumming. We present an exploratory study in which we trained three participants who were new to drumming. One participant was trained using an EMS system we developed that contracted the participant’s wrist flexor at 10 Hz (the same speed as the WFD record), while the other participants formed our control group and performed unassisted training. On the first day, prior to the start of training, we measured participants’ drumming speed using the standard 60-s drumming task (the same as the official WFD contest). Participants then trained for 3 days, performing 48 trials per day; on each day, participants practiced by alternating hands, one hand at a time for 10 s (with 15 s rest). On the last day, we measured their drumming speed on the 60-s task again. Our results show that individuals in both the EMS and control conditions improved their drumming speed. In particular, we found that the participants who trained with EMS showed (1) more symmetrical drumming between hands and (2) more stable inter-tap intervals compared to the control group.

Subjects: Embodied cognition, Memory

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-24: Musical Training Mediates the Relation Between Auditory Working Memory and Preference for Musical Complexity

Ethan Simon(1), David J Baker(2), Elizabeth Monzingo(3), Emily Elliott(2), Dominique T Vuvan(4)
1:Skidmore College, 2:Louisiana State University, 3:Ohio State University, 4:Skidmore College & International Laboratory for Brain, Music, and Sound Research

Previous research indicates that musical training is associated with improved auditory working memory as well as an increase in preference for musical complexity (Bugos et al., 2007; Burke & Gridley, 1990; Lu & Greenwald, 2016; Przysinda et al., 2017; Ramachandra et al., 2012). Relatedly, work in the visual domain suggests that appreciation is increased when the visual complexity of an artwork is compatible with the viewer’s visual working memory capacity (Sherman et al., 2015). We therefore hypothesized a novel mediation model in which auditory working memory (AWM) mediates a relation between musical training (MT) and preference for musical complexity (PMC). We collected data on these three variables from a sample of n = 251 as part of a larger study (Elliott, Ventura, Baker, & Shanahan, submitted). MT was quantified using the musical training subscale of the Goldsmiths Musical Sophistication Index (Mullensiefen et al., 2014), AWM was quantified using a tone span task (Elliott et al., submitted), and PMC was quantified using the Reflective and Complex dimension from the Short Test of Music Preferences (Rentfrow & Gosling, 2003). In accordance with previous studies, there were significant positive pairwise correlations among all three variables of interest. However, auditory working memory did not significantly mediate the relationship between musical training and preference for musical complexity. Rather, exploratory analyses indicate evidence for a model in which musical training mediates the relation between auditory working memory and preference for musical complexity. The current study refines our understanding of the association between complex skills training and cognitive outcomes.
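
The abstract does not specify how the mediation was estimated; for readers unfamiliar with the analysis, a minimal product-of-coefficients sketch with a percentile bootstrap, run on simulated stand-in data, is given below.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)

    # simulated stand-ins for the three measures (n = 251): AWM -> MT (mediator) -> PMC
    n = 251
    awm = rng.standard_normal(n)
    mt = 0.5 * awm + rng.standard_normal(n)
    pmc = 0.4 * mt + 0.1 * awm + rng.standard_normal(n)

    def indirect_effect(x, m, y):
        a = LinearRegression().fit(x.reshape(-1, 1), m).coef_[0]           # path a: x -> m
        b = LinearRegression().fit(np.column_stack([m, x]), y).coef_[0]    # path b: m -> y, controlling for x
        return a * b

    # percentile-bootstrap confidence interval for the indirect (mediated) effect
    boot = []
    for _ in range(2000):
        i = rng.integers(0, n, n)                  # resample participants with replacement
        boot.append(indirect_effect(awm[i], mt[i], pmc[i]))
    print(np.round(np.percentile(boot, [2.5, 97.5]), 3))   # a CI excluding 0 indicates significant mediation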

Subjects: Musical expertise, Memory

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-26: The Roles of Contrast and Enculturation in the Generation of Musical Narratives

Lucas Bellaiche*(1), Elizabeth Margulis(1), Devin McAuley(2)
1:University of Arkansas, 2:Michigan State University

One burgeoning topic of interest in music psychology is the formation of perceived narratives in response to wordless music. Recent studies show that participants who listen to music with high contrast (like music from the late 19th and early 20th century) form musical narratives more readily than participants who listen to music with little contrast (e.g., minimalist and Baroque music) within the piece (Margulis, 2017). According to the prevailing theory, expectation violations presented by contrasting material are made sense of by ascribing a story (Huron & Margulis, 2010). Enculturation, where sound patterns come to be associated with external referents, is another important factor in musical narrativization. This study adapts excerpts used in previous research to systematically vary the degree of contrast and the degree of topical association, allowing a controlled investigation of the factors that give rise to perceived narratives. Contrast and topical association are measured by specially devised algorithms as well as expert ratings. Participants hear 12 such excerpts. After each excerpt, listeners complete a previously developed Narrative Engagement Scale (NES) and provide a free response description of any story they perceived. Results (both scores on the NES and free response story characteristics as autocoded by content coding software) are analyzed using contrast (high or low) and topical association (high or low) as the factors, allowing insight into whether structural features in the acoustic signal or topical associations that arise through enculturation are the primary drivers of musical narrativization. This research sheds new light on the relationship between music and language, and on the question of how music comes to acquire meaning within cultural groups.

Subjects: Music and language, Cross-cultural comparisons/non-Western music

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-28: IS PARTICIPATION IN MUSIC FESTIVALS A SELF-EXPANSION OPPORTUNITY? IDENTITY, SELF-PERCEPTION, AND THE IMPORTANCE OF MUSIC’S FUNCTIONS.

Rafał Lawendowski(1)
1:Department of Social Sciences, University of Gdansk

Participation in music festivals could be an important facet of developing communal relations with others. Nevertheless, little is known about how music festivals impact a person’s perception of music’s functions or what kinds of predictions regarding music functions could be made based on identity mechanisms shaped within a festival community. A correlational study was conducted among attendees of three music festivals in Poland (N = 828). The main goal was to examine how functions ascribed to music are related to (a) a feeling of being united with other attendees, (b) the perception of being independent from or (c) interdependent with other attendees, and (d) a feeling of self-growth resulting in self-expansion. Participants completed a questionnaire consisting of scales measuring self-construal, identity fusion, self-expansion, and functions of music. Using structural equation modelling, we showed the following. First, people who feel stronger connections and experience more personal relationships with other attendees report a stronger feeling of self-growth during music festivals and ascribe more importance to the social functions of music. Second, a strong, direct relationship exists between independent self-construal (i.e., an individualistic view of the self as autonomous from other people) and the self-awareness function of music, as well as between interdependent self-construal (i.e., a more collectivistic view of the self as embedded in the group and community) and the social function of music. Finally, the results of the mediation analysis of self-expansion for the relationships between different aspects of self and the functions of music indicated that self-expansion is a statistically significant partial mediator of these relationships for the social and self-awareness functions of music but not for the emotional function. That is, participants who experienced changes in self-construct related to self-growth and self-development from their participation in a music festival used music to facilitate self-awareness and social relatedness.

Subjects: Music and society, Music and development

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-30: Effect of prime variability on harmonic priming in rock and classical contexts

Rachel Chang(1), Bryn Hughes(2), Dominique T Vuvan(3)
1:Skidmore College, 2:The University of Lethbridge, 3:Skidmore College & International Laboratory for Brain, Music, and Sound Research

Motivation Previous experiments have demonstrated that different musical styles elicit different structural expectancies, indicating a greater preference for the V-I progression over the bVII-I progression in classical music than in rock music (Vuvan & Hughes, 2019). Follow-up experiments have shown that participants respond faster and more accurately to V-I than bVII-I in classical, but not rock contexts (Vuvan & Hughes, Psychonomics 2018). Whereas these experiments primed the classical or rock styles using a single style-defining excerpt, the current study used a variety of musical excerpts as style primes, with the goal of increasing the generalizability of previous findings. In a tuning judgment task, we predicted that listeners would respond faster and more accurately for stylistically congruent chords than for stylistically incongruent chords. Methodology Trials were presented in two blocks (classical vs. rock). On each trial, participants heard one of eight style primes, followed by a two-chord progression. This progression was either stylistically congruent (e.g., V-I in classical and rock) or stylistically incongruent (e.g., bVII-I in classical). Participants were asked to make a tuning judgment on the first chord in the two-chord progression. Results/Implications Unexpectedly, in contrast to previous experiments, there was no interaction between style and progression. Specifically, response times and judgment accuracy were equivalent for V-I and bVII-I progressions in the classical and rock styles. Future work will include item analyses to determine how the larger variety of primes affected listeners’ expectations differently from the single prime used in previous experiments.

Subjects: Harmony and tonality, Expectation

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-32: How do you feel the beats: An EEG study of beat imagination

Tzu-Han Cheng*(1), John Iversen(1)
1:University of California, San Diego

Listeners can perceive the metrical structure of rhythms guided by physical features, such as accenting, or by internally generated interpretation. Past research suggests the hypothesis that beat perception depends on interactions between auditory and motor systems. To test this, we used EEG to measure causal connectivity between motor and auditory components during beat listening, imagining, and production. Trials had three listening phases with 12 bass drum strokes played with a 2.4 Hz period. Control: 12 unaccented sounds; Physical Beat: accents every two or three drum strokes (+10.5 dB rms), creating a duple or triple meter; Imagined Beat: participants instructed to subjectively impose the same meter on unaccented sounds. Finally, the sound stopped and they tapped the imagined meter for verification. After preprocessing, independent components analysis was run on each participant. Motor and auditory independent components (ICs) were identified as those accounting for maximum variance of separate auditory and motor localizer evoked responses. Preliminary results (N=6): Frequency domain analysis for the auditory ICs displayed a peak at 2.4 Hz corresponding to the rate of the sounds. A peak corresponding to the beat rate (1.2 Hz in duple; 0.8 Hz in triple meters) was present in the Imagined Beat condition, and was higher than Control (duple: t(5)=-2.57, p=0.05; triple: t(5)=-2.34, p=0.07), suggesting a top-down influence of imagined meter on auditory responses. Directional causality analysis revealed two-way causal flow between motor and auditory ICs in both alpha (8–12 Hz) and beta bands (13–30 Hz) in all conditions, though with large individual differences. Mean causal flow from motor to auditory ICs was significantly higher than Control (t(5)=-3.39, p=0.02) in the alpha band in the duple Physical Beat condition. These preliminary results are broadly consistent with top-down modulation and two-way coordination between motor and auditory regions during rhythm perception, favoring an embodied view of active perception.
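
The frequency-domain analysis can be illustrated with a toy sketch: an amplitude spectrum of a simulated auditory-IC time course is read out at the stimulus and beat rates. All signal parameters below are invented for illustration; this is not the authors' analysis code.

    import numpy as np

    fs = 256.0                                    # hypothetical EEG sampling rate (Hz)
    t = np.arange(0, 30, 1 / fs)                  # 30 s of an IC time course

    # toy auditory-IC signal: response at the 2.4 Hz stimulus rate plus a weaker 1.2 Hz (duple-beat) component
    ic = np.sin(2 * np.pi * 2.4 * t) + 0.4 * np.sin(2 * np.pi * 1.2 * t)

    amp = np.abs(np.fft.rfft(ic)) * 2 / len(ic)   # single-sided amplitude spectrum
    freqs = np.fft.rfftfreq(len(ic), d=1 / fs)

    for f in (2.4, 1.2, 0.8):                     # stimulus rate, duple beat rate, triple beat rate
        k = np.argmin(np.abs(freqs - f))
        print(f, round(amp[k], 2))                # an imagined duple beat would boost the 1.2 Hz peak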

Subjects: Beat, rhythm, and meter, Embodied cognition; Neuroscientific approach

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-34: Case studies suggesting a role for timbral cues and motor imagery in instrument-specific absolute pitch

Lindsey E Reymore(1)
1:Ohio State University

While true absolute pitch (AP) ability is rare in expert musicians, anecdotal evidence suggests that some musicians may better identify pitches played on their native instrument. Apart from a few studies considering non-AP string and piano players’ responses to timbres of their own instruments (Wong & Wong, 2014; Marvin & Brinkman, 2000), instrument-specific AP has not been studied systematically, particularly not in wind players. This study tested whether expert musicians without global AP possess AP for their own instrument, aiming to identify underlying mechanisms of this ability. Case studies were conducted on two professional oboists (including the first author). Each oboist-participant recorded all chromatic pitches from B♭3 to G6 on their own and the other person’s instrument. Pitch-shifted stimuli and piano stimuli were also produced. Participants completed 34-alternative forced-choice pitch identification tasks. Using instrument-specific blocks, Experiment 1 tested for superior pitch identification for oboe over piano. Experiment 2 assessed the influence of four factors on pitch identification accuracy: performer (self vs. other), instrument (own vs. other), transposition (untransposed vs. pitch-shifted), and motor interference (gum-chewing/finger-tapping vs. no motor interference). In Experiment 1, the first author performed above chance for both piano (24% correct) and oboe (66% correct), with significantly better performance for oboe (p < .0001). In Experiment 2, pitch labeling was significantly less accurate for pitch-shifted tones (p < .001) and during motor interference (p < .0032) whereas performer and instrument remained non-significant predictors of AP accuracy. Data collection for the second oboist is ongoing. This proof-of-concept study establishes that instrument-specific AP is detectable in some musicians. These novel insights into underlying mechanisms suggest a role for pitch-specific timbral cues and motor imagery. Future experiments should generalize these findings to wider populations of oboists and other instrumentalists, with implications for teaching, practicing, musicianship tasks, and the understanding of musical expertise.

Subjects: Pitch, Musical expertise; Timbre

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-36: Spatial perception in congenital amusia revisited

Jasmin Pfeifer*(1), Silke Hamann(2)
1:Heinrich-Heine-University, 2:University of Amsterdam

Background While the memory aspect of spatial processing seems to be intact in amusia, as shown by Williamson et al. (2011), other components of spatial abilities, such as spatial orientation or perspective taking, are as yet unexplored in amusia. We therefore conducted two tests assessing different aspects of spatial rotation abilities in amusics. Previous findings on this matter differ. Douglas and Bilkey (2007) found a connection between amusia and spatial processing difficulties in a sample of 8 amusics using a Mental Rotation task (Shepard & Metzler, 1971). Tillmann et al. (2010) tested amusics’ spatial abilities with two different tasks, finding no difference between controls’ and amusics’ accuracy or reaction time, and concluded that there is no deficit in spatial processing in amusia. Williamson et al. (2011) also addressed amusics’ spatial processing utilizing a version of the Mental Rotation task and two further tasks assessing memory for sequences of spatial locations (Milner, 1971) and memory for visual patterns (Della Sala et al., 1997). No difference in accuracy between amusics and controls on any of these tasks was found. Methods We administered the Object Perspective Taking Test (Hegarty & Waller, 2004), measuring perspective taking abilities, and the Santa Barbara Solids Test (Cohen & Hegarty, 2012), assessing not only mental rotation but the ability to identify the two-dimensional cross section of a three-dimensional geometric shape. The latter test also provides information on the source of difficulty by analyzing error patterns along the two dimensions included in the task: the complexity of the geometric object and the orientation of the cutting plane (Cohen & Hegarty, 2012). These two tests were chosen as they differentiate between spatial orientation abilities and spatial visualization abilities. We first administered the tests to a dizygotic twin pair (see results section), of which one twin is amusic and the other is not. In addition, we have tested seven further amusics and ten controls so far, but testing is still ongoing. Results The twins performed differently on one of the visual tasks, with the non-amusic twin (83% correct) outperforming the amusic twin (20% correct). The results of both spatial abilities tests taken together indicate that the amusic twin can perform egocentric spatial transformations, as shown by the Object Perspective Taking Test. This was also the strategy she employed, incorrectly, on the Santa Barbara Solids Test, resulting in her low scores. This shows that she is able to make egocentric spatial transformations but struggles with the object-based spatial transformations that were required of her, with which her sister had no difficulties. Conclusions This study shows that at least this one amusic has impaired spatial visualization abilities with intact spatial orientation abilities. This warrants further scrutiny of amusics’ spatial abilities and a fractionating of their skills in this regard.

Subjects: Processing disorders, Audiovisual / crossmodal; Cross-domain effects; Memory

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-38: Categorical rhythms shared between songbirds and humans

Tina Roeske(1)
1:Max Planck Institute for Empirical Aesthetics

Rhythm – the organization of sounds in time – is a universal feature of human music. Of the infinite ways of organizing events in time, human rhythms are distributed categorically. We compared rhythms of classical piano playing and finger tapping to rhythms of thrush nightingale songs. Across species, we found similar common rhythms, as relative durations of intervals formed three categories: isochronous 1:1 rhythms, small integer ratio rhythms, and high ratio ‘ornaments’. In both species, those categories were invariant within extended ranges of tempi, indicating natural classes. In all cases, the number of rhythm categories decreased with higher tempi. Finally, in birdsong, high ratios (ornaments) were derived from very fast rhythms containing inflexible (probably uncontrollable) interval ratios. These converging results indicate that birds and humans similarly create simple rhythm categories from a continuous temporal space. Such natural categories can promote cultural transmission of rhythmic sounds – a feature that songbirds and humans share.

Subjects: Beat, rhythm, and meter, Cross-species comparison

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-40: Lyrics and Emotion in Songs: A Conceptual Replication Study of Ali and Peynircioglu, 2006

Yiqing Ma*(1), Emily Elliott(1), David J Baker(1), Connor Davis(1), Katherine M Vukovics(1)
1:Louisiana State University

Work by Ali and Peynircioglu (2006) reported that the presence of lyrics in music detracted from a listener’s emotional ratings in happy and calm music, yet enhanced their ratings in sad and angry music. Though this finding has to date led to over 190 citations on Google Scholar, there have not yet been any replications of the authors’ original findings. The present experiment aims to both replicate and extend the findings of Ali and Peynircioglu (2006) with new adapted stimuli and a more comprehensive measure of musical experience (Müllensiefen et al., 2014). We have pre-registered our study design, and we plan to recruit participants from two separate areas of study in order to obtain a representative sample and to ensure the robustness of the original study’s findings. We plan to recruit a minimum of 50 participants before ending data collection. Following the authors’ original design, we will use 32 unfamiliar stimuli in our experiment, using musical excerpts from different genres. Using a within-subjects design, participants will listen to eight musical excerpts from each of four emotional conditions, both with and without lyrics. We plan to analyze the data using a 4 x 2 mixed-model ANOVA and will test for both the main effects and interaction reported in the original paper. We then plan to determine if individual differences in emotional engagement, as measured by the sub-scale from the Goldsmiths Musical Sophistication Index, will predict emotional rating. This research examines the replicability of the original findings from music psychology and extends the findings based on individual differences in musical experience.

Subjects: Emotion, Music and language

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-42: Human Perception of Rhythm Similarity: A Multidimensional Scaling Evaluation

Matthew R Moritz*(1), Matthew Heard(1), Yune S Lee(1)
1:Ohio State University

Despite the long history of music psychology, rhythm similarity perception remains largely unexplored. Several existing studies suggest that the edit distance (ED) model, based on the minimum number of rhythm notation substitutions required to transform one rhythm into another, can predict rhythm similarity judgments. However, it has not been determined whether ED holds up across different tempos, or whether the serial position of an edit influences similarity. Here, we attempted to address this question by creating sixteen rhythms using quarter and eighth notes. Pairwise EDs between rhythms spanned from 1 to 4, and rhythms were presented at a fast (150 BPM) or slow (90 BPM) tempo. Ten musicians rated the similarity of 136 rhythm pairs on a 4-point scale. Using non-metric multidimensional scaling (nMDS), we determined that tempo strongly influences the similarity of rhythms. Furthermore, nMDS revealed that rhythms were rated as more similar if their first rhythmic notations were shared, an effect we dub rhythm primacy. ED predicted similarity only if two rhythms shared primacy. Linear mixed effects modeling further confirmed the nMDS results by revealing that primacy and tempo significantly predicted similarity rating (p < 0.05). However, ED predicted similarity only when primacy effects were present (p = 1.51e-08). Together, our findings suggest that ED requires revision to account for the effects of tempo and primacy on judgments of rhythm similarity. Data included from an ongoing fMRI study will determine if the behavioral findings correspond to similarities in patterns of neural activation related to rhythm processing.
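
For reference, a generic dynamic-programming edit distance is sketched below on hypothetical rhythm strings; note that the authors describe a substitution-based ED, whereas this sketch also allows insertions and deletions.

    def edit_distance(a: str, b: str) -> int:
        """Minimum number of substitutions, insertions, and deletions turning a into b."""
        d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i in range(len(a) + 1):
            d[i][0] = i
        for j in range(len(b) + 1):
            d[0][j] = j
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                d[i][j] = min(d[i - 1][j] + 1,                           # deletion
                              d[i][j - 1] + 1,                           # insertion
                              d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution (0 if equal)
        return d[-1][-1]

    # hypothetical rhythm strings: q = quarter note, e = eighth note
    print(edit_distance("qqee", "qeee"))     # 1: one substitution
    print(edit_distance("qqee", "qqeeee"))   # 2: two insertions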

Subjects: Beat, rhythm, and meter, Music theory

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-44: Modeling Infants’ Perceptual Narrowing to Musical Rhythms: Neural Oscillation and Hebbian Plasticity

Parker Tichko(1)
1:University of Connecticut

Ontogeny is a complex, emergent process that arises from interactions between the developing organism and the structures present in the rearing environment. In the field of infant development, one of the best-known consequences of organism-environment interactions is the adaptation and re-organization of perception-action systems to structural regularities in the environment, a phenomenon called “perceptual narrowing” or “perceptual fine tuning.” Previous work suggests that infants’ perception of musical rhythms is gradually fine-tuned to culture-specific musical structures over the first post-natal year. To date, however, little is known about the neurobiological principles that underlie this process. In the current study, we modeled infants’ perceptual narrowing to culture-specific musical rhythms using oscillatory neural networks with Hebbian synaptic plasticity. We demonstrate that, during a period of unsupervised learning, oscillatory networks adapt to the rhythmic structure of Western and Balkan musical rhythms through the self-organization of network connections. We show that these learned connections affect the ability of the network to discriminate between native and non-native rhythms, a pattern of findings that mirrors the behavioral data on infants’ perceptual narrowing to musical rhythms. We develop an overall framework for modeling rhythm learning and development, and discuss how this may account for the process of enculturation to the rhythms of one’s musical environment.
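
The general principle, oscillators whose mutual connections are strengthened by a Hebbian rule during unsupervised exposure to a rhythm, can be illustrated with the toy phase-oscillator sketch below; the dynamics and parameter values are illustrative only and do not reproduce the authors' model.

    import numpy as np

    rng = np.random.default_rng(0)

    # toy network of phase oscillators with Hebbian plasticity on their couplings
    freqs = np.array([1.0, 1.5, 2.0, 3.0])          # natural frequencies (Hz)
    n = len(freqs)
    phases = rng.uniform(0, 2 * np.pi, n)
    coupling = np.zeros((n, n))                     # connections to be learned

    dt, eps, decay = 0.001, 0.05, 0.01
    stim_freq = 2.0                                 # frequency of the training rhythm

    for step in range(int(60 / dt)):                # 60 s of unsupervised exposure
        t = step * dt
        drive = np.cos(2 * np.pi * stim_freq * t)
        # each oscillator: own frequency + stimulus drive + input through learned couplings
        dphase = (2 * np.pi * freqs
                  + drive * np.sin(-phases)
                  + (coupling * np.sin(phases[None, :] - phases[:, None])).sum(axis=1))
        phases = (phases + dt * dphase) % (2 * np.pi)
        # Hebbian rule: strengthen connections between oscillators that are currently in phase
        coupling += dt * (eps * np.cos(phases[:, None] - phases[None, :]) - decay * coupling)
        np.fill_diagonal(coupling, 0)

    print(np.round(coupling, 2))                    # learned connections reflect the training rhythm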

Subjects: Beat, rhythm, and meter, Music and development

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-46: Generalization of Novel Sensorimotor Associations among Pianists and Non-pianists

Chihiro Honda*(1), Karen Chow(1), Emma B Greenspon(2), Peter Pfordresher(1)
1:University at Buffalo, 2:University at Buffalo, SUNY

In the process of acquiring musical skills, such as playing the piano, we develop sensorimotor associations which enable us to predict motor activities based on perceived pitch. However, past research suggests that these acquired associations are inflexible and show limited generalizability to novel task demands. Pfordresher and Chow (in press) had pianists and non-pianists learn melodies by ear based on a normal or inverted (lower pitch to the right) mapping of pitch. Pianists who were trained with inverted mapping were not better at recalling learned melodies during a later test phase compared to non-pianists, who performed similarly regardless of the pitch mapping that was used during training. These results suggest that musical training may constrain sensorimotor flexibility. The current study further investigates whether piano training constrains the ability to generalize learning based on an unfamiliar (inverted) pitch mapping, by using a transfer-of-training paradigm (Palmer & Meyer, 2000). As in Pfordresher and Chow (in press), pianists and non-pianists in the current study learned a training melody by ear with normal or inverted pitch mapping. After training, participants listened to and then immediately reproduced four types of melodies that varied in their similarity to the melody used during training: same, similar structure, inverted pitch pattern, or different structure. The feedback mapping during the generalization test matched training. Overall, pianists produced fewer errors and performed faster than non-pianists. However, benefits of training were absent for pianists who trained with inverted feedback when they attempted to reproduce a melody with a different structure than the melody used for training. This suggests that piano experience may constrain one’s ability to generalize learning that is based on sensorimotor associations.

Subjects: Performance, Embodied cognition; Music education/pedagogy/learning; Musical expertise

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-48: Dysprosody of speech in two singers: Dissociations of pitch, timing and rhythm

Yoonji Kim*(1), Diana Sidtis(1)
1:New York University

Singing and speech “melody” or prosody share elemental acoustic components (pitch, timing, and rhythm) which may be differentially impacted by brain damage. The effects of focal brain damage on fundamental frequency (F0, also pitch), timing, and rhythm in speech production and in singing were retrospectively investigated in two persons diagnosed with severely dysprosodic speech due to cerebrovascular accidents; both were experienced singers and native speakers of American English. The effects of speech dysprosody on singing competence are not well known. Participant 1 (P1) suffered a large right hemisphere infarct and Participant 2 (P2) sustained a right-sided ischemic subcortical lesion. Pitch and timing in lexical (linguistic) contrasts (greenhouse/green house) were acoustically analyzed, rhythm and fundamental frequency mean and range in spontaneous speech were quantified, and accuracies of pitch and rhythm in personally familiar songs were measured acoustically and rated by listeners. Both study participants produced lexical contrasts with non-normal pitch trajectories but with normal timing relations. Rhythm in spontaneous speech deviated from normal values for P1 but not for P2; rhythm in singing for both subjects was accurate, as demonstrated by acoustic measures and listeners’ ratings. These case studies show dissociations between pitch, rhythm, and timing abilities for speech versus singing. Both persons failed in linguistic pitch but preserved linguistic timing; they differed in speech rhythm but not in sung rhythm ability; they both differed in sung pitch ability. Brain structures underlying the preserved and deficient elements are described. The findings from this study support the view that talking and singing are modulated by different cerebral systems. Better understanding of these dissociations may lead to improved models of modes of vocalization and may assist in the assessment and treatment of dysprosody and amusia.

Subjects: Music and language, Singing

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-50: This is how we do it – the influence of musical training on music genre perception & categorization

Peer Herholz(1)
1:Montréal Neurological Institute, McGill University

Although the influence of musical training serves as a model case for the plasticity of the human mind, the vast majority of research has focused on rather low-level aspects of music processing, such as pitch, rhythm, and timbre, without considering possible alterations of subsequent high-level, abstract aspects such as music categorization. To address this gap, the categorization of music by means of the perception of its genres was compared between musicians and non-musicians. Participants were tasked with arranging a set of 20 different music genres (4 main genres with 5 sub-genres each) on a computer screen according to their perceived similarity. Using inverse multidimensional scaling, the arrangements were analysed within the framework of representational models and compared with a broad range of computational models, ranging from low-level acoustic features (e.g., based on pitch, spectrum, harmony, timbre, and tempo) to high-level conceptual models (e.g., main vs. sub-genres), to test whether the representations of musicians and non-musicians could be explained by diverging low- or high-level aspects, and thus diverging cognitive processes. Additionally, supervised and unsupervised machine learning approaches were applied to test whether the representations could predict musicianship. The obtained representations did not differ between musicians and non-musicians: neither their low- nor high-dimensional arrangements varied between groups, nor were the machine learning approaches able to learn and predict group-specific representations. The same holds for the tested models, as their predictive ability did not diverge between groups; however, low-level models were consistently outperformed by high-level models. Together, the results indicate that musical training seemingly has no influence on the categorization of music genres, as the respective processes might be driven by more abstract concepts such as genre definitions.
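
The comparison between behavioural arrangements and candidate models can be sketched in the representational-similarity style described above; the data below are simulated, and the "main genre" model is only one hypothetical example of the models tested.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)

    # hypothetical 2-D screen coordinates of the 20 genres from one participant's arrangement
    positions = rng.uniform(0, 1, size=(20, 2))
    behavioural_rdm = pdist(positions)              # pairwise on-screen distances = dissimilarities

    # hypothetical high-level model RDM: 0 = same main genre, 1 = different main genre
    main_genre = np.repeat(np.arange(4), 5)         # 4 main genres x 5 sub-genres
    model_rdm = pdist(main_genre[:, None], metric=lambda u, v: float(u[0] != v[0]))

    rho, p = spearmanr(behavioural_rdm, model_rdm)  # how well the model predicts the arrangement
    print(round(rho, 2), round(p, 3))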

Subjects: Musical expertise, Computational approach; Music information retrieval; Neuroscientific approach

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-52: Hey, You’ve Got to Hide Your Love Away: Private vs Public Musical Preferences

Selena Bordeaux*(1), Meagan Curtis(1)
1:Purchase College, SUNY

People often bond over shared taste in music, and music is one of the most popular conversation topics among new acquaintances. However, listeners generally keep some of their musical preferences private. The current study explored the factors associated with why listeners consider some of their preferred songs to be guilty pleasures. It was hypothesized that private preferences would be typified by songs from genres that are inconsistent with a listener’s typical genre preferences and that are not viewed by the listener as socially-acceptable. Forty-two Spotify users ranging in age from 18 to 24 were recruited from a small liberal arts college in the northeastern United States. A survey was administered online via Qualtrics. Participants were asked to examine a personalized Spotify playlist of the 100 songs that they listened to the most in 2017. They were asked to identify the first three songs on their playlist that they considered to be private preferences as well as the first three songs that they regard as public musical preferences. They evaluated each song by rating their level of agreement with each of 15 statements about the song. A stepwise linear regression was used to determine which factors distinguished private from public preferences. Three significant predictors accounted for a total of 21.9% of the variance in the data. Privately preferred songs were rated as less socially acceptable and as having less musical complexity than the publicly preferred songs. Private preferences were also associated with genres that were inconsistent with how the participant wished to be perceived by their peers. These results underscore identity management as the defining factor in determining whether a song is viewed as a guilty pleasure. Genre and musical complexity may relate to whether a musical preference is revealed publicly or kept as a private indulgence.

Subjects: Aesthetics / preference

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-54: The Accuracy of the Stereotypes Associated with the Fans of Different Genres of Music

Tiana Pistillo*(1), Meagan Curtis(1)
1:Purchase College, SUNY

The current study examined the accuracy of stereotypes about fans of different genres of music. Although past research suggests that people hold stereotypes about the fans of specific genres, there hasn’t been as much research examining the accuracy of those stereotypes. To examine this, the personality traits of 40 participants were assessed using the Ten-Item Personality Inventory (TIPI). Participants were then asked to listen to four 30-second musical excerpts from each of 12 different genres (36 excerpts total). After hearing each excerpt, participants were asked to rate on a Likert scale how much they liked the song. After hearing all four exemplars from a specific genre, the participants were asked to assess the personalities of fans of that genre by completing the TIPI about those fans. This was repeated for each genre, such that participants made judgments about the fans of each genre using the TIPI. This enabled a comparison between the actual personality profiles of participants who reported that they liked songs from each genre with the stereotypes about the personalities of fans of those same genres. The results revealed a small number of significant correlations between preference for specific genres and individual dimensions of personality. However, the stereotypes about fans of individual genres generally did not converge with the personality data from the fans of those genres. Only one convergence was found: fans of pop were stereotyped to be high in conscientiousness, which aligned with the actual personality data of pop fans. However, we cannot conclude from these results alone that people hold inaccurate stereotypes about the fans of specific genres. The general lack of convergence may simply reflect the relatively small sample size of this study. Further research is needed with a larger sample of participants.

Subjects: Aesthetics / preference, Music and society

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-56: Redefining perfect pitch to be less perfect

Stephen C Van Hedger(1), John Veillette*(2), Shannon Heald(2), Howard Nusbaum(2)
1:Western University, 2:University of Chicago

Background: Absolute pitch (AP), also known as “perfect pitch,” is associated with early musical training and tonal language experience. Evidence for these associations typically involves comparing individuals with and without AP, yet this approach may represent an inappropriate sampling technique if AP is a graded (non-dichotomous) ability. Thus, the present aim was to assess how early musical training and tonal language experience relate to variation in AP performance. Method: 192 individuals completed an online AP assessment (48 notes). The notes had complex but generally unfamiliar timbres (24 triangle waves, 24 custom complex tones). On each trial, participants heard a note and clicked a button corresponding to its perceived note name within 5 seconds. Notes were sampled from a lower (C3-B3) and higher (C5-B5) octave, with octaves interleaved across trials. This was done to minimize relative pitch strategies (as the smallest relative pitch difference between two notes would be a minor ninth). After the assessment, participants answered questions about their musical and tonal language background. Results: AP performance was graded rather than dichotomous; approximately one-third of participants performed above chance but below typical “genuine” AP thresholds. The at-chance participants reported later ages of music onset and were less likely to speak a tonal language, supporting prior work. These factors did not differentiate the genuine AP performers from the intermediate AP performers. Gaussian mixture modelling provided support for conceptualizing the intermediate AP participants as belonging to the same distribution as the genuine AP participants. Conclusion: Intermediate AP performers have often been assumed to rely on different mechanisms for absolute pitch judgments. The present results, in contrast, suggest similar experiential profiles between intermediate and high (“genuine”) AP abilities. As such, these results suggest that AP should be redefined to include intermediate levels of performance and support the treatment of AP as a graded phenomenon.
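
As an illustration of the mixture-modelling logic, the sketch below fits Gaussian mixtures with one to three components to simulated note-naming accuracies and compares them by BIC; the score distributions are invented, and the authors' exact model-selection procedure is not specified in the abstract.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)

    # hypothetical note-naming accuracies (proportion of 48 notes correct) for 192 participants
    scores = np.concatenate([
        rng.normal(0.10, 0.04, 120),     # near chance (1/12 is roughly .083)
        rng.normal(0.55, 0.20, 72),      # above chance, spanning intermediate to "genuine" AP
    ]).clip(0, 1).reshape(-1, 1)

    # compare 1-, 2-, and 3-component mixtures: does a separate "intermediate" component help?
    for k in (1, 2, 3):
        gmm = GaussianMixture(n_components=k, random_state=0).fit(scores)
        print(k, round(gmm.bic(scores), 1))          # lower BIC = better-supported model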

Subjects: Pitch, Memory

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-58: Comparing Brain Responses to Music and Language Stimuli to Classify Consciousness

Steven L Meisler*(1), Yelena Bodien(1), David Zhou(2), Brian Edlow(1)
1:Massachusetts General Hospital, 2:Massachusetts Institute of Technology

Following severe traumatic brain injury (TBI), patients often experience impaired awareness and arousal, known as disorders of consciousness (DoC). Task-based functional MRI (fMRI) and EEG tools may be used to detect “covert consciousness” that is not apparent on bedside examination. However, the ability of a patient to demonstrate consciousness through active tasks, such as motor imagery, may be hindered by confounds such as pain, illness, or sedation in the intensive care unit. This highlights the need for robust tools using passive stimuli to identify preserved cortical function necessary for consciousness. In this study, we compare how EEG metrics derived from brain responses to language and music stimuli classify consciousness revealed by command-following on behavioral examination or task-based fMRI. Methods using responses to spoken language have previously been evaluated. However, we hypothesized that music, due to the robust global neural activation needed to process it, is a more discriminative stimulus paradigm. In 14 patients with DoC from severe TBI (10 following commands, 4 not following commands) and 16 healthy controls, we computed the average strength (STR) and global efficiency (GE) of EEG α-band functional networks. We tested the specificity and sensitivity of each metric for classifying consciousness, defined as being within one standard deviation of the healthy control mean. We also compared areas under receiver operator characteristic curves (AUC) after testing several thresholds for each metric. When using GE as a discriminant, music had 100% specificity (4/4 non-command following patients correctly classified), 30% sensitivity (3/10 command following patients correctly classified), and an AUC of 0.525. This metric with language yielded a lower specificity (75%) but a higher sensitivity (50%) and AUC (0.675). When discriminating with STR, music and language had equal specificities (75%), but language had a higher sensitivity and AUC (40% and 0.625, respectively) than music (30% and 0.5, respectively). These findings suggest that language, rather than music, is a more discriminative stimulus paradigm for classifying patients who are conscious and that GE is a better discriminative metric.
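
The two network metrics can be illustrated as follows on a hypothetical connectivity matrix; note that networkx's global_efficiency is computed on the unweighted graph, so a weighted variant (as the authors may have used) would require weighted shortest paths.

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)

    # hypothetical alpha-band connectivity matrix (e.g., coherence) across 8 EEG channels
    n = 8
    w = rng.uniform(0, 1, size=(n, n))
    w = (w + w.T) / 2                      # symmetrise
    np.fill_diagonal(w, 0)
    w[w < 0.5] = 0                         # illustrative threshold on weak connections

    G = nx.from_numpy_array(w)             # weighted, undirected graph

    strength = np.mean([d for _, d in G.degree(weight="weight")])   # average node strength (STR)
    ge = nx.global_efficiency(G)           # global efficiency (GE) of the thresholded graph
    print(round(strength, 2), round(ge, 2))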

Subjects: Music and language, Traumatic brain injury

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-60: The Origins of Dance: Characterizing infants’ earliest spontaneous dance behavior

Minju Kim*(1), Adena Schachner(1)
1:University of California, San Diego

Dance is a universal and early-developing human behavior. What drives and limits the earliest ages at which dance can appear? We conducted a detailed parent questionnaire characterizing infants’ earliest dance behavior, asking: When do infants begin to spontaneously dance, and how variable is this age of onset? How frequently do infants produce dance, and how does this change over infancy? N = 276 parents of infants aged 0-24 months completed an online survey (83% from the US; infants’ Mage = 13.1 mo, SD = 6.3). Questions included whether their child dances (yes/no); the age they first started dancing; how frequently they dance (1-7 scale); and whether children initiate dance themselves. We asked parents to include movements that were produced by the child, occurred more when music was playing, and “looked like dance, to them”. 75.7% (N = 209) of infants in our sample were reported to dance (Mage = 15.4 mo, SD = 5.2), with 94.3% initiating dance episodes themselves. Children’s first dance occurred at M = 9.6 months (SD = 3.3), and as early as 2 months (range = 2.0-11.7 mo). Children who did not dance (N = 67) were mostly younger (M = 5.7 mo, SD = 3.1), but included some notably older toddlers (range = 0.9-17.3 mo). While there was a positive relationship between age and frequency of dance (F(1,205) = 10.5, p = 0.001), even very young children danced frequently (0-6 mo: M = 4.9, ‘every few days’, SD = 1.8; overall M = 5.9, ‘almost every day’). These findings show that dance behavior starts early in life, and that the age of onset varies across individuals. The motivation to move to music may be present in early infancy, with its age of onset constrained only by motor development. Future analyses (of other measures in this questionnaire) will characterize the nature of infants’ earliest dance movements and their relation to motor development. This study provides an initial characterization of the developmental origins of dance, to inform future experimental and in-lab studies of infants’ earliest dance behavior.

Subjects: Music and development, Music and movement

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-62: Synchronizing to Stimuli that Appear to Change in Tempo: How do Pitch-Induced Temporal Illusions Affect Tapping Behavior?

Toni M Smith*(1), Ed Large(1)
1:University of Connecticut

Context effects, including changes in pitch, can influence how an individual perceives the timing of a given stimulus; even if two stimuli are exactly the same in terms of temporal content, they might be perceived to differ with respect to tempo or duration if the pitch differs. Of particular interest here is how changes in pitch over the duration of a stimulus can induce such illusory percepts. In an oscillator model of time perception, the entrainment of an internal oscillator to an external stimulus is the basis for our ability to track and predict events in time. Under this sort of model, illusory percepts of time must be caused by a change in either the period or phase of an internal oscillator. Previous work suggests that these pitch-induced temporal illusions are a result of changes in the phase behavior of neural oscillations, but not a change in period. We hypothesize that the observed changes in phase are due to a change in the natural frequency of the oscillator. In order to investigate whether relative phase over time behaves as would be predicted by a change in natural frequency, we presented participants with frequency modulated (FM) sounds that increased, decreased, or remained stable in pitch and/or modulation rate and asked them to tap once per cycle, at the peaks in pitch. In a separate task, participants also rated how much they perceived each stimulus to be speeding up or slowing down. We found that pitch changes induced illusory rate changes in the participants, replicating results of previous work. The hypothesized phase behavior of taps was only partially observed. Further experimentation is in progress using simpler, discrete stimuli that are easier to synchronize with, as the continuous nature of FM stimuli might have made synchronization difficult and confounded results.
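
As a hedged illustration of the relative-phase analysis described above, the sketch below computes each tap's phase within the ongoing modulation cycle; the tap times, peak times, and the relative_phase helper are all invented for illustration and are not the authors' analysis code.

```python
# Hypothetical analysis sketch: phase of taps relative to the pitch peaks of an
# FM stimulus (all times in seconds are made up).
import numpy as np

def relative_phase(tap_times, peak_times):
    """Return each tap's phase in [-0.5, 0.5) cycles relative to the nearest peak."""
    phases = []
    for tap in tap_times:
        idx = np.searchsorted(peak_times, tap)
        if idx == 0 or idx == len(peak_times):
            continue                                 # ignore taps outside the peak range
        prev_peak, next_peak = peak_times[idx - 1], peak_times[idx]
        phi = (tap - prev_peak) / (next_peak - prev_peak)  # 0 = previous peak, 1 = next peak
        phases.append(((phi + 0.5) % 1.0) - 0.5)     # wrap so 0 means "on the nearest peak"
    return np.array(phases)

peak_times = np.arange(0.0, 10.0, 0.8)               # a stable 1.25 Hz modulation
rng = np.random.default_rng(2)
tap_times = peak_times[1:-1] + rng.normal(0.0, 0.03, size=len(peak_times) - 2)
print(relative_phase(tap_times, peak_times).round(3))
# Under a change in the oscillator's natural frequency, relative phase should
# drift systematically across cycles rather than remain constant.
```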

Subjects: Beat, rhythm, and meter, Pitch

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-64: Individual differences in rhythmic neural entrainment and grammar production

Valentina Persici*(1), Olivia Boorom(2), Reyna Gordon(2)
1:University of Milano – Bicocca, 2:Vanderbilt University Medical Center

Studies on music and language processing and production have shown both trait-like and state-like evidence for a rhythm-grammar link. Not only are rhythm perception abilities and grammar skills associated in both typical and atypical populations, but the auditory presentation of rhythmic stimuli can also affect subsequent receptive grammar task performance (Chern et al., 2018). These results may be related to the brain’s ability to entrain to auditory stimuli. However, it is unclear whether this ability also modulates the short-term rhythmic priming effects (RPE) on syntax production. To investigate whether differences in neural entrainment to rhythmic stimuli are associated with individual differences in RPE on grammar production, 20 typically developing children (ages 5-8 years) are asked to listen to rhythmically regular or irregular primes, and then complete a conversational language sample (a more ecologically valid measure of grammar production than standardized tests). Participants are screened to rule out hearing loss, intellectual disability, and language impairment through standardized assessments of receptive and expressive language skills. Complex syntax production after the rhythmic primes is elicited through conversational prompts (see Hadley, 1998) and transcribed in SALT (Miller & Iglesias, 2012). Grammar complexity is analyzed both in terms of clausal density and of percentage of correct clause units (see Loban, 1963). Continuous electroencephalography (EEG) data are recorded while participants listen to the rhythmic tones, and individual differences are calculated using time-frequency and clustering analyses, as in Lense et al. (2014). RPE on nonlinguistic abilities is measured using a control visuospatial task. Data collection is underway. We expect enhanced clausal density and increased accuracy of clause units after listening to rhythmically regular primes, especially for children showing larger brain responses to the auditory stimuli. Results may improve rhythmic treatments in children with impaired language.

Subjects: Music and language, Beat, rhythm, and meter; Neuroscientific approach

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-66: Examining the effects of tempo on psychophysiological response of adolescents during a learning task

Matthew Moreno*(1), Earl Woodruff(1)
1:University of Toronto

Motivation: Researchers have explored the physiological impact of altered tempo on perception (Kim, Strohbach & Wedell, 2018), perception of musical emotions (Kamenetsky, Hill, & Trehub, 2016), and arousal (Fernández-Sotos, Fernández-Caballero & Latorre, 2016). There is a gap in understanding how physiological changes differ as an effect of musical tempo during learning tasks. The following question guided this study: What changes occur in the psychophysiological responses of learners who listen to music of contrasting tempi while completing a comprehension task? Methodology: Participants were first-year undergraduate students at a research university in Canada. In this repeated-measures study, participants were presented with a Western art music piano piece for 2 minutes under three conditions: 1) no music (control), 2) slow music (110 bpm), and 3) fast music (150 bpm). Participants were asked to read passages from the comprehension component of the Nelson-Denny (Form H; Brown, Fishco & Hanna, 1993). Electroencephalogram (EEG) and electrodermal activity (EDA/GSR) data were collected using a Biopac MP160 to measure changes in brain activity, and levels of consciousness, with a single-channel Cz sensor. Data were recorded for the following frequency bands: Gamma (40-100 Hz), Beta (12-40 Hz), Alpha (8-12 Hz), Theta (4-8 Hz), and Delta (0-4 Hz). Focus areas were analyzed in 1-second segments using the Activity Analysis methodology (Upham & Adams, 2018). Results: Results indicated high correlations of Alpha waves in the slow-music condition and increased Theta waves in the fast-music condition. EDA responses were significantly higher (p = 0.005) in the fast-music condition. Implications: These results suggest that fast music increases Theta waves, increasing the likelihood of distraction, along with electrodermal responses that are more frequent but lower in amplitude. These preliminary results provide empirical evidence about the psychophysiological changes that can arise from the tempi of music used during learning tasks. Research will continue in order to understand the physiological indicators of how music can be used to enhance learning and performance.
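
As a rough, hedged sketch of how band power in 1-second segments could be computed from a single Cz channel (a simulated signal, an assumed 256 Hz sampling rate, and Welch's method; this is not the Activity Analysis procedure itself):

```python
# Illustrative band-power computation for 1-second EEG segments (simulated data).
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 40), "gamma": (40, 100)}     # Hz, as listed in the abstract

def band_powers(segment, fs):
    freqs, psd = welch(segment, fs=fs, nperseg=min(len(segment), fs))
    df = freqs[1] - freqs[0]
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() * df
            for name, (lo, hi) in BANDS.items()}

fs = 256                                           # assumed sampling rate (Hz)
rng = np.random.default_rng(3)
t = np.arange(0, 120, 1 / fs)                      # two minutes of simulated Cz data
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)  # alpha + noise

for sec in range(3):                               # first three 1-second segments
    seg = signal[sec * fs:(sec + 1) * fs]
    print(sec, {k: round(v, 3) for k, v in band_powers(seg, fs).items()})
```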

Subjects: Physiological measurement, Beat, rhythm, and meter

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-68: Evidence of a single neural mechanism underlying scale-sensitivity

Sebastian C Waz*(1), Charles Chubb(1)
1:University of California, Irvine

In a “tone-scramble” task, a listener is presented on each trial with a random sequence of tones and strives to classify it based on the notes it contains. Dean & Chubb (2017) tested listeners in five tone-scramble tasks. In each, G was established as the stimulus tonic, and the listener strove to judge which of two possible target notes was also present. Their results suggested that a single cognitive resource, “scale-sensitivity,” controls performance on all five tasks. Strikingly, scale-sensitivity was found to facilitate three of the five tasks (the 2-, 3-, and 6-tasks, described below) with equal strength: a given listener was likely to perform equally well on all three. The current study exploits this finding to investigate the following question: Is scale-sensitivity conferred by a single neural mechanism that is differentially sensitive to notes of different scale degrees relative to the stimulus tonic? Listeners were tested in 7 tasks. All stimuli contained 32 65-ms tones, including 8 each of the notes G5, D6, and G6 as well as 8 target notes: In the 2-task (3-task; 6-task) the target notes were all A♭ or all A (all B♭ or all B; all E♭ or all E). There were also four “hybrid” tasks: In the 2u3-task [2×3-task], the target notes comprised 4 each of (i) A♭ and B♭ or (ii) A and B [(i) A♭ and B or (ii) A and B♭]. Corresponding 3u6- and 3×6-tasks were also tested. The single-mechanism model predicts that one of the 2u3-task or the 2×3-task should yield performance equal to the 2- and 3-tasks, and the other should be much worse. Results confirm this pattern: performance in the 2×3- and 3×6-tasks is much worse than in the 2u3- and 3u6-tasks. This result suggests that a single neural mechanism predominates in controlling performance in all seven tasks.
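
To make the stimulus construction concrete, here is a hedged sketch of how a single 2-task trial might be synthesized; the pure sine tones, 44.1 kHz sampling rate, and function names are assumptions for illustration, not the authors' materials.

```python
# Build one 2-task tone-scramble: 32 tones of 65 ms each, comprising 8 each of
# G5, D6, and G6 plus 8 target notes (all Ab5 or all A5), in random order.
import numpy as np

SR = 44100                                          # assumed sampling rate
DUR = 0.065                                         # 65 ms per tone

def midi_to_hz(m):
    return 440.0 * 2 ** ((m - 69) / 12)

def tone(midi):
    t = np.arange(int(SR * DUR)) / SR
    ramp = np.minimum(1.0, np.minimum(t, DUR - t) / 0.005)   # 5 ms onset/offset ramps
    return ramp * np.sin(2 * np.pi * midi_to_hz(midi) * t)

def tone_scramble_2task(target_is_A, rng):
    G5, D6, G6, AB5, A5 = 79, 86, 91, 80, 81        # MIDI note numbers
    notes = [G5] * 8 + [D6] * 8 + [G6] * 8 + [A5 if target_is_A else AB5] * 8
    rng.shuffle(notes)
    return np.concatenate([tone(n) for n in notes])

stimulus = tone_scramble_2task(target_is_A=True, rng=np.random.default_rng(4))
print(stimulus.size, "samples,", round(stimulus.size / SR, 2), "s")
```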

Subjects: Harmony and tonality, Pitch; Psychoacoustics

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-70: The career choice of singer-songwriter: Internal and external influences

Quincy Beck*(1), Annabel Cohen(2)
1:Brown University, 2:University of Prince Edward Island

Background: Our survey aimed to reveal the relative importance of self-assessed external and internal influences on the career choice of a singer-songwriter. Two compatible theoretical frameworks provide context for the multiple variables considered. Gaunt and Hallam’s (2016) Interactionist Model maps the complexity of interactions among family, school, and community that influence an individual’s development of musical skills; Self-determination theory (Ryan & Deci, 2000) focuses on the importance of intrinsic reward and the associated self-regulation of competence, autonomy, and relatedness (social worth). Method: The online questionnaire (~15 minutes) obtained information about demographics, professional output, music training, and family, school, and community music programs. Tapping into self-determination, respondents were requested to distribute 100 points to reflect the relative influence of themselves and persons in various roles on the decision to become a singer-songwriter. Further, respondents were asked to identify an inspirational singer-songwriter and compare their own past, present, and future vocal performing ability to him or her. Results: Of the 206 confirmed singer-songwriters (mean age 44 years, range 18–73), the majority (88%) identified as performing/recording artists, with a lifetime average of 4.7 CD albums and 3.6 online-for-purchase albums, and were self-taught singers and instrumentalists. About one third reported no formal music education, and 40% acknowledged primary, middle, or high school training. On a 10-point scale, the musicians judged that they were developing a unique voice (mean 7.7), with females rating higher than males. Both males and females judged their singing ability as continually improving, F(2,540) = 146.03, p < .001, and judged themselves as their greatest influence, followed by famous singer-songwriters, after which family was most often acknowledged (before local singers/musicians, friends, and teachers). Discussion: Evidence of strong self-efficacy is consistent with Self-determination theory, and the influence of family over school is interpretable in terms of the Interactionist model. Whether the low influence of school (observed also in our prior work with Christopher Robison and Michael Speelman) arises through lack of access, type of music training, or over-riding self-efficacy remains to be explored. References: Gaunt, H., & Hallam, S. (2016). Individuality in the learning of musical skills. In S. Hallam, I. Cross, & M. Thaut (Eds.), The Oxford Handbook of Music Psychology, Second Edition (pp. 463-477). Oxford, UK: Oxford University Press. Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55, 68-78.

Subjects: Composition and improvisation, Music education/pedagogy/learning

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-72: Using psycholinguistic inquiry to measure felt emotion in autobiographical memories of musical experiences

Olivia S Yinger*(1), D Gregory Springer(2)
1:University of Kentucky, 2:Florida State University

Although many researchers have investigated emotional responses to music, few have done so using psycholinguistic analysis, which allows participants to describe their perceptions in a natural, candid, open-ended format. The purpose of this study was to determine whether writing about positive and negative autobiographical memories related to music corresponds to positive and negative emotion counts measured by Linguistic Inquiry and Word Count (LIWC), a software program that can be used to quantify elements of language. Participants were undergraduate students (N = 99) at two large universities in the southeastern United States. Each participant was asked to write about a time in their life when music or an experience with music made them feel negative emotions, and a time when music or an experience with music made them feel positive emotions. Participants used significantly more positive than negative emotion words to describe positive memories of music, but there was no significant difference between the rates of negative and positive emotion words to describe negative memories of music. A content analysis revealed a similar trend: 51% of participants described mixed, conflicting, or changing emotions when describing negative experiences, whereas descriptions of positive experiences tended to be highly positive. When writing about positive experiences, participants also used significantly more first-person plural pronouns (“we”) and fewer third-person pronouns (“she,” “he”) compared to when they were writing about negative experiences with music. Music majors most often wrote about positive and negative experiences in which they were making music, whereas non-music majors most often wrote about positive and negative experiences listening to music. The content analysis revealed both similarities and differences between participants’ descriptions of autobiographical memories of music and the experiences described by participants in Gabrielsson’s studies of strong experiences with music.
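
The word lists and counting rule below are a toy stand-in for the general idea behind LIWC-style emotion word counting; LIWC's actual dictionaries and software are proprietary and are not reproduced here.

```python
# Toy sketch of dictionary-based emotion word counting (not LIWC itself).
import re

POSITIVE = {"love", "joy", "happy", "beautiful", "wonderful", "excited"}
NEGATIVE = {"sad", "angry", "cried", "lonely", "afraid", "painful"}

def emotion_rates(text):
    words = re.findall(r"[a-z']+", text.lower())
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return {"positive_pct": 100 * pos / len(words),
            "negative_pct": 100 * neg / len(words)}

memory = ("I cried at the concert, but the music was so beautiful "
          "that I felt happy and sad at the same time.")
print(emotion_rates(memory))   # a mixed memory yields nonzero rates in both categories
```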

Subjects: Emotion, Language and speech; Memory

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-74: Contributions of absolute and relative pitch to the long-term memory of familiar melodies

Shannon Heald*(1), Stephen C Van Hedger(2), Howard Nusbaum(1)
1:University of Chicago, 2:Western University

Background: Many individuals possess accurate long-term pitch memories for popular music recordings, in that they can judge when a familiar recording has been shifted in pitch by one or two semitones (e.g., Schellenberg & Trehub, 2003). A testing range of one or two semitones, however, does not fully capture the hierarchical tonal relationships that are integral to music understanding. For example, pitches separated by perfect fourths and fifths – while distant in absolute pitch – are psychologically close in relative pitch (e.g., keys separated by perfect fourths or fifths share six of seven notes in a major scale). Thus, in the present experiment we tested pitch memory for popular music recordings using pitch shifts that more appropriately captured these tonal hierarchies. Method: 40 participants judged whether 84 familiar melodies (represented in MIDI format) were in the correct key. Half were correct, while the other half were shifted in pitch (both flat and sharp) by up to a perfect fifth. On each trial, participants heard a single version of a familiar song and made a forced-choice judgment (correct or incorrect). Participants also rated their familiarity with each melody. Results: While we observed a significant effect of absolute pitch distance (i.e., performance increased as absolute distance increased), there was also evidence that participants were more likely to mistakenly judge an incorrect melody as correct when it was closely related to the correct melody in relative pitch (i.e., when it was shifted by a perfect fourth or fifth). Conclusion: These results indicate that listeners’ long-term pitch memories for familiar music recordings are influenced by both absolute and relative pitch information. This paradigm could potentially be used to quantify an individual’s absolute versus relative pitch processing, which could then be associated with other aspects of musical processing.
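
A small worked example may clarify the relative-pitch logic invoked above; the scale-degree arithmetic below is standard music theory rather than anything taken from the study's materials.

```python
# How many major-scale notes do two keys share as a function of the pitch shift?
MAJOR_SCALE = {0, 2, 4, 5, 7, 9, 11}     # scale degrees in semitones above the tonic

def shared_scale_notes(shift_semitones):
    shifted = {(s + shift_semitones) % 12 for s in MAJOR_SCALE}
    return len(MAJOR_SCALE & shifted)

for shift in (1, 2, 5, 7):
    print(f"shift of {shift} semitone(s): {shared_scale_notes(shift)} of 7 notes shared")
# Shifts of 5 or 7 semitones (a perfect fourth or fifth) share 6 of 7 scale notes,
# so they are distant in absolute pitch yet close in relative pitch.
```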

Subjects: Pitch, Memory

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-76: Seashore, Science, and the Measure of a Singer

Annabel Cohen(1)
1:University of Prince Edward Island

Background: Carl Seashore (1866-1949) is known for his early psychometric tests of musical ability (Boyle & Radocy, 1987). The present paper considers a lesser known but equally prescient aspect of Seashore’s interests, illustrated in his paper “The Measure of a Singer,” presented as his 1911 Presidential address to the American Psychological Association (APA). Vision: The address outlined four primary dimensions of measurement: the sensory (pitch, intensity, and time), motor (pitch, intensity, and time), associational (imagery, memory, and ideation), and affective (preference, reaction, interpretation). This comprehensive collection of tests extends far beyond the measurement of pitch accuracy, which has occupied much of the literature on singing ability to this day (Svec, 2017). As with other talents, Seashore argued, there were four aspects of the measurement of singing: initial talent, the ability to learn, skills, and knowledge, and he used the contemporary concept of plasticity to refer to the ability to acquire new knowledge. The work of the psychologist in measuring singing, he argued, was an excellent example of the potential of the then-new applied psychology. Although the original audience was the APA, he insisted that the address also be disseminated in print to the wider readership of the prestigious journal Science. It is the only APA presidential address ever published in a non-psychology journal. Significance: Publishing in Science suggests the importance Seashore attributed to the example of singing for scientific progress at large. Digital and audio technologies are now being exploited to begin to measure singing in new ways (e.g., Berkowska & Dalla Bella, 2013; Demorest et al., 2015; Ellis et al., 2018). Seashore’s vision of 1911 serves both as an inspiration for ever more comprehensive work on the measurement of singing and as a reminder of the importance of singing as a universal musical activity whose measurement provides a window into understanding human behavior and the mind in general. References: Berkowska, M., & Dalla Bella, S. (2013). Uncovering phenotypes of poor-pitch singing: The Sung Performance Battery (SPB). Frontiers in Psychology, 4, 714. Boyle, J. D., & Radocy, R. (1987). Measurement and evaluation of musical experiences. New York, NY: Schirmer. Demorest, S. M., & Pfordresher, P. Q. (2015). Seattle Singing Accuracy Protocol – SSAP [Measurement instrument]. https://ssap.music.northwestern.edu/. Ellis, B. K., Hwang, H., Savage, P. E., Pan, B.-Y., Cohen, A. J., & Brown, S. (2017). Identifying style-types in a sample of musical improvisations using dimensional reduction and cluster analysis. Psychology of Aesthetics, Creativity, and the Arts, 12, 110–122. Seashore, C. E. (1912). The measure of a singer. Science, 35(893), 201-212. Svec, C. L. (2017). The effects of instruction on the singing ability of children ages 5 to 11: A meta-analysis. Psychology of Music, 46, 326-339.

Subjects: Music and development, (1) Singing (2) History of MPC (3) Tests of musical ability

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-78: Evaluating effects of electrical muscle stimulation in time duration reproduction

Rei Konno*(1), Reo Anzai(1), Kazuaki Honda(1), Patrick E Savage(1), Pedro Lopes(2), Shinya Fujii(1)
1:Keio University, 2:University of Chicago

Electrical muscle stimulation (EMS) is a technique that allows muscles to be actuated by means of electrical impulses. This technique has been used in rehabilitation for stroke patients [Hummelsheim et al., 1997]. More recently, EMS has been used not only in clinical settings but also in interactive systems that stimulate the user’s muscles for a variety of tasks, such as showing how to operate everyday objects [Lopes et al., 2015] or practicing drumming patterns [Ebisu et al., 2017]. The latter opens up an interesting research question that remains unanswered: Does the use of EMS improve the performance of rhythmic tasks? In this study, we set out to explore how EMS affects the memorization of time intervals. So far, auditory stimulation has been the main method by which researchers explore how we memorize time intervals. However, we argue that also stimulating the subject’s proprioception may reveal some benefits. We compared auditory and EMS conditions using tasks from a previous study by Mioni et al. (2014). Participants were presented with a target duration of 1, 4, 9, 14, or 18 s (inter-stimulus interval: ISI), retained it over a memory interval (MI) of 0, 5, or 15 s, and then reproduced the ISI. We recruited five participants (3 female, ages 21-23). Analyzing the difference between the reproduced and target durations (absolute error) and their ratio (RATIO) with repeated-measures ANOVAs, we found no significant ISI × condition or MI × condition interactions and no significant main effect of condition for either absolute error or RATIO. We further conducted an additional experiment in which participants detected small changes in the target duration, to assess time duration perception; this also showed no difference between the EMS and auditory conditions. These results suggest that EMS may not have a positive effect on time duration perception and reproduction.
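
For concreteness, the two dependent measures reduce to simple arithmetic; the numbers in the sketch below are invented, and the per-condition averaging and ANOVA steps are omitted.

```python
# Minimal sketch of the absolute-error and RATIO measures (simulated responses).
import numpy as np

target_durations = np.array([1, 4, 9, 14, 18], dtype=float)   # ISI in seconds
reproductions = np.array([1.2, 3.6, 8.1, 15.0, 16.4])         # hypothetical reproductions

absolute_error = np.abs(reproductions - target_durations)
ratio = reproductions / target_durations

for t_dur, ae, r in zip(target_durations, absolute_error, ratio):
    print(f"target {t_dur:4.0f} s: |error| = {ae:.2f} s, RATIO = {r:.2f}")
# These per-trial values, averaged within condition, would enter the
# repeated-measures ANOVAs with stimulation modality (auditory vs. EMS) as a factor.
```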

Subjects: Memory, Beat, rhythm, and meter

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.

P4-80: Universal constraints on rhythm revealed by large-scale cross-cultural comparisons of rhythm priors

Nori Jacoby*(1), Rainer Polak(2), Jessica Grahn(3), Daniel Cameron(4), Shinya Fujii(5), Patrick E Savage(5), Kyung Myun Lee(6), Kelly Jakubowski(7), Martin Clayton(7), Elizabeth Margulis(8), Patrick Wong(9), Eduardo Undurraga(10), Ricardo Godoy(11), Tomas Huanca(12), Timon Thalwitzer(13), Esra Mungan(14), Ece Kaya(15), Luís Jure(16), Martín Rocamora(16), Daniel Goldberg(17), Andre Holzapfel(18), Josh McDermott(19)
1:Max Planck Institute for Empirical Aesthetics, 2:Max Planck Institute for Empirical Aesthetics, 3:University of Western Ontario, 4:Brain and Mind Institute, University of Western Ontario, 5:Keio University, 6:Korea Advanced Institute of Science and Technology, 7:Durham University, 8:University of Arkansas, 9:Chinese University of Hong Kong, 10:Universidad Católica de Chile, 11:Brandeis University, 12:CBIDSI Bolivia, 13:University of Vienna, 14:Boğaziçi University, Psychology Department, 15:Boğaziçi University, 16:Universidad de la República, 17:University of Connecticut, 18:KTH Royal Institute of Technology in Stockholm, 19:Massachusetts Institute of Technology

Music is present in every known culture, implying some biological basis. Yet the nature and extent of biological constraints have remained unclear, in part because cross-cultural comparisons have been limited. We measured a signature of mental representations of rhythm in over 500 participants from 13 countries on four continents, spanning modern societies and traditional indigenous populations belonging to 27 subgroups with varied musical expertise. Listeners were asked to reproduce random “seed” rhythms; their reproductions were fed back as the stimulus (as in the game of “telephone”), such that their biases (the prior) could be estimated from the distribution of reproductions (Jacoby and McDermott 2017). Every tested group showed priors with peaks. These peaks always overlapped with integer-ratio rhythms, supporting the idea that rhythm “categories” at integer ratios are universal. However, the relative importance of different integer ratios varied considerably across cultures. Rhythmic prototypes in many cases reflected rhythms prevalent in the musical systems of a participant group’s culture. However, university students in non-Western countries tended to resemble Western participants, underrepresenting the variability evident across indigenous participant groups and highlighting the problematic over-reliance on student participants in cognitive science (Henrich et al. 2010). The results illustrate consistency in rhythm perception amid cultural variation, demonstrating biological constraints and their interaction with culture-specific traditions.
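
The iterated-reproduction logic can be summarized in a few lines; the perceive_and_tap response model below is a hypothetical stand-in for a human listener, and the three-interval rhythms with a fixed cycle are assumed for illustration (the real paradigm collects human reproductions, following Jacoby and McDermott 2017).

```python
# Schematic sketch of iterated reproduction ("telephone") for estimating rhythm priors.
import numpy as np

def perceive_and_tap(rhythm, rng, pull=0.3, noise=0.03):
    """Stand-in listener: noisy reproduction nudged toward a simple integer-ratio category."""
    attractor = np.array([1, 1, 2]) / 4.0            # an illustrative 1:1:2 category
    reproduced = (1 - pull) * rhythm + pull * attractor + rng.normal(0, noise, 3)
    reproduced = np.clip(reproduced, 0.05, None)
    return reproduced / reproduced.sum()             # intervals as proportions of the cycle

rng = np.random.default_rng(5)
rhythm = rng.dirichlet([1, 1, 1])                    # random "seed" rhythm
for iteration in range(5):                           # each reproduction becomes the next stimulus
    rhythm = perceive_and_tap(rhythm, rng)
    print(iteration + 1, rhythm.round(3))
# Aggregated over many seeds and participants, the distribution of reproductions
# approximates the listener group's prior over rhythms.
```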

Subjects: Cross-cultural comparisons/non-Western music, Beat, rhythm, and meter; Computational approach; Musical expertise

When: 11:45 AM-1:00 PM on Wed Aug 7 – Day 3
Return to Day Schedule.
Return to Full Schedule.