PIRE fellow Christianna Otto presents at the 12th International Symposium on Bilingualism.
Differences in how languages map acoustic space onto phonetic categories present challenges in second language (L2) learning, but those challenges are exacerbated by phonetic variation within the L2 (e.g. regional or social lects). In this study, we asked what happens when L2 listeners encounter native speakers whose speech exhibits unfamiliar features. Listeners adapt easily to such features in their native language, a process known as perceptual learning, but the evidence suggests that they often attribute those features to talker-specific idiosyncrasies. This may also be the case in L2 listening, but since L2 users are more likely to encounter unfamiliar lects shared by many talkers, they might be more open to the possibility that a second talker would share the same features.
We explored this hypothesis by presenting proficient, late Dutch-English bilinguals, residing in the Netherlands, with English speech exhibiting a vowel merger and a consonant merger. /ɪ/ and /ɛ/ were merged, either in favor of [ɛ] (e.g. pitcher → p[ɛ]tcher) or [ɪ] (e.g. ketchup → k[ɪ]tchup), counterbalanced across participants, and /s/ and /f/ were merged, either in favor of [s] (perfect → per[s]ect) or [f] (mustard → mu[f]tard). Participants were familiarized with the novel lects via sentences produced by a single talker. Learning was then assessed via a cross-modal priming task in which participants made lexical decisions on visual targets preceded by matching or mismatching auditory words (with or without the merged phonemes). Words exhibiting the mergers initially produced weaker priming, which strengthened throughout the task, demonstrating learning of the unfamiliar variation. The speech of a second talker, exhibiting the same mergers, was then introduced in a second cross-modal priming task. Words with the merger immediately yielded strong priming, suggesting that listeners had formed the expectation that the second talker's speech would exhibit the same features.
Citation: Carlson, M. T., Otto, C., Schuhmann, K., & McQueen, J. M. (2019, June). Cross-talker perceptual learning in a second language. Paper presented at the 12th International Symposium on Bilingualism, Edmonton, Alberta, Canada.
PIRE fellow Carly Danielson presents at the 60th Annual Meeting of the Psychonomic Society.
Research shows that native-accented speech is easier to comprehend than foreign-accented speech. Most previous studies, however, presented speech in isolation. We examined how faces cuing the speaker's ethnicity create expectations about upcoming speech, and how this impacts the comprehension of American- and Chinese-accented English. Caucasian American monolinguals listened to American-accented and Chinese-accented sentences, each preceded by a picture of an Asian face or a Caucasian face, yielding two congruent face-accent conditions (Caucasian face/American accent; Asian face/Chinese accent) and two incongruent face-accent conditions (Asian face/American accent; Caucasian face/Chinese accent). Immediately after hearing each sentence, listeners transcribed it. For American-accented sentences, transcription accuracy was lower when preceded by an Asian face than by a Caucasian face. For Chinese-accented sentences, transcription accuracy did not differ between Caucasian and Asian faces. This indicates that faces cuing ethnicity trick our ears for native-accented, but not for foreign-accented, speech. Results will be discussed in terms of reverse linguistic stereotyping and accent-driven asymmetries in face-accent processing.
Citation: Danielson, C., Fernandez, C. B., & Van Hell, J. G. (2019). Faces can trick your ears: Speaker identity affects native-accented but not foreign-accented speech. Poster presented at the 60th Annual Meeting of the Psychonomic Society, Montreal, Canada, November 14-17.