Rapid access to speech gestures in perception: Evidence from choice and simple response time tasks.

Number 1317
Year 2003
Drawer 25
Entry Date 01/23/2008
Authors Carol Fowler, Julie Brown, Laura Sabadini and Jeffrey Weihing
Contact
Publication Journal of Memory and Language, 49, pp. 396-413
url http://www.haskins.yale.edu/Reprints/HL1317.pdf
Abstract Participants took part in two speech tasks. In both tasks, a model speaker produced vowel-consonant-vowels (VCVs) in which the initial vowel varied unpredictably in duration. In the simple response task, participants shadowed the initial vowel; when the model shifted to production of any of three CVs (/pa/, /ta/ or /ka/), participants produced a CV that they had been assigned to say (one of /pa/, /ta/ or /ka/). In the choice task, participants shadowed the initial vowel; when the model shifted to a CV, participants shadowed that as well. We found that, measured from the model’s onset of closure for the consonant to the participant’s closure onset, response times in the choice task exceeded those in the simple task by just 26 ms. This is much shorter than the canonical difference between simple and choice latencies [100-150 ms according to Luce (1986)] and is near the fastest simple times that Luce reports. The findings imply rapid access to articulatory speech information in the choice task. A second experiment found much longer choice times when the perception-production link for speech could not be exploited. A third experiment and an acoustic analysis verified that our measurement from closure in Experiment 1 provided a valid marker of speakers’ onsets of consonant production. A final experiment showed that shadowing responses are imitations of the model’s speech. We interpret the findings as evidence that listeners rapidly extract information about speakers’ articulatory gestures.
Notes