To /b/ or not to /b/: Characterizing the neural timecourse of speech perception
Abstract
Details
- Title
- To /b/ or not to /b/: Characterizing the neural timecourse of speech perception
- Creators
- McCall E Sarrett
- Contributors
- Bob McMurray (Advisor); Inyong Choi (Committee Member); Kristi Hendrickson (Committee Member); Jean Gordon (Committee Member); Jan Wessel (Committee Member); Eliot Hazeltine (Committee Member)
- Resource Type
- Dissertation
- Degree Awarded
- Doctor of Philosophy (PhD), University of Iowa
- Degree in
- Neuroscience
- Date degree season
- Autumn 2020
- DOI
- 10.17077/etd.005674
- Publisher
- University of Iowa
- Number of pages
- x, 154 pages
- Copyright
- Copyright 2020 McCall Evonne Sarrett
- Language
- English
- Description illustrations
- color illustrations
- Description bibliographic
- Includes bibliographical references (pages 146-154).
- Public Abstract (ETD)
In day-to-day conversation, listeners hear spoken words unfold at an incredibly quick rate, up to 7+ syllables per second. In the brain, a widespread, interconnected neural network is involved in language processing. To work efficiently, this network must carry out multiple functions simultaneously (and quickly!). That is, the brain must both process the acoustic signal and figure out how this acoustic information fits with the surrounding context (such as the preceding sentence, or the visual environment). The work of this dissertation used sentences or pictures that primed research participants to expect a certain word to follow—for example, “Honey is made by—(bees)” (Experiments 1 & 3) or a picture of bees (Experiment 2). We examined how the brain processes the auditory target word that followed at the end of a sentence (in this example, bees) or after a picture. We were interested in a few different cases: (1) when the target word matches what is expected (i.e., listeners expect bees and they hear bees), (2) when the target word does not match what is expected (i.e., listeners expect bees and they hear peas), and (3) when the target word is acoustically ambiguous relative to listeners’ expectations (i.e., they expect bees and they hear an unclear word that could have been bees or peas). We measured millisecond-by-millisecond brain activity to characterize how language processing unfolds over time. Our experiments indicate that the brain is highly sensitive to even slight changes in the acoustics of speech, and that higher-level processes are also affected by these small acoustic changes. Furthermore, we found that, in certain cases (when the target word is unclear), the brain uses expectations based on sentence or visual context to influence how auditory information is perceived.
- Academic Unit
- Interdisciplinary Graduate Program in Neuroscience; Center for Social Science Innovation
- Record Identifier
- 9984035694502771