Neural representations of speech: Decoding bottom-up acoustics and examining top-down effects using electroencephalography

McCall E. Sarrett, Bob McMurray, and Joseph C. Toscano
The Journal of the Acoustical Society of America, Vol. 150(4 Supplement), p. A311
October 2021
DOI: 10.1121/10.0008396


Abstract

The acoustics of spoken language are highly variable, yet most listeners easily extract meaningful information from the speech signal. Psycholinguistic work has revealed which acoustic dimensions are relevant when listeners categorize speech sounds, and how listeners use higher-level expectations to shift their categorization responses. However, the real-time neural mechanisms subserving these processes are not well understood. Two questions of interest are which perceptual distinctions are detectable in neural responses and whether higher-level information directly influences perceptual encoding. We present three electroencephalography (EEG) studies that address these issues. First, we examine perceptual encoding of speech sounds using machine-learning techniques (N = 27). We contrast two machine-learning approaches to analyzing EEG data and discuss methodological considerations for researchers using such techniques. We find that this approach can reveal neural sensitivity to phonetic contrasts that are indistinguishable in traditional EEG analyses. Second, we examine how top-down influences, such as sentence context (N = 31) or visual information (N = 33), affect perceptual encoding. We show that neural representations of speech sounds are influenced by listeners’ context-based expectations only in limited cases, specifically when acoustic cues are ambiguous. Finally, we compare how task design may affect linguistic processing and how more naturalistic tasks may lead to richer processing dynamics.
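
The abstract does not describe the decoding pipeline in detail. As an illustration only, the sketch below shows one widely used approach to detecting neural sensitivity to a phonetic contrast: time-point-by-time-point classification of single-trial EEG with cross-validation. The data, the logistic-regression classifier, and all parameters here are assumptions, not the authors' method.

# Hypothetical sketch: time-resolved decoding of a two-way phonetic
# contrast (e.g., /b/ vs. /p/) from single-trial EEG. Synthetic data
# stand in for real recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Simulated dataset: 200 trials x 32 channels x 100 time samples.
n_trials, n_channels, n_times = 200, 32, 100
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)
# Inject a weak class-dependent signal in a mid-latency window.
X[y == 1, :, 40:60] += 0.3

# Decode at each time point; chance accuracy is 0.5, so sustained
# above-chance performance indicates sensitivity to the contrast.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
accuracy = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
print("peak decoding accuracy: %.2f at sample %d"
      % (accuracy.max(), accuracy.argmax()))

Such per-time-point decoding can pick up distributed multivariate patterns that averaged event-related potential comparisons miss, which is one reason decoding approaches can reveal contrasts that traditional EEG analyses cannot.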
