How do we make sense of the mixture of sounds we hear in ordinary social settings?
What sound attributes allow us to segregate sounds from a mixture?
How does the brain direct attention to select a desired sound?
Why do even normal-hearing listeners differ in their ability to selectively attend?
Research in the Auditory Neuroscience Laboratory addresses various aspects of these questions, studying everything from basic perceptual sensitivity to the ways in which different brain regions coordinate their activity during complex tasks. We use a range of approaches to explore these issues, since each method for studying the brain has different limitations as well as different strengths.
Behavioral Experiments
Behavioral experiments teach us which acoustic attributes are important for performing different tasks and what listeners are capable of achieving. We present listeners with carefully controlled sounds over headphones or from loudspeakers and explore their ability to detect sounds, identify stimuli, and extract meaning from the sounds they hear.
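To make one typical analysis step concrete, below is a minimal sketch of fitting a logistic psychometric function to proportion-correct detection data. The data values, function form, and parameter choices are synthetic illustrations, not results from our studies.

```python
# Minimal sketch: fit a logistic psychometric function to 2AFC detection
# data. All numbers below are synthetic placeholders for illustration.
import numpy as np
from scipy.optimize import curve_fit

def logistic(level, threshold, slope):
    """Proportion correct vs. signal level; floor at 0.5 for 2AFC guessing."""
    return 0.5 + 0.5 / (1.0 + np.exp(-slope * (level - threshold)))

levels = np.array([-10.0, -5.0, 0.0, 5.0, 10.0, 15.0])   # signal level (dB)
p_correct = np.array([0.52, 0.58, 0.71, 0.88, 0.96, 0.99])

(threshold, slope), _ = curve_fit(logistic, levels, p_correct, p0=[0.0, 0.3])
print(f"estimated threshold: {threshold:.1f} dB, slope: {slope:.2f}")
```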
The ability to selectively attend differs greatly across listeners, yet these individual differences are unrelated to listener age. Reverberant energy degrades performance for all listeners. (Dorea Ruggles)
Tone sequences and natural sound sequences are stored in memory differently, as revealed by a reversal in performance between the two stimulus types across tasks. For tones, which are naturally stored as sequences, it is easy to detect a reversal of sequence order, but this is hard for natural sounds; for natural sounds, it is easy to detect whether a particular item was present in a sequence, but this is hard for tones. (Lenny Varghese)
In an ambiguous sound mixture, a given sound component can contribute differently to where a listener perceives a composite auditory object and to the object's perceived spectro-temporal content. Averaged across subjects, the "what" and "where" contributions are roughly equal (right panel: mean data fall near zero); however, for some subjects the contribution to where the composite object is located dominates (data below the diagonal in the bottom left panel, for S5), for others the opposite is true (data above the diagonal in the top left panel, for S3), while yet others weight what and where information equally (data along the diagonal in the middle left panel, for S4). (Andrew Schwartz)
Neuroelectric Imaging
Electroencephalography (EEG) and magnetoencephalography (MEG) allow us to measure synchronous electrical activity in the brain with great temporal precision. We combine anatomical MRI scans with M/EEG data to estimate which cortical regions generate the observed activity. In addition to simple power analyses, we also examine functional connectivity and the phase-locking of ongoing activity to stimulus inputs. Responses from the auditory brainstem can be extracted from the same measurements, depending on the frequency range analyzed.
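As one concrete illustration, below is a minimal sketch of a phase-locking value (PLV) computation between two band-limited signals, a common way to quantify phase coupling in M/EEG analyses. The signals, frequencies, and noise levels are synthetic placeholders rather than lab data.

```python
# Minimal sketch: phase-locking value (PLV) between two narrowband signals,
# here computed across time via the Hilbert transform. Signals are synthetic.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
fs = 500.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 2.0, 1 / fs)

# Two noisy 10-Hz oscillations with a roughly constant phase offset.
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t + np.pi / 4) + 0.5 * rng.standard_normal(t.size)

phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
plv = np.abs(np.mean(np.exp(1j * phase_diff)))   # 1 = perfect locking, 0 = none
print(f"PLV: {plv:.2f}")
```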
The ability of individual listeners to perform a selective attention task is correlated with the strength of brainstem encoding of periodic sound structure. (Dorea Ruggles and Hari Bharadwaj)
Combined M/EEG activity reveals attentional modulation of responses in the auditory cortex contralateral to the attended source. Here, in response to the same pair of competing sound streams, neural power at a given frequency is higher when that frequency is contained in the attended rather than the unattended stream. (Hari Bharadwaj and Adrian KC Lee)
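The underlying logic can be illustrated with a toy frequency-tagging computation: when the attended stream is followed more strongly, spectral power at its rate should exceed power at the ignored stream's rate. The rates, response model, and numbers below are hypothetical stand-ins, not our actual stimuli or recordings.

```python
# Toy frequency-tagging check: compare spectral power at the attended vs.
# ignored stream's rate. The rates and response model are hypothetical.
import numpy as np

fs = 1000.0
t = np.arange(0, 10.0, 1 / fs)
f_attended, f_ignored = 4.0, 7.0             # hypothetical tag rates (Hz)

# Fake "neural" response: stronger following of the attended rate, plus noise.
rng = np.random.default_rng(1)
resp = (1.0 * np.sin(2 * np.pi * f_attended * t)
        + 0.4 * np.sin(2 * np.pi * f_ignored * t)
        + rng.standard_normal(t.size))

power = np.abs(np.fft.rfft(resp)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def power_at(f):
    return power[np.argmin(np.abs(freqs - f))]

print(f"power at attended rate: {power_at(f_attended):.0f}; "
      f"at ignored rate: {power_at(f_ignored):.0f}")
```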
Once a listener knows where to listen in an upcoming stimulus, activity in auditory cortex becomes correlated with activity in areas associated with attentional control, both within (left) and across (right) hemispheres. (Sid Rajaram and Adrian KC Lee)
Computational Approaches
Mathematics is important to many aspects of our work. We analyze the acoustics of the signals reaching the ears in order to deduce which properties are perceptually relevant. We use computational modeling to test our understanding of how information is encoded in various parts of the neural pathway. To make sense of inherently noisy neuroelectric data, we use various methods to extract reliable information, infer the sites of underlying neural activity, and explore how different neural areas coordinate their responses.
In reverberant space, sound localization in distance (left) and in direction (right) relies on different aspects of the stimuli a listener receives. Perceived distance is closely related to the direct-to-reverberant energy ratio (bottom left), while azimuthal direction is closely related to interaural level differences (top right) and interaural time differences (middle right). (BGS-C)
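For illustration, here is a minimal sketch of how two of the binaural cues named above, the interaural level difference (ILD) and the interaural time difference (ITD), might be estimated from a stereo signal. The synthetic delay and gain are arbitrary illustrative choices, not values from the study.

```python
# Minimal sketch: estimate ILD (RMS level ratio in dB) and ITD (lag of the
# peak cross-correlation) from a synthetic stereo signal.
import numpy as np

fs = 44100
rng = np.random.default_rng(2)
src = rng.standard_normal(fs // 10)          # 100 ms of noise

delay = 20                                   # samples (~0.45 ms): right ear lags
left = src
right = 0.7 * np.concatenate([np.zeros(delay), src[:-delay]])

def rms(s):
    return np.sqrt(np.mean(s ** 2))

ild_db = 20 * np.log10(rms(left) / rms(right))

lags = np.arange(-len(right) + 1, len(left))
lag = lags[np.argmax(np.correlate(left, right, mode="full"))]
print(f"ILD: {ild_db:.1f} dB; ITD: {abs(lag) / fs * 1e3:.2f} ms "
      f"(lag = {lag} samples; negative = left leads in this convention)")
```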
For two narrowband (top) or wideband (bottom) clicks, the relative intensity of the lead click has a large effect on localization performance (right panels). For narrowband clicks, the interclick interval also has a large effect on performance; the effect is smaller for wideband clicks. A population model of brainstem neurons accounts for all of these effects. (Jing Xia)
In analyzing M/EEG functional connectivity data, it is critical to correct for point spread of source activity, because the mathematical inverse used to estimate where neural sources are located is under-constrained and imperfect. How best to do this is still an area of debate. (Sid Rajaram and Adrian KC Lee)
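One widely used mitigation, shown in the sketch below, is to base connectivity on the imaginary part of coherency, which purely zero-lag (leakage-driven) coupling cannot produce (Nolte et al., 2004). This is an illustration of the general idea on synthetic data, not the specific correction used in this work.

```python
# Sketch of the imaginary-coherency idea on synthetic data: two "sources"
# that share leaked activity show high coherence magnitude but near-zero
# imaginary coherency, because leakage is instantaneous (zero-lag).
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_samples, bin_idx = 100, 512, 10   # one frequency bin, arbitrary

sxy, sxx, syy = 0.0, 0.0, 0.0                 # cross/auto spectra, accumulated
for _ in range(n_trials):
    common = rng.standard_normal(n_samples)   # shared (leaked) activity
    x = common + 0.3 * rng.standard_normal(n_samples)
    y = common + 0.3 * rng.standard_normal(n_samples)
    fx, fy = np.fft.rfft(x)[bin_idx], np.fft.rfft(y)[bin_idx]
    sxy += fx * np.conj(fy)
    sxx += np.abs(fx) ** 2
    syy += np.abs(fy) ** 2

coherency = sxy / np.sqrt(sxx * syy)
print(f"|coherency|: {np.abs(coherency):.2f}; imag: {coherency.imag:.2f}")
```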
Collaboration
We can't be experts in every approach, but we are lucky enough to know many talented collaborators interested in the kinds of questions we want to answer. With our collaborators, we employ fMRI, animal behavior, electrode neurophysiology, computational models, and other methods to explore auditory neuroscience.
In a collaboration with Prof. Adrian KC Lee (U Wash), we are using combined M/EEG to investigate how attention to different acoustic features alters the attentional control network. Holding the stimuli constant, we vary task demands and explore how this affects network activity during the behavior. In this image, we show which neural regions in the left hemisphere are more active when listeners direct attention to source location rather than source pitch. (Jing Xia, Sid Rajaram, and Hari Bharadwaj)
Work with Prof. David Somers' group (BU Psychology) compares fMRI activity during attentionally demanding auditory and visual tasks. In addition to standard power analyses, we also explore multi-voxel pattern classification techniques as well as functional connectivity analysis. Here, functional connectivity from parietal areas reveals the attentional network in resting-state data. (Lingqiang Kong)
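As a sketch of what pattern classification involves, the snippet below runs cross-validated decoding of a two-condition label from voxel patterns. The array shapes and classifier choice are assumptions for illustration; the data are random placeholders, so accuracy should hover near chance (0.5), whereas a real attentional effect would push it above.

```python
# Sketch of multi-voxel pattern classification: cross-validated decoding of
# a two-condition label from voxel patterns. Data are random placeholders,
# so accuracy should sit near chance (0.5).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
n_trials, n_voxels = 80, 200                   # assumed shapes, illustrative
X = rng.standard_normal((n_trials, n_voxels))  # one voxel pattern per trial
y = rng.integers(0, 2, n_trials)               # condition labels (0 or 1)

acc = cross_val_score(LinearSVC(dual=False), X, y, cv=5).mean()
print(f"mean cross-validated accuracy: {acc:.2f}")
```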
With Prof. Tim Gardner's laboratory (BU Biology), we are exploring how structured sound, such as bird song, can be efficiently encoded, both from a signal-processing perspective and in the avian forebrain (using electrode neurophysiology). This image shows a "contour" representation of sound, in which different time-frequency scales of analysis lead to different structured representations, each emphasizing unique aspects of a natural bird song's structure. (Yoonseob Lim)
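The basic intuition behind multiple time-frequency scales can be shown with a toy computation: the same sound analyzed with short versus long windows trades temporal for spectral resolution, so each scale emphasizes different structure. The signal and window lengths below are arbitrary; the contour extraction itself is beyond this sketch.

```python
# Toy multi-scale analysis: the same sweep analyzed with a short vs. long
# window trades time for frequency resolution. Signal and windows arbitrary.
import numpy as np
from scipy.signal import spectrogram

fs = 22050
t = np.arange(0, 1.0, 1 / fs)
sweep = np.sin(2 * np.pi * (500.0 * t + 2000.0 * t ** 2))   # toy chirp

for nperseg in (128, 1024):                   # short vs. long analysis scale
    freqs, times, sxx = spectrogram(sweep, fs=fs, nperseg=nperseg)
    print(f"window {nperseg:4d}: {freqs.size} freq bins x {times.size} frames")
```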
In Prof. Kamal Sen's laboratory, we have been looking at how spatial location affects the coding of natural communication signals. Here, we see neural responses from avian forebrain to two different songs, each causing a different response pattern that is robust, regardless of spatial location. (Ross Maddox)
Physiological experiments in collaboration with Prof. Dan Polley (Eaton-Peabody Laboratory) are examining neural coding in auditory cortex. In this image, we show that mouse auditory cortex has extensive tonotopic organization. (Wei Guo)