My research interests involve both invasive and non-invasive
techniques for brain-computer interfacing. A major research obstacle
for intracortical BCIs is the long-term, chronic
recording of neural units. Initial steps have been taken toward such
recordings, but I am interested in refining these techniques to
increase the reliability and stability of single- and multi-unit
recordings. Such refinements include improved electrode (hardware)
designs and robust spike detection and classification techniques
(software). I am also involved in the design and implementation of
sophisticated neural decoding algorithms, primarily for predicting
speech information (either acoustic or articulatory) from spiking
activity using continuous, adaptive filtering techniques.
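As an illustration of continuous, adaptive filtering for neural decoding, the sketch below implements a minimal linear Kalman filter that maps binned firing rates to a low-dimensional acoustic state (e.g., a pair of formant frequencies). This is a generic textbook formulation, not the exact decoder used in my work, and all matrices and the simulated data are hypothetical stand-ins:

```python
import numpy as np

def kalman_decode(observations, A, W, H, Q, x0, P0):
    """Causally estimate a continuous state from noisy neural observations.

    State model:       x_t = A x_{t-1} + w_t,  w_t ~ N(0, W)
    Observation model: y_t = H x_t + q_t,      q_t ~ N(0, Q)
    Returns the filtered state estimate at every time step.
    """
    x, P = x0.copy(), P0.copy()
    estimates = []
    for y in observations:
        # Predict forward one step under the state model.
        x = A @ x
        P = A @ P @ A.T + W
        # Update with the current neural observation.
        S = H @ P @ H.T + Q                      # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        x = x + K @ (y - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)

# Hypothetical demo: a 2-D acoustic state observed through 10 noisy units.
rng = np.random.default_rng(0)
T, n_units = 200, 10
A = 0.99 * np.eye(2)                 # slowly varying state
W = 0.01 * np.eye(2)
H = rng.normal(size=(n_units, 2))    # assumed tuning of each unit
Q = np.eye(n_units)
x_true = np.zeros((T, 2))
for t in range(1, T):
    x_true[t] = A @ x_true[t - 1] + rng.multivariate_normal(np.zeros(2), W)
y = x_true @ H.T + rng.normal(size=(T, n_units))
x_hat = kalman_decode(y, A, W, H, Q, np.zeros(2), np.eye(2))
```

Because the filter is recursive, each new bin of firing rates costs a fixed amount of computation, which is what makes this family of methods suitable for instantaneous acoustic feedback.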
My current research direction is aimed at non-invasive BCI techniques, using EEG to predict speech information (acoustic or articulatory, similar to the intracortical aims). The focus of the EEG BCI projects is to decode intended speech from covert speech production attempts by the BCI user. In addition, this research is primarily concerned with auditory feedback control mechanisms, as opposed to standard visual feedback techniques. The challenges of EEG-based BCI decoding differ somewhat from those of intracortical techniques, and include optimal feature selection (time series vs. amplitude/phase decomposition), the BCI control paradigm (synchronous vs. asynchronous EEG), subject BCI control strategies (e.g., motor imagery), and decoding methodology (continuous filtering vs. discrete classification).
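To make the amplitude/phase decomposition concrete: one standard way to extract such features from an EEG channel is to bandpass-filter the signal and take the analytic signal via the Hilbert transform. The band edges and sampling rate below are illustrative, not tied to any particular study of mine:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_amplitude_phase(eeg, fs, band):
    """Return instantaneous amplitude and phase of one EEG channel
    within a frequency band (e.g., the 8-12 Hz mu rhythm).

    eeg  : 1-D array of samples
    fs   : sampling rate in Hz
    band : (low, high) band edges in Hz
    """
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, eeg)        # zero-phase bandpass
    analytic = hilbert(filtered)          # analytic signal
    return np.abs(analytic), np.angle(analytic)

# Illustrative use: a pure 10 Hz oscillation sampled at 256 Hz.
fs = 256
t = np.arange(0, 4, 1.0 / fs)
signal = np.sin(2 * np.pi * 10 * t)
amplitude, phase = band_amplitude_phase(signal, fs, (8, 12))
```

Amplitude features of this kind underlie sensorimotor-rhythm control strategies such as motor imagery, while the phase component is often discarded or used for connectivity measures; which of the two carries more decodable speech information is exactly the kind of feature-selection question mentioned above.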
Last, I am interested in the development of modular neural prosthesis software frameworks capable of supporting intracortical signals, both discrete (spikes) and continuous (LFP, MUA), as well as intracranial (ECoG) and surface (EEG) neurophysiology. In addition, these software methods should be general enough for use in EMG-based, or alternative, non-verbal speech communication systems.
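One way to achieve that modularity is to hide the signal modality behind a common interface, so that decoders operate on feature arrays without knowing whether they came from spikes, field potentials, or EMG. The class names and dummy data sources below are hypothetical, sketched purely to show the design:

```python
from abc import ABC, abstractmethod
import numpy as np

class SignalSource(ABC):
    """Common interface for heterogeneous neurophysiology inputs."""

    def __init__(self, n_channels, rate_hz):
        self.n_channels = n_channels
        self.rate_hz = rate_hz

    @abstractmethod
    def read(self, n_samples):
        """Return an (n_samples, n_channels) array of features."""

class SpikeCountSource(SignalSource):
    """Discrete events binned into per-channel counts
    (random stand-in for real acquisition hardware)."""

    def read(self, n_samples):
        rng = np.random.default_rng(0)
        return rng.poisson(5.0, size=(n_samples, self.n_channels))

class ContinuousSource(SignalSource):
    """Continuous field potentials (LFP, ECoG, EEG) or EMG voltages
    (again a random stand-in)."""

    def read(self, n_samples):
        rng = np.random.default_rng(0)
        return rng.normal(size=(n_samples, self.n_channels))

def decode_frame(source, decoder):
    """One frame of a prosthesis loop: any source, any decoder callable."""
    return decoder(source.read(1))

spikes = SpikeCountSource(n_channels=16, rate_hz=100)
eeg = ContinuousSource(n_channels=32, rate_hz=256)
# The same decoding loop serves both modalities.
out_spikes = decode_frame(spikes, lambda x: x.mean())
out_eeg = decode_frame(eeg, lambda x: x.mean())
```

The payoff of the abstraction is that swapping an intracortical array for an EEG cap, or for surface EMG in a non-verbal communication system, changes only the `SignalSource` implementation, not the decoding pipeline.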
The focus of my dissertation [pdf|ps] was to evaluate the characteristics of premotor/primary motor cortex neurons involved in the production of speech, using chronic microelectrode implantation of the precentral gyrus, and to develop a neural prosthesis (or brain-computer interface) for speech restoration in conjunction with Neural Signals, Inc. The project involved prediction of intended speech utterances from a locked-in patient, who was completely paralyzed except for some voluntary eye movements, with instantaneous acoustic feedback of the synthesized predicted speech. A summary of preliminary results was presented at the 2007 annual meeting of the Society for Neuroscience. [click here for pubs]
December 2009: Wired Magazine, MSNBC News, Popular Science, Discovery News
November 2008: Scientific American Mind, Nature News
October 2008: Esquire Magazine, Discover Magazine
July 2008: New Scientist, MIT Technology Review, Boston Globe
February 2008: Dana Foundation: BrainWork
November 2007: New Scientist
My previous research involved development and experimental simulation of a computational model of speech production, the Directions Into Velocities of Articulators (DIVA) model, created by Dr. Frank Guenther of the Department of Cognitive and Neural Systems at Boston University. I was responsible for creation of a standardized graphical user interface to the model to aid in collaborative activity with speech researchers around the world. The latest version of the DIVA model is available for download.
I am broadly interested in the field of computer graphics programming, specifically methods for modeling natural phenomena and for parallel processing on the GPU. Follow this link for some examples of graphics programming completed in a graduate-level computer graphics course at BU.
My particular interest lies in exploiting properties of the human visual system as an underlying framework for artificial computer vision. Specifically, the space- and time-variant sampling of human retinal circuitry provides efficient high-resolution imaging over the entire visual field. As part of my graduate coursework, I developed a C/C++ program simulating two layers of retinal/LGN integrate-and-fire neurons for space- and time-variant image processing. It takes advantage of large, quickly integrating receptive fields in the periphery and small, slowly integrating receptive fields in the fovea. The code can be found at: [presentation pdf, demo (requires OpenGL and GLUT)].
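The core idea of that simulation can be paraphrased compactly. The sketch below (in Python rather than the original C/C++, with all parameter values chosen for illustration) builds one space-variant leaky integrate-and-fire layer in which receptive-field radius grows, and the membrane time constant shrinks, with eccentricity:

```python
import numpy as np

def retina_lif_response(image, tau_fovea=20.0, tau_periph=2.0,
                        rf_fovea=1, rf_periph=7, threshold=0.5, steps=50):
    """Space-variant LIF layer over a grayscale image.

    Each pixel-neuron pools input over a receptive field whose radius
    grows with eccentricity, and integrates with a time constant that
    shrinks with eccentricity (large, fast RFs in the periphery; small,
    slow RFs at the fovea). Returns spike counts per neuron.
    """
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ecc = np.hypot(yy - cy, xx - cx) / np.hypot(cy, cx)  # 0 fovea, 1 corner
    tau = tau_fovea + (tau_periph - tau_fovea) * ecc
    rf = np.rint(rf_fovea + (rf_periph - rf_fovea) * ecc).astype(int)

    # Pool each neuron's input over its eccentricity-dependent RF.
    drive = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            r = rf[i, j]
            drive[i, j] = image[max(0, i - r):i + r + 1,
                                max(0, j - r):j + r + 1].mean()

    # Leaky integrate-and-fire dynamics with reset to rest.
    v = np.zeros((h, w))
    spikes = np.zeros((h, w), dtype=int)
    for _ in range(steps):
        v += (drive - v) / tau
        fired = v >= threshold
        spikes += fired
        v[fired] = 0.0
    return spikes

# Uniform input: peripheral neurons (fast tau) out-fire foveal ones.
counts = retina_lif_response(np.ones((21, 21)))
```

Under a uniform stimulus the periphery responds quickly and coarsely while the fovea responds slowly over a small region, which is the division of labor the original two-layer retinal/LGN model exploited.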