How are visual stimuli encoded in human visual pathways?
We investigate how visual stimuli are encoded in visual pathways in the human brain, and in turn how the encoding process relates to visual perception. We work with computational models of visually driven signals measured with several instruments, including magnetic resonance imaging (MRI), electroencephalography (EEG), magnetoencephalography (MEG), and electrocorticography (ECoG), as well as behavior (psychophysics).

One question we are investigating is: given a description of a visual stimulus, can we predict the response measured at different points in the visual pathways? Answering it requires modeling the computations performed by the visual pathways. By building better models, we can begin to answer questions like: How does the visual system integrate inputs that are spread out over space and time? How do these computations determine what we can (and cannot) see? How does damage to the visual pathways limit and affect vision?

A related question is: what are the circuit properties that give rise to fMRI, EEG, MEG, and ECoG signals in visual cortex? Answering this question requires understanding what neural information can be obtained with these different instruments, how those measurements differ from one another, and how they relate to the visual input.
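The prediction problem described above is often approached with an encoding model: the measured response at a site is expressed as a function of features computed from the stimulus, with weights fit from data. The sketch below is a minimal, hypothetical illustration of that idea using simulated data and a linear model; the feature dimensions, noise level, and train/test split are all assumptions for demonstration, not a description of our actual analyses.

```python
import numpy as np

# Hypothetical sketch: model the response at one measurement site as a
# weighted sum of stimulus features (a linearized encoding model).
rng = np.random.default_rng(0)

n_stim, n_feat = 200, 10
X = rng.standard_normal((n_stim, n_feat))            # stimulus feature vectors (assumed)
w_true = rng.standard_normal(n_feat)                 # simulated "neural" weights
y = X @ w_true + 0.1 * rng.standard_normal(n_stim)   # noisy simulated response

# Fit the weights on a training split, then predict held-out responses.
X_tr, X_te = X[:150], X[150:]
y_tr, y_te = y[:150], y[150:]
w_hat, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
y_pred = X_te @ w_hat

# Evaluate the model by correlating predicted and measured responses.
r = np.corrcoef(y_te, y_pred)[0, 1]
print(f"held-out prediction accuracy (r = {r:.2f})")
```

In practice the stimulus features would come from a model of visual computation (for example, spatially and temporally pooled filter outputs) rather than random vectors, and the fitted model would be evaluated on responses to novel stimuli.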