A video Introduction to the Lab, surveying several recent projects (circa Spring 2021), is available here.
Research Areas
How does vision determine the size, shape and boundaries of objects in our environment? Research in my laboratory centers on various aspects of visual perception and the visual control of action. Recent research on the visual system has grown into an exciting collaboration among psychologists, physiologists, computer scientists, and mathematicians. My research continues to blur the lines between these fields in two ways. First, traditional psychophysical methods are enhanced using advanced computer graphics and image processing techniques for stimulus generation and analysis. Second, both mathematical methods and computer simulations are used to model the psychophysical results. As much as possible, the simulation models attempt to reflect a feasible physiological implementation, as I have a strong interest in neural network models of vision. Next, I describe each broad area of my research in turn.
Sensory decision-making. I study how observers make perceptual decisions under uncertainty. Sensory signals are noisy, and an ideal observer will combine such signals with knowledge of their uncertainty, prior expectations, and knowledge of potential outcome- and decision-contingent rewards to guide decisions. We ask whether humans act as ideal decision-makers and, if not, where compromises are made or heuristics used. We have shown that orientation estimation appears to be consistent with the ideal-observer model and that humans use a prior distribution of orientations that matches environmental statistics. We examine sequential effects as the observer changes her decision criterion from trial to trial to track changing perceptual categories or prior probabilities. We also examine how payoffs and priors affect both the (type I) judgment and a subsequent confidence (type II) response. We have developed a new model of how sensory evidence is accumulated over time, which has implications for modeling reaction-time and cued-response tasks.
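As an illustration of the ideal-observer logic sketched above, here is a minimal example (not the lab's actual model; the function name and payoff parameterization are my own) of how priors and payoffs shift the optimal decision criterion in a standard equal-variance Gaussian signal-detection setting:

```python
import math

def optimal_criterion(d_prime, p_signal=0.5, v_hit=1.0, v_cr=1.0,
                      c_miss=0.0, c_fa=0.0):
    """Optimal criterion (on the internal-response axis) for an
    equal-variance Gaussian signal-detection model.  The ideal observer
    responds 'signal' when the likelihood ratio exceeds
        beta = (P(noise)/P(signal)) * (V_cr + C_fa) / (V_hit + C_miss),
    which, with noise mean 0 and signal mean d', places the criterion at
    d'/2 + ln(beta)/d'."""
    beta = ((1.0 - p_signal) / p_signal) * ((v_cr + c_fa) / (v_hit + c_miss))
    return d_prime / 2.0 + math.log(beta) / d_prime
```

With equal priors and symmetric payoffs the criterion sits midway between the two distribution means; raising the prior probability of the signal category shifts it toward more "signal" responses, the kind of criterion adjustment the trial-to-trial tracking experiments probe.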
Sample publications:
- Norton, E. H., Fleming, S. M., Daw, N. D. & Landy, M. S. (2017). Suboptimal criterion learning in static and dynamic environments. PLoS Computational Biology, 13(1):e1005304. doi:10.1371/journal.pcbi.1005304
- Norton, E. H., Acerbi, L., Ma, W. J. & Landy, M. S. (2019). Human online adaptation to changes in prior probability. PLoS Computational Biology, 15(7):e1006681. doi:10.1371/journal.pcbi.1006681
- Locke, S. M., Gaffin-Cahn, E., Hosseinizaveh, N., Mamassian, P. & Landy, M. S. (2020). Priors and payoffs in confidence judgments. Attention, Perception, & Psychophysics. doi:10.3758/s13414-020-02018-x
Sensory cue integration. Perhaps I am best known for my work on the integration of multiple sources of sensory information. This work began in collaboration with my NYU colleague, Larry Maloney. We developed an ideal-observer model of cue integration as well as a psychophysical approach to estimating cue reliability and the cue-combination rule used by observers. We often find that cues are combined linearly with near-optimal weights that respond to changes in cue reliability on a trial-by-trial basis. We have found cue integration to be mandatory in some circumstances, so that observers appear not to have access to information from individual cues, even when it would improve performance.
See my interview on this topic for NPR's Science Friday, October 2012.
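The linear, reliability-weighted combination rule described above has a simple closed form for independent Gaussian cues. A minimal sketch (illustrative, not the lab's code):

```python
def combine_cues(estimates, sigmas):
    """Reliability-weighted linear cue combination, the ideal observer for
    independent Gaussian cues: each cue receives a weight proportional to
    its reliability r_i = 1/sigma_i**2, and the fused estimate has variance
    1/sum(r_i), so it is never less reliable than the best single cue."""
    reliabilities = [1.0 / s**2 for s in sigmas]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    fused = sum(w * x for w, x in zip(weights, estimates))
    fused_sigma = (1.0 / total) ** 0.5
    return fused, fused_sigma
```

For example, a texture cue signaling 10 cm of depth (sigma 2) combined with a disparity cue signaling 12 cm (sigma 1) yields a fused estimate of 11.6 cm, pulled toward the more reliable cue, with smaller uncertainty than either cue alone.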
Sample publications:
- Landy, M. S., Maloney, L. T., Johnston, E. B. & Young, M. J. (1995). Measurement and modeling of depth cue combination: In defense of weak fusion. Vision Research, 35, 389-412.
- Hillis, J. M., Ernst, M. O., Banks, M. S. & Landy, M. S. (2002). Combining sensory information: mandatory fusion within, but not between, senses. Science, 298, 1627-1630.
- Ganmor, E., Landy, M. S. & Simoncelli, E. P. (2015). Near-optimal integration of orientation information across saccades. Journal of Vision, 15(16):8, 1-12.
- Saarela, T. & Landy, M. S. (2015). Integration of feature dimensions but failure of attentional selection in object recognition. Current Biology, 25, 920-927.
- Badde, S., Navarro, K. T. & Landy, M. S. (2020). Joint visual-tactile attention enhances integration and recalibration by increasing prior expectations of visual-tactile correspondence. Cognition, 197, 104170.
Spatial vision and texture perception. Sometimes when one texture pattern is placed on a background of another, the two segregate quickly and seemingly effortlessly into foreground and background. Other times they do not. Why is this? What sequence of linear and nonlinear image transformations leads to this variation in texture segregation performance? Our research in this area consists of both psychophysical experiments and computational modeling to determine the details of the visual machinery used to code, interpret and segregate texture patterns. We have also looked at the identification of shapes defined by texture (e.g., letters) and the estimation of texture properties (e.g., surface roughness) in 3D-rendered scenes. We have examined the cortical coding of 2nd-order patterns by looking for orientation-selective adaptation of responses to 1st- and 2nd-order patterns using functional magnetic resonance imaging (fMRI) in collaboration with David Heeger (NYU). We have also developed and explored a new model of cortical pattern adaptation, which has implications for computational theory, physiology and perception.
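The sequence of linear and nonlinear transformations at issue here is often modeled as a filter-rectify-filter (linear-nonlinear-linear) cascade. Here is a one-dimensional sketch with hypothetical filter choices, not the specific model from our papers:

```python
import numpy as np

def frf_response(image_row, f1, f2):
    """Filter-rectify-filter cascade for second-order (texture-defined)
    structure: a fine-scale linear filter, a pointwise rectifying
    nonlinearity, then a coarse-scale linear filter that responds to
    modulations of local contrast that no purely linear filter can see."""
    stage1 = np.convolve(image_row, f1, mode="same")  # linear: fine-scale filter
    rectified = np.abs(stage1)                        # nonlinear: full-wave rectify
    stage2 = np.convolve(rectified, f2, mode="same")  # linear: coarse-scale filter
    return stage2
```

Applied to a fine carrier grating whose contrast differs between two regions, the second-stage output is larger in the high-contrast region, signaling the texture boundary even though the mean luminance is identical everywhere.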
Sample publications:
- Larsson, J., Heeger, D. J. & Landy, M. S. (2010). Orientation selectivity of motion-boundary responses in human visual cortex. Journal of Neurophysiology, 104, 2940-2950.
- Wang, H. X., Heeger, D. J. & Landy, M. S. (2012). Responses to second-order texture modulations undergo surround suppression. Vision Research, 62, 192-200.
- Westrick, Z. M., Henry, C. A. & Landy, M. S. (2013). Inconsistent channel bandwidth estimates suggest winner-take-all nonlinearity in second-order vision. Vision Research, 81, 58-68.
- Westrick, Z. M., Heeger, D. J. & Landy, M. S. (2016). Pattern adaptation and normalization reweighting. Journal of Neuroscience, 36, 9805-9816.
Visually guided action. We have also applied statistical decision theory to modeling visuo-motor control. This began as an extension of my collaboration with Larry Maloney. We were concerned with the notion of perception as optimal, when optimality itself depends on the choice of cost function. We wondered whether behavior would remain optimal when the cost function was known, i.e., was imposed by the experimenter. In this research, subjects perform pointing or other tasks under tight time constraints. Subjects earn points (and eventually, money) for fast, accurate performance of the task (pointing at a target region), but lose points if they respond late or point toward penalty regions. By measuring outcome uncertainty (the variance in motor outcome), we can compute the optimal aim point for any configuration of payoff and penalty regions and values. In a variety of situations, subjects are optimal or near-optimal in this task. That is, they earn as many points as would have been earned by an ideal movement planner having the same movement variability as the subject. Subjects appear to have available an estimate of their movement variability and take it into account in movement planning, even in situations in which that variability has been increased (artificially) by the experimenter. More recently, we have studied movement planning for reaches and saccadic eye movements, using both learning and adaptation experiments to study the coordinate systems in which movements are planned.
Sample publications:
- Trommershäuser, J., Maloney, L. T. & Landy, M. S. (2008). Decision-making and movement planning. Trends in Cognitive Sciences, 12, 291-297.
- Landy, M. S., Trommershäuser, J. & Daw, N. D. (2012). Dynamic estimation of task-relevant variance in movement under risk. Journal of Neuroscience, 32, 12702-12711.
- Wolpert, D. M. & Landy, M. S. (2012). Motor control is decision-making. Current Opinion in Neurobiology, 22, 996-1003.
- Hudson, T. E. & Landy, M. S. (2016). Sinusoidal error perturbation reveals multiple coordinate systems for sensorimotor adaptation. Vision Research, 119, 82-98.
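The optimal aim point described above can be estimated numerically: given the endpoint distribution, compute expected gain as a function of aim and pick the maximum. A Monte-Carlo sketch under the assumption of isotropic Gaussian motor noise (parameter names and values are illustrative, not from any experiment):

```python
import numpy as np

def expected_gain(aim, sigma, target_c, target_r, target_v,
                  pen_c, pen_r, pen_v, n=100_000, seed=0):
    """Monte-Carlo estimate of expected gain for a given 2D aim point,
    assuming movement endpoints scatter around the aim with isotropic
    Gaussian noise (std sigma).  Endpoints inside the target circle earn
    target_v; endpoints inside the penalty circle earn pen_v (typically
    negative); the overlap region earns both."""
    rng = np.random.default_rng(seed)
    endpoints = rng.normal(np.asarray(aim, float), sigma, size=(n, 2))
    gain = np.zeros(n)
    gain += target_v * (np.linalg.norm(endpoints - np.asarray(target_c), axis=1) < target_r)
    gain += pen_v * (np.linalg.norm(endpoints - np.asarray(pen_c), axis=1) < pen_r)
    return gain.mean()
```

With a heavily penalized region overlapping the target on one side, expected gain is higher for an aim point shifted away from the penalty than for aiming at the target center, which is the shift observed in near-optimal subjects.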
Depth perception. I am interested in the details of how the visual system determines depth and object shape using a variety of visual cues. I have done psychophysics coupled with modeling on several individual depth cues (as well as their combination, discussed above), including structure from motion, binocular stereopsis and pictorial depth cues (texture, contour). The work on binocular stereopsis looks at the entire process, from the low-level computation of binocular disparity through the scaling of disparities to estimate absolute depth and interpolation of local measurements to form perceptual contours and surfaces. Studies on the pictorial cues center on the prior constraints (i.e., Bayesian priors) used to resolve perceptual ambiguities.
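The disparity-scaling step mentioned above reflects standard viewing geometry: under a small-angle approximation, the depth interval corresponding to a fixed relative disparity grows roughly with the square of viewing distance, which is why absolute-distance information is needed to interpret raw disparities. A sketch of the textbook approximation (not a model from these papers):

```python
def depth_from_disparity(delta_rad, distance_m, iod_m=0.063):
    """Small-angle approximation for disparity scaling: the depth interval
    (meters) signaled by a relative binocular disparity delta_rad (radians)
    at viewing distance distance_m is approximately
        depth ~= delta * D**2 / I,
    where I is the interocular distance (0.063 m is a typical adult value)."""
    return delta_rad * distance_m**2 / iod_m
```

Doubling the viewing distance quadruples the depth interval signaled by the same disparity, so a disparity signal left unscaled would badly misestimate depth.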
Sample publications:
- Mamassian, P. & Landy, M. S. (2001). Interaction of visual prior constraints. Vision Research, 41, 2653-2688.
- Brenner, E., Smeets, J. B. J. & Landy, M. S. (2001). How vertical disparities assist judgements of distance. Vision Research, 41, 3455-3465.
- Warren, P. A., Maloney, L. T. & Landy, M. S. (2004). Interpolating sampled contours in 3D: Perturbation analysis. Vision Research, 44, 815-832.
- Banks, M. S., Gepshtein, S. & Landy, M. S. (2004). Why is stereoresolution so low? Journal of Neuroscience, 24, 2077-2089.