Current Research

My research focuses on visual perception. I study how the visual system might construct and maintain a relatively constant perception of the visual world from the sensory signal incident at the eye. I approach this problem psychophysically, computationally, and physiologically. My experimental stimuli range from simple (e.g., computer-generated color patches) to complex (e.g., real-world scenes and realistic images). I am currently studying lightness and color perception.


Psychophysics

Phenomenally, we know that in the real world, color and lightness constancy are very good. For example, we perceive bananas as a stable yellow whether we see them outside under the sun or inside under fluorescent light (a change in illumination), and whether we see them surrounded by a green tree or by other bananas (a change in the reflectance properties of the local surround). However, there are also times when constancy fails (at dusk, for example). An important goal of vision research is to characterize when constancy occurs and on which variables it depends. In general, we know that constancy is better when scenes are more complex and worse when scenes are highly reduced.

I am currently working on two projects that aim to quantify the effects of different scene variables on lightness perception. The first project uses real, illuminated objects to explore the effect of local contrast on lightness perception when that contrast is changed either by manipulating reflectance or by manipulating the incident illumination via scene geometry; a sketch of the logic follows below. Click here for more details. The second project involves the use of a high-dynamic-range display to explore the relationship between perceived lightness and the local surround. Click here for more details.
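To make the logic of the first project concrete, here is a minimal Python sketch (illustrative only, not the experiments' actual code). Because the luminance of a matte surface is the product of its reflectance and the illumination falling on it, the very same local contrast can be produced either by a reflectance change or by an illumination change:

# Minimal sketch (illustrative, not the lab's code): the same local
# contrast can arise from a reflectance change or an illumination change,
# because luminance is the product of reflectance and illuminance.

def luminance(reflectance, illuminance):
    """Luminance of a matte surface: reflectance x illuminance."""
    return reflectance * illuminance

def weber_contrast(target_lum, surround_lum):
    """Weber contrast of a target against its local surround."""
    return (target_lum - surround_lum) / surround_lum

# Case 1: same illuminant; the surround's reflectance is half the target's.
c_reflectance = weber_contrast(luminance(0.4, 100.0), luminance(0.2, 100.0))

# Case 2: same reflectance; scene geometry halves the surround's illumination.
c_illumination = weber_contrast(luminance(0.4, 100.0), luminance(0.4, 50.0))

print(c_reflectance, c_illumination)  # both 1.0: identical local contrast

The psychophysical question is then whether perceived lightness tracks the contrast itself or depends on which physical variable produced it.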


[Image: lab apparatus]




Computational Studies

I am also interested in modeling the visual perception of complex scenes. I take a computational approach, with the ultimate goal of creating a model that will take any real image as input and return quantitative predictions about its perception. So far I have focused on models of lightness perception that mimic vision in the sense that they take the retinal image as input and, from it, estimate the reflectance (or perceived lightness) and the illumination at each location in the world. I work in a Bayesian framework, so the model's estimates are constrained by observations about which surfaces and illuminants are likely to be present in the world. I find the Bayesian framework attractive because the visual system evolved in a world with particular statistical properties; given the ambiguity inherent in the retinal image, it seems sensible to use real-world observations to constrain the solutions. Click here for more information on my Bayesian modeling.
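To give a flavor of this approach, here is a minimal Python sketch of Bayesian (MAP) estimation at a single image location. The particular priors and numbers are illustrative assumptions of mine, not the actual model: luminance is taken to be reflectance times illuminance plus Gaussian noise, with log-Gaussian priors favoring mid-gray surfaces and typical light levels.

# Minimal sketch (illustrative assumptions, not the actual model):
# MAP estimation of reflectance and illuminance from one luminance value.
import numpy as np

def map_estimate(observed_lum, noise_sd=2.0):
    """Grid-search MAP estimate of (reflectance, illuminance).

    The likelihood alone is ambiguous: any pair whose product matches the
    observed luminance fits equally well. The priors break the tie."""
    reflectances = np.linspace(0.03, 0.9, 200)    # plausible matte surfaces
    illuminances = np.linspace(10.0, 500.0, 200)  # plausible light levels
    R, I = np.meshgrid(reflectances, illuminances)

    # Log-likelihood: Gaussian noise around the rendered luminance R * I.
    log_like = -0.5 * ((observed_lum - R * I) / noise_sd) ** 2

    # Log-priors (illustrative): log-Gaussian preferences for mid-gray
    # reflectances and for illuminances near a typical level of 100.
    log_prior_R = -0.5 * ((np.log(R) - np.log(0.2)) / 0.6) ** 2
    log_prior_I = -0.5 * ((np.log(I) - np.log(100.0)) / 0.8) ** 2

    log_post = log_like + log_prior_R + log_prior_I
    i, j = np.unravel_index(np.argmax(log_post), log_post.shape)
    return R[i, j], I[i, j]

# A luminance of 40 could be a dark surface brightly lit or a light
# surface dimly lit; the priors select the most probable split.
print(map_estimate(40.0))

A grid search keeps the example transparent; the point it illustrates is that the likelihood alone cannot split a luminance into reflectance and illumination, and real-world priors are what resolve the ambiguity.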


