

Research Interests

Visual Search:
In search experiments, observers typically search for a fixed image in an organized array. In everyday vision, observers search for objects in cluttered scenes. Are these fundamentally different tasks?

Search Template | Measuring Clutter | Searching in Clutter


Object Segmentation:
Clutter can obscure an object's shape (for an example, see the fish in the image below). If we recognize objects from their shape, how do we recognize objects in clutter?

Top-Down Segmentation | Cueing Object Parts | Recognition in Clutter


3D Surface Segmentation:
The Gestalt grouping rules of common fate and similarity make intuitive sense because many surfaces move rigidly and are spatially uniform. But when these surfaces are projected onto the retina, the result is usually a complex pattern of motions and textures. Can perceptual grouping deal with this complexity?

Motion | Texture




Search Template

When searching for a specific target, say the ketchup in the refrigerator, what sort of mental representation does the observer have in mind? Is the observer looking for something simple, e.g. the color red, or for something much more specific? The results of this experiment suggest that observers engaged in search hold in mind a fairly specific mental image of the target.




See Bravo & Farid, 2009



Measuring Clutter

Clutter makes it difficult to find things. To quantify this effect, we developed a measure of clutter that can be applied to real images. The measure explained about 40% of the variance in observers' search times.




See Bravo & Farid, 2008
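
As a rough, hypothetical illustration of the kind of analysis involved (a per-image clutter score regressed against search times to estimate variance explained), here is a minimal Python sketch. The edge-density score below is a crude placeholder, not the measure described in Bravo & Farid (2008), and the data are synthetic.

```python
# Hypothetical illustration only. The edge-density score is a crude
# stand-in for a clutter measure (NOT the measure from Bravo & Farid,
# 2008); variance_explained shows the R^2 computation behind a statement
# like "explained about 40% of the variance in search times".
import numpy as np

def edge_density_clutter(image: np.ndarray, threshold: float = 0.1) -> float:
    """Fraction of pixels with a strong luminance gradient -- a crude
    proxy for how cluttered an image looks."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    return float(np.mean(magnitude > threshold * magnitude.max()))

def variance_explained(clutter_scores, search_times) -> float:
    """R^2 of a linear regression of search time on clutter score."""
    x = np.asarray(clutter_scores, dtype=float)
    y = np.asarray(search_times, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1.0 - residuals.var() / y.var()

rng = np.random.default_rng(0)

# Toy "photograph" just to show the shape of a per-image score.
toy_image = rng.random((64, 64))
print(f"toy clutter score: {edge_density_clutter(toy_image):.2f}")

# Synthetic example: clutter scores that account for some, but not all,
# of the variability in search times.
clutter = rng.uniform(0.05, 0.5, size=50)
times = 1.0 + 5.0 * clutter + rng.normal(0.0, 0.8, size=50)
print(f"variance explained: {variance_explained(clutter, times):.2f}")
```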



Searching in Clutter

Search is especially difficult for cluttered scenes and for targets that are defined by their membership in a broad category (e.g. food). Under these conditions, search times are strongly affected by the amount of stuff in the stimulus. In these experiments, we show that this amount of stuff is not simply the number of objects.

See Bravo & Farid, 2004



See Bravo & Farid, 2006



Top-Down Segmentation

We perceive a world organized into familiar objects like can openers, fish and cell phones. How does this organization arise? Are bottom-up grouping cues sufficient? What role does object knowledge play in organizing complex visual scenes? We created stimuli in which bottom-up grouping cues were inadequate to define object boundaries. Our results indicate that observers can use the boundaries of recognized objects to accurately segment a novel object, top-down.

See Bravo & Farid, 2003


Cueing Object Parts

When viewing a cluttered scene, observers may not segment whole objects prior to recognition. Instead, they may segment and recognize objects in a piecemeal way. We tested whether observers can use the appearance of one object part to predict the location and appearance of other object parts.

See Bravo & Farid, 2004




Recognition in Clutter

Observers in recognition experiments typically view objects against a blank background, while observers of real scenes often view objects against dense clutter. In this study we examined whether an object's background affects the information used for recognition. Our results are consistent with the idea that global shape is used for the recognition of isolated objects, while local image fragments are used for the recognition of objects in clutter.



See Bravo & Farid, 2006


Motion Segmentation

How does the visual system organize local 2D motions into surfaces? One theory is that the visual system applies a smoothness constraint to the 2D motion pattern. Discontinuities in the motion pattern would then be interpreted as discontinuities in the surface. To test this theory, we designed three stimuli that had equally smooth 2D patterns but different 3D interpretations. Since these flow fields are all equally smooth, the theory predicts that they should produce equally coherent organizations. Nonetheless, our observers found it easiest to detect a discontinuity in the stimulus with the simplest 3D interpretation. These results show that a 2D smoothness constraint is inadequate to explain human motion segmentation, and they argue for mechanisms that recover 3D structure from motion.




See Bravo & Farid, 2000; Bravo, 1998
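
To make the smoothness idea concrete, the sketch below shows what a pure 2D smoothness constraint would predict: treat the flow field's horizontal and vertical components as images, measure local variation with spatial gradients, and flag candidate surface boundaries where that variation is large. This is only an illustration of the constraint being tested, not the authors' model; the threshold and the example flow fields are assumptions.

```python
# Minimal sketch of a 2D smoothness constraint on optic flow: surface
# boundaries are proposed wherever the local flow changes abruptly.
# Illustrative only; threshold and example fields are assumptions.
import numpy as np

def flow_roughness(u: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Per-pixel roughness of a 2D flow field (u, v): magnitude of the
    spatial derivatives of both motion components."""
    uy, ux = np.gradient(u)
    vy, vx = np.gradient(v)
    return np.sqrt(ux**2 + uy**2 + vx**2 + vy**2)

def motion_discontinuities(u: np.ndarray, v: np.ndarray,
                           threshold: float = 0.5) -> np.ndarray:
    """Binary map of abrupt flow changes; under a pure 2D smoothness
    constraint these would be read as surface boundaries."""
    return flow_roughness(u, v) > threshold

# Example: a translating region next to an expanding region gives a flow
# field that is smooth everywhere except at the border between them.
yy, xx = np.mgrid[0:64, 0:64].astype(float)
u = np.where(xx < 32, 1.0, 0.05 * (xx - 48))   # translation | expansion (x)
v = np.where(xx < 32, 0.0, 0.05 * (yy - 32))   # translation | expansion (y)
edges = motion_discontinuities(u, v)
print("boundary columns flagged:", np.unique(np.where(edges)[1]))
```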


Texture Segmentation

Observers can readily discriminate textures with different orientations when they are presented on a planar surface. When the surface is folded, however, the resulting change in viewing geometry alters the textures seen by the observer. We find that observers have difficulty determining whether a change in image texture is due solely to a change in surface geometry or also reflects a change in the intrinsic surface texture. This suggests that while humans are quite adept at detecting texture discontinuities in an image, they are limited in their ability to interpret the cause of these discontinuities.

See Bravo & Farid, 2001

