When searching for a specific target, say, the ketchup in the
refrigerator, what sort of mental representation does the observer have
in mind? Is the observer looking for something simple, e.g. the color
red, or for something much more specific? The results of this
experiment suggest that observers engaged in search hold in mind a
fairly specific mental image of the target.
Clutter makes it difficult to find things. To quantify this effect, we
developed a measure of clutter that can be used with real images.
The measure explained about 40% of the variance in observers' search
times.
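The measure itself is not reproduced here. As a rough, hypothetical illustration, the sketch below scores an image by its edge density and relates clutter scores to search times through a squared correlation, which is what "explained about 40% of the variance" means. The edge threshold and the example numbers are assumptions for illustration, not the study's measure or data.

```python
import numpy as np
from scipy import ndimage

def edge_density_clutter(image):
    """Crude clutter proxy: fraction of pixels lying on strong luminance edges."""
    gx = ndimage.sobel(image, axis=1)   # horizontal gradient
    gy = ndimage.sobel(image, axis=0)   # vertical gradient
    grad = np.hypot(gx, gy)
    threshold = 0.1 * grad.max()        # arbitrary edge threshold (assumption)
    return float((grad > threshold).mean())

noise = np.random.rand(128, 128)        # stand-in for a real photograph
print(f"clutter score: {edge_density_clutter(noise):.2f}")

# "Variance explained" is the squared correlation between clutter and search time.
clutter = np.array([0.05, 0.12, 0.20, 0.31])   # hypothetical clutter scores
times   = np.array([0.9, 1.4, 1.6, 2.3])       # hypothetical search times (s)
r = np.corrcoef(clutter, times)[0, 1]
print(f"variance explained: {r**2:.2f}")
```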
We perceive a world organized into familiar objects like can openers,
fish and cell phones. How does this organization arise? Are
bottom-up grouping cues sufficient? What role does object knowledge
play in organizing complex visual scenes? We created stimuli in which
bottom-up grouping cues were inadequate to define object boundaries.
Our results indicate that observers can use the boundaries of
recognized objects to accurately segment a novel object in a
top-down manner.
See Bravo & Farid, 2003
When viewing a cluttered scene, observers may not segment
whole objects prior to recognition. Instead, they may segment and
recognize objects in a piecemeal way. We tested whether observers
can use the appearance of one object part to predict the location and
appearance of other object parts.
See Bravo & Farid, 2004
Observers in recognition experiments typically view objects against a blank background, while observers of real scenes often view objects against dense clutter. In this study we examined whether an object's background affects the information used for recognition. Our results are consistent with the idea that global shape is used for the recognition of isolated objects, while local image fragments are used for the recognition of objects in clutter.
How does the visual system organize local 2D motions into surfaces? One
theory is that the visual system applies a smoothness constraint to the
2D motion pattern. Discontinuities in the motion pattern would then be
interpreted as discontinuities in the surface. To test this theory, we
designed three stimuli that had equally smooth 2D patterns but
different 3D interpretations. Since these flow fields are all equally
smooth, the theory predicts that they should produce equally coherent
organizations. Nonetheless, our observers found it easiest to detect a
discontinuity in the stimulus with the simplest 3D interpretation.
These
results show that a 2D smoothness constraint is inadequate to explain
human motion segmentation, and they argue for mechanisms that recover
3D structure from motion.
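To make "2D smoothness" concrete, one plausible formalization is the mean spatial gradient of the flow field; the sketch below uses that assumed definition, which may differ from the one used in the experiments, and builds two toy flow fields that both vary smoothly yet invite different 3D interpretations.

```python
import numpy as np

def flow_smoothness(u, v):
    """Mean spatial gradient magnitude of a 2D flow field (u, v); smaller = smoother."""
    du_dy, du_dx = np.gradient(u)
    dv_dy, dv_dx = np.gradient(v)
    return float(np.mean(np.sqrt(du_dx**2 + du_dy**2 + dv_dx**2 + dv_dy**2)))

# Two toy fields, both smooth, suggesting different 3D structures:
y, x = np.mgrid[0:64, 0:64] / 64.0
u_grad, v_grad = 0.5 * x, np.zeros_like(x)   # speed gradient, e.g. a surface slanted in depth
u_rot,  v_rot  = 0.5 - y, x - 0.5            # rotation about the line of sight

print(flow_smoothness(u_grad, v_grad), flow_smoothness(u_rot, v_rot))
```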
Observers can readily discriminate textures with
different orientations when they are presented on a planar surface.
When the surface is folded, however, the resulting change in viewing
geometry alters the textures seen by the observer. We find that
observers have difficulty determining
whether a change in image texture is due solely to a change in surface
geometry or whether it also reflects a change in the intrinsic surface
texture. This suggests that while humans are quite adept at detecting
texture discontinuities in an image, they are limited in their ability
to interpret the cause of these discontinuities.
See Bravo & Farid, 2001
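As a toy illustration of how a fold can alter image texture without any change in the surface texture itself, the sketch below computes the image orientation of a line on a surface after orthographic foreshortening by a given slant. The simplified projection model (orthographic, slant about the horizontal axis, no tilt) is our assumption for illustration, not the stimulus geometry used in the study.

```python
import numpy as np

def projected_orientation(theta_deg, slant_deg):
    """Image orientation of a surface line after foreshortening by a slanted surface.

    A line at angle theta on a surface slanted by `slant_deg` about the horizontal
    axis is compressed vertically by cos(slant) under orthographic projection.
    """
    theta = np.radians(theta_deg)
    slant = np.radians(slant_deg)
    return np.degrees(np.arctan2(np.sin(theta) * np.cos(slant), np.cos(theta)))

print(projected_orientation(45, 0))    # flat surface: 45.0 deg in the image
print(projected_orientation(45, 60))   # folded to a 60 deg slant: ~26.6 deg
```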