Shadows may be "discounted" in human visual perception because they do not provide stable, lighting-invariant information about the properties of objects in the environment. Using visual search, R. A. Rensink and P. Cavanagh (2004) found that search for an upright discrepant shadow was less efficient than for an inverted one. Here we replicate and extend this work using photographs of real objects (pebbles) and their shadows. The orientation of the target shadows was varied between 30° and 180°. Stimuli were presented upright (light from above, the usual situation in the world) or inverted (light from below, unnatural lighting). Reaction times (RTs) for upright images were slower for shadows angled at 30°, exactly as Rensink and Cavanagh found. However, for all other shadow angles tested, RTs were faster for upright images. This suggests that, for small discrepancies in shadow orientation, processing switches from a relatively coarse-scaled shadow system to other general-purpose visual routines. Manipulations of the visual heterogeneity of the pebbles that cast the shadows differentially influenced performance. For inverted images, heterogeneity had the expected effects: it reduced search efficiency and increased overall search time. These effects were greatly reduced when images were presented upright, presumably because the distractors were then processed as shadows. We suggest that shadows may be processed by a functionally separate, spatially coarse mechanism. The pattern of results suggests that human vision does not use a shadow-suppressing system in search tasks.