Grouping Effects on Spatial Attention in Visual Search

Min-Shik Kim

For the past 2 decades, many researchers have used the visual search paradigm to examine limitations in visual processing. Typical visual search tasks require a participant to determine whether a target is present among a variable number of distractors in a visual array. The total number of stimuli (set size) is manipulated, and any significant decrease in performance with an increase in the set size (set-size effect) is often interpreted as an indication that attention is necessary to identify the target.

Traditionally, visual search results have been explained by the assumption that visual information processing consists of two functionally independent, hierarchical mechanisms: an early, parallel, preattentive mechanism and a later, serial, attentive mechanism (Neisser, 1967). Parallel search is assumed to occur when participants detect a target with little or no increase in reaction time (RT) as the number of distractors increases. In this type of search task, a target is usually defined by a single unique feature in the display (e.g., a red item among green items) and the detection of the target varies little with set size. However, if RT increases linearly and steeply as the display set size increases, it is usually interpreted as evidence for spatially serial search (e.g., Treisman & Gelade, 1980). Treisman and her colleagues (Treisman, 1988; Treisman & Gelade, 1980; Treisman & Gormican, 1988; Treisman & Souther, 1985) have proposed the feature integration theory (FIT) of attention, according to which a preattentive stage processes information about primitive visual features such as color, orientation, brightness, depth, and so forth, automatically and spatially in parallel across the whole visual field. That preattentive stage is not sufficient for recognizing objects that are specified by a conjunction of features, which requires spatial attention.

Townsend (1990) and Townsend and Nozawa (1995) demonstrated that the response-time data that have been used to support FIT and other theories of serial search can also be explained by a parallel limited-capacity search process. Such ambiguities in visual-search measures raise doubts about FIT and other theories in which display elements are selected and processed serially. However, experiments with a variety of other experimental techniques provide strong evidence that in many difficult visual tasks, some locations are selected over others for more efficient or more complete processing. (See Cave & Bichot, in press, for a recent review.)

An important issue that arises in the design of any theory of visual selection is whether selection is affected by perceptual grouping. Regardless of whether an attention theory has a strictly serial mechanism, as in FIT, or allows larger portions of the visual input to be processed simultaneously, there may be a useful role for grouping in determining which locations are selected. Some theories rely on grouping to explain discrepancies between visual-search data and the predictions from FIT. In some circumstances, conjunction searches produce response times that are uniformly fast regardless of set size, leading some to suggest that search can be limited to a small number of grouped display elements. Other theories, however, account for those same search results without assuming any grouping effects on search.

Grouping Required in Visual Search

In the experiment of Egeth, Virzi, and Garbart (1984), the participants searched for a conjunctively defined target, a red O among red Ns and black Os. When the number of one type of distractor varied and the number of the other did not (e.g., 2 red Ns and 2, 12, or 22 black Os), the search function remained flat, suggesting that participants could limit search to one subset of stimuli (e.g., red items in this example). Thus, those data indicated that in conjunction search, some items in the display can be excluded from the serial search. More recently, Kaptein, Theeuwes, and Van der Heijden (1995) also showed that participants searching for a conjunction target could limit their search to the elements sharing one of the target features.

Raising questions about the claim that serial search is required at all in conjunction search, Nakayama and Silverman (1986) reported that conjunctions of stereo and motion and conjunctions of stereo and color were searched in parallel. Moreover, their finding implies that stimuli sharing a specific value on certain feature dimensions, such as the same depth or a common motion, might be grouped together and then searched in parallel within the subgroup (e.g., McLeod, Driver, & Crisp, 1988; McLeod, Driver, Dienes, & Crisp, 1991; Theeuwes, 1996).

Duncan and Humphreys (1989) showed that search slope increases as target-nontarget similarity increases and as nontarget-nontarget similarity decreases. They interpreted those results as arguments against FIT’s dichotomy between serial search for conjunction targets and parallel search for a feature target. Duncan and Humphreys further suggested an alternative model to FIT based on late selection (Duncan, 1980). They assumed that a complete structural description (including color, shape, location, and so forth) for each visual object, along with a perceptual segmentation of the scene, is constructed in a parallel stage, followed by a selective stage in which stimuli compete for admission to visual short-term memory (VSTM) according to their similarity to the target and to other elements in the display. Although they made no attempt to define similarity, their model seems to account for a large variety of experimental results. On the basis of Duncan and Humphreys’s model, Humphreys and Müller (1993) built a neural network model for visual search, named SERR, in which grouping processes are also heavily used.

Recently, Grossberg, Mingolla, and Ross (1994) proposed a model in which multiple items are organized into candidate perceptual groups, which are then selected serially for further analysis. Citing Humphreys, Quinlan, and Riddoch (1989) and Bravo and Blake (1990), they claimed that multi-item boundaries produced by perceptual grouping can be viewed as an output of preattentive vision and that this output in turn can serve as a perceptual feature in visual search. For example, Humphreys et al. showed that when the letters were organized into coherent global shapes or familiar shapes, search was facilitated, suggesting that the group of letters was treated as a whole. In Bravo and Blake’s study, participants searched for a perceptual group with a unique shape among other distractor groups. They found that search times were independent of the number of distractor groups, suggesting that perceptual groupings can be processed in parallel in visual search. Those results implied not only that the processes mediating grouping and segmentation were involved in visual search but also that perceptual groups could be formed in parallel in visual search. (See also Beck, 1982; Braun & Sagi, 1990, 1991; and Julesz, 1984, 1986, who claimed that perceptual grouping and texture segregation have the same or similar mechanisms and belong to preattentive processing.)

The model of Grossberg et al. (1994) differs from that of Duncan and Humphreys (1989) in the method of grouping elements. In the model of Grossberg et al., search elements form groups based on a task-relevant target feature. Thus, a target can be grouped with distractors that have one of the target features. Moreover, when a candidate search group is selected, its region is selected for further processing. On the other hand, the assumption in Duncan and Humphreys’s model is that grouping is used to inhibit nontarget elements in parallel. Thus, the target should not be included in any group to be inhibited. According to their model, one can enhance search by increasing grouping between nontargets and decreasing grouping between target and nontargets.

Visual Search Without Grouping

Wolfe, Cave, and Franzel (1989) reported that when participants searched for a conjunction of color and shape, search slopes varied between individual participants from 2.0 ms/item to 20.2 ms/item, suggesting that search varied from parallel to serial. To account for those data, especially for fast conjunction search, Cave and Wolfe (1990) proposed the guided search (GS) model, in which they assumed some interaction between the parallel and serial processes. (See also Wolfe, 1994, and Wolfe & Gancarz, 1996, for revised versions.)

Like FIT, GS assumes that a parallel stage processes basic features spatially in parallel, followed by a serial stage that operates more thoroughly on a spatially limited part of the visual field. However, in contrast to FIT, GS assumes that the parallel stage “can combine information from different feature dimensions to measure how likely a stimulus is to be a target” (Cave & Wolfe, 1990, p. 232) and pass that information on to the serial stage. For example, in a search for a red horizontal bar among red vertical bars and green horizontal bars, all locations with a red item and all locations with a horizontal bar are activated in parallel. The location with the target item then is doubly activated, and the item at the most activated location is checked first by a spatially serial mechanism. Thus, GS is expected to search for the conjunction target perfectly in parallel. However, because of the assumption in GS that the visual system is subject to a certain amount of noise that makes the information provided by the parallel stage less than perfect, attention will often be directed to a distractor location that contains nontarget features.

According to the GS assumption, each location will be activated to the extent that the item at that location has target features. This top-down component of the model (knowledge of the target) can either activate the locations that have the target features or inhibit (or decrease the activation of) the locations that contain features different from those of the target. Besides the top-down activation, the model also includes a bottom-up component that considers the differences in features for each dimension. Thus, when an element is different from the other elements in a particular feature dimension, the bottom-up activation for the element will increase. That component thus allows the model to search for a simple feature target very quickly. The same type of interaction between bottom-up and top-down activation is built into a more recent model of attention named FeatureGate (Cave, in press), which is implemented in a neural network.
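The activation arithmetic described in the two paragraphs above can be made concrete with a toy sketch. This is my own minimal illustration of the idea (unit feature weights, a simple mismatch-based bottom-up term), not the actual Guided Search implementation; the function name and data layout are assumptions:

```python
import random

def guided_search_activation(display, target, noise_sd=0.0, seed=0):
    """Toy activation map in the spirit of Guided Search (GS).

    display: list of feature dicts, e.g. {"color": "red", "orientation": "vertical"}.
    target:  the known target features (drives the top-down component).
    Each location's activation sums a top-down term (+1 per feature dimension
    matching the target), a bottom-up term (how much the item differs from the
    other items in each dimension), and optional Gaussian noise, which is what
    lets attention be misdirected to a distractor location.
    """
    rng = random.Random(seed)
    activations = []
    for i, item in enumerate(display):
        top_down = sum(1.0 for dim in target if item.get(dim) == target[dim])
        bottom_up = 0.0
        for dim in item:
            others = [other[dim] for j, other in enumerate(display) if j != i]
            bottom_up += sum(v != item[dim] for v in others) / len(others)
        activations.append(top_down + bottom_up + rng.gauss(0.0, noise_sd))
    return activations

# Search for a red horizontal bar among red vertical and green horizontal bars.
display = [
    {"color": "red", "orientation": "vertical"},
    {"color": "green", "orientation": "horizontal"},
    {"color": "red", "orientation": "horizontal"},   # the conjunction target
    {"color": "green", "orientation": "horizontal"},
]
target = {"color": "red", "orientation": "horizontal"}
acts = guided_search_activation(display, target)
first_checked = acts.index(max(acts))  # the serial stage visits the peak first
```

With no noise, the target location is doubly activated top-down and is checked first; raising `noise_sd` occasionally sends the first check to a distractor sharing one target feature, which is the model's account of slower conjunction searches.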

Citing Nakayama and Silverman (1986), McLeod et al. (1988), and Wolfe et al. (1989), Treisman and Sato (1990) proposed a revised FIT to explain the observation that conjunction searches can sometimes be nearly parallel. They examined three possible explanations: (a) There may be a few conjunction detectors corresponding to visual cortex cells responsive to multiple features; (b) conjunction stimuli can be reorganized and segregated by one feature, and then search can be limited to the grouped elements, as grouping models suggested; and (c) locations with nontarget features may be inhibited, and the simultaneous inhibition of nontarget features can speed conjunction search. They rejected the conjunction detector hypothesis by showing that conjunction searches involving orientation are generally slow. Orientation was particularly relevant, because many of the cortical cells suspected of being conjunction detectors respond to combinations of orientation and either spatial frequency or direction of motion.

Treisman and Sato (1990) also tested the grouping explanation in their experiments. Participants searched for a conjunction target on some blocks of trials. On other blocks of trials with the same stimuli, participants determined whether the shape defined by a particular feature dimension (e.g., green items) matched a global shape defined by a luminance pattern. If participants responded to conjunction targets by first separating one set of elements from another, then their performance on such a same–different matching task should predict their search performance. However, the correlations between search times and same–different matching times were very low for individual participants. In another experiment with apparent motion, participants decided whether a target set of elements moved coherently from the first frame to the second. Ability to detect the apparent motion did not predict search performance. The researchers took those results as arguments against the grouping explanation.

Finally, Treisman and Sato (1990) focused on the feature-inhibition explanation. Participants searched for the same conjunction target but in two different conditions. In one condition, there were only two distractor types (standard conjunction search). In the other condition, there were the same two distractor types along with two other distractor types that had features more discriminable from the target features. The results showed that a standard conjunction search was easier than one in which half the distractors were even less similar to the target in the two relevant features, suggesting that it takes more time to inhibit more features. That finding was taken as an indication that selection is accomplished by inhibiting certain distractors rather than by activating a particular target.

The updated FIT (Treisman & Sato, 1990) and the GS (Cave & Wolfe, 1990) are very similar. For example, in the revised FIT, Treisman and Sato assumed mutual inhibition of elements within each feature map and inhibition of locations with nontarget features, which are, respectively, much like the bottom-up and top-down activation in GS. More important, unlike some attentional models such as those of Duncan and Humphreys (1989) and Grossberg et al. (1994), FIT, GS, and FeatureGate all explain efficient searches without any explicit grouping mechanisms. For example, GS selects each location on the basis of its properties, rather than simultaneously selecting one group of similar items. In GS, similar items share similar top-down activations as well as similar bottom-up activations, and those mechanisms produce predictions similar to those from search models based on grouping processes. In fact, GS can account for the selective type of search found in the study by Egeth et al. (1984) without grouping (Cave & Wolfe, 1990).

The models that explain visual search without grouping do not explicitly claim that grouping does not occur or that grouping cannot form organized perceptual units, as in Bravo and Blake’s (1990) study; rather, they treat grouping as an unnecessary processing step for many types of search (e.g., Friedman-Hill & Wolfe, 1995).

Experimental Tests of Grouping

To determine the role of grouping in simple visual-search tasks, we performed two experiments by using a probe technique. Earlier experimenters (Cave & Pashler, 1995; Hawkins et al., 1990; Hoffman & Nelson, 1981; Hoffman, Nelson, & Houck, 1983; Kim & Cave, 1995, in press; Kowler, Anderson, Dosher, & Blaser, 1995; LaBerge & Brown, 1989; Müller & Humphreys, 1991; Shih & Sperling, 1996) have shown that the allocation of attention to one stimulus can benefit the processing of any stimulus that immediately follows at the selected location. Thus, one reasonable method of measuring the effects of grouping on spatial attention is a dual-task procedure with postdisplay probes. Kim and Cave (1995) used such a probe method to measure spatial selection during visual search. The primary task was to search for a predefined target among many distractors. In one of those experiments, we presented a letter array immediately after the primary stimuli disappeared and measured the accuracy of reporting the probe letter at each location occupied by a search display element. Participants reported the letters preceded by the target or by a distractor with a target feature more frequently in both conjunction and feature searches.

In the current study, we measured whether a search target was grouped with nearby distractors sharing a feature with it and whether such grouping was accomplished by selecting the locations occupied by the elements rather than by selecting a nonspatial group representation. As in Kim and Cave (1995), our primary task was a visual-search task designed to elicit the allocation of attention. In contrast to our earlier study in which the search elements appeared at random locations, however, the search elements in the present study formed a spatially contiguous region by color in feature search and by color or shape in conjunction search. We expected that when the search elements to be grouped were spatially contiguous and formed a good gestalt, they would be selected together more easily than when the search elements were positioned at spatially noncontiguous locations (e.g., Grandison, Hendel, & Egeth, 1996; Grossberg et al., 1994; Theeuwes, 1996; Treisman, 1982). Responses to the spatial probes indicated whether spatial selection favored the locations of distractors that were grouped with the target.

The primary stimuli were presented briefly and then followed by several different probe letters, each appearing at a location that had been previously occupied by one of the primary stimuli. The probe task was to identify the letters correctly. Differences in the accuracy with which letters at different locations were reported reflected differences in spatial attention across those locations. In Kim and Cave (1995), we had used two different measures of spatial attention: (a) response time at reporting a dot probe that appeared at a single location and (b) accuracy at reporting letters from an array that included all possible stimulus locations. In that study, the two methods yielded similar results. The accuracy measure with the array of letter probes was also used in a study by Bichot, Cave, and Pashler (1999), and we used it in the present study as well.

We examined whether and how grouping mechanisms are used in visual search when the participant did not know the target location. Moreover, we tried to answer an important question about how grouping processes in visual search can be influenced by top-down and bottom-up information about the search items – that is, search groups might be formed only on the basis of known target features in a top-down fashion (Grossberg et al., 1994), grouping might occur independently of the target features in a bottom-up fashion (Hendel, Grandison, & Egeth, 1996), or both.

Experiment 1

In our earlier study, as mentioned before, we showed spatial selection of the conjunction target location and some distractor locations that shared one of the target features, indicating that they received relatively more attention than other distractors that shared neither of the relevant target features (Kim & Cave, 1995). In that study, distractors of different types were located randomly on an imaginary circle around a fixation cross. Thus, because the locations of search elements with potential target features were not all contiguous in most trials, it was potentially difficult to group them together. In the current experiment, we manipulated grouping by locating search elements with the same feature in a contiguous region that could easily be separated from the rest of the display. As shown in Figure 1, for example, when the elements with the same color are located in a contiguous region (left panel), they can be grouped more easily than when they are alternately located (right panel).

If the grouping occurs preattentively in a bottom-up fashion, then we can expect that the locations with the same feature will be grouped together regardless of whether the feature is a defining feature of the target. Moreover, if search elements with the same feature are preattentively grouped and each grouped region is selected serially (e.g., Treisman, 1982), then any elements in the same group with the target should be selected together – that is, when the elements with the same feature are easily grouped, the locations of elements sharing the color with the target should receive more attention than the locations with a color different from that of the target (Figure 1, left panel). However, when the elements with the same feature are not easily grouped, the elements with the color of the target should receive less attention than those that are easily grouped with the target (Figure 1, right panel).

In Experiment 1, we tested for evidence of preattentive grouping in feature search. The participants searched for a circle target among squares as the primary task. On the irrelevant feature dimension (color), one half of the stimuli were red and the other half green. As a secondary task, we asked the participants to identify probe letters presented immediately after the search array. We used accuracy in identifying the probe letter at each location as a measure of the amount of spatial attention allocated to that location. If grouping occurs on the basis of each feature dimension independently of that dimension’s relevance to the task and if the grouped region is used somehow in visual selection, then in the contiguous condition there should be more benefit in reporting probes at the locations of distractors sharing the color of the target than at the locations of distractors with a different color (Figure 1, left panel). In the noncontiguous condition, however, such a difference between the two element types should not be found (Figure 1, right panel).

Besides grouping, another variable might affect probe accuracy in the current experiment. As shown in the left panel of Figure 1, in the contiguous condition the distractors with the same color as the target are always positioned closer to the target than are the distractors with the nontarget color. If probe accuracy is higher at the target-color distractor locations than at the nontarget-color distractor locations, it might therefore reflect distance from the target rather than grouping with the target. However, our previous study (Kim & Cave, 1995) with a similar search task showed no effect of distance from the target in either feature or conjunction search.

Method

Participants

Twenty-seven undergraduates at Vanderbilt University participated in the experiment in partial fulfillment of a course requirement. All of them had normal or corrected-to-normal visual acuity and normal color vision.

Apparatus

We conducted the experiment on three AppleColor High-Resolution RGB monitors controlled by Macintosh microcomputers. The screen resolution was 640 x 480 pixels at 69 dpi. The participants responded via custom-built response keys connected to Strawberry Tree parallel interface cards. Responses were timed with clocks on the interface cards.

Stimuli

The primary stimuli in each trial consisted of eight colored shapes against a white background. The eight shapes were equally spaced on the perimeter of an imaginary circle around a fixation cross. The target shape was always a circle that appeared on half of the trials, and the distractors were always squares. On each trial, the location of the target was selected to be one of the eight locations, and the color of the target was randomly selected to be either red or green. The color of each distractor was also randomly selected to be one of the two colors for each trial, with the constraint that half of the shapes had the same color as the target and the other half did not. There were also two grouping conditions, each with a different configuration of the distractors. In the contiguous condition, all elements with the same color were located next to one another on the imaginary circle (Figure 1, left panel). When the target was present in this condition, it was always surrounded by elements of the same color; it never appeared next to the boundary between the two color regions. In the noncontiguous condition, elements of different colors alternated (Figure 1, right panel). Trials with those two display types were randomly intermixed. Thus, the participants could not know the target location, its color, or the grouping condition until the primary stimulus appeared.
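The display constraints just described (half the shapes in each color, contiguous vs. alternating layouts, the target kept away from the color boundary) can be captured in a short sketch. The function name and data layout here are illustrative, not the original experiment code:

```python
import random

def make_display(condition, target_present, rng):
    """Build one search display under the constraints described in the text.

    Eight slots on an imaginary circle; half the shapes carry the target's color
    and half the other color. In the contiguous condition each color occupies a
    half-circle and the target (a circle among squares) never abuts the color
    boundary; in the noncontiguous condition the two colors alternate.
    """
    positions = list(range(8))
    target_color = rng.choice(["red", "green"])
    other_color = "green" if target_color == "red" else "red"
    if condition == "contiguous":
        start = rng.randrange(8)                      # random rotation
        same = [(start + k) % 8 for k in range(4)]    # the target-color arc
    else:
        offset = rng.randrange(2)                     # which parity gets the color
        same = [p for p in positions if p % 2 == offset]
    display = {p: {"shape": "square",
                   "color": target_color if p in same else other_color}
               for p in positions}
    if target_present:
        if condition == "contiguous":
            pos = rng.choice([same[1], same[2]])      # interior of the arc only
        else:
            pos = rng.choice(same)
        display[pos] = {"shape": "circle", "color": target_color}
    return display
```

Drawing displays from `make_display("contiguous", True, random.Random())` always yields a target flanked on both sides by same-color distractors, which is the property the distance-confound analysis below depends on.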

Each probe stimulus consisted of a circular array of 8 black letters, randomly selected from the 21 uppercase consonants, each centered on a location previously occupied by a search element. The letters were drawn in the Macintosh Monaco font.

We matched the shades of red and green for luminance on each video monitor by using a Minolta ft-1° light meter. At a viewing distance of approximately 58 cm, the imaginary circle subtended approximately 14.2° x 14.2° of visual angle, and each search shape subtended approximately 2.2° x 2.2°. Each probe letter subtended 0.6° vertically and 0.4° horizontally.
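The visual-angle figures above follow from the standard geometry relating on-screen size to viewing distance; a quick sketch (the helper names are mine):

```python
import math

def visual_angle_deg(size_cm, distance_cm):
    """Visual angle (degrees) subtended by a stimulus of a given on-screen size."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

def size_for_angle_cm(angle_deg, distance_cm):
    """Inverse: on-screen size (cm) needed to subtend a given visual angle."""
    return 2 * distance_cm * math.tan(math.radians(angle_deg / 2))

# At the 58-cm viewing distance, the 14.2-deg circle is roughly 14.5 cm across,
# and each 2.2-deg search shape is roughly 2.2 cm across.
circle_cm = size_for_angle_cm(14.2, 58)
shape_cm = size_for_angle_cm(2.2, 58)
```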

Procedure

The sequence of displays in a single trial is illustrated in Figure 2. First, a fixation cross appeared at the center of the screen for 1 s, and then the primary search display appeared for 60 ms. The letter probes appeared for 60 ms, beginning 120 ms after the onset of the search display. Next, the participants reported whether a circle target was present (the primary task) by pressing a yes or no button with one of two fingers of the nondominant hand. The importance of a correct and speedy response in the primary task was emphasized. If the participants did not respond in the primary task within 3 s after the onset of the primary display, or if they responded incorrectly, they heard an error sound. After that response, a display containing all 21 possible letters appeared. Using a mouse, the participants selected the 4 letters that they were most confident they had seen in the probe array (the probe task). They were instructed that accuracy was important and that speed did not matter. After the probe responses, the participants were given feedback as to which of their letter choices were correct. Each participant performed a total of 128 main trials. Because there were four conditions – two trial types (target present or absent) x two grouping types – each experimental condition consisted of 32 trials. Participants were given a break every 32 trials. Each participant worked through at least 60 practice trials before data collection.

Results

Primary Task Results

The participants produced incorrect responses in the primary task on 17.5% of the trials. The participants’ reaction times in the primary task were also measured. Search RTs more than 3.5 standard deviations from the mean were trimmed iteratively; fewer than 2% of all observations were excluded. The participants’ mean RTs from correct trials and error percentages in the primary task were subjected to an analysis of variance (ANOVA) with trial type (target present vs. target absent) and grouping type (contiguous vs. noncontiguous) as factors. For both RTs and error percentages, there was neither a main effect of trial type or grouping type nor a significant interaction between the two at the alpha level of .05.
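The iterative trimming rule can be written out explicitly. A minimal sketch; the exact stopping rule used in the original analysis is my assumption:

```python
import statistics

def trim_iterative(rts, criterion=3.5):
    """Iteratively drop RTs more than `criterion` SDs from the mean, recomputing
    the mean and SD after every pass, until no further observation is excluded."""
    data = list(rts)
    while len(data) > 2:
        m = statistics.mean(data)
        sd = statistics.stdev(data)
        kept = [rt for rt in data if abs(rt - m) <= criterion * sd]
        if len(kept) == len(data):
            break
        data = kept
    return data

# One extreme RT inflates the first-pass mean and SD but still exceeds the
# criterion; the remaining distribution then survives the recheck unchanged.
rts = [500] * 20 + [505] * 20 + [5000]
clean = trim_iterative(rts)
```

Note that with very small samples a single outlier can inflate the SD enough to escape a 3.5-SD criterion, which is why recomputing after each pass (rather than trimming once) matters.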

Probe Task Results

In this experiment, attentional allocation is reflected in the accuracy with which the probe letters at each location were reported. On the probe task, the participants correctly reported an average of 2.45 of the 8 probe letters on each trial. We included all correct probe responses, regardless of whether the participants had made an error on the primary search task. (Another analysis including only the trials with a correct search response showed the same results.) We submitted the mean proportions of reported probe letters in each condition to an ANOVA with grouping type (contiguous or noncontiguous) and probe location (target location, distractor location with the target color, or distractor location with the nontarget color) as repeated factors. Figure 3 contains the mean percentages of correct responses to probe letters in each condition.

The analysis showed a significant main effect of probe location, F(2, 52) = 48.83, p < .001. Participants correctly reported the letters preceded by the target more frequently than those preceded by a distractor. The main effect of grouping type was not significant, F(1, 26) = 0.118, p > .7. However, there was a significant interaction between grouping type and probe location, F(2, 52) = 3.56, p < .05. That interaction effect arose mainly from the distractor locations (see Figure 3). In the contiguous condition, probe accuracy was significantly higher at the distractors that shared the target’s color than at the distractors that had the nontarget color, F(1, 26) = 11.08, p < .005. In the noncontiguous condition, however, probe accuracies at the two locations did not significantly differ, F(1, 26) = 0.94, p > .3.

The significant interaction between grouping type and probe location suggests that distractors with the target color received more attention than distractors with the nontarget color in the contiguous condition, but not in the noncontiguous condition. As mentioned earlier, however, in the contiguous condition, the distractors with the target color were always located closer to the target than were distractors with the nontarget color. Grouping in this experiment was confounded with distance. Thus, the interaction could reflect distance rather than grouping. To test for a grouping effect with constant distance, we conducted another ANOVA that included only the distractor locations two positions away from the target in the contiguous condition. Although the participants showed higher probe accuracy at the target-color distractor locations (26%) than at the nontarget-color distractor locations (22%), the difference was not statistically significant, F(1, 26) = 2.15, p = .15.

Also, in the noncontiguous condition, to test for distance effects, we compared probe accuracies at the distractor locations one or two positions away from the target with those three or four positions away. Probes near the target location (27%) were reported more frequently than those far from the target (25%), F(1, 26) = 4.34, p < .05, showing that distractors near the target received more attention than those farther away.
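With 8 equally spaced locations, "positions away from the target" is a circular distance, which runs from 1 to 4. For reference (the helper is illustrative):

```python
def circular_distance(a, b, n=8):
    """Positions separating two slots on an n-item circular array (here, n = 8)."""
    d = abs(a - b) % n
    return min(d, n - d)

# Distances from a target at slot 0 to the seven distractor slots: each distance
# 1-3 occurs twice (clockwise and counterclockwise) and distance 4 occurs once.
dists = sorted(circular_distance(0, p) for p in range(1, 8))
```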

To determine whether grouping had any attentional effect beyond that of distance, we performed a final analysis on probe accuracies from distractor locations only, with grouping type (contiguous or noncontiguous) and distance from target (near or far) as variables. The distance variable had two levels: The first level (near) included distractor locations one position away from the target, and the second level (far) included those three or four positions away. Because the target in the contiguous condition never appeared at the boundary between the two color regions, the near distractor in the contiguous condition was always target colored, whereas the near distractor in the noncontiguous condition was always nontarget colored. Any interaction between the grouping and distance variables would indicate a grouping effect on attentional performance. The two-way repeated-measures ANOVA revealed only a main effect of distance, F(1, 26) = 21.5, p < .001, with no significant interaction, F(1, 26) = 1.07, p = .31. Although grouping could have been responsible for part of the attentional differences across the distractors, at least some of the difference, and perhaps all of it, was due to distance from the target.


In the current experiment, distractors near the target received more attention than those far from the target. As mentioned before, the effect of distance from the target was not originally expected because there had been no distance effect in our earlier feature search experiments (Kim & Cave, 1995). The difference between the two studies may derive from the requirement in the present experiment that participants encode and recall 4 probe letters on each trial, whereas in the earlier experiments participants could report as few as 1 probe letter. Thus, the relatively heavy load of the probe task may have led the participants in the current experiment to allocate attention to a broader region around the target location, one encompassing a larger block of letters.

In another study (Kim & Cave, 1999), participants showed a grouping effect based on a task-irrelevant feature (color). Those participants reported a letter target that appeared in the middle of the display between two flanking distractors, one with the same color as the target and the other with a different color. Response times to spatial probes in those experiments showed that a distractor received more attention if it was the same color as the target. As in the current experiment, the participants in that study did not know the color of the target beforehand, and the color of the stimulus was irrelevant to the task itself. However, if grouping had any effect on selection in the current experiment, it must have been much weaker than in our earlier experiments. The main difference between the two experiments is that the target in the current study was defined by a nonspatial feature (i.e., a divided-attention task), whereas it was defined by its location in the earlier experiments (i.e., a focused-attention task).

Why was there a strong task-irrelevant color grouping effect in the focused-attention task but little or no grouping effect in the current visual-search task? First, the color grouping effect on attentional performance might have occurred when the participants focused attention on the target but not when they divided attention between the search elements in a display, as suggested by Harms and Bundesen (1983); that is, when participants determined whether the search display contained a specific shape, regardless of spatial location, each element might have been attended to some degree, and the divided attention could have prevented color grouping or color segregation.

Second, the strong grouping effect in our other experiments and the lack of it in the current experiment may be attributable to the fact that the target in the previous experiments was defined by its location and the target in the current experiment was defined by some feature other than location (shape). The location cue might be superior to the other nonspatial cues in the sense that the target location can be selected more directly or more quickly when the target is defined by its location than when it is not. That is, when participants know the target location in advance, selection at that location may occur very quickly, and then any features at that location can be highly activated. The high activation of the features at the selected location in turn might trigger selection of any locations that have the same features as the selected location through a selection mechanism like the top-down activation in GS (Cave & Wolfe, 1990; Wolfe, 1994). Thus, according to that conjecture, the grouping effect in the other experiments might not have originated from a preattentive grouping stage, but instead might have occurred concurrently with target selection or after that selection was completed.

Third, the mechanism for attending to a cued location might be to select only from a representation that has already been grouped. When a target is defined by its location, as in our previous study, it may be difficult to select the precued location precisely before the stimuli appear. When the stimulus appears, the items sharing color may be grouped before location selection; when the target is selected, the rest of its group is included. However, when the target is defined by a nonspatial feature such as shape or color, the location of each search item may be easily individuated by nonspatial features, and spatial attention may be guided to a separate object location by the target feature.

The last possibility is that when stimuli can be grouped by more than one feature dimension, a single dimension for grouping may be chosen on the basis of search efficiency. In the current experiment, for example, stimuli could be grouped by either color or shape. If color grouping was used to inhibit elements with the nontarget color, another processing step would be needed to inhibit other distractor locations within the selected target-color group. However, if shape grouping was used, then all the distractor locations could be inhibited in one step. In the focused-attention study (Kim & Cave, 1999), the only feature difference to support grouping was in the color dimension. Although the color feature was not relevant to the task in those experiments, inhibition of distractor locations grouped by color might be more efficient than inhibition of each individual distractor location separately.

In the next experiment, we tested those four interpretations with conjunction search. Any evidence for grouping effects in conjunction search would provide an argument against claims that a grouped region cannot be selected when the target is defined by nonspatial visual features or when participants are expected to “divide” attention among search elements.


In Experiment 1, we found no results that could definitely be attributed to color grouping that was irrelevant to the task. That result, however, does not necessarily indicate that there was no selection or inhibition of grouped objects. Instead, selection or inhibition of grouped objects may have occurred only on the basis of shape, which was relevant to the task of Experiment 1. In other words, search elements can be grouped automatically by all possible feature dimensions (color or shape) in the preattentive stage, but at any one time the participants may use only one of those dimensions for selection. If so, then the participants of Experiment 1 might have used only shape grouping to select the target or to inhibit the distractors efficiently. That conjecture, however, could not be tested by simply showing more attention at the target location than at distractor locations in a simple feature-search task, because the same result would be predicted by “no-grouping” search models such as FIT and GS, as mentioned before.

In Experiment 2, we used a conjunction search task to determine whether grouping by one of the features could be involved in visual search and whether the grouped elements were selected spatially. Unlike in Experiment 1, no task-irrelevant feature dimension varied in this experiment. The participants searched for a target defined by a conjunction of color and shape. Some distractors shared the same color with the target, and others shared the same shape. If grouping occurs more easily when the elements sharing a feature are within a single contiguous region (e.g., Grossberg et al., 1994; Humphreys et al., 1989), we would expect that when the color-shape conjunction target was located among the same color distractors, color grouping would allow for more efficient selection than shape grouping would. Likewise, when the conjunction target was located among the same shape distractors, participants would be expected to select the target-shape group more effectively than the target-color group.

For example, when the target was a red circle, the target could be located either among red squares (Figure 4, Panel A; “color grouping condition”) or among green circles (Figure 4, Panel B; “shape grouping condition”). We expected that the target-color distractors would receive relatively more attention than the target-shape distractors in the color grouping condition (Figure 4, Panel A). Likewise, the green circles (same-shape distractors) would receive relatively more attention than the red squares (same-color distractors) in the shape grouping condition (Figure 4, Panel B).



Twenty-eight undergraduates at Vanderbilt University participated in the experiment in partial fulfillment of a course requirement. All of them had normal or corrected-to-normal visual acuity and normal color vision.

Apparatus, Stimuli, and Procedure

Apparatus, viewing condition, and procedure were identical to those in Experiment 1. As in Experiment 1, each participant received 32 trials in each condition, for a total of 128 trials. Also, the number, size, and color of search elements were the same as in Experiment 1, with the following exception: As a primary task, each of the 28 participants searched for one of four elements: a red square, a red circle, a green square, or a green circle; 7 participants searched for each target. Each trial included two types of distractors, one set sharing the same color as the target (same-color distractors) and the other set sharing the same shape as the target (same-shape distractors). The two types of distractors were presented in two contiguous groups on the imaginary circle, so that they could be grouped easily as in the contiguous condition in Experiment 1 (Figure 4). In the target-present condition, the conjunction target was randomly located either among the three same-color distractors (Figure 4, Panel A; color-grouping condition) or among the three same-shape distractors (Figure 4, Panel B; shape-grouping condition). The target never appeared on the boundary between the two feature groups. We randomly intermixed the color-grouping and shape-grouping trials.


Primary Task Results

Search response times exceeding 3.5 standard deviations from the mean were trimmed iteratively, which led to a loss of under 2% of all observations. Figure 5 contains mean RTs from correct trials and error percentages in the primary task. For each of these measures, we performed an ANOVA with three types of trials (target-present in color grouping, target-present in shape grouping, and target-absent trials). There was a main effect of trial type for both RTs, F(2, 54) = 6.79, p < .01, and error percentages, F(2, 54) = 7.65, p < .01; the participants responded more slowly and made more errors in the target-absent trials than in the target-present trials. There seemed to be a speed-accuracy tradeoff between the two target-present conditions. The participants responded more quickly but made more errors in the color-grouping condition than in the shape-grouping condition.
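The iterative trimming rule described above (exclude RTs beyond 3.5 standard deviations of the mean, recompute the mean and standard deviation, and repeat until no observation exceeds the cutoff) can be sketched as follows. This is a generic implementation of the stated rule with hypothetical RT values, not the authors' actual procedure or data:

```python
import numpy as np

def trim_rts(rts, criterion=3.5):
    """Iteratively remove reaction times more than `criterion` sample
    standard deviations from the mean, recomputing the mean and SD
    after each pass, until all remaining values fall within the cutoff."""
    rts = np.asarray(rts, dtype=float)
    while True:
        m, sd = rts.mean(), rts.std(ddof=1)
        keep = np.abs(rts - m) <= criterion * sd
        if keep.all():
            return rts
        rts = rts[keep]

# example: 20 plausible search RTs (ms) plus one extreme outlier
rts = [500, 510, 515, 520, 530, 535, 540, 545, 550, 555,
       560, 565, 570, 575, 580, 590, 600, 610, 615, 620, 5000]
trimmed = trim_rts(rts)
print(len(trimmed))  # → 20
```

Note that recomputing the mean and SD after each exclusion matters: an extreme value inflates the initial SD, so a single pass can miss observations that become outliers once the distribution tightens.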

Probe Task Results

As in Experiment 1, the accuracy at reporting probes at each location reflected the strength of attentional allocation at that location. On average, 2.18 probe letters per trial were correctly reported. Figure 6 contains the mean percentage of correct responses to probe letters in each condition. The data were submitted to a mixed-factors ANOVA with Target Type (green circle, green square, red circle, or red square), Grouping Type (color or shape grouping), and Probe Location (target location, distractor location with the target color or the target shape) as factors. The analysis showed a significant main effect of probe location, F(2, 48) = 44.65, p < .001. The participants correctly reported the letters preceded by the target more frequently than they reported those preceded by a distractor. The only other effect to reach significance was the interaction between grouping type and probe location, F(2, 48) = 8.24, p < .001. That interaction arose mainly from the distractor locations (Figure 6). A similar ANOVA without data from the target location also showed a significant interaction between grouping type and probe location, F(1, 24) = 21.97, p < .001, along with significant main effects for grouping type, F(1, 24) = 14.19, p < .001, and for probe location, F(1, 24) = 7.91, p < .01. Those results indicate that probe letters were more frequently reported when they appeared at the target-color distractor locations than at the target-shape distractor locations. Also, the probes were more frequently reported when the target was presented among the target-color distractors (color grouping condition) than when it was presented among the target-shape distractors.
Most important, though, target-color distractors received more attention in the color-grouping condition than in the shape-grouping condition, whereas target-shape distractors showed no such difference in attentional strength between the two grouping conditions, providing strong evidence for a grouping effect on spatial selection.

As in Experiment 1, however, color grouping was confounded with distance. In the color-grouping condition, the target-color distractors were always located closer to the target than were the target-shape distractors; in the shape-grouping condition, distractors with the target shape were always closer to the target than distractors with the same color as the target. To test whether the grouping effect in the previous ANOVA still existed without the distance factor, we performed another ANOVA with only the distractor locations two positions away from the target. We found the same trends as in the previous test; there were two significant main effects, for grouping type, F(1, 24) = 7.07, p < .05, and for probe location, F(1, 24) = 14.4, p < .001. Most important, there was a significant interaction between the two, F(1, 24) = 5.03, p < .05, indicating that the target-color distractor received a large attentional advantage over the target-shape distractor under color grouping, but not under shape grouping (Figure 7). If the probe accuracies were determined only by distance or by a featural difference, then there should be no interaction. That result confirmed that the grouping effect found in the current experiment was not attributable purely to distance.


The participants in Experiment 2 showed higher probe accuracies at the distractor locations that were easily grouped with the target than at the locations that were not. Moreover, this effect was still found when the distance between the target and probe location was held constant. Thus, the results suggest that perceptually grouped objects can be selected or inhibited as a unit in visual search and that the selection of grouped objects is mediated by their locations. The grouping may have occurred only for color and not for shape in Experiment 2. Alternatively, there may have been grouping by feature dimension, coupled with a general tendency to attend more to same-color distractors than to different-color distractors, regardless of grouping.

In the discussion of Experiment 1, we suggested four possible interpretations as to why task-irrelevant color grouping appeared so strongly in another study with a focused-attention paradigm (Kim & Cave, 1999) but not in the first experiment. The first possibility was that the selection of a grouped region cannot occur when the target is defined by nonspatial visual features and attention is divided among search elements. Experiment 2 eliminated that explanation, because it showed that grouping can occur even in a search task in which participants have no advance knowledge of the target location. The second possibility was that location cuing triggers selection of locations sharing features with the cued location. That may be true, but it cannot be the entire story of grouping, because the current experiment demonstrated that grouping can occur in a conjunction search without location cuing. The third possibility was that a cued location can only be selected by selecting its group. Again, the current experiment showed that this cannot be the entire story, because grouping also occurred in the search task, without spatial cuing. The last possibility was that it may be impossible to group by more than one feature dimension simultaneously. That possibility is certainly compatible with the current results.

The grouping effect in Experiment 2 was based on task-relevant feature dimensions. As mentioned before, Grossberg et al. (1994) suggested that candidate search groups are formed on the basis of one of the target-feature dimensions. Thus, their model appears to account for the results from both Experiments 1 and 2; that is, in a simple search such as Experiment 1, a task-irrelevant feature dimension such as color should not be used to form a candidate group, and, thus, color should have no effect on the selection of the target. In conjunction search, however, distractors with one of the target features can be selected along with the target when they are grouped with the target. According to these results, one might argue that the selection of a grouped region is based only on a task-relevant feature dimension.

However, the focused-attention study (Kim & Cave, 1999) suggested that participants may group elements based on a task-irrelevant feature dimension when no other feature difference in the display will support grouping. Moreover, in a recent study, Hendel et al. (1996) showed that search can proceed within a group that is formed by a task-irrelevant feature dimension. In that study, participants made a two-alternative response indicating whether there were one or two target Ts among multiple Ls. When two targets were presented, the participants responded more quickly to displays in which the two targets were positioned in the same color group than to those in which the targets were positioned in different color groups. Note that the color of the target was irrelevant to the task in their study. Thus, the researchers concluded that when there are grouped regions available in visual search, participants may select each group one by one, and search can proceed within the selected group even when the grouped regions are formed by a task-irrelevant feature. All those results fit nicely with the assumption that grouping can affect search but are more difficult to reconcile with any model based only on top-down guidance (see also Pashler, 1988; Theeuwes, 1991, 1992).

Thus, the difference in grouping between Experiments 1 and 2 cannot be attributed to the task relevancy of the feature. In other words, candidate search groups are not necessarily formed on the basis of features known to be relevant to the task. Task-irrelevant feature grouping might also occur in visual search, especially when no other grouping is possible.


We contrasted the predictions of two different classes of visual search models. One emphasizes the role of perceptual grouping in visual search, and the other does not. Proponents of the search models based on grouping have claimed that search begins with preattentive segmentation of a search array into separable figural units or objects on the basis of Gestalt properties such as similarity, proximity, or contiguity. Attention then operates on the preattentively organized perceptual units or objects (Duncan & Humphreys, 1989; Grossberg et al., 1994; Kahneman & Henik, 1977, 1981; Kahneman & Treisman, 1984; Treisman, 1982). In some of those models, location is treated not as a fundamental consideration in controlling attention but as just one factor equivalent to others such as shape, color, movement, and so on. (In some of these grouping models, such as SERR [Humphreys & Muller, 1993], location does play a fundamental role.) Proponents of the other class of model usually assign a special role to location in visual selection, assuming that selection occurs within representations organized by location.

In the current study, we demonstrated a particular type of grouping process, based on selection of locations. The use of spatial probes in Experiment 1 showed little evidence for a task-irrelevant color grouping effect in simple feature search. In Experiment 2, however, there was clear evidence of location-based group selection in search for a conjunction target defined by nonspatial features. Those results are consistent with Treisman’s (1982) demonstration that arranging elements into groups affected conjunction search but not feature search.

The small or nonexistent color-grouping effect in Experiment 1 suggests that even when feature differences make it possible to group search elements into subgroups, such grouping has little or no effect on the selection of the target. On the other hand, when the target is not very salient, as in conjunction search, grouping has a substantial effect on the degree to which each element is selected. Candidate subgroups might be formed on the basis of a known-to-be-relevant feature dimension in a top-down fashion (Egeth et al., 1984; Grossberg et al., 1994; Kaptein et al., 1995) or by bottom-up segregation independent of the target feature (Hendel et al., 1996).

In considering the implications of our current study, it is important to underscore how it differed from previous work. In general, researchers have supported grouping in visual search by showing that search times for a conjunctively defined target were independent of the number of non-target-group distractors but that they increased as the number of target-group distractors increased (e.g., Egeth et al., 1984; Kaptein et al., 1995). The logic underlying those studies is that if search can be limited to a subset, then search latencies should not be affected by the number of distractors that do not belong to the subset. Thus, the authors’ main interest was the variation of search times for the target with the number of within-subset or out-of-subset search items. In the current experiments, we focused more on distractor locations. We compared the amount of attention at the distractor locations that were easily grouped with the target with the attention at locations that were not. The logic underlying our current study was that if grouping is involved somehow in visual search, then the distractors grouped with the target should be more attended. Also, our underlying assumption for using a spatial probe was that this attention is mediated by location selection, whether it is directed by primary features such as shape or color, or by perceptual organization.

Recently, Theeuwes (1996) showed that search for a color-orientation conjunction target could be performed in parallel when each type of distractor formed a spatially contiguous group, as in our Experiment 2. Based on those results, he suggested sequential two-step parallel processing of a conjunction target, similar to the model of Grossberg et al. (1994). The first parallel process enables a segmentation of one target feature group from another, followed by a second parallel process that allows a pop-out of the conjunction target in one of the segmented groups. Combining Theeuwes’s results with the current data, we would expect that in the first phase of search, the region including the target and all the nearby distractors sharing a relevant feature would be selected. In the second phase, we would expect that only the target location would be selected.

One might argue that the grouping effects in our Experiment 2 and in the study of Hendel et al. (1996) do not necessarily mean that visual search used the candidate target group to select the target. Attention could be directly allocated to the target location first, and then it might spread over the grouped region. In the study of Hendel et al. (1996), for example, two targets in the same color group might have been detected more easily because attention at the first detected target location could have spread over the same color-group region. According to this conjecture, grouping is not used in selecting a search target, but it is a byproduct of selecting a target. If that conjecture were true, however, then we would expect a stronger color-grouping effect in Experiment 1 as well.

That new evidence for using perceptual groups in visual search provides important constraints for many current search models in which perceptual organization of search items is not considered important. For those models, it is now important to explain why grouping occurs in those tasks, even though it may require more processing steps, and how grouping can be implemented within the current structure of the models.

It is not clear under which conditions search proceeds among groups and under which conditions it does not. For example, participants might not use groups for search when there are many groups in a search display (e.g., Experiment 2 in Treisman & Sato, 1990; Theeuwes, 1996). Moreover, the results from the current experiments cannot determine whether participants selected the target group or inhibited the distractor group in visual search. However, our recent data and those of others have suggested that spatial attention operates in search tasks such as those by inhibiting nonselected locations (Cave & Zimmerman, 1997; Cepeda, Cave, Bichot, & Kim, 1998; Moran & Desimone, 1985; Treisman & Sato, 1990). Given inhibition of nontarget locations, it might be more efficient to inhibit an entire group than to inhibit each element individually, depending on the way the display elements are represented.

To summarize, the results from our spatial probes empirically demonstrated location selection directed by perceptual organization in visual search. A group of contiguous spatial locations can be selected together in the course of visual search, even when the group is defined by nonspatial features such as color. The evidence for selection of perceptual groups suggests that grouping should play some role in models of visual selection. The effect of color grouping was strong in Experiment 2 but appeared weak or nonexistent in Experiment 1, suggesting that the selection of a subgroup in visual search differed depending on the nature of the task, the features shared by different elements in the display, or both.

Thanks to Amy Coombs for help in testing participants, and to Narcisse Bichot, Randolph Blake, Keith Clayton, Joe Lappin, Alan Peters, and Jeff Schall for their helpful suggestions. This work was supported in part by National Eye Institute Grant EY 08126 to the Vanderbilt Vision Research Center. The data were presented at the annual meeting of the Psychonomic Society, October 1996, in Chicago.


Beck, J. (1982). Texture segregation. In J. Beck (Ed.), Organization and representation in perception (pp. 285-317). Hillsdale, NJ: Erlbaum.

Bichot, N. P., Cave, K. R., & Pashler, H. (1999). Visual selection mediated by location: Feature-based selection of noncontiguous locations. Perception & Psychophysics, 61, 402-423.

Braun, J., & Sagi, D. (1990). Vision outside the focus of attention. Perception & Psychophysics, 48, 45-58.

Braun, J., & Sagi, D. (1991). Texture-based tasks are little affected by a second task which requires peripheral or central attentive fixation. Perception, 20, 483-500.

Bravo, M., & Blake, R. (1990). Preattentive vision and perceptual groupings. Perception, 19, 515-522.

Cave, K. R. (in press). The FeatureGate model of visual selection. Psychological Research.

Cave, K. R., & Bichot, N. P. (in press). Visuo-spatial attention: Beyond a spotlight model. Psychonomic Bulletin & Review.

Cave, K. R., & Pashler, H. (1995). Visual selection mediated by location: Selecting successive visual objects. Perception & Psychophysics, 57, 421-432.

Cave, K. R., & Wolfe, J. M. (1990). Modeling the role of parallel processing in visual search. Cognitive Psychology, 22, 225-271.

Cave, K. R., & Zimmerman, J. M. (1997). Flexibility in spatial attention before and after practice. Psychological Science, 8, 399-403.

Cepeda, N. J., Cave, K. R., Bichot, N. P., & Kim, M.-S. (1998). Spatial selection via feature-driven inhibition of distractor locations. Perception & Psychophysics, 60, 727-746.

Duncan, J. (1980). The locus of interference in the perception of simultaneous stimuli. Psychological Review, 87, 272-300.

Duncan, J., & Humphreys, G. (1989). Visual search and stimulus similarity. Psychological Review, 96, 433-458.

Egeth, H., Virzi, R., & Garbart, H. (1984). Searching for conjunctively defined targets. Journal of Experimental Psychology: Human Perception & Performance, 10, 32-39.

Friedman-Hill, S., & Wolfe, J. M. (1995). Second-order parallel processing: Visual search for the odd item in a subset. Journal of Experimental Psychology: Human Perception & Performance, 21, 531-551.

Grandison, T. D., Hendel, S. K., & Egeth, H. E. (1996). Grouping in a multiple-target visual search task. Investigative Ophthalmology & Visual Science, 37, 296.

Grossberg, S., Mingolla, E., & Ross, W. D. (1994). A neural theory of attentive visual search: Interaction of boundary, surface, spatial, and object representations. Psychological Review, 101, 470-489.

Harms, L., & Bundesen, C. (1983). Color segregation and selective attention in a nonsearch task. Perception & Psychophysics, 33, 11-19.

Hawkins, H. L., Hillyard, S. A., Luck, S., Mouloua, M., Downing, C. J., & Woodward, D. P. (1990). Visual attention modulates signal detectability. Journal of Experimental Psychology: Human Perception & Performance, 16, 802-811.

Hendel, S. K., Grandison, T. D., & Egeth, H. E. (1996, October). Grouping of task-irrelevant features affects visual search times. Poster presented at the annual meeting of the Psychonomic Society, Chicago.

Hoffman, J. E., & Nelson, B. (1981). Spatial selectivity in visual search. Perception & Psychophysics, 30, 283-290.

Hoffman, J. E., Nelson, B., & Houck, M. R. (1983). The role of attentional resources in automatic detection. Cognitive Psychology, 15, 379-410.

Humphreys, G. W., & Muller, H. J. (1993). Search via recursive rejection (SERR): A connectionist model of visual search. Cognitive Psychology, 25, 43-110.

Humphreys, G. W., Quinlan, P. T., & Riddoch, M. J. (1989). Grouping processes in visual search: Effects with single- and combined-feature targets. Journal of Experimental Psychology: General, 118, 258-279.

Julesz, B. (1984). A brief outline of the texton theory of human vision. Trends in Neurosciences, 7, 41-45.

Julesz, B. (1986). Texton gradients: The texton theory revisited. Biological Cybernetics, 54, 464-469.

Kahneman, D., & Henik, A. (1977). Effects of visual grouping on immediate recall and selective attention. In S. Dornic (Ed.), Attention & performance VI (pp. 307-332). Hillsdale, NJ: Erlbaum.

Kahneman, D., & Henik, A. (1981). Perceptual organization and attention. In M. Kubovy & J. R. Pomerantz (Eds.), Perceptual organization (pp. 181-211). Hillsdale, NJ: Erlbaum.

Kahneman, D., & Treisman, A. M. (1984). Changing views of attention and automaticity. In R. Parasuraman & D. R. Davies (Eds.), Varieties of attention (pp. 29-61). New York: Academic Press.

Kaptein, N. A., Theeuwes, J., & Van der Heijden, A. H. C. (1995). Search for a conjunctively defined target can be selectively limited to a color-defined subset of elements. Journal of Experimental Psychology: Human Perception and Performance, 21, 1053-1069.

Kim, M.-S., & Cave, K. R. (1995). Spatial attention in visual search for features and feature conjunctions. Psychological Science, 6, 376-380.

Kim, M.-S., & Cave, K. R. (1999). Perceptual grouping via spatial attention in a focused-attention task. Manuscript submitted for publication.

Kim, M.-S., & Cave, K. R. (in press). Top-down and bottom-up attentional control: On the nature of interference from a salient distractor. Perception & Psychophysics.

Kowler, E., Anderson, E., Dosher, B., & Blaser, E. (1995). The role of attention in the programming of saccades. Vision Research, 35, 1897-1916.

LaBerge, D., & Brown, V. (1989). Theory of attentional operations in shape identification. Psychological Review, 96, 101-124.

McLeod, P., Driver, J., & Crisp, J. (1988). Visual search for a conjunction of movement and form is parallel. Nature, 332, 154-155.

McLeod, P., Driver, J., Dienes, Z., & Crisp, J. (1991). Filtering by movement in visual search. Journal of Experimental Psychology: Human Perception and Performance, 17, 55-64.

Moran, J., & Desimone, R. (1985). Selective attention gates visual processing in the extrastriate cortex. Science, 229, 782-784.

Müller, H. J., & Humphreys, G. W. (1991). Luminance-increment detection: Capacity-limited or not? Journal of Experimental Psychology: Human Perception and Performance, 17, 107-124.

Nakayama, K., & Silverman, G. H. (1986). Serial and parallel processing of visual feature conjunctions. Nature, 320, 264-265.

Neisser, U. (1967). Cognitive psychology. New York: Appleton-Century-Crofts.

Pashler, H. (1988). Cross-dimensional interaction and texture segregation. Perception & Psychophysics, 43, 307-318.

Shih, S.-I., & Sperling, G. (1996). Is there feature-based attentional selection in visual search? Journal of Experimental Psychology: Human Perception and Performance, 22, 758-779.

Theeuwes, J. (1991). Cross-dimensional perceptual selectivity. Perception & Psychophysics, 50, 184-193.

Theeuwes, J. (1992). Perceptual selectivity for color and form. Perception & Psychophysics, 51, 599-606.

Theeuwes, J. (1996). Parallel search for a conjunction of color and orientation: The effect of spatial proximity. Acta Psychologica, 94, 291-307.

Townsend, J. T. (1990). Serial vs. parallel processing: Sometimes they look like Tweedledum and Tweedledee but they can (and should) be distinguished. Psychological Science, 1, 46-54.

Townsend, J. T., & Nozawa, G. (1995). Spatio-temporal properties of elementary perception: An investigation of parallel, serial, and coactive theories. Journal of Mathematical Psychology, 39, 321-359.

Treisman, A. M. (1982). Perceptual grouping and attention in visual search for features and for objects. Journal of Experimental Psychology: Human Perception and Performance, 8, 194-214.

Treisman, A. M. (1988). Features and objects: The Fourteenth Bartlett Memorial Lecture. The Quarterly Journal of Experimental Psychology, 40A, 201-237.

Treisman, A. M., & Gelade, G. (1980). A feature integration theory of attention. Cognitive Psychology, 12, 97-136.

Treisman, A. M., & Gormican, S. (1988). Feature analysis in early vision: Evidence from search asymmetries. Psychological Review, 95, 15-48.

Treisman, A. M., & Sato, S. (1990). Conjunction search revisited. Journal of Experimental Psychology: Human Perception and Performance, 16, 456-478.

Treisman, A. M., & Souther, J. (1985). Search asymmetry: A diagnostic for preattentive processing of separable features. Journal of Experimental Psychology: General, 114, 285-310.

Wolfe, J. M. (1994). Guided Search 2.0: A revised model of visual search. Psychonomic Bulletin & Review, 1, 202-238.

Wolfe, J. M., Cave, K. R., & Franzel, S. (1989). Guided Search: An alternative to the feature integration model for visual search. Journal of Experimental Psychology: Human Perception and Performance, 15, 419-433.

Wolfe, J. M., & Gancarz, G. (1996). Guided Search 3.0: A model of visual search catches up with Jay Enoch 40 years later. In V. Lakshminarayanan (Ed.), Basic and clinical applications of vision science (pp. 189-192). Dordrecht, The Netherlands: Kluwer Academic.

COPYRIGHT 1999 Heldref Publications