Stimulus Motion: A Basic Influence Factor in Visual Search

Background: Stimulus motion is a fundamental visual attribute in visual search tasks. Nevertheless, the existing literature lacks a comprehensive account of its impact on visual search performance and of the mechanisms governing that impact. Objective: This review aims to fill that gap by surveying the impact of stimulus motion on visual search performance. Method: This review discusses extant studies from two perspectives: the relationship between stimulus motion and visual search performance, and the theoretical models that explain the underlying mechanisms. Results: The effect of stimulus motion on search performance varies with the specific type of motion and can be either negative or positive. In cases of uniform motion, performance decreases as velocity increases under high-velocity conditions, whereas motion's impact is negligible when velocity is low. Three theoretical models, including Feature Integration Theory, the Guided Search model, and visual salience theory, have been introduced to explain how stimulus motion can influence visual search performance. Conclusion: We conclude from this review that stimulus motion may enhance or impair search performance under different conditions. This review provides useful references for practice and for further research in the field of visual search.


INTRODUCTION
Vision is the primary means by which humans gather information from the external world. Visual search, the fundamental process of locating specific targets among visual objects through eye movements [1], is a crucial part of vision. For the ergonomic design of visual tasks, a basic two-part question arises: what factors influence visual search performance, and how do they do so? Numerous studies have explored this question and have shown that influential factors fall into three main categories: stimulus characteristics, individual observer characteristics, and environmental factors. Among these, stimulus characteristics receive the most attention in visual search research because many stimulus features are visual [1]. One important stimulus characteristic is motion, which divides visual search into static and dynamic search. We chose to focus on motion for three reasons. First, motion is a basic visual feature of stimuli that many researchers have studied. Second, motion is a complex visual dimension that can be further divided into sub-dimensions; two fundamental attributes of motion are velocity and movement direction [2,3]. Third, motion is closely relevant to real-world tasks as opposed to laboratory tasks. It is worth noting that most visual search research has been conducted under static conditions, even though the majority of real-world search tasks take place in dynamic environments. Findings from static conditions may therefore not apply universally to dynamic scenarios.
This paper provides an overview of the relationship between stimulus motion and visual search performance, along with the underlying mechanisms. Motion is a fundamental feature in visual search, encompassing dimensions such as velocity and motion direction. Additionally, this article introduces three representative theoretical models to provide insights into how stimulus motion affects visual search performance.

STIMULUS MOTION AND VISUAL SEARCH PERFORMANCE
Although static stimuli have been a common choice in visual search studies, it is essential to acknowledge that real-world visual scenes are typically dynamic, with the observer's eyes and the visual stimuli in relative motion. Dynamic vision has been found to have a stronger relationship with practical life than static vision [2]. Indeed, many visual search behaviors occur not in static situations but in dynamically changing environments [3]. When viewing dynamic scenes, eye movements differ from those in static scenes, and the gaze area is smaller under dynamic conditions [4]. Human eyes are sensitive to motion signals, which can be detected even at velocities as high as 10,000 deg/sec [5]. Generally, it takes more time to process motion than other visual features such as color [6]. However, the larger the change in velocity, the less time it takes to detect the motion [7]. In visual search tasks, motion significantly influences the saliency of the target in the feature map [8], thus affecting the allocation of attentional resources [9]. When a motion signal provides information about the target, it attracts attention; when no target information is conveyed through motion, it does not capture attention. If the target is the only moving item, no attentional resources are needed to detect motion. Dynamic visual search studies primarily focus on the influence of motion type and moving velocity on search performance.

Motion Type and Visual Search Performance
Under different dynamic visual search conditions, there are various types of motion. In terms of motion direction, motion can be categorized into linear motion, rotation, and composite motion. In terms of uniformity, it can be divided into uniform and non-uniform motion. Concerning which visual elements move, there are three scenarios: only the targets or only the distractors move; some items move while others remain static; or all items move.
When distractors remain static, a moving target is easier to detect than a static one, because the visual system filters static signals better than moving signals [10]. Similarly, search performance is better when the target moves among static distractors than when the target is static among moving distractors [11]. For search tasks with moving distractors and a static target, reaction time is shorter when the distractors move homogeneously rather than randomly [12]. Homogeneous motion allows observers to perceptually group the distractors, which in turn improves search performance [13]. When all distractors are in motion, target detection is more efficient when the target moves faster than the distractors [14]. When one searches for a fast-moving target amidst slower-moving distractors, reaction time increases gradually as the number of distractors grows; conversely, when detecting a slow-moving target among fast-moving distractors, reaction time is prolonged and increases linearly with the number of distractors [15]. With rotating targets, targets can be selected efficiently from among translating items but not from among distractors rotating in the opposite direction [16]. Motion can influence search performance because it changes the discriminative features of items, such as relative position [17]. Horowitz et al. (2007) designed six experiments to investigate the effects of three types of motion, i.e., ballistic motion, random walk, and composite motion, on visual search performance [14]. They found that search performance was determined by two features: motion change and linear motion. First, it is easier to detect a target with more linear-motion features among distractors with fewer such features than the reverse. Second, it is likewise easier to find a target with more change features among distractors with fewer. Although most dynamic visual search studies have found a negative effect of motion on search performance [18], some research using real-world scenes as stimuli revealed improved performance under dynamic conditions. Thornton et al. (2002) found that searching for a human face was more efficient under dynamic than static conditions [19]. Pilz et al. (2006) drew a similar conclusion and attributed the improvement to faster recognition under dynamic conditions, because participants were familiar with the target faces [20].

Moving Velocity and Visual Search Performance
Studies on moving velocity and its impact on visual search performance have primarily focused on search tasks in which all items move uniformly. Such tasks are common in practical scenarios, such as X-ray screening in security checks and driving [21]. William et al. (1963) investigated the effect of stimulus velocity on visual search performance [22]. They designed six groups of search tasks with velocities ranging from 0 to 32 deg/sec and horizontal left-to-right motion. The results indicated that when velocity exceeded 8 deg/sec, search performance deteriorated with increasing velocity. Erickson (1964) conducted a multi-target dynamic search experiment in which items moved from the upper to the lower portion of the visual frame [23]. Participants found fewer targets as velocity increased. In addition to reaction time and the number of targets found, velocity has also been found to significantly affect search accuracy [24]. Drury et al. (1996) conducted an experiment in which items moved from the lower to the upper portion of the visual frame [25]. The results revealed that when velocity ranged from 1.5 deg/sec to 4 deg/sec, performance did not decline with increasing velocity. Under static and low-velocity conditions, inhibition of return (the phenomenon whereby responses to targets appearing at a previously cued or inspected location are slower than responses to targets at uncued locations) helps improve search efficiency by discouraging re-examination of already-inspected locations [26]. However, when items move at high speed, inhibition of return disappears, leading to a decrease in search efficiency and a subsequent decline in performance [27]. Additionally, as velocity increases, retinal image smearing (motion blur experienced when viewing moving targets, because the visual system sums signals over time, approximately 120 ms in daylight [28]) increases. The resulting reduction in the discrimination efficiency of the visual system negatively impacts search performance [6].
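The retinal-smearing account lends itself to a back-of-the-envelope calculation. The sketch below is illustrative only: it assumes smear extent grows linearly as velocity times the roughly 120 ms daylight summation window cited above, which is a simplification rather than an established formula.

```python
# Illustrative approximation: retinal smear extent ~ velocity x temporal
# integration window (~120 ms in daylight, per the summation time cited above).
INTEGRATION_WINDOW_S = 0.12  # assumed daylight summation time in seconds

def retinal_smear_deg(velocity_deg_per_s: float) -> float:
    """Rough extent (in degrees) over which a moving stimulus is smeared."""
    return velocity_deg_per_s * INTEGRATION_WINDOW_S

# At 4 deg/sec (the upper bound where Drury et al. saw no decline) the smear
# stays under half a degree; at 32 deg/sec it approaches four degrees.
for v in (1.5, 4.0, 8.0, 32.0):
    print(f"{v:5.1f} deg/sec -> ~{retinal_smear_deg(v):.2f} deg smear")
```

Under this simplification, the velocities where performance stayed flat (1.5 to 4 deg/sec) produce smear well below one degree, while the velocities where performance deteriorated produce substantially larger smear, consistent with the discrimination-efficiency account.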

THEORETICAL MODELS EXPLAINING THE INFLUENTIAL MECHANISMS
Most visual search studies examining the influence of stimulus motion on search performance have focused on the input (the search task) and the output (search performance), while the search process itself has often been treated as a "black box". In visual search, selective visual attention is deployed in two forms: bottom-up and top-down guidance. Bottom-up guidance is scene-dependent: attention is naturally drawn to the more visually salient elements. Top-down guidance, on the other hand, is task-dependent, directing attention to items with known features or characteristics. In general, top-down control is a conscious process, while bottom-up control operates unconsciously; in most practical search tasks, both forms of guidance play a role simultaneously. To better understand visual search behavior, researchers have developed numerous theoretical models of visual attention. Three of the best-known are Feature Integration Theory, the Guided Search model, and visual salience theory. Feature Integration Theory (FIT) introduces a systematic framework that combines both serial and parallel search. FIT is primarily concerned with bottom-up processes, emphasizing that human attention is driven by the stimulus itself and is therefore naturally drawn to the more salient items. To account for conflicting results, Treisman modified the theory to incorporate search asymmetry and perceptual grouping effects. Search asymmetry describes the phenomenon whereby search becomes parallel when the target has a distinctive, unique feature, but serial when the target is distinguished solely by the absence or reduction of a feature present in all distractors [29]. This suggests the existence of two types of visual codes in early vision: (a) sets of substitutive features and (b) single positively coded features [30]. During the preattentive stage, perceptual groups are organized independently within each dimension, so search appears serial across groups but parallel within them [31]. Additionally, when the target and distractors are highly similar, even basic feature search may become a serial process [30].
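FIT's core behavioral signature is the relationship between reaction time and set size: flat for parallel (pop-out) search, linearly increasing for serial search. The minimal simulation below illustrates that signature; the baseline, per-item cost, and noise values are hypothetical, chosen only to make the two patterns visible.

```python
import random

random.seed(0)

BASE_RT_MS = 400     # hypothetical baseline response time
SERIAL_COST_MS = 50  # hypothetical per-item inspection cost

def parallel_search_rt(set_size: int) -> float:
    # Feature ("pop-out") search: the target's unique feature is detected
    # preattentively, so RT is roughly flat across set sizes.
    return BASE_RT_MS + random.gauss(0, 10)

def serial_search_rt(set_size: int) -> float:
    # Conjunction or absence-defined search: items are inspected one by one;
    # on average half of them are checked before the target is found.
    return BASE_RT_MS + SERIAL_COST_MS * set_size / 2 + random.gauss(0, 10)

for n in (4, 8, 16):
    print(n, round(parallel_search_rt(n)), round(serial_search_rt(n)))
```

The flat versus linear set-size slopes produced here are the same diagnostic used throughout the motion studies above, e.g., the linear growth of reaction time when searching for a slow target among fast distractors [15].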
As a supplement to and development of Feature Integration Theory, the Guided Search model computes a ranking of stimuli by combining information derived from both bottom-up and top-down processes. The model has received support from various studies and can be used to explain learning effects, novelty effects, and similarity effects in visual search. Learning corresponds to an enhancement in the selectivity of the top-down selection mechanism within the Guided Search model, which improves search performance [32]. In visual search tasks, participants learn to associate particular visual features with the set of familiar distractors, enabling them to steer their search away from those features and producing a preference for unfamiliar distractors [33]. Both visual similarity and semantic similarity can guide attention in visual search, but the visual similarity effect is substantially larger than the semantic similarity effect [34]. The Guided Search model also applies to real-world scene search tasks, where scene-based context and specific target templates can be employed to guide search behavior [35].
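The combination at the heart of Guided Search can be sketched as a weighted sum of a bottom-up term (local feature contrast) and a top-down term (match to the known target template). Everything below is a hypothetical toy, not the model's actual equations: the feature representation, weights, and scoring are invented for illustration.

```python
# Toy sketch of the Guided Search idea: each item's priority is a weighted sum
# of bottom-up salience (feature contrast with the other items) and top-down
# activation (similarity to the searched-for target template).

def priority(items, target_features, w_bottom_up=1.0, w_top_down=1.0):
    """items: list of feature dicts, e.g. {"motion": 1.0}; higher score = attended earlier."""
    scores = []
    for i, item in enumerate(items):
        others = [o for j, o in enumerate(items) if j != i]
        # Bottom-up: average feature difference from the other items.
        bottom_up = sum(
            abs(item[f] - o[f]) for o in others for f in item
        ) / max(len(others), 1)
        # Top-down: closeness to the target template (negative distance).
        top_down = -sum(abs(item[f] - target_features[f]) for f in item)
        scores.append(w_bottom_up * bottom_up + w_top_down * top_down)
    return scores

# A single moving item among static distractors, with motion known to define
# the target: both routes favor item 2, so attention is guided there first.
items = [{"motion": 0.0}, {"motion": 0.0}, {"motion": 1.0}, {"motion": 0.0}]
scores = priority(items, target_features={"motion": 1.0})
print(scores.index(max(scores)))  # -> 2
```

Lowering `w_top_down` toward zero reduces the sketch to a purely stimulus-driven ranking, which mirrors how learning effects are framed above: training sharpens the top-down weighting rather than the bottom-up signal [32].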
As a quantitative computational model for analyzing and predicting visual behavior and performance, visual salience theory has gained increasing importance in computer vision in recent years. Salience theory explains more robustly than Feature Integration Theory why conjunction search can be fast and unaffected by set size. Target salience can expedite visual search irrespective of how the target is defined or which features distinguish it from non-targets, primarily because salient objects contrast conspicuously with distractors [36]. Compared with other models, the salience model performs well in predicting both search behavior and performance [37]. Rosenholtz's salience model successfully predicted the results of all classic motion search experiments [38]. Another advantage of the salience model is its strong alignment with actual eye movements, making it useful for predicting where humans look and what interests them [39]. The salience model has also outperformed many popular cognitive approaches in analyzing free-viewing tasks and complex natural-scene tasks [40]. Additionally, other studies have shown that visual salience and motion speed interact to affect people's ability to detect moving targets [41]. More recent salience models place greater emphasis on motion features and show stronger predictive power for human gaze locations [42].
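A hedged sketch of the core intuition in Rosenholtz's approach: a target is salient to the extent that it is a statistical outlier relative to the distractor distribution in feature space (here reduced to one dimension, motion velocity). The actual model is considerably richer; the z-score-style measure and all numbers below are illustrative simplifications.

```python
import statistics

def motion_salience(target_v, distractor_vs):
    """Outlier-based salience: distance of the target's velocity from the
    distractor mean, in units of the distractor standard deviation."""
    mu = statistics.mean(distractor_vs)
    sigma = statistics.pstdev(distractor_vs) or 1e-9  # guard against zero spread
    return abs(target_v - mu) / sigma

# A fast target among slow, homogeneous distractors is a strong outlier...
high = motion_salience(10.0, [2.0, 2.1, 1.9, 2.0])
# ...but the same target barely stands out among heterogeneous distractors.
low = motion_salience(10.0, [2.0, 8.0, 1.0, 12.0])
print(high > low)  # -> True
```

This captures why homogeneous distractor motion helps search [12,13]: tightening the distractor distribution shrinks its spread, making the same target a much larger outlier.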

CONCLUSIONS
Visual search is a significant area of research within the field of human factors and ergonomics. This study provides a selective review of how stimulus motion impacts visual search performance, along with the theoretical models that explain the underlying mechanisms. Factors affecting search performance have long been a central focus of visual search studies and fall into three main groups: stimulus characteristics, individual factors, and environmental factors. Numerous investigations have explored the impact of stimulus characteristics on visual search performance. Motion, as a fundamental visual attribute, is closely connected with real-world visual search tasks. Because many real-world search tasks are conducted under dynamic conditions, conclusions drawn from static visual search do not always apply to dynamic scenarios. Dynamic visual search studies primarily examine how motion type and moving velocity affect search performance. Motion's influence on search performance varies with the specific type of motion and can be either negative or positive. In cases of uniform motion, performance decreases as velocity increases under high-velocity conditions, whereas motion's impact is negligible when velocity is low. Three classical models have been proposed to elucidate how motion can affect visual search behavior and performance. This article aims to highlight the significance of motion stimuli in visual search tasks.