Developing sophisticated object detection and recognition algorithms has been a long-standing challenge in computer and robot vision research. Such algorithms are required in most applications of computational vision, including robotics, medical imaging, intelligent cars, surveillance, image segmentation [5, 6]
and content-based image retrieval. One of the major challenges in designing generic object detection and recognition systems is to construct methods that are fast and capable of operating on standard computer platforms without any prior knowledge. To that end, a pre-selection mechanism is essential to enable subsequent processing to focus only on relevant data. One promising approach to realizing this mechanism is visual attention: it selects the regions in a visual scene that are most likely to contain objects of interest. The field of visual attention is currently the focus of much research on both biological and artificial systems.
Attention is generally controlled by one of, or a combination of, two mechanisms: 1) top-down control, which voluntarily chooses the focus of attention in a cognitive and task-dependent manner, and 2) bottom-up control, which reflexively directs the visual focus based on observed saliency attributes. The first biologically plausible model for explaining the human attention system was proposed by Koch and Ullman , and follows the latter approach. The basic concept underlying this model is the feature integration theory developed by Treisman and Gelade , which has been one of the most influential theories of human visual attention. According to the feature integration theory, in a first step of visual processing, several primary visual features are processed and represented in separate feature maps, which are later integrated into a saliency map that can be accessed to direct attention to the most conspicuous areas. In the example shown in Fig. 1, the red car on the right of the frame is highly salient, and therefore people direct their attention to this area. The Koch-Ullman model has attracted the attention of many researchers, especially after the development of an implementation by Itti, Koch and Niebur . Since then, many attempts have been made to improve the Koch-Ullman model [11, 12, 13, 14, 15] and to extend it to video signals [15, 16, 17, 18].
Although the feature integration theory explains the early human visual system well, it faces one crucial problem: people may attend to different locations in the same visual input at the same time. The example shown in Fig. 1 illustrates this phenomenon: people may instead pay attention to the blue traffic sign at the center, the white line at the bottom left, or other regions. Previously, this inconsistent visual attention has been attributed to object-based attention rather than location-based attention , which implies that it is heavily controlled by higher-order processes such as top-down intention, knowledge and preferences. Another typical example can be seen in Fig. 2. Consider a search task with a single target among many distractors. We can easily see that finding the target is easy in the left case and difficult in the right case. However, the feature integration theory predicts that we should immediately identify the target in both the easy and the hard searches, since we always select the location where the response of the detector tuned to the target's visual property is greater than at any other location.
Signal detection theory offers an account of this discrepancy. According to this theory, the elements in a visual display are internally represented as independent random variables. Again consider the search task shown in Fig. 2. The response of a detector tuned to the target orientation is represented as a Gaussian density, and the response of the same detector to a distractor is also a Gaussian density, but with a lower mean value. For the target and vertical distractors, these densities barely overlap, which implies that we can immediately detect the target. In the hard search case, on the other hand, the target density is identical to that of the easy search, but the distractor density is shifted rightward, so that the densities corresponding to the target and the distractors overlap. This implies that the probability that we focus on a distractor becomes high, and therefore it takes longer to detect the target.
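This overlap argument can be checked numerically. The sketch below is a Monte-Carlo simulation with hypothetical detector means and a shared standard deviation (not values from the paper): it estimates the probability that the target detector's response exceeds every distractor response, which is near 1 when the densities barely overlap and drops sharply when the distractor density is shifted toward the target.

```python
import math
import random

def p_target_max(mu_t, mu_d, sigma, n_distractors, n_trials=20000, seed=0):
    """Monte-Carlo estimate of the probability that the target detector
    response exceeds all distractor responses (all responses Gaussian
    with a shared standard deviation)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_trials):
        t = rng.gauss(mu_t, sigma)
        if all(rng.gauss(mu_d, sigma) < t for _ in range(n_distractors)):
            wins += 1
    return wins / n_trials

# Hypothetical settings: easy search (well-separated densities) vs.
# hard search (distractor density shifted rightward, heavy overlap).
easy = p_target_max(mu_t=1.0, mu_d=0.0, sigma=0.2, n_distractors=11)
hard = p_target_max(mu_t=1.0, mu_d=0.8, sigma=0.2, n_distractors=11)
# easy is close to 1; hard is much smaller, mirroring slow hard search.
```

The same qualitative behavior holds for any number of distractors; increasing the overlap only makes the gap between the two cases wider.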
Within the paradigm of signal detection theory, we propose a new stochastic model of visual attention. With this model, we can automatically predict the likelihood of where humans typically focus on a visual input. The proposed model is composed of a dynamic Bayesian network with four layers: (1) a saliency map that gives the average saliency response at each position of a video frame, (2) a stochastic saliency map that converts the saliency map into a natural human response through a Gaussian state space model, based on the findings of signal detection theory, (3) an eye movement pattern that controls the degree of "overt shifts of attention" (shifts with saccadic eye movements) through a hidden Markov model (HMM), and (4) an eye focusing density map that predicts the positions people are likely to attend to, based on the stochastic saliency map and the eye movement patterns. When describing the Bayesian network of visual attention, the principle of signal detection theory is introduced, namely that the position where the stochastic saliency map takes its maximum value is the eye focusing position. The proposed model also provides a framework for simulating the top-down cognitive state of a person at the layer of eye movement patterns. The introduction of eye movement patterns as hidden states of an HMM enables us to describe the mechanisms of eye focusing and eye movement naturally.
The paper is organized as follows: Section 2 discusses related research that focuses on modeling human visual attention with probabilistic techniques or concepts. Section 3 describes the proposed stochastic model of visual attention. Section 4 presents methods for finding maximum likelihood (ML) estimates of the model parameters based on the Expectation-Maximization (EM) framework. Section 5 discusses evaluation results. Finally, Section 6 summarizes the report and discusses future work.
2 Related work
Several previous studies have modeled human visual attention by using probabilistic techniques or concepts. Itti and Baldi  investigated a Bayesian approach to detecting surprising events in video signals; their approach models surprise as the Kullback-Leibler divergence between the prior and posterior distributions of fundamental features. Avraham and Lindenbaum  utilized a graphical model approximation to extend their static saliency model based on self-similarities. Boccignone  introduced a nonparametric Bayesian framework to achieve object-based visual attention. Gao, Mahadevan and Vasconcelos [15, 23] developed a decision-theoretic attention model for object detection.
The main contribution of our stochastic model relative to the above previous studies is a unified stochastic model that integrates "covert shifts of attention" (shifts of attention without saccadic eye movements) driven by bottom-up saliency with "overt shifts of attention" (shifts of attention with saccadic eye movements) driven by eye movement patterns, by means of a dynamic Bayesian network. The proposed model also provides a framework that simulates and combines the bottom-up visual saliency response and the top-down cognitive state of a person to estimate probable attended regions; eye movement patterns could in principle also carry more sophisticated top-down information. How to integrate such top-down information is one of the most important directions for future research.
3 Stochastic visual attention model
Figs. 3 and 4 illustrate the graphical representation of the proposed visual attention model. The model consists of four layers: (deterministic) saliency maps, stochastic saliency maps, eye focusing positions and eye movement patterns. Before describing the model in detail, let us introduce some notation and definitions.
$V = \{v_t\}_{t=1}^{T}$ denotes an input video, where $v_t$ is the $t$-th frame of the video and $T$ is the duration (i.e. the total number of frames) of the video $V$. The symbol $V$ also denotes the set of coordinates in a frame; for example, a position in a frame is represented as $x \in V$.
$S = \{S_t\}_{t=1}^{T}$ denotes a saliency video, which comprises the sequence of saliency maps obtained from the input video $V$. Each saliency map is denoted as $S_t = \{s_t(x)\}_{x \in V}$, where $s_t(x)$, called the saliency, is the pixel value at position $x$. Each saliency value represents the strength of the visual stimulus at the corresponding position of a frame as a real value between 0 and 1.
$\bar{S} = \{\bar{S}_t\}_{t=1}^{T}$ denotes a stochastic saliency video, which comprises the sequence of stochastic saliency maps obtained from the input video $V$. Each stochastic saliency map is denoted as $\bar{S}_t = \{\bar{s}_t(x)\}_{x \in V}$, where $\bar{s}_t(x)$, called the stochastic saliency, is the pixel value at position $x$. Each stochastic saliency value corresponds to the saliency response perceived through a certain kind of random process.
$U = \{u_t\}_{t=1}^{T}$ denotes a sequence of eye movement patterns, each of which represents a pattern of eye movements. A previous study  implies that there are two typical patterns (Peters and Itti  prepared a third pattern, the interactive state, which can be seen when playing video games, driving a car or browsing the web; we omit the interactive state since our setting, simply watching a video, does not include any interaction): 1) the passive state, in which one tends to stay around one particular position to continuously capture important visual information, and 2) the active state, in which one actively moves around and searches for various visual information in the scene. Eye movement patterns may reflect the purposes or intentions of human eye movements.
$X = \{x_t\}_{t=1}^{T}$ denotes a sequence of eye focusing positions. The proposed model estimates the eye focusing position $x_t$ by integrating the bottom-up information (stochastic saliency maps) and the top-down information (eye movement patterns). A map that represents the density of eye focusing positions is called an eye focusing density map.
Only the saliency maps are observed; eye focusing positions therefore have to be estimated while the other layers (stochastic saliency maps and eye movement patterns) remain hidden.
In what follows, we denote the probability density function (PDF) of $a$ as $p(a)$, the conditional PDF of $a$ given $b$ as $p(a \mid b)$, and the PDF of $a$ with a parameter $\theta$ as $p(a; \theta)$.
The rest of this section describes the details of the proposed stochastic model and the method for estimating eye focusing positions solely from input videos.
3.2 Saliency maps
We used the Itti-Koch saliency model  shown in Fig. 5 to extract (deterministic) saliency maps. Our implementation includes twelve feature channels sensitive to color contrast (red/green and blue/yellow), temporal luminance flicker, luminance contrast, four orientations (0°, 45°, 90° and 135°), and two oriented motion energies (horizontal and vertical). These features detect spatial outliers in image space using a center-surround architecture. Center and surround scales are obtained from dyadic pyramids with 9 scales, from scale 0 (the original image) to scale 8 (the image reduced by a factor of 256 in both the horizontal and vertical dimensions). Six center-surround difference maps are then computed as point-wise differences across pyramid scales, for combinations of three center scales ($c \in \{2, 3, 4\}$) and two center-surround scale differences ($\delta \in \{3, 4\}$). Each feature map is additionally endowed with internal dynamics that provide strong spatial within-feature and within-scale competition for activity, followed by within-feature, across-scale competition. In this way, initially noisy feature maps can be reduced to sparse representations of only those outlier locations that stand out from their surroundings. All feature maps finally contribute to a unique saliency map representing the conspicuity of each location in the visual field. The saliency map is adjusted with a centrally weighted 'retinal' filter, which places higher emphasis on the saliency values around the center of the video frame.
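The pyramid and center-surround steps can be sketched as follows. This is a minimal stand-in, not our actual implementation: box-filter downsampling replaces the Gaussian pyramid, only the across-scale difference is shown (the within-feature competition and normalization are omitted), and the image size and outlier location are hypothetical.

```python
import numpy as np

def pyramid(img, levels=9):
    """Dyadic pyramid: repeated 2x downsampling by 2x2 box averaging
    (a crude stand-in for Gaussian pyramid filtering)."""
    pyr = [np.asarray(img, float)]
    for _ in range(levels - 1):
        a = pyr[-1]
        h, w = a.shape[0] // 2, a.shape[1] // 2
        pyr.append(a[:2 * h, :2 * w].reshape(h, 2, w, 2).mean(axis=(1, 3)))
    return pyr

def center_surround(pyr, c, delta):
    """Point-wise |center - surround| across pyramid scales; the surround
    map is brought back to the center scale by nearest-neighbour repeat."""
    center, surround = pyr[c], pyr[c + delta]
    up = np.repeat(np.repeat(surround, 2 ** delta, axis=0), 2 ** delta, axis=1)
    return np.abs(center - up[:center.shape[0], :center.shape[1]])

# Hypothetical luminance frame with a single bright outlier region.
img = np.zeros((256, 256))
img[120:136, 120:136] = 1.0
pyr = pyramid(img)
# Six maps: center scales c in {2, 3, 4}, scale differences delta in {3, 4}.
maps = [center_surround(pyr, c, d) for c in (2, 3, 4) for d in (3, 4)]
```

The response of each map peaks at the outlier region, which is what makes the subsequent across-scale combination produce a sparse conspicuity map.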
3.3 Stochastic saliency maps
When estimating a stochastic saliency map $\bar{S}_t$, we introduce a pixel-wise state space model characterized by the following two relationships:

$p(s_t(x) \mid \bar{s}_t(x)) = N(s_t(x); \bar{s}_t(x), \sigma_{s2}^2)$ (observation model)
$p(\bar{s}_t(x) \mid \bar{s}_{t-1}(x)) = N(\bar{s}_t(x); \bar{s}_{t-1}(x), \sigma_{s1}^2)$ (state transition model)

where $N(\cdot; \mu, \sigma^2)$ is the Gaussian PDF with mean $\mu$ and variance $\sigma^2$. The first equation in the above model implies that a saliency map is observed through a Gaussian random process, and the second equation exploits the temporal characteristics of the human visual system. For brevity, in this section only, we omit the position $x$ where an explicit expression is unnecessary, e.g. $\bar{s}_t$ instead of $\bar{s}_t(x)$.
We employ a Kalman filter to recursively compute the stochastic saliency map. Assume that the density at each position of the stochastic saliency map at time $t$ given the saliency maps up to time $t-1$ is given by the following Gaussian PDF, where the position $x$ is omitted for simplicity:

$p(\bar{s}_t \mid s_{1:t-1}) = N(\bar{s}_t; \mu_{t(-)}, \sigma_{t(-)}^2)$

Then, the density of the stochastic saliency map at time $t$ given the saliency maps up to time $t$ is updated by the following recurrence relations:

$\mu_{t(-)} = \mu_{t-1(+)}, \quad \sigma_{t(-)}^2 = \sigma_{t-1(+)}^2 + \sigma_{s1}^2$
$\mu_{t(+)} = \frac{\sigma_{s2}^2 \, \mu_{t(-)} + \sigma_{t(-)}^2 \, s_t}{\sigma_{t(-)}^2 + \sigma_{s2}^2}, \quad \sigma_{t(+)}^2 = \frac{\sigma_{t(-)}^2 \, \sigma_{s2}^2}{\sigma_{t(-)}^2 + \sigma_{s2}^2}$

so that $p(\bar{s}_t \mid s_{1:t}) = N(\bar{s}_t; \mu_{t(+)}, \sigma_{t(+)}^2)$.
The above model assumes that the model parameters $(\sigma_{s1}, \sigma_{s2})$ of every Gaussian random variable are independent of the frame index $t$ and the position $x$. We can easily extend the model to adaptive model parameters that depend on the frame index and the position; in that case, the model parameters can be updated via on-line learning with adaptive Kalman filters (e.g. [25, 26]).
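The recurrence above is the standard scalar Kalman predict-update cycle. A minimal sketch for a single pixel follows, with hypothetical noise variances (`var_state` playing the role of $\sigma_{s1}^2$ and `var_obs` of $\sigma_{s2}^2$) and a hypothetical saliency trace:

```python
def kalman_step(mean_prev, var_prev, s_obs, var_state, var_obs):
    """One predict-update step of the pixel-wise Kalman filter for the
    random-walk state model with Gaussian observation noise."""
    # Predict: the mean carries over, the variance grows by the state noise.
    var_pred = var_prev + var_state
    # Update: blend prediction and observation according to their precisions.
    gain = var_pred / (var_pred + var_obs)
    mean_post = mean_prev + gain * (s_obs - mean_prev)
    var_post = (1.0 - gain) * var_pred
    return mean_post, var_post

# Hypothetical parameters; track a saliency step from 0.2 up to 0.8.
mean, var = 0.2, 1.0
for s in [0.2, 0.2, 0.8, 0.8, 0.8]:
    mean, var = kalman_step(mean, var, s, var_state=0.01, var_obs=0.05)
# After the step in the observations, the posterior mean has moved most
# of the way from 0.2 toward 0.8 while the variance has shrunk.
```

A small `var_state` makes the filter smooth (slow to react), which is exactly the temporal-characteristics role the state equation plays in the model.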
3.4 Estimating eye motions
By incorporating the stochastic saliency map $\bar{S}_t$ and the eye movement pattern $u_t$, we introduce the following transition PDF to estimate the eye focusing position $x_t$:

$p(x_t, u_t \mid x_{t-1}, u_{t-1}, p(\bar{S}_t)) \propto p(x_t \mid p(\bar{S}_t)) \cdot p(u_t \mid u_{t-1}) \cdot p(x_t \mid x_{t-1}, u_t) \qquad (3)$

where the PDF of the stochastic saliency map at time $t$ is written as $p(\bar{S}_t)$ for simplicity, namely $p(\bar{S}_t) = \{p(\bar{s}_t(x))\}_{x \in V}$.
The stochastic saliency map controls "covert shifts of attention" through the PDF $p(x_t \mid p(\bar{S}_t))$. (This notation may seem unusual; however, the PDF of eye focusing positions estimated from the stochastic saliency map is determined by the PDF of the stochastic saliency map, not by the map itself, as shown in Section 3.4.3.) On the other hand, the eye movement pattern controls the degree of "overt shifts of attention". In what follows, we call the pair $(x_t, u_t)$, consisting of an eye focusing position and an eye movement pattern, the eye focusing state for brevity. The following PDF of eye focusing positions, given the PDFs of the stochastic saliency maps up to time $t$, characterizes the eye focusing density map at time $t$:

$p(x_t \mid p(\bar{S}_1), \ldots, p(\bar{S}_t)) \qquad (4)$
Since a closed-form formula for computing Eq. (4) cannot be derived, we instead introduce a technique inspired by a particle filter with Markov chain Monte Carlo (MCMC) sampling. The PDF of eye focusing states shown in Eq. (5) can be approximated by samples of eye focusing states $\{(x_t^{(i)}, u_t^{(i)})\}_{i=1}^{N}$ and their associated weights $\{w_t^{(i)}\}_{i=1}^{N}$ as

$p(x_t, u_t \mid \cdot) \approx \sum_{i=1}^{N} w_t^{(i)} \, \delta(x_t - x_t^{(i)}) \, \delta(u_t, u_t^{(i)}) \qquad (6)$

where $N$ is the number of samples and $\delta$ represents the Kronecker delta.
Fig. 6 shows the procedure for estimating eye focusing density maps, which can be separated into three steps: 1) generating samples from the PDFs $p(u_t \mid u_{t-1})$ and $p(x_t \mid x_{t-1}, u_t)$ derived from an eye movement pattern, 2) weighting the samples with the PDF $p(x_t \mid p(\bar{S}_t))$ derived from a stochastic saliency map, and 3) re-sampling if necessary. We now describe each step in detail.
3.4.2 Propagation with eye movement patterns
The second and third terms of Equation (3) indicate that the current eye focusing position depends on the previous eye focusing position, and that the degree of eye movement is driven by one's eye movement pattern $u_t$.
The second term of Equation (3) is characterized by the transition probability of eye movement patterns, defined by a matrix given in advance.
The third term is characterized by model parameters $d_u$ and $\sigma_u$, which represent the average and standard deviation of the distances of eye movements for pattern $u$, through a shifted 2D Gaussian PDF such that

$p(x_t \mid x_{t-1}, u_t) \propto \exp\!\left( -\frac{(\| x_t - x_{t-1} \| - d_{u_t})^2}{2 \sigma_{u_t}^2} \right)$
Samples of eye focusing states are generated with MCMC sampling. Suppose that samples of eye focusing states at time $t-1$ have already been obtained. Then, samples at time $t$ are drawn from the second and third terms of Equation (3) with the Metropolis algorithm  as

$u_t^{(i)} \sim p(u_t \mid u_{t-1}^{(i)}), \quad x_t^{(i)} \sim p(x_t \mid x_{t-1}^{(i)}, u_t^{(i)})$

where $a \sim p(\cdot)$ indicates that a sample $a$ is drawn from the PDF $p(\cdot)$. This top-down part corresponds to the propagation step of a particle filter.
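The position-sampling part can be sketched as below: an illustrative Metropolis sampler targeting the distance-based eye movement density, with a symmetric random-walk proposal. The parameter values are hypothetical, and for brevity the sketch omits the preceding draw of the eye movement pattern from the transition matrix.

```python
import math
import random

def metropolis_move(x_prev, mu_d, sigma_d, n_steps=50, seed=None):
    """Draw an eye focusing position whose distance from x_prev follows a
    Gaussian with mean mu_d and std sigma_d (the shifted 2D Gaussian over
    eye movement distances), via the Metropolis algorithm."""
    rng = random.Random(seed)

    def log_density(pos):
        d = math.hypot(pos[0] - x_prev[0], pos[1] - x_prev[1])
        return -0.5 * ((d - mu_d) / sigma_d) ** 2

    x = (x_prev[0] + mu_d, x_prev[1])  # start on the high-density ring
    for _ in range(n_steps):
        cand = (x[0] + rng.gauss(0, sigma_d), x[1] + rng.gauss(0, sigma_d))
        # Symmetric proposal: accept with probability min(1, p(cand)/p(x)).
        if math.log(rng.random() + 1e-300) <= log_density(cand) - log_density(x):
            x = cand
    return x

# Hypothetical pattern-dependent parameters: small moves in the passive
# state, large moves in the active state.
passive = metropolis_move((100.0, 100.0), mu_d=2.0, sigma_d=1.0, seed=1)
active = metropolis_move((100.0, 100.0), mu_d=40.0, sigma_d=10.0, seed=2)
```

Because the density depends only on the movement distance, direct inversion sampling is awkward in 2D, which is what motivates the Metropolis scheme here.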
3.4.3 Updating with stochastic saliency maps
In the second step, the sample weights are updated based on the first term of Equation (3). Formally, the weight $w_t^{(i)}$ of the $i$-th sample at time $t$ can be calculated as

$w_t^{(i)} \propto w_{t-1}^{(i)} \cdot p(x_t^{(i)} \mid p(\bar{S}_t))$

As shown in Equation (6), the samples of eye focusing states and their associated weights comprise the eye focusing density map at time $t$. This step corresponds to the update step of a particle filter.
The first term of Equation (3) reflects signal detection theory: the position at which the stochastic saliency takes its maximum is selected as the eye focusing position. In other words, this term computes, for each position, the probability that the stochastic saliency there takes the maximum, which can be calculated as

$p(x_t = x \mid p(\bar{S}_t)) = \int p_{\bar{s}_t(x)}(s) \prod_{y \neq x} F_{\bar{s}_t(y)}(s) \, ds \qquad (9)$

where $F_{\bar{s}_t(y)}(\cdot)$ is the cumulative distribution function (CDF) corresponding to the PDF of the stochastic saliency $\bar{s}_t(y)$. The first part of Equation (9) stands for the probability that the stochastic saliency value at position $x$ equals $s$, and the second part represents the probability that the stochastic saliency values at all other positions are smaller than $s$.
The latter part of Eq. (10) does not depend on the position $x$, which implies that it can be calculated in advance for every $s$. This calculation can be executed in $O(\log |V|)$ time through tree-based multiplication with parallelization at each level (cf. Fig. 8). Also, the former part of Eq. (10) can be calculated independently for each position $x$. Therefore, once the calculation of the latter part has finished, Eq. (10) can be calculated in $O(\log N_s)$ time with a combination of tree-based addition and pixel-wise parallelization, where $N_s$ stands for the resolution of the integral in Eq. (10).
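This probability-of-maximum computation can be sketched directly on a discrete grid. The example below uses a plain product and a Riemann sum instead of the tree-based parallel scheme described above, on a hypothetical three-pixel map; the shared "latter part" is still computed only once, by forming the full CDF product and dividing out each pixel's own factor.

```python
import numpy as np
from math import erf, sqrt, pi

def prob_of_max(means, stds, n_grid=400):
    """P(pixel i attains the maximum stochastic saliency): integrate
    pixel i's Gaussian PDF times the product of all other pixels' CDFs."""
    means = np.asarray(means, float)
    stds = np.asarray(stds, float)
    lo = float((means - 5 * stds).min())
    hi = float((means + 5 * stds).max())
    s = np.linspace(lo, hi, n_grid)                       # integration grid
    z = (s[None, :] - means[:, None]) / stds[:, None]
    pdf = np.exp(-0.5 * z ** 2) / (stds[:, None] * sqrt(2 * pi))
    cdf = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2)))
    # The product of CDFs over "all other pixels" is shared across pixels:
    # compute the full product once and divide out each pixel's own CDF.
    full = cdf.prod(axis=0)
    others = full / np.clip(cdf, 1e-300, None)
    p = (pdf * others).sum(axis=1) * (s[1] - s[0])        # Riemann sum
    return p / p.sum()                                    # remove residual error

# Three hypothetical pixels; the first is clearly the most salient.
p = prob_of_max(means=[0.8, 0.3, 0.3], stds=[0.1, 0.1, 0.1])
```

With well-separated means the most salient pixel takes almost all of the probability mass, and pixels with identical densities share the remainder equally, matching the signal-detection picture of easy versus hard search.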
Finally, re-sampling is performed to eliminate samples with low importance weights and to multiply samples with high importance weights. This step enables us to avoid the "degeneracy" problem, namely the situation where all but one of the importance weights are close to zero. Although the effective number of samples  is frequently used as a criterion for re-sampling, we execute re-sampling at regular time intervals.
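A common concrete choice is systematic re-sampling, sketched below on a hypothetical four-sample set. The trigger criterion (effective sample size, or regular intervals as in our case) is independent of the re-sampling scheme itself.

```python
import random

def resample(samples, weights, seed=None):
    """Systematic re-sampling: draw N new samples so that high-weight
    samples are duplicated and low-weight samples tend to be dropped;
    the returned weights are uniform."""
    rng = random.Random(seed)
    n = len(samples)
    total = sum(weights)
    u = rng.random() * total / n              # single random offset
    positions = [u + i * total / n for i in range(n)]
    out, cum, j = [], weights[0], 0
    for pos in positions:
        while cum < pos:                      # advance to the owning interval
            j += 1
            cum += weights[j]
        out.append(samples[j])
    return out, [1.0 / n] * n

# A dominant-weight sample ('b') survives in several copies.
new_samples, new_weights = resample(['a', 'b', 'c', 'd'],
                                    [0.01, 0.94, 0.04, 0.01], seed=0)
```

Systematic re-sampling uses a single random offset, which gives it lower variance than independent multinomial draws while keeping the sample count fixed.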
Note that the whole procedure, comprising the propagation, updating and re-sampling steps for estimating eye focusing density maps, is equivalent to a particle filter with MCMC sampling, since the PDFs used in the propagation and update steps are mutually independent.
4 Model parameter estimation
This section focuses on the problem of estimating maximum likelihood (ML) model parameters. Fig. 9 shows the block diagram of our model parameter estimation. We can automatically estimate almost all the model parameters in advance by using as observations the saliency maps calculated from the input video and the eye focusing positions obtained with an eye tracking device. Simultaneous estimation of all ML parameters would be optimal but is impractical due to the substantial computational cost. Therefore, we separate the parameter estimation into two independent stages: the first stage derives the parameters for computing stochastic saliency maps, and the second stage those for estimating eye focusing positions.
4.1 Parameters for stochastic saliency maps
The first stage derives the parameters $\theta_s = (\sigma_{s1}, \sigma_{s2})$ for computing stochastic saliency maps. Here, we introduce the EM algorithm, where the observations are the saliency maps $S$ and the hidden variables are the stochastic saliency maps $\bar{S}$. Remember that $T$ is the duration of the video. The EM algorithm for estimating $\theta_s$ is as follows:
The $k$-th E step
The E step updates the PDF of the stochastic saliency maps given the saliency maps with the previously estimated parameter $\theta_s^{(k-1)}$ by using a Kalman smoother. In detail, the objective is to recursively compute the mean and standard deviation of the stochastic saliency at each time $t$, where all the saliency maps are used as observations. Note that the position $x$ is again omitted for simplicity.
Suppose that the PDF of the stochastic saliency at time $t+1$ given all the saliency maps is the following Gaussian PDF:

$p(\bar{s}_{t+1} \mid s_{1:T}) = N(\bar{s}_{t+1}; \mu_{t+1|T}, \sigma_{t+1|T}^2)$

Then, the PDF of the stochastic saliency at time $t$ is obtained by the following backward recurrence relations:

$c_t = \frac{\sigma_{t(+)}^2}{\sigma_{t+1(-)}^2}, \quad \mu_{t|T} = \mu_{t(+)} + c_t \,(\mu_{t+1|T} - \mu_{t+1(-)}), \quad \sigma_{t|T}^2 = \sigma_{t(+)}^2 + c_t^2 \,(\sigma_{t+1|T}^2 - \sigma_{t+1(-)}^2)$

where $\mu_{t(+)}$, $\sigma_{t(+)}^2$, $\mu_{t+1(-)}$ and $\sigma_{t+1(-)}^2$ are the filtered and predicted statistics obtained as in Section 3.3.
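The forward-backward computation is a standard Rauch-Tung-Striebel smoother. A minimal scalar sketch follows, with hypothetical noise variances and a hypothetical observation trace (not actual learned parameters):

```python
def rts_smoother(obs, var_state, var_obs, mean0=0.0, var0=1.0):
    """Forward Kalman filter followed by a backward Rauch-Tung-Striebel
    pass: returns smoothed means/variances of the stochastic saliency
    given *all* observations (the E-step quantity)."""
    # Forward pass: filtered estimates, plus predicted variances.
    means_f, vars_f, vars_p = [], [], []
    mean, var = mean0, var0
    for s in obs:
        var_pred = var + var_state
        vars_p.append(var_pred)
        gain = var_pred / (var_pred + var_obs)
        mean = mean + gain * (s - mean)
        var = (1.0 - gain) * var_pred
        means_f.append(mean)
        vars_f.append(var)
    # Backward pass: fold in future observations.
    means_s, vars_s = means_f[:], vars_f[:]
    for t in range(len(obs) - 2, -1, -1):
        c = vars_f[t] / vars_p[t + 1]          # smoother gain
        means_s[t] = means_f[t] + c * (means_s[t + 1] - means_f[t])
        vars_s[t] = vars_f[t] + c ** 2 * (vars_s[t + 1] - vars_p[t + 1])
    return means_s, vars_s

# Hypothetical noisy saliency trace fluctuating around 0.5.
obs = [0.4, 0.6, 0.5, 0.45, 0.55]
m, v = rts_smoother(obs, var_state=0.01, var_obs=0.05)
```

Because future observations are folded in, the smoothed trajectory is less jittery than the filtered one, which is exactly what the M step needs for stable parameter estimates.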
The $k$-th M step
The M step updates the parameter $\theta_s$ to maximize the expected log likelihood. We can derive the new parameter from the result of the E step by taking the derivatives of the expected log likelihood with respect to $\theta_s$ and setting them to 0.
4.2 Parameters for eye focusing positions
The second stage derives the parameters for computing eye focusing positions, namely the transition probabilities of eye movement patterns and the eye movement parameters $(d_u, \sigma_u)$ for each pattern $u$. The observations are the sequence of eye focusing positions obtained with an eye tracking device, and the hidden states are the eye movement patterns $U$. In this section, we introduce an alternative notation for eye movement patterns: $u_t$ is a 2-dimensional binary vector such that $u_t = (1, 0)^\top$ denotes the passive state and $u_t = (0, 1)^\top$ denotes the active state.
We take a Viterbi learning approach for its quick convergence. It recursively updates the eye movement patterns $U$ and the ML parameter set to maximize the posterior probability.
Initializing eye movement patterns
We start by determining an initial sequence of eye movement patterns with the following decision rule based on the frame-to-frame eye displacement:

$u_t = \begin{cases} \text{active} & \text{if } \| x_t - x_{t-1} \| > \rho \\ \text{passive} & \text{otherwise} \end{cases}$

where $\rho$ is a given threshold.
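The decision rule amounts to thresholding the frame-to-frame eye displacement. A small sketch with a hypothetical gaze trace and threshold:

```python
import math

def init_patterns(positions, threshold):
    """Initial eye-movement-pattern labels: a frame is 'active' when the
    eye displacement from the previous frame exceeds the threshold,
    'passive' otherwise (the first frame defaults to passive)."""
    labels = ['passive']
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        d = math.hypot(x1 - x0, y1 - y0)
        labels.append('active' if d > threshold else 'passive')
    return labels

# Hypothetical gaze trace: a fixation followed by one large saccade.
trace = [(100, 100), (101, 100), (102, 101), (240, 60), (241, 61)]
labels = init_patterns(trace, threshold=20.0)
```

These labels only seed the Viterbi learning; subsequent iterations re-assign the patterns under the current parameter estimates.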
The $k$-th step for updating the hidden variables
This step updates the sequence of eye movement patterns $U$ to maximize the posterior density given the parameter set obtained in the previous step.
The $k$-th step for updating the parameter set
This step updates the parameter set to maximize the posterior density given the current sequence of eye movement patterns.
Taking the derivatives of the log likelihood with respect to the parameters and setting them to 0, we obtain closed-form updates: the new $d_u$ and $\sigma_u$ are the sample mean and standard deviation of the eye movement distances over the frames assigned to pattern $u$, and the new transition probabilities are the normalized counts of pattern transitions.
5.1 Evaluation conditions
For the accuracy evaluation, we used the CRCNS eye-1 database created by the University of Southern California. This database includes 100 video clips (MPEG-1, 30 fps) and the eye traces recorded while showing these video clips to 8 human subjects (4-6 available eye traces for each video clip, 240 fps). Further details of the database can be found at https://crcns.org/files/data/eye-1/crcns-eye1-summary.pdf. In this evaluation, we used the 50 video clips (about 25 minutes in total) called the "original experiment" and their associated eye traces.
Model parameters were derived in advance with the learning algorithm presented in Section 4. Here, we used 5-fold cross validation, so that 40 video clips and their associated eye traces were used as training data for evaluating the remaining data (10 video clips and their associated eye traces).
All the algorithms were implemented on a standard C++ platform with NVIDIA CUDA, and the evaluation was carried out on a standard PC with a graphics processing unit (GPU). Detailed information on the platform used in this evaluation is listed in Table I.
OS: Windows Vista Ultimate
Development platform: Microsoft Visual C++ 2008, OpenCV 1.1pre & NVIDIA CUDA 2.2
CPU: Intel Core2 Quad Q6600 (2.40 GHz)
GPU: NVIDIA GeForce GTX275 SLI
5.2 Evaluation metric
As a metric to quantify how well a model predicts actual human eye focusing positions, we used the normalized scan-path saliency (NSS) used in previous work . Let $R_t^{(k)}$ be the set of all pixels in a circular region with a radius of 30 pixels centered on the eye focusing position of test subject $k$ at time $t$. Then, the NSS value at time $t$ is defined as

$\mathrm{NSS}_t = \frac{1}{K} \sum_{k=1}^{K} \frac{1}{|R_t^{(k)}|} \sum_{x \in R_t^{(k)}} \frac{G_t(x) - \mu_{G_t}}{\sigma_{G_t}}$

where $K$ is the total number of subjects, $G_t(x)$ is the model's output at position $x$, and $\mu_{G_t}$ and $\sigma_{G_t}$ are the mean and standard deviation of the pixel values of the model's output, respectively. $\mathrm{NSS} = 1$ indicates that the subjects' eye positions fall in a region whose predicted density is one standard deviation above average, while $\mathrm{NSS} = 0$ indicates that the model performs no better than picking a random position on the map.
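The NSS computation can be sketched as follows. The map, eye positions, blob location and window radius below are hypothetical, and a pure-Python nested-list map is used for clarity: a position inside the predicted salient region yields a positive score, while a position on the flat background yields a slightly negative one.

```python
import math

def nss(model_map, eye_positions, radius=30):
    """Normalized scan-path saliency: z-score the model's output map,
    average it inside a circular window around each subject's eye
    position, then average over subjects."""
    h, w = len(model_map), len(model_map[0])
    vals = [v for row in model_map for v in row]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    std = math.sqrt(var) or 1.0  # guard against a constant map
    scores = []
    for (cy, cx) in eye_positions:
        region = [(model_map[y][x] - mean) / std
                  for y in range(h) for x in range(w)
                  if (y - cy) ** 2 + (x - cx) ** 2 <= radius ** 2]
        scores.append(sum(region) / len(region))
    return sum(scores) / len(scores)

# Hypothetical 64x64 output map with one bright blob; one subject looks
# at the blob, a second (separate call) looks at the background.
m = [[0.0] * 64 for _ in range(64)]
for y in range(28, 36):
    for x in range(28, 36):
        m[y][x] = 1.0
on_target = nss(m, [(32, 32)], radius=8)
off_target = nss(m, [(8, 8)], radius=8)
```

Averaging the per-frame values over a clip gives the per-clip scores reported below.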
We compared our proposed method with 3 existing computational models: 1) a simple control measuring local pixel variance (denoted "variance") , 2) a saliency map (denoted "CIOFM") , and 3) Bayesian surprise (denoted "surprise") . All the outputs of these existing models are included in the CRCNS eye-1 database, so we used them directly for the evaluation.
Fig. 10 shows the model accuracy measured by the average NSS score with standard errors over all the video clips, and Fig. 11 details the average NSS score for each video clip, with the clips sorted for visibility. The result shown in Fig. 10 indicates that our new method achieved significantly better scores than all 3 existing methods, which implies that our proposed method can estimate human visual attention with high accuracy. The result shown in Fig. 11 indicates that our proposed method performed comparably to or much better than all the existing methods for most of the video clips.
Fig. 12 shows snapshots of the outputs of the Itti model (the second and fifth rows) and our proposed method (the third and sixth rows). The outputs of the Itti model include several large salient regions, whereas the outputs of our proposed method include only a few small eye focusing areas. This implies that our new method picks out probable eye focusing areas accurately.
Fig. 13 shows the total execution time of 1) calculating Itti's saliency map without CUDA, 2) the proposed method without CUDA, and 3) the proposed method with CUDA. The result indicates that the proposed method achieves near real-time estimation (40-50 msec/frame), with almost the same processing time as Itti's model.
We have presented the first stochastic model of human visual attention based on a dynamic Bayesian framework. Unlike many existing methods, we predict the likelihood of human-attended regions in a video based on two criteria: 1) the probability of having the maximum saliency response at a given region, evaluated based on signal detection theory, and 2) the probability of matching the projected eye movement based on the predicted state. Experiments have revealed that our model offers better eye-gaze prediction than previous deterministic models. To enhance the current model, future work may include the determination of initial parameters close to the global optima when estimating model parameters, a unified approach to estimating all the model parameters, a better density model of eye movements, a better integration of bottom-up and top-down information, a better saliency model for extracting (deterministic) saliency maps, and the integration of the proposed method into applications such as driving assistance, active vision and video retrieval.
The authors thank Prof. Laurent Itti of the University of Southern California, Prof. Minho Lee of Kyungpook National University, and Dr. Hirokazu Kameoka and Dr. Eisaku Maeda of NTT Communication Science Laboratories for their valuable discussions and helpful comments, which led to improvements in this work. The second and fourth authors contributed to this work during their internships at NTT Communication Science Laboratories. The authors also thank Dr. Yoshinobu Tonomura (currently Ryukoku University), Dr. Hiromi Nakaiwa, Dr. Naonori Ueda, Dr. Hiroshi Sawada, Dr. Shoji Makino (currently Tsukuba University) and Dr. Kenji Nakazawa (currently NTT Advance Technologies Inc.) of NTT Communication Science Laboratories for their support of the internships.
-  M. Hikita, S. Fuke, M. Ogino, T. Minato, and M. Asada, "Visual attention by saliency leads cross-modal body representation," in Proc. 7th IEEE International Conference on Development and Learning (ICDL), Aug. 2008, pp. 157–162.
-  X.-P. Hu, L. Dempere-Marco, and G.-Z. Yang, "Hot spot detection based on feature space representation of visual search," IEEE Transactions on Medical Imaging, vol. 22, no. 9, pp. 1152–1162, Sept. 2003.
-  P. Santana, M. Guedes, L. Correia, and J. Barata, “Saliency-based obstacle detection and ground-plane estimation for off-road vehicles,” in ICVS, 2009, pp. 275–284.
-  G. Boccignone, "Nonparametric bayesian attentive video analysis," in Proc. International Conference on Pattern Recognition (ICPR), Dec. 2008, pp. 1–4.
-  G. Boccignone, A. Chianese, V. Moscato, and A. Picariello, "Foveated shot detection for video segmentation," IEEE Transactions on Circuits and Systems for Video Technology, vol. 15, no. 3, pp. 365–377, March 2005.
-  K. Fukuchi, K. Miyazato, A. Kimura, S. Takagi, and J. Yamato, “Saliency-based video segmentation with graph cuts and sequentially updated priors,” in Proc. International Conference on Multimedia and Expo (ICME), June 2009.
-  O. Chum, J. Philbin, M. Isard, and A. Zisserman, “Scalable near identical image and shot detection,” in CIVR ’07: Proceedings of the 6th ACM international conference on Image and video retrieval. New York, NY, USA: ACM, 2007, pp. 549–556.
-  C. Koch and S. Ullman, “Shifts in selective visual attention: Towards the underlying neural circuitry,” Human Neurobiology, vol. 4, pp. 219–227, 1985.
-  A. Treisman and G. Gelade, “A feature-integration theory of attention,” Cognitive Psychology, vol. 12, pp. 97–136, 1980.
-  L. Itti, C. Koch, and E. Niebur, “A model of saliency-based visual attention for rapid scene analysis,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 11, pp. 1254–1259, November 1998.
-  C. M. Privitera and L. W. Stark, “Algorithms for defining visual regions-of-interest: Comparison with eye fixations,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 9, pp. 970–982, 2000.
-  E. Gu, J. Wang, and N. Badler, “Generating sequence of eye fixations using decision-theoretic attention model,” in Proc. Conference on Computer Vision and Pattern Recognition (CVPR), June 2005, pp. 92–99.
-  S. Frintrop, VOCUS: A Visual Attention System for Object Detection and Goal-Directed Search (Lecture Notes in Computer Science). Springer-Verlag, 2006.
-  S. Jeong, S. Ban, and M. Lee, “Stereo saliency map considering affective factors and selective motion analysis in a dynamic environment,” Neural Networks, vol. 21, pp. 1420–1430, October 2008.
-  D. Gao and N. Vasconcelos, “Decision-theoretic saliency: Computational principles, biological plausibility, and implications for neurophysiology and psychophysics,” Neural Computation, vol. 21, no. 1, pp. 239–271, January 2009.
-  L. Itti and P. Baldi, “A principled approach to detecting surprising events in video,” in Proc. Conference on Computer Vision and Pattern Recognition (CVPR), June 2005, pp. 631–637.
-  C. Leung, A. Kimura, T. Takeuchi, and K. Kashino, “A computational model of saliency depletion/recovery phenomena for the salient region extraction of videos,” in Proc. International Conference on Multimedia and Expo (ICME), July 2007, pp. 300–303.
-  S. Ban, I. Lee, and M. Lee, “Dynamic visual selective attention model,” Neurocomputing, vol. 71, pp. 853–856, March 2007.
-  M. P. Eckstein, J. P. Thomas, J. Palmer, and S. S. Shimozaki, “A signal detection model predicts effects of set size on visual search accuracy for feature, conjunction, triple conjunction and disjunction displays,” Perception and Psychophysics, vol. 62, pp. 425–451, 2000.
-  P. Verghese, “Visual search and attention: A signal detection theory approach,” Neuron, vol. 31, pp. 525–535, August 2001.
-  B. J. Scholl, Ed., Objects and Attention (Cognition Special Issue). The MIT Press, 2002.
-  T. Avraham and M. Lindenbaum, “Esaliency (extended saliency): Meaningful attention using stochastic image modeling,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, pp. 693–708, 2009.
-  V. Mahadevan and N. Vasconcelos, “Spatiotemporal saliency in dynamic scenes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, pp. 171–177, 2009.
-  R. J. Peters and L. Itti, “Beyond bottom-up: Incorporating task-dependent influences into a computational model of spatial attention,” in Proc. Conference on Computer Vision and Pattern Recognition (CVPR), June 2007, pp. 1–8.
-  K. Myers and B. Taplay, “Adaptive sequential estimation with unknown noise statistics,” IEEE Trans. Autom. Control, vol. 21, no. 4, pp. 520–523, August 1976.
-  J. Leathrum, “On sequential estimation of state noise variances,” IEEE Trans. Autom. Control, vol. 26, no. 3, pp. 745–746, June 1981.
-  N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, and E. Teller, “Equation of state calculations by fast computing machines,” Journal of Chemical Physics, vol. 21, pp. 1087–1092, 1953.
-  B. Ristic, S. Arulampalam, and N. Gordon, Beyond the Kalman filter: Particle filters for tracking applications. Boston: Artech House Publishers, 2004.
-  A. Viterbi, "Error bounds for convolutional codes and an asymptotically optimum decoding algorithm," IEEE Trans. Inf. Theory, vol. 13, no. 2, pp. 260–269, April 1967.
-  L. Rabiner, “A tutorial on hidden Markov models and selected applications in speech recognition,” Proc. the IEEE, vol. 77, no. 2, pp. 257–286, February 1989.