Visual Grouping by Neural Oscillators

Guoshen Yu et al. (MIT), 07/18/2008

Distributed synchronization is known to occur at several scales in the brain, and has been suggested as playing a key functional role in perceptual grouping. State-of-the-art visual grouping algorithms, however, seem to give comparatively little attention to neural synchronization analogies. Based on the framework of concurrent synchronization of dynamic systems, simple networks of neural oscillators coupled with diffusive connections are proposed to solve visual grouping problems. Multi-layer algorithms and feedback mechanisms are also studied. The same algorithm is shown to achieve promising results on several classical visual grouping problems, including point clustering, contour integration and image segmentation.


1 Introduction

Consider Fig. 1. Why do we perceive in these visual stimuli a cluster of points, a straight contour and a river? How is the identification performed between a subgroup of stimuli and the perceived objects? These classical questions can be addressed from a variety of points of view, both biological and mathematical. This paper develops new grouping algorithms in the biologically-inspired framework of distributed oscillator synchronization.

Many physiological studies, e.g. [12, 17, 23], have shown evidence of grouping in visual cortex. Gestalt psychology [49, 31, 20, 9], an attempt to formalize the laws of visual perception, addresses some grouping principles such as proximity, good continuation and color constancy, in order to describe the construction of larger groups from atomic local information in the stimuli.

  
Figure 1: Left: a cloud of points in which a dense cluster is embedded. Middle: a random direction grid in which a vertical contour is embedded. Right: an image in which a river is embedded.

In computer vision, various mathematical frameworks have been suggested for grouping different visual qualities. Besides classical clustering algorithms [19] such as k-means [29], graph-based methods have been proposed for point clustering [36, 40]. Geometrical grouplets [30] address contour grouping in the framework of harmonic analysis. The a contrario school [10, 8, 7, 9] applies probabilistic approaches to perceptual grouping problems such as point clustering and contour detection. Variational formulations [34, 33, 1], Markov random fields [14] and graph cuts [40] have been applied to image segmentation. These computer vision approaches have achieved important success. However, as most of them have been proposed with a specific motivation in mind (and often without much interest in biological analogy), each algorithm is usually limited to grouping based on one specific quality.

In the brain, at a finer level of functional detail, the distributed synchronization known to occur at different scales has been proposed as a general functional mechanism for perceptual grouping [5, 41]. In computer vision, comparatively little attention has been devoted to exploiting neural-like oscillators in visual grouping. Wang and his colleagues have performed very innovative work using oscillators for image segmentation [45, 28, 6] and have extended the scheme to auditory segregation [2, 43, 44]. They constructed oscillator networks with local excitatory lateral connections and a global inhibitory connection. Due to the high computational complexity, for real images a segmentation algorithm essentially without oscillators was abstracted from the underlying oscillatory dynamics [6]. Li has proposed elaborate visual cortex models with oscillators [24, 25, 26, 27] and applied them to lattice drawings. Kuzmina and colleagues [21, 22] have constructed a simple self-organized oscillator coupling model, and applied it to synthetic lattice images as well. Faugeras et al. have started studying oscillatory neural mass models in the contexts of natural and machine vision [11].

In this paper we propose a simple and general neural oscillator algorithm for visual grouping, based on diffusive connections [37]. We use full-state neural oscillator models rather than phase-based approximations. The key to our approach is to embed the desired grouping properties in the couplings between oscillators. This allows one to exploit existing results on visual grouping and Gestalt theory while at the same time taking advantage of the flexibility and robustness afforded by synchronization mechanisms. Synchronization of oscillators induces perceptual grouping, while desynchronization leads to segregation. Multi-layer networks with feedback are introduced. Applications to point clustering, contour integration, and segmentation of synthetic and real images are demonstrated. A recent study of stable concurrent synchronization of neural oscillators [37] provides a general analysis tool to model the associated nonlinear dynamics and study their convergence properties.

Section 2 introduces a basic model of neural oscillators with diffusive coupling connections, studies its stability, and proposes a general visual grouping algorithm. Sections 3, 4 and 5 describe in detail the neural oscillator solutions for point clustering, contour integration and image segmentation and show a number of examples. Section 6 presents brief concluding remarks.

2 Model and Algorithm

The model is a network of neural oscillators coupled with diffusive connections. Each oscillator is associated to an atomic element in the stimuli, for example a point, an orientation or a pixel. Without coupling, the oscillators are desynchronized and oscillate with random phases. Under diffusive coupling with the coupling strength appropriately tuned, they may converge to multiple groups of synchronized elements. The synchronization of oscillators within each group indicates perceptual grouping of the underlying stimulus atoms, while the desynchronization between groups suggests group segregation.

The next section describes how to construct the neural oscillator networks and shows the stability and convergence properties of the system. A general visual grouping algorithm is proposed at the end.

2.1 Neural Oscillators

We use a modified form of FitzHugh-Nagumo neural oscillators [13, 35], similar to [28, 6],

  dv_i/dt = 3 v_i − v_i^3 + 2 − w_i + I_i                          (1)
  dw_i/dt = ε [ γ (1 + tanh(v_i / β)) − w_i ]                      (2)

where v_i is the membrane potential of the oscillator, w_i is an internal state variable representing gate voltage, I_i represents the external current input, and ε, γ and β are strictly positive constants. When the input exceeds a certain threshold value, the neural oscillator oscillates, the trace of the membrane potential being plotted in Fig. 2-a. Other spiking oscillator models can be used similarly. In the neural oscillator networks for visual grouping, each oscillator is associated to an atomic element in the stimuli.
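
As an illustration, the following is a minimal numerical sketch of the oscillator (1)-(2), integrated with a simple forward-Euler scheme in Python. The particular cubic/sigmoid form, the constants eps, gamma and beta, the input level and the step size are illustrative assumptions rather than the exact values used in the experiments.

    import numpy as np

    def oscillator_step(v, w, I, dt=0.01, eps=0.2, gamma=6.0, beta=0.1):
        """One forward-Euler step of the relaxation oscillator (1)-(2).
        The constants are illustrative, not the paper's values."""
        dv = 3.0 * v - v**3 + 2.0 - w + I
        dw = eps * (gamma * (1.0 + np.tanh(v / beta)) - w)
        return v + dt * dv, w + dt * dw

    # Drive a single oscillator with a supra-threshold constant input
    # and record the membrane-potential trace (cf. Fig. 2-a).
    v, w = -1.0, 0.0
    trace = []
    for _ in range(20000):
        v, w = oscillator_step(v, w, I=1.0)
        trace.append(v)
    trace = np.array(trace)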

2.2 Diffusive connections

Oscillators are coupled using diffusive connections with Gaussian-tuned gains to form networks.

Let us denote by x_i = (v_i, w_i), i = 1, …, n, the state vectors of the oscillators introduced in section 2.1, each with dynamics dx_i/dt = f(x_i, t). A neural oscillator network is composed of n oscillators, connected with diffusive coupling [46]

  dx_i/dt = f(x_i, t) + Σ_{j≠i} k_{ij} (x_j − x_i)                 (3)

where k_{ij} ≥ 0 is the coupling strength between oscillators i and j.

Oscillators i and j are said to be synchronized if x_i remains equal to x_j. Once the elements are synchronized, the coupling terms disappear, so that each individual element exhibits its natural, uncoupled behavior, as illustrated in Fig. 2. It is intuitive to see that a larger k_{ij} facilitates and reinforces the synchronization between oscillators i and j (refer to the Appendix for more details).

a b
Figure 2: a. A single oscillator. b. Synchronization of two oscillators coupled through diffusive connections. The two oscillators become fully synchronized after a short transient.

The key to applying neural oscillators with diffusive connections to visual grouping is to tune the coupling so that the oscillators synchronize if their underlying atoms belong to the same visual group and desynchronize otherwise. According to Gestalt psychology [49, 20, 31], visual stimulus atoms that are similar or proximate tend to be grouped perceptually. This suggests that the coupling between the neural oscillators should be reinforced if they have similar stimuli. Such coupling can be implemented by the Gaussian tuning

  k_{ij} = k exp(−‖u_i − u_j‖² / σ²)                               (4)

where u_i and u_j are the stimuli of the two oscillators, for example position for point clustering, orientation for contour integration and grey-level for image segmentation, k > 0 is the maximum coupling gain and σ is a tuning parameter. Due to its good properties such as smoothness, Gaussian tuning has been applied in various applications such as image denoising [3, 4], segmentation [40] and recognition [39].
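
A minimal sketch of how (3) and (4) can be combined numerically is given below: a Gaussian-tuned gain matrix is built from arbitrary stimulus vectors, and the diffusively coupled network is integrated by forward Euler. The gain k0, the width sigma, the oscillator constants and the assumption of a small network and small time step are all illustrative.

    import numpy as np

    def gaussian_gains(features, k0=5.0, sigma=1.0):
        """Coupling gains k_ij = k0 * exp(-||u_i - u_j||^2 / sigma^2), cf. (4).
        `features` is (n, d): one stimulus vector per oscillator."""
        d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
        K = k0 * np.exp(-d2 / sigma**2)
        np.fill_diagonal(K, 0.0)            # no self-coupling
        return K

    def simulate_network(K, I, steps=20000, dt=0.01, eps=0.2, gamma=6.0, beta=0.1):
        """Integrate n oscillators (1)-(2) with diffusive coupling (3) on both states."""
        n = len(I)
        v = np.random.uniform(-2.0, 2.0, n)           # random initial conditions
        w = np.zeros(n)
        traces = np.empty((steps, n))
        row = K.sum(axis=1)
        for t in range(steps):
            cv = K @ v - row * v                      # sum_j k_ij (v_j - v_i)
            cw = K @ w - row * w                      # sum_j k_ij (w_j - w_i)
            dv = 3*v - v**3 + 2 - w + I + cv
            dw = eps * (gamma * (1 + np.tanh(v / beta)) - w) + cw
            v, w = v + dt * dv, w + dt * dw
            traces[t] = v
        return traces                                 # membrane-potential traces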

2.3 Generalized diffusive connections

The visual cortex is hierarchical. Higher-level layers have smaller dimension than lower-level layers, as information going from bottom to top turns from redundant to sparse and from concrete to abstract [38].

In a feedback hierarchy, generalized diffusive connections correspond to achieving consensus between multiple processes of different dimensions. Implementation of the hierarchy involves connecting two or more oscillator subnetworks of different sizes. Oscillator networks of sizes n and m can be connected with generalized diffusive connections [47, 37]:

  dx/dt = f(x, t) + k_1 Aᵀ (B y − A x)                             (5)
  dy/dt = g(y, t) + k_2 Bᵀ (A x − B y)                             (6)

where f and g are the dynamics of the two networks of sizes n and m, whose state vectors are respectively x and y, A and B are coupling matrices of appropriate sizes, and k_1 and k_2 are the coupling strengths.

For appropriate choices of the dynamics, once the two layers are synchronized, i.e. A x = B y, the coupling terms disappear, so that each layer exhibits its natural behavior as an independent network, as in the case of diffusive connections.

Figure 3: Two networks (top and bottom) of different dimensions are connected with generalized diffusive connections.
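
A toy numerical sketch of the generalized diffusive connections (5)-(6) is given below, with placeholder layer dynamics f and g and arbitrary coupling matrices A and B; all sizes, gains and dynamics are illustrative. It only demonstrates that the inter-layer coupling drives the two layers toward the consensus A x = B y.

    import numpy as np

    # Two layers of sizes n and m coupled through matrices A (p x n) and B (p x m).
    rng = np.random.default_rng(0)
    n, m, p = 6, 2, 2
    A = rng.standard_normal((p, n))
    B = rng.standard_normal((p, m))
    k1, k2 = 2.0, 2.0

    def f(x):                    # placeholder first-layer dynamics
        return -0.1 * x

    def g(y):                    # placeholder second-layer dynamics
        return -0.1 * y

    x = rng.standard_normal(n)
    y = rng.standard_normal(m)
    dt = 0.01
    for _ in range(20000):
        e = B @ y - A @ x                         # inter-layer disagreement
        x = x + dt * (f(x) + k1 * A.T @ e)        # eq. (5)
        y = y + dt * (g(y) - k2 * B.T @ e)        # eq. (6)
    print(np.allclose(A @ x, B @ y, atol=1e-3))   # layers reach consensus A x = B y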

2.4 Concurrent Synchronization and Stability

In perception, fully synchronized elements in each group are bound, while different groups are segregated. Concurrent synchronization analysis provides a mathematical tool to study stability and exponential convergence properties in this context.

In an ensemble of dynamical elements, concurrent synchronization is defined as a regime where the whole system is divided into multiple groups of fully synchronized elements, but elements from different groups are not necessarily synchronized [37]. Networks of oscillators coupled by diffusive connections (section 2.2) or generalized diffusive connections (section 2.3) are specific cases of this general framework.

Recall that a subset of the global state space is called invariant if trajectories that start in that subset remain in it. In our synchronization context, the invariant subsets of interest are linear subspaces, corresponding to some components of the overall state being equal or verifying some linear relation. Concurrent synchronization analysis quantifies stability and convergence to invariant linear subspaces. Furthermore, a property of concurrent synchronization analysis, which turns out to be particularly convenient in the context of grouping, is that the actual invariant subset itself need not be known a priori to guarantee stable convergence to it.

Finally, concurrent synchronization may first be studied in an idealized setting, e.g., with exactly equal inputs to groups of oscillators and noise-free conditions. This allows one to compute minimum coupling gains that guarantee global exponential convergence to the invariant synchronization subspace. Robustness of concurrent synchronization, a consequence of its exponential convergence properties, allows the qualitative behavior of the nominal model to be preserved even in non-ideal conditions. In particular, it can be shown and quantified that for high convergence rates, actual trajectories differ little from trajectories based on an idealized model.

A more specific discussion of stability and convergence is given in the appendix. The reader is referred to [37] for more details on the analysis tools.

2.5 Visual Grouping Algorithm

The basic visual grouping algorithm proceeds in the following steps.

  1. Construct a neural oscillator network. Each oscillator is associated to one atom in the stimuli. Oscillators are connected with diffusive connections (3) or generalized diffusive connections (5, 6) using the Gaussian-tuned gains (4).

  2. The oscillators converge to concurrently synchronized groups in the so-constructed network.

  3. Identify the synchronized oscillators and, equivalently, the visual groups. A group of synchronized oscillators indicates that the underlying visual stimulus atoms are perceptually grouped. Desynchronization between two groups suggests that the underlying stimulus atoms in the two groups are segregated.

Traces of synchronized oscillators coincide in time, while those of desynchronized groups are separated [42]. The identification of synchronization in the oscillation traces (as illustrated in the example of Fig. 4-b) can be realized by thresholding the correlation of the traces or by simply applying a clustering algorithm such as k-means.
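
One possible implementation of this identification step is sketched below: threshold the pairwise correlation of the traces and extract connected components of the resulting graph. The correlation threshold is an illustrative choice.

    import numpy as np

    def group_by_correlation(traces, threshold=0.95):
        """Step 3 of the algorithm: group oscillators whose membrane-potential
        traces are highly correlated.  `traces` is (steps, n); returns one group
        label per oscillator.  The threshold is an illustrative choice."""
        n = traces.shape[1]
        adj = np.corrcoef(traces.T) > threshold   # pairwise "synchronized" relation
        labels = -np.ones(n, dtype=int)
        current = 0
        for i in range(n):                        # connected components
            if labels[i] >= 0:
                continue
            stack, labels[i] = [i], current
            while stack:
                j = stack.pop()
                for k in np.nonzero(adj[j])[0]:
                    if labels[k] < 0:
                        labels[k] = current
                        stack.append(k)
            current += 1
        return labels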

The following sections detail neural oscillator solutions for three visual grouping problems, namely point clustering, contour integration and image segmentation.

3 Point Clustering

The neural oscillator point clustering is based on diffusive connections (3) and follows directly the general algorithm in section 2.5. Let us denote by p_i the coordinates of a point. Each point p_i is associated to an oscillator i. The proximity Gestalt principle [49, 20, 31] suggests strong coupling between oscillators corresponding to proximate points. More precisely, the coupling strength between oscillators i and j is

  k_{ij} = k exp(−‖p_i − p_j‖² / σ²)  if p_j ∈ N_i,  and k_{ij} = 0 otherwise,        (7)

where N_i is a neighborhood of p_i. For example, N_i can be defined as the set of the N points closest to p_i; (7) then couples each oscillator with its N nearest neighbors. The local coupling can propagate to create coupling on a larger scale. A higher value of k reinforces the coupling, and the parameter σ tunes the size of the clusters one expects to detect. The external inputs I_i of the oscillators in (1) are set as uniformly distributed random variables in the appropriate range.
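
A minimal sketch of the nearest-neighbor coupling (7) is given below; the number of neighbors, the gain k0 and the width sigma are illustrative values.

    import numpy as np

    def clustering_gains(points, n_neighbors=5, k0=5.0, sigma=1.0):
        """Coupling matrix for point clustering, cf. (7): each oscillator is
        Gaussian-coupled to its nearest neighbors only.  n_neighbors, k0 and
        sigma are illustrative; distinct point coordinates are assumed."""
        n = len(points)
        d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        K = np.zeros((n, n))
        for i in range(n):
            nearest = np.argsort(d2[i])[1:n_neighbors + 1]   # skip the point itself
            K[i, nearest] = k0 * np.exp(-d2[i, nearest] / sigma**2)
        return np.maximum(K, K.T)   # symmetrize so that coupling is mutual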

Fig. 4 illustrates an example in which the points clearly form two clusters. As shown in Fig. 4-b, the oscillator system converges to two concurrently synchronized groups, each corresponding to one cluster, and separated in the time dimension. The identification of the two groups induces the clustering of the underlying points, as shown in Fig. 4-c.

a b c
Figure 4: a. Points to cluster. b. The oscillators converge to two concurrently synchronized groups. c. Clustering results. The blue circles and the red crosses represent the two clusters.

Fig. 5 presents a more challenging setting, where one seeks to identify a cluster in a cloud of points. The cloud is made of 300 points uniformly randomly distributed over a square region, in addition to a dense cluster of 100 Gaussian distributed points. Thanks to the coupling (7), the neural oscillator system converges to one synchronized group that corresponds to the cluster, with all the “outliers” totally desynchronized in the background, as shown in Fig. 5-c. The synchronized traces are segregated from the background (for example by thresholding the correlation among the traces), which results in the identification of the underlying cluster, as shown in Fig. 5-b. Fig. 5-d plots along time the number of oscillators spiking simultaneously. The peaks in the trace reveal the existence of a cluster (similarly to e.g. [18, 48]), and their amplitude (about 115) indicates the number of points in the cluster: the oscillators which belong to the cluster are synchronized and thus spike together to produce a high peak.

a b
c d
Figure 5: a. A cloud of 300 points uniformly randomly distributed over a square region, in addition to a dense cluster of 100 Gaussian distributed points. b. Blue dots represent the cluster detected by the algorithm and red crosses are the “outliers”. c. The neural oscillator system converges to one synchronized group that corresponds to the cluster, with all the “outliers” totally desynchronized in the background. d. The number of oscillators spiking simultaneously. The peaks in the trace reveal the existence of a cluster and their amplitude (about 115) indicates the number of points in the cluster.

4 Contour Integration

Field and his colleagues [12] have reported some interesting experiments, an example being illustrated in Fig. 6, to test the human capacity for contour integration, i.e., for identifying a path within a field of randomly-oriented elements, and made some quantitative observations in accordance with the “good continuation” law [49, 20, 31]:

  • Contour integration can be achieved when the orientations of successive elements in the path, i.e., the element-to-element angle (see Fig. 7), differ by no more than a limited amount.

  • There is a constraint between the element-to-element angle and the element-to-path angle (see Fig. 7). The visual system can integrate large differences in element-to-element orientation only when those differences lie along a smooth path, i.e., only when the element-to-path angle is small enough. For example, observers can easily track a 15 degree element-to-element orientation difference when the elements are aligned with the path, but contour integration becomes difficult when the same difference is combined with a large element-to-path angle.

Figure 6: The left-hand panel shows the path of elements (the stimulus) that the subjects must detect when embedded in an array of randomly oriented elements (the stimulus plus background shown on the right). The stimulus consisted of 12 elements aligned along a path. In this example each successive element differs in orientation by a fixed amount, and for this difference in orientation the string of aligned elements is easily detected. This figure is reproduced from [12].
Figure 7: The element-to-element angle is the difference in angle of orientation of each successive path segment. The element-to-path angle is the angle of orientation of the element with respect to the path.

Fig. 8 shows the setting of the contour integration experiments. An orientation value is defined for each point in a grid, as illustrated by the dashes. The proposed algorithm detects the smooth contours potentially embedded in the grid.

Following the general visual grouping algorithm described in section 2.5, neural oscillators with diffusive connections (3) are used to perform contour integration. Each orientation in the grid is associated to one oscillator. The coupling of oscillators i and j follows the Gestalt law of “good continuation” and, in particular, the results of the psychovisual experiments of Field et al. [12]:

  k_{ij} = k exp(−(θ_i − θ_j)² / σ_1²) · exp(−[(θ_i − θ_{ij})² + (θ_j − θ_{ij})²] / σ_2²)        (8)

where θ_i and θ_j are the orientations of the two elements and θ_{ij} is the undirected orientation (modulo π) of the path joining them. By constraining the element-to-element angle (the first term in (8)) and the element-to-path angle (the second term in (8)), the neural oscillator system performs smooth contour integration. σ_1 and σ_2 tune the smoothness of the detected contour. As contour integration is known to be rather local [12], the coupling (8) is effective only within a local neighborhood of each element.
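
The following sketch illustrates one way to evaluate a coupling gain in the spirit of (8), as a product of a Gaussian factor on the element-to-element angle and a Gaussian factor on the element-to-path angles. The exact functional form and the parameter values used in the experiments are not specified here, so both should be read as assumptions.

    import numpy as np

    def contour_gain(theta_i, theta_j, pos_i, pos_j, k0=5.0, sigma1=0.3, sigma2=0.3):
        """Coupling between two oriented elements in the spirit of (8): penalize
        the element-to-element angle (first factor) and the element-to-path
        angles (second factor).  Angles in radians; k0, sigma1, sigma2 and the
        exact functional form are illustrative assumptions."""
        def ang_diff(a, b):
            d = abs(a - b) % np.pi            # undirected orientation difference
            return min(d, np.pi - d)
        path = np.arctan2(pos_j[1] - pos_i[1], pos_j[0] - pos_i[0])   # path orientation
        ee = ang_diff(theta_i, theta_j)                         # element-to-element
        ep = ang_diff(theta_i, path) + ang_diff(theta_j, path)  # element-to-path
        return k0 * np.exp(-ee**2 / sigma1**2) * np.exp(-ep**2 / sigma2**2)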

The example illustrated in Fig. 8-a presents a grid in which orientations are uniformly distributed, except for one vertical contour. The orientations of the elements on the vertical contour furthermore undergo a small Gaussian perturbation. The neural oscillator system converges to one synchronized group that corresponds to the contour, with all the other oscillators desynchronized, as illustrated in Fig. 8-c. The synchronized group is segregated by thresholding the correlation among the traces. This results in the “pop-out” of the contour, as shown in Fig. 8-b. The parameters σ_1 and σ_2 are configured in line with the results of the psychovisual experiments of Field et al. [12]. Fig. 9 illustrates a similar example with two intersecting straight contours.

a b c
Figure 8: Left: A vertical contour is embedded in a uniformly distributed orientation grid. Middle: The detected contour. Right: the traces of the neural oscillation.
a b c
Figure 9: Left: Two contours are embedded in a uniformly distributed orientation grid. Middle and right: the two identified contours.

Fig. 10-a illustrates a smooth curve embedded in the uniformly randomly distributed orientation background. With some minor effort, subjects are able to identify the curve due to its “good continuation”. Similarly the neural system segregates the curve from the background with the oscillators lying on the curve fully synchronized, as illustrated in Fig. 10-b.

a b
Figure 10: a. A smooth curve is embedded in a uniformly distributed orientation grid. b. The detected curve.

5 Image Segmentation

The proposed image segmentation scheme is based on concurrent synchronization [37] and follows the general visual grouping algorithm described in section 2.5. In the basic version, the coupling gains between oscillators are again inspired directly by more standard techniques, namely non-local grouping as applied e.g. in image denoising [3, 4], in addition to the Gestalt laws. Multi-layer neural networks and feedback mechanisms are then introduced to reinforce robustness under strong noise perturbation and to aggregate the grouping. Experiments on both synthetic and real images are shown.

5.1 Basic Image Segmentation

One oscillator is associated to each pixel in the image. Within a neighborhood N_{x,y} the oscillators are non-locally coupled with a coupling strength

  k_{(x,y),(x′,y′)} = k exp(−|u(x,y) − u(x′,y′)|² / σ²),   (x′,y′) ∈ N_{x,y},        (9)

where u(x,y) is the pixel gray-level at coordinates (x,y) and the size of N_{x,y} adjusts the extent of the non-local coupling. Pixels with similar gray-levels are coupled more tightly, as suggested by the color constancy Gestalt law [49, 20, 31]. Non-local coupling plays an important role in regularizing the image segmentation, with a larger neighborhood resulting in a more regularized segmentation and higher robustness to noise.
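
A minimal sketch of the non-local gains (9) on a gray-level image is given below; the neighborhood half-width, the gain k0 and the width sigma are illustrative values, and the dense double loop is written for clarity rather than efficiency.

    import numpy as np

    def segmentation_gains(image, half_width=3, k0=5.0, sigma=20.0):
        """Non-local coupling for segmentation, cf. (9): pixels within a
        (2*half_width+1)^2 neighborhood are coupled according to the similarity
        of their gray-levels.  Returns a dict {(p, q): gain} over pixel index
        pairs.  half_width, k0 and sigma are illustrative values."""
        h, w = image.shape
        gains = {}
        for y in range(h):
            for x in range(w):
                p = y * w + x
                for dy in range(-half_width, half_width + 1):
                    for dx in range(-half_width, half_width + 1):
                        yy, xx = y + dy, x + dx
                        if (dy, dx) == (0, 0) or not (0 <= yy < h and 0 <= xx < w):
                            continue
                        q = yy * w + xx
                        diff = float(image[y, x]) - float(image[yy, xx])
                        gains[(p, q)] = k0 * np.exp(-diff**2 / sigma**2)
        return gains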

Fig. 11-a illustrates a synthetic image (the gray-levels of the black, gray and white parts are 0, 128 and 255) contaminated by white Gaussian noise of moderate standard deviation. The oscillators converge into three concurrently synchronized groups, as plotted in Fig. 11-b, which results in a perfect segmentation, as shown in Fig. 11-c.

a b c
Figure 11: a. A synthetic image (the gray-levels of the black, gray and white parts are respectively 0, 128, 255) contaminated by white Gaussian noise. b. The traces of the neural oscillation. The oscillators converge into three concurrently synchronized groups. c. Segmentation result.

Fig. 12 shows some natural image segmentation examples. The segmentation results are rather regular, with hardly any “salt and pepper” holes, thanks to the diffusive coupling. The sagittal MRI (Magnetic Resonance Imaging) image in Fig. 12-a is segmented into 15 classes, with the segmentation result shown in Fig. 12-b. Salient regions such as the cortex, cerebellum and lateral ventricle are segregated with good accuracy. Fig. 12-c is a radar image in which boundaries are blurred. In the segmentation result with 20 classes shown in Fig. 12-d, the image, including the eye of the hurricane, is accurately segregated.

a b
c d
Figure 12: Real image segmentation. a and b. A sagittal MRI image and the segmentation result in 15 classes. c and d. A radar image and the segmentation result in 20 classes.

5.2 Noisy Image Segmentation with Feedback

Fig. 13-a shows a synthetic image contaminated by strong white Gaussian noise. Thanks to the non-local coupling, the neural oscillator network is robust, and the segmentation results (Fig. 13-b) are more regular than those of algorithms which do not take the image regularity prior into account, such as k-means (Fig. 13-f). However, due to the heavy noise perturbation, the segmentation result is not perfect. A feedback scheme is introduced to overcome this problem. Specifically, in a loose analogy with the visual cortex hierarchy, a second layer is introduced to reflect prior knowledge, in this case that proximate pixels of natural images are likely to belong to the same region. Feedback from the second layer to the first exploits this regularity to increase robustness to noise.

The feedback error correction mechanism is implemented using the generalized diffusive connections introduced in Section 2.3. On top of the first layer previously described, a second layer of oscillators is added and coupled with the first layer, as shown in Fig. 14. The second layer contains one oscillator per region obtained in the first layer, the input of each oscillator being the average gray-level of its image region. Each oscillator in the first layer, indexed by its pixel coordinates (x, y), is coupled with all the oscillators in the second layer, with the coupling strengths depending on the segmentation obtained in a neighborhood of (x, y). More precisely, the coupling matrices A and B in the generalized diffusive connection (5) are designed so that the coupling strength from an oscillator r in the second layer to an oscillator (x, y) in the first layer is proportional to the number of pixels in the neighborhood of (x, y) which belong to region r according to the current segmentation, as illustrated in Fig. 14. This inter-layer connection reinforces the coupling between the first-layer oscillators and the second-layer oscillators which correspond to the locally dominant regions, and thus regularizes the segmentation. After each adjustment of A and B, the oscillator network converges to a new concurrently synchronized state and the segmentation is updated. Figs. 13-b to 13-e show the segmentation results at the beginning of the 4th, 6th, 8th and 10th oscillation periods. The segmentation errors are corrected as the feedback proceeds. The segmentation is stable at Fig. 13-e after the 10th period.
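
Under the description above, the inter-layer feedback gains can be sketched as follows; the helper name feedback_gains, the array layout, the gain constant and the neighborhood half-width are illustrative choices rather than the paper's implementation.

    import numpy as np

    def feedback_gains(labels, n_regions, half_width=3, k0=1.0):
        """Feedback coupling sketch for Section 5.2: the gain from second-layer
        oscillator r to the first-layer oscillator at pixel (y, x) is
        proportional to how many pixels in that pixel's neighborhood currently
        carry label r.  `labels` is the (h, w) integer segmentation; k0 and
        half_width are illustrative.  Returns an (h, w, n_regions) gain array."""
        h, w = labels.shape
        gains = np.zeros((h, w, n_regions))
        for y in range(h):
            for x in range(w):
                y0, y1 = max(0, y - half_width), min(h, y + half_width + 1)
                x0, x1 = max(0, x - half_width), min(w, x + half_width + 1)
                patch = labels[y0:y1, x0:x1]
                counts = np.bincount(patch.ravel(), minlength=n_regions)
                gains[y, x] = k0 * counts / counts.sum()   # locally dominant regions win
        return gains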

Fig. 15-a illustrates a heavily noisy infrared night-vision image. The segmentation result of the basic algorithm without feedback, shown in Fig. 15-b, contains a few isolated errors and, more importantly, the contour of the segmented object zigzags due to the strong noise perturbation. As illustrated in Fig. 15-c, the feedback procedure corrects the isolated errors and regularizes the contour.

a b c
d e f
Figure 13: a. A synthetic image (the gray-levels of the black, gray and white parts are respectively 0, 128, 255) contaminated by strong white Gaussian noise. b-e. Segmentation by neural oscillators with feedback, shown at the 4th, 6th, 8th and 10th oscillation periods. f. Segmentation by k-means.
Figure 14: Neural network with feedback for image segmentation. In the first layer one oscillator is associated to each pixel. An oscillator is coupled with all the others in a neighborhood (for clarity only the couplings with the 4 nearest neighbors are shown in the figure). The image is segmented into two regions marked by white and gray. The second layer contains two oscillators whose inputs are respectively the average gray-levels of the two image regions. The coupling strength between an oscillator indexed by its coordinates in the first layer and an oscillator indexed by a region in the second layer is proportional to the number of pixels in the neighborhood that belong to that region according to the previous segmentation. Only five such couplings and their strengths are shown for clarity.
a b c
Figure 15: Infrared image segmentation. a. Infrared image. b. Segmentation without feedback. c. Segmentation with feedback.

5.3 Multi-layer Image Segmentation

The visual cortex is hierarchical, with cells at the lower levels having smaller receptive fields than those at the higher levels, and with information aggregating from bottom to top [38, 16]. This structure can be imitated by multi-layer oscillator networks.

Figure 16: Two-layer image segmentation.

Fig. 16 illustrates a two-layer image segmentation scheme. The image on the first layer is decomposed into four disjoint parts. Each part is treated as an independent image on which a basic image segmentation is performed (in this symbolic example, each part is segmented into two regions). The second layer aggregates the segmentation results obtained on the first layer, using a single oscillator to represent each resulting region, with the average gray-level in the region as the oscillator input. The coupling connections on the second layer follow the topology of the regions on the first layer: second-layer oscillators whose underlying first-layer regions are adjacent are coupled. The segmentation on the second layer merges some regions obtained on the first layer and provides the final segmentation result (in this example, some of the regions obtained on the first layer are merged on the second layer). Segmentation with more layers can follow the same principle. From a computational point of view, the multi-layer scheme saves memory and accelerates the computation.
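
The hand-off from the first layer to the second can be sketched as follows: one second-layer oscillator per first-layer region, with the region's mean gray-level as its input, and couplings restricted to spatially adjacent regions. The helper name and the 4-neighbor adjacency rule are illustrative choices.

    import numpy as np

    def second_layer_inputs(image, labels):
        """Sketch for the two-layer scheme of Fig. 16: one second-layer
        oscillator per first-layer region, with the region's mean gray-level as
        its input, and a coupling only between spatially adjacent regions."""
        regions = np.unique(labels)
        mean_gray = {int(r): image[labels == r].mean() for r in regions}  # oscillator inputs
        adjacent = set()
        h, w = labels.shape
        for y in range(h):
            for x in range(w):
                for dy, dx in ((0, 1), (1, 0)):                 # 4-neighbor adjacency
                    yy, xx = y + dy, x + dx
                    if yy < h and xx < w and labels[y, x] != labels[yy, xx]:
                        a, b = sorted((int(labels[y, x]), int(labels[yy, xx])))
                        adjacent.add((a, b))
        return mean_gray, adjacent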

Fig. 17 illustrates an example of multi-layer image segmentation. The image shown in Fig. 17-a is decomposed into four parts. Fig. 17-b and c present respectively the segmentation results of the first and second layers, both with 6 classes. From the first layer to the second, some regions that belong to different parts are merged, which eliminates some boundary artifacts of the first layer. The merging of a number of regions belonging to the same part, on the other hand, contributes to the correction of some over-segmentation. The segmentation result in Fig. 17-c is satisfactory. The river, although rather noisy in the original image, is segmented from the land with an accurate boundary. The two different aspects of the land are accurately segregated as well.

a b c
Figure 17: Multi-layer image segmentation. a. Aerial image. b. Segmentation result of the first layer. Each of the four parts is segmented into 6 classes. c. Segmentation result in 6 classes with the second layer.

6 Concluding Remarks

Inspired by neural synchronization mechanisms for perceptual grouping, simple networks of neural oscillators coupled with diffusive connections have been proposed to solve visual grouping problems. Stable multi-layer algorithms and feedback mechanisms have also been studied. The same algorithm has been shown to achieve promising results on several classical visual grouping problems, including point clustering, contour integration and image segmentation.

Appendix: Convergence and Stability

One can verify ([37], to which the reader is referred for more details on the analysis tools) that a sufficient condition for global exponential concurrent synchronization of an oscillator network is

  λ_min(V L Vᵀ) > sup_{x,t} λ_max( (J(x,t) + J(x,t)ᵀ) / 2 )        (10)

where λ_min(·) and λ_max(·) denote respectively the smallest and largest eigenvalues of a symmetric matrix, J is the Jacobian matrix of the individual dynamics, L is the Laplacian matrix of the network (L_ii = Σ_{j≠i} k_ij and L_ij = −k_ij for i ≠ j) and V is a projection matrix on M⊥. Here M⊥ is the subspace orthogonal to the subspace M in which all the oscillators are in synchrony or, more generally in the case of a hierarchy, where all oscillators at each level of the hierarchy are in synchrony. Note that M itself need not be invariant (i.e., all oscillators synchronized at each level of the hierarchy need not be a particular solution of the system), but only needs to be a subspace of the actual invariant synchronization subspace ([15], [37] section 3.3.i), which may consist of synchronized subgroups according to the input image. Indeed, the space where all the oscillator states are equal (or, in the case of a hierarchy, where at each level all the states are equal), while in general not invariant, is always a subspace of the actual invariant subspace corresponding to synchronized subgroups.
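
A small numerical sketch of checking condition (10) is given below for a single-layer network, treating each oscillator as having a scalar state (the full two-state model adds a Kronecker structure that this simplification ignores). The bound on the symmetric part of the Jacobian is assumed to be supplied by the user.

    import numpy as np

    def sync_margin(K, jacobian_bound):
        """Numerical check of condition (10) for a single-layer network with
        scalar states.  `K` is the symmetric gain matrix; `jacobian_bound` is a
        user-supplied upper bound on the largest eigenvalue of the symmetric
        part of the individual Jacobian (an assumption of this sketch)."""
        n = K.shape[0]
        L = np.diag(K.sum(axis=1)) - K                # network Laplacian
        # Orthonormal basis of the subspace orthogonal to the all-equal subspace M.
        ones = np.ones((n, 1)) / np.sqrt(n)
        Q, _ = np.linalg.qr(np.eye(n) - ones @ ones.T)
        V = Q[:, :n - 1].T                            # (n-1) x n projection matrix
        lam_min = np.linalg.eigvalsh(V @ L @ V.T).min()
        return lam_min - jacobian_bound               # positive margin: (10) holds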

These results can be applied, e.g., to the individual oscillator dynamics (1)-(2). Letting J denote the Jacobian matrix of these individual dynamics and using a diagonal metric transformation, one easily shows, similarly to [46, 47], that the transformed Jacobian matrix is negative definite for sufficiently strong coupling. More general forms of oscillators can also be used. For instance, other second-order models can be created based on a smooth function f and an arbitrary sigmoid-like function s with s′ ≥ 0, in the form

  dv/dt = f(v) − w + I                                             (11)
  dw/dt = ε [ s(v) − w ]                                           (12)

with the transformed Jacobian matrix again negative definite for sufficiently strong coupling.

From a stability analysis point of view, the coupling matrix of the Gaussian-tuned coupling, composed of the coupling coefficients k_{ij} in Eq. (4), presents desirable properties. It is symmetric, and the classical theorem of Schoenberg (see [32]) shows that it is positive definite. Also, although it may actually be state-dependent, the coupling matrix can always be treated simply as a time-varying external variable for stability analysis and Jacobian computation purposes, as detailed in ([46], section 4.4, and [37], section 2.3).

Last, note that more generally the individual dynamics need not all be oscillators. In particular, memory-like or voting-like subdynamics could be introduced in a feedback hierarchy, by having the corresponding individual dynamics be the gradient of a scalar function with multiple local minima.

References

  • [1] G. Aubert and P. Kornprobst. Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations. Springer, 2nd edition, 2006.
  • [2] G.J. Brown and D.L. Wang. Modelling the perceptual segregation of double vowels with a network of neural oscillators. Neural Networks, 10(9):1547–1558, 1997.
  • [3] A. Buades, B. Coll, and J.M. Morel. A review of image denoising algorithms, with a new one. Multiscale Modeling & Simulation, 4(2):490–530, 2005.
  • [4] A. Buades, B. Coll, and J.M. Morel. Nonlocal Image and Movie Denoising. International Journal of Computer Vision, 76(2):123–139, 2008.
  • [5] G. Buzsaki. Rhythms of the Brain. Oxford University Press, USA, 2006.
  • [6] K. Chen and D.L. Wang. A dynamically coupled neural oscillator network for image segmentation. Neural Networks, 15(3):423–439, 2002.
  • [7] A. Desolneux, L. Moisan, and J.-M. Morel. Computational gestalts and perception thresholds. Journal of Physiology - Paris, 97(2-3):311–322, 2003.
  • [8] A. Desolneux, L. Moisan, and J.-M. Morel. A grouping principle and four applications. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(4):508–513, 2003.
  • [9] A. Desolneux, L. Moisan, and J.-M. Morel. Gestalt Theory and Image Analysis, a Probabilistic Approach. Interdisciplinary Applied Mathematics series, Springer Verlag, 2007. Preprint available at http://www.cmla.ens-cachan.fr/Utilisateurs/morel/lecturenote.pdf.
  • [10] F. Cao, J. Delon, A. Desolneux, P. Musé, and F. Sur. A unified framework for detecting groups and application to shape recognition. Technical Report 1746, IRISA, September 2005.
  • [11] O. Faugeras, F. Grimbert, and J.J. Slotine. Stability and synchronization in neural fields. S.I.A.M. Journal on Applied Mathematics, 68(8), 2008.
  • [12] D. J. Field, A. Hayes, and R. F. Hess. Contour integration by the human visual system: evidence for a local “association field”. Vision Res, 33(2):173–193, January 1993.
  • [13] R. FitzHugh. Impulses and physiological states in theoretical models of nerve membrane. Biophysical Journal, 1:445–466., 1961.
  • [14] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. Readings in Computer Vision: Issues, Problems, Principles, and Paradigms, pages 564–584, 1987.
  • [15] L. Gerard and J.J.E. Slotine. Neuronal networks and controlled symmetries. arXiv:q-bio/0612049v4.
  • [16] J. Hawkins and S. Blakeslee. On Intelligence, 2004.
  • [17] R.F. Hess, A. Hayes, and DJ Field. Contour integration and cortical processing. Journal of Physiology-Paris, 97(2-3):105–119, 2003.
  • [18] J.J. Hopfield and C.D. Brody. What is a moment? Transient synchrony as a collective mechanism for spatiotemporal integration. Proceedings of the National Academy of Sciences, 98(3):1282, 2001.
  • [19] A.K. Jain and R.C. Dubes. Algorithms for Clustering Data. Prentice Hall, 1988.
  • [20] G. Kanizsa. Grammatica del Vedere, Il Mulino, Bologna, 1980. Traduction française: La grammaire du voir, Diderot Editeur, Arts et Sciences, 1996.
  • [21] M. Kuzmina, E. Manykin, and I. Surina. Tunable Oscillatory Network for Visual Image Segmentation. Proc. of ICANN, pages 1013–1019, 2001.
  • [22] M. Kuzmina, E. Manykin, and I. Surina. Oscillatory network with self-organized dynamical connections for synchronization-based image segmentation. BioSystems, 76(1-3):43–53, 2004.
  • [23] T.S. Lee. Computations in the early visual cortex. Journal of Physiology-Paris, 97(2-3):121–139, 2003.
  • [24] Z. Li. A Neural Model of Contour Integration in the Primary Visual Cortex. Neural Computation, 10(4):903–940, 1998.
  • [25] Z. Li. Visual segmentation by contextual influences via intra-cortical interactions in the primary visual cortex. Network: Computation in Neural Systems, 10(2):187–212, 1999.
  • [26] Z. Li. Pre-attentive segmentation in the primary visual cortex. Spatial Vision, 13(1):25–50, 2000.
  • [27] Z. Li. Computational Design and Nonlinear Dynamics of a Recurrent Network Model of the Primary Visual Cortex. Neural Computation, 13(8):1749–1780, 2001.
  • [28] X. Liu and D.L. Wang. Range image segmentation using a relaxation oscillator network. Neural Networks, IEEE Transactions on, 10(3):564–573, 1999.
  • [29] J. B. MacQueen. Some methods for classification and analysis of multivariate observations. In L. M. Le Cam and J. Neyman, editors, Proc. of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pages 281–297. University of California Press, 1967.
  • [30] S. Mallat. Geometrical grouplets. Applied and Computational Harmonic Analysis, 2008.
  • [31] W. Metzger. Laws of seeing. 2006.
  • [32] C.A. Micchelli. Interpolation of scattered data: Distance matrices and conditionally positive definite functions. Constructive Approximation, 2:11–22, 1986.
  • [33] J.M. Morel and S. Solimini. Variational methods in image segmentation. Progress in Nonlinear Differential Equations and their Applications, 14, 1995.
  • [34] D. Mumford and J. Shah. Optimal approximations by piecewise smooth functions and associated variational problems. Comm. Pure Appl. Math, 42(5):577–685, 1989.
  • [35] J. Nagumo, S. Arimoto, and S. Yoshizawa. An active pulse transmission line simulating nerve axon. Proceedings of the IRE, 50(10):2061–2070, Oct. 1962.
  • [36] Edwin Olson, Matthew Walter, John Leonard, and Seth Teller. Single cluster graph partitioning for robotics applications. In Proceedings of Robotics Science and Systems, pages 265–272, 2005.
  • [37] Quang-Cuong Pham and Jean-Jacques Slotine. Stable concurrent synchronization in dynamic system networks. Neural Netw., 20(1):62–77, 2007.
  • [38] R.P.N. Rao and D.H. Ballard. Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2:79–87, 1999.
  • [39] T. Serre, L. Wolf, S. Bileschi, M. Riesenhuber, and T. Poggio. Robust Object Recognition with Cortex-Like Mechanisms. IEEE Transaction On Pattern Analysis and Machine Intelligence, pages 411–426, 2007.
  • [40] J. Shi and J. Malik. Normalized Cuts and Image Segmentation. IEEE Transaction On Pattern Analysis and Machine Intelligence, pages 888–905, 2000.
  • [41] W. Singer and C.M. Gray. Visual Feature Integration and the Temporal Correlation Hypothesis. Annual Reviews in Neuroscience, 18(1):555–586, 1995.
  • [42] D.L. Wang. The time dimension for scene analysis. Neural Networks, IEEE Transactions on, 16(6):1401–1426, 2005.
  • [43] D.L. Wang and G.J. Brown. Separation of speech from interfering sounds based on oscillatory correlation. Neural Networks, IEEE Transactions on, 10(3):684–697, 1999.
  • [44] D.L. Wang and P. Chang. An oscillatory correlation model of auditory streaming. Cognitive Neurodynamics, 2(1):7–19, 2008.
  • [45] D.L. Wang and D. Terman. Image Segmentation Based on Oscillatory Correlation. Neural Computation, 9(4):805–836, 1997.
  • [46] W. Wang and J.J.E. Slotine. On partial contraction analysis for coupled nonlinear oscillators. Biological Cybernetics, 92(1):38–53, 2005.
  • [47] W. Wang and J.J.E. Slotine. Contraction analysis of time-delayed communications and group cooperation. IEEE Transactions on Automatic Control, 51(4):712–717, 2006.
  • [48] W. Wang and J.J.E. Slotine. Fast computation with neural oscillators. Neurocomputing, 69(16-18):2320–2326, 2006.
  • [49] M. Wertheimer. Untersuchungen zur Lehre der Gestalt, II. Psychologische Forschung, 4:301–350, 1923. Translation published as Laws of Organization in Perceptual Forms, in Ellis, W. (1938). A source book of Gestalt psychology (pp. 71-88). Routledge & Kegan Paul.