## 1 Introduction

Significant progress towards action recognition in realistic video settings has been achieved in the past few years [20, 22, 24, 28, 33]. However, action recognition is often cast as a classification or detection problem using fully annotated data, where the temporal boundaries of individual actions, e.g. in the form of pre-segmented video clips, are given during training. The goal of this paper is to exploit the supervisory power of the temporal ordering of actions in a video stream, as illustrated in figure 1.

Gathering fully annotated videos with accurately time-stamped action labels is time-consuming in practice. This limits the applicability of fully supervised machine learning techniques to large-scale data. Weakly and semi-supervised methods that exploit data redundancy are a promising alternative in this case. On the other hand, it is easy to gather videos with some level of textual annotation but poor temporal localization, from movie scripts for example. This type of weak supervisory signal has been used before in classification [20] and temporal localization [5] tasks. However, the crucial information on the ordering of actions has, to the best of our knowledge, been ignored so far in the weakly supervised setting. Following recent work on discriminative clustering [2, 35] and on image [16] and video [4] cosegmentation, we propose to exploit this information in a discriminative framework where both the action model and the optimal assignments under temporal constraints are learned together.

### 1.1 Related Work

The temporal ordering of actions, e.g. in the form of Markov models or action grammars, has been used to constrain action prediction in videos [11, 13, 19, 21, 27, 32]. These kinds of spatial and temporal constraints have also been used in the context of group activity recognition [1, 18]. Similar to us, these papers exploit the temporal structure of videos, but focus on inferring action sequences from noisy but pre-defined action detectors, often in constrained surveillance and laboratory settings with a limited number of actions and static cameras. In contrast, in this work we exploit the temporal structure of actions for learning action classifiers in a weakly supervised set-up and show results on challenging videos from feature-length movies.

Also related is work on recognition of composite activities [26], where atomic action models (“cut”, “open”) are learned given full supervision on a cooking video dataset. Composite activity models (“prepare pizza”) are learned on top of the atomic actions, using the prediction scores for the atomic actions as features. Annotations are, however, used without taking into account the ordering of actions.

Temporal models for recognition of individual actions have been explored in, e.g., [20, 24, 31]. Implicit models in the form of temporal pyramids have been used with bag-of-features representations [20]. Others have used more explicit temporal models in the form of, e.g., latent action parts [24, 31]. Contrary to these methods, we do not use an a priori model of the temporal structure of individual actions, but instead exploit the given ordering constraints between actions to learn better individual action models.

Weak supervision for learning actions has been explored in [4, 5, 20]. These methods use uncertain temporal annotations of actions provided by movie scripts. Contrary to these works, our method learns multiple actions simultaneously and incorporates temporal ordering constraints on action labels obtained, e.g., from the movie scripts.

Dynamic time warping (DTW) algorithms can be used to match temporal sequences and are extensively used in speech recognition, e.g. [7, 25]. In computer vision, the temporal order of events has been exploited in [23], where a DTW-like algorithm is used at test time to improve the performance of non-maximum suppression on the output of pre-trained action detectors.

Discriminative clustering is an unsupervised method that partitions data by minimizing a discriminative objective, optimizing over both classifiers and labels [2, 35]. Convex formulations of discriminative clustering have been explored in [2, 8]. In computer vision these methods have been successfully applied to co-segmentation [17]. The approach presented in this paper is inspired by this framework, but adds to it the use of ordering constraints.

In this work, we make use of the Frank-Wolfe algorithm (a.k.a. conditional gradient method) to minimize our cost function. The Frank-Wolfe algorithm [6, 15] is a classical convex optimization procedure that minimizes a continuously differentiable convex function over a compact convex domain using only minimizations of linear functions over the domain. In particular, it does not require any projection steps. It has recently received increased attention in the context of large-scale optimization [9, 15].

### 1.2 Problem Statement and Contributions

The temporal assignment problem addressed in the rest of this paper and illustrated by Fig. 1 can be stated as follows: we are given a set of $N$ video clips (or clips for short in what follows).
A clip is defined as a contiguous video segment, and may correspond, for example, to a scene (as defined in a movie script) or a collection of subsequent shots.
Each clip is divided into $T$ small *time intervals* (chunks of video consisting of 10 frames in our case), and annotated by an ordered list of elements taken from some action set $\mathcal{A}$ of size $K$ (that may consist of labels such as “open door”, “stand up”, “answer phone”, etc., as in Fig. 1 for example).
Note that clips are not all of the same length, but for the sake of simplicity we assume they are in the presentation below.
We address the problem of assigning to each time interval of each clip one action in $\mathcal{A}$, respecting the order in which the actions appear in the original annotation list (Fig. 2).

#### Contributions.

We make the following contributions: (i) we propose a discriminative clustering model (section 2) that handles weak supervision in the form of temporal ordering constraints and recovers a classifier for each action together with the temporal localization of each action in each video clip; (ii) we design a convex relaxation of the proposed model and show it can be efficiently solved using the conditional gradient (Frank-Wolfe) algorithm (section 3); and finally (iii) we demonstrate improved performance of our model on a new action dataset for the tasks of temporal localization (section 6) and action classification (section 7). All the data and code are publicly available at http://www.di.ens.fr/willow/research/ordering.

## 2 Discriminative Clustering with Ordering Constraints

In this section we describe the proposed discriminative clustering model that incorporates label ordering constraints. The input is a set of video clips, each annotated with an ordered list of action labels specifying the sequence of actions present in the clip. The output is the temporal assignment of actions to individual time intervals in each clip, respecting the ordering constraints provided by the annotations, together with a learnt classifier for each action, common to all clips. In the following, we first formulate the temporal assignment of actions to individual time intervals as discriminative clustering (section 2.1), then introduce a parametrization of temporal assignments using indicator variables (section 2.2), and finally describe the choice of a loss function for the discriminative clustering that leads to a convex cost (section 2.3).

### 2.1 Problem Formulation

Let us now formalize the temporal assignment problem. We denote by $x^n_t$ in $\mathbb{R}^d$ the local descriptor of video clip number $n$ during time interval number $t$. For every $n$ in $\{1,\dots,N\}$, we also define $a^n_j$ as the element of $\mathcal{A}$ corresponding to annotation number $j$ (Fig. 2). Note that the set of actions $\mathcal{A}$ itself is not ordered: even if we represent $\mathcal{A}$ by a table for convenience, the elements of this table are action labels and have no natural order. The annotations, on the other hand, are ordered, for example according to where they occur in a movie script, and are indexed by some integer between $1$ and $J_n$, the total number of annotations for clip $n$. Thus $j \mapsto a^n_j$ maps (ordered) annotation indices onto (unordered) actions, and depends of course on the video clip under annotation. Parts of any video clip may belong to the background. To account for this fact, a dummy background label $\varnothing$ is inserted in the annotation list between every consecutive pair of actual labels.

Let us denote by $\mathcal{S}_n$ the set of *admissible assignments* for clip $n$, that is, the set of sequences $z = (z_1, \dots, z_T)$ with elements in $\{1,\dots,J_n\}$, where $J_n$ is the number of annotations of the clip (including the dummy background labels), such that $z_1 = 1$, $z_T = J_n$, and $z_{t+1} = z_t$ or $z_{t+1} = z_t + 1$ for all $t$ in $\{1,\dots,T-1\}$.
Such an assignment is illustrated in Fig. 2.

Let us also denote by $\mathcal{F}$ the space of classifiers of interest, by $\Omega$ some regularizer on this space, and by $\ell$ an appropriate loss function. For a given clip $n$ and a fixed classifier $f$ in $\mathcal{F}$, the problem of assigning the clip intervals to the annotation sequence can be written as the minimization of the cost function:

$$E_n(z, f) = \sum_{t=1}^{T} \ell\big(a^n_{z_t}, f(x^n_t)\big) \qquad (1)$$

with respect to the assignment $z$ in $\mathcal{S}_n$, where $a^n_j$ denotes the action corresponding to annotation $j$ of clip $n$. The regularizer $\Omega$ prevents overfitting, and we therefore define a scalar parameter $\lambda \geq 0$ to control this effect. Jointly learning the classifiers and solving the assignment problems corresponds to the following optimization problem:

$$\min_{f \in \mathcal{F}} \;\; \min_{z^1 \in \mathcal{S}_1, \dots, z^N \in \mathcal{S}_N} \;\; \sum_{n=1}^{N} E_n(z^n, f) \;+\; \lambda\, \Omega(f). \qquad (2)$$

### 2.2 Parameterization Using an Assignment Matrix

As will be shown in the following sections, it is convenient to reformulate our problem in terms of indicator variables.
The corresponding multi-class loss is $\ell : \mathbb{R}^K \times \mathbb{R}^K \to \mathbb{R}_+$, and the classifiers are functions $f : \mathbb{R}^d \to \mathbb{R}^K$.
For a clip $n$, let us define the *assignment matrix* $Z^n$ in $\{0,1\}^{T \times K}$, which is composed of entries $Z^n_{tk}$ such that $Z^n_{tk} = 1$ if the $t$-th interval of clip $n$ is assigned to class $k$, and $0$ otherwise.

Let $Z^n_t$ denote the row vector of dimension $K$ corresponding to the $t$-th row of $Z^n$. The cost function defined in Eq. (1) can then be rewritten as $\sum_{t=1}^{T} \ell\big(Z^n_t, f(x^n_t)\big)$.

Note: to avoid cumbersome double summations, we suppose from now on that we work with a single clip. This allows us to drop the superscript notation: we replace $Z^n$ by $Z$ and skip the sum over clips. We also replace the descriptor notation $x^n_t$ by $x_t$ and the row extraction notation $Z^n_t$ by $Z_t$. This is without loss of generality, and our method as described in the sequel handles multiple clips with some simple bookkeeping.

Because of the temporal constraints, we want the assignment matrices to correspond to valid assignment sequences $z$. This amounts to imposing some constraints on $Z$. Let us therefore define $\mathcal{Z}$, the set of all valid assignment matrices, as:

$$\mathcal{Z} = \Big\{\, Z \in \{0,1\}^{T \times K} \;:\; \exists\, \text{admissible } z \text{ such that } Z_{tk} = \mathbb{1}[a_{z_t} = k] \,\Big\}. \qquad (3)$$

There is a bijection between the set of admissible sequences and $\mathcal{Z}$.
For each admissible $z$ there exists a unique corresponding $Z$ in $\mathcal{Z}$ and *vice versa*.
Figure 3 gives an intuitive illustration of this bijection.
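For concreteness, the mapping from an admissible sequence $z$ to its assignment matrix $Z$ can be sketched as follows. This is illustrative code, not the paper's implementation; the toy annotation-to-action map below is hypothetical, and the dummy background insertions are omitted for brevity:

```python
import numpy as np

def assignment_to_matrix(z, actions, K):
    """Convert an admissible assignment sequence z (annotation index per
    interval, 1-based) into the corresponding T x K assignment matrix Z.
    `actions[j]` gives the (0-based) action class of annotation j+1."""
    T = len(z)
    Z = np.zeros((T, K), dtype=int)
    for t, j in enumerate(z):
        Z[t, actions[j - 1]] = 1      # one-hot row: interval t gets one class
    return Z

# Toy example: 5 intervals, 3 annotations mapping to classes 0, 2, 0.
z = [1, 1, 2, 3, 3]
actions = [0, 2, 0]
Z = assignment_to_matrix(z, actions, K=3)
```

Each row of the resulting matrix is one-hot, and consecutive rows change only when the underlying annotation index advances, mirroring the bijection of Fig. 3.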

The set $\mathcal{Z}$ is a subset of the set of stochastic matrices (nonnegative matrices whose rows sum up to 1), formed by the matrices whose columns consist of blocks of contiguous ones occurring in a predefined order (as in Fig. 3). There are as many elements in $\mathcal{Z}$ as there are ways of choosing the positions of the $J-1$ transitions among the $T-1$ possibilities, where $J$ is the number of annotations, thus $|\mathcal{Z}| = \binom{T-1}{J-1}$, which can be extremely large in our setting (clips contain on average 84 intervals in our dataset). Furthermore, it is very difficult to describe explicitly the algebraic constraints on stochastic matrices that define $\mathcal{Z}$. This point will prove important in Sec. 3 when we propose an optimization algorithm for learning our model. Using these notations, Eq. (2) is equivalent to:

$$\min_{f \in \mathcal{F}} \; \min_{Z \in \mathcal{Z}} \; \sum_{t=1}^{T} \ell\big(Z_t, f(x_t)\big) \;+\; \lambda\, \Omega(f). \qquad (4)$$
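The combinatorial count above can be checked by brute force for small values (an illustrative script, not part of the method):

```python
from itertools import product
from math import comb

def admissible_assignments(T, J):
    """Brute-force enumeration of monotone sequences z of length T with
    z[0] = 1, z[-1] = J and increments of 0 or 1 (small T only)."""
    out = []
    for steps in product((0, 1), repeat=T - 1):
        z = [1]
        for s in steps:
            z.append(z[-1] + s)
        if z[-1] == J:                # keep sequences ending at annotation J
            out.append(z)
    return out
```

The number of admissible sequences equals the number of ways of placing the $J-1$ unit increments among the $T-1$ steps.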

### 2.3 Quadratic Cost Functions

We now choose specific loss and regularizer functions that lead to a quadratic cost function. This choice leads to a convex relaxation of our problem. We use multi-class linear classifiers of the form $f(x) = W^\top x + b$, where $W \in \mathbb{R}^{d \times K}$ and $b \in \mathbb{R}^K$. We choose the square loss function, regularized with the Frobenius norm of $W$, because in that case the optimal parameters $W$ and $b$ can be computed in closed form through matrix inversion. Let $X$ be the matrix in $\mathbb{R}^{T \times d}$ formed by the concatenation of all descriptors $x_t$. For this choice of loss and regularizer, our objective function can be rewritten using the matrices defined above as:

$$\min_{Z \in \mathcal{Z}} \; \min_{W,\, b} \; \frac{1}{T}\, \big\| Z - X W - \mathbf{1}_T\, b^\top \big\|_F^2 \;+\; \lambda\, \|W\|_F^2. \qquad (5)$$

This is exactly a ridge regression cost. Minimizing this cost with respect to $W$ and $b$ for a fixed $Z$ can be done in closed form [2, 10]. Setting the partial derivatives with respect to $W$ and $b$ to zero and plugging the solution back yields the following equivalent problem:

$$\min_{Z \in \mathcal{Z}} \; \operatorname{Tr}\big(Z Z^\top B\big), \quad \text{with} \quad B = \frac{1}{T}\, \Pi_T \Big( I_T - X \big(X^\top \Pi_T X + T\lambda\, I_d\big)^{-1} X^\top \Big) \Pi_T, \qquad (6)$$

where the matrix $\Pi_T = I_T - \frac{1}{T}\mathbf{1}_T\mathbf{1}_T^\top$ is the centering matrix. This corresponds to implicitly learning the classifier while finding the optimal $Z$ by solving a quadratic optimization problem in $Z$. The implicit classifier parameters $W^*$ and $b^*$ are shared among all video clips and can be recovered in closed form as:

$$W^* = \big(X^\top \Pi_T X + T\lambda\, I_d\big)^{-1} X^\top \Pi_T\, Z^*, \qquad b^* = \frac{1}{T}\,\big(Z^* - X W^*\big)^\top \mathbf{1}_T. \qquad (7)$$
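The closed-form elimination is easy to verify numerically. The sketch below (illustrative code, not the paper's implementation) builds the implicit cost matrix and the implicit classifier, so that the trace cost can be checked against the ridge-regression cost at the optimum:

```python
import numpy as np

def implicit_cost_matrix(X, lam):
    """Quadratic cost matrix B obtained after eliminating W and b."""
    T, d = X.shape
    Pi = np.eye(T) - np.ones((T, T)) / T           # centering matrix
    M = np.linalg.inv(X.T @ Pi @ X + T * lam * np.eye(d))
    return Pi @ (np.eye(T) - X @ M @ X.T) @ Pi / T

def recover_classifier(X, Z, lam):
    """Closed-form ridge solution (W, b) for a fixed assignment matrix Z."""
    T, d = X.shape
    Pi = np.eye(T) - np.ones((T, T)) / T
    W = np.linalg.solve(X.T @ Pi @ X + T * lam * np.eye(d), X.T @ Pi @ Z)
    b = (Z - X @ W).T @ np.ones(T) / T
    return W, b
```

For any fixed one-hot $Z$, the value $\operatorname{Tr}(ZZ^\top B)$ coincides with the ridge cost evaluated at the recovered $(W, b)$, and $B$ is symmetric with the constant vector in its null space.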

## 3 Convex Relaxation and the Frank-Wolfe Algorithm

In Sec. 2, we have seen that our model can be interpreted as the minimization of a convex quadratic function ($B$ is positive semidefinite) over a very large but discrete domain. As is usual for this type of hard combinatorial optimization problem, we replace the discrete set $\mathcal{Z}$ by its convex hull $\bar{\mathcal{Z}}$. This allows us to find a continuous solution of the relaxed problem using an appropriate and efficient algorithm for convex optimization.

### 3.1 The Frank-Wolfe Algorithm

We want to carry out the minimization of a convex function over the compact polytope $\bar{\mathcal{Z}}$, defined as the convex hull of the large but finite set of integer points corresponding to admissible assignments. When it is possible to optimize a linear function over a constraint set of this kind, but other usual operations (like projections) are not tractable, a good way to optimize a convex objective function is to use the iterative Frank-Wolfe algorithm (a.k.a. conditional gradient method) [3, 6]. We show in Sec. 3.2 that we can minimize linear functions over $\mathcal{Z}$, so this is an appropriate choice in our case.
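As a concrete illustration, here is a minimal sketch of such a Frank-Wolfe loop for a generic convex quadratic objective, assuming only a linear minimization oracle over the domain (all names are illustrative, not the paper's implementation). The toy usage runs the loop on a probability simplex, whose oracle simply returns a standard basis vector:

```python
import numpy as np

def frank_wolfe(grad, linear_oracle, z0, n_iters=5000, tol=1e-9):
    """Minimize a smooth convex function over a polytope, given only its
    gradient and an oracle minimizing linear functions over the domain."""
    z = z0
    for k in range(n_iters):
        g = grad(z)
        v = linear_oracle(g)                 # extreme point minimizing <g, .>
        gap = float(np.sum(g * (z - v)))     # linearization duality gap
        if gap < tol:                        # certificate of near-optimality
            break
        gamma = 2.0 / (k + 2)                # universal step size (line search also possible)
        z = (1 - gamma) * z + gamma * v
    return z

# Toy usage: minimize z' Q z over the probability simplex.
Q = np.diag([1.0, 2.0, 3.0])
z = frank_wolfe(lambda z: 2 * Q @ z,
                lambda g: np.eye(3)[np.argmin(g)],
                np.array([1.0, 0.0, 0.0]))
```

In our setting the iterate is the assignment matrix and the oracle is the dynamic program of Sec. 3.2; the duality gap computed in the loop serves as the stopping criterion mentioned below.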

The idea behind the Frank-Wolfe algorithm is rather simple. An affine approximation of the objective function is minimized over $\bar{\mathcal{Z}}$, yielding an extreme point $V$ of $\bar{\mathcal{Z}}$. Then a convex combination of $V$ and the current point $Z$ is computed. This is repeated until convergence. The interpolation parameter $\gamma$ can be chosen either by using the universal step size $\gamma = 2/(k+2)$, where $k$ is the iteration counter (see [15] and references therein) or, in the case of quadratic functions, by solving a univariate quadratic equation. In our implementation, we use the latter. A good feature of the Frank-Wolfe algorithm is that it provides for free a duality gap (referred to as the linearization duality gap [15]) that can be used as a certificate of sub-optimality and as a stopping criterion. The procedure is described in the special case of our relaxed problem in Algorithm 1. Figure 4 illustrates one step of the optimization.

### 3.2 Linear Function Minimization over $\mathcal{Z}$

It is possible to minimize linear functions over the integral set $\mathcal{Z}$. Simple arguments (see for instance Prop. B.21 of [3]) show that a solution over $\mathcal{Z}$ is also a solution over its convex hull $\bar{\mathcal{Z}}$. We will therefore focus on the minimization problem over $\mathcal{Z}$ and keep in mind that it also gives a solution over $\bar{\mathcal{Z}}$, as required by the Frank-Wolfe algorithm. Minimizing a linear function over $\mathcal{Z}$ amounts to solving the problem $\min_{Z \in \mathcal{Z}} \operatorname{Tr}(C^\top Z)$, where $C$ is a matrix in $\mathbb{R}^{T \times K}$. Using the equivalence between the assignment matrix ($Z$) and the plain assignment ($z$) representations (Fig. 3), this is equivalent to solving $\min_z \sum_{t=1}^{T} C_{t,\, a_{z_t}}$ over admissible sequences $z$. To better deal with the temporal structure of the assignment, let us denote by $D$ the $T \times J$ matrix with entries $D_{tj} = C_{t,\, a_j}$. The minimization problem then becomes $\min_z \sum_{t=1}^{T} D_{t, z_t}$, which can be solved using dynamic time warping. Indeed, let us define, for all $t$ in $\{1,\dots,T\}$ and $j$ in $\{1,\dots,J\}$, $P_{tj}$ as the cost of the optimal path from $(1,1)$ to $(t,j)$ in the graph defined by admissible assignments. We then have the following dynamic programming recursion: $P_{tj} = D_{tj} + \min\big(P_{t-1,\,j},\; P_{t-1,\,j-1}\big)$.

The optimal value can be computed in $O(TJ)$ operations using dynamic programming, by precomputing the matrix $D$, incrementally computing the values $P_{tj}$, and maintaining at each node back pointers to the appropriate neighbors.
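A minimal implementation of this dynamic program might look as follows (an illustrative sketch using 0-based indices; `D[t, j]` is the cost of assigning interval `t` to annotation `j`):

```python
import numpy as np

def min_ordered_assignment(D):
    """Minimize sum_t D[t, z[t]] over monotone assignments with z[0] = 0,
    z[T-1] = J-1 and z[t+1] in {z[t], z[t]+1}, via dynamic programming."""
    T, J = D.shape
    P = np.full((T, J), np.inf)            # P[t, j]: best path cost to (t, j)
    P[0, 0] = D[0, 0]
    for t in range(1, T):
        for j in range(min(t + 1, J)):     # at step t, at most t+1 annotations seen
            best = P[t - 1, j]
            if j > 0:
                best = min(best, P[t - 1, j - 1])
            P[t, j] = D[t, j] + best
    # Backtrack from (T-1, J-1) to recover the optimal sequence.
    z = [J - 1]
    for t in range(T - 1, 0, -1):
        j = z[-1]
        if j > 0 and P[t - 1, j - 1] <= P[t - 1, j]:
            z.append(j - 1)
        else:
            z.append(j)
    return P[T - 1, J - 1], z[::-1]

# Toy cost matrix: 5 intervals aligned to 3 ordered annotations.
D = np.array([[1., 5., 9.],
              [2., 1., 8.],
              [9., 1., 7.],
              [9., 9., 1.],
              [8., 7., 2.]])
cost, z = min_ordered_assignment(D)
```

The backtracking step plays the role of the back pointers mentioned above; storing explicit pointers during the forward pass is an equivalent alternative.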

### 3.3 Rounding

At convergence, the Frank-Wolfe algorithm finds the (in general non-integer) global optimum $\bar{Z}$ of Eq. (6) over $\bar{\mathcal{Z}}$. Given $\bar{Z}$, we want to find an appropriate nearby point in $\mathcal{Z}$. The simplest geometric rounding scheme consists in finding the closest point $Z$ of $\mathcal{Z}$ according to the Frobenius distance: $\min_{Z \in \mathcal{Z}} \|Z - \bar{Z}\|_F^2$. Expanding the norm yields $\|Z\|_F^2 - 2\operatorname{Tr}(\bar{Z}^\top Z) + \|\bar{Z}\|_F^2$.

Since $\bar{Z}$ is fixed, its norm is a constant. Moreover, since $Z$ is an element of $\mathcal{Z}$, its squared Frobenius norm is constant and equal to $T$ (every row contains exactly one entry equal to 1). The rounding problem is therefore equivalent to $\min_{Z \in \mathcal{Z}} -\operatorname{Tr}(\bar{Z}^\top Z)$, that is, to the minimization of a linear function over $\mathcal{Z}$. This can be done, as in Sec. 3.2, using dynamic programming.

## 4 Practical Concerns

In this section, we detail some refinements of our model. First, we show how to tackle a semi-supervised setting where some time-stamped annotations are available. Second, we discuss how to avoid trivial solutions, a common issue in discriminative clustering methods [16, 2, 8].

### 4.1 Semi-supervised Setting

Let us suppose that we are given some fully annotated clips (in the sense that they are labeled with time-stamped annotations), corresponding to a total of $T_s$ time intervals. For every such interval $t$ we have a descriptor $x_t$ in $\mathbb{R}^d$ and a class label $y_t$ in $\mathcal{A}$. We can incorporate this data by modifying the optimization problem as follows:

$$\min_{Z \in \mathcal{Z}} \; \operatorname{Tr}\big(Z Z^\top B\big) \quad \text{subject to} \quad Z_{t\, y_t} = 1 \;\text{ for every supervised interval } t. \qquad (8)$$

This supervised model does not change the optimization procedure, which remains valid.

### 4.2 Minimum size constraints

There are two inherent problems with discriminative clustering. First, a constant assignment matrix is typically a trivial optimum. As explained in [8], this occurs when the optimization domain is symmetric under permutations of the labels of the assignment matrices. Due to our temporal constraints, the set $\mathcal{Z}$ of admissible assignment matrices is not symmetric, and thus we are not subject to this effect.

The second difficulty is linked to the use of the centering matrix $\Pi_T$ in the expression of the quadratic cost matrix $B$. Indeed, we notice that the constant vector of length $T$ is an eigenvector of $B$ with eigenvalue zero. Therefore, the column-wise constant matrices are trivial solutions of our problem. These piecewise-constant solutions are not admissible for our problem due to the temporal constraints. In practice, however, we have observed that the algorithm returned assignments with almost all intervals affected to the background label $\varnothing$. We consider two ways to get rid of such trivial solutions.

### 4.3 Linear penalty.

To avoid solutions with dominant classes, we add constraints on the fraction of clip intervals assigned to each class. Ideally, we would like to incorporate a hard constraint on the proportion of each class as in [16], that is, to add to the problem formulated in Eq. (6) a constraint of the type:

$$\forall k, \quad \frac{1}{T}\, \operatorname{Tr}\big(E_k^\top Z\big) \;\geq\; p_k, \qquad (9)$$

where $E_k$ is the indicator matrix with 0 everywhere except on the $k$-th column, which is 1, and $p_k$ is the desired minimum fraction of intervals for class $k$. This constraint would make the operations described in Sec. 3.2 intractable: indeed, the dynamic program cannot be modified so that it respects constraints of minimal and maximal proportions.

Instead, a simple method for avoiding trivial solutions is to add to the objective function a Lagrangian term corresponding to the desired hard constraints, with a multiplier that we set by validation. We therefore incorporate a penalty that is linear in $Z$ into our objective function. The multiplier corresponding to this new term is a vector $\mu$ in $\mathbb{R}^K$, and the final objective function becomes:

$$\min_{Z \in \bar{\mathcal{Z}}} \; \operatorname{Tr}\big(Z Z^\top B\big) \;-\; \sum_{k=1}^{K} \mu_k\, \operatorname{Tr}\big(E_k^\top Z\big). \qquad (10)$$

Note that, with this simple modification, we can still use Alg. 1.

### 4.4 Balancing the loss.

Our dataset is heavily unbalanced towards the background class $\varnothing$. A common way to deal with unbalanced datasets is to weight the different classes appropriately: instead of considering in Eq. (5) the standard least-squares regression problem, we associate different weights with the different labels. If we denote by $\Lambda$ the diagonal matrix containing the weight of each class, the square loss of Eq. (5) is replaced by its class-weighted counterpart. The actual values of the weights are obtained by validation. Note that this approach differs from so-called re-weighted least squares (see for instance [10]), since here we weight labels and not instances. Following [2], a simple computation shows that the problem keeps the same quadratic form. Thus, our algorithm is unchanged except in the computation of the Frank-Wolfe gradient.

## 5 Dataset and Features

Dataset. Our input data consists of challenging video clips annotated with sequences of actions. One possible source for such data is movies with their associated scripts [4, 5, 20, 30]. The annotations provided by this kind of data are noisy and do not provide ground-truth time-stamps for evaluation. To address this issue, we have constructed a new action dataset containing clips annotated by sequences of actions. We have taken the 69 movies from which the clips of the Hollywood2 dataset were extracted [20], and manually added full time-stamped annotations for 16 action classes (12 of these classes are already present in Hollywood2). To build the clips that form our input data, we search the annotations for action chains containing at least two elements. To do so, we pad the temporal action annotations by 250 frames and search for overlapping intervals. A chain of such overlapping annotations forms one video clip with its associated action sequence in our dataset. In the end we obtain 937 clips, with the number of actions per clip ranging from 2 to 11. We subdivide each clip into temporal intervals of length 10 frames. Clips contain on average 84 intervals, the shortest containing 11 and the longest 289.
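The chain-building step can be sketched as follows (illustrative code, not the original tooling; `annotations` are assumed to be `(start, end, label)` triples in frames):

```python
def build_chains(annotations, pad=250):
    """Group time-stamped annotations into chains of overlapping intervals
    after padding each annotation by `pad` frames on both sides, and keep
    only chains containing at least two annotations."""
    annotations = sorted(annotations)              # sort by start frame
    chains, current = [], [annotations[0]]
    end = annotations[0][1] + pad                  # padded end of current chain
    for ann in annotations[1:]:
        if ann[0] - pad <= end:                    # padded intervals overlap
            current.append(ann)
            end = max(end, ann[1] + pad)
        else:                                      # gap: start a new chain
            chains.append(current)
            current, end = [ann], ann[1] + pad
    chains.append(current)
    return [c for c in chains if len(c) >= 2]
```

Each returned chain corresponds to one clip together with its ordered action sequence.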

Feature representation. We have to define a feature vector for every interval of a clip. We build one bag-of-words vector per interval. Recall that intervals are 10 frames long. To aggregate enough features, we pool features from the 30-frame-long window centered on the interval. We compute video descriptors following [34]. We generate vocabularies of size 2000 for HOF features. We restrict ourselves to this single channel to improve the running time, while being aware that by doing so we sacrifice some performance; in informal experiments, the MBH channels yielded very similar performance. We use the Hellinger kernel, obtaining an explicit feature map by square-rooting the L1-normalized histograms. Every data point is thus associated with a vector in $\mathbb{R}^{2000}$.
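The explicit feature map for the Hellinger kernel amounts to square-rooting L1-normalized histograms, so that a dot product of mapped vectors equals the Hellinger kernel of the original histograms (a small illustrative helper):

```python
import numpy as np

def hellinger_map(hist):
    """Explicit feature map for the Hellinger kernel: L1-normalize the
    histogram, then take elementwise square roots. A linear kernel on the
    output equals the Hellinger kernel on the normalized input."""
    hist = np.asarray(hist, dtype=float)
    norm = hist.sum(axis=-1, keepdims=True)
    norm[norm == 0] = 1.0                # leave all-zero histograms untouched
    return np.sqrt(hist / norm)
```

Mapped vectors have unit Euclidean norm, which makes them directly usable with the linear classifiers of Sec. 2.3.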

## 6 Action Labeling Experiments

Experimental Setup.
To carry out the action labeling experiments, we split 90% of the dataset into three parts (Fig. 5) that we denote *Sup* (for supervised), *Eval* (for evaluation) and *Val* (for validation).
*Sup* is the part of data that has time-stamped annotations, and it is used only in the semi-supervised setting described in Sec. 4.1.
*Val* is the set of examples on which we automatically adjust the hyper-parameters of our method.
In practice we fix the *Val* set to contain 5% of the dataset.
This set is provided with fully time-stamped annotations, but these are not used during the cost optimization.
None of the reported results are computed on this set.
We evaluate the quality of the assignment on the *Eval* set.

Note that we carry out the Frank-Wolfe optimization on the union of all three sets.
The annotations from the *Sup* set are used to constrain $Z$ in the semi-supervised setup, while those from the *Val* set are only used for choosing our hyper-parameters.
The supervisory information used over the rest of the data is the ordered annotations without time stamps.
Please also keep in mind that there are no “training” and “testing” phases *per se* in this primary assignment task.
All our experiments are conducted over five random splits of the data.
This allows us to present results with error bars.

Performance Measure. Several measures may be used to evaluate the performance of discriminative clustering algorithms. Some authors use the output classifier to perform a classification task [5, 35] or use the output partition of the data as a solution to a segmentation task [16]. Yet another way to evaluate is to use a loss between partitions [12], as in [2]. Note that because of the temporal constraints, for every clip we have a set of corresponding (prediction, ground-truth) pairs. We have thus chosen to measure the assignment quality, for every ground-truth action interval and the corresponding prediction, by the overlap between the two. This measure is similar to the standard Jaccard measure used for comparing sets [14]; with a slight abuse of notation, we therefore refer to it as the Jaccard measure. This performance measure is well suited to our problem since it respects the following properties: (1) it is high if the predicted action interval is included in the ground-truth annotation, (2) it is low if the prediction is bigger than the annotation, (3) it is lowest if the prediction lies outside the annotation, (4) it does not take into account predictions of the background class. The score is averaged across all ground-truth intervals. The perfect score of 1 is achieved when all actions are aligned with the correct annotations; accurate temporal segmentation is not required as long as the predicted labels lie within the ground-truth intervals.

Baselines. We compare our method to the three following baselines. All these are trained using the same features as the ones used for our method. For all baselines, we round the obtained solution using the scheme described in Sec. 3.3.

*Normalized Cuts (NCUT).*
We compare our method to normalized cuts (or spectral clustering) [29]. We build a similarity matrix whose entries measure both the temporal proximity and the appearance similarity of pairs of time intervals, the latter based on the Chi-squared distance between their features, and define $L$ as the corresponding symmetric normalized Laplacian. We then minimize the quadratic cut cost $\operatorname{Tr}(Z^\top L Z)$; this cost is convex ($L$ is positive semidefinite), and we can reuse the Frank-Wolfe optimization scheme developed for our model. Intuitively, this baseline searches for a partition of the video such that time intervals falling into the same segment have close-by features according to the Chi-squared distance.

*Bojanowski et al. [4].*
We also consider our own implementation of the weakly supervised approach proposed in [4].
We replace our ordering constraints by the corresponding “at least one” constraints: when an action is mentioned in the annotation sequence, we require that it appear at least once in the clip.
This corresponds to a set of linear constraints on $Z$.
We adapt this technique to work on our dataset: the available implementation requires storing a square matrix of the size of the problem, so we instead minimize the convex objective of [4] using the Frank-Wolfe algorithm, which is more scalable.

*Supervised Square Loss (SL).*
For completeness, we also compare our method to a fully supervised approach.
We train a classifier using the square loss over the annotated *Sup* set and score all time intervals in *Eval*.
We use the square loss since it is used in our method and all other baselines.

Weakly Supervised Setup. In this setup, all baselines except (SL) have access only to weak supervision in the form of ordering constraints. Figure 6 (left) illustrates the quality of the predicted assignments and compares our method to the baselines. Our method performs better than all other weakly supervised methods. Both the Bojanowski et al. and NCUT baselines have low scores in the weakly supervised setting. This shows the advantage of exploiting temporal constraints as a weak supervisory signal. The fully supervised baseline (blue) eventually recovers a better alignment than our method as the fraction of fully annotated data increases. This occurs (when the red line crosses the blue line) at the 25% mark, where the supervised data makes up for the lack of ordering constraints. Fully time-stamped annotated data is expensive to produce, whereas movie scripts are often easy to get. Manually annotated videos are thus not always necessary, since good performance is reached using weak supervision alone. Figure 7 shows the results of all weakly supervised methods for all classes. We outperform the baselines on the most frequent classes (such as “Open Door”, “Sit Down” and “Stand Up”).

Semi-supervised Setup.
Figure 6 (right) illustrates the performance of our model when some supervised data is available.
The fraction of the supervised data is given on the x-axis.
First, note that our semi-supervised method (red) is always and consistently (cf. error bars) above the square-loss baseline (blue).
Of course, during the optimization, our method has access to weak annotations over the whole dataset, and to full annotations on the *Sup* set whereas the SL baseline has access only to the latter.
This demonstrates the benefits of exploiting temporal constraints during learning.
The semi-supervised Bojanowski et al. baseline (orange) has low performance, but it improves with the amount of full supervision provided.

## 7 Classification Experiments

The experiments in the previous section evaluate the quality of the recovered assignment matrices $Z$. Here we evaluate instead the quality of the recovered classifiers on a held-out test set for an action classification task. We recover these classifiers as explained later in this section. We can treat them as $K$ independent one-versus-rest classifiers and use them to score the samples from the test set. We evaluate this performance by computing per-class precision and recall, and report the corresponding average precision for each class.

Experimental setup. The models are trained following the procedure described in the previous section. To test the performance of our classifiers, we use the held out set of clips. This set is made of 10% of the clips from the original data. The clips from this set are identical in nature to the ones used to train the models. We also perform multiple random splits to report results with error bars.

Recovering the classifiers. One of the nice features of our method is that we can estimate the implicit classifiers corresponding to our solution $Z^*$. We do so using the expression from Eq. (7).

Baselines. We compare the classifiers obtained by our method to those obtained by the Bojanowski et al. baseline [4]. We also compare them to the classifiers learned using the (SL) baseline.

Weakly Supervised Setup.
Classification results are presented in Fig. 8 (left).
We observe a behavior similar to the action labeling experiment.
However, the supervised classifier (SL) trained on the *Sup* set using the square loss (blue) always performs worse than our model (red).
This can be explained by the fact that the proposed model makes use of more data.
Even though our model only has access to weak annotations, these prove sufficient to train good classifiers.
The weakly supervised method from Bojanowski et al. (orange) performs worst, exactly as in the previous task.
This can be explained by the fact that this method has access to neither full supervision nor ordering constraints.

Semi-supervised Setup. In the semi-supervised setting (Fig. 8 (right)), our method (red) performs better than the supervised SL baseline (blue). The action model we recover is consistently better than the one obtained using only the fully supervised data. Our method is thus well suited to semi-supervised learning. The Bojanowski et al. baseline (orange) improves as the fraction of annotated examples increases. Nonetheless, making use of ordering constraints, as our method does, significantly improves over the simple linear inequalities (“at least one” constraints) formulated in [4].

Acknowledgements. This work was supported by the European integrated project AXES, the MSR-INRIA laboratory, EIT-ICT labs, a Google Research Award, a PhD fellowship from the EADS Foundation, the Institut Universitaire de France and ERC grants ALLEGRO, VideoWorld, Activia and Sierra.

## References

- [1] Amer, M.R., Todorovic, S., Fern, A., Zhu, S.C.: Monte carlo tree search for scheduling activity recognition. In: ICCV (2013)
- [2] Bach, F., Harchaoui, Z.: DIFFRAC: a discriminative and flexible framework for clustering. In: NIPS (2007)
- [3] Bertsekas, D.: Nonlinear Programming. Athena Scientific (1999)
- [4] Bojanowski, P., Bach, F., Laptev, I., Ponce, J., Schmid, C., Sivic, J.: Finding Actors and Actions in Movies. In: ICCV (2013)
- [5] Duchenne, O., Laptev, I., Sivic, J., Bach, F., Ponce, J.: Automatic annotation of human actions in video. In: ICCV (2009)
- [6] Frank, M., Wolfe, P.: An algorithm for quadratic programming. Naval Research Logistics Quarterly (1956)
- [7] Gold, B., Morgan, N., Ellis, D.: Speech and Audio Signal Processing - Processing and Perception of Speech and Music, Second Edition. Wiley (2011)
- [8] Guo, Y., Schuurmans, D.: Convex Relaxations of Latent Variable Training. In: NIPS (2007)
- [9] Harchaoui, Z.: Conditional gradient algorithms for machine learning. In: NIPS Workshop (2012)
- [10] Hastie, T., Tibshirani, R., Friedman, J.: The elements of statistical learning: data mining, inference and prediction. Springer (2009)
- [11] Hongeng, S., Nevatia, R.: Large-scale event detection using semi-hidden markov models. In: ICCV (2003)
- [12] Hubert, L., Arabie, P.: Comparing partitions. Journal of classification (1985)
- [13] Ivanov, Y.A., Bobick, A.F.: Recognition of visual activities and interactions by stochastic parsing. PAMI (2000)
- [14] Jaccard, P.: The distribution of the flora in the alpine zone. New Phytologist (1912)
- [15] Jaggi, M.: Revisiting Frank-Wolfe: Projection-free sparse convex optimization. In: ICML (2013)
- [16] Joulin, A., Bach, F., Ponce, J.: Discriminative Clustering for Image Co-segmentation. In: CVPR (2010)
- [17] Joulin, A., Bach, F., Ponce, J.: Multi-class cosegmentation. In: CVPR (2012)
- [18] Khamis, S., Morariu, V.I., Davis, L.S.: Combining per-frame and per-track cues for multi-person action recognition. In: ECCV (2012)
- [19] Kwak, S., Han, B., Han, J.H.: Scenario-based video event recognition by constraint flow. In: CVPR (2011)
- [20] Laptev, I., Marszalek, M., Schmid, C., Rozenfeld, B.: Learning realistic human actions from movies. In: CVPR (2008)
- [21] Laxton, B., Lim, J., Kriegman, D.J.: Leveraging temporal, contextual and ordering constraints for recognizing complex activities in video. In: CVPR (2007)
- [22] Liu, J., Kuipers, B., Savarese, S.: Recognizing human actions by attributes. In: CVPR (2011)
- [23] Nguyen, M.H., Lan, Z.Z., la Torre, F.D.: Joint segmentation and classification of human actions in video. In: CVPR (2011)
- [24] Niebles, J.C., Chen, C.W., Li, F.F.: Modeling Temporal Structure of Decomposable Motion Segments for Activity Classification. In: ECCV (2010)
- [25] Rabiner, L.R., Juang, B.H.: Fundamentals of speech recognition. Prentice Hall (1993)
- [26] Rohrbach, M., Regneri, M., Andriluka, M., Amin, S., Pinkal, M., Schiele, B.: Script Data for Attribute-Based Recognition of Composite Activities. In: ECCV (2012)
- [27] Ryoo, M.S., Aggarwal, J.K.: Recognition of composite human activities through context-free grammar based representation. In: CVPR (2006)
- [28] Sadanand, S., Corso, J.J.: Action bank: A high-level representation of activity in video. In: CVPR (2012)
- [29] Shi, J., Malik, J.: Normalized Cuts and Image Segmentation. In: CVPR (1997)
- [30] Sivic, J., Everingham, M., Zisserman, A.: ”Who are you?” - Learning person specific classifiers from video. In: CVPR (2009)
- [31] Tang, K., Fei-Fei, L., Koller, D.: Learning latent temporal structure for complex event detection. In: CVPR (2012)
- [32] Vu, V.T., Bremond, F., Thonnat, M.: Automatic video interpretation: A novel algorithm for temporal scenario recognition. In: IJCAI (2003)
- [33] Wang, H., Kläser, A., Schmid, C., Liu, C.L.: Action recognition by dense trajectories. In: CVPR (2011)
- [34] Wang, H., Schmid, C.: Action Recognition with Improved Trajectories. In: ICCV (2013)
- [35] Xu, L., Neufeld, J., Larson, B., Schuurmans, D.: Maximum Margin Clustering. In: NIPS (2004)
