Concurrence-Aware Long Short-Term Sub-Memories for Person-Person Action Recognition

06/03/2017 · Xiangbo Shu et al.

Recently, Long Short-Term Memory (LSTM) has become a popular choice for modeling individual dynamics in single-person action recognition, due to its ability to model temporal information over various ranges of dynamic contexts. However, existing RNN models capture the temporal dynamics of person-person interactions only by naively combining the activity dynamics of individuals or by modeling the interacting people as a whole, which neglects how the inter-related dynamics of person-person interactions change over time. To this end, we propose a novel Concurrence-Aware Long Short-Term Sub-Memories (Co-LSTSM) to model the long-term inter-related dynamics between two interacting people on the bounding boxes covering them. Specifically, for each frame, two sub-memory units store individual motion information, while a concurrent LSTM unit selectively integrates and stores the inter-related motion information between the interacting people from these two sub-memory units via a new co-memory cell. Experimental results on the BIT and UT datasets show the superiority of Co-LSTSM over the state-of-the-art methods.


1 Introduction

Figure 1: Illustration of the proposed Co-LSTSM. For each frame, two sub-memory units are developed to store individual motion information, while a concurrent LSTM unit is developed to selectively integrate and store the inter-related motion information between interacting people from the two sub-memory units via a new co-memory cell. Stacked concurrent LSTM units are applied recurrently to capture the inter-related dynamics between interacting people over time.

Person-person interaction (e.g., handshake, hug), as a basic unit of human activity, is attracting much attention in the computer vision and pattern recognition communities [17, 16, 4, 31]. During a person-person interaction, there are usually two individual motions from the two interacting people, some of which are concurrently inter-related with each other (e.g., both people stretch out their hands in a hug interaction). It has been shown that the concurrently inter-related motions between interacting people are discriminative for recognizing person-person interactions [4, 14]. In most cases, the concurrently inter-related motions between the two interacting people are either 1) quite symmetrically similar to each other (e.g., two people shaking hands); or 2) not quite similar but strongly coupled with each other (e.g., person A kicks person B, while person B retreats).

There are mainly two types of solutions for person-person interaction recognition. One solution (e.g., [17, 16, 4, 40]) is to extract individual motion descriptors (e.g., spatio-temporal interest points [7]) from the interacting people, and then predict the class label of an interaction by inferring the coherence between the two individual motions. However, this solution regards a person-person interaction as two single-person actions, which ignores some inter-related motion information and brings in some irrelevant individual motion information. The other solution is to extract motion descriptors on the interactive regions and then train an interaction recognition model [14]. However, it is hard to locate the interactive region before the close interaction happens.

Usually, the difference between person-person interactions (e.g., a boxing interaction and a pat interaction) is subtle [29, 4, 26], which makes person-person interaction recognition challenging. Recently, due to their powerful ability to capture sequential motion information, Recurrent Neural Networks (RNN) [36], especially Long Short-Term Memory (LSTM) [11], have proven successful on human action recognition tasks [8, 9, 34, 23, 12]. To address the problem of person-person interaction recognition, we aim to explore the long-term inter-related dynamics between two interacting people by leveraging the LSTM model. However, existing LSTM models, which only model individual dynamics independently, do not consider the concurrently inter-related dynamics between interacting people. A naive way is to either 1) merge the individual actions at the preprocessing stage [13] (e.g., consider the interacting people as a whole); or 2) utilize two LSTM networks to model the individual dynamics of each interacting person respectively, and then fuse the output sequences of the two LSTM networks [12]. However, this neglects the inter-related dynamics of how person-person interactions change over time.

To this end, we propose a novel Concurrence-Aware Long Short-Term Sub-Memories (Co-LSTSM) for person-person interaction recognition, which models the long-term inter-related dynamics between two interacting people on the bounding boxes covering them. It aggregates the inter-related memories from the individual memories of the interacting people over time, as shown in Figure 1. Specifically, we present a novel concurrent LSTM unit consisting of two sub-memory units that store the individual motion information on the bounding boxes covering the people in each video frame. Following these two sub-memory units, a new co-memory cell selectively integrates and stores the memories from the two sub-memory units to reveal the concurrently inter-related motion information between the interacting people. Overall, the two interacting people in each frame are jointly modeled by a concurrent LSTM unit, which outputs the concurrently inter-related hidden representation between the interacting people rather than individual hidden representations for each person. The stacked concurrent LSTM units are applied recurrently over the time sequence to capture the concurrently inter-related dynamics between the two interacting people over time. Extensive experiments on widely-used benchmarks clearly show the superior performance of the proposed Co-LSTSM compared with the state-of-the-art methods and several baselines.

Our main contributions in this work are two-fold: (1) We propose a novel Concurrence-Aware Long Short-Term Sub-Memories (Co-LSTSM) to effectively address the problem of person-person interaction recognition. (2) To our best knowledge, our work is the first attempt to model the concurrent long-term inter-related dynamics between multiple moving objects over time with a variant of LSTM.

2 Related Work

2.1 Human Action Recognition

Human activity recognition aims to automatically understand the activities performed by people [4, 2, 25], including group-person interaction recognition (e.g., walking, queueing) [21, 28, 5, 33], person-object interaction recognition (e.g., some people are eating, while others are riding bikes) [1, 2], and person-person interaction recognition [17, 16, 4, 31].

For group-person interaction recognition, one solution used in [21, 28] is to exploit the spatial distribution of human activities and present spatio-temporal descriptors that capture the spatial distribution of people. The other solution used in [5, 24, 33] is to track all body parts in a video and then learn holistic representations to estimate their collective activities. In particular, instead of treating the two problems (i.e., tracking multiple people and estimating their collective activities) separately, Choi et al. [5] presented a unified framework to simultaneously track people and estimate their collective activities. Besides, Lan et al. [20, 21] proposed to recognize group-person activities by jointly capturing the group activity, the individual human actions, and the interactions among them.

For person-object interaction recognition, there are usually a number of concurrent individual activities (e.g., some people are riding a bike) and group activities (e.g., some people are walking together). To address this challenge, Amer et al. [1] proposed a spatio-temporal AND-OR graph to jointly model the activity parts, person-person spatio-temporal relations, and person-object context, as well as enable multi-target tracking. Subsequently, Amer et al. [2] used a three-layered AND-OR graph to jointly model group activities, individual actions, and participating objects. A key point is that these methods require a multitude of detectors at different levels.

For person-person interaction recognition, some representative works [17, 16, 40] used interactive phrases as latent mid-level features to infer the person-person interaction from the individual actions. Interactive phrases, which incorporate rich human knowledge, provide an effective way to represent person-person interactions. However, the difference between some interactions (e.g., boxing and pat) is too subtle to be discriminated only by interactive phrases. Besides, some person-person interactions are complex and cannot be described well by a limited set of interactive phrases. Recently, Kong et al. [14] developed a patch-aware latent SVM to recognize interactions by inferring the closely interactive regions between interacting people. However, it is hard to capture the interactive regions before the close interaction occurs. Moreover, Chang et al. [4] proposed to extract features of each interacting person and then learn an interaction matrix between the interacting people.

2.2 RNN-based Action Recognition

As neural networks for handling sequential data of variable length, RNNs, especially LSTM, have been successfully applied to action recognition [41, 8, 9, 34, 23, 12]. Many RNN-based action recognition methods embed an LSTM layer into Convolutional Neural Networks (CNN) [8, 37, 13]. For example, Wu et al. [37] proposed to train three types of CNNs equipped with LSTM to model the spatial, short-term motion and audio clues corresponding to the inputs of video frames, stacked optical flows, and audio spectrograms, respectively. Besides, some skeleton-based action recognition methods utilize RNNs to model the long-term contextual information of all skeletons. For example, Du et al. [9] proposed a multilayer RNN framework that feeds the five body parts of the human skeleton into five subnets. As the number of layers increases, the representations output by the subnets are hierarchically fused as inputs to the higher layers.

Some works design specific RNN architectures for different action recognition tasks [12, 41, 30]. For example, in order to capture the co-occurrences of discriminative joints, Zhu et al. [41] added a mixed-norm regularization penalty to deep LSTM networks. Moreover, the authors proposed an internal dropout technique that operates on the gates, cells, and output responses of the LSTM nodes. To emphasize the temporal change of motion information between two consecutive frames, Veeriah et al. [34] proposed a Differential RNN architecture equipped with the Derivative of States between the LSTM gates. Recently, Shahroudy et al. [30] proposed a Part-aware LSTM that separates the memory cell into several sub-cells corresponding to different body parts and explicitly models the dependencies over the spatial and temporal domains concurrently. Likewise, Liu et al. [23] proposed a similar LSTM architecture that extends traditional LSTM-based learning to the temporal and spatial domains simultaneously.

Unlike existing RNN-based action recognition works, we consider the more challenging action recognition scenario of person-person interactions. To capture the interactive motion information rather than the individual motion information, the proposed Co-LSTSM explicitly models the concurrently inter-related dynamics between interacting people. The most related works [13, 12] either combine the individual dynamics of each person or treat the two interacting people as a whole. To our best knowledge, our work is the first to model the concurrently long-term inter-related dynamics between interacting people over time with an LSTM-based model.

3 Preliminary: RNN for Individual Action

Given an input video clip of length $T$, an RNN [36] models its dynamics through a sequence of hidden states $\mathbf{h}_t \in \mathbb{R}^{d}$ with $d$ hidden units, which can be mapped to an output sequence $\mathbf{z}_t \in \mathbb{R}^{C}$ ($C$ is the number of action classes), i.e.,

$\mathbf{h}_t = \tanh(\mathbf{W}_{xh}\mathbf{x}_t + \mathbf{W}_{hh}\mathbf{h}_{t-1} + \mathbf{b}_h),$  (1)
$\mathbf{z}_t = \mathbf{W}_{hz}\mathbf{h}_t + \mathbf{b}_z,$  (2)

where $\mathbf{x}_t$ denotes the input feature at time step $t$, $\tanh(\cdot)$ is the hyperbolic tangent function, $\mathbf{W}_{xh}$, $\mathbf{W}_{hh}$ and $\mathbf{W}_{hz}$ are the weight matrices, and $\mathbf{b}_h$, $\mathbf{b}_z$ are the bias vectors. Finally, the output $\mathbf{y}_t$ at time step $t$ is obtained by a softmax function, i.e., $\mathbf{y}_t = \mathrm{softmax}(\mathbf{z}_t)$, where the $c$-th element encodes the confidence score of the $c$-th action class.

Due to the exponential decay of vanilla RNNs in retaining the context information of video frames, Long Short-Term Memory (LSTM) [11], a variant of RNN, provides a solution by allowing the network to learn when to forget previous hidden states and when to update hidden states given new information [8].

Usually, each LSTM unit contains a memory cell (denoted by $\mathbf{c}_t$) that stores the memory of the input sequence up to time step $t$. In order to retain the memory of the motion information over a long time, three types of gates (i.e., the input gate $\mathbf{i}_t$, forget gate $\mathbf{f}_t$ and output gate $\mathbf{o}_t$) are incorporated into the LSTM unit to control what information enters and leaves the memory cell over time [11], as follows,

$\mathbf{i}_t = \sigma(\mathbf{W}_{xi}\mathbf{x}_t + \mathbf{W}_{hi}\mathbf{h}_{t-1} + \mathbf{b}_i),$  (3)
$\mathbf{f}_t = \sigma(\mathbf{W}_{xf}\mathbf{x}_t + \mathbf{W}_{hf}\mathbf{h}_{t-1} + \mathbf{b}_f),$  (4)
$\mathbf{o}_t = \sigma(\mathbf{W}_{xo}\mathbf{x}_t + \mathbf{W}_{ho}\mathbf{h}_{t-1} + \mathbf{b}_o),$  (5)

where $\sigma(\cdot)$ is the sigmoid function, $\mathbf{W}_{x*}$ and $\mathbf{W}_{h*}$ are the weight matrices, and $\mathbf{b}_*$ is the bias vector. In addition to the three gates, the memory cell can be expressed as

$\mathbf{c}_t = \mathbf{f}_t \odot \mathbf{c}_{t-1} + \mathbf{i}_t \odot \mathbf{g}_t,$  (6)

where $\mathbf{g}_t = \tanh(\mathbf{W}_{xg}\mathbf{x}_t + \mathbf{W}_{hg}\mathbf{h}_{t-1} + \mathbf{b}_g)$, and $\odot$ denotes the element-wise product. Finally, the hidden state $\mathbf{h}_t$ at time step $t$ can be expressed as

$\mathbf{h}_t = \mathbf{o}_t \odot \tanh(\mathbf{c}_t).$  (7)
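To make the gate interplay of Eqs. (3)-(7) concrete, the following minimal PyTorch sketch implements a single LSTM step; the module and weight names are illustrative stand-ins, not the implementation used later in the paper.

```python
import torch
import torch.nn as nn

class LSTMCellSketch(nn.Module):
    """Single LSTM step following Eqs. (3)-(7): gates, memory cell, hidden state."""
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        # One linear map per gate; each sees the current input x_t and the past hidden state h_{t-1}.
        self.gate_i = nn.Linear(input_dim + hidden_dim, hidden_dim)  # input gate, Eq. (3)
        self.gate_f = nn.Linear(input_dim + hidden_dim, hidden_dim)  # forget gate, Eq. (4)
        self.gate_o = nn.Linear(input_dim + hidden_dim, hidden_dim)  # output gate, Eq. (5)
        self.gate_g = nn.Linear(input_dim + hidden_dim, hidden_dim)  # candidate memory g_t

    def forward(self, x_t, h_prev, c_prev):
        z = torch.cat([x_t, h_prev], dim=1)
        i_t = torch.sigmoid(self.gate_i(z))
        f_t = torch.sigmoid(self.gate_f(z))
        o_t = torch.sigmoid(self.gate_o(z))
        g_t = torch.tanh(self.gate_g(z))
        c_t = f_t * c_prev + i_t * g_t     # Eq. (6): update the memory cell
        h_t = o_t * torch.tanh(c_t)        # Eq. (7): expose the gated memory as the hidden state
        return h_t, c_t
```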

4 The Proposed Co-LSTSM

4.1 The Architecture

For person-person interaction recognition, each video frame contains two concurrent individual actions from the interacting people, some of which are inter-related with each other. Existing LSTM models targeting single-person actions cannot handle person-person interactions well. As mentioned before, we can roughly treat the two interacting people as a whole before training the LSTM network. However, this solution brings in some individual-specific motion information. Alternatively, we can model the individual dynamics of each person with two LSTM networks respectively, and then naively combine (e.g., concatenate or pool) the output sequences of the two LSTM networks into the final representation. However, this strategy loses some concurrently inter-related motion information between the interacting people.

Figure 2: Illustration of a concurrent LSTM unit in the proposed Co-LSTSM. For the concurrent inputs $\mathbf{x}_t^{(1)}$ and $\mathbf{x}_t^{(2)}$ at time step $t$, a concurrent LSTM unit consists of two person-specific sub-memory units, a common output gate $\mathbf{o}_t$, two new cell gates (i.e., $\mathbf{q}_t^{(1)}$ and $\mathbf{q}_t^{(2)}$) and a new co-memory cell $\mathbf{c}_t$. The two sub-memory units include their respective input gates (i.e., $\mathbf{i}_t^{(1)}$ and $\mathbf{i}_t^{(2)}$), forget gates (i.e., $\mathbf{f}_t^{(1)}$ and $\mathbf{f}_t^{(2)}$), and sub-memory cells (i.e., $\mathbf{c}_t^{(1)}$ and $\mathbf{c}_t^{(2)}$). In particular, the two sub-memory cells are jointly fed into the co-memory cell $\mathbf{c}_t$, which is followed by the hidden representation $\mathbf{h}_t$.

To this end, we propose the Concurrence-Aware Long Short-Term Sub-Memories (Co-LSTSM) to capture the concurrently inter-related dynamics between interacting people rather than the individual dynamics of each person. Our key idea is to develop two sub-memory units that store the individual motion information of each person respectively, and a concurrent LSTM unit that selectively integrates and stores the concurrently inter-related motion information between the interacting people. Figure 2 illustrates the architecture of a concurrent LSTM unit of the proposed Co-LSTSM. Overall, the concurrent LSTM unit at each time step consists of two person-specific sub-memory units, two cell gates, a common output gate and a new co-memory cell. Specifically, the two sub-memory units include their respective input gates, forget gates, and memory cells, while the co-memory cell between the two sub-memory units selectively integrates the individual motion information from the two sub-memory units and memorizes the inter-related motion information.

Formally, $\{\mathbf{x}_t^{(1)}\}_{t=1}^{T}$ and $\{\mathbf{x}_t^{(2)}\}_{t=1}^{T}$ denote the feature sequences of the two interacting people, respectively; $\mathbf{i}_t^{(1)}$, $\mathbf{f}_t^{(1)}$ and $\mathbf{c}_t^{(1)}$ denote the input gate, forget gate and sub-memory cell in sub-memory unit 1 at time step $t$, respectively; $\mathbf{i}_t^{(2)}$, $\mathbf{f}_t^{(2)}$ and $\mathbf{c}_t^{(2)}$ denote the input gate, forget gate and sub-memory cell in sub-memory unit 2 at time step $t$, respectively. For $k \in \{1, 2\}$, they can be expressed by the following equations,

$\mathbf{i}_t^{(k)} = \sigma(\mathbf{W}_{xi}^{(k)}\mathbf{x}_t^{(k)} + \mathbf{W}_{hi}^{(k)}\mathbf{h}_{t-1} + \mathbf{b}_i^{(k)}),$  (8)
$\mathbf{f}_t^{(k)} = \sigma(\mathbf{W}_{xf}^{(k)}\mathbf{x}_t^{(k)} + \mathbf{W}_{hf}^{(k)}\mathbf{h}_{t-1} + \mathbf{b}_f^{(k)}),$  (9)
$\mathbf{g}_t^{(k)} = \tanh(\mathbf{W}_{xg}^{(k)}\mathbf{x}_t^{(k)} + \mathbf{W}_{hg}^{(k)}\mathbf{h}_{t-1} + \mathbf{b}_g^{(k)}),$  (10)
$\mathbf{c}_t^{(k)} = \mathbf{f}_t^{(k)} \odot \mathbf{c}_{t-1}^{(k)} + \mathbf{i}_t^{(k)} \odot \mathbf{g}_t^{(k)},$  (11)

where $\mathbf{W}_{x*}^{(k)}$ and $\mathbf{W}_{h*}^{(k)}$ are the weight matrices, and $\mathbf{b}_*^{(k)}$ is the bias vector.

Two cell gates $\mathbf{q}_t^{(1)}$ and $\mathbf{q}_t^{(2)}$, following sub-memory unit 1 and sub-memory unit 2 respectively, aim to control what memories from the two sub-memory units enter and leave at each time step. Unlike the traditional gates, the cell gate $\mathbf{q}_t^{(k)}$ ($k \in \{1, 2\}$) is activated by a nonlinear function of the two inputs $\mathbf{x}_t^{(1)}$ and $\mathbf{x}_t^{(2)}$ and the past hidden state $\mathbf{h}_{t-1}$, i.e.,

$\mathbf{q}_t^{(k)} = \sigma(\mathbf{W}_{x_1 q}^{(k)}\mathbf{x}_t^{(1)} + \mathbf{W}_{x_2 q}^{(k)}\mathbf{x}_t^{(2)} + \mathbf{W}_{hq}^{(k)}\mathbf{h}_{t-1} + \mathbf{b}_q^{(k)}),$  (12)

where $\mathbf{W}_{x_1 q}^{(k)}$, $\mathbf{W}_{x_2 q}^{(k)}$ and $\mathbf{W}_{hq}^{(k)}$ are the weight matrices, and $\mathbf{b}_q^{(k)}$ is the bias vector. Based on the consistent interactions between the two interacting people, these two cell gates allow more concurrently inter-related motion information between the interacting people to enter the co-memory cell and contribute to one common hidden state. In this work, the co-memory cell $\mathbf{c}_t$ can be expressed as

$\mathbf{c}_t = \mathbf{q}_t^{(1)} \odot \mathbf{c}_t^{(1)} + \mathbf{q}_t^{(2)} \odot \mathbf{c}_t^{(2)}.$  (13)

In the concurrent LSTM unit, the two sub-memory units share a common output gate $\mathbf{o}_t$. The activation of the output gate is similar to that of the cell gates, i.e.,

$\mathbf{o}_t = \sigma(\mathbf{W}_{x_1 o}\mathbf{x}_t^{(1)} + \mathbf{W}_{x_2 o}\mathbf{x}_t^{(2)} + \mathbf{W}_{ho}\mathbf{h}_{t-1} + \mathbf{b}_o).$  (14)

Finally, the hidden state $\mathbf{h}_t$ at time step $t$ can be expressed as

$\mathbf{h}_t = \mathbf{o}_t \odot \tanh(\mathbf{c}_t).$  (15)

Briefly, at time step $t$, the proposed Co-LSTSM model proceeds in the following order (a minimal code sketch of one concurrent LSTM unit follows the list).

  • Compute the input gates $\mathbf{i}_t^{(1)}$, $\mathbf{i}_t^{(2)}$ and forget gates $\mathbf{f}_t^{(1)}$, $\mathbf{f}_t^{(2)}$ by Eq. (8) and Eq. (9), respectively;

  • Update the sub-memory cells $\mathbf{c}_t^{(1)}$, $\mathbf{c}_t^{(2)}$ by Eq. (11);

  • Compute the cell gates $\mathbf{q}_t^{(1)}$, $\mathbf{q}_t^{(2)}$ by Eq. (12);

  • Compute the co-memory cell $\mathbf{c}_t$ by Eq. (13);

  • Compute the output gate $\mathbf{o}_t$ by Eq. (14);

  • Output the hidden state $\mathbf{h}_t$ by Eq. (15).
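The following PyTorch sketch implements one concurrent LSTM unit along the steps listed above. The weight layout and the exact form of the co-memory update in Eq. (13) follow our reconstruction of the notation, so treat this as an illustrative sketch of the described architecture rather than the authors' code.

```python
import torch
import torch.nn as nn

class CoLSTSMCellSketch(nn.Module):
    """One concurrent LSTM unit: two sub-memory units plus a co-memory cell (Eqs. (8)-(15))."""
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        # Per-person sub-memory unit k: input gate, forget gate, candidate memory (Eqs. (8)-(10)).
        self.sub_gates = nn.ModuleList([
            nn.ModuleDict({
                "i": nn.Linear(input_dim + hidden_dim, hidden_dim),
                "f": nn.Linear(input_dim + hidden_dim, hidden_dim),
                "g": nn.Linear(input_dim + hidden_dim, hidden_dim),
            }) for _ in range(2)
        ])
        # Cell gates q^(k) and the shared output gate see both inputs and h_{t-1} (Eqs. (12), (14)).
        self.cell_gates = nn.ModuleList([
            nn.Linear(2 * input_dim + hidden_dim, hidden_dim) for _ in range(2)
        ])
        self.out_gate = nn.Linear(2 * input_dim + hidden_dim, hidden_dim)

    def forward(self, x1, x2, h_prev, c1_prev, c2_prev):
        joint = torch.cat([x1, x2, h_prev], dim=1)
        c_subs = []
        for k, (x_k, c_prev_k) in enumerate([(x1, c1_prev), (x2, c2_prev)]):
            z = torch.cat([x_k, h_prev], dim=1)
            i_k = torch.sigmoid(self.sub_gates[k]["i"](z))   # Eq. (8)
            f_k = torch.sigmoid(self.sub_gates[k]["f"](z))   # Eq. (9)
            g_k = torch.tanh(self.sub_gates[k]["g"](z))      # Eq. (10)
            c_subs.append(f_k * c_prev_k + i_k * g_k)        # Eq. (11): sub-memory cell
        q1 = torch.sigmoid(self.cell_gates[0](joint))        # Eq. (12): cell gate for person 1
        q2 = torch.sigmoid(self.cell_gates[1](joint))        #           and for person 2
        c_t = q1 * c_subs[0] + q2 * c_subs[1]                # Eq. (13): co-memory cell (our reading)
        o_t = torch.sigmoid(self.out_gate(joint))            # Eq. (14): shared output gate
        h_t = o_t * torch.tanh(c_t)                          # Eq. (15): common hidden state
        return h_t, c_subs[0], c_subs[1]
```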

4.2 Learning Algorithm

We employ a loss function to learn the model parameters of Co-LSTSM by measuring the deviation between the target class and the prediction at time step $t$. The loss can be minimized by the Back Propagation Through Time (BPTT) algorithm [6], which unfolds the Co-LSTSM model over several time steps and then runs the back propagation algorithm to train the model. Specifically, LSTM training usually adopts truncated BPTT: once the back-propagated error leaves the LSTM unit or its gates, it is not allowed to enter the LSTM unit again. Here, we likewise do not allow the errors to re-enter the concurrent LSTM unit once they leave the co-memory cell.
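As an illustration of BPTT over the unrolled Co-LSTSM, the sketch below performs one training update on a single clip. It assumes a per-step cross-entropy against the clip-level label and a hypothetical linear `classifier` readout on the hidden state; these choices are ours, not taken from the paper.

```python
import torch
import torch.nn as nn

def train_step(model, classifier, optimizer, x1_seq, x2_seq, label, hidden_dim):
    """One BPTT update on a single clip: unroll over T frames, accumulate a per-step loss.

    x1_seq, x2_seq: (T, 1, input_dim) fc6-like features of the two people; label: (1,) class id.
    """
    T = x1_seq.size(0)
    h = torch.zeros(1, hidden_dim)
    c1 = torch.zeros(1, hidden_dim)
    c2 = torch.zeros(1, hidden_dim)
    criterion = nn.CrossEntropyLoss()
    loss = 0.0
    for t in range(T):
        h, c1, c2 = model(x1_seq[t], x2_seq[t], h, c1, c2)
        logits = classifier(h)                  # class scores at time step t
        loss = loss + criterion(logits, label)  # deviation between target class and prediction at step t
    avg_loss = loss / T
    optimizer.zero_grad()
    avg_loss.backward()   # BPTT: gradients flow back through the whole unrolled clip
    optimizer.step()
    return avg_loss.item()
```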

5 Experiments

5.1 Dataset

We conduct experiments to evaluate the performance of the proposed Co-LSTSM by comparing it with state-of-the-art methods and several baselines on the following two widely-used benchmarks.

BIT dataset [16]. It consists of eight classes of human interactions, i.e., bow, boxing, handshake, high-five, hug, kick, pat, and push. Each class contains 50 videos captured in realistic scenarios with cluttered backgrounds. Some videos contain partially occluded bodies and moving objects, as well as diverse appearances, scales, poses, illuminations and viewpoints. Following the setting in [17], 34 videos per class are randomly chosen as training data and the remaining ones are used for testing.

UT dataset [29]. It consists of ten videos, each of which contains six classes of human interactions, i.e., handshake, hug, kick, point, punch and push. These videos are captured with different scales and illuminations, and the authors provide interaction labels for each frame. After extracting the frames, we obtain 60 video clips in total, namely 10 video clips per class. A leave-one-out cross validation strategy is adopted, i.e., nine video clips per class are used for training while the remaining one is used for testing. The accuracy averaged over the 10 runs is reported as the final performance.

5.2 Implementation Details

In the preprocessing step, the bounding box of each interacting person is detected and tracked over all frames by an object detector [10] and an object tracker [39]. Since previous work validated that placing the LSTM network on the fc6 layer of a CNN performs better than on fc7 [8], we employ the pre-trained AlexNet model [19] to extract fc6 features from the two bounding boxes around the two interacting people, respectively.
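As a rough illustration of this per-box feature step, the sketch below extracts a 4096-d fc6-style vector for one person crop using torchvision's AlexNet; the paper extracted fc6 features with a Caffe-trained AlexNet, so the weights and preprocessing here are stand-ins.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# torchvision's AlexNet variant; treat it as a stand-in for the Caffe AlexNet used in the paper.
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
alexnet.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def fc6_feature(person_crop):
    """person_crop: PIL image of one tracked person box -> 4096-d fc6-style feature."""
    x = preprocess(person_crop).unsqueeze(0)
    with torch.no_grad():
        f = alexnet.features(x)
        f = alexnet.avgpool(f)
        f = torch.flatten(f, 1)
        fc6 = alexnet.classifier[1](f)   # output of the first fully connected layer (fc6)
    return fc6                            # shape: (1, 4096)
```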

For the BIT and UT datasets, the number of time steps per video clip is set to 30 and 40, respectively, and the number of sub-memory cell nodes is set to 2048 on both datasets. We use the Torch toolbox and Caffe as the deep learning platforms and an NVIDIA Tesla K20 GPU to run the experiments. The momentum and decay rate are set to 0.9 and 0.95, respectively. We plot the learning curve for training the Co-LSTSM model on the BIT and UT datasets in Figure 3, from which we can see that the training of Co-LSTSM converges on both datasets.
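Tying these settings together, the following sketch (reusing the CoLSTSMCellSketch above) runs one BIT-style clip through the model, with random tensors standing in for the per-person fc6 features; the time steps, feature dimension and hidden size come from this section, everything else is illustrative.

```python
import torch

# Stand-ins for the per-frame fc6 features of the two tracked person boxes (BIT setting: T = 30).
T, fc6_dim, hidden_dim = 30, 4096, 2048
x1_seq = torch.randn(T, 1, fc6_dim)   # person 1 features over the clip
x2_seq = torch.randn(T, 1, fc6_dim)   # person 2 features over the clip

model = CoLSTSMCellSketch(fc6_dim, hidden_dim)
h = torch.zeros(1, hidden_dim)
c1 = torch.zeros(1, hidden_dim)
c2 = torch.zeros(1, hidden_dim)
for t in range(T):
    h, c1, c2 = model(x1_seq[t], x2_seq[t], h, c1, c2)
# h now holds the concurrently inter-related representation of the whole clip,
# which would be fed to a softmax classifier over the eight BIT interaction classes.
print(h.shape)  # torch.Size([1, 2048])
```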

Figure 3: Objective loss curve over training epochs.
Method bow boxing handshake high-five hug kick pat push Average
Lan et al. [21] 81.25 75.00 81.25 87.50 87.50 81.25 81.25 81.25 82.03
Liu et al. [22] 100.00 75.00 81.25 87.50 93.75 87.50 75.00 75.00 84.37
Kong et al. [16] 81.25 81.25 81.25 93.75 93.75 81.25 81.25 87.50 85.16
Kong et al. [14] 87.50 81.25 87.50 81.25 87.50 81.25 87.50 87.50 85.38
Kong et al. [17] 93.75 87.50 93.75 93.75 93.75 87.50 87.50 87.50 90.63
Donahue et al. [8] 100.00 75.00 85.00 69.75 85.00 69.75 80.00 76.50 80.13
Ke et al. [13] - - - - - - - - 85.20
Person-box CNN 100.00 75.00 62.50 56.25 93.75 68.75 56.25 62.50 71.88
One CNN+LSTM 100.00 75.00 84.50 84.50 88.00 88.00 70.00 78.00 83.50
Two CNN+LSTM 100.00 79.00 84.50 84.50 94.75 88.00 80.50 90.00 87.66
Co-LSTSM 100.00 90.50 92.50 92.50 94.75 88.00 90.50 94.25 92.88
Table 1: Recognition accuracy (%) of different methods on the BIT dataset.

In the experiments, three baselines are evaluated to illustrate the novelty of the proposed Co-LSTSM.

  • Person-box CNN. The pre-trained AlexNet model is deployed on the two bounding boxes around the two interacting people at each time step, and the two fc6 features corresponding to the two people are concatenated into a long vector. Then the concatenated features over all time steps are pooled into a single feature per video clip, which is trained and tested with a softmax classifier. This baseline illustrates the performance of deep features alone, without temporal information.

  • One CNN+LSTM. This baseline treats the two individual actions as a whole. First, the two bounding boxes corresponding to the two interacting people at each time step are merged into a bigger bounding box. Second, fc6 features are extracted by AlexNet on this "bigger" bounding box at each time step. Third, the fc6 features at each time step are used as inputs to train an LSTM model. This baseline is similar to Long-term Recurrent Convolutional Networks (LRCN) [8].

  • Two CNN+LSTM. This baseline models the individual dynamics of the two people by two LSTM networks, respectively (a minimal late-fusion sketch follows this list). First, AlexNet is deployed on the two bounding boxes around the two interacting people at each time step to extract fc6 features. Second, the fc6 features of each individual are fed into a separate LSTM network to capture the individual dynamics. Third, the softmax scores output by the two LSTM networks are fused. The idea of this baseline is analogous to that of Two-Stream Convolutional Networks [32].
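A minimal late-fusion sketch of the "Two CNN+LSTM" baseline, with hypothetical module names and random tensors standing in for the per-person fc6 sequences:

```python
import torch
import torch.nn as nn

# Each person's fc6 sequence goes through its own LSTM; per-stream softmax scores are averaged.
num_classes, fc6_dim, hidden_dim, T = 8, 4096, 2048, 30
lstm_a, lstm_b = nn.LSTM(fc6_dim, hidden_dim), nn.LSTM(fc6_dim, hidden_dim)
head_a, head_b = nn.Linear(hidden_dim, num_classes), nn.Linear(hidden_dim, num_classes)

x1 = torch.randn(T, 1, fc6_dim)   # person 1 features
x2 = torch.randn(T, 1, fc6_dim)   # person 2 features
_, (h1, _) = lstm_a(x1)           # final hidden state of stream 1
_, (h2, _) = lstm_b(x2)           # final hidden state of stream 2
scores = 0.5 * (torch.softmax(head_a(h1[-1]), dim=1) + torch.softmax(head_b(h2[-1]), dim=1))
pred = scores.argmax(dim=1)       # fused prediction for the clip
```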

Method handshake hug kick point punch push Average
Ryoo et al. [29] 75.00 87.50 62.50 50.00 75.00 75.00 70.80
Yu et al. [38] 100.00 65.00 100.00 85.00 75.00 75.00 83.33
Ryoo  [27] 80.00 90.00 90.00 80.00 90.00 80.00 85.00
Kong et al. [16] 80.00 80.00 100.00 90.00 90.00 90.00 88.33
Kong et al. [17] 100.00 90.00 100.00 80.00 90.00 90.00 91.67
Kong et al. [14] 90.00 100.00 90.00 100.00 90.00 90.00 93.33
Raptis & Sigal [26] 100.00 100.00 90.00 100.00 80.00 90.00 93.30
Shariat & Pavlovic [31] - - - - - - 91.57
Zhang et al. [40] 100.00 100.00 100.00 90.00 90.00 90.00 95.00
Donahue et al. [8] 90.00 80.00 90.00 80.00 90.00 80.00 85.00
Ke et al. [13] - - - - - - 93.33
Wang et al. [35] - - - - - - 95.00
Person-box CNN 90.00 80.00 80.00 80.00 80.00 80.00 81.67
One CNN+LSTM 90.00 80.00 90.00 80.00 90.00 80.00 85.00
Two CNN+LSTM 100.00 100.00 90.00 80.00 90.00 80.00 90.00
Co-LSTSM 100.00 100.00 90.00 100.00 90.00 90.00 95.00
Table 2: Recognition accuracy (%) of different methods on the UT dataset.

5.3 Results on the BIT dataset

Comparison with baselines. Table 1 shows the recognition accuracy of the proposed Co-LSTSM compared with the baselines. As shown in this table, Co-LSTSM significantly outperforms the baseline methods. We can see that adding temporal information by employing LSTM (i.e., "One CNN+LSTM" and "Two CNN+LSTM") improves the performance of "Person-box CNN", which uses no temporal information. In particular, "Two CNN+LSTM" achieves higher accuracy than "One CNN+LSTM", which illustrates that a single LSTM model captures a single moving object better than multiple moving objects.

Comparison with state-of-the-art methods. We also compare Co-LSTSM with the state-of-the-art methods for person-person interaction recognition, i.e., the methods of Lan et al. [21], Liu et al. [22], and Kong et al. [16, 17, 14] based on hand-crafted spatio-temporal interest points [7], as well as the LSTM-based methods of Donahue et al. [8] and Ke et al. [13]. Table 1 lists the experimental results, some of which are reported in [17, 14]. We can see that Co-LSTSM performs better than the comparative methods, especially all LSTM-based methods, i.e., Donahue et al. [8] and Ke et al. [13]. In particular, compared with the best LSTM-based method (i.e., Ke et al. [13] with 85.20%), Co-LSTSM gains an improvement of about 8%.

5.4 Results on the UT dataset

Comparison with baselines. Table 2 shows the recognition accuracy of the proposed Co-LSTSM compared with the baselines. It is observed that Co-LSTSM performs consistently better than all baselines. "One CNN+LSTM" and "Two CNN+LSTM", which consider temporal information, perform better than "Person-box CNN", which does not. In particular, "Two CNN+LSTM" achieves better performance than "One CNN+LSTM".

Comparison with state-of-the-art methods. Co-LSTSM is also compared with the state-of-the-art methods, including traditional methods (i.e., Ryoo et al. [29], Yu et al. [38], Kong et al. [16, 17, 14], Raptis & Sigal [26], Shariat & Pavlovic [31], and Zhang et al. [40]), a deep learning method (i.e., Wang et al. [35]), as well as LSTM-based methods (i.e., Ke et al. [13] and Donahue et al. [8]). The comparison results are shown in Table 2. We can see that Co-LSTSM matches the best state-of-the-art result, i.e., the 95.00% achieved by Zhang et al. [40] and Wang et al. [35]. It is noted that Wang et al. [35] adopted deep context features on the event neighborhood, where the size of the event neighborhood needs to be manually defined in the preprocessing step, and that Zhang et al. [40] proposed spatio-temporal phrases to capture a certain number of local movements between interacting people, where the number of local movements increases as the interaction becomes more complex. As a new exploration leveraging the LSTM model, the proposed Co-LSTSM also performs better than the other LSTM-based methods, i.e., Donahue et al. [8] and Ke et al. [13].

5.5 Evaluation on Human Interaction Prediction

In this work, we also evaluate the proposed Co-LSTSM on human interaction prediction. Unlike person-person interaction recognition, human interaction prediction is defined as recognizing an ongoing interaction before it is completely executed [13]. Due to the large variations in appearance and the evolution of scenes, interaction prediction at an early stage is a challenging task. Following the experimental setting in [13, 15], a testing video clip is divided into 10 incomplete action executions by using 10 observation ratios (i.e., from 0.1 to 1 with a step size of 0.1), which represent the increasing amount of sequential data over time. For example, given a testing video clip of length $T$, the prediction accuracy under an observation ratio $r$ denotes the accuracy tested on the first $\lfloor r \cdot T \rfloor$ frames. When the observation ratio is 1, namely the entire video clip is used, Co-LSTSM acts as the person-person interaction recognition model.
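A small sketch of this evaluation protocol, assuming the Co-LSTSM cell and classifier modules from the earlier sketches; the helper name and the flooring of the observed length are our illustrative choices.

```python
import math
import torch

def predict_partial(model, classifier, x1_seq, x2_seq, ratio, hidden_dim=2048):
    """Hypothetical early-prediction helper: run Co-LSTSM only on the first floor(ratio * T) frames."""
    T = x1_seq.size(0)
    t_obs = max(1, math.floor(ratio * T))   # number of observed frames for this observation ratio
    h = torch.zeros(1, hidden_dim)
    c1 = torch.zeros(1, hidden_dim)
    c2 = torch.zeros(1, hidden_dim)
    for t in range(t_obs):
        h, c1, c2 = model(x1_seq[t], x2_seq[t], h, c1, c2)
    return classifier(h).argmax(dim=1)       # predicted interaction class from the partial clip

# Evaluating at ratios 0.1, 0.2, ..., 1.0 traces out the curve reported in Figure 4.
```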

The comparative methods include Dynamic Bag-of-Words (DBoW) [27], Sparse Coding (SC) [3], Sparse Coding with Mixture of training video Segments (MSSC) [3], Multiple Temporal Scales based SVM (MTSSVM) [18], Max-Margin Action Prediction Machine (MMAPM) [15], Long-term Recurrent Convolutional Networks (LRCN) [8], and Spatial-Structural-Temporal Feature Learning (SSTFL) [13]. The comparison results on the BIT dataset with different observation ratios are shown in Figure 4. Overall, Co-LSTSM outperforms all comparative methods at all observation ratios. Specifically, we can see that 1) the improvement of Co-LSTSM over the comparative methods is significant across the observation ratios; 2) the accuracy of Co-LSTSM increases rapidly over the observation ratios during which the close interaction is happening; and 3) the accuracy becomes stable over the observation ratios during which the close interaction is ending.

Figure 4: Performance of human interaction prediction on BIT.

6 Conclusions and Future Work

In this work, for person-person interaction recognition, we propose a novel Concurrence-Aware Long Short-Term Sub-Memories (Co-LSTSM) to aggregate the interactive motions between interacting people over time. Specifically, the interacting people at each time step are jointly modeled by a novel concurrent LSTM unit, which captures the concurrently inter-related motion information from two sub-memory units. Experimental results on person-person interaction recognition and prediction have demonstrated the superior performance of the proposed Co-LSTSM compared with the state-of-the-art methods. In the future, we will extend Co-LSTSM to address complex group collective activity analysis.

7 Acknowledgments

This work was supported by the National Key Research and Development Program of China (Grant No. 2016YFB1001001), the National Natural Science Foundation of China (Grant No. 61522203 and 61672285), the Natural Science Foundation of Jiangsu Province (Grant No. BK20140058), the Fundamental Research Funds for the Central Universities (Grant No. 30917015105), and the National Ten Thousand Talent Program of China (Young Top-Notch Talent).

References

  • [1] M. R. Amer, S. Todorovic, A. Fern, and S. Zhu. Monte carlo tree search for scheduling activity recognition. In ICCV, 2013.
  • [2] M. R. Amer, D. Xie, M. Zhao, S. Todorovic, and S. C. Zhu. Cost-sensitive top-down/bottom-up inference for multiscale activity recognition. In ECCV, 2012.
  • [3] Y. Cao, D. P. Barrett, A. Barbu, S. Narayanaswamy, H. Yu, A. Michaux, Y. Lin, S. J. Dickinson, J. M. Siskind, and S. Wang. Recognize human activities from partially observed videos. In CVPR, 2013.
  • [4] X. Chang, W.-S. Zheng, and J. Zhang. Learning person-person interaction in collective activity recognition. IEEE TIP, 24(6):1905--1918, 2015.
  • [5] W. Choi and S. Savarese. A unified framework for multi-target tracking and collective activity recognition. In ECCV, 2012.
  • [6] M. P. Cuéllar, M. Delgado, and M. del Carmen Pegalajar Jiménez. An application of non-linear programming to train recurrent neural networks in time series prediction problems. In ICEIS, 2005.
  • [7] P. Dollár, V. Rabaud, G. Cottrell, and S. Belongie. Behavior recognition via sparse spatio-temporal features. In VS-PETS, 2005.
  • [8] J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In CVPR, 2015.
  • [9] Y. Du, W. Wang, and L. Wang. Hierarchical recurrent neural network for skeleton based action recognition. In CVPR, 2015.
  • [10] R. B. Girshick. Fast r-cnn. In ICCV, 2015.
  • [11] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735--1780, 1997.
  • [12] M. Ibrahim, S. Muralidharan, Z. Deng, A. Vahdat, and G. Mori. A hierarchical deep temporal model for group activity recognition. arXiv:1511.06040, 2015.
  • [13] Q. Ke, M. Bennamoun, S. An, F. Boussaid, and F. Sohel. Spatial, structural and temporal feature learning for human interaction prediction. arXiv:1608.05267, 2016.
  • [14] Y. Kong and Y. Fu. Close human interaction recognition using patch-aware models. IEEE TIP, 25(1):167--178, 2016.
  • [15] Y. Kong and Y. Fu. Max-margin action prediction machine. IEEE TPAMI, 38(9):1844--1858, 2016.
  • [16] Y. Kong, Y. Jia, and Y. Fu. Learning human interaction by interactive phrases. In ECCV, 2012.
  • [17] Y. Kong, Y. Jia, and Y. Fu. Interactive phrases: Semantic descriptions for human interaction recognition. IEEE TPAMI, 36(9):1775--1788, 2014.
  • [18] Y. Kong, D. Kit, and Y. Fu. A discriminative model with multiple temporal scales for action prediction. In ECCV, 2014.
  • [19] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
  • [20] T. Lan, Y. Wang, W. Yang, and G. Mori. Beyond actions: Discriminative models for contextual group activities. In NIPS, 2010.
  • [21] T. Lan, Y. Wang, W. Yang, S. N. Robinovitch, and G. Mori. Discriminative latent models for recognizing contextual group activities. IEEE TPAMI, 34(8):1549--1562, 2012.
  • [22] J. Liu, B. Kuipers, and S. Savarese. Recognizing human actions by attributes. In CVPR, 2011.
  • [23] J. Liu, A. Shahroudy, D. Xu, and G. Wang. Spatio-temporal lstm with trust gates for 3d human action recognition. In ECCV, 2016.
  • [24] A. Patron-Perez, M. Marszalek, I. Reid, and A. Zisserman. Structured learning of human interactions in tv shows. IEEE TPAMI, 34(12):2441--2453, 2012.
  • [25] R. Poppe. A survey on vision-based human action recognition. Image and vision computing, 28(6):976--990, 2010.
  • [26] M. Raptis and L. Sigal. Poselet key-framing: A model for human activity recognition. In CVPR, 2013.
  • [27] M. S. Ryoo. Human activity prediction: Early recognition of ongoing activities from streaming videos. In ICCV, 2011.
  • [28] M. S. Ryoo and J. K. Aggarwal. Recognition of composite human activities through context-free grammar based representation. In CVPR, 2006.
  • [29] M. S. Ryoo and J. K. Aggarwal. Spatio-temporal relationship match: Video structure comparison for recognition of complex human activities. In ICCV, 2009.
  • [30] A. Shahroudy, J. Liu, T.-T. Ng, and G. Wang. Ntu rgb+d: A large scale dataset for 3d human activity analysis. In CVPR, 2016.
  • [31] S. Shariat and V. Pavlovic. A new adaptive segmental matching measure for human activity recognition. In ICCV, 2013.
  • [32] K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. In NIPS, 2014.
  • [33] A. Vahdat, B. Gao, M. Ranjbar, and G. Mori. A discriminative key pose sequence model for recognizing human interactions. In ICCV Workshops, 2011.
  • [34] V. Veeriah, N. Zhuang, and G.-J. Qi. Differential recurrent neural networks for action recognition. In ICCV, 2015.
  • [35] X. Wang and Q. Ji. Hierarchical context modeling for video event recognition. IEEE TPAMI, 2016.
  • [36] R. J. Williams and D. Zipser. A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1(2):270--280, 1989.
  • [37] Z. Wu, Y.-G. Jiang, X. Wang, H. Ye, and X. Xue. Multi-stream multi-class fusion of deep networks for video classification. In ACM MM, 2016.
  • [38] T.-H. Yu, T.-K. Kim, and R. Cipolla. Real-time action recognition by spatiotemporal semantic and structural forests. In BMVC, 2010.
  • [39] A. R. Zamir, A. Dehghan, and M. Shah. Gmcp-tracker: Global multi-object tracking using generalized minimum clique graphs. In ECCV. 2012.
  • [40] Y. Zhang, X. Liu, M. Chang, W. Ge, and T. Chen. Spatio-temporal phrases for activity recognition. In ECCV, 2012.
  • [41] W. Zhu, C. Lan, J. Xing, W. Zeng, Y. Li, L. Shen, and X. Xie. Co-occurrence feature learning for skeleton based action recognition using regularized deep lstm networks. arXiv:1603.07772, 2016.