It is customary in the recognition of images to treat the two spatial dimensions x and y symmetrically. This is justified by the statistics of natural images, which are to a first approximation isotropic—all orientations are equally likely—and shift-invariant [38, 23]. But what about video signals? Motion is the spatiotemporal counterpart of orientation, but all spatiotemporal orientations are not
equally likely. Slow motions are more likely than fast motions (indeed most of the world we see is at rest at a given moment) and this has been exploited in Bayesian accounts of how humans perceive motion stimuli. For example, if we see a moving edge in isolation, we perceive it as moving perpendicular to itself, even though in principle it could also have an arbitrary component of movement tangential to itself (the aperture problem in optical flow). This percept is rational if the prior favors slow movements.
If all spatiotemporal orientations are not equally likely, then there is no reason for us to treat space and time symmetrically, as is implicit in approaches to video recognition based on spatiotemporal convolutions [44, 3]. We might instead “factor” the architecture to treat spatial structures and temporal events separately. For concreteness, let us study this in the context of recognition. The categorical spatial semantics of the visual content often evolve slowly. For example, waving hands do not change their identity as “hands” over the span of the waving action, and a person is always in the “person” category even though he/she can transition from walking to running. So the recognition of the categorical semantics (as well as their colors, textures, lighting etc.) can be refreshed relatively slowly. On the other hand, the motion being performed can evolve much faster than the subject identity, such as clapping, waving, shaking, walking, or jumping. It is therefore desirable to use fast refreshing frames (high temporal resolution) to effectively model the potentially fast changing motion.
Based on this intuition, we present a two-pathway SlowFast model for video recognition (Fig. 1). One pathway is designed to capture semantic information that can be given by images or a few sparse frames, and it operates at low frame rates and slow refreshing speed. In contrast, the other pathway is responsible for capturing rapidly changing motion, by operating at fast refreshing speed and high temporal resolution. Despite its high temporal rate, this pathway is made very lightweight, e.g., 20% of total computation. This is because this pathway is designed to have fewer channels and weaker ability to process spatial information, while such information can be provided by the first pathway in a less redundant manner. We call the first a Slow pathway and the second a Fast pathway, driven by their different temporal speeds. The two pathways are fused by lateral connections.
Our conceptual idea leads to flexible and effective designs for video models. The Fast pathway, due to its lightweight nature, does not need to perform any temporal pooling—it can operate on high frame rates for all intermediate layers and maintain temporal fidelity. Meanwhile, thanks to the lower temporal rate, the Slow pathway can be more focused on the spatial domain and semantics. By treating the raw video at different temporal rates, our method allows the two pathways to have their own expertise on video modeling.
Our models report 79.0% accuracy on Kinetics without using any pre-training (e.g., ImageNet), largely surpassing the best number in the literature of this kind by 5.1%. Ablation experiments convincingly demonstrate the improvement contributed by the SlowFast concept. On AVA action detection, our model achieves a new state-of-the-art of 28.3% mAP.
Our method is partially inspired by biological studies on the retinal ganglion cells in the primate visual system [24, 34, 6, 11, 46], though admittedly the analogy is rough and premature. These studies found that in these cells, 80% are Parvocellular (P-cells) and 15-20% are Magnocellular (M-cells). The M-cells operate at high temporal frequency
and are more responsive to temporal changes, but are not sensitive to spatial details or colors. P-cells provide fine spatial details and colors, but have lower temporal resolution. Our framework is analogous in that: (i) our model has two pathways separately working at low and high temporal resolutions; (ii) our Fast pathway is designed to capture fast changing motion but fewer spatial details, analogous to M-cells; and (iii) our Fast pathway is lightweight, similar to the small ratio of M-cells. We hope these relations will inspire more computer vision models for video recognition.
2 Related Work
Actions can be formulated as spatiotemporal objects and captured by oriented filtering in spacetime, as done by HOG3D and cuboids. 3D ConvNets [43, 44, 3] extend 2D image models [29, 40, 42, 21] to the spatiotemporal domain, handling both spatial and temporal dimensions similarly. There are recent methods focusing on decomposing the convolutions into separate 2D spatial and 1D temporal filters [9, 45, 53, 36].
Beyond spatiotemporal filtering or their separable versions, our work pursues a more thorough separation of modeling expertise by using two different temporal speeds.
Optical flow for video recognition.
There is a classical branch of research focusing on hand-crafted spatiotemporal features based on optical flow. These methods, including histograms of flow, motion boundary histograms, and trajectories, showed competitive performance for action recognition before the prevalence of deep learning.
In the context of deep neural networks, the two-stream method exploits optical flow by viewing it as another input modality. This method has been a foundation of many competitive results in the literature [9, 10, 49]. However, it is methodologically unsatisfactory given that optical flow is a hand-designed representation, and two-stream methods are often not learned end-to-end jointly with the flow.
Our work is related to the two-stream method, but provides conceptually different perspectives. The two-stream method has not explored the potential of different temporal speeds, a key concept in our method. The two-stream method adopts the same backbone structure for both streams, whereas our Fast pathway is more lightweight. Our method does not compute optical flow, and therefore, our models are learned end-to-end from the raw data.
3 SlowFast Networks
3.1 Slow pathway
The Slow pathway can be any convolutional model that works on a clip of video as a spatiotemporal volume. The key concept in our Slow pathway is a large temporal stride τ on input frames, i.e., it processes only one out of τ frames. A typical value of τ we studied is 16—this refreshing speed is roughly 2 frames sampled per second for 30-fps videos. Denoting the number of frames sampled by the Slow pathway as T, the raw clip length is T × τ frames.
3.2 Fast pathway
In parallel to the Slow pathway, the Fast pathway is another convolutional model with the following properties.
High frame rate.
Our goal here is to have a fine representation along the temporal dimension. Our Fast pathway works with a small temporal stride of τ/α, where α > 1 is the frame rate ratio between the Fast and Slow pathways. Our two pathways operate on the same raw clip, so the Fast pathway samples αT frames, α times denser than the Slow pathway. A typical value is α = 8 in our experiments.
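The two sampling rates can be sketched concretely in plain Python, using the typical values from the text (τ = 16, α = 8, and T = 4 frames for the Slow pathway); the function name and layout here are illustrative, not from the original:

```python
# Sketch of SlowFast frame sampling (typical values: tau=16, alpha=8, T=4).
def sample_indices(clip_len, stride):
    """Return the frame indices a pathway reads from a raw clip."""
    return list(range(0, clip_len, stride))

tau, alpha = 16, 8            # Slow stride and Fast/Slow frame-rate ratio
T = 4                         # frames seen by the Slow pathway
clip_len = T * tau            # raw clip length: 64 frames

slow = sample_indices(clip_len, tau)           # stride tau
fast = sample_indices(clip_len, tau // alpha)  # stride tau/alpha = 2

print(slow)      # [0, 16, 32, 48]   -> T frames
print(fast[:5])  # [0, 2, 4, 6, 8]   -> alpha*T = 32 frames in total
```

Both pathways read the same 64-frame raw clip; the Fast pathway simply keeps α times more of it.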
The presence of α is the key to the SlowFast concept (Fig. 1, time axis). It explicitly indicates that the two pathways work at different temporal speeds, and thus drives the expertise of the two subnets instantiating the two pathways.
High temporal resolution features.
Our Fast pathway not only has a high input resolution, but also pursues high-resolution features throughout the network hierarchy. In our instantiations, we use no temporal downsampling layers (neither temporal pooling nor time-strided convolutions) throughout the Fast pathway, until the global pooling layer before classification. As such, our feature tensors always have αT frames along the temporal dimension, maintaining temporal fidelity as much as possible.
Low channel capacity.
Our Fast pathway also distinguishes itself from existing models in that it can use significantly lower channel capacity, while achieving good accuracy for the SlowFast model. This makes it lightweight.
In a nutshell, our Fast pathway is a convolutional network analogous to the Slow pathway, but has a ratio of β (β < 1) channels of the Slow pathway. The typical value is β = 1/8 in our experiments. Notice that the computation (floating-number operations, or FLOPs) of a common layer is often quadratic in terms of its channel scaling ratio. This is what makes the Fast pathway more computation-efficient than the Slow pathway. In our instantiations, the Fast pathway typically takes ~20% of the total computation. Interestingly, as mentioned in Sec. 1, evidence suggests that 15-20% of the retinal cells in the primate visual system are M-cells (that are sensitive to fast motion but not color or spatial detail).
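The quadratic dependence on channels can be checked with a back-of-envelope sketch (the layer sizes below are illustrative only; real per-pathway costs also depend on kernel shapes and the overall network layout):

```python
# Back-of-envelope FLOPs for a conv layer: proportional to
# (input channels) * (output channels) * (output positions) * (kernel size).
def conv_flops(c_in, c_out, t, s, k=3 * 3):
    return c_in * c_out * t * s * s * k

alpha, beta = 8, 1 / 8
C, T, S = 256, 4, 56          # illustrative Slow-pathway layer shape

slow = conv_flops(C, C, T, S)
fast = conv_flops(int(beta * C), int(beta * C), alpha * T, S)

# Channels enter quadratically, so the Fast layer costs about
# alpha * beta**2 of the Slow layer despite seeing alpha-times more frames.
print(fast / slow)  # 0.125
```

With β = 1/8 and α = 8, a Fast-pathway layer costs roughly α·β² = 1/8 of its Slow counterpart at this sketch level, consistent with the Fast pathway taking a small fraction of the total computation.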
The low channel capacity can also be interpreted as a weaker ability of representing spatial semantics. Technically, our Fast pathway has no special treatment on the spatial dimension, so its spatial modeling capacity should be lower than the Slow pathway because of fewer channels. The good results of our model suggest that it is a desired tradeoff for the Fast pathway to weaken its spatial modeling ability while strengthening its temporal modeling ability.
Motivated by this interpretation, we also explore different ways of weakening spatial capacity in the Fast pathway, including reducing input spatial resolution and removing color information. As we will show by experiments, these versions can all give good accuracy, suggesting that a lightweight Fast pathway with less spatial capacity can be made beneficial.
3.3 Lateral connections
The information of the two pathways is fused, so that one pathway is not unaware of the representation learned by the other. We implement this by lateral connections, which have been used to fuse optical flow-based, two-stream networks [9, 10]. In image object detection, lateral connections are a popular technique for merging different levels of spatial resolution and semantics.
Similar to [9, 32], we attach one lateral connection between the two pathways for every “stage” (Fig. 1). Specifically for ResNets, these connections are right after pool1, res2, res3, and res4. The two pathways have different temporal dimensions, so the lateral connections perform a transformation to match them (detailed in Sec. 3.4). We use unidirectional connections that fuse features of the Fast pathway into the Slow one (Fig. 1). We have experimented with bidirectional fusion and found similar results.
|stage||Slow pathway||Fast pathway||output sizes|
|data layer||stride 16, 1²||stride 2, 1²||Slow: 4×224², Fast: 32×224²|
|conv1||1×7², 64||5×7², 8||Slow: 4×112², Fast: 32×112²|
|||stride 1, 2²||stride 1, 2²|||
|pool1||1×3² max||1×3² max||Slow: 4×56², Fast: 32×56²|
|||stride 1, 2²||stride 1, 2²|||
|res2||[1×1², 64; 1×3², 64; 1×1², 256]×3||[3×1², 8; 1×3², 8; 1×1², 32]×3||Slow: 4×56², Fast: 32×56²|
|res3||[1×1², 128; 1×3², 128; 1×1², 512]×4||[3×1², 16; 1×3², 16; 1×1², 64]×4||Slow: 4×28², Fast: 32×28²|
|res4||[3×1², 256; 1×3², 256; 1×1², 1024]×6||[3×1², 32; 1×3², 32; 1×1², 128]×6||Slow: 4×14², Fast: 32×14²|
|res5||[3×1², 512; 1×3², 512; 1×1², 2048]×3||[3×1², 64; 1×3², 64; 1×1², 256]×3||Slow: 4×7², Fast: 32×7²|
|global average pool, concat, fc||# classes|
Our idea of SlowFast is generic, and it can be instantiated with different backbones (e.g., [40, 42, 21]) and implementation specifics. In this subsection, we describe our instantiations of the network architectures.
An example SlowFast model is specified in Table 1. We denote spatiotemporal size by T×S², where T is the temporal length and S is the height and width of a square spatial crop. The details are described next.
The Slow pathway in Table 1 is a temporally strided 3D ResNet. It has T = 4 frames as the network input, sparsely sampled from a 64-frame raw clip with a temporal stride τ = 16. We opt to not perform temporal downsampling in this instantiation, as doing so would be detrimental when the input stride is large.
Unlike typical C3D / I3D models, we use non-degenerate temporal convolutions (temporal kernel size > 1) only in res4 and res5; all filters from conv1 to res3 are essentially 2D convolution kernels in this pathway. This is motivated by our experimental observation that using temporal convolutions in earlier layers degrades accuracy. We argue that this is because when objects move fast and the temporal stride is large, there is little correlation within a temporal receptive field unless the spatial receptive field is large enough (i.e., in later layers).
Table 1 shows an example of the Fast pathway with α = 8 and β = 1/8. It has a much higher temporal resolution and lower channel capacity.
The Fast pathway has non-degenerate temporal convolutions in every block. This is motivated by the observation that this pathway holds fine temporal resolution for the temporal convolutions to capture detailed motion. Also, the Fast pathway has no temporal downsampling layers by design.
Our lateral connections fuse from the Fast to the Slow pathway. This requires matching the sizes of the features before fusing. Denoting the feature shape of the Slow pathway as {T, S², C}, the feature shape of the Fast pathway is {αT, S², βC}. We experiment with the following transformations in the lateral connections:
(i) Time-to-channel: We reshape and transpose {αT, S², βC} into {T, S², αβC}, meaning that we pack all α frames into the channels of one frame.
(ii) Time-strided sampling: We simply sample one out of every α frames, so {αT, S², βC} becomes {T, S², βC}.
(iii) Time-strided convolution: We perform a 3D convolution of a 5×1² kernel with 2βC output channels and stride α.
The output of the lateral connections is fused into the Slow pathway by summation or concatenation.
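At the shape level, the three transformations can be sketched as follows (an illustrative sketch only: shapes are written as (T, S, C) tuples and only the bookkeeping is shown, not the learned operations; the function names are ours):

```python
# Shape-level sketch of the three lateral-connection transforms.
# A shape is a (T, S, C) tuple; alpha is the Fast/Slow frame-rate ratio.
def time_to_channel(shape, alpha):
    t, s, c = shape                      # {alpha*T, S^2, beta*C}
    return (t // alpha, s, c * alpha)    # -> {T, S^2, alpha*beta*C}

def time_strided_sampling(shape, alpha):
    t, s, c = shape
    return (t // alpha, s, c)            # keep one of every alpha frames

def time_strided_conv(shape, alpha):
    t, s, c = shape                      # 5x1^2 conv with stride alpha
    return (t // alpha, s, 2 * c)        # and 2*beta*C output channels

fast_shape = (32, 56, 32)                # alpha*T=32, S=56, beta*C=32
for f in (time_to_channel, time_strided_sampling, time_strided_conv):
    print(f.__name__, f(fast_shape, alpha=8))
# time_to_channel (4, 56, 256)
# time_strided_sampling (4, 56, 32)
# time_strided_conv (4, 56, 64)
```

All three land on the Slow pathway's temporal length T, so the transformed Fast features can be summed with or concatenated to the Slow features.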
4 Experiments: Kinetics Action Classification
We investigate Kinetics-400, which has 240k training videos and 20k validation videos in 400 human action categories. We report top-1 and top-5 classification accuracy (%). We also report the computational cost (in FLOPs) of a single, spatially center-cropped clip.¹

¹We use single-clip, center-crop FLOPs as a basic unit of computational cost. Inference-time computational cost is roughly proportional to this, if a fixed number of clips and crops is used, as is the case for all our models.
We adopt synchronized SGD training on 128 GPUs following a large minibatch recipe, and we found its accuracy is as good as typical training on one 8-GPU machine, but it scales out well. The mini-batch size is 8 clips per GPU (so the total mini-batch size is 1024). We train with Batch Normalization (BN), with the BN statistics computed within each set of 8 clips. We adopt a half-period cosine schedule of learning rate decay: the learning rate at the n-th iteration is η·0.5[cos(nπ/n_max) + 1], where n_max is the maximum number of training iterations and the base learning rate η is set as 1.6. We also use a linear warm-up strategy in the first 8k iterations. Unless specified, we train for 256 epochs (60k iterations with a total mini-batch size of 1024, on 240k Kinetics videos) when T ≤ 4 frames, and 196 epochs when T > 4 frames: it is sufficient to train for fewer epochs when a clip has more frames. We use a momentum of 0.9 and a weight decay of 10⁻⁴. Dropout of 0.5 is used before the final classifier layer.
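The schedule can be sketched as below, using the stated base rate of 1.6 and 8k warm-up iterations; the exact warm-up form is an assumption (here a linear ramp from 0 to the base rate), and n_max = 60k is taken from the 60k-iteration setting above:

```python
import math

# Half-period cosine learning-rate schedule with a linear warm-up
# (warm-up shape is an assumed linear ramp; n_max assumed 60k iterations).
def learning_rate(i, base_lr=1.6, i_max=60000, warmup=8000):
    if i < warmup:
        return base_lr * i / warmup           # linear warm-up
    return base_lr * 0.5 * (math.cos(i * math.pi / i_max) + 1.0)

print(learning_rate(0))       # 0.0
print(learning_rate(8000))    # ~1.53 (cosine takes over)
print(learning_rate(60000))   # 0.0 (fully decayed)
```

The cosine factor equals 1 at iteration 0 and 0 at i_max, so the rate decays smoothly from the base value to zero over training.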
For the temporal domain, we randomly sample a clip (of T×τ frames) from the full-length video, and the inputs to the Slow and Fast pathways are respectively T and αT frames; for the spatial domain, we randomly crop 224×224 pixels from a video, or its horizontal flip, with the shorter side randomly sampled in [256, 320] pixels [40, 50].
Following common practice, we uniformly sample 10 clips from a video along its temporal axis. For each clip, we scale the shorter spatial side to 256 pixels and take 3 crops of 256×256 to cover the spatial dimensions, as an approximation of fully-convolutional testing, following public code. We average the softmax scores for prediction.
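This multi-view testing reduces to averaging per-view softmax scores over the 10 clips × 3 crops = 30 views; a minimal sketch with toy scores and a hypothetical 3-class problem:

```python
# Average per-view softmax scores over all spacetime views of a video.
def average_views(view_scores):
    """view_scores: list of per-view score lists (one entry per class)."""
    n = len(view_scores)
    return [sum(col) / n for col in zip(*view_scores)]

# Toy example: 30 hypothetical views of a 3-class problem.
views = [[0.2, 0.5, 0.3]] * 15 + [[0.4, 0.4, 0.2]] * 15
scores = average_views(views)
pred = max(range(len(scores)), key=scores.__getitem__)
# scores is approximately [0.30, 0.45, 0.25]; predicted class index: 1
```

The final prediction is the argmax of the averaged scores, so a class that is consistently ranked high across views wins even if no single view is confident.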
Our implementation is based on public code.²

²https://github.com/facebookresearch/video-nonlocal-net
4.1 Results and Analysis
Table 2 shows a series of ablations, analyzed next:
Training from scratch.
Our models are trained from scratch, without ImageNet pre-training. To draw fair comparisons, it is helpful to check the potential impacts (positive or negative) of training from scratch. To this end, we train the exact same 3D ResNet-50 architectures, using our large-scale SGD recipe, from scratch.
Table 2a shows the comparisons of two architectures. In both cases, our training recipe achieves results comparably good to the ImageNet pre-training counterparts reported previously. This suggests that our training system, as the foundation of the following experiments, suffers no loss despite using no ImageNet pre-training.
Table 2b shows the results using the structure of one individual pathway alone (specified in Table 1). We compare them with the 3D R-50 baseline. Our individual pathways use no temporal downsampling, leading to a total temporal reduction factor of 1 (see “t-reduce” in Table 2b). So we also train a counterpart 3D R-50 modified for a total temporal reduction of 1 (Table 2b, row 2).
Table 2b shows that our individual pathways alone have no unfair advantage over the 3D R-50 design, as shown by their lower accuracy. This can be explained by the fact that our individual pathways are far more lightweight than the 3D R-50 ones: our Slow pathway has fewer frames, and our Fast pathway has fewer channels. Consequently, they have only 20.9 and 4.9 GFLOPs respectively, substantially cheaper than the two 3D R-50 baselines (28.1 and 44.9 GFLOPs). The two pathways show their special expertise when used jointly, as we will show next.
As a naïve fusion baseline, in Table 2c we show a variant using no lateral connections: it only concatenates the final outputs of the two pathways. This variant has 73.5% accuracy, slightly better than the Slow-only counterpart by 0.8%.
Then we show SlowFast models with various lateral connections: time-to-channel (TtoC), time-strided sampling (T-sample), and time-strided convolution (T-conv). Concatenation is used for merging. For TtoC, which can match the channel dimensions for this model, we also show merging by element-wise summation.
Table 2c shows that these SlowFast models are all better than the Slow-only pathway. With the best-performing lateral connection of T-conv, our SlowFast network is 3.0% better than the one-pathway, Slow-only baseline. We use T-conv as our default lateral connection.
Fig. 2 shows the curves of the Slow-only model (72.6%) vs. the SlowFast model (75.6%) during the training procedure. We plot the single-crop validation error and training error. It is clear that the SlowFast model is consistently better than its Slow-only counterpart throughout the entire training.
Interestingly, the Fast pathway alone has only 51.7% accuracy (Table 2b). But it brings up to a 3.0% improvement to the Slow pathway, showing that the representation modeled by the Fast pathway is largely complementary. We strengthen this observation with the next set of ablations.
Channel capacity of Fast pathway.
A key intuition for designing the Fast pathway is that it can employ a lower channel capacity for capturing motion without building a detailed spatial representation. This is controlled by the channel ratio β. Table 2d shows the effect of varying β.
The best-performing values are β = 1/6 and β = 1/8 (our default). Nevertheless, it is surprising to see that all values from β = 1/32 to β = 1/4 in our SlowFast model can improve over the Slow-only counterpart. In particular, with β = 1/32, the Fast pathway adds as little as 1.0 GFLOPs (~5% relative), but leads to a 1.6% improvement.
Weaker spatial inputs to Fast pathway.
Further, we experiment with different weaker spatial inputs to the Fast pathway in our SlowFast model. We consider: (i) half spatial resolution (112×112), with β = 1/4 (vs. the default 1/8) to roughly maintain the FLOPs; (ii) gray-scale input frames; (iii) “time difference” frames, computed by subtracting the current frame from the previous frame; and (iv) using optical flow as the input to the Fast pathway.
Table 2e shows that all these variants are competitive and are better than the Slow-only baseline. In particular, the gray-scale version of the Fast pathway is nearly as good as the RGB variant, but reduces FLOPs by ~5%. Interestingly, this is also consistent with the M-cells’ behavior of being insensitive to colors [24, 34, 6, 11, 46].
In Table 2f we study using two Slow pathways instead of SlowFast. We consider: (i) ensembling two Slow-only models that are independently trained, and (ii) replacing the Fast pathway with a Slow pathway to create a “SlowSlow” model. Table 2f shows that ensembling moderately improves accuracy by 0.5% at the cost of 2× computation, while SlowSlow severely suffers from overfitting.
Various SlowFast instantiations.
In Table 2g we compare various instantiations of SlowFast models, against two Slow-only baselines from Table 2b. Their SlowFast counterparts show healthy gains. In particular, even if the Slow-only model has a denser input (T×τ = 8×8), the Fast pathway is still able to improve its accuracy by 2.1% (from 74.9% to 77.0%), with only a 20% increase in FLOPs.
The bottom rows in Table 2g show that various SlowFast models perform competitively and offer a variety of tradeoffs between accuracy and FLOPs. In particular, the smallest SlowFast model we have tried has only 13.9 GFLOPs, which is ~50% of many Slow-only 3D convolutional models (see also Table 2b), but it still achieves a good accuracy of 73.4%.
We believe there will be more room to optimize the tradeoffs between accuracy and FLOPs under the SlowFast framework, and the desired tradeoff is often application-dependent. But our results all suggest that the SlowFast models are more effective than the Slow-only ones.
Thus far all experiments used ResNet-50 as the backbone. Next we study advanced backbones, including the deeper ResNet-101 and a non-local (NL) version. For models involving R-101, to reduce overfitting, we use a scale jittering range of [256, 340] pixels and a random temporal jitter when sampling the Slow pathway. For all models involving NL, we initialize them with the counterparts trained without NL, to facilitate convergence. We only add NL to the Slow pathway. For R101+NL, we only add NL to res4 (instead of res3+res4).
Table 2h shows the results. As expected, using advanced backbones is orthogonal to our SlowFast concept, and they give additional improvement over our SlowFast baselines.
|model||flow||pretrain||top-1||top-5||GFLOPs × views|
|I3D|| ||ImageNet||72.1||90.3||108 × N/A|
|Two-Stream I3D||✓||ImageNet||75.7||92.0||216 × N/A|
|S3D-G||✓||ImageNet||77.2||93.0||143 × N/A|
|Nonlocal R-50|| ||ImageNet||76.5||92.6||282 × 30|
|Nonlocal R-101|| ||ImageNet||77.7||93.3||359 × 30|
|R(2+1)D Flow||✓||-||67.5||87.2||152 × 115|
|STC|| ||-||68.7||88.5||N/A × N/A|
|ARTNet|| ||-||69.2||88.3||23.5 × 250|
|S3D|| ||-||69.4||89.1||66.4 × N/A|
|ECO|| ||-||70.0||89.4||N/A × N/A|
|I3D||✓||-||71.6||90.0||216 × N/A|
|R(2+1)D|| ||-||72.0||90.0||152 × 115|
|R(2+1)D||✓||-||73.9||90.9||304 × 115|
|SlowFast, R50 (4×16)|| ||-||75.6||92.1||36.1 × 30|
|SlowFast, R50|| ||-||77.0||92.6||65.7 × 30|
|SlowFast, R50 + NL|| ||-||77.7||93.1||80.8 × 30|
|SlowFast, R101|| ||-||77.9||93.2||106 × 30|
|SlowFast, R101 + NL|| ||-||79.0||93.6||115 × 30|
|model||pretrain||top-1||top-5||GFLOPs × views|
|I3D||-||71.9||90.1||108 × N/A|
|StNet-IRv2 RGB||ImgNet+Kinetics400||79.0||N/A||N/A|
|SlowFast, R50||-||79.9||94.5||65.7 × 30|
|SlowFast, R101||-||80.4||94.8||106 × 30|
|SlowFast, R101 + NL||-||81.1||94.9||115 × 30|
Comparison with state-of-the-art results.
Table 3 shows the comparisons with state-of-the-art results on Kinetics-400. We also show the actual inference-time computation. As existing papers differ in their inference strategies for cropping/clipping in space and in time, in Table 3 we report the FLOPs per spacetime “view” (a temporal clip with a spatial crop) at inference, and the number of views used. Recall that in our case, the inference-time spatial size is 256² (instead of 224²), and 10 temporal clips each with 3 spatial crops are used (so 30 views in total).
Table 3 shows that our results are substantially better than existing results that are also without ImageNet pre-training. In particular, our model (79.0%) is 5.1% absolutely better than the previous best result of this kind (73.9%). Our results are also better than those using ImageNet pre-training.
Our results are achieved at low inference-time cost. We notice that many existing works (when reported) use extremely dense sampling of clips along the temporal axis, which can lead to over 100 views at inference time. This cost has been largely overlooked. In contrast, our method does not require many temporal clips, thanks to the high temporal resolution yet lightweight Fast pathway. Our cost per spacetime view can be low (e.g., 36.1 GFLOPs), while still being more accurate than existing methods.
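The cost accounting used in Table 3 is simply per-view FLOPs multiplied by the number of views; for illustration, using two rows from the table (the SlowFast R50 4×16 model and an R(2+1)D entry with dense 115-view sampling):

```python
# Total inference cost = GFLOPs per spacetime view x number of views.
def total_gflops(per_view, views):
    return per_view * views

# Numbers taken from Table 3 (GFLOPs per view, views).
slowfast = total_gflops(36.1, 30)   # SlowFast R50 (4x16): ~1083 GFLOPs
dense = total_gflops(152, 115)      # R(2+1)D, 115 views:  17480 GFLOPs
ratio = dense / slowfast            # dense sampling costs ~16x more
```

This is why the per-view column alone understates the gap: a cheap per-view model evaluated over 100+ views can still be an order of magnitude more expensive end-to-end.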
Kinetics-600 has 392k training videos and 30k validation videos in 600 classes. As this dataset is larger, we extend the training epochs (and the learning rate schedule) by 2×. We set the base learning rate as 0.8.
Kinetics-600 is relatively new, and existing results are limited. So our goal is mainly to provide results for future reference. See Table 4. Note that the Kinetics-600 validation set overlaps with the Kinetics-400 training set, so we do not pre-train on Kinetics-400. The winning entry of the latest ActivityNet Challenge 2018 reports a best single-model, single-modality accuracy of 79.0%. Our method achieves 81.1%.
5 Experiments: AVA Action Detection
The AVA dataset focuses on spatiotemporal localization of human actions. The data is taken from 437 movies. Spatiotemporal labels are provided for one frame per second, with every person annotated with a bounding box and (possibly multiple) actions. There are 211k training and 57k validation video segments. We follow the standard protocol of evaluating on 60 classes (see Fig. 3). The performance metric is mean Average Precision (mAP) over the 60 classes, using a frame-level IoU threshold of 0.5.
Our detector is similar to Faster R-CNN with minimal modifications adapted for video. We use the SlowFast network or its variants as the backbone. We set the spatial stride of res5 to 1 (instead of 2), and use a dilation of 2 for its filters. This increases the spatial resolution of res5 by 2×. We extract region-of-interest (RoI) features at the last feature map of res5. We extend each 2D RoI at a frame into a 3D RoI by replicating it along the temporal axis. We compute RoI features by RoIAlign spatially, and global average pooling temporally. The RoI features are then max-pooled and fed to a per-class, sigmoid-based classifier for multi-label prediction.
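The detection head described above can be sketched at the shape level in plain Python. This is an illustrative sketch only: integer cropping stands in for RoIAlign, the classifier weights are hypothetical, and a real implementation would operate on learned feature tensors:

```python
import math

# Shape-level sketch of the AVA head: replicate a 2D box along time into a
# 3D RoI, crop spatially (integer crop stands in for RoIAlign), average-pool
# over time, max-pool over space, then per-class sigmoid scores.
def roi_head(feat, box, weights):
    """feat: nested list [T][H][W][C]; box: (y0, y1, x0, x1) on the feature
    map; weights: hypothetical per-class weight vectors of length C."""
    y0, y1, x0, x1 = box
    T, C = len(feat), len(feat[0][0][0])
    # replicate the 2D RoI along time: crop every frame with the same box
    crops = [[row[x0:x1] for row in frame[y0:y1]] for frame in feat]
    # global average pool temporally -> one [H'][W'][C] map
    avg = [[[sum(crops[t][i][j][c] for t in range(T)) / T for c in range(C)]
            for j in range(x1 - x0)] for i in range(y1 - y0)]
    # max pool spatially -> one C-vector
    pooled = [max(avg[i][j][c] for i in range(len(avg))
                  for j in range(len(avg[0]))) for c in range(C)]
    # independent per-class sigmoids for multi-label prediction
    return [1 / (1 + math.exp(-sum(w * x for w, x in zip(wc, pooled))))
            for wc in weights]

# Toy check: 2 frames, a 2x2 map with 1 channel, one class.
frame = [[[1.0], [2.0]], [[3.0], [4.0]]]
scores = roi_head([frame, frame], (0, 2, 0, 2), weights=[[1.0]])
# scores is approximately [0.982]: the sigmoid of the max-pooled value 4.0
```

The sigmoid (rather than softmax) outputs reflect that AVA boxes can carry multiple simultaneous action labels.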
Our region proposals are computed by an off-the-shelf person detector, i.e., one that is not jointly trained with the action detection models. We adopt a person-detection model trained with Detectron: a Faster R-CNN with a ResNeXt-101-FPN [52, 32] backbone, pre-trained on ImageNet and the COCO human keypoint images. We fine-tune this detector on AVA for person (actor) detection. The person detector produces 93.9 AP@50 on the AVA validation set. The region proposals for action detection are then the detected person boxes with a confidence of > 0.9, which have a recall of 91.1% and a precision of 90.7% for the person class.
We initialize the network weights from the Kinetics-400 classification models. We use a step-wise learning rate schedule, reducing the learning rate by 10× when validation error saturates. We train for 14k iterations (68 epochs over the 211k data), with linear warm-up for the first 1k iterations. We use a weight decay of 10⁻⁷. All other hyper-parameters are the same as in the Kinetics experiments. Ground-truth boxes, and proposals overlapping with ground-truth boxes by IoU > 0.75, are used as the samples for training. The inputs are instantiation-specific: αT frames of size 224×224.
We perform inference on a single clip with αT frames around the frame to be evaluated. We resize the spatial dimension such that its shorter side is 256 pixels. The backbone feature extractor is computed fully convolutionally, as in standard Faster R-CNN.
|model||flow||video pretrain||val mAP||test mAP|
|ACRN, S3D||✓||Kinetics-400||17.4||-|
|ATR, R50 + NL|| ||Kinetics-400||20.0||-|
|ATR, R50 + NL||✓||Kinetics-400||21.7||-|
|9-model ensemble||✓||Kinetics-400||25.6||21.1|
|SlowFast, R101 + NL|| ||Kinetics-600||27.3||-|
|SlowFast++, R101 + NL|| ||Kinetics-600||28.3||-|
5.1 Results and Analysis
Table 5 compares a Slow-only baseline with its SlowFast counterpart, with the per-category AP shown in Fig. 3. Our method improves massively, by 5.2 mAP (relative 28%), from 19.0 to 24.2 mAP. This is contributed solely by our SlowFast idea.
Category-wise (Fig. 3), our SlowFast model improves in 57 out of 60 categories vs. its Slow-only counterpart. The largest absolute gains are observed for “hand clap” (+27.7 AP), “swim” (+27.4 AP), “run/jog” (+18.8 AP), “dance” (+15.9 AP), and “eat” (+12.5 AP). We also observe large relative increases in “jump/leap”, “hand wave”, “put down”, “throw”, “hit”, and “cut”. These are categories where modeling dynamics is of vital importance. The SlowFast model is worse in only 3 categories: “answer phone” (-0.1 AP), “lie/sleep” (-0.2 AP), and “shoot” (-0.4 AP); their decrease is relatively small vs. the others’ increase.
Comparison with state-of-the-art results.
Finally, we compare with previous results on AVA in Table 7. An interesting observation concerns the potential benefit of using optical flow (see column ‘flow’ in Table 7). Existing works have observed mild improvements: +1.1 mAP for I3D, and +1.7 mAP for ATR. In contrast, our improvement from the Fast pathway is +5.2 mAP (Table 5). Moreover, two-stream methods using optical flow can double the computational cost, whereas our Fast pathway is lightweight.
As system-level comparisons, our SlowFast model has 26.1 mAP using only Kinetics-400 pre-training. This is 4.4 mAP higher than the previous best number under similar settings (21.7 of ATR, single-model), and 6.1 mAP higher than the counterpart using no optical flow (Table 7).
Previous work pre-trains on the larger Kinetics-600 and achieves 21.9 mAP. With our Kinetics-600 pre-trained model, we reach 26.8 mAP. Augmenting with NL blocks increases this to 27.3 mAP. By using 3 spatial scales and horizontal flipping for testing (SlowFast++, Table 7), we achieve 28.3 mAP, a new state-of-the-art on AVA.
We further provide an oracle experiment using ground-truth region proposals, resulting in 35.1 mAP, suggesting that future work could also explore improving the proposal generation.
We show qualitative results of the SlowFast model in Fig. 4. Overall, even given some discrepancies between our predictions and the ground-truth, our results are visually reasonable. In several cases, predictions and ground-truth labels extend each other, illustrating the difficulty of this dataset. For example, consider the sequence in the last row of Fig. 4: the SlowFast detector predicts the right person’s reading action with a score of 0.54, but is penalized as this label is not present in the annotations.
The time axis is a special dimension. This paper has investigated an architecture design that contrasts the speed along this axis. It achieves state-of-the-art accuracy for video action classification and detection. We hope that this SlowFast concept will foster further research in video recognition.
-  E. H. Adelson and J. R. Bergen. Spatiotemporal energy models for the perception of motion. J. Opt. Soc. Am. A, 2(2):284–299, 1985.
-  J. Carreira, E. Noland, A. Banki-Horvath, C. Hillier, and A. Zisserman. A short note about Kinetics-600. arXiv:1808.01340, 2018.
-  J. Carreira and A. Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In Proc. CVPR, 2017.
-  N. Dalal, B. Triggs, and C. Schmid. Human detection using oriented histograms of flow and appearance. In Proc. ECCV, 2006.
-  J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In Proc. CVPR, 2009.
-  A. Derrington and P. Lennie. Spatial and temporal contrast sensitivities of neurones in lateral geniculate nucleus of macaque. The Journal of Physiology, 357(1):219–240, 1984.
-  A. Diba, M. Fayyaz, V. Sharma, M. M. Arzani, R. Yousefzadeh, J. Gall, and L. Van Gool. Spatio-temporal channel correlation networks for action classification. In Proc. ECCV, 2018.
-  P. Dollár, V. Rabaud, G. Cottrell, and S. Belongie. Behavior recognition via sparse spatio-temporal features. In PETS Workshop, ICCV, 2005.
-  C. Feichtenhofer, A. Pinz, and R. Wildes. Spatiotemporal residual networks for video action recognition. In NIPS, 2016.
-  C. Feichtenhofer, A. Pinz, and A. Zisserman. Convolutional two-stream network fusion for video action recognition. In Proc. CVPR, 2016.
-  D. J. Felleman and D. C. Van Essen. Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex, 1(1):1–47, 1991.
-  B. Ghanem, J. C. Niebles, C. Snoek, F. C. Heilbron, H. Alwassel, V. Escorcia, R. Khrisna, S. Buch, and C. D. Dao. The ActivityNet large-scale activity recognition challenge 2018 summary. arXiv:1808.03766, 2018.
-  R. Girdhar, J. Carreira, C. Doersch, and A. Zisserman. A better baseline for AVA. arXiv:1807.10066, 2018.
-  R. Girshick. Fast R-CNN. In Proc. ICCV, 2015.
-  R. Girshick, I. Radosavovic, G. Gkioxari, P. Dollár, and K. He. Detectron. https://github.com/facebookresearch/detectron, 2018.
-  P. Goyal, P. Dollár, R. Girshick, P. Noordhuis, L. Wesolowski, A. Kyrola, A. Tulloch, Y. Jia, and K. He. Accurate, large minibatch SGD: training ImageNet in 1 hour. arXiv:1706.02677, 2017.
-  C. Gu, C. Sun, D. A. Ross, C. Vondrick, C. Pantofaru, Y. Li, S. Vijayanarasimhan, G. Toderici, S. Ricco, R. Sukthankar, C. Schmid, and J. Malik. AVA: A video dataset of spatio-temporally localized atomic visual actions. In Proc. CVPR, 2018.
-  D. He, F. Li, Q. Zhao, X. Long, Y. Fu, and S. Wen. Exploiting spatial-temporal modelling and multi-modal fusion for human action recognition. arXiv:1806.10319, 2018.
-  K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask R-CNN. In Proc. ICCV, 2017.
-  K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proc. CVPR, 2015.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proc. CVPR, 2016.
-  G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv:1207.0580, 2012.
-  J. Huang and D. Mumford. Statistics of natural images and models. In Proc. CVPR, 1999.
-  D. H. Hubel and T. N. Wiesel. Receptive fields and functional architecture in two non-striate visual areas of the cat. J. Neurophysiol, 28:229–289, 1965.
-  S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proc. ICML, 2015.
-  J. Jiang, Y. Cao, L. Song, S. Z. Y. Li, Z. Xu, Q. Wu, C. Gan, C. Zhang, and G. Yu. Human centric spatio-temporal action localization. In ActivityNet workshop, CVPR, 2018.
-  W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, et al. The Kinetics human action video dataset. arXiv:1705.06950, 2017.
-  A. Kläser, M. Marszałek, and C. Schmid. A spatio-temporal descriptor based on 3d-gradients. In Proc. BMVC., 2008.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
-  I. Laptev, M. Marszalek, C. Schmid, and B. Rozenfeld. Learning realistic human actions from movies. In Proc. CVPR, 2008.
-  Leaderboard: ActivityNet-AVA. http://activity-net.org/challenges/2018/evaluation.html.
-  T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. In Proc. CVPR, 2017.
-  T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In Proc. ECCV, 2014.
-  M. Livingstone and D. Hubel. Segregation of form, color, movement, and depth: anatomy, physiology, and perception. Science, 240(4853):740–749, 1988.
-  I. Loshchilov and F. Hutter. SGDR: Stochastic gradient descent with warm restarts. arXiv:1608.03983, 2016.
-  Z. Qiu, T. Yao, and T. Mei. Learning spatio-temporal representation with pseudo-3d residual networks. In Proc. ICCV, 2017.
-  S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015.
-  D. L. Ruderman. The statistics of natural images. Network: computation in neural systems, 5(4):517–548, 1994.
-  K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. In NIPS, 2014.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In Proc. ICLR, 2015.
-  C. Sun, A. Shrivastava, C. Vondrick, K. Murphy, R. Sukthankar, and C. Schmid. Actor-centric relation network. In ECCV, 2018.
-  C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proc. CVPR, 2015.
-  G. W. Taylor, R. Fergus, Y. LeCun, and C. Bregler. Convolutional learning of spatio-temporal features. In Proc. ECCV, 2010.
-  D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3D convolutional networks. In Proc. ICCV, 2015.
-  D. Tran, H. Wang, L. Torresani, J. Ray, Y. LeCun, and M. Paluri. A closer look at spatiotemporal convolutions for action recognition. In Proc. CVPR, 2018.
-  D. C. Van Essen and J. L. Gallant. Neural mechanisms of form and motion processing in the primate visual system. Neuron, 13(1):1–10, 1994.
-  H. Wang and C. Schmid. Action recognition with improved trajectories. In Proc. ICCV, 2013.
-  L. Wang, W. Li, W. Li, and L. Van Gool. Appearance-and-relation networks for video classification. In Proc. CVPR, 2018.
-  L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang, and L. Van Gool. Temporal segment networks: Towards good practices for deep action recognition. In Proc. ECCV, 2016.
-  X. Wang, R. Girshick, A. Gupta, and K. He. Non-local neural networks. In Proc. CVPR, 2018.
-  Y. Weiss, E. P. Simoncelli, and E. H. Adelson. Motion illusions as optimal percepts. Nature neuroscience, 5(6):598, 2002.
-  S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. In Proc. CVPR, 2017.
-  S. Xie, C. Sun, J. Huang, Z. Tu, and K. Murphy. Rethinking spatiotemporal feature learning for video understanding. arXiv:1712.04851, 2017.
-  M. Zolfaghari, K. Singh, and T. Brox. ECO: efficient convolutional network for online video understanding. In Proc. ECCV, 2018.