Understanding pedestrian behavior is an important and challenging problem that is critical for the deployment of automated and advanced driver assistance technologies in production vehicles. The challenges are exacerbated in situations involving pedestrian interactions in dense urban environments, such as intersections. The difficulty in understanding and modeling pedestrian behavior is primarily attributed to the unpredictability of human behavior in situations where actions and intentions are constantly in flux and depend on abrupt variations in human pose, 3D spatial relations, and interactions with other agents as well as with the environment. To arrive at a pragmatic solution, we emphasize the importance of human pose estimation for recognition and 3D localization of actions.
The core technical components which are essential for understanding pedestrian behavior include recognition and 3D localization of actions and prediction of intention and future trajectory in traffic scenes. While human action recognition and localization problems involve estimation of the current state (behavior and location), future trajectory prediction involves estimation of the future path based on the present and past observations. Therefore, real-time analysis of the current state provides an informative prior in forecasting future trajectories for use in decision making and path planning strategies. Furthermore, it is important to localize and forecast the pedestrian motion in 3D, in order to properly assess risk and develop countermeasures using various driving assistance methodologies.
Recent papers [1, 2, 3, 4, 5, 6] related to trajectory prediction of road agents localize the 2D position of the agents in the image and predict future trajectories using GRUs, LSTMs, and GANs. While these methods have achieved good performance on several public datasets, forecasting the future 3D trajectory of pedestrians in urban driving scenes remains a challenge due to: (a) inaccuracies in 3D pedestrian localization from images, (b) uncertainty of the human intention, and (c) complex interactions between traffic participants and the environment.
To address interactions between agents, influential work such as Social LSTM  and Social GAN  explicitly modeled the interactions between traffic participants to forecast future trajectories from a third-person view. More recently, future vehicle trajectory forecast from an egocentric view was introduced, using object location and scale-level observations . Subsequent work  proposed a relation-aware framework for future trajectory forecast. The most recent approach  introduced a trajectory inference method that uses action priors to predict the future trajectories of scene agents. While the aforementioned methods have made significant advances in trajectory forecast in 2D image coordinates, more accurate and detailed representations of the pedestrian state, including pedestrian action and 3D localization, are needed as inputs to improve accuracy and robustness in driving scenes.
This paper focuses on accurate action recognition and 3D localization of pedestrians in street scenes. We present a two-stream temporal relation network that takes images of raw RGB and human pose (obtained using DensePose ) as inputs to recognize pedestrian actions, building on an existing temporal relation network . Temporal relation networks can learn and reason about temporal dependencies between video frames at multiple time scales. Recent research  has shown a significant improvement in action recognition based on key points of human joints and the spatial relations between those key points. While key point information has helped improve action recognition performance at close proximity over raw RGB images alone, it is less informative as a representation of pedestrian state in traffic scenes, where the resolution of the human is often coarse. DensePose  is a state-of-the-art pose estimation method that also outputs more extensive information about pedestrians.
The contributions of this paper are two-fold. First, we use images of raw RGB and DensePose as inputs to a two-stream temporal relation network in order to improve the accuracy of action recognition. We demonstrate the efficacy of our approach through comparisons against single-stream temporal relation networks using the JAAD dataset . Second, we introduce a new loss function to the existing 3D localization approach, MonoLoco . Our loss function encodes pedestrian key-point information which considers the asymmetric distribution of the distance error converted from the key points on the image plane. We evaluate our 3D localization method on the KITTI dataset . Finally, we show qualitative results of experiments on action recognition and 3D localization on HRI’s H3D driving dataset .
II Related Work
2D Bounding Box Detection and Tracking.
Novel deep learning methods have improved the accuracy of 2D bounding box detection. 2D object detection algorithms such as YOLOv3 , SSD , and Mask R-CNN  achieve fast and accurate 2D object detection using a monocular camera. CBNet  realizes significant improvements in detection accuracy using ResNeXt as a backbone network for feature extraction. CSP  can detect the 2D bounding box of a pedestrian, including occluded human body parts. Pedestrians are often occluded by other traffic participants and road objects; therefore, the whole bounding box, including the occluded body parts, helps to track pedestrians based on bounding box size and position.
Pose Estimation. Recent methods [19, 20, 21] analyze the joints of a human body and output key points in image coordinates. Moreover, 3D pose estimation from a monocular image has been addressed with an adversarial learning framework  that estimates 3D human pose using structures learned from a fully annotated dataset together with images carrying only 2D pose annotations. The state of the art in pose estimation is DensePose . DensePose outputs a 3D surface-based representation in the surface coordinates of the SMPL model , as well as pedestrian key points. Therefore, DensePose can represent the human body in much more detail than key points alone.
Action Recognition. There are two main approaches for action recognition: image-based [24, 8, 25] and skeleton-based [26, 27, 9]. Both capture spatial and temporal information from sequential input images or pedestrian key points. Image-based methods have the advantage that they can detect pedestrian actions by exploiting contextual information of the road environment. Skeleton-based approaches classify actions from human pose features and have a benefit in terms of computation speed.
3D Localization. 3D localization from a monocular camera image sequence is a challenging research topic. Mono3D  assumes objects are on the ground plane in order to regress the 3D location and 3D bounding box of the object. This assumption creates difficulties when pedestrians are on different planes, such as sidewalks or on slopes. MonoDepth  estimates depth from a single monocular image. Stereo-based research  outperforms methods using a single monocular camera in terms of accuracy. However, stereo systems are more costly and introduce additional complexities such as requiring high precision calibration. MonoLoco  is a computationally efficient approach using a lightweight network that predicts 3D locations from 2D human pose, taking into account the uncertainty in depth.
III Approach for Action Recognition and 3D Localization
In this section, we present our method for pedestrian action recognition and localization. Fig. 1 shows an overview of our approach. We estimate pedestrian action from raw RGB and DensePose images using a two-stream temporal relation network. Simultaneously, we predict 3D locations from key points of pedestrians using our proposed loss function.
III-A 2D Bounding Box Detection and Tracking
2D bounding box detection of pedestrians is the first step in the proposed method. We use CSP  to detect the 2D bounding box because CSP generates the entire bounding box even when the pedestrian is occluded. We use the pre-trained model from the CityPersons dataset . Subsequently, the pedestrian bounding box is tracked between successive frames using DeepSort , which is computationally more efficient than feature-based tracking methods. DeepSort also pairs well with CSP because CSP is more robust to occluded pedestrians. The DeepSort CNN model has been trained on a large-scale person re-identification dataset .
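For illustration, the frame-to-frame association that a tracker such as DeepSort performs can be sketched as a matching problem. The version below is deliberately simplified to greedy IoU matching (the actual DeepSort additionally uses appearance embeddings and Kalman-filter motion gating); all function names are our own.

```python
# Simplified sketch of the association step a tracker like DeepSort performs:
# greedy IoU matching between previous-frame track boxes and current CSP
# detections. Boxes are (x1, y1, x2, y2) in pixels.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, iou_threshold=0.3):
    """Greedily match track boxes to detections by descending IoU.

    Returns a list of (track_index, detection_index) pairs.
    """
    pairs = sorted(
        ((iou(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True,
    )
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score < iou_threshold or ti in used_t or di in used_d:
            continue
        matches.append((ti, di))
        used_t.add(ti)
        used_d.add(di)
    return matches
```

Because CSP outputs the full bounding box even under occlusion, the IoU between successive frames of the same pedestrian stays high, which is why the two components pair well.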
III-B Pose Estimation
We use DensePose  to estimate the pose and key point locations on the body. DensePose has a cross-cascading architecture that includes the DensePose network, which generates the body pose, and auxiliary networks, which generate key points and masks. In particular, the DensePose network generates a colored image representing 24 parts of the human body surface in U, V coordinates. The auxiliary networks simultaneously generate key points. Fig. 2 shows samples of pose estimation results. As shown in Fig. 2, the DensePose image carries rich information compared with the key points of a pedestrian. We use a pre-trained model from the DensePose-COCO dataset introduced in .
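As an illustration of how a DensePose result can be packed into a network-input image, the sketch below assumes the common IUV layout (a per-pixel part index I in 0..24, with 0 as background, plus U, V surface coordinates in [0, 1]); the exact output format of a given DensePose release may differ.

```python
import numpy as np

# Sketch of turning a DensePose IUV result into the colored body-surface
# image used as network input. Assumed layout: per-pixel part index I in
# {0..24} (0 = background) and surface coordinates U, V in [0, 1].

def iuv_to_image(i_map, u_map, v_map):
    """Encode (I, U, V) as an RGB uint8 image: R = part index, G = U, B = V."""
    h, w = i_map.shape
    img = np.zeros((h, w, 3), dtype=np.uint8)
    fg = i_map > 0                                # foreground (body) pixels
    img[..., 0] = (i_map.astype(np.float32) / 24.0 * 255).astype(np.uint8)
    img[..., 1] = np.where(fg, (u_map * 255).astype(np.uint8), 0)
    img[..., 2] = np.where(fg, (v_map * 255).astype(np.uint8), 0)
    return img
```

The resulting 3-channel image can be cropped and fed to the second network stream exactly like a raw RGB crop.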
Iii-C Pedestrian Action Recognition
We propose a two-stream temporal relation network using images of raw RGB and DensePose in U, V coordinates, based on existing research . The unique feature of our approach is the use of DensePose images as one of the inputs to the two-stream temporal relation network, because DensePose represents the detailed human pose, such as the direction and size of each body part, better than joint key points alone. We crop the raw RGB and DensePose images according to the size of the 2D bounding box. We adopt Inception with Batch Normalization pre-trained on ImageNet as the base network for feature extraction for both the raw RGB and DensePose images. We add a linear layer at the output of the final layer of each Batch-Normalized Inception network and concatenate the outputs $f_t^{RGB}$ and $f_t^{DP}$, where $t$ is the frame index. Moreover, we define $f_t = [f_t^{RGB}, f_t^{DP}]$ as the input for the temporal relation network. Our network structure is shown in Fig. 3. The two-frame temporal relation is defined as

$$T_2(V) = h_\phi \Big( \sum_{i<j} g_\theta(f_i, f_j) \Big), \qquad (1)$$

where $N$ is the total number of frames required to capture a temporal relation, $V = \{f_1, \dots, f_N\}$, and $h_\phi$ and $g_\theta$ are learned multilayer perceptrons. Furthermore, we use a multi-scale temporal relation network  to understand temporal relations at multiple time scales:

$$MT_N(V) = T_2(V) + T_3(V) + \dots + T_N(V), \qquad (2)$$

where each $T_d$ represents the temporal relation between $d$ ordered frames.
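To make the multi-scale relation computation concrete, here is an illustrative NumPy sketch in which the learned MLPs are replaced by fixed random linear maps with a ReLU; the feature dimensions, hidden sizes, and single-layer structure are assumptions for illustration, not the paper's actual architecture.

```python
import itertools
import numpy as np

# Illustrative sketch of the multi-scale temporal relation
# MT_N(V) = T_2(V) + ... + T_N(V), where the d-frame relation is
# T_d(V) = h_d( sum over ordered d-tuples of g_d(f_i1, ..., f_id) ).
# The learned MLPs g_d, h_d are stand-in random linear maps, and f_t
# stands in for the concatenated two-stream (RGB + DensePose) feature
# of frame t. All dimensions are illustrative.

rng = np.random.default_rng(0)

def relation_d(features, d, hidden=16, out=8):
    """d-frame temporal relation T_d over all ordered d-tuples of frames."""
    n, dim = features.shape
    g = rng.standard_normal((d * dim, hidden))       # stand-in for MLP g_d
    h = rng.standard_normal((hidden, out))           # stand-in for MLP h_d
    pooled = np.zeros(hidden)
    for idx in itertools.combinations(range(n), d):  # ordered frame tuples
        tup = np.concatenate([features[i] for i in idx])
        pooled += np.maximum(tup @ g, 0.0)           # ReLU(g_d(...))
    return pooled @ h

def multiscale_trn(features, n_scales):
    """MT_N: sum of relations at scales 2..N over the frame features."""
    return sum(relation_d(features, d) for d in range(2, n_scales + 1))

frames = rng.standard_normal((4, 32))   # N = 4 frames, 32-dim features each
logits = multiscale_trn(frames, 4)      # class scores before softmax
```

The key property preserved here is that each scale pools over *ordered* frame tuples, so relations between both adjacent and distant frames contribute to the final class scores.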
III-D 3D Localization of Pedestrians
The aim of the 3D localization module is to estimate the 3D position of each pedestrian with respect to the ego-vehicle from a single monocular camera. We assume aleatoric uncertainty captured by a probability distribution, following MonoLoco . MonoLoco uses symmetrically distributed loss functions such as the Gaussian and Laplace losses. However, a pixel error in the key points on the image plane affects the distance estimation accuracy in a way that depends on the distance to the pedestrian: the estimated distance of a nearby pedestrian is only slightly affected by a pixel error in the key points, while that of a distant pedestrian is heavily affected. Therefore, the distance errors are not distributed symmetrically, but rather asymmetrically. In our approach, we use an asymmetric distribution, taking the negative log-likelihood of the Johnson SU distribution as the loss function representing the aleatoric uncertainty. The Johnson SU distribution can represent symmetric or asymmetric distributions using four parameters, making it highly flexible and robust for modeling the distance error. The inputs of our method are the key points of pedestrians on the image plane, and the neural network regresses the distance from the ego-vehicle to each pedestrian. Our proposed Johnson SU loss function is described as:

$$L_{JSU} = \frac{1}{2}\big(\gamma + \delta \sinh^{-1}(z)\big)^2 + \frac{1}{2}\log(1 + z^2) + \log\lambda - \log\delta + \frac{1}{2}\log 2\pi, \qquad (3)$$

where $z$ is defined as:

$$z = \frac{x - \xi}{\lambda}, \qquad (4)$$

where $x$ is the ground truth of the distance, $\gamma$, $\delta$, and $\lambda$ are parameters learned by the model, and $\xi$ is the estimated distance. The main advantage of the Johnson SU loss function is that it improves 3D localization accuracy for distant objects.
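As a concrete reference, the Johnson SU negative log-likelihood described above can be sketched as follows; the variable names (gamma, delta, lam, xi) are illustrative stand-ins for the four distribution parameters.

```python
import numpy as np

# Sketch of the Johnson SU negative log-likelihood used as the 3D
# localization loss. z = (d - xi) / lam, where d is the ground-truth
# distance, xi the regressed distance, and gamma, delta, lam the
# remaining shape and scale parameters predicted by the network.

def johnson_su_nll(d, xi, gamma, delta, lam):
    z = (d - xi) / lam
    return (0.5 * (gamma + delta * np.arcsinh(z)) ** 2
            + 0.5 * np.log1p(z ** 2)
            + np.log(lam)
            - np.log(delta)
            + 0.5 * np.log(2.0 * np.pi))
```

With gamma = 0 the loss is symmetric around the regressed distance; a nonzero gamma skews it, which is what lets the network penalize over- and under-estimates of distance unequally.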
TABLE I: Accuracy (%) of temporal relation networks with RGB, Flow, and DensePose input streams.
In this section, we provide a qualitative and quantitative evaluation of our action recognition and 3D localization algorithm shown in Fig. 1 using publicly available datasets.
IV-A Action Recognition
We assess our two-stream action recognition algorithm with images of raw RGB and DensePose using the JAAD dataset . The JAAD dataset provides timestamped behavior labels and 2D bounding boxes of pedestrians. Moreover, the JAAD dataset includes demographic attributes for each pedestrian (e.g., gender, age). We selected the following eight pedestrian action labels from the JAAD dataset:
Looking at Ego-Vehicle
Making Hand Gesture
The total number of prepared frames is 47K: 42K frames for training and 5K frames for evaluation. As defined in Eq. (1) and Eq. (2), we use N total frames to capture a temporal relation in this experiment. We created DensePose images using the pre-trained DensePose model from the DensePose-COCO dataset. We compared our two-stream temporal relation network using images of raw RGB and DensePose against a two-stream network using raw RGB and optical flow. We also evaluated the accuracy of single-stream temporal relation network methods. The optical flow was calculated using PWC-Net . The comparative results are shown in TABLE I. As shown, our two-stream temporal relation network has the best performance.
Ablation Study. We evaluated our two-stream action recognition method using images of raw RGB and DensePose in a temporal segment network (TSN)  instead of the temporal relational network (TRN). This ablation study was done to confirm that DensePose images contribute to improvements in action recognition accuracy. The result using the temporal segment network is shown in TABLE II. The experiment with images of raw RGB and DensePose performs better than other methods.
TABLE II: Accuracy (%) of temporal segment networks with RGB, Flow, and DensePose input streams.
IV-B 3D Localization
We evaluated our 3D localization algorithm on the KITTI dataset by analyzing the average localization error (ALE) with respect to the ground-truth distance. We compared the results against existing state-of-the-art methods: Mono3D , MonoDepth , 3DOP , Geometric , and the MonoLoco baseline .
TABLE III: Average localization error (ALE) [m] at ground-truth distances of 3, 8, 12.5, 17.5, 22.5, 27.5, and 35 m.
Mono3D. Mono3D is a 3D object detector using a single monocular camera. Mono3D assumes that objects are lying on the ground plane.
MonoDepth. MonoDepth provides single-image depth estimation. We calculated the depth corresponding to the key points from DensePose and converted it to distance using normalized image coordinates.
3DOP. 3DOP uses a stereo image for pedestrian detection. We referred to their publicly available results.
MonoLoco-baseline. The original MonoLoco uses PifPaf  for key point detection; we replace PifPaf with DensePose in our approach.
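For the MonoDepth baseline above, the conversion from a per-pixel depth value to Euclidean distance via normalized image coordinates can be sketched as follows, assuming a pinhole camera model; the intrinsics used here are hypothetical, KITTI-like values.

```python
import math

# Converting a depth value z at pixel (u, v) into the Euclidean distance
# from the camera, via normalized image coordinates under a pinhole model.
# With x_n = (u - cx) / fx and y_n = (v - cy) / fy, the 3D point is
# (x_n * z, y_n * z, z), so distance = z * sqrt(1 + x_n^2 + y_n^2).
# Intrinsics below are hypothetical, KITTI-like values.

def depth_to_distance(z, u, v, fx=721.5, fy=721.5, cx=609.6, cy=172.9):
    x_n = (u - cx) / fx
    y_n = (v - cy) / fy
    return z * math.sqrt(1.0 + x_n ** 2 + y_n ** 2)
```

At the principal point the distance equals the depth; toward the image borders the distance exceeds the depth, which is why the conversion matters when comparing against range-based ground truth.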
The comparison results are shown in TABLE III. Our method achieves its largest improvements at near and far pedestrian distances, while our results at middle distances are similar to MonoLoco. We conjecture that the closer pedestrians are to the ego-vehicle, the more truncated they appear in the image; therefore, the distance error distribution is asymmetric for close pedestrians. In addition, the distance error distribution becomes more asymmetric for more distant pedestrians. We hypothesize that the Johnson SU loss function can represent the distance error precisely, especially for near and distant pedestrians.
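The asymmetry argument can be illustrated with a simple pinhole relation d = f * H / h, where h is a pedestrian's pixel height and H an assumed physical height: a symmetric +/-1 px perturbation of h produces distance errors of unequal magnitude that grow with distance. All numbers below are hypothetical and independent of the actual KITTI evaluation.

```python
# Illustration of why key-point pixel errors yield asymmetric distance
# errors. Under a pinhole model, distance d = f * H / h, with focal
# length f [px], physical height H [m], and pixel height h [px]. A
# symmetric +/-1 px error in h maps to asymmetric errors in d, and
# increasingly so for distant (small-h) pedestrians. Values hypothetical.

F, H = 720.0, 1.7   # focal length [px], assumed pedestrian height [m]

def distance_from_pixel_height(h):
    return F * H / h

def error_pair(d, px_err=1.0):
    """Distance errors caused by +/- px_err pixel noise at true distance d."""
    h = F * H / d                                       # pixel height at d
    over = distance_from_pixel_height(h - px_err) - d   # h underestimated
    under = d - distance_from_pixel_height(h + px_err)  # h overestimated
    return over, under

near = error_pair(5.0)    # error pair at 5 m
far = error_pair(30.0)    # error pair at 30 m
```

The overestimation branch is always larger than the underestimation branch, and the gap widens with distance, which is the behavior an asymmetric loss can capture while a Gaussian or Laplace loss cannot.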
IV-C Qualitative Evaluation
We conduct qualitative tests on HRI’s H3D driving dataset . The results are shown in Fig. 4. Qualitatively, our action recognition and 3D localization method can recognize and localize actions for pedestrians reasonably well, even if they are occluded. Our approach has a significant advantage for partially occluded pedestrians because DensePose outputs and key points can be computed for the visible part of each pedestrian.
We introduced a monocular pedestrian action recognition and 3D localization approach from an egocentric view using raw RGB images and pose as inputs. The action recognition module makes use of a two-stream temporal relation network with inputs corresponding to the tracked pedestrian in the image as well as DensePose outputs. The proposed method outperforms single-stream temporal relation network methods in evaluations on the JAAD dataset. We also extended and improved the MonoLoco method for estimating the 3D locations of pedestrians by using a loss function based on the Johnson SU distribution. Evaluations on the KITTI dataset indicate that our method improves the average localization error compared to existing state-of-the-art methods. In future work, we plan to use these results to predict the intentions of agents in the scene and ultimately forecast their future trajectories.
-  Yanliang Zhu, Deheng Qian, Dongchun Ren, and Huaxia Xia. StarNet: Pedestrian Trajectory Prediction using Deep Neural Network in Star Topology. arXiv:1906.01797 [cs], June 2019.
-  Alexandre Alahi, Kratarth Goel, Vignesh Ramanathan, Alexandre Robicquet, Li Fei-Fei, and Silvio Savarese. Social LSTM: Human Trajectory Prediction in Crowded Spaces. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 961–971, Las Vegas, NV, USA, June 2016. IEEE.
-  Agrim Gupta, Justin Johnson, Li Fei-Fei, Silvio Savarese, and Alexandre Alahi. Social GAN: Socially Acceptable Trajectories with Generative Adversarial Networks. arXiv:1803.10892 [cs], March 2018.
-  Chiho Choi and Behzad Dariush. Looking to Relations for Future Trajectory Forecast. In International Conference on Computer Vision, 2019.
-  Yu Yao, Mingze Xu, Chiho Choi, David J. Crandall, Ella M. Atkins, and Behzad Dariush. Egocentric Vision-based Future Vehicle Localization for Intelligent Driving Assistance Systems. In 2019 International Conference on Robotics and Automation (ICRA), pages 9711–9717, Montreal, QC, Canada, May 2019. IEEE.
-  Srikanth Malla, Behzad Dariush, and Choi Chiho. Titan: Future forecast using action priors. In 2020 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
-  Rıza Alp Güler, Natalia Neverova, and Iasonas Kokkinos. Densepose: Dense human pose estimation in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7297–7306, 2018.
-  Bolei Zhou, Alex Andonian, Aude Oliva, and Antonio Torralba. Temporal Relational Reasoning in Videos. arXiv:1711.08496 [cs], July 2018.
-  Sijie Yan, Yuanjun Xiong, and Dahua Lin. Spatial Temporal Graph Convolutional Networks for Skeleton-Based Action Recognition. arXiv:1801.07455 [cs], January 2018.
-  Iuliia Kotseruba, Amir Rasouli, and John K. Tsotsos. Joint Attention in Autonomous Driving (JAAD). arXiv:1609.04741 [cs], April 2017.
-  Lorenzo Bertoni, Sven Kreiss, and Alexandre Alahi. MonoLoco: Monocular 3D Pedestrian Localization and Uncertainty Estimation. arXiv:1906.06059 [cs], August 2019.
-  Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun. Vision meets robotics: The kitti dataset. International Journal of Robotics Research (IJRR), 2013.
-  Abhishek Patil, Srikanth Malla, Haiming Gang, and Yi-Ting Chen. The h3d dataset for full-surround 3d multi-object detection and tracking in crowded urban scenes. In International Conference on Robotics and Automation, 2019.
-  Joseph Redmon and Ali Farhadi. YOLOv3: An Incremental Improvement. arXiv:1804.02767 [cs], April 2018.
-  Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C. Berg. SSD: Single Shot MultiBox Detector. arXiv:1512.02325 [cs], 9905:21–37, 2016.
-  Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. arXiv:1703.06870 [cs], January 2018.
-  Yudong Liu, Yongtao Wang, Siwei Wang, TingTing Liang, Qijie Zhao, Zhi Tang, and Haibin Ling. CBNet: A Novel Composite Backbone Network Architecture for Object Detection. arXiv:1909.03625 [cs], September 2019.
-  Wei Liu, Shengcai Liao, Weiqiang Ren, Weidong Hu, and Yinan Yu. High-level semantic feature detection: A new perspective for pedestrian detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
-  Shih-En Wei, Varun Ramakrishna, Takeo Kanade, and Yaser Sheikh. Convolutional pose machines. In CVPR, 2016.
-  Alexander Toshev and Christian Szegedy. DeepPose: Human Pose Estimation via Deep Neural Networks. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pages 1653–1660, Columbus, OH, USA, June 2014. IEEE.
-  Sven Kreiss, Lorenzo Bertoni, and Alexandre Alahi. Pifpaf: Composite fields for human pose estimation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
-  Wei Yang, Wanli Ouyang, Xiaolong Wang, Jimmy Ren, Hongsheng Li, and Xiaogang Wang. 3D Human Pose Estimation in the Wild by Adversarial Learning. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5255–5264, Salt Lake City, UT, USA, June 2018. IEEE.
-  Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J. Black. SMPL: A skinned multi-person linear model. ACM Transactions on Graphics, 34(6):1–16, October 2015.
-  Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, and Luc Van Gool. Temporal Segment Networks: Towards Good Practices for Deep Action Recognition. arXiv:1608.00859 [cs], August 2016.
-  Amir Rasouli, Iuliia Kotseruba, and John K Tsotsos. Pedestrian action anticipation using contextual feature fusion in stacked rnns. In BMVC, 2019.
-  Maosen Li, Siheng Chen, Xu Chen, Ya Zhang, Yanfeng Wang, and Qi Tian. Actional-Structural Graph Convolutional Networks for Skeleton-based Action Recognition. arXiv e-prints, page arXiv:1904.12659, Apr 2019.
-  Chenyang Si, Wentao Chen, Wei Wang, Liang Wang, and Tieniu Tan. An Attention Enhanced Graph Convolutional LSTM Network for Skeleton-Based Action Recognition. arXiv:1902.09130 [cs], March 2019.
-  Xiaozhi Chen, Kaustav Kundu, Ziyu Zhang, Huimin Ma, Sanja Fidler, and Raquel Urtasun. Monocular 3D Object Detection for Autonomous Driving. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2147–2156, Las Vegas, NV, USA, June 2016. IEEE.
-  Clément Godard, Oisin Mac Aodha, and Gabriel J. Brostow. Unsupervised Monocular Depth Estimation with Left-Right Consistency. arXiv:1609.03677 [cs, stat], September 2016.
-  Xiaozhi Chen, Kaustav Kundu, Yukun Zhu, Andrew G Berneshawi, Huimin Ma, Sanja Fidler, and Raquel Urtasun. 3d object proposals for accurate object class detection. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 424–432. Curran Associates, Inc., 2015.
-  Shanshan Zhang, Rodrigo Benenson, and Bernt Schiele. CityPersons: A Diverse Dataset for Pedestrian Detection. arXiv:1702.05693 [cs], February 2017.
-  Nicolai Wojke, Alex Bewley, and Dietrich Paulus. Simple online and realtime tracking with a deep association metric. In 2017 IEEE International Conference on Image Processing (ICIP), pages 3645–3649. IEEE, 2017.
-  Liang Zheng, Zhi Bie, Yifan Sun, Jingdong Wang, Chi Su, Shengjin Wang, and Qi Tian. Mars: A video benchmark for large-scale person re-identification. In Computer Vision - ECCV 2016, September 2016.
-  Deqing Sun, Xiaodong Yang, Ming-Yu Liu, and Jan Kautz. PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In IEEE Conference on Computer Vision and Pattern Recognition, 2018.