Analysis of Multi-Scale Fractal Dimension to Classify Human Motion

07/06/2012 ∙ by Núbia Rosa da Silva, et al. ∙ Universidade de São Paulo

In recent years there has been considerable interest in human action recognition, and several approaches have been developed to enhance automatic video analysis. Although progress has been made by the computer vision community, the proper classification of human motion remains a hard and challenging task. The objective of this study is to investigate the use of the 3D multi-scale fractal dimension to recognize motion patterns in videos. In order to develop a robust strategy for human motion classification, we propose a method in which the Fourier transform is used to compute the derivative, so that all data points are taken into account. Our results show that different accuracy rates are obtained for different databases. We believe that, for specific applications, these results are a first step towards an automatic monitoring system, with potential uses in security systems, traffic monitoring, biology, physical therapy, and cardiovascular disease, among many others.

I Introduction

In recent years, there has been a growth of research activity aimed at developing human motion classifiers to enhance automatic video analysis. Several approaches for tracking movement have been proposed in the literature. Basically, they differ in the type of object representation, in the varying size, position and shape of the moving objects, and in the type of motion and appearance model applied.

All these perspectives are chosen according to the context and to the end use of the monitoring to be conducted. Regarding the context in which movement can be recognized, different and interesting applications have been considered, for instance human-computer interfaces, gesture recognition, video indexing and browsing, analysis of sports events, and video surveillance. In these situations, recognizing events of particular interest, assessing their complexity, and making inferences about their evolution play a crucial role in image processing research. All these tasks can be carried out by exploiting the knowledge that can be obtained from motion patterns.

Although developments have been achieved by the computer vision community, the proper classification of human motion is still a hard and challenging task. This is because (i) it is not always possible to control the acquisition of the image sequence, and (ii) the images can suffer from poor illumination, blur, occlusion, or several other degradations. Moreover, real-world situations differ substantially from the controlled conditions tested in the laboratory.

An appropriate method is required to assess the motion in different videos. In this study we focus on the classification of the following single human motions: “walk”, “skip”, “kick”, “playing basketball”, “run”, “jack”, “jump”, “side” and “wave”, under unconstrained indoor environments. The purpose of this paper is to investigate the use of the 3D multi-scale fractal dimension to recognize motion patterns in videos. In order to develop a more robust strategy for human motion classification, we use the Fourier transform to calculate the derivative, so that all data points are taken into account, instead of relying on numerical methods.

Each motion class is characterized by a signature obtained through a multi-scale fractal dimension-based approach. Different motions provide distinct signatures, and therefore we can discern dissimilar motions. The multi-scale fractal dimension was computed using the Bouligand-Minkowski method, a robust, accurate and consistent way to estimate the fractal dimension according to the literature Backes and Bruno (2008); Tricot (1995); Costa and Jr. (2000). The motion signature is a multi-scale fractal dimension curve that represents the changes in shape complexity, frame by frame, at the different scales observed.

The rest of the paper is organized as follows. The extraction of signatures by multi-scale fractal dimension is explained in Section II. Experimental results and discussion are presented in Section III, and conclusions in Section IV.

II Signatures by Multi-Scale Fractal Dimension

Fractal analysis has been widely applied to describe different problems in pattern recognition, image processing and many other domains. Established by Benoit Mandelbrot, fractal geometry is useful in problems that require the analysis of the complexity of structures across different scales Mandelbrot (1983). It is necessary to point out that the metric properties of fractal objects are a function of the scale used to perform the measurement. Thus, we can describe an object with fractional values depicting its level of complexity and spatial distribution in the image Costa and Jr. (2000); de Oliveira Plotze et al. (2005); Backes and Bruno (2010); Lopes and Betrouni (2009).

II.1 Fractal Dimension

The fractal dimension indicates how much space is occupied by the object, representing the degree of complexity of the figure. For uniform and compact objects the fractal dimension coincides with the topological dimension; for fractal objects, however, it is a fractional value. Several methods can be used to estimate the fractal dimension of an object, including box-counting, mass-radius, dividers and the Bouligand-Minkowski approach. In this study we adopt the Bouligand-Minkowski method owing to its precision and its adaptation to the multi-scale approach Tricot (1995); Costa et al. (2002); Backes et al. (2010). The Minkowski approach is commonly used in shape and texture analysis applications. While for shape the Minkowski approach considers a two-dimensional space, for texture we usually deal with a three-dimensional space, since the third coordinate corresponds to the gray-level intensity at each point. In our case, we also have a three-dimensional space, but the third coordinate is time, i.e., the sequence of images in which the actions occur during the video.

Using Bouligand-Minkowski, we analyze the relationship between the object and the space it occupies. The fractal dimension is obtained by calculating the volume of the dilated object. The dilation is performed by considering a sphere of radius r (Figure 3 a)) centred at each point of the original object; all points inside the sphere are joined to the object. In order to generate a signature for the shape, we analyze the object volume as a function of r. The algorithm used to perform this task relies on the exact distance transform (EDT) Bruno and da Fontoura Costa (2004); Saito and Toriwaki (1994); Fabbri et al. (2008); Torelli et al. (2010), which gives the distance from every point of the image to the closest point of the object. After that, we compute the fractal dimension by analyzing the log-log curve of the influence volume versus r (Figure 1). The fractal dimension is defined as:

\[
D = N - \lim_{r \to 0} \frac{\log V(r)}{\log r} \tag{1}
\]

where V(r) is the influence volume of the object dilated with radius r and N is the number of dimensions.

Figure 1: Log-log curve of a sequence of video images generated by the Bouligand-Minkowski method.
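As a minimal illustration of Eq. (1), the dimension can be estimated as N minus the slope of a least-squares line fitted to the log-log curve. The Python sketch below assumes NumPy; the function name and interface are ours, not the authors'.

```python
import numpy as np

def fractal_dimension(radii, volumes, n_dims=3):
    """Estimate the Bouligand-Minkowski fractal dimension from the
    log-log curve of influence volume V(r) versus dilation radius r:
    D = n_dims - slope of log V(r) against log r (Eq. 1)."""
    slope, _ = np.polyfit(np.log(radii), np.log(volumes), 1)
    return n_dims - slope
```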

Over the video frames it is possible to find different views of the same object. In one image of the video sequence the moving object can be close to the camera, while in another frame it can be far from it. Because the scale of the object varies during the image sequence, a single value for the whole video is not enough; we therefore use an extension of the fractal dimension, the multi-scale fractal dimension.

II.2 Volumetric Multi-Scale Fractal Dimension

The log-log curve produced by the Bouligand-Minkowski method presents a wealth of detail that cannot be represented by the single value provided by the fractal dimension. For this reason, the multi-scale fractal dimension uses the derivative to explore the limit of the infinitesimal linear interpolation. Thus it is possible to obtain the relationship between the variations in the complexity of the object at different scales da S. Torres et al. (2004); de Oliveira Plotze et al. (2005). The multi-scale fractal dimension curve is defined through the derivative of the log-log curve. To compute the influence volume we use the Bouligand-Minkowski method in three dimensions, and we use the derivative property of the Fourier transform to calculate the derivatives.

In the proposed method, each image of the sequence of images (video) is considered as a surface. Each pixel of the image is converted to a point (x, y, t), where x and y are the coordinates of the object in the image and t is the frame index in the sequence of images, as shown in Figure 2.

Figure 2: Temporal variation in the sequence of video images.
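A possible sketch of this conversion, assuming binary silhouette frames of equal size as input (NumPy; the function name is ours):

```python
import numpy as np

def video_to_volume(frames):
    """Stack binary silhouette frames into one 3D boolean array with
    axes (y, x, t): each foreground pixel becomes a point (x, y, t)."""
    return np.stack([np.asarray(f, dtype=bool) for f in frames], axis=-1)
```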

The volume of the dilated shape in time (Figure 3 b)) using a sphere of radius r can be written as:

\[
V(r) = \sum_{p \in \mathbb{R}^3} h\big(r - d(p)\big) \tag{2}
\]

where d(p) is the minimum distance from the point p to any point belonging to the object and h is the Heaviside function Butkov (1968), which returns 1 if d(p) ≤ r and 0 otherwise.

Figure 3: Dilation of shape: a) in 2D and b) in time. Red, orange, yellow and blue indicate successively larger dilation radii.
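Eq. (2) can be sketched with an exact Euclidean distance transform; the paper cites dedicated EDT algorithms, and scipy.ndimage is used here only as a stand-in:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def influence_volumes(volume, radii):
    """Influence volume V(r) of Eq. (2): the number of voxels whose
    exact distance to the object does not exceed r.
    `volume` is a boolean 3D array with True on the object."""
    # EDT of the background gives, for every voxel, the distance d(p)
    # to the closest object voxel (object voxels get distance 0).
    dist = distance_transform_edt(~volume)
    # Heaviside sum: count voxels with d(p) <= r for each radius r.
    return np.array([np.count_nonzero(dist <= r) for r in radii])
```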

We compute the Bouligand-Minkowski fractal dimension as:

\[
D = 3 - \lim_{r \to 0} \frac{\log V(r)}{\log r} \tag{3}
\]

As a three-dimensional space is being considered, D lies within [0, 3], with 3 being the number of dimensions. Depending on the radius r, the volume of the sphere produced by one point affects the volume of other spheres, disturbing the way the influence volume increases. This makes the influence volume very sensitive to structural changes Backes et al. (2009).

Thereafter, the multi-scale fractal dimension is computed as:

\[
D(r) = 3 - \frac{d\,\log V(r)}{d\,\log r} \tag{4}
\]

The result is a curve with the fractal dimension calculated at each spatial scale represented by the radius r (Figure 4). The multi-scale fractal dimension curves are used as signatures for the study of the motion patterns.

Figure 4: Multi-scale Fractal Dimension of a sequence of video images.
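For clarity, Eq. (4) can be sketched with a plain finite-difference derivative of the log-log curve (np.gradient); the authors instead use the Fourier-transform derivative described next:

```python
import numpy as np

def multiscale_fractal_dimension(radii, volumes, n_dims=3):
    """Multi-scale fractal dimension curve of Eq. (4):
    D(r) = n_dims - d(log V)/d(log r), evaluated at every radius."""
    log_r, log_v = np.log(radii), np.log(volumes)
    return n_dims - np.gradient(log_v, log_r)  # finite-difference slope
```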

To calculate the derivative we use a property of the Fourier transform, which allows us to obtain the derivative of any function from the analysis of its frequency spectrum Gonzalez and Woods (2002). We also convolve the original signal with a Gaussian kernel in order to smooth the derivative.

Two issues must be addressed when calculating the derivative. The first is the spacing between points of the signal: the log-log curve is very sparsely sampled at the beginning, as shown in Figure 5. The sparse points were ignored, and the remaining points were interpolated by filling the space between each pair of points with their average. The second issue is that the Fourier transform does not converge uniformly at discontinuities, causing the so-called Gibbs phenomenon at the ends of the signal (Figure 6). To solve this problem, the curve was replicated before and after the original curve (Figure 7).

Figure 5: Very sparse points are ignored.
Figure 6: The Fourier transform does not converge uniformly at discontinuities, causing the so-called Gibbs phenomenon.
Figure 7: Replication of the curve before and after the original curve to mitigate the Gibbs phenomenon.
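The following is one possible reading of this procedure, not the authors' code: it combines the Fourier differentiation property (multiplication of the spectrum by i2πf), Gaussian attenuation of the spectrum for smoothing, and replication of the signal before and after itself against the Gibbs effect:

```python
import numpy as np

def fourier_derivative(signal, sigma=1.0):
    """Smoothed derivative via the Fourier differentiation property.
    The signal is replicated before and after itself to soften the
    Gibbs effect; only the central copy of the result is returned."""
    n = len(signal)
    padded = np.concatenate([signal] * 3)            # replicate 3 times
    f = np.fft.fftfreq(3 * n)                        # cycles per sample
    deriv_factor = 2j * np.pi * f                    # differentiation
    gauss = np.exp(-2.0 * (np.pi * f * sigma) ** 2)  # FT of a Gaussian
    spectrum = np.fft.fft(padded) * deriv_factor * gauss
    return np.fft.ifft(spectrum).real[n:2 * n]       # keep central copy
```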

III Results and Discussion

The analysis of motion is still a challenging problem in the field of image processing and pattern recognition. Over the years, several approaches using different constructs have been proposed for action recognition, including machine learning Minhas et al. (2010); Cao et al. (2009), optical flow Horn and Schunck (1981); Denman et al. (2005); Roberts et al. (2009), and appearance models Filipovych and Ribeiro (2009, 2008); Zhao et al. (2008). They differ especially in the kind of object representation, the image features, and the type of motion model applied.

In this paper we have studied a new strategy to characterize motion in an image sequence, as well as to recognize motion patterns in order to classify the movement. First, we used only the fractal dimension in three-dimensional space to characterize the motion. However, it was unable to capture structural differences in the shape of the moving object, and the method was not scale-invariant. To overcome these problems, we investigate the shape in time over multiple scales. For each scale we have a fractal dimension, and the set of consecutive fractal dimensions is called the signature of the video sequence. The main goal of this paper is to use these signatures to discriminate between different types of movement.

To confirm the hypothesis that motion signatures are distinct for different movements, we calculated the multi-scale fractal dimension for all actions. We compared the similarity of movements of the same type to ensure that similar signatures are obtained for movements of the same class and different signatures for movements belonging to different classes. Figure 8 shows an example of four different motion signatures.

Figure 8: Motion signature by Multi-scale Fractal Dimension of four different movements.

Basically, there are two parameters in our approach, σ and the maximum radius r_max. The first is the standard deviation of the Gaussian kernel and quantifies the level of smoothing of the signatures. The second is the maximum scale used to analyze an image sequence. We investigated the effects of these two parameters and concluded that, for very high values of σ, the curves are strongly smoothed, which implies a loss of important details of the signature. We tested σ varying from 1 to 6, with the maximum radius equal to 160. Regarding the parameter r_max, finding the optimal value is difficult, because it is not known a priori at what scale the motion must be analyzed.
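Such a sweep amounts to a simple grid search; in the sketch below, `signature`, `evaluate`, `videos` and `labels` are hypothetical placeholders for the pipeline of Section II and the classification step:

```python
import itertools

# Hypothetical helpers: signature(video, sigma, r_max) builds the
# multi-scale FD signature; evaluate(sigs, labels) returns accuracy.
best_acc, best_params = 0.0, None
for sigma, r_max in itertools.product(range(1, 7), range(10, 170, 10)):
    sigs = [signature(v, sigma, r_max) for v in videos]  # hypothetical
    acc = evaluate(sigs, labels)                         # hypothetical
    if acc > best_acc:
        best_acc, best_params = acc, (sigma, r_max)
print(f"best accuracy {best_acc:.4f} at (sigma, r_max) = {best_params}")
```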

Our strategy was demonstrated by performing tests on two publicly available datasets: the CMU Graphics Lab Motion Capture Database (available at http://mocap.cs.cmu.edu/) and the Weizmann Human Action Dataset Gorelick et al. (2007); Blank et al. (2005) (http://www.wisdom.weizmann.ac.il/vision/SpaceTimeActions.html).

A signature was then generated for each video, and we classified the motions according to their signatures. Each video yields a feature vector based on the multi-scale fractal dimension signature, with each point of the signature being an attribute. A support vector machine (SVM) with a 10-fold cross-validation scheme was chosen to classify the examples. SVM is grounded in Statistical Learning Theory, which assumes that the data in the domain in which learning occurs are generated independently and identically distributed according to a probability distribution relating the examples to the classes Chen et al. (2005). Thus, for new data from the same domain, the SVM obtains good results. The result of the 10-fold cross validation is the average over 10 runs of the same experiment, with random selection of the actions in the database.
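The experiments were run in Weka (see below); an equivalent sketch of the classification step in Python with scikit-learn (synthetic placeholder data standing in for the signature vectors) could be:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder data: 66 videos, one 160-point signature each,
# 4 motion classes (the real features come from Section II).
X = np.random.rand(66, 160)
y = np.arange(66) % 4

scores = cross_val_score(SVC(), X, y, cv=10)  # 10-fold cross validation
print(f"accuracy: {scores.mean():.4f} (+/- {scores.std():.4f})")
```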

We carried out two sets of experiments: the first on the Mocap database, and the second on the Weizmann database. All experiments were performed using Weka Hall et al. (2009) with default parameter values.

III.1 Mocap Database

The Motion Capture Database contains 2605 trials in six motion categories and 23 subcategories. We chose 66 video sequences showing eight different subjects performing four distinct actions at varying speeds: “walk”, “skip”, “kick” and “playing basketball”. The cameras are placed around a rectangular area of approximately 3 m × 8 m in the center of the room; only motions that take place in this rectangle can be captured. The Mocap videos contain from 53 to 1020 frames of size 240 × 320.

To find the best value of r we performed tests varying the radius from 10 to 160 with σ equal to 1. The number of correctly classified instances is almost constant as the radius varies, owing to the size of each video frame; we can therefore use the lowest radius.

To evaluate the best value of σ we varied it from 1 to 6. As σ increases, the number of correctly classified instances decreases, due to the excessive smoothing of the motion signature.

From the tests performed we conclude that the best values are r equal to 10 and σ equal to 1. We obtained an accuracy of 90.91% (standard deviation 0.34), with 60 of the 66 instances correctly classified.

III.2 Weizmann Database

The Weizmann human action dataset has 93 video sequences showing nine different subjects, each performing 10 natural actions: “run”, “walk”, “skip”, “jumping-jack” (or shortly “jack”), “jump-forward-on-two-legs” (or “jump”), “jump-in-place-on-two-legs” (or “pjump”), “gallop-sideways” (or “side”), “wave-two-hands”, “wave-one-hand” (or “wave”), and “bend”. The Weizmann videos contain from 28 to 146 frames of size 144 × 180. The length of a video is not correlated with the motion.

We tested different values of r and σ. There is little variation in the number of correct classifications as the radius varies, but we note that a radius equal to 110 yields the highest rate. With r equal to 110, we varied σ from 1 to 6. All combinations of radius and sigma were tested, but only the most relevant values are reported here. Again, the number of correctly classified instances decreases as σ increases; the best result is obtained with σ equal to 2. Therefore, with the radius equal to 110 and σ equal to 2, the classification of the Weizmann database reaches an accuracy of 79.57% (standard deviation 0.28), with 74 of the 93 instances correctly classified.

IV Conclusions

This paper presented a study of motion classification based on a frame-by-frame analysis of the complexity of a shape in a video. Our main goal was to apply the so-called multi-scale fractal dimension in order to classify videos according to their content. We developed a strategy for classifying human motion that represents the movement contained in a video by a multi-scale fractal dimension signature and uses a support vector machine for classification. We applied the method to two real databases: the first with 66 videos and four different types of motion, and the second with 93 videos and ten types of movement, obtaining accuracies of 90.91% and 79.57%, respectively. The first database has only four quite distinct motion classes, whereas the second has ten classes, some of them similar, as in the case of “run”, “side” and “skip”. It will be interesting to perform new experiments on a larger dataset with the intention of finding a general strategy that can potentially be applied to a wide variety of vision problems involving complex motion structures.

Acknowledgements

N.R.S. acknowledges support from FAPESP (Grant #2011/21467-9).
O.M.B. acknowledges support from CNPq (Grant #308449/2010-0 and #473893/2010-0) and FAPESP (Grant # 2011/01523-1).

References

  • Backes and Bruno (2008) A. R. Backes and O. M. Bruno, INFOCOMP, 7, 74 (2008).
  • Tricot (1995) C. Tricot, Curves and Fractal Dimension (Springer-Verlag, New York, 1995) ISBN 0387940952, 9780387940953, p. 323.
  • Costa and Jr. (2000) L. D. F. Costa and R. M. C. Jr., Shape Analysis and Classification: Theory and Practice (Image Processing Series) (CRC Press, 1st edition, 2000) ISBN 0849334934, p. 680.
  • Mandelbrot (1983) B. B. Mandelbrot, The Fractal Geometry of Nature (W. H. Freeman and Co., 1983) ISBN 9780716711865, p. 468.
  • de Oliveira Plotze et al. (2005) R. de Oliveira Plotze, M. Falvo, J. G. Padua, L. C. Bernacci, M. L. C. Vieira, G. C. X. Oliveira,  and O. M. Bruno, Canadian Journal of Botany, 83, 287 (2005).
  • Backes and Bruno (2010) A. R. Backes and O. M. Bruno, in International Conference on Image and Signal Processing, ICISP’10, Vol. 0 (Springer-Verlag, Berlin, Heidelberg, 2010) pp. 463–470, ISBN 3-642-13680-X, 978-3-642-13680-1.
  • Lopes and Betrouni (2009) R. Lopes and N. Betrouni, Medical Image Analysis, 13, 634 (2009).
  • Costa et al. (2002) L. D. F. Costa, E. T. M. Manoel, F. Faucereau, J. Chelly, J. Van Pelt,  and G. Ramakers, Network, 13, 283 (2002).
  • Backes et al. (2010) A. Backes, D. Eler, R. Minghim,  and O. Bruno, in Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, Lecture Notes in Computer Science, Vol. 6419, edited by I. Bloch and R. Cesar (Springer Berlin / Heidelberg, 2010) pp. 14–21, ISBN 978-3-642-16686-0.
  • Bruno and da Fontoura Costa (2004) O. M. Bruno and L. da Fontoura Costa, Microprocessors and Microsystems, 28, 107 (2004).
  • Saito and Toriwaki (1994) T. Saito and J.-I. Toriwaki, Pattern Recognition, 27, 1551 (1994), ISSN 0031-3203.
  • Fabbri et al. (2008) R. Fabbri, L. D. F. Costa, J. C. Torelli,  and O. M. Bruno, ACM Comput. Surv., 40, 2:1 (2008), ISSN 0360-0300.
  • Torelli et al. (2010) J. C. Torelli, R. Fabbri, G. Travieso,  and O. M. Bruno, International Journal of Pattern Recognition and Artificial Intelligence, 24, 897 (2010).
  • da S. Torres et al. (2004) R. da S. Torres, A. X. Falcão,  and L. da F. Costa, Pattern Recognition, 37, 1163 (2004), ISSN 0031-3203.
  • Butkov (1968) E. Butkov, Mathematical Physics (Addison-Wesley, Reading, MA, 1968).
  • Backes et al. (2009) A. R. Backes, D. Casanova,  and O. M. Bruno, International Journal of Pattern Recognition and Artificial Intelligence, 23, 1145 (2009).
  • Gonzalez and Woods (2002) R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed. (Prentice Hall, 2002) ISBN 0201180758.
  • Minhas et al. (2010) R. Minhas, A. A. Mohammed,  and Q. M. Jonathan Wu, Neurocomputing, 73, 1831 (2010), ISSN 0925-2312.
  • Cao et al. (2009) D. Cao, O. T. Masoud, D. Boley,  and N. Papanikolopoulos, Comput. Vis. Image Underst., 113, 1064 (2009), ISSN 1077-3142.
  • Horn and Schunck (1981) B. K. P. Horn and B. G. Schunck, Artificial Intelligence, 17, 185 (1981).
  • Denman et al. (2005) S. Denman, V. Chandran,  and S. Sridharan, in Digital Image Computing on Techniques and Applications (DICTA 2005), Vol. 0 (IEEE Computer Society, Washington, DC, USA, 2005) pp. 1–8, ISBN 0-7695-2467-2.
  • Roberts et al. (2009) R. Roberts, C. Potthast,  and F. Dellaert, Computer Vision and Pattern Recognition (CVPR 2009), 0, 57 (2009).
  • Filipovych and Ribeiro (2009) R. Filipovych and E. Ribeiro, in International Conference on Image Analysis and Recognition (ICIAR 2009), Vol. 0 (Springer-Verlag, Berlin, Heidelberg, 2009) pp. 616–626, ISBN 978-3-642-02610-2.
  • Filipovych and Ribeiro (2008) R. Filipovych and E. Ribeiro, in IEEE International Conference on Computer Vision and Pattern Recognition (CVPR 2008), Vol. 0 (2008) pp. 1 –7, ISSN 1063-6919.
  • Zhao et al. (2008) T. Zhao, R. Nevatia,  and B. Wu, IEEE Transactions Pattern Anal. Mach. Intell., 30, 1198 (2008), ISSN 0162-8828.
  • Gorelick et al. (2007) L. Gorelick, M. Blank, E. Shechtman, M. Irani,  and R. Basri, Transactions on Pattern Analysis and Machine Intelligence, 29, 2247 (2007).
  • Blank et al. (2005) M. Blank, L. Gorelick, E. Shechtman, M. Irani,  and R. Basri, in The Tenth IEEE International Conference on Computer Vision (ICCV’05), Vol. 0 (2005) pp. 1395–1402.
  • Chen et al. (2005) P.-H. Chen, C.-J. Lin,  and B. Schölkopf, Appl. Stoch. Model. Bus. Ind., 21, 111 (2005), ISSN 1524-1904.
  • Hall et al. (2009) M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann,  and I. H. Witten, SIGKDD Explor. Newsl., 11, 10 (2009), ISSN 1931-0145.