Detecting a user's mental state has a variety of applications in human activity recognition systems. Without a proper algorithm for detecting the human intention to interact, a vision-based system is always on, so any kind of activity may be interpreted as an interaction.
Group meetings, which are frequent business events, are modeled as a case study. In this case study, among all available data streams, tracked 3D skeleton data is used for user engagement detection in meetings. Multiple binary classifiers are implemented to detect the user's intention to perform an action. The outputs of these binary classifiers are used to create the transition and guard conditions of an FST. We discuss the characteristics of engagement and introduce biometric data that can be used for this purpose; 3D skeleton tracking is introduced as one channel of biometric information for engagement detection. Although we use only this single channel of biometric data, experimental results show that we can still predict engagement with high accuracy.
In addition, DAIA, our FST for engagement detection, helps the system flow smoothly among states. The FST is a predefined structure, based on our knowledge of human activities, that helps the system predict the engagement state more accurately. Furthermore, the FST algorithm is computationally efficient, which makes on-line, real-time performance achievable.
1.1 Related Work
Engagement has been investigated in various fields such as education, organizational behavior, work, and media. Engagement is defined as the value that a participant in an interaction attributes to the goal of being together with the other participant(s) and continuing the interaction [2, 3]. It is also defined as the process by which two or more participants establish, maintain, and end their perceived connection, which is directly related to attention [3, 4]. "Effort without distress" and "a meaningful involvement, enabled through vigor, dedication, and absorption" are other definitions of engagement.
Body posture conveys important information about engagement. Various approaches based on body language analysis have been investigated to improve human-computer interaction, including the intention to engage with an agent, e.g. a robot [7, 8], or with an interactive display. Measuring engagement intent is used in service robots to distinguish relevant gestures from irrelevant ones, which is known as the Midas Touch Problem [7, 10]. Another research interest relates to learner engagement with robotic companions or interfaces. In addition, intention to engage with a display for improving user identification is addressed by Schwarz et al.
A variety of studies strive for a multi-modal approach, using features of facial expression, body motion, voice, or seat pressure to elucidate mental states. Benkaouar et al. discuss gaze and upper body posture for engagement detection. Schwarz et al. used a combination of gaze, upper body, and arm position for intention detection in engagement. Vaufreydaz et al. used gaze and proxemics, and Salam et al. employed human state observation for engagement detection. Engell et al. used gaze and facial expression, and Balaban et al. employed weight, head, and upper body motion; Scherer et al. and Dael et al. discuss voice, face, and posture. Using a Finite State Machine (FSM) for multi-modal system modeling has been addressed in multiple studies.
All of the studies mentioned in this section discuss a very small set of potential classifiers. They also do not make full use of the body of qualitative research on non-verbal body language and its indication of mental states. In this research, we address these features for engagement modeling and detection.
2 Engagement Modeling and Metrics
Frank et al. have proposed a multi-modal engagement user state detection process. In our intended scenario, multiple people are within the operating range of the sensor, e.g. the field of view of the camera. This module identifies the state of each participant on the engagement scale, from disengaged up to involved action. Using the biometric information, the module identifies the person who exhibits a specific combination of classifiers from all categories. The classifiers, e.g. body posture components, are chosen based on research relating them to attentiveness and engagement. The analysis occurs on a frame-by-frame basis. Each frame is analyzed with regard to all classifiers, e.g. whether the person is 1) raising a hand, 2) facing a display, 3) leaning forward, 4) leaning backward, 5) uttering feedback, 6) slouching, or 7) changing position in the last 60 frames. Each classifier is evaluated as exhibited or not exhibited in a specific frame, as a binary value.
Among all the biometric information available for extending the proposed framework to detect the user's engagement state, such as gaze, voice, and gesture, we use only the 3D joint information provided by a depth camera and the NiTE SDK by PrimeSense. Although using more biometric channels would make the system more accurate, our experimental results show that even this single channel of information yields a high-performance user engagement detection system.
Upper body joints play an important role in engagement detection. Multiple classifiers are designed to detect and classify upper body direction. In addition, hand movements such as raising a hand, pointing, swiping, pushing, or pulling are used for manipulation in vision-based interfaces. Therefore, different classifiers are designed to detect various hand movements, such as raising a hand above the waist, as well as different levels of hand speed. These classifiers help detect the user's intention to perform an action.
An action video with $T$ frames and $N$ joints in each frame can be represented as a sequence of sets of 3D points, written as $V = \{p_{t,n} \in \mathbb{R}^3 \mid t = 1, \ldots, T;\ n = 1, \ldots, N\}$. The 3D sensor provides fifteen joints ($N = 15$), and $T$ varies for different sequences. However, our classifiers use only ten upper body joints from this set: head, left and right shoulders, left and right elbows, left and right hands, torso, and left and right hips. For each joint, its 3D position is obtained.
The first step of feature extraction is to compute a basic feature for each frame that describes the pose of each of these ten joints in a single frame. The second step is calculating the left and right hand speed. This feature is obtained by computing the 3D distance that each hand moves between two consecutive frames.
The binary values of the individual binary classifiers are weighted based on their relative influence in the training data and summed up to the engagement score $E$. The engagement score thereby assumes a value between 0 and 1.
Figure 1 shows how our system extracts features from users and calculates the engagement level of each user.
The engagement score is calculated as $E = \mathbf{w}^{T}\mathbf{c} = \sum_{i} w_i c_i$, where $\mathbf{w}$ is the vector of weights and $\mathbf{c}$ is the vector of binary classifiers, such that each $c_i$ is 0 or 1 based on the output of the corresponding binary classifier.
Table 1 lists the binary classifiers designed for this purpose. For each posture status, multiple binary classifiers are designed. The 0 or 1 outputs of these 37 classifiers make up the feature vector for the intention-to-act classifier.
| Posture Status | Binary Classifiers |
| --- | --- |
| Hand Horizontal | Right of Body, Close to Body, Left of Body |
| Hand Vertical | Below Hip, Below Torso, Below Shoulder, Below Head |
| Hand Depth | Back of Body, Close to Body, Front of Body |
| Hand Speed | Stopped, Slow, Fast, Too Fast |
| Body Direction | Facing Sensor |
| Leaning | Lean Back, No Lean, Lean Forward |
| Special Postures | Hands Folded, Hands on Head, Hands on Torso |
These classifiers are mostly designed based on heuristic information extracted from the joints' 3D locations. For instance, the body direction classifier uses the normal vector of the plane containing the right and left shoulder joints and the torso joint.
3 DAIA: FST for Engagement Detection
DAIA is a frame-based engagement detection system using an FST. State machines describe the life cycle of a system: the different states of the lifeline, the events that influence it, and what it does when a particular event is detected in any state, as the transition condition for a particular state change. They offer a complete specification of the dynamic behavior of the system. Figure 2 shows the outline of this framework.
To increase the efficiency and accuracy of our algorithm, we implemented a Finite State Transducer (FST). A state is a description of the user's mental state or engagement that is anticipated to change over time. A transition is initiated by a change in conditions that results in a change of state. For example, when using a gesture recognition system to find meaningful user gestures, swiping or pointing can occur in some states, such as the Action state, while similar gestures in the Attention state are ignored or interpreted differently. In this research, we model engagement as a finite state transducer with four states:
- Disengagement: the user is disengaged from the screen or the target person/object.
- Attention: the user is attentive but has no intention to perform any action.
- Intention: the user intends to perform some action but is not yet doing it.
- Action: the user is performing an action.
A finite state transducer is a sextuple $(\Sigma, \Gamma, S, s_0, \delta, \omega)$, where: $\Sigma$ is the input alphabet (a finite, non-empty set of symbols); $\Gamma$ is the output alphabet (a finite, non-empty set of symbols); $S$ is a finite, non-empty set of states; $s_0 \in S$ is the initial state; $\delta : S \times \Sigma \rightarrow S$ is the state-transition function; and $\omega$ is the output function. If the output function is a function of a state and an input symbol ($\omega : S \times \Sigma \rightarrow \Gamma$), the definition corresponds to the Mealy model, and the transducer can be modeled as a Mealy machine.
Some hypotheses are considered in designing this FST. Engagement states change in a specific order: this property shapes the FST design. The FST starts in Disengagement (the initial state). Every state can transition to Disengagement, but the other engagement states are always reached through a chain of ordered state transitions.
We may modify the detected state based on the conditions of the FST: states should transition smoothly. The FST smooths state transitions, which helps with better gesture segmentation. We do not change state based solely on the intention-to-act or disengagement classifier. Since human activities are continuous, we use protections called guard conditions to change states smoothly. Furthermore, the engagement state classifier is memoryless; it may report a wrong engagement state based only on the current biometric properties. The FST, however, keeps a record of engagement states and helps relabel the frames more accurately.
Table 2 describes the transition conditions, $T$, for these state changes. Each transition condition is the combination of the event triggering the transition, the target state, the guard, and the actions, as follows:
- Any state to Disengagement: Body Direction is not facing the sensor, or a Special Posture such as Hands Folded exists.
- Remaining in the current state: the outputs of the binary classifiers and the Intention to Act classifier do not change.
- Disengagement to Attention: Body Direction is facing the sensor.
- Attention to Intention: the Intention to Act classifier is triggered with output 1, but both Hand Speed "stopped" classifiers fire in at least one frame of each window of a predefined number of consecutive frames. This window is a guard that protects the state from transitions caused by small hand movements that are not actions.
- Action to Intention (intention lost): the Intention to Act classifier's output changes from 1 to 0 for more than a predefined number of consecutive frames. This window of frames protects the Action-to-Intention transition from small position changes that momentarily zero the Intention to Act classifier.
- Intention to Action: the Intention to Act classifier is triggered with output 1, and at least one of the Hand Speed classifiers detecting slow or fast movement is 1 for a predefined number of consecutive frames.
- Action to Intention (movement stopped): the Intention to Act classifier is triggered with output 1, but both Hand Speed classifiers detecting "stopped" are 1 for a predefined number of consecutive frames.
After each transition, if the FST recognizes that the labels assigned to some frames are wrong, it can reconsider and modify them by relabeling. In addition, based on analysis of the hand speed signal, the FST relabels the frames from the moment the user starts moving the hand to raise it. This ensures a correctly segmented gesture for our gesture recognition system.
4 Experiment Results
The DAIA framework was implemented in C++ on a Windows workstation. We used an ASUS Xtion Pro to capture depth images and tracked skeletons using the PrimeSense NiTE SDK. The system ran at 30 frames per second. As mentioned in Table 1, we created 37 binary classifiers. For each posture status, multiple binary classifiers are designed. The 0 or 1 outputs of these 37 classifiers form our feature vector, $\mathbf{c}$, for the intention-to-act classifier. Furthermore, we would need to define $\mathbf{w}$, the vector of weights, to calculate the engagement score, and afterwards a threshold to classify each frame as intention-to-act or disengagement, similar to the previously proposed procedure. Calculating these weights would require extensive research on different body postures, and constant weight values for the different classifiers may misclassify complex body postures. Therefore, instead of defining constant values for the weight vector, we used an SVM with a linear kernel to train our intention-to-act classifier, with $\mathbf{c}$ as the feature vector for training and testing.
To train the classifier, a simple "Catch the Box!" game was designed. In this game, we used the hand tracking algorithm implemented in the NiTE middleware by PrimeSense to move a cursor on the screen. A solid rectangle appears at a random position on the screen, and the user must move the cursor onto the rectangle to receive points. The game has three stages: "Getting Ready", "Play", and "Stop". The binary classifier outputs of Table 1 during the "Play" stage of the game are combined as a series of 0s and 1s and used as "intention to act" feature vectors to train a binary SVM with a linear kernel. The classifier outputs during the "Getting Ready" and "Stop" stages of the game create the feature vectors of the SVM when intention to act is not present.
We captured and labeled 23,210 frames from 5 different subjects who played the game separately. 5,000 frames were used for training and the remainder for testing the classifier, which labels each frame as intention to act or not; its accuracy was 86.38%. The FST then helps relabel each frame based on the current state's properties and guards. In our experiment, we asked 30 different users to listen to a random sequence of commands from a list of actions, such as "raising hand" or "swiping right to left", and perform them in front of a screen.
Afterwards, each recorded frame was labeled with one of the four engagement states, from Disengagement to Action, by an expert and used as ground truth. The FST's performance is calculated by comparing the states reported by the FST after relabeling the frames with the manually labeled ground truth. In total, 165,422 frames were labeled with engagement states. The results are gathered in Table 3. Figure 3 shows engagement state detection using the FST and combinations of Boolean operations on raising and putting down the right hand.
5 Discussion and Conclusions
In this paper, a novel multi-modal model for engagement is introduced. Based on this model, tracked 3D gesture data is employed for user engagement detection. The spectrum of mental states leading to an action is quantized into a finite number of engagement states, and a finite state transducer (FST) with the following states is proposed: Disengagement, Attention, Intention, and Action. Results show that our intention-to-act performance is 86.3%. In addition, the FST relabels some frames based on the history of engagement states and the guard conditions; the FST's performance in labeling frames correctly is 92.3%. The processing time for each frame is less than 10 ms, which indicates the real-time usability of our algorithm.
In future research, we plan to use other channels of biometric information, such as voice and facial data (e.g. gaze), and may reach even higher detection rates with these extra channels. In addition, by using multiple cameras and calculating the engagement state of each member of the audience in a meeting room, we will be able to detect the main operator and give that participant control of the vision-based interface.
-  Rick Kjeldsen, Anthony Levas, and Claudio Pinhanez, “Dynamically reconfigurable vision-based user interfaces,” in Computer Vision Systems, pp. 323–332. Springer, 2003.
-  Rodie Cowie, Ursula Hess, Shlomo Hareli, Maria Francesca O’Connor, Laurel D Riek, Louis-Philippe Morency, Jonathan Aigrain, Severine Dubuisson, Marcin Detyniecki, Mohamed Chetouani, et al., “Cbar 2015: Context based affect recognition,” 2015.
-  Hanan Salam and Mohamed Chetouani, “A multi-level context-based modeling of engagement in human-robot interaction,” in Automatic Face and Gesture Recognition (FG), 2015 11th IEEE International Conference and Workshops on. IEEE, 2015, vol. 3, pp. 1–6.
-  Candace L Sidner, Cory D Kidd, Christopher Lee, and Neal Lesh, “Where to look: a study of human-robot engagement,” in Proceedings of the 9th international conference on Intelligent user interfaces. ACM, 2004, pp. 78–84.
-  Kelli I Stajduhar, Sally E Thorne, Liza McGuinness, and Charmaine Kim-Sing, “Patient perceptions of helpful communication in the context of advanced cancer,” Journal of clinical nursing, vol. 19, no. 13-14, pp. 2039–2047, 2010.
-  Nele Dael, Marcello Mortillaro, and Klaus R Scherer, “Emotion expression in body action and posture,” Emotion, vol. 12, no. 5, pp. 1085, 2012.
-  Dominique Vaufreydaz, Wafa Johal, and Claudine Combe, “Starting engagement detection towards a companion robot using multimodal features,” Robotics and Autonomous Systems, 2015.
-  David Klotz, Johannes Wienke, Julia Peltason, Britta Wrede, Sebastian Wrede, Vasil Khalidov, and Jean-Marc Odobez, “Engagement-based multi-party dialog with a humanoid robot,” in Proceedings of the SIGDIAL 2011 Conference. Association for Computational Linguistics, 2011, pp. 341–343.
-  Julia Schwarz, Charles Claudius Marais, Tommer Leyvand, Scott E Hudson, and Jennifer Mankoff, “Combining body pose, gaze, and gesture to determine intention to interact in vision-based interfaces,” in Proceedings of the 32nd annual ACM conference on Human factors in computing systems. ACM, 2014, pp. 3443–3452.
-  Ross Mead, Amin Atrash, and Maja J Matarić, “Proxemic feature recognition for interactive robots: automating metrics from the social sciences,” in International conference on social robotics. Springer, 2011, pp. 52–61.
-  Jyotirmay Sanghvi, Ginevra Castellano, Iolanda Leite, André Pereira, Peter W McOwan, and Ana Paiva, “Automatic analysis of affective postures and body motion to detect engagement with a game companion,” in 2011 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 2011, pp. 305–311.
-  Nadia Bianchi-Berthouze, “Understanding the role of body movement in player engagement,” Human–Computer Interaction, vol. 28, no. 1, pp. 40–75, 2013.
-  Andrea Kleinsmith and Nadia Bianchi-Berthouze, “Affective body expression perception and recognition: A survey,” Affective Computing, IEEE Transactions on, vol. 4, no. 1, pp. 15–33, 2013.
-  N Bianchi-Berthouze, “What can body movement tell us about players’ engagement,” Measuring Behavior’12, pp. 94–97, 2012.
-  Stylianos Asteriadis, Kostas Karpouzis, and Stefanos Kollias, “Feature extraction and selection for inferring user engagement in an hci environment,” in Human-Computer Interaction. New Trends, pp. 22–29. Springer, 2009.
-  Wafa Benkaouar and Dominique Vaufreydaz, “Multi-sensors engagement detection with a robot companion in a home environment,” in Workshop on Assistance and Service robotics in a human environment at IEEE International Conference on Intelligent Robots and Systems (IROS2012), 2012, pp. 45–52.
-  Andrew D Engell and James V Haxby, “Facial expression and gaze-direction in human superior temporal sulcus,” Neuropsychologia, vol. 45, no. 14, pp. 3234–3241, 2007.
-  Carey D Balaban, Joseph Cohn, Mark S Redfern, Jarad Prinkey, Roy Stripling, and Michael Hoffer, “Postural control as a probe for cognitive state: Exploiting human information processing to enhance performance,” International Journal of Human-Computer Interaction, vol. 17, no. 2, pp. 275–286, 2004.
-  Stefan Scherer, Michael Glodek, Georg Layher, Martin Schels, Miriam Schmidt, Tobias Brosch, Stephan Tschechne, Friedhelm Schwenker, Heiko Neumann, and Günther Palm, “A generic framework for the inference of user states in human computer interaction,” Journal on Multimodal User Interfaces, vol. 6, no. 3-4, pp. 117–141, 2012.
-  Michael Johnston, Srinivas Bangalore, Gunaranjan Vasireddy, Amanda Stent, Patrick Ehlen, Marilyn Walker, Steve Whittaker, and Preetam Maloor, “Match: An architecture for multimodal dialogue systems,” in Proceedings of the 40th Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics, 2002, pp. 376–383.
-  Michael Johnston and Srinivas Bangalore, “Finite-state multimodal parsing and understanding,” in Proceedings of the 18th conference on Computational linguistics-Volume 1. Association for Computational Linguistics, 2000, pp. 369–375.
-  Bradley A Singletary and Thad E Starner, “Learning visual models of social engagement,” in Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems, 2001. Proceedings. IEEE ICCV Workshop on. IEEE, 2001, pp. 141–148.
-  Marie-Luce Bourguet, “Designing and prototyping multimodal commands,” in INTERACT. Citeseer, 2003, vol. 3, pp. 717–720.
-  Selene Mota and Rosalind W Picard, “Automated posture analysis for detecting learner’s interest,” in Computer Vision and Pattern Recognition Workshop, 2003. CVPRW’03. Conference on. IEEE, 2003, vol. 5, pp. 49–49.
-  Marek P Michalowski, Selma Sabanovic, and Reid Simmons, “A spatial model of engagement for a social robot,” in Advanced Motion Control, 2006. 9th IEEE International Workshop on. IEEE, 2006, pp. 762–767.
-  Maria Frank, Ghassem Tofighi, Haisong Gu, and Renate Fruchter, “Engagement detection in meetings,” arXiv preprint arXiv:1608.08711, 2016.
-  Chih-Chung Chang and Chih-Jen Lin, “Libsvm: a library for support vector machines,” ACM Transactions on Intelligent Systems and Technology (TIST), vol. 2, no. 3, pp. 27, 2011.