Assistive robot operated via P300-based Brain Computer Interface

05/30/2019 · Filippo Arrichiello, et al.

In this paper we present an architecture for the operation of an assistive robot, ultimately aimed at allowing users with severe motion disabilities to perform manipulation tasks that may help in daily-life operations. The robotic system, based on a lightweight robot manipulator, receives high-level commands from the user through a Brain-Computer Interface based on the P300 paradigm. The motion of the manipulator is controlled relying on a closed-loop inverse kinematic algorithm that simultaneously manages multiple set-based and equality-based tasks. The software architecture builds on widely used frameworks for operating BCIs and robots (namely, BCI2000 for the operation of the BCI and ROS for the control of the manipulator), integrating control, perception and communication modules developed for the application at hand. Preliminary experiments have been conducted to show the potential of the developed architecture.

I Introduction

Research and development of robotic assistive technologies has gained tremendous momentum in the last decade, due to factors such as the maturity level reached by the underlying technologies, the advances in robotics and AI, and the fact that more than 700 million people have some kind of disability or handicap [18]. For many people with mobility impairments, essential and simple tasks, such as dressing or feeding, require the assistance of dedicated caregivers; thus, the use of devices providing independent mobility can have a large impact on their quality of life [10].

In this perspective, different classes of robotic devices can be considered: lightweight robotic arms that may help in manipulation tasks, intelligent semi-autonomous powered wheelchairs that help in mobility tasks, or wheelchair-mounted robotic manipulators that help with both. From the user's perspective, the operation mode of such systems depends on the level of autonomy provided by, or required of, the robotic system; from the device's perspective, this corresponds to different control modes ranging from shared to supervisory control. In shared mode, the user is involved in the control loop of the system by continuously generating high-frequency motion commands; such commands are then translated by the control software into low-level functions after applying all the safety policies. In supervisory mode, the user provides high-level, low-frequency commands (e.g., to start/stop actions) and the system operates in complete autonomy; the control software must generate motion directives that realize the required action while taking into account safety, comfort and efficiency.

The operation modes of robotic devices are strictly connected to the Human-Machine Interface (HMI) used to generate and communicate commands. Among the different HMIs, Brain-Computer Interfaces (BCIs) represent a relatively new technology that has recently attracted large attention, since BCIs can be used even in the absence of residual motion capability of the user [12], with applications in different areas of assistive technology such as motor recovery, entertainment, communication and control [13, 17]. Indeed, BCIs have recently been proposed to drive wheelchairs [2, 3], to guide robots for telepresence [5, 11], to control exoskeletons [7] and mobile robots [19, 8].

Most BCIs rely on non-invasive electroencephalogram (EEG) signals, i.e., the electrical brain activity recorded from electrodes placed on the scalp. By processing such signals, a BCI can generate commands to be used for communication with a software interface. EEG-BCIs can be categorized according to the brain activity patterns they exploit [6]: Event-Related Desynchronization/Synchronization; Steady-State Visual Evoked Potentials (SSVEP); the P300 component of the Event-Related Potentials; Slow Cortical Potentials (SCPs).

In this work, similarly to [20], we focus on the use of an EEG-BCI to control a robotic manipulator to perform operations such as drinking or manipulating objects. However, here the BCI is operated according to the P300 paradigm, which exploits a positive deflection of the ongoing brain activity occurring with a latency of roughly 300 ms after the random occurrence of a desired target stimulus. This choice is motivated by the fact that P300-based BCIs are relatively easy to use for generating a control signal without complex training of the user, and have shown great potential in several applications. From the robot motion control perspective, we do not rely on path-planning algorithms; instead, we use the closed-loop inverse kinematic approach presented in [15], which allows coding the different kinds of high-level actions required by the user and managing multiple tasks arranged in priorities and expressed as both set-based and equality-based constraints. The effectiveness of the proposed architecture has been validated through experimental tests with a non-invasive BCI used to operate a 7-DOF lightweight robot manipulator. The video attached to the paper shows the execution of a specific mission.

II Overall system architecture

The proposed system is composed of: a non-invasive BCI, the Emotiv Epoc+, a 14-channel wireless EEG headset; a 7-DOF lightweight robot manipulator, the Kinova Jaco2; and an RGB-D sensor for human face and object recognition, the Microsoft Kinect One. As schematically represented in Fig. 1, the different components are integrated through the Robot Operating System (ROS) framework, where specific control, perception and communication modules have been developed for the application at hand. The BCI is operated via BCI2000, a general-purpose software system that supports P300 experiments. The P300 operation paradigm allows the user to select, via the BCI, one option among a set. Each choice performed by the user is associated with an action, either navigating the BCI2000 Graphical User Interface (GUI) or sending messages to external devices. In the latter case, BCI2000 opens a UDP/IP socket to send messages to a client process running on a Linux machine that runs the ROS framework. The client process translates the received message according to a pre-established convention and sends the corresponding commands to the control module of the manipulator. The control module also collects messages from the perception module, which identifies and localizes objects and the user's face in the workspace. In particular, the Kinect's raw data are processed through the OpenCV and PCL (Point Cloud Library) libraries. Finally, the control module of the manipulator implements a set-based closed-loop inverse kinematics motion control algorithm that computes the corresponding desired joint velocities for the robotic manipulator.
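For concreteness, the following minimal Python sketch illustrates the kind of UDP-to-ROS bridge described above; the port, topic name and message convention are illustrative assumptions, not the actual protocol used in the system.

```python
#!/usr/bin/env python
# Minimal sketch of a UDP-to-ROS bridge of the kind described above.
# The port, topic name and "object:action" message convention are
# illustrative assumptions, not the system's actual protocol.
import socket

import rospy
from std_msgs.msg import String


def run_bridge(host="0.0.0.0", port=5005):
    rospy.init_node("udp_ros_bridge")
    pub = rospy.Publisher("/bci_command", String, queue_size=10)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    sock.settimeout(0.1)  # wake up periodically to check for shutdown

    while not rospy.is_shutdown():
        try:
            data, _ = sock.recvfrom(1024)  # datagram sent by BCI2000
        except socket.timeout:
            continue
        # Forward the decoded selection (e.g. "bottle_water:move_right")
        # to the manipulator control module.
        pub.publish(String(data=data.decode("utf-8").strip()))


if __name__ == "__main__":
    run_bridge()
```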


Fig. 1: Scheme of the overall architecture used in the experiment

III Operation of the EEG-BCI via P300 paradigm

The P300 potential is a component of the Event-Related Potentials (ERPs), i.e., fluctuations in the EEG generated by the electrophysiological response to a significant sensory stimulus or event. In particular, the P300 is the largest ERP component, and it consists of a positive shift in the EEG signal approximately 300–400 ms after a task-relevant stimulus. The P300 potential can be elicited through an oddball paradigm, in which the user is subject to a sequence of events (e.g., visual stimuli) that can be categorized into two classes, one less frequent than the other. The frequent events are named standard stimuli, while the infrequent events are named target stimuli. When the user distinguishes a target stimulus from a standard one, a P300 peak is generated about 300 ms after stimulus onset.

The P300 potential has been used as the basis for a BCI system in many studies. The classical format presents the user with a matrix of characters whose rows and columns flash successively and randomly at a rapid rate. The user selects a character by focusing attention on it and counting how many times it flashes. The row and the column that contain this character evoke a P300 response, whereas all the others do not. After proper training, the computer can determine the desired row and column as those with the highest P300 amplitude, and thus the desired character.
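As a toy illustration of this selection rule, the snippet below picks the character at the intersection of the best-scoring row and column; the 6x6 speller matrix and the classifier scores are made up for the example and do not come from BCI2000.

```python
import numpy as np

# Toy illustration of P300 speller decoding: each row/column flash receives a
# classifier score (higher = more P300-like, averaged over repetitions); the
# selected character lies at the intersection of the best row and best column.
matrix = np.array([list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
                   list("STUVWX"), list("YZ1234"), list("56789_")])

row_scores = np.array([0.1, 0.2, 1.4, 0.3, 0.1, 0.2])  # made-up scores
col_scores = np.array([0.2, 0.1, 0.1, 1.1, 0.3, 0.2])

selected = matrix[np.argmax(row_scores), np.argmax(col_scores)]
print(selected)  # -> 'P'
```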

The BCI considered in the proposed architecture is operated via a general-purpose software system named BCI2000 that, among its different features, supports BCI data acquisition, signal processing, stimulus presentation, and P300 experiments. In particular, for the application at hand, we want to allow a user to operate a robotic manipulator to achieve manipulation actions on some of the objects present in its workspace. Thus, we rely on the BCI2000 P300 functionalities to allow the user to generate commands through a P300-based Graphical User Interface (GUI). This GUI has been realized by structuring the array of flashing elements in a multi-layered structure; i.e., by selecting an element from a starting array, the GUI can either switch to a different layer (i.e., a different array of elements) or send commands to the manipulator through callback functions that open a UDP/IP socket to communicate with a remote machine.
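The multi-layer logic can be pictured as a simple state machine: each selection either switches layer or fires a callback that sends a UDP message to the ROS machine. The sketch below is only a schematic Python model of that logic (layer names, remote address and message strings are assumptions); in the real system the menus and callbacks live inside BCI2000.

```python
import socket

# Schematic model of the multi-layer P300 GUI logic: a selected icon either
# switches to another layer or sends a command to the ROS machine over UDP.
# Addresses, layer names and message strings are illustrative assumptions.
ROS_MACHINE = ("192.168.0.10", 5005)   # hypothetical address of the Linux machine
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Each entry maps an icon to ("goto", next_layer) or ("send", message, next_layer).
LAYERS = {
    "object_selection": {
        "bottle_water": ("goto", "action_selection"),
        "bottle_coke":  ("goto", "action_selection"),
        "emergency":    ("send", "STOP", "object_selection"),
    },
    "action_selection": {
        "drink":        ("send", "DRINK", "pause"),
        "move":         ("goto", "subaction_selection"),
        "back":         ("goto", "object_selection"),
    },
    "subaction_selection": {
        "move_left":    ("send", "MOVE_LEFT", "pause"),
        "move_right":   ("send", "MOVE_RIGHT", "pause"),
    },
}

def on_selection(layer, icon):
    """Handle a P300 selection and return the next layer to display."""
    entry = LAYERS[layer][icon]
    if entry[0] == "send":
        sock.sendto(entry[1].encode("utf-8"), ROS_MACHINE)
        return entry[2]
    return entry[1]
```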

Fig. 2 shows an example of the structure of the developed GUI, where the user can select operations such as: select an object, pause/sleep the P300 GUI, move to the home level, command an action to the robot, stop the robot, etc. It can be noticed that the characters commonly used for the P300 paradigm have been replaced by flashing icons that are more intuitive for the user.


Fig. 2: Scheme of the developed BCI2000 user interface.

Before using the developed GUI, the user has to train the P300 classification software following a specific procedure that consists in selecting, via the oddball paradigm, a set of characters in a predefined order. The BCI2000 software uses a genetic algorithm for the training of a Stepwise Linear Discriminant Analysis (SWLDA) binary classifier used in the operation mode, and it generates a specific profile file for the user.

The BCI used is the Epoc+ produced by Emotiv, a low-cost non-invasive BCI offering high-resolution (14 bits, 1 LSB = 0.51 µV), multi-channel signals and providing access to dense-array, high-quality, raw EEG data. The 14 electrodes of the neuroheadset (which generate signals with a bandwidth of 0.2–43 Hz) are located according to the 10-20 international system.

IV Motion control of the robotic manipulator

The robot manipulator receives as input from the BCI high-level commands containing information about the selected object and the kind of action to perform. The set of possible actions is predefined, but their execution depends on the information collected on-line about the workspace, e.g., the placement of the objects, the position of the user's face, and the possible presence of obstacles.

Each action is coded via a set of elementary tasks, each described by a suitable task function of the system state, and arranged in priority order as described in [1]. The reference system velocity is computed by inverting the task function at a kinematic level and by projecting the contribution of each task into the null space of the higher-priority ones, so as to remove the velocity components that would conflict with them.

For a general robotic system with $n$ Degrees of Freedom, the state is described by the joint values $q \in \mathbb{R}^n$. Let us consider a generic $m$-dimensional task function $\sigma(q) \in \mathbb{R}^m$. The following differential relationship holds:

$$\dot{\sigma} = J(q)\,\dot{q}$$

where $J(q) \in \mathbb{R}^{m \times n}$ is the task Jacobian matrix, and $\dot{q} \in \mathbb{R}^n$ is the system velocity. The reference velocity $\dot{q}_d$ that brings the task value $\sigma$ to a desired $\sigma_d$ can be computed as:

$$\dot{q}_d = J^{\dagger}\left(\dot{\sigma}_d + K\,\tilde{\sigma}\right) \qquad (1)$$

where $K$ is a positive-definite matrix of gains, $J^{\dagger}$ is the pseudoinverse of $J$, and $\tilde{\sigma} = \sigma_d - \sigma$ is the task error. If the system is redundant with respect to the task dimension ($n > m$) it is possible to fulfill multiple tasks simultaneously. Defining a priority among the $N$ tasks composing an action, the reference system velocity can be computed as:

$$\dot{q}_d = \dot{q}_1 + \sum_{i=2}^{N} N^{A}_{i-1}\,\dot{q}_i \qquad (2)$$

where $\dot{q}_i = J_i^{\dagger}\left(\dot{\sigma}_{d,i} + K_i\,\tilde{\sigma}_i\right)$ is the reference velocity that fulfills the $i$-th task and $N^{A}_{i-1}$ is the null-space projector of the augmented Jacobian $J^{A}_{i-1}$:

$$J^{A}_{i-1} = \begin{bmatrix} J_1^{T} & J_2^{T} & \cdots & J_{i-1}^{T} \end{bmatrix}^{T}, \qquad N^{A}_{i-1} = I - \left(J^{A}_{i-1}\right)^{\dagger} J^{A}_{i-1} \qquad (3)$$
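A compact numerical sketch of (1)–(3) is given below, assuming each task is supplied as a tuple of Jacobian, error and gain matrix (the task definitions themselves are placeholders); it only illustrates the null-space projection mechanism, not the full singularity-robust implementation of [15].

```python
import numpy as np

def task_priority_velocity(tasks, n_dof):
    """Prioritized inverse kinematics, Eqs. (1)-(3).

    tasks: list of (J, error, K) tuples ordered from highest to lowest priority,
           with J an (m x n_dof) Jacobian, error the m-dimensional task error
           sigma_d - sigma, and K an (m x m) positive-definite gain matrix.
    """
    qdot = np.zeros(n_dof)
    J_aug = np.zeros((0, n_dof))          # augmented Jacobian of higher-priority tasks
    for J, err, K in tasks:
        if J_aug.shape[0] == 0:
            N = np.eye(n_dof)             # no higher-priority task yet
        else:
            N = np.eye(n_dof) - np.linalg.pinv(J_aug) @ J_aug   # projector of Eq. (3)
        qdot_i = np.linalg.pinv(J) @ (K @ err)                  # Eq. (1), with feedforward = 0
        qdot += N @ qdot_i                                      # accumulate as in Eq. (2)
        J_aug = np.vstack([J_aug, J])
    return qdot

# Example: a 7-DOF arm with a 3-D end-effector position task and a 1-D joint task.
J1, e1, K1 = np.random.rand(3, 7), np.array([0.05, -0.02, 0.10]), 2.0 * np.eye(3)
J2 = np.zeros((1, 7)); J2[0, 3] = 1.0
qdot_d = task_priority_velocity([(J1, e1, K1), (J2, np.array([0.1]), np.eye(1))], n_dof=7)
```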

This framework has been recently extended in [15, 16] in order to also handle set-based tasks, i.e., monodimensional tasks requiring their value to lie within a set of values rather than assuming a specific one. Classic set-based tasks for a robotic manipulator are mechanical joint limits, obstacle avoidance, and arm manipulability. The considered method allows controlling simultaneously a hierarchy composed of both equality-based and set-based tasks. In particular, while the equality-based tasks are always active, the set-based tasks can be activated or deactivated depending on the operational conditions. For each set-based task $\sigma$ it is indeed necessary to set different reference values: physical thresholds ($\sigma_{\min}$, $\sigma_{\max}$), activation thresholds ($\sigma_{a,\min}$, $\sigma_{a,\max}$), and safety values ($\sigma_{s,\min}$, $\sigma_{s,\max}$). With reference to Fig. 3, as long as the activation thresholds of a specific set-based task are satisfied (i.e., $\sigma_{a,\min} \le \sigma \le \sigma_{a,\max}$), the task is removed from the hierarchy and the solution that fulfills the other tasks is computed. When a threshold is violated (i.e., $\sigma < \sigma_{a,\min}$ or $\sigma > \sigma_{a,\max}$), the task is re-inserted in the hierarchy and it is transformed into an equality-based task. The desired value of the task function is set as:

$$\sigma_{d} = \begin{cases} \sigma_{s,\min} & \text{if } \sigma < \sigma_{a,\min} \\ \sigma_{s,\max} & \text{if } \sigma > \sigma_{a,\max} \end{cases} \qquad (4)$$

It is important that $\sigma_{\min} < \sigma_{a,\min} < \sigma_{s,\min}$ and $\sigma_{s,\max} < \sigma_{a,\max} < \sigma_{\max}$, in order to avoid undesirable system behaviors such as chattering due to intermittent activation/deactivation of tasks caused by quantization errors or sensor noise in the task value computation.

Fig. 3: Activation and physical thresholds of a set-based task

It is not always necessary to compute the solution of the hierarchy containing all the active set-based tasks, because the desired system velocity computed by applying (2) to a hierarchy that does not contain a specific set-based task could nonetheless keep such task within its valid set of values. In that case, the set-based task can be removed from the active task hierarchy on the basis of the following algorithm.

IV-A Set-based activation/deactivation algorithm

The algorithm can be divided into four main steps:

  1. Create the active task hierarchy

  2. Compute the solutions

  3. Compute projections of the solutions

  4. Choose the solution

Starting from a hierarchy of mixed set-based and equality-based tasks, in the first step the hierarchy containing all the active tasks is created; that is, it is composed of all the equality-based tasks and all the set-based tasks that exceed their activation thresholds.

Given that we cannot know a priori which solution would keep all the set-based tasks within their specific sets of values, it is necessary to build a solution tree by computing all the solutions given by (2) on all the possible task hierarchies obtained by inserting and removing the active set-based tasks; i.e., for an active hierarchy comprising $p$ set-based tasks, it is necessary to compute $2^p$ solutions and store them in a set $S$.

Then, among all the solutions in $S$, we have to select and store in a set $S_v$ the ones that satisfy all the active set-based tasks while also fulfilling all the equality-based tasks. This can be checked by projecting each solution in $S$ into all the active set-based task spaces: a solution fulfills a set-based task if the resulting task velocity does not drive the task value outside its valid set.

Finally, one solution among the ones in $S_v$ has to be chosen as the desired system velocity. It is important to notice that the set $S_v$ is never empty, because it always contains the solution that takes into account all the active set-based tasks. If $S_v$ contains more than one solution, the highest-norm one is chosen, being the least conservative in terms of system velocity.
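The sketch below summarizes the four steps under simplified assumptions: it reuses task_priority_velocity() from the previous snippet, models each set-based task with a row Jacobian, its current value, activation thresholds, safety values and a gain, and replaces the exact projection test of [15] with a simple check that a candidate velocity drives any violated task back toward its valid set.

```python
import itertools
import numpy as np

def choose_velocity(equality_tasks, set_tasks, n_dof):
    """Simplified version of the four-step set-based activation algorithm."""
    # Step 1: active set-based tasks are those outside their activation thresholds.
    active = [t for t in set_tasks
              if t["value"] < t["act_min"] or t["value"] > t["act_max"]]

    # Step 2: one candidate solution for every subset of the active set-based
    # tasks (2**p hierarchies); activated tasks become equality tasks whose
    # desired value is the corresponding safety value, as in Eq. (4).
    candidates = []
    for k in range(len(active) + 1):
        for subset in itertools.combinations(active, k):
            hierarchy = []
            for t in subset:
                target = t["safe_min"] if t["value"] < t["act_min"] else t["safe_max"]
                hierarchy.append((t["J"], np.atleast_1d(target - t["value"]),
                                  t["gain"] * np.eye(1)))
            hierarchy += equality_tasks
            candidates.append(task_priority_velocity(hierarchy, n_dof))

    # Step 3: keep the candidates whose projection into every active set-based
    # task space pushes the task value back toward (or keeps it inside) its set.
    valid = []
    for qdot in candidates:
        ok = True
        for t in active:
            rate = float(t["J"] @ qdot)
            if t["value"] < t["act_min"] and rate < 0.0:
                ok = False
            if t["value"] > t["act_max"] and rate > 0.0:
                ok = False
        if ok:
            valid.append(qdot)

    # Step 4: the valid set is never empty (the all-active hierarchy is in it);
    # pick the least conservative, i.e. highest-norm, solution.
    return max(valid, key=np.linalg.norm)
```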

V Robotic perception software

The robotic platform has to be capable of interacting with the environment and with the user; thus, a perception system is needed to make the robot aware of the position of the user and of the objects. Objects in the environment are labelled with markers, and their detection and tracking are performed by resorting to the ArUco library [9], a well-known OpenCV module specifically designed for this kind of operation. Object positions are computed in the Kinect reference frame and then transformed into the manipulator reference frame by means of a transformation matrix computed with a preliminary calibration.
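A minimal sketch of this step follows, assuming the classic cv2.aruco contrib API, a calibrated camera, and a 4x4 homogeneous transform T_robot_kinect obtained from the preliminary calibration; the marker dictionary and size are illustrative.

```python
import cv2
import numpy as np

# Marker-based object localization: detect ArUco markers in the Kinect RGB
# image, estimate their poses in the camera frame and map the positions into
# the manipulator frame. Dictionary, marker size and calibration inputs are
# illustrative assumptions.
ARUCO_DICT = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)
ARUCO_PARAMS = cv2.aruco.DetectorParameters_create()

def locate_objects(rgb, camera_matrix, dist_coeffs, T_robot_kinect, marker_size=0.05):
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, ARUCO_DICT, parameters=ARUCO_PARAMS)
    objects = {}
    if ids is None:
        return objects
    poses = cv2.aruco.estimatePoseSingleMarkers(corners, marker_size,
                                                camera_matrix, dist_coeffs)
    tvecs = poses[1].reshape(-1, 3)                   # marker positions, Kinect frame
    for marker_id, tvec in zip(ids.flatten(), tvecs):
        p_kinect = np.append(tvec, 1.0)               # homogeneous coordinates
        p_robot = T_robot_kinect @ p_kinect           # preliminary calibration transform
        objects[int(marker_id)] = p_robot[:3]
    return objects
```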

For the case study presented in the following section, we are also interested in detecting and tracking the mouth of the BCI user. This operation has been divided into two steps: the first consists in detecting the mouth position in the 2D plane of the RGB image taken from the Kinect sensor, the second in estimating its distance from the point cloud. First of all, the image is acquired from the sensor in Full HD resolution and then downsampled in order to reduce the computational load and make the algorithm more suitable for real-time application. Then, a Viola-Jones algorithm based on Haar features [4], [14] is applied to detect the faces in the scene. Among all the faces detected within the image, only the closest one is selected. The face image is then split into two parts, and only the lower one is taken into account for the following computations. The Haar features are applied once again in order to find the coordinates of the mouth in the image frame.

The second step is the computation of the 3D coordinates of the center of the mouth. The Full HD point cloud is acquired from the Kinect and then filtered with a voxel-grid filter in order to reduce the number of points to be processed. The points belonging to the selected area of the image are extracted, and the mean of their $x$, $y$ and $z$ coordinates is computed as the center of the mouth.
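Under the assumption of an organized point cloud registered to the RGB image, the two steps could be sketched as follows; the standard OpenCV Haar cascades, the "largest face = closest face" heuristic and the omission of the voxel-grid filtering are assumptions of this sketch.

```python
import cv2
import numpy as np

# Two-step mouth localization sketch: Haar-cascade face detection on a
# downsampled RGB image, mouth detection on the lower half of the selected
# face, then the 3-D centre as the mean of the corresponding points of an
# organized point cloud registered to the RGB image.
FACE = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
MOUTH = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_smile.xml")

def mouth_position(rgb, cloud_xyz, scale=0.5):
    small = cv2.resize(rgb, None, fx=scale, fy=scale)   # downsample to cut the load
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    faces = FACE.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # take the closest (largest) face
    lower = gray[y + h // 2:y + h, x:x + w]             # mouth lies in the lower half
    mouths = MOUTH.detectMultiScale(lower, 1.1, 10)
    if len(mouths) == 0:
        return None
    mx, my, mw, mh = mouths[0]
    # Map the mouth box back to full-resolution pixel coordinates.
    u0, v0 = int((x + mx) / scale), int((y + h // 2 + my) / scale)
    u1, v1 = int((x + mx + mw) / scale), int((y + h // 2 + my + mh) / scale)
    pts = cloud_xyz[v0:v1, u0:u1].reshape(-1, 3)
    pts = pts[np.isfinite(pts).all(axis=1)]             # discard invalid depth readings
    return pts.mean(axis=0) if len(pts) else None
```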

VI Experimental case study

In the considered experimental case study, we want to allow a user to command the robot through the BCI to move objects to preselected positions or to bring a bottle to his/her mouth. To this purpose, a specific GUI, navigable via the P300 BCI paradigm, has been developed in the BCI2000 framework according to the scheme in Fig. 2. The GUI is composed of the four different layers shown in Fig. 4. Referring to this figure, the first screen (image 1) represents the object selection layer, from which the user can choose the object to manipulate, in this case a bottle of coke and one of water. Furthermore, there is the possibility to pause the interface program execution (represented by the “pause” icon) and to send a stop signal to the manipulator in case of emergency (represented by the red cross). The second screen (image 2) represents the action selection layer: after the object choice, the user can decide which action to perform, i.e., whether to drink or to move the chosen bottle. Also in this case there is the possibility to pause and to return to the bottle choice (represented by the round arrow). If the user decides to drink, the interface switches directly to the fourth screen (image 4) and pauses itself. In this phase the user can decide to resume the interface program (represented by the “play” icon), to send a stop signal to the robot manipulator, to return to the bottle choice, or to go back to the previous screen (represented by the “-1” icon). Instead, if the user decides to take and move the bottle, the interface switches to the third screen (image 3), which represents the sub-action selection. In this phase the user can decide to move the bottle to the left or to the right, to preassigned positions. After the choice of the location, the interface advances to the fourth screen. Once object and action are selected, the GUI sends a message to the robot control module with the details of the choices performed by the user.

Fig. 4: Different choices of the BCI user interface

In the following, the results of the two kinds of performed operations are described.

VI-A “Move” operation

In the first experiment, the user was asked to move the water bottle to the right. The high-level command built by the BCI user interface is sent to the manipulator, which autonomously fulfills the operation. The task hierarchy chosen for the operation is the following (a code sketch of its encoding is given after the list):

  1. Fourth joint mechanical limit: a maximum limit of 5.5 rad and a minimum limit of 0.7 rad have been set for the fourth joint of the manipulator in order to prevent it from hitting the robot's own structure.

  2. Obstacle avoidance: as soon as the operator chooses a bottle, the other one automatically becomes an obstacle that the end-effector of the manipulator needs to avoid. A minimum threshold of 25 cm on the distance between the end-effector and the obstacle has been set.

  3. End-effector position and orientation: a sequence of target waypoints for the end-effector position and orientation has been chosen in order to make the manipulator effectively grasp the selected bottle and move it to a specific position.
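As an illustration, the hierarchy above could be encoded for the choose_velocity() sketch of Section IV-A roughly as follows; the robot interface methods and the safety margins are hypothetical, and only the 0.7/5.5 rad joint limits and the 25 cm obstacle threshold come from the values reported here.

```python
import numpy as np

def move_hierarchy(robot, obstacle):
    """Hypothetical encoding of the 'Move' task hierarchy (placeholder robot API)."""
    J4 = np.zeros((1, 7)); J4[0, 3] = 1.0                 # fourth-joint task Jacobian
    set_tasks = [
        # 1. Fourth joint mechanical limit (0.7-5.5 rad); safety margins are illustrative.
        dict(name="joint4_limit", J=J4, value=robot.q[3], gain=1.0,
             act_min=0.7, act_max=5.5, safe_min=0.9, safe_max=5.3),
        # 2. Obstacle avoidance: keep at least 0.25 m from the non-selected bottle.
        dict(name="obstacle", J=robot.distance_jacobian(obstacle),
             value=robot.distance_to(obstacle), gain=1.0,
             act_min=0.25, act_max=np.inf, safe_min=0.30, safe_max=np.inf),
    ]
    # 3. End-effector pose task tracking the current waypoint (equality-based).
    equality_tasks = [(robot.ee_jacobian(), robot.ee_pose_error(), 1.5 * np.eye(6))]
    return equality_tasks, set_tasks
```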

Fig. 5: Frames of the “Move” operation execution

Fig. 6 shows the end-effector position and orientation errors during the entire operation. Fig. 7 shows the set-based task values; it can be seen that all the thresholds of all the set-based tasks are respected. Fig. 5 shows a sequence of snapshots of a performed move mission.


Fig. 6: “Move” operation: a) End-effector position error on the $x$-axis (blue), $y$-axis (yellow) and $z$-axis (red) during the operation; b) End-effector orientation error on the $x$-axis (blue), $y$-axis (yellow) and $z$-axis (red) during the operation


Fig. 7: “Move” operation: a) Fourth joint position (black) and limits (blue) over time; b) Distance from the obstacle (blue) and minimum distance (black) over time

VI-B “Drink” operation

In the second experiment, the user was asked to drink from the water bottle. The task hierarchy is:

  1. Fourth joint limit: same as for the first experiment

  2. Second joint limit: a maximum limit of 5.1 rad and a minimum limit of 1.9 rad have been chosen for the second joint position, in order to avoid a collision between the “elbow” of the manipulator and the table on which the objects are placed.

  3. Obstacle avoidance: same as the first experiment

  4. Bottle top position and orientation: in this case it is necessary to control the position and the orientation of the cap of the grasped bottle rather than those of the end-effector. Similarly to the first experiment, a proper sequence of target waypoints and orientations has been assigned in order to fulfill the operation: grasp the bottle, bring it close to the operator's mouth, let him drink, and reposition the bottle on the table.

Fig. 8: Frames of the “Drink” operation execution

Fig. 9 and Fig. 10 show the results of the experiment. The bottle cap follows the desired positions and orientations with small errors, while respecting all the thresholds of all the set-based tasks, effectively fulfilling the operation. Fig. 8 shows a sequence of snapshots of a performed drinking mission.


Fig. 9: “Drink” operation: a) Bottle top position error on the $x$-axis (blue), $y$-axis (yellow) and $z$-axis (red) during the operation; b) Bottle top orientation error on the $x$-axis (blue), $y$-axis (yellow) and $z$-axis (red) during the operation


Fig. 10: “Drink” operation: a) Fourth joint position (black) and limits (blue) over time; b) Second joint position (black) and limits (blue) over time; c) Distance from the obstacle (blue) and minimum distance (black) over time

VII Conclusions

This paper has presented an architecture for an assistive robotic system aimed at helping users with severe motion disabilities in daily-life operations. The proposed system relies on a P300-based BCI for high-level command detection, a Kinect One sensor for environment perception, and a Kinova Jaco2 lightweight robot manipulator for performing the manipulation tasks. Details of the software modules and of the specific motion control algorithm applied for the application at hand have been described, and an experimental case study involving two different kinds of operations has been reported to prove the effectiveness of the developed system.

Further efforts will mainly concern the improvement of the perception system, of the user interface, and of the motion/interaction control of the robot. More in detail, we want to substitute the marker-based object detection algorithm with a detection algorithm based on the objects' geometrical shapes. Moreover, we want to make the BCI GUI capable of dynamically changing the structure of its layers to adapt to the environment scene, and to replace object icons with images dynamically taken from the Kinect. For the robot control, future activity will focus on more sophisticated obstacle avoidance algorithms to add user-safety levels, and we will consider more suitable control algorithms for the interaction with the environment and for collision detection.

Acknowledgments

The authors wish to thank Alessandro Bria, Gianfranco Miele, Mario Fresilli and Gianluca Mangiapelo for their support.

References

  • [1] G. Antonelli, F. Arrichiello, and S. Chiaverini. The null-space-based behavioral control for autonomous robotic systems. Intelligent Service Robotics, 1(1):27–39, 2008.
  • [2] L. Bi, X. Fan, and Y. Liu. EEG-based brain-controlled mobile robots: a survey. IEEE Transactions on Human-Machine Systems, 43(2):161–176, 2013.
  • [3] T. Carlson and J. Millan. Brain-controlled wheelchairs: a robotic architecture. IEEE Robotics and Automation Magazine, 20(1):65–73, 2013.
  • [4] M. Castrillón, O. Déniz, D. Hernández, and J. Lorenzo. A comparison of face and facial feature detectors based on the Viola-Jones general object detection framework. Machine Vision and Applications, 22(3):481–494, 2011.
  • [5] C. Escolano, J. Antelis, and J. Minguez. A telepresence mobile robot controlled with a noninvasive brain–computer interface. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 42(3):793–804, 2012.
  • [6] R. Fazel-Rezai, B. Allison, C. Guger, E. Sellers, S. Kleih, and A. Kübler. P300 brain computer interface: current challenges and emerging trends. Frontiers in Neuroengineering, 5:14, 2012.
  • [7] A. Frisoli, C. Loconsole, D. Leonardis, F. Banno, M. Barsotti, C. Chisari, and M. Bergamasco. A new gaze-BCI-driven control of an upper limb exoskeleton for rehabilitation in real-world tasks. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 42(6):1169–1179, 2012.
  • [8] V. Gandhi, G. Prasad, D. Coyle, L. Behera, and T. McGinnity. EEG-based mobile robot control through an adaptive brain–robot interface. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 44(9):1278–1285, 2014.
  • [9] S. Garrido-Jurado, R. Muñoz-Salinas, F. J. Madrid-Cuevas, and M. J. Marín-Jiménez. Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recognition, 47(6):2280–2292, 2014.
  • [10] H. Hoenig, D. Taylor, and F. Sloan. Does assistive technology substitute for personal assistance among the disabled elderly? American Journal of Public Health, 93(2):330–337, 2003.
  • [11] R. Leeb, L. Tonin, M. Rohm, L. Desideri, T. Carlson, and J. Millan. Towards independence: a BCI telepresence robot for people with severe motor disabilities. Proceedings of the IEEE, 103(6):969–982, 2015.
  • [12] D. McFarland and J. Wolpaw. Brain-computer interface operation of robotic and prosthetic devices. Computer, 41(10):52–56, 2008.
  • [13] J. Millán, R. Rupp, G. Müller-Putz, R. Murray-Smith, C. Giugliemma, M. Tangermann, C. Vidaurre, F. Cincotti, A. Kübler, and R. Leeb. Combining brain–computer interfaces and assistive technologies: state-of-the-art and challenges. Frontiers in Neuroscience, 4:161, 2010.
  • [14] V. J. Mistry and M. M. Goyani. A literature survey on facial expression recognition using global features. Int. J. Eng. Adv. Technol., 2(4):653–657, 2013.
  • [15] S. Moe, G. Antonelli, A. Teel, K. Pettersen, and J. Schrimpf. Set-based tasks within the singularity-robust multiple task-priority inverse kinematics framework: General formulation, stability analysis and experimental results. Frontiers in Robotics and AI, 3:16, 2016.
  • [16] S. Moe, A. Teel, G. Antonelli, and K. Pettersen. Stability analysis for set-based control within the singularity-robust multiple task-priority inverse kinematics framework. In 54th IEEE Conference on Decision and Control and 8th European Control Conference, pages 171–178, Osaka, Japan, December 2015.
  • [17] C. Moritz, P. Ruther, S. Goering, A. Stett, T. Ball, W. Burgard, E. Chudler, and R. Rao. New perspectives on neuroengineering and neurotechnologies: NSF-DFG workshop report. IEEE Transactions on Biomedical Engineering, 63(7):1354–1367, 2016.
  • [18] United Nations and Andrew Byrnes. From Exclusion to Equality: Realizing the Rights of Persons with Disabilities: Handbook for Parliamentarians on the Convention on the Rights of Persons with Disabilities and Its Optional Protocol. United Nations, Office of the High Commissioner for Human Rights, 2007.
  • [19] H. Riechmann, A. Finke, and H. Ritter. Using a cVEP-based brain-computer interface to control a virtual agent. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 24(6):692–699, 2016.
  • [20] S. Schröer, I. Killmann, B. Frank, M. Völker, L. Fiederer, T. Ball, and W. Burgard. An autonomous robotic assistant for drinking. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pages 6482–6487. IEEE, 2015.