Towards Artificial Learning Companions for Mental Imagery-based Brain-Computer Interfaces

by   Léa Pillette, et al.
Université de Bordeaux

Mental Imagery-based Brain-Computer Interfaces (MI-BCI) enable their users to control an interface, e.g., a prosthesis, by performing mental imagery tasks only, such as imagining a right arm movement, while their brain activity is measured and processed by the system. Designing and using a BCI requires users to learn how to produce a different and stable pattern of brain activity for each of the mental imagery tasks. However, current training protocols do not enable every user to acquire the skills required to use BCIs. These training protocols are most likely one of the main reasons why BCIs remain insufficiently reliable for wider applications outside research laboratories. Learning companions have been shown to improve training in different disciplines, but they have barely been explored for BCIs so far. This article aims at investigating the potential benefits learning companions could bring to BCI training by improving the feedback, i.e., the information provided to the user, which is essential to the learning process and yet has proven both theoretically and practically inadequate in BCI. This paper first presents the potential of BCI and the limitations of current training approaches. Then, it reviews both the BCI and learning companion literature regarding three main characteristics of feedback: its appearance, its social and emotional components, and its cognitive component. From these considerations, this paper draws some guidelines, identifies open challenges and suggests potential solutions to design and use learning companions for BCIs.







1. Introduction

A Brain-Computer Interface (BCI) can be defined as a technology that enables its users to interact with computer applications and machines by using their brain activity alone (Clerc16-v1, ). In most BCIs, brain activity is measured using electroencephalography (EEG), which uses electrodes placed on the scalp to record small electrical currents reflecting the activity of large populations of neurons (Clerc16-v1, ). In a BCI, EEG signals are processed and classified in order to assign a specific command to a specific EEG pattern. For instance, a typical BCI system can enable a user to move a cursor to the left or right on a computer screen by imagining left or right hand movements, each imagined movement leading to a specific EEG pattern (Pfurtscheller01, ). In this article we focus on Mental Imagery-based BCIs (MI-BCIs), with which users have to consciously modify their brain activity by performing mental imagery tasks (e.g., imagining hand movements or mental calculations) (Clerc16-v1, ; Pfurtscheller01, ). MI-BCIs require users to train, i.e., to adapt their own strategies for performing the mental imagery tasks based on the feedback they are provided with. At the end of the training, the system should recognize which task the user is performing as accurately as possible. However, it has been shown, both theoretically and practically, that the existing training protocols do not provide adequate feedback for acquiring these BCI skills (Lotte13, ; Jeunet16, ). This, among other reasons, could explain why BCIs still lack reliability and why around 10 to 30% of users cannot use them at all (LotteHDR2016, ; Neuper10, ). Several experiments showed that taking into account recommendations from the educational psychology field, e.g., providing multisensorial feedback, can improve BCI performance and user experience (Sollfrank16, ; Lecuyer08, ). However, research using social and emotional feedback remains scarce, despite being recommended by educational psychology (Goleman95, ).
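To make the command mapping concrete, here is a minimal, hypothetical sketch (not from the paper) of how a two-class classifier output could be turned into a cursor command, as in the left/right hand example above; the function name and the 0.6 threshold are illustrative assumptions.

```python
# Hypothetical sketch: mapping a motor imagery classifier output to a command.
def output_to_command(p_right, threshold=0.6):
    """p_right: classifier estimate, in [0, 1], that the current EEG pattern
    corresponds to 'right hand' imagery. Returns a command, or None when the
    pattern is too ambiguous to act on."""
    if p_right >= threshold:
        return "move_right"
    if p_right <= 1.0 - threshold:
        return "move_left"
    return None  # ambiguous pattern: issue no command
```

Rejecting ambiguous outputs rather than always issuing a command is one common way to limit the incorrect recognitions mentioned throughout this article.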

Indeed, it has been hypothesized that our social behavior has had a major influence on the development of our brain and cognitive abilities (Dunbar07, ; Ybarra08, ). Social interaction was traditionally involved in the intergenerational transmission of practices and knowledge. However, its importance for learning was acknowledged only recently, with the development of the social interdependence theory, which states that the achievement of one person’s goal, i.e., here learning, depends on the actions of others. Cooperative learning builds on this idea and promotes collaboration between students in order to reach their common goal (Johnson09, ). These theories and methods have shown that learning can be strengthened by social feedback (Izuma08, ; Chou03, ).

Artificial learning companions, which are animated conversational agents involved in an interaction with the user (Chou03, ), could provide such social and emotional feedback. Physiological and neurophysiological data recordings offer the possibility to infer users’ states and traits and to adapt the behavior of the companion accordingly (Burleson07, ; Bent17, ). The training would benefit from the latter: for example, the difficulty of the task could be modulated in order to keep the user motivated. In particular, the feedback provided during the training could be improved, e.g., by adapting it to the emotional state of the user. Learning companions are therefore able to take into account the cognitive abilities and affective states of users, and to provide them with emotional or cognitive support. They have already proven effective for improving the learning of different abilities, e.g., mathematics or informatics (Cabada12, ; Kim05, ). Among all the types of computational supports that enrich the social context during learning (i.e., educational agents), we chose to focus on learning companions because they engage in a non-authoritative interaction with the user, can take on several roles ranging from collaborator to competitor or teachable student, and because several of them with complementary roles could potentially be used together (Chou03, ).

Learning companions could contribute to improving BCI training by, among other things, enriching the social context of BCI. This article aims at identifying the various benefits that learning companions can bring to BCI training, and how they can do so. To achieve this objective, this article starts by detailing the principles and applications of BCI as well as the limitations of current BCI training protocols. Once the keys to understanding BCIs have been provided, this article focuses on three main components of BCI feedback which should be improved in order to improve BCI training. First of all, we study the appearance of feedback, which is one of its most studied characteristics. Second, we study its social component, i.e., the amount of interaction the user has with a social entity during the learning task, and its emotional component, i.e., the feedback components which aim at eliciting an emotional response from the user. Both are still scarcely used in BCI, though the existing results seem promising. Finally, we concentrate on its cognitive component, i.e., which information to provide users with in order to improve their understanding of the task, which represents one of the main challenges in designing BCI feedback. For each of these three feedback components, we review the literature of both the BCI and learning companion fields and deduce from it some guidelines, challenges and potential research directions.

2. Brain Computer Interface Skills

2.1. BCI principles and applications

Since they make computer control possible without any physical movement, MI-BCIs rapidly became promising for a number of applications (Clerc16-v2, ). They can notably be used by severely motor-impaired users to control various assistive technologies such as prostheses or wheelchairs (Millan10, ). More recently, MI-BCIs were shown to be promising for stroke rehabilitation as well, as they can be used to guide stroke patients in stimulating their own brain plasticity towards recovery (Ang15, ). Finally, MI-BCIs can also be used beyond medical applications (vanErp12, ), for instance for gaming, multimedia or hands-free control, among many other possible applications (Clerc16-v2, ). However, as mentioned, despite these many promising applications, current EEG-based MI-BCIs are unfortunately not really usable, i.e., they are neither reliable nor efficient enough (LotteHDR2016, ; Clerc16-v1, ; Clerc16-v2, ). In particular, the mental commands from the users are too often incorrectly recognized by the MI-BCI. There is thus a pressing need to make them more usable, so that they can deliver on their promises.

Controlling an MI-BCI is a skill that needs to be learned and refined: the more users practice, the better they become at MI-BCI control, i.e., their mental commands are recognized correctly by the system increasingly often (Jeunet16e, ). Learning to control an MI-BCI is made possible by the use of neurofeedback (NF) (Sitaram16, ). NF consists in showing users feedback on their brain activity and/or, as with BCI, in showing them which mental command was recognized by the BCI, and how confidently. This is typically achieved using visual feedback, e.g., a gauge displayed on screen, reflecting the output of the machine learning algorithm used to recognize the mental commands from EEG signals (Neuper10, ) (see Figure 1). This guides users to learn to perform the MI tasks increasingly better, so that they are correctly recognized by the BCI. Thus, human learning principles need to be considered in BCI training procedures (Lotte15a, ).

2.2. Limitations of the current training protocol

Currently, most MI-BCI studies are based on the Graz training protocol or on variants thereof. This protocol relies on a two-stage procedure (Pfurtscheller01, ): (1) training the system and (2) training the user. In stage 1, the user is instructed to successively perform a certain series of MI tasks (for example, left- and right-hand MI). Using the recordings of brain activity generated as these various MI tasks are performed, the system attempts to extract characteristic patterns of each of the mental tasks. These extracted features are used to train a classifier, the goal of which is to determine the class to which the signals belong. Then, in stage 2, users are instructed to perform the MI tasks, but this time feedback (based on the system training performed in stage 1) is provided to inform them of the MI task recognized by the system. The user’s goal is to develop effective strategies that will allow the system to easily recognize the MI tasks that they are performing. During such training, participants are asked to perform specific mental tasks repeatedly, e.g., left- or right-hand motor imagery, and are provided with visual feedback shown as a bar indicating the recognized task and the corresponding confidence level of the classifier (see Figure 1).
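The two stages above can be sketched as follows on synthetic data. This is an illustrative toy, not a real Graz implementation: log-variance of a single channel stands in for the feature extraction, and a nearest-centroid rule stands in for the classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_variance(trial):
    # Toy one-dimensional feature; real systems use band power over
    # sensorimotor channels, spatial filters, etc.
    return np.log(np.var(trial))

# Stage 1: system training ("calibration"). The user performs cued MI tasks;
# here the system simply learns one feature centroid per class.
def train(trials, labels):
    feats = np.array([log_variance(t) for t in trials])
    return {c: feats[labels == c].mean() for c in np.unique(labels)}

# Stage 2: user training. Each trial is classified, and the recognized task
# plus a crude confidence score drive the feedback bar.
def classify(trial, centroids):
    f = log_variance(trial)
    dists = {c: abs(f - m) for c, m in centroids.items()}
    best = min(dists, key=dists.get)
    total = sum(dists.values())
    confidence = 1.0 - dists[best] / total if total > 0 else 1.0
    return best, confidence

# Synthetic "EEG" trials: class 0 has low variance, class 1 high variance.
trials = [rng.normal(0, 1.0, 250) for _ in range(40)] + \
         [rng.normal(0, 3.0, 250) for _ in range(40)]
labels = np.array([0] * 40 + [1] * 40)
centroids = train(trials, labels)

task, conf = classify(rng.normal(0, 3.0, 250), centroids)  # a class-1 trial
bar_length = int(conf * 20)  # length of the on-screen feedback bar
```

The important structural point is the split: the classifier is fixed after stage 1, and everything shown to the user in stage 2 derives from its output.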

Figure 1.

Example of the feedback often provided to users during training, here right- and left-hand motor imagery training. At the moment the picture was taken, the user had to imagine moving their left hand. The blue bar indicates which task has been recognized and how confident the system is in its recognition: the longer the bar, the more confident the system is. Here, the system rightly recognizes the task that the user is performing and is quite confident about it (Pfurtscheller01, ).

Unfortunately, such standard training approaches satisfy very few of the guidelines from human learning psychology and instructional design to ensure an efficient skill acquisition (Lotte13, ). For instance, a typical BCI training session provides a uni-modal (visual) and corrective feedback (indicating whether the learner performed the task correctly) (see Figure 1), using fixed and (reported as) boring training tasks identically repeated until the user achieves a certain level of performance, with these training tasks being provided synchronously. In contrast, it is recommended to provide a multi-modal and explanatory feedback (indicating what was right/wrong about the task performed by the user) that is goal-oriented (indicating a gap between the current performance and the desired level of performance), in an engaging and challenging environment, using varied training tasks with adaptive difficulty (Merrill07, ; Shute08, ).

Moreover, it is necessary to consider users’ motivational and cognitive states to ensure they can perform and learn efficiently (Keller08, ). Keller states that optimizing motivational factors - Attention (triggering a person’s curiosity), Relevance (the compliance with a person’s motives or values), Confidence (the expectancy for success), and Satisfaction (by intrinsic and extrinsic rewards) - leads to more user efforts towards the task and thus to better performance.

In short, current standard BCI training approaches are both theoretically (Lotte13, ) and practically (Jeunet16, ) suboptimal, and are unlikely to enable efficient learning of BCI-related skills. Intelligent artificial agents such as learning companions could provide tools to improve several characteristics of BCI training.

3. Building BCI Learning Companions - Existing Tools and Challenges

Learning companions have been defined by (Chou03, ) as follows:

In an extensive definition, a learning companion is a computer-simulated character, which has human-like characteristics and plays a non-authoritative role in a social learning environment.

This definition offers three main points that will be elaborated in the BCI context in the following sections. First, the learning companion must facilitate the learning process, in particular by engaging the learner in a social learning activity. Using an anthropomorphic appearance facilitates this social context. Furthermore, its interventions should be consistent with the general recommendations concerning feedback, which would also contribute to its human-likeness and its efficiency (See Section 3.1).

Second, learning companions are educational agents, i.e., computational supports which enrich the social context during learning (Chou03, ). Such an environment could provide a motivating and engaging context that would favor learning (See Section 3.2).

Finally, the benefit of a learning companion over the other types of educational agents is that its role can greatly vary, from student to tutor, depending on the learning model used and the knowledge that the companion holds. At the moment, an educational agent with an authoritative teacher role is not realistic because of the lack of a cognitive model of the task. Such a model would provide information about how the learner’s profile (i.e., traits and states) influences BCI performance and which feedback to provide accordingly (Jeunet16, ; Jeunet17, ). It would be necessary to understand, predict and therefore improve the acquisition of BCI skills (See Section 3.3).

3.1. Appearance of feedback

As stated above, the appearance of the learning companion greatly impacts its influence on the user. BCI performance is also influenced by the appearance of the feedback provided during training. Therefore, much research has been, and still is, directed toward improving this characteristic of the feedback.

3.1.1. BCI Literature

While it is recognized that feedback improves learning, many authors have attempted to clarify which features enhance this effect (Azevedo95, ; Bangert91, ; Narciss04, ). To be effective, feedback should be directive (indicating what needs to be revised), facilitative (providing suggestions to guide learners) and should offer verification (specifying if the answer is correct or incorrect). It should also be goal-directed by providing information on the progress of the task with regard to the goal to be achieved. Finally, feedback should be specific, clear, purposeful and meaningful. These different features increase the motivation and the engagement of learners (Williams97, ; Hattie07, ; Ryan00, ).

As already underlined in (Lotte13, ), classical BCI feedback satisfies few of these requirements. Generally, BCI feedback is not explanatory (it does not explain what was good or bad, nor why), nor goal-directed, and it does not provide details about how to improve. Moreover, it is often unclear and has no intrinsic meaning for the learner. For example, BCI feedback is often a bar representing the output of the classifier, a concept most BCI users are unfamiliar with.

Recently, some promising areas of research have been investigated. For example, the study in (Kubler01a, ) showed that performance is enhanced when feedback is adapted to the characteristics of the learners. In their study, positive feedback, i.e., feedback provided only for a correct response, was beneficial for new or inexperienced BCI users, but harmful for advanced BCI users.

Several studies also focused on the modalities of feedback presentation. The work in (Ramos12, ) used BCI for motor neurorehabilitation and observed that proprioceptive feedback (feeling and seeing hand movements) improved BCI performance significantly. In a recent study in (Jeunet15a, ), the authors tested a continuous tactile feedback by comparing it to an equivalent visual feedback. Performance was higher with tactile feedback indicating that this modality can be a promising way to enhance BCI performances. The study in (Sollfrank16, ) showed that multimodal (visual and auditory) continuous feedback was associated with better performance and less frustration compared to the conventional bar feedback.

Other studies investigated new ways of providing some task specific and more tangible feedback. In (Frey14, ) and (Mercier14, ), the authors created tools using augmented reality to display the user’s EEG activity on the head of a tangible humanoid called Teegi (see Figure 2), and superimposed on the reflection of the user, respectively.

Figure 2. User visualizing his brain activity using Teegi (Frey14, ).

This research contributes to making feedback more attractive, which can have a beneficial impact. For example, it has been shown that using game-like, 3D or virtual reality environments increases user engagement and motivation (RonAngevin09, ).

3.1.2. Learning Companion Literature

Much research regarding the appearance that would maximize the acquisition of a skill/ability has also been conducted for learning companions. Some main points seem to emerge from it; here are some that could be useful when designing a companion for BCI purposes:

  • A physical, tangible companion seems to increase social presence in comparison to a virtual companion (Hornecker11, ; Schmitz10, )

  • Anthropomorphic features facilitate social interactions (Duffy03, )

  • Physical characteristics, personality/abilities, functionalities and learning function should be consistent (Norman94, )

Interestingly, the influence of learning companions has also been studied using measures of brain activity. For example, using functional Magnetic Resonance Imaging (fMRI), the authors of (Krach08, ) investigated the neural correlates of the attribution of intentions and desires (i.e., theory of mind) for different robot features. Results show that theory-of-mind-related cortical activity is positively correlated with the perceived human-likeness of a robot. This implies that the more realistic the robots, the more people attribute intentions and desires to them.

3.1.3. Future challenges

Feedback that is both adapted and adaptive to users is lacking in both the BCI and learning companion literature. Much research has been, and still is, directed toward identifying the feedback characteristics and learner characteristics that influence BCI performance. However, the often low number of participants in current experiments limits those results, and further research should be conducted to clarify the type of feedback to provide depending on the user’s profile.

Additionally, an interesting research direction could be to use several learning companions, including Teegi or another tangible system that could display the brain activity of the user. Each companion could have a different role, and one of them could be a tutor providing insights about how to interpret the displayed information related to brain activity.

3.2. Social & Emotional feedback

Learning companions are more than just another means of providing feedback. Their main benefit is that they enrich the social context of learning and can provide emotional feedback. As mentioned, BCI training still lacks such elements in its feedback, though the current literature tends to indicate that it would benefit from them.

3.2.1. BCI Literature

Indeed, (Nijboer08, ) showed that mood, assessed prior to each BCI session (using a quality of life questionnaire), correlates with BCI performance. Some BCI experiments provided emotional feedback using smiling faces to indicate to users whether the task they performed had been recognized by the system (Kubler01b, ; Leeb07, ). However, none of these studies used a control group; the impact of such feedback therefore remains unknown for BCI applications. A similar study was conducted in neurofeedback by (Mathiak15, ), who showed that providing participants with emotional and social feedback as a reward enabled better control over the activation of the dorsal anterior cingulate cortex (ACC), monitored using fMRI, than a typical moving bar. The feedback consisted of an avatar’s smile whose width varied depending on the user’s performance: the better the performance, the wider the smile. This type of feedback can be considered as both emotional and social because of the use of an avatar.

The use of social feedback in BCI has been encouraged in several papers (Sexton15, ; Lotte13, ; Mattout12, ). The work in (Izuma08, ) showed that social feedback can be considered as much of a reward as a monetary one. Yet, the influence of a reward has already been demonstrated in BCI. Indeed, it has been shown that a monetary reward can modulate the amplitude of some brain activity, including the activity involved in MI-BCI (Sepulveda16, ; Kleih10, ). However, research on the use of social feedback in BCI remains scarce and often lacks control groups. One of the main original purposes of BCI was to enable users to communicate, and some researchers have created tools to provide such communication in social environments, for example using Twitter (Edlinger11, ), but no comparison was made with an equivalent non-social environment. Studies from (Bonnet13, ), (Obbink12, ) and (Goebel04, ) presented games where users played in pairs, collaborating and/or competing against each other. The study in (Bonnet13, ) found that this type of learning context proved successful in significantly improving user experience and the performance of the best-performing users.

Finally, we explored the use of social and emotional feedback when creating PEANUT (Personalized Emotional Agent for Neurotechnology User Training), the first learning companion dedicated to BCI training (Pillette17, ). Its interventions were composed of spoken sentences and a displayed facial expression, provided between two trials (see Figure 3). The interventions were selected based on the performance and progression of the user. We tested PEANUT’s influence on users’ performance and experience during BCI training using two groups of users with similar profiles: one group was trained to use a BCI with PEANUT and the other without. Our results indicated that the user experience was improved with PEANUT. Indeed, users felt that they were learning and memorizing better when learning with PEANUT. Even though mean performances were unchanged, the variability of the performances in the PEANUT group was significantly higher than in the other group. Such a result might indicate a differential effect of learning companions on users (Burleson07, ).
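A performance-and-progression-based selection rule of this kind could be sketched as follows. The category names and thresholds below are hypothetical illustrations, only loosely inspired by the description of PEANUT; they are not the actual rules of (Pillette17, ).

```python
# Hypothetical intervention selection: map the user's recent classification
# accuracy (performance, in [0, 1]) and its change since the previous block
# (progression) to a category of spoken intervention. Names and the 0.7
# threshold are illustrative assumptions, not PEANUT's real rules.
def select_intervention(performance, progression):
    if performance >= 0.7:
        return "praise" if progression >= 0 else "reassurance"
    return "encouragement" if progression >= 0 else "empathic_support"
```

For example, a user performing well but regressing would receive reassurance rather than praise, keeping the emotional feedback aligned with their trajectory rather than with raw accuracy alone.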

3.2.2. Learning Companion Literature

Several other studies have shown the interest of learning companions as a source of the social connection that is sometimes essential in certain learning situations (Lester97, ; Saerbeck10, ). They can play different roles, such as co-learner or co-tutor, in which they are often called upon to demonstrate certain capacities of social interaction, such as empathy through emotional feedback (Lester97, ) or respect for social norms (Johnson04, ; Saerbeck10, ).

Emotional feedback aims at regulating the emotions of the learner throughout the learning process. Positive emotions are known to improve problem solving, decision-making and creativity, while negative emotions are harmful in these situations (Isen01, ). Previous studies regarding emotional feedback investigated emotional regulation strategies to manage learners’ emotions and behaviors (Beale09, ; Burleson07, ; Mcquiggan10, ). The positive impact of emotional feedback has also been highlighted in some educational contexts (Terzis12, ).

In addition, it is important to adapt the social interaction to each learner. Indeed, it has been shown that a companion that adapts its behavior to the learner’s profile fosters the development of a positive attitude (Gordon16, ).

Learning companions are sometimes embodied in robots to better materialize social presence. Tega is a social companion robot which interprets students’ emotional responses, measured from facial expressions, in a game aimed at learning Spanish vocabulary (Gordon16, ). It approximates the emotions of the learner and, over time, determines the impact of these emotions on the learner, to finally create a personalized motivational strategy adapted to the latter.

To ensure adaptation, machine learning techniques are often deployed. With the advancement of Artificial Intelligence (AI), more efficient techniques are now used to help the companion better learn from the learner’s behavior. In the case of the social companion NICO (a Neuro-Inspired COmpanion robot), the model used for learning emotions and adapting to the user is a combination of a Convolutional Neural Network and a Self-Organizing Map, which recognizes an emotion from the user’s facial expression and learns to express the same (Churamani17, ). The model allows the robot to adapt to different users by associating the perceived emotion with an appropriate expression, which makes the companion more socially acceptable in the environment in which it operates.

Figure 3. Experimental setting where PEANUT (on the left) provides a user with social presence and emotional support adapted to his performance and progression (Pillette17, ).

3.2.3. Future challenges

As mentioned above, assessing users’ emotional states is particularly useful for learning companions. However, doing so reliably remains a challenge, particularly for covert emotions that are not visible in facial expressions. Passive BCI, in which brain activity is analyzed to derive metrics related to the mental state of the user and adapt the system accordingly, could be used for this purpose (Zander09, ). Monitoring emotional states nonetheless remains a challenge, because experiments evaluate their emotion recognition performance against self-reported measures from users. Such experimental protocols assume that people can reliably self-assess their own emotions, which might not be true (Robinson02, ). Furthermore, some of the brain structures involved in emotional states are sub-cortical, e.g., the amygdala (LeDoux95, ), which makes reliably monitoring them using non-invasive EEG an issue (Muhl14, ). Nevertheless, some promising results have been obtained, in particular using EEG (Muhl14, ) (see Table 1).
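As an illustration of what such a passive EEG measure could look like, the sketch below computes frontal alpha asymmetry, a common heuristic valence index in the affective-computing literature; the channel choice (e.g., F3/F4), sampling rate and band limits are assumptions, and the reliability caveats above apply to it as much as to any other EEG emotion marker.

```python
import numpy as np

FS = 256  # sampling rate in Hz (assumed)

def alpha_power(signal, fs=FS, lo=8.0, hi=12.0):
    # Alpha-band power from a plain periodogram; a real pipeline would add
    # windowing, artifact rejection and baseline normalization.
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].sum()

def frontal_alpha_asymmetry(left_ch, right_ch, fs=FS):
    """ln(right) - ln(left) alpha power over frontal channels (e.g., F3/F4);
    higher values are often, though debatably, associated with more positive
    valence."""
    return np.log(alpha_power(right_ch, fs)) - np.log(alpha_power(left_ch, fs))
```

A learning companion could poll such an index between trials to decide whether emotional support is warranted, rather than relying on facial expressions alone.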

It also remains to be evaluated whether using a learning companion can help reduce the fear induced by the BCI setup, which has a detrimental effect on BCI performance (Witte13, ).

Appearance of feedback. Recommendation: BCI feedback should take into consideration recommendations from educational psychology, e.g., be multisensorial (Sollfrank16, ) or attractive (RonAngevin09, ). Challenge: feedback remains mostly unadaptive and/or unadapted, and research toward improving this is made difficult by the often low number of participants. Potential solution: using learning companions to provide task-related feedback and explain to users how their brain activity changes when they perform a task.
Social and emotional feedback. Recommendation: BCI training should be engaging and motivating. Challenge: assessing users’ states, e.g., emotional ones, often remains unreliable and still requires training, and therefore time. Potential solution: passive BCI could be used to monitor the learner’s state, e.g., emotion or motivation, but also the level of attention or fatigue, in order to adapt the training.
Cognitive feedback. Recommendation: the feedback should provide insights and guidance to the user. Challenge: a cognitive model is still lacking and limits the improvement of the training. Potential solution: using an example-based learning companion, which does not require a cognitive model of the task, could improve learning.
Table 1. Summary of the different recommendations, challenges and potential solutions raised in this article.

3.3. Cognitive feedback

Like social and emotional feedback, cognitive feedback constitutes another challenge and a great opportunity to improve BCI training. According to Balzer et al. (Balzer89, ), providing cognitive feedback “refers to the process of presenting the person information about the relations in the environment (i.e., task information), relations perceived by the person (i.e., cognitive information), and relations between the environment and the person’s perceptions of the environment (i.e., functional validity information)”. They suggest that task information is the type of cognitive feedback that influences performance the most. Therefore, providing BCI users with information about how they do versus how they should perform the MI tasks is most likely of the utmost importance.

3.3.1. BCI Literature

Currently, the most commonly used cognitive feedback in MI-BCI is the classification accuracy (CA), i.e., the percentage of mental commands that are correctly recognized by the system (Jeunet16e, ). While informative, this feedback remains only evaluative: it provides some information about how well the learner performs the task, but none about how they should perform it. Some studies have been conducted to enrich this feedback. (Kaufmann11, ) proposed a richer “multimodal” feedback providing information about the task recognized by the classifier, the strength/confidence of this recognition, as well as the dynamics of the classifier output throughout the whole trial. (Sollfrank16, ) chose to add information about the stability of the EEG signals to the standard CA-based feedback, while (Schumacher15, ) added an explanatory feedback based on the level of muscular relaxation to this CA-based feedback. This additional feedback was used to explain poor CA, as a positive correlation had previously been suggested between muscular relaxation and CA. Finally, (Zich15, ) provided learners with a 2-dimensional feedback based on a basketball metaphor: ball movements along the horizontal axis were determined by the classification of contra- versus ipsilateral activity (i.e., between the two brain hemispheres), whereas vertical movements resulted from classifying contralateral activity during the MI interval versus baseline.
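The CA-based feedback discussed above boils down to a simple proportion over trials. A minimal sketch, with made-up task labels for illustration:

```python
def classification_accuracy(decoded, instructed):
    """Percentage of trials in which the command decoded by the
    system matches the task the user was instructed to perform."""
    correct = sum(d == i for d, i in zip(decoded, instructed))
    return 100 * correct / len(instructed)

# Hypothetical run of 8 motor imagery trials
instructed = ["left", "right", "right", "left", "right", "left", "left", "right"]
decoded    = ["left", "left",  "right", "left", "right", "right", "left", "right"]
print(classification_accuracy(decoded, instructed))  # 75.0
```

This single number is exactly the evaluative-only feedback criticized here: it says nothing about how the user should change their mental imagery to improve.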

By adding dimensions to the standard CA-based feedback, these approaches provide the learner with more information about how to improve their performance. Nonetheless, all of them are still mainly based on the CA, which may not be appropriate to assess users’ learning (Lotte17, ). Indeed, CA may not properly reflect successful EEG pattern self-regulation. Yet, learning to self-regulate specific EEG patterns, and more specifically to generate stable and distinct patterns for each MI task, are precisely the skills to be acquired by the learner (Jeunet17, ).
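One alternative direction is to quantify how distinct the EEG patterns of the different MI tasks are, rather than going through CA. As a deliberately simplified, hypothetical stand-in for such distinctiveness metrics, a one-dimensional Fisher-style separability score between the feature distributions of two tasks could look like:

```python
def fisher_separability(class_a, class_b):
    """(difference of class means)^2 / (sum of class variances):
    higher values mean the two feature distributions are more distinct."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    return (mean(class_a) - mean(class_b)) ** 2 / (var(class_a) + var(class_b))

# Hypothetical band-power features for left- vs right-hand imagery trials
left_trials  = [2.1, 2.4, 1.9, 2.2]
right_trials = [4.0, 3.8, 4.3, 4.1]   # well separated from left_trials
overlapping  = [2.0, 2.5, 2.2, 1.8]   # overlaps with left_trials
print(fisher_separability(left_trials, right_trials) >
      fisher_separability(left_trials, overlapping))  # True
```

A feedback built on such a score would track pattern distinctiveness and stability directly, which is closer to the skill the learner must actually acquire.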

3.3.2. Learning Companion Literature

Besides emotional (affective) and social assistance, learning companions can also be designed to provide cognitive support to the learner. In this perspective, many solutions exist in the field of intelligent tutoring systems (ITS), which use computational tools to tutor the learner. For instance, the companion’s strategy can be based on comparing the current student’s learning path with an explicit cognitive model of the task that highlights the different solution paths and skills involved (Aleven10, ). A learning path gathers the actions taken by the learner (providing an answer, asking for help, taking notes, etc.) and the context of these actions (e.g., did the learner attempt an answer before asking for help?). Recognizing learners’ learning paths and the skills they used can also be done using a constraint-based model of the task (Mitrovic10, ) or a model of the task learned with relevant machine learning or data mining techniques. Whatever approach is used, the goal is to create a model within which a learning companion can act, tracking learners’ actions or behavior to determine how they learn and providing them with effective cognitive accompaniment or assistance.
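A learning path is, at its simplest, an ordered log of actions plus their context. A minimal sketch of such tracking; the action kinds and the example query below are illustrative assumptions, not an existing ITS API:

```python
class LearningPath:
    """Record learner actions with their context, in order."""

    def __init__(self):
        self.actions = []

    def record(self, kind, **context):
        # kind: e.g. "answer", "help_request", "note"
        self.actions.append((kind, context))

    def answered_before_asking_for_help(self):
        """Did the learner attempt an answer before requesting help?"""
        for kind, _ in self.actions:
            if kind == "answer":
                return True
            if kind == "help_request":
                return False
        return False

path = LearningPath()
path.record("help_request", topic="left-hand imagery")
path.record("answer", correct=False)
print(path.answered_before_asking_for_help())  # False
```

Queries of this sort are what a companion (or a constraint-based or learned model) would evaluate to decide which cognitive assistance to offer next.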

Alongside these cognitive tutors, example-tracing tutors (Koedinger09, ) have been developed more recently. They elaborate their feedback by comparing the learner’s actual strategy with previously recorded correct and incorrect strategies, which means that they do not require any preexisting cognitive model of the task. This type of tutoring is based on imitating the successful behavior of others. Two types of imitation are possible: studying worked examples, or directly observing someone else performing the task (VanGog10, ).
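At its core, example tracing can be sketched as matching the learner’s partial action sequence against stored example traces labeled correct or incorrect, with no task model needed. The traces and labels below are invented for illustration:

```python
def matching_examples(partial_trace, examples):
    """Return the labels of stored example traces that the learner's
    partial action sequence is still consistent with (prefix match)."""
    n = len(partial_trace)
    return [label for label, trace in examples.items()
            if trace[:n] == partial_trace]

# Hypothetical MI-BCI strategy traces recorded from previous users
examples = {
    "worked_correct": ["relax", "kinesthetic_imagery", "sustain"],
    "common_mistake": ["relax", "visual_imagery", "sustain"],
}
print(matching_examples(["relax", "visual_imagery"], examples))  # ['common_mistake']
```

Once the learner’s trace only matches an incorrect example, the tutor can intervene, e.g., by showing the corresponding worked example of a successful strategy.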

3.3.3. Futures challenges

The second, observation-based type of imitation training has already proven useful in BCI (Kondo15, ): BCI training could be enhanced by having users watch someone perform the motor task they were imagining. Providing users with worked examples, however, has never been tried and might be worth exploring, using a learning companion to deliver those worked examples. To do so, users would have to make explicit the different strategies they used to control the BCI, for instance by teaching them to the companion. This represents a challenge because of the variety of strategies users may employ, which would then have to be analyzed, but also because the verbalization of motor-related strategies is subjective. Interview methods developed for user experience assessment could be adapted to clarify these verbalizations (wilson13, ). Such research could be linked to the semiotic training suggested for BCI, which consists in training participants to improve their capacity to associate their mental imagery strategies with their BCI performances (Timofeeva16, ). The benefit of these methods is that they do not require a cognitive model of the task; moreover, they could help determine learning paths and thus prove useful for developing such a cognitive model.

Indeed, in order to provide more relevant cognitive feedback to BCI learners, we should first deepen our theoretical knowledge of MI-BCI skills and of their underlying processes. Very little work has been performed by the community to model MI-BCI tasks, and thus the skills to be acquired. The challenges to address (see Table 1) are thus the following:

  1. Define and implement a computational cognitive model of MI-BCI tasks (Jeunet17, )

  2. Based on this model, determine which skills should be acquired

  3. Based on these skills, define relevant measures of performance

  4. Based on these measures of performance, design cognitive feedback that helps BCI learners achieve high performance, i.e., acquire the target skills

4. Discussion & Conclusion

In this article, we have shown that BCIs are promising interaction systems enabling users to interact using their brain activity only. However, they require users to train in order to control them, and so far this training has been suboptimal. Here, we hope we have demonstrated how artificial learning companions could contribute to improving this training. In particular, we reviewed how such companions could be used to provide user-adapted and adaptive feedback at the social, emotional and cognitive levels. While there has been substantial research on the appearance of BCI feedback, there is almost none on social, emotional and cognitive feedback for BCI. Learning companions could bridge that gap. Reviewing the learning companion literature, we suggested various ways to make this happen and the corresponding research challenges that will need to be solved. They are summarized in Table 1.

To conclude, the definition from (Chou03, ) (i.e., “a learning companion is a computer-simulated character, which has human-like characteristics and plays a non-authoritative role in a social learning environment”) is especially interesting because it involves an exchange of knowledge between the learner and the learning companion. On the one hand, the BCI trainee could benefit from the social, emotional and cognitive feedback that the learning companion provides; on the other hand, the model maintained by the learning companion could benefit from the learner’s feedback to become better adapted.

Both the psychological profile and the cognitive state of the learner influence the ability to use a BCI, as well as the type of learning companion that is most effective. Therefore, creating models to understand 1) which states users go through while learning, 2) how users’ psychological characteristics and cognitive states influence learning, and finally 3) how to provide feedback adapted to the previous points represents a common goal for the BCI and learning companion fields, from which both could benefit.

Even though we focused on the improvements learning companions could bring to feedback, their potential benefits are not limited to it. For example, they could also be used to assess or limit experimenter bias, which occurs when experimenters’ expectations or knowledge involuntarily influence their subjects (Rosnow97, ). Indeed, learning companions could reduce the need for an experimenter and make it easier to perform double-blind experiments, in which neither subjects nor experimenters know to which experimental group each subject belongs.


This work was supported by the French National Research Agency (project REBEL, grant ANR-15-CE23-0013-01) and the European Research Council (project BrainConquest, grant ERC-2016-STG-714567).


  • [1] V. Aleven, I. Roll, B. M. McLaren, and K. R. Koedinger. Automated, unobtrusive, action-by-action assessment of self-regulation during learning with an intelligent tutoring system. Educational Psychologist, 45(4):224–233, 2010.
  • [2] K. Ang and C. Guan. Brain–computer interface for neurorehabilitation of upper limb after stroke. Proceedings of the IEEE, 103(6):944–953, 2015.
  • [3] R. Azevedo and R. Bernard. A meta-analysis of the effects of feedback in computer-based instruction. J Educ Comp Res, 13(2):111–127, 1995.
  • [4] W. Balzer, M. Doherty, et al. Effects of cognitive feedback on performance. Psychological bulletin, 106(3):410, 1989.
  • [5] R. Bangert-Drowns, C. Kulik, J. Kulik, and M. Morgan. The instructional effect of feedback in test-like events. Review of educational research, 61(2):213–238, 1991.
  • [6] R. Beale and C. Creed. Affective interaction: How emotional agents affect users. International Journal of Human-Computer Studies, 67(9):755–776, 2009.
  • [7] O. Bent, P. Dey, K. Weldemariam, and M. Mohania. Modeling user behavior data in systems of engagement. Futur Gen Comp Sys, 68:456–464, 2017.
  • [8] L. Bonnet, F. Lotte, and A. Lécuyer. Two brains, one game: design and evaluation of a multiuser BCI video game based on motor imagery. IEEE Transactions on Computational Intelligence and AI in games, 5(2):185–198, 2013.
  • [9] W. Burleson and R. Picard. Gender-specific approaches to developing emotionally intelligent learning companions. IEEE Intelligent Systems, 22(4), 2007.
  • [10] R. Cabada, M. Estrada, C. Garcia, Y. Pérez, et al. Fermat: merging affective tutoring systems with learning social networks. In Proc ICALT, pages 337–339, 2012.
  • [11] C. Chou, T. Chan, and C. Lin. Redefining the learning companion: the past, present, and future of educational agents. Computers & Education, 40(3), 2003.
  • [12] N. Churamani, M. Kerzel, E. Strahl, P. Barros, and S. Wermter. Teaching emotion expressions to a human companion robot using deep neural architectures. In Proc IJCNN, pages 627–634, 2017.
  • [13] M. Clerc, L. Bougrain, and F. Lotte. Brain-Computer Interfaces 1: Foundations and Methods. ISTE-Wiley, 2016.
  • [14] M. Clerc, L. Bougrain, and F. Lotte. Brain-Computer Interfaces 2: Technology and Applications. ISTE-Wiley, 2016.
  • [15] B. Duffy. Anthropomorphism and the social robot. Robotics and autonomous systems, 42(3):177–190, 2003.
  • [16] R. I. Dunbar and S. Shultz. Evolution in the social brain. Science, 317(5843), 2007.
  • [17] G. Edlinger and C. Guger. Social environments, mixed communication and goal-oriented control application using a brain-computer interface, volume 6766 LNCS of Lecture Notes in Computer Science. 2011.
  • [18] J. Frey, R. Gervais, S. Fleck, F. Lotte, and M. Hachet. Teegi: Tangible EEG interface. In Proc ACM UIST, pages 301–308, 2014.
  • [19] R. Goebel, B. Sorger, J. Kaiser, N. Birbaumer, and N. Weiskopf. Bold brain pong: Self regulation of local brain activity during synchronously scanned, interacting subjects. In 34th Annual Meeting of the Society for Neuroscience, 2004.
  • [20] D. Goleman. Emotional Intelligence. New York: Brockman. Inc, 1995.
  • [21] G. Gordon, S. Spaulding, J. Westlund, J. Lee, L. Plummer, M. Martinez, M. Das, and C. Breazeal. Affective personalization of a social robot tutor for children’s second language skills. In AAAI, pages 3951–3957, 2016.
  • [22] J. Hattie and H. Timperley. The power of feedback. Review of educational research, 77(1):81–112, 2007.
  • [23] E. Hornecker. The role of physicality in tangible and embodied interactions. Interactions, 18(2):19–23, 2011.
  • [24] A. Isen. An influence of positive affect on decision making in complex situations: Theoretical issues with practical implications. Journal of consumer psychology, 11(2):75–85, 2001.
  • [25] K. Izuma, D. Saito, and N. Sadato. Processing of Social and Monetary Rewards in the Human Striatum. Neuron, 58(2):284–294, Apr. 2008.
  • [26] C. Jeunet, E. Jahanpour, and F. Lotte. Why standard brain-computer interface (bci) training protocols should be changed: an experimental study. Journal of neural engineering, 13(3):036024, 2016.
  • [27] C. Jeunet, F. Lotte, and B. N’Kaoua. Human Learning for Brain–Computer Interfaces, pages 233–250. Wiley Online Library, 2016.
  • [28] C. Jeunet, B. N’Kaoua, and F. Lotte. Towards a cognitive model of MI-BCI user training. 2017.
  • [29] C. Jeunet, C. Vi, D. Spelmezan, B. N’Kaoua, F. Lotte, and S. Subramanian. Continuous tactile feedback for motor-imagery based brain-computer interaction in a multitasking context. In Human-Computer Interaction, pages 488–505, 2015.
  • [30] D. Johnson and R. Johnson. An educational psychology success story: Social interdependence theory and cooperative learning. Educational researcher, 2009.
  • [31] W. Johnson and P. Rizzo. Politeness in tutoring dialogs:“run the factory, that’s what i’d do”. In Intelligent Tutoring Systems, pages 206–243. Springer, 2004.
  • [32] T. Kaufmann, J. Williamson, E. Hammer, R. Murray-Smith, and A. Kübler. Visually multimodal vs. classic unimodal feedback approach for smr-bcis: a comparison study. Int. J. Bioelectromagn, 13:80–81, 2011.
  • [33] J. Keller. An integrative theory of motivation, volition, and performance. Technology, Instruction, Cognition, and Learning, 6(2):79–104, 2008.
  • [34] Y. Kim. Pedagogical agents as learning companions: Building social relations with learners. In AIED, pages 362–369, 2005.
  • [35] S. Kleih, F. Nijboer, S. Halder, and A. Kübler. Motivation modulates the P300 amplitude during brain–computer interface use. Clinical Neurophysiology, 2010.
  • [36] K. Koedinger, V. Aleven, B. McLaren, and J. Sewall. Example-tracing tutors: A new paradigm for intelligent tutoring systems. Authoring Intelligent Tutoring Systems, pages 105–154, 2009.
  • [37] T. Kondo, M. Saeki, Y. Hayashi, K. Nakayashiki, and Y. Takata. Effect of instructive visual stimuli on neurofeedback training for motor imagery-based brain–computer interface. Human movement science, 43:239–249, 2015.
  • [38] S. Krach, F. Hegel, B. Wrede, G. Sagerer, F. Binkofski, and T. Kircher. Can machines think? interaction and perspective taking with robots investigated via fmri. PloS one, 3(7):e2597, 2008.
  • [39] A. Kübler, N. Neumann, J. Kaiser, B. Kotchoubey, T. Hinterberger, and N. Birbaumer. Brain-computer communication: self-regulation of slow cortical potentials for verbal communication. Arch phys med rehab, 82(11), 2001.
  • [40] A. Kübler, B. Kotchoubey, J. Kaiser, J. Wolpaw, and N. Birbaumer. Brain–computer communication: Unlocking the locked in. Psychological bulletin, 127(3):358, 2001a.
  • [41] A. Lécuyer, F. Lotte, R. Reilly, R. Leeb, M. Hirose, and M. Slater. Brain-computer interfaces, virtual reality, and videogames. Computer, 41(10), 2008.
  • [42] J. LeDoux. Emotion: Clues from the brain. Ann rev psych, 46(1):209–235, 1995.
  • [43] R. Leeb, F. Lee, C. Keinrath, R. Scherer, H. Bischof, and G. Pfurtscheller. Brain–computer communication: motivation, aim, and impact of exploring a virtual apartment. IEEE Trans Neur Sys Rehab, 15(4):473–482, 2007.
  • [44] J. Lester, S. Converse, S. Kahler, S. Barlow, B. Stone, and R. Bhogal. The persona effect: affective impact of animated pedagogical agents. In Proc ACM CHI, 1997.
  • [45] F. Lotte. Towards Usable Electroencephalography-based Brain-Computer Interfaces. Habilitation thesis (HDR), Univ. Bordeaux, 2016.
  • [46] F. Lotte and C. Jeunet. Towards improved BCI based on human learning principles. In 3rd International Brain-Computer Interfaces Winter Conference, 2015.
  • [47] F. Lotte and C. Jeunet. Online classification accuracy is a poor metric to study mental imagery-based BCI user learning: an experimental demonstration and new metrics. In 7th International BCI Conference, 2017.
  • [48] F. Lotte, F. Larrue, and C. Mühl. Flaws in current human training protocols for spontaneous brain-computer interfaces: lessons learned from instructional design. Frontiers in human neuroscience, 7, 2013.
  • [49] K. Mathiak, E. Alawi, Y. Koush, M. Dyck, J. Cordes, T. Gaber, F. Zepf, N. Palomero-Gallagher, P. Sarkheil, S. Bergert, M. Zvyagintsev, and K. Mathiak. Social reward improves the voluntary control over localized brain activity in fMRI-based neurofeedback training. Frontiers in Behavioral Neuroscience, 9(June), 2015.
  • [50] J. Mattout. Brain-Computer Interfaces: A Neuroscience Paradigm of Social Interaction? A Matter of Perspective. Frontiers in Human Neuroscience, 6, 2012.
  • [51] S. McQuiggan, J. Robison, and J. Lester. Affective transitions in narrative-centered learning environments. Educational Technology & Society, 13(1):40–53, 2010.
  • [52] J. Mercier-Ganady, F. Lotte, E. Loup-Escande, M. Marchal, and A. Lécuyer. The mind-mirror: See your brain in action in your head using eeg and augmented reality. In Virtual Reality (VR), 2014 IEEE, pages 33–38. IEEE, 2014.
  • [53] M. Merrill. First principles of instruction: a synthesis. Trends and issues in instructional design and technology, 2:62–71, 2007.
  • [54] J. Millán, R. Rupp, G. Müller-Putz, R. Murray-Smith, C. Giugliemma, M. Tangermann, C. Vidaurre, F. Cincotti, A. Kübler, R. Leeb, C. Neuper, K.-R. Müller, and D. Mattia. Combining brain-computer interfaces and assistive technologies: State-of-the-art and challenges. Frontiers in Neuroprosthetics, 2010.
  • [55] A. Mitrovic. Modeling domains and students with constraint-based modeling. Advances in intelligent tutoring systems, pages 63–80, 2010.
  • [56] C. Mühl, B. Allison, A. Nijholt, and G. Chanel. A survey of affective brain computer interfaces: principles, state-of-the-art, and challenges. Brain-Computer Interfaces, 1(2):66–84, 2014.
  • [57] S. Narciss and K. Huth. How to design informative tutoring feedback for multimedia learning. Instructional design for multimedia learning, pages 181–195, 2004.
  • [58] C. Neuper and G. Pfurtscheller. Brain-Computer Interfaces, chapter Neurofeedback Training for BCI Control, pages 65–78. The Frontiers Collection, 2010.
  • [59] F. Nijboer, A. Furdea, I. Gunst, J. Mellinger, D. McFarland, N. Birbaumer, and A. Kübler. An auditory brain–computer interface (BCI). J Neur Meth, 2008.
  • [60] D. Norman. How might people interact with agents. Comm ACM, 37(7), 1994.
  • [61] M. Obbink, H. Gürkök, D. Plass-Oude Bos, G. Hakvoort, M. Poel, and A. Nijholt. Social interaction in a cooperative brain-computer interface game. LNICST. 2012.
  • [62] G. Pfurtscheller and C. Neuper. Motor imagery and direct brain-computer communication. proceedings of the IEEE, 89(7):1123–1134, 2001.
  • [63] L. Pillette, C. Jeunet, B. Mansencal, R. N’Kambou, B. N’Kaoua, and F. Lotte. Peanut: Personalised emotional agent for neurotechnology user-training. In 7th International BCI Conference, 2017.
  • [64] A. Ramos-Murguialday, M. Schürholz, V. Caggiano, M. Wildgruber, A. Caria, E. Hammer, S. Halder, and N. Birbaumer. Proprioceptive feedback and brain computer interface (bci) based neuroprostheses. PloS one, 7(10):e47048, 2012.
  • [65] M. Robinson and G. Clore. Belief and feeling: evidence for an accessibility model of emotional self-report. Psychological bulletin, 128(6):934, 2002.
  • [66] R. Ron-Angevin and A. Díaz-Estrella. Brain–computer interface: Changes in performance using virtual reality techniques. Neur let, 449(2):123–127, 2009.
  • [67] R. Rosnow and R. Rosenthal. People studying people: Artifacts and ethics in behavioral research. WH Freeman, 1997.
  • [68] R. Ryan and E. Deci. Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. Am psych, 55(1):68, 2000.
  • [69] M. Saerbeck, T. Schut, C. Bartneck, and M. Janse. Expressive robots in education: varying the degree of social supportive behavior of a robotic tutor. In Proc CHI, pages 1613–1622, 2010.
  • [70] M. Schmitz. Tangible interaction with anthropomorphic smart objects in instrumented environments. 2010.
  • [71] J. Schumacher, C. Jeunet, and F. Lotte. Towards explanatory feedback for user training in brain-computer interfaces. In Proc IEEE SMC, pages 3169–3174, 2015.
  • [72] P. Sepulveda, R. Sitaram, M. Rana, C. Montalba, C. Tejos, and S. Ruiz. How feedback, motor imagery, and reward influence brain self-regulation using real-time fmri. Human brain mapping, 37(9):3153–3171, 2016.
  • [73] C. Sexton. The overlooked potential for social factors to improve effectiveness of brain-computer interfaces. Frontiers in Systems Neuroscience, 9(May):1–5, 2015.
  • [74] V. Shute. Focus on formative feedback. Rev Edu Res, 78:153–189, 2008.
  • [75] R. Sitaram, T. Ros, L. Stoeckel, S. Haller, F. Scharnowski, J. Lewis-Peacock, N. Weiskopf, M. Blefari, M. Rana, E. Oblak, et al. Closed-loop brain training: the science of neurofeedback. Nature Reviews Neuroscience, 2016.
  • [76] T. Sollfrank, A. Ramsay, S. Perdikis, J. Williamson, R. Murray-Smith, R. Leeb, J. Millán, and A. Kübler. The effect of multimodal and enriched feedback on SMR-BCI performance. Clinical Neurophysiology, 127(1):490–498, 2016.
  • [77] V. Terzis, C. Moridis, and A. Economides. The effect of emotional feedback on behavioral intention to use computer based assessment. Computers & Education, 59(2):710–721, 2012.
  • [78] M. Timofeeva. Semiotic training for brain-computer interfaces. In Proc FedCSIS, pages 921–925, 2016.
  • [79] J. van Erp, F. Lotte, and M. Tangermann. Brain-computer interfaces: Beyond medical applications. IEEE Computer, 45(4):26–34, 2012.
  • [80] T. Van Gog and N. Rummel. Example-based learning: Integrating cognitive and social-cognitive research perspectives. Edu Psych Rev, 22(2):155–174, 2010.
  • [81] S. Williams. Teachers’ written comments and students’ responses: A socially constructed interaction. 1997.
  • [82] C. Wilson. Interview techniques for UX practitioners: A user-centered design method. Newnes, 2013.
  • [83] M. Witte, S. Kober, M. Ninaus, C. Neuper, and G. Wood. Control beliefs can predict the ability to up-regulate sensorimotor rhythm during neurofeedback training. Frontiers in Human Neuroscience, 7, 2013.
  • [84] O. Ybarra, E. Burnstein, P. Winkielman, M. Keller, M. Manis, E. Chan, and J. Rodriguez. Mental exercising through simple socializing: Social interaction promotes general cognitive functioning. Personality and Social Psychology Bulletin, 34(2):248–259, 2008.
  • [85] T. Zander and S. Jatzev. Detecting affective covert user states with passive brain-computer interfaces. In Proc ACII, pages 1–9, 2009.
  • [86] C. Zich, S. Debener, M. De Vos, S. Frerichs, S. Maurer, and C. Kranczioch. Lateralization patterns of covert but not overt movements change with age: An eeg neurofeedback study. Neuroimage, 116:80–91, 2015.