Active Inference for Adaptive BCI: application to the P300 Speller

by Jelena Mladenovic, et al.

Adaptive Brain-Computer Interfaces (BCIs) have been shown to improve performance; however, a general and flexible framework for implementing adaptive features is still lacking. We appeal to a generic Bayesian approach, called Active Inference (AI), to infer the user's intentions or states and to act in a way that optimizes performance. In realistic P300-speller simulations, AI outperforms traditional algorithms, increasing the bit rate by between 18% and 59%, while offering the possibility of unifying various adaptive implementations within one generic framework.








1 Introduction

Adaptive BCIs have been shown to improve performance [mladenovic:hal-01542504]; however, thorough adaptation is far from being reached, and a general and flexible framework for implementing adaptive features is still lacking. We appeal to a generic Bayesian approach, called Active Inference (AI), which tightly couples perception and action [friston2006free]. Endowing the machine with AI enables it: (1) to infer the user's intentions or states by accumulating observations (e.g. electrophysiological data) in a flexible manner, and (2) to act adaptively in a way that optimizes performance. We illustrate AI applied to BCI using realistic P300-speller simulations, and demonstrate that it can implement new features, such as optimizing the sequence of flashed letters, and yield significant bit-rate increases.

2 Material, Methods and Results

Active Inference rests on an explicit probabilistic model of the user and the task. The key variables are the observed data, the user's hidden states, and the machine's actions, as follows.

The observed data, here electroencephalography (EEG) responses to target and non-target stimuli (eliciting a P300 or not) and to feedback stimuli (eliciting Error Potentials, ErrPs, or not), allow the machine to infer the user's hidden states: here, the intention to spell a letter or to pause, as well as the recognition of a target/non-target stimulus or of a correct/incorrect feedback. Depending on the inferred hidden states, the computer can act: here, it can flash in order to accumulate confidence about the target letter, stop flashing and display the chosen letter, or switch off the screen if it infers an idle user state, i.e. no P300 response has been observed for some time.

Each hidden state is mapped onto observations through the data likelihood matrix, which can be learned from calibration data. Given the machine's actions, the transitions between hidden states are modeled by a (Markov) transition probability matrix. We also predefine the preference over all possible outcomes; typically, the preferred outcome is to observe a correctly spelled letter. Finally, a parameter sets the exploration-exploitation tradeoff for action selection.
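The core of this setup, inferring the intended letter from a learned likelihood of classifier outputs, can be sketched as a simple Bayesian belief update. The matrix values, grid size, and function names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

n_letters = 36  # hypothetical 6x6 P300-speller grid

# Likelihood of the single-trial classifier output given whether the
# flashed group contains the intended letter; in practice this would be
# learned from calibration data (values here are made up).
A = np.array([[0.80, 0.15],   # P(classified "target" | target, non-target)
              [0.20, 0.85]])  # P(classified "non-target" | ...)

# Prior belief over the intended letter: uniform before any flash.
belief = np.full(n_letters, 1.0 / n_letters)

def update_belief(belief, flashed, observed_target):
    """Bayesian update of the belief over letters after one flash."""
    in_group = np.isin(np.arange(n_letters), flashed)
    row = 0 if observed_target else 1
    likelihood = np.where(in_group, A[row, 0], A[row, 1])
    posterior = belief * likelihood
    return posterior / posterior.sum()

# Flashing the first row and observing a P300-like response shifts
# probability mass toward letters 0..5.
belief = update_belief(belief, flashed=np.arange(6), observed_target=True)
```

In the full Active Inference scheme, the machine would additionally score candidate flash sequences by their expected information gain under these beliefs, rather than flashing pseudo-randomly.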

We compared AI to two classical approaches:

  1. P300-spelling with a fixed number of flash repetitions (12) and pseudo-random flashing;

  2. P300-spelling with pseudo-random flashing but optimal stopping [mattout2015improving].

To do so, we used data from 18 subjects from a previous P300-speller experiment [perrin2011detecting]. For each algorithm and subject, we simulated the spelling of 12000 letters.

Furthermore, to demonstrate AI's flexibility, we implemented a "LookAway" case, in which the machine infers that the user is in an idle state and switches the screen off. We also simulated ErrP classification, enabling the automated detection of a wrongly spelled letter. Upon such a detection, AI either picks the next most probable letter to spell or chooses to continue flashing to strengthen its confidence.
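The error-handling branch described above can be sketched as a small decision rule. The confidence threshold and function name are illustrative assumptions; in the actual scheme this choice would fall out of expected-free-energy minimization rather than a fixed cutoff:

```python
import numpy as np

CONF_THRESHOLD = 0.7  # hypothetical cutoff for committing to the runner-up

def after_feedback(belief, errp_detected):
    """Decide what to do once a letter has been displayed.

    belief: posterior over the remaining letters (the displayed one excluded).
    errp_detected: whether the ErrP classifier flagged the feedback as an error.
    """
    if not errp_detected:
        return "keep_letter"
    # Error detected: commit to the next most probable letter if the
    # machine is confident enough, otherwise resume flashing.
    if belief.max() >= CONF_THRESHOLD:
        return ("display", int(belief.argmax()))
    return "continue_flashing"
```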

AI showed a significantly higher bit rate (54.12 bit/min) than the second-best strategy (optimal stopping, 45.70 bit/min); see Figure 1. Its performance increased even further when a perfect ErrP classifier was used (73 bit/min). Finally, when idle user states were simulated, AI accurately switched off the speller 89% of the time, after 24 flashes.
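Bit rates of this kind are conventionally computed with the Wolpaw information-transfer-rate formula; whether the authors used exactly this measure is an assumption, but a sketch makes the metric concrete (the example accuracy and selection rate below are made up):

```python
import math

def wolpaw_bitrate(n, p, selections_per_min):
    """Wolpaw information transfer rate in bit/min.

    n: number of possible symbols (e.g. 36 for a 6x6 speller grid)
    p: probability of a correct selection
    """
    if p >= 1.0:
        bits = math.log2(n)
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * selections_per_min

# e.g. a 6x6 grid at 90% accuracy and 12 selections per minute
rate = wolpaw_bitrate(36, 0.9, 12)
```

Note how the rate rewards both accuracy and speed: stopping the flashes early raises selections per minute but lowers p, which is exactly the tradeoff that optimal stopping and Active Inference negotiate.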

Figure 1: Comparison of bit rate (bit/min) between various flashing methods in a P300 BCI application. Data come from the simulated spelling of 12000 letters for each of the 18 subjects recorded in a previous experiment [perrin2011detecting]. All methods significantly differ from one another (ANOVA, Tukey post-hoc).

3 Discussion

Our results demonstrate the great potential of Active Inference for implementing adaptive BCIs beyond existing approaches, with bit-rate increases of 18% and 59% (the latter with an ErrP classifier).

4 Significance

AI outperforms the other algorithms while offering the possibility of unifying various adaptive implementations within one generic framework. Thanks to this genericity, with only a little tuning of its parameters, AI can incorporate many features, such as automated error correction or accounting for an idle user state. It can adjust to signal variability by making inferences about the user, but it can also take into account the influence of its own actions on the user. This approach lays the groundwork for future co-adaptive systems.