The Statistical Model for Ticker, an Adaptive Single-Switch Text-Entry Method for Visually Impaired Users

04/20/2018 ∙ by Emli-Mari Nel, et al. ∙ University of Cambridge

This paper presents the statistical model for Ticker [1], a novel probabilistic stereophonic single-switch text entry method for visually-impaired users with motor disabilities who rely on single-switch scanning systems to communicate. All terminology and notation are defined in [1].

I Letter selections from audio files

In Figure 1(a) a typical composite audio sequence that can be presented to the user is shown, where the composite sequence consists of two repetitions of the alphabet. In Ticker, the user selects one letter at a time while listening to such a sequence; in the example shown, the user can click twice per letter. The second repetition is presented in a different order from the first, which allows the system to infer the intended letter more accurately.

The system does not explicitly make any selection after a click is received; instead the system accumulates evidence. After one or more clicks are received, the system internally updates the posterior word probabilities. It will then proceed to play the composite sequence again for the next letter. When the posterior probability of any word in a pre-defined dictionary is above a certain threshold, that word is selected.

We have shown in [1] how to effectively parallelise the audio input stream: Groups of letters are uttered in the same audio channel by the same person, as illustrated in Figure 1(b). The user is expected to wear headphones. The letter “a” is, for example, always uttered in the user’s left ear by the same voice, whereas the letter “z” is uttered by a different voice in the user’s right ear. If the user is able to focus on a specific voice, the brain will tend to filter everything else out by virtue of the cocktail-party effect. In the shown example, it allows one to play the alphabet twice to the user in just over five seconds. We refer to the main paper [1] for an overview of the system and definitions of terminology and notation.
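To make the disambiguation argument concrete, the following is a hypothetical sketch (not the authors' code): click times are modelled in letter-slot units with Gaussian noise, and a candidate letter must explain the click position in both orderings. The two orderings are taken from the 1-channel line of Figure 2; the noise scale and the slot-based timing model are illustrative assumptions.

```python
import math

# First repetition is alphabetical; the second is the scrambled ordering
# from the 1-channel line of Figure 2.
SEQ1 = "abcdefghijklmnopqrstuvwxyz_."
SEQ2 = "wrmhczupkfaxsnid_vqlgbytoje."

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def letter_posterior(t1, t2, sigma=1.0):
    """Posterior over letters given one click time per repetition.

    Click times are measured in letter slots; a uniform letter prior is assumed.
    """
    scores = {}
    for ch in SEQ1:
        # Evidence from both repetitions multiplies: a letter must explain
        # the click position in *each* ordering, which breaks ties between
        # neighbours that are adjacent in only one of the two orderings.
        scores[ch] = gauss(t1, SEQ1.index(ch), sigma) * gauss(t2, SEQ2.index(ch), sigma)
    z = sum(scores.values())
    return {ch: s / z for ch, s in scores.items()}

# A click slightly late for "a" (slot 0) in repetition 1 is ambiguous with "b",
# but "a" (slot 10) and "b" (slot 21) are far apart in repetition 2.
post = letter_posterior(t1=0.6, t2=10.3)
best = max(post, key=post.get)
```

The second factor crushes the probability of any letter that is far from the click in the scrambled repetition, which is exactly why a permuted second pass sharpens the inference.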

fqwaglrxbhmsycintzdjou_ekpv.dimrwejnsxakotybgpuzcflv_hq.
Fig. 1: (a) A composite audio sequence that can be presented to the user (shown above). Ideally the user should click twice per letter, but an accidental miss can be tolerated. (b) The normalised amplitudes of all sound files for Ticker in five-channel mode. That is, the composite sequence in (a) is presented to the user, where five different voices read the alphabet to the user. Sounds within the same channel are indicated with the same colour.

II The composite audio sequence

In this section we describe how to derive the composite sequence for Ticker in two-channel mode, which is the default setting for Ticker. This setting is suitable for very noisy switches, or when the user is unable to distinguish between sounds in stereophonic mode. In this mode, two clicks per selection are usually necessary, which is directly comparable to standard scanning systems. The setting is intended for situations where the user is conscious and able to memorise the composite audio sequence. A typical user would be compos mentis and use, for example, a blink detector to click within a few seconds of the intended moment.

Due to the serial nature of the interface for this application, it is difficult to devise a technique that adapts the interface dynamically (so that more probable words/letters are easier to select) without increasing the cognitive load too much. We therefore assume the composite audio sequence to be fixed, so that the user can anticipate when a certain letter will be pronounced.

We assume the letters occur in alphabetical order within each clip during the first repetition, so that the user only has to memorise each clip's reordering in the second repetition. For example, in Figure 1(b), the letter sequence abcde of the clip associated with the voice shown in red belongs to the first repetition; the user then has to memorise deabc for the second repetition.

If the user’s click-time precision is noisy, it can be difficult to estimate the intended letter after one click. If the letters that were close to the intended letter in the first repetition occur far away from it in the second, disambiguation becomes considerably easier. Hence, to compute the composite audio sequence, the distances between letters that were close to each other in the first repetition should be large in the second. Since all sound files are assumed to have the same length, this computation can be integerised by considering the number of letters between certain letter pairs. For example, in Figure 1(b), letters r and x are adjacent in the first repetition, whereas they are separated by five letters in the second.
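The integerised distance computation can be checked directly on the two repetitions of the 5-channel sequence from Figure 1(b); `letters_between` is a hypothetical helper name, not the paper's notation.

```python
def letters_between(a, b, seq):
    """Number of letters strictly between a and b in a given ordering."""
    return abs(seq.index(a) - seq.index(b)) - 1

# The two repetitions of the 5-channel composite sequence (Figure 1(b)).
HALF1 = "fqwaglrxbhmsycintzdjou_ekpv."
HALF2 = "dimrwejnsxakotybgpuzcflv_hq."

# r and x are adjacent in the first repetition, but five letters
# apart in the second, reproducing the example from the text.
gap1 = letters_between("r", "x", HALF1)  # 0: adjacent
gap2 = letters_between("r", "x", HALF2)  # 5: well separated
```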

The alphabet consists of the letters a–z plus the characters “_” and “.”. The total number of letters in the composite sequence is then the alphabet length times the number of repetitions, which is not always divisible by the number of channels. This can cause the characters to sound arrhythmic at the beginning and end of a clip, making it more difficult to tune in to a voice. To account for this, some sound files at the beginning/end of a clip were made slightly longer. To further assist the user in controlling his/her timing, two “tick” sounds were added at the beginning of the composite audio sequence to set the pace of the rhythm. The adjusted sound-file lengths and the added “tick” sounds were not part of the interface during the initial user trials; these adjustments were made after user feedback and resulted in a significant improvement.

The computation of the composite audio sequence is performed in two steps. Firstly, the nearest neighbours of each letter in the first repetition are stored. Secondly, the sequence for the second repetition is chosen such that all of the stored neighbours from the first step are at least a minimum number of letters away from the letter in question. This process is repeated to maximise that minimum separation; the maximal separations were computed for 1–5 channels respectively.

There can be several sequences with the same maximal separation. Some of these sequences were further eliminated by restricting successive sounds, as some sounds become indistinguishable when they overlap. Sequences containing any of the following successive letter pairs were removed: {a, h}, {q, k}, {m, n}, {b, d} and {a, i}.
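The two-step computation is not given in code in the paper; the following is a hedged random-restart sketch of the search (the actual optimisation may be more principled), reusing the alphabet and the forbidden adjacency pairs from the text.

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz_."
# Successive-letter pairs excluded because the sounds overlap confusably.
FORBIDDEN = [set(p) for p in ("ah", "qk", "mn", "bd", "ai")]

def neighbours(seq):
    """Adjacent letter pairs in an ordering."""
    return [(seq[i], seq[i + 1]) for i in range(len(seq) - 1)]

def min_separation(seq2, pairs):
    """Smallest distance in seq2 between any pair that was adjacent in seq1."""
    pos = {ch: i for i, ch in enumerate(seq2)}
    return min(abs(pos[a] - pos[b]) for a, b in pairs)

def has_forbidden_adjacency(seq):
    return any(set(p) in FORBIDDEN for p in neighbours(seq))

def search(seq1, trials=2000, seed=0):
    """Random-restart sketch of the two-step computation: maximise the minimum
    distance in the second ordering between letters adjacent in the first."""
    rng = random.Random(seed)
    pairs = neighbours(seq1)
    best, best_d = None, -1
    for _ in range(trials):
        cand = list(seq1)
        rng.shuffle(cand)
        cand = "".join(cand)
        if has_forbidden_adjacency(cand):
            continue  # step two's extra filter from the text
        d = min_separation(cand, pairs)
        if d > best_d:
            best, best_d = cand, d
    return best, best_d
```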

Figure 2(a) depicts the final composite audio sequences in 2D for all channels.

1) abcdefghijklmnopqrstuvwxyz_.wrmhczupkfaxsnid_vqlgbytoje.
2) aobpcqdresftguhviwjxkylzm_n.lwgrb_kvfqazjuepnyitdomxhsc.
3) sajtbkuclvdmwenxfoygpzhq_ir.fmuaqyelsipxdk.howcj_gnvbrz
4) ahovbipwcjqxdkryelszfmt_gnu.bjrzgiqyfnowemuxalp_dhs.cktv
5) fqwaglrxbhmsycintzdjou_ekpv.dimrwejnsxakotybgpuzcflv_hq.
Fig. 2: (a) The final composite audio sequences for 1–5 channels (listed above) and their corresponding 2D plots. Letters from the same clip are represented by the same colour. (b) The codewords associated with all letters in the alphabet for 5 channels (as part of computing the composite audio sequence). The y-axis indicates the desired letter, whereas the x-axis indicates which letter to select (“1”) or not to select (“0”).

The optimisation of the composite audio sequence corresponds to maximisation of the information rate R, measured in bits per second:

R = I(X;Y) / T = [H(X) − H(X|Y)] / T,    (1)

where X is the input set (a list of words that the user intends to write), Y is the output set (the list of words the user writes), I(X;Y) is the mutual information, H refers to the entropy function, and T is the time it takes to produce an output; see [2] for further detail.
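As a small illustration of the quantities in Equation 1 (a generic computation, not the authors' optimisation code), the mutual information of a discrete channel can be computed from a joint probability table and divided by the output time:

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits from a joint probability table joint[x][y]."""
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    info = 0.0
    for i, row in enumerate(joint):
        for j, p in enumerate(row):
            if p > 0:
                info += p * math.log2(p / (px[i] * py[j]))
    return info

# Noiseless binary channel: every input is reproduced exactly -> 1 bit,
# so at T = 2 seconds per output the rate is 0.5 bit/s.
joint = [[0.5, 0.0], [0.0, 0.5]]
rate = mutual_information(joint) / 2.0
```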

We made some simplifying assumptions to construct our approach, both to reduce computational cost and to increase generality of use. Firstly, we assumed a uniform prior for each word, thereby ignoring our dictionary. When the dictionary is not ignored, the optimisation naturally leads to sequences in which the separation between letters that frequently occur close to each other is increased. It may then become much easier to select frequently occurring words, thereby increasing the overall text-entry speed. Note from Figure 2(a) that “o” and “e” are close to each other when considering all sound pairs in the 5-channel configuration. In English, these two vowels frequently occur next to each other, so pairs like “o” and “e” would limit the performance of the system if the user writes in English.

One can also think of Ticker as entering a binary code to write a word; the code becomes longer as the user selects more letters. The intended word can typically be decoded more easily when using longer pseudo-random codes, at the expense of a reduction in speed. This idea relates Ticker to Shannon’s noisy-channel coding theorem. Ignoring the dictionary when optimising Equation 1 reduces the computational complexity considerably, as it reduces the problem to considering only letter codewords (such as those shown in Figure 2(b)).

Some simplifying assumptions of less significance during the optimisation of the composite audio sequence were the following: the audio files of all letters were assumed to have the same length, making the denominator in Equation 1 irrelevant; false-positive and false-negative switch noise were ignored (the user was assumed to click exactly the prescribed number of times for each letter); and a rudimentary click-timing model was assumed.

The above simplifications were applied only during the optimisation of the composite audio sequence, not during inference while using Ticker. During inference, a much more comprehensive noise model is used, which allows for more flexible and smooth click-timing models and can also adapt to the user. The letter priors are also not ignored during inference.

Fig. 3: A visualisation of the binary codes associated with some words.

Figure 3 illustrates that each word in the dictionary has a unique pseudo-random binary code. Each letter is associated with a dot or a line: a dot represents a zero and is associated with the letters that the user should not select while the composite audio sequence is presented, whereas the lines represent the timings of the desired letters (indicating when the user has to click). The system is optimal if the distances between confusable, frequently occurring letters are maximised in binary symbol space. For example, if the codes shown for “ace_” and “act_” have similarly large prior probability mass, the ones associated with “e” and “t” should ideally be as far from each other as possible. The binary codes for letters (shown in Figure 2(b)) then determine whether the latter two letters can easily be distinguished from each other.

III The click-timing distribution

The derivation of the click-timing distribution is shown in Figure 4: an expression for the click-timing likelihood is derived in terms of the set of fixed (untrainable) hyperparameters that controls the distributions over the (trainable) parameters. Figure 4(d)–(e) explains how to compute the products of Gaussians. Let

(2)

Instead of explicitly evaluating all the realisations of the false/true-positive labellings (which quickly becomes infeasible), the latter sum can be calculated recursively, as summarised in Table I. Combining Equation 4 and Equation 2, it then follows that

(3)

where the remaining quantities are defined in Figure 4.

(4)
where the product of Gaussians is defined by Equation 6.
(a)
The observed click times.
The false-negative labels.
The false-positive labels.
The number of times each letter is repeated.
The number of false positives.
The number of true clicks, and the total number of observations.
One of the letters in the alphabet.
The collection of variable (trainable) parameters.
The collection of fixed (hyper) parameters.
(b)
The prior over variable (trainable) parameters.
The average click-time delay (Gaussian prior).
The precision of the click-time delay (Gamma prior).
The false-positive rate parameter (Gamma prior).
The false-negative parameter (Beta prior).
(c)
(6)
(7)
(d) (e)
Fig. 4: (a) To do inference in Ticker, the click-timing likelihood has to be computed; its factorisation is given in Equation 4. The first column of (b) contains the same factorisation, with the values of the probabilities shown in the second column and a description of each term in the third column. (c) The factorisation of the prior is displayed in a similar way. Equation 4 is derived by multiplying all the terms in the second column of (b); likewise, the prior is derived by multiplying all the terms in the second column of (c). A justification of all the models in the second column of (b) and (c) is provided in the text. (d) The labelling function is defined by Equation 6. (e) Example realisations of the labels defined in (d). Each realisation corresponds to labelling each of the received click times as either a false or a true positive. For each such realisation, all possible repetition assignments are considered, which determines the product of Gaussians that should be used.
Initialise:
For each observed letter, construct the matrices that hold the boundary values of the recursion.
For each subsequent recursion step:
Update each entry from its neighbouring entries, applying the stated boundary conditions.
TABLE I: The recursion algorithm for Equation 2. Note that the size of the recursion matrix decreases as the recursion index increases.

IV Training the Click-Timing Model

Online adaptation of the model is based on the most recent word selections and their corresponding letter sequences, which in turn correspond to sequences of received click-time ensembles. An online learning rate is included to limit the influence of erroneous word selections.

The E-M algorithm is used to compute the maximum-a-posteriori (MAP) estimates of our parameters. Only true-positive click times should contribute to the kernel-density estimation, which results in the following mixture model:

(8)

where each component corresponds to the normalised click time associated with an already selected letter. Each true positive is weighted by its responsibility, which by definition sums to one over the hypotheses. Each hypothesis stipulates which repetition of the letter is responsible for a positive. This enables click-time normalisation, which involves subtracting the corresponding starting time of the sound file from the stored letter time. Likewise, a newly received click time is normalised by subtracting the starting time of the corresponding audio file, as specified by the hypothesis.

It is computationally expensive to train the kernel-density model directly. A well-known approximation amounts to firstly approximating the non-parametric distribution with a Gaussian distribution (by using the E-M algorithm in our case); secondly, the standard deviation of the latter Gaussian is scaled according to the normal-scale rule in order to compute the kernel bandwidth [3].
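Silverman's normal-scale rule [3] has a simple closed form; a minimal sketch, assuming the common 1.06 constant for a Gaussian reference density:

```python
import statistics

def normal_scale_bandwidth(samples):
    """Silverman's normal-scale rule of thumb: h = 1.06 * sigma * n**(-1/5)."""
    n = len(samples)
    sigma = statistics.stdev(samples)  # sample standard deviation
    return 1.06 * sigma * n ** (-0.2)

# Illustrative normalised click-time delays (seconds); not measured data.
h = normal_scale_bandwidth([0.0, 1.0, 2.0, 3.0, 4.0])
```

The rule shrinks the bandwidth slowly (as n^(-1/5)) with more data, which is also why a large sample standard deviation in a multimodal distribution inflates the bandwidth, as discussed later in this section.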

The training procedure is followed every time a new word is selected. Table II summarises the E-M update equations for our application, where the generic notation defined in [4] was used for the derivations. At convergence, the Gaussian click-time parameters and the switch-noise parameters are set. These point estimates of the parameters are regularised by the fixed hyperparameters listed in Figure 4.

Expectation:
(9)
which can be derived from Equation 3 for all the observed letters. The expected number of true clicks is given by:
(10)
Maximisation:
The remaining expected quantities can likewise be derived from Equation 3. Following the maximisation step,
(11)
(12)
(13)
(14)
where
(15)
(16)
(17)
(18)
where the terms are, respectively, the number of clicks observed during the selection of a letter, the observed click times, and the beginning of the audio file implied by the labelling.
Kernel bandwidth parameter:
(19)
(20)
TABLE II: The E- and M-steps of the E-M algorithm, used to train the parameters of Ticker’s noise distributions. Following convergence, Equations 19–20 are applied once as the last step of training.

The hyperparameters were chosen to allow a broad range of parameters to be learned, and their effect naturally wears off as more training samples accumulate. The same fixed hyperparameter values were used during the application of the E-M algorithm in all our simulations and final user trials.

After applying the E-M algorithm, each of the parameters listed in the M-step is updated according to the online learning rule. After this step, the normal-scale rule is applied to compute the kernel bandwidth, and the responsibilities from the E-step are then recalculated (see the bottom of Table II). To prevent the system from becoming too slow, we only use the last 1000 selected letters during training and evaluation.
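The paper's exact update rule is not reproduced here; a hedged sketch of a standard damped update with learning rate eta, together with a bounded history of the last 1000 letters, might look like:

```python
from collections import deque

def damped_update(old, new, eta=0.1):
    """Hypothetical online update (assumed form, not the paper's exact rule):
    move a parameter a fraction eta of the way towards its new E-M estimate,
    limiting the influence of any single (possibly erroneous) word selection."""
    return (1.0 - eta) * old + eta * new

# A bounded history keeps only the last 1000 selected letters, as in the text.
click_history = deque(maxlen=1000)
```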

In Nomon [5], there are fewer latent variables, allowing for a straightforward application of the normal-scale rule. Online learning is applied by using linear dampening to reduce the effect of previous samples when updating the kernel-density estimator. This effectively uses an exponential distribution to model the importance of previous samples: as time progresses, the importance of older samples decays exponentially, allowing them to be pruned in a natural way. In our case, older samples are considered just as important as the newest samples during evaluation, implying a uniform distribution. Future work will involve testing the effect of dampening older samples in the same way as Nomon [5].

Some shortcomings inherent to MAP estimation need to be considered. Since MAP estimates are not invariant under a change of basis, one may achieve better results in some cases by applying a non-linear transform to the basis of the probability distribution that models the parameter at hand [6]. This option has not been explored for this application.

Like all MAP estimates, successful training strongly depends on good initialisation, which is done as follows. First, the false-positive and false-negative rates are initialised by measuring the switch noise. The user is then requested to write the word “yes_” during a calibration phase before starting to use the system for the first time. The E-M procedure is then used to train the click-timing parameters while keeping the switch-noise parameters fixed to the measured values. After calibration, all parameters are updated according to the words that are selected. Note that most switch manufacturers specify the typical false-positive and false-negative rates in their documentation, so that they do not have to be measured. A fairly large degree of flexibility in the measurement accuracy is also automatically allowed through the hyperparameters.

In an ideal Bayesian world, one would integrate out all the (trainable) parameters instead of inferring point estimates of them. However, one should consider the effect of this approximation for each application; it is more severe if there is a lack of training data. We have only a few parameters, and the distributions over them are quite simple, which implies that, in our case, we have ample training data: on average, each word selection leads to about ten click times that can be used during training. The error bars (standard deviations of the parameter estimates) in many cases decrease with the familiar 1/sqrt(n) scale factor [7], where n is the total number of click times during training. Thus, ten initial click times, which quickly accumulate to at least 100, together with the relatively slow learning rate and our calibration step, are considered adequate to ensure ample training data.

In practice it was found that our training procedure tends to cause over-smoothing, which limits the entry rate. This is a well-known drawback of the kernel-density estimation described above, and becomes more prevalent when the distribution is multimodal, since the estimated standard deviation can become quite large, causing the kernel bandwidth to become large.

In Nomon [5], it is easier to learn to click precisely, since the procedure (click when the rotating hand reaches noon) is the same for each letter. Firstly, when using Ticker, the user has to get used to the rhythm within a channel, and the rhythm created within a channel is imprecise compared to Nomon [5]: the time at which a sound becomes clearly audible might differ from the ideal click time (determined by the rhythm) by several tens of milliseconds. Secondly, it is more difficult to time the letters at the beginning of the composite audio sequence, even with the added “tick” sounds, since the user has to tune in to a specific voice. By construction, the click-timing distributions in Nomon [5] can therefore be expected to be more unimodal than in Ticker, making over-smoothing less of a problem. It is, however, better to have some probability mass associated with all modes of the click-timing distribution than to have only a unimodal distribution.

A future improvement in Ticker may involve choosing a different click-timing distribution. A Dirichlet process could be a possible choice.

V Language Modelling

Let there be a dictionary of words, where each word consists of a sequence of letters. When necessary, additional notation is used to refer to a particular repetition of a letter. The set of word priors is given by the normalised frequency of each word in the dictionary.

A counting variable keeps track of the letter index. If at least one click is received by the end of the composite audio sequence, the system moves on to the next letter and the counter is incremented. For example, if the user wants to write the word “is_”, the user starts by selecting “i” twice during the presentation of the composite audio sequence. If the system received clicks by the end of the composite audio sequence, the posterior probabilities of all words in the dictionary are updated, and the counter is incremented so that the next letter becomes active. The updated probabilities then become the word priors for the next letter. This update procedure is formalised through Bayes’ rule, which provides new posterior word probabilities each time the counter is updated:

(21)

where the conditioning variable denotes the observed click data. Equation 21 can be approximated as follows:

(22)

where a point estimate of the parameters is used, and the letter indices are given by

(23)

The letter index depends on the letter counter and the word’s length, and allows a word’s posterior to be updated even if the counter exceeds the number of letters in the word.

For example, if the system has to compute the denominator in Equation 22 for the word “is_” after its last letter has been passed, the letter index is clamped to the word’s final letter, which is then used in Equation 22. The point estimate is updated after a word has been selected; more detail about this approximation method is given in Section IV.

If the maximum posterior word probability exceeds a predefined threshold after updating Equation 22, the corresponding word is selected. The selection threshold was chosen to be 0.9 during all the user trials and simulations; that is, if the system is at least 90% certain of the intended word, that word is selected. Note that using the maximum posterior value in this way is just a heuristic, and there may well be better alternatives; other heuristics, such as the sum of the posterior probabilities of the top few words, could be used instead.
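A minimal sketch of the word-posterior update and threshold selection (hypothetical helper names; the letter-likelihood values are illustrative, not Ticker's click-timing model):

```python
def word_letter(word, ell):
    # Indices beyond the word's length map to its final character, so that
    # short candidate words can still be updated (cf. Equation 23).
    return word[min(ell, len(word) - 1)]

def update_word_posteriors(priors, letter_likelihood, ell):
    """One Bayes update (cf. Equation 22): rescale each word's probability by
    the likelihood of the received clicks given that word's ell-th letter."""
    post = {w: p * letter_likelihood[word_letter(w, ell)] for w, p in priors.items()}
    z = sum(post.values())
    return {w: p / z for w, p in post.items()}

THRESHOLD = 0.9  # the selection threshold used in the trials

def maybe_select(posteriors):
    """Select the MAP word once its posterior crosses the threshold."""
    word = max(posteriors, key=posteriors.get)
    return word if posteriors[word] >= THRESHOLD else None

priors = {"is_": 0.5, "in_": 0.3, "it_": 0.2}
# Illustrative click evidence for the second letter, strongly favouring "s":
likelihood = {"s": 0.9, "n": 0.05, "t": 0.05}
post = update_word_posteriors(priors, likelihood, ell=1)
```

With these numbers the posterior of "is_" rises to about 0.95, crossing the 0.9 threshold, so the word is selected.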

VI A Case Study

We conducted a case study with a non-speaking individual with motor disabilities who was unable to communicate on his own using the standard scanning system Grid2 [8]. This user communicated mostly by raising his eyebrows in interactive conversation with his carer, who could also guess well what he was trying to say after he selected a few letters. We automated this process using an Impulse switch attached to the user’s eyebrow muscle and connected to Ticker.

The Impulse switch is quite prone to false positives and drift, especially if the user communicates for a while and his body temperature rises slightly. Since this end-user had vision problems, all visual cues had to be replaced with audio cues.

We trained the end-user to use Ticker in four 2-hour sessions. During the last session the end-user was able to select 20 words (four phrases) at a rate of 1.3 wpm. No time-out errors occurred, and four of the 20 words were wrong; however, from the context one could easily see which words the end-user meant. For example, “throb_” was selected instead of “three_” from the phrase “three_two_one_zero_blast_”. All the other words were selected correctly.

Photos from the first session of the case study described are provided in Figure 5. The case study was done in collaboration with Special Effect [9], a charity based in Oxford. They were present at all sessions, along with the participant’s carer. The staff from Special Effect provided invaluable advice, and also access and visits to other potential users who are not mentioned in this paper; time allowed for only one participant to undergo a full evaluation. The photos were also kindly provided by Special Effect.

Despite the participant’s initial doubt (depicted in Figure 5(d)), he was able, to his own surprise, to select letters easily using Ticker in 5-channel mode. He communicated that he had fun doing the user trials.

(a) (b)
(c) (d)
Fig. 5: (a) The participant usually communicates through the non-automatic system shown. (b) The participant selects letters using Ticker; the poster shown was the only visual assistance he received to help him remember the clips. (c) A close-up of the poster in (b). (d) The participant’s initial reaction to Ticker in 5-channel mode.

VII A Note on Brain-Computer Interfaces (BCI)

In this paper we have focussed on single-switch text-entry methods. However, we believe Ticker’s resilience to noise could make it an ideal candidate for Brain-Computer Interfaces (BCIs), which translate brain activity into computer actions (e.g. [10, 11, 12]). Due to the noise, text-entry rates in BCIs are extremely low [11, 13, 14, 15]. The main contributors to Ticker’s resilience are its predictive-texting traits, its customisable language model, and its robustness to long click-time delays and false positives. Users can initially make use of the 1-channel configuration and gradually progress to more channels if applicable. One might have to repeat the alphabet more than once, as mentioned in Section II.

Converting brain activity into a signal that can reliably control a switch for a scanning system is difficult, as the signal-to-noise ratio is generally low. Several methods have been developed to capture brain activity, each with its pros and cons; see [12] for a summary. Two well-known techniques are Electroencephalography (EEG) and Functional Magnetic Resonance Imaging (fMRI).

EEG is the predominant technology, where electrodes are placed on the head to measure weak electrical potentials [12]. The technique has low spatial resolution (2–3 cm at best) and requires careful setup, but its latency is low (tens of milliseconds). fMRI, on the other hand, has high spatial resolution (around 1 cm) but high latency (5–8 seconds). EEG can therefore capture a (non-specific) brain signal quickly, whereas fMRI can capture the user’s thoughts with much higher accuracy, albeit at a slower rate. Tan and Nijholt [12] mention that a resolution of around 2 cm on the cerebral cortex could make it difficult to distinguish whether the user is listening to music or conducting a hand motion. Text-entry methods controlled by EEG should therefore be highly resilient to false positives, whereas those controlled by fMRI should be highly resilient to long delays.

Due to the noise, text-entry rates are extremely low and typically measured in characters per minute [11]. Blankertz et al. [13] report a text-entry rate of 7.6 char/min when controlling Hex-o-Spell with two switches (only two subjects were tested). Millán et al. [11] mention Hex-o-Spell in the context of state-of-the-art BCI spelling devices (in 2010), and as an improvement on the Thought Translation Device (with a reported text-entry rate of 0.5 char/min).

In a more recent study (2014), Welton et al. [14] report on the use of Dasher in a BCI context. The pioneering study by Wills and MacKay [15] considered Dasher a viable text-entry method in this context due to its personalised language model and the ability to navigate towards a symbol instead of selecting one symbol at a time (making it more resilient to the noisy EEG data); however, there was uncertainty regarding the cognitive load of this visually intensive task. Welton et al. [14] tested seven users with a wide range of disabilities. They found that Dasher-BCI was not the answer for all users, but that it may be viable in some cases and justifies more extensive testing. For example, one user with cerebral palsy who was unable to use the QWERTY keyboard or Dasher-Mouse could use Dasher-BCI, typing at 4.7 char/min.

VIII A Note on Error Corrections in Ticker

Error corrections in Ticker are used only in extreme circumstances, as noise compensation allows for a large variety of implicit error correction.

In some cases, two words can strongly compete against each other, especially if the user clicked inaccurately and the intended word is short. A typical example is “in_” versus “is_”: in 5-channel mode, “n” and “s” are nearest neighbours (see Figure 2(a)). If the user clicked slightly inaccurately while aiming for one of these two letters, the probability would typically be split nearly equally between the two words, say 0.45 and 0.5. The word would then typically have to be repeated until the confusing letter is reached again, at which point the user would typically resolve the previous error.

This problem can, of course, be mitigated by not allowing the letters “n” and “s” to be neighbours in the first place. A second workaround would be to change the word-selection heuristic to evaluate the sum of the top three posterior word probabilities instead; the user could then select between the top three words in some way. This would be slightly complicated, since care has to be taken not to break the user’s thought process in case he/she has to resume letter selections when the intended word is not in the top three. It should be noted, however, that during all user trials and simulations it was found to be extremely rare for the intended word not to end up in the top three words, especially by the time the system fails (which is a rare event in itself).
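The alternative heuristic can be sketched in one line (a hypothetical helper, not part of Ticker's implementation):

```python
def top_k_mass(posteriors, k=3):
    """Summed posterior mass of the top-k candidate words: the alternative
    selection heuristic mentioned in the text would trigger when this sum,
    rather than the single maximum, crosses the threshold."""
    return sum(sorted(posteriors.values(), reverse=True)[:k])

# With "in_" and "is_" splitting most of the mass, the top-3 sum crosses a
# 0.9 threshold long before either individual word does.
mass = top_k_mass({"in_": 0.45, "is_": 0.5, "if_": 0.03, "it_": 0.02}, k=3)
```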

References

  • [1] E. Nel, P. Kristensson, and D. J. C. MacKay, “Ticker: An adaptive single-switch text entry method for visually impaired users,” to appear in IEEE Transactions on Pattern Analysis and Machine Intelligence.
  • [2] ——, “Modelling Noise-Resilient Single-Switch Scanning Systems,” https://arxiv.org/, 2017.
  • [3] B. W. Silverman, Density Estimation for Statistics and Data Analysis.   CRC press, 1986, vol. 26.
  • [4] C. M. Bishop, Pattern Recognition and Machine Learning.   Springer-Verlag, 2006.
  • [5] T. Broderick and D. J. C. MacKay, “Fast and Flexible Selection with a Single Switch,” PLoS ONE, vol. 4, no. 10, 10 2009.
  • [6] D. J. C. MacKay, “Choice of basis for laplace approximation,” Machine Learning, vol. 33, no. 1, pp. 77–86, 1998.
  • [7] ——, Information Theory, Inference and Learning Algorithms.   Cambridge University Press, 2003.
  • [8] Sensory Software International Ltd., “The Grid 2 Reference Manual,” http://sensorysoftware.com/downloads/, Accessed Online 2015.
  • [9] Special Effect, “Accessibility charity,” http://www.specialeffect.org.uk/, Accessed Online 2015.
  • [10] U. Hoffmann, J.-M. Vesin, and T. Ebrahimi, “Recent Advances in Brain-Computer Interfaces,” in IEEE International Workshop on Multimedia Signal Processing (MMSP07), 2007, invited Paper. [Online]. Available: http://www.mmsp2007.org/
  • [11] J. D. R. Millán, R. Rupp, G. R. Müller-Putz, R. Murray-Smith, C. Giugliemma, M. Tangermann, C. Vidaurre, F. Cincotti, A. Kübler, R. Leeb, C. Neuper, K.-R. Müller, and D. Mattia, “Combining brain–computer interfaces and assistive technologies: State-of-the-art and challenges,” Frontiers in Neuroscience, vol. 4, no. 161, 2010.
  • [12] D. S. Tan and A. Nijholt, Eds., Brain-Computer Interfaces: Applying our Minds to Human-Computer Interaction, ser. Human-Computer Interaction Series.   London: Springer Verlag, July 2010.
  • [13] B. Blankertz, M. Krauledat, G. Dornhege, J. Williamson, R. Murray-Smith, and K.-R. Müller, “A note on brain actuated spelling with the Berlin Brain-Computer Interface,” in Universal Access in Human-Computer Interaction. Ambient Interaction, ser. Lecture Notes in Computer Science, C. Stephanidis, Ed.   Springer Berlin Heidelberg, 2007, vol. 4555, pp. 759–768.
  • [14] T. Welton, D. J. Brown, L. Evett, and N. Sherkat, “A brain–computer interface for the Dasher alternative text entry system,” Universal Access in the Information Society, pp. 1–7, 2014. [Online]. Available: http://dx.doi.org/10.1007/s10209-014-0375-y
  • [15] S. A. Wills and D. J. C. MacKay, “Dasher: An efficient writing system for brain-computer interfaces?” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 14, no. 2, pp. 244–246, 2006.