
The Interface Usage Skills Test: An Open Source Tool for Real-Time Quantitative Evaluation for Clinicians and Researchers

Assistive machines afford people with limited mobility the opportunity to live more independently. However, operating these machines poses risks to the safety of the human operator as well as the surrounding environment. Thus, proper user training is an essential step towards independent control and use of functionally assistive machines. A variety of control interfaces can be employed by the human to issue control signals to the device, depending on the residual mobility and level of injury of the human operator. Proficiency in operating the interface of choice precedes skill in operating the assistive machine. In this systems paper, we present an open source tool for quantifying user skill in operating various interface devices.


I Introduction

Assistive machines enable people with motor impairments or other mobility deficits to achieve a higher level of independence, community and social participation, and quality of life. Although these devices expand a person’s capabilities, if handled without proper training, they can also pose safety hazards to the human using the device as well as those around them. Thus, many individuals who could benefit from an assistive machine such as a powered wheelchair are barred from using one because of inadequate control proficiency [18].

However, there is a lack of quantitative measurement of assistive device interface use. Considering the specific case of a powered wheelchair (PW), there is no quantitative assessment of the wheelchair user’s navigational control skills at each step of the process of gaining access to and using a PW—from the introduction of the individual to a PW by a wheelchair professional, to the selection of an appropriate interface by a seating clinician, to ongoing training with a therapist. Seating clinicians use qualitative observations and their prior experience to gauge which settings and interfaces are suitable for the patient. Therapists use the Wheelchair Skills Test (WST) [25], the current clinical standard for measuring wheelchair skills, to evaluate a patient’s performance by assigning a discrete capacity score, again through qualitative observations. Access to analytics of the human’s interface usage skill can help identify areas of deficit, so that clinicians can provide better informed and targeted training and therapy toward the ultimate goal of improving functional assistive machine usage.

In cases where safe and efficient operation of assistive devices such as PWs remains challenging, robotics autonomy can assist users in overcoming control barriers. The goal of any functional assistance—robotics or otherwise—is to maximize the human’s own agency. Assistance should empower users to increase their independence without taking away any of their intact abilities. It is therefore key for robotics autonomy to intelligently customize the amount and type of provided assistance to each user’s needs and skill levels. This again requires a way to accurately measure the areas of deficit and skill the human has in controlling a device with a given interface.

Proficiency in handling the interfacing device precedes proficiency in handling the assistive machine. Prescribing the most appropriate interface device to an individual based on their preferences and capabilities, adjusting the interface to the specific needs and abilities of that person, and subsequently training them in the safe and skilled usage of that interface are integral elements of preparing the human to operate an assistive machine with greater success. For both clinicians and researchers to develop better informed strategies and technologies that overcome the operational challenges still faced by current and potential PW users, it is imperative to have a standard tool for the quantitative assessment of interface usage skill. Yet no such standard assessment tool exists. We fill this gap with the work described in this paper. In this systems paper, we present a real-time analytics suite, packaged as an app that runs on any common computer platform. The tool can be used by clinicians without any additional training and provides on-the-fly statistics about multiple interface usage measures. The raw data is also stored, and can be used for further detailed analysis of custom interface usage characteristics. We additionally contribute a hardware specification for an adaptor that allows common commercial interfaces used for assistive machines to communicate over Bluetooth with our assessment tool.

We first cover a brief background on the relevant literature in Section II. We then provide a detailed description of our Interface Skills Test software and hardware evaluation tools in Sections III and IV, with a discussion of the preliminary results. Implications for clinical and research use are covered in Section V. We conclude with our proposal for future work in Section VI.

II Background

In this section, we present a summary of related work on assistive machine skills assessment tools and characterizing interface usage.

II-A Assistive Machine Skill Measures

In the domain of functional rehabilitation, outcome measures span a wide range, from kinaesthetic and neurophysiological [4] measures of the patient to global outcomes in terms of overall function and community reintegration [26].

The Wheelchair Skills Test (WST) is the state-of-the-art in the clinical assessment of an individual’s ability to drive a PW [1], and is an intermediate-level outcome lying in between the two extremes cited above. This measure consists of various tasks—including ascending and descending slopes or navigating through doorways—and for each task, a capacity and confidence score is chosen by an observing therapist based on the completion of the task and subjective safety. A score of 0 indicates failure to complete the task, 1 indicates the user had some difficulties in completing the task, and 2 indicates they accomplished the task without any difficulty. This measure fails to capture details about the exact difficulties the person experienced in completing specific tasks. Although the WST is a powerful assessment tool, its delivery is subject to clinician training [13].

The Powered Mobility Clinical Driving Assessment (PMCDA) is another assessment tool that is also observational [6]. There are currently no assessments that consider how an individual completes a skill in terms of quantitative measures of safety, such as distance to barriers, speed, or smoothness profiles. Furthermore, in the assistive robotics domain, there is no standard way of assessing user skills. The most common performance measures used to evaluate the efficacy of assistive robotics systems include task completion time, tracking errors, and subjective questionnaires, which do not assess objective user skill [15, 22, 2].

All of these assessments require a specific physical setup, and more importantly, do not provide a quantitative analysis of interface usage, but rather qualitative observations on wheelchair driving skill. We propose that by looking one level higher—at interface operation—and identifying control areas that require improvement, we can also improve wheelchair driving skill.

II-B Interface Usage Characterization

To our knowledge, there is no standard assessment tool for characterizing interface usage. Research studies on interface characterization have mostly focused on neuromuscular and physical human response during manual control to study human sensing and response characteristics [24, 19, 5].

In the domain of assistive technology, clinicians have been surveyed about the usefulness and adequacy of powered wheelchair control interfaces for their patients [11]. The results provide subjective evidence regarding steering and manoeuvring difficulty, and the need to integrate robot autonomy into conventional powered wheelchairs. However, these results do not provide quantitative information on interface usage, which could be exploited by assistive autonomy. Novel interface technologies have been developed for those with severe motor impairments, including an isometric joystick [9]. The authors compared task completion time and accuracy of the novel interface to a conventional joystick within a control population, but not an end-user population. In another work, the authors introduce a novel BMI interface and compare user accuracy and performance between the target spinal cord injured (SCI) group and an uninjured control group, but do not compare the performance against any conventional interfaces [10].

In the human-robot interaction domain, a study investigated the influence of video game usage on human-robot team performance [23]. The authors classify interfaces based on their inputs and outputs and use this information to create a framework for systematically evaluating interfaces in the human-robot interaction (HRI) domain.

Prior work in remote teleoperation has also studied the effect of time delay and communication channel degradation on the quality of teleoperation and manual control [14].

III Interface Skills Test

In prior work, we introduced a variety of metrics to characterize interface usage skill [17]. In this work, we expand upon these measures and design an open-source application that computes these measures online in real-time and stores the data within a user profile. The application is easy to use for anyone familiar with a smartphone, tablet, or computer. No training is needed to use this tool, and analytical graphs are available to the clinician and researcher, as well as to the human operator of the assistive machine. In this section, we introduce the Interface Skills Test and describe the key variables that impact interface usage, the configurable settings, and the outcome measures.

III-A Tasks

The assessment consists of a series of tasks that evaluate qualities of the human input including speed, precision, and stability: all necessary criteria for a human to be able to properly command an assistive machine.

Fig. 1: Interface Skills Test assessment task menu.

This is done through two distinct tasks: a command following task and a trajectory following task. All tasks are designed in a simulated environment so that uncertainties from real world dynamics do not corrupt the interface usage performance measures.

Individual user profiles can be made and the tasks can be selected via a menu (Figure 1). Each task can be reconfigured as described in Section III-C.

III-A1 Command Following

The command following task is designed to uncover a patient’s ability to respond to a visual command stimulus in terms of response accuracy, speed, and stability (Fig. 2(c)). In this task, a white arrow—the command prompt—appears on the screen pointing in different directions in a random and balanced sequence. The default direction settings include the four cardinal and four inter-cardinal angles. The length of the arrow also changes in a random and balanced order, to measure how well the human can scale inputs to the prompted command. The human is instructed to issue a command of the same direction and magnitude (if the interfacing device allows for scaling) as soon as they see the command prompt, and to continue issuing the command uninterrupted for the duration of the prompt. The blue arrow is feedback of the actual command issued by the human.
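The "random and balanced" prompt sequence can be sketched as follows—a minimal illustration, assuming that balance simply means every target direction appears equally often in a shuffled order (the function and constant names here are ours, not the app's):

```python
import random

def balanced_prompt_sequence(directions, repetitions, seed=None):
    """Build a prompt sequence in which every target direction appears
    the same number of times, presented in shuffled order."""
    sequence = [d for d in directions for _ in range(repetitions)]
    rng = random.Random(seed)  # seeding makes a session reproducible
    rng.shuffle(sequence)
    return sequence

# Default settings described in the text: four cardinal and four
# inter-cardinal angles, each prompted twenty times.
DEFAULT_DIRECTIONS = [0, 45, 90, 135, 180, 225, 270, 315]  # degrees
prompts = balanced_prompt_sequence(DEFAULT_DIRECTIONS, repetitions=20, seed=7)
```

The same scheme would extend to prompt magnitudes by shuffling (direction, length) pairs instead of directions alone.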

III-A2 Trajectory Following

The trajectory following task is designed to evaluate how well the human is able to follow a predefined path, assessing signal integrity in terms of smoothness, ability to give corrections, and directness of the human command. Trajectory following can be thought of as the inherent ability to generate commands to follow waypoints while using visual feedback correction. The ability to follow a trajectory with a single known goal—without interference from the wheelchair dynamics and external sources of noise—aims to uncover how a person’s intended goal may differ from the signal they output through the interface. The task consists of controlling the motion of a 2D simulated wheelchair (the yellow pentagon shape in Fig. 2(a) and Fig. 2(b)) along a predefined path. The path is demarcated to indicate motion along goal posts. The trajectory path begins with a square path, followed by a curved path. Only the path in the immediate vicinity of the wheelchair is visible at any given moment (as in Fig. 2(a)). The patient is instructed to stay within the bounds of the clearly marked path and to avoid going into the out-of-bounds grey area. The square and curved paths are designed such that they contain the basic commands covered by general interfacing devices used for 2D assistive machines. The square path consists of two forward, two backward, two left turn, and two right turn trajectories. The curved path consists of two long arcs and two small arcs.

Fig. 2: Study tasks. (a) The square and (b) curved path portions of the trajectory following task. (c,d) Command following task. The white arrow is the target prompt and the blue arrow is the human response.

III-B Key Variables

A multitude of variables impact the human’s interface usage characteristics while operating an assistive machine. We group these variables into four distinct categories [20]:

  • Task Variables. Factors such as the dynamics of the device being controlled (e.g., rear-wheel vs. mid-wheel drive wheelchair), the mechanics of the control interface, as well as what information is available to the human and how it is presented.

  • Environmental Variables. Factors such as temperature, whether the task is indoors or outdoors, or noise pollution.

  • Internal Variables. Factors such as internal motivation, training and skill, fatigue, and stress that impact the internal states of the human.

  • Procedural Variables. Features of the experimental design, such as the instructions for the given task and order of task presentation.

The effect of procedural variables is minimized in the Interface Skills Test by standardizing the instructions for each task within the app. To control for the remaining variables, the clinician, test taker, or test giver can input information using Likert-scale questionnaires prior to and after a given task.

III-C Configurable Settings

The assessment tool is designed with a variety of configurable input settings through a user-friendly GUI. The configurable measures consist of information relating to task variables, environmental variables, and the human’s internal variables, as well as configurable settings pertaining to how the tasks are presented to the human test taker.

The controlled covariates for documenting the human’s internal state are collected through a series of Likert-type questionnaires administered directly within the app. Documenting these variables is not necessary for running the assessment, but it is important and recommended to keep track of these as they may inform larger trends in the human’s interface usage skill characteristics.

  • Fatigue: Measured via the Fatigue Scale, an 11-item Likert-type questionnaire [8].

  • Motivation: Measured via the Intrinsic Motivation Inventory (IMI) [21].

  • Workload: Measured using the raw NASA-TLX, a shortened version of the full NASA-TLX assessment [16].

  • Pain: Measured via the Numerical Pain Rating Scale which consists of a list of pain-level descriptors [7].

  • Stimulant consumption: Text entry question.

  • Confidence: Likert scale question on how confident the human is in their ability to use the interfacing device.

  • Stress: Measured via the Perceived Stress Questionnaire [12].

Other control variables can also be documented that monitor environmental and task variables:

  • The interface used during the test.

  • How often the interface is used daily by the patient.

Fig. 3: Menu for reconfigurable settings for the command following task.

The independent variables used within the command following and trajectory following tasks which can be reconfigured by the clinician and test-taker include:

  • Set of target control commands. This also includes the choice of selecting only directions or magnitudes of the commands. The default is the four cardinal and four inter-cardinal angles.

  • Number of times each target command is prompted. The default is set to twenty.

  • Range of time each target prompt is displayed. This is the amount of time each command prompt is visible and the time the user has to respond. The default range is 1–2 seconds.

  • The order of the trajectory following tasks.
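As a sketch, the reconfigurable settings above could be collected into a single configuration object carrying the stated defaults; the class and field names are illustrative assumptions, not the app's actual API:

```python
from dataclasses import dataclass

@dataclass
class CommandFollowingConfig:
    """Reconfigurable settings for the command following task,
    with the defaults described in the text (names are illustrative)."""
    target_commands: tuple = (0, 45, 90, 135, 180, 225, 270, 315)  # degrees
    use_magnitudes: bool = True         # prompt magnitudes as well as directions
    prompts_per_command: int = 20       # times each target command is shown
    prompt_duration_range: tuple = (1.0, 2.0)  # seconds each prompt is visible

    def total_prompts(self) -> int:
        """Total number of prompts in one test with these settings."""
        return len(self.target_commands) * self.prompts_per_command

cfg = CommandFollowingConfig()
```

With the defaults, a session would comprise 8 × 20 = 160 prompts; the clinician edits only the fields that deviate from the defaults.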

III-D Outcome Measures and Scoring

One of the main contributions of our work is that the assessment is calculated using strict closed-form equations, and that the results do not depend on qualitative observations of the test taker. Additionally, outcome measures are calculated while the assessment is being administered, and results are available immediately following the conclusion of the assessment.

A summary of the performance statistics is available immediately after the conclusion of a test trial, accessed via the GUI. The outcome measures available immediately include:

  • Average response delay: The average of all time differences between target command prompts and the first instance when the patient issues the correct command,

    $\bar{d} = \frac{1}{N}\sum_{i=1}^{N}\left(t^{r}_{i} - \tau_{i}\right)$, where $t^{r}_{i}$ is the time of the first within-tolerance human command.

  • Average successful response percent: The percentage of command prompts to which the patient is able to successfully respond,

    $\frac{100}{N}\sum_{i=1}^{N} s_{i}$, where $s_{i} \in \{0, 1\}$ is a tracking index that is 1 if prompt $i$ receives a within-tolerance response.

  • Average settling time: The time it takes for the patient to steadily issue the prompted command within an allowable tolerance of $\epsilon$,

    $\frac{1}{N}\sum_{i=1}^{N}\left(t^{s}_{i} - \tau_{i}\right)$, where $t^{s}_{i}$ is the time at which the human command settled to within-tolerance.

  • Average stability: Measured as the dimensionless jerk of the patient trajectory,

    $-\frac{(t_{2}-t_{1})^{3}}{v_{max}^{2}}\int_{t_{1}}^{t_{2}}\left(\frac{d^{2}v}{dt^{2}}\right)^{2}dt$, where $v$ is the speed, $t_{1}$ and $t_{2}$ are the start and end times, and $v_{max}$ is the maximum of $v$.

  • Control accuracy: How close the average patient response is to the target command response.

  • Average speed: The average speed during the trajectory following task.

  • Percentage of time out of bounds: The percentage of time during the trajectory following tasks when the simulated wheelchair is outside the indicated path barriers,

    $\frac{100}{T}\sum_{j}\left(t^{in}_{j} - t^{out}_{j}\right)$, where $t^{out}_{j}$ and $t^{in}_{j}$ are the times the 2-D wheelchair went outside and came back into the barrier, respectively, and $T$ is the total task duration.

Throughout, $N$ is the total number of target command prompts, $u^{*}_{i}$ is the target command prompt, which occurs at time $\tau_{i}$ for a duration of $\Delta t_{i}$, $u(t)$ is the patient command at time $t$, and $\epsilon$ is the tolerance, set to a fixed percentage of the target command.
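A minimal sketch of how the command following outcome measures might be computed from logged data follows, assuming a hypothetical per-prompt record layout (the app's internal data structures may differ):

```python
def response_metrics(prompts, tolerance):
    """Compute per-trial outcome measures from logged prompt records.

    Each record is a dict with:
      't0'      - prompt onset time (s)
      'target'  - prompted command value
      'samples' - list of (t, u) pairs: timestamped user commands
    A sample is "within tolerance" when |u - target| <= tolerance.
    (This record layout is an illustrative assumption, not the
    app's actual schema.)
    """
    delays, settle_times, successes = [], [], 0
    for p in prompts:
        in_tol = [(t, u) for t, u in p['samples']
                  if abs(u - p['target']) <= tolerance]
        if in_tol:
            successes += 1
            # Response delay: first within-tolerance command after onset.
            delays.append(in_tol[0][0] - p['t0'])
            # Settling time: last moment the command was out of tolerance
            # (after which it stayed within tolerance).
            last_out = max((t for t, u in p['samples']
                            if abs(u - p['target']) > tolerance),
                           default=p['t0'])
            settle_times.append(last_out - p['t0'])
    n = len(prompts)
    return {
        'avg_response_delay': sum(delays) / len(delays) if delays else None,
        'success_percent': 100.0 * successes / n,
        'avg_settling_time': (sum(settle_times) / len(settle_times)
                              if settle_times else None),
    }
```

Because the measures are closed-form functions of the logged samples, two administrators running the same log through this function get identical scores, which is the reliability property the text emphasizes.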

Additionally, the detailed raw data used to calculate the summary statistics for each test condition are stored as an SQL file that can be accessed for further analysis at any time.
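The raw-data storage could look like the following sqlite3 sketch; the table schema and column names are illustrative assumptions, not the app's actual layout:

```python
import sqlite3

# An in-memory database is used here for illustration; the app would
# persist to a file so the data can be re-analyzed at any time.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE raw_samples (
        profile_id  TEXT,  -- user profile the sample belongs to
        task        TEXT,  -- 'command_following' or 'trajectory_following'
        t           REAL,  -- timestamp (s)
        target      REAL,  -- prompted command (NULL for trajectory task)
        response    REAL   -- command issued through the interface
    )""")
samples = [("user01", "command_following", 0.1, 90.0, 87.5),
           ("user01", "command_following", 0.2, 90.0, 90.0)]
conn.executemany("INSERT INTO raw_samples VALUES (?, ?, ?, ?, ?)", samples)
conn.commit()

# Later analysis can query the raw data directly, e.g. per profile.
rows = conn.execute(
    "SELECT COUNT(*) FROM raw_samples WHERE profile_id = ?", ("user01",)
).fetchone()
```

Storing timestamped raw samples rather than only summary statistics is what allows custom measures to be computed after the fact.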

III-E Practicality

In terms of practicality, we designed our assessment tool to require minimal equipment, which reduces cost, space requirements, and set-up time. The only equipment needed is the person’s own electric assistive machine (e.g., a powered wheelchair), a tablet or any device able to run the assessment application, and our interfacing device if the interface is not Bluetooth capable.

III-F Usability

In terms of usability, we find that the full assessment can be completed in one session lasting 30–60 minutes with the default settings, without taxing the patient or the experimenter. The duration can be adjusted to be longer based on the reconfigurable independent variables, as deemed appropriate by the experimenter. Simple and clear instructions for each task are contained within the app itself, so that the tester can easily administer the assessment. There have been no adverse incidents with the assessment tool, as all tasks are simulation-based. The start and end of each test is also clearly defined.

III-G Reliability

Our motivation was to design an assessment tool that produces outcome measures that are repeatable, consistent, precise, and immune to test-taker bias. The outcome measures are calculated automatically, which preserves scoring reliability across various test takers and across sessions for a single patient profile. For example, to calculate the total percentage of time the simulated wheelchair is outside of the allowed bounds during the trajectory following task, we use definitive geometries, as shown in Figure 4 (the boundaries are designed to account for human visual capabilities).

Fig. 4: Bounding geometries used for calculating out-of-bound duration of the simulated wheelchair.
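As an illustration of how definitive bounding geometries yield a repeatable out-of-bounds score, the in-bounds region can be approximated as a union of axis-aligned rectangles and poses sampled at a fixed rate; this is a simplification of the geometries in Fig. 4, not the app's implementation:

```python
def percent_out_of_bounds(positions, path_rects):
    """Percentage of sampled poses lying outside the allowed path.

    positions  - list of (x, y) wheelchair positions, sampled at a
                 fixed rate so counts are proportional to time
    path_rects - in-bounds region as a union of axis-aligned
                 rectangles (xmin, ymin, xmax, ymax); a simplified
                 stand-in for the paper's bounding geometries
    """
    def inside(p):
        x, y = p
        return any(xmin <= x <= xmax and ymin <= y <= ymax
                   for xmin, ymin, xmax, ymax in path_rects)
    out = sum(1 for p in positions if not inside(p))
    return 100.0 * out / len(positions)
```

Because the geometry test is deterministic, the score depends only on the logged trajectory, never on who administers the assessment.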

IV Hardware Connection

Some modern powered wheelchairs have Bluetooth-enabled interfaces that allow the interface to connect to digital devices such as computers, smart phones, tablets, etc. However, many PWs still lack this capability. We have designed a multi-interfacing device that connects to various common interfaces used for the control of powered wheelchairs, such as joysticks, switch-based headarrays, and sip/puff devices. This device serves as a connection that communicates signals from the control interface over Bluetooth, which can then be detected by any device with the interface usage skill measure app. With our open source design, any PW can be used to measure interface usage skill.

This work was inspired by the Freedom Wing adaptor by AbleGamers [3]. The novel contribution of our work is to replace the wired connection with Bluetooth and to allow for various types of interface connections.

The hardware required is minimal and includes off-the-shelf components. A Raspberry Pi (Model B+ was tested) acts as a bridge by relaying commands from the assistive device to our app over Bluetooth; the source code for the device is available at https://github.com/FrogJin/WheelchairBLEGamepad. The Raspberry Pi emulates an XBox 360 controller, converting commands from the assistive interface into button presses and joystick values on the controller. An additional PiCAN2 board is used to allow CAN bus interface connections to the Raspberry Pi, which is required for some R-Net type interface devices. A diagram of the hardware bridge is shown in Figure 5.
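As a sketch of the bridge's translation step, the function below maps a joystick frame payload to normalized gamepad axes. The 2-byte signed layout is a hypothetical assumption for illustration—the actual R-Net CAN payload is proprietary and differs:

```python
def frame_to_axes(data):
    """Map a 2-byte joystick frame to normalized gamepad axes in [-1, 1].

    Assumes a hypothetical frame layout in which bytes 0 and 1 carry
    signed 8-bit X and Y deflections; the real R-Net CAN payload is
    proprietary and differs. On the Pi, values like these would be
    forwarded over Bluetooth as emulated controller stick positions.
    """
    def to_signed(b):
        # Interpret an unsigned byte as a signed 8-bit value.
        return b - 256 if b > 127 else b

    def clamp(v):
        # Keep a full negative deflection (-128/127) inside [-1, 1].
        return max(-1.0, min(1.0, v))

    x = to_signed(data[0]) / 127.0
    y = to_signed(data[1]) / 127.0
    return clamp(x), clamp(y)
```

In the full bridge, frames read from the PiCAN2 interface would be decoded like this and then emitted as joystick events over the Bluetooth gamepad profile.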

The hardware connection was tested with R-Net switch-based headarray and joystick interfaces on Permobil M3 and F3 Corpus powered wheelchairs.

Fig. 5: R-Net and USB-A interface to Bluetooth joystick.

V Discussion

The goal of designing this assessment tool is to improve training and wheelchair navigation performance by documenting initial and subsequent interface usage skill and characteristics via reliable, repeatable, and objective outcome measures. Reliable and objective measurement instruments are needed not just for providing informed care to patients, but also for testing research hypotheses, comparing outcomes, and assisting in the development of new technologies.

The scoring for all tests is digitized and analytical, so the outcome measures are not subject to experimenter bias.

This assessment also makes possible the study of other interesting research questions. For example, there is potential in using this tool to identify how long-term therapy affects interface usage skill. The tool also allows for identifying how various key variables affect different qualities of the human input during interface usage. Furthermore, the tool may aid in deciding, with evidence-based measurements, which interfaces and which settings are more suitable for a particular individual. There is also the potential to evaluate how various autonomous robotics assistance interventions affect—whether improving or degrading—the patient’s interface usage skill.

VI Conclusion and Future Work

In this systems paper, we presented the Interface Skills Test: an assessment tool for evaluating various qualities of interface usage. We anticipate that with automated outcome measure calculations, this assessment tool can aid in understanding a patient’s interface usage skill and in diagnosing appropriate solutions to overcome deficiencies. This contribution can potentially enhance the quality of clinical care and also allow robotics researchers to design customized and intelligent assistance algorithms.

The current version of this assessment tool is limited to 2D assistive machines. In future work, we will expand the tool to cover higher dimensional machines, such as robotic arms. Additionally, future iterations will include additional tasks and measures, such as reachability and operation range.

We have currently tested the hardware bridge with R-Net controllers. Our next iteration will expand to other common wheelchair controller types. We also plan to evaluate test-retest reliability, content validity, and usefulness to clinicians via a within-subject study.

Acknowledgment

This material is based upon work supported by the National Science Foundation under Grant IIS-1552706. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

References

  • [1] Cited by: §II-A.
  • [2] M. A. Goodrich, E. R. Boer, J. W. Crandall, R. W. Ricks, and M. L. Quigley (2004) Behavioral entropy in human-robot interaction. Proceedings of Performance Metrics for Intelligent Systems, pp. 24–26. Cited by: §II-A.
  • [3] Bill (2020-01) Ablegamers & atmakers team up for the freedomwing!. External Links: Link Cited by: §IV.
  • [4] M. L. Boninger, R. A. Cooper, M. A. Baldwin, S. D. Shimada, and A. Koontz (1999) Wheelchair pushrim kinetics: body weight and median nerve function. Archives of physical medicine and rehabilitation 80 (8), pp. 910–915. Cited by: §II-A.
  • [5] J. Burchfield, J. Elkind, and D. Miller (1967) On the optimal behavior of the human controller: a pilot study comparing the human controller with optimal control models. Bolt Beranek and Newman Inc., Rept 1532. Cited by: §II-B.
  • [6] J. L. Candiotti, D. C. Kamaraj, B. Daveler, C. S. Chung, G. G. Grindle, R. Cooper, and R. A. Cooper (2019-04) Usability evaluation of a novel robotic power wheelchair for indoor and outdoor navigation. Archives of Physical Medicine and Rehabilitation 100, pp. 627–637. External Links: Document, ISSN 1532821X Cited by: §II-A.
  • [7] A. Caraceni, N. Cherny, R. Fainsinger, S. Kaasa, P. Poulain, L. Radbruch, F. De Conno, et al. (2002) Pain measurement tools and methods in clinical research in palliative care: recommendations of an expert working group of the european association of palliative care. Journal of pain and symptom management 23 (3), pp. 239–255. Cited by: 4th item.
  • [8] T. Chalder, G. Berelowitz, T. Pawlikowska, L. Watts, S. Wessely, D. Wright, and E. Wallace (1993) Development of a fatigue scale. Journal of psychosomatic research 37 (2), pp. 147–153. Cited by: 1st item.
  • [9] R. A. Cooper, D. M. Spaeth, D. K. Jones, M. L. Boninger, S. G. Fitzgerald, and S. Guo (2002) Comparison of virtual and real electric powered wheelchair driving using a position sensing joystick and an isometric joystick. Medical Engineering and Physics, pp. 703–708. Cited by: §II-B.
  • [10] A. Farshchiansadegh, F. Abdollahi, D. Chen, Mei-Hua Lee, J. Pedersen, C. Pierella, E. J. Roth, I. Seanez Gonzalez, E. B. Thorp, and F. A. Mussa-Ivaldi (2014) A body machine interface based on inertial sensors. Proceedings of Engineering in Medicine and Biology Society. Cited by: §II-B.
  • [11] L. Fehr, W. E. Langbein, and S. B. Skaar (2000) Adequacy of power wheelchair control interfaces for persons with severe disabilities: a clinical survey.. Journal of rehabilitation research and development, pp. 353–60. Cited by: §II-B.
  • [12] H. Fliege, M. Rose, P. Arck, O. B. Walter, R. Kocalevent, C. Weber, and B. F. Klapp (2005) The perceived stress questionnaire (psq) reconsidered: validation and reference values from different clinical and healthy adult samples. Psychosomatic medicine 67 (1), pp. 78–88. Cited by: 7th item.
  • [13] E. Giesbrecht (2022) Wheelchair skills test outcomes across multiple wheelchair skills training bootcamp cohorts. International Journal of Environmental Research and Public Health 19 (1), pp. 21. Cited by: §II-A.
  • [14] S. GMV. Teleoperation with time delay: a survey and its use in space robotics. Cited by: §II-B.
  • [15] A. Goil, M. Derry, and B. D. Argall (2013) Using machine learning to blend human and robot controls for assisted wheelchair navigation. IEEE International Conference on Rehabilitation Robotics. External Links: Document, ISBN 9781467360241, ISSN 19457898 Cited by: §II-A.
  • [16] S. G. Hart (1985) NASA-task load index (nasa-tlx); 20 years later.. Proceedings of the human factors and ergonomics society annual meeting 50 (9), pp. 904–908. Cited by: 3rd item.
  • [17] M. N. Javaremi, M. Young, and B. D. Argall (2019) Interface operation and implications for shared-control assistive robots. International Conference on Rehabilitation Robotics. Cited by: §III.
  • [18] D. Kairy, P. W. Rushton, P. Archambault, E. Pituch, C. Torkia, A. E. Fathi, P. Stone, F. Routhier, R. Forget, L. Demers, J. Pineau, and R. Gourdeau (2014-02) Exploring powered wheelchair users and their caregivers’ perspectives on potential intelligent power wheelchair use: a qualitative study. International Journal of Environmental Research and Public Health 11, pp. 2244–2261. External Links: Document, ISSN 16617827 Cited by: §I.
  • [19] D. Kleinman, S. Baron, and W. Levison (1971) A control theoretic approach to manned-vehicle systems analysis. IEEE Transactions on Automatic Control 16 (6), pp. 824–832. Cited by: §II-B.
  • [20] D. T. McRuer and H. R. Jex (1967) A review of quasi-linear pilot models. IEEE transactions on human factors in electronics (3), pp. 231–249. Cited by: §III-B.
  • [21] B. P. Needs and P. Education Intrinsic motivation inventory (imi). Cited by: 2nd item.
  • [22] D. R. Olsen and M. A. Goodrich (2003) Metrics for evaluating human-robot interactions. NIST Performance Metrics for Intelligent Systems Workshop, pp. 507–527. External Links: Document, ISBN 0953-5438, ISSN 13698478 Cited by: §II-A.
  • [23] J. Richer and J. L. Drury (2006) A video game-based framework for analyzing human-robot interaction: characterizing interface design in real-time interactive multimedia applications. In Proceedings of the Conference on Human-Robot Interaction, Cited by: §II-B.
  • [24] T. B. Sheridan and W. R. Ferrell (1974) Man-machine systems; information, control, and decision models of human performance.. the MIT press. Cited by: §II-B.
  • [25] Dalhousie University. Wheelchair Skills Test (WST) 5.1 form. External Links: Link Cited by: §I.
  • [26] S. Wood-Dauphinee, M. Opzoomer, J. I. Williams, B. Marchand, and W. O. Spitzer (1988) Assessment of global function: the reintegration to normal living index.. Archives of physical medicine and rehabilitation 69 (8), pp. 583–590. Cited by: §II-A.