
Proprioceptive Sonomyographic Control: A novel method of intuitive proportional control of multiple degrees of freedom for upper-extremity amputees

by Ananya S. Dhawan et al.

Technological advances in multi-articulated prosthetic hands have outpaced the methods available to amputees to intuitively control these devices. Amputees often cite difficulty of use as a key contributing factor for abandoning their prosthesis, creating a pressing need for improved control technology. A major challenge of traditional myoelectric control strategies using surface electromyography electrodes has been the difficulty in achieving intuitive and robust proportional control of multiple degrees of freedom. In this paper, we describe a new control method, proprioceptive sonomyographic control, that overcomes several limitations of myoelectric control. In sonomyography, mechanical muscle deformation is sensed using ultrasound, rather than electrical activation, and therefore the resulting control signals can directly control the position of the end effector. Compared to myoelectric control, which controls the velocity of the end effector, sonomyographic control is more congruent with proprioception in the residual limb. We tested our approach with 5 upper-extremity amputees and able-bodied subjects using a virtual target achievement and holding task. Amputees and able-bodied participants demonstrated the ability to achieve positional control for 5 degrees of freedom with an hour of training. Our results demonstrate the potential of proprioceptive sonomyographic control for intuitive, dexterous control of multi-articulated prostheses.






Currently, there are approximately 600,000 individuals living with upper limb loss in the US. Upper extremity amputations most commonly occur in working-age adults as a result of trauma [1, 2], and frequently affect the dominant extremity, leading to significant impacts on activities of daily living. Among upper extremity amputations involving a wrist disarticulation or higher, the transradial level is the most common (57%) [3]. Despite the enormous investment of resources in the development of new multi-articulated upper limb prostheses, a large proportion of upper extremity amputees discontinue use of their prosthesis [4, 5, 6]. Prosthetic non-wear or part-time use has been reported in 20% of the adult upper limb amputee population, and rejection rates for upper limb prosthetic users are staggering, with reported figures ranging from 35-45% for myoelectric and cable-controlled systems [4]. Dissatisfaction with prosthesis technology is strongly associated with rejection, and 88% of non-users reported the systems as being “too difficult or tiring” to use [7]. However, 74% of those who have abandoned their upper limb prostheses stated that they would reconsider prosthetic use if technological advancements improved their functionality and usability [7]. Therefore, there is a significant need for better technological solutions to improve the function and quality of life of upper limb amputees.

While significant advances have been made in the electromechanical design of multiarticulated dexterous prostheses, methods enabling amputees to intuitively control these devices are still lacking. Advanced myoelectric prosthetic hands either utilize two surface electromyography (sEMG) electrodes to record electrical activity from flexor and extensor muscles of the residuum, or use sophisticated machine learning-based systems, such as pattern recognition and regression [8, 9, 10], using multiple electrodes [11, 12, 13]. sEMG signals have poor signal-to-noise characteristics and hence limited amplitude resolution, making it challenging for users to accurately achieve graded levels of sEMG amplitude. Furthermore, sEMG signals suffer from random high-amplitude fluctuations, especially with dry electrodes [14, 15, 16]. Finally, sEMG signals have limited specificity for deep, contiguous, multi-compartmental muscles because of cross talk [17, 18, 19, 20], and differentiating individual digit and joint motions is challenging based on electrical activity recorded through the overlying tissues. Due to these limitations, the most commonly used myoelectric control strategy, called direct control, uses sEMG amplitude to proportionally drive joint velocity instead of joint position. In velocity-based control, muscle contraction amplitude is mapped to a single movement, where flexion and extension dictate the movement direction while muscle inactivity halts joint movement. While this scheme allows for smooth joint movements with noisy sEMG amplitude measurements, the use of sEMG activation to create an artificial velocity control signal is not congruent with proprioceptive feedback from residual muscles. Proprioception depends on the mechanical movement of muscles, tendons and associated fascia, rather than on the level of electrical activation [21, 22]. Thus, velocity-based control severely limits a patient’s ability to achieve dexterous manipulation. Amputees often grip objects with excessive force and velocity compared to able-bodied individuals [23]. As a result, many amputee users prefer body-powered hooks over myoelectric systems, as the cable tension provides congruent sensory feedback for positional control and is thus more intuitive to use [4, 24].

Another significant challenge is to intuitively select among different grips in the terminal device. Advanced multiarticulated hands, such as the bebionic (Ottobock, GmbH) and i-LIMB (Touch Bionics, LLC) [25] offer a number of different grip patterns (e.g., power, key, precision pinch, index point, three-jaw chuck grips). Current advanced hands use a method called mode switching that requires special sEMG triggers, e.g., co-contraction of both flexors and extensors in order to switch between different grasps. In the absence of muscle pairs in amputees, these functions are mapped to a set of substituted musculature or in some cases physical buttons on the prosthetic hand resulting in an unintuitive control paradigm. In addition, subjects experience periods of confusion with respect to the current operating mode of the limb, as well as the next mode in the control sequence. Not surprisingly, amputee subjects report that they strongly dislike mode switching [13]. A major limitation of current systems is the level of conscious attention required by the amputee user, and the lack of intuitive control [24]. Thus, there is an urgent need for a more intuitive command mechanism to control a lifelike prosthesis [26, 27].

New EMG decoding methods are being actively researched to improve functionality [28, 12]. Most notably, pattern recognition algorithms are being developed to decode user intention from recorded temporal sequences of multi-channel sEMG signals [15, 29, 30, 31, 32, 33, 34, 35]. While pattern recognition is able to classify the intended grasp end-state with high accuracy, the ability of amputees to translate classification accuracy into intuitive real-time control with multiple degrees of freedom is limited [36]. In real-life usage, pattern recognition lacks robustness and requires frequent retraining. To address these limitations of sEMG-based myoelectric control, invasive strategies, such as implantable myoelectric systems [37, 38, 39], targeted muscle reinnervation [39] and peripheral implants [40, 41], are being actively explored. While these methods can overcome many of the limitations described above, implanted devices and surgical procedures are associated with risks and side effects and may not be available to most amputees. Thus, there continues to be a need for a robust noninvasive strategy that can provide intuitive real-time proprioceptive control over multiple degrees of freedom, enabling amputees to make full use of advanced commercial hands.

In recent years, sonomyography, or ultrasound-based sensing of mechanical muscle contractions, has been actively pursued as a noninvasive alternative to myoelectric control [42]. Sonomyography overcomes many of the limitations of current myoelectric control. Ultrasound imaging can spatially resolve individual muscles, including those deep inside the tissue, and detect dynamic activity within different functional compartments in real-time while being non-invasive. Sonomyography has been shown to be useful for detecting individual finger positions [43, 44], along with other complex muscle deformation patterns [45]. Previous research has also shown that ultrasound techniques can be used for real-time classification of hand movements by predicting forearm muscle deformation patterns in able-bodied individuals [42, 44] as well as in a trans-radial amputee [46], lending support to future prosthetic control applications. In this paper, we propose a new sonomyographic control strategy to classify volitional motion intent, and we further extend our paradigm to enable intuitive, position-based proportional control over multiple degrees of freedom. Since our strategy relies on mechanical muscle contractions and relaxations that are congruent with underlying proprioceptive feedback in the residual limb, we refer to our strategy as proprioceptive sonomyographic control. We validate our techniques on able-bodied subjects and apply them to five upper-extremity amputees, one of whom is a congenital amputee. We asked the participants to perform predefined hand motions while ultrasound images of their forearm muscle movements were captured using a portable, commercial ultrasound system. We extracted representative ultrasound images from the collected ultrasound data and performed leave-one-out validation to quantify prediction accuracy.
Participants were then asked to perform the same hand motions in real-time while being shown an on-screen cursor that moved up or down in proportion to their muscle movements. A series of targets were presented and the participant’s ability to perform graded control using forearm muscle deformation was measured. The goals of this work are 1) to determine the ability of upper-extremity amputees to perform different hand motions using our proprioceptive sonomyographic control strategy, with minimal training and 2) to determine the accuracy and stability with which amputees can perform these motions in a proportional manner.


Subjects and experimental setup

We recruited four unilateral and one bilateral upper-extremity amputee at the MedStar National Rehabilitation Hospital (NRH) and George Mason University (GMU). All of the amputee participants were, at the time, using electrically powered myoelectric prostheses with varying levels of proficiency. Subject-specific amputation details and demographics are available in Table 1. Additionally, five able-bodied subjects were recruited and served as a control group for this study. Demographics for able-bodied participants are listed in Table 2. All procedures described in this work were approved by the respective Institutional Review Boards at NRH and GMU. All subjects provided written, informed consent prior to participating in the study, and were compensated for their participation.

Subject ID  Sex  Age  Years since amputation  Amputation type  Amputation level and side
Am1         M    68   50                      Traumatic        Transradial (L*)
Am2         M    NA   Congenital              Congenital       Transradial (L*)
Am3         M    56   46                      Traumatic        Transradial (R*)
Am4         M    30   7.5                     Traumatic        Wrist disarticulation (L*), shoulder disarticulation (R)
Am5         M    38   1.5                     Traumatic        Transradial (R*)
(L) and (R) indicate amputation of the left or right arm respectively, and * indicates the arm used for this study.
Table 1: Demographics and amputation details of amputee subjects
Subject ID Age Sex Dominant arm
Ab1 29 M Right
Ab2 26 M Right
Ab3 28 M Right
Ab4 25 F Left
Ab5 24 M Right
Table 2: Demographics of able-bodied participants
Figure 1: Photo of the experimental setup showing an amputee subject instrumented with an ultrasound transducer on the residuum (inset). The interface for the target holding motion control task described in experiment 2 shows the target position, movement bounds and the cursor position, which is controlled by muscle deformations in the amputee’s residuum. Unilateral amputee subjects were asked to demonstrate the perceived motion using their contralateral intact limb.

For the entire course of this study, all subjects were seated upright with their forearm comfortably supported below the shoulder to minimize fatigue. Subjects were instrumented with a clinical ultrasound system (Terason uSmart 3200T, Terason). The ultrasound system was connected to a low-profile, high-frequency, linear 16HL7 transducer. For amputee participants, the transducer was positioned on the residuum below the elbow, such that all individual phantom finger movements resulted in considerable movement in the field-of-view. For able-bodied participants, the transducer was manually positioned on the volar aspect of the dominant forearm, approximately 4-5 cm from the olecranon process, in order to image both the deep and superficial flexor muscle compartments. Additionally, able-bodied participants were asked to place their forearm inside an opaque enclosure that prevented direct observation of arm movements below the elbow. This ensured that able-bodied participants relied solely on their kinesthetic sense to perform all motion control tasks. The transducer was then secured in a custom-designed probe holder and held in place with a stretchable cuff as shown in Fig. 1 (inset). Ultrasound image sequences from the clinical ultrasound system were acquired using MATLAB (The MathWorks, Inc.) and processed using custom-developed algorithms discussed in the following section.

Control algorithms

Following probe placement, subjects underwent an initial training phase during which they performed repeated iterations of a set of motions (power grasp, wrist pronation, point, key grasp, tripod), one motion at a time. Movements were timed to a metronome such that the participant first transitioned from rest to the end state of the selected motion, held that position for a fixed number of metronome beats, then transitioned from the end state back to rest, and held again for the same number of beats. This process was repeated for 5 repetitions. The first frame of each motion sequence was extracted from the ultrasound data and labeled as 'rest'. All subsequent frames were compared to this first frame by computing their distance from the rest image. This measurement was performed in real time, and a visualization of the signal was provided to participants as it was computed. An inverse of the computed signal was used for visualization, such that areas of low similarity to rest appeared as peaks and areas of high similarity to rest appeared as valleys. The participants were thus able to track the extent to which their muscles were contracted and whether this contraction was consistent over time. Using the same signal, plateaus in similarity were identified in order to find the periods where the participant was holding the motion or holding at rest. The ultrasound frames at each plateau were averaged into a single representative image and added to a training database with a corresponding motion or rest label. The process was repeated until training images corresponding to each movement had been added to the database. The accuracy of the database was then verified by performing leave-one-out cross-validation with a 1-nearest-neighbor classifier using the correlation coefficient as a similarity measure.
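The verification step above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the helper names (`corrcoef_sim`, `loocv_1nn`) and the treatment of each frame as a NumPy array are assumptions; only the 1-nearest-neighbor rule with correlation-coefficient similarity and leave-one-out validation come from the text.

```python
import numpy as np

def corrcoef_sim(a, b):
    """Correlation coefficient between two flattened image frames."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def loocv_1nn(images, labels):
    """Leave-one-out cross-validation of a 1-nearest-neighbor classifier
    using the correlation coefficient as the similarity measure.
    Returns the fraction of correctly predicted frames."""
    images = [np.asarray(im, dtype=float) for im in images]
    correct = 0
    for i, test in enumerate(images):
        # Compare the held-out frame against every other frame in the database.
        sims = [corrcoef_sim(test, train)
                for j, train in enumerate(images) if j != i]
        train_labels = [labels[j] for j in range(len(images)) if j != i]
        # Predict the label of the most similar training frame.
        pred = train_labels[int(np.argmax(sims))]
        correct += (pred == labels[i])
    return correct / len(images)
```

In practice each representative image would be the plateau-averaged ultrasound frame described above; here any 2-D array stands in for a frame.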

[Figure 2 flowchart: incoming image frame at time t → training images for selected motion → mean correlation coefficient at time t → update of upper and lower bounds → proprioceptive sonomyographic control signal.]
Figure 2: Image analysis pipeline for computing the proportional control signal for an incoming image frame in real-time. The upper and lower bounds are initialized to the expected correlation values at the motion end-state and at rest, respectively.

In order to compute motion-specific, real-time proprioceptive sonomyographic control signals, a motion was first selected from the training database. The mean correlation value of the acquired image frame at each time-point versus all frames of the selected motion in the training database was computed and mapped to a motion completion value between '0' and '1'. An upper bound and a lower bound were initialized, corresponding to the expected correlation values at the motion end-state and at rest respectively, so that the correlation signal could be normalized between rest ('0') and 100% motion completion ('1'). Since it is highly unlikely that a participant reaches the exact same motion end-state and rest position for every iteration of a given motion, both bounds were dynamically updated as a weighted sum of the instantaneous correlation signal and the closest bound. This process is described in detail in Fig. 2, but the general approach to the bound update procedure is as follows: if a correlation signal higher than the expected upper bound was observed, the upper bound was increased; likewise, if a correlation signal lower than the expected lower bound was observed, the lower bound was decreased. Over time, the expected bounds were relaxed very slowly, i.e. the expected upper bound was lowered and the expected lower bound was raised, under the assumption that the bounds are uncertain estimates.

The effect of these dynamic updates is that the correlation signal pushes the bounds outwards, resulting in a system that is more responsive if the initial bounds are underestimated than if they are overestimated. An underestimation results in the participant being nudged towards 100% completion or towards rest earlier than expected, whereas an overestimation of the bounds results in the participant being unable to fully reach 0% completion (rest) or 100% completion (motion end-state). In order to mitigate the system's bias towards rest and completion, we performed the bound relaxation at a very slow rate. Though fixed bounds could be used at the risk of overestimation, the dynamic bound adjustment enables the system to adapt in real-time to variations in the participant's muscle activation strength due to fatigue, variations in signal amplitude due to sensor movement, environmental noise and other such factors [47].
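A minimal sketch of one step of this bound update and normalization is given below. The specific weight `alpha` and relaxation rate `relax` are illustrative assumptions; the paper describes the weighted-sum update and slow relaxation qualitatively but does not give the constants.

```python
def update_bounds(c, upper, lower, alpha=0.5, relax=1e-3):
    """One step of the dynamic bound update (illustrative parameters).

    c     : instantaneous mean correlation for the selected motion
    upper : current estimate of the correlation at the motion end-state
    lower : current estimate of the correlation at rest
    alpha : weight pulling a bound toward an out-of-range observation
    relax : slow inward relaxation applied while c stays within the bounds
    """
    if c > upper:
        # Observation above the expected end-state: push the upper bound out.
        upper = alpha * c + (1 - alpha) * upper
    elif c < lower:
        # Observation below the expected rest value: push the lower bound out.
        lower = alpha * c + (1 - alpha) * lower
    else:
        # In range: relax both bounds inwards very slowly.
        upper -= relax * (upper - lower)
        lower += relax * (upper - lower)
    # Normalized proprioceptive sonomyographic control signal in [0, 1].
    s = (c - lower) / (upper - lower) if upper > lower else 0.0
    return min(max(s, 0.0), 1.0), upper, lower
```

Called once per incoming frame, this maps rest to 0 and full motion completion to 1 while tracking drift in the achievable correlation range.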

Experimental protocols

Experiment 1 - Motion discriminability training

The aim of this experiment was to determine the extent of discriminability that can be achieved across multiple motions for able-bodied and amputee subjects. For this experiment, subjects were asked to perform repetitions of a pre-selected motion, interleaved with rest phases between each repetition, in response to audible metronome cues. During the course of the experiment, subjects were provided with a view of ultrasound images acquired from their residuum (or intact limb for able-bodied subjects), in conjunction with the real-time correlation value of the current image frame to rest as described in the previous section (also see video in Supplementary material M1).

The study involved blocks of trials, each consisting of five repetitions of a predefined set of motions. Trials were repeated until the cross-validation (CV) accuracy exceeded 85% and subjects reported that they were comfortable performing the motions. All of the amputee subjects listed in Table 1 participated in this experiment. Subject-specific motion sets and the number of iterations performed by each amputee participant are listed in Table 3. All able-bodied participants performed five iterations each of power grasp, wrist pronation, tripod, key grasp, and point. Outcome measures for this experiment were the leave-one-out cross-validation accuracies for the first trial and the best trial performed, as well as the average accuracy over all trials.

Subject ID Motions performed Number of iterations per motion
Am1 PG, WP, Tr, KG, Po 20
Am2 PG, WP, In, Tr 5
Am3* PG, WP, Tr, KG, Po, In, Tr 5 (S1), 25 (S2)
Am4 PG, WP, Tr, KG, Po 20
Am5* PG, WP, Tr, KG, Po 15 (S1), 25 (S2)
PG = power grasp, WP = wrist pronation, Tr = tripod, KG = key grasp, Po = point, In = index flexion
* Motions performed over two different sessions, where S1 = session 1 and S2 = session 2.
Table 3: User-intended motions and number of iterations of each motion performed by amputee subjects.

Experiment 2 - Proportional target holding task

The aim of this experiment was to quantify proprioceptive sonomyographic control

performance of amputees and able-bodied subjects at graded muscle activation levels for multiple motions. A motor control task was implemented where the participant controlled an on-screen cursor that could move up or down in proportion to the degree of muscle activation in the forearm as shown in Fig. 

1. The cursor on the computer screen could move up towards a normalized bound of ’1’ in proportion to the performed completion level of a selected motion, reaching ’1’ when the motion was at 100% completion. Similarly, the cursor could move down towards the normalized bound of ’0’ as the user returned from motion completion towards rest, reaching ’0’ when the user was completely at rest (see video in supplementary material M2).

After an initial calibration step to initialize the bounds, the control interface presented the user with a target position randomly chosen from a predefined set of quantized, equidistant positions between the normalized upper and lower bounds. The target remained fixed at that position for a set hold-time and then moved to the next position until all points were exhausted. For each target position, the participant was prompted to move the cursor to the target by contracting or relaxing their muscles and to hold the cursor position until the hold period expired. Unilateral amputee subjects were also asked to demonstrate the perceived motion of their phantom limb using their intact, contralateral arm. Fig. 1 shows a screenshot of the control interface with the target and the user-controlled cursor.

We first conducted a pilot study with Am2 and Am3, with five quantized positions and a fixed hold time, to validate our control algorithms. For the pilot study, the hold period commenced when the cursor entered the current target's quantization bounds, defined as a fixed band around the target. If the user failed to enter the quantization bounds within a timeout period, the target automatically moved to the next set-point.

Following the pilot study, all able-bodied participants and amputee subjects Am3, Am4 and Am5 were recruited to perform the same motion control task with eleven graded levels and a fixed hold time. For this extended study, the hold period commenced as soon as the target was presented to the user, irrespective of whether the user was able to reach the target. At the end of the hold period, the target moved to the next target location, randomly chosen from the set of eleven target positions without replacement. All participants performed three trials of each target level, for a total of 33 target positions.
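The target schedule for the extended study (eleven equidistant levels, three trials per level, presented in random order without replacement) can be sketched as follows; the function name and the use of a seeded shuffle are illustrative assumptions:

```python
import random

def make_target_sequence(n_levels=11, trials_per_level=3, seed=None):
    """Quantized, equidistant target positions between the normalized
    bounds 0 and 1, each presented trials_per_level times in random
    order without replacement (33 targets for the extended study)."""
    levels = [i / (n_levels - 1) for i in range(n_levels)]
    seq = levels * trials_per_level
    random.Random(seed).shuffle(seq)  # random order, no replacement
    return seq
```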

Position error, stability error, task completion rate and movement time served as evaluation metrics for proprioceptive sonomyographic control performance. Position error was computed as the mean error between the cursor and the target position, while stability error was calculated as the standard deviation of the cursor position from the target position. Both position and stability errors were computed from the time when the cursor first entered the quantization bound until the hold time expired. Task completion rate was defined as the percentage of targets that the user was able to successfully reach out of the total target locations presented. A target was considered to have been successfully acquired if the cursor entered the quantization bound around the target position; however, the user was not required to stay within the bound for successful target acquisition. Movement time was measured as the time from when the target was first presented to the user to when the cursor first entered the quantization bounds.
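The four metrics can be computed per target presentation as sketched below. This is an illustrative reconstruction: the function name and the quantization half-width `q` are assumptions (the paper's bound value is not given in this text), and the cursor trace is taken as uniformly sampled from target onset to hold expiry.

```python
import numpy as np

def target_metrics(t, cursor, target, q=0.05):
    """Outcome metrics for a single target presentation.

    t      : sample timestamps, starting at target onset
    cursor : cursor positions at those timestamps
    target : target position in [0, 1]
    q      : hypothetical quantization half-width around the target
    """
    t = np.asarray(t, dtype=float)
    cursor = np.asarray(cursor, dtype=float)
    inside = np.abs(cursor - target) <= q          # samples within the bound
    acquired = bool(inside.any())                  # target successfully reached?
    if not acquired:
        return np.nan, np.nan, False, np.nan
    first = int(np.argmax(inside))                 # first entry into the bound
    movement_time = t[first] - t[0]                # onset to first entry
    hold = cursor[first:] - target                 # from first entry to hold expiry
    position_error = float(np.mean(hold))          # mean cursor-target error
    stability_error = float(np.std(hold))          # SD of cursor about target
    return position_error, stability_error, acquired, movement_time
```

Aggregating `acquired` over all 33 presentations gives the task completion rate.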

Figure 3: Aggregate confusion matrices showing post-training, motion discriminability for able-bodied, traumatic and congenital amputee subjects. Motion legend- PG = power grasp, WP = wrist pronation, Po = point, KG = key grasp, Tr = tripod, In = index flexion.


Experiment 1

The 1-nearest-neighbor classifier was first validated with able-bodied subjects. Four out of the five able-bodied participants achieved cross-validation accuracies of 100% within five repetitions (or one trial) across all five motions. Fig. 3 shows the aggregate confusion matrix for all able-bodied subjects. Four out of five motions were predicted with 100% accuracy, and key grasp was incorrectly predicted as power grasp in just one out of 25 motion instances (5 subjects performing 5 motions each).

All of the amputee subjects, including the congenital amputee (Am2), were also able to successfully complete the tasks, with an average prediction accuracy of 96.8±5.4% for at least 4 motions. Fig. 3 shows the aggregate confusion matrices for all traumatic amputees and the congenital amputee subject, Am2. For traumatic amputees, the average cross-validation accuracy was 96.76% for five motions, with key grasp and point having the lowest prediction accuracies. In contrast, the congenital amputee subject achieved a cross-validation accuracy of 85% for four motions, with tripod having the lowest prediction accuracy at 60%. The subject reported not having phantom sensation of a thumb, which could have negatively affected motion discriminability for motions involving thumb movements, such as tripod and power grasp. All subjects typically completed the training phase in an hour or less.

                 Including rest                            Excluding rest
Subject ID       First trial  Best trial  Average (all)    First trial  Best trial  Average (all)
Am1              88.33        90.83       87.71            100.00       100.00      99.00
Am2*             80.50        80.50       80.50            85.00        85.00       85.00
Am3 (S1)*        94.12        94.12       94.12            100.00       100.00      100.00
Am3 (S2)         82.50        95.00       86.33            96.00        100.00      96.80
Am4              100.00       100.00      100.00           100.00       100.00      100.00
Am5 (S1)         86.67        86.67       85.00            100.00       100.00      100.00
Am5 (S2)         92.50        93.33       89.33            96.00        100.00      96.80
Mean ± SD        89.23±6.82   91.49±6.32  89.00±6.38       96.71±5.50   97.86±5.67  96.80±5.40
*Subjects performed one trial of five repetitions. Refer to Table 3.
Table 4: Comparison of cross-validation accuracies (%) for amputee subjects with and without rest as a motion class.

We also analyzed the influence of treating rest as a separate motion class on the prediction accuracies. When rest was excluded from cross-validation, motion discriminability improved for all participants. For able-bodied subjects, the five motions were predicted correctly with 99.2% cross-validation accuracy. For amputee subjects, the mean cross-validation accuracy for all trials increased by 7.8% to 96.8% when rest was excluded as shown in Table 4.

Figure 4: Plots of user-controlled cursor position against target position for the pilot target holding task, for Am2 (4(a)) and Am3 (4(b)). The target moved randomly between five quantized target levels within normalized bounds of rest (‘0’) and motion completion (‘1’). Position and stability errors for individual target position segments are also shown.

Table 4 also shows cross-validation accuracies with and without rest for the first and best trials of the amputee subjects. In both cases, the mean cross-validation accuracy for the first trial was comparable to the best-trial figures, showing that the classifier was able to consistently predict motions when presented with representative image frames repeated across several trials.

Figure 5: Aggregate outcome metrics and Fitts' law analysis for the target holding task (experiment 2) for able-bodied and amputee subjects. Motion legend- PG = power grasp, WP = wrist pronation, Po = point, KG = key grasp, Tr = tripod.

Experiment 2

Figs. 4(a) and 4(b) show results of the pilot target holding study of experiment 2 for amputee subjects Am2 and Am3 respectively. The quantized target trajectory, user-controlled cursor position and associated quantization bounds are plotted against time. Both subjects were able to successfully reach all of the targets presented in the pilot study without any prior training on the task. Congenital amputee subject Am2 performed power grasp with position and stability errors of 7.95% and 20% respectively. For wrist pronation, the position error was 11.83% whereas the stability error was 17.83%. Amputee subject Am3's position errors ranged from -0.03% to 1.50% while stability errors ranged from 3.66% to 8.71% across four motions. Power grasp had the lowest position error whereas thumb flexion was found to have the lowest stability error.

Table 5 and Table 6 show outcome metrics for the subsequent, extended target holding task performed at eleven graded target levels for able-bodied and amputee subjects respectively. All of the able-bodied participants and amputee subjects Am4 and Am5 performed five motions, while subject Am3 performed two motions. Am3 also performed key grasp; however, that data was excluded from analysis due to an error during the bound calibration stage.

          Position error (%)                 Stability error (%)               Task completion rate (%)
          PG     WP     Po     KG     Tr     PG    WP    Po    KG     Tr       PG     WP     Po     KG     Tr
Ab1      -0.74   0.47   0.67   0.63  -0.41   3.66  1.88  4.02  8.45   5.55     93.94  100.0  100.0  87.88  96.97
Ab2       1.17   0.41  -0.44   1.21  -0.58   4.90  3.97  1.03  8.87   7.92     90.91  100.0  96.97  96.97  100.0
Ab3      -0.66   0.04   0.22  -0.69  -0.38   2.98  1.66  3.52  5.18   9.17     93.94  100.0  81.82  100.0  100.0
Ab4      -0.92  -0.25   0.16   1.43  -0.93   3.62  1.62  5.63  9.63   5.97     96.97  100.0  75.76  96.97  81.82
Ab5      -1.73  -0.42   0.09  -2.23  -0.57   7.30  4.39  6.78  11.8   7.38     100.0  84.85  96.97  93.94  93.94
Average   1.04   0.32   0.32   1.24   0.57   4.49  2.70  4.20  8.79   7.20     95.15  96.97  90.30  95.15  94.55
PG = power grasp, WP = wrist pronation, Po = point, KG = key grasp, Tr = tripod
Table 5: Evaluation metrics for the target holding task (experiment 2) for able-bodied subjects.
          Position error (%)                 Stability error (%)                Task completion rate (%)
          PG     WP     Tr     KG     Po     PG     WP    Tr    KG     Po       PG     WP     Tr     KG     Po
Am3       2.37  -0.23   -      -      -      10.19  4.61  -     -      -        69.70  100.0  -      -      -
Am4      -0.39  -0.25   0.12  -0.49  -0.02   2.33   5.79  3.37  8.55   4.00     96.97  100.0  96.97  96.97  100.0
Am5      -6.91  -0.82  -0.01  -5.09  -6.41   13.89  3.92  5.86  15.60  10.90    87.88  100.0  78.79  96.97  100.0
Average   3.22   0.43   0.07   2.79   3.22   8.80   4.77  4.62  12.08  7.45     84.85  100.0  87.88  96.97  100.0
PG = power grasp, WP = wrist pronation, Tr = tripod, KG = key grasp, Po = point
Table 6: Evaluation metrics for the target holding task (experiment 2) for amputee subjects.

The average outcome metrics for the target holding task are also shown in Fig. 5. For the targets that were successfully acquired, Figs. 5(a) and 5(b) show that able-bodied subjects performed all motions except point with lower position and stability errors than amputee subjects. Although stability errors for able-bodied subjects were lower than those of amputee subjects, there appears to be a correspondence between motion-specific stability errors across subject groups: motions with high stability errors in able-bodied subjects also have correspondingly high stability errors in amputees.

The type of motion also seems to have an influence on position and stability errors for all subjects; motions with higher position errors also have higher stability errors, and vice versa. For example, as shown in Figs. 4(a) and 4(b), for able-bodied subjects, key grasp, followed by power grasp and tripod, had the highest stability and position errors. A similar trend was observed for amputee subjects, where key grasp, power grasp and tripod exhibited the highest stability and position errors of the 5 motions performed.

For the extended target holding task, point had the lowest average absolute position and stability errors for amputee subjects, followed closely by wrist pronation. However, the average task completion rate (Fig. 4(c)) for point was 87.88%, while that of wrist pronation was 100% across all trials. Similarly, power grasp and tripod had comparable position and stability errors, although the average task completion rate was 84.85% for power grasp and 100% for tripod. Fig. 4(d) also shows small decreases in task completion rates at the lower quartile (0 to 0.25) and upper tenth (0.9 to 1.0) of the total motion completion range for amputees as well as able-bodied subjects. Timeseries plots of the cursor and target positions shown in Figs. 6 and 7 also indicate that the unachieved targets were mostly presented in the latter half of the task.

Finally, a Fitts' law analysis of the movement time required to achieve each target against its corresponding index of difficulty was performed, and results are shown in Fig. 5. The index of difficulty (ID) for the task was defined as in Equation 1:

ID = log₂(D/W + 1)    (1)

Figure 6: Timeseries plots showing amputee subject Am4's performance in the target holding task. The user-controlled cursor position and target locations for five motions along with position and stability errors are shown. Motion legend: PG = power grasp, WP = wrist pronation, Po = point, KG = key grasp, Tr = tripod.
Figure 7: Timeseries plots showing amputee subject Am5's performance in the target holding task. The user-controlled cursor position and target locations for five motions along with position and stability errors are shown. Motion legend: PG = power grasp, WP = wrist pronation, Po = point, KG = key grasp, Tr = tripod.

where D is the distance between subsequent targets and W is the target's quantization bound. Fig. 4(e) shows that for both able-bodied and amputee subjects, mean movement time increases with increasing task difficulty across all motions, as is expected in a human-computer interaction task. A regression analysis of the mean movement times versus index of difficulty was performed for both able-bodied and amputee subjects. Throughputs were slightly higher for able-bodied subjects at 1.35 bits/s compared to 1.19 bits/s for amputee subjects. The movement time intercept (y-axis intercept) was also higher for able-bodied subjects at 0.72 s compared to 0.44 s for amputees.
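The Fitts' law analysis described above can be sketched with synthetic data (the distances, bounds, and movement times below are invented for illustration; the paper's actual regression details may differ, and throughput is taken here as the reciprocal of the fitted slope, one common convention):

```python
import numpy as np

def index_of_difficulty(distance, width):
    # Shannon formulation of Fitts' index of difficulty, in bits
    return np.log2(distance / width + 1.0)

# Hypothetical (distance D, quantization bound W, mean movement time) triples
trials = np.array([
    # D     W     MT (s)
    [0.20, 0.10, 1.3],
    [0.40, 0.10, 1.7],
    [0.60, 0.10, 2.0],
    [0.80, 0.10, 2.2],
])
ids = index_of_difficulty(trials[:, 0], trials[:, 1])
mts = trials[:, 2]

# Linear regression MT = a + b * ID; np.polyfit returns [slope, intercept]
b, a = np.polyfit(ids, mts, 1)
throughput = 1.0 / b  # bits/s
```

A positive slope b reproduces the expected trend that harder targets take longer to reach, and a larger y-intercept a corresponds to the fixed reaction-time component reported above.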


We have developed an ultrasound-based sensing approach for detection of volitional user movement intent and extraction of proportional control signals in response to muscle deformations in the forearm. Our proposed proprioceptive sonomyographic control strategy allows spatially resolved muscle activity detection in deep-seated muscle compartments in the forearm, and extraction of proportional signals from individual functional muscle compartments, enabling higher fidelity of control compared to traditional myoelectric sensing techniques. Our approach also enables true positional control of the end-effector device, as opposed to the velocity control commonly implemented using myoelectric signals.

Our proposed approach can enable proportional control with multiple degrees of freedom, as it can classify between different degrees of freedom, as we have shown previously [42]. In this study, we demonstrated that a simple classification algorithm, a 1-nearest-neighbor classifier with a correlation-based distance metric, was able to classify user-intended motions for both able-bodied subjects and upper-limb amputees. This image analysis pipeline makes our system agnostic to anatomical landmarks and removes the need for computationally expensive tracking algorithms [43]. However, due to the nature of our ultrasound-based image analysis pipeline, we use a reference ultrasound image frame corresponding to the rest state to compute a correlation-based distance metric to the other motion classes. We treat rest as a separate motion class, as opposed to the absence of a signal as in traditional myoelectric control paradigms [48]. We show that when rest is excluded from classification, there is a significant increase in classification accuracy across all subjects. We believe this is due to natural postural variations in the relaxed state of the flexor muscles in the forearm; as a result, rest image frames have a higher intra-class dissimilarity than other motion classes. When rest is excluded, our approach achieves motion distinguishability comparable to current myoelectric pattern recognition (PR) systems performing similar motions [48, 49]. Notably, sonomyography achieved this classification accuracy with less than an hour of training, compared to the several weeks of training needed for PR to achieve comparable accuracy [48]. In future studies, we plan to use a single sEMG electrode in combination with sonomyography and decode rest based on the absence of sEMG signal.
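The core of such a correlation-based 1-nearest-neighbor classifier is simple enough to sketch. The following is a simplified reconstruction, not the authors' actual pipeline: the 8x8 "frames" and class templates are synthetic stand-ins for ultrasound image frames, and the function names are hypothetical.

```python
import numpy as np

def correlation_distance(a, b):
    """1 minus the Pearson correlation between two flattened image frames."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def classify_frame(frame, templates):
    """1-nearest-neighbor: pick the class whose template frame is closest."""
    return min(templates, key=lambda label: correlation_distance(frame, templates[label]))

# Synthetic 8x8 "frames": one random template per motion class
rng = np.random.default_rng(0)
templates = {label: rng.normal(size=(8, 8)) for label in ["rest", "power", "point"]}

# A noisy observation of the "power" template should map back to "power"
noisy = templates["power"] + 0.1 * rng.normal(size=(8, 8))
predicted = classify_frame(noisy, templates)
```

Because the distance is correlation-based rather than landmark-based, the comparison depends only on the overall deformation pattern of the frame, which is consistent with the anatomical-landmark-agnostic property claimed above.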

It is interesting to note that amputee subjects Am1, Am3 and Am4 are long-time myoelectric prosthesis users and reported having an advanced level of control over their residual musculature owing to daily use of their prostheses. This prior experience and repeated practice may have contributed to their initially high cross-validation accuracies, as seen in Table 4. Along the same lines, subject Am5's slightly lower initial accuracy in the first session (S1) (see Table 4) may have been a result of limited exposure to a powered prosthesis. In the following session (S2), Am5 demonstrated higher initial and average accuracies compared to his first session, indicating that the effect of training on our system may have been retained. This suggests that sonomyography can leverage existing motor skills in current, experienced myoelectric prosthesis users without the need for extensive retraining. Additionally, Am5 later reported an improvement in his ability to control his existing myoelectric prosthesis due to a clearer understanding of his residual muscle deformation in the context of phantom digit movements. On the other hand, congenital subject Am2 was able to achieve high motion discriminability for four motions within approximately 30 minutes. This suggests that sonomyography can provide an intuitive control paradigm for traumatic amputees as well as congenital amputees, who may either lack phantom limb sensations altogether or have limited context of muscular proprioception in their residuum. This is corroborated by contralateral arm demonstrations by unilateral amputees (see video in Supplementary Material M2) showing that the movements perceived in their phantom limb closely resemble the intended motion. This one-to-one correspondence between residual muscle movement, resulting kinesthesia and perceived phantom sensation enables sensorimotor congruence in amputee subjects.
The effect of sensorimotor congruence extends to proportional control task performance as well (see video in Supplementary Material). We show that traumatic and congenital amputee subjects with no prior experience of using a sonomyography-based interface are able to demonstrate fine graded control of an end-effector driven by muscle activity in the forearm. Position errors for amputee and able-bodied subjects were below 3.5%, with an average task completion rate higher than 94% for 11 graded targets. We believe that the intuitiveness of our sonomyographic control paradigm is due to the direct mapping between the extent of muscle deformation in the forearm and the derived position-based proportional signal. This is in contrast to traditional myoelectric control strategies, which map muscle activation intensity to the velocity of a prosthetic device [50, 51].

Earlier studies with sEMG electrodes integrated into prosthetic shells have shown that electrode shift, donning/doffing and arm position can have an adverse effect on long-term control reliability [52, 53, 54, 55, 56]. In a previous study, we evaluated the effect of user arm position on classification accuracy for able-bodied subjects and demonstrated the robustness of our technique to such variations [42]. Although our image analysis pipeline does not rely on anatomical landmarks for classification or proportional control, major changes in transducer position will likely severely degrade both classification accuracy and proportional control performance and require retraining. Considering the short training regimes required for our approach, typically 10 minutes or less, a user-driven, structured training sequence may be a viable solution for retraining the classifier. Minor variations in transducer location and orientation due to movement of the residuum inside a prosthetic shell can be mitigated in real time by appropriate filtering and wavelet-based machine learning techniques [57].

This work establishes the feasibility of an ultrasound-based, noninvasive muscle activity sensing modality for potential use in real-time control of multiarticulated prosthetic arms for individuals with upper-extremity amputation. Clinical ultrasound devices continue to be miniaturized, and handheld systems are commercially available. However, the current form factor of commercial systems is still too large to be readily integrated with commercial prosthetic arms. We are currently developing custom miniaturized, low-power ultrasound imaging instrumentation [58] that can be directly integrated into a prosthetic shell for continuous, in-vivo monitoring of muscle activity and control of associated prosthetic devices. Our novel proprioceptive sonomyographic control approach may provide a means to achieve intuitive proportional control in future prosthetic devices and potentially lower the rate of device rejection significantly.


  • [1] Ziegler-Graham, K., MacKenzie, E. J., Ephraim, P. L., Travison, T. G. & Brookmeyer, R. Estimating the prevalence of limb loss in the united states: 2005 to 2050. Archives of Physical Medicine and Rehabilitation 89, 422–429 (2008).
  • [2] Dillingham, T. R., Pezzin, L. E. & MacKenzie, E. J. Incidence, acute care length of stay, and discharge to rehabilitation of traumatic amputee patients: An epidemiologic study. Archives of Physical Medicine and Rehabilitation 79, 279–287 (1998).
  • [3] Esquenazi, A. & Meier, R. Rehabilitation in limb deficiency. 4. Limb amputation. Archives of Physical Medicine and Rehabilitation 77, S18–S28 (1996).
  • [4] Biddiss, E. & Chau, T. Upper limb prosthesis use and abandonment: A survey of the last 25 years. Prosthetics and Orthotics International 31, 236–257 (2007).
  • [5] Østlie, K. et al. Prosthesis rejection in acquired major upper-limb amputees: A population-based survey. Disability and Rehabilitation: Assistive Technology 7, 294–303 (2012).
  • [6] McFarland, L. V., Winkler, S. L. H., Heinemann, A. W., Jones, M. & Esquenazi, A. Unilateral upper-limb loss: Satisfaction and prosthetic-device use in veterans and servicemembers from Vietnam and OIF/OEF conflicts. The Journal of Rehabilitation Research and Development 47, 299 (2010).
  • [7] Biddiss, E. & Chau, T. Upper-limb prosthetics: Critical factors in device abandonment. American Journal of Physical Medicine and Rehabilitation 86, 977–987 (2007).
  • [8] Amsuess, S., Goebel, P., Graimann, B. & Farina, D. A Multi-Class Proportional Myocontrol Algorithm for Upper Limb Prosthesis Control: Validation in Real-Life Scenarios on Amputees. IEEE Transactions on Neural Systems and Rehabilitation Engineering 23 (2015).
  • [9] Stango, A., Negro, F. & Farina, D. Spatial correlation of high density EMG signals provides features robust to electrode number and shift in pattern recognition for myocontrol. IEEE Transactions on Neural Systems and Rehabilitation Engineering 23, 189–198 (2015).
  • [10] Hahne, J. M. et al. Linear and nonlinear regression techniques for simultaneous and proportional myoelectric control. IEEE Transactions on Neural Systems and Rehabilitation Engineering 22, 269–279 (2014).
  • [11] Chu, J.-U., Moon, I. & Mun, M.-S. A real-time emg pattern recognition system based on linear-nonlinear feature projection for a multifunction myoelectric hand. IEEE Transactions on biomedical engineering 53, 2232–2239 (2006).
  • [12] Amsuess, S. et al. Context-Dependent Upper Limb Prosthesis Control for Natural and Robust Use. IEEE Transactions on Neural Systems and Rehabilitation Engineering 24, 744–753 (2016).
  • [13] Hargrove, L. J., Scheme, E. J., Englehart, K. B. & Hudgins, B. S. Multiple binary classifications via linear discriminant analysis for improved controllability of a powered prosthesis. IEEE Transactions on Neural Systems and Rehabilitation Engineering 18, 49–57 (2010).
  • [14] Clancy, E. A., Morin, E. L. & Merletti, R. Sampling, noise-reduction and amplitude estimation issues in surface electromyography. Journal of Electromyography and Kinesiology 12, 1–16 (2002).
  • [15] Daley, H., Englehart, K., Hargrove, L. & Kuruganti, U. High density electromyography data of normally limbed and transradial amputee subjects for multifunction prosthetic control. Journal of Electromyography and Kinesiology 22, 478–484 (2012).
  • [16] Fillauer, C. E., Pritham, C. H. & Fillauer, K. D. Evolution and Development of the Silicone Suction Socket (3S) for Below-Knee Prostheses. Journal of Prosthetics & Orthotics 1, 92–103 (1989).
  • [17] Kong, Y. K., Hallbeck, M. S. & Jung, M. C. Crosstalk effect on surface electromyogram of the forearm flexors during a static grip task. Journal of Electromyography and Kinesiology 20, 1223–1229 (2010).
  • [18] van Duinen, H., Gandevia, S. C. & Taylor, J. L. Voluntary Activation of the Different Compartments of the Flexor Digitorum Profundus. Journal of Neurophysiology 104, 3213–3221 (2010).
  • [19] van Duinen, H., Yu, W. S. & Gandevia, S. C. Limited ability to extend the digits of the human hand independently with extensor digitorum. Journal of Physiology 587, 4799–4810 (2009).
  • [20] McIsaac, T. L. & Fuglevand, A. J. Motor-Unit Synchrony Within and Across Compartments of the Human Flexor Digitorum Superficialis. Journal of Neurophysiology 97, 550–556 (2007).
  • [21] Doubler, J. A. & Childress, D. S. An Analysis of Extended Physiological Proprioception as a Prosthesis-Control Technique. Journal of Rehabilitation Research and Development 21, 5–18 (1984).
  • [22] Doubler, J. A. & Childress, D. S. Design and Evaluation of a Prosthesis Control System Based on the Concept of Extended Physiological Proprioception. Journal of Rehabilitation Research and Development 21, 19–31 (1984).
  • [23] Van Dijk, L., Van Der Sluis, C. K., Van Dijk, H. W. & Bongers, R. M. Task-Oriented Gaming for Transfer to Prosthesis Use. IEEE Transactions on Neural Systems and Rehabilitation Engineering 24, 1384–1394 (2016).
  • [24] Carey, S. L., Lura, D. J. & Highsmith, M. J. Differences in myoelectric and body-powered upper-limb prostheses: Systematic literature review. Journal of Rehabilitation Research and Development 52, 247–262 (2015).
  • [25] Waryck, B. Comparison of two myoelectric multi-articulating prosthetic hands. In Proc. of the 2011 Myoelectric Controls/Powered Prosthetics Symposium, 1–4 (Fredericton, New Brunswick, Canada, 2011).
  • [26] Kyberd, P. J. & Hill, W. Survey of upper limb prosthesis users in Sweden, the United Kingdom and Canada. Prosthetics and Orthotics International 35, 234–241 (2011).
  • [27] Atkins, D. J., Heard, D. C. & Donovan, W. H. Epidemiologic overview of individuals with upper-limb loss and their reported research priorities. Journal of Prosthetics and Orthotics 8, 2–11 (1996).
  • [28] Jiang, N., Rehbaum, H., Vujaklija, I., Graimann, B. & Farina, D. Intuitive, online, simultaneous, and proportional myoelectric control over two degrees-of-freedom in upper limb amputees. IEEE Transactions on Neural Systems and Rehabilitation Engineering 22, 501–510 (2014).
  • [29] Tenore, F. et al. Decoding of individuated finger movements using surface electromyography. IEEE Transactions on Biomedical Engineering 56, 1427–1434 (2009).
  • [30] Li, G., Schultz, A. E. & Kuiken, T. A. Quantifying pattern recognition- based myoelectric control of multifunctional transradial prostheses. IEEE Transactions on Neural Systems and Rehabilitation Engineering 18, 185–192 (2010).
  • [31] Englehart, K., Hudgins, B., Parker, P. A. & Stevenson, M. Classification of the myoelectric signal using time-frequency based representations. Medical Engineering and Physics 21, 431–438 (1999).
  • [32] Englehart, K. & Hudgins, B. A robust, real-time control scheme for multifunction myoelectric control. IEEE Transactions on Biomedical Engineering 50, 848–854 (2003).
  • [33] Farrell, T. R. & Weir, R. F. The optimal controller delay for myoelectric prostheses. IEEE Transactions on Neural Systems and Rehabilitation Engineering 15, 111–118 (2007).
  • [34] Smith, L. H., Hargrove, L. J., Lock, B. A. & Kuiken, T. A. Determining the optimal window length for pattern recognition-based myoelectric control: Balancing the competing effects of classification error and controller delay. IEEE Transactions on Neural Systems and Rehabilitation Engineering 19, 186–192 (2011).
  • [35] Scheme, E. & Englehart, K. Electromyogram pattern recognition for control of powered upper-limb prostheses: state of the art and challenges for clinical use. Journal of Rehabilitation Research and Development 48, 643–659 (2011).
  • [36] Simon, A. M., Hargrove, L. J., Lock, B. & Kuiken, T. A. The Target Achievement Control Test: Evaluating real-time myoelectric pattern recognition control of a multifunctional upper-limb prosthesis. Journal of Rehabilitation Research and Development 48, 619–627 (2011).
  • [37] Pasquina, P. F. et al. First-in-man demonstration of a fully implanted myoelectric sensors system to control an advanced electromechanical prosthetic hand. Journal of Neuroscience Methods 244, 85–93 (2015).
  • [38] Weir, R. F., Troyk, P. R., DeMichele, G. A., Kerns, D. A. & Schorsch, J. F. Implantable Myoelectric Sensors for Intramuscular EMG Recording. IEEE Transactions on Biomedical Engineering 56, 2009 (2009).
  • [39] Kuiken, T. A. et al. Targeted muscle reinnervation for real-time myoelectric control of multifunction artificial arms. JAMA 301, 619–628 (2009).
  • [40] Micera, S., Navarro, X. & Yoshida, K. Interfacing with the peripheral nervous system to develop innovative neuroprostheses. IEEE Transactions on Neural Systems and Rehabilitation Engineering 17, 417–419 (2009).
  • [41] Navarro, X. et al. A critical review of interfaces with the peripheral nervous system for the control of neuroprostheses and hybrid bionic systems. Journal of the Peripheral Nervous System 10, 229–258 (2005).
  • [42] Akhlaghi, N. et al. Real-time classification of hand motions using ultrasound imaging of forearm muscles. IEEE Transactions on Biomedical Engineering 63, 1687–1698 (2016).
  • [43] Castellini, C., Passig, G. & Zarka, E. Using ultrasound images of the forearm to predict finger positions. IEEE Transactions on Neural Systems and Rehabilitation Engineering 20, 788–797 (2012).
  • [44] Sikdar, S. et al. Novel method for predicting dexterous individual finger movements by imaging muscle activity using a wearable ultrasonic system. IEEE Transactions on Neural Systems and Rehabilitation Engineering 22, 69–76 (2014).
  • [45] Hodges, P., Pengel, L., Herbert, R. & Gandevia, S. Measurement of muscle contraction with ultrasound imaging. Muscle & Nerve 27, 682–692 (2003).
  • [46] Baker, C. A., Akhlaghi, N., Rangwala, H., Kosecka, J. & Sikdar, S. Real-time, ultrasound-based control of a virtual hand by a trans-radial amputee. In Engineering in Medicine and Biology Society (EMBC), 2016 IEEE 38th Annual International Conference of the, 3219–3222 (IEEE, 2016).
  • [47] Johnson, R. E., Kording, K. P., Hargrove, L. J. & Sensinger, J. W. Adaptation to random and systematic errors: Comparison of amputee and non-amputee control interfaces with varying levels of process noise. PLoS ONE 12, 1–19 (2017).
  • [48] Powell, M. A., Kaliki, R. R. & Thakor, N. V. User training for pattern recognition-based myoelectric prostheses: Improving phantom limb movement consistency and distinguishability. IEEE Transactions on Neural Systems and Rehabilitation Engineering 22, 522–532 (2014).
  • [49] Resnik, L. et al. Evaluation of EMG pattern recognition for upper limb prosthesis control : a case study in comparison with direct myoelectric control. Journal of NeuroEngineering and Rehabilitation 15, 1–13 (2018).
  • [50] Hudgins, B., Parker, P. & Scott, R. N. A new strategy for multifunction myoelectric control. IEEE Transactions on Biomedical Engineering 40, 82–94 (1993).
  • [51] Scheme, E. et al. Motion normalized proportional control for improved pattern recognition-based myoelectric control. IEEE Transactions on Neural Systems and Rehabilitation Engineering 22, 149–157 (2014).
  • [52] Young, A. J., Hargrove, L. J. & Kuiken, T. A. Improving myoelectric pattern recognition robustness to electrode shift by changing interelectrode distance and electrode configuration. IEEE Transactions on Biomedical Engineering 59, 645–652 (2012).
  • [53] Hwang, H. J., Hahne, J. M. & Müller, K. R. Real-time robustness evaluation of regression based myoelectric control against arm position change and donning/doffing. PLoS ONE 12, 1–22 (2017).
  • [54] Jiang, N., Muceli, S., Graimann, B. & Farina, D. Effect of arm position on the prediction of kinematics from EMG in amputees. Medical & Biological Engineering & Computing 51, 143–151 (2013).
  • [55] Geng, Y., Zhou, P. & Li, G. Toward attenuating the impact of arm positions on electromyography pattern-recognition based motion classification in transradial amputees. Journal of NeuroEngineering and Rehabilitation 9, 1–11 (2012).
  • [56] Young, A. J., Hargrove, L. J. & Kuiken, T. A. The effects of electrode size and orientation on the sensitivity of myoelectric pattern recognition systems to electrode shift. IEEE Transactions on Biomedical Engineering 58, 2537–2544 (2011).
  • [57] Khan, A. A., Dhawan, A., Akhlaghi, N., Majdi, J. A. & Sikdar, S. Application of wavelet scattering networks in classification of ultrasound image sequences. In 2017 IEEE International Ultrasonics Symposium (IUS), 1–4 (2017).
  • [58] Tarbox, E. et al. Low-power ultrasound imaging systems using time delay spectrometry. In 2017 IEEE International Ultrasonics Symposium (IUS), 1–1 (2017).


This work is supported by grants from the United States Department of Defense (Award Number W81XWH-16-1-0722) and the National Science Foundation (Grant Number 1329829).

Additional information

Some of the authors also participated as subjects in the study.

Author contributions statement

A.D., B.M., S.P., N.A., M.H-L., W.J. and S.S. conceived the experiment(s). A.D., B.M., S.P., and R.H. conducted the experiment(s). A.D., B.M., and S.P. analyzed the results. A.D., B.M., S.P., G.L., and S.S. wrote the manuscript. All authors reviewed the manuscript.

Competing interests: The authors have no competing interests to declare.