Sequential Decision Fusion for Environmental Classification in Assistive Walking

April 25, 2019 · Kuangen Zhang et al.

Powered prostheses are effective for helping amputees walk on level ground, but these devices are inconvenient to use in complex environments. A prosthesis needs to understand the motion intent of the amputee in order to help them walk in such environments. Recently, researchers have found that vision sensors can be used to classify environments and predict the motion intent of amputees. Previous studies have classified environments accurately in offline analysis, but they have neglected to decrease the corresponding time delay. To increase the accuracy and decrease the time delay of environmental classification, we propose a new decision fusion method in this paper. We fuse sequential decisions of environmental classification by constructing a hidden Markov model and designing a transition probability matrix. We evaluate our method by inviting able-bodied subjects and amputees to perform indoor and outdoor experiments. Experimental results indicate that our method can classify environments more accurately and with less time delay than previous methods. Besides classifying environments, the proposed decision fusion method may also be used to optimize sequential predictions of human motion intent in the future.


I Introduction

Amputation attenuates the mobility of millions of amputees in daily life. There were 44,430 new lower limb amputees in Canada from 2006 to 2011 [1], and the situation is more serious in the USA, where researchers have predicted that the amputee population will increase to 3.6 million by the year 2050 [2]. Without healthy lower limbs, these amputees face serious difficulties in daily life: everyday tasks, such as walking and running, present major challenges. In order to help amputees walk, researchers have developed artificial legs, called prostheses [3, 4, 5, 6]. There are two types of prostheses: powered and passive. Powered prostheses outperform passive prostheses because they can provide the necessary active force to amputees during walking [7, 8].

Although powered prostheses are effective for helping amputees walk on level ground, they are inconvenient to use in complex environments. In complex environments, amputees need to switch locomotion modes between different terrains (e.g., level ground, up/down stairs, and up/down ramps) [9], and prostheses should change locomotion modes accordingly. To address this issue, Sup et al. introduced a finite-state controller [10] composed of a series of parametric controllers that use different parameters in different locomotion modes to control the prosthesis. To achieve seamless switching between modes, however, the prosthesis must predict the motion intent of the amputee, which is difficult. Unlike human activity recognition [11], human intent is hard to recognize accurately because the intent arises mentally and cannot be measured in the same way.

Previous researchers have primarily focused on the signals in the human-prosthesis loop to predict human motion intent. For instance, targeted muscle reinnervation (TMR) [12], electromyography (EMG) [13], inertial measurement unit (IMU) [14], and mechanical sensors [15] have been used to recognize human intent. TMR and EMG allow researchers to measure the electric potential produced by the muscle, which occurs prior to the motion [12], but these muscle signals are noisy and are difficult to classify accurately. Signals provided by the IMU and mechanical sensors, on the other hand, are stable but time-delayed [16]. Moreover, regardless of which signal is used, these signals are user-dependent, which means that they vary for different subjects. Consequently, it is difficult to predict human intent accurately and robustly based on the signals above.

Fig. 1: Sequential model of environmental classification. We regard human walking in complex environments as sequential behavior. To estimate this sequential behavior, we use the vision system to perceive environments continuously. The current environment is denoted as a latent state, which can be level ground (LG), up stairs (US), down stairs (DS), up ramp (UR), or down ramp (DR).

Another method to predict the motion intent of amputees is to recognize the signals in the prosthesis-environment loop [17]. Visual information can guide able-bodied people to change locomotion modes in different environments [18]. Similarly, environmental recognition can provide the prosthesis with the environmental context of the human motion intent and help the prosthesis to reconstruct the vision-locomotion loop. The first research to combine a visual sensor with a powered prosthesis can be traced back to 2015, when a Kinect camera was used to recognize the geometric parameters of stairs [19]. Subsequently, Liu et al. combined an IMU with a laser sensor to classify five types of terrain, including level ground, up/down stairs, and up/down ramps [20]. Recently, Massalin et al. applied a wearable depth camera to capture depth images of the environment and designed a support vector machine (SVM) method to classify environments [21]. In our previous research [22, 23], we utilized a self-contained depth camera and an IMU to capture stable point clouds of environments and designed a graph convolutional neural network to classify the point clouds. No matter which method is used, the original classification results of these methods are usually noisy. Researchers have to utilize filters, such as the majority voting filter, to improve the classification results. These filters, however, require data in a long time window, which introduces a time delay and affects real-time control. It is not appropriate to increase accuracy by sacrificing the real-time capability of environmental classification.

In order to increase the accuracy and decrease the required time delay of environmental classification simultaneously, we construct a sequential model in this paper (Fig. 1). We hypothesize that we can fuse the sequential decisions (environmental classification results) from each image frame and decrease the required size of the time window by constructing a hidden Markov model (HMM) based on probability theory and designing a decision fusion method. We verify our hypothesis through indoor and outdoor experiments with able-bodied subjects and amputees. The main contributions of this study include: 1) constructing a hidden Markov model to optimize the sequential decisions of environmental classification, 2) designing a transition probability matrix for switching locomotion modes, and 3) decreasing the time delay and increasing the accuracy of environmental classification simultaneously.

We organize the rest of our paper as follows. Section II describes the sequential decision fusion method for environmental classification. Experimental results of the presented methods are shown in Section III. After presenting the results, we provide corresponding discussions in Section IV. Finally, Section V concludes this paper.

II Methods

We present our decision fusion method in this section. We first state the research problems of environmental classification. To address these problems, we briefly describe the methods of environmental feature extraction and classification based on a single image, which are introduced thoroughly in our previous paper [22]. Then we discuss how to construct a stable sequential model and fuse sequential decisions for environmental classification.

II-A Problem statement

Our objective is to classify environments accurately with a short time delay. We classify the current environment into one of several possible categories. To determine the most likely category, we first need to calculate the probability distribution of the current environment over the categories:

P(s_t) = [p_1, p_2, p_3, p_4, p_5],     (1)

where p_j represents the probability that the current environment s_t belongs to category j, with level ground (j = 1), up stairs (j = 2), down stairs (j = 3), up ramp (j = 4), and down ramp (j = 5).

We only use a depth camera to perceive environments, and thus the input of our method is a series of depth images:

X = {x_1, x_2, ..., x_t},     (2)

where x_i is the image at time i. We denote the current time and the delayed time as t and t_d, respectively.

To calculate the probability distribution at the current time t, we need to find a function f to classify a single image and a function g to fuse the sequential decisions from different images:

P(s_t) = g(f(x_1), f(x_2), ..., f(x_t)).     (3)

There are some design constraints for the functions f and g:

  • The classification function f should classify a single image accurately and quickly.
  • The fusion function g should consider the relationship between adjacent decisions.
  • There might be some erroneous images x_i, and thus g should tolerate some erroneous decisions.
  • The environmental classification accuracy should be high.
  • The delayed time t_d should be short.

II-B Preprocessing environmental images

The depth camera can output the point cloud of the environment, which is a set of three-dimensional (3D) points in space. A problem with point clouds is that they are unstable because the camera is worn on the leg and rotates together with the leg. To solve this problem, we offset the point cloud in real time using the angle measured by an IMU. Another problem is that point clouds are unstructured and unordered. To convert the point cloud to structured and ordered data, we project the point cloud onto binary images (Fig. 2), which can easily be classified by a convolutional neural network (CNN). The detailed methods of offsetting and projecting the point cloud are described in [22].
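To make this preprocessing concrete, the sketch below illustrates one way to offset a point cloud by an IMU pitch angle and rasterize it into a binary image. The rotation axis convention, image size, and grid resolution are illustrative assumptions rather than the exact settings used in [22].

```python
# Minimal preprocessing sketch (NumPy), with assumed axis convention and sizes.
import numpy as np

def preprocess_point_cloud(points, pitch_rad, img_size=100, cell=0.02):
    """points: (N, 3) array of camera-frame coordinates.
    pitch_rad: leg pitch angle measured by the IMU (assumed convention).
    Returns an (img_size, img_size) binary image with values 0 or 1."""
    # Rotate about the lateral (here: y) axis to offset the leg rotation.
    c, s = np.cos(-pitch_rad), np.sin(-pitch_rad)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    stable = points @ R.T

    # Project onto the sagittal (forward, vertical) plane and rasterize.
    forward, vertical = stable[:, 0], stable[:, 2]
    cols = np.clip((forward / cell).astype(int), 0, img_size - 1)
    rows = np.clip(img_size - 1 - (vertical / cell).astype(int), 0, img_size - 1)
    image = np.zeros((img_size, img_size), dtype=np.uint8)
    image[rows, cols] = 1
    return image
```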

Fig. 2: Environmental feature extraction process. The point cloud in the first row is a set of 3D points. We convert the point cloud to a binary image, which only has two values: one or zero.

II-C Classifying the single environmental image

After preprocessing the environmental images, we need to find a classification function f that classifies a single environmental image accurately and efficiently. We select a deep learning method to classify our environmental images because deep learning avoids designing features manually and has achieved great success in image classification. Considering that our method should be efficient, we choose a convolutional neural network (CNN) as our classification function (f = CNN). The CNN is efficient because it shares the weight parameters of the convolutional kernels and downsamples the image through max-pooling layers.

Fig. 3: The architecture of the image classifier. The image classifier is based on a traditional CNN. The input of this CNN is a binary image and the output is the classification scores for five types of environments. The definitions of LG, US, DS, UR, and DR are the same as in Fig. 1. We apply batch normalization and a ReLU activation function after each convolutional layer. Each convolutional layer uses a filter to sum a bias and the dot product of pixels with the corresponding parameters in the filter. Each input channel is normalized by batch normalization. The ReLU activation sets elements less than zero to zero. The max-pooling layers downsample the image and extract features at different resolutions. The output features from the last max-pooling layer are flattened and input to the fully connected layers. Each fully connected layer calculates the corresponding classification scores by adding a bias to the weighted sum of the features.

We then design an image classifier based on a simplified CNN [24], which is shown in Fig. 3. The input of our classifier is a binary image, and the output is the probability distribution (classification scores) of the current image over the five categories. There are three convolutional layers and two max-pooling layers, with 16, 32, and 64 channels for the three convolutional layers. All convolutional layers share the same kernel size, as do the max-pooling layers. Moreover, we use batch normalization and ReLU activation after each convolutional layer.

Before training the network, we initialize all parameters randomly. The initial weights of the convolutional layers and fully connected layers are drawn from a Gaussian distribution with a mean of 0 and a standard deviation of 0.01.
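The sketch below shows one possible realization of such a classifier in PyTorch, following the layer counts and channel numbers stated above (16, 32, and 64 channels, two max-pooling layers, batch normalization and ReLU, Gaussian initialization with standard deviation 0.01). The kernel sizes, input resolution, pooling positions, and hidden layer width are assumptions for illustration only.

```python
# Illustrative classifier sketch; not the authors' exact network.
import torch
import torch.nn as nn

class EnvClassifier(nn.Module):
    def __init__(self, num_classes=5, img_size=100):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(2),                      # downsample
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),                      # downsample again
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(),
        )
        feat_dim = 64 * (img_size // 4) ** 2
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, num_classes),          # classification scores
        )
        # Random Gaussian initialization (mean 0, std 0.01), as described above.
        for m in self.modules():
            if isinstance(m, (nn.Conv2d, nn.Linear)):
                nn.init.normal_(m.weight, mean=0.0, std=0.01)
                nn.init.zeros_(m.bias)

    def forward(self, x):                         # x: (B, 1, H, W) binary image
        # Softmax yields the probability distribution used by the fusion stage;
        # during training one would typically feed the raw scores to a
        # cross-entropy loss instead.
        return torch.softmax(self.classifier(self.features(x)), dim=1)
```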

II-D Sequential model of environmental classification

Our image classifier generates a decision for each input image, and we need to fuse the sequential decisions because the classification result based on a single image is not robust. For instance, the camera may capture erroneous images when the leg swings quickly. An intuitive way to fuse sequential decisions is to consider their temporal relationships. We can regard the captured images as sequential signals because human walking is continuous. After constructing the sequential model, we can design a hidden Markov model (g = HMM) to describe the relationships between different decisions.

There are two important elements in an HMM: latent states and observations. As shown in Fig. 4, we regard the current category of the environment and the captured image as the latent state s_t and the observation x_t, respectively. The emission (conditional) probability of observing image x_t given the latent state s_t is:

P(x_t | s_t = j) ∝ f_j(x_t),     (4)

where f(x_t) denotes the vector of classification scores based on the current image x_t. The j-th value of this vector, f_j(x_t), equals the probability that the category of the current environment is j. The definition of j is the same as in (1).

Fig. 4: The hidden Markov model (HMM) of environmental classification. Latent states represent current environments, including level ground (LG), up stairs (US), down stairs (DS), up ramp (UR), and down ramp (DR). The observation is the current image, denoted by x_t. The smooth latent state \bar{s}_t is calculated as the mean of the previous latent states in a sliding window. The transition probabilities represent the prior probabilities of transiting from the last smooth state \bar{s}_{t-1} to the current state s_t.

The estimated category of s_t may not be robust because there are some erroneous images. In order to make the fusion function g tolerant of errors, we calculate a smooth state \bar{s}_t to substitute for s_t. A simple method is to calculate the average probability distribution of the state in a sliding window:

\bar{s}_t = (1 / n_w) \sum_{i = t - n_w + 1}^{t} s_i,     (5)

where the length of the sliding window is denoted by n_w.
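A minimal sketch of this smoothing step, assuming the per-frame decisions are stored as length-5 probability vectors (variable names are ours):

```python
# Sliding-window average of per-frame probability distributions, as in (5).
import numpy as np

def smooth_state(prob_history, n_w=5):
    """prob_history: list of length-5 probability vectors, oldest to newest."""
    window = np.asarray(prob_history[-n_w:])   # last n_w decisions
    return window.mean(axis=0)                 # smoothed distribution
```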

In the HMM, adjacent latent states are connected by transition probabilities, each of which represents the probability of transiting from one type of state to another. The transition probabilities can be estimated based on our experience of daily life. For instance, transitions between different types of environments happen much less frequently than remaining in the same environment. Moreover, stairs and ramps are usually connected by level ground. Hence, we propose several rules to design the transition probability matrix. We use A and A_{ij} to denote the transition probability matrix and the transition probability from state i to state j, respectively:

  • The probabilities of remaining in the same environment (A_{ii}) are higher than those of transiting to different environments (A_{ij}, i ≠ j).
  • The probabilities of remaining in the same environment (A_{ii}) are the same for all environments.
  • The probabilities of transiting from level ground to other types of environments (A_{1j}, j ≠ 1) are the same.
  • The probabilities of transiting from other types of environments to level ground (A_{i1}, i ≠ 1) are the same.
  • The probabilities of transiting between different upward environments or between different downward environments (A_{ij} with i, j ≠ 1, i ≠ j, and i + j even) are the same and low.
  • The probabilities of transiting between an upward environment and a downward environment (A_{ij} with i, j ≠ 1 and i + j odd) are the same and the lowest.

According to the above rules, we design a transition probability matrix, which is shown in Fig. 4.
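As an illustration of these rules, a transition matrix could be constructed as below, with the states ordered as in (1). The actual numerical values are those shown in Fig. 4; the numbers here are placeholders chosen only to respect the ordering that the rules require (stay >> to/from level ground > same-direction switch > upward/downward switch).

```python
# Illustrative transition matrix built from the rules above (placeholder values).
import numpy as np

STATES = ["LG", "US", "DS", "UR", "DR"]   # level ground, stairs, ramps

def build_transition_matrix(p_stay=0.90, p_lg_out=0.025, p_to_lg=0.07,
                            p_same_dir=0.02, p_opp=0.005):
    A = np.full((5, 5), p_opp)            # default: upward <-> downward (lowest)
    for i in range(5):
        A[i, i] = p_stay                   # remain in the same environment
    A[0, 1:] = p_lg_out                    # level ground -> others (all equal)
    A[1:, 0] = p_to_lg                     # others -> level ground (all equal)
    A[1, 3] = A[3, 1] = p_same_dir         # US <-> UR (same direction)
    A[2, 4] = A[4, 2] = p_same_dir         # DS <-> DR (same direction)
    assert np.allclose(A.sum(axis=1), 1.0) # each row is a probability distribution
    return A
```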

II-E Sequential decision fusion

Instead of using a voting strategy or median strategy [19, 22], we modify the Viterbi algorithm [25] to fuse the probability distributions of sequential decisions and estimate the smooth state. The voting and median strategies are not appropriate because they do not consider the credibility of different decisions. A decision whose probability distribution concentrates on one category, however, is more credible than one whose probability is distributed similarly over all categories.

Considering that the credibilities of decisions are different, our modified Viterbi algorithm takes the probability distribution of every decision into account (Algorithm 1). Our method is able to tolerate some errors because the decisions from erroneous images are usually less credible than stable decisions. After using our method, we can classify environments accurately with a delay of only one frame.

However, if there are many erroneous images, we still need a voting strategy to further increase the robustness of the classification results. The voting strategy calculates the mode of a series of smooth states in a sliding window:

\hat{s}_{t - n_d} = mode(\bar{s}_{t - 2 n_d}, ..., \bar{s}_t),     (6)

where \hat{s} is the final decision of the voting strategy and n_d is the number of delayed frames caused by the voting strategy.

Consequently, the decision fusion function g can be our HMM (g = HMM) or the combination of our HMM and the voting strategy (g = HMM + Voting). The symbol '+' denotes combination.

Algorithm 1: Modified Viterbi algorithm for sequential decision fusion.
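The authors' exact modification of the Viterbi algorithm is not reproduced here; the sketch below shows a generic online Viterbi-style recursion in the same spirit, fusing the per-frame score vectors (emission probabilities) with the transition matrix and decoding with a one-frame delay, followed by the optional voting stage of (6). Function names and the decoding details are assumptions.

```python
# Generic Viterbi-style fusion sketch (log domain); not Algorithm 1 verbatim.
import numpy as np

def viterbi_fusion(score_seq, A, prior=None):
    """score_seq: (T, 5) array of per-frame CNN score vectors.
    A: (5, 5) transition matrix. Returns decisions delayed by one frame."""
    T, n = score_seq.shape
    prior = np.full(n, 1.0 / n) if prior is None else prior
    delta = np.log(prior + 1e-12) + np.log(score_seq[0] + 1e-12)
    decisions = []
    for t in range(1, T):
        # For each current state j, keep the best previous state i.
        trans = delta[:, None] + np.log(A + 1e-12)     # entry (i, j)
        best_prev = trans.argmax(axis=0)
        delta = trans.max(axis=0) + np.log(score_seq[t] + 1e-12)
        # Decode the state at t-1 through the best path ending at time t.
        decisions.append(best_prev[delta.argmax()])
    return np.array(decisions)

def vote(labels, n_d=1):
    """Optional voting stage as in (6): mode over a window of 2*n_d + 1 labels."""
    labels = np.asarray(labels)
    out = []
    for t in range(n_d, len(labels) - n_d):
        window = labels[t - n_d : t + n_d + 1]
        out.append(np.bincount(window, minlength=5).argmax())
    return np.array(out)
```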

II-F Experimental setup

We evaluated our method by inviting subjects to perform indoor and outdoor experiments, which are the same as in [22]. We invited able-bodied subjects and amputees to wear our sub-vision system above the knee joint to capture environmental images. The sub-vision system consists of a depth camera (CamBoard pico flexx, pmdtechnologies) and an IMU (MTi 1-series, Xsens Technologies). During the experiments, we asked each subject to walk through an experimental area five times. Each trial includes three level ground modes, one up stairs mode, one down stairs mode, one up ramp mode, and one down ramp mode.

We used the CNN model trained in our previous research to test our decision fusion methods. The detailed training settings of the CNN model are described in [22]. We utilized the trained CNN model to calculate the original classification scores (the emission probabilities in (4)) from the collected images. Then we implemented our decision fusion methods to estimate the final decisions \hat{s}, which were compared with the actual modes to evaluate the classification accuracy of our method.
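A rough sketch of this offline evaluation, reusing the hypothetical helpers from the earlier sketches (build_transition_matrix and viterbi_fusion); the file names and label coding are placeholders:

```python
# Offline evaluation sketch: fuse stored CNN scores, then compare with labels.
import numpy as np

scores = np.load("cnn_scores.npy")    # (T, 5) per-frame CNN scores (assumed file)
labels = np.load("actual_modes.npy")  # (T,) manually labeled modes, coded 0-4 (assumed)

A = build_transition_matrix()
fused = viterbi_fusion(scores, A)     # decisions for frames 0 .. T-2 (one-frame delay)
accuracy = 100.0 * np.mean(fused == labels[:len(fused)])
print(f"classification accuracy: {accuracy:.2f}%")
```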

We implemented the experimental analysis on a computer with an Intel Core i7-7700K CPU (4.2 GHz), 16 GB of DDR3 RAM, and an NVIDIA GeForce GTX 1050 Ti graphics card. The program is based on MATLAB R2017b.

II-G Statistical analysis

In our experiments, we collected the generated binary images and labeled the actual modes manually. The means and standard deviations of the classification accuracy were analyzed across subjects. We utilized a t-test and the corresponding p value to evaluate the significance of the differences between the results of different methods, where the p value is the probability of obtaining the observed results under the null hypothesis.
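For illustration, a t-test of this kind could be computed with SciPy as below; whether the authors used a paired or unpaired test is not stated (a paired test is shown here as one option), and the accuracy values are placeholders:

```python
# Paired t-test sketch on hypothetical per-subject accuracies (%).
from scipy import stats

acc_ours = [97.5, 96.8, 98.1, 97.0, 97.6]      # placeholder accuracies, our method
acc_voting = [96.4, 95.9, 96.8, 96.1, 96.5]    # placeholder accuracies, CNN + Voting

t_stat, p_value = stats.ttest_rel(acc_ours, acc_voting)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")  # small p -> significant difference
```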

III Results

III-A Subject information

Five able-bodied subjects and three transfemoral amputees participated in our experiments. We provide their basic information in Table I and Table II. We recruited the amputees from a local prosthetics company. The able-bodied subjects are from our university; one of them is an author of this paper. Approval to perform these experiments was granted by the Review Board of Southern University of Science and Technology. All subjects signed informed consent forms before the experiments.

Subjects Height (m) Weight (kg) Age (years) Gender
Subject 1 1.66 59 28 Male
Subject 2 1.65 63 30 Male
Subject 3 1.68 58 29 Male
Subject 4 1.72 60 24 Female
Subject 5 1.67 53 25 Male
TABLE I: Basic information of able-bodied subjects.
Subjects Amputee 1 Amputee 2 Amputee 3
Height (m) 1.70 1.70 1.69
Weight (kg) 64 60 62
Age (years) 38 38 42
Gender Male Male Male
Amputation time 2016 2001 2000
Amputation side Left Right Left
Residual limb length (m) 0.33 0.30 0.31
TABLE II: Basic information of amputees.

III-B Environmental classification results

In order to evaluate the performance of our method, we compared the environmental classification results of our method (CNN + HMM + Voting) with those of the CNN alone and of the combination of the CNN and the voting strategy (CNN + Voting). We set the length of the sliding window n_w and the number of delayed frames n_d to 5 and 1, respectively. Then we calculated the mean and standard deviation (SD) of the indoor and outdoor environmental classification accuracy for the three methods. The error bars and statistical data of classification accuracy are shown in Fig. 5, Fig. 6, and Table III.

The classification accuracy of our method is statistically different from that of (CNN + Voting). As shown in Table III, compared to (CNN + Voting), our method increases the mean classification accuracy in the indoor and outdoor experiments by 1.09% and 2.62%, respectively. Moreover, the standard deviations of the classification accuracy decrease after using our method. Hence, our method can classify environments accurately and stably.

Moreover, we compared the classification accuracy for each subject using the three different methods. As shown in Fig. 5 and Fig. 6, our method increases the classification accuracy for all able-bodied subjects and amputees in both the indoor and outdoor experiments.

Methods        Indoor Mean (%)  Indoor SD (%)  Outdoor Mean (%)  Outdoor SD (%)
CNN            94.83            1.64           90.74             2.84
CNN + Voting   96.33            1.41           93.71             2.09
Ours           97.42            1.17           96.33             1.28
TABLE III: Comparison of environmental classification accuracy. The number of delayed frames is one.
Fig. 5: Comparison of indoor environmental classification results. Subjects include five able-bodied subjects (S1-S5) and three transfemoral amputees (A1-A3). The error bars represent the mean ± one standard deviation of the classification accuracy over five repeated experiments.
Fig. 6: Comparison of outdoor environmental classification results. The meanings of the error bars and subject numbers are the same as in Fig. 5.

III-C Trade-off between the accuracy and time delay

We can further increase the classification accuracy by using the voting strategy because humans remain in the same locomotion mode in most situations. The voting strategy, however, causes a time delay. Here we analyze the trade-off between classification accuracy and time delay.

We calculated the environmental classification accuracy using the three different methods and different window lengths of the voting strategy. The number of delayed frames for (CNN + Voting) equals n_d. Meanwhile, the number of delayed frames for our method is one more than n_d because our HMM also causes a one-frame delay. We aligned the classification accuracy of the three methods by the number of delayed frames, as shown in Fig. 7. The classification accuracy of both our method and (CNN + Voting) increases with the number of delayed frames, but our method is affected less by the number of delayed frames than (CNN + Voting). We calculated the slope of the classification accuracy relative to the number of delayed frames. In the indoor experiments, the mean slopes of our method and (CNN + Voting) are 0.047% and 0.15% per frame, respectively. In the outdoor experiments, these two values are 0.065% and 0.30% per frame.

Moreover, we analyzed the difference in time delay between our method and (CNN + Voting) when achieving the same classification accuracy (difference in accuracy less than 0.05%). In the indoor experiments, the classification accuracy of our method reaches 97.53% with a delay of two frames, whereas (CNN + Voting) requires a delay of four frames to achieve an accuracy of 97.56%. In the outdoor experiments, the classification accuracy of our method with a delay of two frames is 96.53%, while (CNN + Voting) achieves 96.49% with a delay of six frames. Considering that the capture frequency of our depth camera is 15 frames per second, our method decreases the time delay by about 133 ms and 267 ms, respectively, while achieving the same classification accuracy as (CNN + Voting).
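The frame-to-time conversion behind these numbers is straightforward: at 15 frames per second, saving k frames corresponds to k/15 seconds.

```python
# Frame-to-time conversion at the camera's 15 fps capture rate.
fps = 15
for saved_frames in (2, 4):                 # indoor: 4 - 2 frames, outdoor: 6 - 2 frames
    print(saved_frames, "frames ->", round(saved_frames / fps * 1000), "ms")
# prints roughly 133 ms and 267 ms
```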

Fig. 7: The relationship between the classification accuracy and the number of delayed frames. The definitions of error bars and different methods are the same as in Fig. 5.

III-D Sequential decisions of environmental classification

We visualize the sequential decisions of environmental classification in Fig. 8 and Fig. 9. The original classification results of the CNN are noisy and thus not suitable for controlling the prosthesis. We then utilized the voting strategy to filter the classification results and set the number of delayed frames to 2. The classification results of (CNN + Voting) become cleaner but still contain some errors. After using our method, erroneous classification results occur only once in each of the indoor and outdoor experiments. Consequently, with the same number of delayed frames, our method improves the classification results more than (CNN + Voting).

Fig. 8: Sequential decisions of environmental classification in the indoor experiment. Amputee 1 is the subject of this experiment. The heavy black line denotes the actual modes, which are labeled manually based on the captured binary images. The blue circles, thin yellow line, and purple dashed line represent the classification modes using the CNN, (CNN + Voting), and our method, respectively. LG, US, DS, UR, and DR are the abbreviations of level ground, up stairs, down stairs, up ramp, and down ramp.
Fig. 9: Sequential decisions of environmental classification in the outdoor experiment. Amputee 2 is the subject of this experiment. The definitions of the legend and labels are the same as in Fig. 8.

III-E Comparison of probability distributions

Our method outperforms the CNN and (CNN + Voting) because we consider the probability distribution in each frame and the transition probabilities between adjacent states. As shown in Fig. 10, the original probability distribution calculated by the CNN varies from frame to frame. There are also some erroneous probability distributions in the original results caused by intense camera rotation or anomalous environments. The posterior probability distributions of our method are more discernible than those of the CNN: most probability distributions concentrate on one mode.

Fig. 10: Probability distributions calculated by different methods. In the two upper figures, the blue solid line, red dots, yellow dashed line, purple dotted line, and green dash-dotted line represent level ground (LG), up stairs (US), down stairs (DS), up ramp (UR), and down ramp (DR), respectively.

IV Discussion

IV-A Advantages of our method

In this research, we proposed a concise method to fuse sequential decisions in order to increase environmental classification accuracy and decrease the time delay. Compared to the traditional voting strategy [19, 22], our method has several advantages.

Firstly, our method takes the credibility of each decision into account. In the traditional voting strategy, all decisions contribute equally, but this is not reasonable. In real situations, the camera may sometimes provide erroneous images. For instance, the camera cannot perceive the terrain in front when its orientation angle in the sagittal plane is too large or too small. Also, there are interfering objects, such as uneven ground and curbs, in the environment, especially outdoors. The probability distributions of these erroneous decisions are more ambiguous than those of normal decisions. By fusing the probabilities, we decrease the influence of erroneous decisions, which is better than the traditional voting strategy.

Additionally, we designed a transition probability matrix based on the characteristics of human walking and daily environments. This transition probability matrix encodes the relationship between the last state and the current state, and thus improves the accuracy of environmental classification. As stated in Section III, our method increases the classification accuracy for all subjects in both indoor and outdoor environments.

Moreover, our method is suitable for real-time control because it has low computational complexity and requires only a short time delay. It takes only 0.02 ms for our method to update one decision. Besides, our method can still achieve high accuracy with a delay of only one frame. As shown in Fig. 7, our method is less affected by the number of delayed frames than (CNN + Voting). In our previous research [22], we also achieved high classification accuracy, but at the cost of real-time performance. Although the camera perceives the environment in front of the prosthesis and can therefore tolerate some recognition delay, a large time delay decreases the response speed of the whole control system and cannot handle unexpected situations. Compared to the traditional voting strategy, our method achieves the same classification accuracy while decreasing the time delay by 133 ms and 266 ms in the indoor and outdoor environments, respectively. Consequently, our method can increase the response speed of the control system.

Furthermore, we can also apply our method to classify human intent. The input of our decision fusion method is only a probability distribution, so our method is not limited to environmental classification, and changing the sensors does not affect the decision fusion method. Human signals, such as EMG and IMU signals, can also be utilized to classify human motion intent during walking in complex environments. The real-time requirements for these human signals are stricter than those for visual signals because human signals are generated only tens of milliseconds before the motion, or even after it; we then need to achieve high classification accuracy with a short time delay. As stated before, the required time delay of our method is as low as one frame, and our method has low computational complexity. Thus, our method fulfills these requirements.

IV-B Limitations and future work

Although our method can classify environments accurately with a short time delay, there are some limitations. Firstly, we have not yet applied our method to the real-time control of a powered prosthesis. The situations in real-time control can be different from those in offline analysis. Besides, amputees wearing powered prostheses may walk differently from those wearing passive prostheses. Hence, we will apply our method to the real-time control of a powered prosthesis to further evaluate its performance. Moreover, environmental classification can only provide prior information about human motion intent. Therefore, we need to fuse the decisions from visual signals with those from human signals to estimate human motion intent more accurately.

V Conclusion

In this paper, we constructed a hidden Markov model and designed a transition probability matrix for environmental classification in assistive walking. We considered the probability distribution of the original decisions from the CNN and fused the sequential decisions to increase the classification accuracy with a short time delay. We invited able-bodied subjects and amputees to perform indoor and outdoor experiments. According to the experimental results, our method achieved classification accuracies of 97.42% and 96.33% with a delay of only one frame in the indoor and outdoor experiments, which were 1.09% and 2.62% higher than those of the traditional voting strategy. To achieve the same classification accuracy, our method decreased the time delay by 133 ms and 266 ms in the indoor and outdoor experiments, respectively, compared to the traditional voting strategy. Moreover, our decision fusion method took only 0.02 ms to update one decision. Hence, our method realized our target: increasing the classification accuracy and decreasing the time delay simultaneously. These satisfactory experimental results validate the accuracy and real-time capability of our method, which is significant for improving the performance of prostheses.

Acknowledgment

We acknowledge funding and support from the National Natural Science Foundation of China under Grants U1613206, 61533004, and 91648203, and in part from the Guangdong Innovative and Entrepreneurial Research Team Program under Grant 2016ZT06G587.

References

  • [1] B. Imam, W. C. Miller, H. C. Finlayson, J. J. Eng, and T. Jarus, “Incidence of lower limb amputation in Canada,” Can J Public Health, vol. 108, no. 4, pp. 374–380, Nov. 2017.
  • [2] K. Ziegler-Graham, E. J. MacKenzie, P. L. Ephraim, T. G. Travison, and R. Brookmeyer, “Estimating the Prevalence of Limb Loss in the United States: 2005 to 2050,” Archives of Physical Medicine and Rehabilitation, vol. 89, no. 3, pp. 422–429, Mar. 2008.
  • [3] S. K. Au and H. M. Herr, “Powered ankle-foot prosthesis,” IEEE Robotics Automation Magazine, vol. 15, no. 3, pp. 52–59, Sep. 2008.
  • [4] S. K. Au, J. Weber, and H. Herr, “Powered Ankle–Foot Prosthesis Improves Walking Metabolic Economy,” IEEE Transactions on Robotics, vol. 25, no. 1, pp. 51–66, Feb. 2009.
  • [5] F. Sup, H. A. Varol, and M. Goldfarb, “Upslope Walking With a Powered Knee and Ankle Prosthesis: Initial Results With an Amputee Subject,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 19, no. 1, pp. 71–78, Feb. 2011.
  • [6] B. E. Lawson, H. A. Varol, A. Huff, E. Erdemir, and M. Goldfarb, “Control of Stair Ascent and Descent With a Powered Transfemoral Prosthesis,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 21, no. 3, pp. 466–473, May 2013.
  • [7] R. S. Gailey, M. A. Wenger, M. Raya, N. Kirk, K. Erbs, P. Spyropoulos, and M. S. Nash, “Energy expenditure of trans-tibial amputees during ambulation at self-selected pace,” Prosthetics and orthotics international, vol. 18, no. 2, pp. 84–91, 1994.
  • [8] Q. Wang, K. Yuan, J. Zhu, and L. Wang, “Walk the Walk: A Lightweight Active Transtibial Prosthesis,” IEEE Robotics Automation Magazine, vol. 22, no. 4, pp. 80–89, Dec. 2015.
  • [9] H. A. Varol, F. Sup, and M. Goldfarb, “Multiclass Real-Time Intent Recognition of a Powered Lower Limb Prosthesis,” IEEE Transactions on Biomedical Engineering, vol. 57, no. 3, pp. 542–551, Mar. 2010.
  • [10] F. Sup, A. Bohara, and M. Goldfarb, “Design and Control of a Powered Transfemoral Prosthesis,” The International Journal of Robotics Research, vol. 27, no. 2, pp. 263–273, Feb. 2008.
  • [11] J. Wang, Y. Chen, S. Hao, X. Peng, and L. Hu, “Deep learning for sensor-based activity recognition: A Survey,” Pattern Recognition Letters, Feb. 2018.
  • [12] J. M. Souza, N. P. Fey, J. E. Cheesborough, S. P. Agnew, L. J. Hargrove, and G. A. Dumanian, “Advances in Transfemoral Amputee Rehabilitation: Early Experience with Targeted Muscle Reinnervation,” Current Surgery Reports, vol. 2, no. 5, p. 51, Mar. 2014.
  • [13] T. R. Clites, M. J. Carty, J. B. Ullauri, M. E. Carney, L. M. Mooney, J.-F. Duval, S. S. Srinivasan, and H. M. Herr, “Proprioception from a neurally controlled lower-extremity prosthesis,” Science Translational Medicine, vol. 10, no. 443, p. eaap8373, May 2018.
  • [14] D. Xu, Y. Feng, J. Mai, and Q. Wang, “Real-Time On-Board Recognition of Continuous Locomotion Modes for Amputees With Robotic Transtibial Prostheses,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 26, no. 10, pp. 2015–2025, Oct. 2018.
  • [15] A. M. Simon, K. A. Ingraham, N. P. Fey, S. B. Finucane, R. D. Lipschutz, A. J. Young, and L. J. Hargrove, “Configuring a Powered Knee and Ankle Prosthesis for Transfemoral Amputees within Five Specific Ambulation Modes,” PLOS ONE, vol. 9, no. 6, p. e99387, Jun. 2014.
  • [16] M. Hao, K. Chen, and C. Fu, “Smoother-based 3d Foot Trajectory Estimation Using Inertial Sensors,” IEEE Transactions on Biomedical Engineering, pp. 1–1, 2019.
  • [17] K. Zhang, C. W. de Silva, and C. Fu, “Sensor Fusion for Predictive Control of Human-Prosthesis-Environment Dynamics in Assistive Walking: A Survey,” arXiv:1903.07674 [cs], Mar. 2019, arXiv: 1903.07674.
  • [18] J. S. Matthis, J. L. Yates, and M. M. Hayhoe, “Gaze and the Control of Foot Placement When Walking in Natural Terrain,” Current Biology, vol. 28, no. 8, pp. 1224–1233.e5, Apr. 2018.
  • [19] N. E. Krausz, T. Lenzi, and L. J. Hargrove, “Depth Sensing for Improved Control of Lower Limb Prostheses,” IEEE Transactions on Biomedical Engineering, vol. 62, no. 11, pp. 2576–2587, Nov. 2015.
  • [20] M. Liu, D. Wang, and H. H. Huang, “Development of an Environment-Aware Locomotion Mode Recognition System for Powered Lower Limb Prostheses,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 24, no. 4, pp. 434–443, Apr. 2016.
  • [21] Y. Massalin, M. Abdrakhmanova, and H. A. Varol, “User-Independent Intent Recognition for Lower Limb Prostheses Using Depth Sensing,” IEEE Transactions on Biomedical Engineering, vol. 65, no. 8, pp. 1759–1770, Aug. 2018.
  • [22] K. Zhang, C. Xiong, W. Zhang, H. Liu, D. Lai, Y. Rong, and C. Fu, “Environmental Features Recognition for Lower Limb Prostheses Toward Predictive Walking,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 27, no. 3, pp. 465–476, Mar. 2019.
  • [23] K. Zhang, M. Hao, J. Wang, C. W. de Silva, and C. Fu, “Linked Dynamic Graph CNN: Learning on Point Cloud via Linking Hierarchical Features,” arXiv:1904.10014 [cs], Apr. 2019, arXiv: 1904.10014.
  • [24] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” in Advances in Neural Information Processing Systems 25, F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, Eds. Curran Associates, Inc., 2012, pp. 1097–1105.
  • [25] A. Viterbi, “Error bounds for convolutional codes and an asymptotically optimum decoding algorithm,” IEEE Transactions on Information Theory, vol. 13, no. 2, pp. 260–269, Apr. 1967.