Robotic Supervised Autonomy: A Review

06/27/2019
by   Yangming Li, et al.
Rochester Institute of Technology

This invited paper discusses a new but important problem in robotics: supervised autonomy. The paper defines supervised autonomy and compares it with robotic teleoperation and robotic full autonomy. Based on this discussion, the significance of supervised autonomy is introduced. The paper then discusses the challenging and unsolved problems in supervised autonomy and reviews related work from our research lab. The paper concludes that supervised autonomy is critical for applying robotic systems to complicated real-world problems.


I Introduction

Robotic technology is meant to improve operation precision and reliability and to liberate human beings from boring, tedious, and dangerous work. With the development of robotics, more and more robotic systems have been applied to real-world applications to address real-world problems. For example, industrial robots greatly improve production precision and reliability and lower manufacturing costs.

One of the key factors behind the successful application of robotic technology is robotic autonomy[1, 2]. Nowadays, it is generally considered that robotic autonomy for repetitive tasks in fixed and simple environments is a solved problem[3], and robots generally outperform their human equivalents in cost, efficiency, and reliability. However, achieving autonomy in real-world applications in the dynamic real world remains extremely challenging. This is because, in complicated real-world applications, robots need to understand their environments, adapt to environmental dynamics, and adjust task execution automatically. To this day, it remains a dream to ask a robot to do work for us the way we would ask a friend.

There are many efforts we need to make in order to push robotic systems to the next level of applicability[4]. Among them are some key technical challenges:

  • sensors,

  • robotic system reliability,

  • robotic learning and intelligence,

  • robot/human interaction.

Besides these, there are also ethical/moral problems and legal concerns that need to be addressed before fully intelligent robots can be introduced into our lives.

Some of these problems can be addressed by supervised autonomy, as it increases the applicability of robotic systems by introducing human expert knowledge and decreases the adoption barrier for robotic systems. This paper discusses what supervised autonomy is and the main differences between supervised autonomy, full autonomy, and teleoperation. It presents the key challenges and the significance of the technology.

The paper is organized as follows: Section II defines supervised autonomy and compares it to full autonomy and teleoperation. Section III introduces the significance of supervised autonomy. Section IV discusses the key challenging problems in supervised autonomy. The last section draws conclusions.

II What is Supervised Autonomy

II-A Supervised Autonomy vs. Full Autonomy

The ultimate goal of robotics is to liberate human beings from heavy, tedious, and boring work. It is generally accepted that robotic autonomy has different levels, which represent increasing degrees of autonomy. There are different definitions of robotic autonomy levels. In this paper, we focus on the relationship between autonomy and supervised autonomy, and pick the autonomy definition for autonomous cars as a reference to facilitate the discussion.

For autonomous cars, there are five levels of autonomy, with level 5 considered full autonomy. Below these five levels sits no autonomy at all, which we consider level 0.

II-A1 Level 0: No Automation

Electrical/mechanical systems extend human capabilities. These systems are fully controlled by human beings and do not assist humans with operations; traditional cars are such examples. These systems, no matter how complex, are intrinsically tools, fully operated by human beings.

II-A2 Level 1: Robotic Assistance Without Environmental Perception

This level of autonomy is primitive. Systems with such autonomy can assist humans with operations, often based on human setup and simple rules; traditional cruise control is an example. When a driver sets the cruise speed, the car uses its speed sensor to close the control loop and relieves the driver of tedious gas pedal control.
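The cruise-control example above can be sketched as a simple proportional feedback loop. This is a minimal illustration only; the gain, the toy plant model, and all names below are our illustrative assumptions, not part of the original system.

```python
# Minimal sketch of a level-1 assistance loop: a proportional
# controller holds a driver-set cruise speed by closing the loop
# through the speed sensor. Gains and the plant model are toy values.

def cruise_control_step(target_speed, measured_speed, kp=1.0):
    """Return a throttle adjustment proportional to the speed error."""
    error = target_speed - measured_speed
    return kp * error  # positive -> more throttle, negative -> less

speed = 20.0  # m/s, read from the speed sensor
for _ in range(50):
    throttle = cruise_control_step(27.0, speed)
    speed += 0.1 * throttle  # toy plant: speed responds to throttle

# after enough iterations the loop settles near the set speed
assert abs(speed - 27.0) < 0.1
```

The robot never perceives the environment here; it only regulates its own state against a human-set goal, which is exactly what distinguishes level 1 from level 2.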

II-A3 Level 2: Robotic Assistance With Environmental Perception

Level 2 autonomy still depends on simple rules; however, such robots can actively perceive their environments and adapt to environmental changes to a certain degree. For example, level 2 autonomous cars can react to traffic lights.

II-A4 Level 3: Robotic Operation Under Human Monitoring

When robots reach level 3, they can continuously monitor their environments and autonomously adjust operations based on environmental changes. However, these robots are still not capable of adapting to all conditions and require human monitoring at all times.

II-A5 Level 4: Robotic Operation With Human Supervision

Level 4 autonomy, for autonomous cars, is fully autonomous but needs supervision. Level 4 autonomous cars can adapt to various road conditions and environments, and only need human decisions in rare cases.

II-A6 Level 5: Full Automation

Level 5 autonomy, for autonomous cars, is the equivalent of a human driver. These robots behave like taxi drivers and do not need human intervention.

From the discussion above, it is clear that robots with full autonomy are the equivalent of human experts. It is worth pointing out that, compared to driving, most real-world tasks are more complicated. Humans often need only days to learn to drive, but many professional services, such as surgery, take years of training and practice, and the rules are not straightforward. It will be much harder to realize full autonomy in many fields.

II-B Supervised Autonomy vs. Teleoperation

Teleoperation[5] refers to the remote control of a robot from a console. In a teleoperated robotic system, a human operator controls the movements of the slave side of the robot. Teleoperation is clearly the opposite of full autonomy, as human operators fully control the robot.

II-C Levels of Supervised Autonomy

From the discussion above, we can see that teleoperation depends solely on human micro-level control for performing tasks and governing robot behavior. Autonomy, by comparison, relies purely on the robot for task performance. However, in real-world applications, operations and behavior are under supervision, and the more dangerous the operations, the more intense and complicated the supervision. Beyond legal reasons, there are real technical motivations: expert knowledge is needed to ensure successful execution, reliability, and results that meet expectations. From this perspective, autonomy is always supervised.

Similar to full autonomy, supervised autonomy has different levels. These levels reflect increasing capability of environmental perception and increasing capability to solve tasks of growing complexity[6] (Fig. 1).

Fig. 1: Perception in Supervised Autonomy. In low-level supervised autonomy, robots close the loop of control by monitoring self-status. In high-level supervised autonomy, robots have environmental perception and can perform complex tasks.

II-C1 Level 1: Robotic Assistance

This is the lowest level of supervised autonomy. At this level, humans set the goal status for robots, and robots adjust their own status accordingly.

This level of supervised autonomy liberates humans from simple, repetitive micro-level controls, but the robots can neither adapt to environmental changes nor perform complex tasks.

II-C2 Level 2: Entry-Level Task Autonomy Under Supervision

Robots with level 2 supervised autonomy can perform entry-level tasks under continuous and intensive human supervision. These robots have superior capabilities in environmental perception and task execution, but lack the capability for complex task planning and decision making.

Task complexity is a well-studied problem with many definitions[6]. In the context of supervised autonomy, we divide tasks into two categories: entry-level tasks and specialist-level tasks. Entry-level tasks are based on simple rules, and their results can be objectively measured by robots. For example, when a robot grasps an object, the rules for the task are explicit and simple, and the results can be objectively and simply measured by the robot.

II-C3 Level 3: Specialist-Level Task Autonomy Under Supervision

Level 3 supervised autonomy still requires continuous human supervision. Unlike robots at level 2, robots at level 3 reach human-level capability in task execution and can perform complex tasks autonomously under human supervision.

Note that rising from level 2 to level 3 requires not only improved intelligence but also improved hardware and systems. This is because executing more complex tasks requires robots with improved capabilities, collaborative operation, and superior intelligence.

II-C4 Level 4: Autonomy With Human Supervision

Robotic systems with level 4 supervised autonomy have human-expert-level reliability. These systems no longer need intensive supervision from human experts. Instead, they behave as human experts and actively consult human experts when the systems deem it necessary.

III Why Supervised Autonomy?

Supervised autonomy is needed because human experts' opinions are the gold standard. Even though supervised autonomy is closely related to robotic teleoperation and full autonomy, it has clear differences. When robotic systems are teleoperated, they require no autonomy; control rests fully with human beings, and supervised autonomy is not needed. When robotic systems are fully autonomous, control is fully independent of humans, and supervised autonomy is likewise not needed.

III-A Robotic System Reliability

Robotic systems have many components, and a single problem in any of them can jeopardize the performance and stability of the robot[7, 8, 9, 10]. Improving the stability and performance of robotic systems requires the development of robotic technology and the accumulation of experience[11, 12]. Moreover, improving system reliability often relies on extra hardware[13, 14, 15] and software[16, 17, 18], which further increases the complexity of robotic systems[19, 20, 21, 22, 23, 24, 25].

Robotic systems work continuously in dynamic real environments[26, 27, 28], which contain various adverse factors that can cause system failure[29]. Because of the system complexity, environmental complexity, and task complexity, it is extremely challenging under today's technology to maintain robots' performance and stability in real-world applications, regardless of the improvement of robotic technology and researchers' efforts. Expert supervision can serve as a guardian for robotic systems and allows introducing them into real-world applications.

III-B Robotic Intelligence

Most existing robotic research focuses on robotic technology[30, 31]. However, domain knowledge is needed in many real-world applications[32, 33, 34]. Expert-level knowledge is essential to successful robotic applications but is difficult to achieve[35, 36, 37]. Classical expert systems rely on rule-based intelligence and face exponential increases in complexity[38, 39], and thus are not easy to develop and maintain for complex applications[40, 41]. Recently, deep-learning-based methods have achieved impressive progress and outperform humans in many applications[42, 43], such as natural language processing. However, deep learning methods often require a large amount of training data[44], which is often unavailable for robotic applications. Moreover, deep learning methods are often applied to single-task problems and are sensitive to changes in data distribution. As a result, equipping robots with expert-level skills is still a challenging and unsolved problem. Because of these limitations, supervised autonomy plays a significant role in accelerating the introduction of robots into real-world applications.

III-C Robotic Collaboration

Collaboration is essential for complex tasks, even for human beings. Although there is a large amount of existing research and effort toward robot/human collaboration[45], existing results often aim at single-task applications[46]. Therefore, utilizing human expert knowledge to decompose and simplify tasks into simpler ones that robots can handle is important for extending robotic applications.

III-D The Ethical and Legal Vacuum

Although robotics research has made impressive progress over the past two centuries, and robots have already started to address real-world problems, ethics and law remain blank for robots that directly interact with humans. For simple operations, such as driving, human beings can often reach a consensus on evaluating the operations: an operation that causes a traffic accident and damages people or property has definitively failed. For complex operations, such as surgeries, even human experts can hold conflicting opinions, often because the evaluation of the operation itself is complicated. As a result, the evaluation of hypothetical operations is even more complicated and often controversial. When a large loss of value is associated with an operation, people file lawsuits to seek resolution, and when such a situation arises, a committee of human experts serves as the reference for the court. For robots, when concerns are raised about operations, human experts will evaluate the results. Clearly, supervised autonomy allows human experts to make the critical decisions that ensure robotic systems are safe and effective.

IV Challenges and Opportunities in Supervised Autonomy

IV-A Sensor Information

Sensors are fundamental and challenging components of robotic systems[47, 48, 49, 50]. Even after decades of research, it is still difficult to raise robot perception to the human level[51]. For example, tactile sensing is essential and fundamental for human beings, but despite the impressive progress of force-sensor-matrix research, robot haptic sensing remains far from human performance[52] in both sensing precision and resolution[53, 54].

Another important problem is sensor fusion[55, 56, 57]. Humans naturally use all available information[58, 59], such as vision, hearing, and touch, to perform tasks; for robots, however, sensor fusion is still a challenging problem, especially given comparatively poor sensor information quality[60, 61, 62].
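One classic fusion scheme, shown here only as a minimal sketch, is inverse-variance weighting of independent estimates: the lower-variance sensor dominates the fused result. The sensor names, readings, and variances below are illustrative assumptions.

```python
# Minimal sketch of sensor fusion by inverse-variance weighting:
# two noisy estimates of the same quantity are combined so that
# the more reliable sensor carries more weight.

def fuse(est_a, var_a, est_b, var_b):
    """Fuse two independent estimates; return (fused_value, fused_variance)."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    return fused, 1.0 / (w_a + w_b)

# Hypothetical lidar (precise) and vision (noisy) ranges to one obstacle:
dist, var = fuse(2.00, 0.01, 2.40, 0.09)

assert 2.00 < dist < 2.20  # fused estimate leans toward the lidar reading
assert var < 0.01          # fusion never increases uncertainty
```

This also hints at why poor sensor quality makes fusion hard: if the variances themselves are unknown or misestimated, the weights are wrong and the fused estimate can be worse than the best single sensor.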

IV-B Robot Control

Robot control is a historical problem that remains challenging and attractive. A huge research community focuses on improving control efficiency[63] and system robustness[64, 65]. However, while modern robots often include redundancy for improved system reliability, redundancy makes the control problem harder[66]. When multiple redundant robots work collaboratively, the control problem reaches a new level of complexity[67, 68].
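To illustrate why redundancy complicates control, consider the classical Jacobian pseudoinverse formulation with a null-space term (a textbook technique, not the recurrent-neural-network schemes of the cited works): a redundant arm has more joints than task coordinates, so infinitely many joint velocities achieve the same end-effector motion, and the controller must also decide how to spend the surplus. The 3-joint planar arm and all numbers below are illustrative assumptions.

```python
# Sketch of redundant-manipulator velocity control: the pseudoinverse
# solves the primary end-effector task, and a secondary motion is
# projected into the Jacobian's null space so it cannot disturb it.
import numpy as np

def redundant_joint_velocities(J, x_dot, q_dot_secondary):
    """Primary task via pseudoinverse plus a null-space secondary motion."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J   # null-space projector
    return J_pinv @ x_dot + N @ q_dot_secondary

# 2-D end-effector task with 3 joints -> one redundant degree of freedom
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.7]])           # illustrative Jacobian
x_dot = np.array([0.1, 0.0])              # desired end-effector velocity
secondary = np.array([0.0, 0.0, 0.3])     # e.g. nudge joint 3 toward a posture

q_dot = redundant_joint_velocities(J, x_dot, secondary)

# the null-space term leaves the end-effector task untouched
assert np.allclose(J @ q_dot, x_dot)
```

Choosing the secondary motion (posture optimization, obstacle avoidance, joint-limit avoidance) is precisely the extra decision that redundancy forces on the controller, and with multiple collaborating redundant robots these choices must additionally be coordinated.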

IV-C Robotic Intelligence

Human intelligence keeps growing from new experiences and is superior in heuristic learning and reasoning[69]. Robots need a similar learning capability to keep increasing performance. Robots also need improved reasoning capability to transfer knowledge from domain to domain, a problem that needs to be addressed as soon as possible.

IV-D Robot/Human Interaction and Interface

Supervised autonomy requires robot/human interaction. Classical robot/human interaction is insufficiently efficient for supervised autonomy[70]. We need more than emergency stops to guide robots toward improved system performance[71].

V Conclusion

This work introduces supervised autonomy, a new and important topic in robotics. The paper defines supervised autonomy and, through comparison with teleoperation and full autonomy, explains the significance and importance of the topic. The paper discusses challenges and opportunities in supervised autonomy, and we hope it helps other researchers quickly push forward the development of supervised autonomy.

Acknowledgment

References

  • [1] Y. Li, “Research on robust mapping methods in unstructured environments,” Ph.D. dissertation, University of Science and Technology of China, Hefei, Anhui, China, 5 2010.
  • [2] Y. Li, S. Li, and B. Hannaford, "A model based recurrent neural network with randomness for efficient control with applications," IEEE Transactions on Industrial Informatics, 2018.
  • [3] S. Li and Y. Li, “Nonlinearly activated neural network for solving time-varying complex sylvester equation,” IEEE transactions on cybernetics, vol. 44, no. 8, pp. 1397–1407, 2014.
  • [4] S. Li, Y. Li, and Z. Wang, “A class of finite-time dual neural networks for solving quadratic programming problems and its k-winners-take-all application,” Neural Networks, vol. 39, pp. 27–39, 2013.
  • [5] P. F. Hokayem and M. W. Spong, “Bilateral teleoperation: An historical survey,” Automatica, vol. 42, no. 12, pp. 2035–2057, 2006.
  • [6] D. J. Campbell, “Task complexity: A review and analysis,” Academy of management review, vol. 13, no. 1, pp. 40–52, 1988.
  • [7] S. Zhao, W. Huang, and Y. Li, "An improved pattern matching algorithm of intrusion detection based on kmp," Journal of Jinggangshan University (Natural Sciences Edition), vol. 34, no. 1, pp. 55–57, 2013.
  • [8] M. Miyasaka, M. Haghighipanah, Y. Li, and B. Hannaford, “Hysteresis model of longitudinally loaded cable for cable driven robots and identification of the parameters,” in Robotics and Automation (ICRA), 2016 IEEE International Conference on.   IEEE, 2016, pp. 4051–4057.
  • [9] Y. Li, M. Q.-H. Meng, S. Li, W. Chen, and H. Liang, “Particle filtering for range-based localization in wireless sensor networks,” in Intelligent Control and Automation, 2008. WCICA 2008. 7th World Congress on.   IEEE, 2008, pp. 1629–1634.
  • [10] Y. Li, M. Q.-H. Meng, S. Li, W. Chen, Z. You, Q. Guo, and H. Liang, “A quadtree based neural network approach to real-time path planning,” in Robotics and Biomimetics, 2007. ROBIO 2007. IEEE International Conference on.   IEEE, 2007, pp. 1350–1354.
  • [11] M. Haghighipanah, M. Miyasaka, Y. Li, and B. Hannaford, "Unscented kalman filter and 3d vision to improve cable driven surgical robot joint angle estimation," in Robotics and Automation (ICRA), 2016 IEEE International Conference on.   IEEE, 2016, pp. 4135–4142.
  • [12] Q. Guo, H. Liang, W. Chen, Y. Li, and S. Li, "Design of operating system for node of wireless sensor networks," Automation and Instrumentation, vol. 23, no. 10, pp. 23–25, 2008.
  • [13] J. Han and Y. Li, “Study on can-ethernet gateway based on submerge frame,” Microcontrollers and Embedded Systems, no. 006, pp. 10–13, 2006.
  • [14] H. Jin, J. Han, and Y. Li, “Research of exception handler mechanism for embedded system based on ARM,” Modern Electronic Technique, vol. 28, no. 022, pp. 1–3, 2005.
  • [15] Y. Li, J. Han, and H. Jin, “Application on dual audio frequency technology to data transmission in underground mine,” Coal Science and Technology, vol. 33, no. 11, pp. 22–25, 2005.
  • [16] W. Chen, T. Meil, Y. Li, H. Liang, Y. Liu, and M. Q.-H. Meng, “An auto-adaptive routing algorithm for wireless sensor networks,” in Information Acquisition, 2007. ICIA’07. International Conference on.   IEEE, 2007, pp. 574–578.
  • [17] S. Li, M. Q. Meng, W. Chen, Y. Li, Z. You, Y. Zhou, L. Sun, H. Liang, K. Jiang, and Q. Guo, “Sp-nn: a novel neural network approach for path planning,” in Robotics and Biomimetics, 2007. ROBIO 2007. IEEE International Conference on.   IEEE, 2007, pp. 1355–1360.
  • [18] W. Chen, M. Q.-H. Meng, S. Li, T. Mei, H. Liang, and Y. Li, “Energy efficient head node selection algorithm in wireless sensor networks,” in Robotics and Biomimetics, 2007. ROBIO 2007. IEEE International Conference on.   IEEE, 2007, pp. 1366–1371.
  • [19] S. Li and Y. Li, “Distributed range-free localization of wireless sensor networks via nonlinear dynamics,” in Wireless Sensor Networks-Technology and Protocols.   InTech, 2012.
  • [20] W. Chen, T. Mei, L. Sun, Y. Liu, Y. Li, S. Li, H. Liang, and M. Q.-H. Meng, “Error analyzing for rssi-based localization in wireless sensor networks,” in Intelligent Control and Automation, 2008. WCICA 2008. 7th World Congress on.   IEEE, 2008, pp. 2701–2706.
  • [21] J. Rosen, L. N. Sekhar, D. Glozman, M. Miyasaka, J. Dosher, B. Dellon, K. S. Moe, A. Kim, L. J. Kim, T. Lendvay, Y. Li, and B. Hannaford, “Roboscope: A flexible and bendable surgical robot for single portal minimally invasive surgery,” in Robotics and Automation (ICRA), 2017 IEEE International Conference on.   IEEE, 2017, pp. 2364–2370.
  • [22] W. Chen, T. Mei, M. Q.-H. Meng, H. Liang, Y. Liu, Y. Li, and S. Li, “Localization algorithm based on a spring model (lasm) for large scale wireless sensor networks,” Sensors, vol. 8, no. 3, pp. 1797–1818, 2008.
  • [23] Y. Li, Q. Meng, H. Liang, S. Li, and W. Chen, “On wsn—aided simultaneous localization and mapping based on particle filtering,” Robot, vol. 30, no. 5, pp. 421–427, 2008.
  • [24] Y. Li, M. Q.-H. Meng, H. Liang, S. Li, and W. Chen, “Particle filtering for wsn aided slam,” in Advanced Intelligent Mechatronics, 2008. AIM 2008. IEEE/ASME International Conference on.   IEEE, 2008, pp. 740–745.
  • [25] M. Q.-H. Meng, H. Liang, S. Li, Y. Li, W. Chen, Y. Zhou, S. Miao, K. Jiang, Q. Guo, et al., “A localization algorithm in wireless sensor networks using a mobile beacon node,” in Information Acquisition, 2007. ICIA’07. International Conference on.   IEEE, 2007, pp. 420–426.
  • [26] Y. Li, J. Zhang, and S. Li, “STMVO: biologically inspired monocular visual odometry,” Neural Computing and Applications, vol. 29, no. 6, pp. 215–225, 2018.
  • [27] N. Aghdasi, Y. Li, A. Berens, K. S. Moe, R. A. Bly, and B. Hannaford, “Atlas and feature based 3d pathway visualization enhancement for skull base pre-operative fast planning from head ct,” in SPIE Medical Imaging.   International Society for Optics and Photonics, 2015, pp. 941 519–941 519.
  • [28] Y. Li, S. Li, and Y. Ge, “A biologically inspired solution to simultaneous localization and consistent mapping in dynamic environments,” Neurocomputing, vol. 104, pp. 170–179, 2013.
  • [29] M. Haghighipanah, Y. Li, M. Miyasaka, and B. Hannaford, “Improving position precision of a servo-controlled elastic cable driven surgical robot using unscented kalman filter,” in Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on.   IEEE, 2015, pp. 2030–2036.
  • [30] S. Li, B. Liu, and Y. Li, “Selective positive–negative feedback produces the winner-take-all competition in recurrent neural networks,” IEEE transactions on neural networks and learning systems, vol. 24, no. 2, pp. 301–309, 2013.
  • [31] S. Li, Z. Wang, and Y. Li, “Using laplacian eigenmap as heuristic information to solve nonlinear constraints defined on a graph and its application in distributed range-free localization of wireless sensor networks,” Neural Processing Letters, pp. 1–14, 2012.
  • [32] R. A. Harbison, A. M. Berens, Y. Li, R. A. Bly, B. Hannaford, and K. S. Moe, “Region-specific objective signatures of endoscopic surgical instrument motion: A cadaveric exploratory analysis,” Journal of Neurological Surgery Part B: Skull Base, vol. 78, no. 01, pp. 099–104, 2017.
  • [33] R. C. Saxena, S. Friedman, R. A. Bly, J. Otjen, A. M. Alessio, Y. Li, B. Hannaford, M. Whipple, and K. S. Moe, “Comparison of micro–computed tomography and clinical computed tomography protocols for visualization of nasal cartilage before surgical planning for rhinoplasty,” JAMA facial plastic surgery, 2019.
  • [34] Y. Li, R. A. Bly, R. A. Harbison, I. M. Humphreys, M. E. Whipple, B. Hannaford, and K. S. Moe, “Anatomical region segmentation for objective surgical skill assessment with operating room motion data,” Journal of Neurological Surgery Part B: Skull Base, vol. 369, no. 15, pp. 1434–1442, 2017.
  • [35] Y. Li, R. A. Harbison, R. A. Bly, I. M. Humphreys, B. Hannaford, and K. Moe, “Atlas based anatomical region segmentation for minimally invasive skull base surgery objective motion analysis,” in Journal of Neurological Surgery Part B: Skull Base, vol. 78, no. S 01.   Georg Thieme Verlag KG, 2017, p. A146.
  • [36] Y. Li, R. Bly, M. Whipple, I. Humphreys, B. Hannaford, and K. Moe, “Use endoscope and instrument and pathway relative motion as metric for automated objective surgical skill assessment in skull base and sinus surgery,” in Journal of Neurological Surgery Part B: Skull Base, vol. 79, no. S 01.   Georg Thieme Verlag KG, 2018, p. A194.
  • [37] Y. Li, R. Bly, I. Humphreys, M. Whipple, B. Hannaford, and K. Moe, “Surgical motion based automatic objective surgical completeness assessment in endoscopic skull base and sinus surgery,” in Journal of Neurological Surgery Part B: Skull Base, vol. 79, no. S 01.   Georg Thieme Verlag KG, 2018, p. P193.
  • [38] A. M. Berens, R. A. Harbison, Y. Li, R. A. Bly, N. Aghdasi, M. Ferreira Jr, B. Hannaford, and K. S. Moe, “Quantitative analysis of transnasal anterior skull base approach: Report of technology for intraoperative assessment of instrument motion,” Surgical Innovation, pp. 405–410, 2017.
  • [39] R. A. Harbison, Y. Li, A. M. Berens, R. A. Bly, B. Hannaford, and K. S. Moe, “An automated methodology for assessing anatomy-specific instrument motion during endoscopic endonasal skull base surgery,” Journal of Neurological Surgery Part B: Skull Base, vol. 38, no. 03, pp. 222–226, 2017.
  • [40] B. Hannaford, D. Hu, D. Zhang, and Y. Li, “Simulation results on selector adaptation in behavior trees,” 2016.
  • [41] R. A. Harbison, A. Berens, Y. Li, A. Law, M. Whipple, B. Hannaford, and K. Moe, “Objective signatures of endoscopic surgical performance,” in Journal of Neurological Surgery Part B: Skull Base, vol. 77, no. S 01, 2016, p. A120.
  • [42] L. Wang, Z.-H. You, X. Chen, Y.-M. Li, Y.-N. Dong, L.-P. Li, and K. Zheng, “Lmtrda: Using logistic model tree to predict mirna-disease associations by fusing multi-source information of sequences and similarities,” PLoS computational biology, vol. 15, no. 3, p. e1006865, 2019.
  • [43] Z.-H. Chen, L.-P. Li, Z. He, J.-R. Zhou, Y. Li, and L. Wong, “An improved deep forest model for predicting self-interacting proteins from protein sequence using wavelet transformation,” Frontiers in Genetics, vol. 10, 2019.
  • [44] F. Qin, Y. Li, Y.-H. Su, D. Xu, and B. Hannaford, “Surgical instrument segmentation for endoscopic vision with data fusion of cnn prediction and kinematic pose,” in Robotics and Automation (ICRA), 2019 IEEE International Conference on.   IEEE, 2019, pp. 1–6.
  • [45] S. Li, J. He, Y. Li, and M. U. Rafique, “Distributed recurrent neural networks for cooperative control of manipulators: A game-theoretic perspective,” IEEE transactions on neural networks and learning systems, vol. 28, no. 2, pp. 415–426, 2017.
  • [46] L. Jin, S. Li, X. Luo, Y. Li, and B. Qin, “Neural dynamics for cooperative control of redundant robot manipulators,” IEEE Transactions on Industrial Informatics, vol. 14, no. 9, pp. 3812–3821, 2018.
  • [47] Y. Li, Q. Song, H. Liu, and Y. Ge, “General purpose lidar feature extractor for mobile robot navigation,” Journal of Huazhong University of Science and Technology(Nature Science Edition), no. S1, pp. 280–283, 2013.
  • [48] Y. Li and E. B. Olson, “A general purpose feature extractor for light detection and ranging data,” Sensors, vol. 10, no. 11, pp. 10 356–10 375, 2010.
  • [49] ——, "Structure tensors for general purpose lidar feature extraction," in Robotics and Automation (ICRA), 2011 IEEE International Conference on.   IEEE, 2011, pp. 1869–1874.
  • [50] ——, “Extracting general-purpose features from lidar data,” in Robotics and Automation (ICRA), 2010 IEEE International Conference on.   IEEE, 2010, pp. 1388–1393.
  • [51] S. Li, Y. Zhou, M. Q. Meng, H. Liang, Y. Li, Z. You, W. Chen, X. Liu, K. Jiang, and Q. Guo, “A visual localization method for mobile robot based on background subtraction and optical flow tracking,” in Intelligent Control and Automation, 2008. WCICA 2008. 7th World Congress on.   IEEE, 2008, pp. 3870–3874.
  • [52] Y. Li and B. Hannaford, “Close the loop for surgical robot haptic control,” 2017.
  • [53] ——, “Gaussian process regression for sensorless grip force estimation of cable-driven elongated surgical instruments,” IEEE Robotics and Automation Letters, vol. 2, no. 3, pp. 1312–1319, 2017.
  • [54] Y. Li, M. Miyasaka, M. Haghighipanah, L. Cheng, and B. Hannaford, “Dynamic modeling of cable driven elongated surgical instruments for sensorless grip force estimation,” in Robotics and Automation (ICRA), 2016 IEEE International Conference on.   IEEE, 2016, pp. 4128–4134.
  • [55] Y. Li, Q. Song, H. Liu, and Y. Ge, “Applying general purpose lidar feature extract to mobile robot navigation,” in the 10th Chinese Conference on Intelligent Robot, 2013, pp. 1–6.
  • [56] P. Raudaschl, P. Zaffino, G. Sharp, M. Spadea, A. Chen, B. Dawant, T. Albrecht, T. Gass, C. Langguth, M. Luthi, F. Jung, O. Knapp, S. Wesarg, R. Haworth, M. Bowes, A. Ashman, G. Guillard, A. Brett, G. Vincent, M. Arteaga, D. Peña, G. Dominguez, N. Aghdasi, Y. Li, A. Berens, K. Moe, B. Hannaford, R. Schubert, and K. Fritscher, “Evaluation of segmentation methods on head and neck ct: Auto-segmentation challenge 2015,” Medical Physics, vol. 44, no. 5, pp. 2020–2036, 2017.
  • [57] N. Aghdasi, Y. Li, A. M. Berens, R. A. Harbison, K. S. Moe, and B. Hannaford, “Efficient orbital structures segmentation with prior anatomical knowledge,” Journal of Medical Imaging, vol. 4, no. 3, p. 034501, 2017.
  • [58] S. Li, C. Pham, A. Jaekel, M. A. Matin, A. H. M. Amin, and Y. Li, “Perception, reaction, and cognition in wireless sensor networks,” 2013.
  • [59] Y. Li, Y. Wang, J. Sun, F. Shuang, and Y. Ge, “Machine perception and the application to exoskeleton robots,” 2011.
  • [60] Y. Li, S. Li, Q. Song, H. Liu, and M. Q.-H. Meng, “Fast and robust data association using posterior based approximate joint compatibility test,” IEEE Transactions on Industrial Informatics, vol. 10, no. 1, pp. 331–339, 2014.
  • [61] Y. Li and E. B. Olson, “IPJC: The incremental posterior joint compatibility test for fast feature cloud matching,” in Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on.   IEEE, 2012, pp. 3467–3474.
  • [62] Y. Li, M.-H. Meng, and W. Chen, “Data fusion based on rbf and nonparametric estimation for localization in wireless sensor networks,” in Robotics and Biomimetics, 2007. ROBIO 2007. IEEE International Conference on.   IEEE, 2007, pp. 1361–1365.
  • [63] S. Li, Y. Li, B. Liu, and T. Murray, “Model-free control of lorenz chaos using an approximate optimal control strategy,” Communications in Nonlinear Science and Numerical Simulation, 2012.
  • [64] Y. Li and B. Hannaford, “Soft-obstacle avoidance for redundant manipulators with recurrent neural network,” in Intelligent Robots and Systems (IROS), 2018 IEEE/RSJ International Conference on.   IEEE, 2018, pp. 1–6.
  • [65] Y. Li, S. Li, M. Miyasaka, A. Lewis, and B. Hannaford, “Improving control precision and motion adaptiveness for surgical robot with recurrent neural network,” in Intelligent Robots and Systems (IROS), 2017 IEEE/RSJ International Conference on.   IEEE, 2017, pp. 1–6.
  • [66] Y. Li, S. Li, and B. Hannaford, "A novel recurrent neural network control scheme for improving redundant manipulator motion planning completeness," in Robotics and Automation (ICRA), 2018 IEEE International Conference on.   IEEE, 2018, pp. 1–6.
  • [67] S. Li, H. Cui, Y. Li, B. Liu, and Y. Lou, “Decentralized control of collaborative redundant manipulators with partial command coverage via locally connected recurrent neural networks,” Neural Computing & Applications, pp. 1–10, 2012.
  • [68] S. Li, S. Chen, B. Liu, Y. Li, and Y. Liang, “Decentralized kinematic control of a class of collaborative redundant manipulators via recurrent neural networks,” Neurocomputing, 2012.
  • [69] R. A. Harbison, X.-F. Shan, Z. Douglas, S. Bevans, Y. Li, K. S. Moe, N. Futran, and J. J. Houlton, “Navigation guidance during free flap mandibular reconstruction: a cadaveric trial,” JAMA Otolaryngology–Head & Neck Surgery, vol. 143, no. 3, pp. 226–233, 2017.
  • [70] Y. Li, “Trends in control and decision-making for human-robot collaboration systems [bookshelf],” IEEE Control Systems Magazine, vol. 39, no. 2, pp. 101–103, April 2019.
  • [71] H. Wang, A. Song, B. Li, B. Xu, and Y. Li, “Psychophysiological classification and experiment study for spontaneous eeg based on two novel mental tasks,” Technology and Health Care, vol. 23, no. s2, pp. S249–S262, 2015.