Including Uncertainty when Learning from Human Corrections
It is difficult for humans to efficiently teach robots how to correctly perform a task. One intuitive solution is for the robot to iteratively learn the human's preferences from corrections, where the human improves the robot's current behavior at each iteration. When learning from corrections, we argue that while the robot should estimate the most likely human preferences, it should also know what it does not know, and integrate this uncertainty when making decisions. We advance the state of the art by introducing a Kalman filter for learning from corrections: this approach not only estimates the human's preferences but also maintains the robot's uncertainty over that estimate. Next, we demonstrate how this uncertainty can be leveraged for active learning and risk-sensitive deployment. Our results indicate that maintaining and leveraging uncertainty leads to faster learning from human corrections.
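To make the core idea concrete, here is a minimal sketch of a Kalman filter measurement update over latent preference weights. This is an illustration under assumed modeling choices, not the paper's implementation: we assume each human correction can be cast as a noisy linear observation `y = H @ theta + noise` of the preference vector `theta`, and all matrices and noise levels below are hypothetical.

```python
import numpy as np

def kalman_update(mu, Sigma, y, H, R):
    """One Kalman measurement update on the preference estimate.

    mu, Sigma : current mean and covariance over preference weights
    y         : observation derived from a human correction (assumed linear)
    H, R      : observation matrix and observation noise covariance
    """
    S = H @ Sigma @ H.T + R             # innovation covariance
    K = Sigma @ H.T @ np.linalg.inv(S)  # Kalman gain
    mu_new = mu + K @ (y - H @ mu)      # pull estimate toward the correction
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma  # uncertainty shrinks
    return mu_new, Sigma_new

# Illustrative example: 2-D preference weights observed directly with noise.
mu = np.zeros(2)       # prior mean over preferences
Sigma = np.eye(2)      # prior covariance: what the robot does not know
H = np.eye(2)          # hypothetical observation model
R = 0.1 * np.eye(2)    # hypothetical correction noise

true_theta = np.array([1.0, -0.5])  # simulated ground-truth preferences
rng = np.random.default_rng(0)
for _ in range(5):
    y = true_theta + rng.normal(scale=0.1, size=2)  # simulated correction
    mu, Sigma = kalman_update(mu, Sigma, y, H, R)

# The trace of Sigma quantifies remaining uncertainty; a robot could query
# the human (active learning) or act conservatively (risk-sensitive
# deployment) when this value is large.
print(np.trace(Sigma))
```

After a handful of simulated corrections, the mean converges toward the true preferences while the covariance contracts, giving the robot an explicit signal for when it still "does not know."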