What Can This Robot Do? Learning from Appearance and Experiments
When presented with an unknown robot (the subject), how can an autonomous agent (the learner) figure out what this new robot can do? The subject's appearance can provide cues to its physical as well as cognitive capabilities. Seeing a humanoid can make one wonder whether it can kick balls, climb stairs, or recognize faces. What if the learner can request the subject to perform these tasks? We present an approach in which the learner builds a model of the subject at a task based on the subject's appearance and refines it through experimentation. Apart from the subject's inherent capabilities, certain extrinsic factors may affect its performance at a task. Based on the subject's appearance and prior knowledge about the task, the learner can identify a set of potential factors, a subset of which we assume are controllable. Our approach picks values of the controllable factors so as to generate the most informative experiments at which to test the subject. Additionally, we present a metric to determine whether a factor should be incorporated into the model. We present results of our approach on modeling a humanoid robot at the task of kicking a ball. First, we show that actively picking values for the controllable factors, even in noisy experiments, leads to faster learning of the subject's model for the task. Second, starting from a minimal set of factors, our metric identifies the relevant factors to incorporate into the model. Finally, we show that the refined model better represents the subject's performance at the task.
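The abstract does not specify how "most informative" experiments are chosen or how the factor-relevance metric is computed; the sketch below is only an illustrative stand-in under assumptions of our own. It approximates informativeness by uncertainty sampling over a Bayesian linear model of the task score, and factor relevance by the increase in residual error when a factor is removed from the model. All concrete names (kick score, ball_mass, floor_slope) are hypothetical.

```python
# Hypothetical sketch: actively picking controllable-factor values and scoring
# factor relevance. Not the paper's method; a Bayesian linear model and
# uncertainty sampling are assumptions made for illustration.
import numpy as np

rng = np.random.default_rng(0)


def posterior(X, y, prior_var=10.0, noise_var=1.0):
    """Posterior over weights of a Bayesian linear model y = X w + noise."""
    d = X.shape[1]
    S_inv = np.eye(d) / prior_var + X.T @ X / noise_var
    S = np.linalg.inv(S_inv)
    m = S @ (X.T @ y) / noise_var
    return m, S


def predictive_var(x, S, noise_var=1.0):
    """Variance of the predicted task score at factor setting x."""
    return float(x @ S @ x) + noise_var


def pick_next_experiment(candidates, S):
    """Pick the factor setting with the largest predictive variance --
    a simple stand-in for the 'most informative' experiment."""
    variances = [predictive_var(x, S) for x in candidates]
    return candidates[int(np.argmax(variances))]


def factor_relevance(X, y, j):
    """Crude relevance score for factor j: growth in mean squared residual
    when that factor is zeroed out of the model before refitting."""
    m_full, _ = posterior(X, y)
    X_red = X.copy()
    X_red[:, j] = 0.0
    m_red, _ = posterior(X_red, y)
    err_full = np.mean((y - X @ m_full) ** 2)
    err_red = np.mean((y - X_red @ m_red) ** 2)
    return err_red - err_full  # > 0 suggests factor j helps explain outcomes


# Toy loop: factors = [bias, ball_mass, floor_slope]; the simulated subject's
# kick score ignores floor slope, so only ball_mass should look relevant.
def run_experiment(x):
    return 3.0 - 2.0 * x[1] + rng.normal(0.0, 0.3)  # noisy kick outcome


candidates = [np.array([1.0, mass, slope])
              for mass in np.linspace(0.1, 1.0, 5)
              for slope in np.linspace(-0.2, 0.2, 5)]

X, y = np.empty((0, 3)), np.empty(0)
for _ in range(15):
    _, S = posterior(X, y)
    x = pick_next_experiment(candidates, S)
    X = np.vstack([X, x])
    y = np.append(y, run_experiment(x))

print("ball_mass relevance:", factor_relevance(X, y, 1))
print("floor_slope relevance:", factor_relevance(X, y, 2))
```

In this toy setting the relevance score for the slope factor stays near zero while the ball-mass score is clearly positive, mirroring the abstract's idea of starting from a minimal factor set and adding only those that improve the model of the subject's performance.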