A Preliminary Study on the Learning Informativeness of Data Subsets

09/28/2015 ∙ by Simon Kaltenbacher, et al. ∙ Technische Universität München

Estimating the internal state of a robotic system is complex: it must be performed from multiple heterogeneous sensor inputs and knowledge sources. Such inputs are discretized to capture saliences, represented as symbolic information, which often exhibits structure and recurrence. As these sequences are used to reason over complex scenarios, a more compact representation would aid the exactness of technical cognitive reasoning capabilities, which today are constrained by computational complexity and fall back on representational heuristics or human intervention. These problems need to be addressed to ensure timely and meaningful human-robot interaction. Our work aims at understanding how learning informativeness varies when training on subsets of a given input dataset, with a view to reducing the training set size while retaining the majority of the symbolic learning potential. We prove the concept on human-written texts, and conjecture that this approach will reduce the training data size of sequential instructions, while preserving semantic relations, when gathering information from large remote sources.
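To illustrate the idea behind the study, the sketch below trains a model on progressively larger random subsets of a toy text corpus and tracks how an informativeness proxy changes with subset size. The model (an add-one-smoothed bigram language model), the proxy (held-out perplexity), and the corpus are all illustrative assumptions standing in for the human-written texts mentioned above, not the paper's actual method.

```python
import math
import random
from collections import Counter

def train_bigram(sentences):
    """Count unigrams and bigrams with sentence boundary markers."""
    unigrams, bigrams = Counter(), Counter()
    for sent in sentences:
        tokens = ["<s>"] + sent.lower().split() + ["</s>"]
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def perplexity(sentences, unigrams, bigrams):
    """Held-out perplexity of an add-one-smoothed bigram model."""
    vocab = len(unigrams) + 1  # +1 slot for unseen words
    log_prob, count = 0.0, 0
    for sent in sentences:
        tokens = ["<s>"] + sent.lower().split() + ["</s>"]
        for prev, cur in zip(tokens, tokens[1:]):
            p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab)
            log_prob += math.log(p)
            count += 1
    return math.exp(-log_prob / count)

# Hypothetical toy corpus standing in for the human-written texts in the study.
corpus = ["the robot picks up the cup",
          "the robot places the cup on the table",
          "pick up the cup and place it on the table",
          "the cup is on the table"] * 25
random.seed(0)
random.shuffle(corpus)
held_out, train_pool = corpus[:20], corpus[20:]

# Train on increasingly large subsets and observe how the informativeness
# proxy (held-out perplexity) changes with the size of the training subset.
for fraction in (0.1, 0.25, 0.5, 1.0):
    subset = train_pool[: int(len(train_pool) * fraction)]
    uni, bi = train_bigram(subset)
    print(f"{fraction:>4.0%} of data -> perplexity {perplexity(held_out, uni, bi):.2f}")
```

If the proxy saturates well before the full training pool is used, a smaller subset already captures most of the learnable structure, which is the kind of effect the study sets out to quantify.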
