Exploration of NLU: disassemble the information represented by Natural Language, based on the understanding of the internal structure of information, modeling the storage and processing of information

10/24/2020 ∙ by Limin Zhang, et al.

Natural language is one of the ways information is encoded; it highly abstracts and conceptualizes information. This paper disassembles the information represented by natural language, analyzes the classification coding system of attribute information and the abstraction relation between attribute information and entities in the real world, constructs a storage model of information, simulates the attribute-information processing process in one of the attribute spaces, interprets how the relations represented by "Be", "Of", "Have", and so on are embodied in the information storage data structures and the corresponding data reading modes, and reclassifies sentence types from the perspective of task types and data reading modes. The understanding process (the information processing process) is then simulated on a dialogue example. Finally, the author summarizes the basic conditions of understanding and gives a definition of understanding from a personal point of view. The study provides a practical and theoretical basis and research methods for NLU, and can also be applied to large-scale, multi-type information processing in the artificial intelligence (AI) area.
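The abstract's claim that relations such as "Be", "Of", and "Have" are embodied as different data reading modes over a single entity-attribute storage structure can be sketched as follows. This is a minimal hypothetical illustration by the editor, not the paper's actual model; the class and method names are assumptions.

```python
# Hypothetical sketch: one entity-attribute store, with "Be", "Of",
# and "Have" realized as three distinct read modes over the same data.

class AttributeStore:
    def __init__(self):
        # entity name -> {attribute name -> attribute value}
        self.entities = {}

    def add(self, entity, attributes):
        # Store (or extend) the attribute information of an entity.
        self.entities.setdefault(entity, {}).update(attributes)

    def be(self, entity, category):
        # "X is a Y": read the entity's category attribute and compare.
        return self.entities.get(entity, {}).get("category") == category

    def of_(self, attribute, entity):
        # "the A of X": read the value stored under attribute A.
        return self.entities.get(entity, {}).get(attribute)

    def have(self, entity, attribute):
        # "X has A": check whether the attribute is present at all.
        return attribute in self.entities.get(entity, {})


store = AttributeStore()
store.add("sparrow", {"category": "bird", "color": "brown"})
print(store.be("sparrow", "bird"))        # True
print(store.of_("color", "sparrow"))      # brown
print(store.have("sparrow", "wingspan"))  # False
```

The point of the sketch is that the three relation words require no separate storage: each corresponds to a different way of reading the same attribute dictionary, which is the reading-mode distinction the abstract describes.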





