Forgetting and consolidation for incremental and cumulative knowledge acquisition systems

The application of cognitive mechanisms to support knowledge acquisition is, from our point of view, crucial for making the resulting models coherent, efficient, credible, easy to use and understandable. In particular, two characteristic features of intelligence are essential for knowledge development: forgetting and consolidation. Both play an important role in knowledge bases and learning systems: they help avoid information overflow and redundancy, preserve and strengthen important or frequently used rules, and remove (or forget) useless ones. We present an incremental, lifelong view of knowledge acquisition that aims to improve performance task after task by determining what to keep, what to consolidate and what to forget, thereby addressing the stability-plasticity dilemma. To do so, we rate rules with several metrics derived from what is, to our knowledge, the first adaptation of the Minimum Message Length (MML) principle to a coverage graph, a hierarchical assessment structure that treats evidence and rules in a unified way. The metrics are used not only to forget some of the worst-rated rules, but also to drive a consolidation process that promotes selected rules to the knowledge base, mirrored by a demotion system. We evaluate the framework on a series of tasks in a chess rule learning domain.
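To make the abstract's mechanism concrete, here is a minimal Python sketch of the kind of loop it describes. It is illustrative only, not the paper's actual coverage-graph formulation: it assumes a hypothetical Rule record and a two-part MML-style score (bits to encode the rule plus bits to encode its covered evidence, with exceptions penalised via the binary entropy of the error rate), and uses that score to forget the worst rules, promote strong ones to the knowledge base, and demote degraded ones. All names and thresholds (Rule, mml_score, consolidate, forget_fraction, promote_bits) are assumptions for illustration.

```python
import math
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    complexity_bits: float  # bits needed to encode the rule itself (assumed given)
    covered: int            # number of evidence items the rule covers
    errors: int             # covered items the rule gets wrong

def mml_score(rule: Rule) -> float:
    """Two-part MML-style score (lower is better): the cost of stating the
    rule plus the cost of encoding its covered evidence, with exceptions
    penalised via the binary entropy of the error rate."""
    if rule.covered == 0:
        return float("inf")  # covers no evidence: maximally forgettable
    p = min(max(rule.errors / rule.covered, 1e-9), 1.0 - 1e-9)
    data_bits = rule.covered * (-p * math.log2(p) - (1 - p) * math.log2(1 - p))
    return rule.complexity_bits + data_bits

def consolidate(working, kb, forget_fraction=0.25, promote_bits=20.0):
    """After each task: forget the worst-scoring fraction of working rules,
    promote strong survivors to the knowledge base, and demote knowledge-base
    rules whose scores have degraded (the mirrored demotion system)."""
    ranked = sorted(working, key=mml_score)
    cut = int(len(ranked) * (1 - forget_fraction))
    survivors, forgotten = ranked[:cut], ranked[cut:]
    promoted = [r for r in survivors if mml_score(r) <= promote_bits]
    retained = [r for r in survivors if mml_score(r) > promote_bits]
    demoted = [r for r in kb if mml_score(r) > promote_bits]
    kept_kb = [r for r in kb if mml_score(r) <= promote_bits]
    return retained + demoted, kept_kb + promoted, forgotten

# Example: a broad, accurate rule gets promoted; a noisy one is forgotten.
working, kb, forgotten = consolidate(
    [Rule("legal-knight-move", 10.0, 50, 1), Rule("noise", 40.0, 4, 2)], [])
```

Under these assumptions, rules that cover much evidence with few exceptions compress the data well and earn a low message length, so they survive forgetting and are candidates for consolidation, while complex rules covering little evidence are the first to be forgotten.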


Related research

10/02/2018 · Rough set based lattice structure for knowledge representation in medical expert systems: low back pain management case study
The aim of medical knowledge representation is to capture the detailed d...

07/26/2021 · In Defense of the Learning Without Forgetting for Task Incremental Learning
Catastrophic forgetting is one of the major challenges on the road for c...

01/30/2018 · Riemannian Walk for Incremental Learning: Understanding Forgetting and Intransigence
We study the incremental learning problem for the classification task, a...

08/27/2022 · Anti-Retroactive Interference for Lifelong Learning
Humans can continuously learn new knowledge. However, machine learning m...

02/03/2019 · Incremental Learning with Maximum Entropy Regularization: Rethinking Forgetting and Intransigence
Incremental learning suffers from two challenging problems; forgetting o...

12/12/2014 · A Robust Transformation-Based Learning Approach Using Ripple Down Rules for Part-of-Speech Tagging
In this paper, we propose a new approach to construct a system of transf...

03/27/2013 · The Automatic Training of Rule Bases that Use Numerical Uncertainty Representations
The use of numerical uncertainty representations allows better modeling ...
