Constraint-driven multi-task learning
Inductive logic programming (ILP) is a form of machine learning based on mathematical logic that generates logic programs from given examples and background knowledge. In this project, we extend the Popper ILP system to support multi-task learning. We implement the state-of-the-art approach and several new strategies to improve search performance. Furthermore, we introduce constraint preservation, a technique that improves the overall performance of all approaches. Constraint preservation allows the system to transfer knowledge across updates to the background knowledge set, reducing the amount of repeated work performed by the system. Additionally, constraint preservation allows us to move from the current state-of-the-art iterative deepening search to a more efficient breadth-first search. Finally, we experiment with curriculum learning techniques and show their potential benefit to the field.
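The sketch below illustrates the idea of constraint preservation described in the abstract; it is a minimal, hypothetical Python outline (Popper itself is written in Python), not the actual Popper API. The callbacks `generate`, `test`, and `derive_constraints` are assumed placeholders for the generate/test/constrain stages of a Popper-style loop.

```python
# Hedged sketch, assuming hypothetical callbacks (not Popper's real API):
# pruning constraints accumulated while solving one task are kept when the
# background knowledge is updated, instead of being discarded and re-derived.

def multi_task_learn(tasks, background_knowledge, generate, test, derive_constraints):
    """Learn one program per task, carrying constraints across BK updates.

    Assumed callback behaviour:
    - generate(bk, constraints): yields candidate hypotheses not ruled out
      by the current constraint set
    - test(hypothesis, task, bk): returns True if the hypothesis solves the task
    - derive_constraints(hypothesis, task): returns constraints that prune
      hypotheses related to a failed candidate
    """
    constraints = set()   # preserved across tasks and BK updates
    solutions = {}

    for task in tasks:
        for hypothesis in generate(background_knowledge, constraints):
            if test(hypothesis, task, background_knowledge):
                solutions[task] = hypothesis
                # A solved task is added to the background knowledge; the
                # constraint set is kept rather than rebuilt from scratch,
                # avoiding repeated work on later tasks.
                background_knowledge = background_knowledge | {hypothesis}
                break
            # Failed hypotheses still yield constraints worth preserving.
            constraints |= derive_constraints(hypothesis, task)

    return solutions, constraints
```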