Learning from Rules Performs as Implicit Regularization

Publication Date: June 9, 2019

Event: Thirty-sixth International Conference on Machine Learning (ICML 2019)

Reference: pp. 1-5, 2019

Authors: Hossein Hosseini, University of Washington, NEC Laboratories America, Inc.; Ramin Moslemi, NEC Laboratories America, Inc.; Ali Hooshmand, NEC Laboratories America, Inc.; Ratnesh Sharma, NEC Laboratories America, Inc.

Abstract: In this paper, we study the generalization performance of deep neural networks in learning problems where the given task is governed by a set of rules. We consider two settings: supervised learning and rule-based learning. In supervised learning, the network is trained with pairs of inputs and the corresponding solutions that satisfy the problem constraints. In rule-based learning, the constraints are encoded into a neural network module that is applied to the output of the solver network. In this approach, instead of being trained with actual solutions of the problem, the model is trained to explicitly satisfy the constraints. We perform experiments on two problems: solving a system of nonlinear equations and solving Sudoku puzzles. Our experimental results show that, compared to the supervised approach, rule-based learning results in higher training error but significantly lower validation error, especially when the training data is small, thus acting as an implicit regularization.
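The contrast between the two settings can be sketched with a toy loss comparison. This is a minimal illustration, not the paper's implementation: the specific equation system and the candidate output below are hypothetical choices, and the paper's actual networks and training procedure are not described in the abstract.

```python
import numpy as np

# Hypothetical system of nonlinear equations f(x) = 0, chosen for illustration;
# x = (1, 1) is an exact solution.
def f(x):
    return np.array([x[0]**2 + x[1] - 2.0,
                     x[0] + x[1]**2 - 2.0])

def supervised_loss(pred, solution):
    # Supervised setting: compare the solver network's output
    # against a known ground-truth solution.
    return np.mean((pred - solution) ** 2)

def rule_based_loss(pred):
    # Rule-based setting: penalize violation of the constraints directly,
    # without ever observing an actual solution.
    return np.mean(f(pred) ** 2)

solution = np.array([1.0, 1.0])    # satisfies f(solution) = 0
candidate = np.array([1.1, 0.9])   # a hypothetical solver-network output

print(supervised_loss(candidate, solution))  # distance to a known solution
print(rule_based_loss(candidate))            # residual of the constraints
```

In the rule-based setting the loss is zero for *any* output satisfying the constraints, so no labeled solutions are needed; the constraint module plays the role that the ground-truth labels play in the supervised setting.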

Publication Link: