Sparse Code Formation with Linear Inhibition
Sparse code formation in the primary visual cortex (V1) has inspired many state-of-the-art visual recognition systems. To stimulate this behavior, networks are typically trained under a mathematical constraint of sparsity or selectivity. In this paper, we explore a different approach that uses lateral interconnections in feature-learning networks. However, instead of adding direct lateral interconnections among neurons, we introduce an inhibitory layer placed immediately after a standard encoding layer. This design avoids the computational cost and complexity of lateral networks while preserving the crucial objective of sparse code formation. To demonstrate the idea, we use a sparse autoencoder as the encoding layer and apply the inhibitory layer on top of it. Early experiments on visual recognition show relative improvements over the traditional approach on the CIFAR-10 dataset. Moreover, the simple installation and training process based on a Hebbian rule allow the inhibitory layer to be integrated into existing networks, enabling further analysis in the future.
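The abstract does not specify the implementation, but the core idea of an inhibitory layer placed after an encoder and trained with a Hebbian-style rule can be sketched roughly as below. The dimensions, learning rate, target value, and the specific anti-Hebbian (Földiák-style) update are illustrative assumptions, not the authors' method; the random `codes` array stands in for the sparse autoencoder's activations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 64 encoder features.
n_features = 64
n_samples = 500

# Stand-in for encoder output; in the paper this would come from
# the sparse autoencoder's encoding layer (non-negative activations).
codes = np.abs(rng.normal(size=(n_samples, n_features)))

# Inhibitory layer: square weight matrix W with zero diagonal that
# subtracts lateral inhibition from each unit's activation.
W = np.zeros((n_features, n_features))
lr = 0.01       # assumed learning rate
target = 0.1    # assumed target co-activation level

for x in codes:
    # Inhibited, rectified response of the layer.
    y = np.maximum(x - W @ x, 0.0)
    # Anti-Hebbian update: strengthen inhibition between units that
    # fire together, decay it toward the target level otherwise.
    W += lr * (np.outer(y, y) - target)
    np.fill_diagonal(W, 0.0)
    W = np.maximum(W, 0.0)  # keep weights purely inhibitory

# Inhibition can only reduce (never raise) the mean activation here,
# since W and the codes are both non-negative.
before = codes.mean()
after = np.maximum(codes - codes @ W.T, 0.0).mean()
print("mean activation before:", before)
print("mean activation after inhibition:", after)
```

Because the layer is a simple post-hoc transform of the encoder's output, it can be bolted onto an existing trained network without changing the encoder's weights, which matches the "simple installation" claim in the abstract.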