Factorial Convolutional Neural Networks
In recent years, GoogLeNet has garnered substantial attention as one of the base convolutional neural networks (CNNs) for extracting visual features for object detection. However, it suffers from contamination of its deep features when it concatenates elements with different properties. Moreover, since GoogLeNet is not an entirely lightweight CNN, it still incurs considerable execution overhead when applied to resource-constrained application domains. Therefore, a new CNN, FactorNet, has been proposed to overcome these functional challenges. The FactorNet CNN is composed of multiple independent sub-CNNs that encode different aspects of the deep visual features, and it has far lower execution overhead in terms of weight parameters and floating-point operations. Incorporating FactorNet into the Faster R-CNN framework showed that FactorNet achieves at least 5% better accuracy and additional speedup over GoogLeNet on the KITTI object detection benchmark dataset in a real-time object detection system.
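The abstract does not specify the internal layout of the sub-CNNs, so the following is only a minimal sketch of the general idea of a backbone factorized into independent branches whose features are computed separately rather than by one monolithic network. It is written in PyTorch; all layer sizes, the branch count, and the channel-wise fusion step are illustrative assumptions, not the authors' design.

# Minimal sketch (assumed architecture, not the authors' exact design) of a
# backbone factorized into several independent lightweight sub-CNNs.
import torch
import torch.nn as nn


class SubCNN(nn.Module):
    """One independent lightweight branch encoding its own view of the input."""

    def __init__(self, in_channels: int = 3, out_channels: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, out_channels, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.features(x)


class FactorizedBackbone(nn.Module):
    """Runs several independent sub-CNNs and fuses their feature maps.

    The number of branches and the channel-wise concatenation used for fusion
    are illustrative assumptions.
    """

    def __init__(self, num_branches: int = 4, branch_channels: int = 64):
        super().__init__()
        self.branches = nn.ModuleList(
            SubCNN(out_channels=branch_channels) for _ in range(num_branches)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each branch sees the same input but learns its own filters,
        # so its features are encoded independently of the other branches.
        return torch.cat([branch(x) for branch in self.branches], dim=1)


if __name__ == "__main__":
    backbone = FactorizedBackbone()
    feats = backbone(torch.randn(1, 3, 224, 224))
    print(feats.shape)  # torch.Size([1, 256, 56, 56])

In a detection pipeline such as Faster R-CNN, a backbone of this kind would replace the feature extractor that feeds the region proposal network; keeping each branch small is what would reduce the parameter count and floating-point operations relative to a larger monolithic backbone.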