Background subtraction using the factored 3-way restricted Boltzmann machines
In this paper, we propose a method for reconstructing 3D models from continuous sensory input. A robot can draw on extremely large amounts of real-world data through its various sensors, but these sensory inputs are usually noisy and high-dimensional, making it difficult and time consuming for the robot to build a 3D model directly from the raw data. A method is therefore needed that can extract useful information from such inputs. To address this problem, our method builds on the concept of the Object Semantic Hierarchy (OSH). Unlike previous work based on this hierarchical framework, we extract motion information with a Deep Belief Network technique rather than with classical computer vision approaches. We trained on two large sets of 10,000 random-dot images, translated and rotated respectively, and successfully extracted several bases that explain translational and rotational motion. Using these translation and rotation bases, background subtraction becomes possible within the Object Semantic Hierarchy.
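The abstract does not give implementation details, but the factored 3-way (gated) RBM named in the title can be sketched generically as follows. This is a minimal CD-1 trainer for binary image pairs (x, y); all dimensions, the learning rate, and the function names are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative sizes (assumed, not from the paper): D-pixel image pair,
# F factors, K hidden "mapping" units that encode the transformation.
D, F, K = 256, 100, 50

# Factored three-way weights: one low-rank projection per group
# (input image x, transformed image y, hidden units h), tied through F factors.
Wx = 0.01 * rng.standard_normal((D, F))
Wy = 0.01 * rng.standard_normal((D, F))
Wh = 0.01 * rng.standard_normal((K, F))
b_h = np.zeros(K)
b_y = np.zeros(D)

def hidden_probs(x, y):
    """p(h=1 | x, y): mapping units encoding the transformation
    (e.g. translation or rotation) relating the two images."""
    fx = x @ Wx                      # factor responses to the first image
    fy = y @ Wy                      # factor responses to the second image
    return sigmoid(Wh @ (fx * fy) + b_h)

def reconstruct_y(x, h):
    """p(y=1 | x, h): transformed image predicted from the first image
    and the inferred transformation code."""
    fx = x @ Wx
    fh = h @ Wh
    return sigmoid(Wy @ (fx * fh) + b_y)

def cd1_step(x, y, lr=1e-3):
    """One contrastive-divergence (CD-1) update on a binary image pair."""
    global Wx, Wy, Wh, b_h, b_y
    # Positive phase: infer the transformation from the data pair.
    h0 = hidden_probs(x, y)
    # Negative phase: reconstruct y, then re-infer the transformation.
    y1 = reconstruct_y(x, h0)
    h1 = hidden_probs(x, y1)

    # Factor responses needed by the three-way gradient terms.
    fx, fy0, fy1 = x @ Wx, y @ Wy, y1 @ Wy
    fh0, fh1 = h0 @ Wh, h1 @ Wh

    Wx += lr * (np.outer(x, fy0 * fh0) - np.outer(x, fy1 * fh1))
    Wy += lr * (np.outer(y, fx * fh0) - np.outer(y1, fx * fh1))
    Wh += lr * (np.outer(h0, fx * fy0) - np.outer(h1, fx * fy1))
    b_h += lr * (h0 - h1)
    b_y += lr * (y - y1)

# Usage sketch: for each pair of random-dot images related by a translation
# or rotation, call cd1_step(x, y). After training, the columns of Wx and Wy
# act as the learned motion bases, and the hidden units respond to the
# transformation between two frames.
```

In this reading, the hidden mapping units summarize the motion between consecutive frames; regions whose motion matches the inferred background transformation can then be separated from independently moving objects, consistent with the background-subtraction role described in the abstract.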