OCNet: Object Context Network for Scene Parsing
Context is essential for various computer vision tasks. State-of-the-art scene parsing methods have exploited the effectiveness of context defined over the whole image. Such context carries a mixture of objects belonging to different categories. Motivated by the fact that the label of each pixel P is defined as the category of the object it belongs to, we propose the pixel-wise Object Context, which consists of the objects belonging to the same category as pixel P. The representation of pixel P's object context is the aggregation of the features of all pixels sharing the same category as P. Since the ground-truth object that pixel P belongs to is unavailable, we employ self-attention to approximate the object context by learning a pixel-wise similarity map. We further propose the Pyramid Object Context and the Atrous Spatial Pyramid Object Context to capture context at multiple scales. Based on the object context, we introduce OCNet and show that it achieves state-of-the-art performance on both the Cityscapes and ADE20K benchmarks. The code of OCNet will be made available at https://github.com/PkuRainBow/OCNet.
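As a rough illustration of the self-attention formulation the abstract describes, the PyTorch sketch below computes a pixel-wise similarity map and uses it to aggregate features into an object context representation. This is a minimal sketch of the general idea, not the released OCNet implementation; the module and parameter names (ObjectContext, key_channels, f_query, etc.) are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ObjectContext(nn.Module):
    """Minimal sketch: each pixel's object context is a similarity-weighted
    aggregation over all pixel features, approximating aggregation over
    pixels of the same category (hypothetical module, not the paper's code)."""

    def __init__(self, in_channels: int, key_channels: int):
        super().__init__()
        # 1x1 convolutions project features into query/key/value spaces.
        self.f_query = nn.Conv2d(in_channels, key_channels, kernel_size=1)
        self.f_key = nn.Conv2d(in_channels, key_channels, kernel_size=1)
        self.f_value = nn.Conv2d(in_channels, in_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        # Flatten spatial dimensions so every pixel becomes a token.
        q = self.f_query(x).flatten(2).transpose(1, 2)  # (n, h*w, key_channels)
        k = self.f_key(x).flatten(2)                    # (n, key_channels, h*w)
        v = self.f_value(x).flatten(2).transpose(1, 2)  # (n, h*w, c)
        # Pixel-wise similarity map: entry (i, j) estimates how likely
        # pixels i and j belong to the same object category.
        sim = torch.softmax(q @ k, dim=-1)              # (n, h*w, h*w)
        # Aggregate features of similar pixels to form the object context.
        context = (sim @ v).transpose(1, 2).reshape(n, c, h, w)
        # Fuse the context with the original features.
        return torch.cat([x, context], dim=1)


# Usage example on a dummy feature map.
feats = torch.randn(1, 512, 32, 32)
out = ObjectContext(in_channels=512, key_channels=256)(feats)
print(out.shape)  # torch.Size([1, 1024, 32, 32])
```

The pyramid variants mentioned in the abstract would, under this reading, apply such a module over feature maps at several spatial scales (or dilation rates) and combine the results, analogously to PSPNet's pyramid pooling and DeepLab's ASPP.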