Deriving Visual Semantics from Spatial Context: An Adaptation of LSA and Word2Vec to generate Object and Scene Embeddings from Images

09/20/2020
by Matthias S. Treder et al.

Embeddings are an important tool for representing word meaning. Their effectiveness rests on the distributional hypothesis: words that occur in the same contexts carry similar semantic information. Here, we adapt this approach to index visual semantics in images of scenes. To this end, we formulate a distributional hypothesis for objects and scenes: scenes that contain the same objects (object context) are semantically related, and, similarly, objects that appear in the same spatial context (within a scene or a subregion of a scene) are semantically related. We develop two approaches for learning object and scene embeddings from annotated images. In the first approach, we adapt LSA and Word2Vec's Skip-gram and CBOW models to generate two sets of embeddings from object co-occurrences in whole images, one for objects and one for scenes. The representational space spanned by these embeddings suggests that the distributional hypothesis holds for images. In an initial application of this approach, we show that our image-based embeddings improve scene classification models such as ResNet18 and VGG-11 (3.72% improvement in Top-5 accuracy, 4.56% improvement in Top-1 accuracy). In the second approach, rather than analyzing whole images of scenes, we focus on co-occurrences of objects within subregions of an image. We show that this method yields a sensible hierarchical decomposition of a scene into collections of semantically related objects. Overall, these results suggest that object and scene embeddings derived from object co-occurrences and spatial context are semantically meaningful representations and yield computational improvements for downstream applications such as scene classification.
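The LSA-style variant of the first approach can be illustrated with a minimal sketch: treat each annotated image as a "document" and its object labels as "terms", build a scene-by-object count matrix, and factor it with a truncated SVD so that objects sharing scene context receive nearby embeddings. The toy scenes and object labels below are illustrative, not the paper's dataset, and the details (raw counts, no weighting, k = 2) are simplifying assumptions rather than the authors' exact pipeline.

```python
import numpy as np

# Toy annotated scenes: each image is represented by the object labels it
# contains (illustrative labels, not the paper's data).
scenes = [
    ["bed", "lamp", "pillow"],
    ["bed", "pillow", "nightstand"],
    ["bed", "lamp", "nightstand"],
    ["stove", "sink", "fridge"],
    ["stove", "fridge", "counter"],
]

# Vocabulary of object labels.
objects = sorted({o for s in scenes for o in s})
idx = {o: i for i, o in enumerate(objects)}

# Scene-by-object count matrix, analogous to LSA's document-term matrix.
X = np.zeros((len(scenes), len(objects)))
for r, s in enumerate(scenes):
    for o in s:
        X[r, idx[o]] += 1

# Truncated SVD: keep the top k latent dimensions.
k = 2
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scene_emb = U[:, :k] * S[:k]    # one embedding per scene
object_emb = Vt[:k].T * S[:k]   # one embedding per object

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Objects that share scene context should end up closer together than
# objects that never co-occur.
sim_related = cosine(object_emb[idx["bed"]], object_emb[idx["pillow"]])
sim_unrelated = cosine(object_emb[idx["bed"]], object_emb[idx["stove"]])
```

The same factorization yields both embedding sets at once: rows of `scene_emb` place scenes with similar object inventories near each other, mirroring the paper's point that one co-occurrence matrix induces semantics for objects and scenes alike.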
