Discoverability in Satellite Imagery: A Good Sentence is Worth a Thousand Pictures

01/03/2020
by David Noever, et al.

Small satellite constellations provide daily global coverage of the Earth's landmass, but image enrichment relies on automating key tasks like change detection or feature searches. For example, extracting text annotations from raw pixels requires two dependent machine learning models: one to analyze the overhead image and the other to generate a descriptive caption. We evaluate seven models on the largest previously available benchmark for satellite image captions. We extend the labeled image samples five-fold, then augment, correct, and prune the vocabulary to approach a rough min-max (minimum words, maximum description). This outcome compares favorably to previous work with large pre-trained image models, but offers a hundred-fold reduction in model size without sacrificing overall accuracy (when measured with log entropy loss). These smaller models provide new deployment opportunities, particularly when pushed to edge processors, on-board satellites, or distributed ground stations. To quantify a caption's descriptiveness, we introduce a novel multi-class confusion or error matrix that scores both human-labeled test data and never-labeled images that include bounding box detections but lack full-sentence captions. This work suggests future captioning strategies, particularly ones that enrich class coverage beyond land-use applications and rely less on color-centered and adjacency descriptors ("green", "near", "between", etc.). Many modern language transformers offer exploitable world knowledge gleaned from training on vast online corpora. One simple but interesting example is learning the word association between wind and waves, thereby enriching a beach scene with more than the color descriptions that can be read from raw pixels alone.
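
The abstract does not specify the architectures used, but the two-model pipeline it describes (an image-analysis model feeding a caption-generation model, trained against a per-token log entropy loss) can be sketched minimally as below. The layer sizes, vocabulary size, and GRU decoder here are illustrative assumptions, not the paper's actual models.

```python
# Minimal sketch of a two-stage captioning pipeline: an image encoder
# summarizes overhead pixels into a feature vector, and a language
# decoder emits a caption token by token. Sizes are illustrative.
import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    """Small convolutional encoder standing in for the image-analysis model."""
    def __init__(self, feature_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feature_dim)

    def forward(self, images):                 # (B, 3, H, W)
        x = self.conv(images).flatten(1)       # (B, 64)
        return self.fc(x)                      # (B, feature_dim)

class CaptionDecoder(nn.Module):
    """GRU decoder standing in for the caption-generation model."""
    def __init__(self, vocab_size, feature_dim=256, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.init_h = nn.Linear(feature_dim, hidden_dim)
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, features, captions):     # captions: (B, T) token ids
        h0 = self.init_h(features).unsqueeze(0)   # (1, B, hidden)
        emb = self.embed(captions)                # (B, T, hidden)
        hidden, _ = self.gru(emb, h0)             # (B, T, hidden)
        return self.out(hidden)                   # (B, T, vocab) logits

# The "log entropy loss" mentioned above corresponds to the usual
# per-token cross-entropy between predicted logits and the reference caption.
encoder, decoder = ImageEncoder(), CaptionDecoder(vocab_size=1000)
images = torch.randn(2, 3, 224, 224)
captions = torch.randint(0, 1000, (2, 12))
logits = decoder(encoder(images), captions[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 1000), captions[:, 1:].reshape(-1))
```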
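
The multi-class confusion matrix for caption descriptiveness is defined in the full paper; one plausible reading, sketched under assumptions here, is to tally which land-use class terms a generated caption mentions against the class recorded in the image labels (or bounding box annotations for never-captioned images). The class list, captions, and matching rule below are hypothetical.

```python
# Illustrative sketch (not the paper's exact scoring procedure) of a
# multi-class confusion matrix over caption content: for each image,
# find which class term the generated caption mentions and compare it
# with the reference class, then tally reference-vs-predicted counts.
from collections import Counter

CLASSES = ["airport", "beach", "forest", "harbor", "residential"]

def mentioned_class(caption, classes=CLASSES):
    """Return the first class term found in the caption, else None."""
    text = caption.lower()
    for cls in classes:
        if cls in text:
            return cls
    return None

def confusion_matrix(pairs, classes=CLASSES):
    """pairs: iterable of (reference_class, generated_caption)."""
    counts = Counter()
    for ref, caption in pairs:
        pred = mentioned_class(caption, classes) or "none"
        counts[(ref, pred)] += 1
    columns = classes + ["none"]
    rows = [[counts[(r, p)] for p in columns] for r in classes]
    return rows, columns

samples = [
    ("beach",  "waves roll onto a sandy beach near a road"),
    ("forest", "a dense green forest beside a river"),
    ("harbor", "many boats are docked near the shore"),   # misses "harbor"
]
matrix, columns = confusion_matrix(samples)
for row_label, row in zip(CLASSES, matrix):
    print(row_label, dict(zip(columns, row)))
```

Off-diagonal or "none" counts in such a matrix would flag captions that fall back on color and adjacency wording without naming the scene's dominant class.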
