Understanding Mobile GUI: from Pixel-Words to Screen-Sentences

05/25/2021
by Jingwen Fu, et al.

The ubiquity of mobile phones makes mobile GUI understanding an important task. Most previous works in this domain require human-created metadata of screens (e.g., View Hierarchy) during inference, which is often unavailable or too unreliable for GUI understanding. Inspired by the impressive success of Transformers in NLP tasks, and targeting purely vision-based GUI understanding, we extend the concepts of Words/Sentence to Pixel-Words/Screen-Sentence and propose a mobile GUI understanding architecture: Pixel-Words to Screen-Sentence (PW2SS). In analogy to individual words, we define Pixel-Words as atomic visual components (text and graphic components) that are visually consistent and semantically clear across screenshots of a large variety of design styles. The Pixel-Words extracted from a screenshot are aggregated into a Screen-Sentence by a proposed Screen Transformer that models their relations. Since Pixel-Words are defined as atomic visual components, the ambiguity between their visual appearance and their semantics is dramatically reduced. We are able to use metadata available in training data to auto-generate high-quality annotations for Pixel-Words. Based on the public RICO dataset, we build RICO-PW, a dataset of screenshots with Pixel-Word annotations, which will be released to help address the lack of high-quality training data in this area. We train a detector on this dataset to extract Pixel-Words from screenshots, achieving metadata-free GUI understanding during inference. Experiments show that Pixel-Words can be reliably extracted on RICO-PW and generalize well to P2S-UI, a new dataset we collected ourselves. The effectiveness of PW2SS is further verified on GUI understanding tasks including relation prediction, clickability prediction, screen retrieval, and app type classification.
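To make the two-stage design concrete, here is a minimal sketch, in PyTorch, of how detected Pixel-Words could be aggregated into a Screen-Sentence. The class names, dimensions, and the screen-level summary token are illustrative assumptions based on the abstract, not the authors' released code:

```python
import torch
import torch.nn as nn

class ScreenTransformer(nn.Module):
    """Aggregate detected Pixel-Words into a Screen-Sentence representation."""
    def __init__(self, d_model=256, nhead=8, num_layers=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        # Learnable screen-level summary token, analogous to BERT's [CLS] (an assumption).
        self.screen_token = nn.Parameter(torch.zeros(1, 1, d_model))
        # Embed each component's normalized box (x1, y1, x2, y2) as layout information.
        self.box_proj = nn.Linear(4, d_model)

    def forward(self, pixel_words, boxes):
        # pixel_words: (B, N, d_model) visual features of atomic components
        #              produced by a Pixel-Word detector
        # boxes:       (B, N, 4) normalized bounding boxes of those components
        tokens = pixel_words + self.box_proj(boxes)
        screen = self.screen_token.expand(tokens.size(0), -1, -1)
        out = self.encoder(torch.cat([screen, tokens], dim=1))
        # out[:, 0]  -> Screen-Sentence embedding (screen retrieval, app type classification)
        # out[:, 1:] -> contextualized Pixel-Words (relation / clickability prediction)
        return out[:, 0], out[:, 1:]

# Example: 12 Pixel-Words detected on one screenshot.
model = ScreenTransformer()
feats, boxes = torch.randn(1, 12, 256), torch.rand(1, 12, 4)
screen_emb, word_embs = model(feats, boxes)
```

In this reading, per-component task heads (e.g., clickability) would attach to the contextualized Pixel-Word outputs, while screen-level tasks use the summary embedding; because the inputs are detector features rather than a View Hierarchy, inference stays metadata-free.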


