pix2code: Generating Code from a Graphical User Interface Screenshot

05/22/2017
by Tony Beltramelli, et al.

Transforming a graphical user interface screenshot created by a designer into computer code is a typical task conducted by a developer in order to build customized software, websites, and mobile applications. In this paper, we show that deep learning methods can be leveraged to train a model end-to-end to automatically generate code from a single input image with over 77% accuracy for three different platforms (i.e., iOS, Android, and web-based technologies).
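The end-to-end approach the abstract describes is an encoder-decoder problem: a vision model encodes the screenshot into a feature vector, and a recurrent decoder emits tokens of a GUI description language one at a time. The toy sketch below illustrates that structure only; the vocabulary, layer sizes, and the linear "encoder" and "decoder" stand-ins are illustrative assumptions, not the paper's actual CNN/LSTM architecture, and the weights are untrained random values.

```python
import math
import random

# Illustrative DSL vocabulary (hypothetical, not the paper's actual DSL).
VOCAB = ["<START>", "<END>", "header", "btn-green", "row", "{", "}"]
HIDDEN = 8
random.seed(0)

def rand_matrix(rows, cols, scale=0.1):
    return [[random.uniform(-scale, scale) for _ in range(cols)] for _ in range(rows)]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def tanh_vec(v):
    return [math.tanh(x) for x in v]

def encode_image(pixels):
    # Stand-in for the CNN encoder: project flattened pixels to a feature vector.
    W = rand_matrix(HIDDEN, len(pixels))
    return tanh_vec(matvec(W, pixels))

def decode_tokens(ctx, max_len=20):
    # Greedy RNN-style decoder: starting from the image context vector,
    # repeatedly feed back the previous token and pick the argmax next token.
    Wh = rand_matrix(HIDDEN, HIDDEN)
    Wx = rand_matrix(HIDDEN, len(VOCAB))
    Wo = rand_matrix(len(VOCAB), HIDDEN)
    h = ctx
    tok = VOCAB.index("<START>")
    out = []
    for _ in range(max_len):
        x = [1.0 if i == tok else 0.0 for i in range(len(VOCAB))]  # one-hot input
        h = tanh_vec([a + b for a, b in zip(matvec(Wh, h), matvec(Wx, x))])
        logits = matvec(Wo, h)
        tok = logits.index(max(logits))
        if VOCAB[tok] == "<END>":
            break
        out.append(VOCAB[tok])
    return out

screenshot = [random.random() for _ in range(64)]  # stand-in for pixel data
tokens = decode_tokens(encode_image(screenshot))
print(tokens)
```

Since the weights are random, the emitted token sequence is arbitrary; in the actual system the encoder and decoder are trained jointly on screenshot/DSL pairs so that decoding reproduces the interface's layout code.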


Related research

02/07/2023  CGui Toolchain for Highly Customized GUI Development for Multiple Platforms
Highly customized graphical user interfaces play a major role in today's...

07/05/2020  Automatically Generating Codes from Graphical Screenshots Based on Deep Autocoder
During software front-end development, the work to convert Graphical Use...

11/14/2010  Integration of Flexible Web Based GUI in I-SOAS
It is necessary to improve the concepts of the present web based graphic...

12/31/2019  Learning to Infer User Interface Attributes from Images
We explore a new domain of learning to infer user interface attributes t...

10/20/2019  Sketch2Code: Transformation of Sketches to UI in Real-time Using Deep Neural Network
User Interface (UI) prototyping is a necessary step in the early stages ...

02/07/2018  Machine Learning-Based Prototyping of Graphical User Interfaces for Mobile Apps
It is common practice for developers of user-facing software to transfor...

08/04/2020  GPLAN: Computer-Generated Dimensioned Floorplans for given Adjacencies
In this paper, we present GPLAN, software aimed at constructing dimensio...
