BlackBox Toolkit: Intelligent Assistance to UI Design

04/04/2020
by Vinoth Pandian Sermuga Pandian, et al., Fraunhofer

User Interface (UI) design is a creative process that involves considerable iteration and rework. Designers go through multiple iterations of different prototyping fidelities to create a UI design. In this research, we propose to modify the UI design process by assisting it with artificial intelligence (AI). We propose to enable AI to perform repetitive tasks for the designer while allowing the designer to take command of the creative process. This approach makes the machine act as a black box that intelligently assists designers in creating UI designs. We believe this approach would greatly benefit designers in co-creating design solutions with AI.


1 Introduction

"Just as the Industrial Revolution freed up a lot of humanity from physical drudgery I think AI has the potential to free up humanity from a lot of the mental drudgery."
— Andrew Ng

The user interface (UI) acts as the bridge between a human and a machine. It acts as a translator mediating between two worlds: one disorderly and irrational, but adept at noticing patterns; the other structured and analytical, yet (as of now) inept at pattern-finding. A UI designer is the architect who designs this bridge between human and machine.

The most important job of a UI designer is not to find a balance between these two worlds, but to reduce the mental load of the one driven by emotions and fit it within the confinements and restrictions imposed by the other. Designing such interfaces is strenuous. Among the numerous ways of creating user interfaces that satisfy both worlds, the most commonly used technique is user-centered design.

In user-centered design, users are kept at the heart of the design, and designers attempt to satisfy their needs by analyzing the usage context, user needs, and requirements before starting the design process. During the design process, designers then work through multiple prototype fidelities: from low-fidelity (lo-fi) freehand sketches, to medium-fidelity (me-fi) digital images, and finally to high-fidelity (hi-fi) interactive screens or code. Each fidelity has its strengths and weaknesses. For example, lo-fi is cheap and supports quickly ideating different designs, but it does not do justice to the final look-and-feel of the system. Me-fi contains the most necessary design information and is quicker to create than hi-fi, but creating multiple design variations is harder than with lo-fi. Hi-fi resembles the final product, but the workload of creating it is enormous, and it is hard to create multiple design modifications to test the system.

Several researchers have attempted to solve the issues in this design process. With the advancement of AI, one solution is to automate the whole process: the designer sketches a UI, and the machine analyzes the sketch and generates the hi-fi code. This interesting approach, however, has one major flaw. The whole system acts as a converter for the designer, who enters a sketch and receives corresponding code with no control over tweaking the intricate design details. As a solution, we approach the same problem domain and propose two tools. Our goal is to enable the machine to perform repetitive tasks for the designer while allowing the designer to take command of the creative process. This approach makes the machine act as a black box that intelligently assists designers in UI design. The machine's role in our proposed design process is no different from that of the apprentices of Renaissance art maestros: the apprentice's task was to assist the artist in preparing materials and executing the less critical, rather tedious decorative parts of frescoes or statues. We believe this approach would greatly benefit designers, with human and AI co-creating design solutions.

2 Research Focus

Our focus in this research is to keep designers as the drivers of creativity and let AI assist them in the UI design process. Our primary research question is: "How can we automate the UI design process while allowing UI designers to control the design details?" We address this question by using artificial intelligence to automate the transformation between different fidelities of UI design. In the following sections, we expand on each of our proposed solutions, the challenges we faced during implementation, and the benefits of each solution.

3 MetaMorph

MetaMorph is a UI element detector created with a deep neural network (DNN) object detection model [5]. MetaMorph detects the constituent UI element categories and their positions in a lo-fi sketch using a fine-tuned RetinaNet object detection model trained on a dataset of 125,000 synthetically generated lo-fi sketches.
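To give a flavor of the inference side, the following is a minimal sketch using torchvision's generic RetinaNet API. The checkpoint path, label list, and score threshold are our assumptions for illustration, not MetaMorph's actual implementation.

```python
# Minimal inference sketch for a RetinaNet-based UI element detector.
# Assumptions (not from the paper): torchvision's RetinaNet, a fine-tuned
# checkpoint at "metamorph.pt", and an illustrative label list.
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

UI_CLASSES = ["button", "checkbox", "dropdown"]  # illustrative; 19 categories in the real dataset

model = torchvision.models.detection.retinanet_resnet50_fpn(num_classes=len(UI_CLASSES))
model.load_state_dict(torch.load("metamorph.pt"))  # hypothetical checkpoint
model.eval()

image = convert_image_dtype(read_image("lofi_sketch.png"), torch.float)
with torch.no_grad():
    (pred,) = model([image])  # dict with "boxes", "labels", "scores"

for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
    if score >= 0.5:  # confidence threshold is an assumption
        # label-to-name mapping depends on how the detector was trained
        print(UI_CLASSES[label.item()], [round(v) for v in box.tolist()], round(score.item(), 2))
```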

3.1 Challenge & Solution

The major challenge in creating MetaMorph was the dataset. We required a large-scale lo-fi sketch dataset, which was non-existent. Therefore, we collected UI element sketches from 350 participants using paper and digital questionnaires. We then processed and labeled this data to create the UISketch dataset (https://www.kaggle.com/vinothpandian/uisketch) [4]. This dataset contains 5,917 sketches of 19 UI elements. However, this labeled dataset is only useful for classifying UI elements; a UI element detector needs ground truth for both the identity and the location of each UI element in a lo-fi sketch.
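To make the distinction concrete, a classification label names one cropped sketch, whereas a detection record pairs every element in a full sketch with its bounding box. The field names and box format below are illustrative, not the actual UISketch or Syn schema.

```python
# Classification ground truth: one category per cropped element sketch.
classification_sample = {"image": "sketch_0421.png", "label": "checkbox"}

# Detection ground truth: every element in a full lo-fi sketch plus its
# location. Field names and the (x, y, width, height) pixel box format
# are illustrative assumptions.
detection_sample = {
    "image": "lofi_0421.png",
    "annotations": [
        {"label": "checkbox", "bbox": [32, 48, 24, 24]},
        {"label": "button", "bbox": [30, 96, 120, 40]},
    ],
}
```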

[Figure 3.1: Sample generated synthetic data]

As a solution, we created Syn (https://www.kaggle.com/vinothpandian/syn-dataset), a large-scale synthetic dataset containing 125,000 synthetically generated lo-fi sketches [4]. To create Syn, we randomly chose UI elements from the labeled UISketch dataset and stitched them at random locations with random scaling (Figure 3.1). This random placement of elements in an image is similar to the pre-processing and data augmentation techniques used to improve the detection metrics of object detection models. We used Syn to train the MetaMorph UI element detector.
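A minimal sketch of this stitching idea follows, assuming labeled element crops on disk; the canvas size, scale range, and file layout are illustrative assumptions, and the real Syn generator is not specified at this level of detail in the paper.

```python
# Toy sketch of Syn-style generation: paste randomly chosen, randomly
# scaled UI element sketches at random positions on a blank canvas and
# record their bounding boxes as detection ground truth.
import json
import random
from pathlib import Path
from PIL import Image

ELEMENT_DIR = Path("uisketch/")   # hypothetical layout: uisketch/<label>/<n>.png
CANVAS_SIZE = (768, 1024)         # assumed lo-fi screen size in pixels

def generate_sample(out_path: str, n_elements: int = 8) -> list[dict]:
    canvas = Image.new("RGB", CANVAS_SIZE, "white")
    annotations = []
    crops = list(ELEMENT_DIR.glob("*/*.png"))
    for crop_path in random.sample(crops, n_elements):
        element = Image.open(crop_path).convert("RGB")
        scale = random.uniform(0.5, 1.5)  # random scaling
        w, h = int(element.width * scale), int(element.height * scale)
        if w >= CANVAS_SIZE[0] or h >= CANVAS_SIZE[1]:
            continue  # skip crops that would not fit the canvas
        element = element.resize((w, h))
        x = random.randint(0, CANVAS_SIZE[0] - w)  # random location
        y = random.randint(0, CANVAS_SIZE[1] - h)
        canvas.paste(element, (x, y))
        annotations.append({"label": crop_path.parent.name, "bbox": [x, y, w, h]})
    canvas.save(out_path)
    return annotations

anns = generate_sample("syn_000001.png")
print(json.dumps(anns, indent=2))
```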

We then collected 200 lo-fi sketches to evaluate MetaMorph. The evaluation results indicate that MetaMorph detects UI elements in lo-fi sketches with 63.5% mAP. MetaMorph (https://metamorph.designwitheve.com/) is available as an open web API (http://api.metamorph.designwitheve.com/). We have also open-sourced the UISketch and Syn datasets.

[Figure: Using MetaMorph to detect lo-fi elements and transform them to me-fi in Eve]

3.2 Benefits

By detecting the UI elements present in a lo-fi sketch, the lo-fi prototype can be converted to me-fi or hi-fi without fully automating the process (Figure 3.1). MetaMorph enabled us to create Eve, a prototyping workbench [5] in which a designer can sketch or upload a lo-fi that is converted to me-fi, and later to hi-fi, by means of UI element detection. Eve lets the designer control the styling of the UI in me-fi and progress it to hi-fi Android XML code.
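As an illustration of the last step, a detected element can be mapped to an Android view. The widget mapping table and the absolute-margin positioning below are simplifying assumptions for brevity, not Eve's actual code generator.

```python
# Toy sketch: turning one detected element into Android layout XML.
# The widget mapping and absolute-margin positioning are assumptions.
WIDGETS = {"button": "Button", "checkbox": "CheckBox", "label": "TextView"}

def to_android_xml(label: str, bbox: list[int]) -> str:
    x, y, w, h = bbox  # (x, y, width, height) in pixels, mapped 1:1 to dp here
    return (
        f'<{WIDGETS[label]}\n'
        f'    android:layout_width="{w}dp"\n'
        f'    android:layout_height="{h}dp"\n'
        f'    android:layout_marginStart="{x}dp"\n'
        f'    android:layout_marginTop="{y}dp" />'
    )

print(to_android_xml("button", [30, 96, 120, 40]))
```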

4 Blu

Blu is a tool that generates UI layout information from UI screenshots. With the detected information, it enables the conversion of UI screenshots into blueprints and editable vector graphics [3]. Herring et al. demonstrate the benefits and role of design examples in different aspects of the UI design process [2]. In this research, we expand on this idea: from a UI design example screenshot, we detect UI element categories, positions, grouping information, and layout using deep neural networks.

4.1 Challenge & Solution

We faced two significant challenges in implementing Blu: dataset and layout detection.

Fortunately, as a solution to the dataset issue, Deka et al. collected RICO, a large-scale Android UI screenshot dataset [1]. RICO contains 72k UI screenshots with UI element hierarchies and semantic annotations. However, RICO was annotated using an automated approach, so the annotations are sometimes mislabeled. If RICO is used directly as training data for a DNN, these mistakes propagate and yield inadequate results. Therefore, we had to re-annotate the element and layout information for a subset of RICO to train Blu.

The other challenge in creating Blu is identifying the layout information. UI designers, by education and practice, group and align UI elements while creating a me-fi prototype, mostly based on gestalt laws. This layout information further helps front-end developers create the hi-fi within the constraints placed on them by programming languages and development environments. However, there is no algorithmic way to derive the layout of UI elements from gestalt laws yet. To solve this issue, we are attempting to automate the grouping and alignment process algorithmically based on gestalt laws.
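As a flavor of what such an algorithm might look like, here is a toy sketch that groups elements by the gestalt law of proximity, merging boxes whose edge-to-edge distance falls under a threshold. This is our illustration of the general idea, not Blu's actual grouping algorithm; the threshold value is an assumption.

```python
# Toy sketch: grouping UI element boxes by the gestalt law of proximity.
# Boxes closer than THRESHOLD pixels (edge-to-edge) end up in one group.
THRESHOLD = 16  # assumed proximity threshold in pixels

def gap(a: list[int], b: list[int]) -> float:
    """Edge-to-edge distance between two (x, y, w, h) boxes; 0 if they overlap."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    dx = max(bx - (ax + aw), ax - (bx + bw), 0)
    dy = max(by - (ay + ah), ay - (by + bh), 0)
    return (dx ** 2 + dy ** 2) ** 0.5

def group_by_proximity(boxes: list[list[int]]) -> list[int]:
    """Single-linkage clustering: returns a group index per box."""
    groups = list(range(len(boxes)))  # each box starts in its own group
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if gap(boxes[i], boxes[j]) <= THRESHOLD:
                old, new = groups[j], groups[i]
                groups = [new if g == old else g for g in groups]
    return groups

boxes = [[30, 96, 120, 40], [30, 144, 120, 40], [30, 400, 200, 24]]
print(group_by_proximity(boxes))  # first two merge, third stays separate
```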

[Figure 4.1: UI screen (left), its respective blueprint (middle), and UI layout tree (right) created using Blu]

4.2 Benefits

Blu reduces UI designers' rework by sparing them from starting a design from scratch [3]. It also assists designers in generating blueprints of a UI design and conveying the design information to developers. To demonstrate Blu, we created a web application (https://blu.blackbox-toolkit.com/) that utilizes the annotations from RICO and generates a blueprint (Figure 4.1). This web app helps convey the design and layout information of a UI screen.
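To give a feel for the blueprint step, the following sketch renders labeled outline rectangles from element annotations with Pillow. The styling (white outlines on a blueprint-blue canvas) is our guess at the look, not Blu's actual renderer.

```python
# Toy sketch: drawing a blueprint-style wireframe from element annotations.
from PIL import Image, ImageDraw

annotations = [  # illustrative annotations in (x, y, w, h) pixel format
    {"label": "checkbox", "bbox": [32, 48, 24, 24]},
    {"label": "button", "bbox": [30, 96, 120, 40]},
]

canvas = Image.new("RGB", (768, 1024), (26, 82, 118))  # assumed blueprint blue
draw = ImageDraw.Draw(canvas)
for ann in annotations:
    x, y, w, h = ann["bbox"]
    draw.rectangle([x, y, x + w, y + h], outline="white", width=2)
    draw.text((x + 4, y + 4), ann["label"], fill="white")
canvas.save("blueprint.png")
```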

5 Summary & Future work

In this paper, we presented our research on utilizing AI to assist designers in the UI design process. We introduced the first two tools (MetaMorph and Blu) of our proposed solution, the BlackBox toolkit. This research is an exploration of applying bleeding-edge AI research in the human-computer interaction domain. This ongoing research is in its early phase, with two tools so far. We plan to ideate and implement further tools, such as UI generation and automatic detection of accessibility issues in UIs.

Through this research, we look forward to a future where humans and AI collaborate on creative tasks as they already do on analytical tasks.

References

  • [1] B. Deka, Z. Huang, C. Franzen, J. Hibschman, D. Afergan, Y. Li, J. Nichols, and R. Kumar (2017) Rico: A mobile app dataset for building data-driven design applications. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology (UIST '17), New York, NY, USA, pp. 845–854.
  • [2] S. R. Herring, C. C. Chang, J. Krantzler, and B. P. Bailey (2009) Getting inspired! Understanding how and why examples are used in creative design practice. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '09), New York, NY, USA, pp. 87–96.
  • [3] V. P. S. Pandian, S. Suleri, and M. Jarke (2020) Blu: What GUIs are made of. In Proceedings of the 25th International Conference on Intelligent User Interfaces Companion (IUI '20), New York, NY, USA, pp. 81–82.
  • [4] V. P. S. Pandian, S. Suleri, and M. Jarke (2020) Syn: Synthetic dataset for training UI element detector from lo-fi sketches. In Proceedings of the 25th International Conference on Intelligent User Interfaces Companion (IUI '20), New York, NY, USA, pp. 79–80.
  • [5] S. Suleri, V. P. S. Pandian, S. Shishkovets, and M. Jarke (2019) Eve: A sketch-based software prototyping workbench. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (CHI EA '19), New York, NY, USA.