AdaM: Adapting Multi-User Interfaces for Collaborative Environments in Real-Time

by Seonwook Park et al.

Developing cross-device multi-user interfaces (UIs) is a challenging problem. There are numerous ways in which content and interactivity can be distributed. However, good solutions must consider multiple users, their roles, their preferences and access rights, as well as device capabilities. Manual and rule-based solutions are tedious to create, do not scale to larger problems, and do not adapt to dynamic changes, such as users leaving or joining an activity. In this paper, we cast the problem of UI distribution as an assignment problem and propose to solve it using combinatorial optimization. We present a mixed integer programming formulation which allows real-time applications in dynamically changing collaborative settings. It optimizes the allocation of UI elements based on device capabilities, user roles, preferences, and access rights. We present a proof-of-concept designer-in-the-loop tool, allowing for quick solution exploration. Finally, we compare our approach to traditional paper prototyping in a lab study.





1 Introduction

Many users now carry not one but several computing devices, such as laptops, smartphones, or wearable devices. In addition, our environments are often populated with public and semi-public displays. In collaborative settings, such as at work or in education, many application scenarios could benefit from UIs that are distributed across available devices and potentially also across multiple users participating in a joint activity. However, traditional interfaces are designed for a single device and are neither aware of, nor benefit from, multiple available input and output channels. This may be ascribed, in part, to the significant complexity of designing and implementing such cross-device interfaces and to the combinatorial complexity of deciding which UI element should be placed onto which of the users' devices.

Our goal is to provide computational support for the task of distributing elements in a rapid and controllable way among devices in a collaborative setting. Consider a concert, exhibition, birthday party, or a work meeting: depending on their device capabilities, co-present users would have parts of an interface displayed on their devices. Instead of device owners manually deciding assignment (who gets what), elements are automatically distributed such that the most important elements are always available while taking into account personal preferences and constraints including privacy. Such collaborative settings are inherently dynamic with users and devices appearing and disappearing at various points in time. This requires a real-time approach to accommodate dynamic device configurations, user preferences and user roles.

Prior work on cross-device interfaces has proposed methods for synchronizing elements across devices [8, 22, 33, 34, 52] or distributing elements of a workspace over multiple displays [42, 48]. Panelrama [52] uses a suitability measure for associating (single-user) UI panels with devices via an integer programming formulation. Frosini and Paternò [8] present a conceptual framework which considers multi-user roles but does not provide methods to solve the assignment problem. Prior to this paper, no automatic solutions existed for element distribution in collaborative settings that consider critical constraints such as access rights, privacy, and roles, and their dynamic evolution over time.

We propose an optimization-based approach that automatically distributes elements to available devices by solving a many-to-many assignment problem, constraining the optimization by available screen real-estate. Given a list of UI elements, the available devices, and user and device descriptions, it distributes the UI elements based on an objective that maximizes the usefulness of an element on a device while simultaneously maximizing completeness of the UI from a user's perspective (i.e., ensuring that important elements are present for each user). More precisely, our method (1) takes role requirements and (2) user preferences into account when distributing elements, (3) adapts to changing user roles or preferences depending on a given task, and (4) adapts the DUI in real-time based on the presence of users and devices in collaborative scenarios. Our formulation can be solved quickly, easily scaling up to thousands of users and devices. The benefit to users and designers is the new type of control provided: instead of instructing how elements should be distributed (a heuristic or rule-based approach), or completing the distribution manually, developers and designers can express the qualities of “good” distributions. As illustrated in the accompanying figure, this control offers substantial promise for the creation of applications that effectively take advantage of the wide range of capabilities in cross-device ecosystems for collaborative multi-user interfaces.

We demonstrate the utility of our approach with a step-by-step walkthrough of how the system adapts to various roles and preferences in a company meeting setting, and demonstrate real-time adaptiveness in a fully implemented co-located media sharing application. Furthermore, we suggest how the algorithm could scale to problem sizes that were previously impractical. In addition, we evaluate our approach in a user study and compare it to traditional paper prototyping.

2 Related Work

Cross-device or “Distributed User Interfaces” (DUIs) offer appealing features including more pixels [49], new forms of engagement at varying scales [50], reduction in system complexity by splitting and sharing functionality [4], and targeting interactions across and between devices (e.g., [35]). This vision has given rise to sustained research interest within the HCI community, spanning taxonomies [50, 32], interaction techniques, and middleware [22]. We briefly discuss related work across several related areas, from DUIs to UI optimization.

2.1 Cross Device User Interfaces

People now use multiple devices with displays (e.g., laptops, phones, tablets), often at the same time. Commercial software solutions exist for mirroring (e.g., AirPlay), I/O targeting (e.g., Microsoft Continuum), coordinating (e.g., Apple Continuity) or stitching multiple displays (e.g., Equalizer [5]) into a single canvas. However, design and development for such settings is entirely manual and requires the developer to consider the myriad set of inputs, outputs and device configurations to achieve even rudimentary cross-device experiences. When designing for multiple users this problem is further exacerbated due to access rights, privacy and user preference concerns.

Existing cross-device research has highlighted challenges in adapting DUIs for collaborative environments in real-time, including problems in testing multi-device experiences [4], user interface widget adoption [12], functional UI coordination  [47], component role allocation [51], spatial awareness [43] and changes in related parallel use [20]. Addressing these challenges has given rise to the approach taken here.

Our work is concerned with computational support for the design of distributed or cross-device UIs [7, 32] in the sense of a crossmedia service where the functionality of a single application is decomposed and shared across devices and users. We propose an algorithmic approach to functionality assignment according to device strengths and user preferences, extending prior rule-based approaches [19, 33]. Functionality distribution to different devices is a crucial element of DUI design since a balanced assignment of interactive components can reduce the complexity of the original system [4].

Rule-based approaches [19, 33] provide insights into cross-device interaction patterns in the real-world but do not scale to many devices or multi-user scenarios. We believe that our bottom-up approach of modeling DUI usability in multi-user scenarios opens up unexplored application areas.

2.2 Toolkits and Middleware

Existing toolkits have explored cross device interaction with combinations of mobile devices [14, 38, 42, 48, 49], mobile/desktop devices [16, 29, 33, 35], mobile/display wall devices [2, 22] and wearables [3, 11, 18]. Alternative approaches have focused on the development of conceptual frameworks [24, 40]. Within this work, common applications which support multiple people, with cross device interactions, include authoring [22], web browsing [10, 15, 27] and collaborative visualizations [2].

Prior work has often focused on providing support for keeping application and UI states synced across devices using conventional software development practices [22]. Our work builds on these capabilities to go beyond the state-of-the art in the automatic distribution of UI elements to users and devices.

2.3 Mobile Co-located Interaction and Collaboration

DUIs have emerged as a platform of interest for supporting mobile co-located interaction [26]. Existing research has investigated systems that allow groups of co-located people to collaborate around a digital whiteboard with mobile devices (e.g., PDAs) [17, 28, 31, 45]. With mobile devices alone, research has explored co-located collaboration for shopping [46], video [46], ideation [41] and content sharing [25, 28]. Our work explicitly targets heterogeneous settings where devices with different capabilities are used to create a single collaborative system. By considering each user separately, we also allow for better distribution of functionality across homogeneous devices, as often occurs with mobile phones in mobile co-located interactions. Additionally, we address the dynamicity of mobile interactions in terms of available users and devices by providing a real-time formulation.

2.4 Computational UI Generation and Retargeting

Modern optimization methods have been proposed to automate UI generation and retargeting. SUPPLE [9] uses decision-theoretic optimization to automatically generate UIs adapted to a person’s abilities and computational solutions have been shown for example in PUC [36], automatically creating control interfaces for complex appliances. Smart Templates [37] uses parameterized templates to specify when to automatically apply standard design conventions. One important observation that we build on in this work is that many GUI design problems such as layout of menus, web pages, and keyboards can be formulated as an assignment problem [21, 39].

Model-based approaches for UI retargeting have proposed formal abstractions of user interfaces (UIDLs) to describe the interface and its properties, operation logic, and relationships to other parts of the system [6] which can then be used to compile interfaces in different languages and to port them across platforms. Data-driven approaches have been explored by Kulkarni and Klemmer [23] to automatically transform desktop-optimized pages to other devices. GUMMY [30] retargets UIs from one platform to another by adapting and combining features of the original UI.

To the best of our knowledge, no prior work addresses the computational assignment of UI elements to devices in multi-user settings that would consider critical constraints such as access rights, privacy, and roles and their dynamic evolution over time. AdaM provides a real-time capable optimization formulation and implementation using mixed integer linear programming.

3 Concepts

The type of scenarios we consider in this work are co-located multi-user events, such as a meeting, party, or lecture. Any number of people with various devices and roles can be involved. An interactive application is assumed to consist of elements of different types; the participants show varying interest in them, but not all devices can show all elements. We further assume that this setup and the need for interactivity can change dynamically as time progresses. In order to cast such scenarios as a combinatorial optimization problem, we need to introduce and define a few central concepts. These concepts are the basis for the objectives and constraints of the assignment problem formulation we develop in the next section.

Element Importance

Depending on the preferences of users present, the display of some elements should be prioritized. For example, in the lecture scenario the slides need to be presented on a public display, whereas a chat channel for the audience may only be displayed if auxiliary, personal devices (e.g., phones) are available. This importance value may be defined by the application developer or user. Element importance is one of the aspects an optimization scheme needs to consider and trade-off with other, potentially contradictory, preferences.

Device Access

In collaborative settings we assume that personal devices as well as shared devices must be considered. An example of a shared device is a large screen in a conference room, whereas a private device can range from smart wearables to laptop computers. In order to apply a user’s preferences through the importance metric, we must know which devices are available to a user. Thus, we can describe the user’s access to a device by its availability to the user, defined either in terms of ownership or physical proximity.

Element Permission

We integrate user roles into our optimization scheme by considering that some elements should not be made available to specific users. For example, while a disc jockey may require access to the audio mixer UI, the light technician should focus on stage lighting and the stage crew should not have access to either. To effectively represent such user roles in the final DUI, one would have to make sure that users are authenticated properly. We assume that mechanisms for this exist.

Device Characteristics

An element which requires frequent and quick text input should be assigned to a device with either a physical or soft keyboard (e.g., laptop, phone) rather than to a display only (e.g., TV). Similarly, visually rich elements such as presentation slides or a video should not be placed on small-screen devices (e.g., smartwatch). Similar to Panelrama [52], we consider visual quality, pointing and text input mechanisms as device characteristics.

Element Requirements

Complementary to device requirements we also define element requirements. Not all elements can be shown on every terminal. An element such as a drawing canvas may require precise pointing input as well as high visual fidelity, where assignment to a touchscreen tablet would be preferred over a small phone.
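The device-characteristic and element-requirement concepts can be made concrete with a small sketch. The vector dimensions follow the paper (visual quality, text input, touch pointing, mouse pointing), but the function name and all concrete values below are our own illustration, not taken from the paper's implementation.

```python
# Illustrative sketch: compatibility between an element and a device as
# the dot product of the device's characteristics vector and the
# element's requirements vector. Values below are invented examples.

def compatibility(device_caps, element_reqs):
    """Dot product of two 4-element vectors with entries in [0, 1]."""
    return sum(c * r for c, r in zip(device_caps, element_reqs))

# (visual quality, text input, touch pointing, mouse pointing)
smartwatch = (0.1, 0.0, 0.6, 0.0)
laptop = (0.8, 1.0, 0.0, 1.0)

drawing_canvas = (0.9, 0.0, 0.8, 0.5)   # needs fidelity and pointing
chat_input = (0.2, 1.0, 0.3, 0.2)       # mainly needs text entry

# The laptop is the better host for a text-heavy element:
assert compatibility(laptop, chat_input) > compatibility(smartwatch, chat_input)
```

A zero dot product marks an element-device pair as entirely incompatible, which the zero constraints in the next section exploit.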

4 Optimization Formulation

With the above concepts in place, we develop a formalization as a mixed integer program that can be solved with state-of-the-art MILP solvers such as Gurobi [13]. These solvers automatically search for solutions that maximize the objective and satisfy the defined constraints while assigning integer values to the integer decision variables, and they give formal bounds on solution quality with respect to the objective function. In the following, we define the overall objective. The subsections define each of its terms in detail.

4.1 Main Objective and Decisions

To begin, we identify that device access, element permission, and element privacy are concepts which constrain our problem. On the other hand, element importance, device characteristics, and element requirements directly address our objective of building a usable DUI. We thus propose a conceptually simple objective with two sub-objectives: quality ($q$) and completeness ($c$), which we aim to maximize in our final assignments. Here $q$ measures whether the correct elements are assigned to a user and device and $c$ measures whether a user receives all necessary elements. We formulate our objective function as a weighted sum of the normalized terms ($\hat{q}$, $\hat{c}$):

$$\max_{x,\, s,\, m,\, g} \;\; w_q \, \hat{q} + w_c \, \hat{c} \tag{1}$$

where $w_q + w_c = 1$. We empirically set $w_q = w_c = 0.5$. Elements $e \in E$, devices $d \in D$, and users $u \in U$ are considered.
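The weighted-sum objective can be sketched in a few lines. The normalization-by-maximum scheme and the equal default weights shown here are our assumptions for illustration; the function name is hypothetical.

```python
# Minimal sketch of the top-level objective: a weighted sum of the
# quality and completeness terms, each normalized by its attainable
# maximum so the two are comparable. Weights must sum to one.

def objective(q, q_max, c, c_max, w_q=0.5, w_c=0.5):
    """Return w_q * q_hat + w_c * c_hat, with w_q + w_c = 1."""
    assert abs(w_q + w_c - 1.0) < 1e-9
    q_hat = q / q_max if q_max > 0 else 0.0
    c_hat = c / c_max if c_max > 0 else 0.0
    return w_q * q_hat + w_c * c_hat

# A solution at 80% of maximum quality with full completeness:
assert abs(objective(q=8.0, q_max=10.0, c=2.0, c_max=2.0) - 0.9) < 1e-9
```

Shifting the weights trades off per-device suitability against per-user coverage, which is the design lever the formulation exposes.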

In this study, we only consider the assignment of elements to devices. The layout of elements on a device is assumed to be handled by responsive design practices common in web design. In our Demo Application section we demonstrate how a thin layer of UI code is sufficient to create fully functional user-facing applications.

At the core of our method lies the decision of how to assign element $e$ to device $d$, defined as

$$x_{ed} = \begin{cases} 1 & \text{if element } e \text{ is assigned to device } d \\ 0 & \text{otherwise.} \end{cases} \tag{2}$$
All other decision variables pertaining to secondary optimization criteria such as element size and element count (per user) and input parameters are defined in Table 1.

Variables:
  $x_{ed} \in \{0, 1\}$     Assignment of element $e$ to device $d$
  $s_{ed} \ge 0$            Area of element $e$ on device $d$
  $m_{ue} \in \{0, 1\}$     Whether element $e$ is made available to user $u$
  $g \in [0, 1]$            Minimum element coverage over all users

Parameters:
  $a_{ud} \in \{0, 1\}$     Whether user $u$ has access to device $d$
  $p_{ue} \in \{0, 1\}$     Whether user $u$ is given permission to interact with element $e$
  $i_{ue} \in [0, 1]$       Importance of element $e$ to user $u$
  $\mathbf{c}_d$            Device characteristics vector
  $\mathbf{r}_e$            Element requirements vector
  $s^{\min}_e$              Minimum size of element $e$ in pixels
  $s^{\max}_e$              Maximum size of element $e$ in pixels
  $A_d$                     Size of screen on device $d$ in pixels

Table 1: Description and ranges of variables and input parameters.
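The inputs of Table 1 map naturally onto a small data structure. The container below is one possible in-memory representation; the class and field names are our own convention, not prescribed by the paper.

```python
# Illustrative container for the optimizer's input parameters (Table 1).
from dataclasses import dataclass

@dataclass
class ProblemInstance:
    elements: list      # element identifiers
    devices: list       # device identifiers
    users: list         # user identifiers
    access: dict        # (user, device) -> {0, 1}, the a_ud parameter
    permission: dict    # (user, element) -> {0, 1}, the p_ue parameter
    importance: dict    # (user, element) -> [0, 1], the i_ue parameter
    device_caps: dict   # device -> 4-vector, c_d
    element_reqs: dict  # element -> 4-vector, r_e
    min_size: dict      # element -> minimum area in pixels
    max_size: dict      # element -> maximum area in pixels
    screen_area: dict   # device -> display area in pixels

    def accessors(self, device):
        """Users with access to the given device (a_ud = 1)."""
        return [u for u in self.users if self.access.get((u, device), 0)]
```

Keeping access, permission, and importance as sparse dictionaries matches the dynamic setting: users and devices joining or leaving only add or remove keys.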

4.2 Quality Term ($q$)

The quality of the final assignment relies on the suitability of assigning an element $e$ to device $d$ in terms of device characteristics $\mathbf{c}_d$ and element requirements $\mathbf{r}_e$. $\mathbf{c}_d$ and $\mathbf{r}_e$ are 4-element vectors with values in range $[0, 1]$. The values represent visual quality and availability of text input, touch pointing, and mouse pointing. This is similar to the approach in [52].

In addition, we take users' preferences into account through the importance parameter $i_{ue}$ and consider the area that an element would occupy on a device. As an element cannot take up more space than is available on the display of a device, this consideration proves to be crucial for ensuring that not all elements are assigned to every device. For each device, a mean importance is calculated over all users who have access to this device. By taking the mean, we aim to balance the preferences of multiple users. We also aim to maximize the size of more important and compatible elements. That is, elements which are capable of being larger and benefit from additional screen real-estate (e.g., HD video) should be allowed to do so. Hence, we assume that a larger version of an element exhibits better visual quality than a smaller version.

The final quality term is then defined as:

$$q = \sum_{e \in E} \sum_{d \in D} U_{ed} \, s_{ed} \tag{3}$$

$U_{ed}$ are combined input parameters describing device-element compatibility ($\mathbf{c}_d \cdot \mathbf{r}_e$) and the mean importance of element $e$ over the users with access to device $d$ ($i_{ue}$ averaged over users with $a_{ud} = 1$).
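The quality computation can be sketched directly from its prose definition: each assigned element contributes its area, weighted by compatibility and by the mean importance over users with access to the hosting device. Function and parameter names below are our own illustration.

```python
# Sketch of the quality term for a fixed assignment.

def mean_importance(e, d, users, access, importance):
    """Average importance of element e over users with access to device d."""
    with_access = [u for u in users if access.get((u, d), 0)]
    if not with_access:
        return 0.0
    return sum(importance.get((u, e), 0.0) for u in with_access) / len(with_access)

def quality(areas, users, access, importance, caps, reqs):
    """areas: (element, device) -> assigned area (0 if unassigned)."""
    total = 0.0
    for (e, d), area in areas.items():
        compat = sum(c * r for c, r in zip(caps[d], reqs[e]))
        total += compat * mean_importance(e, d, users, access, importance) * area
    return total
```

Because the area factor multiplies the whole term, the optimizer is rewarded for growing important, compatible elements rather than merely assigning them.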

4.3 Completeness Term ($c$)

When assigning elements across devices, we must furthermore consider and ensure the usefulness of the resulting UI from each user's perspective. With the element permission parameter $p_{ue}$, we define a subset of elements which a user should be able to interact with. To ensure that the DUI is complete in the sense that all necessary functionality can be accessed by a given user in a collaborative multi-user scenario, we explicitly model the completeness of the UI per user.

Intuitively, the completeness of the DUI for a user $u$ can be defined by:

$$c_u = \frac{\sum_{e \in E} p_{ue} \, m_{ue}}{\sum_{e \in E} p_{ue}}$$

The completeness variable $c_u$ describes the proportion of UI elements that a user has access to. A user with $c_u = 1$ would have access to all elements which she requires for her role, that is, all elements with $p_{ue} = 1$.
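The per-user completeness ratio can be sketched as follows; the function name and argument layout are our own, and the assignment is represented as a plain set of element-device pairs.

```python
# Sketch of per-user completeness: the fraction of a user's permitted
# elements that reach the user on at least one accessible device.

def completeness(user, elements, devices, assigned, access, permission):
    """assigned: set of (element, device) pairs chosen by the optimizer."""
    permitted = [e for e in elements if permission.get((user, e), 0)]
    if not permitted:
        return 1.0  # nothing required by this role, trivially complete
    reachable = sum(
        1 for e in permitted
        if any((e, d) in assigned and access.get((user, d), 0) for d in devices)
    )
    return reachable / len(permitted)
```

Note that an element assigned only to devices the user cannot access does not count toward her completeness, which is exactly what the availability variable $m_{ue}$ encodes in the program.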

The decision variable $m_{ue}$ represents whether an element $e$ has been made available by assignment to a user $u$, taking into account the devices to which the user has access (i.e., where $a_{ud} = 1$). This variable is determined by maximizing our objective (1) and applying the following constraints:

$$m_{ue} \;\le\; \sum_{d \in D} a_{ud} \, x_{ed} \qquad \forall \, u \in U, \; e \in E$$
In addition, we consider the least privileged user, that is, the user with lowest $c_u$. This variable is denoted $g$ and it is determined by applying the following additional constraints:

$$g \;\le\; c_u \qquad \forall \, u \in U$$
We now formulate the completeness term in the objective as

$$c = \frac{1}{|U|} \sum_{u \in U} c_u + g$$

where we maximize the mean UI completeness over all users and, in particular, try to improve the result for the user with the lowest coverage.

4.4 Assignment Constraints

The previous terms alone cannot sufficiently constrain the optimization. In particular, we cannot support private elements or limit the assignment of elements in a meaningful way. In this section, we state the constraints which allow for an effective optimization formulation.

Element Area Constraint

The element size variable $s_{ed}$ must be determined based on whether an element is assigned to a device at all. We thus define the following for all $e \in E$, $d \in D$:

$$s^{\min}_e \, x_{ed} \;\le\; s_{ed} \;\le\; s^{\max}_e \, x_{ed}$$

ensuring that the area of an element be zero if it is not assigned and that it lies between user-specified bounds otherwise.

Device Capacity Constraint

In Eq. (3), we aim to maximize the size of all elements. We constrain this maximization by requiring that the assigned element sizes do not exceed the device's display area. We assume that the sum of the areas of the rectangular elements assigned to device $d$ represents the total area used by the elements. While this assumption does not always hold, it works in practice as shown in our evaluations. The device capacity constraint is formulated as follows:

$$s_{ed} \le M \, x_{ed} \quad \forall \, e \in E, \qquad \sum_{e \in E} s_{ed} \;\le\; A_d \qquad \forall \, d \in D$$

where $M$ is a sufficiently large number (e.g., $\max_d A_d$).

Due to our simplifying assumption, we must explicitly ensure that the minimal width and height of an element allow it to be assigned to a device. This is expressed with the following constraints:

$$w^{\min}_e \, x_{ed} \le w_d, \qquad h^{\min}_e \, x_{ed} \le h_d \qquad \forall \, e \in E, \; d \in D$$

where $w^{\min}_e$ and $h^{\min}_e$ denote the minimal width and height of element $e$, and $w_d$ and $h_d$ the display dimensions of device $d$.
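The area and capacity constraints are easy to check for a candidate solution. The sketch below is a verification helper, not the optimizer itself; its name and argument shapes are our own.

```python
# Feasibility check mirroring the element area and device capacity
# constraints: an assigned element's area lies within its size bounds
# (zero when unassigned), and summed per-device areas fit the display.

def feasible(areas, min_size, max_size, screen_area):
    """areas: (element, device) -> s_ed; zero means the pair is unassigned."""
    used = {}
    for (e, d), s_ed in areas.items():
        if s_ed != 0 and not (min_size[e] <= s_ed <= max_size[e]):
            return False  # violates the element area constraint
        used[d] = used.get(d, 0) + s_ed
    # Device capacity: total assigned area must not exceed the screen.
    return all(total <= screen_area[d] for d, total in used.items())
```

Such a check is also useful in the designer-in-the-loop tool, to explain why a pinned assignment cannot be honoured.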
Element Permission Constraint

When assigning an element, we must consider the element permission parameter $p_{ue}$, which must be evaluated for every assignment $x_{ed}$. We do this by considering a device $d$ to which some users have access ($a_{ud} = 1$). If any of these users do not have permission to interact with an element (i.e., $p_{ue} = 0$), then the element should not be assigned to the device. This is expressed as:

$$x_{ed} \;\le\; 1 - a_{ud} \, (1 - p_{ue}) \qquad \forall \, u \in U, \; e \in E, \; d \in D$$
Device Accessibility Constraints

Furthermore, a device which is accessible by none of the users should not have any elements assigned:

$$x_{ed} \;\le\; \sum_{u \in U} a_{ud} \qquad \forall \, e \in E, \; d \in D$$
Zero Constraints

Finally, we check whether the compatibility or importance of an assignment is zero with:

$$x_{ed} = 0 \;\; \text{if} \;\; \mathbf{c}_d \cdot \mathbf{r}_e = 0, \qquad x_{ed} = 0 \;\; \text{if} \;\; \sum_{u \in U} a_{ud} \, i_{ue} = 0$$
We apply these constraints to make a distinction between very low importance or compatibility and zero-value input parameters. This allows for users to express a definite decision against an element assignment.
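To make the interplay of the permission, accessibility, and zero constraints tangible, the sketch below solves a toy instance by brute force. This is purely illustrative: real instances are solved with a MILP solver such as Gurobi, and the capacity model here (unit-sized elements, a per-device slot count) is our simplification, not the paper's formulation.

```python
# Brute-force sketch for very small instances: enumerate all 0/1
# assignments, discard constraint violations, keep the best score.
from itertools import product

def solve_tiny(elements, devices, users, access, permission,
               importance, caps, reqs, slots):
    def allowed(e, d):
        watchers = [u for u in users if access.get((u, d), 0)]
        if not watchers:
            return False                    # device accessibility constraint
        if any(not permission.get((u, e), 0) for u in watchers):
            return False                    # element permission constraint
        if sum(c * r for c, r in zip(caps[d], reqs[e])) == 0:
            return False                    # zero compatibility constraint
        return True

    pairs = [(e, d) for e in elements for d in devices if allowed(e, d)]
    best, best_score = set(), -1.0
    for bits in product((0, 1), repeat=len(pairs)):
        chosen = {p for p, b in zip(pairs, bits) if b}
        # Simplified capacity: at most slots[d] elements per device.
        if any(sum(1 for (_, d2) in chosen if d2 == d) > slots[d]
               for d in devices):
            continue
        score = sum(
            sum(c * r for c, r in zip(caps[d], reqs[e]))
            * sum(importance.get((u, e), 0.0)
                  for u in users if access.get((u, d), 0))
            for (e, d) in chosen
        )
        if score > best_score:
            best, best_score = chosen, score
    return best
```

Even this naive enumeration shows the key behaviour: an element never lands on a device visible to an unauthorized user, and inaccessible or incompatible pairs are pruned before search.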

User-defined Element Assignment

Though not shown, our approach can easily be extended to give users explicit control over element-device assignment. For instance, to ensure that element $e$ is assigned to device $d$, the constraint $x_{ed} = 1$ could be added. Similarly, $x_{ed} = 0$ can ensure that $e$ is not assigned to $d$. Note that the user-facing application should account for cases where the additional constraint cannot be fulfilled, such as when the minimum element size exceeds the device capacity.
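One simple way to realize such pins is a pre-processing step over the candidate assignment pairs; the function and parameter names below are hypothetical, and a MILP implementation would instead fix the corresponding decision variables.

```python
# Sketch of user-defined pins: remove forbidden pairs before solving and
# report required pairs that cannot be honoured, so the user-facing
# application can surface the conflict instead of failing silently.

def apply_pins(candidates, pin_on=(), pin_off=()):
    """Filter candidate (element, device) pairs according to user pins."""
    filtered = [p for p in candidates if p not in set(pin_off)]
    unsatisfiable = [p for p in pin_on if p not in filtered]
    return filtered, unsatisfiable
```

The caller then requires every remaining `pin_on` pair to appear in the solution, mirroring the added $x_{ed} = 1$ constraint.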

5 AdaM Design Tool

The AdaM Design Tool is a proof-of-concept designer-in-the-loop tool that allows for rapid solution space exploration. It consists of the AdaM Application Prototype and the AdaM Simulator. The Application Prototype allows the designer to specify input parameters required by the optimizer to allocate elements to devices and automatically applies the optimizer result. The simulator allows for quick tuning of input parameters by applying changes in device configurations immediately.

We build our tool on top of Codestrates [44] and Webstrates [22], which transparently synchronize the state of the Document Object Model (DOM) of webpages. Codestrates further enables collaborative prototyping and rapid iterations of AdaM applications. Communication with the optimizer back-end happens over a websocket connection.

5.1 AdaM Application Prototype

The AdaM Application Prototype includes an integrated development environment (IDE) for editing application content and behavior, as well as a configuration panel UI that allows for changing the parameters of optimizable elements. The platform is web-based, and each AdaM application is a single web page that contains optimizable elements, which it can hide or show based on the optimized solution.

The designer can develop the user interface and the interactive behavior of an AdaM application using standard HTML5, JavaScript, and CSS3 (Figure 1). A final application can be put into fullscreen. All changes are instantly reflected in the browser, allowing for rapid application development and testing. Each application is addressed by a URL, which can be shared with others to collaboratively develop applications or to run them on devices.

Figure 1: AdaM application in edit mode with the HTML of an application (left) and its CSS and JavaScript (right).

The designer has to annotate HTML elements with the attribute optimizable="true" to consider them for optimization. Initially, the optimizer uses default parameters for elements but they can be specified by the designer. Pressing the control key on the keyboard and clicking on an optimizable element opens the configuration panel UI (Figure 2). This panel allows the designer to enter parameters related to default element-user importance, element requirements, and user permissions.

Figure 2: Workflow to open configuration panel UI to specify element parameters. Media sharing application (A) with highlighted optimizable controls element (B) and open configuration panel (C).

Each AdaM application communicates with the optimizer back-end by sending changes of its state (e.g., when the application has loaded, or when parameters of elements have changed), and receives updates from the optimizer, including updates caused by other clients. A change includes updated user-specified parameters and the user/device configuration. Device information is automatically read out from the device (e.g., window width and height) or can be set via URL parameters, which is useful for testing with different devices.

5.2 AdaM Simulator

Testing multi-device user interfaces is inherently difficult, as it requires managing the input and output of multiple (often heterogeneous) devices at the same time. To overcome this challenge, we developed a simulator that allows us to instantiate a wide range of simulated devices in a web browser and control the device characteristics used by the optimizer. A device is simulated in an iframe pointing to a given AdaM application, parameterized to e.g., act like a user’s personal tablet or a shared interactive whiteboard.

The simulator has a pre-defined set of device types from which the designer can choose (i.e., TV, laptop, tablet, smartphone, and smartwatch). The simulated device characteristics, such as user access, device display dimensions, or device affordances, can be changed at any time. A device in the simulator can be disabled to simulate a device leaving or enabled to simulate a device joining.

6 System Walkthrough

To illustrate the utility of our approach, we start by discussing simple scenarios first, building up to more challenging scenarios and a functional end-to-end application. The initial illustrative examples build on a meeting room scenario. There are four users present in this scenario: the manager (‘boss’), her assistant, an employee, and a colleague who is presenting work results. We adjust specific parameters of our formulation per scenario and illustrate the effects.

6.1 A. User Roles

By considering user roles in our constraints, we can ensure that a particular user does not receive elements irrelevant to their role and task. A first simple use case involves the presenter and the assistant. We set binary permission values between elements and users, defining the UI elements each role has access to (but not the assignment of elements to devices). For the purposes of our demonstration, we only consider three UI elements:

            Presenter Controls   Minutes (View)   Minutes (Edit)
Presenter   Yes                  Yes              No
Assistant   No                   No               Yes
(a) Initial (b) Adapted
Figure 3: Adapting to user roles. Giving permissions only for a subset of available elements allows for an interface which satisfies the requirements of users’ roles.

Figure 3 shows that setting permission values alone already yields meaningful results. While the initial layout has no awareness of user roles (a), our algorithm correctly removes UI elements from unauthorized users' devices (b).

6.2 B. User Preference

While user roles are respected via designer-specified constraints, user preference is accounted for by the optimization objective. This allows for a flexible balancing of preferences, which is shown further in the demo application section. We show a simple example in Figure 4. Initially, all four UI elements have the same importance values and are therefore displayed on a large shared screen, with a random element assigned to personal devices. Once the boss and assistant set higher importance for the “Quarterly Figures” and “Minutes (View)” elements, these are assigned to their personal devices. Note that in this example, the size, input and output requirements of elements and the device characteristics are kept equal. Examples in our demo application in the next section show more sophisticated changes in user preference.

Figure 4: Adapting to user preferences. Initially, all elements appear on the projector. Increasing the per-user importance of individual elements triggers their appearance on personal devices.

6.3 C. Device Compatibility

We attempt to assign each UI element to the most suitable devices by considering element-device compatibility. We show an example of a single presenter with 3 devices in Figure 5. We compare (a) a case where all device characteristics and element requirement parameters are set to the same value against (b) a case with sensible parameters. Exact parameters are listed in the Appendix (Tab. 2). Note that the other input parameters are kept fixed and that all elements, including the presentation slides, fit onto the smartwatch's display.

Clearly a naive distribution of elements onto devices does not make sense since there is no guidance in terms of device affordances. The “Presenter Notes” element is placed on a small smartphone while the “Presentation” element is placed on the even smaller smartwatch. While “Presenter Controls” may be used on a laptop, arguably this element would be better placed on the available touchscreen device. In contrast, by setting sensible device characteristics and element requirement parameters, we can attain a useful assignment. While a human designer may not have duplicated the “Presenter Controls” over the smartphone and smartwatch and may have moved the “Clock” to the watch, we note that this is simply an initial assignment and can be refined quickly by tuning further input parameters such as setting the correct element size bounds and adjusting importance values. Since optimization takes only seconds this can be done interactively.

(a) Without device characteristics or element requirements (b) With sensible device characteristics and element requirements parameters
Figure 5: Element assignments become more suitable when taking device characteristics and element requirements into consideration.

6.4 Individual UI Completeness

An important contribution of our work is a formulation that considers completeness of the final DUI. When elements are assigned to devices without the completeness term or consideration of element utility from each user’s perspective, a particular user may receive an incomplete and hence non-functional UI. We address this by encouraging the optimizer to maximize the number of elements that a user can utilize.

Figure 6 shows the effect of the completeness term. The original UI shown in (a) is incomplete, and switching the laptop off only exacerbates the issue, leaving the assistant with a single UI element. With the completeness term, the initial UI includes all available elements (b). After the laptop is switched off, the three elements previously assigned to it move to the tablet, and the UI remains functional.

By introducing the DUI completeness term, which improves the functionality of each user’s DUI, we ensure that utility is part of the optimizer’s objective. This results in usable DUIs and is a meaningful step toward optimizing for individual users in a multi-user setting.
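Conceptually, the completeness of a user's DUI can be measured as the fraction of UI elements reachable through the devices that user can access. A sketch of such a measure (our own simplification; element names are hypothetical):

```python
def completeness(accessible_devices, assignment, all_elements):
    """Fraction of UI elements a user can reach through the devices they
    have access to. `assignment` maps each device to its elements."""
    reachable = set()
    for device in accessible_devices:
        reachable.update(assignment.get(device, []))
    return len(reachable & set(all_elements)) / len(all_elements)

# Hypothetical setup echoing Figure 6: three elements on the laptop,
# one on the tablet.
elements = ["video", "controls", "notes", "chat"]
assignment = {"laptop": ["video", "controls", "notes"], "tablet": ["chat"]}
```

With the laptop on, the user reaches every element (completeness 1.0); switching it off without reassignment drops completeness to 0.25, which is exactly the degradation the completeness term penalizes.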

(a) Without Completeness Term (b) With Completeness Term
Figure 6: The completeness term ensures that the final DUI remains useful. (a) shows the low utility of the DUI generated by the optimizer without the completeness term, while (b) shows how all elements are available to the user when the completeness term is used.

7 Demo Application: Co-located Media Sharing

(a) Initial configuration. It can be seen that elements respect element-device compatibilities in their assignment. (b) Bob and Carol’s preferences can both be respected. On the left, only Bob’s preference for the “Suggested Videos” element is represented. On the right, Carol’s preference for the “Description” element is also addressed by placing the element on the tablet shared with Bob.


Figure 7: A demonstration of our full system with optimization backend and distributed frontend. In this example, we can see the users and devices in play, with three user preferences represented. Our system quickly adapts to the changing setting. (a) and (b) are explained in their own captions, and (c) reflects Darryl’s preference for reading comments.

Having analyzed the individual components of our approach, we now discuss a more end-to-end application implemented using the proposed optimization approach. We explore the task of co-located media sharing, which is particularly well suited to demonstrating the capability to adapt to dynamic changes. This capability is one of the main contributions of our work and has not previously been modeled. Our approach makes it possible to adapt to arbitrary changes in a scenario in real-time and allows a designer, or even the end-users, to express and apply their preferences to continuously improve the user experience. In this application, we design our elements using responsive web design practices. The result is a visually appealing and functional application.

We consider a scenario involving 4 users and shared devices with large displays, as well as smaller private devices. The UI consists of the following elements: video, playback controls, description, comments, and suggested videos. We also add a collaborative component by implementing a voting module. When a user clicks on one of the suggested videos, the video is shown on the voting element. When all users have voted, the vote concludes and the suggested video may be played.
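The conclude-when-everyone-has-voted behavior of the voting module can be sketched as follows (a minimal illustration with hypothetical class and method names; not the paper's implementation):

```python
class SuggestionVote:
    """Tracks a vote on one suggested video; the vote concludes once
    every participating user has cast a ballot."""

    def __init__(self, users, video):
        self.video = video
        self.pending = set(users)  # users who have not voted yet
        self.in_favor = 0

    def cast(self, user, approve):
        """Record a ballot; returns True once the vote has concluded."""
        if user not in self.pending:
            return False  # already voted or unknown user; state unchanged
        self.pending.discard(user)
        if approve:
            self.in_favor += 1
        return not self.pending

vote = SuggestionVote(["Alice", "Bob", "Carol", "Darryl"], "suggested video")
```

Each `cast` call removes the user from the pending set, so the final voter's ballot is the one that triggers playback of the suggested video.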

In our scenario we begin with 1 TV, 1 shared laptop, and 3 smartphones. We do not illustrate all devices in the paper and refer the reader to our supplementary video for a visual demonstration of how our system handles dynamic user, device and user preference changes.

7.1 Initial Condition

Without any user preferences expressed, our algorithm still produces sensible element assignments, taking element size ranges, device characteristics, element requirements, and device sizes into consideration. Figure 7a shows the optimized assignment in the AdaM simulator UI. The most visually important video element is placed on the shared large displays, while the voting controls, which require touch interaction, are placed on the mobile phone displays. The comments element requires text input and is appropriately placed on the laptop.

7.2 Bob and Carol’s preferences and shared tablet

During the video sharing session, Bob and Carol bring out their tablet. When Bob increases his importance value for the “Suggestions” element to be higher (5) than the default for everyone else (4), the element appears on the tablet. With an even higher importance value, the suggestions appear on the phone as well, replacing the voting element (see Figure 7b).

When Carol decides that she would like to read the description of the video, she sets an importance value that is higher than Bob’s importance for the suggestions element. She has to set a sufficiently higher value, however, to counteract the lower compatibility between the description element and the tablet. The result is also shown in Figure 7b. Our completeness measure ensures that both Carol and Bob can still access the important voting controls.
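The interplay between Carol's importance value and the element-device compatibility can be illustrated with a simple product model; the actual objective combines these terms differently, so treat both the rule and the numbers as purely hypothetical:

```python
def placement_score(importance, compatibility):
    # Hypothetical combination: the benefit of a placement grows with
    # both the user's importance rating and element-device compatibility.
    return importance * compatibility

# Illustrative values: suppose the tablet suits "Suggested Videos"
# (compatibility 0.9) better than "Description" (compatibility 0.6).
suggestions = placement_score(5, 0.9)          # Bob's preference
description = placement_score(5, 0.6)          # Carol at equal importance loses
description_boosted = placement_score(8, 0.6)  # a sufficiently higher value wins
```

At equal importance, the better-matched suggestions element wins the tablet slot; only a sufficiently higher importance value lets the description element overcome its compatibility deficit, mirroring Carol's situation above.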

7.3 Darryl joins with his own preference

Later in the evening, Darryl joins the gathering. He prefers to read other users’ opinions and therefore places a high importance on the comments element. When he sets a sufficiently high personal importance value for the comments element, it is placed on his personal smartphone. He can then read and comment as he pleases. This result is shown in Figure 7c.

8 Scalability

Our algorithm is capable of adapting to changes in users, devices, and elements in real-time. So far, for brevity, we have discussed only toy examples in which the run-time of the optimizer was on the order of a second. Here we evaluate how well the algorithm scales to larger numbers of devices, elements, and users: settings in which manual assignment would be tedious at best, if not impossible. We run our performance evaluations on a desktop PC with an Intel i7-4770 processor, using Gurobi to solve our optimization problem.

As a test of worst-case scenarios, we randomly generate large numbers of elements, devices, and users, and record the convergence time of the solver over 10 randomized runs. For each user, random per-element importance values are generated, and devices are generated with randomized widths and heights. We allow all users to access all devices. Elements are generated with randomized minimum and maximum widths and heights in fixed pixel increments.
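A benchmark instance generator in this spirit (all numeric ranges are illustrative, not the paper's exact pixel bounds):

```python
import random

def random_instance(n_elements, n_devices, n_users, seed=0):
    """Generate one randomized worst-case benchmark input: devices with
    random screen sizes, elements with random size bounds in fixed
    increments, and users with random per-element importances."""
    rng = random.Random(seed)
    devices = [{"w": rng.randint(200, 2000), "h": rng.randint(200, 2000)}
               for _ in range(n_devices)]
    elements = []
    for _ in range(n_elements):
        min_w = rng.randrange(50, 500, 50)  # randomized, in 50 px steps
        min_h = rng.randrange(50, 500, 50)
        elements.append({"min_w": min_w,
                         "max_w": min_w + rng.randrange(0, 500, 50),
                         "min_h": min_h,
                         "max_h": min_h + rng.randrange(0, 500, 50)})
    # every user rates every element and may access every device
    users = [{"importance": [rng.randint(0, 5) for _ in range(n_elements)],
              "access": list(range(n_devices))}
             for _ in range(n_users)]
    return elements, devices, users

elements, devices, users = random_instance(50, 10, 5)
```

Such an instance would then be handed to the solver, with convergence time recorded over repeated seeds.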

Figure 8 summarizes the results. In (a), the input data consists of fixed numbers of devices and users with an increasing number of elements. In (b), we fix the numbers of elements and users and increase the number of devices. In (c), we fix the numbers of devices and elements and scale up the number of users to show an extreme scenario. All users have randomized personal preferences. To consider a more realistic case, in (d) we fix the number of elements and vary both users and devices, with a fixed number of personal devices per user and one publicly shared device per 5 users.

Our algorithm can solve scenarios with up to 1000 users and 2200 devices in about a second, allowing for the design of large-scale real-time adaptive systems. This speed allows for real-time exploration of DUI configurations, where a designer can determine parameters suitable to a task based on instant feedback.

(a) Elements (b) Devices
(c) Users (d) Users (and Devices)
Figure 8: Optimization time in seconds for varying problem sizes. In (a-c) we vary the number of elements, devices, and users independently. In (d) we vary both users and devices.

9 User Study

We assessed the approach by asking study participants to design a DUI using either pen and paper or AdaM. Our goal was to understand whether our approach is easy to grasp, and whether we can observe improvements in the design process in terms of performance and experience.

9.1 Method

Participants: Six participants (3 female, 3 male) were recruited from our institution (students and staff). The average age was 26 (SD = 1.6, range 24 to 27). Two participants were researchers in the area of web engineering, with one of them specifically researching DUIs. Three other participants stated that they had web development experience.

Tasks: The study comprised two tasks centered around a meeting scenario: 1) participants were asked to assign UI elements to devices to reflect the roles and preferences of users as specified in the scenario (T1); 2) some devices were then switched on/off and content preferences were changed, and participants were asked to adapt their previous assignment accordingly (T2).

Experimental design: We tested two conditions. In the first condition (pen&paper), participants crossed out elements which did not match the given scenario on a large sheet of paper showing all devices of all roles (see Figure 9, left). In the second condition (AdaM), participants used sliders to specify element importance according to scenario descriptions. An additional UI displayed an overview of devices and assigned elements (see Figure 9, right). We used a within-subjects design and counterbalanced the order of presentation.

Figure 9: Conditions of study (left: pen&paper, right: AdaM).

Procedure: In the beginning, participants were introduced to pen&paper and AdaM and were given time to practice with each tool. After that, participants solved T1 and T2 in the respective conditions. Tasks were complete when participants reported being satisfied with the element-to-device assignment. For each task and condition, participants completed the NASA-TLX and a questionnaire on satisfaction with the results. At the end, an exit interview was conducted. A session took on average 60 minutes.

9.2 Results

In terms of perceived scenario satisfaction, result satisfaction, number of scenario violations, and perceived task load, the mean responses for both conditions were within one standard deviation of each other. However, task execution time (TET) was lower for pen&paper compared to AdaM (for a summary of quantitative results, see Table 3 in the appendix), which indicates that the design task may not have been sufficiently difficult. This highlights the challenge of performing a fair comparison between automatic and manual design from the designers’ perspective, where the task cannot be made so difficult that the manual condition is deemed unfair.

Analyzing the interview answers, three participants valued AdaM’s capability to adapt in real-time to changing device configurations. In fact, one participant even exclaimed “perfect!” after switching on a mobile phone and realizing that the automatically assigned UI elements satisfied the scenario without any further adjustment. This advantage is also evident in the differences in quantitative results between the assignment task T1 and the adaptation task T2. Between tasks, the average TET improved by 103 s with AdaM compared to only 14 s with pen&paper, and task load improved by 14.6 with AdaM compared to 6.2 with pen&paper.

Another property of AdaM that was perceived as a “powerful” advantage over the manual approach (5 out of 6 participants) was the possibility to specify “global rules” (as one participant called them). Participants liked the fact that instead of assigning elements on a device level, they could specify the preference of a person and let the optimizer distribute elements over her devices. They commented on this capability, saying “not white and black listing per device, but you specify importance per role” and “when I specify the importance I do not need to think about devices”.

Nevertheless, the same participants mentioned that the main drawback of AdaM was reduced control in specifying distinct element-to-device assignments. They struggled to find a balance between different slider values such that the optimizer’s element-to-device assignment matched their intention. One participant summarized the problem: “I was able to satisfy the scenario, [but] it was difficult with the optimizer to go beyond”. A solution to this problem is to allow the specification of element-to-device assignments as hard constraints (see the paragraph on user-defined element assignment).

Another difficulty participants had was understanding the expected outcome of a slider change (“what does it translate to when I set a slider to 15?”). Due to the non-linear nature of our formulation, the outcome of the optimizer is hard to predict, and thus so is how the sliders need to be adjusted.

10 Limitations and Future Work

In this paper, we laid foundations for future work, but our approach is not without limitations. User study participants in particular had difficulty predicting the optimizer’s output (e.g., when the size bounds of the video element change, how does the output change?), while the large number of input parameters and the difficulty of determining the best values caused some difficulty in implementing the demo application. These issues could be addressed by: (a) producing a rigorous DUI test framework based on empirical observations (to allow for an improved objective function formulation), (b) reducing the number of input parameters (e.g., by defining a mapping from real-world device characteristics, or by using user interaction logs to determine parameter values), and (c) improving the DUI design-space exploration experience for designers (e.g., by facilitating easy specification of scenarios and automated mockup of the heterogeneous set of devices associated with users).

A further limitation lies in our evaluation. While our user study serves its purpose of confirming the general idea of our approach, the low participant number and the simulated design task make us hesitant to draw generalized conclusions. Nevertheless, we have confidence in our approach, as it was designed to be general and user-centred with basic principles in mind. We therefore believe that AdaM can be effective in real-world settings and aim to conduct an in-depth analysis in the future to verify this.

Further extensions to improve the user experience of AdaM-based DUIs could include: (1) consideration of user proximity and attention, (2) automatic determination of element-device compatibility parameters based on the affordances of devices and the composition of elements, and (3) continuous adaptation to users’ changing preferences through analysis of interaction logs and visual attention tracking.

11 Conclusion

In this paper, we have demonstrated a scalable approach to the automatic assignment of UI elements to users and devices in cross-device user interfaces during multi-user activities. By posing this as an assignment problem, we were able to create an algorithm that adapts to dynamic changes in the configuration of users, their roles, preferences, and access rights, as well as advertised device capabilities.

Underpinning AdaM is a MILP solver which, given an objective function, decides the assignment of elements to multiple devices and users. Measures for both quality and completeness, along with constraints, help guide the optimization toward satisfactory solutions, represented by suitable assignments of UI elements. The subsequent layout step is handled by responsive design practices common in web design, as shown in our application scenarios.

The AdaM platform itself is web-based and enables collaborative prototyping and rapid iteration of AdaM applications. In addition, our simulator environment allows us to instantiate a wide range of simulated devices. We report on scenarios with up to 1000 users and 2200 devices, along with a user study involving six participants who were asked to assign and adapt UI-element configurations. Our qualitative results indicate that AdaM can reduce both designer and user effort in attaining ideal DUI configurations. The results are promising and suggest that further exploration of the automatic UI element assignment approach introduced here is warranted.

The mathematical formulation introduced here may be extended to incorporate other issues present in collaborative multi-user interfaces, including extended device parameterization, social acceptability factors, user attention, proxemic dimensions, display switching, display contiguity, field of view, spatio-temporal interaction flow, inter-device consistency, sequential and parallel device use, and synchronous and asynchronous device arrangements.

12 Acknowledgments

We thank the ACM SIGCHI Summer School on Computational Interaction 2017 for bringing the authors together along with our study participants and the reviewers of this work.

This work was supported in part by ERC Grants OPTINT (StG-2016-717054) and Computed (StG-2014-637991), SNF Grant (200021L_153644), the Aarhus University Research Foundation, the Innovation Fund Denmark (CIBIS 1311-00001B), and the Scottish Informatics and Computer Science Alliance (SICSA).



13 Appendix

13.1 Device Capability Study Parameters

(a) Device Characteristics
                Laptop  Smartphone  Smartwatch
Visual Quality  3       1           1
Text Input      5       3           0
Mouse Pointing  3       0           0
Touch Pointing  0       4           2

(b) Element Requirements
                Presentation  Presenter Controls  Presenter Notes  Clock
Visual Quality  5             0                   3                2
Text Input      0             0                   1                0
Mouse Pointing  0             3                   1                0
Touch Pointing  0             5                   1                0

Table 2: Our device characteristics and element requirements parameters for Fig. 5 (b).

13.2 User Study Quantitative Figures

In the study, we asked participants whether the assignment of elements they produced in a condition satisfied the scenario (on a scale from 1 (not at all) to 7 (completely)) and how satisfied they were with the assignment they specified (from 1 (not satisfied) to 7 (very satisfied)). The results of these questions, as well as the task execution times for both conditions and tasks, can be seen in Table 3. Furthermore, we asked participants to fill out the NASA-TLX questionnaire and calculated how often participants’ element-to-device assignments violated the given scenario for both conditions and tasks. These results are also shown in Table 3.

Task  Measure                pen&paper   AdaM
T1    Exec. time (s)         313±90      399±119
      Scenario satisfaction  5.2±2.1     5.7±1.6
      Result satisfaction    4.8±1.9     4.8±1.6
      Scenario violations    1±1.5       1.3±1.2
      Task load              36.9±17.3   41.7±10.8
T2    Exec. time (s)         259±141     296±112
      Scenario satisfaction  6±0.6       5.7±0.8
      Result satisfaction    5.5±1.8     5.3±1.0
      Scenario violations    1.2±1.0     1.2±1.3
      Task load              30.7±16.9   27.1±13.3
Table 3: Mean ± SD of quantitative measures for T1 and T2 per condition.