Description

The emergence of a diversity of new electronic devices has led to a lack of effective ways to interact and communicate between devices (cross-device interaction). Furthermore, we often no longer work only with digital devices but also interact with physical objects in our environment (cross-media interaction). Cross-device interaction has, for example, made it into commercial solutions such as Google Chromecast, where information can be exchanged between devices. Cross-media interactions are, on the other hand, supported by commercial digital pen and paper solutions that bridge the gap between the digital and physical worlds. While there is significant research on both cross-device and cross-media interaction, there is a clear lack of research efforts trying to unify these two domains. Furthermore, existing cross-device or cross-media solutions often provide a pre-defined set of interactions, making it impossible for end users to make even minor adaptations of an interface. Last but not least, there is a lack of authoring tools that allow end users to define new innovative cross-device and cross-media interactions with minimal or no programming effort.

The goal of this proposal is to develop a framework for user-defined cross-device and cross-media user interfaces. We plan to bridge the gap between cross-device and cross-media interaction by unifying research from both domains. We will further investigate how the definition and authoring of cross-device and cross-media interactions can be simplified so that new interactions and interfaces can not only be programmed by developers but also composed by end users via appropriate authoring tools. Our solution should enable them to integrate existing user interface components, digital data and services in new innovative ways.
Thereby, we plan to offer a flexible configuration and composition of these components, similar to what we have seen with mashup tools for the Web 2.0. To reach our goal, we will first design a model combining cross-device and cross-media interaction based on the resource-selector-link (RSL) hypermedia metamodel. Based on this model, the eXtensible User Interface (XUI) framework will be developed. The XUI framework will allow developers to define new interactions via the active component and resource plug-in concepts offered by the RSL model.

In a second phase of our project, we will allow end users to use and customise existing active components. This will be achieved by extending the initial version of the model and XUI framework. In the third phase, we will investigate the necessary end-user authoring tools to configure the different cross-device and cross-media interactions. A first version of the authoring tool will support the visual composition of different user interface components and the corresponding services. An evaluation of the authoring tool and the underlying model and XUI framework might reveal issues, which will be addressed in a second version of the end-user authoring tool. This second version will go beyond the graphical definition of cross-device and cross-media interactions and also support some forms of programming by example. The user could, for example, perform an interaction (e.g. pressing a button on a device or executing a gesture) and then only use the graphical authoring tool to link the performed interaction with the corresponding action. The proposed research will be executed in the CISA research group within the Web & Information Systems Engineering (WISE) lab at the VUB.
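As a rough illustration of the hypermedia structure the model builds on, the three core RSL abstractions (resources, selectors and links) could be sketched as follows. This is a minimal sketch for illustration only; all class and attribute names are our own assumptions, not part of the project's actual XUI codebase.

```python
# Illustrative sketch of the resource-selector-link (RSL) metamodel:
# resources are addressable pieces of content (digital or physical),
# selectors address parts of a resource, and links associate any of
# these entities across devices and media. Names are assumptions.

from dataclasses import dataclass, field


@dataclass
class Resource:
    """Any addressable piece of content, e.g. a web page, an image
    or a sheet of interactive paper."""
    name: str


@dataclass
class Selector:
    """Addresses a part of a resource, e.g. a text range or a
    rectangular region on a paper sheet."""
    resource: Resource
    description: str


@dataclass
class Link:
    """A many-to-many link between entities (resources or selectors),
    enabling cross-device and cross-media associations."""
    sources: list = field(default_factory=list)
    targets: list = field(default_factory=list)


# Example: link a region on a physical paper sheet to a digital video,
# bridging a physical medium and a digital resource via one RSL link.
paper = Resource("interactive paper sheet")
video = Resource("lecture video")
region = Selector(paper, "rectangle at (10, 20), size 50x30 mm")
link = Link(sources=[region], targets=[video])
```

In this reading, an "active component" would be a specialised resource whose content is behaviour rather than static media, which is what allows the same link structure to connect user interface components and services as well as documents.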
The research group investigates cross-media information systems and architectures as well as concepts for future environments, including next-generation user interfaces, augmented reality and ambient information.
Acronym: FWOSB18
Status: Active
Effective start/end date: 1/01/16 – 31/12/17

ID: 21102706