Demos and Posters

14:40 Demos Fennia II
Chair: Anna Kolehmainen, Futurice Oy, Finland

A Mobile See-Through 3D Display with Front- and Back-Touch

  1. Patrick Bader, Stuttgart Media University, Germany
  2. Valentin Schwind, Stuttgart Media University, Germany
  3. Stefan Schneegass, University of Stuttgart, Germany
  4. Katrin Wolf, University of Stuttgart, Germany
  5. Niels Henze, University of Stuttgart, Germany

Touch screens are currently the dominant technology for facilitating input and output for mobile devices. Several directions to extend the possibilities of current touch screen technologies have been explored. In this demonstration we showcase a handheld device that consists of a stack of three see-through displays. Using three display layers makes it possible to realize a volumetric 3D display. As our device is touch sensitive on both display sides, it enables touch input on the device’s front and back. We demonstrate the device’s capabilities through three demo applications. We present 3D images using the three display layers, demonstrate a game in which the character can move from layer to layer, and show a target selection task to compare selection performance across display layers.

A Physical Visualization of a Living Social Network

  1. Wieslaw Bartkowski, University of Social Sciences and Humanities, Poland

Social network analysis is an important scientific research tool. It allows capturing ongoing processes at different levels and from different perspectives such as the individual, group, community and society. Scientists looking at the evolution of such networks over time often notice a resemblance to a living organism, existing over and above the basic social units of which it is constituted. The main aim of this work is to deepen this perception and experience of the “living network” by moving the visualization of the network from the computer screen into the real physical three-dimensional space. Additionally, this work is a demonstration of how artifacts created for purposes of scientific research can become contemporary art objects.

ALADIN: Demo of a Multimodal Adaptive Voice Interface

  1. Jonathan Huyghe, KU Leuven, Belgium
  2. Jan Derboven, KU Leuven, Belgium
  3. Dirk De Grooff, KU Leuven, Belgium

This demo presents the ALADIN adaptive voice interface, which was designed to aid people with motor disabilities. As these people often experience problems with conventional button- or switch-based interfaces, but often also suffer from speech impairments, a self-learning speech recognition system was developed. To support the system, a tablet-based companion interface was designed. The demo will present a working prototype of the speech system, which can optionally be trained by conference attendees to recognize their own language or dialect, as well as the tablet interface and an interactive 3D recreation of a home environment used during user testing.

Designing for Engagement: Tangible Interaction in Multisensory Environments

  1. Henrik Svarrer Larsen, Lund University, Sweden
  2. Héctor A. Caltenco, Lund University, Sweden

The pedagogical use of multisensory environments (MSE/Snoezelen) addresses the fundamentals of engagement in the world through rich, wondrous and sensuous experiences. Despite the diversity of artefacts and materials used in these practices, interactive designs are few, screen-centric or limited to simplistic behaviour. Twenty-four children with profound developmental disabilities from three MSE institutions have, together with us and pedagogues, explored potentials in interactivity for MSE. From a suite of 17 interactive designs, we will describe the three we will demo.

SONDI: Audio-based Device Discovery and Pairing for Smart Environments

  1. Hannu Kukka, University of Oulu, Finland
  2. Pauli Marjakangas, University of Oulu, Finland

In this paper we propose a system called SONDI that uses high frequency audio signals (called audio signatures) to pair mobile devices with fixed devices in smart environments. The system allows users to discover interaction possibilities in the environment they might have otherwise missed, through unobtrusive and non-audible signals sent from fixed devices. Benefits of SONDI include fast discovery times (<1.8 seconds), effortless interaction from the user, and high availability as SONDI does not require any additional hardware on the users’ mobile devices.

SecondNose: an air quality mobile crowdsensing system

  1. Chiara Leonardi, Fondazione Bruno Kessler, Italy
  2. Andrea Cappellotto, Fondazione Bruno Kessler, Italy
  3. Michele Caraviello, SKIL – Telecom Italia, Italy
  4. Bruno Lepri, Fondazione Bruno Kessler, Italy
  5. Fabrizio Antonelli, SKIL – Telecom Italia, Italy

In this paper, we present SecondNose, an air quality mobile crowdsensing service, aimed at collecting environmental data to monitor some air pollution indicators to foster participants’ reflection on their overall exposure to pollutants. Currently, SecondNose aggregates more than 30k data points daily from 80 citizens in Trento, northern Italy. The paper describes the system and the results of an initial evaluation. We conclude with an outline of future work on the research and development of SecondNose.

Using Gaze Gestures with Haptic Feedback on Glasses

  1. Jari Kangas, University of Tampere, Finland
  2. Deepak Akkil, University of Tampere, Finland
  3. Jussi Rantala, University of Tampere, Finland
  4. Poika Isokoski, University of Tampere, Finland
  5. Päivi Majaranta, University of Tampere, Finland
  6. Roope Raisamo, University of Tampere, Finland

Wearable computing devices are gradually becoming common, and head-mounted devices such as Google Glass are already available. These devices present new interaction challenges, as they are usually small in size and the usage environment sets limitations on the available interaction modalities. One potential interaction method could be to use gaze for input and haptics for output with a head-worn device. We built a demonstration system to show how gaze gestures could be used to control a simple information application, together with head-area haptic feedback for gesture confirmation. The demonstration and experiences from early user studies have shown that users find such an input-output combination useful.

Visual Berrypicking in Large Image Collections

  1. Thomas Low, Otto von Guericke University Magdeburg, Germany
  2. Christian Hentschel, Hasso Plattner Institute for Software Systems Engineering, Germany
  3. Sebastian Stober, Otto von Guericke University Magdeburg, Germany
  4. Harald Sack, Hasso Plattner Institute for Software Systems Engineering, Germany
  5. Andreas Nürnberger, Otto von Guericke University Magdeburg, Germany

Exploring image collections using similarity-based two-dimensional maps is an ongoing research area that faces two main challenges: with increasing collection size and similarity-metric complexity, projection accuracy rapidly degrades, and computational costs prevent online map generation. We propose a prototype that creates the impression of panning a large (global) map by aligning inexpensive small maps showing local neighborhoods. By directed hopping from one neighborhood to the next, the user is able to explore the whole image collection. Additionally, the similarity metric can be adapted by weighting image features, so users benefit from a more informed navigation.

Why Not Simply Google?

  1. Ahmet Soylu, University of Oslo, Norway
  2. Martin Giese, University of Oslo, Norway
  3. Ernesto Jimenez-Ruiz, University of Oxford, United Kingdom
  4. Evgeny Kharlamov, University of Oxford, United Kingdom
  5. Dmitriy Zheleznyakov, University of Oxford, United Kingdom
  6. Ian Horrocks, University of Oxford, United Kingdom

We demonstrate an ontology-based visual query system, OptiqueVQS, which enables end users without any technical background to formulate rather complex information needs as formal queries over databases. It is built on multiple, coordinated representation and interaction paradigms and a flexible widget-based architecture.

14:40 Posters Foyer
Chair: Mikael Wiberg, Umeå University, Sweden

“Should I Stay or Should I Go?” – Different Designs to Support Drivers’ Decision Making

  1. Andreas Löcken, OFFIS – Institute for Information Technology, Germany
  2. Heiko Müller, OFFIS – Institute for Information Technology, Germany
  3. Wilko Heuten, OFFIS – Institute for Information Technology, Germany
  4. Susanne Boll, University of Oldenburg, Germany

Ambient lighting systems have been introduced by several manufacturers to increase the driver’s comfort, and some work has proposed warning systems based on light displays. Expanding on this work, we are searching for designs of Lumicons (i.e. light patterns) that can not only warn drivers in critical situations, but also keep them informed in a non-distracting way. We present first ideas for Lumicons for a given scenario, derived from a participatory design process.

Children Reading eBooks on Tablets: a Study of The Context of Use

  1. Luca Colombo, University of Lugano, Switzerland
  2. Marcello Paolo Scipioni, University of Lugano, Switzerland

As children’s use of mobile technology increases, research has started investigating “how” and “why” children interact with portable devices; yet very little is known about “where” and “when” such interaction takes place. This paper sheds more light on the spatial and temporal context of use of tablet computers for leisure reading. Our findings provide implications for the design of eBooks for children and an agenda for future research.

Classifying Driver’s Uncertainty for Developing Trustworthy Assistance Systems

  1. Fei Yan, OFFIS – Institute for Information Technology, Germany
  2. Lars Weber, OFFIS – Institute for Information Technology, Germany
  3. Andreas Luedtke, OFFIS – Institute for Information Technology, Germany

This paper presents the results of a first step in a longer-term research approach to investigate the influence of uncertainty on drivers’ trust in Advanced Driver Assistance Systems (ADAS). The first step is to classify drivers’ uncertainty in lane-changing situations. In a pilot study (n=5), the effect of distance gap on drivers’ uncertainty was studied using a driving simulator. The results indicated a U-shaped relationship between distance gaps, reaction times and uncertainty scores.

Device-Orientation is More Engaging than Drag (at Least in Mobile Computing)

  1. Mattias Arvola, SICS East Swedish ICT AB and Linköping University, Sweden
  2. Anna Holm, Linköping University, Sweden

Does device-orientation-based panning on mobile devices facilitate engagement? Twenty users were asked to pan panoramas by turning around and changing the direction of the device, and by swiping with a finger on the touchscreen. The participants were also asked to rate how engaging they found each technique on the User Engagement Scale. It turned out that device-orientation-based panning was more engaging than drag-based panning. Moving your body to navigate information can pull you into an affective loop.

Digital Aura: Investigating Representations of Self in Augmented Reality Applications

  1. David McGookin, Aalto University, Finland

We consider the concept of a Digital Aura – an augmented reality (AR) visualisation, derived from existing social and digital media that represents an aspect of a person’s digital self to people he or she meets in the physical environment. We outline the potential and risks for such technology in face-to-face interaction, before discussing the results of interview studies that revealed what media users were open to sharing, with whom, and how a Digital Aura should be visualised. We outline our future work to investigate the impact of Digital Auras on face-to-face interaction.

Dynamic switching of data visualization method for increased plotting scalability

  1. Angie Mikhail-Morozov, ABB Corporate Research, Sweden
  2. Mika P. Nieminen, Aalto University, Finland

In information visualization every method performs best in its own range of plotting density. When the available area for the visualization is unknown or when the data is supplied in packages of various sizes, the best suited method cannot be chosen in advance, and the wrong choice leads to the problems of under- or overplotting. This note suggests a solution to dynamically change the visualization method based on the sample size and plotting area. The concept is illustrated with an example application for multivariate data. Suggested future work includes applying the solution to other data types, defining the switching boundaries and fading functions. The proposed dynamic switching of methods increases the plotting scalability of interactive data visualizations and thus can improve the usability of information-intense graphical user interfaces.

EcoSonic: Auditory Displays supporting Fuel-Efficient Driving

  1. Jan Hammerschmidt, Bielefeld University, Germany
  2. René Tünnermann, Bielefeld University, Germany
  3. Thomas Hermann, Bielefeld University, Germany

In this paper, we present our work towards an auditory display that is capable of supporting fuel-efficient operation of vehicles. We introduce five design approaches for employing the auditory modality for a fuel economy display. Furthermore, we have implemented a novel auditory display based on one of these approaches, focusing on giving feedback on the engine’s optimal rpm range, which is a major factor in eco-driving. Finally, we report on the development of a simple but physically realistic car simulator, which allows for a reproducible evaluation of prototype auditory displays as well as a comparison to state-of-the-art visual fuel efficiency indicators.

EduVis: Visualizing Educational Information

  1. Vilma Jordão, Universidade de Lisboa, Portugal
  2. Sandra Gama, INESC-ID and Universidade de Lisboa, Portugal
  3. Daniel Gonçalves, INESC-ID and Universidade de Lisboa, Portugal

Successful analysis of educational processes may help enhance academic success. Data mining techniques, despite allowing analysis of such data, result in an extensive set of symbolic patterns that are difficult to understand. Visualization may overcome this limitation due to its potential to display large quantities of data while alleviating cognitive load. We developed a visualization that allows the analysis of patterns obtained by using educational data mining techniques to uncover interdependences among courses in a university program. We created EduVis, a coordinated visualization which takes advantage of two different, complementary tools: a multi-layered visualization and a multi-matrix representation of courses and their relationships. Preliminary user tests have shown that EduVis makes important patterns immediately perceivable, suggesting that a small number of adjustments will realize its full potential for visualizing educational information.

Effects of Haptic Feedback on Gaze Based Auto Scrolling

  1. Karoliina Käki, University of Tampere, Finland
  2. Päivi Majaranta, University of Tampere, Finland
  3. Oleg Špakov, University of Tampere, Finland
  4. Jari Kangas, University of Tampere, Finland

Eye tracking enables automatic scrolling based on natural viewing behavior. We were interested in the effects of haptic feedback on gaze behavior and user experience. We conducted an experiment in which haptic feedback was used to forewarn readers that their gaze had entered an active scrolling area. Results show no statistical differences between conditions with or without haptic feedback in task time or gaze behavior. However, user experience varied considerably. Some participants were not able to associate the haptics with the scrolling; those who understood the connection found the haptic feedback useful. Further research is required to find a delay between the forewarning and the start of scrolling that is short enough for users to make the association, yet long enough to support a feeling of control and an enjoyable user experience.

Evaluating Multimodal Interaction with Gestures and Speech for Point and Select Tasks

  1. Alvin Jude, Baylor University, United States
  2. G. Michael Poor, Baylor University, United States
  3. Darren Guinness, Baylor University, United States

Natural interactions such as speech and gestures have achieved mainstream success independently, with consumer products such as Leap Motion popularizing gestures, while mobile phones have embraced speech input. In this paper we designed an interaction style that combines both gestures and speech to evaluate point and select interaction. Our results indicate that while gestures are slower than the mouse, the introduction of speech allows selection to be performed without negatively impacting navigation. We also found that users can adapt to this interaction quickly and are able to improve their performance with minimal training. This lays the foundation for future work, such as mouse replacement technologies for those with hand impairments.

Exploring Long-term Participation within a Living Lab: Satisfaction, Motivations and Expectations

  1. Chiara Leonardi, Fondazione Bruno Kessler, Italy
  2. Nicola Doppio, Trento RISE, Italy
  3. Bruno Lepri, Fondazione Bruno Kessler, Italy
  4. Massimo Zancanaro, Fondazione Bruno Kessler, Italy
  5. Michele Caraviello, Telecom Italia SKIL Lab, Italy
  6. Fabio Pianesi, Fondazione Bruno Kessler, Italy

This paper presents an assessment of the experience with a Living Lab project that currently involves 128 families with young children in a long-term relationship to design mobile services for this user group. Living Labs are a promising way to manage innovation, yet they also pose several challenges in retaining participants and keeping their motivation high over a long period. We discuss the strategies used in our project to encourage and manage participation. We then present an initial assessment focused on participants’ satisfaction, perceived burden, motivational drivers and needs.

Exploring Non-Verbal Communications in Mobile Text Chat – Emotion-Enhanced Chat

  1. Jackson Feijo Filho, Nokia Technology Institute, Brazil
  2. Thiago Valle, Nokia Technology Institute, Brazil
  3. Wilson Prata, PUC-Rio, Brazil

Much of human communication is carried out non-verbally, and all of this information is lost in mobile text messaging. This work describes an attempt to augment text chatting on mobile phones by adding automatically detected facial expression reactions and reading status to conversations. These expressions are detected using known image processing techniques. Related work on the investigation of non-verbal communication through text messaging is considered and distinguished from the present solution. The conception and implementation of a mobile phone application with this feature is described, and user studies are reported. Finally, the context of application, conclusions and future work are also discussed.

Formative Evaluation of a Constrained Composition Approach for Storytelling

  1. Eleonora Mencarini, University of Trento, Italy
  2. Gianluca Schiavo, University of Trento, Italy
  3. Alessandro Cappelletti, Fondazione Bruno Kessler, Italy
  4. Oliviero Stock, Fondazione Bruno Kessler, Italy
  5. Massimo Zancanaro, Fondazione Bruno Kessler, Italy

In this paper, we present the evaluation of a pen-and-paper mockup for the composition of comics using a pre-defined set of images and sentences. Our goal was to investigate the effectiveness and satisfaction of use of such an approach in comparison to a less constrained one. This pilot study is meant as an initial step to inform the design of a tool to support teenagers who speak different languages in remotely creating collaborative stories. Initial results are encouraging, since they suggest that the limitation of expressiveness imposed by the pre-defined sentences does not hinder the possibility of creating sensible and creative stories.

Groupsourcing: Nudging Users away from Unsafe Content

  1. Jian Liu, University of Helsinki and Aalto University, Finland
  2. Sini Ruohomaa, University of Helsinki, Finland
  3. Kumaripaba Athukorala, University of Helsinki, Finland
  4. Giulio Jacucci, University of Helsinki, Finland
  5. N. Asokan, Aalto University, Finland
  6. Janne Lindqvist, Rutgers University, United States

We present a system for aggregating feedback from social groups to deliver warnings about unsafe content, and describe our laboratory study to verify the effectiveness of such warnings.

Harvesting Social Media for Assessing User Experience

  1. Stefan Schneegass, University of Stuttgart, Germany
  2. Niels Henze, University of Stuttgart, Germany

Social media has become increasingly popular, not only among individuals but also among companies that try to connect with potential customers and users. Companies’ posts are discussed and shared by Facebook users, and it has been argued that these interactions can provide important information for companies. In this work, we explore likes, comments, and shares of automotive-related images posted on Facebook. We investigate to what extent these measures reflect the user experience measured in a lab-based study and in an online survey using a standardized questionnaire. The data harvested from Facebook correlates highly with the results of the lab-based study (up to r=.912). Surprisingly, we found that the correlation between the lab-based study and the Facebook data is even clearly higher than that between the lab-based study and the online survey.

How to Present Information on Wrist-Worn Point-Light Displays

  1. Jutta Fortmann, University of Oldenburg, Germany
  2. Heiko Müller, OFFIS – Institute for Information Technology, Germany
  3. Wilko Heuten, OFFIS – Institute for Information Technology, Germany
  4. Susanne Boll, University of Oldenburg, Germany

In recent years there has been an emerging trend towards wearable devices, such as wristwatches and wristbands. Common wrist-worn devices often present information visually and in an abstract way. However, little research has been done on how these displays should present information in daily life. In this work we built a point-light bracelet to explore this question. In a user study, participants designed light patterns for a hands-on scenario: physical activity feedback. Afterwards, we investigated how the participants experienced the light patterns in their daily life. From the study results we derive implications for the design of light patterns on a wrist-worn display, e.g. how and when to use the light parameters of colour and brightness.

Human Centered Training: Perceived Exertion as Main Parameter for Training Adaption

  1. Janko Timmermann, OFFIS – Institute for Information Technology, Germany
  2. Anke Workowski, Schüchtermann-Klinik, Germany
  3. Wilko Heuten, OFFIS – Institute for Information Technology, Germany
  4. Detlev Willemsen, Schüchtermann-Klinik, Germany
  5. Susanne Boll, University of Oldenburg, Germany

Regular physical activity is important for a healthy lifestyle, and it should be performed at the right individual intensity. Today it is common to use heart rate as an indicator of the optimal individual training intensity, but the subjective intensity perceived by the trainee is not considered. In this paper we present an approach that focuses on the perceived exertion of the trainee and helps to reach and keep a user-defined intensity level.

IllumiMug: Revealing Imperceptible Characteristics of Drinks

  1. Benjamin Poppinga, University of Oldenburg, Germany
  2. Jutta Fortmann, University of Oldenburg, Germany
  3. Heiko Müller, OFFIS – Institute for Information Technology, Germany
  4. Wilko Heuten, OFFIS – Institute for Information Technology, Germany
  5. Susanne Boll, University of Oldenburg, Germany

Drinking is vital, but certain drinks can also harm human health and well-being. In this paper, we present IllumiMug, a concept for a content-aware, interactive cup. The IllumiMug concept is able to measure the temperature and the level of a liquid in a cup and can represent helpful information through ambient light. We discuss some initial design thoughts and illustrate the potential benefits of IllumiMug in two scenarios, i.e., the preparation of proper alcoholic drinks, where the alcohol concentration is measured and shown, and the brewing of safe tea, where the drink’s temperature is indicated.

Influential Statements and Gaze for Persuasion Modeling

  1. Hana Vrzakova, University of Eastern Finland, Finland
  2. Roman Bednarik, University of Eastern Finland, Finland
  3. Yukiko Nakano, Seikei University, Japan
  4. Fumio Nihei, Seikei University, Japan

Influential statements during conversations change the flow of the discussion and open new directions in the conversation. The content alone does not make a statement influential; it is strengthened by behavioral patterns such as voice pitch, facial gestures, gaze and body posture. In this work we focus on the relationship between influential statements and gaze, as a potential cue for the automatic detection of conversation skills and for replicating natural interaction behavior in companionship and persuasive technologies. Within a multimodal data corpus of group conversations, we present an approach to analysing these rich social signals and explore potential correlations between influential statements and gaze. The statements in the conversations were semi-automatically annotated and scored according to their level of influence, which provided the boundaries for the gaze analysis. We present the first results of this approach.

Information-seeking on the Web – Influence of Language on Search Performances and Strategies

  1. Leena Salmi, University of Turku, Finland
  2. Aline Chevalier, University of Toulouse, France

This study deals with information-seeking on the Web, with a focus on the use of more than one language. Native speakers (NS) of French (N=20) were compared with non-native speakers (NNS) of French (N=30) in how they searched for answers to given questions using Google. Half of the NNSs (N=14) were also asked to change language and find the answers in their mother tongue (Finnish). The results show no differences in finding the correct answers, but the NSs of French completed the tasks faster than the NNSs. The NNSs formulated more queries than the NSs, and the group that had to change language formulated more queries and used more keywords than the other two groups. When searching for information in another language, diverse search strategies emerged.

Let’s Play the Feedback Game

  1. Anna Kantosalo, University of Helsinki, Finland
  2. Sirpa Riihiaho, Aalto University, Finland

This paper describes a method for getting user feedback from Finnish primary school students as a part of usability and concept evaluation for an educational poetry writing tool. The game-like method uses smiling faces on a physical game board and physical tokens to answer questions on the evaluated system.

LiFe-Support: An Environment to Get Live Feedback during Emergency Scenarios

  1. Syed Atif Mehdi, University of Kaiserslautern, Germany
  2. Shah Rukh Humayoun, University of Kaiserslautern, Germany
  3. Karsten Berns, University of Kaiserslautern, Germany

The paper presents an environment, called LiFe-Support, that facilitates caregiver staff at health service centers in getting live feedback on an emergency situation that may occur to an elderly person at home. The two main components of the system are an autonomous mobile robot, called ARTOS, which serves elderly people in their homes, and a visual platform through which caregivers control the robot’s communication and navigation in case of an emergency situation. A preliminary evaluation of the LiFe-Support environment has been carried out, and promising results indicate the usefulness of the system.

Long-term Modality Effect in Multimedia Learning

  1. Alessia Ruf, University of Basel, Switzerland
  2. Mirjam Seckler, University of Basel, Switzerland
  3. Klaus Opwis, University of Basel, Switzerland

Cognitive theories of multimedia seek the best way of creating materials to enhance learning outcomes. The so-called modality effect posits that learning outcomes are better if visual material such as images is presented together with auditory rather than visual information such as text. However, previous research on this effect is conflicting. There is also some evidence that the modality effect can be reversed if the learning environment is self-paced. Finally, there is little research about the modality effect over time and its impact on long-term memory, and a lack of studies comparing multimodal learning in system-paced and self-paced environments over time. Therefore, the aim of this study is (1) to compare auditory and visual learning conditions, (2) to examine the relationship between self- and system-paced learning time, and (3) to analyze the modality effect over time (immediately and after one week).

Magnetic Interaction with Devices: A Pilot Study on Mobile Gaming

  1. Saeed Afshari, University of Luxembourg, Luxembourg
  2. Andrei Popleteev, University of Luxembourg, Luxembourg
  3. Roderick McCall, University of Luxembourg, Luxembourg
  4. Thomas Engel, University of Luxembourg, Luxembourg

This work-in-progress paper presents a study of interaction techniques for mobile devices, with a focus on gaming scenarios. We introduce and explore usability and performance aspects of a novel magnet-based control for tangible around-device interaction, and compare it with the traditional mobile gaming controls, such as touchscreen thumbstick, swiping and tilt-based approaches.

Patient Expectations and Experiences from a Clinical Study in Psychiatric Care Using a Self-Monitoring System

  1. Lasse Benn Nørregaard, Daybuilder Solutions, Denmark
  2. Philip Kaare Løventoft, Daybuilder Solutions, Denmark
  3. Erik Frøkjær, University of Copenhagen, Denmark
  4. Lise Lauritsen, University of Copenhagen, Denmark
  5. Emilia Clara Olsson, University of Copenhagen, Denmark
  6. Louise Andersen, University of Copenhagen, Denmark
  7. Stine Rauff, University of Copenhagen, Denmark
  8. Klaus Martiny, University of Copenhagen, Denmark

Preliminary results from a clinical study concerning the feasibility of using a self-monitoring system in psychiatric care are presented. At the end of hospital treatment for depression 32 patients were enrolled in the study. The patients used the self-monitoring system at home during a 4-week period. Data from the case report forms show that a clear majority of the patients find that using the self-monitoring system supported them in getting a better overview of their symptoms. 12 out of 32 patients even found that using the system could help them catch an upcoming depression. A clear majority of the patients found it important that the use of the self-monitoring system was combined with communication and information sharing with their clinicians at face-to-face meetings and/or through telephone contacts.

Second Look: Combining Wearable Computing and Crowdsourcing to support Creative Writing

  1. Pedro Campos, Madeira-ITI, Portugal
  2. Frederica Gonçalves, Madeira-ITI, Portugal
  3. Michael Martins, Madeira-ITI, Portugal
  4. Miguel Campos, WowSystems, Portugal
  5. Paulo Freitas, WowSystems, Portugal

We present “Second Look”, a platform for helping people, in particular creative writers, to overcome writer’s block. This ubiquitous platform combines augmented reality (Google Glass and AR markers), ubiquitous computing (mobile phones), and crowdsourcing in order to improve the creativity, focus and performance of creative writers. A primary challenge in developing and evaluating creativity support tools is that we are not able to detect when a person is being creative. Our approach improves on current ones by exploring “in-the-moment” creativity and supporting it with adaptive ubiquitous technologies that try to keep people at a creative experience peak for a longer period of time.

Seniors and Text Messaging on Mobile Touchscreen Phones

  1. Reetta Övermark, University of Tampere, Finland
  2. Poika Isokoski, University of Tampere, Finland and KAIST HCI lab, South Korea
  3. Saila Ovaska, University of Tampere, Finland

We studied how senior citizens write and send text messages on their own mobile phone and two touch-screen smartphones. Each participant completed three training sessions and wrote messages with three phones. We found that the range of text entry performance among seniors is large. The average text entry rate for a 34-character test phrase was only 3.5 wpm. Further work to improve text messaging user interfaces for older, unskilled users is clearly needed.

Studying the perception of color components’ relative amounts in blended colors

  1. Sandra Gama, INESC-ID and Universidade de Lisboa, Portugal
  2. Daniel Gonçalves, INESC-ID and Universidade de Lisboa, Portugal

Visualization provides the means for a natural interpretation of information with low cognitive load. In this context, color is a powerful way to convey information properties. One use of color in visualization is color blending, in which distinct colors represent different data properties, and a data item that satisfies more than one property is represented by a color blended from its properties’ colors. Humans are able to perceive the original components that generate particular colors. However, the amount of each color component may not be evident, possibly making it difficult for users to quantify the relative relevance of each property. We performed a user study to verify to what extent people can perceive the relative amounts of color components in blended colors. The results of our study provide a set of guidelines to follow when using color blending in information visualization.

Supporting Running Groups as a Whole

  1. Janko Timmermann, OFFIS – Institute for Information Technology, Germany
  2. Alexander Erlemann, University of Oldenburg, Germany
  3. Wilko Heuten, OFFIS – Institute for Information Technology, Germany
  4. Susanne Boll, University of Oldenburg, Germany

Running in groups is a common activity since it is more motivating and interesting for the runners. However, current technical support systems generally support only single users and are not tailored for use by running groups. In this paper, we analyze the context of group running with the help of an expert for running beginners. We identify three communication channels and analyze their roles for the technical support of group running.

Supporting Situation Awareness with Peripheral Feedback on Monitoring Behavior

  1. Florian Fortmann, OFFIS – Institute for Information Technology, Germany
  2. Dierk Brauer, University of Oldenburg, Germany
  3. Heiko Müller, OFFIS – Institute for Information Technology, Germany
  4. Susanne Boll, University of Oldenburg, Germany

Maintaining situation awareness during supervisory control of automated systems is a demanding mental task that requires the human operator to perform adequate monitoring behavior. However, research has shown that this requirement is often violated, e.g., due to distraction and fatigue. To overcome this problem, we envision an adaptive, peripheral light display conveying feedback on the adequacy of the human operator’s monitoring behavior. This paper presents the results of a focus group conducted to specify (1) which type of information regarding the adequacy of monitoring behavior should be conveyed by the peripheral feedback display and (2) how this information should be encoded in light.

Sustainable Mobility – How to Overcome Mobility Behavior Routines

  1. Julia Seebode, TU Berlin, Germany
  2. Stefan Greiner, TU Berlin, Germany
  3. Tilo Westermann, TU Berlin, Germany
  4. Ina Wechsung, TU Berlin, Germany
  5. Sebastian Möller, TU Berlin, Germany

Is it possible to influence mobility routines with intelligent app design? This paper presents a study investigating the influence of information messages on mobility decisions. Results show an impact of carefully targeted information messages on mobility decisions. Particularly so-called “hard facts” like time and money appear to be important motivators encouraging people to overcome their mobility routines.

TACTUX – A Tactile User Experience Assessment Board

  1. Georg Regal, AIT Austrian Institute of Technology GmbH, Austria
  2. Marc Busch, AIT Austrian Institute of Technology GmbH, Austria
  3. Christina Hochleitner, AIT Austrian Institute of Technology GmbH, Austria
  4. Peter Wokerstorfer, AIT Austrian Institute of Technology GmbH, Austria
  5. Manfred Tscheligi, AIT Austrian Institute of Technology GmbH and University of Salzburg, Austria

We introduce TACTUX – Tactile User Experience Assessment Board, a tool to assess user experience through tactile properties. The results of using TACTUX in a preliminary user study with 19 participants show that tactile surface properties can successfully be used for self-assessment of user experience. 14 of 19 participants stated that it was easy to express their experience by using tactile surface properties. TACTUX has advantages over classical methods of user experience self-assessments (e.g. questionnaires): It can be used by a broad range of user groups and stimulates participants to talk about their experience when using interactive systems.

The InnocentButGuilty Framework – A Step Towards GKT-enhanced Applications

  1. Matthias Pfeiffer, Goethe University Frankfurt, Germany
  2. Claudia Stockhausen, Goethe University Frankfurt, Germany
  3. Detlef Krömker, Goethe University Frankfurt, Germany

The Guilty Knowledge Test (GKT) is a method for detecting knowledge, or an associated reaction of the brain, that is relevant to a given task. Here we propose the concept of GKT-enhanced applications, in which applications are improved by combining them with GKTs. While BCI systems are reliable, there is still no easy way to analyze the results from a GKT nor to automate the process. The InnocentButGuilty framework closes this gap and opens the door to using GKTs in various fields of application. InnocentButGuilty enriches applications and games with a new form of physiological input.

The Role of Location-based Event Browsers in Collaborative Behaviors: An Explorative Study

  1. Diogo Cabral, University of Helsinki, Finland
  2. Valeria Orso, University of Padua, Italy
  3. Youssef El-khouri, University of Helsinki, Finland
  4. Maura Belio, University of Padua, Italy
  5. Luciano Gamberini, University of Padua, Italy
  6. Giulio Jacucci, University of Helsinki and Aalto University, Finland

Events play an important role in tourist activities; they are usually planned in groups, involving collaborative behaviors. Mobile technology is a useful tool for such activities, and augmented reality on handheld devices can enhance the tourist experience. In this work, we present three mobile apps for the exploration of cultural events in a city: one based on 2D maps, one based on AR technology, and a hybrid one that integrates both approaches. In addition, we report on the impact that the different technologies have on collaborative behaviors.

Towards Collaborative Communities: a Preliminary Study on Exchange of Goods and Services in Local Contexts

  1. Steven Tait, EIT ICT Labs, Italy
  2. Chiara Leonardi, Fondazione Bruno Kessler, Italy
  3. Massimo Zancanaro, Fondazione Bruno Kessler, Italy
  4. Michele Caraviello, SKIL – Telecom Italia, Italy
  5. Bruno Lepri, Fondazione Bruno Kessler, Italy
  6. Paolo Massa, Fondazione Bruno Kessler, Italy

In this paper, we present an explorative study aimed at assessing challenges and opportunities for the design of innovative technologies to support the exchange of goods and services in geographically local contexts. A preliminary focus group identified the main dimensions and issues informing the design of a survey, which was distributed to parents with young children and received 102 responses. From the analysis of the responses, three themes emerged as important: favor exchange as a form of gift; the different relevance of feedback compared with global peer-to-peer services; and the importance of the relationships among the participants.

Towards Interactive Car Interiors – the Active Armrest

  1. Andreas Braun, Fraunhofer Institute for Computer Graphics Research, Germany
  2. Stephan Neumann, Technische Universität Darmstadt, Germany
  3. Sönke Schmidt, Technische Universität Darmstadt, Germany
  4. Reiner Wichert, Fraunhofer Institute for Computer Graphics Research, Germany
  5. Arjan Kuijper, Fraunhofer Institute for Computer Graphics Research, Germany

Modern cars are often equipped with touch-based interaction systems, such as touchscreens or touchpads. However, these are typically visibly exposed within the car interior. In this paper, we present the Active Armrest: a regular car armrest equipped with capacitive proximity sensors that combine limb detection and gesture recognition. The sensors are designed for invisible integration into existing environments and can be used to create interactive surfaces in a car. We investigate two different types of gestural interaction: touch gestures with the arm lifted, and free-air finger gestures performed above the interactive area while the arm stays on the armrest. The system was integrated into a prototype and tested for gesture recognition precision and usability.

User Curated Augmented Reality Art Exhibitions

  1. Paul Coulton, Lancaster University, United Kingdom
  2. Emma Murphy, Glasgow School of Art, United Kingdom
  3. Klen Čopič Pucihar, Lancaster University, United Kingdom
  4. Richard Smith, Lancaster University, United Kingdom
  5. Mark Lochrie, Lancaster University, United Kingdom

Creating mobile augmented reality applications to display gallery artworks or museum content is a well-established concept within the research community. However, the focus of these systems is generally technologically driven and primarily addresses the end user and not the views of the gallery or the original artist. In this paper we present the design and development of the mobile application ‘Taking the Artwork Home’, which allows people to digitally curate augmented reality art exhibitions in their own homes. A research through design methodology was adopted so that we could more fully understand how the views of the gallery and artists impacted on the artifact design and therefore the user experience.

User Interaction Metadata for Healthcare Information Systems

  1. Sami Laine, Aalto University, Finland
  2. Marko Nieminen, Aalto University, Finland

Semantically consistent understanding of data/content in information systems is a key issue in collaborative work settings. Even simple data – such as timestamps – can be a source of misunderstandings both in data entry and reporting tasks. Healthcare information systems generate timestamps during data entry situations. Timestamped data are used for various purposes, such as efficiency calculations, organizational development, and research. However, timestamps are often generated ambiguously in different ways, making them a source of semantic interpretation errors. In this paper we illustrate the phenomenon of timestamp ambiguity in medical settings. We suggest that User Interaction Metadata (UIM) could reduce the problems caused by ambiguous and erroneous timestamps. UIM may reveal hidden heterogeneity in data interpretation. Additionally, it can be used to identify user behavior patterns that are currently unrecognizable from raw transaction data.

Visualising the Flow of a Local Economy to Encourage Inter-Community Trading: adding bits to BARTER

  1. Mark Lochrie, Lancaster University, United Kingdom
  2. Paul Coulton, Lancaster University, United Kingdom
  3. Jonny Huck, Lancaster University, United Kingdom
  4. Mike Hallam, Lancaster Ethical Small Traders Association (ESTA), United Kingdom
  5. Jon Whittle, Lancaster University, United Kingdom
  6. Bran Knowles, Lancaster University, United Kingdom

In this research we present a visualisation representing the flow of money between traders in the local economy. The aim is to use a rhetorical approach to persuasion that promotes inter-community local trading by highlighting community benefit over personal gain.