Selected Projects

Eliciting and Analysing Users’ Envisioned Dialogues with Perfect Voice Assistants

Authors: Sarah Theres Völkel, Daniel Buschek, Malin Eiband, Benjamin R. Cowan, and Heinrich Hussmann.

Accepted for Publication in: CHI '21

website.png

ABSTRACT We present a dialogue elicitation study to assess how users envision conversations with a perfect voice assistant (VA). In an online survey, N=205 participants were prompted with everyday scenarios and wrote the lines of both user and VA in dialogues that they imagined as perfect. We analysed the dialogues with text analytics and qualitative analysis, including number of words and turns, social aspects of conversation, implied VA capabilities, and the influence of user personality. The majority envisioned dialogues with a VA that is interactive and not purely functional; it is smart, proactive, and has knowledge about the user. Attitudes diverged regarding the assistant’s role as well as its expressing humour and opinions. An exploratory analysis suggested a relationship with personality for these aspects, but correlations were low overall. We discuss implications for research and design of future VAs, underlining the vision of enabling conversational UIs rather than single-command “Q&As”.
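The text analytics mentioned above include counting words and turns per dialogue. A minimal sketch of such metrics, assuming dialogues are stored as speaker-labelled turn lists (the format and field names are our assumption, not the study's):

```python
# Sketch: per-dialogue word and turn counts, as in the study's text analytics.
# The dialogue representation here is an assumption for illustration.
def dialogue_metrics(dialogue):
    """dialogue: list of (speaker, utterance) tuples, speaker in {"USER", "VA"}."""
    turns = len(dialogue)
    words = sum(len(utterance.split()) for _, utterance in dialogue)
    va_turns = sum(1 for speaker, _ in dialogue if speaker == "VA")
    return {"turns": turns, "words": words, "va_turns": va_turns}

example = [
    ("USER", "Remind me to call the dentist tomorrow at nine."),
    ("VA", "Done. Should I also block time in your calendar?"),
    ("USER", "Yes, please."),
]
print(dialogue_metrics(example))  # {'turns': 3, 'words': 20, 'va_turns': 1}
```

Aggregating such metrics over all elicited dialogues gives the distributional view of envisioned conversation length reported in the paper.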


APPLICATION Our results can inform the development of user-centred voice assistant personalities. In particular, dialogue designers can use the resulting dialogues as inspiration to create future voice assistants. Furthermore, our findings show that users have individual preferences for a voice assistant expressing humour and opinion, emphasising the need for personalised voice assistants.

Developing a Personality Model for Speech-based Conversational Agents Using the Psycholexical Approach

Authors: Sarah Theres Völkel, Ramona Schoedel, Daniel Buschek, Clemens Stachl, Verena Winterhalter, Markus Bühner, and Heinrich Hussmann.

Published in: CHI '20

pdf.png
doi.png
cite.png
video.png

ABSTRACT We present the first systematic analysis of personality dimensions developed specifically to describe the personality of speech-based conversational agents. Following the psycholexical approach from psychology, we first report on a new multi-method approach to collect potentially descriptive adjectives from 1) a free description task in an online survey (228 unique descriptors), 2) an interaction task in the lab (176 unique descriptors), and 3) a text analysis of 30,000 online reviews of conversational agents (Alexa, Google Assistant, Cortana) (383 unique descriptors). We aggregate the results into a set of 349 adjectives, which are then rated by 744 people in an online survey. A factor analysis reveals that the commonly used Big Five model for human personality does not adequately describe agent personality. As an initial step to developing a personality model, we propose alternative dimensions and discuss implications for the design of agent personalities, personality-aware personalisation, and future research.

APPLICATION Our dimensions can be used to facilitate designing conversational agents with unique personalities. In particular, UX designers can use the resulting descriptors to develop consistent and comprehensive agent personalities. 
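The factor analysis step described above can be sketched with standard tooling. The ratings below are synthetic stand-ins (the paper used ratings of 349 adjectives from 744 participants); this only illustrates the technique, not the paper's actual pipeline:

```python
# Sketch: exploratory factor analysis over adjective ratings, in the spirit of
# the psycholexical approach. All data here is simulated for illustration.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_raters, n_adjectives, n_factors = 200, 12, 2

# Simulate ratings driven by two latent personality factors plus noise.
latent = rng.normal(size=(n_raters, n_factors))
loadings = rng.normal(size=(n_factors, n_adjectives))
ratings = latent @ loadings + 0.5 * rng.normal(size=(n_raters, n_adjectives))

fa = FactorAnalysis(n_components=n_factors, random_state=0)
fa.fit(ratings)
print(fa.components_.shape)  # (2, 12): one loading per factor and adjective
```

Inspecting which adjectives load highly on each recovered factor is what yields interpretable personality dimensions.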

personality-model.png
How to Trick AI: Users’ Strategies for Protecting Themselves Against Automatic Personality Assessment

Authors: Sarah Theres Völkel, Renate Häuslschmid, Anna Werner, Andreas Butz, and Heinrich Hussmann

Published in: CHI '20

ABSTRACT Psychological targeting tries to influence and manipulate users' behaviour. We investigated whether users can protect themselves from being profiled by a chatbot, which automatically assesses users' personality. Participants interacted twice with the chatbot: 

(1) They chatted for 45 minutes in customer service scenarios and received their actual profile (baseline). 

(2) They then were asked to repeat the interaction and to disguise their personality by strategically tricking the chatbot into calculating a falsified profile. 

In interviews, participants mentioned 41 different strategies but could only apply a subset of them in the interaction. They were able to manipulate all Big Five personality dimensions by nearly 10%.  

Participants regarded personality as very sensitive data. As they found tricking the AI too exhaustive for everyday use, we reflect on opportunities for privacy-protective designs in the context of personality-aware systems.  


trick-ai.png
When People and Algorithms Meet: User-reported Problems in Intelligent Everyday Applications

Authors: Malin Eiband, Sarah Theres Völkel, Daniel Buschek, Sophia Cook, and Heinrich Hussmann

Published in: TiiS '20, IUI '19


website.png

ABSTRACT The complex nature of intelligent systems motivates work on supporting users during interaction, for example through explanations. However, as yet there is little empirical evidence regarding the specific problems users face when applying such systems in everyday situations. This paper contributes a novel method and analysis to investigate such problems as reported by users:

We analysed 45,448 reviews of four apps on the Google Play Store (Facebook, Netflix, Google Maps and Google Assistant) with sentiment analysis and topic modelling to reveal problems during interaction that can be attributed to the apps' algorithmic decision-making. We enriched this data with users' coping and support strategies through a follow-up online survey (N=286). In particular, we found problems and strategies related to content, algorithm, user choice, and feedback.

We discuss corresponding implications for designing user support, highlighting the importance of user control and explanations of output, rather than processes.
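The topic-modelling part of the review analysis described above can be sketched as follows. The reviews are made up and the model is kept tiny; the study's corpus comprised 45,448 real Google Play reviews:

```python
# Sketch: topic modelling over app reviews, loosely following the pipeline
# described above. The reviews below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "the feed keeps showing posts I already hid",
    "recommendations never match what I actually watch",
    "navigation reroutes me through closed roads",
    "the assistant misunderstands simple voice commands",
    "why does the feed bury posts from my friends",
    "route suggestions ignore my avoid highways setting",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(reviews)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)
print(doc_topics.shape)  # one topic distribution per review
```

In the full pipeline, each review's sentiment score would additionally be estimated so that negative, problem-describing reviews can be surfaced per topic.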

methodology_overview_v3.png
Punishable AI: Examining Users’ Attitude Towards Robot Punishment

Authors: Beat Rossmy, Sarah Theres Völkel, Elias Naphausen, Patricia Kimm, Alexander Wiethoff, and Andreas Muxel

Published in: DIS '20

ABSTRACT To give feedback to robots, which are black-box systems for most users, we have to implement interaction paradigms that users understand and accept, for example reward and punishment. In this paper we present the first HRI experience prototype that implements gradual destructive interaction, namely breaking a robot's leg as a punishment technique. We conducted an exploratory experiment (N=20) to investigate participants' behaviour during the execution of three punishment techniques. Using a structured analysis of videos and interviews, we provide in-depth insights into participants' attitudes towards these techniques.

Participants preferred more abstract techniques and felt uncomfortable during human-like punishment interactions. Based on our findings, we raise questions about how human-like technologies should be designed.

The PhoneStudy Research Project: A Large-scale Mobile Sensing App 

Project Members: Clemens Stachl, Ramona Schödel, Quay Au, Sarah Theres Völkel, Florian Bemmann, Daniel Buschek, Samuel D. Gosling, Gabriella M. Harari, Tobias Schuwerk, Florian Pargent, Florian Lehmann, Daniela Becker, Michelle Oldemeier, Theresa Ullmann, Heinrich Hussmann, Bernd Bischl, and Markus Bühner

Published in: PNAS '20, European Journal of Personality '20, Zeitschrift für Psychologie

website.png


phonestudy.png
What is “Intelligent” in Intelligent User Interfaces? A Meta-Analysis of 25 Years of IUI

Authors: Sarah Theres Völkel, Christina Schneegass, Malin Eiband, and Daniel Buschek

Published in: IUI '19

ABSTRACT 

This reflection paper takes the 25th IUI conference milestone as an opportunity to analyse in detail the understanding of intelligence in the community: Despite the focus on intelligent UIs, it has remained elusive what exactly renders an interactive system or user interface “intelligent”, also in the fields of HCI and AI at large. We follow a bottom-up approach to analyse the emergent meaning of intelligence in the IUI community: In particular, we apply text analysis to extract all occurrences of “intelligent” in all IUI proceedings. We manually review these with regard to three main questions:

1) What is deemed intelligent?

2) How (else) is it characterised? and

3) What capabilities are attributed to an intelligent entity?

We discuss the community’s emerging implicit perspective on characteristics of intelligence in intelligent user interfaces and conclude with ideas for stating one’s own understanding of intelligence more explicitly.
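The extraction step of the meta-analysis described above can be sketched as a simple context-window search over proceedings text. The corpus snippet and window size below are our own illustrative assumptions:

```python
# Sketch: extracting occurrences of "intelligent"/"intelligence" with
# surrounding context, mirroring the bottom-up text analysis described above.
import re

def intelligent_mentions(text, window=3):
    """Return each matching token with `window` words of context on each side."""
    tokens = text.split()
    hits = []
    for i, tok in enumerate(tokens):
        if re.match(r"intelligen(t|ce)", tok, re.IGNORECASE):
            hits.append(" ".join(tokens[max(0, i - window):i + window + 1]))
    return hits

corpus = ("We present an intelligent user interface that adapts to its user. "
          "Such intelligence requires a model of user intent.")
for mention in intelligent_mentions(corpus):
    print(mention)
```

Each extracted snippet can then be manually coded against the three questions above (what is deemed intelligent, how it is characterised, and which capabilities are attributed to it).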

clustermap_entity_co-descriptor.png
Understanding Emoji Interpretation through User Personality and Message Context

Authors: Sarah Theres Völkel, Daniel Buschek, Jelena Pranjic, and Heinrich Hussmann

Published in: MobileHCI '19

ABSTRACT Emojis are commonly used as non-verbal cues in texting, yet may also lead to misunderstandings due to their often ambiguous meaning. User personality has been linked to understanding of emojis isolated from context, or via indirect personality assessment through text analysis. This paper presents the first study on the influence of personality (measured with BFI-2) on understanding of emojis, which are presented in concrete mobile messaging contexts: four recipients (parents, friend, colleague, partner) and four situations (information, arrangement, salutatory, romantic). In particular, we presented short text chat scenarios in an online survey (N=646) and asked participants to add appropriate emojis. Our results show that personality factors influence the choice of emojis. In a further open task, participants compared emojis that related work had found to be semantically similar. Here, participants provided rich and varying emoji interpretations, even in defined contexts. We discuss implications for research and design of mobile texting interfaces.

mobilehci_paper.png