Hi, I'm Uwe Gruenefeld
An HCI researcher in love with Mixed Reality

2023



Exploring the Stability of Behavioral Biometrics in Virtual Reality in a Remote Field Study: Towards Implicit and Continuous User Identification through Body Movements

J.Liebers, C.Burschik, U.Gruenefeld, and S.Schneegass
Published Paper at ACM VRST '23, Christchurch, New Zealand

Behavioral biometrics has recently become a viable alternative method for user identification in Virtual Reality (VR). Its ability to identify users based solely on their implicit interaction allows for high usability and removes the burden commonly associated with security mechanisms. However, little is known about the temporal stability of behavior (i.e., how behavior changes over time),...

...as most previous works were evaluated in highly controlled lab environments over short periods. In this work, we present findings obtained from a remote field study (N = 15) that elicited data over a period of eight weeks from a popular VR game. We found that there are changes in people’s behavior over time, but that two-session identification is still possible with a mean F1-score of up to 71%, while an initial training yields 86%. However, we also see that performance can drop by more than 50 percentage points when testing with later sessions, compared to the first session, particularly for smaller groups. Thus, our findings indicate that behavioral biometrics in VR are convenient for the user and remain practical and reasonably reliable despite behavioral variation over time.
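
To make the cross-session evaluation concrete, here is a minimal sketch of the setup described above: train an identification model on one session and score a later session with the mean F1 metric. The feature dimensions and the SVM classifier are illustrative assumptions, not the paper's exact pipeline.

    # Cross-session identification sketch (illustrative, not the paper's
    # exact pipeline): train on session 1, test on session 2, report F1.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics import f1_score

    rng = np.random.default_rng(42)
    n_users, n_samples, n_features = 15, 40, 24   # toy dimensions

    # Placeholder movement features; replace with real statistics of
    # head/hand motion extracted from the VR tracking data.
    X_session1 = rng.normal(size=(n_users * n_samples, n_features))
    y = np.repeat(np.arange(n_users), n_samples)
    X_session2 = X_session1 + rng.normal(scale=0.5, size=X_session1.shape)

    clf = SVC(kernel="rbf").fit(X_session1, y)
    pred = clf.predict(X_session2)
    print(f"mean F1: {f1_score(y, pred, average='macro'):.2f}")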

1st Joint Workshop on Cross Reality

H.Liang, H-C.Jetter, F.Maurer, U.Gruenefeld, M.Billinghurst, and C.Anthes
Published Workshop at IEEE ISMAR '23, Sydney, Australia

Cross Reality (CR) is an emerging technology that focuses on the concurrent usage of or the transition between multiple systems at different points on the reality-virtuality continuum (RVC), including Virtual Reality (VR), Augmented Virtuality (AV), and Augmented Reality (AR). CR has gained significant attention in recent years due to its potential for revolutionizing various research and industry areas...

...where users need to comprehend and explore spatial data and its relevant information in different forms. It is expected that in the near future, more CR applications will arise to allow users to transition along the individual stages of the RVC or to collaborate in-between these stages to use their distinct advantages and mitigate their potential problems.


The Actuality-Time Continuum: Visualizing Interactions and Transitions Taking Place in Cross-Reality Systems

J.Auda, S.Faltaous, U.Gruenefeld, S.Mayer, and S.Schneegass
Published Workshop Paper at IEEE ISMAR '23, Sydney, Australia

In the last decade, researchers contributed an increasing number of cross-reality systems and their evaluations. Going beyond individual technologies such as Virtual or Augmented Reality, these systems introduce novel approaches that help to solve relevant problems such as the integration of bystanders or physical objects. However, cross-reality systems are complex by nature, and describing the interactions and transitions taking place is a challenging task...

...Thus, in this paper, we propose the idea of the Actuality-Time Continuum that aims to enable researchers and designers alike to visualize complex cross-reality experiences. Moreover, we present four visualization examples that illustrate the potential of our proposal and conclude with an outlook on future perspectives.


A Scoping Survey on Cross-Reality Systems

J.Auda, U.Gruenefeld, S.Faltaous, S.Mayer, and S.Schneegass
Published Journal Article at ACM CSUR '23

Immersive technologies such as Virtual Reality (VR) and Augmented Reality (AR) empower users to experience digital realities. Known as distinct technology classes, the lines between them are becoming increasingly blurry with recent technological advancements. New systems enable users to interact across technology classes or transition between them - referred to as cross-reality systems. Nevertheless, these systems are not well-understood. Hence, in this paper, we conducted a scoping literature review to classify and analyze cross-reality systems proposed in previous work...

... First, we define these systems by distinguishing three different types. Thereafter, we compile a literature corpus of 306 relevant publications, analyze the proposed systems, and present a comprehensive classification, including research topics, involved environments, and transition types. Based on the gathered literature, we extract nine guiding principles that can inform the development of cross-reality systems. We conclude with research challenges and opportunities.


Don't Forget to Disinfect: Understanding Technology-Supported Hand Disinfection Stations

J.Keppel, M.Strauss, S.Faltaous, J.Liebers, R.Heger, U.Gruenefeld, and S.Schneegass
Published Journal Article at MobileHCI '23, Athens, Greece

The global COVID-19 pandemic created a constant need for hand disinfection. While it is still essential, disinfection use is declining with the decrease in perceived personal risk (e.g., as a result of vaccination). Thus, this work explores using different visual cues to act as reminders for hand disinfection. We investigated different public display designs using (1) paper-based visual cues only, or additionally adding (2) screen-based or (3) projection-based visual cues...

...To gain insights into these designs, we conducted semi-structured interviews with passersby (N=30). Our results show that the screen- and projection-based conditions were perceived as more engaging. Furthermore, we conclude that the disinfection process consists of four steps that can be supported: drawing attention to the disinfection station, supporting the (subconscious) understanding of the interaction, motivating hand disinfection, and performing the action itself. We conclude with design implications for technology-supported disinfection.


ARcoustic: A Mobile Augmented Reality System for Seeing Out-of-View Traffic

X.Zhang, X.Wu, R.Cools, A.L.Simeone, and U.Gruenefeld
Published Conference Paper at AutoUI '23, Ingolstadt, Germany

Locating out-of-view vehicles can help pedestrians to avoid critical traffic encounters. Some previous approaches focused solely on visualising out-of-view objects, neglecting their localisation and limitations. Other methods rely on continuous camera-based localisation, raising privacy concerns. Hence, we propose the ARcoustic system, which utilises a microphone array for nearby moving vehicle localisation and visualises nearby out-of-view vehicles to support pedestrians...

...First, we present the implementation of our sonic-based localisation and discuss the current technical limitations. Next, we present a user study (n = 18) in which we compared two state-of-the-art visualisation techniques (Radar3D, CompassbAR) to a baseline without any visualisation. Results show that both techniques present too much information, resulting in below-average user experience and longer response times. Therefore, we introduce a novel visualisation technique that aligns with the technical localisation limitations and meets pedestrians’ preferences for effective visualisation, as demonstrated in the second user study (n = 16). Lastly, we conduct a small field study (n = 8) testing our ARcoustic system under realistic conditions. Our work shows that out-of-view object visualisations must align with the underlying localisation technology and fit the concrete application scenario.
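
The paper's localisation pipeline is not reproduced here, but a common building block for this kind of sonic localisation is estimating the time difference of arrival between two microphones with GCC-PHAT and converting it into a bearing. The sketch below assumes a simple two-microphone array; ARcoustic's actual implementation may differ.

    # GCC-PHAT time-difference-of-arrival sketch for a two-microphone
    # array; a common building block for sonic localisation, not
    # necessarily ARcoustic's exact implementation.
    import numpy as np

    def gcc_phat(sig, ref, fs, max_tau):
        n = len(sig) + len(ref)
        R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
        cc = np.fft.irfft(R / (np.abs(R) + 1e-15), n=n)
        shift = min(int(fs * max_tau), n // 2)
        cc = np.concatenate((cc[-shift:], cc[:shift + 1]))
        return (np.argmax(np.abs(cc)) - shift) / fs   # delay in seconds

    fs, d, c = 16000, 0.1, 343.0   # sample rate, mic spacing (m), speed of sound
    t = np.arange(fs) / fs
    left = np.sin(2 * np.pi * 440 * t)
    right = np.roll(left, 3)       # simulate a 3-sample arrival delay
    tau = gcc_phat(right, left, fs, max_tau=d / c)
    bearing = np.degrees(np.arcsin(np.clip(tau * c / d, -1, 1)))
    print(f"estimated bearing: {bearing:.1f} degrees")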


“They see me scrollin”—Lessons Learned from Investigating Shoulder Surfing Behavior and Attack Mitigation Strategies

A.Saad, J.Liebers, S.Schneegass, and U.Gruenefeld
Published Book Chapter at Springer Human Factors in Privacy Research

Mobile computing devices have become ubiquitous; however, they are prone to observation and reconstruction attacks. In particular, shoulder surfing, where an adversary observes another user's interaction without prior consent, remains a significant unresolved problem. In the past, researchers have primarily focused on making authentication more robust against shoulder surfing - with less emphasis on understanding the attacker or their behavior...

... Nonetheless, understanding these attacks is crucial for protecting smartphone users' privacy. This chapter aims to bring more attention to research that promotes a deeper understanding of shoulder surfing attacks. While shoulder surfing attacks are difficult to study under natural conditions, researchers have proposed different approaches to overcome this challenge. We compare and discuss these approaches and extract lessons learned. Furthermore, we discuss different mitigation strategies of shoulder surfing attacks and cover algorithmic detection of attacks and proposed threat models as well. Finally, we conclude with an outlook of potential next steps for shoulder surfing research.


Pointing It out! Comparing Manual Segmentation of 3D Point Clouds between Desktop, Tablet, and Virtual Reality

C.Liebers, M.Prochazka, N.Pfützenreuter, J.Liebers, J.Auda, U.Gruenefeld, and S.Schneegass
Published Journal Article at IJHCI '23

Scanning everyday objects with depth sensors is the state-of-the-art approach to generating point clouds for realistic 3D representations. However, the resulting point cloud data suffers from outliers and contains irrelevant data from neighboring objects. To obtain only the desired 3D representation, additional manual segmentation steps are required. In this paper, we compare three different technology classes as independent variables (desktop vs. tablet vs. virtual reality)...

...in a within-subject user study (N = 18) to understand their effectiveness and efficiency for such segmentation tasks. We found that desktop and tablet still outperform virtual reality regarding task completion times, while we could not find a significant difference between them in the effectiveness of the segmentation. In the post hoc interviews, participants preferred the desktop due to its familiarity and temporal efficiency and virtual reality due to its inherent three-dimensional representation.


How to Communicate Robot Motion Intent: A Scoping Review

M.Pascher, U.Gruenefeld, S.Schneegass, and J.Gerken
Published Paper at ACM CHI '23, Hamburg, Germany

Robots are becoming increasingly omnipresent in our daily lives, supporting us and carrying out autonomous tasks. In Human-Robot Interaction, human actors benefit from understanding the robot’s motion intent to avoid task failures and foster collaboration. Finding effective ways to communicate this intent to users has recently received increased research interest. However, no common language has been established to systematize robot motion intent...

...This work presents a scoping review aimed at unifying existing knowledge. Based on our analysis, we present an intent communication model that depicts the relationship between robot and human through different intent dimensions (intent type, intent information, intent location). We discuss these different intent dimensions and their interrelationships with different kinds of robots and human roles. Throughout our analysis, we classify the existing research literature along our intent communication model, allowing us to identify key patterns and possible directions for future research.


HaptiX: Vibrotactile Haptic Feedback for Communication of 3D Directional Cues

M.Pascher, T.Franzen, K.Kronhardt, U.Gruenefeld, S.Schneegass, and J.Gerken
Published Poster at ACM CHI '23, Hamburg, Germany


Introduction to Authentication using Behavioral Biometrics

J.Liebers, U.Gruenefeld, D.Buschek, F.Alt, and S.Schneegass
Published Course at ACM CHI '23, Hamburg, Germany

The trend of ubiquitous computing goes in parallel with ubiquitous authentication, as users must confirm their identity several times a day on their devices. Due to their inherent drawbacks, passwords are increasingly superseded by biometrics, and Behavioral Biometrics are particularly promising for increased usability and user experience...

...This course provides participants with an introduction to the overall topic, covering all phases of creating novel authentication schemes. We introduce important aspects of evaluating Behavioral Biometrics and provide an overview of technical machine-learning techniques in a hands-on session, inviting practitioners and researchers to extend their knowledge of Behavioral Biometrics.

2022



ExplAInable Pixels: Investigating One-Pixel Attacks on Deep Learning Models with Explainable Visualizations

J.Keppel, J.Liebers, J.Auda, U.Gruenefeld, and S.Schneegass
Published Paper at ACM MUM '22, Lisbon, Portugal
🏆 Honorable Mention Award

Nowadays, deep learning models enable numerous safety-critical applications, such as biometric authentication, medical diagnosis support, and self-driving cars. However, previous studies have frequently demonstrated that these models are attackable through slight modifications of their inputs, so-called adversarial attacks...

...Hence, researchers proposed investigating examples of these attacks with explainable artificial intelligence to understand them better. In this line, we developed an expert tool to explore adversarial attacks and defenses against them. To demonstrate the capabilities of our visualization tool, we worked with the publicly available CIFAR-10 dataset and generated one-pixel attacks. After that, we conducted an online evaluation with 16 experts. We found that our tool is usable and practical, providing evidence that it can support understanding, explaining, and preventing adversarial examples.
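
As a rough illustration of how such one-pixel attacks can be generated, the sketch below minimises the classifier's confidence in the true class by optimising a single pixel with differential evolution, following the general recipe from the one-pixel-attack literature. The `predict_proba` stub is a placeholder assumption; plug in a real CIFAR-10 model to produce meaningful attacks.

    # Sketch of a one-pixel attack via differential evolution. The
    # classifier below is a placeholder; use a real CIFAR-10 model.
    import numpy as np
    from scipy.optimize import differential_evolution

    def predict_proba(img):
        # Placeholder: deterministic pseudo-probabilities per image.
        rng = np.random.default_rng(int(img.sum() * 1e6) % 2**32)
        p = rng.random(10)
        return p / p.sum()

    def one_pixel_attack(img, true_label):
        h, w, _ = img.shape

        def confidence(p):                 # confidence in the true class
            x, y, r, g, b = p
            adv = img.copy()
            adv[int(x), int(y)] = (r, g, b)
            return predict_proba(adv)[true_label]

        bounds = [(0, h - 1), (0, w - 1), (0, 1), (0, 1), (0, 1)]
        res = differential_evolution(confidence, bounds,
                                     maxiter=30, popsize=10, seed=0)
        return res.x, res.fun              # pixel/colour, residual confidence

    img = np.random.default_rng(1).random((32, 32, 3))
    pixel, conf = one_pixel_attack(img, true_label=3)
    print(pixel, conf)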


Single-Sign-On in Smart Homes using Continuous Authentication

J.Liebers, N.Wittig, S.Janzon, P.Golkar, H.Moruf, W.Kontchipo, U.Gruenefeld, and S.Schneegass
Published Poster at ACM MUM '22, Lisbon, Portugal

Modern ubiquitous computing environments are increasingly populated with smart devices that need to know the identity of users interacting with them. At the same time, the number of authentications that a user needs to perform increases, as nowadays devices such as smart TVs require authentication, which was not the case in earlier times...

...Even for single-person households, the need to authenticate against present smart devices in the environment appears at regular intervals, ranging from TVs to voice assistants, to gaming consoles. To reduce the need for repeated authentication, we explore the concept of a system that allows the sharing of users’ authenticated identity information between smart devices, similar to the concept of Single-Sign-On on the internet. Following a preliminary field study, we show that such a system can decrease the number of necessary authentications in a ubiquitous computing environment by 84.4%, increasing usability and security.


Identifying Users by Their Hand Tracking Data in Augmented and Virtual Reality

J.Liebers, S.Brockel, U.Gruenefeld, and S.Schneegass
Published Journal Article at IJHCI '22

Nowadays, Augmented and Virtual Reality devices are widely available and are often shared among users due to their high cost. Thus, distinguishing users to offer personalized experiences is essential. However, currently used explicit user authentication (e.g., entering a password) is tedious and vulnerable to attack. Therefore, this work investigates the feasibility of implicitly identifying users by their hand tracking data...

...In particular, we identify users by their uni- and bimanual finger behavior gathered from their interaction with eight different universal interface elements, such as buttons and sliders. In two sessions, we recorded the tracking data of 16 participants while they interacted with various interface elements in Augmented and Virtual Reality. We found that user identification is possible with up to 95% accuracy across sessions using an explainable machine learning approach. We conclude our work by discussing differences between interface elements, and feature importance to provide implications for behavioral biometric systems.
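
For a flavour of the explainable approach, the sketch below trains a random-forest identifier on hand-tracking features and reads out per-feature importances. The feature names are hypothetical and the model choice is an assumption, not necessarily the classifier used in the paper.

    # Sketch: identify users from hand-tracking features and inspect
    # which features drive the decision. Feature names are hypothetical.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    features = ["pinch_duration", "finger_spread", "grab_speed",
                "hand_yaw_var", "index_curl", "thumb_angle"]
    X = rng.normal(size=(16 * 50, len(features)))   # 16 users, 50 samples each
    y = np.repeat(np.arange(16), 50)

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    for name, imp in sorted(zip(features, clf.feature_importances_),
                            key=lambda t: -t[1]):
        print(f"{name:>15}: {imp:.3f}")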


A Systematic Analysis of External Factors Affecting Gait Identification

A.Saad, N.Wittig, U.Gruenefeld, and S.Schneegass
Published Paper at IEEE IJCB '22, Abu Dhabi, United Arab Emirates

Inertial sensors integrated into smartphones provide a unique opportunity for implicitly identifying users through their gait. However, researchers identified different external factors influencing the user's gait and consequently impact gait-based user identification algorithms. While these previous studies provide important insights, a holistic comparison of external factors influencing identification algorithms is still missing...

... In this explorative work, we conducted a focus group with participants from biometrics research to collect and classify these factors. Next, we recorded the gait of 12 participants walking regularly and being influenced by eleven different external factors (e.g., shoe and floor types) in two separate sessions. We used a Deep Learning (DL) identification algorithm for analysis and validated the analysis results using within- and between-session data. We propose a categorization of gait covariates based on users' control levels. Floor types have the most significant impact on recognition accuracy. Finally, between-session analysis shows less accurate yet more robust results than within-session validation and testing.
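
The sketch below shows the general shape of a DL identification model over inertial-sensor windows: a small 1D CNN mapping accelerometer/gyroscope windows to user logits. The architecture and window size are assumptions for illustration, not the network from the paper.

    # Toy 1D-CNN over inertial-sensor windows (architecture is assumed,
    # not taken from the paper).
    import torch
    import torch.nn as nn

    class GaitNet(nn.Module):
        def __init__(self, n_users=12, channels=6):   # 3-axis acc + gyro
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(channels, 32, kernel_size=5), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=5), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                nn.Linear(64, n_users),
            )

        def forward(self, x):          # x: (batch, channels, samples)
            return self.net(x)

    model = GaitNet()
    window = torch.randn(8, 6, 128)    # eight 128-sample gait windows
    logits = model(window)
    print(logits.shape)                # torch.Size([8, 12])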


ARm Haptics: 3D-Printed Wearable Haptics for Mobile Augmented Reality

U.Gruenefeld, A.Geilen, J.Liebers, N.Wittig, M.Koelle, and S.Schneegass
Published Journal Article at MobileHCI '22, Vancouver, Canada

Augmented Reality (AR) technology enables users to superpose virtual content onto their environments. However, interacting with virtual content while mobile often requires users to perform interactions in mid-air, resulting in a lack of haptic feedback. Hence, in this work, we present the ARm Haptics system, which is worn on the user's forearm...

...and provides 3D-printed input modules, each representing well-known interaction components such as buttons, sliders, and rotary knobs. These modules can be changed quickly, thus allowing users to adapt them to their current use case. After an iterative development of our system, which involved a focus group with HCI researchers, we conducted a user study to compare the ARm Haptics system to hand-tracking-based interaction in mid-air (baseline). Our findings show that using our system results in significantly lower error rates for slider and rotary input. Moreover, use of the ARm Haptics system results in significantly higher pragmatic quality and lower effort, frustration, and physical demand. Following our findings, we discuss opportunities for haptics worn on the forearm.


Reflecting on Approaches to Monitor User’s Dietary Intake

J.Keppel, U.Gruenefeld, M.Strauss, L.Gonzalez, O.Amft, and S.Schneegass
Published Workshop Paper at MobileHCI '22, Vancouver, Canada

Monitoring dietary intake is essential to providing user feedback and achieving a healthier lifestyle. In the past, different approaches for monitoring dietary behavior have been proposed. In this position paper, we first present an overview of the state-of-the-art techniques grouped by image- and sensor-based approaches...

...After that, we introduce a case study in which we present a Wizard-of-Oz approach as an alternative and non-automatic monitoring method.


Investigating the Influence of Gaze- and Context-Adaptive Head-up Displays on Take-Over Requests

H.Detjen, S.Faltaous, J.Keppel, M.Prochazka, U.Gruenefeld, S.Sadeghian, and S.Schneegass
Published Paper at AutoUI '22, Seoul, South Korea

In Level 3 automated vehicles, preparing drivers for take-over requests (TORs) on the head-up display (HUD) requires their repeated attention. Visually salient HUD elements can distract attention from potentially critical parts in a driving scene during a TOR. Further, attention is (a) meanwhile needed for non-driving-related activities and can (b) be over-requested...

...In this paper, we conduct a driving simulator study (N=12), varying required attention by HUD warning presence (absent vs. constant vs. TOR-only) across gaze-adaptivity (with vs. without) to fit warnings to the situation. We found that (1) drivers value visual support during TORs, (2) gaze-adaptive scene complexity reduction works but creates a benefit-neutralizing distraction for some, and (3) drivers perceive constant HUD warnings as annoying and distracting over time. Our findings highlight the need for (a) HUD adaptation based on user activities and potential TORs and (b) sparse use of warning cues in future HUD designs.


Give Weight to VR: Manipulating Users’ Perception of Weight in Virtual Reality with Electric Muscle Stimulation

S.Faltaous, M.Prochazka, J.Auda, J.Keppel, N.Wittig, U.Gruenefeld, and S.Schneegass
Published Paper at Mensch und Computer '22, Darmstadt, Germany

Virtual Reality (VR) devices empower users to experience virtual worlds through rich visual and auditory sensations. However, believable haptic feedback that communicates the physical properties of virtual objects, such as their weight, is still unsolved in VR. The current trend towards hand tracking-based interactions, neglecting the typical controllers, further amplifies this problem...

...Hence, in this work, we investigate the combination of passive haptics and electric muscle stimulation to manipulate users’ perception of weight, and thus, simulate objects with different weights. In a laboratory user study, we investigate four differing electrode placements, stimulating different muscles, to determine which muscle results in the most potent perception of weight with the highest comfort. We found that actuating the biceps brachii or the triceps brachii muscles increased the weight perception of the users. Our findings lay the foundation for future investigations on weight perception in VR.


Investigating the Challenges Facing Behavioral Biometrics in Everyday Life

A.Saad, and U.Gruenefeld
Published Workshop Paper at Mensch und Computer '22, Darmstadt, Germany

The rapid progress of ubiquitous device usage is faced with equally rapid progress of user-centered attacks. Researchers have considered adopting different user identification methods, with more attention towards implicit and continuous ones, to maintain the balance between usability and privacy...

...In this statement, we first discuss biometric-based solutions used to assure devices’ robustness against user-centered attacks, taking inertial sensor-based gait identification as an example. We finally discuss the challenges facing these solutions when integrated with everyday interactions.


Understanding Shoulder Surfer Behavior and Attack Patterns Using Virtual Reality

Y.Abdrabou, R.Rivu, T.Ammar, J.Liebers, A.Saad, C.Liebers, U.Gruenefeld, P.Knierim, M.Khamis, V.Mäkelä, S.Schneegass, and F.Alt
Accepted Paper at ACM In-Cooperation AVI '22, Rome, Italy

In this work, we explore attacker behavior during shoulder surfing. As such behavior is often opportunistic and difficult to observe in real world settings, we leverage the capabilities of virtual reality (VR). We recruited 24 participants and observed their behavior in two virtual waiting scenarios: at a bus stop and in an open office space...

...In both scenarios, participants shoulder surfed private screens displaying different types of content. From the results we derive an understanding of factors influencing shoulder surfing behavior, reveal common attack patterns, and sketch a behavioral shoulder surfing model. Our work suggests directions for future research on shoulder surfing and can serve as a basis for creating novel approaches to mitigate shoulder surfing.


VRception: Rapid Prototyping of Cross-Reality Systems in Virtual Reality

U.Gruenefeld, J.Auda, F.Mathis, S.Schneegass, M.Khamis, J.Gugenheimer, and S.Mayer
Published Paper at ACM CHI '22, New Orleans, USA
🏆 Honorable Mention Award

Cross-reality systems empower users to transition along the reality-virtuality continuum or collaborate with others experiencing different manifestations of it. However, prototyping these systems is challenging, as it requires sophisticated technical skills, time, and often expensive hardware. We present VRception, a concept and toolkit for quick and easy prototyping of cross-reality systems...

...By simulating all levels of the reality-virtuality continuum entirely in Virtual Reality, our concept overcomes the asynchronicity of realities, eliminating technical obstacles. Our VRception toolkit leverages this concept to allow rapid prototyping of cross-reality systems and easy remixing of elements from all continuum levels. We replicated six cross-reality papers using our toolkit and presented them to their authors. Interviews with them revealed that our toolkit sufficiently replicates their core functionalities and allows quick iterations. Additionally, remote participants used our toolkit in pairs to collaboratively implement prototypes in about eight minutes that they would have otherwise expected to take days.


If The Map Fits! Exploring Minimaps as Distractors from Non-Euclidean Spaces in Virtual Reality

J.Auda, U.Gruenefeld, and S.Schneegass
Published Poster at ACM CHI '22, New Orleans, USA

With non-Euclidean spaces, Virtual Reality (VR) experiences can more efficiently exploit the available physical space by using overlapping virtual rooms. However, the illusion created by these spaces can be discovered if the overlap is too large. Thus, in this work, we investigate if users can be distracted from the overlap by showing a minimap that suggests that there is none...

...When done correctly, more VR space can be mapped into the existing physical space, allowing for more spacious virtual experiences. Through a user study, we found that participants uncovered the overlap of two virtual rooms when it was at 100% or the overlapping room extended even further. Our results show that the additional minimap renders overlapping virtual rooms more believable and can serve as a helpful tool to use physical space more efficiently for natural locomotion in VR.


Mask removal isn’t always convenient in public! – The Impact of the Covid-19 Pandemic on Device Usage and User Authentication

A.Saad, U.Gruenefeld, L.Mecke, M.Koelle, F.Alt, and S.Schneegass
Published Poster at ACM CHI '22, New Orleans, USA

The ongoing Covid-19 pandemic has impacted our everyday lives and requires everyone to take countermeasures such as wearing masks or disinfecting their hands. However, while previous work suggests that these countermeasures may profoundly impact biometric authentication, an investigation of the actual impact is still missing. Hence, in this work, we present our findings from an online survey (n=334)...

...on experienced changes in device usage and failures of authentication. Our results show significant changes in personal and shared device usage, as well as a significant increase in experienced failures when comparing the present situation to before the Covid-19 pandemic. From our qualitative analysis of participants' responses, we derive potential reasons for these changes in device usage and increases in authentication failures. Our findings suggest that making authentication contactless is only one of the aspects relevant to countering the novel challenges caused by the pandemic.


Understanding Challenges and Opportunities of Technology-Supported Sign Language Learning

S.Faltaous, T.Winkler, C.Schneegass, U.Gruenefeld, and S.Schneegass
Published Paper at ACM In-Cooperation AHs '22, Munich, Germany

Around 466 million people in the world live with hearing loss, with many benefiting from sign language as a means of communication. Through advancements in technology-supported learning, autodidactic acquisition of sign languages, e.g., American Sign Language (ASL), has become possible. However, little is known about the best practices for teaching signs using technology...

...This work investigates the use of different conditions for teaching ASL signs: audio, visual, electrical muscle stimulation (EMS), and visual combined with EMS. In a user study, we compare participants’ accuracy in executing signs, recall ability after a two-week break, and user experience. Our results show that the conditions involving EMS resulted in the best overall user experience. Moreover, ten ASL experts rated the signs performed with visual and EMS combined highest. We conclude our work with the potentials and drawbacks of each condition and present implications that will benefit the design of future learning systems.


The Butterfly Effect: Novel Opportunities for Steady-State Visually-Evoked Potential Stimuli in Virtual Reality

J.Auda, U.Gruenefeld, T.Kosch, and S.Schneegass
Published Paper at ACM In-Cooperation AHs '22, Munich, Germany

In Virtual Reality (VR), Steady-State Visually-Evoked Potentials (SSVEPs) can be used to interact with the virtual environment using brain signals. However, the design of SSVEP-eliciting stimuli often does not match the virtual environment, and thus, disrupts the virtual experience. In this paper, we investigate stimulus designs with varying suitability to blend into virtual environments. Therefore, we created differently-shaped virtual butterflies...

...The shapes vary from rectangular wings, through round wings, to the wing shape of a real butterfly. These butterflies elicit SSVEP responses through different animations: flickering or flapping wings. To evaluate our stimuli, we first extracted suitable frequencies for SSVEP responses from the literature. In a first study, we determined three frequencies yielding the best detection accuracy in VR. We used these frequencies in a second study to analyze detection accuracy and appearance ratings using our stimulus designs. Our work contributes insights into the design of SSVEP stimuli that blend into virtual environments and still elicit SSVEP responses.
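
A standard way to decode which flicker frequency an SSVEP response locks onto is canonical correlation analysis against sinusoidal reference signals. The sketch below shows that generic decoder on synthetic data; it is an assumption for illustration, not necessarily the analysis used in the paper.

    # Generic CCA-based SSVEP frequency detection (illustrative decoder).
    import numpy as np
    from sklearn.cross_decomposition import CCA

    fs, seconds = 250, 2.0
    t = np.arange(int(fs * seconds)) / fs

    def references(freq, harmonics=2):
        return np.column_stack([f(2 * np.pi * h * freq * t)
                                for h in range(1, harmonics + 1)
                                for f in (np.sin, np.cos)])

    # Fake 4-channel EEG responding to a 12 Hz stimulus (replace with data).
    rng = np.random.default_rng(0)
    eeg = (np.outer(np.sin(2 * np.pi * 12 * t), np.ones(4))
           + 0.5 * rng.normal(size=(len(t), 4)))

    def canonical_corr(X, Y):
        cca = CCA(n_components=1).fit(X, Y)
        u, v = cca.transform(X, Y)
        return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

    scores = {f: canonical_corr(eeg, references(f)) for f in (8, 10, 12, 15)}
    print(max(scores, key=scores.get), scores)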


Understanding Shoulder Surfer Behavior Using Virtual Reality

Y.Abdrabou, R.Rivu, T.Ammar, J.Liebers, A.Saad, C.Liebers, U.Gruenefeld, P.Knierim, M.Khamis, V.Mäkelä, S.Schneegass, and F.Alt
Published Poster at IEEE VR '22, Christchurch, New Zealand

We explore how attackers behave during shoulder surfing. Unfortunately, such behavior is challenging to study as it is often opportunistic and can occur wherever potential attackers can observe other people’s private screens. Therefore, we investigate shoulder surfing using virtual reality (VR)...

...We recruited 24 participants and observed their behavior in two virtual waiting scenarios: at a bus stop and in an open office space. In both scenarios, avatars interacted with private screens displaying different content, thus providing opportunities for shoulder surfing. From the results, we derive an understanding of factors influencing shoulder surfing behavior.


My Caregiver the Cobot: Comparing Visualization Techniques to Effectively Communicate Cobot Perception to People with Physical Impairments

M.Pascher, K.Kronhardt, T.Franzen, U.Gruenefeld, S.Schneegass, and J.Gerken
Published Journal Article at MDPI Sensors '22

Nowadays, robots are found in a growing number of areas where they collaborate closely with humans. Enabled by lightweight materials and safety sensors, these cobots are gaining increasing popularity in domestic care, where they support people with physical impairments in their everyday lives. However, when cobots perform actions autonomously, it remains challenging for human collaborators to understand and predict their behavior,...

...which is crucial for achieving trust and user acceptance. One significant aspect of predicting cobot behavior is understanding their perception and comprehending how they “see” the world. To tackle this challenge, we compared three different visualization techniques for Spatial Augmented Reality. All of these communicate cobot perception by visually indicating which objects in the cobot’s surrounding have been identified by their sensors. We compared the well-established visualizations Wedge and Halo against our proposed visualization Line in a remote user experiment with participants suffering from physical impairments. In a second remote experiment, we validated these findings with a broader non-specific user base. Our findings show that Line, a lower complexity visualization, results in significantly faster reaction times compared to Halo, and lower task load compared to both Wedge and Halo. Overall, users prefer Line as a more straightforward visualization. In Spatial Augmented Reality, with its known disadvantage of limited projection area size, established off-screen visualizations are not effective in communicating cobot perception and Line presents an easy-to-understand alternative.

2021



Using Gaze Behavior and Head Orientation for Implicit Identification in Virtual Reality

J.Liebers, P.Horn, C.Burschik, U.Gruenefeld, and S.Schneegass
Published Paper at ACM VRST '21, Osaka, Japan

Identifying users of a Virtual Reality (VR) headset provides designers of VR content with the opportunity to adapt the user interface, set user-specific preferences, or adjust the level of difficulty either for games or training applications. While most identification methods currently rely on explicit input, implicit user identification is less disruptive and does not impact the immersion of the users. In this work, we introduce a biometric identification system that employs the user’s gaze behavior as a unique, individual characteristic...

...In particular, we focus on the user’s gaze behavior and head orientation while following a moving stimulus. We verify our approach in a user study. A hybrid post-hoc analysis results in an identification accuracy of up to 75% for an explainable machine learning algorithm and up to 100% for a deep learning approach. We conclude by discussing application scenarios in which our approach can be used to implicitly identify users.


I’m in Control! Transferring Object Ownership Between Remote Users with Haptic Props in Virtual Reality

J.Auda, L.Busse, K.Pfeuffer, U.Gruenefeld, R.Rivu, F.Alt, and S.Schneegass
Published Paper at SUI '21, Virtual Conference

Virtual Reality (VR) remote collaboration is becoming more and more relevant in a wide range of scenarios, such as remote assistance or group work. A way to enhance the user experience is using haptic props that make virtual objects graspable. But physical objects are only present in one location and cannot be manipulated directly by remote users. We explore different strategies to handle ownership of virtual objects enhanced by haptic props...

...In particular, we explore two strategies of handling object ownership: SingleOwnership and SharedOwnership. SingleOwnership restricts virtual objects to local haptic props, while SharedOwnership allows collaborators to take over ownership of virtual objects using local haptic props. We study both strategies for a collaborative puzzle task regarding their influence on performance and user behavior. Our findings show that SingleOwnership increases communication and, enhanced with virtual instructions, results in higher task completion times. SharedOwnership is less reliant on verbal communication and faster, but there is less social interaction between the collaborators.


Towards a Universal Human-Computer Interaction Model for Multimodal Interactions

S.Faltaous, U.Gruenefeld, and S.Schneegass
Published Paper at MuC '21, Ingolstadt, Germany

Models in HCI describe and provide insights into how humans use interactive technology. They are used by engineers, designers, and developers to understand and formalize the interaction process. At the same time, novel interaction paradigms arise constantly introducing new ways of how interactive technology can support humans. In this work, we look into how these paradigms can be described using the classical HCI model...

...introduced by Schomaker in 1995. We extend this model by presenting new relations that provide a better understanding of these paradigms. For this, we revisit the existing interaction paradigms and try to describe their interaction using this model. The goal of this work is to highlight the need to adapt the models to new interaction paradigms and spark discussion in the HCI community on this topic.


Wisdom of the IoT Crowd: Envisioning a Smart Home-based Nutritional Intake Monitoring System

S.Faltaous, S.Janzon, R.Heger, M.Strauss, P.Golkar, M.Viefhaus, M.Prochazka, U.Gruenefeld, and S.Schneegass
Published Paper at MuC '21, Ingolstadt, Germany

Obesity and overweight are two factors linked to various health problems that lead to death in the long run. Technological advancements have granted the chance to create smart interventions. These interventions could be operated by the Internet of Things (IoT) that connects different smart home and wearable devices, providing a large pool of data. In this work, we use IoT with different technologies to present an exemplary nutritional intake monitoring system...

...This system integrates the input from various devices to understand the users’ behavior better and provide recommendations accordingly. Furthermore, we report on a preliminary evaluation through semi-structured interviews with six participants. Their feedback highlights the system’s opportunities and challenges.


Enabling Reusable Haptic Props for Virtual Reality by Hand Displacement

J.Auda, U.Gruenefeld, and S.Schneegass
Published Paper at MuC '21, Ingolstadt, Germany

Virtual Reality (VR) enables compelling visual experiences. However, providing haptic feedback is still challenging. Previous work suggests utilizing haptic props to overcome such limitations and presents evidence that props could function as a single haptic proxy for several virtual objects. In this work, we displace users’ hands to account for virtual objects that are smaller or larger...

...Hence, the used haptic prop can represent several differently-sized virtual objects. We conducted a user study (N = 12) and presented our participants with two tasks during which we continuously handed them the same haptic prop while they saw differently-sized virtual objects in VR. In the first task, we used a linear hand displacement and increased the size of the virtual object to understand when participants perceive a mismatch. In the second task, we compared the linear displacement to logarithmic and exponential displacements. We found that participants, on average, do not perceive the size mismatch for virtual objects up to 50% larger than the physical prop. However, we did not find any differences between the explored displacement functions. We conclude our work with future research directions.
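
The summary above does not spell out the displacement formulas, so the sketch below only illustrates the idea: the tracked hand is offset around the prop's centre with a gain derived from the virtual-to-physical size ratio, with linear, logarithmic, and exponential gains as assumed examples.

    import numpy as np

    def displace(real_pos, center, ratio, mode="linear"):
        """Map the tracked hand position to a displaced virtual position so
        one physical prop can stand in for a virtual object `ratio` times
        its size. Gain functions are illustrative, not the paper's."""
        offset = np.asarray(real_pos) - center
        gain = {"linear": ratio,
                "logarithmic": 1 + np.log(ratio),
                "exponential": np.exp(ratio - 1)}[mode]
        return center + gain * offset

    center = np.zeros(3)
    hand = np.array([0.05, 0.10, 0.00])   # metres from the prop centre
    for mode in ("linear", "logarithmic", "exponential"):
        print(mode, displace(hand, center, ratio=1.5, mode=mode))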


VRSketch: Investigating 2D Sketching in Virtual Reality with Different Levels of Hand and Pen Transparency

J.Auda, R.Heger, U.Gruenefeld, and S.Schneegass
Published Paper at INTERACT '21, Bari, Italy

Sketching is a vital step in design processes. While analog sketching with pen and paper is the de facto standard, Virtual Reality (VR) seems promising for improving the sketching experience. It provides myriads of new opportunities to express creative ideas. In contrast to reality, possible drawbacks of pen-and-paper drawing can be tackled by altering the virtual environment. In this work, we investigate how hand and pen transparency...

...impacts users’ 2D sketching abilities. We conducted a lab study (N=20) investigating different combinations of hand and pen transparency. Our results show that a more transparent pen helps one sketch more quickly, while a transparent hand slows sketching down. Further, we found that transparency improves sketching accuracy while drawing in the direction that is occupied by the user’s hand.

VR4Sec: 1st International Workshop on Security for XR and XR for Security

S.Schneegass, M.Khamis, F.Alt, U.Gruenefeld, K.Marky, A.Saad, J.Liebers, J.Auda, F.Mathis, and L.Mecke
Published Workshop at SOUPS '21, Virtual Event

Augmented and Virtual Reality (XR) technologies are finding their way into users’ everyday life. Contexts of use include home entertainment and professional collaboration, among others. The increasing interest in XR technology raises a need for the community to focus more strongly on XR aspects related to usable security and privacy. Additionally, XR technologies provide promising opportunities to study usable security and privacy topics...

...that exist or emerge in the real world in a simulated manner. However, it remains relatively unexplored which research challenges and opportunities arise out of XR technology used in a variety of different contexts. In this workshop, we will bring together experts from the fields of usable security, augmented/virtual reality, and human-computer interaction as well as people interested in these topics to discuss current challenges and derive promising research directions that can inform and augment security and privacy research.


Understanding User Identification in Virtual Reality Through Behavioral Biometrics and the Effect of Body Normalization

J.Liebers, M.Abdelaziz, L.Mecke, A.Saad, J.Auda, U.Gruenefeld, F.Alt, and S.Schneegass
Published Paper at ACM CHI '21, Yokohama, Japan

Virtual Reality (VR) is becoming increasingly popular both in the entertainment and professional domains. Behavioral biometrics have recently been investigated as a means to continuously and implicitly identify users in VR. Applications in VR can specifically benefit from this, for example, to adapt virtual environments and user interfaces...

...as well as to authenticate users. In this work, we conduct a lab study (N=16) to explore how accurately users can be identified during two task-driven scenarios based on their spatial movement. We show that an identification accuracy of up to 90% is possible across sessions recorded on different days. Moreover, we investigate the role of users' physiology in behavioral biometrics by virtually altering and normalizing their body proportions. We find that body normalization in general increases the identification rate, in some cases by up to 38%; hence, it improves the performance of identification systems.
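
As a simplified illustration of body normalization, the snippet below uniformly rescales tracked joint positions to a reference body height; the study's actual remapping of body proportions is more involved, so treat this as an assumption-laden sketch.

    # Simplified body normalization: uniformly rescale tracked joint
    # positions to a reference height.
    import numpy as np

    def normalize_height(joints, user_height, reference_height=1.75):
        """joints: (frames, n_joints, 3) positions in metres, floor origin."""
        return np.asarray(joints) * (reference_height / user_height)

    rng = np.random.default_rng(0)
    frames = rng.random((100, 25, 3)) * 1.9   # fake motion-capture data
    print(normalize_height(frames, user_height=1.90).max())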


Understanding Bystanders’ Tendency to Shoulder Surf Smartphones Using 360-degree Videos in Virtual Reality

A.Saad, J.Liebers, U.Gruenefeld, F. Alt, and S.Schneegass
Published Paper at MobileHCI '21, Toulouse, France

Shoulder surfing is an omnipresent risk for smartphone users. However, investigating these attacks in the wild is difficult because of either privacy concerns, lack of consent, or the fact that asking for consent would influence people’s behavior (e.g., they could try to avoid looking at smartphones). Thus, we propose utilizing 360-degree videos in Virtual Reality (VR), recorded in staged real-life situations on public transport...

...Despite differences between perceiving videos in VR and experiencing real-world situations, we believe this approach allows novel insights to be gained into observers’ tendency to shoulder surf another person’s phone authentication and interaction. By conducting a study (N=16), we demonstrate that a better understanding of shoulder surfers’ behavior can be obtained by analyzing gaze data during video watching and comparing it to post-hoc interview responses. On average, participants looked at the phone for about 11% of the time it was visible and could remember half of the applications used.

2020



Demystifying Deep Learning: Developing and Evaluating a User-Centered Learning App for Beginners to Gain Practical Experience

S.Schultze, U.Gruenefeld, and S.Boll
Published Journal Article at i-com '21

Deep Learning has revolutionized Machine Learning, enhancing our ability to solve complex computational problems. From image classification to speech recognition, the technology can be beneficial in a broad range of scenarios. However, the barrier to entry is quite high, especially when programming skills are missing...

... In this paper, we present the development of a learning application that is easy to use, yet powerful enough to solve practical Deep Learning problems. We followed the human-centered design approach and conducted a technical evaluation to identify solvable classification problems. Afterwards, we conducted an online user evaluation to gain insights on users' experience with the app, and to understand positive as well as negative aspects of our implemented concept. Our results show that participants liked using the app and found it useful, especially for beginners. Nonetheless, future iterations of the learning app should step-wise include more features to support advancing users.


Behind the Scenes: Comparing X-Ray Visualization Techniques in Head-mounted Optical See-through Augmented Reality

U.Gruenefeld, Y.BrĂĽck, and S.Boll
Published Paper at ACM MUM '20, Essen, Germany

Locating objects in the environment can be a difficult task, especially when the objects are occluded. With Augmented Reality, we can alter our perceived reality by augmenting it with visual cues or removing visual elements of reality, helping users to locate occluded objects. However, to our knowledge, it has not yet been evaluated which visualization technique works best for estimating the distance and size of occluded objects...

...in optical see-through head-mounted Augmented Reality. To address this, we compare four different visualization techniques derived from previous work in a laboratory user study. Our results show that techniques utilizing additional aid (textual or with a grid) help users to estimate the distance to occluded objects more accurately. In contrast, a realistic rendering of the scene, such as a cutout in the wall, resulted in higher distance estimation errors.


SaVR: Increasing Safety in Virtual Reality Environments via Electrical Muscle Stimulation

S.Faltaous, J.Neuwirth, U.Gruenefeld, and S.Schneegass
Published Paper at ACM MUM '20, Essen, Germany

One of the main benefits of interactive Virtual Reality (VR) applications is that they provide a high sense of immersion. As a result, users lose their sense of real-world space which makes them vulnerable to collisions with real-world objects. In this work, we propose a novel approach to prevent such collisions using Electrical Muscle Stimulation (EMS)...

... EMS actively prevents the movement that would result in a collision by actuating the antagonist muscle. We report on a user study comparing our approach to the commonly used feedback modalities: audio, visual, and vibro-tactile. Our results show that EMS is a promising modality for restraining user movement and, at the same time, rated best in terms of user experience.


Time is money! Evaluating Augmented Reality Instructions for Time-Critical Assembly Tasks

J.Illing, P.Klinke, U.Gruenefeld, M.Pfingsthorn, and W.Heuten
Published Paper at ACM MUM '20, Essen, Germany

Manual assembly tasks require workers to precisely assemble parts in 3D space. Often, additional time pressure increases the complexity of these tasks even further (e.g., adhesive bonding processes). Therefore, we investigate how Augmented Reality (AR) can improve workers' performance in time- and space-dependent process steps. In a user study, we compare three conditions: instructions presented on (a) paper, (b) a camera-based see-through tablet, and (c) a head-mounted AR device...

...For the instructions, we used selected work steps from a standardized adhesive bonding process as a representative for common time-critical assembly tasks. We found that instructions in AR can improve the performance and understanding of temporal and spatial factors. The tablet instruction condition showed the best subjective results among the participants, which can increase motivation, particularly among less-experienced workers.


It Takes Two To Tango: Conflicts Between Users on the Reality-Virtuality Continuum and Their Bystanders

J.Auda, U.Gruenefeld, and S.Mayer
Published Workshop Paper at ACM ISS '20, Lisbon, Portugal

Over the last years, Augmented and Virtual Reality technology became more immersive. However, when users immerse themselves in these digital realities, they detach from their real-world environments. This detachment creates conflicts that are problematic in public spaces such as planes but also in private settings. Consequently, on the one hand, detaching from the world creates an immersive experience for the user; on the other hand, it creates a social conflict with bystanders...

...With this work, we highlight and categorize social conflicts caused by using immersive digital realities. We first present different social settings in which social conflicts arise and then provide an overview of investigated scenarios. Finally, we present research opportunities that help to address social conflicts between immersed users and bystanders.


EasyEG: A 3D-printable Brain-Computer Interface

J.Auda, R.Heger, T.Kosch, U.Gruenefeld, and S.Schneegass
Published Poster at ACM UIST '20, Minnesota, USA

Brain-Computer Interfaces (BCIs) are progressively adopted by the consumer market, making them available for a variety of use-cases. However, off-the-shelf BCIs are limited in their adjustments towards individual head shapes, evaluation of scalp-electrode contact, and extension through additional sensors. This work presents EasyEG, a BCI headset that is adaptable to individual head shapes and offers adjustable electrode-scalp contact to improve measuring quality...

...EasyEG consists of 3D-printed and low-cost components that can be extended by additional sensing hardware, hence expanding the application domain of current BCIs. We conclude with use-cases that demonstrate the potential of our EasyEG headset.


Demystifying Deep Learning: Developing a Learning App for Beginners to Gain Practical Experience

S.Schultze, U.Gruenefeld, and S.Boll
Published Workshop Paper at MuC '20, Magdeburg, Germany

Deep learning has revolutionized machine learning, enhancing our ability to solve complex computational problems. From image classification to speech recognition, the technology can be beneficial in a broad range of scenarios. However, the barrier to entry is quite high, especially when programming skills are missing. In this paper, we present the development of a learning application for beginners...

...that is easy to use, yet powerful enough to solve practical deep learning problems. We followed the human-centered design approach and conducted a technical evaluation to identify solvable classification problems. In the future, we plan to conduct a user study to evaluate our learning application online.


Mind the ARm: Realtime Visualization of Robot Motion Intent in Head-mounted Augmented Reality

U.Gruenefeld, L.Prädel, J.Illing, T.C.Stratmann, S.Drolshagen, and M.Pfingsthorn
Published Paper at MuC '20, Magdeburg, Germany

Established safety sensor technology shuts down industrial robots when a collision is detected, causing preventable loss of productivity. To minimize downtime, we implemented three Augmented Reality (AR) visualizations (Path, Preview, and Volume) which allow users to understand robot motion intent and give way to the robot. We compare the different visualizations in a user study in which a small cognitive task is performed...

...in a shared workspace. We found that Preview and Path required significantly longer head rotations to perceive robot motion intent. Volume, however, required the shortest head rotation and was perceived as the safest, enabling closer proximity to the robot arm before one left the shared workspace, without causing shutdowns.


HiveFive: Immersion Preserving Attention Guidance in Virtual Reality

D.Lange, T.C.Stratmann, U.Gruenefeld, and S.Boll
Published Paper at ACM CHI '20, Honolulu, Hawaii

Recent advances in Virtual Reality (VR) technology, such as larger fields of view, have made VR increasingly immersive. However, a larger field of view often results in a user focusing on certain directions and missing relevant content presented elsewhere on the screen. With HiveFive, we propose a technique that uses swarm motion to guide user attention...

...in VR. The goal is to seamlessly integrate directional cues into the scene without losing immersiveness. We evaluate HiveFive in two studies. First, we compare biological motion (from a prerecorded swarm) with non-biological motion (from an algorithm), finding further evidence that humans can distinguish between these motion types and that, contrary to our hypothesis, non-biological swarm motion results in significantly faster response times. Second, we compare HiveFive to four other techniques and show that it not only results in fast response times but also has the smallest negative effect on immersion.
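
HiveFive's exact swarm parameters and guidance behaviour are detailed in the paper; the sketch below only shows the generic boids-style core that such a swarm cue builds on, with an added attraction term pulling the swarm toward the object that should receive attention (all gains are assumptions).

    # Generic boids-style swarm update (illustrative, not HiveFive's
    # exact parameters).
    import numpy as np

    rng = np.random.default_rng(0)
    pos = rng.random((50, 3))                      # 50 swarm agents
    vel = rng.normal(scale=0.01, size=(50, 3))

    def step(pos, vel, target, dt=1 / 90):         # 90 Hz VR frame rate
        cohesion = pos.mean(axis=0) - pos          # steer toward flock centre
        diff = pos[:, None] - pos[None, :]
        dist = np.linalg.norm(diff, axis=-1) + 1e-9
        separation = (diff / dist[..., None] ** 2).sum(axis=1)  # avoid crowding
        alignment = vel.mean(axis=0) - vel         # match neighbours' velocity
        attraction = target - pos                  # pull toward the cue target
        vel = 0.99 * vel + dt * (0.5 * cohesion + 0.05 * separation
                                 + 0.3 * alignment + 0.8 * attraction)
        return pos + vel * dt, vel

    target = np.array([2.0, 1.5, 0.0])             # object to guide gaze to
    for _ in range(600):
        pos, vel = step(pos, vel, target)
    print(np.linalg.norm(pos.mean(axis=0) - target))  # swarm near the target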

2019



Locating Nearby Physical Objects in Augmented Reality

U.Gruenefeld, L.Prädel, and W.Heuten
Published Paper at ACM MUM '19, Pisa, Italy

Locating objects in physical environments can be an exhausting and frustrating task, particularly when these objects are out of the user's view or occluded by other objects. With recent advances in Augmented Reality (AR), these environments can be augmented to visualize objects for which the user searches. However, it is currently unclear which visualization strategy can best support users in locating these objects.

In this paper, we compare a printed map to three different AR visualization strategies: (1) in-view visualization, (2) out-of-view visualization, and (3) the combination of in-view and out-of-view visualizations. Our results show that in-view visualization reduces error rates for object selection accuracy, while additional out-of-view object visualization improves users' search time performance. However, combining in-view and out-of-view visualizations leads to visual clutter, which distracts users.

VRoad: Gesture-based Interaction Between Pedestrians and Automated Vehicles in Virtual Reality

U.Gruenefeld, S.Weiß, A.Löcken, I.Virgilio, A.Kun, and S.Boll
Published Poster at AutoUI '19, Utrecht, Netherlands

As a third party to both automated and non-automated vehicles, pedestrians are among the most vulnerable participants in traffic. Currently, there is no way for them to communicate their intentions to an automated vehicle (AV). In this work, we explore the interactions between pedestrians and AVs at unmarked crossings. We propose...

...a virtual reality testbed, in which we conducted a pilot study to compare three conditions: crossing a street before a car that (1) does not give information, (2) displays its locomotion, or (3) displays its locomotion and reacts to pedestrians' gestures. Our results show that gestures introduce a new point of failure, which can increase pedestrians' insecurity. However, communicating the vehicle's locomotion supports pedestrians, helping them to make safer decisions.

Improving Search Time Performance for Locating Out-of-View Objects in Augmented Reality

U.Gruenefeld, L.Prädel, and W.Heuten
Published Paper at MuC '19, Hamburg, Germany
🏆 Honorable Mention Award

Locating virtual objects (e.g., holograms) in head-mounted Augmented Reality (AR) can be an exhausting and frustrating task. This is mostly due to the limited field of view of current AR devices, which amplifies the problem of objects receding from view. In previous work, EyeSee360...

...was developed to address this problem by visualizing the locations of multiple out-of-view objects. However, on small field-of-view devices such as the HoloLens, EyeSee360 adds a lot of visual clutter that may negatively affect user performance. In this work, we compare three variants of EyeSee360 with different levels of information (assistance) to evaluate to what extent they add visual clutter and thereby negatively affect search time performance. Our results show that variants of EyeSee360 with less assistance result in faster search times.

ChalkboARd: Exploring Augmented Reality for Public Displays

U.Gruenefeld, T.Wolff, N.Diekmann, M.Koelle, and W.Heuten
Published Paper at ACM PerDis '19, Palermo, Italy

Augmented Reality (AR) devices and applications are gaining in popularity and, with recent trends such as Pokémon Go, are venturing into public spaces where they become more and more pervasive. In consequence, public AR displays might soon be part of our cityscapes and may impact our everyday view of the world. In this work, we present ChalkboARd, a prototype of an AR-enabled public display...

...that seamlessly integrates into its environment. We investigate the influence of our system on the attention of bystanders in a field study (N=20). The field deployment of ChalkboARd provides evidence that public AR displays need to be interactive and adaptive to their surroundings, while at the same time taking privacy issues into account. Nevertheless, ChalkboARd was received positively by the participants, which highlights the (hidden) potential of public AR displays.

Comparing Techniques for Visualizing Moving Out-of-View Objects in Head-mounted Virtual Reality

U.Gruenefeld, I.Koethe, D.Lange, S.Weiß, and W.Heuten
Published Short Paper at IEEE VR '19, Osaka, Japan

Current head-mounted displays (HMDs) have a limited field-of-view (FOV). A limited FOV further decreases the already restricted human visual range and amplifies the problem of objects receding from view (e.g., opponents in computer games). However, there is no previous work that investigates how to best perceive moving out-of-view objects...

...on head-mounted displays. In this paper, we compare two visualization approaches, (1) Overview+detail with 3D Radar and (2) Focus+context with EyeSee360, in a user study to evaluate their performance for visualizing moving out-of-view objects. We found that using 3D Radar resulted in a significantly lower movement estimation error and higher usability, measured by the System Usability Scale. 3D Radar was also preferred by 13 out of 15 participants for visualizing moving out-of-view objects.

2018


Mobile Bridge - A Portable Design Simulator for Ship Bridge Interfaces

T.C.Stratmann, U.Gruenefeld, J.Stratmann, S.Schweigert, A.Hahn, and S.Boll
Published Journal Article at TransNav '19

Developing new software components for ship bridges is challenging, mostly due to the high costs of testing these components in realistic environments. To reduce these costs, the development process is divided into different stages, with the final test on a real ship bridge being the last step in this process. However, by dividing the development process...

...into different stages, new components have to be adapted to each stage individually. To improve the process, we propose a mobile ship bridge system that fully supports the development process from lab studies to tests in realistic environments. Our system allows developing new software components in the lab and setting them up on a ship bridge without interfering with the vessel's navigational systems. To this end, it is linked to a NaviBox to receive necessary information such as GPS, AIS, compass, and radar data. Our system is embedded in LABSKAUS, a test bed for the safety assessment of new e-Navigation systems.

Ensuring Safety in Augmented Reality from Trade-off Between Immersion and Situation Awareness

J.Jung, H.Lee, J.Choi, A.Nanda, U. Gruenefeld, T.C.Stratmann, and W. Heuten
Published Paper at IEEE ISMAR '18, Munich, Germany

Although the mobility and emerging technology of augmented reality (AR) have brought significant entertainment and convenience to everyday life, the use of AR is becoming a social problem, as accidents caused by a lack of situation awareness due to immersion in AR are increasing. In this paper, we address the trade-off between immersion and situation awareness as the fundamental factor behind AR-related accidents.

As a solution to this trade-off, we propose a third-party component that prevents pedestrian-vehicle accidents in a traffic environment based on vehicle position estimation (VPE) and vehicle position visualization (VPV). From an RGB image sequence, VPE efficiently estimates the relative 3D position between a user and a car using a convolutional neural network (CNN) model with a region-of-interest-based scheme. VPV shows the estimated car position as a dot using an out-of-view object visualization method to alert the user to possible collisions.

The VPE experiment with 16 combinations of parameters showed that the InceptionV3 model, fine-tuned on activated images, yields the best performance with a root mean squared error of 0.34 m in 2.1 ms. The user study of VPV showed an inversely proportional relationship between immersion, controlled by the difficulty of the AR game, and the frequency of situation awareness, both quantitatively and qualitatively. An additional VPV experiment assessing two out-of-view object visualization methods (EyeSee360 and Radar) showed no significant effect on the participants' activity, while EyeSee360 yielded faster responses and Radar was preferred by participants on average. Our field study demonstrated an integration of VPE and VPV that has potential for safety-ensured immersion when the proposed component is used for AR in daily life. We expect that once the proposed component is mature enough for real-world use, it will contribute to safety-ensured AR as well as to the popularization of AR.
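
For readers curious how such a position regressor can be set up, the following minimal sketch fine-tunes an ImageNet-pretrained InceptionV3 backbone with a 3D-position head, assuming TensorFlow/Keras; the input size, head layout, and loss are illustrative assumptions, not the exact configuration from the paper.

```python
# Minimal sketch of a VPE-style regressor: an ImageNet-pretrained
# InceptionV3 backbone with a small head that outputs the relative
# (x, y, z) position of a vehicle in meters. Details are assumptions.
import tensorflow as tf

def build_vpe_model(input_shape=(299, 299, 3)):
    backbone = tf.keras.applications.InceptionV3(
        include_top=False, weights="imagenet",
        input_shape=input_shape, pooling="avg")
    xyz = tf.keras.layers.Dense(3, name="xyz")(backbone.output)
    model = tf.keras.Model(backbone.input, xyz)
    # Squared-error loss so evaluation matches an RMSE-style metric.
    model.compile(optimizer="adam", loss="mse",
                  metrics=[tf.keras.metrics.RootMeanSquaredError()])
    return model
```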

Guiding Smombies: Augmenting Peripheral Vision with Low-Cost Glasses to Shift the Attention of Smartphone Users

U.Gruenefeld, T.C.Stratmann, J.Jung, H.Lee, J.Choi, A.Nanda, and W.Heuten
Published Poster at IEEE ISMAR '18, Munich, Germany

Over the past few years, playing Augmented Reality (AR) games on smartphones has steadily been gaining in popularity (e.g., Pokémon Go). However, playing these games while navigating traffic is highly dangerous and has led to many accidents in the past. In our work, we aim to augment the peripheral vision of pedestrians with low-cost glasses to support them in critical traffic encounters. Therefore, we developed a lo-fi prototype with peripheral displays. We technically improved...

...the prototype with the help of five usability experts. Afterwards, we conducted an experiment on a treadmill to evaluate the effectiveness of collision warnings in our prototype. During the experiment, we compared three different light stimuli (instant, pulsing, and moving) with regard to response time, error rate, and subjective feedback. Overall, we could show that all light stimuli were suitable for shifting the users' attention (100% correct). However, the moving light resulted in significantly faster response times and was subjectively perceived best.
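
To illustrate how the three light stimuli differ, the hypothetical sketch below computes per-LED intensities over time; the number of LEDs, the period, and the exact waveforms are assumptions for illustration only, not the prototype's actual parameters.

```python
def stimulus_intensities(pattern, t, n_leds=8, period=1.0):
    """Per-LED intensities (0..1) at time t (seconds) for three patterns."""
    if pattern == "instant":   # all LEDs switch fully on at once
        return [1.0] * n_leds
    if pattern == "pulsing":   # all LEDs blink on and off together
        on = (t % period) < period / 2
        return [1.0 if on else 0.0] * n_leds
    if pattern == "moving":    # a single lit LED sweeps across the display
        active = int((t % period) / period * n_leds)
        return [1.0 if i == active else 0.0 for i in range(n_leds)]
    raise ValueError(f"unknown pattern: {pattern}")
```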

Juggling 4.0: Learning Complex Motor Skills with Augmented Reality Through the Example of Juggling

B.Meyer, P.Gruppe, B.Cornelsen, T.C.Stratmann, U.Gruenefeld, and S.Boll
Published Poster at ACM UIST '18, Berlin, Germany

Learning new motor skills is a problem that people are constantly confronted with (e.g., learning a new kind of sport). In our work, we investigate to what extent the learning process of a motor sequence can be optimized with the help of Augmented Reality as a technical assistant. Therefore, we propose an approach that divides the problem...

...into three tasks: (1) tracking the necessary movements, (2) creating a model that calculates possible deviations, and (3) implementing a visual feedback system. To evaluate our approach, we implemented the idea using infrared depth sensors and an Augmented Reality head-mounted device (Hololens). Our results show that the system can provide effective assistance for the correct height of a throw with one ball. Furthermore, it provides a basis for supporting a complete juggling sequence.

Identification of Out-of-View Objects in Virtual Reality

U.Gruenefeld, R.von Bargen, and W.Heuten
Published Poster at ACM SUI '18, Berlin, Germany

Current Virtual Reality (VR) devices have limited fields-of-view (FOV). A limited FOV amplifies the problem of objects receding from view. In previous work, different techniques have been proposed to visualize the positions of objects out of view. However, these techniques do not allow users to identify these objects. In this work, we compare three different ways...

...of identifying out-of-view objects. Our user study shows that participants prefer to have the identification always visible.

Investigations on Container Ship Berthing from the Pilot’s Perspective: Accident Analysis, Ethnographic Study, and ...

U. Gruenefeld, T.C.Stratmann, Y.Brueck, A.Hahn, S.Boll, and W.Heuten
Published Journal Article at TransNav '19

In recent years, container ships have had to transport more and more goods due to constantly growing demand. The ships carrying these goods are therefore growing in size, while harbors fall short in adapting to these changes. As a result, berthing these container ships in harbors has become more challenging for harbor pilots.

In this work, we identify problems and risks that pilots are confronted with during the berthing process. First, we analyzed approximately 1500 accident reports from six different transportation safety authorities and identified their major causes. Second, we conducted an ethnographic study with harbor pilots in Hamburg to observe their actions. Third, we gained more specific insights into pilots' environments and communications through an online survey of 30 harbor pilots from different European countries. We conclude our work with recommendations on how to reduce problems and risks during the berthing of container vessels.

Where to Look: Exploring Peripheral Cues for Shifting Attention to Spatially Distributed Out-of-View Objects

U. Gruenefeld, A.Löcken, Y.Brueck, S.Boll, and W.Heuten
Published Paper at ACM AutoUI '18, Toronto, Canada

Knowing the locations of spatially distributed objects is important in many different scenarios (e.g., driving a car and being aware of other road users). In particular, it is critical for preventing accidents with objects that come too close (e.g., cyclists or pedestrians). In this paper, we explore how peripheral cues can shift a user's attention towards spatially distributed out-of-view objects.

We identify a suitable technique for visualizing these out-of-view objects and explore different cue designs to advance this technique for shifting the user's attention. In a controlled lab study, we investigate non-animated peripheral cues with audio stimuli and animated peripheral cues without audio stimuli. Further, we looked into how users identify out-of-view objects. Our results show that shifting the user's attention takes only about 0.86 seconds on average when animated stimuli are used, while shifting attention with non-animated stimuli takes an average of 1.10 seconds.

Beyond Halo and Wedge: Visualizing Out-of-View Objects on Head-mounted Virtual and Augmented Reality Devices

U.Gruenefeld, A.El Ali, S.Boll, and W.Heuten
Published Paper at ACM MobileHCI '18, Barcelona, Spain

Head-mounted devices (HMDs) for Virtual and Augmented Reality (VR/AR) enable us to alter our visual perception of the world. However, current devices suffer from a limited field of view (FOV), which becomes problematic when users need to locate out-of-view objects (e.g., locating points-of-interest during sightseeing). To address this, ...

...we developed HaloVR, WedgeVR, HaloAR, and WedgeAR, inspired by usable 2D off-screen object visualization techniques (Halo, Wedge), and evaluated them in two studies. While our techniques resulted in overall high usability, we found the choice of AR or VR impacts mean search time (VR: 2.25s, AR: 3.92s) and mean direction estimation error (VR: 21.85˚, AR: 32.91˚). Moreover, while adding more out-of-view objects significantly affects search time across VR and AR, direction estimation performance remains unaffected. We provide implications and discuss the challenges of designing for VR and AR HMDs.
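
The 2D intuition behind the Halo-based variants can be sketched as follows: a ring is centered on the off-screen object with a radius chosen so that a small arc intrudes into the viewport, letting the arc's curvature encode distance. The sketch below shows the classic 2D formulation these VR/AR techniques build on, not the 3D implementation itself; the intrusion depth is an illustrative assumption.

```python
import math

def halo_radius(obj, viewport, intrusion=30.0):
    """Radius (px) so the halo around an off-screen object reaches
    `intrusion` px into the viewport (xmin, ymin, xmax, ymax)."""
    x, y = obj
    xmin, ymin, xmax, ymax = viewport
    # Distance from the object to the nearest point of the viewport;
    # zero along an axis where the object is already within bounds.
    dx = max(xmin - x, 0.0, x - xmax)
    dy = max(ymin - y, 0.0, y - ymax)
    return math.hypot(dx, dy) + intrusion  # flatter visible arc = farther object
```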

RadialLight: Exploring Radial Peripheral LEDs for Directional Cues in Head-Mounted Displays

U.Gruenefeld, T.C.Stratmann, A.El Ali, S.Boll, and W.Heuten
Published Paper at ACM MobileHCI '18, Barcelona, Spain

Current head-mounted displays (HMDs) for Virtual Reality (VR) and Augmented Reality (AR) have a limited field-of-view (FOV). This limited FOV further decreases the already restricted human visual range and amplifies the problem of objects going out of view. Therefore, we explore the utility of augmenting HMDs with RadialLight, ...

...a peripheral light display implemented as 18 radially positioned LEDs around each eye to cue direction towards out-of-view objects. We first investigated direction estimation accuracy of multi-colored cues presented on one versus two eyes. We then evaluated direction estimation accuracy and search time performance for locating out-of-view objects in two representative 360˚ video VR scenarios. Key findings show that participants could not distinguish between LED cues presented to one or both eyes simultaneously, participants estimated LED cue direction within a maximum 11.8˚ average deviation, and out-of-view objects in less distracting scenarios were selected faster. Furthermore, we provide implications for building peripheral HMDs.
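
A minimal sketch of the cueing logic: treat the object's angular offset from the gaze direction as a 2D vector and light the LED closest to that radial direction. Treating (yaw, pitch) offsets as vector components and the angle convention are simplifying assumptions for illustration.

```python
import math

def led_for_direction(yaw_off_deg, pitch_off_deg, n_leds=18):
    """Pick the index of the radial LED (0 = right, counterclockwise)
    closest to the direction of an out-of-view object."""
    angle = math.degrees(math.atan2(pitch_off_deg, yaw_off_deg)) % 360.0
    return round(angle / (360.0 / n_leds)) % n_leds

print(led_for_direction(30.0, 30.0))  # object up and to the right -> LED 2
```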

MonoculAR: A Radial Light Display to Point Towards Out-of-View Objects on Augmented Reality Devices

U.Gruenefeld, T.C.Stratmann, L.Prädel, and W.Heuten
Published Poster at ACM MobileHCI '18, Barcelona, Spain

Present head-mounted displays (HMDs) for Augmented Reality (AR) have narrow fields-of-view (FOV). The narrow FOV further decreases the already limited human visual range and worsens the problem of objects going out of view. Therefore, we explore the utility of augmenting head-mounted AR devices with MonoculAR, ...

...a peripheral light display comprised of twelve radially positioned light cues, to point towards out-of-view objects. In this work, we present two implementations of MonoculAR: (1) on-screen virtual light cues and (2) off-screen LEDs. In a controlled user study, we compare both approaches and evaluate search time performance for locating out-of-view objects in AR on the Microsoft Hololens. Key results show that participants find out-of-view objects faster when the light cues are presented on the screen. Furthermore, we provide implications for building peripheral HMDs.


BuildingBlocks: Head-mounted Virtual Reality for Robot Interaction in Large Non-Expert Audiences

U. Gruenefeld, T.C.Stratmann, L.Prädel, M.Pfingsthorn, and W.Heuten
Published Workshop Paper at ACM MobileHCI '18, Barcelona, Spain

Virtual Reality (VR) technology empowers users to experience and manipulate virtual environments in a novel way. Further, by using digital twins of real-world objects, it is also possible to extend the reach of interaction to reality. In this work, we explore how users interact with and program a robot arm by using a digital representation in VR. In particular, we were interested...

...in how public spaces influence these interactions. As a preliminary outcome in this direction, we present a simple application called BuildingBlocks, which allows any member of the public to assemble a work order for a robot with almost no instruction. This application was tested anecdotally during an industry fair with 235 active participants.

Augmenting Augmented Reality

U.Gruenefeld, T.C.Stratmann, J.Auda, M.Koelle, S.Schneegass, and W.Heuten
Published Tutorial at ACM MobileHCI '18, Barcelona, Spain

Today’s Augmented Reality (AR) devices enable users to interact almost naturally with their surroundings, e.g., by pinning digital content onto real-world objects. However, current AR displays are mostly limited to optical and video see-through technologies. Nevertheless, extending Augmented Reality (AR) beyond screens by accommodating additional modalities (e.g., smell or haptics)...

...or additional visuals (e.g., peripheral light) has recently become a trend in HCI. During this half-day tutorial, we provide beginner-level, hands-on instructions for augmenting an Augmented Reality application using peripheral hardware to generate multisensory stimuli.

Effective Visualization of Time-Critical Notifications in Virtual Reality

U.Gruenefeld, M.Harre, T.C.Stratmann, A.Lüdtke, and W.Heuten
Published Paper at MuC '18, Dresden, Germany

Virtual Reality (VR) devices empower users to be fully immersed in a virtual environment. However, time-critical notifications must be perceived as quickly and correctly as possible, especially if they indicate a risk of injury (e.g., bumping into walls). Compared to the displays used in previous work to investigate fast response times, immersion...

...in a virtual environment, a wider field of view, and the use of near-eye displays observed through lenses may have a considerable impact on the perception of time-critical notifications. Therefore, we studied the effectiveness of different visualization types (color, shape, size, text, number) in two different setups (room-scale, standing-only) with 20 participants in VR. Our study consisted of a part where we tested a single notification and a part with multiple notifications showing up at the same time. We measured reaction time, correctness, and subjective user evaluation. Our results showed that the visualization types can be organized into a consistent effectiveness ranking across different numbers of notification elements presented. Further, we offer recommendations regarding which visualization type to use for time-critical notifications in future VR applications.

EyeMR - Low-cost Eye-Tracking for Rapid-prototyping in Head-mounted Mixed Reality

T.C.Stratmann, U.Gruenefeld, and S.Boll
Published Poster at ACM ETRA '18, Warsaw, Poland

Mixed Reality devices can either augment reality (AR) or create completely virtual realities (VR). Combined with head-mounted devices and eye-tracking, they enable users to interact with these systems in novel ways. However, current eye-tracking systems are expensive and limited in their interaction with virtual content. In this paper, we present EyeMR, ...

...a low-cost system (below $100) that enables researchers to rapidly prototype new techniques for eye and gaze interactions. Our system supports mono- and binocular tracking (using Pupil Capture) and includes a Unity framework to support the fast development of new interaction techniques. We argue for the usefulness of EyeMR based on the results of a user evaluation with HCI experts.
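
For a sense of how a prototype can consume the tracker's output, here is a sketch that subscribes to pupil data via Pupil Capture's ZeroMQ network API; the default port and field names follow Pupil's documented message format, but treat them as assumptions that may vary across versions.

```python
# Sketch: subscribe to pupil data published by Pupil Capture over ZeroMQ.
# Assumes Pupil Remote on its default port 50020; field names may vary.
import zmq
import msgpack

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")   # Pupil Remote
remote.send_string("SUB_PORT")            # ask where data is published
sub_port = remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "pupil.")  # per-eye pupil datums

while True:
    topic, payload = sub.recv_multipart()
    datum = msgpack.unpackb(payload, raw=False)
    # norm_pos: pupil position normalized to the eye image (0..1).
    print(topic.decode(), datum["confidence"], datum["norm_pos"])
```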

FlyingARrow: Pointing Towards Out-of-View Objects on Augmented Reality Devices

U.Gruenefeld, D.Lange, L.Hammer, S.Boll, and W.Heuten
Published Paper at ACM PerDis '18, Munich, Germany

Augmented Reality (AR) devices empower users to enrich their surroundings by pinning digital content onto real world objects. However, current AR devices suffer from having small fields of view, making the process of locating spatially distributed digital content similar to looking through a keyhole. Previous solutions are not suitable...

...to address the problem of locating digital content out of view on small field-of-view devices because of visual clutter. Therefore, we developed FlyingARrow, which consists of a visual representation that flies on demand from the user's line of sight toward the position of the out-of-view object and plays an acoustic signal over headphones when it is reached. We compared our technique with the out-of-view object visualization technique EyeSee360 and found that it resulted in higher usability and lower workload. However, FlyingARrow performed slightly worse with respect to search time and direction error. Furthermore, we discuss the challenges and opportunities of combining visual and acoustic representations to overcome visual clutter.

Exploring Vibrotactile and Peripheral Cues for Spatial Attention Guidance

T.C.Stratmann, A.Löcken, U.Gruenefeld, W.Heuten, and S.Boll
Published Paper at ACM PerDis '18, Munich, Germany

For decision making in monitoring and control rooms, situation awareness is key. Given the often spacious and complex environments, simple alarms are not sufficient for attention guidance (e.g., on ship bridges). In our work, we explore shifting attention towards the location of relevant entities in large cyber-physical systems. Therefore, we used...

...pervasive displays: tactile displays on both upper arms and a peripheral display. With these displays, we investigated attention shifting in a seated and a standing scenario. In a first user study, we evaluated four distinct cue patterns for each on-body display, testing seated monitoring limited to 90° in front of the user. In a second study, we continued with the two patterns from the first study that were perceived as least and most urgent. Here, we investigated standing monitoring in a 360° environment. We found that tactile cues led to faster arousal times than visual cues, whereas attention shifts were faster for visual cues than for tactile ones.

EyeSeeX: Visualization of Out-of-View Objects on Small Field-of-View Augmented and Virtual Reality Devices

U.Gruenefeld, D.Hsiao, and W.Heuten
Published Demo at ACM PerDis '18, Munich, Germany

Recent advances in Virtual and Augmented Reality technology enable a variety of new applications (e.g., multi-player games in real environments). However, current devices suffer from having small fields of view, making the process of locating spatially distributed digital content similar to looking through a keyhole. In this work, we present EyeSeeX...

...as a technique for visualizing out-of-view objects with head-mounted devices. EyeSeeX improves upon our previously developed technique EyeSee360 for small field-of-view (FOV) devices. To do so, EyeSeeX proposes two strategies: (1) reducing the visualized field and (2) compressing the presented information. Further, EyeSeeX supports video and optical see-through Augmented Reality, Mixed Reality, and Virtual Reality devices.

2017


EyeSee360: Designing a Visualization Technique for Out-of-View Objects in Head-mounted Augmented Reality

U. Gruenefeld, D.Ennenga, A.El Ali, W.Heuten, and S.Boll
Published Paper at ACM SUI '17, Brighton, United Kingdom
🏆 Honorable Mention Award

Head-mounted displays allow users to augment reality or dive into a virtual one. However, these 3D spaces often come with problems due to objects that may be out of view. Visualizing these out-of-view objects is useful under certain scenarios, such as situation monitoring during ship docking. To address this, we designed a lo-fi prototype of our EyeSee360 system, and based on user feedback, subsequently...

...implemented EyeSee360. We evaluate our technique against well-known 2D off-screen object visualization techniques (Arrow, Halo, Wedge) adapted for head-mounted Augmented Reality, and found that EyeSee360 results in the lowest error for direction estimation of out-of-view objects. Based on our findings, we outline the limitations of our approach and discuss the usefulness of our developed lo-fi prototyping tool.
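
At its core, EyeSee360 compresses the full sphere of directions around the head into a small 2D overlay, so each out-of-view object becomes a point whose position encodes its yaw and pitch relative to the current gaze. The sketch below shows a linear version of that mapping; the overlay size and the linear compression are illustrative assumptions, not the technique's exact projection.

```python
def to_overlay(yaw_deg, pitch_deg, overlay_w=400, overlay_h=200):
    """Map a direction (yaw in [-180, 180], pitch in [-90, 90], relative
    to the gaze direction) to overlay pixels; the center is the gaze."""
    x = (yaw_deg / 180.0) * (overlay_w / 2) + overlay_w / 2
    y = (-pitch_deg / 90.0) * (overlay_h / 2) + overlay_h / 2
    return x, y

print(to_overlay(180.0, 0.0))  # directly behind the user -> right edge: (400.0, 100.0)
```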

EyeSee: Beyond Reality with Microsoft Hololens

U.Gruenefeld, D.Hsiao, W.Heuten, and S.Boll
Published Demo at ACM SUI '17, Brighton, United Kingdom

Head-mounted Augmented Reality (AR) devices allow overlaying digital information on the real world, where objects may be out of view. Visualizing these out-of-view objects is useful under certain scenarios. To address this, we developed EyeSee360 in our previous work. However, our implementation of EyeSee360 was limited to video-see-through devices.

These devices suffer from a delayed, looped camera image and decrease the human field-of-view. In this demo, we present EyeSee360 transferred to optical-see-through Augmented Reality to overcome these limitations.

Visualizing Out-of-View Objects in Head-mounted Augmented Reality

U. Gruenefeld, A.El Ali, W.Heuten, and S.Boll
Published Late-Breaking Work at ACM MobileHCI '17, Vienna, Austria

Various off-screen visualization techniques that point to off-screen objects have been developed for small screen devices. A similar problem arises with head-mounted Augmented Reality (AR) with respect to the human field-of-view, where objects may be out of view. Being able to detect so-called out-of-view objects is useful for certain scenarios...

...(e.g., situation monitoring during ship docking). To augment existing AR with this capability, we adapted and tested well-known 2D off-screen object visualization techniques (Arrow, Halo, Wedge) for head-mounted AR. We found that Halo resulted in the lowest error for direction estimation while Wedge was subjectively perceived as best. We discuss future directions of how to best visualize out-of-view objects in head-mounted AR.

PeriMR: A Prototyping Tool for Head-mounted Peripheral Light Displays in Mixed Reality

U.Gruenefeld, T.C.Stratmann, W.Heuten, and S.Boll
Published Demo at ACM MobileHCI '17, Vienna, Austria

Nowadays, Mixed and Virtual Reality devices suffer from a field of view that is too small compared to human visual perception. Although a larger field of view is useful (e.g., conveying peripheral information or improving situation awareness), technical limitations prevent the extension of the field-of-view. A way to overcome these limitations...

...is to extend the field-of-view with peripheral light displays. However, there are no tools to support the design of peripheral light displays for Mixed or Virtual Reality devices. Therefore, we present our prototyping tool PeriMR that allows researchers to develop new peripheral head-mounted light displays for Mixed and Virtual Reality.

Effects of Location and Fade-in Time of (Audio-)Visual Cues on Response Times and Success-rates in a Dual-task Experiment

A.Löcken, S.Blum, T.C.Stratmann, U.Gruenefeld, W.Heuten, S.Boll, and S.van de Par
Published Paper at ACM SAP '17, Cottbus, Germany

While users perform multiple competing tasks at the same time, e.g., when driving, assistant systems can be used to create cues that direct attention towards required information. However, poorly designed cues will interrupt or annoy users and affect their performance. Therefore, we aim to identify cues that are not missed and trigger...

...a quick reaction without changing the primary task performance. We conducted a dual-task experiment in an anechoic chamber with LED-based stimuli that faded in or turned on abruptly and were placed in the periphery or in front of a subject. Additionally, a white noise sound was triggered in a third of the trials. The primary task was to react to visual stimuli placed on a screen in front. We observed significant effects on response times in the screen task when sound was added. Further, participants responded faster to LED stimuli when they faded in.

2014


Swarming in the Urban Web Space to Discover the Optimal Region

C.Kumar, U.Gruenefeld, W.Heuten, and S.Boll
Published Paper at IEEE/WIC/ACM WI '14, Warsaw, Poland

People moving to a new place usually look for a suitable region with respect to their multiple criteria of interest. In this work, we map this problem to the migration behavior of other species, namely swarming: a collective behavior in which animals of similar size aggregate and mill about the same region. Taking the swarm intelligence perspective, we present a novel method...

...to find relevant geographic regions for citizens based on the Particle Swarm Optimization (PSO) framework. Particles represent geographic regions that move in the map space to find the region most relevant to the user's query. The characterization of geographic regions is based on the multi-criteria distribution of geo-located facilities or landscape structure from the OpenStreetMap data source. We enable end users to visualize and evaluate the regional search process of PSO via a Web interface. The proposed framework demonstrates high precision and computationally efficient performance for regional search over a vast city-based dataset.
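
To make the search procedure concrete, here is a minimal PSO sketch in which particles are candidate region centers moving in map space toward positions that score highest against the user's criteria; the score function, bounds, and PSO constants are illustrative assumptions, not the paper's values.

```python
import random

def pso_region_search(score, bounds, n_particles=30, iters=100,
                      w=0.7, c1=1.5, c2=1.5):
    """Return the (x, y) region center with the best score found.
    `score` rates a position, e.g., by nearby OpenStreetMap facilities."""
    (xmin, xmax), (ymin, ymax) = bounds
    pos = [(random.uniform(xmin, xmax), random.uniform(ymin, ymax))
           for _ in range(n_particles)]
    vel = [(0.0, 0.0)] * n_particles
    pbest = list(pos)                # each particle's best-known position
    gbest = max(pos, key=score)      # swarm-wide best-known position
    for _ in range(iters):
        for i in range(n_particles):
            p, v = pos[i], vel[i]
            r1, r2 = random.random(), random.random()
            v = tuple(w * v[d] + c1 * r1 * (pbest[i][d] - p[d])
                      + c2 * r2 * (gbest[d] - p[d]) for d in range(2))
            p = (min(max(p[0] + v[0], xmin), xmax),   # stay on the map
                 min(max(p[1] + v[1], ymin), ymax))
            pos[i], vel[i] = p, v
            if score(p) > score(pbest[i]):
                pbest[i] = p
                if score(p) > score(gbest):
                    gbest = p
    return gbest
```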

Characterizing the Swarm Movement on Map for Spatial Visualization

C.Kumar, U.Gruenefeld, W.Heuten, and S.Boll
Published Poster at IEEE TVCG '14, Paris, France

Visualizing maps to explore relevant geographic areas is common practice in spatial decision scenarios. However, visualizing geographic distributions with multidimensional criteria becomes nontrivial in the conventional point-based map space. In this work, we present a novel method to generalize from point data to spatial distributions, capitalizing on swarm intelligence.

We exploit the particle swarm optimization (PSO) framework, where particles represent geographic regions that move in the map space to find better positions with respect to the user's criteria. We track the swarm movement on the map surface to generate a relevance heatmap, which can effectively support the spatial analysis tasks of end users.
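
The heatmap step can be sketched by accumulating every particle position visited during the PSO run into a 2D histogram over the map extent; the grid resolution here is an illustrative choice, not the paper's setting.

```python
import numpy as np

def swarm_heatmap(visited_positions, bounds, grid=(100, 100)):
    """Normalize visit counts per cell: 1.0 = most-visited (most relevant)."""
    (xmin, xmax), (ymin, ymax) = bounds
    xs = [p[0] for p in visited_positions]
    ys = [p[1] for p in visited_positions]
    heat, _, _ = np.histogram2d(xs, ys, bins=grid,
                                range=[[xmin, xmax], [ymin, ymax]])
    return heat / max(heat.max(), 1.0)
```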