Uwe Gruenefeld
Hi, I'm Uwe Gruenefeld
An HCI researcher in love with Mixed Reality

2020


HiveFive: Immersion Preserving Attention Guidance
in Virtual Reality

D.Lange, T.C.Stratmann, U.Gruenefeld, S.Boll
Accepted Paper at ACM CHI '20, Honolulu, Hawaii

Recent advances in Virtual Reality (VR) technology, such as larger fields of view, have made VR increasingly immersive. However, a larger field of view often results in a user focusing on certain directions and missing relevant content presented elsewhere on the screen. With HiveFive, we propose a technique that uses swarm motion to guide user attention in VR. The goal is to seamlessly integrate directional cues into the scene without losing immersiveness. We evaluate HiveFive in two studies. First, we compare biological motion (from a prerecorded swarm) with non-biological motion (from an algorithm), finding further evidence that humans can distinguish between these motion types and that, contrary to our hypothesis, non-biological swarm motion results in significantly faster response times. Second, we compare HiveFive to four other techniques and show that it not only results in fast response times but also has the smallest negative effect on immersion.
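
The algorithmic (non-biological) swarm motion could, for example, be generated with a boids-style update. The Python sketch below illustrates the general idea (cohesion, separation, alignment, plus attraction toward the cue target); the rules, weights, and speed limits are assumptions for illustration, not the motion model used in the paper.

```python
# Illustrative boids-style swarm update (assumption: the paper's algorithmic
# motion is not necessarily implemented this way).
import numpy as np

def boids_step(pos, vel, target, dt=0.02,
               cohesion=0.5, separation=1.5, alignment=0.3, attraction=0.8):
    """Advance swarm positions/velocities by one time step.

    pos, vel: (N, 3) arrays; target: (3,) point the swarm should mill around
    (e.g., the location the user's attention should be guided to).
    """
    center = pos.mean(axis=0)
    to_center = center - pos                           # cohesion: steer to swarm center
    diff = pos[:, None, :] - pos[None, :, :]           # pairwise offsets
    dist = np.linalg.norm(diff, axis=-1) + 1e-6
    repel = (diff / dist[..., None] ** 2).sum(axis=1)  # separation: push away from neighbors
    align = vel.mean(axis=0) - vel                     # alignment: match the average heading
    attract = target - pos                             # pull the swarm toward the cue target

    vel = vel + dt * (cohesion * to_center + separation * repel
                      + alignment * align + attraction * attract)
    speed = np.linalg.norm(vel, axis=-1, keepdims=True)
    vel = vel / np.maximum(speed, 1e-6) * np.clip(speed, 0.2, 2.0)  # clamp speeds
    return pos + dt * vel, vel
```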

2019


Locating Nearby Physical Objects in Augmented Reality

U.Gruenefeld, L.Prädel, W.Heuten
Published Paper at ACM MUM '19, Pisa, Italy

Locating objects in physical environments can be an exhausting and frustrating task, particularly when these objects are out of the user's view or occluded by other objects. With recent advances in Augmented Reality (AR), these environments can be augmented to visualize objects for which the user searches. However, it is currently unclear which visualization strategy can best support users in locating these objects.

In this paper, we compare a printed map to three different AR visualization strategies: (1) in-view visualization, (2) out-of-view visualization, and (3) the combination of in-view and out-of-view visualizations. Our results show that in-view visualization reduces error rates for object selection accuracy, while additional out-of-view object visualization improves users' search time performance. However, combining in-view and out-of-view visualizations leads to visual clutter, which distracts users.

VRoad: Gesture-based Interaction Between Pedestrians and Automated Vehicles in Virtual Reality

U.Gruenefeld, S.Weiß, A.Löcken, I.Virgilio, A.Kun, S.Boll
Published Poster at AutoUI '19, Utrecht, Netherlands

As a third party to both automated and non-automated vehicles, pedestrians are among the most vulnerable participants in traffic. Currently, there is no way for them to communicate their intentions to an automated vehicle (AV). In this work, we explore the interactions between pedestrians and AVs at unmarked crossings. We propose a virtual reality testbed, in which we conducted a pilot study to compare three conditions: crossing a street before a car that (1) does not give information, (2) displays its locomotion, or (3) displays its locomotion and reacts to pedestrians' gestures. Our results show that gestures introduce a new point of failure, which can increase pedestrians' insecurity. However, communicating the vehicle's locomotion supports pedestrians, helping them to make safer decisions.

Improving Search Time Performance for Locating
Out-of-View Objects in Augmented Reality

U.Gruenefeld, L.Prädel, W.Heuten
Published Paper at MuC '19, Hamburg, Germany
Honorable Mention for Best Short Paper

Locating virtual objects (e.g., holograms) in head-mounted Augmented Reality (AR) can be an exhausting and frustrating task. This is mostly due to the limited field of view of current AR devices, which amplifies the problem of objects receding from view. In previous work, EyeSee360 was developed to address this problem by visualizing the locations of multiple out-of-view objects. However, on small field-of-view devices such as the Hololens, EyeSee360 adds a lot of visual clutter that may negatively affect user performance. In this work, we compare three variants of EyeSee360 with different levels of information (assistance) to evaluate to what extent they add visual clutter and thereby negatively affect search time performance. Our results show that variants of EyeSee360 with less assistance result in faster search times.

ChalkboARd: Exploring Augmented Reality for Public Displays

U.Gruenefeld, T.Wolff, N.Diekmann, M.Koelle, W.Heuten
Published Paper at ACM PerDis '19, Palermo, Italy

Augmented Reality (AR) devices and applications are gaining in popularity, and - with recent trends such as Pokemon Go - are venturing into public spaces where they become more and more pervasive. In consequence, public AR displays might soon be part of our cityscapes and may impact our everyday view of the world. In this work, we present ChalkboARd, a prototype of an AR-enabled public display that seamlessly integrates into its environment. We investigate the influence of our system on the attention of bystanders in a field study (N=20). The field deployment of ChalkboARd provides evidence that AR for public displays needs to be interactive and adaptive to its surroundings, while at the same time taking privacy issues into account. Nevertheless, ChalkboARd was received positively by the participants, which points to the (hidden) potential of public AR displays.

Comparing Techniques for Visualizing Moving Out-of-View Objects in Head-mounted Virtual Reality

U.Gruenefeld, I.Koethe, D.Lange, S.Weiß, W.Heuten
Published Short Paper at IEEE VR '19, Osaka, Japan

Current head-mounted displays (HMDs) have a limited field-of-view (FOV). A limited FOV further decreases the already restricted human visual range and amplifies the problem of objects receding from view (e.g., opponents in computer games). However, there is no previous work that investigates how to best perceive moving out-of-view objects on head-mounted displays. In this paper, we compare two visualization approaches: (1) Overview+detail, with 3D Radar, and (2) Focus+context, with EyeSee360, in a user study to evaluate their performance for visualizing moving out-of-view objects. We found that using 3D Radar resulted in a significantly lower movement estimation error and higher usability, measured by the system usability scale. 3D Radar was also preferred by 13 out of 15 participants for visualization of moving out-of-view objects.

2018

Mobile Bridge - A Portable Design Simulator for
Ship Bridge Interfaces

T.C.Stratmann, U.Gruenefeld, J.Stratmann, S.Schweigert, [...] S.Boll
Published Journal Article at TransNav '19

Developing new software components for ship bridges is challenging, mostly due to the high costs of testing these components in realistic environments. To reduce these costs, the development process is divided into different stages, with the final test on a real ship bridge being the last step in this process. However, by dividing the development process into different stages, new components have to be adapted to each stage individually. To improve the process, we propose a mobile ship bridge system that fully supports the development process from lab studies to tests in realistic environments. Our system allows developing new software components in the lab and setting them up on a ship bridge without interfering with the vessel's navigational systems. Therefore, it is linked to a NaviBox to obtain necessary information such as GPS, AIS, compass, and radar data. Our system is embedded in LABSKAUS, a test bed for the safety assessment of new e-Navigation systems.

Ensuring Safety in Augmented Reality from Trade-off Between Immersion and Situation Awareness

J.Jung, H.Lee, J.Choi, A.Nanda, U. Gruenefeld, [...] W. Heuten
Published Paper at IEEE ISMAR '18, Munich, Germany

Although the mobility and emerging technology of augmented reality (AR) have brought significant entertainment and convenience to everyday life, the use of AR is becoming a social problem, as accidents caused by a lack of situation awareness during immersive AR use are increasing. In this paper, we address the trade-off between immersion and situation awareness as the fundamental factor behind AR-related accidents.

As a solution to this trade-off, we propose a third-party component that prevents pedestrian-vehicle accidents in a traffic environment based on vehicle position estimation (VPE) and vehicle position visualization (VPV). From an RGB image sequence, VPE efficiently estimates the relative 3D position between a user and a car using a convolutional neural network (CNN) model with a region-of-interest-based scheme. VPV shows the estimated car position as a dot using an out-of-view object visualization method to alert the user to possible collisions. The VPE experiment with 16 combinations of parameters showed that the InceptionV3 model, fine-tuned on activated images, yields the best performance with a root mean squared error of 0.34 m in 2.1 ms. The user study of VPV showed an inversely proportional relationship between immersion (controlled by the difficulty of the AR game) and the frequency of situation awareness, both quantitatively and qualitatively. An additional VPV experiment assessing two out-of-view object visualization methods (EyeSee360 and Radar) showed no significant effect on the participants' activity, while EyeSee360 yielded faster responses and Radar was preferred by participants on average. Our field study demonstrated an integration of VPE and VPV that has potential for safety-ensured immersion when the proposed component is used for AR in daily use. We expect that when the proposed component is mature enough to be used in the real world, it will contribute to safety-ensured AR as well as to the popularization of AR.
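
As a rough illustration of how a VPE-style regressor could be set up, the Keras sketch below fine-tunes InceptionV3 to predict a relative 3D vehicle position from an RGB crop; the input pipeline, region-of-interest scheme, and all hyperparameters are assumptions and not taken from the paper.

```python
# Hedged sketch: InceptionV3 fine-tuned to regress a vehicle's relative 3D
# position (x, y, z) from an RGB crop. Preprocessing and hyperparameters are
# assumptions, not the paper's configuration.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = True  # fine-tune the whole backbone

inputs = tf.keras.Input(shape=(299, 299, 3))
x = tf.keras.applications.inception_v3.preprocess_input(inputs)
x = base(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(3)(x)  # relative position of the car in meters

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # hypothetical datasets
```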

Guiding Smombies: Augmenting Peripheral Vision with Low-Cost Glasses to Shift the Attention of Smartphone Users

U.Gruenefeld, T.C.Stratmann, J.Jung, H.Lee, J.Choi, [...] W.Heuten
Published Poster at IEEE ISMAR '18, Munich, Germany

Over the past few years, playing Augmented Reality (AR) games on smartphones has steadily been gaining in popularity (e.g., Pokémon Go). However, playing these games while navigating traffic is highly dangerous and has led to many accidents in the past. In our work, we aim to augment the peripheral vision of pedestrians with low-cost glasses to support them in critical traffic encounters. Therefore, we developed a lo-fi prototype with peripheral displays and technically improved it drawing on the expertise of five usability experts. Afterwards, we conducted an experiment on a treadmill to evaluate the effectiveness of collision warnings in our prototype. During the experiment, we compared three different light stimuli (instant, pulsing and moving) with regard to response time, error rate, and subjective feedback. Overall, we could show that all light stimuli were suitable for shifting the users' attention (100% correct). However, moving light resulted in significantly faster response times and was subjectively perceived as best.

Juggling 4.0: Learning Complex Motor Skills with Augmented Reality Through the Example of Juggling

B.Meyer, P.Gruppe, B.Cornelsen, T.C.Stratmann, U.Gruenefeld, S.Boll
Published Poster at ACM UIST '18, Berlin, Germany

Learning new motor skills is a problem that people are constantly confronted with (e.g., learning a new kind of sport). In our work, we investigate to what extent the learning process of a motor sequence can be optimized with the help of Augmented Reality as a technical assistant. Therefore, we propose an approach that divides the problem into three tasks: (1) the tracking of the necessary movements, (2) the creation of a model that calculates possible deviations, and (3) the implementation of a visual feedback system. To evaluate our approach, we implemented the idea by using infrared depth sensors and an Augmented Reality head-mounted device (Hololens). Our results show that the system can provide efficient assistance for the correct height of a throw with one ball. Furthermore, it provides a basis for the support of a complete juggling sequence.

Identification of Out-of-View Objects in Virtual Reality

U.Gruenefeld, R.von Bargen, W.Heuten
Published Poster at ACM SUI '18, Berlin, Germany

Current Virtual Reality (VR) devices have limited fields-of-view (FOV). A limited FOV amplifies the problem of objects receding from view. In previous work, different techniques have been proposed to visualize the position of objects out of view. However, these techniques do not allow users to identify these objects. In this work, we compare three different ways of identifying out-of-view objects. Our user study shows that participants prefer to have the identification always visible.

Investigations on Container Ship Berthing from the Pilot’s Perspective: Accident Analysis, Ethnographic Study, and ...

U. Gruenefeld, T.C.Stratmann, Y.Brueck, A.Hahn, S.Boll, W.Heuten
Published Journal Article at TransNav '19

In recent years, container ships have had to transport more and more goods due to constantly growing demand. Therefore, the container ships for carrying these goods are growing in size, while the harbors fall short in adapting to these changes. As a result, the berthing of these container ships in harbors has become more challenging for harbor pilots.

In this work, we identify problems and risks with which pilots are confronted during the berthing process. First, we analyzed approximately 1500 accident reports from six different transportation safety authorities and identified their major causes. Second, we conducted an ethnographic study with harbor pilots in Hamburg to observe their actions. Third, we gained more specific insights into pilots' environments and communication through an online survey of 30 harbor pilots from different European countries. We conclude our work with recommendations on how to reduce problems and risks during the berthing of container vessels.

Where to Look: Exploring Peripheral Cues for Shifting Attention to Spatially Distributed Out-of-View Objects

U. Gruenefeld, A.Löcken, Y.Brueck, S.Boll, W.Heuten
Published Paper at ACM AutoUI '18, Toronto, Canada

Knowing the locations of spatially distributed objects is important in many different scenarios (e.g., driving a car and being aware of other road users). In particular, it is critical for preventing accidents with objects that come too close (e.g., cyclists or pedestrians). In this paper, we explore how peripheral cues can shift a user's attention towards spatially distributed out-of-view objects.

We identify a suitable technique for visualization of these out-of-view objects and explore different cue designs to advance this technique to shift the user's attention. In a controlled lab study, we investigate non-animated peripheral cues with audio stimuli and animated peripheral cues without audio stimuli. Further, we looked into how users identify out-of-view objects. Our results show that shifting the user's attention only takes about 0.86 seconds on average when animated stimuli are used, while shifting the attention with non-animated stimuli takes an average of 1.10 seconds.

Beyond Halo and Wedge: Visualizing Out-of-View Objects on Head-mounted Virtual and Augmented Reality Devices

U.Gruenefeld, A.El Ali, S.Boll, W.Heuten
Published Paper at ACM MobileHCI '18, Barcelona, Spain

Head-mounted devices (HMDs) for Virtual and Augmented Reality (VR/AR) enable us to alter our visual perception of the world. However, current devices suffer from a limited field of view (FOV), which becomes problematic when users need to locate out-of-view objects (e.g., locating points-of-interest during sightseeing). To address this, we developed HaloVR, WedgeVR, HaloAR and WedgeAR, inspired by usable 2D off-screen object visualization techniques (Halo, Wedge), and evaluated them in two studies. While our techniques resulted in overall high usability, we found the choice of AR or VR impacts mean search time (VR: 2.25s, AR: 3.92s) and mean direction estimation error (VR: 21.85˚, AR: 32.91˚). Moreover, while adding more out-of-view objects significantly affects search time across VR and AR, direction estimation performance remains unaffected. We provide implications and discuss the challenges of designing for VR and AR HMDs.

RadialLight: Exploring Radial Peripheral LEDs for Directional Cues in Head-Mounted Displays

U.Gruenefeld, T.C.Stratmann, A.El Ali, S.Boll, W.Heuten
Published Paper at ACM MobileHCI '18, Barcelona, Spain

Current head-mounted displays (HMDs) for Virtual Reality (VR) and Augmented Reality (AR) have a limited field-of-view (FOV). This limited FOV further decreases the already restricted human visual range and amplifies the problem of objects going out of view. Therefore, we explore the utility of augmenting HMDs with RadialLight, a peripheral light display implemented as 18 radially positioned LEDs around each eye to cue direction towards out-of-view objects. We first investigated direction estimation accuracy of multi-colored cues presented on one versus two eyes. We then evaluated direction estimation accuracy and search time performance for locating out-of-view objects in two representative 360˚ video VR scenarios. Key findings show that participants could not distinguish between LED cues presented to one or both eyes simultaneously, participants estimated LED cue direction within a maximum 11.8˚ average deviation, and out-of-view objects in less distracting scenarios were selected faster. Furthermore, we provide implications for building peripheral HMDs.
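
A straightforward way to drive such a radial display is to project the head-relative direction of the out-of-view object onto the plane of the LED ring and light the nearest LED. The sketch below illustrates this mapping under that assumption; it is not the paper's implementation.

```python
# Illustrative mapping from an object's head-relative direction to one of
# 18 radially positioned LEDs (an assumption about how such a cue could be driven).
import math

NUM_LEDS = 18

def led_index(obj_dir):
    """obj_dir: (x, y, z) in head coordinates with x = right, y = up, z = forward.

    Projects the direction onto the plane of the LED ring and returns the index
    of the closest LED (0 = top, counting clockwise as seen by the wearer).
    """
    x, y, _ = obj_dir
    angle = math.atan2(x, y) % (2 * math.pi)   # 0 rad = straight up, clockwise positive
    step = 2 * math.pi / NUM_LEDS
    return int(round(angle / step)) % NUM_LEDS

# Example: an object up and to the right maps to an LED in the upper-right sector.
print(led_index((0.7, 0.7, -0.1)))  # -> 2
```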

MonoculAR: A Radial Light Display to Point Towards Out-of-View Objects on Augmented Reality Devices

U.Gruenefeld, T.C.Stratmann, L.Prädel, W.Heuten
Published Poster at ACM MobileHCI '18, Barcelona, Spain

Present head-mounted displays (HMDs) for Augmented Reality (AR) devices have narrow fields-of-view (FOV). The narrow FOV further decreases the already limited human visual range and worsens the problem of objects going out of view. Therefore, we explore the utility of augmenting head-mounted AR devices with MonoculAR, a peripheral light display comprised of twelve radially positioned light cues, to point towards out-of-view objects. In this work, we present two implementations of MonoculAR: (1) On-screen virtual light cues and (2) Off-screen LEDs. In a controlled user study we compare both approaches and evaluate search time performance for locating out-of-view objects in AR on the Microsoft Hololens. Key results show that participants find out-of-view objects faster when the light cues are presented on the screen. Furthermore, we provide implications for building peripheral HMDs.


BuildingBlocks: Head-mounted Virtual Reality for Robot Interaction in Large Non-Expert Audiences

U. Gruenefeld, T.C.Stratmann, L.Prädel, M.Pfingsthorn, W.Heuten
Workshop Paper at ACM MobileHCI '18, Barcelona, Spain

Virtual Reality (VR) technology empowers users to experience and manipulate virtual environments in a novel way. Further, by using digital twins of real-world objects it is also possible to extend the reach of interaction to reality. In this work, we explore how users interact with a robot arm and its programming by using a digital representation in VR. In particular, we were interested in how public spaces influence these interactions. As a preliminary outcome in this direction, we present a simple application called BuildingBlocks, which allows any member of the public to assemble a work order for a robot with almost no instruction. This application was tested anecdotally during an industry fair with 235 active participants.

Augmenting Augmented Reality

U.Gruenefeld, T.C.Stratmann, J.Auda, M.Koelle, [...] W.Heuten
Tutorial at ACM MobileHCI '18, Barcelona, Spain

Today's Augmented Reality (AR) devices enable users to interact almost naturally with their surroundings, e.g., by pinning digital content onto real-world objects. However, current AR displays are mostly limited to optical and video see-through technologies. Nevertheless, extending Augmented Reality (AR) beyond screens by accommodating additional modalities (e.g., smell or haptics) or additional visuals (e.g., peripheral light) has recently become a trend in HCI. During this half-day tutorial, we provide beginner-level, hands-on instructions for augmenting an Augmented Reality application using peripheral hardware to generate multi-sensory stimuli.

Effective Visualization of Time-Critical Notifications
in Virtual Reality

U.Gruenefeld, M.Harre, T.C.Stratmann, A.Lüdtke, W.Heuten
Published Paper at MuC '18, Dresden, Germany

Virtual Reality (VR) devices empower users to be fully immersed in a virtual environment. However, time-critical notifications must be perceived as quickly and correctly as possible, especially if they indicate a risk of injury (e.g., bumping into walls). Compared to the displays used in previous work to investigate fast response times, immersion in a virtual environment, a wider field of view, and the use of near-eye displays observed through lenses may have a considerable impact on the perception of time-critical notifications. Therefore, we studied the effectiveness of different visualization types (color, shape, size, text, number) in two different setups (room-scale, standing-only) with 20 participants in VR. Our study consisted of one part in which we tested a single notification and one part with multiple notifications appearing at the same time. We measured reaction time, correctness, and subjective user evaluation. Our results showed that the visualization types can be ranked consistently by effectiveness across different numbers of notification elements presented. Further, we offer promising recommendations regarding which visualization type to use in future VR applications for showing time-critical notifications.

EyeMR - Low-cost Eye-Tracking for Rapid-prototyping in Head-mounted Mixed Reality

T.C.Stratmann, U.Gruenefeld, S.Boll
Published Poster at ACM ETRA '18, Warsaw, Poland

Mixed Reality devices can either augment reality (AR) or create completely virtual realities (VR). Combined with head-mounted devices and eye-tracking, they enable users to interact with these systems in novel ways. However, current eye-tracking systems are expensive and limited in the interaction with virtual content. In this paper, we present EyeMR, a low-cost system (below $100) that enables researchers to rapidly prototype new techniques for eye and gaze interactions. Our system supports mono- and binocular tracking (using Pupil Capture) and includes a Unity framework to support the fast development of new interaction techniques. We argue for the usefulness of EyeMR based on results of a user evaluation with HCI experts.

FlyingARrow: Pointing Towards Out-of-View Objects on Augmented Reality Devices

U.Gruenefeld, D.Lange, L.Hammer, S.Boll, W.Heuten
Published Paper at ACM PerDis '18, Munich, Germany

Augmented Reality (AR) devices empower users to enrich their surroundings by pinning digital content onto real world objects. However, current AR devices suffer from having small fields of view, making the process of locating spatially distributed digital content similar to looking through a keyhole. Previous solutions are not suitable to address the problem of locating digital content out of view on small field-of-view devices because of visual clutter. Therefore, we developed FlyingARrow, which consists of a visual representation that flies on-demand from the user's line of sight toward the position of the out-of-view object and returns an acoustic signal over headphones when it is reached. We compared our technique with the out-of-view object visualization technique EyeSee360 and found that it resulted in higher usability and lower workload. However, FlyingARrow performed slightly worse with respect to search time and direction error. Furthermore, we discuss the challenges and opportunities of combining visual and acoustic representations to overcome visual clutter.
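
The flight of the visual cue could be realized as a simple frame-by-frame interpolation from a point on the user's line of sight to the target object, playing the acoustic signal on arrival. The following sketch is only an illustrative assumption of such behavior, not the paper's implementation.

```python
# Hedged sketch of a FlyingARrow-style cue: move a marker from the user's line
# of sight to the out-of-view object and signal when it arrives (assumption).
import numpy as np

def fly_towards(start, target, speed=2.0, dt=1 / 60, on_arrival=None):
    """Yield marker positions each frame until the target is reached.

    start/target: (3,) world-space positions; speed in meters per second.
    """
    pos = np.asarray(start, dtype=float)
    target = np.asarray(target, dtype=float)
    while True:
        offset = target - pos
        dist = np.linalg.norm(offset)
        if dist < 0.05:                      # arrival threshold (5 cm)
            break
        pos = pos + offset / dist * min(speed * dt, dist)
        yield pos
    if on_arrival is not None:
        on_arrival()                         # e.g., play the acoustic signal

# Usage (hypothetical names): for p in fly_towards(camera_pos + gaze_dir,
#     object_pos, on_arrival=play_sound): render_marker(p)
```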

Exploring Vibrotactile and Peripheral Cues for
Spatial Attention Guidance

T.C.Stratmann, A.Löcken, U.Gruenefeld, W.Heuten, S.Boll
Published Paper at ACM PerDis '18, Munich, Germany

For decision making in monitoring and control rooms, situation awareness is key. Given the often spacious and complex environments, simple alarms are not sufficient for attention guidance (e.g., on ship bridges). In our work, we explore shifting attention towards the location of relevant entities in large cyber-physical systems. Therefore, we used pervasive displays: tactile displays on both upper arms and a peripheral display. With these displays, we investigated shifting the attention in a seated and a standing scenario. In a first user study, we evaluated four distinct cue patterns for each on-body display. We tested seated monitoring limited to 90° in front of the user. In a second study, we continued with the two patterns from the first study that were perceived as lowest and highest in urgency. Here, we investigated standing monitoring in a 360° environment. We found that tactile cues led to faster arousal times than visual cues, whereas visual cues led to faster attention shifts than tactile cues.

EyeSeeX: Visualization of Out-of-View Objects on Small Field-of-View Augmented and Virtual Reality Devices

U.Gruenefeld, D.Hsiao, W.Heuten
Published Demo at ACM PerDis '18, Munich, Germany

Recent advances in Virtual and Augmented Reality technology enable a variety of new applications (e.g., multi-player games in real environments). However, current devices suffer from having small fields of view, making the process of locating spatially distributed digital content similar to looking through a keyhole. In this work, we present EyeSeeX as a technique for visualizing out-of-view objects with head-mounted devices. EyeSeeX improves upon our previously developed technique EyeSee360 for small field-of-view (FOV) devices. To do so, EyeSeeX proposes two strategies: (1) reducing the visualized field and (2) compressing the presented information. Further, EyeSeeX supports video and optical see-through Augmented Reality, Mixed Reality, and Virtual Reality devices.

2017

EyeSee360: Designing a Visualization Technique for Out-of-View Objects in Head-mounted Augmented Reality

U. Gruenefeld, D.Ennenga, A.El Ali, W.Heuten, S.Boll
Published Paper at ACM SUI '17, Brighton, United Kingdom
Honorable Mention for Best Paper

Head-mounted displays allow users to augment reality or dive into a virtual one. However, these 3D spaces often come with problems due to objects that may be out of view. Visualizing these out-of-view objects is useful under certain scenarios, such as situation monitoring during ship docking. To address this, we designed a lo-fi prototype of our EyeSee360 system and, based on user feedback, subsequently implemented EyeSee360. We evaluate our technique against well-known 2D off-screen object visualization techniques (Arrow, Halo, Wedge) adapted for head-mounted Augmented Reality, and found that EyeSee360 results in the lowest error for direction estimation of out-of-view objects. Based on our findings, we outline the limitations of our approach and discuss the usefulness of our developed lo-fi prototyping tool.
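
Conceptually, a visualization of this kind maps the yaw and pitch of each out-of-view object, relative to the current head orientation, onto a 2D overlay around the user's view. The sketch below shows one assumed way such a mapping could be computed; it is not the published EyeSee360 implementation.

```python
# Illustrative (assumed) EyeSee360-style mapping: the yaw/pitch of an out-of-view
# object relative to the head is compressed onto a 2D overlay rectangle.
import numpy as np

def yaw_pitch(obj_world, head_pos, head_rot):
    """Direction of an object in head coordinates as (yaw, pitch) in radians.

    head_rot: 3x3 head-to-world rotation matrix; columns = right, up, forward.
    """
    d = head_rot.T @ (np.asarray(obj_world, float) - np.asarray(head_pos, float))
    d = d / np.linalg.norm(d)
    yaw = np.arctan2(d[0], d[2])    # left/right, range -pi..pi
    pitch = np.arcsin(d[1])         # up/down, range -pi/2..pi/2
    return yaw, pitch

def to_overlay(yaw, pitch, width, height):
    """Compress the full sphere onto an overlay of width x height pixels."""
    u = (yaw / np.pi + 1) / 2 * width                    # 360 deg of yaw -> full width
    v = (1 - (pitch / (np.pi / 2) + 1) / 2) * height     # 180 deg of pitch -> full height
    return u, v
```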

EyeSee: Beyond Reality with Microsoft Hololens

U.Gruenefeld, D.Hsiao, W.Heuten, S.Boll
Published Demo at ACM SUI '17, Brighton, United Kingdom

Head-mounted Augmented Reality (AR) devices allow overlaying digital information on the real world, where objects may be out of view. Visualizing these out-of-view objects is useful under certain scenarios. To address this, we developed EyeSee360 in our previous work. However, our implementation of EyeSee360 was limited to video-see-through devices.

These devices suffer from a delayed, looped camera image and decrease the human field of view. In this demo, we present EyeSee360 transferred to optical see-through Augmented Reality to overcome these limitations.

Visualizing Out-of-View Objects in Head-mounted
Augmented Reality

U. Gruenefeld, A.El Ali, W.Heuten, S.Boll
Published Late-Breaking Work at ACM MobileHCI '17, Vienna, Austria

Various off-screen visualization techniques that point to off-screen objects have been developed for small screen devices. A similar problem arises with head-mounted Augmented Reality (AR) with respect to the human field-of-view, where objects may be out of view. Being able to detect so-called out-of-view objects is useful for certain scenarios (e.g., situation monitoring during ship docking). To augment existing AR with this capability, we adapted and tested well-known 2D off-screen object visualization techniques (Arrow, Halo, Wedge) for head-mounted AR. We found that Halo resulted in the lowest error for direction estimation while Wedge was subjectively perceived as best. We discuss future directions of how to best visualize out-of-view objects in head-mounted AR.

PeriMR: A Prototyping Tool for Head-mounted Peripheral Light Displays in Mixed Reality

U.Gruenefeld, T.C.Stratmann, W.Heuten, S.Boll
Published Demo at ACM MobileHCI '17, Vienna, Austria

Nowadays, Mixed and Virtual Reality devices suffer from a field of view that is too small compared to human visual perception. Although a larger field of view is useful (e.g., conveying peripheral information or improving situation awareness), technical limitations prevent the extension of the field-of-view. A way to overcome these limitations is to extend the field-of-view with peripheral light displays. However, there are no tools to support the design of peripheral light displays for Mixed or Virtual Reality devices. Therefore, we present our prototyping tool PeriMR that allows researchers to develop new peripheral head-mounted light displays for Mixed and Virtual Reality.

Effects of Location and Fade-in Time of (Audio-)Visual Cues on Response Times and Success-rates in a Dual-task Experiment

A.Löcken, S.Blum, T.C.Stratmann, U.Gruenefeld, [...], S.van de Par
Published Paper at ACM SAP '17, Cottbus, Germany

While performing multiple competing tasks at the same time, e.g., when driving, assistant systems can be used to create cues to direct attention towards required information. However, poorly designed cues will interrupt or annoy users and affect their performance. Therefore, we aim to identify cues that are not missed and trigger a quick reaction without changing the primary task performance. We conducted a dual-task experiment in an anechoic chamber with LED-based stimuli that faded in or turned on abruptly and were placed in the periphery or in front of a subject. Additionally, a white noise sound was triggered in a third of the trials. The primary task was to react to visual stimuli placed on a screen in front. We observed significant effects on the response times in the screen task when adding sound. Further, participants responded faster to LED stimuli when they faded in.

2014

Swarming in the Urban Web Space to Discover the
Optimal Region

C.Kumar, U.Gruenefeld, W.Heuten, S.Boll
Published Paper at IEEE/WIC/ACM WI '14, Warsaw, Poland

People moving to a new place usually look for a suitable region with respect to their multiple criteria of interest. In this work, we map this problem to swarming, a collective migration behavior exhibited by animals of similar size that aggregate and mill about the same region. Taking the swarm intelligence perspective, we present a novel method to find relevant geographic regions for citizens based on the Particle Swarm Optimization (PSO) framework. Particles represent geographic regions that move in the map space to find the region most relevant to the user's query. The characterization of geographic regions is based on the multi-criteria distribution of geo-located facilities or landscape structure from the OpenStreetMap data source. We enable end users to visualize and evaluate the regional search process of PSO via a Web interface. The proposed framework demonstrates high precision and computationally efficient performance for regional search over a vast city-based dataset.
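
The regional search can be sketched as a standard PSO loop in which each particle is a candidate region center (e.g., a latitude/longitude pair) scored against the user's criteria. The scoring function and parameters below are placeholders for illustration; the paper's multi-criteria relevance function over OpenStreetMap data is not reproduced here.

```python
# Minimal PSO sketch for regional search (assumption: the scoring function and
# parameters are placeholders, not the paper's multi-criteria relevance model).
import numpy as np

def pso_region_search(score, bounds, n_particles=30, iters=100,
                      w=0.7, c1=1.5, c2=1.5, seed=0):
    """Maximize `score(lat, lon)` over the bounding box `bounds`.

    bounds: ((lat_min, lat_max), (lon_min, lon_max)); each particle is a
    candidate region center moving in map space.
    """
    rng = np.random.default_rng(seed)
    lo = np.array([bounds[0][0], bounds[1][0]])
    hi = np.array([bounds[0][1], bounds[1][1]])
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    best_pos = pos.copy()
    best_val = np.array([score(*p) for p in pos])
    g = best_pos[best_val.argmax()].copy()           # global best region so far

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w * vel + c1 * r1 * (best_pos - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = np.array([score(*p) for p in pos])
        improved = val > best_val
        best_pos[improved], best_val[improved] = pos[improved], val[improved]
        g = best_pos[best_val.argmax()].copy()
    return g, best_val.max()

# Example with a toy relevance function peaking near one point on the map:
# best, _ = pso_region_search(lambda la, lo: -((la - 52.3)**2 + (lo - 21.0)**2),
#                             bounds=((52.0, 52.5), (20.7, 21.3)))
```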

Characterizing the Swarm Movement on Map for
Spatial Visualization

C.Kumar, U.Gruenefeld, W.Heuten, S.Boll
Published Poster at IEEE TVCG '14, Paris, France

Visualization of maps to explore relevant geographic areas is one of the common practices in spatial decision scenarios. However, visualizing geographic distributions with multidimensional criteria becomes nontrivial in the conventional point-based map space. In this work, we present a novel method to generalize from point data to spatial distributions, capitalizing on swarm intelligence.

We exploit the particle swarm optimization (PSO) framework, where particles represent geographic regions that move in the map space to find a better position with respect to the user's criteria. We track the swarm movement on the map surface to generate a relevance heatmap, which can effectively support the spatial analysis tasks of end users.