2017
Authors
Sousa, M; Mendes, D; dos Anjos, RK; Medeiros, D; Ferreira, A; Raposo, A; Pereira, JM; Jorge, JA;
Publication
ISS
Abstract
Context-aware pervasive applications can improve user experiences by tracking people in their surroundings. Such systems use multiple sensors to gather information about people and devices. However, when developing novel user experiences, researchers are left to build foundation code to support multiple network-connected sensors, a major hurdle to rapidly developing and testing new ideas. We introduce Creepy Tracker, an open-source toolkit that eases prototyping with multiple commodity depth cameras. It automatically selects the best sensor to follow each person, handling occlusions and maximizing interaction space, while providing full-body tracking in a scalable and extensible manner. It also tracks the position and orientation of stationary interactive surfaces and offers continuously updated point-cloud user representations combining depth and color data. Our performance evaluation shows that, although slightly less precise than marker-based optical systems, Creepy Tracker provides reliable multi-joint tracking without wearable markers or special devices. Furthermore, representative scenarios show that Creepy Tracker is well suited for deploying spatial and context-aware interactive experiences.
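The abstract's core mechanism is per-person sensor selection: each frame, the camera with the least-occluded, best-placed view of a person is chosen to track them. A minimal sketch of such a selection policy follows; the `SensorView` layout, scoring weights, and range thresholds are illustrative assumptions, not the toolkit's actual API.

```python
from dataclasses import dataclass

@dataclass
class SensorView:
    sensor_id: int
    visible_joints: int   # joints of this person currently seen by the sensor
    total_joints: int     # joints in the full skeleton model
    distance_m: float     # person's distance from the sensor

def score(view: SensorView) -> float:
    """Higher is better: favor unoccluded views at a comfortable range."""
    visibility = view.visible_joints / view.total_joints   # occlusion penalty
    near, far = 1.5, 3.5   # assumed sweet spot of a commodity depth camera (m)
    mid = (near + far) / 2
    range_fit = max(0.0, 1.0 - abs(view.distance_m - mid) / (far - near))
    return 0.7 * visibility + 0.3 * range_fit   # assumed weighting

def best_sensor(views: list[SensorView]) -> int:
    """Pick the sensor that should follow this person on the current frame."""
    return max(views, key=score).sensor_id

# Example: sensor 2 sees more of the skeleton at a better range and wins.
views = [SensorView(1, 14, 25, 2.1), SensorView(2, 23, 25, 2.8)]
print(best_sensor(views))   # -> 2
```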
2019
Authors
dos Anjos, RK; Sousa, M; Mendes, D; Medeiros, D; Billinghurst, M; Anslow, C; Jorge, J;
Publication
25TH ACM SYMPOSIUM ON VIRTUAL REALITY SOFTWARE AND TECHNOLOGY (VRST 2019)
Abstract
Modern volumetric projection-based telepresence approaches can provide realistic full-size virtual representations of remote people. Interacting with full-size people may not be desirable, however, due to the spatial constraints of the physical environment, application context, or display technology, and the miniaturization of remote people is known to create an eye-gaze matching problem. Eye contact is essential to communication: it allows people to use natural nonverbal cues and improves the sense of "being there". In this paper we discuss the design space for interacting with volumetric representations of people and present an approach for dynamically manipulating the scale, orientation, and position of holograms that guarantees eye contact. We created a working augmented reality prototype and validated it with 14 participants.
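The underlying geometry is simple: when a remote person is scaled down, their eyes move, so the hologram must be re-placed and re-oriented to keep mutual gaze. Below is a minimal geometric sketch of that idea, not the paper's implementation; the function name, y-up convention, and anchor-point parameterization are assumptions.

```python
import numpy as np

def place_miniature(eye_offset, scale, viewer_eyes, anchor):
    """
    eye_offset  : remote person's eye position relative to the hologram root (3,)
    scale       : uniform miniaturization factor, e.g. 0.3
    viewer_eyes : local user's eye position (3,)
    anchor      : point (e.g. on a tabletop) where the miniature's eyes
                  should appear (3,)
    Returns (root_position, yaw) so that the scaled eyes land exactly on
    `anchor` and the hologram turns to face the viewer, preserving gaze.
    """
    root = anchor - scale * eye_offset            # root + scale*eye_offset == anchor
    to_viewer = viewer_eyes - anchor
    yaw = np.arctan2(to_viewer[0], to_viewer[2])  # rotation about the vertical axis
    return root, yaw
```

Recomputing this placement every frame as the viewer moves is what makes the manipulation "dynamic" in the abstract's sense.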
2018
Authors
Medeiros, D; dos Anjos, RK; Mendes, D; Pereira, JM; Raposo, A; Jorge, J;
Publication
24TH ACM SYMPOSIUM ON VIRTUAL REALITY SOFTWARE AND TECHNOLOGY (VRST 2018)
Abstract
Head-Mounted Displays are useful for placing users in virtual reality (VR). They do this by totally occluding the physical world, including users' bodies, which can make self-awareness problematic. Indeed, researchers have shown that users' feelings of presence and spatial awareness are strongly influenced by their virtual representations, and that self-embodied representations (avatars) of their anatomy can make the experience more engaging. On the other hand, recent user studies suggest that a third-person view of one's own body may improve spatial awareness. However, due to its unnaturalness, we argue that a third-person perspective is not as effective or convenient as a first-person view for task execution in VR. In this paper, we investigate, through a user evaluation, how these perspectives affect task performance and embodiment, focusing on navigation tasks, namely walking while avoiding obstacles. For each perspective, we also compare three levels of realism for the users' representation: a stylized abstract avatar, a mesh-based generic human, and a real-time point-cloud rendering of the users' own body. Our results show that only when a third-person perspective is coupled with a realistic representation does it achieve a sense of embodiment and spatial awareness comparable to the first-person view. In all other cases, a first-person perspective remains better suited for navigation tasks, regardless of representation.
2016
Authors
Medeiros, D; Cordeiro, E; Mendes, D; Sousa, M; Raposo, A; Ferreira, A; Jorge, J;
Publication
22ND ACM CONFERENCE ON VIRTUAL REALITY SOFTWARE AND TECHNOLOGY (VRST 2016)
Abstract
Travel in Virtual Environments is the action of moving a user from a starting point A to a target point B. Choosing an inappropriate technique can compromise the Virtual Reality experience and cause side effects such as spatial disorientation, fatigue, and cybersickness. Effective travel techniques should be as natural as possible; real-walking techniques thus yield better results, despite their physical limitations. Approaches that surpass these limitations employ indirect travel metaphors such as point-steering and target-based techniques. Target-based techniques in fact show reduced fatigue and cybersickness compared to point-steering techniques, even though they provide less control. In this paper we further investigate the effects of speed and transition in target-based techniques on factors such as comfort and cybersickness, using a Head-Mounted Display setup.
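The two transition styles being compared reduce to one per-frame update: an instantaneous jump versus an animated move at a fixed speed. The sketch below illustrates that distinction under assumed names and a hypothetical update loop; it is not the study's code.

```python
import numpy as np

def travel_step(pos, target, speed_mps, dt):
    """Advance the viewpoint one frame toward `target`.

    speed_mps = float('inf') degenerates to an instant teleport; finite
    values give the animated transition whose comfort the study measures.
    """
    delta = target - pos
    dist = np.linalg.norm(delta)
    step = speed_mps * dt
    if step >= dist:                    # arrived (or instant jump)
        return target.copy()
    return pos + delta / dist * step    # move at constant speed

# Example: 60 Hz updates, 5 m/s animated transition to a point 10 m away.
pos, target = np.zeros(3), np.array([10.0, 0.0, 0.0])
while not np.allclose(pos, target):
    pos = travel_step(pos, target, speed_mps=5.0, dt=1 / 60)
```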
2016
Authors
Medeiros, D; Sousa, M; Mendes, D; Raposo, A; Jorge, J;
Publication
22ND ACM CONFERENCE ON VIRTUAL REALITY SOFTWARE AND TECHNOLOGY (VRST 2016)
Abstract
Head-Mounted Displays (HMDs) and similar 3D visualization devices are becoming ubiquitous. Going a step further, HMD see-through systems bring virtual objects into real-world settings, allowing augmented reality to be used in complex engineering scenarios. Of these, optical and video see-through systems differ in how the real world is captured by the device. To provide a seamless integration of real and virtual imagery, the absolute depth and size of virtual and real objects should match appropriately. However, these technologies are still in their early stages, each featuring different strengths and weaknesses that affect the user experience. In this work we compare optical and video see-through systems, focusing on depth perception via exocentric and egocentric methods. Our study pairs the Meta Glasses, an off-the-shelf optical see-through device, with a modified Oculus Rift setup with attached video cameras for video see-through. Results show that, with currently available hardware, the video see-through configuration provides better overall results. These experiments and results can help interaction designers working in both virtual and augmented reality.
2016
Authors
Mendes, D; Relvas, F; Ferreira, A; Jorge, J;
Publication
22ND ACM CONFERENCE ON VIRTUAL REALITY SOFTWARE AND TECHNOLOGY (VRST 2016)
Abstract
Object manipulation is a key feature of almost every virtual environment. However, although mid-air gestures that mimic interactions in the physical world are a direct and natural approach, they make it difficult to place an object accurately in immersive virtual environments. Previous research on mouse- and touch-based interfaces concluded that separating degrees of freedom (DOF) leads to improved results. In this paper, we present the first user evaluation to assess the impact of explicit 6-DOF separation in mid-air manipulation tasks. We implemented a technique based on familiar virtual widgets that allow single-DOF control, and compared it against a direct approach and PRISM, which dynamically adjusts the ratio between hand and object motions. Our results suggest that full DOF separation benefits precision in spatial manipulations, at the cost of additional time for complex tasks. From these results we draw guidelines for 3D object manipulation in mid-air.
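For context, the PRISM comparison technique mentioned above scales the control-display ratio by hand speed: slow motion is damped for precision, fast motion maps 1:1. A sketch of that gain function follows, based on the published PRISM design; the threshold values are illustrative assumptions, not the ones used in this evaluation.

```python
import numpy as np

MIN_V = 0.01   # m/s: below this, treat hand motion as jitter and ignore it
SC    = 0.20   # m/s: "scaling constant"; at or above this, use 1:1 motion

def prism_object_delta(hand_delta, dt):
    """Map a per-frame hand displacement to an object displacement."""
    speed = np.linalg.norm(hand_delta) / dt
    if speed < MIN_V:
        k = 0.0            # suppress tracker noise
    elif speed < SC:
        k = speed / SC     # scaled-down, precise motion
    else:
        k = 1.0            # direct, fast motion
    return k * hand_delta
```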