2019
Authors
Sousa, M; Mendes, D; Jorge, JA;
Publication
CoRR
Abstract
2018
Authors
Caputo, FM; Mendes, D; Bonetti, A; Saletti, G; Giachetti, A;
Publication
2018 IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2018, Tuebingen/Reutlingen, Germany, 18-22 March 2018
Abstract
The choice of a suitable method for object manipulation is one of the most critical aspects of virtual environment design. It has been shown that some environments or applications benefit from direct manipulation approaches, while others are more usable with indirect ones, exploiting, for example, three-dimensional virtual widgets. When it comes to mid-air interactions, the success of a manipulation technique is defined not only by the kind of application but also by the hardware setup, especially when specific restrictions exist. In this paper we present an experimental evaluation of different techniques and hardware for mid-air object manipulation in immersive virtual environments (IVE). We compared task performance using both deviceless and device-based tracking solutions, combined with direct and widget-based approaches. In the case of freehand manipulation, we also tested the effects of different visual feedback, comparing a realistic virtual hand rendering with a simple cursor-like visualization.
2018
Authors
Cordeiro, E; Giannini, F; Monti, M; Mendes, D; Ferreira, A;
Publication
Italian Chapter Conference 2018 - Smart Tools and Apps in computer Graphics, STAG 2018, Brescia, Italy, October 18-19, 2018
Abstract
Current immersive modeling environments use non-natural tools and interfaces to support traditional shape manipulation operations. In the future, we expect natural methods of interaction with 3D models in immersive environments to become increasingly important in several industrial applications. In this paper, we present a study conducted on a group of potential users with the aim of verifying whether there is a common strategy in gestural and vocal interaction in immersive environments when the objective is to modify a 3D shape model. The results indicate that users adopt different strategies to perform the different tasks, but within a specific activity it is possible to identify a set of similar and recurrent gestures. In general, the gestures made are physically plausible. During the experiment, vocal interaction was used quite rarely, and never to express a command to the system, but rather to better specify what the user was doing with gestures.
2017
Authors
Mendes, D; Medeiros, D; Sousa, M; Ferreira, R; Raposo, A; Ferreira, A; Jorge, JA;
Publication
3DUI
Abstract
Virtual Reality (VR) is again in the spotlight. However, interactions and modeling operations remain major hurdles to its complete success. To make VR interaction viable, many have proposed mid-air techniques because of their naturalness and resemblance to physical-world operations. Still, natural mid-air metaphors for Constructive Solid Geometry (CSG) are elusive. This is unfortunate, because CSG is a powerful enabler for more complex modeling tasks, allowing users to create complex objects from simple ones via Boolean operations. Moreover, Head-Mounted Displays occlude the real self and make it difficult for users to be aware of their relationship to the virtual environment. In this paper we propose two new techniques to achieve Boolean operations between two objects in VR. One is based on direct manipulation via gestures, while the other uses menus. We conducted a preliminary evaluation of these techniques. Due to tracking limitations, the results allowed no significant conclusions to be drawn. To account for self-representation, we compared a full-body avatar against an iconic cursor depiction of the users' hands. Here, the simplified hands-only representation improved efficiency in CSG modeling tasks.
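As a rough illustration of the Boolean operations the abstract refers to (not the paper's implementation), CSG union, intersection, and difference can be expressed over signed distance functions, where simple min/max rules combine two shapes. The helper names and sphere primitive below are hypothetical.

```python
# Minimal sketch of CSG Boolean operations over signed distance functions (SDFs).
# Names (sdf_sphere, csg_union, ...) are illustrative, not taken from the paper.
import math

def sdf_sphere(p, center, radius):
    # Negative inside the sphere, positive outside.
    return math.dist(p, center) - radius

def csg_union(d_a, d_b):        return min(d_a, d_b)
def csg_intersection(d_a, d_b): return max(d_a, d_b)
def csg_difference(d_a, d_b):   return max(d_a, -d_b)  # A minus B

if __name__ == "__main__":
    p = (0.5, 0.0, 0.0)                      # query point
    a = sdf_sphere(p, (0.0, 0.0, 0.0), 1.0)  # sphere A
    b = sdf_sphere(p, (1.0, 0.0, 0.0), 1.0)  # sphere B
    print(csg_union(a, b) < 0)         # True: p is inside A or B
    print(csg_intersection(a, b) < 0)  # True: p is inside both
    print(csg_difference(a, b) < 0)    # False: p is carved away by B
```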
2017
Authors
Mendes, D; Medeiros, D; Cordeiro, E; Sousa, M; Ferreira, A; Jorge, JA;
Publication
3DUI
Abstract
Selecting objects outside the user's arm reach in Virtual Reality still poses significant challenges. Techniques proposed to overcome such limitations often follow arm-extension metaphors or favor the use of selection volumes combined with ray-casting. Nonetheless, these approaches work for room-sized, sparse environments and do not scale to larger scenarios with many objects. We introduce PRECIOUS, a novel mid-air technique for selecting out-of-reach objects. It employs iterative progressive refinement, using cone-casting to select multiple objects and moving users closer to them in each step, allowing accurate selections. A user evaluation showed that PRECIOUS compares favorably against existing approaches, being the most versatile.
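The following is a minimal sketch of the idea described above (not the authors' code): a cone cast from the hand selects every object whose center lies within an angular threshold of the pointing ray, and a refinement step moves the viewpoint toward the remaining candidates and narrows the cone until a single object is left. All names, thresholds, and the simulated re-aiming are assumptions.

```python
# Illustrative sketch of cone-casting with progressive refinement; names are hypothetical.
import numpy as np

def cone_select(origin, direction, centers, half_angle_deg):
    """Indices of object centers inside the selection cone."""
    d = direction / np.linalg.norm(direction)
    to_obj = centers - origin
    dist = np.linalg.norm(to_obj, axis=1)
    cos_angle = (to_obj @ d) / np.maximum(dist, 1e-9)
    return np.where(cos_angle >= np.cos(np.radians(half_angle_deg)))[0]

# Three far-away objects; a wide first cast catches two of them.
objects = np.array([[0.0, 0.0, 10.0], [1.5, 0.0, 10.0], [8.0, 0.0, 10.0]])
origin = np.zeros(3)
aim = np.array([0.05, 0.0, 1.0])           # user's initial pointing direction
candidates = cone_select(origin, aim, objects, half_angle_deg=15.0)
print(candidates)                          # [0 1] -> still ambiguous

# Refinement step: move the user toward the remaining candidates, let them
# re-aim (here simulated as pointing at object 0), and cast a narrower cone.
origin = origin + 0.5 * (objects[candidates].mean(axis=0) - origin)
aim = objects[0] - origin                  # simulated re-aiming at the target
candidates = cone_select(origin, aim, objects, half_angle_deg=6.0)
print(candidates)                          # [0] -> unambiguous selection
```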
2014
Authors
Mendes, D; Fonseca, F; Araújo, B; Ferreira, A; Jorge, J;
Publication
2014 IEEE SYMPOSIUM ON 3D USER INTERFACES (3DUI)
Abstract
Stereoscopic tabletops offer unique visualization capabilities, enabling users to perceive virtual objects as if they were lying above the surface. While allowing virtual objects to coexist with user actions in the physical world, interaction with these virtual objects above the surface presents interesting challenges. In this paper, we aim to understand which approaches to 3D virtual object manipulation are suited to this scenario. To this end, we implemented five different techniques based on the literature. Four are mid-air techniques, while the fifth relies on multi-touch gestures and serves as a baseline. Our setup combines affordable, non-intrusive tracking technologies with a multi-touch stereo tabletop, providing head and hand tracking to improve both depth perception and seamless interaction above the table. We conducted a user evaluation to find out which technique appealed most to participants. Results suggest that mid-air interactions, combining direct manipulation with six degrees of freedom for the dominant hand, are both more satisfying and more efficient than the alternatives tested.
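As a simple illustration of the kind of direct 6-DOF manipulation the study favored (a generic grab metaphor, not the paper's specific techniques): while an object is grabbed, it keeps its pose relative to the tracked hand, so translation and rotation follow the hand's six degrees of freedom. The 4x4 pose representation and function names are assumptions.

```python
# Minimal sketch of a 6-DOF "grab" direct-manipulation metaphor; illustrative only.
import numpy as np

def grab(hand_pose, object_pose):
    """On grab, cache the object's pose expressed in the hand's frame (4x4 matrices)."""
    return np.linalg.inv(hand_pose) @ object_pose

def follow(hand_pose, grab_offset):
    """Each frame while grabbed, the object rigidly follows the tracked hand."""
    return hand_pose @ grab_offset

# Example: the hand moves 0.2 m along x after grabbing; the object moves with it.
hand_t0 = np.eye(4)
obj_t0 = np.eye(4); obj_t0[:3, 3] = [0.0, 0.0, 0.5]
offset = grab(hand_t0, obj_t0)

hand_t1 = np.eye(4); hand_t1[:3, 3] = [0.2, 0.0, 0.0]
print(follow(hand_t1, offset)[:3, 3])   # -> [0.2 0.  0.5]
```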