2017
Authors
Ribas, L; Rangel, A; Verdicchio, M; Carvalhais, M;
Publication
JOURNAL OF SCIENCE AND TECHNOLOGY OF THE ARTS
Abstract
2017
Authors
Sousa, M; Mendes, D; Paulo, S; Matela, N; Jorge, J; Lopes, DS;
Publication
PROCEEDINGS OF THE 2017 ACM SIGCHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI'17)
Abstract
Reading room conditions such as illumination, ambient light, human factors and display luminance play an important role in how radiologists analyze and interpret images. Indeed, serious diagnostic errors can arise when observing images on everyday monitors. Typically, these occur whenever professionals are ill-positioned with respect to the display or visualize images under improper light and luminance conditions. In this work, we show that virtual reality can assist radiodiagnostics by considerably diminishing or canceling out the effects of unsuitable ambient conditions. Our approach combines immersive head-mounted displays with interactive surfaces to support professional radiologists in analyzing medical images and formulating diagnostics. We evaluated our prototype with two senior medical doctors and four seasoned radiology fellows. Results indicate that our approach constitutes a viable, flexible, portable and cost-efficient alternative to traditional radiology reading rooms.
2017
Authors
Mendes, D; Medeiros, D; Sousa, M; Cordeiro, E; Ferreira, A; Jorge, JA;
Publication
Proceedings of the 33rd Spring Conference on Computer Graphics, SCCG 2017, Mikulov, Czech Republic, May 15-17, 2017
Abstract
In Virtual Reality (VR), selecting virtual objects outside arm's reach still poses significant challenges. In this work, after classifying existing solutions with a new taxonomy and analyzing them, we propose a novel technique to perform out-of-reach selections in VR. It uses natural pointing gestures, a modifiable cone as selection volume, and an iterative progressive refinement strategy. This can be considered a VR implementation of a discrete zoom approach, although we modify the user's position instead of the field-of-view. When the cone intersects several objects, users can either activate the refinement process or trigger a multiple-object selection. We compared our technique against two techniques from the literature. Our results show that, although not the fastest, it is a versatile approach thanks to its lack of errors and uniform completion times. © 2017 Copyright held by the owner/author(s).
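The core of the cone-based selection described above can be sketched as a simple angular test: an object is a candidate when the vector from the cone's apex (the user's hand) to the object deviates from the pointing axis by less than the cone's half-angle. This is a minimal illustration, not the paper's implementation; all names and the aperture value are hypothetical.

```python
import math

def in_cone(apex, axis, half_angle_deg, point):
    """True if `point` lies inside a cone with the given apex, axis and aperture."""
    vx = [p - a for p, a in zip(point, apex)]           # apex -> point vector
    norm = math.sqrt(sum(c * c for c in vx)) or 1e-9    # avoid division by zero
    anorm = math.sqrt(sum(c * c for c in axis))
    cos_theta = sum(v * a for v, a in zip(vx, axis)) / (norm * anorm)
    return cos_theta >= math.cos(math.radians(half_angle_deg))

def cone_select(apex, axis, half_angle_deg, objects):
    """Return every object whose position falls inside the selection cone."""
    return [o for o in objects if in_cone(apex, axis, half_angle_deg, o["pos"])]

objects = [
    {"id": 1, "pos": (0, 0, 5)},   # directly along the pointing axis
    {"id": 2, "pos": (3, 0, 5)},   # ~31 degrees off-axis
    {"id": 3, "pos": (0, 4, 1)},   # far off-axis
]
hits = cone_select((0, 0, 0), (0, 0, 1), 15.0, objects)  # only object 1 is inside
```

In the technique's refinement step, a multi-object hit like this would trigger either a narrowing of the cone or a discrete step that moves the user closer to the remaining candidates.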
2017
Authors
Sousa, M; Mendes, D; dos Anjos, RK; Medeiros, D; Ferreira, A; Raposo, A; Pereira, JM; Jorge, JA;
Publication
Proceedings of the Interactive Surfaces and Spaces, ISS 2017, Brighton, United Kingdom, October 17 - 20, 2017
Abstract
Context-aware pervasive applications can improve user experiences by tracking people in their surroundings. Such systems use multiple sensors to gather information regarding people and devices. However, when developing novel user experiences, researchers are left building foundation code to support multiple network-connected sensors, a major hurdle to rapidly developing and testing new ideas. We introduce Creepy Tracker, an open-source toolkit to ease prototyping with multiple commodity depth cameras. It automatically selects the best sensor to follow each person, handling occlusions and maximizing interaction space, while providing full-body tracking in a scalable and extensible manner. It also tracks the position and orientation of stationary interactive surfaces while offering continuously updated point-cloud user representations combining both depth and color data. Our performance evaluation shows that, although slightly less precise than marker-based optical systems, Creepy Tracker provides reliable multi-joint tracking without any wearable markers or special devices. Furthermore, implemented representative scenarios show that Creepy Tracker is well suited for deploying spatial and context-aware interactive experiences. © 2017 Copyright is held by the owner/author(s). Publication rights licensed to ACM.
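The per-person sensor selection mentioned above can be pictured as a simple scoring problem: among the depth cameras currently observing a person, prefer the one with the best view, e.g. the most tracked joints, breaking ties by distance. This is a hypothetical sketch of the idea, not Creepy Tracker's actual heuristic or API.

```python
def best_sensor(person_id, observations):
    """Pick the observation with the most tracked joints for one person;
    on a tie, prefer the sensor that is closer to the person."""
    candidates = [o for o in observations if o["person"] == person_id]
    return max(candidates, key=lambda o: (o["tracked_joints"], -o["distance"]))

# Two depth cameras see person 7; one is occluded and tracks fewer joints.
observations = [
    {"sensor": "kinect_a", "person": 7, "tracked_joints": 18, "distance": 2.1},
    {"sensor": "kinect_b", "person": 7, "tracked_joints": 25, "distance": 3.0},
]
chosen = best_sensor(7, observations)  # kinect_b: better joint coverage wins
```

Re-running this selection every frame lets the system hand a person off between cameras as occlusions appear and disappear.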
2017
Authors
Mendes, D; Sousa, M; Lorena, R; Ferreira, A; Jorge, JA;
Publication
Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology, VRST 2017, Gothenburg, Sweden, November 8-10, 2017
Abstract
Virtual Reality environments are able to offer natural interaction metaphors. However, it is difficult to accurately place virtual objects in the desired position and orientation using gestures in mid-air. Previous research concluded that the separation of degrees-of-freedom (DOF) can lead to better results, but these benefits come with an increase in time when performing complex tasks, due to the additional number of transformations required. In this work, we assess whether custom transformation axes can be used to achieve the accuracy of DOF separation without sacrificing completion time. For this, we developed a new manipulation technique, MAiOR, which offers translation and rotation separation, supporting both 3-DOF and 1-DOF manipulations, using personalized axes for the latter. Additionally, it also has direct 6-DOF manipulation for coarse transformations, and scaled object translation for increased placement accuracy. We compared MAiOR against an exclusively 6-DOF approach and a widget-based approach with explicit DOF separation. Results show that, contrary to previous research suggestions, single-DOF manipulations are not appealing to users. Instead, users favored 3-DOF manipulations above all, while keeping translation and rotation independent. © 2017 Copyright held by the owner/author(s).
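A 1-DOF translation along a custom axis, as MAiOR supports, amounts to projecting the user's free-hand displacement onto that axis so the object can only slide along it. The sketch below illustrates that projection under assumed names; it is not the paper's code.

```python
import math

def constrained_translate(position, axis, hand_delta):
    """Translate `position` along a single custom axis (1-DOF):
    only the component of the hand's 3D displacement that lies
    along `axis` is applied; the rest is discarded."""
    n = math.sqrt(sum(a * a for a in axis))
    u = [a / n for a in axis]                       # unit axis
    t = sum(d * c for d, c in zip(hand_delta, u))   # projected displacement
    return tuple(p + t * c for p, c in zip(position, u))

# Object constrained to the x-axis: vertical hand motion has no effect.
new_pos = constrained_translate((0.5, 1.0, 0.0), (1, 0, 0), (2.0, 5.0, -1.0))
# new_pos == (2.5, 1.0, 0.0)
```

Scaling `t` by a factor below 1 before applying it would give the abstract's "scaled object translation" for fine placement.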
2017
Authors
Mendes, D; Medeiros, D; Sousa, M; Cordeiro, E; Ferreira, A; Jorge, JA;
Publication
COMPUTERS & GRAPHICS-UK
Abstract
In interactive systems, the ability to select virtual objects is essential. In immersive virtual environments, object selection is usually done at arm's length in mid-air by directly intersecting the desired object with the user's hand. However, selecting objects outside the user's arm reach still poses significant challenges, which direct approaches fail to address. Techniques proposed to overcome such limitations often follow an arm-extension metaphor or favor selection volumes combined with ray-casting. Nonetheless, while these approaches work for room-sized environments, they hardly scale up to larger scenarios with many objects. In this paper, we introduce a new taxonomy to classify existing selection techniques. In its wake, we propose PRECIOUS, a novel mid-air technique for selecting out-of-reach objects, featuring iterative refinement in Virtual Reality, a hitherto untried approach in this context. While comparable techniques have been developed for non-stereo and non-immersive environments, these are not suitable for Immersive Virtual Reality. Our technique is the first to employ iterative progressive refinement in such settings. It uses cone-casting to select multiple objects and moves the user closer to them in each refinement step, to allow accurate selection of the desired target. A user evaluation showed that PRECIOUS compares favorably against state-of-the-art approaches. Indeed, our results indicate that PRECIOUS is a versatile approach to out-of-reach target acquisition, combining accurate selection with consistent task completion times across different scenarios.