2019
Authors
Sousa, M; Mendes, D; dos Anjos, RK; Lopes, DS; Jorge, JA;
Publication
The 17th International Conference on Virtual-Reality Continuum and its Applications in Industry, VRCAI 2019, Brisbane, QLD, Australia, November 14-16, 2019.
Abstract
Face-to-face telepresence promotes the sense of "being there" and can improve collaboration by allowing immediate understanding of remote people's nonverbal cues. Several approaches have successfully explored interactions with 2D content using a see-through whiteboard metaphor. With 3D content, however, awareness decreases due to ambiguities caused by participants' opposing points of view. In this paper, we investigate how people and content should be presented for discussing 3D renderings within face-to-face collaborative sessions. To this end, we performed a user evaluation comparing four conditions, in which we varied the reflections of both the workspace and the remote person's representation. Results suggest that remote collaboration benefits more from workspace consistency than from the fidelity of people's representation. We contribute a novel design space, the Negative Space, for remote face-to-face collaboration focused on 3D content.
2019
Authors
Pires, G; Mendes, D; Goncalves, D;
Publication
PROCEEDINGS OF THE 2019 INTERNATIONAL CONFERENCE ON GRAPHICS AND INTERACTION (ICGI 2019)
Abstract
The rapid increase of connected devices causes more and more data to be generated and, in some cases, this data needs to be analyzed as it is received. As such, the challenge of presenting streaming data in such a way that changes in the regular flow can be detected needs to be tackled, so that timely and informed decisions can be made. This requires users to be able to perceive the information being received in the moment in detail, while maintaining context. In this paper, we propose VisMillion, a visualization technique for large amounts of streaming data that follows the concept of graceful degradation. It comprises several modules positioned side by side, corresponding to contiguous time spans, from the last few seconds to a historical view of all data received in the stream so far. Data flows through each module from right to left and, the more recent the data, the more detail with which it is presented. To this end, each module uses a different technique to aggregate and process information, with special care taken to ensure visual continuity between modules and facilitate analysis. VisMillion was validated through a usability evaluation with 21 participants, as well as performance tests. Results show that it fulfills its objective, successfully aiding users in detecting changes, patterns, and anomalies in the information being received.
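As an illustration of the graceful-degradation idea described above, the sketch below keeps recent samples at full resolution while aggregating older ones into progressively coarser time buckets, one module per time span. It is not the authors' implementation: the module spans, bucket sizes, and the mean aggregation are illustrative assumptions.

```python
# A minimal sketch of graceful degradation for a data stream, assuming three
# hypothetical modules (raw seconds, per-second means, per-minute means); the
# spans, bucket sizes, and mean aggregation are illustrative, not VisMillion's.
from dataclasses import dataclass, field
from statistics import mean
import time

@dataclass
class Module:
    span_s: float                      # how far back this module reaches, in seconds
    bucket_s: float                    # aggregation granularity (0 = keep raw points)
    points: list = field(default_factory=list)

# Ordered from most detailed (recent) to most aggregated (historical).
modules = [
    Module(span_s=5, bucket_s=0.0),        # raw stream, last 5 seconds
    Module(span_s=60, bucket_s=1.0),       # 1-second means, last minute
    Module(span_s=3600, bucket_s=60.0),    # 1-minute means, last hour
]

def ingest(t, value, now):
    """Route an incoming sample to every module and drop samples that aged out."""
    for m in modules:
        if now - t <= m.span_s:
            m.points.append((t, value))
        m.points = [(pt, v) for pt, v in m.points if now - pt <= m.span_s]

def render(m, now):
    """Return the (time, value) pairs a module would draw: raw or bucket means."""
    if m.bucket_s == 0.0:
        return sorted(m.points)
    buckets = {}
    for t, v in m.points:
        buckets.setdefault(int((now - t) // m.bucket_s), []).append(v)
    return [(now - k * m.bucket_s, mean(vs)) for k, vs in sorted(buckets.items())]

now = time.time()
ingest(now, 0.42, now)                           # a fresh sample lands in every module
detail, minute, hour = (render(m, now) for m in modules)
```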
2019
Authors
dos Anjos, RK; Sousa, M; Mendes, D; Medeiros, D; Billinghurst, M; Anslow, C; Jorge, J;
Publication
25TH ACM SYMPOSIUM ON VIRTUAL REALITY SOFTWARE AND TECHNOLOGY (VRST 2019)
Abstract
Modern volumetric projection-based telepresence approaches are capable of providing realistic full-size virtual representations of remote people. Interacting with full-size people may not be desirable due to the spatial constraints of the physical environment, application context, or display technology. However, the miniaturization of remote people is known to create an eye-gaze matching problem. Eye contact is essential to communication, as it allows people to use natural nonverbal cues and improves the sense of "being there". In this paper, we discuss the design space for interacting with volumetric representations of people and present an approach for dynamically manipulating the scale, orientation, and position of holograms that guarantees eye contact. We created a working augmented reality-based prototype and validated it with 14 participants.
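To make the geometry concrete, the following sketch shows one way a miniaturized hologram could be positioned and rotated so that its eyes sit at a chosen anchor point and its gaze points at the local user's eyes. This is an assumption-based illustration, not the paper's algorithm; the function and parameter names are hypothetical.

```python
# A minimal geometric sketch, assuming a y-up world and a hologram whose local
# forward axis is +z; names and parameters are hypothetical, not from the paper.
import numpy as np

def place_hologram(anchor, eye_offset_local, scale, local_eyes):
    """Place a miniaturized hologram so its eyes sit at `anchor` and face the user.

    anchor            world point where the hologram's eyes should end up
    eye_offset_local  eye position in the hologram's unscaled local frame
    scale             uniform miniaturization factor (e.g. 0.3)
    local_eyes        world position of the local user's eyes
    Returns (position, rotation) for the hologram's root transform.
    """
    gaze = np.asarray(local_eyes, float) - np.asarray(anchor, float)
    yaw = np.arctan2(gaze[0], gaze[2])                        # turn about the up axis
    pitch = np.arctan2(gaze[1], np.hypot(gaze[0], gaze[2]))   # tilt to match eye height

    cy, sy, cp, sp = np.cos(yaw), np.sin(yaw), np.cos(pitch), np.sin(pitch)
    r_yaw = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    r_pitch = np.array([[1, 0, 0], [0, cp, sp], [0, -sp, cp]])
    rotation = r_yaw @ r_pitch                                # local +z now points at the user

    # Offset the root so the scaled, rotated eye point lands exactly on the anchor.
    position = np.asarray(anchor, float) - rotation @ (scale * np.asarray(eye_offset_local, float))
    return position, rotation

# Example: a 30%-scale hologram on a table, eyes 1.6 m up in its own frame,
# oriented toward a standing user whose eyes are at (1.0, 1.7, 2.0).
pos, rot = place_hologram(anchor=[0.5, 1.1, 0.0], eye_offset_local=[0.0, 1.6, 0.0],
                          scale=0.3, local_eyes=[1.0, 1.7, 2.0])
```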
2019
Authors
Zorzal, ER; Sousa, M; Mendes, D; dos Anjos, RK; Medeiros, D; Paulo, SF; Rodrigues, P; Mendes, JJ; Delmas, V; Uhl, JF; Mogorron, J; Jorge, JA; Lopes, DS;
Publication
COMPUTERS & GRAPHICS-UK
Abstract
3D reconstruction from anatomical slices allows anatomists to create three-dimensional depictions of real structures by tracing organs from sequences of cryosections. However, conventional user interfaces rely on single-user experiences and mouse-based input to create content for education or training purposes. In this work, we present Anatomy Studio, a collaborative Mixed Reality tool for virtual dissection that combines tablets with styli and see-through head-mounted displays to assist anatomists by easing manual tracing and the exploration of cryosection images. We contribute novel interaction techniques intended to promote spatial understanding and expedite manual segmentation. By using mid-air interactions and interactive surfaces, anatomists can easily access any cryosection and edit contours while following other users' contributions. A user study including experienced anatomists and medical professionals, conducted in real working sessions, demonstrates that Anatomy Studio is appropriate and useful for 3D reconstruction. Results indicate that Anatomy Studio encourages closely coupled collaboration and group discussion to achieve deeper insights.
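For readers unfamiliar with slice-based reconstruction, the sketch below shows the basic idea of stacking traced 2D contours at their slice depths to obtain a 3D point cloud that a later surface-reconstruction step could mesh. It is not Anatomy Studio's pipeline; the slice spacing and pixel size are illustrative assumptions.

```python
# A minimal sketch, assuming fixed slice spacing and pixel size (both hypothetical),
# of stacking traced 2D contours into a 3D point cloud; not Anatomy Studio's pipeline.
import numpy as np

def stack_contours(contours, slice_spacing_mm=1.0, pixel_size_mm=0.5):
    """contours: list indexed by slice order; each entry is a list of (x, y) pixels."""
    points = []
    for slice_index, contour in enumerate(contours):
        z = slice_index * slice_spacing_mm                 # depth of this cryosection
        for x, y in contour:
            points.append((x * pixel_size_mm, y * pixel_size_mm, z))
    return np.array(points)

# Two square contours traced on consecutive slices yield an 8-point cloud that a
# surface-reconstruction step (e.g. lofting between contours) could turn into a mesh.
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
cloud = stack_contours([square, square], slice_spacing_mm=2.0)
```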
2019
Authors
Mendes, D; Caputo, FM; Giachetti, A; Ferreira, A; Jorge, J;
Publication
COMPUTER GRAPHICS FORUM
Abstract
Interactions within virtual environments often require manipulating 3D virtual objects. To this end, researchers have endeavoured to find efficient solutions using either traditional input devices or different input modalities, such as touch and mid-air gestures. Different virtual environments and input modalities present specific issues for controlling object position, orientation, and scale: traditional mouse input, for example, presents non-trivial challenges because of the need to map 2D input to 3D actions. While interactive surfaces enable more natural approaches, they still require smart mappings. Mid-air gestures can be exploited to offer natural manipulations that mimic interactions with physical objects, but these approaches often lack precision and control. All these issues and many others have been addressed in a large body of work. In this article, we survey the state of the art in 3D object manipulation, ranging from traditional desktop approaches to touch and mid-air interfaces, for interacting in diverse virtual environments. We propose a new taxonomy to better classify manipulation properties. Using our taxonomy, we discuss the techniques presented in the surveyed literature, highlighting trends, guidelines, and open challenges that can be useful both to future research and to developers of 3D user interfaces.
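The 2D-to-3D mapping problem mentioned for mouse input can be illustrated with a common baseline technique: intersecting the cursor's picking ray with a camera-facing plane through the object, so that a screen-space drag becomes a world-space translation. The sketch below uses hypothetical names and a simple pinhole-camera model as assumptions; it is not taken from any particular surveyed technique.

```python
# A minimal sketch of the 2D-to-3D mapping challenge, assuming a pinhole camera at
# cam_pos looking down -z; names and parameters are hypothetical, not from the survey.
import numpy as np

def mouse_ray(px, py, width, height, fov_y, cam_pos):
    """Return the world-space ray (origin, direction) through pixel (px, py)."""
    aspect = width / height
    nx = (2 * px / width - 1) * np.tan(fov_y / 2) * aspect
    ny = (1 - 2 * py / height) * np.tan(fov_y / 2)          # pixel y grows downwards
    direction = np.array([nx, ny, -1.0])
    return np.asarray(cam_pos, float), direction / np.linalg.norm(direction)

def drag_translation(p0, p1, screen, fov_y, cam_pos, object_pos):
    """Turn a mouse drag from pixel p0 to p1 into a translation on a
    camera-facing plane through the object."""
    plane_normal = np.array([0.0, 0.0, 1.0])
    plane_d = np.dot(plane_normal, object_pos)

    def hit(pixel):
        origin, direction = mouse_ray(*pixel, *screen, fov_y, cam_pos)
        t = (plane_d - np.dot(plane_normal, origin)) / np.dot(plane_normal, direction)
        return origin + t * direction                       # ray-plane intersection

    return hit(p1) - hit(p0)                                # world-space displacement

# Dragging 40 px to the right moves the object along +x on its camera-facing plane.
delta = drag_translation((400, 300), (440, 300), screen=(800, 600),
                         fov_y=np.radians(60), cam_pos=[0, 0, 5],
                         object_pos=np.array([0.0, 0.0, 0.0]))
```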
2019
Authors
Sousa, M; Mendes, D; Jorge, JA;
Publication
CoRR