
Details

  • Name

    Daniel Mendes
  • Cluster

    Computer Science
  • Role

    Assistant Researcher
  • Since

    1st April 2020
Publications

2019

VisMillion: A novel interactive visualization technique for real-time big data

Authors
Pires, G; Mendes, D; Goncalves, D;

Publication
PROCEEDINGS OF THE 2019 INTERNATIONAL CONFERENCE ON GRAPHICS AND INTERACTION (ICGI 2019)

Abstract
The rapid increase of connected devices causes more and more data to be generated and, in some cases, this data needs to be analyzed as it is received. As such, the challenge of presenting streaming data in such a way that changes in the regular flow can be detected needs to be tackled, so that timely and informed decisions can be made. This requires users to be able to perceive the information being received in the moment in detail, while maintaining the context. In this paper, we propose VisMillion, a visualization technique for large amounts of streaming data, following the concept of graceful degradation. It comprises several different modules positioned side by side, corresponding to different contiguous time spans, from the last few seconds to a historical view of all data received in the stream so far. Data flows through each one from right to left and, the more recent the data, the more detailed its presentation. To this end, each module uses a different technique to aggregate and process information, with special care to ensure visual continuity between modules to facilitate the analysis. VisMillion was validated through a usability evaluation with 21 participants, as well as performance tests. Results show that it fulfills its objective, successfully aiding users to detect changes, patterns and anomalies in the information being received.

2019

WARPING DEIXIS: Distorting Gestures to Enhance Collaboration

Authors
Sousa, M; dos Anjos, RK; Mendes, D; Billinghurst, M; Jorge, J;

Publication
CHI 2019: PROCEEDINGS OF THE 2019 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS

Abstract
When engaged in communication, people often rely on pointing gestures to refer to out-of-reach content. However, observers frequently misinterpret the target of a pointing gesture. Previous research suggests that to perform a pointing gesture, people place the index finger on or close to a line connecting the eye to the referent, while observers interpret pointing gestures by extrapolating the referent using a vector defined by the arm and index finger. In this paper we present Warping Deixis, a novel approach to improving the perception of pointing gestures and facilitating communication in collaborative Extended Reality environments. By warping the virtual representation of the pointing individual, we are able to match the pointing expression to the observer's perception. We evaluated our approach in a co-located, side-by-side virtual reality scenario. Results suggest that our approach is effective in improving the interpretation of pointing gestures in shared virtual environments.

2019

Adventures in Hologram Space: Exploring the Design Space of Eye-to-eye Volumetric Telepresence

Authors
dos Anjos, RK; Sousa, M; Mendes, D; Medeiros, D; Billinghurst, M; Anslow, C; Jorge, J;

Publication
25TH ACM SYMPOSIUM ON VIRTUAL REALITY SOFTWARE AND TECHNOLOGY (VRST 2019)

Abstract
Modern volumetric projection-based telepresence approaches are capable of providing realistic full-size virtual representations of remote people. However, interacting with full-size people may not be desirable due to the spatial constraints of the physical environment, application context, or display technology, and the miniaturization of remote people is known to create an eye-gaze matching problem. Eye contact is essential to communication, as it allows people to use natural nonverbal cues and improves the sense of "being there". In this paper we discuss the design space for interacting with volumetric representations of people and present an approach for dynamically manipulating the scale, orientation, and position of holograms that guarantees eye contact. We created a working augmented reality-based prototype and validated it with 14 participants.

2019

Anatomy Studio: A tool for virtual dissection through augmented 3D reconstruction

Authors
Zorzal, ER; Sousa, M; Mendes, D; dos Anjos, RK; Medeiros, D; Paulo, SF; Rodrigues, P; Mendes, JJ; Delmas, V; Uhl, JF; Mogorron, J; Jorge, JA; Lopes, DS;

Publication
COMPUTERS & GRAPHICS-UK

Abstract
3D reconstruction from anatomical slices allows anatomists to create three-dimensional depictions of real structures by tracing organs from sequences of cryosections. However, conventional user interfaces rely on single-user experiences and mouse-based input to create content for education or training purposes. In this work, we present Anatomy Studio, a collaborative Mixed Reality tool for virtual dissection that combines tablets with styli and see-through head-mounted displays to assist anatomists by easing manual tracing and exploring cryosection images. We contribute novel interaction techniques intended to promote spatial understanding and expedite manual segmentation. By using mid-air interactions and interactive surfaces, anatomists can easily access any cryosection and edit contours, while following other users' contributions. A user study including experienced anatomists and medical professionals, conducted in real working sessions, demonstrates that Anatomy Studio is appropriate and useful for 3D reconstruction. Results indicate that Anatomy Studio encourages closely-coupled collaboration and group discussion to achieve deeper insights.

2019

A Survey on 3D Virtual Object Manipulation: From the Desktop to Immersive Virtual Environments

Authors
Mendes, D; Caputo, FM; Giachetti, A; Ferreira, A; Jorge, J;

Publication
COMPUTER GRAPHICS FORUM

Abstract
Interactions within virtual environments often require manipulating 3D virtual objects. To this end, researchers have endeavoured to find efficient solutions using either traditional input devices or different input modalities, such as touch and mid-air gestures. Different virtual environments and diverse input modalities present specific issues for controlling object position, orientation and scaling: traditional mouse input, for example, presents non-trivial challenges because of the need to map between 2D input and 3D actions. While interactive surfaces enable more natural approaches, they still require smart mappings. Mid-air gestures can be exploited to offer natural manipulations mimicking interactions with physical objects. However, these approaches often lack precision and control. All these issues and many others have been addressed in a large body of work. In this article, we survey the state of the art in 3D object manipulation, ranging from traditional desktop approaches to touch and mid-air interfaces for interacting in diverse virtual environments. We propose a new taxonomy to better classify manipulation properties. Using our taxonomy, we discuss the techniques presented in the surveyed literature, highlighting trends, guidelines and open challenges that can be useful both to future research and to developers of 3D user interfaces.