Publications

Publications by Daniel Mendes

2015

LS3D: LEGO Search Combining Speech and Stereoscopic 3D

Authors
Pascoal, PB; Mendes, D; Henriques, D; Trancoso, I; Ferreira, A;

Publication
Int. J. Creative Interfaces Comput. Graph.

Abstract
The number of available 3D digital objects has been increasing considerably. As such, searching large collections has been the subject of vast research. However, the main focus has been on algorithms and techniques for classification, indexing, and retrieval. While some work has been done on query interfaces and results visualization, it does not explore natural interactions. The authors propose a speech interface for 3D object retrieval in immersive virtual environments. As a proof of concept, they developed the LS3D prototype, using the context of LEGO blocks to understand how people naturally describe such objects. Through a preliminary study, it was found that participants mainly resorted to verbal descriptions. Considering these descriptions and using a low-cost visualization device, the authors developed their solution. They compared it with a commercial application through a user evaluation. Results suggest that LS3D can outperform its competitor and ensures better performance and results perception than traditional approaches for 3D object retrieval.

2019

WARPING DEIXIS: Distorting Gestures to Enhance Collaboration

Authors
Sousa, M; dos Anjos, RK; Mendes, D; Billinghurst, M; Jorge, J;

Publication
CHI 2019: PROCEEDINGS OF THE 2019 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS

Abstract
When engaged in communication, people often rely on pointing gestures to refer to out-of-reach content. However, observers frequently misinterpret the target of a pointing gesture. Previous research suggests that to perform a pointing gesture, people place the index finger on or close to a line connecting the eye to the referent, while observers interpret pointing gestures by extrapolating the referent using a vector defined by the arm and index finger. In this paper we present Warping Deixis, a novel approach to improving the perception of pointing gestures and facilitating communication in collaborative Extended Reality environments. By warping the virtual representation of the pointing individual, we are able to match the pointing expression to the observer's perception. We evaluated our approach in a co-located, side-by-side virtual reality scenario. Results suggest that our approach is effective in improving the interpretation of pointing gestures in shared virtual environments.

2014

ThumbCam: Returning to single touch interactions to explore 3D virtual environments

Authors
Mendes, D; Sousa, M; Ferreira, A; Jorge, JA;

Publication
ITS

Abstract
Three-dimensional virtual environments are present in many different applications and are used even on small handheld devices. To navigate these environments on such devices, most current solutions rely on multi-touch interactions. However, previous works have already stated that multi-touch gestures on smartphones are not always feasible. In this work we present ThumbCam, a novel single-touch technique for camera manipulation in 3D virtual environments. With our solution, the user is able to move around, look around, and circle points of interest, while interacting using only their thumb. We compare ThumbCam with other state-of-the-art techniques, showing that it can offer more operations with a single touch. A qualitative user evaluation revealed that users found our solution appealing.

2017

VRRRRoom: Virtual Reality for Radiologists in the Reading Room

Authors
Sousa, M; Mendes, D; Paulo, S; Matela, N; Jorge, J; Lopes, DS;

Publication
PROCEEDINGS OF THE 2017 ACM SIGCHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI'17)

Abstract
Reading room conditions, such as illumination, ambient light, human factors, and display luminance, play an important role in how radiologists analyze and interpret images. Indeed, serious diagnostic errors can occur when observing images through everyday monitors. Typically, these occur whenever professionals are ill-positioned with respect to the display or view images under improper light and luminance conditions. In this work, we show that virtual reality can assist radiodiagnostics by considerably diminishing or cancelling out the effects of unsuitable ambient conditions. Our approach combines immersive head-mounted displays with interactive surfaces to support professional radiologists in analyzing medical images and formulating diagnostics. We evaluated our prototype with two senior medical doctors and four seasoned radiology fellows. Results indicate that our approach constitutes a viable, flexible, portable, and cost-efficient alternative to traditional radiology reading rooms.

2019

Safe Walking in VR

Authors
Sousa, M; Mendes, D; Jorge, JA;

Publication
The 17th International Conference on Virtual-Reality Continuum and its Applications in Industry, VRCAI 2019, Brisbane, QLD, Australia, November 14-16, 2019.

Abstract
Common natural walking techniques for navigating virtual environments feature constraints that make them difficult to use in cramped home environments. Indeed, natural walking requires unobstructed, open space to allow users to roam around without fear of stumbling on obstacles while immersed in a virtual world. In this work, we propose a new virtual locomotion technique, CWIP-AVR, that allows people to take advantage of the available physical space and empowers them to use natural walking to navigate the virtual world. To inform users about real-world hazards, our approach uses augmented virtual reality visual indicators. A user evaluation suggests that CWIP-AVR allows people to navigate safely while switching between locomotion modes flexibly and maintaining an adequate degree of immersion.

2019

Negative Space: Workspace Awareness in 3D Face-to-Face Remote Collaboration

Authors
Sousa, M; Mendes, D; dos Anjos, RK; Lopes, DS; Jorge, JA;

Publication
VRCAI

Abstract
Face-to-face telepresence promotes the sense of "being there" and can improve collaboration by allowing immediate understanding of remote people's nonverbal cues. Several approaches successfully explored interactions with 2D content using a see-through whiteboard metaphor. However, with 3D content, there is a decrease in awareness due to ambiguities arising from participants' opposing points of view. In this paper, we investigate how people and content should be presented for discussing 3D renderings within face-to-face collaborative sessions. To this end, we performed a user evaluation to compare four different conditions, in which we varied the reflections of both the workspace and the remote people's representation. Results suggest that remote collaboration potentially benefits more from workspace consistency than from fidelity of people's representation. We contribute a novel design space, the Negative Space, for remote face-to-face collaboration focusing on 3D content.
