
Publications by Daniel Mendes

2011

Combining bimanual manipulation and pen-based input for 3D modelling

Authors
Lopes, P; Mendes, D; Araújo, B; Jorge, JA;

Publication
Sketch Based Interfaces and Modeling, Vancouver, BC, Canada, 5-7 August 2011. Proceedings

Abstract
Multitouch-enabled surfaces can bring advantages to modelling scenarios, in particular if bimanual and pen input can be combined. In this work, we assess the suitability of multitouch interfaces for 3D sketching tasks. We developed a multitouch-enabled version of ShapeShop, in which bimanual gestures allow users to explore the canvas through camera operations while using a pen to sketch. This provides a comfortable setting familiar to most users. Our contribution focuses on comparing the combined approach (bimanual and pen) with the pen-only interface on similar tasks. We conducted an evaluation with ten sketching experts who exercised both techniques. Results show that our approach both simplifies the workflow and lowers task times when compared to the pen-only interface, which is what most current sketching applications provide. © 2011 ACM.

2011

Virtual LEGO Modelling on Multi-Touch Surfaces

Authors
Mendes, D; Ferreira, A;

Publication
WSCG 2011: FULL PAPERS PROCEEDINGS

Abstract
Construction of LEGO models is a popular hobby, not only among children and young teenagers, but also among adults of all ages. Following the technological evolution and the integration of computers into everyday life, several applications for virtual LEGO modelling have been created. However, these applications generally have interfaces based on windows, icons, menus and pointing devices, the so-called WIMP interfaces, making them unnatural and hard to use for many users. Taking advantage of new trends in interaction paradigms, we developed an innovative solution for virtual LEGO modelling using a horizontal multi-touch surface. To achieve better results, we selected the most common virtual LEGO applications and performed a comparative study, identifying the advantages and disadvantages of each one. In this paper we briefly present that study and describe the application developed upon it.

2023

MAGIC: Manipulating Avatars and Gestures to Improve Remote Collaboration

Authors
Fidalgo, CG; Sousa, M; Mendes, D; dos Anjos, RK; Medeiros, D; Singh, K; Jorge, J;

Publication
2023 IEEE CONFERENCE ON VIRTUAL REALITY AND 3D USER INTERFACES (VR)

Abstract
Remote collaborative work has become pervasive in many settings, ranging from engineering to medical professions. Users are immersed in virtual environments and communicate through life-sized avatars that enable face-to-face collaboration. Within this context, users often collaboratively view and interact with virtual 3D models, for example to assist in the design of new devices such as customized prosthetics, vehicles or buildings. Discussing such shared 3D content face-to-face, however, poses a variety of challenges such as ambiguities, occlusions, and different viewpoints that all decrease mutual awareness, which in turn leads to decreased task performance and increased errors. To address this challenge, we introduce MAGIC, a novel approach for understanding pointing gestures in a face-to-face shared 3D space, improving mutual understanding and awareness. Our approach distorts the remote user's gestures to correctly reflect them in the local user's reference space when face-to-face. To measure what two users perceive in common when using pointing gestures in a shared 3D space, we introduce a novel metric called pointing agreement. Results from a user study suggest that MAGIC significantly improves pointing agreement in face-to-face collaboration settings, improving co-presence and awareness of interactions performed in the shared space. We believe that MAGIC improves remote collaboration by enabling simpler communication mechanisms and better mutual awareness.

2014

Interactive Tabletops for Architectural Visualization: Combining Stereoscopy and Touch Interfaces for Cultural Heritage

Authors
Figueiredo, B; Costa, ECE; Araujo, B; Fonseca, F; Mendes, D; Jorge, JA; Duarte, JP;

Publication
FUSION: DATA INTEGRATION AT ITS BEST, VOL 1

Abstract
This paper presents an interactive apparatus for didactically exploring Alberti's treatise on architecture, De re aedificatoria, through generative design systems, namely shape grammars. The apparatus allows users to interactively explore such architectural knowledge in appealing and informal ways, by enabling them to visualize and manipulate different design solutions in real time. The authors identify the difficulties of encoding the architectural knowledge of a parametric design model into an interactive apparatus to be used by laypeople. Finally, the authors discuss the results of a survey conducted with users who interacted with the prototype, in order to assess its ability to communicate the knowledge of an architectural language.

2015

LS3D: LEGO Search Combining Speech and Stereoscopic 3D

Authors
Pascoal, PB; Mendes, D; Henriques, D; Trancoso, I; Ferreira, A;

Publication
International Journal of Creative Interfaces and Computer Graphics

Abstract
The number of available 3D digital objects has been increasing considerably. As such, searching large collections has been the subject of extensive research. However, the main focus has been on algorithms and techniques for classification, indexing and retrieval. While some work has been done on query interfaces and results visualization, it does not explore natural interactions. The authors propose a speech interface for 3D object retrieval in immersive virtual environments. As a proof of concept, they developed the LS3D prototype, using the context of LEGO blocks to understand how people naturally describe such objects. In a preliminary study, they found that participants mainly resorted to verbal descriptions. Building on these descriptions and using a low-cost visualization device, the authors developed their solution and compared it with a commercial application in a user evaluation. Results suggest that LS3D can outperform its commercial counterpart, offering better performance and result perception than traditional approaches to 3D object retrieval.

2019

WARPING DEIXIS: Distorting Gestures to Enhance Collaboration

Authors
Sousa, M; dos Anjos, RK; Mendes, D; Billinghurst, M; Jorge, J;

Publication
CHI 2019: PROCEEDINGS OF THE 2019 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS

Abstract
When engaged in communication, people often rely on pointing gestures to refer to out-of-reach content. However, observers frequently misinterpret the target of a pointing gesture. Previous research suggests that to perform a pointing gesture, people place the index finger on or close to a line connecting the eye to the referent, while observers interpret pointing gestures by extrapolating the referent using a vector defined by the arm and index finger. In this paper we present Warping Deixis, a novel approach to improving the perception of pointing gestures and facilitating communication in collaborative Extended Reality environments. By warping the virtual representation of the pointing individual, we are able to match the pointing expression to the observer's perception. We evaluated our approach in a co-located, side-by-side virtual reality scenario. Results suggest that our approach is effective in improving the interpretation of pointing gestures in shared virtual environments.
