
About

Daniel Mendes is an Assistant Professor at the Faculty of Engineering of the University of Porto, Portugal, and a researcher at INESC TEC. He received his Ph.D. (2018), MSc (2011), and BSc (2008) degrees in Computer Science and Engineering from Instituto Superior Técnico, University of Lisbon. His main areas of interest are Human-Computer Interaction, 3D User Interfaces, Virtual and Augmented Reality, Multimodal Interfaces, and Touch/Gesture-based Interactions. He has been involved in several national research projects funded by the Portuguese Foundation for Science and Technology (FCT). He has co-authored over 60 papers published in peer-reviewed scientific journals, conferences, and meetings. He is a member of ACM, IEEE, Eurographics, and the Portuguese Group for Computer Graphics.

Details

  • Name

    Daniel Mendes
  • Role

    Senior Researcher
  • Since

    1 April 2020
Publications

2023

Impact of incidental visualizations on primary tasks

Authors
Moreira, J; Mendes, D; Gonçalves, D;

Publication
Information Visualization

Abstract
Incidental visualizations are meant to be seen at-a-glance, on-the-go, and during short exposure times. They always appear side-by-side with an ongoing primary task while providing ancillary information relevant to that task. They differ from glanceable visualizations because looking at them is never the major focus, and they differ from ambient visualizations because they are not embedded in the environment, but appear when needed. However, unlike glanceable and ambient visualizations, which have been studied in the past, incidental visualizations have yet to be explored in depth. In particular, it is still not clear what their impact is on users' performance of primary tasks. Therefore, we conducted an empirical online between-subjects user study in which participants had to play a maze game as their primary task. Their goal was to complete several mazes as quickly as possible to maximize their score. This game was chosen to be a cognitively demanding task, bound to be significantly affected if incidental visualizations have a meaningful impact. At the same time, they had to answer a question that appeared while playing, regarding the path followed so far. For half the participants, an incidental visualization containing information useful for answering the question was shown for a short period while playing. We analyzed various metrics to understand how maze performance was affected by the incidental visualization. Additionally, we aimed to understand whether working memory would influence how the maze was played and how the visualizations were perceived. We concluded that incidental visualizations of the type used in this study do not disrupt people while playing the maze as their primary task. Furthermore, our results strongly suggest that the information conveyed by the visualization improved their performance in answering the question. Finally, working memory had no impact on the participants' results.

2023

MAGIC: Manipulating Avatars and Gestures to Improve Remote Collaboration

Authors
Fidalgo, CG; Sousa, M; Mendes, D; dos Anjos, RK; Medeiros, D; Singh, K; Jorge, J;

Publication
2023 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)

Abstract
Remote collaborative work has become pervasive in many settings, ranging from engineering to medical professions. Users are immersed in virtual environments and communicate through life-sized avatars that enable face-to-face collaboration. Within this context, users often collaboratively view and interact with virtual 3D models, for example to assist in the design of new devices such as customized prosthetics, vehicles, or buildings. Discussing such shared 3D content face-to-face, however, presents a variety of challenges, such as ambiguities, occlusions, and different viewpoints, all of which decrease mutual awareness, which in turn leads to decreased task performance and increased errors. To address this challenge, we introduce MAGIC, a novel approach for understanding pointing gestures in a face-to-face shared 3D space, improving mutual understanding and awareness. Our approach distorts the remote user's gestures to correctly reflect them in the local user's reference space when face-to-face. To measure what two users perceive in common when using pointing gestures in a shared 3D space, we introduce a novel metric called pointing agreement. Results from a user study suggest that MAGIC significantly improves pointing agreement in face-to-face collaboration settings, improving co-presence and awareness of interactions performed in the shared space. We believe that MAGIC improves remote collaboration by enabling simpler communication mechanisms and better mutual awareness.

2023

CIDER: Collaborative Interior Design in Extended Reality

Authors
Pintani, D; Caputo, A; Mendes, D; Giachetti, A;

Publication
Proceedings of the 15th Biannual Conference of the Italian SIGCHI Chapter, CHItaly 2023, Torino, Italy, September 20-22, 2023

Abstract
Despite significant efforts dedicated to exploring the potential applications of collaborative mixed reality, existing works mostly focus on creating shared virtual/mixed environments and resolving concurrent manipulation issues, rather than supporting an effective collaboration strategy for the design procedure. For this reason, we present CIDER, a system for the collaborative editing of 3D augmented scenes that allows two or more users to manipulate the virtual scene elements independently and without unexpected changes. CIDER is based on the use of "layers" encapsulating the state of the environment, with private layers that can be edited independently and a global one collaboratively updated with "commit" operations. Using this system, implemented for the HoloLens 2 headset and supporting multiple users, we performed a user test on a realistic interior design task, evaluating the general usability and comparing two different approaches to the management of the atomic commit: forced (single-phase) and voting (requiring consensus), analyzing the effects of this choice on collaborative behavior.

2023

Shape-A-Getti: A haptic device for getting multiple shapes using a simple actuator

Authors
Barbosa, F; Mendes, D; Rodrigues, R;

Publication
Computers & Graphics

Abstract
Haptic feedback in Virtual Reality is commonly provided through wearable or grounded devices adapted to specific scenarios and situations. Shape-changing devices allow for the physical representation of different virtual objects but are still a minority, complex, and usually have long transformation times. We present Shape-a-getti, a novel ungrounded, non-wearable, and graspable haptic device that can quickly change between different radially symmetrical shapes. It uses a single actuator to rotate several identical poles distributed along a radius to render the different shapes. The format of the poles defines the possible shapes; in our prototype, we used one that could render concave, straight, and convex shapes with different radii. We conducted a user evaluation with 21 participants, asking them to recognize virtual objects by grasping the Shape-a-getti. Despite having difficulties distinguishing between some objects with very similar shapes, participants could successfully identify virtual objects with different shapes rendered by our device.

2023

Incidental graphical perception: How marks and display time influence accuracy

Authors
Moreira, J; Mendes, D; Gonçalves, D;

Publication
Information Visualization

Abstract
Incidental visualizations are meant to be perceived at-a-glance, on-the-go, and during short exposure times, but are not seen on demand. Instead, they appear in people's fields of view during an ongoing primary task. They differ from glanceable visualizations because the information is not received on demand, and they differ from ambient visualizations because the information is not continuously embedded in the environment. However, current graphical perception guidelines do not consider situations where information is presented at specific moments, during brief exposure times, without being the user's primary focus. Therefore, we conducted a crowdsourced user study with 99 participants to understand how accurate people's incidental graphical perception is. Each participant was tested on one of three conditions: position of dots, length of lines, and angle of lines. We varied the number of elements for each combination and the display time. During the study, participants were asked to perform reproduction tasks, in which they had to recreate a previously shown stimulus. Our results indicate that incidental graphical perception can be accurate when using position, length, and angles. Furthermore, we argue that incidental visualizations should be designed for low exposure times (between 300 and 1000 ms).

Supervised Theses

2022

Object Tracking Using 3D Point Clouds And RGB Images For Autonomous Driving

Author
Daniel Ferreira Brandão

Institution
UP-FEUP

2022

Shape-A-Getti: A Haptic Device for Getting Multiple Shapes Using a Simple Actuator

Author
Filipe Guedes Barbosa

Institution
UP-FEUP

2022

Tangible Tokens for Multitouch Interfaces Based On Extended Triangular Patterns

Author
André Colares Pinto Ramos

Institution
UP-FEUP

2022

Dataflower: harnessing heterogeneous hardware parallelism for creative applications

Author
Pedro Miguel Silva Carlos Sousa Ângelo

Institution
UP-FEUP

2021

Prototipagem de um instrumento musical misto: a expressividade da interface (Prototyping a mixed musical instrument: the expressiveness of the interface)

Author
Henrique Gomes Ferreira

Institution
UP-FEUP