About

Daniel Mendes is an Assistant Professor at the Faculty of Engineering of the University of Porto, Portugal, and a researcher at INESC TEC. He received his PhD (2018), MSc (2011), and BSc (2008) degrees in Computer Science and Engineering from Instituto Superior Técnico, University of Lisbon. His main research interests are Human-Computer Interaction, 3D User Interfaces, Virtual and Augmented Reality, Multimodal Interfaces, and Touch/Gesture-based Interaction. He has been involved in several national research projects funded by the Portuguese Foundation for Science and Technology (FCT) and has co-authored over 60 papers published in peer-reviewed scientific journals, conferences, and meetings. He is a member of ACM, IEEE, Eurographics, and the Portuguese Group for Computer Graphics.

Details

  • Name: Daniel Mendes
  • Role: Senior Researcher
  • Since: 1st April 2020
Publications

2025

Layer-based management of collaborative interior design in extended reality

Authors
Pintani, D.; Caputo, A.; Mendes, D.; Giachetti, A.

Publication
Behaviour & Information Technology

Abstract
We present CIDER, a novel framework for the collaborative editing of 3D augmented scenes. The framework allows multiple users to manipulate the virtual elements added to the real environment independently and without unexpected changes, comparing the different editing proposals and finalising a collaborative result. CIDER leverages 'layers' that encapsulate the state of the environment: private layers can be edited independently by the different users, and a global one can be collaboratively updated through 'commit' operations. In this paper, we describe in detail the system architecture and its implementation as a prototype for the HoloLens 2 headset, as well as the motivations behind the interaction design. The system has been validated with a user study on a realistic interior design task. The study not only evaluated general usability but also compared two different approaches to managing the atomic commit, forced (single-phase) and voting (requiring consensus), analysing the effects of this choice on collaborative behaviour. Based on the users' comments, we improved the interface and further tested the effectiveness of these changes.
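
To make the layer-and-commit idea concrete, here is a minimal sketch in Python. It assumes a scene reduced to a dictionary of object positions; all names (Layer, Scene, edit, commit, the votes parameter) are hypothetical illustrations of the concept, not CIDER's actual HoloLens 2 implementation.

```python
from dataclasses import dataclass, field


@dataclass
class Layer:
    """One user's private layer: their proposed object positions."""
    owner: str
    edits: dict[str, tuple[float, float, float]] = field(default_factory=dict)


class Scene:
    """Global scene state plus one private layer per collaborator."""

    def __init__(self, users: list[str]):
        self.global_state: dict[str, tuple[float, float, float]] = {}
        self.private = {u: Layer(u) for u in users}

    def edit(self, user: str, obj: str, pos: tuple[float, float, float]) -> None:
        # Edits land only in the user's private layer, so collaborators
        # never see unexpected changes before a commit.
        self.private[user].edits[obj] = pos

    def commit(self, user: str, votes: dict[str, bool] | None = None) -> bool:
        """Merge a private layer into the global layer.

        votes=None models the 'forced' (single-phase) commit; passing a
        votes dict models the 'voting' variant, which requires consensus.
        """
        if votes is not None and not all(votes.values()):
            return False  # consensus not reached; global layer unchanged
        self.global_state.update(self.private[user].edits)
        self.private[user].edits.clear()
        return True


scene = Scene(["alice", "bob"])
scene.edit("alice", "sofa", (1.0, 0.0, 2.0))  # visible only to alice
scene.commit("alice", votes={"alice": True, "bob": True})  # voting commit
print(scene.global_state)  # {'sofa': (1.0, 0.0, 2.0)}
```

The votes parameter captures the distinction the study compares: a forced commit always applies, while a voting commit applies only under full consensus.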

2025

Do We Need 3D to See? Impact of Dimensionality of the Virtual Environment on Attention

Authors
Matos, T.; Mendes, D.; Jacob, J.; de Sousa, A. A.; Rodrigues, R.

Publication
2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)

Abstract
Virtual Reality allows users to experience realistic environments in an immersive and controlled manner, which is particularly beneficial in contexts where the real scenario is not easily or safely accessible. The choice between 360° content and 3D models impacts outcomes such as perceived quality and computational cost, but it can also affect user attention. This study explores how attention manifests in VR when using either a 3D model or a 360° image rendered from that model during visuospatial tasks. User tests revealed no significant difference in workload or cybersickness between the two types of content, while the sense of presence was reported to be higher in the 3D environment.

2025

Advancing XR Education: Towards a Multimodal Human-Machine Interaction Course for Doctoral Students in Computer Science

Authors
Silva, S.; Marques, B.; Mendes, D.; Rodrigues, R.

Publication
European Association for Computer Graphics 46th Annual Conference, Eurographics 2025, Education Papers

Abstract
Nowadays, eXtended Reality (XR) has matured to the point where it seamlessly integrates various input and output modalities, enhancing the way users interact with digital environments. From traditional controllers and hand tracking to voice commands, eye tracking, and even biometric sensors, XR systems now offer more natural interactions. Similarly, output modalities have expanded beyond visual displays to include haptic feedback, spatial audio, and others, enriching the overall user experience. As the field of XR becomes increasingly multimodal, the education process must also evolve to reflect these advancements. There is a growing need to incorporate additional modalities into the curriculum, helping students understand their relevance and practical applications. By exposing students to a diverse range of interaction techniques, they can better assess which modalities are most suitable for different contexts, enabling them to design more effective and human-centered solutions. This work describes an Advanced Human-Machine Interaction (HMI) course aimed at doctoral students in Computer Science. The primary objective is to provide students with the necessary knowledge in HMI by enabling them to articulate the fundamental concepts of the field, recognize and analyze the role of human factors, identify modern interaction methods and technologies, apply human-centered design (HCD) principles to interactive system design and development, and implement appropriate methods for assessing interaction experiences across advanced HMI topics. The course structure, the range of topics covered, the assessment strategies, and the hardware and infrastructure employed are presented. Additionally, the paper highlights mini-projects, including the flexibility for students to integrate their own projects, fostering personalized and project-driven learning. The discussion reflects on the challenges inherent in keeping pace with this rapidly evolving field and emphasizes the importance of adapting to emerging trends. Finally, the paper outlines future directions and potential enhancements for the course.

2024

Incidental graphical perception: How marks and display time influence accuracy

Authors
Moreira, J.; Mendes, D.; Gonçalves, D.

Publication
Information Visualization

Abstract
Incidental visualizations are meant to be perceived at a glance, on the go, and during short exposure times, but they are not seen on demand. Instead, they appear in people's fields of view during an ongoing primary task. They differ from glanceable visualizations because the information is not received on demand, and from ambient visualizations because the information is not continuously embedded in the environment. However, current graphical perception guidelines do not consider situations where information is presented at specific moments, during brief exposure times, without being the user's primary focus. We therefore conducted a crowdsourced user study with 99 participants to understand how accurate people's incidental graphical perception is. Each participant was tested on one of three conditions: position of dots, length of lines, or angle of lines. We varied the number of elements for each combination and the display time. During the study, participants performed reproduction tasks in which they had to recreate a previously shown stimulus. Our results indicate that incidental graphical perception can be accurate when using position, length, and angles. Furthermore, we argue that incidental visualizations should be designed for low exposure times (between 300 and 1000 ms).

2024

Cues to fast-forward collaboration: A Survey of Workspace Awareness and Visual Cues in XR Collaborative Systems

Authors
Assaf, R.; Mendes, D.; Rodrigues, R.

Publication
Computer Graphics Forum

Abstract
Collaboration in extended reality (XR) environments presents complex challenges that revolve around how users perceive the presence, intentions, and actions of their collaborators. This paper delves into the intricate realm of group awareness, focusing specifically on workspace awareness and the innovative visual cues designed to enhance user comprehension. The research begins by identifying a spectrum of collaborative situations drawn from an analysis of XR prototypes in the existing literature. We then introduce a novel classification for workspace awareness, along with an exploration of visual cues recently employed in research. Lastly, we present the key findings and shine a spotlight on promising yet unexplored topics. This work not only serves as a reference for experienced researchers seeking to inform the design of their own collaborative XR applications but also extends a welcoming hand to newcomers in this dynamic field.

Supervised Theses

2023

Material Changing Haptics for VR

Author
Henrique Melo Ribeiro

Institution
UM

2023

Accountability in Immersive Content Creation Platforms

Author
Luís Guilherme da Costa Castro Neves

Institution
UM

2023

Immersive and collaborative web-based 3D design review

Author
Rodrigo Assaf

Institution
UM

2023

Exploring Pseudo-Haptics for object compliance in VR

Author
Carlos Daniel Rodrigues Lousada

Institution
UM

2023

Improving Absolute Inputs for Interactive Surfaces in VR

Author
Diogo Guimarães do Rosário

Institution
UM