Details
Name
Daniel Mendes
Position
Senior Researcher
Since
01 April 2020
Nationality
Portugal
Centre
Human-Centred Computing and Information Science
Contacts
+351222094000
daniel.mendes@inesctec.pt
2025
Authors
Pinto, R; Matos, T; Mendes, D; Rodrigues, R;
Publication
VRST
Abstract
Virtual Reality applications increasingly require methods to effectively guide users to important elements within the virtual environment. Central visual cues are the most common method and have proven effective for directing attention, yet they often compromise the level of immersion. This work explored whether peripheral visual cues could serve as an alternative approach that supports attention guidance while preserving sense of presence. We performed a user study with 24 participants to compare four visual cues: two central cues (Floating Text and Floating Arrow) and two peripheral cues (Edge Lighting and Swarm). Users completed a visual search task of 7 objects for each visual cue, with data collected on performance through reaction time, round time, and total errors. Additionally, presence and workload were evaluated through the IGROUP Presence Questionnaire and NASA Task Load Index, respectively. No statistically significant differences were found between peripheral and central cues for presence; however, performance and workload varied significantly based on the specific cue implementation rather than the type of positioning. Our findings indicate that peripheral positioning does not inherently provide attention guidance advantages over central placement. Instead, thoughtful cue design, with a simple yet clear appearance and behavior, appears to be the critical factor for achieving effective attention guidance while preserving presence in IVEs. These results provide valuable insights for VR content creators to facilitate the design process of VR experiences. © 2025 Copyright held by the owner/author(s).
2025
Authors
Silva, S; Marques, B; Mendes, D; Rodrigues, R;
Publication
EUROPEAN ASSOCIATION FOR COMPUTER GRAPHICS 46TH ANNUAL CONFERENCE, EUROGRAPHICS 2025, EDUCATION PAPERS
Abstract
Nowadays, eXtended Reality (XR) has matured to the point where it seamlessly integrates various input and output modalities, enhancing the way users interact with digital environments. From traditional controllers and hand tracking to voice commands, eye tracking, and even biometric sensors, XR systems now offer more natural interactions. Similarly, output modalities have expanded beyond visual displays to include haptic feedback, spatial audio, and others, enriching the overall user experience. As the field of XR becomes increasingly multimodal, the education process must also evolve to reflect these advancements. There is a growing need to incorporate additional modalities into the curriculum, helping students understand their relevance and practical applications. By exposing students to a diverse range of interaction techniques, they can better assess which modalities are most suitable for different contexts, enabling them to design more effective and human-centered solutions. This work describes an Advanced Human-Machine Interaction (HMI) course aimed at Doctoral Students in Computer Science. The primary objective is to provide students with the necessary knowledge in HMI by enabling them to articulate the fundamental concepts of the field, recognize and analyze the role of human factors, identify modern interaction methods and technologies, apply Human-Centered Design (HCD) principles to interactive system design and development, and implement appropriate methods for assessing interaction experiences across advanced HMI topics. To this end, the course structure, the range of topics covered, assessment strategies, and the hardware and infrastructure employed are presented. Additionally, the mini-projects are highlighted, including the flexibility for students to integrate their own projects, fostering personalized and project-driven learning.
The discussion reflects on the challenges inherent in keeping pace with this rapidly evolving field and emphasizes the importance of adapting to emerging trends. Finally, the paper outlines future directions and potential enhancements for the course.
2025
Authors
Matos, T; Mendes, D; Jacob, J; de Sousa, AA; Rodrigues, R;
Publication
2025 IEEE CONFERENCE ON VIRTUAL REALITY AND 3D USER INTERFACES ABSTRACTS AND WORKSHOPS, VRW
Abstract
Virtual Reality allows users to experience realistic environments in an immersive and controlled manner, which is particularly beneficial for contexts where the real scenario is not easily or safely accessible. The choice between 360° content and 3D models impacts outcomes such as perceived quality and computational cost, but can also affect user attention. This study explores how attention manifests in VR using a 3D model or a 360° image rendered from that model during visuospatial tasks. User tests revealed no significant difference in workload or cybersickness between these types of content, while sense of presence was reportedly higher in the 3D environment.
2025
Authors
Pintani, D; Caputo, A; Mendes, D; Giachetti, A;
Publication
BEHAVIOUR & INFORMATION TECHNOLOGY
Abstract
We present CIDER, a novel framework for the collaborative editing of 3D augmented scenes. The framework allows multiple users to manipulate the virtual elements added to the real environment independently and without unexpected changes, comparing the different editing proposals and finalising a collaborative result. CIDER leverages the use of 'layers' encapsulating the state of the environment. Private layers can be edited independently by the different subjects, and a global one can be collaboratively updated with 'commit' operations. In this paper, we describe in detail the system architecture and the implementation as a prototype for the HoloLens 2 headsets, as well as the motivations behind the interaction design. The system has been validated with a user study on a realistic interior design task. The study not only evaluated the general usability but also compared two different approaches for the management of the atomic commit: forced (single-phase) and voting (requiring consensus), analyzing the effects of this choice on collaborative behaviour. According to the users' comments, we performed improvements to the interface and further tested their effectiveness.
2024
Authors
Moreira, J; Pinto, D; Mendes, D; Gonçalves, D;
Publication
2024 INTERNATIONAL CONFERENCE ON GRAPHICS AND INTERACTION, ICGI
Abstract
Incidental visualizations allow individuals to access information on-the-go, at-a-glance, and without needing to consciously search for it. Unlike ambient visualizations, incidental visualizations are not fixed in a specific location and only appear briefly within a person's field of view while they are engaged in a primary task. Despite their potential, incidental visualizations have not yet been thoroughly studied in the current literature. We conducted exploratory research to establish the distinctiveness of incidental visualizations and to advocate for their study as an independent research topic. We tested both incidental and ambient visualizations in two separate studies, each involving one specific scenario: a cognitively demanding primary task (42 participants) and a mechanical primary task (28 participants). Our findings show that in the cognitively demanding task, both types of visualizations resulted in similar performance. However, in the mechanical task, ambient visualizations led to better results compared to incidental visualizations. Based on these results, we argue that incidental visualizations should be further explored in scenarios involving physical requirements, as these situations present the greatest challenges for their integration.