
About

He graduated in Systems and Informatics Engineering from the Universidade do Minho in 1998. He carried out his PhD research in image-based 3D reconstruction at Philips Research Labs (Eindhoven, the Netherlands) and the Universidade do Minho, concluding in 2006.

He worked in industry on interactive systems until 2009, when he joined the Department of Informatics Engineering at FEUP, first as an invited assistant professor and later as an assistant professor. Since then he has carried out teaching and research activities in computer graphics, human-computer interaction, and digital game design and development.

He has been a collaborator of INESC TEC since 2011. He is currently also the director of the Master in Multimedia at the Universidade do Porto, and head of the Graphics, Interaction and Gaming (GIG) laboratory at DEI/FEUP.

Topics of interest
Details


  • Name

    Rui Pedro Rodrigues
  • Position

    Senior Researcher
  • Since

    01 November 2011
7
Publications

2025

Guiding Attention in VR: Comparing the Effect of Peripheral and Central Cues on Presence and Workload

Authors
Pinto, R; Matos, T; Mendes, D; Rodrigues, R;

Publication
31ST ACM SYMPOSIUM ON VIRTUAL REALITY SOFTWARE AND TECHNOLOGY, VRST 2025

Abstract
Virtual Reality applications increasingly require methods to effectively guide users to important elements within the virtual environment. Central visual cues are the most common method and have proven effective for directing attention, yet they often compromise the level of immersion. This work explored whether peripheral visual cues could serve as an alternative approach that supports attention guidance while preserving the sense of presence. We performed a user study with 24 participants to compare four visual cues: two central cues (Floating Text and Floating Arrow) and two peripheral cues (Edge Lighting and Swarm). Users completed a visual search task involving 7 objects for each visual cue, with performance data collected on reaction time, round time, and total errors. Additionally, presence and workload were evaluated through the IGROUP Presence Questionnaire and the NASA Task Load Index, respectively. No statistically significant differences were found between peripheral and central cues for presence; however, performance and workload varied significantly based on the specific cue implementation rather than the type of positioning. Our findings indicate that peripheral positioning does not inherently provide attention guidance advantages over central placement. Instead, thoughtful cue design, with a simple yet clear appearance and behavior, appears to be the critical factor for achieving effective attention guidance while preserving presence in IVEs. These results provide valuable insights for VR content creators to facilitate the design process of VR experiences.

2025

Advancing XR Education: Towards a Multimodal Human-Machine Interaction Course for Doctoral Students in Computer Science

Authors
Silva, S; Marques, B; Mendes, D; Rodrigues, R;

Publication
EUROPEAN ASSOCIATION FOR COMPUTER GRAPHICS 46TH ANNUAL CONFERENCE, EUROGRAPHICS 2025, EDUCATION PAPERS

Abstract
Nowadays, eXtended Reality (XR) has matured to the point where it seamlessly integrates various input and output modalities, enhancing the way users interact with digital environments. From traditional controllers and hand tracking to voice commands, eye tracking, and even biometric sensors, XR systems now offer more natural interactions. Similarly, output modalities have expanded beyond visual displays to include haptic feedback, spatial audio, and others, enriching the overall user experience. As the field of XR becomes increasingly multimodal, the education process must also evolve to reflect these advancements. There is a growing need to incorporate additional modalities into the curriculum, helping students understand their relevance and practical applications. By exposing students to a diverse range of interaction techniques, they can better assess which modalities are most suitable for different contexts, enabling them to design more effective and human-centered solutions. This work describes an Advanced Human-Machine Interaction (HMI) course aimed at Doctoral Students in Computer Science. The primary objective is to provide students with the necessary knowledge in HMI by enabling them to articulate the fundamental concepts of the field, recognize and analyze the role of human factors, identify modern interaction methods and technologies, apply HCD principles to interactive system design and development, and implement appropriate methods for assessing interaction experiences across advanced HMI topics. To this end, the course structure, the range of topics covered, the assessment strategies, and the hardware and infrastructure employed are presented. Additionally, the paper highlights mini-projects, including the flexibility for students to integrate their own projects, fostering personalized and project-driven learning. The discussion reflects on the challenges inherent in keeping pace with this rapidly evolving field and emphasizes the importance of adapting to emerging trends. Finally, the paper outlines future directions and potential enhancements for the course.

2025

Do We Need 3D to See? Impact of Dimensionality of the Virtual Environment on Attention

Authors
Matos, T; Mendes, D; Jacob, J; de Sousa, AA; Rodrigues, R;

Publication
2025 IEEE CONFERENCE ON VIRTUAL REALITY AND 3D USER INTERFACES ABSTRACTS AND WORKSHOPS, VRW

Abstract
Virtual Reality allows users to experience realistic environments in an immersive and controlled manner, which is particularly beneficial in contexts where the real scenario is not easily or safely accessible. The choice between 360 content and 3D models impacts outcomes such as perceived quality and computational cost, but can also affect user attention. This study explores how attention manifests in VR using a 3D model or a 360 image rendered from that model during visuospatial tasks. User tests revealed no significant difference in workload or cybersickness between these types of content, while the sense of presence was reportedly higher in the 3D environment.

2024

AIMSM - A Mechanism to Optimize Systems with Multiple AI Models: A Case Study in Computer Vision for Autonomous Mobile Robots

Authors
Ferreira, BG; de Sousa, AJM; Reis, LP; de Sousa, AA; Rodrigues, R; Rossetti, R;

Publication
EPIA (3)

Abstract
This article proposes the Artificial Intelligence Models Switching Mechanism (AIMSM), a novel approach to optimize system resource utilization by allowing systems to switch AI models during runtime in dynamic environments. Many real-world applications utilize multiple data sources and various AI models for different purposes, and in many of those applications not every AI model has to operate all the time. The AIMSM strategically allows the system to activate and deactivate these models, focusing on system resource optimization. The switching of each AI model can be based on any information, such as context or previous results. In the case study of an autonomous mobile robot performing computer vision tasks, the AIMSM helped the system achieve a significant increase in performance, with a 50% average increase in the frames per second (FPS) rate for this specific case study, assuming that no erroneous switching occurred. Experimental results demonstrated that the AIMSM, when properly implemented, can improve system resource utilization efficiency, optimize overall resource consumption, and enhance system performance. The AIMSM presented itself as a better alternative to permanently loading all the models simultaneously, improving the adaptability and functionality of the systems. Using the AIMSM is expected to yield a performance improvement that is particularly relevant to systems with multiple complex AI models, where such models do not all need to be continuously executed, or to systems that benefit from lower resource usage. Code is available at https://github.com/BrunoGeorgevich/AIMSM.
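The core idea described in the abstract — keeping only the models needed for the current context loaded and executed — can be sketched in a few lines of Python. This is an illustrative assumption of how such a switching mechanism might look, not the authors' actual AIMSM implementation; all names here (`ModelSwitcher`, `register`, `switch_context`, `run`) are hypothetical.

```python
# Minimal sketch of a runtime model-switching mechanism: models are
# registered with lazy loaders, and only the models activated for the
# current context are kept in memory and executed on each input frame.

class ModelSwitcher:
    def __init__(self):
        self._loaders = {}   # name -> zero-arg factory that builds the model
        self._active = {}    # name -> currently loaded model instance

    def register(self, name, loader):
        self._loaders[name] = loader

    def switch_context(self, needed):
        """Activate the models in `needed` and unload everything else."""
        for name in list(self._active):
            if name not in needed:
                del self._active[name]                      # free resources
        for name in needed:
            if name not in self._active:
                self._active[name] = self._loaders[name]()  # lazy load

    def run(self, frame):
        """Run only the currently active models on one input frame."""
        return {name: model(frame) for name, model in self._active.items()}


# Usage: two dummy "models"; only the one activated for the context runs.
switcher = ModelSwitcher()
switcher.register("detector", lambda: (lambda frame: f"boxes({frame})"))
switcher.register("segmenter", lambda: (lambda frame: f"mask({frame})"))

switcher.switch_context({"detector"})
print(switcher.run("frame0"))   # only the detector produces output
```

The real system would replace the dummy lambdas with actual model loading and inference, and drive `switch_context` from contextual signals (e.g. the robot's current task), which is where the reported resource savings come from.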

2024

Cues to fast-forward collaboration: A Survey of Workspace Awareness and Visual Cues in XR Collaborative Systems

Authors
Assaf, R; Mendes, D; Rodrigues, R;

Publication
COMPUTER GRAPHICS FORUM

Abstract
Collaboration in extended reality (XR) environments presents complex challenges that revolve around how users perceive the presence, intentions, and actions of their collaborators. This paper delves into the intricate realm of group awareness, focusing specifically on workspace awareness and the innovative visual cues designed to enhance user comprehension. The research begins by identifying a spectrum of collaborative situations drawn from an analysis of XR prototypes in the existing literature. Then, we describe and introduce a novel classification for workspace awareness, along with an exploration of visual cues recently employed in research endeavors. Lastly, we present the key findings and shine a spotlight on promising yet unexplored topics. This work not only serves as a reference for experienced researchers seeking to inform the design of their own collaborative XR applications but also extends a welcoming hand to newcomers in this dynamic field.