About

He graduated in Systems and Informatics Engineering from the University of Minho in 1998. During his PhD, split between Philips Research (Eindhoven) and the University of Minho, he researched 3D reconstruction from images, concluding in 2006.

He worked in industry in the field of interactive systems until he joined FEUP as an Invited Assistant Professor in 2009, in the Department of Informatics Engineering, where he teaches and conducts research in Computer Graphics, Interaction, and Game Design and Development.

He has been a collaborator at INESC TEC/INESC Porto since 2011. He is currently also the director of the Multimedia Master's programme at the University of Porto and responsible for the Graphics, Interaction and Gaming (GIG) Lab at DEI/FEUP.

Interest Topics

Details

  • Name

    Rui Pedro Rodrigues
  • Role

    Senior Researcher
  • Since

    1st November 2011
Publications

2025

Do We Need 3D to See? Impact of Dimensionality of the Virtual Environment on Attention

Authors
Matos, T; Mendes, D; Jacob, J; de Sousa, AA; Rodrigues, R;

Publication
2025 IEEE CONFERENCE ON VIRTUAL REALITY AND 3D USER INTERFACES ABSTRACTS AND WORKSHOPS, VRW

Abstract
Virtual Reality allows users to experience realistic environments in an immersive and controlled manner, which is particularly beneficial for contexts where the real scenario is not easily or safely accessible. The choice between 360° content and 3D models impacts outcomes such as perceived quality and computational cost, but it can also affect user attention. This study explores how attention manifests in VR when using either a 3D model or a 360° image rendered from that model during visuospatial tasks. User tests revealed no significant difference in workload or cybersickness between the two types of content, while the sense of presence was reported to be higher in the 3D environment.

2024

AIMSM - A Mechanism to Optimize Systems with Multiple AI Models: A Case Study in Computer Vision for Autonomous Mobile Robots

Authors
Ferreira, BG; de Sousa, AJM; Reis, LP; de Sousa, AA; Rodrigues, R; Rossetti, R;

Publication
EPIA (3)

Abstract
This article proposes the Artificial Intelligence Models Switching Mechanism (AIMSM), a novel approach to optimize system resource utilization by allowing systems to switch AI models during runtime in dynamic environments. Many real-world applications utilize multiple data sources and various AI models for different purposes, and in many of them not every AI model has to operate all the time. The AIMSM strategically allows the system to activate and deactivate these models, focusing on system resource optimization. The switching of each AI model can be based on any information, such as context or previous results. In the case study of an autonomous mobile robot performing computer vision tasks, the AIMSM helped the system achieve a significant increase in performance, with a 50% average increase in frames per second (FPS), assuming that no erroneous switching occurred. Experimental results have demonstrated that the AIMSM, when properly implemented, can improve system resource utilization efficiency, optimize overall resource consumption, and enhance system performance. The AIMSM proved to be a better alternative to permanently loading all models simultaneously, improving the adaptability and functionality of the systems. Using the AIMSM is expected to yield a performance improvement that is particularly relevant to systems with multiple complex AI models, where such models do not all need to be continuously executed, or to systems that will benefit from lower resource usage. Code is available at https://github.com/BrunoGeorgevich/AIMSM.
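
As a rough illustration of the runtime-switching idea described in the abstract, the sketch below keeps only the models required by the current context loaded and releases the rest. All class, method, and policy names are illustrative assumptions made for this sketch; the authors' actual implementation is the one linked above.

```python
# Minimal sketch of context-driven model switching (illustrative only; see the
# repository linked above for the actual AIMSM implementation).

class ModelSlot:
    """Holds a lazily loaded AI model that can be activated or deactivated."""

    def __init__(self, name, loader):
        self.name = name
        self._loader = loader    # callable that builds/loads the model
        self._model = None       # loaded only while the slot is active

    def activate(self):
        if self._model is None:
            self._model = self._loader()

    def deactivate(self):
        self._model = None       # release memory/GPU resources

    def infer(self, frame):
        if self._model is None:
            raise RuntimeError(f"model '{self.name}' is not active")
        return self._model(frame)


class SwitchingMechanism:
    """Activates only the models that the current context requires."""

    def __init__(self, slots, policy):
        self.slots = {slot.name: slot for slot in slots}
        self.policy = policy     # maps a context to the set of model names to keep active

    def step(self, context, frame):
        wanted = self.policy(context)
        for name, slot in self.slots.items():
            if name in wanted:
                slot.activate()
            else:
                slot.deactivate()
        return {name: self.slots[name].infer(frame) for name in wanted}


# Usage sketch: a robot that only runs the detector indoors and the segmenter
# outdoors, instead of keeping both models loaded at all times.
if __name__ == "__main__":
    slots = [
        ModelSlot("detector", loader=lambda: (lambda frame: "boxes")),
        ModelSlot("segmenter", loader=lambda: (lambda frame: "mask")),
    ]
    policy = lambda ctx: {"detector"} if ctx == "indoor" else {"segmenter"}
    mechanism = SwitchingMechanism(slots, policy)
    print(mechanism.step("indoor", frame=None))   # {'detector': 'boxes'}
    print(mechanism.step("outdoor", frame=None))  # {'segmenter': 'mask'}
```

In this sketch, deactivated slots simply drop their model reference so resources can be reclaimed; a real system would also have to account for model reload cost and avoid rapid oscillation between contexts.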

2024

Cues to fast-forward collaboration: A Survey of Workspace Awareness and Visual Cues in XR Collaborative Systems

Authors
Assaf, R; Mendes, D; Rodrigues, R;

Publication
COMPUTER GRAPHICS FORUM

Abstract
Collaboration in extended reality (XR) environments presents complex challenges that revolve around how users perceive the presence, intentions, and actions of their collaborators. This paper delves into the intricate realm of group awareness, focusing specifically on workspace awareness and the innovative visual cues designed to enhance user comprehension. The research begins by identifying a spectrum of collaborative situations drawn from an analysis of XR prototypes in the existing literature. Then, we describe and introduce a novel classification for workspace awareness, along with an exploration of visual cues recently employed in research endeavors. Lastly, we present the key findings and shine a spotlight on promising yet unexplored topics. This work not only serves as a reference for experienced researchers seeking to inform the design of their own collaborative XR applications but also extends a welcoming hand to newcomers in this dynamic field.

2023

Exploring Pseudo-Haptics for Object Compliance in Virtual Reality

Authors
Lousada, C; Mendes, D; Rodrigues, R;

Publication
ICGI

Abstract
Virtual Reality (VR) has opened avenues for users to immerse themselves in virtual 3D environments, simulating reality across various domains like health, education, and entertainment. Haptic feedback plays a pivotal role in achieving lifelike experiences. However, the accessibility of haptic devices poses challenges, prompting the exploration of alternatives. In response, Pseudo-Haptic feedback has emerged, utilizing visual and auditory cues to create illusions or modify perceived haptic feedback. Given that many pseudo-haptic techniques are yet to be tailored for VR, our proposal involves combining and adapting multiple techniques to enhance compliance perception in virtual environments. By modifying the Mass-Spring-Damper model and incorporating hand-tracking software along with an inverse kinematics algorithm, our aim is to deliver compliance feedback through visual stimuli, thereby elevating the realism of the overall experience. The outcomes were encouraging, with numerous participants expressing their ability to easily discern various compliance levels with high confidence, all within a realistic and immersive environment. Additionally, we observed an impact of object scale on the perception of compliance in specific scenarios, as participants noted a tendency to perceive smaller objects as more compliant.
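
For readers unfamiliar with the model named in the abstract, the sketch below shows a generic mass-spring-damper update that could drive a purely visual compliance cue: under the same virtual force, a low-stiffness object visibly sinks deeper than a stiff one. The parameter values and the semi-implicit Euler integration are assumptions for illustration and do not reproduce the modified model used in the paper.

```python
# Generic mass-spring-damper response (illustrative only): the displacement it
# produces could be mapped to a visual deformation of the touched object.

def simulate_compliance(force, stiffness, damping=2.0, mass=1.0,
                        dt=0.01, steps=200):
    """Return the displacement over time of a surface pressed with a constant force."""
    x, v = 0.0, 0.0                      # displacement and velocity
    trajectory = []
    for _ in range(steps):
        # Spring resists displacement, damper resists velocity (F = -k*x - c*v).
        a = (force - stiffness * x - damping * v) / mass
        v += a * dt                      # semi-implicit Euler step
        x += v * dt
        trajectory.append(x)
    return trajectory


if __name__ == "__main__":
    soft = simulate_compliance(force=5.0, stiffness=20.0)
    stiff = simulate_compliance(force=5.0, stiffness=200.0)
    # The soft object sinks roughly ten times deeper than the stiff one under
    # the same force, which is the kind of visual difference a pseudo-haptic
    # rendering can exaggerate or attenuate.
    print(f"soft settles near {soft[-1]:.3f}, stiff near {stiff[-1]:.3f}")
```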

2023

TouchRay: Towards Low-effort Object Selection at Any Distance in DeskVR

Authors
Monteiro, J; Mendes, D; Rodrigues, R;

Publication
2023 IEEE INTERNATIONAL SYMPOSIUM ON MIXED AND AUGMENTED REALITY, ISMAR

Abstract
DeskVR allows users to experience Virtual Reality (VR) while sitting at a desk without requiring extensive movements. This makes it better suited for professional work environments where productivity over extended periods is essential. However, tasks that typically resort to mid-air gestures might not be suitable for DeskVR. In this paper, we focus on the fundamental task of object selection. We present TouchRay, an object selection technique conceived specifically for DeskVR that enables users to select objects at any distance while resting their hands on the desk. It also allows selecting objects' sub-components by traversing their corresponding hierarchical trees. We conducted a user evaluation comparing TouchRay against state-of-the-art techniques targeted at traditional VR. Results revealed that participants could successfully select objects in different settings, with consistent times and on par with the baseline techniques in complex tasks, without requiring mid-air gestures.
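
The hierarchical selection idea in the abstract can be pictured with a small sketch: once a root object is selected, discrete inputs walk down or up its scene-graph hierarchy to reach sub-components without further pointing. The data structures and names below are assumptions made for illustration, not the TouchRay implementation.

```python
# Minimal sketch of hierarchical sub-component selection (illustrative only).

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Node:
    """A node in an object's hierarchy (e.g. car -> wheel -> tyre)."""
    name: str
    children: List["Node"] = field(default_factory=list)
    parent: Optional["Node"] = None

    def add(self, child: "Node") -> "Node":
        child.parent = self
        self.children.append(child)
        return child


class HierarchySelector:
    """Tracks the selected node and moves through its hierarchy on demand."""

    def __init__(self, root: Node):
        self.current = root            # e.g. the object first hit by the ray

    def drill_down(self, index: int = 0) -> Node:
        if self.current.children:
            self.current = self.current.children[index]
        return self.current

    def back_up(self) -> Node:
        if self.current.parent is not None:
            self.current = self.current.parent
        return self.current


if __name__ == "__main__":
    car = Node("car")
    wheel = car.add(Node("front-left wheel"))
    wheel.add(Node("tyre"))
    selector = HierarchySelector(car)     # initial hit selects the whole car
    print(selector.drill_down().name)     # front-left wheel
    print(selector.drill_down().name)     # tyre
    print(selector.back_up().name)        # front-left wheel
```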