Publications

2025

Online monitoring of electric transmission lines using an optical ground wire with Distributed Acoustic Sensing

Authors
Silva, S; Nunes, GD; da Silva, JP; Meireles, A; Bidarra, D; Moreira, J; Novais, S; Dias, I; Sousa, R; Frazao, O;

Publication
29TH INTERNATIONAL CONFERENCE ON OPTICAL FIBER SENSORS

Abstract
In this study, we demonstrate the measurement of electric power using an optical ground wire (OPGW). The tests were conducted on an OPGW cable from a high-voltage transmission line in Sines, Portugal, operating at 400 kV. A buried fiber position, free of 50 Hz and 100 Hz frequency interference, was selected to confirm that the 50 Hz frequency is not due to mechanical perturbation or electronic noise. Additionally, two suspended fiber positions (at 2500 m and 8500 m), where these frequencies were clearly observed, were analyzed. This study also examined the positioning of poles and splice detection between cables.
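The core observation above, distinguishing a genuine 50 Hz power-frequency component from noise in a sensing channel, can be illustrated with a minimal spectral-analysis sketch. This is not the authors' processing chain; the sampling rate, amplitudes, and noise level below are hypothetical.

```python
import numpy as np

# Hypothetical DAS channel: sampled at 1 kHz for 10 s, carrying a
# 50 Hz load-induced component plus broadband noise.
fs = 1000.0                      # sampling rate (Hz), assumed
t = np.arange(0, 10.0, 1.0 / fs)
rng = np.random.default_rng(0)
signal = 0.5 * np.sin(2 * np.pi * 50.0 * t) + 0.1 * rng.standard_normal(t.size)

# Power spectrum via the real FFT; the 50 Hz bin should dominate.
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
peak_hz = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(f"dominant frequency: {peak_hz:.1f} Hz")
```

A buried (interference-free) position would show no such peak, which is the control the abstract describes.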

2025

Advancing XR Education: Towards a Multimodal Human-Machine Interaction Course for Doctoral Students in Computer Science

Authors
Silva, S; Marques, B; Mendes, D; Rodrigues, R;

Publication
EUROPEAN ASSOCIATION FOR COMPUTER GRAPHICS 46TH ANNUAL CONFERENCE, EUROGRAPHICS 2025, EDUCATION PAPERS

Abstract
Nowadays, eXtended Reality (XR) has matured to the point where it seamlessly integrates various input and output modalities, enhancing the way users interact with digital environments. From traditional controllers and hand tracking to voice commands, eye tracking, and even biometric sensors, XR systems now offer more natural interactions. Similarly, output modalities have expanded beyond visual displays to include haptic feedback, spatial audio, and others, enriching the overall user experience. As the field of XR becomes increasingly multimodal, the education process must also evolve to reflect these advancements. There is a growing need to incorporate additional modalities into the curriculum, helping students understand their relevance and practical applications. By exposing students to a diverse range of interaction techniques, they can better assess which modalities are most suitable for different contexts, enabling them to design more effective and human-centered solutions. This work describes an Advanced Human-Machine Interaction (HMI) course aimed at Doctoral Students in Computer Science. The primary objective is to provide students with the necessary knowledge in HMI by enabling them to articulate the fundamental concepts of the field, recognize and analyze the role of human factors, identify modern interaction methods and technologies, apply Human-Centered Design (HCD) principles to interactive system design and development, and implement appropriate methods for assessing interaction experiences across advanced HMI topics. To this end, the course structure, the range of topics covered, assessment strategies, as well as the hardware and infrastructure employed are presented. Additionally, the work highlights mini-projects, including flexibility for students to integrate their own projects, fostering personalized and project-driven learning. The discussion reflects on the challenges inherent in keeping pace with this rapidly evolving field and emphasizes the importance of adapting to emerging trends. Finally, the paper outlines future directions and potential enhancements for the course.

2025

Theoretical Model Validation of the Multisensory Role on Subjective Realism, Presence and Involvement in Immersive Virtual Reality

Authors
Gonçalves, G; Peixoto, B; Melo, M; Bessa, M;

Publication
COMPUTER GRAPHICS FORUM

Abstract
With the consistent adoption of iVR and growing research on the topic, it becomes fundamental to understand how the perception of Realism plays a role in the potential of iVR. This work puts forward a hypothesis-driven theoretical model of how the perception of each multisensory stimulus (Visual, Audio, Haptic and Scent) is related to the perception of Realism of the whole experience (Subjective Realism) and, in turn, how this Subjective Realism is related to Involvement and Presence. The model was validated using a sample of 216 subjects in a multisensory iVR experience. The results indicated a good model fit and provided evidence on how the perception of Realism of Visual, Audio and Scent individually is linked to Subjective Realism. Furthermore, the results demonstrate strong evidence that Subjective Realism is strongly associated with Involvement and Presence. These results put forward a validated questionnaire for the perception of Realism of different aspects of the virtual experience and a robust theoretical model on the interconnections of these constructs. We provide empirical evidence that can be used to optimise iVR systems for Presence, Involvement and Subjective Realism, thereby enhancing the effectiveness of iVR experiences and opening new research avenues.

2025

Multi-Class Intrusion Detection in Internet of Vehicles: Optimizing Machine Learning Models on Imbalanced Data

Authors
Palma, A; Antunes, M; Bernardino, J; Alves, A;

Publication
FUTURE INTERNET

Abstract
The Internet of Vehicles (IoV) presents complex cybersecurity challenges, particularly against Denial-of-Service (DoS) and spoofing attacks targeting the Controller Area Network (CAN) bus. This study leverages the CICIoV2024 dataset, comprising six distinct classes of benign traffic and various types of attacks, to evaluate advanced machine learning techniques for intrusion detection systems (IDS). The models XGBoost, Random Forest, AdaBoost, Extra Trees, Logistic Regression, and Deep Neural Network were tested under realistic, imbalanced data conditions, ensuring that the evaluation reflects real-world scenarios where benign traffic dominates. Using hyperparameter optimization with Optuna, we achieved significant improvements in detection accuracy and robustness. Ensemble methods such as XGBoost and Random Forest consistently demonstrated superior performance, achieving perfect accuracy and macro-average F1-scores, even when detecting minority attack classes, in contrast to previous results for the CICIoV2024 dataset. The integration of optimized hyperparameter tuning and a broader methodological scope culminated in an IDS framework capable of addressing diverse attack scenarios with exceptional precision.
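The abstract's emphasis on macro-average F1 under imbalance is worth unpacking: because macro-F1 weights every class equally, a classifier that ignores rare attack classes scores well on accuracy but poorly on macro-F1. A minimal, self-contained sketch (toy labels, not the CICIoV2024 data or the paper's code):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: per-class F1 scores averaged with equal weight,
    so minority attack classes count as much as dominant benign traffic."""
    classes = sorted(set(y_true) | set(y_pred))
    scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * precision * recall / (precision + recall)
                      if precision + recall else 0.0)
    return sum(scores) / len(scores)

# Toy imbalanced traffic: 8 benign flows, 2 spoofing attacks.
y_true = ["benign"] * 8 + ["spoof"] * 2
y_pred = ["benign"] * 10      # a classifier that ignores the minority class
print(macro_f1(y_true, y_pred))   # accuracy is 0.8, but macro-F1 is low
```

Here the majority-class predictor reaches 80% accuracy yet a macro-F1 below 0.5, which is why the study reports macro-averaged scores for the minority attack classes.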

2025

A Multimodal Agentic AI for the Autonomous Precise Landing of UAVs

Authors
Neves, FSP; Branco, LM; Claro, R; Pinto, AM;

Publication

Abstract
Autonomous landing for Unmanned Aerial Vehicles (UAVs) requires both precision and resilience against environmental uncertainties, capabilities that current approaches struggle to deliver. This paper presents a novel learning-based solution that combines an advanced multimodal transformer-based detector with a reinforcement learning formulation to achieve reliable autonomous landing behavior across varying scenario uncertainties. Beyond the integration of multimodality for robust target detection, this research incorporates a comprehensive analysis of the impact of state representation on decision-making performance. The proposed methodology is validated through extensive simulation studies and real-world field experiments conducted on physical UAV platforms under natural wind disturbances, demonstrating reliable transfer from simulated training environments to controlled outdoor conditions. Field experiments across varying initial conditions and wind stress confirm the system’s robustness, achieving landing precision of 0.10 ± 0.08 meters in outdoor trials, demonstrating centimeter-level accuracy that surpasses the meter-level precision of global positioning systems.

2025

Multimodal PointPillars for Efficient Object Detection in Autonomous Vehicles

Authors
Oliveira, M; Cerqueira, R; Pinto, JR; Fonseca, J; Teixeira, LF;

Publication
IEEE Trans. Intell. Veh.

Abstract
Autonomous Vehicles aim to understand their surrounding environment by detecting relevant objects in the scene, which can be performed using a combination of sensors. The accurate prediction of pedestrians is a particularly challenging task, since the existing algorithms have more difficulty detecting small objects. This work studies and addresses this often overlooked problem by proposing Multimodal PointPillars (M-PP), a fast and effective novel fusion architecture for 3D object detection. Inspired by both MVX-Net and PointPillars, image features from a 2D CNN-based feature map are fused with the 3D point cloud in an early fusion architecture. By changing the heavy 3D convolutions of MVX-Net to a set of convolutional layers in 2D space, along with combining LiDAR and image information at an early stage, M-PP considerably improves inference time over the baseline, running at 28.49 Hz. It achieves inference speeds suitable for real-world applications while keeping the high performance of multimodal approaches. Extensive experiments show that our proposed architecture outperforms both MVX-Net and PointPillars for the pedestrian class in the KITTI 3D object detection dataset, with 62.78% in $AP_{BEV}$ (moderate difficulty), while also outperforming MVX-Net in the nuScenes dataset. Moreover, experiments were conducted to measure the detection performance based on object distance. The performance of M-PP surpassed other methods in pedestrian detection at any distance, particularly for faraway objects (more than 30 meters). Qualitative analysis shows that M-PP visibly outperformed MVX-Net for pedestrians and cyclists, while simultaneously making accurate predictions of cars.
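The early-fusion idea the abstract describes, attaching image-derived features to each LiDAR point before the point cloud is pillarized, can be sketched in a few lines. This is a conceptual illustration only: the function name, shapes, and nearest-neighbour sampling are assumptions, not the M-PP implementation.

```python
import numpy as np

def early_fusion(points, image_features, K):
    """Append image features to each LiDAR point before pillarization.

    points         : (N, 3) xyz coordinates in the camera frame (assumed)
    image_features : (C, H, W) feature map from a 2D CNN
    K              : (3, 3) camera intrinsic matrix
    Returns an (N, 3 + C) array of fused per-point features.
    """
    C, H, W = image_features.shape
    # Project each point to pixel coordinates: u = fx*x/z + cx, v = fy*y/z + cy.
    uvw = (K @ points.T).T
    u = np.clip((uvw[:, 0] / uvw[:, 2]).astype(int), 0, W - 1)
    v = np.clip((uvw[:, 1] / uvw[:, 2]).astype(int), 0, H - 1)
    # Nearest-neighbour sampling of the feature map at each projection.
    sampled = image_features[:, v, u].T        # (N, C)
    return np.concatenate([points, sampled], axis=1)

# Toy example: 4 points, a 16-channel 8x8 feature map, small intrinsics.
pts = np.array([[0.1, 0.2, 2.0], [0.5, 0.1, 3.0],
                [0.0, 0.0, 1.0], [0.3, 0.3, 2.5]])
feat = np.random.default_rng(1).standard_normal((16, 8, 8))
K = np.array([[4.0, 0.0, 4.0], [0.0, 4.0, 4.0], [0.0, 0.0, 1.0]])
fused = early_fusion(pts, feat, K)
print(fused.shape)  # (4, 19)
```

Fusing at this stage keeps all subsequent convolutions in 2D pillar space, which is the source of the inference-time gain over MVX-Net's 3D convolutions that the abstract reports.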
