2025
Authors
Fernandes, T; Silva, T; Vaz, J; Silva, J; Cruz, G; Sousa, A; Barroso, J; Martins, P; Filipe, V;
Publication
Communications in Computer and Information Science - Technology and Innovation in Learning, Teaching and Education
Abstract
2025
Authors
Ullah, Z; Da Silva, JAC; Nunes, RR; Barroso, JMP; Reis, AMD; Filipe, VMD; Pires, EJS;
Publication
IEEE ACCESS
Abstract
This study examines the effectiveness of employing Advanced Rider Assistance Systems (ARAS) for enhancing motorcycle safety by reducing crashes and improving rider safety. The system includes both single-solution approaches, like braking systems, and multi-sensor solutions that integrate data from LiDARs, radars, and cameras through sensor fusion. A systematic literature review was conducted to collect data from 2008 to 2024 across various sources related to ARAS. The review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to ensure a comprehensive and transparent process. Data were extracted from the included studies, focusing on study design, sample size, intervention details, and outcomes. The risk of bias was assessed using a customized checklist. The review included 31 studies that met the inclusion criteria. Findings were summarized for single-sensor solutions and sensor fusion approaches. The review indicates that single-solution systems are effective ARAS technologies. In contrast, the application of sensor fusion in motorcycles has been only minimally explored, making it difficult to draw definitive conclusions about its impact in this context. Evidence from four-wheeled vehicles, however, shows that sensor fusion can enhance perception robustness, improve performance under adverse conditions, and contribute to measurable safety gains. These results suggest that similar advantages could be realized for motorcycles as fusion-based ARAS technologies become more widely implemented. Moreover, sensor fusion holds the potential to provide riders with broader situational awareness and more comprehensive safety assistance than single-system solutions. Future research should focus on addressing the identified challenges and optimizing these systems for broader implementation.
This review underscores the critical role of ARAS in reducing motorcycle-related incidents and improving rider safety, highlighting the need for ongoing research to refine sensor fusion algorithms and address technical challenges for real-world applications.
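As a minimal illustration of the measurement-level sensor fusion the review discusses, two independent range estimates (e.g. from radar and camera) can be combined by inverse-variance weighting. This is a generic sketch: the function name, sensor values, and variances are assumptions for illustration, not drawn from any reviewed study.

```python
def fuse(estimates):
    """Inverse-variance weighted fusion of independent measurements,
    each given as (value, variance). The fused variance is always
    smaller than that of any single sensor, which is the basic
    motivation for multi-sensor ARAS perception."""
    weights = [1.0 / var for _, var in estimates]
    value = sum(v * w for (v, _), w in zip(estimates, weights)) / sum(weights)
    variance = 1.0 / sum(weights)
    return value, variance

# Hypothetical example: radar range 12.0 m (var 0.25) fused with
# camera range 12.6 m (var 1.0).
distance, variance = fuse([(12.0, 0.25), (12.6, 1.0)])
```

The fused estimate is pulled toward the more certain sensor, and its variance (0.2) is below both inputs, which is why fusion can improve robustness under conditions where any single sensor degrades.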
2025
Authors
Goncalves, CF; Cruz, NA; Ferreira, BM; Pinto, GA; Soares, SF; Filipe, VM;
Publication
Oceans Conference Record (IEEE)
Abstract
Pose estimation by computer vision is essential in underwater robot navigation. Several works already use computer vision and ArUco markers for this purpose. The method is widespread and well developed: in terms of software, libraries are readily available, for instance, the ArUco module in the OpenCV library. However, there is still a need to characterize the relationship between the performance of the system and the computer vision hardware itself, as well as the spatial arrangement of the markers. Environmental conditions are another aspect to take into account. This work seeks to relate these factors to the error resulting from the estimation of relative positions between cameras and markers.
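A characterization study of this kind needs an error metric for the estimated camera-marker relative position. As a hedged sketch (the function names and the use of plain translation vectors are assumptions, not the paper's actual method), the error between a ground-truth and an estimated camera-to-marker translation can be computed as:

```python
import math

def position_error(t_true, t_est):
    """Euclidean error (same units as the input, e.g. metres) between a
    ground-truth and an estimated camera-to-marker translation vector."""
    return math.dist(t_true, t_est)

def relative_error(t_true, t_est):
    """Error normalised by the true camera-marker distance, useful when
    comparing markers placed at different ranges from the camera."""
    return position_error(t_true, t_est) / math.dist((0.0, 0.0, 0.0), t_true)

# Hypothetical case: marker 2 m in front of the camera, estimate
# off by 5 cm along the x axis.
t_true = (0.0, 0.0, 2.0)
t_est = (0.05, 0.0, 2.0)
err = position_error(t_true, t_est)
```

Normalising by range makes results comparable across the different marker arrangements and distances the study varies.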
2025
Authors
Pinto, F; Cruz, A; Ferreira, M; Soares, SFSP; Filipe, V;
Publication
Oceans Conference Record (IEEE)
Abstract
This paper leverages image processing techniques, including edge detection and feature extraction, to identify and perform pose tracking of cylindrical-like structures within underwater scenes. Examples of these cylindrical-shaped objects are pipes typically used in offshore oil and gas, pillars of bridge structures, or mooring line cables when their sag angle is near zero, making them approximately flat and thus can be approximated as rectilinear. In addition to the pipe's contour identification, this algorithm provides relative distance and bearing to the vision sensor to enable a convergence framework between the structure and any vehicle equipped with this sensor. Furthermore, the proposed algorithm was tested in a pool-simulated scene with a digital twin of the actual vision sensor, onboard an in-house developed ROV prototype. Additionally, the effects of common underwater challenges, such as lighting variability, shadows, turbidity, and visual noise from the pool's geometric structure, were all analyzed to fully characterize the algorithm's performance and robustness. Performance was evaluated for distances, bearing angles, FOV, turbidity, camera resolutions, and algorithm processing complexity. © 2025 Marine Technology Society.
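The relative distance and bearing mentioned above can be illustrated with a standard pinhole-camera model. This is a generic sketch under assumed intrinsics (focal length `fx`, principal point `cx`, and a known pipe width), not the paper's actual algorithm:

```python
import math

def bearing_deg(u_center, fx, cx):
    """Horizontal bearing (degrees) of a detected structure whose image
    centroid lies at pixel column u_center, for a pinhole camera with
    focal length fx (pixels) and principal point column cx (pixels)."""
    return math.degrees(math.atan2(u_center - cx, fx))

def distance_m(real_width_m, pixel_width, fx):
    """Pinhole-model range estimate from the known physical width of a
    cylindrical structure and its apparent width in pixels."""
    return real_width_m * fx / pixel_width

# Hypothetical intrinsics: fx = 800 px, cx = 320 px.
bearing = bearing_deg(480.0, 800.0, 320.0)   # target right of image centre
rng = distance_m(0.30, 120.0, 800.0)         # 0.30 m pipe seen 120 px wide
```

A convergence framework can feed such range and bearing values to a vehicle controller as the error signals to drive toward zero.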
2025
Authors
Ullah, Z; da Silva, JAC; Nunes, RR; Reis, A; Filipe, V; Barroso, J; Pires, EJS;
Publication
VEHICLES
Abstract
Advanced rider assistance systems (ARAS) play a crucial role in enhancing motorcycle safety through features such as collision avoidance, blind-spot detection, and adaptive cruise control, which rely heavily on sensors like radar, cameras, and LiDAR. However, their performance is often compromised under adverse weather conditions, leading to sensor interference, reduced visibility, and inconsistent reliability. This study evaluates the effectiveness and limitations of ARAS technologies in rain, fog, and snow, focusing on how sensor performance, algorithms, techniques, and dataset suitability influence system reliability. A thematic analysis was conducted, selecting studies focused on ARAS in adverse weather conditions based on specific selection criteria. The analysis shows that while ARAS offers substantial safety benefits, its accuracy declines in challenging environments. Existing datasets, algorithms, and techniques were reviewed to identify the most effective options for ARAS applications. However, more comprehensive weather-resilient datasets and adaptive multi-sensor fusion approaches are still needed. Advances in these areas will be critical to improving the robustness of ARAS and ensuring safer riding experiences across diverse environmental conditions.
2025
Authors
Lopes, D; Silva, MF; Rocha, LF; Filipe, V;
Publication
IEEE International Conference on Emerging Technologies and Factory Automation (ETFA)
Abstract
The textile industry faces economic and environmental challenges due to low recycling rates and contamination from fasteners like buttons, rivets, and zippers. This paper proposes a Red, Green, Blue (RGB) vision system using You Only Look Once version 11 (YOLOv11) with a sliding window technique for automated fastener detection. The system addresses small object detection, occlusion, and fabric variability, incorporating Grounding DINO for garment localization and U2-Net for segmentation. Experiments show the sliding window method outperforms full-image detection for buttons and rivets (precision 0.874, recall 0.923), while zipper detection is less effective due to dataset limitations. This work advances scalable AI-driven solutions for textile recycling, supporting circular economy goals. Future work will target hidden fasteners, dataset expansion, and fastener removal.
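The sliding window technique for small-object detection amounts to tiling the image into overlapping crops and running the detector on each crop. The tiling step can be sketched as follows (a generic illustration; window size, overlap fraction, and function name are assumptions, not the paper's settings):

```python
def sliding_windows(img_w, img_h, win, overlap):
    """Return (x, y, w, h) crop boxes covering an image with square
    windows of side `win` and fractional `overlap`; the last window on
    each axis is shifted inward so every tile is full-sized and the
    whole image is covered."""
    step = max(1, int(win * (1.0 - overlap)))

    def starts(size):
        last = max(size - win, 0)
        s = list(range(0, last + 1, step))
        if s[-1] != last:
            s.append(last)
        return s

    return [(x, y, win, win) for y in starts(img_h) for x in starts(img_w)]

# Hypothetical setup: 1080p frame, 640-pixel windows, 20% overlap.
tiles = sliding_windows(1920, 1080, 640, 0.2)
```

Each crop is then passed to the detector at full resolution, so small fasteners occupy proportionally more pixels than in a single downscaled full-image pass; detections only need their coordinates offset by the tile's (x, y) origin before merging.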