2024
Authors
Martins, J; Pereira, P; Campilho, R; Pinto, A;
Publication
2024 20TH IEEE/ASME INTERNATIONAL CONFERENCE ON MECHATRONIC AND EMBEDDED SYSTEMS AND APPLICATIONS, MESA 2024
Abstract
Due to the difficulty of accessing the maritime environment, cooperation between robotic platforms operating in different domains provides numerous advantages for Operations and Maintenance (O&M) missions. The nest Uncrewed Surface Vehicle (USV) is equipped with a parallel platform that serves as a landing pad for Uncrewed Aerial Vehicle (UAV) landings in dynamic sea states. This work proposes a methodology for short-term forecasting of wave behaviour that uses Fast Fourier Transforms (FFT) and a low-pass Butterworth filter to remove noise from the Inertial Measurement Unit (IMU) readings, and applies an Auto-Regressive (AR) model for the forecast, showing good results within an almost 10-second window. These predictions are then used in a Model Predictive Control (MPC) approach to optimize the trajectory planning of the landing pad roll and pitch, increasing horizontality and consistently mitigating around 80% of the wave-induced motion.
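The filter-then-forecast pipeline described in the abstract can be sketched as follows. This is a minimal illustration only: the sampling rate, Butterworth cutoff and order, AR order, and the synthetic roll signal are all assumptions, not the values used in the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Synthetic IMU roll signal: low-frequency wave motion plus sensor noise
# (illustrative stand-in for real IMU readings).
fs = 50.0                                  # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
roll = 2.0 * np.sin(2 * np.pi * 0.2 * t) + 0.3 * rng.normal(size=t.size)

# Low-pass Butterworth filter to suppress high-frequency IMU noise.
b, a = butter(N=4, Wn=1.0, btype="low", fs=fs)
smooth = filtfilt(b, a, roll)

# Fit an AR(p) model by least squares on the filtered signal: each sample
# is regressed on its previous p samples.
p = 20
X = np.column_stack([smooth[i:len(smooth) - p + i] for i in range(p)])
y = smooth[p:]
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

# Recursive multi-step forecast over an ~10-second horizon.
history = list(smooth[-p:])
horizon = int(10 * fs)
forecast = []
for _ in range(horizon):
    nxt = float(np.dot(coeffs, history[-p:]))
    forecast.append(nxt)
    history.append(nxt)
forecast = np.array(forecast)
```

In a real deployment the forecast would feed the MPC trajectory planner for the landing pad's roll and pitch actuation.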
2024
Authors
Neves, FS; Branco, LM; Pereira, M; Claro, RM; Pinto, AM;
Publication
2024 20TH IEEE/ASME INTERNATIONAL CONFERENCE ON MECHATRONIC AND EMBEDDED SYSTEMS AND APPLICATIONS, MESA 2024
Abstract
In the field of autonomous Unmanned Aerial Vehicle (UAV) landing, conventional approaches fall short of delivering both the required precision and the resilience against environmental disturbances. Learning-based algorithms offer promising solutions by leveraging their ability to learn intelligent behaviour from data. On one hand, this paper introduces a novel multimodal transformer-based Deep Learning detector that provides reliable positioning for precise autonomous landing. It surpasses standard approaches by addressing individual sensor limitations, achieving high reliability even under diverse weather and sensor-failure conditions. It was rigorously validated across varying environments, achieving optimal true positive rates and average precisions of up to 90%. On the other hand, a Reinforcement Learning (RL) decision-making model based on a Deep Q-Network (DQN) rationale is proposed. Initially trained in simulation, its adaptive behaviour is successfully transferred and validated in a real outdoor scenario. Furthermore, this approach demonstrates rapid inference times of approximately 5 ms, validating its applicability on edge devices.
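A DQN-style decision-making step amounts to a small network mapping the state to per-action Q-values, followed by (epsilon-)greedy action selection. The sketch below uses a NumPy forward pass with randomly initialized weights purely to illustrate the inference path; the state layout (relative UAV pose), action count, and layer widths are assumptions, not the paper's architecture.

```python
import numpy as np

# Hypothetical state: relative UAV pose w.r.t. the landing pad (6-D),
# with a small discrete action set. All sizes are illustrative.
STATE_DIM, HIDDEN, N_ACTIONS = 6, 64, 5

rng = np.random.default_rng(42)
W1 = rng.normal(0, 0.1, (STATE_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, N_ACTIONS))
b2 = np.zeros(N_ACTIONS)

def q_values(state):
    """Forward pass of a small MLP approximating Q(s, a) for each action."""
    h = np.maximum(0.0, state @ W1 + b1)   # ReLU hidden layer
    return h @ W2 + b2

def select_action(state, epsilon=0.05):
    """Epsilon-greedy policy over the network's Q-values."""
    if rng.random() < epsilon:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_values(state)))

state = np.array([0.5, -0.2, 3.0, 0.0, 0.1, 0.0])
action = select_action(state)
```

Because inference is a single small forward pass, latencies in the low-millisecond range on edge hardware are plausible, consistent with the ~5 ms figure reported.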
2024
Authors
Claro, RM; Neves, FSP; Pinto, AMG;
Publication
Abstract
2024
Authors
Agostinho, L; Pereira, D; Hiolle, A; Pinto, A;
Publication
ROBOTICS AND AUTONOMOUS SYSTEMS
Abstract
Ego-motion estimation plays a critical role in autonomous driving systems by providing accurate and timely information about the vehicle's position and orientation. To achieve high levels of accuracy and robustness, it is essential to leverage a range of sensor modalities to account for highly dynamic and diverse scenes, and consequent sensor limitations. In this work, we introduce TEFu-Net, a Deep-Learning-based late fusion architecture that combines multiple ego-motion estimates from diverse data modalities, including stereo RGB, LiDAR point clouds and GNSS/IMU measurements. Our approach is non-parametric and scalable, making it adaptable to different sensor set configurations. By leveraging a Long Short-Term Memory (LSTM), TEFu-Net produces reliable and robust spatiotemporal ego-motion estimates. This capability allows it to filter out erroneous input measurements, ensuring the accuracy of the car's motion calculations over time. Extensive experiments show an average accuracy increase of 63% over TEFu-Net's input estimators and on par results with the state-of-the-art in real-world driving scenarios. We also demonstrate that our solution can achieve accurate estimates under sensor or input failure. Therefore, TEFu-Net enhances the accuracy and robustness of ego-motion estimation in real-world driving scenarios, particularly in challenging conditions such as cluttered environments, tunnels, dense vegetation, and unstructured scenes. As a result of these enhancements, it bolsters the reliability of autonomous driving functions.
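The late-fusion idea can be illustrated with a single LSTM cell that consumes the concatenated per-sensor ego-motion estimates and maintains temporal state across steps. This is a conceptual sketch with random weights, not TEFu-Net itself: the estimate dimensionality, hidden size, and readout are all assumptions.

```python
import numpy as np

# Three 6-DoF ego-motion estimates (e.g. stereo RGB, LiDAR, GNSS/IMU)
# are concatenated and passed through one LSTM cell per time step.
EST_DIM, N_SENSORS, HIDDEN = 6, 3, 32
IN_DIM = EST_DIM * N_SENSORS

rng = np.random.default_rng(0)
Wx = rng.normal(0, 0.1, (4 * HIDDEN, IN_DIM))   # input weights (i, f, g, o gates)
Wh = rng.normal(0, 0.1, (4 * HIDDEN, HIDDEN))   # recurrent weights
b = np.zeros(4 * HIDDEN)
Wo = rng.normal(0, 0.1, (EST_DIM, HIDDEN))      # readout to fused 6-DoF estimate

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_fuse(estimates, h, c):
    """One fusion step: concatenate per-sensor estimates, update the LSTM
    hidden/cell state, and read out a fused 6-DoF ego-motion estimate."""
    x = np.concatenate(estimates)
    z = Wx @ x + Wh @ h + b
    i, f, g, o = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return Wo @ h, h, c

h, c = np.zeros(HIDDEN), np.zeros(HIDDEN)
step_estimates = [rng.normal(size=EST_DIM) for _ in range(N_SENSORS)]
fused, h, c = lstm_fuse(step_estimates, h, c)
```

The recurrent state is what allows such a fusion network to down-weight a sensor whose estimates suddenly become inconsistent with the recent motion history.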
2024
Authors
Mina, J; Leite, PN; Carvalho, J; Pinho, L; Gonçalves, EP; Pinto, AM;
Publication
ROBOT 2023: SIXTH IBERIAN ROBOTICS CONFERENCE, VOL 2
Abstract
Underwater scenarios pose additional challenges to perception systems, as the imagery collected from sensors often suffers from limitations that hinder its practical usability. One crucial domain that relies on accurate underwater visibility assessment is underwater pipeline inspection. Manual assessment is impractical and time-consuming, emphasizing the need for automated algorithms. In this study, we focus on developing learning-based approaches to evaluate visibility in underwater environments. We explore various neural network architectures and evaluate them on data collected within real subsea scenarios. Notably, the ResNet18 model outperforms the others, achieving a testing accuracy of 93.5% in visibility evaluation. In terms of inference time, the fastest model is MobileNetV3 Small, producing a prediction within 42.45 ms. These findings represent significant progress in enabling unmanned marine operations and contribute to the advancement of autonomous underwater surveillance systems.
2024
Authors
Carvalho, J; Leite, PN; Mina, J; Pinho, L; Gonçalves, EP; Pinto, AM;
Publication
ROBOT 2023: SIXTH IBERIAN ROBOTICS CONFERENCE, VOL 2
Abstract
Marine growth impacts the stability and integrity of offshore structures, while simultaneously hindering inspection procedures. In consequence, companies need to employ specialists who manually assess each impacted part of the structure. Due to harsh subsea environments, acquiring large quantities of quality underwater data is difficult. To mitigate these challenges, a new data augmentation algorithm is proposed that generates new images by performing localized crops on regions of interest in the original data, expanding the total size of the dataset approximately 6 times. This research also proposes a learning-based algorithm capable of automatically delineating marine growth in underwater images, achieving up to 0.389 IoU and 0.508 Dice Loss. Advances in this area contribute to reducing the manual labour necessary to schedule maintenance operations on man-made submerged structures, while increasing the reliability and automation of the process.
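The localized-crop augmentation can be sketched as sampling fixed-size patches centred on pixels of an ROI mask (e.g. annotated marine growth). Crop size, crop count, and the synthetic image/mask below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def roi_crops(image, mask, size=64, n_crops=6, seed=0):
    """Sample n_crops square patches of side `size`, each centred on a
    randomly chosen pixel of the binary ROI mask (clipped to the image)."""
    rng = np.random.default_rng(seed)
    ys, xs = np.nonzero(mask)              # coordinates of ROI pixels
    h, w = mask.shape
    half = size // 2
    crops = []
    for _ in range(n_crops):
        k = int(rng.integers(len(ys)))
        cy = int(np.clip(ys[k], half, h - half))
        cx = int(np.clip(xs[k], half, w - half))
        crops.append(image[cy - half:cy + half, cx - half:cx + half])
    return crops

# Synthetic image and ROI mask standing in for an annotated underwater frame.
img = np.zeros((256, 256, 3), dtype=np.uint8)
msk = np.zeros((256, 256), dtype=bool)
msk[100:150, 120:180] = True
patches = roi_crops(img, msk, size=64, n_crops=6)
```

Generating six crops per annotated image matches the roughly 6x dataset expansion reported in the abstract.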