
Publications by Andry Maykol Pinto

2024

Wave-motion compensation for USV-UAV cooperation: A model predictive controller approach

Authors
Martins, J; Pereira, P; Campilho, R; Pinto, A;

Publication
2024 20TH IEEE/ASME INTERNATIONAL CONFERENCE ON MECHATRONIC AND EMBEDDED SYSTEMS AND APPLICATIONS, MESA 2024

Abstract
Due to the difficulty of accessing the maritime environment, cooperation between robotic platforms operating in different domains provides numerous advantages for Operations and Maintenance (O&M) missions. The nest Uncrewed Surface Vehicle (USV) is equipped with a parallel platform that serves as a landing pad for Uncrewed Aerial Vehicle (UAV) landings in dynamic sea states. This work proposes a methodology for short-term forecasting of wave behaviour that uses Fast Fourier Transforms (FFT) and a low-pass Butterworth filter to remove noise from Inertial Measurement Unit (IMU) readings, and applies an Auto-Regressive (AR) model for the forecast, showing good results within an almost 10-second window. These predictions are then used in a Model Predictive Control (MPC) approach to optimize the trajectory planning of the landing pad's roll and pitch, increasing horizontality and consistently mitigating around 80% of the wave-induced motion.
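The forecasting pipeline described above (low-pass filtering of IMU attitude readings, then an AR model for prediction) can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: the first-order smoother below replaces the paper's Butterworth filter, the AR fit uses a plain least-squares formulation, and all function names are hypothetical.

```python
import numpy as np

def lowpass(x, alpha=0.2):
    # First-order low-pass smoother; a simple stand-in for the
    # paper's Butterworth filter on IMU roll/pitch readings.
    y = np.empty_like(x, dtype=float)
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = alpha * x[i] + (1 - alpha) * y[i - 1]
    return y

def fit_ar(x, p):
    # Least-squares fit of an AR(p) model: x[t] ~ sum_k a_k * x[t-k].
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs  # coeffs[0] multiplies the most recent sample

def forecast(x, coeffs, steps):
    # Recursive multi-step prediction from the fitted AR coefficients.
    hist = list(x[-len(coeffs):])
    out = []
    for _ in range(steps):
        nxt = float(np.dot(coeffs, hist[::-1]))
        out.append(nxt)
        hist = hist[1:] + [nxt]
    return np.array(out)
```

For a clean periodic wave-like signal, even an AR(2) fit reproduces the oscillation, which is why short prediction horizons (the paper reports almost 10 s) remain tractable.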

2024

A Multimodal Learning-based Approach for Autonomous Landing of UAV

Authors
Neves, FS; Branco, LM; Pereira, M; Claro, RM; Pinto, AM;

Publication
2024 20TH IEEE/ASME INTERNATIONAL CONFERENCE ON MECHATRONIC AND EMBEDDED SYSTEMS AND APPLICATIONS, MESA 2024

Abstract
In the field of autonomous Unmanned Aerial Vehicle (UAV) landing, conventional approaches fall short of delivering both the required precision and the resilience against environmental disturbances. Learning-based algorithms, however, offer promising solutions by leveraging their ability to learn intelligent behaviour from data. On one hand, this paper introduces a novel multimodal transformer-based Deep Learning detector that provides reliable positioning for precise autonomous landing. It surpasses standard approaches by addressing individual sensor limitations, achieving high reliability even under diverse weather and sensor-failure conditions. It was rigorously validated across varying environments, achieving optimal true positive rates and average precisions of up to 90%. On the other hand, a Reinforcement Learning (RL) decision-making model, based on a Deep Q-Network (DQN) rationale, is proposed. Initially trained in simulation, its adaptive behaviour is successfully transferred and validated in a real outdoor scenario. Furthermore, this approach demonstrates rapid inference times of approximately 5 ms, validating its applicability on edge devices.
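The core value update behind a DQN-style decision-making model can be sketched in tabular form. This is a toy stand-in, not the paper's network: a DQN replaces the table with a neural approximator trained on the same target, and the names and dimensions below are assumptions.

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    # One temporal-difference update:
    #   Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]
    # A DQN trains a network on this same target, with transitions
    # sampled from a replay buffer instead of applied one by one.
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
    return Q
```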

2024

A Multimodal Perception System for Precise Landing of UAVs in Offshore Environments

Authors
Claro, RM; Neves, FSP; Pinto, AMG;

Publication

Abstract
The integration of precise landing capabilities into UAVs is crucial for enabling autonomous operations, particularly in challenging environments such as offshore scenarios. This work proposes a heterogeneous perception system that incorporates a multimodal fiducial marker, designed to improve the accuracy and robustness of autonomous UAV landing in both daytime and nighttime operations. It presents ViTAL-TAPE, a visual transformer-based model that enhances the detection reliability of the landing target and overcomes changes in illumination conditions and viewpoint positions where traditional methods fail. ViTAL-TAPE is an end-to-end model that combines multimodal perceptual information, including photometric and radiometric data, to detect landing targets defined by a fiducial marker with 6 degrees of freedom. Extensive experiments have proved the ability of ViTAL-TAPE to detect fiducial markers with an error of 0.01 m. Moreover, experiments with the RAVEN UAV, designed to endure the challenging weather conditions of offshore scenarios, demonstrated that the autonomous landing technology proposed in this work achieves an accuracy of up to 0.1 m. This research also presents the first successful autonomous operation of a UAV in a commercial offshore wind farm with floating foundations installed in the Atlantic Ocean. These experiments showcased the system's accuracy, resilience and robustness, resulting in a precise landing technology that extends the mission capabilities of UAVs, enabling autonomous and Beyond Visual Line of Sight offshore operations.

2024

TEFu-Net: A time-aware late fusion architecture for robust multi-modal ego-motion estimation

Authors
Agostinho, L; Pereira, D; Hiolle, A; Pinto, A;

Publication
ROBOTICS AND AUTONOMOUS SYSTEMS

Abstract
Ego-motion estimation plays a critical role in autonomous driving systems by providing accurate and timely information about the vehicle's position and orientation. To achieve high levels of accuracy and robustness, it is essential to leverage a range of sensor modalities to account for highly dynamic and diverse scenes, and consequent sensor limitations. In this work, we introduce TEFu-Net, a Deep-Learning-based late fusion architecture that combines multiple ego-motion estimates from diverse data modalities, including stereo RGB, LiDAR point clouds and GNSS/IMU measurements. Our approach is non-parametric and scalable, making it adaptable to different sensor set configurations. By leveraging a Long Short-Term Memory (LSTM), TEFu-Net produces reliable and robust spatiotemporal ego-motion estimates. This capability allows it to filter out erroneous input measurements, ensuring the accuracy of the car's motion calculations over time. Extensive experiments show an average accuracy increase of 63% over TEFu-Net's input estimators and on-par results with the state of the art in real-world driving scenarios. We also demonstrate that our solution can achieve accurate estimates under sensor or input failure. Therefore, TEFu-Net enhances the accuracy and robustness of ego-motion estimation in real-world driving scenarios, particularly in challenging conditions such as cluttered environments, tunnels, dense vegetation, and unstructured scenes. As a result of these enhancements, it bolsters the reliability of autonomous driving functions.
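As a rough illustration of late fusion with faulty-input rejection, the sketch below combines per-modality ego-motion estimates for a single frame. This is not TEFu-Net itself, which learns this behaviour with an LSTM over time; the median/MAD rule and the function name are assumptions made for the sketch.

```python
import numpy as np

def robust_fuse(estimates):
    """Fuse per-modality ego-motion vectors for one frame.

    estimates: (n_modalities, 6) array of [x, y, z, roll, pitch, yaw].
    Estimates far from the median (by a MAD-based threshold) are
    treated as faulty inputs and excluded, mimicking the failure
    filtering that TEFu-Net learns from data.
    """
    med = np.median(estimates, axis=0)
    dev = np.linalg.norm(estimates - med, axis=1)
    mad = np.median(dev)
    keep = dev <= 3.0 * mad + 1e-9
    return estimates[keep].mean(axis=0)
```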

2024

Enhancing Underwater Inspection Capabilities: A Learning-Based Approach for Automated Pipeline Visibility Assessment

Authors
Mina, J; Leite, PN; Carvalho, J; Pinho, L; Gonçalves, EP; Pinto, AM;

Publication
ROBOT 2023: SIXTH IBERIAN ROBOTICS CONFERENCE, VOL 2

Abstract
Underwater scenarios pose additional challenges to perception systems, as the imagery collected from sensors often suffers from limitations that hinder its practical usability. One crucial domain that relies on accurate underwater visibility assessment is underwater pipeline inspection. Manual assessment is impractical and time-consuming, emphasizing the need for automated algorithms. In this study, we focus on developing learning-based approaches to evaluate visibility in underwater environments. We explore various neural network architectures and evaluate them on data collected within real subsea scenarios. Notably, the ResNet18 model outperforms the others, achieving a testing accuracy of 93.5% in visibility evaluation. In terms of inference time, the fastest model is MobileNetV3 Small, producing a prediction within 42.45 ms. These findings represent significant progress in enabling unmanned marine operations and contribute to the advancement of autonomous underwater surveillance systems.

2024

Artificial Intelligence for Automated Marine Growth Segmentation

Authors
Carvalho, J; Leite, PN; Mina, J; Pinho, L; Gonçalves, EP; Pinto, AM;

Publication
ROBOT 2023: SIXTH IBERIAN ROBOTICS CONFERENCE, VOL 2

Abstract
Marine growth impacts the stability and integrity of offshore structures, while simultaneously preventing inspection procedures. As a consequence, companies need to employ specialists who manually assess each impacted part of the structure. Due to harsh subsea environments, acquiring large quantities of quality underwater data is difficult. To mitigate these challenges, a new data augmentation algorithm is proposed that generates new images by performing localized crops on regions of interest from the original data, expanding the total size of the dataset approximately 6 times. This research also proposes a learning-based algorithm capable of automatically delineating marine growth in underwater images, achieving up to 0.389 IoU and 0.508 Dice Loss. Advances in this area contribute to reducing the manual labour necessary to schedule maintenance operations on man-made submerged structures, while increasing the reliability and automation of the process.
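The crop-based augmentation can be illustrated with the sketch below, which samples fixed-size crops centred on annotated foreground pixels. This is a minimal sketch of the general idea, not the authors' algorithm; the function name, crop size, and foreground threshold are all assumptions.

```python
import numpy as np

def crop_rois(image, mask, crop=64, min_fg=0.05, max_crops=6,
              attempts=100, rng=None):
    # Sample up to `max_crops` crops centred on annotated marine-growth
    # pixels; keep a crop only if at least `min_fg` of it is foreground.
    rng = rng or np.random.default_rng(0)
    ys, xs = np.nonzero(mask)
    h, w = mask.shape
    crops = []
    for _ in range(attempts):
        if len(crops) == max_crops or len(ys) == 0:
            break
        i = rng.integers(len(ys))
        y = int(np.clip(ys[i] - crop // 2, 0, h - crop))
        x = int(np.clip(xs[i] - crop // 2, 0, w - crop))
        sub = mask[y:y + crop, x:x + crop]
        if sub.mean() >= min_fg:
            crops.append((image[y:y + crop, x:x + crop].copy(), sub.copy()))
    return crops
```

Each original annotated image can thereby yield several localized training samples, consistent with the roughly six-fold dataset expansion reported above.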
