About

Andry Maykol Pinto concluded the Doctoral Program in Electrical and Computer Engineering, with a thesis related to Robotics, at the Faculty of Engineering of the University of Porto in 2014. At the same institution, he obtained a Master's degree in Electrical and Computer Engineering in 2010. He currently works as a Senior Researcher at the Center for Robotics and Autonomous Systems at INESC TEC and as an Assistant Professor at the Faculty of Engineering of the University of Porto.

Details

  • Name: Andry Maykol Pinto
  • Role: Senior Researcher
  • Since: 1st February 2011
Publications

2024

Fusing heterogeneous tri-dimensional information for reconstructing submerged structures in harsh sub-sea environments

Authors
Leite, PN; Pinto, AM;

Publication
INFORMATION FUSION

Abstract
Exploiting stronger winds at offshore farms leads to a cyclical need for maintenance due to the harsh maritime conditions. While autonomous vehicles are the natural solution for O&M procedures, sub-sea phenomena induce severe data degradation that hinders the vessel's 3D perception. This article demonstrates a hybrid underwater imaging system capable of retrieving tri-dimensional information: dense and textured Photogrammetric Stereo (PS) point clouds and multiple accurate sets of points through Light Stripe Ranging (LSR), which are combined into a single dense and accurate representation. Two novel fusion algorithms are introduced in this manuscript. A Joint Masked Regression (JMR) methodology propagates sparse LSR information towards the PS point cloud, exploiting homogeneous regions around each beam projection. Regression curves then correlate depth readings from both inputs to correct the stereo-based information. On the other hand, the learning-based solution (RHEA) follows an early-fusion approach where features are conjointly learned from a coupled representation of both 3D inputs. A synthetic-to-real training scheme is employed to bypass domain-adaptation stages, enabling direct deployment in underwater contexts. Evaluation is conducted through extensive trials in simulation, controlled underwater environments, and within a real application at the ATLANTIS Coastal Testbed. Both methods estimate improved output point clouds, with RHEA achieving an average RMSE of 0.0097 m, a 52.45% improvement when compared to the PS input. Performance with real underwater information proves that RHEA is robust in dealing with degraded input information; JMR is more affected by missing information, excelling when the LSR data provides a complete representation of the scenario and struggling otherwise.
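
To make the JMR idea concrete, the following is a minimal sketch, not the authors' implementation: it assumes the LSR beam points have already been projected into the stereo depth map, and it uses a single global linear regression (the paper works on homogeneous regions around each beam) to pull the stereo depths towards the laser readings. All names and the toy data are assumptions for illustration.

import numpy as np

def correct_stereo_depth(stereo_depth, beam_pixels, beam_depths):
    """Correct a dense stereo (PS) depth map using sparse, accurate laser (LSR) depths."""
    rows, cols = beam_pixels[:, 0], beam_pixels[:, 1]
    stereo_at_beams = stereo_depth[rows, cols]

    # Fit a regression curve (here: degree-1 polynomial) that correlates stereo
    # depth readings with laser depth readings at the beam projections.
    coeffs = np.polyfit(stereo_at_beams, beam_depths, deg=1)

    # Apply the fitted correction to the whole stereo depth map.
    return np.polyval(coeffs, stereo_depth)

# Toy usage: a stereo map with a systematic 5% bias is pulled towards the laser depths.
rng = np.random.default_rng(0)
true_depth = rng.uniform(1.0, 3.0, size=(120, 160))
stereo = 1.05 * true_depth + 0.02
pix = np.stack([rng.integers(0, 120, 50), rng.integers(0, 160, 50)], axis=1)
corrected = correct_stereo_depth(stereo, pix, true_depth[pix[:, 0], pix[:, 1]])
print(np.abs(corrected - true_depth).mean(), "<", np.abs(stereo - true_depth).mean())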

2023

ArTuga: A novel multimodal fiducial marker for aerial robotics

Authors
Claro, RM; Silva, DB; Pinto, AM;

Publication
ROBOTICS AND AUTONOMOUS SYSTEMS

Abstract
For Vertical Take-Off and Landing Unmanned Aerial Vehicles (VTOL UAVs) to operate autonomously and effectively, it is mandatory to endow them with precise landing abilities. The UAV has to be able to detect the landing target and to perform the landing maneuver without compromising its own safety and the integrity of its surroundings. However, current UAVs do not present the required robustness and reliability for precise landing in highly demanding scenarios, particularly due to their inadequacy to perform accordingly under challenging lighting and weather conditions, including in day and night operations. This work proposes a multimodal fiducial marker, named ArTuga (Augmented Reality Tag for Unmanned vision-Guided Aircraft), capable of being detected by a heterogeneous perception system for accurate and precise landing in challenging environments and daylight conditions. This research combines photometric and radiometric information by proposing a real-time multimodal fusion technique that ensures a robust and reliable detection of the landing target in severe environments. Experimental results using a real multicopter UAV show that the system was able to detect the proposed marker in adverse conditions (such as at different heights, with intense sunlight and in dark environments). The obtained average accuracy for position estimation at 1 m height was 0.0060 m, with a standard deviation of 0.0003 m. Precise landing tests obtained an average deviation of 0.027 m from the proposed marker, with a standard deviation of 0.026 m. These results demonstrate the relevance of the proposed system for precise landing in adverse conditions, such as day and night operations with harsh weather conditions.
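
As a rough illustration of the kind of decision rule a multimodal detector can use (a hypothetical sketch, not the ArTuga fusion technique itself), the function below only accepts a landing-target fix when the photometric and radiometric detections agree spatially, and averages their centres weighted by confidence. The detection format and the pixel threshold are assumptions.

import numpy as np

def fuse_marker_detections(visual_det, thermal_det, max_px_dist=20.0):
    """Fuse camera and thermal detections of the marker, each given as a
    (u, v, confidence) tuple in a common image frame, or None if undetected."""
    if visual_det is None and thermal_det is None:
        return None
    if visual_det is None:
        return thermal_det[:2]              # fall back to the surviving modality
    if thermal_det is None:
        return visual_det[:2]

    (u1, v1, c1), (u2, v2, c2) = visual_det, thermal_det
    if np.hypot(u1 - u2, v1 - v2) > max_px_dist:
        # The modalities disagree: trust the more confident detection.
        return (u1, v1) if c1 >= c2 else (u2, v2)

    # The modalities agree: confidence-weighted average of the two centres.
    w = c1 / (c1 + c2)
    return (w * u1 + (1 - w) * u2, w * v1 + (1 - w) * v2)

print(fuse_marker_detections((320.0, 240.0, 0.9), (324.0, 238.0, 0.6)))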

2023

End-to-End Detection of a Landing Platform for Offshore UAVs Based on a Multimodal Early Fusion Approach

Authors
Neves, FS; Claro, RM; Pinto, AM;

Publication
SENSORS

Abstract
A perception module is a vital component of a modern robotic system. Vision, radar, thermal, and LiDAR are the most common choices of sensors for environmental awareness. Relying on a single source of information is prone to being affected by specific environmental conditions (e.g., visual cameras are affected by glary or dark environments). Thus, relying on different sensors is an essential step to introduce robustness against various environmental conditions, and a perception system with sensor fusion capabilities produces the redundant and reliable awareness critical for real-world systems. This paper proposes a novel early fusion module that is reliable against individual cases of sensor failure when detecting an offshore maritime platform for UAV landing. The model explores the early fusion of a still unexplored combination of visual, infrared, and LiDAR modalities. The contribution is a simple methodology that facilitates the training and inference of a lightweight state-of-the-art object detector. The early-fusion-based detector achieves solid detection recalls of up to 99% for all cases of sensor failure and extreme weather conditions, such as glary, dark, and foggy scenarios, with real-time inference times below 6 ms.
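
A minimal sketch of what early fusion means in practice, assuming the three modalities have already been registered to the same image grid; the per-channel min-max normalisation and the channel layout are assumptions, not the paper's exact pipeline.

import numpy as np

def early_fuse(rgb, infrared, lidar_depth):
    """Stack registered modalities into one multi-channel input so that a single
    lightweight detector sees all sensors at once (early fusion).

    rgb         : (H, W, 3) visible-light image
    infrared    : (H, W)    thermal image
    lidar_depth : (H, W)    depth image rendered from the LiDAR point cloud
    """
    def norm(x):
        x = x.astype(np.float32)
        return (x - x.min()) / (x.max() - x.min() + 1e-6)

    channels = [norm(rgb[..., i]) for i in range(3)]
    channels += [norm(infrared), norm(lidar_depth)]
    return np.stack(channels, axis=-1)       # (H, W, 5) detector input

print(early_fuse(np.zeros((480, 640, 3)), np.ones((480, 640)), np.full((480, 640), 5.0)).shape)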

2023

Energy Efficient Path Planning for 3D Aerial Inspections

Authors
Claro, RM; Pereira, MI; Neves, FS; Pinto, AM;

Publication
IEEE ACCESS

Abstract
The use of Unmanned Aerial Vehicles (UAVs) in different inspection tasks is increasing. This technology reduces inspection costs and collects high-quality data of distinct structures, including areas that are not easily accessible by human operators. However, the reduced energy available on UAVs limits their flight endurance. To increase the autonomy of a single flight, it is important to optimize the path performed by the UAV in terms of energy loss. Therefore, this work presents a novel formulation of the Travelling Salesman Problem (TSP) and a path planning algorithm that uses a UAV energy model to solve this optimization problem. The novel TSP formulation is defined as the Asymmetric Travelling Salesman Problem with Precedence Loss (ATSP-PL), where the cost of moving the UAV depends on its previous position. The energy model relates each UAV movement with its energy consumption, while the path planning algorithm focuses on minimizing the energy loss of the UAV, ensuring that the structure is fully covered. The developed algorithm was tested in both simulated and real scenarios. The simulated experiments were performed with realistic models of wind turbines and a UAV, whereas the real experiments were performed with a real UAV and an illumination tower. The generated inspection paths presented improvements of over 24% and 8% when compared with other methods, for the simulated and real experiments, respectively, optimizing the energy consumption of the UAV.
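
The distinguishing feature of the ATSP-PL formulation is that the cost of a move depends on where the UAV came from, so the cost function takes (previous, current, next) waypoints rather than a plain edge. The brute-force solver and the energy terms below are a hypothetical illustration for a toy instance, not the paper's energy model or planner.

import itertools
import numpy as np

def energy_cost(prev, cur, nxt):
    """Toy energy model: travelled distance, a climbing penalty, and a turning
    penalty that depends on the previous waypoint (the precedence loss)."""
    step = np.linalg.norm(nxt - cur)
    climb = 2.0 * max(nxt[2] - cur[2], 0.0)       # climbing costs more than descending
    a, b = cur - prev, nxt - cur
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    turn = 0.0 if denom == 0 else 1.0 - np.dot(a, b) / denom   # 0 = straight, 2 = U-turn
    return step + climb + 0.5 * turn

def best_tour(waypoints, start):
    """Exhaustive search over visiting orders (feasible only for small plans)."""
    best_order, best_cost = None, np.inf
    for order in itertools.permutations(range(len(waypoints))):
        pts = [start] + [waypoints[i] for i in order]
        cost = sum(energy_cost(pts[max(k - 1, 0)], pts[k], pts[k + 1])
                   for k in range(len(pts) - 1))
        if cost < best_cost:
            best_order, best_cost = order, cost
    return best_order, best_cost

wps = [np.array(p, dtype=float) for p in [(0, 5, 2), (4, 4, 6), (5, 0, 1), (2, 6, 4)]]
print(best_tour(wps, start=np.array([0.0, 0.0, 0.0])))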

2023

Labelled Indoor Point Cloud Dataset for BIM Related Applications

Authors
Abreu, N; Souza, R; Pinto, A; Matos, A; Pires, M;

Publication
DATA

Abstract
BIM (building information modelling) has gained wider acceptance in the AEC (architecture, engineering, and construction) industry. Conversion from 3D point cloud data to vector BIM data remains a challenging and labour-intensive process, yet it is particularly relevant during various stages of a project lifecycle. While the challenges associated with processing very large 3D point cloud datasets are widely known, there is a pressing need for intelligent geometric feature extraction and reconstruction algorithms for automated point cloud processing. Compared to outdoor scene reconstruction, indoor scenes are challenging since they usually contain high amounts of clutter. This dataset comprises the indoor point cloud obtained by scanning four different rooms (including a hallway): two office workspaces, a workshop, and a laboratory including a water tank. The scanned space is located at the Electrical and Computer Engineering department of the Faculty of Engineering of the University of Porto. The dataset is fully labelled, containing major structural elements like walls, floor, ceiling, windows, and doors, as well as furniture, movable objects, clutter, and scanning noise. The dataset also contains an as-built BIM that can be used as a reference, making it suitable for use in Scan-to-BIM and Scan-vs-BIM applications. For demonstration purposes, a Scan-vs-BIM change detection application is described, detailing each of the main data processing steps.
Dataset: https://doi.org/10.5281/zenodo.7948116
Dataset License: Creative Commons Attribution 4.0 International (CC BY 4.0).
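
The Scan-vs-BIM change detection mentioned above can be reduced, at its simplest, to a point-to-model distance test. The sketch below assumes both the scan and a dense point sampling of the as-built BIM are available as Nx3 NumPy arrays; the 5 cm threshold and the toy data are assumptions, not the workflow described in the paper.

import numpy as np
from scipy.spatial import cKDTree

def detect_changes(scan_points, bim_points, threshold=0.05):
    """Flag scanned points lying farther than `threshold` metres from the as-built
    BIM surface, approximated here by a dense point sampling of the model."""
    distances, _ = cKDTree(bim_points).query(scan_points, k=1)
    changed = distances > threshold
    return scan_points[changed], distances

# Toy usage: a noisy scan of a wall at z = 0 plus one displaced object point.
grid = np.stack(np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50)), -1).reshape(-1, 2)
bim = np.column_stack([grid, np.zeros(len(grid))])
scan = np.vstack([bim + np.random.normal(0, 0.005, bim.shape),
                  [[0.5, 0.5, 0.30]]])
moved, _ = detect_changes(scan, bim)
print(len(moved), "scanned point(s) deviate from the BIM")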

Supervised Theses

2022

GreenNext - Deloitte's solution for the scoring, issuance, trading, and impact reporting of green assets

Author
Nuno Filipe Ferreira Fernandes

Institution
UTAD

2022

Automatic Performance Evaluation of API Gateways Based on Architectural Models

Author
Pedro Miguel Braga Moreira

Institution
UM

2022

Immersive VR for the real estate sector. Application development and evaluation of the impact of different levels of interaction and visual fidelity

Author
Samuel Filipe Silveira Martins

Institution
UTAD

2022

Analytical Processing with Typed Linear Algebra in MonetDB

Author
Lucas Ribeiro Pereira

Institution
UM

2022

Efficient Neuromorphic Architectures for Visual Perception

Author
Marcelo Almeida de Carvalho

Institution
UP-FEUP