About

Andry Maykol Pinto completed the Doctoral Programme in Electrical and Computer Engineering at the Faculty of Engineering of the University of Porto in 2014, with a thesis in Robotics. At the same institution, he obtained an MSc in Electrical and Computer Engineering in 2010. He currently works as a Senior Researcher at the Centre for Robotics and Autonomous Systems of INESC TEC and as an Assistant Professor at the Faculty of Engineering of the University of Porto.


He is the Principal Investigator of several research projects on robotic solutions for O&M, funded by national and European programmes. He leads a team of more than 15 researchers and coordinates an ICT/H2020 project in the field of marine robotics. His research has produced numerous publications in high-impact journals in areas related to computer vision, mobile robotics, autonomous systems, multidimensional perception, sensor fusion and underwater vision.

Topics of interest

Details

  • Name

    Andry Maykol Pinto
  • Position

    Senior Researcher
  • Since

    01 February 2011
Publications

2025

A Multimodal Perception System for Precise Landing of UAVs in Offshore Environments

Authors
Claro, RM; Neves, FSP; Pinto, AMG;

Publication
Journal of Field Robotics

Abstract
The integration of precise landing capabilities into unmanned aerial vehicles (UAVs) is crucial for enabling autonomous operations, particularly in challenging environments such as offshore scenarios. This work proposes a heterogeneous perception system that incorporates a multimodal fiducial marker, designed to improve the accuracy and robustness of autonomous landing of UAVs in both daytime and nighttime operations. This work presents ViTAL-TAPE, a visual transformer-based model that enhances the detection reliability of the landing target and overcomes changes in illumination conditions and viewpoint positions where traditional methods fail. ViTAL-TAPE is an end-to-end model that combines multimodal perceptual information, including photometric and radiometric data, to detect landing targets defined by a fiducial marker with 6 degrees of freedom. Extensive experiments have proved the ability of ViTAL-TAPE to detect fiducial markers with an error of 0.01 m. Moreover, experiments using the RAVEN UAV, designed to endure the challenging weather conditions of offshore scenarios, demonstrated that the autonomous landing technology proposed in this work achieved an accuracy of up to 0.1 m. This research also presents the first successful autonomous operation of a UAV in a commercial offshore wind farm with floating foundations installed in the Atlantic Ocean. These experiments showcased the system's accuracy, resilience and robustness, resulting in a precise landing technology that extends the mission capabilities of UAVs, enabling autonomous and Beyond Visual Line of Sight offshore operations. © 2025 Wiley Periodicals LLC.
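The 6-degree-of-freedom landing-target pose mentioned in the abstract can be illustrated with a classical fiducial-marker pipeline. The sketch below is a generic illustration only, not the ViTAL-TAPE model: it assumes an OpenCV (>= 4.7) ArUco marker, known camera intrinsics, and a hypothetical marker size.

```python
# Generic 6-DoF pose recovery from a square fiducial marker (illustrative only;
# ViTAL-TAPE itself is a transformer-based detector and is not reproduced here).
import cv2
import numpy as np

MARKER_SIZE = 0.30  # marker side length in metres (hypothetical value)

# 3D corners of the marker in its own frame, in the detector's corner order
object_points = np.array([
    [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0],
    [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0],
    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0],
    [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0],
], dtype=np.float32)

def marker_pose(image, camera_matrix, dist_coeffs):
    """Detect an ArUco marker and return its rotation and translation w.r.t. the camera."""
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.ArucoDetector(dictionary).detectMarkers(image)
    if ids is None:
        return None
    # Solve the Perspective-n-Point problem for the first detected marker
    ok, rvec, tvec = cv2.solvePnP(object_points, corners[0].reshape(4, 2),
                                  camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```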

2025

Raya: A Bio-Inspired AUV for Inspection and Intervention of Underwater Structures

Authors
Pereira, P; Silva, R; Marques, JVA; Campilho, R; Matos, A; Pinto, AM;

Publication
IEEE ACCESS

Abstract
This work presents a bio-inspired Autonomous Underwater Vehicle (AUV) concept called Raya that enables high manoeuvrability required for close-range inspection and intervention tasks, while fostering endurance for long-range operations by enabling efficient navigation. The AUV has an estimated terminal velocity of 0.82 m/s in an optimal environment, and a capacity to acquire visual data and sonar measurements in all directions. Raya was designed with the potential to incorporate an electric manipulator arm of 6 degrees of freedom (DoF) for free-floating underwater intervention. Smart and biologically inspired principles applied to morphology and a strategic thruster configuration assure that Raya is capable of manoeuvring in all 6 DoFs even when equipped with a manipulator with a 5 kg payload. Extensive experiments were conducted using simulation tools and real-life environments to validate Raya's requirements and functionalities. The stresses and displacements of the rigid bodies were analysed using finite element analysis (FEA), and an estimation of the terminal forward velocity was achieved using a dynamic model. To assess the accuracy of the perception system, a reconstruction task took place in an indoor pool, resulting in a 3D reconstruction with average length, width, and depth errors below 1.5%. The deployment of Raya in the ATLANTIS Coastal Testbed and Porto de Leixões allowed the validation of the propulsion system and the gathering of valuable 2D and 3D data, thus proving the suitability of the vehicle for operation and maintenance (O&M) activities of underwater structures.
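The terminal-velocity figure quoted above follows from a standard drag-balance argument: at steady state, the available thrust equals the hydrodynamic drag. The snippet below is a minimal sketch of that balance; the thrust, drag coefficient and frontal area are hypothetical values, not the parameters of Raya's actual dynamic model, and were chosen only so the example lands near the reported 0.82 m/s.

```python
# Terminal forward velocity from a simple thrust/drag balance:
# T = 0.5 * rho * Cd * A * v^2  =>  v = sqrt(2*T / (rho * Cd * A))
import math

RHO = 1025.0   # seawater density [kg/m^3]
THRUST = 60.0  # available forward thrust [N] (hypothetical)
CD = 0.8       # drag coefficient (hypothetical)
AREA = 0.22    # frontal area [m^2] (hypothetical)

def terminal_velocity(thrust, rho=RHO, cd=CD, area=AREA):
    """Steady-state velocity at which drag cancels the applied thrust."""
    return math.sqrt(2.0 * thrust / (rho * cd * area))

print(f"terminal velocity: {terminal_velocity(THRUST):.2f} m/s")
```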

2025

Multimodal information fusion using pyramidal attention-based convolutions for underwater tri-dimensional scene reconstruction

Authors
Leite, PN; Pinto, AM;

Publication
INFORMATION FUSION

Abstract
Underwater environments pose unique challenges to optical systems due to physical phenomena that induce severe data degradation. Current imaging sensors rarely address these effects comprehensively, resulting in the need to integrate complementary information sources. This article presents a multimodal data fusion approach to combine information from diverse sensing modalities into a single dense and accurate tridimensional representation. The proposed fusiNg tExture with apparent motion information for underwater Scene recOnstruction (NESO) encoder-decoder network leverages motion perception principles to extract relative depth cues, fusing them with textured information through an early fusion strategy. Evaluated on the FLSea-Stereo dataset, NESO outperforms state-of-the-art methods by 58.7%. Dense depth maps are achieved using multi-stage skip connections with attention mechanisms that ensure propagation of key features across network levels. This representation is further enhanced by incorporating sparse but millimeter-precise depth measurements from active imaging techniques. A regression-based algorithm maps depth displacements between these heterogeneous point clouds, using the estimated curves to refine the dense NESO prediction. This approach achieves relative errors as low as 0.41% when reconstructing submerged anode structures, accounting for metric improvements of up to 0.1124 m relative to the initial measurements. Validation at the ATLANTIS Coastal Testbed demonstrates the effectiveness of this multimodal fusion approach in obtaining robust tri-dimensional representations in real underwater conditions.
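The early-fusion strategy described above, combining textured imagery with motion-derived depth cues before encoding, can be pictured as a channel-wise concatenation feeding a shared encoder. The PyTorch fragment below is an illustrative stand-in under that reading, not the published NESO architecture; layer sizes and tensor shapes are hypothetical.

```python
# Minimal early-fusion encoder: RGB texture and a 2-channel apparent-motion map
# are concatenated along the channel axis before any convolution (illustrative only).
import torch
import torch.nn as nn

class EarlyFusionEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3 + 2, 32, kernel_size=3, padding=1),  # fused input: 3 RGB + 2 motion channels
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.depth_head = nn.Conv2d(64, 1, kernel_size=1)  # dense (low-resolution) depth prediction

    def forward(self, rgb, motion):
        fused = torch.cat([rgb, motion], dim=1)  # early fusion: stack modalities as channels
        return self.depth_head(self.encoder(fused))

# usage (hypothetical tensor sizes)
model = EarlyFusionEncoder()
depth = model(torch.rand(1, 3, 240, 320), torch.rand(1, 2, 240, 320))
```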

2024

Fusing heterogeneous tri-dimensional information for reconstructing submerged structures in harsh sub-sea environments

Authors
Leite, PN; Pinto, AM;

Publication
INFORMATION FUSION

Abstract
Exploiting stronger winds at offshore farms leads to a cyclical need for maintenance due to the harsh maritime conditions. While autonomous vehicles are the natural solution for O&M procedures, sub-sea phenomena induce severe data degradation that hinders the vessel's 3D perception. This article demonstrates a hybrid underwater imaging system that is capable of retrieving tri-dimensional information: dense and textured Photogrammetric Stereo (PS) point clouds and multiple accurate sets of points through Light Stripe Ranging (LSR), which are combined into a single dense and accurate representation. Two novel fusion algorithms are introduced in this manuscript. A Joint Masked Regression (JMR) methodology propagates sparse LSR information towards the PS point cloud, exploiting homogeneous regions around each beam projection. Regression curves then correlate depth readings from both inputs to correct the stereo-based information. On the other hand, the learning-based solution (RHEA) follows an early-fusion approach where features are conjointly learned from a coupled representation of both 3D inputs. A synthetic-to-real training scheme is employed to bypass domain-adaptation stages, enabling direct deployment in underwater contexts. Evaluation is conducted through extensive trials in simulation, controlled underwater environments, and within a real application at the ATLANTIS Coastal Testbed. Both methods estimate improved output point clouds, with RHEA achieving an average RMSE of 0.0097 m, a 52.45% improvement when compared to the PS input. Performance with real underwater information proves that RHEA is robust in dealing with degraded input information; JMR is more affected by missing information, excelling when the LSR data provides a complete representation of the scenario, and struggling otherwise.
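The regression step described above, correlating sparse but accurate LSR depth readings with the corresponding stereo depths and then correcting the dense stereo map, can be illustrated with a simple polynomial fit. The sketch below is a loose analogue of that idea, not the JMR algorithm itself; shapes, the fitted degree, and the synthetic bias are hypothetical.

```python
# Correct a dense stereo depth map using sparse, accurate laser (LSR) samples:
# fit a curve mapping stereo depth -> laser depth at the overlapping pixels,
# then apply it to every stereo pixel (illustrative analogue only).
import numpy as np

def correct_dense_depth(stereo_depth, laser_depth, laser_mask, degree=2):
    """stereo_depth: HxW dense map; laser_depth: HxW map valid where laser_mask is True."""
    x = stereo_depth[laser_mask]           # stereo readings at laser-sampled pixels
    y = laser_depth[laser_mask]            # millimetre-precise laser readings
    coeffs = np.polyfit(x, y, deg=degree)  # regression curve relating the two sensors
    return np.polyval(coeffs, stereo_depth)

# usage with synthetic data (hypothetical shapes and bias)
stereo = np.random.uniform(1.0, 5.0, (480, 640))
laser = stereo * 0.95 + 0.05               # pretend the laser reveals a systematic stereo bias
mask = np.zeros_like(stereo, dtype=bool)
mask[::40, ::40] = True                    # sparse laser footprint
corrected = correct_dense_depth(stereo, laser, mask)
```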

2024

Reinforcement learning based robot navigation using illegal actions for autonomous docking of surface vehicles in unknown environments

Authors
Pereira, MI; Pinto, AM;

Publication
ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE

Abstract
Autonomous Surface Vehicles (ASVs) are bound to play a fundamental role in the maintenance of offshore wind farms. Robust navigation for inspection vehicles should take into account the operation of docking within a harbouring structure, which is a critical and still unexplored maneuver. This work proposes an end-to-end docking approach for ASVs, based on Reinforcement Learning (RL), which teaches an agent to tackle collision-free navigation towards a target pose that allows the berthing of the vessel. The developed research presents a methodology that introduces the concept of illegal actions to facilitate the vessel's exploration during the learning process. This method improves the adopted Actor-Critic (AC) framework by accelerating the agent's optimization by approximately 38.02%. A set of comprehensive experiments demonstrates the accuracy and robustness of the presented method in scenarios with simulated environmental constraints (Beaufort Scale and Douglas Sea Scale), and a diversity of docking structures. Validation with two different real ASVs in both controlled and real environments demonstrates the ability of this method to enable safe docking maneuvers without prior knowledge of the scenario.
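One common way to realise the "illegal actions" idea, preventing the agent from sampling actions known to be invalid in the current state, is to mask the policy logits before sampling. The fragment below is a generic masked action-selection sketch under that interpretation, not the paper's exact formulation; the action set and mask are hypothetical.

```python
# Mask illegal actions before sampling from a discrete policy (generic sketch).
import torch

def select_action(policy_logits, illegal_mask):
    """policy_logits: (num_actions,) raw actor outputs;
    illegal_mask: boolean tensor, True where the action is illegal in this state."""
    masked_logits = policy_logits.masked_fill(illegal_mask, float("-inf"))
    dist = torch.distributions.Categorical(logits=masked_logits)
    action = dist.sample()                  # only legal actions can be drawn
    return action, dist.log_prob(action)    # log-probability used by the actor-critic update

# usage (hypothetical 5-action discrete command set)
logits = torch.randn(5)
illegal = torch.tensor([False, False, True, False, True])  # e.g. headings leading to collision
action, logp = select_action(logits, illegal)
```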

Supervised theses

2023

Robust Perception System for Autonomous Precise Landing of UAVs in Offshore Wind Farms

Author
Rafael Marques Claro

Institution
INESCTEC

2023

A multimodal vision-based sensor fusion approach for precise landing of an UAV

Author
José Miguel Lopes Ferrão

Institution
INESCTEC

2023

Development of a web-based eye-tracking tool for usability evaluation studies

Author
Daniel Rodrigues da Silva

Institution
INESCTEC

2023

Perception-based Autonomous Underwater Vehicle Navigation for Close-range Inspection of Offshore Structures

Author
Renato Jorge Moreira Silva

Institution
INESCTEC

2023

An Intelligent Retention System for Unmanned Aerial Vehicles on a Dynamic Platform

Author
Lourenço Sousa de Pinho

Institution
INESCTEC