Details

  • Name

    Bernardo Gomes Teixeira
  • Position

    Researcher
  • Since

    15 April 2019
Publications

2022

Feedfirst: Intelligent monitoring system for indoor aquaculture tanks

Authors
Teixeira, B; Lima, AP; Pinho, C; Viegas, D; Dias, N; Silva, H; Almeida, J;

Publication
2022 OCEANS HAMPTON ROADS

Abstract
The Feedfirst Intelligent Monitoring System is a novel tool for intelligent monitoring of fish nurseries in aquaculture scenarios, focusing on three essential items: water quality control, biomass estimation, and automated feeding. The system uses machine vision techniques to detect fish larvae population size, and larval biomass is estimated through size measurement. We also show that the perception-actuation loop in automated fish tanks can be closed by using the vision system output to influence feeding procedures. The proposed solution was tested in a real tank in an aquaculture setting, with real-time performance and logging capabilities.
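The perception-actuation loop described in the abstract could be sketched roughly as below. This is a toy illustration only: the length-weight coefficient, feeding rate, and all function names are assumptions for illustration, not values or an API from the paper.

```python
def estimate_biomass_g(larvae_count, mean_length_mm, k=2.0e-3):
    """Estimate total larval biomass (grams) from a vision-based count and
    mean body length, via a cubic length-weight relation W_mg = k * L^3.
    The coefficient k is a placeholder, not a value from the paper."""
    weight_mg = k * mean_length_mm ** 3
    return larvae_count * weight_mg / 1000.0


def feed_ration_g(biomass_g, daily_rate=0.05, meals_per_day=4):
    """Close the loop: size each meal as a fixed fraction of the estimated
    biomass (daily_rate and meals_per_day are illustrative defaults)."""
    return biomass_g * daily_rate / meals_per_day
```

Under these toy parameters, 1000 larvae at 10 mm mean length yield an estimated 2.0 g of biomass and a 0.025 g per-meal ration.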

2021

Deep learning point cloud odometry: Existing approaches and open challenges

Authors
Teixeira, B; Silva, H;

Publication
U.Porto Journal of Engineering

Abstract
Achieving persistent and reliable autonomy for mobile robots in challenging field mission scenarios is a long-standing quest for the robotics research community. Deep learning-based LIDAR odometry is attracting increasing research interest as a technological solution to the robot navigation problem and shows great potential for the task. In this work, an examination of the benefits of leveraging learning-based encoding representations of real-world data is provided. In addition, emergent deep learning techniques for robustly tracking motion and estimating scene structure in real-world applications are the focus of a deeper analysis and comprehensive comparison. Furthermore, existing deep learning approaches to point cloud odometry tasks are explored, and the main technological solutions are compared and discussed. Open challenges are also laid out for the reader, hopefully offering guidance to future researchers in their quest to apply deep learning to complex 3D non-matrix data to tackle localization and robot navigation problems.

2020

Deep Learning for Underwater Visual Odometry Estimation

Authors
Teixeira, B; Silva, H; Matos, A; Silva, E;

Publication
IEEE ACCESS

Abstract
This paper addresses Visual Odometry (VO) estimation in challenging underwater scenarios. Visual-based robot navigation faces several additional difficulties in the underwater context, which severely hinder both its robustness and the possibility of persistent autonomy for underwater mobile robots relying on visual perception. In this work, some of the most renowned VO and Visual Simultaneous Localization and Mapping (v-SLAM) frameworks are tested in complex underwater environments, assessing the extent to which they perform accurately and reliably in robotic operational mission scenarios. The fundamental issues of precision, reliability, and robustness across multiple operational scenarios, coupled with the rising predominance of deep learning architectures in several computer vision application domains, have prompted a great volume of recent research on deep learning architectures tailored for visual odometry estimation. In this work, the performance and accuracy of deep learning methods in the underwater context are also benchmarked and compared against classical methods. Additionally, an extension of current work is proposed, in the form of a visual-inertial sensor fusion network aimed at correcting visual odometry estimate drift. Anchored on an inertial supervision learning scheme, our network improved upon trajectory estimates, producing both metrically better estimates and more visually consistent trajectory shapes.
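The drift-correction idea above can be illustrated very roughly as follows. The paper's actual method is a learned visual-inertial fusion network; the fixed-weight blending below is only a toy stand-in, and the function name, weight `alpha`, and example values are all assumptions made here for illustration.

```python
import numpy as np

def correct_vo_drift(vo_deltas, imu_deltas, alpha=0.8):
    """Blend per-step visual-odometry translation estimates with
    inertially derived ones, then integrate the fused steps into a
    trajectory. A learned network would replace the fixed weight alpha."""
    vo = np.asarray(vo_deltas, dtype=float)
    imu = np.asarray(imu_deltas, dtype=float)
    fused = alpha * vo + (1.0 - alpha) * imu  # per-step fused translation
    return np.cumsum(fused, axis=0)           # integrated trajectory
```

For instance, if VO overestimates each forward step as 1.0 m while the inertial channel suggests 0.5 m, the fused trajectory accumulates 0.9 m per step with `alpha=0.8`, damping the accumulated drift.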

2019

Deep Learning Approaches Assessment for Underwater Scene Understanding and Egomotion Estimation

Authors
Teixeira, B; Silva, H; Matos, A; Silva, E;

Publication
OCEANS 2019 MTS/IEEE SEATTLE

Abstract
This paper addresses the use of deep learning approaches for visual-based navigation in confined underwater environments. State-of-the-art algorithms have shown the tremendous potential deep learning architectures can have for visual navigation implementations, though they are still mostly outperformed by classical feature-based techniques. In this work, we apply current state-of-the-art deep learning methods for visual-based robot navigation to the more challenging underwater environment, providing both an underwater visual dataset acquired in real operational mission scenarios and an assessment of state-of-the-art algorithms in the underwater context. We extend current work by proposing a novel pose optimization architecture for correcting visual odometry estimate drift using a visual-inertial fusion network, consisting of a neural network architecture anchored on an inertial supervision learning scheme. Our visual-inertial fusion network improved trajectory estimates by an average of 50%, also producing more visually consistent trajectory estimates for both of our underwater application scenarios.