
Publications by CRAS

2020

Deep Learning for Underwater Visual Odometry Estimation

Authors
Teixeira, B; Silva, H; Matos, A; Silva, E;

Publication
IEEE Access

Abstract

2020

MARESye: A hybrid imaging system for underwater robotic applications

Authors
Pinto, AM; Matos, AC;

Publication
Information Fusion

Abstract
This article presents an innovative hybrid imaging system that provides dense and accurate 3D information in harsh underwater environments. The proposed system, called MARESye, captures the advantages of both active and passive imaging methods: multiple light stripe range (LSR) and photometric stereo (PS), respectively. This hybrid approach fuses information from both techniques through a data-driven formulation to extend the measurement range and to produce high-density 3D estimations in dynamic underwater environments. The system is driven by a gating timing approach that reduces the impact of several photometric issues typical of underwater environments, such as diffuse reflection, water turbidity, and non-uniform illumination. Moreover, MARESye synchronizes and matches image acquisition with sub-sea phenomena, which leads to clear pictures with a high signal-to-noise ratio. Experiments conducted in realistic environments showed that MARESye provides reliable, high-density, and accurate 3D data. The experiments also demonstrated that MARESye's performance is less affected by sub-sea conditions: its SSIM index was 0.655 in high-turbidity waters, whereas conventional imaging techniques obtained 0.328 under similar testing conditions. The proposed system therefore represents a valuable contribution to the inspection of maritime structures, as well as to the navigation of autonomous underwater vehicles during close-range operations.
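The abstract above uses the structural similarity (SSIM) index to compare image quality (0.655 vs. 0.328). As a minimal sketch of what that metric measures, the snippet below implements the global (single-window) form of SSIM in NumPy on hypothetical synthetic images; practical SSIM is computed over sliding Gaussian windows and averaged, so this is an illustration of the formula only, not the evaluation pipeline used in the paper.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Global (single-window) SSIM between two grayscale images.

    Practical SSIM is computed over sliding windows and averaged;
    this single-statistic form only illustrates the metric.
    """
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the
    c2 = (0.03 * data_range) ** 2  # standard SSIM definition
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

# Hypothetical example: a reference image vs. a noisy, low-contrast copy
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
degraded = 0.5 * ref + 0.1 * rng.random((64, 64))

print(round(ssim_global(ref, ref), 3))   # identical images -> 1.0
print(ssim_global(ref, degraded) < 1.0)  # degraded copy scores lower
```

Identical images yield an SSIM of exactly 1.0; any loss of contrast, structure, or added noise pulls the score toward 0, which is why turbidity degrades conventional imaging so sharply in the reported results.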

2020

MViDO: A high performance monocular vision-based system for docking a hovering AUV

Authors
Figueiredo, AB; Matos, AC;

Publication
Applied Sciences (Switzerland)

Abstract
This paper presents a high-performance (computationally lightweight) monocular vision-based system for a hovering Autonomous Underwater Vehicle (AUV) in the context of the autonomous docking process: the MViDO system (Monocular Vision-based Docking Operation aid). MViDO consists of three sub-modules: a pose estimator, a tracker, and a guidance sub-module. The system is based on a single camera and a target of three spherical color markers that signals the docking station. MViDO estimates the pose of the three color markers even during temporary occlusions, and it also rejects outliers and false detections. This paper also describes the design and implementation of the MViDO guidance module for docking manoeuvres. We address the problem of driving the AUV to a docking station with the help of the visual markers detected by the on-board camera, and show that by adequately choosing the references for the linear degrees of freedom of the AUV, the vehicle is conducted to the dock while keeping the markers in the camera's field of view. The main concepts behind MViDO are provided, and a complete characterization of the developed system is presented from formal and experimental points of view. To test and evaluate the MViDO detector and pose estimator modules, we created a ground-truth setup. To test and evaluate the tracker module, we used the MARES AUV and the designed target in a four-meter tank. The performance of the proposed guidance law was tested in Simulink/MATLAB. © 2020 by the authors.
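The abstract mentions tracking the markers through temporary occlusions while rejecting outliers and false detections. A minimal generic sketch of that idea, entirely hypothetical and not the MViDO implementation, is a constant-velocity tracker that coasts on its prediction when no detection arrives and gates out detections that land too far from the prediction:

```python
import numpy as np

class MarkerTracker:
    """Constant-velocity tracker sketch (hypothetical, not the MViDO code).

    Coasts on its own prediction when the marker is occluded, and rejects
    detections falling outside a distance gate (outliers / false detections).
    """
    def __init__(self, pos, gate=20.0, alpha=0.5):
        self.pos = np.asarray(pos, float)  # last estimate (pixels)
        self.vel = np.zeros(2)             # estimated velocity (pixels/frame)
        self.gate = gate                   # max accepted innovation (pixels)
        self.alpha = alpha                 # blend factor for accepted updates

    def update(self, detection):
        pred = self.pos + self.vel                 # constant-velocity prediction
        if detection is None:                      # occlusion: coast
            self.pos = pred
            return self.pos
        d = np.asarray(detection, float)
        if np.linalg.norm(d - pred) > self.gate:   # outlier: reject, coast
            self.pos = pred
            return self.pos
        new_pos = self.alpha * d + (1 - self.alpha) * pred
        self.vel = new_pos - self.pos              # refresh velocity estimate
        self.pos = new_pos
        return self.pos

trk = MarkerTracker([100.0, 100.0])
trk.update([102.0, 101.0])        # normal detection, accepted
trk.update(None)                  # occluded frame: estimate keeps moving
est = trk.update([500.0, 500.0])  # false detection, far outside the gate
print(est)                        # stays near the predicted trajectory
```

The gate and blend factor here are illustrative values; a real system would tune them to the camera frame rate and expected marker dynamics.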

2020

Teaching robotics with a simulator environment developed for the autonomous driving competition

Authors
Fernandes, D; Pinheiro, F; Dias, A; Martins, A; Almeida, J; Silva, E;

Publication
Advances in Intelligent Systems and Computing

Abstract
Teaching robotics based on challenges from our daily lives is more motivating for students and teachers. Several self-driving competitions have emerged recently, challenging students and researchers to develop solutions for autonomous driving systems. The Portuguese Festival Nacional de Robótica (FNR) Autonomous Driving Competition is one such example. Even though the competition is an exciting challenge, it requires the development of real robots, which implies several limitations that may discourage students and compromise a fluid teaching process. Simulation can help overcome this limitation and can assume an important role as a tool, providing a low-effort and low-cost solution that allows students and researchers to keep their focus on the main issues. This paper presents a simulation environment for the FNR competition, providing an overall framework able to support the exploration of robotics topics such as perception, navigation, data fusion, and deep learning in the context of the autonomous driving competition. © Springer Nature Switzerland AG 2020.

2020

Evaluation of Lightweight Convolutional Neural Networks for Real-Time Electrical Assets Detection

Authors
Barbosa, J; Dias, A; Almeida, J; Silva, E;

Publication
Advances in Intelligent Systems and Computing

Abstract
The rapid growth of electrical demand has required larger and more complex power systems, which in turn has created a greater need for monitoring and maintaining these systems. To address this problem, UAVs equipped with appropriate sensors have emerged, reducing costs and risks compared with traditional methods. The development of UAVs, together with the great advances in deep learning technologies, particularly in object detection, has made it possible to increase the level of automation in the inspection process. This work presents an electrical-asset monitoring system for detecting insulators and structures (poles and pylons) in images captured by a UAV. The proposed detection system is based on lightweight Convolutional Neural Networks and is able to run on a portable device, aiming for a low-cost, accurate, and modular system capable of running in real time. © 2020, Springer Nature Switzerland AG.
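A common ingredient of "lightweight" CNNs of this kind is the depthwise-separable convolution, which factors a standard convolution into a per-channel spatial filter followed by a 1×1 pointwise mix. The back-of-envelope parameter count below, for an assumed layer shape not taken from the paper, shows why this makes real-time inference on portable devices feasible:

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k filter per input channel, then 1 x 1 pointwise mixing."""
    return k * k * c_in + c_in * c_out

# Assumed layer shape for illustration: 3x3 kernels, 128 -> 256 channels
std = conv_params(3, 128, 256)                 # 294,912 weights
sep = depthwise_separable_params(3, 128, 256)  # 33,920 weights
print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

For this layer the separable form uses roughly 8.7× fewer weights (and proportionally fewer multiply-accumulates), which is the kind of saving that lets detectors run in real time on embedded hardware.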

2020

Real-time GNSS precise positioning: RTKLIB for ROS

Authors
Ferreira, A; Matias, B; Almeida, J; Silva, E;

Publication
International Journal of Advanced Robotic Systems

Abstract
The global navigation satellite system (GNSS) constitutes an effective and affordable solution to the outdoor positioning problem. When combined with precise positioning techniques, such as real-time kinematic (RTK), centimeter-level positioning accuracy becomes a reality. Such performance is suitable for a whole new range of demanding applications, including high-accuracy field robotics operations. RTKRCV, part of the RTKLIB package, is one of the most popular open-source solutions for real-time GNSS precise positioning. Yet its lack of integration with the Robot Operating System (ROS) constitutes a limitation on its adoption by the robotics community. This article addresses this limitation, reporting a new implementation that brings the RTKRCV capabilities into ROS. New features, including ROS publishing and control over a ROS service, were introduced seamlessly to ensure full compatibility with all original options. Additionally, a new observation synchronization scheme improves solution consistency, which is particularly relevant for the moving-baseline positioning mode. Real application examples are presented to demonstrate the advantages of our rtkrcv_ros package. For community benefit, the software was released as an open-source package.
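The abstract highlights an observation synchronization scheme for the moving-baseline mode, where rover and base observation epochs must be paired before forming differences. As a hypothetical illustration of the underlying idea (not the actual rtkrcv_ros scheme, which is more involved), the sketch below pairs each rover epoch with the nearest base epoch within a tolerance:

```python
from bisect import bisect_left

def sync_observations(rover_ts, base_ts, tol=0.05):
    """Pair each rover epoch with the nearest base epoch within `tol` seconds.

    Hypothetical sketch of timestamp matching; both lists must be
    sorted in ascending order. Unmatched rover epochs are dropped.
    """
    pairs = []
    for t in rover_ts:
        i = bisect_left(base_ts, t)
        # Candidates: the base epochs immediately before and after t.
        candidates = base_ts[max(i - 1, 0):i + 1]
        best = min(candidates, key=lambda c: abs(c - t), default=None)
        if best is not None and abs(best - t) <= tol:
            pairs.append((t, best))
    return pairs

rover = [0.00, 0.20, 0.40, 0.60]
base = [0.01, 0.21, 0.55]
print(sync_observations(rover, base))
# -> [(0.0, 0.01), (0.2, 0.21), (0.6, 0.55)]
```

The rover epoch at 0.40 s has no base observation within tolerance and is dropped rather than paired with a stale epoch; skipping such epochs is what keeps differenced solutions consistent when one receiver's stream stalls.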
