
Publications

2021

A 3-D Lightweight Convolutional Neural Network for Detecting Docking Structures in Cluttered Environments

Authors
Pereira, MI; Leite, PN; Pinto, AM;

Publication
MARINE TECHNOLOGY SOCIETY JOURNAL

Abstract
The maritime industry has been following the paradigm shift toward the automation of typically intelligent procedures, with research regarding autonomous surface vehicles (ASVs) having seen an upward trend in recent years. However, this type of vehicle cannot be employed at full scale until a few challenges are solved. For example, the docking process of an ASV is still a demanding task that currently requires human intervention. This research work proposes a volumetric convolutional neural network (vCNN) for the detection of docking structures from 3-D data, designed to balance precision and speed. Another contribution of this article is a synthetically generated dataset covering docking-structure scenarios, composed of LiDAR point clouds, stereo images, GPS, and Inertial Measurement Unit (IMU) information. Several robustness tests carried out with different levels of Gaussian noise demonstrated an average accuracy of 93.34% and a deviation of 5.46% in the worst case. Furthermore, the system was fine-tuned and evaluated in a real commercial harbor, achieving an accuracy of over 96%. The developed classifier is able to detect different types of structures and runs faster than other state-of-the-art methods that report their performance in real environments.
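The abstract above describes feeding 3-D LiDAR data to a volumetric CNN and stressing it with Gaussian noise. A minimal NumPy sketch of the pre-processing such a pipeline typically implies is shown below: voxelizing a point cloud into a binary occupancy grid (the usual input representation for a vCNN) and perturbing the raw points with Gaussian noise. The grid size, spatial bounds, and noise level are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def voxelize(points, grid_size=32, bounds=(-10.0, 10.0)):
    """Quantize an (N, 3) point cloud into a binary occupancy grid.

    Points outside `bounds` are discarded; each occupied cell is set to 1.
    """
    lo, hi = bounds
    # Map metric coordinates to voxel indices in [0, grid_size - 1]
    idx = np.floor((points - lo) / (hi - lo) * grid_size).astype(int)
    inside = np.all((idx >= 0) & (idx < grid_size), axis=1)
    grid = np.zeros((grid_size, grid_size, grid_size), dtype=np.float32)
    ix, iy, iz = idx[inside].T
    grid[ix, iy, iz] = 1.0
    return grid

def perturb(points, sigma, rng):
    """Add zero-mean Gaussian noise (std `sigma`, in metres) to each coordinate."""
    return points + rng.normal(0.0, sigma, size=points.shape)

rng = np.random.default_rng(0)
cloud = rng.uniform(-10, 10, size=(5000, 3))        # stand-in for a LiDAR scan
clean = voxelize(cloud)
noisy = voxelize(perturb(cloud, sigma=0.05, rng=rng))
# Overlap between clean and noisy grids hints at how robust the
# occupancy representation is to sensor noise
overlap = (clean * noisy).sum() / clean.sum()
```

In a robustness test like the one reported, `sigma` would be swept over several levels and the classifier's accuracy recorded at each.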

2021

Advancing Autonomous Surface Vehicles: A 3D Perception System for the Recognition and Assessment of Docking-Based Structures

Authors
Pereira, MI; Claro, RM; Leite, PN; Pinto, AM;

Publication
IEEE ACCESS

Abstract
The automation of typically intelligent and decision-making processes in the maritime industry leads to fewer accidents and more cost-effective operations. However, many challenges must still be solved before fully autonomous systems can be deployed. Artificial Intelligence (AI) has played a major role in this paradigm shift and shows great potential for solving some of these challenges, such as the docking process of an autonomous vessel. This work proposes a lightweight volumetric Convolutional Neural Network (vCNN) capable of recognizing different docking-based structures using 3D data in real time. A synthetic-to-real domain adaptation approach is also proposed to accelerate the training process of the vCNN. This approach greatly decreases the cost of data acquisition and the need for advanced computational resources. Extensive experiments demonstrate an accuracy of over 90% in the recognition of different docking structures using low-resolution sensors, with an average inference time of approximately 120 ms. Results obtained using a real Autonomous Surface Vehicle (ASV) demonstrated that the vCNN trained with the synthetic-to-real domain adaptation approach is suitable for maritime mobile robots. This novel AI recognition method, combined with the use of 3D data, increases the robustness of the docking process against environmental constraints such as rain, fog, and insufficient lighting in nighttime operations.

2021

Exploiting Motion Perception in Depth Estimation Through a Lightweight Convolutional Neural Network

Authors
Leite, PN; Pinto, AM;

Publication
IEEE ACCESS

Abstract
Understanding the surrounding 3D scene is of the utmost importance for many robotic applications. The rapid evolution of machine learning techniques has enabled impressive results when depth is extracted from a single image. However, these results require high-latency networks, rendering them unusable for time-constrained applications. This article introduces a lightweight Convolutional Neural Network (CNN) for depth estimation, NEON, designed to balance accuracy and inference time. Instead of solely focusing on visual features, the proposed methodology exploits the motion-parallax effect to combine the apparent motion of pixels with texture. This research demonstrates that motion perception provides crucial insight into the magnitude of movement for each pixel, which also encodes cues about depth, since large displacements usually occur when objects are closer to the imaging sensor. NEON's performance is compared to relevant networks in terms of Root Mean Squared Error (RMSE), the percentage of correctly predicted pixels (delta(1)), and inference time, using the KITTI dataset. Experiments show that NEON is significantly more efficient than the current top-ranked network, producing predictions 12 times faster while achieving an average RMSE of 3.118 m and a delta(1) of 94.5%. Ablation studies demonstrate the relevance of tailoring the network to use motion-perception principles when estimating depth from image sequences, considering that the effectiveness and quality of the estimated depth maps are similar to those of more computationally demanding state-of-the-art networks. Therefore, this research proposes a network that can be integrated into robotic applications where computational resources and processing times are important constraints, enabling tasks such as obstacle avoidance, object recognition, and robotic grasping.
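RMSE and delta(1) are the standard depth-estimation metrics cited in the abstract above. A minimal NumPy sketch of how both are computed follows; the toy 4x4 depth maps are illustrative placeholders, not KITTI data.

```python
import numpy as np

def rmse(pred, gt):
    """Root Mean Squared Error between predicted and ground-truth depth (metres)."""
    return float(np.sqrt(np.mean((pred - gt) ** 2)))

def delta1(pred, gt, thr=1.25):
    """Fraction of pixels whose ratio max(pred/gt, gt/pred) is below `thr`."""
    ratio = np.maximum(pred / gt, gt / pred)
    return float(np.mean(ratio < thr))

gt = np.full((4, 4), 10.0)     # toy ground-truth depth map: every pixel at 10 m
pred = gt + 1.0                # every prediction off by exactly 1 m
r = rmse(pred, gt)             # 1.0
d = delta1(pred, gt)           # 1.0 — every ratio is 1.1, below the 1.25 threshold
```

The 1.25 threshold for delta(1) is the conventional choice in the depth-estimation literature; delta(2) and delta(3) use 1.25² and 1.25³.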

2020

Dense disparity maps from rgb and sparse depth information using deep regression models

Authors
Leite, PN; Silva, RJ; Campos, DF; Pinto, AM;

Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Abstract
A dense and accurate disparity map is relevant for a large number of applications, ranging from autonomous driving to robotic grasping. Recent developments in machine learning techniques enable us to bypass sensor limitations, such as low resolution, by using deep regression models to complete otherwise sparse representations of the 3D space. This article proposes two main approaches that use a single RGB image and sparse depth information gathered from a variety of sensors/techniques (stereo, LiDAR and Light Stripe Ranging (LSR)): a Convolutional Neural Network (CNN) and a cascade architecture, which aims to improve on the results of the first. Ablation studies were conducted to infer the impact of these depth cues on the performance of each model. The models trained with LiDAR sparse information are the most reliable, achieving an average Root Mean Squared Error (RMSE) of 11.8 cm on our own Inhouse dataset, while LSR proved to be too sparse an input to compute accurate predictions on its own. © Springer Nature Switzerland AG 2020.
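The approaches above take an RGB image plus a sparse depth channel as input to a regression model. A minimal NumPy sketch of how such an input is typically assembled is shown below: a dense map is subsampled to mimic a sparse sensor, then stacked with the image into a 4-channel tensor. The 5% density, image size, and depth range are illustrative assumptions, not values from the paper.

```python
import numpy as np

def sparsify(depth, keep_ratio, rng):
    """Keep only ~`keep_ratio` of the depth pixels, zeroing the rest.

    Zero marks 'no measurement', a common convention for sparse depth inputs.
    """
    mask = rng.random(depth.shape) < keep_ratio
    return np.where(mask, depth, 0.0)

rng = np.random.default_rng(0)
h, w = 64, 64
rgb = rng.random((h, w, 3)).astype(np.float32)           # placeholder image
dense = rng.uniform(1.0, 30.0, (h, w)).astype(np.float32)  # placeholder depth (m)
sparse = sparsify(dense, keep_ratio=0.05, rng=rng)       # LiDAR-like sparsity
net_input = np.concatenate([rgb, sparse[..., None]], axis=-1)  # shape (H, W, 4)
density = float((sparse > 0).mean())
```

Varying `keep_ratio` is one way to reproduce the paper's observation that inputs which are too sparse (as with LSR) degrade the completion quality.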

2020

Multi-agent optimization for offshore wind farm inspection using an improved population-based metaheuristic

Authors
Silva, RJ; Leite, PN; Pinto, AM;

Publication
2020 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2020)

Abstract
The use of robotic solutions in tasks such as the inspection and monitoring of offshore wind farms aims not only to mitigate the risks involved, but also to reduce the costs of operating and maintaining these structures. Performing a complete inspection of the platforms in useful time is crucial; therefore, multiple agents can prove to be a cost-effective solution. This work proposes a trajectory planning algorithm, based on the Ant Colony metaheuristic, capable of optimizing the number of Autonomous Surface Vehicles (ASVs) to be used and their corresponding routes. Experiments conducted in a simulated environment, representative of the real scenario, prove this approach successful in selecting the appropriate number of agents and planning, for each one, a collision-free trajectory that guarantees full observation of the offshore structures. © 2020 IEEE.
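The Ant Colony metaheuristic named above can be illustrated with a minimal sketch. The version below is a deliberately simplified single-vehicle variant that finds a closed tour over inspection waypoints from a distance matrix; the paper's contribution, optimizing the number of ASVs and their joint collision-free routes, goes well beyond this. All parameter values are illustrative defaults.

```python
import numpy as np

def ant_colony_tour(dist, n_ants=20, n_iter=50, alpha=1.0, beta=2.0,
                    rho=0.5, seed=0):
    """Single-vehicle Ant Colony tour over an (n x n) distance matrix.

    `alpha`/`beta` weigh pheromone vs. heuristic (1/distance);
    `rho` is the pheromone evaporation rate.
    """
    rng = np.random.default_rng(seed)
    n = len(dist)
    tau = np.ones((n, n))                        # pheromone trails
    eta = 1.0 / (dist + np.eye(n))               # heuristic desirability
    best_tour, best_len = None, np.inf
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour = [rng.integers(n)]             # random start node
            while len(tour) < n:
                i = tour[-1]
                w = (tau[i] ** alpha) * (eta[i] ** beta)
                w[tour] = 0.0                    # forbid revisiting nodes
                tour.append(rng.choice(n, p=w / w.sum()))
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        tau *= 1.0 - rho                         # evaporation
        for tour, length in tours:               # deposit proportional to quality
            for k in range(n):
                tau[tour[k], tour[(k + 1) % n]] += 1.0 / length
    return best_tour, best_len

# Toy scenario: 6 platforms on a unit circle; the best tour follows the circle.
pts = np.array([[np.cos(a), np.sin(a)]
                for a in np.linspace(0, 2 * np.pi, 6, endpoint=False)])
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
tour, length = ant_colony_tour(dist)
```

A multi-agent extension, as in the paper, would additionally partition the waypoints among vehicles and penalize route conflicts in the tour-length objective.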