2021
Authors
Pereira, MI; Claro, RM; Leite, PN; Pinto, AM;
Publication
IEEE ACCESS
Abstract
The automation of typically intelligent and decision-making processes in the maritime industry leads to fewer accidents and more cost-effective operations. However, many challenges remain to be solved before fully autonomous systems can be deployed. Artificial Intelligence (AI) has played a major role in this paradigm shift and shows great potential for solving some of these challenges, such as the docking process of an autonomous vessel. This work proposes a lightweight volumetric Convolutional Neural Network (vCNN) capable of recognizing different docking-based structures in real-time using 3D data. A synthetic-to-real domain adaptation approach is also proposed to accelerate the training process of the vCNN. This approach greatly decreases the cost of data acquisition and the need for advanced computational resources. Extensive experiments demonstrate an accuracy of over 90% in the recognition of different docking structures using low-resolution sensors. The inference time of the system was about 120 ms on average. Results obtained with a real Autonomous Surface Vehicle (ASV) demonstrated that the vCNN trained with the synthetic-to-real domain adaptation approach is suitable for maritime mobile robots. This novel AI recognition method, combined with the use of 3D data, increases the robustness of the docking process to environmental constraints such as rain and fog, as well as insufficient lighting in nighttime operations.
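As an illustration of the kind of model the abstract describes, a minimal sketch of a lightweight volumetric CNN operating on a voxelized 3D scan is shown below; the layer sizes, grid resolution, and class count are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class VoxelCNN(nn.Module):
    """Toy volumetric CNN classifying a 32x32x32 occupancy grid.
    Layer sizes and class count are assumptions for illustration only."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),   # 3D convolution over voxels
            nn.ReLU(),
            nn.MaxPool3d(2),                              # 32^3 -> 16^3
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),                              # 16^3 -> 8^3
        )
        self.classifier = nn.Linear(32 * 8 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Classify one (empty) occupancy grid: batch x channel x depth x height x width.
logits = VoxelCNN()(torch.zeros(1, 1, 32, 32, 32))
print(logits.shape)  # torch.Size([1, 4])
```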
2021
Authors
Leite, PN; Pinto, AM;
Publication
IEEE ACCESS
Abstract
Understanding the surrounding 3D scene is of the utmost importance for many robotic applications. The rapid evolution of machine learning techniques has enabled impressive results when depth is extracted from a single image. However, achieving this performance requires high-latency networks, rendering them unusable for time-constrained applications. This article introduces NEON, a lightweight Convolutional Neural Network (CNN) for depth estimation designed to balance accuracy and inference time. Instead of focusing solely on visual features, the proposed methodology exploits the Motion-Parallax effect to combine the apparent motion of pixels with texture. This research demonstrates that motion perception provides crucial insight into the magnitude of movement for each pixel, which also encodes cues about depth, since large displacements usually occur when objects are closer to the imaging sensor. NEON's performance is compared to relevant networks in terms of Root Mean Squared Error (RMSE), the percentage of correctly predicted pixels (delta(1)), and inference time, using the KITTI dataset. Experiments show that NEON is significantly more efficient than the current top-ranked network, producing predictions 12 times faster while achieving an average RMSE of 3.118 m and a delta(1) of 94.5%. Ablation studies demonstrate the relevance of tailoring the network to use motion-perception principles when estimating depth from image sequences, considering that the quality of the estimated depth maps is similar to that of more computationally demanding state-of-the-art networks. Therefore, this research proposes a network that can be integrated into robotic applications where computational resources and processing times are important constraints, enabling tasks such as obstacle avoidance, object recognition, and robotic grasping.
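For context, the two reported metrics are standard in monocular depth estimation; the sketch below shows how they are commonly computed (the array contents are made-up example values, and the delta(1) threshold of 1.25 is the conventional choice).

```python
import numpy as np

def rmse(pred: np.ndarray, gt: np.ndarray) -> float:
    """Root Mean Squared Error between predicted and ground-truth depths (metres)."""
    return float(np.sqrt(np.mean((pred - gt) ** 2)))

def delta1(pred: np.ndarray, gt: np.ndarray, thr: float = 1.25) -> float:
    """Fraction of pixels where max(pred/gt, gt/pred) is below the threshold."""
    ratio = np.maximum(pred / gt, gt / pred)
    return float(np.mean(ratio < thr))

pred = np.array([10.0, 5.0, 2.0])   # predicted depths per pixel
gt = np.array([9.0, 5.5, 4.0])      # ground-truth depths per pixel
print(rmse(pred, gt))    # ~1.32 m
print(delta1(pred, gt))  # ~0.67 (2 of 3 pixels within the 1.25 ratio)
```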
2021
Authors
Duarte D.F.; Pereira M.I.; Pinto A.M.;
Publication
Oceans Conference Record (IEEE)
Abstract
Recently, research concerning the navigation of Autonomous Surface Vehicles (ASVs) has been increasing. However, large-scale deployment of these vessels is still held back by numerous challenges, such as multi-object tracking. This article presents the development of a tracking model through transfer learning techniques, based on reference object trackers for urban scenarios. The work consisted of training a neural network through deep learning techniques, including data association and a comparison of three optimisers (Adadelta, Adam, and SGD) to determine the hyper-parameters that maximise training efficiency. The developed model performed well at tracking large vessels in the ocean, succeeding even under harsh lighting conditions and poor image focus.
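The abstract mentions comparing Adadelta, Adam, and SGD; a toy harness for that kind of optimiser comparison might look like the sketch below (the model, data, and learning rates are placeholder assumptions, not the paper's training pipeline).

```python
import torch
import torch.nn as nn

def compare_optimisers(model_fn, data, labels, epochs: int = 5):
    """Train identical models with each optimiser and return the final losses."""
    loss_fn = nn.CrossEntropyLoss()
    optimisers = {
        "Adadelta": lambda p: torch.optim.Adadelta(p, lr=1.0),
        "Adam": lambda p: torch.optim.Adam(p, lr=1e-3),
        "SGD": lambda p: torch.optim.SGD(p, lr=1e-2, momentum=0.9),
    }
    results = {}
    for name, make_opt in optimisers.items():
        torch.manual_seed(0)  # identical initialisation for a fair comparison
        model = model_fn()
        opt = make_opt(model.parameters())
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(data), labels)
            loss.backward()
            opt.step()
        results[name] = float(loss)
    return results

# Toy example: a linear classifier on random features.
data, labels = torch.randn(64, 8), torch.randint(0, 3, (64,))
print(compare_optimisers(lambda: nn.Linear(8, 3), data, labels))
```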
2021
Authors
Resende, J; Barbosa, P; Almeida, J; Martins, A;
Publication
2021 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC)
Abstract
This paper presents a high-resolution imaging system developed for plankton imaging in the context of the MarinEye integrated biological sensor [1]. This sensor aims to provide an autonomous system for integrated physical, chemical, and biological marine monitoring, combining imaging, acoustics, sonar, and fraction-filtration systems (coupled to DNA/RNA preservation), as well as sensors targeting physical-chemical variables, in a modular and compact system that can be deployed on fixed and mobile platforms, such as the TURTLE robotic deep-sea lander [2]. The results obtained with the system, both in laboratory conditions and in the field, are presented and discussed, allowing the characterization and validation of the performance of the Autonomous High-Resolution Image Acquisition System for Plankton.
2021
Authors
Loureiro, G; Dias, A; Martins, A; Almeida, J;
Publication
REMOTE SENSING
Abstract
The use and study of Unmanned Aerial Vehicles (UAVs) have been increasing over the years due to their applicability in several operations, such as search and rescue, delivery, surveillance, and others. Considering the increased presence of these vehicles in the airspace, it becomes necessary to reflect on the safety issues or failures that UAVs may experience and the appropriate responses. Moreover, in many missions the vehicle will not return to its original location. If it fails to arrive at the landing spot, it needs the onboard capability to estimate the best area in which to land safely. This paper addresses the scenario of detecting a safe landing spot during operation. The algorithm classifies the incoming Light Detection and Ranging (LiDAR) data and stores the locations of suitable areas. The developed method analyses geometric features of the point cloud data and detects potentially suitable spots. The algorithm uses Principal Component Analysis (PCA) to find planes in point cloud clusters. Areas whose slope is below a threshold are considered potential landing spots. These spots are then evaluated with respect to ground and vehicle conditions, such as the distance to the UAV, the presence of obstacles, the area's roughness, and the spot's slope. Finally, the output of the algorithm is the optimal spot to land on, which can vary during operation. The algorithm is evaluated in simulated scenarios and on an experimental dataset, demonstrating its suitability for real-time operation.
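As an illustration of the PCA step described above: the normal of a cluster's best-fit plane is the covariance eigenvector with the smallest eigenvalue, and the slope is the angle between that normal and the vertical. The sketch below follows that idea; the 10-degree threshold and the z-up convention are assumptions, not values from the paper.

```python
import numpy as np

def cluster_slope_deg(points: np.ndarray) -> float:
    """Fit a plane to an Nx3 point cluster via PCA and return its slope in degrees.
    The plane normal is the covariance eigenvector with the smallest eigenvalue."""
    centered = points - points.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))  # eigenvalues ascending
    normal = eigvecs[:, 0]                                 # smallest-variance direction
    # Slope = angle between the plane normal and the vertical (z-up) axis.
    cos_angle = abs(normal[2]) / np.linalg.norm(normal)
    return float(np.degrees(np.arccos(np.clip(cos_angle, 0.0, 1.0))))

def is_landing_candidate(points: np.ndarray, max_slope_deg: float = 10.0) -> bool:
    """Accept the cluster as a potential landing spot if its slope is below threshold."""
    return cluster_slope_deg(points) < max_slope_deg

# Example: a nearly flat, slightly noisy patch should pass the slope test.
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(200, 2))
patch = np.column_stack([xy, 0.01 * rng.standard_normal(200)])
print(is_landing_candidate(patch))  # True
```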
2021
Authors
Amado, M; Lopes, F; Dias, A; Martins, A;
Publication
IEEE International Conference on Autonomous Robot Systems and Competitions, ICARSC 2021, Santa Maria da Feira, Portugal, April 28-29, 2021
Abstract