2019
Authors
Melo, J; Matos, AC;
Publication
AUTONOMOUS ROBOTS
Abstract
In this paper we present a novel method for the acoustic tracking of multiple Autonomous Underwater Vehicles. While the problem of tracking a single moving vehicle has been addressed in the literature, tracking multiple vehicles has been largely overlooked, mostly due to the inherent difficulties in data association with traditional acoustic localization networks. The proposed approach is based on a Probability Hypothesis Density Filter, thus overcoming the data association problem. Our tracker is able not only to successfully estimate the positions of the vehicles, but also their velocities. Moreover, the tracker estimates are labelled, thus providing a way to establish track continuity of the targets. Using real-world data, our method is experimentally validated and the performance of the tracker is evaluated.
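The abstract does not detail the tracker internals; the sketch below shows only a constant-velocity Kalman predict/update step, a building block commonly used inside multi-target trackers to estimate both position and velocity from position-only measurements (all parameters and noise levels are illustrative assumptions, not the authors' values).

```python
import numpy as np

def cv_predict(x, P, dt, q=0.1):
    """Constant-velocity prediction for a state x = [px, py, vx, vy]."""
    F = np.array([[1.0, 0.0, dt, 0.0],
                  [0.0, 1.0, 0.0, dt],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    Q = q * np.eye(4)  # simplified process-noise covariance
    return F @ x, F @ P @ F.T + Q

def cv_update(x, P, z, r=0.5):
    """Kalman update with a position-only measurement z = [px, py]."""
    H = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])
    S = H @ P @ H.T + r * np.eye(2)   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# One vehicle moving at (1.0, 0.5) m/s, observed after dt = 1 s
x, P = np.array([0.0, 0.0, 1.0, 0.5]), np.eye(4)
x, P = cv_predict(x, P, dt=1.0)
x, P = cv_update(x, P, z=np.array([1.0, 0.5]))
```

A PHD filter propagates a whole intensity of such single-target components at once, which is how the data-association step is avoided.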
2019
Authors
Melo, J; Matos, A;
Publication
ASIAN JOURNAL OF CONTROL
Abstract
In this article, a new Data-Driven formulation of the Particle Filter framework is proposed. The new formulation is able to learn an approximate proposal distribution from previous data, relaxing the need to explicitly model all the disturbances that might affect the system. Such characteristics are particularly suited to Terrain-Based Navigation for sensor-limited AUVs, where typical scenarios often include non-negligible sources of noise affecting the system that are unknown and hard to model. Numerical results are presented that demonstrate the superior accuracy, robustness and efficiency of the proposed Data-Driven approach.
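The article's learned proposal is not reproduced here; as a minimal sketch, the bootstrap particle filter below localizes a vehicle against a synthetic 1D terrain profile and exposes a `proposal` hook where a data-driven proposal distribution could replace the plain motion model (the terrain function, noise levels and all names are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

def terrain(x):
    """Illustrative depth map: measured depth as a function of position."""
    return 10.0 + 2.0 * np.sin(0.5 * x)

def pf_step(particles, weights, u, z, proposal=None,
            motion_std=0.3, meas_std=0.2):
    """One particle-filter step: propose, weight by depth likelihood, resample."""
    if proposal is None:  # default: sample from the motion model (bootstrap PF)
        particles = particles + u + rng.normal(0.0, motion_std, particles.size)
    else:                 # hook where a learned, data-driven proposal would go
        particles = proposal(particles, u, z)
    weights = weights * np.exp(-0.5 * ((z - terrain(particles)) / meas_std) ** 2)
    weights = weights / weights.sum()
    idx = rng.choice(particles.size, particles.size, p=weights)  # resample
    return particles[idx], np.full(particles.size, 1.0 / particles.size)

# Track a vehicle moving +1 m per step, starting near x = 0
true_x = 0.0
particles = rng.normal(0.0, 1.0, 500)
weights = np.full(500, 1.0 / 500)
for _ in range(20):
    true_x += 1.0
    z = terrain(true_x) + rng.normal(0.0, 0.2)   # noisy depth measurement
    particles, weights = pf_step(particles, weights, u=1.0, z=z)
estimate = particles.mean()
```

Passing a learned `proposal` callable lets the filter place particles where past data says the posterior mass tends to be, rather than where the hand-written motion model puts it.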
2019
Authors
Nunes, AP; Silva Gaspar, ARS; Pinto, AM; Matos, AC;
Publication
SENSOR REVIEW
Abstract
Purpose This paper aims to present a mosaicking method for underwater robotic applications, whose result can be provided to other perceptual systems for scene understanding, such as real-time object recognition. Design/methodology/approach The method, called robust and large-scale mosaicking (ROLAMOS), features an efficient frame-to-frame motion estimation with outlier removal and consistency checking that maps large visual areas in high resolution. The visual mosaic of the sea-floor is created on-the-fly by a robust registration procedure that composes monocular observations and manages the computational resources. Moreover, the registration process of ROLAMOS aligns each new observation to the existing mosaic. Findings A comprehensive set of experiments compares the performance of ROLAMOS to other similar approaches, using both publicly available data sets and live data obtained by a ROV operating in real scenes. The results demonstrate that ROLAMOS is adequate for mapping sea-floor scenarios, as it provides accurate information from the seabed, which is of extreme importance for autonomous robots surveying the environment that do not rely on specialized computers. Originality/value ROLAMOS is suitable for robotic applications that require an online, robust and effective technique to reconstruct the underwater environment from visual information alone.
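ROLAMOS's own registration pipeline is not reproduced here; as an illustrative stand-in for the frame-to-frame motion estimation step, the sketch below recovers the integer translation between two frames by phase correlation, a classical image-registration primitive (the synthetic frames and the shift are assumptions for the example).

```python
import numpy as np

def estimate_shift(a, b):
    """Estimate (dy, dx) such that b == np.roll(a, (dy, dx), axis=(0, 1)),
    via phase correlation: the normalized cross-power spectrum of two
    translated images peaks at the translation."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fb * np.conj(Fa)
    cross = cross / (np.abs(cross) + 1e-12)   # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape
    if dy > h // 2:                           # wrap to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# Synthetic sea-floor frame, then the "next" frame moved 5 px down, 3 px left
rng = np.random.default_rng(2)
frame = rng.random((64, 64))
shifted = np.roll(frame, (5, -3), axis=(0, 1))
dy, dx = estimate_shift(frame, shifted)
```

Chaining such pairwise motions (with outlier checks, as the abstract describes) is what lets a mosaicker place each new observation into the global map.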
2019
Authors
Leite, P; Silva, R; Matos, A; Pinto, AM;
Publication
2019 19TH IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2019)
Abstract
Autonomous Surface Vehicles (ASVs) provide an ideal platform to further explore the many opportunities in the cargo shipping industry, by making it more profitable and safer. This paper presents an architecture for the autonomous docking operation, formed by two stages: a situational awareness system to detect a mooring facility where an ASV can safely dock, and a maneuver module. Information retrieved from a 3D LIDAR, an IMU and a GPS is combined to extract the geometric features of the floating platform and to estimate the position and orientation of the mooring facility relative to the ASV. The maneuver module then plans a trajectory to a specific position and guarantees that the ASV will not collide with the mooring facility. The approach presented in this paper was validated in distinct environmental and weather conditions, including tidal waves and wind. The results demonstrate the ability of the proposed architecture to detect the docking platform and safely conduct the navigation towards it, achieving errors of at most 0.107 m in position and 6.58 degrees in orientation.
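The paper's perception pipeline is not given in the abstract; the sketch below illustrates one small piece of such a system, estimating the heading of a dock edge from a 2D slice of LIDAR points via a principal-axis fit (the point data, noise level and function names are illustrative assumptions).

```python
import math
import numpy as np

def dock_heading(points):
    """Estimate the dominant edge orientation of a 2D point cluster (radians).

    Fits the principal axis of the points: the eigenvector of the
    covariance matrix with the largest eigenvalue gives the direction
    of greatest spread, i.e. the edge direction.
    """
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]  # direction of largest spread
    return math.atan2(major[1], major[0])

# Synthetic dock edge at 30 degrees with small sensor noise
rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 100)
edge = np.stack([t * math.cos(math.radians(30.0)),
                 t * math.sin(math.radians(30.0))], axis=1)
edge = edge + rng.normal(0.0, 0.02, edge.shape)
heading_deg = math.degrees(dock_heading(edge))
```

A principal-axis fit has a 180-degree sign ambiguity, so a real system would disambiguate the heading using the approach direction or additional platform features.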
2019
Authors
Marques, MM; Mendonca, R; Marques, F; Ramalho, T; Lobo, V; Matos, A; Ferreira, B; Simoes, N; Castelao, I;
Publication
2019 IEEE UNDERWATER TECHNOLOGY (UT)
Abstract
Nowadays, one of the problems associated with Unmanned Systems is the gap between the research community and end-users. To address this problem, the Portuguese Navy Research Center (CINAV) conducted REX 2016 (Robotic Exercises). This paper describes the trials presented in this exercise, which was divided into two phases. The first phase took place at the Naval Base in Lisbon, with the support of divers and RHIBs (Rigid-Hulled Inflatable Boats), and the second phase, also with divers' support, off the Lisbon-Cascais coast. The exercise involved many participants and research groups, including INESC-TEC, UNINOVA, TEKEVER and UAVISION. Such an exercise benefits both the Portuguese Navy and its partners: for the Navy, it is an opportunity to be in contact with recent market technologies and research; for the partners, it is an opportunity to test their systems in a real environment, which is usually difficult to accomplish. The paper describes three of the most relevant experiments: underwater docking stations, UAV and USV cooperation, and tracking targets from UAVs.
2019
Authors
Teixeira, B; Silva, H; Matos, A; Silva, E;
Publication
OCEANS 2019 MTS/IEEE SEATTLE
Abstract
This paper addresses the use of deep learning approaches for visual-based navigation in confined underwater environments. State-of-the-art algorithms have shown the tremendous potential deep learning architectures can have for visual navigation implementations, although they are still mostly outperformed by classical feature-based techniques. In this work, we apply current state-of-the-art deep learning methods for visual-based robot navigation to the more challenging underwater environment, providing both an underwater visual dataset acquired in real operational mission scenarios and an assessment of state-of-the-art algorithms in the underwater context. We extend current work by proposing a novel pose optimization architecture for correcting visual odometry estimate drift using a Visual-Inertial fusion network, consisting of a neural network architecture anchored on an inertial supervision learning scheme. Our Visual-Inertial Fusion Network was shown to improve trajectory estimates by an average of 50%, while also producing more visually consistent trajectory estimates in both our underwater application scenarios.
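Assessments of visual odometry like the one described typically report the absolute trajectory error (ATE). The sketch below computes a simplified ATE RMSE with only a translational alignment between the two trajectories (full evaluations usually also align rotation and scale; the trajectories here are illustrative assumptions).

```python
import numpy as np

def ate_rmse(gt, est):
    """Absolute trajectory error (RMSE) between timestamp-aligned
    position sequences, both shaped (N, 3). Only the mean translational
    offset is removed before comparing (no rotation/scale alignment)."""
    gt = np.asarray(gt, dtype=float)
    est = np.asarray(est, dtype=float)
    est_aligned = est - est.mean(axis=0) + gt.mean(axis=0)  # remove mean offset
    err = np.linalg.norm(gt - est_aligned, axis=1)          # per-pose error
    return float(np.sqrt(np.mean(err ** 2)))

gt = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]], dtype=float)
offset = gt + np.array([0.5, 0.0, 0.0])                 # constant offset only
drift = gt + np.outer(np.arange(4), [0.1, 0.0, 0.0])    # drift growing 0.1 m/step
print(ate_rmse(gt, offset))  # → 0.0 (constant offset removed by alignment)
```

A drift-correction stage like the one the paper proposes aims precisely at shrinking the second kind of error, which alignment cannot remove.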