
Publications by Alexandra Nunes

2018

Comparative Study of Visual Odometry and SLAM Techniques

Authors
Gaspar, AR; Nunes, A; Pinto, A; Matos, A;

Publication
Advances in Intelligent Systems and Computing

Abstract
The use of visual odometry and SLAM methods in autonomous vehicles has been growing. Optical sensors provide valuable information about the scene that enhances the navigation of autonomous vehicles. Although several visual techniques are already available in the literature, their performance can be significantly affected by the scene captured by the optical sensor. In this context, this paper presents a comparative analysis of three monocular visual odometry methods and three stereo SLAM techniques. The advantages, particularities and performance of each technique are discussed to provide information that is relevant for the development of new research and novel robotic applications. © Springer International Publishing AG 2018.
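
As a rough illustration of the kind of monocular visual odometry pipeline compared in this paper, the sketch below estimates the relative camera pose between two consecutive frames with ORB features, RANSAC-based essential-matrix estimation and pose recovery in OpenCV. It is a minimal example assuming a calibrated camera (intrinsic matrix K), not one of the specific methods evaluated in the study.

```python
# Minimal two-frame monocular visual odometry sketch (not a method from the paper).
import cv2
import numpy as np

def relative_pose(prev_gray, curr_gray, K):
    """Estimate rotation R and up-to-scale translation t between two frames."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)  # assumes both frames yield features
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC rejects outlier correspondences while estimating the essential matrix.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    # Decompose E into rotation and translation direction (scale is unobservable).
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t
```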

2018

Urban@CRAS dataset: Benchmarking of visual odometry and SLAM techniques

Authors
Gaspar, AR; Nunes, A; Pinto, AM; Matos, A;

Publication
ROBOTICS AND AUTONOMOUS SYSTEMS

Abstract
Public datasets are becoming extremely important for the scientific and industrial community to accelerate the development of new approaches and to guarantee identical testing conditions for comparing methods proposed by different researchers. This research presents the Urban@CRAS dataset, which captures several scenarios of an iconic region of Porto, Portugal. These scenarios present a multiplicity of conditions and urban situations, including vehicle-to-vehicle and vehicle-to-human interactions, cross-sides, turn-arounds, roundabouts and different traffic conditions. Data from these scenarios are timestamped, calibrated and acquired at 10 to 200 Hz through a set of heterogeneous sensors installed on the roof of a car. These sensors include a 3D LIDAR, high-resolution color cameras, a high-precision IMU and a GPS navigation system. In addition, positioning information obtained from a real-time kinematic satellite navigation system (with 0.05 m of error) is also included as ground truth. Moreover, a benchmarking process for some typical methods for visual odometry and SLAM is also included in this research, where qualitative and quantitative performance indicators are used to discuss the advantages and particularities of each implementation. Thus, this research fosters new advances in the perception and navigation approaches of autonomous robots (and driving).
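
For context on the quantitative indicators mentioned above, a common benchmark metric is the absolute trajectory error (ATE) between an estimated trajectory and the RTK ground truth. The sketch below computes its RMSE form, assuming the two trajectories are already timestamp-associated and expressed in the same frame; it is an illustrative metric, not the exact evaluation procedure used in the paper.

```python
# Illustrative absolute trajectory error (RMSE) between two aligned trajectories.
import numpy as np

def ate_rmse(estimated_xyz, groundtruth_xyz):
    """Both inputs are (N, 3) arrays of associated positions in the same frame."""
    diff = estimated_xyz - groundtruth_xyz          # per-sample position error
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))
```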

2019

A mosaicking technique for object identification in underwater environments

Authors
Nunes, AP; Silva Gaspar, ARS; Pinto, AM; Matos, AC;

Publication
SENSOR REVIEW

Abstract
Purpose: This paper aims to present a mosaicking method for underwater robotic applications, whose result can be provided to other perceptual systems for scene understanding, such as real-time object recognition. Design/methodology/approach: This method, called robust and large-scale mosaicking (ROLAMOS), presents an efficient frame-to-frame motion estimation with outlier removal and consistency checking that maps large visual areas in high resolution. The visual mosaic of the sea-floor is created on-the-fly by a robust registration procedure that composes monocular observations and manages the computational resources. Moreover, the registration process of ROLAMOS aligns each new observation to the existing mosaic. Findings: A comprehensive set of experiments compares the performance of ROLAMOS to other similar approaches, using both publicly available data sets and live data obtained by a ROV operating in real scenes. The results demonstrate that ROLAMOS is adequate for mapping sea-floor scenarios, as it provides accurate information from the seabed, which is of extreme importance for autonomous robots surveying the environment that do not rely on specialized computers. Originality/value: ROLAMOS is suitable for robotic applications that require an online, robust and effective technique to reconstruct the underwater environment from visual information only.
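
The frame-to-frame registration with outlier removal described above can be pictured with the following OpenCV sketch, which matches ORB features between the current frame and the mosaic, discards inconsistent matches with RANSAC, and warps the frame into the mosaic. It is a simplified, assumed pipeline in the spirit of the description, not the ROLAMOS implementation, and the naive overwrite blending is a placeholder.

```python
# Simplified frame-to-mosaic registration sketch (not the ROLAMOS implementation).
import cv2
import numpy as np

def register_to_mosaic(mosaic, frame):
    orb = cv2.ORB_create(1500)
    kp_m, des_m = orb.detectAndCompute(mosaic, None)
    kp_f, des_f = orb.detectAndCompute(frame, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_f, des_m)
    src = np.float32([kp_f[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_m[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC removes outlier matches while fitting the frame-to-mosaic homography.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    warped = cv2.warpPerspective(frame, H, (mosaic.shape[1], mosaic.shape[0]))
    mask = warped > 0                      # naive blending: overwrite valid pixels
    mosaic[mask] = warped[mask]
    return mosaic
```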

2019

Three-dimensional mapping in underwater environment

Authors
Nunes, A; Matos, A;

Publication
U.Porto Journal of Engineering

Abstract
Autonomous underwater vehicles are applied in diverse fields, namely in tasks that are risky for human beings to perform, such as optical inspection for structural quality control. Optical sensors have a more appealing cost and supply a larger quantity of data. Lasers can be used, along with cameras, to reconstruct structures in three dimensions and create a faithful representation of the environment. In this context, however, a visual approach was used, and the paper presents a method that assembles the three-dimensional information harvested over time, also combining RGB information for surface reconstruction. The map construction follows the motion estimated by an odometry method previously selected from the literature. Experiments conducted in a real scenario show that the proposed solution is able to provide a reliable map of objects and even of the seafloor.
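
The accumulation step described here (placing measurements collected over time into one map using the estimated motion) can be sketched as below, assuming each scan is a set of 3D points in the sensor frame and each pose is a 4x4 sensor-to-world transform produced by the chosen odometry method. The surface-reconstruction and RGB-colouring steps are omitted.

```python
# Sketch of accumulating per-frame 3D points into a single map using odometry poses.
import numpy as np

def accumulate_map(scans, poses):
    """scans: list of (N_i, 3) point arrays; poses: list of 4x4 sensor-to-world transforms."""
    world_points = []
    for pts, T in zip(scans, poses):
        homogeneous = np.hstack([pts, np.ones((pts.shape[0], 1))])  # (N_i, 4)
        world_points.append((homogeneous @ T.T)[:, :3])             # transform to world frame
    return np.vstack(world_points)
```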

2019

Critical object recognition in underwater environment

Authors
Nunes, A; Gaspar, AR; Matos, A;

Publication
OCEANS 2019 - Marseille, OCEANS Marseille 2019

Abstract
Nowadays, ocean exploration is far from complete, and the development of suitable recognition systems is crucial to allow robots to perform inspection and monitoring tasks in diverse conditions. The datasets available online are incomplete for these kinds of scenarios, so it is important to build datasets that cover real conditions in a simulated environment. Thus, a dataset was developed with some man-made objects present in the underwater environment. Moreover, the developed method (a Convolutional Neural Network) is presented, and its evaluation in diverse conditions is performed. A comparative analysis and a discussion between the proposed algorithm and the ResNet architecture are also presented. The obtained results showed that the developed method is appropriate to classify seven different critical objects with good performance.
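
Since the abstract does not spell out the network, the sketch below shows only the general shape of a small convolutional classifier for seven object classes in PyTorch, assuming 64x64 RGB inputs; the layer sizes are placeholders rather than the architecture actually evaluated against ResNet.

```python
# Placeholder small CNN for a 7-class classifier (not the paper's architecture).
import torch.nn as nn

class SmallObjectCNN(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        self.features = nn.Sequential(                 # 64x64 input -> 8x8 feature maps
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))
```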

2021

Evaluation of Bags of Binary Words for Place Recognition in Challenging Scenarios

Authors
Gaspar, AR; Nunes, A; Matos, A;

Publication
2021 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC)

Abstract
To perform autonomous tasks, robots in real-world environments must be able to navigate in dynamic and unknown spaces. To do so, they must recognize previously seen places to compensate for accumulated positional deviations. This task requires effective identification of recovered landmarks to produce a consistent map, and the use of binary descriptors is increasing, especially because of their compact representation. The visual Bag-of-Words (BoW) algorithm is one of the most commonly used techniques to perform appearance-based loop closure detection quickly and robustly. Therefore, this paper presents a behavioral evaluation of a conventional BoW scheme based on Oriented FAST and Rotated BRIEF (ORB) features for image similarity detection in challenging scenarios. For each scenario, full-indexing vocabularies are created to model the operating environment and to evaluate the performance in recognizing previously seen places, similarly to online approaches. Experiments were conducted on multiple public datasets containing scene changes, perceptual aliasing conditions, or dynamic elements. The Bag of Binary Words technique shows a good balance in dealing with such severe conditions at a low computational cost.
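
To make the BoW idea concrete, the sketch below builds a vocabulary from ORB descriptors with OpenCV's k-means trainer and compares images through normalized word histograms. It approximates the Bag of Binary Words scheme with Euclidean k-means instead of a Hamming-space (DBoW2-style) binary vocabulary, so it illustrates the concept rather than reproducing the evaluated implementation.

```python
# Approximate ORB Bag-of-Words place similarity (conceptual, not DBoW2).
import cv2
import numpy as np

orb = cv2.ORB_create(1000)

def build_vocabulary(images, k=500):
    trainer = cv2.BOWKMeansTrainer(k)
    for img in images:
        _, des = orb.detectAndCompute(img, None)
        if des is not None:
            trainer.add(np.float32(des))      # k-means trainer expects float descriptors
    return trainer.cluster()                  # (k, 32) array of visual-word centres

def bow_histogram(img, vocabulary):
    _, des = orb.detectAndCompute(img, None)
    hist = np.zeros(len(vocabulary), dtype=np.float32)
    if des is None:
        return hist
    # Assign each descriptor to its nearest visual word (cluster centre).
    for m in cv2.BFMatcher(cv2.NORM_L2).match(np.float32(des), vocabulary):
        hist[m.trainIdx] += 1
    return hist / max(hist.sum(), 1.0)

def similarity(h1, h2):
    # Cosine similarity between histograms: higher means more likely the same place.
    return float(np.dot(h1, h2) / (np.linalg.norm(h1) * np.linalg.norm(h2) + 1e-12))
```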
