
About

In February 2017 I completed the Integrated Master's degree in Electrical and Computer Engineering at the Faculty of Engineering of the University of Porto. My connection to the Centro de Robótica e Sistemas (CRAS) began with my master's dissertation, whose objective was to map the seabed and the underwater structures present in it using a visual motion estimation method. Since May of that year, I have been a research grant holder at CRAS. I was involved in the design of a localisation system based on GPS receivers and an inertial system, and my current work focuses on vision and perception.

Details

  • Name: Alexandra Nunes
  • Role: Research Assistant
  • Since: 1st October 2016
Publications

2023

Limit Characterization for Visual Place Recognition in Underwater Scenes

Authors
Gaspar, AR; Nunes, A; Matos, A;

Publication
ROBOT2022: FIFTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, VOL 1

Abstract
The underwater environment contains structures that require regular inspection. However, the nature of this environment poses a number of challenges to obtaining an accurate vehicle position and, consequently, to successful image similarity detection. Although some factors, such as water turbidity and light attenuation, degrade the quality of the captured images, visual sensors have a strong impact in close-range mission scenarios. Therefore, the purpose of this paper is to study whether these data can address the aforementioned underwater challenges on their own. Given the lack of available data in this context, a typical underwater scenario was recreated using the Stonefish simulator. Experiments were conducted on two predefined trajectories containing scene appearance changes. The bag-of-words (BoW) approach correctly detects the loop-closure situations, but it is sensitive to some severe conditions.
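To make the bag-of-words idea above concrete, the following minimal sketch (assuming OpenCV and NumPy, with an illustrative vocabulary size and similarity threshold that are not taken from the paper) builds a visual vocabulary from ORB descriptors and flags candidate loop closures by comparing normalised BoW histograms.

```python
# Minimal bag-of-words (BoW) place-recognition sketch (OpenCV + NumPy).
# Hypothetical illustration only; vocabulary size and threshold are assumptions.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)

def orb_descriptors(image_bgr):
    """Extract ORB descriptors from a BGR image (may return None if no features)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, desc = orb.detectAndCompute(gray, None)
    return desc

def build_vocabulary(descriptor_list, k=200):
    """Cluster all training descriptors into k visual words with k-means."""
    data = np.vstack(descriptor_list).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 50, 1e-3)
    _, _, centers = cv2.kmeans(data, k, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    return centers  # (k, 32) visual-word centroids

def bow_histogram(desc, vocabulary):
    """Quantise descriptors against the vocabulary; return an L2-normalised histogram."""
    dists = np.linalg.norm(desc.astype(np.float32)[:, None, :] - vocabulary[None, :, :], axis=2)
    words = np.argmin(dists, axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(np.float32)
    return hist / (np.linalg.norm(hist) + 1e-12)

def is_loop_closure(hist_a, hist_b, threshold=0.8):
    """Flag a loop closure when the cosine similarity of two BoW histograms is high."""
    return float(np.dot(hist_a, hist_b)) >= threshold
```

In a full pipeline each new frame's histogram would be compared against the histograms of previously visited places, and matches above the threshold would be passed to a geometric verification step before being accepted as loop closures.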

2023

Comparative Study of Semantic Segmentation Methods in Harbour Infrastructures

Authors
Nunes, A; Gaspar, AR; Matos, A;

Publication
OCEANS 2023 - LIMERICK

Abstract
Nowadays, semantic segmentation of underwater images is crucial, as its results can be used in various applications such as manipulation and, most importantly, semantic mapping of the environment. In this way, the structure of the scene observed by the robot can be recovered and, at the same time, the robot can identify the class of the objects it sees and choose its next action during the mission. However, semantic segmentation using cameras in underwater environments is a non-trivial task: it depends on the quality of the acquired images (which changes over time due to various factors), on the diversity of objects and structures that may be inspected during the mission, and on the quality of the training performed prior to evaluation, since poor training leads to incorrect estimation of the object class or poor delineation of the object. Therefore, in this paper, a comparative study of suitable modern semantic segmentation algorithms is conducted to determine whether they can be used in underwater scenarios. It is increasingly important to equip robots with the ability to inspect port facilities and to recognise most of the objects and artificial structures found there, as this scenario is of particular interest due to their large variety. For this purpose, the most suitable dataset available online, i.e., the one closest to the intended context, was selected. Several parameters and different conditions were considered to perform a complete evaluation, and some limitations and improvements are described. The SegNet model shows the best overall accuracy, reaching more than 80%, but some classes such as robots and plants degrade performance in terms of mean accuracy and mean IoU.
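For reference, the evaluation metrics cited above (overall accuracy, mean per-class accuracy and mean IoU) can be computed from a pixel-level confusion matrix as in this hypothetical NumPy sketch; the function names are illustrative and this is not the evaluation code used in the paper.

```python
# Hypothetical sketch of the segmentation metrics mentioned above (NumPy only).
import numpy as np

def confusion_matrix(gt, pred, num_classes):
    """Accumulate a num_classes x num_classes confusion matrix from label maps."""
    mask = (gt >= 0) & (gt < num_classes)
    idx = num_classes * gt[mask].astype(int) + pred[mask].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def segmentation_metrics(conf):
    """Return overall accuracy, mean per-class accuracy and mean IoU."""
    tp = np.diag(conf).astype(np.float64)
    gt_total = conf.sum(axis=1)      # pixels belonging to each ground-truth class
    pred_total = conf.sum(axis=0)    # pixels predicted as each class
    overall_acc = tp.sum() / conf.sum()
    class_acc = tp / np.maximum(gt_total, 1)
    iou = tp / np.maximum(gt_total + pred_total - tp, 1)
    return overall_acc, class_acc.mean(), iou.mean()
```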

2023

Visual Place Recognition for Harbour Infrastructures Inspection

Authors
Gaspar, AR; Nunes, A; Matos, A;

Publication
OCEANS 2023 - LIMERICK

Abstract
Harbour infrastructures contain structures that require regular inspection. However, the nature of this environment presents a number of challenges when it comes to determining an accurate vehicle position and, consequently, performing successful image similarity detection. In addition, the underwater environment is highly dynamic, which makes place recognition harder because the appearance of a place can change over time. Visual sensors have a major impact in these close-range operations. Some factors degrade the quality of the captured images, but image preprocessing steps are increasingly used to mitigate them. Therefore, in this paper, a purely visual similarity detection approach combined with an enhancement technique is proposed to overcome the inherent perceptual problems of a port scenario. Considering the lack of available data in this context, and to facilitate the variation of environmental parameters, a harbour scenario was simulated using the Stonefish simulator. The experiments were performed on predefined trajectories containing the poor visibility conditions typical of these scenarios. The place recognition approach improves performance by up to 10% compared to the results obtained with the captured images. In general, it provides a good balance in coping with turbidity and light incidence at low computational cost and achieves a performance of about 80%.
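The enhancement technique is not detailed here; as one hedged illustration of such a preprocessing step, the sketch below applies contrast-limited adaptive histogram equalisation (CLAHE) to the lightness channel of each frame before similarity detection. The parameters and the function name are assumptions rather than the method evaluated in the paper.

```python
# Hypothetical pre-processing sketch: CLAHE on the L channel of a turbid underwater frame.
import cv2

def enhance_underwater_frame(image_bgr, clip_limit=2.0, tile_grid=(8, 8)):
    """Boost local contrast (e.g. under turbidity) before computing image similarity."""
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    l_eq = clahe.apply(l)  # equalise only the lightness channel, leave colour untouched
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)

# Example usage: enhanced = enhance_underwater_frame(cv2.imread("frame.png"))
```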

2023

Improving Semantic Segmentation Performance in Underwater Images

Authors
Nunes, A; Matos, A;

Publication
JOURNAL OF MARINE SCIENCE AND ENGINEERING

Abstract
Nowadays, semantic segmentation is used increasingly often in exploration by underwater robots. For example, it is used in autonomous navigation so that the robot can recognise the elements of its environment during the mission and avoid collisions. Other applications include the search for archaeological artefacts, the inspection of underwater structures and species monitoring. Therefore, it is necessary to improve the performance of these tasks as much as possible. To this end, we compare some methods for image quality improvement and data augmentation and test whether higher performance metrics can be achieved with both strategies. The experiments are performed with the SegNet implementation and the SUIM dataset with eight common underwater classes, so that the obtained results can be compared with the already known ones. The results show that both strategies are beneficial and lead to better performance, achieving a mean IoU of 56% and an increased overall accuracy of 81.8%. The per-class results show five classes with an IoU value close to 60% and only one class with an IoU value below 30%, which is a more reliable result and easier to use in real contexts.
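As a hedged sketch of the data-augmentation strategy mentioned in the abstract, the snippet below applies the same random geometric transform to an image and its segmentation mask so that the labels stay aligned; the specific transforms and probabilities are illustrative assumptions, not those used in the paper.

```python
# Hypothetical paired augmentation sketch for segmentation training (NumPy only).
import numpy as np

rng = np.random.default_rng(seed=0)

def augment_pair(image, mask):
    """Apply the same random flip/rotation to an image and its label mask."""
    if rng.random() < 0.5:              # horizontal flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    k = int(rng.integers(0, 4))         # rotate by 0, 90, 180 or 270 degrees
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if rng.random() < 0.5:              # brightness jitter applied to the image only
        gain = float(rng.uniform(0.8, 1.2))
        image = np.clip(image.astype(np.float32) * gain, 0, 255).astype(np.uint8)
    return np.ascontiguousarray(image), np.ascontiguousarray(mask)
```

Keeping the geometric transforms identical for image and mask preserves pixel-level label alignment, while photometric changes such as brightness jitter are applied to the image alone.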