2024
Authors
Guedes, PA; Silva, HM; Wang, S; Martins, A; Almeida, J; Silva, E;
Publication
JOURNAL OF MARINE SCIENCE AND ENGINEERING
Abstract
This paper introduces an advanced acoustic imaging system leveraging multibeam water column data at various frequencies to detect and classify marine litter. This study encompasses (i) the acquisition of test tank data for diverse types of marine litter at multiple acoustic frequencies; (ii) the creation of a comprehensive acoustic image dataset with meticulous labelling and formatting; (iii) the implementation of sophisticated classification algorithms, namely support vector machine (SVM) and convolutional neural network (CNN), alongside cutting-edge detection algorithms based on transfer learning, including single-shot multibox detector (SSD) and You Only Look Once (YOLO), specifically YOLOv8. The findings reveal discrimination between different classes of marine litter across the implemented algorithms for both detection and classification. Furthermore, cross-frequency studies were conducted to assess model generalisation, evaluating the performance of models trained on one acoustic frequency when tested with acoustic images based on different frequencies. This approach underscores the potential of multibeam data in the detection and classification of marine litter in the water column, paving the way for developing novel research methods in real-life environments.
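A minimal sketch of the detection branch described above: fine-tuning a pretrained YOLOv8 model on labelled water-column acoustic images via transfer learning, then evaluating it on images from a different acoustic frequency to probe cross-frequency generalisation. The dataset descriptor files, class names, and training parameters below are illustrative assumptions, not the authors' actual configuration.

```python
# Sketch: transfer learning a YOLOv8 detector on acoustic water-column images.
from ultralytics import YOLO

# Start from COCO-pretrained weights; "wci_litter.yaml" is an assumed dataset
# descriptor listing image/label paths and marine-litter class names.
model = YOLO("yolov8n.pt")
model.train(data="wci_litter.yaml", epochs=100, imgsz=640)

# Cross-frequency study: validate on images acquired at a different MBES
# frequency (placeholder descriptor) and report mAP@0.5.
metrics = model.val(data="wci_litter_otherfreq.yaml")
print(metrics.box.map50)
```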
2024
Authors
Maravalhas-Silva, J; Silva, H; Lima, AP; Silva, E;
Publication
OCEANS 2024 - SINGAPORE
Abstract
We present a pilot study where spectral unmixing is applied to hyperspectral images captured in a controlled environment with a threefold purpose in mind: validation of our experimental setup, of the data processing pipeline, and of the usage of spectral unmixing algorithms for the aforementioned research avenue. Results from this study show that classical techniques such as VCA and FCLS can be used to distinguish between plastic and nonplastic materials, but struggle significantly to distinguish between spectrally similar plastics, even in the presence of multiple pure pixels.
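To make the unmixing step concrete, the sketch below shows a common way to implement fully constrained least squares (FCLS) abundance estimation, assuming endmember spectra have already been extracted (e.g. by VCA). The sum-to-one constraint is approximated by augmenting the nonnegative least-squares system with a heavily weighted row of ones; array shapes, the delta weight, and the synthetic example are illustrative assumptions, not the authors' implementation.

```python
# Sketch: FCLS abundance estimation for one hyperspectral pixel.
import numpy as np
from scipy.optimize import nnls

def fcls(pixel, endmembers, delta=1e3):
    """Nonnegative, approximately sum-to-one abundances for one pixel spectrum.

    pixel:      (bands,) reflectance vector
    endmembers: (bands, n_endmembers) matrix of endmember spectra
    delta:      weight enforcing the sum-to-one constraint via row augmentation
    """
    A = np.vstack([endmembers, delta * np.ones((1, endmembers.shape[1]))])
    b = np.concatenate([pixel, [delta]])
    abundances, _ = nnls(A, b)
    return abundances

# Synthetic example: 50 spectral bands, 3 endmembers.
rng = np.random.default_rng(0)
E = rng.random((50, 3))
x = E @ np.array([0.6, 0.3, 0.1])
print(fcls(x, E))  # expected to be close to [0.6, 0.3, 0.1]
```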
2024
Authors
Guedes, PA; Silva, H; Wang, S; Martins, A; Almeida, JM; Silva, E;
Publication
OCEANS 2024 - SINGAPORE
Abstract
This paper explores the potential use of acoustic imaging and of a multi-frequency multibeam echosounder (MBES) for monitoring marine litter in the water column. The main goal is to build a test and validation setup, using both simulation and an actual experimental setup, to determine whether MBES data can detect marine litter in a water column image (WCI) and whether multi-frequency MBES data allow marine litter debris to be better distinguished and characterised in detection applications. Results using the simulated HoloOcean environment and actual marine litter data revealed the successful detection of objects commonly found in ocean litter hotspots at various ranges and frequencies, enabling the pursuit of novel means of automatic detection and classification in MBES WCI data while using multi-frequency capabilities.
2023
Authors
Riz L.; Caraffa A.; Bortolon M.; Mekhalfi M.L.; Boscaini D.; Moura A.; Antunes J.; Dias A.; Silva H.; Leonidou A.; Constantinides C.; Keleshis C.; Abate D.; Poiesi F.;
Publication
IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
Abstract
We present MONET, a new multimodal dataset captured using a thermal camera mounted on a drone that flew over rural areas, and recorded human and vehicle activities. We captured MONET to study the problem of object localisation and behaviour understanding of targets undergoing large-scale variations and being recorded from different and moving viewpoints. Target activities occur in two different land sites, each with unique scene structures and cluttered backgrounds. MONET consists of approximately 53K images featuring 162K manually annotated bounding boxes. Each image is timestamp-aligned with drone metadata that includes information about attitudes, speed, altitude, and GPS coordinates. MONET is different from previous thermal drone datasets because it features multimodal data, including rural scenes captured with thermal cameras containing both person and vehicle targets, along with trajectory information and metadata. We assessed the difficulty of the dataset in terms of transfer learning between the two sites and evaluated nine object detection algorithms to identify the open challenges associated with this type of data. Project page: https://github.com/fabiopoiesi/monet-dataset.
2007
Authors
Silva, H; Almeida, JM; Lima, L; Martins, A; da Silva, EP;
Publication
RoboCup 2007: Robot Soccer World Cup XI, July 9-10, 2007, Atlanta, GA, USA
Abstract
2007
Authors
Silva, H; Almeida, JM; Lima, L; Martins, A; Silva, EP; Patacho, A;
Publication
COMPUTATIONAL MODELLING OF OBJECTS REPRESENTED IN IMAGES: FUNDAMENTALS, METHODS AND APPLICATIONS
Abstract
This paper proposes a real-time vision architecture for mobile robotics and describes a current implementation characterised by low computational cost, low latency, low power, high modularity, configurability, adaptability, and scalability. A pipeline structure further reduces latency and allows a parallelised hardware implementation. A dedicated hardware vision sensor was developed to take advantage of the proposed architecture. A new method using run-length encoding (RLE) colour transitions allows real-time edge determination at low computational cost. The real-time characteristics and partial hardware implementation, coupled with low energy consumption, address typical autonomous systems applications.
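As an illustration of the RLE colour-transition idea, the sketch below run-length encodes a row of colour-classified pixels and flags positions where the colour label changes as edge candidates. The colour labels and example row are hypothetical and do not reproduce the paper's hardware implementation.

```python
# Sketch: run-length encoding of a row of colour labels and edge candidates
# at colour transitions.
from itertools import groupby

def rle_row(labels):
    """Encode a row of per-pixel colour labels as (label, start, length) runs."""
    runs, start = [], 0
    for label, group in groupby(labels):
        length = len(list(group))
        runs.append((label, start, length))
        start += length
    return runs

def colour_transitions(runs):
    """Return pixel columns where the colour label changes between adjacent runs."""
    return [runs[i + 1][1] for i in range(len(runs) - 1)
            if runs[i][0] != runs[i + 1][0]]

row = ["green"] * 10 + ["white"] * 3 + ["green"] * 7 + ["orange"] * 5
runs = rle_row(row)
print(runs)                      # compact representation of the row
print(colour_transitions(runs))  # candidate edge columns: [10, 13, 20]
```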