Publications

Publications by CRIIS

2024

Spray Quality Assessment on Water-Sensitive Paper Comparing AI and Classical Computer Vision Methods

Authors
Simões, I; Baltazar, AR; Sousa, A; dos Santos, FN;

Publication
Proceedings of the 21st International Conference on Informatics in Control, Automation and Robotics, ICINCO 2024, Porto, Portugal, November 18-20, 2024, Volume 2.

Abstract
Over recent decades, precision agriculture has revolutionized farming by optimizing crop yields and reducing resource use through targeted applications. Existing portable spray quality assessors lack precision, especially in detecting overlapping droplets on water-sensitive paper. This proposal aims to develop a smartphone application that uses the integrated camera to assess spray quality. Two approaches were implemented for segmentation and evaluation of both the water-sensitive paper and the individual droplets: classical computer vision techniques and a pre-trained YOLOv8 deep learning model. Due to the labor-intensive nature of annotating real datasets, a synthetic dataset was created for model training through sim-to-real transfer. Results show YOLOv8 achieves commendable metrics and efficient processing times but struggles with low image resolution and small droplet sizes, scoring an average Intersection over Union of 97.76% for water-sensitive spray segmentation and 60.77% for droplet segmentation. Classical computer vision techniques demonstrate high precision but lower recall with a precision of 36.64% for water-sensitive paper and 90.85% for droplets. This study highlights the potential of advanced computer vision and deep learning in enhancing spray quality assessors, emphasizing the need for ongoing refinement to improve precision agriculture tools. © 2024 by SCITEPRESS-Science and Technology Publications, Lda.
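The segmentation scores in this abstract are Intersection over Union (IoU) values between predicted and ground-truth masks. As an illustrative aside (not code from the paper), the metric can be computed for binary masks like so:

```python
import numpy as np

def mask_iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union between two boolean segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    intersection = np.logical_and(pred, target).sum()
    return intersection / union

# Two overlapping 4x4 "droplet" masks
a = np.zeros((4, 4), dtype=bool); a[0:2, 0:2] = True  # 4 pixels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:3] = True  # 4 pixels
print(mask_iou(a, b))  # 1 shared pixel / 7 total -> ~0.1429
```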

2024

Subsurface Metallic Object Detection Using GPR Data and YOLOv8 Based Image Segmentation

Authors
Branco, D; Coutinho, R; Sousa, A; dos Santos, FN;

Publication
Proceedings of the 21st International Conference on Informatics in Control, Automation and Robotics, ICINCO 2024, Porto, Portugal, November 18-20, 2024, Volume 1.

Abstract
Ground Penetrating Radar (GPR) is a geophysical imaging technique used for the characterization of a subsurface’s electromagnetic properties, allowing for the detection of buried objects. The characterization of an object’s parameters, such as position, depth and radius, is possible by identifying the distinct hyperbolic signature of objects in GPR B-scans. This paper proposes an automated system to detect and characterize the presence of buried objects through the analysis of GPR data, using GPR and computer vision data processing techniques and YOLO segmentation models. A multi-channel encoding strategy was explored when training the models. This consisted of training the models with images where complementing data processing techniques were stored in each image RGB channel, with the aim of maximizing the information. The hyperbola segmentation masks predicted by the trained neural network were related to the mathematical model of the GPR hyperbola, using constrained least squares. The results show that YOLO models trained with multi-channel encoding provide more accurate models. Parameter estimation proved accurate for the object’s position and depth, however, radius estimation proved inaccurate for objects with relatively small radii. © 2024 by SCITEPRESS-Science and Technology Publications, Lda.
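The hyperbolic signature mentioned above comes from the point-scatterer travel-time model. The sketch below is a simplified, unconstrained least-squares fit of that model (the paper itself uses constrained least squares on segmentation-mask pixels; all variable names here are illustrative):

```python
import numpy as np

# Point-scatterer model: travel time t(x) = (2/v) * sqrt((x - x0)**2 + d**2),
# where x0 is the object's horizontal position, d its depth, v the wave velocity.
# Squaring gives t^2 = a*x^2 + b*x + c, a plain linear system in (a, b, c).

def fit_hyperbola(x, t):
    """Recover (x0, d, v) from hyperbola samples via least squares on t^2."""
    A = np.column_stack([x**2, x, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(A, t**2, rcond=None)[0]
    v = 2.0 / np.sqrt(a)      # requires a > 0 (opening-upward hyperbola)
    x0 = -b / (2.0 * a)
    d = np.sqrt(c / a - x0**2)
    return x0, d, v

# Synthetic check: object at x0 = 1.5 m, depth 0.8 m, v = 0.1 m/ns
x = np.linspace(0.0, 3.0, 50)
t = (2.0 / 0.1) * np.sqrt((x - 1.5)**2 + 0.8**2)
print(fit_hyperbola(x, t))  # ≈ (1.5, 0.8, 0.1)
```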

2024

AIMSM - A Mechanism to Optimize Systems with Multiple AI Models: A Case Study in Computer Vision for Autonomous Mobile Robots

Authors
Ferreira, BG; de Sousa, AJM; Reis, LP; de Sousa, AA; Rodrigues, R; Rossetti, R;

Publication
Progress in Artificial Intelligence - 23rd EPIA Conference on Artificial Intelligence, EPIA 2024, Viana do Castelo, Portugal, September 3-6, 2024, Proceedings, Part III

Abstract
This article proposes the Artificial Intelligence Models Switching Mechanism (AIMSM), a novel approach to optimize system resource utilization by allowing systems to switch AI models during runtime in dynamic environments. Many real-world applications utilize multiple data sources and various AI models for different purposes. In many of those applications, not every AI model has to operate all the time. The AIMSM strategically allows the system to activate and deactivate these models, focusing on system resource optimization. The switching of each AI model can be based on any information, such as context or previous results. In the case study of an autonomous mobile robot performing computer vision tasks, the AIMSM helps the system achieve a significant increase in performance, with a 50% average increase in frames per second (FPS) for this specific case study, assuming that no erroneous switching occurred. Experimental results have demonstrated that the AIMSM, when properly implemented, can improve system resource utilization efficiency, optimize overall resource consumption, and enhance system performance. The AIMSM presented itself as a better alternative to permanently loading all the models simultaneously, improving the adaptability and functionality of the systems. It is expected that using the AIMSM will yield a performance improvement that is particularly relevant to systems with multiple complex AI models that do not all need to be continuously executed, or to systems that benefit from lower resource usage. Code is available at https://github.com/BrunoGeorgevich/AIMSM. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
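The core idea of context-driven model switching can be sketched in a few lines. This is a minimal illustration, not the AIMSM implementation (see the linked repository for that); the loader names and stand-in models are hypothetical:

```python
from typing import Callable, Dict, Optional

class ModelSwitcher:
    """Minimal sketch of runtime AI-model switching: only the model needed
    for the current context is kept loaded; the rest stay unloaded."""

    def __init__(self, loaders: Dict[str, Callable[[], Callable]]):
        self._loaders = loaders  # context name -> factory that loads a model
        self._active_name: Optional[str] = None
        self._active_model: Optional[Callable] = None

    def infer(self, context: str, frame):
        if context != self._active_name:  # switch: drop old model, load new
            self._active_model = self._loaders[context]()
            self._active_name = context
        return self._active_model(frame)

# Hypothetical stand-ins for heavyweight vision models
switcher = ModelSwitcher({
    "corridor": lambda: (lambda frame: f"detect-people:{frame}"),
    "dock":     lambda: (lambda frame: f"read-qr:{frame}"),
})
print(switcher.infer("corridor", "img0"))  # detect-people:img0
print(switcher.infer("dock", "img1"))      # read-qr:img1
```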

2024

JEMA: A Joint Embedding Framework for Scalable Co-Learning with Multimodal Alignment

Authors
Sousa, J; Darabi, R; Sousa, A; Brueckner, F; Reis, LP; Reis, A;

Publication
CoRR

Abstract

2024

Hierarchical Reinforcement Learning and Evolution Strategies for Cooperative Robotic Soccer

Authors
Santos, B; Cardoso, A; Ledo, G; Reis, LP; Sousa, A;

Publication
2024 7TH IBERIAN ROBOTICS CONFERENCE, ROBOT 2024

Abstract
Artificial Intelligence (AI) and Machine Learning are frequently used to develop player skills in robotic soccer scenarios. Despite the potential of deep reinforcement learning, its computational demands pose challenges when learning complex behaviors. This work explores less demanding methods, namely Evolution Strategies (ES) and Hierarchical Reinforcement Learning (HRL), for enhancing coordination and cooperation between two agents from the FC Portugal 3D Simulation Soccer Team, in RoboCup. The goal is for two robots to learn a high-level skill that enables a robot to pass the ball to its teammate as quickly as possible. Results show that the trained models under-performed in a traditional robotic soccer two-agent task and scored perfectly in a much simpler one. Therefore, this work highlights that while these alternative methods can learn trivial cooperative behavior, more complex tasks are difficult to learn.
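Evolution Strategies, one of the two methods compared above, optimize a policy's parameters without gradients by mutating them and keeping improvements. A toy (1+1)-ES sketch, not the paper's training setup; the objective here is an illustrative stand-in:

```python
import random

# (1+1)-Evolution Strategy sketch: mutate a parameter vector, keep the
# mutant only if it scores better -- a gradient-free way to train a skill.

def es_minimize(f, theta, sigma=0.1, iters=500, seed=0):
    rng = random.Random(seed)
    best = f(theta)
    for _ in range(iters):
        cand = [t + rng.gauss(0.0, sigma) for t in theta]
        score = f(cand)
        if score < best:  # accept only improving mutations
            theta, best = cand, score
    return theta, best

# Toy quadratic objective standing in for a pass-completion-time cost
sphere = lambda v: sum(x * x for x in v)
theta, best = es_minimize(sphere, [1.0, -2.0])
print(best)  # small value near 0
```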

2024

Using Deep Learning for 2D Primitive Perception with a Noisy Robotic LiDAR

Authors
Brito, A; Sousa, P; Couto, A; Leao, G; Reis, LP; Sousa, A;

Publication
2024 7TH IBERIAN ROBOTICS CONFERENCE, ROBOT 2024

Abstract
Effective navigation in mobile robotics relies on precise environmental mapping, including the detection of complex objects as geometric primitives. This work introduces a deep learning model that determines the pose, type, and dimensions of 2D primitives using a mobile robot equipped with a noisy LiDAR sensor. Simulated experiments conducted in Webots involved randomly placed primitives, with the robot capturing point clouds which were used to progressively build a map of the environment. Two mapping techniques were considered, a deterministic and probabilistic (Bayesian) mapping, and different levels of noise for the LiDAR were compared. The maps were used as input to a YOLOv5 network that detected the position and type of the primitives. A cropped image of each primitive was then fed to a Convolutional Neural Network (CNN) that determined the dimensions and orientation of a given primitive. Results show that the primitive classification achieved an accuracy of 95% in low noise, dropping to 85% under higher noise conditions, while the prediction of the shapes' dimensions had error rates from 5% to 12%, as the noise increased. The probabilistic mapping approach improved accuracy by 10-15% compared to deterministic methods, showcasing robustness to noise levels up to 0.1. Therefore, these findings highlight the effectiveness of probabilistic mapping in enhancing detection accuracy for mobile robot perception in noisy environments.
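The probabilistic mapping that the abstract credits with a 10-15% accuracy gain is commonly realized as a log-odds Bayesian occupancy update. A minimal per-cell sketch, with an assumed sensor model (the 0.7/0.4 probabilities are illustrative, not from the paper):

```python
import math

# Log-odds Bayesian occupancy update: each cell accumulates evidence from
# repeated noisy LiDAR hits/misses instead of being overwritten outright.

def logit(p):
    return math.log(p / (1.0 - p))

L_HIT, L_MISS = logit(0.7), logit(0.4)  # assumed inverse sensor model

def update(l, hit: bool):
    """One Bayesian update of a cell's log-odds occupancy."""
    return l + (L_HIT if hit else L_MISS)

def prob(l):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

l = 0.0  # prior: p = 0.5
for hit in [True, True, False, True]:  # three hits, one spurious miss
    l = update(l, hit)
print(round(prob(l), 3))  # high occupancy despite the noisy miss
```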
