
Publications by Vitor Manuel Filipe

2024

Assessing Soil Ripping Depth for Precision Forestry with a Cost-Effective Contactless Sensing System

Authors
da Silva, DQ; Louro, F; dos Santos, FN; Filipe, V; Sousa, AJ; Cunha, M; Carvalho, JL;

Publication
ROBOT 2023: SIXTH IBERIAN ROBOTICS CONFERENCE, VOL 2

Abstract
Forest soil ripping is a practice that involves revolving the soil in a forest area to prepare it for planting or sowing operations. Advanced sensing systems may help in this kind of forestry operation to assure ideal ripping depth and intensity, as these are important aspects with the potential to minimise the environmental impact of forest soil ripping. In this work, a cost-effective contactless system - capable of detecting and mapping soil ripping depth in real-time - was developed and tested in the laboratory and in a realistic forest scenario. The proposed system integrates two single-point LiDARs and a GNSS sensor. To evaluate the system, ground-truth data was manually collected in the field during the operation of the machine with a ripping implement. The proposed solution was tested in real conditions, and the results showed that the ripping depth was estimated with minimal error. The accuracy and ripping-depth mapping ability of the low-cost sensors justify their use to support improved soil preparation with machines or robots toward a sustainable forest industry.
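The geometry behind a two-LiDAR depth estimate can be sketched as follows. This is an illustrative assumption, not the paper's published algorithm: it supposes one single-point LiDAR aims at undisturbed ground and the other at the furrow, both mounted at the same height, so depth is the difference of the two range readings.

```python
def ripping_depth(ground_range_m: float, furrow_range_m: float) -> float:
    """Estimate ripping depth (m) as the difference between the range
    measured over the furrow and the range measured over undisturbed
    ground. Assumes both single-point LiDARs share a mounting height."""
    depth = furrow_range_m - ground_range_m
    # A negative difference indicates sensor noise; clamp it to zero.
    return round(max(depth, 0.0), 3)

# Example: ground LiDAR reads 0.80 m, furrow LiDAR reads 1.15 m
print(ripping_depth(0.80, 1.15))  # → 0.35
```

With GNSS positions attached to each reading, a sequence of such estimates yields the real-time ripping-depth map described in the abstract.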

2024

Fusion of Time-of-Flight Based Sensors with Monocular Cameras for a Robotic Person Follower

Authors
Sarmento, J; dos Santos, FN; Aguiar, AS; Filipe, V; Valente, A;

Publication
JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS

Abstract
Human-robot collaboration (HRC) is becoming increasingly important in advanced production systems, such as those used in industry and agriculture. This type of collaboration can contribute to increased productivity by reducing physical strain on humans, which can lead to fewer injuries and improved morale. One crucial aspect of HRC is the ability of the robot to follow a specific human operator safely. To address this challenge, a novel methodology is proposed that employs monocular vision and ultra-wideband (UWB) transceivers to determine the relative position of a human target with respect to the robot. UWB transceivers can track humans carrying a transceiver but exhibit a significant angular error. To reduce this error, monocular cameras with Deep Learning object detection are used to detect humans. The reduction in angular error is achieved through sensor fusion, combining the outputs of both sensors using a histogram-based filter. This filter projects and intersects the measurements from both sources onto a 2D grid. By combining UWB and monocular vision, a remarkable 66.67% reduction in angular error compared to UWB localization alone is achieved. This approach demonstrates an average processing time of 0.0183 s and an average localization error of 0.14 m when tracking a person walking at an average speed of 0.21 m/s. This novel algorithm holds promise for enabling efficient and safe human-robot collaboration, providing a valuable contribution to the field of robotics.
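The histogram-based intersection of a wide UWB bearing estimate with a narrow camera bearing estimate can be sketched roughly as below. This is a minimal one-dimensional illustration under assumed Gaussian bin likelihoods and assumed noise levels, not the paper's actual 2D grid filter:

```python
import numpy as np

def fuse_uwb_camera(uwb_range_m, uwb_bearing_deg, cam_bearing_deg,
                    sigma_uwb_deg=30.0, sigma_cam_deg=5.0):
    """Intersect the angular likelihoods of UWB (wide uncertainty) and
    monocular detection (narrow uncertainty) over a 1-degree bearing grid;
    the UWB range is kept as the radial coordinate."""
    bearings = np.arange(-90.0, 90.0, 1.0)  # 1-degree bins
    p_uwb = np.exp(-0.5 * ((bearings - uwb_bearing_deg) / sigma_uwb_deg) ** 2)
    p_cam = np.exp(-0.5 * ((bearings - cam_bearing_deg) / sigma_cam_deg) ** 2)
    fused = p_uwb * p_cam  # bin-wise intersection of both measurements
    return uwb_range_m, float(bearings[np.argmax(fused)])
```

Because the camera likelihood is much narrower, the fused bearing is pulled toward the vision estimate, which is how the angular-error reduction arises.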

2024

Pest Detection in Olive Groves Using YOLOv7 and YOLOv8 Models

Authors
Alves, A; Pereira, J; Khanal, S; Morais, AJ; Filipe, V;

Publication
OPTIMIZATION, LEARNING ALGORITHMS AND APPLICATIONS, PT II, OL2A 2023

Abstract
Modern agriculture faces important challenges in sustainably feeding the planet's fast-growing population. One of the most important of these challenges is the increasing destruction caused by pests to important crops. It is very important to control and manage pests in order to reduce the losses they cause. However, pest detection and monitoring are very resource-consuming tasks. The recent development of computer vision-based technology has made it possible to automate pest detection efficiently. In Mediterranean olive groves, the olive fly (Bactrocera oleae Rossi) is considered the key pest of the crop. This paper presents olive fly detection using the lightweight YOLO-based models for versions 7 and 8, respectively YOLOv7-tiny and YOLOv8n. The proposed object detection models were trained, validated, and tested using two different image datasets collected in various locations of Portugal and Greece. The images consist of yellow sticky trap photos and McPhail trap photos containing olive fly specimens. The performance of the models was evaluated using precision, recall, mAP.50, and mAP.95. The YOLOv7-tiny model's best performance is 88.3% precision, 85% recall, 90% mAP.50, and 53% mAP.95. The YOLOv8n model's best performance is 85% precision, 85% recall, 90% mAP.50, and 55% mAP.95. The YOLOv8n model achieved worse results than YOLOv7-tiny for a dataset without negative images (images without olive fly specimens). Aiming at installing an experimental prototype in the olive grove, the YOLOv8n model was implemented on a Raspberry Pi 3 microcomputer running Ubuntu Server 23.04.
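The precision and recall figures reported above reduce to simple counts of true positives, false positives, and false negatives. A plain counting sketch (not the mAP integration itself, which additionally averages precision over recall and IoU thresholds):

```python
def detection_metrics(tp: int, fp: int, fn: int):
    """Precision and recall from detection counts:
    precision = TP / (TP + FP), recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Illustrative counts only: 85 correct detections, 15 false alarms,
# 15 missed flies yield the 85%/85% precision/recall reported for YOLOv8n.
print(detection_metrics(85, 15, 15))  # → (0.85, 0.85)
```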

2023

THE IMPACT OF PERCEIVED CHALLENGE ON NARRATIVE IMMERSION IN RPG VIDEO GAMES: A PRELIMINARY STUDY

Authors
Domingues, JM; Filipe, V; Luz, F; Carita, A;

Publication
Proceedings of the International Conferences on Interfaces and Human Computer Interaction 2023, IHCI 2023; Computer Graphics, Visualization, Computer Vision and Image Processing 2023, CGVCVIP 2023; and Game and Entertainment Technologies 2023, GET 2023

Abstract
Challenge is a fundamental aspect of almost every gameplay experience, and immersion is one of the most widely recognized concepts in the video game industry. Since this is currently a work in progress, this study aims to conduct preliminary research into how a player's perceived level of challenge affects narrative immersion during gameplay in the role-playing game (RPG) genre. This study will outline the procedures that will be undertaken, including the utilization of the Challenge Originating from Recent Gameplay Interaction Scale (CORGIS) instrument and a questionnaire to measure player immersion. These instruments will enable the assessment of the impact of the perceived challenge on narrative immersion in each use case.

2023

Deep Learning-Based Tree Stem Segmentation for Robotic Eucalyptus Selective Thinning Operations

Authors
da Silva, DQ; Rodrigues, TF; Sousa, AJ; dos Santos, FN; Filipe, V;

Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE, EPIA 2023, PT II

Abstract
Selective thinning is a crucial operation to reduce forest ignitable material, control the eucalyptus species, and maximise its profitability. The selection and removal of less vigorous stems allows the remaining stems to grow healthier, without competition for water, sunlight, and nutrients. This operation is traditionally performed by a human operator and is time-intensive. This work simplifies selective thinning by removing the stem selection part from the human operator's side using a computer vision algorithm. For this, two distinct datasets of eucalyptus stems (with and without foliage) were built and manually annotated, and three Deep Learning object detectors (YOLOv5, YOLOv7 and YOLOv8) were tested on real context images to perform instance segmentation. YOLOv8 was the best at this task, achieving an Average Precision of 74% and 66% on non-leafy and leafy test datasets, respectively. A computer vision algorithm for automatic stem selection was developed based on the YOLOv8 segmentation output. The algorithm achieved a Precision above 97% and an 81% Recall. The findings of this work can have a positive impact on future developments for automating selective thinning in forested contexts.
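A stem-selection step downstream of instance segmentation might look like the sketch below. This is a hypothetical rule, not the paper's published criterion: it assumes segmented mask area serves as a proxy for stem vigour and marks the smaller stems for removal.

```python
def select_stems_to_thin(stem_areas_px, keep_ratio=0.5):
    """Hypothetical selection rule: rank detected stems by segmented mask
    area (assumed proxy for vigour) and mark the smallest for removal.
    Returns the indices of stems proposed for thinning."""
    n_keep = max(1, int(len(stem_areas_px) * keep_ratio))
    # Indices sorted by area, largest (most vigorous) first.
    order = sorted(range(len(stem_areas_px)),
                   key=lambda i: stem_areas_px[i], reverse=True)
    keep = set(order[:n_keep])
    return [i for i in range(len(stem_areas_px)) if i not in keep]

# Four stems with mask areas in pixels; the two smallest are flagged.
print(select_stems_to_thin([500, 120, 300, 80]))  # → [1, 3]
```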

2023

STREET LIGHT SEGMENTATION IN SATELLITE IMAGES USING DEEP LEARNING

Authors
Teixeira, AC; Carneiro, G; Filipe, V; Cunha, A; Sousa, JJ;

Publication
IGARSS 2023 - 2023 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM

Abstract
Public lighting plays a very important role in society's safety and quality of life. Identifying faults in public lighting is essential for maintenance and safety. Traditionally, this task depends on human action, through daytime inspections, which represent expense and wasted energy. Automatic detection with deep learning is an innovative solution that can be explored for locating and identifying this kind of problem. In this study, we present a first approach, composed of several steps, to obtain the segmentation of public lighting, using Seville (Spain) as a case study. A dataset called NLight was created from a nighttime image taken by the JL1-3B satellite, and four U-Net and FPN models were trained with different backbones to segment part of NLight. The U-Net with InceptionResNetv2 proved to be the model with the best performance, correctly locating 761 of 815 lamps (93.4%). This model was used to predict the segmentation of the remaining dataset. This study provides the location of lamps so that patterns and possible lighting failures can be identified in the future.
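A "correct location" count such as 761 of 815 is typically obtained by matching predicted lamp positions against ground truth within a distance tolerance. The sketch below illustrates one such greedy matching scheme under an assumed pixel tolerance; the paper's actual matching criterion is not specified here:

```python
import math

def count_correct_locations(pred_points, true_points, tol_px=5.0):
    """Greedy matching sketch: a predicted lamp location counts as
    correct if it lies within tol_px pixels of a not-yet-matched
    ground-truth lamp. Each ground-truth lamp is matched at most once."""
    unmatched = list(true_points)
    correct = 0
    for px, py in pred_points:
        for j, (tx, ty) in enumerate(unmatched):
            if math.hypot(px - tx, py - ty) <= tol_px:
                correct += 1
                unmatched.pop(j)  # consume this ground-truth lamp
                break
    return correct

# Two of three predictions fall near a ground-truth lamp.
print(count_correct_locations([(0, 0), (10, 10), (100, 100)],
                              [(1, 1), (11, 9)]))  # → 2
```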
