Publications

Publications by CRIIS

2023

STREET LIGHT SEGMENTATION IN SATELLITE IMAGES USING DEEP LEARNING

Authors
Teixeira, AC; Carneiro, G; Filipe, V; Cunha, A; Sousa, JJ;

Publication
IGARSS 2023 - 2023 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM

Abstract
Public lighting plays a very important role in society's safety and quality of life. Identifying faults in public lighting is essential for maintenance and the prevention of safety problems. Traditionally, this task depends on human action, through daytime inspections, which represents both expense and wasted energy. Automatic detection with deep learning is an innovative solution that can be explored for locating and identifying this kind of problem. In this study, we present a first approach, composed of several steps, aimed at obtaining the segmentation of public lighting, using Seville (Spain) as a case study. A dataset called NLight was created from a nighttime image taken by the JL1-3B satellite, and four U-Net and FPN architectures with different backbones were trained to segment part of NLight. The U-Net with an InceptionResNetV2 backbone proved to be the best-performing model, correctly locating 761 of 815 lamps (93.4%). This model was used to predict the segmentation of the remaining dataset. This study provides the locations of lamps so that patterns and possible lighting failures can be identified in the future.
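The lamp-location evaluation quoted above (761 of 815 correct locations) implies counting discrete lamps in the predicted segmentation mask. A minimal sketch of that step, assuming each 4-connected blob in the binary mask is one lamp; this is an illustrative stand-in, not the authors' code:

```python
from collections import deque

def count_lamp_locations(mask):
    """Count 4-connected blobs of 1s in a binary mask (list of lists).
    Each blob is treated as one detected street light."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1                      # new blob found
                q = deque([(r, c)])
                seen[r][c] = True
                while q:                        # flood-fill the blob
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return count

demo_mask = [
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 1, 0, 1],
]
print(count_lamp_locations(demo_mask))  # → 3
```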

2023

EVALUATING YOLO MODELS FOR GRAPE MOTH DETECTION IN INSECT TRAPS

Authors
Teixeira, AC; Carneiro, G; Morais, R; Sousa, JJ; Cunha, A;

Publication
IGARSS 2023 - 2023 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM

Abstract
The grape moth is a common pest that affects grapevines by consuming both fruit and foliage, rendering grapes deformed and unsellable. Integrated pest management for the grape moth relies heavily on pheromone traps, which serve a crucial function by identifying and tracking adult moth populations. This information is then used to determine the most appropriate time and method for implementing other control techniques. This study aims to find the best method for detecting small insects: we evaluate the recent YOLO models v5, v6, v7, and v8 for detecting and counting grape moths in insect traps. The best performance was achieved by YOLOv8, with an average precision of 92.4% and a counting error of 8.1%.
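The two metrics reported above can be sketched in a few lines: detections are usually matched to ground truth by intersection over union (IoU), and the counting error is the relative difference between predicted and true counts. An illustrative Python sketch, not the authors' evaluation code; the box format and example numbers are assumptions:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def counting_error(n_pred, n_true):
    """Relative counting error between predicted and true trap counts."""
    return abs(n_pred - n_true) / n_true

# A detection counts as correct when it overlaps a ground-truth box enough.
pred = (0, 0, 2, 2)
truth = (1, 1, 3, 3)
print(iou(pred, truth))        # → 0.14285714... (1 / 7)
print(counting_error(54, 50))  # → 0.08, i.e. an 8% counting error
```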

2023

TRANSFER-LEARNING ON LAND USE AND LAND COVER CLASSIFICATION

Authors
Carneiro, G; Teixeira, A; Cunha, A; Sousa, J;

Publication
IGARSS 2023 - 2023 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM

Abstract
In this study, we evaluated the use of small pre-trained 3D Convolutional Neural Networks (CNNs) for sliding-window-based land use and land cover (LULC) classification. We pre-trained the small models on a dataset derived from the EuroSAT dataset and evaluated the benefits of transfer learning plus fine-tuning for four different regions using Sentinel-2 L1C imagery (bands with 10 and 20 m spatial resolution), comparing the results of pre-trained models against models trained from scratch. The models achieved F1 scores between 0.69 and 0.80, without significant change when pre-training the model. However, for small datasets, pre-training improved classification by up to 3%.
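The sliding-window classification the study builds on amounts to cutting a multi-band tile into fixed-size patches and classifying each one. A minimal NumPy patch-extraction sketch; the tile size, band count, and stride are made-up illustrative values, not the paper's settings:

```python
import numpy as np

def sliding_windows(image, size, stride):
    """Yield (row, col, patch) square patches for sliding-window
    classification. image: (H, W, bands) array, e.g. stacked
    Sentinel-2 bands resampled to a common resolution."""
    h, w = image.shape[:2]
    for r in range(0, h - size + 1, stride):
        for c in range(0, w - size + 1, stride):
            yield r, c, image[r:r + size, c:c + size]

# Hypothetical 64 x 64 tile with 10 spectral bands.
scene = np.zeros((64, 64, 10), dtype=np.float32)
patches = list(sliding_windows(scene, size=16, stride=16))
print(len(patches))  # → 16, a 4 x 4 grid of non-overlapping patches
```

Each patch (and its position) would then be fed to the CNN, with the predicted class written back at the patch's location to build the LULC map.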

2023

EVALUATING DATA AUGMENTATION FOR GRAPEVINE VARIETIES IDENTIFICATION

Authors
Carneiro, G; Neto, A; Teixeira, A; Cunha, A; Sousa, J;

Publication
IGARSS 2023 - 2023 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM

Abstract
Grapevine variety identification is important in the wine production chain since it is related to wine quality, authenticity, and singularity. In this study, we addressed data augmentation approaches for identifying grape varieties from images acquired in the field. We tested static transformations, RandAugment, and CutMix. Our results showed that the best result was achieved by the static method generating 5 images per sample (F1 = 0.89), though without a significant difference compared with RandAugment generating 2 images. The worst performance was achieved by CutMix (F1 = 0.86).
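Of the three augmentation methods compared, CutMix is the simplest to sketch: a rectangle from one image is pasted into another, and the labels are mixed in proportion to the pasted area. A minimal NumPy illustration of the idea, not the paper's implementation; the area range and image sizes are assumptions:

```python
import numpy as np

def cutmix(img_a, img_b, label_a, label_b, rng=np.random.default_rng(0)):
    """Paste a random rectangle of img_b into img_a and mix the one-hot
    labels by the area actually kept from each image."""
    h, w = img_a.shape[:2]
    lam = rng.uniform(0.3, 0.7)  # target kept-area fraction for img_a
    cut_h = int(h * np.sqrt(1 - lam))
    cut_w = int(w * np.sqrt(1 - lam))
    y = rng.integers(0, h - cut_h + 1)
    x = rng.integers(0, w - cut_w + 1)
    mixed = img_a.copy()
    mixed[y:y + cut_h, x:x + cut_w] = img_b[y:y + cut_h, x:x + cut_w]
    kept = 1 - (cut_h * cut_w) / (h * w)  # exact fraction kept from img_a
    mixed_label = kept * label_a + (1 - kept) * label_b
    return mixed, mixed_label

# Toy images: one all-zeros "variety A", one all-ones "variety B".
a = np.zeros((32, 32, 3))
b = np.ones((32, 32, 3))
one_hot_a = np.array([1.0, 0.0])
one_hot_b = np.array([0.0, 1.0])
mixed_img, mixed_lab = cutmix(a, b, one_hot_a, one_hot_b)
```

The mixed label stays a valid probability distribution, and for these toy images the mean pixel value equals the fraction of the image taken from `b`.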

2023

Computer Vision Based Quality Control for Additive Manufacturing Parts

Authors
Nascimento, R; Martins, I; Dutra, TA; Moreira, L;

Publication
INTERNATIONAL JOURNAL OF ADVANCED MANUFACTURING TECHNOLOGY

Abstract
This work presents a novel methodology for the quality assessment of material extrusion parts through AI-based computer vision. To this end, different techniques are integrated using inspection methods applied to other areas of the additive manufacturing field. The system is divided into four main stages: (1) pre-processing, (2) color analysis, (3) shape analysis, and (4) defect location. The color analysis is performed in the CIELAB color space, and the color distance between the part under analysis and the reference surface is calculated using the CIEDE2000 color difference formula. The shape analysis consists of binarizing the image using the Canny edge detector; the Hu moments are then calculated for images of the part under analysis and compared with those of the reference part. To locate defects, the image of the part is first processed with a median filter, and the filtered image is subtracted from the original. The resulting image is then binarized, and defects are located with a blob detector. In the training phase, a subset of parts was used to evaluate the performance of different methods and to set parameter values. Later, in a testing and validation phase, the performance of the system was evaluated on a different set of parts. The results show that the proposed system is able to classify parts produced by additive manufacturing with an overall accuracy of 86.5% and to locate defects on their surfaces effectively.
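The defect-location stage described above (median filter, subtraction, binarization, blob detection) can be sketched with SciPy on a grayscale image; the filter size and threshold here are illustrative assumptions, not the paper's tuned parameters:

```python
import numpy as np
from scipy import ndimage

def locate_defects(gray, size=3, thresh=0.2):
    """Median-filter the image, subtract the filtered image from the
    original, binarize the absolute difference, and treat each remaining
    blob as one surface defect. Returns (count, blob centers)."""
    smoothed = ndimage.median_filter(gray, size=size)
    diff = np.abs(gray.astype(float) - smoothed.astype(float))
    binary = diff > thresh                     # binarization step
    labels, n_defects = ndimage.label(binary)  # blob detection step
    centers = ndimage.center_of_mass(binary, labels,
                                     range(1, n_defects + 1))
    return n_defects, centers

# Toy part surface: uniform gray with two point defects.
part = np.full((20, 20), 0.5)
part[5, 5] = 1.0    # a bright speck
part[14, 12] = 0.0  # a dark pit
n, centers = locate_defects(part)
print(n)  # → 2
```

The median filter removes small outliers, so the difference image is nonzero only where the surface deviates locally from its surroundings.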

2023

Working on empathy with the use of extended reality scenarios: the Mr. UD project

Authors
Laska-Lesniewicz, A; Kaminska, D; Zwolinski, G; Coelho, L; Raposo, R; Vairinhos, M; Haamer, E;

Publication
INTERNATIONAL JOURNAL OF COMPUTER APPLICATIONS IN TECHNOLOGY

Abstract
Empathy has become a central part of design and is loudly manifested in several frameworks such as universal design, inclusive design, and human-centred design. This paper presents five independent Extended Reality (XR) scenarios that put potential users in the shoes of people with special needs, such as vision impairments, autism spectrum disorder, mobility impairments, pregnancy, and some problems of the elderly. All exercises take place in a supermarket environment; the application was prepared for the Oculus Quest 2 platform and is supported in some cases by tangible equipment (a geriatric suit, a pregnancy belly simulator, a wheelchair). The proposed simulations were validated by experts who evaluated the quality of the proposed tasks and the possibility of simulating the selected limitations in XR. Ongoing development and testing of the XR application will provide further in-depth views on its usefulness, acceptance, and impact in increasing empathy towards the challenges faced by the personas portrayed.
