2023
Authors
Neto, A; Couto, D; Coimbra, MT; Cunha, A;
Publication
VISIGRAPP (4: VISAPP)
Abstract
Colorectal cancer is the third most common cancer and the second leading cause of cancer-related deaths in the world. Colonoscopic surveillance is extremely important for finding cancer precursors such as adenomas or serrated polyps. Identifying small or flat polyps can be challenging during colonoscopy and is highly dependent on the colonoscopist's skills. Deep learning algorithms can improve the polyp detection rate and consequently help reduce physician subjectiveness and operation errors. This study aims to compare the YOLO object detection architecture with self-attention models. The Kvasir-SEG polyp dataset, composed of 1000 annotated colonoscopy still images, was used to train (700 images) and validate (300 images) the polyp detection algorithms. Well-established architectures such as YOLOv4 and different YOLOv5 models were compared with more recent algorithms that rely on self-attention mechanisms, namely the DETR model, to understand which technique can be more helpful and reliable in clinical practice. In the end, YOLOv5 proved to be the model achieving the best polyp detection results, with 0.81 mAP; however, DETR reached 0.80 mAP, showing the potential to match more well-established architectures.
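A rough sketch of the two detector families compared above: YOLOv5 loaded via torch.hub and DETR loaded from Hugging Face transformers. Both checkpoints are generic COCO-pretrained weights and colonoscopy_frame.jpg is a placeholder; neither is the authors' trained model or data.

```python
# Illustrative only: generic pretrained checkpoints, not the paper's models.
import torch
from PIL import Image
from transformers import DetrForObjectDetection, DetrImageProcessor

yolo = torch.hub.load("ultralytics/yolov5", "yolov5s")  # YOLOv5 small model
processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
detr = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

img = Image.open("colonoscopy_frame.jpg")  # hypothetical input frame

# YOLOv5 inference: one row per detection (box, confidence, class).
print(yolo(img).pandas().xyxy[0])

# DETR inference: post-process set predictions into thresholded boxes.
inputs = processor(images=img, return_tensors="pt")
outputs = detr(**inputs)
sizes = torch.tensor([img.size[::-1]])  # (height, width)
detections = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=sizes)[0]
print(detections["boxes"], detections["scores"])
```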
2023
Authors
Teixeira, I; Morais, R; Sousa, JJ; Cunha, A;
Publication
AGRICULTURE-BASEL
Abstract
In recent years, the use of remote sensing data obtained from satellite or unmanned aerial vehicle (UAV) imagery has grown in popularity for agricultural tasks such as yield prediction, soil classification, and crop mapping. The ready availability of information, with improved temporal, radiometric, and spatial resolution, has resulted in the accumulation of vast amounts of data. Meeting the demands of analysing these data requires innovative solutions, and artificial intelligence techniques offer the necessary support. This systematic review aims to evaluate the effectiveness of deep learning techniques for crop classification using remote sensing data from aerial imagery. The reviewed papers focus on a variety of deep learning architectures, including convolutional neural networks (CNNs), long short-term memory networks, transformers, and hybrid CNN-recurrent neural network models, and incorporate techniques such as data augmentation, transfer learning, and multimodal fusion to improve model performance. The review analyses the use of these techniques to boost crop classification accuracy, either by developing new deep learning architectures or by combining various types of remote sensing data. Additionally, it assesses the impact of factors such as spatial and spectral resolution, image annotation, and sample quality on crop classification. Ensembling models or integrating multiple data sources tends to enhance the classification accuracy of deep learning models. Satellite imagery is the most commonly used data source due to its accessibility and typically free availability. The study highlights the need for large amounts of training data and for the incorporation of non-crop classes to enhance accuracy, and provides valuable insights into the current state of deep learning models and datasets for crop classification tasks.
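A minimal sketch of one recurring recipe in the reviewed papers: transfer learning with an ImageNet-pretrained CNN backbone for patch-based crop classification. The class count and freezing strategy below are illustrative assumptions, not settings from any specific reviewed paper.

```python
# Transfer learning sketch with torchvision; all settings are assumed.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 6  # e.g. several crop types plus a non-crop class (assumed)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new classifier head

# Freeze the pretrained backbone and fine-tune only the new head, a common
# strategy when labelled crop samples are scarce.
for param in model.parameters():
    param.requires_grad = False
for param in model.fc.parameters():
    param.requires_grad = True

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# One illustrative training step on a dummy batch of RGB patches.
patches = torch.randn(8, 3, 224, 224)          # stand-in for image patches
labels = torch.randint(0, NUM_CLASSES, (8,))   # stand-in for crop labels
loss = nn.functional.cross_entropy(model(patches), labels)
loss.backward()
optimizer.step()
```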
2022
Authors
Teixeira, AC; Morais, R; Sousa, JJ; Peres, E; Cunha, A;
Publication
CENTERIS/ProjMAN/HCist
Abstract
Insect pests cause significant damage to agricultural production. Smart pest monitoring enables the automatic detection and identification of pests using artificial intelligence techniques. Automatic pest detection is an important tool to help the farmer decide on the application of pesticides. Several studies have developed deep learning methods for detecting insect pests. However, it remains an open problem, as data scarcity and dataset characteristics prevent deep learning methods from performing well. Pest24 is a public dataset with great diversity and variability of insects, but it has a low detection rate. To improve detection performance on Pest24, this work proposes a deep learning method for the automatic detection of insects. Two experiments were carried out, applying YOLOv5 with standard hyperparameters and with hyperparameters tuned by an evolution algorithm. As a result, we obtained a performance superior to that reported in the state of the art with the YOLOv5 method with standard hyperparameters, reaching an mAP of 72.1%.
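The two experiments map onto YOLOv5's training entry point: one run with the repository's default hyperparameters, and one with its built-in genetic hyperparameter evolution (the --evolve option). The sketch below shows the shape of such calls; the dataset config pest24.yaml, image size, epoch count, and number of generations are illustrative assumptions, not the authors' exact settings.

```python
# Equivalent CLI calls from the ultralytics/yolov5 repository:
#   python train.py --data pest24.yaml --weights yolov5s.pt --img 640
#   python train.py --data pest24.yaml --weights yolov5s.pt --img 640 --evolve 300
import train  # train.py, run from a clone of https://github.com/ultralytics/yolov5

# Experiment 1: YOLOv5 with the repository's standard hyperparameters.
train.run(data="pest24.yaml", weights="yolov5s.pt", imgsz=640, epochs=100)

# Experiment 2: genetic hyperparameter evolution for 300 generations; the best
# evolved hyperparameters are written out for a final training run.
train.run(data="pest24.yaml", weights="yolov5s.pt", imgsz=640, evolve=300)
```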
2022
Authors
Teixeira, AC; Morais, R; Sousa, JJ; Peres, E; Cunha, A;
Publication
CENTERIS/ProjMAN/HCist
Abstract
The bedbug and the grape moth are the most significant pests affecting rice crops and vineyards, respectively, causing great damage. However, these pests are only two examples of the many insect pests with great potential to cause significant crop damage. Insect traps are among the most appropriate solutions for monitoring and counting, informing the selection and dosage of the pesticide to be applied for pest control. However, counting and monitoring operations are based on frequent visits by technicians to the site and are supported by inefficient counting methods, making them challenging and time-consuming tasks. This study proposes the automatic counting of bedbugs and grape moths in traps using deep learning algorithms. We use three different datasets: Pest24, Bedbug, and Grape moth. Pest24 is a public dataset with a great diversity of insects. The Bedbug and Grape moth datasets are private datasets provided by mySense, a precision agriculture platform developed and managed by researchers from the University of Trás-os-Montes e Alto Douro (UTAD). First, we trained YOLOv5 on the Pest24 dataset and obtained an mAP of 69.3%. Then, using the weights obtained from the Pest24 dataset, we trained on the Bedbug and Grape moth datasets. The best result for the Bedbug dataset was obtained with YOLOv5 with transfer learning, with an AP of 96.5% and a counting error of 63.3%. For the Grape moth dataset, the best result was obtained with YOLOv5 without transfer learning from Pest24, with an AP of 90.9% and a counting error of 6.7.
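Trap-level counting reduces to counting the boxes a detector returns per image and comparing them with manual counts. The sketch below illustrates that step with a fine-tuned YOLOv5 checkpoint loaded through torch.hub; the weights path, confidence threshold, and manual count are hypothetical placeholders, not values from the study.

```python
# Counting sketch: detections per trap image vs. a manual reference count.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="runs/train/exp/weights/best.pt")  # assumed weights path
model.conf = 0.25  # detection confidence threshold (assumed)

def count_insects(image_path: str) -> int:
    """Return the number of insects detected in one trap image."""
    results = model(image_path)
    return len(results.xyxy[0])  # one row per detected insect

predicted = count_insects("trap_001.jpg")  # hypothetical trap photo
manual = 30                                # hypothetical manual count
relative_error = abs(predicted - manual) / manual * 100
print(f"predicted {predicted}, manual {manual}, error {relative_error:.1f}%")
```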
2023
Authors
Pereira, T; Cunha, A; Oliveira, HP;
Publication
APPLIED SCIENCES-BASEL
Abstract
2023
Authors
Mendes, J; Pereira, T; Silva, F; Frade, J; Morgado, J; Freitas, C; Negrao, E; de Lima, BF; da Silva, MC; Madureira, AJ; Ramos, I; Costa, JL; Hespanhol, V; Cunha, A; Oliveira, HP;
Publication
EXPERT SYSTEMS WITH APPLICATIONS
Abstract
Biomedical engineering has been targeted as a potential research candidate for machine learning applications, with the purpose of detecting or diagnosing pathologies. However, acquiring relevant, high-quality, and heterogeneous medical datasets is challenging due to privacy and security issues and the effort required to annotate the data. Generative models have recently gained growing interest in the computer vision field due to their ability to increase dataset size by generating new high-quality samples from the initial set, which can be used for data augmentation of a training dataset. This study aimed to synthesize artificial lung images from corresponding positional and semantic annotations using two generative adversarial networks and databases of real computed tomography scans: the Pix2Pix approach, which generates lung images from lung segmentation maps; and the conditional generative adversarial network (cGAN) approach, which was implemented with additional semantic labels in the generation process. To evaluate the quality of the generated images, two quantitative measures were used: the domain-specific Fréchet Inception Distance and the Structural Similarity Index. Additionally, an expert assessment was performed to measure the capability to distinguish between real and generated images. The assessment shows the high quality of the synthesized images, which was confirmed by the expert evaluation. This work represents an innovative application of GAN approaches for medical applications, taking into consideration the pathological findings in the CT images and the clinical evaluation to assess the realism of these features in the generated images.
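For reference, the sketch below computes the two quantitative measures named above with off-the-shelf implementations (torchmetrics for FID, scikit-image for SSIM), standing in for the authors' domain-specific FID; the random arrays are placeholders for real and synthesized CT slices.

```python
# FID and SSIM sketch; inputs are stand-ins, not real CT data.
import numpy as np
import torch
from skimage.metrics import structural_similarity
from torchmetrics.image.fid import FrechetInceptionDistance

# FID compares feature statistics of real vs. generated image batches
# (uint8 tensors, shape N x 3 x H x W).
fid = FrechetInceptionDistance(feature=2048)
real = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)
fake = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)
fid.update(real, real=True)
fid.update(fake, real=False)
print("FID:", fid.compute().item())

# SSIM compares one real slice with its synthesized counterpart pixel-wise
# (2-D grayscale arrays in [0, 1]).
real_slice = np.random.rand(512, 512)
fake_slice = np.random.rand(512, 512)
print("SSIM:", structural_similarity(real_slice, fake_slice, data_range=1.0))
```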