
Publications by António Cunha

2023

Preface

Authors
Cunha, A; Garcia, NM; Gómez, JM; Pereira, S;

Publication
Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST

Abstract
[No abstract available]

2024

Clinical Perspectives on the Use of Computer Vision in Glaucoma Screening

Authors
Camara, J; Cunha, A;

Publication
MEDICINA-LITHUANIA

Abstract
Glaucoma is one of the leading causes of irreversible blindness in the world. Early diagnosis and treatment increase the chances of preserving vision. However, despite advances in techniques for the functional and structural assessment of the retina, specialists still encounter many challenges, in part due to the different presentations of the standard optic nerve head (ONH) in the population, the lack of explicit references that define the limits of glaucomatous optic neuropathy (GON), specialist experience, and the quality of patients' responses to some ancillary exams. Computer vision uses deep learning (DL) methodologies, successfully applied to assist in the diagnosis and progression of GON, with the potential to provide objective references for classification, avoiding possible biases in experts' decisions. To this end, studies have used color fundus photographs (CFPs), functional exams such as visual field (VF), and structural exams such as optical coherence tomography (OCT). However, the minimum detection limits of GON characteristics achievable with these methodologies still need to be established. This study analyzes the use of DL methodologies in the various stages of glaucoma screening compared to clinical practice, to reduce the costs of GON assessment and the work carried out by specialists, to improve the speed of diagnosis, and to homogenize opinions. It concludes that the DL methodologies used in automated glaucoma screening can produce more robust results that are closer to clinical reality.

2024

A Comparative Analysis of EfficientNet Architectures for Identifying Anomalies in Endoscopic Images

Authors
Pessoa, CP; Quintanilha, BP; de Almeida, JDS; Braz, G; de Paiva, C; Cunha, A;

Publication
International Conference on Enterprise Information Systems, ICEIS - Proceedings

Abstract
The gastrointestinal tract is part of the digestive system, fundamental to digestion. Digestive problems can be symptoms of chronic illnesses like cancer and should be treated seriously. Endoscopic exams of the tract make detecting these diseases in their initial stages possible, enabling effective treatment. Modern endoscopy has evolved into the Wireless Capsule Endoscopy procedure, in which patients ingest a capsule with a camera. This type of exam usually exports videos up to 8 hours in length, so support systems that help specialists detect and diagnose pathologies in these exams are desirable. This work uses the rarely explored ERS dataset, containing 121,399 labelled images, to evaluate three models from the EfficientNet family of architectures for the binary classification of endoscopic images. The models were evaluated in a 5-fold cross-validation process. In the experiments, the best results were achieved by EfficientNetB0, with an average accuracy of 77.29% and an average F1-score of 84.67%. Copyright © 2024 by SCITEPRESS – Science and Technology Publications, Lda.
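The 5-fold cross-validation protocol described in the abstract can be sketched as follows. This is an illustrative outline, not the paper's actual code: the function names (`kfold_indices`, `cross_validate`) and the `train_fn` interface are hypothetical, and the EfficientNet training step is abstracted behind a callable.

```python
import random

def kfold_indices(n, k=5, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def accuracy_f1(y_true, y_pred):
    """Binary accuracy and F1 score (positive class = 1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, f1

def cross_validate(samples, labels, train_fn, k=5):
    """Run k-fold CV; train_fn(train_x, train_y) returns a predict callable.

    Returns the per-fold (accuracy, F1) pairs, which would then be averaged
    as in the abstract's reported figures.
    """
    folds = kfold_indices(len(samples), k)
    scores = []
    for i in range(k):
        test_idx = set(folds[i])
        train_x = [samples[j] for j in range(len(samples)) if j not in test_idx]
        train_y = [labels[j] for j in range(len(samples)) if j not in test_idx]
        predict = train_fn(train_x, train_y)
        y_true = [labels[j] for j in folds[i]]
        y_pred = [predict(samples[j]) for j in folds[i]]
        scores.append(accuracy_f1(y_true, y_pred))
    return scores
```

In the paper's setting, `train_fn` would fit an EfficientNet model on the training folds and return its inference function; here any classifier with the same interface works.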

2024

Automatic Detection of Polyps Using Deep Learning

Authors
Oliveira, F; Barbosa, D; Paçal, I; Leite, D; Cunha, A;

Publication
Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST

Abstract
Colorectal cancer is a leading health concern worldwide, with late detection being a primary challenge due to its often-asymptomatic nature. Routine examinations like colonoscopies play a pivotal role in early detection. This study harnesses the potential of Deep Learning, specifically convolutional neural networks, in enhancing the accuracy of polyp detection from medical images. Three distinct models, YOLOv5, YOLOv7, and YOLOv8, were trained on the PICCOLO dataset, a comprehensive collection of polyp images. The comparative analysis revealed YOLOv5’s submodel S as the most efficient, achieving an accuracy of 92.2%, a sensitivity of 69%, an F1 score of 74% and a mAP of 76.8%, emphasizing the effectiveness of these networks in polyp detection. © ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2024.
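Detection metrics like the sensitivity, F1 score, and mAP reported above are built on intersection-over-union (IoU) matching between predicted and ground-truth boxes. The following is a minimal sketch of that evaluation step, not the study's code; the greedy matching strategy and the `(x1, y1, x2, y2)` box convention are assumptions.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def match_detections(preds, gts, thr=0.5):
    """Greedily match predictions to ground truth; return (TP, FP, FN).

    A prediction counts as a true positive when its best unmatched
    ground-truth box overlaps with IoU >= thr.
    """
    matched = set()
    tp = 0
    for p in preds:
        best, best_j = 0.0, None
        for j, g in enumerate(gts):
            if j in matched:
                continue
            v = iou(p, g)
            if v > best:
                best, best_j = v, j
        if best >= thr and best_j is not None:
            matched.add(best_j)
            tp += 1
    fp = len(preds) - tp
    fn = len(gts) - tp
    return tp, fp, fn
```

From the TP/FP/FN counts one derives the sensitivity (TP / (TP + FN)) and F1 figures quoted in the abstract; mAP additionally averages precision over confidence thresholds.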

2024

Similarity-Based Explanations for Deep Interpretation of Capsule Endoscopy Images

Authors
Fontes, M; Leite, D; Dallyson, J; Cunha, A;

Publication
Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST

Abstract
Artificial intelligence (AI) is playing a growing role today in several areas, especially in health, where understanding AI models and their predictions is extremely important for health professionals. In this context, Explainable AI (XAI) plays a crucial role in seeking to provide understandable explanations for these models. This article analyzes two different XAI approaches applied to the analysis of gastric endoscopy images. The first, more conventional approach uses Grad-CAM, while the second, less explored but with great potential, is based on “similarity-based explanations”. This example-based XAI technique aims to provide representative examples to support the decisions of AI models. In this study, we compare these two techniques applied to two different models: one based on the VGG16 architecture and the other based on ResNet50, designed to classify images from the KVASIR-capsule database. The results reveal that Grad-CAM provided intuitive explanations only for the VGG16 model, while the “similarity-based explanations” technique provided consistent explanations for both models. We conclude that exploring other XAI techniques can be a significant asset in improving the understanding of the various AI models. © ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2024.
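The core of a similarity-based explanation is retrieving the training examples closest to a query in the model's feature space, so a clinician can see which known cases drove the prediction. A minimal sketch of that retrieval step, assuming features have already been extracted by the classifier's backbone (the function names here are hypothetical, not from the paper):

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def explain_by_examples(query_feat, train_feats, train_labels, k=3):
    """Return the k training examples most similar to the query.

    Each result is (training index, label, similarity); showing the
    corresponding training images alongside the prediction is what
    makes the explanation example-based.
    """
    ranked = sorted(
        range(len(train_feats)),
        key=lambda i: cosine(query_feat, train_feats[i]),
        reverse=True,
    )
    return [(i, train_labels[i], cosine(query_feat, train_feats[i]))
            for i in ranked[:k]]
```

In the setting described above, the feature vectors would come from the penultimate layer of the VGG16 or ResNet50 classifier, which is why the technique transfers across both architectures.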

2024

Automating the Annotation of Medical Images in Capsule Endoscopy Through Convolutional Neural Networks and CBIR

Authors
Fernandes, R; Salgado, M; Paçal, I; Cunha, A;

Publication
Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST

Abstract
This research addresses the significant challenge of automating the annotation of medical images, with a focus on capsule endoscopy videos. The study introduces a novel approach that synergistically combines Deep Learning and Content-Based Image Retrieval (CBIR) techniques to streamline the annotation process. Two pre-trained Convolutional Neural Networks (CNNs), MobileNet and VGG16, were employed to extract and compare visual features from medical images. The methodology underwent rigorous validation using various performance metrics such as accuracy, AUC, precision, and recall. The MobileNet model demonstrated exceptional performance with a test accuracy of 98.4%, an AUC of 99.9%, a precision of 98.2%, and a recall of 98.6%. On the other hand, the VGG16 model achieved a test accuracy of 95.4%, an AUC of 99.2%, a precision of 97.3%, and a recall of 93.5%. These results indicate the high efficacy of the proposed method in the automated annotation of medical images, establishing it as a promising tool for medical applications. The study also highlights potential avenues for future research, including expanding the image retrieval scope to encompass entire endoscopy video databases. © ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2024.
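The CBIR-based annotation idea described above amounts to label transfer: a new frame is labelled by looking up the most visually similar frames that specialists have already annotated. The following sketch illustrates that step under stated assumptions; it uses plain Euclidean distance and a majority vote, and the names (`propagate_label`, `annotated`) are hypothetical, with CNN feature extraction abstracted away.

```python
import math

def euclidean(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def propagate_label(query_feat, annotated, k=5):
    """Assign a label to a query frame by majority vote over its
    k nearest annotated neighbours in feature space.

    annotated: list of (feature_vector, label) pairs, e.g. features
    extracted by MobileNet or VGG16 from frames a specialist labelled.
    """
    nearest = sorted(annotated, key=lambda fl: euclidean(query_feat, fl[0]))[:k]
    votes = {}
    for _, label in nearest:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```

Scaling this lookup to entire endoscopy video databases, as the study suggests for future work, would replace the linear scan with an approximate nearest-neighbour index, but the labelling logic stays the same.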
