Details

  • Name

    Alexandre Henrique Neto
  • Position

    Research Assistant
  • Since

    17 May 2021
  • Nationality

    Portugal
  • Contacts

    +351222094000
    alexandre.h.neto@inesctec.pt
Publications

2023

Gastric cancer detection based on Colorectal Cancer transfer learning

Authors
Nobrega, S; Neto, A; Coimbra, M; Cunha, A;

Publication
2023 IEEE 7TH PORTUGUESE MEETING ON BIOENGINEERING, ENBENG

Abstract
Gastric Cancer (GC) and Colorectal Cancer (CRC) are among the most common cancers in the world. The most common diagnostic methods are upper endoscopy and biopsy. Possible expert distractions can lead to late diagnosis. GC is a less studied malignancy than CRC, leading to scarce public data that hinders the use of AI detection methods, unlike CRC, for which public data are available. Considering that CRC endoscopic images present some similarities with GC, a CRC Transfer Learning approach could be used to improve AI GC detectors. This paper evaluates a novel Transfer Learning approach for real-time GC detection, using a YOLOv4 model pre-trained on CRC detection. The results achieved are promising, since GC detection improved relative to the traditional Transfer Learning strategy.
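The transfer strategy described above, reusing a detector pre-trained on a related cancer domain, essentially amounts to initialising the GC model with the CRC model's weights and fine-tuning. A minimal sketch of that weight-transfer step, using plain dictionaries to stand in for real model state; the layer names and values are hypothetical illustrations, not taken from the paper:

```python
def transfer_weights(pretrained, target, skip_prefix="head."):
    """Copy pre-trained weights into a target model's state,
    skipping layers under skip_prefix (e.g. the detection head,
    which is re-initialised for the new target classes)."""
    transferred = dict(target)  # start from the target's fresh initialisation
    for name, weights in pretrained.items():
        if name in target and not name.startswith(skip_prefix):
            transferred[name] = weights  # reuse features learned on CRC data
    return transferred

# Hypothetical state dicts: a CRC-pretrained detector and a fresh GC model
crc_model = {"backbone.conv1": [0.2, 0.5], "backbone.conv2": [0.1], "head.cls": [0.9]}
gc_model  = {"backbone.conv1": [0.0, 0.0], "backbone.conv2": [0.0], "head.cls": [0.0]}

gc_init = transfer_weights(crc_model, gc_model)
print(gc_init["backbone.conv1"])  # backbone features copied from the CRC model
print(gc_init["head.cls"])        # detection head keeps its fresh initialisation
```

In a real framework the same idea is a partial state-dict load (backbone weights kept, head re-initialised) followed by fine-tuning on the GC dataset.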

2022

Literature Review on Artificial Intelligence Methods for Glaucoma Screening, Segmentation, and Classification

Authors
Camara, J; Neto, A; Pires, IM; Villasana, MV; Zdravevski, E; Cunha, A;

Publication
JOURNAL OF IMAGING

Abstract
Artificial intelligence techniques are now being applied in different medical solutions ranging from disease screening to activity recognition and computer-aided diagnosis. The combination of computer science methods and medical knowledge facilitates and improves the accuracy of the different processes and tools. Inspired by these advances, this paper performs a literature review focused on state-of-the-art glaucoma screening, segmentation, and classification based on images of the papilla and excavation using deep learning techniques. These techniques have been shown to have high sensitivity and specificity in glaucoma screening based on papilla and excavation images. The automatic segmentation of the contours of the optic disc and the excavation then allows the identification and assessment of the glaucomatous disease's progression. Finally, we verified whether deep learning techniques can help perform accurate and low-cost measurements related to glaucoma, which may promote patient empowerment and help medical doctors better monitor patients.

2022

Evaluations of Deep Learning Approaches for Glaucoma Screening Using Retinal Images from Mobile Device

Authors
Neto, A; Camara, J; Cunha, A;

Publication
SENSORS

Abstract
Glaucoma is a silent disease that leads to vision loss or irreversible blindness. Current deep learning methods can help glaucoma screening by extending it to larger populations using retinal images. Low-cost lenses attached to mobile devices can increase the frequency of screening and alert patients earlier for a more thorough evaluation. This work explored and compared the performance of classification and segmentation methods for glaucoma screening with retinal images acquired by both retinography and mobile devices. The goal was to verify the results of these methods and see if similar results could be achieved using images captured by mobile devices. The classification methods used were the Xception, ResNet152 V2, and Inception ResNet V2 models. The models' activation maps were produced and analysed to support the glaucoma classifiers' predictions. In clinical practice, glaucoma assessment is commonly based on the cup-to-disc ratio (CDR) criterion, a frequent indicator used by specialists. For this reason, the U-Net architecture was additionally used, with the Inception ResNet V2 and Inception V3 models as the backbone, to segment and estimate the CDR. For both tasks, the models' performance came close to that of state-of-the-art methods, and the classification method applied to a low-quality private dataset illustrates the advantage of using cheaper lenses.
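The cup-to-disc ratio mentioned above can be estimated directly from the segmentation outputs: once the optic disc and cup are segmented, the vertical CDR is the ratio of their vertical extents. A small sketch of that measurement step, assuming binary NumPy masks; the toy masks here are illustrative, not clinical data or the paper's actual pipeline:

```python
import numpy as np

def vertical_cdr(disc_mask, cup_mask):
    """Vertical cup-to-disc ratio from binary segmentation masks:
    the cup's vertical extent divided by the disc's."""
    disc_rows = np.flatnonzero(disc_mask.any(axis=1))  # rows containing disc pixels
    cup_rows = np.flatnonzero(cup_mask.any(axis=1))    # rows containing cup pixels
    disc_height = disc_rows.max() - disc_rows.min() + 1
    cup_height = cup_rows.max() - cup_rows.min() + 1
    return cup_height / disc_height

# Toy example: a 10-row disc containing a 5-row cup
disc = np.zeros((20, 20), dtype=bool)
cup = np.zeros((20, 20), dtype=bool)
disc[5:15, 5:15] = True  # disc spans rows 5..14 (height 10)
cup[8:13, 8:13] = True   # cup spans rows 8..12 (height 5)
print(vertical_cdr(disc, cup))  # 0.5
```

In a full pipeline, the two masks would come from the segmentation networks described in the abstract, and a larger CDR value generally indicates higher glaucoma suspicion.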

2022

Artificial Intelligence for Upper Gastrointestinal Endoscopy: A Roadmap from Technology Development to Clinical Practice

Authors
Renna, F; Martins, M; Neto, A; Cunha, A; Libanio, D; Dinis-Ribeiro, M; Coimbra, M;

Publication
DIAGNOSTICS

Abstract
Stomach cancer is the third deadliest type of cancer in the world (0.86 million deaths in 2017). By 2035, a 20% increase in both incidence and mortality is expected due to demographic effects if no interventions are made. Upper GI endoscopy (UGIE) plays a paramount role in early diagnosis and, therefore, improved survival rates. On the other hand, human and technical factors can contribute to misdiagnosis while performing UGIE. In this scenario, artificial intelligence (AI) has recently shown its potential in compensating for the pitfalls of UGIE, by leveraging deep learning architectures able to efficiently recognize endoscopic patterns from UGIE video data. This work presents a review of the current state-of-the-art algorithms in the application of AI to gastroscopy. It focuses specifically on the threefold tasks of assuring exam completeness (i.e., detecting the presence of blind spots) and assisting in the detection and characterization of clinical findings, both gastric precancerous conditions and neoplastic lesion changes. Early and promising results have already been obtained using well-known deep learning architectures for computer vision, but many algorithmic challenges remain in achieving the vision of AI-assisted UGIE. Future challenges in the roadmap for the effective integration of AI tools within UGIE clinical practice are discussed, namely the adoption of more robust deep learning architectures, methods able to embed domain knowledge into image/video classifiers, and the availability of large, annotated datasets.