Publications

Publications by Teresa Finisterra Araújo

2019

EyeWeS: Weakly Supervised Pre-Trained Convolutional Neural Networks for Diabetic Retinopathy Detection

Authors
Costa, P; Araujo, T; Aresta, G; Galdran, A; Mendonca, AM; Smailagic, A; Campilho, A;

Publication
PROCEEDINGS OF MVA 2019 16TH INTERNATIONAL CONFERENCE ON MACHINE VISION APPLICATIONS (MVA)

Abstract
Diabetic Retinopathy (DR) is one of the leading causes of preventable blindness in the developed world. With the increasing number of diabetic patients there is a growing need for an automated system for DR detection. We propose EyeWeS, a method that not only detects DR in eye fundus images but also pinpoints the regions of the image that contain lesions, while being trained with image labels only. We show that it is possible to convert any pre-trained convolutional neural network into a weakly-supervised model while increasing its performance and efficiency. EyeWeS improved the results of Inception V3 from 94.9% Area Under the Receiver Operating Curve (AUC) to 95.8% AUC while maintaining only approximately 5% of Inception V3's number of parameters. The same model is able to achieve 97.1% AUC in a cross-dataset experiment.
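Based on the abstract alone, a minimal sketch of such a conversion (the paper's exact head may differ): the backbone's dense classifier is replaced by a 1x1 convolution that scores every spatial position, and global max pooling turns that score map into an image-level prediction, so the network trains with image labels only while the map pinpoints lesions. A ResNet-18 backbone stands in for Inception V3 purely for brevity.

```python
import torch
import torch.nn as nn
from torchvision import models


class WeaklySupervisedNet(nn.Module):
    """A pre-trained CNN converted into a weakly-supervised detector.

    The dense classifier is dropped; a 1x1 convolution scores every
    spatial cell of the last feature map, and global max pooling
    collapses the score map into a single image-level DR logit. The
    score map itself serves as the lesion-localization output, even
    though training uses image labels only.
    """

    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights="IMAGENET1K_V1")
        # Keep everything up to (excluding) the average pooling and FC head.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.score = nn.Conv2d(512, 1, kernel_size=1)  # per-region DR score

    def forward(self, x):
        score_map = self.score(self.features(x))              # (B, 1, H', W')
        image_logit = score_map.flatten(2).max(dim=2).values  # global max pool
        return image_logit, score_map


model = WeaklySupervisedNet().eval()
with torch.no_grad():
    logit, lesion_map = model(torch.randn(1, 3, 512, 512))
prob = torch.sigmoid(logit)  # image-level DR probability
```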

2019

BACH: Grand challenge on breast cancer histology images

Authors
Aresta, G; Araujo, T; Kwok, S; Chennamsetty, SS; Safwan, M; Alex, V; Marami, B; Prastawa, M; Chan, M; Donovan, M; Fernandez, G; Zeineh, J; Kohl, M; Walz, C; Ludwig, F; Braunewell, S; Baust, M; Vu, QD; To, MNN; Kim, E; Kwak, JT; Galal, S; Sanchez Freire, V; Brancati, N; Frucci, M; Riccio, D; Wang, YQ; Sun, LL; Ma, KQ; Fang, JN; Koné, I; Boulmane, L; Campilho, A; Eloy, C; Polónia, A; Aguiar, P;

Publication
MEDICAL IMAGE ANALYSIS

Abstract
Breast cancer is the most common invasive cancer in women, affecting more than 10% of women worldwide. Microscopic analysis of a biopsy remains one of the most important methods to diagnose the type of breast cancer. This requires specialized analysis by pathologists, in a task that i) is highly time-consuming and costly and ii) often leads to non-consensual results. The relevance and potential of automatic classification algorithms using hematoxylin-eosin stained histopathological images has already been demonstrated, but the reported results are still sub-optimal for clinical use. With the goal of advancing the state-of-the-art in automatic classification, the Grand Challenge on BreAst Cancer Histology images (BACH) was organized in conjunction with the 15th International Conference on Image Analysis and Recognition (ICIAR 2018). BACH aimed at the classification and localization of clinically relevant histopathological classes in microscopy and whole-slide images from a large annotated dataset, specifically compiled and made publicly available for the challenge. Following a positive response from the scientific community, a total of 64 submissions, out of 677 registrations, effectively entered the competition. The submitted algorithms improved the state-of-the-art in automatic classification of breast cancer with microscopy images to an accuracy of 87%. Convolutional neural networks were the most successful methodology in the BACH challenge. Detailed analysis of the collective results allowed the identification of remaining challenges in the field and recommendations for future developments. The BACH dataset remains publicly available to promote further improvements in the field of automatic classification in digital pathology.

2019

iW-Net: an automatic and minimalistic interactive lung nodule segmentation deep network

Authors
Aresta, G; Jacobs, C; Araujo, T; Cunha, A; Ramos, I; van Ginneken, B; Campilho, A;

Publication
SCIENTIFIC REPORTS

Abstract
We propose iW-Net, a deep learning model that allows for both automatic and interactive segmentation of lung nodules in computed tomography images. iW-Net is composed of two blocks: the first provides an automatic segmentation and the second corrects it by analyzing two points introduced by the user on the nodule's boundary. For this purpose, a physics-inspired weight map that takes the user input into account is proposed, which is used both as a feature map and in the system's loss function. Our approach is extensively evaluated on the public LIDC-IDRI dataset, where we achieve a state-of-the-art performance of 0.55 intersection over union vs. the 0.59 inter-observer agreement. We also show that iW-Net makes it possible to correct the segmentation of small nodules, essential for proper patient referral decisions, as well as to improve the segmentation of the challenging non-solid nodules; it may therefore be an important tool for increasing the early diagnosis of lung cancer.
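The abstract does not give the weight map's closed form, so the sketch below uses one plausible choice, a Gaussian bump at each of the two user-placed boundary points, and shows how such a map can reweight a pixel-wise segmentation loss; the function names and the sigma parameter are illustrative, not the paper's formulation.

```python
import numpy as np


def user_weight_map(shape, p1, p2, sigma=5.0):
    """Weight map built from two user clicks on the nodule boundary.

    Illustrative form only: a Gaussian bump at each click, so pixels
    near the user's corrections dominate the loss. iW-Net's published
    map is physics-inspired and may differ in its exact expression.
    """
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    w = np.zeros(shape, dtype=np.float64)
    for (py, px) in (p1, p2):
        w += np.exp(-((yy - py) ** 2 + (xx - px) ** 2) / (2 * sigma ** 2))
    return 1.0 + w  # baseline of 1 keeps every pixel in the loss


def weighted_bce(pred, target, weights, eps=1e-7):
    """Pixel-wise binary cross-entropy reweighted by the user map."""
    pred = np.clip(pred, eps, 1 - eps)
    loss = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    return float((weights * loss).mean())


w = user_weight_map((64, 64), p1=(20, 18), p2=(40, 47))
# The same map can also be stacked as an extra input channel for the
# correction block, alongside the image and the first segmentation.
```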

2020

Automatic Lung Nodule Detection Combined With Gaze Information Improves Radiologists' Screening Performance

Authors
Aresta, G; Ferreira, C; Pedrosa, J; Araujo, T; Rebelo, J; Negrao, E; Morgado, M; Alves, F; Cunha, A; Ramos, I; Campilho, A;

Publication
IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS

Abstract
Early diagnosis of lung cancer via computed tomography can significantly reduce the morbidity and mortality rates associated with the pathology. However, searching for lung nodules is a highly complex task, which affects the success of screening programs. Whilst computer-aided detection systems can be used as second observers, they may bias radiologists and introduce significant time overheads. With this in mind, this study assesses the potential of using gaze information for integrating automatic detection systems into clinical practice. For that purpose, 4 radiologists were asked to annotate 20 scans from a public dataset while being monitored by an eye-tracker device, and an automatic lung nodule detection system was developed. Our results show that radiologists follow a similar search routine and tend to have lower fixation periods in regions where detection errors occur. The overall detection sensitivity of the specialists was 0.67 +/- 0.07, whereas the system achieved 0.69. Combining the annotations of one radiologist with the automatic system significantly improves the detection performance to levels similar to those of two annotators. Filtering automatic detection candidates to low-fixation regions only still significantly improves the detection sensitivity without increasing the number of false positives.
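As a hedged illustration of that last filtering step, the snippet below keeps only the detection candidates that fall in regions the radiologist fixated briefly; the threshold value and the data layout are assumptions for the sketch, not the study's protocol.

```python
import numpy as np


def filter_by_fixation(candidates, fixation_time, threshold_s=1.0):
    """Keep CAD candidates only where the radiologist looked briefly.

    candidates    : list of (y, x, score) nodule detections
    fixation_time : 2-D map of accumulated gaze time per pixel (seconds)
    threshold_s   : fixations longer than this mark a well-searched region

    Illustrative thresholding only; the study's exact criterion for a
    low-fixation region may differ.
    """
    return [(y, x, s) for (y, x, s) in candidates
            if fixation_time[y, x] < threshold_s]


fix = np.zeros((512, 512))
fix[100:200, 100:200] = 2.5  # a well-inspected area
cands = [(150, 150, 0.9), (300, 300, 0.8)]
print(filter_by_fixation(cands, fix))  # only the unsearched (300, 300) survives
```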

2020

DR|GRADUATE: Uncertainty-aware deep learning-based diabetic retinopathy grading in eye fundus images

Authors
Araujo, T; Aresta, G; Mendonca, L; Penas, S; Maia, C; Carneiro, A; Mendonca, AM; Campilho, A;

Publication
MEDICAL IMAGE ANALYSIS

Abstract
Diabetic retinopathy (DR) grading is crucial in determining the adequate treatment and follow-up of patients, but the screening process can be tiresome and prone to errors. Deep learning approaches have shown promising performance as computer-aided diagnosis (CAD) systems, but their black-box behaviour hinders clinical application. We propose DR|GRADUATE, a novel deep learning-based DR grading CAD system that supports its decision by providing a medically interpretable explanation and an estimation of how uncertain that prediction is, allowing the ophthalmologist to measure how much that decision should be trusted. We designed DR|GRADUATE taking into account the ordinal nature of the DR grading problem. A novel Gaussian-sampling approach built upon a Multiple Instance Learning framework allows DR|GRADUATE to infer an image grade associated with an explanation map and a prediction uncertainty while being trained only with image-wise labels. DR|GRADUATE was trained on the Kaggle DR detection training set and evaluated across multiple datasets. In DR grading, a quadratic-weighted Cohen's kappa (κ) between 0.71 and 0.84 was achieved on five different datasets. We show that high κ values occur for images with low prediction uncertainty, thus indicating that this uncertainty is a valid measure of the predictions' quality. Further, bad-quality images are generally associated with higher uncertainties, showing that images not suitable for diagnosis indeed lead to less trustworthy predictions. Additionally, tests on unfamiliar medical image data types suggest that DR|GRADUATE allows outlier detection. The attention maps generally highlight regions of interest for diagnosis. These results show the great potential of DR|GRADUATE as a second-opinion system in DR severity grading.
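One plausible reading of the Gaussian idea, sketched under assumptions (the paper's Multiple Instance Learning machinery is richer than this head, and the class names here are hypothetical): predict a Gaussian over the 0-4 grade axis, report the rounded mean as the grade and the standard deviation as the uncertainty, and train with the Gaussian negative log-likelihood of the image-wise label.

```python
import torch
import torch.nn as nn


class GaussianGradeHead(nn.Module):
    """Ordinal DR grading with an uncertainty estimate (illustrative).

    From pooled image features the head predicts a Gaussian (mu, var)
    over the 0-4 grade axis; the grade is mu rounded to the nearest
    integer and sqrt(var) is the reported prediction uncertainty.
    """

    def __init__(self, in_features=512, n_grades=5):
        super().__init__()
        self.mu = nn.Linear(in_features, 1)
        self.log_var = nn.Linear(in_features, 1)
        self.n_grades = n_grades

    def forward(self, feats):
        mu = torch.sigmoid(self.mu(feats)) * (self.n_grades - 1)  # in [0, 4]
        var = torch.exp(self.log_var(feats))                      # positive
        return mu.squeeze(1), var.squeeze(1)


def gaussian_nll(mu, var, grade):
    """-log N(grade | mu, var): the per-image training loss."""
    return (0.5 * torch.log(var) + (grade - mu) ** 2 / (2 * var)).mean()


head = GaussianGradeHead()
mu, var = head(torch.randn(4, 512))
loss = gaussian_nll(mu, var, torch.tensor([0.0, 2.0, 4.0, 1.0]))
grade, uncertainty = mu.round(), var.sqrt()
```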

2020

Optic Disc and Fovea Detection in Color Eye Fundus Images

Authors
Mendonça, AM; Melo, T; Araújo, T; Campilho, A;

Publication
Image Analysis and Recognition - 17th International Conference, ICIAR 2020, Póvoa de Varzim, Portugal, June 24-26, 2020, Proceedings, Part II

Abstract
The optic disc (OD) and the fovea are relevant landmarks in fundus images. Their localization and segmentation can facilitate the detection of some retinal lesions and the assessment of their importance to the severity and progression of several eye disorders. Distinct methodologies have been developed for detecting these structures, mainly based on color and vascular information. The methodology herein described combines the entropy of the vessel directions with the image intensities for finding the OD center and uses a sliding band filter for segmenting the OD. The fovea center corresponds to the darkest point inside a region defined from the OD position and radius. Both the Messidor and the IDRiD datasets are used for evaluating the performance of the developed methods. On the first, success rates of 99.56% and 100.00% are achieved for OD and fovea localization, respectively. Regarding the OD segmentation, the mean Jaccard index and Dice coefficient obtained are 0.87 and 0.94, respectively. The proposed methods are also amongst the top-3 performing solutions submitted to the IDRiD online challenge.
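A minimal sketch of the fovea step as the abstract describes it, with assumed geometry: the search region is a disc placed a fixed number of OD radii temporal to the OD center (the distance factor below is a common anatomical rule of thumb, not necessarily the paper's value), and the fovea is taken as the darkest pixel inside that region.

```python
import numpy as np


def fovea_center(green, od_center, od_radius, dist_factor=5.0, roi_radius=None):
    """Darkest point inside a region defined from the OD position and radius.

    green      : green channel of the fundus image (fovea appears darkest)
    od_center  : (y, x) of the detected optic disc center
    od_radius  : optic disc radius in pixels
    dist_factor: expected OD-to-fovea distance in OD radii (assumed value;
                 the paper's exact region definition may differ)
    """
    h, w = green.shape
    oy, ox = od_center
    # The fovea lies temporal to the OD, i.e. toward the image center.
    direction = 1.0 if ox < w / 2 else -1.0
    cy, cx = oy, ox + direction * dist_factor * od_radius
    roi_radius = roi_radius or 2 * od_radius

    yy, xx = np.mgrid[0:h, 0:w]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= roi_radius ** 2
    masked = np.where(mask, green, green.max() + 1)  # exclude pixels outside ROI
    return np.unravel_index(np.argmin(masked), masked.shape)
```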
