Publications

Publications by Aurélio Campilho

2018

MedAL: Accurate and Robust Deep Active Learning for Medical Image Analysis

Authors
Smailagic, A; Costa, P; Noh, HY; Walawalkar, D; Khandelwal, K; Galdran, A; Mirshekari, M; Fagert, J; Xu, SS; Zhang, P; Campilho, A;

Publication
2018 17TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA)

Abstract
Deep learning models have been successfully used in medical image analysis problems but they require a large amount of labeled images to obtain good performance. However, such large labeled datasets are costly to acquire. Active learning techniques can be used to minimize the number of required training labels while maximizing the model's performance. In this work, we propose a novel sampling method that queries the unlabeled examples that maximize the average distance to all training set examples in a learned feature space. We then extend our sampling method to define a better initial training set, without the need for a trained model, by using Oriented FAST and Rotated BRIEF (ORB) feature descriptors. We validate MedAL on 3 medical image datasets and show that our method is robust to different dataset properties. MedAL is also efficient, achieving 80% accuracy on the task of Diabetic Retinopathy detection using only 425 labeled images, corresponding to a 32% reduction in the number of required labeled examples compared to the standard uncertainty sampling technique, and a 40% reduction compared to random sampling.
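The distance-based query criterion described in this abstract can be sketched in a few lines. This is an illustrative reading only, not the authors' implementation: the feature vectors, the Euclidean metric, and the function names are assumptions.

```python
import math

def average_distance(candidate, train_feats):
    """Mean Euclidean distance from one candidate feature vector
    to every feature vector in the current training set."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(dist(candidate, f) for f in train_feats) / len(train_feats)

def select_query(unlabeled_feats, train_feats):
    """MedAL-style query: pick the index of the unlabeled example whose
    learned features lie, on average, farthest from the training set."""
    return max(range(len(unlabeled_feats)),
               key=lambda i: average_distance(unlabeled_feats[i], train_feats))
```

In an active-learning loop this selection would be repeated after each retraining step, with the queried example moved into the labeled pool.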

2019

EyeWeS: Weakly Supervised Pre-Trained Convolutional Neural Networks for Diabetic Retinopathy Detection

Authors
Costa, P; Araújo, T; Aresta, G; Galdran, A; Mendonça, AM; Smailagic, A; Campilho, A;

Publication
PROCEEDINGS OF MVA 2019 16TH INTERNATIONAL CONFERENCE ON MACHINE VISION APPLICATIONS (MVA)

Abstract
Diabetic Retinopathy (DR) is one of the leading causes of preventable blindness in the developed world. With the increasing number of diabetic patients there is a growing need for an automated system for DR detection. We propose EyeWeS, a method that not only detects DR in eye fundus images but also pinpoints the regions of the image that contain lesions, while being trained with image labels only. We show that it is possible to convert any pre-trained convolutional neural network into a weakly-supervised model while increasing its performance and efficiency. EyeWeS improved the results of Inception V3 from 94.9% Area Under the Receiver Operating Curve (AUC) to 95.8% AUC while maintaining only approximately 5% of Inception V3's number of parameters. The same model achieves 97.1% AUC in a cross-dataset experiment.
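One common way to obtain localization from image-level labels, consistent with the conversion described above, is to map a convolutional feature map to a spatial evidence map with 1x1-convolution weights and pool it globally. The sketch below is a hypothetical, framework-free illustration of that idea, not the EyeWeS architecture itself; all names and shapes are assumptions.

```python
def weakly_supervised_head(feature_map, weights, bias):
    """Hypothetical weakly-supervised head: a 1x1 convolution collapses a
    C x H x W feature map into an H x W lesion-evidence map; global max
    pooling over that map gives the image-level score, so weights trained
    with image labels only can still highlight lesion locations."""
    C = len(feature_map)
    H, W = len(feature_map[0]), len(feature_map[0][0])
    evidence = [[bias + sum(weights[c] * feature_map[c][i][j] for c in range(C))
                 for j in range(W)] for i in range(H)]
    score = max(max(row) for row in evidence)  # image-level prediction
    return evidence, score
```

The evidence map is what "pinpoints the regions of the image that contain lesions"; the pooled score is what the image-level label supervises.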

2019

Uncertainty-Aware Artery/Vein Classification on Retinal Images

Authors
Galdran, A; Meyer, M; Costa, P; Mendonça, AM; Campilho, A;

Publication
2019 IEEE 16TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI 2019)

Abstract
The automatic differentiation of retinal vessels into arteries and veins (A/V) is a highly relevant task within the field of retinal image analysis. However, due to limitations of retinal image acquisition devices, specialists can find it impossible to label certain vessels in eye fundus images. In this paper, we introduce a method that takes such uncertainty into account by design. For this, we formulate the A/V classification task as a four-class segmentation problem, and a Convolutional Neural Network is trained to classify pixels into background, artery, vein, or uncertain classes. The resulting technique can directly provide pixelwise uncertainty estimates. In addition, instead of depending on a previously available vessel segmentation, the method automatically segments the vessel tree. Experimental results show a performance comparable or superior to several recent A/V classification approaches. The proposed technique also attains state-of-the-art performance when evaluated for the task of vessel segmentation, generalizing to data that was not used during training, even with considerable differences in appearance and resolution.
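The four-class formulation makes the per-pixel decoding straightforward: the predicted label is the arg-max class, and the probability assigned to the "uncertain" class serves directly as a pixelwise uncertainty estimate. The snippet below is a minimal sketch of that decoding step under assumed class ordering; it is not the authors' code.

```python
# Assumed class ordering for the four-class segmentation output.
BACKGROUND, ARTERY, VEIN, UNCERTAIN = range(4)

def decode_pixel(probs):
    """Given the 4-class softmax output for one pixel (background,
    artery, vein, uncertain), return the predicted label and a
    scalar uncertainty score for that pixel."""
    label = max(range(4), key=lambda c: probs[c])
    return label, probs[UNCERTAIN]
```

Applied to every pixel, this yields both the A/V segmentation map and the uncertainty map the abstract refers to.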

2019

BACH: Grand challenge on breast cancer histology images

Authors
Aresta, G; Araújo, T; Kwok, S; Chennamsetty, SS; Safwan, M; Alex, V; Marami, B; Prastawa, M; Chan, M; Donovan, M; Fernandez, G; Zeineh, J; Kohl, M; Walz, C; Ludwig, F; Braunewell, S; Baust, M; Vu, QD; To, MNN; Kim, E; Kwak, JT; Galal, S; Sanchez Freire, V; Brancati, N; Frucci, M; Riccio, D; Wang, YQ; Sun, LL; Ma, KQ; Fang, JN; Koné, I; Boulmane, L; Campilho, A; Eloy, C; Polónia, A; Aguiar, P;

Publication
MEDICAL IMAGE ANALYSIS

Abstract
Breast cancer is the most common invasive cancer in women, affecting more than 10% of women worldwide. Microscopic analysis of a biopsy remains one of the most important methods to diagnose the type of breast cancer. This requires specialized analysis by pathologists, in a task that i) is highly time- and cost-consuming and ii) often leads to non-consensual results. The relevance and potential of automatic classification algorithms using hematoxylin-eosin stained histopathological images have already been demonstrated, but the reported results are still sub-optimal for clinical use. With the goal of advancing the state of the art in automatic classification, the Grand Challenge on BreAst Cancer Histology images (BACH) was organized in conjunction with the 15th International Conference on Image Analysis and Recognition (ICIAR 2018). BACH aimed at the classification and localization of clinically relevant histopathological classes in microscopy and whole-slide images from a large annotated dataset, specifically compiled and made publicly available for the challenge. Following a positive response from the scientific community, a total of 64 submissions, out of 677 registrations, effectively entered the competition. The submitted algorithms improved the state of the art in automatic classification of breast cancer with microscopy images to an accuracy of 87%. Convolutional neural networks were the most successful methodology in the BACH challenge. Detailed analysis of the collective results allowed the identification of remaining challenges in the field and recommendations for future developments. The BACH dataset remains publicly available so as to promote further improvements in the field of automatic classification in digital pathology.

2019

Learning to Segment the Lung Volume from CT Scans Based on Semi-Automatic Ground-Truth

Authors
Sousa, P; Galdran, A; Costa, P; Campilho, A;

Publication
2019 IEEE 16TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI 2019)

Abstract
Lung volume segmentation is a key step in the design of Computer-Aided Diagnosis systems for automated lung pathology analysis. However, isolating the lung from CT volumes can be a challenging process due to considerable deformations and the potential presence of pathologies. Convolutional Neural Networks (CNN) are effective tools for modeling the spatial relationship between lung voxels. Unfortunately, they typically require large quantities of annotated data, and manually delineating the lung from volumetric CT scans can be a cumbersome process. We propose to train a 3D CNN to solve this task based on semi-automatically generated annotations. For this, we introduce an extension of the well-known V-Net architecture that can handle higher dimensional input data. Even if the training set labels are noisy and contain errors, our experiments show that it is possible to learn to accurately segment the lung relying on them. Numerical comparisons on an external test set containing lung segmentations provided by a medical expert demonstrate that the proposed model generalizes well to new data, reaching an average 98.7% Dice coefficient. The proposed approach results in a superior performance with respect to the standard V-Net model, particularly on the lung boundary.

2019

Real-Time Informative Laryngoscopic Frame Classification with Pre-Trained Convolutional Neural Networks

Authors
Galdran, A; Costa, P; Campilho, A;

Publication
2019 IEEE 16TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI 2019)

Abstract
Visual exploration of the larynx represents a relevant technique for the early diagnosis of laryngeal disorders. However, reviewing an endoscopic video for abnormalities is a time-consuming process, and for this reason much research has been dedicated to the automatic analysis of endoscopic video data. In this work we address the particular task of discriminating between informative laryngoscopic frames and those that carry insufficient diagnostic information. In the latter case, the goal is also to determine the reason for this lack of information. To this end, we analyze the possibility of training three different state-of-the-art Convolutional Neural Networks, initializing their weights from configurations that have been previously optimized for solving natural image classification problems. Our findings show that the simplest of these three architectures is not only the most accurate (outperforming previously proposed techniques), but also the fastest and most efficient, with the lowest inference time and minimal memory requirements, enabling real-time application and deployment in portable devices.
