
Publications by Guilherme Moreira Aresta

2019

BACH: Grand challenge on breast cancer histology images

Authors
Aresta, G; Araujo, T; Kwok, S; Chennamsetty, SS; Safwan, M; Alex, V; Marami, B; Prastawa, M; Chan, M; Donovan, M; Fernandez, G; Zeineh, J; Kohl, M; Walz, C; Ludwig, F; Braunewell, S; Baust, M; Vu, QD; To, MNN; Kim, E; Kwak, JT; Galal, S; Sanchez Freire, V; Brancati, N; Frucci, M; Riccio, D; Wang, YQ; Sun, LL; Ma, KQ; Fang, JN; Kone, I; Boulmane, L; Campilho, A; Eloy, C; Polonia, A; Aguiar, P;

Publication
MEDICAL IMAGE ANALYSIS

Abstract
Breast cancer is the most common invasive cancer in women, affecting more than 10% of women worldwide. Microscopic analysis of a biopsy remains one of the most important methods to diagnose the type of breast cancer. This requires specialized analysis by pathologists, in a task that i) is highly time- and cost-consuming and ii) often leads to nonconsensual results. The relevance and potential of automatic classification algorithms using hematoxylin-eosin stained histopathological images has already been demonstrated, but the reported results are still sub-optimal for clinical use. With the goal of advancing the state-of-the-art in automatic classification, the Grand Challenge on BreAst Cancer Histology images (BACH) was organized in conjunction with the 15th International Conference on Image Analysis and Recognition (ICIAR 2018). BACH aimed at the classification and localization of clinically relevant histopathological classes in microscopy and whole-slide images from a large annotated dataset, specifically compiled and made publicly available for the challenge. Following a positive response from the scientific community, a total of 64 submissions, out of 677 registrations, effectively entered the competition. The submitted algorithms improved the state-of-the-art in automatic classification of breast cancer with microscopy images to an accuracy of 87%. Convolutional neural networks were the most successful methodology in the BACH challenge. Detailed analysis of the collective results allowed the identification of remaining challenges in the field and recommendations for future developments. The BACH dataset remains publicly available so as to promote further improvements in the field of automatic classification in digital pathology.

2019

iW-Net: an automatic and minimalistic interactive lung nodule segmentation deep network

Authors
Aresta, G; Jacobs, C; Araujo, T; Cunha, A; Ramos, I; Ginneken, BV; Campilho, A;

Publication
SCIENTIFIC REPORTS

Abstract
We propose iW-Net, a deep learning model that allows for both automatic and interactive segmentation of lung nodules in computed tomography images. iW-Net is composed of two blocks: the first one provides an automatic segmentation and the second one allows the user to correct it by placing two points on the nodule's boundary. For this purpose, a physics-inspired weight map that takes the user input into account is proposed, which is used both as a feature map and in the system's loss function. Our approach is extensively evaluated on the public LIDC-IDRI dataset, where we achieve a state-of-the-art performance of 0.55 intersection over union vs the 0.59 inter-observer agreement. Also, we show that iW-Net enables correcting the segmentation of small nodules, essential for proper patient referral decisions, as well as improving the segmentation of the challenging non-solid nodules, and thus may be an important tool for increasing the early diagnosis of lung cancer.
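The intersection-over-union score reported above can be computed directly from binary segmentation masks. A minimal sketch (the mask shapes and values below are illustrative, not from the paper):

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, target).sum() / union

# Toy 2D example: two overlapping square "nodules"
a = np.zeros((8, 8)); a[2:6, 2:6] = 1   # 16 pixels
b = np.zeros((8, 8)); b[3:7, 3:7] = 1   # 16 pixels, 3x3 = 9 overlap
score = iou(a, b)                       # 9 / (16 + 16 - 9) ≈ 0.391
```

The same definition extends unchanged to 3D volumes, since the logical operations are elementwise.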

2020

Automatic Lung Nodule Detection Combined With Gaze Information Improves Radiologists' Screening Performance

Authors
Aresta, G; Ferreira, C; Pedrosa, J; Araujo, T; Rebelo, J; Negrao, E; Morgado, M; Alves, F; Cunha, A; Ramos, I; Campilho, A;

Publication
IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS

Abstract
Early diagnosis of lung cancer via computed tomography can significantly reduce the morbidity and mortality rates associated with the pathology. However, searching for lung nodules is a highly complex task, which affects the success of screening programs. Whilst computer-aided detection systems can be used as second observers, they may bias radiologists and introduce significant time overheads. With this in mind, this study assesses the potential of using gaze information for integrating automatic detection systems into clinical practice. For that purpose, 4 radiologists were asked to annotate 20 scans from a public dataset while being monitored by an eye-tracker device, and an automatic lung nodule detection system was developed. Our results show that radiologists follow a similar search routine and tend to have lower fixation periods in regions where detection errors occur. The overall detection sensitivity of the specialists was 0.67 +/- 0.07, whereas the system achieved 0.69. Combining the annotations of one radiologist with the automatic system significantly improves the detection performance to levels similar to two annotators. Filtering automatic detection candidates only for low-fixation regions still significantly improves the detection sensitivity without increasing the number of false-positives.
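Detection sensitivity here is the fraction of ground-truth nodules that are found, and combining a reader with the automatic system amounts to taking the union of their detections. A minimal sketch with hypothetical detection sets (the IDs and counts are made up for illustration):

```python
def sensitivity(detected: set, truth: set) -> float:
    """Fraction of ground-truth nodules that were detected."""
    return len(detected & truth) / len(truth)

truth = {1, 2, 3, 4, 5, 6}        # hypothetical ground-truth nodule IDs
radiologist = {1, 2, 3, 4}        # reader finds 4 of 6
cad = {3, 4, 5}                   # automatic system finds 3 of 6
combined = radiologist | cad      # union: reader + CAD as second observer

reader_sens = sensitivity(radiologist, truth)   # 4/6 ≈ 0.667
combined_sens = sensitivity(combined, truth)    # 5/6 ≈ 0.833
```

The union can only add true positives to the reader's set, which is why sensitivity rises; the trade-off, managed in the paper via fixation filtering, is the extra false positives the CAD contributes.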

2020

LNDetector: A Flexible Gaze Characterisation Collaborative Platform for Pulmonary Nodule Screening

Authors
Pedrosa, J; Aresta, G; Rebelo, J; Negrao, E; Ramos, I; Cunha, A; Campilho, A;

Publication
XV MEDITERRANEAN CONFERENCE ON MEDICAL AND BIOLOGICAL ENGINEERING AND COMPUTING - MEDICON 2019

Abstract
Lung cancer is the deadliest type of cancer worldwide and late detection is one of the major factors for the low survival rate of patients. Low-dose computed tomography has been suggested as a potential early screening tool, but manual screening is costly, time-consuming and prone to interobserver variability. This has fueled the development of automatic methods for the detection, segmentation and characterisation of pulmonary nodules, but their application to the clinical routine is challenging. In this study, a platform for the development, deployment and testing of pulmonary nodule computer-aided strategies is presented: LNDetector. LNDetector integrates image exploration and nodule annotation tools as well as advanced nodule detection, segmentation and classification methods and gaze characterisation. Different processing modules can easily be implemented or replaced to test their efficiency in clinical environments, and the use of gaze analysis allows for the development of collaborative strategies. The potential use of this platform is shown through a combination of visual search, gaze characterisation and automatic nodule detection tools for an efficient and collaborative computer-aided strategy for pulmonary nodule screening.

2020

DR|GRADUATE: Uncertainty-aware deep learning-based diabetic retinopathy grading in eye fundus images

Authors
Araujo, T; Aresta, G; Mendonca, L; Penas, S; Maia, C; Carneiro, A; Mendonca, AM; Campilho, A;

Publication
MEDICAL IMAGE ANALYSIS

Abstract
Diabetic retinopathy (DR) grading is crucial in determining the adequate treatment and follow-up of patients, but the screening process can be tiresome and prone to errors. Deep learning approaches have shown promising performance as computer-aided diagnosis (CAD) systems, but their black-box behaviour hinders clinical application. We propose DR|GRADUATE, a novel deep learning-based DR grading CAD system that supports its decision by providing a medically interpretable explanation and an estimation of how uncertain that prediction is, allowing the ophthalmologist to measure how much that decision should be trusted. We designed DR|GRADUATE taking into account the ordinal nature of the DR grading problem. A novel Gaussian-sampling approach built upon a Multiple Instance Learning framework allows DR|GRADUATE to infer an image grade associated with an explanation map and a prediction uncertainty while being trained only with image-wise labels. DR|GRADUATE was trained on the Kaggle DR detection training set and evaluated across multiple datasets. In DR grading, a quadratic-weighted Cohen's kappa (kappa) between 0.71 and 0.84 was achieved on five different datasets. We show that high kappa values occur for images with low prediction uncertainty, thus indicating that this uncertainty is a valid measure of the predictions' quality. Further, bad-quality images are generally associated with higher uncertainties, showing that images not suitable for diagnosis indeed lead to less trustworthy predictions. Additionally, tests on unfamiliar medical image data types suggest that DR|GRADUATE allows outlier detection. The attention maps generally highlight regions of interest for diagnosis. These results show the great potential of DR|GRADUATE as a second-opinion system in DR severity grading.
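The quadratic-weighted Cohen's kappa used for evaluation penalizes disagreements by the squared distance between ordinal grades, which suits the 0-4 DR grading scale. A minimal sketch of the metric (the grade values below are illustrative, not from the paper):

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes=5):
    """Quadratic-weighted Cohen's kappa for ordinal labels (e.g. DR grades 0-4)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    # Observed confusion matrix O
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    # Quadratic disagreement weights: 0 on the diagonal, largest at the corners
    idx = np.arange(n_classes)
    W = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    # Expected matrix E under independence of the two raters' marginals
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1.0 - (W * O).sum() / (W * E).sum()

kappa = quadratic_weighted_kappa([0, 1, 2, 3, 4], [0, 1, 2, 3, 4])  # perfect agreement: 1.0
```

Because the weights grow quadratically with grade distance, confusing grade 0 with grade 4 costs far more than confusing adjacent grades, rewarding models that respect the ordinal structure.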

2020

CLASSIFICATION OF LUNG NODULES IN CT VOLUMES USING THE LUNG-RADS(TM) GUIDELINES WITH UNCERTAINTY PARAMETERIZATION

Authors
Ferreira, CA; Aresta, G; Pedrosa, J; Rebelo, J; Negrao, E; Cunha, A; Ramos, I; Campilho, A;

Publication
2020 IEEE 17TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI 2020)

Abstract
Currently, lung cancer is the most lethal cancer in the world. In order to make screening and follow-up more systematic, guidelines have been proposed. Therefore, this study aimed to create a diagnostic support approach by providing a patient label based on the Lung-RADS(TM) guidelines. The only input required by the system is the nodule centroid, used to extract the region of interest for the input of the classification system. With this in mind, two deep learning networks were evaluated: a Wide Residual Network and a DenseNet. Taking into account the annotation uncertainty, we proposed to use sample weights that are introduced in the loss function, allowing nodules with high agreement in the annotation process to have a greater impact on the training error than nodules with low agreement. The best result was achieved with the Wide Residual Network with sample weights, achieving a nodule-wise Lung-RADS(TM) labelling accuracy of 0.735 +/- 0.003.
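The sample-weighting idea described above, scaling each nodule's contribution to the loss by annotator agreement, can be sketched as a weighted cross-entropy. The weight values and probabilities below are illustrative assumptions, not the paper's:

```python
import numpy as np

def weighted_cross_entropy(probs, labels, sample_weights):
    """Cross-entropy where each sample's loss is scaled by an agreement weight.

    probs:          (N, C) predicted class probabilities
    labels:         (N,) integer class labels
    sample_weights: (N,) higher for nodules with high annotator agreement
    """
    eps = 1e-12
    n = len(labels)
    per_sample = -np.log(probs[np.arange(n), labels] + eps)
    return (sample_weights * per_sample).sum() / sample_weights.sum()

# Two nodules: the high-agreement one (weight 1.0) dominates
# the low-agreement one (weight 0.2) in the training error
probs = np.array([[0.9, 0.1],
                  [0.4, 0.6]])
labels = np.array([0, 0])
weights = np.array([1.0, 0.2])
loss = weighted_cross_entropy(probs, labels, weights)
```

Normalizing by the weight sum keeps the loss scale comparable across batches with different agreement mixes; the same weighting plugs into any per-sample loss.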
