
Publications by Guilherme Moreira Aresta

2018

UOLO - Automatic Object Detection and Segmentation in Biomedical Images

Authors
Araujo, T; Aresta, G; Galdran, A; Costa, P; Mendonca, AM; Campilho, A;

Publication
DEEP LEARNING IN MEDICAL IMAGE ANALYSIS AND MULTIMODAL LEARNING FOR CLINICAL DECISION SUPPORT, DLMIA 2018

Abstract
We propose UOLO, a novel framework for the simultaneous detection and segmentation of structures of interest in medical images. UOLO consists of an object segmentation module whose intermediate abstract representations are processed and used as input for object detection. The resulting system is optimized simultaneously for detecting a class of objects and segmenting an optionally different class of structures. UOLO is trained on a set of bounding boxes enclosing the objects to detect, as well as pixel-wise segmentation information, when available. A new loss function is devised, taking into account whether a reference segmentation is accessible for each training image, in order to suitably backpropagate the error. We validate UOLO on the task of simultaneous optic disc (OD) detection, fovea detection, and OD segmentation from retinal images, achieving state-of-the-art performance on public datasets.
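The idea of a loss that only backpropagates the segmentation error when a reference mask exists can be sketched as follows (a minimal NumPy illustration; the function name, per-image loss inputs, and weighting are assumptions, not the paper's actual formulation):

```python
import numpy as np

def masked_joint_loss(det_loss, seg_loss, has_mask, seg_weight=1.0):
    """Combine per-image detection and segmentation losses.

    The segmentation term only contributes for images where a
    reference mask exists (has_mask == True), so unannotated images
    are trained on detection alone.
    """
    det_loss = np.asarray(det_loss, dtype=float)
    seg_loss = np.asarray(seg_loss, dtype=float)
    has_mask = np.asarray(has_mask, dtype=bool)
    # Zero out the segmentation term where no ground-truth mask exists.
    seg_term = np.where(has_mask, seg_loss, 0.0)
    return float(np.mean(det_loss + seg_weight * seg_term))
```

In a real training loop the same masking would be applied before backpropagation, so images without segmentation ground truth still update the detection branch.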

2019

An unsupervised metaheuristic search approach for segmentation and volume measurement of pulmonary nodules in lung CT scans

Authors
Shakibapour, E; Cunha, A; Aresta, G; Mendonca, AM; Campilho, A;

Publication
EXPERT SYSTEMS WITH APPLICATIONS

Abstract
This paper proposes a new methodology to automatically segment and measure the volume of pulmonary nodules in lung computed tomography (CT) scans. Estimating the malignancy likelihood of a pulmonary nodule based on lesion characteristics motivated the development of an unsupervised pulmonary nodule segmentation and volume measurement method as a preliminary stage for pulmonary nodule characterization. The idea is to optimally cluster a set of feature vectors composed of intensity and shape-related features in a given feature data space extracted from a pre-detected nodule. For that purpose, a metaheuristic search based on evolutionary computation is used for clustering the corresponding feature vectors. The proposed method is simple, unsupervised, and able to segment different types of nodules in terms of location and texture without the need for any manual annotation. We validate the proposed segmentation and volume measurement on two subsets of the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset. The first subset is a group of 705 solid and sub-solid (assessed as part-solid and non-solid) nodules located in different regions of the lungs, and the second, more challenging, is a group of 59 sub-solid nodules. The average Dice scores of 82.35% and 71.05% for the two subsets show the good performance of the segmentation proposal. Comparisons with previous state-of-the-art techniques also show acceptable and comparable segmentation results. The volumes of the segmented nodules are measured via ellipsoid approximation. The correlation and statistical significance between the measured volumes of the segmented nodules and the ground truth are assessed via the Pearson correlation coefficient, yielding an R-value >= 92.16% at a 5% significance level.
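The two evaluation quantities mentioned above, the Dice overlap score and the ellipsoid volume approximation, can be sketched generically (not the authors' implementation):

```python
import numpy as np

def dice_score(pred, gt):
    """Dice overlap between two binary masks, in [0, 1]."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, gt).sum() / denom

def ellipsoid_volume(a, b, c):
    """Volume of an ellipsoid with semi-axes a, b, c: (4/3) * pi * a*b*c."""
    return 4.0 / 3.0 * np.pi * a * b * c
```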

2019

CATARACTS: Challenge on automatic tool annotation for cataRACT surgery

Authors
Al Hajj, H; Lamard, M; Conze, PH; Roychowdhury, S; Hu, XW; Marsalkaite, G; Zisimopoulos, O; Dedmari, MA; Zhao, FQ; Prellberg, J; Sahu, M; Galdran, A; Araujo, T; Vo, DM; Panda, C; Dahiya, N; Kondo, S; Bian, ZB; Vandat, A; Bialopetravicius, J; Flouty, E; Qiu, CH; Dill, S; Mukhopadhyay, A; Costa, P; Aresta, G; Ramamurthys, S; Lee, SW; Campilho, A; Zachow, S; Xia, SR; Conjeti, S; Stoyanov, D; Armaitis, J; Heng, PA; Macready, WG; Cochener, B; Quellec, G;

Publication
MEDICAL IMAGE ANALYSIS

Abstract
Surgical tool detection is attracting increasing attention from the medical image analysis community. The goal generally is not to precisely locate tools in images, but rather to indicate which tools are being used by the surgeon at each instant. The main motivation for annotating tool usage is to design efficient solutions for surgical workflow analysis, with potential applications in report generation, surgical training and even real-time decision support. Most existing tool annotation algorithms focus on laparoscopic surgeries. However, with 19 million interventions per year, the most common surgical procedure in the world is cataract surgery. The CATARACTS challenge was organized in 2017 to evaluate tool annotation algorithms in the specific context of cataract surgery. It relies on more than nine hours of videos, from 50 cataract surgeries, in which the presence of 21 surgical tools was manually annotated by two experts. With 14 participating teams, this challenge can be considered a success. As might be expected, the submitted solutions are based on deep learning. This paper thoroughly evaluates these solutions: in particular, the quality of their annotations is compared to that of human interpretations. Next, lessons learnt from the differential analysis of these solutions are discussed. We expect that these lessons will guide the design of efficient surgery monitoring tools in the near future.

2018

Radiologists' gaze characterization during lung nodule search in thoracic CT

Authors
Machado, M; Aresta, G; Leitao, P; Carvalho, AS; Rodrigues, M; Ramos, I; Cunha, A; Campilho, A;

Publication
2018 1ST INTERNATIONAL CONFERENCE ON GRAPHICS AND INTERACTION (ICGI 2018)

Abstract
Lung cancer diagnosis is made by radiologists through nodule search in chest Computed Tomography (CT) scans. This task is known to be difficult and prone to errors that can lead to late diagnosis. Although Computer-Aided Diagnostic (CAD) systems are promising tools for clinical practice, experienced radiologists continue to produce better diagnoses than CAD systems. This paper proposes a methodology for characterizing the radiologist's gaze during nodule search in chest CT scans. The main goals are to identify regions that attract the radiologists' attention, which can then be used for improving a lung CAD system, and to create a tool to assist radiologists during the search task. For that purpose, the methodology records the radiologists' gaze and mouse coordinates during the nodule search. The resulting data is then processed to obtain a 3D gaze path from which relevant attention studies can be derived. To better convey this information, a reference model of the lung that eases the communication of the location of relevant anatomical/pathological findings is also proposed. The methodology is tested on a set of 24 real-practice gaze recordings, captured via an eye tracker from 3 radiologists.
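The 3D gaze path described above pairs the on-screen gaze position with the CT slice displayed at that instant; a hypothetical sketch of that reconstruction (names, sample format, and the path-length statistic are assumptions, not the paper's exact pipeline):

```python
import math

def gaze_path_3d(samples):
    """Build a 3D gaze path from (x, y, slice_index) samples, where
    (x, y) is the gaze position on the displayed slice and
    slice_index is the CT slice shown at that instant."""
    return [(float(x), float(y), float(z)) for x, y, z in samples]

def path_length(path):
    """Total Euclidean length of the gaze path, one simple attention
    statistic that can be derived from it."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))
```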

2019

Wide Residual Network for Lung-Rads (TM) Screening Referral

Authors
Ferreira, CA; Aresta, G; Cunha, A; Mendonca, AM; Campilho, A;

Publication
2019 6TH IEEE PORTUGUESE MEETING IN BIOENGINEERING (ENBENG)

Abstract
Lung cancer has an increasing preponderance in worldwide mortality, demanding the development of efficient screening methods. With this in mind, a binary classification method using Lung-RADS (TM) guidelines to warn of changes in the screening management is proposed. First, taking into account the lack of public datasets for this task, the lung nodules in the LIDC-IDRI dataset were re-annotated to include a Lung-RADS (TM)-based referral label. Then, a wide residual network is used for automatically assessing lung nodules in 3D chest computed tomography exams. Unlike standard malignancy prediction approaches, the proposed method avoids the need to segment and characterize lung nodules, and instead directly decides if a patient should be submitted for further lung cancer tests. The system achieves a nodule-wise accuracy of 0.87 +/- 0.02.
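The re-annotation step maps each nodule to a binary referral label; a hypothetical illustration of such a mapping (the exact category-to-referral rule used by the authors is not given in the abstract):

```python
def lungrads_referral(category):
    """Binary referral label from a Lung-RADS-style category string.

    Illustrative assumption: categories starting with '1' or '2'
    (negative / benign appearance) need no referral, while '3' and
    '4A'/'4B'/'4X' trigger a change in screening management.
    """
    return not category.startswith(("1", "2"))
```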

2019

EyeWeS: Weakly Supervised Pre-Trained Convolutional Neural Networks for Diabetic Retinopathy Detection

Authors
Costa, P; Araujo, T; Aresta, G; Galdran, A; Mendonca, AM; Smailagic, A; Campilho, A;

Publication
PROCEEDINGS OF MVA 2019 16TH INTERNATIONAL CONFERENCE ON MACHINE VISION APPLICATIONS (MVA)

Abstract
Diabetic Retinopathy (DR) is one of the leading causes of preventable blindness in the developed world. With the increasing number of diabetic patients there is a growing need for an automated system for DR detection. We propose EyeWeS, a method that not only detects DR in eye fundus images but also pinpoints the regions of the image that contain lesions, while being trained with image labels only. We show that it is possible to convert any pre-trained convolutional neural network into a weakly supervised model while increasing its performance and efficiency. EyeWeS improved the results of Inception V3 from 94.9% Area Under the Receiver Operating Characteristic Curve (AUC) to 95.8% AUC while maintaining only approximately 5% of Inception V3's number of parameters. The same model is able to achieve 97.1% AUC in a cross-dataset experiment.
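One common way a pre-trained CNN is made weakly supervised is to pool a per-region lesion-score map into a single image-level prediction, so the same map that drives classification also localizes lesions; a generic sketch of that idea (not the EyeWeS architecture itself):

```python
import numpy as np

def image_score(lesion_map):
    """Image-level DR score as the maximum over a per-region lesion
    map (multiple-instance max pooling)."""
    return float(np.asarray(lesion_map, dtype=float).max())

def lesion_locations(lesion_map, threshold=0.5):
    """Positions whose lesion score exceeds the threshold, i.e. the
    regions the model pinpoints as containing lesions."""
    return np.argwhere(np.asarray(lesion_map, dtype=float) >= threshold)
```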
