
Publications by Teresa Finisterra Araújo

2018

Parametric model fitting-based approach for retinal blood vessel caliber estimation in eye fundus images

Authors
Araujo, T; Mendonca, AM; Campilho, A;

Publication
PLOS ONE

Abstract
Background: Changes in retinal vessel caliber are associated with a variety of major diseases, namely diabetes, hypertension and atherosclerosis. The clinical assessment of these changes in fundus images is tiresome and prone to errors, and thus automatic methods are desirable for objective and precise caliber measurement. However, the variability of blood vessel appearance, image quality and resolution makes the development of these tools a non-trivial task. Methodology: A method for the estimation of vessel caliber in eye fundus images via vessel cross-sectional intensity profile model fitting is herein proposed. First, the vessel centerlines are determined and individual segments are extracted and smoothed by spline approximation. Then, the corresponding cross-sectional intensity profiles are determined, post-processed and ultimately fitted by newly proposed parametric models. These models are based on Difference-of-Gaussians (DoG) curves modified through a multiplying line with varying inclination. With this, the proposed models can describe profile asymmetry, allowing a good adjustment to the most difficult profiles, namely those showing central light reflex. Finally, the parameters of the best-fit model are used to determine the vessel width using ensembles of bagged regression trees with random feature selection. Results and conclusions: The performance of our approach is evaluated on the REVIEW public dataset by comparing the vessel cross-sectional profile fitting of the proposed modified DoG models with 7 and 8 parameters against a Hermite model with 6 parameters. Results on different goodness-of-fit metrics indicate that our models are consistently better at fitting the vessel profiles. Furthermore, our width measurement algorithm achieves a precision close to that of the observers, outperforming state-of-the-art methods, and retrieving the highest precision when evaluated using cross-validation. This high performance supports the robustness of the algorithm and validates its use in retinal vessel width measurement and possible integration in a system for retinal vasculature assessment.
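The abstract does not spell out the exact parameterization, but a minimal sketch of one plausible 7-parameter modified DoG (a Difference-of-Gaussians multiplied by a line, fitted with scipy.optimize.curve_fit) could look as follows. The functional form, parameter names and synthetic data are illustrative assumptions; the paper's 8-parameter variant and the regression-tree width estimation are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_dog(x, a1, s1, a2, s2, mu, slope, offset):
    # Difference-of-Gaussians multiplied by a line of varying inclination,
    # so the fitted profile can be asymmetric (7 free parameters here).
    dog = a1 * np.exp(-(x - mu) ** 2 / (2 * s1 ** 2)) \
        - a2 * np.exp(-(x - mu) ** 2 / (2 * s2 ** 2))
    return (1.0 + slope * (x - mu)) * dog + offset

# Synthetic cross-sectional profile: a dark vessel on a bright background
# with a small central light reflex, plus noise.
x = np.linspace(-10, 10, 41)
rng = np.random.default_rng(0)
profile = modified_dog(x, -30, 3.0, -12, 1.2, 0.4, 0.02, 120) \
    + rng.normal(0, 0.5, x.size)

p0 = [-20, 2.0, -5, 1.0, 0.0, 0.0, 120.0]    # rough initial guess
params, _ = curve_fit(modified_dog, x, profile, p0=p0, maxfev=20000)
```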

2018

A No-Reference Quality Metric for Retinal Vessel Tree Segmentation

Authors
Galdran, A; Costa, P; Bria, A; Araujo, T; Mendonca, AM; Campilho, A;

Publication
MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2018, PT I

Abstract
Due to inevitable differences between the data used for training modern CAD systems and the data encountered when they are deployed in clinical scenarios, the ability to automatically assess the quality of predictions when no expert annotation is available can be critical. In this paper, we propose a new method for quality assessment of retinal vessel tree segmentations in the absence of a reference ground-truth. For this, we artificially degrade expert-annotated vessel map segmentations and then train a CNN to predict the similarity between the degraded images and their corresponding ground-truths. This similarity can be interpreted as a proxy for the quality of a segmentation. The proposed model can produce a visually meaningful quality score, effectively predicting the quality of a vessel tree segmentation in the absence of a manually segmented reference. We further demonstrate the usefulness of our approach by applying it to automatically find a threshold for soft probabilistic segmentations on a per-image basis. For an independent state-of-the-art unsupervised vessel segmentation technique, the thresholds selected by our approach lead to statistically significant improvements in F1-score (+2.67%) and Matthews Correlation Coefficient (+3.11%) over the thresholds derived from ROC analysis on the training set. The score is also shown to correlate strongly with F1 and MCC when a reference is available.
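As a rough illustration of the training-data construction described above, the following sketch degrades a ground-truth vessel map and computes the similarity (here Dice, as an assumed stand-in for the paper's similarity measure) that a quality-scoring CNN would be trained to regress. The degradations shown are hypothetical.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def degrade(mask, rng):
    # Hypothetical corruptions: random dilation or erosion plus salt
    # noise, yielding segmentations of varying quality from a GT map.
    if rng.random() < 0.5:
        out = binary_dilation(mask, iterations=int(rng.integers(1, 4)))
    else:
        out = binary_erosion(mask, iterations=int(rng.integers(1, 3)))
    return out ^ (rng.random(out.shape) < 0.01)

def dice(a, b):
    # Similarity between degraded map and ground truth: the regression
    # target the quality-scoring CNN learns to predict.
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-8)

rng = np.random.default_rng(0)
gt = np.zeros((64, 64), dtype=bool)
gt[30:34, :] = True                      # toy "vessel" map
degraded = degrade(gt, rng)
target_quality = dice(gt, degraded)      # label for one training pair
```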

2018

Towards an Automatic Lung Cancer Screening System in Low Dose Computed Tomography

Authors
Aresta, G; Araujo, T; Jacobs, C; van Ginneken, B; Cunha, A; Ramos, I; Campilho, A;

Publication
IMAGE ANALYSIS FOR MOVING ORGAN, BREAST, AND THORACIC IMAGES

Abstract
We propose a deep learning-based pipeline that, given a low-dose computed tomography of a patient's chest, recommends whether the patient should be submitted to further lung cancer assessment. The algorithm is composed of a nodule detection block that uses the object detection framework YOLOv2, followed by a U-Net based segmentation. The detected structures of interest are then characterized in terms of diameter and texture to produce a final referral recommendation according to the National Lung Screening Trial (NLST) criteria. Our method is trained using the public LUNA16 and LIDC-IDRI datasets and tested on an independent dataset composed of 500 scans from the Kaggle DSB 2017 challenge. The proposed system achieves a patient-wise recall of 89% while providing an explanation for the referral decision, and thus may serve as a second opinion tool to speed up and improve lung cancer screening.
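A toy sketch of the final referral step might look like the following; the 4 mm diameter cut-off reflects the NLST definition of a positive screen, while the nodule dictionary fields and the handling of calcification are simplifying assumptions, not the paper's implementation.

```python
def nlst_referral(nodules, diameter_threshold_mm=4.0):
    # Positive screen if any detected non-calcified nodule reaches the
    # diameter criterion (assumed 4 mm, per the NLST positive-screen rule).
    for nodule in nodules:
        if not nodule["calcified"] and nodule["diameter_mm"] >= diameter_threshold_mm:
            return True
    return False

# Hypothetical detections coming out of the detection + segmentation stages
scan_nodules = [{"diameter_mm": 3.1, "calcified": False},
                {"diameter_mm": 6.8, "calcified": False}]
print(nlst_referral(scan_nodules))  # True -> recommend further assessment
```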

2018

UOLO - Automatic Object Detection and Segmentation in Biomedical Images

Authors
Araujo, T; Aresta, G; Galdran, A; Costa, P; Mendonca, AM; Campilho, A;

Publication
DEEP LEARNING IN MEDICAL IMAGE ANALYSIS AND MULTIMODAL LEARNING FOR CLINICAL DECISION SUPPORT, DLMIA 2018

Abstract
We propose UOLO, a novel framework for the simultaneous detection and segmentation of structures of interest in medical images. UOLO consists of an object segmentation module whose intermediate abstract representations are processed and used as input for object detection. The resulting system is optimized simultaneously for detecting a class of objects and segmenting an optionally different class of structures. UOLO is trained on a set of bounding boxes enclosing the objects to detect, as well as pixel-wise segmentation information, when available. A new loss function is devised, taking into account whether a reference segmentation is available for each training image, in order to suitably backpropagate the error. We validate UOLO on the task of simultaneous optic disc (OD) detection, fovea detection, and OD segmentation from retinal images, achieving state-of-the-art performance on public datasets.
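The loss described above can be sketched as a segmentation term gated per image by the availability of a reference mask; the weighting below is an assumption, not the exact formulation from the paper.

```python
import torch

def uolo_style_loss(det_loss, seg_loss, has_reference_seg):
    # Detection term always contributes; the segmentation term is gated
    # per image by the availability of a reference mask (assumed weighting).
    return det_loss + has_reference_seg * seg_loss

det = torch.tensor([0.7, 0.4])   # per-image detection losses
seg = torch.tensor([0.9, 0.0])   # per-image segmentation losses
ref = torch.tensor([1.0, 0.0])   # 1.0 where a reference segmentation exists
total = uolo_style_loss(det, seg, ref).mean()
```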

2019

CATARACTS: Challenge on automatic tool annotation for cataRACT surgery

Authors
Al Hajj, H; Lamard, M; Conze, PH; Roychowdhury, S; Hu, XW; Marsalkaite, G; Zisimopoulos, O; Dedmari, MA; Zhao, FQ; Prellberg, J; Sahu, M; Galdran, A; Araujo, T; Vo, DM; Panda, C; Dahiya, N; Kondo, S; Bian, ZB; Vahdat, A; Bialopetravicius, J; Flouty, E; Qiu, CH; Dill, S; Mukhopadhyay, A; Costa, P; Aresta, G; Ramamurthy, S; Lee, SW; Campilho, A; Zachow, S; Xia, SR; Conjeti, S; Stoyanov, D; Armaitis, J; Heng, PA; Macready, WG; Cochener, B; Quellec, G;

Publication
MEDICAL IMAGE ANALYSIS

Abstract
Surgical tool detection is attracting increasing attention from the medical image analysis community. The goal generally is not to precisely locate tools in images, but rather to indicate which tools are being used by the surgeon at each instant. The main motivation for annotating tool usage is to design efficient solutions for surgical workflow analysis, with potential applications in report generation, surgical training and even real-time decision support. Most existing tool annotation algorithms focus on laparoscopic surgeries. However, with 19 million interventions per year, the most common surgical procedure in the world is cataract surgery. The CATARACTS challenge was organized in 2017 to evaluate tool annotation algorithms in the specific context of cataract surgery. It relies on more than nine hours of videos, from 50 cataract surgeries, in which the presence of 21 surgical tools was manually annotated by two experts. With 14 participating teams, this challenge can be considered a success. As might be expected, the submitted solutions are based on deep learning. This paper thoroughly evaluates these solutions: in particular, the quality of their annotations is compared to that of human interpretations. Next, lessons learnt from the differential analysis of these solutions are discussed. We expect that they will guide the design of efficient surgery monitoring tools in the near future.
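The underlying task is per-frame multi-label classification (which of the 21 tools are present at each instant); a minimal PyTorch sketch with a placeholder backbone, not any participant's actual model, could look like this.

```python
import torch
import torch.nn as nn

# Placeholder backbone standing in for the deep CNNs used by participants;
# the head outputs one independent sigmoid per surgical tool (21 total).
backbone = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)
head = nn.Linear(16, 21)

frame = torch.randn(1, 3, 224, 224)           # one video frame
tool_probs = torch.sigmoid(head(backbone(frame)))
tools_in_use = tool_probs > 0.5               # per-tool presence at this instant
```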

2019

Analysis of the performance of specialists and an automatic algorithm in retinal image quality assessment

Authors
Wanderley, DS; Araujo, T; Carvalho, CB; Maia, C; Penas, S; Carneiro, A; Mendonca, AM; Campilho, A;

Publication
2019 6TH IEEE PORTUGUESE MEETING IN BIOENGINEERING (ENBENG)

Abstract
This study describes a novel dataset with retinal image quality annotation, defined by three different retinal experts, and presents an inter-observer analysis for quality assessment that can be used as a gold standard for future studies. A state-of-the-art algorithm for retinal image quality assessment is also analysed and compared against the specialists' performance. Results show that, for 71% of the images present in the dataset, the three experts agree on the given image quality label. The results obtained for accuracy, specificity and sensitivity when comparing one expert against another were in the ranges [83.0 - 85.2]%, [72.7 - 92.9]% and [80.0 - 94.7]%, respectively. The evaluated automatic quality assessment method, despite not being trained on the novel dataset, presents a performance which is within the inter-observer variability.
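The one-expert-against-another comparison reported above reduces to standard confusion-matrix metrics on binary quality labels; a small sketch (with toy labels, not the paper's data) follows.

```python
import numpy as np

def pairwise_agreement(y_a, y_b):
    # Treat expert A as reference and score expert B's binary quality
    # labels against it: accuracy, specificity and sensitivity.
    y_a, y_b = np.asarray(y_a), np.asarray(y_b)
    tp = np.sum((y_a == 1) & (y_b == 1))
    tn = np.sum((y_a == 0) & (y_b == 0))
    fp = np.sum((y_a == 0) & (y_b == 1))
    fn = np.sum((y_a == 1) & (y_b == 0))
    return (tp + tn) / y_a.size, tn / (tn + fp), tp / (tp + fn)

# Toy labels for two experts (1 = sufficient quality)
print(pairwise_agreement([1, 1, 0, 1, 0, 1], [1, 0, 0, 1, 0, 1]))
```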
