
About

I'm Teresa Araújo, a PhD student and researcher at INESC TEC and at the Faculdade de Engenharia da Universidade do Porto (FEUP).

I hold a master's degree in Bioengineering (FEUP), with a specialization in Biomedical Engineering.

I am mainly interested in the fields of computer vision, machine learning and medical image analysis. My current research topic is the grading of diabetic retinopathy in color eye fundus images.

Publications


2020

Optic Disc and Fovea Detection in Color Eye Fundus Images

Authors
Mendonça, AM; Melo, T; Araújo, T; Campilho, A;

Publication
Lecture Notes in Computer Science - Image Analysis and Recognition

Abstract

2020

DR|GRADUATE: Uncertainty-aware deep learning-based diabetic retinopathy grading in eye fundus images

Authors
Araújo, T; Aresta, G; Mendonça, L; Penas, S; Maia, C; Carneiro, A; Mendonça, AM; Campilho, A;

Publication
Medical Image Analysis

Abstract
Diabetic retinopathy (DR) grading is crucial in determining the adequate treatment and follow-up of patients, but the screening process can be tiresome and prone to errors. Deep learning approaches have shown promising performance as computer-aided diagnosis (CAD) systems, but their black-box behaviour hinders clinical application. We propose DR|GRADUATE, a novel deep learning-based DR grading CAD system that supports its decision by providing a medically interpretable explanation and an estimation of how uncertain that prediction is, allowing the ophthalmologist to measure how much that decision should be trusted. We designed DR|GRADUATE taking into account the ordinal nature of the DR grading problem. A novel Gaussian-sampling approach built upon a Multiple Instance Learning framework allows DR|GRADUATE to infer an image grade associated with an explanation map and a prediction uncertainty while being trained only with image-wise labels. DR|GRADUATE was trained on the Kaggle DR detection training set and evaluated across multiple datasets. In DR grading, a quadratic-weighted Cohen's kappa (κ) between 0.71 and 0.84 was achieved on five different datasets. We show that high κ values occur for images with low prediction uncertainty, thus indicating that this uncertainty is a valid measure of the predictions' quality. Further, bad-quality images are generally associated with higher uncertainties, showing that images not suitable for diagnosis indeed lead to less trustworthy predictions. Additionally, tests on unfamiliar medical image data types suggest that DR|GRADUATE allows outlier detection. The attention maps generally highlight regions of interest for diagnosis. These results show the great potential of DR|GRADUATE as a second-opinion system in DR severity grading.
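As a reading aid for the evaluation reported above: the quadratic-weighted Cohen's kappa rewards predictions by how close they land to the true ordinal grade, rather than counting only exact matches. A minimal sketch of the metric is below; the function name and the toy grade vectors are illustrative only, not taken from the paper or its code.

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_grades=5):
    """Quadratic-weighted Cohen's kappa for ordinal labels 0..n_grades-1."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)

    # Observed agreement matrix (confusion matrix)
    O = np.zeros((n_grades, n_grades), dtype=float)
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1

    # Quadratic weights: penalty grows with the squared distance between grades
    idx = np.arange(n_grades)
    W = (idx[:, None] - idx[None, :]) ** 2 / (n_grades - 1) ** 2

    # Expected agreement under chance, scaled to the same total count
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()

    return 1.0 - (W * O).sum() / (W * E).sum()

# Illustrative DR grades (0 = no DR ... 4 = proliferative DR)
print(quadratic_weighted_kappa([0, 1, 2, 3, 4, 2], [0, 1, 2, 4, 4, 1]))
```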

2019

CATARACTS: Challenge on automatic tool annotation for cataRACT surgery

Authors
Al Hajj, H; Lamard, M; Conze, PH; Roychowdhury, S; Hu, XW; Marsalkaite, G; Zisimopoulos, O; Dedmari, MA; Zhao, FQ; Prellberg, J; Sahu, M; Galdran, A; Araújo, T; Vo, DM; Panda, C; Dahiya, N; Kondo, S; Bian, ZB; Vandat, A; Bialopetravicius, J; Flouty, E; Qiu, CH; Dill, S; Mukhopadhyay, A; Costa, P; Aresta, G; Ramamurthys, S; Lee, SW; Campilho, A; Zachow, S; Xia, SR; Conjeti, S; Stoyanov, D; Armaitis, J; Heng, PA; Macready, WG; Cochener, B; Quellec, G;

Publication
Medical Image Analysis

Abstract
Surgical tool detection is attracting increasing attention from the medical image analysis community. The goal generally is not to precisely locate tools in images, but rather to indicate which tools are being used by the surgeon at each instant. The main motivation for annotating tool usage is to design efficient solutions for surgical workflow analysis, with potential applications in report generation, surgical training and even real-time decision support. Most existing tool annotation algorithms focus on laparoscopic surgeries. However, with 19 million interventions per year, the most common surgical procedure in the world is cataract surgery. The CATARACTS challenge was organized in 2017 to evaluate tool annotation algorithms in the specific context of cataract surgery. It relies on more than nine hours of videos, from 50 cataract surgeries, in which the presence of 21 surgical tools was manually annotated by two experts. With 14 participating teams, this challenge can be considered a success. As might be expected, the submitted solutions are based on deep learning. This paper thoroughly evaluates these solutions: in particular, the quality of their annotations is compared to that of human interpretations. Next, lessons learnt from the differential analysis of these solutions are discussed. We expect that they will guide the design of efficient surgery monitoring tools in the near future. © 2018 Elsevier B.V.
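The tool-usage annotation described above amounts to per-frame, multi-label presence detection over the 21 tools. Below is a minimal sketch of how such annotations can be represented and summarised; the random scores and the choice of per-tool ROC AUC as the summary metric are assumptions for illustration, not necessarily the challenge's official protocol.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical per-frame tool-presence data for one surgery video:
# rows are video frames, columns are the 21 annotated tools.
n_frames, n_tools = 1000, 21
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(n_frames, n_tools))                  # binary presence labels
y_score = np.clip(y_true + rng.normal(0.0, 0.4, y_true.shape), 0, 1)   # noisy predicted scores

# One plausible summary: ROC AUC per tool, averaged over tools.
per_tool_auc = [roc_auc_score(y_true[:, t], y_score[:, t]) for t in range(n_tools)]
print(f"mean per-tool AUC: {np.mean(per_tool_auc):.3f}")
```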

2019

Analysis of the performance of specialists and an automatic algorithm in retinal image quality assessment

Authors
Wanderley, DS; Araújo, T; Carvalho, CB; Maia, C; Penas, S; Carneiro, A; Mendonça, AM; Campilho, A;

Publication
2019 IEEE 6th Portuguese Meeting on Bioengineering (ENBENG)

Abstract